Abstract:
We consider a regularization of the classical Lagrange principle and of the Pontryagin maximum principle in convex programming, optimal control, and inverse problems. We discuss two basic questions, namely, why regularization of the classical optimality conditions (COCs) is necessary and what it yields, using as examples the "simplest" problems of constrained infinite-dimensional convex optimization. The so-called regularized COCs considered in the paper are expressed in terms of the regular classical Lagrange and Hamilton-Pontryagin functions and are sequential generalizations of their classical analogs. They (1) "overcome" the possible instability and infeasibility of the COCs, acting as regularizing algorithms for the solution of optimization problems, (2) are formulated as statements on the existence of bounded minimizing approximate solutions in the sense of Warga for the original problem while preserving the general structure of the COCs, and (3) lead to the COCs "in the limit." All optimization problems in the paper depend on an additive parameter in the infinite-dimensional equality constraint (the perturbation method). As a result, it is possible to study the connection between regularized COCs and the subdifferential properties of the value functions of the optimization problems.
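To fix ideas, the perturbation-method setting mentioned above can be sketched as follows; the notation ($f$, $A$, $h$, $p$, $D$, $\alpha(\delta)$) is illustrative and not taken from the paper, and the Tikhonov-stabilized dual scheme shown is one standard regularization construction, not necessarily the paper's exact one:

```latex
% Convex problem with an additively perturbed equality constraint:
\min_{z \in D} f(z) \quad \text{s.t.} \quad Az = h + p,
\qquad D \subseteq Z \ \text{convex},
% with the classical Lagrange function
L(z,\lambda) = f(z) + \langle \lambda,\, Az - h - p \rangle,
% and the value function of the family of perturbed problems
\beta(p) = \inf\{\, f(z) : z \in D,\ Az = h + p \,\}.
% A regularized dual scheme replaces the (possibly unattained or unstable)
% dual problem \sup_\lambda \inf_{z \in D} L(z,\lambda) by its
% Tikhonov-stabilized version,
\lambda^{\delta} \in \operatorname*{arg\,max}_{\lambda}
\Bigl( \inf_{z \in D} L(z,\lambda) - \alpha(\delta)\,\|\lambda\|^{2} \Bigr),
\qquad \alpha(\delta) \to 0 \ \text{as} \ \delta \to 0,
% whose minimizers z^{\delta} of L(\cdot,\lambda^{\delta}) form a bounded
% minimizing approximate solution of the original problem.
```

The stabilizing term $\alpha(\delta)\|\lambda\|^{2}$ guarantees existence and uniqueness of $\lambda^{\delta}$ even when the unregularized dual problem has no solution, which is the instability the regularized COCs are designed to overcome.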