Abstract:
The Lagrange principle in nondifferential form is regularized for a nonlinear (nonconvex) constrained optimization problem with an operator equality constraint in a Hilbert space. The feasible set of the problem belongs to a complete metric space, and the existence of a solution is not assumed a priori. The equality constraint involves an additive parameter, so a “nonlinear version” of the perturbation method can be used to study the problem. The regularized Lagrange principle serves mainly for the stable generation of generalized minimizing sequences (GMSes) in the nonlinear problem under consideration. It can be treated as a GMS-generating (regularizing) operator that takes each set of initial data of the problem to a subminimizer (minimizer) of the regular augmented Lagrangian corresponding to this set, with the dual variable generated by the Tikhonov stabilization procedure for the dual problem. The augmented Lagrangian is completely determined by the form of the “nonlinear” subdifferentials of the lower semicontinuous and, in general, nonconvex value function, regarded as a function of the parameters of the problem. As such subdifferentials, we use the proximal subgradient and the Fréchet subdifferential, both well known in nonsmooth (nonlinear) analysis. The regularized Lagrange principle overcomes the ill-posedness of its classical counterpart and can be treated as a regularizing algorithm, thus providing a theoretical basis for the development of stable methods for the practical solution of nonlinear constrained optimization problems.
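As a purely illustrative sketch (none of the notation below is fixed by the abstract: $f$, $A$, $h$, $p$, $z$, $D$, $Z$, $H$, $\beta$, $\zeta$, $\mu$, $\sigma$, $\delta$, $\alpha$ are generic placeholders), the parametrized problem, its value function, and the proximal-subgradient inequality whose quadratic estimate motivates a quadratic augmentation of the Lagrangian may be written as
\[
\begin{aligned}
&(\mathrm{P}_p)\colon\quad \min_{z\in D\subset Z} f(z)\quad\text{subject to}\quad A(z)=h+p,\qquad p\in H,\\
&\beta(p)\;\equiv\;\inf\{\,f(z):z\in D,\ A(z)=h+p\,\}\quad\text{(lower semicontinuous, generally nonconvex)},\\
&\zeta\in\partial_P\beta(0)\;\Longleftrightarrow\;\exists\,\sigma,\delta>0:\ \beta(p)\ \ge\ \beta(0)+\langle\zeta,p\rangle-\sigma\|p\|^{2}\quad\text{for all}\ \|p\|<\delta ,
\end{aligned}
\]
and, still under the same placeholder notation, a Tikhonov-stabilized dual problem of the form
\[
V(\mu)\;\equiv\;\inf_{z\in D}\bigl[f(z)+\langle\mu,\,A(z)-h\rangle+\sigma\|A(z)-h\|^{2}\bigr],
\qquad
\mu^{\alpha}\in\operatorname*{Arg\,max}_{\mu\in H}\bigl[V(\mu)-\alpha\|\mu\|^{2}\bigr],\quad\alpha>0 .
\]
A GMS-generating operator of the kind described above would then take the problem data to a (sub)minimizer of the augmented Lagrangian $L(\,\cdot\,,\mu^{\alpha})$ for a consistent choice of $\alpha$; whether this particular quadratic augmentation coincides with the one constructed in the paper is an assumption of this sketch, and the Fréchet-subdifferential case leads to a different augmentation term that is not reproduced here.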