Abstract:
Finite-dimensional mathematical programming problems are studied that approximate, via piecewise constant discretization of the controls, optimization problems for distributed systems of a fairly broad class. The smoothness of the approximating problems is established. Gradient formulas are derived that exploit the analytical solution of the controlled system and its adjoint, thereby allowing the numerical optimization to be algorithmically separated from the solution of the controlled initial-boundary value problem. The approximating problems are proved to converge to the original optimization problem with respect to the functional as the discretization is refined. The approach is illustrated by an optimization problem for the semilinear wave equation with an integral performance criterion. The results of numerical experiments are analyzed.
Key words: optimization of distributed parameter systems, differentiation of a functional, piecewise constant approximation of the control, control parametrization technique, gradient methods.
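The general scheme described in the abstract — parametrize the control as piecewise constant, compute the cost gradient with respect to the pieces through the adjoint system, and feed it to a gradient method — can be sketched on a toy problem. The sketch below is an illustrative assumption, not the paper's wave-equation setting: it uses a scalar model ODE x' = -x + u, x(0) = 1, with cost J = ½∫(x² + u²)dt on [0, T], and forward/backward Euler sweeps in place of the analytical solutions the paper exploits. All names (`simulate`, `cost`, `gradient`, `seg`) are hypothetical.

```python
import numpy as np

# Hedged sketch of the control parametrization technique on a model problem
# (assumption: x' = -x + u, x(0) = 1, J = 0.5 * integral of (x^2 + u^2)).
T, N, M = 1.0, 8, 800          # horizon, number of control pieces, time steps
dt = T / M
seg = np.repeat(np.arange(N), M // N)   # maps each time step to its control piece

def simulate(u):
    """Forward Euler solve of the state equation for piecewise-constant u."""
    x = np.empty(M + 1)
    x[0] = 1.0
    for k in range(M):
        x[k + 1] = x[k] + dt * (-x[k] + u[seg[k]])
    return x

def cost(u):
    """Discretized integral criterion J(u)."""
    x = simulate(u)
    return 0.5 * dt * np.sum(x[:-1] ** 2 + u[seg] ** 2)

def gradient(u):
    """Adjoint-based gradient: p' = p - x, p(T) = 0, dJ/du_i = int_i (u_i + p) dt.

    The adjoint sweep runs backward in time; each gradient component
    accumulates only over its own piece of the partition.
    """
    x = simulate(u)
    p = np.zeros(M + 1)
    for k in range(M, 0, -1):
        p[k - 1] = p[k] - dt * (p[k] - x[k])
    g = np.zeros(N)
    for k in range(M):
        g[seg[k]] += dt * (u[seg[k]] + p[k])
    return g

# Plain gradient descent on the finite-dimensional piece vector.
u = np.zeros(N)
for _ in range(200):
    u -= 1.0 * gradient(u)
```

Note the algorithmic separation the abstract mentions: `simulate` (and the adjoint sweep inside `gradient`) is the only place the controlled dynamics appear, so the optimizer sees nothing but a smooth function of the N control pieces and its gradient.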