Abstract:
In this paper, we consider a terminal linear control system subject to unknown but bounded disturbances. For this system, we study the problem of constructing a worst-case feedback control policy that may be corrected at $m$ fixed intermediate time moments. We show that computing the policy is equivalent to solving a corresponding convex mathematical programming problem with $m-1$ decision variables. Based on the solution of this mathematical programming problem, we derive simple explicit rules for constructing the optimal control policy. The policy guarantees that, for all admissible disturbances, the terminal system state lies in a prescribed neighborhood of a given state $z_*$ at the given final moment, the system state at the $m$ fixed intermediate time moments $t_i$, $i=1,\dots,m$, lies in prescribed neighborhoods $\delta_i$ of constructed system states $y_i^0$, and the cost function attains the best guaranteed value.