Abstract:
The paper is concerned with the design of an optimal control law for a linear stochastic plant. The control duration is treated as random and is itself subject to optimization. The completion time of the control process is defined as a Markov moment: the first instant at which the random estimation process reaches the boundary of the region of continued observation. A method is proposed for parametrizing the Bellman function that leads to a feasible solution of the associated extremal boundary value problem.
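The stopping rule described above can be illustrated with a minimal simulation sketch. The model, parameters, and control law below are hypothetical stand-ins (a scalar linear plant dx = (a·x + b·u) dt + σ dW with a symmetric continuation region [-c, c]), not the paper's actual formulation; the point is only to show a Markov moment defined as the first time the process reaches the boundary of the continuation region.

```python
import random

def first_boundary_time(a=-0.5, b=1.0, sigma=0.3, u=lambda x: -x,
                        c=1.0, dt=0.01, t_max=50.0, x0=0.0, seed=0):
    """Euler-Maruyama simulation of dx = (a*x + b*u(x)) dt + sigma dW.

    Returns (tau, x_tau): the first time |x| reaches the boundary c of
    the continuation region [-c, c] (a Markov stopping moment), or
    (t_max, x) if the boundary is never reached within the horizon.
    All parameter values here are illustrative assumptions.
    """
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_max:
        if abs(x) >= c:           # boundary reached: stop observing
            return t, x
        dw = rng.gauss(0.0, dt ** 0.5)   # Brownian increment
        x += (a * x + b * u(x)) * dt + sigma * dw
        t += dt
    return t_max, x               # horizon cap: observation continued throughout

tau, x_tau = first_boundary_time()
```

Because the stopping condition depends only on the current state, the resulting random duration is a Markov moment with respect to the process's natural filtration, matching the random control duration discussed in the abstract.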