
Avtomat. i Telemekh., 2018 Issue 4, Pages 152–166 (Mi at14849)

This article is cited in 12 papers

Intellectual Control Systems, Data Analysis

Stackelberg equilibrium in a dynamic stimulation model with complete information

D. B. Rokhlin, G. A. Ougolnitsky

Southern Federal University, Rostov-on-Don, Russia

Abstract: We consider a stimulation model with Markov dynamics and discounted optimality criteria in the case of discrete time and an infinite planning horizon. In this model, the regulator exerts an economic influence on the executor by choosing a stimulating function that depends on the system state and on the actions of the executor, who employs positional (feedback) control strategies. The system dynamics, the regulator's revenues, and the executor's costs depend on the system state and the executor's actions. We show that finding an approximate solution of the (inverse) Stackelberg game reduces to solving an optimal control problem whose criterion equals the difference between the regulator's revenue and the executor's costs. The $\varepsilon$-optimal strategy of the regulator is then to economically motivate the executor to follow this optimal control strategy.

Keywords: two-level incentive model, inverse Stackelberg game, discounted optimality criterion, Bellman equation.

Presented by the member of Editorial Board: E. Ya. Rubinovich

Received: 09.08.2017


English version:
Automation and Remote Control, 2018, 79:4, 701–712

