
Vestnik S.-Petersburg Univ. Ser. 10. Prikl. Mat. Inform. Prots. Upr., 2014 Issue 2, Pages 36–48 (Mi vspui184)


Applied mathematics

Invariant transformations in the first method of Lyapunov

V. S. Ermolin

St. Petersburg State University, 199034, St. Petersburg, Russian Federation

Abstract: The article develops the theoretical foundations of the first method of Lyapunov. A family of invariant transformations of linear systems of differential equations is constructed. These transformations produce, in the new variables, systems of differential equations that share the same characteristics (characteristic numbers of solutions, the property of correctness) with the original system. The structure and properties of the coefficient matrices of these transformations are examined in detail. The study is based on the properties of characteristic numbers of functional matrices. A class of nonsingular square matrices is identified: a matrix belongs to this class when the sum of the characteristic numbers of the matrix and of its inverse equals zero. Such matrices are called absolutely correct. A connection is established between the characteristic number of an absolutely correct matrix and its determinant. It is proved that the characteristic numbers of the rows and columns of such a matrix coincide and equal its characteristic number. Invariant transformations are linear transformations whose coefficient matrices belong to the class of absolutely correct matrices with characteristic number equal to zero. Examples of such matrices are given, and it is shown that Lyapunov transformations belong to the constructed family of invariant transformations. The concepts of matrices correct by columns and matrices correct by rows are introduced; absolutely correct matrices are correct both by columns and by rows. A theorem is proved stating that a necessary and sufficient condition for a matrix to be correct by columns or by rows is that it can be represented as a product of two matrices, one of which is absolutely correct and the other diagonal exponential.
In this representation, a matrix correct by columns has the absolutely correct matrix as its first factor and the exponential matrix as its second; for a matrix correct by rows, the factors appear in the reverse order. Moreover, it is proved that if a matrix is correct by columns, then its inverse is correct by rows, and the connection between the characteristic numbers of the columns and rows of the inverse matrix is established. Using correctness of a matrix by columns, a definition of correctness of a linear system of differential equations is introduced and shown to agree with Lyapunov's classical definition of a correct system. This allows results obtained for matrices correct by columns to be transferred to correct systems of equations; in particular, it reveals the structure of a normal fundamental matrix of the system and extends the class of reducible systems by means of the described invariant transformations. Bibliogr. 4.
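For orientation, the central notions of the abstract can be sketched in standard Lyapunov-theory notation. This is a sketch, not taken from the paper itself: the sign convention for characteristic numbers and the symbol names ($U$, $d_1,\dots,d_n$) below are assumptions.

```latex
% Characteristic number of a function f on [t_0, \infty)
% (modern exponent convention assumed; Lyapunov's original
% characteristic number carries the opposite sign):
\chi[f] \;=\; \limsup_{t \to \infty} \frac{1}{t}\, \ln \lvert f(t) \rvert .

% For a functional matrix X(t), \chi[X] is taken over its entries.
% A nonsingular matrix X(t) is called absolutely correct when
\chi[X] + \chi[X^{-1}] \;=\; 0 .

% Representation theorem for a matrix correct by columns:
% X(t) factors into an absolutely correct matrix U(t) followed by
% a diagonal exponential matrix,
X(t) \;=\; U(t)\, \operatorname{diag}\!\bigl(e^{d_1 t}, \dots, e^{d_n t}\bigr),
% with the two factors in the reverse order when X(t) is
% correct by rows.
```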

Keywords: Lyapunov’s first method, invariant transformations, characteristic numbers, correct systems, Lyapunov transformation, absolutely correct matrix, normal fundamental system, reducible systems.

UDC: 517.926.4

Received: December 19, 2013



© Steklov Math. Inst. of RAS, 2024