MATHEMATICAL MODELING AND NUMERICAL SIMULATION
Direct multiplicative methods for sparse matrices. Newton methods
A. B. Sviridenko
FSEI of HPE “Kuban State University”, branch in Novorossiysk, Geroev Desantnikov st. 87, Novorossiysk, 353922, Russia
Abstract:
We consider a numerically stable direct multiplicative algorithm for solving systems of linear equations that takes into account the sparseness of matrices presented in packed form. The advantage of the algorithm is the ability to minimize the fill-in of the main rows of the multipliers without losing accuracy of the results; moreover, the position of the next processed row of the matrix is not changed, which allows the use of static data storage formats. Solving a linear system by the direct multiplicative algorithm is, like solving it with an $LU$ decomposition, just another implementation scheme of the Gaussian elimination method.
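As a point of reference, the sketch below solves a small sparse system stored in a packed (CSC) format with an elimination-based direct solver. SciPy's `splu` is used here only as a stand-in: the paper's direct multiplicative algorithm is a different implementation scheme of the same Gaussian elimination idea and is not available as a library routine.

```python
# Minimal sketch (assumption: SciPy used as a stand-in for an elimination-based
# direct solver; this is NOT the paper's direct multiplicative algorithm).
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# A small sparse matrix in packed (CSC) form.
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 0.0],
                         [0.0, 0.0, 2.0]]))
b = np.array([1.0, 2.0, 3.0])

lu = splu(A)      # sparse LU factorization (fill-in controlled by column ordering)
x = lu.solve(b)   # forward/backward substitution
print(x)
```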
In this paper, this algorithm is the basis for solving the following problems:
Problem 1. Determining the descent direction in Newton methods of unconstrained optimization by integrating one of the known techniques for constructing an essentially positive definite matrix. This approach makes it possible to weaken or remove the additional specific difficulties caused by the need to solve large systems of equations with sparse matrices presented in packed form.
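A generic illustration of this idea (one of the known techniques, not the paper's specific scheme) is a modified Newton step in which the Hessian is shifted toward positive definiteness before the linear system is solved; the function and parameter names below are illustrative assumptions.

```python
# Hedged sketch: make the Hessian positive definite by a diagonal shift, then
# solve (H + tau*I) d = -g so that d is a descent direction. This is a generic
# textbook technique, not the algorithm proposed in the paper.
import numpy as np

def modified_newton_direction(hessian, gradient, beta=1e-3, max_tries=50):
    """Return d solving (H + tau*I) d = -g with H + tau*I positive definite."""
    n = hessian.shape[0]
    tau = 0.0
    for _ in range(max_tries):
        try:
            L = np.linalg.cholesky(hessian + tau * np.eye(n))
            y = np.linalg.solve(L, -gradient)     # forward substitution
            return np.linalg.solve(L.T, y)        # backward substitution
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, beta)            # increase the shift and retry
    raise RuntimeError("could not make the Hessian positive definite")

# Example: an indefinite Hessian is shifted before the system is solved.
H = np.array([[1.0, 0.0], [0.0, -2.0]])
g = np.array([1.0, 1.0])
print(modified_newton_direction(H, g))
```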
Problem 2. Construction of a new mathematical formulation of the quadratic programming problem and a new form of specifying the necessary and sufficient optimality conditions. They are quite simple and can be used to construct mathematical programming methods, for example, for finding the minimum of a quadratic function on a polyhedral set of constraints by solving systems of linear equations whose dimension is no higher than the number of variables of the objective function.
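In the simplest (unconstrained, strictly convex) case this reduction to a linear system of dimension $n$ is immediate; the hedged sketch below shows only that special case and does not reproduce the paper's formulation for a polyhedral constraint set.

```python
# Hedged sketch, unconstrained case only: the minimizer of
# q(x) = 0.5 x^T Q x + c^T x solves the single n-dimensional linear system
# Q x = -c (stationarity). The paper's polyhedral formulation is not shown here.
import numpy as np

Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # symmetric positive definite
c = np.array([-1.0, 1.0])

x_star = np.linalg.solve(Q, -c)
print(x_star)
```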
Problem 3. Construction of a continuous analogue of the problem of minimizing a real quadratic polynomial in Boolean variables and a new form of defining the necessary and sufficient optimality conditions for the development of methods that solve it in polynomial time. As a result, the original problem is reduced to the problem of finding the minimum distance between the origin and a corner point of a convex polyhedron, which is a perturbation of the $n$-dimensional cube and is described by a system of double linear inequalities with an upper triangular coefficient matrix with units on the main diagonal. Only two faces need to be investigated, one of which, or both, contain the vertices closest to the origin. To compute them, it is sufficient to solve $4n-4$ systems of linear equations and to choose among their solutions all the nearest equidistant vertices in polynomial time. The problem of minimizing a quadratic polynomial is $NP$-hard, since the $NP$-hard vertex cover problem for an arbitrary graph reduces to it. It follows that $P = NP$, which rests on going beyond the limits of integer optimization methods.
Keywords:
NP-hard problem, sparse matrices, Newton methods, direct multiplicative algorithm, descent direction, new mathematical formulation, necessary and sufficient optimality conditions, minimization of pseudo-Boolean functions, pseudo-Boolean programming, linear programming.
UDC:
519.85
Received: 20.02.2017
Revised: 22.09.2017
Accepted: 29.09.2017
DOI:
10.20537/2076-7633-2017-9-5-679-703