Abstract:
System Identification is about estimating models of dynamical systems from measured input-output data. Its traditional foundation is classical statistical techniques, such as maximum likelihood estimation and asymptotic analysis of bias and variance. Maximum likelihood estimation relies on minimization of criterion functions that typically are non-convex, which may cause numerical search problems and leave estimates trapped in local minima. Recent interest in identification algorithms has focused on techniques built around convex formulations. This is partly the result of developments in semidefinite programming, machine learning and statistical learning theory. The development concerns issues of regularization for sparsity and for better-tuned bias/variance trade-offs. It also involves the use of subspace methods as well as nuclear norms as proxies for rank constraints. A special approach is to look for difference-of-convex programming (DCP) formulations when a purely convex criterion cannot be found. Other techniques are based on Lagrangian relaxation and contraction theory. A quite different route to convexity is to use algebraic techniques to manipulate the model parameterizations. This article illustrates these recent developments.
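
As a minimal sketch of the kind of convex formulation referred to above, the example below penalizes the nuclear norm of a Hankel matrix built from an estimated impulse response, using the nuclear norm as a convex proxy for a low model order. The FIR setup, the simulated data, the cvxpy modeling library and the regularization weight lam are illustrative assumptions, not taken from the article.

```python
# A minimal sketch (not from the article) of nuclear-norm regularization as a
# convex proxy for a rank constraint, here applied to FIR identification.
# Assumes Python with numpy and cvxpy; data, model structure and the weight
# lam are illustrative choices.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Simulated input-output data from a low-order system (illustrative).
n, N = 20, 200                                   # FIR length, number of samples
u = rng.standard_normal(N)                       # input signal
g_true = np.array([0.5**k + (-0.3)**k for k in range(n)])
y = np.convolve(u, g_true)[:N] + 0.05 * rng.standard_normal(N)

# Regressor matrix for y(t) = sum_k g(k) u(t - k) + noise.
Phi = np.zeros((N, n))
for k in range(n):
    Phi[k:, k] = u[:N - k]

g = cp.Variable(n)                               # impulse-response estimate

# Hankel matrix built from g; its rank equals the model order, so the
# nuclear norm acts as a convex surrogate for a low-order constraint.
m = n // 2
H = cp.vstack([cp.hstack([g[i + j] for j in range(m)]) for i in range(m)])

lam = 1.0                                        # regularization weight (hand-tuned)
cost = cp.sum_squares(y - Phi @ g) + lam * cp.norm(H, "nuc")
cp.Problem(cp.Minimize(cost)).solve()

print("error of estimated impulse response:",
      np.linalg.norm(g.value - g_true))
```

Because the regularized criterion is convex (an SDP-representable problem), its global minimum is found by standard solvers, avoiding the local-minima issues of non-convex maximum likelihood criteria mentioned in the abstract.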