Michael Overton (Courant Institute, New York University)
Fast Approximation of the Stability Radius and the $H_\infty$ Norm for Large-Scale Linear Dynamical Systems
The stability radius and the $H_\infty$ norm are well-known quantities in the robust analysis of linear dynamical systems with output feedback. These two quantities, which are reciprocals of each other in the simplest interesting case, respectively measure how much system uncertainty can be tolerated without losing stability, and how much an input disturbance may be magnified in the output. The standard method for computing them, the Boyd-Balakrishnan-Bruinsma-Steinbuch algorithm from 1990, is globally and quadratically convergent, but its cubic cost per iteration makes it inapplicable to large-scale dynamical systems. We present a new class of efficient methods for approximating the stability radius and the $H_\infty$ norm, based on iterative methods to find rightmost points of spectral value sets, which are generalizations of pseudospectra for modeling the linear fractional matrix transformations that arise naturally in analyzing output feedback. We also discuss a method for approximating the real structured stability radius, which offers additional challenges. Finally, we describe our new public-domain MATLAB toolbox for low-order controller synthesis, HIFOOS (H-infinity fixed-order optimization --- sparse). This offers a possible alternative to popular model order reduction techniques by applying fixed-order controller design directly to large-scale dynamical systems.
Joint work with Nicola Guglielmi, Mert Gürbüzbalaban and Tim Mitchell.
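To make the cubic per-iteration cost concrete: the level-set test at the heart of the Boyd-Balakrishnan-Bruinsma-Steinbuch algorithm checks whether a $2n \times 2n$ Hamiltonian matrix has purely imaginary eigenvalues, a dense $O(n^3)$ eigenvalue computation repeated at every iteration. A minimal sketch of that idea, assuming a stable system with $D = 0$ and an illustrative diagonal $A$ (the example system is not from the talk):

```python
import numpy as np

def has_imaginary_eig(A, B, C, gamma, tol=1e-7):
    # For stable A and D = 0, gamma is a singular value of the transfer
    # function G(iw) = C (iwI - A)^{-1} B at some frequency w  iff  the
    # Hamiltonian matrix below has a purely imaginary eigenvalue.
    H = np.block([[A, (B @ B.T) / gamma],
                  [-(C.T @ C) / gamma, -A.T]])
    return np.any(np.abs(np.linalg.eigvals(H).real) < tol)

def hinf_norm(A, B, C, lo=1e-6, hi=1e6, iters=80):
    # Bisection on gamma: levels below the norm are attained singular
    # values, so the Hamiltonian test succeeds there.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if has_imaginary_eig(A, B, C, mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

A = np.diag([-1.0, -2.0])   # illustrative stable system
B = np.eye(2)
C = np.eye(2)
print(hinf_norm(A, B, C))   # max_w sigma_max((iwI - A)^{-1}) = 1 here
```

The production algorithm replaces plain bisection with the quadratically convergent frequency-update scheme, but each step still requires the full Hamiltonian spectrum, which is what the spectral-value-set iterations above are designed to avoid.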
Elias Tsigaridas (INRIA Rocquencourt)
Near-optimal algorithms for structured matrices and for real and complex root refinement
We combine some powerful techniques developed in the area of univariate root finding to devise new algorithms for refinement of isolated complex and real roots that have nearly optimal Boolean complexity. One of the main ingredients is multipoint evaluation, which in turn is closely related to fast computations with structured matrices. We present nearly optimal complexity bounds for almost all the basic operations with structured matrices.
Joint work with Victor Y. Pan.
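The connection between multipoint evaluation and structured matrices mentioned in the abstract can be made concrete: evaluating one polynomial at many points is exactly a Vandermonde matrix-vector product, and fast multipoint evaluation amounts to applying this structured matrix in softly linear rather than quadratic time. A small sketch of the naive $O(mn)$ version (not the fast algorithm):

```python
import numpy as np

c = np.array([2.0, -1.0, 0.0, 3.0])          # coefficients, low degree first
pts = np.linspace(-1.0, 1.0, 7)              # evaluation points

# Multipoint evaluation as a structured matrix-vector product.
V = np.vander(pts, len(c), increasing=True)  # Vandermonde matrix of the points
vals = V @ c                                 # all 7 values at once

# Same result via one Horner evaluation per point (np.polyval wants
# the highest-degree coefficient first).
ref = np.array([np.polyval(c[::-1], t) for t in pts])
print(np.allclose(vals, ref))                # True
```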
Konstantin Usevich (GIPSA Lab, Université de Grenoble)
Structured low-rank matrix completion with nuclear norm minimization: the case of Hankel and quasi-Hankel matrices
Minimal rank completion of matrices with missing values is a difficult nonconvex optimization problem. A popular convex relaxation is based on minimization of the nuclear norm (the sum of singular values) of the matrix. For this relaxation, an important question is when the two optimization problems lead to the same solution. In the literature, this question has been addressed mostly for random positions of missing entries and random known entries. Our focus lies on the completion of structured matrices with a fixed pattern of missing values, in particular, on Hankel and quasi-Hankel matrix completion. The latter appears as a subproblem in the computation of symmetric tensor canonical polyadic decomposition. In the talk, we give an overview of these topics and report recent results on the performance of nuclear norm minimization for completion of rank-$r$ complex Hankel and quasi-Hankel matrices.
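A minimal sketch of the convex relaxation in action: proximal gradient iterations whose proximal step is singular-value soft-thresholding (the prox of the nuclear norm), run on a tiny rank-1 Hankel matrix with one missing entry. The matrix, weight, and iteration count are illustrative assumptions, and the Hankel structure enters only through the data, not as an explicit constraint:

```python
import numpy as np

# 3x3 Hankel matrix of the geometric sequence 1, 1/2, 1/4, ... (rank 1);
# the bottom-right entry, i.e. the last sample, is treated as missing.
x = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])
n = 3
M = np.array([[x[i + j] for j in range(n)] for i in range(n)])
mask = np.ones((n, n), dtype=bool)
mask[2, 2] = False           # position of the unknown entry

lam = 1e-3                   # nuclear-norm weight (assumed value)
X = np.where(mask, M, 0.0)   # initialise the missing entry with 0

for _ in range(3000):
    # gradient step on the data-fit term 0.5 * ||P_mask(X - M)||_F^2
    G = X - np.where(mask, X - M, 0.0)
    # proximal step: soft-threshold the singular values
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    X = U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

print(X[2, 2])   # close to the true value 0.0625
```

Here the relaxation succeeds because the rank-1 completion is also the nuclear-norm minimizer; the talk's question is precisely when this coincidence holds for Hankel and quasi-Hankel patterns of missing entries.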
Silviu Filip (ENS Lyon)
The Parks-McClellan algorithm and Chebyshev-proxy rootfinding methods
During the 1970s, James McClellan and Thomas Parks developed an iterative routine for designing finite impulse response filters. Their work is based on the theory of minimax polynomial approximation, and today it is one of the most widely used filter design methods in digital signal processing. In this talk, I will give insight into some recent work I have been a part of on improving the practical behavior of this algorithm. My main focus will be on numerically stable root finding routines that rely on computing the eigenvalues of appropriate structured matrices, which appear in the context of polynomial interpolation in the basis of Chebyshev polynomials. Extrema/root finding is usually the most computationally intensive part of the Parks-McClellan algorithm. Through interval subdivision techniques, we were able to construct a very robust implementation of this filter design routine, one that is more scalable than other existing implementations, such as those found in MATLAB and SciPy.
S. Filip, A robust and scalable implementation of the Parks-McClellan algorithm for designing FIR filters, submitted for publication; preliminary version available online.
J. Boyd, Finding the Zeros of a Univariate Equation: Proxy Rootfinders, Chebyshev Interpolation, and the Companion Matrix, SIAM Review 55(2) (2013), 375–396.
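The colleague-matrix idea behind these rootfinding routines can be sketched with NumPy's Chebyshev module, which computes the roots of a Chebyshev series as the eigenvalues of a companion (colleague) matrix built from its coefficients; the test function and proxy degree here are illustrative choices, not examples from the talk:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Chebyshev-proxy rootfinding: interpolate f by a Chebyshev series on
# [-1, 1], then take the real eigenvalues of the colleague matrix of
# that series, restricted to [-1, 1], as approximate roots of f.
f = lambda x: np.cos(3.0 * x)       # roots in [-1, 1]: +/- pi/6
coeffs = C.chebinterpolate(f, 10)   # degree-10 proxy polynomial

r = C.chebroots(coeffs)             # colleague-matrix eigenvalues
roots = np.sort(r[(np.abs(r.imag) < 1e-9) & (np.abs(r.real) <= 1.0)].real)
print(roots)                        # close to [-pi/6, pi/6]
```

Spurious eigenvalues (complex, or real but outside [-1, 1]) are discarded by the filter in the last step; the interval subdivision mentioned in the abstract keeps the proxy degree, and hence the colleague matrices, small on each piece.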