
Michael Overton (Courant Institute, New York University)
Fast Approximation of the Stability Radius and the $H_\infty$ Norm for Large-Scale Linear Dynamical Systems
The stability radius and the $H_\infty$ norm are well-known quantities in the robust analysis of linear dynamical systems with output feedback. These two quantities, which are reciprocals of each other in the simplest interesting case, respectively measure how much system uncertainty can be tolerated without losing stability, and how much an input disturbance may be magnified in the output. The standard method for computing them, the Boyd-Balakrishnan-Bruinsma-Steinbuch algorithm from 1990, is globally and quadratically convergent, but its cubic cost per iteration makes it inapplicable to large-scale dynamical systems. We present a new class of efficient methods for approximating the stability radius and the $H_\infty$ norm, based on iterative methods to find rightmost points of spectral value sets, which are generalizations of pseudospectra for modeling the linear fractional matrix transformations that arise naturally in analyzing output feedback. We also discuss a method for approximating the real structured stability radius, which offers additional challenges. Finally, we describe our new public-domain MATLAB toolbox for low-order controller synthesis, HIFOOS (H-infinity fixed-order optimization - sparse). This offers a possible alternative to popular model order reduction techniques by applying fixed-order controller design directly to large-scale dynamical systems.
Joint work with Nicola Guglielmi, Mert Gürbüzbalaban and Tim Mitchell. 
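As context for the quantity being approximated: for a stable system $\dot{x} = Ax + Bu$, $y = Cx + Du$, the $H_\infty$ norm is the supremum over frequencies $\omega$ of the largest singular value of the transfer matrix $G(i\omega) = C(i\omega I - A)^{-1}B + D$. A naive grid-based sketch in Python/NumPy, illustrating the definition only (this is not the fast spectral-value-set iteration of the talk, and the function name is ours):

```python
import numpy as np

def hinf_norm_grid(A, B, C, D, freqs):
    """Crude H-infinity norm estimate: maximum over a frequency grid of
    the largest singular value of G(iw) = C (iwI - A)^{-1} B + D."""
    n = A.shape[0]
    best = 0.0
    for w in freqs:
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        best = max(best, np.linalg.svd(G, compute_uv=False)[0])
    return best

# Example: the stable one-state system G(s) = 1/(s+1); the norm is
# attained at w = 0, where G(0) = 1.
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])
freqs = np.linspace(0.0, 10.0, 1001)
print(hinf_norm_grid(A, B, C, D, freqs))  # → 1.0
```

The cubic cost referred to in the abstract comes from the dense linear solves and singular value decompositions at each step; the talk's methods avoid this by working with sparse eigenvalue computations instead.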
Elias Tsigaridas (INRIA Rocquencourt)
Near-optimal algorithms for structured matrices and for real and complex root refinement
We combine some powerful techniques developed in the area of univariate root finding to devise new algorithms for refinement of isolated complex and real roots that have nearly optimal Boolean complexity. One of the main ingredients is multipoint evaluation, which in turn is closely related to fast computations with structured matrices. We present nearly optimal complexity bounds for almost all the basic operations with structured matrices.
Joint work with Victor Y. Pan. 
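For intuition about the refinement problem itself: given an interval known to isolate a single real root, one contracts it quadratically fast. A hypothetical, simplified Python sketch of Newton iteration with a bisection safeguard (the algorithms of the talk achieve near-optimal Boolean complexity via multipoint evaluation and fast structured-matrix arithmetic, which this floating-point toy does not attempt):

```python
def refine_root(f, df, a, b, tol=1e-14, max_iter=200):
    """Refine an isolated real root of f inside [a, b] (f(a) and f(b)
    have opposite signs): Newton steps, with a bisection fallback when
    a step would leave the current bracket or the derivative vanishes."""
    fa = f(a)
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0 or b - a <= tol:
            break
        # Keep the part of the bracket that still contains the sign change.
        if (fa > 0) == (fx > 0):
            a, fa = x, fx
        else:
            b = x
        dfx = df(x)
        if dfx != 0:
            x_new = x - fx / dfx
        if dfx == 0 or not (a < x_new < b):
            x_new = 0.5 * (a + b)  # bisection safeguard
        x = x_new
    return x

# Refine sqrt(2) as the root of x^2 - 2 isolated in [1, 2].
root = refine_root(lambda x: x * x - 2, lambda x: 2 * x, 1.0, 2.0)
```

Each Newton step roughly doubles the number of correct digits; the Boolean-complexity analysis in the talk additionally accounts for the growing bit size of the arithmetic at each step.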
Konstantin Usevich (GIPSA Lab, Université de Grenoble)
Structured low-rank matrix completion with nuclear norm minimization: the case of Hankel and quasi-Hankel matrices
Minimal rank completion of matrices with missing values is a difficult nonconvex optimization problem. A popular convex relaxation is based on minimization of the nuclear norm (sum of singular values) of the matrix. For this relaxation, an important question is when the two optimization problems lead to the same solution. In the literature, this question was addressed mostly in the case of random positions of missing elements and random known elements. Our focus lies on the completion of structured matrices with a fixed pattern of missing values, in particular, on Hankel and quasi-Hankel matrix completion. The latter appears as a subproblem in the computation of symmetric tensor canonical polyadic decomposition. In the talk, we give an overview of these topics and report recent results on the performance of the nuclear norm minimization for completion of rank-r complex Hankel and quasi-Hankel matrices. 
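To make the structured completion problem concrete, here is a minimal Python/NumPy sketch (all names ours). It fills the missing entries of a Hankel-generating vector by Cadzow-style alternating projections with a hard rank cut; the nuclear-norm relaxation studied in the talk would instead solve a convex program whose proximal step replaces the hard cut with soft singular-value thresholding.

```python
import numpy as np

def hankel_from_vector(h, rows):
    """Build the rows x (len(h)-rows+1) Hankel matrix X[i, j] = h[i+j]."""
    cols = len(h) - rows + 1
    return np.array([[h[i + j] for j in range(cols)] for i in range(rows)])

def complete_hankel(h0, known, rank, rows, iters=500):
    """Fill unknown entries of a Hankel-generating vector so the Hankel
    matrix has (approximately) the given rank: alternate a hard rank cut
    via the SVD with re-imposing Hankel structure (antidiagonal
    averaging) and the observed entries."""
    h = h0.copy()
    m, n = rows, len(h0) - rows + 1
    for _ in range(iters):
        X = hankel_from_vector(h, m)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                       # hard rank cut (Cadzow step)
        X = (U * s) @ Vt
        # Average each antidiagonal back into the generating vector.
        for k in range(len(h)):
            vals = [X[i, k - i] for i in range(max(0, k - n + 1), min(m, k + 1))]
            h[k] = np.mean(vals)
        for k, v in known.items():           # re-impose observed entries
            h[k] = v
    return h

# Rank-1 example: h_k = 0.8**k with the entry k = 5 missing.
true = 0.8 ** np.arange(7)
known = {k: true[k] for k in range(7) if k != 5}
h0 = np.where(np.arange(7) == 5, 0.0, true)
h = complete_hankel(h0, known, rank=1, rows=4)
```

In this rank-1 case the recovered entry is forced by the vanishing of the 2x2 minors, so the iteration converges to the geometric sequence; the talk's question is when the convex nuclear-norm surrogate recovers the same completion.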
Silviu Filip (ENS Lyon)
The Parks-McClellan algorithm and Chebyshev-proxy rootfinding methods
During the 1970s, James McClellan and Thomas Parks developed an iterative routine for designing finite impulse response filters. Their work is based on the theory of minimax polynomial approximation, and today it is one of the most well-known filter design methods in digital signal processing. In this talk, I will present recent work I have been a part of on improving the practical behavior of this algorithm [1]. My main focus will be on numerically stable root-finding routines that rely on determining the eigenvalues of appropriate structured matrices [2]. These arise in the context of polynomial interpolation using the basis of Chebyshev polynomials. Extrema/root finding is usually the most computationally intensive part of the Parks-McClellan algorithm. Through interval subdivision techniques, we were able to construct a very robust implementation of this filter design routine, one that is more scalable than other existing implementations, such as those found in MATLAB and SciPy.
[1] Filip, S. A robust and scalable implementation of the Parks-McClellan algorithm for designing FIR filters, submitted for publication, preliminary version available here
[2] Boyd, J. Finding the Zeros of a Univariate Equation: Proxy Rootfinders, Chebyshev Interpolation, and the Companion Matrix. SIAM Review 55, 2 (2013), 375–396
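The Chebyshev-proxy idea of [2] can be sketched in a few lines of Python using NumPy's Chebyshev utilities, which build and solve the colleague (companion-like) matrix internally. This toy version (function name ours) interpolates directly on [-1, 1] and omits the interval subdivision used in the robust implementation:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_proxy_roots(f, deg=10):
    """Chebyshev-proxy rootfinder on [-1, 1]: replace f by a Chebyshev
    interpolant, compute the proxy's roots as eigenvalues of its
    colleague matrix, and keep the real ones inside the interval."""
    coeffs = C.chebinterpolate(f, deg)   # degree-`deg` proxy polynomial
    roots = C.chebroots(coeffs)          # colleague-matrix eigenvalues
    real = roots[np.abs(roots.imag) < 1e-8].real
    return np.sort(real[np.abs(real) <= 1.0])

# cos(pi x) has exactly two roots in [-1, 1], at x = -1/2 and x = 1/2.
r = cheb_proxy_roots(lambda x: np.cos(np.pi * x))
```

Subdividing the interval, as in [1], keeps the proxy degree on each piece small, which is what makes the eigenvalue-based root finding both numerically stable and scalable.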