14–15 mai 2018
ENS de Lyon
Time zone: Europe/Paris

List of talks

  • Florent Bréhard (LIP - ENS de Lyon and LAAS - CNRS)
    A Newton-like validation method for Chebyshev approximate solutions of linear ordinary differential equations
    A wide range of efficient numerical routines exist for solving function space problems (ODEs, PDEs, optimization, etc.) when no closed form is known for the solution. While most applications prioritize efficiency, some safety-critical tasks, as well as computer assisted mathematics, need rigorous guarantees on the computed result. For that, rigorous numerics aims at providing numerical approximations together with rigorous mathematical statements about them, without sacrificing (too much) efficiency and automation. In the spirit of Newton-like validation methods (see for example [2]), we propose a fully automated algorithm which computes both a numerical approximate solution in Chebyshev basis and a rigorous uniform error bound for a restricted class of differential equations, namely Linear ODEs (LODEs). Functions are rigorously represented using Chebyshev models [1], which are a generalization of Taylor models [3] with better convergence properties.
    Broadly speaking, the algorithm works in two steps:
    (i) After applying an integral transform on the LODE, an infinite-dimensional linear almost-banded system is obtained. Its truncation at a given order N is solved with the fast algorithm of [4].
    (ii) This solution is validated using a specific Newton-like fixed-point operator. This operator is built by approximating the integral operator with a finite-dimensional truncation, whose inverse Jacobian is in turn approximated by an almost-banded matrix obtained with a modified version of the algorithm of [4].
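    Schematically, the a posteriori argument behind such Newton-like validations runs as follows (our notation, simplified): writing the integral reformulation as L[y] = g and letting A be an approximate inverse of a finite-dimensional truncation of L, one considers the affine operator T(y) = y − A(L[y] − g). If one can rigorously bound ||T(ŷ) − ŷ|| ≤ δ for the numerical candidate ŷ and ||I − A∘L|| ≤ λ < 1 in a suitable (here Chebyshev) norm, then T is a contraction and, provided A is injective, the exact solution y* satisfies ||y* − ŷ|| ≤ δ/(1 − λ), which is the kind of rigorous uniform error bound returned together with the approximation.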
    A C library implementing this validation method is freely available online at https://gforge.inria.fr/projects/tchebyapprox/
    This is joint work with Nicolas Brisebarre and Mioara Joldes.
    References:
    [1] N. Brisebarre and M. Joldes. Chebyshev interpolation polynomial-based tools for rigorous computing. In Proceedings of the 2010 International Symposium on Symbolic and Algebraic Computation, pages 147-154. ACM, 2010.
    [2] J.-P. Lessard and C. Reinhardt. Rigorous numerics for nonlinear differential equations using Chebyshev series. SIAM J. Numer. Anal., 52(1):1-22, 2014.
    [3] K. Makino and M. Berz. Taylor models and other validated functional inclusion methods. International Journal of Pure and Applied Mathematics, 4(4):379-456, 2003.
    [4] S. Olver and A. Townsend. A fast and well-conditioned spectral method. SIAM Review, 55(3):462-489, 2013.

 

  • Sylvie Icart (I3S - Université de Nice)
    About the diagonalisation of para-Hermitian matrices
    It is well known that a Hermitian matrix can be diagonalized by means of a unitary matrix. The aim of this talk is to present the extension of this result to polynomial matrices, known as the polynomial eigenvalue decomposition (PEVD) [1], which occurs e.g. in blind equalization. In this context, the eigenvalues are polynomials instead of scalars. Moreover, the polynomials involved are Laurent polynomials, i.e. they have both positive and negative exponents. We will show that in this framework one can still define unimodular matrices, the Smith form and invariant polynomials. We will present the difference between the order (or length) and the degree of a polynomial matrix. Extending paraconjugation from polynomials to polynomial matrices, one defines para-Hermitianity. We give some properties of these matrices. Then we will show that diagonalization of para-Hermitian matrices is not always possible. Furthermore, we will present some results found in the literature on how to approximate a PEVD.
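    As a brief reminder for readers less familiar with the terminology (notation ours): for a Laurent polynomial matrix A(z) = Σ_k A_k z^k, the para-conjugate is Ã(z) = Σ_k A_k^H z^(−k), i.e. Ã(z) = A^H(1/z̄), and A is said to be para-Hermitian when Ã(z) = A(z); on the unit circle z = e^(jω), this amounts to requiring that A(e^(jω)) be Hermitian for every ω.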
    References:
    [1] J. G. McWhirter, P. D. Baxter, T. Cooper, S. Redif, and J. Foster, An EVD algorithm for Para-Hermitian polynomial matrices, IEEE Transactions on Signal Processing, vol. 55, no. 6, June 2007, pp. 2158-2169.

 

  • Romain Lebreton (LIRMM - Université de Montpellier)
    Algorithms for structured linear systems solving and their implementation
    There exists a vast literature dedicated to algorithms for structured matrices, but relatively few descriptions of actual implementations and their practical performance in symbolic computation. In this talk, we consider the problem of solving Cauchy-like systems, and its application to mosaic Toeplitz systems, in two contexts: first over finite fields, where basic operations have unit cost, then over Q. We introduce new variants of previous algorithms, describe an implementation of these techniques, and report on its practical behavior. We pay special attention to particular cases such as the computation of algebraic approximants.
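    As background (standard notation, not necessarily that of the talk): an n × n matrix M is Cauchy-like with respect to node vectors x and y when the displacement D_x M − M D_y = G H^T has small rank α, where D_x and D_y are the diagonal matrices built from x and y and G, H have α columns; when the x_i and y_j are pairwise distinct, this means M_{ij} = (Σ_k G_{ik} H_{jk})/(x_i − y_j), so the matrix is entirely described by its compact generators (x, y, G, H), on which the fast algorithms operate.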
    This is joint work with Eric Schost and Seung-Gyu Hyun.

 

  • Théo Mary (University of Manchester)
    Bridging the gap between flat and hierarchical low-rank matrix formats
    Matrices possessing a low-rank property arise in numerous scientific applications. This property can be exploited to provide a substantial reduction of the complexity of their LU or LDLT factorization. Among the possible low-rank matrix formats, the flat Block Low-Rank (BLR) format is easy to use but achieves superlinear complexity. Alternatively, the hierarchical formats achieve linear complexity at the price of a much more complex hierarchical matrix representation. In this talk, we propose a new format based on multilevel BLR approximations. Contrary to hierarchical matrices, the number of levels in the block hierarchy is fixed at a given constant; we prove that this provides a simple way to finely control the desired complexity of the dense multilevel BLR factorization. By striking a balance between the simplicity of the BLR format and the low complexity of the hierarchical ones, the multilevel BLR format bridges the gap between flat and hierarchical low-rank formats. The multilevel BLR format is of particular relevance in the context of sparse (e.g. multifrontal) solvers, for which it is able to trade off the optimal dense complexity of the hierarchical formats to benefit from the simplicity of the BLR format while still achieving O(n) sparse complexity.
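    As a toy illustration of the flat BLR idea (a sketch in our own notation, not the authors' solver; the function name, block size b and relative tolerance tol are ours), the snippet below partitions a dense matrix into a regular grid of blocks, keeps the diagonal blocks dense, and replaces each off-diagonal block by truncated-SVD factors:

    import numpy as np

    def blr_compress(A, b, tol):
        # Flat BLR sketch: keep diagonal blocks dense, compress each
        # off-diagonal b-by-b block to rank k via a truncated SVD.
        n = A.shape[0]
        blocks = {}
        for i in range(0, n, b):
            for j in range(0, n, b):
                Aij = A[i:i+b, j:j+b]
                if i == j:
                    blocks[(i, j)] = ("dense", Aij)
                else:
                    U, s, Vt = np.linalg.svd(Aij, full_matrices=False)
                    k = int(np.count_nonzero(s > tol * s[0])) if s.size and s[0] > 0 else 0
                    blocks[(i, j)] = ("lowrank", U[:, :k] * s[:k], Vt[:k, :])
        return blocks

    Roughly speaking, a multilevel BLR format refines blocks with a further level of BLR blocking, but only down to a fixed, constant number of levels, whereas fully hierarchical formats recurse down to O(log n) levels.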
    This is joint work with Patrick Amestoy, Alfredo Buttari and Jean-Yves L'Excellent.

 

  • Stefano Massei (EPF Lausanne)
    Exploiting off-diagonal rank structures in the solution of linear matrix equations
    Linear matrix equations, namely Sylvester and Lyapunov equations, play an important role in several applications arising in control theory and PDEs. In the large-scale scenario, it is crucial to exploit the structure of the data in order to carry out the computations and store the final solution. We focus on the case in which the coefficients have low-rank off-diagonal blocks, and we study when this property is numerically preserved in the solution. Then, we propose a divide-and-conquer scheme able to exploit this structure, reaching a linear-polylogarithmic complexity in both time and memory consumption.
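    For reference (standard notation): the Sylvester equation reads AX + XB = C, and the Lyapunov equation is its symmetric special case AX + XA^H = C; in the setting of the talk, the coefficient matrices have low-rank off-diagonal blocks (banded matrices being a basic example), and the question is to what extent the solution X inherits this off-diagonal low-rank structure.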

 

  • José Henrique de Morais Goulart (Gipsa-lab - Université Grenoble Alpes)
    On minimal ranks and the approximate block-term tensor decomposition
    The block-term tensor decomposition (BTD) is a generalization of the tensor rank (or canonical polyadic) decomposition which is well suited for certain source separation problems. In this talk, I will discuss the existence of a best approximate BTD consisting of a sum of low-rank matrix-vector tensor products. This investigation is motivated by the fact that a tensor might not admit an exact BTD with a given structure (number of blocks and their ranks). After a brief introduction, we will proceed by exploring the isomorphism between third-order tensors and matrix polynomials. To every matrix polynomial one can associate a sequence of minimal ranks, which is unique up to permutation and invariant under the action of the general linear (or tensor equivalence) group. This notion is a key ingredient of our problem, since it induces a natural hierarchy of sets of third-order tensors corresponding to different choices of ranks for the blocks of the BTD. In the particular case of matrix pencils, I will explain how the minimal ranks of a pencil can be directly determined from its Kronecker canonical form. By relying on this result, one can show that no real pencil of dimensions 2k × 2k having full minimal ranks admits a best approximation on the set of real pencils whose minimal ranks are bounded by 2k-1. From the tensor viewpoint, this means that there exist third-order tensors which do not have a best approximation on a certain set of two-block BTDs. These tensors form a non-empty open subset of the space of 2k × 2k × 2 tensors, which is therefore of positive volume. I shall sketch the proof of this result and then discuss some possible extensions of this work and open problems.
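    In symbols (standard notation, not necessarily that of the talk): a BTD of the kind considered here writes a third-order tensor T as T = Σ_{r=1}^R (A_r B_r^T) ∘ c_r, where ∘ denotes the outer (tensor) product, each matrix A_r B_r^T has rank at most L_r, and each c_r is a vector; every slice of T along the third mode is then a linear combination of the low-rank matrices A_r B_r^T, and the canonical polyadic decomposition is recovered when all the L_r equal 1.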

 

  • Bernard Mourrain (Inria Sophia Antipolis)
    Tensor decomposition and structured matrices
    Tensor decomposition problems appear in many areas such as Signal Processing, Quantum Information Theory, Algebraic Statistics, Biology, Complexity Analysis, … as a way to recover hidden structures from data. The decomposition is a representation of the tensor as a weighted sum of a minimal number of terms, which are tensor products of vectors. We present an algebraic approach to address this problem, which involves duality, Gorenstein Artinian algebras and Hankel operators. We show the connection with the low-rank decomposition of Hankel matrices, discuss algebraic and optimization techniques to solve it, and illustrate the methods on some examples.
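    To fix notation (a standard formulation, simplified to the symmetric case): the decomposition of a symmetric tensor T of order d expresses it as T = Σ_{i=1}^r w_i v_i ⊗ … ⊗ v_i (d factors) with r minimal; viewing T as a linear functional on polynomials, the associated truncated Hankel matrices have rank at most r, which is the link with the low-rank decomposition of structured Hankel matrices exploited in the algebraic approach.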

 

  • Simone Naldi (XLIM - Université de Limoges)
    Linear matrix inequalities and structured matrices
    A Linear Matrix Inequality (LMI) is the problem of deciding whether a given affine space of real symmetric matrices contains a positive semidefinite matrix. It is the natural generalization of the feasibility problem of linear programming to the semidefinite cone. Semidefinite Programming (SDP) consists in minimizing a linear function over LMI constraints. Both LMIs and SDP have attracted the attention of different communities, since many hard problems can be "relaxed" into instances of these problems (e.g. polynomial optimization, graph theory, control theory). In this talk we present the LMI problem and a general method to compute exact solutions. In the special case of Hankel LMIs, exploiting the additional structure yields better complexity bounds.
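    In symbols (standard form): an LMI asks whether there exists x in R^n such that A(x) = A_0 + x_1 A_1 + … + x_n A_n is positive semidefinite, where the A_i are given real symmetric matrices; SDP minimizes a linear function c^T x subject to such a constraint. In the Hankel case mentioned at the end, the matrices defining the pencil A(x) are Hankel matrices (our reading), and this is the additional structure that can be exploited.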

 

  • Marco Sutti (Université de Genève)
    Connecting geodesics for the Stiefel manifold
    Several applications in optimization, image and signal processing deal with data that belong to the Stiefel manifold St(n, p), that is, the set of n × p matrices with orthonormal columns. Some applications, for example, the computation of the Karcher mean, require evaluating the geodesic distance between two arbitrary points on St(n, p). This can be done by explicitly constructing the geodesic connecting these two points. An existing method for finding geodesics is the leapfrog algorithm introduced by J. L. Noakes, which enjoys global convergence properties. We reinterpret this algorithm as a nonlinear block Gauss-Seidel process and propose a new convergence proof based on this framework for the case of St(n, p).
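    Schematically (our paraphrase of the leapfrog idea): the unknown geodesic is subdivided into short segments whose endpoints are close enough for the local geodesics between consecutive points to be computed reliably; the algorithm then repeatedly sweeps over the interior junction points, updating each one from its two neighbours, and treating each junction point as a block of unknowns updated in turn is what yields the nonlinear block Gauss-Seidel interpretation analysed in the talk.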
    This is joint work with Bart Vandereycken.

 

  • Françoise Tisseur (University of Manchester)
    Structured matrix polynomials and their sign characteristics: classical results and recent developments
    The sign characteristic is an invariant associated with particular eigenvalues of structured matrices, matrix pencils, or matrix polynomials. The concept of sign characteristic arises in different forms in many scientific fields, and is essential for the stability analysis of Hamiltonian systems and for understanding the behaviour of eigenvalues under structured perturbations. We will start by discussing the sign characteristic of Hermitian matrix polynomials, and show how to extend its definition to eigenvalues at infinity. We will discuss applications of the sign characteristic, in particular in control systems, in the solution of structured inverse polynomial eigenvalue problems, and in the characterization of special structured matrix polynomials such as overdamped quadratics and hyperbolic and quasidefinite matrix polynomials.

 

  • Lloyd N. Trefethen (University of Oxford and LIP - ENS de Lyon)
    Axis-alignment in low rank and other structures
    In two or more dimensions, various methods have been developed to represent matrices or functions more compactly. The efficacy of such methods often depends on alignment of the data with the axes. We shall discuss four cases: low-rank approximations, sparse grids, quasi-Monte Carlo, and multivariate polynomials.

 

  • Jean-Claude Yakoubsohn (IMT - Université de Toulouse)
    Certified and fast computation of SVD
    This is work in progress. We define and study a new method to approximate the SVD of a general matrix locally; this method, of homotopy type, has local quadratic convergence. We also give a complexity result for this approximation.
    This is joint work with Joris van der Hoeven.