Structured Matrix Days 2018
From Monday, May 14, 2018 (9:30 AM) to Tuesday, May 15, 2018 (4:30 PM)
Monday, May 14, 2018
9:30 AM - 10:20 AM
Welcome
Room: Passerelle
10:20 AM - 10:30 AM
Introduction
Room: Amphi A
10:30 AM - 11:30 AM
Axis-alignment in low rank and other structures
Lloyd N. Trefethen (University of Oxford and LIP - ENS de Lyon)
Room: Amphi A
In two or more dimensions, various methods have been developed to represent matrices or functions more compactly. The efficacy of such methods often depends on alignment of the data with the axes. We shall discuss four cases: low-rank approximations, sparse grids, quasi-Monte Carlo, and multivariate polynomials.
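As a minimal illustration of the low-rank case (a sketch, not material from the talk; the function, grid size and tolerance are arbitrary choices), an axis-aligned Gaussian sampled on a grid is separable and has numerical rank 1, while the same kind of Gaussian aligned with the diagonal is not:

    # Numerical rank of an axis-aligned vs. rotated Gaussian sampled on a grid.
    # Illustrative sketch only; grid size and tolerance are arbitrary.
    import numpy as np

    x = np.linspace(-1.0, 1.0, 200)
    X, Y = np.meshgrid(x, x)

    aligned = np.exp(-(X**2 + Y**2))        # separable: exp(-x^2) * exp(-y^2), exact rank 1
    rotated = np.exp(-5.0 * (X - Y)**2)     # not separable in x and y

    tol = 1e-10
    for name, F in [("aligned", aligned), ("rotated", rotated)]:
        s = np.linalg.svd(F, compute_uv=False)
        rank = int(np.sum(s > tol * s[0]))
        print(f"{name:8s} numerical rank (tol {tol:.0e}): {rank}")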
11:30 AM - 12:00 PM
Connecting geodesics for the Stiefel manifold
Marco Sutti (University of Geneva)
Room: Amphi A
Several applications in optimization, image and signal processing deal with data that belong to the Stiefel manifold St(n, p), that is, the set of n × p matrices with orthonormal columns. Some applications, for example, the computation of the Karcher mean, require evaluating the geodesic distance between two arbitrary points on St(n, p). This can be done by explicitly constructing the geodesic connecting these two points. An existing method for finding geodesics is the leapfrog algorithm introduced by J. L. Noakes, which enjoys global convergence properties. We reinterpret this algorithm as a nonlinear block Gauss-Seidel process and propose a new convergence proof based on this framework for the case of St(n, p).
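As a minimal sketch of the setting (textbook formulas, not the leapfrog algorithm of the talk; the dimensions and step size are arbitrary), the following generates a point on St(n, p), moves along a projected tangent direction, and retracts back onto the manifold with a QR factorization:

    # Points on the Stiefel manifold St(n, p) = {X in R^{n x p} : X^T X = I_p}.
    # Sketch only: tangent projection and QR retraction, not the leapfrog algorithm.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 8, 3

    def random_stiefel(n, p):
        Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
        return Q

    def project_tangent(X, Z):
        # Tangent space at X: Z - X * sym(X^T Z).
        S = X.T @ Z
        return Z - X @ (S + S.T) / 2

    def qr_retract(X, H):
        # QR-based retraction: maps X + H back onto St(n, p).
        Q, R = np.linalg.qr(X + H)
        return Q * np.sign(np.diag(R))   # fix column signs

    X = random_stiefel(n, p)
    H = 0.1 * project_tangent(X, rng.standard_normal((n, p)))
    Y = qr_retract(X, H)

    print("||X^T X - I|| =", np.linalg.norm(X.T @ X - np.eye(p)))
    print("||Y^T Y - I|| =", np.linalg.norm(Y.T @ Y - np.eye(p)))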
12:00 PM - 2:00 PM
Lunch & Coffee break
Room: Canteen & Passerelle
2:00 PM - 2:30 PM
Certified and Fast computation of SVD
Jean-Claude Yakoubsohn (Institut de Mathématiques de Toulouse)
Room: Amphi A
This is work in progress in collaboration with Joris van der Hoeven. We define and study a new method to locally approximate the SVD of a general matrix; this method has local quadratic convergence. We also give a complexity result for this approximation using such a homotopy method.
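For orientation only (a plain floating-point sanity check, not the certified method of the talk), one can measure the residual and the loss of orthogonality of a computed SVD:

    # Non-certified sanity check of a computed SVD: residual and orthogonality.
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((6, 4))

    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    residual = np.linalg.norm(A - U @ np.diag(s) @ Vt)
    orth_U = np.linalg.norm(U.T @ U - np.eye(U.shape[1]))
    orth_V = np.linalg.norm(Vt @ Vt.T - np.eye(Vt.shape[0]))

    print("||A - U S V^T|| =", residual)
    print("||U^T U - I||   =", orth_U)
    print("||V^T V - I||   =", orth_V)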
2:30 PM - 3:00 PM
A Newton-like Validation Method for Chebyshev Approximate Solutions of Linear Ordinary Differential Equations
Florent Bréhard (LAAS-CNRS Toulouse and CNRS, LIP - INRIA AriC / Plume - ENS de Lyon)
Room: Amphi A
A wide range of efficient numerical routines exist for solving function space problems (ODEs, PDEs, optimization, etc.) when no closed form is known for the solution. While most applications prioritize efficiency, some safety-critical tasks, as well as computer assisted mathematics, need rigorous guarantees on the computed result. For that, rigorous numerics aims at providing numerical approximations together with rigorous mathematical statements about them, without sacrificing (too much) efficiency and automation.

In the spirit of Newton-like validation methods (see for example [2]), we propose a fully automated algorithm which computes both a numerical approximate solution in Chebyshev basis and a rigorous uniform error bound for a restricted class of differential equations, namely Linear ODEs (LODEs). Functions are rigorously represented using Chebyshev models [1], which are a generalization of Taylor models [3] with better convergence properties. Broadly speaking, the algorithm works in two steps:
(i) After applying an integral transform on the LODE, an infinite-dimensional linear almost-banded system is obtained. Its truncation at a given order N is solved with the fast algorithm of [4].
(ii) This solution is validated using a specific Newton-like fixed-point operator. This is obtained by approximating the integral operator with a finite-dimensional truncation, whose inverse Jacobian is in turn approximated by an almost-banded matrix, obtained with a modified version of the algorithm of [4].

A C library implementing this validation method is freely available online at https://gforge.inria.fr/projects/tchebyapprox/

[1] N. Brisebarre and M. Joldeş. Chebyshev interpolation polynomial-based tools for rigorous computing. In Proceedings of the 2010 International Symposium on Symbolic and Algebraic Computation, pages 147-154. ACM, 2010.
[2] J.-P. Lessard and C. Reinhardt. Rigorous numerics for nonlinear differential equations using Chebyshev series. SIAM J. Numer. Anal., 52(1):1-22, 2014.
[3] K. Makino and M. Berz. Taylor models and other validated functional inclusion methods. International Journal of Pure and Applied Mathematics, 4(4):379-456, 2003.
[4] S. Olver and A. Townsend. A fast and well-conditioned spectral method. SIAM Review, 55(3):462-489, 2013.
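As a plain, non-rigorous illustration of approximation in the Chebyshev basis (not the validated algorithm above; the function and degree are arbitrary), one can interpolate a function at Chebyshev points and estimate the uniform error by dense sampling:

    # Plain Chebyshev approximation: interpolate at Chebyshev points and estimate
    # the uniform error by sampling. This gives no rigorous bound.
    import numpy as np
    from numpy.polynomial import chebyshev as C

    f = np.exp          # function to approximate on [-1, 1]
    deg = 12

    # Chebyshev points of the first kind; fitting deg+1 points with degree deg interpolates.
    nodes = np.cos(np.pi * (2 * np.arange(deg + 1) + 1) / (2 * (deg + 1)))
    coeffs = C.chebfit(nodes, f(nodes), deg)

    xs = np.linspace(-1.0, 1.0, 5001)
    err = np.max(np.abs(f(xs) - C.chebval(xs, coeffs)))
    print(f"sampled uniform error for degree {deg}: {err:.2e}")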
3:00 PM - 4:00 PM
Tensor decomposition and structured matrices
Bernard Mourrain (Inria Sophia-Antipolis)
Room: Amphi A
Tensor decomposition problems appear in many areas such as Signal Processing, Quantum Information Theory, Algebraic Statistics, Biology, Complexity Analysis, … as a way to recover hidden structures from data. The decomposition is a representation of the tensor as a weighted sum of a minimal number of terms, which are tensor products of vectors. We present an algebraic approach to address this problem, which involves duality, Gorenstein Artinian algebras and Hankel operators. We show the connection with low rank decomposition of Hankel matrices, discuss algebraic and optimization techniques to solve it and illustrate the methods on some examples.
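A minimal sketch of the Hankel connection (the weights and points below are arbitrary, and this is not the algebraic algorithm of the talk): the moment sequence of r weighted points yields a Hankel matrix of rank r.

    # Moment sequence of r weighted points -> Hankel matrix of rank r.
    import numpy as np
    from scipy.linalg import hankel

    w = np.array([2.0, -1.0, 0.5])     # weights
    t = np.array([0.3, -0.7, 1.1])     # distinct points
    r = len(w)

    # Moments a_k = sum_i w_i * t_i^k, for k = 0, ..., 2m-2 with m > r.
    m = 6
    a = np.array([np.sum(w * t**k) for k in range(2 * m - 1)])

    H = hankel(a[:m], a[m - 1:])       # H[i, j] = a[i + j]
    print("Hankel matrix rank:", np.linalg.matrix_rank(H, tol=1e-10), "(expected", r, ")")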
4:00 PM - 4:30 PM
Coffee break
Room: Salle Passerelle
4:30 PM - 5:30 PM
On minimal ranks and the approximate block-term tensor decomposition
José Henrique de Morais Goulart (GIPSA-lab, CNRS, Grenoble)
Room: Amphi A
The block-term tensor decomposition (BTD) is a generalization of the tensor rank (or canonical polyadic) decomposition which is well suited for certain source separation problems. In this talk, I will discuss the existence of a best approximate BTD consisting of a sum of low-rank matrix-vector tensor products. This investigation is motivated by the fact that a tensor might not admit an exact BTD with a given structure (number of blocks and their ranks). After a brief introduction, we will proceed by exploring the isomorphism between third-order tensors and matrix polynomials. To every matrix polynomial one can associate a sequence of minimal ranks, which is unique up to permutation and invariant under the action of the general linear (or tensor equivalence) group. This notion is a key ingredient of our problem, since it induces a natural hierarchy of sets of third-order tensors corresponding to different choices of ranks for the blocks of the BTD. In the particular case of matrix pencils, I will explain how the minimal ranks of a pencil can be directly determined from its Kronecker canonical form. By relying on this result, one can show that no real pencil of dimensions 2k × 2k having full minimal ranks admits a best approximation on the set of real pencils whose minimal ranks are bounded by 2k-1. From the tensor viewpoint, this means that there exist third-order tensors which do not have a best approximation on a certain set of two-block BTDs. These tensors form a non-empty open subset of the space of 2k × 2k × 2 tensors, which is therefore of positive volume. I shall sketch the proof of this result and then discuss some possible extensions of this work and open problems.
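For concreteness (a sketch with arbitrary dimensions and block ranks, not code from the talk), a two-block BTD writes a third-order tensor as a sum of matrix-vector tensor products, each block being (A_r B_r^T) combined with a vector c_r along the third mode:

    # Build a third-order tensor as a sum of low-rank matrix-vector tensor products,
    # i.e. a BTD with blocks of multilinear rank (L_r, L_r, 1).
    import numpy as np

    rng = np.random.default_rng(2)
    I, J, K = 7, 8, 4
    ranks = [2, 2]                       # L_1, L_2

    T = np.zeros((I, J, K))
    for L in ranks:
        A = rng.standard_normal((I, L))
        B = rng.standard_normal((J, L))
        c = rng.standard_normal(K)
        # Block contribution: T[i, j, k] += (A B^T)[i, j] * c[k]
        T += np.einsum('ij,k->ijk', A @ B.T, c)

    # Each frontal slice T[:, :, k] is a linear combination of the A_r B_r^T,
    # so its rank is at most sum(L_r).
    print("rank of a frontal slice:", np.linalg.matrix_rank(T[:, :, 0]))
    print("upper bound sum(L_r):   ", sum(ranks))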
8:00 PM - 10:30 PM
Social dinner
Location: 25 Rue du Bât d'Argent
Tuesday, May 15, 2018
9:30 AM - 10:30 AM
Structured matrix polynomials and their sign characteristics: classical results and recent developments
Françoise Tisseur (University of Manchester)
Room: Amphi A
The sign characteristic is an invariant associated with particular eigenvalues of structured matrices, matrix pencils, or matrix polynomials. The concept arises in different forms in many scientific fields, and it is essential for the stability analysis of Hamiltonian systems and for understanding the behaviour of eigenvalues under structured perturbations. We will start by discussing the sign characteristic of Hermitian matrix polynomials and show how to extend its definition to eigenvalues at infinity. We will then discuss applications of the sign characteristic, in particular in control systems, in the solution of structured inverse polynomial eigenvalue problems, and in the characterization of special structured matrix polynomials such as overdamped quadratics and hyperbolic and quasidefinite matrix polynomials.
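As a small worked example (a sketch assuming the standard convention that a simple real eigenvalue λ0 of a Hermitian matrix polynomial P with eigenvector x carries the sign of x^H P'(λ0) x; the matrices below are an arbitrary diagonal overdamped quadratic), the 2n real eigenvalues split into n of each sign:

    # Sign characteristic of an overdamped Hermitian quadratic Q(l) = l^2 M + l C + K,
    # using sign(x^H Q'(l) x) with Q'(l) = 2 l M + C for simple real eigenvalues.
    import numpy as np
    from scipy.linalg import eig

    M = np.diag([1.0, 1.0])
    C = np.diag([5.0, 7.0])
    K = np.diag([4.0, 10.0])
    n = M.shape[0]

    # First companion linearization: (l*B - A) z = 0 with z = [x; l*x].
    A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
    B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])

    vals, vecs = eig(A, B)
    for lam, z in sorted(zip(vals, vecs.T), key=lambda p: p[0].real):
        x = z[:n]                                   # eigenvector of Q from z = [x; l*x]
        s = np.sign((x.conj() @ (2 * lam * M + C) @ x).real)
        print(f"lambda = {lam.real:+6.2f}, sign characteristic = {int(s):+d}")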
10:30 AM - 11:00 AM
Coffee break
Room: Salle Passerelle
11:00 AM - 11:30 AM
About the Diagonalisation of Para-Hermitian Matrices
Sylvie Icart (I3S Université Côte d’Azur)
Room: Amphi A
It is well known that a Hermitian matrix can be diagonalized by means of a unitary matrix. The aim of this talk is to present the extension of this result to polynomial matrices, known as the PEVD (polynomial eigenvalue decomposition) [1], which occurs e.g. in blind equalization. In this context, the eigenvalues are polynomials instead of scalars. Moreover, these polynomials are Laurent polynomials, that is, they have both positive and negative exponents. We will show that in this framework one can still define unimodular matrices, the Smith form and invariant polynomials. We will present the difference between the order (or length) and the degree of a polynomial matrix. Extending paraconjugation to polynomial matrices, one defines para-Hermitianity. We give some properties of these matrices. Then, we will show that diagonalization of para-Hermitian matrices is not always possible. Furthermore, we will present some results found in the literature on how to approximate a PEVD.

[1] J. G. McWhirter, P. D. Baxter, T. Cooper, S. Redif, and J. Foster. An EVD algorithm for para-Hermitian polynomial matrices. IEEE Transactions on Signal Processing, 55(6):2158-2169, June 2007.
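As a concrete check (a sketch with arbitrary coefficients), a Laurent polynomial matrix R(z) = sum_k R_k z^{-k} is para-Hermitian exactly when R_{-k} = R_k^H for every lag k; the product H(z) H~(z) below is para-Hermitian by construction:

    # Check para-Hermitian structure of a Laurent polynomial matrix R(z) = sum_k R_k z^{-k}:
    # R is para-Hermitian iff R_{-k} = R_k^H for every lag k.
    import numpy as np

    rng = np.random.default_rng(3)
    p, deg = 2, 2

    # Polynomial matrix H(z) = H_0 + H_1 z^{-1} + ... + H_deg z^{-deg}.
    H = [rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p)) for _ in range(deg + 1)]

    # R(z) = H(z) H~(z), with paraconjugate H~(z) = H(1/conj(z))^H = sum_k H_k^H z^{+k}.
    # Coefficient of z^{-k} in R: sum_j H_j H_{j-k}^H.
    R = {}
    for k in range(-deg, deg + 1):
        R[k] = sum(H[j] @ H[j - k].conj().T
                   for j in range(max(0, k), min(deg, deg + k) + 1))

    ok = all(np.allclose(R[-k], R[k].conj().T) for k in range(deg + 1))
    print("R(z) is para-Hermitian:", ok)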
12:00 PM - 1:30 PM
Lunch & Coffee
Room: Canteen & Passerelle
1:30 PM - 2:30 PM
Exploiting off-diagonal rank structures in the solution of linear matrix equations
Stefano Massei (École Polytechnique Fédérale de Lausanne)
Room: Amphi A
Linear matrix equations, namely Sylvester and Lyapunov equations, play an important role in several applications arising in control theory and PDEs. In the large-scale scenario, it is crucial to exploit the structure in the data in order to carry out the computations and store the final solution. We focus on the case in which the coefficients have low-rank off-diagonal blocks, and we study when this property is numerically preserved in the solution. Then, we propose a divide-and-conquer scheme able to exploit the structure, reaching linear-polylogarithmic complexity in both time and memory consumption.
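The phenomenon is easy to observe numerically (a sketch with an arbitrary tridiagonal coefficient and tolerance, not the divide-and-conquer solver of the talk): solve a Lyapunov equation with a banded coefficient and inspect an off-diagonal block of the solution.

    # Off-diagonal low-rank structure in the solution of A X + X A^T + Q = 0
    # with a banded (tridiagonal) A.
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    n = 200
    A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # 1D Laplacian (negative definite)
    Q = np.eye(n)

    # solve_continuous_lyapunov solves A X + X A^H = rhs; set rhs = -Q.
    X = solve_continuous_lyapunov(A, -Q)

    # Numerical rank of an off-diagonal block of the solution.
    block = X[: n // 2, n // 2 :]
    s = np.linalg.svd(block, compute_uv=False)
    num_rank = int(np.sum(s > 1e-8 * s[0]))
    print(f"off-diagonal block is {block.shape[0]} x {block.shape[1]}, numerical rank ~ {num_rank}")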
2:30 PM - 3:00 PM
Bridging the gap between flat and hierarchical low-rank matrix formats
Theo Mary (University of Manchester)
Room: Amphi A
Matrices possessing a low-rank property arise in numerous scientific applications. This property can be exploited to provide a substantial reduction of the complexity of their LU or LDLT factorization. Among the possible low-rank matrix formats, the flat Block Low-Rank (BLR) format is easy to use but achieves superlinear complexity. Alternatively, the hierarchical formats achieve linear complexity at the price of a much more complex hierarchical matrix representation. In this talk, we propose a new format based on multilevel BLR approximations. Contrary to hierarchical matrices, the number of levels in the block hierarchy is fixed to a given constant; we prove that this provides a simple way to finely control the desired complexity of the dense multilevel BLR factorization. By striking a balance between the simplicity of the BLR format and the low complexity of the hierarchical ones, the multilevel BLR format bridges the gap between flat and hierarchical low-rank formats. The multilevel BLR format is of particular relevance in the context of sparse (e.g. multifrontal) solvers, for which it is able to trade off the optimal dense complexity of the hierarchical formats to benefit from the simplicity of the BLR format while still achieving O(n) sparse complexity.
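A flat BLR compression can be sketched in a few lines (block size, tolerance and the kernel matrix below are arbitrary, and this is compression only, not the factorization discussed in the talk): partition a dense matrix into blocks and replace each off-diagonal block by a truncated SVD at a prescribed tolerance.

    # Flat block low-rank (BLR) compression sketch: cut a dense matrix into b x b blocks
    # and store each off-diagonal block as a truncated SVD at tolerance tol.
    import numpy as np

    def blr_compress(A, b, tol):
        n = A.shape[0]
        blocks, dense_entries, blr_entries = {}, 0, 0
        for i in range(0, n, b):
            for j in range(0, n, b):
                blk = A[i:i + b, j:j + b]
                dense_entries += blk.size
                if i == j:
                    blocks[i, j] = ("dense", blk)           # keep diagonal blocks dense
                    blr_entries += blk.size
                else:
                    U, s, Vt = np.linalg.svd(blk, full_matrices=False)
                    r = int(np.sum(s > tol * s[0])) if s[0] > 0 else 0
                    blocks[i, j] = ("lowrank", U[:, :r] * s[:r], Vt[:r])
                    blr_entries += U[:, :r].size + Vt[:r].size
        return blocks, blr_entries / dense_entries

    # Kernel matrix with smooth off-diagonal interactions -> compressible blocks.
    x = np.linspace(0.0, 1.0, 512)
    A = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))

    blocks, ratio = blr_compress(A, b=64, tol=1e-8)
    print(f"BLR storage / dense storage = {ratio:.3f}")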
3:00 PM - 3:45 PM
Algorithms for Structured Linear Systems Solving and their Implementation
Romain Lebreton (LIRMM - Université de Montpellier)
Room: Amphi A
There exists a vast literature dedicated to algorithms for structured matrices, but relatively few descriptions of actual implementations and their practical performance in symbolic computation. In this talk, we consider the problem of solving Cauchy-like systems, and its application to mosaic Toeplitz systems, in two contexts: first over finite fields where basic operations have unit cost, then over Q. We introduce new variants of previous algorithms and describe an implementation of these techniques and its practical behavior. We pay special attention to particular cases such as the computation of algebraic approximants.
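For the Toeplitz case in floating point (an illustration only; the talk works in exact arithmetic over finite fields and Q, via Cauchy-like reductions), SciPy exposes a structured solver that avoids forming the dense matrix:

    # Solving a Toeplitz system without forming the dense matrix (Levinson recursion).
    import numpy as np
    from scipy.linalg import solve_toeplitz, toeplitz

    rng = np.random.default_rng(4)
    n = 500
    c = rng.standard_normal(n)                                  # first column
    r = np.concatenate(([c[0]], rng.standard_normal(n - 1)))    # first row
    b = rng.standard_normal(n)

    x = solve_toeplitz((c, r), b)                 # structured solve, O(n^2) flops
    x_dense = np.linalg.solve(toeplitz(c, r), b)  # dense reference solve

    print("max difference vs dense solve:", np.max(np.abs(x - x_dense)))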
3:45 PM - 4:30 PM
Closing remarks & Coffee break
Room: Amphi A & Passerelle