Speaker
Description
Abstract: In this talk we will discuss recent approaches for dealing with large volumes of data by exploiting data sparsity, which allows data to be compressed without significant loss of information. We first discuss recent algorithms for computing a low-rank approximation of a matrix, based on deterministic or randomized approaches, that are able to minimize communication on a parallel computer.
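As a point of reference for the randomized approaches mentioned above, the following is a minimal sketch of a generic randomized low-rank approximation (range sketching followed by a small SVD, in the style of Halko, Martinsson and Tropp). It is an illustration only; the communication-avoiding algorithms discussed in the talk are more involved. The function name and the oversampling parameter are choices made here for the example.

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, seed=0):
    """Rank-k approximation of A via a random Gaussian sketch.

    Generic randomized scheme, shown for illustration; not the
    speaker's communication-avoiding variant.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sketch the range of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)
    # Project A onto the sketched subspace and take a small SVD.
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub[:, :k]
    return U, s[:k], Vt[:k, :]

# Example: a matrix of exact rank 5 is recovered to machine precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_low_rank(A, k=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(err)
```

The attraction of such schemes on a parallel machine is that the dominant cost is a small number of matrix products, which are communication-friendly compared with a full SVD.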
We then discuss the approximation of higher-dimensional data represented by tensors, or multilinear arrays. We present a numerical method that compresses a tensor by constructing a piecewise tensor approximation: the tensor is partitioned into sub-tensors, and a low-rank tensor approximation (in a given format) is computed in each sub-tensor. Neither the partition nor the ranks are fixed a priori; instead, they are chosen to fulfill a prescribed accuracy and to optimize, to some extent, the storage. The different steps of the method are detailed, and numerical experiments on the Coulomb and Gibbs potentials are presented to assess its performance.
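A toy two-dimensional analogue of the piecewise idea can be sketched as follows: partition an array into blocks on a fixed grid and, in each block, keep the smallest SVD rank meeting a prescribed relative tolerance. This is only a caricature under simplifying assumptions (2-D, fixed partition, SVD as the low-rank format); the actual method adapts the partition and works with higher-dimensional tensor formats. The kernel below is a smoothed Coulomb-like function chosen here for illustration.

```python
import numpy as np

def block_low_rank(F, blocks, tol):
    """Blockwise adaptive-rank approximation of a 2-D array.

    For each block of a uniform blocks-by-blocks partition, the
    smallest truncation rank whose relative Frobenius error is
    below tol is kept. Returns the approximation and the rank map.
    """
    n = F.shape[0]
    step = n // blocks
    ranks = np.zeros((blocks, blocks), dtype=int)
    approx = np.zeros_like(F)
    for i in range(blocks):
        for j in range(blocks):
            sub = F[i*step:(i+1)*step, j*step:(j+1)*step]
            U, s, Vt = np.linalg.svd(sub, full_matrices=False)
            # tail[r] = sum of squared singular values from index r on
            tail = np.cumsum(s[::-1]**2)[::-1]
            total = tail[0]
            r = 1
            while r < len(s) and tail[r] > tol**2 * total:
                r += 1
            ranks[i, j] = r
            approx[i*step:(i+1)*step,
                   j*step:(j+1)*step] = (U[:, :r] * s[:r]) @ Vt[:r, :]
    return approx, ranks

# Smoothed Coulomb-like kernel on a 1-D grid: blocks are
# numerically low rank, so few singular values are kept per block.
x = np.linspace(0.0, 1.0, 64)
F = 1.0 / (np.abs(x[:, None] - x[None, :]) + 1.0)
approx, ranks = block_low_rank(F, blocks=4, tol=1e-8)
print(ranks)
print(np.linalg.norm(F - approx) / np.linalg.norm(F))
```

Since each block meets the tolerance in the Frobenius norm, the global relative error is bounded by the same tolerance; the storage saving then depends on how small the per-block ranks are, which is what the adaptive partition in the actual method is designed to exploit.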
This work on tensors is joint work with V. Ehrlacher and D. Lombardi.