Rencontres Statistiques Lyonnaises

Clusters Everywhere: A tour of cluster analysis and its application + Model-based Clustering with Sparse Covariance Matrices

by Prof. Brendan Murphy (University College Dublin, School of Mathematics & Statistics, Dublin, Ireland)

Europe/Paris
112 (Lyon1, Doua)


Description

First Part: Clusters Everywhere: A tour of cluster analysis and its application

This talk will give an overview of cluster analysis, including some history of the development of clustering, approaches taken and examples of its application in science, medicine and social science.

Second Part: Model-based Clustering with Sparse Covariance Matrices

Finite Gaussian mixture models are widely used for model-based clustering of continuous data. Nevertheless, since the number of model parameters scales quadratically with the number of variables, these models can easily become over-parameterized. For this reason, parsimonious models have been developed via covariance matrix decompositions or by assuming local independence. However, these remedies do not allow for direct estimation of sparse covariance matrices, nor do they take into account that the structure of association among the variables can vary from one cluster to another.

To address this, we introduce mixtures of Gaussian covariance graph models for model-based clustering with sparse covariance matrices. A penalized likelihood approach is employed for estimation, and a general penalty term on the graph configurations can be used to induce different levels of sparsity and to incorporate prior knowledge. Model estimation is carried out using a structural-EM algorithm for parameter and graph-structure estimation, where two alternative strategies, based on a genetic algorithm and an efficient stepwise search, are proposed for inference. With this approach, sparse component covariance matrices are obtained directly.

The framework results in a parsimonious model-based clustering of the data via a flexible model for the within-group joint distribution of the variables. Extensive simulated-data experiments and applications to illustrative datasets show that the method attains good classification performance and model quality. This work was completed with Michael Fop and Luca Scrucca.
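To make the quadratic growth mentioned in the abstract concrete, the short sketch below counts the free parameters of a full-covariance Gaussian mixture with K components and p variables. The function name and the printed examples are purely illustrative and are not part of the talk or the paper.

```python
def gmm_param_count(n_components, n_variables):
    """Free parameters of a Gaussian mixture with full covariance matrices:
    mixing weights (K - 1), means (K * p), and symmetric covariance
    matrices (K * p * (p + 1) / 2), for K components and p variables."""
    K, p = n_components, n_variables
    return (K - 1) + K * p + K * p * (p + 1) // 2

# The covariance term dominates: for fixed K the count grows like K * p^2 / 2,
# so doubling the number of variables roughly quadruples the parameter count.
for p in (5, 10, 20, 40):
    print(p, gmm_param_count(3, p))
```

This is exactly why sparsity-inducing penalties or covariance decompositions are attractive: forcing zeros in (or structuring) the per-cluster covariance matrices removes most of the dominant quadratic term.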