Abstract: In this talk, we will address the following question: how can we enforce both parsimony and structure in a regularized regression model? Specifically, we investigate strategies that promote not only sparsity but also the clustering of correlated features, and we aim to recover these groups without any prior knowledge of the clusters. For such problems, the sorted L1 norm (SLOPE) is already widely used. Yet it relies on the L1 penalty and therefore shares a major drawback with the Lasso: it is biased and tends to over-shrink non-null coefficients. In the same way that nonconvex penalties were introduced as unbiased alternatives to the L1 penalty, we will study sorted nonconvex penalties as unbiased counterparts of SLOPE, and discuss how to use them in practice.
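To make the objects concrete, here is a minimal numerical sketch of the penalties involved. It assumes a Python/NumPy setting and uses MCP as the nonconvex penalty; pairing the sorted coefficient magnitudes with a decreasing sequence of thresholds is one natural way to define a sorted nonconvex penalty, which may differ from the exact formulation used in the talk.

```python
import numpy as np

def sorted_l1_penalty(beta, lambdas):
    """SLOPE / sorted L1 norm: sum_i lambdas[i] * |beta|_(i), where
    |beta|_(1) >= ... >= |beta|_(p) and lambdas is nonincreasing."""
    abs_sorted = np.sort(np.abs(beta))[::-1]  # magnitudes in decreasing order
    return float(np.dot(lambdas, abs_sorted))

def mcp(t, lam, gamma):
    """Minimax concave penalty (MCP): grows like lam*|t| near zero, then
    flattens to a constant beyond gamma*lam, so large coefficients
    incur no extra shrinkage (the source of unbiasedness)."""
    t = np.abs(t)
    return np.where(t <= gamma * lam,
                    lam * t - t**2 / (2 * gamma),
                    gamma * lam**2 / 2)

def sorted_mcp_penalty(beta, lambdas, gamma):
    """One possible sorted nonconvex penalty (an illustrative assumption):
    apply MCP with the i-th largest threshold to the i-th largest |beta|."""
    abs_sorted = np.sort(np.abs(beta))[::-1]
    return float(np.sum(mcp(abs_sorted, lambdas, gamma)))

# Example: a decreasing lambda sequence, as SLOPE requires.
beta = np.array([3.0, -3.0, 0.5, 0.0])
lambdas = np.array([1.0, 0.8, 0.6, 0.4])
print(sorted_l1_penalty(beta, lambdas))        # biased: keeps penalizing large betas
print(sorted_mcp_penalty(beta, lambdas, 3.0))  # flat beyond gamma*lam: no extra shrinkage
```

Note how the sorted L1 penalty keeps growing linearly in the large coefficients, while the MCP-based variant saturates, which is precisely the bias issue the talk proposes to address.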