A core difficulty in modern machine learning is to understand the convergence of gradient-based methods in random, high-dimensional, non-convex landscapes. In this work, we study the behaviour of gradient flow and online stochastic gradient descent applied to the multi-spike tensor PCA problem, where the goal is to recover a low-rank tensor from noisy observations. The main thrust of our proof is a sharp control of the random part of the dynamics, followed by the analysis of a low-dimensional dynamical system; this yields both sample complexity bounds and a complete description of the set of critical points reached by the dynamics. In particular, we obtain sufficient conditions under which the dynamics reach the global minimizer of the problem from uninformative initializations. This talk is based on joint work with Vanessa Piccolo and Gérard Ben Arous.
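For concreteness, a common formulation of the multi-spike tensor PCA model (the normalization below is illustrative and may differ from the one used in this work) observes an order-$p$ tensor
\[
Y = \sum_{i=1}^{r} \lambda_i \, v_i^{\otimes p} + W,
\]
where $v_1, \dots, v_r \in \mathbb{S}^{N-1}$ are unknown orthonormal signal vectors, $\lambda_1 \geq \cdots \geq \lambda_r > 0$ are signal-to-noise ratios, and $W$ is a noise tensor with i.i.d.\ centered Gaussian entries. Gradient flow and online SGD are then run on a correlation loss such as $L(x_1, \dots, x_r) = -\sum_{i=1}^{r} \langle Y, x_i^{\otimes p} \rangle$, with each estimate $x_i$ constrained to the unit sphere.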