Deep learning has emerged as the cornerstone of modern machine learning, yet its theoretical understanding remains challenging due to the multi-layer structure, non-linearity, and complex statistical dependencies introduced by optimization. In this talk, I will focus on deep neural network (DNN) models and present a random matrix analysis of both explicit and implicit DNNs applied to high-dimensional Gaussian mixture data. By examining the DNN conjugate kernel (CK) and neural tangent kernel (NTK) matrices, this approach establishes *explicit* connections between explicit and implicit networks, as well as between shallow and deep networks.
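
To make the central objects concrete, below is a minimal sketch, not taken from the talk, of the empirical CK and NTK Gram matrices for a one-hidden-layer random-feature network on two-class Gaussian mixture data. The ReLU activation, the mixture parameters, and all variable names are illustrative assumptions; the spectra of such matrices are the kind of quantity a random matrix analysis of this setting studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian mixture data: n points in dimension p, two classes with means +/- mu
# (an assumed toy setup, not the talk's exact model)
n, p, width = 200, 100, 2000
mu = np.zeros(p)
mu[0] = 2.0
y = rng.integers(0, 2, n) * 2 - 1                   # labels in {-1, +1}
X = y[:, None] * mu + rng.standard_normal((n, p))   # x_i = y_i * mu + noise
X /= np.sqrt(p)                                     # conventional 1/sqrt(p) scaling

# Random first-layer weights of a one-hidden-layer ReLU network
W = rng.standard_normal((width, p))
pre = X @ W.T                                       # pre-activations, shape (n, width)
act = np.maximum(pre, 0.0)                          # ReLU features sigma(W x)

# Empirical conjugate kernel: Gram matrix of the post-activation features
K_ck = act @ act.T / width

# Empirical NTK for this architecture: the CK term plus the gradient term
# (x_i . x_j) * mean_k[ sigma'(w_k . x_i) sigma'(w_k . x_j) ]
dact = (pre > 0).astype(float)                      # ReLU derivative
K_ntk = K_ck + (X @ X.T) * (dact @ dact.T / width)

# Eigenvalues of such kernel matrices are the objects of the random matrix analysis
eigvals = np.linalg.eigvalsh(K_ntk)
print("top eigenvalues:", np.sort(eigvals)[-5:])
```

In this regime (n, p, and width all large and comparable), the bulk spectrum and the isolated spikes of such Gram matrices carry the class structure of the mixture, which is what connects the kernel description of explicit networks to their implicit counterparts.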