Deep learning has become a prominent approach for many applications, such as computer vision or natural language processing. However, the mathematical understanding of these methods is still incomplete. A recent line of work views neural networks as discretized versions of differential equations. I will first give an overview of this emerging field and then discuss new results on residual neural networks, which are state-of-the-art deep learning models.
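The correspondence underlying this viewpoint can be sketched in a few lines: a residual update x_{n+1} = x_n + f(x_n) has the same form as an explicit Euler step for the ODE dx/dt = f(x) with step size 1. The residual function f below (a single tanh layer) is a hypothetical, illustrative choice, not a model from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical residual function f: a single tanh layer (illustrative only).
W = 0.1 * rng.standard_normal((4, 4))

def f(x):
    return np.tanh(W @ x)

def resnet_layer(x):
    # Residual block: x_{n+1} = x_n + f(x_n)
    return x + f(x)

def euler_step(x, h):
    # Explicit Euler step for dx/dt = f(x) with step size h
    return x + h * f(x)

x0 = rng.standard_normal(4)
# With step size h = 1, one Euler step coincides with one residual layer.
print(np.allclose(resnet_layer(x0), euler_step(x0, 1.0)))
```

Stacking many such layers then corresponds to integrating the ODE over time, which is the starting point for the "neural ODE" perspective the abstract alludes to.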