Sampling high-dimensional probability distributions is a common task in computational chemistry, Bayesian inference, and other fields. Markov chain Monte Carlo (MCMC) is the method of choice for these calculations, but it often suffers from slow convergence. I will discuss how methods from deep learning (DL) can enhance the performance of MCMC via a feedback loop in which DL is used to learn better samplers, e.g. based on generative models, while MCMC supplies the data on which these models are trained. I will illustrate these techniques with several examples, including the sampling of random fields and the calculation of free energies and Bayes factors.
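A minimal sketch of the feedback loop described above, under strong simplifying assumptions: the "generative model" is replaced by a single Gaussian fitted to the current samples, the target is a hypothetical one-dimensional bimodal density chosen for illustration, and the MCMC step is an independence Metropolis–Hastings sampler whose proposal is the fitted model. The alternation "fit model on MCMC data, then use model as proposal" mirrors the loop in the abstract, not any specific implementation from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Hypothetical unnormalized bimodal target: mixture of N(-2, 1) and N(2, 1).
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def fit_proposal(samples):
    # Stand-in "generative model": a single Gaussian fitted to the data.
    return samples.mean(), samples.std() + 1e-3

def log_q(x, mu, sigma):
    # Log-density of the Gaussian proposal (up to a constant).
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)

def independence_mh(x0, mu, sigma, n_steps):
    # Independence Metropolis-Hastings: propose from the learned model,
    # accept with ratio p(x') q(x) / (p(x) q(x')).
    x, chain = x0, []
    for _ in range(n_steps):
        xp = mu + sigma * rng.standard_normal()
        log_alpha = (log_target(xp) - log_target(x)
                     + log_q(x, mu, sigma) - log_q(xp, mu, sigma))
        if np.log(rng.random()) < log_alpha:
            x = xp
        chain.append(x)
    return np.array(chain)

# Feedback loop: MCMC generates training data, the model is refit on it,
# and the improved proposal drives the next round of MCMC.
samples = rng.standard_normal(500)  # crude initial draws
for _ in range(5):
    mu, sigma = fit_proposal(samples)
    samples = independence_mh(samples[-1], mu, sigma, n_steps=2000)
```

After a few rounds the fitted proposal broadens to cover both modes, so the chain mixes between them far better than with the initial narrow proposal; in a realistic setting the Gaussian fit would be replaced by training a normalizing flow or similar generative model on the accumulated MCMC samples.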