The search for the physics of our universe within string theory is greatly hindered by the existence of a vast number of candidate four-dimensional vacua, among which our universe must lie. We describe how modern tools of machine learning help mitigate and address this problem. Using a class of some one million Calabi-Yau manifolds as concrete examples, we show how the paradigm of few-shot machine learning and Siamese Neural Networks represents them as points in R^3. With these methods, we can compress the search space for exceedingly rare manifolds to within one percent of the original data by training on only a few hundred data points. Key to our methods is a notion of similarity between two string vacua. As a general illustration of the utility of this idea, consider a human child who has seen a few tens of dogs and cats but no other fauna. At first sight, the child would likely recognize that a fox is very similar to a dog but not to a cat, and that a fish is completely different from both. By learning similarity, humans are thus able both to learn from very few examples and to extrapolate their learning to instances they have never seen before. Our analyses apply these ideas to the string landscape. We also comment on possible mathematical implications. The talk is based primarily on arXiv:2111.04761.
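To make the Siamese idea concrete, the sketch below shows the two essential ingredients in miniature: a single embedding network whose weights are shared by both inputs of a pair, and a contrastive loss that pulls similar pairs together in R^3 and pushes dissimilar pairs at least a margin apart. This is a minimal illustrative sketch, not the architecture of the paper: the input dimension, the toy random features standing in for Calabi-Yau data (e.g. entries of a configuration matrix or Hodge numbers), and the single tanh layer are all assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_OUT = 10, 3  # toy input features -> embedding points in R^3

# Shared weights: both "twins" of the Siamese pair use the same W,
# so similarity is measured in a common embedding space.
W = rng.normal(size=(D_OUT, D_IN)) * 0.1

def embed(x):
    """Shared embedding network: here a single linear + tanh layer into R^3."""
    return np.tanh(W @ x)

def contrastive_loss(x1, x2, same, margin=1.0):
    """Contrastive loss: pull similar pairs together, push dissimilar
    pairs until they are at least `margin` apart in embedding space."""
    d = np.linalg.norm(embed(x1) - embed(x2))
    if same:
        return d**2
    return max(0.0, margin - d)**2

# Toy pair of "manifolds" as random feature vectors (illustrative only).
x_a = rng.normal(size=D_IN)
x_b = rng.normal(size=D_IN)

print(embed(x_a).shape)                         # each object becomes a point in R^3
print(contrastive_loss(x_a, x_b, same=False))   # nonnegative penalty for a dissimilar pair
```

Training such a network on a few hundred labeled pairs, rather than on a few hundred single examples, is what makes the few-shot regime viable: the number of distinct pairs grows quadratically in the number of labeled points.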