Imagine you want to solve a maze. Easy peasy: Dijkstra's shortest-path algorithm gets the job done.
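For concreteness, here is a minimal Python sketch of Dijkstra's algorithm on a grid maze (the function name and the convention that each cell carries a traversal cost are illustrative choices, not taken from the talk):

```python
import heapq

def dijkstra_grid(costs, start, goal):
    """Shortest path on a grid maze; costs[i][j] is the cost of
    stepping onto cell (i, j). Returns the path as a list of cells."""
    n_rows, n_cols = len(costs), len(costs[0])
    dist = {start: 0.0}
    parents = {}
    queue = [(0.0, start)]
    while queue:
        d, (i, j) = heapq.heappop(queue)
        if (i, j) == goal:
            break
        if d > dist.get((i, j), float("inf")):
            continue  # stale queue entry, already improved
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n_rows and 0 <= nj < n_cols:
                nd = d + costs[ni][nj]
                if nd < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = nd
                    parents[(ni, nj)] = (i, j)
                    heapq.heappush(queue, (nd, (ni, nj)))
    # Walk back from the goal to reconstruct the path
    path, node = [goal], goal
    while node != start:
        node = parents[node]
        path.append(node)
    return path[::-1]
```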
Now imagine the maze is not given to you as a nice clean graph: instead, all you have is a noisy satellite image. No worries, let's fire up a neural network to analyze the image and extract the relevant information.
But here's the challenge: deep learning models are trained with gradient descent, whereas combinatorial algorithms are fundamentally discrete. Since a discrete solver's output is piecewise constant in its inputs, its gradients are zero almost everywhere (and undefined at the jumps), so naive backpropagation learns nothing. Building a hybrid pipeline from both ingredients is therefore far from obvious, and yet it can be extremely useful for data-driven optimization tasks.
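A toy example makes the obstacle concrete. Everything below is illustrative: a stand-in "solver" that picks the cheaper of two candidate paths, each encoded as an indicator vector over edges. Tiny bumps to the cost vector never change the chosen path, so every finite-difference gradient comes out zero:

```python
import numpy as np

def toy_solver(theta):
    """A stand-in combinatorial solver: pick the cheaper of two
    candidate paths, each a 0/1 indicator vector over four edges."""
    paths = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], dtype=float)
    return paths[np.argmin(paths @ theta)]

theta = np.array([1.0, 1.0, 1.5, 0.6])
eps = 1e-4
base = toy_solver(theta)
for k in range(4):
    bumped = theta.copy()
    bumped[k] += eps
    # The chosen path never changes under a tiny bump, so every
    # finite-difference "gradient" is the zero vector.
    print((toy_solver(bumped) - base) / eps)
```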
This talk will introduce several ways to create differentiable layers from discrete solvers, thanks to a probabilistic perspective on the geometry of linear programs. It will also discuss the concrete implementation of these methods and their performance on various benchmarks.
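One construction in this spirit is the perturbed-optimizer idea: replace the solver by its average output under random perturbations of the costs. The sketch below (a minimal Monte Carlo version, reusing the hypothetical toy_solver from above; names and defaults are illustrative, and this is only one of the several methods the talk covers) shows how the averaged output becomes a smooth function of theta:

```python
import numpy as np

def perturbed_layer(theta, solver, sigma=0.1, n_samples=1000, seed=0):
    """Monte Carlo estimate of E[solver(theta + sigma * Z)], Z ~ N(0, I).

    Averaging discrete solutions over random cost perturbations yields
    a point inside the solution polytope that moves smoothly with theta;
    its Jacobian admits a similar Monte Carlo estimator."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n_samples, theta.size))
    return np.mean([solver(theta + sigma * z) for z in noise], axis=0)

# Reusing the toy solver above: the output now interpolates between
# the two candidate paths instead of jumping from one to the other.
print(perturbed_layer(np.array([1.0, 1.0, 1.5, 0.6]), toy_solver))
```

The noise scale sigma trades off smoothness against fidelity: larger values give smoother, easier-to-train layers whose outputs drift further from the true optimal solution.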