Speaker
Description
Neural networks are for the most part treated as black boxes.
In an effort to understand the mathematical structure that underlies them, we will explain how ReLU neural nets can be interpreted as zero-sum, turn-based stopping games.
The game runs in the opposite direction to the net: the input to the net is the terminal reward of the game, and the output of every neuron turns out to equal the value of the game at a corresponding state. The weights are used to define state-transition probabilities, and the biases to define rewards.
Running the ReLU net becomes the same as running the Shapley-Bellman backwards recursion (which in this case is minimax dynamic programming) for the value of the game.
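To make the direction reversal concrete, here is a minimal numerical sketch. The toy weights, all names, and the omission of the normalization that turns weights into transition probabilities are my assumptions, not the talk's construction; the point is only that the forward ReLU pass and the backward value recursion are one and the same loop, read in opposite time directions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 3-layer ReLU net (the talk's construction also turns weights
# into transition probabilities; that normalization is omitted here).
Ws = [rng.standard_normal((4, 3)),
      rng.standard_normal((4, 4)),
      rng.standard_normal((2, 4))]
bs = [rng.standard_normal(4), rng.standard_normal(4), rng.standard_normal(2)]

def forward(x):
    """Standard forward pass, from layer 1 up to layer L."""
    for W, b in zip(Ws, bs):
        x = np.maximum(W @ x + b, 0.0)
    return x

def game_value(x):
    """The same arithmetic read as a Shapley-Bellman backwards
    recursion: stage k of the game corresponds to layer L+1-k of the
    net, the net input x plays the role of the terminal reward, and
    each stage applies a one-step stopping update v -> max(W v + b, 0)
    (stop = take value 0, continue = take W v + b)."""
    stages = list(reversed(list(zip(Ws, bs))))  # stages in game time
    v = x                                       # terminal reward
    for W, b in reversed(stages):               # recurse backwards in game time
        v = np.maximum(W @ v + b, 0.0)
    return v

x = rng.standard_normal(3)
assert np.allclose(forward(x), game_value(x))
```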
As an application, given bounds on the input to the net, we obtain bounds on the output of every neuron.
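The abstract does not detail how these bounds are derived. As a point of comparison only, the sketch below propagates interval bounds through ReLU layers by splitting each weight matrix into its positive and negative parts; this is a standard interval-arithmetic scheme, which may well be coarser than the game-theoretic bounds of the talk.

```python
import numpy as np

def relu_layer_bounds(W, b, lo, up):
    """Given elementwise bounds lo <= x <= up on a layer's input,
    return bounds on relu(W @ x + b) via interval arithmetic."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    pre_lo = Wp @ lo + Wn @ up + b   # smallest possible pre-activation
    pre_up = Wp @ up + Wn @ lo + b   # largest possible pre-activation
    return np.maximum(pre_lo, 0.0), np.maximum(pre_up, 0.0)

def all_neuron_bounds(Ws, bs, lo, up):
    """Bounds for the output of every neuron, given input bounds."""
    bounds = []
    for W, b in zip(Ws, bs):
        lo, up = relu_layer_bounds(W, b, lo, up)
        bounds.append((lo, up))
    return bounds
```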
Moreover, the game interpretation links the ReLU net with statistical mechanics, representing the output of every neuron as a discrete path integral.
We will also explain the consequences of the game point of view for the interpretability of the net considered as a classifier.
Adding an entropic regularization to the ReLU net game allows us to interpret Softplus neural nets as games in an analogous fashion.
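A quick way to see why entropic regularization yields Softplus: softplus is the log-sum-exp smoothing of the max in ReLU(t) = max(t, 0), i.e. the free energy of the two options "stop" and "continue", and it recovers ReLU as the regularization vanishes. The temperature parameter beta below is my notation, not the talk's.

```python
import numpy as np

def softplus_beta(t, beta):
    """Entropic smoothing of relu(t) = max(t, 0):
    (1/beta) * log(exp(beta * t) + exp(beta * 0))."""
    return np.logaddexp(beta * t, 0.0) / beta

t = np.linspace(-3.0, 3.0, 7)
for beta in (1.0, 10.0, 100.0):
    gap = np.max(np.abs(softplus_beta(t, beta) - np.maximum(t, 0.0)))
    print(f"beta = {beta:5.1f}: max gap to ReLU = {gap:.5f}")
# The gap is (log 2) / beta, attained at t = 0, so the ReLU game is
# recovered in the beta -> infinity (vanishing regularization) limit.
```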
This is joint work with Stéphane Gaubert.