Description
In reinforcement learning, an agent interacts with an environment whose underlying mechanism is unknown by sequentially taking actions, receiving rewards, and transitioning to the next state. With the goal of maximizing the expected sum of collected rewards, the agent must carefully balance exploring to gather more information about the environment against exploiting its current knowledge to collect rewards. In this talk, we are interested in resolving this exploration-exploitation dilemma by injecting noise into the agent’s decision-making process in such a way that the dependence of the regret on the dimensions of the state and action spaces is minimised. We also review some recent approaches to dimension reduction in RL.
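To give a rough, concrete feel for what "injecting noise into the decision-making process" can mean, the sketch below shows a tabular Q-learning agent that explores by acting greedily with respect to a randomly perturbed copy of its value estimates, rather than by forcing random actions. This is only an illustrative sketch, not the specific algorithm or analysis presented in the talk; the toy chain environment, the noise scale sigma, and all other hyperparameters are assumptions made for the example.

```python
# Illustrative sketch only: noise-injected exploration in tabular Q-learning.
# The environment and hyperparameters (n_states, n_actions, sigma, ...) are
# assumptions for illustration, not part of the talk's method.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 10, 2           # toy chain environment
gamma, alpha, sigma = 0.95, 0.1, 0.5  # discount, learning rate, noise scale

Q = np.zeros((n_states, n_actions))


def step(state, action):
    """Toy unknown environment: action 1 moves right, action 0 moves left;
    reward 1 is collected only at the right-most state."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward


for episode in range(200):
    state = 0
    for t in range(50):
        # Noise-injected action selection: act greedily w.r.t. a perturbed
        # copy of the value estimates, so exploration comes from randomness
        # in the decision rule itself.
        noisy_q = Q[state] + sigma * rng.standard_normal(n_actions)
        action = int(np.argmax(noisy_q))

        next_state, reward = step(state, action)

        # Standard Q-learning update on the unperturbed estimates.
        td_target = reward + gamma * Q[next_state].max()
        Q[state, action] += alpha * (td_target - Q[state, action])

        state = next_state

print("Greedy policy per state:", Q.argmax(axis=1))
```

In this sketch the noise plays the role that epsilon-greedy dithering usually plays: states whose values are still uncertain can win the perturbed comparison and get visited, while the update rule itself is left unchanged.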