In this course, you will learn about Decision Making under Uncertainty: how to model such problems and how to solve them.

The material will revolve around three of the major frameworks used in this domain:

  1. Markov Decision Processes (MDPs) are models of environments in which actions have uncertain effects. You can use them to represent a holiday trip (where the uncertainty includes weather conditions and travel durations), activities in a kitchen (where orders arrive unpredictably), traffic light control, robots, even two-player games. Importantly, because the outcome of your actions is (partially) unknown, the solution to the planning problem is no longer a plan (= a sequence of actions) but a policy (= a function that tells you what to do in any situation); the first sketch after this list makes this distinction concrete.

  2. Partially Observable MDPs (POMDPs) are an extension of MDPs in which the current state is only partially known. This matters when the input information (in particular, sensor readings) is noisy. In this setting, the optimal policy will involve gathering information about the current situation; the second sketch after this list shows the underlying belief update.

  3. Reinforcement Learning (RL) is a framework in which the agent does not know some of the parameters of the environment (which is still assumed to be an MDP): typically the transition function (the probabilistic effects of the actions) and the reward function (what the agent gains by performing a given action in a given situation). RL requires interacting with the environment, and trading off exploration (= trying new things) against exploitation (= doing what has worked best so far); the last sketch after this list illustrates this trade-off.
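
To make the plan-versus-policy distinction concrete, here is a minimal value-iteration sketch in Python. The two-state "holiday" MDP below, with its states, actions, probabilities, and rewards, is invented purely for illustration and is not part of the course material.

    GAMMA = 0.9  # discount factor

    # P[state][action] -> list of (probability, next_state, reward)
    P = {
        "sunny": {
            "hike":   [(0.8, "sunny", 10), (0.2, "rainy", 0)],
            "museum": [(1.0, "sunny", 4)],
        },
        "rainy": {
            "hike":   [(0.3, "sunny", 10), (0.7, "rainy", -5)],
            "museum": [(1.0, "rainy", 4)],
        },
    }

    def q_values(s, V):
        # Expected value of each action in state s under value estimate V.
        return {a: sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[s][a])
                for a in P[s]}

    def value_iteration(eps=1e-6):
        V = {s: 0.0 for s in P}
        while True:
            delta = 0.0
            for s in P:
                best = max(q_values(s, V).values())
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < eps:
                break
        # The solution is a policy: an action for every state, not a
        # fixed sequence of actions.
        policy = {}
        for s in P:
            q = q_values(s, V)
            policy[s] = max(q, key=q.get)
        return policy

    print(value_iteration())  # {'sunny': 'hike', 'rainy': 'museum'}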
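
The core operation behind POMDP solving is maintaining a belief, i.e. a probability distribution over the possible current states, updated from noisy observations. Below is a minimal Bayesian belief-update sketch on a version of the classic "tiger" toy problem; all names and probabilities are assumptions chosen for illustration.

    def belief_update(belief, T, O, action, observation):
        # Bayes rule: b'(s') is proportional to
        # O(obs | s', a) * sum_s T(s' | s, a) * b(s)
        new_b = {}
        for s2 in belief:
            predicted = sum(T[(s, action)][s2] * belief[s] for s in belief)
            new_b[s2] = O[(s2, action)][observation] * predicted
        total = sum(new_b.values())
        return {s: v / total for s, v in new_b.items()}

    # Listening leaves the state unchanged but yields a noisy observation.
    T = {("tiger-left", "listen"):  {"tiger-left": 1.0, "tiger-right": 0.0},
         ("tiger-right", "listen"): {"tiger-left": 0.0, "tiger-right": 1.0}}
    O = {("tiger-left", "listen"):  {"hear-left": 0.85, "hear-right": 0.15},
         ("tiger-right", "listen"): {"hear-left": 0.15, "hear-right": 0.85}}

    b = {"tiger-left": 0.5, "tiger-right": 0.5}
    b = belief_update(b, T, O, "listen", "hear-left")
    print(b)  # belief shifts towards tiger-left (about 0.85)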
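
Finally, a minimal tabular Q-learning sketch with an epsilon-greedy rule, illustrating the exploration/exploitation trade-off. The two-state environment and all parameter values are again invented for illustration; the agent never reads the transition or reward model directly, it only samples them through interaction.

    import random

    random.seed(0)
    ACTIONS = ("risky", "safe")
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

    def step(state, action):
        # Hypothetical environment, unknown to the agent: it only sees
        # the sampled next state and reward.
        if state == "A":
            if action == "risky":
                return ("B", 0) if random.random() < 0.5 else ("A", -1)
            return ("A", 1)            # "safe" always pays a small reward
        return ("A", 8)                # state B: collect a bonus, back to A

    Q = {(s, a): 0.0 for s in ("A", "B") for a in ACTIONS}
    state = "A"
    for _ in range(20_000):
        # Exploration vs exploitation: usually act greedily on current
        # estimates, but occasionally try a random action.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

    print({k: round(v, 1) for k, v in Q.items()})  # learned Q-value estimates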

You will study these frameworks, algorithms for solving the associated planning problems, and applications of these approaches.

Information about the Course

Once you are registered, most course information will be available on the Wattle page.
