Lectures

Lecture 1: Model-based reinforcement learning I

The field of deep reinforcement learning combines deep learning techniques with reinforcement learning algorithms to develop intelligent agents that can tackle a wide variety of challenging tasks. Recent years have seen the development of a broad class of deep reinforcement learning agents which have been successfully employed in complex environments such as video games, board games and robotics. However, most current state-of-the-art agents employ methods belonging to the model-free class of RL algorithms. In this lecture we will look at a different class of RL agents: the model-based algorithms. These agents make use of an internal model of the world in order to optimise their acting policy. This talk will present different model-based approaches and consider their pros and cons in comparison to their model-free counterparts.
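
As a concrete (if toy) illustration of the model-based idea, the sketch below plans with a learned model by random shooting: sample action sequences, roll them out through the model, and execute the first action of the best sequence. Every component here (the "learned" model, the reward, the dimensions) is an invented stand-in for illustration, not any specific agent from the lecture:

    # Toy sketch of planning with a learned model (random-shooting MPC).
    import numpy as np

    rng = np.random.default_rng(0)

    def learned_model(state, action):
        # Placeholder for a learned transition model s' = f(s, a);
        # in practice a neural network trained on collected experience.
        return state + 0.1 * action

    def reward(state):
        return -np.sum(state ** 2)   # toy objective: stay near the origin

    def plan(state, horizon=10, n_candidates=256):
        # Sample candidate action sequences, roll each out through the
        # model, and return the first action of the best sequence.
        best_return, best_action = -np.inf, None
        for _ in range(n_candidates):
            actions = rng.uniform(-1.0, 1.0, size=(horizon, state.shape[0]))
            s, total = state.copy(), 0.0
            for a in actions:
                s = learned_model(s, a)
                total += reward(s)
            if total > best_return:
                best_return, best_action = total, actions[0]
        return best_action

    print(plan(np.array([1.0, -0.5])))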

Lecture 2: Model-based reinforcement learning II

In this lecture we will dive deeper into the different approaches to model-based reinforcement learning. We will examine different modes of model learning (pixel-based, implicit, stochastic models, etc.) and how the learned models can be used for planning, for augmenting real experience, or as auxiliary tasks.

Lecture 3: AlphaZero: A general model-based planning reinforcement learning algorithm for board games

Board games have been widely used in the field of artificial intelligence as test-beds for the development of new algorithms. Chess and Go are among the most studied games in AI, as they represent a class of complicated and self-contained environments ideal for AI research. Previous attempts to achieve super-human performance in these domains led to the development of highly specialised and domain-specific methods. In this lecture we will examine AlphaZero, a reinforcement learning algorithm which has mastered the board games of Go, Chess and Shogi, achieving state-of-the-art performance without requiring any domain-specific adaptations or human data. Unlike AlphaGo, AlphaZero is trained from scratch purely through self-play, without using any prior human knowledge.



Lecture 1: Introduction to Generative Adversarial Networks

I will cover the basic theory and practice of generative adversarial networks (GANs). These are a powerful and currently popular class of generative models. In these models, a generator network is trained to map random noise to an output distribution that is indistinguishable from the distribution of natural photos, according to an adversary that tries to distinguish between these two distributions. GANs can synthesize remarkably realistic photos, sounds, and other kinds of data, but the realism often comes at the cost of limited diversity in the samples. I will cover how this can be caused by “mode collapse” and some ways to ameliorate it. I will also talk about a variety of architectures and objectives used in current GAN practice.
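
A minimal sketch of this two-player game, assuming PyTorch; the architectures, data and hyperparameters are illustrative only:

    import torch
    import torch.nn as nn

    noise_dim, data_dim = 16, 2
    G = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    for step in range(1000):
        real = torch.randn(64, data_dim) * 0.5 + 2.0   # stand-in for real data
        fake = G(torch.randn(64, noise_dim))           # generator maps noise to samples

        # Discriminator step: tell real samples apart from generated ones.
        d_loss = (bce(D(real), torch.ones(64, 1)) +
                  bce(D(fake.detach()), torch.zeros(64, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: fool the discriminator into labelling fakes as real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()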

Lecture 2: Conditional GANs and Data Prediction

GANs can hallucinate realistic photos, but what if we don’t want to just make up data from scratch? More often, we are given some data and wish to make predictions based on it; for example, given the current view of the world, predict what the future will look like. This is an application for conditional GANs, which condition on observed data and make predictions about unobserved data. Unlike most traditional predictors, conditional GANs are adept at dealing with both high-dimensional input observations and high-dimensional output predictions. I will show a number of applications in vision and robotics.
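
For readers who want the mechanics, the objective typically takes the following generic form (a standard conditional-GAN loss, not any single paper's exact formulation), where x is the observed input, y the target output and z random noise:

    \min_G \max_D \;
      \mathbb{E}_{x,y}\big[\log D(x, y)\big]
      + \mathbb{E}_{x,z}\big[\log\big(1 - D(x, G(x, z))\big)\big]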

Lecture 3: GANs for Domain Translation

GANs can be understood as a tool for mapping one data distribution into another. In this third lecture on GANs, we will adopt this perspective and see how it leads to powerful applications in domain adaptation and translation. The idea is to learn a mapping from a source domain to a target domain such that the output is distributed identically to the target domain. I will show how this can be used to translate between various visual styles (e.g., turn photos into Monet paintings), to make predictions from very limited supervision (e.g., in medical imaging, where supervision is expensive), and to achieve “sim2real” transfer, where we train a robotic policy on simulated data but apply it in the real world.
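
When paired examples across the two domains are unavailable, a widely used additional constraint is cycle consistency (in the style of CycleGAN): translating into the target domain and back should recover the input. A sketch of that term, with translators G: X -> Y and F: Y -> X:

    \mathcal{L}_{\mathrm{cyc}}(G, F) =
      \mathbb{E}_{x \sim p_X}\big[\lVert F(G(x)) - x \rVert_1\big]
      + \mathbb{E}_{y \sim p_Y}\big[\lVert G(F(y)) - y \rVert_1\big]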



Lecture 1: Learning in the factory and in the wild: designing robot systems that learn

We examine the general problem of designing robot systems from a decision-theoretic perspective that makes it clear when an individual robot needs to learn while it is actually performing its tasks (in the wild) and when the AI engineers need to use learning methods to design a good robot learner (in the factory). We will examine several learning paradigms from this perspective and talk about methods for meta-learning, including modular meta-learning and graph element networks.

Lecture 2: Learning factored transition models for planning in complex hybrid spaces

Many robotics problem distributions are better addressed by learning models and using them for online reasoning (an approach also known as model-predictive control) than by learning a policy or value function. We begin by discussing this claim, and then study the forms of models that are most appropriate for different types of planning problems. We then examine two new approaches for learning models that are appropriate for planning in complex hybrid (mixed discrete and continuous) problems, such as robot task and motion planning: one is based on Gaussian-process active learning, and the other on an extension of graph neural networks.

Lecture 3: Learning to speed up planning in complex hybrid spaces

An important role for learning is to speed up search: this is the critical role that learning plays in methods such as AlphaZero. We will examine several different mechanisms that can be used (including learning heuristic or static evaluation functions, and learning to bias the action-sampling distribution), with a focus on problems that require choosing actions from a continuous or hybrid space.



Lecture 1: Latest advances in enhancing Interpretability in Data Science by means of Mathematical Optimization (Part 1)

Data Science aims to develop models that extract knowledge from complex data and represent it to aid data-driven decision making. Mathematical Optimization has played a crucial role across the three main pillars of Data Science, namely Supervised Learning, Unsupervised Learning and Information Visualization. For instance, Quadratic Programming is used in Support Vector Machines, a Supervised Learning tool; Mixed-Integer Programming is used in Clustering, an Unsupervised Learning task; and Global Optimization is used in Multidimensional Scaling, an Information Visualization tool.
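
For concreteness, the quadratic program behind the soft-margin, kernelised SVM mentioned above is its well-known dual, over multipliers alpha with labels y_i, kernel K and regularisation constant C:

    \max_{\alpha} \; \sum_{i=1}^{n} \alpha_i
      - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n}
        \alpha_i \alpha_j \, y_i y_j \, K(x_i, x_j)
    \quad \text{s.t.} \quad
      \sum_{i=1}^{n} \alpha_i y_i = 0, \qquad 0 \le \alpha_i \le C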

Data Science models should strike a balance between accuracy and interpretability. Interpretability is desirable, for instance, in medical diagnosis; it is required by regulators for models aiding, for instance, credit scoring; and since 2018 the EU has extended this requirement by imposing the so-called right to explanation. In the first lecture, we show that Mathematical Optimization is the natural tool to model the trade-off between accuracy and interpretability.

Lecture 2: Latest advances in enhancing Interpretability in Data Science by means of Mathematical Optimization (Part 2)

In the second lecture, we zoom in and talk about the optimization of classification trees, to enhance their accuracy without harming interpretability.

Lecture 3: Latest advances in enhancing Interpretability in Data Science by means of Mathematical Optimization (Part 3)

Finally, in the third lecture we discuss black-box methods such as support vector machines and how we can enhance their interpretability.



Lecture 1: Introduction to Deep Learning, Neural Networks & Convolutional Neural Networks
Lecture 2: Deep Unsupervised Learning
Lecture 3: Recent Advances and New Challenges for Deep Learning
Lecture 4: Deep Learning for Natural Language Processing/Reading Comprehension


Lecture 1: Introduction to Automated Machine Learning (AutoML)

Automated machine learning is the science of building machine learning models in a data-driven, efficient, and objective way. It replaces manual trial-and-error with automated, guided processes. In the first lecture, we will examine the most prominent problem in automated machine learning: hyperparameter optimization. We will discuss model-free black-box optimization methods, Bayesian optimization, as well as evolutionary and other techniques. We will also cover multi-fidelity techniques, such as multi-armed bandits, to speed up the optimization of machine learning models and pipelines.
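
A minimal example of model-free black-box search, assuming scikit-learn and SciPy are available; the dataset and the search space are invented for illustration:

    from scipy.stats import loguniform
    from sklearn.datasets import make_classification
    from sklearn.model_selection import RandomizedSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, random_state=0)

    search = RandomizedSearchCV(
        SVC(),
        param_distributions={"C": loguniform(1e-3, 1e3),
                             "gamma": loguniform(1e-4, 1e1)},
        n_iter=25,          # budget: 25 sampled configurations
        cv=5,               # each scored by 5-fold cross-validation
        random_state=0,
    )
    search.fit(X, y)
    print(search.best_params_, search.best_score_)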

Lecture 2: Meta-learning

When we learn new skills, we (humans) rarely start from scratch. We start from skills learned earlier in related tasks, and reuse experience accumulated over time. This allows us to learn faster, using much less data and trial-and-error. Learning how to build machine learning models based on prior experience is called meta-learning, or learning to learn. We will cover the spectrum from transferring knowledge about machine learning methods in general, via reasoning across tasks, to transferring previously trained machine learning models. We will also see practical tips on how to do meta-learning with OpenML.
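
One simple form of this idea, sketched below on fabricated numbers: rank candidate configurations by their average performance on previously seen tasks (a table that could, for example, be harvested from prior experiments shared on OpenML) and try the most promising ones first on a new task:

    import numpy as np

    # Rows: prior tasks; columns: candidate configurations (e.g. hyperparameter
    # settings); entries: accuracy observed on that task (fabricated here).
    history = np.array([[0.71, 0.83, 0.65, 0.90],
                        [0.68, 0.80, 0.70, 0.88],
                        [0.75, 0.78, 0.66, 0.85]])

    # Rank configurations by mean performance across prior tasks, then
    # evaluate them on the new task in that order.
    ranking = np.argsort(history.mean(axis=0))[::-1]
    print("try configurations in this order:", ranking)   # -> [3 1 0 2]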

Lecture 3: AutoML and meta-learning for neural networks

Finally, we focus on the automated construction of neural networks. We will survey existing approaches for neural architecture search, including differentiable (gradient-based) techniques, Bayesian optimization, evolutionary techniques, and reinforcement learning. We will also revisit meta-learning in the context of neural networks, to transfer information about previously tried model architectures to new problems.



Lecture 1: Advanced topics: Graph Neural Networks

Recurrent Neural Networks have been the model of choice for processing sequences, but dealing with other structures such as graphs or sets requires models that preserve the invariances present in those structures, and presents some unique challenges. Transformers, proposed more recently, have a strong connection with the Graph Neural Network framework, which structures the computation of a neural network as a graph. In this lecture, I’ll discuss these recent advances and how they connect with one another. The lecture will thus focus on state-of-the-art architectures.
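
A single message-passing layer, the basic building block of this framework, can be sketched in a few lines of numpy; each node aggregates its neighbours' features and applies a shared transformation (sizes and weights are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[0, 1, 1],      # adjacency matrix of a 3-node graph
                  [1, 0, 0],
                  [1, 0, 0]], dtype=float)
    H = rng.normal(size=(3, 4))   # node features (3 nodes, 4 features each)
    W = rng.normal(size=(4, 4))   # weight matrix shared across nodes

    A_hat = A + np.eye(3)                      # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    H_next = np.maximum((A_hat / deg) @ H @ W, 0.0)   # mean-aggregate, transform, ReLU
    print(H_next.shape)   # (3, 4): updated node representations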

Lecture 2: Reinforcement and Imitation Learning at Scale: AlphaStar and Beyond

Deep Reinforcement Learning has emerged as a sub-field of machine learning which extends the capabilities of Deep Learning systems beyond supervised and unsupervised learning. In the last few years, we have witnessed advances in domains in which complicated decisions must be carried out by an “agent” interacting with an “environment”. In this talk, I will summarise the state of deep RL, highlighting successes in StarCraft achieved thanks in part to imitation learning and scaled-up self-play. The lecture will focus on imitation learning and learning at scale, using AlphaStar as a motivating example.

Lecture 3: Advanced topics: Meta Learning

Learning from a few examples is challenging for most deep learning systems. In this lecture, I will describe recent efforts to advance this regime through forms of “meta-learning”. In particular, I will contrast the paradigms dominating this space: optimization-based, in which an optimization procedure is carried out at inference time to adapt to the few examples; model-based, in which a neural network receives the few examples as input and must adapt its predictions accordingly; and metric-based, which uses a strong inductive bias based on measuring distances between exemplars. The lecture will introduce the meta-learning problem setup, together with an overview of the state of this exciting new field.
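
The metric-based flavour is easy to sketch: compute a prototype per class from the few support examples and classify a query by its nearest prototype, in the spirit of prototypical networks. All data below is synthetic and stands in for learned embeddings:

    import numpy as np

    rng = np.random.default_rng(0)
    # 2 classes, 5 support examples each, embedded in 8 dimensions.
    support = {c: rng.normal(loc=c * 2.0, size=(5, 8)) for c in (0, 1)}
    prototypes = {c: xs.mean(axis=0) for c, xs in support.items()}

    query = rng.normal(loc=2.0, size=8)       # drawn near class 1
    pred = min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))
    print("predicted class:", pred)           # nearest prototype wins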