A Closer Look at Meta-Learning: Fast Adaptation of Deep Neural Networks

Contact: Arnout Devos

The ability to adapt quickly to new situations is a cornerstone of human intelligence. Artificial learning methods have been shown to be very effective for specific tasks, often surpassing human performance. However, by relying on standard training paradigms for supervised learning or reinforcement learning with deep neural networks, these methods still require large amounts of training data and long training times to adapt to a new task.

The area of machine learning concerned with learning and adapting from small amounts of data is called few-shot learning. A promising approach to few-shot learning is meta-learning. Meta-learning, also known as learning-to-learn, is a paradigm that exploits cross-task information and training experience to perform well on a new, unseen task [1].
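To make the idea concrete, the sketch below illustrates the first-order variant of the MAML algorithm from [1] on a toy 1-D regression problem (tasks are lines y = a·x with random slopes). This is a minimal illustration, not the project's method: the model (a single weight), the task distribution, and all hyperparameters are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, x, y):
    """Mean squared error of the linear model y_hat = w * x."""
    return np.mean((w * x - y) ** 2)

def loss_grad(w, x, y):
    """Gradient of the loss with respect to the single weight w."""
    return np.mean(2.0 * (w * x - y) * x)

def sample_task():
    """A task is 1-D regression y = a * x with a randomly drawn slope a."""
    a = rng.uniform(-2.0, 2.0)
    def data(n=10):
        x = rng.uniform(-1.0, 1.0, size=n)
        return x, a * x
    return data

w = 0.0                    # meta-initialization (a single weight for clarity)
alpha, beta = 0.5, 0.05    # inner-loop and outer-loop learning rates

for _ in range(1000):
    data = sample_task()
    x_s, y_s = data()      # support set: adapt on this
    x_q, y_q = data()      # query set: evaluate the adapted weight
    # inner loop: one SGD step from the meta-initialization
    w_adapted = w - alpha * loss_grad(w, x_s, y_s)
    # outer loop (first-order approximation): update the
    # meta-initialization using the gradient at the adapted weight
    w -= beta * loss_grad(w_adapted, x_q, y_q)
```

The full MAML algorithm differentiates through the inner update (requiring second derivatives); the first-order approximation above drops that term, which keeps the sketch short while preserving the core idea of optimizing an initialization for fast adaptation.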

The goals of this project are to:

  • Construct a unifying benchmark to fairly compare meta-learning algorithms for classification (similar to [2]) and regression.
  • Investigate algorithmic or theoretical improvements in meta-learning.
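Few-shot classification benchmarks such as [2] evaluate algorithms on N-way K-shot episodes: each episode samples N classes, K labeled support examples per class, and a query set to score. As a hedged sketch of what one piece of such a benchmark might look like, the episode sampler below is illustrative only; the function and parameter names (`sample_episode`, `n_way`, `k_shot`, `n_query`) are assumptions, not part of the project.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_episode(features, labels, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way K-shot episode: k_shot support examples and
    n_query query examples per class, with labels remapped to 0..n_way-1."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        idx = rng.permutation(np.flatnonzero(labels == c))
        support += [(features[i], episode_label) for i in idx[:k_shot]]
        query += [(features[i], episode_label)
                  for i in idx[k_shot:k_shot + n_query]]
    return support, query

# toy data: 10 classes, 20 examples each, 8-dimensional features
X = rng.normal(size=(200, 8))
y = np.repeat(np.arange(10), 20)
support, query = sample_episode(X, y)   # 5-way 1-shot episode
```

A benchmark would run many such episodes per algorithm and report mean query accuracy with confidence intervals, so that all methods see identical task distributions.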

Required skills:

  • Solid knowledge in Machine Learning (e.g. CS433)
  • Good knowledge of the Python programming language
  • (Recommended) Experience with TensorFlow or PyTorch

References:

[[1] Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks [ICML 2017]](http://proceedings.mlr.press/v70/finn17a.html)
[[2] A Closer Look at Few-shot Classification [ICLR 2019]](https://openreview.net/forum?id=HkxLXnAcFQ)