An Interactive Introduction to Model-Agnostic Meta-Learning

Exploring the world of model-agnostic meta-learning and its variants.

This page is part of a multi-part series on Model-Agnostic Meta-Learning. If you are already familiar with the topic, use the menu on the right side to jump straight to the part that interests you. Otherwise, we suggest you start at the beginning.

Where to go from here

If you have reached this part, you have already worked through a lot of material! How about taking a break?

If your thirst for knowledge is not yet satisfied and you want to dig even deeper into the field of model-agnostic meta-learning, you may use the list below as a starting point for further exploration (it is by no means intended to be exhaustive or to represent the field of model-agnostic meta-learning adequately).

If you are more of a hands-on person, we suggest you take a look at the four methods we presented and play around with them. We created a git repository to get you started:

pupuis/maml-tf2

Task-Robust Model-Agnostic Meta-Learning

Liam Collins, Aryan Mokhtari, and Sanjay Shakkottai

This paper focuses on making MAML more robust to the task distribution. The meta-objective is reformulated as a min-max problem: instead of optimizing for the average task, the model is optimized for the task it performs worst on.
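In standard MAML notation (a sketch of the idea; the paper's exact formulation and notation differ in detail), the usual expected post-adaptation loss is replaced by the worst-case post-adaptation loss over the training tasks:

$$
\min_\theta \, \mathbb{E}_{\mathcal{T}_i \sim p(\mathcal{T})}\!\left[\mathcal{L}_{\mathcal{T}_i}\!\big(\theta - \alpha \nabla_\theta \mathcal{L}_{\mathcal{T}_i}(\theta)\big)\right]
\;\;\longrightarrow\;\;
\min_\theta \, \max_i \, \mathcal{L}_{\mathcal{T}_i}\!\big(\theta - \alpha \nabla_\theta \mathcal{L}_{\mathcal{T}_i}(\theta)\big)
$$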

NoRML: No-Reward Meta Learning

Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Jie Tan, and Chelsea Finn

While our introduction focuses primarily on classification and regression, this paper leverages MAML's greatest strength, its model-agnosticism, and applies the method to reinforcement learning.

B-SMALL: A Bayesian Neural Network approach to Sparse Model-Agnostic Meta-Learning

Anish Madan and Ranjitha Prasad

The authors propose a Bayesian neural-network-based MAML algorithm (B-SMALL) that reduces the model's parameter footprint and aims to decrease overfitting on the training tasks.

iTAML: An Incremental Task-Agnostic Meta-learning Approach

Jathushan Rajasegaran, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Mubarak Shah

The authors of this paper adapt model-agnostic meta-learning to a continual-learning setting. One key aspect of the method is keeping an exemplar memory with samples from old tasks to prevent catastrophic forgetting.
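To make the exemplar-memory idea concrete, here is a minimal Python sketch (our illustration; iTAML's actual memory management and sampling strategy are more involved): a small, class-balanced buffer of old-task samples that can be mixed into the current task's batches.

```python
# Minimal exemplar-memory sketch (illustrative only, not iTAML's code):
# keep at most `per_class` samples per class and replay them alongside
# the current task's data to counteract catastrophic forgetting.
import random
from collections import defaultdict

class ExemplarMemory:
    def __init__(self, per_class=20):
        self.per_class = per_class
        self.buffer = defaultdict(list)  # class label -> stored samples

    def add(self, samples):
        """Store new (x, y) pairs, keeping at most `per_class` per label."""
        for x, y in samples:
            if len(self.buffer[y]) < self.per_class:
                self.buffer[y].append((x, y))

    def replay_batch(self, k):
        """Draw up to k old samples to mix into the current batch."""
        pool = [s for stored in self.buffer.values() for s in stored]
        return random.sample(pool, min(k, len(pool)))

mem = ExemplarMemory(per_class=2)
mem.add([("img_0", 0), ("img_1", 0), ("img_2", 1)])
print(mem.replay_batch(2))  # two samples drawn from old tasks
```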

Meta-Learning with MAML on Trees

Jezabel R. Garcia, Federica Freddi, Feng-Ting Liao, Jamie McGowan, Tim Nieradzik, Da-shan Shiu, Ye Tian, and Alberto Bernacchia

Meta-learning typically relies on the assumption that tasks drawn from the task distribution are sufficiently similar. In this approach (called TreeMAML), tasks are clustered in a tree structure based on task similarity, and the gradients are aggregated hierarchically.
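The following is a minimal, self-contained sketch of the aggregation step (our illustration; TreeMAML clusters tasks and aggregates gradients at every tree level during adaptation, which this toy example glosses over): per-task gradients are averaged bottom-up along the task tree, so similar tasks end up sharing most of their update.

```python
# Toy sketch of hierarchical gradient aggregation (not the authors' code):
# leaves hold per-task gradients; each internal node averages the
# aggregated gradients of its sub-clusters.
import numpy as np

class Node:
    def __init__(self, children=None, task_grad=None):
        self.children = children or []  # internal node: sub-clusters
        self.task_grad = task_grad      # leaf node: one task's gradient

def aggregate(node):
    """Return the averaged gradient of the subtree rooted at `node`."""
    if not node.children:               # leaf: a single task's gradient
        return node.task_grad
    return np.mean([aggregate(c) for c in node.children], axis=0)

# Two clusters of similar tasks (2-d gradients for illustration)
tree = Node(children=[
    Node(children=[Node(task_grad=np.array([1.0, 0.0])),
                   Node(task_grad=np.array([0.8, 0.2]))]),
    Node(children=[Node(task_grad=np.array([0.0, 1.0]))]),
])

meta_grad = aggregate(tree)  # gradient used for the (meta-)update
print(meta_grad)             # -> [0.45 0.55]
```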

Meta-Learning of Neural Architectures for Few-Shot Learning

Thomas Elsken, Benedikt Staffler, Jan Hendrik Metzen, and Frank Hutter

Elsken et al. propose a method that, in a meta-learning setting, trains not only the weights but also performs differentiable network architecture search (based on a method called DARTS).
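For intuition, here is a minimal sketch of the continuous relaxation that DARTS is built on (our illustration, not Elsken et al.'s method in full): the discrete choice between candidate operations is replaced by a softmax-weighted mixture, which makes the architecture parameters differentiable and hence, in principle, learnable alongside the weights.

```python
# DARTS-style continuous relaxation (illustrative sketch): a discrete
# choice among candidate operations becomes a softmax-weighted mixture,
# so the architecture parameters `alpha` receive gradients.
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Candidate operations on one edge of a network cell
ops = [lambda x: x,                   # identity / skip connection
       lambda x: np.maximum(x, 0.0),  # ReLU (stand-in for a conv op)
       lambda x: np.zeros_like(x)]    # "zero" op, effectively pruning the edge

alpha = np.array([0.5, 1.5, -1.0])    # learnable architecture parameters

def mixed_op(x):
    """Softmax-weighted mixture over all candidate operations."""
    weights = softmax(alpha)
    return sum(w * op(x) for w, op in zip(weights, ops))

print(mixed_op(np.array([-1.0, 2.0])))
```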

Author Contributions

Luis Müller implemented the visualizations of MAML, FOMAML, and Reptile, as well as the Comparison. Max Ploner created the visualization of iMAML and the Svelte elements and components. Both wrote the introduction together and contributed most of the text of the other parts. Thomas Goerttler came up with the idea and sketched out the project. He also wrote parts of the manuscript and helped with finalizing the document. Klaus Obermayer provided feedback on the project.

† equal contributors