This event will take place online on December 13 at 13:30 GMT.
Recent years have seen rapid progress in meta-learning methods, which transfer knowledge across tasks and domains to efficiently learn new tasks, optimize the learning process itself, and even generate new learning methods from scratch. Meta-learning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade: from learning classifiers, to learning representations, and finally to learning algorithms that themselves acquire representations, classifiers, and policies for acting in environments. In practice, meta-learning has been shown to yield new state-of-the-art automated machine learning methods, novel deep learning architectures, and substantially improved one-shot learning systems. Moreover, improving one's own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and neuroscience shows a strong connection between human reward learning and the growing sub-field of meta-reinforcement learning.
Some of the fundamental questions that this workshop aims to address are:
• How can we exploit our domain knowledge to guide the meta-learning process and make it more efficient?
• What are the meta-learning processes in nature (e.g., in humans), and how can we take inspiration from them?
• Which machine learning approaches are best suited for meta-learning, in which circumstances, and why?
• What principles can we learn from meta-learning to help us design the next generation of learning systems?