van der Schaar Lab

Innovative uses of Large Language Models for Reality-centric AI 

In the past year, we have seen ground-breaking advances in Large Language Models (LLMs) across a variety of natural language processing tasks, from question answering to code copilots, code generation, and many other applications.

We are most interested in how we can use these advances in LLMs to further the reality-centric AI agenda that our lab has put forward. The van der Schaar lab, along with the wider ML community, has produced and presented increasingly powerful ML tools. However, ML remains brittle when we apply it to complex real-world tasks. How do we capitalise on the power of LLMs to make ML more robust, more trustworthy, and more easily available to anyone, including non-coders like clinicians or people who lack sufficient data? Most importantly, how do we do it in a safe, reliable, and human-empowering way?

The next step would be to utilise LLMs to give ML the ability to transfer very powerful human thoughts into machine thoughts – endowing machines with some of our thinking so that they can augment and transcend the mere use of data. This unlocks the ability to take important concepts like meta-learning to new levels. Vice versa, how can we use LLMs to translate ML know-how into understandable human intelligence that empowers humans?

These are all very important problems and questions, with potentially game-changing solutions if we are able to wield the power of large language models effectively. As we step into a new and exciting era for machine learning, we are using this page not only to discuss advances made by our lab, but also to encourage the community to think creatively about using LLMs in support of the reality-centric AI agenda – empowering humanity rather than replacing or diminishing it – and to make progress in important fields such as medicine, education, and climate change.


L2MAC: Large Language Model Automatic Computer for Unbounded Code Generation (Samuel Holt, Max Ruiz Luyten, Mihaela van der Schaar)

Large Language Models to Enhance Bayesian Optimization (Tennison Liu, Nicolás Astorga, Nabeel Seedat, Mihaela van der Schaar)

Redefining Digital Health Interfaces with Large Language Models (Fergus Imrie, Paulius Rauba, Mihaela van der Schaar)

Interpretable Medical Diagnostics with Structured Data Extraction by Large Language Models – 2023 (A. Bisercic, M. Nikolic, M. van der Schaar, B. Delibasic, P. Lio, A. Petrovic)