van der Schaar Lab

Uncertainty quantification

Please note: this page is a work in progress. Treat it as a “stub” containing only basic information, rather than a full-fledged summary of our lab’s vision for machine learning for uncertainty quantification and our research to date.

This page is authored and maintained by Mihaela van der Schaar and Nick Maxfield.


The successful application of machine learning models to real-world prediction problems requires us to limit and quantify the uncertainty in model predictions by providing valid and accurate prediction intervals. Simply put: in addition to making a prediction, we need to know how confident we can be in that prediction. This is particularly crucial in high-stakes domains such as healthcare, where machine learning outputs inform critical decision-making.

While machine learning models may achieve high predictive accuracy across a broad spectrum of tasks, rigorously quantifying their predictive uncertainty remains challenging. Usable estimates of predictive uncertainty should cover the true prediction targets with high probability, while discriminating between high- and low-confidence predictions.
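To make the notion of coverage concrete, below is a minimal sketch of split conformal prediction, a model-agnostic, post-hoc technique in the same family as several of the methods listed further down this page. It wraps any fitted regressor and produces prediction intervals with a finite-sample marginal coverage guarantee. The synthetic data, the random forest base model, and the 90% target level are illustrative assumptions, not the lab’s own implementation.

```python
# Minimal sketch of split conformal prediction, assuming scikit-learn and
# synthetic data; it illustrates the coverage idea, not the lab's methods.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = X[:, 0] + np.sin(X[:, 1]) + 0.3 * rng.normal(size=2000)

# Fit on one half of the data, calibrate residuals on the other half.
X_fit, X_cal, y_fit, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_fit, y_fit)

alpha = 0.1  # target miscoverage: intervals should contain ~90% of true targets
residuals = np.abs(y_cal - model.predict(X_cal))
# Conformal quantile of the calibration residuals, with the finite-sample
# (n + 1) correction that yields the marginal coverage guarantee.
n = len(residuals)
q = np.quantile(residuals, min(1.0, np.ceil((1 - alpha) * (n + 1)) / n),
                method="higher")

# Intervals for new points are [prediction - q, prediction + q].
X_new = rng.normal(size=(500, 5))
y_new = X_new[:, 0] + np.sin(X_new[:, 1]) + 0.3 * rng.normal(size=500)
pred = model.predict(X_new)
covered = (y_new >= pred - q) & (y_new <= pred + q)
print(f"empirical coverage: {covered.mean():.3f} (target: {1 - alpha:.2f})")
```

Note that the guarantee here is marginal: intervals of constant width cover the true targets with the right frequency on average, but cannot discriminate between easy and hard inputs. Closing that gap, and extending validity to settings such as time series, is precisely what several of the methods listed below are designed to do.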

Existing approaches to uncertainty quantification suffer from a range of drawbacks: many are computationally prohibitive, difficult to calibrate, or incur high sample complexity, and some require major alterations to the model architecture and training procedure. Additionally, most uncertainty quantification approaches are poorly suited to time-series settings.

Bayesian neural networks, which are frequently used for uncertainty quantification, tend to exhibit over-confidence in predictions on target data whose feature distribution differs from that of the training data. Furthermore, they do not provide frequentist coverage guarantees, cannot be applied post hoc, and rely on approximate posterior inference that undermines discriminative accuracy.

In addition to ensuring that uncertainty quantification is fully incorporated into our own AI and machine learning tools for healthcare, our lab treats the problem of uncertainty quantification itself as an important research pillar in its own right. To that end, we have developed a range of robust and powerful approaches for the healthcare setting and beyond.

As detailed below, our methods (most of which were introduced in papers published at top-tier AI and machine learning conferences) address the shortcomings of existing approaches to uncertainty quantification and significantly outperform benchmark methods.

Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift

Alex Chan, Ahmed M. Alaa, Zhaozhi Qian, Mihaela van der Schaar

ICML 2020

Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions

Ahmed M. Alaa, Mihaela van der Schaar

ICML 2020

Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions

Ahmed M. Alaa, Mihaela van der Schaar

ICML 2020

Robust Recursive Partitioning for Heterogeneous Treatment Effects with Uncertainty Quantification

Hyun-Suk Lee, Yao Zhang, William R. Zame, Cong Shen, Jang-Won Lee, Mihaela van der Schaar

NeurIPS 2020

Conformal Time-Series Forecasting

Kamilė Stankevičiūtė, Ahmed M. Alaa, Mihaela van der Schaar

NeurIPS 2021

AutoCP: Automated Pipelines for Accurate Prediction Intervals

Yao Zhang, William R. Zame, Mihaela van der Schaar

Submitted for publication, 2020

Improving Adaptive Conformal Prediction Using Self-Supervised Learning

Nabeel Seedat*, Alan Jeffares*, Fergus Imrie, Mihaela van der Schaar

AISTATS 2023

Learn more and get involved

Our research related to uncertainty quantification is closely linked to a number of our lab’s other core areas of focus. If you’re interested in branching out, we’d recommend reviewing our summaries on interpretable machine learning and time series in healthcare.

We would also encourage you to stay up-to-date with ongoing developments in this and other areas of machine learning for healthcare by signing up to take part in one of our two streams of online engagement sessions.

If you are a practicing clinician, please sign up for Revolutionizing Healthcare, which is a forum for members of the clinical community to share ideas and discuss topics that will define the future of machine learning in healthcare (no machine learning experience required).

If you are a machine learning student, you can join our Inspiration Exchange engagement sessions, in which we introduce and discuss new ideas, methods, approaches, and techniques in machine learning for healthcare.

A full list of our papers on uncertainty quantification and related topics can be found here.