
van der Schaar Lab at ICLR 2020: two papers and a keynote

The van der Schaar Lab is taking an active role in this year’s International Conference on Learning Representations (ICLR), the world’s largest deep learning event. The lab’s work will be shared prominently with participants, with Mihaela van der Schaar delivering a keynote and two papers by Ph.D. students selected for the conference.

Keynote by Mihaela van der Schaar

On Wednesday, April 29 at 07:00 EST (12:00 BST/19:00 CST), Mihaela will deliver a keynote entitled “Machine learning: Changing the future of healthcare.” In her keynote, she will share the van der Schaar Lab’s vision for machine learning in medicine, and describe how medicine presents unique challenges and new opportunities that are not generally found in other areas of machine learning. Mihaela will also give an overview of some of the van der Schaar Lab’s key recent areas of work, including i) automating the design of clinical predictive analytics; ii) interpretability and explainability; iii) dynamic forecasting; and iv) estimating individualized treatment effects.

Title slide for Mihaela van der Schaar’s keynote

Spotlight presentation by Ioana Bica

Ph.D. student Ioana Bica will give a spotlight presentation on a paper entitled “Estimating Counterfactual Treatment Outcomes over Time through Adversarially Balanced Representations.” The paper was co-authored with Ahmed Alaa, James Jordon and Mihaela van der Schaar. The full paper can be found here. Ioana’s LinkedIn page is here.


Identifying when to give treatments to patients and how to select among multiple treatments over time are important medical problems with a few existing solutions. In this paper, we introduce the Counterfactual Recurrent Network (CRN), a novel sequence-to-sequence model that leverages the increasingly available patient observational data to estimate treatment effects over time and answer such medical questions.

To handle the bias from time-varying confounders (covariates that affect the treatment assignment policy in the observational data), CRN uses domain adversarial training to build balancing representations of the patient history. At each timestep, CRN constructs a treatment-invariant representation which removes the association between patient history and treatment assignments and thus can be reliably used for making counterfactual predictions.

On a simulated model of tumour growth, with varying degrees of time-dependent confounding, we show that our model achieves lower error in estimating counterfactuals and in choosing the correct treatment and timing of treatment than current state-of-the-art methods.
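To give a rough feel for the adversarial balancing idea described above, the sketch below trains a toy linear model on synthetic one-step data. It is not the CRN itself (which is a sequence-to-sequence model over patient histories) and the data, dimensions, and learning rates are all illustrative assumptions: a shared representation is updated to keep the outcome predictable while *reversing* the gradient of a treatment classifier, so that treatment assignment becomes hard to predict from the representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observational" data: x[0] drives treatment assignment (the
# association the adversary should remove), x[1] drives the outcome.
n = 200
X = rng.normal(size=(n, 2))
a = (X[:, 0] > 0).astype(float)           # treatment indicator
y = X[:, 1] + 0.1 * rng.normal(size=n)    # outcome

# Linear stand-ins for the components: representation W,
# outcome head v, treatment classifier (adversary) u.
W = 0.1 * rng.normal(size=(2, 2))
v = np.zeros(2)
u = np.zeros(2)
lr, lam = 0.05, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

mse_before = np.mean((X @ W.T @ v - y) ** 2)

for _ in range(2000):
    R = X @ W.T                  # representation of each "history"
    err_y = R @ v - y            # outcome residuals (squared-error loss)
    p = sigmoid(R @ u)           # adversary's treatment probabilities
    err_a = p - a                # cross-entropy residuals
    grad_v = 2 * R.T @ err_y / n
    grad_u = R.T @ err_a / n
    # Gradient reversal: the representation descends the outcome loss
    # but ASCENDS the adversary's loss, removing treatment information.
    grad_W = (2 * np.outer(v, X.T @ err_y)
              - lam * np.outer(u, X.T @ err_a)) / n
    v -= lr * grad_v
    u -= lr * grad_u
    W -= lr * grad_W

mse_after = np.mean((X @ W.T @ v - y) ** 2)
print(f"outcome MSE: {mse_before:.3f} -> {mse_after:.3f}")
```

The balancing weight `lam` trades off the two objectives: with `lam = 0` this reduces to ordinary supervised training, while larger values push harder toward a treatment-invariant representation.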

Oral presentation by Dan Jarrett

Ph.D. student Dan Jarrett will deliver an oral presentation on a paper entitled “Target-Embedding Autoencoders for Supervised Representation Learning.” The paper was co-authored with Mihaela van der Schaar. The full paper can be found here. Dan’s LinkedIn page is here.


Autoencoder-based learning has emerged as a staple for disciplining representations in unsupervised and semi-supervised settings.

This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional. We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features and predictive of targets, encoding the prior that variations in targets are driven by a compact set of underlying factors.

As our theoretical contribution, we provide a guarantee of generalization for linear TEAs by demonstrating uniform stability, interpreting the benefit of the auxiliary reconstruction task as a form of regularization. As our empirical contribution, we extend validation of this approach beyond existing static classification applications to multivariate sequence forecasting, verifying its advantage on both linear and nonlinear recurrent architectures, thereby underscoring the further generality of this framework beyond feedforward instantiations.
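The joint objective described above can be sketched in a minimal linear form. This is an illustrative toy, not the paper's implementation; the synthetic data, dimensions, and learning rate are all assumptions. A latent code of the targets is trained to both reconstruct the high-dimensional targets and be predictable from the features; at test time, predictions flow features → latent → targets.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic targets driven by a compact set of underlying factors:
# 5 features -> 2 latent factors -> 10-dimensional target.
n, dx, dk, dy = 300, 5, 2, 10
X = rng.normal(size=(n, dx))
A = rng.normal(size=(dx, dk)) / np.sqrt(dx)
B = rng.normal(size=(dk, dy)) / np.sqrt(dk)
Y = X @ A @ B + 0.05 * rng.normal(size=(n, dy))

# Linear TEA components: target encoder E, decoder D, feature predictor P.
E = 0.3 * rng.normal(size=(dk, dy))   # targets  -> latent
D = 0.3 * rng.normal(size=(dy, dk))   # latent   -> targets
P = 0.3 * rng.normal(size=(dk, dx))   # features -> latent
lr = 0.05

def predict(X):
    # Test-time path: features -> latent -> targets.
    return X @ P.T @ D.T

mse_before = np.mean((predict(X) - Y) ** 2)

for _ in range(2000):
    Z = Y @ E.T                       # latent codes of the targets
    R = Z @ D.T - Y                   # reconstruction residual (z predictive of y)
    S = X @ P.T - Z                   # prediction residual (z predictable from x)
    grad_D = 2 * R.T @ Z / n
    grad_P = 2 * S.T @ X / n
    grad_Z = (2 * R @ D - 2 * S) / n  # gradient reaching the encoder via both losses
    grad_E = grad_Z.T @ Y
    D -= lr * grad_D
    P -= lr * grad_P
    E -= lr * grad_E

mse_after = np.mean((predict(X) - Y) ** 2)
print(f"target MSE: {mse_before:.3f} -> {mse_after:.3f}")
```

Note that the target encoder is used only during training; the auxiliary reconstruction term shapes the latent space, which is the regularization effect the paper's stability analysis formalizes for the linear case.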

For more information on ICLR 2020, visit the event’s site here.

Nick Maxfield

Nick oversees the van der Schaar Lab’s communications, including media relations, content creation, and maintenance of the lab’s online presence.

Nick studied Japanese (BA Hons.) at the University of Oxford, graduating in 2012. He previously worked in HQ communications roles at Toyota (2013-2016) and Nissan (2016-2020).

Given his humanities/languages background and experience in communications, Nick is well-positioned to highlight and explain the real-world impact of research that can often be quite esoteric. Thankfully, he is comfortable asking almost endless questions in order to understand a topic.