The van der Schaar Lab is taking an active role in this year’s International Conference on Learning Representations (ICLR), the world’s largest deep learning event. The lab’s work will feature prominently, with Mihaela van der Schaar delivering a keynote and two papers by the lab’s Ph.D. students accepted to the conference.
Keynote by Mihaela van der Schaar
On Wednesday, April 29 at 07:00 EDT (12:00 BST/19:00 CST), Mihaela will deliver a keynote entitled “Machine learning: Changing the future of healthcare.” In her keynote, she will share the van der Schaar Lab’s vision for machine learning in medicine, and describe how medicine presents unique challenges and new opportunities that cannot generally be found in other areas of machine learning. Mihaela will also give an overview of some of the van der Schaar Lab’s key recent areas of work, including i) automating the design of clinical predictive analytics; ii) interpretability and explainability; iii) dynamic forecasting; and iv) estimating individualized treatment effects.

Spotlight presentation by Ioana Bica
Ph.D. student Ioana Bica will give a spotlight presentation on a paper entitled “Estimating Counterfactual Treatment Outcomes over Time through Adversarially Balanced Representations.” The paper was co-authored with Ahmed Alaa, James Jordon and Mihaela van der Schaar. The full paper can be found here. Ioana’s LinkedIn page is here.
Abstract
Identifying when to give treatments to patients and how to select among multiple treatments over time are important medical problems with few existing solutions. In this paper, we introduce the Counterfactual Recurrent Network (CRN), a novel sequence-to-sequence model that leverages the increasingly available patient observational data to estimate treatment effects over time and answer such medical questions.
To handle the bias from time-varying confounders (covariates that affect the treatment assignment policy in the observational data), CRN uses domain adversarial training to build balancing representations of the patient history. At each timestep, CRN constructs a treatment-invariant representation which removes the association between patient history and treatment assignments and can therefore be reliably used for making counterfactual predictions.
On a simulated model of tumour growth, with varying degrees of time-dependent confounding, we show that our model achieves lower error than current state-of-the-art methods in estimating counterfactuals and in choosing the correct treatment and timing of treatment.
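To give a concrete sense of the adversarial balancing idea described in the abstract, here is a minimal, illustrative PyTorch sketch (not the authors’ implementation; the module names, dimensions, and loss weights are hypothetical). An encoder summarises the patient history, an outcome head predicts the response to a planned treatment, and a treatment classifier is trained through a gradient-reversal layer so that the learned representation carries as little information as possible about the treatment assignment.

```python
# Hypothetical sketch of adversarially balanced representations (not the paper's code).
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back into the encoder.
        return -ctx.lam * grad_output, None


class CounterfactualRNNSketch(nn.Module):
    def __init__(self, obs_dim, n_treatments, hidden_dim=64):
        super().__init__()
        self.encoder = nn.LSTM(obs_dim + n_treatments, hidden_dim, batch_first=True)
        self.outcome_head = nn.Linear(hidden_dim + n_treatments, 1)
        self.treatment_head = nn.Linear(hidden_dim, n_treatments)

    def forward(self, history, planned_treatment, lam=1.0):
        # history: (batch, time, obs_dim + n_treatments) past covariates and treatments
        # planned_treatment: (batch, n_treatments) one-hot treatment whose outcome we predict
        h, _ = self.encoder(history)
        repr_t = h[:, -1]                                 # balancing representation
        outcome = self.outcome_head(torch.cat([repr_t, planned_treatment], dim=-1))
        treat_logits = self.treatment_head(GradientReversal.apply(repr_t, lam))
        return outcome, treat_logits


# Toy usage: joint loss = outcome error + adversarial treatment-classification loss.
model = CounterfactualRNNSketch(obs_dim=10, n_treatments=2)
history = torch.randn(8, 5, 12)                           # batch of 8, 5 time steps
planned = torch.eye(2)[torch.randint(0, 2, (8,))]
outcome, treat_logits = model(history, planned)
loss = nn.functional.mse_loss(outcome.squeeze(-1), torch.randn(8)) \
       + nn.functional.cross_entropy(treat_logits, planned.argmax(dim=-1))
loss.backward()
```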
Oral presentation by Dan Jarrett
Ph.D. student Dan Jarrett will deliver an oral presentation on a paper entitled “Target-Embedding Autoencoders for Supervised Representation Learning.” The paper was co-authored with Mihaela van der Schaar. The full paper can be found here. Dan’s LinkedIn page is here.
Abstract
Autoencoder-based learning has emerged as a staple for disciplining representations in unsupervised and semi-supervised settings.
This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional. We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features and predictive of targets, encoding the prior that variations in targets are driven by a compact set of underlying factors.
As our theoretical contribution, we provide a guarantee of generalization for linear TEAs by demonstrating uniform stability, interpreting the benefit of the auxiliary reconstruction task as a form of regularization. As our empirical contribution, we extend validation of this approach beyond existing static classification applications to multivariate sequence forecasting, verifying its advantage on both linear and nonlinear recurrent architectures, thereby underscoring the generality of this framework beyond feedforward instantiations.
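As a rough illustration of the idea, the following PyTorch sketch shows a simple feedforward instantiation under our own assumptions (layer sizes, loss weighting, and names are illustrative, not the paper’s implementation): a target autoencoder learns a compact latent code, a feature predictor is trained jointly so the same code is predictable from the inputs, and at test time predictions are made by decoding the predicted code.

```python
# Hypothetical sketch of a target-embedding autoencoder (TEA), feedforward case.
import torch
import torch.nn as nn

feature_dim, target_dim, latent_dim = 20, 100, 8

target_encoder = nn.Sequential(nn.Linear(target_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
target_decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, target_dim))
feature_predictor = nn.Sequential(nn.Linear(feature_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))

params = list(target_encoder.parameters()) + list(target_decoder.parameters()) \
         + list(feature_predictor.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

x = torch.randn(32, feature_dim)      # toy batch of features
y = torch.randn(32, target_dim)       # toy batch of high-dimensional targets

for _ in range(10):                   # a few illustrative training steps
    z_target = target_encoder(y)                             # latent code of the target
    recon_loss = nn.functional.mse_loss(target_decoder(z_target), y)
    latent_loss = nn.functional.mse_loss(feature_predictor(x), z_target)
    loss = recon_loss + latent_loss   # auxiliary reconstruction acts as regularisation
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At test time targets are unavailable: predict by decoding the predicted latent code.
y_hat = target_decoder(feature_predictor(x))
```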
For more information on ICLR 2020, visit the event’s site here.
