van der Schaar Lab

Spotlight on organ transplantation research projects

In this post, we will explain how the problem of organ transplantation optimization represents a web of complex problems, and introduce some of our lab’s cutting-edge machine learning methods developed and applied in collaboration with a range of clinical professionals in this domain.

Organ transplantation: scarcity, inefficiency, and complexity

In the U.S., an average of 17 patients died every day in 2018 while waiting to receive an organ. Lengthy waitlists force medical professionals to make tough decisions regarding the best use of scarce organs. Given the constant mismatch between donor supply and recipient demand, each decision to allocate an organ to a specific recipient is a decision that may deprive another waitlisted patient of additional years of life.

In broad terms, transplantation decisions tend to be made on the basis of urgency and benefit: essentially, the decision relies on an estimate of how long each patient will survive without a transplant, and how much a transplant would extend each patient’s life. Such decisions are generally based on a number of commonly used risk scores, some of which are calculated on a one-size-fits-all basis. Even those that adopt an individual patient perspective rely on linear approaches that consider only a few patient biomarkers. For example, liver allocation in the U.S. and Europe relies on the MELD score, which is calculated using only three lab parameters; in the U.K., allocation relies on simple linear models that vastly underestimate the complexity of organ-to-patient interaction.

This leads to a state of affairs in which the allocation of scarce organs is suboptimal. There is undoubtedly scope for substantial gains in efficiency (and, therefore, individual and collective benefit to patients) through the use of AI and machine learning.

We cannot, however, simply swap out existing approaches such as risk scores with AI and machine learning models. This is because organ transplantation is not a single problem, but rather a web of complex and interrelated problems and sub-problems. For a machine learning lab such as ours, the challenge is to understand and address each problem and sub-problem independently, while also fitting these individual parts together into a functioning whole. This requires us to go beyond familiar areas in AI and machine learning.

While the complexities of organ transplantation from a machine learning perspective will likely form the basis of a more thorough summary in the future, they are briefly summarized below as falling into three categories: prediction, matchmaking, and allocation policy.

The first of these, prediction, asks how long each patient will survive with or without a transplant. Answering this question requires us to estimate two quantities:

1) How long a patient will survive with an organ
2) How long a patient will survive without an organ

Estimating these quantities relies on tools from survival analysis and competing risks, while also considering complexities such as comorbidities.

Beyond this, estimating these quantities from observational data (such as EHR data) presents an additional complication in the form of selection bias. In observational data we only observe the factual outcome (only one of the two quantities above) and not the counterfactuals, and which one we observe is biased by the assignment policy being executed for the collection of the data. This requires methodologies from the realm of individualized treatment effect estimation, though the problem is significantly complicated here by the scarcity and high-dimensionality of the organs.
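To make the selection-bias problem concrete, here is a toy Python sketch (all numbers are invented, not drawn from any transplant dataset): a sickest-first assignment policy generates observational data in which a naive treated-vs-untreated comparison badly underestimates a uniform true benefit of five life-years.

```python
import random

random.seed(0)

# Toy illustration of selection bias in observational transplant data.
# Each patient has a severity score; the true benefit of an organ is the
# same (+5 years) for everyone, but sicker patients are more likely to
# receive an organ, so a naive treated-vs-untreated comparison is biased.
patients = [{"severity": random.uniform(0, 1)} for _ in range(10_000)]

treated, untreated = [], []
for p in patients:
    base_survival = 10 * (1 - p["severity"])      # years without an organ
    gets_organ = random.random() < p["severity"]  # sickest-first policy
    if gets_organ:
        treated.append(base_survival + 5)          # factual outcome only
    else:
        untreated.append(base_survival)

naive_effect = sum(treated) / len(treated) - sum(untreated) / len(untreated)
print(f"true effect: +5.0 years, naive estimate: {naive_effect:+.1f} years")
```

Because the sickest (shortest-surviving) patients are over-represented among the treated, the naive estimate comes out far below the true +5 years; methods for individualized treatment effect estimation exist precisely to correct for this kind of bias.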

A further complication is:

When would each patient benefit most from receiving an organ?

The potential benefit of transplantation for an individual patient depends on when they receive an organ. For example, sicker patients generally benefit less from receiving organs. Therefore, different outcomes must be estimated for transplantation at a range of times.

The second category, matchmaking, asks which organ would most benefit each patient. This question has two subparts:

1) What kind of organ would be the best match for each patient?

Donors and organs are unique and high-dimensional; the characteristics of certain donors and organs will, therefore, yield more benefit to certain individual recipients compared to others. This is an individualized treatment effect inference problem with both familiar and unfamiliar elements from a machine learning perspective (as described later on).

Additionally, we must factor in risks such as transplant failure and/or post-operative complications (including infections, chronic rejection, and malignancy); this aspect of matchmaking is also inextricable from the survival prediction challenges described above.

2) How rare or common is each patient’s “best match” organ?

Matchmaking is all the more essential because each organ is seen only once: for any given patient, we cannot know whether an organ that becomes available in the near future will be a better match than one available now.

This is closely tied to some of the predictive problems mentioned above—particularly since the expected patient benefit of transplantation at a given time must be weighed against the likelihood of finding a more suitable organ match in the near future.

The third category concerns allocation policy. Due to the mismatch between supply and demand, there are potential conflicts of prioritization between prevention of waitlist deaths (by allocating organs to the sickest patients first) and overall population benefit (by allocating organs on a “best match” basis).

Allocating organs to the sickest patients may prevent waitlist deaths, but would achieve less impact in terms of life-years gained—both individually and collectively. Conversely, allocating organs on a “best match” basis would maximize the total life-years gained across the population, but would lead to more deaths among the sickest patients.
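The tension between the two policies can be illustrated with a toy simulation (the patient pool, organ count, and distributions are all invented): each policy ranks the same waitlist differently, and each wins on a different metric.

```python
import random

random.seed(1)

# Toy comparison of two allocation policies over one batch of organs.
# Each waitlisted patient has an expected survival without a transplant
# and an expected benefit (life-years gained) from transplantation.
patients = [
    {"survival_without": random.uniform(0.1, 10.0),
     "benefit": random.uniform(1.0, 20.0)}
    for _ in range(1_000)
]
n_organs = 100

def allocate(pool, key):
    chosen = sorted(pool, key=key)[:n_organs]
    total_gain = sum(p["benefit"] for p in chosen)
    # Near-term waitlist deaths averted: transplanted patients who would
    # otherwise have survived under a year.
    deaths_averted = sum(p["survival_without"] < 1.0 for p in chosen)
    return total_gain, deaths_averted

sickest = allocate(patients, key=lambda p: p["survival_without"])
best = allocate(patients, key=lambda p: -p["benefit"])

print(f"sickest-first: {sickest[0]:.0f} life-years, {sickest[1]} deaths averted")
print(f"best-match:    {best[0]:.0f} life-years, {best[1]} deaths averted")
```

On this synthetic pool, best-match gains far more total life-years while sickest-first averts far more near-term waitlist deaths, mirroring the trade-off described above.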

Rather than weighing in on the fairness of one policy versus another, the challenge for a lab like ours is to work out how to flexibly accommodate different allocation policies into machine learning systems for organ transplantation, on top of models for prediction and matchmaking.

  • In addition to the points mentioned above, a further area of complexity lies in the fact that not all types of organs are the same: there are considerable differences between kidneys and livers in terms of supply-demand mismatch, ease of prediction of life-years gained, ease of donor-recipient matchmaking, and variation in actual life-years gained. A patient on the waitlist for a kidney transplant can survive for many years thanks to dialysis, whereas this is not true for other organs such as hearts or lungs.

Our lab’s work involving organ transplantation

As outlined above, organ transplantation is a high-stakes domain in which there is exceptional potential for real-world impact through increased efficiency, but increasing efficiency in any meaningful way would require us to navigate a highly complex set of interrelated problems.

We have now been working on organ transplantation for a number of years, and in this time have developed a portfolio of groundbreaking data-driven machine learning approaches with the support of clinical collaborators representing a range of specializations within the domain. Our projects tackle the challenges raised by transplantation in general, but also address problems specific to a variety of commonly transplanted organs, including hearts, livers, and lungs. Our work is ongoing, and we continue to develop new and improved methods.

Personalized survival predictions for waitlisted and post-transplantation patients

As explained above, survival prediction before and after transplantation is an especially important problem because transplantation and treatment decisions depend on predictions of patient survival on the waitlist and survival after transplantation. Better predictions may, therefore, increase the number of successful transplantations.

Most commonly-used clinical approaches to survival prediction are based on one-size-fits-all models that apply to the entire population of patients and donors, and do not fully capture the heterogeneity of these populations. In general, such approaches construct a single risk score (a real number) for each patient as a function of the patient’s features and then use that risk score to predict a survival time or a survival curve. As a result, patients with higher risk scores are predicted to have a lower probability of surviving for every given time horizon—so survival curves for different individuals do not intersect.
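To see why a single scalar risk score forces survival curves not to intersect, consider a proportional-hazards-style sketch (the baseline hazard and the scores are invented for illustration):

```python
import math

# Under a single-score model of the form S_i(t) = S0(t) ** exp(score_i),
# a higher score shrinks survival at EVERY horizon t, so two patients'
# survival curves can never cross.
def survival(score, t, baseline_rate=0.1):
    s0 = math.exp(-baseline_rate * t)   # baseline survival at time t
    return s0 ** math.exp(score)

low_risk, high_risk = -0.5, 1.0
for t in [1, 5, 10, 20]:
    assert survival(high_risk, t) < survival(low_risk, t)
print("higher score -> lower survival at every horizon; curves cannot cross")
```

In reality, one patient may face higher short-term but lower long-term risk than another, which a single-score model of this form cannot express.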

In a study published in PLoS ONE in 2018, our lab worked with clinical and academic collaborators from the University of California, Los Angeles (UCLA), University of California, Davis (UC Davis), and University College London (UCL) to develop a methodology for personalized prediction of survival for patients with advanced heart failure, both while on the waitlist and after heart transplantation.

The method we developed is called ToPs (short for “tree of predictors”). ToPs can capture the heterogeneity of populations by creating clusters of patients and providing specific predictive models for each cluster. ToPs addresses the interaction of multiple features and, importantly, takes into account the difference between long-term survival and short-term survival.
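A greatly simplified caricature of this idea, with invented data, a single hand-picked split, and per-cluster mean predictors standing in for real survival models, might look like:

```python
import statistics

# Caricature of the tree-of-predictors idea: split the population into
# clusters and fit a separate predictor per cluster, instead of one
# global model. (ToPs itself learns the splits and chooses among many
# candidate models; everything here is hand-set for illustration.)
patients = [
    {"age": 30, "survival": 12.0}, {"age": 35, "survival": 11.0},
    {"age": 40, "survival": 10.5}, {"age": 65, "survival": 4.0},
    {"age": 70, "survival": 3.5}, {"age": 75, "survival": 2.5},
]

def fit_mean(cluster):
    mean = statistics.mean(p["survival"] for p in cluster)
    return lambda p: mean          # stand-in for a per-cluster model

split_age = 50
young = [p for p in patients if p["age"] < split_age]
old = [p for p in patients if p["age"] >= split_age]
models = {"young": fit_mean(young), "old": fit_mean(old)}

def predict(p):
    return models["young" if p["age"] < split_age else "old"](p)

print(predict({"age": 33}))  # uses the young-cluster model
print(predict({"age": 72}))  # uses the old-cluster model
```

The key point is that each cluster gets its own model, so predictions reflect the heterogeneity of the population rather than a single global fit.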

In comparison with existing clinical risk-scoring methods and other machine learning methods, ToPs significantly improves survival predictions both post- and pre-cardiac transplantation. ToPs provides a more accurate, personalized approach to survival prediction that can benefit patients, clinicians, and policymakers in making clinical decisions and setting clinical policy. Because survival prediction is widely used in clinical decision-making across diseases and clinical specialties, the implications of this are far-reaching.

In addition to being published in PLoS ONE, this project appeared in Newsweek.

If you’d like to learn more about our work in the area of survival analysis, competing risks, and comorbidities, please take a look at our overview here.

Personalized survival predictions via Trees of Predictors: An application to cardiac transplantation

Jinsung Yoon, William R. Zame, Amitava Banerjee, Martin Cadeiras, Ahmed Alaa, Mihaela van der Schaar

PLoS ONE, 2018

Risk prediction is crucial in many areas of medical practice, such as cardiac transplantation, but existing clinical risk-scoring methods have suboptimal performance. We develop a novel risk prediction algorithm and test its performance on the database of all patients who were registered for cardiac transplantation in the United States during 1985-2015.

We develop a new, interpretable, methodology (ToPs: Trees of Predictors) built on the principle that specific predictive (survival) models should be used for specific clusters within the patient population. ToPs discovers these specific clusters and the specific predictive model that performs best for each cluster.

In comparison with existing clinical risk scoring methods and state-of-the-art machine learning methods, our method provides significant improvements in survival predictions, both post- and pre-cardiac transplantation.

Our lab has also developed individualized prediction methods for patients with cystic fibrosis on the lung transplantation waitlist (for more information on our work related to cystic fibrosis, see our spotlight page here). In this case, we adapted an algorithmic framework that automates the process of constructing clinical prognostic models, and used it to establish the optimal timing for referring patients with terminal respiratory failure for lung transplantation. Further details related to this project can be found directly below.

Prognostication and Risk Factors for Cystic Fibrosis via Automated Machine Learning

Ahmed Alaa, Mihaela van der Schaar

Published in Nature Scientific Reports, 2018

Accurate prediction of survival for cystic fibrosis patients is instrumental in establishing the optimal timing for referring patients with terminal respiratory failure for lung transplantation. Current practice considers referring patients for lung transplantation evaluation once the forced expiratory volume (FEV1) drops below 30% of its predicted nominal value. While FEV1 is indeed a strong predictor of cystic fibrosis-related mortality, we hypothesized that the survival behavior of cystic fibrosis patients exhibits a lot more heterogeneity.

To this end, we developed an algorithmic framework, which we call AutoPrognosis, that leverages the power of machine learning to automate the process of constructing clinical prognostic models, and used it to build a prognostic model for cystic fibrosis using data from a contemporary cohort that involved 99% of the cystic fibrosis population in the UK. AutoPrognosis uses Bayesian optimization techniques to automate the process of configuring ensembles of machine learning pipelines, which involve imputation, feature processing, classification and calibration algorithms. Because it is automated, it can be used by clinical researchers to build prognostic models without the need for in-depth knowledge of machine learning.

Our experiments revealed that the accuracy of the model learned by AutoPrognosis is superior to that of existing guidelines and other competing models.
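The configuration-search loop at the heart of such a framework can be sketched as follows. Note the stand-ins: this sketch uses plain random search and an invented scoring function, whereas AutoPrognosis itself uses Bayesian optimization over ensembles of full pipelines.

```python
import random

random.seed(2)

# Stand-in sketch of pipeline configuration search: score candidate
# (imputation, feature processing, classifier) pipelines and keep the
# best one. The search space and the proxy score are invented.
SPACE = {
    "imputation": ["mean", "knn"],
    "features": ["none", "pca"],
    "classifier": ["logistic", "gbm", "forest"],
}

def score(pipeline):
    # Invented proxy for a cross-validated performance metric.
    base = {"logistic": 0.70, "gbm": 0.78, "forest": 0.75}[pipeline["classifier"]]
    bonus = 0.02 if pipeline["imputation"] == "knn" else 0.0
    return base + bonus + random.gauss(0, 0.01)

best, best_score = None, -1.0
for _ in range(30):
    candidate = {k: random.choice(v) for k, v in SPACE.items()}
    s = score(candidate)
    if s > best_score:
        best, best_score = candidate, s

print("best pipeline:", best)
```

Automating this loop is what lets clinical researchers build prognostic models without hand-tuning each pipeline component themselves.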

Personalized donor-recipient matching

Even though organ transplantation can increase the life expectancy and quality of life for the recipient, the operation can entail various complications, including infection, acute and chronic rejection, and malignancy. This is a complicated risk assessment problem, since postoperative patient survival depends on different types of risk factors: recipient-related factors (e.g., cardiovascular disease severity of heart recipients), recipient-donor matching factors (e.g., weight ratio and human leukocyte antigen), race, and donor-related factors (e.g., diabetes).

In a project that led to a paper published at AAAI 2017, we sought an enhanced phenotypic characterization for the compatibility of patient-donor pairs through a precision medicine approach. Working alongside Dr. Martin Cadeiras, a heart failure and heart transplant cardiologist at UC Davis, we constructed personalized predictive models tailored to the individual traits of both the donor and the recipient to the finest possible granularity.

The result was ConfidentMatch, an automated system that learns recipient-donor compatibility patterns from EHR data in terms of the probability of transplant success for given recipient-donor pairs. Clinicians can use ConfidentMatch as a prognostic tool for organ transplantation selection decisions: information about the donor and recipient is fed to the system, which outputs the probability of the transplant’s success.

Personalized Donor-Recipient Matching for Organ Transplantation

Jinsung Yoon, Ahmed Alaa, Martin Cadeiras, Mihaela van der Schaar

AAAI 2017

Organ transplants can improve the life expectancy and quality of life for the recipient but carry the risk of serious post-operative complications, such as septic shock and organ rejection. The probability of a successful transplant depends in a very subtle fashion on compatibility between the donor and the recipient, but current medical practice is short of domain knowledge regarding the complex nature of recipient-donor compatibility. Hence, a data-driven approach for learning compatibility has the potential for significant improvements in match quality.

This paper proposes a novel system (ConfidentMatch) that is trained using data from electronic health records. ConfidentMatch predicts the success of an organ transplant (in terms of 3-year survival rates) on the basis of clinical and demographic traits of the donor and recipient. ConfidentMatch captures the heterogeneity of the donor and recipient traits by optimally dividing the feature space into clusters and constructing a different optimal predictive model for each cluster. The system controls the complexity of the learned predictive model in a way that allows for more granular and confident predictions for a larger number of potential recipient-donor pairs, thereby ensuring that predictions are “personalized” and tailored to individual characteristics to the finest possible granularity.

Experiments conducted on the UNOS heart transplant dataset show the superiority of the prognostic value of ConfidentMatch to other competing benchmarks; ConfidentMatch can provide predictions of success with 95% confidence for 5,489 patients of a total population of 9,620 patients, which corresponds to 410 more patients than the most competitive benchmark algorithm (DeepBoost).

Donor-recipient matching as an individualized treatment effect problem

While some of the clinical factors pertaining to organ compatibility are already known (as described in the section above), it is hypothesized that donor-recipient compatibility involves additional clinical factors and exhibits a much more intricate pattern of feature interaction.

Since it would be infeasible and unethical to attempt to uncover these factors and patterns using organ allocation trials, the best way to do so is through a data-driven approach using observational data for organ allocations and transplantation outcomes. Seen from this perspective, organ transplantation has similarities to individualized treatment effects problems (you can find an overview of our work in this key research area here).

One major challenge with regard to this problem is that the matching policies underlying the observational data are driven by clinical guidelines, creating a “matching bias.” Additionally, we must also estimate transplant outcomes under counterfactual matches not observed in the data—in other words, we only have data for transplantation decisions that were made, and obviously we lack outcomes for decisions that were not made.

To solve this problem, our lab recently joined forces with two clinical colleagues at UCLA: Dr. Maxime Cannesson (Chair, Department of Anesthesiology & Perioperative Medicine) and Dr. Brent Ershoff (Assistant Professor-In-Residence, Department of Anesthesiology). Together, we developed an approach that learns representations which cluster donor features into donor types, and applies donor-invariant transformations to recipient features to predict the outcome for a given donor-recipient pair. A more in-depth explanation of how this is achieved can be found in the paper linked directly below.
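A toy sketch of this two-step structure follows: nearest-centroid donor typing plus a recipient model that is applied identically regardless of donor type. The centroids, per-type baselines, and recipient transform are all invented; the real model learns these representations from data.

```python
# Step 1: map each (high-dimensional) donor to a discrete donor type.
# Step 2: predict the outcome from (donor type, recipient features)
# using a transformation of recipient features that does not depend on
# the donor. All values below are invented for illustration.
centroids = [[0.125, 0.225], [0.875, 0.85]]   # two learned donor types

def donor_type(donor):
    dists = [sum((a - b) ** 2 for a, b in zip(donor, c)) for c in centroids]
    return dists.index(min(dists))

def predicted_survival(donor, recipient_age):
    base = [9.0, 6.0][donor_type(donor)]  # invented per-type baseline
    return base - 0.05 * recipient_age     # donor-invariant recipient term

print(predicted_survival([0.12, 0.20], recipient_age=40))  # type-0 donor
print(predicted_survival([0.88, 0.85], recipient_age=40))  # type-1 donor
```

Collapsing donors into types is what makes counterfactual reasoning tractable: outcomes only need to be estimated per (donor type, recipient) rather than per unique donor.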

Learning Matching Representations for Individualized Organ Transplantation Allocation

Can Xu, Ahmed Alaa, Ioana Bica, Brent Ershoff, Maxime Cannesson, Mihaela van der Schaar

AISTATS 2021

Organ transplantation can improve life expectancy for recipients, but the probability of a successful transplant depends on the compatibility between donor and recipient features. Current medical practice relies on coarse rules for donor-recipient matching, but is short of domain knowledge regarding the complex factors underlying organ compatibility.

In this paper, we formulate the problem of learning data-driven rules for donor-recipient matching using observational data for organ allocations and transplant outcomes. This problem departs from the standard supervised learning setup in that it involves matching two feature spaces (for donors and recipients), and requires estimating transplant outcomes under counterfactual matches not observed in the data. To address this problem, we propose a model based on representation learning to predict donor-recipient compatibility—our model learns representations that cluster donor features, and applies donor-invariant transformations to recipient features to predict transplant outcomes under a given donor-recipient feature instance.

Experiments on several semi-synthetic and real-world datasets show that our model outperforms state-of-the-art allocation models and real-world policies executed by human experts.

Combining organ rarity and scarcity with individualized treatment effects and donor-recipient matching

The methods introduced above have primarily addressed two exceedingly complex challenges: personalized survival predictions for waitlisted and post-transplantation patients, and donor-recipient matching. However, a truly effective AI or machine learning approach to improving organ allocation efficiency must integrate the logistics of organ scarcity in addition to solving the problems of personalized survival predictions, individualized treatment effect estimation, and donor-recipient matching.

As described above, the domain of organ transplantation is further complicated by the logistics of organ scarcity. Each organ is unique and high-dimensional, thus rendering outcome estimation for each (also unique) patient very difficult. Additionally, organs arrive in a stream: while a currently available organ might result in a positive outcome for a patient, future organs might have an even greater positive outcome (but we do not know which organs will become available in the future); not only are organs scarce, but organs that optimally match specific patients have varying degrees of rarity. Finally, each patient will presumably die relatively soon if not given an organ, and thus has access to only a limited number of organs.

These are the problems our lab aimed to address in creating OrganITE, an organ-to-patient assignment methodology developed in concert with Dr. Alexander Gimson, a consultant transplant hepatologist at Cambridge University Hospitals NHS Foundation Trust. Our work on OrganITE was published at NeurIPS 2020.

As with the personalized donor-recipient matching project introduced directly above, OrganITE treats organ transplantation as an individualized treatment effect (ITE) problem, and builds upon our lab’s methodologies for accounting for assignment bias and (lack of) counterfactual outcomes.

OrganITE adopts an allocation policy that is a hybrid of sorts, sitting between allocating purely on a best match basis and prioritizing the sickest patients first. This requires us to find a balance between individualized treatment effect (ITE), quality of match, and rarity of organ. Specifically, OrganITE takes into account 1) the unique features of each patient, 2) the expected period of survival of each patient without receiving an organ, 3) the benefit each patient would be expected to receive from each available organ (given the match between donor and recipient features), and 4) the relative rarity of each patient’s “optimal” organ.

OrganITE builds on our existing work to create individualized treatment effect models capable of addressing the high dimensionality of the organ space, while also modeling and accounting for organ scarcity and rarity. This approach can significantly increase total life years across the population, compared to the existing greedy approaches that simply optimize life years for the current organ available.
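The kind of trade-off OrganITE reasons about can be caricatured as a priority score that weights a patient’s estimated benefit from the current organ by the rarity of that patient’s best-match organ. The scores and the weighting rule below are invented for illustration, not taken from the paper.

```python
# Toy version of the benefit-vs-rarity trade-off: when an organ arrives,
# weigh each patient's estimated benefit from THIS organ against how
# rare their best-match organ is. A patient whose ideal organ is rare
# loses more by waiting, so their current benefit is weighted up.
patients = [
    {"name": "A", "benefit": 8.0, "best_match_rarity": 0.1},  # common match
    {"name": "B", "benefit": 7.0, "best_match_rarity": 0.9},  # rare match
]

def priority(p, rarity_weight=2.0):
    return p["benefit"] * (1 + rarity_weight * p["best_match_rarity"])

chosen = max(patients, key=priority)
print("organ goes to patient", chosen["name"])  # -> B
```

A purely greedy policy would give the organ to patient A (higher immediate benefit); accounting for rarity flips the decision, because patient A is likely to find a comparable organ later while patient B is not.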

OrganITE: Optimal transplant donor organ offering using an individual treatment effect

Jeroen Berrevoets, James Jordon, Ioana Bica, Alexander Gimson, Mihaela van der Schaar

NeurIPS 2020

Transplant organs are a scarce medical resource. The uniqueness of each organ and the patients’ heterogeneous responses to the organs present a unique and challenging machine learning problem. In this problem there are two key challenges: (i) assigning each organ “optimally” to a patient in the queue; (ii) accurately estimating the potential outcomes associated with each patient and each possible organ.

In this paper, we introduce OrganITE, an organ-to-patient assignment methodology that assigns organs based not only on its own estimates of the potential outcomes but also on organ scarcity. By modelling and accounting for organ scarcity we significantly increase total life years across the population, compared to the existing greedy approaches that simply optimise life years for the current organ available. Moreover, we propose an individualised treatment effect model capable of addressing the high dimensionality of the organ space.

We test our method on real and simulated data, resulting in as much as an additional year of life expectancy as compared to existing organ-to-patient policies.

Learning Queueing Policies for Organ Transplantation Allocation using Interpretable Counterfactual Survival Analysis

Jeroen Berrevoets, Ahmed M. Alaa, Zhaozhi Qian, James Jordon, Alexander Gimson, Mihaela van der Schaar

ICML 2021

Organ transplantation is often the last resort for treating end-stage illnesses, but managing transplant wait-lists is challenging because of organ scarcity and the complexity of assessing donor-recipient compatibility.

In this paper, we develop a data-driven model for (real-time) organ allocation using observational data for transplant outcomes. Our model integrates a queuing-theoretic framework with unsupervised learning to cluster the organs into “organ types”, and then constructs priority queues (associated with each organ type) to which incoming patients are assigned. To reason about organ allocations, the model uses synthetic controls to infer a patient’s survival outcomes under counterfactual allocations to the different organ types – the model is trained end-to-end to optimise the trade-off between patient waiting time and expected survival time. The use of synthetic controls enables patient-level interpretations of allocation decisions that can be presented to and understood by clinicians.

We test our model on multiple datasets, and show that it outperforms other organ-allocation policies in terms of both added life-years and death count. Furthermore, we introduce a novel organ-allocation simulator to accurately test new policies.
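The queueing structure described in the abstract can be sketched with a priority queue per organ type; the organ types and priorities below are hand-set for illustration, whereas the model learns both the clustering and the priorities end-to-end.

```python
import heapq

# One priority queue of waiting patients per learned "organ type".
# heapq is a min-heap, so lower numbers pop first (higher priority).
queues = {"type_0": [], "type_1": []}

def enqueue(patient_id, organ_type, priority):
    heapq.heappush(queues[organ_type], (priority, patient_id))

def organ_arrives(organ_type):
    _, patient = heapq.heappop(queues[organ_type])
    return patient

enqueue("p1", "type_0", priority=2.0)
enqueue("p2", "type_0", priority=0.5)
enqueue("p3", "type_1", priority=1.0)

print(organ_arrives("type_0"))  # -> p2 (highest-priority patient)
```

Routing each arriving organ to the queue for its type turns the continuous stream of unique organs into a small set of tractable queueing problems.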

Understanding and empowering transplantation decision-making

Clinical variation is a well-observed phenomenon, but may have profound consequences and may unfavourably impact outcomes.

Organ transplantation is a prime example of this, as it is clinicians who must ultimately choose whether to accept or decline an organ offer. Although significant effort has been put into developing organ allocation algorithms, an organ offered to the first-ranked patient on a waitlist is rejected up to 50% of the time. Even good-quality organs may be turned down several times before finally being accepted. The high rate of declined organ offers matters because it may impact outcomes for the organ (e.g., due to prolonged cold ischemia time) and the patients involved.

Understanding the factors associated with variation in clinical decision-making is vitally important, as it may reveal biases to clinicians which, if rectified, could result in improved clinical outcomes. The development of interpretable, yet highly performant, models of clinical decision-making is essential to bridge the gap left by black-box models and to further medical knowledge.

In a paper published at NeurIPS 2021, our lab introduced iTransplant (individualized TRANSparent Policy Learning for orgAN Transplantation), a novel data-driven framework to learn interpretable organ offer acceptance policies directly from clinical data. iTransplant learns a patient-wise parametrization of the expert clinician policy that accounts for the differences between patients, a crucial but often overlooked factor in organ transplantation.

Our work on iTransplant is closely tied to quantitative epistemology, a new and transformationally significant research pillar pioneered by the van der Schaar Lab. The purpose of our research into quantitative epistemology is to develop a strand of machine learning aimed at understanding, supporting, and improving human decision-making. We aim to do so by building machine learning models of decision-making, including how humans acquire and learn from new information, establish and update their beliefs, and act on the basis of their cumulative knowledge.

Closing the loop in medical decision support by understanding clinical decision-making: A case study on organ transplantation

Yuchao Qin, Fergus Imrie, Alihan Hüyük, Daniel Jarrett, Alexander Gimson, Mihaela van der Schaar

NeurIPS 2021

Significant effort has been placed on developing decision support tools to improve patient care. However, drivers of real-world clinical decisions in complex medical scenarios are not yet well-understood, resulting in substantial gaps between these tools and practical applications.

In light of this, we highlight that more attention on understanding clinical decision-making is required both to elucidate current clinical practices and to enable effective human-machine interactions. This is imperative in high-stakes scenarios with scarce available resources. Using organ transplantation as a case study, we formalize the desiderata of methods for understanding clinical decision-making. We show that most existing machine learning methods are insufficient to meet these requirements and propose iTransplant, a novel data-driven framework to learn the factors affecting decisions on organ offers in an instance-wise fashion directly from clinical data, as a possible solution.

Through experiments on real-world liver transplantation data from OPTN, we demonstrate the use of iTransplant to: (1) discover which criteria are most important to clinicians for organ offer acceptance; (2) identify patient-specific organ preferences of clinicians allowing automatic patient stratification; and (3) explore variations in transplantation practices between different transplant centers. Finally, we emphasize that the insights gained by iTransplant can be used to inform the development of future decision support tools.

Revolutionizing Healthcare roundtable on ML/AI for organ transplantation

On November 30, 2021, our lab held a clinician roundtable on the specific topic of organ transplantation as part of our ongoing Revolutionizing Healthcare engagement series.

Our panel for this session consisted of:

  • Alexander Gimson, MD FRCP (Transplant hepatologist (Cambridge); working with the van der Schaar Lab)
  • Gabriel Oniscu, MD FRCS (Consultant transplant surgeon and honorary reader in transplantation, Royal Infirmary of Edinburgh; Clinical director, Edinburgh Transplant Centre)
  • Martin Cadeiras, MD (Associate professor, medical director, heart failure, heart transplantation and mechanical circulatory support, University of California, Davis)
  • Prof. Michael Nicholson, MD DSc FRCS (Professor of transplant surgery & head of the Division of Academic General & Transplant Surgery, University of Cambridge; Director, NIHR Blood and Transplant Research Unit in Organ Donation and Transplantation)

In the first part of the session, Dr. Alexander Gimson, a Cambridge-based transplant hepatologist, outlined the importance, complexities, and unique challenges of the organ transplantation setting, while also highlighting some recent collaborative projects using machine learning for donor-recipient matchmaking and survival prediction. The latter part of the session featured a clinician roundtable comprising an expert panel of four transplantation specialists. Our panelists answered an array of questions (most of which came from the audience) and discussed the path forward for AI and machine learning for organ transplantation.

Continuing our research

On this page, we have introduced a range of our lab’s pioneering projects in the domain of organ transplantation. In each case, our aim has been to target the specific issues that make it so difficult to truly optimize outcomes when allocating scarce organs to waitlisted patients. All of this work has been guided by transplantation specialists within the clinical community, and we are extremely grateful to our collaborators for sharing their expertise and insights.

Machine learning methods developed around the organ transplantation agenda can substantially improve the overall efficiency of the healthcare system, and lead to the development of new and improved clinical practice guidelines. In particular, we see great potential for future projects combining cutting-edge machine learning with approaches from operations research. We are excited to continue our work in the area of organ transplantation alongside our international network of clinical collaborators, creating new and impactful solutions to known problems while discovering entirely new problems.

If you are a clinician and would like to learn more about how machine learning can be applied to real-world healthcare problems, please sign up for our Revolutionizing Healthcare online engagement sessions (no machine learning knowledge required).

For a full list of the van der Schaar Lab’s publications, click here.

Mihaela van der Schaar

Mihaela van der Schaar is the John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge and a Fellow at The Alan Turing Institute in London.

Mihaela has received numerous awards, including the Oon Prize on Preventative Medicine from the University of Cambridge (2018), a National Science Foundation CAREER Award (2004), 3 IBM Faculty Awards, the IBM Exploratory Stream Analytics Innovation Award, the Philips Make a Difference Award and several best paper awards, including the IEEE Darlington Award.

In 2019, she was identified by the National Endowment for Science, Technology and the Arts as the most-cited female AI researcher in the UK. She was also elected a 2019 “Star in Computer Networking and Communications” by N²Women. Her research expertise spans signal and image processing, communication networks, network science, multimedia, game theory, distributed systems, machine learning, and AI.

Mihaela’s research focus is on machine learning, AI and operations research for healthcare and medicine.

James Jordon

James is a 3rd year DPhil student at the University of Oxford.

His research focuses on the use of generative adversarial networks in solving supervised, unsupervised and private learning problems including: estimation of individualised treatment effects, feature selection, private synthetic data generation, data imputation and transfer learning.

Of particular interest is the use of generative modelling in creating private synthetic data to allow easier data sharing and therefore more rapid advancement in specialised machine learning technologies.

Jeroen Berrevoets

Jeroen Berrevoets joined the van der Schaar Lab from the Vrije Universiteit Brussel (VUB). Prior to this, he analyzed traffic data at 4 of Belgium’s largest media outlets and performed structural dynamics analysis at BMW Group in Munich.

As a PhD student in the van der Schaar Lab, Jeroen plans to explore the potential of machine learning in aiding medical discovery, rather than simply applying it to non-obvious predictions. His main research interests involve using machine learning and causal inference to gain understanding of various diseases and medications.

Much of this draws from his firmly-held belief that, “while learning to predict, machine learning models captivate some of the underlying dynamics and structure of the problem. Exposing this structure in fields such as medicine, could prove groundbreaking for disease understanding, and consequentially drug discovery.”

Jeroen’s studentship is supported under the W. D. Armstrong Trust Fund. He will be supervised jointly by Mihaela van der Schaar and Dr. Eoin McKinney.

Nick Maxfield

From 2020 to 2022, Nick oversaw the van der Schaar Lab’s communications, including media relations, content creation, and maintenance of the lab’s online presence.