van der Schaar Lab

Spotlight on organ transplantation research projects

In this post, we will explain how the problem of organ transplantation optimization represents a web of complex problems, and introduce some of our lab’s cutting-edge machine learning methods developed and applied in collaboration with a range of clinical professionals in this domain.

Organ transplantation: scarcity, inefficiency, and complexity

In the U.S., an average of 17 patients died every day in 2018 while waiting to receive an organ. Lengthy waitlists force medical professionals to make tough decisions regarding the best use of scarce organs. Given the constant mismatch between donor supply and recipient demand, each decision to allocate an organ to a specific recipient is a decision that may deprive another waitlisted patient of additional years of life.

In broad terms, transplantation decisions tend to be made on the basis of urgency and benefit: essentially, the decision relies on an estimate of how long each patient will survive without a transplant, and how long a transplant would extend each patient’s life. Such decisions are generally based on a number of commonly used risk scores, some of which are calculated on a one-size-fits-all basis. Even those that adopt an individual patient perspective rely on linear approaches that consider only a few patient biomarkers. For example, liver allocation in the U.S. and Europe relies on the MELD score, which is calculated using only three lab parameters; in the U.K., allocation relies on simple linear models that vastly underestimate the complexity of organ-to-patient interaction.

This leads to a state of affairs in which the allocation of scarce organs is suboptimal. There is undoubtedly scope for substantial gains in efficiency (and, therefore, individual and collective benefit to patients) through the use of AI and machine learning.

We cannot, however, simply swap out existing approaches such as risk scores with AI and machine learning models. This is because organ transplantation is not a single problem, but rather a web of complex and interrelated problems and sub-problems. For a machine learning lab such as ours, the challenge is to understand and address each problem and sub-problem independently, while also fitting these individual parts together into a functioning whole. This requires us to go beyond familiar areas in AI and machine learning.

While the complexities of organ transplantation from a machine learning perspective will likely form the basis of a more thorough summary in the future, they are briefly summarized below as falling into three categories: prediction, matchmaking, and allocation policy.

[Prediction] What is each patient’s potential benefit from transplantation?
[Matchmaking] How easy is it to find and allocate the organ that best matches each patient?
[Allocation policy] What is the best way to allocate organs to patients?
  • In addition to the points mentioned above, a further area of complexity lies in the fact that not all types of organs are the same: there are considerable differences between kidneys and livers in terms of supply-demand mismatch, ease of prediction of life-years gained, ease of donor-recipient matchmaking, and variation in actual life-years gained. A patient on the waitlist for a kidney transplant can survive for many years thanks to dialysis, whereas this is not true for other organs such as hearts or lungs.

Our lab’s work involving organ transplantation

As outlined above, organ transplantation is a high-stakes domain in which there is exceptional potential for real-world impact through increased efficiency, but increasing efficiency in any meaningful way would require us to navigate a highly complex set of interrelated problems.

We have now been working on organ transplantation for a number of years, and in this time have developed a portfolio of groundbreaking data-driven machine learning approaches with the support of clinical collaborators representing a range of specializations within the domain. Our projects tackle the challenges raised by transplantation in general, but also address problems specific to a variety of commonly transplanted organs, including hearts, livers, and lungs. Our work is ongoing, and we continue to develop new and improved methods.

Personalized survival predictions for waitlisted and post-transplantation patients

As explained above, survival prediction before and after transplantation is an especially important problem because transplantation and treatment decisions depend on predictions of patient survival on the waitlist and survival after transplantation. Better predictions may, therefore, increase the number of successful transplantations.

Most commonly used clinical approaches to survival prediction are based on one-size-fits-all models that apply to the entire population of patients and donors, and do not fully capture the heterogeneity of these populations. In general, such approaches construct a single risk score (a real number) for each patient as a function of the patient’s features and then use that risk score to predict a survival time or a survival curve. As a result, patients with higher risk scores are predicted to have a lower probability of surviving at every given time horizon, so the predicted survival curves of different individuals never intersect.
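One standard instance of this single-risk-score structure is the proportional-hazards form. The minimal numerical sketch below (with an invented baseline curve and invented risk values, not any clinical score) shows why such curves can never cross:

```python
import numpy as np

# Minimal sketch (invented numbers, not a clinical score): when predictions
# flow through a single real-valued risk score, each patient's survival curve
# is a power of a shared baseline, S_i(t) = S0(t) ** exp(risk_i), so the
# curves of any two patients keep the same ordering at every time horizon.

def survival_curve(baseline, risk_score):
    """Per-patient survival curve implied by one scalar risk score."""
    return baseline ** np.exp(risk_score)

t = np.linspace(0.0, 10.0, 50)
baseline = np.exp(-0.1 * t)          # hypothetical baseline survival S0(t)

low_risk = survival_curve(baseline, -0.5)
high_risk = survival_curve(baseline, 0.5)

# The higher-risk patient sits below the lower-risk patient at every t > 0,
# so the two curves never intersect.
assert np.all(high_risk[1:] < low_risk[1:])
```

A model family with this structure cannot express, for example, a patient with high short-term risk but good long-term prognosis, which is precisely the kind of heterogeneity personalized methods aim to capture.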

In a study published in PLoS ONE in 2018, our lab worked with clinical and academic collaborators from the University of California, Los Angeles (UCLA), University of California, Davis (UC Davis), and University College London (UCL) to develop a methodology for personalized prediction of survival for patients with advanced heart failure, both while on the waitlist and after heart transplantation.

The method we developed is called ToPs (short for “tree of predictors”). ToPs can capture the heterogeneity of populations by creating clusters of patients and providing specific predictive models for each cluster. ToPs addresses the interaction of multiple features and, importantly, takes into account the difference between long-term survival and short-term survival.
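The clustering idea can be pictured with a toy sketch. The code below is greatly simplified and is not the published ToPs algorithm (which, among other things, validates splits on held-out data and combines multiple model classes); it only illustrates the core notion of splitting the population and fitting a separate predictor per cluster:

```python
import numpy as np

# Greatly simplified sketch of the "tree of predictors" idea (NOT the
# published ToPs algorithm): recursively split the population wherever a
# split reduces prediction error, and fit a separate predictor (here simply
# the cluster mean of the outcome) to each resulting cluster.

def fit_tops(X, y, depth=0, max_depth=2, min_size=10):
    node = {"split": None, "model": y.mean()}   # leaf: cluster-specific predictor
    best_err = np.mean((y - y.mean()) ** 2)     # error without splitting
    if depth >= max_depth or len(y) < 2 * min_size:
        return node
    for j in range(X.shape[1]):                 # try a median split per feature
        thr = np.median(X[:, j])
        left, right = X[:, j] <= thr, X[:, j] > thr
        if left.sum() < min_size or right.sum() < min_size:
            continue
        err = (left.mean() * np.mean((y[left] - y[left].mean()) ** 2)
               + right.mean() * np.mean((y[right] - y[right].mean()) ** 2))
        if err < best_err:                      # keep the best improving split
            best_err = err
            node = {"split": (j, thr),
                    "left": fit_tops(X[left], y[left], depth + 1, max_depth, min_size),
                    "right": fit_tops(X[right], y[right], depth + 1, max_depth, min_size)}
    return node

def predict(tree, x):
    while tree["split"] is not None:
        j, thr = tree["split"]
        tree = tree["left"] if x[j] <= thr else tree["right"]
    return tree["model"]

# Toy data: "survival time" depends on a single patient trait.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.where(X[:, 0] <= 0, 2.0, 5.0) + rng.normal(scale=0.1, size=200)
tree = fit_tops(X, y)
```

In this toy example, patients split into two clusters with very different outcomes, and each cluster gets its own predictor rather than a single population-wide model.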

In comparison with existing clinical risk-scoring methods and other machine learning methods, ToPs significantly improves survival predictions both pre- and post-cardiac transplantation. ToPs provides a more accurate, personalized approach to survival prediction that can benefit patients, clinicians, and policymakers in making clinical decisions and setting clinical policy. Because survival prediction is widely used in clinical decision-making across diseases and clinical specialties, the implications of this are far-reaching.

In addition to being published in PLoS ONE, this project appeared in Newsweek.

If you’d like to learn more about our work in the area of survival analysis, competing risks, and comorbidities, please take a look at our overview here.

Personalized survival predictions via Trees of Predictors: An application to cardiac transplantation

Jinsung Yoon, William R. Zame, Amitava Banerjee, Martin Cadeiras, Ahmed Alaa, Mihaela van der Schaar

PLoS ONE, 2018


Our lab has also developed individualized prediction methods for patients with cystic fibrosis on the lung transplantation waitlist (for more info on our work related to cystic fibrosis, see our spotlight page here). In this case, we adapted an algorithmic framework that can automate the process of constructing clinical prognostic models, and used it to establish the optimal timing for referring patients with terminal respiratory failure for lung transplantation. Further details related to this project can be found directly below.

Prognostication and Risk Factors for Cystic Fibrosis via Automated Machine Learning

Ahmed Alaa, Mihaela van der Schaar

Scientific Reports, 2018


Personalized donor-recipient matching

Even though organ transplantation can increase the life expectancy and quality of life of the recipient, the operation can entail various complications, including infection, acute and chronic rejection, and malignancy. This is a complicated risk assessment problem, since postoperative patient survival depends on different types of risk factors: recipient-related factors (e.g., the severity of a heart recipient’s cardiovascular disease), donor-related factors (e.g., diabetes), and recipient-donor matching factors (e.g., weight ratio, human leukocyte antigen compatibility, and race match).

In a project that led to a paper published at AAAI 2017, we sought an enhanced phenotypic characterization for the compatibility of patient-donor pairs through a precision medicine approach. Working alongside Dr. Martin Cadeiras, a heart failure and heart transplant cardiologist at UC Davis, we constructed personalized predictive models tailored to the individual traits of both the donor and the recipient to the finest possible granularity.

The result was ConfidentMatch, an automated system that learns recipient-donor compatibility patterns from EHR data in terms of the probability of transplant success for given recipient-donor pairs. Clinicians can use ConfidentMatch as a prognostic tool for managing organ transplantation selection decisions: information about the donor and recipient is fed to the system, and the output is the probability of the transplant’s success.
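The input-output contract of such a system can be pictured with a hypothetical end-to-end sketch. Everything below — the synthetic data, the simple logistic model, and the `success_probability` helper — is invented for illustration; the actual ConfidentMatch system is considerably more sophisticated:

```python
import numpy as np

# Hypothetical sketch (invented data and model, NOT the ConfidentMatch
# system): donor and recipient features are concatenated into a single input,
# and a learned model maps that input to a transplant success probability.

def make_pair(donor, recipient):
    """One model input per donor-recipient pair."""
    return np.concatenate([donor, recipient])

# Purely synthetic data: "success" depends on one donor and one recipient trait.
rng = np.random.default_rng(42)
donors = rng.normal(size=(500, 2))
recipients = rng.normal(size=(500, 2))
X = np.array([make_pair(d, r) for d, r in zip(donors, recipients)])
y = (donors[:, 0] + recipients[:, 0] > 0).astype(float)

# Minimal logistic-regression "compatibility" model, fit by gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

def success_probability(donor, recipient):
    """Predicted probability that this transplant succeeds."""
    x = make_pair(donor, recipient)
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))
```

On this toy data, a well-matched synthetic pair receives a high predicted success probability and a poorly matched pair a low one, mirroring how a clinician would query the system with a candidate donor-recipient pair.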

Personalized Donor-Recipient Matching for Organ Transplantation

Jinsung Yoon, Ahmed Alaa, Martin Cadeiras, Mihaela van der Schaar

AAAI 2017


Donor-recipient matching as an individualized treatment effect problem

While some of the clinical factors pertaining to organ compatibility are already known (as described in the section above), it is hypothesized that donor-recipient compatibility involves additional clinical factors and exhibits a much more intricate pattern of feature interaction.

Since it would be infeasible and unethical to attempt to uncover these factors and patterns using organ allocation trials, the best way to do so is through a data-driven approach using observational data for organ allocations and transplantation outcomes. Seen from this perspective, organ transplantation has similarities to individualized treatment effects problems (you can find an overview of our work in this key research area here).

One major challenge with regard to this problem is that the matching policies underlying the observational data are driven by clinical guidelines, creating a “matching bias.” Additionally, we must estimate transplant outcomes under counterfactual matches not observed in the data: we only have data for transplantation decisions that were made, and we obviously lack outcomes for decisions that were not made.

To solve this problem, our lab recently joined forces with two clinical colleagues at UCLA: Dr. Maxime Cannesson (Chair, Department of Anesthesiology & Perioperative Medicine) and Dr. Brent Ershoff (Assistant Professor-In-Residence, Department of Anesthesiology). Together, we developed an approach that learns feature representations by jointly clustering donor features. These donor features are mapped into single donor types, and donor-invariant transformations are applied to recipient features to predict outcomes for a given donor-recipient instance. A more in-depth explanation regarding how this was achieved can be found in the paper linked directly below.
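The first step — mapping donor features into discrete donor types — can be illustrated with plain k-means clustering. This is only a toy stand-in for the paper’s representation-learning method (which also learns donor-invariant recipient transformations); the data and helper name below are invented:

```python
import numpy as np

# Illustrative sketch of clustering donors into discrete "donor types" (a toy
# stand-in, not the paper's method): downstream outcome models can then
# condition on a donor type rather than on raw donor features directly.

def cluster_donor_types(donors, k, iters=20, seed=0):
    """Plain k-means over donor features; returns cluster centers and types."""
    rng = np.random.default_rng(seed)
    centers = donors[rng.choice(len(donors), size=k, replace=False)]
    for _ in range(iters):
        # Assign each donor to its nearest center (its current "type").
        types = np.argmin(np.linalg.norm(donors[:, None] - centers[None], axis=2),
                          axis=1)
        for j in range(k):
            if np.any(types == j):              # avoid empty-cluster update
                centers[j] = donors[types == j].mean(axis=0)
    return centers, types

# Toy donor pool drawn from two well-separated groups of 50 donors each.
rng = np.random.default_rng(1)
donors = np.vstack([rng.normal(0.0, 0.3, size=(50, 2)),
                    rng.normal(3.0, 0.3, size=(50, 2))])
centers, donor_types = cluster_donor_types(donors, k=2)
```

On this synthetic pool, the two underlying donor groups are recovered as two distinct types, giving a discrete label that an outcome model can condition on for each donor-recipient instance.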

Learning Matching Representations for Individualized Organ Transplantation Allocation

Can Xu, Ahmed Alaa, Ioana Bica, Brent Ershoff, Maxime Cannesson, Mihaela van der Schaar

AISTATS 2021


3-minute presentation by Ahmed Alaa (Inspiration Exchange – March 30, 2021)

Combining organ rarity and scarcity with individualized treatment effects and donor-recipient matching

The methods introduced above have primarily addressed two exceedingly complex challenges: personalized survival predictions for waitlisted and post-transplantation patients, and donor-recipient matching. However, a truly effective AI or machine learning approach to improving organ allocation efficiency must integrate the logistics of organ scarcity in addition to solving the problems of personalized survival predictions, individualized treatment effect estimation, and donor-recipient matching.

As described above, the domain of organ transplantation is further complicated by the logistics of organ scarcity. Each organ is unique and high-dimensional, rendering outcome estimation for each (also unique) patient very difficult. Additionally, organs arrive in a stream: while a currently available organ might result in a positive outcome for a patient, a future organ might yield an even better outcome, but we do not know which organs will become available. Moreover, not only are organs scarce; the organs that optimally match specific patients also have varying degrees of rarity. Finally, each patient will presumably die relatively soon if not given an organ, and thus has access to only a limited number of organs.

These are the problems our lab aimed to address in creating OrganITE, an organ-to-patient assignment methodology developed in concert with Dr. Alexander Gimson, a consultant transplant hepatologist at Cambridge University Hospitals NHS Foundation Trust. Our work on OrganITE was published at NeurIPS 2020.

As with the personalized donor-recipient matching project introduced directly above, OrganITE treats organ transplantation as an individualized treatment effect (ITE) problem, and builds upon our lab’s methodologies for accounting for assignment bias and (lack of) counterfactual outcomes.

OrganITE adopts an allocation policy that is a hybrid of sorts, sitting between allocating purely on a best match basis and prioritizing the sickest patients first. This requires us to find a balance between individualized treatment effect (ITE), quality of match, and rarity of organ. Specifically, OrganITE takes into account 1) the unique features of each patient, 2) the expected period of survival of each patient without receiving an organ, 3) the benefit each patient would be expected to receive from each available organ (given the match between donor and organ features), and 4) the relative rarity of each patient’s “optimal” organ.

OrganITE builds on our existing work to create individualized treatment effect models capable of addressing the high dimensionality of the organ space, while also modeling and accounting for organ scarcity and rarity. This approach can significantly increase total life-years across the population compared to existing greedy approaches, which simply optimize life-years for the currently available organ.
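To make the trade-off concrete, here is a deliberately simplified allocation sketch inspired by the four factors listed above. This is not the OrganITE algorithm: the scoring rule, helper names, and all numbers are invented for illustration only.

```python
import numpy as np

# Invented allocation sketch (NOT OrganITE): score each waitlisted patient by
# the estimated benefit of the arriving organ, discounted by the expected
# value of waiting for that patient's optimal organ, and weighted by urgency.

def allocate(organ, patients, benefit, survival_without, arrival_prob):
    """Return the index of the patient with the highest adjusted score."""
    scores = []
    for i, p in enumerate(patients):
        gain_now = benefit(organ, p)                   # ITE-style estimate
        # Expected payoff of waiting for this patient's best-matching organ:
        # high when that organ type is common, low when it is rare.
        gain_wait = arrival_prob(p) * p["best_benefit"]
        urgency = 1.0 / max(survival_without[i], 0.1)  # short expected survival
        scores.append((gain_now - gain_wait) * urgency)
    return int(np.argmax(scores))

# Toy scenario: two patients, equally urgent, similarly matched to the organ.
organ = {"quality": 1.0}
patients = [
    {"match": 0.9, "best_benefit": 9.5, "rarity": 0.1},  # optimal organ common
    {"match": 0.8, "best_benefit": 9.0, "rarity": 0.9},  # optimal organ rare
]

def benefit(organ, p):
    return organ["quality"] * p["match"] * 10.0

def arrival_prob(p):
    return 1.0 - p["rarity"]

survival_without = np.array([2.0, 2.0])
chosen = allocate(organ, patients, benefit, survival_without, arrival_prob)
```

In this toy scenario the organ goes to the second patient: although the first patient is a slightly better match right now, the first patient’s optimal organ is common and likely to arrive later, whereas the second patient’s optimal organ is rare — exactly the departure from greedy, current-organ-only allocation described above.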

OrganITE: Optimal transplant donor organ offering using an individual treatment effect

Jeroen Berrevoets, James Jordon, Ioana Bica, Alexander Gimson, Mihaela van der Schaar

NeurIPS 2020


OrganITE presentation by Jeroen Berrevoets (from ITE inference video tutorial series)

Learning Queueing Policies for Organ Transplantation Allocation using Interpretable Counterfactual Survival Analysis

Jeroen Berrevoets, Ahmed M. Alaa, Zhaozhi Qian, James Jordon, Alexander Gimson, Mihaela van der Schaar

ICML 2021


Understanding and empowering transplantation decision-making

Clinical variation is a well-observed phenomenon, and it can have profound consequences, unfavourably impacting outcomes.

Organ transplantation is a prime example: it is clinicians who must ultimately choose whether to accept or decline an organ offer. Although significant effort has gone into developing organ allocation algorithms, an organ offered to the first-ranked patient on a waitlist is declined up to 50% of the time. Even good-quality organs may be turned down several times before they are finally accepted. This high rate of declined offers matters because it may impact outcomes both for the organ (e.g., due to prolonged cold ischemia time) and for the patients involved.

Understanding the factors associated with variation in clinical decision-making is vitally important, as it can reveal to clinicians biases which, if rectified, might result in improved clinical outcomes. The development of interpretable, yet highly performant, models of clinical decision-making is essential to close the gap with black-box models and further medical knowledge.

In a paper published at NeurIPS 2021, our lab introduced iTransplant (individualized TRANSparent Policy Learning for orgAN Transplantation), a novel data-driven framework to learn interpretable organ offer acceptance policies directly from clinical data. iTransplant learns a patient-wise parametrization of the expert clinician policy that accounts for the differences between patients, a crucial but often overlooked factor in organ transplantation.
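The notion of a patient-wise parametrization can be illustrated with a toy stand-in (the real iTransplant architecture is described in the paper; the matrix `W`, bias `b`, and all numbers below are invented): the acceptance policy over organ features has weights that are themselves computed from the patient’s features, so different patients induce different acceptance criteria for the same organ.

```python
import numpy as np

# Toy stand-in for a patient-wise parametrized policy (NOT the iTransplant
# architecture): the probability that a clinician accepts an organ offer is a
# logistic function of organ features, with the weight vector computed from
# the patient's features.

def acceptance_probability(patient, organ, W, b):
    """Patient-conditional logistic acceptance policy."""
    weights = W @ patient + b        # patient-wise weights over organ features
    return 1.0 / (1.0 + np.exp(-weights @ organ))

W = np.array([[1.0, 0.0],            # invented parameters: each patient trait
              [0.0, 1.0]])           # drives one organ-feature weight
b = np.zeros(2)
organ = np.array([1.0, -1.0])        # one favourable, one unfavourable feature

# The same organ offer, viewed through two different patients' criteria:
p_a = acceptance_probability(np.array([2.0, 0.0]), organ, W, b)
p_b = acceptance_probability(np.array([0.0, 2.0]), organ, W, b)
```

Here the same organ is very likely to be accepted for one patient and very likely to be declined for the other, because each patient weighs the organ’s features differently — the patient-level heterogeneity that a single population-wide policy would miss.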

Our work on iTransplant is closely tied to quantitative epistemology, a new and transformationally significant research pillar pioneered by the van der Schaar Lab. The purpose of our research into quantitative epistemology is to develop a strand of machine learning aimed at understanding, supporting, and improving human decision-making. We aim to do so by building machine learning models of decision-making, including how humans acquire and learn from new information, establish and update their beliefs, and act on the basis of their cumulative knowledge.

Closing the loop in medical decision support by understanding clinical decision-making: A case study on organ transplantation

Yuchao Qin, Fergus Imrie, Alihan Hüyük, Daniel Jarrett, Alexander Gimson, Mihaela van der Schaar

NeurIPS 2021


Revolutionizing Healthcare roundtable on ML/AI for organ transplantation

On November 30, 2021, our lab held a clinician roundtable on the specific topic of organ transplantation as part of our ongoing Revolutionizing Healthcare engagement series.

Our panel for this session consisted of:

  • Alexander Gimson, MD FRCP (Transplant hepatologist (Cambridge); working with the van der Schaar Lab)
  • Gabriel Oniscu, MD FRCS (Consultant transplant surgeon and honorary reader in transplantation, Royal Infirmary of Edinburgh; Clinical director, Edinburgh Transplant Centre)
  • Martin Cadeiras, MD (Associate professor, medical director, heart failure, heart transplantation and mechanical circulatory support, University of California, Davis)
  • Prof. Michael Nicholson, MD DSc FRCS (Professor of transplant surgery & head of the Division of Academic General & Transplant Surgery, University of Cambridge; Director, NIHR Blood and Transplant Research Unit in Organ Donation and Transplantation)

In the first part of the session, Dr. Alexander Gimson, a Cambridge-based transplant hepatologist, outlined the importance, complexities, and unique challenges of the organ transplantation setting, while also highlighting some recent collaborative projects using machine learning for donor-recipient matchmaking and survival prediction. The latter part of the session featured a clinician roundtable comprising an expert panel of four transplantation specialists. Our panelists answered an array of questions (most of which came from the audience) and discussed the path forward for AI and machine learning for organ transplantation.

Dr Alexander Gimson on ML and Organ Transplantation

In this video, Dr Alexander Gimson, Consultant Transplant Hepatologist at the Cambridge University Hospitals NHS Foundation Trust and member of the CCAIM faculty, talks about the impact of machine learning on transplantation medicine.

In particular, he discusses the pioneering work he has done with the van der Schaar Lab in using ML, and in particular individualised treatment effect estimation, to improve organ allocation.

You can find the complete recording of the CCAIM AI Clinic from 10 November 2022 here.

Continuing our research

On this page, we have introduced a range of our lab’s pioneering projects in the domain of organ transplantation. In each case, our aim has been to target the specific issues that make it so difficult to truly optimize outcomes when allocating scarce organs to waitlisted patients. All of this work has been guided by transplantation specialists within the clinical community, and we are extremely grateful to our collaborators for sharing their expertise and insights.

Machine learning methods developed around the organ transplantation agenda can substantially improve the overall efficiency of the healthcare system, and lead to the development of new and improved clinical practice guidelines. In particular, we see great potential for future projects combining cutting-edge machine learning with approaches from operations research. We are excited to continue our work in the area of organ transplantation alongside our international network of clinical collaborators, creating new and impactful solutions to known problems while discovering entirely new problems.

If you are a clinician and would like to learn more about how machine learning can be applied to real-world healthcare problems, please sign up for our Revolutionizing Healthcare online engagement sessions (no machine learning knowledge required).

For a full list of the van der Schaar Lab’s publications, click here.

Mihaela van der Schaar

Mihaela van der Schaar is the John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge and a Fellow at The Alan Turing Institute in London.

Mihaela has received numerous awards, including the Oon Prize on Preventative Medicine from the University of Cambridge (2018), a National Science Foundation CAREER Award (2004), 3 IBM Faculty Awards, the IBM Exploratory Stream Analytics Innovation Award, the Philips Make a Difference Award and several best paper awards, including the IEEE Darlington Award.

In 2019, she was identified by the National Endowment for Science, Technology and the Arts as the most-cited female AI researcher in the UK. She was also elected as a 2019 “Star in Computer Networking and Communications” by N²Women. Her research expertise spans signal and image processing, communication networks, network science, multimedia, game theory, distributed systems, machine learning and AI.

Mihaela’s research focus is on machine learning, AI and operations research for healthcare and medicine.

James Jordon

James is a 3rd year DPhil student at the University of Oxford.

His research focuses on the use of generative adversarial networks in solving supervised, unsupervised and private learning problems including: estimation of individualised treatment effects, feature selection, private synthetic data generation, data imputation and transfer learning.

Of particular interest is the use of generative modelling in creating private synthetic data to allow easier data sharing and therefore more rapid advancement in specialised machine learning technologies.

Jeroen Berrevoets

Jeroen Berrevoets joined the van der Schaar Lab from the Vrije Universiteit Brussel (VUB). Prior to this, he analyzed traffic data at 4 of Belgium’s largest media outlets and performed structural dynamics analysis at BMW Group in Munich.

As a PhD student in the van der Schaar Lab, Jeroen plans to explore the potential of machine learning in aiding medical discovery, rather than simply applying it to non-obvious predictions. His main research interests involve using machine learning and causal inference to gain understanding of various diseases and medications.

Much of this draws from his firmly-held belief that, “while learning to predict, machine learning models captivate some of the underlying dynamics and structure of the problem. Exposing this structure in fields such as medicine, could prove groundbreaking for disease understanding, and consequentially drug discovery.”

Jeroen’s studentship is supported under the W. D. Armstrong Trust Fund. He will be supervised jointly by Mihaela van der Schaar and Dr. Eoin McKinney.

Nick Maxfield

From 2020 to 2022, Nick oversaw the van der Schaar Lab’s communications, including media relations, content creation, and maintenance of the lab’s online presence.