Please note: this page is a work in progress. Please treat it as a “stub” containing only basic information, rather than a full-fledged summary of our lab’s vision for ML for next-generation clinical trials and our research to date.
Randomized controlled trials: the current gold standard
Randomised controlled trials (RCTs) are the gold standard for evaluating new treatments. Phase I trials are used to evaluate safety and dosage, Phase II trials are used to provide some evidence of efficacy, and Phase III trials are used to evaluate the effectiveness of the new treatment in comparison to the current one.
A typical question that a Phase III RCT is intended to answer is “In population W is drug A at daily dose X more efficacious in improving Z by Q amount over a period of time T than drug B at daily dose Y?”
Traditional free-standing parallel-group RCTs may, however, not always be the most practical option for evaluating certain treatments, since they are costly and time-consuming to implement, and they do not always recruit representative patients. This last point makes external validity an issue for RCTs, as findings sometimes fail to generalise beyond the study population. This may be due to the narrow inclusion criteria of RCTs compared with the real world: historically, restrictions with respect to disease severity and co-morbidities have meant that groups such as elderly patients and ethnic minorities are under-represented. By contrast, once drugs are approved by regulators such as the U.S. FDA after the clinical trials stage, they start being administered to a much larger and more varied population of patients.
Although there is increasing awareness of such issues and global regulatory authorities are encouraging wider inclusion criteria in clinical trials, it remains an issue that is unlikely to be solved by RCTs and associated integrated and model-based analyses alone.
Next-Generation Clinical Trials
As we can see, clinical trials are complex projects involving multiple data sources, design choices and analytical methods. Some of these aspects are detailed in other research pillars: individualised treatment effect, clustering, survival analysis, etc. In what follows, we focus on next-generation trial designs.
(The figure above was designed together with our collaborator Dr Eoin McKinney)
Next-generation clinical trials are gaining increasing attention as an alternative to RCTs, since they utilise accumulated results to dynamically modify the future trajectory of a trial for better efficiency and ethics, while preserving the integrity and validity of the study. Instead of randomising patients to fixed treatment arms in fixed proportions throughout the trial, adaptive designs use interim analyses to reconfigure patient recruitment criteria, assignment rules and treatment options. More specifically, aspects of the trial that may be modified adaptively include dosage, basis of patient selection, sample size, drug being trialed, and “cocktail” mix.
Studies such as the phase I trial in Acute Myeloid Leukaemia (published in 2013) and Cancer Research UK study CR0720-11 (published in 2012) have suggested that even some simple forms of adaptive design lead to better usage of resources and require fewer participants. These promising results have spurred interest in developing adaptive clinical trial methodologies in recent years. This is of great importance because running an actual clinical trial on human subjects is expensive and ethically sensitive: a well-designed trial methodology with thorough theoretical and simulated investigation is widely acknowledged as a crucial first step.
The potential of machine learning for clinical trials
In recent years, there has been a growing trend to leverage ML approaches, especially tools from reinforcement learning (RL) such as Markov decision processes and multi-armed bandits, to improve and expedite clinical trial designs.
The framework of multi-armed bandits is particularly useful in the context of clinical trials, both because it maps naturally onto the problem and because there is an enormous literature on multi-armed bandits. All of this work addresses the exploration-exploitation trade-off, which in clinical trials can be interpreted as a trade-off between clinical research (discovering knowledge about treatments) and clinical practice (benefiting the participants): new patients are assigned to treatment arms on the basis of information from previous patients.
These methods have been shown to speed up learning and identify subgroups for which different treatments might be employed and different treatment responses might be expected. Because these methods are automatic, they are easy to implement (when trial logistics permit). Moreover, the Bayesian nature of many of these algorithms permits smooth incorporation of previously discussed observational evidence as prior information.
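To make the bandit framing concrete, the sketch below runs Thompson sampling on a hypothetical two-arm trial with Bernoulli (response/no response) outcomes. This is a minimal textbook illustration only, not the design of any specific trial or any of our lab's algorithms; the response rates and sample size are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def thompson_allocate(successes, failures):
    """Draw one sample from each arm's Beta posterior and pick the best arm."""
    samples = [rng.beta(s + 1, f + 1) for s, f in zip(successes, failures)]
    return int(np.argmax(samples))

# Hypothetical two-arm trial: the true response rates are unknown to the algorithm.
true_rates = [0.3, 0.5]      # control vs. new treatment (illustrative values)
successes = [0, 0]
failures = [0, 0]

for patient in range(500):
    arm = thompson_allocate(successes, failures)
    responded = rng.random() < true_rates[arm]   # observe this patient's outcome
    if responded:
        successes[arm] += 1
    else:
        failures[arm] += 1

# As evidence accumulates, allocation shifts toward the better-performing arm,
# while the occasional draw from the other arm keeps exploration alive.
```

Note how the interim analysis here is implicit: every patient's outcome immediately updates the posterior used to allocate the next patient, which is the simplest possible form of the adaptive designs discussed above.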
Learning in adaptive clinical trials still faces several unique challenges that have not been well addressed, which may have contributed to their lack of adoption in actual clinical trials. In particular, the safety constraints resulting from ethical and societal considerations have been insufficiently researched.
Furthermore, constructing actual trials using adaptive methods powered by machine learning will require convincing both those who conduct the trials (e.g. pharmaceutical companies) and those who assess the results of the trials (e.g. regulatory agencies) that the substantial improvements that are possible justify the changes to the way trials are presently conducted.
Our lab’s publications related to next-generation clinical trials
Next-generation clinical trial design and implementation is one of our lab's key research priorities, and our lab has developed an array of novel approaches. Much of this work builds on a solid foundation of roughly 10 years of expertise with multi-armed bandits (related publications can be found here).
As mentioned previously, this page is a work in progress and only presents a basic and partial view of our lab’s vision and research to date. A few of our existing publications are provided below, but our work is ongoing. If you would like to track our publications related to next-generation clinical trials on an ongoing basis, you can do so using this URL.
SDF-Bayes: Cautious Optimism in Safe Dose-Finding Clinical Trials with Drug Combinations and Heterogeneous Patient Groups
Hyun-Suk Lee, Cong Shen, William R. Zame, Jang-Won Lee, Mihaela van der Schaar
Phase I clinical trials are designed to test the safety (non-toxicity) of drugs and find the maximum tolerated dose (MTD). This task becomes significantly more challenging when multiple-drug dose-combinations (DC) are involved, due to the inherent conflict between the exponentially increasing DC candidates and the limited patient budget.
This paper proposes a novel Bayesian design, SDF-Bayes, for finding the MTD for drug combinations in the presence of safety constraints. Rather than the conventional principle of escalating or de-escalating the current dose of one drug (perhaps alternating between drugs), SDF-Bayes proceeds by cautious optimism: it chooses the next DC that, on the basis of current information, is most likely to be the MTD (optimism), subject to the constraint that it only chooses DCs that have a high probability of being safe (caution).
We also propose an extension, SDF-Bayes-AR, that accounts for patient heterogeneity and enables heterogeneous patient recruitment. Extensive experiments based on both synthetic and real-world datasets demonstrate the advantages of SDF-Bayes over state-of-the-art DC trial designs in terms of accuracy and safety.
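To give a feel for the caution/optimism interplay, here is a minimal single-drug sketch in the spirit of escalation-with-overdose-control designs. It is not the SDF-Bayes algorithm (which handles drug combinations and patient heterogeneity); the priors, thresholds, and interim data are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

TARGET = 0.30   # target toxicity rate defining the MTD
ALPHA = 0.25    # caution: maximum allowed posterior probability of overdosing

def choose_dose(tox_counts, n_counts, n_draws=20_000):
    """Pick the next dose under a simplified cautious-optimism rule.

    Each dose level's toxicity rate gets an independent Beta(1, 1) prior
    updated with observed toxicities. Caution: exclude doses whose posterior
    probability of exceeding the target toxicity is above ALPHA. Optimism:
    among the remaining doses, pick the one whose posterior mean toxicity is
    closest to the target, i.e. the most plausible MTD candidate.
    """
    tox = np.asarray(tox_counts, dtype=float)
    n = np.asarray(n_counts, dtype=float)
    draws = rng.beta(1 + tox, 1 + n - tox, size=(n_draws, len(n)))
    p_over = (draws > TARGET).mean(axis=0)     # P(toxicity > TARGET | data)
    post_mean = (1 + tox) / (2 + n)            # Beta posterior mean
    admissible = p_over <= ALPHA
    if not admissible.any():                   # nothing safe enough: lowest dose
        return 0
    dist = np.abs(post_mean - TARGET)
    dist[~admissible] = np.inf
    return int(np.argmin(dist))

# Hypothetical interim data for four dose levels: (toxicities, patients treated).
next_dose = choose_dose(tox_counts=[0, 1, 5, 9], n_counts=[10, 10, 10, 10])
```

With this interim data the rule rejects the two highest levels as unsafe and, between the remaining two, picks the level whose estimated toxicity sits closest to the target.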
SyncTwin: Treatment Effect Estimation with Longitudinal Outcomes
Most medical observational studies estimate causal treatment effects using electronic health records (EHR), in which a patient’s covariates and outcomes are both observed longitudinally. However, previous methods focus only on adjusting for the covariates while neglecting the temporal structure in the outcomes.
To bridge the gap, this paper develops a new method, SyncTwin, that learns a patient-specific time-constant representation from the pre-treatment observations. SyncTwin issues counterfactual predictions for a target patient by constructing a synthetic twin that closely matches the target in representation. The reliability of the estimated treatment effect can be assessed by comparing the observed and synthetic pre-treatment outcomes, and medical experts can interpret the estimate by examining the individuals that contribute most to the synthetic twin.
In the real-data experiment, SyncTwin successfully reproduced the findings of a randomized controlled clinical trial using observational data, which demonstrates its usability with complex real-world EHR data.
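The matching step behind a synthetic twin can be sketched as a small synthetic-control-style least-squares problem: find non-negative weights over control patients, summing to one, whose weighted representation reproduces the target's. In SyncTwin the representations are learned from pre-treatment data by a dedicated encoder; in this illustration they are simply given, and all variable names and values are hypothetical.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def synthetic_twin_weights(controls, target, steps=2000, lr=0.1):
    """Projected gradient descent: simplex weights b with controls.T @ b ≈ target."""
    n = controls.shape[0]
    b = np.full(n, 1.0 / n)
    for _ in range(steps):
        grad = controls @ (controls.T @ b - target)
        b = project_simplex(b - lr * grad)
    return b

# Hypothetical representations: three control patients and one target patient.
controls = np.array([[0.0, 0.0],
                     [1.0, 0.0],
                     [0.0, 1.0]])
target = np.array([0.5, 0.25])
w = synthetic_twin_weights(controls, target)

# The twin's outcome is the same weighted average of the controls' outcomes,
# serving as the target patient's counterfactual (untreated) prediction.
control_outcomes = np.array([1.0, 2.0, 3.0])
counterfactual = float(w @ control_outcomes)
```

Because the weights live on the simplex, most end up exactly zero, so the twin is built from a handful of identifiable contributors, which is what makes the estimate inspectable by clinicians.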
Integrating Expert ODEs into Neural ODEs: Pharmacology and Disease Progression
Modeling a system’s temporal behaviour in reaction to external stimuli is a fundamental problem in many areas. Pure Machine Learning (ML) approaches often fail in the small sample regime and cannot provide actionable insights beyond predictions.
A promising modification has been to incorporate expert domain knowledge into ML models. The application we consider is predicting the progression of disease under medications, where a plethora of domain knowledge is available from pharmacology. Pharmacological models describe the dynamics of carefully-chosen medically meaningful variables in terms of systems of Ordinary Differential Equations (ODEs).
However, these models only describe a limited collection of variables, and these variables are often not observable in clinical environments. To close this gap, we propose the latent hybridisation model (LHM) that integrates a system of expert-designed ODEs with machine-learned Neural ODEs to fully describe the dynamics of the system and to link the expert and latent variables to observable quantities.
We evaluated LHM on synthetic data as well as real-world intensive care data of COVID-19 patients. LHM consistently outperforms previous works, especially when few training samples are available such as at the beginning of the pandemic.
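The hybrid idea can be sketched with plain Euler integration: an expert pharmacokinetic ODE governs a medically meaningful variable, while additional latent states (machine-learned Neural ODEs in LHM, replaced here by a fixed linear system purely for illustration) link it to other quantities. All rates, matrices, and function names below are hypothetical.

```python
import numpy as np

def expert_ode(x, dose):
    """Expert knowledge: one-compartment pharmacokinetics, where drug
    concentration x is eliminated at rate k_e and increased by dosing."""
    k_e = 0.3  # hypothetical elimination rate
    return -k_e * x + dose

def latent_dynamics(z, x):
    """Stand-in for the machine-learned component: in LHM this would be a
    neural ODE over latent states; here it is a fixed linear system."""
    A = np.array([[-0.1, 0.4],
                  [0.0, -0.2]])
    drive = np.array([0.0, 1.0]) * x   # concentration feeds the second latent
    return A @ z + drive

def simulate(x0, z0, doses, dt=0.1):
    """Jointly integrate expert and latent states with explicit Euler steps."""
    x, z = x0, np.asarray(z0, dtype=float)
    xs, zs = [], []
    for dose in doses:
        x = x + dt * expert_ode(x, dose)
        z = z + dt * latent_dynamics(z, x)
        xs.append(x)
        zs.append(z.copy())
    return np.array(xs), np.array(zs)

# Hypothetical regimen: dosing for the first 10 steps, then washout.
doses = np.array([1.0] * 10 + [0.0] * 40)
xs, zs = simulate(x0=0.0, z0=[0.0, 0.0], doses=doses)
```

In LHM the latent system is parameterised by a neural network and trained end-to-end against the observable quantities, while the expert component keeps the medically meaningful variables interpretable and data-efficient.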
Robust Recursive Partitioning for Heterogeneous Treatment Effects with Uncertainty Quantification
Subgroup analysis of treatment effects plays an important role in applications from medicine to public policy to recommender systems. It allows physicians (for example) to identify groups of patients for whom a given drug or treatment is likely to be effective and groups of patients for which it is not.
Most of the current methods of subgroup analysis begin with a particular algorithm for estimating individualized treatment effects (ITE) and identify subgroups by maximizing the difference across subgroups of the average treatment effect in each subgroup. These approaches have several weaknesses: they rely on a particular algorithm for estimating ITE, they ignore (in)homogeneity within identified subgroups, and they do not produce good confidence estimates.
This paper develops a new method for subgroup analysis, R2P, that addresses all these weaknesses. R2P uses an arbitrary, exogenously prescribed algorithm for estimating ITE and quantifies the uncertainty of the ITE estimation, using a construction that is more robust than other methods.
Experiments using synthetic and semi-synthetic datasets (based on real data) demonstrate that R2P constructs partitions that are simultaneously more homogeneous within groups and more heterogeneous across groups than the partitions produced by other methods. Moreover, because R2P can employ any ITE estimator, it also produces much narrower confidence intervals with a prescribed coverage guarantee than other methods.
Contextual Constrained Learning for Dose-Finding Clinical Trials
Clinical trials in the medical domain are constrained by budgets, so the number of patients that can be recruited is limited. When a patient population is heterogeneous, this creates difficulties in learning subgroup-specific responses to a particular drug, especially across a variety of dosages. In addition, patient recruitment can be made difficult by the fact that clinical trials do not aim to provide a benefit to any given patient in the trial.
In this paper, we propose C3T-Budget, a contextual constrained clinical trial algorithm for dose-finding under both budget and safety constraints. The algorithm aims to maximize drug efficacy within the clinical trial while also learning about the drug being tested. C3T-Budget recruits patients with consideration of the remaining budget, the remaining time, and the characteristics of each group, such as the population distribution, estimated expected efficacy, and estimation credibility. In addition, the algorithm aims to avoid unsafe dosages.
These characteristics are further illustrated in a simulated clinical trial study, which corroborates the theoretical analysis and demonstrates an efficient budget usage as well as a balanced learning-treatment trade-off.
Sequential Patient Recruitment and Allocation for Adaptive Clinical Trials
Onur Atan, William R. Zame, Mihaela van der Schaar
Randomized Controlled Trials (RCTs) are the gold standard for comparing the effectiveness of a new treatment to the current one (the control). Most RCTs allocate the patients to the treatment group and the control group by uniform randomization.
We show that this procedure can be highly sub-optimal (in terms of learning) if – as is often the case – patients can be recruited in cohorts (rather than all at once), the effects on each cohort can be observed before recruiting the next cohort, and the effects are heterogeneous across identifiable subgroups of patients.
We formulate the patient allocation problem as a finite stage Markov Decision Process in which the objective is to minimize a given weighted combination of type-I and type-II errors. Because finding the exact solution to this Markov Decision Process is computationally intractable, we propose an algorithm, Knowledge Gradient for Randomized Controlled Trials (RCT-KG), that yields an approximate solution.
Our experiment on a synthetic dataset with Bernoulli outcomes shows that for a given size of trial our method achieves significant reduction in error, and to achieve a prescribed level of confidence (in identifying whether the treatment is superior to the control), our method requires many fewer patients.
Machine learning for clinical trials in the era of COVID-19
The world is in the midst of a pandemic. We still know little about the disease COVID-19 or about the virus (SARS-CoV-2) that causes it. We do not have a vaccine or a treatment (aside from managing symptoms). We do not know if recovery from COVID-19 produces immunity, and if so for how long, hence we do not know if “herd immunity” will eventually reduce the risk or if a successful vaccine can be developed – and this knowledge may be a long time coming.
In the meantime, the COVID-19 pandemic is presenting enormous challenges to medical research, and to clinical trials in particular. This paper identifies some of those challenges and suggests ways in which machine learning can help in response to those challenges.
We identify three areas of challenge: ongoing clinical trials for non-COVID-19 drugs; clinical trials for repurposing drugs to treat COVID-19, and clinical trials for new drugs to treat COVID-19. Within each of these areas, we identify aspects for which we believe machine learning can provide invaluable assistance.
Learning for Dose Allocation in Adaptive Clinical Trials with Safety Constraints
Cong Shen, Zhiyang Wang, Sofia Villar, Mihaela van der Schaar
Phase I dose-finding trials are increasingly challenging as the relationship between efficacy and toxicity of new compounds (or combinations of them) becomes more complex. Despite this, most commonly used methods in practice focus on identifying a Maximum Tolerated Dose (MTD) by learning only from toxicity events.
We present a novel adaptive clinical trial methodology, called Safe Efficacy Exploration Dose Allocation (SEEDA), that aims at maximizing the cumulative efficacies while satisfying the toxicity safety constraint with high probability.
We evaluate performance objectives that have operational meanings in practical clinical trials, including cumulative efficacy, recommendation/allocation success probabilities, toxicity violation probability, and sample efficiency. An extended SEEDA-Plateau algorithm that is tailored for the increase-then-plateau efficacy behavior of molecularly targeted agents (MTA) is also presented.
Through numerical experiments using both synthetic and real-world datasets, we show that SEEDA outperforms state-of-the-art clinical trial designs by finding the optimal dose with higher success rate and fewer patients.
Learn more and get involved
Our research related to adaptive clinical trials is closely linked to the problem of individualized treatment effect (ITE) inference—another of the lab’s core areas of focus. To learn more about our work on ITE inference, visit our dedicated research pillar page.
We would encourage you to stay abreast of ongoing developments in this and other areas of machine learning for healthcare by signing up to take part in one of our two streams of online engagement sessions.
If you are a practicing clinician, please sign up for Revolutionizing Healthcare, which is a forum for members of the clinical community to share ideas and discuss topics that will define the future of machine learning in healthcare (no machine learning experience required).
If you are a machine learning student, you can join our Inspiration Exchange engagement sessions, in which we introduce and discuss new ideas and development of new methods, approaches, and techniques in machine learning for healthcare.
A full list of our papers on this and related topics can be found here.