van der Schaar Lab

Time series in healthcare: challenges and solutions

The transformation of healthcare through machine learning depends heavily on the successful application of time series data to model longitudinal trajectories for health and disease. As explained below, this is an extremely challenging undertaking that has arguably received insufficient consideration to date.

This page showcases our lab’s work in developing machine learning models for a wide range of purposes across the time series setting. It is a living document, the content of which will evolve as we continue to develop approaches and build a vision for this new research area.

This page is authored and maintained by Mihaela van der Schaar and Nick Maxfield.

Our time series software: TemporAI

TemporAI is our machine learning-centric time series library for medicine, focusing on tasks such as time series prediction, time-to-event (survival) analysis with time series data, and counterfactual inference (treatment effects).

An AAAI 2022 tutorial on this topic was given by Mihaela van der Schaar and Fergus Imrie on February 23, 2022.

The full talk can be found below, and is highly recommended viewing for anyone who would like to know more about building models for time series in machine learning for healthcare.

On July 24, 2021, Mihaela van der Schaar gave an invited talk entitled “Time-series in healthcare: challenges and solutions” as part of the ICML 2021 Time Series Workshop.

Time series: the backbone of bespoke medicine

Time series datasets such as electronic health records (EHR) and registries represent valuable (but imperfect) sources of information spanning a patient’s entire lifetime of care. Whether intentionally or not, they capture genetic and lifestyle risks, signal the onset of diseases, show the advent of new morbidities and comorbidities, indicate the time and stage of diagnosis, and document the development of treatment plans, as well as their efficacy.

Using patient data generated at each of these points in the pathway of care, we can develop machine learning models that give us a much deeper and more interconnected understanding of individual trajectories of health and disease—including the number of states needed to provide an accurate representation of a disease, how to infer a patient’s current state, what triggers transitions from one state to another, and much more.

Our own lab, for example, has used time series datasets to produce new discoveries and develop an understanding of progression and clinical trajectories across a wide range of diseases, including cancer, cystic fibrosis, Alzheimer’s, cardiovascular disease, and COVID-19, as well as within specific settings such as intensive care.

Armed with a fully quantitative and scientific understanding of the progression of multiple diseases over time, we can unlock the full capabilities of machine learning to create long-term comprehensive patient management programs that evolve with each individual’s changing context and history and consider not only a single risk but multiple risks (and the evolution of such risks over time).

This kind of truly personalized end-to-end medical care is what our lab refers to as bespoke medicine. Whereas current approaches to precision or personalized medicine tend to fit the patient to a pattern (based, for example, on genetic information), bespoke medicine seeks to recognize and adapt to changes in patterns caused by age, lifestyle changes, onset of new conditions, and progress in the course of treatment.

The multi-faceted nature of time series

The development of models for time series is a complex, hard-to-define research task that touches every other area of machine learning for healthcare—including dynamic forecasting, survival analysis, clustering and phenotyping, screening and monitoring, early diagnosis, and treatment effect estimation.


It may be tempting to try to simplify this complexity by positioning time series as merely a dimension of all other areas within machine learning for healthcare, rather than as a research domain in its own right. Such a prospect would perhaps make sense if static models for healthcare could be readily and uniformly “upgraded” to dynamic versions capable of meaningfully incorporating time series data, regardless of the clinical problem at hand. In reality, as the many examples given below will demonstrate, this is not the case.

Static and dynamic models are fundamentally different animals, and the problems we aim to solve in healthcare are so diverse that there is no single approach to making them all work with time series data. We cannot, therefore, treat time series as a dimension of all kinds of healthcare problems.

Additionally, there are some model development challenges that are unique to the time series setting, and these further justify the treatment of time series as a domain of research in its own right. These challenges (some of which are shown at the bottom of the figure below and explored on this page) are generally related to the need to ensure that the outputs of models are actionable, informative, and reliable.


Tailoring development of time series models to healthcare challenges

In the section below, we will introduce a number of problems in healthcare, and highlight the distinct challenges they present when developing machine learning models for time series.

Forecasting disease trajectories

Chronic diseases such as cardiovascular disease, cancer, and diabetes progress slowly throughout a patient’s lifetime. This progression can be segmented into “stages” that manifest through clinical observations. A growing area in precision medicine is the forecasting of personalized disease trajectories using patterns in temporal correlations and associations between related diseases.

Our aim here is to build disease progression models from electronic health records and other informative datasets, to learn the model parameters at training time, and then to issue personalized dynamic forecasts. In addition to providing accurate forecasts for the patient at hand, we should be able to make new discoveries regarding disease progression mechanisms at the population level, at the sub-group level, and at the personalized level.

Dynamic forecasting in the time series setting presents an array of unique challenges. For example, we must work with multiple streams of measurement that are often sparse and irregularly (and informatively) sampled. It is also necessary to forecast multiple outcomes rather than a single outcome, and these outcomes themselves may change over time since patients with one chronic disease typically develop other long-term conditions. An additional challenge lies in the fact that true clinical states tend to be inherently unobservable—the timing of diagnosis, for example, may not reliably indicate the timing of disease onset. Furthermore, it is important to factor in the heterogeneity of patients, which may lead to many possible patterns from which to learn.

The figure above illustrates how understanding of disease stages and progression can be used to predict the likelihood of onset of comorbidities. This example shows a machine learning model’s learned representation of 3 progression stages for cystic fibrosis. The left-hand side shows the estimated mean of the emission distribution for the forced expiratory volume (FEV1) biomarker in each stage. The right-hand side plots the risks of various comorbidities (diabetes, asthma, ABPA, hypertension, and depression) for patients in the 3 progression stages.
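
Before any model can learn from such data, the multiple sparse, irregularly sampled streams described above need an explicit representation. The sketch below (with entirely hypothetical patients, variables, and values) shows one common preprocessing step: pivoting long-format records onto a shared grid and keeping an explicit observation mask, so that informative missingness remains visible to the model.

```python
import numpy as np
import pandas as pd

# Hypothetical long-format EHR extract: one row per (patient, day, variable).
# Streams are sparse and irregularly sampled, so timestamps rarely align.
records = pd.DataFrame({
    "patient":  [0, 0, 0, 0, 1, 1],
    "day":      [0, 0, 14, 30, 3, 40],
    "variable": ["fev1", "crp", "fev1", "crp", "fev1", "fev1"],
    "value":    [2.9, 5.1, 2.7, 12.0, 3.4, 3.1],
})

# One column per stream; unobserved (patient, day, variable) combinations
# become NaN rather than being silently dropped.
wide = records.pivot_table(index=["patient", "day"],
                           columns="variable", values="value")

# An explicit observation mask keeps *informative* missingness visible:
# whether a test was ordered at all is itself a signal.
mask = wide.notna().astype(int)
print(wide)
print(mask)
```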

One of our lab’s approaches to overcoming such challenges is attentive state-space modeling (ASSM), first introduced in a paper published at NeurIPS 2019. ASSM was developed to learn accurate and interpretable structured representations for disease trajectories, and offers a deep probabilistic model of disease progression that capitalizes on both the interpretable structured representations of probabilistic models and the predictive strength of deep learning methods.

Unlike conventional Markovian state-space models, ASSM uses recurrent neural networks (RNNs) to capture more complex state dynamics. Since it learns hidden disease states from observational data in an unsupervised fashion, ASSM is well-suited to EHR data, where a patient’s record is seldom annotated with “labels” indicating their true health state.

As implied by the name, ASSM captures state dynamics through an attention mechanism, which observes the patient’s clinical history and maps it to attention weights that determine how much influence previous disease states have on future state transitions. In that sense, attention weights generated for an individual patient explain the causative and associative relationships between the hidden disease states and the past clinical events for that patient.

ASSM also features a structured inference network trained to predict posterior state distributions by mimicking the attentive structure of our model. The inference network shares attention weights with the generative model, and uses those weights to create summary statistics needed for posterior state inference.

To the best of our knowledge, ASSM is the first deep probabilistic model that provides clinically meaningful latent representations, with non-Markovian state dynamics that can be made arbitrarily complex while remaining interpretable.
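
As a rough illustration of the attentive transition mechanism, the toy sketch below (our own simplification, not the lab’s implementation) shows how attention weights over a patient’s past stage distributions can yield a non-Markovian distribution over the next stage; all numbers are invented.

```python
import numpy as np

K = 3                                 # number of latent disease stages
P = np.array([[0.85, 0.10, 0.05],     # baseline stage-to-stage transition matrix
              [0.05, 0.80, 0.15],
              [0.00, 0.10, 0.90]])

# Posterior stage distributions inferred at the patient's previous visits.
past_states = np.array([[0.9, 0.1, 0.0],
                        [0.6, 0.3, 0.1],
                        [0.2, 0.5, 0.3]])

# In ASSM the attention weights are produced by an RNN that reads the
# patient's clinical history; here they are fixed toy values summing to 1.
attention = np.array([0.1, 0.3, 0.6])

# Non-Markovian transition: every past state can influence the next one,
# in proportion to the attention the history assigns to it.
next_state = attention @ past_states @ P
print(next_state, next_state.sum())   # a valid distribution over the stages
```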

Attentive state-space modeling of disease progression

Ahmed Alaa, Mihaela van der Schaar

NeurIPS 2019

Abstract

Time-to-event and survival analysis

Survival analysis (often referred to as time-to-event analysis) refers to the study of the duration until one or more events occur. This is vital to a great many predictive tasks across numerous fields of application, including economics, finance, and engineering—and, of course, healthcare.

In the medical setting, survival analysis is often applied to the discovery of risk factors affecting survival, comparison among risks of different subjects at a certain time of interest, and decisions related to cost-efficient acquisition of information (e.g. screening for cancer). Specifically, our goal is to dynamically estimate the probability of occurrence of various types of future events happening at a particular time in the future, while taking competing risks into account.

To do this, it is essential to incorporate longitudinal measurements of biomarkers and risk factors into a model. Rather than discarding valuable information recorded over time, this allows us to make better risk assessments of clinical events. This is why our lab developed Dynamic-DeepHit, a novel architecture presented in IEEE Transactions on Biomedical Engineering in 2019.

While inheriting the neural network structure of its predecessor DeepHit (introduced in a paper published at AAAI 2018) and maintaining the ability to handle competing risks, Dynamic-DeepHit learns, on the basis of the available longitudinal measurements, a data-driven distribution of first event times of competing events. This completely removes the need for explicit model specifications (i.e., no assumptions about the form of the underlying stochastic processes are made) and enables us to learn the complex relationships between trajectories and survival probabilities.

As shown above, Dynamic-DeepHit updates its survival predictions (presented as cumulative incidence functions) as new observations are collected over time.

Gray solid lines, yellow dotted lines, and stars indicate times at which measurements are taken, the time at which a patient is censored, and the time at which an event occurred, respectively.

A temporal attention mechanism is employed in the hidden states of the RNN structure when constructing the context vector. This allows Dynamic-DeepHit to access the necessary information, which has progressed along with the trajectory of the past longitudinal measurements, by paying attention to relevant hidden states across different time stamps. The cause-specific subnetworks then take the context vector and the last measurements as an input, and estimate the joint distribution of the first event time and competing events, which are used for further risk predictions.
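
For intuition on how such an output becomes a risk prediction, here is a minimal sketch assuming, as in the DeepHit family, that the network emits a joint probability mass function over event types and discretized first-event times; the cause-specific cumulative incidence function is then a running sum over time bins. The logits below are random placeholders rather than real network outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_events, n_bins = 2, 10              # competing risks, discretized horizon

# Stand-in for the network's softmax output: a joint probability mass
# function over (event type, first-event time bin) for one patient.
logits = rng.normal(size=(n_events, n_bins))
pmf = np.exp(logits) / np.exp(logits).sum()

# Cause-specific cumulative incidence function: the probability that event
# k has occurred by time bin t, in the presence of the competing event.
cif = np.cumsum(pmf, axis=1)
print(cif[0])                         # dynamic risk profile for event 0
print(cif.sum(axis=0)[-1])            # total event probability by the horizon
```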

Dynamic-DeepHit: a deep learning approach for dynamic survival analysis with competing risks based on longitudinal data

Changhee Lee, Jinsung Yoon, Mihaela van der Schaar

IEEE Transactions on Biomedical Engineering, 2019

Abstract

Additionally, in a paper published in ACM Transactions on Computing for Healthcare in 2020, our lab presented Bayesian nonparametric dynamic survival (BNDS), a model capable of 1) quantifying the uncertainty around model predictions in a principled manner at the individual level, while 2) avoiding assumptions about the data-generating process and adapting model complexity to the structure of the data. Both contributions are particularly important for personalizing healthcare decisions, as predictions may be uncertain due to lack of data, whereas we can expect the underlying heterogeneous physiological time series to vary wildly across patients.

BNDS can use sparse longitudinal data to give personalized survival predictions that are updated as new information is recorded. Our approach has the advantage of not imposing any constraints on the data generating process, which, together with novel postprocessing statistics, expands the capabilities of current methods.

Flexible modelling of longitudinal medical data: a Bayesian nonparametric approach

Alexis Bellot, Mihaela van der Schaar

ACM Transactions on Computing for Healthcare, 2020

Abstract

See also: A hierarchical Bayesian model for personalized survival predictions
(Alexis Bellot, Mihaela van der Schaar; IEEE Journal of Biomedical and Health Informatics, 2019)

To learn more about our lab’s research in the area of survival analysis, competing risks, and comorbidities, click here.

Clustering and phenotyping

Phenotyping and identifying subgroups of patients are important challenges that become particularly complicated in a dynamic setting where longitudinal datasets are in use. At present, the conventional notion of clustering and phenotyping examines similarities in time series observations, clustering patients together based on the observations about them to date.

However, this type of clustering yields information that is of relatively limited use to clinicians and patients—after all, chronic diseases such as cystic fibrosis and dementia are heterogeneous in nature, with widely differing outcomes, even in narrow patient subgroups.

What clinicians and patients actually need to know is what types of events (including events related to competing risks) will likely occur in the future. We are, therefore, interested in a type of clustering or phenotyping over time in which patients are grouped based on similarity of future outcomes, rather than on similarity of observations.

Identifying patient subgroups with similar progression patterns can be advantageous for understanding such heterogeneous diseases. This allows clinicians to anticipate patients’ prognoses by comparing them to “similar” patients, and to design treatment guidelines tailored to homogeneous subgroups.

Our lab has developed a method for temporal phenotyping in this manner using deep predictive clustering of disease progression, as presented at ICML 2020. This provides a notion of temporal phenotyping that is predictive of similar future outcomes, on the basis of which doctors and patients can actively plan. The focus here is on learning discrete representations of past observations that best describe and predict future events and outcomes of interest.
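
The sketch below conveys the core idea in highly simplified form (the module shapes and names are ours, not the paper’s): patient histories are embedded, snapped to the nearest cluster centroid, and future outcomes are predicted from the centroid alone, pushing clusters to be homogeneous in future outcomes rather than in past observations.

```python
import torch
import torch.nn as nn

T, D, K, H, n_outcomes = 12, 5, 4, 16, 3

encoder = nn.GRU(D, H, batch_first=True)   # embeds each patient history
centroids = nn.Parameter(torch.randn(K, H))
outcome_head = nn.Linear(H, n_outcomes)    # predicts future outcomes

x = torch.randn(8, T, D)                   # a batch of 8 patient histories
_, h = encoder(x)
z = h.squeeze(0)                           # one embedding per patient

assign = torch.cdist(z, centroids).argmin(dim=1)   # nearest-centroid cluster
logits = outcome_head(centroids[assign])   # outcomes predicted from the
                                           # centroid alone, so clusters must
                                           # be homogeneous in future outcomes
y = torch.randint(0, n_outcomes, (8,))     # toy future-outcome labels
loss = nn.functional.cross_entropy(logits, y)
loss.backward()                            # here gradients reach only the
                                           # centroids and outcome head; the
                                           # full method also trains the
                                           # encoder via additional terms
```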

Temporal phenotyping using deep predictive clustering of disease progression

Changhee Lee, Mihaela van der Schaar

ICML 2020

Abstract

See also: Outcome-oriented deep temporal phenotyping of disease progression
(Changhee Lee, Jem Rashbass, Mihaela van der Schaar; IEEE Transactions on Biomedical Engineering, 2020)

Screening and monitoring

At the core of the problem of screening and monitoring are multiple questions related to determining, for each individual, what kind of clinical information to acquire, when to begin acquiring it, how frequently to do so, and the means by which this information should be acquired.

Our objective here is to move from a one-size-fits-all screening and monitoring setting into a personalized setting. We must consider that monitoring and screening are costly, both from a monetary perspective and from the perspective of the patient (as certain forms of screening may incur detrimental side effects). Accordingly, we require tools that can optimally balance the benefit of acquiring specific information for each individual at a particular time against the cost of acquiring that information.

This is a challenging undertaking, however, since the value of information is unknown and changes dynamically when we are working in the time series setting; this needs to be learned on the basis of the available data.

To solve this problem, our lab developed a deep learning architecture called Deep Sensing, which was first introduced in a paper for ICLR 2018. At training time, Deep Sensing uses a neural network to learn how to build predictions at various cost-performance points. In doing so, we can create multiple representations associated with different measurements and costs. These are learned recursively and adaptively by introducing missing data at different points in time, letting us model the different cost-benefit trade-offs for different classes of patients and (by extension) the value of information over time as the patient progresses. At runtime, the operator prescribes a performance level or a cost constraint, and Deep Sensing determines what measurements to take and what to infer from those measurements, and then issues predictions.
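
As a crude illustration of the runtime trade-off only (not the Deep Sensing algorithm itself, which learns the value of information with recurrent networks), the sketch below greedily acquires measurements by estimated value-of-information per unit cost until a prescribed budget is exhausted; all names and numbers are hypothetical.

```python
candidates = {"blood_panel": (0.30, 5.0),   # name: (estimated VoI, cost)
              "chest_ct":    (0.45, 40.0),
              "spirometry":  (0.20, 2.0)}
budget = 10.0                               # prescribed cost constraint

acquired, spent = [], 0.0
# Greedy acquisition by value-of-information per unit cost.
for name, (voi, cost) in sorted(candidates.items(),
                                key=lambda kv: kv[1][0] / kv[1][1],
                                reverse=True):
    if spent + cost <= budget:
        acquired.append(name)
        spent += cost

print(acquired, spent)   # measurements to take now; prediction follows
```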

Deep Sensing: Active Sensing using Multi-directional Recurrent Neural Networks

Jinsung Yoon, William R. Zame, Mihaela van der Schaar

ICLR 2018

Abstract

A number of other papers to consider in this area are listed below.

ASAC: Active Sensing using Actor-Critic models

Jinsung Yoon, James Jordon, Mihaela van der Schaar

Machine Learning for Healthcare (MLHC) 2019

Abstract

ConfidentCare: A Clinical Decision Support System for Personalized Breast Cancer Screening

Ahmed Alaa, Kyeong Moon, William Hsu, Mihaela van der Schaar

IEEE Transactions on Multimedia, 2016

Abstract

Disease-Atlas: Navigating Disease Trajectories using Deep Learning

Bryan Lim, Mihaela van der Schaar

MLHC 2018

Abstract

Early diagnosis

The ability to identify a disease such as breast cancer early in a patient can lead to better and more effective treatment, potentially saving lives. Since diagnosis is the practice of inferring a patient’s current disease state, diagnosing any disease early requires us to correctly understand and characterize the staging and likely trajectory of that disease.

This is only possible, however, if we already possess a thorough understanding of disease trajectories gained from time series data. It is crucial, for example, to determine how many stages each disease has, how individuals may progress through disease stages differently, what triggers transitions between stages, and how long individuals may remain in each stage. Armed with such understanding, we can successfully predict and diagnose disease early on the basis of changing patient characteristics, symptoms, and morbidities.

Using approaches such as the attentive state-space model (ASSM) introduced above, we can learn the trajectories of diseases, as well as the symptoms and morbidities that are likely to precede diagnosis. This can then be used to build dynamic forecasting models that are personalized and interpretable, and provide more accurate and effective screening and diagnosis. Another relevant approach in this regard is the hidden absorbing semi-Markov model, introduced by our lab in a paper published in the Journal of Machine Learning Research in 2018.

A Hidden Absorbing Semi-Markov Model for Informatively Censored Temporal Data: Learning and Inference

Ahmed Alaa, Mihaela van der Schaar

JMLR, 2018

Abstract

Treatment effects

A major challenge in the domain of healthcare is ascertaining whether a given treatment influences or determines an outcome—for instance, whether there is a survival benefit to prescribing a certain medication, such as the ability of a statin to lower the risk of cardiovascular disease.

Our goal is to use machine learning to estimate the effect of a treatment on an individual using observational data. Observational datasets contain valuable information on complex time-dependent treatment scenarios, such as where the efficacy of treatments changes over time (for example, drug resistance in cancer patients), or where patients receive multiple interventions administered at different points in time (such as joint prescriptions of chemotherapy and radiotherapy).

Estimating the effects of treatments over time therefore offers unique opportunities, such as understanding how diseases evolve under different treatment plans, how individual patients respond to medication over time, and which timings may be optimal for assigning treatments, thus providing new tools to improve clinical decision support systems.

This is very challenging in the time series setting, due to the need to deal with time-dependent confounders (i.e., patient covariates that affect the treatment assignments and are themselves affected by past treatments) which bias the treatment assignment in the observational dataset.

In a NeurIPS 2018 paper entitled “Forecasting Treatment Responses Over Time Using Recurrent Marginal Structural Networks,” we proposed a new deep learning model, which we refer to as recurrent marginal structural networks (RMSN). Drawing inspiration from marginal structural models, a class of methods in epidemiology that use propensity weighting to adjust for time-dependent confounders, RMSN adopts a sequence-to-sequence architecture to directly learn time-dependent treatment responses from observational data.

We used two sets of deep neural networks to build our RMSN: 1) a set of propensity networks to compute treatment probabilities for inverse probability of treatment weighting, and 2) a prediction network used to determine the treatment response for a given set of planned interventions.
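
For intuition, the sketch below computes stabilized inverse-probability-of-treatment weights from per-time-step treatment probabilities, which is the quantity the propensity networks are trained to supply; the probabilities here are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 6

# Per-time-step probabilities of the treatment the patient actually
# received: the numerator model sees only past treatments, the denominator
# model also sees time-dependent confounders. In RMSN both are supplied by
# recurrent propensity networks; random stand-ins are used here.
p_num = rng.uniform(0.4, 0.9, size=T)
p_den = rng.uniform(0.3, 0.9, size=T)

# Stabilized weight at time t: running product of numerator/denominator
# ratios up to t. Reweighting by sw mimics a dataset in which treatment
# assignment is not confounded by time-dependent covariates.
sw = np.cumprod(p_num / p_den)

# Weights are commonly truncated to control variance.
sw = np.clip(sw, np.percentile(sw, 1), np.percentile(sw, 99))
print(sw)
```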

Using simulations of a state-of-the-art pharmacokinetic pharmacodynamic (PK-PD) model of tumor growth, we demonstrated the ability of our network to accurately learn unbiased treatment responses from observational data—even under changes in the policy of treatment assignments—and performance gains over benchmarks.

Forecasting Treatment Responses Over Time Using Recurrent Marginal Structural Networks

Bryan Lim, Ahmed Alaa, Mihaela van der Schaar

NeurIPS 2018

Abstract

In an ICLR 2020 paper entitled “Estimating counterfactual treatment outcomes over time through adversarially balanced representations,” we introduced the Counterfactual Recurrent Network (CRN), a novel sequence-to-sequence model that leverages the increasing availability of patient observational data, as well as recent advances in representation learning and domain adversarial training, to estimate treatment effects over time.

To handle the bias from time-varying confounders, CRN uses domain adversarial training to build balancing representations of the patient history. At each time step, CRN constructs a treatment-invariant representation which removes the association between patient history and treatment assignments and thus can be reliably used for making counterfactual predictions.
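
Domain adversarial training of this kind is commonly implemented with a gradient reversal layer; a minimal PyTorch sketch with toy shapes (our illustration, not the CRN code) is shown below.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the
    backward pass, so a treatment classifier trained on top of the
    representation pushes the encoder to *remove* treatment information."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

h = torch.randn(8, 32, requires_grad=True)   # toy patient-history encodings
treatment_head = nn.Linear(32, 2)            # tries to recover the treatment
logits = treatment_head(GradReverse.apply(h, 1.0))
logits.sum().backward()                      # h.grad now carries the
print(h.grad.shape)                          # reversed adversarial signal
```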

Using a model of tumor growth, we validated CRN in realistic medical scenarios, demonstrating that, when compared with existing state-of-the-art methods, CRN achieves lower error in estimating counterfactuals and in choosing the correct treatment and timing of treatment.

Estimating Counterfactual Treatment Outcomes over Time through Adversarially Balanced Representations

Ioana Bica, Ahmed Alaa, James Jordon, Mihaela van der Schaar

ICLR 2020

Abstract

To learn more about our lab’s research in the area of individualized treatment effect inference, click here. We have also produced a video tutorial series on this topic, which is available here.

Unlocking the full potential of time series models

The following section will explore a range of features and attributes that are necessary in order to make AI and machine learning models as useful as possible in the clinical setting. While these features, such as interpretability, are common across the static and dynamic settings alike, the time series setting presents some unique and substantially more complex challenges, as outlined below.

AutoML

For any given prediction or forecasting problem in the clinical setting, we are likely to be able to choose from a range of time series models. However, it is extremely difficult to attempt to manually identify the best model for a particular problem at a particular moment in time, as the effectiveness of any algorithm at any stage will depend on a number of factors, including temporal distribution shifts and changing risk factors over time.

AutoML frameworks are well-suited to this kind of problem, as they are designed to provide optimal model selection. This formed the basis of our lab’s work on Stepwise Model Selection for Sequence Prediction via Deep Kernel Learning (SMS-DKL), which was introduced in an AISTATS 2020 paper.

SMS-DKL is a hyperparameter optimization tool for sequence modeling, and uses a novel Bayesian optimization algorithm to tackle the challenge of model selection in the time series setting. This is accomplished by treating the performance at each time step as its own black-box function. In order to solve the resulting multiple black-box function optimization problems jointly and efficiently, we exploit potential correlations among black-box functions using deep kernel learning (DKL).
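
The sketch below illustrates this framing rather than SMS-DKL itself: each horizon’s validation performance is treated as its own black-box function, modeled here with independent off-the-shelf Gaussian processes instead of a shared deep kernel, and a crude summed upper-confidence-bound acquisition proposes the next hyperparameter. The performance function is synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for "validation performance at horizon t under
# hyperparameter x": one black-box function per time step.
def perf(x, t):
    return -(x - 0.2 * t) ** 2 + 0.05 * rng.normal()

horizons = [1, 2, 3]
X = rng.uniform(0, 1, size=(10, 1))          # hyperparameters tried so far
gps = {t: GaussianProcessRegressor().fit(
          X, np.array([perf(x[0], t) for x in X])) for t in horizons}

# Crude joint acquisition: upper confidence bound summed over horizons.
cand = rng.uniform(0, 1, size=(100, 1))
ucb = sum(m + s for m, s in (gps[t].predict(cand, return_std=True)
                             for t in horizons))
print("next hyperparameter to evaluate:", cand[np.argmax(ucb)])
```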

Stepwise Model Selection for Sequence Prediction via Deep Kernel Learning

Yao Zhang, Daniel Jarrett, Mihaela van der Schaar

AISTATS 2020

Abstract

To learn more about our lab’s research in the area of automated machine learning (AutoML), click here.

Interpretability

There are several reasons to make a “black box” machine learning model for healthcare interpretable. First, an interpretable output can be more readily understood and trusted by its users (for example, clinicians deciding whether to prescribe a treatment), making its outputs more actionable. Second, a model’s outputs often need to be explained by its users to the subjects of those outputs (for example, patients deciding whether to accept a proposed treatment course). Third, by uncovering valuable information that would otherwise have remained hidden within the model’s opaque inner workings, an interpretable output can empower users such as researchers with powerful new insights.

To date, the vast majority of the existing work on interpretability (including our own lab’s work) has focused on the static setting, with very little research having explored interpretability in the time series setting.

At ICML 2021, our lab presented a first model for explaining time series predictions in healthcare. Our method, Dynamask, is specifically designed for multivariate time series and uses saliency masks to identify and highlight important features at each time step.

Dynamask produces instance-wise importance scores for each feature at each time step by fitting a perturbation mask to the input sequence.
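
A simplified sketch of mask fitting in this spirit appears below (our simplification, with a trivial stand-in for the black-box model): a mask with entries in [0, 1] blends each input toward a baseline, and gradient descent searches for a sparse mask that leaves the prediction unchanged, so the surviving entries mark the salient features.

```python
import torch

T, D = 20, 4
x = torch.randn(T, D)                        # one multivariate time series
f = lambda inp: inp.mean(dim=0).sum()        # stand-in for the black-box model
target = f(x).detach()

logit_m = torch.zeros(T, D, requires_grad=True)
opt = torch.optim.Adam([logit_m], lr=0.1)
baseline = x.mean(dim=0, keepdim=True)       # "uninformative" replacement

for _ in range(200):
    m = torch.sigmoid(logit_m)               # mask entries in [0, 1]
    x_pert = m * x + (1 - m) * baseline      # masking = blend toward baseline
    # Keep the prediction unchanged while making the mask sparse.
    loss = (f(x_pert) - target) ** 2 + 0.01 * m.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

saliency = torch.sigmoid(logit_m).detach()   # importance per feature and step
```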

Explaining Time Series Predictions With Dynamic Masks

Jonathan Crabbé, Mihaela van der Schaar

ICML 2021

Abstract

To learn more about our lab’s research in the area of interpretable machine learning, click here.

Discovery and understanding of event data

It is also important to use time series datasets and models to make discoveries and understand event data.

One example of this is the development of personalized morbidity and comorbidity networks that enable us to understand how particular morbidities may trigger other morbidities over time.

Current state-of-the-art morbidity and comorbidity networks in healthcare are only capable of mapping the relationships between different diseases in a static manner at the population level. There is much to be gained from creating models that are both personalized (i.e., they depend on the unique characteristics, such as genetic information, of each specific individual) and dynamic (i.e., they depend on the order in which morbidities occur).

This is what our lab achieved through an approach we call deep diffusion processes (DDP), which allows us to model the temporal relationships between comorbid disease onsets expressed through a dynamic graph. Our work in this area is showcased in a paper published at AISTATS 2020.

DDP offers a deep probabilistic model for diffusion over comorbidity networks based on mutually-interacting point processes. We modeled DDP’s intensity function as a combination of contextualized background risk and networked disease interaction, using a deep neural network to (dynamically) update the disease’s influence on future events. This enables principled predictions based on clinically interpretable parameters which map patient history onto personalized comorbidity networks.

The dynamic comorbidity network learned by DDP for an individual patient at three time steps, together with the corresponding intensity function. Nodes for diseases that have not occurred are colored in gray, and diseases already diagnosed are assigned a distinct color. Edge thickness corresponds to the disease likelihood at the given time step. In the upper left panel, we plot the Jaccard distance of the patient’s network with respect to the average population as a function of time (on a logarithmic scale). The static comorbidity network obtained by counting disease co-occurrences and using the counts as graph edges is depicted on the right panel.
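
The sketch below conveys the flavor of such mutually exciting dynamics in a drastically simplified form (fixed exponential kernels and a hand-written interaction matrix, where DDP uses a deep network): each past disease onset adds a decaying contribution to the onset intensity of related diseases.

```python
import numpy as np

diseases = ["diabetes", "hypertension", "depression"]
mu = np.array([0.02, 0.03, 0.01])     # contextualized background risks
A = np.array([[0.0, 0.4, 0.1],        # A[i, j]: an onset of disease i raises
              [0.2, 0.0, 0.3],        # the future onset intensity of disease j
              [0.0, 0.1, 0.0]])

history = [(0, 2.0), (1, 5.0)]        # (disease index, onset time)

def intensity(t):
    # lambda_j(t) = mu_j + sum over past onsets (i, s) of A[i, j] * exp(-(t - s))
    lam = mu.copy()
    for d_past, s in history:
        if s < t:
            lam += A[d_past] * np.exp(-(t - s))
    return lam

print(dict(zip(diseases, intensity(6.0))))   # per-disease onset intensity now
```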

Learning Dynamic and Personalized Comorbidity Networks from Event Data using Deep Diffusion Processes

Zhaozhi Qian, Ahmed Alaa, Alexis Bellot, Jem Rashbass, Mihaela van der Schaar

AISTATS 2020

Abstract

Uncertainty estimation

For the outputs of a model to be trustworthy and actionable, the model must offer reliable uncertainty estimates. Estimating the uncertainty of a clinical inference is often as important as the prediction itself, as it tells the clinician how much weight to give that prediction.

This is important for both static and dynamic models, but again the latter present an array of unique challenges, and these are not accounted for by commonly used approaches. For example, RNNs typically produce only single point forecasts, whereas we would ideally have sequential confidence intervals. This is why our lab developed an approach based on frequentist uncertainty in recurrent neural networks via blockwise influence functions, which we introduced in a paper published at ICML 2020.

In this setting, we are computing uncertainty estimates by measuring the variability in the resampled RNN outputs. This is achieved by perturbing the model parameters through iterative deletion of blocks of data and retraining the model on the remaining data.
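
The delete-and-retrain recipe itself is simple to state, as the sketch below shows with a deliberately trivial forecaster; the paper’s contribution is to approximate the effect of each block deletion cheaply with influence functions rather than actually refitting the RNN for every block.

```python
import numpy as np

rng = np.random.default_rng(0)
sequences = rng.normal(size=(40, 10))        # 40 training series of length 10

def fit_and_forecast(train):
    # Deliberately trivial "model": forecast the mean last observation.
    return train[:, -1].mean()

n_blocks = 8
blocks = np.array_split(np.arange(len(sequences)), n_blocks)
forecasts = []
for block in blocks:
    keep = np.setdiff1d(np.arange(len(sequences)), block)   # delete one block
    forecasts.append(fit_and_forecast(sequences[keep]))     # refit on the rest

forecasts = np.array(forecasts)
# Variability across the block-deleted refits yields the uncertainty estimate.
print("forecast:", forecasts.mean(), "+/-", 2 * forecasts.std())
```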

This approach is particularly well-suited to healthcare as it yields post-hoc uncertainty estimates that do not affect model accuracy or interfere with model training. It is also highly versatile, and can be applied to a wide range of sequence prediction settings without requiring changes to model architecture. Importantly, the approach also provides formal frequentist coverage guarantees.

Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions

Ahmed Alaa, Mihaela van der Schaar

ICML 2020

Abstract

(Informatively) missing data

Missing data (including informatively missing data) is another important challenge in the time series setting.

Some of our lab’s earlier work in this area involved the development of multi-directional recurrent neural networks (M-RNN), which enable us to interpolate values over time as well as across features. In other words, the M-RNN approach provides for simultaneous imputation across different medical time series streams and across time. Unlike bidirectional recurrent neural networks, we can use M-RNNs to perform imputation in a causal manner, since we do not need to consider the future: we simply use information that has been made available so far for imputation.
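
The sketch below illustrates the two interpolation axes with crude stand-ins (forward filling in time, plus an invented cross-stream relation, where M-RNN learns both with recurrent networks); all values and coefficients are toy numbers.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"hr":  [80.0, np.nan, 88.0, np.nan],     # heart rate
                   "sbp": [120.0, 118.0, np.nan, 125.0]})   # systolic BP

temporal = df.ffill()             # within-stream and causal: past values only

cross = df.copy()                 # across streams at the same time step,
cross["hr"] = cross["hr"].fillna(0.7 * df["sbp"])    # with invented
cross["sbp"] = cross["sbp"].fillna(1.4 * df["hr"])   # cross-stream relations

imputed = (temporal + cross) / 2  # fuse the two estimates
print(imputed)
```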

Estimating Missing Data in Temporal Data Streams Using Multi-Directional Recurrent Neural Networks

Jinsung Yoon, William Zame, Mihaela van der Schaar

IEEE Transactions on Biomedical Engineering, 2018

Abstract

It is also important, however, to bear in mind that clinical data is generally not missing at random: its missingness is often informative. We can learn about (and from) the underlying clinical judgements by building probabilistic models for learning not only from the value of clinical data, but also from its presence and absence.

Our lab achieved this in an approach presented in a paper published at ICML 2017, where we modeled a patient trajectory as a marked point process modulated by the health state. This model captures informatively sampled patient episodes: the clinicians’ decisions on when to observe a hospitalized patient’s vital signs and lab tests over time are represented by a marked Hawkes process, with intensity parameters modulated by the patient’s latent clinical states, and with observable physiological data (mark process) modeled as a switching multitask Gaussian process. In addition, our model captures informatively censored patient episodes by representing the patient’s latent clinical states as an absorbing semi-Markov jump process.

Learning from Clinical Judgments: Semi-Markov-Modulated Marked Hawkes Processes for Risk Prognosis

Ahmed Alaa, Scott Hu, Mihaela van der Schaar

ICML 2017

Abstract

Synthetic data generation

Machine learning has the potential to catalyze a complete transformation in healthcare, but researchers in our field are still hamstrung by a lack of access to high-quality data, which is the result of perfectly valid concerns regarding privacy.

If the purpose of sharing a dataset is to develop and validate machine learning methods for a particular task (e.g. prognostic risk scoring), real data is not necessary; it would suffice to have a synthetic dataset that is sufficiently like the real data. Generating synthetic patient records based on real patient records can, therefore, be an alternative way of providing machine learning researchers with the data that they need to be able to develop appropriate methods for the task at hand, while avoiding sharing sensitive patient information. This could dramatically swing the balance between risks and benefits in favor of the latter.

Synthetic data can also provide researchers with datasets that have been tailored to specific needs, while still being based on real data. Varying types of synthetic datasets could, for instance, be created specifically for ICU admission prediction, for clinical trials, for estimating treatment effects, and for time series datasets (to name a few examples).

Generating realistic synthetic time series datasets that preserve the temporal dynamics of real datasets is challenging: we must capture the distribution of features within each time point as well as the complex dynamics of variables across time points.

Existing methods do not adequately attend to the temporal correlations unique to time series data. At the same time, supervised models for sequence prediction—which allow finer control over network dynamics—are inherently deterministic. This led our lab to develop TimeGAN, a generative model for time series data, which we presented in a paper for NeurIPS 2019. TimeGAN straddles the intersection of multiple strands of research, combining themes from autoregressive models for sequence prediction, GAN-based methods for sequence generation, and time series representation learning.

Since TimeGAN is trained adversarially and jointly via a learned embedding space with both supervised and unsupervised losses, it offers both the flexibility of unsupervised GAN frameworks and the control afforded by supervised training in autoregressive models. 

Importantly, TimeGAN handles mixed-data settings, where both static and time series data can be generated at the same time.
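
The skeleton below sketches how supervised and unsupervised losses can be combined over a shared latent space in this manner; the module shapes and the unweighted sum of losses are our own simplification of the TimeGAN setup.

```python
import torch
import torch.nn as nn

T, D, H = 24, 6, 16
embed = nn.GRU(D, H, batch_first=True)     # maps sequences to latent codes
recover = nn.Linear(H, D)                  # maps latent codes back to data
supervise = nn.Linear(H, H)                # next-step predictor in latent space
disc = nn.Linear(H, 1)                     # real vs. synthetic discriminator

x = torch.randn(8, T, D)                   # a batch of real sequences
h, _ = embed(x)

recon = nn.functional.mse_loss(recover(h), x)                   # unsupervised
step = nn.functional.mse_loss(supervise(h[:, :-1]), h[:, 1:])   # supervised
adv = nn.functional.binary_cross_entropy_with_logits(           # adversarial
    disc(h), torch.ones(8, T, 1))

# Trained jointly over the shared embedding space; the full setup also has
# a generator producing synthetic latent sequences for the discriminator.
loss = recon + step + adv
loss.backward()
```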

Time-series Generative Adversarial Networks

Jinsung Yoon, Daniel Jarrett, Mihaela van der Schaar

NeurIPS 2019

Abstract

TimeGAN does, however, have a number of limitations. It is hard to train (especially for time series data) and difficult to evaluate quantitatively due to the absence of a computable likelihood function. It is also vulnerable to training data memorization. This is why our lab more recently developed an approach to generative time series modeling based on Fourier flows, as presented in an ICLR 2021 paper.

In this case, our focus was to develop a generative model that can sample synthetic time series data while providing explicit likelihood models that are easy to optimize and evaluate.
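
A minimal sketch of the underlying idea follows (the parameterization and the log-determinant bookkeeping are our simplification): the discrete Fourier transform is an invertible linear map, so a flow can move a series into the frequency domain, apply a learnable invertible spectral scaling, and invert exactly.

```python
import torch

T = 32
x = torch.randn(T)                           # one real-valued time series

log_scale = torch.zeros(T // 2 + 1, requires_grad=True)   # learnable filter

X = torch.fft.rfft(x)                        # to the frequency domain
Z = X * torch.exp(log_scale)                 # elementwise invertible scaling
log_det = log_scale.sum()                    # enters the exact log-likelihood

x_rec = torch.fft.irfft(Z * torch.exp(-log_scale), n=T)   # exact inversion
print(torch.allclose(x_rec, x, atol=1e-5))
```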

Generative Time-series Modeling with Fourier Flows

Ahmed Alaa, Alex Chan, Mihaela van der Schaar

ICLR 2021

Abstract

To learn more about our lab’s research in the area of synthetic data generation, assessment, and evaluation, click here.

Reproducibility and visualization

Reproducibility is another essential attribute of any successful AI or machine learning model for healthcare—whether static or dynamic.

One example of a reproducible model developed by our lab for the time series setting is Clairvoyance, a unified, end-to-end pipeline for clinical decision support. Clairvoyance is capable of predictions, forecasts, monitoring and personalized treatment planning over time. Since reproducibility is the focus of Clairvoyance, all the code is available and can be freely tested, augmented, and benchmarked. Additionally, we are currently developing a comprehensive visualization tool for time series models that will be usable by both clinicians and patients.

Clairvoyance: A Pipeline Toolkit for Medical Time Series

Daniel Jarrett, Jinsung Yoon, Ioana Bica, Zhaozhi Qian, Ari Ercole, Mihaela van der Schaar

ICLR 2021

Abstract

The code for Clairvoyance can be found here on our lab’s GitHub.

Find out more and get involved

This page has served as an introduction to a range of challenges and solutions unique to the time series setting—from the perspective of both healthcare and machine learning.

We have demonstrated the importance of understanding the nature and drivers of disease progression, as well as the value of insight into how individuals transition between disease states and how multiple morbidities may interact.

Machine learning tools such as those introduced above can enable us to build a comprehensive view of patient health that incorporates the past, present, and future and can factor in the evolving interactions and causal relationships between multiple competing risks and comorbidities. This is the key to accelerating the advent of bespoke medicine and truly moving beyond one-size-fits-all approaches.

We would also encourage you to stay abreast of ongoing developments in this and other areas of machine learning for healthcare by signing up to take part in one of our two streams of online engagement sessions.

If you are a practicing clinician, please sign up for Revolutionizing Healthcare, which is a forum for members of the clinical community to share ideas and discuss topics that will define the future of machine learning in healthcare (no machine learning experience required).

If you are a machine learning student, you can join our Inspiration Exchange engagement sessions, in which we introduce and discuss new ideas and development of new methods, approaches, and techniques in machine learning for healthcare.

A full list of our papers on this and related topics can be found here.