van der Schaar Lab

The case for Reality-centric AI

Introduction

Today we propose a new agenda to reorient Artificial Intelligence (AI) towards the complexities of the real world. We set out the reasons for the agenda and its goals.
We do not address in detail how to achieve those goals – that will be the subject of subsequent papers by the van der Schaar Lab and, we hope, by others.

Summary

It is little noticed that two camps have emerged in AI and machine learning (ML). One, which we call Petri-dish AI, is exemplified by clean, simple-to-define yet challenging-to-solve problems such as playing games or making biological or chemical discoveries. The other camp, in which the van der Schaar Lab is a leader, and which we call Reality-centric AI, puts the inherent and unavoidable complexity of the real world at the heart of designing, training, testing, and deploying models.

The purpose of the agenda is to address the usually whispered secret of machine learning: that today’s machine-learning models cannot operate within, or are fatally brittle in, the real world.

We define Reality-centric AI as AI which can operate effectively, reliably, and accountably in the real world.

We believe that the balance between the two camps needs to change. Today, heavy investment flows into Petri-dish AI and not nearly enough into Reality-centric AI.

(Prominent examples of the Reality-centric AI agenda in which significant research and financial investments have been made are self-driving cars and robotics. Yet numerous other areas in which Reality-centric AI can make a difference have received much less attention.) Therefore, we propose a Reality-centric research agenda consisting of eight pillars to pull together disparate fields of current research[1] and build the necessary new ML tools and models to deliver the world-changing promise of AI.

Two world views

Of course, without specifying what problem we are attempting to solve, neither camp is inherently more correct than the other.

The two approaches are fundamentally different.

Reality-centric AI starts with the real world and simplifies it just enough, through abstractions and explicit assumptions, to develop or implement machine-learning methods. Because real-world complexity is, as it were, in its DNA – reality-centric models are designed starting from that complexity – it should be possible to add it back incrementally. For Petri-dish AI to navigate the messy real world, by contrast, complexity would have to be bolted onto models designed for simple domains. Even if that were theoretically possible, it seems much harder to do.

Therefore, where the problem to be solved revolves around the complexity of the real world (and this is where many of AI’s most important contributions will be made), we believe Reality-centric AI must be the better approach.

A complex world

Most real-world domains are made up of many human, and increasingly machine, actors and actions, judgements and decisions. Outcomes and actions are also shaped by countless macro and micro factors, some far from obvious, not just by the decisions and actions of the domain protagonists. This means:

  • The problems and opportunities in these domains are often too large to be solved by one organisation
  • The environment is determined by the interplay of diverse stakeholders and stakeholder groups with diverse, changing, and probably unaligned goals, rewards, and costs
  • The rules of engagement among, and for, agents in the domain may change
  • Data the model will learn from is often costly to acquire; it is usually limited and incomplete, and it changes over time. This includes information about other agents (human and machine)
  • Domain components and sub-components that deploy machine learning will increasingly not operate in isolation, but rather will interact with other components (which may or may not involve machine learning). Components need to learn and interact with each other smoothly and efficiently. Different components may need to learn and operate at different time scales and under different constraints
  • There is no simple objective function to measure decisions and actions against
  • Humans need to be able to understand, and interact with, the model
  • Complex domains are among humanity’s most important. They include medicine, education, finance, business, defense, criminal justice, markets (for example energy markets), smart grids and capacity planning, communication and transportation networks, logistics, operations and many more areas – effectively wherever humans have impact.

We are not saying that machine learning cannot play an important role in these domains – rather, we need to think fundamentally differently about machine learning.

Changing circumstances

Outside of domains such as gaming or the mathematics-based sciences, the “right” answer (ie the “right” prediction, recommendation, explanation etc) is constantly evolving because the “environment” and human goals change. Therefore, the challenges posed by the real world do not go away once a model is trained, tested, and successfully deployed. Unlike with games, the rules can change because of shifts in regulation, consumer needs, demand and supply, technology, geo-politics, climate change, incentives, fashion – the list is virtually endless.

If a machine-learning system is widely adopted, its model will need to consider the impact it has on its environment as the humans or non-human agents it empowers change their decisions and behave differently.

Today’s machine-learning paradigm lacks a systematic way of deciding how to represent the world

By sticking to a limited class of problems, the Petri-dish paradigm sidesteps the fact that establishing a useful abstraction and formalisation of the world, the scope of policies, and an effective valuation function is complex, messy, and fraught with compromises and trade-offs. Yet an appropriate abstraction is fundamental to how useful the model’s output will be. Deciding how to represent the world so that machine learning can be helpful in a real-world setting involves many implicit model-design choices, which determine the possible range of model outputs and the effectiveness of the model. Given a set of decisions or assumptions about the blueprint of a model’s states, policies, and valuation functions representing the world, we can then use one or more of the many available machine-learning tools to learn how to operate. But to date, there has been little research into how to choose the best abstraction or formalisation of the real-world situation for machine learning to work within, and almost none into how to have the machine-learning system learn to make those assumptions itself.

The research agenda

We propose a research agenda to develop systematic approaches to ensuring machine learning operates effectively, sustainably, and accountably in the domains we have been describing. The purpose of the agenda is to address the usually whispered secret of machine learning: that today’s machine-learning models cannot operate within, or are fatally brittle in, the real world.  

We recognise that the agenda is ambitious. We are confident that significant progress can be made rapidly. Some of the work being done in machine learning (including by the van der Schaar Lab) already fits this agenda. Some can be used as-is (eg Data-centric AI, time-series analysis and forecasting, and causal-effect inference); some will require significant extension (eg new methods for creating synthetic data, modelling dynamical systems, sequential decision making under uncertainty, multi-agent environments, and trustworthy and accountable reinforcement learning); but much of the Reality-centric AI agenda will require new principles for, and ways of developing, ML.

The Reality-centric AI research agenda has eight pillars, which we group into three themes:

The model’s inputs

Pillar one: models need a systematic way to identify how to model the world

The output of a model depends just as much on the way it formalises, simplifies and abstracts the real-world situation it is supposed to learn about (and the assumptions it makes) as on the data it learns from. That translation of real-world complexity into model-design decisions (for a reinforcement-learning model, for example: which states, what granularity of data, which agents and interactions are relevant, what the reward function covers) is at best implicit in today’s models, and usually absent or ignored altogether. Reality-centric AI requires a systematic approach to determining the right formalisation of the problem. In fact, beyond a methodology for the model developer to determine a useful formalisation given the domain and the use-case, the system should be able to learn the best abstraction itself, even though that abstraction depends not just on the world but also on the model’s purpose.
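
To make this concrete, here is a minimal, illustrative sketch of what treating the formalisation itself as an explicit, searchable object might look like. Everything in it – ProblemSpec, evaluate_spec, and the candidate choices – is a hypothetical placeholder, not an established method or API.

```python
# A minimal sketch: make formalisation choices explicit and searchable.
# ProblemSpec, evaluate_spec and the candidate values are hypothetical
# illustrations, not an established API or method.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class ProblemSpec:
    """One candidate formalisation of a real-world problem."""
    state_features: tuple   # which variables define the state
    time_granularity: str   # eg "hourly" vs "daily"
    reward_scope: str       # eg "patient" vs "clinic"

def evaluate_spec(spec: ProblemSpec) -> float:
    # Placeholder: in practice, train a model under this formalisation
    # and score it on held-out, use-case-specific metrics.
    return -len(spec.state_features)  # stand-in score only

candidates = [
    ProblemSpec(f, g, r)
    for f, g, r in product(
        [("vitals",), ("vitals", "labs")],
        ["hourly", "daily"],
        ["patient", "clinic"],
    )
]
best = max(candidates, key=evaluate_spec)
print("Chosen formalisation:", best)
```

The point is not the toy search itself but that the design choices become first-class objects the system can reason about, rather than assumptions buried in the developer’s head.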

Pillar two: models must be designed to operate with real-world data

Real-world data is usually incomplete, sometimes unavailable, biased, full of errors and noise, delayed, costly to acquire, and subject to privacy or other constraints, and it can change as circumstances change. Models need to be designed to cope effectively with real-world data[2].
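
As a small illustration of what “designed for real-world data” can mean in practice, the sketch below (using only numpy, with illustrative numbers) passes an explicit missingness mask to the learner rather than imputing problems away upstream, so that absence itself becomes a signal the model can condition on.

```python
# A minimal sketch, assuming numpy is available: instead of imputing
# missing values away before modelling, pass an explicit missingness
# mask so the model can treat "absent" as a signal in its own right.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
mask = rng.random(X.shape) < 0.2        # True where a value is missing
X_observed = np.where(mask, np.nan, X)

# Mask-aware featurisation: observed values (NaNs zero-filled) plus the
# mask itself, so downstream learners can condition on missingness.
features = np.concatenate(
    [np.nan_to_num(X_observed, nan=0.0), mask.astype(float)], axis=1
)
print(features.shape)  # (100, 6): 3 values + 3 missingness indicators
```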

Pillar three: learning systems need a systematic way to determine what data to acquire

Current ML models are typically trained without considering the effect that the quantity, quality, and diversity of features of the available data have on their performance. We need a systematic methodology to determine what data should be used for a specific ML task. It is usually assumed that the more data, the better; but in the real world, data is often scarce and/or costly. Rather than assuming more is better, we should normally seek the minimum data necessary for a specific ML task – or, better, value the model taking into account the cost of the data and other constraints. Nor is this a one-off exercise around training data: the value needs to be calculated and monitored during deployment too, to ensure the model continues to deliver a return.
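
A minimal sketch of this calculation, with a made-up learning curve and a hypothetical per-example acquisition cost: rather than maximising raw performance, we pick the dataset size that maximises performance net of cost.

```python
# A minimal sketch of valuing data net of its cost. score(n) is a
# placeholder for validation performance when training on n examples;
# unit_cost is a hypothetical per-example acquisition cost.
import numpy as np

def score(n: int) -> float:
    # Placeholder learning curve with diminishing returns; in practice,
    # retrain on n examples and measure held-out performance.
    return 1.0 - 1.0 / np.sqrt(n)

unit_cost = 0.00005
candidate_sizes = [100, 500, 1000, 5000, 10000]
net_value = {n: score(n) - unit_cost * n for n in candidate_sizes}
best_n = max(net_value, key=net_value.get)
print(f"Acquire ~{best_n} examples; net value {net_value[best_n]:.3f}")
```

With these illustrative numbers the answer is 500 examples, not 10,000: beyond that point the marginal performance gain no longer pays for the marginal data cost.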

The model’s outputs

Pillar four: models must adapt to changing circumstances post-deployment

Models cannot assume that the rules under which they operate will stay the same – fashion, regulation, and many other factors can change the possibility or relative desirability of outcomes and how to achieve them. Retraining may not be an option: insufficient new data may be available, the cost of retraining may be too high, it may be unclear when to trigger retraining, and the model may have become too important to be taken offline. Therefore, models must adapt post-deployment[3]; and where a model cannot, or chooses not to, adapt, it should be aware of, and tell the user, its confidence in its predictions.
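
As one illustrative sketch of the “be aware of and report confidence” requirement: a deployed model can monitor incoming data against its training distribution and flag when its outputs should no longer be trusted. The z-score test and threshold below are placeholders, not a recommended recipe.

```python
# A minimal sketch of a post-deployment monitor, assuming a scalar input
# stream: if incoming data drifts from the training distribution, the
# model reports reduced confidence (or abstains) instead of silently
# extrapolating. The z-score test and 3.0 threshold are illustrative.
import numpy as np

rng = np.random.default_rng(1)
train = rng.normal(loc=0.0, scale=1.0, size=10_000)
mu, sigma = train.mean(), train.std()

def predict_with_confidence(batch: np.ndarray):
    drift = abs(batch.mean() - mu) / (sigma / np.sqrt(len(batch)))
    confident = drift < 3.0            # illustrative drift threshold
    prediction = batch.mean()          # stand-in for a real model output
    return prediction, confident

for shift in (0.0, 0.5):               # second batch simulates drift
    batch = rng.normal(loc=shift, size=200)
    pred, ok = predict_with_confidence(batch)
    print(f"shift={shift}: prediction={pred:.2f}, confident={ok}")
```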

Pillar five: models need to respond to dynamic measures of success

Unlike in games, where value is ultimately defined by the game’s outcome, in numerous real-world domains the best outcome may be complex, may change, and may depend heavily on circumstances, including the perceived state of other agents and forces. It may not be possible to learn what the best outcome should be, but neither will it be possible to define the valuation function up front.

In many domains, different agents face multiple and usually conflicting objectives and constraints – around performance, fairness, monetary costs, opportunity costs (actions not taken because resources were consumed by the preferred option), and the risks associated with the outcome – not all of which the model will have access to. An outcome will also typically need to be measured against the outcomes of other possible interventions and the impact they would have on humans, on other models, and on the model itself. So deciding on an action may require assessing counterfactual actions or policies.
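
A minimal sketch of such a dynamic, multi-objective valuation: candidate actions are scored against several conflicting criteria whose weights shift over time or across stakeholders. All numbers are illustrative; in practice the per-criterion outcomes would themselves be estimates, for example from a counterfactual-inference model.

```python
# A minimal sketch of a dynamic, multi-objective valuation. Actions,
# outcomes and weights are illustrative placeholders only.
actions = {
    "treat_now": {"performance": 0.9, "fairness": 0.3, "cost": -0.4},
    "wait":      {"performance": 0.4, "fairness": 0.9, "cost": -0.1},
    "refer":     {"performance": 0.7, "fairness": 0.6, "cost": -0.3},
}

def value(outcomes: dict, weights: dict) -> float:
    # Weighted sum of per-criterion outcomes; a stand-in for whatever
    # richer valuation the domain actually requires.
    return sum(weights[k] * v for k, v in outcomes.items())

weights_today = {"performance": 1.0, "fairness": 0.3, "cost": 1.0}
weights_later = {"performance": 0.5, "fairness": 1.0, "cost": 1.0}

for w in (weights_today, weights_later):
    best = max(actions, key=lambda a: value(actions[a], w))
    print(best, {a: round(value(o, w), 2) for a, o in actions.items()})
```

Note that the preferred action flips when the weights change: the valuation function, not just the model, is a moving target.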

Pillar six: models need to respect human constraints

Real-world models need to adapt as the world changes, but in many high-stakes domains like medicine and finance, models also need to stay under human control. For example, there may be regulation to which they must conform, which may stipulate rigorous testing and human sign-off. There may also be other constraints, such as ethical or resource considerations. In high-stakes environments, models’ adherence to these constraints needs to be formalised and monitored. In essence, models must remain accountable to humans. This includes providing uncertainty estimates, guarantees on performance, and alignment with human goals. Systems must also consider privacy, transparency, the maintenance of human control, and other human-centric requirements, all of which may change over time.
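
One way to formalise such accountability is to gate every model update behind explicit, monitorable checks, as in the hypothetical sketch below; the specific checks and thresholds are placeholders for whatever regulation and ethics demand in a given domain.

```python
# A minimal sketch of keeping an adaptive model under human control:
# a proposed model update must pass formalised checks and explicit
# human sign-off before deployment. Checks and thresholds are
# hypothetical placeholders, not a standard.
from dataclasses import dataclass

@dataclass
class UpdateReport:
    held_out_accuracy: float
    max_subgroup_gap: float     # eg worst-case performance gap
    human_signoff: bool

def may_deploy(r: UpdateReport) -> bool:
    checks = [
        r.held_out_accuracy >= 0.85,   # performance guarantee
        r.max_subgroup_gap <= 0.05,    # fairness constraint
        r.human_signoff,               # regulatory/ethical sign-off
    ]
    return all(checks)

print(may_deploy(UpdateReport(0.91, 0.03, human_signoff=True)))   # True
print(may_deploy(UpdateReport(0.91, 0.12, human_signoff=True)))   # False
```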

The model’s ecosystem

Pillar seven: model design must consider when and how to communicate with humans

In human-centric domains, it is essential that machine-learning models partner effectively with humans. Different stakeholder groups matter at different times – researchers, experts, and other users, each with specialist or group-specific requirements and needs. Stakeholders will need different ways to communicate with machine-learning systems, and they will often have disparate criteria for assessing the performance of a model.

Pillar eight: models must support interoperability with other models, components or systems

As AI becomes a more prominent feature of the world, AI models and agents will need to interact with each other, and this will require interoperability standards. Standards will allow for innovation, increased resilience, greater flexibility (because components can be more easily swapped in and out), and speedier delivery. With standards, models can also become more specialist, relying on other models and agents for certain functions, and governance and transparency will be improved.
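
As a small sketch of what such standards might look like at the code level: if components agree on a shared interface – here a hypothetical Python Protocol – any conforming model can be swapped in without changing the rest of the system.

```python
# A minimal sketch of interoperability through a shared interface.
# PredictiveComponent and RiskModelA are hypothetical illustrations,
# not a proposed standard.
from typing import Protocol

class PredictiveComponent(Protocol):
    def predict(self, features: dict) -> dict: ...
    def report_confidence(self) -> float: ...

class RiskModelA:
    def predict(self, features: dict) -> dict:
        # Toy rule standing in for a real learned model.
        return {"risk": 0.2 * features.get("age", 0) / 100}
    def report_confidence(self) -> float:
        return 0.9

def run_pipeline(model: PredictiveComponent, features: dict) -> dict:
    # The pipeline depends only on the interface, so any conforming
    # component (ML-based or not) can be swapped in.
    out = model.predict(features)
    out["confidence"] = model.report_confidence()
    return out

print(run_pipeline(RiskModelA(), {"age": 50}))
```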

Conclusion

For machine learning to add value to human-centric, or human-impacted, domains, we must embrace the complexity, and the error-prone and constantly changing nature, of these domains – not pretend the world is simple and hope to sort out real-world complexity afterwards. We believe that real-world domains pose significantly harder challenges for machine learning than solving games or straightforward-to-formalise science problems.

This is why Reality-centric AI requires much more attention, putting the complexity and changing nature of the world at the heart of the problem to be solved. We believe that great progress can be made rapidly. Over the last two decades, the van der Schaar Lab has pursued research in important and complex domains (healthcare, education, communication networks, logistics, transportation networks, finance, smart grids and more). The Lab will continue to play its part in developing tools and approaches to address these reality-centric challenges, and we encourage others to engage with us so that together we can deliver the promise of machine learning and have a major real-world impact that benefits society and humanity.

Footnotes

  1. There are a number of specialist streams of research today around data-centric AI, robust AI, continual learning, MLOps, model observability and MLSys, which typically seek to take a model or methodology designed for a simplistic view of the world and make it robust and scalable. We believe the implications of the real world are much more profound and need to be built into all aspects of a model’s design – and in fact should drive fundamental aspects of the model and its implementation.
  2. Rather than fixing or selecting data before it reaches the model, which is largely what data-centric AI aims to do, this pillar maintains that operating with messy data needs to be an inherent capability of the model.
  3. Fortunately, some of the techniques that are needed here are already being explored as part of the ML community’s research into MLOps, model observability, and continual learning.

Mihaela van der Schaar

Mihaela van der Schaar is the John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge and a Fellow at The Alan Turing Institute in London.

Mihaela has received numerous awards, including the Oon Prize on Preventative Medicine from the University of Cambridge (2018), a National Science Foundation CAREER Award (2004), three IBM Faculty Awards, the IBM Exploratory Stream Analytics Innovation Award, the Philips Make a Difference Award and several best paper awards, including the IEEE Darlington Award.

In 2019, she was identified by the National Endowment for Science, Technology and the Arts as the most-cited female AI researcher in the UK. She was also elected a 2019 “Star in Computer Networking and Communications” by N²Women. Her research expertise spans signal and image processing, communication networks, network science, multimedia, game theory, distributed systems, machine learning and AI.

Mihaela’s research focus is on machine learning, AI and operations research for healthcare and medicine.

Andrew Rashbass

Andrew Rashbass is the former CEO of The Economist Group, Reuters and Euromoney Institutional Investor PLC, and now works closely with Mihaela and the van der Schaar Lab on a range of initiatives.