van der Schaar Lab

The case for Human AI Empowerment

Responsible AI and AI Alignment are not enough. We also need AI to empower humans

Introduction

The availability and formidable power of ChatGPT and other large language models mean that the risks for humanity from unconstrained Artificial Intelligence (AI) have rightly moved centre stage. The worry is that, by mistake or through malice, humans will develop AI systems so powerful yet so uncontrolled that AIs will seek to, and succeed in, performing acts that are unsanctioned by, and detrimental to, society, from influencing elections to destroying energy infrastructure and worse. This has led to massively increased interest in two areas: “Responsible AI” and “AI Alignment”.

Responsible AI is about making sure AI systems are built and operate transparently, ethically, safely, and accountably. AI Alignment is about making sure the goals of AI systems match humanity’s goals. (The van der Schaar Lab has done extensive research on Responsible AI with a special focus on Responsible AI in healthcare.)

Responsible development and deployment of AI that is aligned with human goals is undoubtedly important. However, seeking to constrain how AI systems should be developed through, for example, technology and regulation, will not manage two other risks:

  1. In seeking to manage the downside risks of AI, we may miss out on the unprecedented potential benefits of AI for humanity; and

  2. In the successful (responsible and aligned) deployment of AI, large sections of humanity may be marginalised as AI systems displace millions of people from their jobs.

We must not become so focused just on preventing an apocalypse that we diminish the quality of life for millions of people or lose sight of AI’s extraordinary potential to improve the world.

We therefore propose adding a positive third requirement to this debate alongside “Responsible AI” and “AI Alignment”, namely “Human AI Empowerment” (HAE). HAE does not just put humans in charge of setting the machine’s goals, as AI Alignment seeks to do, but introduces an aspiration for AI to improve humans’ ability to have satisfying jobs and perform them well, to learn more effectively, and to make better decisions. We believe that focusing only on Responsible AI and AI Alignment could result in the marginalisation of humans in many professions and domains. Adding the HAE requirement alongside Responsible AI and AI Alignment will encourage the development of AI systems that seek to make humans more effective and fulfilled rather than marginalised and displaced, while at the same time (because of the other two sets of requirements) ensuring the systems are safe.

Human AI Empowerment

We define Human AI Empowerment as the goal of developing AI technologies to enhance human capabilities, well-being, and autonomy, ensuring that AI systems are designed to support, augment, and elevate human abilities, experiences, and values across diverse domains of life.

HAE is different to both Responsible AI and AI Alignment. Responsible AI focuses on the principles and practices of developing and deploying AI systems in a manner that is transparent, ethical, safe, and accountable. It addresses fairness, bias, privacy, and the responsible use of AI. HAE shares some goals with Responsible AI, but explicitly aims to improve human capabilities and well-being.

AI Alignment is concerned with designing AI systems with goals and intentions aligned with human values and objectives, so that AI systems act in the best interests of humans while avoiding harmful consequences. AI Alignment focuses on creating AI systems that follow human-defined goals without causing unintended negative consequences. In contrast, Human AI Empowerment goes beyond setting goals for AI systems that align with human values. It emphasises developing AI systems that actively augment human abilities, enhance well-being, and promote human autonomy.

Human AI Empowerment will mean that humans will always be in the loop. Even though they are geared to helping humans, Responsible and Aligned AI systems can bypass humans in seeking to “help” them. HAE will always involve people, and the AI’s contribution may be to help people achieve their goals more effectively without ongoing AI involvement.

HAE establishes a focus that distinguishes it from both Responsible AI and AI Alignment, emphasising the positive and proactive role AI can play in enhancing human experiences and capabilities. We believe society needs to ensure AI delivers all three: Responsible AI, AI Alignment and Human AI Empowerment.

Although not badged as Human AI Empowerment, much outstanding fundamental work on empowering humans through AI is being done by major labs across the world, including the van der Schaar Lab.

Challenges with Human AI Empowerment

There are challenges in developing Human AI Empowerment systems and approaches. For example, HAE systems need to protect users’ privacy, avoid bias and discrimination, and respect ethical guidelines. Another important challenge is that, unlike Responsible AI and AI Alignment, where the usual measures of AI performance can be applied (e.g. accuracy, precision, uncertainty estimation, calibration), in HAE we need new, human-centric measures of success, such as demonstrable new capabilities acquired by humans, increased and/or improved human collaboration, and new inventions. Another complex challenge is how to create environments in which these human-supportive AI technologies can suitably be tested.

As with Responsible AI and AI Alignment, there is also the challenge of how to make HAE a central requirement of AI development, whether through regulation or other means.

HAE is necessarily multidisciplinary

The van der Schaar Lab has partnered over the past decades with domain experts (for example, doctors, teachers, system designers, energy experts, urban planners, and policy makers) from around the world to develop new machine-learning methods. HAE will require insights and research from a combination of machine-learning researchers, ethicists, lawyers, cognitive scientists and psychologists, linguists, operational researchers, statisticians, economists, and political scientists.

Conclusion

Although Responsible AI and AI Alignment are crucial for safely developing and deploying AI systems, Human AI Empowerment should also be a central goal.

The machine-learning community, regulators, and others considering the implications for society of the rapid development of AI should therefore consider introducing Human AI Empowerment as the third imperative alongside Responsible AI and AI Alignment. It will be hard to introduce HAE across AI development, but it does not have to be all-or-nothing. The more HAE development there is, the greater the chance of fulfilling a vital strand of AI’s promise: to support, augment, and elevate human experiences and values across diverse domains of life.

Mihaela van der Schaar

Mihaela van der Schaar is the John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge and a Fellow at The Alan Turing Institute in London.

Mihaela has received numerous awards, including the Oon Prize on Preventative Medicine from the University of Cambridge (2018), a National Science Foundation CAREER Award (2004), three IBM Faculty Awards, the IBM Exploratory Stream Analytics Innovation Award, the Philips Make a Difference Award and several best paper awards, including the IEEE Darlington Award.

In 2019, she was identified by the National Endowment for Science, Technology and the Arts as the most-cited female AI researcher in the UK. She was also elected as a 2019 “Star in Computer Networking and Communications” by N²Women. Her research expertise spans signal and image processing, communication networks, network science, multimedia, game theory, distributed systems, machine learning and AI.

Mihaela’s research focus is on machine learning, AI and operations research for healthcare and medicine.