iMIMIC 2021 keynote
Quantitative Epistemology: Conceiving a new human-machine partnership
Quantitative epistemology is a new and transformational area of research pioneered by our lab in Cambridge as a strand of machine learning aimed at understanding, supporting, and improving human decision-making. We are developing machine learning models that capture how humans acquire new information, how they attend to that information, how their beliefs may be represented, how their internal models may be structured, how these different levels of knowledge are leveraged in the form of actions, and how such knowledge is learned and updated over time. Because our approach studies knowledge through observational data and uses machine learning methods to support and improve knowledge acquisition and its impact on decision-making, we call this “quantitative epistemology.”
Our methods aim to study human decision-making, to identify potential suboptimalities in beliefs and decision processes (such as cognitive biases, selective attention, and imperfect retention of past experience), and to understand risk attitudes and their implications for learning and decision-making. This would allow us to construct decision support systems that provide humans with information pertinent to their intended actions, their possible alternatives and counterfactual outcomes, as well as other evidence to empower better decision-making.
Location and local date/time
This event will take place online on September 27 at 16:05 CEST (15:05 BST).
About the event
The Workshop on Interpretability of Machine Intelligence in Medical Image Computing (iMIMIC) at MICCAI 2021 aims to introduce the challenges and opportunities related to the interpretability of ML systems in the context of MICCAI.
The workshop program includes keynote presentations by experts working in the field of interpretability of machine learning. A selection of submitted manuscripts will be chosen for short oral presentations (10 minutes + 3 minutes Q&A) alongside the keynotes. Finally, a group discussion will leave room for brainstorming on the most pressing issues in interpretability of machine intelligence in the context of MICCAI.