The van der Schaar Lab’s seventh Revolutionizing Healthcare engagement session for the clinical community took place virtually on April 27, 2021.
This session was the second roundtable in a double-header focusing on interpretability in ML/AI for healthcare. Following a quick introduction by Mihaela van der Schaar, a panel of four clinicians, joined by the clinical audience, discussed a range of complex issues surrounding interpretability, including whether current expectations among the clinical community are realistic.
Our panel for this session consisted of:
- Alexander Gimson, MD FRCP (Consultant transplant hepatologist, Cambridge University Hospitals NHS Foundation Trust)
- Prof Henk van Weert, MD PhD (Professor, general practice, Amsterdam UMC; Research programs in oncology and cardiovascular diseases)
- Martin Cadeiras, MD (Associate professor, medical director, heart failure, heart transplantation and mechanical circulatory support, University of California, Davis)
- Maxime Cannesson, MD PhD (Chair, Department of Anesthesiology & Perioperative Medicine, University of California, Los Angeles)
The next roundtable in the Revolutionizing Healthcare series will be held on May 27, and will focus on personalized therapeutics/individualized treatment effect inference. Details can be found here.
Introduction – 0:00
Meet the roundtable panelists – 2:13
Declaration of interests – 3:07
Mihaela’s presentation on terminology and types of interpretability – 4:02
Initial questions for Mihaela on interpretability methods [Aneeq Rehman] – 13:10
Question for Mihaela on generating hypotheses through interpretable ML [David Chong] – 16:09
Question for panelists on expectations for ML versus clinical scoring systems [David Chong] – 18:20
Discussion with panelist Maxime Cannesson: “All boxes are black” – 23:34
Discussion among panelists: expectations from ML models – 32:48
Question for panelists on knowledge gaps among personnel [Harpreet Sood] – 44:29
Question for panelists on preparing clinicians to improve healthcare outcomes [Timing Liu] – 49:19
Question for panelists on expectations of ML interpretability versus humans [Venkat Reddy] – 55:15
Intro to next sessions and note on CPD credits – 1:06:28
NOTE: This information was up-to-date at the time of the presentation but does not take into account material published since then.
Sign up for our upcoming sessions here.