New frontiers in machine learning interpretability
Medicine has the potential to be transformed by machine learning (ML) by addressing core challenges such as time-series forecasting, clustering (phenotyping), and heterogeneous treatment effect estimation. However, to be embraced by clinicians and patients, ML approaches need to be interpretable. So far, though, ML interpretability has been largely confined to explaining the predictions of static classifiers. In this keynote, I describe an extensive new framework for ML interpretability. This framework allows us to 1) interpret ML methods for time-series forecasting, clustering (phenotyping), and heterogeneous treatment effect estimation using feature- and example-based explanations, 2) provide personalized explanations of ML methods with reference to a set of examples freely selected by the user, and 3) autonomously (re)discover known scientific concepts using concept activation regions, which are generalizations of concept-based explanations. To learn more about our work in this area, see our website dedicated to this topic — https://www.vanderschaar-lab.com/interpretable-machine-learning/ — and our GitHub: https://github.com/vanderschaarlab/Interpretability
Location and local date/time
This event will take place in person on December 14 at 8:00 GMT.
About the event
VCIP 2022 will carry on VCIP's tradition of disseminating the state of the art in visual communication technology, and of brainstorming and envisioning the future of visual communication technology and its applications. The main theme will be new media, including VR, point cloud capture and playback, and new visual processing tools, including deep learning for distilling intelligence in visual information pre- and post-processing, such as de-blurring, super-resolution, 3D understanding, and content-based image enhancement.