Trustworthy AI for the Future of Risk Management
Event details
Mihaela van der Schaar will deliver a keynote as part of Trustworthy AI for the Future of Risk Management hosted by Ulster University/IEEE.
Title
Machine learning for discovery – The new frontier
Location and local date/time
This event will take place online on June 14 at 09:00 – 09:45 BST.
About the event
The Internet of Things (IoT), together with ubiquitous connectivity through 5G, pervasive systems, and the increased productivity and scale made possible by emerging AI-assisted, data-driven technology and cloud infrastructure, presents far-reaching opportunities for businesses. However, the growing complexity, pace and scale of global interconnectivity will confront organisations with increasing systemic digital threats and challenges in risk management. The shockwave effects of the COVID-19 pandemic put the need for resilient risk management into sharp perspective.
The ability of AI to analyse large amounts of information substantially improves the identification of data relevant to risk management. Specific use cases include threat analysis and management, risk reduction, fraud detection, and data classification. AI and predictive analytics are powerful tools in the arsenal of any risk management strategy and offer exciting prospects. However, AI also brings new risks and manifests them in perplexing ways that are hard to ignore, such as its disruptive and, at times, disorderly capabilities. AI-related risks must therefore be a primary concern and a key priority for organisations seeking to adopt and scale AI applications and to fully realise AI's potential. Effective risk management hinges on establishing a trustworthy AI foundation, and the construction of trustworthy AI systems will become paramount as AI grows more prominent across the globe.
There is currently significant interest in trustworthy AI from the AI community and its stakeholders. Trustworthy AI represents the next stage in the evolution of AI, offering industries the opportunity to create AI systems that are transparent, explainable, fair, robust, and privacy-preserving (the so-called five pillars of trustworthy AI, each an important factor in establishing reliable technology), especially in high-risk and safety-critical applications.