CIMPLE – Explainable AI Project
Explainability is of significant importance in the move towards trusted, responsible and ethical Artificial Intelligence (AI), yet it remains in its infancy. Most relevant efforts focus on increasing the transparency of AI model design and training data, and on statistics-based interpretations of the resulting decisions (interpretability). Explainable AI (XAI) considers how intelligent algorithms can be understood by human users. The understandability of such explanations and their suitability for particular user groups and application domains have received very little attention so far. Hence there is a need for an interdisciplinary and fundamental evolution of XAI methods.
Project Overview
Funded within the European CHIST-ERA programme, the CIMPLE Project investigates how to counter information manipulation with Explainable AI. It will help to design more understandable, reconfigurable and personalisable explanations. Human factors are key determinants of the success of relevant AI models. In some contexts, such as misinformation detection, existing Explainable AI methods do not suffice. The complexity of the domain and the variety of relevant social and psychological factors can heavily influence users’ trust in derived explanations. Past research has shown that presenting users with bare true/false credibility decisions is inadequate and ineffective. This is particularly evident in the case of black-box algorithms.
Knowledge Graphs offer significant potential to better structure the core of AI models, using semantic representations when producing explanations for their decisions. By capturing the context and application domain in a granular manner, such graphs offer a much-needed semantic layer that is currently missing from typical brute-force machine learning approaches. To this end, CIMPLE aims to experiment with innovative social and knowledge-driven AI explanations. The project will use computational creativity techniques to generate powerful, engaging and easy-to-understand explanations of complex AI decisions and behaviour. These explanations will be tested in the domain of detection and tracking of manipulated information. The planned experiments will take into account social, psychological and technical explainability needs and requirements.
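As a purely illustrative sketch (not part of the project's actual codebase), the following Python snippet shows the general idea of grounding an explanation in a knowledge graph: contextual facts about a flagged claim are stored as triples, and a short textual explanation is assembled by traversing them. The namespace, properties (for example contradicts and spreadBy) and the sample data are all hypothetical; the graph library used is rdflib.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical namespace and vocabulary for this sketch.
EX = Namespace("http://example.org/cimple/")

g = Graph()
g.bind("ex", EX)

# Contextual facts about a flagged claim, stored as triples.
claim = EX["claim-001"]
g.add((claim, RDF.type, EX.Claim))
g.add((claim, EX.text, Literal("Glaciers are growing worldwide.")))
g.add((claim, EX.contradicts, EX["ipcc-ar6-report"]))
g.add((EX["ipcc-ar6-report"], RDFS.label, Literal("the IPCC Sixth Assessment Report")))
g.add((claim, EX.spreadBy, EX["account-42"]))
g.add((EX["account-42"], EX.knownFor, Literal("climate misinformation")))

def explain(graph: Graph, claim_node) -> str:
    """Traverse the graph and assemble a human-readable explanation for the verdict."""
    reasons = []
    for _, _, source in graph.triples((claim_node, EX.contradicts, None)):
        label = graph.value(source, RDFS.label) or source
        reasons.append(f"it contradicts {label}")
    for _, _, account in graph.triples((claim_node, EX.spreadBy, None)):
        history = graph.value(account, EX.knownFor)
        if history:
            reasons.append(f"it is spread by an account known for {history}")
    return "The claim was flagged because " + " and ".join(reasons) + "."

print(explain(g, claim))

Because the facts behind the decision are explicit graph edges rather than opaque model weights, the same structure could in principle be rendered in different styles or levels of detail for different user groups.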
Metadata for Explainable AI Experiments
webLyzard technology will lead the second work package on Metadata Enrichment and Visualisation. The methods to enrich, analyse and visualise real-time streams of digital content will benefit from the scalability and maturity of the existing platform. We expect synergies between the CIMPLE project and webLyzard’s existing collaborations with the United Nations Environment Programme (UNEP) in terms of information manipulation in the sustainability domain, and with our work for the National Oceanic and Atmospheric Administration (NOAA) in the context of misinformation on climate change, including its impact and the effectiveness of mitigation strategies.
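To make the enrichment step more concrete, here is a minimal, self-contained Python sketch of the general pattern rather than webLyzard's actual platform API: each incoming document is annotated with entities matched against a small gazetteer and an ingestion timestamp, producing records that a downstream analysis or visualisation component could consume. All names, fields and the tiny gazetteer are hypothetical placeholders.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, Iterable, Iterator

# Tiny illustrative gazetteer; a production pipeline would use full NER and topic models.
GAZETTEER: Dict[str, str] = {
    "UNEP": "Organization",
    "NOAA": "Organization",
    "climate change": "Topic",
}

@dataclass
class EnrichedDocument:
    text: str
    entities: Dict[str, str] = field(default_factory=dict)
    ingested_at: str = ""

def enrich(stream: Iterable[str]) -> Iterator[EnrichedDocument]:
    """Annotate each raw document with matched entities and an ingestion timestamp."""
    for text in stream:
        matches = {term: label for term, label in GAZETTEER.items()
                   if term.lower() in text.lower()}
        yield EnrichedDocument(
            text=text,
            entities=matches,
            ingested_at=datetime.now(timezone.utc).isoformat(),
        )

if __name__ == "__main__":
    docs = [
        "NOAA issues a new climate change assessment",
        "UNEP publishes a report on plastic waste",
    ]
    for record in enrich(docs):
        print(record)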