Goals

Ambition and quality of the objectives

CIMPLE aims to research and develop innovative social and knowledge-driven creative AI explanations, and to test them in the domain of detection and tracking of manipulated information, while taking into account social, psychological, and technical explainability needs and requirements.

Much research on misinformation detection uses datasets of false and true information to produce binary classifiers that determine whether a piece of information is credible. Beyond such binary classifications, some research has focused on producing more intuitive, unambiguous, and intelligible ways to label information. Recently, we produced a mapping of the credibility labels used by fact-checkers and highlighted the need for more standardised labelling mechanisms to reflect similarities and differences in credibility assessments. In many recent events, such as Brexit, the political elections in the US and UK, and the Coronavirus outbreak in China, reliable and truthful information was creatively manipulated to deceive and to promote certain opinions and perceptions. In such cognitive warfare, manipulations do not render the information entirely false; rather, they omit key details, sensationalise and emotionalise the content, and promote certain content over other content. Capturing such creative manipulations is an important element in the detection and explanation of misinforming content and authorship, yet it is largely overlooked in current AI research.
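
To make the binary-classification paradigm described above concrete, here is a minimal, purely illustrative sketch of a claim-credibility classifier trained on a small labelled set of true and false statements. The toy dataset, labels, and model choice are assumptions for illustration only, not components of CIMPLE.

```python
# Minimal sketch of the binary-classification baseline described above:
# a claim-credibility classifier trained on labelled true/false statements.
# The in-line dataset and labels are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: (claim text, label) with 1 = credible, 0 = not credible.
claims = [
    ("The city council approved the budget on Tuesday.", 1),
    ("Drinking bleach cures viral infections.", 0),
    ("The unemployment rate fell by 0.2% last quarter.", 1),
    ("5G towers spread the coronavirus.", 0),
]
texts, labels = zip(*claims)

# TF-IDF features + logistic regression: a typical baseline that outputs a
# credibility score without any explanation of its decision.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict_proba(["The budget was approved by the council."])[:, 1])
```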

Explaining how information has been manipulated is crucial to understanding the reasons for manipulation and modification (i.e., why these manipulations were made, and with what intent) and to helping users make appropriate choices. We recently explored how the targeting of social values affects people’s willingness to share misinformation online, drawing attention to the need to couple social and technical approaches to understand misinformation-sharing behaviour and how to tackle it effectively.

In several scenarios, large (annotated) datasets and greater computing power have enabled machine learning applications to learn statistical models that reach human accuracy. However, the resulting models are complex, and humans cannot understand their results and decisions. We refer to these models as “black boxes” and contrast them with methods that act as “white boxes”, in that they allow explanations of their outcomes to be exposed. For example, symbolic reasoning methods that exploit logic to derive conclusions can reach quality comparable to statistical models in terms of fact-checking accuracy. However, while the derivation steps of symbolic reasoning can be collected and presented directly to the end user, such a trace is far from clear or convincing on its own. Explanations are useless if they are not comprehensible to their users. Knowledge Graphs, as large repositories of facts, are a natural resource for information verification. Even more importantly, they come with rich schemas (ontologies) that have been manually crafted for human understanding, and with curated factual sources. These data and metadata are key resources for generating convincing and appealing narratives, such as descriptions of logical inconsistencies or of data “cherry-picking” in biased claims. Given people’s confirmation bias, their distrust in the media and in algorithmic methods, and the complexity of the misinformation field, it is crucial to understand how to design and deploy effective explainability solutions.
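
To make the distinction concrete, the following minimal sketch shows how a symbolic, knowledge-graph-based check can verify a claim while collecting its derivation steps. The toy graph, rule, and claim are hypothetical (a real system would query a large KG such as Wikidata with a proper reasoner), and the raw trace it prints is exactly the kind of output that still needs to be turned into a human-friendly narrative.

```python
# Minimal sketch of knowledge-graph-based claim checking with a collected
# derivation trace. The tiny graph, rule, and claim are hypothetical.

# A toy knowledge graph as (subject, predicate, object) triples.
KG = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
}

# One hand-written inference rule:
# capital_of(x, y) & located_in(y, z) => located_in(x, z)
def check_located_in(subject, target, trace):
    if (subject, "located_in", target) in KG:
        trace.append(f"Fact in KG: ({subject}, located_in, {target})")
        return True
    for (s, p, o) in KG:
        if s == subject and p == "capital_of":
            trace.append(f"Rule: {subject} is the capital of {o}, so it inherits {o}'s location")
            return check_located_in(o, target, trace)
    return False

trace = []  # derivation steps collected during reasoning
verdict = check_located_in("Paris", "Europe", trace)
print("Claim 'Paris is located in Europe' supported:", verdict)
print("Raw derivation trace (correct, but not yet a convincing explanation):")
for step in trace:
    print(" -", step)
```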

To this end, the general goal of CIMPLE is to pull together machine learning, psychology, knowledge graphs, linguistic analysis, textual computational creativity analysis, and creative information visualisations to offer novel, transformational, effective, explainable, and engaging AI mechanisms to tackle misinformation.

What are our main goals?

  1. Acquire stakeholder requirements for XAI and AI-driven misinformation detection.
  2. Develop a novel Knowledge Graph-based AI framework to realise XAI-by-design models.
  3. Develop XAI models for detecting information manipulation across time and media sources.
  4. Automatically generate creative and engaging explainability visualisations, co-created with stakeholders.
  5. Realise personalised XAI, where the explanations are tailored to end-user skills, topic-affinity, and application domain.

CIMPLE aims to achieve the objectives above with an interdisciplinary consortium composed of social scientists, computer scientists, and industry partners, supported by associated partners including fact-checkers, journalists, and social platforms, through agile and co-creational design and implementation. It aims to significantly advance the state of the art in multiple areas at the core of XAI. Given the ambition and innovativeness of the project, the target Technology Readiness Level of CIMPLE is TRL 4, “technology validated in lab”.

What results are we expecting to achieve?

  • Proposing new methods for social & knowledge-driven explainable AI that go beyond interpretability toward shaping perception.
  • Developing reliable socio-technical components for detecting and tracking manipulated information, and using creativity in the responses and explanations brought to the user.
  • Measuring the impact of such approaches on users’ understanding, their acceptance of explanations, and their position towards manipulated information.
  • Experimenting with and validating ways to combine symbolic reasoning and statistical learning (e.g., graph convolutional networks that also embed the semantics of entities and their relations, or combinations of language models with human-curated encyclopedic knowledge graphs) to develop explainability by design; see the sketch after this list.
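
As a concrete (and purely illustrative) sketch of the last point, the snippet below implements one relational graph-convolution step in plain NumPy, showing how relation-specific transformations let the semantics of entities and of their relations from a knowledge graph flow into a statistical model. All names, dimensions, and the toy graph are assumptions, not project code.

```python
# Minimal sketch of one relational graph-convolution layer: node embeddings are
# updated using relation-specific weight matrices, so entity AND relation
# semantics enter the statistical model. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)

num_nodes, dim = 4, 8
H = rng.normal(size=(num_nodes, dim))               # initial entity embeddings

# Toy KG edges as (source, relation, target); two relation types, indexed 0 and 1.
edges = [(0, 0, 1), (1, 1, 2), (0, 0, 3)]
num_relations = 2

W_self = rng.normal(size=(dim, dim))                # self-loop transform
W_rel = rng.normal(size=(num_relations, dim, dim))  # one transform per relation

def rgcn_layer(H, edges):
    """One message-passing step: h_i' = ReLU(W_self h_i + mean over neighbours of W_r h_j)."""
    messages = np.zeros_like(H)
    counts = np.zeros(num_nodes)
    for src, rel, dst in edges:
        messages[dst] += H[src] @ W_rel[rel]        # relation-aware message
        counts[dst] += 1
    counts = np.maximum(counts, 1)                  # avoid division by zero
    out = H @ W_self + messages / counts[:, None]
    return np.maximum(out, 0)                       # ReLU

H_next = rgcn_layer(H, edges)
print(H_next.shape)  # (4, 8) -- updated, relation-aware entity embeddings
```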
