Explainable Artificial Intelligence (XAI)

Welcome to the temporary webpage of the Explainable Artificial Intelligence (XAI) research theme of the Department of Advanced Computing Sciences of Maastricht University (UM). This research theme was formulated in October 2017, alongside several other research themes. The official website with all research themes formulated in 2017 will go online in September 2022.

Two major challenges for Artificial Intelligence are to provide ‘explanations’ for the recommendations made by intelligent systems and to guarantee their ‘reliability’. Explanations are important to help the people involved understand why a recommendation was made. Reliability is important when decisions concern people’s safety or profoundly affect their lives.

Recent high-profile successes in Machine Learning have mostly been achieved with deep neural networks, which yield ‘black-box’ input-output mappings that can be challenging to explain to users. Especially in the medical, military, and legal domains, black-box machine-learning techniques are unacceptable, since decisions may have a profound impact on people’s lives. Without strict ethical standards, unbiased data collection, and attention to algorithmic bias, AI may amplify the biases that already pervade our society, as recent news reports have shown.

Research focus and application
Research within the XAI theme investigates different ways to make intelligent systems better explainable and more reliable. Some of the research foci are:

  • Analyzing whether the input-output relations of a black-box AI system can provide high-level, correlation-based explanations of its behavior.
  • Logic-based systems can provide explanations and are able to reason with (legal) regulations that should be adhered to. Integrating logic-based approaches with machine-learning approaches is one possible way to realize explainable artificial intelligence, and is an important challenge for the near future.
  • Learning inherently explainable models instead of a black-box mapping may improve explainability with little or no decrease in the quality of predictions, decisions, or recommendations (see the first sketch after this list).
  • Improving the understanding of deep neural networks, especially with respect to the influence of inputs and intermediate layers on the outputs, the flow of information through the network, the detection of changes that can make the system collapse, and the determination of the system’s tolerance.
  • Using machine learning and causal inference to explain and understand the relationship between variables in high-dimensional and complex data.
  • Developing model-checking tools for properties of safety-critical engineering systems and medical interventions.
  • Making reliable predictions with confidence bounds on the error (see the second sketch after this list).
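
As an illustration of the first and third foci above, the following minimal sketch fits an interpretable decision tree as a surrogate for a black-box classifier, so that the tree’s rules act as a high-level, correlation-based explanation of the black-box input-output relation. The data, models, and names are purely illustrative (scikit-learn is assumed only for convenience) and do not describe any particular UM project.

```python
# A minimal sketch: explaining a black-box classifier with an
# interpretable surrogate model. All names here are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for an opaque model whose internals we cannot inspect.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Fit a shallow decision tree to the black box's *predictions*,
# approximating its input-output relation with readable rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The extracted rules serve as a high-level, correlation-based explanation.
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```

In practice, the fidelity of such a surrogate, i.e. how often it agrees with the black box, should be reported alongside the extracted rules.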
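
The final focus, reliable prediction with confidence bounds, can be illustrated with split conformal prediction, a distribution-free way to wrap any point predictor in an interval with a guaranteed coverage rate. Again, this is only a sketch on assumed synthetic data, not the theme’s specific method.

```python
# A minimal sketch of split conformal regression: predictions with an
# interval that covers the true target ~90% of the time.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=400)

# Split into a proper training set and a calibration set.
X_train, X_cal = X[:300], X[300:]
y_train, y_cal = y[:300], y[300:]

model = LinearRegression().fit(X_train, y_train)

# Calibration residuals yield a bound q such that the interval
# [prediction - q, prediction + q] covers new targets with ~90% probability.
residuals = np.abs(y_cal - model.predict(X_cal))
n = len(residuals)
k = int(np.ceil(0.9 * (n + 1)))   # index of the relevant order statistic
q = np.sort(residuals)[k - 1]

x_new = rng.normal(size=(1, 3))
pred = model.predict(x_new)[0]
print(f"prediction: {pred:.2f}, 90% interval: [{pred - q:.2f}, {pred + q:.2f}]")
```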