Explainable Artificial Intelligence

Explainable Artificial Intelligence is one of the six research themes at DKE. This research theme was founded in 2017 by Pieter Collins and Nico Roos. The research theme, originally named Explainable and Reliable AI (ERAI), focuses on AI systems that can make explainable and reliable decisions or predictions, and on explainable AI-based controllers.

Within the explainable AI research theme, Nico Roos has been addressing the following topics:

  • Learning arguments from data for explainable decision making (a minimal sketch follows this list)
  • Realizing reasoning through the construction of mental models using neural networks
  • Defeasible reasoning with a controlled natural language
  • An explainable gait controller for a Nao robot
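
As an illustration of the first topic, the sketch below learns simple argument-like rules from data by fitting a shallow decision tree and printing each branch as a readable if-then statement. This is only a hedged illustration of the general idea, not the method of the BNAIC/Benelearn 2021 paper on learning arguments listed below; the use of scikit-learn, the Iris dataset, and the depth limit are assumptions chosen for brevity.

    # Minimal sketch: reading argument-like rules off a shallow decision tree.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()

    # A shallow tree keeps every root-to-leaf path short enough to read as an
    # argument, e.g. "IF petal width <= 0.8 THEN class = setosa".
    tree = DecisionTreeClassifier(max_depth=2, random_state=0)
    tree.fit(iris.data, iris.target)

    # export_text prints one root-to-leaf path per leaf; the threshold tests on
    # the path act as the premises of the argument, the leaf class as its claim.
    print(export_text(tree, feature_names=list(iris.feature_names)))

Each printed branch can then be inspected, accepted, or challenged by a human reader, which is what makes the resulting decision procedure explainable.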

Publications

Paweł Mąka, Jelle Jansen, Theodor Antoniou, Thomas Bahne, Kevin Müller, Can Türktas, Nico Roos and Kurt Driessens, Combining Mental Models with Neural Networks, Artificial Intelligence and Machine Learning, 33rd Benelux Conference, BNAIC/Benelearn (2021) 256-270. [pdf]

Jonas Bei, David Pomerenke, Lukas Schreiner, Sepideh Sharbaf, Pieter Collins and Nico Roos, Explainable AI through the Learning of Arguments, Artificial Intelligence and Machine Learning, 33rd Benelux Conference, BNAIC/Benelearn (2021) 241-255. [pdf]

N. Roos and Z. Sun, Explainable Robotics applied to bipedal gait development, BeNeLux Artificial Intelligence Conference (BNAIC): Proceedings of the Reference AI & ML Conference for Belgium, Netherlands & Luxemburg, CEUR Workshop Proceedings 2491 (2019) 15 p. [pdf]