Explainable Artificial Intelligence
Explainable artificial intelligence (XAI) is a fast-growing research area, mainly aimed at providing more insight into the decisions of black-box models. My research interests here are twofold. On the one hand, I aim to develop a human cognition-friendly explanation method based on formal argumentation; on the other hand, I study how formal argumentation can be applied to make black-box models more transparent.
Dynamic Formal Argumentation
(Human) argumentation is inherently dynamic, yet most research on formal argumentation focuses on static settings. For the theory to better correspond to real-life applications, dynamic argumentation should be studied. For example, dynamic structured argumentation is one of the core components of the inquiry-based applications at the Netherlands Police.
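To give a flavour of what such dynamics amount to, the sketch below (a toy example in Python, not the Police application) computes the grounded extension of a small Dung-style abstract framework and shows how putting forward a single new argument can overturn a previously accepted conclusion; the argument names and the framework are made up for illustration.

```python
# Minimal sketch of dynamics in abstract argumentation (illustrative only).
# A framework is a set of arguments plus an attack relation between them.

def grounded_extension(arguments, attacks):
    """Iterate the characteristic function: start from the unattacked
    arguments and repeatedly add every argument defended by the current
    set, until a fixpoint is reached."""
    extension = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((c, b) in attacks for c in extension)
                   for b in arguments if (b, a) in attacks)
        }
        if defended == extension:
            return extension
        extension = defended

# A static framework: b attacks a, c attacks b, so a and c are accepted.
arguments = {"a", "b", "c"}
attacks = {("b", "a"), ("c", "b")}
print(grounded_extension(arguments, attacks))   # contains a and c

# A dynamic update: a new argument d attacking c is put forward.
arguments |= {"d"}
attacks |= {("d", "c")}
print(grounded_extension(arguments, attacks))   # contains b and d; a is no longer accepted
```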
Logic-Based Argumentation
Dung’s abstract argumentation frameworks need to be instantiated by deductive or structured approaches, which provide a logical justification for the notions of arguments and counter-arguments. In this line of research, I study meta-theoretic properties of logical argumentation, especially for sequent-based argumentation.
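As a rough illustration of what such an instantiation looks like, the sketch below builds arguments and rebuttal attacks from a small knowledge base; the facts, rules, and attack definition are made up for the example, and the literal-and-rule language is far simpler than the sequents over an arbitrary core logic used in sequent-based argumentation.

```python
# Toy logic-based instantiation of an abstract argumentation framework.
# Literals are strings; "~p" is the negation of "p".
facts = {"bird", "penguin"}
rules = [({"bird"}, "flies"), ({"penguin"}, "~flies")]

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

# An argument is a pair (premises, claim): either a fact supporting itself,
# or the body of a rule (all of whose premises are facts) supporting its head.
arguments = [(frozenset({f}), f) for f in facts]
arguments += [(frozenset(body), head) for body, head in rules if body <= facts]

# Rebuttal attack: one argument attacks another when its claim negates the
# other's claim.  The result is an abstract framework on which Dung-style
# semantics can then be applied.
attacks = {(a, b) for a in arguments for b in arguments
           if a[1] == negate(b[1])}

for (prem_a, claim_a), (prem_b, claim_b) in attacks:
    print(f"({set(prem_a)} => {claim_a}) attacks ({set(prem_b)} => {claim_b})")
```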
Agent-Based Modeling
The ArgABM project was initiated to create an agent-based model (ABM) of scientific inquiry that accounts for its social aspects and whose findings can easily be checked for robustness. To this end, ArgABM is based on formal argumentation, programmed to contain only a few hidden and fixed parameters, and made available open source.
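The skeleton below gives a rough idea of the kind of model meant here: agents explore theories, occasionally share what they have found, and re-evaluate which theory to work on. All names and parameters are hypothetical, arguments are represented only as identifiers rather than as a genuine argumentation framework, and the open-source ArgABM code itself, not this sketch, is the reference implementation.

```python
# Minimal, hypothetical skeleton of an ABM of scientific inquiry.
import random

random.seed(0)  # fixed seed, so runs can be reproduced for robustness checks

N_AGENTS, N_THEORIES, N_STEPS = 10, 3, 50
SHARE_PROB = 0.1  # an example of the kind of fixed parameter kept to a minimum

# Each theory is a set of argument ids; agents discover them over time.
theories = [set(range(t * 20, (t + 1) * 20)) for t in range(N_THEORIES)]
discovered = [set() for _ in range(N_AGENTS)]                  # per-agent knowledge
position = [random.randrange(N_THEORIES) for _ in range(N_AGENTS)]

for step in range(N_STEPS):
    for i in range(N_AGENTS):
        # Explore: discover a new argument of the theory currently worked on.
        unknown = theories[position[i]] - discovered[i]
        if unknown:
            discovered[i].add(random.choice(sorted(unknown)))
        # Social aspect: occasionally exchange discoveries with a colleague.
        if random.random() < SHARE_PROB:
            j = random.randrange(N_AGENTS)
            discovered[i] |= discovered[j]
        # Re-evaluate: switch to the theory with the most known arguments.
        position[i] = max(range(N_THEORIES),
                          key=lambda t: len(discovered[i] & theories[t]))

print([len(d) for d in discovered])
```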