Evaluating Explainability Methods in the Context of Predictive Process Monitoring

Universität Ulm

PhD defense, Ghada Elkhawaga, Ulm, Germany. Date: 17 July 2024, Time: 09:00, Room: O29/2006

Predictive Process Monitoring (PPM) emerged as a value-adding use case of process mining. Capitalizing on the recent advances and growing adoption of machine learning techniques, PPM takes business process-related data (i.e., event logs) as input and uses these techniques to train predictive models. At runtime, the trained models generate predictions about the future of currently executed processes. Examples of such predictions include the next steps to be executed, the resource that will execute a particular upcoming step, performance-related information (e.g., the remaining time until the end of the execution), and the outcome of an ongoing execution.
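To make this concrete, the following is a minimal, illustrative sketch of outcome-oriented PPM, not the thesis's actual pipeline: case prefixes from an event log are encoded as feature vectors and a standard classifier is trained to predict each case's outcome. The column names (case_id, activity, outcome), the frequency-based prefix encoding, and the file name are assumptions made for illustration.

    # Minimal, illustrative sketch of outcome-oriented PPM (assumed schema:
    # an event log with columns case_id, activity, outcome; events already
    # time-ordered within each case). Prefixes are encoded by activity
    # frequencies, one common prefix-encoding scheme.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    def encode_prefixes(log: pd.DataFrame, prefix_len: int) -> pd.DataFrame:
        """Frequency-encode the first `prefix_len` events of every case."""
        prefixes = log.groupby("case_id").head(prefix_len)
        features = pd.crosstab(prefixes["case_id"], prefixes["activity"])
        labels = log.groupby("case_id")["outcome"].first()
        return features.join(labels, how="inner")

    # Hypothetical usage; "event_log.csv" is a placeholder file name.
    # log = pd.read_csv("event_log.csv")
    # data = encode_prefixes(log, prefix_len=5)
    # X, y = data.drop(columns="outcome"), data["outcome"]
    # X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    # model = GradientBoostingClassifier().fit(X_tr, y_tr)
    # print("holdout accuracy:", model.score(X_te, y_te))

At runtime, the same encoding would be applied to the prefix of a running case, and the trained model would return a prediction for its eventual outcome.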

Performance improvements in machine learning techniques do not usually come for free: complexity and opaqueness are common characteristics of machine learning-based models. Since PPM places business process stakeholders at the center of focus, the consequences of the opaqueness associated with complex predictive models must be mitigated. Explainability tends to increase trust in the generated predictions and to boost human interaction with predictive models as a result of increased understanding and transparency. Furthermore, explanations may be used to uncover potential problems resulting from training a predictive model on biased data, and to improve the performance of the predictive model. Several eXplainable Artificial Intelligence (XAI) methods have been proposed, but suitable mechanisms to evaluate these methods must be in place before they can be applied with confidence. However, evaluating XAI in the context of PPM is difficult, owing to the lack of a shared and accepted definition of explainability and of its associated characteristics and evaluation criteria.

The contributions of this thesis include an analysis framework designed to systematically investigate, from different perspectives, the implications that applying different PPM techniques has for explainability. As a second contribution, an approach to evaluate global explainability methods is proposed. This approach analyzes the consistency of explanations with data-related facts extracted from business process data. As a final contribution, the thesis introduces an approach to assess the interpretability of explanations produced for specific predictions. In particular, the proposed approach evaluates rule-based explanations against different interpretability-related criteria. The thesis further discusses results and lessons learned from a number of experiments conducted under different settings. All contributions were validated from a PPM perspective on real-life process-related data.
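To illustrate the core idea behind the second contribution, the sketch below compares a model's global feature importances against a data-derived relevance measure. This is an illustration under assumptions, not the thesis's actual procedure: mutual information stands in for the "data-related facts", synthetic data stands in for an encoded event log, and Spearman rank correlation serves as a simple consistency score.

    # Illustrative consistency check for a global explanation (a sketch
    # under assumptions, not the thesis's exact method): compare
    # model-derived global feature importances with a data-related fact,
    # here the mutual information between each feature and the target.
    from scipy.stats import spearmanr
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import mutual_info_classif

    # Synthetic stand-in for encoded event-log features and an outcome label.
    X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                               random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X, y)
    importances = model.feature_importances_               # "explanation" side
    mi_scores = mutual_info_classif(X, y, random_state=0)  # "data facts" side

    # A high rank correlation suggests the global explanation is consistent
    # with what the data itself indicates about feature relevance.
    rho, _ = spearmanr(importances, mi_scores)
    print(f"Spearman consistency between rankings: {rho:.2f}")

In the same spirit, the interpretability of rule-based explanations could be scored on simple structural criteria, for example the number of rules and the average number of conditions per rule, although the concrete criteria used in the thesis are defined there.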