Abstract
For artificial intelligence systems to be trusted, they must be able to explain their decisions. This article investigates a case-based reasoning (CBR) approach to generating explanations, in which a new event is explained by combining and modifying the explanations of multiple previous events. The approach is implemented in a CBR system for incident analysis, where the goal is to identify the causes of transportation incidents. The system generates explanations that connect observed events to the root causes of an incident through intermediate states and events. Explanations of past incidents are extracted automatically from textual incident reports using natural language processing, avoiding the manual effort of constructing explanations. The system is evaluated on incident reports from the Transportation Safety Board of Canada, supporting the hypothesis that adapting explanations of multiple previous events, rather than a single event, yields a more comprehensive explanation of a new event.
