Welcome to ExaCt 2012
Both within AI systems and in interactive (socio-technical) systems, the ability to explain reasoning processes and results can have substantial impact. In such systems, explanations have mainly served to increase users' confidence in the system's results, or in the system as a whole, by providing evidence of how those results were derived.
Explanation-awareness in computing system development aims at making systems able to interact more effectively or naturally with their users, or better able to understand and exploit knowledge about their own processing. Explanation-awareness in software engineering looks for new ways to guide software designers and engineers in building purposeful explanation-aware software systems. Using the word "awareness" in conjunction with "explanation" implies that a system has some degree of consciousness about explanation.
Thinking of the Web not only as a collection of web pages, but as a Web of experiences exchanged by people across many platforms, gives rise to new challenges and opportunities to leverage experiential knowledge in explanation. The interplay of provenance information with areas such as trust and reputation, reasoning and meta-reasoning, and explanation is known, but not yet well exploited, making this another area to explore.
Outside of artificial intelligence, disciplines such as cognitive science, linguistics, philosophy of science, psychology, and education have also investigated explanation. They consider varying aspects, making it clear that there are many different views of the nature of explanation, and many facets of explanation still to explore. Two relevant examples are open learner models in education, and dialogue management and planning in natural language generation.
This workshop series aims to draw on multiple perspectives on explanation, to examine how explanation can be applied to further the development of robust and dependable systems, and to increase transparency, users' sense of control, trust, acceptance, and decision support.