THE CONCEPTUAL MENTAL MODEL OF EXPLANATION IN AN ARTIFICIAL INTELLIGENCE SYSTEM

Authors

DOI:

https://doi.org/10.20998/2079-0023.2023.01.11

Keywords:

explanation, artificial intelligence system, understandable artificial intelligence, dependencies, mental model, causal dependence

Abstract

The subject of this research is the process of forming explanations in artificial intelligence systems. To overcome the opacity of decision-making in such systems, users should receive an explanation of the decisions made; an explanation allows users to trust these decisions and supports their use in practice. The purpose of the work is to develop a conceptual mental model of explanation that captures the basic dependencies linking the input data, the actions the intelligent system performs to obtain a result, and its final decision. To achieve this goal, the following tasks are solved: structuring the approaches to building mental models of explanations; constructing a conceptual mental model of explanation based on a unified representation of the user's knowledge. Conclusions. The approaches to constructing mental models of explanations in intelligent systems have been structured. Mental models are intended to reflect the user's perception of an explanation. Causal, statistical, semantic, and conceptual approaches to constructing such models are distinguished. It is shown that the conceptual model specifies generalized schemes and principles of the intelligent system's functioning. Its further detailing relies on the causal approach when constructing an explanation of processes, on the statistical approach when constructing an explanation of the system's result, and on the semantic approach when harmonizing the explanation with the user's background knowledge. A three-level conceptual mental model of explanation is proposed. It contains the level of concepts covering the basic principles of the artificial intelligence system's functioning; the level of explanation, which details these concepts in a form the user finds acceptable and understandable; and the level of background knowledge about the subject domain, which serves as the basis for forming the explanation. In practical terms, the proposed model creates the conditions for building and organizing a set of consistent explanations that describe the process and the result of the intelligent system's operation while taking into account the user's ability to perceive them.
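To make the proposed three-level structure more tangible, the sketch below represents its levels as plain Python data structures. This is a minimal illustration under our own assumptions, not code from the paper: every class, field, and function name (DomainKnowledge, Explanation, Concept, select_explanation, is_understandable) is hypothetical.

    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class DomainKnowledge:
        # Level 3: the user's background knowledge about the subject domain.
        known_terms: List[str] = field(default_factory=list)


    @dataclass
    class Explanation:
        # Level 2: an explanation detailing a concept for the user.
        # `kind` mirrors the approaches named in the abstract: "causal" for
        # process explanations, "statistical" for result explanations.
        kind: str
        statement: str


    @dataclass
    class Concept:
        # Level 1: a generalized principle of the AI system's functioning.
        principle: str
        explanations: List[Explanation] = field(default_factory=list)


    def select_explanation(concept: Concept, question: str) -> Explanation:
        # Per the abstract's detailing scheme: causal detailing for questions
        # about the process, statistical detailing for questions about the result.
        kind = "causal" if question == "process" else "statistical"
        for expl in concept.explanations:
            if expl.kind == kind:
                return expl
        raise LookupError(f"no {kind} explanation attached to this concept")


    def is_understandable(expl: Explanation, knowledge: DomainKnowledge) -> bool:
        # Toy stand-in for semantic harmonization: the explanation should
        # rely only on terms the user already knows.
        vocabulary = {t.lower() for t in knowledge.known_terms}
        return all(word.lower() in vocabulary for word in expl.statement.split())


    if __name__ == "__main__":
        concept = Concept(
            principle="items are recommended by similarity of user ratings",
            explanations=[
                Explanation("causal", "you rated similar items highly"),
                Explanation("statistical", "chosen by most similar users"),
            ],
        )
        user = DomainKnowledge(known_terms=["you", "rated", "similar", "items", "highly"])
        expl = select_explanation(concept, "process")
        print(expl.statement, is_understandable(expl, user))

The sketch encodes the abstract's scheme directly: select_explanation details a concept causally for questions about the process and statistically for questions about the result, while is_understandable is a toy check for harmonizing the explanation with the user's background knowledge.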

Author Biographies

Serhii Chalyi, Kharkiv National University of Radio Electronics

Doctor of Technical Sciences, Full Professor, Kharkiv National University of Radio Electronics, Professor of the Department of Information Control Systems, Kharkiv

Irina Leshchynska, Kharkiv National University of Radio Electronics

Candidate of Technical Sciences (PhD), Associate Professor, Kharkiv National University of Radio Electronics, Associate Professor at the Department of Software Engineering, Kharkiv

References

Engelbrecht A. P. Computational Intelligence: An Introduction. NJ: John Wiley & Sons, 2007. 632 p.

Castelvecchi D. Can we open the black box of AI? Nature News. 2016. Vol. 538 (7623). P. 20.

Tintarev N., Masthoff J. A survey of explanations in recommender systems. The 3rd International Workshop on Web Personalisation, Recommender Systems and Intelligent User Interfaces (WPRSIUI'07). 2007. P. 801–810.

Gunning D., Vorm E., Wang J., Turek M. DARPA's explainable AI (XAI) program: A retrospective. Applied AI Letters. 2021. Vol. 2, no. 4. DOI: https://doi.org/10.1002/ail2.61.

Gilpin L. H., Bau D., Yuan B. Z., Bajwa A., Specter M., Kagal L. Explaining Explanations: An Overview of Interpretability of Machine Learning. arXiv:1806.00069. 2018.

Miller T. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence. 2019. Vol. 267. P. 1–38.

Chi M., de Leeuw N., Chiu M., Lavancher C. Eliciting self-explanations improves understanding. Cognitive Science. 1994. Vol. 18. P. 439–477.

Carey S. The origin of concepts. New York: Oxford University Press, 2009. 608 p.

Holyoak K. J., Morrison R. G. The Oxford Handbook of Thinking and Reasoning. Oxford University Press, 2012. 864 p.

Chalyi S., Leshchynskyi V., Leshchynska I. Deklaratyvno-temporalnyi pidkhid do pobudovy poiasnen v intelektualnykh informatsiinykh systemakh [Declarative-temporal approach to the construction of explanations in intelligent information systems]. Visnyk Nats. tekhn. un-tu "KhPI": zb. nauk. pr. Temat. vyp. Systemnyi analiz, upravlinnia ta informatsiini tekhnolohii [Bulletin of the National Technical University "KhPI": a collection of scientific papers. Thematic issue: System analysis, management and information technology]. Kharkov, NTU "KhPI" Publ., 2020. No. 2 (4). P. 51–56.

Halpern J. Y., Pearl J. Causes and explanations: A structural-model approach. Part II: Explanations. Available at: https://arxiv.org/pdf/cs/0208034.pdf (accessed 11.05.202).

Chalyi S., Leshchynskyi V. Temporal representation of causality in the construction of explanations in intelligent systems. Advanced Information Systems. 2020. Vol. 4, no. 3. P. 113–117.

Published

2023-07-15

How to Cite

Chalyi, S., & Leshchynska, I. (2023). THE CONCEPTUAL MENTAL MODEL OF EXPLANATION IN AN ARTIFICIAL INTELLIGENCE SYSTEM. Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies, (1 (9)), 70–75. https://doi.org/10.20998/2079-0023.2023.01.11

Issue

Section

MATHEMATICAL AND COMPUTER MODELING