AN EXPLANATION MODEL IN AN INTELLIGENT SYSTEM AT THE LOCAL, GROUP AND GLOBAL LEVELS OF DETAIL

Authors

Serhii Chalyi, Volodymyr Leshchynskyi

DOI:

https://doi.org/10.20998/2079-0023.2022.02.16

Keywords:

explanation, intelligent information system, dependencies, levels of detail of explanations, cause-and-effect relationships

Abstract

The subject of this research is the process of forming explanations in intelligent information systems. Modern intelligent systems rely on machine learning methods, and the process by which such methods arrive at a solution is usually opaque to the user. Because of this opacity, the user may not trust the solutions proposed by the intelligent system, which reduces the effectiveness of its use. Explanations are used to increase the transparency of decisions. An explanation is represented by knowledge about the reasons why the intelligent system formed its result, as well as the reasons for the individual actions taken in the process of forming that result. An explanation may also contain knowledge about the influence of individual features on the results obtained by the intelligent system. It is therefore advisable to form explanations at different levels of detail, showing both the generalized causes and effects behind the obtained decision and the reasons for choosing individual intermediate actions. The purpose of this work is to develop a generalized explanation model that takes into account the states of the decision-making process in an intelligent system, so that explanations can be built from known data on the sequence of states and the properties of those states. To achieve this goal, the following tasks are solved: structuring the properties of explanations; determining the capabilities of approaches that build explanations from the states and structure of the decision-making process, as well as from the input data; and constructing the explanation model. Conclusions. A generalized explanation model for an intelligent system at the local, group, and global levels of detail of the decision-making process is proposed. The model is represented by an ordered sequence of weighted dependencies between events or states of the decision-making process. The model makes it possible to highlight a local explanation within the framework of a global explanation and to present a chain of group explanations between the event of receiving the input data and the resulting decision. In practical terms, the proposed model is intended for constructing explanations using approaches based on simplifying the operation of the intelligent system and on highlighting the influence of individual features and actions on the final result. Additional capabilities of the model concern detailing the events of the decision-making process down to the individual variables that characterize the state of this process, which makes it possible to form explanations in terms of concepts already known in the subject domain.
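For readers who think in code, the following minimal Python sketch shows one possible realization of the model described above: an explanation as an ordered chain of weighted dependencies between states of the decision-making process, from which local (a single dependency), group (a contiguous sub-chain), and global (the full chain) explanations are extracted. All names (State, Dependency, ExplanationModel) and the example weights are illustrative assumptions, not notation taken from the paper.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class State:
    """A state (event) of the decision-making process, characterized by variables."""
    name: str
    variables: dict = field(default_factory=dict)

@dataclass(frozen=True)
class Dependency:
    """A weighted cause-and-effect link between two states of the process."""
    cause: State
    effect: State
    weight: float  # assumed strength of the causal influence

class ExplanationModel:
    """An ordered sequence of weighted dependencies from input data to decision."""

    def __init__(self, chain: list):
        self.chain = chain  # dependencies ordered from input event to final decision

    def local(self, i: int) -> Dependency:
        """Local level: one dependency highlighted inside the global chain."""
        return self.chain[i]

    def group(self, start: int, end: int) -> list:
        """Group level: a contiguous sub-chain of dependencies."""
        return self.chain[start:end]

    def global_(self) -> list:
        """Global level: the whole chain from input data to the resulting decision."""
        return self.chain

# Usage: a hypothetical three-state process from input data to decision.
s0 = State("input data received", {"user_rating": 4.2})
s1 = State("candidate items filtered", {"n_candidates": 12})
s2 = State("decision formed", {"recommended_item": "item_7"})
model = ExplanationModel([Dependency(s0, s1, 0.8), Dependency(s1, s2, 0.6)])

print(model.local(1))    # local explanation: why the decision followed the filtering
print(model.global_())   # global explanation: the full cause-and-effect chain

Keeping the chain ordered means a group explanation is simply a slice of the global one, which mirrors the paper's idea of nesting local explanations within the global explanation.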

Author Biographies

Serhii Chalyi, Kharkiv National University of Radio Electronics

Doctor of Technical Sciences, Professor, Kharkiv National University of Radio Electronics, Professor of the Department of Information Control Systems, Kharkiv

Volodymyr Leshchynskyi, Kharkiv National University of Radio Electronics

PhD, Associate Professor, Kharkiv National University of Radio Electronics, Associate Professor of the Department of Software Engineering, Kharkiv

References

Engelbrecht Andries P. Computational Intelligence: An Introduction. NJ, John Wiley & Sons, 2007. 632 p.

Castelvecchi D. Can we open the black box of AI? Nature News. 2016, vol. 538 (7623), pp. 20–23.

Gunning D., Aha D. DARPA’s Explainable Artificial Intelligence (XAI) Program. AI Magazine. 2019, no. 40 (2), pp. 44–58.

Preece A., Harborne D., Braines D., Tomsett R., Chakraborty S. Stakeholders in Explainable AI. arXiv:1810.00184. 2018.

Gilpin L. H., Bau D., Yuan B. Z., Bajwa A., Specter M., Kagal L. Explaining Explanations: An Overview of Interpretability of Machine Learning. arXiv:1806.00069. 2018.

Miller T. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence. 2019, vol. 267, pp. 1–38.

Zhang Q., Wu N. Y., Zhu S.-C. Interpretable convolutional neural networks. IEEE Conference on Computer Vision and Pattern Recognition. 2018, pp. 8827–8836.

Deng H. Interpreting tree ensembles with intrees. arXiv:1408.5456. 2014.

Chalyi S., Leshchynskyi V., Leshchynska I. Deklaratyvno-temporalnyi pidkhid do pobudovy poiasnen v intelektualnykh informatsiinykh systemakh [Declarative-temporal approach to the construction of explanations in intelligent information systems]. Visnyk Nats. tekhn. un-tu "KhPI": zb. nauk. pr. Temat. vyp. Systemnyi analiz, upravlinnia ta informatsiini tekhnolohii [Bulletin of the National Technical University "KhPI": a collection of scientific papers. Thematic issue: System analysis, management and information technology]. Kharkiv, NTU "KhPI" Publ., 2020, no. 2(4), pp. 51–56.

Halpern J. Y., Pearl J. Causes and explanations: A structural-model approach. Part I: Causes. The British Journal for the Philosophy of Science. 2005, no. 56 (4), pp. 843–887.

Chalyi S., Leshchynskyi V. Temporal representation of causality in the construction of explanations in intelligent systems. Advanced Information Systems. 2020, vol. 4, no. 3, pp. 113–117.

Chalyi S. F., Leshchynskyi V. O., Leshchynska I. O. Modelyuvannya poyasnen shodo rekomendovanogo pereliku ob’yektiv z urahuvannyam temporalnogo aspektu viboru koristuvacha [Modeling explanations for the recommended list of items based on the temporal dimension of user choice]. Sistemi upravlinnya, navigaciyi ta zv’yazku [Control, Navigation and Communication Systems]. 2019, vol. 6, no. 58, pp. 97–101.

Published

2023-01-13

How to Cite

Chalyi, S., & Leshchynskyi, V. (2023). AN EXPLANATION MODEL IN AN INTELLIGENT SYSTEM AT THE LOCAL, GROUP AND GLOBAL LEVELS OF DETAIL. Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies, (2 (8)), 100–105. https://doi.org/10.20998/2079-0023.2022.02.16

Issue

No. 2 (8) (2022)

Section

INFORMATION TECHNOLOGY