AN EXPLANATION MODEL IN AN INTELLIGENT SYSTEM AT THE LOCAL, GROUP AND GLOBAL LEVELS OF DETAIL
DOI: https://doi.org/10.20998/2079-0023.2022.02.16

Keywords: explanation, intelligent information system, dependencies, levels of detail of explanations, cause-and-effect relationships

Abstract
The subject of this research is the process of forming explanations in intelligent information systems. Modern intelligent systems use machine learning methods, and the process of obtaining a solution with such methods is usually opaque to the user. Because of this opacity, the user may not trust the solutions proposed by the intelligent system, which reduces the efficiency of its use. Explanations are used to increase the transparency of decisions. An explanation is represented by knowledge about the reasons for forming the result in the intelligent system, as well as about the reasons for the individual actions taken while forming that result. An explanation may also contain knowledge about the influence of individual features on the results obtained by the intelligent system. It is therefore advisable to form explanations at different levels of detail, so as to show both the generalized causes of and influences on the obtained decision and the reasons for choosing individual intermediate actions.

The purpose of the work is to develop a generalized model of explanation that considers the states of the decision-making process in an intelligent system, so that explanations can be built from known data on the sequence of states and the properties of these states. To achieve this goal, the following tasks are solved: structuring the properties of explanations; determining the capabilities of approaches that build explanations from the states and structure of the decision-making process, as well as from the input data; and constructing the explanation model.

Conclusions. A generalized model of explanation in an intelligent system for the local, group and global levels of detail of the decision-making process is proposed. The model is represented by an ordered sequence of weighted dependencies between events or states of the decision-making process. It makes it possible to highlight a local explanation within a global explanation and to present a chain of group explanations between the event of obtaining the input data and the resulting decision. In practical terms, the proposed model is intended for constructing explanations with approaches based on simplifying the functioning of the intelligent system and on highlighting the influence of individual features and actions on the final result. Additional capabilities of the model are related to detailing the events of the decision-making process through the selection of individual variables that characterize the state of this process, which makes it possible to form explanations using known terms and concepts of the subject area.
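To make the structure of the proposed model concrete, the following is a minimal Python sketch of an explanation represented as an ordered sequence of weighted dependencies between states of the decision-making process, with local, group and global levels of detail. All class names, fields and example values here are illustrative assumptions made for this sketch, not definitions taken from the paper.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class State:
    """A state (event) of the decision-making process, detailed by the
    variables (subject-domain concepts) that characterize it."""
    name: str
    variables: Dict[str, float] = field(default_factory=dict)


@dataclass
class Dependency:
    """A weighted cause-and-effect dependency between two states; the
    weight reflects the influence of the cause on the effect."""
    cause: State
    effect: State
    weight: float


@dataclass
class Explanation:
    """The global explanation: an ordered chain of weighted dependencies
    from the input-data event to the resulting decision."""
    chain: List[Dependency]

    def local(self, index: int) -> Dependency:
        """Local explanation: one dependency highlighted within the
        global chain, i.e. the reason for a single intermediate action."""
        return self.chain[index]

    def group(self, start: int, stop: int) -> List[Dependency]:
        """Group explanation: a contiguous sub-chain of dependencies
        between intermediate events of the decision-making process."""
        return self.chain[start:stop]


# Hypothetical usage: a two-step decision process from input data to
# the final decision (values are invented for illustration).
s_in = State("input_data", {"user_rating": 4.5})
s_mid = State("candidate_selected", {"similarity": 0.8})
s_out = State("decision", {"recommended": 1.0})
explanation = Explanation([Dependency(s_in, s_mid, 0.8),
                           Dependency(s_mid, s_out, 0.6)])
assert explanation.local(0).weight == 0.8
assert explanation.group(0, 2) == explanation.chain

Representing the chain as an ordered list keeps the global explanation as the full chain, while local and group explanations are obtained by indexing or slicing it, mirroring the nesting of the levels of detail described above.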