A METHOD FOR EVALUATING EXPLANATIONS IN AN ARTIFICIAL INTELLIGENCE SYSTEM USING POSSIBILITY THEORY

Authors

Serhii Chalyi, Volodymyr Leshchynskyi

DOI:

https://doi.org/10.20998/2079-0023.2023.02.14

Keywords:

explanation, evaluation of explanation, artificial intelligence system, intelligent system, comprehensible artificial intelligence, associative dependence, causal dependence, precedent, information system, recommendation system

Abstract

The subject of this research is the process of generating explanations for the decisions of an artificial intelligence system. Explanations help the user understand how a result was reached and thus use an intelligent information system more effectively when making practical decisions. The purpose of this paper is to develop a method for evaluating explanations that takes into account differences in the input data and the corresponding decision of the artificial intelligence system. Solving this problem makes it possible to evaluate how relevant an explanation is to the internal decision-making mechanism of an intelligent information system, regardless of the user's level of knowledge about how such a decision is made and used. To achieve this goal, the following tasks are solved: structuring the evaluation of explanations by their level of detail, taking into account both their compliance with the decision-making process in the intelligent system and the user's level of perception; and developing a method for evaluating explanations based on their compliance with that decision-making process.

Conclusions. The article structures the evaluation of explanations according to their level of detail. Four levels are identified: associative dependencies, precedents, causal dependencies, and interactive dependencies. It is shown that the associative and causal levels of detail can be assessed using numerical, probabilistic, or possibilistic indicators, whereas the precedent and interactive levels require subjective assessment based on surveying users of the artificial intelligence system. The article develops a method for the possibilistic assessment of the relevance of an explanation to the decision-making process in an intelligent system, taking into account the dependencies between the input data and the system's decision. The method comprises stages that assess the sensitivity, correctness, and complexity of an explanation by comparing the values and the number of input variables it uses. This makes it possible to evaluate an explanation comprehensively: its resistance to insignificant changes in the input data, its relevance to the obtained result, and the cost of computing it. In practical terms, the method minimizes the number of input variables in an explanation while satisfying a sensitivity constraint, which allows an interpretation to be formed more efficiently from the subset of key input variables that significantly influence the decision obtained by the intelligent system.
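To make the staged evaluation concrete, the following Python sketch illustrates one possible reading of the method: sensitivity as the possibility that an insignificant input perturbation changes the decision, correctness as the possibility that the explanation's variables alone reproduce the decision, and complexity as the share of input variables used. The function names, the perturbation scheme, the neutral value, and all thresholds are illustrative assumptions, not the paper's actual possibilistic formulas.

```python
# A minimal, hypothetical sketch of the three evaluation stages named in the
# abstract. The possibility degrees, the perturbation scheme, the neutral
# value and the thresholds are assumptions for illustration only.
from typing import Callable, Dict, List

Model = Callable[[Dict[str, float]], float]

def sensitivity(model: Model, x: Dict[str, float],
                used_vars: List[str], eps: float = 0.05) -> float:
    """Possibility degree that a small (eps-relative) change of an input
    variable used by the explanation alters the decision."""
    baseline = model(x)
    degree = 0.0
    for v in used_vars:
        for sign in (-1.0, 1.0):
            perturbed = dict(x)
            perturbed[v] = x[v] * (1.0 + sign * eps)
            # Decision shift clipped to [0, 1] as an assumed possibility degree.
            degree = max(degree, min(1.0, abs(model(perturbed) - baseline)))
    return degree

def correctness(model: Model, x: Dict[str, float],
                used_vars: List[str], neutral: float = 0.0) -> float:
    """Possibility degree that the explanation's variables alone reproduce
    the decision (unused inputs replaced by an assumed neutral value)."""
    reduced = {k: (v if k in used_vars else neutral) for k, v in x.items()}
    return max(0.0, 1.0 - abs(model(reduced) - model(x)))

def complexity(used_vars: List[str], x: Dict[str, float]) -> float:
    """Share of the input variables the explanation relies on."""
    return len(used_vars) / len(x)

def minimize_explanation(model: Model, x: Dict[str, float],
                         used_vars: List[str],
                         min_correctness: float = 0.9,
                         max_sensitivity: float = 0.2) -> List[str]:
    """Greedily drop variables while the explanation stays correct and
    robust, mirroring the minimization described in the abstract."""
    kept = list(used_vars)
    for v in list(kept):
        trial = [u for u in kept if u != v]
        if (trial
                and correctness(model, x, trial) >= min_correctness
                and sensitivity(model, x, trial) <= max_sensitivity):
            kept = trial
    return kept

# Toy linear model: variable "c" contributes little, so it is dropped
# while the sensitivity and correctness constraints still hold.
model = lambda inp: 0.6 * inp["a"] + 0.35 * inp["b"] + 0.05 * inp["c"]
x = {"a": 1.0, "b": 1.0, "c": 1.0}
print(minimize_explanation(model, x, ["a", "b", "c"]))  # -> ['a', 'b']
```

In this toy run the explanation shrinks from three variables to the two key ones, illustrating how a sensitivity constraint can bound the minimization; the paper's method defines these measures possibilistically rather than through the ad hoc clipping used here.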

Author Biographies

Serhii Chalyi, Kharkiv National University of Radio Electronics


Volodymyr Leshchynskyi, Kharkiv National University of Radio Electronics

Candidate of Technical Sciences (PhD), Associate Professor at the Department of Software Engineering, Kharkiv National University of Radio Electronics, Kharkiv

References

Engelbrecht A. P. Computational Intelligence: An Introduction. NJ: John Wiley & Sons, 2007. 632 p.

Alonso J. M., Castiello C., Mencar C. A Bibliometric Analysis of the Explainable Artificial Intelligence Research Field. In: Medina J. et al. (eds.) Information Processing and Management of Uncertainty in Knowledge-Based Systems. Theory and Foundations (IPMU 2018). Communications in Computer and Information Science. 2018, vol. 853, pp. 3–15.

Gunning D., Aha D. DARPA's Explainable Artificial Intelligence (XAI) Program. AI Magazine. 2019, vol. 40, no. 2, pp. 44–58.

Tintarev N., Masthoff J. A survey of explanations in recommender systems. Proceedings of the 3rd International Workshop on Web Personalisation, Recommender Systems and Intelligent User Interfaces (WPRSIUI'07). 2007, pp. 801–810.

Gilpin L. H., Bau D., Yuan B. Z., Bajwa A., Specter M., Kagal L. Explaining Explanations: An Overview of Interpretability of Machine Learning. arXiv:1806.00069. 2018.

Miller T. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence. 2019, vol. 267, pp. 1–38.

Camburu O. M., Giunchiglia E., Foerster J., Lukasiewicz T., Blunsom P. Can I trust the explainer? Verifying post-hoc explanatory methods. arXiv:1910.02065. 2019.

Gunning D., Vorm E., Wang J., Turek M. DARPA's Explainable AI (XAI) Program: A Retrospective. Applied AI Letters. 2021, vol. 2, no. 4. DOI: https://doi.org/10.1002/ail2.61.

Chalyi S., Leshchynskyi V. Temporal-oriented model of causal relationship for constructing explanations for decision-making process. Advanced Information Systems. 2022, vol. 6, no. 3, pp. 60–65.

Chalyi S., Leshchynskyi V. Possible evaluation of the correctness of explanations to the end user in an artificial intelligence system. Advanced Information Systems. 2023, vol. 7, pp. 75–79.

Chalyi S., Leshchynskyi V. Probabilistic counterfactual causal model for a single input variable in explainability task. Advanced Information Systems. 2023, vol. 7, no. 3, pp. 54–59. DOI: https://doi.org/10.20998/2522-9052.2023.3.08.

Chalyi S., Leshchynskyi V. Otsinka chutlyvosti poiasnen v intelektualnii informatsiinii systemi [Evaluation of the sensitivity of explanations in an intelligent information system]. Systemy upravlinnia, navihatsii ta zviazku. Zbirnyk naukovykh prats [Control, Navigation and Communication Systems. Collection of Scientific Papers]. 2023, no. 2, pp. 165–169.

Published

2023-12-19

How to Cite

Chalyi, S., & Leshchynskyi, V. (2023). A METHOD FOR EVALUATING EXPLANATIONS IN AN ARTIFICIAL INTELLIGENCE SYSTEM USING POSSIBILITY THEORY. Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies, no. 2 (10), pp. 95–101. https://doi.org/10.20998/2079-0023.2023.02.14

Issue

No. 2 (10) (2023)

Section

INFORMATION TECHNOLOGY