CONSTRUCTION OF PROBABILISTIC CAUSAL RELATIONSHIPS BETWEEN EQUIVALENCE CLASSES OF DATA IN AN INTELLIGENT INFORMATION SYSTEM

Authors

DOI:

https://doi.org/10.20998/2079-0023.2024.01.16

Keywords:

Causal dependency, cause-and-effect relationship, temporal dependency, possibility, necessity, explanation, artificial intelligence system, intelligent system, explainable artificial intelligence, information system

Abstract

The subject of this research is the processes involved in generating explanations for decision-making in artificial intelligence systems. Explanations in such systems make the decision-making process transparent and comprehensible for the user, thereby increasing user trust in the obtained results. The aim of this work is to develop an approach for constructing a probabilistic causal explanation model that takes into account the equivalence classes of input, intermediate, and resulting data. Solving this problem creates conditions for building explanations in the form of causal relationships based on the available information about the properties of the input data as well as the properties of the results obtained in the artificial intelligence system. To achieve this aim, the following tasks are addressed: developing a causal dependency model between the equivalence classes of input and output data; developing methods for constructing equivalence classes of data in the decision-making process and a method for constructing causal explanations. A probabilistic model of causal dependency is proposed, which includes a causal relationship between the equivalence classes of input or intermediate data and of resulting data obtained during the decision-making process in the artificial intelligence system. This relationship takes into account estimates of the possibility and necessity of such a dependency. The model creates conditions for explaining the possible causes of the obtained decision. A set of methods is proposed for constructing equivalence classes of data in the decision-making process and for constructing causal explanations that establish a causal relationship between the equivalence classes. When constructing equivalence classes, relations of mandatory and optional data refinement, requirements or exclusions of data, as well as data conjunctions, are established.
When constructing causal explanations, the possibility of such a dependency and the bounds on its necessity are calculated, allowing explanations to be built from the available information about the obtained decisions and the input and intermediate data used to form these decisions.
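The abstract's central idea, estimating both the possibility and the necessity of a causal dependency between an equivalence class of input data and an equivalence class of resulting data, can be illustrated with a minimal sketch. This is a hypothetical reading based on standard possibility theory, not the authors' formulation; the function names and the frequency-based estimation are assumptions for illustration only.

```python
# Illustrative sketch (hypothetical, not the paper's implementation):
# estimating possibility and necessity of a causal dependency between
# an input equivalence class and a resulting equivalence class from
# observed (input class, result class) pairs.
from collections import Counter

def possibility(records, cause, effect):
    """Pi(effect | cause): relative frequency of `effect` among records
    whose input falls into the `cause` class, normalised so that the
    most frequent result has possibility 1."""
    outcomes = Counter(out for inp, out in records if inp == cause)
    if not outcomes:
        return 0.0
    top = max(outcomes.values())
    return outcomes[effect] / top

def necessity(records, cause, effect):
    """N(effect | cause) = 1 - Pi(rival | cause): one minus the highest
    possibility of any competing result. High necessity means the cause
    almost always leads to this effect."""
    outcomes = Counter(out for inp, out in records if inp == cause)
    if not outcomes:
        return 0.0
    top = max(outcomes.values())
    rivals = [n / top for out, n in outcomes.items() if out != effect]
    return 1.0 - max(rivals, default=0.0)

# Observed (input equivalence class, resulting equivalence class) pairs.
log = [("A", "x"), ("A", "x"), ("A", "x"), ("A", "y"), ("B", "y")]
print(possibility(log, "A", "x"))  # 1.0: "x" is fully possible given "A"
print(necessity(log, "A", "x"))    # ~0.667: rival result "y" limits necessity
```

The duality N(effect) = 1 − Π(not effect) is what lets an explanation distinguish a merely possible cause from one whose effect is nearly inevitable.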

Author Biographies

Serhii Chalyi, Kharkiv National University of Radio Electronics

Doctor of Technical Sciences, Full Professor, Kharkiv National University of Radio Electronics, Professor at the Department of Information Control Systems, Kharkiv

Volodymyr Leshchynskyi, Kharkiv National University of Radio Electronics

Candidate of Technical Sciences (PhD), Associate Professor, Kharkiv National University of Radio Electronics, Associate Professor at the Department of Software Engineering, Kharkiv

References

Engelbrecht A. P. Computational Intelligence: An Introduction. NJ: John Wiley & Sons, 2007. 632 p.

Alonso J. M., Castiello C., Mencar C. A Bibliometric Analysis of the Explainable Artificial Intelligence Research Field. In: Medina J. et al. (eds) Information Processing and Management of Uncertainty in Knowledge-Based Systems. Theory and Foundations. IPMU 2018. Communications in Computer and Information Science. 2018, vol. 853, pp. 3–15.

Chalyi S. F., Leshchynska I. O. Kontseptualna mentalna model poiasnennia v systemi shtuchnoho intelektu [Conceptual mental model of explanation in an artificial intelligence system]. Visnyk Natsionalnoho tekhnichnoho universytetu «KhPI». Seriia: Systemnyi analiz, upravlinnia ta informatsiini tekhnolohii [Bulletin of the National Technical University "KhPI". Series: System Analysis, Control and Information Technologies]. Kharkiv, NTU "KhPI" Publ., 2023, no. 1 (9), pp. 70–75.

Tintarev N., Masthoff J. A survey of explanations in recommender systems. The 3rd International Workshop on Web Personalisation, Recommender Systems and Intelligent User Interfaces (WPRSIUI'07). 2007, pp. 801–810.

Camburu O. M., Giunchiglia E., Foerster J., Lukasiewicz T., Blunsom P. Can I trust the explainer? Verifying post-hoc explanatory methods. 2019. arXiv:1910.02065.

Gunning D., Aha D. DARPA's Explainable Artificial Intelligence (XAI) Program. AI Magazine. 2019, vol. 40, no. 2, pp. 44–58.

Gunning D., Vorm E., Wang J., Turek M. DARPA's explainable AI (XAI) program: A retrospective. Applied AI Letters. 2021, vol. 2, no. 4. https://doi.org/10.1002/ail2.61

Friedman J. H. Greedy Function Approximation: A Gradient Boosting Machine. Annals of Statistics. 2001, vol. 29, no. 5, pp. 1189–1232.

Lundberg S. M., Lee S. I. A Unified Approach to Interpreting Model Predictions. Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS 2017). 2017, pp. 4765–4774.

Ribeiro M. T., Singh S., Guestrin C. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD). 2016, pp. 1135–1144.

Chalyi S., Leshchynskyi V. Temporal-oriented model of causal relationship for constructing explanations for decision-making process. Advanced Information Systems. 2022, vol. 6, no. 3, pp. 60–65.

Chalyi S., Leshchynskyi V. Probabilistic counterfactual causal model for a single input variable in explainability task. Advanced Information Systems. 2023, vol. 7, no. 3, pp. 54–59. https://doi.org/10.20998/2522-9052.2023.3.08

Published

2024-07-30

How to Cite

Chalyi, S., & Leshchynskyi, V. (2024). CONSTRUCTION OF PROBABILISTIC CAUSAL RELATIONSHIPS BETWEEN EQUIVALENCE CLASSES OF DATA IN AN INTELLIGENT INFORMATION SYSTEM. Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies, (1 (11)), 97–102. https://doi.org/10.20998/2079-0023.2024.01.16

Issue

Section

INFORMATION TECHNOLOGY