CONSTRUCTION OF PROBABILISTIC CAUSAL RELATIONSHIPS BETWEEN EQUIVALENCE CLASSES OF DATA IN AN INTELLIGENT INFORMATION SYSTEM
DOI: https://doi.org/10.20998/2079-0023.2024.01.16

Keywords: causal dependency, cause-and-effect relationship, temporal dependency, possibility, necessity, explanation, artificial intelligence system, intelligent system, explainable artificial intelligence, information system

Abstract
The subject of this research is the process of generating explanations for decisions made in artificial intelligence systems. Explanations make the decision-making process transparent and comprehensible for the user, thereby increasing user trust in the obtained results. The aim of this work is to develop an approach to constructing a probabilistic causal explanation model that takes into account the equivalence classes of input, intermediate, and resulting data. Solving this problem makes it possible to build explanations in the form of causal relationships based on the available information about the properties of the input data and of the results obtained in the artificial intelligence system. To achieve this aim, the following tasks are addressed: developing a model of causal dependency between the equivalence classes of input and output data; developing methods for constructing equivalence classes of data in the decision-making process; and developing a method for constructing causal explanations.

A probabilistic model of causal dependency is proposed. It comprises a causal relationship between the equivalence classes of input (or intermediate) data and the resulting data obtained during decision making in the artificial intelligence system, and it takes into account estimates of the possibility and necessity of such a dependency. The model makes it possible to explain the probable causes of the obtained decision. A set of methods is proposed for constructing equivalence classes of data in the decision-making process and for constructing causal explanations that establish a causal relationship between these classes. When constructing equivalence classes, relations of mandatory and optional data refinement, of requirement or exclusion of data, as well as data conjunctions, are established.
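The grouping of data into equivalence classes, and the relations named above, can be illustrated with a minimal sketch. The function name, the property-based grouping criterion, and the sample records are hypothetical, introduced only for illustration; the paper's own construction additionally relies on the refinement, requirement, exclusion, and conjunction relations between classes.

```python
from collections import defaultdict
from enum import Enum


class Relation(Enum):
    """Relations between equivalence classes named in the abstract."""
    MANDATORY_REFINEMENT = "mandatory refinement"
    OPTIONAL_REFINEMENT = "optional refinement"
    REQUIREMENT = "requirement"
    EXCLUSION = "exclusion"
    CONJUNCTION = "conjunction"


def build_equivalence_classes(items, key_properties):
    """Group data items into equivalence classes by the values of the
    selected properties (a simplified criterion; names and data are
    illustrative, not taken from the paper)."""
    classes = defaultdict(list)
    for item in items:
        key = tuple(item[p] for p in key_properties)
        classes[key].append(item)
    return dict(classes)


# Hypothetical input records of a decision-making process
inputs = [
    {"id": 1, "category": "A", "level": "high"},
    {"id": 2, "category": "A", "level": "high"},
    {"id": 3, "category": "B", "level": "low"},
]
classes = build_equivalence_classes(inputs, ["category", "level"])
# -> two classes: ("A", "high") holding items 1 and 2,
#    ("B", "low") holding item 3
```

Each resulting class groups data items that are indistinguishable with respect to the chosen properties; the relations in `Relation` would then be established between such classes rather than between individual items.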
When constructing causal explanations, the possibility of the dependency and the bounds on its necessity are calculated, which allows explanations to be built from the available information about the obtained decisions and about the input and intermediate data used to form them.
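The possibility and necessity estimates for a causal dependency between two classes can be sketched with simple frequency-based proxies. The function, the observation format, and the estimation formulas below are assumptions made for illustration, not the paper's actual model: possibility is approximated by how often the effect class occurs when the cause class is present, and necessity by how rarely the effect occurs without the cause.

```python
def possibility_necessity(observations, cause, effect):
    """Estimate possibility and necessity of the dependency cause -> effect
    from co-occurrence counts (an illustrative frequency-based sketch).

    Each observation is a dict with a set of input classes ("inputs")
    and a resulting class ("result") -- a hypothetical format.
    """
    with_cause = [o for o in observations if cause in o["inputs"]]
    without_cause = [o for o in observations if cause not in o["inputs"]]

    def effect_freq(cases):
        # Relative frequency of the effect class within the given cases
        if not cases:
            return 0.0
        return sum(1 for o in cases if o["result"] == effect) / len(cases)

    possibility = effect_freq(with_cause)      # effect given the cause
    necessity = 1.0 - effect_freq(without_cause)  # effect rarely without it
    return possibility, necessity


# Hypothetical observation log of decisions
obs = [
    {"inputs": {"x1"}, "result": "d1"},
    {"inputs": {"x1"}, "result": "d1"},
    {"inputs": {"x2"}, "result": "d2"},
]
pi, nec = possibility_necessity(obs, "x1", "d1")
# pi = 1.0 (d1 always followed x1); nec = 1.0 (d1 never occurred without x1)
```

A high possibility with a low necessity would indicate that the cause class admits the decision but is not required for it, which is the kind of distinction the proposed explanations are meant to convey.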
References
Engelbrecht A.P. Computational Intelligence: An Introduction. NJ: John Wiley & Sons, 2007. 632 p.
Alonso J.M., Castiello C., Mencar C. A Bibliometric Analysis of the Explainable Artificial Intelligence Research Field. In: Medina, J., et al. Information Processing and Management of Uncertainty in Knowledge-Based Systems. Theory and Foundations. IPMU. Communications in Computer and Information Science. 2018, vol. 853. pp. 3–15.
Chalyi S.F., Leshchynska I.O. Kontseptualna mentalna model poiasnennia v systemi shtuchnoho intelektu [Conceptual mental model of explanation in an artificial intelligence system]. Visnyk Natsionalnoho tekhnichnoho universytetu «KhPI». Seriia: Systemnyi analiz, upravlinnia ta informatsiini tekhnolohii [Bulletin of the National Technical University "KhPI". Series: System Analysis, Control and Information Technology]. Kharkiv, NTU "KhPI" Publ., 2023, no. 1 (9), pp. 70–75.
Tintarev N., Masthoff J. A survey of explanations in recommender systems. The 3rd International Workshop on Web Personalisation, Recommender Systems and Intelligent User Interfaces (WPRSIUI'07). 2007, pp. 801–810.
Camburu O.M, Giunchiglia E., Foerster J., Lukasiewicz T., Blunsom P. Can I trust the explainer? Verifying post-hoc explanatory methods. 2019. arXiv:1910.02065.
Gunning D., Aha D. DARPA's Explainable Artificial Intelligence (XAI) Program. AI Magazine. 2019, vol. 40(2), pp. 44–58.
Gunning D., Vorm E., Wang J., Turek M. DARPA's Explainable AI (XAI) Program: A Retrospective. Applied AI Letters. 2021, vol. 2, no. 4. https://doi.org/10.1002/ail2.61
Friedman J.H. Greedy Function Approximation: A Gradient Boosting Machine. Annals of Statistics. 2001, vol. 29(5), pp.1189-1232.
Lundberg S.M., Lee S.I. A Unified Approach to Interpreting Model Predictions. Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS 2017). 2017, pp. 4765–4774.
Ribeiro M.T., Singh S., Guestrin C. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2016, pp. 1135–1144.
Chalyi S., Leshchynskyi V. Temporal-oriented model of causal relationship for constructing explanations for decision-making process. Advanced Information Systems. 2022, vol. 6(3), pp. 60–65.
Chalyi S., Leshchynskyi V. Probabilistic counterfactual causal model for a single input variable in explainability task. Advanced Information Systems. 2023, vol. 7(3), pp. 54–59. https://doi.org/10.20998/2522-9052.2023.3.08
License
This work is licensed under a Creative Commons Attribution 4.0 International License.