Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies http://samit.khpi.edu.ua/ <p><strong>Collection of scientific papers</strong></p> <p><img style="width: 250px;" src="http://samit.khpi.edu.ua/public/journals/49/cover_issue_16936_uk_UA.jpg" alt="" /></p> <p><strong>Year of foundation:</strong> 1961</p> <p><strong>Aims and Scope:</strong> A peer-reviewed open-access scientific publication that presents new scientific results in the field of system analysis and management of complex systems, based on the application of modern mathematical methods and advanced information technology. The journal publishes works related to artificial intelligence, big data analysis, and modern methods of high-performance computing in distributed decision support systems.</p> <p><strong>Target audience:</strong> Scientists, higher education teachers, post-graduate students, students, and specialists in the fields of systems analysis, management, and computer technology.</p> <p><strong>ISSN:</strong> <a href="https://portal.issn.org/resource/ISSN/2079-0023">2079-0023</a> (Print)</p> <p><strong>ISSN:</strong> <a href="https://portal.issn.org/resource/ISSN/2410-2857">2410-2857</a> (Online)</p> <p>Media identifier <strong><a href="https://drive.google.com/file/d/1POp1f3OPs6wWTgpUZXdVVKlUSORms-g1/view?usp=sharing">R30-01544</a></strong>, according to the <a href="https://drive.google.com/file/d/1o3jlce-hW2415D2fiaa7gbrj307yvKf3/view?usp=share_link"><strong>decision of the National Council of Ukraine on Television and Radio Broadcasting of 16.10.2023 No. 1075</strong></a>.</p> <p><strong><a href="https://drive.google.com/open?id=1BJybDTz3S9-ld7mUSnDpBeQzDBH61OO9">Order of the Ministry of Education and Science of Ukraine No.
1643 of December 28, 2019</a></strong> "On approval of decisions of the Attestation Board of the Ministry on the activity of specialized scientific councils of December 18, 2019", Annex 4, <strong>"Bulletin of the National Technical University "KhPI". Series: System Analysis, Control and Information Technology" is added to category B</strong> of the "List of scientific professional publications of Ukraine in which the results of dissertation works for obtaining the scientific degrees of doctor of sciences, candidate of sciences, and doctor of philosophy can be published".</p> <p><strong>Indexing:</strong> Index Copernicus, DOAJ, Google Scholar, and <a href="http://samit.khpi.edu.ua/indexing">other systems</a>.</p> <p>The journal publishes scientific works in the following fields:</p> <ul> <li>124 - System analysis</li> <li>122 - Computer science</li> <li>126 - Information systems and technologies</li> <li>121 - Software engineering</li> <li>151 - Automation and computer-integrated technologies</li> <li>113 - Applied mathematics</li> </ul> <p><strong>Frequency:</strong> Biannual - June and December issues (deadlines for submission of manuscripts: May 15 and November 15 of each year; manuscripts submitted late may be considered separately).</p> <p><strong>Languages:</strong> Ukrainian, English (mixed-language issues).</p> <p><strong>Founder and publisher:</strong> National Technical University "Kharkiv Polytechnic Institute" (<a href="https://www.kpi.kharkov.ua/eng/">University website</a>, <a href="https://ndch.kpi.kharkov.ua/en/bulletin-of-ntu-khpi/">Scientific and Research Department</a>).</p> <p><strong>Chief editor:</strong> <a href="https://www.scopus.com/authid/detail.uri?authorId=57202891828">M. D. Godlevskyi</a>, D.
Sc., Professor, National Technical University "KhPI".</p> <p><strong>Editorial board</strong> staff is available <a href="http://samit.khpi.edu.ua/editorialBoard">here</a>.</p> <p><strong>Address of the editorial office:</strong> 2, Kyrpychova str., 61002, Kharkiv, Ukraine, NTU "KhPI", Department of System Analysis and Information-Analytical Technologies.</p> <p><strong>Responsible secretary:</strong> <a href="https://www.scopus.com/authid/detail.uri?authorId=6507139684">M. I. Bezmenov</a>, PhD, Professor, National Technical University "KhPI".</p> <p><strong>Phone numbers:</strong> +38 057 707-61-03, +38 057 707-66-54</p> <p><strong>E-mail:</strong> mykola.bezmenov@khpi.edu.ua</p> <p>This journal practices and supports a policy of open access in accordance with the <strong><a href="https://www.budapestopenaccessinitiative.org/read">Budapest Open Access Initiative (BOAI)</a></strong>.</p> <p><img src="http://samit.khpi.edu.ua/public/site/images/koppam/open-access.png" alt="Open Access" /></p> <p>Published articles are distributed under the terms and conditions of the <strong><a href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution (CC BY)</a></strong> license.</p> <p><img src="http://samit.khpi.edu.ua/public/site/images/koppam/cc-by.png" alt="CC-BY" /></p> <p>The editorial board adheres to international standards of publishing ethics and the recommendations of the <strong><a href="https://publicationethics.org/resources/guidelines/principles-transparency-and-best-practice-scholarly-publishing">Committee on Publication Ethics (COPE)</a></strong> on the Principles of Transparency and Best Practice in Scholarly Publishing.</p> <p><img src="http://samit.khpi.edu.ua/public/site/images/koppam/sm-cope.png" alt="" width="74" height="50" /></p> <p><span>Authors who publish with this journal agree to the following terms:</span></p><ul><li>Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed
under a <a href="http://creativecommons.org/licenses/by/3.0/" target="_new">Creative Commons Attribution License</a> that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.</li><li>Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.</li><li>Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See <a href="http://opcit.eprints.org/oacitation-biblio.html" target="_new">The Effect of Open Access</a>).</li></ul> mykola.bezmenov@khpi.edu.ua (Mykola Bezmenov) andrii.kopp@khpi.edu.ua (Andrii Kopp) Tue, 30 Jul 2024 14:59:10 +0300 RESEARCH ON ARTIFICIAL INTELLIGENCE TOOLS FOR AUTOMATING THE SOFTWARE TESTING PROCESS http://samit.khpi.edu.ua/article/view/309185 <p>The subject matter of the article is artificial intelligence (AI) tools for automating the software testing process. The rapid growth of the software industry in recent decades has led to significantly increased competition in the IT market and, as a result, stricter requirements for the corresponding products and services.
AI-driven test automation is becoming increasingly relevant due to its ability to solve complex tasks that previously required significant human resources.</p> <p>The goal of the work is to investigate the possibilities of using AI technologies to automate manual testing processes, which will increase testing efficiency, reduce costs, and improve software quality.</p> <p>The following tasks were solved in the article: analyzing existing tools and approaches to test automation using AI; developing a conceptual model of a system that integrates AI into the testing process; and exploring the potential of AI to automate various aspects of software testing, such as generating test scenarios, detecting defects, predicting errors, and automatically analyzing test results.</p> <p>The following methods are used: theoretical analysis of the literature and existing solutions in the field of test automation, and an experimental study of the effectiveness of the proposed test automation methods.</p> <p>The following results were obtained: the concept of a system that integrates AI technologies for automating software testing is presented. It has been found that the use of AI makes it possible to automate routine testing tasks, significantly reduce the number of human errors, and improve the quality of software products and the effectiveness of verification and validation processes.</p> <p>Conclusions: The development and implementation of AI-based testing automation systems are extremely relevant and promising. The use of AI technologies makes it possible to significantly increase the efficiency of testing, reduce the costs of its implementation, and improve the quality of software.
The proposed approach to the development of an AI-based test automation system can be used as a basis for further research and development in this field.</p> Olga Vorochek, Illia Solovei Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309185 Tue, 30 Jul 2024 00:00:00 +0300 FORECASTING CHANGE IN THE LEVEL OF FOREST COVERAGE USING THE GLOBAL FOREST WATCH SERVICE AND THE PROGRAMMING AND DATA ANALYSIS LANGUAGE R http://samit.khpi.edu.ua/article/view/309319 <p>The problem of calculating the level of forest cover is considered, including forecasting changes in forest cover in an individual forestry. The authors previously developed software for calculating forest cover and processing information about forest plantations, using the example of the village of Spivakivka in the Izyum district of the Kharkiv region. A comparison of forest cover over a number of years was also made using the Global Forest Watch resource. From this resource, images of the Prydonetsk Forestry were taken with the following conventions: areas where new forest plantations are being planted are shown in blue, and areas where cutting is taking place are shown in pink. It is proposed to divide each of the uploaded images of the selected forestry into squares and then analyze the data for each square. The pink color saturation was calculated and stored in a table. Forecasting the change in forest stands on the selected site, that is, the change in the percentage of felling, can be done in different ways. First, regression analysis can be used, applying the regression equation separately to the values of each square as well as to the entire forestry. Second, a list of input factors can be formed, containing indicators for the selected plot in the two previous years and the same indicators for neighboring plots. Thus, the number of factors will be equal to 27: 26 input and 1 output (the value on the studied square).
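The abstract does not spell out the 27-factor layout square by square; one reading consistent with the count (all nine squares of a 3x3 neighbourhood over the two previous years, plus the eight neighbours in the current year, with the studied square's current value as the output) can be sketched as follows. This is an illustrative Python sketch under that assumption; the paper itself works in R, and the function name and data layout are hypothetical.

```python
def make_sample(pink, r, c, t):
    """Build one 26-input / 1-output sample for square (r, c) at year t.

    `pink[year][row][col]` holds the pink-colour (felling) saturation of each
    square.  Assumed layout: 9 squares of the 3x3 neighbourhood for years
    t-2 and t-1 (18 values) plus the 8 neighbours in year t (8 values) give
    26 inputs; the studied square's value in year t is the single output.
    """
    features = []
    for year in (t - 2, t - 1):            # the two previous years: all 9 squares
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                features.append(pink[year][r + dr][c + dc])
    for dr in (-1, 0, 1):                  # current year: neighbours only
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                features.append(pink[t][r + dr][c + dc])
    target = pink[t][r][c]
    return features, target
```

Samples built this way can then be fed to multivariate linear regression or to a small neural network, as the abstract goes on to describe.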
Such a forecasting problem can be solved either by multivariate linear regression or by artificial neural networks. The R programming and data analysis language was used to perform calculations with both methods. A script was created that performs the calculations by constructing regression lines and an artificial neural network, and also makes it possible to determine the best neural network architecture and the more effective training method for a given data set. The calculation of the felling dynamics in the entire forestry (the forecast for the last year yields an error of 1&nbsp;%) and in the selected square (the forecast for the last year yields an error of 3.5&nbsp;%) are given. After many runs of the script, it was found that the best result is provided by a perceptron with two hidden layers and two neurons in each layer. The results of the calculations indicate a high correlation of the data for determining the percentage of forest that will be cut down in a certain square. Applying this perceptron to forecast the last year showed an error of 3&nbsp;%.</p> Oleksandr Melnykov, Viktoriia Denysenko Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309319 Tue, 30 Jul 2024 00:00:00 +0300 TWO-LEVEL CONCEPT FOR SIMULATING UNIFORM INTERFERENCE-RESISTANT DIGITAL DATA TRANSMISSION http://samit.khpi.edu.ua/article/view/309321 <p>The article formalizes and presents for consideration the concept of a single secure, interference-resistant data transmission channel. In modern cybersecurity theory and practice, there is the NIST Cybersecurity Framework, a set of recommendations for reducing risks for organizations. For high-level data to be secure, the requirements of the SIA triad must be met, namely confidentiality, integrity, and availability of information.
Therefore, the expediency of the work and its result will directly depend on satisfying the SIA conditions. High-level data, such as e-mail or visual data in the GUI of various applications, are transmitted over low-level communication channels, such as cables, wireless radio channels, and others. At each level of information transmission there are specific threats. At the high levels, the main threat to information is people and the human factor. The lower the level of information transmission, the greater the influence of natural obstacles and random transient phenomena. For the user to be able to use various devices without a threat to the confidentiality, integrity, and availability of information, it is necessary to actively and continuously develop and improve the existing methods of data protection, recovery, transmission, and storage. Each aspect in this struggle for security is both an advantage and a disadvantage: excessive security is not appropriate for mass traffic, complexity does not always provide adequate security, and so on. Therefore, the optimality and expediency of the methods used becomes an important factor. For these reasons, the paper proposes a relatively simple, but no less effective, approach to maintaining the SIA requirements.</p> Vladyslav Sharov, Olena Nikulina Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309321 Tue, 30 Jul 2024 00:00:00 +0300 APPLICATION OF A NEURONETWORK FOR DETERMINING THE TYPE OF ELEMENTS OF A SYMMETRICAL COMPENSATION DEVICE OF AN UNSYMMETRICAL SYSTEM WITH A ZERO WIRE http://samit.khpi.edu.ua/article/view/309322 <p>The article investigates the possibility of using neural networks to increase the energy performance of a four-wire power supply system with an uneven load in the phases.
An uneven load in the phases causes asymmetry of currents in the network and contributes to an increase in the current in the neutral wire, which has an extremely negative effect on both the supply network itself and its consumers. To eliminate the asymmetry and reduce the current in the neutral wire, a symmetrical compensating device can be connected. Such a device is a set of reactive elements whose parameters are determined by search optimization. To determine the type of the required element, the determined parameters are recalculated. That is, the solution of such a problem consists of two component sub-problems: structural and parametric synthesis. This approach provides high accuracy of calculations, but has a significant drawback: the calculations are cumbersome. To simplify the solution of the synthesis problem, it makes sense to determine the type of elements using neural networks, which will significantly reduce the time and resources spent on calculating the parameter values of the symmetrical compensating device. The subject of the article is the study of the possibility of using neural networks to predict the types of reactive elements of the symmetrical compensating device. During the study, the parameters and type of neural network that provide the most accurate prediction of the topology of the symmetrical compensating device were determined. The input parameters of the neural network were formed from sets of eight parameters: the resistances and inductances of the transmission lines and the neutral wire. The target matrix was formed from data sets of six elements containing information on the types of connected elements (0 – capacitor, 1 – inductance) between phases and between phases and the neutral wire.
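As a rough illustration of the mapping just described (eight network parameters in, six 0/1 element-type decisions out), a forward pass of such a network with the final rounding step might look as follows. This is a Python sketch with placeholder weights, not the trained values from the paper, and the hidden-layer size is an assumption.

```python
import math

def predict_element_types(x, w1, b1, w2, b2, threshold=0.5):
    """Forward pass of a small feed-forward network: 8 line/neutral-wire
    parameters in, 6 element-type decisions out (0 = capacitor,
    1 = inductance).  The weights are illustrative placeholders; the paper
    trains the network on exact synthesis results.  The final step rounds
    the continuous quasi-solution to a crisp 0/1 topology."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    raw = [1 / (1 + math.exp(-(sum(wi * hi for wi, hi in zip(row, hidden)) + b)))
           for row, b in zip(w2, b2)]
    return [1 if v >= threshold else 0 for v in raw]
```

The thresholding step corresponds to the rounding of the network's quasi-solution to 0 and 1 that the abstract describes next.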
During the research, the results of a quasi-solution were obtained whose values turned out to be comparable to the exact calculations for determining the structure of the symmetrical compensating device of the power supply system with a neutral wire. This indicates the high quality of the developed neural network. Applying a minimax strategy to the obtained results makes it possible to round the values to 0 and 1, ensuring unambiguous results from the neural network.</p> Kateryna Yagup, Valery Yagup Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309322 Tue, 30 Jul 2024 00:00:00 +0300 OPTIMIZATION OF THE DEVELOPMENT PROCESS OF MONOLITHIC MULTI-MODULE PROJECTS IN JAVA http://samit.khpi.edu.ua/article/view/309324 <p>In recent years, Java software development has grown in complexity and project scope has changed, including an increase in the number of modules in projects. Multi-modularity improves manageability to a certain extent, but often creates a number of problems that can complicate development and, over time, require more resources to maintain. This article analyzes the main problems of monolithic multi-module Java projects and considers a number of possible solutions to overcome them. The article discusses the peculiarities of working with multi-module monolithic projects using Java as the main programming language. The purpose of this article is to identify the features and obstacles of using the above architectural approach, to analyze the main possible issues of working with monolithic multi-module Java projects, and to provide recommendations for eliminating these obstacles or describe process features that could help engineers support such projects.
In other words, the main goal of this work is to create recommendations and present modern best practices for working with a monolithic multi-module software architecture and the most popular modern technological solutions used in corporate development. The proposed recommendations allow the team, primarily developers and the engineering side, to avoid obstacles that lead to a loss of efficiency in the monolithic software development process. The most important benefit of the recommendations given in the article is the optimization of resource costs (time, money, and labor) in the development process. As a result, a general list of recommendations was obtained that allows the developer to better analyze what changes in the project should be made (if necessary) to optimize the development, build, and deployment processes of a monolithic Java project, as well as advice to follow before designing new software to avoid the main obstacles of a monolithic architecture in the future.</p> Maksym Veres, Natalia Golian Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309324 Tue, 30 Jul 2024 00:00:00 +0300 EXTERNALIZATION OF TACIT KNOWLEDGE IN THE MENTAL MODEL OF A USER OF AN ARTIFICIAL INTELLIGENCE SYSTEM http://samit.khpi.edu.ua/article/view/309328 <p>The subject of the study is the processes of forming the user's mental model in artificial intelligence systems. The construction of such a model is associated with solving the problem of the opacity and incomprehensibility of the decision-making process in such systems for end users. To solve this problem, the system user needs to receive an explanation of the obtained decision. The explanation should take into account the user's perception of the decision and the decision-making process, which is formalized within the user's mental model.
The mental model considers the user's use of explicit and implicit knowledge, the latter of which usually lacks formal representation. The externalization of such knowledge ensures its transformation into a formal form. The aim of the work is to develop an approach to the externalization of implicit knowledge based on identifying patterns and causal dependencies for the decision-making process in an intelligent system when constructing the user's mental model. To achieve this goal, the following tasks are solved: developing a user's mental model of an artificial intelligence system that takes into account both explicit and implicit knowledge and developing an approach to the externalization of implicit knowledge of the user of the artificial intelligence system. A user's mental model of an artificial intelligence system that accounts for both explicit and implicit knowledge of the user is proposed. The model considers the connections between the user's explicit and implicit knowledge regarding the artificial intelligence system, the decision-making process, the method of using the decisions, and the general concept of the intelligent system. This creates conditions for the externalization of the user's implicit knowledge and the subsequent use of this knowledge in forming explanations regarding the decision-making process in the artificial intelligence system. An approach to the externalization of knowledge from the statistical and semantic layers of the user's mental model is proposed. 
In practical terms, the approach makes it possible to translate into explicit form the conditions and constraints regarding the formation and use of decisions in the artificial intelligence system.</p> Serhii Chalyi, Irina Leshchynska Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309328 Tue, 30 Jul 2024 00:00:00 +0300 SOFTWARE ARCHITECTURE SYSTEM DESIGN FOR THE MASS SERVICE SYSTEMS MODELING TO BE IMPLEMENTED IN GO PROGRAMMING LANGUAGE http://samit.khpi.edu.ua/article/view/309325 <p>The subject of the article is the methods and approaches to organizing the architecture of software designed for modeling the behavior of mass service (queueing) systems. The goal of the work is to design a software architecture, for implementation in the Go language, intended to replicate the behavior of various types of mass service systems using parallel computing, without considering the failure of individual service channels. The article addresses the following tasks: consider the basis for designing the architecture and justify its appropriateness; develop requirements for the future software product for more effective resource use and a clear definition of successful completion; analyze approaches to organizing software architecture and make a justified decision on the application of one of them; design a general algorithm scheme taking into account all requirements; identify the components of the modeled system and their interactions; build process diagrams considering the specifics of the Go programming language; define the method and contracts of interaction with the software. The research utilizes the following methods and tools: the Go programming language, concurrency, architectural UML diagrams, C4 diagrams, and process diagrams.
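The behavior such software is meant to reproduce (customers arriving and queueing for a service channel) can be illustrated with a minimal single-channel simulation. The sketch below is in Python for brevity, while the article's actual design targets Go and its concurrency primitives; the function name and parameter values are illustrative assumptions.

```python
import random

def simulate_queue(n_customers, lam, mu, seed=0):
    """Minimal single-channel mass service (queueing) simulation:
    Poisson arrivals (rate lam), exponential service (rate mu), FIFO
    discipline, no channel failures.  Returns the waiting time of each
    customer before service begins."""
    rng = random.Random(seed)
    t_arrival = 0.0        # time of the current customer's arrival
    server_free_at = 0.0   # time at which the channel becomes free
    waits = []
    for _ in range(n_customers):
        t_arrival += rng.expovariate(lam)           # next arrival
        start = max(t_arrival, server_free_at)      # wait if channel is busy
        waits.append(start - t_arrival)
        server_free_at = start + rng.expovariate(mu)  # service completes
    return waits
```

A multi-channel variant of the same loop, run concurrently per channel, is the kind of workload the proposed Go architecture distributes across goroutines.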
The following results were obtained: the requirements for the software for modeling mass service (queueing) systems (SMO) were defined; common approaches to organizing the architecture were considered and a comparative analysis was conducted; the structure of the future program was developed at the necessary levels of abstraction; for the first time, an architecture of the software product for modeling various mass service systems using parallel computing and the concurrency approach, under implementation in the Go programming language, was proposed.</p> Denys Goldiner Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309325 Tue, 30 Jul 2024 00:00:00 +0300 CONSTRUCTION OF PROBABILISTIC CAUSAL RELATIONSHIPS BETWEEN EQUIVALENCE CLASSES OF DATA IN AN INTELLIGENT INFORMATION SYSTEM http://samit.khpi.edu.ua/article/view/309331 <p>The subject of this research is the processes involved in generating explanations for decision-making in artificial intelligence systems. Explanations in such systems enable the decision-making process to be transparent and comprehensible for the user, thereby increasing user trust in the obtained results. The aim of this work is to develop an approach for constructing a probabilistic causal explanation model that takes into account the equivalence classes of input, intermediate, and resulting data. Solving this problem creates conditions for building explanations in the form of causal relationships based on the available information about the properties of the input data as well as the properties of the results obtained in the artificial intelligence system. To achieve this aim, the following tasks are addressed: developing a causal dependency model between the equivalence classes of input and output data; developing methods for constructing equivalence classes of data in the decision-making process and a method for constructing causal explanations.
A probabilistic model of causal dependency is proposed, which includes a causal relationship between the equivalence classes of input or intermediate data and the resulting data obtained during the decision-making process in the artificial intelligence system. This relationship considers the estimates of the possibility and necessity of such a dependency. The model creates conditions for explaining the possible causes of the obtained decision. A set of methods for constructing equivalence classes of data in the decision-making process and for constructing causal explanations is proposed, establishing a causal relationship between the equivalence classes. When constructing equivalence classes, relations of mandatory and optional data refinement, requirements or exclusions of data, as well as data conjunctions, are established. When constructing causal explanations, the possibility and the limitations on the necessity of such a dependency are calculated, allowing explanations to be built based on the available information about the obtained decisions and the input and intermediate data used to form these decisions.</p> Serhii Chalyi, Volodymyr Leshchynskyi Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309331 Tue, 30 Jul 2024 00:00:00 +0300 ANALYSIS OF BLOCKCHAIN TYPES AND THEIR SUITABILITY FOR IMAGE STORAGE http://samit.khpi.edu.ua/article/view/309332 <p>Different types of blockchains and their possible use for creating an image repository are investigated. The purpose of the study was to evaluate the advantages and limitations of different types of blockchains in terms of image storage. Data processing methods were applied to analyze the technical characteristics of various types of blockchains, along with a comparative analysis of efficiency and reliability parameters.
The obtained results made it possible to formulate principles for choosing the type of blockchain for creating image storage and to identify the advantages and limitations of each type from the point of view of image storage, depending on the priorities of the software product. The conclusion is that the use of blockchain provides a high level of security and integrity of images, and some types of blockchains exhibit high speed and scalability. However, it is important to understand that the storage process may remain centralized, so more research is needed to optimally use and develop these technologies. Future research may include an analysis of the possibilities of ensuring the privacy of participants and the development of standards for the sharing of multimedia content via blockchain. It is important to consider that the use of blockchain can contribute to increasing transparency and trust in the process of storing and sharing multimedia content, which is important for the development of the digital economy. However, to achieve the full potential of blockchain in the field of multimedia, it is necessary to develop effective strategies to solve the problems of privacy, scalability, and centralization that arise when implementing these technologies.
Such a comprehensive approach will provide a stable and effective infrastructure for managing multimedia content in the digital environment.</p> Glib Tereshchenko, Yelyzaveta Pysarenko Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309332 Tue, 30 Jul 2024 00:00:00 +0300 RESEARCH ON ERROR PROBABILITY ASSESSMENT IN USER PERSONAL DATA PROCESSING IN GDPR-COMPLIANT BUSINESS PROCESS MODELS http://samit.khpi.edu.ua/article/view/309113 <p>The only right strategy for businesses and government organizations in Ukraine and other countries that may face aggression is to recognize themselves as a potential target for cyberattacks by the aggressor (both by its government agencies and related cybercriminal groups) and take appropriate measures in accordance with the European Union’s General Data Protection Regulation (GDPR). The main purpose of the GDPR is to regulate the rights to personal data protection and to protect EU citizens from data leaks and breaches of confidentiality, which is especially important in today’s digital world, where the processing and exchange of personal data are integral parts of almost every business process. Therefore, the GDPR encourages organizations to transform their day-to-day business processes that are involved in managing, storing, and sharing customers’ personal data during execution. Thus, business process models created in accordance with the GDPR regulations must be of high quality, just like any other business process models, and the probability of errors in them must be minimal. This is especially important with regard to the observance of human rights to personal data protection, since low-quality models can become sources of errors, which, in turn, can lead to a breach of confidentiality and data leakage of business process participants. 
This paper analyzes recent research and publications, proposes a method for analyzing business process models that ensures compliance with the GDPR regulations, and tests its performance on BPMN models of the business processes for obtaining and withdrawing user consent to data processing. As a result, the probability of errors in the considered business process models was obtained, which points to possible confidentiality violations and data leaks for the participants of the considered business processes associated with these errors, and appropriate recommendations were made.</p> Andrii Kopp, Dmytro Orlovskyi, Oleksii Kizilov, Olha Halatova Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309113 Tue, 30 Jul 2024 00:00:00 +0300 DEVELOPMENT AND RESEARCH OF SOFTWARE SOLUTION FOR BUSINESS PROCESS MODEL CORRECTNESS ANALYSIS USING MACHINE LEARNING http://samit.khpi.edu.ua/article/view/309175 <p>Poorly designed business process models are a source of errors and of the subsequent costs associated with these errors, such as monetary costs, lost time, or even harmful impact on people or the environment if the erroneous business process models are associated with critical industries. The BPM (Business Process Management) lifecycle usually consists of designing, implementing, monitoring, and controlling the business process execution, but it lacks continuous quality control of the created BPMN (Business Process Model and Notation) models. Thus, this paper considers the problem of classifying business process models based on their correctness, whose solution will ensure quality control of the designed business process models. This study therefore aims to improve the quality of business process models by developing a software solution for business process model classification based on their correctness.
The object of the study is the process of business process models classification based on their correctness, which relies on quality measures and thresholds, typically complexity measures. The subject of the study is a software solution for business process models classification based on their correctness. Therefore, in this study, an algorithm for BPMN models classification using logistic regression, interface complexity, and modularity measures is proposed; the software requirements are determined; the software development tools are selected; the software for business process models classification based on their correctness is designed; the corresponding software components are developed; the use of the software solution is demonstrated; and the obtained results are analyzed and discussed. The developed software demonstrates high performance of BPMN models classification based on their correctness, achieving high accuracy (99.14 %), precision (99.88 %), recall (99.23 %), and F-score (99.56 %), highlighting its effectiveness in detecting modeling errors.</p> Andrii Kopp, Dmytro Orlovskyi, Uliya Litvinova Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309175 Tue, 30 Jul 2024 00:00:00 +0300 MULTI-AGENT SIMULATION MODEL OF INFECTIOUS DISEASE SPREAD http://samit.khpi.edu.ua/article/view/309182 <p>The aim of the research is to develop a multi-agent simulation model for predicting the spread of infectious diseases, particularly COVID-19. In the context of the COVID-19 pandemic, there emerged an urgent need to create tools for forecasting and analyzing the dynamics of epidemics, as well as for evaluating the effectiveness of management decisions. The use of mathematical models in this process allows for an adequate description of the infection spread dynamics, which is essential for making informed decisions.
The article discusses traditional approaches to epidemic modeling, such as the predator-prey model and the compartmental SIR (Susceptible-Infectious-Recovered) model. The predator-prey model describes the interaction between two species in an ecosystem using differential equations, which allows for modeling population dynamics. The compartmental SIR model divides the population into three groups: susceptible, infected, and recovered, which enables the analysis of the spread of infectious diseases. However, these models have limitations, particularly due to assumptions about population homogeneity and constant parameters. To more accurately model complex epidemic processes, a multi-agent simulation model was developed. In this model, agents interact within a defined area, mimicking real conditions of infection spread. Agents are divided into three classes: healthy, infected, and recovered. The movement of agents is modeled using random walk in a two-dimensional space, taking into account the possibility of contact between them, which can lead to infection. Infected agents transition to the recovered class after a certain period of illness and can no longer be infected. Modeling results showed that the multi-agent model allows for more accurate prediction of infection spread dynamics. Numerous experiments were conducted, demonstrating the model's adequacy in replicating the infection process, peak infection rates, and recovery periods. The influence of various parameters, such as the duration of illness, on the epidemic dynamics was investigated. The obtained results confirm that considering individual characteristics and behavioral traits of agents improves the accuracy of modeling. 
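The agent mechanics described above (random walk in a two-dimensional area, contact-based infection, recovery after a fixed illness period) can be sketched as follows; all parameter values here are illustrative assumptions, not the paper's calibrated settings:

```python
import random

class Agent:
    def __init__(self, state="S"):
        self.x, self.y = random.random(), random.random()
        self.state = state          # "S" susceptible, "I" infected, "R" recovered
        self.sick_days = 0

def step(agents, radius=0.05, infect_p=0.8, illness=14):
    # Random walk inside the unit square
    for a in agents:
        a.x = min(1.0, max(0.0, a.x + random.uniform(-0.02, 0.02)))
        a.y = min(1.0, max(0.0, a.y + random.uniform(-0.02, 0.02)))
    # Contact-based infection: a susceptible agent near an infected one may catch it
    infected = [a for a in agents if a.state == "I"]
    for a in agents:
        if a.state == "S" and any((a.x - i.x) ** 2 + (a.y - i.y) ** 2 < radius ** 2
                                  for i in infected):
            if random.random() < infect_p:
                a.state = "I"
    # Recovery: infected agents become immune after the illness period
    for a in infected:
        a.sick_days += 1
        if a.sick_days >= illness:
            a.state = "R"

random.seed(42)
agents = [Agent() for _ in range(200)] + [Agent("I") for _ in range(5)]
for day in range(120):
    step(agents)
counts = {s: sum(a.state == s for a in agents) for s in "SIR"}
print(counts)
```

Sweeping parameters such as `illness` or `radius` in this sketch reproduces qualitatively the kind of peak-and-decline dynamics the abstract describes.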
This allows the multi-agent simulation model to be used for developing effective control strategies and predicting the spread of infectious diseases, which can be useful for making management decisions in real pandemic conditions.</p> Daria Ivashchenko, Oleksandr Kutsenko Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309182 Tue, 30 Jul 2024 00:00:00 +0300 MODELS OF REMOTE IDENTIFICATION OF PARAMETERS OF DYNAMIC OBJECTS USING DETECTION TRANSFORMERS AND OPTICAL FLOW http://samit.khpi.edu.ua/article/view/309183 <p>The tasks of remote identification of parameters of dynamic objects are important for various fields, including computer vision, robotics, autonomous vehicles, video surveillance systems, and many others. Traditional methods of solving these problems face the problems of insufficient accuracy and efficiency of determining dynamic parameters in conditions of rapidly changing environments and complex dynamic scenarios. Modern methods of identifying parameters of dynamic objects using technologies of detection transformers and optical flow are considered. Transformer detection is one of the newest approaches in computer vision that uses transformer architecture for object detection tasks. This transformer integrates the object detection and boundary detection processes into a single end-to-end model, which greatly improves the accuracy and speed of processing. The use of transformers allows the model to effectively process information from the entire image at the same time, which contributes to better recognition of objects even in difficult conditions. Optical flow is a motion analysis method that determines the speed and direction of pixel movement between successive video frames. This method allows obtaining detailed information about the dynamics of the scene, which is critical for accurate tracking and identification of parameters of moving objects. 
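To make the optical flow idea concrete, here is a toy block-matching estimate of displacement between two synthetic frames (an illustrative simplification: production systems use dense optical flow or learned estimators, not this naive exhaustive search):

```python
def shift_frame(frame, dx, dy):
    """Create a second frame by cyclically shifting the first (toy stand-in for motion)."""
    h, w = len(frame), len(frame[0])
    return [[frame[(r - dy) % h][(c - dx) % w] for c in range(w)] for r in range(h)]

def block_match(f0, f1, r0, c0, size=4, search=3):
    """Find the displacement (dx, dy) minimising the sum of absolute differences
    between a patch of f0 at (r0, c0) and patches of f1 in a small search window."""
    def sad(dr, dc):
        return sum(abs(f0[r0 + i][c0 + j] - f1[r0 + i + dr][c0 + j + dc])
                   for i in range(size) for j in range(size))
    candidates = [(dr, dc) for dr in range(-search, search + 1)
                           for dc in range(-search, search + 1)]
    best = min(candidates, key=lambda d: sad(*d))
    return best[1], best[0]  # report as (dx, dy)

# Synthetic 16x16 frame with a bright square; the second frame is shifted by (2, 1)
frame0 = [[0] * 16 for _ in range(16)]
for r in range(5, 9):
    for c in range(5, 9):
        frame0[r][c] = 255
frame1 = shift_frame(frame0, dx=2, dy=1)
print(block_match(frame0, frame1, 5, 5))  # recovers the (2, 1) motion
```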
The integration of detection transformers and optical flow is proposed to increase the accuracy of identification of parameters of dynamic objects. The combination of these two methods combines the advantages of both approaches: high accuracy of object detection and detailed information about their movement. The conducted experiments show that the proposed model significantly outperforms traditional methods both in the accuracy of determining the parameters of objects and in the speed of data processing. The key results of the study indicate that the integration of detection transformers and optical flow provides reliable and fast determination of parameters of moving objects in real time, which can be applied in various practical scenarios. The conducted research also showed the potential for further improvement of data processing methods and their application in complex dynamic environments. The obtained results open new perspectives for the development of intelligent monitoring and control systems capable of adapting to rapidly changing environmental conditions, increasing the efficiency and safety of their work.</p> Olena Nikulina, Valerii Severyn, Oleksii Kondratov, Oleksii Olhovoy Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309183 Tue, 30 Jul 2024 00:00:00 +0300 AN ADAPTIVE METHOD FOR BUILDING A MULTIVARIATE REGRESSION http://samit.khpi.edu.ua/article/view/309104 <p>We propose an adaptive method for building a multivariate regression given by a weighted linear convolution of known scalar functions of deterministic input variables with unknown coefficients, as is the case, for example, when the multivariate regression is given by a multivariate polynomial.
In contrast to the general procedure of the least squares method, which minimizes only a single scalar quantitative measure, the adaptive method uses six different quantitative measures and represents a systemically connected set of different algorithms which allow each applied problem to be solved on their basis by an individual adaptive algorithm that, in the case of an active experiment, even for a relatively small volume of experimental data, implements a statistically justified solution strategy. We understand a small amount of active experiment data in the sense that, for such an amount, the variances of the estimates of unknown coefficients obtained by the general procedure of the least squares method do not guarantee accuracy acceptable in practice. We also propose to significantly increase the efficiency of the modified group method of data handling proposed by O.&nbsp;A.&nbsp;Pavlov and M.&nbsp;M.&nbsp;Holovchenko for building a multivariate regression which is linear with respect to unknown coefficients and given by a redundant representation. We improve it by including some criteria and algorithms of the adaptive method for building a multivariate regression.
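For contrast, the baseline that the adaptive method refines — the ordinary least squares procedure minimizing a single scalar criterion — can be sketched via the normal equations for a univariate polynomial basis (the data below are illustrative):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for the normal equations."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def ols_poly(xs, ys, degree=2):
    """Fit y = b0 + b1*x + ... by minimising the single scalar criterion:
    the sum of squared residuals."""
    X = [[x ** d for d in range(degree + 1)] for x in xs]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(degree + 1)]
           for i in range(degree + 1)]
    Xty = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(degree + 1)]
    return solve(XtX, Xty)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 9.0, 19.0, 33.0]   # exactly y = 1 + 2*x**2 here
print(ols_poly(xs, ys))
```

With few, noisy observations the variances of such estimates grow, which is precisely the regime the adaptive method targets.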
For the multivariate polynomial regression problem, the inclusion of a special case of the new version of the modified group method of data handling in the synthetic method proposed by O.&nbsp;A.&nbsp;Pavlov, M.&nbsp;M.&nbsp;Golovchenko, and V.&nbsp;V.&nbsp;Drozd, for building a multivariate polynomial regression given by a redundant representation, also significantly increases its efficiency.</p> Alexander Pavlov, Maxim Holovchenko, Valeriia Drozd Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309104 Tue, 30 Jul 2024 00:00:00 +0300 QUALITY ASSESSMENT OF THE SOFTWARE DEVELOPMENT PROCESS OF AN IT COMPANY BASED ON THE USE OF THE UTILITY FUNCTION http://samit.khpi.edu.ua/article/view/309106 <p>The paper considers the software development process as an object of research, which is a poorly structured system. A description of such systems is given in the form of general characteristics, which include: difficulties in building an analytical model; incompleteness, inaccuracy, unreliability and uncertainty of information; benchmarks required for assessing poorly structured systems are often absent; uniqueness of the decision-making process; dynamic nature of models of poorly structured systems, etc. In this paper, the quality assessment of the software development process is considered based on maturity model standards, which can have continuous and discrete variants. The continuous version assesses the quality of the individual focus areas and processes of the maturity models. For this purpose, a discrete point scale of the first type is used, when the assessment is carried out according to an objective criterion. The quality assessment of individual focus areas and processes characterizes the local criteria for assessing the quality of the entire software development process. Therefore, the task is to form an integral quality assessment on their basis.
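One way to examine whether the gradations of an assessment scale are evenly spaced is to place them on a logistic utility curve; the following sketch uses hypothetical resource levels and curve parameters, purely for illustration:

```python
import math

def utility(resources, k=1.0, midpoint=5.0):
    """Logistic dependence of system utility on invested resources:
    slow start, rapid growth, saturation."""
    return 1.0 / (1.0 + math.exp(-k * (resources - midpoint)))

# Hypothetical resource levels corresponding to five maturity gradations
levels = [1, 3, 5, 7, 9]
utils = [round(utility(r), 3) for r in levels]
gaps = [round(b - a, 3) for a, b in zip(utils, utils[1:])]
print(utils)   # utility at each gradation
print(gaps)    # unequal gaps between neighbours indicate an unbalanced scale
```

Even though the resource levels are equally spaced here, the utility gaps are not, which is exactly the kind of imbalance a scale analysis must detect.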
One of the options for solving this problem is a discrete maturity model, where the scale for assessing the entire software development process has five gradations called maturity levels. Starting from the second level, each gradation is characterized by a set of focus areas with corresponding levels of capability. The availability of such a scale allows not only assessing the quality of the entire software development process, but also solving the task of planning to improve its quality. But first, it is necessary to analyse such a scale from the point of view of its balance, namely, that the distances on the scale between the gradations are approximately equal. Therefore, the paper analyses the existing scales that can be proposed for expert assessment of the quality of the software development process. Their construction can be realized on the basis of a utility function using the local criteria of maturity models formalized in this paper. For this purpose, a fundamental property of systems is used, namely the dependence of the utility (efficiency) of a complex system on the invested resources over the life cycle interval, which usually takes the form of a logistic curve. Further research will be devoted to using this fact to build a balanced scale for assessing the entire software development process based on maturity models.</p> Volodymyr Sokol, Mykhaylo Godlevskyi, Dmytro Malets Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309106 Tue, 30 Jul 2024 00:00:00 +0300 USING THE GEOSPATIAL MULTI-CRITERIA DECISION ANALYSIS MODEL AND METHODS FOR SOIL DEGRADATION RISK MAPPING http://samit.khpi.edu.ua/article/view/309107 <p>Modern methods of spatial analysis and modeling are increasingly being combined with decision-making methods and fuzzy set theory.
The latter are actively integrated into the environment of Geographic Information Systems (GIS), such as the well-known ArcGIS or QGIS, in the form of separate tools, plugins, or Python scripts. Decision-making methods allow structuring the problem in geographical space, as well as taking into account the knowledge and judgments of experts and the preferences of the decision-maker in determining the priorities of alternative solutions. This paper provides a description of a geospatial multi-criteria decision analysis model, which allows addressing a wide range of ecological and socio-economic issues. An example of applying this model to map soil degradation risk in Ukraine is presented in the paper. According to the object-spatial approach, the properties of a territory are determined as the result of the action (impact) of a set of objects (processes) belonging to this territory. The territory is represented as a two-dimensional discrete grid, each point of which (local area) is an alternative. The set of local areas of the territory constitutes the set of alternatives. The representation of the territory model as a system of objects and relationships between them makes it possible to justify the choice of a set of criteria (factors) for assessing soil degradation risk. Each criterion is a separate raster layer of the map. To build a hierarchical decision-making structure and calculate the importance coefficients of the criteria, the Analytic Hierarchy Process (AHP) method is used. To account for uncertainty in assessments and judgments of experts at the stages of standardization of alternative attributes by different criteria and aggregation of their assessments, expert membership functions for fuzzy sets and fuzzy quantifiers are applied.
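A minimal sketch of the AHP weighting step, using the standard row geometric mean approximation of the priority vector (the criteria and pairwise judgments below are invented for illustration):

```python
import math

def ahp_weights(pairwise):
    """Approximate the AHP priority vector by the row geometric mean method:
    pairwise[i][j] states how many times criterion i outweighs criterion j."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Illustrative 3-criteria comparison (e.g. slope vs. land use vs. rainfall)
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
w = ahp_weights(pairwise)
print([round(x, 3) for x in w])  # importance coefficients summing to 1
```

In a GIS workflow each weight would then multiply the standardized raster layer of its criterion before the layers are aggregated.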
A particular feature of the proposed multi-criteria decision analysis model is its low computational complexity and ease of integration into the GIS environment.</p> Svitlana Kuznichenko, Dmytro Ivanov, Dmytro Kuznichenko Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309107 Tue, 30 Jul 2024 00:00:00 +0300 TWO APPROACHES TO THE FORMATION OF A QUANTITATIVE MEASURE OF STABILITY BASED ON MULTIPLE ESTIMATES OF THE PARAMETERS OF AN ENSEMBLE OF TRANSIENT PROCESSES http://samit.khpi.edu.ua/article/view/309111 <p>The article is devoted to the further development of the theory of stability of dynamic systems, namely of quantitative methods of stability assessment. A review and critical analysis of various approaches that, to one degree or another, allow introducing a quantitative measure of stability of dynamic systems is given. The limitations of the existing methods, which are primarily related to the assessment of the behavior of the transient processes of individual trajectories, as well as the difficulty of obtaining an assessment of the behavior of the ensemble of transient processes when trying to apply the methods of N. D. Moiseyev, are substantiated. A method of quantitative assessment of dynamic system stability based on numerical estimates of the behavior of the region of initial deviations from the equilibrium position along the trajectories of the dynamic system is substantiated. Based on the Liouville formula, it is shown that the change in the volume of the region of initial deviations along the trajectories of the system does not depend on the shape of this region. This made it possible to restrict the region of initial deviations to the shape of a hypersphere and to obtain a simple expression for a quantitative measure of the stability of a linear stationary dynamic system, the geometric sense of which is an estimate of the rate of change of the volume of the control surface.
The article proposes and substantiates the criterion of uniformity of deformation of the region of initial deviations. The essence of the problem is that, in the transient process, the values of some components of the phase vector may reach unacceptable deviations from the equilibrium position. A theoretical estimate of deformation non-uniformity for linear systems is obtained, which is taken to be the deviation of the trace of the ellipsoid matrix from the trace of the hypersphere matrix of the corresponding volume. A method for obtaining a quantitative measure of stability based on an integral quadratic functional calculated on a set of transient processes from initial deviations in the form of a set of ellipsoids with normalized volume is proposed and substantiated. Diagonal positive normalized matrices are considered as the set of matrices of the integral quadratic criterion. A simple algorithm for calculating the multiple integral quadratic criterion is proposed.</p> Oleksandr Kutsenko, Mykola Bezmenov, Serhii Kovalenko Copyright (c) 2024 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/309111 Tue, 30 Jul 2024 00:00:00 +0300
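The Liouville argument can be checked numerically: for a linear system dx/dt = Ax, the volume of any region of initial deviations evolves as exp(tr(A)·t), independent of the region's shape, since det(exp(At)) = exp(tr(A)·t). A small sketch for a 2×2 example (the matrix A is an arbitrary stable illustration):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=30):
    """exp(A) for a 2x2 matrix via its Taylor series (adequate for small norms)."""
    E = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at the identity
    T = [[1.0, 0.0], [0.0, 1.0]]   # current term A^k / k!
    for k in range(1, terms):
        T = [[t / k for t in row] for row in mat_mul(T, A)]
        E = [[e + t for e, t in zip(er, tr)] for er, tr in zip(E, T)]
    return E

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Stable linear system dx/dt = A x; Liouville: volumes shrink as exp(tr(A) * t)
A = [[-1.0, 2.0], [0.0, -2.0]]
t = 0.5
At = [[a * t for a in row] for row in A]
print(round(det(mat_exp(At)), 6), round(math.exp((A[0][0] + A[1][1]) * t), 6))
```

Both printed numbers coincide, confirming that the volume contraction rate depends only on the trace of A, not on the shape of the initial region.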