http://samit.khpi.edu.ua/issue/feed Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies 2025-07-11T14:20:39+03:00 Безменов Микола Іванович (Mykola Bezmenov) mykola.bezmenov@khpi.edu.ua Open Journal Systems <p><strong>Collection of scientific papers</strong></p> <p><img style="width: 250px;" src="http://samit.khpi.edu.ua/public/journals/49/cover_issue_16936_uk_UA.jpg" alt="" /></p> <p><strong>Year of foundation:</strong> 1961 (Bulletin of KhPI), 1979 (Series)</p> <p><strong>Aims and Scope:</strong> Peer-reviewed open access scientific edition that publishes new scientific results in the field of system analysis and management of complex systems, based on the application of modern mathematical methods and advanced information technology. The edition publishes works related to artificial intelligence, big data analysis, and modern methods of high-performance computing in distributed decision support systems.</p> <p><strong>Target audience:</strong> For scientists, teachers of higher education, post-graduate students, students, and specialists in the field of systems analysis, management, and computer technology.</p> <p><strong>ISSN:</strong> <a href="https://portal.issn.org/resource/ISSN/2079-0023">2079-0023</a> (Print)</p> <p><strong>ISSN:</strong> <a href="https://portal.issn.org/resource/ISSN/2410-2857">2410-2857</a> (Online)</p> <p>Media identifier <strong><a href="https://drive.google.com/file/d/1POp1f3OPs6wWTgpUZXdVVKlUSORms-g1/view?usp=sharing">R30-01544</a></strong>, according to the <a href="https://drive.google.com/file/d/1o3jlce-hW2415D2fiaa7gbrj307yvKf3/view?usp=share_link"><strong>decision of the National Council of Ukraine on Television and Radio Broadcasting of 16.10.2023 No. 1075</strong></a>.</p> <p><strong><a href="https://drive.google.com/open?id=1BJybDTz3S9-ld7mUSnDpBeQzDBH61OO9">Order of the Ministry of Education and Science of Ukraine No. 1643 of December 28, 2019</a></strong> "On approval of decisions of the Attestation Board of the Ministry on the activity of specialized scientific councils of December 18, 2019", Annex 4, <strong>"Bulletin of the National Technical University "KhPI". 
Series: System Analysis, Control and Information Technology" is added to category B</strong> of the "List of scientific professional publications of Ukraine in which the results of the dissertation works for obtaining the scientific degrees of doctor of sciences, candidate of sciences, and doctor of philosophy can be published".</p> <p><strong>Indexing </strong>in Index Copernicus, DOAJ, Google Scholar, and <a href="http://samit.khpi.edu.ua/indexing">other systems</a>.</p> <p>The edition publishes scientific works in the following fields:</p> <ul> <li>F1 (113) - Applied mathematics</li> <li>F2 (121) - Software engineering</li> <li>F3 (122) - Computer science</li> <li>F4 (124) - System analysis and data science</li> <li>F6 (126) - Information systems and technologies</li> <li>G7 (151/174) - Automation, computer-integrated technologies and robotics</li> </ul> <p><strong>Frequency:</strong> Biannual - June and December issues (deadlines for submission of manuscripts: until May 15 and November 15 of each year; manuscripts submitted late may be considered separately).</p> <p><strong>Languages:</strong> Ukrainian, English (mixed languages).</p> <p><strong>Founder and publisher:</strong> National Technical University "Kharkiv Polytechnic Institute" (<a href="https://www.kpi.kharkov.ua/eng/">University website</a>, <a href="https://ndch.kpi.kharkov.ua/en/bulletin-of-ntu-khpi/">Scientific and Research Department</a>).</p> <p><strong>Chief editor:</strong> <a href="https://www.scopus.com/authid/detail.uri?authorId=57202891828">M. D. Godlevskyi</a>, D. Sc., Professor, National Technical University "KhPI".</p> <p><strong>Editorial board</strong> staff is available <a href="http://samit.khpi.edu.ua/editorialBoard">here</a>.</p> <p><strong>Address of the editorial office:</strong> 2, Kyrpychova str., 61002, Kharkiv, Ukraine, NTU "KhPI", Department of System analysis and information-analytical technologies.</p> <p><strong>Responsible secretary:</strong> <a href="https://www.scopus.com/authid/detail.uri?authorId=6507139684">M. I. 
Bezmenov</a>, PhD, Professor, National Technical University "KhPI".</p> <p><strong>Phone numbers:</strong> +38 057 707-61-03, +38 057 707-66-54</p> <p><strong>E-mail:</strong> mykola.bezmenov@khpi.edu.ua</p> <p>This journal practices and supports a policy of open access according to the <strong><a href="https://www.budapestopenaccessinitiative.org/read">Budapest Open Access Initiative (BOAI)</a></strong>.</p> <p><img src="http://samit.khpi.edu.ua/public/site/images/koppam/open-access.png" alt="Open Access" /></p> <p>Published articles are distributed under the terms and conditions of the <strong><a href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution (CC BY)</a></strong> license.</p> <p><img src="http://samit.khpi.edu.ua/public/site/images/koppam/cc-by.png" alt="CC-BY" /></p> <p>The editorial board adheres to international standards of publishing ethics and the recommendations of the <strong><a href="https://publicationethics.org/resources/guidelines/principles-transparency-and-best-practice-scholarly-publishing">Committee on Publication Ethics (COPE)</a></strong> on the Principles of Transparency and Best Practice in Scholarly Publishing.</p> <p><img src="http://samit.khpi.edu.ua/public/site/images/koppam/sm-cope.png" alt="" width="74" height="50" /></p> http://samit.khpi.edu.ua/article/view/334681 RESEARCH INTO THE INFLUENCE OF VARIOUS FACTORS ON THE LEVEL OF MORBIDITY DURING A PANDEMIC USING ARTIFICIAL NEURAL NETWORKS AND THE R PROGRAMMING AND DATA ANALYSIS LANGUAGE 2025-07-06T20:54:34+03:00 Oleksandr Melnykov alexandr@melnikov.in.ua Dmytro Kozub dima.kozub13@gmail.com <p>The problem of analyzing the impact of various factors on the level of morbidity during a pandemic is considered. The task of calculating the effectiveness of anti-epidemic measures and the change in the percentage of patients overall and of those who suffered the disease in a severe form is formulated. The input factors of the predictive model are considered to be the “mask regime”, quarantine, distance learning, the possibility of vaccination, the availability of mandatory vaccination, and the percentage of vaccinated people. The output factors are the percentage of total infected and the percentage of those who developed complications after the disease (in the latter case, the percentage of total infected is added to the input factors). The task of calculating the impact of various factors on the level of population morbidity was also formulated using the example of COVID-19 statistics in a number of countries (Brazil, Germany, Japan, Ukraine, and the USA). An analysis of the main input factors was conducted, such as climatic conditions, population density, population age, vaccination level, socio-economic conditions, population size, and measures to counter the pandemic. Data sets based on real indicators were created. The artificial neural network method was used to solve both problems. A script was developed in the R programming and data analysis language. Calculations show that to achieve the best result in predicting the number of total infected, it is necessary to use a perceptron with two hidden layers of five neurons each. To achieve the best result in predicting the number of seriously ill patients, it is necessary to apply a perceptron with three hidden layers of three neurons each. In both variants, a sigmoidal activation function is recommended. 
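<p>For illustration, a minimal sketch of the recommended topology follows. The paper itself implements the models in R; here the same perceptron configuration (two hidden layers of five neurons each, sigmoid activation) is mirrored in Python with scikit-learn, and the data is a synthetic placeholder rather than the study's dataset.</p>
<pre><code># Illustrative sketch only: the paper uses R; this mirrors the recommended
# perceptron topology with scikit-learn on synthetic placeholder data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Columns: mask regime, quarantine, distance learning, vaccination possible,
# mandatory vaccination, share of vaccinated population
X = rng.random((200, 6))
y = rng.random(200)            # placeholder: share of total infected

# Two hidden layers of five neurons each, sigmoid ("logistic") activation,
# as recommended in the abstract for predicting total infections
model = MLPRegressor(hidden_layer_sizes=(5, 5), activation="logistic",
                     max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict(X[:3]))
</code></pre>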
The first model was used to analyze the level of influence of the listed factors on the level of morbidity. It was found that the factors with the least impact on the change in the total number of infected people are either the general opportunity to be vaccinated or the combination of a mandatory mask regime with the introduction of distance learning. The least weight in calculating the number of seriously ill patients belongs to the combination of the same opportunity to be vaccinated with the introduction of distance learning. In both cases, the greatest impact comes from the introduction of mandatory vaccination. During the study of the impact of indicators in different countries, it was found that the level of morbidity depends to a large extent on factors such as population density, vaccination level, and socio-economic conditions. The results obtained can be used to improve strategies for anti-epidemic measures and improve management decisions in the field of health care.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/334848 ADAPTIVE DYNAMIC RESOURCE ALLOCATION IN SYSTEMS WITH MULTI-TENANCY ARCHITECTURE 2025-07-08T10:00:08+03:00 Vladyslav Verbivskyi vverbavlad@gmail.com Valeriy Volovshchykov Valeriy.Volovshchykov@khpi.edu.ua Vladlen Shapo vladlen.shapo@gmail.com Maksym Levinskyi MaxLevinskyi@te.net.ua Valeriy Levinskyi ValeryLevinskyi@gmail.com <p>The article considers the problem of efficient allocation of computing resources in cloud software systems based on the principle of multi-tenant architecture. This approach allows several users to be served simultaneously within a single software instance while ensuring the isolation of their data and configurations. It reduces infrastructure costs and simplifies maintenance, but sharing resources creates new challenges associated with uneven load and potential overload of individual system components. The study analyzes classical approaches to distributing database connections among users: static approaches, which fix limits in advance, and basic dynamic ones, which consider only the current number of requests. The limitations of these methods under conditions of variable and uneven load are revealed. A new adaptive methodology for dynamic resource optimization is proposed, which considers not only the intensity of requests but also the average processing time, historical activity indicators, and individual characteristics of each user. The methodology also takes into account weighting coefficients that determine the impact of each factor on the final calculation. Experimental verification of the model based on three scenarios with different request intensities showed a significant reduction in the average response time by up to 20&nbsp;% compared to the baseline method, without increasing the total number of connections used. The results demonstrate the effectiveness of the proposed approach in real-world conditions. 
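<p>A hypothetical sketch of the weighted scoring idea described above: each tenant's share of a fixed connection pool reflects request intensity, average processing time, and historical activity, combined with tunable weights. All names and weight values are illustrative assumptions, not the paper's formulation.</p>
<pre><code># Hypothetical sketch: proportional connection allocation from weighted
# per-tenant scores. Names, weights, and sample values are placeholders.
from dataclasses import dataclass

@dataclass
class TenantStats:
    request_rate: float      # requests per second (current)
    avg_proc_time: float     # seconds per request
    history_factor: float    # normalized long-term activity, 0..1

def allocate(pool_size: int, tenants: dict,
             w_rate: float = 0.5, w_time: float = 0.3, w_hist: float = 0.2):
    # Score each tenant; the weights set each factor's influence
    scores = {name: w_rate * s.request_rate
                    + w_time * s.request_rate * s.avg_proc_time
                    + w_hist * s.history_factor
              for name, s in tenants.items()}
    total = sum(scores.values()) or 1.0
    # Proportional split of the fixed pool, at least one connection each
    return {name: max(1, round(pool_size * sc / total))
            for name, sc in scores.items()}

print(allocate(100, {
    "tenant_a": TenantStats(50, 0.02, 0.8),
    "tenant_b": TenantStats(10, 0.20, 0.3),
}))
</code></pre>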
This methodology can be implemented in modern cloud platforms to improve performance, peak load resilience, and rational use of infrastructure resources.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/334856 INTEGRATED GRAPH-BASED TESTING PIPELINE FOR MODERN SINGLE-PAGE APPLICATIONS 2025-07-08T10:36:11+03:00 Nataliia Golian nataliia.golian@nure.ua Varvara Tisheninova varvara.tisheninova@nure.ua <p>In the modern software development ecosystem, Single-Page Applications (SPAs) have become the de facto standard for delivering rich, interactive user experiences. Frameworks such as React, Vue, and Angular enable developers to build highly responsive interfaces; however, they also introduce intricate client-side state management and complex routing logic. As applications grow in size and complexity, manually writing and maintaining end-to-end tests for every possible user journey becomes infeasible. Moreover, ensuring comprehensive coverage&nbsp;− across functionality, security, performance, and usability&nbsp;− requires an integrated and adaptive testing strategy that can scale with rapid release cadences.</p> <p>This paper introduces a novel, integrated testing pipeline that augments conventional unit, component, integration, API, performance, security, and accessibility testing with a formal Graph-Based Testing (GBT) model. We model the SPA as a directed graph, where each vertex represents a distinct UI state or view, and each directed edge corresponds to a user-triggered transition (e.g., clicks, form submissions, navigation events). Leveraging graph algorithms, our approach automatically identifies missing paths to achieve exhaustive node, edge, and simple path coverage up to a configurable length, synthesizes minimal test sequences, and generates executable test scripts in frameworks such as Jest (unit&nbsp;/&nbsp;component), Cypress or Playwright (integration&nbsp;/&nbsp;E2E), and Postman (API).</p> <p>To select and tune the appropriate tools for each testing facet, we employ a multi-criteria decision framework based on linear additive utility and Pareto analysis. Each tool is evaluated across five normalized dimensions&nbsp;− defect detection accuracy, execution speed, licensing or infrastructure cost, adoption effort, and scalability&nbsp;− weighted according to project priorities.</p> <p>Finally, we integrate this GBT-driven test generation and tool orchestration into a CI / CD pipeline, enriched with pre-production security scans via OWASP ZAP and periodic load tests with JMeter. The result is a continuous, self-healing suite of tests that adapts to UI changes, automatically refactors itself against graph-differencing alerts, and maintains high confidence levels even under aggressive sprint schedules. 
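<p>A minimal sketch of the graph model described in the preceding abstract: UI states as vertices, user-triggered transitions as edges, and exhaustive enumeration of simple paths up to a configurable length as candidate end-to-end test sequences. The SPA graph below is a hypothetical example, not one of the evaluated applications.</p>
<pre><code># Minimal sketch: enumerate simple paths (no repeated vertices) up to a
# configurable number of edges; each path is a candidate test sequence.
def simple_paths(graph, start, max_len):
    """Yield all simple paths from `start` with at most `max_len` edges."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        yield path                       # every prefix contributes coverage
        if len(path) - 1 < max_len:
            for nxt in graph.get(node, []):
                if nxt not in path:      # keep paths simple (no cycles)
                    stack.append((nxt, path + [nxt]))

# Hypothetical SPA: vertices are views, edges are clicks/submissions
spa = {"login": ["dashboard"], "dashboard": ["profile", "settings"],
       "profile": ["dashboard"], "settings": ["dashboard"]}
for p in simple_paths(spa, "login", 3):
    print(" -> ".join(p))
</code></pre>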
Empirical evaluation on two large-scale SPAs demonstrates a 40 % reduction in manual test authoring effort and a 25 % increase in overall coverage metrics compared to traditional approaches.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/334862 HIERARCHICAL FACTOR CLASSIFICATION ANALYSIS IN THE FRAMEWORK OF INFORMATION-EXTREME INTELLECTUAL TECHNOLOGY 2025-07-08T10:49:55+03:00 Igor Shelehov i.shelehov@snau.edu.ua Dmytro Prylepa d.prylepa@cs.sumdu.edu.ua Oleksandr Tymchenko o.tymchenko@cs.sumdu.edu.ua <p>The application of information-extreme intelligent technologies for the management of chemical-technological processes is an important direction in the development of automation in industry, particularly in chemical enterprises. A method for designing hierarchical control systems has been proposed, based on the use of hierarchical factor classification analysis to optimize the training and self-learning of automated process control systems. The main feature of the method is the use of mathematical models to analyze the functional states of processes, which allows for the accurate determination of optimal control parameters and the adaptation of the system to real-time changes. The work demonstrates that the use of hierarchical factor classification analysis increases the effectiveness of detecting functional deviations in technological processes, reducing the likelihood of errors in decision-making. To enhance the accuracy and probability of correctly determining the states, it is proposed to use algorithms for optimizing geometric parameters and control tolerances. It has been established that this method works effectively even in complex conditions where the number of functional states may vary. The research shows that the application of hierarchical factor classification analysis is effective for optimizing management processes, providing increased decision-making reliability and stability to changes in the conditions of complex chemical-technological production processes. Furthermore, the proposed approach enhances the system's ability to self-learn and adapt, making it an effective tool for future intelligent automated systems.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/334982 HIERARCHICAL INFORMATION-EXTREME MACHINE LEARNING OF UAV FOR SEMANTIC SEGMENTATION OF A DIGITAL IMAGE OF THE REGION USING A DECURSIVE DATA STRUCTURE 2025-07-09T16:38:11+03:00 Ihor Naumenko 790905@ukr.net Serhii Kovalevskyi Kovalevskiy@ms.sumdu.edu.ua <p>The purpose of the study is to increase the accuracy of machine learning of an autonomous unmanned aerial vehicle (UAV) for identifying frames of a digital image of the observation region. A functional categorical model is proposed, on the basis of which an algorithm for information-extreme machine learning of an autonomous UAV on a linear data structure with optimization of control tolerances for recognition features is developed and programmatically implemented. The formation of the input training brightness matrix was carried out by using the Cartesian coordinate system to process the brightness values for digital images of machine learning objects that belonged to the “texture” type. The modified Kullback measure was used as a criterion for optimizing machine learning parameters. 
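<p>As a schematic illustration of the tolerance optimization just mentioned: below, a generic Kullback J-divergence between the "own-class" and "other-class" response distributions is maximized over a control tolerance. This is a stand-in for illustration only; the exact modified Kullback measure and the feature data follow the paper itself.</p>
<pre><code># Schematic sketch: pick the control tolerance delta that maximizes a
# Kullback J-divergence between own-class and other-class hit rates.
# The modified Kullback form used in the paper may differ; data is synthetic.
import math

def j_divergence(p, q, eps=1e-9):
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    # Symmetric KL divergence of two Bernoulli distributions
    return (p - q) * math.log((p * (1 - q)) / (q * (1 - p)))

def best_tolerance(center, own, other, deltas):
    """Tolerance that best separates own-class from other-class features
    falling inside [center - delta, center + delta]."""
    def hit_rate(samples, d):
        return sum(abs(x - center) <= d for x in samples) / len(samples)
    return max(deltas, key=lambda d: j_divergence(hit_rate(own, d),
                                                  hit_rate(other, d)))

own = [0.48, 0.52, 0.50, 0.47, 0.55]     # brightness feature, own class
other = [0.70, 0.75, 0.68, 0.72, 0.80]   # same feature, competing class
print(best_tolerance(0.5, own, other, [i / 100 for i in range(1, 30)]))
</code></pre>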
Since machine learning on a linear data structure did not achieve high accuracy, information-extreme machine learning was implemented on a hierarchical structure in the form of a decursive binary tree. The transition from a linear data structure to a hierarchical one reduced multi-class machine learning to two-class learning at each stratum of the decursive binary tree, which increased the average value of the information criterion over the strata of the tree. For recognition classes in strata of the decursive tree where high machine learning accuracy was not obtained, information-extreme machine learning was implemented with sequential optimization of parameters. As a result, it was possible to construct decision rules that are error-free according to the training matrix. In addition, it was experimentally proven that when the number of recognition classes is more than two, it is advisable to switch to information-extreme machine learning on a hierarchical data structure in the form of a decursive binary tree.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/334988 OPTIMIZATION OF THE ANNOTATION PROCESS FOR BIOLOGICAL OBJECT IMAGES USING COMPUTER VISION METHODS 2025-07-09T17:25:41+03:00 Anton Kovalenko anton.kovalenko@cs.khpi.edu.ua <p>This study presents an approach to the automated creation of an annotated dataset containing images of biological objects, particularly cells. The proposed methodology is based on a modified CRISP-DM framework, adapted to the specifics of computer vision tasks. A sequence of stages and steps has been developed to enable effective detection and localization of biological objects in microscopic images.&nbsp;The process involves preprocessing the images, including binarization, filtering, brightness and contrast adjustment, as well as correction of illumination artifacts. These operations help enhance the quality of the input images and improve the accuracy of subsequent detection steps. Detected objects are automatically localized based on morphological analysis, followed by clustering using the k-means algorithm. Grouping is based on features such as object size and mean color value, which allows for distinguishing between different types of cells or structures based on visual characteristics.&nbsp;Bounding boxes are automatically generated for the localized objects, and their coordinates are stored in a structured tabular format (.csv). The resulting dataset can be used to train or test deep learning models, particularly for tasks such as object localization, classification, or segmentation.&nbsp;The proposed approach was validated using images of blood smears containing various types of cells. All computations were carried out using the Python programming language and libraries such as Pandas, NumPy, OpenCV, and Matplotlib. 
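<p>A condensed sketch of the pipeline described above, using the same libraries the paper names: binarize, clean up with morphology, localize connected components, cluster objects by size and mean color with k-means, and save bounding boxes to a CSV annotation table. File names, thresholds, and the cluster count are illustrative assumptions.</p>
<pre><code># Condensed sketch of the annotation pipeline; parameters are placeholders.
import cv2
import numpy as np
import pandas as pd

img = cv2.imread("smear.png")                      # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
rows, feats = [], []
for i in range(1, n):                              # label 0 is the background
    x, y, w, h, area = stats[i]
    if area < 20:                                  # drop speckle noise
        continue
    mean_color = cv2.mean(img, mask=(labels == i).astype(np.uint8))[:3]
    rows.append({"x": x, "y": y, "w": w, "h": h, "area": area})
    feats.append([area, *mean_color])

# Group detected objects by size and mean color (cluster count assumed known)
feats = np.float32(feats)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 0.5)
_, cluster, _ = cv2.kmeans(feats, 3, None, criteria, 5, cv2.KMEANS_PP_CENTERS)

df = pd.DataFrame(rows).assign(cluster=cluster.ravel())
df.to_csv("annotations.csv", index=False)          # tabular annotation output
</code></pre>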
The analysis of detection and classification accuracy demonstrated satisfactory results, confirming the feasibility of using the developed pipeline for automated generation of annotated biological image datasets.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/334990 INTELLIGENT ANALYSIS OF OPTICAL IMAGES BASED ON A HYBRID APPROACH 2025-07-09T17:52:56+03:00 Maksym Holikov maksym.holikov@karazin.ua Volodymyr Donets vol.donets@gmail.com Viktoriia Strilets viktoria.strilets@karazin.ua <p>The article considers an intelligent approach to real-time analysis of optical images based on a combination of face recognition methods using deep learning and classical computer vision algorithms for tracking them. A system with a hybrid approach is proposed that integrates preliminary face recognition based on vector features (embeddings) generated by the FaceNet neural network and face tracking using the CSRT (Channel and Spatial Reliability Tracker) algorithm, which is part of the OpenCV library. The implemented system makes it possible to recognise and automatically identify users in a video stream from a webcam, store new faces in the database, and effectively track identified faces over subsequent frames. The frame processing algorithm is implemented in a multi-threaded mode using queues and thread synchronisation mechanisms to ensure stable operation in real time. To recognise unknown persons, a unique ID is automatically created and their features are added to the common database of embeddings. Particular attention is paid to assessing the spatial overlap of detection zones to avoid duplication of trackers when several people are present in the frame at the same time. In addition, the system is implemented as a web service based on Flask, which provides convenient integration with other software modules and the possibility of remote monitoring via a web interface. The proposed hybrid approach combines the accuracy of modern deep learning models with the flexibility of classical tracking algorithms, making the system suitable for use in security systems, smart offices, educational environments, and other areas where accurate face identification in dynamic environments is important. In summary, this paper demonstrates the practical implementation of an intelligent image analysis system that can be adapted to various use cases, including video surveillance, access control, and crowd management systems, as well as research and educational projects.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/335082 COMPARISON OF MODERN GAME ENGINES WITH A CUSTOM CORE FOR NATIVE GAME DEVELOPMENT ON THE ANDROID PLATFORM 2025-07-10T13:40:48+03:00 Valerii Yatsenko v.yatsenko@biem.sumdu.edu.ua Dmytro Parkhomenko dmytro.parkhomenko@student.sumdu.edu.ua Oleksandr Kushnerov o.kushnerov@biem.sumdu.edu.ua Vitaliia Koibichuk v.koibichuk@biem.sumdu.edu.ua Kostiantyn Hrytsenko k.hrytsenko@biem.sumdu.edu.ua <p>The modern game industry is increasingly focused on mobile platforms, particularly devices based on the ARM architecture, which dominates the smartphone and tablet markets. Developers are actively adapting their engines and tools to this architecture, taking into account its energy efficiency and widespread adoption. 
In this context, the development of a custom game core that can be installed directly on an Android device without the need for additional engines opens new opportunities for optimization, faster prototyping, and full control over device-level performance.&nbsp;This approach is especially relevant in light of the growing popularity of independent game development and the demand for lightweight solutions without unnecessary dependencies. This study presents a comparative analysis of modern game engines (Unity, Unreal Engine, Godot, Defold, Cocos2d-x) and a custom-developed game core designed for direct installation and execution on Android devices with ARM architecture, without relying on any intermediate engine.&nbsp;The paper examines the advantages of ARM architecture, including energy efficiency, scalability, and broad support in mobile devices, making it a suitable platform for native game development. Particular attention is paid to the technical comparison of engine capabilities, including application size, launch speed, API flexibility, access to system resources, and support for low-level languages. It has been revealed that although traditional engines offer extensive functionality and ease of development, they limit hardware-level control and significantly increase APK size. On the other hand, a custom core, specifically designed for ARM devices, provides minimal size, instant launch, and maximum performance due to direct access to graphical APIs (OpenGL ES/Vulkan) and Android system resources.&nbsp;The study also analyzes the suitability of programming languages such as Java, Kotlin, C++, and Rust for Android game development. It outlines the potential of Vulkan as a high-performance graphics API and discusses the feasibility of a core-centered approach for creating lightweight, optimized mobile games and tools.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/335102 EVALUATION OF THE EFFECTIVENESS OF THE “INFRASTRUCTURE AS CODE” METHODOLOGY FOR CREATING AND MANAGING CLOUD INFRASTRUCTURE 2025-07-10T15:44:29+03:00 Dmytro Miroshnychenko dmytro.miroshnychenko@student.karazin.ua Olena Tolstoluzka elena.tolstoluzka@karazin.ua <p>The article describes a comprehensive study of the effectiveness of using the Infrastructure as Code (IaC) methodology to create, scale, and manage cloud infrastructure. The IaC methodology is considered one of the key technologies of digital transformation and the DevOps approach, which provides software automation of infrastructure processes, reduces dependence on the human factor, and increases the repeatability and predictability of IT environments. The article provides a comparative analysis of leading IaC implementation tools, in particular Terraform, Pulumi, AWS CloudFormation, and Ansible, from the standpoint of their openness, compatibility with various cloud platforms, architectural approach (declarative or imperative), state management, and level of flexibility. The degree of automation, scalability, speed of infrastructure deployment, adaptability to change, configuration reliability, and ease of management are evaluated as key performance metrics. For each metric, a theoretical justification, analytical assessment, and comparison with traditional approaches to administration are provided. 
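<p>To make the declarative IaC style discussed above concrete, here is a minimal Pulumi program in Python, Pulumi being one of the surveyed tools. The resource names are placeholders, and the snippet assumes an initialized Pulumi project with AWS credentials configured.</p>
<pre><code># Minimal Pulumi (Python) sketch: the desired state is declared in code and
# the engine computes the change plan. Names are hypothetical placeholders.
import pulumi
import pulumi_aws as aws

# Declare an S3 bucket; on `pulumi up` the engine creates, updates, or
# deletes real resources so they match this declaration (managed state)
site_bucket = aws.s3.Bucket("demo-site-bucket",
                            tags={"managed-by": "pulumi", "env": "dev"})

# Export the generated bucket name as a stack output
pulumi.export("bucket_name", site_bucket.id)
</code></pre>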
Special attention is paid to the analysis of IaC implementation in leading cloud environments (AWS, Microsoft Azure, Google Cloud Platform, OpenStack), taking into account the corresponding platform solutions (CloudFormation, ARM/Bicep, Deployment Manager, Heat) and third-party multi-cloud tools. It was found that the use of IaC significantly improves DevOps practices, simplifies CI/CD processes, and increases the reliability of cloud solutions. As a result, it is proven that the use of IaC provides a significant increase in operational efficiency, reduces infrastructure maintenance costs and promotes its standardization, which makes this methodology strategically important for modern IT systems.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/335105 DESIGNING THE ARCHITECTURE AND SOFTWARE COMPONENTS OF THE DOCKERISED BLOCKCHAIN MEDIATOR 2025-07-10T16:17:17+03:00 Pavlo Zherzherunov Pavlo.Zherzherunov@cs.khpi.edu.ua Olexandr Shmatko oleksandr.shmatko@khpi.edu.ua <p>Small and medium enterprises are not adopting blockchain solutions in their supply chains and business processes due to the cost of implementing and deploying such solutions. The architecture described here is aimed at lowering the barrier for smaller-scale businesses to adopt distributed technologies in their supply chains. Docker’s containerization capabilities are leveraged to achieve these goals due to improved horizontal scaling and a unified environment for application deployment. The architecture leverages tools provided by Docker to design a scalable, robust, and easily maintainable system. The proposed architecture addresses key challenges such as high development costs, incompatibility with existing systems, and the complicated setup processes required for every participant in the supply chain. This research describes how utilizing Docker system capabilities can help smaller-scale businesses adopt distributed solutions in their supply chains and in cooperation with other companies by tackling the issues of traceability, transparency, and trust. The main components of the architecture are a mediator server containerized within a Docker network, a blockchain node, and an NGINX proxy server container. They are implemented to process request data, store relevant information, and secure it on the Ethereum blockchain ledger. The proposed architecture is also aimed at integrating smoothly with existing company applications to reduce adoption costs. Data security in the Ethereum ledger is achieved via measures such as the cryptographic mechanisms and hashing already integrated into the Ethereum platform.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/335107 IDENTIFICATION OF DYNAMIC OBJECT PARAMETERS USING A TRANSFORMER WITH OPTICAL FLOW AND ENSEMBLE METHODS 2025-07-10T16:35:58+03:00 Oleksii Kondratov kondratovolexiy@gmail.com Olena Nikulina elniknik02@gmail.com <p>The article presents an approach to identifying the parameters of dynamic objects in a video stream using a transformer-based architecture, the GeoNet model, and ensemble machine learning methods, namely bagging and boosting. The identification of parameters such as position, velocity, direction of movement, and depth is of significant importance for a wide range of applications, including autonomous driving, robotics, and video surveillance systems. 
The paper describes a comprehensive system that integrates the spatiotemporal characteristics of a video stream by computing optical flow and depth maps using GeoNet, further analyzing them through a transformer, and enhancing accuracy via ensemble methods. GeoNet, as a deep convolutional neural network, combines the tasks of depth estimation and optical flow within a single architecture, enabling accurate 3D scene reconstruction. The use of a transformer allows modeling global dependencies across video frames and improves the accuracy of object classification and detection. At the same time, bagging reduces variance by averaging the results of several models trained on different subsets, while boosting focuses on difficult examples to improve prediction accuracy. The proposed system achieves high accuracy under conditions of dynamic background, lighting changes, occlusions, and noise, making it adaptable for real-time use in complex scenes. A detailed description of each system component is provided: the GeoNet architecture, transformer modules, implementation of bagging and boosting, and the result fusion algorithm. The expected results are intended to demonstrate the effectiveness of integrating deep learning methods with classical ensemble approaches for high-precision dynamic object identification tasks. The proposed methodology opens new prospects for the development of next-generation intelligent computer vision systems.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/335155 LAYERED DEFENSE IN COMMUNICATION SYSTEMS: JOINT USE OF VPN PROTOCOLS AND LINEAR BLOCK CODES 2025-07-11T13:20:22+03:00 Vladyslav Sharov wyctpiy@gmail.com Olena Nikulina elniknik02@gmail.com <p>With the rapid increase in the volume of transmitted information and the proliferation of distributed network infrastructures, the requirements for the security and reliability of communication channels are steadily intensifying. Traditional protection methods, such as virtual private networks (VPNs), are primarily aimed at ensuring confidentiality and authenticity through cryptographic algorithms, while typically lacking resilience to transmission-level errors arising from noise, interference, or hardware failures. In contrast, error correction codes—such as Hamming codes—are well-established tools for detecting and correcting random errors in physical channels, but they do not address intentional threats like interception, modification, or traffic analysis. This paper presents a hybrid cascading model for secure and reliable data transmission that integrates cryptographic encapsulation via VPN technologies with structural redundancy provided by error correction coding. A specific focus is placed on the use of Hamming codes extended by an additional parity bit applied at the post-encryption stage, enabling the protection of VPN packet integrity even under noisy channel conditions. The architecture of the proposed model is examined in detail, including its modular components, processing flow, and the various possible configurations of encoding and encryption blocks. Particular attention is given to analysing the threat surfaces present at each phase of transmission—prior to tunneling, during transport, and at the decryption stage—and assessing the system’s robustness through probabilistic reliability metrics and redundancy coefficients. 
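<p>A toy sketch of the post-encryption coding stage described above: each 4-bit nibble of the encrypted payload is expanded to an extended Hamming (8,4) codeword, that is, Hamming(7,4) plus an overall parity bit, which corrects single-bit errors and detects double-bit errors per block. The bit layout follows the classical construction; how the paper packs VPN packets into blocks is not reproduced here.</p>
<pre><code># Toy sketch: extended Hamming (8,4) = Hamming(7,4) + overall parity bit.
def encode_nibble(d):                 # d: list of 4 data bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                 # parity over codeword positions 3,5,7
    p2 = d1 ^ d3 ^ d4                 # parity over codeword positions 3,6,7
    p3 = d2 ^ d3 ^ d4                 # parity over codeword positions 5,6,7
    word = [p1, p2, d1, p3, d2, d3, d4]
    return word + [sum(word) % 2]     # extra overall parity bit

def decode_block(w):
    w, overall = list(w[:7]), w[7]
    s = ((w[0] ^ w[2] ^ w[4] ^ w[6])            # syndrome bit 1
         | (w[1] ^ w[2] ^ w[5] ^ w[6]) << 1     # syndrome bit 2
         | (w[3] ^ w[4] ^ w[5] ^ w[6]) << 2)    # syndrome bit 4
    parity_ok = (sum(w) % 2) == overall
    if s and not parity_ok:
        w[s - 1] ^= 1                 # single-bit error: correct in place
    elif s and parity_ok:
        raise ValueError("double-bit error detected")
    return [w[2], w[4], w[5], w[6]]   # recovered data bits

block = encode_nibble([1, 0, 1, 1])
block[2] ^= 1                         # inject a single-bit channel error
assert decode_block(block) == [1, 0, 1, 1]
</code></pre>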
Simulation-based modelling supports the theoretical framework and confirms that the combined use of encryption and redundancy coding significantly enhances overall communication resilience. The results underscore the importance of a comprehensive approach to secure data transmission that jointly addresses logical security threats and physical-level vulnerabilities.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/335157 SOFTWARE COMPONENT DEVELOPMENT FOR PARALLEL GATEWAYS DETECTION AND QUALITY ASSESSMENT IN BPMN MODELS USING FUZZY LOGIC 2025-07-11T13:30:36+03:00 Andrii Kopp andrii.kopp@khpi.edu.ua Ľuboš Cibák lubos.cibak@vsemba.sk Dmytro Orlovskyi dmytro.orlovskyi@khpi.edu.ua Dmytro Kudii dmytro.kudii@khpi.edu.ua <p>The quality of business process models is a critical factor in ensuring the correctness, efficiency, and maintainability of information systems. Within the BPMN notation, which is nowadays the standard for business process modeling, parallel (AND) gateways are of particular importance. Errors in their implementation, such as incorrect synchronization or termination of parallel branches, are common and difficult to detect by traditional metrics such as the Number of Activities (NOA) or Control-Flow Complexity (CFC). In this paper, we propose a method for evaluating the correctness of AND-gateways based on fuzzy logic using Gaussian membership functions. The proposed approach is implemented as a software component that analyzes BPMN models, provided in XML format, identifies all AND-gateways, and extracts structural characteristics, i.e. the numbers of incoming and outgoing sequence flows. These features are evaluated using “soft” modeling rules based on fuzzy membership functions. Additionally, an activation function with a 0.5 threshold is used to generate binary quality indicators and calculate an integral quality assessment measure. The software component is developed using Python, as well as third-party libraries: Pandas, NumPy, and Matplotlib. A set of 3729 BPMN models from the Camunda open source repository was used for experimental calculations. Of these, 1355 models contain 3171 AND-gateways. The obtained results demonstrate that 71.2% of the gateways are correct, and 28.8% have structural violations. In 50% of the models, the quality score is 1.00, which indicates high quality; however, minimum values as low as 0.02 indicate the need for automated verification of business process models. The considered approach allows AND-gateway modeling errors to be detected, increasing the reliability of BPMN models and offering capabilities for intelligent business process modeling support.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/335158 INTELLIGENT TECHNOLOGY FOR OPTIMIZING THE PROJECT-BASED APPROACH TO TEACHING STUDENTS USING LEARNING MANAGEMENT SYSTEMS 2025-07-11T13:48:13+03:00 Volodymyr Sokol sokol@cs.rwth-aachen.de Mykhaylo Godlevskyi Mykhailo.Hodlevskyi@khpi.edu.ua Mariia Bilova Mariia.Bilova@khpi.edu.ua Roman Tupkalenko Roman.Tupkalenko@cs.khpi.edu.ua <p>This work is devoted to the development of a recommendation system that enables the effective construction of learning trajectories for students studying in universities using learning management systems. 
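<p>Returning briefly to the preceding BPMN entry, a small illustrative sketch of the fuzzy AND-gateway evaluation it describes: a well-formed AND-gateway should either split (one incoming, several outgoing flows) or join (several incoming, one outgoing), Gaussian membership functions "soften" these rules, and a 0.5 threshold turns the score into a binary quality indicator. The membership centers and sigma are assumptions, not the paper's calibration.</p>
<pre><code># Illustrative sketch of fuzzy AND-gateway scoring; parameters are assumed.
import math

def gaussian(x, c, sigma):
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def and_gateway_quality(n_in, n_out, sigma=0.5):
    # Membership in "well-formed split" and "well-formed join" patterns
    split = gaussian(n_in, 1, sigma) * (1 - gaussian(n_out, 1, sigma))
    join = (1 - gaussian(n_in, 1, sigma)) * gaussian(n_out, 1, sigma)
    return max(split, join)

for n_in, n_out in [(1, 3), (2, 1), (2, 2), (1, 1)]:
    score = and_gateway_quality(n_in, n_out)
    # 0.5-threshold activation yields the binary quality indicator
    print(n_in, n_out, round(score, 2), "OK" if score >= 0.5 else "violation")
</code></pre>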
The core of the recommendation system will be a deep feedforward artificial neural network, which takes as input information about the student and the subject that he or she should study and produces as output the most effective learning trajectory. The neural network is trained on data prepared using multi-agent modeling. The domain was decomposed into separate components, which in the process of multi-agent modeling were represented as agents and the environment in which they communicate with each other. The subject of this research is the modeling of the learning process in learning management systems. The purpose of the study is to optimize the student learning process within learning management systems. The subject area was analyzed and studied, the architectures of the recommendation system and of the multi-agent system were developed, and a mathematical model of agent interaction was constructed. To achieve the goals of the study, it is necessary to solve the main tasks, namely: to prepare a training data set using multi-agent modeling and to develop and train, on these data, a recommendation system based on a deep artificial neural network. After completing all the tasks of the work, it is expected that the learning process of students in the learning management system will be optimized in terms of time and resources spent on learning, and the average level of knowledge will be increased.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/334556 CYBERSECURITY POLICY CONTROL SYSTEM WITH ELEMENTS OF ARTIFICIAL INTELLIGENCE OF THE CORPORATE COMMUNICATION NETWORK 2025-07-04T14:32:57+03:00 Andrii Levterov lai@khadi.kharkov.ua Ganna Pliekhova plehovaanna11@gmail.com Maryna Kostikova kmv_topaz@ukr.net Anton Okun okunanton@gmail.com <p>In today’s digital world, where almost all aspects of life are connected to the Internet, cybersecurity is not just important but an integral part of our reality. It ensures the smooth operation of services: from email to social and corporate networks across all sectors of critical infrastructure in countries worldwide. The development of artificial intelligence in the field of cybersecurity opens up significant potential for improving the effectiveness of information protection in all sectors of critical infrastructure. The developed system implements cybersecurity policy control with artificial intelligence elements for corporate communication networks.&nbsp;The goal is to improve the accuracy of forming and controlling the cybersecurity policy of corporate communication networks by identifying techniques for executing computer attacks using artificial intelligence and analyzing the situation and vulnerabilities present in the software infrastructure. To achieve this goal, a cybersecurity policy control system with artificial intelligence elements for corporate communication networks was developed. The effectiveness of the developed system was calculated by comparing Theil coefficients to assess its validity. The developed system processes and evaluates information using artificial intelligence, including comparator identification, which allows for understanding the capabilities and techniques of executing each known computer attack. It also uses a situational text predicate to identify and analyze the situation of a cyberattack and predict its trajectory, determining the likelihood of each known attack and comparing it with the effectiveness of existing defense measures. 
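<p>For readers unfamiliar with the metric, one common form of the Theil coefficient used for such validity comparisons is Theil's U1 inequality coefficient, where 0 means a perfect match between forecast and observation and values near 1 a poor one. The abstract does not specify which Theil variant the authors compute, and the arrays below are placeholders, not data from the study.</p>
<pre><code># Illustrative Theil U1 inequality coefficient; sample data is synthetic.
import numpy as np

def theil_u(actual, predicted):
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return rmse / (np.sqrt(np.mean(actual ** 2))
                   + np.sqrt(np.mean(predicted ** 2)))

observed = np.array([0.9, 0.7, 0.8, 0.6, 0.95])    # e.g., realized threat levels
system_a = np.array([0.85, 0.72, 0.78, 0.55, 0.9]) # developed system's forecast
system_b = np.array([0.6, 0.9, 0.5, 0.8, 0.7])     # analogue's forecast

# The forecast with the smaller U is the more valid one
print(theil_u(observed, system_a), theil_u(observed, system_b))
</code></pre>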
By comparing the calculated Theil coefficients of the system with an existing analogue, it was concluded that the validity of forming a cybersecurity policy with artificial intelligence elements in the developed system is higher than that of the analogue, confirming the achievement of the proposed technical result.&nbsp;The use of AI in corporate communication networks enables faster and more efficient detection, analysis, and response to cyber threats, providing a high level of protection against modern cyberattacks. The integration of AI into cybersecurity also enhances security and improves the ability to respond to the challenges of today’s digital environment. Thus, AI integration in the field of cybersecurity opens up new opportunities for ensuring effective and comprehensive protection against cyber threats in the modern digital world, becoming a key factor in ensuring future cybersecurity.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/334680 APPLICATION OF ARTIFICIAL INTELLIGENCE METHODS TO PREDICT THE ULTIMATE STATIC LOAD OF A BEAM MADE OF HOMOGENEOUS MATERIAL ACCORDING TO THE VON MISES CRITERION BASED ON THE DATA OF STRUCTURAL STRENGTH ANALYSIS 2025-07-06T20:45:50+03:00 Gennadii Martynenko Gennadii.Martynenko@khpi.edu.ua Vladyslav Harkusha Vladyslav.Harkusha@infiz.khpi.edu.ua <p>The subject of the study is static structural analysis in mechanics. The aim of the work is to create and train an artificial intelligence model in the form of neural networks to predict the ultimate load on a structural element such as a beam made of a homogeneous material. The strength state of this structural element is determined by equivalent stresses according to the von Mises criterion. The initial and variable parameters are the geometric dimensions and force loads acting on the body. Achieving the goal makes it possible to calculate the strength of a structural element faster in terms of computation and with acceptable error values compared to classical methods of mechanics using numerical methods, in particular the finite element method. To achieve this goal, the following tasks are solved: conducting numerical experiments to analyze the strength state under static loading of a beam structural element using the finite element method; determining the key parameters of the body; preparing and aggregating data for the model; designing and training the model. Numerical experiments were carried out with predefined types of fixings and loads on the beam. There were three variations of data preparation and, accordingly, of models, to ensure the representativeness of predictions by neural networks. All numerical experiments were conducted in computer-aided design systems. The design of the models was based on the principle of a minimal but sufficient number of hidden network layers and neurons in them. The model was trained using supervised learning, where a certain number of geometric properties and the pressure resisted by the body were selected as input parameters, and the maximum equivalent stress according to the von Mises criterion corresponding to these parameters was selected as an output parameter. These stress values are obtained as a result of analyses in the computer-aided design system. Prediction of the same values for other parameters of the object of study using neural networks is based on a linear regression algorithm and a certain number of input parameters. 
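<p>A schematic sketch of this supervised setup: geometric dimensions and the applied load as inputs, maximum von Mises equivalent stress as the target. The training data below is a synthetic placeholder standing in for FEM results; in the study it comes from analyses in computer-aided design systems. scikit-learn's MLPRegressor uses the Adam optimizer and a squared-error loss by default, matching the training configuration described around this abstract.</p>
<pre><code># Schematic sketch: MLP regression of max von Mises stress from geometry
# and load. Features, units, and the target formula are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Features: beam length, width, height, applied pressure (placeholder units)
X = rng.uniform([0.5, 0.05, 0.05, 1e4], [2.0, 0.2, 0.2, 1e5], size=(500, 4))
# Placeholder target standing in for the FEM-computed max von Mises stress
y = X[:, 3] * X[:, 0] / (X[:, 1] * X[:, 2] ** 2)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = MLPRegressor(hidden_layer_sizes=(10,), solver="adam",
                     max_iter=5000, random_state=1)
model.fit(X_tr, y_tr)
print("MSE:", np.mean((model.predict(X_te) - y_te) ** 2))
</code></pre>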
The models were optimized using the adaptive moment estimation (Adam) algorithm. The model prediction error was calculated using the mean squared error. The result of the study is the creation and training of artificial intelligence models and verification of their ability to predict the maximum equivalent stresses according to the von Mises criterion based on the geometric and force characteristics of a structural element with accuracy comparable to a similar calculation in computer-aided design systems. The analysis of the obtained results made it possible to prove the possibility of a reliable prediction of the desired maximum values of equivalent stresses characterizing the strength state of the considered structural element at different ratios of geometric and force parameters, without performing strength analysis by traditional methods. This expands the possibilities of finding rational design options.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/335159 ASSESSMENT OF THE QUALITY OF THE STABILIZATION SYSTEM OF SPECIAL EQUIPMENT ON MOBILE VEHICLES 2025-07-11T13:59:45+03:00 Oleksii Haluza Oleksii.Haluza@khpi.edu.ua Olena Akhiiezer olena.akhiiezer@khpi.edu.ua Stanislav Pohorielov Stanislav.Pohorielov@khpi.edu.ua Nataliia Protsai nataliia.protsai@khpi.edu.ua Oleksandr Volkovoi oleksandr.volkovoi@ukr.net <p>The article is devoted to the assessment of the quality of stabilization systems of special equipment used on various types of vehicles, such as combat vehicles, in particular infantry fighting vehicles (IFVs). The main task of such systems is to maintain a stable position or orientation of the object, mitigating external influences and compensating for the movement of the equipment carrier itself. This is especially important for combat assets, where the stability of the equipment affects the accuracy of guidance and the effectiveness of combat operations. Among the features considered are smoothing and mitigating abrupt fluctuations and deviations, as well as stabilizing relative to the target. The article presents a mathematical model and developed software for assessing the quality of stabilization systems of special equipment on mobile vehicles using the method of deviation analysis, calculating deviations in the stabilization process while taking into account the movement of the carrier. The method involves measuring the angular deviations of the sight relative to the target at each point in time. The main indicators of stabilization quality in the model are mean angular deviation, standard deviation, and maximum deviation. For dynamic target tracking, the principles of a correlation filter are used, which make it possible to determine the similarity between the current frame and the reference image of the object. This approach makes it possible to reliably identify an object even under dynamic position changes. The correlation tracking described in the article is based on finding an object in the next frame by maximizing the similarity between the current image and the reference. 
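<p>A minimal sketch of this correlation matching idea: the target is found in each new frame by maximizing normalized cross-correlation between the frame and a reference template, here via OpenCV's matchTemplate. File names are hypothetical placeholders, and the paper's own correlation filter may differ in detail.</p>
<pre><code># Minimal correlation-matching sketch; inputs are placeholder images.
import cv2

template = cv2.imread("target_ref.png", cv2.IMREAD_GRAYSCALE)  # reference
frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)     # current frame

# Normalized correlation coefficient is robust to uniform lighting changes
response = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(response)

h, w = template.shape
x, y = max_loc                         # top-left corner of the best match
print(f"match at ({x}, {y}), similarity {max_val:.2f}")
# The angular deviation of the sight from the target could then be derived
# from (x + w/2, y + h/2) relative to the frame center and the camera's FOV.
</code></pre>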
The use of a correlation filter ensures stable target tracking and adjusts the settings to maintain accurate focus on the target under changing lighting and viewing angle.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/334638 DEVELOPMENT OF AN EDUCATIONAL CHATBOT WITH A CONTEXTUAL INTENT SYSTEM ON THE DIALOGFLOW PLATFORM 2025-07-05T14:10:59+03:00 Oksana Ivashchenko oksana.ivashchenko@khpi.edu.ua Stanislav Filip stanislav.filip@vsemba.sk Bohdan Ratushnyi bohdan.ratushnyi@cs.khpi.edu.ua <p>In the context of digital transformation in higher education, the development of intelligent agents capable of maintaining continuous and effective interaction with students is becoming increasingly relevant. This article presents a complete life cycle of the creation of the contextual chatbot “Pytayko z PIITU” for the Department of Software Engineering and Intelligent Control Technologies of NTU “KhPI”. The chatbot is designed to provide quick and intuitive access to information about academic procedures, communication channels, scholarships, documents, and other common questions related to students' interaction with the department and its website. The system was developed using the Dialogflow platform with Telegram integration and Google Cloud Functions as the fulfillment handler. The core of the system is a structured multi-level intent architecture, where each intent group corresponds to a thematic category such as admissions, documents, or course schedules. This allows the bot to maintain conversation context, ensure precise routing of requests, and reduce ambiguity in user interaction. The prototyping model was selected as the life cycle methodology due to the need for active user feedback and iterative improvement. Based on the analysis of the departmental website and survey data from students, an intent system was created that organizes user queries by categories, each with its own fallback intent and context-based clarification mechanisms. Special attention was paid to the dynamic distribution of queries using webhook logic and centralized reusable intent blocks. The article presents the development algorithm, intent architecture, testing process, and analysis of interaction history. The testing phase included multiple validation cycles, real-time sessions via Telegram, and the assessment of fallback effectiveness. The final implementation achieved a high accuracy rate (~91%) and low error percentage (~3%), demonstrating the feasibility of using Dialogflow for educational automation scenarios. The chatbot architecture supports future scalability and provides 24/7 support for student inquiries without additional administrative workload.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/334639 VOLUNTEER MOVEMENT AS A SUPPLY NETWORK: ANALYSIS, CHALLENGES, AND PROSPECTS 2025-07-05T14:26:20+03:00 Dmytro Komarovskyy Komarovskiy@gmail.com Oleksii Haluza Oleksii.Haluza@khpi.edu.ua Olena Akhiiezer olena.akhiiezer@khpi.edu.ua <p>This article is dedicated to the study of the structure, operational features, and efficiency of the volunteer movement in the context of Ukraine's modern socio-economic and political realities. Volunteer organizations play a key role in supporting the population, responding to crisis situations, assisting the military, helping socially vulnerable groups, and developing civil society. 
In this regard, the relevance of researching volunteer networks and finding optimal ways for their operation becomes particularly significant. The objective of this study is to identify and analyze the key parameters of volunteer networks, which will further enable the development of mathematical models of their activities, as well as the construction of formal criteria for their operational efficiency. This is necessary to optimize the work of such organizations and improve their ability to adapt to rapidly changing conditions. The research methodology includes a systematic analysis that allows viewing volunteer networks as complex dynamic systems; a comparative method, which makes it possible to assess the differences and common features of various volunteer initiatives; and a case-study analysis of practical examples of volunteer activities. This approach has made it possible to obtain a comprehensive picture of the functioning of the volunteer movement, identify its strengths and weaknesses, and develop recommendations for its further development. The study demonstrates the unique characteristics of volunteer supply networks, including high flexibility in decision-making, rapid response to emergencies, a horizontal management structure, and a focus on achieving maximum social impact. Key factors influencing the operational efficiency of volunteer organizations have been identified, among which the motivation of participants, organizational structure, level of technological support, and the presence of well-established communication channels stand out. The results of this study can serve as a basis for further improvement of the activities of volunteer organizations, the development of specialized information platforms for coordinating their work, and the formation of state policy in the field of support and development of the volunteer movement. The application of the proposed approaches can contribute to increasing the resilience of volunteer initiatives, expanding their influence, and improving the overall efficiency of work in the field of social assistance and population support.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025 http://samit.khpi.edu.ua/article/view/334551 STATISTICAL APPROACH TO DETECTION OF ANOMALIES IN WATER DISTRIBUTION NETWORKS 2025-07-04T14:22:01+03:00 Oleg Melnikov Oleg.Melnikov@khpi.edu.ua Yurii Dorofieiev Yurii.Dorofieiev@khpi.edu.ua Natalia Marchenko Natalia.Marchenko@khpi.edu.ua <p>This paper is devoted to solving the problem of developing an automated system for detecting anomalies in water distribution networks. The main causes of such anomalies are background leaks and pipe breaks. To address this problem, a statistical approach is proposed, which consists in testing the null hypothesis that the readings of pressure and/or water flow sensors received in real time correspond to the standard conditions of the network. The paper proposes a three-stage anomaly detection scheme, which includes: statistical profiling of network sensors; system calibration to achieve the desired ratio between the risks of false alarms and the omission of existing anomalies; determination of rules for drawing conclusions about the presence of anomalies. A methodology for statistical profiling and calibration of the system based on simulation modeling in the EPANET software environment using the WNTR software interface in Python was developed. 
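<p>A minimal WNTR sketch of the profiling step just described: simulate the network under demand fluctuations and collect the pressure distribution at each sensor node. The .inp file and node names are hypothetical placeholders; the study uses the L-Town benchmark model.</p>
<pre><code># Minimal WNTR/EPANET profiling sketch; file and node names are placeholders.
import wntr

wn = wntr.network.WaterNetworkModel("network.inp")   # EPANET input file
sim = wntr.sim.EpanetSimulator(wn)
results = sim.run_sim()

pressure = results.node["pressure"]                  # time x node DataFrame
sensors = ["n54", "n105", "n342"]                    # hypothetical sensor nodes
# Per-sensor statistical profile: central value and spread under normal
# operating conditions, later used to judge real-time readings
profile = pressure[sensors].agg(["mean", "std", "min", "max"])
print(profile)
</code></pre>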
In the process of such modeling, the distribution of pressure readings in the network sensors is obtained based on water demand fluctuations. As an example, the model of the L-Town water supply network was studied, which was developed for the BattLeDIM leak detection and isolation competition. The sensitivity of the anomaly detection results to the range of sensor values that are considered normal, as well as to the number of sensors involved in the anomaly detection procedure, is investigated. The dependence of the number of sensors with anomalous readings on the size of the leak is analyzed. It is established that no combination of parameters of the proposed anomaly detection system provides optimal results for all possible leakage sizes simultaneously; it is therefore proposed to calibrate the system using a man-machine procedure.</p> 2025-07-11T00:00:00+03:00 Copyright (c) 2025
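<p>Finally, a schematic sketch of the detection rule from the preceding abstract: given each sensor's normal-operation profile (mean, standard deviation), flag readings that reject the null hypothesis of standard network conditions, and declare an anomaly when enough sensors agree. The thresholds below are illustrative assumptions, not the calibrated values from the study.</p>
<pre><code># Schematic detection-rule sketch; profiles, readings, and thresholds are
# illustrative placeholders, not the paper's calibrated values.
import numpy as np

def anomalous_sensors(reading, mean, std, z_crit=3.0):
    """Boolean mask of sensors whose readings reject the null hypothesis of
    normal network conditions at the given z-score threshold."""
    z = np.abs((reading - mean) / std)
    return z > z_crit

mean = np.array([52.1, 48.7, 55.3])      # profiled mean pressures, m
std = np.array([1.2, 0.9, 1.5])          # profiled standard deviations
reading = np.array([51.8, 44.1, 50.2])   # current real-time readings

mask = anomalous_sensors(reading, mean, std)
# Raise an alarm only if at least k sensors deviate (limits false alarms)
print("anomaly detected" if mask.sum() >= 2 else "normal operation")
</code></pre>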