Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies
http://samit.khpi.edu.ua/
<p><strong>Collection of scientific papers</strong></p> <p><img style="width: 250px;" src="http://samit.khpi.edu.ua/public/journals/49/cover_issue_16936_uk_UA.jpg" alt="" /></p> <p><strong>Year of foundation:</strong> 1961</p> <p><strong>Aims and Scope:</strong> Peer-reviewed open access scientific edition that publishes new scientific results in the field of system analysis and management of complex systems, based on the application of modern mathematical methods and advanced information technology. The edition publishes works related to artificial intelligence, big data analysis, and modern methods of high-performance computing in distributed decision support systems.</p> <p><strong>Target audience:</strong> Scientists, teachers of higher education, post-graduate students, students, and specialists in the field of systems analysis, management, and computer technology.</p> <p><strong>ISSN:</strong> <a href="https://portal.issn.org/resource/ISSN/2079-0023">2079-0023</a> (Print)</p> <p><strong>ISSN:</strong> <a href="https://portal.issn.org/resource/ISSN/2410-2857">2410-2857</a> (Online)</p> <p>Media identifier <strong><a href="https://drive.google.com/file/d/1POp1f3OPs6wWTgpUZXdVVKlUSORms-g1/view?usp=sharing">R30-01544</a></strong>, according to the <a href="https://drive.google.com/file/d/1o3jlce-hW2415D2fiaa7gbrj307yvKf3/view?usp=share_link"><strong>decision of the National Council of Ukraine on Television and Radio Broadcasting of 16.10.2023 No. 1075</strong></a>.</p> <p><strong><a href="https://drive.google.com/open?id=1BJybDTz3S9-ld7mUSnDpBeQzDBH61OO9">Order of the Ministry of Education and Science of Ukraine No. 1643 of December 28, 2019</a></strong> "On approval of decisions of the Attestation Board of the Ministry on the activity of specialized scientific councils of December 18, 2019", Annex 4: <strong>"Bulletin of the National Technical University "KhPI". Series: System Analysis, Control and Information Technology" is added to category B</strong> of the "List of scientific professional publications of Ukraine in which the results of dissertation works for obtaining the scientific degrees of doctor of sciences, candidate of sciences, and doctor of philosophy can be published".</p> <p><strong>Indexing:</strong> Index Copernicus, DOAJ, Google Scholar, and <a href="http://samit.khpi.edu.ua/indexing">other systems</a>.</p> <p>The edition publishes scientific works in the following fields:</p> <ul> <li>124 - System analysis</li> <li>122 - Computer science</li> <li>126 - Information systems and technologies</li> <li>121 - Software engineering</li> <li>151 - Automation and computer-integrated technologies</li> <li>113 - Applied mathematics</li> </ul> <p><strong>Frequency:</strong> Biannual - June and December issues (deadlines for submission of manuscripts: until May 15 and November 15 of each year; manuscripts submitted late may be considered separately).</p> <p><strong>Languages:</strong> Ukrainian, English (mixed languages).</p> <p><strong>Founder and publisher:</strong> National Technical University "Kharkiv Polytechnic Institute" (<a href="https://www.kpi.kharkov.ua/eng/">University website</a>, <a href="https://ndch.kpi.kharkov.ua/en/bulletin-of-ntu-khpi/">Scientific and Research Department</a>).</p> <p><strong>Chief editor:</strong> <a href="https://www.scopus.com/authid/detail.uri?authorId=57202891828">M. D. Godlevskyi</a>, D. Sc., Professor, National Technical University "KhPI".</p> <p><strong>Editorial board</strong> staff is available <a href="http://samit.khpi.edu.ua/editorialBoard">here</a>.</p> <p><strong>Address of the editorial office:</strong> 2, Kyrpychova str., 61002, Kharkiv, Ukraine, NTU "KhPI", Department of System Analysis and Information-Analytical Technologies.</p> <p><strong>Responsible secretary:</strong> <a href="https://www.scopus.com/authid/detail.uri?authorId=6507139684">M. I. Bezmenov</a>, PhD, Professor, National Technical University "KhPI".</p> <p><strong>Phone numbers:</strong> +38 057 707-61-03, +38 057 707-66-54</p> <p><strong>E-mail:</strong> mykola.bezmenov@khpi.edu.ua</p> <p>This journal practices and supports a policy of open access according to the <strong><a href="https://www.budapestopenaccessinitiative.org/read">Budapest Open Access Initiative (BOAI)</a></strong>.</p> <p><img src="http://samit.khpi.edu.ua/public/site/images/koppam/open-access.png" alt="Open Access" /></p> <p>Published articles are distributed under the terms and conditions of the <strong><a href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution (CC BY)</a></strong> license.</p> <p><img src="http://samit.khpi.edu.ua/public/site/images/koppam/cc-by.png" alt="CC-BY" /></p> <p>The editorial board adheres to international standards of publishing ethics and the recommendations of the <strong><a href="https://publicationethics.org/resources/guidelines/principles-transparency-and-best-practice-scholarly-publishing">Committee on Publication Ethics (COPE)</a></strong> on the Principles of Transparency and Best Practice in Scholarly Publishing.</p> <p><img src="http://samit.khpi.edu.ua/public/site/images/koppam/sm-cope.png" alt="" width="74" height="50" /></p>
<p><span>Authors who publish with this journal agree to the following terms:</span></p><ul><li>Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a <a href="http://creativecommons.org/licenses/by/3.0/" target="_new">Creative Commons Attribution License</a> that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.</li><li>Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.</li><li>Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See <a href="http://opcit.eprints.org/oacitation-biblio.html" target="_new">The Effect of Open Access</a>).</li></ul>
METHOD OF DETECTING LANDMARKS FOR NAVIGATION OF AUTONOMOUS MOBILE ROBOTS USING FEATURES OF AVERAGE COLOR INTENSITY DISTRIBUTION
http://samit.khpi.edu.ua/article/view/320134
<p>The use of video cameras in the navigation of autonomous mobile robots is one of the possible ways of implementing passive remote methods of detecting ground landmarks. A method is proposed for detecting ground landmarks during the navigation of autonomous mobile robots, based on the features of the distribution of average color intensity in the columns of the robot's video camera matrix. The main feature of this distribution is that when a pillar-like object, a possible landmark, appears in the camera's field of view, a jump or dip appears in the distribution, and its amplitude can serve as a detection criterion. The work shows that this operation can be performed effectively on the basis of image matrix analysis if the color of the landmark differs significantly from the color of the background. In other cases, it is proposed to use the averaging of the intensity of red, green, and blue colors along the columns of the video camera matrix. To increase the probability of landmark detection over the broad range of operating conditions of an autonomous mobile robot's video camera, the method proposes using as the detection criterion the product of the modulus of the derivative of the distribution of average colors in the matrix columns and the modulus of the difference between this distribution and its average value across all columns. It was established that this product, called the determining product, can serve as a criterion for identifying a landmark. It is shown that when the maximum value of the determining product exceeds a threshold value, determined on the basis of the analysis of statistical data, in any of the red, green, or blue channels, a ground landmark is detected. Research data show that, in its influence on the probabilistic characteristics of detection, the determining product plays a role similar to that of the signal-to-noise ratio in radar.</p>Oleksandr Poliarus, Yurii Khomenko
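<p>A minimal NumPy sketch of the detection criterion summarized above: the per-column average intensity, the modulus of its derivative, and the modulus of its deviation from the mean are combined into the determining product and compared with a threshold in each color channel. The function name, frame shape, and threshold value are illustrative assumptions, not taken from the paper.</p>
<pre><code class="language-python">
# Illustrative sketch of the "determining product" criterion; names and
# the threshold are assumptions, not the paper's implementation.
import numpy as np

def detect_landmark(frame: np.ndarray, threshold: float) -> bool:
    """frame: H x W x 3 RGB image; returns True if a landmark is detected."""
    for ch in range(3):                          # red, green, blue channels
        s = frame[:, :, ch].mean(axis=0)         # average intensity per column
        ds = np.abs(np.gradient(s))              # modulus of the derivative
        dev = np.abs(s - s.mean())               # modulus of deviation from the mean
        determining_product = ds * dev           # the "determining product"
        if determining_product.max() > threshold:
            return True                          # jump/dip exceeds the threshold
    return False

frame = np.random.default_rng(1).integers(0, 256, (480, 640, 3)).astype(float)
print(detect_landmark(frame, threshold=500.0))
</code></pre>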
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 18–24 | DOI: 10.20998/2079-0023.2024.02.03
SYNTHESIS OF DESIGN PARAMETERS OF MULTI-PURPOSE DYNAMIC SYSTEMS
http://samit.khpi.edu.ua/article/view/320136
<p>Two problems related to the optimization of linear stationary dynamic systems are considered. A general formulation of the multi-purpose problem of optimal control with the choice of design parameters is given. As a special case, the problem of multi-objective optimization of a linear system according to an integral quadratic criterion with a given random distribution of initial deviations is considered. The solution is based on the method of simultaneously reducing two positive-definite quadratic forms to diagonal form. Analytical results have been obtained that make it possible to calculate the mathematical expectation of the criterion under the normal multidimensional distribution law of the vector of random initial perturbations. The inverse problem of stability theory is formulated: to find a vector of structural parameters that ensure the stability of the system and a given average value of the quadratic integral quality criterion on a set of initial perturbations. The solution of the problem is proposed to be carried out in two stages. The first stage involves deriving a general solution to the Lyapunov matrix equation in terms of the elements of the system matrix. To achieve this, the state space is mapped onto the eigen-subspace of the positive-definite matrix corresponding to the integral quadratic performance criterion. It has been established that this solution is determined by an arbitrary skew-symmetric matrix or by the corresponding set of arbitrary constants. In the case where the system matrix depends linearly on the vector of design parameters, a linear system of equations can be formulated with respect to the unknown parameters and the arbitrary constants present in the general solution of the inverse stability problem. In general, such a system is consistent and admits an infinite number of solutions that satisfy the initial requirements for the elements of the symmetric matrices in the Lyapunov equation.</p>Oleksandr Kutsenko, Mykhailo Alforov, Andrii Alforov
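<p>One way to read the analytical result (the notation here is illustrative, not the paper's): for an asymptotically stable system \(\dot{x} = Ax\) with an integral quadratic criterion, the Lyapunov matrix equation gives the criterion as a quadratic form in the initial state, and its expectation over random initial perturbations follows directly:</p>
\[
J(x_0) = \int_0^{\infty} x^{\top} Q\, x \, dt = x_0^{\top} P x_0,
\qquad A^{\top} P + P A = -Q,
\]
\[
\mathbb{E}[J] = \mathbb{E}\!\left[x_0^{\top} P x_0\right] = \operatorname{tr}(P\Sigma) + m^{\top} P m
\quad \text{for } x_0 \sim \mathcal{N}(m, \Sigma).
\]
<p>The inverse problem of stability theory then seeks design parameters for which \(A\) is Hurwitz and \(\operatorname{tr}(P\Sigma) + m^{\top} P m\) equals the prescribed average value of the criterion.</p>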
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 25–29 | DOI: 10.20998/2079-0023.2024.02.04
MODIFICATION OF THE DECOMPOSITION METHOD OF CONSTRUCTING MULTIVARIATE POLYNOMIAL REGRESSION WHICH IS LINEAR WITH RESPECT TO UNKNOWN COEFFICIENTS
http://samit.khpi.edu.ua/article/view/320127
<p>The authors created a universal method of constructing a multivariate polynomial regression given by a redundant representation. The method is synthetic: it organically combines a decomposition method and a modified group method of data handling. First, the decomposition method is implemented; it decomposes the multivariate problem into a sequence of subproblems of constructing univariate polynomial regressions and the corresponding systems of linear equations, whose variables are estimates of the nonlinear terms of the multivariate polynomial regression. Partial cases that guarantee finding estimates with a predetermined value of their variances were considered. The formal algorithm for constructing coefficient estimates for nonlinear terms of the multivariate polynomial regression stops at the first coefficient for which estimation with a predetermined accuracy is not achieved under the specified limitations on the number of tests. All coefficients not found by the decomposition method are estimated by a heuristic method, an efficient modification of the group method of data handling. The increase in the efficiency of the synthetic method is achieved primarily by new, theoretically substantiated algorithmic procedures (aggregated operators) of the decomposition method, which significantly increase, in comparison with its previous version, the number of coefficients of nonlinear terms of a multivariate polynomial regression that can be found with a predetermined accuracy. The authors show that this effect is achieved due to new theoretical provisions used in the visual analysis, by a professional user, of the structure of the multivariate polynomial regression given by the redundant representation. The given illustrative example facilitates the use of the presented results in solving practical problems.</p>Alexander Pavlov, Maxim Holovchenko, Valeriia Drozd
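<p>The decomposition and modified GMDH procedures themselves are not reproduced here; the sketch below only illustrates the starting point the abstract relies on: a multivariate polynomial regression given by a redundant monomial representation is linear in its unknown coefficients, so estimation reduces to a linear problem. All names and data are illustrative.</p>
<pre><code class="language-python">
# Illustrative sketch only: a redundant monomial basis is linear in its
# coefficients, so ordinary least squares applies. The paper's
# decomposition/GMDH estimation scheme is not reproduced here.
import numpy as np
from itertools import combinations_with_replacement

def monomial_features(X: np.ndarray, degree: int) -> np.ndarray:
    """Build the redundant basis of all monomials of total degree <= degree."""
    n_samples, n_vars = X.shape
    cols = [np.ones(n_samples)]                      # degree-0 term
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(n_vars), d):
            cols.append(np.prod(X[:, idx], axis=1))  # e.g. x1 * x2**2
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 1.0 + 2.0 * X[:, 0] * X[:, 1] - 0.5 * X[:, 2] ** 2 + rng.normal(scale=0.1, size=200)
Phi = monomial_features(X, degree=2)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # estimates of all coefficients
</code></pre>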
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 3–10 | DOI: 10.20998/2079-0023.2024.02.01
ANALYSIS OF THE APPLICATIONS OF THE DATA-DRIVEN APPROACH IN EVALUATING THE THERMAL-PHYSICAL PROPERTIES OF COMPOSITES
http://samit.khpi.edu.ua/article/view/320130
<p>This research analyzes the potential and prospects of a data-driven methodology for examining the thermo-physical properties of composite materials, particularly in contrast to conventional methods. The analysis examines fundamental principles and advanced machine learning approaches utilized in materials science, highlighting their ability to improve the knowledge, optimization, and overall quality of composite materials. This study thoroughly examines the application of neural networks in forecasting thermal characteristics, highlighting their predictive capabilities and potential to transform the analysis of thermal properties in composite materials. Additionally, the research underscores the growing reliance on big data analytics in addressing complex challenges in material behavior, particularly under variable environmental conditions. A comparative assessment is performed between the data-driven methodology and traditional analytical methodologies, emphasizing the distinct advantages and drawbacks of each. This comparison elucidates how data-driven methodologies can enhance and refine the precision of thermo-physical analysis. The convergence of machine learning and material science is shown to not only facilitate more accurate predictions but also reduce experimentation time and costs. The report also delineates contemporary techniques for measuring and forecasting the thermo-physical properties of composites, emphasizing the advancements in new technologies in recent years. The function of computational tools and computer technology is elaborated upon, especially in the modeling of thermo-physical properties and the simulation of production processes for composite materials. This paper highlights the growing significance of these technologies in enhancing both theoretical and practical dimensions of material science. The research provides novel insights into composite manufacture, thereby advancing the future of materials science and the practical applications of composite materials. The results have significant implications for enhancing production processes, fostering innovation, and progressing the research of composite materials across diverse industries.</p>Ruslan Lavshchenko, Gennadiy Lvov
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 11–17 | DOI: 10.20998/2079-0023.2024.02.02
SOFTWARE DEVELOPMENT AND RESEARCH FOR MACHINE LEARNING-BASED STRUCTURAL ERRORS DETECTION IN BPMN MODELS
http://samit.khpi.edu.ua/article/view/320149
<p>The most important tool for process management is business process modeling. Business process models make it possible to represent graphically the sequences of events, activities, and decision points that make up business processes. However, models that contain errors in depicting the business process structure can lead to misunderstanding of a business process, errors in its execution, and associated expenses. Thus, the aim of this study is to ensure the comprehensibility of business process models by detecting structural errors in business process models and their subsequent correction. An analysis of the Business Process Management (BPM) lifecycle showed that it lacks a stage at which the created business process models are checked for errors. Therefore, the paper improves the BPM lifecycle by introducing a correctness validation stage for business process models, supported by the developed software. The paper proposes to process created BPMN (Business Process Model and Notation) models as connected directed graphs. To detect errors in business process models, the K-Nearest Neighbors method is chosen, a fairly simple and effective machine learning classification method. The study also includes the software design and development, its performance validation, and its usage to solve the given problem. To analyze the obtained results, the confusion matrix was used and the corresponding quality metrics were calculated. The obtained results confirm the suitability of the developed software for detecting structural errors in business process models. This web application, which is based on the created classification model, allows all interested users to upload business process models in BPMN 2.0 format, view the uploaded models, and analyze them for structural errors.</p>Andrii Kopp, Dmytro Orlovskyi, Igor Gamayun, Illia Sapozhnykov
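<p>A hedged sketch of the classification step, assuming BPMN models have already been parsed into directed graphs: simple structural features feed a K-Nearest Neighbors classifier, and a confusion matrix summarizes the result. The feature set, toy graphs, and labels are illustrative assumptions; the paper's actual features may differ.</p>
<pre><code class="language-python">
# Hedged sketch: BPMN models treated as directed graphs and classified as
# correct or erroneous with K-Nearest Neighbors. Features, toy graphs, and
# labels are illustrative assumptions, not the paper's.
import networkx as nx
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

def graph_features(g: nx.DiGraph) -> list:
    degrees = [d for _, d in g.degree()]
    return [
        g.number_of_nodes(),
        g.number_of_edges(),
        max(degrees, default=0),
        sum(1 for n in g.nodes if g.in_degree(n) == 0),   # start-like nodes
        sum(1 for n in g.nodes if g.out_degree(n) == 0),  # end-like nodes
    ]

# Toy stand-ins for graphs parsed from BPMN 2.0 XML; label 1 = structural error
g_ok = nx.DiGraph([("start", "task"), ("task", "end")])
g_bad = nx.DiGraph([("task", "end"), ("end", "task")])    # cycle, no start/end event
graphs, labels = [g_ok, g_bad, g_ok], [0, 1, 0]

X = [graph_features(g) for g in graphs]
clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(confusion_matrix(labels, clf.predict(X)))           # quality metrics follow from this
</code></pre>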
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 46–55 | DOI: 10.20998/2079-0023.2024.02.08
INTELLIGENT TECHNOLOGY FOR SEMANTIC COMPLETENESS ASSESSMENT OF BUSINESS PROCESS MODELS
http://samit.khpi.edu.ua/article/view/320181
<p>In this paper, we present a method for comparing business process models with their textual descriptions, using a semantic approach based on the SBERT (Sentence-Bidirectional Encoder Representations from Transformers) model. Business process models, especially those created with the BPMN (Business Process Model and Notation) standard, are crucial for optimizing organizational activities. Ensuring the alignment between these models and their textual descriptions is essential for improving business process accuracy and clarity. Traditional set similarity methods, which rely on tokenization and basic word matching, fail to capture deeper semantic relationships, leading to lower accuracy in comparison. Our approach addresses this issue by leveraging the SBERT model to evaluate the semantic similarity between the text description and the BPMN business process model. The experimental results demonstrate that the SBERT-based method outperforms traditional methods based on similarity measures by an average of 31%, offering more reliable and contextually relevant comparisons. The ability of SBERT to capture semantic similarity, including identifying synonyms and contextually relevant terms, provides a significant advantage over simple token-based approaches, which often overlook nuanced language variations. The experiments also show that the proposed approach improves the alignment between textual descriptions and the corresponding business process models. This advancement improves the overall quality and accuracy of business process documentation, leading to fewer errors, clearer business process descriptions, and better communication between all stakeholders. The overall results obtained in this study contribute to enhancing the quality and consistency of BPMN business process models and related documentation.</p>Oleksandr Rudskyi, Andrii Kopp, Tetiana Goncharenko, Igor Gamayun
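<p>A minimal sketch of the semantic comparison step, assuming the sentence-transformers library; the checkpoint name and the way BPMN element labels are flattened into text are illustrative choices, not the paper's configuration.</p>
<pre><code class="language-python">
# Minimal sketch: SBERT embeddings and cosine similarity between a textual
# description and flattened BPMN element labels. Checkpoint and text
# preparation are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

description = "The clerk checks the invoice and then approves the payment."
bpmn_labels = "Check invoice. Approve payment."   # element labels flattened to text

emb = model.encode([description, bpmn_labels], convert_to_tensor=True)
similarity = util.cos_sim(emb[0], emb[1]).item()  # close to 1.0 means strong alignment
print(f"semantic similarity: {similarity:.3f}")
</code></pre>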
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 56–65 | DOI: 10.20998/2079-0023.2024.02.09
APPLICATION OF OPTICAL CHARACTER RECOGNITION AND MACHINE LEARNING TECHNOLOGIES TO CREATE AN INFORMATION SYSTEM FOR AUTOMATIC VERIFICATION OF OFFLINE TESTING
http://samit.khpi.edu.ua/article/view/320182
<p>Testing and monitoring the knowledge of students or other learners is an essential part of the learning process in any field. Teachers often spend considerable time grading large volumes of standardized tests. While online testing systems have been developed to streamline this process, offline paper tests remain popular as they do not require access to computers, electricity, or a stable internet connection. Offline testing is often considered one of the most representative methods for assessment, but it leads to repetitive work for teachers during the grading process. To save time, some educators use test sheets to structure responses, simplifying grading tasks. Consequently, developing a system that automates the grading of offline tests has become increasingly relevant. The purpose of this research was to develop an information system (web platform) that simplifies the offline test grading process using optical character recognition technologies powered by machine learning algorithms. The object of this research is the processes and functionality involved in creating an information system for the automated grading and evaluation of offline tests. The scientific novelty lies in integrating machine learning algorithms with modified image processing algorithms to create a system capable of analyzing and grading a wide range of offline test tasks, including open-ended, closed-ended, sequence identification, and multiple-correct-answer questions. The practical significance of this research is the development of a web platform that automates offline test grading through optical character recognition and machine learning technologies, reducing the time teachers spend on grading, enabling analysis and improvement of educational programs, supporting various test types, and promoting scientific and technological advancement in education. The developed system can recognize handwritten text from photos, create an array of responses, and compare them to the answers provided by the teacher. This approach significantly reduces the time teachers spend on grading tests. For user convenience, a minimalist interface was created, granting access to all main system functions with intuitive controls. A detailed description of the developed algorithms and machine learning models is provided. This project offers broad potential for further development, including integration with other educational platforms, enhancements in recognition technology, and system scalability.</p>Vadym Ziuziun, Nikita Petrenko
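<p>A hedged sketch of the grading step that follows recognition: the array of recognized responses is compared with the teacher's answer key. The normalization rules and scoring scheme are illustrative assumptions, not the system's actual logic.</p>
<pre><code class="language-python">
# Hedged sketch of the post-OCR grading step; normalization and scoring
# are illustrative assumptions.
def grade(recognized: list, answer_key: list) -> float:
    """Return the share of correct answers, ignoring case and spacing."""
    norm = lambda s: "".join(s.split()).lower()
    correct = sum(norm(a) == norm(b) for a, b in zip(recognized, answer_key))
    return correct / len(answer_key)

recognized = ["B", "a, c", "Paris "]                  # e.g. output of the OCR stage
answer_key = ["B", "A,C", "paris"]
print(f"score: {grade(recognized, answer_key):.0%}")  # score: 100%
</code></pre>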
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 66–75 | DOI: 10.20998/2079-0023.2024.02.10
MATHEMATICAL RATIONALE FOR CREATING AN APPLICATION FOR CONDUCTING RANDOM MEETINGS «COFFEE BREAK»
http://samit.khpi.edu.ua/article/view/320183
<p>Modern society is facing an increasing trend of social isolation, as people increasingly rely on social media for interaction instead of face-to-face communication. This lack of in-person contact often leads to feelings of loneliness and disconnection. This study proposes the concept of a mobile application, CoffeeBreak, designed to counteract these trends by offering users a platform to arrange brief, in-person meetings, such as a quick coffee chat. By encouraging users to meet in real life, the application aims to foster meaningful social connections and combat the sense of isolation prevalent in today’s digital world. The core innovation of CoffeeBreak lies in its unique approach to matchmaking. Instead of presenting users with an overwhelming array of choices, the app offers a single match within a specified timeframe, thus addressing the common issue of decision paralysis that can arise when users are presented with too many options. By simplifying the process, CoffeeBreak allows users to spend less time making selections and more time connecting with others. This approach is inspired by practices adopted within large companies, where employees use bots in work chat groups to find a partner for a short meeting. These interactions help raise awareness about the activities in other departments and foster informal and professional connections. Expanding this practice to a broader societal level, CoffeeBreak is intended to provide individuals with the opportunity to network beyond their immediate professional circles. This research has established a conceptual system model and developed the mathematical frameworks necessary to support this type of meeting arrangement. Specifically, the study has defined the concept of the CoffeeBreak mobile application, outlined the system model with detailed subsystems and environment interactions, and formulated mathematical models to form the basis of the candidate selection algorithm. The model ensures that users are matched in a way that promotes engagement, as each participant can be assured that their matched partner is equally motivated for the encounter. As the application continues to evolve, it can incorporate additional scheduling criteria to enhance the quality of matches and distribution. For example, if a user attends a meeting within the first two days, they could unlock the potential for additional matches by the end of the week. Ultimately, CoffeeBreak aims to broaden users' horizons, help them form new professional and informal connections, and enhance their social skills. This study’s findings lay the groundwork for a new tool that encourages in-person interactions, enabling individuals to expand their social networks in a balanced and purposeful manner.</p>Vadym Ziuziun, Daniil Osoka
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 76–80 | DOI: 10.20998/2079-0023.2024.02.11
PRIVACY MODELS AND ANONYMIZATION TECHNIQUES FOR TABULAR HEALTHCARE DATA
http://samit.khpi.edu.ua/article/view/320184
<p>In today's world, issues of privacy and personal data protection are becoming extremely relevant, especially in the healthcare field, where the use of large volumes of data for research is becoming increasingly common. The use of personal data is regulated by relevant laws that require data anonymization to minimize the risks of identifying individuals. Anonymization is a process that allows the use of sensitive data without the risk of disclosing personal information while maintaining its utility. This article discusses the main privacy models and anonymization techniques used to protect tabular healthcare data. Privacy models include <em>k</em>-anonymity, <em>l</em>-diversity, and <em>t</em>-closeness. The <em>k</em>-anonymity model ensures that any combination of quasi-identifiers is shared by at least <em>k</em> records. The <em>l</em>-diversity model complements <em>k</em>-anonymity by requiring at least <em>l</em> unique combinations of sensitive attribute (SA) values in each equivalence class. The <em>t</em>-closeness model considers the distribution of these sensitive attribute values, ensuring that the distance between the SA distribution in the equivalence class and the overall distribution does not exceed a specified threshold. Anonymization techniques include generalization, suppression, relocation, permutation, perturbation, slicing, differential privacy, and synthetic data. Generalization reduces the precision of quasi-identifiers. Suppression removes certain values from the dataset to improve its statistical characteristics. Relocation changes a limited number of values in the data to enhance protection. Permutation mixes the values of quasi-identifiers between records while preserving the overall statistical features of the dataset. Perturbation adds noise to the data, increasing privacy. The idea of differential privacy also involves adding noise, but this is done at the query processing stage. Generating synthetic data allows the creation of new datasets that are similar in characteristics to the original data.</p>Denys Kalinin, Valerii Severyn, Mykola Bezmenov
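<p>The <em>k</em>-anonymity and <em>l</em>-diversity checks described above reduce to simple group statistics over the quasi-identifiers; the pandas sketch below illustrates both. Column names and data are illustrative.</p>
<pre><code class="language-python">
# Minimal pandas sketch of the k-anonymity and l-diversity checks;
# column names and data are illustrative.
import pandas as pd

df = pd.DataFrame({
    "age_group": ["30-40", "30-40", "30-40", "40-50", "40-50"],
    "zip3":      ["610**", "610**", "610**", "611**", "611**"],
    "diagnosis": ["flu", "asthma", "flu", "diabetes", "flu"],  # sensitive attribute
})

def is_k_anonymous(df: pd.DataFrame, quasi_ids: list, k: int) -> bool:
    # every combination of quasi-identifier values occurs in at least k records
    return df.groupby(quasi_ids).size().min() >= k

def is_l_diverse(df: pd.DataFrame, quasi_ids: list, sensitive: str, l: int) -> bool:
    # every equivalence class contains at least l distinct sensitive values
    return df.groupby(quasi_ids)[sensitive].nunique().min() >= l

print(is_k_anonymous(df, ["age_group", "zip3"], k=2))             # True: classes of size 3 and 2
print(is_l_diverse(df, ["age_group", "zip3"], "diagnosis", l=2))  # True: 2 distinct values each
</code></pre>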
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 81–85 | DOI: 10.20998/2079-0023.2024.02.12
SOFTWARE IMPLEMENTATION USING TRANSFORMER WITH OPTICAL FLOW AND GEONET FOR IDENTIFYING PARAMETERS OF DYNAMIC OBJECTS
http://samit.khpi.edu.ua/article/view/320185
<p>Today, interdisciplinary research in computer science and engineering has become increasingly relevant due to the growing demand for real-time data processing in object detection and tracking applications. The identification of dynamic object parameters plays a crucial role in various domains such as autonomous transportation systems, robotics, and surveillance. Effective automated acquisition and processing of video data represent a promising field for scientists and practitioners working in these interconnected disciplines. This research aims to enhance object detection and tracking processes by developing and implementing an information technology solution based on modern machine learning methods, including DETR (Detection Transformer), Optical Flow, and GeoNet. The research methodology involves designing software using the Python programming language and modern libraries and frameworks for image and video processing. The DETR method was employed for precise object detection within video frames, Optical Flow was used to determine the direction and velocity of object movement, and GeoNet provided depth and geometric scene analysis. The proposed technology was tested on diverse video recordings depicting complex scenarios with dynamic conditions, such as varying lighting, object occlusions, and rapid motion changes. The results demonstrate the high accuracy and reliability of the proposed approach for identifying dynamic object parameters under various conditions. The integration of these methods significantly improved the precision and robustness of the detection and tracking system, particularly in challenging environments or low-quality video scenarios. The study concludes that the proposed information technology is effective and can be applied in practical fields such as autonomous systems, robotics, and video surveillance.</p>Oleksii Kondratov, Olena Nikulina
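<p>A hedged sketch of two of the three stages (the GeoNet depth analysis is omitted): DETR for per-frame object detection via the transformers library and Farneback dense optical flow via OpenCV. The model checkpoint, confidence threshold, and flow parameters are illustrative, not the paper's configuration.</p>
<pre><code class="language-python">
# Hedged sketch: DETR detection plus dense optical flow; the GeoNet depth
# branch is omitted. Checkpoint and parameters are illustrative assumptions.
import cv2
import torch
from transformers import DetrImageProcessor, DetrForObjectDetection

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
detector = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

def detect(frame_rgb):
    """Run DETR on one RGB frame; returns boxes, labels, and scores."""
    inputs = processor(images=frame_rgb, return_tensors="pt")
    with torch.no_grad():
        outputs = detector(**inputs)
    h, w = frame_rgb.shape[:2]
    return processor.post_process_object_detection(
        outputs, threshold=0.8, target_sizes=[(h, w)])[0]

def motion(prev_gray, cur_gray):
    """Farneback dense optical flow: per-pixel (dx, dy) between two frames."""
    return cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
</code></pre>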
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 86–91 | DOI: 10.20998/2079-0023.2024.02.13
STUDY OF COMPATIBILITY OF METHODS AND TECHNOLOGIES OF HIGH-LEVEL PROTOCOLS AND ERROR-CORRECTING CODES
http://samit.khpi.edu.ua/article/view/320187
<p>Since the year 2000, the fields of error-correction codes and Virtual Private Networks (VPNs) have undergone significant advancements driven by technological demands for higher reliability and security in communication systems. In error-correction codes, the development of turbo codes and Low-Density Parity-Check (LDPC) codes reached new heights, with LDPC codes being adopted in standards like 5G and Wi-Fi 6 for their near-Shannon-limit performance. This period saw groundbreaking contributions from researchers like David MacKay and Radford Neal, who refined LDPC algorithms, and Erdal Arıkan, who introduced polar codes in 2008. Polar codes have since been integrated into 5G systems due to their efficiency and low complexity, marking a milestone in modern coding theory. Advances in decoding methods, such as belief propagation and successive cancellation, further enhanced the utility of these codes in practical applications. Parallel to these developments, VPN technology evolved in response to the growing need for secure and private communication in an increasingly interconnected world. Enhanced encryption protocols such as IPsec and OpenVPN became widespread, supported by innovations in cryptography. Researchers like Hugo Krawczyk contributed to robust authentication mechanisms, such as the HMAC and IKEv2 protocols, ensuring the integrity and confidentiality of VPN tunnels. Meanwhile, the development of WireGuard in the mid-2010s, spearheaded by Jason A. Donenfeld, introduced a lightweight and highly secure VPN protocol, revolutionizing the way modern VPNs operate. These advancements addressed the escalating cyber threats and facilitated the secure exchange of data across global networks. The importance of studying error-correction codes and VPNs in the modern era cannot be overstated. Error-correction codes are integral to overcoming the challenges of high-noise environments, enabling reliable communication in technologies ranging from space exploration to massive IoT networks. Simultaneously, VPNs remain critical for preserving user privacy, securing corporate networks, and protecting sensitive data in the face of sophisticated cyberattacks. Emerging technologies like quantum computing and artificial intelligence introduce both opportunities and threats, necessitating continuous innovation in these fields. Exploring quantum error-correction codes and post-quantum cryptographic protocols represents a vital area for future research. By addressing these challenges, scientists and engineers can ensure the resilience and security of communication systems in an increasingly digital and interconnected world.</p>Vladyslav Sharov, Olena Nikulina
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 92–97 | DOI: 10.20998/2079-0023.2024.02.14
MODIFIED SOFTWARE DEPLOYMENT ALGORITHM USING MULTI-THREADING
http://samit.khpi.edu.ua/article/view/320188
<p>The article presents a modified deployment algorithm for software systems using multithreading in AWS CodeBuild, aimed at optimizing build time and reducing computational resource costs in cloud environments. The key stages of the build process, including parallel test execution, task allocation analysis, and resource management, were modeled using finite automata, timed automata, and Petri nets. Particular attention was given to identifying and addressing the limitations of AWS CodeBuild's standard parallelization mechanisms, which can lead to inefficient resource utilization and extended build durations. The study revealed that AWS CodeBuild's default mechanisms are not always capable of optimally leveraging system resources, especially when handling large software projects with numerous dependencies. To overcome these limitations, the use of Python's multithreading capabilities was proposed as a convenient tool for extending the platform's base functionality. The proposed approach enabled flexible thread management and task distribution at the user scenario level, significantly reducing overall build time. Experimental results demonstrated substantial reductions in build execution time compared to the default AWS CodeBuild settings, confirming the effectiveness of the proposed algorithm in enhancing performance and ensuring high scalability for build processes in cloud environments. The developed algorithm is particularly relevant for large software projects requiring frequent iterative builds and testing. The findings can be utilized to improve automated deployment processes and computational resource management in cloud ecosystems.</p>Nataliia Khatsko, Mykola Sliepushkov, Kyrylo Khatsko, Yevhenii Shebanov
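<p>An illustrative sketch of the core idea, not the paper's exact algorithm: independent test shards launched from a single build phase through Python threads rather than the platform's default sequential execution. The shard layout and commands are assumptions.</p>
<pre><code class="language-python">
# Illustrative sketch: parallel test shards via Python threads inside one
# build phase. Shard paths and the pytest command are assumptions.
import subprocess
from concurrent.futures import ThreadPoolExecutor

SHARDS = ["tests/unit", "tests/api", "tests/ui"]  # independent test groups

def run_shard(path: str) -> int:
    # Each thread spawns a subprocess, so CPython's GIL is not a bottleneck
    return subprocess.run(["pytest", path], check=False).returncode

with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
    codes = list(pool.map(run_shard, SHARDS))

raise SystemExit(max(codes, default=0))  # fail the build if any shard failed
</code></pre>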
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 98–103 | DOI: 10.20998/2079-0023.2024.02.15
ORCHESTRATION OF CUSTOMER INTERACTION WORKFLOWS IN ENTERPRISE APPLICATIONS
http://samit.khpi.edu.ua/article/view/320189
<p>Process automation is an important factor in the development of modern corporate systems, in particular e-commerce systems. One of the tasks that must be solved during the development of such systems is the orchestration of customer interaction processes, which are asynchronous in nature. Modern popular orchestration frameworks (Airflow, for example) are not well adapted to simultaneously waiting for a response from the client across a large number of workflows. This leads to unnecessary costs of computing resources and a decrease in the economic efficiency of the system. This paper considers the task of building a system for orchestrating customer interaction workflows based on an event-driven approach. Instead of sequential, centralized execution of the workflow's graph model, it is proposed to represent the workflow as a sequence of events and the system's reactions to them, which eliminates the need to explicitly wait for a response from the client. In such a model, workflow operations are performed during event processing, and the transition to the next operation occurs by sending a new message with a description of that operation. Centralized and distributed approaches to performing transitions between workflow operations are considered, and the advantages and disadvantages of each are shown. The implementation of waiting for a client response is also considered: it is proposed to store the message, keyed by the client session identifier, in a specialized storage before starting the interaction with the client, and, after receiving data from the client, to add the data to the message and send it back to the message queue to continue the workflow. Since handlers may be written in different programming languages, a JSON-based description of the workflow model is proposed. One approach to building such a description format is presented and demonstrated on an example workflow. The results of the study can be useful when creating systems for orchestrating workflows of interaction with the client.</p>Viacheslav Kolbasin
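<p>A minimal sketch of the event-driven execution model described above, with an in-memory queue standing in for the message broker: the workflow is a mapping from each operation to its successor, a handler processes one message and re-enqueues the next, and waiting for the client parks the message in a session-keyed store instead of blocking a worker. All names and the model format are illustrative.</p>
<pre><code class="language-python">
# Minimal sketch of event-driven workflow orchestration; the model format,
# queue, and operation names are illustrative assumptions.
import queue

WORKFLOW = {  # JSON-style model: operation -> next operation
    "create_order": "await_client_reply",
    "await_client_reply": "confirm_order",
    "confirm_order": None,  # terminal operation
}

bus = queue.Queue()
pending = {}  # session_id -> parked message awaiting client data

def handle(msg: dict) -> None:
    op = msg["operation"]
    if op == "await_client_reply":
        pending[msg["session_id"]] = msg     # park instead of blocking a worker
        return
    print(f"executing {op} for session {msg['session_id']}")
    if WORKFLOW[op]:
        bus.put({**msg, "operation": WORKFLOW[op]})

def on_client_reply(session_id: str, data: dict) -> None:
    msg = pending.pop(session_id)            # resume the parked workflow
    bus.put({**msg, "client_data": data, "operation": WORKFLOW[msg["operation"]]})

bus.put({"operation": "create_order", "session_id": "s1"})
while not bus.empty():
    handle(bus.get())
on_client_reply("s1", {"approved": True})
while not bus.empty():
    handle(bus.get())
</code></pre>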
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 104–107 | DOI: 10.20998/2079-0023.2024.02.16
INTELLIGENT INFORMATION TECHNOLOGY FOR RAPID CLASSIFICATION UNDER CONDITIONS OF OVERLAPPING CLASSES
http://samit.khpi.edu.ua/article/view/320190
<p>The subject of this research is the process of rapid data classification under conditions of overlapping classes. Rapid classification is performed in real-time or near-real-time mode. The aim of the work is to develop an intelligent information technology for rapid classification in online and nearline modes under conditions of overlapping classes. Achieving this goal allows for the consideration of non-stationarity in input data and class imbalance under conditions of streaming data. The tasks of compensating for noise in input data and changes in input data distribution due to non-stationarity, as well as the task of compensating for class imbalance, are interconnected when classifying under conditions of overlapping classes and require the development of a comprehensive solution. To achieve the goal, the following tasks are addressed: structuring approaches to classification of overlapping classes considering non-stationarity in input data and class imbalance; developing an intelligent technology for classification in online and nearline modes. An intelligent information technology for rapid classification under conditions of overlapping classes is proposed. The technology includes stages of preliminary classification considering noise in input data, classification considering class imbalance, and classification considering changes in input data patterns. The technology involves sequential use of a neo-fuzzy system, an adaptive neuro-fuzzy system, and a multilayer neural network with kernel bell-shaped activation functions. The neo-fuzzy system uses neo-fuzzy neurons, ensuring resistance to noise. The adaptive neuro-fuzzy system considers distances between input data and class centers in feature space, ensuring classification under class imbalance conditions. The multilayer neural network with kernel bell-shaped activation functions uses a recurrent learning algorithm, ensuring adaptation to new data with a new distribution. The technology enables rapid iterative refinement of classification decisions according to changes in input data characteristics.</p>Yevhenii Bodiansky, Olga Chala
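<p>A minimal sketch of a kernel bell-shaped activation of the kind the multilayer network above relies on; the generalized bell function is one common choice, and its parameters here are illustrative rather than the paper's.</p>
<pre><code class="language-python">
# Minimal sketch: generalized bell function as a kernel bell-shaped
# activation. The exact kernel and parameters in the paper may differ.
import numpy as np

def bell(x: np.ndarray, c: float = 0.0, a: float = 1.0, b: float = 2.0) -> np.ndarray:
    """Generalized bell function: equals 1 at the center c, smoothly decays to 0."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

x = np.linspace(-3, 3, 7)
print(np.round(bell(x), 3))  # peaks at x = c, decays symmetrically
</code></pre>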
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 108–112 | DOI: 10.20998/2079-0023.2024.02.17
SITUATIONAL MODEL OF A MEDICAL BUSINESS PROCESS
http://samit.khpi.edu.ua/article/view/320148
<p>The subject of the research is medical business processes. The aim of the work is to develop a situational approach to modeling a generalized medical business process. Achieving this goal makes it possible to provide a specified level of medical services within the existing resource constraints of the medical business process. To achieve the goal, the following tasks are solved: structuring medical business processes considering the differences in treatment in the clinic, care, and the full process of patient treatment; development of a situational model of the medical business process. The structuring of medical business processes has been carried out. It is shown that such processes consist of a clinical pathway, care pathway, and disease treatment pathway. The clinical pathway is implemented within a single medical institution. The care pathway defines a comprehensive description of the sequence of care and treatment. The complete disease treatment pathway integrates various healthcare institutions. The treatment pathway includes the clinical pathway and care pathway as subprocesses. A situational model of a generalized medical business process is proposed, consisting of a sequence of situations. Each situation involves a choice considering resource, financial, and temporal constraints, and the subsequent execution of a sequence of actions of the medical business process. The model, in accordance with the presented structuring of medical business processes, at the top level of representation contains phases of primary medical care, outpatient treatment, clinical pathway, and discharge and rehabilitation. The model creates conditions for choosing an individualized treatment process considering the patient's needs and temporal and resource constraints. The typical sequence of actions of the medical business process is determined at the level of a set of situations, and the choice of alternatives is made within the situation. The sequence of situations sets the general standard of care or treatment, and individualization is performed within individual situations, taking into account the patient's needs and financial constraints.</p>Kostiantyn Petrov, Taras Chalyi
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 42–45 | DOI: 10.20998/2079-0023.2024.02.07
A SOFTWARE SOLUTION FOR REAL-TIME COLLECTION AND PROCESSING OF MEDICAL DATA FOR EPILEPSY PATIENTS
http://samit.khpi.edu.ua/article/view/320139
<p>The rapid development of computer technologies has significantly impacted various sectors, including healthcare. The ability to collect, process, and visualize medical data in real time is becoming increasingly important, especially for managing chronic conditions such as epilepsy. This paper presents a web-based application designed for real-time monitoring of health indicators, enabling healthcare professionals to track patient data efficiently. The system automates the process of collecting data from fitness trackers, transmitting it via a mobile device to a server, and visualizing it in a web application. Its architecture employs a thin-client model with Node.js for backend logic and React.js for the user interface, ensuring scalability and responsiveness. Key features include real-time data visualization, historical trend analysis, and the ability to export health metrics for further examination. The system architecture follows a modular approach, with a clear separation of concerns between the client-side, server-side, and database components. MongoDB is used as the database provider, offering flexibility in handling large volumes of health data. The system underwent extensive testing in two stages. During the first stage, real-world data collection demonstrated an average data transmission time of less than 112 ms, ensuring compliance with real-time requirements. In the second stage, stress testing with up to 100 simultaneous users showed an average server response time of 145.8 ms and a 95th percentile response time of 167.1 ms. These results confirm the system’s robustness and suitability for deployment in medical facilities. Future work aims to enhance the system by incorporating advanced real-time alert mechanisms and additional health metrics, such as oxygen saturation and activity levels, to provide comprehensive monitoring. The presented solution showcases the potential of integrating modern web technologies into healthcare, contributing to improved patient outcomes and more efficient workflows for medical professionals.</p>Andrii Kopp, Iryna Liutenko, Viktor Yamburenko, Andrii Pashniev
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 30–37 | DOI: 10.20998/2079-0023.2024.02.05
DESIGN OF CRM INFORMATION SYSTEM IN THE FIELD OF DENTAL ADMINISTRATION
http://samit.khpi.edu.ua/article/view/320145
<p>The article examines the features of optimization and automation of administrative services in dental clinics. Based on the analysis of modern approaches to the management of medical institutions, the main directions for improving the work of administrative personnel have been identified, including the implementation of CRM systems, the use of electronic document management, and digital solutions for managing patient data. An important aspect is ensuring high quality administrative services according to key criteria, such as efficiency, accuracy, and availability of information. Studies show that the efficiency of the clinic largely depends on the organization of processes related to patient registration, doctor's schedule management, medical data storage, and communication with clients. However, numerous challenges arise in the process of work. At the organizational level, the main problems are errors of administrative personnel and insufficient integration of modern technologies. At the technical level, there are problems associated with data transmission delays, software incompatibility, or risks of information loss due to technical failures. To solve these problems, integrated management systems have been implemented and existing processes have been improved. At the same time, it is important to maintain a balance between the complexity of the systems and their functionality. Overly complex solutions can create barriers for staff, and overly simple ones may not provide the necessary level of efficiency. The article proposes an approach to the design of administrative services that combines the use of modern technologies, such as CRM systems, patient registration automation, and workflow analysis. This makes it possible to increase management efficiency and ensure high-quality patient care.</p>Mariia Kozulia, Oleksandr Soldatko
Copyright (c) 2025
https://creativecommons.org/licenses/by/4.0/
2025-01-04 | 2 (12) | pp. 38–41 | DOI: 10.20998/2079-0023.2024.02.06