Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies http://samit.khpi.edu.ua/ <p><strong>Collection of scientific papers</strong></p> <p><img style="width: 250px;" src="http://samit.khpi.edu.ua/public/journals/49/cover_issue_16936_uk_UA.jpg" alt="" /></p> <p><strong>Year of foundation:</strong> 1961 (Bulletin of KhPI), 1979 (Series)</p> <p><strong>Aims and Scope:</strong> A peer-reviewed open access scientific edition that publishes new scientific results in the field of system analysis and management of complex systems, based on the application of modern mathematical methods and advanced information technology. The edition publishes works related to artificial intelligence, big data analysis, and modern methods of high-performance computing in distributed decision support systems.</p> <p><strong>Target audience:</strong> Scientists, teachers of higher education, post-graduate students, students, and specialists in the fields of systems analysis, management, and computer technology.</p> <p><strong>ISSN:</strong> <a href="https://portal.issn.org/resource/ISSN/2079-0023">2079-0023</a> (Print)</p> <p><strong>ISSN:</strong> <a href="https://portal.issn.org/resource/ISSN/2410-2857">2410-2857</a> (Online)</p> <p>Media identifier <strong><a href="https://drive.google.com/file/d/1POp1f3OPs6wWTgpUZXdVVKlUSORms-g1/view?usp=sharing">R30-01544</a></strong>, according to the <a href="https://drive.google.com/file/d/1o3jlce-hW2415D2fiaa7gbrj307yvKf3/view?usp=share_link"><strong>decision of the National Council of Ukraine on Television and Radio Broadcasting of 16.10.2023 No. 1075</strong></a>.</p> <p><strong><a href="https://drive.google.com/open?id=1BJybDTz3S9-ld7mUSnDpBeQzDBH61OO9">Order of the Ministry of Education and Science of Ukraine No. 1643 of December 28, 2019</a></strong> "On approval of decisions of the Attestation Board of the Ministry on the activity of specialized scientific councils of December 18, 2019", Annex 4, <strong>"Bulletin of the National Technical University "KhPI".
Series: System Analysis, Control and Information Technology" is added to category B</strong> of the "List of scientific professional publications of Ukraine in which the results of the dissertation works for obtaining the scientific degrees of doctor of sciences, candidate of sciences, and doctor of philosophy can be published".</p> <p><strong>Indexing</strong> in Index Copernicus, DOAJ, Google Scholar, and <a href="http://samit.khpi.edu.ua/indexing">other systems</a>.</p> <p><strong>DOI prefix:</strong> <a href="https://doi.org/10.20998">https://doi.org/10.20998</a></p> <p>The edition publishes scientific works in the following fields:</p> <ul> <li>F1 (113) - Applied mathematics</li> <li>F2 (121) - Software engineering</li> <li>F3 (122) - Computer science</li> <li>F4 (124) - System analysis and data science</li> <li>F6 (126) - Information systems and technologies</li> <li>G7 (151/174) - Automation, computer-integrated technologies and robotics</li> </ul> <p><strong>Frequency:</strong> Biannual - June and December issues (deadlines for submission of manuscripts: by May 15 and November 15 of each year; manuscripts submitted late may be considered separately).</p> <p><strong>Languages:</strong> Ukrainian, English (mixed languages).</p> <p><strong>Founder and publisher:</strong> National Technical University "Kharkiv Polytechnic Institute" (<a href="https://www.kpi.kharkov.ua/eng/">University website</a>, <a href="https://ndch.kpi.kharkov.ua/en/bulletin-of-ntu-khpi/">Scientific and Research Department</a>).</p> <p><strong>ROR ID:</strong> <a href="https://ror.org/00yp5c433">https://ror.org/00yp5c433</a></p> <p><strong>USREOU:</strong> 02071180</p> <p><strong>Chief editor:</strong> <a href="https://www.scopus.com/authid/detail.uri?authorId=57202891828">M. D. Godlevskyi</a>, D. Sc., Professor, National Technical University "KhPI".</p> <p><strong>Editorial board</strong> staff is available <a href="http://samit.khpi.edu.ua/editorialBoard">here</a>.</p> <p><strong>Address of the editorial office:</strong> 2, Kyrpychova str., 61002, Kharkiv, Ukraine, NTU "KhPI", Department of System analysis and information-analytical technologies.</p> <p><strong>Responsible secretary:</strong> <a href="https://www.scopus.com/authid/detail.uri?authorId=6507139684">M. I.
Bezmenov</a>, PhD, Professor, National Technical University "KhPI".</p> <p><strong>Phone numbers:</strong> +38 057 707-61-03, +38 057 707-66-54</p> <p><strong>E-mail:</strong> mykola.bezmenov@khpi.edu.ua</p> <p>This journal practices and supports a policy of open access according to the <strong><a href="https://www.budapestopenaccessinitiative.org/read">Budapest Open Access Initiative (BOAI)</a></strong>.</p> <p><img src="http://samit.khpi.edu.ua/public/site/images/koppam/open-access.png" alt="Open Access" /></p> <p>Published articles are distributed under the terms and conditions of the <strong><a href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution (CC BY)</a></strong>.</p> <p><img src="http://samit.khpi.edu.ua/public/site/images/koppam/cc-by.png" alt="CC-BY" /></p> <p>The editorial board adheres to international standards of publishing ethics and the recommendations of the <strong><a href="https://publicationethics.org/resources/guidelines/principles-transparency-and-best-practice-scholarly-publishing">Committee on Publication Ethics (COPE)</a></strong> on the Principles of Transparency and Best Practice in Scholarly Publishing.</p> <p><img src="http://samit.khpi.edu.ua/public/site/images/koppam/sm-cope.png" alt="" width="74" height="50" /></p> en-US <p><span>Authors who publish with this journal agree to the following terms:</span></p><ul><li>Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a <a href="http://creativecommons.org/licenses/by/3.0/" target="_new">Creative Commons Attribution License</a> that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.</li><li>Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.</li><li>Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See <a href="http://opcit.eprints.org/oacitation-biblio.html" target="_new">The Effect of Open Access</a>).</li></ul> mykola.bezmenov@khpi.edu.ua (Безменов Микола Іванович (Mykola Bezmenov)) andrii.kopp@khpi.edu.ua (Копп Андрій Михайлович (Andrii Kopp)) Mon, 29 Dec 2025 16:29:53 +0200 OJS 3.2.1.2 http://blogs.law.harvard.edu/tech/rss 60 PRACTICAL AND THEORETICAL ASPECTS OF MATHEMATICAL MODELING OF THE OPTIMIZATION PROCESS OF MANAGING MULTIGROUP BEHAVIOR OF AGENTS IN DISTRIBUTED SYSTEMS BASED ON THE GWO ALGORITHM http://samit.khpi.edu.ua/article/view/348447 <p>This work focuses on the applied aspects and features of the gray wolf pack optimizer (the GWO algorithm) in the context of its application in multi-agent distributed systems. The paper presents scientific material on the authors' own ideas, assumptions, and hypotheses, proposed for analysis and further verification in the fields of computer science, optimization methods, and the solution of applied mathematical and engineering problems. The object of the research is the process of organizing distributed systems based on computational intelligence.
The subject of the research is the organization of algorithmic interaction in multi-agent intelligent systems in the context of mathematical modeling of the optimization process of multi-group behavior management. The goal of the research is to investigate the key practical and theoretical aspects and specifics of applying the gray wolf pack optimizer (the GWO algorithm) and its modifications, and to study the features of modeling the behavior of the intelligent agents of a gray wolf pack under the guidance of computational intelligence. The methods used are analysis and synthesis, abstraction and concretization, comparison and analogy, mathematical modeling, and scientific search experiment. The results obtained: 1) solid theoretical material in the field of applied use of the GWO algorithm was analyzed; 2) the key tactical and strategic techniques of mathematical modeling of the behavior of intelligent agents were analyzed; 3) general approaches to mathematical modeling of the multi-group interaction of self-organized multi-agent formations were formed; 4) the problems of coordination and agent interaction in a multi-agent distributed system were considered and analyzed; 5) the applied use of multi-agent systems in problems of science, engineering, and computer and robotic systems was considered; 6) the main limitations of applying the gray wolf pack algorithm (GWO) were identified. The concept of mathematical modeling of the gray wolf pack algorithm (GWO) was further developed using the example of separately selected tactical and strategic techniques for organizing a wolf pack as a multi-group multi-agent distributed system. Scientific novelty: a new way is proposed to solve selected, already solved optimization problems (particular problems of optimally packing spherical objects into a limited container) that are listed in the paper. The main idea of the paper is to increase the iteration speed and accuracy of the search process by using a heuristic swarm intelligence algorithm known as the gray wolf pack optimizer (GWO). We propose the use of a special qualitative and numerical indicator to determine the efficiency of individual wolf pack agents by using evaluation parameters during the optimization process or in real time. New tactical and strategic methods of wolf pack organization in the process of self-organizing into a pack were defined. Practical significance: 1) we put forward an idea-hypothesis, to be verified in subsequent works, which is based on the multi-group multi-agent self-organization of a distributed system on the basis of qualitative and numerical indicators that are planned to be calculated from complex coordination-characteristic methods and heuristic, dynamically changing data.
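<p>A minimal sketch of the canonical grey wolf position update that the modeling above builds on, in Python with NumPy; this is the standard formulation from the literature, not the authors' modified multi-group variant, and all data and parameters here are illustrative:</p> <pre><code>import numpy as np

def gwo_step(wolves, fitness, t, T, lb, ub, rng):
    """One iteration of the canonical grey-wolf position update."""
    order = np.argsort([fitness(w) for w in wolves])
    alpha, beta, delta = (wolves[order[k]] for k in range(3))
    a = 2.0 * (1.0 - t / T)                 # exploration factor: 2 -> 0 over the run
    new = np.empty_like(wolves)
    for i, w in enumerate(wolves):
        candidates = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(w.shape), rng.random(w.shape)
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            candidates.append(leader - A * np.abs(C * leader - w))
        new[i] = np.clip(np.mean(candidates, axis=0), lb, ub)
    return new

rng = np.random.default_rng(0)
pack = rng.uniform(-5.0, 5.0, size=(20, 3))       # 20 agents in 3 dimensions
sphere = lambda x: float(np.sum(x ** 2))          # toy objective to minimize
for t in range(100):
    pack = gwo_step(pack, sphere, t, 100, -5.0, 5.0, rng)
print(min(sphere(w) for w in pack))
</code></pre>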
It is proposed to verify the hypothesis about the newly calculated evaluation parameters of the effectiveness of wolf pack agents; 2) future research is planned to expand the scope of application of the gray wolf pack algorithm (GWO) in combination with our other promising ideas in the field of computational intelligence for solving already known but inefficiently solved optimization problems; 3) in the context of mathematical modeling using the GWO algorithm, it is planned to pay attention to the problem of the artificiality of the principle of generating and distributing the random variable in the stochastic variables of the algorithm; these issues were not sufficiently covered in the referenced works and can be addressed to increase the efficiency of the algorithm on selected problems; 4) the use of the GWO algorithm in the selected optimal spherical-object packing problems is proposed as a new solution for solving them more efficiently. Conclusions: this work considered the main practical and theoretical aspects and the many-sided application of the gray wolf pack optimizer (GWO). The applied use of this algorithm in various scientific and practical problems in the context of mathematical modeling of multi-group multi-agent behavior was considered. The basic principles of the organization of a wolf pack were analyzed, and separate strategies of coordination and hunting by a wolf pack were determined. The key characteristics and problems of the gray wolf pack optimizer (GWO) algorithm were defined, and ways to solve them most efficiently were considered.</p> Bohdan Skrypka, Dmytro Yelchaninov Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348447 Mon, 29 Dec 2025 00:00:00 +0200 MATHEMATICAL MODELING FOR UNIVERSITY RESOURCE OPTIMIZATION BASED ON QS WUR INDICATOR http://samit.khpi.edu.ua/article/view/348448 <p>The article presents a retrospective analysis of the key indicators of the QS World University Rankings for Ukrainian higher education institutions with the aim of establishing realistic development targets for NTU “KhPI.” The dynamics of ranking indicators are examined in comparison with leading Ukrainian universities, which made it possible to determine achievable growth limits for each indicator in the medium-term perspective. Based on the obtained results, a system of target values was formed, which can be used by the university to improve its position in the ranking. A mathematical model for optimizing resource allocation is proposed, aimed at minimizing the deviation between actual and target indicator values. The model is presented as a quadratic programming problem with Boolean variables and linear constraints that reflect the university’s limited resources and the set of possible measures for improving each indicator. Given the nonlinearity of interconnections and the incompleteness of initial data, the use of a genetic algorithm is justified, as it ensures an effective search for optimal resource allocation options under multicriteria conditions. It is additionally emphasized that the proposed approach enables the adaptation of the university’s development strategy to the dynamic conditions of the international educational environment and takes into account changes in the weights of individual indicators in the ranking methodology.
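<p>A toy sketch of the deviation-minimization model just described, with Boolean measure-selection variables and a budget constraint; all numbers are invented, and exhaustive search stands in for the genetic algorithm that the article justifies for realistic problem sizes:</p> <pre><code>import itertools
import numpy as np

# Hypothetical data: 3 ranking indicators, 5 candidate measures (synthetic)
w = np.array([0.5, 0.3, 0.2])             # indicator weights
s = np.array([40.0, 55.0, 30.0])          # current indicator values
t = np.array([48.0, 60.0, 38.0])          # target indicator values
E = np.array([[4, 0, 2, 1, 0],            # E[i, j]: effect of measure j on indicator i
              [0, 3, 1, 0, 2],
              [2, 1, 0, 3, 1]], float)
c = np.array([3.0, 2.0, 1.5, 2.5, 1.0])   # cost of each measure
B = 6.0                                   # resource budget

def deviation(x):
    """Weighted squared deviation between achieved and target values."""
    achieved = s + E @ x
    return float(w @ (t - achieved) ** 2)

best = min((x for x in itertools.product([0, 1], repeat=5)
            if c @ np.array(x) <= B),
           key=lambda x: deviation(np.array(x)))
print("selected measures:", best, "deviation:", deviation(np.array(best)))
</code></pre>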
The model can be used as a tool for scenario analysis and for generating various management decision options. The practical significance of the study lies in the possibility of integrating the obtained results into the university’s strategic planning system. The results form a foundation for creating an information system to support strategic management in higher education institutions. Further research includes experimental validation of the model using retrospective data from NTU “KhPI” and the development of a software tool aimed at enhancing the effectiveness of management decisions and improving the university’s position in international rankings.</p> Marina Grinchenko, Mykyta Shaposhnikov Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348448 Mon, 29 Dec 2025 00:00:00 +0200 ANDROID APPLICATION MODULARIZATION ESTIMATING MODEL http://samit.khpi.edu.ua/article/view/348467 <p>The relevance of the research, the results of which are presented, is determined by the fact that mobile applications have evolved into complex software systems with growing code bases, which complicates development, testing, and support. It is shown that improving the maintainability and scalability of Android application projects is possible by moving from a monolithic to a modular architecture, based either on the list of functions the application should perform or on the architectural features of its construction. To select a modularization option, a classification of approaches to implementing modularization is proposed. Whichever direction of modularization is chosen, it is aimed at reducing the impact of changes in one module on the need to make changes in others. Such dependence between modules can be assessed by determining the cohesion and coupling of the project and of individual modules. To quantitatively assess the advantages of modularization, a mathematical model has been developed that takes into account the balance between the cohesion of modules and the integrity of the project as a whole. The model takes into account the number of modules into which the monolithic architecture will be divided, the level of interaction between the selected modules, as well as the level of their dependence on each other. Expressions are presented for automating the calculation of module-division options. The results of assessing the modularization of an e-commerce Android application project based on different approaches to implementing modularization are presented. The obtained evaluation data allowed us to demonstrate the potential of modularization in reducing project assembly time, minimizing conflicts, and increasing project flexibility, offering a scalable solution for modern mobile development.</p> Dmytro Dvukhhlavov, Olha Pelypets, Alona Dvukhhlavova Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348467 Mon, 29 Dec 2025 00:00:00 +0200 ARCHITECTURAL APPROACH TO DATA PROTECTION IN DISTRIBUTED SUPPLY CHAIN MANAGEMENT SYSTEM USING BLOCKCHAIN NODES http://samit.khpi.edu.ua/article/view/348295 <p>A dockerised blockchain solution can mitigate the low level of distributed-technology adoption in small and medium enterprises. This can be done by designing and implementing an environment that combines the ease of deployment and scalability of containerized systems with the safety and transparency of distributed applications.
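<p>A minimal sketch of the mediator pattern this architecture implements (detailed later in the abstract): hash each client request, anchor the hash on a private Ethereum chain through the standard eth_sendTransaction JSON-RPC call, then forward the request to the original server; the endpoints and the unlocked account are assumptions of the sketch:</p> <pre><code>import json
import hashlib
import requests

NODE = "http://localhost:8545"          # private Ethereum node (assumed)
UPSTREAM = "http://localhost:8000"      # original application server (assumed)
ACCOUNT = "0xYourUnlockedAccount"       # hypothetical unlocked account on the private chain

def record_on_chain(payload: bytes) -> str:
    """Store a hash of the request payload in transaction data on the private chain."""
    digest = hashlib.sha256(payload).hexdigest()
    rpc = {"jsonrpc": "2.0", "id": 1, "method": "eth_sendTransaction",
           "params": [{"from": ACCOUNT, "to": ACCOUNT, "data": "0x" + digest}]}
    resp = requests.post(NODE, json=rpc, timeout=10).json()
    return resp.get("result", "")       # transaction hash on success

def mediate(path: str, body: dict) -> requests.Response:
    """Log the request immutably, then forward it to the original server."""
    raw = json.dumps(body).encode()
    tx_hash = record_on_chain(raw)
    print("anchored in tx:", tx_hash)
    return requests.post(UPSTREAM + path, data=raw,
                         headers={"Content-Type": "application/json"})
</code></pre>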
This paper describes the practical implementation of a dockerized blockchain solution designed as a demonstration for an existing client–server architecture. This solution uses Docker containers to simplify the setup and deployment of a private blockchain network, a mediator server, and a reverse proxy. Implementing this system on a small scale demonstrates the feasibility of integrating blockchain technology into existing business processes without fundamental architectural changes and acknowledges the deployment and maintenance challenges that usually accompany distributed systems built on a private blockchain. The discussed implementation demonstrates that the designed architecture can serve as a reproducible and easily maintainable environment for logging and validating data through an immutable ledger on a smaller scale. The proof of concept successfully validates the core idea. The implementation shows a mediator server intercepting client requests, recording them on a private Ethereum blockchain via a JSON-RPC interface, and then forwarding them to the original server. This confirms the solution’s ability to introduce a trusted, intermediate layer for data immutability. The project demonstrates a working framework for embedding distributed ledger technologies into client–server ecosystems. While the current Proof of Work consensus mechanism presents scalability limitations, the architecture provides a strong foundation for future research, including migrating to more efficient consensus mechanisms and integrating smart contracts.</p> Pavlo Zherzherunov, Olexandr Shmatko Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348295 Mon, 29 Dec 2025 00:00:00 +0200 GRAPH NEURAL NETWORKS FOR TRAFFIC FLOW PREDICTION: INNOVATIVE APPROACHES, PRACTICAL USAGE, AND SUPERIORITY IN SPATIO-TEMPORAL FORECASTING http://samit.khpi.edu.ua/article/view/348298 <p>Traffic flow prediction remains a cornerstone of intelligent transportation systems (ITS), facilitating congestion mitigation, route optimization, and sustainable urban planning. Graph Neural Networks (GNNs) have revolutionized this domain by adeptly modeling the intricate graph-structured nature of traffic networks, where nodes represent sensors or intersections and edges denote spatial relationships. Recent years (2023–2025) have witnessed a surge in scientific innovation, with several novel approaches pushing the boundaries of traffic prediction accuracy and robustness. Notably, hybrid GNN-Transformer architectures have emerged, leveraging the spatial reasoning of GNNs and the temporal sequence modeling power of Transformers to capture long-range dependencies and complex spatiotemporal patterns. Physics-informed GNNs integrate domain knowledge, such as conservation laws and traffic flow theory, directly into the learning process, enhancing interpretability and generalization to unseen scenarios. Uncertainty-aware frameworks, including Bayesian GNNs and ensemble methods, provide probabilistic forecasts, crucial for risk-sensitive applications and adaptive traffic management in volatile urban environments. This article provides a comprehensive guide to implementing GNNs for traffic flow prediction, detailing best practices in data preparation (e.g., graph construction, feature engineering, handling missing data), model training (e.g., loss functions, regularization, hyperparameter tuning), and real-time deployment (e.g., edge computing, latency optimization).
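<p>Illustrating the graph-construction step just mentioned: a thresholded Gaussian kernel over sensor distances (a common choice in the traffic-GNN literature) followed by one symmetric-normalized graph convolution; the sensor network and all sizes are synthetic:</p> <pre><code>import numpy as np

def gaussian_adjacency(dist, sigma, eps=0.1):
    """Thresholded Gaussian kernel: a common way to build W from sensor distances."""
    W = np.exp(-(dist ** 2) / (sigma ** 2))
    W[W < eps] = 0.0
    np.fill_diagonal(W, 0.0)
    return W

def gcn_layer(X, W, Theta):
    """One graph convolution: X' = ReLU(D^-1/2 (W + I) D^-1/2 X Theta)."""
    A = W + np.eye(W.shape[0])            # add self-loops
    d = A.sum(axis=1)
    A_hat = A / np.sqrt(np.outer(d, d))   # symmetric normalization
    return np.maximum(A_hat @ X @ Theta, 0.0)

rng = np.random.default_rng(0)
dist = rng.uniform(0.5, 5.0, size=(8, 8)); dist = (dist + dist.T) / 2
X = rng.normal(size=(8, 12))              # 8 sensors x 12-step speed history
Theta = rng.normal(size=(12, 4)) * 0.1
H = gcn_layer(X, gaussian_adjacency(dist, sigma=2.0), Theta)
print(H.shape)                            # (8, 4): spatially smoothed features
</code></pre>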
We critically compare GNNs to traditional statistical and deep learning methods, highlighting their superior ability to capture non-Euclidean spatial dependencies, adapt to dynamic and evolving network topologies, and seamlessly integrate multi-modal data sources such as weather, events, and sensor readings. Empirical evidence from widely used benchmarks, including PeMS and METR-LA, demonstrates that state-of-the-art GNN models achieve up to 15–20&nbsp;% improvements in accuracy metrics such as Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) over conventional baselines. These gains are attributed to the models’ capacity for dynamic graph learning, attention-based feature selection, and robust handling of heterogeneous data. Drawing on these recent innovations, this synthesis highlights GNNs' pivotal role in fostering resilient, AI-driven traffic systems for future smart cities, setting the stage for next-generation ITS solutions that are adaptive, interpretable, and scalable. In addition to these advancements, the integration of real-time sensor data and external information sources has further improved the responsiveness of traffic prediction models. Modern GNN frameworks are capable of handling large-scale urban networks, making them suitable for deployment in metropolitan areas with complex road infrastructures. The use of transfer learning and domain adaptation techniques allows models trained in one city to be effectively applied to others, reducing the need for extensive retraining. Furthermore, explainable AI approaches within GNNs are gaining traction, enabling stakeholders to understand and trust model decisions in critical traffic management scenarios. Recent research also explores the fusion of GNNs with reinforcement learning, enabling adaptive control strategies for traffic signals and congestion pricing. The scalability of GNNs ensures that they can process data from thousands of sensors in real time, supporting city-wide traffic optimization. Advances in hardware acceleration, such as GPU and edge computing, have made it feasible to deploy these models in latency-sensitive environments. Collaborative efforts between academia, industry, and government agencies are driving the adoption of GNN-based solutions in smart city initiatives. As urban mobility continues to evolve, the ability of GNNs to incorporate emerging data modalities, such as connected vehicle telemetry and mobile device traces, will be crucial for future developments. The ongoing refinement of model architectures and training protocols promises even greater accuracy and robustness in traffic flow prediction. Ultimately, the convergence of GNNs with other AI technologies is set to transform intelligent transportation systems, paving the way for safer, more efficient, and sustainable urban mobility.</p> Bohdan Dokhniak, Viktor Khavalko Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348298 Mon, 29 Dec 2025 00:00:00 +0200 INFORMATION TECHNOLOGIES FOR THE INTEGRATION OF CUSTOMER AND CONSUMER DATA http://samit.khpi.edu.ua/article/view/348470 <p>Using the example of a book enterprise that combines the functions of a publisher, distributor, and retailer, it is shown how multi-channel operational activities lead to the accumulation of vast arrays of information in databases that are fragmented, incomplete, unstructured, and contain duplicates. 
This situation makes it impossible to effectively analyze customer behavior, including the accurate calculation of key performance indicators. The relevance of the work lies in reducing this critical gap between the volume of accumulated information and the business's ability to make effective management decisions based on it. The purpose of this work is to develop a methodological approach to creating a data warehouse based on the star schema architecture and to implement an adaptive ETL chain with built-in quality control rules. An analysis of modern data warehouse design methods was conducted, including the transition from the entity-relationship model to the star schema. Based on the structure of the transactional database and business requirements for data analysis, an analytical warehouse using the star schema was designed, and key facts and dimensions necessary to support comprehensive customer analytics were identified. To transfer data from the transactional system to the warehouse, an extract, transform, and load (ETL) process was developed, and its logic was described: data extraction from sources, its cleaning and transformation in a staging area, and loading into the target warehouse tables. The effectiveness of the developed processes was evaluated based on event log data. The analysis results confirm the reliability and high performance of the proposed solution. The approach proposed in the article provides automated, reliable, and efficient updating of the data warehouse, creating a single source of truth for business analytics.</p> Ihor Babich, Dmytro Orlovskyi, Andrii Kopp Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348470 Mon, 29 Dec 2025 00:00:00 +0200 ALGORITHM FOR AUTOMATIC CREATION OF SEGMENTATION MASK FOR DETECTION OF BIOLOGICAL OBJECTS http://samit.khpi.edu.ua/article/view/348472 <p>The article presents a method for automatically creating segmentation masks for biomedical images, which significantly reduces the laboriousness of manual annotation and increases the reproducibility of data preparation. The proposed approach combines adaptive thresholding with Gaussian matrix coefficients, morphological operations, and geometric filtering of contours by area and roundness coefficient. This combination allows for effective separation of cellular structures under conditions of uneven illumination, noise, and low contrast, which are typical problems of microscopic images. The method was tested on the BBBC030v1 dataset, which contains 60 images of Chinese hamster ovary cells. For each image, the automatically created mask was compared with the provided ground truth annotation using the Dice coefficient. The average value was 0.8954, the median was 0.9013, and the standard deviation was 0.0254, which indicates high accuracy and stability of the method. The narrow interquartile range (IQR = 0.0215) confirms the uniformity of the algorithm's performance on most samples, while single outliers (0.80–0.85) are associated with atypical or low-contrast images. The overall result demonstrates that the classical segmentation approach without the use of neural networks can achieve quality comparable to manual expert labeling. To verify the practical suitability of the generated masks, they were used to train the U-Net neural network for the segmentation task. Comparison with training on real masks showed almost identical results (0.9036 vs. 
0.9037), which confirms the possibility of full or partial replacement of manual annotation by an automatic approach. The developed method can be applied to accelerate the preparation of large biomedical datasets and integration into decision support systems in cytology, histology and other fields of biomedicine.</p> Anton Kovalenko, Valerii Severyn Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348472 Mon, 29 Dec 2025 00:00:00 +0200 AI SOLUTIONS FOR OPTIMIZING SCRUM: PREDICTING TEAM PERFORMANCE http://samit.khpi.edu.ua/article/view/348614 <p>This study presents the development, training, and AWS cloud deployment of an AI-based assistant leveraging an LSTM network to enhance Scrum team velocity prediction. The research focuses on analyzing the assistant’s interaction with key Scrum processes, highlighting its potential to optimize sprint planning and improve team performance forecasting. Through this analysis, specific sprint planning challenges suitable for AI-driven solutions were identified, paving the way for enhanced prediction accuracy and reduced uncertainty in project management. The proposed architecture outlines a logical sequence of integrated services that collectively contribute to improving Scrum process efficiency. Initial testing of a locally deployed LSTM network using a smaller dataset validated the suitability of the chosen model and confirmed its capability for accurate performance prediction. These findings establish a foundation for developing a scalable AI assistant capable of supporting Scrum teams in dynamic environments with evolving requirements. This research underscores the feasibility of applying AI technologies, particularly LSTM networks, to Scrum optimization. The results demonstrate significant potential for improving sprint planning, reducing uncertainty, and supporting adaptive project management strategies. The planned advancements in cloud-based deployment and performance evaluation will provide actionable insights into the economic and operational viability of integrating AI-driven prediction tools into real-world Scrum environments. Future work will focus on deploying the trained LSTM model in a production AWS environment to evaluate its practical performance, scalability, and operational costs. This stage will include detailed monitoring of computational resource usage and cost analysis to identify opportunities for optimization. By refining algorithmic components and improving model efficiency, we aim to enhance cost-effectiveness while maintaining high predictive accuracy.</p> Vadym Ziuziun, Nikita Petrenko Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348614 Mon, 29 Dec 2025 00:00:00 +0200 INTEGRATION OF HETEROGENEOUS DATA USING ARTIFICIAL INTELLIGENCE METHODS http://samit.khpi.edu.ua/article/view/348619 <p>Modern AI development and multimodal data analysis methods are gaining critical importance due to their ability to integrate information from diverse sources, including text, audio, sensor signals, and images. Such integration enables systems to form a richer and more context-aware understanding of complex environments, which is essential for domains such as healthcare diagnostics, adaptive education technologies, intelligent security systems, autonomous robotics, and various forms of human-computer interaction. 
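<p>A compact sketch of the early (feature-level) versus late (decision-level) fusion architectures compared later in this abstract, using synthetic embeddings and random linear classifiers purely for illustration:</p> <pre><code>import numpy as np

rng = np.random.default_rng(1)
text_emb  = rng.normal(size=(4, 16))      # per-sample text embeddings (synthetic)
audio_emb = rng.normal(size=(4, 8))       # per-sample audio embeddings (synthetic)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Early (feature-level) fusion: concatenate features, then one classifier
W_early = rng.normal(size=(24, 3)) * 0.1
p_early = softmax(np.concatenate([text_emb, audio_emb], axis=1) @ W_early)

# Late (decision-level) fusion: per-modality classifiers, then average decisions
W_text, W_audio = rng.normal(size=(16, 3)) * 0.1, rng.normal(size=(8, 3)) * 0.1
p_late = 0.5 * softmax(text_emb @ W_text) + 0.5 * softmax(audio_emb @ W_audio)
print(p_early.shape, p_late.shape)        # both (4, 3): class probabilities
</code></pre>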
Multimodal approaches also enable AI models to compensate for the limitations inherent in individual modalities, thereby enhancing robustness and resilience to noise or incomplete data. The study employs theoretical analysis of scientific literature, comparative classification of multimodal architectures, systematization of fusion techniques, and formal generalization of model design principles. Additionally, attention is given to evaluating emerging paradigms powered by large-scale foundation models and transformer-based architectures. The primary methods and models for processing multimodal data are summarized, covering both classical and state-of-the-art approaches. Architectures of early (feature-level), late (decision-level), and hybrid (intermediate) fusion are described and compared in terms of flexibility, computational complexity, interpretability, and accuracy. Emerging solutions based on large multimodal transformer models, contrastive learning, and unified embedding spaces are also analyzed. Special attention is paid to cross-modal attention mechanisms that enable dynamic weighting of modalities depending on task context. The study determines that multimodal systems achieve significantly higher accuracy, stability, and semantic coherence in classification, detection, and interpretation tasks when modalities are properly synchronized and fused using adaptive strategies. These findings underscore the promise of further research toward scalable architectures capable of real-time multimodal reasoning, improved cross-modal transfer, and context-aware attention mechanisms.</p> Oleh Zherebetskyi, Oleh Basystiuk Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348619 Mon, 29 Dec 2025 00:00:00 +0200 SOLVING THE MAX-CUT PROBLEM USING THE QASPA ALGORITHM http://samit.khpi.edu.ua/article/view/348625 <p>A novel quantum algorithm named the Quantum Approximate Shift-Phase Algorithm (QASPA) is proposed for the approximate solution of the Max-Cut problem. The Max-Cut problem consists in partitioning the set of vertices into two subsets in such a way that the total weight of the edges connecting vertices from different subsets is maximized. This problem is known to be NP-complete and represents a combinatorial challenge. Classical solution algorithms require excessive computational time, rendering them inefficient for medium and large graphs. Approximate solution methods offer guaranteed approximation ratios but still face significant limitations in terms of accuracy and performance on larger graphs. The proposed algorithm features a simple scheme and a minimal number of optimized parameters. The input graph is represented by an adjacency matrix, after which the edge weights are linearly normalized to fixed phase angles. The quantum circuit begins by applying a Hadamard transform to each qubit, thereby creating an equal superposition of all possible partitions. Subsequently, for each pair of adjacent vertices, a sequence comprising a controlled NOT gate, a single-qubit rotation around the Y-axis by an angle proportional to the edge weight, and a second controlled NOT gate is applied. This encodes the phase information about the edge weights into the quantum state of the system. After circuit execution, the qubits are measured in the standard computational basis, and the resulting probability distribution allows the selection of the most probable partition as an approximate solution to the problem. 
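<p>A sketch of the described circuit in Qiskit; the linear mapping of edge weights to angles in [0, π] is an assumption of the sketch, and sampling the measured distribution would additionally require a simulator backend such as qiskit-aer:</p> <pre><code>import numpy as np
from qiskit import QuantumCircuit

# Toy weighted graph: (u, v, weight)
edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 3.0), (2, 3, 1.5)]
n = 4
w_max = max(w for _, _, w in edges)

qc = QuantumCircuit(n)
qc.h(range(n))                        # uniform superposition over all partitions
for u, v, w in edges:
    theta = np.pi * w / w_max         # edge weight mapped linearly to a phase angle
    qc.cx(u, v)
    qc.ry(theta, v)                   # the CX - RY(theta) - CX block from the abstract
    qc.cx(u, v)
qc.measure_all()                      # measure in the computational basis
print(qc.draw())
</code></pre>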
Experimental studies conducted on quantum processor simulators have demonstrated that the proposed algorithm achieves accuracy comparable to the Quantum Approximate Optimization Algorithm (QAOA) while significantly reducing computation time due to the absence of iterative parameter optimization. Moreover, as the graph size increases, the algorithm's runtime grows considerably slower compared to classical brute-force approaches and QAOA, confirming its potential for solving medium-sized Max-Cut problems.</p> Dmytro Sapozhnyk Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348625 Mon, 29 Dec 2025 00:00:00 +0200 ANALYSIS OF THE IMPACT OF PRELIMINARY NOISY IMAGE RESTORATION BY AUTOCODER ON THE ACCURACY OF CNN CLASSIFICATION http://samit.khpi.edu.ua/article/view/348626 <p>The paper investigates the impact of preliminary image restoration using a denoising autoencoder (DAE) on the classification accuracy of a convolutional neural network (CNN) under various types of noise. The relevance of the topic is due to the fact that in real conditions, optical images often contain distortions caused by changes in lighting, vibrations, camera movement, and other factors, which significantly complicates the task of object recognition. Traditional filters do not always provide sufficient cleaning quality and can lead to the loss of important structural features. In this regard, the use of deep neural networks, in particular autoencoders, is a promising direction for improving the robustness of computer vision algorithms to noise of various kinds. The study uses the CIFAR-10 dataset and implements a two-component model: an autoencoder for preliminary cleaning and a CNN for classification. The trained autoencoder restores the image structure after exposure to Gaussian, impulse, Poisson, and speckle noise. Three series of experiments were conducted: classification of clean images, classification of noisy data without cleaning, and classification after preliminary restoration by the autoencoder. The results showed that the CNN demonstrates an accuracy of 70.37% on clean data, but when noise is introduced, the accuracy drops to 30–59% depending on the type of distortion. After applying the autoencoder, classification accuracy increased to 56–60% for all types of noise, with the greatest improvement observed for Gaussian noise with high dispersion. The results confirm that using an autoencoder as a preliminary restoration step is an effective method for improving classification accuracy and reducing CNN vulnerability to noise. This approach provides better generalization and stability of the system, which is especially important for real-time applications, in particular in dynamic systems, robotics, autonomous transport, and navigation systems, where the quality of optical data is often unstable.
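<p>A minimal convolutional denoising autoencoder of the kind described above, in Keras, trained on (noisy, clean) CIFAR-10 pairs; the layer sizes, noise level, and one-epoch run are illustrative, not the authors' configuration:</p> <pre><code>import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Convolutional denoising autoencoder for 32x32x3 inputs (CIFAR-10-sized)
inputs = keras.Input(shape=(32, 32, 3))
x = layers.Conv2D(32, 3, activation="relu", padding="same", strides=2)(inputs)
x = layers.Conv2D(64, 3, activation="relu", padding="same", strides=2)(x)
x = layers.Conv2DTranspose(64, 3, activation="relu", padding="same", strides=2)(x)
x = layers.Conv2DTranspose(32, 3, activation="relu", padding="same", strides=2)(x)
outputs = layers.Conv2D(3, 3, activation="sigmoid", padding="same")(x)
dae = keras.Model(inputs, outputs)
dae.compile(optimizer="adam", loss="mse")

# Train on (noisy, clean) pairs; at inference the DAE cleans images before the CNN
(x_train, _), _ = keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0
noisy = np.clip(x_train + np.random.normal(0, 0.1, x_train.shape), 0, 1)
dae.fit(noisy, x_train, epochs=1, batch_size=128)   # short run for illustration
restored = dae.predict(noisy[:8])                   # feed `restored` to the classifier
</code></pre>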
The study demonstrates the promise of integrating restoration and classification models into a single structure to improve the performance of computer vision systems in challenging conditions.</p> Danylo Yakovlev, Maksym Holikov, Viktoriia Strilets Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348626 Mon, 29 Dec 2025 00:00:00 +0200 INFORMATION SYSTEM FOR SELECTING THE OPTIMAL PREPRESS PROCESSING OPTION FOR NEWSPAPER PUBLICATIONS http://samit.khpi.edu.ua/article/view/348630 <p>The paper analyzes the specific features of prepress processing of newspaper publications as a crucial stage in the production of periodical print media. Pareto-optimal factors with a high level of priority in influencing the quality of the studied process are identified: dimensional parameters, layout design, compositional and graphical formatting, and typesetting. Pairwise comparison of the factors is performed using a relative importance scale. As a result of the normalization of the principal eigenvector components of the pairwise comparison matrix, the weight coefficients of the factors are determined. The correctness of the solution is verified according to normalization criteria. Three alternatives for prepress processing of newspaper publications are designed. The influence level of each factor within the defined alternatives is presented as percentages. A comparison of the probable alternative options for prepress processing is carried out using fuzzy preference relations for each factor. Relation matrices of the alternatives are constructed, where a value of one indicates the presence of a preference or equivalence, and zero indicates its absence. Aggregation of factor relations is conducted, resulting in the first subset of non-dominated alternatives. Further aggregation takes into account the factor weights, leading to the calculation of membership functions and the derivation of a second subset of non-dominated alternatives. The intersection of the two non-dominated subsets is implemented, and a membership function of the combined set is obtained. The values of this function reflect the significance of the designed alternatives, that is, their level of optimality. The optimal prepress processing option for newspaper publications is identified as the third among the proposed alternatives. Based on the proposed methodology, an information system has been developed to support the selection of optimal alternatives for prepress processing of newspaper publications. The practical value of the study lies in providing a reasoned approach to selecting an effective prepress processing method for newspapers, thereby enhancing production efficiency. Future research prospects include expanding the set of input parameters by incorporating time and resource constraints, adapting the proposed model to a digital publishing environment, and integrating fuzzy logic methods with modern machine learning tools.</p> Alona Kudriashova, Yurii Slipetskyi Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348630 Mon, 29 Dec 2025 00:00:00 +0200 DETERMINATION OF THE PRIORITY OF RASTER IMAGE QUALITY FACTORS USING THE RANKING METHOD http://samit.khpi.edu.ua/article/view/348632 <p>Theoretical principles regarding the quality of raster images are provided. A wide range of application areas of raster graphic information is defined, including education, medicine, and printing. 
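<p>A small sketch of the influence/dependence ranking developed below in this abstract: influences receive positive weights, dependencies negative ones, and the normalized scores yield the factor ranks; the influence matrix is invented, and indirect influences are omitted for brevity:</p> <pre><code>import numpy as np

# Hypothetical direct-influence matrix for 5 raster-quality factors (assumed data):
# M[i, j] = 1 means factor i directly influences factor j.
factors = ["resolution", "color depth", "file format", "compression", "sharpness"]
M = np.array([[0, 0, 1, 1, 1],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0]])

w_inf, w_dep = 1.0, -1.0                  # influences add weight, dependencies subtract
score = w_inf * M.sum(axis=1) + w_dep * M.sum(axis=0)
score = (score - score.min()) / (score.max() - score.min())   # normalize to [0, 1]
for rank, i in enumerate(np.argsort(-score), start=1):
    print(rank, factors[i], round(float(score[i]), 2))
</code></pre>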
An analysis of recent studies and publications is conducted. The aim and main objectives of the research are formulated. A methodological approach to identifying the priority levels of factors influencing raster image quality based on ranking is demonstrated. A set of influencing factors is distinguished, including resolution, color depth, color model, file format, file size, image dimensions, compression level, brightness, saturation, and sharpness. To structure the interrelationships among these parameters, predicate logic constructions are applied. It is established that certain factors may exert both direct and indirect influence on other elements. Tables are developed to represent the connections for each factor. Hierarchical trees of direct and indirect influences and dependencies are constructed. An example of hierarchical trees for one of the selected factors is presented. Based on the analysis of the structure of interconnections, the ranking of quality factors is carried out. For this purpose, the number of each type of connection is counted, and corresponding weight coefficients are introduced. Positive weight values are assigned to influences, while negative ones are assigned to dependencies. The importance scores of the factors are calculated. A normalization of the values is performed to transform the scale into a positive domain. A final evaluation is conducted, taking into account the normalization coefficient. Factor ranks and the corresponding levels of priority are determined. Input data and ranking results are presented in tabular form. A model that reflects the priority levels of influencing factors on raster image quality is developed. The obtained results can be applied for image quality assessment based on fuzzy logic and machine learning methods, followed by the development of a corresponding fuzzy system.</p> Alona Kudriashova, Taras Oliyarnyk Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348632 Mon, 29 Dec 2025 00:00:00 +0200 BRIDGING COMPUTER SCIENCE EDUCATION AND INDUSTRY: A COMPETENCY-BASED ARCHITECTURE USING E-CF http://samit.khpi.edu.ua/article/view/348633 <p>The rapid growth of the information technology (IT) sector has made the existing gap between university training and industry requirements even more noticeable. As a result, many graduates feel the need to pursue additional qualifications to stay competitive in the job market. This paper suggests a recommendation system that connects academic results with professional expectations by using competency-based learning principles and the European e-Competence Framework (e-CF). Competency-based learning shifts the focus from traditional knowledge assessments to skills and real-world outcomes. The e-CF offers a standardized and internationally recognized way to describe IT roles, skills, and proficiency levels. Based on previous research in personalized learning and curriculum changes, the proposed system identifies gaps between competencies gained in a student’s university program and those needed for specific IT roles. Using course similarity measures, the system maps both academic disciplines and job profiles, finds missing competencies, and calculates a personalized learning path that includes the minimum number of extra courses needed to fill these gaps. The architecture uses the IDEF0 functional modeling method, which clearly shows key processes such as analyzing competency gaps, and optimizing course paths. 
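<p>A greedy set-cover sketch of the gap analysis and minimal-course-path step just described; the competency codes only loosely imitate e-CF notation, and the role, programme, and course catalog are invented:</p> <pre><code># Hypothetical competency identifiers loosely following e-CF style (assumed data)
target_role = {"B.1-3", "B.6-2", "E.2-2", "A.6-3"}        # competencies the role needs
acquired    = {"B.1-3", "A.6-2"}                          # from the degree programme
catalog = {                                               # extra courses -> competencies
    "Cloud DevOps":        {"B.6-2", "E.2-2"},
    "Systems Engineering": {"A.6-3"},
    "Testing Basics":      {"B.3-1"},
}

gap = target_role - acquired
path = []
while gap:
    # Greedy set cover: pick the course closing the largest part of the remaining gap
    course, covers = max(catalog.items(), key=lambda kv: len(kv[1] & gap))
    if not covers & gap:
        break                      # remaining gap cannot be closed from this catalog
    path.append(course)
    gap -= covers
    del catalog[course]
print("recommended path:", path, "| uncovered:", gap)
</code></pre>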
Preliminary evaluations suggest that this approach can reduce the time and effort needed for aligning competencies while improving the accuracy of skill gap detection. The findings are useful for universities looking to update their curricula, individuals aiming to develop specific skills, and employers wanting clearer and more comparable candidate profiles. By combining competency-based learning with a standardized European framework, this system provides a flexible and scalable solution for enhancing the connection between higher education and the changing demands of the IT job market. It can also be applied to other fields with established competency models.</p> Volodymyr Sokol, Pavlo Sapronov Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348633 Mon, 29 Dec 2025 00:00:00 +0200 SYNTHESIS OF QUANTITATIVE MATURITY MODEL SCALES FOR ASSESSING THE QUALITY OF THE SOFTWARE DEVELOPMENT PROCESS http://samit.khpi.edu.ua/article/view/348780 <p>In this work, the concept of quality is defined as one of the most important indicators for evaluating products and services. The main stages of the evolution of this concept are examined. At the fourth stage, characterized by Total Quality Management (TQM), the ISO 9000 series of quality system standards emerges. The TQM stage and these standards are marked by the beginning of their application to software (SW) and the software development (SD) process. The paper reviews standards related to the following maturity models for assessing the SD process: Capability Maturity Model Integration (CMMI) and Software Process Improvement and Capability dEtermination (SPICE). The CMMI model has two usage options: continuous and staged, while the SPICE model is only continuous. In the continuous model, maturity is assessed based on the following components: focus areas for CMMI and processes for SPICE. The staged CMMI model evaluates the quality of the entire software development process. In all three cases, quality is determined using score-based qualitative scales. Further research showed that score-based scales are not fully suitable for planning quality improvement in the SD process. Therefore, the goal of the study was to develop a technology for converting score-based qualitative scales into quantitative ones using a utility function, which made the developed models more adequate to real-world SD processes. Based on this, a technology for transforming a score-based qualitative scale into a quantitative scale using a utility function is proposed. The essence of the technology is that each capability level is treated as an alternative utility value for a focus area or process. Then the methodology of collective expert evaluation is applied, specifically the Analytic Hierarchy Process (AHP) pairwise comparison method by Saaty, in which a team of experts assesses the utility of capability levels relative to one another. As a result, specific utility values for each capability level are obtained on a scale from zero to one. Appropriate resources are required for planning the quality improvement of individual focus areas and processes. Therefore, the next task is cost optimization aimed at maximizing the utility function. A technology for constructing balanced quantitative scales based on the obtained quantitative maturity model scales is presented. 
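<p>A sketch of the Saaty pairwise-comparison step described above: the normalized principal eigenvector gives the weights, which are rescaled to a [0, 1] utility scale; the comparison matrix is assumed, and the consistency check uses Saaty's random index for n = 4:</p> <pre><code>import numpy as np

# Hypothetical Saaty pairwise-comparison matrix for 4 capability levels (assumed)
P = np.array([[1.0, 1/3, 1/5, 1/7],
              [3.0, 1.0, 1/3, 1/5],
              [5.0, 3.0, 1.0, 1/3],
              [7.0, 5.0, 3.0, 1.0]])

vals, vecs = np.linalg.eig(P)
k = np.argmax(vals.real)
u = np.abs(vecs[:, k].real)
weights = u / u.sum()                     # normalized principal eigenvector

# Rescale so utilities span [0, 1], as in the quantitative maturity scale
utility = (weights - weights.min()) / (weights.max() - weights.min())
print(np.round(weights, 3), np.round(utility, 3))

# Consistency ratio (RI = 0.9 for n = 4 per Saaty); acceptable when CR < 0.1
lam = vals.real.max()
CI = (lam - 4) / 3
print("CR =", round(CI / 0.9, 3))
</code></pre>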
The essence of a balanced scale is that the intervals between individual utility estimates for a focus area or process, depending on the resources provided, should not differ significantly. One of the most significant directions for further research is the development of an optimization algorithm for planning the improvement of SD process maturity levels based on the method of sequential option analysis.</p> Volodymyr Sokol, Mykhaylo Godlevskyi, Dmytro Malets, Kostiantyn Afanasiev Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348780 Mon, 29 Dec 2025 00:00:00 +0200 METHOD FOR ADAPTIVE SELECTION OF TIME INTERVALS FOR CONSTRUCTING GRAPHS OF TEMPORAL GRAPH NEURAL NETWORKS http://samit.khpi.edu.ua/article/view/348789 <p>The subject of research is the process of forming graph structures for temporal graph neural networks with adaptive selection of the granularity level of time intervals. The aim of the work is to develop an approach to forming graph structures with adaptive granularity for temporal graph neural networks. Research tasks include: structuring approaches to selecting the granularity level of time intervals when forming graphs of temporal graph neural networks, considering changes in the structure of these graphs; developing a method for adaptive selection of time intervals based on graph editing metrics and spectral analysis of graph structure. The developed method includes five stages: graph formation based on the co-occurrence frequency of entities; calculation of the editing rate between sequential graphs; spectral embedding of graphs through the normalized symmetric Laplacian; computation of the Kullback–Leibler divergence between spectral densities to detect structural drift; and adaptive adjustment of time interval duration considering the editing rate criteria and the divergence magnitude. The method combines a local graph-editing metric with global spectral-density metrics (the Kullback–Leibler divergence) to detect not only the number of changes in the graph but also their impact on graph topology. This allows distinguishing noise from significant structural changes in the graph. The method provides automated selection of time granularity without using expert knowledge about threshold values for graph structure changes; a reduction of computational costs for graph formation during periods of structural stability; and the required accuracy of temporal dependency detection during periods of sharp graph structure changes. The practical significance of the obtained results lies in the possibility of representation and further analysis of dynamic processes in intelligent systems that operationally adapt to changes in relationship structure, for tasks of building explanations, recommendations, monitoring, analysis, and forecasting in e-commerce systems, social networks, financial analysis, and transportation monitoring.</p> Serhii Chalyi, Rostyslav Kravchenko Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348789 Mon, 29 Dec 2025 00:00:00 +0200 DETECTION METHOD FOR SHORT-TERM SHILLING ATTACKS IN E-COMMERCE SYSTEMS USING ADAPTIVE GRANULARITY OF USER FEEDBACK http://samit.khpi.edu.ua/article/view/348796 <p>The subject of research is the process of detecting short-term shilling attacks in e-commerce systems based on analysis of temporal dependencies between explicit and implicit user feedback.
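<p>A sketch of the variability screening at the core of the method described in this abstract: windows where the coefficient of variation of ratings (explicit feedback) surges while sales (implicit feedback) stay stable are flagged as candidate attack intervals; thresholds and data are synthetic, and fixed windows stand in for the adaptive granularity:</p> <pre><code>import numpy as np

def flag_intervals(sales, ratings, window=15, cv_hi=0.6, cv_lo=0.35):
    """Flag windows with high rating variability but stable sales."""
    cv = lambda x: float(np.std(x) / (np.mean(x) + 1e-9))
    flags = []
    for start in range(0, len(sales) - window + 1, window):
        s, r = sales[start:start + window], ratings[start:start + window]
        if cv(r) > cv_hi and cv(s) < cv_lo:
            flags.append((start, start + window))
    return flags

rng = np.random.default_rng(5)
sales = rng.poisson(20, 90).astype(float)     # implicit feedback: daily sales
ratings = rng.poisson(5, 90).astype(float)    # explicit feedback: daily ratings
ratings[60:67] += 25                          # injected short-term rating burst
print(flag_intervals(sales, ratings))         # expected to flag the (60, 75) window
</code></pre>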
The aim of the work is to develop an approach to detecting shilling attacks using temporal rules and adaptive granularity of both sales and ratings in e-commerce systems. Research tasks include: development of an approach to detecting short-term shilling attacks based on adaptive comparison of temporal rules for sales and ratings; development of a method for detecting short-term shilling attacks based on adaptive granularity of explicit and implicit user feedback. Explicit user feedback is represented by ratings, while implicit feedback is captured through product sales. The developed method includes the following stages: preliminary aggregation of sales data; analysis of sales and ratings variability through the ratio of standard deviation to mean value; identification of intervals with potential attack possibility; formation of a fact set with different granularity levels; construction of temporal rules of two types for sales and ratings; detection of shilling attack intervals based on comparison of rule weight signs for sales and ratings; identification of attacking users based on analysis of user activity across detected shilling attack intervals. The method provides automated selection of time granularity for determining sales facts and forming ratings and thereby improves the accuracy of detecting short-term attacks compared to fixed granularity, as well as enables attack detection in near-online mode. The practical significance of the obtained results lies in the possibility of detecting short-term rating distortions in e-commerce systems, social networks, and recommender systems to increase user trust in recommended products and services.</p> Oksana Chala, Oleksandr Bitchenko, Liliia Saikivska, Anzhelika Kalnitska Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348796 Mon, 29 Dec 2025 00:00:00 +0200 DATA-DRIVEN APPROACH TO PREDICT THE STRENGTH OF COMPOSITES http://samit.khpi.edu.ua/article/view/348293 <p>The rapid development of composites requires accurate prediction of their limit state under complex loading conditions, which cannot be provided by classical mechanical criteria due to the anisotropy and nonlinearity of materials. The paper proposes a data-driven approach using machine learning to determine the limit state of composites based on the components of the stress tensor. The object of study is machine learning processes for determining the limit states of unidirectional reinforced composites under a multiaxial stress state. The aim of the study is to create a universal and accurate model capable of detecting the moment of reaching the strength limit without numerical modeling and large-scale experiments. Balanced synthetic samples of stress states were generated for three composite systems. Several machine learning models were implemented in the study: logistic regression, random forest, and multilayer perceptron neural network. To compare the effectiveness, the classical model for determining the limit state according to the von Mises criterion, with a fixed equivalent stress threshold for the fibres or the matrix, was also employed. The results show that the machine learning models achieve an accuracy of up to 99.9 % on test samples, significantly outperforming the classical approach, which demonstrates an accuracy of about 50 % in all cases. 
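<p>A sketch of the data-driven limit-state classification just described, using scikit-learn; the stress states and the labeling criterion are synthetic stand-ins for the experimental/FEM data used in the article:</p> <pre><code>import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for labeled stress states: 6 tensor components per sample,
# label 1 = limit state exceeded (a real criterion would come from experiments/FEM)
rng = np.random.default_rng(0)
X = rng.normal(scale=400.0, size=(5000, 6))   # [s11, s22, s33, s12, s13, s23], MPa
limit = 600.0
y = (np.linalg.norm(X[:, :3], axis=1)
     + 0.5 * np.linalg.norm(X[:, 3:], axis=1) > limit).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
</code></pre>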
Visualization of the stress state in the form of 2D sections showed a complex and nonlinear structure of the boundary surface, which confirms the feasibility of using ML algorithms. The obtained results confirm the high effectiveness and reliability of the data-driven approach for structural health assessment of composite systems. The developed methodology is universal and can be adapted to various types of reinforced materials and loading conditions. The proposed approach can be applied in real-time technical diagnostics of composite structures. The work also creates a basis for further implementation of interpreted models and digital twins in the field of composite mechanics.</p> Ruslan Lavshchenko, Gennadiy Lvov Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348293 Mon, 29 Dec 2025 00:00:00 +0200 COMPARATIVE ANALYSIS OF PARAMETER CONSISTENCY BETWEEN AGENT-BASED AND SIR EPIDEMIC MODELS http://samit.khpi.edu.ua/article/view/348255 <p>In the context of the rapid spread of new viral infections, particularly during the COVID-19 pandemic, there is an increasing need to develop models that are capable not only of accurately representing the dynamics of the disease, but also of providing a well-grounded interpretation of the parameters used in analytical models. This paper examines the classical compartmental SIR (Susceptible–Infectious–Recovered) model, which allows for the assessment of disease dynamics through the solution of a system of differential equations. It is noted that, despite its wide application, this model has a number of limitations, as it does not take into account individual differences in population behavior, spatial structure, or variability of contacts. To address these limitations, a multi-agent model is proposed, in which individual agents simulate real people moving in a two-dimensional space and interacting with each other. The transition of agents between states (susceptible, infected, recovered) depends on the duration of the disease and the occurrence of spatial contact with an infected agent. The proposed model allows for consideration of the physical meaning of parameters, such as the infection radius and disease duration. Based on the results of agent-based modeling, the parameters of the SIR model – the infection transmission rate and the recovery rate – were identified using the least squares method. Numerical experiments examined how these parameters change depending on the duration of the disease and the spatial interaction distance between agents. The obtained results demonstrated qualitative agreement between the agent-based model and the SIR model when parameters were properly chosen. Thus, multi-agent modeling can not only significantly improve the accuracy of epidemic forecasting but also serve as a tool for the well-grounded identification of parameters in classical mathematical models. 
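<p>A sketch of the parameter-identification step just described: the SIR system is integrated with SciPy and the transmission and recovery rates are recovered by least squares; the synthetic noisy observations stand in for the output of the agent-based simulation:</p> <pre><code>import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def sir(y, t, beta, gamma):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

def infected(t, beta, gamma, i0=0.01):
    sol = odeint(sir, [1.0 - i0, i0, 0.0], t, args=(beta, gamma))
    return sol[:, 1]                      # fraction of infected over time

# `i_obs` would be the infected share produced by the agent-based model;
# here it is synthesized from known parameters plus noise for illustration.
t = np.linspace(0, 60, 61)
rng = np.random.default_rng(3)
i_obs = infected(t, 0.35, 0.1) + rng.normal(0, 0.002, t.size)

(beta_hat, gamma_hat), _ = curve_fit(infected, t, i_obs, p0=(0.2, 0.2))
print(f"beta = {beta_hat:.3f}, gamma = {gamma_hat:.3f}")
</code></pre>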
The proposed approach can be used to support decision-making in healthcare during real epidemic threats, providing a more substantiated assessment of the potential development of an epidemic, planning of preventive and control measures, and evaluation of the effectiveness of different intervention scenarios, taking into account the spatial and temporal dynamics of infection spread.</p> Daria Ivashchenko, Oleksandr Kutsenko Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348255 Mon, 29 Dec 2025 00:00:00 +0200 ALGORITHMS FOR CONSTRUCTING A REGRESSION LINEAR WITH RESPECT TO UNKNOWN COEFFICIENTS ON A LIMITED AMOUNT OF EXPERIMENTAL DATA http://samit.khpi.edu.ua/article/view/348290 <p>This publication continues the authors' series of scientific works on creating algorithms that construct multivariate regressions linear with respect to their unknown coefficients by using linear programming models. To simplify the simulation modeling of their efficiency, we present the algorithms for the multivariate linear regression problem. The use of linear programming models requires minimizing the sum of absolute deviations rather than the sum of squared deviations used in the general procedure of the least squares method. The estimates of the unknown coefficients obtained by solving the linear programming problem are linear with respect to the vector of the values of the regression model in the statistical experiment. It is known that, by virtue of the Gauss–Markov theorem, the estimates of the unknown coefficients obtained by the general procedure of the least squares method are efficient in the class of linear unbiased estimates. Thus, it would seem that the transition from the least squares method to the least absolute deviations method is a priori unproductive. But this is not so. From the proof of the Gauss–Markov theorem, it follows that the linear estimation matrix must be constant and independent of the values of the regression model in the statistical experiment. The estimates obtained by the least absolute deviations method do not meet this condition. Indeed, the estimation matrix is determined by the optimal basis of the linear programming problem solved by the simplex method and depends on the values of the regression model in the statistical experiment. Such a formulation of the problem allows introducing, into the optimization model, linear constraints that use the results of statistical tests and implement additional properties of the sought multivariate regression. The first studies of these algorithms showed their efficiency, which allowed the authors to set the task of creating algorithms that not only compete with the general algorithmic procedure of the least squares method but are also efficient in the case of a limited volume of experimental data, when the ratio of the average absolute value of the realizations of the random factor in the experiment to the average absolute value of the true regression is sufficiently large.
In this case, it is incorrect to pose the problem of finding estimates of the unknown coefficients that practically do not differ from the true ones. However, as the experiments, and in particular the examples given in this paper, have shown, it is possible to find sufficiently good estimates of the average values of the true regression in the conducted experiments, which can be used, for example, in diagnosing the early stages of an epidemic of various diseases or in other recognition tasks.</p> Alexander Pavlov, Anton Kushch Copyright (c) 2025 https://creativecommons.org/licenses/by/4.0/ http://samit.khpi.edu.ua/article/view/348290 Mon, 29 Dec 2025 00:00:00 +0200
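<p>To make the linear-programming formulation above concrete, a minimal sketch of least-absolute-deviations regression as an LP, minimizing the sum of auxiliary variables u subject to -u &lt;= y - Xb &lt;= u, solved with SciPy's linprog on synthetic heavy-tailed data:</p> <pre><code>import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """Least absolute deviations as an LP: min sum(u) s.t. |y - Xb| <= u."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(n)])   # objective: sum of u only
    A_ub = np.block([[X, -np.eye(n)],               #  Xb - u <= y
                     [-X, -np.eye(n)]])             # -Xb - u <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n   # b free, u nonnegative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(7)
X = np.column_stack([np.ones(30), rng.uniform(-2, 2, 30)])   # intercept + regressor
y = 1.5 + 0.8 * X[:, 1] + rng.standard_t(df=2, size=30)      # heavy-tailed noise
print("LAD coefficients:", lad_fit(X, y))
</code></pre>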