Volume №1(41) / 2026
Articles in this issue
This work presents a brief overview of the current state of quantum computing. In recent years, this field of research has seen rapid progress, primarily in the development of experimental methods for controlling the states of multiparticle quantum systems. At the same time, the question of the practical applications of quantum computing remains open. The most promising areas, where such applications may emerge in the coming years, include solving problems in quantum chemistry, materials science, and various optimization problems. Current experimental work on the implementation of various quantum algorithms using intermediate-scale quantum processors with tens and hundreds of qubits—two-level quantum systems that are the fundamental logical elements of a quantum computer—has generated considerable interest in the scientific community. This is clearly demonstrated by the results of a bibliometric analysis conducted using Google Scholar and ChatGPT.
The experimental implementation of quantum computing is currently characterized by competition among several physical platforms, with the superconducting platform leading the way, an achievement recognized by the 2025 Nobel Prize in Physics. At the same time, alternative platforms (photons, ions, and neutral atoms) offer a number of potential advantages. The most significant development expected in experimental quantum computing in the coming years is quantum error correction, which should increase the achievable depth of quantum algorithms and enable the implementation of the most complex universal quantum algorithms. The article presents the basic principles of quantum error correction and the implementation of the surface code, which best suits the architecture of modern quantum processors.
Cloud access to existing superconducting quantum processors allows researchers to carry out both numerical simulations and experiments on the implementation of quantum computing. The IBM Qiskit library, which has become the de facto standard, is widely used for this purpose. As an example, the paper presents a demonstration of elements of a surface code for quantum error correction, implemented with the help of large language models.
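For readers who want to reproduce such a demonstration, the following is a minimal sketch (not the authors' exact circuit) of a single weight-4 Z-stabilizer measurement, the basic parity-check element of the surface code, written with the Qiskit library and the Aer simulator mentioned above.

```python
# Minimal sketch: measuring one weight-4 Z stabilizer, the basic parity-check
# element of the surface code, with Qiskit and the Aer simulator.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

# Qubits 0-3 are data qubits, qubit 4 is the ancilla (syndrome) qubit.
qc = QuantumCircuit(5, 1)

# Inject an X error on one data qubit so the syndrome bit flips.
qc.x(2)

# Parity of the four data qubits is accumulated on the ancilla via CNOTs.
for data in range(4):
    qc.cx(data, 4)

# Measuring the ancilla yields the Z-stabilizer syndrome bit.
qc.measure(4, 0)

counts = AerSimulator().run(qc, shots=1024).result().get_counts()
print(counts)  # the single injected X error gives syndrome '1' in every shot
```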
The article provides an overview of the current state of the subject domain of technical device autonomy. Today, this concept has no clear methodological or formalized definition, which, in the context of the transition to digital technologies, is a significant problem when designing devices with artificial intelligence elements and a specified degree of autonomy. Various approaches to understanding the term "autonomy" are described, emphasizing that autonomy should be viewed as a further development of a technical device's automation properties, alongside its automatism and adaptability. The concepts of "strong autonomy" and "weak autonomy," used by some foreign experts and related to the automation properties of a technical system, are discussed. Foreign and domestic sources containing various criteria for assessing the level of autonomy are presented. It is shown that the evaluation indicators are purely expert and informal in nature and relate not to the essential quality of a technical device's autonomy but to the result of its functioning. Consequently, when assessing the autonomy level of a device, decision-making algorithms for unstructured domains should be used, based on the convolution of a vector criterion that is hierarchical in nature. The purpose of this work is therefore to show, by analyzing numerous sources, the limitations of the current scientific and methodological apparatus for assessing such a complex concept as device autonomy, which today rests solely on the subjective judgments of an observer, and to argue for the need to move to formal, or at least formalized, methods.
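As an illustration of what such a formalized assessment could look like, the following sketch shows a two-level additive convolution of a hierarchical vector criterion; the criteria names, weights, and scores are purely hypothetical and are not taken from the article.

```python
# Illustrative sketch only: a two-level additive convolution of a hierarchical
# vector criterion producing an "autonomy level" score. All criteria, weights,
# and scores are hypothetical.
def convolve(scores: dict, weights: dict) -> float:
    """Weighted additive convolution of normalized scores in [0, 1]."""
    return sum(weights[k] * scores[k] for k in weights)

# Lower level: partial criteria grouped under automatism and adaptability.
automatism = convolve({"sensing": 0.8, "actuation": 0.6},
                      {"sensing": 0.5, "actuation": 0.5})
adaptability = convolve({"learning": 0.4, "replanning": 0.7},
                        {"learning": 0.6, "replanning": 0.4})

# Upper level: the autonomy estimate aggregates the group scores.
autonomy = convolve({"automatism": automatism, "adaptability": adaptability},
                    {"automatism": 0.45, "adaptability": 0.55})
print(round(autonomy, 3))
```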
In the context of the global transition to a low-carbon economy, the design of renewable energy facilities such as solar and wind power plants requires consideration of numerous hard-to-predict factors: variability of natural resources, terrain topography, environmental constraints, and economic parameters. Digital twins (DTs) are a powerful tool for addressing these challenges, providing a virtual representation of a physical asset throughout its entire lifecycle. This article examines proposed methods for constructing Smart Digital Twins (SDTs) for use in renewable energy facility design. The foundation of the proposed approach is a modified digital twin model based on ontological models. These models formalize the key concepts and entities of the renewable energy domain, their attributes, and the semantic relationships among them. The ontology provides a unified glossary and data structure, which is critical for integrating heterogeneous information sources and ensuring mutual understanding among system components and specialists. The next step transforms this ontology into a smart digital twin model by incorporating intelligent components such as knowledge bases, a virtual environment, artificial intelligence models, schemas, and diagrams. To describe the relationships between models and components, a fractal stratified model is proposed; it formalizes the knowledge structure and the interconnections among ontological, informational, and mathematical models. The article details an ontological engineering methodology adapted for digital twin design tasks, along with a method for constructing a virtual environment that enables debugging of both digital twins and smart digital twins under intermittent or entirely absent connectivity to the physical asset. To emulate external parameters such as weather conditions, a modified process based on CRISP-DM (Cross-Industry Standard Process for Data Mining) is proposed, facilitating the integration of machine learning models. The practical relevance of the approach is demonstrated through the development of a visualization component for identifying optimal power plant locations and supporting their design. This tool leverages an interactive 3D model of the Earth, satellite data, and meteorological APIs. The implemented solution confirms the feasibility of applying smart digital twins to renewable energy facility design.
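As an illustration of the ontological layer described above, the following sketch shows how a few renewable-energy concepts, attributes, and semantic relationships could be expressed as RDF triples with the rdflib library; the namespace, class, and property names are assumptions for illustration, not the authors' ontology.

```python
# A minimal, assumed vocabulary for a renewable-energy ontology fragment.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

RE = Namespace("http://example.org/renewable#")  # hypothetical namespace
g = Graph()
g.bind("re", RE)

# Classes: a solar power plant is a kind of renewable energy facility.
g.add((RE.RenewableFacility, RDF.type, RDFS.Class))
g.add((RE.SolarPlant, RDFS.subClassOf, RE.RenewableFacility))

# A property linking a facility to the meteorological data source that feeds it.
g.add((RE.hasWeatherSource, RDF.type, RDF.Property))
g.add((RE.hasWeatherSource, RDFS.domain, RE.RenewableFacility))

# An individual facility instance with an attribute.
g.add((RE.Plant1, RDF.type, RE.SolarPlant))
g.add((RE.Plant1, RE.installedCapacityMW, Literal(5.0)))

print(g.serialize(format="turtle"))
```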
The article presents the results of a study aimed at the development and software implementation of methods for mathematical modeling of the plant biomass pyrolysis process. The main objective of the study is to create an intelligent system for analyzing experimental data that makes it possible to study the technological parameters of the pyrolysis process. The key components of the developed system include a set of machine learning models (quality metrics RMSE < 3.9, R² > 0.8) for predicting the yield of final pyrolysis products; tools for analyzing differential scanning calorimetry (DSC) curves and assessing the thermal effect of reactions; and a subsystem for identifying and visualizing thermogravimetric (TGA) and DSC curves. The predictive models were trained on a sample of 750 records compiled from open datasets of full-scale experiments on the pyrolysis of plant raw materials. The target variables were the percentage content of solids, liquids, and gases in the final pyrolysis products; the independent variables included the physicochemical characteristics of the feedstock and the parameters of the pyrolysis process. The practical significance of this study lies in its potential for a deeper understanding of biomass decomposition processes. The developed system provides researchers and technologists with comprehensive tools for analyzing and identifying DSC curves, facilitating the interpretation of experimental data. The scientific novelty of this work lies in the creation of a unified research support platform based on the integration of machine learning methods with traditional approaches to pyrolysis process analysis, which significantly improves forecasting accuracy and helps optimize the pyrolysis process for obtaining end products. The results of this study can be applied in research on thermochemical processes, applied bioenergy, the chemical industry, and other areas related to biomass processing by pyrolysis. Future development of the system involves expanding the DSC curve database, improving the machine learning algorithms, and integrating additional experimental data analysis methods for a detailed interpretation of the pyrolysis process, taking into account its dynamics.
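The prediction step described above can be sketched as follows; synthetic data with assumed feature names (cellulose content, temperature, heating rate) stand in for the real 750-record sample, and a random forest regressor is used purely as an example of the kind of model such a system could employ.

```python
# Sketch of yield prediction on synthetic data; features and dependence are assumed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
n = 750
X = np.column_stack([
    rng.uniform(20, 50, n),    # cellulose content, %
    rng.uniform(300, 700, n),  # pyrolysis temperature, °C
    rng.uniform(5, 60, n),     # heating rate, °C/min
])
# Hypothetical dependence of char yield on the inputs, plus noise.
y = 45 - 0.04 * X[:, 1] + 0.2 * X[:, 0] + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5, "R2:", r2_score(y_te, pred))
```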
The article discusses the use of educational data mining to compare the academic performance and Unified State Exam (USE) scores of students enrolled in the university's IT programs. The relevance of the topic stems from the need to make objective and informed decisions to improve the effectiveness of the educational process. The main method is to obtain intelligently enhanced visual representations of the data for a more effective understanding of the patterns hidden in them.
The paper considers the possibilities of using the open-source software product Orange for mining educational data in order to determine the relationship between USE scores and students' academic performance. The following tasks are considered: assessing the impact of USE results on a student's academic success at the university, determining whether USE results can be used as an indicator of student academic achievement, and identifying at-risk students based on their USE results.
Visual programming in the user-friendly graphical interface of the Orange Data Mining software made it possible to obtain illustrative materials that clearly demonstrate the relationship between USE scores, the grades students receive in their first year, and the subsequent effectiveness of their studies at the university. The Box Plot widget provided the necessary statistical information.
Evaluating students on the basis of their USE results at the time of admission is useful and justified, because at that moment this information is the most accessible and informative. In the course of the study, students were classified based on the results of their studies in the early years at the university. The revealed patterns confirm the expediency of using USE results to predict the success of a university student. At the same time, the results show that some students with high USE scores subsequently drop out. This fact indicates the need for additional indicators, in particular continuous monitoring of attendance and current academic performance, in order to identify students in need of additional support in a timely manner.
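Although the study itself relied on Orange's visual programming, the underlying relationship and the at-risk flag can be illustrated in a few lines of Python; the data and column names below are synthetic stand-ins for the real student records.

```python
# Synthetic stand-in for the real student records (hypothetical column names).
import pandas as pd

df = pd.DataFrame({
    "use_score":      [92, 85, 78, 74, 68, 63, 88, 71],
    "first_year_gpa": [4.8, 4.5, 3.9, 3.6, 3.1, 2.8, 3.0, 4.0],
})

# Correlation between USE score at admission and first-year performance.
print(df.corr())

# A simple "at risk" flag: strong USE score but weak current performance,
# mirroring the article's call for ongoing monitoring alongside USE results.
at_risk = df[(df["use_score"] >= 80) & (df["first_year_gpa"] < 3.5)]
print(at_risk)
```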
The rapid expansion of modern cities and the development of energy infrastructure are bringing residential and public buildings closer to ultra-high-voltage overhead power lines. These lines emit high-intensity industrial-frequency electromagnetic fields (EMF), which pose a health hazard to the public, can cause malfunctions in sensitive electronic equipment, and have a negative impact on the environment. Ensuring electromagnetic safety is therefore particularly important. To reduce EMF intensity and ensure adequate levels of electromagnetic safety, the following measures are used: converting power lines to lower voltage, replacing overhead lines with cables, installing cable shields, etc. In today's environment, the selection of technically sound solutions should be based on the results of detailed computer modeling that takes into account the actual operating modes of electric power systems. This article presents the results of research aimed at developing models of electric power systems that include power lines equipped with cable shields. A distinctive feature of the proposed approach is its comprehensive consideration of factors ignored in simplified calculations, such as current and voltage asymmetry, the presence of higher harmonics introduced by nonlinear loads (in particular, rectifier electric locomotives), and the dynamics of load changes. The simulation was performed for a 500 kV transmission line, one section of which was equipped with shields in three configurations: passive cable shields, active cable shields, and the combined use of active and passive shielding. To evaluate the effectiveness of the cable shields, a model of an electrical network with a typical unshielded transmission line was also implemented. The simulation results showed that the combined installation of active and passive shields gives the greatest reduction in electric field strength (by up to 60%) and magnetic field strength (by up to 36%). The obtained results can be used in practice to select measures for improving electromagnetic safety conditions near high-voltage transmission lines.
For the reliable and fault-tolerant operation of an intelligent power system (IPS) operating under constantly changing conditions, real-time monitoring and dispatching control are necessary. Although many IPS control processes are automated, it is the dispatching personnel who keep the system running. The situational awareness (SA) of an IPS operator can be described as complete information about the current state of the IPS. An insufficient level of operator SA significantly increases the likelihood that the system will enter a cascading power outage phase, a transition confirmed by numerous incidents in power systems. The complexity of power systems is constantly growing, which increases the risk that human operators will be unable to manage the network in every situation unless their cognitive abilities are supported by appropriate tools. Such tools include the State Estimation (SE) procedure, the most important function providing real-time calculation of the current state of the power system.
The article discusses the use of the SE procedure to increase situational awareness in the dispatching control of an IPS (Smart Grid). The features of the Smart Grid that determine the relevance of research on the SA problem and the need to develop the situational awareness of Smart Grid operators are formulated. The operating conditions of the Smart Grid, as well as the role of the dispatcher in emergency situations and during recovery measures, are considered. The value of Power System State Estimation (PSSE) in improving the SA of the power system dispatcher is presented. The main data sources used to solve the PSSE problem are analyzed, and the requirements for the PSSE procedure when used in the SA structure are formulated. A Test Equations PSSE method that meets these requirements, developed at the Melentiev Energy Systems Institute (MESI SB RAS) and based on synchronized phasor measurements (PMU data), is presented. It is shown that the SE procedure provides information of the required quality for the tasks of dispatching control and monitoring of the Smart Grid and is an effective means of supporting decision-making and increasing the situational awareness of Smart Grid operators.
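The computational core that such an SE procedure builds on can be illustrated by the textbook weighted-least-squares formulation (this sketch is not the MESI Test Equations method itself): for a linearized measurement model z = Hx + e with weight matrix W, the estimate is x_hat = (H^T W H)^(-1) H^T W z. The measurement matrix and values below are illustrative.

```python
# Textbook weighted-least-squares state estimation on a tiny linearized model.
import numpy as np

H = np.array([[1.0, 0.0],    # direct measurement of state 1
              [0.0, 1.0],    # direct measurement of state 2
              [1.0, -1.0]])  # a flow-type measurement of the difference
z = np.array([1.02, 0.97, 0.06])                       # measured values (illustrative)
W = np.diag([1 / 0.01**2, 1 / 0.01**2, 1 / 0.02**2])   # inverse error variances

x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)      # WLS estimate
residuals = z - H @ x_hat                              # basis for bad-data checks
print("state estimate:", x_hat, "residuals:", residuals)
```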
The article presents the experience of creating BIM models of thermal power plant boilers, as well as the features of modeling individual elements of a boiler unit. It examines the failure statistics of thermal power equipment and provides a comparative analysis of various boiler equipment elements and heating surfaces. Based on these data, a comparative assessment of the reliability of various boiler unit elements was carried out, which justifies the relevance of detailed modeling of heating surfaces. It is shown that the BIM model is not an end product, but an integral component of a comprehensive information system. In this regard, a concept for integrating the developed models into the structure of a “digital twin” for use in life cycle management and maintenance systems for energy assets is proposed. The technological and algorithmic aspects of creating models for each heating surface and node are described. An algorithmic approach to constructing the geometry of boiler equipment elements is proposed, based on the formalization of modeling operations in the form of a specialized language of sequential transformations (“turtle language”). This language describes a sequence of operations for stretching, rotating, and spatially transforming pipe elements, which allows the complex spatial configuration of heating surfaces to be reproduced based on parametric data from the design documentation. The practical implementation of the approach is carried out in the form of a software library in Python using open tools for forming IFC models, ensuring compatibility with BIM environments and engineering analysis systems. The paper presents a mathematical apparatus for coordinate transformation and algorithms for transitioning from local coordinate systems to a global model system, ensuring correct positioning of elements during sequential transformations. The developed toolkit allows automatic generation of detailed three-dimensional models of boiler units, suitable for subsequent use in CAE calculations, equipment condition monitoring systems, repair planning, and personnel training. The results obtained demonstrate the effectiveness of the proposed approach and its potential for use in the creation of digital twins of thermal power facilities and the development of equipment condition monitoring systems.
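The coordinate-transformation step described above can be sketched as a composition of homogeneous translation and rotation matrices applied to a pipe element defined in its local frame; the specific "turtle" sequence and dimensions below are hypothetical and do not reproduce the authors' library.

```python
# Placing a locally defined pipe point into the global model frame by composing
# homogeneous transformations, as in a sequence of "turtle" operations.
import numpy as np

def translation(dx, dy, dz):
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

def rotation_z(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.eye(4)
    R[:2, :2] = [[c, -s], [s, c]]
    return R

# Local coordinates of a straight pipe segment's end point (metres).
p_local = np.array([2.0, 0.0, 0.0, 1.0])

# Hypothetical "turtle" sequence: move to the header, turn 90 degrees, step along the run.
M = translation(10.0, 5.0, 3.0) @ rotation_z(np.pi / 2) @ translation(0.0, 0.0, 1.5)
p_global = M @ p_local
print(p_global[:3])
```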
The article presents a methodology for improving the lifecycle management of an enterprise's information infrastructure (IIP) based on a logical-axiological approach and the theory of aggregated systems. The technique is an innovative tool that makes it possible to take into account the complex hierarchical structure of the IIP, the nonlinear interdependencies between its components, and the multilevel categorization of equipment criticality. Unlike traditional methods that use linear convolutions such as weighted averages, the proposed approach is based on nonlinear aggregation implemented within the framework of fuzzy logic. The objective function, developed on the basis of the concepts of "functional deficit" and "component value," takes into account the probabilistic characteristics of the components and their criticality for the functioning of the system. For equipment of category K1, aggregation is applied according to the principle of necessity (using the max function), which ensures maximum system reliability. Category K3 is characterized by aggregation based on the principle of sufficiency, minimizing excess costs. Category K2 assumes mixed aggregation based on probabilistic pooling, which allows flexible adaptation to changing operating conditions. This approach makes it possible to identify critical IIP components whose failure can lead to a complete loss of system functionality. It ensures high interpretability of management decisions and allows decision makers to choose management strategies (operation, repair, or replacement) in a well-founded way. The technique can be integrated into a decision support system (DSS) and the digital twin of the IIP, which increases its practical applicability and adaptability to dynamically changing operating conditions. To demonstrate the correctness and effectiveness of the proposed approach, examples of calculations are given for a mission-critical train traffic control server (category K1) and a network switch (category K2). The results include a formalized ontological model of the IIP adapted to the requirements of GOST R 53114-2008, which confirms its compliance with regulatory requirements and its high degree of detail. The calculations demonstrate that the proposed approach significantly increases the validity, interpretability, and security of management decisions under high uncertainty and critical dependence on the reliability of the enterprise's information infrastructure.
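One possible reading of the three aggregation modes named above is sketched below for per-component functional-deficit values in [0, 1]; this is an illustrative interpretation, not the exact objective function of the methodology.

```python
# Illustrative aggregation operators for the three criticality categories.
def aggregate_k1(deficits):
    """Necessity principle: the system deficit is driven by the worst component."""
    return max(deficits)

def aggregate_k3(deficits):
    """Sufficiency principle: the best-performing redundant component suffices."""
    return min(deficits)

def aggregate_k2(deficits):
    """Probabilistic pooling: 1 - prod(1 - d_i), a mixed, nonlinear convolution."""
    p = 1.0
    for d in deficits:
        p *= (1.0 - d)
    return 1.0 - p

deficits = [0.05, 0.20, 0.10]  # hypothetical per-component functional deficits
print(aggregate_k1(deficits), aggregate_k2(deficits), aggregate_k3(deficits))
```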
This article proposes an approach for creating a digital profile of a region's socio-economic development through automated text data analysis. Grant project descriptions from the Republic of Mari El (2023–2025) were used as a key indicator. By combining classical LDA with the modern neural BERTopic model, latent thematic patterns shaping the region's current development agenda were identified and visualized. The resulting digital profile allows for an objective evaluation of civic initiatives, highlights dominant areas such as social support, human capital development, sports, and education, and assesses their alignment with regional strategic goals.
The study found that the target group "children" forms the core of the grant agenda, centering on clusters of educational, rehabilitation, and inclusive projects. LDA modeling identified five consistent thematic areas, including support for families with children and the adaptation of individuals with disabilities. BERTopic, in turn, enabled the detailed elaboration of narrow niche practices, such as inclusive sports, rehabilitation for people with visual impairments, and cultural and leisure programs for children with disabilities, confirming the high semantic sensitivity of the neural network approach. Comparison of the resulting thematic clusters with the draft Strategy for Socioeconomic Development of the Republic of Mari El revealed both areas of complete priority overlap (childhood support, inclusion) and strategic imbalances: the underrepresentation of projects in the creative industries, tourism, and youth work. The proposed methodology demonstrates that machine learning methods open up new opportunities for monitoring and diagnosing the state of regional and municipal systems, providing management teams with a data-driven tool for decision-making and adjusting grant policy. Thus, thematic modeling enables the translation of unstructured text data into an objective analysis, transforming descriptions of civic initiatives into a reliable tool for identifying real social priorities and the basis for strategic planning.
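The classical LDA stage of such an analysis can be sketched as follows (the BERTopic stage is analogous in spirit); a handful of toy grant-style descriptions stand in for the real corpus from the Republic of Mari El.

```python
# Sketch of LDA topic extraction on toy grant-style descriptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "inclusive sports sections for children with disabilities",
    "rehabilitation programmes for people with visual impairments",
    "support for families with children and early development centres",
    "educational courses and career guidance for school students",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Top words per latent topic give the interpretable thematic clusters.
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```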
This article is devoted to the development of mathematical tools that allow for the justification of strategic risk management decisions when implementing a construction plan within the established time frame. The process of risk management in the construction industry requires construction companies to adapt to modern realities and implement modern risk management systems for the successful implementation of construction projects. One of the most important indicators of project implementation effectiveness is the compliance of the construction project implementation deadline with the planned values. It is important to correctly assess the construction deadlines, taking into account the risks affecting the project implementation. The purpose of the work is to develop tools that allow modeling the duration of a construction project implementation under risk conditions. The paper analyzes existing approaches to risk management in construction projects and proposes a methodology for developing a risk management strategy, tested on the example of the construction of a typical residential building. For the project under consideration, key risks affecting the duration of construction have been identified. An example of the formation of a matrix of controllable parameters is shown. A method for constructing a game matrix based on a network graph of the construction plan for given player and nature strategies is described: the duration of work is simulated using the Monte Carlo method, and a probability distribution of the project completion time is obtained; the possibility of reducing the computational complexity of the algorithm by approximating a series of game matrix values using the least squares method is shown; the game matrix is constructed, for example, based on the characteristics of the probability distribution “construction period with a given probability (reliability)”, and a decision-making strategy based on Laplace's principle is defined. The application of the developed toolkit for assessing the risks of project failure within the established deadlines will allow for effective risk management in construction projects. The proposed approach may be of high practical significance in the implementation of construction projects and can be integrated into the risk management system at various stages of the construction project life cycle.
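The Monte Carlo step described above can be sketched on a toy network graph: activity durations are sampled, the project completion time is taken as the longest path, and the "construction period with a given reliability" is read off as a quantile of the simulated distribution. The network structure, distributions, and durations below are hypothetical.

```python
# Monte Carlo simulation of project completion time on a toy activity network.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 10_000

# Triangular durations (min, mode, max) in days for a toy network:
# A -> B -> D and A -> C -> D, so completion = A + max(B, C) + D.
A = rng.triangular(18, 20, 30, n_trials)
B = rng.triangular(38, 45, 60, n_trials)
C = rng.triangular(35, 40, 70, n_trials)
D = rng.triangular(9, 10, 15, n_trials)

completion = A + np.maximum(B, C) + D
for reliability in (0.5, 0.8, 0.95):
    print(f"duration with reliability {reliability:.0%}:",
          round(float(np.quantile(completion, reliability)), 1), "days")
```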
This article presents the results of a study aimed at developing an "Educational Program Management" module as part of the university's electronic information and educational environment (EIEE). The module's mathematical and algorithmic support enables forecasting accreditation monitoring indicators, thereby improving the quality of educational management. The study is based on an analysis of long-term data from 20 educational programs at Irkutsk State Agrarian University. The scientific novelty of this work lies in the systematization of data collection, storage, and processing processes, as well as the development of specialized forecasting algorithms adapted to the specific statistical parameters of monitoring indicator time series. Unlike existing developments, the proposed algorithms are implemented as a practical tool integrated into the university's EIEE. To forecast indicators such as the average Unified State Exam score, the percentage of successful completions, and the percentage of employed graduates, a combination of methods was used, including correlation and regression analysis, expert assessment methods, and probabilistic modeling based on a three-parameter gamma distribution. It is proposed to differentiate the accuracy of forecast assessments based on the level of preparation of applicants. The developed module was tested on real data, and the forecast accuracy was assessed. A retrospective forecast of key accreditation indicators was performed, the values of which were compared with actual data. The average relative forecast error was 12.8% for the average Unified State Exam score, 18.8% for the percentage of successful completions, and 6.7% for the percentage of employed graduates. The obtained results demonstrate the practical applicability of the module for the early assessment of educational programs' compliance with accreditation requirements and the formation of informed management decisions.
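The probabilistic-modeling ingredient can be sketched with SciPy: a three-parameter gamma distribution (shape, location, scale) is fitted to an indicator series and a forecast interval is read off. The data below are synthetic, not the university's records.

```python
# Fitting a three-parameter gamma distribution to a synthetic indicator series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
use_scores = rng.gamma(shape=9.0, scale=2.0, size=200) + 50  # synthetic average USE scores

shape, loc, scale = stats.gamma.fit(use_scores)
dist = stats.gamma(shape, loc=loc, scale=scale)

print("expected value:", round(dist.mean(), 1))
print("80% interval:", [round(v, 1) for v in dist.ppf([0.10, 0.90])])
```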
The paper addresses the problem of calculating uncertainty bands for linear regression with correlated input data. To estimate confidence bands, the generalized least squares method is applied. To determine their boundaries, a coverage factor is introduced; when multiplied by the standard uncertainty of the regression at specific points, this factor yields the required limits. The relevance of this research stems from the fact that standard methods for constructing confidence intervals, based on the assumption of independent errors, lead to a systematic underestimation of the uncertainty band width in the presence of autocorrelation. This, in turn, creates a false impression of forecast accuracy and may result in erroneous statistical conclusions. To construct confidence bands correctly, the structure of the temporal dependence of errors must be taken into account. This study considers the following models of correlated noise: autoregressive processes with exponential correlation decay, and colored noise characterized by power-law decay and long-term memory.
Unlike the classical case of independent errors, where the coverage factor corresponds to a quantile of the normal distribution, no analytical expression exists for this factor in the presence of correlation. The value of the factor directly depends on the structure of the error covariance matrix, the training sample size, and the forecasting horizon. To determine it, the paper employs a numerical Monte Carlo method combined with an iterative bisection procedure, which allows finding the coverage factor with a specified accuracy.
Specialized software has been developed in Python using the NumPy and SciPy libraries. The software implementation solves the problem of estimating hyperbolic-shaped linear regression bands when the error correlation structure is described by the aforementioned models. Corresponding examples for estimating the coverage factor and regression bands are provided, along with a link to the software implementation. The modular architecture of the developed program allows for extension to other types of correlation structures. The practical relevance of the work stems from the need for correct uncertainty estimation in the statistical processing of experimental data obtained when solving measurement problems.
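A compact sketch of the band construction is given below (it is not the authors' full program): generalized least squares for a straight line with AR(1)-correlated errors and the standard uncertainty of the regression at arbitrary points; the coverage factor k that scales this uncertainty, which the paper determines by Monte Carlo with bisection, is left as a placeholder.

```python
# GLS fit with AR(1) errors and the standard uncertainty of the regression line.
import numpy as np

n, rho, sigma = 30, 0.6, 0.5
t = np.arange(n, dtype=float)
X = np.column_stack([np.ones(n), t])            # design matrix for y = b0 + b1*t

# AR(1) error covariance: Sigma_ij = sigma^2 * rho^|i-j|.
Sigma = sigma**2 * rho ** np.abs(np.subtract.outer(t, t))

rng = np.random.default_rng(3)
y = X @ np.array([1.0, 0.2]) + rng.multivariate_normal(np.zeros(n), Sigma)

Sinv = np.linalg.inv(Sigma)
cov_beta = np.linalg.inv(X.T @ Sinv @ X)        # GLS parameter covariance
beta = cov_beta @ X.T @ Sinv @ y                # GLS estimate

x0 = np.column_stack([np.ones(50), np.linspace(0, n + 10, 50)])
u = np.sqrt(np.einsum("ij,jk,ik->i", x0, cov_beta, x0))  # standard uncertainty at x0
k = 2.0  # placeholder; the paper determines k numerically for the chosen noise model
band_half_width = k * u
print(beta, band_half_width[:5])
```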
In recent years, applications providing users with physical activity programs have become very popular. This trend is driven by the accelerating pace of life, the lack of time to visit fitness centers, and the growing need to maintain a healthy level of physical activity. Digital platforms offer users a variety of training programs: specialized workouts for various muscle groups, cardio programs, and customized training plans tailored to the user's fitness level, goals, and limitations. The main advantage of such apps is the ability to train anywhere and anytime. However, a significant drawback of such apps is the lack of professional supervision to ensure proper exercise performance. Incorrect technique can not only reduce the effectiveness of workouts but also lead to injuries, such as strains, joint damage, and muscle damage. This is especially critical for beginners who have not yet mastered basic exercises and safety precautions. The goal of this work is to create an intelligent system for assessing the quality of exercise performance. The system is based on analyzing the video stream from the user's device camera, comparing the user's technique with a reference model, and automatically recognizing the trainer's reference technique. The training process is accompanied by visual feedback: indication of correct body position, detection of technique errors, and display of key body points (joints, limbs, etc.) in real time. This approach allows users to receive high-quality feedback immediately, significantly increasing the effectiveness of training and reducing the risk of injury due to improper technique. The system can provide recommendations for adjusting posture, tempo, and range of motion, making independent exercise safer and more effective. The study demonstrated the feasibility of using the developed system for static exercises, such as yoga. Further research and development focuses on dynamic exercises and significantly expanding the system's functionality and scope of application.
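The feedback core of such a system can be sketched independently of any particular pose estimator: given 2D keypoints (from, say, MediaPipe Pose, which is an assumption here rather than the system's own detector), a joint angle is computed and compared with the reference technique; the keypoints, reference angle, and tolerance below are hypothetical.

```python
# Joint-angle comparison against a reference pose for a static exercise.
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Hypothetical hip-knee-ankle keypoints (pixel coordinates) for the user.
user_knee = joint_angle((310, 420), (330, 520), (325, 630))
reference_knee = 172.0   # hypothetical reference angle from the trainer's technique
tolerance_deg = 8.0

if abs(user_knee - reference_knee) > tolerance_deg:
    print(f"adjust knee angle: {user_knee:.0f} deg vs reference {reference_knee:.0f} deg")
else:
    print("knee position OK")
```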
This article describes the development of software for research in historical and mathematical linguistics that uses a multimetric approach and the analytic hierarchy process. The program is implemented as a desktop application in the Python programming language; the PyQt5 library was used to develop the graphical interface. Relevant mathematical methods for research in historical and mathematical linguistics are considered and implemented, including the transformation of words into A.D. Dolgopolsky's consonant classes and various word similarity metrics: the number of identical letters in two words (Ratcliff-Obershelp, RO), the length of the longest common substring (LCS), and the number of elementary edit operations needed to transform one word into another (Levenshtein distance, L). The novelty of this work lies in the application of a multimetric approach to the analysis of the list of correspondences and the construction of rankings based on the analytic hierarchy process. The "Linguist's Calculator" makes it possible to identify hidden lexical relationships between toponyms and lists of corresponding words, as well as to study the origins of toponyms. The program has been tested on toponyms of the Irkutsk region whose meanings have been lost, and it identifies the most likely matches among candidate words from various languages, including Evenki, Buryat, and Old Russian. The program supports input of a toponym and candidate words, selection of a word transformation model, and output and export of results, sorted in descending order of the metric sum, to an Excel file. Verification procedures were conducted for the hierarchy analysis method in word multimetrics, using word sets that were deliberately modified to test the method's robustness to word distortions. The study demonstrated that the algorithm is robust to distortions: with 50% noise, the quality of matching declines only gradually. This robustness makes the algorithm suitable for working with real (including distorted) toponyms. In the future, we plan to add functionality for quantitatively assessing borrowings between languages to expand the program's application in historical and mathematical linguistics for analyzing linguistic interactions and reconstructing the etymology of toponyms in the Irkutsk region.
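The three word-similarity metrics named above can be sketched in a few lines; difflib's ratio() implements the Ratcliff-Obershelp measure, while the LCS length and Levenshtein distance are computed by small dynamic programs. The toponym/candidate pair is hypothetical.

```python
# The three metrics applied to one toponym/candidate pair.
from difflib import SequenceMatcher

def longest_common_substring(a: str, b: str) -> int:
    best, prev = 0, [0] * (len(b) + 1)
    for ch_a in a:
        cur = [0] * (len(b) + 1)
        for j, ch_b in enumerate(b, 1):
            if ch_a == ch_b:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ch_a in enumerate(a, 1):
        cur = [i]
        for j, ch_b in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ch_a != ch_b)))
        prev = cur
    return prev[-1]

toponym, candidate = "kitoy", "kituy"   # hypothetical pair
print("RO:", SequenceMatcher(None, toponym, candidate).ratio())
print("LCS length:", longest_common_substring(toponym, candidate))
print("Levenshtein:", levenshtein(toponym, candidate))
```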