Volume №3(39) / 2025
Articles in this issue
The article proposes a method for constructing generative models based on fast-learning pyramidal neural networks (FNN). The model construction relies on probabilistic principal component analysis (PPCA). PPCA makes it possible to analytically construct the matrices of optimal decoders capable of reconstructing images from low-dimensional random latent variables distributed according to a normal probability law. Implementing PPCA decoders in the FNN class allows the decoders to be represented as series-parallel structures that deliver high performance through structural parallelization of operations. The paper presents methods for training FNNs to realize the decoder matrices. Training is completed in a finite number of steps and does not require iterative procedures. Examples of constructing the implementing FNNs for the MNIST dataset are given, and results of generating MNIST-like images are shown. A comparison with the classical variational autoencoder is performed, and the area of expedient use of PPCA generative models is defined.
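For reference, the sketch below shows one way the analytical PPCA decoder can be obtained, using the standard closed-form maximum-likelihood solution in NumPy on synthetic data. It is not the authors' FNN realization; the latent dimension, data shapes, and the omission of the rotation ambiguity and observation noise are illustrative assumptions.

```python
# Minimal PPCA decoder sketch (closed-form solution), assuming centered flattened images.
import numpy as np

def fit_ppca_decoder(X, q):
    """Fit PPCA analytically: X is (n_samples, d), q is the latent dimension."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)              # sample covariance (d x d)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]                # descending eigenvalues
    eigval, eigvec = eigval[order], eigvec[:, order]
    sigma2 = eigval[q:].mean()                      # noise variance from discarded components
    W = eigvec[:, :q] * np.sqrt(np.maximum(eigval[:q] - sigma2, 0.0))
    return W, mu, sigma2

def generate(W, mu, n_samples, rng=None):
    """Decode random latent vectors z ~ N(0, I) into image-space samples (noise term omitted)."""
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal((n_samples, W.shape[1]))
    return z @ W.T + mu

# Toy usage on synthetic 28x28 "images" flattened to 784 dimensions.
X = np.random.default_rng(1).standard_normal((500, 784))
W, mu, s2 = fit_ppca_decoder(X, q=20)
print(generate(W, mu, n_samples=5).shape)           # (5, 784)
```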
The paper considers the machine learning task of classifying the states of objects with a complex structure. The state of an object is understood as a category that characterizes the properties of the object at a given time and is described by a set of features. The complexity of the object structure is expressed by features that require preliminary processing, have hierarchical or sequential relationships, or contain unstructured elements. The data to be processed may contain inaccuracies and errors. Neural networks are often applied to classification tasks, and fuzzy logic systems are applicable to classification in the context of data fuzziness. Both tools have the properties of a universal approximator; however, neural network results are hard to interpret, while the application of a fuzzy logic system requires the preliminary construction of fuzzy rules. The paper considers an approach that combines a fuzzy logic system and a neural network through a fuzzy neuron activation function. The model of the fuzzy function is described by fuzzy sets, membership functions, and fuzzy activation rules. The input of the fuzzy function passes through transformation stages that include fuzzification, fuzzy inference using the fuzzy logic system's rules, and defuzzification. The approach makes it possible to form a fuzzy activation function whose parameters change during training of the neural network model. An improved normalization of the input of the fuzzy activation function is proposed to ensure that the information signal meets the requirements of the fuzzy function. Neuro-fuzzy classification systems are analyzed in classification tasks for objects of various nature: technical, biomedical, and textual objects. A comparative assessment of the accuracy of the neuro-fuzzy classification systems relative to similar ANN-based systems showed an increase of 2 to 9% in a number of experiments.
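As a rough illustration of the three stages named above (fuzzification, rule inference, defuzzification), the following sketch implements a zero-order Sugeno-style fuzzy activation function in NumPy. The membership centers, widths, and rule consequents are placeholders, not the paper's trained parameters.

```python
# Hedged sketch of a fuzzy activation function; parameters are illustrative only.
import numpy as np

def gauss_mf(x, c, s):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def fuzzy_activation(x, centers=(-1.0, 0.0, 1.0), widths=(0.5, 0.5, 0.5),
                     rule_outputs=(-1.0, 0.0, 1.0)):
    """Assumes the input has been normalized to roughly [-1, 1]."""
    x = np.asarray(x, dtype=float)
    # 1. Fuzzification: membership degree of x in each fuzzy set ("low", "medium", "high").
    mu = np.stack([gauss_mf(x, c, s) for c, s in zip(centers, widths)])
    # 2. Inference: each rule "IF x is A_i THEN y = b_i" fires with strength mu_i.
    # 3. Defuzzification: weighted average of the rule consequents.
    return (mu * np.asarray(rule_outputs)[:, None]).sum(axis=0) / (mu.sum(axis=0) + 1e-12)

print(fuzzy_activation(np.array([-2.0, -0.5, 0.0, 0.5, 2.0])))
```

In a neuro-fuzzy setting of the kind described in the abstract, the centers, widths, and rule outputs would be treated as trainable parameters of the activation function.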
This study examines the limitations of classical radio signal recovery methods and proposes the use of neural networks, namely autoencoders, as a solution. A variational autoencoder architecture has been developed to recover radio signals transmitted from an unmanned aerial vehicle. To train and test the model, a large dataset containing radio signals with various modulation types and noise levels was used: DeepSig Dataset RadioML 2018.01A. The aim of the study is to develop an architecture that achieves better radio signal recovery metrics than classical methods. The quality metrics used are PSNR, MSE, and MAE, and the model is trained with the Adam optimizer. The Kalman filter was also applied to the dataset and showed results roughly two orders of magnitude worse on all quality metrics. The results obtained show that classical algorithms for restoring distorted radio signals do not always provide acceptable recovery quality.
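A minimal sketch of a variational autoencoder for I/Q radio frames of the RadioML 2018.01A shape (2 x 1024), trained with Adam on an MSE reconstruction term plus the KL divergence, is shown below. The layer sizes, latent dimension, and loss weighting are assumptions and do not reproduce the architecture developed in the paper.

```python
# Hedged VAE sketch for I/Q frames; not the authors' architecture.
import torch
import torch.nn as nn

class SignalVAE(nn.Module):
    def __init__(self, frame_len=1024, latent_dim=32):
        super().__init__()
        d = 2 * frame_len                                   # flattened I/Q frame
        self.enc = nn.Sequential(nn.Linear(d, 512), nn.ReLU(), nn.Linear(512, 128), nn.ReLU())
        self.mu, self.logvar = nn.Linear(128, latent_dim), nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, d))

    def forward(self, x):
        h = self.enc(x.flatten(1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.dec(z).view_as(x), mu, logvar

def vae_loss(recon, x, mu, logvar, beta=1.0):
    mse = nn.functional.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return mse + beta * kl

# Toy usage: one Adam step on random stand-ins for noisy I/Q frames.
model = SignalVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 2, 1024)
recon, mu, logvar = model(x)
loss = vae_loss(recon, x, mu, logvar)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```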
The article analyzes the scientific topics of news items and compares information dissemination routes for 2016-2025. The database includes more than 17 thousand news items and documents that were supplied with metadata and systematized on the corporate website of the Siberian Branch of the Russian Academy of Sciences, the SB RAS Portal. The possible impact of artificial intelligence (AI) on the distortion of information obtained with its help is analyzed, which calls for expert control when populating websites.
Currently, the global community is showing significant interest in the field of knowledge management. Research on this topic is also being carried out in Russia, although less actively than abroad. Of great importance in this research are methods and tools for knowledge management within the knowledge ecosystem. A knowledge ecosystem can include not only knowledge itself but also information about the specialists who possess this knowledge, as well as their competencies, which can be determined by knowledge mapping. The article proposes an approach to building a knowledge ecosystem for knowledge-intensive organizations in the energy sector based on knowledge mapping. The novelty of the research lies in adapting a comprehensive knowledge mapping method that integrates ontological modeling and the assessment of employee competencies. The practical significance of the work is confirmed by the approbation of the method in the Department of Artificial Intelligence Systems of the ISEM SB RAS, where key experts and development areas have been identified and knowledge management processes have been optimized.
The article discusses a methodology for integrating cognitive and mathematical modeling to analyze directions of development of the fuel and energy complex (FEC) from the perspective of energy security. The proposed approach makes it possible to formalize and automate the transition from a qualitative analysis of threats and countermeasures to a quantitative assessment of their impact on the FEC using optimization models. The methodology is based on the use of ontologies to structure domain knowledge, the construction of cognitive maps to visualize cause-and-effect relationships, and the development of relational databases to integrate semantic and mathematical models. The application of the approach is demonstrated with a computational experiment involving threat modeling and evaluation of the effectiveness of energy security strategies. The results of the study can be used to support managerial decisions under uncertainty and in crisis scenarios.
The paper presents a comprehensive study of tariff policy optimization in the energy sector of the Republic of Belarus and proposes innovative methods for predictive modeling of heat loads using neural networks. The authors conduct a detailed analysis of the current state of tariff policy in Belarus and identify the key problem: a high level of cross-subsidization, with electricity tariffs for industry significantly exceeding those for households, which affects the competitiveness of enterprises. The article notes that the main share of the cross-subsidy embedded in electricity tariffs for legal entities covers consumers' underpayment for heat energy relative to justified costs. The study reveals the imperfection of existing heat load forecasting methods, which rely mainly on statistical data and do not take many dynamic factors into account. As a solution, the authors propose a neural network forecasting technology that uses a hybrid deep architecture combining the advantages of recurrent neural networks with long short-term memory and convolutional neural networks. The novelty of the study lies in a comprehensive approach to tariff policy optimization through improved heat load forecasting accuracy, which makes it possible to optimize the operating modes of heat generating equipment, minimize transportation losses, and reduce operating costs. The authors note that the introduction of the proposed technology can reduce specific fuel consumption by 8-12% and operating costs by 15-20%, creating a significant reserve for reducing tariffs without affecting the financial stability of energy companies. The results of the study have practical significance for reforming tariff policy in the energy sector of the CIS countries and improving the competitiveness of national economies.
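A sketch of the general kind of hybrid convolutional plus LSTM forecaster described above is given below. The layer sizes, window length, and input features are assumptions for illustration, not the authors' model.

```python
# Illustrative hybrid CNN + LSTM heat-load forecaster; all dimensions are assumed.
import torch
import torch.nn as nn

class HeatLoadForecaster(nn.Module):
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        # The 1-D convolution extracts short-term patterns across the input window...
        self.conv = nn.Sequential(nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU())
        # ...and the LSTM captures longer-term temporal dependencies.
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)        # next-step heat load

    def forward(self, x):                       # x: (batch, window, n_features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(h)
        return self.head(out[:, -1])

model = HeatLoadForecaster()
window = torch.randn(8, 48, 4)                  # e.g. 48 hourly steps of 4 weather/load features
print(model(window).shape)                      # torch.Size([8, 1])
```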
At present, stand-alone hybrid renewable energy systems (HRES) combining diesel generators and renewable energy sources are an effective way to improve the efficiency of electricity supply to consumers in isolated and hard-to-reach areas. The design of an HRES involves an optimization problem in which the optimal equipment configuration and installed capacities must be determined under multiple criteria. For multi-criteria problems, most studies use a two-level approach: at the top level, Pareto-optimal HRES configurations are formed using heuristic multi-criteria optimization algorithms; at the bottom level, each HRES configuration is simulated for a detailed evaluation against a number of criteria. A large number of heuristic algorithms are applied at the top level for planning the development of energy systems and HRES; each has advantages and disadvantages, which makes choosing an algorithm difficult. This study evaluates heuristic multi-criteria optimization algorithms based on evolutionary algorithms, namely NSGA-II, NSGA-III, AGE-MOEA, and MOEA/D, using Python and the Pymoo package. To compare the algorithms, indicators were used that evaluate the uniformity of the Pareto set, the distance between the true Pareto set and the Pareto set formed by the heuristic algorithm, the efficiency of the algorithm in reaching the best criteria values, and the time required to form the Pareto set. The algorithms were evaluated on the problem of developing a hybrid renewable energy system in a remote area of the Sakhalin region. According to the evaluation results, the NSGA-II heuristic algorithm should be chosen for multi-criteria HRES sizing at the top level of the two-level approach, as it produces a high-quality Pareto set and reaches the minimum criteria values in less time than the other algorithms.
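A brief sketch of the kind of Pymoo-based evaluation described above is shown below, run here on a standard benchmark (ZDT1) rather than the HRES sizing model, which is not public; the population size, generation count, and choice of the IGD indicator are illustrative.

```python
# Hedged Pymoo sketch: NSGA-II on a benchmark plus one Pareto-front distance indicator.
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize
from pymoo.problems import get_problem
from pymoo.indicators.igd import IGD

problem = get_problem("zdt1")                    # stand-in for the HRES sizing problem
algorithm = NSGA2(pop_size=100)
res = minimize(problem, algorithm, ("n_gen", 200), seed=1, verbose=False)

# Distance between the obtained front and the true Pareto front, one of the
# comparison indicators mentioned in the abstract.
igd = IGD(problem.pareto_front())
print("IGD:", igd(res.F))
```

The same loop can be repeated with NSGA3, AGEMOEA, or MOEAD objects from Pymoo to compare the algorithms on identical termination settings.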
The paper presents a methodology for creating a one-dimensional thermal-hydraulic model of a single heated channel with supercritical water. The model can be used to determine the boundaries of thermal-hydraulic stability. The technique is based on the principles of the semi-implicit SIMPLE algorithm. The model includes the mass, momentum, and energy conservation equations and takes the compressibility of the fluid into account. The model is implemented in in-house C++ code. The IAPWS-IF97 library, implemented in the high-speed seuif97 package, is used to determine the thermophysical properties of water. Several correlations for the hydraulic resistance coefficient are available in the program, but the Blasius formula is selected for testing the model. The presented methodology involves solving the basic thermal-hydraulic equations with the flow continuity checked four times per time step. The continuity cycle ends when a mass imbalance of 10⁻⁶ is reached or after 10 iterations. The results of testing the model are demonstrated.
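Two concrete pieces named in the abstract, the Blasius correlation for the hydraulic resistance coefficient and the continuity cycle that stops at a mass imbalance of 10⁻⁶ or after 10 iterations, can be sketched schematically as follows; the solver callbacks are placeholders and the snippet does not reproduce the authors' C++ implementation.

```python
# Schematic sketch of the Blasius correlation and the continuity stopping criterion.
def blasius_friction_factor(re):
    """Blasius correlation for turbulent flow in smooth channels."""
    return 0.3164 / re ** 0.25

def continuity_loop(correct_pressure, mass_imbalance, tol=1e-6, max_iter=10):
    """Inner SIMPLE-style loop: apply corrections until the mass imbalance
    falls below tol or the iteration limit is reached. The two arguments are
    placeholders for solver callbacks."""
    for it in range(1, max_iter + 1):
        correct_pressure()
        if abs(mass_imbalance()) < tol:
            return it
    return max_iter

print(blasius_friction_factor(5.0e4))            # ~0.0212 for Re = 5e4

# Toy usage with dummy callbacks: the imbalance halves at each correction step.
state = {"imb": 1.0e-3}
def correct(): state["imb"] *= 0.5
print(continuity_loop(correct, lambda: state["imb"]), state["imb"])
```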
This paper considers the possibility of implementing a neural network controller in the control system for the spray irrigation of condensers of refrigeration units used in hockey stadiums. The relevance of the study is due to the need to improve the energy efficiency and reliability of cooling systems that maintain optimal ice temperature conditions in the arena. Traditional methods of regulating the irrigation process have a number of disadvantages, such as the complexity of tuning system parameters and low adaptability to changing operating conditions. The aim of this work is to develop and implement a neural network model capable of automatically regulating condenser irrigation depending on current external factors, including ambient temperature, humidity, and refrigeration load. To achieve this goal, theoretical studies analyzing existing regulation methods were conducted, and experimental tests of the developed neural network were carried out on a real facility. In the course of the research, a neural network architecture was developed, including several perceptron layers trained on data about the operation of the refrigeration plant over a certain period. Different approaches to neural network training were considered, including error backpropagation and gradient descent. The experimental part of the work showed high accuracy in predicting the optimal irrigation strategy under different operating conditions. An LSTM neural network was chosen for its ability to account for temporal dependencies and adapt to changing operating conditions. The model was trained on historical refrigeration plant operation data, including ambient temperature, plant load, and coolant characteristics. Gradient descent and regularization techniques were used during training to prevent overfitting. The experiments showed that introducing the neural network controller significantly improved the accuracy of maintaining the set parameters of the cooling system and reduced the power consumption of the refrigeration system. In addition, the system response time to changes in external conditions decreased, which is especially important when holding sporting events at a hockey stadium.
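The following sketch illustrates, under stated assumptions, an LSTM regressor that maps a recent window of sensor readings to an irrigation-rate setpoint; dropout stands in for the regularization mentioned above, and the input features, window length, and output interpretation are hypothetical rather than taken from the paper.

```python
# Illustrative LSTM irrigation controller sketch; all names and dimensions are assumed.
import torch
import torch.nn as nn

class IrrigationController(nn.Module):
    def __init__(self, n_inputs=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.drop = nn.Dropout(0.2)             # regularization against overfitting
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, time, [temperature, humidity, load])
        out, _ = self.lstm(x)
        # Squash to (0, 1) and interpret as a fraction of the maximum irrigation rate.
        return torch.sigmoid(self.head(self.drop(out[:, -1])))

controller = IrrigationController()
recent = torch.randn(1, 60, 3)                  # e.g. the last 60 one-minute readings
print(float(controller(recent)))                # irrigation setpoint in [0, 1]
```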
This study aims to create metamodels of system organization of varying complexity and a mathematical description of the representative capacities that ensure pairwise subsystem interactions. To solve this problem, the basic ideas of A.A. Bogdanov's general organizational science are used, in which an organization is an ordered set of internal relationships and system properties with a certain mode of functioning. The mathematical foundations of systems theory are also employed, namely set theory, category theory, and the differential geometry of fiber bundles of spaces over a manifold of connections of variables. At various levels of abstraction, normative metamodels of a polysystem without local feedback of coordination management are created in the form of commutative diagrams of the interaction of monosystems, taking into account external or internal factors of influence and possible changes in development goals. For metamodeling of various applications, either existing software that supports arbitrary modeling notations is used, or traditional methods of scientific research are applied, in which many models of different objects are built from a single metamodel. A metamodel is formed as a composition of classes, attributes, and class relationships, and is formalized as a generalized organizational function: a manifold serving as the bundle base for distinguishing a polysystem of classes, for example, in the form of the separation of powers, work, and the authorities of managers and executors. The communication interface between two classes expresses a relationship in which functional changes in one class affect the other class directly or through representatives. The metamodeling notation in terms of category theory and mathematical analysis makes it possible to reduce the abstraction of expressions in general systems theory and structural diagrams and to derive calculation formulas. Through the fiber bundle procedure, the decomposition of the organizational function into three acts (entry, exit, and transformation of the monosystem state) in the form of a universal bilinear function of relative variables is justified, which describes the change of monosystems and the performance of representative actions. Based on the constraints on this function, the concentric structure of each layer (fiber), "center-core-periphery", is axiomatically defined, and the mechanism of polysystem connectivity of the fibers is modeled. The presented formal patterns and their graphical schemes mathematically display the structure and function of metamodels of organizational systems in conceptual and analytical terms, allow them to be applied in the analysis of data and knowledge, and symbolically describe previously empirically established rules of self-organization in nature and society. The development of a bilinear description of an organizational function in parts through a tangent bundle is its quadratic representation in the form of a field of pairwise competitive interaction of objects.
Situational modelling makes it possible to analyze and predict the "behavior" of organizational systems under different conditions. One of the key tools in this area is models of the dynamics of socio-economic systems. Models describing Land Use Change (LUC) processes allow us to study both the dynamics of regional development under given climatic and socio-economic scenario conditions and the prospects and possible consequences of achieving long-term strategic goals of agricultural development. We employed the recursive agricultural land use dynamics model GLOBIOM for situational modelling of the achievement of carbon neutrality by Russian agriculture under two scenarios: inertial socio-economic development under the SSP2 scenario without special taxes on greenhouse gas (GHG) emissions, and with the introduction of a tax per unit of GHG emissions. The results of the situational modelling provide an informational basis for understanding the prospects of achieving carbon neutrality of Russian agriculture under the considered interventions, which is relevant for the development of national agricultural policy. In addition, modelling at the regional level reveals possible consequences for regional agriculture, in particular changes in agricultural production, land use, and farmland distribution. Among the Russian regions, the study focused on the Greater Altai regions (Altai Krai, Altai Republic, and Tyva Republic), where Altai Krai represents a large agro-industrial region with the largest carbon footprint from agricultural production in the country, while Altai and Tyva are ethnic republics with less pronounced agricultural potential and a greater focus on livestock development. The modelling period is 2030-2050.
Studying and enhancing the resilience of energy infrastructure is a pressing problem associated with the high computational complexity of preparing and conducting experiments. This complexity is due to a range of important factors: large sets of scenarios of significant external disturbances affecting the infrastructure under study, identification of its critical elements whose failure could lead to significant disruptions in the generation, transportation, and supply of energy resources, and planning of activities aimed at enhancing the infrastructure's resilience. Solving these problems in a computing environment based on executing scientific workflows with a distributed database in the RAM of the environment's nodes can significantly reduce computation time. However, this approach is not supported by known workflow management systems. In this context, we propose a new approach to analyzing energy infrastructure resilience using the Framework for Development and Execution of Scientific WorkFlows and distributed databases. Specifically, we created a scientific workflow to analyze energy infrastructure resilience. Next, we developed a method for predicting the RAM size required for workflow execution; the method takes into account key parameters of the subject area that significantly affect changes in data size. Then, we implemented a set of testbeds (system workflows) to perform the computations according to this method. Finally, we conducted a computational experiment to demonstrate the accuracy of predicting the required RAM size for two test models of energy infrastructures of different complexity.
The article describes mathematical methods used to calculate an azimuthally asymmetric electric field for trajectory analysis of electron beams in gyrodevices. These algorithms, including some modifications of the discrete sources method implemented in the ANGEL software package, have shown high accuracy and efficiency in the design of guns and collectors of gyrotrons.
The article presents the results of applying the Apriori association rule mining algorithm to analyze rubber compound formulations in order to identify technologically significant component combinations. A methodology based on data mining is proposed, which enables the automatic detection of stable ingredient combinations, reduces the search space for formulations, and formalizes expert knowledge. The study analyzed a database of 5065 industrial formulations using support, confidence, and lift metrics. Key associations between rubber compound ingredients were identified. The results demonstrate the potential of association analysis methods for optimizing formulations, reducing development time, and digitalizing rubber compound design.
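For illustration, the Apriori workflow described above can be sketched with mlxtend on a toy one-hot table of ingredient occurrences; the ingredient names and thresholds are placeholders and are not taken from the 5065-formulation industrial database analyzed in the article.

```python
# Hedged Apriori sketch with support, confidence, and lift on a toy formulation table.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Each row is a formulation; each column flags whether an ingredient is present.
data = pd.DataFrame({
    "natural_rubber": [1, 1, 0, 1, 1],
    "carbon_black":   [1, 1, 1, 1, 0],
    "sulfur":         [1, 0, 1, 1, 1],
    "zinc_oxide":     [1, 1, 1, 0, 1],
}, dtype=bool)

frequent = apriori(data, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="lift", min_threshold=1.0)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```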