## Volume №3(35) / 2024

#### Articles in this issue

The paper explores the thematic diversity of an interdisciplinary journal. The purpose of the research is to build a knowledge graph of the journal for the thematic presentation and systematization of its electronic archive and new publications. The initial data are journal articles devoted to various information and mathematical technologies in science and management, i.e., interdisciplinary research. The systematization of texts using vector analysis methods is proposed. In the course of the thematic analysis of the journal's content, a division into headings is proposed, and links between headings, articles, and the corresponding descriptions of the specialties of the Higher Attestation Commission are established. To analyze the topics, an exploratory analysis of the source texts is carried out first, followed by data mining methods. The results of the division are provided to the journal's experts, after which a decision is made on forming a thematic heading and including the relevant specialties of the Higher Attestation Commission in it. The journal articles are integrated into the LibMeta semantic library: the library's ontology is extended, the journal's ontology is formed, and the journal's knowledge graph is built on this basis. A procedure for navigating the journal's content using the knowledge graph in the LibMeta semantic library is proposed; it can become the basis for information support of scientific research and for creating a digital assistant in an interdisciplinary subject area. Examples are given for specific journal content, but the proposed technology can be extended to other journals, since most journals assigned to several specialties of the Higher Attestation Commission naturally span several disciplines.
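
The vector-space systematization step described above can be illustrated with a minimal sketch. TF-IDF weighting and cosine similarity are assumed here as generic stand-ins for the vector analysis methods; the abstract does not name the specific weighting scheme.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for a list of whitespace-tokenized documents."""
    tokens = [doc.lower().split() for doc in docs]
    df = Counter()                      # document frequency of each term
    for toks in tokens:
        df.update(set(toks))
    n = len(docs)
    vocab = sorted(df)
    vecs = []
    for toks in tokens:
        tf = Counter(toks)
        vecs.append([(tf[w] / len(toks)) * math.log(n / df[w]) for w in vocab])
    return vocab, vecs

def cosine(u, v):
    """Cosine similarity between two vectors; 0.0 for zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

Abstracts whose vectors are close under this similarity would be candidates for the same thematic heading, subject to expert review as described above.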

The paper is devoted to the analysis of discursive argumentation techniques within the framework of modern trends in reasoning modeling. Reasoning in scientific discourse is organized as a sequence of discursive techniques, i.e., steps of reasoning corresponding to certain mental operations on the object of the subject area. From a formal point of view, the elementary structure of reasoning corresponds to an argument, which represents the transition from premises to the thesis being proved. The paper presents an ontological model describing typical arguments, namely argumentation schemes based on Douglas Walton's theory. The classification of existing modes of reasoning is discussed, and an approach to ontological analysis of discourse structures based on ontological-semantic relations is considered; these relations provide a conceptual basis for reasoning models and help to understand the nature of entities, relations, and reasoning in a particular context. An annotated corpus of texts from the field of scientific communication was used as material for the practical research, including analytical articles with users' comments from the Habr forum (popular-science discourse) and scientific articles with reviewers' comments (scientific discourse proper). Of particular interest in this study were the conflicting argumentative techniques used in the texts of the genres under study: Antithesis (`attack` + `on_thesis`) and Counterargument (`attack` + `on_argument`). A statistical analysis and classification of these techniques by genre were carried out. The resulting classification comprises 12 representative techniques of the first type and 30 of the second. The ontology, the classification of techniques, and the corpus are publicly available on the ArgNetBank Studio platform.
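
The two conflicting techniques can be expressed as a small labeling function over (relation, target) pairs. The tag names `attack`, `on_thesis`, and `on_argument` follow the abstract; the function itself is an illustrative sketch, not the paper's annotation tooling.

```python
def classify_technique(relation: str, target: str) -> str:
    """Label a discursive move by its argumentative relation and target.

    Antithesis      = attack + on_thesis
    Counterargument = attack + on_argument
    """
    if relation != "attack":
        return "non-conflicting"
    return {"on_thesis": "Antithesis",
            "on_argument": "Counterargument"}.get(target, "unknown")
```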

The paper investigates the plasticity of multilayer modular neural networks with the characteristic property of self-similarity. The concept of degrees of freedom, known from mechanics, is used to assess plasticity. The number of degrees of freedom of a network is estimated by the maximum dimension of the operator manifold of the neural network, formed by varying the parameters of the neural modules and by the presence of intermodule connections. To obtain plasticity estimates, neural modules are considered as linear operators of fixed rank. Formulas are derived for calculating the dimension of the operator manifold of a neural module both outside and within the network. A neural network is considered as a dual operator of complex structure whose input and output are vector spaces. At the level of the structural model, the concept of modal network states is introduced, characterizing the dimensions of the vector subspaces at the input and output of the neural modules in the network. The dimensionality of the network manifold is estimated through its modal states. It is noted that self-similar networks belong to a class of weakly coupled networks for which the calculation of modal states presents no difficulties. Exact formulas for calculating the degree of plasticity of weakly coupled neural networks are obtained; the results of the analysis are used to assess the plasticity of fast neural networks (BNS) and their subsets, pyramidal BNS of direct and reverse orientation.
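
For a single isolated module modeled as an m-by-n linear operator of fixed rank r, the dimension of the corresponding operator manifold is the classical value r(m + n - r). The helper below computes this textbook count; it illustrates the starting point of the degrees-of-freedom analysis, not the paper's network-level formulas, which additionally account for intermodule connections.

```python
def fixed_rank_manifold_dim(m: int, n: int, r: int) -> int:
    """Dimension of the manifold of m-by-n real matrices of fixed rank r.

    Classical result: dim = r * (m + n - r).  When r == min(m, n) this
    recovers m * n, the dimension of the ambient space.
    """
    if not 0 <= r <= min(m, n):
        raise ValueError("rank must satisfy 0 <= r <= min(m, n)")
    return r * (m + n - r)
```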

This research explores the possibility of applying combined convolutional neural network architectures to the analysis of skin lesions. Model architectures have been developed to extract additional features related to the shape pattern of skin lesions. The models, including their architectures, were optimized to minimize type I and type II errors for rare skin lesions. The ISIC2017-2020, MED-NODE, SD-198, 7-Point Criteria Database, Light Field Image Dataset of Skin Lesions, PH2, and IAP RAS datasets were used for training. The AdamW optimizer, the FocalLoss function, and the CosineAnnealingWarmRestarts scheduler were used to train the classification models. The BCEDice loss function was used to train the segmentation models. The models were evaluated using weighted classification metrics such as Specificity, Recall, Precision, and F1-score. The robustness of the model architectures was considered during the validation phase. Models that use additional convolutional neural networks to extract shape features of skin lesions showed better metric performance and a lower sum of type I and type II errors for rare lesions compared to conventional classification models. The results of this research can be used for medical analysis problems with imbalanced training data.
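
The class-imbalance aspect hinges on the focal loss mentioned above. A minimal scalar sketch of its binary form is given below in pure Python; `gamma` is the usual focusing parameter and `alpha` the class-balancing weight, with default values that are common conventions, not necessarily the paper's settings.

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one prediction p = P(class 1) and label y.

    gamma down-weights easy, well-classified examples; alpha re-balances
    the two classes.  With gamma=0 and alpha=1 this reduces to plain
    binary cross-entropy.
    """
    pt = p if y == 1 else 1.0 - p            # probability of the true class
    a = alpha if y == 1 else 1.0 - alpha     # class-balancing weight
    return -a * (1.0 - pt) ** gamma * math.log(pt)
```

The down-weighting of easy examples is what shifts training pressure toward rare lesion classes in an imbalanced dataset.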

The paper considers an approach to the development of an expert system for the analysis of accidents in railway transport. The stages and features of the development of its knowledge base, the capabilities of the tool environment, and other issues are discussed. The prototype of the system was developed only for the analysis of derailments, but the knowledge model adopted in it makes it relatively easy to extend the system to other causes of accidents. The system is implemented in an environment using inference based on logic with vector semantics, which makes it possible to work with reliable as well as fuzzy, uncertain, and contradictory information.
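
A rule-based core of such a system can be sketched as forward chaining over weighted rules. The certainty-factor scheme below is a generic stand-in, not the paper's logic with vector semantics, and the rule contents and fact names are purely illustrative.

```python
def forward_chain(facts, rules):
    """Forward chaining with certainty factors in [0, 1].

    facts: dict mapping fact name -> certainty
    rules: list of (premises, conclusion, rule_strength) triples
    A rule fires with certainty min(premise certainties) * rule_strength;
    repeated derivations keep the maximum certainty.
    """
    facts = dict(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion, strength in rules:
            if all(p in facts for p in premises):
                cf = min(facts[p] for p in premises) * strength
                if cf > facts.get(conclusion, 0.0):
                    facts[conclusion] = cf
                    changed = True
    return facts

# Illustrative rules for derailment analysis (fact names are invented).
RULES = [
    (["rail_defect", "high_speed"], "derailment_risk", 0.9),
    (["derailment_risk"], "inspection_required", 0.8),
]
```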

One of the directions of the modern energy industry is the development of systems involving the utilization of solar energy, wind energy, and the recycling of secondary energy resources (waste). This paper considers an energy system, denoted as a laboratory-scale hybrid microgrid, that incorporates the use of such sources. The energy system includes solar panels, a biomass gasifier, and an electric generator. Of great importance for such systems are monitoring tools to analyze the modes of operation and make modifications. With this in mind, a monitoring system was developed to track various parameters of the laboratory equipment. Thanks to the developed system, data on the operation of the microgrid was obtained, and its limitations and weaknesses were identified.

A systematic approach is developed to the problem of assessing, forming, and correcting the dynamic states of technical objects under vibration loads using embedded structural formations. Methods of theoretical mechanics, oscillation theory, circuit theory, automatic control theory, and system analysis are used. Elements of a structural-functional approach are developed, which consists in forming the properties of structural formations of mechanical oscillatory systems capable of performing certain functions within a common system.

The article presents a functional diagram of a three-cycle pulse-frequency converter (PFC). Simulation models of the three-cycle PFC and of the PFC-DCM electric drive system have been developed in the Matlab/Simulink environment. Simulation results for the three-cycle PFC and the electric drive system are presented, and the simulated electromechanical characteristics of the PFC-DCM electric drive system are compared with calculated ones. It is shown that the mathematical model constructed by the authors describes with sufficient accuracy the static modes of an electric drive system with a three-cycle PFC.

The rapid evolution of parallel and distributed computing systems, telecommunication technologies, and cloud platforms has enabled the development and use of scientific applications for preparing and conducting large-scale experiments with large amounts of data. Often, such applications implement a complex problem-solving scheme based on the integrated execution of processes for data transfer, processing and analysis, resource-intensive computation, and decision-making. At the same time, the mathematical models and software of the applications may be developed by different groups of specialists from different organizations and targeted at heterogeneous computing resources. This requires advanced tools for the design, implementation, deployment, and execution of scientific workflows within a single distributed computing environment, ultimately integrating algorithmic knowledge, the software and hardware used, data, and various services. Today, such tools are usually workflow management systems. In this context, the paper discusses the current state of known workflow management systems and addresses the problems associated with the development and use of scientific workflows in different computing environments. The problems associated with the development and use of such systems that are currently not fully solved are highlighted. In particular, we point out the need to take into account subject-domain specifics, computation scaling, the demand for service-oriented applications, and the efficiency of using heterogeneous distributed environments that integrate high-performance user resources, cluster resources of shared-use centers, Grid systems, and cloud platforms.
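
The core scheduling idea behind any workflow management system, running each task once its data dependencies are satisfied, can be sketched as topological execution of a task DAG. This is a minimal illustration of the concept, not the behavior of any particular system discussed in the paper; the task names are invented.

```python
from graphlib import TopologicalSorter

def run_workflow(tasks, deps):
    """Execute a workflow of named tasks respecting dependencies.

    tasks: dict mapping task name -> zero-argument callable
    deps:  dict mapping task name -> set of prerequisite task names
    Returns a dict of task results, computed in dependency order.
    """
    results = {}
    for name in TopologicalSorter(deps).static_order():
        results[name] = tasks[name]()
    return results
```

Real workflow management systems add exactly the concerns the paper highlights on top of this skeleton: resource mapping, scaling, fault tolerance, and heterogeneous environments.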

To enhance the efficiency of information protection tasks within an organization, it is proposed to use a budgetary fund in which the necessary financial resources are accumulated and then expended. Under conditions of uncertainty, both the time intervals between expenditures and the expenditures themselves are random variables, which should be described by probabilistic models. A study was conducted on how the types of probabilistic models and the values of their numerical characteristics affect the efficiency indicators of tasks performed by the staff of the organization's information security service servicing the corporate information system. The efficiency indicators are the probability of "zeroing" the budgetary fund and the coefficient of variation, which are replaced by point and interval estimates in discrete-event simulation. Practical recommendations are provided.
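
The "zeroing" probability can be estimated with a simple Monte Carlo discrete-event sketch: the fund starts at a given level and is hit by random expenditures at random intervals. The exponential distributions and all parameter values below are illustrative assumptions, not the probabilistic models studied in the paper.

```python
import random

def estimate_zeroing_probability(initial_fund, horizon, mean_gap, mean_cost,
                                 runs=2000, seed=0):
    """Fraction of simulation runs in which the fund is exhausted.

    Expenditure inter-arrival times and amounts are drawn from exponential
    distributions (an illustrative choice) with the given means.
    """
    rng = random.Random(seed)
    zeroed = 0
    for _ in range(runs):
        fund, t = initial_fund, 0.0
        while True:
            t += rng.expovariate(1.0 / mean_gap)      # time to next expenditure
            if t > horizon:
                break
            fund -= rng.expovariate(1.0 / mean_cost)  # expenditure amount
            if fund <= 0.0:
                zeroed += 1
                break
    return zeroed / runs
```

The point estimate returned here is exactly the kind of quantity that replaces the exact probability in the discrete-event simulation described above; an interval estimate would follow from repeating the experiment with different seeds.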

There is a need to integrate a PLM system with a third-party information system. The information system is not part of the PLM complex but is involved in solving problems of information support for managing production processes. At present, the task of such integration is performed by an analyst, who forms a structural and process model of the integrated information system. Based on this model, rules for interaction with the system are formed. An operator and a decision maker (DM) are involved in the operation process. An approach is proposed to reduce the load on the analyst as well as on the operator and the decision maker.

The article proposes a modification to the classical minimum degree algorithm, designed to optimize the structure of the global coefficient matrix in the system of linear algebraic equations used in the finite element method. The aim is to improve the efficiency of the computational algorithm when solving contact problems in elasticity theory, specifically when modeling the connection of parts using special contact elements that require an iterative solution with gradual refinement of the state. The proposed modification is implemented in conjunction with the direct Cholesky method for solving systems of linear algebraic equations. The computational efficiency of this method is greatest for systems with a symmetric, positive-definite coefficient matrix, which is often sparse when the finite element method is used. A significant problem in the Cholesky decomposition method is the fill-in of the triangular factor relative to the original matrix during the decomposition process. This fill-in is typically reduced using well-known heuristic algorithms, among which the minimum degree algorithm is widely used. The modification proposed in this work reorders the structure of the initial global matrix by symmetric permutation of its rows and columns in a way that, during the iterative solution of contact problems of elasticity theory, allows partial Cholesky factorization to be performed in repeated iterations, thereby saving computation time compared to performing a full factorization. The paper demonstrates that, overall, the matrix structure ordered by the modified algorithm is less preferable than that produced by the classical minimum degree algorithm, since it undergoes more fill-in.
However, the efficiency of the computational method is achieved through repeated iterations that gradually refine the state of the contact finite elements at the coupling of structures. Using a test finite element model of a rotor, as well as a model of a real aviation gas turbine engine, portraits of the initial matrices and their triangular factors are determined and presented, and estimates of the number of arithmetic operations and of the time required for their execution are calculated for both the classical and the modified minimum degree algorithm.

The article examines the problem of planning the transportation of goods from suppliers to consumers over a year divided into several periods, during each of which a certain part of the goods must be removed. The goods are shipped in several trips. To solve this problem, it is proposed to use two models of the transport problem, each with an added time constraint. In one of them, the variables are the numbers of trips; in the other, the amounts of cargo transported from each supplier to each consumer. For the correct application of these models in each period, it is proposed to prepare the input data by carrying out a p
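
The variable structure of the underlying transport problem can be illustrated with the classical northwest-corner rule for building an initial feasible shipment plan. This is a standard textbook construction used here only to make the model's variables concrete; it is not the solution method of the paper, and the supply/demand numbers are invented.

```python
def northwest_corner(supply, demand):
    """Initial feasible plan for a balanced transport problem.

    supply: list of amounts available at each supplier
    demand: list of amounts required by each consumer
    Returns a matrix plan[i][j] whose row sums equal supply and whose
    column sums equal demand.
    """
    supply, demand = list(supply), list(demand)
    assert sum(supply) == sum(demand), "problem must be balanced"
    plan = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])   # ship as much as both sides allow
        plan[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1
        else:
            j += 1
    return plan
```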

To optimize crop production volumes taking into account the impact of locust pests on the harvest, a parametric programming model with probabilistic characteristics has been developed. Air temperature and precipitation during the initial growing season, which affect the locust population, were chosen as the parameters. Three variants of the mathematical model are distinguished: no influence of locusts on the income of an agricultural organization; varying influence of locusts on the economic indicator; and the maximum possible impact of pests with a probabilistic assessment of the situation. Algorithms for solving the resulting extremal problems are proposed. The models and algorithms have been tested on data from the agricultural organization CJSC Irkutsk Seeds of the Irkutsk region.
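
The scenario structure above (no impact, partial impact, maximum impact, each with a probability) can be illustrated by an expected-income comparison over candidate production plans. The prices, loss fractions, and probabilities below are invented for illustration and are not the paper's data or its parametric programming algorithms.

```python
def expected_income(plan, price, loss_scenarios):
    """Expected income of a production plan (output per crop) under
    probabilistic yield-loss scenarios.

    loss_scenarios: list of (probability, loss_fraction_per_crop) pairs.
    """
    total = 0.0
    for prob, losses in loss_scenarios:
        total += prob * sum(q * (1.0 - loss) * p
                            for q, loss, p in zip(plan, losses, price))
    return total

def best_plan(candidates, price, loss_scenarios):
    """Pick the candidate plan with the highest expected income."""
    return max(candidates,
               key=lambda c: expected_income(c, price, loss_scenarios))
```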

Experimental measurements of the parameters of a stream radio-frequency inductively coupled discharge of intermediate pressure in a discharge chamber and a jet were carried out, together with measurements of the ion energy and ion current density on the sample surface and validation of a mathematical model of the stream RF discharge of intermediate pressure against the experimental data. It has been established that a stream RF discharge of intermediate pressure in the range of 13.3-133 Pa is a new type of radio-frequency intermediate-pressure discharge of a combined type, which differs from an RF discharge in an atmospheric-pressure gas stream with a solenoid inductor, from a low-pressure RF discharge in a plasma torch with a flat spiral antenna, and from a low-pressure RF discharge with a solenoid inductor.