The Characteristics, Methods, Trends and Applications of Intelligent Systems

– Interaction between intelligent systems and their human operators in dynamic, shifting, and unpredictable natural and social settings is a novel area of study within intelligent computing. Robots of the past were not effective at making decisions on their own; instead, they carried out the same set of actions in the same situations, on the assumption that the world was predictable. Today, intelligent systems can make decisions quickly and effectively in practical settings. Modern intelligent systems offer capabilities, such as intelligent search and optimization, artificial evolution, and autonomous decision support, that are unavailable in a traditional information system. This paper presents an in-depth analysis of the methods used to create intelligent systems. These techniques are often categorized as either artificial intelligence or soft computing; examples include neural networks, fuzzy logic, hybrid systems, and swarm intelligence. In addition, this article gives an overview of two applications of intelligent systems and technologies: Geothermal Heat Pumps (GHPs) and Heat Exchangers (HEXs).


INTRODUCTION
The development of technology can be traced in significant part to the human capacity to create ever more effective and complex tools. Many tools and devices were developed during the industrial revolution to eliminate laborious manual processes. In the digital era, computer-based instruments are being developed to automate activities that formerly required human thought. These instruments' capacities have been expanded over time to enable them to carry out tasks that demand ever-greater levels of intelligence. Thanks to this development, we now have access to what are known as "intelligent systems." Professional sectors such as medical diagnostics (for example, recognizing malignancies on x-ray images) and airport administration benefit from intelligent systems that aid in the execution of specialized activities (e.g., replanning airport operations when incidents occur).
These systems can also perform activities that are risky or unpleasant for humans, such as exploring uncharted territory (e.g., underwater exploration) or operating an autonomous vehicle. Creating such systems is now an IT engineering specialization that calls for efficient processes and software. An exact characterization of an intelligent system is not straightforward, since it relies on notions linked to cognition, a field that is far from fully understood and whose language and level of abstraction vary across the disciplines that study it (neuroscience, computer science, robotics, philosophy, cognitive psychology, etc.). Novel computational frameworks of intelligence and new scientific research into our understanding of the mind may even prompt a re-evaluation of some of the terminology already in use.
In this article, "intelligent system" refers to any machine programmed to carry out a task that humans find useful, such as driving a vehicle, diagnosing an infectious illness, recommending an economic investment, or even operating a vacuum cleaner. However, what exactly does it mean to claim that a computer has intelligence? In this study, we provide a first response to this question by analyzing the machine's observable characteristics and behavior. This view parallels a prevalent viewpoint in the field of Artificial Intelligence (AI), which defines an intelligent machine as a system that acts as an agent and behaves rationally. To function as an agent, a system must be able to take initiative and gain insight about its surroundings. The system may also interact with external actors (such as people or other artificial systems). An observer may conclude that the system is acting rationally if it takes actions that increase its chances of success, and that it is thinking rationally if it can provide reasonable explanations for its beliefs. This may be summed up by the following definition:

Definition (a): An intelligent system is an artificial framework that (1) acts as an agent by perceiving its surroundings, acting in those surroundings, and interacting with other agents, and (2) displays rational behavior by acting rationally (to optimize the success of its tasks) and displaying rational thought (by justifying beliefs through reasoning).

The goal of the above definition is not to provide a concrete test for deciding whether or not a system is intelligent. Instead, it seeks to catalog the characteristics usually associated with intelligent behavior, which are not always present in AI. Intelligent computing systems have the capacity to communicate and learn naturally with humans to accomplish tasks that neither humans nor machines could do on their own. Modern interest in intelligent systems originates from
their increased use in real-time contexts. The limitations of traditional computer-based systems are broad and varied, including non-parallel execution, insufficient optimization, inadequate decision-making, and insufficient automation. An intelligent system is designed to address these issues. Artificial intelligence (AI) [1] and soft computing (SC) [2] are two of the most prominent approaches. Soft computing techniques are highly complex, biologically inspired techniques, whereas artificial intelligence is the computer science discipline concerned with embedding knowledge using symbols and heuristics rather than numbers and fixed algorithms for data analysis.
Many approaches to developing intelligent systems [3] are discussed at length in this article. This paper presents a review of the characteristics, methods, trends, and applications of intelligent systems. In this section, an overview of an intelligent system is provided.
The definition provided in this section is a brief one; different versions of intelligent system definitions are provided in Section II. Section III presents a discussion of the key differences between intelligent systems and traditional computer systems. Section IV focuses on methods of designing intelligent systems, while Section V reviews trends in intelligent system methods. Section VI reviews two applications of intelligent methods. Lastly, Section VII concludes the research and proposes directions for future research.

II. INTELLIGENT SYSTEMS
As mentioned earlier, intelligent computing systems tend to communicate and learn naturally with humans to accomplish tasks that neither humans nor machines could do on their own. The functional capabilities of an intelligent system are the collection of tasks for which it has been designed. The ability to automatically learn and adapt to new situations is a hallmark of a truly intelligent system (automatic learning). To create artificial intelligence, computers must be taught to reason like humans. Intelligence, reasoning, and learning capabilities (together with the ability to learn and adapt on their own) are all hallmarks of intelligent systems. According to Apriando, Mardinata, and Ardianto [4], an Intelligent Decision Support System (IDSS) is an intelligent information system designed to shorten the time needed to make judgments and to increase their consistency and quality. In the field of environmental decision-making, an Intelligent Environmental Decision Support System (IEDSS) excels as a useful tool for making recommendations. What makes the IEDSS so remarkable is the knowledge it incorporates, which gives the system improved capabilities for reasoning about the environmental system in a more trustworthy manner.

Table 1. Different Versions of Defining an Intelligent Agent
Definition 1 Computational systems with intelligence are known as intelligent agents, and they are able to detect and act independently in a complex and dynamic environment in order to achieve a set of objectives or tasks.

Definition 2
Intelligent agents are able to continually perceive changing situations in the environment, act to alter those variables, and reason to interpret those perceptions, resolve issues, make conclusions, and decide on actions.

Definition 3
Intelligent agents are capable of making decisions on their own, gaining insight into their surroundings, surviving for extended periods of time, adjusting to new circumstances, and establishing and working to achieve meaningful objectives.

Definition 4
Intelligent agents are hardware or software systems that can operate entirely or in part independently to complete tasks in highly dynamic and complex environments. An agent's activities are its means of interacting with and modifying its surroundings.

The term "intelligent agent" is used in the Artificial Intelligence (AI) literature to describe intelligent systems. This idea offers a high enough level of generalization to classify and organize a wide variety of attributes. Definitions of an intelligent system acting as an agent are provided in Table 1. Each of these definitions emphasizes the system's ability to interact with its surroundings, acquire new knowledge, and decide on a course of action. Some writers (e.g., Zhao et al. [5]) and academic venues in information technology use the term system instead of agent (e.g., the International Journal of Intelligent and Robotic Systems and IEEE Intelligent Systems). The use of the term "system" highlights the interdependence of several parts that must work together properly to produce intelligence. One of the most important skills in the creation of such systems is the ability to effectively combine the elements of the versions presented in Table 1.
In this study, we combine aspects of these agent-based concepts into a single characterization of an intelligent system. For instance, at a given level of abstraction, it is necessary to identify a distinct complex dynamic environment in order to provide a sufficient operational context. The cognitive capacities (such as perception, reasoning, and learning) often listed by agent-based definitions are also used in this study.
This study, however, divides these skills into two distinct categories. On the one hand, we single out a base set of cognitive abilities for interacting with the world, including perception, action management, deliberation, and language. On the other hand, the system has more than just these fundamentals; it also knows how to put them to use, which allows it to exhibit more sophisticated forms of intelligence. These supplementary skills have to do with (1) acting rationally in order to get the most out of the system, (2) modifying its conduct to fit new situations, and (3) looking inward and analyzing its own actions in order to explain their results.
In light of this, the following definition of an intelligent system is offered as a fundamental concept to organize the subsequent characterization of such a system in this study. Each of the components of the definition is elaborated upon below. That said, this categorization does not rely on hard-and-fast rules for determining whether a system is intelligent. Instead, system engineers should see it as malleable, so that it may be modified to meet their unique requirements when conducting application analysis and development.
Definition (b): An intelligent system is an instrument that, in a resource-constrained, dynamic environment, (1) presents complex intelligent activity supported by skills such as rationality, adaptability via training, or the capability to define different applications of its knowledge based on introspection; and (2) contains fundamental cognitive capacities such as perception, action control, reasoning, or language use.
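The perceive-deliberate-act loop underlying Definitions (a) and (b) can be sketched in a few lines of code. The thermostat environment, goal, and method names below are illustrative assumptions, not part of any cited system:

```python
# Minimal sketch of an agent per Definition (b): it perceives its
# environment, deliberates rationally toward a goal, and acts.
# The thermostat scenario is invented for illustration.

class ThermostatAgent:
    """A trivial rational agent: it acts to keep temperature near a goal."""

    def __init__(self, target):
        self.target = target

    def perceive(self, environment):
        return environment["temperature"]

    def deliberate(self, temperature):
        # Rational behavior: choose the action that moves toward the goal.
        if temperature < self.target - 1:
            return "heat"
        if temperature > self.target + 1:
            return "cool"
        return "idle"

    def act(self, action, environment):
        if action == "heat":
            environment["temperature"] += 1
        elif action == "cool":
            environment["temperature"] -= 1

def run(agent, environment, steps):
    for _ in range(steps):
        reading = agent.perceive(environment)
        action = agent.deliberate(reading)
        agent.act(action, environment)
    return environment["temperature"]
```

Even this toy agent exhibits the two ingredients of the definition: it couples perception to action, and its choice of action can be justified in terms of its goal.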

III. DIFFERENCE BETWEEN INTELLIGENT SYSTEMS AND TRADITIONAL COMPUTER SYSTEMS

Overview of Intelligent Systems and Traditional Computer Systems

Intelligent Systems
The characteristics of an intelligent system are presented in Table 2.
Table 2. Intelligent Systems Characteristics

Use of knowledge
Ability to apply knowledge to specific activities in order to address specific issues.

Handling ambiguity
Ability to deal with ambiguity in software of any kind.

Expert justification
Ability to offer justification in the event of an expert system failure.

Solving optimization problems
Ability to use search to find optimal solutions to difficult optimization issues.

Human reasoning
Ability to think and reason like a human.

Learning from experience
Ability to gain knowledge via experience or via formal or informal instruction.

Finding solutions
Ability to discover solutions through methods analogous to evolution in nature.

Level of user engagement
Ability to offer a higher level of user engagement using techniques such as speech recognition and generation, natural language processing, and image evaluation.

Traditional Computer Systems

A traditional computer system works by receiving input from the user and computing the output as a function of that input; it essentially computes functions of its input. In a typical computer system, the computer is in charge of determining the order in which processes take place. Batch processing is the most common application for this kind of system: the computer receives a set of instructions from one storage device, carries out the operations described in those instructions, and then writes the results to another storage device. The characteristics of traditional computer systems are presented in Table 3 below.
Table 3. Traditional Computing System Characteristics

Pre-determined performance
They are intended to perform in a predetermined way.

Speed of operation
They are typically constructed to function at a particular speed.

Purpose
They are created to serve a certain function when put into use.

Time of use
They are normally intended to be used for a certain amount of time over their lifetime.

Group demographics
They are intended to be used by a particular subset of the population in order to function properly.

Data type
They are intended to process a certain category of data.

Distinction between Traditional Computer Systems and Intelligent Systems

Table 4 presents some of the key differences between intelligent systems and traditional computer systems.

Table 4. Characteristics of Traditional Computer Systems and Intelligent Systems

Distinguished behavior
Intelligent systems may be distinguished from traditional systems by their unique behavior and characteristics.

Intelligence in providing solutions
Traditional computers show no signs of intelligence when finding solutions compared to their intelligent counterparts; they operate according to predetermined rules or algorithms that have been meticulously crafted by programmers.

Flexibility and adaptability
Unlike traditional systems, intelligent systems can transform and adapt to match their environment. They enable the inference mechanisms necessary for processing knowledge. Rather than being a single step, this processing emerges from the complex interaction of various elements within the system.

Ability to provide results
In contrast to traditional systems, which cannot provide results when there is minimal information, intelligent systems may nonetheless produce relevant results.
IV. METHODS OF DESIGNING INTELLIGENT SYSTEMS

Designing intelligent systems involves three main activities: (i) creating software that will allow machines to think like humans, (ii) attempting to construct systems oriented toward a model of knowledge processing and representation in the human brain, and (iii) researching the brain's anatomy and physiology. In the creation of today's most advanced intelligent systems, the methods of the following disciplines are most often employed:

Soft Computing
As a subfield of computing, soft computing focuses on practical uses. Knowledge engineering, training, searching, optimization, and categorization are just some of the areas where soft computing's approaches have been put to use. The primary components of soft computing do not compete with one another but rather complement one another. This feature allows many techniques to be hybridized in the construction of intelligent systems. Machine-learning systems, fuzzy-logic-based systems, evolutionary systems, neural networks, and multi-agent systems are all included under the umbrella of soft computing. Hybrid systems, such as fuzzy-neural-genetic, evolutionary-neural, neural-fuzzy, and evolutionary-fuzzy systems, are particularly popular at present.

Artificial Intelligence
Artificial intelligence expresses information using means other than numbers, for instance symbols, and processes information using heuristics rather than fixed algorithms. Research in this area focuses on the human mind as well as how to model such processes on computers. The goals of AI are as follows: (i) to create intelligent computers; (ii) to learn what intelligence is; (iii) to create useful machines; (iv) to utilize knowledge to influence the environment; (v) to apply reasoning in decision-making; (vi) to analyze and interpret complexity; (vii) to learn from experience; and (viii) to apply reasoning and critical thinking to practical activities. Case-based reasoning, rule-based knowledge-based systems, expert systems, and natural language processing are all examples of AI-based systems.

V. TRENDS IN METHODS OF INTELLIGENT SYSTEMS

Natural Language Processing
Natural Language Processing (NLP) lets humans communicate with computers in their own natural languages rather than in computer languages. NLP can analyze texts written in natural languages to decipher their meanings. Computational linguistics (CL), natural language engineering (NLE), and human language technology (HLT) are all phrases that denote this kind of processing, which reflects the use of computational techniques in the study of language. Examples of NLP applications include: (i) text mining; (ii) text routing/categorization; (iii) question answering; (iv) machine(-aided) translation; (v) foreign language education; and (vi) spelling correction.
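As a hedged illustration of text routing/categorization (item ii above), the following sketch classifies short messages with a bag-of-words Naive Bayes model; the tiny corpus and the labels are invented:

```python
import math
from collections import Counter, defaultdict

# Illustrative bag-of-words Naive Bayes text router. The training
# corpus and labels below are made up for the example.

def train(documents):
    """documents: list of (text, label). Returns per-label word counts."""
    counts = defaultdict(Counter)
    labels = Counter()
    for text, label in documents:
        labels[label] += 1
        counts[label].update(text.lower().split())
    return counts, labels

def classify(text, counts, labels):
    """Pick the label maximizing log prior + smoothed log likelihood."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, float("-inf")
    for label in labels:
        score = math.log(labels[label] / sum(labels.values()))
        total = sum(counts[label].values())
        for word in text.lower().split():
            # add-one (Laplace) smoothing for unseen words
            score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best
```

For example, after training on a handful of "travel" and "support" messages, the classifier routes "book my flight ticket" to "travel" because those words occur only in travel documents.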

Expert systems (ESs)
Expert systems are sophisticated computer programs that use knowledge and inference techniques to find answers to problems complex enough to require human expertise. An expert system incorporates human knowledge and experience into its decision-making process: data gathered from human experts is kept in a computer system for the purpose of addressing problems.

Types of Expert systems
An expert system may combine the following elements: (i) rule-based systems [6], in which logic and knowledge are represented as a collection of rules; (ii) frame-based systems [7], in which knowledge is represented as a collection of conceptual frames; (iii) hybrid systems [8], which integrate elements of other systems (often rules and frames), and model-based systems, which rely on models to replicate a system's structure and behavior; (iv) packages intended for widespread use; (v) tailored solutions, which involve finding a way to fill a certain void; and (vi) temporal methodologies, in which response times are capped at certain thresholds.
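A rule-based system of the kind described in (i) can be sketched as a simple forward-chaining loop that fires rules until no new facts can be derived; the medical-style rules and facts below are illustrative, not taken from any cited expert system:

```python
# Minimal forward-chaining inference for a rule-based system.
# Facts are strings; rules are (premises, conclusion) pairs.
# The example rules below are invented for illustration.

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises hold until a fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Starting from the facts "has_fever" and "has_cough", a rule base containing "fever and cough imply suspected flu" and "suspected flu implies recommend a test" derives both conclusions automatically.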

Machine Learning
Essentially, machines that learn from examples are engaging in machine learning. Any learning approach, such as a rule learner, a decision tree learner, or even a neural network, might be used to develop descriptions that can distinguish between different groups in a population. A structural (relational) learning approach, such as one that trains representations in an annotated predicate calculus, is required if individuals are to be described using such structures. The subsections below present the two primary learning strategies used in machine learning:

Supervised Learning
In supervised learning, information about an application domain, or "previous experience," is used to train a computer system. A target function is learned to predict discrete class attributes, such as approval status or risk level. Many names have been used for this process, but supervised learning, classification, and inductive learning are the most prevalent.
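A minimal sketch of supervised learning on a discrete class attribute such as risk level (mentioned above), using a 1-nearest-neighbor rule; the training records and feature meanings are invented:

```python
# Sketch of supervised classification: predict a discrete class
# ("risk level") for a new case from labeled past experience.
# The feature vectors and labels below are invented.

def nearest_neighbor(train_data, query):
    """train_data: list of (feature_vector, label). Returns a label."""
    def distance(a, b):
        # squared Euclidean distance between feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(train_data, key=lambda item: distance(item[0], query))
    return best[1]
```

Given past applicants labeled low or high risk, a new applicant is assigned the label of the most similar past case.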

Unsupervised Learning
Unsupervised learning is carried out independently of a trainer: the learning process is autonomous. It is the kind of learning utilized by systems that do not need a "trainer."
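A minimal sketch of unsupervised learning: a one-dimensional k-means that groups raw values without any labels or trainer. The data and the choice of k are illustrative:

```python
# Sketch of unsupervised learning: 1-D k-means clustering.
# No labels are supplied; the algorithm discovers the groups itself.

def kmeans_1d(values, k, iterations=20):
    data = sorted(values)
    # spread the initial centers evenly through the sorted data
    centers = [data[i * (len(data) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for v in data:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers
```

Fed the unlabeled values [1, 2, 1.5, 10, 11, 10.5] with k = 2, the algorithm recovers two cluster centers near 1.5 and 10.5 on its own.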

Intelligent Agents
Intelligent computer programs, often known as "intelligent agents," are becoming common. They are programs that use the idea of "agency" in some way: such applications stand in for a user and accomplish the desired outcomes of a task on their own, without needing guidance from the user. In applications, agents are components that show intelligent behavior (such as learning or categorization) without necessarily being full AI systems. In addition to these methodologies, agent-based approaches such as multi-agent systems and agent-oriented computing exist.

Soft Computing Systems
Since designing an autonomous system is a significant effort that cannot be accomplished with conventional computing methods, soft computing approaches have been created to facilitate the design of intelligent systems. Soft computing methods are typically employed because they are integrated approaches well-suited to solving the kinds of complicated, ill-defined, and hard-to-model problems that have recently gained attention. Many distinct methods contribute to the field of soft computing because of how they complement one another, among them probabilistic reasoning, genetic programming, neural networks, fuzzy logic, genetic algorithms, evolutionary strategies, and evolutionary programming. Because they are so adaptable to a wide range of real-world needs, soft computing paradigms are well suited for use in the creation of intelligent systems.
In recent years, hybridization of soft computing approaches has gained traction. The most notable features of soft computing are as follows: (i) problem modeling is essential for practical uses, yet there are times when conventional computers cannot be used in the design process; (ii) real-time applications present a wide variety of challenges, and soft computing is among the families of computing best suited to solving such a wide range of issues; (iii) soft computing algorithms are adaptive, which makes it feasible to implement new algorithms; (iv) soft computing offers less complicated, more cost-effective options, which is useful in developing and deploying smart systems; (v) the intelligent search approach provided by soft computing is able to discover optimal answers; and (vi) the parallel computing environment made possible by soft computing yields the fastest solutions for real-time applications.

Fuzzy Systems
Modeling problems with vague or conflicting data is a strong suit of fuzzy systems. Fuzzy systems are computerized methods formulated using the principles of fuzzy logic. Fuzzy logic, which is used to describe vagueness, is a kind of multi-valued logic that takes advantage of a range of possible truth values, from 0 (completely false) to 1 (completely true). Fuzzy sets, the theoretical foundation of fuzzy logic, are used to quantify fuzziness; they embody the concept that membership can be a matter of degree. Fuzzy logic's membership functions allow for a more accurate representation of knowledge.
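The degrees of truth between 0 and 1 described above can be illustrated with a triangular membership function and the standard min/max fuzzy operators; the temperature ranges are invented:

```python
# Sketch of fuzzy-set basics: a triangular membership function and
# the classical min (AND) and max (OR) operators.
# The "warm" temperature boundaries are illustrative.

def triangular(x, a, b, c):
    """Membership rises linearly from a to a peak at b, then falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_and(u, v):
    return min(u, v)   # standard min t-norm

def fuzzy_or(u, v):
    return max(u, v)   # standard max s-norm
```

With a "warm" set defined over 15–35 °C peaking at 25 °C, a reading of 22 °C is warm to degree 0.7 rather than simply true or false, which is exactly the graded membership the text describes.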

Biologically Hybrid Models

Neural Networks
A neural network (NN) is a massively parallel distributed processing model composed of interlinked neural computing elements with the capacity to acquire information and learn behaviors. It is a simplified approximation of the biological neuron system. Neural networks are used in artificial intelligence and machine learning. For the most part, NNs are seen as learning machines that operate based on accumulated observational data. A connectionist system can learn knowledge about the world by observing specific occurrences; no a priori conceptual pattern is needed to initiate the learning process. In an NN, each "unit" is a basic processor with potentially limited local memory. "Connections" are the mechanism by which the units exchange information with one another; this information is often numerical (as opposed to symbolic) and encoded in a number of ways. An NN may be thought of, for the most part, as a nonlinear classification system consisting of connected nodes. Such a system can learn the fundamental behavior patterns present in a dataset from a given collection of samples. Neural networks may be taught to recognize a wide range of patterns, or they can be left to their own devices to search through massive datasets and discover patterns on their own.
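As a sketch of how a network of simple units learns patterns from examples, the following trains a single neuron with the classic perceptron rule on the logical-AND pattern; the learning rate and epoch count are arbitrary choices:

```python
# Sketch of neural learning: a single neuron trained with the
# perceptron rule to reproduce the logical AND of two inputs.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target). Returns (weights, bias)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            # adjust each connection in proportion to its input and error
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training on the four input/output pairs of AND, the neuron classifies all of them correctly; no rule was programmed, only examples were shown, which is the learning-from-observation behavior described above.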

Evolutionary Algorithms
The term "evolutionary computation" (EC) refers to a class of computer-based problem-solving technologies conceived on the basis of computational representations of the evolutionary process. Such systems have come into the spotlight in recent years due to their use of an evolutionary strategy to solve computing challenges. Standard evolutionary computation components include genetic algorithms (GAs), evolutionary strategies (ES), evolutionary programming (EP), and genetic programming (GP).
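A minimal GA-style sketch showing the selection-crossover-mutation cycle; the fitness function, population size, and all parameters are illustrative:

```python
import random

# Sketch of an evolutionary algorithm maximizing f(x) = -(x - 3)^2
# over real-valued x. All parameters are illustrative choices.

def genetic_search(fitness, generations=60, pop_size=20, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                        # crossover
            child += rng.gauss(0, 0.1)                 # mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```

Over successive generations the population drifts toward the fitness peak at x = 3, with no gradient information used, only survival of the fittest candidates.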

Evolutionary Fuzzy Hybrid Systems
A substantial number of novel hybrid evolutionary systems have emerged in recent years. The standard evolutionary method used to resolve optimization issues may be hybridized in several ways. Evolutionary algorithms are hybridized with fuzzy logic (FL) so that they can learn and cope with ambiguous information; the term "evolutionary fuzzy hybridization" describes this process. Using an evolutionary strategy, it is feasible to effectively encode and design rule semantics, aggregation operators, rule-centric aggregations, and de-fuzzification approaches.

Neural-Genetic-Fuzzy Hybrid Systems
Within an integrated neuro-fuzzy framework, the consolidation of different neural network approaches and effective tuning of fuzzy inference systems are not guaranteed. The optimization of fuzzy inference systems might benefit from a meta-heuristic strategy that combines neural network learning algorithms with evolutionary algorithms. An evolving neural fuzzy framework like this might be adapted to operate with several fuzzy inference systems, including Mamdani, Takagi-Sugeno, and others. A fuzzy framework that can modify its membership functions (form and number), fuzzy operators, rule base (structure), and learning variables in response to its environment is known as an adaptive fuzzy system. Its design and evolution process might be thought of as a generic framework for such a system.

Neural Fuzzy Hybrid Systems
Complex nonlinear interactions may be modeled with ease by a neural network, making NNs an ideal candidate for categorizing data into known groups. However, the accuracy of outputs is often restricted; it is not possible to achieve zero error, so the goal is instead to minimize the least-square error. The combination of NN and FL is a powerful tool for the construction of intelligent systems, because fuzzy systems are able to account for imprecision. A fuzzy inference system may be learned in an interactive, integrated context, employing techniques that include neural network training. In a neuro-fuzzy integrated model, neural network learning techniques are used to calibrate the parameters of the fuzzy inference system.

Swarm Intelligent systems
The term "swarm intelligence model" describes computer models based on observations of natural swarm systems. Any effort to create algorithms or decentralized problem-solving devices that take cues from the cooperative behavior of social insect colonies or other animal populations is an example of this approach.
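Particle swarm optimization (PSO) is one widely used swarm-intelligence algorithm; the sketch below minimizes a toy one-dimensional objective, with all coefficients chosen for illustration:

```python
import random

# Sketch of particle swarm optimization: each particle is pulled
# toward its own best position and the swarm's best position.
# Objective and all coefficients are illustrative.

def pso(objective, particles=15, iterations=80, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(-20, 20) for _ in range(particles)]
    vel = [0.0] * particles
    personal_best = pos[:]
    global_best = min(pos, key=objective)
    for _ in range(iterations):
        for i in range(particles):
            # velocity: inertia + pull to personal and global bests
            vel[i] = (0.7 * vel[i]
                      + 1.5 * rng.random() * (personal_best[i] - pos[i])
                      + 1.5 * rng.random() * (global_best - pos[i]))
            pos[i] += vel[i]
            if objective(pos[i]) < objective(personal_best[i]):
                personal_best[i] = pos[i]
            if objective(pos[i]) < objective(global_best):
                global_best = pos[i]
    return global_best
```

No particle solves the problem alone; the shared global best is what coordinates the swarm, mirroring the decentralized cooperation described above.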

Applications of Intelligent Methods for GHP
The efficiency of geothermal heat pumps (GHPs) may be affected by a number of factors. GHP performance may be modeled with sufficient accuracy using intelligent methods, and the resulting models may be used for future performance forecasting. The suggested models are flexible enough to be used for a wide variety of tasks, including system optimization, control system design, and facilitating future GHP design. Fig. 1 shows a few of the most crucial uses of AI in GHPs.
The outputs of GHPs cannot be predicted without first considering the relevant components as inputs. The effectiveness of the GHP shown in Fig. 2 was assessed using ANNs, linear and nonlinear regression, and other methods in a study by Müller [9]. Their models computed the heat transfer rate by factoring in the water flow rate, U-tube thickness, well size, water temperature, heat capacity of the soil, and vertical well depth. Linear regression achieved very poor accuracy, with an R² of just 0.114; nonlinear regression and ANN achieved higher accuracies of 0.842 and 0.947, respectively. Since ANNs have a more intricate design than correlations, which allows them to simulate more complex systems, they are expected to predict outcomes with a higher degree of accuracy. Artificial neural networks (ANNs) and multiple linear regression (MLR) were employed by Riandari, Sihotang, and Husain [10] to evaluate the heating performance and hourly projections of a GHP. Models were compared using the RMSE coefficient of variation, which was 3.56 percent for the MLR models and 1.75 percent for the ANN models. Recent research (e.g., by Acosta, Amoroso, Sant'Anna, and Junior [11]) on other models proposed for various frameworks and characteristics has likewise shown ANN to be more accurate than traditional regression. Several other studies in the area have employed this method because of the excellent precision of ANNs. Using the temperature changes of the air entering and leaving the condenser section, the water-antifreeze solution entering and leaving the heat exchangers, and the ground temperature as inputs, the authors in [12] utilized an ANN to evaluate a GHP's COP. The high value of R² in their regression indicates that the projected values are significantly close to the actual values. The support vector machine (SVM) is another intelligent approach that has been utilized to model GHPs. In their study of SVM for predicting the COP of a GHP, Han, Zhang, Li, and Liu [13] showed that it achieved an astounding R² of 0.999999 in terms of prediction accuracy.
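The R² measure quoted throughout the studies above can be computed for a simple least-squares linear fit as follows; the data here are synthetic, not real GHP measurements:

```python
# Sketch of the R^2 accuracy measure used in the studies above,
# for a least-squares linear fit. The data are synthetic.

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def r_squared(xs, ys, slope, intercept):
    """R^2 = 1 - residual sum of squares / total sum of squares."""
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot
```

An R² near 1 (as reported for the ANN and SVM models above) means the model explains almost all of the variance in the measured outputs; an R² of 0.114, as in the linear regression of [9], means it explains very little.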

Fig. 2: Representation of a GHP System
Different approaches, functions, and architectures may have varying effects on the accuracy of the suggested models of GHP performance. In order to predict the COP of GHPs employed in Wuhan and Shaoxing, China, for instance, the authors in [14] employed ANN, with varying numbers of neurons, and ANFIS. In the ANN implementations, the neuron count ranged from 3 up to 23 in increments of 2. All of the evaluated ANN configurations showed adequate accuracy, but the configuration with five neurons yielded the maximum accuracy (R² = 0.9982). When ANFIS was used, however, R² increased to 0.9994. A comparison of RMSE values confirmed that ANFIS was the more accurate, with a value of 0.05524 versus 0.06475 for the ANN. Esen and Inalli [15] used ANFIS and ANN to project a vertical GHP's effectiveness. The Polak-Ribiere conjugate gradient (CGP), scaled conjugate gradient (SCG), and Levenberg-Marquardt (LM) learning approaches were tested, along with varying numbers of neurons (6, 8, and 10). The LM algorithm with 8 neurons generated the best results based on RMSE and R² for both heating and cooling settings.
Gauss2, Gbell, trapezoidal, and triangular membership functions were among those tried with ANFIS. Gauss2 with two membership functions for cooling and triangular with two membership functions for heating was found to give the highest R² values. Based on their research, they concluded that ANFIS in an ideal structure yielded more precision than the optimal ANN. Talpur, Salleh, and Hussain [16] used ANFIS to simulate a GHP, examining a variety of membership functions and their effects on COP prediction, where the number of membership functions ranged from 2 to 5, with 2 being the most common. Throughout their research, they applied the following functions: Gauss2mf, Trimf, Gaussmf, Trapmf, and Gbellmf. The best results were achieved with two membership functions, and increasing the number of functions degraded prediction accuracy. All of the above functions achieved R² values above 0.99 in models with two membership functions, with Gaussmf yielding the highest.
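Architecture sweeps like the ones above amount to training one candidate per configuration and keeping the one with the best held-out score. A minimal sketch of the neuron-count sweep (3 to 23 in steps of 2) on synthetic data; the data, inputs, and networks of [14] are not reproduced here:

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in problem: three operating variables driving a
# mildly nonlinear response (not the GHP data of the cited study).
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (300, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(0.0, 0.05, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for n in range(3, 24, 2):                 # candidate hidden-layer sizes, as in the sweep above
    net = MLPRegressor(hidden_layer_sizes=(n,), max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    scores[n] = r2_score(y_te, net.predict(X_te))  # held-out R^2 per architecture

best_n = max(scores, key=scores.get)
print(f"best neuron count: {best_n} (R^2 = {scores[best_n]:.4f})")
```

Scoring on a held-out split, as here, guards against the larger networks winning merely by memorizing the training data.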
In addition to single-output models, multi-output predictions can be made with intelligent methods. The authors in [17] compared ANNs that modeled the compressor's power consumption and heating capacity using a variety of learning algorithms and neuron counts. The network was fed readings from the outlet and inlet of the direct-expansion hydrothermal evaporator as well as the inlet and outlet of the condenser's cooling water. The lowest RMSE and highest R² values were achieved using LM with 28 neurons. Factoring in the coefficient of performance (COP) and the energy efficiency rate, Xing, Wang, Luo, Wang, Bai, and Fan [18] evaluated the accuracy of ANN (back-propagation type) and random forest (RF) estimates of a GHP's performance. The RF model proved around 3.3% more robust than the ANN model.
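A multi-output network of the kind used in [17] can be sketched with scikit-learn, whose regressors accept 2-D targets natively; the inputs and targets below are synthetic stand-ins, not the evaporator/condenser readings of the cited study:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# One MLP predicting two targets at once (stand-ins for compressor
# power consumption and heating capacity) from four synthetic inputs.
rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, (400, 4))       # e.g. inlet/outlet temperature readings
Y = np.column_stack([
    2.0 * X[:, 0] + X[:, 1] ** 2,          # stand-in for power consumption
    X[:, 2] - 0.5 * X[:, 3] + X[:, 0],     # stand-in for heating capacity
])

# 28 hidden neurons, echoing the best configuration reported above.
net = MLPRegressor(hidden_layer_sizes=(28,), max_iter=3000, random_state=0)
net.fit(X, Y)                              # scikit-learn handles 2-D targets directly
pred = net.predict(X[:5])
print(pred.shape)                          # one row per sample, one column per output
```

Sharing one hidden layer across both outputs lets correlated targets (power and capacity both depend on the same temperatures) reinforce each other during training.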
Combining intelligent strategies with suitable optimization methods can improve efficiency and precision. Optimization techniques are often used to fine-tune the models' hyperparameters by minimizing the difference between predicted and actual values. Cho et al. [19] used SVM, a tree ensemble, and an ANN to estimate a GHP's COP. Just nine inputs were required: the cooling and chilled water mass flow rates, the cooling water outlet and inlet temperatures, the mean ground temperature in the storage volume, the chilled water outlet and inlet temperatures, the fan coil unit output air temperature, and the interior temperature.
Of these algorithms, the ANN performed best in terms of accuracy, followed by SVM and the tree ensemble. Bayesian optimization was then used to fine-tune the ANN's hyperparameters, resulting in a more accurate model: R² rose from 0.9737 to 0.9966 on the testing dataset and from 0.9727 to 0.9966 on the training dataset.
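Hyperparameter tuning of this kind can be sketched as follows; [19] used Bayesian optimization, but as a simpler stand-in this sketch runs an exhaustive grid search over two hyperparameters on synthetic data (dedicated libraries such as scikit-optimize provide true Bayesian optimization):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: four inputs with a noisy linear response
# (not the nine GHP inputs of the cited study).
rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, (200, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * rng.standard_normal(200)

# Candidate hyperparameter values; each combination is cross-validated.
grid = {"hidden_layer_sizes": [(8,), (16,)], "alpha": [1e-4, 1e-2]}
search = GridSearchCV(MLPRegressor(max_iter=3000, random_state=0),
                      grid, cv=3, scoring="r2")
search.fit(X, y)
print("best params:", search.best_params_,
      "cross-validated R^2: %.3f" % search.best_score_)
```

Bayesian optimization replaces the exhaustive loop with a surrogate model that proposes the next configuration to try, which matters once the search space grows beyond a handful of combinations.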

Applications of Intelligent Methods in HEXs
As stated in Section I, intelligent modeling and performance-prediction techniques can be applied to thermal media such as heat exchangers (HEXs). Reviews of research on the use of intelligent methods in HEXs are presented below. Among the methods available for evaluating the performance of HEXs, intelligent models offer several advantages, including low computational cost, high accuracy, and cost-effective outcomes. In modeling an earth-to-air HEX, the authors in [20] evaluated two models, one deterministic and one intelligent. The first was derived from a study of coupled heat and mass transfer in the ground, while the second was developed using artificial neural networks. The air mass flow rate was one of six input variables to the ANN-centric model, along with length, humidity, ambient air temperature, the ground-surface heating rate, and the ground heating rate at burial depth.
The team found that the ANN-based model outperformed the deterministic model in accurately predicting the outlet temperature. A supplementary study by Anand, Jawahar, Bellos, and Malmquist [21] implemented an ANN to simulate a cross-flow HEX found in phosphoric acid inhibition plants. Their research focused on the thermal efficiency of the HEX, and the measured values of MSE (mean squared error) and R² showed that the model was dependable for efficiency forecasting. With the right kind of data, models applicable to a wide range of geometries can be constructed. The authors developed a neural-network-centric framework for forecasting the efficiency of fin-and-tube HEXs with cut and plain fins fitted with longitudinal delta-winglet turbulators. Twelve variables were fed into the model, with the friction factor and Nusselt number of the fluid flow as outputs. Through trial and error with several network architectures, they found that two hidden layers (5 neurons in the bottom layer and 9 in the top layer) produced the most accurate model, with a deviation of less than 4% between observed and predicted data. The effectiveness of HEXs may be predicted in both static and dynamic modes with the use of ANNs.
To predict the behavior of an air-to-water fin-tube HEX in transient situations, Yang, Lee, and Song [22] used an ANN. The network was trained on experiments conducted under varied operating circumstances, for example by changing the mass flow rates and temperature with a heater. Water and air mass flow rates and intake temperatures were used to forecast performance. The ANN-based model allowed precise prediction of the working fluids' transient temperatures; the rapid rise in temperature experienced by the streams when the heater is turned on is shown in Fig. 3. Using nanofluids to improve heat transfer is a promising concept applicable to a wide variety of heat-transfer devices, including heat exchangers (HEXs), microchannels, heat pipes, and thermosyphons. Nanofluidic HEXs may likewise be modeled, and their performance predicted, using intelligent approaches.
The pressure drop and heat transfer within a double-tube HEX operating with a water/Ag nanofluid were predicted using an ANN by Zendehboudi, Ye, Hafner, Andresen, and Skaugen [23]; see Fig. 4 for a diagram of the network they developed to predict changes in the Nusselt number and pressure. To find the most precise network, they tried several different transfer functions, including RBF, Tansig, and Logsig. The highest R² values for relative pressure drop (99.5%) and Nusselt number (99.99%) were found for the RBF framework. A comparable approach was taken by Estellé, Halelfadl, and Maré [24], who used a CNT/water nanofluid to predict the Nusselt number of coil HEXs with several different intelligent frameworks. The frameworks used the spiral ratio of the HEXs, the volume percentage of nanostructured materials in water, and the Prandtl number as inputs. Diverse metrics, such as mean squared error (MSE) and correlations, were employed to assess the models, and LSSVM produced the most impressive results: the MSE was 0.003, 3.451, and 3.799 for LSSVM, MLP, and ANFIS, respectively. The Nusselt number proved most sensitive to the number of helical coils, although all three inputs were considered in the sensitivity analysis.
Heat transfer from condensation in a herringbone-shaped brazed-plate HEX containing a coolant was predicted using an ANN by Jokar, Eckels, Honsi, and Gielda [25] (references abound that elaborate on these specific HEXs). 1884 datasets from various experiments were used to build a holistic model. The network's inputs included the driving temperature difference, the equivalent Reynolds number, the Prandtl number, the deformation-amplification ratio (a geometric variable), and the water superheat; the heat transfer factor was the network's output. 70% of the dataset was used for training the network, while the remaining 30% was used for testing and evaluation. The network's predictions were compared with those of analytical-computational models, and the ANN-centric models were generally more accurate. It is also feasible to predict HEX flow characteristics using ANNs. Two-phase flow patterns in a HEX microchannel heat sink were predicted using an ANN, with training data provided by a HEX in an HVAC system. The Reynolds number, Capillary number, Froude number, and void fraction were employed as network inputs. The Capillary number, described as the ratio of interfacial tension to friction force, and the take-off ratio, used to represent the distribution of two-phase flows, were both integral parts of the system. Initially, the inputs were standardized to fall between -1 and 1 for use in the ANN-based model.
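The standardization step mentioned above is a simple per-feature min-max map onto [-1, 1]; a minimal sketch, in which the column values are illustrative rather than the Reynolds/Capillary/Froude data of the cited study:

```python
import numpy as np

# Each row is a sample, each column a feature (illustrative values,
# e.g. Reynolds number, Capillary number, void fraction).
X = np.array([[1200.0, 0.02, 0.45],
              [3400.0, 0.07, 0.10],
              [2600.0, 0.05, 0.80]])

# Per-feature min-max scaling onto [-1, 1]: the column minimum maps
# to -1 and the column maximum maps to +1.
x_min, x_max = X.min(axis=0), X.max(axis=0)
X_scaled = 2.0 * (X - x_min) / (x_max - x_min) - 1.0

print(X_scaled)
```

Scaling all inputs to a common range keeps features with large raw magnitudes (such as the Reynolds number here) from dominating the network's early training.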
Kuri-Morales [26] optimized network architectures and determined that the best design consisted of three hidden layers, each containing three neurons. The generated model had an R² of 0.9802, which is excellent for a phenomenon as complex as two-phase flow distribution. The ANN-based model's predictions proved much more accurate than those of other techniques, such as a power-law interpretation and a construction derived from Prigogine's theory. Using ANNs, the behavior of HEXs beyond the traditional ones employed purely for heat transmission can also be anticipated. The oscillatory heat transfer coefficient of a thermoacoustic HEX was predicted using an ANN by Machesa, Tartibu, and Okwu [27]. Their model took into consideration both mean pressure and frequency. After evaluating various neuron counts within the network's single hidden layer, the oscillatory heat transfer coefficient was predicted using a network with 10 hidden neurons, which produced the highest R² and lowest mean squared error among the regressions. The recommended network showed respectable performance, with a mean error of 3.2% between projected values and experimental data. Their model was quite accurate but narrow in scope, since it focused on a single scenario with fixed geometry and parameters; more experimental data under a variety of conditions is needed to improve its comprehensiveness. Relatedly, Suthahar and Brindha Devi [28] used an ANN to forecast the outcomes of a triple concentric-tube HEX. The heat transfer rates and pressure drops were the measured quantities, and seven variables, including helix pitch, groove depth, cold-stream mass flow rate, outer sizes/diameters, hot-stream mass flow rate, cold-stream inlet temperature, and intake temperature, were taken into account. Through trial and error, they settled on a configuration with two hidden layers, with 21 neurons in the first and 15 in the second, yielding the maximum accuracy. The pressure drop and heat exchange rate cases in this model achieved R² values of 0.99 and 0.9924, respectively. Intelligent approaches may be used to anticipate not just thermal performance but also the occurrence of problems such as the fouling of HEXs. Predicting the fouling of a flue gas HEX using an ANN was performed by Tang, Li, Wang, He, and Tao [29].
The three inputs to the network they developed for fouling prediction were the alkali-acid ratio, which characterizes the effect of coal ash on fouling propensity, the transverse pitch, and the longitudinal pitch. In their research, they introduced a "fouling factor index" to characterize the level of fouling: the index expressed the amount of fouling on the heat-transfer surface as a percentage of the functional particles that had struck the surface. The NN used a single hidden layer; 87.5% of the supplied data was used for training and the remaining 12.5% for testing. When training the network, hidden-layer sizes from 1 to 20 neurons were tried before settling on 12 as optimal. The developed ANN was quite trustworthy, with an R² of 0.9996 based on the comparison between predicted and actual values of the fouling factor index. As the R² of the proposed predictive regression shows, the method can correctly predict fouling of the HEX, allowing improvements to its design and structure. Another study that used an ANN for HEX prediction was conducted by Verma, Nashine, Singh, Singh, and Panwar [30]. The amount of solid material and the fluid fouling factor on a heat exchanger's surface were analyzed in terms of the fluid temperature, surface temperature, fluid density, fluid velocity, fluid passage diameter, and dissolved oxygen time and content. The experimental data, comprising more than 11,000 datasets, informed a model based on MLP and ANN; its R² was 0.978, with a mean absolute relative deviation of 5.4%. ANNs are helpful for more than technical considerations; they can also be used to estimate the cost of HEXs in advance. To calculate how much a shell-and-tube HEX would cost, Gay, Mackley, and Jenkins [31] used an ANN. Five parameters were fed into the network: the shell and tube diameters, the stationary head factor, the rear head factor, and the tube pitch.
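The "fouling factor index" described above, read as the percentage of striking particles that actually deposit on the heat-transfer surface, can be sketched as follows; this is a hedged reading of the definition given in the text, not the exact formula of [29]:

```python
def fouling_factor_index(deposited_particles: float, striking_particles: float) -> float:
    """Percentage of particles striking the heat-transfer surface that
    deposit on it (an illustrative reading of the index described above)."""
    if striking_particles <= 0:
        raise ValueError("striking_particles must be positive")
    return 100.0 * deposited_particles / striking_particles

# Example: 350 deposited out of 10,000 striking particles -> 3.5%
print(fouling_factor_index(350, 10_000))
```

An ANN trained on this index, as in the cited study, learns to map geometry and ash-chemistry inputs directly to the expected deposition percentage.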

VII. CONCLUSION AND FUTURE RESEARCH
This article offers a comprehensive review of the characteristics, methods, trends, and applications of intelligent systems. The second section reviews several definitions of intelligent systems, and the third describes the characteristics that set intelligent systems apart from more traditional computer-based ones. The fourth section discusses the widely used methods of intelligent systems, and the fifth reviews the trends associated with these methods. The sixth section examines the practical uses of each design approach. The investigations that applied intelligent strategies to predict the behavior of HEXs indicated that the methods were effective and reliable, meaning there is room for future use and improvement of the tools at hand. Many publications and books have been written on the topic, yet some questions still need further investigation. First, although most studies have relied on traditional ANNs, alternative intelligent approaches such as LSSVM and ANFIS, which may perform better overall, should be employed more often. Including more inputs, especially those that affect the outputs, will produce more complete models; most of those suggested so far apply only to certain classes of HEXs and narrow operating situations.
This research also offers recommendations for further research and development in the field of GHPs. First, it would be helpful to provide more complete prediction models for GHPs. Adding extra information to a model built with intelligent procedures has been found to increase its usefulness and scope. To make the models generalizable across systems, it is important to account for data collected under a range of scenarios, and additional factors, such as component and operating-fluid characteristics, must be considered. Additional functions may also be added to the models to identify the most effective ones in terms of precision and speed. Increased use of optimization algorithms such as particle swarm optimization, genetic algorithms, and their hybrids should also be considered for fine-tuning the hyperparameters of the various intelligent procedures, leading to models with greater precision. Second, future research might take into account energy efficiency as well as economic and environmental indices as possible model outcomes; such models could broaden the predictions made about GHPs. Lastly, nanofluids may serve as an effective heat transfer fluid in GHPs, and future studies might benefit from simulating the performance of GHPs using nanofluid parameters as inputs alongside other significant aspects.

Fig. 3: Temperature Fluctuations at the (a) Water and (b) Air Outlets (-experiment, -ANN)