Novel determination of differential-equation solutions: universal approximation method


Journal of Computational and Applied Mathematics 146 (2002) 443-457, www.elsevier.com/locate/cam

Thananchai Leephakpreeda
School of Industrial and Mechanical Engineering, Sirindhorn International Institute of Technology, Thammasat University, P.O. Box 22, Thammasat-Rangsit P.O., 12121 Pathum Thani, Thailand
E-mail address: thanan@siit.tu.ac.th (T. Leephakpreeda)

Received 30 October 2000; received in revised form 5 January 2002

Abstract

In a conventional approach to numerical computation, finite difference and finite element methods are usually implemented to determine the solution of a set of differential equations (DEs). This paper presents a novel approach to solving DEs by applying the universal approximation method through an artificial intelligence utility in a simple way. In the proposed method, a neural network model (NNM) and a fuzzy linguistic model (FLM) are applied as universal approximators for any nonlinear continuous functions. With this outstanding capability, the solutions of DEs can be approximated to within arbitrary accuracy by an appropriate NNM or FLM. The adjustable parameters of such an NNM or FLM are determined by an optimization algorithm. This systematic search yields sub-optimal adjustable parameters of the NNM and FLM that satisfy the boundary conditions of the DEs and minimize the residual errors of the governing equations. The simulation results demonstrate the viability of efficiently determining the solutions of ordinary and partial nonlinear DEs. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: Finite difference; Finite element; Universal approximation; Neural network model; Fuzzy linguistic model; Solving differential equations

1. Introduction

Proper design for engineering applications requires detailed information on system-property distributions, such as temperature, velocity, density, stress and concentration, over the space and time domain. This information can be obtained by either experimental measurement or computational simulation.

Fig. 1. Functional approximation of two elements in FDM as well as FEM and UAM in a given domain.

Although experimental measurement is reliable, it requires a great deal of labor and time. Therefore, computational simulation has become a more and more popular design tool, since it needs only a fast computer with a large memory. Frequently, such engineering design problems deal with a set of differential equations (DEs) that must be solved numerically, for example in heat transfer and in solid and fluid mechanics. The conventional way to obtain numerical solutions of DEs is to implement either the finite difference method (FDM) or the finite element method (FEM). FDM is based on approximating the derivatives in the DEs by differences; linearity is assumed for the purpose of evaluating the derivatives. Although this approximation is conceptually easy to understand, it has a number of shortcomings. In particular, it is difficult to apply to systems with irregular geometry or unusual boundary conditions. FEM, on the other hand, may be better suited to such systems. The key idea of FEM is to find approximate solutions of the DEs on a number of shaped sub-domains (finite elements); the total solution is then generated by assembling all the individual element solutions. As in FDM, it is usually assumed that the approximate solution varies linearly over each individual element, which can be true only for sufficiently small elements. Therefore, the interval size in FDM and the element size in FEM must be small enough that a reasonable approximation yields good solutions, and the computational cost grows accordingly. Additionally, the problem formulation of FDM and FEM can be tedious in some cases.

In order to resolve these problems, this paper proposes a novel methodology for solving DEs by using universal approximators, namely the neural network model (NNM) and the fuzzy linguistic model (FLM). The NNM and FLM are used to approximate the solutions of DEs over the entire domain, and their functional characteristics are made to satisfy all the conditions of the governing and boundary equations of the DEs. The proposed method is called the universal approximation method (UAM) in this study. Fig. 1 shows the differences in the functional approximation of two elements of a given domain between FEM as well as FDM and UAM for a one-variable function f(x). Unlike FDM and FEM, the proposed UAM determines the differential-equation solution via optimization techniques, which have been developed rapidly for ever more efficient use on high-dimensional and nonlinear problems. The Optimization Toolbox of MATLAB [3], which is implemented in this study, is only one example.

2. Universal approximators

In computational artificial intelligence, NNM and FLM have been clearly recognized as attractive alternative functional approximation schemes because they are able to realize nonlinear mappings of any continuous functions [1,5]. Conceptually, the functional relationships between the input/output variables (mathematically, the independent/dependent variables) are captured by the adjustable parameters of both the NNM and the FLM. To provide the necessary background, the mathematics of NNM and FLM is first introduced as follows.

2.1. Neural network model

In this study, the multilayer feedforward neural network is used to map the functional relations between the input and output variables. Many different structural architectures of NNM have been proposed [4]; the multilayer feedforward NNM is of particular interest here because it is commonly used. Furthermore, the NNM possesses an inherently nonlinear structure that is capable of approximating complex functions. Cotter [2] demonstrated that a multilayer feedforward NNM can approximate any continuous nonlinear function, provided that there is a sufficient number of nonlinear processing nodes, the so-called neurons, in the hidden layers. For convenience of discussion, the NNM with an input layer, a single hidden layer and an output layer shown in Fig. 2 is taken as the basic structural architecture. The dimension of the NNM is denoted by the number of neurons in each layer, that is, an I × J × K NNM, where I, J and K are the numbers of neurons in the input layer, the hidden layer and the output layer, respectively. In general, however, more than one hidden layer can be implemented for dealing with highly nonlinear functions. The architecture shows how the NNM transforms the I inputs (x_1, ..., x_i, ..., x_I) into the K outputs (y_1, ..., y_k, ..., y_K) through the J hidden neurons (z_1, ..., z_j, ..., z_J), where the circles represent the neurons in each layer. Let b_j be the bias of neuron z_j, c_k the bias of neuron y_k, w_{ji} the weight connecting neuron x_i to neuron z_j, and w_{kj} the weight connecting neuron z_j to neuron y_k.

Fig. 2. Multilayer feedforward NNM.
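For concreteness, the forward mapping of such an I × J × K network, written out explicitly in Eqs. (1) and (2) below, can be sketched in a few lines of Python/NumPy. The helper names and the default choice of a logistic sigmoid for both activation functions are illustrative assumptions of this sketch rather than prescriptions from the text.

import numpy as np

def sigmoid(s):
    # Logistic function 1/(1 + exp(-s)); the text cites the sigmoid as the usual activation.
    return 1.0 / (1.0 + np.exp(-s))

def nnm_forward(x, W_hid, b, W_out, c, f_z=sigmoid, f_y=sigmoid):
    # x: input vector of length I; W_hid: (J, I); b: (J,); W_out: (K, J); c: (K,)
    # z_j = f_z(sum_i w_ji x_i + b_j)   -- Eq. (2)
    # y_k = f_y(sum_j w_kj z_j + c_k)   -- Eq. (1)
    z = f_z(W_hid @ x + b)
    return f_y(W_out @ z + c)

# Example: a 1 x 5 x 1 NNM evaluated at x = 1.3 with arbitrary parameters.
rng = np.random.default_rng(0)
y = nnm_forward(np.array([1.3]),
                rng.normal(size=(5, 1)), rng.normal(size=5),
                rng.normal(size=(1, 5)), rng.normal(size=1))

An identity output activation can be substituted for f_y when the solution being approximated is not confined to the interval (0, 1).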

The output of the NNM, f: R^I -> R^K, is determined as

y_k = f_y\left( \sum_{j=1}^{J} w_{kj} z_j + c_k \right)    (1)

with

z_j = f_z\left( \sum_{i=1}^{I} w_{ji} x_i + b_j \right),    (2)

where f_y and f_z are the activation functions, which are normally nonlinear. The usual choice of activation function [4] is the sigmoid transfer function f(s) = 1/(1 + e^{-s}). It should be noted that the NNM in Eqs. (1) and (2) can capture the hidden relationship between inputs and outputs by tuning its adjustable parameters, that is, the weights w and the biases b and c, in order to achieve some desired end.

2.2. Fuzzy linguistic model

In addition to the NNM, the study of Ying [6] has shown that the FLM of Takagi and Sugeno can be used as a universal approximator of any nonlinear continuous function. For the sake of simplicity, an I-input, single-output fuzzy rule of Takagi and Sugeno can be expressed as

jth rule: If x_1 is A_1^j and x_2 is A_2^j and ... and x_I is A_I^j, then y is a_0^j + a_1^j x_1 + a_2^j x_2 + ... + a_I^j x_I,    (3)

where j indexes the fuzzy linguistic rules, I is the total number of input variables, A denotes the fuzzy sets of the input variables, a denotes the coefficients of the piecewise linear rules, and x and y are the input and output variables of the FLM. In the case of more than one output variable, e.g., K outputs y_k for k = 1, 2, ..., K, one FLM of the form (3) is used for each output variable. In this study, the membership values of the fuzzy linguistic sets are specified by triangle-shaped functions, as shown in Fig. 3. The linear profiles of the right and left membership functions of two successive sets A_h and A_{h+1}, that is, R_h(.) and L_{h+1}(.), respectively, are determined as

R_h(x) = 1 - s_h (x - X_h),    (4a)
L_{h+1}(x) = 1 - s_h (X_{h+1} - x),    (4b)

where s_h is the slope, X_h and X_{h+1} are the centers of the fuzzy sets A_h and A_{h+1}, respectively, and h is the index of the fuzzy set. Therefore, the membership value of the fuzzy set A_h is computed as

p(x) = R_h(x)   if X_h <= x <= X_{h+1},
       L_h(x)   if X_{h-1} <= x <= X_h,
       0        otherwise.    (5)

Fig. 3. Definitions of membership functions.

With the typical center-average defuzzification of the fuzzy inference, the output of the FLM is determined by

y = \frac{ \sum_{j=1}^{r} \left[ \left( \prod_{i=1}^{I} p(x_i) \right) \left( a_0^j + \sum_{i=1}^{I} a_i^j x_i \right) \right] }{ \sum_{j=1}^{r} \left( \prod_{i=1}^{I} p(x_i) \right) },    (6)

where r is the total number of fuzzy rules that contribute to the output. As for the NNM, the coefficients a_0^j and a_i^j for all j = 1, 2, ..., r and i = 1, 2, ..., I in Eq. (6) can be regarded as the adjustable parameters that capture the nonlinearity of the DE solution.

It can now be concluded that NNM and FLM can be used to approximate the existing solutions of DEs. This goal is accomplished in two steps: (1) selecting the numbers of hidden neurons and layers of the NNM, or the numbers of fuzzy sets and rules of the FLM, and (2) tuning the adjustable parameters of the NNM or FLM. More details are discussed in Section 3.

3. Problem formulation

As is well known, the main concept of solving DEs is to find the solution that satisfies two separate sets of conditions: (1) the governing equations and (2) the boundary equations. Hence, in the proposed UAM, the solutions of DEs are obtained by the following procedure:

Step 1: Initiate either a trial NNM or FLM. Choose the numbers of neurons and hidden layers of the NNM, or the numbers of fuzzy sets and rules of the FLM, to be as small as possible at the beginning.

Step 2: Apply an optimization technique to determine the sub-optimal adjustable parameters of the NNM or FLM in such a way that the residual errors of the governing equations are minimized and the boundary conditions are satisfied.
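As an illustration of Eqs. (4)-(6), the following Python/NumPy sketch evaluates a single-input Takagi-Sugeno FLM with triangular membership functions. The slope of each membership function is taken here as the reciprocal of the distance between neighbouring centers, so that adjacent sets overlap as in Fig. 3; this choice, like the function names and the placeholder coefficients, is an assumption made for the sketch and is not stated explicitly in the text.

import numpy as np

def memberships(x, centers):
    # Triangular membership values p(x) of the fuzzy sets A_h (Eqs. (4a), (4b) and (5)),
    # with centers X_h and slopes s_h = 1/(X_{h+1} - X_h) assumed here.
    p = np.zeros(len(centers))
    for h, X in enumerate(centers):
        left = centers[h - 1] if h > 0 else None
        right = centers[h + 1] if h < len(centers) - 1 else None
        if left is not None and left <= x <= X:
            p[h] = 1.0 - (X - x) / (X - left)        # rising profile L_h(x)
        elif right is not None and X <= x <= right:
            p[h] = 1.0 - (x - X) / (right - X)       # falling profile R_h(x)
        elif x == X:
            p[h] = 1.0
    return p

def flm_output(x, centers, a0, a1):
    # Center-average defuzzification, Eq. (6), for one input and r = len(centers) rules:
    # y = sum_j p_j(x) (a0_j + a1_j x) / sum_j p_j(x)
    p = memberships(x, centers)
    return np.sum(p * (a0 + a1 * x)) / np.sum(p)

# Three rules centered at x = 1, 1.5 and 2, as in the example of Section 4 (Eq. (15));
# the consequent coefficients below are placeholders, not values from Table 2.
centers = np.array([1.0, 1.5, 2.0])
print(flm_output(1.3, centers, a0=np.zeros(3), a1=np.ones(3)))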

Fig. 4. Diagram of the proposed UAM for determining the solutions of DEs.

Step 3: If the residual errors are less than the tolerance and the boundary equations are satisfied, then stop. If not, increase the numbers chosen in Step 1 and return to Step 2.

Fig. 4 shows the overall diagram of the proposed UAM for determining the solution of DEs. Let us arrange the I inputs (the independent variables) and the K outputs (the dependent variables) as the elements of the input and output vectors x = [x_1 x_2 ... x_I]^T and y = [y_1 y_2 ... y_K]^T, respectively. To tune the adjustable parameters of the NNM or FLM, a batch of n input data {x}_1, {x}_2, {x}_3, ..., {x}_n is generated by appropriately choosing point locations within the considered domain and on the boundary. The numbers and locations of these points can be selected so as to yield the solution of the DEs of interest effectively. The input x is then fed through the nonlinearity of the NNM or FLM to provide its outputs y and/or their derivatives. The derivatives can be calculated analytically from the mathematical model in Eqs. (1) and (2) for the NNM or in Eq. (6) for the FLM; alternatively, they can be determined numerically, which is the approach used in this study. With the optimization algorithm, the adjustable parameters of the NNM or FLM are systematically updated in such a way that the residual errors of the governing equations become sufficiently small and, at the same time, the boundary equations are satisfied at all the chosen points. Hence, the problem can be expressed as the typical minimization problem

Minimize   J(u)
Subject to \Psi_n(u) <= 0,  n = 1, 2, 3, ..., N,
           \Phi_m(u) = 0,   m = 1, 2, 3, ..., M,    (7)

where u is the vector containing all the adjustable parameters of the NNM or FLM, J is a function of the residual errors of the governing equations, \Psi and \Phi are the functions expressing the boundary equations, and n and m are indices.
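The paper carries out the minimization in Eq. (7) with routines from the MATLAB Optimization Toolbox [3]. As an illustrative alternative, the same problem can be passed to any general-purpose constrained solver; the sketch below uses SciPy's SLSQP method, with the objective J taken as the sum of squared residuals at the chosen points. The wrapper name and this particular choice of J are assumptions of the sketch, since the paper leaves J generic.

import numpy as np
from scipy.optimize import minimize

def uam_solve(u0, residuals, eq_constraints, ineq_constraints=()):
    # Generic form of the minimization problem (7):
    #   residuals(u)      -> array of governing-equation residuals at the chosen points
    #   eq_constraints    -> functions phi_m(u) required to equal zero (boundary equations)
    #   ineq_constraints  -> functions psi_n(u) required to satisfy psi_n(u) <= 0
    J = lambda u: float(np.sum(np.asarray(residuals(u))**2))
    cons = ([{'type': 'eq', 'fun': phi} for phi in eq_constraints] +
            [{'type': 'ineq', 'fun': (lambda u, psi=psi: -psi(u))} for psi in ineq_constraints])
    return minimize(J, u0, method='SLSQP', constraints=cons)   # SLSQP accepts both kinds

The returned object carries the tuned parameter vector in its x field, which then defines the approximate solution through the NNM or FLM.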

Solving the DEs is thus reduced to the minimization problem of finding the sub-optimal adjustable parameters of the NNM or FLM. Most available optimization algorithms can perform the repetitive operations of seeking such a vector u efficiently.

4. Examples

This section illustrates how to implement the proposed UAM to obtain the solutions of linear and nonlinear governing equations with various boundary conditions. First, a nonlinear governing equation with two Dirichlet boundary conditions is considered. The governing equation is

d/dx ( y dy/dx ) = 2/x^2.    (8a)

The boundary conditions are

BC1: y(x = 1) = 0.9,    (8b)
BC2: y(x = 2) = 0.2.    (8c)

Let us propose an approximate solution to the DE in Eq. (8) by using the 1 × 5 × 1 NNM of Eqs. (1) and (2); that is, a single-input, single-output NNM (I = K = 1) with five neurons in the hidden layer (J = 5) is considered, and the domain of the NNM input is [1, 2] in this case. From Eq. (8a), the residual error is defined as

RE_g = d/dx_1 ( y_1 dy_1/dx_1 ) - 2/x_1^2,

where y_1 is the output of the NNM, which approximates the solution y of the DE in Eq. (8), and x_1 is equivalent to x. Three basic formulations of the minimization problem can then be stated as follows.

(A) Penalty method:

Minimize \sum_{g=1}^{G} (RE_g)^2 + \omega_1 ( y_1(x_1 = 1) - 0.9 )^2 + \omega_2 ( y_1(x_1 = 2) - 0.2 )^2.    (9)

(B) Constrained method:

Minimize   \sum_{g=1}^{G} (RE_g)^2
Subject to ( y_1(x_1 = 1) - 0.9 )^2 <= 0,
           ( y_1(x_1 = 2) - 0.2 )^2 <= 0.    (10)
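To make formulation (A) concrete, the sketch below assembles the penalty objective of Eq. (9) for the 1 × 5 × 1 NNM and hands it to an unconstrained quasi-Newton routine, with SciPy's BFGS standing in for the MATLAB command fminu used in the paper. The derivatives in the residual are taken numerically by central differences, as in the study; the step size, the placement of the nine points and the use of sigmoid activations in both layers are assumptions of this sketch.

import numpy as np
from scipy.optimize import minimize

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def nnm(x, u):
    # 1 x 5 x 1 NNM of Eqs. (1)-(2); u packs w_j1 (5), b_j (5), w_1j (5) and c (1): 16 values,
    # matching the count of adjustable parameters quoted in Table 3.
    w1, b, w2, c = u[:5], u[5:10], u[10:15], u[15]
    return float(sigmoid(w2 @ sigmoid(w1 * x + b) + c))

def residual(x, u, h=1e-4):
    # RE = d/dx (y dy/dx) - 2/x^2, Eq. (8a), with central-difference derivatives.
    def yyp(t):
        dy = (nnm(t + h, u) - nnm(t - h, u)) / (2.0 * h)
        return nnm(t, u) * dy
    return (yyp(x + h) - yyp(x - h)) / (2.0 * h) - 2.0 / x**2

def penalty_objective(u, xs, w1=10.0, w2=10.0):
    # Eq. (9): squared residuals plus penalized boundary errors, with omega_1 = omega_2 = 10.
    re = sum(residual(x, u)**2 for x in xs)
    return re + w1 * (nnm(1.0, u) - 0.9)**2 + w2 * (nnm(2.0, u) - 0.2)**2

xs = np.linspace(1.0, 2.0, 9)          # nine equally spaced points (exact placement assumed)
u0 = 0.5 * np.random.default_rng(1).standard_normal(16)
sol = minimize(penalty_objective, u0, args=(xs,), method='BFGS')
y_approx = np.array([nnm(x, sol.x) for x in xs])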

Fig. 5. Comparison of the solutions to the DE in Eq. (8) obtained from the NNM of Eqs. (1) and (2) with the analytical solution in Eq. (12): (a) penalty method, (b) constrained method and (c) minimax method.

(C) Minimax method:

Minimize   \max_{g=1,...,G} (RE_g)
Subject to ( y_1(x_1 = 1) - 0.9 )^2 <= 0,
           ( y_1(x_1 = 2) - 0.2 )^2 <= 0,    (11)

where G is the total number of points chosen within the domain of x_1, g is the point index, and \omega_1 and \omega_2 are positive penalty constants. In this simulation study, the values of \omega_1 and \omega_2 are set to 10. The nine point locations (G = 9) are chosen so as to divide the domain of x_1 into equal intervals. The performance of the NNM in approximating the solution of the DE in Eq. (8) is shown in Fig. 5. The minimization problems in Eqs. (9)-(11) are solved numerically by the commands fminu, constr and minimax, respectively, of the Optimization Toolbox of MATLAB. In Fig. 5, the analytical solution of DE (8) is

y(x) = ( -4 ln(x) + 2.0026 x - 1.1926 )^{1/2}.    (12)

Table 1 lists the numerical values of the sub-optimal weights and biases of the NNM in each case study. The resulting approximation of the NNM to the analytical solution is quite good. It is found that the different minimization methods yield distinct sub-optimal weights and biases of the NNM.
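Formulation (C) is solved in the paper with the dedicated minimax command. Where such a routine is not available, the same min-max problem can be rewritten in epigraph form, minimizing an auxiliary bound t subject to every squared residual staying below t, and handed to a general constrained solver. The sketch below does this with SciPy's SLSQP, restating the network and residual helpers of the penalty sketch and imposing the boundary conditions as equality constraints (equivalent to the squared constraints of Eq. (11)); all of these choices are illustrative substitutions, not the paper's procedure.

import numpy as np
from scipy.optimize import minimize

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def nnm(x, u):
    # Same 1 x 5 x 1 network as in the penalty sketch above.
    w1, b, w2, c = u[:5], u[5:10], u[10:15], u[15]
    return float(sigmoid(w2 @ sigmoid(w1 * x + b) + c))

def residual(x, u, h=1e-4):
    yyp = lambda t: nnm(t, u) * (nnm(t + h, u) - nnm(t - h, u)) / (2.0 * h)
    return (yyp(x + h) - yyp(x - h)) / (2.0 * h) - 2.0 / x**2

xs = np.linspace(1.0, 2.0, 9)

def bound(v):                      # v = [u (16 network parameters), t]
    return v[-1]                   # epigraph variable t to be minimized

cons = ([{'type': 'ineq', 'fun': (lambda v, x=x: v[-1] - residual(x, v[:-1])**2)} for x in xs] +
        [{'type': 'eq', 'fun': lambda v: nnm(1.0, v[:-1]) - 0.9},
         {'type': 'eq', 'fun': lambda v: nnm(2.0, v[:-1]) - 0.2}])

v0 = np.append(0.5 * np.random.default_rng(2).standard_normal(16), 1.0)
sol = minimize(bound, v0, method='SLSQP', constraints=cons)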

Table 1
Numerical values of the sub-optimal weights and biases of the 1 × 5 × 1 NNM

A. Penalty method
  j:       1          2          3          4          5
  w_j1:   -2.9104    -6.0552     0.8561    15.6493    16.5859
  b_j:    -1.4061    -0.3189    -1.0846   -31.0771   -11.1912
  w_1j:   -0.7472   492.1312   -13.4946     0.5073   -48.6444
  c_k:    55.7701

B. Constrained method
  j:       1          2          3          4          5
  w_j1:    5.1332     6.6296     2.1974     3.4382    -3.2185
  b_j:    10.8124    27.8705    -0.1995     8.1451   -16.2064
  w_1j:    5.0046     9.0967   -34.2662    13.0898     8.7671
  c_k:     5.1802

C. Minimax method
  j:       1          2          3          4          5
  w_j1:   -0.8039    -2.1538    -8.7359    -2.1525     2.1537
  b_j:     0.4649   -21.9965     5.5399    -9.5640    10.1468
  w_1j:   15.6558     3.8860    21.7564    -1.8679    -6.1580
  c_k:     0.9864

Each set of sub-optimal adjustable parameters can therefore be regarded as a solution of the DE with its corresponding accuracy.

As an example of solving a DE with the FLM, let us consider a different DE with one Neumann boundary condition and one Dirichlet boundary condition. The governing equation is

d/dx ( x dy/dx ) = 2/x^2.    (13a)

The boundary conditions are

BC1: y(x = 1) = 0.9,    (13b)
BC2: x dy/dx |_{x=2} = -1/2.    (13c)

The analytical solution of Eq. (13) is

y(x) = 2/x + (1/2) ln(x) - 1.1.    (14)

In this problem, the FLM is composed of three fuzzy rules:

Rule 1: If variable x_1 is near 1 (A^1), then variable y_1 = a_0^1 + a_1^1 x_1.
Rule 2: If variable x_1 is near 1.5 (A^2), then variable y_1 = a_0^2 + a_1^2 x_1.
Rule 3: If variable x_1 is near 2 (A^3), then variable y_1 = a_0^3 + a_1^3 x_1.    (15)

Fig. 6. Membership functions associated with the three fuzzy sets.

Table 2
Numerical values of the sub-optimal coefficients of the three-fuzzy-rule FLM

A. Penalty method
  r:        1          2          3
  a_0^r:    1.8204     1.1956     0.9950
  a_1^r:   -0.9255    -0.5070    -0.3713

B. Constrained method
  r:        1          2          3
  a_0^r:  -12.7779   -20.7338   -28.2583
  a_1^r:   13.6779    14.1045    14.2435

C. Minimax method
  r:        1          2          3
  a_0^r:  -11.3357   -18.4597   -25.0225
  a_1^r:   12.2357    12.6543    12.6986

The membership functions of the three fuzzy sets are defined as shown in Fig. 6. The minimization problems are formulated and solved in the same way as in the NNM cases. The resulting sub-optimal coefficients of the FLM in Eq. (15) are given in Table 2. Fig. 7 compares the sub-optimal solutions of the three case studies obtained from the FLM with the analytical solution in Eq. (14). The simulation results of the FLM are close to the analytical solution. However, it can be seen that the functional approximation with the minimax method in Fig. 7(c) does not really yield a satisfactory result. To explain this, consider the two FLM cases of the constrained method and the minimax method, which have the same constraint conditions but different objective functionals. The corresponding values of the sum-squared residual error and the maximum residual error are 1.7923 and 0.9718 for the constrained method, and 3.8891 and 0.6701 for the minimax method. The sum-squared residual error of the constrained method is lower than that of the minimax method, whereas the maximum residual error of the constrained method is higher than that of the minimax method.

Fig. 7. Comparison of the solutions to the DE in Eq. (13) obtained from the FLM of Eq. (6) with the analytical solution in Eq. (14): (a) penalty method, (b) constrained method and (c) minimax method.

This can be interpreted as follows: the constrained method minimizes the overall squared residual error, whereas the minimax method attempts to minimize only the maximum value of the squared residual error over the whole domain at each iteration, while the boundary at x_1 = 2 is a free end. Therefore, the resulting curve of the FLM with the minimax method deviates from the analytical solution so that it can fulfill the min-max objective function and the boundary conditions. It may nevertheless be useful in the sense that the FLM with the minimax method provides the rough shape of the solution profile in this case.

To show the effectiveness of the proposed UAM, the conventional FDM is used to obtain the numerical solutions of the DEs in Eqs. (8) and (13). The domain interval is taken to be uniform with \Delta x = 0.1; there are therefore 11 points in the domain, identical to those of the two cases studied earlier. The first- and second-order derivatives are approximated by

dy/dx = ( y_(i+1) - y_(i-1) ) / ( 2 \Delta x )    (16)

and

d^2y/dx^2 = ( y_(i+1) - 2 y_(i) + y_(i-1) ) / (\Delta x)^2,    (17)

where i is the index of the points.
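The difference formulas (16) and (17) can be checked directly; the short sketch below applies them on the 11-point grid with \Delta x = 0.1 and compares the result with the exact derivative of the analytical solution (14). The vectorized helper names are, of course, only for illustration.

import numpy as np

def d1(y, dx):
    # First derivative at interior points, Eq. (16): (y_(i+1) - y_(i-1)) / (2 dx)
    return (y[2:] - y[:-2]) / (2.0 * dx)

def d2(y, dx):
    # Second derivative at interior points, Eq. (17): (y_(i+1) - 2 y_(i) + y_(i-1)) / dx^2
    return (y[2:] - 2.0 * y[1:-1] + y[:-2]) / dx**2

x = np.linspace(1.0, 2.0, 11)                 # the 11 grid points, dx = 0.1
y = 2.0 / x + 0.5 * np.log(x) - 1.1           # analytical solution (14) as a test function
exact_d1 = -2.0 / x[1:-1]**2 + 0.5 / x[1:-1]  # its exact first derivative
print(np.max(np.abs(d1(y, 0.1) - exact_d1)))  # truncation error of order (dx)^2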

Table 3
Comparison between the conventional FDM and the proposed UAM

                                       Numerical solution of Eq. (8)       Numerical solution of Eq. (13)
                                       FDM          UAM with NNM           FDM          UAM with FLM
e_avg                                  0.0004       0.0016 (A)             0.0157       0.0034 (A)
                                                    0.0150 (B)                          0.0103 (B)
                                                    0.0251 (C)                          0.0791 (C)
Number of values to be determined      9 (y)        16 (w, b, c)           10 (y)       6 (a)
Formulation required for
computational solving                  Yes          No                     Yes          No
Flexibility of point location          Possible     Yes                    Possible     Yes
                                       (but re-formulation is required)    (but re-formulation is required)

After applying Eqs. (16) and (17) to Eqs. (8) and (13) at all the points within the domain, a set of nonlinear equations and a set of linear equations are obtained as Eqs. (18) and (19), respectively:

y_(i+1)^2 - 8 y_(i)^2 + y_(i-1)^2 + 4 y_(i+1) y_(i) - 2 y_(i+1) y_(i-1) + 4 y_(i) y_(i-1) = 8 (\Delta x)^2 / x_(i)^2,   i = 2, 3, ..., 10,    (18)

with

y_(1) = 0.9 at x_(1) = 1

and

y_(11) = 0.2 at x_(11) = 2,

and

(2 x_(i) + \Delta x) y_(i+1) - 4 x_(i) y_(i) + (2 x_(i) - \Delta x) y_(i-1) = 4 (\Delta x)^2 / x_(i)^2,   i = 2, 3, ..., 10,    (19)

with

y_(1) = 0.9 at x_(1) = 1

and

x_(11) ( y_(11) - y_(10) ) / \Delta x = -1/2 at x_(11) = 2.

As shown in Fig. 8, the numerical solutions of Eqs. (8) and (13) are obtained by solving the two sets of equations (18) and (19), respectively.
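The nonlinear algebraic system (18) has nine unknowns, y_(2), ..., y_(10), once the Dirichlet values in Eqs. (8b) and (8c) are substituted, and can be handed to any root-finding routine. The sketch below uses SciPy's fsolve; the solver choice and the initial guess are arbitrary illustrative choices.

import numpy as np
from scipy.optimize import fsolve

dx = 0.1
x = np.linspace(1.0, 2.0, 11)                  # grid points x_(1), ..., x_(11)

def fdm_system(y_int):
    # Residuals of Eq. (18) for the interior unknowns y_(2), ..., y_(10);
    # the boundary values y_(1) = 0.9 and y_(11) = 0.2 are fixed.
    y = np.concatenate(([0.9], y_int, [0.2]))
    r = np.empty(9)
    for i in range(1, 10):                     # Python index i corresponds to grid point i+1
        r[i - 1] = (y[i + 1]**2 - 8.0 * y[i]**2 + y[i - 1]**2
                    + 4.0 * y[i + 1] * y[i] - 2.0 * y[i + 1] * y[i - 1]
                    + 4.0 * y[i] * y[i - 1]) - 8.0 * dx**2 / x[i]**2
    return r

y_fdm = np.concatenate(([0.9], fsolve(fdm_system, 0.5 * np.ones(9)), [0.2]))
y_exact = np.sqrt(-4.0 * np.log(x) + 2.0026 * x - 1.1926)   # analytical solution (12)
print(np.mean(np.abs(y_fdm - y_exact)))                     # compare with the FDM entry in Table 3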

Fig. 8. Numerical solutions obtained by the conventional FDM: (a) Eq. (8) and (b) Eq. (13).

Table 3 presents a comparison between the conventional FDM and the proposed UAM, which can be explained as follows. For accuracy, the mean absolute error e_avg in Table 3 is defined as the sum of the absolute differences between the analytical and numerical values divided by the total number of points. It can be seen that the values of e_avg from the UAM with NNM and FLM are similar to the values of e_avg from the conventional FDM. It should be noticed, however, that the UAM with the FLM and the penalty method (e_avg = 0.0034) yields better numerical results than the FDM (e_avg = 0.0157), while the number of numerical values to be determined is smaller (6 for the UAM, 10 for the FDM). Unlike the UAM, the conventional FDM requires formulating equations in the numerical values to be determined, as expressed in Eqs. (18) and (19). In the UAM, it is also easy to adjust the locations of the considered points on the domain of interest without reformulation.

To illustrate the performance on a high-dimensional problem, the UAM is applied to solve a partial DE with irregularly shaped boundaries. The governing equation is

\partial^2 y / \partial x_1^2 + \partial^2 y / \partial x_2^2 = 0.    (20)

The boundary conditions are specified as illustrated in Fig. 9. The space domain under consideration lies between the outer circle and the inner rectangle. The solid dots represent the locations of the points used to compute the squared residual errors of the governing equation in Eq. (20) and the boundary equations with the penalty method in Eq. (9).

Fig. 9. Specification of the boundary conditions for the governing equation in Eq. (20).

Fig. 10. Three-dimensional view of the solution of the DE in Eq. (20) obtained by implementing the UAM with a 2 × 5 × 5 × 1 NNM.

The penalty constants are set to 100. The 2 × 5 × 5 × 1 NNM is chosen to perform the functional approximation of the solution of the DE in Eq. (20) with the boundary conditions in Fig. 9. The simulation result for this problem is shown in Fig. 10. The numerical values at 10 sample points within the domain and at the boundaries are listed in Table 4. It can be seen that the residual errors at the sample points within the domain are close to zero, while the values of the NNM at the boundary approach their specified values.
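For the two-dimensional problem, the ingredients of the penalty objective can be sketched as follows: a 2 × 5 × 5 × 1 feedforward NNM, a numerically evaluated Laplacian residual for Eq. (20), and the penalized boundary terms with the penalty constant of 100. Since Fig. 9 is not reproduced here, the collocation points and boundary values are left as inputs; the parameter packing, the identity output activation (the boundary values in Table 4 lie outside the range of a sigmoid) and the finite-difference step are assumptions of this sketch.

import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def nnm2(x, u):
    # 2 x 5 x 5 x 1 NNM: two inputs, two hidden layers of five sigmoidal neurons, one output.
    # u packs W1 (5x2), b1 (5), W2 (5x5), b2 (5), w3 (5) and c (1): 51 values in total.
    W1, b1 = u[:10].reshape(5, 2), u[10:15]
    W2, b2 = u[15:40].reshape(5, 5), u[40:45]
    w3, c = u[45:50], u[50]
    z1 = sigmoid(W1 @ x + b1)
    z2 = sigmoid(W2 @ z1 + b2)
    return float(w3 @ z2 + c)          # identity output activation assumed here

def laplace_residual(p, u, h=1e-3):
    # Residual of Eq. (20) at a collocation point p = (x_1, x_2), by central differences.
    e1, e2 = np.array([h, 0.0]), np.array([0.0, h])
    d2x1 = (nnm2(p + e1, u) - 2.0 * nnm2(p, u) + nnm2(p - e1, u)) / h**2
    d2x2 = (nnm2(p + e2, u) - 2.0 * nnm2(p, u) + nnm2(p - e2, u)) / h**2
    return d2x1 + d2x2

def penalty_objective(u, interior_pts, boundary_pts, boundary_vals, w=100.0):
    # Squared Laplacian residuals at the interior dots of Fig. 9 plus penalized boundary errors.
    re = sum(laplace_residual(p, u)**2 for p in interior_pts)
    bc = sum((nnm2(p, u) - v)**2 for p, v in zip(boundary_pts, boundary_vals))
    return re + w * bc

# Example call at one of the sample points of Table 4, with arbitrary parameters.
u0 = 0.1 * np.random.default_rng(3).standard_normal(51)
print(laplace_residual(np.array([3.0, 1.0]), u0))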

Table 4
Numerical values of the 2 × 5 × 5 × 1 NNM at the boundary and residual errors within the domain

Within the domain:
  x_1       x_2       Residual error
  3         1         -0.0338
  3.5       1.5       -0.2468
  4         5          0.0587
 -1.5      -3.5       -0.1503
  2         2          0.5953

At the boundary:
  x_1       x_2       Function value (specified boundary value)
  0         5         -4.7685 (-5)
  5         5          4.6026 (5)
  2.1794   -4.5       -4.3206 (-4.5)
  1         0.5        0.0023 (0)
  1         1          0.0019 (0)

5. Conclusion

Solving DEs by using the universal approximators NNM and FLM has been presented in this paper. The problem formulation of the proposed UAM is quite straightforward. To obtain the best approximate solution of the DEs, the adjustable parameters of the NNM and FLM are systematically searched by optimization algorithms. The simulation results show that the proposed UAM can be efficiently implemented to obtain approximate solutions of DEs.

References

[1] J.L. Castro, M. Delgado, Fuzzy systems with defuzzification are universal approximators, IEEE Trans. Systems Man Cybernet. Part B: Cybernet. 26 (1996) 149-152.
[2] N.E. Cotter, The Stone-Weierstrass theorem and its application to neural nets, IEEE Trans. Neural Networks 1 (1990) 290-295.
[3] A. Grace, Optimization Toolbox, The MathWorks Inc., Natick, MA, 1995.
[4] M.T. Hagan, H.B. Demuth, M. Beale, Neural Network Design, PWS Publishing Company, Massachusetts, 1996.
[5] K. Hornik, M. Stinchcombe, H. White, Multilayer feedforward networks are universal approximators, Neural Networks 2 (1989) 359-366.
[6] H. Ying, General Takagi-Sugeno fuzzy systems with simplified linear rule consequent are universal controllers, models and filters, Inform. Sci. 108 (1998) 91-107.