Solving Nonlinear Differential Equations by a Neural Network Method
Lucie P. Aarts and Peter Van der Veer

Delft University of Technology, Faculty of Civil Engineering and Geosciences, Section of Civil Engineering Informatics, Stevinweg 1, 2628 CN Delft, The Netherlands
l.aarts@citg.tudelft.nl, p.vdveer@citg.tudelft.nl

Abstract. In this paper we demonstrate a neural network method to solve a nonlinear differential equation and its boundary conditions. The idea of our method is to incorporate knowledge about the differential equation and its boundary conditions into neural networks and the training sets. Hereby we obtain specifically structured neural networks. To solve the nonlinear differential equation and its boundary conditions we have to train all obtained neural networks simultaneously. This is realized by applying an evolutionary algorithm.

1 Introduction

In this paper we present a neural network method to solve a nonlinear differential equation and its boundary conditions. In [1] we have already demonstrated how we could solve linear differential and linear partial differential equations by our neural network method. In [2] we showed how to use our neural network method to solve systems of coupled first order linear differential equations. In this paper we demonstrate how we incorporate knowledge about the nonlinear differential equation and its boundary conditions into the structure of the neural networks and the training sets. Training the obtained neural networks simultaneously then solves the nonlinear differential equation and its boundary conditions. Since several of the obtained neural networks are specifically structured, the training of the networks is accomplished by applying an evolutionary algorithm. An evolutionary algorithm tries to find the minimum of a given function. Normally one deals with an evolutionary algorithm working on a single population, i.e. a set of elements of the solution space. We however use an evolutionary algorithm working on multiple subpopulations to obtain results more efficiently.
Finally we graphically illustrate the results obtained by solving the nonlinear differential equation and its boundary conditions with our neural network method.

V.N. Alexandrov et al. (Eds.): ICCS 2001, LNCS 2074, pp. 181-189, 2001. Springer-Verlag Berlin Heidelberg 2001
2 Problem Statement

Many of the general laws of nature, as in physics, chemistry, biology and astronomy, find their most natural expression in the language of differential equations. Applications also abound in mathematics itself, especially in geometry, and in engineering, economics, and many other fields of applied science. In [10] the following nonlinear differential equation and its boundary conditions are derived for the description of the problem of finding the shape assumed by a flexible chain suspended between two points and hanging under its own weight. Further the y-axis passes through the lowest point of the chain:

\frac{d^2 y}{dx^2} = \sqrt{1 + \left(\frac{dy}{dx}\right)^2},   (1)

y(0) = 1,   (2)

\frac{dy}{dx}(0) = 0.   (3)

Here the linear density of the chain is assumed to be a constant value. In [10] the analytical solution of system (1), (2) and (3) is derived and is given by

y(x) = \frac{1}{2}\left(e^{x} + e^{-x}\right).   (4)

In this paper we consider the system (1), (2) and (3) on the interval [-1, 1].
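As a quick numerical sanity check (an illustration added here, not part of the original paper), the analytical solution (4) can be substituted into (1), (2) and (3):

```python
import math

# Analytical solution (4): y(x) = (e^x + e^(-x)) / 2 = cosh(x)
y   = lambda x: 0.5 * (math.exp(x) + math.exp(-x))
dy  = lambda x: 0.5 * (math.exp(x) - math.exp(-x))   # first derivative
d2y = lambda x: y(x)                                 # second derivative equals y itself

# Residual of (1), y'' - sqrt(1 + (y')^2), over a grid on [-1, 1]
residual = max(abs(d2y(k / 10) - math.sqrt(1 + dy(k / 10) ** 2))
               for k in range(-10, 11))

assert residual < 1e-12       # equation (1) holds up to rounding error
assert abs(y(0) - 1) < 1e-12  # boundary condition (2)
assert abs(dy(0)) < 1e-12     # boundary condition (3)
```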
3 Outline of the Method

Knowing the analytical solution of (1), (2) and (3), we may assume that y(x) and its first two derivatives are continuous mappings. Further we define the logsigmoid function f as

f(x) = \frac{1}{1 + \exp(-x)}.   (5)

By results in [8] we can find real values of a_i, w_i and b_i such that for a certain natural number m the following mappings

\varphi(x) = \sum_{i=1}^{m} a_i f(w_i x + b_i),   (6)

\frac{d\varphi}{dx}(x) = \sum_{i=1}^{m} a_i w_i \frac{df}{dx}(w_i x + b_i),   (7)

\frac{d^2\varphi}{dx^2}(x) = \sum_{i=1}^{m} a_i w_i^2 \frac{d^2 f}{dx^2}(w_i x + b_i),   (8)

respectively approximate y(x), dy/dx and d^2y/dx^2 arbitrarily well. The networks represented by (6), (7) and (8) have one hidden layer containing m neurons and a linear output layer. Further we define the DE-neural network of system (1), (2) and (3) as the not fully connected neural network which is constructed as follows. The output of the network represented by (7) is the input of a layer having the function g(x) = x^2 as transfer function. This layer contains one neuron and has no bias; the connection weight between the network represented by (7) and the layer is 1. The output of this layer is the input of a layer with the function h(x) = \sqrt{x} as transfer function. This layer contains one neuron and has a bias with value 1; the connection weight between the two layers is 1.

Fig. 1. The DE-neural network for system (1), (2) and (3). [Sketch: the input x feeds the networks for d\varphi/dx and d^2\varphi/dx^2; the d\varphi/dx branch passes through the layers g(x) = x^2 and h(x) = \sqrt{x} before the subtraction.]
The output of the last layer is subtracted from the output of the network represented by (8). A sketch of the DE-neural network of system (1), (2) and (3) is given in Fig. 1. Since the learnability of neural networks to simultaneously approximate a given function and its unknown derivatives is made plausible in [5], we observe the following. Assume that we have found such values of the weights that the networks represented by (6), (7) and (8) respectively approximate y(x) and its first two derivatives arbitrarily well on a certain interval. By considering the nonlinear differential equation given by (1) it then follows that the DE-neural network must have a number arbitrarily close to zero as output for any input of the interval. In [6] it is already stated that any network suitably trained to approximate a mapping satisfying some nonlinear partial differential equations will have an output function that itself approximately satisfies the partial differential equations by virtue of its approximation of the mapping's derivatives. Further the network represented by (6) must have for input x = 0 an output arbitrarily close to one, and the network represented by (7) must give for the same input an output arbitrarily close to zero. The idea of our neural network method is based on the observation that if we want to fulfil a system like (1), (2) and (3), the DE-neural network should have zero as output for any input of the considered interval [-1, 1]. Therefore we train the DE-neural network to have zero as output for any input of a training set with inputs x in [-1, 1]. Further we have the following restrictions on the values of the weights. The neural network represented by (6) must be trained to have one as output for input x = 0, and for the same input the neural network represented by (7) must be trained to have zero as output. If the training of the three networks has succeeded well, the mapping \varphi and its first two derivatives should respectively approximate y and its first two derivatives.
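The construction above can be sketched in a few lines of Python (an illustration of the structure only, not the authors' Matlab implementation; any concrete weight values are hypothetical):

```python
import math

def f(x):
    """Logsigmoid transfer function (5)."""
    return 1.0 / (1.0 + math.exp(-x))

def df(x):
    """First derivative of the logsigmoid: f'(x) = f(x) * (1 - f(x))."""
    s = f(x)
    return s * (1.0 - s)

def d2f(x):
    """Second derivative: f''(x) = f'(x) * (1 - 2 * f(x))."""
    s = f(x)
    return s * (1.0 - s) * (1.0 - 2.0 * s)

def phi(x, a, w, b):
    """Network (6): one hidden layer of m logsigmoid neurons, linear output."""
    return sum(ai * f(wi * x + bi) for ai, wi, bi in zip(a, w, b))

def dphi(x, a, w, b):
    """Network (7): term-by-term derivative of (6)."""
    return sum(ai * wi * df(wi * x + bi) for ai, wi, bi in zip(a, w, b))

def d2phi(x, a, w, b):
    """Network (8): term-by-term second derivative of (6)."""
    return sum(ai * wi ** 2 * d2f(wi * x + bi) for ai, wi, bi in zip(a, w, b))

def de_output(x, a, w, b):
    """DE-neural network output for system (1), (2), (3):
    network (8) minus the g/h chain applied to (7),
    i.e. phi'' - sqrt(1 + (phi')^2); training drives this toward zero."""
    return d2phi(x, a, w, b) - math.sqrt(1.0 + dphi(x, a, w, b) ** 2)
```

A finite-difference comparison of `phi` against `dphi` and `d2phi` confirms that (7) and (8) are the exact derivatives of (6).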
Note that we still have to choose the number of neurons in the hidden layers of the networks represented by (6), (7) and (8), i.e. the natural number m, by trial and error. The three neural networks have to be trained simultaneously as a consequence of their inter-relationships. How to adjust the values of the weights of the DE-neural network is a specific point of attention, since the weights of the DE-neural network are highly correlated. In [11] it is stated that an evolutionary algorithm makes it easier to generate neural networks with some special characteristics. Therefore we use an evolutionary algorithm to simultaneously adjust the weights of the three neural networks. Before we describe how we manage this, we give a short outline of what an evolutionary algorithm is.

4 Evolutionary Algorithms with Multiple Subpopulations

Evolutionary algorithms work on a set of elements of the solution space of the function we would like to minimize. The set of elements is called a population and the elements of the set are called individuals. The main idea of evolutionary algorithms is that they explore all regions of the solution space and exploit promising areas by applying recombination, mutation, selection and reinsertion operations to the individuals of a population. In this way one hopefully finds the minimum of the given function. Every time all procedures are applied to a population, a new generation is created. Normally one works with a single population. In [9] Pohlheim however states that results are obtained more efficiently when working with multiple subpopulations instead of just a single population. Every subpopulation evolves in isolation over a few generations (as with a single population evolutionary algorithm) before one or more individuals are exchanged between the subpopulations. To apply an evolutionary algorithm in our case, we define e_1, e_2 and e_3 as the means of the sum-of-squares error on the training sets of, respectively, the DE-neural network, the network represented by (6), and the network given by (7). Here the mean of the sum-of-squares error on the training set of a certain network means that the square of the difference between the target and the output of the network is summed over all inputs, and that this sum is divided by the number of inputs. To simultaneously train the DE-neural network and the networks represented by \varphi and \frac{d\varphi}{dx}, we minimize the expression

e_1 + e_2 + e_3,   (9)

by using an evolutionary algorithm. Here expression (9) is a function of the variables a_i, w_i and b_i.

5 Results

In this section we show the results of applying our neural network method to the system (1), (2) and (3). Some practical aspects of training neural networks that are well known in the literature also hold for our method. In e.g. [3] and [7] it is stated that if we want to approximate an arbitrary mapping with a neural network represented by (6), it is advantageous for the training of the neural networks to scale the inputs and targets so that they fall within a specified range. In this way we can impose fixed limits on the values of the weights. This prevents us from getting stuck too far away from a good optimum during the training process. By training the networks with scaled data all weights can remain in small predictable ranges.
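For concreteness, the training objective (9) can be written out as follows (a sketch added here; the network outputs below are hypothetical placeholder numbers, not results from the paper):

```python
def mean_sse(outputs, targets):
    # Mean of the sum-of-squares error over a training set:
    # sum the squared target-output differences, divide by the number of inputs.
    return sum((o - t) ** 2 for o, t in zip(outputs, targets)) / len(outputs)

# e1: DE-neural network outputs on the interior training grid, targets all zero
e1 = mean_sse([0.03, -0.01, 0.02], [0.0, 0.0, 0.0])
# e2: output of network (6) at x = 0, target 1 (boundary condition (2))
e2 = mean_sse([0.98], [1.0])
# e3: output of network (7) at x = 0, target 0 (boundary condition (3))
e3 = mean_sse([0.05], [0.0])

objective = e1 + e2 + e3   # expression (9), minimized over a_i, w_i and b_i
```

The evolutionary algorithm searches the (a_i, w_i, b_i) space for weight vectors that make this scalar objective as small as possible.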
In [2] more can be found about scaling the variable on which the unknown of the differential equation depends, and about scaling the function values of the unknown, to improve the training process of the neural networks. To make sure that in our case the weights of the networks can remain in small predictable ranges, we scale the function values of the unknown of the nonlinear differential equation. Since we normally do not know much about the function values of the unknown, we have to guess a good scaling. For solving the system (1), (2) and (3) on the considered interval [-1, 1] we decide to scale y in the following way:
y_M = \frac{y}{2}.   (10)

Hereby the system (1), (2) and (3) becomes

\frac{d^2 y_M}{dx^2} = \frac{1}{2}\sqrt{1 + 4\left(\frac{dy_M}{dx}\right)^2},   (11)

y_M(0) = \frac{1}{2},   (12)

\frac{dy_M}{dx}(0) = 0.   (13)

We now solve the system (11), (12) and (13) by applying the neural network method described in Sect. 3. A sketch of the DE-neural network for system (11), (12) and (13) is given in Fig. 2.

Fig. 2. The DE-neural network for system (11), (12) and (13). [Sketch: as in Fig. 1, with g(x) = x^2 and h(x) = \sqrt{x}, an additional weight 4 in the g-h chain, and a factor 1/2 before the subtraction, matching (11).]

We implemented the neural networks by using the Neural Network Toolbox of Matlab 5.3 ([4]). Further we used the evolutionary algorithm implemented in the GEATbx toolbox ([9]). When working with the evolutionary algorithms implemented in the GEATbx toolbox, the values of the unknown variables a_i, w_i and b_i have to fall within a specified range. By experiments we noticed that we obtain good results if we restrict the values of the variables a_i, w_i and b_i to the interval [-5, 5]. The DE-neural network is trained on a training set with inputs {-1, -0.9, ..., 0.9, 1}, where the corresponding targets of all inputs are zero. Further we have to train the neural network represented by \varphi to have 1/2 as output for input x = 0, and for the same input the neural network represented by \frac{d\varphi}{dx} must have zero as output. The number of neurons in the hidden layer of the neural networks represented by (6), (7) and (8) is taken equal to 6. Therefore the number of variables which have to be adapted is equal to 18. After running the chosen evolutionary algorithm for 5 generations with 6 individuals divided over 8 subpopulations, we take the set {-1, -0.95, -0.9, ..., 0.95, 1} as input to compute the output of the neural networks represented by \varphi, \frac{d\varphi}{dx} and \frac{d^2\varphi}{dx^2}. We also compute the analytical solution of (1), (2) and (3) and its first two derivatives for {-1, -0.95, -0.9, ..., 0.95, 1}. By comparing the results we conclude that the approximations of y and its first two derivatives by respectively \varphi and its first two derivatives are very good. Both the neural network method solution of (1), (2) and (3) and its first two derivatives, and the analytical solution and its first two derivatives, are graphically illustrated in Fig. 3 and Fig. 4. The errors between the neural network method solution of (1), (2) and (3) and its first two derivatives on the one hand, and the analytical solution and its first two derivatives on the other hand, are graphically illustrated in Fig. 5. The difference between the target of the DE-neural network of the system (1), (2) and (3), i.e. zero for any input x of the set {-1, -0.95, -0.9, ..., 0.95, 1}, and its actual output is also illustrated in Fig. 5. Considering Fig. 5, we can conclude that the approximations of the solution of (1), (2) and (3) and its first derivative are somewhat better than the approximation of the second derivative. Since in most numerical solution methods for differential equations we are, however, interested in the approximation of just the solution itself, our results are really satisfying.
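The scaling step of Sect. 5 can be sanity-checked against the analytical solution (an illustration added here, not from the paper): with y_M = y/2 the scaled analytical solution is y_M(x) = cosh(x)/2, and substituting it into the scaled equation gives a vanishing residual:

```python
import math

yM   = lambda x: 0.5 * math.cosh(x)   # scaled analytical solution y_M = y / 2
dyM  = lambda x: 0.5 * math.sinh(x)
d2yM = lambda x: 0.5 * math.cosh(x)

# Residual of (11): y_M'' - (1/2) * sqrt(1 + 4 * (y_M')^2) on a grid over [-1, 1]
res = max(abs(d2yM(k / 10) - 0.5 * math.sqrt(1 + 4 * dyM(k / 10) ** 2))
          for k in range(-10, 11))

assert res < 1e-12
assert abs(yM(0) - 0.5) < 1e-12   # scaled boundary condition (12)
assert abs(dyM(0)) < 1e-12        # scaled boundary condition (13)
```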
6 Concluding Remarks

In this paper we used our neural network method to solve a system consisting of a nonlinear differential equation and its two boundary conditions. The obtained results are very promising and the concept of the method appears to be feasible. In further research more attention will be paid to practical aspects such as the choice of the evolutionary algorithm that is used to train the networks simultaneously. We will also do more extensive experiments on scaling issues in practical situations, especially the scaling of the variable on which the unknown of the differential equation depends.
References

1. Aarts, L.P., Van der Veer, P.: Neural Network Method for Solving Partial Differential Equations. Accepted for publication in Neural Processing Letters
2. Aarts, L.P., Van der Veer, P.: Solving Systems of First Order Linear Differential Equations by a Neural Network Method. Submitted for publication
3. Bishop, C.M.: Neural Networks for Pattern Recognition. Clarendon Press, Oxford (1995)
4. Demuth, H., Beale, M.: Neural Network Toolbox for Use with Matlab, User's Guide Version 3. The MathWorks, Inc., Natick, MA (1998)
5. Gallant, R.A., White, H.: On Learning the Derivatives of an Unknown Mapping with Multilayer Feedforward Networks. Neural Networks 5 (1992)
6. Hornik, K., Stinchcombe, M., White, H.: Universal Approximation of an Unknown Mapping and Its Derivatives Using Multilayer Feedforward Networks. Neural Networks 3 (1990)
7. Masters, T.: Practical Neural Network Recipes in C++. Academic Press, Inc., San Diego (1993)
8. Li, X.: Simultaneous approximations of multivariate functions and their derivatives by neural networks with one hidden layer. Neurocomputing (1996)
9. Pohlheim, H.: Documentation for Genetic and Evolutionary Algorithm Toolbox for use with Matlab (GEATbx) (1999)
10. Simmons, G.F.: Differential Equations with Applications and Historical Notes. 2nd ed. McGraw-Hill, Inc., New York (1991)
11. Yao, X.: Evolving Artificial Neural Networks. Proceedings of the IEEE 87(9) (1999)

Fig. 3. The solution of system (1), (2) and (3). [Plot: y(x); o = neural network, * = analytical.]
Fig. 4. The first two derivatives of the solution of system (1), (2) and (3). [Plots: dy/dx(x) and d^2y/dx^2(x); o = neural network, * = analytical.]

Fig. 5. The errors between the analytical solution of (1), (2) and (3) and its first two derivatives on the one hand, and the neural network method solution of (1), (2) and (3) and its first two derivatives on the other hand. Also the output of the DE-neural network of (1), (2) and (3) is illustrated.
More informationUsing Immune Genetic Algorithm to Optimize BP Neural Network and Its Application Peng-fei LIU1,Qun-tai SHEN1 and Jun ZHI2,*
Advances n Computer Scence Research (ACRS), volume 54 Internatonal Conference on Computer Networks and Communcaton Technology (CNCT206) Usng Immune Genetc Algorthm to Optmze BP Neural Network and Its Applcaton
More informationTHE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens
THE CHINESE REMAINDER THEOREM KEITH CONRAD We should thank the Chnese for ther wonderful remander theorem. Glenn Stevens 1. Introducton The Chnese remander theorem says we can unquely solve any par of
More informationInexact Newton Methods for Inverse Eigenvalue Problems
Inexact Newton Methods for Inverse Egenvalue Problems Zheng-jan Ba Abstract In ths paper, we survey some of the latest development n usng nexact Newton-lke methods for solvng nverse egenvalue problems.
More informationAdditional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty
Addtonal Codes usng Fnte Dfference Method Benamn Moll 1 HJB Equaton for Consumpton-Savng Problem Wthout Uncertanty Before consderng the case wth stochastc ncome n http://www.prnceton.edu/~moll/ HACTproect/HACT_Numercal_Appendx.pdf,
More informationAppendix B. The Finite Difference Scheme
140 APPENDIXES Appendx B. The Fnte Dfference Scheme In ths appendx we present numercal technques whch are used to approxmate solutons of system 3.1 3.3. A comprehensve treatment of theoretcal and mplementaton
More informationChapter Newton s Method
Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve
More informationA PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS
HCMC Unversty of Pedagogy Thong Nguyen Huu et al. A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS Thong Nguyen Huu and Hao Tran Van Department of mathematcs-nformaton,
More informationNew Method for Solving Poisson Equation. on Irregular Domains
Appled Mathematcal Scences Vol. 6 01 no. 8 369 380 New Method for Solvng Posson Equaton on Irregular Domans J. Izadan and N. Karamooz Department of Mathematcs Facult of Scences Mashhad BranchIslamc Azad
More informationThe Expectation-Maximization Algorithm
The Expectaton-Maxmaton Algorthm Charles Elan elan@cs.ucsd.edu November 16, 2007 Ths chapter explans the EM algorthm at multple levels of generalty. Secton 1 gves the standard hgh-level verson of the algorthm.
More informationSimultaneous Optimization of Berth Allocation, Quay Crane Assignment and Quay Crane Scheduling Problems in Container Terminals
Smultaneous Optmzaton of Berth Allocaton, Quay Crane Assgnment and Quay Crane Schedulng Problems n Contaner Termnals Necat Aras, Yavuz Türkoğulları, Z. Caner Taşkın, Kuban Altınel Abstract In ths work,
More informationOn the Interval Zoro Symmetric Single-step Procedure for Simultaneous Finding of Polynomial Zeros
Appled Mathematcal Scences, Vol. 5, 2011, no. 75, 3693-3706 On the Interval Zoro Symmetrc Sngle-step Procedure for Smultaneous Fndng of Polynomal Zeros S. F. M. Rusl, M. Mons, M. A. Hassan and W. J. Leong
More informationChapter - 2. Distribution System Power Flow Analysis
Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load
More informationA New Refinement of Jacobi Method for Solution of Linear System Equations AX=b
Int J Contemp Math Scences, Vol 3, 28, no 17, 819-827 A New Refnement of Jacob Method for Soluton of Lnear System Equatons AX=b F Naem Dafchah Department of Mathematcs, Faculty of Scences Unversty of Gulan,
More informationFUZZY GOAL PROGRAMMING VS ORDINARY FUZZY PROGRAMMING APPROACH FOR MULTI OBJECTIVE PROGRAMMING PROBLEM
Internatonal Conference on Ceramcs, Bkaner, Inda Internatonal Journal of Modern Physcs: Conference Seres Vol. 22 (2013) 757 761 World Scentfc Publshng Company DOI: 10.1142/S2010194513010982 FUZZY GOAL
More informationGeometry of Müntz Spaces
WDS'12 Proceedngs of Contrbuted Papers, Part I, 31 35, 212. ISBN 978-8-7378-224-5 MATFYZPRESS Geometry of Müntz Spaces P. Petráček Charles Unversty, Faculty of Mathematcs and Physcs, Prague, Czech Republc.
More informationUsing T.O.M to Estimate Parameter of distributions that have not Single Exponential Family
IOSR Journal of Mathematcs IOSR-JM) ISSN: 2278-5728. Volume 3, Issue 3 Sep-Oct. 202), PP 44-48 www.osrjournals.org Usng T.O.M to Estmate Parameter of dstrbutons that have not Sngle Exponental Famly Jubran
More informationAn Extended Hybrid Genetic Algorithm for Exploring a Large Search Space
2nd Internatonal Conference on Autonomous Robots and Agents Abstract An Extended Hybrd Genetc Algorthm for Explorng a Large Search Space Hong Zhang and Masum Ishkawa Graduate School of L.S.S.E., Kyushu
More information6.3.4 Modified Euler s method of integration
6.3.4 Modfed Euler s method of ntegraton Before dscussng the applcaton of Euler s method for solvng the swng equatons, let us frst revew the basc Euler s method of numercal ntegraton. Let the general from
More informationProf. Dr. I. Nasser Phys 630, T Aug-15 One_dimensional_Ising_Model
EXACT OE-DIMESIOAL ISIG MODEL The one-dmensonal Isng model conssts of a chan of spns, each spn nteractng only wth ts two nearest neghbors. The smple Isng problem n one dmenson can be solved drectly n several
More informationA New Evolutionary Computation Based Approach for Learning Bayesian Network
Avalable onlne at www.scencedrect.com Proceda Engneerng 15 (2011) 4026 4030 Advanced n Control Engneerng and Informaton Scence A New Evolutonary Computaton Based Approach for Learnng Bayesan Network Yungang
More informationSupplementary Notes for Chapter 9 Mixture Thermodynamics
Supplementary Notes for Chapter 9 Mxture Thermodynamcs Key ponts Nne major topcs of Chapter 9 are revewed below: 1. Notaton and operatonal equatons for mxtures 2. PVTN EOSs for mxtures 3. General effects
More information= z 20 z n. (k 20) + 4 z k = 4
Problem Set #7 solutons 7.2.. (a Fnd the coeffcent of z k n (z + z 5 + z 6 + z 7 + 5, k 20. We use the known seres expanson ( n+l ( z l l z n below: (z + z 5 + z 6 + z 7 + 5 (z 5 ( + z + z 2 + z + 5 5
More informationCSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography
CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve
More informationLecture 16 Statistical Analysis in Biomaterials Research (Part II)
3.051J/0.340J 1 Lecture 16 Statstcal Analyss n Bomaterals Research (Part II) C. F Dstrbuton Allows comparson of varablty of behavor between populatons usng test of hypothess: σ x = σ x amed for Brtsh statstcan
More informationWeek3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity
Week3, Chapter 4 Moton n Two Dmensons Lecture Quz A partcle confned to moton along the x axs moves wth constant acceleraton from x =.0 m to x = 8.0 m durng a 1-s tme nterval. The velocty of the partcle
More information6) Derivatives, gradients and Hessian matrices
30C00300 Mathematcal Methods for Economsts (6 cr) 6) Dervatves, gradents and Hessan matrces Smon & Blume chapters: 14, 15 Sldes by: Tmo Kuosmanen 1 Outlne Defnton of dervatve functon Dervatve notatons
More information9 Derivation of Rate Equations from Single-Cell Conductance (Hodgkin-Huxley-like) Equations
Physcs 171/271 - Chapter 9R -Davd Klenfeld - Fall 2005 9 Dervaton of Rate Equatons from Sngle-Cell Conductance (Hodgkn-Huxley-lke) Equatons We consder a network of many neurons, each of whch obeys a set
More informationMarginal Effects in Probit Models: Interpretation and Testing. 1. Interpreting Probit Coefficients
ECON 5 -- NOE 15 Margnal Effects n Probt Models: Interpretaton and estng hs note ntroduces you to the two types of margnal effects n probt models: margnal ndex effects, and margnal probablty effects. It
More information2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification
E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton
More informationNON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS
IJRRAS 8 (3 September 011 www.arpapress.com/volumes/vol8issue3/ijrras_8_3_08.pdf NON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS H.O. Bakodah Dept. of Mathematc
More informationNon-linear Canonical Correlation Analysis Using a RBF Network
ESANN' proceedngs - European Smposum on Artfcal Neural Networks Bruges (Belgum), 4-6 Aprl, d-sde publ., ISBN -97--, pp. 57-5 Non-lnear Canoncal Correlaton Analss Usng a RBF Network Sukhbnder Kumar, Elane
More informationConsistency & Convergence
/9/007 CHE 374 Computatonal Methods n Engneerng Ordnary Dfferental Equatons Consstency, Convergence, Stablty, Stffness and Adaptve and Implct Methods ODE s n MATLAB, etc Consstency & Convergence Consstency
More informationNumerical Solution of Ordinary Differential Equations
Numercal Methods (CENG 00) CHAPTER-VI Numercal Soluton of Ordnar Dfferental Equatons 6 Introducton Dfferental equatons are equatons composed of an unknown functon and ts dervatves The followng are examples
More informationx i1 =1 for all i (the constant ).
Chapter 5 The Multple Regresson Model Consder an economc model where the dependent varable s a functon of K explanatory varables. The economc model has the form: y = f ( x,x,..., ) xk Approxmate ths by
More informationLossy Compression. Compromise accuracy of reconstruction for increased compression.
Lossy Compresson Compromse accuracy of reconstructon for ncreased compresson. The reconstructon s usually vsbly ndstngushable from the orgnal mage. Typcally, one can get up to 0:1 compresson wth almost
More information