Radial Basis-Function Networks
1 Radial Basis-Function Networks

Back-Propagation
Stochastic Back-Propagation Algorithm
Step by Step Example
Radial Basis-Function Networks
Gaussian response function
Location of center u
Determining sigma
Why does RBF network work
2 Back-propagation

The algorithm gives a prescription for changing the weights $w_{ij}$ in any feedforward network to learn a training set of input-output pairs $\{x^d, t^d\}$. We consider a simple two-layer network with five inputs $x_1, \dots, x_5$, three hidden units, and two output units.
3 Given the pattern $x^d$ the hidden unit $j$ receives a net input

$$\mathrm{net}_j^d = \sum_{k=1}^{5} w_{jk}\, x_k^d$$

and produces the output

$$V_j^d = f(\mathrm{net}_j^d) = f\!\left(\sum_{k=1}^{5} w_{jk}\, x_k^d\right)$$

Output unit $i$ thus receives

$$\mathrm{net}_i^d = \sum_{j=1}^{3} W_{ij}\, V_j^d = \sum_{j=1}^{3} W_{ij}\, f\!\left(\sum_{k=1}^{5} w_{jk}\, x_k^d\right)$$

and produces the final output

$$o_i^d = f(\mathrm{net}_i^d) = f\!\left(\sum_{j=1}^{3} W_{ij}\, V_j^d\right) = f\!\left(\sum_{j=1}^{3} W_{ij}\, f\!\left(\sum_{k=1}^{5} w_{jk}\, x_k^d\right)\right)$$
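As an illustration of these equations, the forward pass of this 5-3-2 network can be written out directly. This is a minimal sketch, assuming the logistic activation $f$ that the later example uses; the function and variable names are mine, not from the slides:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w, W):
    """Forward pass of the two-layer network above.
    x: input, shape (5,); w: hidden weights w_jk, shape (3, 5);
    W: output weights W_ij, shape (2, 3)."""
    net_hidden = w @ x        # net_j = sum_k w_jk x_k
    V = sigmoid(net_hidden)   # V_j = f(net_j)
    net_out = W @ V           # net_i = sum_j W_ij V_j
    o = sigmoid(net_out)      # o_i = f(net_i)
    return V, o
```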
4 In our example $E$ becomes

$$E[\vec{w}] = \frac{1}{2}\sum_{d=1}^{m}\sum_{i}\left(t_i^d - o_i^d\right)^2 = \frac{1}{2}\sum_{d=1}^{m}\sum_{i}\left(t_i^d - f\!\left(\sum_{j=1}^{3} W_{ij}\, f\!\left(\sum_{k=1}^{5} w_{jk}\, x_k^d\right)\right)\right)^2$$

$E[\vec{w}]$ is differentiable given $f$ is differentiable, so gradient descent can be applied.

For hidden-to-output connections the gradient descent rule gives:

$$\Delta W_{ij} = -\eta\,\frac{\partial E}{\partial W_{ij}} = \eta \sum_{d=1}^{m} \left(t_i^d - o_i^d\right) f'(\mathrm{net}_i^d)\, V_j^d$$

With $\delta_i^d = f'(\mathrm{net}_i^d)\left(t_i^d - o_i^d\right)$ this becomes

$$\Delta W_{ij} = \eta \sum_{d=1}^{m} \delta_i^d\, V_j^d$$
5 For the input-to-hidden connections $w_{jk}$ we must differentiate with respect to $w_{jk}$. Using the chain rule we obtain

$$\Delta w_{jk} = -\eta\,\frac{\partial E}{\partial w_{jk}} = -\eta \sum_{d=1}^{m} \frac{\partial E}{\partial V_j^d}\,\frac{\partial V_j^d}{\partial w_{jk}}$$

$$\Delta w_{jk} = \eta \sum_{d=1}^{m} \sum_{i} \left(t_i^d - o_i^d\right) f'(\mathrm{net}_i^d)\, W_{ij}\, f'(\mathrm{net}_j^d)\, x_k^d = \eta \sum_{d=1}^{m} \sum_{i} \delta_i^d\, W_{ij}\, f'(\mathrm{net}_j^d)\, x_k^d$$

where $\delta_i^d = f'(\mathrm{net}_i^d)\left(t_i^d - o_i^d\right)$ as before. With $\delta_j^d = f'(\mathrm{net}_j^d) \sum_{i} W_{ij}\,\delta_i^d$ this becomes

$$\Delta w_{jk} = \eta \sum_{d=1}^{m} \delta_j^d\, x_k^d$$
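Both update rules combine into one stochastic back-propagation step per training pattern. The sketch below again assumes the logistic $f$; `backprop_step` and its argument names are illustrative, not from the slides:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, t, w, W, eta):
    """One stochastic update for pattern (x, t):
    Delta W_ij = eta * delta_i * V_j and Delta w_jk = eta * delta_j * x_k."""
    # forward pass
    V = sigmoid(w @ x)
    o = sigmoid(W @ V)
    # output deltas: delta_i = f'(net_i)(t_i - o_i) = o_i (1 - o_i)(t_i - o_i)
    delta_out = (t - o) * o * (1.0 - o)
    # hidden deltas: delta_j = f'(net_j) * sum_i W_ij delta_i
    delta_hid = V * (1.0 - V) * (W.T @ delta_out)
    # weight updates in place; outer products cover all index pairs
    W += eta * np.outer(delta_out, V)
    w += eta * np.outer(delta_hid, x)
    return o, delta_out, delta_hid
```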
6 Example

$w_1 = \{w_{11}=0.1,\ w_{12}=0.1,\ w_{13}=0.1,\ w_{14}=0.1,\ w_{15}=0.1\}$
$w_2 = \{w_{21}=0.1,\ w_{22}=0.1,\ w_{23}=0.1,\ w_{24}=0.1,\ w_{25}=0.1\}$
$w_3 = \{w_{31}=0.1,\ w_{32}=0.1,\ w_{33}=0.1,\ w_{34}=0.1,\ w_{35}=0.1\}$
$W_1 = \{W_{11}=0.1,\ W_{12}=0.1,\ W_{13}=0.1\}$
$W_2 = \{W_{21}=0.1,\ W_{22}=0.1,\ W_{23}=0.1\}$

$X_1 = \{1,1,0,0,0\}$; $t_1 = \{1,0\}$
$X_2 = \{0,0,0,1,1\}$; $t_2 = \{0,1\}$

$$f(x) = \sigma(x) = \frac{1}{1+e^{-x}}, \qquad f'(x) = \sigma'(x) = \sigma(x)\,(1-\sigma(x))$$

$$\mathrm{net}_1 = \sum_{k=1}^{5} w_{1k} x_k, \qquad \mathrm{net}_2 = \sum_{k=1}^{5} w_{2k} x_k, \qquad \mathrm{net}_3 = \sum_{k=1}^{5} w_{3k} x_k$$

For $X_1$: $\mathrm{net}_1 = 1 \cdot 0.1 + 1 \cdot 0.1 + 0 \cdot 0.1 + 0 \cdot 0.1 + 0 \cdot 0.1 = 0.2$

$V_1 = f(\mathrm{net}_1) = 1/(1+\exp(-0.2)) \approx 0.5498$
$V_2 = f(\mathrm{net}_2) = 1/(1+\exp(-0.2)) \approx 0.5498$
$V_3 = f(\mathrm{net}_3) = 1/(1+\exp(-0.2)) \approx 0.5498$
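The hidden activations can be checked with a few lines, assuming (as reconstructed above) that every initial weight is 0.1:

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

w = np.full((3, 5), 0.1)                  # hidden weights w_jk, assumed 0.1
x1 = np.array([1.0, 1.0, 0.0, 0.0, 0.0])  # first training pattern X_1

net = w @ x1        # each net_j = 1*0.1 + 1*0.1 = 0.2
V = sigmoid(net)    # each V_j = 1/(1 + exp(-0.2)) ~ 0.5498
```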
7 Output unit $i$ receives

$$\mathrm{net}_i = \sum_{j=1}^{3} W_{ij} V_j, \qquad o_i = f(\mathrm{net}_i) = \frac{1}{1+e^{-\mathrm{net}_i}}$$

$\mathrm{net}_1 = 0.5498 \cdot 0.1 + 0.5498 \cdot 0.1 + 0.5498 \cdot 0.1 \approx 0.1650$, so $o_1 = 1/(1+\exp(-0.1650)) \approx 0.5412$
$\mathrm{net}_2 \approx 0.1650$, so $o_2 = 1/(1+\exp(-0.1650)) \approx 0.5412$

$$\Delta W_{ij} = \eta \sum_{d=1}^{m} \left(t_i^d - o_i^d\right) f'(\mathrm{net}_i^d)\, V_j^d$$

We will use stochastic gradient descent with $\eta = 1$, updating after each pattern:

$$\Delta W_{ij} = (t_i - o_i)\, f'(\mathrm{net}_i)\, V_j$$

Since $f'(x) = \sigma'(x) = \sigma(x)\,(1-\sigma(x))$,

$$\Delta W_{ij} = (t_i - o_i)\,\sigma(\mathrm{net}_i)\,(1-\sigma(\mathrm{net}_i))\, V_j$$

With $\delta_i = (t_i - o_i)\,\sigma(\mathrm{net}_i)\,(1-\sigma(\mathrm{net}_i))$,

$$\Delta W_{ij} = \delta_i\, V_j$$
8 $$\delta_1 = (t_1 - o_1)\,\sigma(\mathrm{net}_1)\,(1-\sigma(\mathrm{net}_1)), \qquad \Delta W_{1j} = \delta_1 V_j$$

$\delta_1 = (1 - 0.5412) \cdot \big(1/(1+\exp(-0.1650))\big) \cdot \big(1 - 1/(1+\exp(-0.1650))\big) \approx 0.1139$

$$\delta_2 = (t_2 - o_2)\,\sigma(\mathrm{net}_2)\,(1-\sigma(\mathrm{net}_2)), \qquad \Delta W_{2j} = \delta_2 V_j$$

$\delta_2 = (0 - 0.5412) \cdot \big(1/(1+\exp(-0.1650))\big) \cdot \big(1 - 1/(1+\exp(-0.1650))\big) \approx -0.1344$

For the input-to-hidden weights:

$$\Delta w_{jk} = \sum_{i} \delta_i\, W_{ij}\, f'(\mathrm{net}_j)\, x_k = \sum_{i} \delta_i\, W_{ij}\,\sigma(\mathrm{net}_j)\,(1-\sigma(\mathrm{net}_j))\, x_k$$

With $\delta_j = \sigma(\mathrm{net}_j)\,(1-\sigma(\mathrm{net}_j)) \sum_{i} W_{ij}\,\delta_i$,

$$\Delta w_{jk} = \delta_j\, x_k$$
9 $$\delta_1 = \sigma(\mathrm{net}_1)\,(1-\sigma(\mathrm{net}_1)) \sum_{i=1}^{2} W_{i1}\,\delta_i$$

$\delta_1 = \big(1/(1+\exp(-0.2))\big) \cdot \big(1 - 1/(1+\exp(-0.2))\big) \cdot \big(0.1 \cdot 0.1139 + 0.1 \cdot (-0.1344)\big) \approx -5.06 \times 10^{-4}$

$$\delta_2 = \sigma(\mathrm{net}_2)\,(1-\sigma(\mathrm{net}_2)) \sum_{i=1}^{2} W_{i2}\,\delta_i \approx -5.06 \times 10^{-4}$$

$$\delta_3 = \sigma(\mathrm{net}_3)\,(1-\sigma(\mathrm{net}_3)) \sum_{i=1}^{2} W_{i3}\,\delta_i \approx -5.06 \times 10^{-4}$$

First adaptation for $x^1$ (one epoch = adaptation over all training patterns, in our case $x^1$ and $x^2$):

$$\Delta w_{jk} = \delta_j\, x_k, \qquad \Delta W_{ij} = \delta_i\, V_j$$

with the hidden deltas $\delta_1 \approx \delta_2 \approx \delta_3 \approx -5.06 \times 10^{-4}$, the output deltas $\delta_1 \approx 0.1139$, $\delta_2 \approx -0.1344$, the inputs $x_1 = 1$, $x_2 = 1$, $x_3 = 0$, $x_4 = 0$, $x_5 = 0$, and $V_1 = V_2 = V_3 \approx 0.5498$.
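Running one full stochastic step with the `backprop_step` sketch from above ties these numbers together (η = 1, all initial weights assumed 0.1 as reconstructed):

```python
import numpy as np

w = np.full((3, 5), 0.1)   # input-to-hidden weights w_jk
W = np.full((2, 3), 0.1)   # hidden-to-output weights W_ij
x1 = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
t1 = np.array([1.0, 0.0])

o, delta_out, delta_hid = backprop_step(x1, t1, w, W, eta=1.0)
# o         ~ [ 0.5412,  0.5412 ]
# delta_out ~ [ 0.1139, -0.1344 ]
# delta_hid ~ [-5.1e-04, -5.1e-04, -5.1e-04]
# w and W were updated in place: Delta w_jk = delta_j x_k, Delta W_ij = delta_i V_j
```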
10 Radial Basis-Function Networks

RBF networks train rapidly
No local minima problems
No oscillation

Universal approximators
Can approximate any continuous function
Share this property with feed-forward networks with a hidden layer of nonlinear neurons (units)

Disadvantage
After training they are generally slower to use
11 Gaussian response function

Each hidden layer unit computes

$$h_i = e^{-\frac{D_i}{2\sigma^2}}$$

where $x$ is an input vector, $u_i$ is the weight (center) vector of hidden layer neuron $i$, and

$$D_i = (x - u_i)^T (x - u_i)$$

The output neuron produces the linear weighted sum

$$o = \sum_{i=0}^{n} w_i h_i$$

The weights have to be adapted (LMS):

$$\Delta w_i = \eta\,(t - o)\,h_i$$
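A minimal sketch of this network under the stated definitions (shared σ, linear output, LMS rule); the bias unit $h_0$ in the sum is omitted for brevity, and all names are illustrative:

```python
import numpy as np

def rbf_forward(x, u, sigma, w):
    """h_i = exp(-D_i / (2 sigma^2)) with D_i = (x - u_i)^T (x - u_i);
    linear output o = sum_i w_i h_i.  u has shape (n_hidden, dim)."""
    D = np.sum((u - x) ** 2, axis=1)      # squared distances D_i
    h = np.exp(-D / (2.0 * sigma ** 2))   # Gaussian responses of the hidden units
    return h, h @ w

def lms_step(x, t, u, sigma, w, eta):
    """LMS adaptation of the output weights: Delta w_i = eta (t - o) h_i."""
    h, o = rbf_forward(x, u, sigma, w)
    w += eta * (t - o) * h                # updates w in place
    return o
```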
12 The operation of the hidden layer

One dimensional input:

$$h = e^{-\frac{(x-u)^2}{2\sigma^2}}$$

Two dimensional input (figure)
13 Every hidden neuron has a receptive field defined by the basis function

At $x = u$ the output is maximal
The output drops off as $x$ deviates from $u$
The output has a significant response to the input $x$ only over a range of values of $x$ called the receptive field
The size of the receptive field is defined by $\sigma$
$u$ may be called the mean and $\sigma$ the standard deviation
The function is radially symmetric around the mean $u$

Location of centers u

The location of the receptive field is critical
Apply clustering to the training set: each determined cluster center would correspond to a center $u$ of a receptive field of a hidden neuron
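The slide leaves the clustering method open; k-means is one common concrete choice. A self-contained sketch of Lloyd's algorithm (illustrative names, not from the slides):

```python
import numpy as np

def kmeans_centers(X, n_centers, n_iter=100, seed=0):
    """Choose RBF centers u_i as k-means cluster centers of the training set X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign every training point to its nearest current center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # move each center to the mean of the points assigned to it
        for i in range(n_centers):
            pts = X[labels == i]
            if len(pts) > 0:
                centers[i] = pts.mean(axis=0)
    return centers
```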
14 Determining σ

The objective is to cover the input space with receptive fields as uniformly as possible
If the spacing between centers is not uniform, it may be necessary for each hidden layer neuron to have its own σ
For hidden layer neurons whose centers are widely separated from others, σ must be large enough to cover the gap

The following heuristic will perform well in practice: for each hidden layer neuron $i$, find the RMS distance between $u_i$ and the centers $c_l$ of its $N$ nearest neighbors, and assign this value to $\sigma_i$:

$$\mathrm{RMS} = \sqrt{\frac{1}{N} \sum_{l=1}^{N} \sum_{k} \left(u_{ik} - c_{lk}\right)^2}$$
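The heuristic translates directly into a short routine. This sketch assumes the RMS formula as reconstructed above, i.e. $\sigma_i$ is the RMS distance from $u_i$ to its $N$ nearest neighboring centers; the function name is illustrative:

```python
import numpy as np

def assign_sigmas(centers, N):
    """sigma_i = RMS distance from center u_i to its N nearest neighbors."""
    diff = centers[:, None, :] - centers[None, :, :]
    d2 = (diff ** 2).sum(axis=2)             # pairwise squared distances
    np.fill_diagonal(d2, np.inf)             # a center is not its own neighbor
    nearest = np.sort(d2, axis=1)[:, :N]     # N smallest squared distances
    return np.sqrt(nearest.mean(axis=1))     # root mean square distance
```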
16 Why does a RBF network work?

The hidden layer applies a nonlinear transformation from the input space to the hidden space
In the hidden space a linear discrimination can be performed
17 Back-Propagation
Stochastic Back-Propagation Algorithm
Step by Step Example
Radial Basis-Function Networks
Gaussian response function
Location of center u
Determining sigma
Why does RBF network work

Bibliography
Wasserman, P. D., Advanced Methods in Neural Computing, New York: Van Nostrand Reinhold, 1993
Simon Haykin, Neural Networks, Second edition, Prentice Hall, 1999
18 Support Vector Machines