Nonlinear System Identification Using Maximum Likelihood Estimation
ISSN (Print) : ISSN (Online) : (An ISO 3297: 2007 Certified Organization) Vol. 2, Special Issue 1, December 2013

Nonlinear System Identification Using Maximum Likelihood Estimation

Siny Paul, Bindu Elias
Associate Professor, Department of EEE, Mar Athanasius College of Engineering, Kothamangalam, Kerala, India

Abstract: Different algorithms can be used to train the Neural Network model for nonlinear system identification. Here Maximum Likelihood Estimation is implemented for modeling nonlinear systems and the performance is evaluated. Maximum likelihood is a well-established procedure for statistical estimation: first a log-likelihood function is formulated, and then it is optimized with respect to the parameter vector of the probabilistic model under consideration. Four nonlinear systems are used to validate the performance of the model. Results show that a Neural Network trained by Maximum Likelihood Estimation is a good tool for system identification when the inputs are not well defined.

Keywords: Neural Network, Nonlinear system, Mean square error, Modeling.

I. INTRODUCTION

This paper concentrates on the modeling problem which arises when we can identify a certain quantity as a definite measurable output or effect, but the causes are not well defined. This is called time-series modeling, where the inputs or causes are numerous, not precisely known, and often unobservable. This type of modeling is also called stochastic modeling. In system identification we are concerned with the determination of system models from records of system operation. The problem can be represented diagrammatically as below.

Fig. 1. System configuration [1]

where
u(t) is the known input vector of dimension m,
z(t) is the output vector of dimension p,
w(t) is the input disturbance vector,
n(t) is the observation noise vector,
v(t) is the measured output vector of dimension p.

Thus the problem of system identification is the determination of the system model from records of u(t) and y(t).

Copyright to IJAREEIE 624
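The system configuration described above (a known input, an input disturbance, observation noise, and a measured output) can be mimicked with a small simulation to produce the kind of operating records that system identification works from. This is only an illustrative sketch: the first-order tanh dynamics, the noise levels, and all names here are assumptions, not the systems used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100
u = np.sin(0.1 * np.arange(T))          # known input vector u(t)
w = 0.05 * rng.standard_normal(T)       # input disturbance w(t)
n = 0.05 * rng.standard_normal(T)       # observation noise n(t)

z = np.zeros(T)                         # system output z(t)
for t in range(1, T):
    # assumed first-order nonlinear dynamics, for illustration only
    z[t] = 0.8 * z[t - 1] + np.tanh(u[t - 1] + w[t - 1])

v = z + n                               # measured output v(t)
# Identification then means recovering a model of the system
# from the records (u, v) alone.
print(u.shape, v.shape)  # (100,) (100,)
```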
An artificial neural network is a powerful tool for many complex applications such as function approximation, optimization, nonlinear system identification and pattern recognition. This is because of its attributes like massive parallelism, adaptability, robustness and the inherent capability to handle nonlinear systems. It can extract information from heavily noise-corrupted signals. System identification can use either a state-space model or an input-output model [1].

II. INPUT-OUTPUT MODEL

An I/O model can be expressed as y(t) = g(\varphi(t, \theta)) + e(t), where \theta is the vector containing the adjustable parameters, which in the case of a neural network are known as weights, g is the function realized by the neural network and \varphi is the regression vector. Depending on the choice of regression vector, different model structures emerge. Using the same regressors as for the linear models, corresponding families of nonlinear models are obtained, named NARX, NARMAX, etc. Different model structures in each model family can be obtained by making a different assumption about the noise.

\varphi(t, \theta) = [y(t-1), \ldots, y(t-n), u(t-1), \ldots, u(t-m)]    (1)  NARX

\varphi(t, \theta) = [y(t-1), \ldots, y(t-n), u(t-1), \ldots, u(t-m), e(t-1), \ldots, e(t-k)]    (2)  NARMAX

where y(t) is the output, u(t) the input and e(t) the error. For the implementation of the above system, a multilayer feed-forward (MLFF) neural network can be used.

Fig. 2. NARX model [2]
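The NARX regression vector of equation (1), built from lagged outputs and lagged inputs, can be sketched as follows. This is an illustration only (Python rather than the authors' MATLAB), and the helper name build_narx_regressor and the lag orders n = m = 2 are assumptions.

```python
import numpy as np

def build_narx_regressor(y, u, t, n=2, m=2):
    """NARX regression vector phi(t) = [y(t-1)..y(t-n), u(t-1)..u(t-m)]."""
    past_outputs = [y[t - i] for i in range(1, n + 1)]
    past_inputs = [u[t - j] for j in range(1, m + 1)]
    return np.array(past_outputs + past_inputs)

# Example: a 4-input regressor (n = m = 2) at time t = 3
y = np.array([0.0, 0.1, 0.3, 0.2])
u = np.array([1.0, 0.5, -0.5, 0.0])
phi = build_narx_regressor(y, u, t=3)
print(phi.tolist())  # [0.3, 0.1, -0.5, 0.5]
```

The regressor is then fed to the feed-forward network g, whose output approximates y(t).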
The NARX model is well suited for input-output modeling of stochastic nonlinear systems [3], so in the proposed work the NARX model is chosen as the system model, in which the model structure is a multilayer feed-forward neural network (MLFF) as shown in Fig. 2, for all the models (using different algorithms).

III. MAXIMUM LIKELIHOOD ESTIMATION

The term maximum likelihood estimate, with the desired asymptotic properties, usually refers to a root of the likelihood equation that globally maximizes the likelihood function [3]. In other words, the ML estimate x_{ML} is that value of the parameter vector x for which the conditional probability density function p(z|x) is maximum [4]. The maximum likelihood estimate x_{ML} of the target parameters x is the mode of the conditional probability density function (likelihood function):

p(z|x) = (2\pi\sigma^2)^{-N/2} \exp\left( -\frac{1}{2\sigma^2} r^T r \right)    (1)

The log-likelihood function is \log p(z|x),    (2)

where r = d - z is the residual, \sigma is the standard deviation, d the desired value and z the measurement (estimated value). Maximizing the log-likelihood function \log p(z|x) is equivalent to minimizing the negative log-likelihood function L(z, x). By using the negative log-likelihood function L(z, x), the ML problem is reformulated as a nonlinear least-squares problem:

Minimize  L(z, x) = \frac{1}{2} r^T r    (3)

The ML estimate must satisfy the following optimality condition:

\nabla_x L(z, x_{ML}) = J^T(x_{ML}) R(x_{ML}) = 0    (4)

where R(x) is the N-dimensional residual vector and J(x) the N \times n Jacobian matrix:

R(x) = [r_1(x), \ldots, r_N(x)]^T    (5)

J(x) = \nabla_x R(x)    (6)

The operator \nabla_x is defined as \nabla_x = [\partial/\partial x_1, \partial/\partial x_2, \partial/\partial x_3, \ldots, \partial/\partial x_n]^T.

Since the problem is nonlinear, it is not possible to solve it analytically, but certain optimization methods are suitable for finding the ML estimate. The Gauss-Newton method is one such optimization technique.

A. System Modeling Using the Gauss-Newton Method
A feed-forward neural network model similar to the earlier cases is designed for the identification of the same nonlinear systems and trained using the Gauss-Newton method [5]. The Gauss-Newton method is applicable to a cost function that is expressed as a sum of error squares. Let

E(x) = \frac{1}{2} \sum_{k=1}^{N} r^2(k)    (7)

The error signal r(k) is a function of the adjustable state vector x. Given an operating point x(n), we linearize the dependence of r on x by writing

r'(k, x) = r(k) + \left[ \partial r(k) / \partial x \right]^T_{x = x(n)} (x - x(n)),   k = 1, \ldots, N    (8)

Equivalently, using matrix notation, we may write

r'(n, x) = r(n) + J(n)(x - x(n))    (9)

The updated state vector x(n+1) is then defined by

x(n+1) = \arg\min_x \frac{1}{2} \| r'(n, x) \|^2    (10)

Expanding the squared Euclidean norm of r'(n, x),

\frac{1}{2}\|r'(n,x)\|^2 = \frac{1}{2}\|r(n)\|^2 + r^T(n) J(n)(x - x(n)) + \frac{1}{2}(x - x(n))^T J^T(n) J(n)(x - x(n))    (11)

Hence, differentiating this expression with respect to x and setting the result equal to zero, we obtain

J^T(n) r(n) + J^T(n) J(n)(x - x(n)) = 0    (12)

Solving this equation for x,

x(n+1) = x(n) - (J^T(n) J(n))^{-1} J^T(n) r(n)    (13)

which describes the pure form of the Gauss-Newton method. Unlike Newton's method, which requires knowledge of the Hessian matrix of the cost function E(n), the Gauss-Newton method only requires the Jacobian matrix of the error vector. However, for the Gauss-Newton iteration to be computable, the matrix product J^T(n) J(n) must be nonsingular, and unfortunately there is no guarantee that this condition will always hold. To guard against the possibility that J(n) is rank deficient, the customary practice is to add the diagonal matrix \delta I to J^T(n) J(n). The parameter \delta is a small positive constant chosen to ensure that J^T(n) J(n) + \delta I is positive definite for all n. On this basis, the Gauss-Newton method is implemented in a slightly modified form:

x(n+1) = x(n) - (J^T(n) J(n) + \delta I)^{-1} J^T(n) r(n)    (14)

Here x represents the state vector of the system. In the neural network model, x becomes the weight vector w; thus the update equation of the weight vector becomes

w(n+1) = w(n) - (J^T(n) J(n) + \delta I)^{-1} J^T(n) r(n)    (15)

where J is the Jacobian matrix \partial r / \partial w (i.e., the derivative of the error with respect to the weights).

IV. RESULTS AND DISCUSSION
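The damped update x(n+1) = x(n) - (J^T J + \delta I)^{-1} J^T r described above can be sketched on a small nonlinear least-squares problem. This is an illustration under assumed choices (a one-parameter model y = exp(a t), \delta = 1e-3, 20 iterations), not the paper's network training code.

```python
import numpy as np

def gauss_newton_step(x, r, J, delta=1e-3):
    """Damped Gauss-Newton update: x - (J'J + delta*I)^(-1) J' r."""
    n = J.shape[1]
    return x - np.linalg.solve(J.T @ J + delta * np.eye(n), J.T @ r)

# Toy model: fit y = exp(a * t) to data generated with a = 0.7.
t = np.linspace(0, 1, 20)
y_desired = np.exp(0.7 * t)

x = np.array([0.0])                       # initial guess for the parameter a
for _ in range(20):
    r = np.exp(x[0] * t) - y_desired      # residual vector R(x)
    J = (t * np.exp(x[0] * t))[:, None]   # Jacobian dR/da, shape (20, 1)
    x = gauss_newton_step(x, r, J)

print(round(x[0], 3))  # 0.7
```

In the neural-network case the same update is applied to the weight vector w, with J holding the derivatives of each residual with respect to each weight.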
The same four nonlinear systems are modeled using the feed-forward neural network. The NARX model with 4 inputs is used, similar to the earlier cases. The performance analysis is done by plotting the mean square error in each case.
(1 epoch : 100 samples)

Fig. 3. Superposition of model output and desired output, nonlinear system y = sin(x1 + x2).
Fig. 4. MSE vs. data samples.
Fig. 5. Superposition of model output and desired output of ambient noise.
Fig. 6. MSE vs. data samples.
Fig. 7. Superposition of model output and desired output of acoustic source A.
Fig. 8. MSE vs. data samples.
Fig. 9. Superposition of model output and desired output of acoustic source B.
Fig. 10. MSE vs. data samples.

TABLE I. MEAN SQUARE ERROR FOR DIFFERENT SYSTEMS

System               Mean Square Error
y = sin(x1 + x2)
Ambient noise
Acoustic source A    0.08
Acoustic source B

From the above results it is seen that the model gives good performance for all four nonlinear systems, which shows that the NARX model is well suited for input-output modeling of stochastic nonlinear systems.

V. CONCLUSION

In this paper, a comprehensive analysis of nonlinear system identification is carried out, and the performance is compared by implementing the approach in a neural network NARX model using MATLAB programs. The adaptive behaviour of feed-forward and recurrent neural networks, as well as their ability to model nonlinear time-varying processes, provides surplus value to model-based predictive control. When applied correctly, a neural or adaptive system may considerably outperform other methods. This work is an attempt to provide a guideline for practitioners to choose a suitable method for their specific problem in the field of system identification, especially in the stochastic modeling of nonlinear systems. It is shown that MLE can be reformulated as a minimization problem. The results show good performance of the models, and MLE proves to be a good approach for nonlinear system identification. Four different nonlinear systems are used to check the consistency of the performance of the algorithm. The performance of MLE is good in terms of mean square error.

REFERENCES

[1] N. K. Sinha and B. Kuszta, Modeling and Identification of Dynamic Systems. New York: Van Nostrand Reinhold Company, 1983.
[2] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed. India: Pearson Education, 1999.
[3] A. V. Balakrishnan, Kalman Filtering Theory. New York: Optimization Software Inc., Publications Division.
[4] B. James, B. D. O. Anderson and R. C. Williamson, "Conditional mean and maximum likelihood approaches to multiharmonic frequency estimation", IEEE Transactions on Signal Processing, Vol. 42, No. 6, June 1994.
[5] R. L. Streit and T. E. Luginbuhl, "Maximum likelihood training of probabilistic neural networks", IEEE Transactions on Neural Networks, Vol. 5, No. 5, September 1994.
[6] T.-N. Lin, C. L. Giles and S.-Y. Kung, "A delay damage model selection algorithm for NARX neural networks", IEEE Transactions on Signal Processing, Vol. 45, No. 11, November 1997.
[7] W. Yu and X. Li, "Some new results on system identification with dynamic neural networks", IEEE Transactions on Neural Networks, Vol. 12, No. 2, pp. 412-417, March 2001.
[8] X. M. Ren, A. B. Rad, P. T. Chan and W. L. Lo, "Identification and control of continuous-time nonlinear systems via dynamic neural networks", IEEE Transactions on Industrial Electronics, Vol. 50, No. 3, June 2003.
[9] S. Lu and T. Basar, "Robust nonlinear system identification using neural-network models", IEEE Transactions on Neural Networks, Vol. 9, No. 3, May 1998.
[10] M. Ibnkahla, "Statistical analysis of neural network modeling and identification of nonlinear systems with memory", IEEE Transactions on Signal Processing, Vol. 50, No. 6, June 2002.
[11] N. Sadegh, "A perceptron network for functional identification and control of nonlinear systems", IEEE Transactions on Neural Networks, Vol. 4, No. 6, November 1993.
[12] O. Stan and E. Kamen, "A localized least squares algorithm for training feedforward neural networks", IEEE Transactions on Neural Networks, Vol. 11, No. 2, March 2000.
[13] S. Z. Qin, H. T. Su and T. J. McAvoy, "Comparison of four neural net learning methods for dynamic system identification", IEEE Transactions on Neural Networks, Vol. 3, No. 1, pp. 122-130, January 1992.
[14] G. V. Puskorius and L. A. Feldkamp, "Neurocontrol of nonlinear dynamical systems with Kalman filter trained recurrent networks", IEEE Transactions on Neural Networks, Vol. 5, No. 2, pp. 279-297, March 1994.
[15] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks", IEEE Transactions on Neural Networks, Vol. 1, No. 1, pp. 4-27, March 1990.
[16] S. Li, "Comparative analysis of backpropagation and extended Kalman filter in pattern and batch forms for training neural networks", IEEE Transactions on Neural Networks, Vol. 12, No. 4, June 2001.
More informationLecture 6. Regression
Lecture 6. Regression Prof. Alan Yuille Summer 2014 Outline 1. Introduction to Regression 2. Binary Regression 3. Linear Regression; Polynomial Regression 4. Non-linear Regression; Multilayer Perceptron
More informationA STATE-SPACE NEURAL NETWORK FOR MODELING DYNAMICAL NONLINEAR SYSTEMS
A STATE-SPACE NEURAL NETWORK FOR MODELING DYNAMICAL NONLINEAR SYSTEMS Karima Amoura Patrice Wira and Said Djennoune Laboratoire CCSP Université Mouloud Mammeri Tizi Ouzou Algeria Laboratoire MIPS Université
More informationInternational Journal of Advanced Research in Computer Science and Software Engineering
Volume 3, Issue 4, April 2013 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Application of
More informationDeep Learning & Artificial Intelligence WS 2018/2019
Deep Learning & Artificial Intelligence WS 2018/2019 Linear Regression Model Model Error Function: Squared Error Has no special meaning except it makes gradients look nicer Prediction Ground truth / target
More informationA NOVEL OPTIMAL PROBABILITY DENSITY FUNCTION TRACKING FILTER DESIGN 1
A NOVEL OPTIMAL PROBABILITY DENSITY FUNCTION TRACKING FILTER DESIGN 1 Jinglin Zhou Hong Wang, Donghua Zhou Department of Automation, Tsinghua University, Beijing 100084, P. R. China Control Systems Centre,
More informationIntelligent Control. Module I- Neural Networks Lecture 7 Adaptive Learning Rate. Laxmidhar Behera
Intelligent Control Module I- Neural Networks Lecture 7 Adaptive Learning Rate Laxmidhar Behera Department of Electrical Engineering Indian Institute of Technology, Kanpur Recurrent Networks p.1/40 Subjects
More information1. Introduction. 2. Artificial Neural Networks and Fuzzy Time Series
382 IJCSNS International Journal of Computer Science and Network Security, VOL.8 No.9, September 2008 A Comparative Study of Neural-Network & Fuzzy Time Series Forecasting Techniques Case Study: Wheat
More informationA Machine Intelligence Approach for Classification of Power Quality Disturbances
A Machine Intelligence Approach for Classification of Power Quality Disturbances B K Panigrahi 1, V. Ravi Kumar Pandi 1, Aith Abraham and Swagatam Das 1 Department of Electrical Engineering, IIT, Delhi,
More informationNeural Network Identification of Non Linear Systems Using State Space Techniques.
Neural Network Identification of Non Linear Systems Using State Space Techniques. Joan Codina, J. Carlos Aguado, Josep M. Fuertes. Automatic Control and Computer Engineering Department Universitat Politècnica
More informationLecture 4: Perceptrons and Multilayer Perceptrons
Lecture 4: Perceptrons and Multilayer Perceptrons Cognitive Systems II - Machine Learning SS 2005 Part I: Basic Approaches of Concept Learning Perceptrons, Artificial Neuronal Networks Lecture 4: Perceptrons
More informationImpulsive Noise Filtering In Biomedical Signals With Application of New Myriad Filter
BIOSIGAL 21 Impulsive oise Filtering In Biomedical Signals With Application of ew Myriad Filter Tomasz Pander 1 1 Division of Biomedical Electronics, Institute of Electronics, Silesian University of Technology,
More informationA SINGLE NEURON MODEL FOR SOLVING BOTH PRIMAL AND DUAL LINEAR PROGRAMMING PROBLEMS
P. Pandian et al. / International Journal of Engineering and echnology (IJE) A SINGLE NEURON MODEL FOR SOLVING BOH PRIMAL AND DUAL LINEAR PROGRAMMING PROBLEMS P. Pandian #1, G. Selvaraj # # Dept. of Mathematics,
More informationComparison of the Complex Valued and Real Valued Neural Networks Trained with Gradient Descent and Random Search Algorithms
Comparison of the Complex Valued and Real Valued Neural Networks rained with Gradient Descent and Random Search Algorithms Hans Georg Zimmermann, Alexey Minin 2,3 and Victoria Kusherbaeva 3 - Siemens AG
More informationLECTURE NOTE #NEW 6 PROF. ALAN YUILLE
LECTURE NOTE #NEW 6 PROF. ALAN YUILLE 1. Introduction to Regression Now consider learning the conditional distribution p(y x). This is often easier than learning the likelihood function p(x y) and the
More informationLecture 2: From Linear Regression to Kalman Filter and Beyond
Lecture 2: From Linear Regression to Kalman Filter and Beyond Department of Biomedical Engineering and Computational Science Aalto University January 26, 2012 Contents 1 Batch and Recursive Estimation
More informationRecurrent Neural Network Identification: Comparative Study on Nonlinear Process M.Rajalakshmi #1, Dr.S.Jeyadevi #2, C.Karthik * 3
ISSN (Online) : 2319-8753 ISSN (Print) : 2347-6710 International Journal of Innovative Research in Science, Engineering and Technology Volume 3, Special Issue 3, March 2014 2014 International Conference
More informationLinear Models for Regression CS534
Linear Models for Regression CS534 Example Regression Problems Predict housing price based on House size, lot size, Location, # of rooms Predict stock price based on Price history of the past month Predict
More informationIn the Name of God. Lecture 11: Single Layer Perceptrons
1 In the Name of God Lecture 11: Single Layer Perceptrons Perceptron: architecture We consider the architecture: feed-forward NN with one layer It is sufficient to study single layer perceptrons with just
More informationExploring Granger Causality for Time series via Wald Test on Estimated Models with Guaranteed Stability
Exploring Granger Causality for Time series via Wald Test on Estimated Models with Guaranteed Stability Nuntanut Raksasri Jitkomut Songsiri Department of Electrical Engineering, Faculty of Engineering,
More informationMini-Course 07 Kalman Particle Filters. Henrique Massard da Fonseca Cesar Cunha Pacheco Wellington Bettencurte Julio Dutra
Mini-Course 07 Kalman Particle Filters Henrique Massard da Fonseca Cesar Cunha Pacheco Wellington Bettencurte Julio Dutra Agenda State Estimation Problems & Kalman Filter Henrique Massard Steady State
More informationCLOSE-TO-CLEAN REGULARIZATION RELATES
Worshop trac - ICLR 016 CLOSE-TO-CLEAN REGULARIZATION RELATES VIRTUAL ADVERSARIAL TRAINING, LADDER NETWORKS AND OTHERS Mudassar Abbas, Jyri Kivinen, Tapani Raio Department of Computer Science, School of
More informationNONLINEAR CLASSIFICATION AND REGRESSION. J. Elder CSE 4404/5327 Introduction to Machine Learning and Pattern Recognition
NONLINEAR CLASSIFICATION AND REGRESSION Nonlinear Classification and Regression: Outline 2 Multi-Layer Perceptrons The Back-Propagation Learning Algorithm Generalized Linear Models Radial Basis Function
More informationCSC242: Intro to AI. Lecture 21
CSC242: Intro to AI Lecture 21 Administrivia Project 4 (homeworks 18 & 19) due Mon Apr 16 11:59PM Posters Apr 24 and 26 You need an idea! You need to present it nicely on 2-wide by 4-high landscape pages
More informationChapter ML:VI. VI. Neural Networks. Perceptron Learning Gradient Descent Multilayer Perceptron Radial Basis Functions
Chapter ML:VI VI. Neural Networks Perceptron Learning Gradient Descent Multilayer Perceptron Radial asis Functions ML:VI-1 Neural Networks STEIN 2005-2018 The iological Model Simplified model of a neuron:
More informationDeep Feedforward Networks
Deep Feedforward Networks Liu Yang March 30, 2017 Liu Yang Short title March 30, 2017 1 / 24 Overview 1 Background A general introduction Example 2 Gradient based learning Cost functions Output Units 3
More information