Reservoir Computing Methods for Prognostics and Health Management (PHM)
Piero Baraldi, Energy Department, Politecnico di Milano, Italy


1 Reservoir Computing Methods for Prognostics and Health Management (PHM). Piero Baraldi, Energy Department, Politecnico di Milano, Italy.

2 Data & Industry Digitalization: 2.8 trillion GB (2.8 ZB) of data generated in 2016. [Chart: available data vs. analytics capability over time.]

3 Data & Industry Digitalization: 2.8 trillion GB (2.8 ZB) of data generated in 2016. [Chart: available data vs. analytics capability over time.] Data + analytics → predictive maintenance.

4 Predictive Maintenance. [Diagram: ambient & operating conditions and the monitored signals u_1, u_2, ..., u_N feed Industry 4.0 analytics; a prognostic model predicts the Remaining Useful Life (RUL) from the present time t_p to the failure time.]

5 Predictive Maintenance. [Same diagram as slide 4.] Benefits: safety improvement, cost saving, new business; maximum availability and business continuity; warehouse savings; zero-defect and zero-waste production; optimal maintenance decisions accounting for future demand and logistics options.

6 In this Presentation: Prognostics; Recurrent Neural Networks (RNN); Reservoir Computing; Echo State Networks; Application to the Prediction of Turbofan Engine RUL.

7 Prognostics: What is the Problem? An aircraft turbofan engine with N monitored signals. [Plot: Signal 1, Signal 2, ..., Signal N (e.g. temperature) versus time.]

8 Prognostics: What is the Problem? The N monitored signals of the aircraft turbofan engine feed a prognostic model that predicts the engine RUL. [Plots: Signal 1, ..., Signal N versus time; predicted RUL versus time.]

9 Prognostics: the Challenge. The system evolution depends on present and past signal values (the memory of the history). Two identical components with the same measurement u(t_p) at the present time t_p can nevertheless reach failure at different times t_f. [Plots: monitored signal u versus time for the two components, with different failure times.]

10 Prognostics: Methods. Feedforward Neural Network mapping the inputs u_1, u_2, u_3 to RUL(t_p) through weighted connections w_11, w_21, w_31, ...; each computational unit (neuron) outputs f( Σ_{i=1..3} w_{i1} u_i ).

11 Prognostics: Methods. Feedforward Neural Network (output: RUL(t_p)): connections only "from left to right", no connection cycle → no memory.

12 Prognostics: Methods. Feedforward Neural Network vs. Recurrent Neural Network (both output RUL(t_p)). Feedforward: connections only "from left to right", no connection cycle → no memory. Recurrent: at least one connection cycle → activation can "reverberate" and persist even with no input → a system with memory.

13 In this Presentation: Prognostics; Recurrent Neural Networks (RNN); Reservoir Computing; Echo State Networks; Application to the Prediction of Turbofan Engine RUL.

14 Recurrent NN: General Idea. The whole time trajectory u(1:t_p) of the N signals, from t = 1 to the present time t = t_p, is fed to the prognostic model, which outputs RUL(t_p).

15 Recurrent NN: General Idea. Non-linear expansion from dimension N to dimension M: the trajectory u(1:t_p) is mapped to a state x(t_p) = f(u(1:t_p)) with components x_1, x_2, ..., x_M; linear regression then gives RUL(t_p) = W_out x(t_p). Recursive definition of the state: x(t_p) = f( x(t_p - 1), u(t_p) ).

16 Recurrent NN. Non-linear expansion with input weights W_in and recurrent weights W: the inputs u_1(t_p), u_2(t_p), u_3(t_p) drive the state components, e.g. x_1(t_p) = f( Σ_{i=1..N} w_in_{i1} u_i(t_p) + Σ_{i=1..M} w_{i1} x_i(t_p - 1) ).

17 Recurrent NN. Non-linear expansion (W_in, W) followed by linear regression (W_out): x(t_p) = f( W_in u(t_p) + W x(t_p - 1) ), RUL(t_p) = W_out x(t_p).
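To make the two equations above concrete, a minimal sketch in Python/NumPy (the tanh nonlinearity and the array shapes are assumptions, since the slides only specify a generic f):

    import numpy as np

    def esn_step(x_prev, u, W_in, W):
        # State update: x(t) = f(W_in u(t) + W x(t-1)), with f = tanh (assumed)
        return np.tanh(W_in @ u + W @ x_prev)

    def esn_predict(U, W_in, W, W_out):
        # U: (T, N) trajectory u(1:T); returns RUL(t) = W_out x(t) for each t
        x = np.zeros(W.shape[0])
        out = []
        for u in U:
            x = esn_step(x, u, W_in, W)
            out.append(W_out @ x)
        return np.array(out)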

18 RNN: Training. Training set, built from a run-to-failure degradation trajectory u: pairs (u(1), RUL_GT(1) = t_f - 1), ..., (u(5), RUL_GT(5) = t_f - 5), ..., (u(t_f - 1), RUL_GT(t_f - 1) = 1). Parameters to learn: W_in, W, W_out. [Plot: u versus time t, with u(5) and RUL_GT(5) = t_f - 5 marked.]
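The ground-truth targets can be generated mechanically from each run-to-failure trajectory; a small sketch under the same conventions (time counted in cycles 1, ..., t_f; NumPy assumed):

    import numpy as np

    def training_pairs(U):
        # U: (t_f, N) run-to-failure trajectory, row t-1 holding u(t)
        t_f = U.shape[0]
        targets = t_f - np.arange(1, t_f)   # RUL_GT(t) = t_f - t, t = 1..t_f-1
        return U[:-1], targets              # pairs (u(t), RUL_GT(t))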

19 RNN: Training. Training set: pairs (u(1), RUL_GT(1) = t_f - 1), (u(2), RUL_GT(2) = t_f - 2), ..., (u(t_f - 1), RUL_GT(t_f - 1) = 1). Parameters to learn: W_in, W, W_out. Training objective: minimize the error function E(RUL, RUL_GT) = RMSE = sqrt( (1/(t_f - 1)) Σ_{t=1..t_f-1} (RUL(t) - RUL_GT(t))^2 ).

20 RNN: Training. Same training set and objective as above: minimize E(RUL, RUL_GT) = RMSE = sqrt( (1/(t_f - 1)) Σ_{t=1..t_f-1} (RUL(t) - RUL_GT(t))^2 ). Training methods: gradient-descent-based methods; Reservoir Computing.

21 Gradient-descent-based methods for RNN. [Diagram: the network (W_in, W, W_out) maps u(t) to RUL(t); the difference from RUL_GT(t) gives Error(t), which drives the weight updates.] RNNs are difficult to train with gradient-descent-based methods: bifurcations; many updating cycles, hence too-long training times; long-range memory is hard to obtain.

22 In this Presentation: Prognostics; Recurrent Neural Networks (RNN); Reservoir Computing; Echo State Networks; Application to the Prediction of Turbofan Engine RUL.

23 Reservoir Computing (RC): Terminology.
Reservoir. What is it? A non-linear temporal expansion function. Purpose: expand the input history u(1:t_p) into a rich-enough reservoir space x(t_p).
Readout. What is it? A linear function. Purpose: combine the neuron signals x(t_p) into the desired output signal (the target RUL(t_p)).

24 Reservoir Computing (RC): Basic Idea. Reservoir and readout serve different purposes → they can be trained separately.
Reservoir: a non-linear temporal expansion function; expands the input history u(1:t_p) into a rich-enough reservoir space x(t_p).
Readout: a linear function; combines the neuron signals x(t_p) into the desired output signal (the target RUL(t_p)).

25 Reservoir Methods: Echo State Networks; Liquid State Machines; Evolino; Backpropagation-Decorrelation; Temporal Recurrent Networks.

26 In this Presentation: Prognostics; Recurrent Neural Networks (RNN); Reservoir Computing; Echo State Networks; Application to the Prediction of Turbofan Engine RUL.

27 Generate the Reservoir. Purpose: obtain a rich-enough reservoir space x(n). Recipe: big reservoir (M up to 10^4) → rich-enough reservoir space; sparse connectivity → W has no more than 20% of the possible connections; random connectivity → the connection weights are randomly drawn from a uniform distribution symmetric around zero.
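A sketch of this recipe in NumPy (the 20% density bound comes from the slide; the uniform range [-1, 1] is an assumption):

    import numpy as np

    def generate_reservoir(M=1000, density=0.2, seed=0):
        rng = np.random.default_rng(seed)
        # Weights drawn from a uniform distribution symmetric around zero
        W = rng.uniform(-1.0, 1.0, size=(M, M))
        # Keep roughly `density` of the possible connections (sparsity)
        mask = rng.random((M, M)) < density
        return W * mask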

28 Readout. Purpose: learn the weights W_out which minimize E(RUL, RUL_GT) = RMSE = sqrt( (1/(t_f - 1)) Σ_{t=1..t_f-1} (W_out x(t) - RUL_GT(t))^2 ). [Diagram: the inputs u_1(t), ..., u_N(t) drive the reservoir (W_in, W); the readout is the linear regression RUL(t) = W_out x(t).]
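Because the readout is linear, W_out has a closed-form solution; a sketch fitting it over the collected reservoir states (the ridge term is an assumption added for numerical stability, not something the slide specifies):

    import numpy as np

    def train_readout(X, y, ridge=1e-6):
        # X: (T, M) states x(t); y: (T,) targets RUL_GT(t)
        # Minimizes ||X w - y||^2 + ridge ||w||^2 in closed form
        M = X.shape[1]
        A = X.T @ X + ridge * np.eye(M)
        return np.linalg.solve(A, X.T @ y)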

29 Training: Traditional RNN vs. ESN. [Diagrams: both networks map u through W_in, the recurrent W, and W_out to RUL, compared with RUL_GT to form the error.] Traditional RNN: W_in, W, and W_out are all trained from the error. ESN: W is randomly generated and kept fixed; only the readout W_out is trained from the error.

30 The Echo State Property. The effect of x(t) and u(t) on a future state x(t + k) should vanish gradually as time passes (i.e., as k → ∞) and not persist or even get amplified. For most practical purposes: if ρ(W) < 1, where ρ(W) is the spectral radius of W (its largest absolute eigenvalue), the Echo State Property is satisfied.
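In practice the condition is enforced by rescaling the randomly generated W to a chosen spectral radius; a sketch (the target value 0.9 is an arbitrary assumption):

    import numpy as np

    def scale_spectral_radius(W, target=0.9):
        # rho(W) = largest absolute eigenvalue of W
        rho = np.max(np.abs(np.linalg.eigvals(W)))
        return W * (target / rho)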

31 In this Presentation: Prognostics; Recurrent Neural Networks (RNN); Reservoir Computing; Echo State Networks; Application to the Prediction of Turbofan Engine RULs.

32 Prognostics: What is the Problem? The N monitored signals of the aircraft turbofan engine feed a prognostic model that predicts the engine RUL. [Plots: Signal 1, ..., Signal N versus time; predicted RUL versus time.]

33 The C-MAPSS dataset*: 260 run-to-failure trajectories; 21 measured signals + 3 signals representative of the operating conditions; 6 different operating conditions. Data preprocessing as in **.
* A. Saxena, K. Goebel, D. Simon, N. Eklund, "Damage propagation modeling for aircraft engine run-to-failure simulation", PHM 2008.
** M. Rigamonti, P. Baraldi, E. Zio, I. Roychoudhury, K. Goebel, S. Poll, "Echo State Network for Remaining Useful Life Prediction of a Turbofan Engine", PHM 2016, Bilbao.
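For reference, the public C-MAPSS files are whitespace-separated text with one row per engine cycle; a loading sketch with pandas (the file name and column layout follow the PHM 2008 release and should be checked against the copy actually used):

    import pandas as pd

    cols = (["unit", "cycle"]
            + [f"op_setting_{i}" for i in range(1, 4)]   # 3 operating-condition signals
            + [f"sensor_{i}" for i in range(1, 22)])     # 21 measured signals

    # Some copies have trailing blanks that add empty columns; drop them if present
    df = pd.read_csv("train_FD002.txt", sep=r"\s+", header=None, names=cols)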

34-39 ESN Architecture Optimization. Network architecture parameters to optimize: 1) network dimensions; 2) spectral radius; 3) connectivity; 4) input scaling and shifting; 5) output scaling and shifting; 6) output feedback. The reservoir neurons use a sigmoidal activation function. [Diagrams: the ESN producing RUL(t), highlighting each parameter in turn.]

40 ESN Architecture Optimization. Experience plus trial and error is difficult, and good performance is not guaranteed → use an optimization algorithm: differential evolution, a population-based evolutionary scheme (initialization, mutation, crossover, selection). Objective function: the relative accuracy RA = |RUL_GT - RUL_pred| / RUL_GT. Chromosome: network dimensions, connectivity, spectral radius, input scaling, input shifting.
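A sketch of how this search could be wired up with SciPy's differential evolution (the bounds are illustrative assumptions, and train_and_evaluate_esn is a hypothetical helper that builds an ESN from the chromosome and returns its validation RA):

    from scipy.optimize import differential_evolution

    # Chromosome: [network dimension, connectivity, spectral radius,
    #              input scaling, input shifting]
    bounds = [(100, 10000), (0.01, 0.2), (0.1, 0.99), (0.1, 2.0), (-1.0, 1.0)]

    def fitness(chromosome):
        # Hypothetical helper: trains an ESN and returns the RA to minimize;
        # integer-valued genes (e.g. network dimension) are rounded inside it
        return train_and_evaluate_esn(chromosome)

    result = differential_evolution(fitness, bounds, maxiter=50, popsize=15)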

41 Optimal Architecture. [Table: optimized values of the network dimensions, connectivity, spectral radius, input scaling, and input shifting, alongside the RUL(t) network diagram.]

42 ESN for Prognostics: Results (I). [Plot: RUL prediction for transient 157, RUL (cycles) versus time (cycles), comparing the true RUL with the ESN, FS, and ELM predictions.] ESN = Echo State Network; FS = Fuzzy Similarity-based Prognostic Method; ELM = Extreme Learning Machine.

43 ESN for Prognostics: Results (II). Prognostic metrics computed over 70 test trajectories: cumulative relative accuracy RA = |RUL_GT - RUL_pred| / RUL_GT; alpha-lambda accuracy with α = 0.2; steadiness SI(t), based on the variance of the RUL prediction over time. [Table of metric values (mean ± std) for the Extreme Learning Machine (RA 0.42 ± ...), the Fuzzy Similarity-based Method, and the Echo State Network (RA 0.37 ± ...); the remaining entries are not recoverable.]
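A sketch of the relative-accuracy metric as reconstructed above (the averaging over prediction times is an assumption):

    import numpy as np

    def relative_accuracy(rul_pred, rul_gt):
        # RA = mean of |RUL_GT(t) - RUL_pred(t)| / RUL_GT(t)
        rul_pred = np.asarray(rul_pred, float)
        rul_gt = np.asarray(rul_gt, float)
        return float(np.mean(np.abs(rul_gt - rul_pred) / rul_gt))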

44 Conclusions. RUL prediction is a dynamic problem → recurrent neural networks. Training via reservoir computing → Echo State Network: accurate RUL prediction, short training time, able to capture the system dynamics.

45 Acknowledgments: Dr. Sameer Al-Dahidi, Francesco Cannarile, Dr. Michele Compare, Dr. Francesco Di Maio, Dr. Marco Rigamonti, Mingjing Xu, Zhe Yang, Prof. Enrico Zio.

46 Thank You!
