Sborník vědeckých prací Vysoké školy báňské - Technické univerzity Ostrava
číslo 2, rok 2007, ročník LIII, řada strojní, článek č. 1570

Jolana ŠKUTOVÁ *

AN ORTHOGONAL NEURAL NETWORK FOR NONLINEAR FUNCTION MODELLING

MODELOVÁNÍ NELINEÁRNÍ FUNKCE ORTOGONÁLNÍ NEURONOVOU SÍTÍ

Abstract

Neural networks provide one of the means for the identification or control of a nonlinear system, and the multilayer feedforward network is often used in these areas. Using this kind of neural network may, however, bring premature termination of learning before the network weights reach a global minimum, slow training convergence, an unsuitable initialization of the weight parameters, and stability problems in on-line implementation. In this paper an orthogonal neural network (ONN) is presented. The ONN's hidden layer consists of neurons with orthogonal activation functions. The ONN has a much higher learning speed than multilayer feedforward networks.

Abstrakt

V oblasti modelování nebo řízení nelineárních systémů se často využívá neuronová síť se zpětným šířením, tzv. perceptronová neuronová síť, která se potýká s problémy lokálního minima, nízkou rychlostí konvergence, volbou počátečních parametrů vah a počtem neuronů. V tomto příspěvku je prezentována ortogonální neuronová síť s jednou skrytou vrstvou, která využívá ortogonální aktivační funkce. Tato neuronová síť umožňuje odstranit výše uvedené problémy. Ortogonální neuronová síť má mnohem vyšší rychlost učení než dopředné sítě.

1 INTRODUCTION

Neural networks are now commonly used for function approximation, prediction, identification of unknown systems, system control and optimization, and robotics. Multilayer feedforward neural networks are applied most often, namely the perceptron and the radial basis function network, which differ in the type of activation function. In these applications, feedforward neural networks have several problematic points: the speed of learning convergence, the attainment of a global minimum, and the setting of the initial weight parameters. Many publications deal with methodologies that remove these problems, and one radical approach is the selection of a novel neural network structure. This contribution describes such a novel model: a neural network with an unconventional internal ordering of the neuron connections and an orthogonal type of activation function. Finally, conclusions are drawn from simulation experiments in which the orthogonal neural network models a chosen nonlinear function.

2 ORTHOGONAL NEURAL NETWORK STRUCTURE

The ONN (orthogonal neural network) is a feedforward neural network with multiple inputs and a single output (MISO, multi-input-single-output) and with a hidden layer of neurons with orthogonal activation functions (Fig. 1). The inputs of the neural network are distributed into blocks of orthogonal neurons, one block for each input.

* Ing., Department of Control Systems and Instrumentation, Faculty of Mechanical Engineering, VŠB - Technical University of Ostrava, 17. listopadu 15/2172, Ostrava-Poruba, tel. (+420) , jolana.skutova@vsb.cz

The number of neurons for each input signal is arbitrary, and the i-th neuron corresponds to the i-th orthogonal function order. The number of orthogonal neurons is given by

    N_{ortg} = \sum_{i=1}^{m} N_i    (1)

where m is the number of inputs [-] and N_i is the number of neurons for each input [-].

The next layer of the ONN is arranged into nodes which consist of product combinations of the particular outputs from the orthogonal neurons, and it is defined as

    \phi_{n_1 \cdots n_m}(\mathbf{x}) = \prod_{i=1}^{m} \phi_{n_i}(x_i)    (2)

where m is the dimension of the input vector [-], \mathbf{x} = [x_1\; x_2 \cdots x_m] is the input vector, and \phi_{n_i} is the output of the orthogonal function implemented by each hidden-layer neuron [-].

Fig. 1 The structure of the orthogonal neural network (layers of orthogonal functions, product function nodes, and the sum node producing the output y)
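As a concrete illustration of Eqs. (1)-(2), the following Python sketch evaluates one block of orthogonal neurons per input and then forms the product nodes. It is a minimal sketch only, not the paper's Matlab/Simulink implementation; the choice of Legendre polynomials as the orthogonal family anticipates the next section, and the function names are ours.

import itertools
import numpy as np
from numpy.polynomial import legendre

def orthogonal_blocks(x, degrees):
    # One block of orthogonal neurons per input (Eq. 1): for input x_i the
    # block outputs phi_1(x_i), ..., phi_{N_i}(x_i), here Legendre P_1..P_{N_i}.
    blocks = []
    for x_i, N_i in zip(x, degrees):
        coeffs = np.eye(N_i + 1)            # row n = coefficient vector of P_n
        blocks.append(np.array([legendre.legval(x_i, coeffs[n])
                                for n in range(1, N_i + 1)]))
    return blocks

def product_nodes(blocks):
    # Product-combination nodes of Eq. (2): one node per index tuple
    # (n_1, ..., n_m), multiplying one neuron output taken from each block.
    return np.array([np.prod(combo) for combo in itertools.product(*blocks)])

# Example: m = 2 inputs with N_1 = 3 and N_2 = 2 orthogonal neurons,
# i.e. 5 orthogonal neurons in total (Eq. 1) and 3 x 2 = 6 product nodes.
nodes = product_nodes(orthogonal_blocks([0.4, -0.7], [3, 2]))
print(nodes.shape)   # (6,)

Note that while the number of orthogonal neurons grows as the sum of the N_i, the number of product nodes grows as their product, which is what gives the network its representational power for multi-input systems.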

The ONN output is given by the sum of all the node outputs from the previous layer (2) and can be mathematically expressed as

    \hat{y}(\mathbf{x}, \hat{\mathbf{w}}) = \sum_{n_1=1}^{N_1} \cdots \sum_{n_m=1}^{N_m} \hat{w}_{n_1 \cdots n_m} \, \phi_{n_1 \cdots n_m}(\mathbf{x}) = \hat{\mathbf{w}}^T \Phi(\mathbf{x})    (3)

where \Phi is the transformed input vector [-] and \hat{\mathbf{w}} is the transformed weight vector [-].

Orthogonal Activation Function

The rate of convergence when orthogonal functions are used as the activation functions of a neural network is higher than the rate of convergence of perceptron or radial basis function networks, for which it is a sticking point. The set of one-dimensional orthogonal functions \phi_i is defined by

    \int_{x_{min}}^{x_{max}} \phi_i(x) \, \phi_j(x) \, dx = \delta_{ij}    (4)

where \delta_{ij} is the Kronecker delta function [-], which is given by

    \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}    (5)

There is a number of orthogonal polynomials, e.g. the Hermite, Legendre, Laguerre, Chebyshev and Fourier polynomials [5]. Thus far, only the Legendre and Fourier polynomials have been applied to the modelling of the given nonlinear function. The ONN with the Fourier activation function did not model the outputs with sufficient quality; hence the subsequent experiments were realized with the ONN with the Legendre polynomials.

3 TRAINING METHOD OF ORTHOGONAL NEURAL NETWORK

Generally, the learning process is performed by adapting the network weights such that the expected value of the mean squared error between the network output and the training output is minimized. Gradient descent-based learning algorithms are popular training algorithms for neural networks. To train the proposed network, the learning rules are determined from a Lyapunov-like stability analysis. The cost function of the ONN is given by

    E = \frac{1}{2} e^2 = \frac{1}{2} (y - \hat{y})^2    (6)

where e is the learning error [-], \hat{y} is the output of the ONN [-] and y is the actual output [-].
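To make Eq. (4) concrete for the Legendre family used below, the following sketch (our illustration, not the paper's code) builds P_n with the classical three-term recurrence (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x) and checks the orthogonality integral numerically on [x_min, x_max] = [-1, 1]. The classical P_n are orthogonal but not orthonormal; the scaling by sqrt((2n+1)/2) applied here is what makes the integral equal the Kronecker delta exactly, as required by Eqs. (4)-(5).

import numpy as np

def legendre_poly(n, x):
    # P_n(x) via the recurrence (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}.
    p_prev, p = np.ones_like(x), x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

x = np.linspace(-1.0, 1.0, 20001)
dx = x[1] - x[0]
for i in range(4):
    for j in range(4):
        phi_i = np.sqrt((2 * i + 1) / 2) * legendre_poly(i, x)
        phi_j = np.sqrt((2 * j + 1) / 2) * legendre_poly(j, x)
        integral = np.sum(phi_i * phi_j) * dx      # left-hand side of Eq. (4)
        print(i, j, round(float(integral), 3))     # ~1 if i == j, else ~0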

The gradient descent algorithm adjusts the network weights in such a way that the square of the neural network learning error changes in the negative gradient direction; the weight adaptation is then given by

    \Delta\hat{\mathbf{w}} = -\delta \frac{\partial E}{\partial \hat{\mathbf{w}}}    (7)

where \Delta\hat{\mathbf{w}} is the variation of the network weights [-], \delta is the network learning rate [-] and E is the mean square error [-].

Thus the ONN's weight update law for the instantaneous gradient descent algorithm is given as

    \hat{\mathbf{w}}(t) = \hat{\mathbf{w}}(t-1) + \delta e \Phi(t)    (8)

where \Phi is the transformed input vector consisting of the orthogonal functions [-] and e is the learning error [-].

To guarantee the stability of the ONN's training, the learning rate \delta must be bounded by

    0 < \delta < \frac{2}{\Phi^T \Phi}    (9)

This learning rate is one of the parameters that are tuned experimentally to obtain the optimal neural network model, depending on the chosen modelled system [1].

4 MODELLING OF NONLINEAR FUNCTION

This paper presents nonlinear function modelling results that are a step towards the modelling of a real-time system, and subsequently towards the application of the orthogonal neural network in a control system. The algorithm implementing the orthogonal neural network in the Matlab/Simulink program has already been designed and compiled. Further, the necessary source code optimization has been performed to decrease the computational time. The verification and the experiments with the particular structure design have been carried out on the approximation of the nonlinear function

    y(x) = \frac{2}{1 + e^{-2x}} - 1    (10)

The neural network structure is given by the one input and one output of the nonlinear function, and the hidden layer includes 20 neurons with orthogonal activation functions of the Legendre kind, namely of the 1st to the 20th degree. The working range of the input signal corresponds to values in the interval [-2.5, 2.5]. The ONN's training is carried out in epochs (one epoch represents one loop of the training algorithm over the whole training data). For quality results of the neural network weight update, a maximum of 100 epochs has been chosen. The learning rate value was a matter of experiment and was set to one quarter of the maximum value of \delta in Eq. (9).
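The experiment can be sketched end to end in a few lines. The following Python sketch (the paper's implementation was in Matlab/Simulink) trains the weights of the Legendre basis of degrees 1 to 20 with the update law (8), a learning rate of one quarter of the bound (9), and at most 100 epochs; the number of training samples, the random seed, and the rescaling of the input from [-2.5, 2.5] to [-1, 1] before evaluating the Legendre polynomials are our assumptions, not given in the text.

import numpy as np

N = 20                                    # Legendre degrees 1 to 20
x_train = np.linspace(-2.5, 2.5, 200)     # working range (sample count assumed)
y_train = 2.0 / (1.0 + np.exp(-2.0 * x_train)) - 1.0   # target, Eq. (10)

def phi(x):
    # Transformed input vector Phi(x): Legendre P_1..P_N at the rescaled input.
    t = x / 2.5                           # map [-2.5, 2.5] to [-1, 1] (assumed)
    out = np.empty(N)
    p_prev, p = 1.0, t
    for n in range(1, N + 1):
        out[n - 1] = p
        p_prev, p = p, ((2 * n + 1) * t * p - n * p_prev) / (n + 1)
    return out

rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, N)             # initial weights selected in [-1; 1]

# Stability bound (9): delta < 2 / (Phi^T Phi); take a quarter of the maximum.
delta = 0.25 * 2.0 / max(phi(x) @ phi(x) for x in x_train)

for epoch in range(100):                  # at most 100 epochs
    for x, y in zip(x_train, y_train):
        p = phi(x)
        e = y - w @ p                     # learning error e = y - y_hat, Eq. (6)
        w = w + delta * e * p             # instantaneous update law, Eq. (8)

y_hat = np.array([w @ phi(x) for x in x_train])
print("final MSE:", np.mean((y_train - y_hat) ** 2))

Plotting y_hat against y_train after training gives the kind of comparison shown in Fig. 3 below.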

The initial weights were selected randomly in the interval [-1; 1]. The attainment of the global minimum was demonstrated by running the training process several times with different initial weight values. The final weight values have steadied (Fig. 4) in comparison with the weight values in the first training epoch. The orthogonal neural network was implemented in the Simulink program with a minimal number of blocks. Further, the individual degrees of the Legendre polynomial were realized by particular M-files. The most difficult point was the composition of the ONN's product nodes in the Simulink program.

Fig. 2 The ONN's error for the given nonlinear function

Fig. 3 The desired actual output and the ONN's output for the given nonlinear function

Fig. 4 The ONN's weight values in the first epoch (solid line) and the last epoch (dashed line) of the training process

Fig. 5 The ONN training convergence

5 CONCLUSIONS

The orthogonal neural network prunes away some fundamental sticking points of the applications of perceptron neural networks, namely the rate of convergence of the learning process and the attainment of the global minimum during training. In contrast with perceptron networks, it requires different training data for the acquisition of a robust neural network. Tests and experiments now proceed with the selection of the number and pattern of the inputs, the selection of the training data sets, and the number of orthogonal neurons (the order of the Legendre polynomials) for a real pressure-air system. For system control many control strategies using neural networks exist; the design of a suitable neural network structure is being examined and tested in order to achieve quality control.

This work was developed with the support of the research project GACR 101/06/0491.

REFERENCES

[1] LEONDES, C. Control and Dynamic Systems. Academic Press, pp. ISBN.

[2] YANG, S. S. & TSENG, C. S. An Orthogonal Neural Network for Function Approximation. In IEEE Transactions on Systems, Man, and Cybernetics, Part B, 1996, Vol. 26, Issue 5, pp. ISSN.

[3] SOLOWAY, D. & HALEY, P. J. Neural Generalized Predictive Control: A Newton-Raphson Implementation. In Proceedings of the IEEE International Symposium on Intelligent Control. Dearborn: ISIC, 1996, pp. ISBN.

[4] GUO, B. & YU, J. Model Adaptive Control Based on a Compound Orthogonal Neural Network. International Journal of Information Technology, 2006, Vol. 12, Issue 5, pp. ISSN.

[5] WIKIPEDIA, The Free Encyclopedia [online]. Available from: <URL: >.

Reviewer: doc. Ing. Zora Jančíková, CSc., VŠB - Technical University of Ostrava
