Deep Belief Network using Reinforcement Learning and its Applications to Time Series Forecasting
Deep Belief Network using Reinforcement Learning and its Applications to Time Series Forecasting

Takaomi HIRATA, Takashi KUREMOTO, Masanao OBAYASHI, Shingo MABU
Graduate School of Science and Engineering, Yamaguchi University
Tokiwadai 2-16-1, Ube, Yamaguchi, JAPAN
{v003we, wu, m.obayas,

Kunikazu KOBAYASHI
School of Information Science and Technology, Aichi Prefectural University
1522-3 Ibaragabasama, Nagakute, Aichi, JAPAN

Abstract. Artificial neural networks (ANNs), typified by deep learning (DL), are among the artificial intelligence technologies attracting the most attention from researchers recently. However, the learning algorithm used in DL is usually the famous error-backpropagation (BP) method. In this paper, we adopt a reinforcement learning (RL) algorithm, Stochastic Gradient Ascent (SGA), proposed by Kimura & Kobayashi, into a Deep Belief Net (DBN) with multiple restricted Boltzmann machines, instead of the BP learning method. A long-term prediction experiment, which used a benchmark of a time series forecasting competition, was performed to verify the effectiveness of the novel DL method.

Keywords: deep learning, restricted Boltzmann machine, stochastic gradient ascent, reinforcement learning, error-backpropagation

1 Introduction

A time series is a data string observed from the temporal change of a certain phenomenon: for example, foreign currency exchange rates, stock prices, the amount of rainfall, the change of sunspots, etc. There is a long history of time series analysis and forecasting [1]; however, the prediction of real-world time-series data is still an open problem because of nonlinearity and the influence of noise. Artificial neural networks (ANNs) have been widely used in pattern recognition, function approximation, nonlinear control, time series forecasting, and so on, since the 1940s [2]-[11]. After a learning algorithm named error-backpropagation (BP) was proposed by Rumelhart, Hinton & Williams for multi-layer perceptrons (MLPs) [2],
ANNs had their heyday from the 1980s to the 1990s. As a successful application, ANNs are utilized as time series predictors today [3]-[11]. In particular, a deep belief net (DBN) composed of restricted Boltzmann machines (RBMs) [6] and a multi-layer perceptron (MLP) [2] was proposed recently in [8]-[10]. Deep learning (DL), a novel kind of ANN training method, is attracting the most attention of artificial intelligence (AI) researchers. By layer-by-layer training algorithms and stacked structures, big data in high-dimensional spaces can be classified, recognized, or sparsely modeled by DL [6]. However, although there are various kinds of deep networks, such as the auto-encoder, the deep Boltzmann machine, the convolutional neural network (CNN), etc., the learning algorithm used in DL is usually the famous BP method. In this paper, a reinforcement learning method named stochastic gradient ascent (SGA), proposed by Kimura & Kobayashi [13], is introduced into the DBN with RBMs as the fine-tuning method, instead of the conventional BP learning method. The error between the output of the DBN and the sample is used as a reward/punishment to modify the weights of the connections between units in different layers. Using the benchmark CATS data of the prediction competition [4][5], the effectiveness of the proposed deep learning method was confirmed by the time series forecasting results.

2 DBN with BP Learning (The Conventional Method)

In [8], Kuremoto et al. first applied Hinton & Salakhutdinov's deep belief net (DBN) with restricted Boltzmann machines (RBMs) to the field of time series forecasting. In [9] and [10], Kuremoto, Hirata, et al. constructed a DBN with RBMs and a multi-layer perceptron (MLP) to improve the previous time series predictor built with RBMs only. In this section, these conventional methods are introduced.

2.1 DBN with RBM and MLP

Restricted Boltzmann machine (RBM). An RBM is a Boltzmann machine consisting of two layers, the visible layer and the hidden layer, with no connections between neurons in the same layer.
It is possible to extract features that compress high-dimensional data into low-dimensional data by performing an unsupervised learning algorithm. Each unit of the visible layer has a symmetric connection weight with respect to each unit of the hidden layer: the coupling between the units is bi-directional, and the value of the connection weight between two units is the same in both directions. Each unit of the visible layer has a bias $b_i$, and each unit of the hidden layer a bias $b_j$. All neurons in the visible layer and the hidden layer stochastically output 1 or 0, according to probabilities given by a sigmoid function (Eqs. (1) and (2)):

p(h_j = 1 \mid v) = \frac{1}{1 + \exp(-b_j - \sum_i w_{ij} v_i)}   (1)
p(v_i = 1 \mid h) = \frac{1}{1 + \exp(-b_i - \sum_j w_{ij} h_j)}   (2)

Using a learning rule which modifies the weights of the connections, the RBM network can reach a convergent state, observed through its energy function:

E(v, h) = -\sum_i b_i v_i - \sum_j b_j h_j - \sum_{i,j} w_{ij} v_i h_j   (3)

Details of the learning algorithm of the RBM can be found in [8].

2.2 DBN with RBM and MLP

A DBN is a multi-layer neural network which usually composes multiple RBMs [6][8]. A DBN can extract features of features of high-dimensional data, so it is also called deep learning, and it is applied to many fields such as dimensionality reduction, image compression, pattern recognition, time series forecasting, and so on. A DBN prediction system composed of plural RBMs was proposed recently in [8], and another DBN prediction system using an RBM and an MLP was proposed in [9] (see Fig. 1). However, all of these DBNs used the learning algorithm proposed in [6], i.e., unsupervised learning for the RBMs and supervised BP learning as the conventional fine-tuning.

Fig. 1. A structure of a DBN composed of an RBM and an MLP

3 DBN with SGA (The Proposed Method)

BP is a powerful learning method for feed-forward neural networks; however, it modifies the model strictly according to the teacher signal, so the robustness of ANNs built by BP is restricted. Meanwhile, noise usually exists in real time-series data. Therefore, we consider that prediction performance may be improved by using a reinforcement learning (RL) algorithm to modify the ANN predictors according to rewards/punishments corresponding to good/bad outputs. That is, if the absolute error between the output and the sample is small enough (e.g., below a threshold), the output is considered good, and a positive reward is used to modify the parameters of the ANN. In [7], an MLP and a self-organized fuzzy neural network (SOFNN) with an RL algorithm called stochastic gradient ascent (SGA) [13] were shown to be superior to the conventional BP learning algorithm. Here, we intend to investigate the case where SGA is used in a DBN instead of BP.
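As a concrete illustration of the RBM of Section 2.1, the following minimal NumPy sketch implements the conditional probabilities of Eqs. (1)-(2) and the energy function of Eq. (3). All names (`W`, `b_v`, `b_h`, `sample_h`, the layer sizes) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 4
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))  # symmetric weights w_ij
b_v = np.zeros(n_visible)  # visible biases b_i
b_h = np.zeros(n_hidden)   # hidden biases b_j

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_h(v):
    """Eq. (1): p(h_j = 1 | v) = sigmoid(b_j + sum_i w_ij v_i); sample binary h."""
    p = sigmoid(b_h + v @ W)
    return (rng.random(p.shape) < p).astype(float), p

def sample_v(h):
    """Eq. (2): p(v_i = 1 | h) = sigmoid(b_i + sum_j w_ij h_j); sample binary v."""
    p = sigmoid(b_v + h @ W.T)
    return (rng.random(p.shape) < p).astype(float), p

def energy(v, h):
    """Eq. (3): E(v, h) = -sum_i b_i v_i - sum_j b_j h_j - sum_ij w_ij v_i h_j."""
    return -(b_v @ v) - (b_h @ h) - v @ W @ h

# One step of alternating Gibbs sampling between the two layers
v0 = rng.integers(0, 2, n_visible).astype(float)
h0, _ = sample_h(v0)
v1, _ = sample_v(h0)
```

Because the connections are symmetric and intra-layer links are absent, the units of one layer are conditionally independent given the other layer, which is what makes this alternating sampling tractable.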
Fig. 2. The structure of the MLP designed for SGA

3.1 The structure of ANNs with SGA

In Fig. 2, an MLP-type ANN is designed for SGA [7]. The main difference from the conventional MLP is that the output of the network is given by two units $(\mu, \sigma)$, which are the parameters of a stochastic policy given by a Gaussian distribution function.

3.2 SGA learning algorithm for ANNs

The SGA algorithm is given as follows [7][13].

1. Observe an input $x(t)$ at time $t$.
2. Predict a future data $y(t)$ as $\hat{y}(t)$ according to the probability
\pi(\hat{y}(t) \mid W, x(t)) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{(\hat{y}(t) - \mu)^2}{2\sigma^2}\right)
with the ANN model constructed by the internal variable vector $W$ (the connection weights $w, v$ of the network).
3. Receive a scalar reward/punishment $r_t$ by calculating the prediction error:

r_t = \begin{cases} 1 & \text{if } (y(t) - \hat{y}(t))^2 < \beta \, MSE(t) \\ -1 & \text{otherwise} \end{cases}   (4)

where $MSE(t)$ is the mean squared prediction error up to time $t$ and $\beta$ is a positive constant.
4. Calculate the characteristic eligibility $e_i(t)$ and the eligibility trace $\bar{D}_i(t)$:

e_i(t) = \frac{\partial}{\partial w_i} \ln \pi(\hat{y}(t) \mid W, x(t))   (5)

\bar{D}_i(t) = e_i(t) + \gamma \bar{D}_i(t-1)   (6)

where $0 \le \gamma < 1$ is a discount factor and $w_i$ denotes the $i$-th element of the internal variable vector $W$. Then calculate $\Delta w_i(t)$:

\Delta w_i(t) = (r_t - b) \bar{D}_i(t)   (7)
where $b$ denotes the reinforcement baseline.
5. Improve the policy $\pi(\hat{y}(t) \mid x(t), W)$ by renewing its internal variable vector $W$:

W \leftarrow W + \alpha \Delta W   (8)

where $\alpha$ is the learning rate.
6. At the next time step $t+1$, if the prediction error $MSE(t)$ has converged sufficiently, end the learning process; otherwise, return to step 1.

Fig. 3. Time series data of CATS

4 Prediction Experiments and Results

4.1 CATS benchmark time series data

The CATS time series is an artificial benchmark for a forecasting competition with ANN methods [4][5]. This artificial time series contains 5,000 data points, among which 100 are missing (hidden by the competition organizers). The missing data exist in 5 blocks: elements 981 to 1,000; elements 1,981 to 2,000; elements 2,981 to 3,000; elements 3,981 to 4,000; elements 4,981 to 5,000. The mean square error $E_1$ is used as the prediction precision in the competition, and it is computed over the 100 missing data and their predicted values as follows:

E_1 = \frac{1}{100}\left\{\sum_{t=981}^{1000}(y(t) - \hat{y}(t))^2 + \sum_{t=1981}^{2000}(y(t) - \hat{y}(t))^2 + \sum_{t=2981}^{3000}(y(t) - \hat{y}(t))^2 + \sum_{t=3981}^{4000}(y(t) - \hat{y}(t))^2 + \sum_{t=4981}^{5000}(y(t) - \hat{y}(t))^2\right\}   (9)

where $\hat{y}(t)$ is the long-term prediction result for the missing data.
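The SGA loop of Section 3.2 can be sketched as follows. This is a simplified illustration only: a linear Gaussian policy stands in for the MLP of Fig. 2, the toy AR(1) data replaces the CATS series, and all names and hyperparameter values (`alpha`, `gamma`, `beta`, `baseline`) are assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: a noisy AR(1) process (assumed stand-in for a real time series)
T, lag = 400, 3
series = np.zeros(T)
for t in range(1, T):
    series[t] = 0.9 * series[t - 1] + 0.1 * rng.normal()

w = np.zeros(lag)        # weights producing the policy mean mu
log_sigma = 0.0          # log of the policy std (log-scale keeps sigma positive)
D_w = np.zeros(lag)      # eligibility traces, Eq. (6)
D_s = 0.0
alpha, gamma, beta, baseline = 0.01, 0.9, 1.0, 0.0
sq_errors = []

for t in range(lag, T):
    x = series[t - lag:t]                       # input x(t)
    mu, sigma = w @ x, np.exp(log_sigma)
    y_hat = rng.normal(mu, sigma)               # draw prediction from Gaussian policy
    y = series[t]
    sq_errors.append((y - y_hat) ** 2)
    mse = float(np.mean(sq_errors))
    r = 1.0 if (y - y_hat) ** 2 < beta * mse else -1.0   # reward, Eq. (4)
    # Characteristic eligibility e = d(ln pi)/dW for a Gaussian policy, Eq. (5)
    e_w = (y_hat - mu) / sigma ** 2 * x
    e_s = ((y_hat - mu) ** 2 / sigma ** 2) - 1.0
    D_w = e_w + gamma * D_w                     # eligibility trace update, Eq. (6)
    D_s = e_s + gamma * D_s
    w += alpha * (r - baseline) * D_w           # policy improvement, Eqs. (7)-(8)
    log_sigma += alpha * (r - baseline) * D_s
```

Note the key design difference from BP: the weights move along the reward-weighted eligibility trace rather than along an exact error gradient, so only the sign of the judgment (good/bad output) shapes the update.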
4.2 Optimization of meta parameters

The number of RBMs that constitute the DBN and the number of neurons in each layer seriously affect the prediction performance. In this paper, these meta parameters are optimized using the random search method [12]; in the optimization of ANN structures, random search is known to exhibit higher performance than grid search. The meta parameters of the DBN and their exploration limits are as follows: the number of RBMs: 0-3; the number of neurons in each layer: 2-20; the learning rate of each RBM; the learning rate of SGA; the discount factor γ; and the constant β in Eq. (4).

Fig. 4. The prediction results of different methods for CATS (Block 1).

Table 1. The comparison of performance between different methods using CATS data

Method                                                           E_1
DBN(SGA) (proposed)                                              170
DBN(BP)+ARIMA [10]                                               244
DBN [9] (BP)                                                     257
Kalman Smoother (the best of IJCNN '04) [5]                      408
DBN [8] (2 RBMs)                                                 1215
MLP [8]                                                          1245
A hierarchical Bayesian Learning Scheme for Autoregressive
Neural Networks (the worst of IJCNN '04) [4]                     1247

4.3 Experiment Results

The prediction results for the first block of the CATS data are shown in Fig. 4. Compared to the conventional learning method of the DBN, i.e., Hinton's RBM unsupervised learning method [6][8] and back-propagation (BP), the proposed method, which used the reinforcement learning method SGA instead of BP, showed its superiority according to the measure of the average prediction precision E_1. Additionally, the result of the proposed method achieved the top rank over all previous studies, such as the MLP with BP, the best prediction of the CATS competition IJCNN'04 [5], the conventional DBNs with BP [8][9], and hybrid models [10][11]. The details are shown in Table 1. The optimal parameters obtained by the random search method are shown in Table 2.

Table 2. Parameters of the DBN used for the CATS data (Block 1)

                                                       DBN with SGA (proposed)    DBN with BP [9]
The number of RBMs                                     3                          1
Learning rate of each RBM
Structure of DBN (the number of neurons in each layer)
Learning rate of SGA or BP
Discount factor
Coefficient β

5 Conclusion

In this paper, we proposed the use of a reinforcement learning method, stochastic gradient ascent (SGA), to realize the fine-tuning of a deep belief net (DBN) composed of multiple restricted Boltzmann machines (RBMs) and a multi-layer perceptron (MLP). Different from the conventional fine-tuning method using error backpropagation (BP), the proposed method uses a rough judgment based on the reinforcement learning concept, i.e., a good output receives a positive reward and a bad one a negative reward value, to modify the network. This makes sparse model building available, avoiding the over-fitting problem that occurs with a loss function tied strictly to the teacher signal. Time series prediction was applied to verify the effectiveness of the proposed method, and compared with the conventional methods, the DBN with SGA showed the highest prediction precision in the case of the CATS benchmark data. Future work will investigate the effectiveness of the method on real time series data and on other nonlinear systems such as chaotic time series data.

Acknowledgment: This work was supported by JSPS KAKENHI grants.
References:

1. G. E. P. Box, D. A. Pierce, Distribution of Residual Autocorrelations in Autoregressive-Integrated Moving Average Time Series Models, Journal of the American Statistical Association, Vol. 65, No. 332, Dec. 1970.
2. D. E. Rumelhart, G. E. Hinton, R. J. Williams, Learning representations by back-propagating errors, Nature, Vol. 323, 1986, pp. 533-536.
3. M. Casdagli, Nonlinear prediction of chaotic time series, Physica D, Vol. 35, 1989, pp. 335-356.
4. A. Lendasse, E. Oja, O. Simula, M. Verleysen, Time Series Prediction Competition: The CATS Benchmark, Proceedings of the International Joint Conference on Neural Networks (IJCNN'04), 2004.
5. A. Lendasse, E. Oja, O. Simula, M. Verleysen, Time Series Prediction Competition: The CATS Benchmark, Neurocomputing, Vol. 70, 2007.
6. G. E. Hinton, R. R. Salakhutdinov, Reducing the dimensionality of data with neural networks, Science, Vol. 313, 2006, pp. 504-507.
7. T. Kuremoto, M. Obayashi, M. Kobayashi, Neural Forecasting Systems, in Reinforcement Learning, Theory and Applications (ed. C. Weber, M. Elshaw and N. M. Mayer), Chapter 1, pp. 1-20, 2008, INTECH.
8. T. Kuremoto, S. Kimura, K. Kobayashi, M. Obayashi, Time series forecasting using a deep belief network with restricted Boltzmann machines, Neurocomputing, Vol. 137, Aug. 2014, pp. 47-56.
9. T. Kuremoto, T. Hirata, M. Obayashi, S. Mabu, K. Kobayashi, Forecast Chaotic Time Series Data by DBNs, Proceedings of the 7th International Congress on Image and Signal Processing (CISP 2014), Oct. 2014.
10. T. Hirata, T. Kuremoto, M. Obayashi, S. Mabu, Time Series Prediction Using DBN and ARIMA, International Conference on Computer Application Technologies (CCATS 2015), Matsue, Japan, Sep. 2015.
11. G. P. Zhang, Time series forecasting using a hybrid ARIMA and neural network model, Neurocomputing, Vol. 50, 2003, pp. 159-175.
12. J. Bergstra, Y. Bengio, Random Search for Hyper-Parameter Optimization, Journal of Machine Learning Research, Vol. 13, 2012, pp. 281-305.
13. H. Kimura, S. Kobayashi, Reinforcement learning for continuous action using stochastic gradient ascent, Proceedings of the 5th Intelligent Autonomous Systems (IAS-5), 1998.
More informationVIDEO KEY FRAME DETECTION BASED ON THE RESTRICTED BOLTZMANN MACHINE
Journal of Appled Mathematcs and Computatonal Mechancs 2015, 14(3), 49-58 www.amcm.pcz.pl p-issn 2299-9965 DOI: 10.17512/amcm.2015.3.05 e-issn 2353-0588 VIDEO KEY FRAME DETECTION BASED ON THE RESTRICTED
More informationCOMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS
Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS
More informationPERFORMANCE COMPARISON BETWEEN BACK PROPAGATION, RPE AND MRPE ALGORITHMS FOR TRAINING MLP NETWORKS
PERFORMANCE COMPARISON BETWEEN BACK PROPAGATION, RPE AND MRPE ALGORITHMS FOR TRAINING MLP NETWORKS Mohd Yusoff Mashor School of Electrcal and Electronc Engneerng, Unversty Scence Malaysa, Pera Branch Campus,
More informationThe Chaotic Robot Prediction by Neuro Fuzzy Algorithm (2) = θ (3) = ω. Asin. A v. Mana Tarjoman, Shaghayegh Zarei
The Chaotc Robot Predcton by Neuro Fuzzy Algorthm Mana Tarjoman, Shaghayegh Zare Abstract In ths paper an applcaton of the adaptve neurofuzzy nference system has been ntroduced to predct the behavor of
More informationSEASONAL TIME SERIES PREDICTION WITH ARTIFICIAL NEURAL NETWORKS AND LOCAL MEASURES. R. Pinto, S. Cavalieri
SEASONAL TIME SERIES PREDICTION WITH ARTIFICIAL NEURAL NETWORKS AND LOCAL MEASURES R. Pnto, S. Cavaler Unversty of Bergamo, Department of Industral Engneerng, Italy vale Marcon, 5. I-24044 Dalmne BG Ph.:
More informationHopfield networks and Boltzmann machines. Geoffrey Hinton et al. Presented by Tambet Matiisen
Hopfeld networks and Boltzmann machnes Geoffrey Hnton et al. Presented by Tambet Matsen 18.11.2014 Hopfeld network Bnary unts Symmetrcal connectons http://www.nnwj.de/hopfeld-net.html Energy functon The
More informationScroll Generation with Inductorless Chua s Circuit and Wien Bridge Oscillator
Latest Trends on Crcuts, Systems and Sgnals Scroll Generaton wth Inductorless Chua s Crcut and Wen Brdge Oscllator Watcharn Jantanate, Peter A. Chayasena, and Sarawut Sutorn * Abstract An nductorless Chua
More informationThe Order Relation and Trace Inequalities for. Hermitian Operators
Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence
More informationHongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k)
ISSN 1749-3889 (prnt), 1749-3897 (onlne) Internatonal Journal of Nonlnear Scence Vol.17(2014) No.2,pp.188-192 Modfed Block Jacob-Davdson Method for Solvng Large Sparse Egenproblems Hongy Mao, College of
More informationLinear Feature Engineering 11
Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19
More informationSpeeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem
H.K. Pathak et. al. / (IJCSE) Internatonal Journal on Computer Scence and Engneerng Speedng up Computaton of Scalar Multplcaton n Ellptc Curve Cryptosystem H. K. Pathak Manju Sangh S.o.S n Computer scence
More informationUnified Subspace Analysis for Face Recognition
Unfed Subspace Analyss for Face Recognton Xaogang Wang and Xaoou Tang Department of Informaton Engneerng The Chnese Unversty of Hong Kong Shatn, Hong Kong {xgwang, xtang}@e.cuhk.edu.hk Abstract PCA, LDA
More informationJournal of Chemical and Pharmaceutical Research, 2014, 6(5): Research Article
Avalable onlne www.jocpr.com Journal of Chemcal and Pharmaceutcal Research, 014, 6(5):1683-1688 Research Artcle ISSN : 0975-7384 CODEN(USA) : JCPRC5 Multple mode control based on VAV ar condtonng system
More informationFundamentals of Neural Networks
Fundamentals of Neural Networks Xaodong Cu IBM T. J. Watson Research Center Yorktown Heghts, NY 10598 Fall, 2018 Outlne Feedforward neural networks Forward propagaton Neural networks as unversal approxmators
More informationMulti-step-ahead Method for Wind Speed Prediction Correction Based on Numerical Weather Prediction and Historical Measurement Data
Journal of Physcs: Conference Seres PAPER OPEN ACCESS Mult-step-ahead Method for Wnd Speed Predcton Correcton Based on Numercal Weather Predcton and Hstorcal Measurement Data To cte ths artcle: Han Wang
More informationA New Scrambling Evaluation Scheme based on Spatial Distribution Entropy and Centroid Difference of Bit-plane
A New Scramblng Evaluaton Scheme based on Spatal Dstrbuton Entropy and Centrod Dfference of Bt-plane Lang Zhao *, Avshek Adhkar Kouch Sakura * * Graduate School of Informaton Scence and Electrcal Engneerng,
More informationAN IMPROVED PARTICLE FILTER ALGORITHM BASED ON NEURAL NETWORK FOR TARGET TRACKING
AN IMPROVED PARTICLE FILTER ALGORITHM BASED ON NEURAL NETWORK FOR TARGET TRACKING Qn Wen, Peng Qcong 40 Lab, Insttuton of Communcaton and Informaton Engneerng,Unversty of Electronc Scence and Technology
More informationPulse Coded Modulation
Pulse Coded Modulaton PCM (Pulse Coded Modulaton) s a voce codng technque defned by the ITU-T G.711 standard and t s used n dgtal telephony to encode the voce sgnal. The frst step n the analog to dgtal
More informationHierarchical State Estimation Using Phasor Measurement Units
Herarchcal State Estmaton Usng Phasor Measurement Unts Al Abur Northeastern Unversty Benny Zhao (CA-ISO) and Yeo-Jun Yoon (KPX) IEEE PES GM, Calgary, Canada State Estmaton Workng Group Meetng July 28,
More informationPCA-PSO-BP Neural Network Application in IDS Lan SHI a, YanLong YANG b JanHui LV c
Internatonal Power, Electroncs and Materals Engneerng Conference (IPEMEC 2015) PCA-PSO-BP Neural Network Applcaton n IDS Lan SHI a, YanLong YANG b JanHu LV c College of Informaton Scence and Engneerng,
More informationMLE and Bayesian Estimation. Jie Tang Department of Computer Science & Technology Tsinghua University 2012
MLE and Bayesan Estmaton Je Tang Department of Computer Scence & Technology Tsnghua Unversty 01 1 Lnear Regresson? As the frst step, we need to decde how we re gong to represent the functon f. One example:
More informationTransient Stability Assessment of Power System Based on Support Vector Machine
ransent Stablty Assessment of Power System Based on Support Vector Machne Shengyong Ye Yongkang Zheng Qngquan Qan School of Electrcal Engneerng, Southwest Jaotong Unversty, Chengdu 610031, P. R. Chna Abstract
More informationA Self-Adaptive Prediction Model for Dynamic Pricing and Inventory Control
A Self-Adaptve Predcton Model for Dynamc Prcng and Inventory Control Hejun Lang* Shangha Unversty of Fnance and Economcs, School of Informaton Management and Engneerng, Shangha 00, Chna *E-mal: langhejun@fudan.edu.cn
More informationLogistic Regression Maximum Likelihood Estimation
Harvard-MIT Dvson of Health Scences and Technology HST.951J: Medcal Decson Support, Fall 2005 Instructors: Professor Lucla Ohno-Machado and Professor Staal Vnterbo 6.873/HST.951 Medcal Decson Support Fall
More informationNUMERICAL DIFFERENTIATION
NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the
More informationSimulated Power of the Discrete Cramér-von Mises Goodness-of-Fit Tests
Smulated of the Cramér-von Mses Goodness-of-Ft Tests Steele, M., Chaselng, J. and 3 Hurst, C. School of Mathematcal and Physcal Scences, James Cook Unversty, Australan School of Envronmental Studes, Grffth
More informationDiscretization of Continuous Attributes in Rough Set Theory and Its Application*
Dscretzaton of Contnuous Attrbutes n Rough Set Theory and Its Applcaton* Gexang Zhang 1,2, Lazhao Hu 1, and Wedong Jn 2 1 Natonal EW Laboratory, Chengdu 610036 Schuan, Chna dylan7237@sna.com 2 School of
More informationChapter 9: Statistical Inference and the Relationship between Two Variables
Chapter 9: Statstcal Inference and the Relatonshp between Two Varables Key Words The Regresson Model The Sample Regresson Equaton The Pearson Correlaton Coeffcent Learnng Outcomes After studyng ths chapter,
More information