Using Artificial Neural Networks and Support Vector Regression to Model the Lyapunov Exponent
Adam Maus*
April 3, 2009

Abstract: Finding the salient patterns in chaotic data has been the holy grail of Chaos Theory. Examples of chaotic data include the fluctuations of the stock market, weather, and many other natural systems. Real-world data has proven to be extremely difficult to predict due to its high dimensionality and the potential for noise. It has been shown that artificial neural networks are able to accurately calculate the Lyapunov exponent, a feature that determines the divergence of two initially close trajectories and thus chaos. A variation of the support vector machine called support vector regression is used in function approximation, and applying this tool to chaotic data may yield another model that can also be used in the prediction of the Lyapunov exponent. Since support vector machines have been found to arrive at a prediction much faster than the artificial neural network, it may prove to be the model to use in time-sensitive applications.

Introduction

Nonlinear prediction attempts to describe the inherent qualities of a given dynamical system using mathematical models. Models can range in complexity from a simple set of equations, such as the logistic equation used to model the carrying capacity of a species in an ecosystem [1], to the creation of artificial neural network models with hundreds of parameters used in forecasting exchange rates [2]. These dynamical systems, as well as the rise and fall of populations, the stock market, and the weather, exhibit chaos, where there is a sensitive dependence on initial conditions. Since two initially close conditions will diverge exponentially fast, chaotic dynamics pose a formidable challenge for long-term forecasting.

* University of Wisconsin-Madison. Contact Email: amaus@wisc.edu

Aside from long-term forecasts, one goal of nonlinear prediction is to create a mathematical model that accurately represents data taken from a system. The hope is that by creating an accurate model, underlying features in the system will be replicated as well.
One such feature, the largest Lyapunov exponent (LE), is valuable in determining whether or not a given system is chaotic. Its value describes the average rate at which predictability is lost. The largest Lyapunov exponent is usually geometrically averaged along the orbit and is always a real number. This Lyapunov exponent is calculated numerically using the procedure in [3]. When calculating the LE for a model, we run fixed-length forecasts using the models created by the artificial neural network and support vector regression model on training data. These forecasts are fixed at 5000 time steps; 5000 was chosen because it gives a good approximation of the true LE and is
quicker to calculate than the one million time steps needed to find the true LE of the models and systems being studied. The forecast from a model can converge to a number of different attractors that represent varying LEs. A forecast that converges exponentially fast to a fixed point will have an LE with a value of negative infinity. The limit cycle is a case that is often hard to distinguish without the calculation of the LE because the data may be following a strict path with a large number of periods or steps before repeating. The LE of a limit cycle ranges between negative infinity and 0. An LE of zero denotes a system such as x_{n+1} = 3 x_n (1 - x_n), requiring an infinite number of points to converge to an attractor. Dynamical systems such as the Hénon map [4] have a positive LE and converge to what are called strange attractors. These attractors appear to follow an orbit, but it never repeats. The Hénon map, whose form is

x_{n+1} = 1 - 1.4 x_n^2 + 0.3 x_{n-1},

can be initialized with x = (1.2, 1.2) and produces a strange attractor plotted by x_n versus x_{n-1} as in Figure 1. The LE calculated for the Hénon map using 5000 time steps equals approximately 0.4093, and the goal of this paper is to describe how neural networks and support vector regression models can be used to calculate the LE for this map and other similar dynamical systems.

Models Studied

Artificial neural networks have been used in the past for modeling time series [5], nonlinear prediction [6], and analyzing the underlying features of data [7]. Hornik et al. [8] claimed that these models were universal approximators with the ability to represent any function with an arbitrary number of hidden units. This model, seen schematically in Fig. 2, uses a single layer of hidden units to perform function approximation and takes the form

x̂_k = Σ_{i=1}^{n} b_i tanh( a_{i0} + Σ_{j=1}^{d} a_{ij} x_{k-j} ),

where a is an n × d matrix of coefficients and b is a vector whose length is the number of neurons n. The a matrix represents the connection strengths of the neurons, and the b vector is used to modulate the strength of a particular neuron. In this model, a_{i0} represents the bias term that is commonly used in neural network architectures.
FIG. 1: Strange attractor of the Hénon map

FIG. 2: Single-layer, feed-forward neural network
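The attractor of Figure 1 and a 5000-step LE estimate can be reproduced with a short numerical sketch. This is not the paper's code; it uses the standard two-trajectory method, iterating a reference orbit and a perturbed companion and renormalizing their separation at every step, so the averaged log stretching approximates the largest LE.

```python
import math

def henon_le(n_steps=5000, eps=1e-8):
    """Estimate the largest Lyapunov exponent of the Henon map by
    tracking two initially close trajectories in (x_n, x_{n-1}) space
    and renormalizing their separation at every step."""
    x, y = 0.1, 0.1              # reference orbit: x_n and x_{n-1}
    xp, yp = 0.1, 0.1 + eps      # perturbed companion orbit
    total = 0.0
    for _ in range(n_steps):
        x, y = 1.0 - 1.4 * x * x + 0.3 * y, x
        xp, yp = 1.0 - 1.4 * xp * xp + 0.3 * yp, xp
        d = math.hypot(x - xp, y - yp)   # current separation
        total += math.log(d / eps)       # log stretching this step
        # Pull the companion back to distance eps along the same direction.
        xp = x + eps * (xp - x) / d
        yp = y + eps * (yp - y) / d
    return total / n_steps

print(round(henon_le(), 3))  # close to the known value of roughly 0.42
```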
The neural network uses the d last points from a scalar time series x in the prediction of the next step k. In this model, d is considered the dimension or embedding of the neural network. It represents the number of inputs or previous time lags used to predict each subsequent value in the time series. The weights in a and b are updated using a variant of the simulated annealing algorithm where the temperature is held constant, effectively removing that term. For each coefficient, a Gaussian neighborhood is searched whose size is changed based on progress in training. The coefficients are chosen to minimize the mean-square prediction error,

e = Σ_{k=1}^{N} (x̂_k - x_k)^2.

The other model studied, the Support Vector Machine, has traditionally been used in classification problems; in 1996 Vapnik et al. [9] extended this model to perform robust function approximation that is insensitive to small changes in the data. The LibSVM Toolbox for Support Vector Machines by Chang and Lin [10] was used in this project. Vapnik claimed that ε-insensitive Support Vector Regression would provide effective estimation of functions [11], and SVR has also been used in forecasting stock price changes with the help of analysts [12]. As Smola stated in [13], SVR is used to approximate a function f(x) that has at most ε deviation from the targets x_k. This effectively removes certain data points from the model's approximation of the data. A function is found that approximates just a subset of the actual data, and because of the chaotic nature of the data being studied, using a subset of points should change the model's ability to replicate the chaotic dynamics of the system being studied. However, we found that it performs just as well as neural networks in approximating the LE. The function that the SVR model produces takes the form

f(x) = ⟨w, x⟩ + b,

where ⟨w, x⟩ denotes the dot product of w and x. Calculation of w and b involves minimizing ||w|| through a convex optimization problem [13] that minimizes a Lagrangian function [14]. The computation of b can be performed in different ways.
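The network equation and the constant-temperature, Gaussian-neighborhood training described above can be sketched as follows. The network sizes, stand-in training data, and step-size schedule here are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_predict(a, b, history):
    """One-step prediction from the last d values of a scalar series.
    a has shape (n, d + 1): column 0 holds the biases a_i0, columns
    1..d the connection strengths; b has one weight per neuron."""
    d = a.shape[1] - 1
    lags = np.asarray(history)[-d:][::-1]        # x_{k-1}, ..., x_{k-d}
    return float(b @ np.tanh(a[:, 0] + a[:, 1:] @ lags))

def mse(a, b, series, d):
    """Mean-square one-step prediction error over the training series."""
    errs = [(nn_predict(a, b, series[:k]) - series[k]) ** 2
            for k in range(d, len(series))]
    return sum(errs) / len(errs)

# Constant-temperature annealing variant: perturb the coefficients in a
# Gaussian neighborhood, keep the change only if the error drops, and
# adapt the neighborhood size based on progress.
series = np.sin(0.5 * np.arange(200))            # stand-in training data
n, d = 4, 2
a, b = rng.normal(size=(n, d + 1)), rng.normal(size=n)
best = mse(a, b, series, d)
sigma = 0.5
for _ in range(500):
    a2 = a + rng.normal(scale=sigma, size=a.shape)
    b2 = b + rng.normal(scale=sigma, size=b.shape)
    e = mse(a2, b2, series, d)
    if e < best:
        a, b, best = a2, b2, e
        sigma *= 1.05                            # widen on progress
    else:
        sigma *= 0.999                           # otherwise shrink slowly
```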
LibSVM computed b by finding the average value of the support vectors with Lagrange multipliers between 0 and the user-defined variable C. Once a model was produced, the forecast of a point x_p took the form

x̂_p = b + Σ_{i=1}^{s} w_i K(x_i, x_p),

where s represents the number of support vectors used in the model. In the LibSVM toolbox, this was automatically calculated. K represents the kernel used in the model and the approximation. In this experiment, K took the form of a Gaussian radial basis function with an adjustable parameter σ. The training data and forecast were initialized with data from the Hénon map. Two time series were used to train the model. One time series served as training data for the model; the other served as the test time series that predicted the forecasting error. After obtaining a model with a low forecasting error compared to the system being studied, forecasting was used to calculate the LE. During the training of the SVR model, we optimize the model on the set of
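The paper used the LibSVM toolbox directly; the sketch below instead uses scikit-learn's SVR class, which wraps the same LIBSVM code, with illustrative hyperparameter values (not the paper's). It fits an RBF-kernel SVR to lagged Hénon data and then runs the kind of free-running forecast described above, feeding each prediction back in as the next input.

```python
import numpy as np
from sklearn.svm import SVR  # scikit-learn's SVR wraps LIBSVM

def henon_series(n, x0=0.1, x1=0.1):
    """Generate n points of the Henon map as a scalar time series."""
    xs = [x0, x1]
    for _ in range(n - 2):
        xs.append(1.0 - 1.4 * xs[-1] ** 2 + 0.3 * xs[-2])
    return np.array(xs)

d = 2                                   # embedding dimension for the Henon map
series = henon_series(400)
X = np.array([series[k - d:k] for k in range(d, len(series))])
y = series[d:]

# gamma plays the role of the Gaussian kernel width parameter sigma.
model = SVR(kernel="rbf", C=1.0, epsilon=0.01, gamma=1.0)
model.fit(X, y)

# Free-running forecast: each prediction becomes part of the next input.
window = list(series[-d:])
forecast = []
for _ in range(100):
    nxt = model.predict(np.array([window]))[0]
    forecast.append(nxt)
    window = window[1:] + [nxt]
```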
data, but there are a number of parameters to train to create a low test error. The size of ε, the cost C of vectors on the wrong side of the function, the kernel parameters, etc., must all be chosen. Since each parameter can be any real number, a training schedule was created that was similar to the neural network's training. The trained parameters C, ε, σ, and p were initialized with fixed starting values and then varied according to a Gaussian neighborhood whose size depends on the progress being made in training the model. To objectively choose one model over another, the mean-square prediction error was used to compare the forecast of the SVR and the test data, with one caveat. The size of the test time series used was effectively shortened by a weighting scale created so the model focused on replicating the first points in the test time series:

e′ = Σ_{k=1}^{N} ψ_k (x̂_k - x_k)^2.

The ψ_1 was chosen arbitrarily for this project and set to seven, and for each additional k, ψ_k was halved, though further research may find that ψ can be optimized for the system being studied. This scheme was created because of the chaotic nature of both the model created and the data being modeled; sensitive dependence on initial conditions would cause the two sets of data to be incomparable after just tens of steps. If we tried to minimize the mean-square error between the entire forecast and the test data, it eventually led to a model that found the average between the high and low peaks in the test data.

Numerical Results

After training the SVR model on 400 points, the LE was calculated and found to be approximately 0.488 for the Hénon map, with a test error on the order of 10^-5. The strange attractor the SVR model produced was visually indistinguishable from the actual one. It is important to note for all cases that different training instances lead to different results, and these were chosen because they had the lowest mean-square errors found. The neural network was trained for one million epochs with 400 points taken from the Hénon map. After training, the LE of the model was calculated.
The LE was found to be approximately 0.3843 with a mean-square error on the order of 10^-5. Figure 3 shows the strange attractor produced by this network; as seen, the neural network had some trouble reproducing the attractor for the Hénon map exactly, though other better-trained neural networks may perform better. Likewise, the difference between the true LE of the system and that of the neural network may arise from stopping the training too soon. A more accurate model seems to correlate with a more accurate LE of the model relative to the test system.

FIG. 3: The Hénon map attractor reproduced by the neural network
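The weighted comparison used when scoring SVR forecasts (ψ_1 = 7, halved for each additional step) can be sketched in a few lines; the function name is illustrative.

```python
def weighted_error(forecast, test, psi0=7.0):
    """Weighted squared-error sum e' = sum_k psi_k * (xhat_k - x_k)^2,
    with psi_1 = psi0 and each subsequent weight halved, so only the
    first few points of the forecast contribute appreciably."""
    err, psi = 0.0, psi0
    for xhat, x in zip(forecast, test):
        err += psi * (xhat - x) ** 2
        psi *= 0.5
    return err

print(weighted_error([1.0, 1.0], [0.0, 0.0]))  # 7*1 + 3.5*1 = 10.5
```

After about twenty steps the weights fall below 10^-5 of ψ_1, so later divergence between the chaotic forecast and the test data is effectively ignored, which is the point of the scheme.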
Two other systems were studied using the Neural Network (NN) and SVR models: the Logistic map (LM) [15],

x_{n+1} = 4 x_n (1 - x_n),

and the delayed Hénon map (DHM) [16],

x_{n+1} = 1 - 1.6 x_n^2 + 0.1 x_{n-4},

which led to a table of LEs.

Table 1: The LE approximations using 5000 steps in the forecast of each system and model (Actual, NN, and SVR values for the LM and DHM)

The neural network and SVR model both produced strange attractors for the delayed Hénon map (Fig. 4 and Fig. 5, respectively) that were visually similar to the actual strange attractor (Fig. 6). The neural network and SVR model achieved mean-square errors on the order of 10^-6. The SVR model did not reconstruct the exact structure of the strange attractor for the Logistic map, but this could be due to inadequate training. The neural network was able to reconstruct the attractor and the LE with an error on the order of 10^-6. The attractor of the Logistic map and the attractor produced by the SVR model are seen in Fig. 7 and 8, respectively.

FIG. 4: The delayed Hénon map attractor reproduced by the neural network

FIG. 5: The delayed Hénon map attractor reproduced by the SVR model

FIG. 6: Strange attractor of the delayed Hénon map

FIG. 7: Strange attractor of the Logistic map
FIG. 8: The Logistic map attractor reproduced by the SVR model

Though both models produce mean-square errors, it is not fair to compare them using these values. The weighting on the errors in the SVR model was different from that of the neural network; these objective functions were simply used to train each model. If a comparison were needed, a more accurate measurement would be to identify the difference between the true Lyapunov exponent of the system being studied and the Lyapunov exponent produced by each model.

Discussion and Conclusion

Artificial neural networks serve as excellent models for reconstructing the data and studying the LE for the Hénon map. If we applied these models to data taken from a system whose LE is unknown, they would perform well given that they are adequately trained. Certain measures could be taken to ensure that the correct LE is found, including cross validation and ensembles of neural networks, among other ways to ensure that the neural network does not overfit the data it is given. One drawback of this neural network model is that it requires many more calculations than SVR, and unlike SVR (prior to training the parameters), neural networks often fail to find the global optimum, or absolute best way to model the data; it is therefore imperative to train many different networks on the same data. Support Vector Regression performed much better than expected. Mattera et al. found that reconstructing the strange attractor and the chaotic dynamics for a given system using SVR is not always possible [17], but by using a weighting system on the forecast of the data, it is possible to train the parameters for these models to produce an accurate representation of the data. The number of free parameters created a challenge for trying to calculate the LE; a training schedule was created to alter these parameters and reconstruct both the chaotic dynamics and the strange attractor. Like the neural network, using this type of training causes the model parameters to fall into local optima; one way to overcome this is to use an ensemble of SVR models to find an average Lyapunov exponent.
One important aspect of time series analysis involves determining the embedding dimension for a model. For the attractor to be reconstructed, it must be embedded in an appropriate dimension so it can be unfolded, so that one preimage does not produce two different images. A sufficient condition for the embedding dimension was provided by Takens [18], who showed that complete unfolding is guaranteed if the time-delayed embedding space has a dimension at least one greater than twice the dimension of the original state space. For the purposes of this project, the data was embedded in a space that was guaranteed to unfold it; by looking at the time-delayed equations of each system, the embedding could be deduced. For the Hénon map the embedding was 2, for the delayed Hénon map, 4, and for the Logistic map, 1. For an unknown system, the embedding can be calculated using an algorithm known as false nearest neighbors [19].
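Both ideas can be sketched compactly (the function names are illustrative, not the paper's): constructing a time-delay embedding, and the false-nearest-neighbor test, which flags an embedding dimension as too small when points that are close neighbors in d dimensions are torn far apart by the next delay coordinate.

```python
import numpy as np

def delay_embed(series, d):
    """Embed a scalar time series in d dimensions with unit delays:
    row k is (x_k, x_{k-1}, ..., x_{k-d+1})."""
    x = np.asarray(series)
    return np.column_stack([x[d - 1 - j: len(x) - j] for j in range(d)])

def fnn_fraction(series, d, rtol=10.0):
    """Fraction of false nearest neighbors at embedding dimension d:
    a nearest neighbor in d dimensions counts as false if adding one
    more delay coordinate stretches the pair by more than rtol."""
    emb = delay_embed(series, d + 1)      # last column is the extra lag
    base, extra = emb[:, :d], emb[:, d]
    false = 0
    for i in range(len(base)):
        dists = np.linalg.norm(base - base[i], axis=1)
        dists[i] = np.inf                 # exclude the point itself
        j = int(np.argmin(dists))         # nearest neighbor in d dims
        if abs(extra[i] - extra[j]) > rtol * dists[j]:
            false += 1
    return false / len(base)

# For Henon data the fraction should drop sharply once d reaches 2.
xs = [0.1, 0.1]
for _ in range(500):
    xs.append(1.0 - 1.4 * xs[-1] ** 2 + 0.3 * xs[-2])
print(fnn_fraction(xs, 1), fnn_fraction(xs, 2))
```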
Support Vector Regression has been shown to be useful in function approximation, forecasting, and reconstruction of chaotic dynamics for a given system. Support Vector Regression creates a model that uses far fewer points than the artificial neural network. Training the SVR model parameters takes less time than the neural network, and in some cases it arrives at a better approximation of the LE for the system being studied. SVR serves as another model to approximate the LE for a dynamical system, and it would be interesting to test SVR more rigorously on continuous systems and real-world data.

References

[1] E. Allman and J. Rhodes, Mathematical Models in Biology: An Introduction (Cambridge University Press, New York, 2004).
[2] L. Yu, S. Wang, and K. K. Lai, Foreign-Exchange-Rate Forecasting with Artificial Neural Networks (Springer, New York, 2007).
[3] J. C. Sprott, Chaos and Time-Series Analysis (Oxford University Press, New York, 2003).
[4] M. Hénon, Comm. Math. Phys. 50, 69 (1976).
[5] S. Tronci, M. Giona, and R. Baratti, Neurocomputing 55, 3-4 (2003).
[6] G. Zhang, B. Patuwo, and M. Hu, Int. J. Forecasting 14 (1998).
[7] J. Principe, A. Rathie, and J. Kuo, Int. J. Bifurcation Chaos 2, 4 (1992).
[8] K. Hornik, M. Stinchcombe, and H. White, Neural Networks 2, 5 (1989).
[9] V. Vapnik, S. E. Golowich, and A. Smola, in Advances in Neural Information Processing Systems, San Mateo, 1996, edited by M. Mozer, M. Jordan, and T. Petsche, p. 55.
[10] C.-C. Chang and C.-J. Lin, LibSVM: a library for support vector machines (2001).
[11] V. Vapnik, The Nature of Statistical Learning Theory (Springer, New York, 1995).
[12] Z. Zhang, C. Shi, S. Zhang, and Z. Shi, in Advances in Neural Networks - ISNN 2006, Chengdu, 2006, edited by J. Wang, Z. Yi, J. Zurada, B. Liu, and H. Yin.
[13] A. Smola and B. Schölkopf, Stat. Comput. 14, 3 (2004).
[14] S. Haykin, Neural Networks: A Comprehensive Foundation (Prentice Hall, New Jersey, 1999).
[15] R. May, Nature 261 (1976).
[16] J. C. Sprott, Electron. J. Theor. Phys. 3, 12 (2006).
[17] D. Mattera and S. Haykin, in Advances in Kernel Methods: Support Vector Learning, edited by B. Schölkopf, C. J. C. Burges, and A. J. Smola, p. 211.
[18] F. Takens, Detecting strange attractors in turbulence (Springer, Berlin, 1981).
[19] H. Abarbanel, Analysis of Observed Chaotic Data (Springer, New York, 1996).
More informationLecture 3: Dual problems and Kernels
Lecture 3: Dual problems and Kernels C4B Machne Learnng Hlary 211 A. Zsserman Prmal and dual forms Lnear separablty revsted Feature mappng Kernels for SVMs Kernel trck requrements radal bass functons SVM
More informationPrediction of Solid Paraffin Precipitation Using Solid Phase Equation of State
Predton of old Paraffn Preptaton Usng old Phase Equaton of tate Proeedngs of European Congress of Chemal Engneerng (ECCE-6) Copenhagen, 16- eptember 7 Predton of old Paraffn Preptaton Usng old Phase Equaton
More information2016 Wiley. Study Session 2: Ethical and Professional Standards Application
6 Wley Study Sesson : Ethcal and Professonal Standards Applcaton LESSON : CORRECTION ANALYSIS Readng 9: Correlaton and Regresson LOS 9a: Calculate and nterpret a sample covarance and a sample correlaton
More information425. Calculation of stresses in the coating of a vibrating beam
45. CALCULAION OF SRESSES IN HE COAING OF A VIBRAING BEAM. 45. Calulaton of stresses n the oatng of a vbratng beam M. Ragulsks,a, V. Kravčenken,b, K. Plkauskas,, R. Maskelunas,a, L. Zubavčus,b, P. Paškevčus,d
More informationNatural Language Processing and Information Retrieval
Natural Language Processng and Informaton Retreval Support Vector Machnes Alessandro Moschtt Department of nformaton and communcaton technology Unversty of Trento Emal: moschtt@ds.untn.t Summary Support
More informationLinear Classification, SVMs and Nearest Neighbors
1 CSE 473 Lecture 25 (Chapter 18) Lnear Classfcaton, SVMs and Nearest Neghbors CSE AI faculty + Chrs Bshop, Dan Klen, Stuart Russell, Andrew Moore Motvaton: Face Detecton How do we buld a classfer to dstngush
More informationPsychology 282 Lecture #24 Outline Regression Diagnostics: Outliers
Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.
More informationRadial-Basis Function Networks
Radal-Bass uncton Networs v.0 March 00 Mchel Verleysen Radal-Bass uncton Networs - Radal-Bass uncton Networs p Orgn: Cover s theorem p Interpolaton problem p Regularzaton theory p Generalzed RBN p Unversal
More informationHidden Markov Models & The Multivariate Gaussian (10/26/04)
CS281A/Stat241A: Statstcal Learnng Theory Hdden Markov Models & The Multvarate Gaussan (10/26/04) Lecturer: Mchael I. Jordan Scrbes: Jonathan W. Hu 1 Hdden Markov Models As a bref revew, hdden Markov models
More informationAPPROXIMATE OPTIMAL CONTROL OF LINEAR TIME-DELAY SYSTEMS VIA HAAR WAVELETS
Journal o Engneerng Sene and ehnology Vol., No. (6) 486-498 Shool o Engneerng, aylor s Unversty APPROIAE OPIAL CONROL OF LINEAR IE-DELAY SYSES VIA HAAR WAVELES AKBAR H. BORZABADI*, SOLAYAN ASADI Shool
More information1 Convex Optimization
Convex Optmzaton We wll consder convex optmzaton problems. Namely, mnmzaton problems where the objectve s convex (we assume no constrants for now). Such problems often arse n machne learnng. For example,
More informationELEKTRYKA 2016 Zeszyt 3-4 ( )
ELEKTRYKA 206 Zeszyt 3-4 (239-240) Rok LXII Waldemar BAUER, Jerzy BARANOWSKI, Tomasz DZIWIŃSKI, Paweł PIĄTEK, Marta ZAGÓROWSKA AGH Unversty of Sene and Tehnology, Kraków OUSTALOUP PARALLEL APPROXIMATION
More informationSupport Vector Machines CS434
Support Vector Machnes CS434 Lnear Separators Many lnear separators exst that perfectly classfy all tranng examples Whch of the lnear separators s the best? Intuton of Margn Consder ponts A, B, and C We
More informationPhysics 2B Chapter 17 Notes - Calorimetry Spring 2018
Physs 2B Chapter 17 Notes - Calormetry Sprng 2018 hermal Energy and Heat Heat Capaty and Spe Heat Capaty Phase Change and Latent Heat Rules or Calormetry Problems hermal Energy and Heat Calormetry lterally
More informationPHYSICS 212 MIDTERM II 19 February 2003
PHYSICS 1 MIDERM II 19 Feruary 003 Exam s losed ook, losed notes. Use only your formula sheet. Wrte all work and answers n exam ooklets. he aks of pages wll not e graded unless you so request on the front
More informationNon-linear Canonical Correlation Analysis Using a RBF Network
ESANN' proceedngs - European Smposum on Artfcal Neural Networks Bruges (Belgum), 4-6 Aprl, d-sde publ., ISBN -97--, pp. 57-5 Non-lnear Canoncal Correlaton Analss Usng a RBF Network Sukhbnder Kumar, Elane
More informationSupport Vector Machines
Separatng boundary, defned by w Support Vector Machnes CISC 5800 Professor Danel Leeds Separatng hyperplane splts class 0 and class 1 Plane s defned by lne w perpendcular to plan Is data pont x n class
More informationBrander and Lewis (1986) Link the relationship between financial and product sides of a firm.
Brander and Lews (1986) Lnk the relatonshp between fnanal and produt sdes of a frm. The way a frm fnanes ts nvestment: (1) Debt: Borrowng from banks, n bond market, et. Debt holders have prorty over a
More information3D Numerical Analysis for Impedance Calculation and High Performance Consideration of Linear Induction Motor for Rail-guided Transportation
ADVANCED ELECTROMAGNETICS SYMPOSIUM, AES 13, 19 MARCH 13, SHARJAH UNITED ARAB EMIRATES 3D Numeral Analss for Impedane Calulaton and Hgh Performane Consderaton of Lnear Induton Motor for Ral-guded Transportaton
More informationModule 9. Lecture 6. Duality in Assignment Problems
Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept
More informationClassification as a Regression Problem
Target varable y C C, C,, ; Classfcaton as a Regresson Problem { }, 3 L C K To treat classfcaton as a regresson problem we should transform the target y nto numercal values; The choce of numercal class
More informationDesign and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm
Desgn and Optmzaton of Fuzzy Controller for Inverse Pendulum System Usng Genetc Algorthm H. Mehraban A. Ashoor Unversty of Tehran Unversty of Tehran h.mehraban@ece.ut.ac.r a.ashoor@ece.ut.ac.r Abstract:
More informationA computer-aided optimization method of bending beams
WSES Internatonal Conferene on ENGINEERING MECHNICS, STRUCTURES, ENGINEERING GEOLOGY (EMESEG '8), Heraklon, Crete Island, Greee, July 22-24, 28 omputer-aded optmzaton method of bendng beams CRMEN E. SINGER-ORCI
More informationWeek 5: Neural Networks
Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple
More informationGEL 446: Applied Environmental Geology
GE 446: ppled Envronmental Geology Watershed Delneaton and Geomorphology Watershed Geomorphology Watersheds are fundamental geospatal unts that provde a physal and oneptual framewor wdely used by sentsts,
More informationTenth Order Compact Finite Difference Method for Solving Singularly Perturbed 1D Reaction - Diffusion Equations
Internatonal Journal of Engneerng & Appled Senes (IJEAS) Vol.8, Issue (0)5- Tent Order Compat Fnte Dfferene Metod for Solvng Sngularl Perturbed D Reaton - Dffuson Equatons Faska Wondmu Galu, Gemes Fle
More informationSupport Vector Machines CS434
Support Vector Machnes CS434 Lnear Separators Many lnear separators exst that perfectly classfy all tranng examples Whch of the lnear separators s the best? + + + + + + + + + Intuton of Margn Consder ponts
More informationAn Iterative Modified Kernel for Support Vector Regression
An Iteratve Modfed Kernel for Support Vector Regresson Fengqng Han, Zhengxa Wang, Mng Le and Zhxang Zhou School of Scence Chongqng Jaotong Unversty Chongqng Cty, Chna Abstract In order to mprove the performance
More informationVQ widely used in coding speech, image, and video
at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng
More informationGeometric Clustering using the Information Bottleneck method
Geometr Clusterng usng the Informaton Bottlenek method Susanne Stll Department of Physs Prneton Unversty, Prneton, NJ 08544 susanna@prneton.edu Wllam Balek Department of Physs Prneton Unversty, Prneton,
More informationRelevance Vector Machines Explained
October 19, 2010 Relevance Vector Machnes Explaned Trstan Fletcher www.cs.ucl.ac.uk/staff/t.fletcher/ Introducton Ths document has been wrtten n an attempt to make Tppng s [1] Relevance Vector Machnes
More informationThe Similar Structure Method for Solving Boundary Value Problems of a Three Region Composite Bessel Equation
The Smlar Struture Method for Solvng Boundary Value Problems of a Three Regon Composte Bessel Equaton Mngmng Kong,Xaou Dong Center for Rado Admnstraton & Tehnology Development, Xhua Unversty, Chengdu 69,
More informationCSE 252C: Computer Vision III
CSE 252C: Computer Vson III Lecturer: Serge Belonge Scrbe: Catherne Wah LECTURE 15 Kernel Machnes 15.1. Kernels We wll study two methods based on a specal knd of functon k(x, y) called a kernel: Kernel
More informationChapter 6 Support vector machine. Séparateurs à vaste marge
Chapter 6 Support vector machne Séparateurs à vaste marge Méthode de classfcaton bnare par apprentssage Introdute par Vladmr Vapnk en 1995 Repose sur l exstence d un classfcateur lnéare Apprentssage supervsé
More informationA Robust Method for Calculating the Correlation Coefficient
A Robust Method for Calculatng the Correlaton Coeffcent E.B. Nven and C. V. Deutsch Relatonshps between prmary and secondary data are frequently quantfed usng the correlaton coeffcent; however, the tradtonal
More informationKey words: path synthesis, joint clearances, Lagrange s equation, Differential evaluation (DE), optimization.
Rub Mshra, T.K.Naskar, Sanb Ahara / nternatonal Journal of Engneerng Researh and Applatons (JERA) SSN: 8-96 www.era.om Vol., ssue, Januar -Februar 0, pp.9-99 Snthess of oupler urve of a Four Bar Lnkage
More information