Multi-Step-Ahead Prediction of Stock Price Using a New Architecture of Neural Networks
Journal of Computer & Robotics

Mohammad Talebi Motlagh*, Hamid Khaloozadeh
Department of Systems and Control, Industrial Control Center of Excellence, K.N. Toosi University of Technology, Tehran, Iran
Received December 2014; accepted 8 January 2015

Abstract

Modelling and forecasting the stock market is a challenging task for economists and engineers, since the market has a dynamic structure and nonlinear characteristics. This nonlinearity affects the efficiency of the price characteristics. An Artificial Neural Network (ANN) is a proper way to model this nonlinearity, and ANNs have been used successfully in one-step-ahead and multi-step-ahead prediction of different stock prices. Several factors, such as the input variables, the preparation of the data sets, the network architecture and the training procedure, have a huge impact on the accuracy of a neural network prediction. The purpose of this paper is to predict multi-step-ahead prices of the stock market and to derive the method, based on Recurrent Neural Networks (RNN), Real-Time Recurrent Learning (RTRL) networks and the Nonlinear Autoregressive model with exogenous input (NARX). The model is trained and tested with Tehran Securities Exchange data.

Keywords: stock price; neural network; prediction; multi-step-ahead prediction.

1. Introduction

Due to the absence of accurate information influencing the stock price, linear regression models such as ARIMA are unable to provide a useful prediction of the price. After 1980 there was a turnover in the field of time series prediction, when it became clear that this type of prediction is a suitable application for an ANN. Over the last few decades, ANNs have become one of the successful techniques used for modelling stock market behaviour. In stock trading it is very desirable that models provide an accurate multi-step-ahead prediction. A simple neural network is unable to provide a reliable long-term prediction [1]. Lately, RNNs have attracted much attention for multi-step-ahead prediction.
Due to their dynamic nature, RNNs have been applied successfully to a wide variety of problems, such as financial time series [2], peak power load forecasting [3] and stream-flow forecasting [4]. An architectural approach to RNNs with embedded memory, the NARX network, shows promising qualities for dynamic system applications.

* Corresponding author. Email: m_talebi@outlook.com

Studies show that
NARX networks are often much better at discovering long-term dependencies than RNNs [5]. Most of the presented neural networks are static and can only simulate the short-term memory structures within a process; consequently, some characteristics cannot be modelled well. RTRL networks, which contain dynamic internal feedback loops to store information for later use and to enhance the efficiency of learning, provide a better way of modelling [6].

2. Neural Networks

The basic idea in forecasting is that additional information from previously observed values and model outputs will be useful for forecasting future values. When a neural network is used to forecast, the results tend to be closer to the true values, as the model can be iteratively adjusted, with a reduction of errors, based on its performance.

2.1. RNN Networks

To predict the future behaviour of a process, it is necessary to model its dynamics. RNNs are capable of representing arbitrary nonlinear dynamical systems. The RNN shown in Fig. 1 is used in [1] for long-term prediction of the stock market. The architecture of this network uses the error vector as an auxiliary input vector. Each element of the output vector represents a daily price to be predicted; the predicted prices for several days ahead are estimated from the available daily price data. In the training procedure, the prices of the next T days are predicted using the prices of the last N days and the actual estimation errors for the next T days. In the test procedure the error vector is set to zero, since the actual prices of the next T days are not available.

Fig. 1. RNN for long-term prediction

2.2. RTRL Networks

Another RNN, shown in Fig. 2, which uses the RTRL algorithm, is applied in [6] for two-step-ahead prediction of stream flow. This network contains dynamic internal feedback loops that store information for later use.
This enhances the efficiency of learning. As the architecture of this network indicates, the outputs of the hidden layer are fed back to the input of the network. In the aforementioned RNN network, all layers perform a simultaneous mapping $x(t) \to y(t) \to z(t)$. In this network, however, each layer performs a one-step-ahead mapping $x(t) \to y(t+1) \to z(t+2)$.

2.3. NARX Networks

NARX is a useful mathematical model of discrete-time nonlinear systems. Nonlinear systems can be approximated by a NARX network, which is a powerful dynamic model for time series prediction:

$$Y(t) = f\big(X(t-D_X),\dots,X(t-1),X(t),\;Y(t-D_Y),\dots,Y(t-1)\big) \qquad (1)$$

where $X(t)$ and $Y(t)$ represent the input and output of the network at time $t$, $D_X$ and $D_Y$ are the input and output orders, and $f$ is a nonlinear function. The architecture of a NARX network based on a multilayer perceptron consists of $D_X$ antecedent values of the
exogenous input vectors $X(t)$; $D_Y$ previous actual values $Y(t)$, which are tapped-delay inputs or are fed back from the model's output; and a single $n$-step-ahead output $Y(t+n)$. The NARX model for multi-step-ahead prediction can be implemented in many ways, but the simplest appears to be a feedforward neural network with embedded memory, as shown in Fig. 3, with a delayed connection from the output of the second layer to the input. This network is used in [5] for prediction of chaotic time series.

Fig. 2. RTRL for two-step-ahead prediction

3. The proposed architecture

In this section a new neural network architecture, depicted in Fig. 4, is presented for two-step-ahead prediction of stock prices. An earlier version of the proposed architecture was presented in [8]. As the architecture shows, the outputs of the hidden layer and the output errors are used as additional inputs. When the output errors are applied as inputs, as in [1], the network tends to learn the error dynamics during the training procedure. In the validation and test procedures, since the output errors are not available, they are set to zero. When the outputs of the hidden layer are fed back, the network maintains a kind of state, which enables it to perform tasks such as sequence prediction that are beyond the power of a standard multilayer perceptron. It also enhances the efficiency of learning.

3.1. Data Normalization

This process normalizes the minimum and maximum values of each data series to a fixed range. Because the sigmoid function is used as the transfer function of the network, the input data need to lie in $[-1, 1]$. The log-return is used for normalization here. The rationale is that the movement of the price is more important than its absolute value: whether it is going up or down, and how large the movement is in percentage terms. The log-return is defined as

$$X_{\text{normalized}} = \log\frac{X_t}{X_{t-1}} \qquad (2)$$

Fig. 3.
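The log-return transform above can be sketched in a few lines of Python; the price values below are hypothetical, for illustration only:

```python
import math

def log_returns(prices):
    """Map a price series to log-returns: r_t = log(p_t / p_{t-1})."""
    return [math.log(p_now / p_prev)
            for p_prev, p_now in zip(prices, prices[1:])]

closing = [100.0, 102.0, 101.0, 105.0]  # hypothetical daily closing prices
returns = log_returns(closing)          # one return per consecutive pair
```

For small daily moves the log-return is close to the percentage change, which keeps the inputs well inside the bounded range required by sigmoid-type transfer functions.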
Architecture of the R-NARX network during training and testing phases in the recurrent mode.

This has been observed previously, in the sense that gradient-descent learning is more effective in NARX networks than in RNN architectures [7].

3.2. Network topology

There are infinitely many ways to construct a neural network. The architecture of a neural network defines its geometrical structure.
Fig. 4. Proposed network for multi-step-ahead prediction

The following factors have to be considered in designing a proper neural network:

- The number of inputs: in this network the following matrix is used as input:

$$X = \begin{bmatrix} X_n & V_n & E & Y \end{bmatrix}^{T} \qquad (3)$$

where $X_n$ and $V_n$ are the normalized closing price and the normalized trading volume, $E$ is the actual estimation error as used in [1], and $Y$ is the output of the hidden layer. The price and the volume of the two past days are used as inputs.

- The number of hidden layers: theoretically, a neural network with one hidden layer and a sufficient number of hidden neurons is capable of approximating any continuous function [9]. In this network one hidden layer is used.

- The number of neurons in each layer: in order to find the optimal number of neurons, the network is run with different numbers of neurons.

- The number of outputs: since the goal is two-step-ahead prediction, the network has two outputs.

3.3. Training Procedure

Training a neural network lets the network learn the patterns in the training data. The goal of this process is to find the best set of weights and biases. For this network, gradient-descent backpropagation is applied to find the desired weights and biases. 70% of the data are used for training, 15% for validation and 15% for testing. The hyperbolic tangent function is used as the transfer function of the hidden layer and a pure linear function (purelin) for the output layer. The same method as used in [1] and described in Section 2.1 is used here for training and testing. Fig. 4 shows the proposed network for two-step-ahead prediction. Specifying how to adjust the weights and biases is the main issue in the training procedure of this network. Since the outputs of the hidden layer and of the output layer are fed back to the input, the adjustment formulas for the weights and biases must be re-derived for this network.
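The input assembly and the 70/15/15 chronological split described above can be sketched as follows; the function names and array sizes are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def build_input(price_n, volume_n, err_fb, hidden_fb):
    """Stack normalized prices, normalized volumes, fed-back output
    errors and fed-back hidden-layer outputs into one input vector."""
    return np.concatenate([price_n, volume_n, err_fb, hidden_fb])

def split_series(data, train=0.70, val=0.15):
    """Chronological 70/15/15 train/validation/test split."""
    n = len(data)
    i, j = int(n * train), int(n * (train + val))
    return data[:i], data[i:j], data[j:]

series = np.arange(100.0)                  # hypothetical normalized series
train, valid, test = split_series(series)  # 70 / 15 / 15 points
# two past days of price and volume, two output errors, three hidden outputs
u = build_input(np.zeros(2), np.zeros(2), np.zeros(2), np.zeros(3))
```

A chronological (rather than shuffled) split preserves the time ordering, which matters for a forecasting task.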
Let $A$ denote the set of input indices $i$ for which $u_i(t)$ is a stock price, $B$ the set of indices for which $u_i(t)$ is a stock trading volume, $C$ the set of indices for which $u_i(t)$ is an error of the network output, and $D$ the set of indices for which $u_i(t)$ is an output of the hidden layer of the network. Thus the input is defined by

$$u_i(t) = \begin{cases} x_i(t), & i \in A \\ v_i(t), & i \in B \\ e_i(t), & i \in C \\ y_i(t), & i \in D \end{cases} \qquad (4)$$

The net activity and the output of hidden neuron $j$ and output neuron $k$ are computed by

$$net_j(t) = \sum_{i \in A \cup B \cup C \cup D} w_{ji}(t)\,u_i(t) + b_j(t), \qquad y_j(t) = f\big(net_j(t)\big) \qquad (5)$$

$$net_k(t) = \sum_{j} v_{kj}(t)\,y_j(t) + b_k(t), \qquad z_k(t) = f\big(net_k(t)\big)$$

where $i$, $j$ and $k$ respectively index the inputs, the neurons of the hidden layer and
the outputs. The prediction error of the $k$-th output at time $t$ is

$$e_k(t) = d_k(t) - z_k(t) \qquad (6)$$

where $d_k(t)$ is the desired output at time $t$. The network error at time $t$ and the total network error are

$$E(t) = \frac{1}{2}\sum_{k} e_k^2(t), \qquad E_{total} = \sum_{t=1}^{T} E(t) \qquad (7)$$

Since gradient descent is used for backpropagation, the weights and biases are adjusted as follows:

$$v_{kj}(t+1) = v_{kj}(t) - \eta_v \frac{\partial E(t)}{\partial v_{kj}(t)}, \qquad
w_{ji}(t+1) = w_{ji}(t) - \eta_w \frac{\partial E(t)}{\partial w_{ji}(t)}$$
$$b_k(t+1) = b_k(t) - \eta_{b_k} \frac{\partial E(t)}{\partial b_k(t)}, \qquad
b_j(t+1) = b_j(t) - \eta_{b_j} \frac{\partial E(t)}{\partial b_j(t)} \qquad (8)$$

where $\eta_v$, $\eta_w$, $\eta_{b_k}$ and $\eta_{b_j}$ are the learning rates of the weights and biases of the two layers. The partial derivative terms can be obtained by the chain rule for differentiation. For $v_{kj}$ we have

$$\frac{\partial E(t)}{\partial v_{kj}(t)} = -e_k(t)\,f'\big(net_k(t)\big)\,y_j(t) \qquad (9)$$

For $w_{ji}$ we have

$$\frac{\partial E(t)}{\partial w_{ji}(t)} = -\sum_{k} e_k(t)\,f'\big(net_k(t)\big)\sum_{l} v_{kl}(t)\,\frac{\partial y_l(t)}{\partial w_{ji}(t)} \qquad (10)$$

It can be assumed that the synaptic weights do not depend on the initial state of the network at time $t_0$, so we have
$$\frac{\partial y_l(0)}{\partial w_{ji}} = 0 \qquad (11)$$

For the fed-back error inputs, $m \in C$, the derivative of the error with respect to the weights is

$$\frac{\partial e_m(t)}{\partial w_{ji}(t)} = -f'\big(net_m(t)\big)\sum_{p} v_{mp}(t)\,\frac{\partial y_p(t)}{\partial w_{ji}(t)}, \qquad m \in C \qquad (12)$$

Since $m$ is a member of the set $C$, $m$ takes the values $1,\dots,K$, where $K$ is the total number of outputs. Let

$$a_{ji}^{l}(t) = \frac{\partial y_l(t)}{\partial w_{ji}(t)} \qquad (13)$$

Then $a_{ji}^{l}(t)$ can be calculated by the following recursive equation:

$$a_{ji}^{l}(t) = f'\big(net_l(t)\big)\Big[\,\delta_{lj}\,u_i(t) + \sum_{m \in C} w_{lm}(t)\,\frac{\partial e_m(t-1)}{\partial w_{ji}} + \sum_{m \in D} w_{lm}(t)\,a_{ji}^{m}(t-1)\Big] \qquad (14)$$

with the initial condition $a_{ji}^{l}(0) = 0$. Here $\delta_{lj}$ equals 1 if and only if $l = j$, and 0 otherwise. Since $m$ is a member of the set $D$ in the term $w_{lm}(t)\,a_{ji}^{m}(t-1)$, $m$ takes the values $1,\dots,J$, where $J$ is the total number of neurons in the hidden layer. The weight adjustment for $w_{ji}(t)$ can then be computed by

$$\Delta w_{ji}(t) = \eta_w \sum_{k} e_k(t)\,f'\big(net_k(t)\big)\sum_{l} v_{kl}(t)\,a_{ji}^{l}(t) \qquad (15)$$

Similarly, the bias adjustment relations are obtained by

$$\Delta b_k(t) = \eta_{b_k}\,e_k(t)\,f'\big(net_k(t)\big) \qquad (16)$$

$$\Delta b_j(t) = \eta_{b_j} \sum_{k} e_k(t)\,f'\big(net_k(t)\big)\sum_{l} v_{kl}(t)\,a_{j}^{l}(t) \qquad (17)$$

However, for the biases, $a_{j}^{l}(t) = \partial y_l(t)/\partial b_j(t)$ is obtained by the same recursion as (14), with the term $\delta_{lj}\,u_i(t)$ replaced by $\delta_{lj}$.
4. Application and Results

In order to verify the performance of the proposed network, an experiment has been carried out on real stock data. The network is applied to predict the two-step-ahead closing value of the Saderat Bank exchange (BSDR) from June 2009 to October 2015. For comparison, an RNN network, a NARX network and an RTRL network are also run. The Mean Square Error (MSE), Root Mean Square Error (RMSE), Normalized Mean Square Error (NMSE) and Mean Absolute Error (MAE) are used to compare the prediction precision of these models. These error indices are defined by

$$MSE = \frac{1}{N}\sum_{i=1}^{N}\big(z_i - d_i\big)^2, \qquad
MAE = \frac{1}{N}\sum_{i=1}^{N}\big|z_i - d_i\big|$$
$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\big(z_i - d_i\big)^2}, \qquad
NMSE = \frac{\sum_{i=1}^{N}\big(z_i - d_i\big)^2}{\sum_{i=1}^{N}\big(d_i - \bar{d}\big)^2} \qquad (18)$$

where $N$ represents the total number of data points, $z_i$ is the prediction for the $i$-th day, $d_i$ is the desired output for the $i$-th day and $\bar{d}$ is the mean of the desired outputs. The smaller the values of these indices, the more precise the prediction model.

The daily price of the Saderat Bank exchange over this period is used for the simulation. Fig. 5 shows the closing price and the trading volume chart of BSDR for the given date range.

Fig. 5. BSDR daily closing price and trading volume

4.1. RNN Network

The same network as used in [1], presented in Section 2.1, is used here, with one hidden layer, 3 neurons in the hidden layer, two output neurons, and the hyperbolic tangent function as the transfer function of the hidden layer and the output layer.

4.2. NARX Network

A NARX network as described in Section 2.3 is used here, with two days of prices as input, one hidden layer with 3 neurons, two outputs, and the hyperbolic tangent function as the transfer function of the hidden layer and the output layer. Both the input and output orders ($D_X$, $D_Y$) are set to 2.

4.3. Proposed Architecture

The proposed network is configured as described in Section 3. The training procedure is run for 800 epochs in all four networks.

4.4. Simulation Results

The network is run for different values of the learning rate and, as mentioned in Section 3.2, for different numbers of neurons, in order to find the best parameters.
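The four error indices above can be computed as follows; the NMSE denominator is taken here as the deviation of the targets from their mean, one common convention, since the exact normalization is not legible in the source:

```python
import math

def error_indices(z, d):
    """MSE, MAE, RMSE and NMSE of predictions z against targets d."""
    n = len(z)
    sq = [(zi - di) ** 2 for zi, di in zip(z, d)]
    mse = sum(sq) / n
    mae = sum(abs(zi - di) for zi, di in zip(z, d)) / n
    rmse = math.sqrt(mse)
    d_mean = sum(d) / n
    # NMSE convention assumed here: squared error over target variance
    nmse = sum(sq) / sum((di - d_mean) ** 2 for di in d)
    return mse, mae, rmse, nmse

# hypothetical predictions and targets
mse, mae, rmse, nmse = error_indices([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```

Because NMSE is normalized by the spread of the targets, it allows error comparisons across series with different price scales.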
Table 1 shows the network's MSE and MAE errors for different numbers of neurons and different learning-rate values. This table indicates the optimal values of these two parameters: 3 neurons in the hidden layer and the corresponding learning rate.
Table 1. MSE and MAE errors of the network for different numbers of neurons and different learning rates

The performance of the four networks is evaluated by MSE, MAE, RMSE and NMSE. Table 2 shows the performance of the four networks described above. According to the results, all four criteria indicate that the proposed network performs better than the other networks. Fig. 6 presents the one-step-ahead and two-step-ahead predictions of the proposed network for BSDR. The real price is shown by a green line, the one-step-ahead prediction by a red dashed line and the two-step-ahead prediction by a blue dashed line.

Table 2. Prediction error of the different networks (proposed network, RNN, NARX and RTRL; rows: MSE, MAE, RMSE, NMSE)
As the results indicate, the proposed network has a better performance and could be effective in modelling such a dynamic system.

Fig. 6. One-step-ahead and two-step-ahead prediction of the proposed network

Fig. 7 presents the two-step-ahead predictions of the four networks.

Fig. 7. Two-step-ahead prediction of the four networks

5. Conclusions

Decision making under uncertainty, in problems such as portfolio optimization, peak power load forecasting and flood warning systems, which have a dynamic structure, requires a proper multi-step-ahead prediction. In this paper a method for two-step-ahead prediction has been proposed, which could be extended to more future time steps. For comparison, two-step-ahead forecasting using an RNN network, a NARX network and an RTRL network was also performed. The results showed that the proposed architecture outperforms the aforementioned networks and yields a prediction with less error.

References

[1] H. Khaloozadeh, A. K. Sedigh, 2001. "Long term prediction of Tehran price index (TEPIX) using neural networks". In Joint 9th IFSA World Congress and 20th NAFIPS International Conference. IEEE.
[2] E. W. Saad, D. V. Prokhorov, D. C. Wunsch, 1998. "Comparative study of stock trend prediction using time delay, recurrent and probabilistic neural networks". IEEE Transactions on Neural Networks 9(6).
[3] H. C. Huang, R. C. Hwang, J. G. Hsieh, 2002. "A new artificial intelligent peak power load forecaster based on non-fixed neural networks". International Journal of Electrical Power & Energy Systems, 24(3).
[4] P. Coulibaly, F. Anctil, B. Bobée, 2001. "Multivariate reservoir inflow forecasting using temporal neural networks". Journal of Hydrologic Engineering 6(5).
[5] E. Diaconescu, 2008. "The use of NARX Neural Networks to predict Chaotic Time Series". WSEAS Transactions on Computer Research 3(3).
[6] L. C. Chang, F. J. Chang, Y. M. Chang, 2004. "A two-step-ahead recurrent neural network for stream-flow forecasting". Hydrological Processes, 18.
[7] T. Lin, B. G. Horne, P. Tiňo, C. L. Giles, 1996. "Learning long-term dependencies in NARX recurrent neural networks". IEEE Transactions on Neural Networks, 7(6).
[8] M. T. Motlagh, H. Khaloozadeh, 2016. "A new architecture for modelling and prediction of dynamic systems using neural networks: application in Tehran stock exchange". In 4th International Conference on Control, Instrumentation and Automation (ICCIA), 2016.
[9] A. S. Chen, M. T. Leung, H. Daouk, 2003. "Application of neural networks to an emerging financial market: forecasting and trading the Taiwan Stock Index". Computers and Operations Research, vol. 30.
More informationUsing deep belief network modelling to characterize differences in brain morphometry in schizophrenia
Usng deep belef network modellng to characterze dfferences n bran morphometry n schzophrena Walter H. L. Pnaya * a ; Ary Gadelha b ; Orla M. Doyle c ; Crstano Noto b ; André Zugman d ; Qurno Cordero b,
More informationChapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems
Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons
More informationA Comparison on Neural Network Forecasting
011 nternatonal Conference on Crcuts, System and Smulaton PCST vol.7 (011) (011) ACST Press, Sngapore A Comparson on Neural Networ Forecastng Hong-Choon Ong 1, a and Shn-Yue Chan 1, b 1 School of Mathematcal
More informationSTATS 306B: Unsupervised Learning Spring Lecture 10 April 30
STATS 306B: Unsupervsed Learnng Sprng 2014 Lecture 10 Aprl 30 Lecturer: Lester Mackey Scrbe: Joey Arthur, Rakesh Achanta 10.1 Factor Analyss 10.1.1 Recap Recall the factor analyss (FA) model for lnear
More informationAn Empirical Study of Fuzzy Approach with Artificial Neural Network Models
An Emprcal Study of Fuzzy Approach wth Artfcal Neural Networ Models Memmedaga Memmedl and Ozer Ozdemr Abstract Tme seres forecastng based on fuzzy approach by usng artfcal neural networs s a sgnfcant topc
More informationStatistical Evaluation of WATFLOOD
tatstcal Evaluaton of WATFLD By: Angela MacLean, Dept. of Cvl & Envronmental Engneerng, Unversty of Waterloo, n. ctober, 005 The statstcs program assocated wth WATFLD uses spl.csv fle that s produced wth
More informationOnline Classification: Perceptron and Winnow
E0 370 Statstcal Learnng Theory Lecture 18 Nov 8, 011 Onlne Classfcaton: Perceptron and Wnnow Lecturer: Shvan Agarwal Scrbe: Shvan Agarwal 1 Introducton In ths lecture we wll start to study the onlne learnng
More informationOn an Extension of Stochastic Approximation EM Algorithm for Incomplete Data Problems. Vahid Tadayon 1
On an Extenson of Stochastc Approxmaton EM Algorthm for Incomplete Data Problems Vahd Tadayon Abstract: The Stochastc Approxmaton EM (SAEM algorthm, a varant stochastc approxmaton of EM, s a versatle tool
More informationCIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M
CIS56: achne Learnng Lecture 3 (Sept 6, 003) Preparaton help: Xaoyng Huang Lnear Regresson Lnear regresson can be represented by a functonal form: f(; θ) = θ 0 0 +θ + + θ = θ = 0 ote: 0 s a dummy attrbute
More informationBACKPROPAGATION NEURAL NETWORK APPROACH FOR MEAN TEMPERATURE PREDICTION
IJRRAS 9 () October 6 www.arpapress.com/volumes/vol9issue/ijrras_9.pdf BACKPROPAGATIO EURAL ETWORK APPROACH FOR MEA TEMPERATURE PREDICTIO Manal A. Ashour,*, Soma A. ElZahaby & Mahmoud I. Abdalla 3, Al-Azher
More informationPARTICIPATION FACTOR IN MODAL ANALYSIS OF POWER SYSTEMS STABILITY
POZNAN UNIVE RSITY OF TE CHNOLOGY ACADE MIC JOURNALS No 86 Electrcal Engneerng 6 Volodymyr KONOVAL* Roman PRYTULA** PARTICIPATION FACTOR IN MODAL ANALYSIS OF POWER SYSTEMS STABILITY Ths paper provdes a
More informationSHORT-TERM PREDICTION OF AIR POLLUTION USING MULTI- LAYER PERCPTERON & GAMMA NEURAL NETWORKS
Control 4, Unversty of Bath, UK, September 4 ID-6 SHORT-TER PREDICTIO OF AIR POUTIO USI UTI- AYER PERCPTERO & AA EURA ETWORKS. Alyar. Shoorehdel *,. Teshnehlab, A. Kha. Sedgh * PhD student. of Elect. Eng.
More informationLOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin
Proceedngs of the 007 Wnter Smulaton Conference S G Henderson, B Bller, M-H Hseh, J Shortle, J D Tew, and R R Barton, eds LOW BIAS INTEGRATED PATH ESTIMATORS James M Calvn Department of Computer Scence
More information1 GSW Iterative Techniques for y = Ax
1 for y = A I m gong to cheat here. here are a lot of teratve technques that can be used to solve the general case of a set of smultaneous equatons (wrtten n the matr form as y = A), but ths chapter sn
More informationHomework Assignment 3 Due in class, Thursday October 15
Homework Assgnment 3 Due n class, Thursday October 15 SDS 383C Statstcal Modelng I 1 Rdge regresson and Lasso 1. Get the Prostrate cancer data from http://statweb.stanford.edu/~tbs/elemstatlearn/ datasets/prostate.data.
More informationRadial-Basis Function Networks
Radal-Bass uncton Networs v.0 March 00 Mchel Verleysen Radal-Bass uncton Networs - Radal-Bass uncton Networs p Orgn: Cover s theorem p Interpolaton problem p Regularzaton theory p Generalzed RBN p Unversal
More informationUncertainty in measurements of power and energy on power networks
Uncertanty n measurements of power and energy on power networks E. Manov, N. Kolev Department of Measurement and Instrumentaton, Techncal Unversty Sofa, bul. Klment Ohrdsk No8, bl., 000 Sofa, Bulgara Tel./fax:
More informationOn the correction of the h-index for career length
1 On the correcton of the h-ndex for career length by L. Egghe Unverstet Hasselt (UHasselt), Campus Depenbeek, Agoralaan, B-3590 Depenbeek, Belgum 1 and Unverstet Antwerpen (UA), IBW, Stadscampus, Venusstraat
More informationTransfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system
Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng
More informationIntroduction to the Introduction to Artificial Neural Network
Introducton to the Introducton to Artfcal Neural Netork Vuong Le th Hao Tang s sldes Part of the content of the sldes are from the Internet (possbly th modfcatons). The lecturer does not clam any onershp
More informationApplication research on rough set -neural network in the fault diagnosis system of ball mill
Avalable onlne www.ocpr.com Journal of Chemcal and Pharmaceutcal Research, 2014, 6(4):834-838 Research Artcle ISSN : 0975-7384 CODEN(USA) : JCPRC5 Applcaton research on rough set -neural network n the
More informationMLE and Bayesian Estimation. Jie Tang Department of Computer Science & Technology Tsinghua University 2012
MLE and Bayesan Estmaton Je Tang Department of Computer Scence & Technology Tsnghua Unversty 01 1 Lnear Regresson? As the frst step, we need to decde how we re gong to represent the functon f. One example:
More informationStructure and Drive Paul A. Jensen Copyright July 20, 2003
Structure and Drve Paul A. Jensen Copyrght July 20, 2003 A system s made up of several operatons wth flow passng between them. The structure of the system descrbes the flow paths from nputs to outputs.
More informationSDMML HT MSc Problem Sheet 4
SDMML HT 06 - MSc Problem Sheet 4. The recever operatng characterstc ROC curve plots the senstvty aganst the specfcty of a bnary classfer as the threshold for dscrmnaton s vared. Let the data space be
More informationCSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography
CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve
More informationCOMPOSITE BEAM WITH WEAK SHEAR CONNECTION SUBJECTED TO THERMAL LOAD
COMPOSITE BEAM WITH WEAK SHEAR CONNECTION SUBJECTED TO THERMAL LOAD Ákos Jósef Lengyel, István Ecsed Assstant Lecturer, Professor of Mechancs, Insttute of Appled Mechancs, Unversty of Mskolc, Mskolc-Egyetemváros,
More informationSimultaneous Optimization of Berth Allocation, Quay Crane Assignment and Quay Crane Scheduling Problems in Container Terminals
Smultaneous Optmzaton of Berth Allocaton, Quay Crane Assgnment and Quay Crane Schedulng Problems n Contaner Termnals Necat Aras, Yavuz Türkoğulları, Z. Caner Taşkın, Kuban Altınel Abstract In ths work,
More informationNUMERICAL DIFFERENTIATION
NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the
More informationLogistic Regression. CAP 5610: Machine Learning Instructor: Guo-Jun QI
Logstc Regresson CAP 561: achne Learnng Instructor: Guo-Jun QI Bayes Classfer: A Generatve model odel the posteror dstrbuton P(Y X) Estmate class-condtonal dstrbuton P(X Y) for each Y Estmate pror dstrbuton
More informationOpen Systems: Chemical Potential and Partial Molar Quantities Chemical Potential
Open Systems: Chemcal Potental and Partal Molar Quanttes Chemcal Potental For closed systems, we have derved the followng relatonshps: du = TdS pdv dh = TdS + Vdp da = SdT pdv dg = VdP SdT For open systems,
More informationIntroduction to Neural Networks. David Stutz
RWTH Aachen Unversty Char of Computer Scence 6 Prof. Dr.-Ing. Hermann Ney Selected Topcs n Human Language Technology and Pattern Recognton WS 13/14 Introducton to Neural Networs Davd Stutz Matrculaton
More informationComparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method
Appled Mathematcal Scences, Vol. 7, 0, no. 47, 07-0 HIARI Ltd, www.m-hkar.com Comparson of the Populaton Varance Estmators of -Parameter Exponental Dstrbuton Based on Multple Crtera Decson Makng Method
More informationStudy on Active Micro-vibration Isolation System with Linear Motor Actuator. Gong-yu PAN, Wen-yan GU and Dong LI
2017 2nd Internatonal Conference on Electrcal and Electroncs: echnques and Applcatons (EEA 2017) ISBN: 978-1-60595-416-5 Study on Actve Mcro-vbraton Isolaton System wth Lnear Motor Actuator Gong-yu PAN,
More informationNon-linear Canonical Correlation Analysis Using a RBF Network
ESANN' proceedngs - European Smposum on Artfcal Neural Networks Bruges (Belgum), 4-6 Aprl, d-sde publ., ISBN -97--, pp. 57-5 Non-lnear Canoncal Correlaton Analss Usng a RBF Network Sukhbnder Kumar, Elane
More informationOil Price Forecasting with an EMD-Based Multiscale Neural Network Learning Paradigm
Ol Prce Forecastng wth an EMD-Based Multscale Neural Network Learnng Paradgm Lean Yu 1,2, Kn Keung La 2, Shouyang Wang 1, and Kajan He 2 1 Insttute of Systems Scence, Academy of Mathematcs and Systems
More informationBoostrapaggregating (Bagging)
Boostrapaggregatng (Baggng) An ensemble meta-algorthm desgned to mprove the stablty and accuracy of machne learnng algorthms Can be used n both regresson and classfcaton Reduces varance and helps to avod
More informationSTAT 3008 Applied Regression Analysis
STAT 3008 Appled Regresson Analyss Tutoral : Smple Lnear Regresson LAI Chun He Department of Statstcs, The Chnese Unversty of Hong Kong 1 Model Assumpton To quantfy the relatonshp between two factors,
More informationMicrowave Diversity Imaging Compression Using Bioinspired
Mcrowave Dversty Imagng Compresson Usng Bonspred Neural Networks Youwe Yuan 1, Yong L 1, Wele Xu 1, Janghong Yu * 1 School of Computer Scence and Technology, Hangzhou Danz Unversty, Hangzhou, Zhejang,
More informationLinear Approximation with Regularization and Moving Least Squares
Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...
More informationResearch Article Relative Smooth Topological Spaces
Advances n Fuzzy Systems Volume 2009, Artcle ID 172917, 5 pages do:10.1155/2009/172917 Research Artcle Relatve Smooth Topologcal Spaces B. Ghazanfar Department of Mathematcs, Faculty of Scence, Lorestan
More informationA LINEAR PROGRAM TO COMPARE MULTIPLE GROSS CREDIT LOSS FORECASTS. Dr. Derald E. Wentzien, Wesley College, (302) ,
A LINEAR PROGRAM TO COMPARE MULTIPLE GROSS CREDIT LOSS FORECASTS Dr. Derald E. Wentzen, Wesley College, (302) 736-2574, wentzde@wesley.edu ABSTRACT A lnear programmng model s developed and used to compare
More informationChapter 9: Statistical Inference and the Relationship between Two Variables
Chapter 9: Statstcal Inference and the Relatonshp between Two Varables Key Words The Regresson Model The Sample Regresson Equaton The Pearson Correlaton Coeffcent Learnng Outcomes After studyng ths chapter,
More informationIntegrated approach in solving parallel machine scheduling and location (ScheLoc) problem
Internatonal Journal of Industral Engneerng Computatons 7 (2016) 573 584 Contents lsts avalable at GrowngScence Internatonal Journal of Industral Engneerng Computatons homepage: www.growngscence.com/ec
More information