An Accurate Measure for Multilayer Perceptron Tolerance to Weight Deviations


Neural Processing Letters 10: 121-130, 1999. Kluwer Academic Publishers. Printed in the Netherlands.

JOSE L. BERNIER, J. ORTEGA, M. M. RODRÍGUEZ, I. ROJAS and A. PRIETO
Dpto. Arquitectura y Tecnología de Computadores, Universidad de Granada, E-18071 Granada, Spain. E-mail: bernier@atc.ugr.es, aprieto@atc.ugr.es

Abstract. The inherent fault tolerance of artificial neural networks (ANNs) is usually assumed, but several authors have claimed that ANNs are not always fault tolerant and have demonstrated the need to evaluate their robustness by quantitative measures. For this purpose, various alternatives have been proposed. In this paper we show the direct relation between the mean square error (MSE) and the statistical sensitivity to weight deviations, defining a measure of tolerance based on statistical sensitivity that we have called Mean Square Sensitivity (MSS); this allows us to predict accurately the degradation of the MSE when the weight values change and so constitutes a useful parameter for choosing between different configurations of MLPs. The experimental results obtained for different MLPs are shown and demonstrate the validity of our model.

Key words: mean square error degradation, multilayer perceptron, fault tolerance, statistical sensitivity

1. Introduction

Based on the analogy between Artificial Neural Networks (ANNs) and natural ones, it is frequently assumed that ANNs are inherently tolerant to faults. This assumption has been shown to be false [1, 2]; consequently, some configurations of weights for a fixed ANN structure are more tolerant than others, although they provide similar performance with respect to learning ability. Thus it would be useful to have an accurate means of measuring this tolerance. Different measures have been proposed. In [3] the probability of errors in Multilayer Perceptrons (MLPs) that use the simple-step activation function is used to study the tolerance of such structures, and in [4] the study is extended to neurons with a multiple-step activation function. The authors found that the probability of error is not affected by an increase in the number of neurons per layer. In [1] a simulation-derived quantitative measure making use of the worst-case hypothesis to consider the number of neurons that present a stuck-at fault is used, while in [2] a procedure based on the replication of hidden units is proposed to achieve fault-tolerant ANNs, providing metrics for tolerance as a function of redundancy.

The authors of [5] show that the insertion of synaptic noise during the learning process increases the fault tolerance of ANNs. They also showed that enlarging the networks does not improve fault tolerance at all, but on the contrary makes them more susceptible to faults. Hence, it is clear that the supposed fault tolerance of MLPs is relative, and that the degree of fault tolerance can and should be measured, as is the learning performance. This becomes significant when the learning process is performed in media different from where the MLP is implemented, as there may be differences in the accuracy used to store the values of the weights or other magnitudes. These differences may seriously degrade the performance of the MLP. A discussion of hardware error sources is presented in [5]. Choi et al. [7] proposed statistical sensitivity as a measure of tolerance in MLPs and thus as a criterion for selecting a configuration of weights from different alternatives presenting similar performance with respect to learning. Statistical sensitivity measures the output changes of the MLP when the values of the weights change, a low value implying less degradation of the learning performance. This fact is implicitly proved with the results obtained. In this letter, an explicit relation between the statistical sensitivity to weight deviations and the mean square error (MSE) between the desired and obtained outputs of the MLP is shown, such that it is possible to accurately predict the degradation of the MSE for a particular value of this deviation. A new figure that we call Mean Square Sensitivity (MSS) is proposed as a measurement of the fault tolerance to weight deviations. The MSS can be easily computed after training and is shown to constitute a good measurement of the tolerance of the network. The statistical sensitivity can be computed locally for each neuron, and so different alternatives arise for future work and research; for example, the study of tolerance to weight perturbations of particular elements in the network, or the modification of the training algorithm in order to reduce the MSS. Other measures proposed affect the whole network [1, 2, 3, 4] and so their use to study tolerance is more limited. Furthermore, as the MSS is directly related to the MSE degradation, an accurate and quantitative measure of MLP tolerance is obtained, without making use of any hypotheses.

2. The Statistical Sensitivity of a Multilayer Perceptron

A multilayer perceptron (MLP) is composed of M layers, where each layer m (m = 1,...,M) has N_m neurons. The neuron i of layer m is connected to the N_{m-1} neurons of the previous layer by a set of weights w_{ij}^m (j = 1,...,N_{m-1}).

The output y_i^m of a neuron i belonging to a layer m is a function f (the activation function) of the weighted sum z_i^m of the outputs coming from the neurons in the previous layer:

  y_i^m = f(z_i^m) = f\left( \sum_{j=1}^{N_{m-1}} w_{ij}^m y_j^{m-1} \right)    (1)

where the y_j^{m-1} are the outputs of the N_{m-1} neurons of the previous layer. Specifically, y_j^0 (j = 1,...,N_0) are the inputs to the network. During learning, the weights are adapted in order to minimize the MSE by using a gradient descent algorithm known as backpropagation. Depending on the initial values of the weights and other parameters of the algorithm, different values are reached after training. These possible solutions may present a similar MSE, but they differ with respect to the fault tolerance obtained. Fault tolerance is related to a uniform distribution of learning, such that the saliency of the weights is regular [6]. The values of the weights in an electronic implementation can be changed if a circuit presents a defect. Moreover, the backpropagation algorithm is often executed on a general purpose computer and the weights obtained are then physically implemented; it is thus possible for differences in the loaded values to occur. The statistical sensitivity S_i^m allows us to measure, in a quantitative way, the degradation of the expected output of a neuron i in layer m when the values of the weights change. The statistical sensitivity is defined in [7] by the following expression:

  S_i^m = \lim_{σ \to 0} \frac{\sqrt{\mathrm{var}(Δy_i^m)}}{σ}    (2)

where σ represents the standard deviation of the changes in the weights, and var(Δy_i^m) is the variance of the deviation in the output (with respect to the output of the MLP in the absence of perturbations) due to these changes, which can be computed as:

  var(Δy_i^m) = E[(Δy_i^m)^2] - (E[Δy_i^m])^2    (3)

where E[·] is the expected value. Statistical sensitivity is fundamentally different from the output sensitivity used in [6]. The output sensitivity is defined as the derivative of the output of the neuron with respect to the value of a weight, and so it measures the dependence of the output on that particular weight value. The statistical sensitivity measures the variation range of the output of a neuron due to changes in all the weights. To compute expression (2), a multiplicative model of weight deviations is assumed that satisfies:

(a) E[Δw_{ij}^m] = 0
(b) E[(Δw_{ij}^m)^2] = σ^2 (w_{ij}^m)^2
(c) E[Δw_{ij}^m Δw_{kl}^n] = 0 if i ≠ k or j ≠ l or m ≠ n
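To make the definition concrete, the statistical sensitivity of an output neuron can be estimated numerically by sampling the multiplicative perturbation model above with a small σ and measuring the spread of the output deviation. The following Python sketch is illustrative only; the 1-11-1 sigmoid network, the weight values and the sample size are our own assumptions, not taken from the paper:

import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, W2):
    # Forward pass of a 1-hidden-layer MLP with sigmoid activations (Eq. (1)).
    h = 1.0 / (1.0 + np.exp(-(W1 @ x)))
    return 1.0 / (1.0 + np.exp(-(W2 @ h)))

# Hypothetical "trained" weights and one input pattern (illustrative values).
W1 = rng.normal(size=(11, 1))
W2 = rng.normal(size=(1, 11))
x = np.array([0.3])

sigma, n_samples = 1e-3, 20000
y0 = forward(x, W1, W2)
dy = np.empty(n_samples)
for t in range(n_samples):
    # Multiplicative model: w -> w(1 + delta), E[delta] = 0, std(delta) = sigma.
    P1 = W1 * (1.0 + sigma * rng.standard_normal(W1.shape))
    P2 = W2 * (1.0 + sigma * rng.standard_normal(W2.shape))
    dy[t] = forward(x, P1, P2)[0] - y0[0]

# Definition (2): S is approximately sqrt(var(dy)) / sigma for small sigma.
print("Monte Carlo estimate of S for the output neuron:", np.sqrt(dy.var()) / sigma)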

If the deviations of the weights of a neuron i in layer m are small enough, the corresponding deviation in the output can be approximated as:

  Δy_i^m ≈ f'_i^m \sum_{j=1}^{N_{m-1}} \left( y_j^{m-1} Δw_{ij}^m + w_{ij}^m Δy_j^{m-1} \right)    (4)

where f'_i^m = df/dz evaluated at z_i^m.

Proposition 1: if E[Δw_{ij}^m] = 0 for all i, j, m, then E[Δy_i^m] = 0 for all i, m.

Proof 1: it will be shown by induction over m. For m = 1:

  E[Δy_i^1] = E\left[ f'_i^1 \sum_{j=1}^{N_0} y_j^0 Δw_{ij}^1 \right] = f'_i^1 \sum_{j=1}^{N_0} y_j^0 E[Δw_{ij}^1] = 0    (5)

Assuming that E[Δy_j^{m-1}] = 0 for all j in layer m - 1:

  E[Δy_i^m] = E\left[ f'_i^m \sum_j \left( y_j^{m-1} Δw_{ij}^m + w_{ij}^m Δy_j^{m-1} \right) \right] = f'_i^m \sum_j \left( y_j^{m-1} E[Δw_{ij}^m] + w_{ij}^m E[Δy_j^{m-1}] \right) = 0    (6)

Proposition 2: the statistical sensitivity to multiplicative perturbations of a neuron i in layer m can be expressed as:

  S_i^m = f'_i^m \sqrt{ \sum_{j=1}^{N_{m-1}} \left( y_j^{m-1} w_{ij}^m \right)^2 + \sum_{j=1}^{N_{m-1}} \sum_{k=1}^{N_{m-1}} w_{ij}^m w_{ik}^m C_{jk}^{m-1} }    (7)

where the terms C_{jk}^{m-1} can be recursively computed as:

  C_{jk}^m = (f'_j^m)^2 \left[ \sum_r \left( w_{jr}^m y_r^{m-1} \right)^2 + \sum_r \sum_s w_{jr}^m w_{js}^m C_{rs}^{m-1} \right]   if j = k
  C_{jk}^m = f'_j^m f'_k^m \sum_r \sum_s w_{jr}^m w_{ks}^m C_{rs}^{m-1}   otherwise    (8)

Proof 2: making use of Proposition 1, var(Δy_i^m) = E[(Δy_i^m)^2]. The terms E[Δy_j^m Δy_k^m], m > 0, can be obtained as:

  E[Δy_j^m Δy_k^m] = E\left[ f'_j^m f'_k^m \left( \sum_r \left( y_r^{m-1} Δw_{jr}^m + w_{jr}^m Δy_r^{m-1} \right) \right) \left( \sum_s \left( y_s^{m-1} Δw_{ks}^m + w_{ks}^m Δy_s^{m-1} \right) \right) \right]
  = f'_j^m f'_k^m \sum_r \sum_s \left( y_r^{m-1} y_s^{m-1} E[Δw_{jr}^m Δw_{ks}^m] + y_r^{m-1} w_{ks}^m E[Δw_{jr}^m Δy_s^{m-1}] + w_{jr}^m y_s^{m-1} E[Δy_r^{m-1} Δw_{ks}^m] + w_{jr}^m w_{ks}^m E[Δy_r^{m-1} Δy_s^{m-1}] \right)    (9)

If C_{jk}^m is defined as C_{jk}^m ≡ E[Δy_j^m Δy_k^m] / σ^2 and the perturbation model is applied in (9) (the deviations Δy^{m-1} depend only on the weights of previous layers, so the cross expectations involving Δw^m vanish), it is straightforward to obtain Equation (8). The initial condition for (8) is that C_{jk}^0 = 0 for all j, k, as the inputs y_j^0 are supposed to be free of errors. At this point, taking into account that E[(Δy_i^m)^2] = σ^2 C_{ii}^m and substituting in (2), Proposition 2 is proved. In the particular case of MLPs with only one hidden layer, expression (7) can be computed in a more compact form:

  S_i^m = f'_i^m \sqrt{ \sum_{j=1}^{N_{m-1}} \left( w_{ij}^m \right)^2 \left[ \left( y_j^{m-1} \right)^2 + \left( S_j^{m-1} \right)^2 \right] },   m = 1, 2    (10)

taking into account that the statistical sensitivity of the inputs is zero, i.e., S_j^0 = 0. In this way, as expression (10) shows, in the case of MLPs with one hidden layer the computation of the statistical sensitivity can be performed in a relatively easy way for each neuron of the MLP and for each input pattern, because there are no cross terms as in expression (7).
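For the one-hidden-layer case, expression (10) maps directly onto a layer-by-layer computation: propagate the outputs forward and, alongside them, the per-neuron sensitivities, starting from S^0 = 0. The sketch below is a minimal illustration under the same assumptions as the previous snippet (sigmoid activation, whose derivative is f' = y(1 - y); weights and input are hypothetical):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sensitivities(x, weights):
    # Per-neuron statistical sensitivities via expression (10) for one input
    # pattern. weights = [W1, W2], where W^m has shape (N_m, N_{m-1}).
    # The inputs are assumed error-free, so S^0 = 0.
    y, S = x, np.zeros_like(x)
    for W in weights:
        y_next = sigmoid(W @ y)
        fprime = y_next * (1.0 - y_next)  # f'(z) for the sigmoid
        # S_i = f'_i * sqrt( sum_j w_ij^2 * (y_j^2 + S_j^2) )
        S = fprime * np.sqrt((W ** 2) @ (y ** 2 + S ** 2))
        y = y_next
    return y, S

# Example with a hypothetical 1-11-1 network, the topology of the paper's
# sine approximator (the weight values themselves are illustrative).
rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(11, 1)), rng.normal(size=(1, 11))
y_out, S_out = sensitivities(np.array([0.3]), [W1, W2])
print("output:", y_out, "output-layer sensitivity:", S_out)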

3. The Mean Square Sensitivity

The goal of the backpropagation algorithm is to reduce the mean square error (MSE), which can be expressed as:

  ε = \frac{1}{2 N_p} \sum_{p=1}^{N_p} \sum_{i=1}^{N_M} \left( d_i(p) - y_i^M(p) \right)^2    (11)

where N_p is the number of input patterns considered, N_M is the number of neurons in the output layer, and d_i(p) and y_i^M(p) are the desired and obtained outputs of neuron i of the output layer for input pattern p, respectively. If the weights of the MLP suffer any deviation, the MSE is altered; developing expression (11) with a Taylor expansion near the nominal MSE found after learning, ε_0, it is obtained that:

  ε' = ε_0 - \frac{1}{N_p} \sum_{p=1}^{N_p} \sum_{i=1}^{N_M} \left( d_i(p) - y_i^M(p) \right) Δy_i^M(p) + \frac{1}{2 N_p} \sum_{p=1}^{N_p} \sum_{i=1}^{N_M} \left( Δy_i^M(p) \right)^2    (12)

Now, if we compute the expected value of ε' and take into account that E[Δy_i^M] = 0 and that E[(Δy_i^M)^2] can be obtained from expressions (2) and (3) as E[(Δy_i^M)^2] ≈ σ^2 (S_i^M)^2, the following expression results:

  E[ε'] = ε_0 + \frac{σ^2}{2 N_p} \sum_{p=1}^{N_p} \sum_{i=1}^{N_M} \left( S_i^M(p) \right)^2    (13)

By analogy with the definition of the MSE, we define the following figure as the Mean Square Sensitivity (MSS):

  MSS = \frac{1}{2 N_p} \sum_{p=1}^{N_p} \sum_{i=1}^{N_M} \left( S_i^M(p) \right)^2    (14)

The MSS can be computed from the statistical sensitivity of the neurons belonging to the output layer, as expression (14) shows. In this way, combining expressions (13) and (14), the expected degradation of the MSE, E[ε'], can be computed as:

  E[ε'] = ε_0 + σ^2 MSS    (15)

Thus, expression (15) shows the direct relation between the MSE degradation and the MSS.
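Expressions (11), (14) and (15) then amount to simple bookkeeping over the patterns. A possible sketch, reusing the sensitivities helper from the previous snippet (the sine-approximation data below is a hypothetical stand-in for the paper's test set):

import numpy as np

def mss_and_prediction(patterns, targets, weights, sigma):
    # Computes the nominal MSE (11), the MSS (14) and the predicted
    # degraded MSE E[eps'] = eps0 + sigma^2 * MSS (15).
    eps0 = mss = 0.0
    for x, d in zip(patterns, targets):
        y, S = sensitivities(x, weights)   # forward pass + sensitivities
        eps0 += np.sum((d - y) ** 2)       # squared error at the output layer
        mss += np.sum(S ** 2)              # output-layer sensitivities only
    eps0 /= 2.0 * len(patterns)
    mss /= 2.0 * len(patterns)
    return eps0, mss, eps0 + sigma ** 2 * mss

# Hypothetical usage with a sine-like task (targets scaled into (0, 1)):
xs = [np.array([v]) for v in np.linspace(0.0, 2.0 * np.pi, 50)]
ds = [np.array([0.5 + 0.4 * np.sin(v[0])]) for v in xs]
eps0, mss, predicted = mss_and_prediction(xs, ds, [W1, W2], sigma=0.10)
print(f"eps0={eps0:.5f}  MSS={mss:.5f}  predicted E[eps']={predicted:.5f}")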

As the MSS can be directly computed after training, it is possible to predict the degradation of the MSE when the weights are deviated from their nominal values within a range with standard deviation equal to σ. Moreover, as can be observed in the expression obtained, a lower value of the MSS implies a lower degradation of the MSE, so we propose using the MSS as a suitable measure of the tolerance of MLPs to weight deviations. Note that as the statistical sensitivity of a particular neuron can be computed independently, several lines of research are open: to study the tolerance of particular elements, or to develop new training algorithms that take the MSS into account as another term to minimize during learning. In [7] it is proposed to use the average statistical sensitivity as a criterion to select a weight configuration from different possibilities which present similar MSE after training. However, the MSS is directly obtained from the MSE degradation, as expression (15) shows, and thus constitutes a better measurement of MLP tolerance against weight perturbations.

4. Results

In order to validate expression (15), we compared the results obtained for the MSE when the MLPs are subject to deviations with the value predicted by this expression. Two MLPs were considered: an approximator of the sine function [8] and a predictor of the Mackey-Glass temporal series [9]. The approximator had 1 input neuron, 11 neurons in the hidden layer and 1 output neuron, and the predictor consisted of 3 input neurons, 11 neurons in the hidden layer and 1 output neuron. All the neurons considered contained a bias input. Table I shows the values of MSE and MSS obtained after training (with the test patterns, different from those used for training).

Table I. MSE and MSS obtained after training. [The tabulated ε and MSS values for the approximator and the predictor were lost in extraction.]

All the weights obtained after learning have been deviated from their nominal values by using the multiplicative model, such that each weight w_{ij}^m takes a value equal to w_{ij}^m (1 + δ_{ij}^m), where δ_{ij}^m is a random variable with standard deviation equal to σ and zero mean. Note that in the case of analogue circuits, σ is the same as the tolerance of the analogue elements. Table II shows the values of the MSE predicted and obtained experimentally for different values of σ (analogue tolerance). For each value of σ considered, the experimental values of MSE are averaged over 100 tests, where each test consists of a random deviation of all the weights of the MLP. The confidence interval at 95% is also presented in Table II.
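The validation experiment just described can be reproduced in outline: deviate every weight multiplicatively, measure the resulting MSE, and average over 100 random trials per value of σ, comparing the result with expression (15). A sketch under the same illustrative assumptions as the previous snippets:

import numpy as np

def empirical_mse(patterns, targets, weights, sigma, rng, trials=100):
    # Average MSE over random multiplicative weight deviations, plus the
    # half-width of a 95% confidence interval on the mean.
    mses = np.empty(trials)
    for t in range(trials):
        perturbed = [W * (1.0 + sigma * rng.standard_normal(W.shape))
                     for W in weights]          # w -> w(1 + delta)
        err = sum(np.sum((d - sensitivities(x, perturbed)[0]) ** 2)
                  for x, d in zip(patterns, targets))
        mses[t] = err / (2.0 * len(patterns))
    return mses.mean(), 1.96 * mses.std(ddof=1) / np.sqrt(trials)

rng = np.random.default_rng(2)
for sigma in (0.02, 0.05, 0.10, 0.20):
    mean_mse, ci = empirical_mse(xs, ds, [W1, W2], sigma, rng)
    _, _, predicted = mss_and_prediction(xs, ds, [W1, W2], sigma)
    print(f"sigma={sigma:.2f}  predicted={predicted:.5f}  "
          f"experimental={mean_mse:.5f} +/- {ci:.5f}")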

Table II. Comparison between predicted and experimental E[ε']. [Columns: σ (%); predicted and experimental MSE (x 1e-3), with 95% confidence intervals on the experimental values, for the approximator and the predictor. The numeric entries were lost in extraction.]

[Figure 1. MSE predicted and experimental for the approximator of the sine function; the plot shows the mean square error (MSE) versus the standard deviation of the weight perturbations.]

Expression (15) is shown to be valid; it accurately predicts the degradation of the MSE when the weights present perturbations. It is also proven that the lower the value of the MSS, the lower the degradation of the MSE. Thus, even if a particular configuration presents a lower MSE after training, if the MSS is high this nominal MSE is strongly degraded when deviations are present, and so the MSS must be considered when a weight configuration is to be chosen.

[Figure 2. MSE predicted and experimental for the predictor of the Mackey-Glass temporal series; the plot shows the mean square error (MSE) versus the standard deviation of the weight perturbations.]

Figures 1 and 2 show the degradation of the MSE for different values of σ. The values predicted and obtained experimentally are represented for the approximator and the predictor, respectively. Each experimental value is plotted with its respective 95% confidence interval obtained with 100 samples. As in Table II, the predicted values of the MSE accurately fit those obtained experimentally. For example, in Figure 2 the prediction is valid up to weight deviations of about 20% from their nominal values; in an analogue circuit this means that the MSE degradation can be predicted for tolerance margins of the elements of about 20%. The matching between the predicted and the experimental values of the MSE is better when the weight deviations are smaller; however, for normal values of analogue tolerance, the predicted value is accurate and, in any case, constitutes an upper bound for the MSE degradation.

5. Conclusions

In this letter we have presented the relation between the mean square error (MSE) and the statistical sensitivity. As the statistical sensitivity measures the deviation in the output of an MLP when its weights are perturbed, this relation allows us to obtain a useful criterion to evaluate the fault tolerance of the network. To compare different weight configurations, we propose the use of the Mean Square Sensitivity (MSS), which is computed from the statistical sensitivity. Lower values of MSS imply lower degradations of the MSE. The results show the correctness of the expressions obtained.

What distinguishes the MSS from other measures proposed to assess the tolerance of MLPs is that it is directly related to the MSE degradation; also, as the statistical sensitivity can be computed for each neuron of the MLP, new research possibilities are opened for the study of related aspects. As future work, a new backpropagation algorithm that includes the objective of minimizing the MSS jointly with the MSE will be developed, in order to obtain weight configurations that maximize fault tolerance while maintaining learning performance. As the MSS is an accurate measure of the MSE degradation, the performance of such an algorithm will probably be better than that described in [10] for a similar training algorithm based on the minimization of the average statistical sensitivity.

References

1. Segee, B. E. and Carter, M. J.: Comparative fault tolerance of parallel distributed processing networks, IEEE Trans. on Computers 43(11) (1994).
2. Phatak, D. S. and Koren, I.: Complete and partial fault tolerance of feedforward neural nets, IEEE Trans. on Neural Networks 6(2) (1995).
3. Stevenson, M., Winter, R. and Widrow, B.: Sensitivity of neural networks to weight errors, IEEE Trans. on Neural Networks 1(1) (1990).
4. Alippi, C., Piuri, V. and Sami, M.: Sensitivity to errors in artificial neural networks: a behavioral approach, In: Proc. IEEE Int. Symp. on Circuits & Systems, May.
5. Edwards, P. J. and Murray, A. F.: Fault tolerance via weight-noise in analogue VLSI implementations - a case study with EPSILON, IEEE Trans. on Circuits and Systems II: Analog and Digital Signal Processing 45(9) (1998).
6. Edwards, P. J. and Murray, A. F.: Can deterministic penalty terms model the effects of synaptic weight noise on network fault-tolerance?, Int. Journal of Neural Systems 6(4) (1995).
7. Choi, J. Y. and Choi, C.: Sensitivity analysis of multilayer perceptron with differentiable activation functions, IEEE Trans. on Neural Networks 3(1) (1992).
8. Sudkamp, T. and Hammell, R.: Interpolation, completion and learning fuzzy rules, IEEE Trans. on Systems, Man & Cybernetics 24(2) (1994).
9. Wang, L.: Adaptive Fuzzy Systems and Control: Design and Stability Analysis, Prentice Hall, Englewood Cliffs, 1994.
10. Bernier, J. L., Ortega, J. and Prieto, A.: A modified backpropagation algorithm to tolerate weight errors, Lecture Notes in Computer Science 1240, Springer-Verlag (1997).
