Chapter 4 High Breakdown Regression Procedures


Introduction

While M and BI estimators provide an improvement over OLS if the data has an outlier or high influence point, they cannot provide protection against data with large amounts of contamination. As seen in Example 2.3 (Section 2.5.3), clusters of bad data may overwhelm these methods, which leads to poor estimators. Thus, the discussion next turns to methods with high breakdown points (see also Markatou and He (1994)). They have the ability to sift through as much as 50% of the data being contaminated and still provide decent estimators for explaining the general trend.

4.1 Least Median of Squares (LMS)

All of the methods mentioned above have an objective function that involves a Σ operator. Changing the function inside this summation operator, as in M or BI regression, had limited success. Rousseeuw (1984) suggested replacing the summation by the median, paralleling the fact that the sample median is more robust than the sample mean in location estimation. This leads to Least Median of Squares regression (LMS), with the objective function being

min_b med_i (y_i − x_i′b)².

The resulting estimator has a 50% breakdown point, but converges at the slow rate of O(n^(−1/3)), making its asymptotic efficiency against normal errors zero (Rousseeuw and Leroy, 1987, page 179). In the location model there exists a closed-form algorithm to calculate LMS. Originally, no exact algorithm was available to calculate LMS in the regression setting (excluding, of course, degenerate cases). Stromberg (1993) then provided an exact LMS algorithm, but
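As a concrete illustration (not from the original text), the LMS objective is simple to evaluate for any candidate coefficient vector; the function and data names below are invented for the example:

```python
# Minimal sketch: evaluating the LMS objective med_i (y_i - x_i'b)^2
# for a candidate coefficient vector b.
import numpy as np

def lms_objective(b, X, y):
    """Median of the squared residuals for candidate fit b."""
    r = y - X @ b
    return np.median(r ** 2)

# Toy data: y = 1 + 2x with one gross outlier in the response.
X = np.column_stack([np.ones(6), np.arange(6.0)])
y = 1.0 + 2.0 * np.arange(6.0)
y[5] = 100.0                        # contaminate one response
b_true = np.array([1.0, 2.0])
# The outlier inflates only one squared residual, so the median is unaffected:
print(lms_objective(b_true, X, y))  # -> 0.0
```

Because the median ignores the largest squared residuals, a single gross outlier leaves the objective at the true fit unchanged, which is exactly the robustness property being exploited.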

it requires that all C(n, p+1) subsets be investigated. This being a daunting task, the approach generally taken is to approximate LMS by one of the handful of available subsampling algorithms. Consider a subsample of size p, the number of unknown coefficients. This is referred to as an elemental set because, assuming that the reduced X matrix (i.e., the matrix formed by using only the p rows of X that correspond to the p subsampled observations) has full rank, an exact fit can be obtained from these points and the objective function can then be evaluated. One subsampling algorithm (Rousseeuw and Leroy, 1987, page 197) simply draws an elemental set and evaluates the objective function. This is then repeated a large number of times, and the final LMS estimate corresponds to the estimate that had the smallest observed objective function. This estimate retains the high breakdown and convergence properties of the theoretical LMS (Rousseeuw and Bassett, 1991). However, even if an exhaustive search of all C(n, p) elemental sets is performed, the resulting estimator is generally not the true LMS estimator, but rather only an estimate of it (just as in the case of MVE estimation via this method). Furthermore, the calculations involved to obtain the estimated regression coefficients are of order O(n^(p+1)) (Rousseeuw, 1993). Instead, the number of elemental subsets selected can be based on a probabilistic argument of how likely it is to obtain an elemental set that contains only good observations. The drawback is that since this probability is less than 1, the algorithm may break down in its calculation of a high breakdown estimator. This defeats the purpose of the high breakdown philosophy. In any event, the number of elemental subsets needed is roughly 3·2^p and 5·2^p for 95% and 99% probabilities, respectively, of obtaining at least one purely good elemental set. To obtain the regression coefficients, the order of calculation would then be these numbers of subsets multiplied by n, respectively (Rousseeuw, 1993).
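A minimal sketch of this first subsampling algorithm, together with the probabilistic subset-count calculation, might look as follows (all names are illustrative; the count assumes, as in the worst case above, that only half the data is good):

```python
# Sketch of the elemental-set subsampling algorithm: draw random sets of
# size p, fit exactly, keep the fit with the smallest median squared residual.
import math
import numpy as np

def subsets_needed(p, prob=0.95, good_frac=0.5):
    """Elemental sets needed so that, with probability `prob`, at least one
    set contains only good points when a fraction `good_frac` of the data
    is good: smallest m with 1 - (1 - good_frac**p)**m >= prob."""
    p_clean = good_frac ** p                    # P(one drawn set is all good)
    return math.ceil(math.log(1.0 - prob) / math.log(1.0 - p_clean))

def lms_by_subsampling(X, y, n_sets=3000, rng=None):
    rng = np.random.default_rng(rng)
    n, p = X.shape
    best_b, best_obj = None, np.inf
    for _ in range(n_sets):
        idx = rng.choice(n, size=p, replace=False)
        Xs = X[idx]
        if np.linalg.matrix_rank(Xs) < p:       # need an exact fit
            continue
        b = np.linalg.solve(Xs, y[idx])         # exact fit through the set
        obj = np.median((y - X @ b) ** 2)       # LMS objective
        if obj < best_obj:
            best_b, best_obj = b, obj
    return best_b, best_obj

# With good_frac = 1/2 this reproduces the roughly 3 * 2**p rule of thumb:
print(subsets_needed(p=5, prob=0.95))           # -> 95, close to 3 * 2**5 = 96
```

Note that the routine returns only the best fit among the sets actually drawn, so, exactly as the text warns, an unlucky run in which no purely good set is drawn can break down.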
Of course, while the probabilistic argument provides for the analysis of a strictly good elemental subset with high probability, there is no guarantee that any of those randomly obtained

good elemental subsets reflects the general trend by itself. Therefore, the resulting estimator can potentially be very misleading. A second algorithm was introduced by Rousseeuw (1993) to reduce the order of computations required and to eliminate the problem of having an algorithm breakdown due to the probability of obtaining a purely good elemental set being less than one. Basically, the data is randomly assigned into blocks of size 2p − 2. Any extra points are disbursed as evenly as possible. Then, within each block, all possible subsets of size p are evaluated with the objective function. Again, the final LMS estimate corresponds to the estimate that had the smallest observed objective function.

Figure 4.1: Possible configuration of the four blocks used in the second LMS subsampling algorithm, given eight observations on a scatterplot.
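The block-based algorithm just described can be sketched as follows (an illustrative implementation, not the author's code: here any leftover points simply form a final, shorter block rather than being spread evenly, and all names are invented):

```python
# Sketch of the second (block) subsampling algorithm: randomly partition the
# data into blocks of size 2p - 2 and evaluate every size-p subset within
# each block against the LMS objective.
from itertools import combinations
import numpy as np

def lms_by_blocks(X, y, rng=None):
    rng = np.random.default_rng(rng)
    n, p = X.shape
    block_size = 2 * p - 2
    perm = rng.permutation(n)
    blocks = [perm[i:i + block_size] for i in range(0, n, block_size)]
    best_b, best_obj = None, np.inf
    for block in blocks:
        for idx in combinations(block, p):      # all size-p subsets in block
            Xs = X[list(idx)]
            if np.linalg.matrix_rank(Xs) < p:
                continue
            b = np.linalg.solve(Xs, y[list(idx)])
            obj = np.median((y - X @ b) ** 2)
            if obj < best_obj:
                best_b, best_obj = b, obj
    return best_b, best_obj
```

For simple linear regression (p = 2) each block holds just two points and contributes a single candidate line, which is exactly the configuration criticized in the discussion of Figure 4.1 below.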

It is guaranteed that at least one elemental set will consist only of good observations. The advantage of this algorithm is theoretical: it achieves a high breakdown point, but the estimates obtained could be very misleading. A major problem is that there is no guarantee that any block will provide enough information concerning the general trend. To illustrate this idea, suppose that a simple linear regression (SLR) model is posed for a dataset of eight good observations. Here, the block size is 2(2) − 2 = 2, and there are 8/2 = 4 blocks. Next, suppose that the data are as shown in Figure 4.1, labeled as to which block each observation was randomly assigned. Even though the slope is obviously negative, all four blocks result in positive slope estimates. It seems that the algorithm relies on asymptotic combinatoric probabilities, in that the probability of obtaining this type of block structure goes to zero as n gets large. If there are replications at the regressor locations, such as in a designed experiment, this scenario may not be all that uncommon.

The first algorithm will be utilized in the case studies to come. The number of randomly drawn elemental sets is generally taken to be 500 or 1,000 in practice. In some case studies to follow, as many as 5,000 randomly drawn elemental subsets were used in an attempt to avoid an algorithm breakdown. Additionally, the LMS estimator that is obtained from a random subsampling algorithm can be modestly improved by adjusting the intercept. By viewing the residuals as a univariate sample, the exact LMS algorithm for location estimation can be performed. The updated intercept is found as the location LMS of the residuals from the current LMS regression estimator. This procedure is guaranteed to reduce (or not change) the LMS objective function, and it also eliminates the condition of always having at least p residuals being zero (because of the exact fit). This intercept adjustment procedure is incorporated into all LMS calculations.

4.2 Least Trimmed Squares (LTS)

Recall that one drawback to LMS is that it possesses a very slow convergence rate of O(n^(−1/3)). Rousseeuw (1984) introduced Least Trimmed Squares (LTS) to remedy this situation.

It also possesses a 50% breakdown point, but converges at the faster rate of O(n^(−1/2)). Here, the objective function is

min_b Σ_{i=1}^{h} r²_[i],

which represents the sum of the h smallest squared residuals. As mentioned before (in Chapter 3), h is generally taken to be [(n + p + 1)/2]. The problem is that no closed-form algorithm exists, except for the location model, to construct the true LTS estimator. Instead, the algorithms stated previously for approximating LMS can also be used to approximate LTS, simply by changing the objective function being evaluated for each elemental set. The intercept adjustment step is also available for the LTS estimator: simply replace the current intercept with the location LTS estimate of the residuals from the current LTS regression estimator. This update is performed in all LTS calculations. Agulló (2001) offers more discussion regarding LTS algorithms.

Both LMS and LTS are inefficient estimators. In fact, for the location model, LMS has a 1.39% asymptotic efficiency versus the sample mean under normal errors. LTS is only slightly better, having a 7.14% asymptotic efficiency versus the sample mean under normal errors. Therefore, several methods employ LTS, which converges more rapidly than LMS, as an initial estimator and perform some improvement calculation. These are referred to as one-step estimators. The idea is to utilize the high breakdown properties of these initial estimators, but to improve on their lack of efficiency. However, this introduces another problem: the improvement step generally will require weights for the observations, and robust weights are required to retain the desired high breakdown properties. This leads back to the material of Chapter 3. As mentioned in Section 3.3, a popular choice among the robust weighting schemes is the MVE-based Mallows weight. The remainder of Chapter 4 will introduce two competing one-step estimators and provide an overall high breakdown regression analysis of the stackloss case study.
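The trimmed objective, and the location LTS used by the intercept adjustment, can be sketched as follows (illustrative names only; the location version relies on the standard result that the optimal h-subset is contiguous in the sorted sample):

```python
# Sketch of the LTS objective and of location LTS for a univariate sample.
import numpy as np

def lts_objective(b, X, y):
    """Sum of the h smallest squared residuals, h = [(n + p + 1)/2]."""
    n, p = X.shape
    h = (n + p + 1) // 2
    r2 = np.sort((y - X @ b) ** 2)
    return r2[:h].sum()

def location_lts(r, h):
    """Location LTS: the mean of the h contiguous sorted values having the
    smallest sum of squares about their own mean."""
    r = np.sort(np.asarray(r, dtype=float))
    best_mu, best_ss = r[:h].mean(), np.inf
    for j in range(len(r) - h + 1):
        window = r[j:j + h]
        ss = ((window - window.mean()) ** 2).sum()
        if ss < best_ss:
            best_mu, best_ss = window.mean(), ss
    return best_mu

X = np.column_stack([np.ones(7), np.arange(7.0)])
y = 2.0 + 3.0 * np.arange(7.0)
y[0] = -50.0                       # one gross outlier
# h = (7 + 2 + 1) // 2 = 5, so the outlying squared residual is trimmed away:
print(lts_objective(np.array([2.0, 3.0]), X, y))   # -> 0.0
```

For the intercept adjustment, `location_lts` would be applied to the residuals from the current LTS fit, and the result added to the current intercept.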

4.3 One-Step Generalized M Estimators

The previous high breakdown estimators, LMS and LTS, attack the regression problem by changing the objective function to an expression that leads to improved breakdown point capabilities. Because of poor efficiency and numerical sensitivity due to both the random subsampling process as well as to small internal movements of the data (these topics are discussed later in Chapter 8), other high breakdown regression techniques have been developed. One remedy for the poor efficiency is to incorporate a high breakdown initial estimator with the generalized M estimator to obtain the one-step generalized M estimator. The objective function has the same form as the bounded influence estimator of Section 2.1, but with robust leverage weights. The solution is no longer found via the IRLS procedure, but instead through a one-step Taylor series expansion of the objective function. This estimator can be written as the sum of two terms: the initial estimator and the one-step improvement calculation. LTS has become the initial estimator of choice for many one-step improvement algorithms; it has a high breakdown point and converges more rapidly than LMS. The GM-estimators inherit the high breakdown point of LTS, but improve on the efficiency aspect. In the following discussion on one-step GM estimators, it is understood that:

(1) the initial estimator, β̂, is LTS;

(2) the residuals from the initial fit are denoted by r_i(β̂) = y_i − x_i′β̂;

(3) a diagonal matrix W = diag(w_i(x_i)) of robust Mallows weights, with

w_i = min{ 1, χ²_{.95, p−1} / RD_i² },

is calculated using MVE estimates (based solely on the regressor space); and

(4) the robust scale estimate, σ̂, is based on the LMS estimate (Rousseeuw and Leroy, 1987, page 202) and is found as

σ̂ = 1.4826 (1 + 5/(n − p)) √( med_i r_i²(β̂) ).
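Items (3) and (4) translate directly into code. In this sketch the squared robust distances RD_i² and the χ² cutoff are taken as given inputs (computing the MVE itself is a separate, much harder problem), and the 1.4826 factor is the usual normal-consistency constant:

```python
# Sketch of the MVE-based Mallows weights and the LMS-based scale estimate.
import numpy as np

def mallows_weights(rd2, chi2_cutoff):
    """w_i = min(1, chi2_cutoff / RD_i^2), given squared robust distances."""
    return np.minimum(1.0, chi2_cutoff / np.asarray(rd2, dtype=float))

def lms_scale(residuals, n, p):
    """sigma_hat = 1.4826 * (1 + 5/(n - p)) * sqrt(med r_i^2)."""
    med_r2 = np.median(np.square(np.asarray(residuals, dtype=float)))
    return 1.4826 * (1.0 + 5.0 / (n - p)) * np.sqrt(med_r2)

rd2 = np.array([0.5, 2.0, 9.0])     # hypothetical squared robust distances
print(mallows_weights(rd2, chi2_cutoff=3.0))   # weights: 1, 1, 1/3
```

Points whose robust distance exceeds the cutoff are smoothly downweighted, while points inside the cutoff keep full weight — the leverage-control behavior the Mallows form relies on.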

4.3.1 Mallows 1-Step Estimator

The Mallows 1-step (M1S) estimator, a generalized M estimator, was introduced by Simpson, Ruppert, and Carroll (1992). The focus of the 1-step improvement is to incorporate a leverage control term and an outlier control term in the estimation of β̂. The residuals from the initial estimate are utilized, with the M1S estimator being the solution to the altered normal equations

Σ_{i=1}^{n} w_i ψ( r_i(β̂) / σ̂ ) x_i = 0.

Outliers are controlled by the ψ-function downweighting large scaled residuals; for our discussion the Huber ψ-function is used. In addition, with the w_i's being robust Mallows weights, high leverage points get downweighted due to this term. Using Newton-Raphson to solve the altered normal equations, the form of the estimator is

β̂₁ = β̂ + H⁻¹ g,

where

g = σ̂ Σ_{i=1}^{n} w_i ψ( r_i(β̂) / σ̂ ) x_i

and

H = Σ_{i=1}^{n} w_i ψ⁽¹⁾( r_i(β̂) / σ̂ ) x_i x_i′,

with ψ⁽¹⁾ being the first derivative of ψ, i.e. ψ⁽¹⁾(u) = ∂ψ(u)/∂u. This can be simplified when written in matrix notation as

β̂₁ = β̂ + (X′BX)⁻¹ X′Wψ σ̂.

Here, ψ is an n × 1 vector, W is the n × n weight matrix defined earlier in Section 4.3, and B is the n × n diagonal matrix

B = diag( w_i ψ⁽¹⁾( r_i(β̂) / σ̂ ) ).

Using the Huber ψ-function, the diagonal elements of B become

b_ii = w_i, if |r_i(β̂)| ≤ c σ̂,
     = 0, otherwise.

The ψ vector elements are calculated as

ψ_i = −c, if r_i(β̂) < −c σ̂,
    = r_i(β̂)/σ̂, if |r_i(β̂)| ≤ c σ̂,
    = c, if r_i(β̂) > c σ̂.

To further the analysis beyond estimation, standard errors are needed for each of the coefficients. If the p × p matrix M is defined as

M = σ̂² Σ_{i=1}^{n} w_i² ψ²( r_i(β̂) / σ̂ ) x_i x_i′,

then the estimated (asymptotic) covariance matrix for the parameter estimates is given by

Cov(β̂₁) = H⁻¹ M H⁻¹.

By defining the matrix V = diag( w_i ψ( r_i(β̂) / σ̂ ) ), the estimated covariance matrix can be written in matrix form as

Cov(β̂₁) = σ̂² (X′BX)⁻¹ (X′V²X) (X′BX)⁻¹.

Thus, standard errors for the M1S coefficients are determined by the square roots of the diagonal elements of this estimated covariance matrix.

4.3.2 Schweppe 1-Step Estimator

Another generalized M estimator is the Schweppe 1-step (S1S) estimator introduced by Coakley and Hettmansperger (1993). The focus is on the selection of an appropriate weighting scheme. The M1S estimator is modified by replacing the Mallows form of the altered normal equations with the Schweppe form of the altered normal equations. Basically, this entails adding

a weight to the denominator of the ψ-function argument, which improves the efficiency of the estimator (Coakley and Hettmansperger, 1993). The altered normal equations for the S1S estimator are

Σ_{i=1}^{n} w_i ψ( r_i(β̂) / (σ̂ w_i) ) x_i = 0.

These equations are of the same form as those for BI regression. The difference is that the S1S method uses the same Mallows weights that are used in the M1S method, rather than the hat diagonal-based Welsch weights that are used in BI regression. A Gauss-Newton approximation using a first-order Taylor series expansion about the initial estimate β̂ yields a one-step improvement of the form

β̂₁ = β̂ + (X′BX)⁻¹ X′Wψ σ̂.

This has the same form as the M1S method, and uses the same weight matrix, W, but with changes in the definitions of the B matrix and the ψ vector. Now, using the Huber ψ-function, the diagonal elements of B are

b_ii = 1, if |r_i(β̂)| ≤ c σ̂ w_i,
     = 0, otherwise.

The ψ vector entries are calculated as

ψ_i = −c, if r_i(β̂) < −c σ̂ w_i,
    = r_i(β̂)/(σ̂ w_i), if |r_i(β̂)| ≤ c σ̂ w_i,
    = c, if r_i(β̂) > c σ̂ w_i.
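Both one-step updates share the matrix form β̂₁ = β̂ + (X′BX)⁻¹X′Wψσ̂ and differ only in how B and ψ are built. A compact sketch of the two updates (illustrative names; the Huber tuning constant c = 1.345 is an assumed default, and the sandwich covariance follows the M1S form given above):

```python
# Sketch of the shared one-step GM update for the Mallows (M1S) and
# Schweppe (S1S) forms. beta0, sigma, and w are assumed to come from LTS,
# the LMS-based scale, and the MVE-based Mallows weights, respectively.
import numpy as np

def gm_one_step(X, y, beta0, sigma, w, c=1.345, form="mallows"):
    """beta1 = beta0 + (X'BX)^-1 X'W psi * sigma.
    form="mallows" gives M1S; form="schweppe" gives S1S (needs w_i > 0,
    which Mallows weights satisfy)."""
    r = y - X @ beta0
    denom = sigma if form == "mallows" else sigma * w
    psi = np.clip(r / denom, -c, c)                  # Huber psi vector
    inside = np.abs(r) <= c * denom                  # where psi' = 1
    b_diag = np.where(inside, w if form == "mallows" else 1.0, 0.0)
    XtBX = X.T @ (b_diag[:, None] * X)               # H = X'BX
    beta1 = beta0 + np.linalg.solve(XtBX, X.T @ (w * psi)) * sigma
    v = w * psi                                      # V = diag(w_i psi_i)
    H_inv = np.linalg.inv(XtBX)
    cov = sigma ** 2 * H_inv @ (X.T @ ((v ** 2)[:, None] * X)) @ H_inv
    return beta1, cov
```

At an exact initial fit every residual is zero, so ψ vanishes and the update leaves β̂ unchanged; with bounded ψ, gross residuals contribute at most ±c to the correction, which is the outlier-control behavior described above.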

By defining the matrix V = diag( w_i ψ( r_i(β̂) / (σ̂ w_i) ) ), the estimated covariance matrix can be written in matrix form as

Cov(β̂₁) = σ̂² (X′BX)⁻¹ (X′V²X) (X′BX)⁻¹.

Standard errors for the S1S coefficients are then determined by the square roots of the diagonal elements of this estimated covariance matrix.

4.4 Case Study: Stackloss Data

In order to obtain the M1S and S1S estimates, LTS and MVE (without the 1-step improvement) are first estimated to provide a high breakdown starting point and robust weights for the improvement step. Using the repeated subsample (elemental set) algorithm with 5,000 iterations yields the location and scatter estimates MVE₁(Z) and MVE₂(Z), which define the ellipsoid of (minimum) volume. Corresponding to the smallest observed objective function evaluation, the LTS estimator produces a fitted equation of the form ŷ = b₀ + b₁x₁ + b₂x₂ + b₃x₃, with coefficients given in the LTS column of Table 4.1. Given these preliminary calculations, along with the LTS-based scale estimate of σ̂ = 1.793, the high breakdown regression estimators M1S and S1S both yield the same fitted equation, as shown in Table 4.1, along with their respective asymptotic standard errors.

Table 4.1: High breakdown regression for stackloss data (columns: Parameter, LTS, M1S, M1S s.e., S1S, S1S s.e.; rows: Intercept, x₁, x₂, x₃).

M1S and S1S essentially differ in their respective weighting schemes, particularly in cases with high influence points. Since the stackloss data has no high influence points, just the four outliers, it is not surprising that the two 1-step methods agree. By viewing the standard errors for the M1S and S1S estimators, it is evident that acid concentration (x₃) is not significant in the presence of air flow (x₁) and temperature (x₂). One could extend the analysis further by viewing observation weights, leverage weights, plots, etc., but this is omitted for the discussion purposes here.

4.5 Computational Issues for High Breakdown Regression

The main goal of high breakdown regression methods like those mentioned in this chapter is to keep large quantities (up to 50% of the data) of outliers from ruining the analysis. Thus, outliers in the response and high influence points are under control, not exerting any undue influence on the regression analysis. There is, however, a major problem that still needs to be addressed. The M1S and S1S methods require both an initial estimate and a set of weights in order to proceed with the one-step improvement. The initial estimate is taken to be LTS, with the weights being robust Mallows weights based on the MVE estimator. Both have problems attached to them. Recall that the computed LTS is generally not the solution to its objective function, but merely an estimate of it. As pointed out by Hettmansperger and Sheather (1992), LMS (and LTS, for that matter) is highly sensitive to small changes in the middle of the regressor space. The process of repeated subsampling can easily result in drastically different final estimates for LTS. This results from the objective function in question having many local minima at vastly different locations.

In a similar fashion, the MVE estimator is also very unstable, in the sense that drastically different results are common if the repeated subsampling algorithm were itself repeated. Thus, the robust weights generated from this algorithm may be very different in another simulation. Cook and Hawkins (1990) discuss this lack-of-repeatability problem when trying to mimic the results of Rousseeuw and van Zomeren (1990). Of note, for a data set having 20 observations and 5 regressors, it took the authors nearly 6,000 iterations to obtain the true MVE. This indicates that determining the number of iterations needed in subsampling by probabilistic arguments, such as those given by Rousseeuw and Leroy (1987) and Rousseeuw (1993), may fail to find the proper estimates. The alternative would be to incorporate the FSA approach in obtaining the MVE estimates, which is much more stable in terms of the effects of random starts. This approach would definitely increase the computational time required to perform the regression, since the initial regression estimator and the robust weights are no longer calculated in a parallel fashion. Basing a method on an LTS initial estimate leaves the method vulnerable to an internal breakdown. To obtain decent results will require an enormous amount of calculation, not the small number of iterations (say, under 1,000) currently suggested (Rousseeuw and Leroy (1987); Rousseeuw (1993)). Even so, the researcher is not guaranteed to avoid misleading results. This extends to the M1S and S1S methods as well: these methods are very reliant on their initial estimates, and very different results are a disturbing reality. Agostinelli and Markatou (1998) also offer a one-step robust regression estimator in the field. In conclusion, it is stressed that a current high breakdown estimator such as LTS may not be reproducible. Two researchers analyzing the same data by the same regression procedure may obtain vastly different results.
Case studies to come in Chapter 7 show a wide disparity of values over a small number of repeated analyses, while extending the discussion to include M1S and S1S, and their inherent reproducibility issues, as well. This issue is another reason why taking a different approach to obtaining a high breakdown regression estimator is justified.


Department of Quantitative Methods & Information Systems. Time Series and Their Components QMIS 320. Chapter 6 Department of Quanttatve Methods & Informaton Systems Tme Seres and Ther Components QMIS 30 Chapter 6 Fall 00 Dr. Mohammad Zanal These sldes were modfed from ther orgnal source for educatonal purpose only.

More information

Primer on High-Order Moment Estimators

Primer on High-Order Moment Estimators Prmer on Hgh-Order Moment Estmators Ton M. Whted July 2007 The Errors-n-Varables Model We wll start wth the classcal EIV for one msmeasured regressor. The general case s n Erckson and Whted Econometrc

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

β0 + β1xi. You are interested in estimating the unknown parameters β

β0 + β1xi. You are interested in estimating the unknown parameters β Ordnary Least Squares (OLS): Smple Lnear Regresson (SLR) Analytcs The SLR Setup Sample Statstcs Ordnary Least Squares (OLS): FOCs and SOCs Back to OLS and Sample Statstcs Predctons (and Resduals) wth OLS

More information

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could

More information

Lecture 21: Numerical methods for pricing American type derivatives

Lecture 21: Numerical methods for pricing American type derivatives Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)

More information

Chapter 2 - The Simple Linear Regression Model S =0. e i is a random error. S β2 β. This is a minimization problem. Solution is a calculus exercise.

Chapter 2 - The Simple Linear Regression Model S =0. e i is a random error. S β2 β. This is a minimization problem. Solution is a calculus exercise. Chapter - The Smple Lnear Regresson Model The lnear regresson equaton s: where y + = β + β e for =,..., y and are observable varables e s a random error How can an estmaton rule be constructed for the

More information

Singular Value Decomposition: Theory and Applications

Singular Value Decomposition: Theory and Applications Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real

More information

2016 Wiley. Study Session 2: Ethical and Professional Standards Application

2016 Wiley. Study Session 2: Ethical and Professional Standards Application 6 Wley Study Sesson : Ethcal and Professonal Standards Applcaton LESSON : CORRECTION ANALYSIS Readng 9: Correlaton and Regresson LOS 9a: Calculate and nterpret a sample covarance and a sample correlaton

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

Problem Set 9 Solutions

Problem Set 9 Solutions Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem

More information

The Geometry of Logit and Probit

The Geometry of Logit and Probit The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.

More information

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U)

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U) Econ 413 Exam 13 H ANSWERS Settet er nndelt 9 deloppgaver, A,B,C, som alle anbefales å telle lkt for å gøre det ltt lettere å stå. Svar er gtt . Unfortunately, there s a prntng error n the hnt of

More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:

More information

10-701/ Machine Learning, Fall 2005 Homework 3

10-701/ Machine Learning, Fall 2005 Homework 3 10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40

More information

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results. Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson

More information

e i is a random error

e i is a random error Chapter - The Smple Lnear Regresson Model The lnear regresson equaton s: where + β + β e for,..., and are observable varables e s a random error How can an estmaton rule be constructed for the unknown

More information

Lecture 3 Stat102, Spring 2007

Lecture 3 Stat102, Spring 2007 Lecture 3 Stat0, Sprng 007 Chapter 3. 3.: Introducton to regresson analyss Lnear regresson as a descrptve technque The least-squares equatons Chapter 3.3 Samplng dstrbuton of b 0, b. Contnued n net lecture

More information

Example: (13320, 22140) =? Solution #1: The divisors of are 1, 2, 3, 4, 5, 6, 9, 10, 12, 15, 18, 20, 27, 30, 36, 41,

Example: (13320, 22140) =? Solution #1: The divisors of are 1, 2, 3, 4, 5, 6, 9, 10, 12, 15, 18, 20, 27, 30, 36, 41, The greatest common dvsor of two ntegers a and b (not both zero) s the largest nteger whch s a common factor of both a and b. We denote ths number by gcd(a, b), or smply (a, b) when there s no confuson

More information

One-sided finite-difference approximations suitable for use with Richardson extrapolation

One-sided finite-difference approximations suitable for use with Richardson extrapolation Journal of Computatonal Physcs 219 (2006) 13 20 Short note One-sded fnte-dfference approxmatons sutable for use wth Rchardson extrapolaton Kumar Rahul, S.N. Bhattacharyya * Department of Mechancal Engneerng,

More information

Chapter 15 - Multiple Regression

Chapter 15 - Multiple Regression Chapter - Multple Regresson Chapter - Multple Regresson Multple Regresson Model The equaton that descrbes how the dependent varable y s related to the ndependent varables x, x,... x p and an error term

More information

18.1 Introduction and Recap

18.1 Introduction and Recap CS787: Advanced Algorthms Scrbe: Pryananda Shenoy and Shjn Kong Lecturer: Shuch Chawla Topc: Streamng Algorthmscontnued) Date: 0/26/2007 We contnue talng about streamng algorthms n ths lecture, ncludng

More information

Grover s Algorithm + Quantum Zeno Effect + Vaidman

Grover s Algorithm + Quantum Zeno Effect + Vaidman Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

SPANC -- SPlitpole ANalysis Code User Manual

SPANC -- SPlitpole ANalysis Code User Manual Functonal Descrpton of Code SPANC -- SPltpole ANalyss Code User Manual Author: Dale Vsser Date: 14 January 00 Spanc s a code created by Dale Vsser for easer calbratons of poston spectra from magnetc spectrometer

More information

Foundations of Arithmetic

Foundations of Arithmetic Foundatons of Arthmetc Notaton We shall denote the sum and product of numbers n the usual notaton as a 2 + a 2 + a 3 + + a = a, a 1 a 2 a 3 a = a The notaton a b means a dvdes b,.e. ac = b where c s an

More information

Linear Feature Engineering 11

Linear Feature Engineering 11 Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19

More information

Notes on Frequency Estimation in Data Streams

Notes on Frequency Estimation in Data Streams Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to

More information

x i1 =1 for all i (the constant ).

x i1 =1 for all i (the constant ). Chapter 5 The Multple Regresson Model Consder an economc model where the dependent varable s a functon of K explanatory varables. The economc model has the form: y = f ( x,x,..., ) xk Approxmate ths by

More information

ECONOMICS 351*-A Mid-Term Exam -- Fall Term 2000 Page 1 of 13 pages. QUEEN'S UNIVERSITY AT KINGSTON Department of Economics

ECONOMICS 351*-A Mid-Term Exam -- Fall Term 2000 Page 1 of 13 pages. QUEEN'S UNIVERSITY AT KINGSTON Department of Economics ECOOMICS 35*-A Md-Term Exam -- Fall Term 000 Page of 3 pages QUEE'S UIVERSITY AT KIGSTO Department of Economcs ECOOMICS 35* - Secton A Introductory Econometrcs Fall Term 000 MID-TERM EAM ASWERS MG Abbott

More information

Interpreting Slope Coefficients in Multiple Linear Regression Models: An Example

Interpreting Slope Coefficients in Multiple Linear Regression Models: An Example CONOMICS 5* -- Introducton to NOT CON 5* -- Introducton to NOT : Multple Lnear Regresson Models Interpretng Slope Coeffcents n Multple Lnear Regresson Models: An xample Consder the followng smple lnear

More information

8.6 The Complex Number System

8.6 The Complex Number System 8.6 The Complex Number System Earler n the chapter, we mentoned that we cannot have a negatve under a square root, snce the square of any postve or negatve number s always postve. In ths secton we want

More information

LECTURE 9 CANONICAL CORRELATION ANALYSIS

LECTURE 9 CANONICAL CORRELATION ANALYSIS LECURE 9 CANONICAL CORRELAION ANALYSIS Introducton he concept of canoncal correlaton arses when we want to quantfy the assocatons between two sets of varables. For example, suppose that the frst set of

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus

More information

Section 8.3 Polar Form of Complex Numbers

Section 8.3 Polar Form of Complex Numbers 80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the

More information

Supplementary Notes for Chapter 9 Mixture Thermodynamics

Supplementary Notes for Chapter 9 Mixture Thermodynamics Supplementary Notes for Chapter 9 Mxture Thermodynamcs Key ponts Nne major topcs of Chapter 9 are revewed below: 1. Notaton and operatonal equatons for mxtures 2. PVTN EOSs for mxtures 3. General effects

More information

Lecture 6: Introduction to Linear Regression

Lecture 6: Introduction to Linear Regression Lecture 6: Introducton to Lnear Regresson An Manchakul amancha@jhsph.edu 24 Aprl 27 Lnear regresson: man dea Lnear regresson can be used to study an outcome as a lnear functon of a predctor Example: 6

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016 U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and

More information

Influence Diagnostics on Competing Risks Using Cox s Model with Censored Data. Jalan Gombak, 53100, Kuala Lumpur, Malaysia.

Influence Diagnostics on Competing Risks Using Cox s Model with Censored Data. Jalan Gombak, 53100, Kuala Lumpur, Malaysia. Proceedngs of the 8th WSEAS Internatonal Conference on APPLIED MAHEMAICS, enerfe, Span, December 16-18, 5 (pp14-138) Influence Dagnostcs on Competng Rsks Usng Cox s Model wth Censored Data F. A. M. Elfak

More information

The Granular Origins of Aggregate Fluctuations : Supplementary Material

The Granular Origins of Aggregate Fluctuations : Supplementary Material The Granular Orgns of Aggregate Fluctuatons : Supplementary Materal Xaver Gabax October 12, 2010 Ths onlne appendx ( presents some addtonal emprcal robustness checks ( descrbes some econometrc complements

More information

28. SIMPLE LINEAR REGRESSION III

28. SIMPLE LINEAR REGRESSION III 8. SIMPLE LINEAR REGRESSION III Ftted Values and Resduals US Domestc Beers: Calores vs. % Alcohol To each observed x, there corresponds a y-value on the ftted lne, y ˆ = βˆ + βˆ x. The are called ftted

More information

Statistics MINITAB - Lab 2

Statistics MINITAB - Lab 2 Statstcs 20080 MINITAB - Lab 2 1. Smple Lnear Regresson In smple lnear regresson we attempt to model a lnear relatonshp between two varables wth a straght lne and make statstcal nferences concernng that

More information

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

More information

Feature Selection: Part 1

Feature Selection: Part 1 CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?

More information

Gaussian Mixture Models

Gaussian Mixture Models Lab Gaussan Mxture Models Lab Objectve: Understand the formulaton of Gaussan Mxture Models (GMMs) and how to estmate GMM parameters. You ve already seen GMMs as the observaton dstrbuton n certan contnuous

More information

DUE: WEDS FEB 21ST 2018

DUE: WEDS FEB 21ST 2018 HOMEWORK # 1: FINITE DIFFERENCES IN ONE DIMENSION DUE: WEDS FEB 21ST 2018 1. Theory Beam bendng s a classcal engneerng analyss. The tradtonal soluton technque makes smplfyng assumptons such as a constant

More information

SIMPLE LINEAR REGRESSION

SIMPLE LINEAR REGRESSION Smple Lnear Regresson and Correlaton Introducton Prevousl, our attenton has been focused on one varable whch we desgnated b x. Frequentl, t s desrable to learn somethng about the relatonshp between two

More information

Systems of Equations (SUR, GMM, and 3SLS)

Systems of Equations (SUR, GMM, and 3SLS) Lecture otes on Advanced Econometrcs Takash Yamano Fall Semester 4 Lecture 4: Sstems of Equatons (SUR, MM, and 3SLS) Seemngl Unrelated Regresson (SUR) Model Consder a set of lnear equatons: $ + ɛ $ + ɛ

More information

Structure and Drive Paul A. Jensen Copyright July 20, 2003

Structure and Drive Paul A. Jensen Copyright July 20, 2003 Structure and Drve Paul A. Jensen Copyrght July 20, 2003 A system s made up of several operatons wth flow passng between them. The structure of the system descrbes the flow paths from nputs to outputs.

More information

[The following data appear in Wooldridge Q2.3.] The table below contains the ACT score and college GPA for eight college students.

[The following data appear in Wooldridge Q2.3.] The table below contains the ACT score and college GPA for eight college students. PPOL 59-3 Problem Set Exercses n Smple Regresson Due n class /8/7 In ths problem set, you are asked to compute varous statstcs by hand to gve you a better sense of the mechancs of the Pearson correlaton

More information

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton

More information

Curve Fitting with the Least Square Method

Curve Fitting with the Least Square Method WIKI Document Number 5 Interpolaton wth Least Squares Curve Fttng wth the Least Square Method Mattheu Bultelle Department of Bo-Engneerng Imperal College, London Context We wsh to model the postve feedback

More information

x = , so that calculated

x = , so that calculated Stat 4, secton Sngle Factor ANOVA notes by Tm Plachowsk n chapter 8 we conducted hypothess tests n whch we compared a sngle sample s mean or proporton to some hypotheszed value Chapter 9 expanded ths to

More information

Econ Statistical Properties of the OLS estimator. Sanjaya DeSilva

Econ Statistical Properties of the OLS estimator. Sanjaya DeSilva Econ 39 - Statstcal Propertes of the OLS estmator Sanjaya DeSlva September, 008 1 Overvew Recall that the true regresson model s Y = β 0 + β 1 X + u (1) Applyng the OLS method to a sample of data, we estmate

More information

Turbulence classification of load data by the frequency and severity of wind gusts. Oscar Moñux, DEWI GmbH Kevin Bleibler, DEWI GmbH

Turbulence classification of load data by the frequency and severity of wind gusts. Oscar Moñux, DEWI GmbH Kevin Bleibler, DEWI GmbH Turbulence classfcaton of load data by the frequency and severty of wnd gusts Introducton Oscar Moñux, DEWI GmbH Kevn Blebler, DEWI GmbH Durng the wnd turbne developng process, one of the most mportant

More information