NONLINEAR STRUCTURE IDENTIFICATION WITH LINEAR LEAST SQUARES AND ANOVA. Ingela Lind


Division of Automatic Control, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden

Abstract: The objective of this paper is to find the structure of a nonlinear system from measurement data, as a prior step to model estimation. Applying ANOVA directly on a dataset is compared to applying ANOVA on the residuals from a linear model. The distributions of the involved test variables are computed and used to show that ANOVA is effective in finding which regressors give linear effects and which regressors produce nonlinear effects. The ability to find nonlinear substructures depending on only subsets of regressors is an ANOVA feature which is shown not to be affected by subtracting a linear model. Copyright © 2005 IFAC

Keywords: System identification, Nonlinear systems, Structural properties, Analysis of variance, Linear estimation

1 INTRODUCTION

The aim of this paper is to quantify the difference between two different ways to use Analysis of Variance (ANOVA) as a tool for nonlinear system identification. In general, system identification is data centred. Simple things are tried first: is a linear model sufficient to describe the data?
To invalidate a linear model, the residuals are examined with whiteness tests, and the fit of the model on validation data is used to form an opinion of how good the model is. Thus, a linear model is often available, or easily computed. ANOVA (Miller, 1997; Montgomery, 1991) can be used for finding proper regressors and model structure for a nonlinear model by fitting a locally constant model to the response surface of the data (Lind, 2000; Lind and Ljung, 2003, 2005). A clever parameterisation of a locally constant model makes it possible to perform hypothesis tests in a balanced and computationally very effective way. Let

y(t) = g(u(t), u(t − T), …, u(t − kT)) + e(t) = θ₁ᵀ ϕ₁(t) + g₂(ϕ₂(t)) + e(t)

be a general nonlinear finite impulse response model with input u(t) and output y(t), sampled with sampling time T. Let ϕ₁(t) be a vector containing the regressors that affect the output linearly (with parameters θ₁) and ϕ₂(t) the regressors that affect y(t) nonlinearly through the function g₂(·). Three main questions can be answered both by running ANOVA directly on identification data and by running ANOVA on the residuals from a linear model:

Should the regressor u(t − k_i T) be included in the model at all, and should it be included in ϕ₁(t) or ϕ₂(t)?

What interaction pattern is present? Can g(·) be divided into additive parts containing only subsets of the regressors? What subsets?

Are there nonlinear effects in the residuals from a linear model?
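As an aside, the whiteness test mentioned above can be sketched numerically. The following is a generic Ljung-Box style autocorrelation check, not part of the paper; the function name, lag count, and data are all illustrative, and 31.41 is the 95% point of the χ²(20) distribution:

```python
import numpy as np

def is_white(eps, max_lag=20):
    """Ljung-Box style whiteness check of residuals.
    Q = N(N+2) * sum_k r_k^2 / (N-k) is approximately chi2(max_lag)
    for a white sequence; 31.41 is the 95% point of chi2(20)."""
    N = len(eps)
    e = eps - eps.mean()
    denom = np.sum(e ** 2)
    r = np.array([np.sum(e[k:] * e[:-k]) / denom for k in range(1, max_lag + 1)])
    Q = N * (N + 2) * np.sum(r ** 2 / (N - np.arange(1, max_lag + 1)))
    return Q < 31.41

rng = np.random.default_rng(0)
white = rng.standard_normal(2000)
colored = np.convolve(white, [1.0, 0.8], mode="full")[:2000]  # MA(1) residuals
print(is_white(white), is_white(colored))   # typically True False
```

A model whose residuals fail such a test is invalidated, which is the usual trigger for looking at nonlinear structure.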

There is much to be gained by the division into a linear and a nonlinear subset of the regressors (Ljung et al., 2004) instead of assuming a full nonlinear model. The complexity of any black-box type of model depends heavily on the size of ϕ₂(t). An idealised case is examined to quantify the difference between running ANOVA directly on identification data and first estimating an affine (linear with constant offset) model and then running ANOVA on its residuals. The input signal is chosen to keep computations simple, while being sufficiently exciting to make a nonlinear black-box identification possible.

The structure of the paper is as follows: First, the true data model is stated. In section 3, the linear model is estimated and the residuals are formed. In section 4, ANOVA is run directly on the estimation data, and in section 5 ANOVA is run on the residuals from the linear model. Section 6 explores the differences between the two approaches and gives examples. The conclusions are made in section 7.

2 TRUE DATA MODEL

The system is a finite impulse response model:

y(t) = g(u(t), u(t − T)) + e(t),   (1)

where e(t) ∈ N(0, σ²) is independent identically distributed Gaussian noise with mean zero and variance σ². The input signal u(t) is a pseudo-random multi-level signal with mean zero, in which each level combination of u(t) and u(t − T) occurs equally many times. The last condition defines a balanced dataset and will give independence between sums of squares. This type of signal can be given a nearly white spectrum, see Godfrey (1993). The number of levels the signal assumes is m and the levels are denoted u_i, where i = 1, …, m. An integer number, n, of periods from the input/output data are collected and denoted by Z^N (N = np, where p is the length of the period). g(·) is a function of two variables. The results extend to more regressors, but since the equations do not fit in this short format, only two regressors are considered below.

3 ESTIMATION OF THE AFFINE MODEL

A linear FIR model with an extra parameter for the mean level of the signal,
ŷ(t) = â u(t) + b̂ u(t − T) + ĉ = [â b̂ ĉ] [u(t), u(t − T), 1]ᵀ = θ̂ᵀ ϕ(t),

is estimated using linear least squares (Ljung, 1999, page 203). The loss function

V(θ, Z^N) = (1/N) Σ_{t=1}^N ½ (y(t) − ŷ(t))²

is minimised by the estimate

θ̂_LS = [â b̂ ĉ]ᵀ = arg min_θ V(θ, Z^N) = [Σ_{t=1}^N ϕ(t)ϕᵀ(t)]⁻¹ Σ_{t=1}^N ϕ(t) y(t).

For the pseudo-random multi-level input signal we have that:

(1/N) Σ_{t=1}^N u(t) = 0,  (1/N) Σ_{t=1}^N u(t − T) = 0,   (2)

(1/N) Σ_{t=1}^N u(t) u(t − T) = 0,   (3)

(1/N) Σ_{t=1}^N u²(t) = (1/m) Σ_{i=1}^m u_i² = R_u.

If assumption (3) is not valid, change variables to ũ(t − T) = u(t − T) − α u(t), such that (1/N) Σ_{t=1}^N u(t) ũ(t − T) = 0. Now,

(1/N) Σ_{t=1}^N ϕ(t)ϕᵀ(t) = diag(R_u, R_u, 1)

and

(1/N) Σ_{t=1}^N ϕ(t) y(t) = (1/(nm²)) Σ_{i=1}^m Σ_{j=1}^m Σ_{k=1}^n [u_i, u_j, 1]ᵀ (g(u_i, u_j) + e_ijk),

where e_ijk is the value of e(t) when u(t) = u_i and u(t − T) = u_j for the k:th time, that is, the k:th measurement in cell ij. With vector notation,

f = [f_1, …, f_m]ᵀ,  f_i = (1/m) Σ_{j=1}^m g(u_i, u_j),  that is, f = (1/m) G 1,

h = [h_1, …, h_m]ᵀ,  h_j = (1/m) Σ_{i=1}^m g(u_i, u_j),  that is, h = (1/m) Gᵀ 1,

u = [u_1, …, u_m]ᵀ,
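To make sections 2 and 3 concrete, here is a small sketch, not from the paper: the level values, n, σ, and the example true system are all hypothetical choices. Rather than a true periodic pseudo-random multi-level signal, it simply enforces the balance condition by enumerating every level pair n times, then fits the affine model by linear least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: m = 3 zero-mean levels, and every level pair
# (u(t), u(t-T)) occurs exactly n times (the balance condition).
levels = np.array([-1.0, 0.0, 1.0])
m, n, sigma = 3, 50, 0.1
cells = np.array([(i, j) for i in range(m) for j in range(m)] * n)
rng.shuffle(cells)
u, u_lag = levels[cells[:, 0]], levels[cells[:, 1]]

# Example true system, linear here, so the affine model captures it fully.
y = 2.0 * u - 1.0 * u_lag + sigma * rng.standard_normal(len(u))

# Affine model y(t) = a*u(t) + b*u(t-T) + c estimated by least squares.
Phi = np.column_stack([u, u_lag, np.ones_like(u)])
theta_ls, *_ = np.linalg.lstsq(Phi, y, rcond=None)
eps = y - Phi @ theta_ls           # residuals used by the ANOVA in section 5

# Balance check: all m^2 cells contain exactly n samples.
_, counts = np.unique(cells, axis=0, return_counts=True)
print(counts.min() == counts.max() == n, np.round(theta_ls, 1))
```

With this balanced input the regressor matrix is orthogonal in the sense of (2)-(3), so the three estimates decouple just as in (4).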

v = [v_1, …, v_m]ᵀ,  v_i = (1/(mn)) Σ_{j=1}^m Σ_{k=1}^n e_ijk,  that is, v = (1/m) E 1,

w = [w_1, …, w_m]ᵀ,  w_j = (1/(mn)) Σ_{i=1}^m Σ_{k=1}^n e_ijk,  that is, w = (1/m) Eᵀ 1,

where G_ij = g(u_i, u_j) and E_ij = (1/n) Σ_{k=1}^n e_ijk is the noise average in cell ij, the parameter estimates can be written as:

θ̂_LS = [ (1/(m R_u)) uᵀ(f + v),  (1/(m R_u)) uᵀ(h + w),  (1/m²) 1ᵀ(G + E) 1 ]ᵀ.   (4)

The residuals from the affine model are denoted ɛ(t) = g(u(t), u(t − T)) + e(t) − (θ̂_LS)ᵀ ϕ(t).

4 ANOVA APPLIED TO THE DATA DIRECTLY

The statistical analysis method ANOVA (Miller (1997); Montgomery (1991), among many others) is a widespread tool for finding out which factors contribute to given measurements. It has been used and discussed since the 1930s and is a common tool in, e.g., medicine and quality control applications. The method is based on hypothesis tests with F-distributed test variables computed from the residual quadratic sum. There are several slightly different variants (Miller, 1997). Here the fixed effects model with two factors will be used. Assume that the collected measurement data can be described by a linear statistical model,

y_ijk = µ + τ_i + β_j + (τβ)_ij + w_ijk,   (5)

where the w_ijk are independent Gaussian distributed variables with zero mean and constant variance σ². The parameter µ is the overall mean. For each level i = 1, …, m of the first regressor (u(t)) there is a corresponding effect τ_i, and for each level j = 1, …, m of the second regressor (u(t − T)) the corresponding effect is β_j. The interaction between the regressors is described by the parameters (τβ)_ij. The sum of a batch of indexed parameters over any of its indices is zero. For a linear (y(t) = a u(t) + b u(t − T) + e(t)) or a nonlinear additive system (y(t) = g₁(u(t)) + g₂(u(t − T)) + e(t)), the interaction parameters (τβ)_ij are zero. These are needed when the nonlinearities have a non-additive nature, i.e., y(t) = g(u(t), u(t − T)) + e(t). Since the regressors are quantised, it is a very simple procedure to estimate the model parameters by the computation of means. For example, the constant µ would correspond to ȳ···, while the effects from the first regressor are computed as τ̂_i = ȳ_i·· − ȳ···.

In each cell ij, for the model (1), y_ijk = g(u_i, u_j) + e_ijk. This gives the total mean over all cells (all data)

ȳ··· = (1/m²) 1ᵀ (G + E) 1.

The m different row means ȳ_i·· and column means ȳ·j· are given by

ȳ_i·· = f_i + v_i = (1/m) iᵀ (G + E) 1,

ȳ·j· = h_j + w_j = (1/m) 1ᵀ (G + E) j,

respectively. Here i and j are vectors with one nonzero element (equal to one) in row i and j respectively. The m² different cell means are given by

ȳ_ij· = iᵀ (G + E) j.

ANOVA is used for testing which of the parameters significantly differ from zero and for estimating the values of the parameters with standard errors, which makes it a tool for exploratory data analysis. The residual quadratic sum, SS_T, is used to design test variables for the different batches (e.g., the τ_i:s) of parameters. The total residual sum of squares is divided into the four parts

SS_A^d ≜ nm Σ_{i=1}^m (ȳ_i·· − ȳ···)² = nm Σ_{i=1}^m τ̂_i² = nm Σ_{i=1}^m ((iᵀ − (1/m) 1ᵀ)(f + v))² = (f + v)ᵀ A (f + v),

SS_B^d ≜ nm Σ_{j=1}^m (ȳ·j· − ȳ···)² = nm Σ_{j=1}^m β̂_j² = nm Σ_{j=1}^m ((jᵀ − (1/m) 1ᵀ)(h + w))² = (h + w)ᵀ A (h + w),

SS_AB^d ≜ n Σ_{i=1}^m Σ_{j=1}^m (τβ)̂_ij² = n Σ_{i=1}^m Σ_{j=1}^m (ȳ_ij· − ȳ_i·· − ȳ·j· + ȳ···)² = n Σ_{i=1}^m Σ_{j=1}^m ((i − (1/m)1)ᵀ Y (j − (1/m)1))² = (1/m) trace(Yᵀ A Y) − (1/m²) 1ᵀ Yᵀ A Y 1,

SS_E^d ≜ Σ_{i=1}^m Σ_{j=1}^m Σ_{k=1}^n w_ijk² = Σ_{i,j,k} (y_ijk − ȳ_ij·)² = Σ_{i,j,k} (e_ijk − ē_ij·)²,

with A ≜ nm(I − (1/m) 1 1ᵀ) and Y = G + E, where d stands for sums of squares computed directly from the dataset Z^N. The index for each part is related to one batch of parameters in (5). If all the parameters in the batch are zero, the corresponding quadratic sum is χ²-distributed if divided by the true variance σ² (see, e.g., Montgomery (1991, page 59)). Since the true variance is not available, the estimate σ̂² = SS_E^d/(m²(n − 1)) is used to form F-distributed test variables, e.g., for τ_i:

v_A^d = (SS_A^d/(m − 1)) / (SS_E^d/(m²(n − 1))),   H₀^{A,d}: τ_i = 0 ∀i.

If all the τ_i:s are zero, v_A^d belongs to an F-distribution with m − 1 and m²(n − 1) degrees of freedom. If any τ_i is nonzero it will give a large value of v_A^d, compared to an F-table. This is, of course, a test of the null hypothesis that all the τ_i:s are zero, which corresponds to the case that the regressor u(t) does not have any main effect on the measurements y(t).

5 ANOVA APPLIED TO THE RESIDUALS FROM THE AFFINE MODEL

In each cell ij we have the residuals ɛ_ijk = g(u_i, u_j) + e_ijk − â u_i − b̂ u_j − ĉ, where the parameters â, b̂ and ĉ are computed according to (4). The total mean is now given by

ɛ̄··· = (1/(nm²)) Σ_{i,j,k} ɛ_ijk = (1/m²) 1ᵀ (G + E) 1 − (â/m) uᵀ 1 − (b̂/m) uᵀ 1 − ĉ = 0,

where the last equality is due to (2) and (4). The row means change to

ɛ̄_i·· = (1/(nm)) Σ_{j,k} ɛ_ijk = iᵀ (f + v − â u) − ĉ,

since the sum over u_j is zero. The column means are given by

ɛ̄·j· = (1/(nm)) Σ_{i,k} ɛ_ijk = jᵀ (h + w − b̂ u) − ĉ,

and, finally, the cell means are given by

ɛ̄_ij· = (1/n) Σ_k ɛ_ijk = iᵀ (G + E) j − â iᵀ u − b̂ uᵀ j − ĉ.

The sums of squares SS_A and SS_B are changed to

SS_A^r ≜ nm Σ_{i=1}^m (ɛ̄_i·· − ɛ̄···)² = nm Σ_{i=1}^m ((iᵀ − (1/m) 1ᵀ − (iᵀu/(m R_u)) uᵀ)(f + v))² = (f + v)ᵀ A₁ (f + v),

with A₁ ≜ nm(I − (1/m) 1 1ᵀ − (1/(m R_u)) u uᵀ), and

SS_B^r ≜ nm Σ_{j=1}^m (ɛ̄·j· − ɛ̄···)² = nm Σ_{j=1}^m ((jᵀ − (1/m) 1ᵀ − (jᵀu/(m R_u)) uᵀ)(h + w))² = (h + w)ᵀ A₁ (h + w).

It is easy to verify that

SS_AB^r = n Σ_{i,j} (ɛ̄_ij· − ɛ̄_i·· − ɛ̄·j· + ɛ̄···)² = SS_AB^d

and that

SS_E^r = Σ_{i,j,k} (ɛ_ijk − ɛ̄_ij·)² = Σ_{i,j,k} (e_ijk − ē_ij·)² = SS_E^d,   (6)

where r stands for sums of squares computed from the residuals from the affine model.

6 DIFFERENCES AND DISTRIBUTIONS

The sum of squares
corresponding to the regressor u(t), SS_A, changes with the following amount when an affine model is extracted from the data:

SS_A^d − SS_A^r = (f + v)ᵀ A₂ (f + v),  with A₂ = A − A₁ = (n/R_u) u uᵀ.

The change in the sum of squares corresponding to the regressor u(t − T) is

SS_B^d − SS_B^r = (h + w)ᵀ A₂ (h + w).

6.1 Distributions

From, e.g., Miller (1997), the following is known:

SS_A^d ∈ σ² χ²(m − 1, fᵀAf/σ²),

SS_B^d ∈ σ² χ²(m − 1, hᵀAh/σ²),

SS_AB^d ∈ σ² χ²((m − 1)², [(1/m) trace(GᵀAG) − (1/m²) 1ᵀGᵀAG 1]/σ²),

SS_E^d ∈ σ² χ²(m²(n − 1)),

where χ²(d, ω) means distributed as a non-central chi-square distribution with d degrees of freedom and non-centrality parameter ω. The sums of squares SS_A^d, SS_B^d, SS_AB^d and SS_E^d are independently distributed if the dataset is balanced, i.e., if all combinations of u(t) = u_i, u(t − T) = u_j are present equally many times in the input.
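The balanced two-way decomposition, and the invariance of SS_AB and SS_E in equation (6), can be checked numerically. This sketch is not from the paper; the levels, n, σ, and the example system are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical balanced experiment: m = 3 symmetric zero-mean levels.
levels = np.array([-1.0, 0.0, 1.0])
m, n, sigma = 3, 40, 0.2
cells = np.array([(i, j) for i in range(m) for j in range(m)] * n)
rng.shuffle(cells)
u, u_lag = levels[cells[:, 0]], levels[cells[:, 1]]
# Example system: nonlinear in u(t), linear in u(t-T), no interaction.
y = u**2 + 0.5 * u_lag + sigma * rng.standard_normal(len(u))

def anova_ss(z, i_idx, j_idx, m, n):
    """Balanced two-way sums of squares (SS_A, SS_B, SS_AB, SS_E)."""
    cell = np.zeros((m, m))
    np.add.at(cell, (i_idx, j_idx), z)
    cell /= n                                  # cell means
    gi, gj, g = cell.mean(axis=1), cell.mean(axis=0), cell.mean()
    ss_a = n * m * np.sum((gi - g) ** 2)
    ss_b = n * m * np.sum((gj - g) ** 2)
    ss_ab = n * np.sum((cell - gi[:, None] - gj[None, :] + g) ** 2)
    ss_e = np.sum((z - cell[i_idx, j_idx]) ** 2)
    return ss_a, ss_b, ss_ab, ss_e

# ANOVA directly on the data, and on the residuals of the affine model.
ss_d = anova_ss(y, cells[:, 0], cells[:, 1], m, n)
Phi = np.column_stack([u, u_lag, np.ones_like(u)])
eps = y - Phi @ np.linalg.lstsq(Phi, y, rcond=None)[0]
ss_r = anova_ss(eps, cells[:, 0], cells[:, 1], m, n)

# Equation (6): SS_AB and SS_E are unchanged by subtracting the affine model.
print(np.allclose(ss_d[2], ss_r[2]), np.allclose(ss_d[3], ss_r[3]))  # -> True True
# The linear effect of u(t-T) is removed: SS_B drops from large to small.
print(round(ss_d[1], 1), round(ss_r[1], 1))
```

The affine fit shifts every cell by a value that is additive in the two factor levels, which is why the interaction and within-cell sums are untouched.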

To find the distributions of SS_A^r, SS_B^r, SS_A^d − SS_A^r and SS_B^d − SS_B^r the following theorems, numbered as in Khatri (1980), can be applied. Let v ∈ N(µ, σ_v² V) and q = vᵀAv + 2lᵀv + c. The notation in Theorem 2 is changed from matrix valued to vector valued v.

Theorem 2. q ∈ λσ_v² χ²(d, Ω/(λ²σ_v²)) iff (i) λ is the nonzero eigenvalue of VA (or AV) repeated d times, (ii) (lᵀ + µᵀA)V = kᵀVAV for some vector k, and (iii) Ω = (lᵀ + µᵀA)V(lᵀ + µᵀA)ᵀ and µᵀAµ + 2lᵀµ + c = (lᵀ + µᵀA)V(lᵀ + µᵀA)ᵀ/λ. Moreover, q ∈ λσ_v² χ²(d) iff (i) VAVAV = λVAV and (ii) (lᵀ + µᵀA)V = 0 = µᵀAµ + 2lᵀµ + c.

Theorem 4. Let q_i = vᵀA_iv + 2l_iᵀv + c_i, i = 1, 2, where A₁ and A₂ are symmetric matrices. Then q₁ and q₂ are independently distributed iff (i) VA₁VA₂V = 0, (ii) VA₂V(A₁µ + l₁) = VA₁V(A₂µ + l₂) = 0 and (iii) (l₁ + A₁µ)ᵀV(l₂ + A₂µ) = 0.

Now set q₁ = SS_A^r and q₂ = SS_A^d − SS_A^r. Let l₁ = A₁f, c₁ = fᵀA₁f, l₂ = A₂f, c₂ = fᵀA₂f and v ∈ N(0, (σ²/(nm)) I). Then independence is shown by:

(i) VA₁VA₂V = A₁A₂ = (n²m/R_u)(I − (1/m) 1 1ᵀ − (1/(m R_u)) u uᵀ) u uᵀ = 0, since uᵀu = mR_u and 1ᵀu = 0.

(ii) VA₂V(A₁µ + l₁) = A₂A₁f = 0 and VA₁V(A₂µ + l₂) = A₁A₂f = 0.

(iii) (l₁ + A₁µ)ᵀV(l₂ + A₂µ) = fᵀA₁A₂f = 0, for the same reasons as in (i).

Since conditions (i), (ii) and (iii) in Theorem 4 are fulfilled, q₁ and q₂ (that is, SS_A^r and SS_A^d − SS_A^r) are independently distributed. The same argument is valid for the independence of SS_B^r and SS_B^d − SS_B^r if all occurrences of f are replaced with h and v with w. If assumption (3) is not valid, the independence is lost.

To compute the distributions of q₁ and q₂ the conditions in Theorem 2 are checked:

(i) λ₁ = nm is the nonzero eigenvalue of VA₁, repeated d₁ = m − 2 times, and λ₂ = nm is the nonzero eigenvalue of VA₂, with d₂ = 1.

(ii) (l₁ᵀ + µᵀA₁)V = fᵀA₁ = kᵀVA₁V for k = f, and (l₂ᵀ + µᵀA₂)V = fᵀA₂ = kᵀVA₂V for k = f.

(iii) Ω₁ = (l₁ᵀ + µᵀA₁)V(l₁ᵀ + µᵀA₁)ᵀ = fᵀA₁A₁f = λ₁ fᵀA₁f and µᵀA₁µ + 2l₁ᵀµ + c₁ = c₁ = fᵀA₁f = Ω₁/λ₁; likewise Ω₂ = fᵀA₂A₂f = λ₂ fᵀA₂f and c₂ = fᵀA₂f = Ω₂/λ₂.

By Theorem 2,

SS_A^r = q₁ ∈ σ² χ²(m − 2, fᵀA₁f/σ²),

SS_A^d − SS_A^r = q₂ ∈ σ² χ²(1, fᵀA₂f/σ²).

As before, all SS_A can be replaced by SS_B if f is replaced by h.

6.2 Interpretation

There are five test variables of interest:

v_AB = (SS_AB/(m − 1)²) / (SS_E/(m²(n − 1))),   H₀^{AB}: (τβ)_ij = 0 ∀i, j,

v_A^d = (SS_A^d/(m − 1)) / (SS_E/(m²(n − 1))),   H₀^{A,d}: τ_i = 0 ∀i,

v_B^d = (SS_B^d/(m − 1)) / (SS_E/(m²(n − 1))),   H₀^{B,d}: β_j = 0 ∀j,

v_A^r = (SS_A^r/(m − 2)) / (SS_E/(m²(n − 1))),   H₀^{A,r}: τ_i = 0 ∀i,

v_B^r = (SS_B^r/(m − 2)) / (SS_E/(m²(n − 1))),   H₀^{B,r}: β_j = 0 ∀j.

All of these belong to F-distributions if the corresponding null hypotheses are true, so large values of the test variables (compared to an F(d₁, d₂)-table) are interpreted as effects from the corresponding regressor. If v_AB is large, an interaction effect between u(t) and u(t − T) is assumed. This means that the system cannot be decomposed into additive subsystems. If v_AB is small, the null hypothesis cannot be rejected, so it is assumed that the system can be decomposed into additive subsystems. If both v_A^d and v_A^r are large, the interpretation is that the effect from u(t) is nonlinear. If v_A^d is large, but v_A^r is small, the effect from u(t) can be described by the linear model. If both v_A^d and v_A^r are small, u(t) cannot be shown to affect the output of the system. The same reasoning built on v_B^d and v_B^r is valid for the effects from u(t − T). Since SS_AB is not changed when the linear model is extracted from the data, see (6), we can conclude that all information about the interactions in the system is left in the residuals. The interaction information is not destroyed by subtracting a linear model in a balanced dataset.
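This decision logic can be sketched end to end: compute the five test variables, compare with F critical values, and classify each regressor. A sketch under hypothetical settings (levels, n, σ, the example system, and the significance level α are all illustrative), using scipy for the F quantiles:

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(0)
levels = np.array([-1.0, 0.0, 1.0])
m, n, sigma, alpha = 3, 40, 0.2, 0.01

cells = np.array([(i, j) for i in range(m) for j in range(m)] * n)
rng.shuffle(cells)
u, u_lag = levels[cells[:, 0]], levels[cells[:, 1]]
# Hypothetical system: u(t) enters nonlinearly, u(t-T) linearly.
y = u**2 + 0.5 * u_lag + sigma * rng.standard_normal(len(u))

def ss_parts(z):
    """Balanced two-way sums of squares of z: (SS_A, SS_B, SS_AB, SS_E)."""
    cell = np.zeros((m, m))
    np.add.at(cell, (cells[:, 0], cells[:, 1]), z)
    cell /= n
    gi, gj, g = cell.mean(axis=1), cell.mean(axis=0), cell.mean()
    return (n * m * np.sum((gi - g)**2), n * m * np.sum((gj - g)**2),
            n * np.sum((cell - gi[:, None] - gj[None, :] + g)**2),
            np.sum((z - cell[cells[:, 0], cells[:, 1]])**2))

ssa_d, ssb_d, ssab, sse = ss_parts(y)
Phi = np.column_stack([u, u_lag, np.ones_like(u)])
eps = y - Phi @ np.linalg.lstsq(Phi, y, rcond=None)[0]
ssa_r, ssb_r, _, _ = ss_parts(eps)

dfe = m * m * (n - 1)
mse = sse / dfe
# True when the F test variable exceeds the 1-alpha critical value.
big = lambda ss, df: ss / df / mse > f_dist.ppf(1 - alpha, df, dfe)

for name, vd, vr in [("u(t)", big(ssa_d, m - 1), big(ssa_r, m - 2)),
                     ("u(t-T)", big(ssb_d, m - 1), big(ssb_r, m - 2))]:
    if vd and vr:
        verdict = "nonlinear effect"
    elif vd:
        verdict = "linear effect"
    elif vr:
        verdict = "nonlinear effect hidden by the affine fit"
    else:
        verdict = "no detectable effect"
    print(name, "->", verdict)
print("interaction ->", big(ssab, (m - 1)**2))
```

With symmetric levels the quadratic term leaves both v_A^d and v_A^r large (nonlinear), while the linear u(t − T) term is absorbed by the affine model, leaving only v_B^d large.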

6.3 Linear example

Let the true system be given by y(t) = a u(t) + b u(t − T) + e(t). Then f = a u and h = b u. This gives

SS_AB ∈ σ² χ²((m − 1)², 0),

SS_A^d ∈ σ² χ²(m − 1, n m² a² R_u/σ²),   (7)

SS_B^d ∈ σ² χ²(m − 1, n m² b² R_u/σ²),   (8)

SS_A^r, SS_B^r ∈ σ² χ²(m − 2, 0).   (9)

The size of the non-centrality parameters in (7) and (8) depends on how many data are collected, the size of the true linear effects, and the variance of the input and the noise. This is what affects the power of the F-tests. In the sums of squares computed from the residuals from the linear model, all dependence on the true model is removed (9). Thus the conclusion can be made that if SS_A^r or SS_B^r are found to be large by the F-tests, then the data are probably not collected from a true linear system.

6.4 Quadratic example

Let y(t) = u²(t) + e(t). Then f = [u_1², …, u_m²]ᵀ and h = R_u 1 (so that Ah = 0), with

SS_AB ∈ σ² χ²((m − 1)², 0),

SS_A^d ∈ σ² χ²(m − 1, nc_d),

SS_B^d ∈ σ² χ²(m − 1, 0),

SS_A^r ∈ σ² χ²(m − 2, nc_r),

SS_B^r ∈ σ² χ²(m − 2, 0),

nc_d = (n/σ²)(m Σ_{i=1}^m u_i⁴ − m² R_u²),

nc_r = (n/σ²)(m Σ_{i=1}^m u_i⁴ − m² R_u² − (1/R_u)(Σ_{i=1}^m u_i³)²).

Here, it is clear that it matters how the levels of the input signal are chosen. Σ_{i=1}^m u_i³ can vary considerably while (2) is valid, since there are no constraints on the level distribution. Also, â is proportional to Σ_{i=1}^m u_i³, so a large difference between ANOVA applied directly and ANOVA applied to the residuals means that the nonlinear effect has been picked up by the affine model, due to irregular sampling of the function. If u(t) is symmetric around its mean, it is clear that Σ_{i=1}^m u_i³ = 0, so nc_d = nc_r.

7 CONCLUSIONS

Applying ANOVA directly on a dataset was compared to applying ANOVA on the residuals from a linear model estimated with linear least squares. The distributions for the sums of squares needed for the ANOVA analysis in the latter case were computed. These distributions were used to show that by combining the two approaches ANOVA is effective in finding which regressors give linear effects and which regressors give nonlinear effects. In section 6.2 it was shown how to divide the regressors into a linear and a nonlinear subset, depending on the outcome of the ANOVA tests. The ability to structure the proposed nonlinear function into additive parts depending on only subsets of regressors is an ANOVA feature which is not affected by subtraction of a linear model. The results in the paper extend to more regressors, but an important limitation is that the dataset should be balanced, see section 2.

This work has been supported by the Swedish Research Council (VR), which is gratefully acknowledged.

REFERENCES

K. Godfrey. Perturbation Signals for System Identification. Prentice Hall, New York, 1993.

C. G. Khatri. Quadratic forms in normal variables. In P. R. Krishnaiah, editor, Handbook of Statistics, volume 1, Amsterdam, 1980. North-Holland.

I. Lind. Model order selection of N-FIR models by the analysis of variance method. In Proc. IFAC Symposium SYSID 2000, Santa Barbara, Jun 2000.

I. Lind and L. Ljung. Structure selection with ANOVA: Local linear models. In P. van den Hof, B. Wahlberg, and S. Weiland, editors, Proc. 13th IFAC Symposium on System Identification, Rotterdam, the Netherlands, Aug 2003.

I. Lind and L. Ljung. Regressor selection with the analysis of variance method. Automatica, 41(4), Apr 2005.

L. Ljung. System Identification, Theory for the User. Prentice Hall, New Jersey, 2nd edition, 1999.

L. Ljung, Q. Zhang, P. Lindskog, and A. Juditsky. Modeling a non-linear electric circuit with black box and grey box models. In Proc. NOLCOS IFAC Symposium on Nonlinear Control Systems, Stuttgart, Germany, Sep 2004.

R. G. Miller, Jr. Beyond ANOVA. Chapman & Hall, London, 1997.

D. C. Montgomery. Design and Analysis of Experiments. John Wiley & Sons, New York, 3rd edition, 1991.


Reference: Chapter 14 of Montgomery (8e) Reference: Chapter 14 of Montgomery (8e) 99 Maghsoodloo The Stage Nested Designs So far emphasis has been placed on factorial experiments where all factors are crossed (i.e., it is possible to study the

More information

Lecture 15 Multiple regression I Chapter 6 Set 2 Least Square Estimation The quadratic form to be minimized is

Lecture 15 Multiple regression I Chapter 6 Set 2 Least Square Estimation The quadratic form to be minimized is Lecture 15 Multiple regression I Chapter 6 Set 2 Least Square Estimation The quadratic form to be minimized is Q = (Y i β 0 β 1 X i1 β 2 X i2 β p 1 X i.p 1 ) 2, which in matrix notation is Q = (Y Xβ) (Y

More information

General Linear Test of a General Linear Hypothesis. Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 35

General Linear Test of a General Linear Hypothesis. Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 35 General Linear Test of a General Linear Hypothesis Copyright c 2012 Dan Nettleton (Iowa State University) Statistics 611 1 / 35 Suppose the NTGMM holds so that y = Xβ + ε, where ε N(0, σ 2 I). opyright

More information

Confidence Intervals, Testing and ANOVA Summary

Confidence Intervals, Testing and ANOVA Summary Confidence Intervals, Testing and ANOVA Summary 1 One Sample Tests 1.1 One Sample z test: Mean (σ known) Let X 1,, X n a r.s. from N(µ, σ) or n > 30. Let The test statistic is H 0 : µ = µ 0. z = x µ 0

More information

Sensitivity and Asymptotic Error Theory

Sensitivity and Asymptotic Error Theory Sensitivity and Asymptotic Error Theory H.T. Banks and Marie Davidian MA-ST 810 Fall, 2009 North Carolina State University Raleigh, NC 27695 Center for Quantitative Sciences in Biomedicine North Carolina

More information

9. Model Selection. statistical models. overview of model selection. information criteria. goodness-of-fit measures

9. Model Selection. statistical models. overview of model selection. information criteria. goodness-of-fit measures FE661 - Statistical Methods for Financial Engineering 9. Model Selection Jitkomut Songsiri statistical models overview of model selection information criteria goodness-of-fit measures 9-1 Statistical models

More information

Finite Sample Confidence Regions for Parameters in Prediction Error Identification using Output Error Models

Finite Sample Confidence Regions for Parameters in Prediction Error Identification using Output Error Models Proceedings of the 7th World Congress The International Federation of Automatic Control Finite Sample Confidence Regions for Parameters in Prediction Error Identification using Output Error Models Arnold

More information

Fractional Factorial Designs

Fractional Factorial Designs k-p Fractional Factorial Designs Fractional Factorial Designs If we have 7 factors, a 7 factorial design will require 8 experiments How much information can we obtain from fewer experiments, e.g. 7-4 =

More information

Optimal input design for nonlinear dynamical systems: a graph-theory approach

Optimal input design for nonlinear dynamical systems: a graph-theory approach Optimal input design for nonlinear dynamical systems: a graph-theory approach Patricio E. Valenzuela Department of Automatic Control and ACCESS Linnaeus Centre KTH Royal Institute of Technology, Stockholm,

More information

A NEW INFORMATION THEORETIC APPROACH TO ORDER ESTIMATION PROBLEM. Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A.

A NEW INFORMATION THEORETIC APPROACH TO ORDER ESTIMATION PROBLEM. Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A. A EW IFORMATIO THEORETIC APPROACH TO ORDER ESTIMATIO PROBLEM Soosan Beheshti Munther A. Dahleh Massachusetts Institute of Technology, Cambridge, MA 0239, U.S.A. Abstract: We introduce a new method of model

More information

iron retention (log) high Fe2+ medium Fe2+ high Fe3+ medium Fe3+ low Fe2+ low Fe3+ 2 Two-way ANOVA

iron retention (log) high Fe2+ medium Fe2+ high Fe3+ medium Fe3+ low Fe2+ low Fe3+ 2 Two-way ANOVA iron retention (log) 0 1 2 3 high Fe2+ high Fe3+ low Fe2+ low Fe3+ medium Fe2+ medium Fe3+ 2 Two-way ANOVA In the one-way design there is only one factor. What if there are several factors? Often, we are

More information

Econometrics II - EXAM Answer each question in separate sheets in three hours

Econometrics II - EXAM Answer each question in separate sheets in three hours Econometrics II - EXAM Answer each question in separate sheets in three hours. Let u and u be jointly Gaussian and independent of z in all the equations. a Investigate the identification of the following

More information

Lecture 2: Linear Models. Bruce Walsh lecture notes Seattle SISG -Mixed Model Course version 23 June 2011

Lecture 2: Linear Models. Bruce Walsh lecture notes Seattle SISG -Mixed Model Course version 23 June 2011 Lecture 2: Linear Models Bruce Walsh lecture notes Seattle SISG -Mixed Model Course version 23 June 2011 1 Quick Review of the Major Points The general linear model can be written as y = X! + e y = vector

More information

Analysis of Variance

Analysis of Variance Analysis of Variance Blood coagulation time T avg A 62 60 63 59 61 B 63 67 71 64 65 66 66 C 68 66 71 67 68 68 68 D 56 62 60 61 63 64 63 59 61 64 Blood coagulation time A B C D Combined 56 57 58 59 60 61

More information

Regression Review. Statistics 149. Spring Copyright c 2006 by Mark E. Irwin

Regression Review. Statistics 149. Spring Copyright c 2006 by Mark E. Irwin Regression Review Statistics 149 Spring 2006 Copyright c 2006 by Mark E. Irwin Matrix Approach to Regression Linear Model: Y i = β 0 + β 1 X i1 +... + β p X ip + ɛ i ; ɛ i iid N(0, σ 2 ), i = 1,..., n

More information

I i=1 1 I(J 1) j=1 (Y ij Ȳi ) 2. j=1 (Y j Ȳ )2 ] = 2n( is the two-sample t-test statistic.

I i=1 1 I(J 1) j=1 (Y ij Ȳi ) 2. j=1 (Y j Ȳ )2 ] = 2n( is the two-sample t-test statistic. Serik Sagitov, Chalmers and GU, February, 08 Solutions chapter Matlab commands: x = data matrix boxplot(x) anova(x) anova(x) Problem.3 Consider one-way ANOVA test statistic For I = and = n, put F = MS

More information

STAT 430 (Fall 2017): Tutorial 8

STAT 430 (Fall 2017): Tutorial 8 STAT 430 (Fall 2017): Tutorial 8 Balanced Incomplete Block Design Luyao Lin November 7th/9th, 2017 Department Statistics and Actuarial Science, Simon Fraser University Block Design Complete Random Complete

More information

CHEAPEST IDENTIFICATION EXPERIMENT WITH GUARANTEED ACCURACY IN THE PRESENCE OF UNDERMODELING

CHEAPEST IDENTIFICATION EXPERIMENT WITH GUARANTEED ACCURACY IN THE PRESENCE OF UNDERMODELING CHEAPEST IDENTIFICATION EXPERIMENT WITH GUARANTEED ACCURACY IN THE PRESENCE OF UNDERMODELING Xavier Bombois, Marion Gilson Delft Center for Systems and Control, Delft University of Technology, Mekelweg

More information

STAT420 Midterm Exam. University of Illinois Urbana-Champaign October 19 (Friday), :00 4:15p. SOLUTIONS (Yellow)

STAT420 Midterm Exam. University of Illinois Urbana-Champaign October 19 (Friday), :00 4:15p. SOLUTIONS (Yellow) STAT40 Midterm Exam University of Illinois Urbana-Champaign October 19 (Friday), 018 3:00 4:15p SOLUTIONS (Yellow) Question 1 (15 points) (10 points) 3 (50 points) extra ( points) Total (77 points) Points

More information

ECE 636: Systems identification

ECE 636: Systems identification ECE 636: Systems identification Lectures 9 0 Linear regression Coherence Φ ( ) xy ω γ xy ( ω) = 0 γ Φ ( ω) Φ xy ( ω) ( ω) xx o noise in the input, uncorrelated output noise Φ zz Φ ( ω) = Φ xy xx ( ω )

More information

Two-Factor Full Factorial Design with Replications

Two-Factor Full Factorial Design with Replications Two-Factor Full Factorial Design with Replications Dr. John Mellor-Crummey Department of Computer Science Rice University johnmc@cs.rice.edu COMP 58 Lecture 17 March 005 Goals for Today Understand Two-factor

More information

Review of Classical Least Squares. James L. Powell Department of Economics University of California, Berkeley

Review of Classical Least Squares. James L. Powell Department of Economics University of California, Berkeley Review of Classical Least Squares James L. Powell Department of Economics University of California, Berkeley The Classical Linear Model The object of least squares regression methods is to model and estimate

More information

Noise Reduction of JPEG Images by Sampled-Data H Optimal ε Filters

Noise Reduction of JPEG Images by Sampled-Data H Optimal ε Filters SICE Annual Conference 25 in Okayama, August 8-1, 25 Okayama University, Japan Noise Reduction of JPEG Images by Sampled-Data H Optimal ε Filters H. Kakemizu,1, M. Nagahara,2, A. Kobayashi,3, Y. Yamamoto,4

More information

Statistical Hypothesis Testing

Statistical Hypothesis Testing Statistical Hypothesis Testing Dr. Phillip YAM 2012/2013 Spring Semester Reference: Chapter 7 of Tests of Statistical Hypotheses by Hogg and Tanis. Section 7.1 Tests about Proportions A statistical hypothesis

More information

Week 5 Quantitative Analysis of Financial Markets Characterizing Cycles

Week 5 Quantitative Analysis of Financial Markets Characterizing Cycles Week 5 Quantitative Analysis of Financial Markets Characterizing Cycles Christopher Ting http://www.mysmu.edu/faculty/christophert/ Christopher Ting : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036

More information

Booth School of Business, University of Chicago Business 41914, Spring Quarter 2017, Mr. Ruey S. Tsay. Solutions to Midterm

Booth School of Business, University of Chicago Business 41914, Spring Quarter 2017, Mr. Ruey S. Tsay. Solutions to Midterm Booth School of Business, University of Chicago Business 41914, Spring Quarter 017, Mr Ruey S Tsay Solutions to Midterm Problem A: (51 points; 3 points per question) Answer briefly the following questions

More information

Stat 579: Generalized Linear Models and Extensions

Stat 579: Generalized Linear Models and Extensions Stat 579: Generalized Linear Models and Extensions Mixed models Yan Lu March, 2018, week 8 1 / 32 Restricted Maximum Likelihood (REML) REML: uses a likelihood function calculated from the transformed set

More information

L2 gains and system approximation quality 1

L2 gains and system approximation quality 1 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.242, Fall 24: MODEL REDUCTION L2 gains and system approximation quality 1 This lecture discusses the utility

More information

Frequentist-Bayesian Model Comparisons: A Simple Example

Frequentist-Bayesian Model Comparisons: A Simple Example Frequentist-Bayesian Model Comparisons: A Simple Example Consider data that consist of a signal y with additive noise: Data vector (N elements): D = y + n The additive noise n has zero mean and diagonal

More information

Chapter 11: Factorial Designs

Chapter 11: Factorial Designs Chapter : Factorial Designs. Two factor factorial designs ( levels factors ) This situation is similar to the randomized block design from the previous chapter. However, in addition to the effects within

More information

RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK

RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK TRNKA PAVEL AND HAVLENA VLADIMÍR Dept of Control Engineering, Czech Technical University, Technická 2, 166 27 Praha, Czech Republic mail:

More information

AA242B: MECHANICAL VIBRATIONS

AA242B: MECHANICAL VIBRATIONS AA242B: MECHANICAL VIBRATIONS 1 / 50 AA242B: MECHANICAL VIBRATIONS Undamped Vibrations of n-dof Systems These slides are based on the recommended textbook: M. Géradin and D. Rixen, Mechanical Vibrations:

More information

Optimal exact tests for complex alternative hypotheses on cross tabulated data

Optimal exact tests for complex alternative hypotheses on cross tabulated data Optimal exact tests for complex alternative hypotheses on cross tabulated data Daniel Yekutieli Statistics and OR Tel Aviv University CDA course 29 July 2017 Yekutieli (TAU) Optimal exact tests for complex

More information

Reference: Chapter 13 of Montgomery (8e)

Reference: Chapter 13 of Montgomery (8e) Reference: Chapter 1 of Montgomery (8e) Maghsoodloo 89 Factorial Experiments with Random Factors So far emphasis has been placed on factorial experiments where all factors are at a, b, c,... fixed levels

More information

arxiv: v1 [cs.sy] 20 Dec 2017

arxiv: v1 [cs.sy] 20 Dec 2017 Adaptive model predictive control for constrained, linear time varying systems M Tanaskovic, L Fagiano, and V Gligorovski arxiv:171207548v1 [cssy] 20 Dec 2017 1 Introduction This manuscript contains technical

More information

Summary of Chapters 7-9

Summary of Chapters 7-9 Summary of Chapters 7-9 Chapter 7. Interval Estimation 7.2. Confidence Intervals for Difference of Two Means Let X 1,, X n and Y 1, Y 2,, Y m be two independent random samples of sizes n and m from two

More information

LECTURE 6. Introduction to Econometrics. Hypothesis testing & Goodness of fit

LECTURE 6. Introduction to Econometrics. Hypothesis testing & Goodness of fit LECTURE 6 Introduction to Econometrics Hypothesis testing & Goodness of fit October 25, 2016 1 / 23 ON TODAY S LECTURE We will explain how multiple hypotheses are tested in a regression model We will define

More information

INVEX FUNCTIONS AND CONSTRAINED LOCAL MINIMA

INVEX FUNCTIONS AND CONSTRAINED LOCAL MINIMA BULL. AUSRAL. MAH. SOC. VOL. 24 (1981), 357-366. 9C3 INVEX FUNCIONS AND CONSRAINED LOCAL MINIMA B.D. CRAVEN If a certain weakening of convexity holds for the objective and all constraint functions in a

More information

The legacy of Sir Ronald A. Fisher. Fisher s three fundamental principles: local control, replication, and randomization.

The legacy of Sir Ronald A. Fisher. Fisher s three fundamental principles: local control, replication, and randomization. 1 Chapter 1: Research Design Principles The legacy of Sir Ronald A. Fisher. Fisher s three fundamental principles: local control, replication, and randomization. 2 Chapter 2: Completely Randomized Design

More information

SGN Advanced Signal Processing Project bonus: Sparse model estimation

SGN Advanced Signal Processing Project bonus: Sparse model estimation SGN 21006 Advanced Signal Processing Project bonus: Sparse model estimation Ioan Tabus Department of Signal Processing Tampere University of Technology Finland 1 / 12 Sparse models Initial problem: solve

More information

1 One-way analysis of variance

1 One-way analysis of variance LIST OF FORMULAS (Version from 21. November 2014) STK2120 1 One-way analysis of variance Assume X ij = µ+α i +ɛ ij ; j = 1, 2,..., J i ; i = 1, 2,..., I ; where ɛ ij -s are independent and N(0, σ 2 ) distributed.

More information

Adaptive Channel Modeling for MIMO Wireless Communications

Adaptive Channel Modeling for MIMO Wireless Communications Adaptive Channel Modeling for MIMO Wireless Communications Chengjin Zhang Department of Electrical and Computer Engineering University of California, San Diego San Diego, CA 99- Email: zhangc@ucsdedu Robert

More information

Applied Econometrics (QEM)

Applied Econometrics (QEM) Applied Econometrics (QEM) based on Prinicples of Econometrics Jakub Mućk Department of Quantitative Economics Jakub Mućk Applied Econometrics (QEM) Meeting #3 1 / 42 Outline 1 2 3 t-test P-value Linear

More information

A Mathematica Toolbox for Signals, Models and Identification

A Mathematica Toolbox for Signals, Models and Identification The International Federation of Automatic Control A Mathematica Toolbox for Signals, Models and Identification Håkan Hjalmarsson Jonas Sjöberg ACCESS Linnaeus Center, Electrical Engineering, KTH Royal

More information

In a one-way ANOVA, the total sums of squares among observations is partitioned into two components: Sums of squares represent:

In a one-way ANOVA, the total sums of squares among observations is partitioned into two components: Sums of squares represent: Activity #10: AxS ANOVA (Repeated subjects design) Resources: optimism.sav So far in MATH 300 and 301, we have studied the following hypothesis testing procedures: 1) Binomial test, sign-test, Fisher s

More information

Lecture 3: Linear Models. Bruce Walsh lecture notes Uppsala EQG course version 28 Jan 2012

Lecture 3: Linear Models. Bruce Walsh lecture notes Uppsala EQG course version 28 Jan 2012 Lecture 3: Linear Models Bruce Walsh lecture notes Uppsala EQG course version 28 Jan 2012 1 Quick Review of the Major Points The general linear model can be written as y = X! + e y = vector of observed

More information

1 Last time: inverses

1 Last time: inverses MATH Linear algebra (Fall 8) Lecture 8 Last time: inverses The following all mean the same thing for a function f : X Y : f is invertible f is one-to-one and onto 3 For each b Y there is exactly one a

More information

Auxiliary signal design for failure detection in uncertain systems

Auxiliary signal design for failure detection in uncertain systems Auxiliary signal design for failure detection in uncertain systems R. Nikoukhah, S. L. Campbell and F. Delebecque Abstract An auxiliary signal is an input signal that enhances the identifiability of a

More information

Linear Regression. In this problem sheet, we consider the problem of linear regression with p predictors and one intercept,

Linear Regression. In this problem sheet, we consider the problem of linear regression with p predictors and one intercept, Linear Regression In this problem sheet, we consider the problem of linear regression with p predictors and one intercept, y = Xβ + ɛ, where y t = (y 1,..., y n ) is the column vector of target values,

More information

Multivariate Time Series Analysis and Its Applications [Tsay (2005), chapter 8]

Multivariate Time Series Analysis and Its Applications [Tsay (2005), chapter 8] 1 Multivariate Time Series Analysis and Its Applications [Tsay (2005), chapter 8] Insights: Price movements in one market can spread easily and instantly to another market [economic globalization and internet

More information

BALANCING-RELATED MODEL REDUCTION FOR DATA-SPARSE SYSTEMS

BALANCING-RELATED MODEL REDUCTION FOR DATA-SPARSE SYSTEMS BALANCING-RELATED Peter Benner Professur Mathematik in Industrie und Technik Fakultät für Mathematik Technische Universität Chemnitz Computational Methods with Applications Harrachov, 19 25 August 2007

More information

Pattern correlation matrices and their properties

Pattern correlation matrices and their properties Linear Algebra and its Applications 327 (2001) 105 114 www.elsevier.com/locate/laa Pattern correlation matrices and their properties Andrew L. Rukhin Department of Mathematics and Statistics, University

More information

Parameter Estimation in a Moving Horizon Perspective

Parameter Estimation in a Moving Horizon Perspective Parameter Estimation in a Moving Horizon Perspective State and Parameter Estimation in Dynamical Systems Reglerteknik, ISY, Linköpings Universitet State and Parameter Estimation in Dynamical Systems OUTLINE

More information

MULTIVARIATE ANALYSIS OF VARIANCE

MULTIVARIATE ANALYSIS OF VARIANCE MULTIVARIATE ANALYSIS OF VARIANCE RAJENDER PARSAD AND L.M. BHAR Indian Agricultural Statistics Research Institute Library Avenue, New Delhi - 0 0 lmb@iasri.res.in. Introduction In many agricultural experiments,

More information

Contents. TAMS38 - Lecture 6 Factorial design, Latin Square Design. Lecturer: Zhenxia Liu. Factorial design 3. Complete three factor design 4

Contents. TAMS38 - Lecture 6 Factorial design, Latin Square Design. Lecturer: Zhenxia Liu. Factorial design 3. Complete three factor design 4 Contents Factorial design TAMS38 - Lecture 6 Factorial design, Latin Square Design Lecturer: Zhenxia Liu Department of Mathematics - Mathematical Statistics 28 November, 2017 Complete three factor design

More information

A New Subspace Identification Method for Open and Closed Loop Data

A New Subspace Identification Method for Open and Closed Loop Data A New Subspace Identification Method for Open and Closed Loop Data Magnus Jansson July 2005 IR S3 SB 0524 IFAC World Congress 2005 ROYAL INSTITUTE OF TECHNOLOGY Department of Signals, Sensors & Systems

More information

Sociology 6Z03 Review II

Sociology 6Z03 Review II Sociology 6Z03 Review II John Fox McMaster University Fall 2016 John Fox (McMaster University) Sociology 6Z03 Review II Fall 2016 1 / 35 Outline: Review II Probability Part I Sampling Distributions Probability

More information

CS 5014: Research Methods in Computer Science

CS 5014: Research Methods in Computer Science Computer Science Clifford A. Shaffer Department of Computer Science Virginia Tech Blacksburg, Virginia Fall 2010 Copyright c 2010 by Clifford A. Shaffer Computer Science Fall 2010 1 / 254 Experimental

More information

Probabilistic Latent Semantic Analysis

Probabilistic Latent Semantic Analysis Probabilistic Latent Semantic Analysis Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr

More information