A CLASS OF THE BEST LINEAR UNBIASED ESTIMATORS IN THE GENERAL LINEAR MODEL


GABRIELA BEGANU

The existence of the best linear unbiased estimators (BLUE) for parametric estimable functions in linear models is treated in a coordinate-free approach, and some relations between the general linear model and certain particular models are derived. A class of the BLUE is constructed using the maximal covariance operator. It is shown that a BLUE exists for each parametric estimable function and is unique in this class.

AMS 2000 Subject Classification: 62H12, 47A05.

Key words: best linear unbiased estimator, maximal covariance operator, finite-dimensional inner product space, orthogonal projection.

1. INTRODUCTION

The coordinate-free approach is used in this paper to treat the existence of the best linear unbiased estimators of parametric estimable functions in the general linear model, thus extending and unifying some results previously obtained [7], [8], [10], [16], [17]. This approach to linear regression models, which may also be called the geometric approach, has added to the understanding of the problems of estimation and hypothesis testing, and it has been in use at least since linear algebra and the theory of Hilbert spaces came to be developed in operator form rather than in matrix form.

One of the first authors to treat the existence of a BLUE for the expected mean in the general linear model in a coordinate-free manner was Kruskal [12]. He gave a necessary and sufficient condition for the BLUE to be equal to the ordinary least squares estimator. This condition became the foundation for the alternative forms, as well as their extensions, proved in [1], [2], [3], [4], [5], [6], [9], [11], [15].

The purpose of the paper is to construct a limited class of the best linear unbiased estimators for each linear parametric function using the coordinate-free approach, which allows an attractive and simple computational form.

The paper is structured as follows. The general linear model, describing a functional relationship disturbed stochastically, is presented from the coordinate-free point of view in Section 2. Some relations between the general linear model and particular linear models regarding the existence of BLUE are also deduced. In Section 3, a class of the best linear unbiased estimators of parametric estimable functions is constructed. It is shown that a BLUE in this class exists for each parametric function in the linear model and that this BLUE is unique in the class.

2. RELATIONS BETWEEN LINEAR MODELS REGARDING THE EXISTENCE OF BLUE

Let (E, S) be a measurable space and P = {P_θ : θ ∈ Θ} a family of associated probability measures. The complete structure of P is not necessarily known, but Θ is assumed to be a given parameter space. A random real-valued element y is a transformation y : E → K, where (K, (·,·)) is a finite-dimensional inner product space. The expectation of y is the unique element µ_θ ∈ K such that

(1) E_θ(a, y) = (a, µ_θ),

while the covariance operator V of y is the unique symmetric and non-negative mapping from K to K such that

(2) cov_θ((a, y), (b, y)) = (a, Vb)

for all a, b ∈ K. The expectation (1) and the covariance (2) of y exist with respect to all P_θ ∈ P, i.e., for all θ ∈ Θ.

Let us introduce some notation: R(A) is the image of a linear mapping A : L → K, or the column space of the matrix A; span A is the linear manifold spanned by an arbitrary set A; A^⊥ is the orthogonal complement of a linear manifold A in a finite-dimensional inner product space. Let (L, [·,·]) and (K, (·,·)) be two finite-dimensional inner product spaces. Then A* : K → L is the adjoint of the linear operator A : L → K, i.e., (Al, k) = [l, A*k] for all l ∈ L, k ∈ K; N(A) is the null space of A. If A is the matrix corresponding to a linear operator A, then A' is its transpose, A^- is a generalized inverse of A (AA^-A = A) and A^+ is the Moore–Penrose inverse uniquely determined by A.

Let Ω ⊆ K. The statistical model M(Ω, V) is the set of all random elements y with E_θ y = µ_θ ∈ Ω and cov_θ y = V for all θ ∈ Θ. If Ω is a linear subspace of K, then M(Ω, V) is called a linear model. When Ω = span{E_θ y = µ_θ : θ ∈ Θ} and 𝒱 = span{cov_θ y = V : θ ∈ Θ}, the general linear model is denoted M(Ω, 𝒱).

In the sequel, the general linear regression model will be considered. Then Ω = R(X), the range of a linear mapping X : L → K, where (L, [·,·]) is a finite-dimensional Euclidean vector space.
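For orientation, these coordinate-free objects reduce to the familiar matrix ones; the following is only a sketch, assuming K = R^n and L = R^p with the standard inner products, so that (a, y) = a'y:

E_θ(a, y) = a'µ_θ,    cov_θ((a, y), (b, y)) = a'Vb,    Ω = R(X) = {Xβ : β ∈ R^p},

i.e., (1) and (2) identify µ_θ with the mean vector and V with the covariance matrix of y, while the regression manifold Ω is the column space of the n × p design matrix X.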

A parametric function g(θ) is said to be estimable if and only if there exists an element a ∈ K such that

(3) E_θ(a, y) = (a, µ_θ) = g(θ) for all θ ∈ Θ.

Let us assume that Ω ⊆ R(V) and let [λ, β] be a linear parametric estimable function. It is well known that (a, y) is a BLUE of [λ, β] iff X*a = λ and cov_θ(a, y) = (a, Va) is minimal in the class of all linear unbiased estimators, i.e., over a ∈ M = {d ∈ K : X*d = λ}. Since M − M = (X*)^{-1}(0), (a, y) is a linear unbiased estimator of minimum covariance iff (Va, d) = 0 for all d ∈ (X*)^{-1}(0) or, equivalently, iff Va = Xb for some b ∈ L.

The main question in the theory of linear models is the existence of minimum covariance linear unbiased estimators for parametric functions. One of the classical results, which will be used, is a theorem of Lehmann and Scheffé [14]: for the linear model M(Ω, 𝒱), (a, y) is a BLUE of E_θ(a, y) for all θ ∈ Θ if and only if Va ∈ Ω for all V ∈ 𝒱.

Let V_0 be a maximal element of 𝒱, i.e., R(V) ⊆ R(V_0) for all V ∈ 𝒱. Such an element always exists ([13]). Using the Lehmann–Scheffé theorem, another proof will be given for the result below, obtained in [8].

Proposition 1. Let V_0 be the maximal element of 𝒱. A BLUE of a parametric estimable function exists in the model M(Ω, 𝒱) if and only if

(4) V_0^{-1}(Ω) ⊆ V^{-1}(Ω) for all V ∈ 𝒱.

Proof. V_0 being the maximal element of 𝒱, we deduce that if b ∈ R(V) for all V ∈ 𝒱 then b ∈ R(V_0). Assume now that (a, y) is a BLUE in the model M(Ω, 𝒱). By the Lehmann–Scheffé theorem this condition is equivalent to b = Va ∈ Ω for all V ∈ 𝒱; in particular, V_0 a ∈ Ω. Since Ω ⊆ R(V) for all V ∈ 𝒱, we deduce relation (4).

Conversely, if (4) holds, then V_0 a ∈ Ω implies Va ∈ Ω for all V ∈ 𝒱, which is the condition for (a, y) to be a BLUE in the model M(Ω, 𝒱).

Proposition 2. If (a, y) is a BLUE of a parametric estimable function in the model M(Ω, V_0), then (a, y) is a BLUE in the model M(Ω, 𝒱), provided that a BLUE in this model exists.

Proof. By the Lehmann–Scheffé theorem, in the model M(Ω, V_0) we have V_0 a ∈ Ω. Since a BLUE exists in the model M(Ω, 𝒱), this and relation (4) yield a ∈ V^{-1}(Ω) for all V ∈ 𝒱, which means that (a, y) is a BLUE in the model M(Ω, 𝒱).
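To fix ideas, here is a small illustration of condition (4) (a hypothetical example, not taken from the paper). Let K = R^2 with the standard inner product, Ω = span{e_1}, V_0 = I_2, V_1 = diag(2, 1), and let V_2 be the symmetric matrix with rows (1, 1) and (1, 2). Then V_0 is maximal and V_0^{-1}(Ω) = Ω. Since V_1 e_1 = 2e_1 ∈ Ω, condition (4) holds for the family spanned by {V_0, V_1}. On the other hand, V_2 e_1 = (1, 1)' ∉ Ω, so V_0^{-1}(Ω) is not contained in V_2^{-1}(Ω) and, by Proposition 1, no BLUE exists in a model whose covariance family contains V_0 and V_2.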

Proposition 3. (a, y) is a BLUE of a parametric estimable function in the model M(Ω, V_0) if and only if (a, y) is a BLUE in the model M(Ω, W), where

(5) W = V_0 + P

is a symmetric non-negative definite operator and P is the orthogonal projection on Ω.

Proof. (a, y) is a BLUE of a linear parametric function (3) in the model M(Ω, V_0) if and only if V_0 a ∈ Ω. Since P is the orthogonal projection on Ω, we have Pa ∈ Ω. These conditions are equivalent to Wa ∈ Ω.

Remark. The orthogonal projection P = XX^+ on Ω in (5) can be replaced by any symmetric and non-negative definite operator T such that R(T) ⊆ Ω. A choice of T can be XX*.

Proposition 4. A BLUE of a parametric estimable function (3) exists in the model M(Ω, 𝒱) if and only if

(6) VW^-(Ω) ⊆ Ω

for all V ∈ 𝒱, where W^- is a symmetric generalized inverse of W.

Proof. From Propositions 1, 2 and 3 we have that the condition that (a, y) is a BLUE of a parametric function g(θ) for all θ ∈ Θ in the model M(Ω, 𝒱) is equivalent to the condition that (a, y) is a BLUE of g(θ) in the model M(Ω, W). This last statement is equivalent to Wa = b ∈ Ω. It follows from (5) that

(7) R(V) ⊆ R(V_0) ⊆ R(W)

for all V ∈ 𝒱. Hence V = WW^-V or, by transposing, V = VW^-W, and we can write Va = VW^-Wa = VW^-b ∈ VW^-(Ω) for all V ∈ 𝒱. Since Va ∈ Ω for all V ∈ 𝒱, relation (6) follows.

3. A CLASS OF BLUE

In the sequel, the question of the existence of the best linear unbiased estimators of g(θ), θ ∈ Θ, will be treated in the general linear model under a restrictive condition.

Theorem 1. If (a, y) is a BLUE of E_θ(a, y) in the linear model M(Ω, 𝒱), then there exists a_1 ∈ R(W) such that (a_1, y) is a BLUE of E_θ(a, y) for all θ ∈ Θ.

Proof. Let (a, y) be a BLUE of E_θ(a, y), θ ∈ Θ, such that a ∉ R(W). Then a ∈ K can be uniquely written as a = a_1 + a_2, where a_1 ∈ R(W) and a_2 ∈ R(W)^⊥. Since Ω ⊆ R(W) (by (7)), we have R(W)^⊥ ⊆ Ω^⊥, which means that a_2 ∈ Ω^⊥. Hence

(8) E_θ(a, y) = (a, Xβ) = (a_1, Xβ) = E_θ(a_1, y) for all θ ∈ Θ.

Moreover, by the Farkas–Minkowski theorem and the symmetry of the operators V and W, relation (7) implies R(W)^⊥ = N(W) ⊆ R(V)^⊥ = N(V) for all V ∈ 𝒱. Therefore a_2 ∈ N(V) or, equivalently, Va_2 = 0 for all V ∈ 𝒱. This result allows one to write

(9) cov_θ(a, y) = (a_1 + a_2, V(a_1 + a_2)) = (a_1, Va_1) = cov_θ(a_1, y)

for all V ∈ 𝒱 and θ ∈ Θ.

Relations (8) and (9) show that (a_1, y) with a_1 ∈ R(W) is a BLUE of E_θ(a, y), which means that a BLUE of g(θ) with a_1 ∈ R(W) exists whenever a BLUE of g(θ) with a ∈ K exists in the model M(Ω, 𝒱) for all θ ∈ Θ.

If there exist a, b ∈ K such that X*a = X*b = λ, then (a, y) and (b, y) are linear unbiased estimators of [λ, β]. Although both are unbiased estimators of the same parametric function, it is not necessarily true that (a, y) = (b, y); hence the linear unbiased estimator is not in general unique. However, if N(X*) = {0}, then X*a = X*b implies a − b ∈ N(X*) = {0}, so that a = b and the linear unbiased estimator of [λ, β] is unique. Note that this uniqueness depends on the condition R(X) = Ω = K.

The problem of the uniqueness of the BLUE (a, y) with a ∈ R(W) is solved by the result below.

Theorem 2. If (a, y) and (b, y) with a, b ∈ R(W) are best linear unbiased estimators of a parametric function g(θ) in the linear model M(Ω, 𝒱) for all θ ∈ Θ, then a = b.

Proof. (a, y) and (b, y) being unbiased estimators of the same parametric function, we have E_θ(a, y) = E_θ(b, y) for all θ ∈ Θ, which means that (a − b, Xβ) = 0, and this relation is equivalent to a − b ∈ Ω^⊥.

Since (a, y) and (b, y) are minimum variance estimators in the model M(Ω, 𝒱), we have cov_θ(a, y) = (a, Va) = (a, Vb) = cov_θ(b, y) for all V ∈ 𝒱 and θ ∈ Θ. The same equations are verified by V_0, the maximal element of 𝒱, and, by Proposition 3, by W. Hence

(10) (a − b, W(a − b)) = 0,

which yields W(a − b) = 0 because W is a non-negative definite operator. Now, by the Farkas–Minkowski theorem, the relations W(a − b) = 0 and W(a − b) ∈ Ω imply

a − b ∈ W^{-1}(0) ∩ W^{-1}(Ω) = R(W)^⊥ ∩ [W(Ω^⊥)]^⊥ = [R(W) ∪ W(Ω^⊥)]^⊥ = R(W)^⊥.

But a, b ∈ R(W), hence a − b ∈ R(W). It follows that a − b = 0.
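The construction behind Theorems 1 and 2 is easy to check numerically. The following sketch is only an illustration under stated assumptions, not part of the paper: it takes K = R^4 and L = R^2 with the standard inner products (so X* = X'), hypothetical data, and the representative a = W^+X(X*W^+X)^+λ familiar from Rao's unified theory of least squares, which lies in R(W), is unbiased for λ, and satisfies the BLUE condition Wa ∈ Ω of Proposition 3.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: K = R^4, L = R^2, Omega = R(X).
X = rng.standard_normal((4, 2))       # design operator
A = rng.standard_normal((4, 3))
V0 = A @ A.T                          # symmetric non-negative definite V_0

# (5): W = V_0 + P, with P = X X^+ the orthogonal projection on Omega.
P = X @ np.linalg.pinv(X)
W = V0 + P
Wp = np.linalg.pinv(W)                # Moore-Penrose inverse W^+

# An estimable function lambda = X* a0 and the representative
# a = W^+ X (X* W^+ X)^+ lambda, which belongs to R(W^+) = R(W).
lam = X.T @ rng.standard_normal(4)
a = Wp @ X @ np.linalg.pinv(X.T @ Wp @ X) @ lam

print(np.allclose(X.T @ a, lam))      # unbiasedness: X* a = lambda
Wa = W @ a
print(np.allclose(P @ Wa, Wa))        # BLUE condition: W a in Omega
print(np.allclose(W @ Wp @ a, a))     # a in R(W)
```

All three checks print True. In this generic example W happens to be nonsingular, so W^+ is the ordinary inverse; the same identities hold when W is singular, provided λ is estimable, i.e., λ ∈ R(X*).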

REFERENCES

[1] S.F. Arnold, A coordinate-free approach to finding optimal procedures for repeated measures designs. Ann. Statist. 7 (1979).
[2] J.K. Baksalary and A.C. van Eijnsbergen, A comparison of two criteria for ordinary least squares estimators to be best linear unbiased estimators. Amer. Statist. 42 (1988).
[3] G. Beganu, Estimation of regression parameters in a covariance linear model. Stud. Cerc. Mat. 39 (1987).
[4] G. Beganu, The existence conditions of the best linear unbiased estimators of the fixed effects. Econom. Comput. Econom. Cybernet. Stud. Res. 36 (2003).
[5] G. Beganu, On Gram-Schmidt orthogonalizing process of design matrices in linear models as estimating procedure of covariance components. Rev. Real Acad. Cienc. Ser. A Mat. 99 (2005).
[6] G. Beganu, A two-stage estimator of individual regression coefficients in multivariate linear growth curve models. Rev. Acad. Colombiana Cienc. 30 (2006), 1–6.
[7] H. Drygas, Estimation and prediction for linear models in general spaces. Math. Operationsforsch. Statist. Ser. Statist. 8 (1975).
[8] H. Drygas, Multivariate linear models with missing observations. Ann. Statist. 4 (1976).
[9] M.L. Eaton, Gauss-Markov estimation for multivariate linear models: A coordinate-free approach. Ann. Statist. 2 (1974).
[10] S. Gnot, W. Klonecki and R. Zmyslony, Uniformly minimum variance unbiased estimation in Euclidean vector spaces. Math. Operationsforsch. Statist. Ser. Statist. 8 (1975).
[11] S.I. Haberman, How much do Gauss-Markov and least squares estimates differ? A coordinate-free approach. Ann. Statist. 3 (1975).
[12] W. Kruskal, When are Gauss-Markov and least squares estimators identical? A coordinate-free approach. Ann. Math. Statist. 39 (1968).
[13] L.R. LaMotte, Admissibility in linear estimation. Ann. Statist. 10 (1982).
[14] E. Lehmann and H. Scheffé, Completeness, similar regions and unbiased estimation. Sankhyā 10 (1950).
[15] S. Puntanen, G.P.H. Styan and Y. Tian, Three rank formulas associated with the covariance matrices of the BLUE and the OLSE in the general linear model. Econometric Theory 21 (2005).
[16] J. Seely, Linear spaces and unbiased estimation. Ann. Math. Statist. 41 (1970).
[17] J. Seely, Linear spaces and unbiased estimation. Application to the mixed linear models. Ann. Math. Statist. 41 (1970).
[18] G. Zyskind, On canonical forms, non-negative covariance matrices and best and simple least squares linear estimators in linear models. Ann. Math. Statist. 38 (1967).

Received 8 February 2007

Academy of Economic Studies
Department of Mathematics
Piaţa Romanǎ nr., Bucharest, Romania
gabriela beganu@yahoo.com
