5. PROPAGATION OF VARIANCES APPLIED TO LEAST SQUARES ADJUSTMENT OF INDIRECT OBSERVATIONS
A most important outcome of a least squares adjustment is that estimates of the precisions of the quantities sought, the elements of x (the unknowns or parameters), are easily obtained from the matrix equations of the solution. Application of the Law of Propagation of Variances demonstrates that the inverse of the normal equation coefficient matrix N is equal to the cofactor matrix Q_{xx}, which contains estimates of the variances and covariances of the elements of x. In addition, estimates of the precisions of the residuals and the adjusted observations may be obtained. This most useful outcome enables a statistical analysis of the results of a least squares adjustment and provides the practitioner with a degree of confidence in the results.

5.1. Cofactor matrices for adjustment of indirect observations

The observation equations for adjustment of indirect observations are given by

    v + Bx = f    (5.1)

where f is an (n,1) vector of numeric terms derived from the (n,1) vector of observations l and the (n,1) vector of constants d as

    f = d - l    (5.2)

Associated with the vector of observations l is a variance-covariance matrix \Sigma_{ll} as well as a cofactor matrix Q_{ll} and a weight matrix W_{ll} = Q_{ll}^{-1}. Remember that in most practical applications of least squares the matrix \Sigma_{ll} is unknown, but is estimated a priori by Q_{ll}, which contains estimates of the variances and covariances, with \Sigma_{ll} = \sigma_0^2 Q_{ll}, where \sigma_0^2 is the reference variance or variance factor.

Note: in the derivations that follow, the subscript "ll" is dropped from Q_{ll} and W_{ll}.
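As a concrete illustration (not part of the original notes), the following Python/NumPy sketch assembles the quantities of equations (5.1) and (5.2) for a small hypothetical set of four observations; the numerical values of l, d and Q are invented for illustration only.

```python
# A minimal sketch (invented values) of the vectors and matrices in (5.1)-(5.2).
import numpy as np

l = np.array([1.23, 2.47, 3.68, 4.91])   # (n,1) vector of observations (hypothetical)
d = np.array([0.00, 0.00, 0.00, 0.00])   # (n,1) vector of constants (hypothetical)
f = d - l                                # vector of numeric terms, equation (5.2)

sigma0_sq = 1.0                          # a priori variance factor (assumed)
Q = np.diag([0.5, 0.5, 1.0, 1.0])        # cofactor matrix of the observations (assumed)
W = np.linalg.inv(Q)                     # weight matrix W = Q^-1
Sigma = sigma0_sq * Q                    # variance-covariance matrix of the observations

print(f)
print(W)
```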
If equation (5.2) is written as

    f = (-I) l + d    (5.3)

then (5.3) is in a form suitable for employing the Law of Propagation of Variances developed in Chapter 3; i.e., if y = Ax + b, where y and x are random variables linearly related and b is a vector of constants, then Q_{yy} = A Q_{xx} A^T. Hence the cofactor matrix of the numeric terms f is

    Q_{ff} = (-I) Q (-I)^T = Q

Thus the cofactor matrix of f is also the a priori cofactor matrix of the observations l.

The solution "steps" in the least squares adjustment of indirect observations are set out in Chapter 2 and restated as

    N = B^T W B,   t = B^T W f,   x = N^{-1} t,   v = f - Bx,   \hat{l} = l + v

To apply the Law of Propagation of Variances, these equations may be re-arranged in the form y = Ax + b, where the terms in parentheses constitute the A matrix:

    t = (B^T W) f    (5.4)

    x = (N^{-1}) t    (5.5)

    v = f - Bx = f - B N^{-1} t = f - B N^{-1} B^T W f = (I - B N^{-1} B^T W) f    (5.6)

    \hat{l} = l + v = l + f - Bx = l + (d - l) - Bx = (-B) x + d    (5.7)
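The re-arranged forms (5.6) and (5.7) can be checked numerically against the direct solution steps. The sketch below, with a hypothetical B, d, l and Q for a four-observation, two-parameter model, is illustrative only.

```python
# A sketch (invented data) checking that the re-arranged forms (5.6) and (5.7)
# reproduce the residuals and adjusted observations of the direct solution steps.
import numpy as np

B = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])                 # hypothetical design matrix
d = np.zeros(4)
l = np.array([-2.1, -2.9, -4.2, -4.8])     # hypothetical observations
Q = np.diag([0.5, 0.5, 1.0, 1.0])
W = np.linalg.inv(Q)
f = d - l

# direct solution steps
N = B.T @ W @ B
t = B.T @ W @ f
x = np.linalg.solve(N, t)
v = f - B @ x
l_hat = l + v

# re-arranged forms: v = (I - B N^-1 B^T W) f  and  l_hat = (-B) x + d
Ninv = np.linalg.inv(N)
v_alt = (np.eye(4) - B @ Ninv @ B.T @ W) @ f
l_hat_alt = -B @ x + d

print(np.allclose(v, v_alt), np.allclose(l_hat, l_hat_alt))   # both True
```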
Applying the Law of Propagation of Variances to equations (5.4) to (5.7) gives the following cofactor matrices:

    Q_{tt} = (B^T W) Q_{ff} (B^T W)^T = B^T W Q W B = B^T W B = N    (5.8)

    Q_{xx} = (N^{-1}) Q_{tt} (N^{-1})^T = N^{-1} N N^{-1} = N^{-1}    (5.9)

    Q_{vv} = (I - B N^{-1} B^T W) Q_{ff} (I - B N^{-1} B^T W)^T = Q - B N^{-1} B^T    (5.10)

    Q_{\hat{l}\hat{l}} = (-B) Q_{xx} (-B)^T = B N^{-1} B^T = Q - Q_{vv}    (5.11)

Variance-covariance matrices for t, x, v and \hat{l} are obtained by multiplying the cofactor matrices by the variance factor \sigma_0^2.

5.2. Calculation of the quadratic form v^T W v

An unbiased estimate of the variance factor may be computed from

    \hat{\sigma}_0^2 = \frac{v^T W v}{r}    (5.12)

where v^T W v is the quadratic form and r = n - u is the degrees of freedom, n being the number of observations and u the number of unknown parameters. r is also known as the number of redundancies. A derivation of equation (5.12) is given below.
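Before working through that derivation, the cofactor relations (5.8) to (5.11) can be verified numerically. The sketch below uses the same kind of invented B and Q as above and confirms that Q_{tt} = N, Q_{xx} = N^{-1}, Q_{vv} = Q - B N^{-1} B^T and Q = Q_{vv} + Q_{\hat{l}\hat{l}}.

```python
# A sketch (hypothetical B and Q) evaluating the cofactor matrices (5.8)-(5.11)
# and confirming the identities they imply.
import numpy as np

B = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
Q = np.diag([0.5, 0.5, 1.0, 1.0])
W = np.linalg.inv(Q)

N = B.T @ W @ B
Ninv = np.linalg.inv(N)

Q_tt = B.T @ W @ Q @ W.T @ B            # (5.8)  -> equals N
Q_xx = Ninv @ Q_tt @ Ninv.T             # (5.9)  -> equals N^-1
A = np.eye(4) - B @ Ninv @ B.T @ W
Q_vv = A @ Q @ A.T                      # (5.10) -> equals Q - B N^-1 B^T
Q_lhat = B @ Q_xx @ B.T                 # (5.11) -> equals B N^-1 B^T

print(np.allclose(Q_tt, N))
print(np.allclose(Q_xx, Ninv))
print(np.allclose(Q_vv, Q - B @ Ninv @ B.T))
print(np.allclose(Q, Q_vv + Q_lhat))    # Q = Q_vv + Q_l^l^
```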
The quadratic form v^T W v may be computed in the following manner. Remembering, for the method of indirect observations, the matrix equations

    N = B^T W B,   t = B^T W f,   x = N^{-1} t,   v = f - Bx

then

    v^T W v = (f - Bx)^T W (f - Bx)
            = (f^T - x^T B^T) W (f - Bx)
            = (f^T W - x^T B^T W)(f - Bx)
            = f^T W f - f^T W B x - x^T B^T W f + x^T B^T W B x
            = f^T W f - 2 x^T B^T W f + x^T B^T W B x
            = f^T W f - 2 x^T t + x^T N x
            = f^T W f - 2 x^T t + x^T t
            = f^T W f - x^T t

where the scalar f^T W B x equals its transpose x^T B^T W f, and N x = t has been used in the second-last line. Hence

    v^T W v = f^T W f - x^T t    (5.13)

5.3. Calculation of the Estimate of the Variance Factor \hat{\sigma}_0^2

The variance-covariance matrices of the residuals, the adjusted observations and the computed parameters are calculated from the general relationship

    \Sigma = \hat{\sigma}_0^2 Q    (5.14)

Cofactor matrices Q_{vv}, Q_{\hat{l}\hat{l}} and Q_{xx} are computed from equations (5.9) to (5.11), and so it remains to determine an estimate of the variance factor \hat{\sigma}_0^2. The development of a matrix expression for computing \hat{\sigma}_0^2 is set out below and follows Mikhail (1976, pp. 285-288).
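As a numerical aside (invented data again), the sketch below evaluates the variance factor in two equivalent ways, via (5.12) and via (5.13), and then scales the cofactor matrices into variance-covariance matrices as in (5.14).

```python
# A sketch illustrating equations (5.12)-(5.14) with hypothetical data.
import numpy as np

B = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
d = np.zeros(4)
l = np.array([-2.1, -2.9, -4.2, -4.8])
Q = np.diag([0.5, 0.5, 1.0, 1.0])
W = np.linalg.inv(Q)
f = d - l

N = B.T @ W @ B
t = B.T @ W @ f
x = np.linalg.solve(N, t)
v = f - B @ x

n, u = B.shape
r = n - u                                # degrees of freedom (redundancies)
s0_sq_a = (v @ W @ v) / r                # equation (5.12)
s0_sq_b = (f @ W @ f - x @ t) / r        # via equation (5.13)
print(np.isclose(s0_sq_a, s0_sq_b))      # True

Q_xx = np.linalg.inv(N)
Sigma_xx = s0_sq_a * Q_xx                # equation (5.14): Sigma = sigma0^2 Q
Sigma_vv = s0_sq_a * (Q - B @ Q_xx @ B.T)
print(Sigma_xx)
```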
5 RMI University. he variance-covariance matri given by equation (3.1) can be epressed in the foowing manner, remembering that is a vector of random variables and vector of means. {( m )( m ) } {( m)( m) } { m m mm} { } { } { } { { } {} { } E E E + Now from equation (3.18) m E{ } E E m E m + E m m } E E m m E + m m hence m is a { } { } E mm mm + mm E m m (5.18) or { } E + 3. he epected value of the residuals is zero, i.e., mm (5.19) E { v} mv (5.) 4. By definition (see Chapter ) the weight matri W, the cofactor matri Q and the variance-covariance matri are related by W Q σ (5.1) Now, for the least squares adjustment of indirect observations the foowing relationships are recaed v + B f, N B WB, t B Wf Q Q, Q N, Q N ff tt Bearing in mind equation (5.1), the foowing relationships may be introduced and from these foow 1 W, M B B σ 5, R.E. Deakin Notes on Least Squares (5) 5 5
Bearing in mind equation (5.21), the following matrices may be introduced:

    \Lambda = \frac{1}{\sigma_0^2} W,    M = \frac{1}{\sigma_0^2} B^T W B = \frac{1}{\sigma_0^2} N

and from these follow

    \Sigma_{ff} \Lambda = I_n,    \Sigma_{xx} M = I_u

In addition, the expectation of the vector f is the mean m_f. Now, since E{x} = m_x and E{v} = m_v = 0, we may write

    m_f = E{f} = E{v + Bx} = E{v} + B E{x} = m_v + B m_x = B m_x    (5.22)

Now, using equation (5.21), the quadratic form v^T W v may be written as

    \frac{1}{\sigma_0^2} v^T W v = v^T \Lambda v    (5.23)

Using the relationships above and equation (5.13),

    v^T W v = f^T W f - x^T t = f^T W f - x^T N x

and hence

    v^T \Lambda v = f^T \Lambda f - x^T M x
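A quick numerical check of these auxiliary matrices (invented values, with an assumed \sigma_0^2 = 1.5) confirms that \Sigma_{ff} \Lambda = I_n, \Sigma_{xx} M = I_u and v^T \Lambda v = f^T \Lambda f - x^T M x.

```python
# A sketch checking the auxiliary matrices Lambda = W/sigma0^2 and M = B^T W B/sigma0^2.
import numpy as np

B = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
l = np.array([-2.1, -2.9, -4.2, -4.8])
f = -l                                   # with d = 0, f = d - l
Q = np.diag([0.5, 0.5, 1.0, 1.0])
W = np.linalg.inv(Q)
sigma0_sq = 1.5                          # a priori variance factor (assumed)

Lam = W / sigma0_sq
M = (B.T @ W @ B) / sigma0_sq
Sigma_ff = sigma0_sq * Q
Sigma_xx = sigma0_sq * np.linalg.inv(B.T @ W @ B)

N = B.T @ W @ B
t = B.T @ W @ f
x = np.linalg.solve(N, t)
v = f - B @ x

print(np.allclose(Sigma_ff @ Lam, np.eye(4)))                  # Sigma_ff Lambda = I_n
print(np.allclose(Sigma_xx @ M, np.eye(2)))                    # Sigma_xx M = I_u
print(np.isclose(v @ Lam @ v, f @ Lam @ f - x @ M @ x))        # quadratic-form split
```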
Now the expected value of this quadratic form is

    E{v^T \Lambda v} = E{f^T \Lambda f - x^T M x} = E{f^T \Lambda f} - E{x^T M x}

Recognising that the terms on the right-hand side are both quadratic forms, equation (5.17) can be used to give

    E{v^T \Lambda v} = E{tr(\Lambda f f^T)} - E{tr(M x x^T)} = tr(E{f f^T} \Lambda) - tr(E{x x^T} M)

Now, using equation (5.19),

    E{v^T \Lambda v} = tr((\Sigma_{ff} + m_f m_f^T) \Lambda) - tr((\Sigma_{xx} + m_x m_x^T) M)
                     = tr(I_{nn} + m_f m_f^T \Lambda) - tr(I_{uu} + m_x m_x^T M)
                     = tr(I_{nn}) - tr(I_{uu}) + tr(m_f m_f^T \Lambda) - tr(m_x m_x^T M)
                     = (n - u) + m_f^T \Lambda m_f - m_x^T M m_x

From equation (5.22), m_f = B m_x; hence, using the rule for matrix transpose, m_f^T = (B m_x)^T = m_x^T B^T, and then

    E{v^T \Lambda v} = (n - u) + m_x^T B^T \Lambda B m_x - m_x^T M m_x = (n - u) + m_x^T M m_x - m_x^T M m_x = n - u

Thus, according to equation (5.23) and the expression above,

    E{v^T W v} = \sigma_0^2 E{v^T \Lambda v} = \sigma_0^2 (n - u)

from which follows

    \sigma_0^2 = \frac{E{v^T W v}}{n - u}

Consequently, an unbiased estimate of the variance factor \hat{\sigma}_0^2 can be computed from

    \hat{\sigma}_0^2 = \frac{v^T W v}{n - u} = \frac{v^T W v}{r}    (5.24)

where r = n - u is the number of redundancies in the adjustment and is known as the degrees of freedom. Using equation (5.13), an unbiased estimate of the variance factor \hat{\sigma}_0^2 can also be computed from

    \hat{\sigma}_0^2 = \frac{f^T W f - x^T t}{r}    (5.25)
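The unbiasedness result E{v^T W v} = \sigma_0^2 (n - u) can be illustrated by simulation. The Monte Carlo sketch below (simulated observations with an assumed true \sigma_0^2 = 2.0 and invented B, Q and parameters, not taken from the notes) shows the mean of the estimates (5.24) approaching the true variance factor.

```python
# A Monte Carlo sketch: the mean of the variance-factor estimates (5.24) over many
# simulated adjustments should approach the true sigma0^2.
import numpy as np

rng = np.random.default_rng(42)
B = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0], [1.0, 5.0]])
n, u = B.shape
r = n - u
Q = np.diag([0.5, 0.5, 1.0, 1.0, 2.0])
W = np.linalg.inv(Q)
sigma0_sq = 2.0                          # "true" variance factor (assumed)
x_true = np.array([0.3, -1.2])           # hypothetical true parameters
N = B.T @ W @ B

estimates = []
for _ in range(20_000):
    # simulate f = B x_true + v_true with v_true ~ N(0, sigma0^2 Q)
    v_true = rng.multivariate_normal(np.zeros(n), sigma0_sq * Q)
    f = B @ x_true + v_true
    x = np.linalg.solve(N, B.T @ W @ f)
    v = f - B @ x
    estimates.append((v @ W @ v) / r)    # equation (5.24)

print(np.mean(estimates))                # close to sigma0_sq = 2.0
```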
More information