Least squares: introduction to the network adjustment
1 Least squares: introduction to the network adjustment. Experimental evidence and consequences. Observations of the same quantity, performed at the highest possible accuracy, provide different values: high precision observations are therefore not deterministic phenomena, but are affected by several unpredictable errors. High precision observations can be modeled by random variables. A proper random variable model for high precision observations is the gaussian (normal) distribution, completely defined by the mean µ and the covariance C_xx.
2 f(x) = (2π)^{-n/2} (det C_xx)^{-1/2} exp( -(1/2) (x - µ_x)^T C_xx^{-1} (x - µ_x) )
Geodetic observations are always high precision observations and can be described as samples drawn from (normally distributed) random variables. Geodetic observations are acquired to estimate positions of points in a given reference system (the frame of the observations).
3 Given the reference frame X0Y and the measured angles and distances, the coordinates of the points have to be determined. From a geometrical point of view, the necessary and sufficient number of observations to estimate the positions can always be univocally defined.
4 Two problems arise:
- position estimates depend on the observation errors
- errors cannot be checked
Solution to the problems:
- the positions of the points are modeled as random variables (normally distributed): accuracies must be computed besides positions
- redundant (more than necessary) observations are required to perform cross-checks: redundant observations allow the evaluation of the errors
5 The effect of redundant observations. Given the height H_1 of point 1, the heights H_2, H_3 of points 2 and 3 have to be determined. From a geometric point of view, two observations are necessary and sufficient, for example ΔH_12_0 and ΔH_23_0:
H_2 = H_1 + ΔH_12_0
H_3 = H_1 + ΔH_12_0 + ΔH_23_0
The subscript 0 denotes the numerical value of the observation, which contains the unknown observation error.
6 Each error ε_12, ε_23 in the observations ΔH_12_0, ΔH_23_0 directly propagates into the estimates Ĥ_2 and Ĥ_3:
Ĥ_2 = H_1 + ΔH_12 + ε_12
Ĥ_3 = H_1 + ΔH_12 + ε_12 + ΔH_23 + ε_23
The redundant observation ΔH_31_0 is introduced.
7 Geometric condition: ΔH_12 + ΔH_23 + ΔH_31 = 0
The effect of observation errors, first problem:
ΔH_12_0 + ΔH_23_0 + ΔH_31_0 = ε ≠ 0
ε is the closure error and provides an overall measure of the observation errors: ε = ε_12 + ε_23 + ε_31. Note that ε can be computed, while the individual errors remain unknown. Second problem:
8 The estimates of the height differences depend on the path:
ΔH*_12 = ΔH_12_0
ΔH**_12 = -ΔH_31_0 - ΔH_23_0
ΔH*_12 ≠ ΔH**_12
A possible solution: the closure error ε is distributed among all the observations in such a way that their a posteriori estimates satisfy the geometric condition
ΔĤ_12 + ΔĤ_23 + ΔĤ_31 = 0
so that the estimate of the heights (and height differences) is univocal.
9 Algebraic position of the example: 2 unknowns (H_2, H_3) and 3 observations; 3 observation equations
s_0: H_2 - H_1 = ΔH_12_0
     H_3 - H_2 = ΔH_23_0
     H_1 - H_3 = ΔH_31_0
that are completely equivalent to 1 condition equation (the sum of the above three)
ΔH_12_0 + ΔH_23_0 + ΔH_31_0 = 0
that is not satisfied by the 3 equations.
10 The system s_0 of 3 equations in 2 unknowns is algebraically impossible:
s_0: H_2 - H_1 = ΔH_12_0
     H_3 - H_2 = ΔH_23_0
     H_1 - H_3 = ΔH_31_0
because each equation linearly depends on the other two on the left side of the system, but not on the right side, due to the observation errors.
11 Therefore 3 more unknowns, ε_12, ε_23, ε_31, are considered; they represent the corrections to apply to the observations in order to obtain corrected (calculated) values ΔĤ_12, ΔĤ_23, ΔĤ_31 that satisfy s. The resulting system ŝ is algebraically underdetermined:
ŝ: H_2 - H_1 + ε_12 = ΔH_12_0
   H_3 - H_2 + ε_23 = ΔH_23_0
   H_1 - H_3 + ε_31 = ΔH_31_0
5 unknowns (H_2, H_3, ε_12, ε_23, ε_31) in only 3 equations. Therefore
12 2 more equations are needed to build a system S with a univocal algebraic solution:
- the 2 equations are obtained by imposing an (arbitrary) condition on the unknowns
- the condition provides a criterion to distribute the closure error and constitutes the estimation principle for the unknowns.
13 One example of estimation principle: equal distribution of the closure error
ε̂_12 = ε̂_23 = ε̂_31 = ε/3
Computed observations (observables estimates):
ΔĤ_12 = ΔH_12_0 - ε/3
ΔĤ_23 = ΔH_23_0 - ε/3
ΔĤ_31 = ΔH_31_0 - ε/3
Heights (parameters estimates):
Ĥ_2 = H_1 + ΔĤ_12
14 Ĥ_3 = Ĥ_2 + ΔĤ_23
Verify: the estimates no longer depend on the path!
Remarks: the parameters (positions) are estimated starting from redundant observations by applying one estimation principle; moreover, the accuracies of the parameters are estimated by applying the same principle. Least Squares is the estimation principle usually adopted to adjust networks. Least Squares principles
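The equal-distribution adjustment above can be verified numerically. A minimal sketch, assuming NumPy; the heights and the observation noise level are made-up values:

```python
import numpy as np

# Leveling triangle: H1 known, simulated observations of the three ΔH.
H1, H2_true, H3_true = 100.0, 102.5, 101.2
true_diffs = np.array([H2_true - H1,        # ΔH_12
                       H3_true - H2_true,   # ΔH_23
                       H1 - H3_true])       # ΔH_31
rng = np.random.default_rng(0)
obs = true_diffs + rng.normal(0.0, 0.003, 3)   # observed values ΔH_.._0

eps = obs.sum()               # closure error ε = ε_12 + ε_23 + ε_31
adj = obs - eps / 3.0         # equal distribution of ε
# Adjusted differences satisfy the geometric condition:
print(abs(adj.sum()) < 1e-12)    # True

# The height of point 2 no longer depends on the path:
H2_a = H1 + adj[0]                # via ΔĤ_12
H2_b = H1 - adj[2] - adj[1]       # via ΔĤ_31 then ΔĤ_23
print(abs(H2_a - H2_b) < 1e-9)    # True
```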
15 General properties:
- not dependent on the probability distribution of the observations
- minimum variance estimates within the class of linear unbiased ones
- parameters and their accuracies can be computed in closed form
- by simulations, the final accuracies of the parameters can be predicted from the a priori precision of the observations
- however, the estimates are not robust
16 Least Squares: data
y_0: observations vector (m elements), extracted from the random variable y (observables), defined in R^m
the mean ȳ must belong to a linear subspace (manifold) V_n of dimension n in R^m: ȳ ∈ V_n (linear functional model)
the covariance of y is known apart from a proportionality factor σ²: C_yy = σ² Q (stochastic model)
17 Least Squares: goals
- one estimate ŷ of the mean ȳ such that ŷ ∈ V_n
- one estimate Ĉ_yy = σ̂² Q of C_yy, that means one estimate of σ², given Q
Estimation principle:
(y_0 - ŷ)^T Q^{-1} (y_0 - ŷ) = min
18 Geometric interpretation: the leveling triangle
y = [ΔH_12, ΔH_23, ΔH_31]^T
Functional model (linear): ΔH_12 + ΔH_23 + ΔH_31 = 0, ȳ ∈ V_n (equation of a plane, n = 2, in the space R^3, m = 3)
Stochastic model: Q = I, C_yy = σ² I
19 Least Squares estimator:
(y_0 - ŷ)^T Q^{-1} (y_0 - ŷ) = min, ŷ ∈ V_n
ŷ is the point of V_n at minimum distance from y_0
20 Geometric interpretation of the condition and observation equations: the leveling triangle
Condition equation: ΔH_12 + ΔH_23 + ΔH_31 = 0
Observation equations:
ΔH_12 = H_2 - H_1
ΔH_23 = H_3 - H_2
ΔH_31 = H_1 - H_3
Each of them represents the equation of a plane in R^3.
21 Application of Least Squares to the network adjustment: condition or observation equations? Typically, not the observables but other parameters are of "interest"; these are unknowns related to the observables by observation equations. In leveling: the observables are the height differences, the parameters are the heights, and the observation equations are H_j - H_i = ΔH_ij.
22 Generally, condition equations cannot be easily implemented. On the contrary, observation equations can be: one equation for each kind of observation. In the following, the observation equation model will be adopted.
23 The observation equation model. Given m observations
y_0 = [y_1_0, y_2_0, ..., y_m_0]^T
y_0 = ȳ + ε, E[y_0] = ȳ, C_yy = C_εε = σ² Q
ȳ is the vector of the unknown observables,
24 ε is a sample drawn from an m-variate random variable modeling the observation error, unknown too; σ² is the a priori variance of the observation error; Q is the cofactor matrix of the m-variate random variable, both known. Given the n-dimensional vector x containing the unknown parameters:
x = [x_1, x_2, ..., x_n]^T, n ≤ m
25 we introduce the following deterministic model, describing the functional relationship between x and ȳ:
ȳ = Ax + b
A is the m × n design matrix, b is an m-dimensional known vector.
26 Least squares principle and estimators. We look for two vectors x̂ and ŷ compatible with each other, where ŷ is at minimum distance from y_0; namely, x̂ and ŷ such that
ŷ = Ax̂ + b
(y_0 - ŷ)^T Q^{-1} (y_0 - ŷ) = min
27 From the previous conditions the normal system follows:
Nx̂ = A^T Q^{-1} (y_0 - b), N = A^T Q^{-1} A
N is called the normal matrix; two cases are possible:
- A is full rank (its columns are linearly independent from one another): Ax = 0 ⇒ x = 0; in this case there is no rank deficiency and the normal matrix N is invertible.
28 - A is not a full rank matrix (some of its columns are linear combinations of the others): Ax = 0 for some x ≠ 0; in this case the problem has a rank deficiency and N is not invertible.
29 Rank deficiency in a system of observation equations: the leveling triangle. The 3 redundant observations ΔH_12, ΔH_23, ΔH_31 are not sufficient to estimate the 3 heights H_1, H_2, H_3.
30 Indeed, the 3 redundant observations are not modified by adding a constant H̄ to the 3 heights:
ΔH_12 = H_2 - H_1 = (H_2 + H̄) - (H_1 + H̄)
ΔH_23 = H_3 - H_2 = (H_3 + H̄) - (H_2 + H̄)
ΔH_31 = H_1 - H_3 = (H_1 + H̄) - (H_3 + H̄)
The heights present one degree of freedom with respect to the observables; therefore the system s presents a rank deficiency equal to 1, and the redundant observations allow the estimation of 2 heights once the third has been fixed a priori.
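This one-dimensional rank deficiency can be checked numerically. A sketch, assuming NumPy: the design matrix of the leveling triangle has rank 2, its normal matrix is singular, and a common shift of all heights leaves the observables unchanged:

```python
import numpy as np

# Design matrix of the leveling triangle for parameters (H1, H2, H3):
A = np.array([[-1.0, 1.0, 0.0],    # ΔH_12 = H2 - H1
              [ 0.0,-1.0, 1.0],    # ΔH_23 = H3 - H2
              [ 1.0, 0.0,-1.0]])   # ΔH_31 = H1 - H3
N = A.T @ A                        # normal matrix (Q = I)

print(np.linalg.matrix_rank(A))    # 2: one rank deficiency
print(abs(np.linalg.det(N)) < 1e-9)   # True: N is not invertible

# Shifting all heights by a constant does not change the observables:
H = np.array([100.0, 102.5, 101.2])   # made-up heights
shift = 7.0
print(np.allclose(A @ H, A @ (H + shift)))   # True
```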
31 Important: redundancy and rank deficiency are completely separate characteristics of a system of observation equations:
- redundancy is related to the cross-check of the observations
- rank deficiency is related to the estimability of a given set of parameters wrt a given set of observables
32 Rank deficiency: intuitive definitions. Rank deficiency can be eliminated by constraining the degrees of freedom of the parameters wrt the observables. When the rank deficiency is eliminated by fixing only the degrees of freedom of the parameters, the adjustment is called a minimal constraints adjustment. Example: by fixing the height of point 1, a reference frame in which point 1 has height H_1 is defined.
33 Rank deficiency: formalization. We define the kernel K(A) of A as follows:
K(A) = { x ≠ 0 : Ax = 0 }
If x̂ is a solution of ŷ = Ax̂ + b, then for any x_0 ∈ K(A), x̂ + x_0 is also a solution:
A(x̂ + x_0) + b = Ax̂ + Ax_0 + b = ŷ + 0 = ŷ
34 The observations don't contain enough information to estimate all the unknown parameters; this happens even if the problem is redundant, and it is due to the problem design. There is an infinite number of possible solutions for the unknown parameters that satisfy the optimal estimation principle.
35 The levelling triangle:
y_0 = [ΔH_12_0, ΔH_23_0, ΔH_31_0]^T = Ax + ν, x = [H_1, H_2, H_3]^T, C_yy = σ² I
A is not full rank; in particular
36 K(A) = { H̄ [1, 1, 1]^T }, N = A^T A, det(N) = 0
Notes: A: R^n → R^m, K(A) ⊆ R^n; A^T: R^m → R^n, S(A^T) ⊆ R^n; K(A) ⊥ S(A^T)
37 Exercise: the kernel identification. Let the rank of A be r, with r < n: r columns are linearly independent, and the remaining n - r = d columns are combinations of the previous ones; by rearranging the columns, it is therefore possible to write
A = [a_1 a_2 ... a_n] = [A_r A_d] = [A_r A_r D] = A_r [I_r D]
(A: m × n, A_r: m × r, A_d: m × d, D: r × d)
where D is a coefficient matrix. It is easy to show that
38 K(A) = { [-D; I_d] x_d, x_d ∈ R^d }
In fact, one has
A [-D; I_d] x_d = A_r [I_r D] [-D; I_d] x_d = A_r (-D + D) x_d = 0
In order to prove the converse, let Ax = 0; consider the decomposition
39 x = [x_r; x_d] (x: n × 1, x_r: r × 1, x_d: d × 1)
Ax = A_r [I_r D] x = 0 ⇒ [I_r D] x = 0, because A_r is full rank; therefore
x_r + D x_d = 0 ⇒ x_r = -D x_d
namely every x such that Ax = 0 has the form
x = [-D x_d; x_d] = [-D; I_d] x_d
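The kernel identification above can be reproduced numerically for the leveling triangle (r = 2, d = 1). A sketch, assuming NumPy; D is obtained by solving A_r D = A_d:

```python
import numpy as np

A = np.array([[-1.0, 1.0, 0.0],
              [ 0.0,-1.0, 1.0],
              [ 1.0, 0.0,-1.0]])
r, d = 2, 1
Ar, Ad = A[:, :r], A[:, r:]

# D solves Ar @ D = Ad (exact here, since Ad lies in the span of Ar):
D = np.linalg.lstsq(Ar, Ad, rcond=None)[0]

# Kernel basis [-D; I_d]: A maps it to zero.
K = np.vstack([-D, np.eye(d)])
print(np.allclose(A @ K, 0.0))   # True: K spans K(A) = span{[1, 1, 1]^T}
```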
40 The constrained solution. To eliminate the rank deficiency, stochastic constraints can be introduced on the parameters:
Hx = 0, dim(H) = k × n, k ≥ d
to build a new system
y_0 = Ax + b, C_yy = σ² Q
0 = Hx, C_hh = σ_h² I
such that [A; H] has full rank. The estimate is obtained by the solution of
41 [y_0; 0] = [A; H] x + [b; 0], C = diag(σ² Q, C_hh), C_hh = σ² Q_hh, Q_hh = (1/λ) I, λ = σ² / σ_h²
and is given by
x̂_H = R^{-1} A^T Q^{-1} (y_0 - b), R = N + λ H^T H
C_x̂Hx̂H = σ² R^{-1} N R^{-1}
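A minimal sketch of the constrained solution for the leveling triangle, assuming NumPy; the observed height differences and the value of λ are made-up, H fixes the height of point 1 (b = 0 here):

```python
import numpy as np

# Leveling triangle, all three heights unknown (rank deficiency 1).
A = np.array([[-1.0, 1.0, 0.0],    # ΔH_12 = H2 - H1
              [ 0.0,-1.0, 1.0],    # ΔH_23 = H3 - H2
              [ 1.0, 0.0,-1.0]])   # ΔH_31 = H1 - H3
y0 = np.array([2.497, -1.302, -1.196])   # observed ΔH (made-up values)
Q = np.eye(3)
H = np.array([[1.0, 0.0, 0.0]])    # minimal constraint: H1 = 0
lam = 1.0e6                        # λ = σ²/σ_h²: a very stiff constraint

Qi = np.linalg.inv(Q)
N = A.T @ Qi @ A
R = N + lam * (H.T @ H)
x_hat = np.linalg.solve(R, A.T @ Qi @ y0)   # x̂_H = R⁻¹ AᵀQ⁻¹(y0 - b)
print(x_hat)   # ≈ [0, 2.4973, 1.1957]: H1 pinned, closure error distributed
```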
42 The minimum constraints solution. Constraints are minimal when
S(A^T) + S(H^T) = R^n and S(A^T) ∩ S(H^T) = {0}
(equivalently, K(A) ∩ K(H) = {0}), which implies k = d. Practical and comprehension exercise: compute all possible solutions for the leveling triangle!
43 Two definitions and a note. We define as intrinsic the rank deficiency of a network, made of a given type of observations, which cannot be reduced by modifying the network design, that is, by adding new observables of the same kind to the design itself. In the geodetic network adjustment, solutions which eliminate the intrinsic rank deficiency must be adopted: they define the reference system and frame.
44 The nonlinear problem. There is no LS formulation for the nonlinear problem
y_0 = ȳ + ε = f(x) + ε
where
f(x) = [f_1(x_1, x_2, ..., x_n), f_2(x_1, x_2, ..., x_n), ..., f_m(x_1, x_2, ..., x_n)]^T
In this case we first have to linearize the problem.
45 To this aim, approximate values x̃ ≅ x of the unknown parameters have to be known. The linearization is obtained by a Taylor expansion around x̃, truncated at the first order:
y = f(x̃) + J_f(x̃)(x - x̃)
The original problem becomes
η_0 = η + ε = Aξ + ε
η_0 = y_0 - f(x̃), ξ = x - x̃, A_ij = J_ij = ∂f_i/∂x_j (x̃)
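The linearization step can be illustrated with a hypothetical nonlinear example: the 2D position of a point estimated from distances to three known stations (all coordinates are made-up values; NumPy assumed). Each Gauss-Newton iteration solves the linearized problem η_0 = Aξ for the correction ξ:

```python
import numpy as np

# Hypothetical example: position x = (x1, x2) from distances to 3 stations.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])
x_true = np.array([4.0, 3.0])

def f(x):                        # nonlinear observation function f(x)
    return np.linalg.norm(stations - x, axis=1)

def jacobian(x):                 # J_ij = ∂f_i/∂x_j evaluated at x
    return (x - stations) / f(x)[:, None]

d_obs = f(x_true)                # noise-free observations, for clarity

x = np.array([1.0, 1.0])         # approximate values x̃
for _ in range(10):              # iterate the linearized adjustment
    A = jacobian(x)              # design matrix of the linearized model
    eta0 = d_obs - f(x)          # η_0 = y_0 - f(x̃)
    xi = np.linalg.lstsq(A, eta0, rcond=None)[0]   # ξ = x - x̃
    x = x + xi                   # update the approximate values
print(x)   # converges to (4, 3)
```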
46 Final estimates provided by LS (not rank deficient problems). Unknown parameters estimate:
x̂ = N^{-1} A^T Q^{-1} (y_0 - b)
Observables and residuals estimates:
ŷ = Ax̂ + b = AN^{-1} A^T Q^{-1} (y_0 - b) + b
ε̂ = y_0 - ŷ = (I - AN^{-1} A^T Q^{-1}) (y_0 - b)
47 Redundancy or degrees of freedom: R = m - n
A posteriori variance σ² estimate:
σ̂² = ε̂^T Q^{-1} ε̂ / (m - n)
Covariance matrix of the estimated parameters:
C_x̂x̂ = σ̂² N^{-1}
48 Covariance matrix of the estimated observables:
C_ŷŷ = σ̂² A N^{-1} A^T
Covariance matrix of the estimated residuals:
C_ε̂ε̂ = σ̂² (Q - A N^{-1} A^T)
Remark: LS produces unbiased and minimum variance estimates of the unknown parameters; the estimates are independent of the a priori variance.
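These estimators can be sketched end to end for the leveling triangle with H_1 fixed to 0 (full-rank A, b = 0; the observed values are made-up, NumPy assumed):

```python
import numpy as np

# Leveling triangle with H1 fixed to 0: unknowns x = (H2, H3).
A = np.array([[ 1.0, 0.0],     # ΔH_12 = H2 - H1 = H2
              [-1.0, 1.0],     # ΔH_23 = H3 - H2
              [ 0.0,-1.0]])    # ΔH_31 = H1 - H3 = -H3
y0 = np.array([2.497, -1.302, -1.196])   # made-up observations, b = 0
Q = np.eye(3)
Qi = np.linalg.inv(Q)

N = A.T @ Qi @ A
x_hat = np.linalg.solve(N, A.T @ Qi @ y0)    # x̂ = N⁻¹AᵀQ⁻¹(y0 - b)
y_hat = A @ x_hat                             # ŷ = Ax̂ + b
e_hat = y0 - y_hat                            # ε̂ = y0 - ŷ
m, n = A.shape
s2 = (e_hat @ Qi @ e_hat) / (m - n)           # σ̂², with m - n = 1
Cxx = s2 * np.linalg.inv(N)                   # C_x̂x̂
Cyy = s2 * (A @ np.linalg.inv(N) @ A.T)       # C_ŷŷ
print(x_hat, s2)
```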
49 Outliers identification and removal. Least Squares is not robust: the estimates can be distorted if the functional and/or the stochastic model does not properly describe the observations (outliers). However:
- the stochastic and functional models that describe the observations of classic geodetic networks are very simple and well known
- the stochastic and functional models relevant to the GPS observations are complicated and only partly known
- in any case, the Normality hypothesis on the observations is adopted
50 Classic geodetic networks. Model errors are typically due to outliers: observations external wrt the stochastic model.
[Figure: two normal densities centered on µ, N[µ, σ²] and a wider N_e[µ, σ_e²]]
51 The observations are well described by the normal distribution N[µ, σ²]. The outlier observations are external wrt the normal distribution N[µ, σ²], but are consistent with the other normal distribution N_e[µ, σ_e²].
52 GNSS observations. Model errors are caused both by outliers and by an approximate knowledge of the stochastic model.
[Figure: two densities, N_a and N_c]
53 N_a: approximated stochastic model; N_c: correct stochastic model; an outlier is external with respect to both models. Generally, the accuracy of the observations is overestimated; therefore at first the outliers should be identified, then the stochastic model has to be corrected.
54 Algorithms exist in order to, a posteriori:
- verify the global unbiasedness of the adopted models
- identify possible errors affecting single observations
- assess the stochastic model
- assess the reliability of the adjustment results.
55 Hypothesis testing. Determine whether, with a fixed probability of error (confidence level), a hypothesis H_0 can be accepted:
- a statistic is built whose distribution is known if H_0 holds true and which assumes very high values when H_0 is wrong
- the sample value of this statistic is compared to an acceptance interval
- the hypothesis is accepted if the sample value belongs to this interval.
The test significance level α is the probability of making an error by rejecting a hypothesis which is true; the usually adopted values are 0.01, 0.05, 0.10.
56 Global test on the model (functional and stochastic). Fundamental hypothesis:
H_0: σ̂² ≅ σ²
Test statistic:
χ²_exp = σ̂² (m - n) / σ²
χ²_exp ~ χ²(m - n) if H_0 is true
χ²(m - n): chi square distribution with (m - n) degrees of freedom
57 [Figure: χ² probability density with acceptance interval (0, χ²_lim) and significance level α]
58 α: significance level of the test. χ²_lim = χ²_{m-n}(α) is such that
P(0 ≤ χ² ≤ χ²_lim) = 1 - α
If χ²_exp ≤ χ²_lim, H_0 is accepted.
If χ²_exp > χ²_lim, H_0 is rejected (probable presence of model errors).
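A sketch of the global test, assuming SciPy for the χ² quantile; the variances and degrees of freedom are made-up values:

```python
from scipy.stats import chi2

m, n = 3, 2                    # observations and parameters: 1 degree of freedom
sigma2_apriori = 1.0e-6        # a priori variance σ² (assumed known)
sigma2_hat = 3.33e-7           # a posteriori σ̂² from an adjustment (made-up)
alpha = 0.05                   # significance level

chi2_exp = sigma2_hat * (m - n) / sigma2_apriori   # χ²_exp = σ̂²(m-n)/σ²
chi2_lim = chi2.ppf(1.0 - alpha, df=m - n)         # P(0 ≤ χ² ≤ χ²_lim) = 1-α
accepted = chi2_exp <= chi2_lim
print(chi2_lim, accepted)    # ≈ 3.84, True: H0 accepted
```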
59 Steps to perform the test on the global model:
- least squares estimate of x̂, ŷ, ε̂ and σ̂²
- given the significance level α, χ²_lim is determined (tables)
- the value χ²_exp is computed and compared with χ²_lim.
Note: the test on the global model was originally designed to identify errors in the deterministic model, but it can fail just because of errors in the stochastic model.
60 Local test on single observations (independent observations). Fundamental hypothesis:
τ_exp = ε̂_i / σ_ε̂_i ~ τ(m - n), where σ_ε̂_i = σ̂ √(q_ε̂ε̂_ii)
τ(m - n): Thompson τ distribution with (m - n) degrees of freedom (similar to a normal distribution when (m - n) is big)
61 The theoretical τ_lim = τ_{m-n}(α/2) is fixed such that
P(0 ≤ |τ| ≤ τ_lim) = 1 - α, P(|τ| > τ_lim) = α
If |τ_exp| ≤ τ_lim, H_0 is accepted; otherwise H_0 is rejected and the i-th observation is a suspected outlier.
62 Remark 1: the acceptance interval leaves two tails of total probability α: either negative or positive residuals whose absolute values are too high wrt the limits are to be rejected.
Remark 2: the non-robustness of the estimates complicates the outlier identification: indeed, one outlier modifies also the residuals of the other observations. Therefore an iterative process is needed to identify outliers (Data Snooping).
63 Data Snooping: at each iteration, we eliminate the k-th observation, for which
τ_exp_k > τ_lim, τ_exp_k = max_i(τ_exp_i)
Iterations stop when no more suspected observations remain. The final residuals of the rejected observations are used to decide whether to eliminate them definitively or to reintroduce them.
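A simplified data-snooping loop might look like the sketch below (NumPy assumed; Q = I, and a fixed, hypothetical τ_lim is used in place of a value from the τ distribution tables). The five repeated measurements are made-up, with a deliberate outlier:

```python
import numpy as np

def data_snooping(A, y0, tau_lim):
    """At each iteration drop the observation with the largest normalized
    residual exceeding tau_lim; stop when all pass the local test."""
    keep, rejected = list(range(len(y0))), []
    while True:
        Ak, yk = A[keep], y0[keep]
        m, n = Ak.shape
        if m <= n:                        # no redundancy left
            break
        N_inv = np.linalg.inv(Ak.T @ Ak)
        e = yk - Ak @ (N_inv @ Ak.T @ yk)            # residuals ε̂
        s2 = (e @ e) / (m - n)                       # a posteriori σ̂²
        q = np.diag(np.eye(m) - Ak @ N_inv @ Ak.T)   # diag of Q_ε̂ε̂ (Q = I)
        tau = np.abs(e) / np.sqrt(s2 * q)            # normalized residuals
        k = int(np.argmax(tau))
        if tau[k] <= tau_lim:
            break
        rejected.append(keep.pop(k))                 # suspected outlier removed
    return keep, rejected

# Five repeated measurements of one quantity; the last is an outlier:
A = np.ones((5, 1))
y0 = np.array([10.01, 9.99, 10.02, 9.98, 12.0])
keep, rejected = data_snooping(A, y0, tau_lim=1.9)
print(keep, rejected)   # [0, 1, 2, 3] [4]
```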
64 The test on the stochastic model. In some cases, the test on the global model fails but no large outliers (normalized residuals) can be isolated by data snooping. In this case, typically, the hypotheses on the covariance matrix structure were wrong: the accuracies of some groups of observations were overestimated, i.e. the corresponding diagonal blocks C_yy_ii of C_yy were underestimated ("small").
65 A posteriori estimate of the stochastic model. Let
y_i = [y_i1, ..., y_ip]^T, C_ii = σ_i² Q_ii
y = [y_1, ..., y_q]^T, C = diag(σ_1² Q_11, ..., σ_q² Q_qq)
where even the different σ_i² are to be considered as unknown. Fixed an arbitrary σ², we solve the least squares problem by using
C = σ² Q, Q = diag(K_11 Q_11, ..., K_qq Q_qq), K_ii = σ_i² / σ²
66 where, as a first approximation, we can put σ_i² ≅ σ². The rigorous model for the joint estimate of σ_i² and x is so numerically complex that it has no practical use. A reasonable compromise between rigorousness and simplicity is given by
σ̂_i² = ε̂_i^T K_ii^{-1} ε̂_i / (m_i - tr{A_i N^{-1} A_i^T K_ii^{-1}})
that can also be written as follows
67 σ̂_i² = ε̂_i^T K_ii^{-1} ε̂_i / Σ_{j∈i} r_jj
The process is iterative; at each step:
- LS estimate of the parameters, the residuals and the new covariances
- the process is stopped when the results converge to stable values.
68 Accuracy of the parameters. Hypothesis: the global test and Data Snooping have been successfully performed.
Overall accuracy: covariance matrix C_x̂x̂ = σ̂² N^{-1}
Accuracy of the point i: submatrix C_x̂ix̂i = σ̂² E_i N^{-1} E_i^T
E_i: matrix extracting the estimates x̂_i of the parameters of the point i from the vector x̂: x̂_i = E_i x̂
69 Confidence interval of each point i: region of the p-dimensional space (p is the number of coordinates that define the position of i in the network) to which i belongs with a given probability, centered on the estimates x̂_i:
(x_i - x̂_i)^T C_x̂ix̂i^{-1} (x_i - x̂_i) ≤ F_{p,m-n}(α)
Error ellipse (2D) and ellipsoid (3D). Error ellipse of a point i: 2-D confidence interval (p = 2)
70 (x_i - x̂_i)^T C_x̂ix̂i^{-1} (x_i - x̂_i) ≤ F_{2,m-n}(α)
F_{2,m-n}(α): Fisher distribution with (2, m - n) degrees of freedom
α: generally the values 0.01, 0.05, 0.10 are adopted
Standard error ellipse of a point i:
(x_i - x̂_i)^T C_x̂ix̂i^{-1} (x_i - x̂_i) ≤ 1
F_{2,m-n}(α) = 1 corresponds to α ≅ 0.61 for (m - n) > 10
71 Error ellipsoid of a point i: 3-D confidence interval (p = 3)
(x_i - x̂_i)^T C_x̂ix̂i^{-1} (x_i - x̂_i) ≤ F_{3,m-n}(α)
F_{3,m-n}(α): Fisher distribution with (3, m - n) degrees of freedom
α: generally the values 0.01, 0.05, 0.10 are adopted
Standard error ellipsoid of a point i:
(x_i - x̂_i)^T C_x̂ix̂i^{-1} (x_i - x̂_i) ≤ 1
corresponding to α ≅ 0.81 for (m - n) > 10
72 Geometric parameters of the error ellipse. Given the 2D covariance matrix of point i:
C_x̂x̂ = [σ̂_X², σ̂_XY; σ̂_XY, σ̂_Y²]
σ_max: major semiaxis; σ_min: minor semiaxis; θ: orientation angle of the major semiaxis wrt the X axis.
σ_max, σ_min are the square roots of the eigenvalues of C_x̂x̂:
73 det(C_x̂x̂ - λI) = det [σ̂_X² - λ, σ̂_XY; σ̂_XY, σ̂_Y² - λ] = 0
Therefore
λ_max,min = (σ̂_X² + σ̂_Y²)/2 ± (1/2) √((σ̂_X² - σ̂_Y²)² + 4 σ̂_XY²)
σ_STDmax = √λ_max, σ_STDmin = √λ_min
Let's use θ to indicate the counter-clockwise angle between the X axis and the error ellipse major axis. The unit vector
74 e_max = [cos θ, sin θ]^T
provides the direction of the eigenvector associated with the maximum eigenvalue λ_max:
C_x̂x̂ e_max = λ_max e_max; (C_x̂x̂ - λ_max I) e_max = 0
(σ̂_X² - λ_max) e_max,1 + σ̂_XY e_max,2 = 0; σ̂_XY e_max,1 + (σ̂_Y² - λ_max) e_max,2 = 0
tan θ = e_max,2 / e_max,1 = (λ_max - σ̂_X²) / σ̂_XY
75 From the trigonometric identity tan 2θ = 2 tan θ / (1 - tan²θ), it follows that
tan 2θ = 2 σ̂_XY / (σ̂_X² - σ̂_Y²)
therefore
θ = (1/2) arctan( 2 σ̂_XY / (σ̂_X² - σ̂_Y²) )
The parameters of the other ellipses, with F_{2,m-n}(α) ≠ 1, are given by
σ_max = √(F_{2,m-n}(α)) σ_STDmax, σ_min = √(F_{2,m-n}(α)) σ_STDmin
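The ellipse parameters can be computed directly from a 2D covariance matrix and cross-checked against an eigendecomposition (NumPy assumed; the covariance values are made-up):

```python
import numpy as np

# Standard error ellipse from a 2D covariance matrix (made-up values):
Cxx = np.array([[4.0, 1.5],
                [1.5, 2.0]])
sx2, sy2, sxy = Cxx[0, 0], Cxx[1, 1], Cxx[0, 1]

half = 0.5 * np.sqrt((sx2 - sy2) ** 2 + 4.0 * sxy ** 2)
lam_max = 0.5 * (sx2 + sy2) + half      # λ_max
lam_min = 0.5 * (sx2 + sy2) - half      # λ_min
std_max, std_min = np.sqrt(lam_max), np.sqrt(lam_min)   # semiaxes
theta = 0.5 * np.arctan2(2.0 * sxy, sx2 - sy2)          # major-axis angle

# Cross-check against the eigenvalues of Cxx:
print(np.allclose(sorted([lam_min, lam_max]), np.linalg.eigvalsh(Cxx)))  # True
```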
76 Internal reliability of the observables: it represents the capability of the observables to reciprocally monitor each other. For example: the height difference ΔH_BD is not controlled by any other observation, so it has no reliability; ΔH_AB, ΔH_BC, ΔH_CA control each other, so their reliability is different from zero. The local redundancy r_i of each observable is an internal reliability index.
77 Maximum outlier not identified in each observation (according to Baarda). Outlier identification is based on the mutual control of all the observations; therefore the absolute value of the non-identified outlier in each observation depends on its reliability. Depending on the local redundancy r_i and accuracy σ_i of each observation, fixed an a priori non-centrality value δ, the relevant probabilities are:
α: accepting the hypothesis H_a (outlier present) even if the outlier is not present (the hypothesis H_0 is true)
1 - β: accepting the hypothesis H_0 (outlier not present) even if one outlier δ is present (the hypothesis H_a is true)
78 Test reliability. Under the alternative hypothesis H_a (one outlier δ_i in observation y_i) the normalized residual follows a τ(m - n) distribution with non-centrality parameter √(q_ε̂ε̂_ii) δ_i / σ̂:
τ(m - n, √(q_ε̂ε̂_ii) δ_i / σ̂)
79 1 - β = P(τ ≤ τ_lim), τ ~ τ(m - n, √(q_ε̂ε̂_ii) δ_i / σ̂)
β is the power of the test with respect to δ. Assigned α, τ_lim can be determined; assigned β, δ_i can be computed such that
P( τ(m - n, √(q_ε̂ε̂_ii) δ_i / σ̂) ≤ τ_lim ) = 1 - β.
80 The maximum embedded error in observation i is the maximum error δ_i which it is not possible to reveal with a test power β. It can be proved that
δ_i = k_{α,β} σ / √((Q^{-1} P_A)_ii) ≅ k_{α,β} σ_i / √((P_A)_ii)
where P_A = I - A N^{-1} A^T Q^{-1}
81 Local redundancy of the observation i: r_i = (P_A)_ii
Two extreme cases: r_i → 0 ⇒ δ_i → ∞; r_i → 1 ⇒ δ_i → min
Internal reliability of the observation i: the maximum embedded error δ_i. The worst internal reliability: δ = max_i δ_i
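The local redundancies r_i = (P_A)_ii can be computed for the leveling triangle with H_1 fixed (Q = I, NumPy assumed); note that they sum to the global redundancy m - n:

```python
import numpy as np

# Leveling triangle with H1 fixed: unknowns (H2, H3), Q = I.
A = np.array([[ 1.0, 0.0],
              [-1.0, 1.0],
              [ 0.0,-1.0]])
N_inv = np.linalg.inv(A.T @ A)
P_A = np.eye(3) - A @ N_inv @ A.T   # P_A = I - A N⁻¹ Aᵀ Q⁻¹ with Q = I
r = np.diag(P_A)                    # local redundancies r_i
print(r, r.sum())   # each r_i = 1/3; sum = m - n = 1
```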
82 External reliability: the influence of an embedded error δ_i on the estimates of the unknown parameters:
δx̂(δ_i) = N^{-1} A^T Q^{-1} e_i δ_i
Two extreme cases:
r_i = 0: the whole error propagates into the unknown estimates
r_i ≅ 1: the whole error goes into the residuals and does not affect the unknown estimates at all.
83 External reliability of the parameter x_j: δx̂_j = max_i( δx̂_j(δ_i) )
The worst external reliability: δx̂ = max_j( δx̂_j ).
84 Parameters estimability and conditioning of the normal system. Even after the removal of the rank deficiency, some parameters could be affected by a bad estimability problem. This can be checked by the eigenvalue (spectral) decomposition of the normal matrix:
E^T N E = Λ
E: matrix of the orthonormal eigenvectors of N
Λ: diagonal matrix containing the eigenvalues of N
The parameters associated with eigenvalues that are almost 0 are badly estimable and cannot be reliably calculated.
85 The conditioning number of the normal system is an index of the estimability of the solution; it is computed as
v = λ_max / λ_min
v should be almost equal to 1; if λ_min → 0, then v → ∞ and the solution is unstable or ill conditioned. This is neither a rank deficiency problem nor an accuracy one; however, an ill conditioned system can degenerate into a rank deficient one and, in any case, the final accuracies of the estimated parameters are mediocre.
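A sketch of the conditioning check on the same leveling-triangle normal matrix (NumPy assumed):

```python
import numpy as np

# Conditioning of the normal matrix: v = λ_max / λ_min of N.
A = np.array([[ 1.0, 0.0],
              [-1.0, 1.0],
              [ 0.0,-1.0]])
N = A.T @ A
lam = np.linalg.eigvalsh(N)   # eigenvalues, ascending order
v = lam[-1] / lam[0]          # conditioning number
print(lam, v)   # [1, 3], v = 3: a well conditioned system
```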
More informationRegression #5: Confidence Intervals and Hypothesis Testing (Part 1)
Regression #5: Confidence Intervals and Hypothesis Testing (Part 1) Econ 671 Purdue University Justin L. Tobias (Purdue) Regression #5 1 / 24 Introduction What is a confidence interval? To fix ideas, suppose
More informationBasic Probability Reference Sheet
February 27, 2001 Basic Probability Reference Sheet 17.846, 2001 This is intended to be used in addition to, not as a substitute for, a textbook. X is a random variable. This means that X is a variable
More informationCh 2: Simple Linear Regression
Ch 2: Simple Linear Regression 1. Simple Linear Regression Model A simple regression model with a single regressor x is y = β 0 + β 1 x + ɛ, where we assume that the error ɛ is independent random component
More informationMatrix Representation
Matrix Representation Matrix Rep. Same basics as introduced already. Convenient method of working with vectors. Superposition Complete set of vectors can be used to express any other vector. Complete set
More informationANSWERS (5 points) Let A be a 2 2 matrix such that A =. Compute A. 2
MATH 7- Final Exam Sample Problems Spring 7 ANSWERS ) ) ). 5 points) Let A be a matrix such that A =. Compute A. ) A = A ) = ) = ). 5 points) State ) the definition of norm, ) the Cauchy-Schwartz inequality
More informationWeek Quadratic forms. Principal axes theorem. Text reference: this material corresponds to parts of sections 5.5, 8.2,
Math 051 W008 Margo Kondratieva Week 10-11 Quadratic forms Principal axes theorem Text reference: this material corresponds to parts of sections 55, 8, 83 89 Section 41 Motivation and introduction Consider
More informationSolutions to the Calculus and Linear Algebra problems on the Comprehensive Examination of January 28, 2011
Solutions to the Calculus and Linear Algebra problems on the Comprehensive Examination of January 8, Solutions to Problems 5 are omitted since they involve topics no longer covered on the Comprehensive
More informationLinear Algebra- Final Exam Review
Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.
More informationAlgebra of Random Variables: Optimal Average and Optimal Scaling Minimising
Review: Optimal Average/Scaling is equivalent to Minimise χ Two 1-parameter models: Estimating < > : Scaling a pattern: Two equivalent methods: Algebra of Random Variables: Optimal Average and Optimal
More information5 Operations on Multiple Random Variables
EE360 Random Signal analysis Chapter 5: Operations on Multiple Random Variables 5 Operations on Multiple Random Variables Expected value of a function of r.v. s Two r.v. s: ḡ = E[g(X, Y )] = g(x, y)f X,Y
More informationFoundations of Matrix Analysis
1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the
More informationStatistical signal processing
Statistical signal processing Short overview of the fundamentals Outline Random variables Random processes Stationarity Ergodicity Spectral analysis Random variable and processes Intuition: A random variable
More informationMethods for sparse analysis of high-dimensional data, II
Methods for sparse analysis of high-dimensional data, II Rachel Ward May 26, 2011 High dimensional data with low-dimensional structure 300 by 300 pixel images = 90, 000 dimensions 2 / 55 High dimensional
More informationLinear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0.
Matrices Operations Linear Algebra Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0 The rectangular array 1 2 1 4 3 4 2 6 1 3 2 1 in which the
More information1. Matrix multiplication and Pauli Matrices: Pauli matrices are the 2 2 matrices. 1 0 i 0. 0 i
Problems in basic linear algebra Science Academies Lecture Workshop at PSGRK College Coimbatore, June 22-24, 2016 Govind S. Krishnaswami, Chennai Mathematical Institute http://www.cmi.ac.in/~govind/teaching,
More information2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure.
Hints for Exercises 1.3. This diagram says that f α = β g. I will prove f injective g injective. You should show g injective f injective. Assume f is injective. Now suppose g(x) = g(y) for some x, y A.
More informationECE 636: Systems identification
ECE 636: Systems identification Lectures 9 0 Linear regression Coherence Φ ( ) xy ω γ xy ( ω) = 0 γ Φ ( ω) Φ xy ( ω) ( ω) xx o noise in the input, uncorrelated output noise Φ zz Φ ( ω) = Φ xy xx ( ω )
More information11 a 12 a 21 a 11 a 22 a 12 a 21. (C.11) A = The determinant of a product of two matrices is given by AB = A B 1 1 = (C.13) and similarly.
C PROPERTIES OF MATRICES 697 to whether the permutation i 1 i 2 i N is even or odd, respectively Note that I =1 Thus, for a 2 2 matrix, the determinant takes the form A = a 11 a 12 = a a 21 a 11 a 22 a
More informationMath 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination
Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column
More informationTHE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS
THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS MATH853: Linear Algebra, Probability and Statistics May 5, 05 9:30a.m. :30p.m. Only approved calculators as announced by the Examinations Secretary
More informationNumerical Analysis: Solutions of System of. Linear Equation. Natasha S. Sharma, PhD
Mathematical Question we are interested in answering numerically How to solve the following linear system for x Ax = b? where A is an n n invertible matrix and b is vector of length n. Notation: x denote
More informationTHE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR
THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR 1. Definition Existence Theorem 1. Assume that A R m n. Then there exist orthogonal matrices U R m m V R n n, values σ 1 σ 2... σ p 0 with p = min{m, n},
More informationInverse Theory. COST WaVaCS Winterschool Venice, February Stefan Buehler Luleå University of Technology Kiruna
Inverse Theory COST WaVaCS Winterschool Venice, February 2011 Stefan Buehler Luleå University of Technology Kiruna Overview Inversion 1 The Inverse Problem 2 Simple Minded Approach (Matrix Inversion) 3
More informationSingular value decomposition. If only the first p singular values are nonzero we write. U T o U p =0
Singular value decomposition If only the first p singular values are nonzero we write G =[U p U o ] " Sp 0 0 0 # [V p V o ] T U p represents the first p columns of U U o represents the last N-p columns
More informationEcon 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines
Econ 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines Maximilian Kasy Department of Economics, Harvard University 1 / 37 Agenda 6 equivalent representations of the
More informationECON 4160, Autumn term Lecture 1
ECON 4160, Autumn term 2017. Lecture 1 a) Maximum Likelihood based inference. b) The bivariate normal model Ragnar Nymoen University of Oslo 24 August 2017 1 / 54 Principles of inference I Ordinary least
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationAdvanced Engineering Statistics - Section 5 - Jay Liu Dept. Chemical Engineering PKNU
Advanced Engineering Statistics - Section 5 - Jay Liu Dept. Chemical Engineering PKNU Least squares regression What we will cover Box, G.E.P., Use and abuse of regression, Technometrics, 8 (4), 625-629,
More informationLinear Regression. In this problem sheet, we consider the problem of linear regression with p predictors and one intercept,
Linear Regression In this problem sheet, we consider the problem of linear regression with p predictors and one intercept, y = Xβ + ɛ, where y t = (y 1,..., y n ) is the column vector of target values,
More informationIntroduction to Least Squares Adjustment for geodetic VLBI
Introduction to Least Squares Adjustment for geodetic VLBI Matthias Schartner a, David Mayer a a TU Wien, Department of Geodesy and Geoinformation Least Squares Adjustment why? observation is τ (baseline)
More information235 Final exam review questions
5 Final exam review questions Paul Hacking December 4, 0 () Let A be an n n matrix and T : R n R n, T (x) = Ax the linear transformation with matrix A. What does it mean to say that a vector v R n is an
More informationStatistics and Data Analysis
Statistics and Data Analysis The Crash Course Physics 226, Fall 2013 "There are three kinds of lies: lies, damned lies, and statistics. Mark Twain, allegedly after Benjamin Disraeli Statistics and Data
More informationCS6964: Notes On Linear Systems
CS6964: Notes On Linear Systems 1 Linear Systems Systems of equations that are linear in the unknowns are said to be linear systems For instance ax 1 + bx 2 dx 1 + ex 2 = c = f gives 2 equations and 2
More informationMathematical foundations - linear algebra
Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar
More information1. General Vector Spaces
1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule
More informationMATH 240 Spring, Chapter 1: Linear Equations and Matrices
MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear
More informationLecture 2: Linear Algebra Review
EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1
More informationStatistics 203: Introduction to Regression and Analysis of Variance Course review
Statistics 203: Introduction to Regression and Analysis of Variance Course review Jonathan Taylor - p. 1/?? Today Review / overview of what we learned. - p. 2/?? General themes in regression models Specifying
More informationI. Multiple Choice Questions (Answer any eight)
Name of the student : Roll No : CS65: Linear Algebra and Random Processes Exam - Course Instructor : Prashanth L.A. Date : Sep-24, 27 Duration : 5 minutes INSTRUCTIONS: The test will be evaluated ONLY
More informationNotes on generating functions in automata theory
Notes on generating functions in automata theory Benjamin Steinberg December 5, 2009 Contents Introduction: Calculus can count 2 Formal power series 5 3 Rational power series 9 3. Rational power series
More informationMath 118, Fall 2014 Final Exam
Math 8, Fall 4 Final Exam True or false Please circle your choice; no explanation is necessary True There is a linear transformation T such that T e ) = e and T e ) = e Solution Since T is linear, if T
More information[y i α βx i ] 2 (2) Q = i=1
Least squares fits This section has no probability in it. There are no random variables. We are given n points (x i, y i ) and want to find the equation of the line that best fits them. We take the equation
More informationBasic Concepts in Data Reconciliation. Chapter 6: Steady-State Data Reconciliation with Model Uncertainties
Chapter 6: Steady-State Data with Model Uncertainties CHAPTER 6 Steady-State Data with Model Uncertainties 6.1 Models with Uncertainties In the previous chapters, the models employed in the DR were considered
More informationIntroduction: the Abruzzo earthquake The network and the processing strategies. displacements estimation at earthquake epoch. horizontal displacements
The Abruzzo earthquake: temporal and spatial analysis of the first geodetic results L. Biagi, S. Caldera, D. Dominici, F. Sansò Politecnico di Milano Università degli studi de L Aquila Outline Introduction:
More informationInferences about a Mean Vector
Inferences about a Mean Vector Edps/Soc 584, Psych 594 Carolyn J. Anderson Department of Educational Psychology I L L I N O I S university of illinois at urbana-champaign c Board of Trustees, University
More informationHomoskedasticity. Var (u X) = σ 2. (23)
Homoskedasticity How big is the difference between the OLS estimator and the true parameter? To answer this question, we make an additional assumption called homoskedasticity: Var (u X) = σ 2. (23) This
More informationAPPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.
APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product
More informationLecture 7: Positive Semidefinite Matrices
Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.
More informationGetting Started with Communications Engineering
1 Linear algebra is the algebra of linear equations: the term linear being used in the same sense as in linear functions, such as: which is the equation of a straight line. y ax c (0.1) Of course, if we
More informationLinear Algebra. Workbook
Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx
More informationLinear Algebra Review. Vectors
Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors
More informationCamera Calibration The purpose of camera calibration is to determine the intrinsic camera parameters (c 0,r 0 ), f, s x, s y, skew parameter (s =
Camera Calibration The purpose of camera calibration is to determine the intrinsic camera parameters (c 0,r 0 ), f, s x, s y, skew parameter (s = cotα), and the lens distortion (radial distortion coefficient
More informationMath Linear Algebra Final Exam Review Sheet
Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of
More information08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms
(February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops
More informationSimple Linear Regression for the Climate Data
Prediction Prediction Interval Temperature 0.2 0.0 0.2 0.4 0.6 0.8 320 340 360 380 CO 2 Simple Linear Regression for the Climate Data What do we do with the data? y i = Temperature of i th Year x i =CO
More informationMath 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008
Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Exam 2 will be held on Tuesday, April 8, 7-8pm in 117 MacMillan What will be covered The exam will cover material from the lectures
More informationStatistical Methods in Particle Physics
Statistical Methods in Particle Physics Lecture 10 December 17, 01 Silvia Masciocchi, GSI Darmstadt Winter Semester 01 / 13 Method of least squares The method of least squares is a standard approach to
More informationMATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018
Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry
More informationStatistical Pattern Recognition
Statistical Pattern Recognition Feature Extraction Hamid R. Rabiee Jafar Muhammadi, Alireza Ghasemi, Payam Siyari Spring 2014 http://ce.sharif.edu/courses/92-93/2/ce725-2/ Agenda Dimensionality Reduction
More informationMATH 583A REVIEW SESSION #1
MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),
More informationLinear Algebra & Geometry why is linear algebra useful in computer vision?
Linear Algebra & Geometry why is linear algebra useful in computer vision? References: -Any book on linear algebra! -[HZ] chapters 2, 4 Some of the slides in this lecture are courtesy to Prof. Octavia
More informationIntelligent Embedded Systems Uncertainty, Information and Learning Mechanisms (Part 1)
Advanced Research Intelligent Embedded Systems Uncertainty, Information and Learning Mechanisms (Part 1) Intelligence for Embedded Systems Ph. D. and Master Course Manuel Roveri Politecnico di Milano,
More informationNOTES ON LINEAR ALGEBRA CLASS HANDOUT
NOTES ON LINEAR ALGEBRA CLASS HANDOUT ANTHONY S. MAIDA CONTENTS 1. Introduction 2 2. Basis Vectors 2 3. Linear Transformations 2 3.1. Example: Rotation Transformation 3 4. Matrix Multiplication and Function
More informationA Review of Linear Algebra
A Review of Linear Algebra Gerald Recktenwald Portland State University Mechanical Engineering Department gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab: Implementations
More informationRECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK
RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK TRNKA PAVEL AND HAVLENA VLADIMÍR Dept of Control Engineering, Czech Technical University, Technická 2, 166 27 Praha, Czech Republic mail:
More information