Feb 14: Spatial analysis of data fields


Mapping irregularly sampled data onto a regular grid

Many analysis techniques for geophysical data require that the data be located at regular intervals in space and/or time. This is so for spectral analysis, digital filtering and wavelet analysis. Other techniques, such as calculating empirical orthogonal functions from a set of spatially distributed data observed at the same times (though not necessarily at regular intervals, i.e. \Delta t \neq constant), often require that gaps in the data be filled. This is true, for example, in the case of satellite observations where clouds partially obscure the field of view, or which are subject to data dropout from instrument or algorithm failings. In oceanography and meteorology, climatologies (e.g. seasonal or monthly means) are typically computed from a compilation of observations made at irregular locations and times, frequently with a sampling distribution that can lead to regional or temporal biases if care is not taken to recognize and address these in the mapping procedure.

These next few lectures will address techniques for producing regularly gridded maps from irregularly sampled data. These techniques have characteristics of both smoothing and filtering (removing time/space scales) and interpolation (spanning gaps in observations). We begin by reviewing some basics of matrix and vector algebra, drawing on the "Basic Machinery" described in Chapter 3 of Wunsch (1996). [Wunsch, C., The Ocean Circulation Inverse Problem, Cambridge University Press, 442 pp., 1996.]

Matrix and vector algebra, least squares fitting via the normal equations

Topics covered:
- Review of linear algebra conventions, definitions and rules
- Weighted least squares
- Conventional least squares via the normal equations
- Least squares solution to the data design matrix equation
- John's old notes (scanned) for the linear algebra lecture

Linear algebra definitions

A matrix of M by N values:

    A = \{a_{ij}\}, \quad i = 1, \ldots, M, \quad j = 1, \ldots, N

A vector of N values and its transpose:

    \mathbf{q} = \begin{bmatrix} q_1 \\ q_2 \\ \vdots \\ q_N \end{bmatrix}, \qquad \mathbf{q}^T = [\, q_1 \;\; q_2 \;\; \cdots \;\; q_N \,]

Inner product

The inner, or dot, product of two vectors is

    \mathbf{a} \cdot \mathbf{b} = |\mathbf{a}| |\mathbf{b}| \cos\theta

where \theta is the angle (in N-dimensional space) between the two vectors. If \theta = 0 the vectors are parallel. If \theta = \pi/2 the vectors are orthogonal. In more general terms,

    \mathbf{a} \cdot \mathbf{b} = \mathbf{a}^T \mathbf{b} = \sum_{i=1}^{N} a_i b_i

from which it follows that both vectors must be of length N (i.e. they are conforming) in order to compute the summation.

Basis set

Suppose we had N vectors \mathbf{e}_k, each of dimension (length) N. If it is possible to represent any arbitrary N-dimensional vector \mathbf{f} as a weighted sum of the vectors \mathbf{e}_k,

    \mathbf{f} = \sum_{k=1}^{N} \alpha_k \mathbf{e}_k

then the \mathbf{e}_k are called a spanning set (or, more commonly, a basis set) because they are sufficient to span the entire N dimensions. To have this property, the \mathbf{e}_k must be independent, meaning that no single one of the \mathbf{e}_k can be represented as a weighted sum of the others excluding itself.
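As a quick illustration (a sketch added to these notes, with made-up 3-dimensional example vectors), the two inner-product formulas can be checked against each other in MATLAB:

    % MATLAB sketch: two equivalent ways to compute the inner product.
    a = [1; 2; 2];                              % example vectors (invented)
    b = [3; 0; 4];
    dot_sum = a' * b;                           % a.b = sum_i a_i b_i
    theta   = atan2(norm(cross(a, b)), a' * b); % angle between a and b (3-D)
    dot_geo = norm(a) * norm(b) * cos(theta);   % |a||b| cos(theta)
    % dot_sum and dot_geo should agree to roundoff.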

The coefficients \alpha_k of the expansion can be found by solving a set of simultaneous equations describing the projection of \mathbf{f} onto each of the \mathbf{e}_j:

    \sum_{k=1}^{N} \alpha_k \, (\mathbf{e}_k \cdot \mathbf{e}_j) = \mathbf{e}_j \cdot \mathbf{f}

This is easily solved in the case that the \mathbf{e}_k are mutually orthogonal and normal (have unit length), in which case we call them orthonormal.

Eigenvectors and eigenvalues

Matrix multiplication can be thought of as a transformation of vector \mathbf{x} into vector \mathbf{y}:

    A\mathbf{x} = \mathbf{y}

If a vector \mathbf{v} has the property that the transformation leaves its direction unchanged, then \mathbf{v} is said to be an eigenvector of the matrix A:

    A\mathbf{v} = \lambda \mathbf{v}

If A is square of dimension N, there are N eigenvalues \lambda_n, each with a corresponding eigenvector:

    A\mathbf{v}_n = \lambda_n \mathbf{v}_n

A matrix composed of the eigenvectors, say Q, satisfies

    A = Q \Lambda Q^{-1}

where \Lambda is the diagonal matrix of the eigenvalues (provided the eigenvectors are linearly independent). If A is symmetric, it will have real eigenvalues and orthonormal eigenvectors that form a basis set.

Orthonormal vectors

Orthonormal vectors satisfy the property

    \mathbf{e}_i^T \mathbf{e}_j = \delta_{ij}

where \delta_{ij} is the Kronecker delta: \delta_{ij} = 1 if i = j (normal), and \delta_{ij} = 0 if i \neq j (orthogonal).
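A small MATLAB sketch of the symmetric case (the matrix below is an arbitrary example, not from the notes): eig returns orthonormal eigenvectors, so Q^T Q = I and A = Q \Lambda Q^T reconstructs A.

    % MATLAB sketch: eigendecomposition of a (made-up) symmetric matrix.
    A = [2 1 0; 1 3 1; 0 1 2];      % symmetric example
    [Q, Lambda] = eig(A);           % columns of Q are eigenvectors
    check_orth  = Q' * Q;           % should be the identity (orthonormal)
    check_recon = Q * Lambda * Q';  % should reproduce A, since inv(Q) = Q'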

Then

    \sum_{k=1}^{N} \alpha_k \, \delta_{kj} = \alpha_j = \mathbf{e}_j \cdot \mathbf{f}

is the projection of \mathbf{f} onto basis vector \mathbf{e}_j, and we have easily solved for the coefficients \alpha_j.

Matrix multiplication

Matrix multiplication is

    C_{ij} = \sum_{p=1}^{P} a_{ip} \, b_{pj}    (the i-th row of A times the j-th column of B)

which requires that the dimensions be conformable:

    (M \times P)(P \times N) \rightarrow (M \times N)

(The requirement that matrix operations be conformable is your friend in Matlab.) We write C = AB.

Matrix operation rules:

- AB \neq BA (multiplication is not commutative)
- ABC = (AB)C = A(BC) (multiplication is associative)
- (AB)^T = B^T A^T (the expansion of a transposed product)
- trace(A) = \sum_i a_{ii} (the sum of the diagonal elements)

A symmetric matrix has the property A = A^T, so the product A^T A is the dot product of all rows of the matrix with themselves.

The identity matrix is symmetric:

    I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

so each element I_{ij} = \delta_{ij}. The inverse of a matrix A is denoted A^{-1} and defined such that

    A^{-1} A = I

It follows that

    (AB)^{-1} = B^{-1} A^{-1}
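The transpose and inverse product rules are easy to verify numerically; here is a brief MATLAB sketch (the matrices are invented examples):

    % MATLAB sketch: check (AB)' = B'A' and inv(AB) = inv(B)*inv(A).
    A = [1 2; 3 4];
    B = [0 1; 1 1];
    tpose_diff = (A*B)' - B' * A';          % should be ~zero
    inv_diff   = inv(A*B) - inv(B)*inv(A);  % should be ~zero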

Norm or length

The length, or norm, of a vector can be defined in many ways, but the conventional l_2 norm is

    \|\mathbf{f}\| = (\mathbf{f}^T \mathbf{f})^{1/2} = \left( \sum_{i=1}^{N} f_i^2 \right)^{1/2}

The Cartesian distance between two vectors is

    \|\mathbf{a} - \mathbf{b}\| = \left[ (\mathbf{a} - \mathbf{b})^T (\mathbf{a} - \mathbf{b}) \right]^{1/2} = \left[ (x_a - x_b)^2 + (y_a - y_b)^2 \right]^{1/2}

(the second form written for two dimensions). Sometimes the distance between two vectors is weighted,

    \|\mathbf{c}\|_W = \left( \sum_n c_n \, w_n \, c_n \right)^{1/2} = (\mathbf{c}^T W \mathbf{c})^{1/2}

where, in order to be useful, the weighting matrix W would usually be symmetric and positive definite.
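A minimal MATLAB sketch of these two norms (the vector and the weight matrix are arbitrary examples):

    % MATLAB sketch: l2 norm and a weighted norm.
    f  = [3; 4];
    n2 = sqrt(f' * f);       % l2 norm; same as norm(f), here = 5
    W  = diag([1 4]);        % symmetric, positive definite weights (example)
    nW = sqrt(f' * W * f);   % weighted norm (c' W c)^(1/2)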

Differentiation

Consider a scalar J (a single number, not a vector) that is the product

    J = \mathbf{r}^T \mathbf{q} = \mathbf{q}^T \mathbf{r}

(so the vectors must be conformable). Differentiating this scalar with respect to the vector \mathbf{q} produces a vector gradient as the result,

    \frac{\partial}{\partial \mathbf{q}} (\mathbf{r}^T \mathbf{q}) = \frac{\partial}{\partial \mathbf{q}} (\mathbf{q}^T \mathbf{r}) = \mathbf{r}

much like differentiation using the product rule for any two variables r and q. For a quadratic form, where the scalar J may be written

    J = \mathbf{q}^T A \mathbf{q}

(this requires the matrix A be N \times N), we get

    \frac{\partial J}{\partial \mathbf{q}} = (A + A^T)\, \mathbf{q}

much like the differentiation of a quadratic; for symmetric A this reduces to

    \frac{\partial}{\partial \mathbf{q}} (\mathbf{q}^T A \mathbf{q}) = 2 A \mathbf{q}

Most spatial analysis of data that entails fitting or smoothing data to some statistical or dynamical model involves some form of weighted least squares fitting.

Least squares fitting

In simple least squares fitting of a set of observations to a linear function, or linear regression, what is assumed is that a set of observations y can be described by a model

    y(t) = \theta(t) + n(t) = a + bt + n(t)

Here, n(t) is the measurement noise and is the source of the misfit between the observations and the model. We can write this as a matrix equation,

    E\mathbf{x} + \mathbf{n} = \mathbf{y}

where

    E = \begin{bmatrix} 1 & t_1 \\ 1 & t_2 \\ \vdots & \vdots \\ 1 & t_M \end{bmatrix}, \quad
    \mathbf{x} = \begin{bmatrix} a \\ b \end{bmatrix}, \quad
    \mathbf{n} = \begin{bmatrix} n_1 \\ n_2 \\ \vdots \\ n_M \end{bmatrix}, \quad
    \mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_M \end{bmatrix}
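To make the setup concrete, here is a minimal MATLAB sketch that builds the design matrix E for the straight-line model (the sample times, "true" parameters and noise level are all invented for illustration):

    % MATLAB sketch: design matrix for y(t) = a + b*t + n(t).
    M     = 50;                     % number of observations (example)
    t     = sort(rand(M,1)) * 10;   % irregular sample times (synthetic)
    a     = 2.0;  b = 0.5;          % "true" parameters (invented)
    sig_n = 0.3;                    % noise standard deviation (invented)
    y     = a + b*t + sig_n*randn(M,1);
    E     = [ones(M,1), t];         % columns multiply a and b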

Having zero error would be exceptional, so in general the parameters a and b will represent a best possible fit of the model to the observations. We have many more data points than parameters (a and b), so the problem is said to be overdetermined. Frequently, the measure of best fit is the parameter choice that minimizes the mean squared misfit of model and data:

    \min J = \sum_{i=1}^{M} n_i^2 = \mathbf{n}^T \mathbf{n} = (E\mathbf{x} - \mathbf{y})^T (E\mathbf{x} - \mathbf{y})
           = \mathbf{x}^T E^T E \mathbf{x} - \mathbf{x}^T E^T \mathbf{y} - \mathbf{y}^T E \mathbf{x} + \mathbf{y}^T \mathbf{y}

Each of these terms is a scalar, so each is its own transpose:

    \mathbf{y}^T E \mathbf{x} = (\mathbf{y}^T E \mathbf{x})^T = \mathbf{x}^T E^T \mathbf{y}

so we have

    J = \mathbf{x}^T E^T E \mathbf{x} - 2\,\mathbf{x}^T E^T \mathbf{y} + \mathbf{y}^T \mathbf{y}

To minimize, we differentiate with respect to x and set the result to zero, anticipating a minimum. This leads to the set of normal equations:

    \frac{\partial J}{\partial \mathbf{x}} = 2\,(E^T E \mathbf{x} - E^T \mathbf{y}) = 0
    \quad\Longrightarrow\quad E^T E \mathbf{x} = E^T \mathbf{y}

Assuming the inverse of the normal-equations matrix exists, the solution is

    \tilde{\mathbf{x}} = (E^T E)^{-1} E^T \mathbf{y}

No assumptions have been made about the statistical probability density functions of the errors n. Let's give some consideration to how the estimated parameters of the model fit, a and b, denoted by \tilde{\mathbf{x}}, are affected by the random elements of the observations. Assume the estimated values are unbiased; then \langle \tilde{\mathbf{x}} \rangle = \mathbf{x}_{true}, the angle brackets denoting expected values.
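Continuing the sketch above (same synthetic E and y), the normal-equations solution and MATLAB's backslash operator should agree to roundoff; backslash, which uses a QR factorization, is the numerically safer choice:

    % MATLAB sketch: least squares solution, two ways.
    x_norm = (E' * E) \ (E' * y);   % normal equations
    x_back = E \ y;                 % QR-based least squares (preferred)
    % Both estimate [a; b] and should match closely.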

The uncertainty in the estimated values is described by their variance about the true mean:

    P = \langle (\tilde{\mathbf{x}} - \mathbf{x}_{true})(\tilde{\mathbf{x}} - \mathbf{x}_{true})^T \rangle
      = (E^T E)^{-1} E^T \langle \mathbf{n}\mathbf{n}^T \rangle E \,(E^T E)^{-1}

In the special case that we have uncorrelated errors, i.e. all the observations are known with an uncertainty \pm\sigma_n, then \langle \mathbf{n}\mathbf{n}^T \rangle = \sigma_n^2 I, so the uncertainty in the parameter estimates is

    P = \sigma_n^2 \,(E^T E)^{-1}

and the uncertainty in the estimates derived from these parameters, \tilde{\mathbf{y}}_{est} = E\tilde{\mathbf{x}}_{est}, with residuals \tilde{\mathbf{n}} = \mathbf{y} - \tilde{\mathbf{y}}_{est}, is

    P_{\tilde{n}} = \langle (\tilde{\mathbf{n}} - \langle \mathbf{n} \rangle)(\tilde{\mathbf{n}} - \langle \mathbf{n} \rangle)^T \rangle
                  = \sigma_n^2 \left( I - E\,(E^T E)^{-1} E^T \right)

(see Wunsch section 3.3 for details). You should, at the very least, examine the residuals of the model fit compared to the data to see if they are randomly distributed.
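A final MATLAB sketch, continuing the synthetic example (in practice sig_n would itself be estimated from the residuals rather than assumed known), computes the parameter covariance and inspects the residuals:

    % MATLAB sketch: parameter uncertainty and residual inspection.
    P     = sig_n^2 * inv(E' * E);  % parameter covariance, sigma_n^2 (E'E)^-1
    sig_x = sqrt(diag(P));          % one-sigma uncertainties on [a; b]
    n_res = y - E * x_back;         % residuals of the fit
    plot(t, n_res, '.');            % should scatter randomly about zero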