
SIO 224

1. A brief look at resolution analysis

Here is some background for the Masters and Gubbins resolution paper. Global Earth models are usually found iteratively by assuming a starting model and finding small perturbations to it. Usually, the smallest or smoothest perturbation is found that allows the data to be fit to some tolerance. This means that the final model can end up with features of the starting model which are not required by the data. These notes describe one way of assessing the true resolution of the data.

For concreteness, we consider the free-oscillation inverse problem. Suppose we let $m(r)$ be the true Earth and $m_0(r)$ be a starting model, where $m$ is usually taken to be the triplet of functions

$$m(r) = (\rho(r),\, k_s(r),\, \mu(r))$$

Then let

$$\delta m = m - m_0$$

and we obtain for the free-oscillation problem

$$\delta\omega_i = \langle G_i(m_0),\, \delta m \rangle + O(\delta m^2) \qquad (1)$$

where, for the $i$'th mode,

$$\delta\omega_i = \omega_i^{\mathrm{obs}} - \omega_i^{\mathrm{model}}$$

The bracket notation is shorthand for

$$\langle G_i(m_0),\, \delta m \rangle = \int \left( G_i^{\rho}\,\delta\rho(r) + G_i^{k_s}\,\delta k_s(r) + G_i^{\mu}\,\delta\mu(r) \right) dr$$

and the $G_i$'s can be computed from the eigenfunctions of the $i$'th mode for the starting model, and so are implicitly a function of $m_0$. Equation 1 is solved under the assumption that $\delta m$ is small enough that terms of order $\delta m^2$ can be neglected. Equation 1 is a linearization of the problem, so, in principle, it is possible to find many different $m$ which satisfy the data and which may not be linearly close to one another. This occurs if the $G_i$ are rapidly changing functions of $m_0$. For the case of free oscillations, the linearization is probably valid since $\langle G_i, \delta m \rangle$ correctly predicts the perturbation in frequency, $\delta\omega_i$, for perturbations of up to several percent. (This isn't true for all modes but is true for the vast majority.) A few percent is the typical uncertainty in models of the spherically averaged Earth.

Suppose we have found a model which gives a satisfactory fit to the data, i.e. the residuals, $\delta\omega_i$, are normally distributed with zero mean. How do we find those features of the model which are truly resolved by the data? This question is addressed by the paper of Backus and Gilbert (1970).
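To make the bracket notation concrete, here is a minimal numerical sketch of equation 1 for a model consisting of a single function of radius (the case treated below). Everything in it (the grid, the synthetic kernels, the variable names) is an illustrative assumption, not the actual mode calculation:

```python
import numpy as np

# Minimal sketch of the linearized forward problem (equation 1) for a model
# consisting of a single function of radius. The kernels are synthetic
# stand-ins; real G_i(r) come from the mode eigenfunctions of m_0.
n_r, n_modes = 200, 30
r = np.linspace(0.0, 1.0, n_r)          # normalized radius grid
dr = r[1] - r[0]

rng = np.random.default_rng(0)
# Hypothetical smooth kernels: random low-order polynomials in r.
G = np.array([np.polyval(rng.standard_normal(4), r) for _ in range(n_modes)])

delta_m = 0.01 * np.sin(3 * np.pi * r)  # a small model perturbation

# delta_omega_i = <G_i, delta_m> = integral of G_i(r) delta_m(r) dr, by quadrature.
delta_omega = (G * delta_m).sum(axis=1) * dr
```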

For simplicity, consider a one-dimensional model (the model consists of only one function of radius):

$$\delta\omega_i \pm \sigma_i = \int G_i(r)\,\delta m(r)\,dr \qquad \text{for } i = 1 \text{ to } N.$$

Suppose we take a linear combination of the data:

$$\sum_i a_i\,\delta\omega_i = \int A(r)\,\delta m(r)\,dr \qquad \text{where} \qquad A(r) = \sum_i a_i G_i(r)$$

Suppose we choose the multipliers, $a_i$, so that $A(r)$ approximates a $\delta$-function peaked at a particular radius, $r_0$. If we achieved this perfectly, we would have

$$\int \delta(r - r_0)\,\delta m(r)\,dr = \delta m(r_0)$$

With a finite amount of data we cannot make $A(r)$ a perfect $\delta$-function, but we can try to make it as $\delta$-like as possible. We then have

$$0 = \int A(r)\,\delta m(r)\,dr = \int A(r)\,(m(r) - m_0(r))\,dr$$

where we have assumed that the model, $m_0(r)$, is linearly close to the real Earth, $m(r)$, and that the model fits the data so that the expected value of the residuals is zero. We thus obtain

$$\int A(r)\,m(r)\,dr = \int A(r)\,m_0(r)\,dr = \bar{m}(r_0), \text{ say}$$

where $\bar{m}(r_0)$ is an average of the real Earth (averaged with our approximation to a $\delta$-function) and is identical to the same average of our model. We force the average to be unbiased by making $A(r)$ unimodular, i.e.

$$\int A(r)\,dr = 1$$

The data also have errors ($\sigma_i$) and we suppose that the uncertainties in the data are characterized by a covariance matrix, $E_{ij}$. We usually don't know what the covariances between our data are, so we assume that the data are independent, in which case $E_{ij}$ is diagonal with elements along the diagonal which are the variances of the data: $\sigma_i^2$. The variance of our estimate, $\bar{m}$, is then given by

$$\sigma^2 = \sum_{ij} a_i a_j E_{ij}$$

We would like this to be as small as possible. We want our multipliers to make $A(r)$ as $\delta$-like as possible at a radius $r_0$, to localize information about $m(r)$ around $r_0$, and at the same time we want the localized information to be precise. Backus and Gilbert show that these aims are mutually exclusive.

How do we choose the $a_i$'s to make $A(r)$ peaked? Consider minimizing the form

$$S = \int f(r)\,A^2(r)\,dr$$

If $f(r)$ is dipped near $r_0$ then we would expect $A(r)$ to be peaked at $r_0$. Backus and Gilbert suggest using a parabola:

$$f(r) = 12\,(r - r_0)^2$$

The factor of 12 is introduced to make $S$ a measure of the peak width of $A$, which we shall call the spread. (If $A(r)$ is a boxcar of unit area centered at $r_0$ then $S$ is exactly the width of the boxcar.) We now have

$$S = \sum_{ij} a_i a_j S_{ij} \qquad \text{where} \qquad S_{ij} = 12 \int (r - r_0)^2\,G_i(r)\,G_j(r)\,dr$$

If we define

$$u_i = \int G_i(r)\,dr$$

the unimodularity constraint reads

$$\sum_i a_i u_i = 1$$

Since $\sigma^2$ and $S$ cannot be minimized simultaneously, we consider the following combination:

$$M_{ij} = S_{ij}\cos\theta + w\,E_{ij}\sin\theta, \qquad 0 \le \theta \le \pi/2$$

and minimize

$$M = \sum_{ij} a_i a_j M_{ij} \qquad \text{subject to} \qquad \sum_i a_i u_i = 1$$

$\theta$ is called a tradeoff parameter. When $\theta = 0$, we choose the $a_i$ to minimize the spread. When $\theta = \pi/2$, we choose the $a_i$ to minimize $\sigma^2$. At intermediate values we compute a compromise between spread and error. $w$ is a weighting factor chosen to make $wE$ and $S$ about the same size; the tradeoff calculation will then be centered about $\theta \approx 45^\circ$.
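As a sketch of how these quantities might be assembled by quadrature, reusing the synthetic kernels G and grid r from the illustrative code above (all names are assumptions, not a reference implementation):

```python
# Sketch: assemble the Backus-Gilbert matrices on the radius grid.
def bg_matrices(G, r, r0, sigma):
    """Spread matrix S, unimodularity vector u, and data covariance E."""
    dr = r[1] - r[0]                # uniform grid spacing assumed
    f = 12.0 * (r - r0) ** 2        # the Backus-Gilbert parabola
    S = (G * f) @ G.T * dr          # S_ij = 12 * integral of (r-r0)^2 G_i G_j dr
    u = G.sum(axis=1) * dr          # u_i = integral of G_i dr
    E = np.diag(sigma ** 2)         # independent data assumed
    return S, u, E

def tradeoff_matrix(S, E, theta, w):
    """M_ij = S_ij cos(theta) + w E_ij sin(theta), for 0 <= theta <= pi/2."""
    return S * np.cos(theta) + w * E * np.sin(theta)
```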

Written in vector form, our problem is: minimize $\mathbf{a} \cdot M\mathbf{a}$ with $\mathbf{a} \cdot \mathbf{u} = 1$. This is a constrained minimization problem and is solved by introducing a Lagrange multiplier, $\lambda$: minimize

$$\mathbf{a} \cdot M\mathbf{a} + \lambda(\mathbf{a} \cdot \mathbf{u} - 1)$$

Differentiating with respect to $\mathbf{a}$ and setting the result equal to zero gives

$$2M\mathbf{a} + \lambda\mathbf{u} = 0$$

Thus

$$\mathbf{a} = -\frac{\lambda}{2}\,M^{-1}\mathbf{u}$$

We can evaluate the Lagrange multiplier by dotting the above equation with $\mathbf{u}$, which gives

$$\mathbf{a} \cdot \mathbf{u} = 1 = -\frac{\lambda}{2}\,\mathbf{u} \cdot M^{-1}\mathbf{u}$$

So eliminating $-\lambda/2$ gives

$$\mathbf{a} = \frac{M^{-1}\mathbf{u}}{\mathbf{u} \cdot M^{-1}\mathbf{u}}$$

Note that $\mathbf{a}$ must be recalculated for each value of $r_0$ and $\theta$, and the calculation is made much more efficient if $M$ is diagonal. This can be achieved (Gilbert, 1971) but we don't consider numerical niceties any further here. Once $\mathbf{a}$ is computed, the spread, $\mathbf{a} \cdot S\mathbf{a}$; the variance, $\mathbf{a} \cdot E\mathbf{a}$; and the resolving kernel, $\mathbf{a} \cdot \mathbf{G}(r)$, can all be computed.

Some results of applying this technique to the mode problem are given in the accompanying PEPI article. In that example, we simultaneously try to peak information about density at some target radius while removing sensitivity to the elastic moduli. Rather than computing a tradeoff curve, we choose a specific error level (0.5%, say) and then adjust $\theta$ until the $\mathbf{a}$ that gives this error level is found.
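Continuing the illustrative sketch above, the closed-form multipliers and the derived quantities follow directly (again, the names and setup are assumptions, not the actual mode code):

```python
# Sketch: solve for the multipliers and the derived quantities.
def bg_solve(S, u, E, theta, w, G):
    M = tradeoff_matrix(S, E, theta, w)
    Minv_u = np.linalg.solve(M, u)    # M^{-1} u without forming M^{-1}
    a = Minv_u / (u @ Minv_u)         # a = M^{-1}u / (u . M^{-1}u); gives a.u = 1
    return a, a @ S @ a, a @ E @ a, a @ G   # a, spread, variance, A(r)

sigma = np.full(n_modes, 1e-3)        # hypothetical data errors
S, u, E = bg_matrices(G, r, r0=0.5, sigma=sigma)
w = np.trace(S) / np.trace(E)         # crude scaling so S and wE are comparable
a, spread, variance, A = bg_solve(S, u, E, theta=np.pi / 4, w=w, G=G)
# To target a specific error level rather than a tradeoff curve, bisect on
# theta in (0, pi/2) until `variance` matches the chosen level.
```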

N.B. The form chosen for $S_{ij}$ above is relatively arbitrary. We might decide (as is done in the PEPI paper) that we want our resolving kernel to be a boxcar between radii $r_1$ and $r_2$ with unit area between these limits:

$$\int_{r_1}^{r_2} A(r)\,dr = 1$$

If $B(r)$ is the desired boxcar then we would minimize

$$\int (A(r) - B(r))^2\,dr$$

Substituting in $A = \sum_i a_i G_i$ and expanding the square gives

$$\mathbf{a} \cdot S\mathbf{a} - 2\,\mathbf{a} \cdot \mathbf{u} + \int B(r)^2\,dr$$

where we have redefined $S$ and $\mathbf{u}$:

$$S_{ij} = \int G_i(r)\,G_j(r)\,dr \qquad \text{and} \qquad u_i = \int_{r_1}^{r_2} G_i(r)\,dr$$

Since $\mathbf{a} \cdot \mathbf{u}$ is forced to be one, the only part of the above equation that depends on $\mathbf{a}$ is $\mathbf{a} \cdot S\mathbf{a}$. We now form $M = S\cos\theta + wE\sin\theta$ and find that we get the same answer for $\mathbf{a}$ as before:

$$\mathbf{a} = \frac{M^{-1}\mathbf{u}}{\mathbf{u} \cdot M^{-1}\mathbf{u}}$$

but with the redefined $M$ and $\mathbf{u}$. This form is computationally efficient since $M$ no longer depends on the target depth range for the boxcar. Only $\mathbf{u}$ has to be recomputed for new $r_1, r_2$.
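A sketch of the boxcar variant, showing the computational point just made: $M$ is independent of the target range, so it can be inverted once and reused for every $(r_1, r_2)$ (illustrative names continuing the sketches above):

```python
# Sketch of the boxcar-target variant: S and u are redefined, and M no longer
# depends on the target range, so it is inverted once and reused.
def bg_boxcar(G, r, sigma, theta, w, targets):
    dr = r[1] - r[0]
    S = G @ G.T * dr                       # redefined S_ij = integral of G_i G_j dr
    E = np.diag(sigma ** 2)
    Minv = np.linalg.inv(tradeoff_matrix(S, E, theta, w))  # invert once
    out = []
    for r1, r2 in targets:                 # only u changes per target range
        box = (r >= r1) & (r <= r2)
        u = G[:, box].sum(axis=1) * dr     # u_i = integral over [r1, r2] of G_i dr
        a = Minv @ u / (u @ Minv @ u)
        out.append((a, a @ G))             # multipliers, resolving kernel
    return out
```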