Fall 2012 Analysis of Experimental Measurements, B. Eisenstein / rev. S. Errede (598AEM Lecture Notes)


General Least Squares with General Constraints:

Suppose we have $n$ measurements $y(x) = (y(x_1), y(x_2), \ldots, y(x_n))$ with a symmetric covariance matrix $V$ of the $y(x)$ measurements. Suppose the theory prediction $f(x;\lambda) = (f(x_1;\lambda), f(x_2;\lambda), \ldots, f(x_n;\lambda))$ involves $M$ ($< n$) parameters $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_M)$ in some general (i.e. not necessarily linear) manner. Additionally, suppose there are $K$ functions $g(\lambda) = (g_1(\lambda), g_2(\lambda), \ldots, g_K(\lambda))$ that relate (i.e. constrain) the $M$ λ-parameters in some general (but not necessarily linear) manner, via use of Lagrange multipliers $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_K)$. The $\chi^2(\lambda;\alpha)$ is defined as:

$$\chi^2(\lambda;\alpha) = [y - f(x;\lambda)]^T V^{-1} [y - f(x;\lambda)] + 2\alpha^T g(\lambda)$$

where $y$ is the column vector of measurements and $V^{-1}$ is the symmetric inverse of the covariance matrix of the $y(x)$ column vector. {n.b. In the linear constraint case $g(\lambda) = B\lambda - b$; in general the $g(\lambda)$ may be non-linear functions of the $M$ λ-parameters.}

We minimize $\chi^2(\lambda;\alpha)$ by taking derivatives w.r.t. $\lambda$ and $\alpha$; $\partial\chi^2/\partial\alpha = 0$ contains the constraint equations. However, in general the constraint equations are non-linear in $\lambda$, so we (again) use the iteration technique here too. Suppose that after $\nu$ iterations we have obtained a set of approximate values of the $M$ λ-parameters and the $K$ Lagrange multipliers:

$$\lambda^{(\nu)} = (\lambda_1^{(\nu)}, \lambda_2^{(\nu)}, \ldots, \lambda_M^{(\nu)}) \quad \text{and} \quad \alpha^{(\nu)} = (\alpha_1^{(\nu)}, \alpha_2^{(\nu)}, \ldots, \alpha_K^{(\nu)})$$

We then expand (i.e. linearize) $\chi^2(\lambda;\alpha)$ in a Taylor series around these points, solve for $\Delta\lambda = (\Delta\lambda_1, \ldots, \Delta\lambda_M)$ and $\Delta\alpha = (\Delta\alpha_1, \ldots, \Delta\alpha_K)$, and iterate further, similar to the discussion in 598AEM Lect. Notes (p. 5-9). For additional details, see e.g. individual program write-ups or advanced texts on this subject.

Let us assume that we have determined the best values $(\lambda^*, \alpha^*)$ of these parameters using the Lagrange multiplier constrained LSQ fit method. We can then obtain a better estimate, if we wish, of the measured random variables $y(x)$. This procedure goes by the name "Adjustment of Observations":
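For the special case of a linear model with linear constraints, no iteration is needed: setting the derivatives of $\chi^2(\lambda;\alpha)$ with respect to $\lambda$ and $\alpha$ to zero yields a single linear system. A minimal numerical sketch in Python/NumPy, using a hypothetical straight-line model $f(x;\lambda) = \lambda_1 + \lambda_2 x$ and the toy constraint $\lambda_1 + \lambda_2 = 1$ (the data, model, and constraint are illustrative assumptions, not from the notes):

```python
import numpy as np

# Toy data for the hypothetical straight-line model f(x; lam) = lam[0] + lam[1]*x.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.6, 0.9, 1.4])
V = np.diag([0.01, 0.01, 0.01, 0.01])      # covariance matrix V of the y measurements
Vinv = np.linalg.inv(V)

A = np.column_stack([np.ones_like(x), x])  # design matrix: f = A @ lam
C = np.array([[1.0, 1.0]])                 # linear constraint g(lam) = C @ lam - b = 0
b = np.array([1.0])

M, K = A.shape[1], C.shape[0]
# Stationarity of chi^2 + 2*alpha^T*(C lam - b) gives the linear system:
#   [A^T Vinv A   C^T] [lam  ]   [A^T Vinv y]
#   [C            0  ] [alpha] = [b         ]
kkt = np.block([[A.T @ Vinv @ A, C.T],
                [C, np.zeros((K, K))]])
rhs = np.concatenate([A.T @ Vinv @ y, b])
sol = np.linalg.solve(kkt, rhs)
lam, alpha = sol[:M], sol[M:]
print("lam =", lam, " constraint residual =", C @ lam - b)
```

The zero block in the lower-right corner reflects that $\chi^2$ is linear in $\alpha$; the printed constraint residual should vanish to machine precision.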

We define a column vector $m$ of $n$ measured values of the random variables (n.b. these may not necessarily be independent), with corresponding symmetric covariance matrix $V_m$ of the measurements. We want to know the true values (i.e. expectation values) of the measurements:

$$\hat{m} \equiv E[m] = (\hat{m}_1, \hat{m}_2, \ldots, \hat{m}_n)$$

We will estimate the $\hat{m}_i$ using a LSQ fitting method, and call the estimates the fitted values of the measurements. We obtain the fitted values of the measurements by adjusting the measurements so that:

- Each measurement is allowed to move by an amount determined from the size of the uncertainty on that measurement, $\sigma_i$.
- The resulting fitted values of the measurements satisfy one or more constraints.

We define a column vector $\tilde{m} = (\tilde{m}_1, \tilde{m}_2, \ldots, \tilde{m}_n)$ of fitted values of $m$, i.e. the estimates of $\hat{m}$.

Let there be $K$ constraints which can be expressed in the form:

$$g_k(\tilde{m}_1, \tilde{m}_2, \ldots, \tilde{m}_n) = 0, \quad k = 1, 2, \ldots, K$$

or, defining a $K$-component column vector:

$$g(\tilde{m}) = \big(g_1(\tilde{m}), g_2(\tilde{m}), \ldots, g_K(\tilde{m})\big) = 0$$

{n.b. In general, these will be non-linear equations.}

Remembering the iterative $\chi^2$ minimization method(s), we choose to work with linearized corrections $c_i$:

$$c_i \equiv \tilde{m}_i - m_i, \quad i = 1, \ldots, n \qquad \text{or, defining a column vector:} \qquad c \equiv \tilde{m} - m$$

In terms of $\chi^2$ minimization, since the $m_i$'s are just constants, minimizing $\chi^2$ with respect to $\tilde{m}$ is equivalent to minimizing $\chi^2$ with respect to $c$.

What should we actually minimize? If we use:

$$\chi^2(c) = c^T V_m^{-1} c$$

the solution is (obviously) $c = 0$, i.e. the best estimate of $\hat{m}$ is $m$ itself. In order to do better, we must add in some new information: in this case, the requirement that the constraints be satisfied by the $\tilde{m}_i$'s. Thus, we instead minimize:

$$\chi^2(c;\alpha) = c^T V_m^{-1} c + 2\alpha^T g(\tilde{m})$$

where $\alpha$ is a column vector of $K$ Lagrange multipliers. Taking derivatives of $\chi^2(c;\alpha)$, we obtain:

$$\frac{\partial\chi^2(c;\alpha)}{\partial c} = 2\left(V_m^{-1} c + \frac{\partial g}{\partial\tilde{m}}\,\alpha\right) = 0$$

$$\frac{\partial\chi^2(c;\alpha)}{\partial\alpha} = 2\,g(\tilde{m}) = 0 \quad \text{(i.e. the constraints will be satisfied)}$$

Note that $\partial g/\partial\tilde{m}$ is an $n \times K$ matrix $B(\tilde{m})$ with $jk$-th element:

$$B_{jk} = \frac{\partial g_k}{\partial\tilde{m}_j}$$

where $j = 1, \ldots, n$ ranges over the fitted variables and $k = 1, \ldots, K$ ranges over the constraints. Thus, the equations that we need to solve in order to accomplish this $\chi^2(c;\alpha)$ minimization are:

$$V_m^{-1} c + B\alpha = 0 \quad \text{and} \quad g(\tilde{m}) = 0$$

For the general non-linear case, we must resort to approximation methods. We Taylor series expand (i.e. linearize) the constraint equations around $\tilde{m}^0$, an initial estimate of the fitted values of the measurements. Then we require that:

$$g(\tilde{m}) \simeq g(\tilde{m}^0) + B^T(\tilde{m} - \tilde{m}^0) = 0$$

As usual, we assume that $(\tilde{m} - \tilde{m}^0)$ is small enough that we can safely neglect/ignore the terms in the Taylor series expansion involving higher powers of $(\tilde{m} - \tilde{m}^0)$ and the higher-order derivatives of $g$. (This step is known as "linearizing the constraints".)

Then:

$$g(\tilde{m}^0) + B^T(\tilde{m} - \tilde{m}^0) = 0$$

A neat trick exists for solving this conveniently. We write:

$$\tilde{m} - \tilde{m}^0 = (\tilde{m} - m) - (\tilde{m}^0 - m) = c - c^0 \quad \text{where} \quad c^0 \equiv \tilde{m}^0 - m$$

We rewrite this as:

$$g(\tilde{m}^0) + B^T(c - c^0) = 0 \quad \text{i.e.} \quad B^T c = -r \quad \text{where} \quad r \equiv g(\tilde{m}^0) - B^T c^0$$

where it is implicitly understood that the derivatives $B$ and the constraints $g$ are evaluated at $\tilde{m}^0$, the initial estimate. The other equation we must solve is $V_m^{-1} c + B\alpha = 0$, which yields:

$$c = -V_m B\alpha$$

Thus, multiplying on the LHS by $B^T$:

$$B^T c = -B^T V_m B\,\alpha = -H\alpha = -r \quad \text{where} \quad H \equiv B^T V_m B$$

By construction, $H = B^T V_m B$ is a square ($K \times K$), symmetric (and real) matrix, and therefore it has a square, symmetric (and real) inverse $H^{-1} = (B^T V_m B)^{-1}$. Thus, multiplying $r = H\alpha$ on the LHS by $H^{-1}$ gives the Lagrange multipliers:

$$\alpha = H^{-1} r$$

Finally, the result of this step is:

$$c = -V_m B H^{-1} r \quad \text{which gives the correction} \quad \tilde{m} = m + c$$

We explicitly need to check/verify whether or not this new $\tilde{m}$ satisfies the constraints: $g(\tilde{m}) = 0$. If it does, then we're done. If not, then we use this $\tilde{m}$ as a new $\tilde{m}^0$ and repeat (i.e. iterate) the above procedure until $g(\tilde{m}) = 0$ is satisfied. {If $g(\tilde{m}) = 0$ is satisfied, then the linearized constraint equation is also.}
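The iteration just described ($r = g(\tilde{m}^0) - B^T c^0$, $\alpha = H^{-1} r$, $c = -V_m B H^{-1} r$, $\tilde{m} = m + c$, repeat until $g(\tilde{m}) = 0$) can be sketched in a few lines of Python/NumPy. The example adjusts three measured triangle angles to sum to 180 degrees, a standard textbook illustration that is not taken from these notes; since this constraint is linear, the loop converges in one pass:

```python
import numpy as np

# Hypothetical measured interior angles of a triangle (degrees).
m = np.array([60.4, 59.2, 61.1])
V = np.diag([0.3**2, 0.2**2, 0.4**2])    # covariance matrix V_m of the measurements

def g(mt):                                # K = 1 constraint: angles must sum to 180
    return np.array([mt.sum() - 180.0])

def Bmat(mt):                             # B_jk = dg_k/dm~_j, an n x K matrix
    return np.ones((3, 1))

mt = m.copy()                             # initial estimate m~0 of the fitted values
for _ in range(10):
    B = Bmat(mt)
    c0 = mt - m
    r = g(mt) - B.T @ c0                  # r = g(m~0) - B^T c0
    H = B.T @ V @ B                       # H = B^T V_m B
    alpha = np.linalg.solve(H, r)         # alpha = H^-1 r
    c = -V @ B @ alpha                    # c = -V_m B H^-1 r
    mt = m + c                            # corrected fitted values m~ = m + c
    if np.all(np.abs(g(mt)) < 1e-10):     # constraints satisfied -> done
        break

chi2 = float(r @ np.linalg.solve(H, r))   # chi^2 = r^T H^-1 r after the step
print("fitted angles:", mt, " sum =", mt.sum(), " chi2 =", chi2)
```

Note that each angle is pulled by an amount proportional to its variance: the least precise measurement absorbs the largest share of the misclosure, exactly as the $c = -V_m B H^{-1} r$ formula dictates.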

Now let us calculate $\chi^2(c;\alpha)$ from the quantities that we have obtained. If $g(\tilde{m}) = 0$ is satisfied, recalling that $V_m$ and $H = B^T V_m B$ are symmetric matrices, then:

$$\chi^2(c;\alpha) = c^T V_m^{-1} c + 2\alpha^T g = c^T V_m^{-1} c = (V_m B H^{-1} r)^T V_m^{-1} (V_m B H^{-1} r) = r^T H^{-1} B^T V_m B H^{-1} r = r^T H^{-1} H H^{-1} r = r^T H^{-1} r$$

This is the value of $\chi^2$ after the step to $\tilde{m} = m + c$.

Next, we determine the covariance matrix $V_{\tilde{m}}$ of the fitted values using error propagation:

$$V_{\tilde{m}} = \left(\frac{\partial\tilde{m}}{\partial m}\right) V_m \left(\frac{\partial\tilde{m}}{\partial m}\right)^T$$

Now it is just algebra. We have $\tilde{m} = m + c = m - V_m B H^{-1} r$. But $r = g(\tilde{m}^0) - B^T c^0 = g(\tilde{m}^0) - B^T(\tilde{m}^0 - m)$, thus:

$$\frac{\partial r}{\partial m} = B^T \quad \text{and thus} \quad \frac{\partial\tilde{m}}{\partial m} = I - V_m B H^{-1} B^T$$

Then:

$$V_{\tilde{m}} = (I - V_m B H^{-1} B^T)\, V_m\, (I - V_m B H^{-1} B^T)^T = (I - V_m B H^{-1} B^T)\, V_m\, (I - B H^{-1} B^T V_m)$$

Multiplying this out on the RHS and again using $H = B^T V_m B$, this simplifies to:

$$V_{\tilde{m}} = V_m - V_m B H^{-1} B^T V_m$$
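A quick numerical spot-check of the simplification above, using a randomly generated positive-definite $V_m$ and a random $n \times K$ matrix $B$ (a pure illustration, nothing physical):

```python
import numpy as np

# Verify numerically that (I - V B H^-1 B^T) V (I - V B H^-1 B^T)^T
# collapses to V - V B H^-1 B^T V when H = B^T V B.
rng = np.random.default_rng(0)
n, K = 5, 2
Arand = rng.normal(size=(n, n))
V = Arand @ Arand.T + n * np.eye(n)       # symmetric positive-definite V_m
B = rng.normal(size=(n, K))               # random n x K constraint-derivative matrix

H = B.T @ V @ B
P = V @ B @ np.linalg.inv(H) @ B.T        # V B H^-1 B^T
D = np.eye(n) - P                         # dm~/dm
Vfit_prop = D @ V @ D.T                   # full error-propagation product
Vfit = V - P @ V                          # simplified form V - V B H^-1 B^T V

print("max deviation:", np.abs(Vfit_prop - Vfit).max())
print("all diagonal elements shrink:", np.all(np.diag(Vfit) < np.diag(V)))
```

The check also confirms the statement of the next paragraph: every diagonal element of $V_{\tilde{m}}$ is smaller than the corresponding diagonal element of $V_m$.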

As before, since the matrix $V_m$ is symmetric and positive definite, it has positive diagonal elements. Likewise, the symmetric $H^{-1} = (B^T V_m B)^{-1}$ also has positive diagonal elements, and so does $V_m B H^{-1} B^T V_m$. Therefore, from $V_{\tilde{m}} = V_m - V_m B H^{-1} B^T V_m$, we see that the diagonal elements of $V_{\tilde{m}}$ are smaller than the diagonal elements of $V_m$. Thus, the 1-standard-deviation uncertainties associated with the adjusted (i.e. fitted) measurements are less than the 1-standard-deviation uncertainties on the original measurements.

Pull Quantities:

Pull quantities are distributions of normalized/fractional differences between the fitted & measured quantities, which can be very helpful in verifying the validity of the LSQ fitting procedure. We define the $i$-th pull quantity as the normalized correction:

$$p_i \equiv \frac{c_i - \langle c_i\rangle}{\sigma(c_i)}$$

where the $\langle\,\rangle$ brackets are synonymous with the expectation value, i.e.:

$$\langle c_i\rangle \equiv E[c_i] = E[\tilde{m}_i] - E[m_i]$$

Note that if there is no bias, then $\langle c_i\rangle = 0$. If everything is "nice", i.e. the input measurements are Gaussian/normally-distributed, their uncertainties, as contained in the individual elements of the covariance matrix $V_m$ of the measurements, have all been correctly/properly assigned, and the various approximations and assumptions are all valid, then the $p_i$ should be distributed as $N(0,1)$.

By explicitly looking at the distributions (e.g. histograms) of the $p_i$ for many independent measurements of each of the $m_i$, we can turn this around and check the ingredients listed above, especially whether the uncertainties on the individual $m_i$ have indeed been correctly assigned or not, by seeing whether the pull distribution $p_i$ for each $m_i$ is indeed distributed as $N(0,1)$ or not.

Let us suppose that we have performed the Adjustment of Observations, starting with our initial measurements $m$ and arriving at final adjusted/fitted values $\tilde{m}$, with column vector correction $c = \tilde{m} - m$. It is not trivial to evaluate the $\sigma(c_i)$. We also have the covariance matrix $V_m$ of the measurements and that of the adjusted/fitted measurements, $V_{\tilde{m}}$. Formally:

$$(V_m)_{jk} = E[(m_j - \hat{m}_j)(m_k - \hat{m}_k)] \quad \text{and} \quad (V_{\tilde{m}})_{jk} = E[(\tilde{m}_j - \hat{\tilde{m}}_j)(\tilde{m}_k - \hat{\tilde{m}}_k)]$$

If the measurements are truly unbiased, then $\hat{m} = \hat{\tilde{m}}$, i.e. $E[m] = E[\tilde{m}]$. Thus:

$$c = \tilde{m} - m = (\tilde{m} - \hat{m}) - (m - \hat{m})$$

For convenience, we define the column vectors:

$$\delta \equiv m - \hat{m} \quad \text{and} \quad \tilde{\delta} \equiv \tilde{m} - \hat{m}$$

Then $c = \tilde{\delta} - \delta$, or $\tilde{\delta} = \delta + c$. The covariance matrix of the corrections is $V_c \equiv E[c\,c^T] = \langle c\,c^T\rangle$. From $\tilde{\delta} = \delta + c$:

$$V_{\tilde{m}} = \langle\tilde{\delta}\tilde{\delta}^T\rangle = \langle(\delta + c)(\delta + c)^T\rangle = V_m + V_c + \langle c\,\delta^T\rangle + \langle\delta\,c^T\rangle$$

or:

$$V_c = V_{\tilde{m}} - V_m - \langle c\,\delta^T\rangle - \langle\delta\,c^T\rangle$$

This is what we need, since the diagonal elements of the covariance matrix $V_c$ are the $\sigma^2(c_i)$. But we need to evaluate $\langle c\,\delta^T\rangle$ in order to finish the job.

Let us evaluate $\langle c\,\delta^T\rangle$ for the case where $\tilde{\delta} = D\delta$. Note that this is a linear relationship, with $D$ being a square ($n \times n$) matrix. Then $\tilde{\delta} = D\delta$ means $(\tilde{m} - \hat{m}) = D(m - \hat{m})$, or $\tilde{m} = Dm - D\hat{m} + \hat{m}$. Now:

$$\langle c\,\delta^T\rangle = \langle(\tilde{\delta} - \delta)\,\delta^T\rangle = \langle\tilde{\delta}\delta^T\rangle - \langle\delta\delta^T\rangle = D\langle\delta\delta^T\rangle - V_m = (D - I)V_m$$

and likewise $\langle\delta\,c^T\rangle = \langle c\,\delta^T\rangle^T = V_m(D^T - I)$, since $V_m$ is a symmetric matrix. From $\tilde{\delta} = D\delta$ we also get:

$$V_{\tilde{m}} = \langle\tilde{\delta}\tilde{\delta}^T\rangle = D\langle\delta\delta^T\rangle D^T = D V_m D^T$$

But we earlier derived $\tilde{m} = m + c = m - V_m B H^{-1} r$ with $\partial\tilde{m}/\partial m = I - V_m B H^{-1} B^T$, i.e. for the linear case:

$$D = I - V_m B H^{-1} B^T \quad \text{and} \quad V_{\tilde{m}} = V_m - V_m B H^{-1} B^T V_m = D V_m$$

Then:

$$\langle c\,\delta^T\rangle = (D - I)V_m = -V_m B H^{-1} B^T V_m \quad \text{and} \quad \langle\delta\,c^T\rangle = V_m(D^T - I) = -V_m B H^{-1} B^T V_m$$

Thus, for the linear case where $\tilde{\delta} = D\delta$:

$$V_c = V_{\tilde{m}} - V_m + 2 V_m B H^{-1} B^T V_m = (V_m - V_m B H^{-1} B^T V_m) - V_m + 2 V_m B H^{-1} B^T V_m = V_m B H^{-1} B^T V_m = V_m - V_{\tilde{m}}$$

so that:

$$\sigma^2(c_i) = (V_c)_{ii} = \sigma^2(m_i) - \sigma^2(\tilde{m}_i) \quad \text{and} \quad p_i = \frac{c_i}{\sqrt{\sigma^2(m_i) - \sigma^2(\tilde{m}_i)}}$$

Naively, one might expect $\sigma^2(c_i) = \sigma^2(m_i) + \sigma^2(\tilde{m}_i)$, but this ignores/neglects the correlation between $m$ and $\tilde{m}$. Since $V_m - V_{\tilde{m}} = V_m B H^{-1} B^T V_m$ has positive diagonal elements, $\sigma^2(m_i) > \sigma^2(\tilde{m}_i)$, and thus we won't get into trouble (a negative argument of the square root) in calculating the $p_i$ pulls.

Examples of LSQ fit pulls are shown in the figures below for a toy Monte Carlo program that carries out LSQ fits to branching ratios of neutral and charged charmed D mesons, from a paper by Werner M. Sun, "Simultaneous least-squares treatment of statistical and systematic uncertainties", Nucl. Instrum. Meth. Phys. Res. A 556, 325-330 (2006).
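A toy Monte Carlo check of the pull formula (a hypothetical three-angle adjustment, not Sun's D-meson fit): generating many pseudo-experiments and normalizing each correction by $\sqrt{\sigma^2(m_i) - \sigma^2(\tilde{m}_i)}$ should yield pulls with mean $\approx 0$ and standard deviation $\approx 1$:

```python
import numpy as np

# Each pseudo-experiment smears three "true" angles (which satisfy the
# constraint) and adjusts the measurements to sum to 180 degrees.
rng = np.random.default_rng(42)
true = np.array([60.0, 59.0, 61.0])            # true values satisfy the constraint
sig = np.array([0.3, 0.2, 0.4])                # assigned measurement uncertainties
V = np.diag(sig**2)                            # covariance matrix V_m
B = np.ones((3, 1))                            # dg/dm~ for g = sum(m~) - 180
H = B.T @ V @ B                                # H = B^T V_m B
P = V @ B @ np.linalg.inv(H) @ B.T
Vfit = V - P @ V                               # V_m~ = V_m - V_m B H^-1 B^T V_m
sigma_c = np.sqrt(np.diag(V) - np.diag(Vfit))  # sigma^2(c_i) = sigma^2(m_i) - sigma^2(m~_i)

pulls = []
for _ in range(20000):
    m = true + sig * rng.normal(size=3)        # pseudo-measurements
    r = np.array([m.sum() - 180.0])            # linear constraint: one step suffices
    c = -(V @ B @ np.linalg.solve(H, r))       # correction c = -V_m B H^-1 r
    pulls.append(c / sigma_c)
pulls = np.array(pulls)

print("pull means:", pulls.mean(axis=0))       # should be ~ 0
print("pull stds :", pulls.std(axis=0))        # should be ~ 1
```

With a single constraint the three pulls are fully correlated event-by-event, but each one is individually distributed as $N(0,1)$; the naive normalization $\sqrt{\sigma^2(m_i) + \sigma^2(\tilde{m}_i)}$ would instead give standard deviations well below 1.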