Optimal design for inverse problems

1 Stefanie Biedermann, School of Mathematics and Statistical Sciences Research Institute, University of Southampton, UK
Joint work with Nicolai Bissantz and Holger Dette (both Ruhr-Universität Bochum) and Edmund Jones (University of Bristol)
Workshop: Experiments for Processes With Time or Space Dynamics, Isaac Newton Institute, Cambridge, 20 July 2011

2 Outline
1 Introduction to inverse problems: What is an inverse problem? The model. Estimation.
2-4 (the remaining section titles were not preserved in the transcription)

3-5 Example of an inverse problem
[Figure-only slides; the images were not preserved in the transcription.]

6 Example of an inverse problem
Computed tomography:
- The shape of the object cannot be observed directly.
- We measure the proportion of X-ray photons passing through an object along certain paths.
- These line integrals have to be inverted in order to get a description of the object.

7 Applications
Inverse problems occur in many different areas, e.g.
- Medical imaging: computed tomography, magnetic resonance imaging, ultrasound
- Materials science: find cracks in objects using computed tomography
- Geophysics: borehole tomography
- Astrophysics: imaging of galaxies
All these applications have in common that the feature of interest cannot be observed directly.

8 The model - random design
The observations are independent pairs $(X_i, Y_i)$, $i = 1, \dots, n$, where
$$E[Y_i \mid X_i = x] = (Km)(x) \quad \text{and} \quad \mathrm{Var}(Y_i \mid X_i = x) = \sigma^2(x).$$
- $m(x)$ is the object of interest and requires estimation
- $K : L^2(\mu_1) \to L^2(\mu_2)$ is a compact and injective linear operator between $L^2$-spaces with respect to the probability measures $\mu_1$ and $\mu_2$
- $X_1, \dots, X_n$ are the design points, drawn randomly from a density $h$
- $\sigma^2(x)$ is a positive and finite variance function
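To make the setup concrete, here is a minimal simulation sketch in Python. The model only specifies the conditional mean and variance, so the Gaussian errors below are an illustrative assumption, and the callables `Km`, `sigma2` and `sample_h` are placeholders for a concrete operator, variance function and design density.

    import numpy as np

    def simulate_indirect_regression(n, Km, sigma2, sample_h, seed=None):
        """Draw n pairs (X_i, Y_i) with E[Y|X=x] = (Km)(x) and Var(Y|X=x) = sigma^2(x)."""
        rng = np.random.default_rng(seed)
        X = sample_h(n, rng)                                      # design points drawn from the density h
        Y = Km(X) + np.sqrt(sigma2(X)) * rng.standard_normal(n)   # Gaussian errors (assumption)
        return X, Y

    # Illustrative example: uniform design on [0, 1], constant variance, toy smooth Km.
    X, Y = simulate_indirect_regression(
        n=200,
        Km=lambda x: 1.0 + np.cos(2 * np.pi * x),
        sigma2=lambda x: np.ones_like(x),
        sample_h=lambda n, rng: rng.uniform(0.0, 1.0, size=n),
        seed=0,
    )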

9-11 Singular value decomposition
The operator $K$ has a singular system $\{(\lambda_j, \varphi_j, \psi_j) \mid j \in \mathbb{N}\}$, where
$$\lambda_j \psi_j = K \varphi_j, \quad j \in \mathbb{N}, \qquad \langle \varphi_i, \varphi_j \rangle_{\mu_1} = \delta_{ij}, \qquad \langle \psi_i, \psi_j \rangle_{\mu_2} = \delta_{ij}, \quad i, j \in \mathbb{N}.$$
The functions $m$ and $Km$ have expansions of the form
$$m = \sum_{j=1}^{\infty} a_j \varphi_j \quad \text{and} \quad Km = \sum_{j=1}^{\infty} b_j \psi_j = \sum_{j=1}^{\infty} a_j K\varphi_j = \sum_{j=1}^{\infty} a_j \lambda_j \psi_j,$$
where $a_j = \langle m, \varphi_j \rangle_{\mu_1}$ and $b_j = \langle Km, \psi_j \rangle_{\mu_2}$.

12 Estimation
$$m = \sum_{j=1}^{\infty} a_j \varphi_j \quad \text{and} \quad Km = \sum_{j=1}^{\infty} b_j \psi_j = \sum_{j=1}^{\infty} a_j \lambda_j \psi_j$$
Idea:
- Estimate the coefficients $b_j$ from the observations to obtain $\hat b_1, \hat b_2, \dots$
- Use $a_j = b_j/\lambda_j$ to estimate $\hat a_j = \hat b_j/\lambda_j$, $j = 1, 2, \dots$ (the singular values $\lambda_1, \lambda_2, \dots$ of $K$ are known).
- Substitute $\hat a_j$, $j = 1, 2, \dots$, into the expansion for $m$.

13 Spectral cut-off regularisation
Problem: We need to estimate infinitely many parameters from a finite number of observations → ill-posed problem.
There are different types of regularisation to overcome this issue:
- Tikhonov regularisation (ridge regression)
- Spectral cut-off
- Lasso
- ...
In what follows, we will use spectral cut-off regularisation, i.e.
$$\hat m = \sum_{j=1}^{M} \frac{\hat b_j}{\lambda_j}\, \varphi_j, \quad \text{for some } M \in \mathbb{N}.$$
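A sketch of the resulting estimator in Python, assuming the direct coefficient estimator $\hat b_j = n^{-1}\sum_i \psi_j(X_i) Y_i / h(X_i)$ from the appendix slide on bias; the singular functions, singular values and design density are passed in as callables, and all names are illustrative.

    import numpy as np

    def spectral_cutoff_estimate(X, Y, h, psi, phi, lam, M):
        """Spectral cut-off estimator  m_hat = sum_{j<=M} (b_hat_j / lambda_j) phi_j.

        h        : callable, design density evaluated at the design points
        psi, phi : callables psi(j, x), phi(j, x), the j-th singular functions
        lam      : callable lam(j), the j-th singular value lambda_j
        M        : cut-off level
        """
        # direct (matrix-inversion-free) estimates of b_j = <Km, psi_j>
        b_hat = np.array([np.mean(psi(j, X) * Y / h(X)) for j in range(1, M + 1)])
        a_hat = b_hat / np.array([lam(j) for j in range(1, M + 1)])  # a_hat_j = b_hat_j / lambda_j

        def m_hat(x):
            x = np.atleast_1d(np.asarray(x, dtype=float))
            return sum(a_hat[j - 1] * phi(j, x) for j in range(1, M + 1))

        return m_hat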

14 The goal is to minimise the Integrated Mean Squared Error for estimating $m$, $\Phi(h)$, with respect to the design density $h(x)$:
$$\Phi(h) = \frac{1}{n} \int \frac{g_M(x)\,\bigl(\sigma^2(x) + (Km)^2(x)\bigr)}{h(x)}\, d\mu_2(x) + \sum_{j=M+1}^{\infty} \frac{b_j^2}{\lambda_j^2} - \frac{1}{n} \sum_{j=1}^{M} \frac{b_j^2}{\lambda_j^2},$$
where
$$g_M(x) = \sum_{j=1}^{M} \frac{\psi_j^2(x)}{\lambda_j^2}.$$
Note that:
- Only the first term of the IMSE depends on $h$.
- This term also depends on the unknown functions $\sigma^2(x)$, $(Km)(x)$ and on the unknown regularisation parameter $M$.

15-16 The optimal design density
Theorem: For fixed $M$, the objective function $\Phi(h)$ is minimised by the density
$$h^*_M(x) = \frac{\sqrt{g_M(x)\,\bigl(\sigma^2(x) + (Km)^2(x)\bigr)}}{\int \sqrt{g_M(t)\,\bigl(\sigma^2(t) + (Km)^2(t)\bigr)}\, d\mu_2(t)}.$$
Proof: Application of the Cauchy-Schwarz inequality.

17 Example: convolution
Let $m(x) \in L^2[0,1]$ be periodic and symmetric around 0.5, and let $K$ be the convolution operator, i.e.
$$(Km)(x) = (G * m)(x) = \int_0^1 G(x - t)\, m(t)\, dt$$
for some known symmetric function $G$. Then
$$\varphi_1(x) = \psi_1(x) = 1, \qquad \varphi_j(x) = \psi_j(x) = \sqrt{2}\cos\bigl(2(j-1)\pi x\bigr), \quad j \ge 2,$$
and $\lambda_j = \int_0^1 G(t)\,\varphi_j(t)\, dt$. The measures $\mu_1$ and $\mu_2$ are both Lebesgue measure on $[0,1]$.

18 Example: convolution
Let $G$ be such that $\lambda_j = j^{-2}$, $j = 1, 2, \dots$. We require plausible values for $a_j$, $j = 1, 2, \dots$, and $\sigma^2(x)$ in order to find the optimal density. For $a_j = j^{-2}$, $j = 1, 2, \dots$, the integrated squared bias is of order $O(M^{-3})$ and the integrated variance is of order $O(M^5/n)$, so we choose
$$M = c \left( \frac{n}{\tau^2} \right)^{1/8} + 1$$
for different values of $c$, where $\tau^2 = \int_0^1 \bigl(\sigma^2(x) + (Km)^2(x)\bigr)\, dx$.
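A sketch of how the locally optimal density and the cut-off level could be computed for this convolution example, with $\lambda_j = a_j = j^{-2}$, $\sigma^2 = 1$ and the cosine basis from the previous slide; the truncation of $Km$, the sample size and the integer rounding of $M$ are illustrative assumptions.

    import numpy as np
    from scipy.integrate import quad

    def psi(j, x):
        # cosine basis on [0, 1]: psi_1 = 1, psi_j = sqrt(2) cos(2(j-1) pi x), j >= 2
        x = np.asarray(x, dtype=float)
        return np.ones_like(x) if j == 1 else np.sqrt(2.0) * np.cos(2 * (j - 1) * np.pi * x)

    lam = lambda j: j ** -2.0          # singular values lambda_j = j^{-2}
    a_j = lambda j: j ** -2.0          # assumed coefficients a_j = j^{-2}

    def Km(x, J=200):
        # Km = sum_j a_j lambda_j psi_j, truncated at J terms (illustrative truncation)
        return sum(a_j(j) * lam(j) * psi(j, x) for j in range(1, J + 1))

    def g_M(x, M):
        # g_M(x) = sum_{j<=M} psi_j(x)^2 / lambda_j^2
        return sum(psi(j, x) ** 2 / lam(j) ** 2 for j in range(1, M + 1))

    def h_star(M, sigma2=1.0):
        """Optimal density h*_M proportional to sqrt(g_M(x) (sigma^2(x) + (Km)^2(x)))."""
        unnorm = lambda x: np.sqrt(g_M(x, M) * (sigma2 + Km(x) ** 2))
        const, _ = quad(lambda x: float(unnorm(x)), 0.0, 1.0, limit=200)
        return lambda x: unnorm(x) / const

    def choose_M(n, c=1.0, sigma2=1.0):
        # M = c (n / tau^2)^{1/8} + 1; rounding down to an integer is an assumption
        tau2, _ = quad(lambda x: sigma2 + float(Km(x)) ** 2, 0.0, 1.0)
        return int(np.floor(c * (n / tau2) ** 0.125)) + 1

    M = choose_M(n=100, c=1.0)         # illustrative sample size
    density = h_star(M)                # h*_M, ready to evaluate or plot on [0, 1]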

19 Some optimal designs
[Figure: the optimal densities $h^*_M(x)$, plotted against $x$, for $M = 2, 5, 10, 20$, each with $\sigma^2 = 1$.]

20-21 Design assessment - comparison with the uniform design
We compare the optimal designs with the uniform design $h_u(x) \equiv 1$, using the ratio $\Phi(h^*_M)/\Phi(h_u)$ as a measure of efficiency.
[Table: efficiency of the uniform design for different sample sizes $n$, variances $\sigma^2 \in \{0.25, 1, 4\}$ and choices $c \in \{0.5, 1, 2\}$ of the parameter in $M = c\,(n/\tau^2)^{1/8} + 1$ used in the spectral cut-off regularisation; the numerical entries were not preserved in the transcription.]
The uniform design is doing quite well!
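The efficiency ratio can be approximated numerically. The sketch below re-implements the convolution example in a self-contained way and evaluates $\Phi(h)$ by quadrature for the optimal and the uniform design; the truncation of the bias sum and all numerical settings are illustrative.

    import numpy as np
    from scipy.integrate import quad

    # convolution example: lambda_j = a_j = j^{-2}, cosine basis on [0, 1]
    def psi(j, x):
        x = np.asarray(x, dtype=float)
        return np.ones_like(x) if j == 1 else np.sqrt(2.0) * np.cos(2 * (j - 1) * np.pi * x)

    lam = lambda j: j ** -2.0
    a_j = lambda j: j ** -2.0
    b_j = lambda j: a_j(j) * lam(j)                    # b_j = a_j lambda_j
    Km = lambda x, J=200: sum(b_j(j) * psi(j, x) for j in range(1, J + 1))
    g_M = lambda x, M: sum(psi(j, x) ** 2 / lam(j) ** 2 for j in range(1, M + 1))

    def Phi(h, M, n, sigma2=1.0, J_tail=2000):
        """IMSE: leading design-dependent term + truncated bias tail - small-order term."""
        lead, _ = quad(lambda x: float(g_M(x, M) * (sigma2 + Km(x) ** 2) / h(x)), 0.0, 1.0, limit=200)
        tail = sum((b_j(j) / lam(j)) ** 2 for j in range(M + 1, J_tail))
        small = sum((b_j(j) / lam(j)) ** 2 for j in range(1, M + 1))
        return lead / n + tail - small / n

    def h_star(M, sigma2=1.0):
        unnorm = lambda x: np.sqrt(g_M(x, M) * (sigma2 + Km(x) ** 2))
        const, _ = quad(lambda x: float(unnorm(x)), 0.0, 1.0, limit=200)
        return lambda x: unnorm(x) / const

    n, M, sigma2 = 100, 5, 1.0                          # illustrative settings
    h_uniform = lambda x: 1.0
    eff = Phi(h_star(M, sigma2), M, n, sigma2) / Phi(h_uniform, M, n, sigma2)
    print(f"efficiency of the uniform design: {eff:.3f}")   # a value close to 1 means h_u does well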

22 Design assessment - model misspecifications
We want to assess the robustness of locally optimal designs under various model misspecifications.
- We calculate 8 locally optimal designs with respect to $a_j = j^{-2}$ or $j^{-1.25}$ ($j = 1, 2, \dots$), $\sigma^2 = 0.25$ or $1$, and $M = 2$ or $5$.
- We assess each design under each of these 8 scenarios through its efficiency
$$\mathrm{eff}(h) = \Phi\bigl(h^*(a_j, \sigma^2, M)\bigr)/\Phi(h),$$
where $h^*(a_j, \sigma^2, M)$ is the locally optimal design for the respective scenario.
- We also include the uniform design $h_u$ in the study.

23 Design assessment - model misspecifications
[Table: efficiencies of the 9 designs under investigation (the eight locally optimal designs $h^*(a_j, \sigma^2, M)$ and the uniform design $h_u$) for the 8 scenarios, with $n = 100$; the numerical entries were not preserved in the transcription.]
Note: All off-diagonal 1s come from rounding to three decimal places.

24 Design assessment - model misspecifications
Conclusions from this example:
- The uniform design is most robust across all scenarios.
- Misspecification of the coefficients $a_j$ or of $\sigma^2$ hardly affects the efficiency of the locally optimal designs → these designs are fairly similar.

25 Design assessment - model misspecifications
[Figure: locally optimal densities $h^*_M(x)$ for $M = 5$, $\sigma^2 = 0.25$ or $1$, and $a_j = j^{-k}$, $k = 2$ or $1.25$; the four densities are fairly similar.]

26 Radon transform
- We want to recover the density $m(r, \theta)$ of an object from line integrals through a slice.
- Each line or path is parametrised through the distance $s$ and the angle $\varphi$.
- The paths are drawn randomly from the design density $h(s, \varphi)$.
- We observe photon counts → Poisson distribution.

27 Radon transform
The operator is the Radon transform $K = R$, defined through
$$Rm(s, \varphi) = \frac{1}{2\sqrt{1-s^2}} \int_{-\sqrt{1-s^2}}^{\sqrt{1-s^2}} m\bigl(s\cos\varphi - t\sin\varphi,\; s\sin\varphi + t\cos\varphi\bigr)\, dt,$$
with singular system
$$\varphi_{p,q}(r, \vartheta) = \sqrt{q+1}\, Z_q^{|p|}(r)\, e^{ip\vartheta}, \qquad \psi_{p,q}(s, \varphi) = U_q(s)\, e^{ip\varphi}$$
in brain space and detector space, respectively, and
$$\lambda_{p,q} = (q+1)^{-1/2}, \quad q = 0, 1, 2, \dots, \quad p = -q, -q+2, \dots, q.$$

28 Radon transform
Here $Z_q^{|p|}$ denotes the Zernike polynomials and $U_q(s)$ the $q$th Chebyshev polynomial of the 2nd kind. The measures in brain and detector space are given by
$$d\mu_B(r, \vartheta) = \frac{r}{\pi}\, dr\, d\vartheta \quad \text{for } 0 \le r \le 1,\ 0 \le \vartheta < 2\pi,$$
$$d\mu_D(s, \varphi) = \frac{2}{\pi^2}\,(1 - s^2)^{1/2}\, ds\, d\varphi \quad \text{for } 0 \le s \le 1,\ 0 \le \varphi < 2\pi.$$
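As a sanity check on the singular system and the measure $d\mu_D$, the following sketch verifies numerically that the detector-space functions $\psi_{p,q}(s,\varphi) = U_q(s)e^{ip\varphi}$ are orthonormal; `eval_chebyu` evaluates the Chebyshev polynomials of the second kind, and the parity restriction $p \equiv q \pmod 2$ is taken from the singular system above.

    import numpy as np
    from scipy.special import eval_chebyu     # Chebyshev polynomials of the 2nd kind U_q
    from scipy.integrate import dblquad

    def inner_product(p, q, p2, q2):
        """Real part of <psi_{p,q}, psi_{p2,q2}> w.r.t. dmu_D = (2/pi^2) sqrt(1-s^2) ds dphi."""
        def integrand(s, phi):
            angular = np.cos((p - p2) * phi)   # real part of e^{i(p-p2) phi}, enough for this check
            return (2.0 / np.pi ** 2) * np.sqrt(1.0 - s ** 2) * eval_chebyu(q, s) * eval_chebyu(q2, s) * angular
        # outer variable s in [0, 1], inner variable phi in [0, 2*pi]
        value, _ = dblquad(lambda phi, s: integrand(s, phi), 0.0, 1.0, 0.0, 2.0 * np.pi)
        return value

    print(inner_product(0, 0, 0, 0))   # norm of psi_{0,0}: should be close to 1
    print(inner_product(2, 2, 2, 2))   # norm of psi_{2,2}: should be close to 1
    print(inner_product(0, 0, 0, 2))   # different q, same p: should be close to 0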

29 The optimal design
The optimal design density is given by
$$h^*_M(s, \varphi) = \frac{\sqrt{g_M(s)\,\bigl(Rm(s,\varphi) + Rm^2(s,\varphi)\bigr)}}{\displaystyle\int_0^1 \int_0^{2\pi} \sqrt{g_M(t)\,\bigl(Rm(t,\rho) + Rm^2(t,\rho)\bigr)}\,\sqrt{1 - t^2}\; d\rho\, dt},$$
where
$$g_M(s) = g_M(s, \varphi) = \sum_{q=1}^{M} (q+1)^2\, U_q^2(s).$$
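A sketch of how $g_M$ and the optimal density could be evaluated on a grid for a given Radon transform; the normalisation is a simple Riemann sum against the weight $\sqrt{1-s^2}\,ds\,d\varphi$ from the formula above (constant factors omitted), and the grid sizes and the example `Rm_disc` (the centered disc treated on the next slides) are illustrative.

    import numpy as np
    from scipy.special import eval_chebyu

    def g_M(s, M):
        # g_M(s) = sum_{q=1}^{M} (q+1)^2 U_q(s)^2
        s = np.asarray(s, dtype=float)
        return sum((q + 1) ** 2 * eval_chebyu(q, s) ** 2 for q in range(1, M + 1))

    def optimal_density(Rm, M, n_s=400, n_phi=200):
        """Grids (s, phi) and h*_M on them, h*_M proportional to sqrt(g_M(s) (Rm + Rm^2))."""
        s = (np.arange(n_s) + 0.5) / n_s                      # midpoints, avoids s = 1
        phi = np.arange(n_phi) / n_phi * 2.0 * np.pi
        S, PHI = np.meshgrid(s, phi, indexing="ij")
        R = Rm(S, PHI)
        unnorm = np.sqrt(g_M(S, M) * (R + R ** 2))
        weight = np.sqrt(1.0 - S ** 2)
        const = np.sum(unnorm * weight) * (1.0 / n_s) * (2.0 * np.pi / n_phi)  # Riemann sum
        return s, phi, unnorm / const

    # Example: the centered disc of radius 0.5 from the next slides
    Rm_disc = lambda s, phi: np.where(
        s <= 0.5, np.sqrt(np.clip(0.25 - s ** 2, 0.0, None)) / np.sqrt(1.0 - s ** 2), 0.0)
    s, phi, h = optimal_density(Rm_disc, M=5)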

30 Slices of example objects

31 Scanning a centered disc
Suppose we want to scan a solid disc of radius 0.5 positioned in the middle of the scan field. Then, for each slice,
$$m(r, \theta) = \begin{cases} 1 & \text{if } 0 \le r \le 0.5, \\ 0 & \text{otherwise.} \end{cases}$$
The Radon transform of this function is given by
$$Rm(s, \varphi) = \frac{\sqrt{0.25 - s^2}}{\sqrt{1 - s^2}}\, \mathbb{I}_{[0, 0.5]}(s).$$
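A quick numerical cross-check of this closed form, since the expression above is reconstructed from the definition of $R$: the averaged line integral of the disc indicator, computed by quadrature, should match $\sqrt{0.25 - s^2}/\sqrt{1 - s^2}$ for $s \le 0.5$.

    import numpy as np
    from scipy.integrate import quad

    def m_disc(x, y):
        # indicator of the solid disc of radius 0.5 centred at the origin
        return 1.0 if x ** 2 + y ** 2 <= 0.25 else 0.0

    def radon(m, s, phi):
        """Rm(s, phi) = 1/(2 sqrt(1-s^2)) * integral of m along the chord at distance s, angle phi."""
        half = np.sqrt(1.0 - s ** 2)
        integral, _ = quad(
            lambda t: m(s * np.cos(phi) - t * np.sin(phi), s * np.sin(phi) + t * np.cos(phi)),
            -half, half, limit=400)
        return integral / (2.0 * half)

    for s in (0.1, 0.3, 0.45):
        closed_form = np.sqrt(0.25 - s ** 2) / np.sqrt(1.0 - s ** 2)
        print(s, radon(m_disc, s, phi=0.7), closed_form)   # the two values should agree closely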

32 Scanning a centered disc
In this case we can find the optimal density explicitly: with $Rm(s,\varphi) = \sqrt{0.25 - s^2}\,/\sqrt{1 - s^2}$ for $0 \le s \le 0.5$,
$$h^*_M(s,\varphi) = \frac{\sqrt{g_M(s)\,\bigl(Rm(s,\varphi) + Rm^2(s,\varphi)\bigr)}}{2\pi \displaystyle\int_0^{0.5} \sqrt{g_M(t)\,\bigl(Rm(t,\varphi) + Rm^2(t,\varphi)\bigr)}\;\sqrt{1 - t^2}\, dt}$$
if $0 \le s \le 0.5$, $0 \le \varphi \le 2\pi$, and $h^*_M(s,\varphi) = 0$ otherwise.

33 Scanning a polar rose
For the polar rose with 8 petals and radius 0.5,
$$m(r, \theta) = 1 \quad \text{if } 0 \le r \le 0.5\cos(4\theta),\ 0 \le \theta \le 2\pi,$$
and $m(r, \theta) = 0$ otherwise. Here, the optimal density has to be found numerically, as sketched below.
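Since no closed form is available for the rose, a fully numerical sketch: approximate $Rm$ by a midpoint rule along each chord, then form the (normalised) optimal density on a grid as on the previous slides; the petal definition, grid sizes and $M$ below are illustrative.

    import numpy as np
    from scipy.special import eval_chebyu

    def m_rose(x, y):
        # indicator of the 8-petal polar rose r <= 0.5 cos(4 theta) (vectorised)
        r = np.hypot(x, y)
        theta = np.arctan2(y, x)
        return (r <= 0.5 * np.cos(4.0 * theta)).astype(float)

    def Rm_numeric(m, s, phi, n_t=200):
        # averaged Radon transform via a midpoint rule in t over [-sqrt(1-s^2), sqrt(1-s^2)]
        half = np.sqrt(1.0 - s ** 2)
        t = ((np.arange(n_t) + 0.5) / n_t * 2.0 - 1.0)[:, None, None] * half
        vals = m(s * np.cos(phi) - t * np.sin(phi), s * np.sin(phi) + t * np.cos(phi))
        return vals.mean(axis=0)               # average over the chord equals Rm

    M, n_s, n_phi = 5, 120, 96
    s = (np.arange(n_s) + 0.5) / n_s
    phi = np.arange(n_phi) / n_phi * 2.0 * np.pi
    S, PHI = np.meshgrid(s, phi, indexing="ij")
    R = Rm_numeric(m_rose, S, PHI)
    gM = sum((q + 1) ** 2 * eval_chebyu(q, S) ** 2 for q in range(1, M + 1))
    unnorm = np.sqrt(gM * (R + R ** 2))
    h = unnorm / (np.sum(unnorm * np.sqrt(1.0 - S ** 2)) / n_s * 2.0 * np.pi / n_phi)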

34 Some optimal designs for centered disc and polar rose
[Figure: surface plots of $h^*(s, \varphi)$ over $(s, \varphi)$ for the centered disc and the polar rose, each for two values of $M$, one of them $M = 5$.]

35-36 Design assessment - comparison with the uniform design

    n    | centered disc                        | polar rose
         | c = 0.5     c = 1      c = 2        | c = 0.5    c = 1      c = 2
    --   | --  (2)     .696 (3)   .607 (6)     | .830 (2)   .691 (4)   .632 (8)
    --   | --  (3)     .658 (5)   .611 (9)     | .910 (3)   .725 (6)   .646 (11)
    --   | --  (4)     .733 (8)   .620 (15)    | .950 (5)   .842 (9)   .679 (18)
    --   | --  (7)     .801 (13)  .623 (26)    | .981 (8)   .901 (16)  .661 (32)

Table: Efficiency of the uniform design on $[0, 1] \times [0, 2\pi]$; in brackets: $M$. (The sample sizes $n$ and the efficiencies in the first column were not preserved in the transcription.)
Why is the uniform design doing so poorly this time? Many observations are made along paths which do not hit the object!

37 Illustration
Scanning a solid disc of radius 0.5 in the centre of the scan field: for the uniform design, many paths do not hit the object, so these observations give limited information.

38 Design assessment - comparison with the uniform design
Suppose we knew in advance that the object extends only up to 0.5 from the centre of the scan field → use a uniform design with constant density $h_{U,0.5}(s, \varphi)$ on $[0, 0.5] \times [0, 2\pi]$, where the constant (an expression involving $\arcsin(0.5)$) makes $h_{U,0.5}$ integrate to one against $\mu_D$.

    n    | centered disc                        | polar rose
         | c = 0.5     c = 1      c = 2        | c = 0.5    c = 1      c = 2
    --   | --  (2)     .985 (3)   .981 (6)     | .950 (2)   .920 (4)   .912 (8)
    --   | --  (3)     .973 (5)   .981 (9)     | .989 (3)   .945 (6)   .919 (11)
    --   | --  (4)     .985 (8)   .982 (15)    | .992 (5)   .973 (9)   .931 (18)
    --   | --  (7)     .992 (13)  .981 (26)    | .997 (8)   .984 (16)  .926 (32)

Table: Efficiency of the uniform design on $[0, 0.5] \times [0, 2\pi]$; in brackets: $M$. (The sample sizes $n$ and the efficiencies in the first column were not preserved in the transcription.)
This uniform design is doing very well.
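Because the exact constant in $h_{U,0.5}$ did not survive the transcription, here is a sketch of how it follows from the stated measure: the constant is $1/\mu_D([0,0.5]\times[0,2\pi])$, computed below both numerically and in closed form. This assumes, as on the earlier slide, $d\mu_D = (2/\pi^2)\sqrt{1-s^2}\,ds\,d\varphi$ and that design densities are taken with respect to $\mu_D$.

    import numpy as np
    from scipy.integrate import quad

    # mu_D([0, 0.5] x [0, 2*pi]) with dmu_D = (2 / pi^2) * sqrt(1 - s^2) ds dphi
    radial, _ = quad(lambda s: np.sqrt(1.0 - s ** 2), 0.0, 0.5)
    mass = (2.0 / np.pi ** 2) * 2.0 * np.pi * radial
    print("numerical constant:", 1.0 / mass)                       # approximately 1.64

    # analytically: radial = sqrt(3)/8 + arcsin(0.5)/2, hence the constant equals
    # pi / (sqrt(3)/2 + 2 arcsin(0.5))
    print("closed form:", np.pi / (np.sqrt(3.0) / 2.0 + 2.0 * np.arcsin(0.5)))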

39 Shifted disc and double disc
For the final examples, the functions to be estimated are, respectively,
$$m(r, \theta) = \begin{cases} 1 & \text{if } 0 \le r \le \cos(\theta),\ 0 \le \theta \le 2\pi, \\ 0 & \text{otherwise,} \end{cases}$$
and
$$m(r, \theta) = \begin{cases} 1 & \text{if } 0 \le r \le 0.5,\ 0 \le \theta \le 2\pi, \\ 0.5 & \text{if } 0.5 < r \le 1,\ 0 \le \theta \le 2\pi, \end{cases}$$
i.e. the density of the object is higher towards the center.

40 Some optimal designs for shifted disc and double disc
[Figure: surface plots of $h^*(s, \varphi)$ over $(s, \varphi)$ for the shifted disc and the double disc, each for two values of $M$, one of them $M = 5$.]

41 Design assessment - comparison with the uniform design

    n    | shifted disc                         | double disc
         | c = 0.5     c = 1      c = 2        | c = 0.5    c = 1      c = 2
    --   | --  (2)     .568 (3)   .541 (6)     | .856 (2)   .860 (3)   .863 (5)
    --   | --  (3)     .581 (5)   .543 (9)     | .873 (2)   .866 (4)   .866 (7)
    --   | --  (4)     .644 (8)   .554 (15)    | .920 (3)   .873 (6)   .866 (12)
    --   | --  (7)     .702 (13)  .559 (26)    | .937 (5)   .879 (10)  .867 (20)

Table: Efficiency of the uniform design on $[0, 1] \times [0, 2\pi]$; in brackets: $M$. (The sample sizes $n$ and the efficiencies in the first column were not preserved in the transcription.)
For the double disc, the uniform design is doing reasonably well. For the shifted disc it's performing quite poorly.

42 Conclusions
- The locally optimal designs rarely outperform the uniform design considerably...
- ...and if they do, this can often be remedied using prior knowledge...
- ...but not always.
- The uniform design appears to be more robust with respect to model misspecifications.
- Any prior knowledge on $m$ should be incorporated in the design.

43 Future work
- Investigate the performance of sequential designs.
- Consider optimal design for different methods of modelling/estimation/regularisation in inverse problems.
- Consider dynamic problems in this context, e.g. images of a beating heart in real time.

44 Thank You!

45 Some references
Biedermann, S.G.M., Bissantz, N., Dette, H. and Jones, E. (2011). Optimal designs for indirect regression. Under review.
Bissantz, N. and Holzmann, H. (2008). Statistical inference for inverse problems. Inverse Problems, 24, 17pp.
Johnstone, I. M. and Silverman, B. W. (1990). Speed of estimation in positron emission tomography and related inverse problems. Annals of Statistics, 18.


47 Bias
Estimate the coefficients as
$$\hat b_j = \frac{1}{n} \sum_{i=1}^{n} \frac{\psi_j(X_i)}{h(X_i)}\, Y_i.$$
Note that this is not the LSE, but a direct estimator avoiding matrix inversion.
$$E[\hat b_j] = \int (Km)(x)\, \psi_j(x)\, d\mu_2(x) = b_j \quad \rightarrow \quad \text{unbiased!}$$
The integrated squared bias for estimating $m$ is given by
$$\int \bigl(E[m(x) - \hat m(x)]\bigr)^2\, d\mu_1(x) = \sum_{j=M+1}^{\infty} \frac{b_j^2}{\lambda_j^2}.$$
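A small Monte Carlo sketch of this direct estimator and its unbiasedness in the convolution setting with the cosine basis; the true coefficients, the Gaussian noise and the sample sizes are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def psi(j, x):
        return np.ones_like(x) if j == 1 else np.sqrt(2.0) * np.cos(2 * (j - 1) * np.pi * x)

    lam = lambda j: j ** -2.0
    a_true = lambda j: j ** -2.0
    b_true = lambda j: a_true(j) * lam(j)                    # b_j = a_j lambda_j
    Km = lambda x, J=50: sum(b_true(j) * psi(j, x) for j in range(1, J + 1))

    def b_hat(j, X, Y, h):
        # direct estimator: b_hat_j = (1/n) sum_i psi_j(X_i) Y_i / h(X_i)
        return np.mean(psi(j, X) * Y / h(X))

    n, reps, j = 200, 1000, 3
    h = lambda x: np.ones_like(x)                            # uniform design density on [0, 1]
    estimates = []
    for _ in range(reps):
        X = rng.uniform(0.0, 1.0, n)
        Y = Km(X) + rng.standard_normal(n)                   # sigma^2 = 1, Gaussian errors (assumption)
        estimates.append(b_hat(j, X, Y, h))
    print("Monte Carlo mean:", np.mean(estimates), "  true b_j:", b_true(j))  # agree up to MC error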

48 Variance
The integrated variance for estimating $m$ is
$$\int \mathrm{Var}\bigl(\hat m(x)\bigr)\, d\mu_1(x) = \frac{1}{n} \int \frac{g_M(x)\,\bigl(\sigma^2(x) + (Km)^2(x)\bigr)}{h(x)}\, d\mu_2(x) - \frac{1}{n} \sum_{j=1}^{M} \frac{b_j^2}{\lambda_j^2},$$
where
$$g_M(x) = \sum_{j=1}^{M} \frac{\psi_j^2(x)}{\lambda_j^2}.$$
The first term is usually dominating.
