Sum of Coherent Systems Decomposition by SVD. University of California at Berkeley. Berkeley, CA September 21, 1995.


Sum of Coherent Systems Decomposition by SVD

Nick Cobb
Department of Electrical Engineering and Computer Science
University of California at Berkeley
Berkeley, CA 94720

September 21, 1995

Abstract

The Hopkins partially coherent imaging equation is expanded by eigenfunctions into a sum of coherent systems (SOCS). The result is a bank of linear systems whose outputs are squared, scaled, and summed. This technique is useful because of the partial linearity of the resulting system approximation. The eigenfunction expansion can be accomplished by computer using the SVD algorithm. To solve this problem, the Hopkins transmission cross coefficients (TCCs) are first obtained as a matrix, then SVD is applied to the matrix. The system is then truncated at some low order to obtain an optimal approximation to Hopkins. In effect, the numerical implementation of this using SPLAT TCCs amounts to a direct approximation to SPLAT.

1 Linear Systems Approximation

The goal in this section is to compute the value of a single intensity point centered in a finite square mask region of size $L_x \times L_x$, as depicted in Figure 1. First, we consider the 1-D case. By periodicizing the length-$L_x$ mask and taking its Fourier series expansion $\tilde{G}(n)$, as done by Flanner [2], we obtain the following expression for the Fourier series of the resulting periodic intensity, $\tilde{I}(n)$:

$$\tilde{I}(n) = \sum_{n'} \tilde{T}(n + n',\, n')\, \tilde{G}(n + n')\, \tilde{G}^*(n') \quad (1)$$

Suppose that the transmission cross coefficient function $\tilde{T}(n_1, n_2)$ can be approximated by

$$\tilde{T}(n_1, n_2) \approx \sum_{k=1}^{N_a} \lambda_k\, \phi_k(n_1)\, \phi_k^*(n_2) \quad (2)$$

for some functions $\phi_k(\cdot)$, $k = 1 \ldots N_a$. We call this an $N_a$th-order approximation. Substituting equation (2) into equation (1) yields

$$\tilde{I}(n) = \sum_{n'} \sum_{k=1}^{N_a} \lambda_k\, \phi_k(n + n')\, \phi_k^*(n')\, \tilde{G}(n + n')\, \tilde{G}^*(n')$$

Figure 1: Compute intensity at (0,0) for a small mask.

Exchanging the order of summation,

$$\tilde{I}(n) = \sum_{k=1}^{N_a} \lambda_k \sum_{n'} \left[\phi_k(n + n')\, \tilde{G}(n + n')\right] \left[\phi_k(n')\, \tilde{G}(n')\right]^* = \sum_{k=1}^{N_a} \lambda_k \left[(\phi_k \tilde{G}) \star (\phi_k \tilde{G})\right](n) \quad (3)$$

where $\star$ is the convolution operator. This is observed to be a sum of convolutions in the frequency domain; the convolutions are autocorrelations of the Fourier coefficients. By convolution-multiplication duality, the inverse Fourier series of this expression can be written in the spatial domain as

$$I(x) = \sum_{k=1}^{N_a} \lambda_k\, \left|(\phi_k * G)(x)\right|^2 \quad (4)$$

In two dimensions, this is extended straightforwardly to yield the Fourier series expansion of the image intensity and the image intensity itself:

$$\tilde{I}(m, n) = \sum_{k=1}^{N_a} \lambda_k \left[(\phi_k \tilde{G}) \star (\phi_k \tilde{G})\right](m, n) \quad (5)$$

$$I(x, y) = \sum_{k=1}^{N_a} \lambda_k\, \left|(\phi_k * G)(x, y)\right|^2 \quad (6)$$

where $*$ is now 2-D convolution. Thus the intensity is the weighted sum of the squared outputs of $N_a$ linear systems, which we call a SOCS, as shown in Figure 2.

2 Decomposing Hopkins' Imaging Equation

To this point, the development of the SOCS intensity calculation technique has assumed that the original approximation in equation (2) is valid. In this section, this claim is validated and a technique for finding the 2-D convolution kernels $\phi_k(\cdot)$ is elaborated. We start by deriving
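As an illustration, the squared-scaled-summed structure of equation (6) can be sketched in NumPy. The mask $G$, the kernels $\phi_k$, and the weights $\lambda_k$ below are arbitrary placeholders (random data on a toy grid), not lithography quantities; in the paper they come from the TCC decomposition of section 2.

```python
import numpy as np

# Placeholder sizes: a 64x64 sampled mask and N_a = 4 coherent kernels.
N, Na = 64, 4
rng = np.random.default_rng(0)

G = np.zeros((N, N))
G[16:48, 24:40] = 1.0                       # toy binary mask pattern

# Random complex stand-ins for the kernels phi_k, with decaying weights.
phis = (rng.standard_normal((Na, N, N))
        + 1j * rng.standard_normal((Na, N, N)))
lams = np.array([1.0, 0.5, 0.2, 0.1])

def socs_intensity(G, phis, lams):
    """Bank of linear systems: convolve, square, scale, and sum."""
    Gf = np.fft.fft2(G)
    I = np.zeros(G.shape)
    for lam, phi in zip(lams, phis):
        conv = np.fft.ifft2(np.fft.fft2(phi) * Gf)   # (phi_k * G), circular
        I += lam * np.abs(conv) ** 2                 # squared and scaled
    return I

I = socs_intensity(G, phis, lams)
```

The convolutions are done circularly via the FFT, matching the Fourier-series (periodicized mask) picture of this section.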

Figure 2: Sum of coherent systems (SOCS) approximation to Hopkins' imaging: the mask drives a bank of kernels $\phi_1, \ldots, \phi_{N_a}$ whose outputs are squared, scaled by $\lambda_1, \ldots, \lambda_{N_a}$, and summed to give the intensity.

the decomposition in the 1-D case for simplicity and then extending it naturally to 2-D. The determination of the 2-D convolution kernels requires some special techniques, described in section 2.2.

2.1 1-D Decomposition

The following two facts from Flanner [2] are useful in the subsequent development.

Fact 1. $\tilde{T}(\cdot, \cdot)$ is nonzero at only finitely many points and therefore can be represented by a matrix of values of size $M \times M$.

Fact 2. $\tilde{T}(n_1, n_2) = \tilde{T}^*(n_2, n_1)$, and the matrix is therefore Hermitian symmetric.

Based on this, we make the following claim.

Claim 1. $\tilde{T}(n_1, n_2)$ can be written as

$$\tilde{T}(n_1, n_2) = \sum_{k=1}^{M} \lambda_k\, \phi_k(n_1)\, \phi_k^*(n_2)$$

Proof: Facts 1 and 2 together allow us to write $\tilde{T}(n_1, n_2)$ as the Hermitian matrix $[\tilde{T}]_{i,j} = \tilde{T}(i, j)$. The dyadic expansion of $\tilde{T}$ in terms of its eigenvectors is

$$\tilde{T} = \sum_{k=1}^{M} \lambda_k\, \phi_k \phi_k^*,$$

which is equivalent to the claim.

With this we have a sum-of-coherent-systems decomposition similar to that proposed by Pati and Kailath [3], which describes the action of the partially coherent optical system.
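The dyadic expansion in the proof of Claim 1 can be checked numerically. The sketch below builds a random Hermitian matrix as a stand-in for the 1-D TCC matrix (it is not real lithography data) and reconstructs it from its eigenvalues and eigenvectors.

```python
import numpy as np

# Random Hermitian stand-in for the M x M TCC matrix of Fact 1.
M = 8
rng = np.random.default_rng(1)
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
T = A @ A.conj().T                    # Hermitian (and PSD) by construction

lams, Phi = np.linalg.eigh(T)         # eigh exploits Hermitian symmetry
order = np.argsort(lams)[::-1]        # sort eigenvalues in decreasing order
lams, Phi = lams[order], Phi[:, order]

# Dyadic expansion: T = sum_k lambda_k * phi_k phi_k^*
T_rebuilt = sum(l * np.outer(Phi[:, k], Phi[:, k].conj())
                for k, l in enumerate(lams))
```

Sorting the eigenvalues in decreasing order mirrors the paper's setup, where truncation keeps the leading terms of the expansion.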

By truncating the summation at $N_a$, we get a reduced-order approximation to the partially coherent system, resulting in the intensity expression above with

$$\tilde{T}(n_1, n_2) \approx \sum_{k=1}^{N_a} \lambda_k\, \phi_k(n_1)\, \phi_k^*(n_2) \quad (7)$$

Since the singular values $\lambda_k$ rapidly decay in magnitude, the truncation is a good approximation. Theoretical error bounds on the approximation are analyzed in the paper by Pati and Kailath [3], in which it is shown that very good low-order approximations are achieved. The decomposition can be performed by Singular Value Decomposition (SVD) once we have the computed TCC values $\tilde{T}$. With these results, the approximation of equation (2) and the subsequent development are fully justified.

2.2 2-D Kernel Determination

In the 2-D case, we can also use SVD to obtain the 2-D convolution kernels in the decomposition. Here the discrete TCCs can be thought of as a linear mapping from the space of $N \times N$ matrices, $\mathbb{R}^{N \times N}$, to itself. Given a matrix $X \in \mathbb{R}^{N \times N}$, the function $\tilde{T}(i, j; k, l)$ defines a linear mapping $T$ whose action is described by

$$[T(X)]_{(i,j)} = \sum_{k=1}^{N} \sum_{l=1}^{N} \tilde{T}(i, j; k, l)\, X_{k,l}$$

This linear operation can be "unwound" and represented as a matrix, $\bar{T}$, operating on a column vector from $\mathbb{R}^{N^2}$. The unwinding is performed by stacking the columns of the matrix and then writing the operation out as a matrix-vector multiply. The column stacking function $S : \mathbb{R}^{N \times N} \to \mathbb{R}^{N^2}$ is defined by

$$S(X)_{jN+i} = X_{i,j}$$

For example, let $X$ be a matrix of size $N \times N$:

$$X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1N} \\ x_{21} & x_{22} & \cdots & x_{2N} \\ \vdots & & & \vdots \\ x_{N1} & x_{N2} & \cdots & x_{NN} \end{bmatrix}$$

The column stacking operation applied to $X$ yields

$$\bar{X} = S(X) = \begin{bmatrix} x_{11} & x_{21} & \cdots & x_{N1} & x_{12} & x_{22} & \cdots & x_{NN} \end{bmatrix}^T$$
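The column stacking $S$ and the unwinding of the 4-index TCC can be sketched in NumPy, where $S$ is simply a column-major (Fortran-order) flatten. The tensor `T4` and matrix `X` are random stand-ins on a toy grid, not computed TCCs.

```python
import numpy as np

N = 4
rng = np.random.default_rng(2)
X = rng.standard_normal((N, N))
T4 = rng.standard_normal((N, N, N, N))      # stand-in for T~(i, j; k, l)

def S(X):
    """Column stacking: S(X)[j*N + i] = X[i, j]."""
    return X.flatten(order="F")

def S_inv(v, N):
    """Inverse column stacking."""
    return v.reshape(N, N, order="F")

# The linear map [T(X)]_{ij} = sum_{k,l} T4[i,j,k,l] * X[k,l] ...
Y = np.einsum("ijkl,kl->ij", T4, X)

# ... unwound as a matrix acting on the stacked vector: rows indexed by
# (i, j) and columns by (k, l), both in column-major order.
Tbar = T4.transpose(1, 0, 3, 2).reshape(N * N, N * N)
```

Applying `Tbar` to the stacked vector reproduces the action of the 4-index map, which is exactly what makes the matrix SVD of the next step applicable.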

The operator $T$ defined before can be represented as a matrix, $\bar{T}$, operating on the stacked vector $\bar{X}$:

$$\bar{T} = \begin{bmatrix}
\tilde{T}(1,1;1,1) & \tilde{T}(1,1;2,1) & \cdots & \tilde{T}(1,1;N,1) & \tilde{T}(1,1;1,2) & \tilde{T}(1,1;2,2) & \cdots & \tilde{T}(1,1;N,N) \\
\tilde{T}(2,1;1,1) \\
\vdots \\
\tilde{T}(N,1;1,1) \\
\tilde{T}(1,2;1,1) \\
\tilde{T}(2,2;1,1) \\
\vdots \\
\tilde{T}(N,2;1,1) \\
\vdots \\
\tilde{T}(N,N;1,1) & & & & & & \cdots & \tilde{T}(N,N;N,N)
\end{bmatrix}$$

The contour plot in Figure 3 shows an example of the matrix $\bar{T}$. Singular value decomposition applied to this matrix yields the decomposition

$$\bar{T} = \sum_{k=1}^{N^2} \lambda_k\, V_k V_k^* \quad (8)$$

Then the inverse column stacking operation yields the desired functions $\phi_k$ of equation (7), making it possible to form the approximation therein:

$$\phi_k = S^{-1}(V_k), \qquad \tilde{T}(n_1, n_2) \approx \sum_{k=1}^{N_a} \lambda_k\, \phi_k(n_1)\, \phi_k^*(n_2)$$

The spatial-domain convolution kernels are the inverse Fourier series (IFS) of the $\phi_k$'s. Using the 2-D inverse fast Fourier transform (IFFT) to obtain the $\phi_k$'s yields

$$I(x, y) = \sum_{k=1}^{N_a} \lambda_k\, \left|(\phi_k * G)(x, y)\right|^2 \quad (9)$$

Using the IFFT instead of the IFS introduces a small amount of destructive aliasing, but this is necessary in order to limit the time-domain convolution kernels to finite support. A plot of the singular values in Figure 4 shows why a reduced-order approximation can be very accurate: the singular values quickly approach zero. The magnitude plots of the first two convolution kernels obtained after the IFFT are shown in Figure 5.

References

[1] N. Cobb, "Fast mask optimization for optical lithography," Master's thesis, University of California at Berkeley, 1994.
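The truncation and kernel-recovery steps above can be sketched as follows. The stacked matrix here is random symmetric data standing in for SPLAT-derived TCCs, and the truncation order is arbitrary; the truncation error check uses the standard SVD optimality property (the Frobenius error equals the root sum of squares of the discarded singular values).

```python
import numpy as np

N = 6                                    # toy frequency grid size
Na = 8                                   # reduced approximation order
rng = np.random.default_rng(3)
A = rng.standard_normal((N * N, N * N))
Tbar = A @ A.T                           # symmetric PSD stand-in for the TCCs

U, s, Vt = np.linalg.svd(Tbar)           # for symmetric PSD, U and Vt.T
                                         # coincide up to sign

# Reduced-order approximation keeping the Na largest singular values.
T_approx = (U[:, :Na] * s[:Na]) @ Vt[:Na, :]

# Frequency-domain kernels phi_k = S^{-1}(V_k) via inverse column
# stacking, then spatial kernels via the 2-D IFFT, as in equation (9).
phis_freq = [U[:, k].reshape(N, N, order="F") for k in range(Na)]
phis_spatial = [np.fft.ifft2(p) for p in phis_freq]
```

The rapid decay of `s` in realistic TCC data is what makes a small `Na` adequate, mirroring the singular-value plot discussed in the text.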

[2] P. D. Flanner, "Two-dimensional optical imaging for photolithography simulation," Tech. Rep. Memorandum No. UCB/ERL M86/57, Electronics Research Laboratory, University of California at Berkeley, July 1986.

[3] Y. C. Pati and T. Kailath, "Phase-shifting masks for microlithography: Automated design and mask requirements," Journal of the Optical Society of America A: Optics, Image Science, and Vision, vol. 11, no. 9, pp. 2438-2452, 1994.

Figure 3: Contour plot of the TCCs in 2-D matrix form.

Figure 4: Singular values $\lambda_k$ obtained in the decomposition.

Figure 5: Magnitudes of the first two convolution kernels: (a) $\phi_1(x, y)$; (b) $\phi_2(x, y)$.