Sparse Matrix Techniques for MCAO


Luc Gilles (lgilles@mtu.edu), Michigan Technological University, ECE Department
Brent Ellerbroek (bellerbroek@gemini.edu), Gemini Observatory
Curt Vogel (vogel@math.montana.edu), Montana State University, Math Sciences

November 25, 2002

L. Gilles, CfAO Fall Retreat 2002, p. 1/21

Wavefront Estimation from Idealized WFS Data

- Open-loop WFS: s = G φ + η
- η and φ random with known statistics C_η = ⟨η ηᵀ⟩ and C_φ = ⟨φ φᵀ⟩
- φ̂ = G† s (noise-weighted pseudo-inverse)
- Equivalently, φ̂ = arg min_φ J(φ), where
  J(φ) = (1/2) [ ‖η‖²_{C_η⁻¹} + ‖φ‖²_{C_φ⁻¹} ] = (1/2) [ ⟨η, C_η⁻¹ η⟩ + ⟨φ, C_φ⁻¹ φ⟩ ]
- Solution: φ̂ = G† s with G† = (Gᵀ C_η⁻¹ G + C_φ⁻¹)⁻¹ Gᵀ C_η⁻¹
- Block-layered structure for MCAO
- (SNR)² = ‖G φ‖² / ⟨‖η‖²⟩ = ‖G φ‖² / (2 n_z σ²)
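The noise-weighted pseudo-inverse above can be sketched in a few lines of NumPy. The sizes and covariances below are toy stand-ins, not the MCAO operators:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small problem: n_s WFS measurements, n_p phase points.
n_s, n_p = 12, 8
G = rng.standard_normal((n_s, n_p))      # WFS influence matrix (toy)
C_eta = 0.01 * np.eye(n_s)               # measurement-noise covariance
C_phi = np.eye(n_p)                      # turbulence covariance (toy)

# Noise-weighted pseudo-inverse: G† = (Gᵀ Cη⁻¹ G + Cφ⁻¹)⁻¹ Gᵀ Cη⁻¹
Ci_eta = np.linalg.inv(C_eta)
Ci_phi = np.linalg.inv(C_phi)
G_dag = np.linalg.solve(G.T @ Ci_eta @ G + Ci_phi, G.T @ Ci_eta)

# Estimate the phase from a noisy measurement s = Gφ + η
phi = rng.standard_normal(n_p)
s = G @ phi + 0.1 * rng.standard_normal(n_s)
phi_hat = G_dag @ s
```

In the MCAO setting these matrices are far too large to invert explicitly, which is precisely what motivates the sparse approximations on the following slides.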

Key Approximations to Inverse Turbulence Covariance

Approximation #1
- C_φ is block diagonal (different layers are statistically independent)
- For each turbulent layer l, C_{φ_l} is BTTB (phase structure function)
- Approximate the BTTB matrix by a BCCB matrix (diagonalized by the DFT):
  C_{φ_l} ≈ BCCB(τ_l) = F⁻¹ diag(τ̂_l) F, where τ̂_l = F τ_l (eigenvalues)
- τ̂_l = c_l κ^(−11/3) (Kolmogorov PSD)
- Hence C_{φ_l}⁻¹ ≈ BCCB(Λ_l) = F⁻¹ diag(λ_l) F, with λ_l = τ̂_l⁻¹ = c_l⁻¹ κ^(11/3)

Key Approximations to Inverse Turbulence Covariance (cont.)

- BCCB(Λ_l) is a full matrix.

Approximation #2
- The entries λ_l rapidly decay to small values
- Approximate BCCB(Λ_l) by S_l², where S_l = c_l S and S is the discrete Laplacian matrix with periodic boundary conditions:
  S = BCCB(v) = F⁻¹ diag(v̂) F (sparse BCCB)
  v̂ = 4 [ sin²(π κ_x Δx)/Δx² + sin²(π κ_y Δy)/Δy² ] → 4π² κ² as Δx, Δy → 0
- S = S_y ⊗ I_x + I_y ⊗ S_x, where S_y and S_x are the 1D versions

Key Approximations to Inverse Turbulence Covariance: Sparsity Patterns in 1D

- [S u]_i = (−u_{i−1} + 2 u_i − u_{i+1}) / Δx²
- S = circ(v), with v = (2, −1, 0, …, 0, −1)ᵀ / Δx² = (2 e_0 − e_1 − e_{n−1}) / Δx²
- S = F⁻¹ diag(v̂) F, with v̂ = F v = (2 ê_0 − ê_1 − ê_{n−1}) / Δx² (eigenvalues)
- v̂ = 4 sin²(π κ Δx) / Δx² → 4π² κ² as Δx → 0
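The circulant diagonalization can be checked numerically: the eigenvalues of circ(v) are the DFT of its first column. The choices n = 32 and Δx = 1 below are arbitrary:

```python
import numpy as np

n, dx = 32, 1.0

# First column of S = circ(v): v = (2, -1, 0, ..., 0, -1) / dx^2
v = np.zeros(n)
v[0], v[1], v[-1] = 2.0, -1.0, -1.0
v /= dx**2

# Eigenvalues of a circulant matrix are the DFT of its first column
lam_fft = np.fft.fft(v).real

# Closed form from the slide: 4 sin²(π κ Δx) / Δx², with κ = k / (n Δx)
k = np.arange(n)
lam_exact = 4.0 * np.sin(np.pi * k / n) ** 2 / dx**2

assert np.allclose(lam_fft, lam_exact)
```

This is why applying S (or its inverse, on the nonzero modes) costs only an FFT pair.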

Key Approximations to Inverse Turbulence Covariance (cont.)

[Figure: eigenvalue spectra λ⁽¹⁾ (biharmonic) and λ⁽²⁾ (11/3 power law); the corresponding stencils c⁽¹⁾ = IDFT[λ⁽¹⁾] and c⁽²⁾ = IDFT[λ⁽²⁾]; and the sparsity patterns of C⁽¹⁾ = circ[c⁽¹⁾] (15.62% fill) and of C⁽²⁾ = circ[c⁽²⁾] thresholded at 3×10⁻² (53.1% fill), for n = 32.]

MCAO Minimum Variance Reconstructor

Problem: find the optimal actuator commands â = R s minimizing the MCAO wide-field error metric W:

R = arg min_R J(R),   J(R) = ⟨‖ε‖²_W⟩ = ⟨εᵀ W ε⟩
ε = H_a â − H_φ φ   (aperture-plane residual phase)

R = H_a† (fitting) × H_φ G† (estimation)

G† = (Gᵀ C_η⁻¹ G + C_φ⁻¹)⁻¹ Gᵀ C_η⁻¹, where Gᵀ C_η⁻¹ G is sparse + low rank (LGS) and C_φ⁻¹ has the sparse approximation above

H_a† = (H_aᵀ W H_a + low-rank)⁻¹ H_aᵀ W

Multigrid (MG) Methods

- Can sometimes be used as stand-alone system solvers.
- Can be used as preconditioners.
- Rely on multiple scales (grid sizes) inherent in certain problems.
- Need a smoother which damps out high-frequency components of the error on fine grids.
- Classical Gauss-Seidel iteration works well for Laplace's equation.
- The remaining low-frequency error is well represented on coarser grids.
- Are recursive versions of the following 2-grid scheme.

2-Grid Scheme

1. Pre-smooth:   x_h ← S(x_h, y_h, …)
2. Residual:     r_h ← y_h − A_h x_h
3. Restrict:     r_H ← I_h^H r_h
4. Coarse solve: A_H e_H = r_H
5. Interpolate:  e_h ← I_H^h e_H
6. Correct:      x_h ← x_h + e_h
7. Post-smooth:  x_h ← S(x_h, y_h, …)

- S(v, w, …) denotes application of the smoother to solve A x = w with initial guess x = v.
- To obtain the MG V-cycle, apply the 2-grid scheme recursively: carry out the Solve step with (e_H, r_H) in place of (x_h, y_h).
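The steps above can be sketched for a toy 1-D Dirichlet Poisson problem. The Gauss-Seidel smoother, linear interpolation P, and full-weighting restriction R = Pᵀ/2 are standard textbook choices assumed here, not taken from the slides:

```python
import numpy as np

def smooth(A, x, y, iters=2):
    """Gauss-Seidel sweeps: damp high-frequency error in A x = y."""
    n = len(x)
    for _ in range(iters):
        for i in range(n):
            x[i] = (y[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

def two_grid(A_h, y_h, x_h, R, P):
    """One 2-grid cycle: pre-smooth, coarse-grid correction, post-smooth.
    R restricts fine -> coarse, P interpolates coarse -> fine."""
    x_h = smooth(A_h, x_h, y_h)        # pre-smoothing
    r_h = y_h - A_h @ x_h              # fine-grid residual
    r_H = R @ r_h                      # restrict the residual
    A_H = R @ A_h @ P                  # Galerkin coarse operator
    e_H = np.linalg.solve(A_H, r_H)    # coarse solve (recurse for a V-cycle)
    x_h = x_h + P @ e_H                # interpolate and correct
    return smooth(A_h, x_h, y_h)       # post-smoothing

# Toy 1-D Dirichlet Laplacian on n interior points
n = 31
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Linear interpolation P; full-weighting restriction R = P.T / 2
nc = (n - 1) // 2
P = np.zeros((n, nc))
for j in range(nc):
    P[2 * j:2 * j + 3, j] = [0.5, 1.0, 0.5]
R = P.T / 2.0

y = np.ones(n)
x = np.zeros(n)
for _ in range(15):
    x = two_grid(A, y, x, R, P)
```

A few cycles already reduce the residual by orders of magnitude, which is the behavior the MCAO solver exploits.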

Multigrid (MG) Methods (cont.)

- Inter-grid transfers (restriction, or up-binning, and interpolation, or down-binning) are cheap.
- Cost is typically dominated by smoother application on the finest grid.
- Choice of smoother is problem-dependent.
- Block (i.e., layer-oriented) symmetric Gauss-Seidel (B-SGS) works well for the MCAO estimation step.
- An FFT-based modified Richardson iteration works well for Ex-AO estimation.

Block Gauss-Seidel Smoother

Based on the block splitting A = L + D + U, where
- L is the strictly lower block-triangular part (blocks A_{ij}, i > j),
- D is the block-diagonal part (A_{11}, …, A_{nn}),
- U is the strictly upper block-triangular part (blocks A_{ij}, i < j).

Block Gauss-Seidel Smoother (cont.)

Ax = b is equivalent to (L + D)x = b − Ux. This motivates the block forward iteration

(L + D) x_{k+1} = b − U x_k,   k = 0, 1, …

Similarly, we obtain the block backward iteration

(D + U) x_{k+1} = b − L x_k,   k = 0, 1, …

Block symmetric Gauss-Seidel (B-SGS) is obtained by interweaving the forward and backward iterations.
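The interleaved forward/backward sweeps can be sketched as follows; the block partitioning and the small SPD test matrix are illustrative only (the slides invert the diagonal blocks by Cholesky after reordering, whereas this sketch simply calls a dense solve):

```python
import numpy as np

def block_sgs(A_blocks, b_blocks, x_blocks, sweeps=1):
    """Block symmetric Gauss-Seidel for A x = b, with A_blocks[i][j]
    dense blocks. Only the diagonal blocks A_blocks[i][i] are solved."""
    n = len(b_blocks)
    for _ in range(sweeps):
        # forward sweep: (L + D) x_{k+1} = b - U x_k
        for i in range(n):
            r = b_blocks[i] - sum(A_blocks[i][j] @ x_blocks[j]
                                  for j in range(n) if j != i)
            x_blocks[i] = np.linalg.solve(A_blocks[i][i], r)
        # backward sweep: (D + U) x_{k+1} = b - L x_k
        for i in reversed(range(n)):
            r = b_blocks[i] - sum(A_blocks[i][j] @ x_blocks[j]
                                  for j in range(n) if j != i)
            x_blocks[i] = np.linalg.solve(A_blocks[i][i], r)
    return x_blocks

# Toy SPD system partitioned into nb blocks of size m
rng = np.random.default_rng(1)
m, nb = 2, 3
M = rng.standard_normal((m * nb, m * nb))
A = M @ M.T + 10 * np.eye(m * nb)            # SPD, strong block diagonal
b = rng.standard_normal(m * nb)
Ab = [[A[i*m:(i+1)*m, j*m:(j+1)*m] for j in range(nb)] for i in range(nb)]
bb = [b[i*m:(i+1)*m] for i in range(nb)]
xb = block_sgs(Ab, bb, [np.zeros(m) for _ in range(nb)], sweeps=30)
x = np.concatenate(xb)
```

For SPD A the SGS iteration always converges, and as a smoother one sweep per grid level suffices (cf. Fig. 3).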

Efficient PCG Solver

Estimation step: MG-preconditioned CG
- Block SGS smoother requires only inversion of the diagonal blocks, implemented using reordering + full Cholesky factorization.

Fitting step: incomplete-Cholesky-preconditioned CG
- Incomplete Cholesky applied to the full sparse matrix without reordering.
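Both steps share the same PCG loop with a pluggable preconditioner solve. A generic sketch follows; the Jacobi (diagonal) preconditioner at the bottom is a simple stand-in for the MG V-cycle or incomplete-Cholesky solves described above:

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-8, maxit=100):
    """Preconditioned conjugate gradient for SPD A.
    M_solve(r) applies the preconditioner inverse to a residual r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD system (1-D Laplacian) with a Jacobi preconditioner
n = 32
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
```

Swapping `M_solve` for one MG V-cycle (estimation) or a triangular solve with the incomplete Cholesky factors (fitting) recovers the two solvers on this slide.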

Preliminary Results

Algorithm tested against conventional matrix-multiply reconstructors on
- 8 m class problems with ~7×10³ degrees of freedom
- 32 m class problems with ~7×10⁴ degrees of freedom, solvable in Matlab with 2-3 GB of memory

Convergence obtained in 2-10 iterations:
- Convergence rate is a strong function of the WFS noise level
- Weak function of problem dimensionality and of NGS vs. LGS MCAO

Sample MCAO Problem Dimensionality

Aperture diameter (m)    |    8  |   16  |    32
WFS measurements         | 2240  | 8560  | 33320
Turbulence phase points  | 7270  | 21226 | 70838
DM actuators             |  789  | 2417  |  8449

Sample MCAO Problem Dimensionality (cont.)

[Figure: estimation matrix to be inverted. 6 layers, 5 NGSs, D = 16 m, phase screens 128 × 128.]

Fig. 1: Top-layer diagonal block of the estimation matrix. D = 16 m, phase screens 128 × 128.

Fig. 2: Fitting matrix (3 DMs, 932 actuators/DM). D = 16 m.

Fig. 3: Estimation. 6-layer profile, 5 WFSs using 5 NGSs, FoV diameter 100 arcsec, 1 V-cycle per CG iteration, 1 SGS iteration per grid level, SNR = 20, r₀ = 25 cm, Δx = r₀.

[Plots: estimation error norm and CG residual norm averaged over the FoV vs. CG estimation iteration for the 8 m, 16 m, and 32 m cases; direct solve vs. CG (8 m); RMS estimation error (32 m).]

Fig. 4: Fitting after 20 CG estimation iterations. Average over an array of 5 × 5 observation directions, FoV diameter 100 arcsec, incomplete Cholesky preconditioning, regularization parameter α = 10⁻⁵.

[Plots: residual error norm averaged over the FoV vs. CG fitting iteration for the 32 m, 16 m, and 8 m cases with 1, 2, and 3 DMs.]

Fig. 5: Fitting after 20 CG estimation iterations, same configuration as Fig. 4.

[Plots: direct solve vs. CG (8 m); CG residual norms (8 m and 16 m) for 1, 2, and 3 DMs; RMS residual error (32 m, 2 DMs).]