Numerical Methods. Rafał Zdunek. Underdetermined problems (2h.) (FOCUSS, M-FOCUSS, Applications)



Introduction

Solutions to underdetermined linear systems; morphological constraints; the FOCUSS algorithm; the M-FOCUSS algorithm.

Bibliography

[1] Å. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, 1996.
[2] G. Golub and C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press (Third Edition), 1996.
[3] I. F. Gorodnitsky and B. D. Rao, "Sparse signal reconstructions from limited data using FOCUSS: A re-weighted minimum norm algorithm," IEEE Trans. Signal Process., vol. 45, no. 3, pp. 600-616, Mar. 1997.
[4] B. D. Rao and K. Kreutz-Delgado, "Deriving algorithms for computing sparse solutions to linear inverse problems," in Proc. 31st Asilomar Conf. Signals Syst. Comput., CA, Nov. 1997, vol. 1, pp. 955-959.
[5] B. D. Rao, K. Engan, S. F. Cotter, J. Palmer, and K. Kreutz-Delgado, "Subset selection in noise based on diversity measure minimization," IEEE Trans. Signal Process., vol. 51, no. 3, pp. 760-770, Mar. 2003.
[6] S. F. Cotter, B. D. Rao, K. Engan, and K. Kreutz-Delgado, "Sparse solutions to linear inverse problems with multiple measurement vectors," IEEE Trans. Signal Process., vol. 53, no. 7, pp. 2477-2488, Jul. 2005.

Solutions to linear systems

A system of linear equations can be expressed in the following matrix form:

    Ax = b,    (1)

where A = [a_ij] in R^{M x N} is a coefficient matrix, b = [b_i] in R^M is a data vector, and x in R^N is the solution to be estimated. Let B = [A b] in R^{M x (N+1)} be the augmented matrix of the system (1). The system of linear equations may behave in any one of three possible ways:

(A) The system has no solution if rank(A) < rank(B).
(B) The system has a single unique solution if rank(A) = rank(B) = N.
(C) The system has infinitely many solutions if rank(A) = rank(B) < N.
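The three cases can be checked numerically by comparing rank(A) with the rank of the augmented matrix. A minimal pure-Python sketch (the helper names `rank` and `classify` are illustrative, not from the lecture; exact rational arithmetic avoids rounding issues):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination in exact rational arithmetic."""
    m = [[Fraction(v) for v in r] for r in rows]
    nrows = len(m)
    ncols = len(m[0]) if m else 0
    rk, col = 0, 0
    while rk < nrows and col < ncols:
        piv = next((r for r in range(rk, nrows) if m[r][col] != 0), None)
        if piv is None:
            col += 1
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for r in range(rk + 1, nrows):
            f = m[r][col] / m[rk][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
        col += 1
    return rk

def classify(A, b):
    """Return 'none', 'unique', or 'infinite' per cases (A)/(B)/(C)."""
    N = len(A[0])
    ra = rank(A)
    rb = rank([row + [bi] for row, bi in zip(A, b)])
    if ra < rb:
        return "none"          # case (A): inconsistent
    return "unique" if ra == N else "infinite"   # cases (B)/(C)
```

For example, `classify([[1, 1]], [2])` reports `"infinite"`, since a single equation in two unknowns is under-determined.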

Solutions to linear systems

Case C occurs for rank-deficient or under-determined problems. In spite of there being infinitely many solutions, a good approximation to the true solution can be obtained if some a priori knowledge about the nature of the true solution is available. The additional constraints are usually concerned with the degree of sparsity or smoothness of the true solution.

RREF (reduced row echelon form): the pivot columns correspond to basic variables; the non-pivot columns correspond to free variables.

Example 1

     x1 + 3x2 + x3 = a
    -x1 - 2x2 + x3 = b
    3x1 + 7x2 - x3 = c

Let:

            [  1   3   1 | a ]                    [ 1  0  -5 | -2a - 3b   ]
    [A b] = [ -1  -2   1 | b ]  --Gauss-Jordan->  [ 0  1   2 | a + b      ]
            [  3   7  -1 | c ]                    [ 0  0   0 | c - a + 2b ]

Basic variables: x1, x2. Free variable: x3.

The system is consistent (it has solutions) if c - a + 2b = 0, i.e. c = a - 2b.

Solution: x3 = free variable, x2 = a + b - 2x3, x1 = -2a - 3b + 5x3.

Example 2

Let:

    x1 + 2x2 + 2x3 + 3x4 = 4
    2x1 + 4x2 + x3 + 3x4 = 5
    3x1 + 6x2 + x3 + 4x4 = 7

            [ 1  2  2  3 | 4 ]                    [ 1  2  0  1 | 2 ]
    [A b] = [ 2  4  1  3 | 5 ]  --Gauss-Jordan->  [ 0  0  1  1 | 1 ]
            [ 3  6  1  4 | 7 ]                    [ 0  0  0  0 | 0 ]

Consistent.

Solution: x1 = 2 - 2x2 - x4, x2 = free variable, x3 = 1 - x4, x4 = free variable.

Example 2 (continued)

Thus:

        [ x1 ]   [ 2 - 2x2 - x4 ]   [ 2 ]        [ -2 ]        [ -1 ]
    x = [ x2 ] = [ x2           ] = [ 0 ] + x2 * [  1 ] + x4 * [  0 ]
        [ x3 ]   [ 1 - x4       ]   [ 1 ]        [  0 ]        [ -1 ]
        [ x4 ]   [ x4           ]   [ 0 ]        [  0 ]        [  1 ]

    x_general = x_particular + x_homogeneous

The homogeneous solution (a combination of the vectors spanning N(A)) is a solution to the system Ax = 0.

Problem: How should the free variables be chosen?
Remark: Setting the free variables to zero is not always a good choice!
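The decomposition into particular and homogeneous parts can be verified by direct substitution; a small pure-Python check (the coefficients below follow the reconstructed Example 2 and are otherwise illustrative):

```python
# System: x1 + 2x2 + 2x3 + 3x4 = 4
#        2x1 + 4x2 +  x3 + 3x4 = 5
#        3x1 + 6x2 +  x3 + 4x4 = 7
A = [[1, 2, 2, 3], [2, 4, 1, 3], [3, 6, 1, 4]]
b = [4, 5, 7]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

x_part = [2, 0, 1, 0]            # particular solution (free variables x2 = x4 = 0)
null_basis = [[-2, 1, 0, 0],     # null-space direction for free variable x2
              [-1, 0, -1, 1]]    # null-space direction for free variable x4

assert matvec(A, x_part) == b                # particular solution solves Ax = b
for v in null_basis:
    assert matvec(A, v) == [0, 0, 0]         # homogeneous solutions solve Ax = 0

# any x_particular + c2*v2 + c4*v4 also solves Ax = b
c2, c4 = 3, -2
x_gen = [p + c2 * u + c4 * w for p, u, w in zip(x_part, *null_basis)]
assert matvec(A, x_gen) == b
```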

Sparseness constraints

Regularized least-squares problem:

    min_x { ||Ax - b||_2^2 + gamma * E^(p)(x) }

L_p diversity measure (Gorodnitsky, Rao, 1997):

    E^(p)(x) = sgn(p) * sum_{j=1}^{N} |x_j|^p.
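For 0 < p <= 1 the diversity measure penalizes dense vectors more heavily than sparse vectors of the same energy, which is what pushes the regularized solution toward sparsity. A quick pure-Python illustration (the test vectors are arbitrary):

```python
def diversity(x, p):
    """L_p diversity measure E^(p)(x) = sgn(p) * sum_j |x_j|**p."""
    s = (p > 0) - (p < 0)  # sgn(p)
    return s * sum(abs(xj) ** p for xj in x)

sparse = [1.0, 0.0, 0.0, 0.0]
dense = [0.5, 0.5, 0.5, 0.5]   # same l2 norm as `sparse`

# same energy, but the sparse vector has the smaller diversity measure
assert diversity(sparse, 0.5) < diversity(dense, 0.5)
assert diversity(sparse, 1.0) < diversity(dense, 1.0)
# for p = 2 (ordinary energy) the two vectors are indistinguishable
assert abs(diversity(sparse, 2.0) - diversity(dense, 2.0)) < 1e-12
```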

FOCUSS algorithm

Let

    J(x) = ||Ax - b||_2^2 + gamma * E^(p)(x).

Stationary point:

    grad J(x*) = A^T A x* - A^T b + lambda * W^{-1}(x*) x* = 0,

where

    W(x*) = diag{ |x*_j|^{2-p} },    lambda = gamma * p / 2.

Hence:

    x* = W(x*) A^T ( A W(x*) A^T + lambda * I_M )^{-1} b.

Iterative updates:

    x^(k+1) = W(x^(k)) A^T ( A W(x^(k)) A^T + lambda * I_M )^{-1} b.

FOCUSS algorithm

Set p in [0, 2], lambda > 0. Randomly initialize x^(0).
For k = 0, 1, 2, ..., until convergence do:

    W_k = diag{ |x_j^(k)|^{2-p} },
    x^(k+1) = W_k A^T ( A W_k A^T + lambda * I_M )^{-1} b.

Wiener filtered FOCUSS algorithm

Set p in [0, 2], lambda > 0. Randomly initialize x^(0).
For k = 0, 1, 2, ..., until convergence do:

    W_k = diag{ |x_j^(k)|^{2-p} },
    x^(k+1) = W_k A^T ( A W_k A^T + lambda * I_M )^{-1} b,
    x^(k+1) <- F( x^(k+1) )    (Wiener filtering step).

R. Zdunek, Z. He, A. Cichocki, Proc. IEEE ISBI, 2008.

Wiener filtered FOCUSS algorithm

Markov Random Field (MRF) model with first- and second-order interactions: each pixel j has a 3x3 neighborhood N_j of L = 9 pixels. Local statistics:

    mu_j = (1/L) * sum_{n in N_j} x_n^(k+1)                       (local mean around the j-th pixel)
    sigma_j^2 = (1/L) * sum_{n in N_j} ( x_n^(k+1) - mu_j )^2     (local variance)

Wiener filtered FOCUSS algorithm

Update rule:

    x_j^(k+1) <- mu_j + ( (sigma_j^2 - nu) / sigma_j^2 ) * ( x_j^(k+1) - mu_j ),

where the mean noise variance is

    nu = (1/N) * sum_{j=1}^{N} sigma_j^2.
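The filtering step F(.) can be sketched as a pixelwise empirical Wiener filter over 3x3 neighborhoods. A pure-Python sketch (the border clamping and the clipping of the gain to be non-negative are implementation choices of this sketch, not stated on the slides):

```python
def wiener_filter(img):
    """Empirical Wiener filtering with 3x3 local statistics (L = 9)."""
    H, W = len(img), len(img[0])

    def neighborhood(i, j):
        # clamp indices at the borders so every pixel has L = 9 neighbors
        return [img[min(max(i + di, 0), H - 1)][min(max(j + dj, 0), W - 1)]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)]

    # local mean and local variance around each pixel
    mu = [[sum(neighborhood(i, j)) / 9.0 for j in range(W)] for i in range(H)]
    var = [[sum((v - mu[i][j]) ** 2 for v in neighborhood(i, j)) / 9.0
            for j in range(W)] for i in range(H)]

    # mean noise variance: the average of the local variances
    nu = sum(var[i][j] for i in range(H) for j in range(W)) / (H * W)

    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            gain = max(var[i][j] - nu, 0.0) / var[i][j] if var[i][j] > 0 else 0.0
            out[i][j] = mu[i][j] + gain * (img[i][j] - mu[i][j])
    return out
```

On a constant image the local variances vanish, the gain is zero, and the filter returns the local mean, i.e. the image itself.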

Limited-view tomographic imaging

The least-squares solution set is

    LSS(A; b) = { x in R^N : ||Ax - b||_2 = min! },

and it decomposes as

    LSS(A; b) = x_LS + N(A),

where the minimal-norm least-squares solution is the projection of the exact solution onto the row space of A:

    x_LS = x_r = P_{R(A^T)} x_exact.

Example: a rank-deficient matrix A(alpha), with entries 0 and alpha, whose null space N(A(alpha)) is spanned by a single vector.

Tomographic imaging example

Phantom image. Minimal l2-norm least-squares solution (LS algorithms: ART, SIRT).

Tomographic imaging (noise-free data)

FOCUSS algorithm (p = 1, k = 5, lambda = 10^-8). Wiener filtered FOCUSS algorithm (p = 1, k = 5, lambda = 10^-8).

Tomographic imaging (noisy data, SNR = 30 dB)

FOCUSS algorithm (p = 1, k = 5, lambda = 40). Wiener filtered FOCUSS algorithm (p = 1, k = 5, lambda = 40).

Tomographic imaging (Normalized RMSE, noise-free data)

Tomographic imaging (Normalized RMSE, p = 1, noisy data)

M-FOCUSS (Cotter, Rao, Engan, Kreutz-Delgado, 2005)

Let

    AX + N = B,

where A in R^{M x N} with M < N (under-determined) and rank(A) = M, X = [x_1, ..., x_T] in R^{N x T} is the matrix of solutions, B in R^{M x T} is the data matrix, and N = [n_1, ..., n_T] in R^{M x T} is additive noise.

If M < N, the null space of A is non-trivial, so additional constraints are necessary to select the right solution. M-FOCUSS assumes sparse solutions.

Theorem: The sparse solution to the consistent system AX = B, with rank(B) = T, is unique if any M columns of A are linearly independent (the unique representation property, URP), and, for each t, x_t has at most ceil((M + 1)/2) nonzero entries, where ceil(.) is the ceiling function.

M-FOCUSS

The M-FOCUSS algorithm iteratively solves the following equality-constrained problem:

    min_X J^(p)(X)    s.t.  AX = B,

where

    J^(p)(X) = sum_{j=1}^{N} ( ||x_j||_2 )^p = sum_{j=1}^{N} ( sum_{t=1}^{T} x_jt^2 )^{p/2}

is the l_{2,p} diversity measure promoting row sparsity, x_j denotes the j-th row of X, and p in [0, 2] controls the degree of sparsity (p = 2 gives the LS solution with minimal l2-norm; p = 0 gives the l0-norm solution, an NP-hard problem).

M-FOCUSS

For inconsistent data, i.e. B not in R(A), Cotter et al. developed the regularized M-FOCUSS algorithm, which solves a Tikhonov regularized least-squares problem in a single iterative step:

    X^(k+1) = arg min_X Psi( X | X^(k) ),

where

    Psi( X | X^(k) ) = ||B - AX||_F^2 + lambda * || (W^(k))^{-1} X ||_F^2,

    W^(k) = diag( (w_j^(k))^{1 - p/2} ),  with  w_j^(k) = ( sum_{t=1}^{T} (x_jt^(k))^2 )^{1/2} = ||x_j^(k)||_2

(the 2-norm of the j-th row of X^(k)), and lambda > 0 is a regularization parameter.

Regularized M-FOCUSS: For k = 1, 2, ...

    A^(k+1) = A W^(k),
    X^(k+1) = W^(k) ( A^(k+1) )^T ( A^(k+1) ( A^(k+1) )^T + lambda * I_M )^{-1} B.
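A compact NumPy sketch of the regularized M-FOCUSS update (NumPy is assumed; the initialization, iteration count, and test problem are illustrative):

```python
import numpy as np

def m_focuss(A, B, p=1.0, lam=1e-8, iters=50, eps=1e-12):
    """Regularized M-FOCUSS iteration:
    W_k = diag(||row_j(X_k)||_2 ** (1 - p/2)),
    X_{k+1} = W_k (A W_k)^T ((A W_k)(A W_k)^T + lam*I)^{-1} B."""
    M, N = A.shape
    X = np.ones((N, B.shape[1]))  # nonzero initialization
    for _ in range(iters):
        w = np.linalg.norm(X, axis=1) ** (1.0 - p / 2.0) + eps
        AW = A * w  # A @ diag(w): scale the columns of A
        X = w[:, None] * (AW.T @ np.linalg.solve(AW @ AW.T + lam * np.eye(M), B))
    return X

# under-determined MMV test problem with a row-sparse ground truth (T = 3)
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 30))
X_true = np.zeros((30, 3))
X_true[4] = [1.0, -2.0, 0.5]
X_true[12] = [0.8, 0.3, -1.0]
B = A @ X_true

X = m_focuss(A, B, p=1.0)
```

Because the weights depend on whole rows of X, the measurement vectors share a common sparsity profile: entire rows of X survive or shrink together.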