User's Guide to Compressive Sensing


1 Walter B. Richardson, Jr., University of Texas at San Antonio, Engineering. November 18, 2011.

2 Abstract During the past decade, Donoho, Candès, and others have developed a framework for representing signals/images that are known to be sparse in some basis. In their approach, data compression takes place during acquisition rather than as an afterthought. In this talk, I will give a general introduction to compressive sensing with applications to problems in computerized tomography. The intended audience is graduate students and undergraduates having a strong background in linear algebra. Extending the definition of the p-norm to the range 0 < p < 1, we show how standard variational problems from linear algebra, such as least-squares minimum $L^2$-norm solutions to Ax = b, transform, in say the $L^1$ norm, in such a way as to favor sparse solutions.

3 Outline
1. Variational formulation of linear algebra problems.
2. The 1-norm and quasi-norms (0 < p < 1) favor sparsity.
3. Underlying assumption of compressive sensing: the signal/image is sparse in some basis.
4. Information recovery and computational complexity.
5. Least-squares minimum-norm solution of Ax = b.
6. Sample results in 1-D and 2-D.

4 Variational Problems in Engineering In many areas of engineering and mathematics, various physical and geometrical quantities are given by quotients of the form Q below:
$$ Q = \min_{u \in H_0^1(\Omega)} \frac{\int_\Omega |\nabla u|^2}{\left( \int_\Omega |u|^p \right)^{2/p}} . $$
If p = 2, then Q is the principal Dirichlet eigenvalue. If p = 1 and n = 2, then S = 4/Q is the torsional rigidity of a cylindrical beam of cross section Ω. If $p = \frac{2n}{n-2}$, then Q is the best Sobolev constant $S_n$.

5 Constrained Optimization as Regularization Perhaps surprisingly, many classical problems in linear algebra can be recast in a variational formulation, allowing calculus to be used for their solution. Consider a simple problem: given a function f(x, y), find where the min/max values of f occur subject to a constraint defined by a second function g(x, y) taking on the value K: min/max f(x, y) subject to g(x, y) = K. Without the constraint, the problem is straightforward: take the first derivative and set it to zero to find the critical (stationary) points. The theory of Lagrange multipliers says that for the constrained problem, $\nabla f(x) = \lambda \nabla g(x)$ at a critical point.

6 Constrained Optimization Using the $L^2$ Norm Often a vector norm enters either in the objective functional f or in the constraint g. Consider the linear functional f(x, y) = 2x + 1y and solve the optimization problem $\|f\| = \max\{\, f(v) : \|v\| = 1 \,\}$.
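A short worked instance (added for illustration; the constraint norm is taken to be the Euclidean one): with $g(x, y) = x^2 + y^2$, the Lagrange condition $\nabla f = \lambda \nabla g$ gives

```latex
% Worked example: maximize f(x,y) = 2x + y over the unit circle x^2 + y^2 = 1.
\begin{aligned}
(2,\,1) = \lambda\,(2x,\,2y) &\;\Longrightarrow\; (x,\,y) = \tfrac{1}{2\lambda}(2,\,1),\\
x^2 + y^2 = 1 &\;\Longrightarrow\; \lambda = \pm\tfrac{\sqrt{5}}{2},\\
\max_{\|v\|_2 = 1} f(v) &= f\!\bigl(\tfrac{2}{\sqrt{5}},\,\tfrac{1}{\sqrt{5}}\bigr) = \sqrt{5} = \|(2,1)\|_2 .
\end{aligned}
```

In other words, the maximum of a linear functional over the Euclidean unit sphere is the Euclidean norm of its coefficient vector, which is the Cauchy-Schwarz inequality in variational form.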

7 Inner Products The Euclidean norm is special because it arises from an inner product. This is true only when the norm satisfies the parallelogram law: $\|x+y\|^2 + \|x-y\|^2 = 2\|x\|^2 + 2\|y\|^2$. An inner product on $\mathbb{C}^n$ is a function $\langle \cdot, \cdot \rangle$ satisfying: (1) $\langle x, x \rangle \ge 0$, with equality only when x is the null vector; (2) $\langle x+y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$; (3) $\langle \alpha x, y \rangle = \alpha \langle x, y \rangle$; (4) $\langle y, x \rangle = \overline{\langle x, y \rangle}$. Besides the standard inner product on $\mathbb{R}^m$ or its complex analogue, the following inner products on the space of real-valued functions on a domain Ω are useful: $\int_\Omega f(x)\, g(x)\, dx$ and $\int_\Omega f(x)\, g(x) + \nabla f(x) \cdot \nabla g(x)\, dx$.

8 The Projection Theorem Theorem. If S is a closed, convex subset of a Hilbert space $\{H, \langle\cdot,\cdot\rangle\}$ and b is a point of H, then there is a unique point w of S which is closest to b. Theorem. If L is a (closed) subspace of an inner product space $\{H, \langle\cdot,\cdot\rangle\}$ and $b \in H$, then each two of the following statements are equivalent for a point w in L: (i) for every x in L, $\|b - w\| \le \|b - x\|$; (ii) for every x in L, $\langle b - w, x \rangle = 0$; (iii) if $\{q_j\}_j$ is an orthonormal basis of L, then $w = \sum_j \langle q_j, b \rangle\, q_j$. Note that subspaces are necessarily convex and that part (iii) can be rewritten as $w = \left( \sum_j q_j q_j^T \right) b$.
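A minimal numerical check of part (iii), assuming NumPy and randomly generated illustrative data:

```python
import numpy as np

# Project b onto the column space of L: build an orthonormal basis with QR,
# then w = sum_j <q_j, b> q_j = Q Q^T b  (part (iii) of the theorem).
rng = np.random.default_rng(1)
L = rng.standard_normal((6, 3))      # columns span a 3-dimensional subspace of R^6
b = rng.standard_normal(6)

Q, _ = np.linalg.qr(L)               # orthonormal basis of the column space
w = Q @ (Q.T @ b)                    # closest point of the subspace to b

# Part (ii): the residual b - w is orthogonal to every vector in the subspace.
print(np.allclose(L.T @ (b - w), 0))
```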

9 Inner Products and Orthogonal Projections

10 Application of Euclidean Norms: Least Squares Given an m x n rectangular matrix A, what do we mean by "solve Ax = b"? There may be no solution in the classical sense, but we can minimize the norm of the residual r = Ax - b. Define the objective functional f via $f(x) = \tfrac{1}{2}\|Ax - b\|_2^2$, so that $\nabla f(x) = A^T(Ax - b)$. Clearly f is bounded below; setting the gradient equal to zero, a minimizer satisfies the normal equations $A^T A x = A^T b$. Now there is always a solution: unique if A has linearly independent columns; if not, we can always choose the one of minimum Euclidean norm.
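A minimal NumPy sketch of two routes to the least-squares minimizer (the matrix and data are arbitrary, chosen only for illustration):

```python
import numpy as np

# Solve an overdetermined system in the least-squares sense, once via the
# normal equations and once via lstsq, which also returns the minimum-norm
# solution when A is rank-deficient.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))          # m = 20 equations, n = 5 unknowns
b = rng.standard_normal(20)

x_normal = np.linalg.solve(A.T @ A, A.T @ b)        # A^T A x = A^T b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)     # QR/SVD-based solver

print(np.allclose(x_normal, x_lstsq))                # same minimizer
print(np.linalg.norm(A.T @ (A @ x_lstsq - b)))       # gradient ~ 0 at the minimizer
```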

11 Least Squares Pros & Cons
Advantages:
1. generality and convenience
2. symmetric form even for non-self-adjoint PDEs
3. less sensitive to changes in PDE type (transonic flow)
4. ease of error evaluation
5. mixed least-squares finite elements without the restrictive inf-sup condition of Galerkin mixed methods
Disadvantages:
1. ill-conditioning, e.g. the normal equations for Au = b
2. degradation of iterative convergence
3. subtle scaling issues
4. can converge to a wrong solution

12 Vector Norms A vector norm on $\mathbb{R}^n$ is a function satisfying: (1) $\|x\| \ge 0$, with equality only when x is the null vector; (2) $\|\alpha x\| = |\alpha|\, \|x\|$; (3) $\|x + y\| \le \|x\| + \|y\|$ (triangle inequality). For $x \in \mathbb{R}^n$, define $\|x\|_p = \left( \sum_{j=1}^n |x_j|^p \right)^{1/p}$. If $p \ge 1$ this is a norm: special cases are p = 2 (Euclidean), p = 1 (the $\ell_1$ norm), and the sup or $\infty$ norm, the limiting case as $p \to \infty$. For 0 < p < 1 it is a quasi-norm; the triangle inequality is replaced by $\|x + y\| \le C(\|x\| + \|y\|)$. Note that the range 0 < p ≤ 1 is important for new applications involving sparsity.
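A tiny numerical illustration (not from the slides) of why small p favors sparsity:

```python
import numpy as np

# p-"norms" for p < 1 penalize spread-out vectors much more than sparse ones.
# Both vectors below have the same Euclidean (p = 2) norm.
def pnorm(x, p):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

sparse = np.array([1.0, 0.0, 0.0, 0.0])            # one nonzero entry
dense  = np.full(4, 0.5)                           # energy spread over 4 entries

for p in (2.0, 1.0, 0.5):
    print(p, pnorm(sparse, p), pnorm(dense, p))
# p = 2   : both equal 1
# p = 1   : sparse 1.0, dense 2.0
# p = 0.5 : sparse 1.0, dense 8.0  -> small p strongly favors the sparse vector
```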

13 Constrained Optimization Using p-norms

14 Minimax Characterization of Eigenvalues Theorem. Suppose A is a self-adjoint compact linear operator. If the positive eigenvalues $\lambda_k^+$ are ordered in decreasing order with multiplicities repeated, then $\lambda_k^+ = \min_{\pi_{k-1}} \max_{x \in \pi_{k-1}} \langle Ax, x \rangle / \|x\|^2$, where $\pi_{k-1}$ denotes a linear subspace of H of codimension k - 1. Alternatively, $\lambda_k^+ = \max_{L_k} \min_{x \in L_k} \langle Ax, x \rangle / \|x\|^2$, where now $L_k$ denotes a linear subspace of H of dimension k.
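For k = 1 the second characterization is just maximization of the Rayleigh quotient, which power iteration carries out numerically. A minimal sketch, assuming a symmetric matrix whose top eigenvalue dominates in magnitude (the helper name rayleigh_max is only illustrative):

```python
import numpy as np

def rayleigh_max(A, iters=500, seed=0):
    # Power iteration: repeatedly apply A and normalize; the iterates align
    # with the eigenvector that maximizes <Ax, x>/<x, x> when that eigenvalue
    # dominates in magnitude.
    x = np.random.default_rng(seed).standard_normal(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return (x @ A @ x) / (x @ x)

B = np.random.default_rng(1).standard_normal((8, 8))
A = B @ B.T                                          # symmetric positive semidefinite
print(rayleigh_max(A), np.linalg.eigvalsh(A)[-1])    # should agree closely
```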

15 Eigenvalues - Extremal Values

16 Induced Matrix Norms An m x n matrix A is a transformation from $\mathbb{R}^n$ into $\mathbb{R}^m$; choose a vector norm in the domain and one in the range space. Maximize $\|Ax\|$ over the unit sphere: $\|A\|_{(m,n)} = \sup\{\, \|Ax\|_{\mathbb{R}^m} : \|x\|_{\mathbb{R}^n} = 1 \,\}$.

17 Singular Value Decomposition Geometry: the image of the unit sphere under any m x n (rectangular) matrix A is a hyperellipse. Optimization problem: maximize $f(x) = \|Ax\|^2 = \langle Ax, Ax \rangle = \langle A^H A x, x \rangle$ subject to $\|x\|_2 = 1$. Use the Spectral Theorem and the fact that $A^H A$ is Hermitian to prove: Theorem. Every matrix A in $\mathbb{C}^{m \times n}$ can be factored as $A = U \Sigma V^H$ (unitary)(diagonal)(unitary). The columns of U (m x m) are eigenvectors of $A A^H$; the columns of V (n x n) are eigenvectors of $A^H A$. If r = rank(A), then the r singular values on the diagonal of Σ (m x n) are the square roots of the nonzero eigenvalues of both $A A^H$ and $A^H A$.

18 Exercise: Generalized Inverse via SVD Consider the problem of finding the minimum-norm least-squares (MNLS) solution to Ax = b for the 3 x 4 matrix A and right-hand-side vector b, where
$$ A = \begin{pmatrix} \sigma_1 & 0 & 0 & 0 \\ 0 & \sigma_2 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \qquad b = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}, \qquad A^\dagger = \begin{pmatrix} 1/\sigma_1 & 0 & 0 \\ 0 & 1/\sigma_2 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, $$
and $\sigma_1, \sigma_2$ are positive. (Note that both the rows and the columns of A are linearly dependent.) For a matrix in the above form it is clear how to find the projection Pb of b onto the range of A and then, among all solutions of Ax = Pb, find the one of minimum norm. Show that the latter can be obtained by applying $A^\dagger$ above to the vector b. Show how the SVD can be used to obtain the MNLS solution for a general matrix A, and define $A^\dagger$, the generalized (Moore-Penrose) inverse of A.
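A hedged numerical sketch of the general construction (illustrative data; the tolerance choice is one reasonable convention, not the only one):

```python
import numpy as np

# Build the Moore-Penrose pseudoinverse from the SVD by transposing and
# inverting only the nonzero singular values, then compare with NumPy's
# built-in pinv and with the minimum-norm lstsq solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
A[2] = A[0] + A[1]                 # force rank deficiency (rank 2)
b = rng.standard_normal(3)

U, s, Vh = np.linalg.svd(A, full_matrices=False)
tol = max(A.shape) * np.finfo(float).eps * s[0]
s_inv = np.where(s > tol, 1.0 / s, 0.0)       # leave "zero" singular values at 0
A_dag = Vh.T @ np.diag(s_inv) @ U.T

x_mnls = A_dag @ b
print(np.allclose(A_dag, np.linalg.pinv(A)))
print(np.allclose(x_mnls, np.linalg.lstsq(A, b, rcond=None)[0]))
```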

19 Compression Using SVD The SVD factorization can be used to compress an image. Recall that a matrix product can be rewritten as a sum of outer products, i.e. $A = U \Sigma V^H = \sum_{j=1}^{r} \sigma_j u_j v_j^H$ where r = rank(A). Simply discard the terms with $\sigma_j < \epsilon$.
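A short sketch of the truncation (the synthetic "image" and the function name svd_compress are purely illustrative):

```python
import numpy as np

def svd_compress(image, k):
    # Keep only the k largest singular values/vectors: the rank-k approximation
    # A_k = sum_{j<=k} sigma_j u_j v_j^T, which is optimal in the spectral and
    # Frobenius norms (Eckart-Young).
    U, s, Vh = np.linalg.svd(image, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]

# Illustrative use on a synthetic image (a smooth ramp plus a few bumps):
x = np.linspace(0, 1, 128)
img = np.outer(x, x) + 0.1 * np.outer(np.sin(8 * x), np.cos(5 * x))
approx = svd_compress(img, k=8)
print(np.linalg.norm(img - approx) / np.linalg.norm(img))   # relative error
```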

20 u + v Models in Image Processing Consider the following variational problem. Let H be a Hilbert space and $V : H \to \mathbb{R} \cup \{+\infty\}$ a lower semicontinuous, strictly convex functional. A global minimizer exists and is unique. Yosida introduced the following regularization $V_\lambda(x)$, λ > 0, of V(x): $V_\lambda(x) = \inf\{\, V(y) + \lambda \|x - y\|^2 : y \in H \,\}$. These $V_\lambda(x)$ in essence smooth the functional V(x). A special case is the Osher-Rudin model in image processing: given an image $f(x) \in L^2(\mathbb{R}^2)$, minimize $\inf\{ J(u) : f = u + v \}$ with $J(u) = \|u\|_{BV} + \lambda \|v\|_2^2$, where the infimum is computed over all possible decompositions of f into the sum of a function u in $BV(\mathbb{R}^2)$ and a function v in $L^2(\mathbb{R}^2)$.

21 Variational Image Decomposition Consider splitting (Meyer) an image f into components f = u + v, where u is a smoothed version of f and v represents texture plus noise. A variational formulation for the image model is $J(u, \lambda) = \frac{1}{p}\|\nabla u\|_p^p + \lambda \frac{1}{q}\|v\|_q^q$: p = 2, q = 2 gives the heat equation; p = 1, q = 2 the Rudin-Osher-Fatemi TV-$L^2$ model; p = 1, q = 1 the Chan-Vese TV-$L^1$ model, which preserves contrast and geometry. LSFEM with Sobolev gradients suggests that negative, even fractional, norms are useful. Connection to real interpolation spaces via the K-functional: $K(t, u) = \inf_{v \in X} \left( \|u - v\|_Y^2 + t^2 \|v\|_X^2 \right)^{1/2}$.

22 Binary Decomposition Decompose f = u + v, where $u \in H^1$ and $v \in L^2$. Minimize $J(u) = \|u\|_{H^1}^2 + \lambda \|f - u\|_{H^0}^2$ over $u \in H^1$, with zero Neumann boundary conditions as the most reasonable choice. Since $J'(u)h = 2(u, h) + 2(\nabla u, \nabla h) - 2\lambda(f - u, h)$, the Euler-Lagrange equation is the Helmholtz equation $\Delta u - (1 + \lambda) u = -\lambda f$. The slide shows the resulting f = u + v decomposition.
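A spectral sketch of this decomposition (illustrative; it uses periodic rather than zero Neumann boundary conditions so the Helmholtz solve becomes a pointwise division in Fourier space, and h1_l2_split is just a name chosen here):

```python
import numpy as np

def h1_l2_split(f, lam):
    # Solve Delta u - (1 + lam) u = -lam f with periodic boundary conditions:
    # in Fourier space (-|k|^2 - (1 + lam)) u_hat = -lam f_hat.
    ny, nx = f.shape
    ky = 2 * np.pi * np.fft.fftfreq(ny)
    kx = 2 * np.pi * np.fft.fftfreq(nx)
    KX, KY = np.meshgrid(kx, ky)
    u_hat = lam * np.fft.fft2(f) / (KX**2 + KY**2 + 1.0 + lam)
    u = np.real(np.fft.ifft2(u_hat))
    return u, f - u                      # smooth part u, residual v = f - u

f = np.random.default_rng(0).standard_normal((64, 64))
u, v = h1_l2_split(f, lam=0.4)
```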

23 Varying the Scale Parameter λ The weighting parameter λ controls which details are removed from the smoothed image. The slide shows u, v decompositions for λ = 0.1, 0.4, 0.7.

24 Negative Norm Ternary Decomposition Shen, Osher, et al. decompose f = u + v + w, where $u \in H^1$, $v \in H^{-1}$, $w \in L^2$. Minimize $J(u, v) = \|u\|_{H^1}^2 + \lambda_1 \|v\|_{H^{-1}}^2 + \lambda_2 \|f - (u + v)\|_{H^0}^2$ over $u \in H^1$ and $v \in H^{-1}$. The slide shows the original image at left, followed by the u, v, w decomposition.

25 Negative Sobolev Norms and Sobolev Gradients Mumford and Meyer: model images by distributions rather than just functions. Meyer proposed the Besov space $B^{-1}_{\infty,\infty}$ for modeling textures. (Osher, Solé, Vese, Shen, Chan, Bertozzi, others.) Use of the $H^{-1}$ norm for finite element least-squares methods to obtain better error estimates and coercivity constants. (Carey, Pehlivanov, Bramble, Pasciak, Bochev, Gunzburger, McCormick, Manteuffel.) Sobolev gradients for general PDEs (transonic flow, Ginzburg-Landau, minimal surface) and inverse problems: gradient descent in $H^1$ rather than in $H^0 = L^2$; precondition with the inverse Laplacian. (Neuberger, Richardson, Renka, Knowles, Mahavier.)

26 Wavelet Shrinkage and u + v Models David Donoho and Ian Johnstone, Wavelet Shrinkage: Asymptopia?, J. R. Statist. Soc., 1995, 57, No. 2. Yves Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations. Sort the wavelet coefficients: the first N coefficients provide a denoised image and also the optimal nonlinear approximation. Denoising algorithms deal with u + v models: u(x) is an unknown function in BV satisfying $\|u\|_{BV} \le C$. The observed image f(x) = u(x) + v(x) is corrupted by noise. The statistics of the noise v are known (often assumed Gaussian white noise, i.e. the sampled noise sequence $g_k(\omega)$, $k \in \mathbb{Z}^2$, is i.i.d. $N(0, \sigma)$). E[·] denotes mathematical expectation with respect to these statistics.
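A minimal sketch of the keep-the-largest-coefficients idea (illustrative only; it uses an orthonormal DCT in place of a wavelet basis for self-containment, and the signal, noise level, and function name shrink_denoise are assumptions):

```python
import numpy as np
from scipy.fft import dct, idct

def shrink_denoise(f, keep):
    # Transform to an orthonormal basis, keep only the `keep` largest
    # coefficients (hard thresholding), and transform back.
    c = dct(f, norm='ortho')
    thresh = np.sort(np.abs(c))[-keep]
    c_hat = np.where(np.abs(c) >= thresh, c, 0.0)
    return idct(c_hat, norm='ortho')

# Piecewise-smooth signal plus Gaussian white noise.
t = np.linspace(0, 1, 512)
u = np.where(t < 0.5, np.sin(2 * np.pi * t), 0.3)
f = u + 0.1 * np.random.default_rng(0).standard_normal(t.size)
u_hat = shrink_denoise(f, keep=40)
print(np.linalg.norm(u_hat - u) < np.linalg.norm(f - u))   # denoising helped?
```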

27 Wavelet Shrinkage - Minimax Formulation Find a good substitute or estimator û of u: construct a linear or nonlinear mapping $\Phi : f \mapsto \hat{u}$ which (a) preserves a priori knowledge of u, i.e. $\|\hat{u}\|_{BV} \le C$, and (b) minimizes the expected error between the true u and the estimate û, $E[\|\hat{u} - u\|]$. The expected risk depends on some norm and is $R(\Phi, u, \sigma) = E[\|\hat{u} - u\|]$, or $E[\|\hat{u} - u\|^2]$ when a Hilbert-space norm is used. The worst risk is $\rho(\Phi, \sigma) = \sup\{\, R(\Phi, u, \sigma) : u \in B \,\}$, where it is assumed that u belongs to some ball $B = \{ \|u\| \le C \}$. We want the worst risk to be as small as possible through a wise choice of Φ. This leads to the minimax formulation: $\rho(\sigma) = \inf\{\, \rho(\Phi, \sigma) : \Phi \in M \,\} = \inf_\Phi \sup\{\, E[\|\hat{u} - u\|^2] : u \in B \,\}$.

28 Robust Statistics and Compressive Sensing 1. David Donoho, Compressed Sensing, IEEE Transactions on Information Theory, Vol. 52, No. 4, April 2006. 2. Candès E., Romberg J., and Tao T., Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inf. Theory, 52, 2006.

29 Transform Compression; Assumption of Sparsity Let $x \in \mathbb{R}^m$ be a signal or image with m samples or pixels, and let $\{\psi_i : i = 1, \ldots, m\}$ be an orthonormal basis of $\mathbb{R}^m$ (e.g. a wavelet basis, Fourier basis, or a local Fourier basis). Then x has transform coefficients $\theta_i = \langle x, \psi_i \rangle$ which are assumed sparse: for some 0 < p < 2 and R > 0, $\|\theta\|_p \equiv \left( \sum_i |\theta_i|^p \right)^{1/p} \le R$.

30 Example: Bounded Variation Model for Images Image brightness is represented by a function f(x, y) on the unit square 0 ≤ x, y ≤ 1 which satisfies $\int\!\!\int |\nabla f(x, y)|\, dx\, dy \le R$. Wavelet viewpoint: the data are seen as a superposition of contributions from various scales. Let $x^{(j)}$ denote the component of the data at scale j, expanded in the orthonormal basis of wavelets at scale j, which contains $3 \cdot 4^j$ elements. Then $\|\theta^{(j)}\|_1 \le 4R$.

31 Optimal Recovery/Information-Based Complexity X is the class of objects of interest, a subset of $\mathbb{R}^m$. The information operator $I_n : X \to \mathbb{R}^n$ samples n pieces of information about the unknown signal or image $x \in X$; here $I_n(x) = (\langle \xi_1, x \rangle, \ldots, \langle \xi_n, x \rangle)$, where the $\xi_j$ are sampling kernels (nonadaptive, fixed independently of x). The approximate reconstruction operator is $A_n : \mathbb{R}^n \to \mathbb{R}^m$. Analyse the $\ell_2$ error of reconstruction and find the optimal reconstruction algorithm, so use the minimax $\ell_2$ error $E_n(X) = \inf_{A_n, I_n} \sup_{x \in X} \|x - A_n(I_n(x))\|_2$. Evaluate $E_n(X)$ and find practical schemes which come close to attaining it.

32 Compressive Sensing uses p = 1 Suppose the unknown signal $x \in \mathbb{R}^M$ is sparse in some basis. Reconstruct the signal using only a few (N << M) linear measurements, i.e. take the inner product of x with a chosen set of vectors. Use the N measurements y to reconstruct the M-length sparse signal x having K < N << M nonzero entries. (The slide pictures the N x M measurement matrix A acting on the sparse vector x to produce y = Ax.)

33 Try the Euclidean Norm: $\ell_2$ This is an underdetermined linear system, and the problem is ill-posed: there are infinitely many solutions x̂. The classical solution technique uses least squares; note there are fewer rows than columns in the measurement matrix A. Solve $\hat{x} = \arg\min \|x\|_2$ subject to y = Ax. We have seen that the solution is the least-squares minimum-norm solution $\hat{x} = A^T (A A^T)^{-1} y$, which can be computed very quickly by factoring the small N x N matrix $A A^T$. Unfortunately, a small $\ell_2$ norm for x̂ does NOT imply sparsity.
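A quick numerical illustration (with random, purely illustrative data) of the minimum-norm $\ell_2$ solution and its lack of sparsity:

```python
import numpy as np

# Minimum-norm l2 solution of an underdetermined system y = Ax:
# x_hat = A^T (A A^T)^{-1} y.  It matches every measurement exactly but
# spreads energy over all M entries, so it is not sparse.
rng = np.random.default_rng(0)
M, N, K = 200, 40, 5
A = rng.standard_normal((N, M)) / np.sqrt(N)
x_true = np.zeros(M)
x_true[rng.choice(M, K, replace=False)] = rng.standard_normal(K)
y = A @ x_true

x_l2 = A.T @ np.linalg.solve(A @ A.T, y)
print(np.allclose(A @ x_l2, y))                           # measurements reproduced
print(np.sum(np.abs(x_l2) > 1e-8), "nonzeros out of", M)  # essentially all M
```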

34 Modern Approach Minimizes the $\ell_0$ or $\ell_1$ Norm Solve $x^* = \arg\min \|x\|_p$ subject to Ax = y. Exploit the known sparsity of x in some basis: of all solutions, seek the sparsest one $x^*$. (p = 0) Define the $\ell_0$ "norm", $\|x\|_0$, to be the number of nonzero entries. This gives perfect reconstruction with high probability but combinatorial complexity. (p = 1) Seek the solution with the smallest $\ell_1$ norm. If N ≥ CK, C ≈ 3, Donoho, Candès, et al. have shown perfect reconstruction with high probability, and linear-programming complexity.
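The $\ell_1$ problem is indeed a linear program; a hedged SciPy sketch (splitting x into positive and negative parts, with illustrative random data and dimensions):

```python
import numpy as np
from scipy.optimize import linprog

# min ||x||_1 subject to Ax = y, rewritten as a linear program over
# x = xp - xn with xp, xn >= 0:  minimize sum(xp) + sum(xn)
# subject to [A, -A][xp; xn] = y.
rng = np.random.default_rng(1)
M, N, K = 128, 40, 5
A = rng.standard_normal((N, M)) / np.sqrt(N)
x_true = np.zeros(M)
x_true[rng.choice(M, K, replace=False)] = rng.standard_normal(K)
y = A @ x_true

c = np.ones(2 * M)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_l1 = res.x[:M] - res.x[M:]
print(np.max(np.abs(x_l1 - x_true)))   # near zero: exact recovery (w.h.p.)
```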

35 1-D Sparse Signal Recovery: $L^1$ wins over $L^2$ Figure: (Left) Original sparse signal. (Center) Very noisy least-squares minimum $L^2$-norm reconstruction. (Right) Accurate minimum $L^1$-norm reconstruction.

36 n-Widths Bound the Minimax Error The Gelfand n-width of a set X with respect to the $\ell_2^m$ norm is $d^n(X, \ell_2) = \inf_{V_n} \sup\{\, \|x\|_2 : x \in V_n^{\perp} \cap X \,\}$, where the infimum is over n-dimensional linear subspaces $V_n$ of $\mathbb{R}^m$ and $V_n^{\perp}$ is the orthocomplement of $V_n$ with respect to the standard Euclidean inner product. The Kolmogorov n-width of X with respect to the $\ell_2^m$ norm is $d_n(X, \ell_2) = \inf_{V_n} \sup_{x \in X} \inf_{y \in V_n} \|x - y\|_2$, where the infimum is over n-dimensional linear subspaces of $\mathbb{R}^m$; it measures the quality of approximation of X possible by n-dimensional subspaces $V_n$.

37 Donoho's Conditions CS1-CS3 CS1: the minimal singular value of $\Phi_J$ exceeds $\eta_1 > 0$ uniformly in $|J| < \rho n / \log(m)$. This requires a quantitative degree of linear independence among all small groups of columns. CS2: on each subspace $V_J$ the inequality $\|v\|_1 \le \eta_2 \sqrt{n}\, \|v\|_2$ holds uniformly in $|J| < \rho n / \log(m)$. Linear combinations of small groups of columns give vectors that look like random noise when the $\ell_1$ and $\ell_2$ norms are compared. CS3: on each subspace $V_J$, $Q_{J^c}(v) \ge \eta_3 / \sqrt{\log(m/n)}\, \|v\|_1$ for $v \in V_J$, uniformly in $|J| < \rho n / \log(m)$. For every vector in some $V_J$, the quotient norm $Q_{J^c}$ is never much smaller than the $\ell_1$ norm on $\mathbb{R}^n$.

38 Applications to Cone-Beam Tomography? "Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization" by Emil Sidky and Xiaochuan Pan, Phys. Med. Biol. 53 (2008). The authors reduce image reconstruction for circular cone-beam CT to inversion of a finite linear system Mf = g. The effects of limited angular range, sparse angular sampling, or large cone angles are incomplete data and a rectangular system matrix M; g is the data set of measurements. The goal is to recover f; we know the problem is ill-posed, with many solutions. The classical approach uses least squares; note there are fewer rows than columns in the system matrix M. Variationally, $f^* = \arg\min \|f\|_2$ subject to Mf = g; the solution is the least-squares minimum-norm solution $f^* = M^T (M M^T)^{-1} g$, which can be computed quickly by factoring $M M^T$, but a small $\ell_2$ norm for f does not imply sparsity.

39 Adaptive Steepest Descent-Projection onto Convex Sets Ignoring positivity constraints, this is equivalent to solving the unconstrained, convex optimization problem $f^* = \arg\min_f \|Mf - g\|_{\mathrm{data}}^2 + \tau \|f\|_{TV}$. The above can be solved using standard methods such as conjugate gradient or steepest descent; the authors state it is more convenient to use the constrained formulation. Formulation of the optimization problem and constraints on the solution: find the discrete image f that minimizes the TV norm subject to the inequality constraints of (A) data fidelity and (B) non-negativity, $f^* = \arg\min \|f\|_{TV}$ subject to $\|Mf - g\|_2 \le \epsilon$ and $f \ge 0$.

40 Steepest Descent To minimize a nonlinear functional, follow Cauchy's lead and try the method of steepest descent (1847). 1. Pick a starting point $x_0$ and a small stepsize α > 0. Move a distance α in the direction $g_0$ in which f decreases most rapidly at $x_0$; calculus says this is $g_0 = -\nabla f(x_0)$. The new approximation is $x_1 = x_0 + \alpha g_0$. 2. Recursively, define the sequence $x_{n+1} = x_n - \alpha_n \nabla f(x_n)$. Under what conditions does this sequence converge?
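A tiny runnable sketch of the iteration (the quadratic objective and stepsize are chosen only so the exact answer is available for comparison):

```python
import numpy as np

# Steepest descent on a convex quadratic f(x) = 1/2 x^T Q x - b^T x,
# whose gradient is Q x - b and whose minimizer is Q^{-1} b.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

def grad(x):
    return Q @ x - b

x = np.zeros(2)
alpha = 0.2                    # fixed small stepsize; must be < 2 / lambda_max(Q)
for _ in range(200):
    x = x - alpha * grad(x)    # x_{n+1} = x_n - alpha * grad f(x_n)

print(x, np.linalg.solve(Q, b))   # converges to the exact minimizer
```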

41 Example using Steepest Descent

42 Continuous Gradient Descent Given a differential operator $F : \{H, \langle\cdot,\cdot\rangle\} \to \{K, (\cdot,\cdot)\}$, use gradient descent to find its zeros. Note that different inner products in the domain space give different descent directions. If $J(u) = \tfrac{1}{2}\|F(u)\|^2$, the gradient is $\nabla J(u) = F'(u)^* F(u)$. (Prime denotes the Fréchet derivative, star the adjoint.) Find $z : [0, \infty) \to H$ satisfying $\frac{dz}{dt} = -\nabla J(z(t))$, $z(0) = z_0$. Sufficient conditions for convergence include: (1) convexity, $\langle J''(u)v, v \rangle \ge m \|v\|^2$; (2) a gradient inequality, $\|\nabla J(u)\| = \|F'(u)^* F(u)\| \ge C \|F(u)\|$.

43 2-D Sparse Signal Recovery: Again $L^1$ wins over $L^2$ (Top-Left) Original Shepp-Logan edges. (Top-Right) Minimum $L^2$-norm reconstruction. (Bottom) Minimum $L^1$-norm reconstruction after 2000, 4000, and 5500 iterations.

44 References
Neuberger JW. Sobolev Gradients and Differential Equations. Springer-Verlag: Berlin.
Carey GF, Richardson WB. A note on least-squares methods. Communications in Numerical Methods in Engineering.
Richardson WB. Sobolev gradient preconditioning for image-processing PDEs. Communications in Numerical Methods in Engineering.
Candès E, Romberg J, Tao T. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52 (2006).
