Multilevel Preconditioning and Adaptive Sparse Solution of Inverse Problems
1 Multilevel Preconditioning and Adaptive Sparse Solution of Inverse Problems. Fachbereich Mathematik und Informatik, Philipps-Universität Marburg. Workshop Sparsity and Computation, Bonn (joint work with M. Fornasier and T. Raasch)
2 Outline
4 The Problem: treatment of inverse problems. $y = Ku + e$, $K : X \to Y$, $X, Y$ Hilbert spaces. Minimization of functionals: $J(u) := \|Ku - y\|_Y^2 + 2\,\|(\langle u, \tilde\psi_\lambda\rangle)_{\lambda\in I}\|_{\ell_{1,\alpha}(I)}$, where $\|u\|_{\ell_{p,\alpha}} := \big(\sum_{\lambda\in I} |u_\lambda|^p \alpha_\lambda\big)^{1/p}$, $\Psi := \{\psi_\lambda\}_{\lambda\in I}$ a (wavelet) basis, $\tilde\Psi := \{\tilde\psi_\lambda\}_{\lambda\in I}$ its dual basis
6 The Problem: equivalent formulation. $Fu := \sum_{\lambda\in I} u_\lambda \psi_\lambda$, $u \in \ell_2(I)$; $J(u) := J_\alpha(u) = \|(K \circ F)u - y\|_Y^2 + 2\,\|u\|_{\ell_{1,\alpha}(I)}$, $A := K \circ F$. Several iterative methods available: (a) the GPSR algorithm, (b) the l1_ls algorithm, (c) FISTA (fast iterative soft-thresholding algorithm), (d) LARS, ...
8 The Approach: Iterative Soft-Thresholding Algorithm (ISTA) [Daubechies/DeFrise/DeMol] and others: $u^{(n+1)} = \mathbb{S}_\alpha\big[u^{(n)} + A^* y - A^* A u^{(n)}\big]$, where $S_\tau(x) = \begin{cases} x - \tau, & x > \tau \\ 0, & |x| \le \tau \\ x + \tau, & x < -\tau \end{cases}$ and $\mathbb{S}_\alpha(a) = \arg\min_{u \in \ell_2(I)} \big(\|u - a\|^2 + 2\|u\|_{1,\alpha}\big)$
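The iteration above can be sketched in a few lines of NumPy. This is a minimal illustration of plain ISTA, not the adaptive implementation discussed later, and the unit step size assumes $\|A\| \le 1$:

```python
import numpy as np

def soft_threshold(x, tau):
    # componentwise soft-thresholding S_tau from the slide
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, alpha, n_iter=500):
    # Iterative soft-thresholding: u <- S_alpha(u + A^T (y - A u)).
    # Minimizes ||A u - y||^2 + 2*alpha*||u||_1, assuming ||A|| <= 1.
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = soft_threshold(u + A.T @ (y - A @ u), alpha)
    return u
```

For $A = I$ the minimizer is $\mathbb{S}_\alpha(y)$ itself, which the iteration reaches in one step.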
9 Problems: slow convergence. [Figure: the iteration path in the $\|x\|_1$ vs. $\|Kx - y\|^2$ plane.] Acceleration by: decreasing thresholding parameters, adaptivity, multilevel preconditioning
10 Outline
11 Decreasing iterative soft-thresholding algorithm (D-ISTA): $u^{(n+1)} = \mathbb{S}_{\alpha^{(n)}}\big(u^{(n)} + A^*(y - Au^{(n)})\big)$, with thresholds $\alpha^{(n)}_\lambda \downarrow \alpha_\lambda$. Restricted Isometry Property (RIP): $(1 - \gamma_k)\,\|u_\Lambda\|^2_{\ell_2} \le \|A_\Lambda u_\Lambda\|^2_Y \le (1 + \gamma_k)\,\|u_\Lambda\|^2_{\ell_2}$ for all $\Lambda \subset \{1,\dots,N\}$ with $\#\Lambda \le k$. Theorem. The following conditions are equivalent: (i) $A$ has the RIP property; (ii) $(A^*A)_{\Lambda\times\Lambda}$ is positive definite with eigenvalues in $[1-\gamma_k, 1+\gamma_k]$, for all $\Lambda \subset \{1,\dots,N\}$ with $\#\Lambda \le k$; (iii) $\|(I - A^*A)_{\Lambda\times\Lambda}\| \le \gamma_k$, for all $\Lambda \subset \{1,\dots,N\}$ with $\#\Lambda \le k$.
12 It works... Theorem. Let $\bar u := (I - A^*A)u + A^*y \in \ell^w_\tau(I)$, $0 < \tau < 2$. Moreover, let $L := 4\|u\|^2_{\ell_2(I)} + 4C\,\bar\alpha^{-\tau}\|\bar u\|^\tau_{\ell^w_\tau(I)}$, and assume the RIP of order $2L + \#\operatorname{supp} u$ with $\gamma_0 < 1$ is satisfied. Whenever, for $\gamma_0 \le \gamma < 1$, $\alpha_\lambda \le \alpha^{(n)}_\lambda \le \alpha_\lambda + (\gamma - \gamma_0)L^{-1/2}\epsilon_n$ for all $\lambda \in \Lambda$, then $\#\operatorname{supp} u^{(n)} \le L$ and $\|u - u^{(n)}\|_{\ell_2(I)} \le \gamma^n \|u\|_{\ell_2(I)} =: \epsilon_n$.
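A hypothetical NumPy sketch of D-ISTA with a decreasing threshold schedule of the form appearing in the theorem. The stand-in for the unknown $\|u\|_{\ell_2}$ (here $\|A^T y\|$) and the constants $\gamma_0$, $\gamma$, $L$ are illustrative assumptions, not values from the talk:

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def d_ista(A, y, alpha, gamma=0.95, gamma0=0.1, L=10.0, n_iter=300):
    # D-ISTA: ISTA with thresholds alpha^(n) decreasing towards alpha,
    #   alpha^(n) = alpha + (gamma - gamma0) * L^(-1/2) * eps_n,
    #   eps_n    = gamma^n * ||u||  (crudely replaced by ||A^T y||).
    u = np.zeros(A.shape[1])
    eps = np.linalg.norm(A.T @ y)  # stand-in for the unknown norm of u
    for n in range(n_iter):
        alpha_n = alpha + (gamma - gamma0) * eps * gamma**n / np.sqrt(L)
        u = soft_threshold(u + A.T @ (y - A @ u), alpha_n)
    return u
```

Since $\alpha^{(n)} \to \alpha$, the iterates approach the same minimizer as ISTA, but with the larger early thresholds keeping the intermediate supports small.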
13 A Comparison. [Figure, four panels comparing ISTA and D-ISTA against the sparse minimizer $u^*$, each versus the number of iterations: dynamics of $\|u_n\|_1$; $\log_{10}(\|Au_n - y\|_2^2)$; error with respect to the minimizer of $J$, $\log_{10}(\|u_n - u^*\|_2)$; and support sizes, $\log_{10}(\|u_n\|_0)$.] $A$: matrix with i.i.d. Gaussian entries, $\alpha = 10^{-3}$, $\gamma_0 = 0.1$ and $\gamma = 0.95$.
14 Outline
15 Typical application: $A$ the solution operator of an operator equation. Problem: D-ISTA is not directly implementable! We need suitable approximations! Use adaptive strategies.
19 Building Blocks: RHS$[g, \varepsilon] \to g_\varepsilon$: determines a finitely supported $g_\varepsilon$ s.t. $\|g - g_\varepsilon\|_{\ell_2(I)} \le \varepsilon$; APPLY$[N, v, \varepsilon] \to w_\varepsilon$: determines a finitely supported $w_\varepsilon$ s.t. $\|Nv - w_\varepsilon\|_{\ell_2(I)} \le \varepsilon$. Realization: [Cohen/Dahmen/DeVore]. Implementable algorithm A-ISTA: $\tilde u^{(n+1)} = \mathbb{S}_{\alpha^{(n)}}\big(\tilde u^{(n)} + \mathrm{RHS}[A^*y, \delta_n] - \mathrm{APPLY}[A^*A, \tilde u^{(n)}, \gamma_n]\big)$
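A crude stand-in for the APPLY routine illustrates the interface: drop the smallest entries of $v$ as long as the guaranteed error $\|N\|\cdot\|\text{tail}\|$ stays below $\varepsilon$, then multiply. The actual realization of [Cohen/Dahmen/DeVore] exploits the compressibility of $N$ and is far more refined; this sketch only shows the error guarantee:

```python
import numpy as np

def apply_approx(N, v, eps):
    # Stand-in for APPLY[N, v, eps]: zero out the smallest entries of v
    # so that the discarded tail satisfies ||N|| * ||tail|| <= eps,
    # then apply N to the remaining (finitely supported) vector.
    norm_N = np.linalg.norm(N, 2)
    order = np.argsort(np.abs(v))                # entries, smallest first
    tail = np.sqrt(np.cumsum(v[order] ** 2))     # norms of growing tails
    keep = np.ones(v.shape, dtype=bool)
    keep[order[norm_N * tail <= eps]] = False    # droppable prefix
    return N @ (v * keep)
```

By construction the output $w_\varepsilon$ satisfies $\|Nv - w_\varepsilon\| \le \varepsilon$, which is the only property A-ISTA needs.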
20 It works... Theorem. Under technical assumptions (RIP etc.): if $\delta_n = \gamma_n = \frac{\epsilon_{n+1}}{2\rho}$, $\epsilon_n = \gamma^n \|u\|_{\ell_2(I)}$, $0 < \gamma_0 \le \gamma < \tilde\gamma < 1$, and the $\alpha^{(n)}$ are chosen according to $\alpha_\lambda \le \alpha^{(n)}_\lambda \le \alpha_\lambda + (\gamma - \gamma_0)L^{-1/2}\epsilon_n$, then the iterates of A-ISTA fulfill $\#\operatorname{supp} \tilde u^{(n)} \le L$ and $\|u - \tilde u^{(n)}\|_{\ell_2(I)} \le \epsilon_n$.
21 Outline
22 How can the RIP be guaranteed? Assumptions: $X \subseteq L_2(\Omega)$, $\Psi = \{\psi_\lambda\}_{\lambda\in I}$ a wavelet basis, $K : X \to L_2(\Omega)$, with the compressibility estimate $|\langle K\psi_\lambda, K\psi_\mu\rangle| \le c_1\, 2^{-s\,||\lambda|-|\mu||}\, 2^{-\sigma(|\lambda|+|\mu|)}\, \big(1 + 2^{\min(|\lambda|,|\mu|)}\operatorname{dist}(\Omega_\mu, \Omega_\lambda)\big)^{-r}$. For $|\mu| = |\lambda|$: $|\langle K^*K\psi_\lambda, \psi_\lambda\rangle| \ge c_2\, 2^{-2\sigma|\lambda|}$ and $|\langle K^*K\psi_\lambda, \psi_\mu\rangle| \le c_3\, 2^{-2\sigma|\lambda|}\,(1 + |k - k'|)^{-r}$.
23 Theorem. Let $A^*A = F^*K^*KF = \big(\langle K^*K\psi_\lambda, \psi_\mu\rangle\big)_{\lambda,\mu\in I}$. Let $D^b_j = \big(\langle K^*K\psi_\lambda, \psi_\mu\rangle\big)_{|\lambda|=|\mu|=j}$ be the diagonal block of $A^*A$ for refinement level $j$, and $D^b = \operatorname{diag}(D^b_0, D^b_1, \dots)$. Then $\big\|\big(I - (D^b)^{-1/2} A^*A\, (D^b)^{-1/2}\big)_{\Lambda\times\Lambda}\big\| < C_\Lambda\, 2^{-s}$ and $\kappa\big(\big((D^b)^{-1/2} A^*A\, (D^b)^{-1/2}\big)_{\Lambda\times\Lambda}\big) \le \frac{1 + C_\Lambda\, 2^{-s}}{1 - C_\Lambda\, 2^{-s}}$. Increase of $s$ = larger admissible sets $\Lambda$. $(D^b)^{-1/2} A^*A\, (D^b)^{-1/2}$ is not globally well-conditioned! In practice: diagonal preconditioning works well!
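The level-wise block-diagonal scaling can be sketched as follows; the array `levels` (the refinement level of each wavelet index) is a hypothetical input, and the inverse square roots are formed by eigendecomposition of the SPD blocks:

```python
import numpy as np

def block_diag_precondition(G, levels):
    # G: Gram matrix A^*A in wavelet coordinates; levels[i] = refinement
    # level j of index i.  Extract the level-j diagonal blocks D^b_j,
    # form (D^b)^{-1/2}, and return (D^b)^{-1/2} G (D^b)^{-1/2}.
    D_inv_sqrt = np.zeros_like(G)
    for j in np.unique(levels):
        idx = np.where(levels == j)[0]
        Dj = G[np.ix_(idx, idx)]              # diagonal block for level j
        w, V = np.linalg.eigh(Dj)             # Dj is symmetric pos. def.
        D_inv_sqrt[np.ix_(idx, idx)] = V @ np.diag(w ** -0.5) @ V.T
    return D_inv_sqrt @ G @ D_inv_sqrt
```

When every level contains a single index this reduces to plain diagonal scaling, producing unit diagonal entries in the preconditioned matrix.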
26 Examples: integral operators with Schwartz kernels. $Ku(x) = \int_{\Omega'} \Phi(x,\xi)\, u(\xi)\, d\xi$, $x \in \Omega$, with $\Omega', \Omega \subset \mathbb{R}^d$, $\operatorname{dist}(\Omega', \Omega) = \delta > 0$, $u \in X := H^t(\Omega')$, and $\Phi : \Omega \times \Omega' \to \mathbb{R}$ satisfying $|\partial_x^\alpha \partial_\xi^\beta \Phi(x,\xi)| \le c_{\alpha,\beta}\, |x - \xi|^{-(d + 2t + |\alpha| + |\beta|)}$
27 Magnetic Tomography (current sensor). Biot-Savart operator: $B(x, j) = \frac{\mu_0}{4\pi}\int_\Omega \frac{j(\xi) \times (x - \xi)}{|x - \xi|^3}\, d\xi = \int_\Omega \big[\nabla_x \Phi(x,\xi)\big] \times j(\xi)\, d\xi$, with $\Phi(x,\xi) = \frac{\mu_0}{4\pi}\,\frac{1}{|x - \xi|}$, $x \ne \xi$, and $j$ := current density
28 Outline
29 Example: Volterra integral operator $K : L_2(0,1) \to L_2(0,1)$, $Ku(t) = \int_0^t u(s)\, ds$, $K^*Ku(t) = \int_0^1 \big(1 - \max(s,t)\big)\, u(s)\, ds$; test solution $u$: a piecewise-defined function on $[0,1]$. [Figure: wavelet coefficients of $u$ over level $j$ and location $k$.]
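The stated kernel of $K^*K$ can be checked numerically; a sketch with a midpoint discretization of $(0,1)$, where the grid size is an arbitrary illustration choice:

```python
import numpy as np

n = 200
h = 1.0 / n
t = (np.arange(n) + 0.5) * h           # midpoint grid on (0, 1)

# (Ku)(t) = int_0^t u(s) ds  ->  lower-triangular quadrature matrix
K = h * np.tril(np.ones((n, n)))

# K^T K should approximate the integral operator with
# kernel (1 - max(s, t)), as stated on the slide.
G = K.T @ K
kernel = h * (1.0 - np.maximum.outer(t, t))
err = np.max(np.abs(G - kernel))       # O(h^2) discretization error
```

The entrywise discrepancy is of the order $h^2$, confirming that the discrete $K^TK$ matches the kernel $(1 - \max(s,t))$.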
30 [Figure: $\operatorname{cond}_2(A_T^*\, A_T)$ versus the number of random columns, for no preconditioning, diagonal preconditioning, and block-diagonal preconditioning.]
31 Linear Convergence: [Figure: $\ell_2$ errors versus iteration for ISTA, P-ISTA, D-ISTA and PD-ISTA; alpha = 1e-05, alphaprec = 1e-05, gamma = 0.99, eta = 0.1.]
32 Linear Convergence: [Figure: $\ell_2$ errors versus iteration for ISTA, P-ISTA, D-ISTA and PD-ISTA; alpha = 1e-06, alphaprec = 1e-06, gamma = 0.99, eta = 0.1.]
33 Support Dynamics: [Figure: number of active coefficients versus iteration for ISTA, P-ISTA, D-ISTA and PD-ISTA; alpha = 1e-06, alphaprec = 1e-06, gamma = 0.99, eta = 0.1.]
34 Outline
35 Summary: treatment of inverse problems via minimization of associated functionals; thresholding algorithms; adaptive strategies; experiments.
40 Reformulation after preconditioning. Observe that $J(u) = \|Au - y\|_Y^2 + 2\alpha\|u\|_{\ell_1(I)} = \|AD^{-1/2}\underbrace{D^{1/2}u}_{=:z} - y\|_Y^2 + 2\alpha\,\|D^{-1/2}\underbrace{D^{1/2}u}_{=:z}\|_{\ell_1(I)} = \|AD^{-1/2}z - y\|_Y^2 + 2\alpha\,\|D^{-1/2}z\|_{\ell_1(I)} =: J_D(z)$. Hence, $\operatorname{argmin}_{u \in \ell_2(I)} J(u) = D^{-1/2}\big(\operatorname{argmin}_{z \in D^{1/2}\ell_2(I)} J_D(z)\big)$.
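A minimal NumPy sketch of this substitution: run soft-thresholding iterations on $AD^{-1/2}$ with the componentwise thresholds $\alpha/\sqrt{d_\lambda}$ induced by the weighted $\ell_1$ term, then map back $u = D^{-1/2}z$. It assumes $\|AD^{-1/2}\| \le 1$, and the diagonal $D = \operatorname{diag}(d)$ is a hypothetical input:

```python
import numpy as np

def soft_threshold(x, tau):
    # componentwise soft-thresholding; tau may be a vector of weights
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def preconditioned_ista(A, y, alpha, d, n_iter=500):
    # Minimize J_D(z) = ||A D^{-1/2} z - y||^2 + 2 alpha ||D^{-1/2} z||_1
    # with D = diag(d); the weighted l1 term yields componentwise
    # thresholds alpha / sqrt(d).  Assumes ||A D^{-1/2}|| <= 1.
    d_inv_sqrt = 1.0 / np.sqrt(np.asarray(d, dtype=float))
    B = A * d_inv_sqrt                        # column scaling: A D^{-1/2}
    z = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z + B.T @ (y - B @ z), alpha * d_inv_sqrt)
    return d_inv_sqrt * z                     # map back: u = D^{-1/2} z
```

Since $J_D(z) = J(u)$ under the substitution, the mapped-back iterates converge to the same minimizer as unpreconditioned ISTA, only the conditioning of the iteration changes.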
41 Adaptivity helps again... to control the supports of the iterands! Theorem. Define $\bar z := z + \big(D^{-1/2}A^*y - D^{-1/2}A^*AD^{-1/2}z\big) \in \ell_2(I)$ and $\Lambda_\delta(\bar z) = \big\{\lambda : \delta < \big|z_\lambda + (D^{-1/2}A^*y - D^{-1/2}A^*AD^{-1/2}z)_\lambda\big|\big\}$. We set $\Lambda = \bigcup_{n=0}^N \Lambda^{(n)} \cup \operatorname{supp} z \cup \Lambda_\delta(\bar z)$, $\Lambda^{(n)} = \operatorname{supp}(z^{(n)})$, where $N \in \mathbb{N}$ is such that $\|z - z^{(N)}\| \le \epsilon_N \le \varepsilon$. Then, for $\delta(\varepsilon)$ sufficiently small, $\Lambda^{(n)} \subseteq \Lambda$ for all $n \ge 0$.
More informationSPARSE SIGNAL RESTORATION. 1. Introduction
SPARSE SIGNAL RESTORATION IVAN W. SELESNICK 1. Introduction These notes describe an approach for the restoration of degraded signals using sparsity. This approach, which has become quite popular, is useful
More informationFRAMES AND TIME-FREQUENCY ANALYSIS
FRAMES AND TIME-FREQUENCY ANALYSIS LECTURE 5: MODULATION SPACES AND APPLICATIONS Christopher Heil Georgia Tech heil@math.gatech.edu http://www.math.gatech.edu/ heil READING For background on Banach spaces,
More informationMultiple Change Point Detection by Sparse Parameter Estimation
Multiple Change Point Detection by Sparse Parameter Estimation Department of Econometrics Fac. of Economics and Management University of Defence Brno, Czech Republic Dept. of Appl. Math. and Comp. Sci.
More informationNew Coherence and RIP Analysis for Weak. Orthogonal Matching Pursuit
New Coherence and RIP Analysis for Wea 1 Orthogonal Matching Pursuit Mingrui Yang, Member, IEEE, and Fran de Hoog arxiv:1405.3354v1 [cs.it] 14 May 2014 Abstract In this paper we define a new coherence
More informationHigh-dimensional covariance estimation based on Gaussian graphical models
High-dimensional covariance estimation based on Gaussian graphical models Shuheng Zhou Department of Statistics, The University of Michigan, Ann Arbor IMA workshop on High Dimensional Phenomena Sept. 26,
More informationUncertainty quantification for sparse solutions of random PDEs
Uncertainty quantification for sparse solutions of random PDEs L. Mathelin 1 K.A. Gallivan 2 1 LIMSI - CNRS Orsay, France 2 Mathematics Dpt., Florida State University Tallahassee, FL, USA SIAM 10 July
More informationIntroduction to Compressed Sensing
Introduction to Compressed Sensing Alejandro Parada, Gonzalo Arce University of Delaware August 25, 2016 Motivation: Classical Sampling 1 Motivation: Classical Sampling Issues Some applications Radar Spectral
More information