Heat Source Identification Based on $L^1$ Optimization
1 Heat Source Identification Based on $L^1$ Optimization

Yingying Li, Stanley Osher and Richard Tsai

August 27, 2009
2 Outline

- Heat source identification
- Solution by $L^1$ minimization and Bregman iteration
- Strategies to reduce computational cost
- Adaptive solution with successive samples
3 Heat Source Identification

Consider the heat equation with sparse initial condition,
$$
\begin{cases}
u_t = (a(x)\,u_x)_x, & t > 0,\\
u = u_0 = \sum_k c_k\,\delta(x - x_k), & t = 0,\quad c_k > 0.
\end{cases}
$$
Suppose at time $T$ we measure samples $f_{ij} = u(x_i, y_j)$. Can we recover $u_0$ knowing that it is a sum of delta functions?

We can define a linear operator $A_T$ such that $A_T[u_0] = u|_{t=T}$, so essentially the problem is inverting $A_T$ (but $A_T$ is ill-conditioned).
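For concreteness, here is a minimal sketch of the forward map $A_T$ in one space dimension with periodic boundary conditions (not from the slides; the explicit Euler discretization, time-step rule, and function name are assumptions):

```python
import numpy as np

def forward_operator(a, T):
    """Sketch: dense matrix A_T with A_T @ u0 ~ u(., T) for
    u_t = (a(x) u_x)_x on a periodic grid, via explicit Euler
    finite differences. `a` holds the conductivity on the grid."""
    n = a.size
    h = 1.0 / n
    dt = 0.4 * h * h / a.max()           # explicit-scheme stability bound
    a_half = 0.5 * (a + np.roll(a, -1))  # conductivity at cell interfaces
    A = np.eye(n)                        # columns start as unit spikes
    for _ in range(int(np.ceil(T / dt))):
        flux = a_half[:, None] * (np.roll(A, -1, axis=0) - A) / h
        A += dt * (flux - np.roll(flux, 1, axis=0)) / h
    return A
```

Measuring at the sample points then amounts to selecting the corresponding rows of this matrix, which gives the ill-conditioned linear system the talk inverts.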
4 Applications

The initial condition of a diffusive process can have real-life importance. Consider finding the location of buried pollutants: the buried pollutants act as source conditions that diffuse over time. With concentration measurements at enough points, we can deduce where the pollution is hidden.
5 $L^0 \to L^1$

The goal is to use as few delta functions as possible to match the measurements, which is an $L^0$ minimization problem. As in compressed sensing, we approach the inverse problem by solving an $L^1$ minimization instead,
$$\min_u \|u\|_1 \quad \text{subject to} \quad (Au)_i = f_i.$$
We shall use Bregman iteration for the minimization.
6 Bregman iterative method

The constrained problem:
$$\min_{u \in \mathbb{R}^n} \|u\|_1 \quad \text{subject to} \quad Au = f.$$
The unconstrained problem:
$$\min_{u \in \mathbb{R}^n} E(u) = \|u\|_{\ell^1} + \lambda \|Au - f\|_{\ell^2}^2.$$

Algorithm:
1: Initialize: $k = 0$, $u^0 = 0$, $f^0 = f$.
2: while $\|Au^k - f\|_2 / \|f\|_2 >$ tolerance do
3:   Solve $u^{k+1} \leftarrow \arg\min_u \|u\|_1 + \lambda \|Au - f^k\|_2^2$ by coordinate descent
4:   $f^{k+1} \leftarrow f^k + (f - Au^{k+1})$
5:   $k \leftarrow k + 1$
6: end while
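A minimal NumPy sketch of this scheme, with the inner subproblem solved by cyclic coordinate descent via soft-thresholding (not the authors' code; the dense matrix `A`, the value of $\lambda$, the sweep count, and the tolerances are illustrative assumptions):

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def coord_descent(A, f, lam, u, sweeps=50):
    """Approximately solve min_u ||u||_1 + lam*||A u - f||_2^2 by cyclic
    coordinate descent; each coordinate has a closed-form shrinkage update."""
    r = f - A @ u                        # residual, kept up to date
    col_sq = np.sum(A * A, axis=0)       # ||A_j||^2 per column
    for _ in range(sweeps):
        for j in range(A.shape[1]):
            if col_sq[j] == 0.0:
                continue
            rho = A[:, j] @ r + col_sq[j] * u[j]
            new = shrink(rho / col_sq[j], 0.5 / (lam * col_sq[j]))
            r += A[:, j] * (u[j] - new)  # incremental residual update
            u[j] = new
    return u

def bregman(A, f, lam=100.0, tol=1e-6, max_iter=100):
    """Bregman iteration: solve the unconstrained problem, then add the
    residual back to the data, until Au matches f to within tolerance."""
    u, fk = np.zeros(A.shape[1]), f.astype(float).copy()
    for _ in range(max_iter):
        u = coord_descent(A, fk, lam, u)
        if np.linalg.norm(A @ u - f) <= tol * np.linalg.norm(f):
            break
        fk += f - A @ u                  # "add back the residual"
    return u
```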
7 Support Restriction

Define $D^k = \operatorname{supp}(u^k)$ and $S^k = D^k \cup \{\text{neighboring points of } D^k\}$, and restrict the inner problem to that support:
$$u^{k+1} = \arg\min \{\, \|u\|_{\ell^1} + \lambda \|Au - f\|_2^2 \;:\; \operatorname{supp}(u) \subseteq S^k \,\}.$$
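In the coordinate-descent sketch above, this restriction just shrinks the sweep to $S^k$ (a hypothetical helper for a 1D grid, where "neighboring" is taken to mean adjacent indices):

```python
import numpy as np

def support_with_neighbors(u, tol=1e-12):
    """S^k: indices where u^k is nonzero, together with their neighbors."""
    idx = set(np.flatnonzero(np.abs(u) > tol))
    idx |= {j - 1 for j in idx} | {j + 1 for j in idx}
    return sorted(j for j in idx if 0 <= j < u.size)
```

Iterating `for j in support_with_neighbors(u)` in place of `for j in range(A.shape[1])` makes each sweep cost proportional to the current support rather than the whole grid.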
8 Exclusion Region

Suppose the spike amplitudes are bounded from below by $\alpha_{\min} > 0$. Then
$$\exists\, k \ \text{s.t.}\ A[\alpha_{\min}\delta_y](x_k) > f(x_k) \ \Longrightarrow\ u_0(y) = 0.$$

[Figures: exclusion regions for a periodic boundary ($a = 1$) and a zero boundary.]
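This test vectorizes directly (a sketch; it assumes the columns of `A` correspond to candidate locations $y$ and the rows to the sample points $x_k$):

```python
import numpy as np

def exclusion_mask(A, f, alpha_min):
    """True at candidate locations y that can be ruled out: a spike of even
    the minimum amplitude at y would overshoot some measured value."""
    return np.any(alpha_min * A > f[:, None], axis=0)
```

Excluded locations can simply be dropped from the unknowns before running the minimization.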
9 Numerical Results

Exact $u_0$ / $f$ / Recovery

The orange shows the distribution of $a(x)$, which here is a nonnegative smooth function. In the left figure, the blue dots indicate the heat sources and the red stars are the samples. In the middle figure, the blue shows the distribution of $u$ at the measurement time $T$. The last figure shows the recovery result.
10 Numerical Results

Exact $u_0$ / $f$ / Recovery

The orange circle indicates the distribution of $a(x)$: $a(x) = 0.2$ inside the circle and $a(x) = 1$ elsewhere.
11 Successive Greedy Solution

1. Solve the heat source identification problem with $k$ samples;
2. Use the solution $u^k$ to choose the $(k+1)$-th sample;
3. Iterate.

Why solve in a successive greedy manner?

1. We want to recover all heat sources using as few measurements as possible when the total number of heat sources is unknown.
2. It increases the stability of the $L^1$ minimization problem. The drawback of the $L^1$ minimizer is that it can produce ghost solutions when measurements are few: with an underdetermined constraint, the $L^1$ solution tends to fit the measurements more closely than it should.
12 Covering Region

Suppose the spike amplitudes are bounded from below by $\alpha_{\min} > 0$. Then
$$\exists\, k \ \text{s.t.}\ A[\alpha_{\min}\delta_y](x_k) \ge \text{threshold} \ \Longrightarrow\ y \in \{\text{covering region}\}.$$

We define a way to measure how much a point is covered by samples,
$$V(x) = A\Big(\sum_j \delta_{x_j}\Big)(x).$$
The bigger $V(x)$ is, the more information is available at $x$.
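With the matrix convention above, $V$ is one forward solve applied to spikes placed at the sample locations (a sketch; `sample_idx` is an assumed list of grid indices):

```python
import numpy as np

def coverage(A_T, sample_idx):
    """V = A_T applied to a sum of deltas at the sample locations; larger
    values mean more information is available at that grid point."""
    spikes = np.zeros(A_T.shape[1])
    spikes[sample_idx] = 1.0
    return A_T @ spikes
```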
13 Refine locally or Explore further

Refine locally: if $u^k$ varies significantly from $u^{k-1}$, choose the next sampling location by
$$x_{k+1} = \arg\max_{x \notin \bigcup_j B_r(x_j)} \big| G_\sigma * u^k - G_\sigma * u^{k-1} \big|.$$
Explore further: otherwise,
$$x_{k+1} = \arg\min_{x \notin \bigcup_j B_r(x_j)} V(x).$$
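A sketch of this selection rule on a 1D grid (the radius `r`, the smoothing width `sigma`, and the change threshold are illustrative assumptions, with $G_\sigma$ taken to be a Gaussian):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def next_sample(u_k, u_prev, V, sample_idx, r=3, sigma=2.0, change_tol=1e-3):
    """Pick the next sampling location: refine near large changes in the
    smoothed reconstruction, otherwise explore the least-covered point.
    Grid points within r cells of an existing sample are excluded."""
    allowed = np.ones(u_k.size, dtype=bool)
    for j in sample_idx:
        allowed[max(0, j - r):j + r + 1] = False
    cand = np.flatnonzero(allowed)
    change = np.abs(gaussian_filter1d(u_k - u_prev, sigma))[cand]
    if change.max() > change_tol:        # refine locally
        return cand[np.argmax(change)]
    return cand[np.argmin(V[cand])]      # explore further
```

The outer greedy loop then alternates: solve the $L^1$ problem with the current samples, call `next_sample`, take the new measurement, and repeat.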
14 Exact $u_0$ / $f$

[Figure: the exact initial condition $u_0$ and the measured data $f$.]
15-32 Recovery

[A sequence of eighteen figures showing the reconstruction after each successive sample; the final frame is labeled "Done!"]
33 Comparison

With the same number of random samples, the least-squares and $L^1$ solutions are not as accurate as the successive greedy solution.

Exact $u_0$ / Least squares / $L^1$
34 Future work

- In the successive greedy method, take the total length of the sampling path into consideration as well: minimize
$$\sum_{k=1}^{N} \|x_{k+1} - x_k\|$$
under the minimum number of samples $N$. In practice, this is like a single moving sensor trying to find all the heat sources with the minimum number of measurements and the minimum travel distance.
- Unknown geometric environment with obstacles: the sensor can discover where its view is not blocked, and the goal is to figure out the locations of the obstacles.
35 Questions?
36 Thanks for your attention!