Image processing and Computer Vision


1 Image processing and Computer Vision: Continuous Optimization and applications to image processing. Martin de La Gorce, martin.de-la-gorce@enpc.fr, February 2015

2 Optimization
We have a function $f$ from $\mathbb{R}^n$ to $\mathbb{R}$ and we look for a global minimum, i.e. $x^\ast \in \mathbb{R}^n$ such that
$\forall x \in \mathbb{R}^n : f(x) \geq f(x^\ast)$
When $n = 2$, we can visualize the function as a surface. We look for the lowest point of that surface.

3 First order approximations in 1D
A function $f(x)$ that is differentiable around a point $a$ can be approximated by a linear function around that point. More formally,
$f(x) \approx f(a) + f'(a)(x - a)$ (1)
with
$f(x) = f(a) + f'(a)(x - a) + h_1(x)(x - a)$ (2)
$\lim_{x \to a} h_1(x) = 0$ (3)
Figure: $e^x \approx 1 + x$ around $x = 0$.

4 Second order approximations in 1D
A function $f(x)$ that is twice differentiable around a point $a$ can be approximated by a quadratic function around that point:
$f(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2}(x - a)^2 + h_2(x)(x - a)^2$ (4)
with
$\lim_{x \to a} h_2(x) = 0$ (5)
Figure: $e^x \approx 1 + x + x^2/2$ around $x = 0$.

5 Taylor's theorem
A function $f(x)$ that is $n$ times differentiable at a point $a$ can be approximated by a polynomial around that point:
$f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x - a)^k + h_n(x)(x - a)^n$ (6)
with
$\lim_{x \to a} h_n(x) = 0$ (7)
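
As a small illustration (not from the slides), the sketch below compares $e^x$ with its Taylor polynomials around $a = 0$ in Python/numpy; the function name and the evaluation grid are arbitrary choices.

```python
import numpy as np
from math import factorial

def taylor_exp(x, n, a=0.0):
    """Degree-n Taylor polynomial of exp around a, evaluated at x."""
    # every derivative of exp at a equals exp(a)
    return sum(np.exp(a) * (x - a) ** k / factorial(k) for k in range(n + 1))

x = np.linspace(-1.0, 1.0, 201)
for n in (1, 2, 4):
    err = np.max(np.abs(np.exp(x) - taylor_exp(x, n)))
    print(f"degree {n}: max error on [-1, 1] = {err:.2e}")
```

The error shrinks as the degree increases, matching the $h_n(x)(x-a)^n$ remainder term.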

6 Gradient
If $f$ is differentiable at $x$, then the gradient of $f$ at $x$ is defined as the vector
$\nabla f(x) = \left[ \frac{\partial f}{\partial x_1}(x), \ldots, \frac{\partial f}{\partial x_n}(x) \right]$
with
$\frac{\partial f}{\partial x_i}(x) = \lim_{h \to 0} \frac{f(x + h e_i) - f(x)}{h}$
and $e_1, \ldots, e_n$ the canonical basis of $\mathbb{R}^n$ ($e_k$ has all entries equal to zero except the $k$-th, which is equal to 1).
$\nabla f(x)$ points towards the direction of steepest ascent of $f$ at location $x$.

7 First order approximation in ND
If $f$ is differentiable at $a$, $f$ can be approximated by an affine function around $a$:
$f(x) \approx f(a) + \langle \nabla f(a), x - a \rangle$
Figure: first order approximation of $(\cos(x) + \cos(y) - x)/2$ around $(0, 0)$.

8 Hessian matrix
If a function $f$ mapping $\mathbb{R}^n$ into $\mathbb{R}$ is twice differentiable at $x$, then the Hessian matrix of $f$ at $x$ is defined as the matrix
$H_f(x) = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1 \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n \partial x_n} \end{bmatrix}$ (8)
with
$\frac{\partial^2 f}{\partial x_i \partial x_j} = \frac{\partial}{\partial x_i}\left( \frac{\partial f}{\partial x_j} \right)$ (9)

9 Second order approximation in ND
If $f$ is twice differentiable at $a$, $f$ can be approximated by a quadratic function around $a$ using the gradient and the Hessian matrix:
$f(x) \approx f(a) + \langle \nabla f(a), x - a \rangle + \frac{1}{2}(x - a)^T H_f(a)(x - a)$
Figure: second order approximation of $(\cos(x) + \cos(y) - x)/2$ around $(0, 0)$.

10 Minimum
We say that $x^\ast$ is a local minimum if there exists a radius $r$ such that for all $x$ with $\|x - x^\ast\| \leq r$ we have $f(x) \geq f(x^\ast)$.
When we do not have any constraint and the function is differentiable everywhere, then every $x$ that is a local minimum of the function satisfies $\nabla f(x) = 0_n$, with $0_n$ the null vector of size $n$.
Warning: $\nabla f(x) = 0$ does not imply that $x$ is a local minimum; it can also be a saddle point or a local maximum.
Figure: a minimum point and a saddle point.

11 Gradient descent
Suppose $f(x)$ is continuously differentiable everywhere. For a small displacement $d$ around a point $x_t$ we have
$f(x_t + d) \approx f(x_t) + \langle \nabla f(x_t), d \rangle$
The function $f$ decreases the fastest in the direction $-\nabla f(x_t)$; indeed
$\operatorname{argmin}_{d, \|d\| \leq 1} \langle \nabla f(x_t), d \rangle = -\nabla f(x_t)/\|\nabla f(x_t)\|$
A minimization strategy called gradient descent consists in following this direction iteratively:
$x_{t+1} = x_t - \tau \nabla f(x_t)$
with $\tau$ a fixed parameter small enough to guarantee the decrease of $f$ at each step.
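
A minimal sketch of the fixed-step update $x_{t+1} = x_t - \tau \nabla f(x_t)$; the function names, the test function and the parameter values are illustrative, not from the slides.

```python
import numpy as np

def gradient_descent(grad_f, x0, tau=0.1, n_iter=100):
    """Fixed-step gradient descent: x_{t+1} = x_t - tau * grad_f(x_t)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - tau * grad_f(x)
    return x

# example: minimize f(x) = ||x||^2, whose gradient is 2x
x_min = gradient_descent(lambda x: 2.0 * x, x0=[3.0, -2.0], tau=0.1, n_iter=200)
print(x_min)  # close to [0, 0]
```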

12 Gradient descent
Rather than keeping $\tau$ fixed, it is possible to do a one-dimensional search in the direction opposite to the gradient at each iteration:
$\tau_n = \operatorname{argmin}_{\lambda > 0} f(x_t - \lambda \nabla f(x_t))$
Rather than spending time looking for the optimal step length $\tau_n$ at each iteration, we can look for a $\tau_n$ that gives a sufficient decrease of $f$.
Gradient descent can be very slow to converge if the function has a deep, narrow valley shape.

13 Majorization-Minimization
Instead of minimizing $f(x)$ directly, the Majorization-Minimization (MM) approach consists in solving a sequence of easier minimization problems
$x_{k+1} = \operatorname{argmin}_x g_k(x)$
The MM method requires that:
- each function $g_k(x)$ majorizes $f$, i.e. $\forall x : g_k(x) \geq f(x)$
- $g_k$ and $f$ touch each other at $x_k$, i.e. $g_k(x_k) = f(x_k)$

14 Linear least squares
Linear least squares:
$f(x) = \|Ax - b\|^2 = \sum_{i=1}^{N} (A[i,:]x - b[i])^2$
$\|Ax - b\|^2$ is quadratic and we have
$\|A(x + h) - b\|^2 = \|Ax - b\|^2 + \nabla f(x) h + \frac{1}{2} h^T H(x) h$
with $H(x)$ and $\nabla f(x)$ respectively the Hessian and the gradient of $f$ at $x$.
$\|A(x + h) - b\|^2 = \|Ah + (Ax - b)\|^2 = (Ah + (Ax - b))^T (Ah + (Ax - b)) = \ldots = \|Ax - b\|^2 + 2(Ax - b)^T A h + h^T A^T A h$ (10)
By identification we get $\nabla f(x) = 2(Ax - b)^T A$.

15 Linear least squares
The gradient writes $\nabla f(x) = 2(Ax - b)^T A$. The gradient is zero at the minimum of $f$:
$\nabla f(x) = 0_N \iff A^T A x = A^T b$
The matrix $A^T A$ is a symmetric square matrix. If it is invertible then the equation has a unique solution and the problem has a single minimum at location
$x = (A^T A)^{-1} A^T b$
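
A short numpy sketch of the normal equations $A^T A x = A^T b$ on synthetic data (the data here is made up for illustration); solving the linear system is preferable to forming $(A^T A)^{-1}$ explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(100)

# normal equations: A^T A x = A^T b
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)  # close to x_true
```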

16 Regularized least squares
If $A^T A$ is not invertible, the linear system has infinitely many solutions and $f(x)$ has infinitely many minima. A solution consists in adding a small regularization term, referred to as Tikhonov regularization:
$f(x) = \|Ax - b\|^2 + \lambda \|x\|^2$
with $\lambda > 0$. The gradient writes
$\nabla f(x) = 2(A^T(Ax - b) + \lambda x)$
Setting the gradient to zero:
$\nabla f(x) = 0 \iff (A^T A + \lambda I_d) x = A^T b$
with $I_d$ the identity matrix. For $\lambda > 0$ the matrix $(A^T A + \lambda I_d)$ is invertible and the solution writes
$x = (A^T A + \lambda I_d)^{-1} A^T b$
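
A sketch of the Tikhonov-regularized solution $x = (A^T A + \lambda I_d)^{-1} A^T b$; the rank-deficient example matrix is an illustrative choice.

```python
import numpy as np

def ridge_solve(A, b, lam):
    """Minimize ||Ax - b||^2 + lam * ||x||^2 by solving (A^T A + lam * I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# A has a repeated column, so A^T A is singular, yet the system is solvable for lam > 0
A = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
print(ridge_solve(A, b, lam=1e-3))
```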

17 Least squares
We can look for the minimum of a sum of least squares terms
$f(x) = \sum_{i=1}^{N} \|A_i x - b_i\|^2$
The gradient writes as the sum of the gradients:
$\nabla f(x) = 2 \sum_{i=1}^{N} A_i^T (A_i x - b_i) = 2\left( \sum_{i=1}^{N} A_i^T A_i x - \sum_{i=1}^{N} A_i^T b_i \right)$
The solution of $\nabla f(x) = 0$ writes
$x = \left( \sum_{i=1}^{N} A_i^T A_i \right)^{-1} \sum_{i=1}^{N} A_i^T b_i$
Note: we can also rewrite $f(x)$ as $\|\tilde{A} x - \tilde{b}\|^2$ with $\tilde{A}$ the vertical concatenation of the matrices $A_1, \ldots, A_N$ and $\tilde{b}$ the concatenation of the vectors $b_i$.

18 Denoising
Suppose that we have an image $I_b$ corresponding to an image $I$ to which Gaussian noise was added on each pixel. A simple method to estimate $I$ from $I_b$ consists in minimizing
$f(u) = \int_x \int_y (u(x, y) - I_b(x, y))^2 \, dx\, dy + \lambda \int_x \int_y \|\nabla u(x, y)\|^2 \, dx\, dy$
The first term penalizes the difference between the denoised image and the noisy image; it is called the data term.
The second term is called the regularization term and favors smooth reconstructed images.
The parameter $\lambda$ controls the strength of the smoothing: the more noise there is, the bigger $\lambda$ should be.

19 2D denoising
Examples of denoised images for various $\lambda$: $I_b$, $\lambda = 0.2$, $\lambda = 0.8$, $\lambda = 5$.
We observe that the boundaries are blurred.
If we synthesize $I_b$ from a known image $I$, it is possible to compute the SNR for various $\lambda$ (figure: SNR as a function of $\lambda$).
Footnote: Signal to Noise Ratio $= \sum_{ij} I(i,j)^2 / \sum_{ij} (I(i,j) - I_{denoised}(i,j))^2$

20 1D denoising
In the discrete setting and in 1D, we can rewrite the first term as $\|U - I_b\|^2$ and the second term as
$\sum_{x=0}^{n-2} (u(x+1) - u(x))^2 = \|DU\|^2$
with $D$ the bidiagonal Toeplitz matrix of size $(n-1) \times n$ with $D_{i,i} = -1$ and $D_{i,i+1} = 1$.
We minimize $\|U - I_b\|^2 + \lambda \|DU\|^2$ using the solution for a sum of least squares seen before and we get
$\tilde{U} = (\lambda D^T D + I_d)^{-1} I_b$
with $I_d$ the identity matrix.
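
A sketch of the 1D denoiser $\tilde{U} = (\lambda D^T D + I_d)^{-1} I_b$ using scipy sparse matrices; the step signal and the value of $\lambda$ are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def denoise_1d(I_b, lam):
    """Minimize ||U - I_b||^2 + lam * ||D U||^2 with D the (n-1) x n difference matrix."""
    n = len(I_b)
    D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], offsets=[0, 1],
                 shape=(n - 1, n), format="csc")
    M = lam * (D.T @ D) + sp.identity(n, format="csc")
    return spsolve(M, I_b)

# noisy step signal
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
denoised = denoise_1d(signal, lam=5.0)
```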

21 1D denoising
We can do the same thing but now wrapping the signal periodically using the modulo operator %:
$\sum_{x=0}^{n-1} (u((x+1)\%n) - u(x))^2 = \|D_c U\|^2$
with $D_c$ the corresponding circulant matrix (we can interpret the product by this matrix as the 1D convolution by $[1, -1, 0]$).

22 1D denoising
Using the solution for a least squares sum we get
$\tilde{U} = (\lambda D_c^T D_c + I_d)^{-1} I_b$
with $I_d$ the identity matrix.
The matrix $M = \lambda D_c^T D_c + I_d$ is also a circulant matrix, and its inverse $M^{-1}$ is also a circulant matrix (property of circulant matrices), so we can interpret the multiplication by $M^{-1}$ as a 1D filter or convolution.
We can visualize the impulse response of this filter by visualizing a column of that matrix: figure shows $M^{-1}$ and $M^{-1}[:, 25]$ for $\lambda = 10$.
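
A sketch that builds the circulant matrix $M = \lambda D_c^T D_c + I_d$ with plain numpy and inspects one column of $M^{-1}$ as the impulse response of the filter; $n$ and $\lambda$ are illustrative values.

```python
import numpy as np

n, lam = 50, 10.0

# circulant forward difference: (D_c u)[x] = u((x + 1) % n) - u(x)
D_c = -np.eye(n) + np.roll(np.eye(n), 1, axis=1)

M = lam * D_c.T @ D_c + np.eye(n)
M_inv = np.linalg.inv(M)

# one column of M^{-1} is the (periodic) impulse response of the denoising filter;
# M^{-1} is circulant and its columns sum to 1
impulse_response = M_inv[:, 25]
print(impulse_response.argmax(), impulse_response.sum())  # peaked at 25, sums to 1
```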

23 2D denoising
In the 2D case we look for a matrix $U$ with the same size as $I$, i.e. $H \times W$. Denoting $I_v$ and $U_v$ the vectors obtained by flattening the elements of $I_b$ and $U$ (using row-major order), we can rewrite the data term as $\|U_v - I_v\|^2$.

24 2D denoising
We can approximate the regularization term as follows:
$\int_x \int_y \|\nabla u(x, y)\|^2 \, dx\, dy \approx \sum_{j=0}^{W-1} \sum_{i=1}^{H-1} d_x(i, j)^2 + d_y(i, j)^2$
with $d_x$ and $d_y$ two arrays of size $(W-1) \times (H-1)$:
$d_x(i, j) = u(i+1, j) - u(i, j)$
$d_y(i, j) = u(i, j+1) - u(i, j)$
This term can be rewritten with two sparse matrices $D_x$ and $D_y$ (defined on the following slides) in the form $\|D_x U_v\|^2 + \|D_y U_v\|^2$.

25 2D denoising
For $H = 3$ and $W = 4$, $d_x$ and $d_y$ are of size 2 by 3, and their 6 values in row-major order can be obtained from the 12 coefficients of $U_v$ by multiplication by the two sparse matrices $D_x$ and $D_y$ of size $6 \times 12$ (written out explicitly on the slide).

26 2D denoising
If we want the vectors $D_x U_v$ and $D_y U_v$ to correspond respectively to the differences $u(i+1, j) - u(i, j)$ and $u(i, j+1) - u(i, j)$ in row-major order, then
$D_x = I_{H-1,H} \otimes D_W$
$D_y = D_H \otimes I_{W-1,W}$
with $\otimes$ the Kronecker product of two matrices, $I_{k,l}$ the truncated identity matrix of size $k \times l$, and $D_k$ the bidiagonal matrix of size $(k-1) \times k$ with $D_{i,i} = -1$ and $D_{i,i+1} = 1$.
The Kronecker product is defined by
$A \otimes B = \begin{bmatrix} a_{11} B & \cdots & a_{1n} B \\ \vdots & \ddots & \vdots \\ a_{m1} B & \cdots & a_{mn} B \end{bmatrix}$
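
A sketch of the Kronecker construction of $D_x$ and $D_y$ with scipy.sparse, following the formulas above; which operator corresponds to horizontal or vertical differences depends on the image indexing convention, but both enter symmetrically in $D_x^T D_x + D_y^T D_y$.

```python
import numpy as np
import scipy.sparse as sp

def diff_matrix(k):
    """Bidiagonal (k-1) x k matrix with -1 on the diagonal and +1 on the superdiagonal."""
    return sp.diags([-np.ones(k - 1), np.ones(k - 1)], offsets=[0, 1], shape=(k - 1, k))

def make_Dx_Dy(H, W):
    """Difference operators acting on a row-major flattened H x W image,
    following the Kronecker-product formulas of the slide."""
    Dx = sp.kron(sp.eye(H - 1, H), diff_matrix(W))   # I_{H-1,H} (x) D_W
    Dy = sp.kron(diff_matrix(H), sp.eye(W - 1, W))   # D_H (x) I_{W-1,W}
    return Dx.tocsr(), Dy.tocsr()

Dx, Dy = make_Dx_Dy(3, 4)
print(Dx.shape, Dy.shape)  # both (6, 12), as on the previous slide
```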

27 2D denoising
We want to minimize
$\|U_v - I_v\|^2 + \lambda \left( \|D_x U_v\|^2 + \|D_y U_v\|^2 \right)$
Using the solution for the sum of least squares we have
$\tilde{U}_v = (\lambda (D_x^T D_x + D_y^T D_y) + I_d)^{-1} I_v$
with $I_d$ the identity matrix.

28 Weighted least squares
We can weight the least squares terms:
$f(x) = \sum_{i=1}^{N} w_i (A[i,:]x - b[i])^2$
$\nabla f(x) = 2(A^T W A x - A^T W b)$
with $W$ the diagonal matrix with $W_{ii} = w_i$. We have
$\nabla f(x) = 0 \iff A^T W A x = A^T W b$
If $A^T W A$ is invertible then the solution is unique and we have
$x = (A^T W A)^{-1} A^T W b$
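
A minimal weighted least squares sketch implementing $x = (A^T W A)^{-1} A^T W b$; the corrupted-measurement example is illustrative.

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Minimize sum_i w_i * (A[i,:] x - b[i])^2 by solving A^T W A x = A^T W b."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 2))
b = A @ np.array([1.0, 2.0])
b[0] += 10.0        # one corrupted measurement
w = np.ones(20)
w[0] = 1e-3         # down-weight it
print(weighted_least_squares(A, b, w))  # close to [1, 2]
```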

29 Inpainting
Objective: reconstructing missing parts of an image.
Applications: photography and movies.

30 Inpainting
We denote $\Omega$ the region where we know the image. A simple inpainting method consists in minimizing
$f(u) = \int_x \int_y \|\nabla u(x, y)\|^2 \, dx\, dy$
with the constraint $u(x, y) = I(x, y)$ for the observed points $(x, y) \in \Omega$.
To make it easier, we reuse the denoising formulation without constraints, with a weight $\alpha(x, y)$ for each point of the image; $\alpha(x, y)$ is large relative to $\lambda$ for $(x, y) \in \Omega$ and zero for $(x, y) \notin \Omega$:
$f(u) = \int_x \int_y \alpha(x, y) (u(x, y) - I(x, y))^2 \, dx\, dy + \lambda \int_x \int_y \|\nabla u(x, y)\|^2 \, dx\, dy$

31 Inpainting
In the discrete setting we have
$f(U) = \lambda \left( \|D_x U_v\|^2 + \|D_y U_v\|^2 \right) + \sum_{ij} \alpha_{ij} (U(i, j) - I(i, j))^2$
Let $A$ be the diagonal matrix of size $WH \times WH$ whose elements correspond to the $\alpha_{ij}$ in row-major order. Using the weighted least squares solution we get
$\tilde{U}_v = (\lambda (D_x^T D_x + D_y^T D_y) + A)^{-1} A I_v$
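
A sketch of the quadratic inpainting solve, assuming `Dx` and `Dy` were built as in the earlier Kronecker sketch; `alpha_known` is an illustrative weight for the observed pixels, standing in for a large $\alpha_{ij}$.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def inpaint_least_squares(I, mask, lam, Dx, Dy, alpha_known=1e3):
    """Solve (lam * (Dx^T Dx + Dy^T Dy) + A) U_v = A I_v, with A diagonal:
    large on observed pixels (mask == True) and zero on missing ones."""
    alpha = np.where(mask.ravel(), alpha_known, 0.0)   # row-major, like I.ravel()
    A = sp.diags(alpha)
    M = lam * (Dx.T @ Dx + Dy.T @ Dy) + A
    U_v = spsolve(M.tocsc(), A @ I.ravel())
    return U_v.reshape(I.shape)
```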

32 Inpainting
Figures: inpainting domains and the corresponding results.

33 Reweighted least squares
We sometimes want to use another function than $x^2$:
$f(x) = \sum_i h(A[i,:]x - b[i])$
with $h(u)$ increasing more slowly than $u^2$, so that the points with large errors are less penalized than if we were using $u^2$.

34 Reweighted least squares
We consider functions $h$ that are symmetric and increasing, such that $h(\sqrt{x})$ is concave. Examples of such functions $h$:
Huber: $h(u) = u^2/2$ if $|u| \leq \tau$, $h(u) = \tau(|u| - \tau/2)$ otherwise
Smoothed absolute value: $h(u) = \sqrt{\epsilon^2 + u^2}$
Figures: plots of $h(u)$ for both functions.

35 Reweighted least squares
For a function $h$ in the set of previously considered functions, there exists an associated function $g$ (called the conjugate function, defined on the next slide) such that it is possible to rewrite $h$ in this form:
$h(u) = \min_{\gamma > 0} \left( \frac{\gamma u^2}{2} + g(\gamma) \right)$
Figure: example with the Huber function.

36 Reweighted least squares
The conjugate function $g$ is defined by
$g(\gamma) = \max_u \left( h(u) - \gamma u^2/2 \right)$
Figure: example with the Huber function ($\tau = 1/2$), showing $h(u) - \gamma u^2/2$, $\gamma u^2/2 + g(\gamma)$ and $g(\gamma)$.
The maximum with respect to $u$ is obtained by looking for $u > 0$ such that $h'(u) - \gamma u = 0$.

37 Reweighted least squares
We define the influence function $\psi(u)$ by
$\psi(u) = \operatorname{argmin}_{\gamma} \left( \gamma u^2/2 + g(\gamma) \right)$
$\psi(u)$ is the curvature of the quadratic function that touches $h$ at location $u$: $\gamma x^2/2 + g(\gamma)$ is tangent to $h(x)$ at $u$, so
$\frac{d\left( \gamma x^2/2 + g(\gamma) - h(x) \right)}{dx}(u) = 0 \iff \gamma u = h'(u) \iff \gamma = h'(u)/u$
We have $\psi(u) = h'(u)/u$.
Since $h(\sqrt{u})$ is increasing and concave, we can show that $\psi(u)$ is a decreasing function.

38 Reweighted least squares
We have:
Huber: $h(u) = u^2/2$ if $|u| \leq \tau$, $\tau(|u| - \tau/2)$ otherwise; $\psi(u) = 1$ if $|u| \leq \tau$, $\tau/|u|$ otherwise.
Smoothed absolute value: $h(u) = \sqrt{\epsilon^2 + u^2}$; $\psi(u) = 1/\sqrt{\epsilon^2 + u^2}$.
Figures: plots of $h$ and $\psi$ for both functions.
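
A sketch of the Huber function and the two influence functions $\psi(u) = h'(u)/u$ in numpy; the small constant guarding the division is an implementation detail, not from the slides.

```python
import numpy as np

def huber(u, tau):
    """Huber function: u^2/2 for |u| <= tau, tau * (|u| - tau/2) otherwise."""
    a = np.abs(u)
    return np.where(a <= tau, 0.5 * u ** 2, tau * (a - 0.5 * tau))

def huber_weight(u, tau):
    """psi(u) = h'(u)/u for Huber: 1 for |u| <= tau, tau/|u| otherwise."""
    a = np.maximum(np.abs(u), 1e-12)   # avoid division by zero at u = 0
    return np.minimum(1.0, tau / a)

def smooth_abs_weight(u, eps):
    """psi(u) = 1/sqrt(eps^2 + u^2) for the smoothed absolute value."""
    return 1.0 / np.sqrt(eps ** 2 + u ** 2)
```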

39 Reweighted least squares
Using $h(u) = \min_{\gamma > 0} \left( \gamma u^2/2 + g(\gamma) \right)$, we can rewrite the robust least squares problem as follows:
$\min_x f(x) = \min_x \sum_i h(A[i,:]x - b[i]) = \min_{x, \gamma_1, \ldots, \gamma_N} \sum_{i=1}^{N} \left( \frac{\gamma_i}{2}(A[i,:]x - b[i])^2 + g(\gamma_i) \right)$

40 Reweighted least squares
$\min_x f(x) = \min_{x, \gamma_1, \ldots, \gamma_N} \left( \sum_{i=1}^{N} \frac{\gamma_i}{2}(A[i,:]x - b[i])^2 + \sum_{i=1}^{N} g(\gamma_i) \right)$
We can minimize this function iteratively by minimizing alternately with respect to $\Gamma = (\gamma_1, \ldots, \gamma_N)$ and $x$.
The minimization with respect to $\Gamma$ is done by solving $N$ independent problems, and we get $\gamma_i = \psi(u_i)$ with $u_i = A[i,:]x - b[i]$.
The term $\sum_i g(\gamma_i)$ does not depend on $x$, and the minimization with respect to $x$ can be done using weighted least squares with $w_i = \gamma_i/2$, i.e. $x = (A^T W A)^{-1} A^T W b$.
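
A compact IRLS sketch for $\min_x \sum_i h(A[i,:]x - b[i])$; the Huber weights with $\tau = 1$, the initialization and the outlier data are illustrative choices (the constant factor in $w_i = \gamma_i/2$ cancels in the normal equations, so it is dropped here).

```python
import numpy as np

def irls(A, b, weight_fn, n_iter=20):
    """Iteratively reweighted least squares: alternate gamma_i = psi(u_i)
    and a weighted least squares solve for x."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # plain least squares init
    for _ in range(n_iter):
        r = A @ x - b                             # residuals u_i
        w = weight_fn(r)                          # weights psi(u_i)
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    return x

# robust line fit: Huber weights psi(r) = min(1, tau/|r|) with tau = 1
rng = np.random.default_rng(0)
A = np.column_stack([rng.uniform(0.0, 10.0, 50), np.ones(50)])
b = A @ np.array([2.0, 1.0]) + 0.1 * rng.standard_normal(50)
b[:5] += 20.0                                     # outliers
x_robust = irls(A, b, lambda r: np.minimum(1.0, 1.0 / np.maximum(np.abs(r), 1e-12)))
print(x_robust)                                   # close to [2, 1]
```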

41 Majorization-Minimization interpretation
The minimization with respect to $\Gamma$ amounts to finding, for each residual $u_i$, the 1D quadratic that touches $h$ at $u_i$ and is above $h$ everywhere else. By summing these quadratics we obtain a quadratic majorization of $f$ in $N$ dimensions (with $N$ the size of $x$). The minimization with respect to $x$ with $\Gamma$ fixed amounts to minimizing this majorization.

42 Robustness
Since $\psi(u)$ is decreasing, the weight of the points with the biggest residuals is reduced when computing the least squares solution. This gains robustness by limiting the impact of the points with large residuals on the least squares solution.

43 Denoising: the ROF model
A denoising method proposed by Rudin, Osher and Fatemi (ROF) consists in minimizing
$f(u) = \int_x \int_y (u(x, y) - I(x, y))^2 \, dx\, dy + \lambda \int_x \int_y \|\nabla u(x, y)\| \, dx\, dy$
The use of $\|\nabla u(x, y)\|$ instead of $\|\nabla u(x, y)\|^2$ in the second term favors a smooth image while keeping the edges sharp. The second term is called the total variation of $u$:
$TV(u) = \int_x \int_y \|\nabla u(x, y)\| \, dx\, dy$

44 Total variation in 1D
In 1D the total variation writes
$TV(u) = \int |u'(x)| \, dx$
It can be computed as the sum of the absolute differences between successive extrema.
Figure: example signal for which this sum gives $TV(u) = 1300$.

45 Total variation in 1D
An increasing function going from 0 to 1 has a total variation of 1, whatever its shape is.
Unlike the TV, a regularization by $\int_x u'(x)^2 \, dx$ will favor a smooth curve and thus will blur the edges.

46 Total variation in 2D and level sets
For $u$ a smooth 2D function and $\lambda$ a real number, we define the level set $L_\lambda(u)$ by
$L_\lambda(u) = \{(x, y) \mid u(x, y) = \lambda\}$
$L_\lambda(u)$ is either empty or a set of closed curves. In 2D, the total variation can be written as the integral of the level set lengths:
$TV(u) = \int_{\lambda = -\infty}^{\infty} \operatorname{length}(L_\lambda(u)) \, d\lambda$

47 Discrete ROF
We can approximate $TV(u)$ as follows:
$TV(u) \approx \sum_{j=0}^{W-1} \sum_{i=1}^{H-1} \sqrt{d_x(i, j)^2 + d_y(i, j)^2}$
with
$d_x(i, j) = u(i+1, j) - u(i, j)$
$d_y(i, j) = u(i, j+1) - u(i, j)$

48 ROF: gradient descent
$TV(u)$ is not differentiable where $d_x(i, j)^2 + d_y(i, j)^2 = 0$.
We approximate $\sqrt{x^2 + y^2}$ by $\sqrt{\epsilon^2 + x^2 + y^2}$, with $\epsilon$ small.
We get a function $f(u)$ that is differentiable everywhere and we can use a gradient descent.
Problem: using a small $\epsilon$ forces us to use a small step in the gradient descent to get convergence.

49 ROF: gradient descent
We try to minimize the discrete smoothed ROF cost:
$\sum_{i,j} (I(i, j) - u(i, j))^2 + \lambda \sum_{j=0}^{W-1} \sum_{i=1}^{H-1} \sqrt{\epsilon^2 + d_x(i, j)^2 + d_y(i, j)^2}$
with
$d_x(i, j) = u(i+1, j) - u(i, j)$
$d_y(i, j) = u(i, j+1) - u(i, j)$

50 ROF: reweighted least squares
We can use a reweighted least squares approach that converges faster than the gradient descent. There is a function $g(\gamma)$ such that $\sqrt{\epsilon^2 + x^2 + y^2}$ rewrites as the lower envelope of a set of quadratic functions:
$\sqrt{\epsilon^2 + x^2 + y^2} = \min_{\gamma} \left( \frac{\gamma}{2}(x^2 + y^2) + g(\gamma) \right)$
We define
$\psi(x, y) = \operatorname{argmin}_{\gamma} \left( \frac{\gamma}{2}(x^2 + y^2) + g(\gamma) \right)$
and we can show that $\psi(x, y) = 1/\sqrt{\epsilon^2 + x^2 + y^2}$.

51 ROF: reweighted least squares
We can rewrite $\min_u f(u)$ as $\min_{U, \Gamma} f(u, \Gamma)$ with
$f(u, \Gamma) = \|U_v - I_v\|^2 + \lambda \sum_{j=0}^{W-1} \sum_{i=1}^{H-1} \left( \frac{\gamma_{ij}}{2}(d_x(i, j)^2 + d_y(i, j)^2) + g(\gamma_{ij}) \right)$
with $\Gamma$ the matrix of size $(W-1) \times (H-1)$ containing the $\gamma_{ij}$. We can minimize $f(u, \Gamma)$ using an alternate minimization:
- we minimize with respect to $\Gamma$ using $\gamma_{ij} = \psi(d_x(i, j), d_y(i, j))$
- we minimize with respect to $u$ using $U_v = (\lambda (D_x^T \Gamma_d D_x + D_y^T \Gamma_d D_y) + I_d)^{-1} I_v$
with $\Gamma_d$ the diagonal matrix of size $(W-1)(H-1) \times (W-1)(H-1)$ whose coefficients are the $\gamma_{ij}/2$ in row-major order, and $D_x$ and $D_y$ the two sparse matrices defined previously.
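
A sketch of this alternate minimization for the smoothed ROF model, assuming `Dx` and `Dy` were built as in the earlier Kronecker sketch; `eps` and `n_iter` are illustrative values.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def rof_denoise_irls(I, lam, Dx, Dy, eps=1e-2, n_iter=30):
    """Minimize ||U_v - I_v||^2 + lam * sum sqrt(eps^2 + dx^2 + dy^2)
    by alternating gamma updates and a sparse linear solve."""
    I_v = I.ravel().astype(float)
    U_v = I_v.copy()
    Id = sp.identity(I_v.size, format="csc")
    for _ in range(n_iter):
        dx, dy = Dx @ U_v, Dy @ U_v
        gamma = 1.0 / np.sqrt(eps ** 2 + dx ** 2 + dy ** 2)   # gamma_ij = psi(dx, dy)
        G = sp.diags(gamma / 2.0)                             # Gamma_d (includes the 1/2 factor)
        M = lam * (Dx.T @ G @ Dx + Dy.T @ G @ Dy) + Id
        U_v = spsolve(M.tocsc(), I_v)
    return U_v.reshape(I.shape)
```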

52 Discrete ROF
Example of denoised images for various $\lambda$: $I_b$, $\lambda = 10$, $\lambda = 20$, $\lambda = 50$.
We can see that the edges are preserved.
Figure: SNR for various $\lambda$.

53 Inpainting with TV
We can use the total variation to do inpainting by minimizing
$f(u) = \int_x \int_y \|\nabla u(x, y)\| \, dx\, dy$
with the constraint $u(x, y) = I(x, y)$ for the observed points $(x, y) \in \Omega$.
To make the derivation easier, we reuse the ROF denoising formulation with a weight $\alpha(x, y)$ for each point in the image; $\alpha(x, y)$ is large relative to $\lambda$ for $(x, y) \in \Omega$ and zero for $(x, y) \notin \Omega$:
$f(u) = \int_x \int_y \alpha(x, y) (u(x, y) - I(x, y))^2 \, dx\, dy + \lambda \int_x \int_y \|\nabla u(x, y)\| \, dx\, dy$

54 Inpainting: reweighted least squares
We can rewrite $\min_u f(u)$ as $\min_{U, \Gamma} f(u, \Gamma)$ with
$f(u, \Gamma) = \sum_{ij} \alpha_{ij} (U(i, j) - I(i, j))^2 + \lambda \sum_{j=0}^{W-1} \sum_{i=1}^{H-1} \left( \frac{\gamma_{ij}}{2}(d_x(i, j)^2 + d_y(i, j)^2) + g(\gamma_{ij}) \right)$
We can minimize $f(u, \Gamma)$ using an alternated minimization:
- we minimize with respect to $\Gamma$ using $\gamma_{ij} = \psi(d_x(i, j), d_y(i, j))$
- we minimize with respect to $u$ by computing $U_v = (\lambda (D_x^T \Gamma_d D_x + D_y^T \Gamma_d D_y) + A)^{-1} A I_v$
with $A$ the diagonal matrix of size $WH \times WH$ whose elements correspond to the $\alpha_{ij}$ in row-major order, $\Gamma_d$ the diagonal matrix with coefficients $\gamma_{ij}/2$, and $D_x$ and $D_y$ the two sparse matrices defined previously.

55 Inpainting
Figures: inpainting domains, least squares results, and results with TV.

56 Inpainting zoom
Figure: zoomed view of the inpainting results.

57 Some links
TVDmm/TVDmm.pdf
postgrad/cca/files/ipol.pdf

58 Nikolova, Mila and Ng, Michael K., Analysis of Half-Quadratic Minimization Methods for Signal and Image Recovery. SIAM J. Scientific Computing.
