l1-magic: Recovery of Sparse Signals via Convex Programming


Emmanuel Candès and Justin Romberg, Caltech
October 2005


1  Seven problems

A recent series of papers [3-8] develops a theory of signal recovery from highly incomplete information. The central results state that a sparse vector x0 ∈ R^N can be recovered from a small number of linear measurements b = Ax0 ∈ R^K, K ≪ N (or b = Ax0 + e when there is measurement noise) by solving a convex program.

As a companion to these papers, this package includes MATLAB code that implements this recovery procedure in the seven contexts described below. The code is not meant to be cutting-edge; rather, it is a proof-of-concept showing that these recovery procedures are computationally tractable, even for large-scale problems where the number of data points is in the millions.

The problems fall into two classes: those which can be recast as linear programs (LPs), and those which can be recast as second-order cone programs (SOCPs). The LPs are solved using a generic path-following primal-dual method. The SOCPs are solved with a generic log-barrier algorithm. The implementations follow Chapter 11 of [2].

For maximum computational efficiency, the solvers for each of the seven problems are implemented separately. They all have the same basic structure, however, with the computational bottleneck being the calculation of the Newton step (this is discussed in detail below). The code can be used in either "small scale" mode, where the system is constructed explicitly and solved exactly, or in "large scale" mode, where an iterative, matrix-free algorithm such as conjugate gradients (CG) is used to approximately solve the system.

Our seven problems of interest are:

• Min-ℓ1 with equality constraints. The program

    (P1)    min ||x||_1   subject to   Ax = b,

  also known as basis pursuit, finds the vector with smallest ℓ1 norm ||x||_1 := Σ_i |x_i| that explains the observations b. As the results in [4, 6] show, if a sufficiently sparse x0 exists such that Ax0 = b, then (P1) will find it. When x, A, b have real-valued entries, (P1) can be recast as an LP (this is discussed in detail in [10]).

• Minimum ℓ1 error approximation. Let A be an M × N matrix with full rank.

  Given y ∈ R^M, the program

    (PA)    min_x ||y − Ax||_1

  finds the vector x ∈ R^N such that the error y − Ax has minimum ℓ1 norm (i.e. we are asking that the difference between Ax and y be sparse). This program arises in the context of channel coding [8]. Suppose we have a channel code that produces a codeword c = Ax for a message x. The message travels over the channel, and has an unknown number of its entries corrupted. The decoder observes y = c + e, where e is the corruption. If e is sparse enough, then the decoder can use (PA) to recover x exactly. Again, (PA) can be recast as an LP.

• Min-ℓ1 with quadratic constraints. This program finds the vector with minimum ℓ1 norm that comes close to explaining the observations:

    (P2)    min ||x||_1   subject to   ||Ax − b||_2 ≤ ε,

  where ε is a user-specified parameter. As shown in [5], if a sufficiently sparse x0 exists such that b = Ax0 + e for some small error term with ||e||_2 ≤ ε, then the solution x⋆_2 to (P2) will be close to x0. That is, ||x⋆_2 − x0||_2 ≤ C·ε, where C is a small constant. (P2) can be recast as a SOCP.

• Min-ℓ1 with bounded residual correlation. Also referred to as the Dantzig Selector, the program

    (PD)    min ||x||_1   subject to   ||A^*(Ax − b)||_∞ ≤ γ,

  where γ is a user-specified parameter, relaxes the equality constraints of (P1) in a different way. (PD) requires that the residual Ax − b of a candidate vector x not be too correlated with any of the columns of A (the product A^*(Ax − b) measures each of these correlations). If b = Ax0 + e, where the e_i are i.i.d. N(0, σ²), then the solution x⋆_D to (PD) has near-optimal minimax risk:

    E ||x⋆_D − x0||_2²  ≤  C (log N) · Σ_i min( x0(i)², σ² )

  (see [7] for details). For real-valued x, A, b, (PD) can again be recast as an LP; in the complex case, there is an equivalent SOCP.

It is also true that when x, A, b are complex, the programs (P1), (PA), (PD) can be written as SOCPs, but we will not pursue this here.

If the underlying signal is a 2D image, an alternate recovery model is that the gradient is sparse [18]. Let x_{ij} denote the pixel in the ith row and jth column of an n × n image x, and define the operators

    D_{h;ij} x = x_{i+1,j} − x_{ij}  if i < n,   0  if i = n,
    D_{v;ij} x = x_{i,j+1} − x_{ij}  if j < n,   0  if j = n,

and

    D_{ij} x = ( D_{h;ij} x , D_{v;ij} x ).

The 2-vector D_{ij} x can be interpreted as a kind of discrete gradient of the digital image x. The total variation of x is simply the sum of the magnitudes of this discrete gradient at every point:

    TV(x) := Σ_{ij} sqrt( (D_{h;ij} x)² + (D_{v;ij} x)² ) = Σ_{ij} ||D_{ij} x||_2.

With these definitions, we have three programs for image recovery, each of which can be recast as a SOCP:

• Min-TV with equality constraints.

    (TV1)    min TV(x)   subject to   Ax = b.

  If there exists a piecewise constant x0 with sufficiently few edges (i.e. D_{ij} x0 is nonzero for only a small number of indices ij), then (TV1) will recover x0 exactly; see [4].

• Min-TV with quadratic constraints.

    (TV2)    min TV(x)   subject to   ||Ax − b||_2 ≤ ε.

  Examples of recovering images from noisy observations using (TV2) were presented in [5]. Note that if A is the identity matrix, (TV2) reduces to the standard Rudin-Osher-Fatemi image restoration problem [18]. See also [9, 11, 12, 13] for SOCP solvers specifically designed for the total-variational functional.

• Dantzig TV.

    (TVD)    min TV(x)   subject to   ||A^*(Ax − b)||_∞ ≤ γ.

  This program was used in one of the numerical experiments in [7].

In the next section, we describe how to solve linear and second-order cone programs using modern interior point methods.


2  Interior point methods

Advances in interior point methods for convex optimization over the past 15 years, led by the seminal work [14], have made large-scale solvers for the seven problems above feasible. Below we overview the generic LP and SOCP solvers used in the l1-magic package to solve these problems.

2.1  A primal-dual algorithm for linear programming

In Chapter 11 of [2], Boyd and Vandenberghe outline a relatively simple primal-dual algorithm for linear programming, which we have followed very closely for the implementation of (P1), (PA), and (PD). For the sake of completeness, and to set up the notation, we briefly review their algorithm here.

The standard-form linear program is

    min_z <c0, z>   subject to   A0 z = b,   f_i(z) ≤ 0,                           (1)

where the search vector z ∈ R^N, b ∈ R^K, A0 is a K × N matrix, and each of the f_i, i = 1, ..., m, is a linear functional

    f_i(z) = <c_i, z> + d_i

for some c_i ∈ R^N, d_i ∈ R. At the optimal point z⋆, there will exist dual vectors ν⋆ ∈ R^K, λ⋆ ∈ R^m, λ⋆ ≥ 0 such that the Karush-Kuhn-Tucker conditions are satisfied:

    (KKT)    c0 + A0^T ν⋆ + Σ_i λ⋆_i c_i = 0,
             λ⋆_i f_i(z⋆) = 0,   i = 1, ..., m,
             A0 z⋆ = b,
             f_i(z⋆) ≤ 0,   i = 1, ..., m.

In a nutshell, the primal-dual algorithm finds the optimal z⋆ (along with optimal dual vectors ν⋆ and λ⋆) by solving this system of nonlinear equations. The solution procedure is the classical Newton method: at an interior point (z^k, ν^k, λ^k) (by which we mean f_i(z^k) < 0, λ^k_i > 0), the system is linearized and solved. However, the step to the new point (z^{k+1}, ν^{k+1}, λ^{k+1}) must be modified so that we remain in the interior.

In practice, we relax the complementary slackness condition λ_i f_i = 0 to

    λ^k_i f_i(z^k) = −1/τ^k,                                                       (2)

where we judiciously increase the parameter τ^k as we progress through the Newton iterations. This biases the solution of the linearized equations towards the interior, allowing a smooth, well-defined "central path" from an interior point to the solution on the boundary (see [15, 20] for an extended discussion).

The primal, dual, and central residuals quantify how close a point (z, ν, λ) is to satisfying (KKT), with (2) in place of the slackness condition:

    r_dual = c0 + A0^T ν + Σ_i λ_i c_i,
    r_cent = −Λ f − (1/τ) 1,
    r_pri  = A0 z − b,

where Λ is a diagonal matrix with Λ_ii = λ_i, and f = ( f_1(z), ..., f_m(z) )^T.

From a point (z, ν, λ), we want to find a step (Δz, Δν, Δλ) such that

    r_τ( z + Δz, ν + Δν, λ + Δλ ) = 0.                                             (3)

Linearizing (3) with the Taylor expansion around (z, ν, λ),

    r_τ( z + Δz, ν + Δν, λ + Δλ ) ≈ r_τ(z, ν, λ) + J_{r_τ}(z, ν, λ) (Δz, Δν, Δλ),

where J_{r_τ}(z, ν, λ) is the Jacobian of r_τ, we have the system

    [ 0     A0^T   C^T ] [Δz]       [ c0 + A0^T ν + Σ_i λ_i c_i ]
    [ −ΛC   0      −F  ] [Δν]  = −  [ −Λ f − (1/τ) 1            ]
    [ A0    0      0   ] [Δλ]       [ A0 z − b                  ],

where the m × N matrix C has the c_i^T as rows, and F is diagonal with F_ii = f_i(z). We can eliminate Δλ using

    Δλ = −Λ F^{-1} C Δz − λ − (1/τ) F^{-1} 1,                                      (4)

leaving us with the core system

    [ −C^T F^{-1} Λ C   A0^T ] [Δz]   [ −c0 + (1/τ) C^T F^{-1} 1 − A0^T ν ]
    [ A0                0    ] [Δν] = [ b − A0 z                          ].       (5)

With (Δz, Δν, Δλ) we have a step direction. To choose the step length 0 < s ≤ 1, we ask that it satisfy two criteria:

1. z + sΔz and λ + sΔλ are in the interior, i.e. f_i(z + sΔz) < 0 and λ_i > 0 for all i.

2. The norm of the residuals has decreased sufficiently:

    ||r_τ( z + sΔz, ν + sΔν, λ + sΔλ )||_2 ≤ (1 − αs) · ||r_τ( z, ν, λ )||_2,

   where α is a user-specified parameter (in all of our implementations, we have set α = 0.01).
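To make the bookkeeping above concrete, here is a minimal dense (small-scale) sketch of forming the reduced system (5) and recovering Δλ from (4). The function name and calling sequence are ours, chosen only for illustration; the actual l1-magic solvers assemble the same quantities problem by problem, and in matrix-free form (see the Appendix).

function [dz, dnu, dlam] = newton_step_lp(z, nu, lam, tau, c0, A0, b, C, d)
% Schematic dense Newton step for the generic primal-dual LP solver:
% form the reduced system (5), solve it, and recover dlam from (4).
% Assumes an interior point: C*z + d < 0 and lam > 0.
f    = C*z + d;                                   % inequality functionals f_i(z)
Finv = 1./f;                                      % entries of F^{-1}
lhs  = [-C'*(bsxfun(@times, lam.*Finv, C)), A0';  % -C'*F^{-1}*Lambda*C block
         A0,                                zeros(size(A0,1))];
rhs  = [-c0 + (1/tau)*C'*Finv - A0'*nu;
         b - A0*z];
sol  = lhs \ rhs;
dz   = sol(1:numel(z));
dnu  = sol(numel(z)+1:end);
dlam = -(lam.*Finv).*(C*dz) - lam - (1/tau)*Finv; % equation (4)
end

The backtracking line search described next then scales this step so that the iterates stay strictly inside the feasible region and the residual norm decreases.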

Since the f_i are linear functionals, item 1 is easily addressed. We choose the maximum step size that just keeps us in the interior. Let

    I_f^+ = { i : <c_i, Δz> > 0 },      I_λ^- = { i : Δλ_i < 0 },

and set

    s_max = 0.99 · min{ 1,  { −f_i(z)/<c_i, Δz>,  i ∈ I_f^+ },  { −λ_i/Δλ_i,  i ∈ I_λ^- } }.

Then starting with s = s_max, we check if item 2 above is satisfied; if not, we set s = β·s and try again. We have taken β = 1/2 in all of our implementations.

When r_dual and r_pri are small, the surrogate duality gap η = −f^T λ is an approximation to how close a certain (z, ν, λ) is to being optimal (i.e. <c0, z> − <c0, z⋆> ≈ η). The primal-dual algorithm repeats the Newton iterations described above until η has decreased below a given tolerance.

Almost all of the computational burden falls on solving (5). When the matrix −C^T F^{-1} Λ C is easily invertible (as it is for (P1)), or there are no equality constraints (as in (PA), (PD)), (5) can be reduced to a symmetric positive definite set of equations.

When N and K are large, forming the matrix and then solving the linear system of equations in (5) is infeasible. However, if fast algorithms exist for applying C, C^T and A0, A0^T, we can use a "matrix free" solver such as Conjugate Gradients. CG is iterative, requiring a few hundred applications of the constraint matrices (roughly speaking) to get an accurate solution. A CG solver (based on the very nice exposition in [19]) is included with the l1-magic package.

The implementations of (P1), (PA), (PD) are nearly identical, save for the calculation of the Newton step. In the Appendix, we derive the Newton step for each of these problems, using notation mirroring that used in the actual MATLAB code.

2.2  A log-barrier algorithm for SOCPs

Although primal-dual techniques exist for solving second-order cone programs (see [1, 13]), their implementation is not quite as straightforward as in the LP case. Instead, we have implemented each of the SOCP recovery problems using a log-barrier method. The log-barrier method, for which we will again closely follow the generic (but effective) algorithm described in [2, Chap. 11], is conceptually more straightforward than the primal-dual method described above, but at its core it is again solving for a series of Newton steps.

Each of (P2), (TV1), (TV2), (TVD) can be written in the form

    min_z <c0, z>   subject to   A0 z = b,   f_i(z) ≤ 0,   i = 1, ..., m,          (6)

where each f_i describes either a constraint which is linear,

    f_i(z) = <c_i, z> + d_i,

or second-order conic,

    f_i(z) = (1/2) ( ||A_i z||_2² − ( <c_i, z> + d_i )² )

(the A_i are matrices, the c_i are vectors, and the d_i are scalars).

The standard log-barrier method transforms (6) into a series of linearly constrained programs:

    min_z  <c0, z> + (1/τ^k) Σ_i −log( −f_i(z) )   subject to   A0 z = b,          (7)

where τ^k > τ^{k−1}. The inequality constraints have been incorporated into the functional via a penalty function which is infinite when the constraint is violated (or even met exactly), and smooth elsewhere. As τ^k gets large, the solution z^k to (7) approaches the solution z⋆ to (6): it can be shown that

    <c0, z^k> − <c0, z⋆> < m/τ^k,

i.e. we are within m/τ^k of the optimal value after iteration k (m/τ^k is called the duality gap). The idea here is that each of the smooth subproblems can be solved to fairly high accuracy with just a few iterations of Newton's method, especially since we can use the solution z^k as a starting point for subproblem k + 1.

At log-barrier iteration k, Newton's method (which is again iterative) proceeds by forming a series of quadratic approximations to (7), and minimizing each by solving a system of equations (again, we might need to modify the step length to stay in the interior). The quadratic approximation of the functional

    f0(z) = <c0, z> + (1/τ) Σ_i −log( −f_i(z) )

in (7) around a point z is given by

    f0(z + Δz) ≈ f0(z) + <g_z, Δz> + (1/2) <H_z Δz, Δz> =: q(z + Δz),

where g_z is the gradient

    g_z = c0 + (1/τ) Σ_i (1/(−f_i(z))) ∇f_i(z),

and H_z is the Hessian matrix

    H_z = (1/τ) Σ_i (1/f_i(z)²) ∇f_i(z) ( ∇f_i(z) )^T + (1/τ) Σ_i (1/(−f_i(z))) ∇²f_i(z).

Given that z is feasible (that A0 z = b, in particular), the Δz that minimizes q(z + Δz) subject to A0 (z + Δz) = b is the solution to the set of linear equations

    [ H_z   A0^T ] [Δz]      [ g_z ]
    [ A0    0    ] [ v ] = − [ 0   ].                                              (8)

The vector v, which can be interpreted as the Lagrange multipliers for the equality constraints in the quadratic minimization problem, is not directly used.

In all of the recovery problems below, with the exception of (TV1), there are no equality constraints (A0 = 0). In these cases, the system (8) is symmetric positive definite, and thus can be solved using CG when the problem is "large scale". For the (TV1) problem, we use the SYMMLQ algorithm (which is similar to CG, and works on symmetric but indefinite systems; see [16]).

With Δz in hand, we have the Newton step direction. The step length s is chosen so that

1. f_i(z + sΔz) < 0 for all i = 1, ..., m,

2. the functional has decreased sufficiently:

    f0(z + sΔz) < f0(z) + α s <g_z, Δz>,

   where α is a user-specified parameter (each of the implementations below uses α = 0.01). This requirement basically states that the decrease must be within a certain percentage of that predicted by the linear model at z.

As before, we start with s = 1, and decrease by multiples of β until both these conditions are satisfied (all implementations use β = 1/2).

(The choice of −log(−x) for the barrier function is not arbitrary: it has a property, termed self-concordance, that is very important for quick convergence of (7) to (6), both in theory and in practice; see the very nice exposition in [17].)

The complete log-barrier implementation for each problem follows the outline:

1. Inputs: a feasible starting point z^0, a tolerance η, and parameters µ and an initial τ^1. Set k = 1.

2. Solve (7) via Newton's method (followed by the backtracking line search), using z^{k−1} as an initial point. Call the solution z^k.

3. If m/τ^k < η, terminate and return z^k.

4. Else, set τ^{k+1} = µ τ^k, k = k + 1, and go to step 2.

In fact, we can calculate in advance how many iterations the log-barrier algorithm will need:

    barrier iterations = ⌈ ( log m − log η − log τ^1 ) / log µ ⌉.

The final issue is the selection of τ^1. Our implementation chooses τ^1 conservatively; it is set so that the duality gap m/τ^1 after the first iteration is equal to <c0, z^0>.

In the Appendix, we explicitly derive the equations for the Newton step for each of (P2), (TV1), (TV2), (TVD), again using notation that mirrors the variable names in the code.


3  Examples

To illustrate how to use the code, the l1-magic package includes m-files for solving specific instances of each of the above problems (these end in "_example.m" in the main directory).

3.1  ℓ1 with equality constraints

We will begin by going through l1eq_example.m in detail. This m-file creates a sparse signal, takes a limited number of measurements of that signal, and recovers the signal exactly by solving (P1). The first part of the procedure is for the most part self-explanatory:

% put key subdirectories in path if not already there
path(path, './Optimization');
path(path, './Data');

% load random states for repeatable experiments
load RandomStates
rand('state', rand_state);
randn('state', randn_state);

% signal length
N = 512;
% number of spikes in the signal
T = 20;
% number of observations to make
K = 120;

% random +/- 1 signal
x = zeros(N,1);
q = randperm(N);
x(q(1:T)) = sign(randn(T,1));

We add the Optimization directory (where the interior point solvers reside) and the Data directories to the path. The file RandomStates.m contains two variables, rand_state and randn_state, which we use to set the states of the random number generators on the next two lines (we want this to be a repeatable experiment). The next few lines set up the problem: a length-512 signal that contains 20 spikes is created by choosing 20 locations at random, and then putting ±1 at these locations. The original signal is shown in Figure 1(a). The next few lines:

% measurement matrix
disp('Creating measurment matrix...');
A = randn(K,N);
A = orth(A')';
disp('Done.');

% observations
y = A*x;

% initial guess = min energy
x0 = A'*y;

create a measurement ensemble by first creating a K × N matrix with i.i.d. Gaussian entries, and then orthogonalizing the rows. The measurements y are taken, and the minimum energy solution x0 is calculated (x0, which is shown in Figure 1(b), is the vector in {x : Ax = y} that is closest to the origin). Finally, we recover the signal with:

% solve the LP
tic
xp = l1eq_pd(x0, A, [], y, 1e-3);
toc

The function l1eq_pd.m (found in the Optimization subdirectory) implements the primal-dual algorithm presented in Section 2.1; we are sending it our initial guess x0 for the solution, the measurement matrix (the third argument, which is used to specify the transpose of the measurement matrix, is unnecessary here and hence left empty, since we are providing A explicitly), the measurements, and the precision to which we want the problem solved (l1eq_pd will terminate when the surrogate duality gap is below 10^-3).

Running the example file at the MATLAB prompt, we have the following output:

>> l1eq_example
Creating measurment matrix... Done.
Iteration = 1, tau = 6433e+0, Primal = 650e+0, PDGap = 59e+0, Dual res = 693e+00, Primal res = 568e-5, H11p condition number = 486e-0
Iteration = 2, tau = 96e+0, Primal = 9358e+0, PDGap = 7903e+0, Dual res = 706e-0, Primal res = 497e-5, H11p condition number = 5e-0
Iteration = 3, tau = 79e+0, Primal = 5567e+0, PDGap = 375e+0, Dual res = 950e-0, Primal res = 6003e-4, H11p condition number = 9606e-05
Iteration = 4, tau = 4689e+0, Primal = 439e+0, PDGap = 84e+0, Dual res = 575e-0, Primal res = 3307e-4, H11p condition number = 9e-04
Iteration = 5, tau = 66e+0, Primal = 3646e+0, PDGap = 66e+0, Dual res = 56e-0, Primal res = 44e-4, H11p condition number = 84e-04
Iteration = 6, tau = 57e+03, Primal = 879e+0, PDGap = 8853e+00, Dual res = 5549e-0, Primal res = 473e-4, H11p condition number = 7396e-05
Iteration = 7, tau = 047e+04, Primal = 097e+0, PDGap = 9780e-0, Dual res = 5549e-04, Primal res = 4e-3, H11p condition number = 05e-06
Iteration = 8, tau = 9605e+04, Primal = 00e+0, PDGap = 066e-0, Dual res = 5549e-06, Primal res = 376e-3, H11p condition number = 90e-07
Iteration = 9, tau = 88e+05, Primal = 00e+0, PDGap = 6e-0, Dual res = 5549e-08, Primal res = 830e-, H11p condition number = 6793e-09
Iteration = 10, tau = 8084e+06, Primal = 000e+0, PDGap = 67e-03, Dual res = 5549e-0, Primal res = 33e-, H11p condition number = 889e-
Iteration = 11, tau = 747e+07, Primal = 000e+0, PDGap = 38e-04, Dual res = 5550e-, Primal res = 48e-0, H11p condition number = 076e-
Elapsed time is seconds.

The recovered signal xp is shown in Figure 1(c). The signal is recovered to fairly high accuracy:

>> norm(xp-x)

ans =

    4746e-05

[Figure 1: 1D recovery experiment for ℓ1 minimization with equality constraints. (a) Original length-512 signal x, consisting of 20 spikes. (b) Minimum energy (linear) reconstruction x0. (c) Minimum ℓ1 reconstruction xp.]

3.2  Phantom reconstruction

A large scale example is given in tveq_phantom_example.m. This file recreates the phantom reconstruction experiment first published in [4]. The Shepp-Logan phantom, shown in Figure 2(a), is measured at K = 5481 locations in the 2D Fourier plane; the sampling pattern is shown in Figure 2(b). The image is then reconstructed exactly using (TV1).

The star-shaped Fourier-domain sampling pattern is created with

% number of radial lines in the Fourier domain
L = 22;

% Fourier samples we are given
[M,Mh,mh,mhi] = LineMask(L,n);
OMEGA = mhi;

The auxiliary function LineMask.m (found in the Measurements subdirectory) creates the star-shaped pattern consisting of 22 lines through the origin. The vector OMEGA contains the locations of the frequencies used in the sampling pattern.

This example differs from the previous one in that the code operates in large-scale mode. The measurement matrix in this example is 5481 × 65536, making the system (8) far too large to solve (or even store) explicitly. (In fact, the measurement matrix itself would require almost 3 gigabytes of memory if stored in double precision.)

Instead of creating the measurement matrix explicitly, we provide function handles that take a vector x and return Ax. As discussed above, the Newton steps are solved for using an implicit algorithm.

To create the implicit matrix, we use the function handles

A = @(z) A_fhp(z, OMEGA);
At = @(z) At_fhp(z, OMEGA, n);

The function A_fhp.m takes a length-N vector (we treat n × n images as N := n² vectors), and returns samples on the K frequencies.

(Actually, since the underlying image is real, A_fhp.m returns the real and imaginary parts of the 2D FFT on the upper half-plane of the domain shown in Figure 2(b).)

[Figure 2: Phantom recovery experiment. (a) Phantom. (b) Sampling pattern. (c) Min energy reconstruction. (d) Min-TV reconstruction.]

To solve (TV1), we call

xp = tveq_logbarrier(xbp, A, At, y, 1e-1, 2, 1e-8, 600);

The variable xbp is the initial guess (which is again the minimal energy reconstruction, shown in Figure 2(c)), y are the measurements, and 1e-1 is the desired precision. The sixth input is the value of µ, the amount by which to increase τ^k at each iteration (see Section 2.2). The last two inputs are parameters for the large-scale solver used to find the Newton step. The solvers are iterative, with each iteration requiring one application of A and one application of At. The seventh and eighth arguments above state that we want the solver to iterate until the solution has precision 10^-8 (that is, it finds a Δz such that ||H Δz + g||_2 / ||g||_2 ≤ 10^-8), or until it has reached 600 iterations.

The recovered phantom is shown in Figure 2(d); as expected, the relative error ||X_TV − X||_2 / ||X||_2 is negligible, and the recovery is exact up to the precision of the solver.


4  Optimization routines

We include a brief description of each of the main optimization routines (type help <function> in MATLAB for details). Each of these m-files is found in the Optimization subdirectory.

cgsolve               Solves Ax = b, where A is symmetric positive definite, using the Conjugate Gradient method.
l1dantzig_pd          Solves (PD) (the Dantzig selector) using a primal-dual algorithm.
l1decode_pd           Solves the norm approximation problem (PA) (for decoding via linear programming) using a primal-dual algorithm.
l1eq_pd               Solves the standard Basis Pursuit problem (P1) using a primal-dual algorithm.
l1qc_logbarrier       Barrier ("outer") iterations for solving quadratically constrained ℓ1 minimization (P2).
l1qc_newton           Newton ("inner") iterations for solving quadratically constrained ℓ1 minimization (P2).
tvdantzig_logbarrier  Barrier iterations for solving the TV Dantzig selector (TVD).
tvdantzig_newton      Newton iterations for (TVD).
tveq_logbarrier       Barrier iterations for equality constrained TV minimization (TV1).
tveq_newton           Newton iterations for (TV1).
tvqc_logbarrier       Barrier iterations for quadratically constrained TV minimization (TV2).
tvqc_newton           Newton iterations for (TV2).
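All of the large-scale solvers above hand their Newton systems to an iterative method (cgsolve, or SYMMLQ in the case of (TV1)) through a routine that merely applies the system operator to a vector. The following minimal sketch illustrates that matrix-free convention using MATLAB's built-in pcg rather than the package's cgsolve, and a made-up diagonal-plus-rank-one operator rather than one of the recovery problems:

% Solve H*x = g where H = diag(d) + u*u' is available only through a
% function handle that applies it to a vector (H is symmetric positive
% definite, so CG-type methods apply).
n    = 1000;
d    = 1 + rand(n,1);
u    = randn(n,1)/sqrt(n);
Hfun = @(x) d.*x + u*(u'*x);       % applies H without ever forming it
g    = randn(n,1);
x    = pcg(Hfun, g, 1e-8, 200);    % tolerance and maximum number of iterations

Inside l1-magic, the function handle plays the role of the (reduced) Newton systems derived in the appendices below, assembled from applications of A, At, and the discrete difference operators.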

Appendix

A  ℓ1 minimization with equality constraints

When x, A and b are real, then (P1) can be recast as the linear program

    min_{x,u}  Σ_i u_i    subject to    x_i − u_i ≤ 0,
                                        −x_i − u_i ≤ 0,
                                        Ax = b,

which can be solved using the standard primal-dual algorithm outlined in Section 2.1 (again, see [2, Chap. 11] for a full discussion). Set

    f_{u1;i} := x_i − u_i,      f_{u2;i} := −x_i − u_i,

with λ_{u1;i}, λ_{u2;i} the corresponding dual variables, and let f_{u1} be the vector ( f_{u1;1}, ..., f_{u1;N} )^T (and likewise for f_{u2}, λ_{u1}, λ_{u2}). Note that

    ∇f_{u1;i} = ( δ_i ; −δ_i ),    ∇f_{u2;i} = ( −δ_i ; −δ_i ),    ∇²f_{u1;i} = 0,    ∇²f_{u2;i} = 0,

where δ_i is the standard basis vector for component i. Thus at a point (x, u; v, λ_{u1}, λ_{u2}), the central and dual residuals are

    r_cent = ( −Λ_{u1} f_{u1} ; −Λ_{u2} f_{u2} ) − (1/τ) 1,
    r_dual = ( λ_{u1} − λ_{u2} + A^T v ; 1 − λ_{u1} − λ_{u2} ),

and the Newton step (5) is given by

    [ Σ_{11}   Σ_{12}   A^T ] [Δx]   [ w_1 ]    [ (1/τ)·( f_{u1}^{-1} − f_{u2}^{-1} ) − A^T v ]
    [ Σ_{12}   Σ_{11}   0   ] [Δu] = [ w_2 ] := [ −1 − (1/τ)·( f_{u1}^{-1} + f_{u2}^{-1} )    ]
    [ A        0        0   ] [Δv]   [ w_3 ]    [ b − Ax                                      ],

with

    Σ_{11} = −Λ_{u1} F_{u1}^{-1} − Λ_{u2} F_{u2}^{-1},      Σ_{12} = Λ_{u1} F_{u1}^{-1} − Λ_{u2} F_{u2}^{-1}.

(The F_{u1}, F_{u2} are diagonal matrices with (F_{u1})_{ii} = f_{u1;i}, and f_{u1}^{-1} denotes the vector with entries 1/f_{u1;i}.) Setting

    Σ_x = Σ_{11} − Σ_{12}² Σ_{11}^{-1},

we can eliminate

    Δu = Σ_{11}^{-1} ( w_2 − Σ_{12} Δx ),

and solve

    A Σ_x^{-1} A^T Δv = A Σ_x^{-1} ( w_1 − Σ_{12} Σ_{11}^{-1} w_2 ) − w_3,
    Σ_x Δx = w_1 − Σ_{12} Σ_{11}^{-1} w_2 − A^T Δv.

This is a K × K positive definite system of equations, and can be solved using conjugate gradients.

Given Δx, Δu, Δv, we calculate the change in the inequality dual variables as in (4):

    Δλ_{u1} = −Λ_{u1} F_{u1}^{-1} ( Δx − Δu ) − λ_{u1} − (1/τ) f_{u1}^{-1},
    Δλ_{u2} =  Λ_{u2} F_{u2}^{-1} ( Δx + Δu ) − λ_{u2} − (1/τ) f_{u2}^{-1}.
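As a small-scale sanity check (not part of the package), the LP recast at the top of this appendix can also be handed to a generic solver; the sketch below uses linprog from the MATLAB Optimization Toolbox with the stacked variable z = (x, u):

% Recover a sparse x0 from b = A*x0 by solving the LP recast of (P1).
N = 64; K = 32; T = 5;
A  = randn(K,N);
x0 = zeros(N,1); q = randperm(N); x0(q(1:T)) = sign(randn(T,1));
b  = A*x0;
f    = [zeros(N,1); ones(N,1)];        % minimize sum(u)
Aine = [ eye(N), -eye(N);              %  x - u <= 0
        -eye(N), -eye(N)];             % -x - u <= 0
bine = zeros(2*N,1);
Aeq  = [A, zeros(K,N)];                % Ax = b
zopt = linprog(f, Aine, bine, Aeq, b);
xhat = zopt(1:N);                      % matches x0 when x0 is sparse enough

This route forms everything explicitly and is only practical for small problems; the point of l1eq_pd is to solve the same LP with the structured, matrix-free Newton step derived above.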

B  ℓ1 norm approximation

The ℓ1 norm approximation problem (PA) can also be recast as a linear program:

    min_{x,u}  Σ_{m=1}^{M} u_m    subject to    Ax − u − y ≤ 0,
                                                −Ax − u + y ≤ 0

(recall that, unlike the other six problems, here the M × N matrix A has more rows than columns). For the primal-dual algorithm, we define

    f_{u1} = Ax − u − y,      f_{u2} = −Ax − u + y.

Given a vector of weights σ ∈ R^M,

    Σ_m σ_m ∇f_{u1;m} = ( A^T σ ; −σ ),      Σ_m σ_m ∇f_{u1;m} ∇f_{u1;m}^T = [ A^T Σ A   −A^T Σ ]
                                                                             [ −Σ A       Σ    ],

    Σ_m σ_m ∇f_{u2;m} = ( −A^T σ ; −σ ),     Σ_m σ_m ∇f_{u2;m} ∇f_{u2;m}^T = [ A^T Σ A    A^T Σ ]
                                                                             [ Σ A        Σ    ],

where Σ = diag(σ). At a point (x, u; λ_{u1}, λ_{u2}), the dual residual is

    r_dual = ( A^T ( λ_{u1} − λ_{u2} ) ; 1 − λ_{u1} − λ_{u2} ),

and the Newton step is the solution to

    [ A^T Σ_{11} A    A^T Σ_{12} ] [Δx]   [ w_1 ]    [ (1/τ)·A^T ( f_{u1}^{-1} − f_{u2}^{-1} )  ]
    [ Σ_{12} A        Σ_{11}     ] [Δu] = [ w_2 ] := [ −1 − (1/τ)·( f_{u1}^{-1} + f_{u2}^{-1} ) ],

where

    Σ_{11} = −Λ_{u1} F_{u1}^{-1} − Λ_{u2} F_{u2}^{-1},      Σ_{12} = Λ_{u1} F_{u1}^{-1} − Λ_{u2} F_{u2}^{-1}.

Setting

    Σ_x = Σ_{11} − Σ_{12}² Σ_{11}^{-1},

we can eliminate

    Δu = Σ_{11}^{-1} ( w_2 − Σ_{12} A Δx ),

and solve

    A^T Σ_x A Δx = w_1 − A^T Σ_{12} Σ_{11}^{-1} w_2

for Δx. Again, A^T Σ_x A is an N × N symmetric positive definite matrix (it is straightforward to verify that each element on the diagonal of Σ_x will be strictly positive), and so the Conjugate Gradients algorithm can be used for large-scale problems.

Given Δx, Δu, the step directions for the inequality dual variables are given by

    Δλ_{u1} = −Λ_{u1} F_{u1}^{-1} ( A Δx − Δu ) − λ_{u1} − (1/τ) f_{u1}^{-1},
    Δλ_{u2} =  Λ_{u2} F_{u2}^{-1} ( A Δx + Δu ) − λ_{u2} − (1/τ) f_{u2}^{-1}.


C  ℓ1 Dantzig selection

An equivalent linear program to (PD) in the real case is given by

    min_{x,u}  Σ_i u_i    subject to    x − u ≤ 0,
                                        −x − u ≤ 0,
                                        A^T r − ε ≤ 0,
                                        −A^T r − ε ≤ 0,

where r = Ax − b. Taking

    f_{u1} = x − u,    f_{u2} = −x − u,    f_{ε1} = A^T r − ε,    f_{ε2} = −A^T r − ε,

the dual residual at a point (x, u; λ_{u1}, λ_{u2}, λ_{ε1}, λ_{ε2}) is

    r_dual = ( λ_{u1} − λ_{u2} + A^T A ( λ_{ε1} − λ_{ε2} ) ; 1 − λ_{u1} − λ_{u2} ),

and the Newton step is the solution to

    [ A^T A Σ_a A^T A + Σ_{11}    Σ_{12} ] [Δx]   [ w_1 ]    [ (1/τ)·( f_{u1}^{-1} − f_{u2}^{-1} + A^T A ( f_{ε1}^{-1} − f_{ε2}^{-1} ) ) ]
    [ Σ_{12}                      Σ_{11} ] [Δu] = [ w_2 ] := [ −1 − (1/τ)·( f_{u1}^{-1} + f_{u2}^{-1} )                                  ],

where

    Σ_{11} = −Λ_{u1} F_{u1}^{-1} − Λ_{u2} F_{u2}^{-1},    Σ_{12} = Λ_{u1} F_{u1}^{-1} − Λ_{u2} F_{u2}^{-1},    Σ_a = −Λ_{ε1} F_{ε1}^{-1} − Λ_{ε2} F_{ε2}^{-1}.

Again setting

    Σ_x = Σ_{11} − Σ_{12}² Σ_{11}^{-1},

we can eliminate

    Δu = Σ_{11}^{-1} ( w_2 − Σ_{12} Δx ),

and solve

    ( A^T A Σ_a A^T A + Σ_x ) Δx = w_1 − Σ_{12} Σ_{11}^{-1} w_2

for Δx. As before, the system is symmetric positive definite, and the CG algorithm can be used to solve it.

Given Δx, Δu, the step directions for the inequality dual variables are given by

    Δλ_{u1} = −Λ_{u1} F_{u1}^{-1} ( Δx − Δu ) − λ_{u1} − (1/τ) f_{u1}^{-1},
    Δλ_{u2} =  Λ_{u2} F_{u2}^{-1} ( Δx + Δu ) − λ_{u2} − (1/τ) f_{u2}^{-1},
    Δλ_{ε1} = −Λ_{ε1} F_{ε1}^{-1} ( A^T A Δx ) − λ_{ε1} − (1/τ) f_{ε1}^{-1},
    Δλ_{ε2} =  Λ_{ε2} F_{ε2}^{-1} ( A^T A Δx ) − λ_{ε2} − (1/τ) f_{ε2}^{-1}.


D  ℓ1 minimization with quadratic constraints

The quadratically constrained ℓ1 minimization problem (P2) can be recast as the second-order cone program

    min_{x,u}  Σ_i u_i    subject to    x − u ≤ 0,
                                        −x − u ≤ 0,
                                        ||Ax − b||_2 − ε ≤ 0.

Taking

    f_{u1} = x − u,    f_{u2} = −x − u,    f_ε = (1/2) ( ||Ax − b||_2² − ε² ),

we can write the Newton step (as in (8)) at a point (x, u), for a given τ, as

    [ Σ_{11} − (1/f_ε) A^T A + (1/f_ε²) A^T r r^T A    Σ_{12} ] [Δx]   [ w_1 ]    [ f_{u1}^{-1} − f_{u2}^{-1} + (1/f_ε) A^T r ]
    [ Σ_{12}                                           Σ_{11} ] [Δu] = [ w_2 ] := [ −τ·1 − f_{u1}^{-1} − f_{u2}^{-1}          ],

where r = Ax − b, and

    Σ_{11} = F_{u1}^{-2} + F_{u2}^{-2},      Σ_{12} = −F_{u1}^{-2} + F_{u2}^{-2}.

As before, we set

    Σ_x = Σ_{11} − Σ_{12}² Σ_{11}^{-1},

and eliminate

    Δu = Σ_{11}^{-1} ( w_2 − Σ_{12} Δx ),

leaving us with the reduced system

    ( Σ_x − (1/f_ε) A^T A + (1/f_ε²) A^T r r^T A ) Δx = w_1 − Σ_{12} Σ_{11}^{-1} w_2,

which is symmetric positive definite and can be solved using CG.


E  Total variation minimization with equality constraints

The equality constrained TV minimization problem

    min_x  TV(x)    subject to    Ax = b

can be rewritten as the SOCP

    min_{x,t}  Σ_{ij} t_{ij}    subject to    ||D_{ij} x||_2 ≤ t_{ij},    Ax = b.

Defining the inequality functions

    f_{t;ij} = (1/2) ( ||D_{ij} x||_2² − t_{ij}² ),    i, j = 1, ..., n,            (9)

we have

    ∇f_{t;ij} = ( D_{ij}^T D_{ij} x ; −t_{ij} δ_{ij} ),

    ∇f_{t;ij} ∇f_{t;ij}^T = [ D_{ij}^T D_{ij} x x^T D_{ij}^T D_{ij}    −t_{ij} D_{ij}^T D_{ij} x δ_{ij}^T ]
                            [ −t_{ij} δ_{ij} x^T D_{ij}^T D_{ij}       t_{ij}² δ_{ij} δ_{ij}^T           ],

    ∇²f_{t;ij} = [ D_{ij}^T D_{ij}    0                  ]
                 [ 0                  −δ_{ij} δ_{ij}^T   ],

where δ_{ij} is the Kronecker vector that is 1 in entry ij and zero elsewhere. For future reference,

    Σ_{ij} σ_{ij} ∇f_{t;ij} = ( D_h^T Σ D_h x + D_v^T Σ D_v x ; −Σ t ),

    Σ_{ij} σ_{ij} ∇f_{t;ij} ∇f_{t;ij}^T = [ B Σ B^T      −B Σ T ]
                                          [ −T Σ B^T      Σ T²  ],

    Σ_{ij} σ_{ij} ∇²f_{t;ij} = [ D_h^T Σ D_h + D_v^T Σ D_v    0  ]
                               [ 0                            −Σ ],

where Σ = diag{σ_{ij}}, T = diag{t_{ij}}, D_h has the D_{h;ij} as rows (and likewise for D_v), and B is a matrix that depends on x:

    B = D_h^T Σ_∂h + D_v^T Σ_∂v,    with    Σ_∂h = diag( D_h x ),   Σ_∂v = diag( D_v x ).
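For reference, here is a minimal sketch (not part of the package) of D_h, D_v, and TV(x) from Section 1 formed explicitly as sparse matrices, for an n × n image with column-wise vectorization; the l1-magic code applies the same differences without storing them.

% Sparse discrete-gradient operators of Section 1 and the TV functional.
n  = 256;
e  = ones(n,1);
D1 = spdiags([-e e], [0 1], n, n);   % 1-D forward difference
D1(n,:) = 0;                         % zero boundary row (the i = n / j = n case)
Dh = kron(speye(n), D1);             % differences in the row index i
Dv = kron(D1, speye(n));             % differences in the column index j
X  = rand(n);                        % any n x n test image
xv = X(:);
tv = sum(sqrt((Dh*xv).^2 + (Dv*xv).^2));   % TV(x) = sum_ij ||D_ij x||_2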

The Newton system (8) for the log-barrier algorithm is then

    [ H            B Σ_{12}    A^T ] [Δx]   [ w_1 ]    [ D_h^T F_t^{-1} D_h x + D_v^T F_t^{-1} D_v x ]
    [ Σ_{12} B^T   Σ_{22}      0   ] [Δt] = [ w_2 ] := [ −τ·1 − F_t^{-1} t                           ]
    [ A            0           0   ] [ v ]   [ 0  ]     [ 0                                           ],

where

    H = −D_h^T F_t^{-1} D_h − D_v^T F_t^{-1} D_v + B F_t^{-2} B^T,
    Σ_{12} = −T F_t^{-2},      Σ_{22} = F_t^{-1} + T² F_t^{-2}.

Eliminating Δt,

    Δt = Σ_{22}^{-1} ( w_2 − Σ_{12} B^T Δx ),

the reduced (N + K) × (N + K) system is

    [ H'   A^T ] [Δx]   [ w_1' ]
    [ A    0   ] [ v ] = [ 0    ],                                                  (10)

with

    H'   = H − B Σ_{12}² Σ_{22}^{-1} B^T
         = D_h^T ( Σ_b Σ_∂h² − F_t^{-1} ) D_h + D_v^T ( Σ_b Σ_∂v² − F_t^{-1} ) D_v
           + D_h^T ( Σ_b Σ_∂h Σ_∂v ) D_v + D_v^T ( Σ_b Σ_∂h Σ_∂v ) D_h,
    w_1' = w_1 − B Σ_{12} Σ_{22}^{-1} w_2
         = w_1 − ( D_h^T Σ_∂h + D_v^T Σ_∂v ) Σ_{12} Σ_{22}^{-1} w_2,
    Σ_b  = F_t^{-2} − Σ_{12}² Σ_{22}^{-1}.

The system of equations (10) is symmetric, but not positive definite. Note that D_h and D_v are very sparse matrices, and hence can be stored and applied very efficiently. This allows us to again solve the system above using an iterative method such as SYMMLQ [16].


F  Total variation minimization with quadratic constraints

We can rewrite (TV2) as the SOCP

    min_{x,t}  Σ_{ij} t_{ij}    subject to    ||D_{ij} x||_2 ≤ t_{ij},    i, j = 1, ..., n,
                                              ||Ax − b||_2 ≤ ε,

where D_{ij} is as before. Taking f_{t;ij} as in (9) and

    f_ε = (1/2) ( ||Ax − b||_2² − ε² ),

we have (with r = Ax − b)

    ∇f_ε = ( A^T r ; 0 ),      ∇f_ε ∇f_ε^T = [ A^T r r^T A   0 ],      ∇²f_ε = [ A^T A   0 ].
                                             [ 0             0 ]               [ 0       0 ]

Also, as before,

    ∇²f_{t;ij} = [ D_{ij}^T D_{ij}    0                ].
                 [ 0                  −δ_{ij} δ_{ij}^T ]

The Newton system is similar to that in the equality constraints case:

    [ H            B Σ_{12} ] [Δx]   [ w_1 ]    [ D_h^T F_t^{-1} D_h x + D_v^T F_t^{-1} D_v x + (1/f_ε) A^T r ]
    [ Σ_{12} B^T   Σ_{22}   ] [Δt] = [ w_2 ] := [ −τ·1 − F_t^{-1} t                                           ],

where

    H = −D_h^T F_t^{-1} D_h − D_v^T F_t^{-1} D_v + B F_t^{-2} B^T − (1/f_ε) A^T A + (1/f_ε²) A^T r r^T A,
    Σ_{12} = −T F_t^{-2},      Σ_{22} = F_t^{-1} + T² F_t^{-2}.

Again eliminating Δt,

    Δt = Σ_{22}^{-1} ( w_2 − Σ_{12} B^T Δx ),

the key system is

    H' Δx = w_1 − ( D_h^T Σ_∂h + D_v^T Σ_∂v ) Σ_{12} Σ_{22}^{-1} w_2,

where

    H'  = H − B Σ_{12}² Σ_{22}^{-1} B^T
        = D_h^T ( Σ_b Σ_∂h² − F_t^{-1} ) D_h + D_v^T ( Σ_b Σ_∂v² − F_t^{-1} ) D_v
          + D_h^T ( Σ_b Σ_∂h Σ_∂v ) D_v + D_v^T ( Σ_b Σ_∂h Σ_∂v ) D_h
          − (1/f_ε) A^T A + (1/f_ε²) A^T r r^T A,
    Σ_b = F_t^{-2} − Σ_{12}² Σ_{22}^{-1}.

The system above is symmetric positive definite, and can be solved with CG.


G  Total variation minimization with bounded residual correlation

The TV Dantzig problem (TVD) has an equivalent SOCP as well:

    min_{x,t}  Σ_{ij} t_{ij}    subject to    ||D_{ij} x||_2 ≤ t_{ij},    i, j = 1, ..., n,
                                              A^T ( Ax − b ) − ε ≤ 0,
                                              −A^T ( Ax − b ) − ε ≤ 0.

The inequality constraint functions are

    f_{t;ij} = (1/2) ( ||D_{ij} x||_2² − t_{ij}² ),    i, j = 1, ..., n,
    f_{ε1} = A^T ( Ax − b ) − ε,
    f_{ε2} = −A^T ( Ax − b ) − ε,

with

    Σ_i σ_i ∇f_{ε1;i} = ( A^T A σ ; 0 ),      Σ_i σ_i ∇f_{ε2;i} = −( A^T A σ ; 0 ),

    Σ_i σ_i ∇f_{ε1;i} ∇f_{ε1;i}^T = Σ_i σ_i ∇f_{ε2;i} ∇f_{ε2;i}^T = [ A^T A Σ A^T A   0 ].
                                                                    [ 0               0 ]

Thus the log-barrier Newton system is nearly the same as in the quadratically constrained case:

    [ H            B Σ_{12} ] [Δx]   [ w_1 ]    [ D_h^T F_t^{-1} D_h x + D_v^T F_t^{-1} D_v x + A^T A ( f_{ε1}^{-1} − f_{ε2}^{-1} ) ]
    [ Σ_{12} B^T   Σ_{22}   ] [Δt] = [ w_2 ] := [ −τ·1 − F_t^{-1} t                                                                 ],

where

    H = −D_h^T F_t^{-1} D_h − D_v^T F_t^{-1} D_v + B F_t^{-2} B^T + A^T A Σ_a A^T A,
    Σ_{12} = −T F_t^{-2},      Σ_{22} = F_t^{-1} + T² F_t^{-2},      Σ_a = F_{ε1}^{-2} + F_{ε2}^{-2}.

Eliminating Δt as before,

    Δt = Σ_{22}^{-1} ( w_2 − Σ_{12} B^T Δx ),

the key system is

    H' Δx = w_1 − ( D_h^T Σ_∂h + D_v^T Σ_∂v ) Σ_{12} Σ_{22}^{-1} w_2,

where

    H'  = D_h^T ( Σ_b Σ_∂h² − F_t^{-1} ) D_h + D_v^T ( Σ_b Σ_∂v² − F_t^{-1} ) D_v
          + D_h^T ( Σ_b Σ_∂h Σ_∂v ) D_v + D_v^T ( Σ_b Σ_∂h Σ_∂v ) D_h + A^T A Σ_a A^T A,
    Σ_b = F_t^{-2} − Σ_{12}² Σ_{22}^{-1}.


References

[1] F. Alizadeh and D. Goldfarb. Second-order cone programming. Math. Program., Ser. B, 95:3-51, 2003.

[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

[3] E. Candès and J. Romberg. Quantitative robust uncertainty principles and optimally sparse decompositions. To appear in Foundations of Comput. Math., 2005.

[4] E. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. Submitted to IEEE Trans. Inform. Theory, June 2004. Available on the arXiv preprint server (math.GM).

[5] E. Candès, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Submitted to Communications on Pure and Applied Mathematics, March 2005.

[6] E. Candès and T. Tao. Near-optimal signal recovery from random projections and universal encoding strategies. Submitted to IEEE Trans. Inform. Theory, November 2004. Available on the arXiv preprint server: math.CA/0410542.

[7] E. Candès and T. Tao. The Dantzig selector: statistical estimation when p is much smaller than n. Manuscript, May 2005.

[8] E. J. Candès and T. Tao. Decoding by linear programming. To appear in IEEE Trans. Inform. Theory, December 2005.

[9] T. Chan, G. Golub, and P. Mulet. A nonlinear primal-dual method for total variation-based image restoration. SIAM J. Sci. Comput., 20:1964-1977, 1999.

[10] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM J. Sci. Comput., 20:33-61, 1998.

[11] D. Goldfarb and W. Yin. Second-order cone programming methods for total variation-based image restoration. Technical report, Columbia University, 2004.

[12] M. Hintermüller and G. Stadler. An infeasible primal-dual algorithm for TV-based inf-convolution-type image restoration. To appear in SIAM J. Sci. Comput., 2005.

[13] M. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret. Applications of second-order cone programming. Linear Algebra and its Applications, 284:193-228, 1998.

[14] Y. E. Nesterov and A. S. Nemirovski. Interior Point Polynomial Methods in Convex Programming. SIAM Publications, Philadelphia, 1994.

[15] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, New York, 1999.

[16] C. C. Paige and M. Saunders. Solution of sparse indefinite systems of linear equations. SIAM J. Numer. Anal., 12(4), September 1975.

[17] J. Renegar. A Mathematical View of Interior-Point Methods in Convex Optimization. MPS-SIAM Series on Optimization. SIAM, 2001.

[18] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60:259-268, 1992.

[19] J. R. Shewchuk. An introduction to the conjugate gradient method without the agonizing pain. Manuscript, August 1994.

[20] S. J. Wright. Primal-Dual Interior-Point Methods. SIAM Publications, 1997.


More information

Nonlinear Programming

Nonlinear Programming Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week

More information

Lecture 14: October 17

Lecture 14: October 17 1-725/36-725: Convex Optimization Fall 218 Lecture 14: October 17 Lecturer: Lecturer: Ryan Tibshirani Scribes: Pengsheng Guo, Xian Zhou Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:

More information

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications Professor M. Chiang Electrical Engineering Department, Princeton University March

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Lectures 9 and 10: Constrained optimization problems and their optimality conditions

Lectures 9 and 10: Constrained optimization problems and their optimality conditions Lectures 9 and 10: Constrained optimization problems and their optimality conditions Coralia Cartis, Mathematical Institute, University of Oxford C6.2/B2: Continuous Optimization Lectures 9 and 10: Constrained

More information

Pre-weighted Matching Pursuit Algorithms for Sparse Recovery

Pre-weighted Matching Pursuit Algorithms for Sparse Recovery Journal of Information & Computational Science 11:9 (214) 2933 2939 June 1, 214 Available at http://www.joics.com Pre-weighted Matching Pursuit Algorithms for Sparse Recovery Jingfei He, Guiling Sun, Jie

More information

LINEAR AND NONLINEAR PROGRAMMING

LINEAR AND NONLINEAR PROGRAMMING LINEAR AND NONLINEAR PROGRAMMING Stephen G. Nash and Ariela Sofer George Mason University The McGraw-Hill Companies, Inc. New York St. Louis San Francisco Auckland Bogota Caracas Lisbon London Madrid Mexico

More information

Lecture: Duality of LP, SOCP and SDP

Lecture: Duality of LP, SOCP and SDP 1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:

More information

EE 227A: Convex Optimization and Applications October 14, 2008

EE 227A: Convex Optimization and Applications October 14, 2008 EE 227A: Convex Optimization and Applications October 14, 2008 Lecture 13: SDP Duality Lecturer: Laurent El Ghaoui Reading assignment: Chapter 5 of BV. 13.1 Direct approach 13.1.1 Primal problem Consider

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY Uplink Downlink Duality Via Minimax Duality. Wei Yu, Member, IEEE (1) (2)

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY Uplink Downlink Duality Via Minimax Duality. Wei Yu, Member, IEEE (1) (2) IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY 2006 361 Uplink Downlink Duality Via Minimax Duality Wei Yu, Member, IEEE Abstract The sum capacity of a Gaussian vector broadcast channel

More information

Lecture 14: Newton s Method

Lecture 14: Newton s Method 10-725/36-725: Conve Optimization Fall 2016 Lecturer: Javier Pena Lecture 14: Newton s ethod Scribes: Varun Joshi, Xuan Li Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer: These notes

More information

CHAPTER 2: QUADRATIC PROGRAMMING

CHAPTER 2: QUADRATIC PROGRAMMING CHAPTER 2: QUADRATIC PROGRAMMING Overview Quadratic programming (QP) problems are characterized by objective functions that are quadratic in the design variables, and linear constraints. In this sense,

More information

What s New in Active-Set Methods for Nonlinear Optimization?

What s New in Active-Set Methods for Nonlinear Optimization? What s New in Active-Set Methods for Nonlinear Optimization? Philip E. Gill Advances in Numerical Computation, Manchester University, July 5, 2011 A Workshop in Honor of Sven Hammarling UCSD Center for

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

Second-Order Cone Program (SOCP) Detection and Transformation Algorithms for Optimization Software

Second-Order Cone Program (SOCP) Detection and Transformation Algorithms for Optimization Software and Second-Order Cone Program () and Algorithms for Optimization Software Jared Erickson JaredErickson2012@u.northwestern.edu Robert 4er@northwestern.edu Northwestern University INFORMS Annual Meeting,

More information

Support Vector Machines: Maximum Margin Classifiers

Support Vector Machines: Maximum Margin Classifiers Support Vector Machines: Maximum Margin Classifiers Machine Learning and Pattern Recognition: September 16, 2008 Piotr Mirowski Based on slides by Sumit Chopra and Fu-Jie Huang 1 Outline What is behind

More information