Homework Set #8 - Solutions


EE 150 - Applications of Convex Optimization in Signal Processing and Communications
Dr. Andre Tkacenko, JPL, Third Term 2011-2012
Homework Set #8 - Solutions

1. For all parts of this problem, we will use the fact that H_R(f) can be expressed as

H_R(f) = c(f)^T b, where [c(f)]_k = cos(2π(k−1)f), k = 1, ..., M+1.

In addition, we will define the following functions for the sake of convenience:

g_1(b) ≜ sup_{0 ≤ f ≤ f_P} H_R(f) = sup_{0 ≤ f ≤ f_P} c(f)^T b,
g_2(b) ≜ inf_{0 ≤ f ≤ f_P} H_R(f) = inf_{0 ≤ f ≤ f_P} c(f)^T b,
g_3(b) ≜ sup_{f_S ≤ f ≤ 1/2} H_R(f) = sup_{f_S ≤ f ≤ 1/2} c(f)^T b,
g_4(b) ≜ inf_{f_S ≤ f ≤ 1/2} H_R(f) = inf_{f_S ≤ f ≤ 1/2} c(f)^T b.

Note that, as g_1 and g_3 are each the pointwise supremum of an affine (actually linear) function of b, they are each convex. Similarly, as g_2 and g_4 are each the pointwise infimum of an affine (or rather linear) function of b, they are each concave.

(a) The optimization problem here is the following:

minimize   δ_S
subject to g_1(b) ≤ 1 + δ_P,
           g_2(b) ≥ 1 − δ_P,
           g_3(b) ≤ δ_S,
           g_4(b) ≥ −δ_S,

with variables δ_S ∈ R and b ∈ R^{M+1}. This is identical to the following problem:

minimize   δ_S
subject to g_1(b) − (1 + δ_P) ≤ 0,
           −g_2(b) + (1 − δ_P) ≤ 0,
           g_3(b) − δ_S ≤ 0,
           −g_4(b) − δ_S ≤ 0.

As the objective δ_S is clearly convex, g_1 and g_3 are convex, and g_2 and g_4 are concave, this problem is a convex optimization problem.
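For reference, once the frequency variable is discretized (as is done in part (d) below), the problem of part (a) for a fixed order M can be passed to cvx directly. The following is a minimal sketch along these lines; the specifications mirror the part (d) code below, the order is fixed at M = 27 (the minimal length found in part (d)), and the grid size L here is an illustrative choice.

% Minimal cvx sketch for part (a) at a fixed order M (specs as in part (d))
M = 27; L = 1000;
f_p = 1./6; f_s = 1./5;
K = 10.^(0.1./20); delta_p = (K-1)./(K+1);   % 0.1 dB passband ripple
f = [0:L]./(2.*L);
I_P = find(f >= 0 & f <= f_p);
I_S = find(f >= f_s & f <= 0.5);
C = cos(2.*pi.*(0:M)'*f);                    % l-th column is c(f_l)
cvx_begin
    variables b(M+1) delta_S;
    minimize(delta_S);
    C(:,I_P)'*b <= 1 + delta_p;              % g_1(b) <= 1 + delta_P
    C(:,I_P)'*b >= 1 - delta_p;              % g_2(b) >= 1 - delta_P
    C(:,I_S)'*b <= delta_S;                  % g_3(b) <= delta_S
    C(:,I_S)'*b >= -delta_S;                 % g_4(b) >= -delta_S
cvx_end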

(b) Define the function g_5(b) as follows:

g_5(b) ≜ inf{φ : −δ_S ≤ H_R(f) ≤ δ_S for φ ≤ f ≤ 1/2}
       = inf{φ : −δ_S ≤ c(f)^T b ≤ δ_S for φ ≤ f ≤ 1/2}.

Note that g_5(b) is quasiconvex as a function of b. To see this, note that the φ_0-sublevel set C_{φ_0} is given by

C_{φ_0} = {b : g_5(b) ≤ φ_0} = {b : −δ_S ≤ c(f)^T b ≤ δ_S for φ_0 ≤ f ≤ 1/2},

which is the intersection of an infinite number of halfspaces. As such, C_{φ_0} is convex, and so g_5(b) is quasiconvex. With the function g_5 defined as such, the optimization problem becomes the following:

minimize   g_5(b)
subject to g_1(b) ≤ 1 + δ_P,
           g_2(b) ≥ 1 − δ_P,

with variable b ∈ R^{M+1}. This is identical to the problem

minimize   g_5(b)
subject to g_1(b) − (1 + δ_P) ≤ 0,
           −g_2(b) + (1 − δ_P) ≤ 0.

As the objective function g_5(b) is quasiconvex and the constraints are convex, this problem is a quasiconvex optimization problem.

(c) Define the function g_6(b) as follows:

g_6(b) ≜ min{k : b_{k+1} = ··· = b_M = 0}.

Note that g_6(b) is a quasiconvex function of b. To see this, note that the k_0-sublevel set C_{k_0} is given by

C_{k_0} = {b : g_6(b) ≤ k_0} = {b : b_{k_0+1} = ··· = b_M = 0},

which is an affine set. As such, C_{k_0} is convex, and so g_6(b) is quasiconvex. With the function g_6 defined as such, the optimization problem becomes the following:

minimize   g_6(b)
subject to g_1(b) ≤ 1 + δ_P,
           g_2(b) ≥ 1 − δ_P,
           g_3(b) ≤ δ_S,
           g_4(b) ≥ −δ_S,

with variable b ∈ R^{M+1}. This is identical to the following problem:

minimize   g_6(b)
subject to g_1(b) − (1 + δ_P) ≤ 0,
           −g_2(b) + (1 − δ_P) ≤ 0,
           g_3(b) − δ_S ≤ 0,
           −g_4(b) − δ_S ≤ 0.

As the objective function g_6(b) is quasiconvex and the constraints are convex, this problem is a quasiconvex optimization problem.
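No code is given for part (b) in these solutions, but its quasiconvex problem can be handled by bisection on φ, in the same way that part (d) below bisects on the order M. A minimal sketch, assuming the grid f, index set I_P, cosine matrix C, ripple values delta_p and delta_s, and a fixed order M have already been set up as in the part (d) code:

% Sketch: bisection on the stopband edge phi for part (b)
% (assumes f, C, I_P, f_p, delta_p, delta_s from the part (d) setup below)
phi_LL = f_p; phi_UL = 0.5; bisec_tol = 1e-3;
while (phi_UL - phi_LL >= bisec_tol)
    phi_MP = (phi_UL + phi_LL)./2;
    I_phi = find(f >= phi_MP & f <= 0.5);
    cvx_begin quiet
        variable b(M+1);
        C(:,I_P)'*b <= 1 + delta_p;
        C(:,I_P)'*b >= 1 - delta_p;
        abs(C(:,I_phi)'*b) <= delta_s;
    cvx_end
    if (strcmp(cvx_status, 'Solved'))
        phi_UL = phi_MP;    % feasible: the stopband can begin at phi_MP
    else
        phi_LL = phi_MP;    % infeasible: the stopband edge must move right
    end
end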

(d) After discretizing the frequency to f_l = l/(2L), l = 0, ..., L, define the index sets I_P and I_S as follows:

I_P ≜ {l : 0 ≤ f_l ≤ f_P}, I_S ≜ {l : f_S ≤ f_l ≤ 1/2}.

In other words, I_P and I_S denote the sets of discretized frequency indices corresponding to the passband and stopband, respectively. With these sets defined as such, the problem in part (c) can be solved via bisection by solving the following feasibility LP at each step:

find       b
subject to c(f_l)^T b − (1 + δ_P) ≤ 0, l ∈ I_P,
           −c(f_l)^T b + (1 − δ_P) ≤ 0, l ∈ I_P,
           c(f_l)^T b − δ_S ≤ 0, l ∈ I_S,
           −c(f_l)^T b − δ_S ≤ 0, l ∈ I_S.

Using cvx in MATLAB, we find M = 27, and that the optimal filter which achieves this minimal filter length has a magnitude response in dB as shown in Figure 1. Close-ups of the passband and stopband regions are shown in Figures 2(a) and (b), respectively, and clearly satisfy the design specifications.

Figure 1: Minimal length FIR low-pass filter magnitude response |H(e^{j2πf})| in dB versus f.

The following MATLAB code (employing cvx) was used to generate these results.

% Define filter specifications
f_p = 1./6;
f_s = 1./5;
delta_P_dB = 0.1;
delta_S_dB = -30;

% Map magnitude response specifications to amplitude response ones
K = 10.^(delta_P_dB./20);
delta_p = (K-1)./(K+1);
delta_s = 10.^(delta_S_dB./20);

Figure 2: Minimal length FIR low-pass filter magnitude response in dB: (a) close-up of the passband region 0 ≤ f ≤ f_P and (b) close-up of the stopband region f_S ≤ f ≤ 1/2.

% Set discretized frequency parameters
L = 16383;
f = [0:L]./(2.*L);
I_P = find(f >= 0 & f <= f_p);
I_S = find(f >= f_s & f <= 0.5);

% Define bisection parameters
M_LL = 0;
M_UL = 100;
bisec_tol = 1;

% Carry out bisection using a while-loop
while (M_UL - M_LL >= bisec_tol)
    % Set the midpoint filter order
    M_MP = round((M_UL + M_LL)./2);

    % Compute the discretized cosine vectors
    k = (0:M_MP)';
    C = cos(2.*pi.*k*f);

    % Solve the feasibility LP
    cvx_begin
        variable b(M_MP + 1);
        % Passband constraints
        C(:,I_P)'*b - (1 + delta_p) <= 0;
        -C(:,I_P)'*b + (1 - delta_p) <= 0;
        % Stopband constraints
        C(:,I_S)'*b - delta_s <= 0;
        -C(:,I_S)'*b - delta_s <= 0;
    cvx_end

    % Check for feasibility and update the bisection ignorance interval
    if (strcmp(cvx_status, 'Solved'))
        M_UL = M_MP;
        b_opt = b;
        M_opt = M_MP;
    else
        M_LL = M_MP;
    end
end

% Plot the optimal magnitude response in dB
f = linspace(0, 0.5, L+1);
k = (0:M_opt)';
C = cos(2.*pi.*k*f);
H_R = C'*b_opt;
plot(f, 20*log10(abs(H_R)), 'LineWidth', 1);
grid on;
xlabel('$f$', 'Interpreter', 'LaTeX', 'FontSize', 20);
ylabel('$\left| H\!\left( e^{j2\pi f} \right) \right|$ (dB)', ...
    'Interpreter', 'LaTeX', 'FontSize', 20);

2. (a) We get a subset P ⊆ R³ (which will soon be shown to be a polyhedron) of locations x that are consistent with the camera measurements. To find the smallest box that covers any subset of R³, all we need to do is minimize and maximize the (linear) functions x_1, x_2, and x_3 to get l and u. Here, P is a polyhedron, so we will end up solving 6 LPs, one to get each of l_1, l_2, l_3, u_1, u_2, and u_3.

To verify that P is a polyhedron, note that our measurements tell us that

v̂_i − ρ_i/2 ≤ (A_i x + b_i)/(c_i^T x + d_i) ≤ v̂_i + ρ_i/2, i = 1, ..., m.

Multiplying through by (c_i^T x + d_i), which is positive, we get

(v̂_i − ρ_i/2)(c_i^T x + d_i) ≤ A_i x + b_i ≤ (v̂_i + ρ_i/2)(c_i^T x + d_i), i = 1, ..., m,

which is a set of 2m linear inequalities in x. In particular, it defines a set P which is a polyhedron. To get l_k and u_k, we solve the LPs

minimize/maximize x_k
subject to (v̂_i − ρ_i/2)(c_i^T x + d_i) ≤ A_i x + b_i, i = 1, ..., m,
           A_i x + b_i ≤ (v̂_i + ρ_i/2)(c_i^T x + d_i), i = 1, ..., m,

for k = 1, 2, 3. Here, it is understood that l_k is found by solving the minimization problem, whereas u_k is found by solving the maximization problem.

(b) The following MATLAB script (using cvx) solves this specific problem instance.

% Load the data and extract the sections of the camera matrices
camera_data;
A1 = P1(1:2,1:3); b1 = P1(1:2,4); c1 = P1(3,1:3)'; d1 = P1(3,4);
A2 = P2(1:2,1:3); b2 = P2(1:2,4); c2 = P2(3,1:3)'; d2 = P2(3,4);
A3 = P3(1:2,1:3); b3 = P3(1:2,4); c3 = P3(3,1:3)'; d3 = P3(3,4);
A4 = P4(1:2,1:3); b4 = P4(1:2,4); c4 = P4(3,1:3)'; d4 = P4(3,4);

% Solve the 6 LPs to find the smallest bounding box consistent with the
% measurements
cvx_quiet(true);
for bounds = 1:6
    cvx_begin
        variable x(3);
        switch bounds
            case 1
                minimize(x(1));
            case 2
                maximize(x(1));
            case 3
                minimize(x(2));
            case 4
                maximize(x(2));
            case 5
                minimize(x(3));
            case 6
                maximize(x(3));
        end
        % Constraints for the 1st camera
        (vhat(:,1) - rho(1)/2)*(c1'*x + d1) <= A1*x + b1;
        A1*x + b1 <= (vhat(:,1) + rho(1)/2)*(c1'*x + d1);
        % Constraints for the 2nd camera
        (vhat(:,2) - rho(2)/2)*(c2'*x + d2) <= A2*x + b2;
        A2*x + b2 <= (vhat(:,2) + rho(2)/2)*(c2'*x + d2);
        % Constraints for the 3rd camera
        (vhat(:,3) - rho(3)/2)*(c3'*x + d3) <= A3*x + b3;
        A3*x + b3 <= (vhat(:,3) + rho(3)/2)*(c3'*x + d3);
        % Constraints for the 4th camera
        (vhat(:,4) - rho(4)/2)*(c4'*x + d4) <= A4*x + b4;
        A4*x + b4 <= (vhat(:,4) + rho(4)/2)*(c4'*x + d4);
    cvx_end
    val(bounds) = cvx_optval;
end

% Display the minimal bounding box bounds
disp(['l1 = ', num2str(val(1))]);
disp(['u1 = ', num2str(val(2))]);
disp(['l2 = ', num2str(val(3))]);
disp(['u2 = ', num2str(val(4))]);
disp(['l3 = ', num2str(val(5))]);
disp(['u3 = ', num2str(val(6))]);

The script returns the following results:

l_1 = , u_1 = , l_2 = , u_2 = , l_3 = , u_3 = .

3. (a) Consider the desired measurement equations

z_i = φ^{−1}(y_i), i = 1, ..., m.    (1)

The function φ is unknown and needs to be estimated here; however, we do have bounds on its derivative. Specifically, we have

1/β ≤ (φ^{−1})′(v) ≤ 1/α    (2)

for all v. To show this, note that we have

φ(φ^{−1}(v)) = v.

Differentiating both sides of the above relation and using the chain rule, we have

φ′(u) (φ^{−1})′(v) = 1, where u = φ^{−1}(v).

Hence, we get

(φ^{−1})′(v) = 1/φ′(u), where u = φ^{−1}(v).    (3)

But as α ≤ φ′(u) ≤ β for all u, where 0 < α < β, we have

1/β ≤ 1/φ′(u) ≤ 1/α for all u.

Substituting this into (3) thus yields

1/β ≤ (φ^{−1})′(v) ≤ 1/α for all v.

Returning to the problem at hand, note that from (1) and the mean value theorem, there is a v ∈ (y_i, y_{i+1}) for which we have

(φ^{−1})′(v) = (z_{i+1} − z_i)/(y_{i+1} − y_i)

for any i = 1, ..., m−1. Substituting this into (2), and assuming that the data is given with y_i in nondecreasing order, we get

(1/β)(y_{i+1} − y_i) ≤ z_{i+1} − z_i ≤ (1/α)(y_{i+1} − y_i), i = 1, ..., m−1.
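As a quick numerical sanity check of the bound (2), one can take any smooth monotone φ with known derivative limits and compare finite differences of its inverse against 1/β and 1/α. The sketch below uses an illustrative φ(u) = 2u + sin(u) (so α = 1 and β = 3); this is not the unknown function from the problem data.

% Sanity check of (2) with the illustrative phi(u) = 2u + sin(u),
% for which alpha = 1 <= phi'(u) = 2 + cos(u) <= 3 = beta
alpha = 1; beta = 3;
u = linspace(-5, 5, 2000);
v = 2.*u + sin(u);                 % v = phi(u), so u = phi^{-1}(v)
d = diff(u)./diff(v);              % finite-difference slopes of phi^{-1}
fprintf('min slope = %g >= 1/beta = %g\n', min(d), 1./beta);
fprintf('max slope = %g <= 1/alpha = %g\n', max(d), 1./alpha);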

Now, as v_i = z_i − a_i^T x and the v_i are i.i.d. with distribution N(0, σ²) for i = 1, ..., m, the log-likelihood function l(x, z) has the form

l(x, z) = −C_1 ∑_{i=1}^{m} (z_i − a_i^T x)² + C_2,

where C_1 ≜ 1/(2σ²) (which satisfies C_1 > 0) and C_2 ≜ −(m/2) log(2πσ²) are constants. Thus, to find an ML estimate of x and z, we can minimize the objective

f_0(x, z) ≜ ∑_{i=1}^{m} (z_i − a_i^T x)² = ‖z − Ax‖_2²

subject to the constraints. Note that here, the i-th row of A is a_i^T for i = 1, ..., m. This leads to the following problem:

minimize   ‖z − Ax‖_2²
subject to (1/β)(y_{i+1} − y_i) ≤ z_{i+1} − z_i ≤ (1/α)(y_{i+1} − y_i), i = 1, ..., m−1,

with variables x ∈ R^n and z ∈ R^m. This is a QP, and hence a convex optimization problem. Note that the problem does not depend on the noise variance σ².

(b) The following MATLAB code, which invokes cvx, was used to solve this specific problem instance.

% Load the nonlinear measurement data
nonlin_meas_data;

% Generate the (m-1) x m first-order difference matrix Delta_1
Delta_1_row = zeros(1,m);
Delta_1_row(1) = -1;
Delta_1_row(2) = 1;
Delta_1_col = zeros(m-1,1);
Delta_1_col(1) = -1;
Delta_1 = toeplitz(Delta_1_col, Delta_1_row);

% Solve the QP used to get the ML estimates of x and z using cvx
cvx_begin
    variable x(n);
    variable z(m);
    minimize(norm(z - A*x));
    (1./beta).*(Delta_1*y) <= Delta_1*z;
    Delta_1*z <= (1./alpha).*(Delta_1*y);
cvx_end

% Display the ML estimate of x
disp('ML estimate of x:');
disp(x);

% Plot the estimated function \widehat{\phi}_{\mathrm{ML}}
plot(z, y, 'LineWidth', 1);
xlabel('$u$', 'Interpreter', 'LaTeX', 'FontSize', 20);
ylabel('$\widehat{\phi}_{\mathrm{ML}}\!\left( u \right)$', ...
    'Interpreter', 'LaTeX', 'FontSize', 20);

This yields the ML estimate of x given as follows:

x_ml = [ ]^T.

Figure 3: Plot of the ML estimate of the nonlinear function φ(u), namely φ̂_ml(u), constructed by plotting [ẑ_ml]_i versus y_i for i = 1, ..., m.

A plot of the estimated function φ̂_ml(u) is shown in Figure 3.

4. In this problem, to eliminate outliers successively, at each iteration we compute the Löwner-John ellipsoid for the current set of data points, remove the point corresponding to the largest Lagrange multiplier, and add it to the set of outliers. Let D_k and O_k denote, respectively, the data and outlier sets at the k-th iteration. Also, let x_i^(k) denote the point removed at the k-th iteration, i.e., the one that corresponds to the largest Lagrange multiplier. Then, we have the following ellipsoidal peeling algorithm.

Initialization: D_0 ≜ {x_1, ..., x_N}, O_0 ≜ ∅.

Iteration: For k = 1, ..., N_rem, where N_rem denotes the total number of points to remove from the original data set, do the following.

1. Determine x_i^(k) as the point corresponding to the largest Lagrange multiplier of the following convex optimization problem:

minimize   log det(A^{−1})
subject to ‖A x_i + b‖_2 ≤ 1, x_i ∈ D_{k−1},

or, equivalently,

minimize   −(det A)^{1/n}
subject to ‖A x_i + b‖_2 ≤ 1, x_i ∈ D_{k−1},

with variables A ∈ S^n and b ∈ R^n. When it comes to solving instantiations of this problem using cvx, we will use the second equivalent form of the problem, as the det_rootn subroutine is more efficient than the log_det one.

2. Update the data and outlier sets as follows:

D_k = D_{k−1} \ {x_i^(k)}, O_k = O_{k−1} ∪ {x_i^(k)}.

3. Increment k as k ← k + 1.

4. If k ≤ N_rem, then go to step 1. Otherwise, stop.

The following MATLAB code, which uses cvx, was used to remove outliers from the given data set here.

% Load the data
ellip_peel_data;

% Set the number of outliers to remove
N_rem = 30;

% Initialize the ellipsoidal peeling algorithm
D = data;
O = [];
log_vol_array = [];
n = size(data,1);

% Run for-loop iteration to remove outliers successively
for k = 1:(N_rem+1)
    % Find the minimum volume ellipsoid containing all current data points
    cvx_begin
        variable A(n,n) symmetric;
        variable b(n);
        dual variable lambda;
        minimize(-det_rootn(A));
        %minimize(-log_det(A));
        lambda : norms(A*D + repmat(b,1,size(D,2))) <= 1;
    cvx_end

    % Find the maximum dual variable (i.e., the largest Lagrange multiplier)
    % and remove the corresponding data point
    [lambda_max, outlier_index] = max(lambda);

    % Update the outlier and data sets
    O = [O, D(:,outlier_index)];
    D(:,outlier_index) = [];

    % Store the logarithm of the volume of the optimal ellipsoid
    log_vol_array = [log_vol_array, 0.5.*log(1./det(A))];

    clear A b;
end

% Plot the log of the volume of the optimal ellipsoid as a function of the
% number of points removed
figure;
plot(0:N_rem, log_vol_array, 'LineWidth', 1);
grid on;
xlabel('$\mathrm{card}\!\left( \mathcal{O} \right)$', 'Interpreter', ...
    'LaTeX', 'FontSize', 20);
ylabel('$\log\!\left(\mathrm{vol}\!\left(\mathcal{E}\right)\right)$', ...
    'Interpreter', 'LaTeX', 'FontSize', 20);

% Plot the data points to determine visually the number of outliers present
figure;
plot(data(1,:), data(2,:), '.', 'LineWidth', 1);
xlabel('$x$', 'Interpreter', 'LaTeX', 'FontSize', 20);
ylabel('$y$', 'Interpreter', 'LaTeX', 'FontSize', 20);

A plot of the logarithm of the volume of the resulting minimum volume covering ellipsoid E, as a function of the number of points removed card(O), is shown in Figure 4(a). Along with this, a plot of the data points themselves is given in Figure 4(b).

Figure 4: Ellipsoidal peeling plots: (a) logarithm of the volume of the minimum covering ellipsoid, i.e., log(vol(E)), versus the number of data points removed, i.e., card(O), and (b) original data set.

From the eyeball test, it is clear that there are 4 outliers. This is not immediately clear from the plot in Figure 4(a), which exhibits two knees. From Figure 4(a), it can be seen that there is a dramatic decrease in the volume of the minimum covering ellipsoid for the first 3 points, and then some stalling for the next two points, which suggests that non-outlier points were erroneously added to O. This corresponds to the first knee of the curve. Then, there is another dramatic decrease at the next point, followed by a slow and gradual shrinking of the volume from that point on. This corresponds to the second knee of the curve. If we add up only the number of points leading to a dramatic decrease of the ellipsoidal volume, we obtain 4 outliers, consistent with the eyeball test.

The results of this simulation bring to light some of the advantages and disadvantages of this ellipsoidal peeling approach. If we accumulate only the points corresponding to large decreases in the ellipsoidal volume, then we obtain the correct number of outliers. However, the algorithm did not extract all the outliers before extracting some of the normal data points.

*5. (a) Let ν ∈ R^n denote the dual variable. Then, the Lagrangian is given by the following:

L(x, ν) = ‖Ax − b‖_2² + ∑_{k=1}^{n} ν_k (x_k² − 1)
        = x^T A^T A x − 2b^T A x + b^T b + x^T diag(ν) x − 1^T ν
        = x^T (A^T A + diag(ν)) x − 2b^T A x − 1^T ν + b^T b.    (4)

From (4), it can be seen that L(x, ν) is unbounded below in x if A^T A + diag(ν) ⋡ 0 or A^T b ∉ R(A^T A + diag(ν)). When both A^T A + diag(ν) ⪰ 0 and A^T b ∈ R(A^T A + diag(ν)), the infimum of the Lagrangian occurs when the gradient with respect to x vanishes. This yields

∇_x L(x, ν) = 2(A^T A + diag(ν)) x − 2A^T b = 0,

which is equivalent to the normal equations

(A^T A + diag(ν)) x = A^T b.

The solution to the normal equations is given by

x = (A^T A + diag(ν))^# A^T b,

where (·)^# denotes the pseudo-inverse. Substituting this into (4), it follows that the dual function g(ν) = inf_x L(x, ν) is given by

g(ν) = −1^T ν − b^T A (A^T A + diag(ν))^# A^T b + b^T b,  if A^T A + diag(ν) ⪰ 0 and A^T b ∈ R(A^T A + diag(ν)),
g(ν) = −∞,  otherwise.

Hence, the dual problem is the following:

maximize   −1^T ν − b^T A (A^T A + diag(ν))^# A^T b + b^T b
subject to A^T A + diag(ν) ⪰ 0,
           A^T b ∈ R(A^T A + diag(ν)).

Expressing the problem in epigraph form yields the following equivalent problem:

maximize   −1^T ν − t + b^T b
subject to b^T A (A^T A + diag(ν))^# A^T b ≤ t,
           A^T A + diag(ν) ⪰ 0,
           A^T b ∈ R(A^T A + diag(ν)).

In turn, this is identical to the problem

maximize   −1^T ν − t + b^T b
subject to t − (b^T A)(A^T A + diag(ν))^# (A^T b) ≥ 0,
           A^T A + diag(ν) ⪰ 0,
           A^T b ∈ R(A^T A + diag(ν)),

which, upon using Schur complements, leads to the following equivalent problem, which is an SDP:

maximize   −1^T ν − t + b^T b
subject to [A^T A + diag(ν), A^T b; b^T A, t] ⪰ 0.
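As a numerical check of this dual, for a small instance one can solve the SDP in cvx and compare its value against the true binary least-squares optimum found by brute-force enumeration; the dual optimal value must lower-bound the primal one. A minimal sketch with illustrative dimensions:

% Check the SDP dual lower bound on a small instance (illustrative sizes)
m = 10; n = 8;
A = randn(m,n); b = randn(m,1);
cvx_begin sdp quiet
    variables nu(n) t;
    maximize(-sum(nu) - t + b'*b);
    [A'*A + diag(nu), A'*b; b'*A, t] >= 0;
cvx_end
d_star = cvx_optval;
% Enumerate all 2^n sign vectors to get the true optimal value
p_star = Inf;
for i = 0:(2^n - 1)
    x = 2.*(dec2bin(i,n) - '0')' - 1;   % x in {-1,+1}^n
    p_star = min(p_star, norm(A*x - b).^2);
end
fprintf('dual value = %g <= p_star = %g\n', d_star, p_star);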

(b) We first write the dual problem as a minimization problem by negating the objective function. This leads to the problem

minimize   1^T ν + t − b^T b
subject to [A^T A + diag(ν), A^T b; b^T A, t] ⪰ 0.

To handle the linear matrix inequality (LMI) constraint, we introduce a Lagrange multiplier matrix of the form

[Z, −z; −z^T, λ] ⪰ 0

(which is positive semidefinite if and only if [Z, z; z^T, λ] ⪰ 0). With this, the Lagrangian is given by the following:

L(ν, t, Z, z, λ) = 1^T ν + t − b^T b − tr([Z, −z; −z^T, λ] [A^T A + diag(ν), A^T b; b^T A, t])
= 1^T ν + t − b^T b − (tr(Z (A^T A + diag(ν))) − tr(z b^T A) − z^T A^T b + λ t)
= 1^T ν + t − b^T b − tr(A^T A Z) − diag(Z)^T ν + b^T A z + z^T A^T b − λ t
= (1 − diag(Z))^T ν + t(1 − λ) − tr(A^T A Z) + 2b^T A z − b^T b.

This is unbounded below unless diag(Z) = 1 and λ = 1. As such, the dual function is given by

g(Z, z, λ) = −tr(A^T A Z) + 2b^T A z − b^T b,  if diag(Z) = 1 and λ = 1,
g(Z, z, λ) = −∞,  otherwise.

Thus, the dual problem of the SDP of part (a) is the following:

maximize   −tr(A^T A Z) + 2b^T A z − b^T b
subject to diag(Z) = 1,
           [Z, z; z^T, 1] ⪰ 0.

Expressing this maximization problem as a minimization problem by negating the objective leads to the following equivalent problem:

minimize   tr(A^T A Z) − 2b^T A z + b^T b
subject to diag(Z) = 1,
           [Z, z; z^T, 1] ⪰ 0.    (5)

This is the desired form here. To see that (5) is a relaxation of the original problem, note that the binary least-squares problem is equivalent to

minimize   tr(A^T A Z) − 2b^T A z + b^T b
subject to diag(Z) = 1,
           Z = z z^T.    (6)

Suppose we relax the equality constraint Z = z z^T to the weaker inequality constraint Z ⪰ z z^T. Using Schur complements, this weaker inequality constraint is equivalent to

[1, z^T; z, Z] ⪰ 0.

But note that we have the following:

[1, z^T; z, Z] = [0_n^T, 1; I_n, 0_n] [Z, z; z^T, 1] [0_n^T, 1; I_n, 0_n]^T ⪰ 0  ⟺  [Z, z; z^T, 1] ⪰ 0.

Substituting this equivalent weaker inequality constraint into (6), we get the problem from (5). If we have

rank([Z, z; z^T, 1]) = 1

at the optimum of (5), which is equivalent to saying that Z = z z^T, then we satisfy the equality constraint Z = z z^T of (6), and so the relaxation is exact in this case. Thus, the optimal values of problems (6) and (5) are equal, and the optimal solution z of (5) is optimal for (6).
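In practice, this exactness condition can be checked after solving (5) by inspecting the spectrum of the optimal [Z, z; z^T, 1]. A small sketch, assuming Z and z hold the optimal cvx variables returned by solving (5) (as in the part (d) code below):

% Check whether the relaxation (5) turned out to be exact
% (assumes Z and z hold the optimal values from solving (5) in cvx)
Y = [Z, z; z', 1];
lam = sort(eig(Y), 'descend');
if (lam(2) <= 1e-6*lam(1))   % numerically rank one, so Z = z*z'
    disp('Relaxation is exact; sign(z) solves the binary LS problem.');
else
    disp('Optimal Y has rank > 1; (5) only gives a lower bound.');
end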

(c) Consider the problem

minimize   E[‖Av − b‖_2²]
subject to E[v_k²] = 1, k = 1, ..., n.    (7)

Note that we have the following upon expanding the objective:

E[‖Av − b‖_2²] = E[(Av − b)^T (Av − b)]
= E[tr((Av − b)(Av − b)^T)]
= tr(E[(Av − b)(Av − b)^T])
= tr(E[A v v^T A^T − A v b^T − b v^T A^T + b b^T])
= tr(A E[v v^T] A^T − A (E[v]) b^T − b (E[v])^T A^T + b b^T)
= tr(A Z A^T − A z b^T − b z^T A^T + b b^T)    (8)
= tr(A^T A Z) − b^T A z − z^T A^T b + b^T b
= tr(A^T A Z) − 2b^T A z + b^T b.    (9)

Here, (8) follows from the fact that z = E[v] and Z = E[v v^T]. Similarly, for the left-hand side of the equality constraints of (7), we get the following:

E[v_k²] = Z_kk = [Z]_kk, k = 1, ..., n.    (10)

Finally, the covariance matrix of v, given by

C_v ≜ E[(v − E[v])(v − E[v])^T] = Z − z z^T,

must be positive semidefinite, i.e., C_v ⪰ 0. Hence, we must have Z − z z^T ⪰ 0, which was found in part (b) to be equivalent to

[Z, z; z^T, 1] ⪰ 0.

Combining this constraint with the results of (9) and (10), it follows that the problem from (7) is equivalent to the problem

minimize   tr(A^T A Z) − 2b^T A z + b^T b
subject to diag(Z) = 1,
           [Z, z; z^T, 1] ⪰ 0,

which is the same as (5) from part (b).
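The expansion leading to (9) is easy to verify by Monte Carlo for any fixed first and second moments; the sketch below draws v ∼ N(z, Z − z z^T), so that E[v] = z and E[v v^T] = Z. All sizes and data here are illustrative.

% Monte Carlo check of E||Av - b||_2^2 = tr(A'*A*Z) - 2*b'*A*z + b'*b
% (illustrative data; v ~ N(z, Z - z*z') has E[v] = z, E[v*v'] = Z)
m = 6; n = 4; N = 1e6;
A = randn(m,n); b = randn(m,1); z = randn(n,1);
S = randn(n); C_v = S*S';              % a valid covariance matrix
Z = C_v + z*z';                        % second moment consistent with z
V = sqrtm(C_v)*randn(n,N) + repmat(z,1,N);
lhs = mean(sum((A*V - repmat(b,1,N)).^2));
rhs = trace(A'*A*Z) - 2*b'*A*z + b'*b;
fprintf('Monte Carlo: %g, closed form: %g\n', lhs, rhs);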

(d) The following MATLAB code, employing cvx, was used to compute suboptimal feasible solutions using the desired heuristics.

% Initialize the problem data
randn('state', 0);
s = 0.5;
%s = 1;
%s = 2;
%s = 3;
m = 50;
n = 40;
A = randn(m,n);
xhat = sign(randn(n,1));
b = A*xhat + s.*randn(m,1);
f_xhat = norm(A*xhat - b).^2;

% (i) Compute the sign of the solution to the LS problem
x_a = sign(A\b);
f_x_a = norm(A*x_a - b).^2;

% (ii) Compute the sign of the solution to the SDP of part (b)
%cvx_precision high
cvx_begin sdp
    variable z(n);
    variable Z(n,n) symmetric;
    minimize(trace(A'*A*Z) - 2*b'*A*z + b'*b);
    [Z, z; z', 1] >= 0;
    diag(Z) == 1;
cvx_end
x_b = sign(z);
f_x_b = norm(A*x_b - b).^2;

% (iii) Compute the sign of the rank-one approximation of the optimal
% solution of the SDP of part (b)
Y = [Z, z; z', 1];
[V, D] = eig(Y);
[eig_sorted, eig_index_sorted] = sort(diag(D), 'descend');
v_1 = V(1:n, eig_index_sorted(1));
x_c = sign(v_1);
f_x_c = norm(A*x_c - b).^2;

% (iv) Compute the sign of the random samples with mean z and second
% moment Z
N = 100;
U = randn(n,N);
V_tilde = sqrtm(Z - z*z')*U + repmat(z,1,N);
X_d = sign(real(V_tilde));
R = A*X_d - repmat(b,1,N);
F_x_d = sum(R.^2);
[f_x_d, x_d_index] = min(F_x_d);
x_d = X_d(:,x_d_index);

% Report the objective function values for each approach
disp(['f_xhat = ', num2str(f_xhat)]);
disp(['f_x_a = ', num2str(f_x_a)]);
disp(['f_x_b = ', num2str(f_x_b)]);
disp(['f_x_c = ', num2str(f_x_c)]);
disp(['f_x_d = ', num2str(f_x_d)]);
disp(['SDP lower bound = ', num2str(cvx_optval)]);

% Report the square of the l_2-norm of the difference between each
% heuristic solution and the actual input to the problem
disp(['||xhat - x_a||_{2}^2 = ', num2str(norm(xhat-x_a).^2)]);
disp(['||xhat - x_b||_{2}^2 = ', num2str(norm(xhat-x_b).^2)]);
disp(['||xhat - x_c||_{2}^2 = ', num2str(norm(xhat-x_c).^2)]);
disp(['||xhat - x_d||_{2}^2 = ', num2str(norm(xhat-x_d).^2)]);

To generate samples of N(z, Z − z z^T), we used the following result from probability theory. If u ∼ N(μ, Σ), then w ≜ Bu + c is such that w ∼ N(Bμ + c, B Σ B^T). Thus, if u ∼ N(0_n, I_n), then w = (Z − z z^T)^{1/2} u + z is such that w ∼ N(z, Z − z z^T).

The following table lists, for each s, the values of f(x) = ‖Ax − b‖_2² for x = x̂ and the four suboptimal solutions (i)-(iv), along with the lower bound obtained from the SDP relaxation given in (5).

s     f(x̂)     f(x^(a))     f(x^(b))     f(x^(c))     f(x^(d))     SDP lower bound
0.5
1
2
3

From this table and the results of the simulations carried out here, we make the following observations.

- For s = 0.5, all heuristics return x̂. This is likely to be the global optimum, but that is not necessarily true. However, from the lower bound, we know that the global optimum is in the interval [ , ], so even if x̂ is not the global optimum, it is quite close.
- For higher values of s, the result from the first heuristic (x^(a)) is substantially worse than those derived from the SDP-based heuristics.
- All three SDP-based heuristics return x̂ for s = 1 and s = 2.
- For s = 3, the randomized rounding method returns x̂. The other SDP-based heuristics give slightly higher values.
