Lecture 4: Linear and quadratic problems
- Claud Hunt
- 6 years ago
1 Lecture 4: Linear and quadratic problems

  - linear programming: examples and applications
  - linear fractional programming
  - quadratic optimization problems: (quadratically constrained) quadratic programming, second-order cone programming
  - examples and applications
2 Linear program (LP)

  linear program:

      minimize    c^T x
      subject to  Gx ≤ h,  Ax = b

  (figure: polyhedron P with optimal point x_opt; level sets c^T x = constant)

  standard form LP (widely used in the LP literature & software):

      minimize    c^T x
      subject to  Ax = b,  x ≥ 0

  variations: e.g., maximize c^T x subject to Ax ≤ b
3 Force/moment generation with thrusters

  rigid body with center of mass at origin p = 0 ∈ R^2; n forces with magnitudes u_i, acting at p_i = (p_ix, p_iy), in directions θ_i

  (figure: rigid body with thruster of magnitude u_i at position (p_ix, p_iy), direction θ_i)

  resulting horizontal force:  F_x = Σ_{i=1}^n u_i cos θ_i
  resulting vertical force:    F_y = Σ_{i=1}^n u_i sin θ_i
  resulting torque:            T   = Σ_{i=1}^n (p_ix u_i sin θ_i − p_iy u_i cos θ_i)

  force limits: 0 ≤ u_i ≤ 1 (thrusters)

  fuel usage: u_1 + ... + u_n

  problem: find u_i that yield given desired forces and torque and minimize fuel usage
4 can be expressed as LP:

      minimize    1^T u
      subject to  Fu = f_des
                  0 ≤ u_i ≤ 1,  i = 1, ..., n

  where

      F = [ cos θ_1                      ...  cos θ_n
            sin θ_1                      ...  sin θ_n
            p_1x sin θ_1 − p_1y cos θ_1  ...  p_nx sin θ_n − p_ny cos θ_n ]

      f_des = (F_x^des, F_y^des, T^des),    1 = (1, 1, ..., 1)
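The thruster LP above can be solved numerically with an off-the-shelf LP solver. A minimal sketch, assuming scipy is available; the four thruster positions, directions, and the desired force/torque are made-up data for illustration:

```python
# Numerical sketch of the thruster LP with hypothetical data: 4 thrusters
# at unit distance from the center of mass.
import numpy as np
from scipy.optimize import linprog

# thruster positions p_i and directions theta_i (made up for this example)
p = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
theta = np.array([np.pi / 2, np.pi / 2, 0.0, 0.0])

# F maps thrust magnitudes u to (F_x, F_y, T)
F = np.vstack([
    np.cos(theta),
    np.sin(theta),
    p[:, 0] * np.sin(theta) - p[:, 1] * np.cos(theta),
])

f_des = np.array([0.5, 1.0, 0.0])  # desired (F_x, F_y, T)

# minimize 1^T u  subject to  F u = f_des,  0 <= u_i <= 1
res = linprog(c=np.ones(4), A_eq=F, b_eq=f_des, bounds=[(0.0, 1.0)] * 4)
print("fuel used:", res.fun)
```

For this data every feasible u uses exactly F_x + F_y = 1.5 units of fuel, so the solver mainly resolves the torque balance between the thruster pairs.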
5 Converting LP to standard form

  inequality constraints: write a_i^T x ≤ b_i as a_i^T x + s_i = b_i, s_i ≥ 0; s_i is called the slack variable associated with a_i^T x ≤ b_i

  unconstrained variables: write x_i ∈ R as x_i = x_i^+ − x_i^−, with x_i^+, x_i^− ≥ 0

  example: thruster problem in standard form

      minimize    [1^T 0] [u; s]
      subject to  [F 0; I I] [u; s] = [f_des; 1],   [u; s] ≥ 0
6 Piecewise-linear minimization

      minimize    max_i (c_i^T x + d_i)
      subject to  Ax ≤ b

  (figure: piecewise-linear function max_i (c_i^T x + d_i), the pointwise maximum of affine functions of x)

  express as

      minimize    t
      subject to  c_i^T x + d_i ≤ t,   Ax ≤ b

  an LP in the variables x ∈ R^n, t ∈ R
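The epigraph trick above is easy to exercise numerically. A minimal sketch, assuming scipy is available, on a made-up 1-D instance minimize max(x + 1, 1 − x, 0.5x):

```python
# Epigraph reformulation of piecewise-linear minimization as an LP,
# on a small 1-D instance: minimize max(x + 1, 1 - x, 0.5 x).
import numpy as np
from scipy.optimize import linprog

# affine pieces c_i x + d_i
c_i = np.array([1.0, -1.0, 0.5])
d_i = np.array([1.0, 1.0, 0.0])

# variables z = (x, t); minimize t  subject to  c_i x + d_i <= t
cost = np.array([0.0, 1.0])
A_ub = np.column_stack([c_i, -np.ones(3)])   # c_i x - t <= -d_i
b_ub = -d_i
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)

x_opt, t_opt = res.x
print(x_opt, t_opt)  # optimum at x = 0, where max(x+1, 1-x) = 1
```

Note the `bounds=(None, None)`: `linprog` defaults to nonnegative variables, but here x and t are free, as in the slide's formulation.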
7 Blind channel equalization via linear programming

  (figure: LTI channel H(ω) followed by equalizer G(ω))

  LTI channel with impulse response h: complex, unknown; QAM modulation

  goal: design equalizer G(ω) such that H(ω)G(ω) = e^{−jωD}, where D is a delay

  let x_k be the channel output and y_k the equalizer output; it is known that

      minimize    max_k |Re(y_k)|
      subject to  Re(g_1) + Im(g_1) = 1

  achieves blind channel equalization, where Re(g_1) + Im(g_1) = 1 is a normalizing constraint

  note that Re(y_k) = Σ_{i=−N}^{N} [Re(x_{k−i}) Re(g_i) − Im(x_{k−i}) Im(g_i)] is linear in g
8 Blind channel equalization via linear programming

  note that max_k |Re(y_k)| is nonsmooth in g

  smoothing approach: approximate the ∞-norm by the p-norm with p large,

      max_k |Re(y_k)|  ≈  ( Σ_k |Re(y_k)|^p )^{1/p}

  then remove the exponent 1/p to obtain a smooth objective, and apply gradient descent

  linear programming formulation:

      minimize    t
      subject to  Re(g_1) + Im(g_1) = 1
                  −t ≤ Σ_{i=−N}^{N} [Re(x_{k−i}) Re(g_i) − Im(x_{k−i}) Im(g_i)] ≤ t,  for all k

  the LP formulation requires fewer samples and enjoys faster convergence
9 Blind channel equalization via linear programming

  (figures: (a)–(c) channel output with its real and imaginary parts; (d)–(f) equalized output with its real and imaginary parts)
10 ℓ∞- and ℓ1-norm approximation

  constrained ℓ∞- (Chebyshev) approximation:

      minimize    ||Ax − b||_∞
      subject to  Fx ≤ g

  write as

      minimize    t
      subject to  −t·1 ≤ Ax − b ≤ t·1
                  Fx ≤ g

  constrained ℓ1-approximation:

      minimize    ||Ax − b||_1
      subject to  Fx ≤ g

  write as

      minimize    1^T y
      subject to  −y ≤ Ax − b ≤ y
                  Fx ≤ g
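The ℓ1 reformulation above can be checked on a toy problem. A minimal sketch, assuming scipy is available: fitting a single constant to three data points, where the ℓ1 solution is known to be the median of b:

```python
# LP reformulation of l1-approximation: A is a single column of ones, so the
# l1-optimal fit is the median of b.
import numpy as np
from scipy.optimize import linprog

A = np.ones((3, 1))
b = np.array([0.0, 1.0, 10.0])
m, n = A.shape

# variables z = (x, y); minimize 1^T y  subject to  -y <= A x - b <= y
cost = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)],     # A x - b <= y
                 [-A, -np.eye(m)]])   # b - A x <= y
b_ub = np.concatenate([b, -b])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))

x_l1 = res.x[:n]
print(x_l1, res.fun)  # x = median(b) = 1, objective = 1 + 0 + 9 = 10
```

Swapping the objective and constraints for the single-t version gives the ℓ∞ variant in the same way.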
11 Extensions of thruster problem

  opposing thruster pairs:

      minimize    Σ_i |u_i|
      subject to  Fu = f_des,   |u_i| ≤ 1,  i = 1, ..., n

  can express as LP

  given f_des, minimize the force/torque error:

      minimize    ||Fu − f_des||
      subject to  0 ≤ u_i ≤ 1,  i = 1, ..., n

  can express as LP (for the ℓ1 or ℓ∞ norm)

  given f_des, minimize the number of thrusters on:

      minimize    # thrusters on
      subject to  Fu = f_des,   0 ≤ u_i ≤ 1,  i = 1, ..., n

  can't express as LP (# thrusters on is quasiconcave!)
12 Design centering

  find largest ball inside a polyhedron  P = {x | a_i^T x ≤ b_i, i = 1, ..., m}

  (figure: polyhedron P with the largest inscribed ball, centered at x_c)

  the center is called the Chebyshev center

  the ball {x_c + u | ||u|| ≤ r} lies in P if and only if

      sup{ a_i^T x_c + a_i^T u | ||u|| ≤ r } ≤ b_i,  i = 1, ..., m,

  i.e.,

      a_i^T x_c + r ||a_i|| ≤ b_i,  i = 1, ..., m

  hence, finding the Chebyshev center is an LP:

      maximize    r
      subject to  a_i^T x_c + r ||a_i|| ≤ b_i,  i = 1, ..., m
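The Chebyshev-center LP is small enough to run directly. A minimal sketch, assuming scipy is available, on a polyhedron whose answer is obvious (the unit square, center (0.5, 0.5), radius 0.5):

```python
# Chebyshev center of the unit square {x | 0 <= x <= 1} via the LP
# maximize r  subject to  a_i^T x_c + r ||a_i|| <= b_i.
import numpy as np
from scipy.optimize import linprog

# P = {x | a_i^T x <= b_i}: the unit square in R^2
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
norms = np.linalg.norm(A, axis=1)

# variables z = (x_c, r); maximize r  <=>  minimize -r
cost = np.array([0.0, 0.0, -1.0])
A_ub = np.column_stack([A, norms])
res = linprog(cost, A_ub=A_ub, b_ub=b,
              bounds=[(None, None), (None, None), (0, None)])

xc, r = res.x[:2], res.x[2]
print(xc, r)  # center (0.5, 0.5), radius 0.5
```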
13 Linear fractional program

      minimize    (c^T x + d) / (f^T x + g)
      subject to  Ax ≤ b,   f^T x + g > 0

  objective function is quasiconvex; sublevel sets are polyhedra; like LP, can be solved very efficiently

  extension:

      minimize    max_{i=1,...,K} (c_i^T x + d_i) / (f_i^T x + g_i)
      subject to  Ax ≤ b,   f_i^T x + g_i > 0,  i = 1, ..., K

  objective function is quasiconvex; sublevel sets are polyhedra
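Because the objective is quasiconvex with polyhedral sublevel sets, a linear-fractional program can be solved by bisecting on the optimal value, with an LP feasibility problem at each step. A minimal sketch, assuming scipy is available, on a made-up 1-D instance minimize (x + 1)/(x + 2) over 0 ≤ x ≤ 3 (the objective is increasing, so the optimum is 1/2 at x = 0):

```python
# Bisection for a linear-fractional program: the sublevel-set test
# "(c^T x + d) <= t (f^T x + g) for some feasible x" is an LP feasibility
# problem, solved here with linprog (zero objective).
import numpy as np
from scipy.optimize import linprog

def sublevel_feasible(t):
    # exists x in [0, 3] with (1 - t) x + (1 - 2 t) <= 0 ?
    res = linprog(c=[0.0], A_ub=[[1.0 - t]], b_ub=[2.0 * t - 1.0],
                  bounds=[(0.0, 3.0)])
    return res.status == 0

lo, hi = 0.0, 1.0          # bracket on the optimal value
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if sublevel_feasible(mid):
        hi = mid           # optimal value is <= mid
    else:
        lo = mid           # optimal value is > mid
print("optimal value ~", hi)
```

The same scheme works for any quasiconvex objective once a convex feasibility test for the sublevel sets is available.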
14 Minimum-time optimal control

  state space model:

      x(t+1) = A x(t) + B u(t),  t = 0, 1, ..., K
      x(0) = x_0,   ||u(t)||_∞ ≤ 1,  t = 0, 1, ..., K

  variables: u(0), ..., u(K)

  settling time f(u(0), ..., u(K)) is  inf{ T | x(t) = 0 for T ≤ t ≤ K+1 }

  f is a quasiconvex function of (u(0), ..., u(K)):  f(u(0), u(1), ..., u(K)) ≤ T if and only if, for all t = T, ..., K+1,

      x(t) = A^t x_0 + A^{t−1} B u(0) + ... + B u(t−1) = 0

  min-time optimal control problem:

      minimize    f(u(0), u(1), ..., u(K))
      subject to  ||u(t)||_∞ ≤ 1,  t = 0, ..., K
15 Min-time control example

  three unit masses, connected by two unit springs with equilibrium length one; u(t) ∈ R^2 is the force on the left & right mass over the time interval (0.15t, 0.15(t+1)]

  (figure: three masses in a row, with forces u_1(t) and u_2(t) applied to the outer masses)

  problem: pick u(0), ..., u(K) to bring the masses to positions (−1, 0, 1) (at rest), as quickly as possible, from the initial condition (−3, 0, 2) (at rest)

  (figure: optimal solution; plots of u_1, u_2, and the positions of the left, center, and right mass versus t)
16 Optimal transmitter power allocation

  m transmitters, mn receivers, all at the same frequency

  transmitter i wants to transmit to its n receivers, labeled (i,j), j = 1, ..., n

  (figure: transmitter i, transmitter k, and receiver (i,j))

  A_ijk is the path gain from transmitter k to receiver (i,j)

  N_ij is the (self-)noise power of receiver (i,j)

  variables: transmitter powers p_k, k = 1, ..., m
17 at receiver (i,j):

  signal power:  S_ij = A_iji p_i

  noise plus interference power:  I_ij = Σ_{k≠i} A_ijk p_k + N_ij

  signal to interference/noise ratio (SINR): S_ij / I_ij

  problem: choose p_i to maximize the smallest SINR:

      maximize    min_{i,j}  A_iji p_i / ( Σ_{k≠i} A_ijk p_k + N_ij )
      subject to  0 ≤ p_i ≤ p_max

  ... a (generalized) linear fractional program
18 Nonconvex extensions of LP

  Boolean LP or zero-one LP:

      minimize    c^T x
      subject to  Ax ≤ b,   Fx = g,   x_i ∈ {0, 1}

  integer LP:

      minimize    c^T x
      subject to  Ax ≤ b,   Fx = g,   x_i ∈ Z

  these are in general not convex problems, and extremely difficult to solve
19 Quadratic functions and forms

  quadratic function:

      f(x) = x^T P x + 2 q^T x + r = [x; 1]^T [P q; q^T r] [x; 1]

  convex if and only if P ⪰ 0

  quadratic form f(x) = x^T P x; convex if and only if P ⪰ 0

  Euclidean norm f(x) = ||Ax + b||  (f^2 is a convex quadratic function ...)
20 Minimizing a quadratic function

      minimize    f(x) = x^T P x + 2 q^T x + r

  nonconvex case (P ⋡ 0): unbounded below

  proof: take x = tv, t → ∞, where Pv = λv, λ < 0

  convex case (P ⪰ 0): x is optimal if and only if

      ∇f(x) = 2Px + 2q = 0

  two cases:

  - q ∈ range(P):  f* > −∞
  - q ∉ range(P):  unbounded below

  important special case, P ≻ 0: unique optimal point x_opt = −P^{−1} q;  f* = r − q^T P^{−1} q
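The closed form for the strictly convex case is easy to verify numerically. A minimal sketch with made-up data, assuming numpy is available:

```python
# Closed-form minimizer of f(x) = x^T P x + 2 q^T x + r with P > 0:
# x_opt = -P^{-1} q  and  f* = r - q^T P^{-1} q.
import numpy as np

P = np.array([[2.0, 0.0], [0.0, 1.0]])   # positive definite
q = np.array([-2.0, 1.0])
r = 3.0

Pinv_q = np.linalg.solve(P, q)
x_opt = -Pinv_q
f_star = r - q @ Pinv_q

def f(x):
    return x @ P @ x + 2 * q @ x + r

print(x_opt, f_star)  # x_opt = (1, -1), f* = 0
```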
21 Least-squares

  minimize Euclidean norm (squared) (A = [a_1 ... a_n] full rank, skinny):

      minimize    ||Ax − b||^2 = x^T (A^T A) x − 2 b^T A x + b^T b

  geometrically: project b on span({a_1, ..., a_n})

  (figure: projection of b onto span({a_1, ..., a_n}), giving A x_ls)

  solution: set the gradient equal to zero:

      x_ls = (A^T A)^{−1} A^T b

  general solution, without the rank assumption:  x_ls = A† b + v, where A† is the Moore-Penrose inverse of A and v ∈ N(A)
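The normal-equations formula can be cross-checked against numpy's built-in least-squares routine (which uses the pseudoinverse and so also covers the rank-deficient case). A minimal sketch on random data:

```python
# Least-squares via the normal equations, checked against np.linalg.lstsq.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))   # full rank, skinny
b = rng.standard_normal(20)

x_ls = np.linalg.solve(A.T @ A, A.T @ b)          # (A^T A)^{-1} A^T b
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]

# optimality: the residual A x_ls - b is orthogonal to range(A)
grad = A.T @ (A @ x_ls - b)
print(np.abs(grad).max())
```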
22 Least-norm solution of linear equations

  (A full rank, fat)

      minimize    ||x||^2
      subject to  Ax = b

  feasible if b ∈ R(A)

  geometrically: project 0 on {x | Ax = b}

  (figure: projection of 0 onto the affine set {x | Ax = b}, giving x_ln)

  solution:  x_ln = A^T (A A^T)^{−1} b

  general solution, without the rank assumption:  x_ln = A† b
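The least-norm formula admits the same kind of numerical check. A minimal sketch on random data: the closed form agrees with the pseudoinverse, and any other solution of Ax = b (obtained by adding a nullspace component) has larger norm:

```python
# Least-norm solution x_ln = A^T (A A^T)^{-1} b, checked against x_ln = A† b.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 8))    # full rank, fat
b = rng.standard_normal(3)

x_ln = A.T @ np.linalg.solve(A @ A.T, b)
x_ref = np.linalg.pinv(A) @ b

# add a nullspace component: still solves A x = b, but with larger norm
null_proj = np.eye(8) - np.linalg.pinv(A) @ A
x_other = x_ln + null_proj @ rng.standard_normal(8)
print(np.linalg.norm(x_ln), np.linalg.norm(x_other))
```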
23 Extension: linearly constrained least-squares

      minimize    ||Ax − b||
      subject to  Cx = d

  (C ∈ R^{r×s}, fat, full rank)

  can be solved by elimination: write

      {x | Cx = d} = {Fw + g | w ∈ R^{s−r}}

  where range(F) = nullspace(C) and Cg = d (g any particular solution), and solve

      minimize    ||AFw + Ag − b||

  (may or may not be a good idea in practice)
24 Minimizing a linear function with quadratic constraint

      minimize    c^T x
      subject to  x^T A x ≤ 1

  (A = A^T ≻ 0)

  (figure: ellipsoid {x | x^T A x ≤ 1} with the optimal point x_opt on its boundary)

      x_opt = −A^{−1} c / sqrt(c^T A^{−1} c)

  proof: change of variables y = A^{1/2} x, c̃ = A^{−1/2} c:

      minimize    c̃^T y
      subject to  y^T y ≤ 1

  optimal solution: y_opt = −c̃ / ||c̃||

  note: the problem is easy even without the condition A ⪰ 0
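The closed form is quick to verify numerically on made-up data: the optimum lies on the boundary of the ellipsoid and achieves value −sqrt(c^T A^{−1} c). A minimal sketch, assuming numpy is available:

```python
# Closed-form solution of  minimize c^T x  s.t.  x^T A x <= 1  (A > 0):
# x_opt = -A^{-1} c / sqrt(c^T A^{-1} c), with optimal value -sqrt(c^T A^{-1} c).
import numpy as np

A = np.array([[4.0, 0.0], [0.0, 1.0]])   # positive definite
c = np.array([1.0, 2.0])

Ainv_c = np.linalg.solve(A, c)
val = np.sqrt(c @ Ainv_c)                # optimal value is -val
x_opt = -Ainv_c / val

# the optimum is on the boundary of the ellipsoid
print(x_opt @ A @ x_opt, c @ x_opt)
```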
25 Quadratic program (QP)

  quadratic objective, linear inequalities & equalities:

      minimize    x^T P x + 2 q^T x + r
      subject to  Ax ≤ b,   Fx = g

  convex optimization problem if P ⪰ 0; very hard problem if P ⋡ 0
26 QCQP and SOCP

  quadratically constrained quadratic programming (QCQP):

      minimize    x^T P_0 x + 2 q_0^T x + r_0
      subject to  x^T P_i x + 2 q_i^T x + r_i ≤ 0,  i = 1, ..., L

  convex if P_i ⪰ 0, i = 0, ..., L; nonconvex QCQP is very difficult

  second-order cone programming (SOCP):

      minimize    c^T x
      subject to  ||A_i x + b_i|| ≤ e_i^T x + d_i,  i = 1, ..., L

  includes QCQP (and QP, LP)
27 Robust linear program

  linear program:

      minimize    c^T x
      subject to  a_i^T x ≤ b_i,  i = 1, ..., m

  suppose the a_i are uncertain:  a_i ∈ E_i = {ā_i + F_i u | ||u|| ≤ 1}

  robust linear program:

      minimize    c^T x
      subject to  a_i^T x ≤ b_i for all a_i ∈ E_i,  i = 1, ..., m

  note:

      a_i^T x ≤ b_i for all a_i ∈ E_i
      ⇔  ā_i^T x + u^T F_i^T x ≤ b_i for all ||u|| ≤ 1
      ⇔  ā_i^T x + ||F_i^T x|| ≤ b_i

  hence, the robust LP is an SOCP:

      minimize    c^T x
      subject to  ā_i^T x + ||F_i^T x|| ≤ b_i,  i = 1, ..., m
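The key step above is the worst-case computation sup_{||u|| ≤ 1} u^T F^T x = ||F^T x||, attained at u* = F^T x / ||F^T x||. A minimal numerical check on random made-up data, assuming numpy is available:

```python
# Worst-case value of a^T x over the ellipsoid {a_bar + F u | ||u|| <= 1}:
# analytically a_bar^T x + ||F^T x||, attained at u* = F^T x / ||F^T x||.
import numpy as np

rng = np.random.default_rng(2)
a_bar = rng.standard_normal(4)
F = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

analytic = a_bar @ x + np.linalg.norm(F.T @ x)

u_star = F.T @ x / np.linalg.norm(F.T @ x)
attained = (a_bar + F @ u_star) @ x

# random u in the unit ball never exceed the analytic worst case
U = rng.standard_normal((1000, 4))
U /= np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1.0)
sampled = ((a_bar + U @ F.T) @ x).max()
print(analytic, attained, sampled)
```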
28 Phased-array antenna beamforming

  (figure: antenna elements at positions (x_i, y_i); plane wave incident from angle θ)

  omnidirectional antenna elements at positions (x_1, y_1), (x_2, y_2), ..., (x_n, y_n)

  unit plane wave incident from angle θ induces in the ith element a signal e^{j(x_i cos θ + y_i sin θ − ωt)}  (j = √−1, frequency ω, wavelength 2π)

  demodulate to get output e^{j(x_i cos θ + y_i sin θ)} (assuming noise/interference free)

  linearly combine with complex weights w_i:

      y(θ) = Σ_{i=1}^n w_i e^{j(x_i cos θ + y_i sin θ)}

  y(θ) is the (complex) antenna array gain pattern or beam pattern; |y(θ)| gives the sensitivity of the array as a function of the incident angle θ

  depends on the design variables Re w, Im w (called antenna array weights)

  design problem: choose w to achieve a desired gain pattern
29 Sidelobe level minimization

  make |y(θ)| small for |θ − θ_tar| > α  (θ_tar: target direction; 2α: beamwidth)

  (figure: beam pattern |y(θ)| with target direction θ_tar and sidelobe level marked)

  via least-squares (discretize angles):

      minimize    Σ_i |y(θ_i)|^2
      subject to  y(θ_tar) = 1

  (sum over angles outside the beam)

  a least-squares problem with two (real) linear equality constraints
30 minimize sidelobe level (discretize angles):

      minimize    max_i |y(θ_i)|
      subject to  y(θ_tar) = 1

  (max over angles outside the beam)

  can be cast as an SOCP:

      minimize    t
      subject to  |y(θ_i)| ≤ t
                  y(θ_tar) = 1

  (figure: beam pattern |y(θ)| with target direction θ_tar; lower sidelobe level than the least-squares design)
31 Other variants

  convex (& quasiconvex) extensions:

  - y(θ_0) = 0 (null in direction θ_0)
  - w is real (amplitude-only shading)
  - |w_i| ≤ 1 (attenuation-only shading)
  - minimize σ^2 Σ_{i=1}^n |w_i|^2 (thermal noise power in y)
  - minimize beamwidth given a maximum sidelobe level

  nonconvex extension:

  - maximize number of zero weights
32 Robust adaptive beamforming

  in the presence of noise and interference, the received signal is

      x(θ, t) = s(t) a(θ) + i(θ, t) + n(t)

  s(t) is the desired signal; a(θ) is the steering vector of the desired user; in the ideal case a_i(θ) = e^{j(x_i cos θ + y_i sin θ)}, but in practice it is estimated

  i(θ, t) is the interference signal; n(t) is noise

  beamformer output: y(θ, t) = w^H x(θ, t)

  minimum variance distortionless response (MVDR) beamformer:

      minimize    w^H R_x w
      subject to  w^H ā = 1

  where R_x is the data covariance matrix and ā is the estimated steering vector

  robust MVDR as an SOCP:

      minimize    w^H R_x w
      subject to  Re(w^H (ā + Δa)) ≥ 1  for all ||Δa|| ≤ ε

  which is equivalent to

      minimize    w^H R_x w
      subject to  Re(w^H ā) ≥ 1 + ε ||w||
33 Optimal receiver location

  N transmitter frequencies 1, ..., N; transmitters a_i, b_i use frequency i (a_i, b_i ∈ R^2)

  a_1, a_2, ..., a_N are wanted; b_1, b_2, ..., b_N are interfering

  (figure: receiver position x among wanted transmitters a_i and interfering transmitters b_i)

  (signal) receiver power from a_i:  ||x − a_i||^{−α}

  (interfering) receiver power from b_i:  ||x − b_i||^{−α}   (α ≈ 2.1)

  worst signal to interference ratio as a function of receiver position x:

      S/I = min_i  ||x − a_i||^{−α} / ||x − b_i||^{−α}

  what is the optimal receiver location?
34 S/I is quasiconcave on {x | S/I ≥ 1}, i.e., on

      {x | ||x − a_i|| ≤ ||x − b_i||,  i = 1, ..., N}

  (figure: the region where ||x − a_i|| ≤ ||x − b_i|| for all i, bounded by the perpendicular bisectors of the a_i, b_i pairs)

  can use bisection; every iteration is a convex quadratic feasibility problem
More informationCS711008Z Algorithm Design and Analysis
CS711008Z Algorithm Design and Analysis Lecture 8 Linear programming: interior point method Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China 1 / 31 Outline Brief
More informationAgenda. Applications of semidefinite programming. 1 Control and system theory. 2 Combinatorial and nonconvex optimization
Agenda Applications of semidefinite programming 1 Control and system theory 2 Combinatorial and nonconvex optimization 3 Spectral estimation & super-resolution Control and system theory SDP in wide use
More informationminimize x x2 2 x 1x 2 x 1 subject to x 1 +2x 2 u 1 x 1 4x 2 u 2, 5x 1 +76x 2 1,
4 Duality 4.1 Numerical perturbation analysis example. Consider the quadratic program with variables x 1, x 2, and parameters u 1, u 2. minimize x 2 1 +2x2 2 x 1x 2 x 1 subject to x 1 +2x 2 u 1 x 1 4x
More informationConvex Optimization. 4. Convex Optimization Problems. Prof. Ying Cui. Department of Electrical Engineering Shanghai Jiao Tong University
Conve Optimization 4. Conve Optimization Problems Prof. Ying Cui Department of Electrical Engineering Shanghai Jiao Tong University 2017 Autumn Semester SJTU Ying Cui 1 / 58 Outline Optimization problems
More informationConvex Optimization Boyd & Vandenberghe. 5. Duality
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationLecture 7: Convex Optimizations
Lecture 7: Convex Optimizations Radu Balan, David Levermore March 29, 2018 Convex Sets. Convex Functions A set S R n is called a convex set if for any points x, y S the line segment [x, y] := {tx + (1
More informationConic Linear Programming. Yinyu Ye
Conic Linear Programming Yinyu Ye December 2004, revised October 2017 i ii Preface This monograph is developed for MS&E 314, Conic Linear Programming, which I am teaching at Stanford. Information, lecture
More information9.1 Linear Programs in canonical form
9.1 Linear Programs in canonical form LP in standard form: max (LP) s.t. where b i R, i = 1,..., m z = j c jx j j a ijx j b i i = 1,..., m x j 0 j = 1,..., n But the Simplex method works only on systems
More informationLecture 8 Plus properties, merit functions and gap functions. September 28, 2008
Lecture 8 Plus properties, merit functions and gap functions September 28, 2008 Outline Plus-properties and F-uniqueness Equation reformulations of VI/CPs Merit functions Gap merit functions FP-I book:
More informationCoordinate Update Algorithm Short Course Subgradients and Subgradient Methods
Coordinate Update Algorithm Short Course Subgradients and Subgradient Methods Instructor: Wotao Yin (UCLA Math) Summer 2016 1 / 30 Notation f : H R { } is a closed proper convex function domf := {x R n
More informationConvex Optimization Lecture 16
Convex Optimization Lecture 16 Today: Projected Gradient Descent Conditional Gradient Descent Stochastic Gradient Descent Random Coordinate Descent Recall: Gradient Descent (Steepest Descent w.r.t Euclidean
More informationE5295/5B5749 Convex optimization with engineering applications. Lecture 5. Convex programming and semidefinite programming
E5295/5B5749 Convex optimization with engineering applications Lecture 5 Convex programming and semidefinite programming A. Forsgren, KTH 1 Lecture 5 Convex optimization 2006/2007 Convex quadratic program
More informationConvex Optimization Problems. Prof. Daniel P. Palomar
Conve Optimization Problems Prof. Daniel P. Palomar The Hong Kong University of Science and Technology (HKUST) MAFS6010R- Portfolio Optimization with R MSc in Financial Mathematics Fall 2018-19, HKUST,
More informationLagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)
Lagrange Duality Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Lagrangian Dual function Dual
More informationThe Newton Bracketing method for the minimization of convex functions subject to affine constraints
Discrete Applied Mathematics 156 (2008) 1977 1987 www.elsevier.com/locate/dam The Newton Bracketing method for the minimization of convex functions subject to affine constraints Adi Ben-Israel a, Yuri
More informationA : k n. Usually k > n otherwise easily the minimum is zero. Analytical solution:
1-5: Least-squares I A : k n. Usually k > n otherwise easily the minimum is zero. Analytical solution: f (x) =(Ax b) T (Ax b) =x T A T Ax 2b T Ax + b T b f (x) = 2A T Ax 2A T b = 0 Chih-Jen Lin (National
More informationLecture: Duality of LP, SOCP and SDP
1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:
More informationDuality in Linear Programs. Lecturer: Ryan Tibshirani Convex Optimization /36-725
Duality in Linear Programs Lecturer: Ryan Tibshirani Convex Optimization 10-725/36-725 1 Last time: proximal gradient descent Consider the problem x g(x) + h(x) with g, h convex, g differentiable, and
More informationICS-E4030 Kernel Methods in Machine Learning
ICS-E4030 Kernel Methods in Machine Learning Lecture 3: Convex optimization and duality Juho Rousu 28. September, 2016 Juho Rousu 28. September, 2016 1 / 38 Convex optimization Convex optimisation This
More informationMinimizing Cubic and Homogeneous Polynomials over Integers in the Plane
Minimizing Cubic and Homogeneous Polynomials over Integers in the Plane Alberto Del Pia Department of Industrial and Systems Engineering & Wisconsin Institutes for Discovery, University of Wisconsin-Madison
More informationLecture 2: The Simplex method
Lecture 2 1 Linear and Combinatorial Optimization Lecture 2: The Simplex method Basic solution. The Simplex method (standardform, b>0). 1. Repetition of basic solution. 2. One step in the Simplex algorithm.
More informationmin f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term;
Chapter 2 Gradient Methods The gradient method forms the foundation of all of the schemes studied in this book. We will provide several complementary perspectives on this algorithm that highlight the many
More information