Jim Lambers
MAT 419/519
Summer Session 2011-12
Lecture 8 Notes

These notes correspond to Section 2.5 in the text.

Unconstrained Geometric Programming

Previously, we learned how to use the Arithmetic-Geometric Mean (A-G) Inequality to solve an unconstrained minimization problem. We now formalize the procedure for solving such problems, in the case where the objective function to be minimized has the following particular form.

Definition. Let $D \subseteq \mathbb{R}^m$ be the (convex) subset of $\mathbb{R}^m$ defined by
$$D = \{ (t_1, t_2, \ldots, t_m) \in \mathbb{R}^m \mid t_j > 0, \; j = 1, 2, \ldots, m \}.$$
A function $g : D \to \mathbb{R}$ of the form
$$g(t) = \sum_{i=1}^n c_i \prod_{j=1}^m t_j^{\alpha_{ij}},$$
where $c_i > 0$ for $i = 1, 2, \ldots, n$ and $\alpha_{ij} \in \mathbb{R}$ for $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m$, is called a posynomial.

We now investigate how the A-G Inequality can be used to find the minimum of a given posynomial $g(t)$ on $D$, if one exists. That is, we will be solving the primal geometric program (GP)

Minimize $\quad g(t) = \sum_{i=1}^n c_i \prod_{j=1}^m t_j^{\alpha_{ij}}$
subject to $\quad t_1, t_2, \ldots, t_m > 0$.
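Since a posynomial is completely determined by its coefficients $c_i$ and its exponents $\alpha_{ij}$, it is convenient to store them as a vector and an $n \times m$ matrix. The following is a minimal numeric sketch, assuming NumPy; the names `posynomial`, `c`, and `A` are my own, not from the text. The particular `c` and `A` encode the objective of the first example below.

```python
import numpy as np

def posynomial(c, A, t):
    """Evaluate g(t) = sum_i c[i] * prod_j t[j]**A[i, j], for t with all t[j] > 0."""
    t = np.asarray(t, dtype=float)
    return float(np.sum(c * np.prod(t ** A, axis=1)))

# Data for g(t) = 1/(t1 t2 t3) + 2 t2 t3 + 3 t1 t3 + 4 t1 t2.
c = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([[-1, -1, -1],
              [ 0,  1,  1],
              [ 1,  0,  1],
              [ 1,  1,  0]])

print(posynomial(c, A, [1.0, 1.0, 1.0]))  # 1 + 2 + 3 + 4 = 10.0
```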

We denote the $i$th term of $g(t)$ by
$$g_i(t) = c_i \prod_{j=1}^m t_j^{\alpha_{ij}}, \quad i = 1, 2, \ldots, n,$$
and we define
$$x_i = \frac{g_i(t)}{\delta_i}, \quad i = 1, 2, \ldots, n,$$
where $\delta_1, \delta_2, \ldots, \delta_n > 0$ and $\sum_{i=1}^n \delta_i = 1$. Then the A-G Inequality yields
$$g(t) = \sum_{i=1}^n \delta_i x_i \ge \prod_{i=1}^n x_i^{\delta_i} = \prod_{i=1}^n \left( \frac{g_i(t)}{\delta_i} \right)^{\delta_i} = \prod_{i=1}^n \left( \frac{c_i}{\delta_i} \right)^{\delta_i} \prod_{j=1}^m t_j^{\sum_{i=1}^n \alpha_{ij} \delta_i}.$$

Because this is an unconstrained minimization problem, we need the quantity on the lower side of the A-G Inequality to be a constant. It follows that the exponents must satisfy
$$\sum_{i=1}^n \alpha_{ij} \delta_i = 0, \quad j = 1, 2, \ldots, m.$$
If a vector $\delta = (\delta_1, \delta_2, \ldots, \delta_n)$ can be found that satisfies this condition, as well as the previous conditions we have imposed on the $\delta_i$'s, then we have a candidate for a solution to the dual geometric program (DGP)

Maximize $\quad v(\delta) = \prod_{i=1}^n \left( \frac{c_i}{\delta_i} \right)^{\delta_i}$
subject to $\quad \delta_1, \delta_2, \ldots, \delta_n > 0$ (Positivity Condition)
$\quad \sum_{i=1}^n \delta_i = 1$ (Normality Condition)
$\quad \sum_{i=1}^n \alpha_{ij} \delta_i = 0, \; j = 1, 2, \ldots, m$ (Orthogonality Condition).

A vector $\delta$ that satisfies all three of the above conditions is said to be a feasible vector of the DGP. By the A-G Inequality, we have, for each feasible vector $\delta$,
$$g(t) \ge v(\delta), \quad t \in D.$$
This inequality is known as the Primal-Dual Inequality. If, in addition, $\delta^*$ is a global maximizer of $v(\delta)$, and is therefore a solution of the DGP, then $v(\delta^*)$ is at least a lower bound for the minimum value of $g(t)$ on $D$. It can be shown, using the criterion $\nabla g(t) = 0$, that if $t^*$ is a global minimizer of $g(t)$ on $D$, and is therefore a solution of the GP, then $g(t^*) = v(\delta^*)$ for some feasible vector $\delta^*$. That is, the Primal-Dual Inequality actually becomes an equality.
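The dual objective and the three feasibility conditions translate directly into a few lines of code. Here is a minimal sketch under the same assumptions as the snippet above (NumPy; the function names and the tolerance `tol` are my own choices, not part of the text), using the data of the first example below.

```python
import numpy as np

def dual_objective(c, delta):
    """v(delta) = prod_i (c[i] / delta[i]) ** delta[i], defined for delta > 0."""
    return float(np.prod((c / delta) ** delta))

def is_feasible(A, delta, tol=1e-12):
    """Check the Positivity, Normality, and Orthogonality Conditions."""
    positivity = bool(np.all(delta > 0))
    normality = abs(float(np.sum(delta)) - 1.0) <= tol
    # Orthogonality: sum_i A[i, j] * delta[i] = 0 for each column j.
    orthogonality = bool(np.all(np.abs(A.T @ delta) <= tol))
    return positivity and normality and orthogonality

# Coefficients and exponents of the first example below.
c = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([[-1, -1, -1],
              [ 0,  1,  1],
              [ 1,  0,  1],
              [ 1,  1,  0]])
delta = np.array([2/5, 1/5, 1/5, 1/5])

print(is_feasible(A, delta))     # True
print(dual_objective(c, delta))  # about 7.155, a lower bound for g on D
```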

Therefore, if $\delta^*$ is a solution of the DGP, then, by the A-G Inequality, the solution $t^* = (t_1^*, t_2^*, \ldots, t_m^*)$ to the GP can be found from the relations
$$x_1 = x_2 = \cdots = x_n = v(\delta^*),$$
or
$$g_i(t^*) = v(\delta^*) \, \delta_i^*, \quad i = 1, 2, \ldots, n.$$
Note that the $\delta_i^*$'s indicate the relative contributions of each term $g_i(t^*)$ of $g(t^*)$ to the minimum. By taking the natural logarithm of both sides of these equations, we obtain a system of linear equations for the unknowns $z_j = \ln t_j$, $j = 1, 2, \ldots, m$. Specifically, we can solve
$$\sum_{j=1}^m \alpha_{ij} z_j = \ln \left[ \frac{v(\delta^*) \, \delta_i^*}{c_i} \right], \quad i = 1, 2, \ldots, n.$$
Exponentiating the $z_j$'s yields the components $t_j^*$, $j = 1, 2, \ldots, m$, of the minimizer $t^*$. This leads to the following method for solving the GP, known as Unconstrained Geometric Programming:

1. Find all feasible vectors $\delta$ for the corresponding DGP.
2. If no feasible vectors can be found, then the DGP, and therefore the GP, have no solution.
3. Compute the value of $v(\delta)$ for each feasible vector $\delta$. Each vector $\delta$ that maximizes the value of $v(\delta)$ is a solution to the DGP.
4. To obtain the solution $t^*$ to the GP, solve the system of equations
$$g_i(t^*) = v(\delta^*) \, \delta_i^*, \quad i = 1, 2, \ldots, n,$$
for $t_1^*, t_2^*, \ldots, t_m^*$, which can be reduced to a system of linear equations as described above.
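When $n = m + 1$ and the linear system formed by the Normality and Orthogonality Conditions is nonsingular, as in both examples below, the feasible vector (if any) is unique, and the four steps reduce to two linear solves. The following is one way this might be carried out under those assumptions; `solve_gp` is a hypothetical name, and NumPy is assumed.

```python
import numpy as np

def solve_gp(c, A):
    """Solve the GP with coefficients c and exponent matrix A (n x m),
    assuming n = m + 1 and a nonsingular Normality/Orthogonality system."""
    n, m = A.shape
    # Steps 1-2: stack the Normality row (all ones) on top of the
    # Orthogonality rows (A^T delta = 0) and solve for delta.
    M = np.vstack([np.ones((1, n)), A.T])
    rhs = np.zeros(m + 1)
    rhs[0] = 1.0
    delta = np.linalg.solve(M, rhs)
    if np.any(delta <= 0):
        raise ValueError("no feasible vector: the Positivity Condition fails")
    # Step 3: with a unique feasible vector, v(delta) is the dual maximum.
    v = float(np.prod((c / delta) ** delta))
    # Step 4: solve sum_j A[i, j] z[j] = ln(v delta_i / c_i) for z = ln t,
    # then exponentiate to recover t.
    z, *_ = np.linalg.lstsq(A, np.log(v * delta / c), rcond=None)
    return np.exp(z), delta, v
```

The log-linear system in step 4 has $n = m + 1$ equations in $m$ unknowns, but it is consistent at the optimum, so the least-squares solve recovers the exact solution up to roundoff.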

Example. We will solve the GP

Minimize $\quad g(t) = \dfrac{1}{t_1 t_2 t_3} + 2 t_2 t_3 + 3 t_1 t_3 + 4 t_1 t_2$
subject to $\quad t_1, t_2, t_3 > 0$.

This leads to the DGP

Maximize $\quad v(\delta) = \left( \dfrac{1}{\delta_1} \right)^{\delta_1} \left( \dfrac{2}{\delta_2} \right)^{\delta_2} \left( \dfrac{3}{\delta_3} \right)^{\delta_3} \left( \dfrac{4}{\delta_4} \right)^{\delta_4}$
subject to $\quad \delta_1, \delta_2, \delta_3, \delta_4 > 0$ (Positivity Condition)
$\quad \delta_1 + \delta_2 + \delta_3 + \delta_4 = 1$ (Normality Condition)
$\quad -\delta_1 + \delta_3 + \delta_4 = 0, \quad -\delta_1 + \delta_2 + \delta_4 = 0, \quad -\delta_1 + \delta_2 + \delta_3 = 0$ (Orthogonality Condition).

The Normality Condition and Orthogonality Condition together form a system of 4 equations in 4 unknowns whose coefficient matrix is nonsingular, so the system has the unique solution
$$\delta^* = \left( \frac{2}{5}, \frac{1}{5}, \frac{1}{5}, \frac{1}{5} \right),$$
which also satisfies the Positivity Condition, so it is feasible. As it is the only feasible vector for the DGP, it is also the solution to the DGP. It follows that the maximum value of $v(\delta)$, which is also the minimum value of $g(t)$, is
$$v(\delta^*) = \left( \frac{5}{2} \right)^{2/5} 10^{1/5} \, 15^{1/5} \, 20^{1/5} \approx 7.155.$$

To find the minimizer $t^*$, we can solve the equations
$$\frac{1}{t_1 t_2 t_3} = \delta_1^* v(\delta^*) = \frac{2}{5} v(\delta^*), \qquad 2 t_2 t_3 = \delta_2^* v(\delta^*) = \frac{1}{5} v(\delta^*),$$
$$3 t_1 t_3 = \delta_3^* v(\delta^*) = \frac{1}{5} v(\delta^*), \qquad 4 t_1 t_2 = \delta_4^* v(\delta^*) = \frac{1}{5} v(\delta^*).$$
From the last three equations, we obtain the relations
$$3 t_3 = 4 t_2, \qquad 2 t_3 = 4 t_1, \qquad 2 t_2 = 3 t_1.$$
Substituting $t_2 = \frac{3}{2} t_1$ and $t_3 = 2 t_1$ into the first equation gives
$$\frac{1}{3 (t_1^*)^3} = \frac{2}{5} v(\delta^*),$$
which yields the solutions
$$t_1^* = \sqrt[3]{\frac{5}{6 v(\delta^*)}}, \qquad t_2^* = \frac{3}{2} \sqrt[3]{\frac{5}{6 v(\delta^*)}}, \qquad t_3^* = 2 \sqrt[3]{\frac{5}{6 v(\delta^*)}}.$$
Substituting these values into $g(t)$ yields the value of $v(\delta^*)$, as expected.
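As a quick check, applying the hypothetical `solve_gp` from the sketch above to this example reproduces the hand computation. This continues the earlier snippets, so `posynomial`, `solve_gp`, `c`, and `A` are assumed to be in scope.

```python
t, delta, v = solve_gp(c, A)

print(delta)                # [0.4 0.2 0.2 0.2], i.e., (2/5, 1/5, 1/5, 1/5)
print(v)                    # about 7.155
print(t)                    # about [0.488 0.733 0.977]
print(posynomial(c, A, t))  # matches v: the Primal-Dual Inequality is tight
```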

Example. We now consider the GP

Minimize $\quad g(t) = \dfrac{2}{t_1 t_2} + t_1 t_2 + t_1$
subject to $\quad t_1, t_2 > 0$.

This leads to the DGP

Maximize $\quad v(\delta) = \left( \dfrac{2}{\delta_1} \right)^{\delta_1} \left( \dfrac{1}{\delta_2} \right)^{\delta_2} \left( \dfrac{1}{\delta_3} \right)^{\delta_3}$
subject to $\quad \delta_1, \delta_2, \delta_3 > 0$ (Positivity Condition)
$\quad \delta_1 + \delta_2 + \delta_3 = 1$ (Normality Condition)
$\quad -\delta_1 + \delta_2 + \delta_3 = 0, \quad -\delta_1 + \delta_2 = 0$ (Orthogonality Condition).

Unfortunately, the only values of $\delta_1, \delta_2, \delta_3$ that satisfy the Normality Condition and the Orthogonality Condition are
$$\delta_1 = \frac{1}{2}, \qquad \delta_2 = \frac{1}{2}, \qquad \delta_3 = 0.$$
These values do not satisfy the Positivity Condition, so there are no feasible vectors for the DGP. We conclude that there is no solution to the GP.

Exercises

1. Chapter 2, Exercise 8
2. Chapter 2, Exercise 2
3. Chapter 2, Exercise 25
4. Chapter 2, Exercise 26