Chemical Equilibrium: A Convex Optimization Problem
Linyi Gao

June 4

1 Introduction

The equilibrium composition of a mixture of reacting molecules is essential to many physical and chemical systems, ranging from physiology to chemical plant design. The computation of chemical equilibrium is thought to be challenging, and, as such, only simple ideal cases are introduced in textbooks. Although there exists a substantial body of literature on chemical equilibrium, the convex nature of the problem is seldom emphasized. We show that the computation of chemical equilibrium in closed systems can in general be formulated as a convex optimization problem, leading to an elegant conceptual framework as well as efficient numerical solutions. This formulation provides a natural generalization of a number of important topics in physical chemistry and thermal physics.

2 Background

When a set of molecules is mixed together, two fundamental processes can occur: mass distribution (i.e., phase transitions) and chemical reactions.

Phase transitions. At equilibrium, the mass distribution in a system need not be homogeneous. For instance, a mixture of oil and water naturally separates into layers. Suppose the system contains n distinct species, and let x = (x_1, x_2, ..., x_n)^T ∈ R^n_+ be the composition of the system, where x_i is the amount of species i. If the system is separated into k physical phases with composition x^(j) ∈ R^n_+ in the jth phase, then

    x = x^(1) + x^(2) + ... + x^(k),    x^(j) ≥ 0,  j = 1, 2, ..., k.

Chemical reactions. The molecules in a system can not only physically redistribute; they can also change via chemical reactions. For example, consider the hydrogen combustion reaction

    2H2 + O2 ⇌ 2H2O,

or equivalently, 2H2 + O2 − 2H2O ⇌ 0. This reaction converts two molecules of H2 and one molecule of O2 into two molecules of water, and vice versa.
In general, chemical reaction processes in a system consisting of n species X_1, ..., X_n and m reactions can be described as

    a_11 X_1 + ... + a_n1 X_n ⇌ 0
    ...
    a_1m X_1 + ... + a_nm X_n ⇌ 0,

where each a_ij is a constant. For each reaction j, j = 1, ..., m:

- if a_ij > 0, species X_i appears as a reactant;
- if a_ij < 0, species X_i appears as a product;
- if a_ij = 0, species X_i does not appear.

We can collect the coefficients into a matrix A ∈ R^(n×m), where A_ij = a_ij. For instance, for the hydrogen combustion reaction above, we have n = 3, m = 1, X_1 = H2, X_2 = O2, X_3 = H2O, and A = (2, 1, −2)^T.

By the conservation of mass,

    x = x_init + Ay    (1)

for some y ∈ R^m, where x_init ∈ R^n_+ is the initial composition. Physically, y_i represents the extent of reaction i (i.e., the integrated rate). The reactions are assumed to be reversible, so y_i can be either positive or negative. In other words, the overall composition of the system is the sum of the initial composition (x_init) and the changes caused by chemical reactions (Ay). No assumptions about the molecular formulas of the species are necessary, provided that the reactions as written are balanced; the constraint (1) already accounts for the elemental conservation of mass.

3 Problem Description

According to thermodynamics, a chemical system will spontaneously change in any way possible in order to minimize a particular objective function of the composition [Gib78]. For example, at constant temperature and pressure, the objective function is the Gibbs free energy. To compute equilibrium, all possible variations of both mass distributions and chemical reactions must be considered.

Problem data:

- A ∈ R^(n×m): description of reactions
- x_init ∈ R^n_+: initial amounts
- f_1, ..., f_k : R^n → R: phase functions, e.g., free energies ("dissatisfaction levels"), usually (but not always) convex
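As a concrete check of the conservation constraint (1), the sketch below builds the stoichiometric matrix for the hydrogen combustion example and applies x = x_init + Ay for a chosen extent y. The initial amounts are illustrative, not data from the paper.

```python
# Stoichiometric bookkeeping for 2 H2 + O2 - 2 H2O = 0 (reactants > 0, products < 0).
# Species order: [H2, O2, H2O]; A has one column since there is one reaction (m = 1).
A = [[2.0], [1.0], [-2.0]]
x_init = [4.0, 3.0, 0.0]          # illustrative initial amounts

def compose(x_init, A, y):
    """Return x = x_init + A y for a list of reaction extents y (length m)."""
    n, m = len(A), len(A[0])
    return [x_init[i] + sum(A[i][j] * y[j] for j in range(m)) for i in range(n)]

# y = -1 corresponds to one unit of the forward (combustion) direction here,
# since reactants carry positive coefficients in A.
x = compose(x_init, A, [-1.0])
print(x)  # [2.0, 2.0, 2.0]
```

Note that elemental mass is conserved automatically: the number of H atoms, 2 x_1 + 2 x_3, is 8 both before and after the reaction step.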
Outer loop (reaction equilibrium).

    minimize    f(x)
    subject to  x = x_init + Ay    (2)
                x ≥ 0,

where the variables are x ∈ R^n and y ∈ R^m. The overall objective f(x) is given by the optimal value of another optimization problem:

Inner loops (phase equilibrium). Given a composition x ∈ R^n_+, compute f(x):

    minimize    f̂_1(x^(1)) + ... + f̂_k(x^(k))
    subject to  x^(1) + ... + x^(k) = x    (3)
                x^(i) ≥ 0,  i = 1, ..., k.

If the phase function f_i is convex, then f̂_i = f_i. However, if f_i is not convex, then f̂_i(x^(i)) is the optimal value of another (non-convex) optimization problem

    minimize    f_i(z_1^(i)) + ... + f_i(z_p^(i))
    subject to  z_1^(i) + ... + z_p^(i) = x^(i)    (4)
                z_j^(i) ≥ 0,  j = 1, ..., p,
                p = 1, 2, ....

Physically, z_j^(i) ∈ R^n_+ is the composition of the jth partition (phase) of type i. Conceptually, these loops perform the following task: of all possible physical partitions of the system with a fixed composition x, find one that minimizes the objective.

Problems (3) and (4) have an elegant geometric interpretation: f̂_i is the convex envelope of f_i, and f is the convex envelope of min{f_1, ..., f_k}. Thus, f is always a convex function. This is a generalization of the lever rule of thermal physics.

Any optimal point x* of (2) is an equilibrium composition of the system. At equilibrium, the composition of each phase of the system is given by solving the inner loop problems (3) and (4) for x = x*. (The optimal value f* is usually insignificant; the important quantity is the optimal point, i.e., the equilibrium amounts of the species.)

If f_i, i = 1, ..., k, are convex, then (4) does not need to be solved; it simply returns the value of f_i(x^(i)). Furthermore, if k = 1 and f_1 is convex (i.e., a single-phase system), then neither (3) nor (4) needs to be solved, because both return the value of f_1(x). In this case, we can set f = f_1 and solve only the outer loop problem (2). In chemistry, these cases are sufficiently common that f_1, ..., f_k are often assumed to be convex [Smi80], [ZS11].
However, the above formulation is valid with or without this assumption.

Remark. We have separated the problem into inner and outer loops in order to emphasize that two distinct physical processes are being considered simultaneously. For computation, (2), (3), and (4) can be combined into a single optimization problem.
Objective function. The form of the objective function f(x) depends on the reaction conditions [PP08]. In chemistry, among the most widely used objectives are those that model ideal systems; chemistry textbooks use these objectives almost exclusively [Oxt03], though they are rarely presented explicitly.

Ideal gas or liquid, constant temperature and pressure:

    f_i(x) = c^T x + sum_{j=1}^n x_j log(x_j / 1^T x),    (5)

where c is a constant determined by thermodynamics. In particular, c_j = μ°_j(T, p°)/RT + log(p/p°) for a gas, and c_j = μ°_j(T, p°)/RT for an ideal liquid.

Ideal gas, constant temperature and volume:

    f_i(x) = c^T x + sum_{j=1}^n x_j log x_j,    (6)

where c_j = μ°_j(T, p°)/RT + log(RT/(p° V)) − 1 is constant.

Ideal dilute liquid: if the reacting species are extremely dilute, 1^T x is assumed to be constant, and the objective (5) reduces to the form (6).

In general non-ideal systems, in which the f_i may be non-convex, f(x) is still convex, since it is the convex envelope of min{f_1, ..., f_k}. However, evaluating f(x) requires solving problem (4), which may be NP-hard. Nevertheless, the convex optimization formulation remains valid.

In the cases (5) and (6) above, f_i(x) is strictly convex. Thus, for a single-phase ideal system (k = 1), f = f̂_1 = f_1 is also strictly convex, implying that the equilibrium composition is unique. This simplifies uniqueness arguments presented previously [PP08], [NZS65].

We focused on objectives of the forms (5) and (6), since they are among the most widely encountered. In both cases, ∇²f(x) has structure: in (5) it is diagonal plus rank one, and in (6) it is diagonal. This structure was exploited for efficient computation.

4 Solutions

Although many chemical equilibrium problems are small (m, n < 10), some problems of recent interest, such as nucleic acid (DNA and RNA) interaction problems, are much larger (n > 1000) and would benefit from efficient computational algorithms.
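A minimal sketch of evaluating the objective (5), using the convention 0 log 0 = 0 so that compositions with zero entries stay in the domain; the test vectors and c values are illustrative.

```python
import math

# Evaluate the ideal-gas objective (5): f(x) = c^T x + sum_j x_j log(x_j / 1^T x).
# Terms with x_j = 0 are skipped, implementing the convention 0 log 0 = 0.
def f_ideal_tp(x, c):
    s = sum(x)                                     # 1^T x, total amount
    lin = sum(cj * xj for cj, xj in zip(c, x))     # c^T x
    ent = sum(xj * math.log(xj / s) for xj in x if xj > 0.0)
    return lin + ent

# The mixing term is nonpositive and vanishes for a pure species:
print(f_ideal_tp([1.0, 0.0], [0.0, 0.0]))   # 0.0
print(f_ideal_tp([0.5, 0.5], [0.0, 0.0]))   # -log 2, about -0.693
```

The second sum is the (negative) mixing entropy; it is what makes (5) strictly convex on the positive orthant.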
For example, a DNA template strand with s distinct RNA binding sites generates a system with s + 2^s species.
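For intuition about how small instances of the outer problem (2) behave, note that with a single reaction (m = 1) and the ideal objective (6), the problem collapses to one-dimensional convex minimization over the extent y, which can be solved by bisection on the derivative. The sketch below does this for the hydrogen system; the constants c are illustrative, not physical data from the paper.

```python
import math

# Single-phase equilibrium with objective (6) for 2 H2 + O2 - 2 H2O = 0,
# reduced to 1-D via x = x_init + A y. Species order: [H2, O2, H2O].
A = [2.0, 1.0, -2.0]
x_init = [4.0, 3.0, 0.0]
c = [0.0, 0.0, -10.0]            # illustrative standard potentials / RT

def dg(y):
    """Derivative of f(x_init + A y) where f(x) = c^T x + sum_j x_j log x_j."""
    x = [x_init[i] + A[i] * y for i in range(3)]
    return sum(A[i] * (c[i] + math.log(x[i]) + 1.0) for i in range(3))

# dg is increasing (the 1-D problem is convex); feasible extents keep x > 0,
# i.e. y in (-2, 0) for these initial amounts. Bisect for the root of dg.
lo, hi = -2.0 + 1e-9, -1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if dg(mid) < 0.0:
        lo = mid
    else:
        hi = mid
y_star = 0.5 * (lo + hi)
x_star = [x_init[i] + A[i] * y_star for i in range(3)]
print(x_star)   # equilibrium amounts; nearly all H2 is converted for this c
```

Bisection suffices only because m = 1; the structured Newton method described below handles the general case.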
Figure 1: Computation time, in seconds. Problems were randomly generated instances of ideal gas and liquid equilibrium problems of the forms (5) and (6), with 15 equality constraints.

Revisiting the constraints. The equality constraint (1) can be converted to

    Ãx = b,    (7)

where Ã is any matrix such that N(Ã) = R(A), and b = Ãx_init. Computing Ã can be expensive if A is large, but in many systems Ã has a particularly simple form. If n − rank(A) = s, where s is the total number of elements (i.e., atom types) present in the system, then Ã_ij is simply the number of times element i appears in species j. In practice, this case occurs sufficiently often [Smi80] that some authors assume the equality constraint is given in the form (7). Since there are only 118 elements in the periodic table, Ã has at most 118 rows (usually fewer than 20). To compute Ã in the case that n − rank(A) ≠ s, we used a full QR decomposition.

For the objectives (5) and (6), the inequality constraint x ≥ 0 can be removed, since it is implicit in the domain of the objective.

Algorithm. Newton's method with backtracking line search was implemented for this problem. A primary challenge was that for many equilibrium problems, x_i ≈ 0 for some i; in other words, the equilibrium amounts of certain species may be close to zero. For instance, for the objective (5), since

    ∇²f_i(x) = diag(1/x_1, ..., 1/x_n) − (1/(1^T x)) 11^T,

the diagonal entries of the Hessian can become extremely large as x_i → 0. If the condition number of the Hessian was sufficiently high, Newton's method failed in MATLAB. To overcome this challenge, we used the matrix inversion lemma, as well as block inversion
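The element-matrix form of (7) can be illustrated directly: a balanced reaction lies in the nullspace of the formula matrix, so Ãx = Ãx_init is exactly elemental conservation. The sketch below checks this for the H2/O2/H2O system.

```python
# Element (formula) matrix for the H2/O2/H2O system: A_tilde[i][j] counts
# atoms of element i in species j. Rows: H, O; columns: H2, O2, H2O.
A_tilde = [[2, 0, 2],
           [0, 2, 1]]
A = [2, 1, -2]          # stoichiometric column for 2 H2 + O2 - 2 H2O = 0

# A balanced reaction satisfies A_tilde A = 0, i.e. R(A) lies in N(A_tilde),
# so A_tilde x = A_tilde x_init = b holds along any reaction path.
residual = [sum(row[j] * A[j] for j in range(3)) for row in A_tilde]
print(residual)  # [0, 0]
```

Here n − rank(A) = 3 − 1 = 2 equals the number of elements, so the formula matrix is a valid choice of Ã, as described above.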
of the KKT system, to avoid explicitly computing the Hessian or its inverse [BV04]. The Newton decrement, as well as the primal and dual updates, were then computed successfully.

A further challenge was evaluating the gradient at each step. For (5),

    ∇f_i(x) = c + (log(x_1 / 1^T x), ..., log(x_n / 1^T x))^T,

and for (6),

    ∇f_i(x) = c + (log x_1, ..., log x_n)^T + 1.

Though any x ≥ 0 with x_i = 0 for some i is contained in dom f_i (by virtue of the convention 0 log 0 = 0), the gradient becomes unbounded as x_i → 0+. To overcome this problem, any starting point x_init with x_i < ε for some i was deemed infeasible (we used ε = 10^−5). All such x_i were set equal to ε, and the problem was then solved either with infeasible-start Newton's method or by choosing a feasible starting point via projection onto the feasible set.

Finally, the stopping criterion was based on the duality gap. For (5), the dual function is

    g(ν) = b^T ν   if log sum_{i=1}^n exp(ã_i^T ν − c_i) ≤ 0,   and −∞ otherwise,

and for (6),

    g(ν) = b^T ν − sum_{i=1}^n exp(ã_i^T ν − c_i − 1),

where ã_i denotes the ith column of Ã.

We tested the algorithm on randomly generated equilibrium problems with 15 equality constraints (of the form Ãx = b, where b = Ãx_init). Results are shown in Figure 1. The equilibrium compositions of systems containing up to n = species were efficiently computed on a dual-core 2.90 GHz processor; the bottleneck was memory rather than speed.

5 Conclusions

We have shown that:

- The computation of chemical equilibrium can, in the general case, be naturally formulated as a convex optimization problem.
- Equilibrium compositions of large ideal reacting systems can be computed efficiently if problem structure is exploited.
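A duality-gap stopping criterion only makes sense if the dual function actually lower-bounds the primal objective. The sketch below numerically checks weak duality for objective (6) on a tiny H2/O2/H2O instance; the dual expression used is g(ν) = b^T ν − Σ_i exp(ã_i^T ν − c_i − 1) with ã_i the ith column of Ã, and all numbers (c, the feasible point, the dual point) are illustrative.

```python
import math

# Weak-duality sanity check for minimize c^T x + sum x_j log x_j s.t. A_tilde x = b.
A_tilde = [[2, 0, 2], [0, 2, 1]]       # element matrix (rows: H, O)
c = [0.0, 0.0, -10.0]                  # illustrative constants
x_feas = [2.0, 2.0, 2.0]               # a primal feasible point
b = [sum(A_tilde[r][j] * x_feas[j] for j in range(3)) for r in range(2)]

def f6(x):
    """Primal objective (6)."""
    return sum(c[j] * x[j] + x[j] * math.log(x[j]) for j in range(3))

def g6(nu):
    """Dual function for (6): b^T nu - sum_i exp(a_i^T nu - c_i - 1)."""
    col = lambda j: A_tilde[0][j] * nu[0] + A_tilde[1][j] * nu[1]
    return sum(b[r] * nu[r] for r in range(2)) - \
           sum(math.exp(col(j) - c[j] - 1.0) for j in range(3))

nu = [-1.0, -2.0]                      # arbitrary dual point
assert g6(nu) <= f6(x_feas)            # weak duality: g(nu) lower-bounds f*
print(g6(nu), f6(x_feas))
```

At an optimal primal-dual pair the gap f(x) − g(ν) shrinks to zero, which is what the stopping criterion monitors.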
References

[BV04] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

[Gib78] J. W. Gibbs. On the equilibrium of heterogeneous substances. Trans. Conn. Acad. Arts Sci., 3, 1878.

[NZS65] N. Z. Shapiro and L. S. Shapley. Mass action laws and the Gibbs free energy function. J. Soc. Ind. Appl. Math., 13:353-375, 1965.

[Oxt03] D. W. Oxtoby. Chemistry: Science of Change. Thomson-Brooks/Cole, 2003.

[PP08] J. M. Powers and S. Paolucci. Uniqueness of chemical equilibria in ideal mixtures of ideal gases. Am. J. Phys., 76:848-855, 2008.

[Smi80] R. W. Smith. The computation of chemical equilibria in complex systems. Ind. Eng. Chem. Fundam., 19:1-10, 1980.

[ZS11] F. Zeggeren and S. H. Storey. The Computation of Chemical Equilibria. Cambridge University Press, second edition, 2011.
10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside
More information4. Algebra and Duality
4-1 Algebra and Duality P. Parrilo and S. Lall, CDC 2003 2003.12.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone
More informationAn Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization
An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization H. Mansouri M. Zangiabadi Y. Bai C. Roos Department of Mathematical Science, Shahrekord University, P.O. Box 115, Shahrekord,
More informationLINEAR PROGRAMMING 2. In many business and policy making situations the following type of problem is encountered:
LINEAR PROGRAMMING 2 In many business and policy making situations the following type of problem is encountered: Maximise an objective subject to (in)equality constraints. Mathematical programming provides
More information4. Convex optimization problems
Convex Optimization Boyd & Vandenberghe 4. Convex optimization problems optimization problem in standard form convex optimization problems quasiconvex optimization linear optimization quadratic optimization
More informationAdvanced Mathematical Programming IE417. Lecture 24. Dr. Ted Ralphs
Advanced Mathematical Programming IE417 Lecture 24 Dr. Ted Ralphs IE417 Lecture 24 1 Reading for This Lecture Sections 11.2-11.2 IE417 Lecture 24 2 The Linear Complementarity Problem Given M R p p and
More informationLectures 6, 7 and part of 8
Lectures 6, 7 and part of 8 Uriel Feige April 26, May 3, May 10, 2015 1 Linear programming duality 1.1 The diet problem revisited Recall the diet problem from Lecture 1. There are n foods, m nutrients,
More information2nd Symposium on System, Structure and Control, Oaxaca, 2004
263 2nd Symposium on System, Structure and Control, Oaxaca, 2004 A PROJECTIVE ALGORITHM FOR STATIC OUTPUT FEEDBACK STABILIZATION Kaiyang Yang, Robert Orsi and John B. Moore Department of Systems Engineering,
More informationNewton s Method. Javier Peña Convex Optimization /36-725
Newton s Method Javier Peña Convex Optimization 10-725/36-725 1 Last time: dual correspondences Given a function f : R n R, we define its conjugate f : R n R, f ( (y) = max y T x f(x) ) x Properties and
More informationTMA947/MAN280 APPLIED OPTIMIZATION
Chalmers/GU Mathematics EXAM TMA947/MAN280 APPLIED OPTIMIZATION Date: 06 08 31 Time: House V, morning Aids: Text memory-less calculator Number of questions: 7; passed on one question requires 2 points
More informationOperations Research Lecture 4: Linear Programming Interior Point Method
Operations Research Lecture 4: Linear Programg Interior Point Method Notes taen by Kaiquan Xu@Business School, Nanjing University April 14th 2016 1 The affine scaling algorithm one of the most efficient
More information2.098/6.255/ Optimization Methods Practice True/False Questions
2.098/6.255/15.093 Optimization Methods Practice True/False Questions December 11, 2009 Part I For each one of the statements below, state whether it is true or false. Include a 1-3 line supporting sentence
More informationConvex Optimization Lecture 13
Convex Optimization Lecture 13 Today: Interior-Point (continued) Central Path method for SDP Feasibility and Phase I Methods From Central Path to Primal/Dual Central'Path'Log'Barrier'Method Init: Feasible&#
More informationPrimal-Dual Interior-Point Methods for Linear Programming based on Newton s Method
Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach
More information4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n
2 4. Duality of LPs and the duality theorem... 22 4.2 Complementary slackness... 23 4.3 The shortest path problem and its dual... 24 4.4 Farkas' Lemma... 25 4.5 Dual information in the tableau... 26 4.6
More informationConvex Optimization and SVM
Convex Optimization and SVM Problem 0. Cf lecture notes pages 12 to 18. Problem 1. (i) A slab is an intersection of two half spaces, hence convex. (ii) A wedge is an intersection of two half spaces, hence
More informationUniqueness of Generalized Equilibrium for Box Constrained Problems and Applications
Uniqueness of Generalized Equilibrium for Box Constrained Problems and Applications Alp Simsek Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology Asuman E.
More informationInterior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems
AMSC 607 / CMSC 764 Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 4: Introduction to Interior Point Methods Dianne P. O Leary c 2008 Interior Point Methods We ll discuss
More informationSolutions of Linear system, vector and matrix equation
Goals: Solutions of Linear system, vector and matrix equation Solutions of linear system. Vectors, vector equation. Matrix equation. Math 112, Week 2 Suggested Textbook Readings: Sections 1.3, 1.4, 1.5
More informationMATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018
MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S
More informationLecture 7 Duality II
L. Vandenberghe EE236A (Fall 2013-14) Lecture 7 Duality II sensitivity analysis two-person zero-sum games circuit interpretation 7 1 Sensitivity analysis purpose: extract from the solution of an LP information
More information