Dynamic Optimization in Air Quality Modeling


1 Dynamic Optimization in Air Quality Modeling
A. Caboussat, Department of Mathematics, University of Houston, Houston, Texas.
Project supported by the U.S. Environmental Protection Agency, Grant X.
Rice-UH Optimization Seminar, Rice University, March 30, 200

2 Acknowledgments
N. R. Amundson, J. W. He, A. V. Martynenko, University of Houston, Department of Mathematics, Houston, Texas.
J. H. Seinfeld, Caltech, Department of Chemical Engineering, Pasadena, California.
S. Clegg, University of East Anglia, UK.
J. Rappaz, C. Landry, Ecole Polytechnique Fédérale de Lausanne.

3 Aerosol Life Cycle
Aerosol particles have effects on human health, visibility reduction in urban and regional areas, acid rain, alteration of the earth's radiation balance, oxidation due to aqueous droplets, cloud and ozone formation, etc.

4 Motivations
Modeling and computation of the physical state and chemical composition of atmospheric aerosol particles.
"The chemical and physical properties of aerosols are needed to estimate and predict direct and indirect climate forcing" (IPCC, Third Assessment Report, 2001).
At present, knowledge of aerosol composition and transformation is limited, and high uncertainty remains about their environmental effects.
Current aerosol models do not always accurately predict the phase state and the growth of atmospheric aerosols, since they rely on many a priori assumptions.

5 Aerosol Particles
[Figure: a single aerosol particle of radius $R(t)$ and composition $b(t)$, surrounded by bulk gas of concentration $c(t)$ at equilibrium pressure $p^{equil}$.]
Thermodynamics: find the equilibrium state of the particle and the repartition between solid, liquid, and gas phases.
Dynamics: find the gas-particle partitioning of chemical species.

6 Outline
Modeling of the thermodynamic equilibrium and mass transfer for organic aerosol particles.
Static optimization: determination of the convex hull of the energy function; the notion of phase simplex; a primal-dual interior-point method for the computation of the minimum of energy.
Dynamic optimization: differential equations and algebraic constraints; an extended optimization problem and sequential quadratic programming techniques.

7 Global Air Quality Problem
System of coupled PDEs for the concentrations $b = (b_i)$ of chemical components in the aerosol and $c = (c_i)$ in the bulk gas:

(1) $$\frac{\partial b_i}{\partial t}(r,x,t) + \bigl(u(x,t)\cdot\nabla_x\bigr) b_i(r,x,t) - I_i(r,x,t)\,\frac{b^s(r,x,t)}{r} - \nabla_x\cdot\bigl(K(x,t)\,\nabla_x b_i(r,x,t)\bigr) + \frac{\partial}{\partial r}\bigl(I^s(r,x,t)\, b_i(r,x,t)\bigr)$$
$$= \int_0^r \beta(r-r',r',x,t)\,\frac{b^s(r-r',x,t)}{r-r'}\, b_i(r',x,t)\,dr' - b_i(r,x,t)\int_0^\infty \beta(r,r',x,t)\,\frac{b^s(r',x,t)}{r'}\,dr' + S_i(r,x,t),$$

(2) $$\frac{\partial c_i}{\partial t}(x,t) + \bigl(u(x,t)\cdot\nabla_x\bigr) c_i(x,t) - \nabla_x\cdot\bigl(K(x,t)\,\nabla_x c_i(x,t)\bigr) + \int_0^\infty b^s(r,x,t)\, I_i(r,x,t)\,dr = f_i(c(x,t)) + E_i(x,t).$$

Splitting algorithm and discretization in time.

12 Evaporation Term
The coupling is a result of condensation and evaporation processes:

$$I_i(r,x,t) = h_i(r)\left(c_i(x,t) - \frac{\eta(r)}{R\,T(x,t)}\,p_i^{equil}\bigl(b(r,x,t)\bigr)\right),$$

where $\eta(r)$ is the Kelvin constant of a particle of mass $r$, $h_i(r)$ the molecular transfer coefficient, $T$ the temperature, and $p_i^{equil}(r,x,t)$ the fugacity of gas species $i$.
The physical phenomena in the system of partial differential equations are decoupled with a time-splitting scheme and discretized in space. The resulting subsystem is governed by two main processes, namely the thermodynamic equilibrium inside the particle and the mass transfer between the bulk gas and the particle.

13 Dynamics and Mass Transfer

$$\frac{d}{dt}c(t) = -h(r)\left(c(t) - \frac{\eta(r)}{R\,T(t)}\,p^{equil}(b(t))\right), \qquad c(0) = c_0,$$
$$\frac{d}{dt}b(t) = h(r)\left(c(t) - \frac{\eta(r)}{R\,T(t)}\,p^{equil}(b(t))\right), \qquad b(0) = b_0.$$

We need to solve a phase equilibrium problem for the internal variables $(y_\alpha, x_\alpha)$ in the calculation of $p^{equil}(b)$, with $p^{equil}(b(t)) = p^{vapor}\exp(\nabla g(x_\alpha))$:

$$\min_{y_\alpha, x_\alpha}\ \sum_{\alpha=1}^P y_\alpha\, g(x_\alpha) \quad \text{s.t.}\quad \sum_{\alpha=1}^P y_\alpha x_\alpha = b(t),\quad y_\alpha \ge 0,\quad e^T x_\alpha = 1,\quad x_\alpha > 0,\quad \alpha = 1,\ldots,P.$$
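Schematically, the coupled dynamics can be evaluated as below; a minimal sketch, where solve_phase_equilibrium (the inner minimization above) and grad_g (the energy gradient) are hypothetical helpers, and the opposite signs of dc/dt and db/dt follow from the conservation property on the next slide.

```python
# Minimal sketch of the coupled ODE right-hand side. solve_phase_equilibrium(b)
# and grad_g(x) are hypothetical stand-ins for the inner static optimization
# and the Gibbs energy gradient.
import numpy as np

R = 8.314  # gas constant

def mass_transfer_rhs(c, b, h, eta, T, p_vapor, solve_phase_equilibrium, grad_g):
    """dc/dt = -h (c - eta/(R T) p_equil(b)), db/dt = -dc/dt (conservation)."""
    x_eq = solve_phase_equilibrium(b)          # inner equilibrium problem
    p_equil = p_vapor * np.exp(grad_g(x_eq))   # componentwise fugacity
    flux = h * (c - eta / (R * T) * p_equil)   # gas-to-particle flux
    return -flux, flux                         # (dc/dt, db/dt)
```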

14 Model Problem
Since the system is conservative, $c(t) + b(t) = b(0) + c(0) = b^{tot}$, and one can eliminate $b$:

$$\frac{d}{dt}c(t) = -h(r)\left(c(t) - \frac{\eta(r)\,p^{vapor}}{R\,T(t)}\,e^{\nabla g(x_\alpha)}\right),$$

$$\min_{y_\alpha, x_\alpha}\ \sum_{\alpha=1}^P y_\alpha\, g(x_\alpha)\quad\text{s.t.}\quad \sum_{\alpha=1}^P y_\alpha x_\alpha = b^{tot} - c(t),\quad y_\alpha \ge 0,\quad e^T x_\alpha = 1,\quad x_\alpha > 0,\quad \alpha = 1,\ldots,P.$$

(1) Static minimization: determination of the convex hull of the energy. (2) Dynamic optimization: extended optimization problem.

15 Static Optimization

$$\min_{y_\alpha, x_\alpha}\ \sum_{\alpha=1}^P y_\alpha\, g(x_\alpha)\quad\text{s.t.}\quad \sum_{\alpha=1}^P y_\alpha x_\alpha = b,\quad y_\alpha \ge 0,\quad e^T x_\alpha = 1,\quad x_\alpha > 0,\quad \alpha = 1,\ldots,P.$$

$x_\alpha$ are virtual mole-fraction compositions ($e^T = (1, 1, \ldots, 1)$). $y_\alpha$ is the total amount in phase $\alpha$ and allows one to track the existence of phase $\alpha$. Although the number of phase classes $P$ is specified a priori, the number of phases existing at equilibrium is not known a priori; it is a result of the equilibrium computation.
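As an illustration, the static problem can be posed directly to a general-purpose NLP solver; a toy sketch, assuming an ideal-mixing energy g(x) = sum x_i log x_i as a stand-in for the real Gibbs model (a non-convex g, e.g. with activity coefficients, is what produces genuine phase splitting).

```python
# Toy instance of the static problem for N+1 = 3 species and P = 3 phases,
# solved with SciPy's SLSQP; g is an ideal-mixing stand-in for the real model.
import numpy as np
from scipy.optimize import minimize

N, P = 2, 3
b = np.array([0.5, 0.3, 0.2])                  # feed vector, sums to 1

def g(x):                                      # ideal-mixing Gibbs energy
    return float(np.sum(x * np.log(np.maximum(x, 1e-300))))

def objective(v):
    y, X = v[:P], v[P:].reshape(P, N + 1)
    return sum(y[a] * g(X[a]) for a in range(P))

def eq_constraints(v):
    y, X = v[:P], v[P:].reshape(P, N + 1)
    return np.concatenate([y @ X - b,          # feed:    sum_a y_a x_a = b
                           X.sum(axis=1) - 1]) # simplex: e^T x_a = 1

rng = np.random.default_rng(0)
v0 = np.concatenate([np.full(P, 1.0 / P),
                     np.tile(b, P) + 0.01 * rng.random(P * (N + 1))])
res = minimize(objective, v0, method="SLSQP",
               bounds=[(0.0, None)] * P + [(1e-9, 1.0)] * (P * (N + 1)),
               constraints={"type": "eq", "fun": eq_constraints})
y_opt = res.x[:P]                              # phases with y_a ~ 0 are absent
```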

16 Reduced Problem

$$\min_{y_\alpha, z_\alpha}\ \sum_{\alpha=1}^P y_\alpha\, f(z_\alpha)\quad\text{s.t.}\quad \sum_{\alpha=1}^P y_\alpha z_\alpha = b,\quad \sum_{\alpha=1}^P y_\alpha = 1,\quad y_\alpha \ge 0,\quad z_\alpha \in \mathrm{int}(\Delta_N),\quad \alpha = 1,\ldots,P.$$

$z_\alpha = (x_\alpha)_{1,\ldots,N} \in \mathbb{R}^N$, for all $x_\alpha \in \mathbb{R}^{N+1}$ such that $e^T x_\alpha = 1$. $\Delta_N = \mathrm{conv}\{0, e_1, \ldots, e_N\}$ is an $N$-simplex in $\mathbb{R}^N$.

17 Convex Hull
Convex hull of a function $f : \mathbb{R}^N \to \mathbb{R}$: $\mathrm{conv}\,f$ is the greatest convex function majorized by $f$.

$$(\mathrm{conv}\,f)(b) = \inf\Bigl\{ \sum_{\alpha=1}^P y_\alpha f(z_\alpha)\ :\ \sum_{\alpha=1}^P y_\alpha z_\alpha = b,\ y_\alpha \ge 0,\ \sum_{\alpha=1}^P y_\alpha = 1 \Bigr\}.$$

Carathéodory's theorem: for a set $C$ in $\mathbb{R}^N$, every point of $\mathrm{conv}(C)$ belongs to some simplex with vertices in $C$ and thus can be expressed as a convex combination of $N+1$ points of $C$ (not necessarily distinct). When $C$ is connected, $N$ points suffice.

$$(\mathrm{conv}\,f)(b) = \inf\Bigl\{ \sum_{\alpha=1}^{N+1} y_\alpha f(z_\alpha)\ :\ \sum_{\alpha=1}^{N+1} y_\alpha z_\alpha = b,\ y_\alpha \ge 0,\ \sum_{\alpha=1}^{N+1} y_\alpha = 1 \Bigr\}.$$
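In one dimension, this infimum can be computed numerically from the convex hull of the sampled graph; a small sketch, assuming a hypothetical double-well energy standing in for the Gibbs free energy.

```python
# Numerical lower convex envelope of a sampled non-convex energy on (0, 1).
# The double-well f below is a toy stand-in for a real Gibbs free energy.
import numpy as np
from scipy.spatial import ConvexHull

z = np.linspace(1e-3, 1 - 1e-3, 400)
f = z * np.log(z) + (1 - z) * np.log(1 - z) + 2.2 * z * (1 - z)

hull = ConvexHull(np.column_stack([z, f]))     # hull of the graph in R^2
idx = np.sort(np.unique(hull.vertices))        # hull vertices in z-order
conv_f = np.interp(z, z[idx], f[idx])          # piecewise-linear lower envelope
two_phase = conv_f < f - 1e-12                 # region where the feed splits
```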

18 Properties of the Energy Function
Regularity of the energy function: $f \in C^\infty(\mathrm{int}\,\Delta_N) \cap C^0(\Delta_N)$.
For every $x_0 \in \partial\Delta_N$ and every proper direction $w$, $\lim_{x \to x_0} |\partial f/\partial w|(x) = \infty$ (infinite slope at the boundary).
Each vertex of the $N$-simplex $\Delta_N$ (pure components) belongs to a connected convex region of $f$.
Determination of a tangent plane to the energy function.

19 Properties of the Energy Function
No isolated connected convex regions (needed for a global optimum).
The function $f$ is strictly convex in a neighborhood of a solution (needed for uniqueness).

20 Supporting Tangent Plane
[Figure: Gibbs free energy with a supporting tangent plane.]
If $C$ is convex and $z \in \partial C$, there exists a hyperplane $\Theta$ with $z \in \Theta$ such that $C$ is included in one of its closed half-spaces.

21 Some Results on the Convex Hull
The feed $b$ can be decomposed into a stable phase splitting: $b = \sum_{\alpha=1}^P y_\alpha z_\alpha$, $z_k \in \mathrm{int}(\Delta_N)$, $k = 1,\ldots,P$.
The convex hull is continuously differentiable on $\mathrm{int}(\Delta_N)$.
The following relations hold for all $k, m = 1,\ldots,P$:

$$\nabla f(z_k) = \nabla f(z_m), \qquad f(z_k) - \nabla f(z_k)\cdot z_k = f(z_m) - \nabla f(z_m)\cdot z_m.$$

Moreover, the tangent plane $\Theta(z) = f(z_1) + \nabla f(z_1)\cdot(z - z_1)$ is tangent to $f$ at all active vertices and always below the graph of $f$.
Existence and uniqueness [Rabier, Griewank (1992)].
Acknowledgments: A. Anantharanam, Ecole Polytechnique de Paris.

22 Global vs. Local Minima
We need a criterion to distinguish the local minima from the global minimum.
Let $(y_1,\ldots,y_{N+1}, z_1,\ldots,z_{N+1})$ be a local minimum of the reduced optimization problem for $b \in \mathrm{int}(\Delta_N)$. If $y_k > 0$, then $z_k \in \mathrm{int}(\Delta_N)$.
Consider $Y = (y_1,\ldots,y_{N+1}, z_1,\ldots,z_{N+1})$, a feasible point for the optimization problem with $P = N+1$ fixed phases. $Y$ is a global minimum if and only if $Y$ is a local minimum and $f(z) \ge \Theta(z)$ on $\Delta_N$, where $\Theta$ is the hyperplane associated with $Y$.
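This criterion suggests a simple numerical test; a one-dimensional sketch on a sample grid (in higher dimensions the interior-point machinery of the following slides replaces this brute-force check).

```python
# Grid check of the global-optimality criterion f(z) >= Theta(z): z1 is an
# active vertex and df1 = f'(z1) defines the supporting tangent plane.
import numpy as np

def is_global_candidate(f, z1, df1, grid, tol=1e-10):
    theta = f(z1) + df1 * (grid - z1)      # tangent plane at z1
    return bool(np.all(f(grid) >= theta - tol))
```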

23 Global vs. Local Methods
Global methods for phase equilibrium calculations: there are no global optimization methods for the problem of convexification of the Gibbs energy, and the complexity of global methods grows exponentially with the size of the problem.
Local methods: local optimization methods can handle large-scale problems; they can miss the global solution; they require a good initial guess for the variables; they are sensitive to algorithm parameter values.
We therefore use local optimization techniques together with a good initial guess.

24 An Interior-Point Method
Introduction of a log-barrier penalty function to ensure the non-negativity of the total number of elements in each phase.

$$\min_{y_\alpha, x_\alpha}\ \sum_{\alpha=1}^{N+1} y_\alpha\, g(x_\alpha)\quad\text{s.t.}\quad \sum_{\alpha=1}^{N+1} y_\alpha x_\alpha = b,\quad e^T x_\alpha = 1,\quad x_\alpha > 0,\quad y_\alpha \ge 0,\quad \alpha = 1,\ldots,N+1.$$

$\nu$ is a penalty parameter, which tends to zero. The phase $\alpha$ disappears when $y_\alpha \to 0$. $x_\alpha$ indicates only the location of a virtual phase. All phases exist at the beginning and are then selected by the algorithm.

25 An Interior-Point Method
Introduction of a log-barrier penalty function to ensure the non-negativity of the total number of elements in each phase.

$$\min_{y_\alpha, x_\alpha}\ \sum_{\alpha=1}^{N+1} y_\alpha\, g(x_\alpha) - \nu \sum_{\alpha=1}^{N+1} \ln(y_\alpha)\quad\text{s.t.}\quad \sum_{\alpha=1}^{N+1} y_\alpha x_\alpha = b,\quad e^T x_\alpha = 1,\quad x_\alpha > 0,\quad \alpha = 1,\ldots,N+1.$$

$\nu$ is a penalty parameter, which tends to zero. The phase $\alpha$ disappears when $y_\alpha \to 0$. $x_\alpha$ indicates only the location of a virtual phase. All phases exist at the beginning and are then selected by the algorithm.
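The outer iteration then follows the classical barrier pattern; a skeleton sketch, where newton_solve_kkt is a hypothetical inner solver for the perturbed KKT system of the next slide and the continuation schedule (shrink factor, tolerance) is illustrative.

```python
# Skeleton of the barrier (outer) loop: solve the nu-penalized problem for a
# decreasing sequence of barrier parameters, warm-starting from the previous
# solution. newton_solve_kkt is a hypothetical inner Newton solver.
def barrier_loop(y, x, b, newton_solve_kkt, nu0=1e-3, shrink=0.2, nu_min=1e-10):
    nu = nu0
    while nu > nu_min:
        y, x = newton_solve_kkt(y, x, b, nu)   # inner Newton iterations
        nu *= shrink                           # drive the barrier to zero
    return y, x                                # phases with y_a -> 0 vanished
```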

26 Karush-Kuhn-Tucker Conditions
KKT conditions (stationary points of the Lagrangian):

$$y_\alpha\bigl(\nabla g(x_\alpha) + \lambda\bigr) + \zeta_\alpha e = 0, \qquad \alpha = 1,\ldots,N+1,$$
$$g(x_\alpha) + \lambda^T x_\alpha - \frac{\nu}{y_\alpha} = 0, \qquad \alpha = 1,\ldots,N+1,$$
$$e^T x_\alpha - 1 = 0, \qquad \alpha = 1,\ldots,N+1,$$
$$\sum_{\alpha=1}^{N+1} y_\alpha x_\alpha - b = 0.$$

KKT system (Newton method), with $\nabla^2 g(x_\alpha)$ possibly singular:

$$\begin{pmatrix} y_\alpha \nabla^2 g(x_\alpha) & \nabla g(x_\alpha) + \lambda & y_\alpha I & e \\ (\nabla g(x_\alpha) + \lambda)^T & \nu/y_\alpha^2 & x_\alpha^T & 0 \\ y_\alpha I & x_\alpha & 0 & 0 \\ e^T & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} p_x \\ p_y \\ p_\lambda \\ p_{\zeta_\alpha} \end{pmatrix} = \begin{pmatrix} b_x \\ b_y \\ b_\lambda \\ b_{\zeta_\alpha} \end{pmatrix}$$
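For reference, these conditions follow (up to sign conventions for the multipliers) from differentiating the barrier Lagrangian:

$$\mathcal{L}(y, x, \lambda, \zeta) = \sum_{\alpha=1}^{N+1} y_\alpha\, g(x_\alpha) - \nu\sum_{\alpha=1}^{N+1}\ln y_\alpha + \lambda^T\Bigl(\sum_{\alpha=1}^{N+1} y_\alpha x_\alpha - b\Bigr) + \sum_{\alpha=1}^{N+1}\zeta_\alpha\bigl(e^T x_\alpha - 1\bigr),$$

with $\nabla_{x_\alpha}\mathcal{L} = y_\alpha(\nabla g(x_\alpha) + \lambda) + \zeta_\alpha e$ and $\partial\mathcal{L}/\partial y_\alpha = g(x_\alpha) + \lambda^T x_\alpha - \nu/y_\alpha$.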

27 Projected KKT System
Projection onto the constraints $e^T x_\alpha = 1$:

$$\begin{pmatrix} y_\alpha \nabla^2 f(z_\alpha) & \nabla f(z_\alpha) + \eta & y_\alpha I & 0 \\ (\nabla f(z_\alpha) + \eta)^T & \nu/y_\alpha^2 & z_\alpha^T & 1 \\ y_\alpha I & z_\alpha & 0 & 0 \\ 0 & e^T & 0 & 0 \end{pmatrix} \begin{pmatrix} p_z \\ p_y \\ p_\eta \\ p_\theta \end{pmatrix} = \begin{pmatrix} b_z \\ b_y \\ b_\eta \\ b_\theta \end{pmatrix}$$

KKT systems for chemical systems are usually ill-conditioned.
Design of numerical linear algebra techniques.
Direct decomposition techniques for the block-structured system (range-space + null-space).
Control of the inertia of the matrices arising in the resolution.

28 Projected KKT System
Projection onto the constraints $e^T x_\alpha = 1$ (same system as above).
Assumptions: the Hessian $\nabla^2 f(z_\alpha)$ is positive definite (second-order conditions); the iterates $z_1, z_2, \ldots, z_{N+1}$ are linearly independent (linear independence constraint qualification).

29 Projected KKT System
Projection onto the constraints $e^T x_\alpha = 1$ (same system as above).
Key issue: initialization of $z_\alpha$ in order to guarantee the second-order conditions throughout the algorithm.

30 Numerical Results
The existence of the phases depends on the feed vector $b$.
[Figure: non-convex Gibbs free energy with its tangent plane; the feed $b$ lies between the two active vertices $z_1$ and $z_2$.]
$\mathrm{conv}\,f(b) < f(b)$, $\mathrm{conv}\,f(z_i) = f(z_i)$, $i = 1, 2$.

31 Numerical Results
The existence of the phases depends on the feed vector $b$.
[Figure: Gibbs free energy with its tangent plane; the feed $b$ lies in a convex region with a single active vertex $z_1$.]
$\mathrm{conv}\,f(b) = f(b) = f(z_1) = \mathrm{conv}\,f(z_1)$.

32 Results in One Dimension
The energy is defined on the segment $(0, 1)$. Both extremities of the segment are in a convex region by assumption.
[Figure: convex hull of the normalized Gibbs free energy as a function of the water mole fraction for a water / 1-hexacosanol mixture.]

33 Results in Two Dimensions
Non-convex energy function: determination of a tangent plane.

34 Convergence of Phase Simplexes
Phase simplex for $b$ fixed. [Figures: phase simplex with three vertices; phase simplex with two vertices.]
Convergence of the phase simplexes in approximately 20 iterations ($\nu_0 = 10^{-3}$, tolerance = 10).

35 Convergence of Phase Simplexes (2)
Convergence towards three active vertices.
[Figures: vertices $x_\alpha$ and barycentric coordinates $y_\alpha$ along the iterations.]
Particular choice of the initial guess (near the vertices) and of the numerical parameters to avoid local minima.

36 Convex Hull
[Figure: convex hull for the ternary system $\mathrm{C_{26}H_{54}O}$ (1-hexacosanol) / $\mathrm{H_2O}$ / $\mathrm{C_9H_{14}O_4}$ (pinic acid).]
Computational cost: 20 s for the grid points, 25 iterations on average (tolerance = 10).

37 Results in Higher Dimensions
Optimization in $\mathbb{R}^4$: for $b = (0.002, 0.002, 0.002, 13.0)$, solution with 2 active vertices ($y_1 = y_3 = 0$; $y_2, y_4 > 0$). Convergence in 40 iterations ($\nu_0 = 10^{-3}$, tolerance = 10).
Optimization in $\mathbb{R}^{18}$: for $b \in \mathrm{int}\,\Delta_{18}$, solution with 2 active vertices / 1 active constraint. Convergence in 41 iterations ($\nu_0 = 10^{-3}$, tolerance = 10).

38 Dynamic Optimization
Coupling of the evolution of the gas concentrations with the KKT conditions for the modeling of mass transfer:

$$\frac{d}{dt}c(t) = -h(r)\left(c(t) - \frac{\eta(r)\,p^{vapor}}{R\,T(t)}\,\exp(\nabla g(x_\alpha))\right),$$

under the optimum constraints:

$$\min_{y_\alpha, x_\alpha}\ \sum_{\alpha=1}^{N+1} y_\alpha\, g(x_\alpha) - \nu\sum_{\alpha=1}^{N+1}\ln(y_\alpha)\quad\text{s.t.}\quad \sum_{\alpha=1}^{N+1} y_\alpha x_\alpha = b^{tot} - c,\quad e^T x_\alpha = 1,\quad x_\alpha > 0,\quad \alpha = 1,\ldots,N+1.$$

39 Dynamic Optimization
Coupling of the evolution of the gas concentrations with the KKT conditions for the modeling of mass transfer:

$$\frac{d}{dt}c(t) = -h(r)\left(c(t) - \frac{\eta(r)\,p^{vapor}}{R\,T(t)}\,\exp(-\lambda)\right).$$

The minimization is replaced by the KKT conditions:

$$y_\alpha\bigl(\nabla g(x_\alpha) + \lambda\bigr) + \zeta_\alpha e = 0, \qquad \alpha = 1,\ldots,N+1,$$
$$g(x_\alpha) + \lambda^T x_\alpha - \frac{\nu}{y_\alpha} = 0, \qquad \alpha = 1,\ldots,N+1,$$
$$e^T x_\alpha - 1 = 0, \qquad \alpha = 1,\ldots,N+1,$$
$$\sum_{\alpha=1}^{N+1} y_\alpha x_\alpha + c(t) - b^{tot} = 0.$$

40 DAE Discontinuities

$$\frac{d}{dt}c = f(t, c, x), \qquad 0 = g_\nu(t, c, x).$$

Without inequalities (i.e., for given $\nu$): stiff system of DAEs (differences of scales between the concentrations and of speeds between the reactions). Existence of a solution relies on the implicit function theorem. Convergence of implicit time-discretization schemes.
With inequalities (i.e., for $\nu \to 0$): bifurcation problem. Discontinuities of the derivatives of the solution. Bifurcation between local and global optimum.

41 An Implicit Discretization

$$\frac{c^{n+1} - c^n}{\tau} = -h(r^n)\left(c^{n+1} - \frac{\eta(r^n)\,p^{vapor}}{R\,T^{n+1}}\,\exp(-\lambda^{n+1})\right),$$
$$y_\alpha^{n+1}\bigl(\nabla g(x_\alpha^{n+1}) + \lambda^{n+1}\bigr) + \zeta_\alpha^{n+1} e = 0, \qquad \alpha = 1,\ldots,N+1,$$
$$g(x_\alpha^{n+1}) + (\lambda^{n+1})^T x_\alpha^{n+1} - \frac{\nu}{y_\alpha^{n+1}} = 0, \qquad \alpha = 1,\ldots,N+1,$$
$$e^T x_\alpha^{n+1} - 1 = 0, \qquad \alpha = 1,\ldots,N+1,$$
$$\sum_{\alpha=1}^{N+1} y_\alpha^{n+1} x_\alpha^{n+1} + c^{n+1} - b^{tot} = 0.$$

The nonlinear system is solved by a Newton method. The particle mass $r$ is discretized explicitly, thanks to the different time scales.
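In code, one backward-Euler step amounts to a root-finding problem on the stacked unknowns; a sketch, assuming a hypothetical residual function that assembles the discretized ODE together with the perturbed KKT conditions, with SciPy's root finder standing in for the structured Newton method above.

```python
# One implicit (backward Euler) step for the coupled differential-algebraic
# system. residual(w, c_n, tau) is a hypothetical function stacking the
# discretized ODE and the perturbed KKT conditions; w = (c, y, x, lambda, zeta).
from scipy.optimize import root

def implicit_euler_step(w_n, c_n, tau, residual):
    sol = root(lambda w: residual(w, c_n, tau), w_n, method="hybr")
    if not sol.success:
        raise RuntimeError("nonlinear solve failed: " + sol.message)
    return sol.x  # warm-starts the next time step
```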

42 Extended KKT System
Newton system:

$$\begin{pmatrix} H_c & 0 & 0 & B & 0 \\ 0 & y_\alpha \nabla^2 g(x_\alpha) & \nabla g(x_\alpha) + \lambda & y_\alpha I & e \\ 0 & (\nabla g(x_\alpha) + \lambda)^T & \nu/y_\alpha^2 & x_\alpha^T & 0 \\ I & y_\alpha I & x_\alpha & 0 & 0 \\ 0 & e^T & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} p_c \\ p_x \\ p_y \\ p_\lambda \\ p_{\zeta_\alpha} \end{pmatrix} = \begin{pmatrix} b_c \\ b_x \\ b_y \\ b_\lambda \\ b_{\zeta_\alpha} \end{pmatrix}$$

$H_c$ is positive definite. Design of numerical linear algebra techniques with sequential quadratic programming and/or Schur complement methods. Control of the inertia of the matrices.

43 Sequential Quadratic Programming
Resolution of the extended linear system by the resolution of a sequence of convex quadratic optimization problems (sequential quadratic programming). General formulation in terms of primal and dual variables:

$$\begin{pmatrix} H_k & A_k^T \\ A_k & 0 \end{pmatrix} \begin{pmatrix} p_x \\ p_\lambda \end{pmatrix} = -\begin{pmatrix} \nabla f_k + A_k^T \lambda_k \\ c_k \end{pmatrix}$$

is equivalent to

$$\min_p\ \tfrac{1}{2}\, p^T H_k p + \nabla f_k^T p \quad\text{s.t.}\quad A_k p + c_k = 0.$$

Control of the inertia of the matrices.
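When H_k is positive definite (as asserted for H_c on the previous slide), the saddle-point system can be solved by block elimination; a dense sketch of the Schur-complement approach, not the talk's specific range-space/null-space implementation.

```python
# Block elimination of the saddle-point system [[H, A^T], [A, 0]] via the
# Schur complement S = A H^{-1} A^T, valid when H is symmetric positive
# definite; a dense illustration of the structure.
import numpy as np

def solve_saddle(H, A, r1, r2):
    """Solve H p_x + A^T p_lam = r1, A p_x = r2."""
    Hinv_AT = np.linalg.solve(H, A.T)
    Hinv_r1 = np.linalg.solve(H, r1)
    S = A @ Hinv_AT                        # Schur complement
    p_lam = np.linalg.solve(S, A @ Hinv_r1 - r2)
    p_x = Hinv_r1 - Hinv_AT @ p_lam
    return p_x, p_lam
```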

44 Sequential Quadratic Programming (2)
In our case, the sequential quadratic programming problem can be expressed as the convex problem

$$\min_{p_c}\ \Bigl\{ \tfrac{1}{2}\, p_c^T H_c p_c + b_c^T p_c + G(p_c) \Bigr\},$$

where $G(p_c)$ is the optimal value of

$$\min_{p_{x_\alpha}, p_{y_\alpha}}\ \tfrac{1}{2}\sum_{\alpha=1}^P \begin{pmatrix} p_{x_\alpha} \\ p_{y_\alpha} \end{pmatrix}^T \Theta_\alpha \begin{pmatrix} p_{x_\alpha} \\ p_{y_\alpha} \end{pmatrix} + \sum_{\alpha=1}^P \begin{pmatrix} b_{x_\alpha} \\ b_{y_\alpha} \end{pmatrix}^T \begin{pmatrix} p_{x_\alpha} \\ p_{y_\alpha} \end{pmatrix}
\quad\text{s.t.}\quad e^T p_{x_\alpha} = b_{\zeta_\alpha},\quad \sum_{\alpha=1}^P \bigl(y_\alpha p_{x_\alpha} + x_\alpha p_{y_\alpha}\bigr) = b_\lambda - p_c.$$

45 Gas-Particle Partitioning
[Figure: trajectory of the feed vector $b$ in the 1-hexacosanol / water / pinic acid composition triangle (mixing inside the particle).]
Convergence towards a stationary solution inside the particle. Detection of the phase separations.

46 Mass Conservation
[Figures: total mass of the gas, the particle, and the gas-particle system vs. time; moles in the particle phases vs. time.]
Mass conservation in the gas-particle system and convergence to a stationary solution for the particle phases.

47 Computation of Particle Radius
The radius of the particle $r(t)$ is computed by conservation of mass in the (spherical) particle:

$$\underbrace{\frac{4}{3}\pi r(t)^3}_{\text{volume}} = \underbrace{\sum_{i=1}^{n_s} \frac{b_i(t)\, m_{c,i}}{\rho_i}}_{\text{approximated ratio mass/density}},$$

where $m_c$ is the molecular weight vector of the component set and $\rho_i$ is the density of component $i$.
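The formula inverts directly; a small sketch:

```python
# Particle radius from mass conservation: (4/3) pi r^3 = sum_i b_i m_i / rho_i.
import numpy as np

def particle_radius(b, m_c, rho):
    """b: moles per component, m_c: molecular weights, rho: densities."""
    volume = np.sum(b * m_c / rho)
    return (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)
```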

48 Aerosol Growth
[Figure: evolution of the particle radius $r$ over time.]
The characteristic times for gas-particle equilibrium compare well with Meng, Seinfeld (1996).

49 Extension to a Population of Particles
[Figure: a population of aerosol particles with radii $R_i(t)$ and compositions $b_i(t)$, each with its own equilibrium pressure $p_i^{equil}$, in a common bulk gas of concentration $c(t)$.]
Differences of sizes, of reaction speeds, and of the modeling of the internal energy increase the stiffness of the problem.

50 Extension to a Population of Particles
Population of aerosol particles $i = 1,\ldots,N$:

$$\frac{d}{dt}c(t) = -\sum_{i=1}^N h(r_i)\left(c(t) - \frac{\eta(r_i)}{R\,T(t)}\,p^{equil}(b_i(t))\right), \qquad c(0) = c_0,$$
$$\frac{d}{dt}b_i(t) = h(r_i)\left(c(t) - \frac{\eta(r_i)}{R\,T(t)}\,p^{equil}(b_i(t))\right), \qquad b_i(0) = b_{0,i},$$
$$p^{equil}(b_i(t)) = p^{vapor}\exp\bigl(\nabla g_i(x_\alpha^i)\bigr),$$

with

$$\min_{y_\alpha^i, x_\alpha^i}\ \sum_{\alpha=1}^{P_i} y_\alpha^i\, g_i(x_\alpha^i)\quad\text{s.t.}\quad \sum_{\alpha=1}^{P_i} y_\alpha^i x_\alpha^i = b_i(t),\quad y_\alpha^i \ge 0,\quad e^T x_\alpha^i = 1,\quad x_\alpha^i > 0,\quad \alpha = 1,\ldots,P_i.$$

51 Extension to a Population of Particles
Numerical linear algebra techniques: the Newton system is bordered block-diagonal (arrowhead), with one KKT block per particle (blocks $H_i$, $B_i$, $C_i$, $A$, $I$ acting on the unknowns $p_{c_i}$, $p_{x_i}$, $p_{\lambda_i}$, with right-hand sides $r_{c_i}$, $r_{x_i}$, $r_{\lambda_i}$), all coupled through the shared gas unknowns $p_{c_0}$ (blocks $D$, right-hand side $r_{c_0}$).
Properties of the Schur complement?
Current work with A. Leonard, undergraduate student, UH.
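One natural way to exploit this structure is to eliminate the particle blocks and solve a small Schur-complement system for the shared gas unknowns; a dense sketch, where K_i, B_i, C_i, D are generic stand-ins for the per-particle and coupling blocks above.

```python
# Arrowhead (bordered block-diagonal) elimination: solve each particle block
# independently, then a small Schur-complement system for the shared gas
# unknowns p0. K, B, C, D are generic dense stand-ins for the talk's blocks.
import numpy as np

def solve_arrowhead(Ks, Bs, Cs, D, rs, r0):
    """Solve K_i p_i + B_i p0 = r_i (per particle), sum_i C_i p_i + D p0 = r0."""
    S, rhs, pieces = D.copy(), r0.copy(), []
    for K, B, C, r in zip(Ks, Bs, Cs, rs):
        KiB, Kir = np.linalg.solve(K, B), np.linalg.solve(K, r)
        S -= C @ KiB                      # Schur complement of the border
        rhs -= C @ Kir
        pieces.append((KiB, Kir))
    p0 = np.linalg.solve(S, rhs)          # shared gas update
    return [Kir - KiB @ p0 for KiB, Kir in pieces], p0
```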

52 Tracking of Discontinuities
Phase separations occur at the activation/deactivation of an inequality constraint.
Event-location techniques for the tracking of the discontinuities: a phase becomes inactive when $y_\alpha = 0$; a phase becomes active when $y_\alpha > 0$.
Trade-off with warm-start techniques.

53 Tracking of Discontinuities
First-order Euler scheme for the differential-algebraic system.
Detection of vanishing phases ($y_\alpha = 0$) via the Taylor expansion

$$y_\alpha = y_\alpha(b^n) + h^* \frac{dy_\alpha}{dt}(b^n) + O\bigl((h^*)^2\bigr).$$

Estimation of the derivatives with a sensitivity approach, to obtain the appropriate time step.
Predictor-corrector two-step Adams method for the computation of the next time step.
Convergence result: $\|b(t^*) - b^{n+1}\| \le C h$, where $t^*$ is the time of impact and $h^* \in (0, h)$ is the time step of impact. This order is due to the Euler scheme!
Current work with C. Landry, graduate student, EPFL.
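The vanishing-phase prediction is just a linear time-to-zero estimate; a sketch, assuming dy/dt comes from the sensitivity computation mentioned above.

```python
# Predict the step h* at which a phase amount y_a first hits zero, from a
# first-order Taylor expansion; dydt is assumed to come from a sensitivity
# computation. Returns min(h, h*) so the integrator lands on the event.
def step_to_phase_event(y, dydt, h, eps=1e-12):
    h_star = h
    for ya, da in zip(y, dydt):
        if da < -eps and ya > 0.0:        # phase shrinking towards zero
            h_star = min(h_star, ya / -da)
    return h_star
```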

54 Current and Future Work
Accurate detection of the discontinuities (event-location theory).
Extension to a population of particles and numerical linear algebra.
Higher-order numerical schemes.
Bifurcation theory for differential-algebraic equations.
Determination of the convex hull of a finite family of energy functions (mixtures of aerosols), coupling with active-set methods, and design of interior-point active-set methods.

