Journal of Computational and Applied Mathematics
Journal of Computational and Applied Mathematics 234 (2010). Contents lists available at ScienceDirect.

Global optimization by canonical dual function

Jinghao Zhu (a), Jiani Zhou (b), David Gao (c)
(a) Department of Applied Mathematics, Tongji University, Shanghai, China
(b) Department of Mathematics, Tongji University, Shanghai, China
(c) Department of Mathematics, Virginia Tech, Blacksburg, USA

Article history: Received 4 June 2009; received in revised form 27 December 2009. MSC: 90-xx.

Abstract. In this paper, the canonical dual function (Gao, 2004 [4]) is used to solve a global optimization problem. We find global minimizers by backward differential flows. The backward flow is created by the local solution to the initial value problem of an ordinary differential equation. Some examples and applications are presented. © 2009 Elsevier B.V. All rights reserved.

Keywords: Canonical dual function; Global optimization; Backward differential flow.

1. Introduction

The primary goal of this paper is to find the global minimizers of the following optimization problem (primal problem (P) for short):

    (P): min P(x)  s.t. x ∈ D,    (1.1)

where D = {x ∈ R^n : ‖x‖ ≤ 1} and P(x) is a twice continuously differentiable function on R^n. This problem often arises as a subproblem in general optimization algorithms (cf. [1]). As indicated in [2], due to the presence of the nonlinear sphere constraint, the solution of (P) is likely to be irrational, which implies that it is not possible to compute the solution exactly. Therefore many polynomial-time algorithms have been suggested for computing approximate solutions to this problem (see [3]). However, when P(x) is a concave quadratic function, this problem can be solved completely by the canonical dual transformation (see [4–7]). The canonical duality theory is a powerful new approach to global optimization and non-convex variational problems.
The duality structure in non-convex systems was originally studied in [8]. In this paper we solve (1.1) with P(x) a general twice continuously differentiable objective. The goal of this paper is to find an exact global minimizer of P(x) over a sphere by means of the canonical dual function. The paper is organized as follows. In Section 2, for the primal problem (P), an ordinary backward differential equation is introduced to construct the canonical dual function. In Section 3, we use the backward flow to reach a global minimizer, and several examples are worked out. An application to optimal control problems is given in the last section.

Corresponding author. E-mail address: jinghaok@online.sh.cn (J. Zhu). See front matter © 2009 Elsevier B.V. All rights reserved. doi:10.1016/j.cam
2. Global optimization via differential flows

In this section we present differential flows for constructing the so-called canonical dual function [5] to deal with the global optimization problem (1.1). Here we use the method of our earlier paper [9]. Throughout, P(x) is twice continuously differentiable on R^n. Define the set

    G = {ρ > 0 : ∇²P(x) + ρI > 0, ∀x ∈ D},    (2.1)

where D = {x ∈ R^n : xᵀx ≤ 1}. By elementary calculus it is easy to obtain the following result.

Proposition 2.1. G is an open set. If ρ̂ ∈ G, then ρ ∈ G for every ρ > ρ̂.

When there is a pair (ρ̂, x̂⁰) ∈ G × D satisfying the equation

    ∇P(x̂⁰) + ρ̂ x̂⁰ = 0,    (2.2)

we focus on the flow x̂(ρ), well defined near ρ̂, given by the initial value problem

    dx̂/dρ = −[∇²P(x̂) + ρI]⁻¹ x̂,    (2.3)
    x̂(ρ̂) = x̂⁰.    (2.4)

The flow x̂(ρ) can be extended to wherever ρ ∈ G ∩ (0, +∞) [10]. The canonical dual function [5] with respect to the given flow x̂(ρ) is defined as

    P^d(ρ) = P(x̂(ρ)) + (ρ/2) x̂ᵀ(ρ)x̂(ρ) − ρ/2.    (2.5)

Lemma 2.1. For a given flow defined by (2.2)–(2.4), we have

    dP^d(ρ)/dρ = (1/2) x̂ᵀ(ρ)x̂(ρ) − 1/2,    (2.6)
    d²P^d(ρ)/dρ² = −(dx̂(ρ)/dρ)ᵀ [∇²P(x̂(ρ)) + ρI] (dx̂(ρ)/dρ).    (2.7)

Proof. Since P^d(ρ) is differentiable and, along the flow, ∇P(x̂(ρ)) + ρ x̂(ρ) = 0, we have

    dP^d(ρ)/dρ = ∇P(x̂(ρ))ᵀ dx̂(ρ)/dρ + (1/2) x̂ᵀ(ρ)x̂(ρ) + (ρ/2) d(x̂ᵀ(ρ)x̂(ρ))/dρ − 1/2
               = −ρ x̂ᵀ(ρ) dx̂(ρ)/dρ + (1/2) x̂ᵀ(ρ)x̂(ρ) + (ρ/2) d(x̂ᵀ(ρ)x̂(ρ))/dρ − 1/2
               = −(ρ/2) d(x̂ᵀ(ρ)x̂(ρ))/dρ + (1/2) x̂ᵀ(ρ)x̂(ρ) + (ρ/2) d(x̂ᵀ(ρ)x̂(ρ))/dρ − 1/2
               = (1/2) x̂ᵀ(ρ)x̂(ρ) − 1/2.

Further, since P(x) is twice continuously differentiable, by (2.3) we have

    d²P^d(ρ)/dρ² = x̂ᵀ(ρ) dx̂(ρ)/dρ = −(dx̂(ρ)/dρ)ᵀ [∇²P(x̂(ρ)) + ρI] (dx̂(ρ)/dρ).

Lemma 2.2. Let x̂(ρ) be a given flow defined by (2.2)–(2.4) and let P^d(ρ) be the corresponding canonical dual function defined by (2.5). Then: (i) for every ρ ∈ G, d²P^d(ρ)/dρ² ≤ 0; (ii) if ρ̂ ∈ G, then dP^d(ρ)/dρ monotonically decreases on [ρ̂, +∞); (iii) on (ρ̂, +∞), P^d(ρ) is monotonically decreasing.
Proof. When ρ ∈ G, by the definition of G we have ∇²P(x̂(ρ)) + ρI > 0. It follows from (2.7) that d²P^d(ρ)/dρ² ≤ 0, and by Proposition 2.1 we see that dP^d(ρ)/dρ monotonically decreases on [ρ̂, +∞) when ρ̂ ∈ G. Finally, since x̂(ρ̂) ∈ D, dP^d(ρ̂)/dρ ≤ 0 by (2.6). It then follows from ρ̂ ∈ G that dP^d(ρ)/dρ ≤ 0 on [ρ̂, +∞). Thus, on (ρ̂, +∞), P^d(ρ) is monotonically decreasing.

Theorem 2.1. If the flow x̂(ρ) (defined by (2.2)–(2.4)) intersects the boundary of the ball D = {x ∈ R^n : ‖x‖ ≤ 1} at ρ = ρ̄ ∈ G, i.e.

    [x̂(ρ̄)]ᵀ x̂(ρ̄) = 1,  ρ̄ ∈ G,    (2.8)

then x̂(ρ̄) is a global minimizer of P(x) over D. Furthermore, we have

    min_{x∈D} P(x) = P(x̂(ρ̄)) = P^d(ρ̄) = max_{ρ≥ρ̄} P^d(ρ).

Proof. By the definition of the flow x̂(ρ) ((2.2)–(2.4)) and Proposition 2.1, noting that x̂(ρ̄) lies on the flow and ρ̄ ∈ G, we have, for all ρ ≥ ρ̄,

    ∇[P(x) + (ρ/2)(xᵀx − 1)]|_{x=x̂(ρ)} = ∇P(x̂(ρ)) + ρ x̂(ρ) = 0,    (2.9)

and, for all ρ ≥ ρ̄,

    ∇²[P(x) + (ρ/2)(xᵀx − 1)] = ∇²P(x) + ρI > 0,  x ∈ D.    (2.10)

Since P(x) is twice continuously differentiable on R^n, there is a positive real number δ such that (2.10) holds on {x : xᵀx < 1 + δ}, which contains D. In other words, for each ρ ≥ ρ̄, x̂(ρ) is the global minimizer of P(x) + (ρ/2)(xᵀx − 1) over D. Therefore, for every x ∈ D = {x ∈ R^n : xᵀx ≤ 1} and every ρ ≥ ρ̄,

    P(x) ≥ P(x) + (ρ/2)(xᵀx − 1) ≥ inf_{x∈D} [P(x) + (ρ/2)(xᵀx − 1)] = P(x̂(ρ)) + (ρ/2) x̂ᵀ(ρ)x̂(ρ) − ρ/2 = P^d(ρ).

Thus, by Lemma 2.2 and (2.8),

    P(x) ≥ max_{ρ≥ρ̄} P^d(ρ) = P^d(ρ̄) = P(x̂(ρ̄)) + (ρ̄/2)[(x̂(ρ̄))ᵀ x̂(ρ̄) − 1] = P(x̂(ρ̄)).    (2.11)

Consequently, min_{x∈D} P(x) = max_{ρ≥ρ̄} P^d(ρ). This concludes the proof of Theorem 2.1.

Definition 2.1. Let x̂(ρ) be a flow defined by (2.2)–(2.4). We call x̂(ρ), ρ ∈ (0, ρ̂], the backward differential flow. In other words, the backward differential flow x̂(ρ), ρ ∈ (0, ρ̂], comes from solving Eq. (2.3)–(2.4) backwards from ρ̂.

Example 2.1 (A Non-Convex Quadratic Optimization Over a Sphere). Let G ∈ R^{m×m} be a symmetric matrix and let f ∈ R^m, f ≠ 0, be a vector such that P(x) = (1/2)xᵀGx − fᵀx is non-convex.
We consider the following global optimization over a sphere:

    min P(x) = (1/2)xᵀGx − fᵀx  s.t. xᵀx ≤ 1.

Suppose that G has p ≤ m distinct eigenvalues a₁ < a₂ < ⋯ < a_p. Since P(x) = (1/2)xᵀGx − fᵀx is non-convex, a₁ < 0. Choose a large ρ̂ > (tr(GᵀG))^{1/2} such that 0 < ‖(G + ρ̂I)⁻¹f‖ < 1, noting that f ≠ 0. The backward differential equation is

    dx/dρ = −(G + ρI)⁻¹x,  x(ρ̂) = (G + ρ̂I)⁻¹f,  ρ ≤ ρ̂,

which leads to the backward flow

    x(ρ) = (G + ρI)⁻¹f,  ρ ≤ ρ̂.
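That the closed-form expression x(ρ) = (G + ρI)⁻¹f really solves the backward differential equation follows from dx/dρ = −(G + ρI)⁻²f = −(G + ρI)⁻¹x(ρ), and it can be verified by a finite-difference sketch. The 2×2 data, step sizes, and tolerances below are made-up illustration values, not from the paper:

```python
# Finite-difference check (our own sketch, made-up 2x2 data) that the
# closed-form flow x(rho) = (G + rho I)^{-1} f satisfies
# dx/drho = -(G + rho I)^{-1} x(rho).
G = [[-2.0, 0.5], [0.5, 1.0]]    # symmetric and indefinite, so P is non-convex
f = [1.0, 1.0]

def solve2(rho, rhs):
    # Solve the 2x2 system (G + rho I) y = rhs by Cramer's rule.
    a, b = G[0][0] + rho, G[0][1]
    c, d = G[1][0], G[1][1] + rho
    det = a * d - b * c
    return [(d * rhs[0] - b * rhs[1]) / det, (a * rhs[1] - c * rhs[0]) / det]

rho, eps = 5.0, 1e-5
x = solve2(rho, f)                               # x(rho) on the flow
numeric = [(p - m) / (2 * eps)                   # central-difference dx/drho
           for p, m in zip(solve2(rho + eps, f), solve2(rho - eps, f))]
analytic = [-v for v in solve2(rho, x)]          # -(G + rho I)^{-1} x(rho)
assert all(abs(n - v) < 1e-8 for n, v in zip(numeric, analytic))
```

The central difference agrees with the right-hand side of the backward equation to within discretization error, which is the defining property of the flow.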
4 J. Zhu et al. / Journal of Computational and Applied Mathematics 234 (2) Further, noting that there is an orthogonal matrix R leading to a diagonal transformation RGR T D : (a i δ ij ) and correspondingly Rf g : (g i ), we have p x T (ρ)x(ρ) f T (G + ρi) 2 g 2 i f (a i + ρ), 2 ρ ˆρ. Since f T (G + ˆρI) 2 f < and p lim ρ> a,ρ a i i g 2 i (a i + ρ) 2 +, there is the unique ρ : < ρ < ˆρ such that p x T ( ρ)x( ρ) f T (G + ρi) 2 g 2 i f (a i + ρ). 2 i By Theorem 2., we see that x( ρ) (G + ρi) f is a global minimizer of the problem. Remark 2.. In the beginning of this section, we have mentioned that the idea of introducing backward differential flows is motivated by our another paper 9]. Here we would like to describe a little bit about the connection of this paper with the paper 9] as follows. In 9] we consider the differential equation (2.3) (2.4) to be defined on the set S {ρ > 2 P(x) + ρi] is invertible on D}. It is clear that the set G defined in (2.) is a subset of S. By the canonical duality theory, in general it cannot be expected to obtain a global minimizer by solving the differential equation (2.3) (2.4) in the set S which dose not give any information even for a local minimizer. Therefore in this paper we solve the differential equation in G. On the other hand, the property given in Proposition 2. leads us to consider the backward differential equation in G. 3. Find the global minimizer by backward differential flows The main idea of using backward differential flows to find a global minimizer is as follows. Since D is compact and P(x) is twice continuously differentiable, we can choose a large positive parameter ˆρ such that 2 P(x) + ˆρI >, x D and ˆρ > sup D { P(x), 2 P(x) }. If P(), then it follows that there is a nonzero point ˆx D such that P(x) ˆρ x by Brown fixed-point theorem. It means that the pair (ˆx, ˆρ) satisfies (2.2). We solve (2.3) (2.4) backwards from ˆρ to get the backward flow x(ρ), ρ (, ˆρ]. 
If there is a ρ̄ ∈ G ∩ (0, ρ̂] such that x(ρ̄)ᵀx(ρ̄) = 1, then x(ρ̄) is a global minimizer of problem (1.1) by Theorem 2.1.

Example 3.1 (A Concave Minimization). Let us consider the following one-dimensional concave minimization problem:

    min P(x) = (1/12)x⁴ − x² + x    (3.1)
    s.t. x² ≤ 1.    (3.2)

We have

    P′(x) = (1/3)x³ − 2x + 1,  P″(x) = x² − 2 < 0,  x² ≤ 1.

Choosing ρ̂ = 10, we solve the following equation in {x : x² < 1} (for the fixed point x̂):

    (1/3)x³ − 2x + 1 + 10x = 0    (3.3)

to get x̂ ≈ −0.125. Next we solve the backward differential equation

    dx(ρ)/dρ = −x(ρ)/(x²(ρ) − 2 + ρ),  x(ρ̂) = x̂,  ρ ≤ 10.

To find a parameter ρ̄ such that x²(ρ̄) = 1, we get ρ̄ = 8/3, which satisfies

    P″(x) + 8/3 = x² + 2/3 > 0.
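Under this reading of the example's data (P(x) = x⁴/12 − x² + x, ρ̂ = 10, exit parameter ρ̄ = 8/3), the backward flow can be traced numerically; the Newton solver, Runge–Kutta step count, and tolerance below are our own choices:

```python
# Numerical trace of the concave example's backward flow, assuming
# P(x) = x^4/12 - x^2 + x, so grad P(x) = x^3/3 - 2x + 1 and P''(x) = x^2 - 2.

def grad_P(x):
    return x ** 3 / 3.0 - 2.0 * x + 1.0

# Fixed point of grad P(x) + 10 x = 0 by Newton's method (rho_hat = 10).
x = 0.0
for _ in range(50):
    x -= (grad_P(x) + 10.0 * x) / (x * x - 2.0 + 10.0)
x_hat = x                            # approximately -0.125

# Integrate dx/drho = -x / (P''(x) + rho) backward from rho = 10 to rho = 8/3
# with classical RK4; the denominator stays positive along this branch.
def rhs(rho, x):
    return -x / (x * x - 2.0 + rho)

rho, x = 10.0, x_hat
n = 4000
h = (8.0 / 3.0 - 10.0) / n           # negative step: backward in rho
for _ in range(n):
    k1 = rhs(rho, x)
    k2 = rhs(rho + h / 2, x + h * k1 / 2)
    k3 = rhs(rho + h / 2, x + h * k2 / 2)
    k4 = rhs(rho + h, x + h * k3)
    x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    rho += h

# The flow should meet the boundary x^2 = 1 at rho = 8/3, at the minimizer -1.
assert abs(x + 1.0) < 1e-6
assert abs(grad_P(x) + (8.0 / 3.0) * x) < 1e-6
```

The numerically continued flow reaches the boundary point x = −1 at ρ = 8/3, matching the algebraic solution below.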
Denote x(8/3) by x̄. Computing the solution of the algebraic equation

    (1/3)x̄³ − 2x̄ + 1 + (8/3)x̄ = 0,  x̄² = 1,

we get x̄ = −1. It follows from Theorem 2.1 that x̄ = −1 is the global minimizer of P(x) over [−1, 1].

Remark 3.1. In this example we solve a concave optimization problem by backward differential flows. By Theorem 2.1, if the flow intersects the boundary of the ball at x̄, then x̄ is a global minimizer. This helps one obtain an exact solution of the global optimization problem.

Example 3.2 (A Non-Convex Minimization). We now consider the non-convex minimization problem

    min P(x) = (1/3)x³ + 2x    (3.4)
    s.t. x² ≤ 1.

Choosing ρ̂ = 72, we solve the following equation in {x : x² < 1} (for the fixed point)

    x² + 2 + 72x = 0

to get x̂ = (−72 + √5176)/2 ≈ −0.0278. We then solve the backward differential equation

    dx/dρ = −x/(2x + ρ),  ρ ≤ 72,  x(72) = x̂.

To find a parameter ρ̄ such that x²(ρ̄) = 1, we get ρ̄ = 3, which satisfies

    P″(x) + 3 = 2x + 3 > 0,  |x| ≤ 1.

Denote x(3) by x̄. Computing the solution of the algebraic equation

    x̄² + 2 + 3x̄ = 0,  x̄² = 1,

we get x̄ = −1. It follows from Theorem 2.1 that x̄ = −1 is the global minimizer of P(x) over [−1, 1].

Remark 3.2. This example shows that a backward differential flow is also useful in solving a non-convex optimization problem. For global optimization problems one usually computes the global minimizer numerically; even with the canonical duality method one has to solve a dual problem numerically. The backward differential flow, however, points to a new way of finding a global minimizer. In particular, one may expect an exact solution of the problem provided the corresponding backward differential equation has an analytic solution.

4. An application in optimal control

In this section we consider matrices A ∈ R^{n×n}, B ∈ R^{n×m}, vectors c ∈ R^n, b ∈ R^m, and a symmetric matrix G ∈ R^{m×m} for which (1/2)uᵀGu is non-convex. Suppose that G has p ≤ m distinct eigenvalues a₁ < a₂ < ⋯ < a_p. We need the following assumption.
Basic assumption: rank(Bᵀ, b) > rank(Bᵀ).

We will solve the following optimal control problem:

    (P₁): min J(u) = ∫₀ᵀ [cᵀx + (1/2)uᵀGu − bᵀu] dt
    s.t. ẋ = Ax + Bu,  x(0) = x₀,  t ∈ [0, T],  ‖u‖ ≤ 1.

We define a function φ(t, x) = ψᵀ(t)x, where the continuously differentiable function ψ(t) is determined by the following Cauchy problem:

    ψ̇(t) = −Aᵀψ(t) + c,  ψ(T) = 0.
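Reading the Cauchy problem as ψ̇(t) = −Aᵀψ(t) + c with terminal condition ψ(T) = 0, the adjoint ψ can be produced by integrating backward in time. A minimal scalar sketch using the data of Example 4.1 below (A = −1, c = 1, T = 1); the step count is our own choice:

```python
import math

# Backward-in-time integration of the scalar adjoint equation
# psi' = -A^T psi + c with psi(T) = 0, using A = -1, c = 1, T = 1
# (the data of Example 4.1).  The step count n is made up.
A, c, T = -1.0, 1.0, 1.0

def psi_dot(p):
    return -A * p + c            # scalar form of psi' = -A^T psi + c

n = 2000
h = -T / n                       # negative step: integrate from t = T to t = 0
p = 0.0                          # terminal condition psi(T) = 0
for _ in range(n):               # classical RK4 (autonomous right-hand side)
    k1 = psi_dot(p)
    k2 = psi_dot(p + h * k1 / 2)
    k3 = psi_dot(p + h * k2 / 2)
    k4 = psi_dot(p + h * k3)
    p += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Under this reading the closed form is psi(t) = e^{t-1} - 1, so psi(0) = 1/e - 1.
assert abs(p - (math.exp(-1.0) - 1.0)) < 1e-9
```

The numerical terminal-value solve reproduces the closed-form adjoint used in Example 4.1 to machine precision.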
We have

    J(u) = ∫₀ᵀ [cᵀx + (1/2)uᵀGu − bᵀu] dt
         = ∫₀ᵀ [(ψ̇(t) + Aᵀψ(t))ᵀx + (1/2)uᵀGu − bᵀu] dt
         = ∫₀ᵀ [ψ̇ᵀ(t)x + ψᵀ(t)(Ax + Bu) − ψᵀ(t)Bu + (1/2)uᵀGu − bᵀu] dt
         = ∫₀ᵀ [ψ̇ᵀ(t)x(t) + ψᵀ(t)ẋ(t) − ψᵀ(t)Bu + (1/2)uᵀGu − bᵀu] dt
         = ∫₀ᵀ [(d/dt)φ(t, x(t)) − ψᵀ(t)Bu + (1/2)uᵀGu − bᵀu] dt
         = φ(T, x(T)) − φ(0, x(0)) + ∫₀ᵀ [(1/2)uᵀGu − bᵀu − ψᵀ(t)Bu] dt
         = −φ(0, x(0)) + ∫₀ᵀ [(1/2)uᵀGu − bᵀu − ψᵀ(t)Bu] dt,    (4.1)

noting that ψ(T) = 0 and x(0) = x₀. Thus

    min J(u) = −φ(0, x(0)) + ∫₀ᵀ min_{‖u‖≤1} [(1/2)uᵀGu − bᵀu − ψᵀ(t)Bu] dt.

Consequently, we deduce that, for t ∈ [0, T] a.e., the optimal control satisfies

    û(t) = arg min_{uᵀu≤1} [(1/2)uᵀGu − bᵀu − ψᵀ(t)Bu].

For each t ∈ [0, T] we therefore need to solve the non-convex optimization

    min (1/2)uᵀGu − (b + Bᵀψ(t))ᵀu  s.t. uᵀu ≤ 1.

It follows from the basic assumption rank(Bᵀ, b) > rank(Bᵀ) that b + Bᵀψ(t) ≠ 0 for each t ∈ [0, T]. By Example 2.1, for each t ∈ [0, T] we have

    û(t) = (G + ρ_t I)⁻¹[b + Bᵀψ(t)],

where the dual variable ρ_t > −a₁ is such that

    (b + Bᵀψ(t))ᵀ(G + ρ_t I)⁻²(b + Bᵀψ(t)) = 1.

With respect to ψ we define the function ρ(ψ) by the equation

    (b + Bᵀψ)ᵀ(G + ρ(ψ)I)⁻²(b + Bᵀψ) = 1,  ρ(ψ) > −a₁.    (4.2)

We then have the analytic expression of the optimal control

    û(t) = (G + ρ(ψ(t))I)⁻¹(b + Bᵀψ(t)),

where

    ψ(t) = −∫_t^T e^{Aᵀ(s−t)} ds c = −e^{−Aᵀt} [∫_t^T e^{Aᵀs} ds] c.    (4.3)

Example 4.1. Consider the following optimal control problem:

    (P₁): min ∫₀¹ [x − (1/2)u²] dt
    s.t. ẋ = −x + u,  x(0) = 0,  t ∈ [0, 1],  |u| ≤ 1.
In this example we have G = −1, c = 1, b = 0, A = −1, B = 1, T = 1, and

    ψ(t) = −e^t ∫_t^1 e^{−s} ds = e^{t−1} − 1,  (0 ≤ t ≤ 1).

To find an optimal control, we solve

    (ρ − 1)² = ψ²(t),  ρ > 1,

to get ρ = 1 + |ψ(t)| = 2 − e^{t−1}. Finally, we obtain the analytic expression of the optimal control:

    û(t) = (ρ − 1)⁻¹ψ(t) = (1 − e^{t−1})⁻¹(e^{t−1} − 1) = −1,  (0 ≤ t ≤ 1).

Acknowledgement

The first author's research was partly supported by the National Science Foundation of China under grant no.

References

[1] M.J.D. Powell, UOBYQA: Unconstrained optimization by quadratic approximation, Mathematical Programming, Series B 92 (3) (2002).
[2] C.A. Floudas, V. Visweswaran, Quadratic optimization, in: R. Horst, P.M. Pardalos (Eds.), Handbook of Global Optimization, Kluwer Academic Publishers, Dordrecht, Boston, London, 1995.
[3] Y.Y. Ye, On affine scaling algorithms for nonconvex quadratic programming, Mathematical Programming 56 (1992).
[4] D.Y. Gao, Canonical duality theory and solutions to constrained nonconvex quadratic programming, Journal of Global Optimization 29 (2004).
[5] D.Y. Gao, Duality Principles in Nonconvex Systems: Theory, Methods and Applications, Kluwer Academic Publishers, Dordrecht, 2000 (now Springer).
[6] D.Y. Gao, Analytic solution and triality theory for nonconvex and nonsmooth variational problems with applications, Nonlinear Analysis 42 (2000).
[7] D.Y. Gao, Solutions and optimality criteria to box constrained nonconvex minimization problems, Journal of Industrial and Management Optimization 3 (2) (2007).
[8] David Y. Gao, G. Strang, Geometric nonlinearity: Potential energy, complementary energy, and the gap function, Quarterly of Applied Mathematics 47 (1989).
[9] Jinghao Zhu, Shiming Tao, David Gao, A study on concave optimization via canonical dual function, Journal of Computational and Applied Mathematics 224 (2009).
[10] C. Robinson, Dynamical Systems, CRC Press, 1999.
More informationPartial Differential Equations
Part II Partial Differential Equations Year 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2015 Paper 4, Section II 29E Partial Differential Equations 72 (a) Show that the Cauchy problem for u(x,
More informationON WEAK SOLUTION OF A HYPERBOLIC DIFFERENTIAL INCLUSION WITH NONMONOTONE DISCONTINUOUS NONLINEAR TERM
Internat. J. Math. & Math. Sci. Vol. 22, No. 3 (999 587 595 S 6-72 9922587-2 Electronic Publishing House ON WEAK SOLUTION OF A HYPERBOLIC DIFFERENTIAL INCLUSION WITH NONMONOTONE DISCONTINUOUS NONLINEAR
More informationJournal of Computational and Applied Mathematics. Relations among eigenvalues of left-definite Sturm Liouville problems
Journal of Computational and Applied Mathematics 236 (2012) 3426 3433 Contents lists available at SciVerse ScienceDirect Journal of Computational and Applied Mathematics journal homepage: www.elsevier.com/locate/cam
More informationUnbounded Convex Semialgebraic Sets as Spectrahedral Shadows
Unbounded Convex Semialgebraic Sets as Spectrahedral Shadows Shaowei Lin 9 Dec 2010 Abstract Recently, Helton and Nie [3] showed that a compact convex semialgebraic set S is a spectrahedral shadow if the
More informationLecture: Duality.
Lecture: Duality http://bicmr.pku.edu.cn/~wenzw/opt-2016-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/35 Lagrange dual problem weak and strong
More informationGENERALIZED second-order cone complementarity
Stochastic Generalized Complementarity Problems in Second-Order Cone: Box-Constrained Minimization Reformulation and Solving Methods Mei-Ju Luo and Yan Zhang Abstract In this paper, we reformulate the
More informationA note on the σ-algebra of cylinder sets and all that
A note on the σ-algebra of cylinder sets and all that José Luis Silva CCM, Univ. da Madeira, P-9000 Funchal Madeira BiBoS, Univ. of Bielefeld, Germany (luis@dragoeiro.uma.pt) September 1999 Abstract In
More informationA Concise Course on Stochastic Partial Differential Equations
A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original
More informationBalanced Truncation 1
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.242, Fall 2004: MODEL REDUCTION Balanced Truncation This lecture introduces balanced truncation for LTI
More informationLecture: Duality of LP, SOCP and SDP
1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:
More informationOptimization for Machine Learning
Optimization for Machine Learning (Problems; Algorithms - A) SUVRIT SRA Massachusetts Institute of Technology PKU Summer School on Data Science (July 2017) Course materials http://suvrit.de/teaching.html
More information2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers.
Chapter 3 Duality in Banach Space Modern optimization theory largely centers around the interplay of a normed vector space and its corresponding dual. The notion of duality is important for the following
More informationCHAPTER V DUAL SPACES
CHAPTER V DUAL SPACES DEFINITION Let (X, T ) be a (real) locally convex topological vector space. By the dual space X, or (X, T ), of X we mean the set of all continuous linear functionals on X. By the
More informationGlobal Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition
Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition Guoyin Li Communicated by X.Q. Yang Abstract In this paper, we establish global optimality
More informationLecture 5. Theorems of Alternatives and Self-Dual Embedding
IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c
More informationModern Optimal Control
Modern Optimal Control Matthew M. Peet Arizona State University Lecture 19: Stabilization via LMIs Optimization Optimization can be posed in functional form: min x F objective function : inequality constraints
More informationA Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions
A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions Angelia Nedić and Asuman Ozdaglar April 16, 2006 Abstract In this paper, we study a unifying framework
More informationRobust Farkas Lemma for Uncertain Linear Systems with Applications
Robust Farkas Lemma for Uncertain Linear Systems with Applications V. Jeyakumar and G. Li Revised Version: July 8, 2010 Abstract We present a robust Farkas lemma, which provides a new generalization of
More informationExtreme Abridgment of Boyd and Vandenberghe s Convex Optimization
Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The
More informationApplied Lagrange Duality for Constrained Optimization
Applied Lagrange Duality for Constrained Optimization February 12, 2002 Overview The Practical Importance of Duality ffl Review of Convexity ffl A Separating Hyperplane Theorem ffl Definition of the Dual
More informationSummer School: Semidefinite Optimization
Summer School: Semidefinite Optimization Christine Bachoc Université Bordeaux I, IMB Research Training Group Experimental and Constructive Algebra Haus Karrenberg, Sept. 3 - Sept. 7, 2012 Duality Theory
More informationDirect method to solve Volterra integral equation of the first kind using operational matrix with block-pulse functions
Journal of Computational and Applied Mathematics 22 (28) 51 57 wwwelseviercom/locate/cam Direct method to solve Volterra integral equation of the first kind using operational matrix with block-pulse functions
More informationStrong duality in Lasserre s hierarchy for polynomial optimization
Strong duality in Lasserre s hierarchy for polynomial optimization arxiv:1405.7334v1 [math.oc] 28 May 2014 Cédric Josz 1,2, Didier Henrion 3,4,5 Draft of January 24, 2018 Abstract A polynomial optimization
More informationON WEAKLY NONLINEAR BACKWARD PARABOLIC PROBLEM
ON WEAKLY NONLINEAR BACKWARD PARABOLIC PROBLEM OLEG ZUBELEVICH DEPARTMENT OF MATHEMATICS THE BUDGET AND TREASURY ACADEMY OF THE MINISTRY OF FINANCE OF THE RUSSIAN FEDERATION 7, ZLATOUSTINSKY MALIY PER.,
More informationConvex Optimization M2
Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization
More informationTEST CODE: MIII (Objective type) 2010 SYLLABUS
TEST CODE: MIII (Objective type) 200 SYLLABUS Algebra Permutations and combinations. Binomial theorem. Theory of equations. Inequalities. Complex numbers and De Moivre s theorem. Elementary set theory.
More information1 Overview. 2 A Characterization of Convex Functions. 2.1 First-order Taylor approximation. AM 221: Advanced Optimization Spring 2016
AM 221: Advanced Optimization Spring 2016 Prof. Yaron Singer Lecture 8 February 22nd 1 Overview In the previous lecture we saw characterizations of optimality in linear optimization, and we reviewed the
More informationTHE INVERSE FUNCTION THEOREM
THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)
More informationLinear Algebra. Paul Yiu. 6D: 2-planes in R 4. Department of Mathematics Florida Atlantic University. Fall 2011
Linear Algebra Paul Yiu Department of Mathematics Florida Atlantic University Fall 2011 6D: 2-planes in R 4 The angle between a vector and a plane The angle between a vector v R n and a subspace V is the
More informationNATIONAL BOARD FOR HIGHER MATHEMATICS. Research Scholarships Screening Test. Saturday, January 20, Time Allowed: 150 Minutes Maximum Marks: 40
NATIONAL BOARD FOR HIGHER MATHEMATICS Research Scholarships Screening Test Saturday, January 2, 218 Time Allowed: 15 Minutes Maximum Marks: 4 Please read, carefully, the instructions that follow. INSTRUCTIONS
More informationII KLUWER ACADEMIC PUBLISHERS. Abstract Convexity and Global Optimization. Alexander Rubinov
Abstract Convexity and Global Optimization by Alexander Rubinov School of Information Technology and Mathematical Sciences, University of Ballarat, Victoria, Australia II KLUWER ACADEMIC PUBLISHERS DORDRECHT
More informationQuadratic forms. Here. Thus symmetric matrices are diagonalizable, and the diagonalization can be performed by means of an orthogonal matrix.
Quadratic forms 1. Symmetric matrices An n n matrix (a ij ) n ij=1 with entries on R is called symmetric if A T, that is, if a ij = a ji for all 1 i, j n. We denote by S n (R) the set of all n n symmetric
More informationEE 546, Univ of Washington, Spring Proximal mapping. introduction. review of conjugate functions. proximal mapping. Proximal mapping 6 1
EE 546, Univ of Washington, Spring 2012 6. Proximal mapping introduction review of conjugate functions proximal mapping Proximal mapping 6 1 Proximal mapping the proximal mapping (prox-operator) of a convex
More informationAN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES
AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim
More information