Journal of Computational and Applied Mathematics


Journal of Computational and Applied Mathematics 234 (2010) 538–544

Contents lists available at ScienceDirect
Journal of Computational and Applied Mathematics
journal homepage: www.elsevier.com/locate/cam

Global optimization by canonical dual function

Jinghao Zhu (a,*), Jiani Zhou (b), David Gao (c)
(a) Department of Applied Mathematics, Tongji University, Shanghai, China
(b) Department of Mathematics, Tongji University, Shanghai, China
(c) Department of Mathematics, Virginia Tech, Blacksburg, USA

Article history: Received 4 June 2009; received in revised form 27 December 2009
MSC: 90-xx
Keywords: Canonical dual function; Global optimization; Backward differential flow

Abstract: In this paper, the canonical dual function (Gao, 2004 [4]) is used to solve a global optimization problem. We find global minimizers by backward differential flows. The backward flow is created by the local solution to the initial value problem of an ordinary differential equation. Some examples and applications are presented. © 2009 Elsevier B.V. All rights reserved.

1. Introduction

The primary goal of this paper is to find the global minimizers of the following optimization problem (primal problem (P) for short):

(P): min P(x)  s.t. x ∈ D,  (1.1)

where D = {x ∈ R^n : ‖x‖ ≤ 1} and P(x) is a twice continuously differentiable function on R^n. This problem often comes up as a subproblem in general optimization algorithms (cf. [1]). As indicated in [2], due to the presence of the nonlinear sphere constraint, the solution of (P) is likely to be irrational, which implies that it is not possible to compute the solution exactly. Therefore many polynomial-time algorithms have been suggested for computing approximate solutions to this problem (see [1,3]). However, when P(x) is a concave quadratic function, this problem can be solved completely by the canonical dual transformation (see [4–7]). The canonical duality theory is a powerful new approach to global optimization and non-convex variational problems.
The duality structure in non-convex systems was originally studied in [8]. In this paper we solve (1.1) for an objective P(x) that is a general twice continuously differentiable function; the goal is to find an exact global minimizer of P(x) over a sphere by means of the canonical dual function.

The paper is organized as follows. In Section 2, for the primal problem (P), a backward ordinary differential equation is introduced to construct the canonical dual function. In Section 3, we use the backward flow to reach a global minimizer; some examples are given as illustrations. An application to control problems is presented in the last section.

* Corresponding author. E-mail address: jinghaok@online.sh.cn (J. Zhu).
0377-0427/$ – see front matter © 2009 Elsevier B.V. All rights reserved. doi:10.1016/j.cam.2009.12.045

2. Global optimization via differential flows

In this section we present differential flows for constructing the so-called canonical dual function to deal with the global optimization problem (1.1); we use the method of our earlier paper [9]. Throughout, P(x) is twice continuously differentiable on R^n. Define the set

G = {ρ > 0 : ∇²P(x) + ρI ≻ 0 for all x ∈ D},  (2.1)

where D = {x ∈ R^n : xᵀx ≤ 1}. By elementary calculus it is easy to obtain the following result.

Proposition 2.1. G is an open set; if ρ̂ ∈ G, then ρ ∈ G for every ρ > ρ̂.

When there is a pair (ρ̂, x̂) ∈ G × D satisfying the equation

∇P(x̂) + ρ̂ x̂ = 0,  (2.2)

we focus on the flow x̂(ρ), which is well defined near ρ̂ by the initial value problem

dx̂/dρ = −[∇²P(x̂) + ρI]⁻¹ x̂,  (2.3)

x̂(ρ̂) = x̂.  (2.4)

The flow x̂(ρ) can be extended to wherever ρ ∈ G ∩ (0, +∞) [10]. The canonical dual function [5] with respect to the given flow x̂(ρ) is defined as

P^d(ρ) = P(x̂(ρ)) + (ρ/2) x̂ᵀ(ρ)x̂(ρ) − ρ/2.  (2.5)

Lemma 2.1. For a given flow defined by (2.2)–(2.4), we have

dP^d(ρ)/dρ = (1/2) x̂ᵀ(ρ)x̂(ρ) − 1/2,  (2.6)

d²P^d(ρ)/dρ² = x̂ᵀ(ρ) dx̂(ρ)/dρ = −(dx̂(ρ)/dρ)ᵀ [∇²P(x̂(ρ)) + ρI] (dx̂(ρ)/dρ).  (2.7)

Proof. Note that ∇P(x̂(ρ)) + ρ x̂(ρ) = 0 along the flow: it holds at ρ = ρ̂ by (2.2), and differentiating it in ρ reproduces (2.3). Since P^d(ρ) is differentiable,

dP^d(ρ)/dρ = ∇P(x̂(ρ))ᵀ dx̂(ρ)/dρ + (1/2) x̂ᵀ(ρ)x̂(ρ) + (ρ/2) d(x̂ᵀ(ρ)x̂(ρ))/dρ − 1/2
= −ρ x̂ᵀ(ρ) dx̂(ρ)/dρ + (1/2) x̂ᵀ(ρ)x̂(ρ) + ρ x̂ᵀ(ρ) dx̂(ρ)/dρ − 1/2
= (1/2) x̂ᵀ(ρ)x̂(ρ) − 1/2.

Further, since P(x) is twice continuously differentiable, differentiating (2.6) and using (2.3) we obtain

d²P^d(ρ)/dρ² = x̂ᵀ(ρ) dx̂(ρ)/dρ = −(dx̂(ρ)/dρ)ᵀ [∇²P(x̂(ρ)) + ρI] (dx̂(ρ)/dρ). □

Lemma 2.2. Let x̂(ρ) be a given flow defined by (2.2)–(2.4) and let P^d(ρ) be the corresponding canonical dual function defined by (2.5). Then:
(i) for every ρ ∈ G, d²P^d(ρ)/dρ² ≤ 0;
(ii) if ρ̂ ∈ G, then dP^d(ρ)/dρ decreases monotonically on [ρ̂, +∞);
(iii) P^d(ρ) is monotonically decreasing on (ρ̂, +∞).
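The derivative formulas (2.6)–(2.7) are easy to check numerically on a quadratic objective P(x) = (1/2)xᵀGx − fᵀx, for which the flow solving (2.2)–(2.3) is available in closed form as x̂(ρ) = (G + ρI)⁻¹f. A minimal sketch (the data G, f below are illustrative, not from the paper; NumPy assumed):

```python
import numpy as np

# Illustrative data: symmetric indefinite G, so P(x) = 0.5 x^T G x - f^T x is non-convex.
G = np.array([[-2.0, 0.5], [0.5, 1.0]])
f = np.array([1.0, 1.0])
I = np.eye(2)

def x_hat(rho):
    # Closed-form flow for a quadratic P: solves grad P(x) + rho x = 0, i.e. (G + rho I) x = f.
    return np.linalg.solve(G + rho * I, f)

def P(x):
    return 0.5 * x @ G @ x - f @ x

def P_dual(rho):
    # Canonical dual function (2.5) evaluated along the flow.
    x = x_hat(rho)
    return P(x) + 0.5 * rho * (x @ x) - 0.5 * rho

# Check (2.6): dP^d/drho = 0.5 (x^T x - 1), at a rho with G + rho I positive definite.
rho, h = 4.0, 1e-5
num_deriv = (P_dual(rho + h) - P_dual(rho - h)) / (2 * h)
x = x_hat(rho)
assert abs(num_deriv - 0.5 * (x @ x - 1.0)) < 1e-6

# Check Lemma 2.2(i): the second derivative (2.7) is non-positive on G.
num_second = (P_dual(rho + h) - 2 * P_dual(rho) + P_dual(rho - h)) / h**2
assert num_second <= 0.0
```

The first assertion verifies (2.6) by central differences; the second illustrates the concavity of P^d claimed in Lemma 2.2(i).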

Proof. When ρ ∈ G, by the definition of G we have ∇²P(x̂(ρ)) + ρI ≻ 0, so (2.7) gives d²P^d(ρ)/dρ² ≤ 0. By Proposition 2.1, if ρ̂ ∈ G then [ρ̂, +∞) ⊂ G, and consequently dP^d(ρ)/dρ decreases monotonically on [ρ̂, +∞). Finally, since x̂(ρ̂) ∈ D, (2.6) gives dP^d(ρ̂)/dρ ≤ 0, and hence dP^d(ρ)/dρ ≤ 0 on [ρ̂, +∞). Thus P^d(ρ) is monotonically decreasing on (ρ̂, +∞). □

Theorem 2.1. If the flow x̂(ρ) (defined by (2.2)–(2.4)) intersects the boundary of the ball D = {x ∈ R^n : ‖x‖ ≤ 1} at some ρ̄ ∈ G, i.e.

x̂(ρ̄)ᵀ x̂(ρ̄) = 1,  ρ̄ ∈ G,  (2.8)

then x̂(ρ̄) is a global minimizer of P(x) over D. Furthermore,

min_{x∈D} P(x) = P(x̂(ρ̄)) = P^d(ρ̄) = max_{ρ≥ρ̄} P^d(ρ).

Proof. By the definition of the flow x̂(ρ) in (2.2)–(2.4) and Proposition 2.1, noting that x̂(ρ̄) is on the flow and ρ̄ ∈ G, we have, for all ρ ≥ ρ̄,

∇[P(x̂(ρ)) + (ρ/2)(x̂ᵀ(ρ)x̂(ρ) − 1)] = ∇P(x̂(ρ)) + ρ x̂(ρ) = 0,  (2.9)

and

∇²[P(x) + (ρ/2)(xᵀx − 1)] = ∇²P(x) + ρI ≻ 0 for all x ∈ D.  (2.10)

Since P(x) is twice continuously differentiable on R^n, there is a positive real number δ such that (2.10) actually holds on {x : xᵀx < 1 + δ}, which contains D. In other words, for each ρ ≥ ρ̄, x̂(ρ) is the global minimizer of P(x) + (ρ/2)(xᵀx − 1) over D. Therefore, for every x ∈ D = {x ∈ R^n : xᵀx ≤ 1} and every ρ ≥ ρ̄,

P(x) ≥ P(x) + (ρ/2)(xᵀx − 1) ≥ inf_{y∈D} [P(y) + (ρ/2)(yᵀy − 1)] = P(x̂(ρ)) + (ρ/2) x̂ᵀ(ρ)x̂(ρ) − ρ/2 = P^d(ρ).

Thus, by Lemma 2.2 and (2.8),

P(x) ≥ max_{ρ≥ρ̄} P^d(ρ) = P^d(ρ̄) = P(x̂(ρ̄)) + (ρ̄/2)[x̂(ρ̄)ᵀx̂(ρ̄) − 1] = P(x̂(ρ̄)).  (2.11)

Consequently, min_{x∈D} P(x) = P(x̂(ρ̄)) = max_{ρ≥ρ̄} P^d(ρ). This concludes the proof of Theorem 2.1. □

Definition 2.1. Let x̂(ρ) be a flow defined by (2.2)–(2.4). We call x̂(ρ), ρ ∈ (0, ρ̂], the backward differential flow; in other words, the backward differential flow x̂(ρ), ρ ∈ (0, ρ̂], comes from solving Eq. (2.3) backwards from ρ̂.

Example 2.1 (A Non-Convex Quadratic Optimization Over a Sphere). Let G ∈ R^{m×m} be a symmetric matrix and f ∈ R^m, f ≠ 0, a vector such that P(x) = (1/2)xᵀGx − fᵀx is non-convex.
We consider the following global optimization problem over a sphere:

min P(x) = (1/2)xᵀGx − fᵀx  s.t. xᵀx ≤ 1.

Suppose that G has p ≤ m distinct eigenvalues a₁ < a₂ < ⋯ < a_p. Since P(x) = (1/2)xᵀGx − fᵀx is non-convex, a₁ < 0. Choose ρ̂ > (tr(GᵀG))^{1/2} so large that 0 < ‖(G + ρ̂I)⁻¹f‖ < 1 (noting that f ≠ 0). The backward differential equation is

dx/dρ = −(G + ρI)⁻¹x,  x(ρ̂) = (G + ρ̂I)⁻¹f,  ρ ≤ ρ̂,

which leads to the backward flow

x(ρ) = (G + ρI)⁻¹f,  ρ ≤ ρ̂.
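Along this backward flow the boundary condition ‖x(ρ)‖ = 1 reduces to a one-dimensional secular equation in ρ, which can be solved by bisection on (−a₁, ρ̂): there ‖x(ρ)‖ is decreasing in ρ and blows up as ρ ↓ −a₁. A sketch of the whole procedure on illustrative 2×2 data (NumPy assumed; not the paper's own example):

```python
import numpy as np

# Illustrative nonconvex quadratic: P(x) = 0.5 x^T G x - f^T x, smallest eigenvalue a1 < 0.
G = np.diag([-2.0, 1.0])
f = np.array([1.0, 1.0])
I = np.eye(2)

a1 = np.linalg.eigvalsh(G).min()          # a1 = -2
norm_x = lambda rho: np.linalg.norm(np.linalg.solve(G + rho * I, f))

# Choose rho_hat with ||x(rho_hat)|| < 1 (here rho_hat = 4 works: ||x(4)|| ≈ 0.54).
lo, hi = -a1 + 1e-9, 4.0
assert norm_x(hi) < 1.0
# Bisection for the unique rho_bar in (-a1, rho_hat) with ||(G + rho I)^{-1} f|| = 1.
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if norm_x(mid) > 1.0 else (lo, mid)
rho_bar = 0.5 * (lo + hi)
x_bar = np.linalg.solve(G + rho_bar * I, f)   # candidate global minimizer, on the boundary

# Theorem 2.1 says x_bar globally minimizes P over the unit ball.  As a check, sweep the
# boundary densely (the minimum is attained there: the only stationary point of P,
# x = G^{-1} f, lies outside the ball for this data).
P = lambda x: 0.5 * x @ G @ x - f @ x
thetas = np.linspace(0.0, 2.0 * np.pi, 20000)
sweep = min(P(np.array([np.cos(t), np.sin(t)])) for t in thetas)
assert abs(np.linalg.norm(x_bar) - 1.0) < 1e-8
assert P(x_bar) <= sweep + 1e-6
```

Bisection is used here only for simplicity; any one-dimensional root finder on the secular function ‖(G + ρI)⁻¹f‖ − 1 would do.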

Further, noting that there is an orthogonal matrix R giving the diagonalization RGRᵀ = Λ := (aᵢδᵢⱼ) and, correspondingly, g := Rf = (gᵢ), we have

xᵀ(ρ)x(ρ) = fᵀ(G + ρI)⁻²f = Σ_{i=1}^p gᵢ²/(aᵢ + ρ)²,  ρ ≤ ρ̂.

Since fᵀ(G + ρ̂I)⁻²f < 1 and (provided g₁ ≠ 0)

lim_{ρ↓−a₁} Σ_{i=1}^p gᵢ²/(aᵢ + ρ)² = +∞,

there is a unique ρ̄ with −a₁ < ρ̄ < ρ̂ such that

xᵀ(ρ̄)x(ρ̄) = fᵀ(G + ρ̄I)⁻²f = Σ_{i=1}^p gᵢ²/(aᵢ + ρ̄)² = 1.

By Theorem 2.1, x(ρ̄) = (G + ρ̄I)⁻¹f is a global minimizer of the problem.

Remark 2.1. At the beginning of this section we mentioned that the idea of introducing backward differential flows is motivated by our paper [9]. Here we briefly describe the connection between the present paper and [9]. In [9] we considered the differential equation (2.3)–(2.4) on the set S = {ρ > 0 : [∇²P(x) + ρI] is invertible on D}. Clearly, the set G defined in (2.1) is a subset of S. By the canonical duality theory, one cannot in general expect to obtain a global minimizer by solving the differential equation (2.3)–(2.4) in the set S, which does not give any information even about local minimizers. Therefore, in this paper we solve the differential equation in G; moreover, the property given in Proposition 2.1 leads us to consider the backward differential equation in G.

3. Finding the global minimizer by backward differential flows

The main idea of using backward differential flows to find a global minimizer is as follows. Since D is compact and P(x) is twice continuously differentiable, we can choose a positive parameter ρ̂ so large that ∇²P(x) + ρ̂I ≻ 0 for all x ∈ D and ρ̂ > sup_D {‖∇P(x)‖, ‖∇²P(x)‖}. If ∇P(0) ≠ 0, then there is a nonzero point x̂ ∈ D such that ∇P(x̂) = −ρ̂ x̂, by the Brouwer fixed-point theorem; that is, the pair (x̂, ρ̂) satisfies (2.2). We then solve (2.3)–(2.4) backwards from ρ̂ to get the backward flow x(ρ), ρ ∈ (0, ρ̂].
If there is ρ̄ ∈ G ∩ (0, ρ̂] such that x(ρ̄)ᵀx(ρ̄) = 1, then x(ρ̄) is a global minimizer of problem (1.1) by Theorem 2.1.

Example 3.1 (A Concave Minimization). Let us consider the following one-dimensional concave minimization problem:

min P(x) = −(1/12)x⁴ − x² + x  (3.1)
s.t. x² ≤ 1.  (3.2)

We have

P′(x) = −(1/3)x³ − 2x + 1,  P″(x) = −x² − 2 < 0 for x² ≤ 1.

Choosing ρ̂ = 10, we solve the following equation in {x : x² < 1} (for the fixed point x̂):

−(1/3)x³ − 2x + 1 + 10x = 0  (3.3)

to get x̂ ≈ −0.125. Next we solve the backward differential equation

dx(ρ)/dρ = x(ρ)/(x²(ρ) + 2 − ρ),  x(ρ̂) = −0.125,  ρ ≤ 10.

Looking for a parameter ρ̄ with x²(ρ̄) = 1, we get ρ̄ = 10/3, which satisfies

P″(x) + 10/3 > 0 for x² ≤ 1.
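The backward flow just defined can also be followed numerically: integrating dx/dρ = x/(x² + 2 − ρ) from (ρ̂, x̂) = (10, −0.125) down to ρ = 10/3 with a classical Runge–Kutta step should land on the boundary x² = 1. A self-contained sketch in plain Python:

```python
# Example 3.1: follow the backward flow dx/drho = x / (x^2 + 2 - rho)
# from rho_hat = 10 down to rho_bar = 10/3 with a classical RK4 integrator.

def rhs(rho, x):
    return x / (x * x + 2.0 - rho)

# Fixed point x_hat: root of -(1/3)x^3 - 2x + 1 + 10x = 0 in (-1, 0), by bisection.
g = lambda x: -(x ** 3) / 3.0 - 2.0 * x + 1.0 + 10.0 * x
lo, hi = -0.5, 0.0            # g(-0.5) < 0 < g(0)
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
x = 0.5 * (lo + hi)           # x_hat ≈ -0.125

rho, rho_bar, n = 10.0, 10.0 / 3.0, 4000
h = (rho_bar - rho) / n       # negative step: integrate backwards in rho
for _ in range(n):
    k1 = rhs(rho, x)
    k2 = rhs(rho + h / 2, x + h * k1 / 2)
    k3 = rhs(rho + h / 2, x + h * k2 / 2)
    k4 = rhs(rho + h, x + h * k3)
    x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    rho += h

# The flow reaches the boundary of the ball at rho_bar = 10/3.
assert abs(x * x - 1.0) < 1e-3
```

The denominator x² + 2 − ρ stays bounded away from zero along this trajectory, so the integration is unproblematic; the tolerance 1e-3 is generous.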

Denote x(10/3) by x̄. Computing the solution of the algebraic equation

−(1/3)x³ − 2x + 1 + (10/3)x = 0,  x² ≤ 1,

we get x̄ = −1. It follows from Theorem 2.1 that x̄ = −1 is the global minimizer of P(x) over [−1, 1].

Remark 3.1. In this example we solved a concave optimization problem by backward differential flows. By Theorem 2.1, if the flow intersects the boundary of the ball at x̄, then this point x̄ is a global minimizer. This is helpful for obtaining an exact solution of a global optimization problem.

Example 3.2 (A Non-Convex Minimization). We now consider the non-convex minimization problem

min P(x) = −(1/3)x³ − 2x  (3.4)
s.t. x² ≤ 1,

for which P′(x) = −x² − 2 and P″(x) = −2x. Choosing ρ̂ = √72 = 6√2, we solve the following equation in {x : x² < 1} (for the fixed point):

−x² − 2 + 6√2 x = 0

to get x̂ = 3√2 − 4. Next we solve the backward differential equation

dx(ρ)/dρ = −x(ρ)/(−2x(ρ) + ρ),  x(6√2) = 3√2 − 4,  ρ ≤ 6√2.

Looking for a parameter ρ̄ with x²(ρ̄) = 1, we get ρ̄ = 3, which satisfies

P″(x) + 3 = −2x + 3 > 0 for x² ≤ 1.

Denote x(3) by x̄. Computing the solution of the algebraic equation

−x² − 2 + 3x = 0,  x² ≤ 1,

we get x̄ = 1. It follows from Theorem 2.1 that x̄ = 1 is the global minimizer of P(x) over [−1, 1].

Remark 3.2. This example shows that a backward differential flow is also useful in solving a non-convex optimization problem. For global optimization problems one usually computes the global minimizer numerically; even with the canonical duality method, one has to solve a dual problem numerically. The backward differential flow thus points to a new way of finding a global minimizer: in particular, one may expect an exact solution of the problem provided the corresponding backward differential equation has an analytic solution.

4. An application in optimal control

In this section we consider matrices A ∈ R^{n×n} and B ∈ R^{n×m}, vectors c ∈ R^n and b ∈ R^m, and a symmetric matrix G ∈ R^{m×m} for which (1/2)uᵀGu is non-convex. Suppose that G has p ≤ m distinct eigenvalues a₁ < a₂ < ⋯ < a_p (so that a₁ < 0).
We need the following assumption.

Basic assumption: rank([Bᵀ, b]) > rank(Bᵀ).

We will solve the following optimal control problem:

(P̄): min J(u) = ∫_0^T [cᵀx(t) + (1/2)uᵀ(t)Gu(t) − bᵀu(t)] dt
s.t. ẋ = Ax + Bu, x(0) = x₀, t ∈ [0, T], ‖u(t)‖ ≤ 1.

We define the function φ(t, x) = ψᵀ(t)x, where the continuously differentiable function ψ(t) is to be determined by the following Cauchy (terminal value) problem:

ψ̇(t) = −Aᵀψ(t) + c,  ψ(T) = 0.
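This terminal value problem has the closed-form solution ψ(t) = −∫_t^T e^{Aᵀ(s−t)} c ds, which is easy to sanity-check by integrating backwards in time from ψ(T) = 0. A sketch for hypothetical scalar data A = −1, c = 1, T = 1 (for which the closed form gives ψ(t) = e^{t−1} − 1), in plain Python:

```python
import math

# Hypothetical scalar data: A = -1, c = 1, T = 1.
# Cauchy problem: psi'(t) = -A*psi(t) + c = psi(t) + 1, psi(T) = 0,
# whose closed form is psi(t) = -∫_t^T e^{A(s-t)} c ds = e^(t-1) - 1.

def psi_exact(t):
    return math.exp(t - 1.0) - 1.0

def rhs(psi):
    return psi + 1.0           # -A^T psi + c with A = -1, c = 1

# RK4, integrating backwards in time from t = 1 to t = 0.
n, t, psi = 1000, 1.0, 0.0
h = -1.0 / n
for _ in range(n):
    k1 = rhs(psi)
    k2 = rhs(psi + h * k1 / 2)
    k3 = rhs(psi + h * k2 / 2)
    k4 = rhs(psi + h * k3)
    psi += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h

assert abs(t) < 1e-9
assert abs(psi - psi_exact(0.0)) < 1e-10   # psi(0) = e^{-1} - 1 ≈ -0.632
```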

We have

J(u) = ∫_0^T [cᵀx + (1/2)uᵀGu − bᵀu] dt
= ∫_0^T [(ψ̇(t) + Aᵀψ(t))ᵀx + (1/2)uᵀGu − bᵀu] dt
= ∫_0^T [ψ̇ᵀ(t)x + ψᵀ(t)Ax + (1/2)uᵀGu − bᵀu] dt
= ∫_0^T [ψ̇ᵀ(t)x + ψᵀ(t)(Ax + Bu) − ψᵀ(t)Bu + (1/2)uᵀGu − bᵀu] dt
= ∫_0^T [ψ̇ᵀ(t)x(t) + ψᵀ(t)ẋ(t) − ψᵀ(t)Bu + (1/2)uᵀGu − bᵀu] dt
= ∫_0^T [(d/dt)φ(t, x(t)) − ψᵀ(t)Bu + (1/2)uᵀGu − bᵀu] dt
= φ(T, x(T)) − φ(0, x(0)) + ∫_0^T [(1/2)uᵀGu − bᵀu − ψᵀ(t)Bu] dt
= −φ(0, x(0)) + ∫_0^T [(1/2)uᵀGu − bᵀu − ψᵀ(t)Bu] dt,  (4.1)

noting that ψ(T) = 0 and x(0) = x₀. Thus

min J(u) = −φ(0, x(0)) + min ∫_0^T [(1/2)uᵀGu − bᵀu − ψᵀ(t)Bu] dt.

Consequently, we deduce that, for a.e. t ∈ [0, T], the optimal control is

û(t) = arg min_{uᵀu≤1} [(1/2)uᵀGu − bᵀu − ψᵀ(t)Bu].

For each t ∈ [0, T] we thus need to solve the following non-convex optimization problem:

min (1/2)uᵀGu − (b + Bᵀψ(t))ᵀu  s.t. uᵀu ≤ 1.

It follows from the basic assumption rank([Bᵀ, b]) > rank(Bᵀ) that b + Bᵀψ(t) ≠ 0 for each t ∈ [0, T]. By Example 2.1, for each t ∈ [0, T] we have

û(t) = (G + ρ_t I)⁻¹[b + Bᵀψ(t)],

where the dual variable ρ_t > −a₁ is such that

(b + Bᵀψ(t))ᵀ(G + ρ_t I)⁻²(b + Bᵀψ(t)) = 1.

With respect to ψ we define the function ρ(ψ) by the equation

(b + Bᵀψ)ᵀ(G + ρ(ψ)I)⁻²(b + Bᵀψ) = 1,  ρ(ψ) > −a₁.  (4.2)

We then have the analytic expression of the optimal control

û(t) = (G + ρ(ψ(t))I)⁻¹(b + Bᵀψ(t)),

where

ψ(t) = −∫_t^T e^{Aᵀ(s−t)} c ds.  (4.3)

Example 4.1. Consider the following optimal control problem:

(P̄1): min ∫_0^1 [x − (1/2)u²] dt
s.t. ẋ = −x + u, x(0) = x₀, t ∈ [0, 1], |u| ≤ 1.

In this example we have G = −1, c = 1, b = 0, A = −1, B = 1, T = 1, and

ψ(t) = −∫_t^1 e^{−(s−t)} ds = −e^t ∫_t^1 e^{−s} ds = e^{t−1} − 1,  0 ≤ t ≤ 1.

To find an optimal control, we solve

(ρ_t − 1)⁻² ψ²(t) = 1,  ρ_t > 1,

to get ρ_t = 1 + |ψ(t)| = 2 − e^{t−1}. Finally, we get the analytic expression of the optimal control:

û(t) = (ρ_t − 1)⁻¹ ψ(t) = (e^{t−1} − 1)/(1 − e^{t−1}) = −1,  0 ≤ t < 1.

Acknowledgement

The first author's research was partly supported by the National Science Foundation of China under grant no. 6745.

References

[1] M.J.D. Powell, UOBYQA: Unconstrained optimization by quadratic approximation, Mathematical Programming, Series B 92 (3) (2002) 555–582.
[2] C.A. Floudas, V. Visweswaran, Quadratic optimization, in: R. Horst, P.M. Pardalos (Eds.), Handbook of Global Optimization, Kluwer Academic Publishers, Dordrecht, Boston, London, 1995, pp. 217–270.
[3] Y.Y. Ye, On affine scaling algorithms for nonconvex quadratic programming, Mathematical Programming 56 (1992) 285–300.
[4] D.Y. Gao, Canonical duality theory and solutions to constrained nonconvex quadratic programming, Journal of Global Optimization 29 (2004) 377–399.
[5] D.Y. Gao, Duality Principles in Nonconvex Systems: Theory, Methods and Applications, Kluwer Academic Publishers, Dordrecht, 2000 (now Springer).
[6] D.Y. Gao, Analytic solution and triality theory for nonconvex and nonsmooth variational problems with applications, Nonlinear Analysis 42 (2000) 1161–1193.
[7] D.Y. Gao, Solutions and optimality criteria to box constrained nonconvex minimization problems, Journal of Industrial and Management Optimization 3 (2) (2007) 293–304.
[8] D.Y. Gao, G. Strang, Geometric nonlinearity: Potential energy, complementary energy, and the gap function, Quarterly of Applied Mathematics 47 (1989) 487–504.
[9] Jinghao Zhu, Shiming Tao, David Gao, A study on concave optimization via canonical dual function, Journal of Computational and Applied Mathematics 224 (2009) 459–464.
[10] C. Robinson, Dynamical Systems, CRC Press, Boca Raton, 1999.