Self-concordant functions for optimization on smooth manifolds
J Glob Optim (2007) 38 DOI 10.1007/s…z ORIGINAL ARTICLE

Self-concordant functions for optimization on smooth manifolds

Danchi Jiang · John B. Moore · Huibo Ji

Received: 8 February 2006 / Accepted: 8 September 2006 / Published online: October 2006
© Springer Science+Business Media B.V. 2006

Abstract This paper discusses self-concordant functions on smooth manifolds. In Euclidean space, such functions are utilized extensively as barrier functions in interior-point methods for polynomial-time optimization algorithms. Here, the self-concordant function is carefully defined on a differential manifold in such a way that the properties of self-concordant functions in Euclidean space are preserved. A Newton decrement is defined and analyzed for this class of functions. Based on this, a damped Newton algorithm is proposed for the optimization of self-concordant functions. Under reasonable technical assumptions, such as geodesic completeness of the manifold, this algorithm is guaranteed to fall within any given small neighborhood of the optimal solution in a finite number of steps. The existence and uniqueness of the optimal solution are also proved in this paper; hence, the optimal solution is a global one. Furthermore, the algorithm ensures quadratic convergence within a neighborhood of the minimal point, and this neighborhood can be specified in terms of the Newton decrement. The computational complexity bound of the proposed approach is also given explicitly and is shown to be of the order O(−ln ɛ), where ɛ is the desired precision. Some interesting optimization problems are given to illustrate the proposed concept and algorithm.

Keywords Self-concordant functions · Polynomial algorithms · Damped Newton method · Geodesics · Covariant differentials

A part of this material was presented at the 2004 Conference on Decision and Control.

D. Jiang (B) The School of Engineering, University of Tasmania, Private Bag 65, Hobart, TAS, Australia. Danchi.Jiang@utas.edu.au

J. B. Moore · H. Ji Dept. of Information Engineering, RSISE, Australian National University, Canberra, ACT, Australia. J. B. Moore: John.Moore@anu.edu.au; H. Ji: huibo@rsise.anu.edu.au
1 Introduction

Self-concordant functions play an important role in the powerful interior-point polynomial algorithms for convex programming in Euclidean space. Following the work of Nesterov and Nemirovskii [1], many articles have been published using this type of function to construct barrier functions for interior-point algorithms (for example, see [2,3]). The idea of interior-point methods is to force the constraints of the optimization problem to be satisfied using a barrier penalty function in a composite cost function. This barrier function is relatively flat in the interior of the feasible region yet tends to infinity as the boundary is approached. As the coefficient of the barrier function in the composite cost function converges to zero, the minimal point of the composite cost function converges to that of the original minimization problem. Self-concordant functions are a class of functions whose third-order derivative is bounded by the cube of the square root of the second-order derivative, possibly with a constant scaling factor. The main advantage of using such a class of functions as barrier functions is that the computational complexity of minimizing the constructed composite function is very low, so that the original minimization problem can be solved in an amount of time that is polynomial in the required precision. The notion of a self-concordant function has deep roots in geometry. In [4], it is shown that a Riemannian metric can be rendered by a self-concordant barrier function. Such a metric gives a good explanation of the optimal direction for optimization algorithms, and as such it can provide guidance for the construction of efficient interior-point methods. In this respect, the optimal path is along a geodesic defined by the Riemannian metric; this path is not a straight line in Euclidean space.
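To make the defining bound concrete (an illustration of ours, not part of the paper): in one dimension, self-concordance with constant M_f = 2 reads |f'''(x)| ≤ 2 f''(x)^{3/2}, and the classical barrier f(x) = −ln x attains it with equality. A minimal numerical check, with helper name and sample points chosen by us:

```python
import math

def check_self_concordance(d2, d3, xs, M=2.0, tol=1e-9):
    """Check |f'''(x)| <= M * f''(x)**1.5 at the sample points xs."""
    return all(abs(d3(x)) <= M * d2(x) ** 1.5 + tol for x in xs)

# f(x) = -ln(x): f''(x) = 1/x^2, f'''(x) = -2/x^3, so the bound holds with equality.
d2 = lambda x: 1.0 / x ** 2
d3 = lambda x: -2.0 / x ** 3
print(check_self_concordance(d2, d3, [0.1, 0.5, 1.0, 3.0, 10.0]))  # True
```

Scaling either derivative inconsistently (say, tripling f''') breaks the bound, which is exactly what the definition is designed to exclude.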
Indeed, optimization problems in Euclidean space can often be better understood with the structure of appropriately associated Riemannian manifolds (see [5] for many meaningful examples). In fact, many optimization problems can also be better posed on manifolds rather than in Euclidean space — for example, optimization problems associated with orthogonal matrices, such as algorithms for the computation of eigenvalues and singular values, or Oja's flow in neural network pattern classification algorithms (see [5–8]), to list a few. On the other hand, optimization methods such as the steepest descent method, the Newton method, and other related methods can be extended to Riemannian manifolds (see [5,6,9]). It is natural to ask: what are the self-concordant functions and the associated interior-point methods on manifolds? Such a question can be justified by practical importance as well as by theoretical completeness. The self-concordant concept has been applied in [10], with the analysis restricted to a logarithm cost function optimized on a manifold; however, polynomial complexity is not available in that case. In this paper, we give a useful definition of a self-concordant function on a manifold and a thorough study of it. One of the advantages of solving minimization problems on manifolds is, as pointed out in [6], the reduction of the dimension of the problem, compared to solving the original problem in its ambient Euclidean space. A typical intrinsic approach to minimization is based on the computation of geodesics and covariant differentials, which might be expensive. However, there are many meaningful cases where the computation can be very simple. One example is a real compact semisimple Lie group endowed with its natural Riemannian metric. In such a case the geodesics and parallel transport can be computed by matrix exponentiation, whose rich
set of nice properties provides convenience in the analysis and development of efficient algorithms (see [6,11]). Many particular classes of minimization problems in this category, such as those on orthogonal groups or Stiefel manifolds, are studied in [5]. Another simple but non-trivial case is the sphere, where geodesics and parallel transport can be computed via trigonometric functions and vector calculations. This paper is organized as follows: notation and assumptions are listed in Sect. 2. In Sect. 3, the self-concordant function is defined so as to preserve as many nice properties of the original version in Euclidean space as possible. To facilitate the analysis and understanding of the proposed damped Newton method, the Newton decrement is defined and analyzed in Sect. 4. Then, the existence and uniqueness of the optimal solution are proved and a damped Newton algorithm is proposed in Sect. 5. It is shown that this algorithm has a convergence property and computational complexity similar to the algorithm for self-concordant functions in Euclidean space proposed in [1]. Two interesting examples are included in the last section to illustrate the proposed concept and approach.

2 Assumptions and notations

The notation used in this paper is listed in the following paragraph. Definitions and other details of the concepts and related results used will not be included; please refer to [11] for more details.
M — a smooth manifold
T_pM — the tangent space of M at the point p
TM — the tangent bundle of M
T_p*M — the 1-forms of M at the point p
T*M — the dual bundle of TM
∇ — an affine connection defined on M
∇^i_X — the ith-order covariant differential with respect to the vector field X, based on the affine connection ∇, where i is an integer
exp_p X — the exponential map of the vector field X, based on the parallel transport defined using the affine connection ∇
X_pq — the tangent vector field of the shortest geodesic connecting the points p and q on M, where the geodesic parameter is 0 at p and 1 at q
τ — parallel transport on M, where the affine connection is determined according to the context

In this paper, our purpose is to show how the self-concordant function can be extended to non-Euclidean spaces. To avoid unnecessary details, the discussion is not intended for the most general case. As such, we make the following assumptions:

Assumption 1
(1) The affine connection ∇ is symmetric, meaning that ∇_Y ∇_X f(p) = ∇_X ∇_Y f(p) for any vector fields X, Y and function f. A very interesting example of a symmetric connection is the Riemannian connection.
(2) The smooth manifold M of interest is geodesically complete. Furthermore, the geodesic between two points is unique.
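Since the assumptions above revolve around geodesics and parallel transport, here is a small numerical sketch (ours, not the paper's) of both operations on the unit sphere, where they take the closed trigonometric form used later in Example 2:

```python
import numpy as np

def sphere_exp(x, h, t):
    """Geodesic exp_x(t h) on the unit sphere, for a unit tangent vector h (h . x = 0)."""
    return x * np.cos(t) + h * np.sin(t)

def sphere_transport(x, h, t):
    """Parallel transport of h along its own geodesic, after parameter t."""
    return h * np.cos(t) - x * np.sin(t)

x = np.array([1.0, 0.0])
h = np.array([0.0, 1.0])                # unit tangent vector at x
q = sphere_exp(x, h, np.pi / 2)         # a quarter turn lands at (0, 1)
v = sphere_transport(x, h, np.pi / 2)   # transported vector, tangent at q
print(np.allclose(q, [0.0, 1.0]), np.isclose(q @ v, 0.0))  # True True
```

The transported vector stays tangent to the sphere at the new point, which is the property the covariant differentials below rely on.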
3 Self-concordant functions

Let M be a smooth manifold of finite dimension and ∇ a symmetric affine connection defined on it. Assume in this paper that M is geodesically complete, so as to focus on the main ideas rather than on how generally the proposed concept and method can be applied. Consider a function f : M → R defined on M which has an open domain, a closed map — meaning that {(f(P), P) : P ∈ dom(f)} is a closed set in the product manifold R × M — and is at least three times differentiable.

Definition 1 f is a self-concordant function with respect to ∇ if and only if the following condition holds:

|∇³_X f(p)| ≤ M_f [∇²_X f(p)]^{3/2}, ∀ X ∈ T_pM, p ∈ M, (1)

where M_f is a positive constant associated with f.

As noted in [1], if a function f is self-concordant with the constant M_f, then the function (M_f²/4) f is self-concordant with the constant 2; this can also be checked directly by simple computation. As such, we assume M_f = 2 for the rest of this paper. Also notice that the second covariant differential of a self-concordant function is a positive semi-definite mapping, meaning that it is symmetric with respect to two tangent vectors and its value is always non-negative. For simplicity of the analysis in this paper, we only consider functions satisfying the following assumption:

Assumption 2 ∇²_X f(p) > 0, ∀ p ∈ dom(f), X ∈ T_pM, X ≠ 0.

Then, the second-order covariant differentials can be used to define a Dikin-type ellipsoid W(p; r) as follows:

Definition 2 For any p ∈ dom(f) and r > 0,

W(p; r) := { q ∈ M : [∇²_{X_pq} f(p)]^{1/2} < r },

where X_pq is the vector field defined by the geodesic connecting the points p and q.

The definition of a self-concordant function is based on second-order and third-order covariant differentials with respect to the same vector field X. However, it is equivalent to the case where they are calculated with respect to different vector fields.
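As an illustration of Definition 2 in the simplest possible setting (our example, not the paper's): take M = (0, ∞) with the flat connection and f(x) = −ln x, so that X_pq = q − p and ∇²_{X_pq} f(p) = (q − p)²/p². The Dikin-type ellipsoid W(p; 1) is then the interval (0, 2p), which stays inside the domain x > 0:

```python
def in_dikin_ellipsoid(p, q, r=1.0):
    """W(p; r) for f(x) = -ln(x) on (0, inf): [(q - p)^2 * f''(p)]^(1/2) < r."""
    return abs(q - p) / p < r            # since f''(p) = 1/p^2

# W(p; 1) = (0, 2p) is always contained in the domain x > 0.
p = 3.0
print(in_dikin_ellipsoid(p, 5.9), in_dikin_ellipsoid(p, 6.1))  # True False
```

This containment of the unit Dikin ellipsoid in the domain is exactly what Property 2 below asserts in general.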
More specifically, the following property holds:

Property 1 If f is a self-concordant function defined on M, then the following inequality holds:

|∇³ f(p)[X₁, X₂, X₃]| ≤ M_f [∇²_{X₁} f(p)]^{1/2} [∇²_{X₂} f(p)]^{1/2} [∇²_{X₃} f(p)]^{1/2}, ∀ X₁, X₂, X₃ ∈ T_pM. (2)

This property comes from the linearity of the mappings

∇²f : T_pM × T_pM → R and ∇³f : T_pM × T_pM × T_pM → R,
defined by ∇²f(X₁, X₂) := ∇_{X₁}∇_{X₂} f(p) and ∇³f(X₁, X₂, X₃) := ∇_{X₁}∇_{X₂}∇_{X₃} f(p), respectively, for any X₁, X₂, X₃ ∈ T_pM.

Proof The following proof is inspired by that of Proposition 9.1.1 in [1]. However, it should be pointed out that the trilinear property of the third-order differentials is not required in our proof. By assumption, the affine connection is symmetric; therefore, the manifold is torsion free. With a slight abuse of notation, for a given tangent vector X at p, we let X also denote the vector field generated by X using parallel transport in a neighborhood of p where a geodesic starting from p along the direction X exists uniquely. In this case, it is easy to see that [X_i, X_j] = 0 for i ≠ j. Hence the corresponding second-order covariant differentials are symmetric, meaning

∇_{X_i}∇_{X_j} f(p) = ∇_{X_j}∇_{X_i} f(p), i, j = 1, 2, 3.

As such, it can be directly checked that

∇_{X₁}∇_{X₂}∇_{X₃} f(p) = ∇_{X₂}∇_{X₁}∇_{X₃} f(p), ∇_{X₁}∇_{X₂}∇_{X₃} f(p) = ∇_{X₁}∇_{X₃}∇_{X₂} f(p). (3)

Adopting the reasoning process of Proposition 9.1.1 in [1], we only need to show that the extremal value of the function

W(X₁, X₂, X₃) := ∇_{X₁}∇_{X₂}∇_{X₃} f(p), X_i ∈ T_pM, ∇²_{X_i} f(p) = 1, i = 1, 2, 3, (4)

is achieved at a triple of vectors of the form (Z, Z, Z). Assume the contrary, and let (X, Y, Z) be an extremal point of W. By Lemma 9.1.1 in [1], one knows that there exists a unit vector X₁ — meaning that the second covariant differential of f along it equals 1 — in the direction X + Y such that (X₁, X₁, Z) achieves the extremum. By the same method, one can show that there exists a unit vector Y₁ in the one-dimensional linear space spanned by X₁ + Z such that (X₁, Y₁, Y₁) achieves the extremal value. Using this method iteratively, one constructs a sequence of vector triples in Rⁿ × Rⁿ × Rⁿ:

{(X, Y, Z), (X₁, X₁, Z), (X₁, Y₁, Y₁), (X₂, X₂, Y₁), (X₂, Y₂, Y₂), ...},

where the middle vector is always in the one-dimensional linear space spanned by the sum of two different vectors of the previous triple, and either the first or the last vector is the same as the middle one.
By the assumption that ∇²_X f(p) > 0 for all tangent vectors X, one can show that this sequence converges to a vector triple of the form (Z*, Z*, Z*). Hence the property is proved. □

A self-concordant function also has the following interesting property:

Property 2 For any p ∈ dom(f) ⊂ M, W(p; 1) ⊂ dom(f).

This property gives a safe bound for the line search along geodesics in optimization problems, so that the search always stays in the admissible domain. We need the following lemma to prove it:
Lemma 1 Given that f is a self-concordant function defined on a smooth manifold M and X is a vector field on M, define a function φ(t) : R → R as follows:

φ(t) := [∇²_{τ_{p exp_p(tX)}X} f(exp_p(tX))]^{−1/2}, (5)

where τ_{pq} is the parallel transport from the point p to the point q and exp_p(X) is the exponential map of the vector field X at p. Then the following results hold:

(1) |φ'(t)| ≤ 1.
(2) If φ(0) > 0, then (−φ(0), φ(0)) ⊂ dom(φ).

Proof It can be calculated that

φ'(t) = (d/dt) [∇²_{τ_{p exp_p(tX)}X} f(exp_p(tX))]^{−1/2} = −∇³_{τ_{p exp_p(tX)}X} f(exp_p(tX)) / (2 [∇²_{τ_{p exp_p(tX)}X} f(exp_p(tX))]^{3/2}).

Claim (1) follows directly from the definition of a self-concordant function, with M_f = 2. Assume claim (2) is not true. Since φ(0) > 0 and ∇²_{τ_{p exp_p(tX)}X} f(exp_p(tX)) is continuous in t, there is a symmetric neighborhood of 0 contained in the domain of φ. Let (−t̄, t̄) denote the largest such symmetric neighborhood; then at least one of its two end points is not in dom(φ). Without loss of generality, assume t̄ is this point and t̄ < φ(0). Because |φ(t) − φ(0)| ≤ |t| by claim (1), we have

∇²_{τ_{p exp_p(tX)}X} f(exp_p(tX)) = φ(t)^{−2} ≤ (φ(0) − t̄)^{−2} < +∞, ∀ t ∈ (−t̄, t̄).

Hence

lim_{t→t̄} ∇²_{τ_{p exp_p(tX)}X} f(exp_p(tX)) = ∇²_{τ_{p exp_p(t̄X)}X} f(exp_p(t̄X)) ≤ (φ(0) − t̄)^{−2} < +∞.

The existence of f(exp_p(t̄X)) comes from the assumption that f has a closed map and the fact that

f(exp_p(t̄X)) = f(p) + ∫₀^{t̄} ( ∇_X f(p) + ∫₀^s ∇²_{τ_{p exp_p(νX)}X} f(exp_p(νX)) dν ) ds < +∞.

Therefore φ(t̄) is well-defined, which contradicts the maximality of (−t̄, t̄). As such, claim (2) holds. □

Now we can prove Property 2.

Proof Notice from Lemma 1 that, for any vector field X ∈ TM,

{exp_p(tX) : |t| < φ(0)} ⊂ dom(f).
On the other hand,

{exp_p(tX) : |t| < φ(0)} = {exp_p(tX) : |t| < [∇²_X f(p)]^{−1/2}} = {exp_p(tX) : ∇²_{tX} f(p) < 1} = W(p; 1).

This completes the proof. □

In the following, two groups of properties are given to reveal the relationship between two different points on a geodesic. They are delicate characteristics of self-concordant functions; in fact, they are the foundation of the polynomial complexity results for self-concordant functions.

Property 3 For any p, q ∈ dom(f) such that there is a geodesic connecting p and q contained in the domain of f, if f is a self-concordant function, the following results hold:

[∇²_{X_pq(q)} f(q)]^{1/2} ≥ [∇²_{X_pq(p)} f(p)]^{1/2} / (1 + [∇²_{X_pq(p)} f(p)]^{1/2}), (6)

∇_{X_pq(q)} f(q) − ∇_{X_pq(p)} f(p) ≥ ∇²_{X_pq(p)} f(p) / (1 + [∇²_{X_pq(p)} f(p)]^{1/2}), (7)

f(q) ≥ f(p) + ∇_{X_pq(p)} f(p) + [∇²_{X_pq(p)} f(p)]^{1/2} − ln(1 + [∇²_{X_pq(p)} f(p)]^{1/2}). (8)

Proof Let φ(t) be the function defined in Lemma 1, for which one can see that φ(1) ≤ φ(0) + 1. This is equivalent to (6), taking into account that φ(0) = [∇²_{X_pq(p)} f(p)]^{−1/2} and φ(1) = [∇²_{X_pq(q)} f(q)]^{−1/2}. Furthermore,

∇_{X_pq(q)} f(q) − ∇_{X_pq(p)} f(p) = ∫₀¹ ∇²_{X_pq(exp_p(tX_pq))} f(exp_p(tX_pq)) dt = ∫₀¹ (1/t²) ∇²_{tX_pq(exp_p(tX_pq))} f(exp_p(tX_pq)) dt, (9)

which leads to (7) using inequality (6).
For inequality (8), notice that

f(q) − f(p) − ∇_{X_pq} f(p) = ∫₀¹ ( ∇_{X_pq(exp_p(tX_pq))} f(exp_p(tX_pq)) − ∇_{X_pq(p)} f(p) ) dt
= ∫₀¹ (1/t) ( ∇_{tX_pq(exp_p(tX_pq))} f(exp_p(tX_pq)) − ∇_{tX_pq(p)} f(p) ) dt
≥ ∫₀¹ (1/t) · ∇²_{tX_pq(p)} f(p) / (1 + [∇²_{tX_pq(p)} f(p)]^{1/2}) dt.

Let r = [∇²_{X_pq(p)} f(p)]^{1/2}. The last integral becomes

∫₀¹ t r² / (1 + t r) dt = r − ln(1 + r),

which leads to inequality (8) by replacing r with its original form. □

Property 4 For any p ∈ dom(f), q ∈ W(p; 1), and X ∈ TM, the following hold:

(1 − [∇²_{X_pq(p)} f(p)]^{1/2})² ∇²_{X(p)} f(p) ≤ ∇²_{X(q)} f(q) ≤ ∇²_{X(p)} f(p) / (1 − [∇²_{X_pq(p)} f(p)]^{1/2})², (10)

∇_{X_pq(q)} f(q) − ∇_{X_pq(p)} f(p) ≤ ∇²_{X_pq(p)} f(p) / (1 − [∇²_{X_pq(p)} f(p)]^{1/2}), (11)

f(q) ≤ f(p) + ∇_{X_pq(p)} f(p) − [∇²_{X_pq(p)} f(p)]^{1/2} − ln(1 − [∇²_{X_pq(p)} f(p)]^{1/2}). (12)

Proof Let ψ(t) be a function defined in the following form:

ψ(t) = ∇²_{X(exp_p(tX_pq))} f(exp_p(tX_pq)).

Then its derivative satisfies

|ψ'(t)| = |∇³ f(exp_p(tX_pq))[X_pq, X, X]| ≤ 2 [∇²_{X_pq(exp_p(tX_pq))} f(exp_p(tX_pq))]^{1/2} ψ(t)
= (2/t) [∇²_{tX_pq(exp_p(tX_pq))} f(exp_p(tX_pq))]^{1/2} ψ(t)
≤ 2 [∇²_{X_pq(p)} f(p)]^{1/2} ψ(t) / (1 − t [∇²_{X_pq(p)} f(p)]^{1/2}). (13)
Here the last step is obtained by applying φ(t) ≥ φ(0) − t from Lemma 1. Integrating both sides of inequality (13), one obtains

(1 − [∇²_{X_pq(p)} f(p)]^{1/2})² ψ(0) ≤ ψ(1) ≤ ψ(0) / (1 − [∇²_{X_pq(p)} f(p)]^{1/2})²,

which is equivalent to inequality (10). Combining inequality (10) with formula (9), one obtains

∇_{X_pq(q)} f(q) − ∇_{X_pq(p)} f(p) ≤ ∫₀¹ ∇²_{X_pq(p)} f(p) / (1 − t [∇²_{X_pq(p)} f(p)]^{1/2})² dt = ∇²_{X_pq(p)} f(p) / (1 − [∇²_{X_pq(p)} f(p)]^{1/2}),

which proves inequality (11). Combining this result with the same technique as in the proof of the previous property, there holds

f(q) − f(p) − ∇_{X_pq} f(p) = ∫₀¹ ( ∇_{X_pq(exp_p(tX_pq))} f(exp_p(tX_pq)) − ∇_{X_pq} f(p) ) dt
= ∫₀¹ (1/t) ( ∇_{tX_pq(exp_p(tX_pq))} f(exp_p(tX_pq)) − ∇_{tX_pq} f(p) ) dt
≤ ∫₀¹ t ∇²_{X_pq(p)} f(p) / (1 − t [∇²_{X_pq(p)} f(p)]^{1/2}) dt
= −[∇²_{X_pq(p)} f(p)]^{1/2} − ln(1 − [∇²_{X_pq(p)} f(p)]^{1/2}).

As such, inequality (12) is obtained by a simple transformation of this inequality. □

4 Newton decrement

Consider the following auxiliary quadratic cost defined on T_pM:

N_{f,p}(X) := f(p) + ∇_X f(p) + (1/2) ∇²_X f(p). (14)

Definition 3 The Newton decrement X_N(f, p) is defined as the minimal solution of the auxiliary cost function given by (14). More specifically,

X_N(f, p) := arg min_{X ∈ T_pM} N_{f,p}(X). (15)

Similar to the case in Euclidean space, the Newton decrement can be characterized in many ways. The following theorem summarizes its properties.
Theorem 1 Let f : M → R be a self-concordant function, p a given point in dom(f) ⊂ M, and X_N its Newton decrement defined at p. The following results hold:

∇_{X_N}∇_X f(p) = −∇_X f(p), ∀ X ∈ T_pM, (16)

∇²_{X_N} f(p) = ( max { −∇_X f(p) : X ∈ T_pM, ∇²_X f(p) ≤ 1 } )². (17)

Furthermore, if the linear mapping ∇µ : T_pM → T_p*M, defined by (∇µ)(X) := ∇_X µ for a given µ ∈ T_p*M, is of full rank, then

X_N = −(∇(df))^{−1}(df). (18)

Proof Since p is a given point on the manifold M, the claimed results can be converted into their local representations in Euclidean space. More specifically, consider the following quadratic function:

q(x) := (1/2) x^T A x + b^T x + c, where A ∈ R^{n×n}, A = A^T, b ∈ R^n, c ∈ R. (19)

Let x* denote the optimal point. Then the gradient of q at x* must be the zero vector, i.e.,

y^T (A x* + b) = 0, ∀ y ∈ R^n.

This is the local representation of (16). The case where ∇µ is an isomorphism corresponds to the case where A is non-degenerate; then x* = −A^{−1} b, which is the local representation of (18). On the other hand,

−y^T b = y^T A x* = y^T A^{1/2} A^{1/2} x* ≤ (y^T A y)^{1/2} (x*^T A x*)^{1/2},

where equality holds if and only if y is proportional to x*. As such,

( max { −y^T b : y ∈ R^n, y^T A y ≤ 1 } )² = max_y { (y^T A x*)² / (y^T A y) } = x*^T A x*.

This is the local representation of (17). Therefore, the proof is complete. □

5 A damped Newton algorithm for self-concordant functions

Consider now the minimization problem for self-concordant functions on a smooth manifold. First, let us establish the existence of the minimal point.

Theorem 2 Let λ_f(p) be defined as follows:

λ_f(p) := max_{X ∈ T_pM} −∇_X f(p) / [∇²_X f(p)]^{1/2}, for p ∈ dom(f). (20)
If λ_f(p) < 1 for some p ∈ dom(f), then there exists a unique point p_f* ∈ dom(f) such that f(p_f*) = min{ f(p) : p ∈ dom(f) }.

Proof Let p₀ be a point such that λ_f(p₀) < 1. For any q ∈ dom(f) such that f(q) ≤ f(p₀), from (8) we have

f(q) − f(p₀) ≥ −λ_f(p₀) [∇²_{X_{p₀q}} f(p₀)]^{1/2} + [∇²_{X_{p₀q}} f(p₀)]^{1/2} − ln(1 + [∇²_{X_{p₀q}} f(p₀)]^{1/2}).

Hence

1 − ln(1 + [∇²_{X_{p₀q}} f(p₀)]^{1/2}) / [∇²_{X_{p₀q}} f(p₀)]^{1/2} ≤ λ_f(p₀) < 1.

Since

lim_{t→+∞} ln(1 + t)/t = 0,

there exists a constant c > 0 such that

[∇²_{X_{p₀q}} f(p₀)]^{1/2} ≤ c. (21)

Hence the vectors X_{p₀q} are contained in the compact set defined by inequality (21). Consider the map taking any tangent vector X to exp_{p₀}(X); this map is continuous by the definition of geodesics, so the image of the compact set defined by inequality (21) is also compact. Therefore, a minimal point exists. On the other hand, let p* denote a minimal point. Then, since the first covariant differential of f vanishes at p*, (8) gives

f(q) ≥ f(p*) + [∇²_{X_{p*q}} f(p*)]^{1/2} − ln(1 + [∇²_{X_{p*q}} f(p*)]^{1/2}) > f(p*), ∀ q ∈ dom(f), q ≠ p*. (22)

The uniqueness is proved. □

Consider the following damped Newton method:

Algorithm 1 (Damped Newton algorithm)
Step 0: Find a feasible point p₀ ∈ dom(f).
Step k: p_{k+1} = exp_{p_k}( X_N / (1 + λ_f(p_k)) ), where exp_{p_k}(tX_N) is the exponential map of the Newton decrement at p_k.

The following theorem establishes the convergence properties of the proposed damped Newton algorithm:

Theorem 3 Let the minimal point of f(p) be denoted p*, and let p be any admissible point in W(p*; 1).

(1) The following inequality holds:

[∇²_{X_{pp*}} f(p)]^{1/2} ≤ λ_f(p) / (1 − λ_f(p)). (23)
(2) If λ_f(p) < 1, then

f(p) − f(p*) ≤ −λ_f(p) − ln(1 − λ_f(p)). (24)

(3) For the proposed damped Newton algorithm, there holds

f(p_{k+1}) ≤ f(p_k) − ( λ_f(p_k) − ln(1 + λ_f(p_k)) ). (25)

Proof
(1) Let [∇²_{X_{pp*}} f(p)]^{1/2} be denoted r(p). In view of (7) and (22), and since the first covariant differential of f vanishes at p*, we have

−∇_{X_{pp*}} f(p) ≥ r²(p) / (1 + r(p)). (26)

On the other hand, there holds

−∇_{X_{pp*}} f(p) ≤ λ_f(p) r(p)

by the definition of λ_f(p). Therefore

r²(p) / (1 + r(p)) ≤ λ_f(p) r(p),

from which r(p) can be bounded as

r(p) ≤ λ_f(p) / (1 − λ_f(p)),

which is (23).

(2) Based on (8) and the inequality (23) obtained above, one has

f(p*) ≥ f(p) + ∇_{X_{pp*}} f(p) + r(p) − ln(1 + r(p)) ≥ f(p) − λ_f(p) r(p) + r(p) − ln(1 + r(p)). (27)

Let an auxiliary function g(x, y) be defined as

g(x, y) := x − ln(1 + x) − x y − y − ln(1 − y), x ≥ 0, 1 > y ≥ 0.

It can easily be checked that

g(x, 0) = x − ln(1 + x) ≥ 0 and g(0, y) = −y − ln(1 − y) ≥ 0.

If there were a point (x₀, y₀) such that g(x₀, y₀) < 0, the function would have to attain a negative minimum at an interior point, at which the gradient vanishes. However, it can be calculated that

∂g/∂x (x*, y*) = 1 − 1/(1 + x*) − y* = 0,
∂g/∂y (x*, y*) = −x* − 1 + 1/(1 − y*) = 0.
The solution of this system of equations satisfies (1 − y*)(1 + x*) = 1, so that ln(1 + x*) = −ln(1 − y*) and y* = x*/(1 + x*). As such, at the critical point there holds

g(x*, y*) = x* − x* y* − y* = x*(1 − y*) − y* = 0,

which contradicts g(x*, y*) < 0. Therefore the minimum, if it exists, is achieved at the boundary, and hence

g(x, y) ≥ 0, ∀ x ≥ 0, 1 > y ≥ 0.

Applying this inequality to (27) with x = r(p) and y = λ_f(p), we obtain the right-hand side of inequality (24).

(3) It is clear that p_{k+1} ∈ W(p_k; 1), since

[ ∇²_{X_N/(1+λ_f(p_k))} f(p_k) ]^{1/2} = λ_f(p_k) / (1 + λ_f(p_k)) < 1.

Applying (12), there holds

f(p_{k+1}) ≤ f(p_k) − λ_f²(p_k)/(1 + λ_f(p_k)) − λ_f(p_k)/(1 + λ_f(p_k)) − ln( 1 − λ_f(p_k)/(1 + λ_f(p_k)) )
= f(p_k) − λ_f(p_k) + ln(1 + λ_f(p_k)),

by the definition of λ_f(p_k) and the results in Theorem 1. Therefore inequality (25) is proved. □

Notice that the two functions λ − ln(1 + λ), λ ∈ (0, +∞), and −λ − ln(1 − λ), λ ∈ (0, 1), are positive and monotonically increasing. The results proved in Theorem 3 already give a set of error bounds for the function f(p) and estimates of the variable point p based on λ_f(p). More specifically, inequality (25) implies the following results:

Corollary 1 For the damped Newton algorithm, λ_f(p_k) is bounded as follows:

λ_f(p_k) − ln(1 + λ_f(p_k)) ≤ f(p_k) − f(p*). (28)

Furthermore, for a given precision ɛ > 0, the number of iterations, denoted N, required so that λ_f(p_N) < ɛ is less than or equal to

(f(p₀) − f(p*)) / (ɛ − ln(1 + ɛ)).

For the convergence rate, the following theorem reveals the quadratic convergence and the computational complexity of the damped Newton algorithm proposed in this paper.
Theorem 4 For the damped Newton algorithm proposed in this paper, the following result holds:

λ_f(p_{k+1}) ≤ 2 λ_f²(p_k). (29)

Proof For a vector field X ∈ TM, construct an auxiliary function ψ(t) as follows:

ψ(t) := ∇_X f(exp_{p_k}(t X_N(p_k))), t ∈ [0, 1/(1 + λ_f(p_k))]. (30)

Now, with a slight abuse of notation that ignores the vector field adaptation,

ψ'(t) = ∇_{X_N(p_k)}∇_X f(exp_{p_k}(t X_N(p_k))),
|ψ''(t)| ≤ 2 ∇²_{X_N} f(exp_{p_k}(t X_N(p_k))) [∇²_X f(exp_{p_k}(t X_N(p_k)))]^{1/2}.

The last inequality is obtained in view of Property 1. Notice that

∇²_{X_N} f(exp_{p_k}(t X_N(p_k))) = (1/t²) ∇²_{tX_N} f(exp_{p_k}(t X_N(p_k))) ≤ (1/t²) ∇²_{tX_N} f(p_k) / (1 − t λ_f(p_k))² = λ_f²(p_k) / (1 − t λ_f(p_k))², (31)

by applying inequality (10) and the fact that p_{k+1} ∈ W(p_k; 1). Then, by applying inequality (10) again, we have

|ψ''(t)| ≤ 2 λ_f²(p_k)/(1 − t λ_f(p_k))² [∇²_X f(exp_{p_k}(t X_N(p_k)))]^{1/2}
≤ 2 λ_f²(p_k)/(1 − t λ_f(p_k))² · [∇²_X f(p_k)]^{1/2}/(1 − t λ_f(p_k))
= 2 λ_f²(p_k) [∇²_X f(p_k)]^{1/2} / (1 − t λ_f(p_k))³.
Let 1/(1 + λ_f(p_k)) be denoted r. Then

ψ(r) = ψ(0) + ∫₀^r ψ'(τ) dτ = ψ(0) + ∫₀^r ( ψ'(0) + ∫₀^τ ψ''(t) dt ) dτ
≤ ∇_X f(p_k) + r ∇_{X_N}∇_X f(p_k) + [∇²_X f(p_k)]^{1/2} ∫₀^r ∫₀^τ 2 λ_f²(p_k)/(1 − t λ_f(p_k))³ dt dτ
= ∇_X f(p_k) + r ∇_{X_N}∇_X f(p_k) + [∇²_X f(p_k)]^{1/2} λ_f²(p_k) r² / (1 − λ_f(p_k) r).

By using (16) of Theorem 1, so that ∇_{X_N}∇_X f(p_k) = −∇_X f(p_k), the last inequality yields

|ψ(r)| ≤ (1 − r) |∇_X f(p_k)| + [∇²_X f(p_k)]^{1/2} λ_f²(p_k) r² / (1 − λ_f(p_k) r)
≤ ( (1 − r) λ_f(p_k) + λ_f²(p_k) r²/(1 − λ_f(p_k) r) ) [∇²_X f(p_k)]^{1/2}
= ( 2 λ_f²(p_k) / (1 + λ_f(p_k)) ) [∇²_X f(p_k)]^{1/2},

where [∇²_X f(p_k)]^{1/2} can be estimated by the left part of inequality (10), namely

[∇²_X f(p_k)]^{1/2} ≤ (1 + λ_f(p_k)) [∇²_X f(exp_{p_k}(r X_N(p_k)))]^{1/2}.

That is,

|ψ(r)| ≤ 2 λ_f²(p_k) [∇²_X f(exp_{p_k}(r X_N(p_k)))]^{1/2}.

Therefore,

λ_f(p_{k+1}) = max_{X ∈ T_{p_{k+1}}M} −∇_X f(p_{k+1}) / [∇²_X f(p_{k+1})]^{1/2} = max_X |ψ(r)| / [∇²_X f(exp_{p_k}(r X_N(p_k)))]^{1/2} ≤ 2 λ_f²(p_k).

The proof is complete. □

It is clear that λ_f(p_{k+1}) < λ_f(p_k) if λ_f(p_k) < 1/2. It can also be proved by simple analysis that

−λ − ln(1 − λ) < λ², ∀ λ ∈ (0, 1/2).
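The damped iteration and its two convergence phases can be sketched in one dimension (our illustration, not the paper's), using the self-concordant function f(x) = x − ln x on (0, ∞), whose minimizer is x = 1:

```python
import math

def damped_newton(x0, iters=30):
    """Damped Newton for f(x) = x - ln(x); f'(x) = 1 - 1/x, f''(x) = 1/x^2."""
    x = x0
    for _ in range(iters):
        g = 1.0 - 1.0 / x                  # f'(x)
        h = 1.0 / x ** 2                   # f''(x) > 0
        lam = abs(g) / math.sqrt(h)        # Newton decrement lambda_f(x)
        x -= g / h / (1.0 + lam)           # damped step keeps x > 0
    return x

print(abs(damped_newton(0.05) - 1.0) < 1e-8)  # True
```

Away from the minimizer the decrement damps the step, guaranteeing the fixed per-iteration decrease of Theorem 3; near the minimizer λ_f is small, the step approaches the full Newton step, and the quadratic phase of Theorem 4 takes over.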
Remark 1 Under the condition that λ_f(p) < +∞ for all p ∈ M, and starting from an admissible initial point, the convergence results of the proposed damped Newton algorithm can be summarized as follows:

1) After not more than (f(p₀) − f(p*)) / (1/2 − ln(3/2)) steps, λ_f(p_k) falls into the interval (0, 1/2).
2) λ_f(p_k) monotonically and quadratically converges to zero starting from any point p_k such that λ_f(p_k) ∈ (0, 1/2).
3) [∇²_{X_{pp*}} f(p)]^{1/2} ≤ 2 λ_f(p), for λ_f(p) ∈ (0, 1/2).
4) f(p) − f(p*) ≤ λ_f²(p), for λ_f(p) ∈ (0, 1/2).
5) For a given precision ɛ ∈ (0, 1/2), the maximal number of iterations needed so that λ_f(p_k) ≤ ɛ is not more than

min{ (f(p₀) − f(p*)) / (ɛ − ln(1 + ɛ)), min_{α ∈ (0, 1/2)} [ (f(p₀) − f(p*)) / (α − ln(1 + α)) + log₂( ln(2ɛ) / ln(2α) ) ] },

combining the per-step decrease of Theorem 3 with the quadratic rate of Theorem 4. This amount is then bounded by

(f(p₀) − f(p*)) / (1/2 − ln(3/2)) + ( ln 2 + ln ln(1/ɛ) ) / ln 2.

6 Illustrative examples

This section includes two simple examples to illustrate the proposed damped Newton method.

Example 1 Consider the following optimization problem:

minimize f(x) := x₁ + x₂, subject to x₁, x₂ > 0, x = (x₁, x₂) ∈ H,

where H is the hyperbola satisfying x₁ x₂ = 1. The Riemannian metric is defined as the metric induced from the ambient Euclidean space. Let T_xH be the tangent space of H at x, i.e., T_xH = { αh : h = (x₁, −x₂), α ∈ R }. Then the geodesic on the hyperbola can be calculated as

exp_x(tX) = A x, (32)

where A = diag(e^{αt}, e^{−αt}) and X = αh ∈ T_xH. Hence, the covariant differentials can be calculated as follows:

∇_X f(x) = (x₁ − x₂)α, ∇²_X f(x) = (x₁ + x₂)α², ∇³_X f(x) = (x₁ − x₂)α³.

It can be seen that ∇²_X f(x) is positive definite. Then

( ∇³_X f(x) )² / ( ∇²_X f(x) )³ = (x₁ − x₂)² / (x₁ + x₂)³, (33)

which is bounded by 1/2 on H, since (x₁ − x₂)² ≤ (x₁ + x₂)² and x₁ + x₂ ≥ 2 when x₁ x₂ = 1.
As such, f(x) is a self-concordant function, and we can apply the proposed damped Newton algorithm.

Algorithm (damped Newton algorithm for Example 1)
Step 0: Randomly generate a feasible initial point x⁰.
Step k: Calculate the next iterate according to

x^{k+1} = exp_{x^k}( X_N / (1 + λ(x^k)) ) = A_k x^k,

where

A_k = diag( e^{α_k/(1+λ(x^k))}, e^{−α_k/(1+λ(x^k))} ),
λ(x^k) = [ (x₁^k + x₂^k) α_k² ]^{1/2},
X_N = α_k (x₁^k, −x₂^k)^T, α_k = (x₂^k − x₁^k)/(x₁^k + x₂^k).

The simulation result is shown in Table 1.

Table 1 The simulation result for Example 1 (columns: step k, x^k, f(x^k), λ(x^k)); the iterates converge to x = (1, 1).

Example 2 Consider the following optimization problem:

min f(x) := −ln x₁ − ... − ln x_n, subject to x = (x₁, x₂, ..., x_n) ∈ S^n, 0 < x₁, ..., x_n < 1, (34)

where S^n is the unit sphere with x^T x = 1. Here, we define the Riemannian metric as the metric induced from the ambient Euclidean space, i.e., ⟨y, z⟩ = y^T z, where
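Example 1's update can be sketched numerically as follows (a sketch under our reading of the formulas above; variable and function names are ours). The iterates stay on the hyperbola automatically, since the geodesic step rescales x₁ and x₂ by reciprocal factors:

```python
import math

def damped_newton_hyperbola(x1, x2, iters=20):
    """Minimize x1 + x2 over the hyperbola x1 * x2 = 1, x1, x2 > 0 (Example 1)."""
    for _ in range(iters):
        alpha = (x1 - x2) / (x1 + x2)             # Newton direction coefficient
        lam = math.sqrt((x1 + x2) * alpha ** 2)   # Newton decrement lambda(x)
        t = 1.0 / (1.0 + lam)                     # damped step length
        x1, x2 = x1 * math.exp(-alpha * t), x2 * math.exp(alpha * t)
    return x1, x2

x1, x2 = damped_newton_hyperbola(6.0, 1.0 / 6.0)
print(abs(x1 - 1.0) < 1e-8, abs(x1 * x2 - 1.0) < 1e-9)  # True True
```

The constraint x₁ x₂ = 1 is preserved to machine precision at every step, while the cost decreases toward its global minimum f(1, 1) = 2.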
y, z ∈ T_xS^n. Let x ∈ S^n and let h = (h₁, h₂, ..., h_n) ∈ T_xS^n have unit length (if not, we can normalize it). Then the geodesic on the sphere is

exp_x(th) = x cos(t) + h sin(t),

and the parallel transport along the geodesic is

τh = h cos(t) − x sin(t).

Therefore, the following covariant differentials can be calculated:

∇_h f(x) = −( h₁/x₁ + h₂/x₂ + ... + h_n/x_n ),
∇²_h f(x) = ( h₁²/x₁² + 1 ) + ( h₂²/x₂² + 1 ) + ... + ( h_n²/x_n² + 1 ),
∇³_h f(x) = −2 [ ( h₁³/x₁³ + h₁/x₁ ) + ( h₂³/x₂³ + h₂/x₂ ) + ... + ( h_n³/x_n³ + h_n/x_n ) ].

The following procedure proves that the function f is self-concordant on S^n. It can easily be checked that for any h ∈ T_xS^n the second covariant differential ∇²_h f(x) is always positive. Let y₁ = h₁/x₁, ..., y_n = h_n/x_n, y = (y₁, y₂, ..., y_n)^T, and b = (y₁² + 1, y₂² + 1, ..., y_n² + 1)^T. Then we have

M_f² = ( ∇³_h f(x) )² / ( ∇²_h f(x) )³
= 4 ( y₁(y₁² + 1) + ... + y_n(y_n² + 1) )² / ( (y₁² + 1) + ... + (y_n² + 1) )³
= 4 (y^T b)² / ( (y₁² + 1) + ... + (y_n² + 1) )³
≤ 4 (y₁² + ... + y_n²) ( (y₁² + 1)² + ... + (y_n² + 1)² ) / ( (y₁² + 1) + ... + (y_n² + 1) )³
≤ 4 (y₁² + ... + y_n²) / ( (y₁² + 1) + ... + (y_n² + 1) )
≤ 4. (35)

Therefore, the function f is a self-concordant function defined on S^n. We apply the damped Newton algorithm to this problem for a particular small n. Figure 1 shows the results of the damped Newton method on the function f, where x* is the optimal solution and the logarithm is base 10. The result clearly demonstrates the quadratic convergence of the proposed algorithm. To illustrate the advantage of the proposed algorithm on the manifold, this optimization problem is also solved in Euclidean space using the Lagrange multiplier method [13] as a comparison example. Given the Lagrange multiplier α ∈ R, the Lagrangian form of the original problem is

L(x, α) = −ln x₁ − ... − ln x_n + α (1 − x^T x). (36)

Then Newton's method can be applied in Euclidean space to find the critical point (x*, α*) of L. However, because the parametrization of the problem in Euclidean space
Fig. 1 The results of the damped Newton method for Example 2 (log₁₀ ‖x − x*‖ versus iteration number)

may not necessarily reflect the problem's intrinsic global properties, the efficiency and convergence of Newton's method depend on the choice of initial point, which cannot be guaranteed. Figure 2 shows the simulation result obtained starting from a randomly generated initial point. The performance is inferior to that shown in Fig. 1.

7 Conclusions

This paper reports our effort to generalize the definition and known results for self-concordant functions in Euclidean space to manifolds. It lays a comparatively solid foundation to facilitate the construction of barrier functions for interior-point algorithms on manifolds. For the proposed self-concordant function defined on a general class of smooth manifolds, a number of desirable properties are obtained. These include the feasibility of a Dikin-type ellipsoid and several inequalities that characterize the similarity between self-concordant functions and quadratic functions along the geodesics of the manifold. Under the convexity condition on the manifold defined by the second-order covariant differential, it is also shown that the optimal solution is global. A Newton decrement is defined for this specific class of functions. This concept is analyzed with regard to the relationship between the first-order covariant derivative along the Newton direction and along a general direction, and with regard to the maximal ratio of the norm of the first-order covariant derivative to that of the second-order derivative. The latter facilitates
Fig. 2 The results of Newton's method in Euclidean space for Example 2 (log₁₀ ‖x − x*‖ versus iteration number)

the definition of the index λ_f(p). With this theoretical preparation, the existence of a global optimal solution is shown whenever λ_f(p) < 1 holds at some point p. A damped Newton algorithm is proposed. Its computational complexity is carefully analyzed, and a precise bound of the order O(−ln ɛ) is shown. Two simple but meaningful optimization problems are given to illustrate the efficiency of the proposed concept and algorithm.

Acknowledgements The authors are very grateful for the valuable comments given by the anonymous reviewers. The first author would like to thank Dr. Jochen Trumpf for valuable discussions and suggestions. His research was partially supported by the IRGS Grant J56 from UTAS. John B. Moore was partially supported by an ARC Large Research Grant A589. Huibo Ji's work was in part supported by National ICT Australia.

References

1. Nesterov, Y., Nemirovskii, A.: Interior-Point Polynomial Algorithms in Convex Programming. Studies in Applied Mathematics, vol. 13. SIAM, Philadelphia, PA (1994)
2. Glineur, F.: Improving complexity of structured convex optimization problems using self-concordant barriers. Eur. J. Oper. Res. 143, 291–310 (2002)
3. Faybusovich, L.: Infinite-dimensional semidefinite programming: regularized determinants and self-concordant barriers. Fields Inst. Commun. 18 (1998)