Convex Analysis with Applications
UBC Math 604 Lecture Notes by Philip D. Loewen

In trust region methods, we minimize a quadratic model function $M = M(p)$ over the set of all $p \in \mathbb{R}^n$ satisfying a constraint
$$(\ast)\qquad g(p) := \tfrac12\left(\|p\|^2 - \Delta^2\right) \le 0.$$
(Here $\Delta > 0$ is given.) In cases where $M$ is convex, there is a nice theory for this problem; the theory has much more general applicability too.

A. Convex Sets and Functions

Definition. A subset $S$ of a real vector space $X$ is called convex when, for every pair of distinct points $x_0$, $x_1$ in $S$, one has
$$x_t := (1-t)x_0 + t x_1 \in S \qquad \forall t \in (0,1).$$
We call $S$ strictly convex if in fact one has $x_t \in \operatorname{int} S$ above.

Definition. Let $X$ be a real vector space containing a convex subset $S$. A function $f\colon S \to \mathbb{R}$ is called convex when, for every pair of distinct points $x_0$, $x_1$ in $S$, one has
$$f((1-t)x_0 + t x_1) \le (1-t)f(x_0) + t f(x_1) \qquad \forall t \in (0,1).$$
If this statement holds with strict inequality, then $f$ is called strictly convex on $S$. Geometrically, a convex function is one whose chords lie on or above its graph; a strictly convex function is one whose chords lie strictly above the graph except at their endpoints.

Example. Given a row vector $g$ of length $n$, and an $n\times n$ matrix $H = H^T$, take $X = \mathbb{R}^n$ and $S = X$. Then
$$f(x) := gx + \tfrac12 x^T H x$$
is convex iff $H \ge 0$, and strictly convex iff $H > 0$.

Proof. Pick $x_0, x_1 \in \mathbb{R}^n$ distinct. Let $v = x_1 - x_0$, and consider, for $t \in (0,1)$,
$$x_t = (1-t)x_0 + t x_1 = x_0 + t(x_1 - x_0) = x_0 + tv.$$
We have
$$\begin{aligned}
[(1-t)f(x_0) + t f(x_1)] - f(x_t)
&= (1-t)\left[gx_0 + \tfrac12 x_0^T H x_0\right] + t\left[gx_1 + \tfrac12 x_1^T H x_1\right] - \left[gx_t + \tfrac12 x_t^T H x_t\right] \\
&= \tfrac12(1-t)\,x_0^T H x_0 + \tfrac12 t\, x_1^T H x_1 - \tfrac12\left[(1-t)^2 x_0^T H x_0 + 2t(1-t)\,x_0^T H x_1 + t^2 x_1^T H x_1\right] \\
&= \tfrac12\left[(1-t) - (1-t)^2\right]x_0^T H x_0 + \tfrac12\left[t - t^2\right]x_1^T H x_1 - t(1-t)\,x_0^T H x_1 \\
&= \tfrac12 t(1-t)(x_1 - x_0)^T H (x_1 - x_0) = \tfrac12 t(1-t)\, v^T H v.
\end{aligned}$$
Now $v \ne 0$ by assumption, so if $H > 0$ then the right side is positive whenever $t \in (0,1)$. If we know only $H \ge 0$, then at least the right side is nonnegative in this interval. ////

File convex, version of 20 February 01, page 1. Typeset at 09:13 February 20, 2001.
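The identity derived in the proof above is easy to spot-check numerically. The following is a minimal sketch (the random test data is mine, not from the notes): for $f(x) = gx + \tfrac12 x^T H x$ with $H$ symmetric, the chord-minus-function gap should equal $\tfrac12 t(1-t)v^T H v$ exactly.

```python
import numpy as np

# Spot-check of the identity from the proof above:
#   [(1-t) f(x0) + t f(x1)] - f(x_t) = (1/2) t (1-t) v^T H v,   v = x1 - x0,
# for f(x) = g x + (1/2) x^T H x with symmetric (not necessarily definite) H.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
H = A + A.T                        # symmetric, possibly indefinite
g = rng.standard_normal(n)         # plays the role of the row vector g

def f(x):
    return g @ x + 0.5 * x @ H @ x

x0 = rng.standard_normal(n)
x1 = rng.standard_normal(n)
v = x1 - x0
for t in (0.1, 0.5, 0.9):
    xt = (1.0 - t) * x0 + t * x1
    lhs = (1.0 - t) * f(x0) + t * f(x1) - f(xt)
    rhs = 0.5 * t * (1.0 - t) * (v @ H @ v)
    assert abs(lhs - rhs) < 1e-9
```

Note the identity holds for every symmetric $H$; only its sign depends on the definiteness of $H$, which is exactly the point of the example.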
Notice that the definition of convexity makes no reference to differentiability, and indeed many convex functions fail to be differentiable. A simple example of this is $f_1(x) = |x|$ on $X = \mathbb{R}$; note that $f_1(x) = \int_0^x \operatorname{sgn}(t)\,dt$. A more complicated example is $f_2(x) := \int_0^x \lceil t \rceil\,dt$. The graph of $f_2$ is shown below: it has corners at every integer value of $x$.

[Figure: the piecewise-linear graph of $f_2(x) := \int_0^x \lceil t \rceil\,dt$, with corners at the integers.]

Operations Preserving Convexity. Here are several ways to combine functions that respect the property of convexity. In each of them, $X$ is a real vector space with convex subset $D$. (Confirming these statements is an easy exercise: please try it!)

(a) Positive Linear Combinations: If the functions $f_1, f_2, \ldots, f_n$ are convex on $D$, $c_0$ is a real constant, and $c_1, c_2, \ldots, c_n$ are positive real numbers, then the sum function
$$f(x) := c_0 + c_1 f_1(x) + c_2 f_2(x) + \cdots + c_n f_n(x)$$
is well-defined and convex on the set $D$. Moreover, if any one of the functions $f_k$ happens to be strictly convex, then the sum function $f$ will be strictly convex too.

(b) Restriction to Convex Subsets: If $f$ is convex on $D$ and $S$ is a convex subset of $D$, then the restriction of $f$ to $S$ is convex on $S$. (An important particular case arises when $D = X$ is the whole space and $S$ is some affine subspace of $X$: a function convex on $X$ is automatically convex on $S$.)

(c) Pointwise Maxima: If $f_1, f_2, \ldots, f_n$ are convex on $D$, then so is their maximum function $f(x) := \max\{f_1(x), f_2(x), \ldots, f_n(x)\}$. (A similar result holds for the maximum of an infinite number of convex functions, under suitable hypotheses.)
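Operation (c) can be checked numerically on a concrete pair of functions. The sketch below (evidence, not a proof; the two functions are my illustrative choices) samples random chords of the pointwise maximum of $f_1(x) = |x|$ and $f_2(x) = (x-1)^2$ and confirms the convexity inequality.

```python
import random

# Numerical evidence for operation (c): the pointwise maximum of two convex
# functions, f1(x) = |x| and f2(x) = (x-1)^2, satisfies the chord inequality
# f(x_t) <= (1-t) f(x0) + t f(x1) at randomly sampled points.
f1 = lambda x: abs(x)
f2 = lambda x: (x - 1.0) ** 2
fmax = lambda x: max(f1(x), f2(x))

random.seed(1)
for _ in range(1000):
    x0 = random.uniform(-5.0, 5.0)
    x1 = random.uniform(-5.0, 5.0)
    t = random.random()
    xt = (1.0 - t) * x0 + t * x1
    assert fmax(xt) <= (1.0 - t) * fmax(x0) + t * fmax(x1) + 1e-12
```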
B. Convexity and its Characterizations

The First-Derivative Test.

Theorem (Subgradient Inequality). Let $X$ be a real vector space containing a convex subset $S$. Let the function $f\colon X \to \mathbb{R}$ be Gâteaux differentiable at every point of $S$.
(a) $f$ is convex on $S$ if and only if $\nabla f(x)(y-x) \le f(y) - f(x)$ for any distinct $x$ and $y$ in $S$.
(b) $f$ is strictly convex on $S$ if and only if $\nabla f(x)(y-x) < f(y) - f(x)$ for any distinct $x$ and $y$ in $S$.

Proof. (a)(b)($\Leftarrow$) Let $x_0$ and $x_1$ be distinct points of $S$, and let $t \in (0,1)$ be given. Define $x_t := (1-t)x_0 + t x_1$. In statement (b), the assumption is equivalent to $f(x) < f(y) + \nabla f(x)(x - y)$ for any $x \ne y$: choosing $x = x_t$ here, and then applying the inequality to both $y = x_0$ and $y = x_1$ in turn, we get
$$f(x_t) < f(x_0) + \nabla f(x_t)(x_t - x_0),$$
$$f(x_t) < f(x_1) + \nabla f(x_t)(x_t - x_1).$$
Multiplying the first of these by $(1-t)$, the second by $t$, and adding the resulting inequalities produces
$$f(x_t) < t f(x_1) + (1-t) f(x_0) + \nabla f(x_t)\bigl(t(x_t - x_1) + (1-t)(x_t - x_0)\bigr).$$
A simple calculation reveals that the operand of $\nabla f(x_t)$ appearing here is the zero vector, so this reduces to the inequality defining the strict convexity of $f$ on $S$.

In statement (a), the assumed inequality is non-strict, but exactly the same steps lead to a non-strict version of the conclusion. This confirms the ordinary convexity of $f$ on $S$.

(a)($\Rightarrow$) Suppose $f$ is convex on $S$. Let two distinct points $x$ and $y$ in $S$ be given. Then for any $t \in (0,1)$, the definition of convexity gives
$$f(x + t(y - x)) = f((1-t)x + ty) \le (1-t)f(x) + t f(y) = f(x) + t\,(f(y) - f(x)).$$
Rearranging this gives
$$\frac{f(x + t(y - x)) - f(x)}{t} \le f(y) - f(x).$$
In the limit as $t \to 0^+$, the left side converges to the directional derivative $f'[x;\, y - x]$, which equals $\nabla f(x)(y - x)$ by the definition of the Gâteaux derivative. Thus we have
$$\nabla f(x)(y - x) \le f(y) - f(x).$$
This argument applies to any distinct points $x$ and $y$ in $S$, so the result follows.

(b)($\Rightarrow$) If $f$ is known to be strictly convex on $S$, then the non-strict inequality of part (a) certainly holds. It remains only to show that equality cannot hold. Indeed, if $x_0$ and $x_1$ are distinct points of $S$ where one has $f(x_1) = f(x_0) + \nabla f(x_0)(x_1 - x_0)$, the definition of convexity implies that for all $t$ in $(0,1)$,
$$\begin{aligned}
f(x_t) &\le (1-t)f(x_0) + t f(x_1) \\
&= (1-t)f(x_0) + t\left[f(x_0) + \nabla f(x_0)(x_1 - x_0)\right] \\
&= f(x_0) + t\,\nabla f(x_0)(x_1 - x_0) = f(x_0) + \nabla f(x_0)(x_t - x_0) \le f(x_t).
\end{aligned}$$
(The last inequality is a consequence of the subgradient inequality of part (a).) This chain of inequalities reveals that
$$f(x_t) = (1-t)f(x_0) + t f(x_1) \qquad \forall t \in (0,1),$$
which cannot happen for a strictly convex function $f$. ////

The inequality at the heart of the previous Theorem, namely
$$\nabla f(x)(y - x) \le f(y) - f(x) \qquad \forall y \in X,$$
is called the subgradient inequality. Geometrically, it says that the tangent hyperplane to the graph of $f$ at the point $(x, f(x))$ lies on or below the graph of $f$ at every point $y$ in $X$. (Write $f(y) \ge f(x) + \nabla f(x)(y - x)$ to see this.) The name also helps one remember the direction of the inequality: the gradient $\nabla f(x)$ is always on the small side. It is worth noting that the subgradient inequality written above follows from the convexity of $f$ using the differentiability property only at the point $x$; see the proof of (a) above.

The Second-Derivative Test. We will never need second derivatives in general vector spaces, so we focus on the finite-dimensional case ($X = \mathbb{R}^n$) in the next result. Here we use the familiar notation $\nabla f(x)$ instead of $f'(x)$, and write $\nabla^2 f(x)$ for the usual Hessian matrix of mixed partial derivatives of $f$ at the point $x$:
$$\left[\nabla^2 f(x)\right]_{ij} = \partial^2 f / \partial x_i\, \partial x_j.$$
We assume $f \in C^2(\mathbb{R}^n)$, so the order of mixed differentiation does not matter and the Hessian matrix is symmetric. The shorthand $\nabla^2 f(x) > 0$ means that this matrix is positive definite, i.e.,
$$v^T \nabla^2 f(x)\, v > 0 \qquad \forall v \in \mathbb{R}^n,\ v \ne 0. \tag{B.1}$$
(A similar understanding governs the notation $\nabla^2 f(x) \ge 0$.)
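The definiteness shorthand in (B.1) is convenient to test numerically through eigenvalues, since a symmetric matrix is positive definite exactly when its smallest eigenvalue is positive. A minimal sketch (the helper names and test matrices are mine, not from the notes):

```python
import numpy as np

# Numerical test of the shorthand in (B.1): a symmetric matrix A satisfies
# A > 0 iff its smallest eigenvalue is positive, and A >= 0 iff it is
# nonnegative. eigvalsh is the symmetric-matrix eigenvalue routine.
def is_positive_definite(A, tol=1e-12):
    return bool(np.min(np.linalg.eigvalsh(A)) > tol)

def is_positive_semidefinite(A, tol=1e-12):
    return bool(np.min(np.linalg.eigvalsh(A)) >= -tol)

assert is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]]))      # eigenvalues 1, 3
assert not is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]]))  # eigenvalues -1, 3
assert is_positive_semidefinite(np.array([[1.0, 1.0], [1.0, 1.0]]))  # eigenvalues 0, 2
```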
Theorem. Let $f\colon \mathbb{R}^n \to \mathbb{R}$ be of class $C^2$.
(a) $f$ is convex on $\mathbb{R}^n$ if and only if $\nabla^2 f(x) \ge 0$ for all $x$ in $\mathbb{R}^n$;
(b) If $\nabla^2 f(x) > 0$ for all $x$ in $\mathbb{R}^n$, then $f$ is strictly convex on $\mathbb{R}^n$.

Proof. (b) Pick any two distinct points $x$ and $y$ in $\mathbb{R}^n$. Taylor's theorem says that there is some point $\xi$ on the line segment joining $x$ to $y$ for which
$$f(y) = f(x) + \nabla f(x)(y - x) + \tfrac12 (y - x)^T \nabla^2 f(\xi)(y - x) > f(x) + \nabla f(x)(y - x).$$
The inequality here holds because $\nabla^2 f(\xi) > 0$ by assumption. Since $x$ and $y$ in $X$ are arbitrary, this establishes the hypothesis of the previous Theorem (part (b)): the strict convexity of $f$ follows.

(a)($\Leftarrow$) Just rewrite the proof of (b) with $\ge$ in place of $>$.

($\Rightarrow$) Let us prove first that the operator $\nabla f$ is monotone, i.e., that
$$(\nabla f(y) - \nabla f(x))(y - x) \ge 0 \qquad \forall x, y \in \mathbb{R}^n. \tag{$\dagger$}$$
To see this, just write down the subgradient inequality twice and add:
$$f(y) - f(x) \ge \nabla f(x)(y - x),$$
$$f(x) - f(y) \ge \nabla f(y)(x - y).$$
Rearranging the sum produces ($\dagger$).

Now fix any $x$ and $v$ in $\mathbb{R}^n$, and put $y = x + tv$ into ($\dagger$), assuming $t > 0$. It follows that
$$(\nabla f(x + tv) - \nabla f(x))(v) \ge 0 \qquad \forall t > 0.$$
Take $g(t) := \nabla f(x + tv)(v)$: then the previous inequality implies, upon division by $t > 0$, that
$$\frac{g(t) - g(0)}{t} \ge 0 \qquad \forall t > 0.$$
Now $f$ is $C^2$, so $g$ is $C^1$: hence the limit of the left-hand side as $t \to 0^+$ exists and equals $g'(0)$. We deduce that $g'(0) \ge 0$, i.e., $v^T \nabla^2 f(x)\, v \ge 0$. Since $v \in \mathbb{R}^n$ was arbitrary, this shows that $\nabla^2 f(x) \ge 0$, as required. ////

The second-derivative test, together with the simple convexity-preserving operations mentioned above, is an easy way to confirm (or refute) the convexity of a given function. For example, when $X = \mathbb{R}^n$, the quadratic function
$$f(x) := x^T H x + gx + c,$$
built around a symmetric $n\times n$ matrix $H$, a row vector $g$, and a constant $c$, will be convex if and only if $H \ge 0$, and strictly convex if and only if $H > 0$.
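The second-derivative test is easy to apply in code by sampling the Hessian. A small sketch (the function $f(x,y) = x^4 + y^2$ is my illustrative choice): its Hessian $\operatorname{diag}(12x^2,\,2)$ is positive semidefinite everywhere, so $f$ is convex by part (a); note that the Hessian fails to be positive definite on the line $x = 0$ even though $f$ happens to be strictly convex, so part (b) is sufficient but not necessary.

```python
import numpy as np

# Second-derivative test applied to f(x, y) = x^4 + y^2.
# Hessian = diag(12 x^2, 2) >= 0 everywhere, hence f is convex.
def hessian(x, y):
    return np.array([[12.0 * x**2, 0.0], [0.0, 2.0]])

rng = np.random.default_rng(3)
for _ in range(200):
    x, y = rng.uniform(-2.0, 2.0, size=2)
    # smallest Hessian eigenvalue is nonnegative at every sampled point
    assert np.min(np.linalg.eigvalsh(hessian(x, y))) >= -1e-12
```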
Exercises. (1) Consider the $2\times 2$ matrix $H = \begin{bmatrix} a & b \\ b & d \end{bmatrix}$. Prove that
$$H > 0 \iff \bigl[\, a > 0,\ d > 0,\ \text{and } ad - b^2 > 0 \,\bigr]. \tag{B.2}$$
Then show that (B.2) remains true if all strict inequalities ("$>$") are replaced by nonstrict ones ("$\ge$").

(2) Consider the $2n\times 2n$ matrix $H = \begin{bmatrix} A & B \\ B^T & D \end{bmatrix}$, in which $A = A^T$, $B$, and $D = D^T$ are $n\times n$ matrices. Show that
$$H > 0 \iff \bigl[\, A > 0,\ D > 0,\ \text{and } A - BD^{-1}B^T > 0 \,\bigr].$$
Here the matrix inequalities are interpreted as in (B.1). Hints: (i) $D > 0$ implies the existence of $D^{-1}$; (ii) $D = D^T$ implies $(D^{-1})^T = (D^T)^{-1}$; and (iii) if $D$ is invertible, then
$$H = \begin{bmatrix} I & BD^{-1} \\ 0 & I \end{bmatrix} \begin{bmatrix} A - BD^{-1}B^T & 0 \\ 0 & D \end{bmatrix} \begin{bmatrix} I & 0 \\ D^{-1}B^T & I \end{bmatrix}.$$

C. Convexity as a Sufficient Condition

General Theory

Proposition. Let $S$ be an affine subset of a real vector space $X$. Suppose that $f\colon S \to \mathbb{R}$ is convex on $S$. If $\bar x$ is a point of $S$ where $f$ is Gâteaux differentiable, then the following statements are equivalent:
(a) $\bar x$ minimizes $f$ over $S$, i.e., $f(\bar x) \le f(x)$ for all $x$ in $S$;
(b) $\bar x$ is a critical point for $f$ relative to $S$, i.e., $\nabla f(\bar x)(h) = 0$ for all $h$ in the subspace $V = S - \bar x$.

Proof. (a $\Rightarrow$ b) Proved much earlier, without using convexity. (b $\Rightarrow$ a) Note that for any point $x$ of $S$, the difference $h = x - \bar x$ lies in the subspace $V$ of the statement. Hence the subgradient inequality, together with condition (b), gives
$$f(x) \ge f(\bar x) + \nabla f(\bar x)(x - \bar x) = f(\bar x) + 0 \qquad \forall x \in S. \quad////$$

Corollary. If $f\colon S \to \mathbb{R}$ is strictly convex on some affine subset $S$ of a real vector space $X$, and $\bar x$ is a critical point for $f$ relative to $S$, then $\bar x$ minimizes $f$ over $S$ uniquely, i.e., $f(\bar x) < f(x)$ for all $x \in S$, $x \ne \bar x$.

D. Separation Theorem

Separation. Let $U$, $V$ be convex subsets of $\mathbb{R}^n$. If the intersection $U \cap V$ contains at most one point, then there exists $\alpha \ne 0$ in $\mathbb{R}^n$ such that
$$\alpha^T u \le \alpha^T v \qquad \forall u \in U,\ v \in V.$$
More generally, the existence of $\alpha \ne 0$ is assured whenever $(\operatorname{int} U) \cap (\operatorname{int} V) = \emptyset$.
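Exercise (2) above can be spot-checked numerically. The sketch below (deterministic test matrices of my own choosing; evidence, not a proof) builds a positive definite $H$, extracts the blocks $A$, $B$, $D$, and confirms that $A$, $D$, and the Schur complement $A - BD^{-1}B^T$ are all positive definite, while a shifted, negative definite copy fails the test.

```python
import numpy as np

# Spot-check of Exercise (2): H > 0 goes together with A > 0, D > 0, and
# A - B D^{-1} B^T > 0 (the Schur complement condition).
def pd(M):
    return bool(np.min(np.linalg.eigvalsh(M)) > 0)

n = 2
rng = np.random.default_rng(4)
X = rng.standard_normal((2 * n, 2 * n))
H = X @ X.T + 0.1 * np.eye(2 * n)            # H > 0 by construction
A, B, D = H[:n, :n], H[:n, n:], H[n:, n:]
S = A - B @ np.linalg.inv(D) @ B.T           # Schur complement of D in H
assert pd(H) and pd(A) and pd(D) and pd(S)

# Shifting H down past its largest eigenvalue makes it negative definite,
# and then the principal block D (a sub-block of H2) also fails.
H2 = H - 2.0 * np.max(np.linalg.eigvalsh(H)) * np.eye(2 * n)
D2 = H2[n:, n:]
assert not pd(H2) and not pd(D2)
```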
E. Inequality-Constrained Convex Programming

Problem Statement. Differentiable convex functions $F, G\colon \mathbb{R}^n \to \mathbb{R}$ are given. We want to
$$\min\,\{F(x) : G(x) \le 0\}.$$

Sufficient Conditions. Given $\bar x \in \mathbb{R}^n$, the following conditions guarantee that $\bar x$ solves the problem above:
$$\text{There exists } \lambda \ge 0 \text{ such that (i) } \lambda G(\bar x) = 0, \text{ and (ii) the function } x \mapsto F(x) + \lambda G(x) \text{ has gradient } 0 \text{ at } \bar x. \tag{$\ast$}$$
Note that the function described above is convex, so having gradient $0$ at $\bar x$ is the same as having a global min at $\bar x$.

Proof. Note that since $\lambda \ge 0$ and both $F$, $G$ are convex, every critical point for $F + \lambda G$ is a global minimizer:
$$F(\bar x) + \lambda G(\bar x) \le F(x) + \lambda G(x) \qquad \forall x \in \mathbb{R}^n.$$
Now $\lambda G(\bar x) = 0$ by assumption, and for any $x \in \mathbb{R}^n$ satisfying the constraint $G(x) \le 0$,
$$F(\bar x) \le F(x) + \lambda G(x) \le F(x).$$
This completes the proof. ////

Necessary Conditions. Suppose $\bar x \in \mathbb{R}^n$ solves the problem above. Then one of these two statements holds:
(i) $G(\bar x) = 0$ and $\nabla G(\bar x) = 0$. [ABNORMAL: the constraint is degenerate.]
(ii) Conclusion ($\ast$) above holds for $\bar x$. [NORMAL]

Proof. If $G(\bar x) < 0$, then $G(x) < 0$ for all $x$ near $\bar x$, so $\bar x$ is an unconstrained local minimum for $F$. Hence $\nabla F(\bar x) = 0$. Case (ii) holds, because $\lambda = 0$ works in ($\ast$).

So assume that $G(\bar x) = 0$. [Note that conclusion (i) in ($\ast$) is then automatic.] Define
$$U = \{(F(\bar x) - \beta,\ G(\bar x) - \gamma) : \beta > 0,\ \gamma \ge 0\}, \qquad V = \{(F(x) + r,\ G(x) + s) : x \in \mathbb{R}^n,\ r, s \ge 0\}.$$
These are convex sets in $\mathbb{R}^2$. [Practice: If $v_0, v_1 \in V$, say $v_i = (F(x_i) + r_i,\ G(x_i) + s_i)$, show $v_t = (1-t)v_0 + t v_1 \in V$ using the convexity inequality on $F$, $G$.] Note that $U \cap V = \emptyset$: if $(p, q) \in U \cap V$, then there must be some specific $x \in \mathbb{R}^n$ and $\beta > 0$, $\gamma \ge 0$, $r \ge 0$, $s \ge 0$ such that
$$p = F(\bar x) - \beta = F(x) + r, \quad\text{i.e.,}\quad F(x) = F(\bar x) - \beta - r < F(\bar x),$$
$$q = G(\bar x) - \gamma = G(x) + s, \quad\text{i.e.,}\quad G(x) = G(\bar x) - \gamma - s \le 0.$$
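The sufficient conditions are concrete enough to verify on a worked example. A minimal sketch, using the example that appears a little later in these notes: minimize $F(x,y) = (x-2)^2 + (y-2)^2$ subject to $G(x,y) = x^2 + y^2 - 6 \le 0$. By symmetry the constrained minimizer is $(\sqrt 3, \sqrt 3)$, and we check that some $\lambda \ge 0$ satisfies conditions (i) and (ii).

```python
import math

# Verify conditions (i)-(ii) for F(x,y) = (x-2)^2 + (y-2)^2 subject to
# G(x,y) = x^2 + y^2 - 6 <= 0, at the symmetric boundary point (sqrt3, sqrt3).
x = y = math.sqrt(3.0)
G = x**2 + y**2 - 6.0                      # = 0: the constraint is active
gradF = (2.0 * (x - 2.0), 2.0 * (y - 2.0))
gradG = (2.0 * x, 2.0 * y)
lam = -gradF[0] / gradG[0]                 # stationarity in the first coordinate

assert lam >= 0.0                                     # multiplier sign
assert abs(lam * G) < 1e-12                           # (i)  complementary slackness
assert abs(gradF[0] + lam * gradG[0]) < 1e-12         # (ii) gradient of F + lam*G
assert abs(gradF[1] + lam * gradG[1]) < 1e-12         #      vanishes at the point
```

Here $\lambda = 2/\sqrt 3 - 1 \approx 0.155$, so the normal case of the necessary conditions holds as well.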
The existence of such an $x$ contradicts the optimality of $\bar x$. So pick $\alpha = (\alpha_0, \alpha_1) \ne (0, 0)$ such that $\alpha^T u \le \alpha^T v$ for all $u \in U$, $v \in V$. Expand $\alpha^T(v - u) \ge 0$:
$$0 \le \alpha_0\left[F(x) + r - F(\bar x) + \beta\right] + \alpha_1\left[G(x) + s - G(\bar x) + \gamma\right] \qquad \forall x \in \mathbb{R}^n,\ \beta > 0,\ \gamma, r, s \ge 0. \tag{$\ast\ast$}$$
Pick $x = \bar x$, $r = 0 = s$: this gives
$$0 \le \alpha_0 \beta + \alpha_1 \gamma \qquad \forall \beta > 0,\ \gamma \ge 0,$$
which forces $\alpha_0 \ge 0$ and $\alpha_1 \ge 0$.

If $\alpha_0 = 0$, then $\alpha_1 > 0$ (since the vector $\alpha \ne 0$). Take $s = 0 = \gamma$ in ($\ast\ast$) to get
$$0 \le G(x) - G(\bar x) \qquad \forall x \in \mathbb{R}^n.$$
Thus $\bar x$ minimizes $G$ over $\mathbb{R}^n$, so $\nabla G(\bar x) = 0$. This is the abnormal case.

Assume $\alpha_0 > 0$. Define $\lambda = \alpha_1/\alpha_0 \ge 0$. Taking $\gamma = 0 = r = s$ in ($\ast\ast$) and rearranging gives
$$F(\bar x) + \lambda G(\bar x) \le F(x) + \lambda G(x) + \beta \qquad \forall \beta > 0.$$
This implies the same inequality even when $\beta = 0$, and shows that $x \mapsto F(x) + \lambda G(x)$ has a global minimum at $\bar x$. Consequently its gradient is $0$ there: that is condition (ii) of the sufficient conditions above. ////

To illustrate this proof, here is a sketch of the set $V$ for the case of $F(x, y) = (x-2)^2 + (y-2)^2$ and $G(x, y) = x^2 + y^2 - 6$. The pairs $(F(x), G(x))$ are shaded in gray; points included only because of the additional $(r, s)$ in the definition are shown in black.

[Figure: sketch of the set $V$ for $F(x,y) = (x-2)^2 + (y-2)^2$, $G(x,y) = x^2 + y^2 - 6$.]

Sensitivity. You can see from the picture that, in the presence of sufficient smoothness, the multiplier $\lambda$ has the following interpretation:
$$\lambda = -V'(0), \qquad\text{where}\qquad V(z) = \min\,\{F(x) : G(x) = z\}.$$
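The sensitivity interpretation can be checked on the sketched example. A small sketch (the closed form for $V$ below is my own computation for this example, not from the notes): when $G(x,y) = z$, the minimizer of $F$ lies at distance $r = \sqrt{6+z}$ from the origin along the diagonal, so $V(z) = 2\,(r/\sqrt 2 - 2)^2$, and a finite difference estimate of $V'(0)$ should match $-\lambda$.

```python
import math

# Check lambda = -V'(0) for F = (x-2)^2 + (y-2)^2, G = x^2 + y^2 - 6.
# For G(x) = z the minimizer sits on the diagonal at radius r = sqrt(6 + z),
# giving the value function V(z) = 2 (r/sqrt(2) - 2)^2.
def V(z):
    r = math.sqrt(6.0 + z)
    return 2.0 * (r / math.sqrt(2.0) - 2.0) ** 2

lam = 2.0 / math.sqrt(3.0) - 1.0        # multiplier found for this example
h = 1e-6
Vprime0 = (V(h) - V(-h)) / (2.0 * h)    # central-difference estimate of V'(0)
assert abs(lam + Vprime0) < 1e-6        # lambda = -V'(0)
```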
Here $V$ is the value function associated with our problem. Another proof:
$$V(G(x)) \le F(x)\ \ \forall x, \quad V(G(\bar x)) = F(\bar x) \ \Longrightarrow\ x \mapsto F(x) - V(G(x)) \text{ has a min at } \bar x \ \Longrightarrow\ \nabla F(\bar x) - V'(0)\,\nabla G(\bar x) = 0.$$

The Nonconvex Case. Similar results hold when $F$, $G$ are nonconvex, but the proofs are much harder. (All results are expressed in terms of critical points rather than global minima.) Detailed discussion postponed.

F. Application to the Trust-Region Subproblem

Recall our problem, with $\Delta > 0$ given in advance:
$$\text{minimize } M(p) := \tfrac12 p^T H p + gp + c \qquad \text{subject to } \|p\| \le \Delta.$$
This has the form above, with a quadratic function $M$ and a constraint $G(p) \le 0$, where $G(p) = \tfrac12\left(\|p\|^2 - \Delta^2\right)$. Note
$$\nabla M(p) = g + p^T H, \qquad \nabla G(p) = p^T.$$

Theorem. A vector $\bar p \in B[0; \Delta]$ solves the problem above if and only if there exists $\lambda \ge 0$ satisfying all of
(i) $\lambda\left(\|\bar p\| - \Delta\right) = 0$,
(ii) $(H + \lambda I)\bar p = -g^T$,
(iii) $H + \lambda I \ge 0$.

Proof. ($\Leftarrow$) Suppose some $\bar p$ obeys (i)-(iii). Then (iii) implies that the quadratic function $M + \lambda G$ is convex, and (i)-(ii) show that $\bar p$ is a critical point for this function with $\lambda G(\bar p) = 0$. The sufficient conditions above imply that $\bar p$ gives the constrained minimum.

($\Rightarrow$) Now suppose $\bar p \in B[0; \Delta]$ gives the minimum. If $\|\bar p\| < \Delta$, then conditions (i)-(iii) hold for $\lambda = 0$. So assume $\|\bar p\| = \Delta$. Apply the Necessary Conditions above. The abnormal case is impossible: it requires both $G(\bar p) = 0$ (so $\|\bar p\| = \Delta > 0$) and $\nabla G(\bar p) = 0$ (so $\bar p = 0$), and these are incompatible. Hence there must be some $\lambda \ge 0$ such that $\lambda G(\bar p) = 0$ and
$$0 = \nabla(M + \lambda G)(\bar p) = g + \bar p^T H + \lambda \bar p^T, \quad\text{i.e.,}\quad (H + \lambda I)\bar p = -g^T.$$
This proves (i)-(ii). To get (iii), use (ii) to help compare the minimizer $\bar p$ with general
elements $p$ obeying $\|p\| = \Delta$:
$$\begin{aligned}
0 \le M(p) - M(\bar p) &= \tfrac12 p^T H p - \tfrac12 \bar p^T H \bar p + g(p - \bar p) \\
&= \tfrac12 p^T H p - \tfrac12 \bar p^T H \bar p - \bar p^T (H + \lambda I)(p - \bar p) \\
&= \tfrac12\left(p^T (H + \lambda I) p - \lambda \|p\|^2\right) - \tfrac12\left(\bar p^T (H + \lambda I)\bar p - \lambda \|\bar p\|^2\right) - \bar p^T (H + \lambda I)(p - \bar p) \\
&= \tfrac12\left[p^T (H + \lambda I) p - 2\,\bar p^T (H + \lambda I) p + \bar p^T (H + \lambda I)\bar p\right] + \tfrac{\lambda}{2}\left[\|\bar p\|^2 - \|p\|^2\right] \\
&= \tfrac12 (p - \bar p)^T (H + \lambda I)(p - \bar p) + 0.
\end{aligned}$$
The last equation holds because $\|p\| = \Delta = \|\bar p\|$. Since the resulting inequality holds for all $p$ with $\|p\| = \Delta$, we deduce that $H + \lambda I \ge 0$. ////

So to calculate the exact solution $\bar p$ for the subproblem, first try $\lambda = 0$: if $H \ge 0$ already, and the solution $p$ of $Hp = -g^T$ obeys $\|p\| \le \Delta$, then that is the answer. Otherwise, we define
$$p(\lambda) = -(H + \lambda I)^{-1} g^T,$$
noting that $H + \lambda I > 0$ for all $\lambda > 0$ sufficiently large, and then reduce $\lambda$ until $\|p(\lambda)\| = \Delta$. This is a 1D root-finding problem in the variable $\lambda$.

Exact Search for $\lambda$. This works even when $H$ is not positive. The derivation uses the orthogonal diagonalization of $H$:
$$H = Q\Lambda Q^T, \qquad \Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n), \quad \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n.$$
Each column of $Q$ is a unit vector, and they are all orthogonal: $QQ^T = I = Q^T Q$. Hence
$$H + \lambda I = Q\Lambda Q^T + \lambda QQ^T = Q(\Lambda + \lambda I)Q^T,$$
$$p(\lambda) = -(H + \lambda I)^{-1} g^T = -Q(\Lambda + \lambda I)^{-1} Q^T g^T = -\sum_{j=1}^n \frac{g q_j}{\lambda_j + \lambda}\, q_j.$$
Calculating $p(\lambda)^T p(\lambda)$, either as a double sum or as the expansion of
$$\|p(\lambda)\|^2 = g Q (\Lambda + \lambda I)^{-1} Q^T Q (\Lambda + \lambda I)^{-1} Q^T g^T = g Q\,\operatorname{diag}\!\left((\lambda_j + \lambda)^{-2}\right) Q^T g^T,$$
leads to
$$\|p(\lambda)\|^2 = \sum_{j=1}^n \frac{(g q_j)^2}{(\lambda + \lambda_j)^2}.$$
The exact solution to the subproblem corresponds to the unique $\lambda \ge -\lambda_1$ such that $\|p(\lambda)\|^2 = \Delta^2$. How do we find this $\lambda$? Suppose all eigenvalues are distinct, and $g q_1 \ne 0$. Then the right side converges to $0$ as $\lambda \to \infty$ and to $+\infty$ as $\lambda \to (-\lambda_1)^+$. (Sketch.) The goal is simply to pick $\lambda \in [-\lambda_1, +\infty)$ to arrange $\|p(\lambda)\| = \Delta$, recalling
$$p(\lambda) = -\sum_{j=1}^n \frac{g q_j}{\lambda + \lambda_j}\, q_j.$$
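The eigendecomposition formula for $\|p(\lambda)\|^2$ gives a direct way to solve the subproblem. A small sketch (the data $H$, $g$, $\Delta$ are my illustrative choices, not from the notes; an indefinite $H$ is used so that $\lambda > 0$ is forced): locate $\lambda$ by bisection on the equation $\|p(\lambda)\|^2 = \Delta^2$ over $(-\lambda_1, \infty)$, then confirm conditions (i)-(iii) of the Theorem.

```python
import numpy as np

# Solve the subproblem for an indefinite H by bisection on
# ||p(lambda)||^2 = sum_j (g q_j)^2 / (lambda + lambda_j)^2 = Delta^2,
# then verify conditions (i)-(iii) of the Theorem.
H = np.diag([-1.0, 2.0])             # eigenvalues lambda_1 = -1, lambda_2 = 2
g = np.array([1.0, 1.0])             # row vector g
Delta = 1.0
evals, Q = np.linalg.eigh(H)         # ascending eigenvalues, orthonormal columns
gq = g @ Q                           # the numbers g q_j

def norm_p_sq(lam):
    return float(np.sum(gq**2 / (lam + evals) ** 2))

lo, hi = -evals[0] + 1e-9, 100.0     # search on (-lambda_1, infinity)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if norm_p_sq(mid) > Delta**2 else (lo, mid)
lam = 0.5 * (lo + hi)
p = np.linalg.solve(H + lam * np.eye(2), -g)

assert lam > 0 and abs(np.linalg.norm(p) - Delta) < 1e-6      # (i), boundary case
assert np.allclose((H + lam * np.eye(2)) @ p, -g)             # (ii)
assert np.min(np.linalg.eigvalsh(H + lam * np.eye(2))) >= 0   # (iii)
```

Bisection is robust here because $\|p(\lambda)\|$ decreases monotonically on $(-\lambda_1, \infty)$; the Newton scheme discussed next converges much faster.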
Now $\lambda \mapsto \|p(\lambda)\|$ is very steep near $-\lambda_1$, so Newton's method may work badly on this problem. A nice trick is to turn everything upside down: apply Newton's method for finding a zero of
$$F(\lambda) = \frac{1}{\|p(\lambda)\|} - \frac{1}{\Delta}: \qquad \lambda^{(l+1)} = \lambda^{(l)} - \frac{F(\lambda^{(l)})}{F'(\lambda^{(l)})}.$$
Using our earlier expression for $\|p(\lambda)\|^2$, we get
$$F'(\lambda) = -\frac{1}{\|p(\lambda)\|^2}\,\frac{d}{d\lambda}\|p(\lambda)\| = -\frac{1}{2\|p(\lambda)\|^3}\,\frac{d}{d\lambda}\|p(\lambda)\|^2 = -\frac{1}{2\|p(\lambda)\|^3}\sum_{j=1}^n \left[-2\,\frac{(g q_j)^2}{(\lambda + \lambda_j)^3}\right] = \frac{1}{\|p(\lambda)\|^3}\sum_{j=1}^n \frac{(g q_j)^2}{(\lambda + \lambda_j)^3}.$$

Final result (Nocedal and Wright, page 81): Start with $\lambda^{(0)}$ large enough that $H + \lambda^{(0)} I > 0$. Repeat for $l = 0, 1, 2, \ldots$: Write $\lambda = \lambda^{(l)}$ for simplicity. Find an upper triangular $R$ such that $H + \lambda I = R^T R$; this is the Cholesky factorization (Matlab chol). Solve for $p$ using $R^T R p = -g^T$. If $\|p\|$ is sufficiently near $\Delta$, use this vector as an approximate solution to the subproblem. Otherwise, continue: solve for $z$ using $R^T z = p$, and use these vectors to build
$$\lambda^{(l+1)} = \lambda + \left(\frac{\|p\|}{\|z\|}\right)^2 \left(\frac{\|p\| - \Delta}{\Delta}\right).$$
Go around again.

Matlab help: CHOL Cholesky factorization.
CHOL(X) uses only the diagonal and upper triangle of X. The lower triangle is assumed to be the (complex conjugate) transpose of the upper. If X is positive definite, then R = CHOL(X) produces an upper triangular R so that R'*R = X. If X is not positive definite, an error message is printed.

Home practice: Reconcile the Newton's-method ideas above with the iteration scheme above.

The Hard Case. If the gradient $g$ is orthogonal to the entire eigenspace corresponding to $\lambda_1$, then the behaviour of $\|p(\lambda)\|$ is not as described above. The correct value of $\lambda$ to use in this case is $\lambda = -\lambda_1$, and the corresponding $\bar p$ comes from adding an eigenspace component whose length is chosen appropriately to arrange $\|\bar p\| = \Delta$. Check the book for these details. (Or, for a rough first cut, use the Cauchy point and move rapidly to the next step, hoping the Hard Case is rather rare.)
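The $\lambda$-iteration described above can be sketched as follows (a minimal, unsafeguarded version, assuming the solution lies on the boundary and we are not in the Hard Case; the data $H$, $g$, $\Delta$, $\lambda^{(0)}$ are my illustrative choices). Note that NumPy's Cholesky routine returns a lower triangular $L$ with $H + \lambda I = LL^T$, so $L$ plays the role of $R^T$.

```python
import numpy as np

# Sketch of the Cholesky-based Newton iteration for the trust-region
# subproblem (cf. Nocedal and Wright, p. 81), without Hard-Case safeguards.
def trust_region_lambda(H, g, Delta, lam0, iters=30):
    n = len(g)
    lam = lam0                                   # must satisfy H + lam0*I > 0
    for _ in range(iters):
        L = np.linalg.cholesky(H + lam * np.eye(n))        # H + lam*I = L L^T
        p = np.linalg.solve(L.T, np.linalg.solve(L, -g))   # (H + lam*I) p = -g^T
        pn = np.linalg.norm(p)
        if abs(pn - Delta) < 1e-12 * Delta:
            break
        z = np.linalg.solve(L, p)                # the step "R^T z = p"
        zn = np.linalg.norm(z)
        lam = lam + (pn / zn) ** 2 * (pn - Delta) / Delta
    return lam, p

# Illustrative data: an indefinite H forces lam > 0 at the solution.
H = np.diag([-1.0, 2.0])
g = np.array([1.0, 1.0])
lam, p = trust_region_lambda(H, g, Delta=1.0, lam0=5.0)
assert abs(np.linalg.norm(p) - 1.0) < 1e-8                 # ||p|| = Delta
assert np.min(np.linalg.eigvalsh(H + lam * np.eye(2))) >= 0  # H + lam*I >= 0
```

A production version would safeguard $\lambda$ against leaving the interval $(-\lambda_1, \infty)$ and detect the Hard Case explicitly.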
Chapter 13 Convex and Concave Josef Leydold Mathematical Methods WS 2018/19 13 Convex and Concave 1 / 44 Monotone Function Function f is called monotonically increasing, if x 1 x 2 f (x 1 ) f (x 2 ) It
More informationAPPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.
APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product
More informationAnalysis II - few selective results
Analysis II - few selective results Michael Ruzhansky December 15, 2008 1 Analysis on the real line 1.1 Chapter: Functions continuous on a closed interval 1.1.1 Intermediate Value Theorem (IVT) Theorem
More informationIn English, this means that if we travel on a straight line between any two points in C, then we never leave C.
Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from
More informationThe Rocket Car. UBC Math 403 Lecture Notes by Philip D. Loewen
The Rocket Car UBC Math 403 Lecture Notes by Philip D. Loewen We consider this simple system with control constraints: ẋ 1 = x, ẋ = u, u [ 1, 1], Dynamics. Consider the system evolution on a time interval
More informationM311 Functions of Several Variables. CHAPTER 1. Continuity CHAPTER 2. The Bolzano Weierstrass Theorem and Compact Sets CHAPTER 3.
M311 Functions of Several Variables 2006 CHAPTER 1. Continuity CHAPTER 2. The Bolzano Weierstrass Theorem and Compact Sets CHAPTER 3. Differentiability 1 2 CHAPTER 1. Continuity If (a, b) R 2 then we write
More informationAM 205: lecture 18. Last time: optimization methods Today: conditions for optimality
AM 205: lecture 18 Last time: optimization methods Today: conditions for optimality Existence of Global Minimum For example: f (x, y) = x 2 + y 2 is coercive on R 2 (global min. at (0, 0)) f (x) = x 3
More information3. Convex functions. basic properties and examples. operations that preserve convexity. the conjugate function. quasiconvex functions
3. Convex functions Convex Optimization Boyd & Vandenberghe basic properties and examples operations that preserve convexity the conjugate function quasiconvex functions log-concave and log-convex functions
More informationStructural and Multidisciplinary Optimization. P. Duysinx and P. Tossings
Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be
More informationSOLUTIONS TO ADDITIONAL EXERCISES FOR II.1 AND II.2
SOLUTIONS TO ADDITIONAL EXERCISES FOR II.1 AND II.2 Here are the solutions to the additional exercises in betsepexercises.pdf. B1. Let y and z be distinct points of L; we claim that x, y and z are not
More informationLecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems
University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 45 Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems Peter J. Hammond latest revision 2017 September
More informationg 2 (x) (1/3)M 1 = (1/3)(2/3)M.
COMPACTNESS If C R n is closed and bounded, then by B-W it is sequentially compact: any sequence of points in C has a subsequence converging to a point in C Conversely, any sequentially compact C R n is
More informationLecture 1: January 12
10-725/36-725: Convex Optimization Fall 2015 Lecturer: Ryan Tibshirani Lecture 1: January 12 Scribes: Seo-Jin Bang, Prabhat KC, Josue Orellana 1.1 Review We begin by going through some examples and key
More informationTHEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.)
4 Vector fields Last updated: November 26, 2009. (Under construction.) 4.1 Tangent vectors as derivations After we have introduced topological notions, we can come back to analysis on manifolds. Let M
More informationA SET OF LECTURE NOTES ON CONVEX OPTIMIZATION WITH SOME APPLICATIONS TO PROBABILITY THEORY INCOMPLETE DRAFT. MAY 06
A SET OF LECTURE NOTES ON CONVEX OPTIMIZATION WITH SOME APPLICATIONS TO PROBABILITY THEORY INCOMPLETE DRAFT. MAY 06 CHRISTIAN LÉONARD Contents Preliminaries 1 1. Convexity without topology 1 2. Convexity
More informationIn view of (31), the second of these is equal to the identity I on E m, while this, in view of (30), implies that the first can be written
11.8 Inequality Constraints 341 Because by assumption x is a regular point and L x is positive definite on M, it follows that this matrix is nonsingular (see Exercise 11). Thus, by the Implicit Function
More informationA function(al) f is convex if dom f is a convex set, and. f(θx + (1 θ)y) < θf(x) + (1 θ)f(y) f(x) = x 3
Convex functions The domain dom f of a functional f : R N R is the subset of R N where f is well-defined. A function(al) f is convex if dom f is a convex set, and f(θx + (1 θ)y) θf(x) + (1 θ)f(y) for all
More informationReal Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi
Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.
More informationMath Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88
Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant
More informationConvex envelopes, cardinality constrained optimization and LASSO. An application in supervised learning: support vector machines (SVMs)
ORF 523 Lecture 8 Princeton University Instructor: A.A. Ahmadi Scribe: G. Hall Any typos should be emailed to a a a@princeton.edu. 1 Outline Convexity-preserving operations Convex envelopes, cardinality
More informationThis manuscript is for review purposes only.
1 2 3 4 5 6 7 8 9 10 11 12 THE USE OF QUADRATIC REGULARIZATION WITH A CUBIC DESCENT CONDITION FOR UNCONSTRAINED OPTIMIZATION E. G. BIRGIN AND J. M. MARTíNEZ Abstract. Cubic-regularization and trust-region
More informationMath Precalculus I University of Hawai i at Mānoa Spring
Math 135 - Precalculus I University of Hawai i at Mānoa Spring - 2013 Created for Math 135, Spring 2008 by Lukasz Grabarek and Michael Joyce Send comments and corrections to lukasz@math.hawaii.edu Contents
More information1 Newton s Method. Suppose we want to solve: x R. At x = x, f (x) can be approximated by:
Newton s Method Suppose we want to solve: (P:) min f (x) At x = x, f (x) can be approximated by: n x R. f (x) h(x) := f ( x)+ f ( x) T (x x)+ (x x) t H ( x)(x x), 2 which is the quadratic Taylor expansion
More information1 Overview. 2 A Characterization of Convex Functions. 2.1 First-order Taylor approximation. AM 221: Advanced Optimization Spring 2016
AM 221: Advanced Optimization Spring 2016 Prof. Yaron Singer Lecture 8 February 22nd 1 Overview In the previous lecture we saw characterizations of optimality in linear optimization, and we reviewed the
More informationMATH 205C: STATIONARY PHASE LEMMA
MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)
More informationEconomics 204 Fall 2011 Problem Set 1 Suggested Solutions
Economics 204 Fall 2011 Problem Set 1 Suggested Solutions 1. Suppose k is a positive integer. Use induction to prove the following two statements. (a) For all n N 0, the inequality (k 2 + n)! k 2n holds.
More informationConvex Optimization and Modeling
Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual
More informationUnconstrained Optimization
1 / 36 Unconstrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University February 2, 2015 2 / 36 3 / 36 4 / 36 5 / 36 1. preliminaries 1.1 local approximation
More informationNotes on uniform convergence
Notes on uniform convergence Erik Wahlén erik.wahlen@math.lu.se January 17, 2012 1 Numerical sequences We begin by recalling some properties of numerical sequences. By a numerical sequence we simply mean
More informationfy (X(g)) Y (f)x(g) gy (X(f)) Y (g)x(f)) = fx(y (g)) + gx(y (f)) fy (X(g)) gy (X(f))
1. Basic algebra of vector fields Let V be a finite dimensional vector space over R. Recall that V = {L : V R} is defined to be the set of all linear maps to R. V is isomorphic to V, but there is no canonical
More informationLocal strong convexity and local Lipschitz continuity of the gradient of convex functions
Local strong convexity and local Lipschitz continuity of the gradient of convex functions R. Goebel and R.T. Rockafellar May 23, 2007 Abstract. Given a pair of convex conjugate functions f and f, we investigate
More informationProblems in Linear Algebra and Representation Theory
Problems in Linear Algebra and Representation Theory (Most of these were provided by Victor Ginzburg) The problems appearing below have varying level of difficulty. They are not listed in any specific
More informationECE133A Applied Numerical Computing Additional Lecture Notes
Winter Quarter 2018 ECE133A Applied Numerical Computing Additional Lecture Notes L. Vandenberghe ii Contents 1 LU factorization 1 1.1 Definition................................. 1 1.2 Nonsingular sets
More informationSummary Notes on Maximization
Division of the Humanities and Social Sciences Summary Notes on Maximization KC Border Fall 2005 1 Classical Lagrange Multiplier Theorem 1 Definition A point x is a constrained local maximizer of f subject
More informationMathematical Foundations -1- Convexity and quasi-convexity. Convex set Convex function Concave function Quasi-concave function Supporting hyperplane
Mathematical Foundations -1- Convexity and quasi-convexity Convex set Convex function Concave function Quasi-concave function Supporting hyperplane Mathematical Foundations -2- Convexity and quasi-convexity
More informationExtreme Abridgment of Boyd and Vandenberghe s Convex Optimization
Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The
More informationMAT 473 Intermediate Real Analysis II
MAT 473 Intermediate Real Analysis II John Quigg Spring 2009 revised February 5, 2009 Derivatives Here our purpose is to give a rigorous foundation of the principles of differentiation in R n. Much of
More informationChapter 3a Topics in differentiation. Problems in differentiation. Problems in differentiation. LC Abueg: mathematical economics
Chapter 3a Topics in differentiation Lectures in Mathematical Economics L Cagandahan Abueg De La Salle University School of Economics Problems in differentiation Problems in differentiation Problem 1.
More informationRichard S. Palais Department of Mathematics Brandeis University Waltham, MA The Magic of Iteration
Richard S. Palais Department of Mathematics Brandeis University Waltham, MA 02254-9110 The Magic of Iteration Section 1 The subject of these notes is one of my favorites in all mathematics, and it s not
More informationLecture 7: Positive Semidefinite Matrices
Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.
More informationA Proximal Method for Identifying Active Manifolds
A Proximal Method for Identifying Active Manifolds W.L. Hare April 18, 2006 Abstract The minimization of an objective function over a constraint set can often be simplified if the active manifold of the
More informationConstrained optimization. Unconstrained optimization. One-dimensional. Multi-dimensional. Newton with equality constraints. Active-set method.
Optimization Unconstrained optimization One-dimensional Multi-dimensional Newton s method Basic Newton Gauss- Newton Quasi- Newton Descent methods Gradient descent Conjugate gradient Constrained optimization
More informationIntroduction to Real Analysis Alternative Chapter 1
Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces
More informationB553 Lecture 5: Matrix Algebra Review
B553 Lecture 5: Matrix Algebra Review Kris Hauser January 19, 2012 We have seen in prior lectures how vectors represent points in R n and gradients of functions. Matrices represent linear transformations
More informationBasics of Calculus and Algebra
Monika Department of Economics ISCTE-IUL September 2012 Basics of linear algebra Real valued Functions Differential Calculus Integral Calculus Optimization Introduction I A matrix is a rectangular array
More informationOPER 627: Nonlinear Optimization Lecture 9: Trust-region methods
OPER 627: Nonlinear Optimization Lecture 9: Trust-region methods Department of Statistical Sciences and Operations Research Virginia Commonwealth University Sept 25, 2013 (Lecture 9) Nonlinear Optimization
More informationFALL 2018 MATH 4211/6211 Optimization Homework 1
FALL 2018 MATH 4211/6211 Optimization Homework 1 This homework assignment is open to textbook, reference books, slides, and online resources, excluding any direct solution to the problem (such as solution
More informationExamination paper for TMA4180 Optimization I
Department of Mathematical Sciences Examination paper for TMA4180 Optimization I Academic contact during examination: Phone: Examination date: 26th May 2016 Examination time (from to): 09:00 13:00 Permitted
More informationConnectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ).
Connectedness 1 Motivation Connectedness is the sort of topological property that students love. Its definition is intuitive and easy to understand, and it is a powerful tool in proofs of well-known results.
More informationFIRST-ORDER SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS III: Autonomous Planar Systems David Levermore Department of Mathematics University of Maryland
FIRST-ORDER SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS III: Autonomous Planar Systems David Levermore Department of Mathematics University of Maryland 4 May 2012 Because the presentation of this material
More informationTMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM
TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM H. E. Krogstad, IMF, Spring 2012 Karush-Kuhn-Tucker (KKT) Theorem is the most central theorem in constrained optimization, and since the proof is scattered
More information