MMA and GCMMA – two methods for nonlinear optimization


Krister Svanberg
Optimization and Systems Theory, KTH, Stockholm, Sweden.

This note describes the algorithms used in the author's 2007 implementations of MMA and GCMMA in Matlab. The first versions of these methods were published in [1] and [2].

1. Considered optimization problem

Throughout this note, optimization problems of the following form are considered, where the variables are $x = (x_1,\ldots,x_n)^T \in \mathbb{R}^n$, $y = (y_1,\ldots,y_m)^T \in \mathbb{R}^m$, and $z \in \mathbb{R}$.

$$\begin{aligned}
\text{minimize } & f_0(x) + a_0 z + \sum_{i=1}^{m} \Big( c_i y_i + \tfrac{1}{2} d_i y_i^2 \Big) \\
\text{subject to } & f_i(x) - a_i z - y_i \le 0, \quad i = 1,\ldots,m \\
& x \in X, \quad y \ge 0, \quad z \ge 0.
\end{aligned} \tag{1.1}$$

Here, $X = \{x \in \mathbb{R}^n \mid x_j^{\min} \le x_j \le x_j^{\max},\ j = 1,\ldots,n\}$, where $x_j^{\min}$ and $x_j^{\max}$ are given real numbers which satisfy $x_j^{\min} < x_j^{\max}$ for all $j$; $f_0, f_1, \ldots, f_m$ are given, continuously differentiable, real-valued functions on $X$; and $a_0$, $a_i$, $c_i$ and $d_i$ are given real numbers which satisfy $a_0 > 0$, $a_i \ge 0$, $c_i \ge 0$, $d_i \ge 0$ and $c_i + d_i > 0$ for all $i$, and also $a_i c_i > a_0$ for all $i$ with $a_i > 0$.

In (1.1), the natural optimization variables are $x_1,\ldots,x_n$, while $y_1,\ldots,y_m$ and $z$ are "artificial" optimization variables which should make it easier for the user to formulate and solve certain subclasses of problems, like least squares problems and minimax problems.

As a first example, assume that the user wants to solve a problem on the following standard form for nonlinear programming:

$$\begin{aligned}
\text{minimize } & f_0(x) \\
\text{subject to } & f_i(x) \le 0, \quad i = 1,\ldots,m \\
& x \in X,
\end{aligned} \tag{1.2}$$

where $f_0, f_1, \ldots, f_m$ are given differentiable functions and $X$ is as above. To make problem (1.1) (almost) equivalent to this problem (1.2), first let $a_0 = 1$ and $a_i = 0$ for all $i > 0$. Then $z = 0$ in any optimal solution of (1.1). Further, for each $i$, let $d_i = 1$ and $c_i$ = "a large number", so that the variables $y_i$ become "expensive". Then typically $y = 0$ in any optimal solution of (1.1), and the corresponding $x$ is an optimal solution of (1.2).
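To make the recipe above concrete, here is a minimal sketch (in Python/NumPy, not from Svanberg's own code) of how the coefficients of (1.1) could be chosen to emulate the standard NLP form (1.2). The function name and the default value of the "large number" are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not Svanberg's code): coefficients of (1.1) that make it
# (almost) equivalent to the standard NLP (1.2) with m constraints.
def standard_nlp_coefficients(m, c_large=1000.0):
    a0 = 1.0                 # coefficient of z in the objective
    a = np.zeros(m)          # a_i = 0 for i > 0, which forces z = 0 at the optimum
    c = np.full(m, c_large)  # "large number" making the artificial y_i expensive
    d = np.ones(m)           # d_i = 1
    return a0, a, c, d
```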

It should be noted that the problem (1.1) always has feasible solutions, and in fact also at least one optimal solution. This holds even if the user's problem (1.2) does not have any feasible solutions, in which case some $y_i > 0$ in the optimal solution of (1.1).

As a second example, assume that the user wants to solve a min-max problem on the form

$$\begin{aligned}
\text{minimize } & \max_{i=1,\ldots,p} \{h_i(x)\} \\
\text{subject to } & g_i(x) \le 0, \quad i = 1,\ldots,q \\
& x \in X,
\end{aligned} \tag{1.3}$$

where $h_i$ and $g_i$ are given differentiable functions and $X$ is as above. It is assumed that the functions $h_i$ are non-negative on $X$ (possibly after adding a sufficiently large constant $C$ to each of them, the same $C$ for all $h_i$). For each given $x \in X$, the value of the objective function in problem (1.3) is the largest of the $p$ real numbers $h_1(x),\ldots,h_p(x)$. Problem (1.3) may equivalently be written on the following form, with variables $x \in \mathbb{R}^n$ and $z \in \mathbb{R}$:

$$\begin{aligned}
\text{minimize } & z \\
\text{subject to } & h_i(x) \le z, \quad i = 1,\ldots,p \\
& g_i(x) \le 0, \quad i = 1,\ldots,q \\
& x \in X, \quad z \ge 0.
\end{aligned} \tag{1.4}$$

To make problem (1.1) (almost) equivalent to this problem (1.4), let $m = p + q$, $f_0(x) = 0$, $f_i(x) = h_i(x)$ for $i = 1,\ldots,p$, $f_{p+i}(x) = g_i(x)$ for $i = 1,\ldots,q$, $a_0 = a_1 = \cdots = a_p = 1$, $a_{p+1} = \cdots = a_{p+q} = 0$, $d_1 = \cdots = d_m = 1$, and $c_1 = \cdots = c_m$ = "a large number".

As a third example, assume that the user wants to solve a constrained least squares problem on the form

$$\begin{aligned}
\text{minimize } & \tfrac{1}{2} \sum_{i=1}^{p} (h_i(x))^2 \\
\text{subject to } & g_i(x) \le 0, \quad i = 1,\ldots,q \\
& x \in X,
\end{aligned} \tag{1.5}$$

where $h_i$ and $g_i$ are given differentiable functions and $X$ is as above. Problem (1.5) may equivalently be written on the following form, with variables $x \in \mathbb{R}^n$ and $y_1,\ldots,y_{2p} \in \mathbb{R}$:

$$\begin{aligned}
\text{minimize } & \tfrac{1}{2} \sum_{i=1}^{p} (y_i^2 + y_{p+i}^2) \\
\text{subject to } & h_i(x) \le y_i, \quad i = 1,\ldots,p \\
& -h_i(x) \le y_{p+i}, \quad i = 1,\ldots,p \\
& g_i(x) \le 0, \quad i = 1,\ldots,q \\
& y_i \ge 0 \text{ and } y_{p+i} \ge 0, \quad i = 1,\ldots,p \\
& x \in X.
\end{aligned} \tag{1.6}$$

To make problem (1.1) (almost) equivalent to this problem (1.6), let $m = 2p + q$, $f_0(x) = 0$, $f_i(x) = h_i(x)$ for $i = 1,\ldots,p$, $f_{p+i}(x) = -h_i(x)$ for $i = 1,\ldots,p$, $f_{2p+i}(x) = g_i(x)$ for $i = 1,\ldots,q$, $a_0 = 1$, $a_1 = \cdots = a_m = 0$, $d_1 = \cdots = d_m = 1$, $c_1 = \cdots = c_{2p} = 0$, and $c_{2p+1} = \cdots = c_{2p+q}$ = "a large number".
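Both mappings can be written down mechanically. Below is a small sketch, under the same assumptions as above (illustrative names, with c_large standing in for the "large number"), of the coefficient choices for the min-max form (1.4) and the least squares form (1.6).

```python
import numpy as np

# Min-max problem (1.3)/(1.4): m = p + q constraints in (1.1).
def minmax_coefficients(p, q, c_large=1000.0):
    m = p + q
    a0 = 1.0
    a = np.concatenate([np.ones(p), np.zeros(q)])  # a_i = 1 for the h-constraints
    c = np.full(m, c_large)                        # all y_i made expensive
    d = np.ones(m)
    return m, a0, a, c, d

# Constrained least squares (1.5)/(1.6): m = 2p + q constraints in (1.1).
def least_squares_coefficients(p, q, c_large=1000.0):
    m = 2 * p + q
    a0 = 1.0
    a = np.zeros(m)                                # z plays no role here
    c = np.concatenate([np.zeros(2 * p), np.full(q, c_large)])
    d = np.ones(m)
    return m, a0, a, c, d
```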

2. Some practical considerations

In many applications, the constraints are on the form $g_i(x) \le g_i^{\max}$, where $g_i(x)$ stands for e.g. a certain stress, while $g_i^{\max}$ is the largest permitted value on this stress. This means that $f_i(x) = g_i(x) - g_i^{\max}$ (in (1.1) as well as in (1.2)). The user should then preferably scale the constraints in such a way that $1 \le g_i^{\max} \le 100$ for each $i$ (and not $g_i^{\max} = 10^{10}$). The objective function $f_0(x)$ should preferably be scaled such that $1 \le f_0(x) \le 100$ for reasonable values on the variables. The variables $x_j$ should preferably be scaled such that $0.1 \le x_j^{\max} - x_j^{\min} \le 100$ for all $j$.

Concerning the "large numbers" on the coefficients $c_i$ (mentioned above), the user should for numerical reasons try to avoid extremely large values on these coefficients (like $10^{10}$). It is better to start with reasonably large values and then, if it turns out that not all $y_i = 0$ in the optimal solution of (1.1), increase the corresponding values of $c_i$ by e.g. a factor 100 and solve the problem again, etc. If the functions and the variables have been scaled according to the above, then reasonably large values on the parameters $c_i$ could be, say, $c_i = 1000$ or $10000$.

Finally, concerning the simple bound constraints $x_j^{\min} \le x_j \le x_j^{\max}$, it may sometimes be the case that some variables $x_j$ do not have any prescribed upper and/or lower bounds. In that case, it is in practice always possible to choose artificial bounds $x_j^{\min}$ and $x_j^{\max}$ such that every realistic solution $x$ satisfies the corresponding bound constraints. The user should then preferably avoid choosing $x_j^{\max} - x_j^{\min}$ unnecessarily large. It is better to try some reasonable bounds and then, if it turns out that some variable $x_j$ becomes equal to such an artificial bound in the optimal solution of (1.1), change this bound and solve the problem again (starting from the recently obtained solution), etc.

3. The ordinary MMA

MMA is a method for solving problems on the form (1.1), using the following approach: In each iteration, the current iteration point $(x^{(k)}, y^{(k)}, z^{(k)})$ is given. Then an approximating subproblem, in which the functions $f_i(x)$ are replaced by certain convex functions $\tilde{f}_i^{(k)}(x)$, is generated. The choice of these approximating functions is based mainly on gradient information at the current iteration point, but also on some parameters $u_j^{(k)}$ and $l_j^{(k)}$ ("moving asymptotes") which are updated in each iteration based on information from previous iteration points. The subproblem is solved, and its unique optimal solution becomes the next iteration point $(x^{(k+1)}, y^{(k+1)}, z^{(k+1)})$. Then a new subproblem is generated, etc.

The MMA subproblem looks as follows:

$$\begin{aligned}
\text{minimize } & \tilde{f}_0^{(k)}(x) + a_0 z + \sum_{i=1}^{m} \Big( c_i y_i + \tfrac{1}{2} d_i y_i^2 \Big) \\
\text{subject to } & \tilde{f}_i^{(k)}(x) - a_i z - y_i \le 0, \quad i = 1,\ldots,m \\
& \alpha_j^{(k)} \le x_j \le \beta_j^{(k)}, \quad j = 1,\ldots,n \\
& y_i \ge 0, \quad i = 1,\ldots,m, \qquad z \ge 0.
\end{aligned} \tag{3.1}$$

In this subproblem (3.1), the approximating functions $\tilde{f}_i^{(k)}(x)$ are chosen as

$$\tilde{f}_i^{(k)}(x) = \sum_{j=1}^{n} \left( \frac{p_{ij}^{(k)}}{u_j^{(k)} - x_j} + \frac{q_{ij}^{(k)}}{x_j - l_j^{(k)}} \right) + r_i^{(k)}, \quad i = 0,1,\ldots,m, \tag{3.2}$$

where

$$p_{ij}^{(k)} = (u_j^{(k)} - x_j^{(k)})^2 \left( 1.001 \Big( \frac{\partial f_i}{\partial x_j}(x^{(k)}) \Big)^{+} + 0.001 \Big( \frac{\partial f_i}{\partial x_j}(x^{(k)}) \Big)^{-} + \frac{10^{-5}}{x_j^{\max} - x_j^{\min}} \right), \tag{3.3}$$

$$q_{ij}^{(k)} = (x_j^{(k)} - l_j^{(k)})^2 \left( 0.001 \Big( \frac{\partial f_i}{\partial x_j}(x^{(k)}) \Big)^{+} + 1.001 \Big( \frac{\partial f_i}{\partial x_j}(x^{(k)}) \Big)^{-} + \frac{10^{-5}}{x_j^{\max} - x_j^{\min}} \right), \tag{3.4}$$

$$r_i^{(k)} = f_i(x^{(k)}) - \sum_{j=1}^{n} \left( \frac{p_{ij}^{(k)}}{u_j^{(k)} - x_j^{(k)}} + \frac{q_{ij}^{(k)}}{x_j^{(k)} - l_j^{(k)}} \right). \tag{3.5}$$

Here, $\big(\frac{\partial f_i}{\partial x_j}(x^{(k)})\big)^{+}$ denotes the largest of the two numbers $\frac{\partial f_i}{\partial x_j}(x^{(k)})$ and $0$, while $\big(\frac{\partial f_i}{\partial x_j}(x^{(k)})\big)^{-}$ denotes the largest of the two numbers $-\frac{\partial f_i}{\partial x_j}(x^{(k)})$ and $0$.

The bounds $\alpha_j^{(k)}$ and $\beta_j^{(k)}$ in (3.1) and (4.1) are chosen as

$$\alpha_j^{(k)} = \max\{\, x_j^{\min},\ l_j^{(k)} + 0.1(x_j^{(k)} - l_j^{(k)}),\ x_j^{(k)} - 0.5(x_j^{\max} - x_j^{\min}) \,\}, \tag{3.6}$$

$$\beta_j^{(k)} = \min\{\, x_j^{\max},\ u_j^{(k)} - 0.1(u_j^{(k)} - x_j^{(k)}),\ x_j^{(k)} + 0.5(x_j^{\max} - x_j^{\min}) \,\}, \tag{3.7}$$

which means that the constraints $\alpha_j^{(k)} \le x_j \le \beta_j^{(k)}$ are equivalent to the following three sets of constraints:

$$x_j^{\min} \le x_j \le x_j^{\max}, \tag{3.8}$$

$$l_j^{(k)} + 0.1(x_j^{(k)} - l_j^{(k)}) \le x_j \le u_j^{(k)} - 0.1(u_j^{(k)} - x_j^{(k)}), \tag{3.9}$$

$$x_j^{(k)} - 0.5(x_j^{\max} - x_j^{\min}) \le x_j \le x_j^{(k)} + 0.5(x_j^{\max} - x_j^{\min}). \tag{3.10}$$
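The formulas (3.2)–(3.7) are simple elementwise operations. The following is a minimal NumPy sketch, assuming f_k is the vector of function values $f_i(x^{(k)})$, df is the $(m+1) \times n$ matrix of gradients, and low/upp hold the asymptotes $l_j^{(k)}$, $u_j^{(k)}$; all names are illustrative and not those of Svanberg's Matlab code.

```python
import numpy as np

# Sketch of (3.2)-(3.7): MMA approximation coefficients and move limits.
def mma_approximation(x_k, f_k, df, low, upp, xmin, xmax,
                      raa0=1e-5, albefa=0.1, move=0.5):
    dfp = np.maximum(df, 0.0)     # (df_i/dx_j)^+
    dfm = np.maximum(-df, 0.0)    # (df_i/dx_j)^-
    xrange = xmax - xmin
    p = (upp - x_k)**2 * (1.001 * dfp + 0.001 * dfm + raa0 / xrange)   # (3.3)
    q = (x_k - low)**2 * (0.001 * dfp + 1.001 * dfm + raa0 / xrange)   # (3.4)
    r = f_k - (p / (upp - x_k) + q / (x_k - low)).sum(axis=1)          # (3.5)
    alpha = np.maximum.reduce([xmin, low + albefa * (x_k - low),
                               x_k - move * xrange])                    # (3.6)
    beta = np.minimum.reduce([xmax, upp - albefa * (upp - x_k),
                              x_k + move * xrange])                     # (3.7)
    return p, q, r, alpha, beta
```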

The default rules for updating the lower asymptotes $l_j^{(k)}$ and the upper asymptotes $u_j^{(k)}$ are as follows. In the first two iterations, when $k = 1$ and $k = 2$,

$$l_j^{(k)} = x_j^{(k)} - 0.5(x_j^{\max} - x_j^{\min}), \qquad u_j^{(k)} = x_j^{(k)} + 0.5(x_j^{\max} - x_j^{\min}). \tag{3.11}$$

In later iterations, when $k \ge 3$,

$$l_j^{(k)} = x_j^{(k)} - \gamma_j^{(k)} (x_j^{(k-1)} - l_j^{(k-1)}), \qquad u_j^{(k)} = x_j^{(k)} + \gamma_j^{(k)} (u_j^{(k-1)} - x_j^{(k-1)}), \tag{3.12}$$

where

$$\gamma_j^{(k)} = \begin{cases} 0.7 & \text{if } (x_j^{(k)} - x_j^{(k-1)})(x_j^{(k-1)} - x_j^{(k-2)}) < 0, \\ 1.2 & \text{if } (x_j^{(k)} - x_j^{(k-1)})(x_j^{(k-1)} - x_j^{(k-2)}) > 0, \\ 1 & \text{if } (x_j^{(k)} - x_j^{(k-1)})(x_j^{(k-1)} - x_j^{(k-2)}) = 0, \end{cases} \tag{3.13}$$

provided that this leads to values that satisfy

$$\begin{aligned}
l_j^{(k)} &\le x_j^{(k)} - 0.01(x_j^{\max} - x_j^{\min}), \qquad & l_j^{(k)} &\ge x_j^{(k)} - 10(x_j^{\max} - x_j^{\min}), \\
u_j^{(k)} &\ge x_j^{(k)} + 0.01(x_j^{\max} - x_j^{\min}), \qquad & u_j^{(k)} &\le x_j^{(k)} + 10(x_j^{\max} - x_j^{\min}).
\end{aligned} \tag{3.14}$$

If any of these bounds is violated, the corresponding $l_j^{(k)}$ or $u_j^{(k)}$ is put to the right hand side of the violated inequality.

Note that most of the explicit numbers in the above expressions are just default values of different parameters in the Fortran code. More precisely:

The number $10^{-5}$ in (3.3) and (3.4) is the default value of the parameter raa0.
The number $0.1$ in (3.6) and (3.7) is the default value of the parameter albefa.
The number $0.5$ in (3.6) and (3.7) is the default value of the parameter move.
The number $0.5$ in (3.11) is the default value of the parameter asyinit.
The number $0.7$ in (3.13) is the default value of the parameter asydecr.
The number $1.2$ in (3.13) is the default value of the parameter asyincr.

All these values can be carefully changed by the user. As an example, a more conservative method is obtained by decreasing move and/or asyinit and/or asyincr.
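A sketch of the default asymptote rules (3.11)–(3.14), using the parameter names from the list above; xold1 and xold2 stand for $x^{(k-1)}$ and $x^{(k-2)}$, and the function name is illustrative.

```python
import numpy as np

# Sketch of (3.11)-(3.14): default rules for updating the moving asymptotes.
def update_asymptotes(k, x, xold1, xold2, low, upp, xmin, xmax,
                      asyinit=0.5, asyincr=1.2, asydecr=0.7):
    xrange = xmax - xmin
    if k <= 2:
        low = x - asyinit * xrange                   # (3.11)
        upp = x + asyinit * xrange
    else:
        gamma = np.ones_like(x)                      # (3.13): gamma = 1 by default
        sign = (x - xold1) * (xold1 - xold2)
        gamma[sign > 0] = asyincr                    # oscillation-free: expand
        gamma[sign < 0] = asydecr                    # oscillating: contract
        low = x - gamma * (xold1 - low)              # (3.12)
        upp = x + gamma * (upp - xold1)
        low = np.minimum(low, x - 0.01 * xrange)     # (3.14): clip violated bounds
        low = np.maximum(low, x - 10.0 * xrange)
        upp = np.maximum(upp, x + 0.01 * xrange)
        upp = np.minimum(upp, x + 10.0 * xrange)
    return low, upp
```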

4. GCMMA – the globally convergent version of MMA

The globally convergent version of MMA, from now on called GCMMA, for solving problems of the form (1.1) consists of "outer" and "inner" iterations. The index $k$ is used to denote the outer iteration number, while the index $\nu$ is used to denote the inner iteration number. Within each outer iteration, there may be zero, one, or several inner iterations. The double index $(k,\nu)$ is used to denote the $\nu$:th inner iteration within the $k$:th outer iteration.

The first iteration point is obtained by first choosing $x^{(1)} \in X$ and then choosing $y^{(1)}$ and $z^{(1)}$ such that $(x^{(1)}, y^{(1)}, z^{(1)})$ becomes a feasible solution of (1.1). This is easy.

An outer iteration of the method, going from the $k$:th iteration point $(x^{(k)}, y^{(k)}, z^{(k)})$ to the $(k+1)$:th iteration point $(x^{(k+1)}, y^{(k+1)}, z^{(k+1)})$, can be described as follows:

Given $(x^{(k)}, y^{(k)}, z^{(k)})$, an approximating subproblem is generated and solved. In this subproblem, the functions $f_i(x)$ are replaced by certain convex functions $\tilde{f}_i^{(k,0)}(x)$. The optimal solution of this subproblem is denoted $(\hat{x}^{(k,0)}, \hat{y}^{(k,0)}, \hat{z}^{(k,0)})$. If $\tilde{f}_i^{(k,0)}(\hat{x}^{(k,0)}) \ge f_i(\hat{x}^{(k,0)})$ for all $i = 0,1,\ldots,m$, the next iteration point becomes $(x^{(k+1)}, y^{(k+1)}, z^{(k+1)}) = (\hat{x}^{(k,0)}, \hat{y}^{(k,0)}, \hat{z}^{(k,0)})$, and the outer iteration is completed (without any inner iterations needed).

Otherwise, an inner iteration is made, which means that a new subproblem is generated and solved at $x^{(k)}$, with new approximating functions $\tilde{f}_i^{(k,1)}(x)$ which are more conservative than $\tilde{f}_i^{(k,0)}(x)$ for those indices $i$ for which the above inequality was violated. (By this we mean that $\tilde{f}_i^{(k,1)}(x) > \tilde{f}_i^{(k,0)}(x)$ for all $x \in X$, except for $x = x^{(k)}$ where $\tilde{f}_i^{(k,1)}(x^{(k)}) = \tilde{f}_i^{(k,0)}(x^{(k)})$.) The optimal solution of this new subproblem is denoted $(\hat{x}^{(k,1)}, \hat{y}^{(k,1)}, \hat{z}^{(k,1)})$. If $\tilde{f}_i^{(k,1)}(\hat{x}^{(k,1)}) \ge f_i(\hat{x}^{(k,1)})$ for all $i = 0,1,\ldots,m$, the next iteration point becomes $(x^{(k+1)}, y^{(k+1)}, z^{(k+1)}) = (\hat{x}^{(k,1)}, \hat{y}^{(k,1)}, \hat{z}^{(k,1)})$, and the outer iteration is completed (with one inner iteration needed). Otherwise, another inner iteration is made, which means that a new subproblem is generated and solved at $x^{(k)}$, with new approximating functions $\tilde{f}_i^{(k,2)}(x)$, etc. These inner iterations are repeated until $\tilde{f}_i^{(k,\nu)}(\hat{x}^{(k,\nu)}) \ge f_i(\hat{x}^{(k,\nu)})$ for all $i = 0,1,\ldots,m$, which always happens after a finite (usually small) number of inner iterations. Then the next iteration point becomes $(x^{(k+1)}, y^{(k+1)}, z^{(k+1)}) = (\hat{x}^{(k,\nu)}, \hat{y}^{(k,\nu)}, \hat{z}^{(k,\nu)})$, and the outer iteration is completed (with $\nu$ inner iterations needed).

It should be noted that in each inner iteration, there is no need to recalculate the gradients of $f_i$ at $x^{(k)}$, since $x^{(k)}$ has not changed. Gradients of the original functions $f_i$ are calculated only once in each outer iteration. This is an important note, since the calculation of gradients is typically the most time consuming part in structural optimization.
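Schematically, an outer GCMMA iteration could look as follows. This is a structural sketch only: build_subproblem, solve_subproblem, eval_f and update_rho are hypothetical callables standing for the steps described above, and the gradients df_k are computed once and reused in every inner iteration.

```python
# Structural sketch of one GCMMA outer iteration (all helpers hypothetical).
def gcmma_outer_iteration(x_k, f_k, df_k, rho,
                          build_subproblem, solve_subproblem, eval_f, update_rho):
    while True:
        approx = build_subproblem(x_k, f_k, df_k, rho)  # gradients df_k reused
        x_hat, f_tilde_hat = solve_subproblem(approx)   # subproblem optimum
        f_hat = eval_f(x_hat)                           # true values only, no gradients
        if (f_tilde_hat >= f_hat).all():                # approximations conservative?
            return x_hat                                # outer iteration completed
        rho = update_rho(rho, f_hat, f_tilde_hat)       # inner iteration: raise rho_i
```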

The GCMMA subproblem looks as follows, for $k \in \{1,2,3,\ldots\}$ and $\nu \in \{0,1,2,\ldots\}$:

$$\begin{aligned}
\text{minimize } & \tilde{f}_0^{(k,\nu)}(x) + a_0 z + \sum_{i=1}^{m} \Big( c_i y_i + \tfrac{1}{2} d_i y_i^2 \Big) \\
\text{subject to } & \tilde{f}_i^{(k,\nu)}(x) - a_i z - y_i \le 0, \quad i = 1,\ldots,m \\
& \alpha_j^{(k)} \le x_j \le \beta_j^{(k)}, \quad j = 1,\ldots,n \\
& y_i \ge 0, \quad i = 1,\ldots,m, \qquad z \ge 0.
\end{aligned} \tag{4.1}$$

In this subproblem (4.1), the approximating functions $\tilde{f}_i^{(k,\nu)}(x)$ are chosen as

$$\tilde{f}_i^{(k,\nu)}(x) = \sum_{j=1}^{n} \left( \frac{p_{ij}^{(k,\nu)}}{u_j^{(k)} - x_j} + \frac{q_{ij}^{(k,\nu)}}{x_j - l_j^{(k)}} \right) + r_i^{(k,\nu)}, \quad i = 0,1,\ldots,m, \tag{4.2}$$

where

$$p_{ij}^{(k,\nu)} = (u_j^{(k)} - x_j^{(k)})^2 \left( 1.001 \Big( \frac{\partial f_i}{\partial x_j}(x^{(k)}) \Big)^{+} + 0.001 \Big( \frac{\partial f_i}{\partial x_j}(x^{(k)}) \Big)^{-} + \frac{\rho_i^{(k,\nu)}}{x_j^{\max} - x_j^{\min}} \right), \tag{4.3}$$

$$q_{ij}^{(k,\nu)} = (x_j^{(k)} - l_j^{(k)})^2 \left( 0.001 \Big( \frac{\partial f_i}{\partial x_j}(x^{(k)}) \Big)^{+} + 1.001 \Big( \frac{\partial f_i}{\partial x_j}(x^{(k)}) \Big)^{-} + \frac{\rho_i^{(k,\nu)}}{x_j^{\max} - x_j^{\min}} \right), \tag{4.4}$$

$$r_i^{(k,\nu)} = f_i(x^{(k)}) - \sum_{j=1}^{n} \left( \frac{p_{ij}^{(k,\nu)}}{u_j^{(k)} - x_j^{(k)}} + \frac{q_{ij}^{(k,\nu)}}{x_j^{(k)} - l_j^{(k)}} \right). \tag{4.5}$$

Note: If $\rho_i^{(k,\nu+1)} > \rho_i^{(k,\nu)}$ then $\tilde{f}_i^{(k,\nu+1)}(x)$ is a more conservative approximation than $\tilde{f}_i^{(k,\nu)}(x)$.

Between each outer iteration, the bounds $\alpha_j^{(k)}$ and $\beta_j^{(k)}$ and the asymptotes $l_j^{(k)}$ and $u_j^{(k)}$ are updated as in the ordinary MMA, i.e. the formulas (3.6)–(3.14) still hold. The parameters $\rho_i^{(k,\nu)}$ in (4.3) and (4.4) are strictly positive and updated according to below. Within a given outer iteration $k$, the only differences between two inner iterations are the values of some of these parameters. In the beginning of each outer iteration, when $\nu = 0$, the following default values are used:

$$\rho_i^{(k,0)} = \frac{0.1}{n} \sum_{j=1}^{n} \left| \frac{\partial f_i}{\partial x_j}(x^{(k)}) \right| (x_j^{\max} - x_j^{\min}), \quad \text{for } i = 0,1,\ldots,m. \tag{4.6}$$

If any of the right hand sides in (4.6) is $< 10^{-6}$, then the corresponding $\rho_i^{(k,0)}$ is set to $10^{-6}$.
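The initialization (4.6) in NumPy form, as a sketch with the same illustrative names as before (df is the $(m+1) \times n$ gradient matrix at $x^{(k)}$):

```python
import numpy as np

# Sketch of the default initialization (4.6) of rho_i^(k,0).
def init_rho(df, xmin, xmax):
    n = df.shape[1]
    rho0 = (0.1 / n) * (np.abs(df) * (xmax - xmin)).sum(axis=1)  # (4.6)
    return np.maximum(rho0, 1e-6)  # right hand sides below 1e-6 are raised to 1e-6
```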

In each new inner iteration, the updating of $\rho_i^{(k,\nu)}$ is based on the solution to the most recent subproblem. Note that $\tilde{f}_i^{(k,\nu)}(x)$ may be written on the form

$$\tilde{f}_i^{(k,\nu)}(x) = h_i^{(k)}(x) + \rho_i^{(k,\nu)} d^{(k)}(x),$$

where $h_i^{(k)}(x)$ and $d^{(k)}(x)$ do not depend on $\rho_i^{(k,\nu)}$. Some calculations give that

$$d^{(k)}(x) = \sum_{j=1}^{n} \frac{u_j^{(k)} - l_j^{(k)}}{(u_j^{(k)} - x_j)(x_j - l_j^{(k)})} \cdot \frac{(x_j - x_j^{(k)})^2}{x_j^{\max} - x_j^{\min}}. \tag{4.7}$$

Now, let

$$\delta_i^{(k,\nu)} = \frac{f_i(\hat{x}^{(k,\nu)}) - \tilde{f}_i^{(k,\nu)}(\hat{x}^{(k,\nu)})}{d^{(k)}(\hat{x}^{(k,\nu)})}. \tag{4.8}$$

Then $h_i^{(k)}(\hat{x}^{(k,\nu)}) + (\rho_i^{(k,\nu)} + \delta_i^{(k,\nu)}) \, d^{(k)}(\hat{x}^{(k,\nu)}) = f_i(\hat{x}^{(k,\nu)})$, which shows that $\rho_i^{(k,\nu)} + \delta_i^{(k,\nu)}$ might be a "natural" value of $\rho_i^{(k,\nu+1)}$. In order to get a globally convergent method, this natural value is modified as follows:

$$\begin{aligned}
\rho_i^{(k,\nu+1)} &= \min\{\, 1.1(\rho_i^{(k,\nu)} + \delta_i^{(k,\nu)}),\ 10\rho_i^{(k,\nu)} \,\} && \text{if } \delta_i^{(k,\nu)} > 0, \\
\rho_i^{(k,\nu+1)} &= \rho_i^{(k,\nu)} && \text{if } \delta_i^{(k,\nu)} \le 0.
\end{aligned} \tag{4.9}$$

It follows from the formulas (4.2)–(4.5) that the functions $\tilde{f}_i^{(k,\nu)}$ are always first order approximations of the original functions $f_i$ at the current iteration point, i.e.

$$\tilde{f}_i^{(k,\nu)}(x^{(k)}) = f_i(x^{(k)}) \quad \text{and} \quad \frac{\partial \tilde{f}_i^{(k,\nu)}}{\partial x_j}(x^{(k)}) = \frac{\partial f_i}{\partial x_j}(x^{(k)}). \tag{4.10}$$

Since the parameters $\rho_i^{(k,\nu)}$ are always strictly positive, the functions $\tilde{f}_i^{(k,\nu)}$ are strictly convex. This implies that there is always a unique optimal solution of the GCMMA subproblem.

There are at least two approaches for solving the subproblems in MMA and in GCMMA, the dual approach and the primal-dual interior-point approach. In the primal-dual interior-point approach, a sequence of relaxed KKT conditions is solved by Newton's method. We have implemented this approach in Matlab, since all the required calculations are most naturally carried out on a matrix and vector level. This approach for solving the subproblems is described next.
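In code, the inner-iteration update (4.7)–(4.9) could be sketched as below (illustrative names; x_hat is the subproblem optimum, f_hat holds $f_i(\hat{x})$ and f_tilde_hat holds $\tilde{f}_i^{(k,\nu)}(\hat{x})$). Only the indices $i$ with $\delta_i > 0$ get a larger $\rho_i$, exactly as in (4.9).

```python
import numpy as np

# Sketch of (4.7)-(4.9): update of the rho parameters in an inner iteration.
def update_rho(rho, x_k, x_hat, low, upp, xmin, xmax, f_hat, f_tilde_hat):
    d = ((upp - low) / ((upp - x_hat) * (x_hat - low))
         * (x_hat - x_k) ** 2 / (xmax - xmin)).sum()         # d^(k)(x_hat), (4.7)
    delta = (f_hat - f_tilde_hat) / d                        # (4.8)
    return np.where(delta > 0,
                    np.minimum(1.1 * (rho + delta), 10.0 * rho),  # (4.9), delta > 0
                    rho)                                          # (4.9), delta <= 0
```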

5. A primal-dual method for solving the subproblems in MMA and GCMMA

To simplify the notations, the iteration indices $k$ and $\nu$ are now removed in the subproblem. Further, we let $b_i = -r_i^{(k,\nu)}$, and drop the constant $r_0^{(k,\nu)}$ from the objective function. Then the MMA/GCMMA subproblem becomes

$$\begin{aligned}
\text{minimize } & g_0(x) + a_0 z + \sum_{i=1}^{m} \Big( c_i y_i + \tfrac{1}{2} d_i y_i^2 \Big) \\
\text{subject to } & g_i(x) - a_i z - y_i \le b_i, \quad i = 1,\ldots,m \\
& \alpha_j \le x_j \le \beta_j, \quad j = 1,\ldots,n \\
& z \ge 0, \qquad y_i \ge 0, \quad i = 1,\ldots,m,
\end{aligned} \tag{5.1}$$

where

$$g_i(x) = \sum_{j=1}^{n} \left( \frac{p_{ij}}{u_j - x_j} + \frac{q_{ij}}{x_j - l_j} \right), \quad i = 0,1,\ldots,m, \tag{5.2}$$

and $l_j < \alpha_j < \beta_j < u_j$ for all $j$.

5.1. Optimality conditions for the MMA/GCMMA subproblem

Since the subproblem (5.1) is a regular convex problem, the KKT optimality conditions are both necessary and sufficient for an optimal solution of (5.1). In order to state these conditions, we first form the Lagrange function corresponding to (5.1):

$$L = g_0(x) + a_0 z + \sum_{i=1}^{m} \Big( c_i y_i + \tfrac{1}{2} d_i y_i^2 \Big) + \sum_{i=1}^{m} \lambda_i \big( g_i(x) - a_i z - y_i - b_i \big) + \sum_{j=1}^{n} \big( \xi_j(\alpha_j - x_j) + \eta_j(x_j - \beta_j) \big) - \sum_{i=1}^{m} \mu_i y_i - \zeta z, \tag{5.3}$$

where $\lambda = (\lambda_1,\ldots,\lambda_m)^T$, $\xi = (\xi_1,\ldots,\xi_n)^T$, $\eta = (\eta_1,\ldots,\eta_n)^T$, $\mu = (\mu_1,\ldots,\mu_m)^T$ and $\zeta$ are non-negative Lagrange multipliers for the different constraints in (5.1). Let

$$\psi(x,\lambda) = g_0(x) + \sum_{i=1}^{m} \lambda_i g_i(x) = \sum_{j=1}^{n} \left( \frac{p_j(\lambda)}{u_j - x_j} + \frac{q_j(\lambda)}{x_j - l_j} \right), \tag{5.4}$$

where

$$p_j(\lambda) = p_{0j} + \sum_{i=1}^{m} \lambda_i p_{ij} \quad \text{and} \quad q_j(\lambda) = q_{0j} + \sum_{i=1}^{m} \lambda_i q_{ij}. \tag{5.5}$$

Then the Lagrange function can be written

$$L = \psi(x,\lambda) + (a_0 - \zeta - \lambda^T a) z + \sum_{i=1}^{m} \Big( c_i y_i + \tfrac{1}{2} d_i y_i^2 - \lambda_i y_i - \mu_i y_i \Big) + \sum_{j=1}^{n} \big( \xi_j(\alpha_j - x_j) + \eta_j(x_j - \beta_j) \big) - \sum_{i=1}^{m} \lambda_i b_i. \tag{5.6}$$
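For later use, the functions $g_i(x)$ in (5.2) and the combined coefficients $p_j(\lambda)$, $q_j(\lambda)$ in (5.5) are cheap to evaluate. A sketch, assuming P and Q are the $(m+1) \times n$ arrays of $p_{ij}$ and $q_{ij}$ with row 0 holding $p_{0j}$, $q_{0j}$:

```python
import numpy as np

# Sketch of (5.2): evaluate g_0(x),...,g_m(x) in one vectorized expression.
def eval_g(P, Q, x, low, upp):
    return (P / (upp - x) + Q / (x - low)).sum(axis=1)

# Sketch of (5.5): the lambda-weighted coefficients p_j(lam), q_j(lam).
def plam_qlam(P, Q, lam):
    return P[0] + lam @ P[1:], Q[0] + lam @ Q[1:]
```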

The KKT optimality conditions for the subproblem (5.1) then become as follows:

$$\begin{aligned}
&\frac{\partial \psi}{\partial x_j} - \xi_j + \eta_j = 0, && j = 1,\ldots,n && (\partial L/\partial x_j = 0) && (5.7a) \\
&c_i + d_i y_i - \lambda_i - \mu_i = 0, && i = 1,\ldots,m && (\partial L/\partial y_i = 0) && (5.7b) \\
&a_0 - \zeta - \lambda^T a = 0, && && (\partial L/\partial z = 0) && (5.7c) \\
&g_i(x) - a_i z - y_i - b_i \le 0, && i = 1,\ldots,m && \text{(primal feasibility)} && (5.7d) \\
&\lambda_i \big( g_i(x) - a_i z - y_i - b_i \big) = 0, && i = 1,\ldots,m && \text{(compl. slackness)} && (5.7e) \\
&\xi_j (\alpha_j - x_j) = 0, && j = 1,\ldots,n && \text{(compl. slackness)} && (5.7f) \\
&\eta_j (x_j - \beta_j) = 0, && j = 1,\ldots,n && \text{(compl. slackness)} && (5.7g) \\
&\mu_i y_i = 0, && i = 1,\ldots,m && \text{(compl. slackness)} && (5.7h) \\
&\zeta z = 0, && && \text{(compl. slackness)} && (5.7i) \\
&\alpha_j \le x_j \le \beta_j, && j = 1,\ldots,n && \text{(primal feasibility)} && (5.7j) \\
&z \ge 0 \text{ and } y_i \ge 0, && i = 1,\ldots,m && \text{(primal feasibility)} && (5.7k) \\
&\xi_j \ge 0 \text{ and } \eta_j \ge 0, && j = 1,\ldots,n && \text{(dual feasibility)} && (5.7l) \\
&\zeta \ge 0 \text{ and } \mu_i \ge 0, && i = 1,\ldots,m && \text{(dual feasibility)} && (5.7m) \\
&\lambda_i \ge 0, && i = 1,\ldots,m && \text{(dual feasibility)} && (5.7n)
\end{aligned}$$

where

$$\frac{\partial \psi}{\partial x_j} = \frac{p_j(\lambda)}{(u_j - x_j)^2} - \frac{q_j(\lambda)}{(x_j - l_j)^2} \quad \text{and} \quad \lambda^T a = \sum_{i=1}^{m} \lambda_i a_i. \tag{5.8}$$

5.2. The relaxed optimality conditions for the MMA/GCMMA subproblem

When a primal-dual interior-point method is used for solving the subproblem (5.1), the zeros in the right hand sides of the complementary slackness conditions (5.7e)–(5.7i) are replaced by the negative of a small parameter $\varepsilon > 0$. Further, slack variables $s_i$ are introduced for the constraints (5.7d). The relaxed optimality conditions then become as follows:

$$\begin{aligned}
&\frac{\partial \psi}{\partial x_j} - \xi_j + \eta_j = 0, && j = 1,\ldots,n && (5.9a) \\
&c_i + d_i y_i - \lambda_i - \mu_i = 0, && i = 1,\ldots,m && (5.9b) \\
&a_0 - \zeta - \lambda^T a = 0, && && (5.9c) \\
&g_i(x) - a_i z - y_i + s_i - b_i = 0, && i = 1,\ldots,m && (5.9d) \\
&\xi_j (x_j - \alpha_j) - \varepsilon = 0, && j = 1,\ldots,n && (5.9e) \\
&\eta_j (\beta_j - x_j) - \varepsilon = 0, && j = 1,\ldots,n && (5.9f) \\
&\mu_i y_i - \varepsilon = 0, && i = 1,\ldots,m && (5.9g) \\
&\zeta z - \varepsilon = 0, && && (5.9h) \\
&\lambda_i s_i - \varepsilon = 0, && i = 1,\ldots,m && (5.9i) \\
&x_j - \alpha_j > 0 \text{ and } \xi_j > 0, && j = 1,\ldots,n && (5.9j) \\
&\beta_j - x_j > 0 \text{ and } \eta_j > 0, && j = 1,\ldots,n && (5.9k) \\
&y_i > 0 \text{ and } \mu_i > 0, && i = 1,\ldots,m && (5.9l) \\
&z > 0 \text{ and } \zeta > 0, && && (5.9m) \\
&s_i > 0 \text{ and } \lambda_i > 0, && i = 1,\ldots,m && (5.9n)
\end{aligned}$$

For each fixed $\varepsilon > 0$, there exists a unique solution $(x, y, z, \lambda, \xi, \eta, \mu, \zeta, s)$ of these conditions. This follows because (5.9a)–(5.9n) are mathematically (but not numerically) equivalent to the KKT conditions of the following strictly convex problem in the variables $x$, $y$, $z$ and $s$:

$$\begin{aligned}
\text{minimize } & g_0(x) + a_0 z + \sum_{i=1}^{m} \Big( c_i y_i + \tfrac{1}{2} d_i y_i^2 \Big) - \varepsilon \sum_{j=1}^{n} \big( \log(x_j - \alpha_j) + \log(\beta_j - x_j) \big) - \varepsilon \sum_{i=1}^{m} \big( \log y_i + \log s_i \big) - \varepsilon \log z \\
\text{subject to } & g_i(x) - a_i z - y_i + s_i = b_i, \quad i = 1,\ldots,m \\
& (\alpha_j < x_j < \beta_j, \quad y_i > 0, \quad z > 0, \quad s_i > 0),
\end{aligned} \tag{5.10}$$

where the strict inequalities within parentheses will be satisfied automatically because of the logarithm terms in the objective function.
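The left hand sides of (5.9a)–(5.9i) form the residual vector $\delta(w)$ that is used below both in the Newton iteration and in the line search. A sketch (illustrative names; dpsi is the gradient vector from (5.8) and gval = $g(x)$):

```python
import numpy as np

# Sketch: residual vector delta(w) from the left hand sides of (5.9a)-(5.9i).
def kkt_residual(dpsi, gval, a0, a, b, c, d,
                 x, y, z, lam, xi, eta, mu, zeta, s, alpha, beta, eps):
    res = [dpsi - xi + eta,                     # (5.9a)
           c + d * y - lam - mu,                # (5.9b)
           np.atleast_1d(a0 - zeta - lam @ a),  # (5.9c)
           gval - a * z - y + s - b,            # (5.9d)
           xi * (x - alpha) - eps,              # (5.9e)
           eta * (beta - x) - eps,              # (5.9f)
           mu * y - eps,                        # (5.9g)
           np.atleast_1d(zeta * z - eps),       # (5.9h)
           lam * s - eps]                       # (5.9i)
    return np.concatenate(res)
```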

5.3. A Newton direction for the relaxed optimality conditions

Assume that a point $w = (x, y, z, \lambda, \xi, \eta, \mu, \zeta, s)$ which satisfies (5.9j)–(5.9n) is given, and assume that, starting from this point, Newton's method should be applied to (5.9a)–(5.9i). Then the following system of linear equations in $\Delta w = (\Delta x, \Delta y, \Delta z, \Delta\lambda, \Delta\xi, \Delta\eta, \Delta\mu, \Delta\zeta, \Delta s)$ should be generated and solved:

$$\begin{pmatrix}
\Psi & & & G^T & -I & I & & & \\
& d & & -I & & & -I & & \\
& & & -a^T & & & & -1 & \\
G & -I & -a & & & & & & I \\
\xi & & & & x-\alpha & & & & \\
-\eta & & & & & \beta-x & & & \\
& \mu & & & & & y & & \\
& & \zeta & & & & & z & \\
& & & s & & & & & \lambda
\end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \\ \Delta\lambda \\ \Delta\xi \\ \Delta\eta \\ \Delta\mu \\ \Delta\zeta \\ \Delta s \end{pmatrix}
= -\begin{pmatrix} \delta_x \\ \delta_y \\ \delta_z \\ \delta_\lambda \\ \delta_\xi \\ \delta_\eta \\ \delta_\mu \\ \delta_\zeta \\ \delta_s \end{pmatrix}$$

where $\delta_x,\ldots,\delta_s$ are defined by the left hand sides in (5.9a)–(5.9i), $\Psi$ is an $n \times n$ diagonal matrix with

$$(\Psi)_{jj} = \frac{\partial^2 \psi}{\partial x_j^2} = \frac{2\,p_j(\lambda)}{(u_j - x_j)^3} + \frac{2\,q_j(\lambda)}{(x_j - l_j)^3}, \tag{5.11}$$

$G$ is an $m \times n$ matrix with

$$(G)_{ij} = \frac{\partial g_i}{\partial x_j} = \frac{p_{ij}}{(u_j - x_j)^2} - \frac{q_{ij}}{(x_j - l_j)^2}, \tag{5.12}$$

$d$ is a diagonal matrix with the vector $d = (d_1,\ldots,d_m)^T$ on the diagonal, $x - \alpha$ is a diagonal matrix with the vector $x - \alpha$ on the diagonal, etc. (and similarly for $\xi$, $\eta$, $\mu$, $y$, $s$ and $\lambda$ in the matrix above), $e$ is a vector $(1,\ldots,1)^T$ with dimension apparent from the context, and $I$ is a unit matrix with dimension apparent from the context.

In the above Newton system, $\Delta\xi$, $\Delta\eta$, $\Delta\mu$, $\Delta\zeta$ and $\Delta s$ can be eliminated through

$$\begin{aligned}
\Delta\xi &= -(x-\alpha)^{-1} \xi \, \Delta x - \xi + \varepsilon (x-\alpha)^{-1} e, && (5.13a) \\
\Delta\eta &= (\beta-x)^{-1} \eta \, \Delta x - \eta + \varepsilon (\beta-x)^{-1} e, && (5.13b) \\
\Delta\mu &= -y^{-1} \mu \, \Delta y - \mu + \varepsilon y^{-1} e, && (5.13c) \\
\Delta\zeta &= -(\zeta/z) \Delta z - \zeta + \varepsilon/z, && (5.13d) \\
\Delta s &= -\lambda^{-1} s \, \Delta\lambda - s + \varepsilon \lambda^{-1} e. && (5.13e)
\end{aligned}$$
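Once $(\Delta x, \Delta y, \Delta z, \Delta\lambda)$ are known, the eliminated quantities are recovered by (5.13a)–(5.13e). A direct NumPy transcription of those formulas could look as follows (a sketch, all names illustrative):

```python
import numpy as np

# Sketch of the back-substitutions (5.13a)-(5.13e) that recover the
# eliminated multipliers from (dx, dy, dz, dlam); eps is the relaxation
# parameter of the relaxed KKT conditions.
def recover_multipliers(dx, dy, dz, dlam, x, y, z, lam,
                        xi, eta, mu, zeta, s, alpha, beta, eps):
    dxi = -xi * dx / (x - alpha) - xi + eps / (x - alpha)   # (5.13a)
    deta = eta * dx / (beta - x) - eta + eps / (beta - x)   # (5.13b)
    dmu = -mu * dy / y - mu + eps / y                       # (5.13c)
    dzeta = -(zeta / z) * dz - zeta + eps / z               # (5.13d)
    ds = -s * dlam / lam - s + eps / lam                    # (5.13e)
    return dxi, deta, dmu, dzeta, ds
```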

Then the following system in $(\Delta x, \Delta y, \Delta z, \Delta\lambda)$ is obtained:

$$\begin{pmatrix}
D_x & & & G^T \\
& D_y & & -I \\
& & \zeta/z & -a^T \\
G & -I & -a & -D_\lambda
\end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \\ \Delta\lambda \end{pmatrix}
= -\begin{pmatrix} \tilde{\delta}_x \\ \tilde{\delta}_y \\ \tilde{\delta}_z \\ \tilde{\delta}_\lambda \end{pmatrix} \tag{5.14}$$

where

$$\begin{aligned}
D_x &= \Psi + (x-\alpha)^{-1} \xi + (\beta-x)^{-1} \eta, && \text{(a diagonal matrix)} && (5.15a) \\
D_y &= d + y^{-1} \mu, && \text{(a diagonal matrix)} && (5.15b) \\
D_\lambda &= \lambda^{-1} s, && \text{(a diagonal matrix)} && (5.15c) \\
\tilde{\delta}_x &= \frac{\partial \psi}{\partial x} - \varepsilon (x-\alpha)^{-1} e + \varepsilon (\beta-x)^{-1} e, && && (5.15d) \\
\tilde{\delta}_y &= c + d y - \lambda - \varepsilon y^{-1} e, && && (5.15e) \\
\tilde{\delta}_z &= a_0 - \lambda^T a - \varepsilon/z, && \text{(a scalar)} && (5.15f) \\
\tilde{\delta}_\lambda &= g(x) - a z - y - b + \varepsilon \lambda^{-1} e. && && (5.15g)
\end{aligned}$$

In (5.15d), $\frac{\partial \psi}{\partial x}$ is a vector with the $n$ components

$$\frac{\partial \psi}{\partial x_j} = \frac{p_j(\lambda)}{(u_j - x_j)^2} - \frac{q_j(\lambda)}{(x_j - l_j)^2}.$$

Next, $\Delta y$ can be eliminated from the system (5.14) through

$$\Delta y = D_y^{-1} \Delta\lambda - D_y^{-1} \tilde{\delta}_y. \tag{5.16}$$

Then the following system in $(\Delta x, \Delta z, \Delta\lambda)$ is obtained:

$$\begin{pmatrix}
D_x & & G^T \\
& \zeta/z & -a^T \\
G & -a & -D_{\lambda y}
\end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta z \\ \Delta\lambda \end{pmatrix}
= -\begin{pmatrix} \tilde{\delta}_x \\ \tilde{\delta}_z \\ \tilde{\delta}_{\lambda y} \end{pmatrix} \tag{5.17}$$

where

$$D_{\lambda y} = D_\lambda + D_y^{-1}, \quad \text{(a diagonal matrix)} \tag{5.18a}$$

$$\tilde{\delta}_{\lambda y} = \tilde{\delta}_\lambda + D_y^{-1} \tilde{\delta}_y. \tag{5.18b}$$
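For the common case $n > m$, the pieces above can be assembled into one solve: form the diagonals and residuals (5.15), (5.18), build the $(m+1) \times (m+1)$ system in $(\Delta\lambda, \Delta z)$ that results from eliminating $\Delta x$ (this is the system (5.20) derived on the next page), and back-substitute with (5.19) and (5.16). A sketch with illustrative names, where G is the Jacobian (5.12), dpsi the gradient (5.8), and diagonal matrices are stored as vectors:

```python
import numpy as np

# Sketch: one Newton solve via the reduced system (5.20), for the case n > m.
def newton_step(Psi, dpsi, G, gval, a0, a, b, c, d,
                x, y, z, lam, xi, eta, mu, zeta, s, alpha, beta, eps):
    Dx = Psi + xi / (x - alpha) + eta / (beta - x)       # (5.15a)
    Dy = d + mu / y                                      # (5.15b)
    Dlamy = s / lam + 1.0 / Dy                           # (5.15c) + (5.18a)
    dex = dpsi - eps / (x - alpha) + eps / (beta - x)    # (5.15d)
    dey = c + d * y - lam - eps / y                      # (5.15e)
    dez = a0 - lam @ a - eps / z                         # (5.15f)
    delamy = gval - a * z - y - b + eps / lam + dey / Dy # (5.15g) + (5.18b)
    m = G.shape[0]
    A = np.empty((m + 1, m + 1))
    A[:m, :m] = np.diag(Dlamy) + (G / Dx) @ G.T          # (5.20), upper-left block
    A[:m, m], A[m, :m], A[m, m] = a, -a, zeta / z
    sol = np.linalg.solve(A, np.concatenate([delamy - (G / Dx) @ dex, [-dez]]))
    dlam, dz = sol[:m], sol[m]
    dx = -(G.T @ dlam + dex) / Dx                        # (5.19)
    dy = dlam / Dy - dey / Dy                            # (5.16)
    return dx, dy, dz, dlam
```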

In this system (5.17), either $\Delta x$ or $\Delta\lambda$ can be eliminated. If $\Delta x$ is eliminated through

$$\Delta x = -D_x^{-1} G^T \Delta\lambda - D_x^{-1} \tilde{\delta}_x, \tag{5.19}$$

the following system in $(\Delta\lambda, \Delta z)$ is obtained:

$$\begin{pmatrix}
D_{\lambda y} + G D_x^{-1} G^T & a \\
-a^T & \zeta/z
\end{pmatrix}
\begin{pmatrix} \Delta\lambda \\ \Delta z \end{pmatrix}
= \begin{pmatrix} \tilde{\delta}_{\lambda y} - G D_x^{-1} \tilde{\delta}_x \\ -\tilde{\delta}_z \end{pmatrix} \tag{5.20}$$

If, instead, $\Delta\lambda$ is eliminated from the system (5.17) through

$$\Delta\lambda = D_{\lambda y}^{-1} G \, \Delta x - D_{\lambda y}^{-1} a \, \Delta z + D_{\lambda y}^{-1} \tilde{\delta}_{\lambda y}, \tag{5.21}$$

the following system in $(\Delta x, \Delta z)$ is obtained:

$$\begin{pmatrix}
D_x + G^T D_{\lambda y}^{-1} G & -G^T D_{\lambda y}^{-1} a \\
-a^T D_{\lambda y}^{-1} G & \zeta/z + a^T D_{\lambda y}^{-1} a
\end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta z \end{pmatrix}
= \begin{pmatrix} -\tilde{\delta}_x - G^T D_{\lambda y}^{-1} \tilde{\delta}_{\lambda y} \\ -\tilde{\delta}_z + a^T D_{\lambda y}^{-1} \tilde{\delta}_{\lambda y} \end{pmatrix} \tag{5.22}$$

Note that the size of the system (5.20) is $(m+1) \times (m+1)$, while the size of the system (5.22) is $(n+1) \times (n+1)$. Therefore, typically, the system (5.20) should be preferred if $n > m$, while the system (5.22) should be preferred if $m > n$. In both the systems (5.20) and (5.22), it is of course possible to eliminate also the variable $\Delta z$ (and reduce the systems to $m \times m$ and $n \times n$, respectively), but for numerical reasons it is not wise to do that. As an example, if $a$ is dense (many non-zeros), then the system obtained after eliminating $\Delta z$ will be dense, even if the original system (5.20) or (5.22) is sparse.

5.4. Line search in the Newton direction

When the Newton direction $\Delta w = (\Delta x, \Delta y, \Delta z, \Delta\lambda, \Delta\xi, \Delta\eta, \Delta\mu, \Delta\zeta, \Delta s)$ has been calculated in the given point $w = (x, y, z, \lambda, \xi, \eta, \mu, \zeta, s)$, a step should be taken in that direction. A pure Newton step would be to take a step equal to the calculated direction, but such a step might lead to a point where some of the inequalities (5.9j)–(5.9n) are violated. Therefore, we first let $t$ be the largest number such that $t \le 1$,

$$x_j + t\,\Delta x_j - \alpha_j \ge 0.01(x_j - \alpha_j) \ \text{ for all } j, \qquad \beta_j - (x_j + t\,\Delta x_j) \ge 0.01(\beta_j - x_j) \ \text{ for all } j,$$

and $(y, z, \lambda, \xi, \eta, \mu, \zeta, s) + t\,(\Delta y, \Delta z, \Delta\lambda, \Delta\xi, \Delta\eta, \Delta\mu, \Delta\zeta, \Delta s) \ge 0.01\,(y, z, \lambda, \xi, \eta, \mu, \zeta, s)$. Further, the new point should also be in some sense better than the previous one. Therefore, we let $\tau$ be the largest of the numbers $t$, $t/2$, $t/4$, $t/8, \ldots$ such that $\|\delta(w + \tau\,\Delta w)\| < \|\delta(w)\|$, where $\delta(w)$ is the residual vector defined by the left hand sides in the relaxed KKT conditions (5.9a)–(5.9i), and $\|\cdot\|$ is the ordinary Euclidean norm. This is always possible to obtain since the Newton direction is a descent direction for $\|\delta(w)\|$.
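The step length rule in section 5.4 could be sketched as below, assuming w and dw are dicts mapping the variable names ('x', 'y', 'z', 'lam', 'xi', 'eta', 'mu', 'zeta', 's') to NumPy arrays (scalars stored as length-1 arrays), and residual_norm(w) is a hypothetical callable returning $\|\delta(w)\|$; everything here is illustrative.

```python
import numpy as np

def max_step(v, dv, frac=0.99):
    # Largest t <= 1 with v + t*dv >= (1 - frac)*v, assuming v > 0 elementwise.
    neg = dv < 0
    return 1.0 if not neg.any() else min(1.0, float((frac * v[neg] / -dv[neg]).min()))

def step_length(w, dw, alpha, beta, residual_norm):
    t = min(max_step(w['x'] - alpha, dw['x']),   # keep x - alpha > 0.01*(x - alpha)
            max_step(beta - w['x'], -dw['x']))   # keep beta - x > 0.01*(beta - x)
    for k in ('y', 'z', 'lam', 'xi', 'eta', 'mu', 'zeta', 's'):
        t = min(t, max_step(np.atleast_1d(w[k]), np.atleast_1d(dw[k])))
    tau, r0 = t, residual_norm(w)
    # Try t, t/2, t/4, ... until the residual norm decreases.
    while residual_norm({k: w[k] + tau * dw[k] for k in w}) >= r0:
        tau /= 2.0
    return tau
```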

5.5. The complete primal-dual algorithm

First of all, $\varepsilon^{(1)}$ and a starting point $w^{(1)} = (x^{(1)}, y^{(1)}, z^{(1)}, \lambda^{(1)}, \xi^{(1)}, \eta^{(1)}, \mu^{(1)}, \zeta^{(1)}, s^{(1)})$ which satisfies (5.9j)–(5.9n) are chosen. The following is a simple but reasonable choice:

$$\varepsilon^{(1)} = 1, \quad x_j^{(1)} = \tfrac{1}{2}(\alpha_j + \beta_j), \quad y_i^{(1)} = 1, \quad z^{(1)} = 1, \quad \zeta^{(1)} = 1, \quad \lambda_i^{(1)} = 1, \quad s_i^{(1)} = 1,$$

$$\xi_j^{(1)} = \max\{1,\ 1/(x_j^{(1)} - \alpha_j)\}, \quad \eta_j^{(1)} = \max\{1,\ 1/(\beta_j - x_j^{(1)})\}, \quad \mu_i^{(1)} = \max\{1,\ c_i/2\}.$$

A typical iteration, leading from the $l$:th iteration point $w^{(l)}$ to the $(l+1)$:th iteration point $w^{(l+1)}$, consists of the following steps:

Step 1: For given $\varepsilon^{(l)}$ and $w^{(l)}$ which satisfy (5.9j)–(5.9n), calculate $\Delta w^{(l)}$ as described in section 5.3 above.
Step 2: Calculate a steplength $\tau^{(l)}$ as described in section 5.4 above.
Step 3: Let $w^{(l+1)} = w^{(l)} + \tau^{(l)} \Delta w^{(l)}$.
Step 4: If $\|\delta(w^{(l+1)})\| < 0.9\,\varepsilon^{(l)}$, let $\varepsilon^{(l+1)} = 0.1\,\varepsilon^{(l)}$. Otherwise, let $\varepsilon^{(l+1)} = \varepsilon^{(l)}$. Increase $l$ by 1 and go to Step 1.

The algorithm is terminated when $\varepsilon^{(l)}$ has become sufficiently small, say $\varepsilon^{(l)} \le 10^{-7}$, and $\|\delta(w^{(l+1)})\| < 0.9\,\varepsilon^{(l)}$.

6. References

[1] K. Svanberg, The method of moving asymptotes – a new method for structural optimization, International Journal for Numerical Methods in Engineering, 1987, 24, 359–373.

[2] K. Svanberg, A class of globally convergent optimization methods based on conservative convex separable approximations, SIAM Journal on Optimization, 2002, 12, 555–573.
