SCALED STEEPEST DESCENT METHOD
We want to solve the same problem

(0.1)    \min_{x \in \mathbb{R}^n} f(x) + \|x\|_1

using another approach, which we call the Scaled Steepest Descent method (SSD in short). We propose to solve (0.1) by taking the safeguarded Barzilai-Borwein steplength along the scaled steepest descent direction in each iteration. For simplicity, we focus on the quadratic case

(0.2)    \min_{x \in \mathbb{R}^n} \tfrac{1}{2} x^T H x - b^T x + \|x\|_1

where H is an n x n positive definite matrix and b is an n x 1 column vector. (0.2) is still a convex problem, since the sum of two convex functions is convex, and it is well known that for a convex function the local minimum and the global minimum coincide. We use h(x) to denote the objective function as defined in (0.2). In this chapter, we first investigate the performance of the proposed SSD method and establish some convergence results for (0.2). Next, we generalize the SSD method to solving (0.1) and compare our method against other alternatives in various settings.

1. Motivations of Our Research

The scaling matrix as defined in Chapter 2 is specially designed to handle optimization problems involving the L1 norm. It is not surprising that neither Cauchy steplengths nor BB steplengths work well along the steepest descent direction, since problem (0.2) is equivalent to a problem constrained by a linear system of inequalities (LOI). Direct application of the Cauchy steplength or the BB steplength does not guarantee that all the iterates stay within the feasible region, and as a result both steplengths fail to give convergence. Inspired by the trust region method, we propose to use the scaled steepest descent direction instead of the steepest descent direction with BB stepsizes. A safeguard mechanism is also incorporated to keep the chosen stepsizes from becoming unreasonably large or small. In Section 2, we propose a variant of the line search method to solve (0.2). In Section 3, we establish the framework of the SSD method. In Section 4, convergence results by simulation are presented.

2. The line search based method in the SD direction

We can still do a line search for the optimal stepsize in the SD direction, but the optimal stepsizes are far more complex than the traditional Cauchy steplength because of the crossings of the break points. In the k-th iteration, the original Cauchy steplength is given by

    \alpha_k = \frac{g_k^T g_k}{g_k^T \nabla^2 h(x_k) \, g_k}

and the next iterate x_{k+1} is updated by

    x_{k+1} = x_k - \alpha_k g_k

where g_k = H x_k - b + sgn(x_k) and \nabla^2 h(x_k) = \nabla^2 f(x_k) = H.
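For concreteness, a minimal sketch of this setup in Python/NumPy follows (the helper names are ours, not from the thesis, whose experiments were run in Matlab):

```python
import numpy as np

def objective(x, H, b):
    """h(x) = 0.5 x'Hx - b'x + ||x||_1, the objective in (0.2)."""
    return 0.5 * x @ H @ x - b @ x + np.sum(np.abs(x))

def gradient(x, H, b):
    """g = Hx - b + sgn(x): the (sub)gradient of h used throughout,
    with the convention sgn(0) = 0."""
    return H @ x - b + np.sign(x)

def cauchy_step(g, H):
    """Naive Cauchy steplength g'g / (g'Hg); it ignores the kinks of
    ||x||_1, which is exactly why it can fail on (0.2)."""
    return (g @ g) / (g @ H @ g)
```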
When we use the above Cauchy steplength \alpha_k = g_k^T g_k / (g_k^T H g_k), the generated sequence \{x_k\}_{k=1}^\infty does not converge, as observed in simulation. We believe that the cause of divergence is that the objective function h(x) is non-differentiable on each hyperplane \{x_j = 0\}, j \in \{1, ..., n\}. Whenever those hyperplanes are crossed, the corresponding gradient components jump in value. Hence the simple use of \alpha_k = g_k^T g_k / (g_k^T H g_k) may result in an increase of the objective function values, because the steepest descent direction may become an ascent direction after break point crossings. Nonetheless, we point out that the correct calculation of the steplength \alpha_k should follow the steps given below.

The objective function is given by

    h(x) = \tfrac{1}{2} x^T H x - b^T x + \|x\|_1

hence the gradient of h(x) at x_k can be represented as

    g_k = H x_k - b + sgn(x_k)

If we move along the negative gradient direction starting from the current iterate x_k, we define

    p(\alpha) := h(x_k - \alpha g_k)
               = \tfrac{1}{2} (x_k - \alpha g_k)^T H (x_k - \alpha g_k) - b^T (x_k - \alpha g_k) + \|x_k - \alpha g_k\|_1
               = \tfrac{1}{2} g_k^T H g_k \, \alpha^2 - g_k^T H x_k \, \alpha + g_k^T b \, \alpha + \tfrac{1}{2} x_k^T H x_k - b^T x_k + sgn(x_k - \alpha g_k)^T (x_k - \alpha g_k)
               = h(x_k) + \tfrac{1}{2} g_k^T H g_k \, \alpha^2 + (g_k^T b - g_k^T H x_k) \, \alpha + sgn(x_k - \alpha g_k)^T (x_k - \alpha g_k) - \|x_k\|_1

In order to find the minimum of p(\alpha): if we did not have the \|x_k - \alpha g_k\|_1 term, we would simply get

    \alpha_k = \frac{g_k^T (H x_k - b)}{g_k^T H g_k} = \frac{g_k^T g_k}{g_k^T H g_k}

since p(\alpha) would then be a quadratic function of \alpha. Nonetheless, we do have the extra L1 norm term here, and because of the extra term sgn(x_k - \alpha g_k)^T (x_k - \alpha g_k) in p(\alpha), calculating the optimal \alpha is no longer a trivial task. Indeed, similar to the proof in Chapter 2, p(\alpha) is piecewise quadratic with respect to \alpha; the only difference is that every quadratic piece of h(x) is convex. We therefore have only two cases: either the local minimum is the minimum of some quadratic piece, reached after crossing as many non-differentiable hyperplanes as necessary, or the local minimum is located exactly on one of those non-differentiable hyperplanes.

Note that, writing d_k = -g_k, p(\alpha) can be rewritten as

    p(\alpha) = h(x_k + \alpha d_k) = h(x_k) + \alpha (H x_k - b)^T d_k + \tfrac{1}{2} \alpha^2 d_k^T H d_k + \|x_k + \alpha d_k\|_1 - \|x_k\|_1

hence we get the increase \varphi(\alpha d_k) at the current iterate x_k, defined by

(2.1)    \varphi(\alpha d_k) := h(x_k + \alpha d_k) - h(x_k) = -\alpha (H x_k - b)^T g_k + \tfrac{1}{2} \alpha^2 g_k^T H g_k + \|x_k - \alpha g_k\|_1 - \|x_k\|_1

Like before, \beta_i denotes the i-th break point along d_k, with \beta_0 = 0. Suppose that \alpha \in [\beta_i, \beta_{i+1}), i \in \{0, 1, ..., l\}, where l corresponds to the last break point in the direction d_k.
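Enumerating these break points is mechanical; a small sketch (the helper name is ours):

```python
import numpy as np

def break_points(x, d):
    """Sorted (beta, j) pairs: the positive steps beta = -x_j / d_j at
    which coordinate j of x + beta*d crosses the hyperplane {x_j = 0}."""
    pairs = [(-x[j] / d[j], j) for j in range(len(x)) if d[j] != 0.0]
    return sorted(p for p in pairs if p[0] > 0.0)
```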
We decompose all three terms in (2.1) with respect to the break points as follows:

    (H x_k - b)^T (\alpha d_k)
        = (H x_k - b)^T \big[ (\alpha - \beta_i) d_k + (\beta_i - \beta_{i-1}) d_k + \cdots + (\beta_1 - \beta_0) d_k \big]
        = (H x_k - b)^T (\alpha - \beta_i) d_k + \sum_{j=1}^{i} (H x_k - b)^T (\beta_j - \beta_{j-1}) d_k

and, using the same telescoping \alpha = (\alpha - \beta_i) + \sum_{j=1}^{i} (\beta_j - \beta_{j-1}),

    \tfrac{1}{2} \alpha^2 d_k^T H d_k
        = \tfrac{1}{2} (\alpha - \beta_i)^2 \, d_k^T H d_k + (\alpha - \beta_i) \beta_i \, d_k^T H d_k + \tfrac{1}{2} \sum_{j=1}^{i} (\beta_j - \beta_{j-1})(\beta_j + \beta_{j-1}) \, d_k^T H d_k

and

    \|x_k + \alpha d_k\|_1 - \|x_k\|_1
        = \big( \|x_k + \alpha d_k\|_1 - \|x_k + \beta_i d_k\|_1 \big) + \sum_{j=1}^{i} \big( \|x_k + \beta_j d_k\|_1 - \|x_k + \beta_{j-1} d_k\|_1 \big)
        = sgn(x_k + \beta_i^+ d_k)^T (\alpha - \beta_i) d_k + \sum_{j=1}^{i} sgn(x_k + \beta_{j-1}^+ d_k)^T (\beta_j - \beta_{j-1}) d_k

where sgn(x_k + \beta^+ d_k) denotes the sign vector immediately after the break point \beta is crossed. Define

    \varphi_j(\tau d_k) := \tau \, g_j^T d_k + \tfrac{1}{2} \tau^2 d_k^T H d_k,    j \in \{0, 1, ..., l\}

where g_j is the gradient immediately after crossing the j-th break point (and g_0 = g_k), that is,

    g_j := H x_k - b + sgn(x_k + \beta_j^+ d_k) + \beta_j H d_k

Hence the increase function \varphi(\alpha d_k) is a combination of i + 1 quadratic pieces \varphi_j:

    \varphi(\alpha d_k) = \varphi_i\big( (\alpha - \beta_i) d_k \big) + \sum_{j=1}^{i} \varphi_{j-1}\big( (\beta_j - \beta_{j-1}) d_k \big),    \alpha \in [\beta_i, \beta_{i+1})

Notice that

    g_i = \nabla f(x_k) + sgn(x_k + \beta_i^+ d_k) + \beta_i H d_k = g_{i-1} + (\beta_i - \beta_{i-1}) H d_k - 2 \, sgn(x_{k, j_i}) \, e_{j_i}

where j_i corresponds to the axis crossed at the i-th break point and e_{j_i} = [0, ..., 0, 1, 0, ..., 0]^T with the 1 in the j_i-th component.
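This gradient recurrence translates directly into code; a sketch under the same notation (names are ours):

```python
import numpy as np

def gradient_after_crossing(g_prev, beta_prev, beta, Hd, x, j):
    """g_i = g_{i-1} + (beta_i - beta_{i-1}) * H d - 2 sgn(x_j) e_j:
    the smooth part advances along d, and the sign contribution of the
    crossed coordinate j flips."""
    g = g_prev + (beta - beta_prev) * Hd   # makes a fresh array
    g[j] -= 2.0 * np.sign(x[j])
    return g
```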
We have established Lemma 1 to capture the characteristics of the minimum with respect to the break points.

Lemma 1. The optimizer \alpha^* of (2.1) in the negative gradient direction d_k = -g_0 is either \alpha^* = \beta_{i+1} for some i \in \{0, 1, ..., l\}, or

    \alpha^* = \beta_i + \frac{\langle g_i, g_0 \rangle}{d_k^T H d_k}

where \beta_i corresponds to the last break point that is crossed, \beta_i := \max\{\beta_j : \beta_j < \alpha^*\}.

Proof. Write \mu := d_k^T H d_k = g_0^T H g_0 > 0. We examine two cases.

Case I: the optimizer lies strictly between two consecutive break points, i.e., \alpha^* \in (\beta_i, \beta_{i+1}). Then \langle g_0, g_i \rangle > 0, because otherwise the direction would no longer be a descent direction after the crossing of the break point \beta_i. Setting the derivative of \varphi to zero on this piece,

    \frac{\partial \varphi(\alpha d_k)}{\partial \alpha}
        = \frac{\partial}{\partial \alpha} \Big[ \varphi_i\big((\alpha - \beta_i) d_k\big) + \sum_{j=1}^{i} \varphi_{j-1}\big((\beta_j - \beta_{j-1}) d_k\big) \Big]
        = g_i^T d_k + (\alpha - \beta_i) \, d_k^T H d_k = 0

Substituting d_k = -g_0 into the above equation, we get

    \alpha^* = \beta_i + \frac{\langle g_i, g_0 \rangle}{\mu},    \langle g_i, g_0 \rangle > 0

and therefore the optimal value is

    \varphi(\alpha^* d_k) = \sum_{j=1}^{i} \varphi_{j-1}\big((\beta_j - \beta_{j-1}) d_k\big) - \frac{\langle g_i, g_0 \rangle^2}{2\mu}

Case II: the minimum is one of the break points, \alpha^* = \beta_{i+1} for some i \in \{0, 1, ..., l\}. Since \mu > 0, this happens when \langle g_j, g_0 \rangle > 0 for j \in \{1, ..., i\} while \langle g_{i+1}, g_0 \rangle \le 0: the direction remains a descent direction up to \beta_{i+1} and becomes an ascent direction immediately after crossing it. Without loss of generality, assuming \langle g_j, g_0 \rangle > 0 and \beta_j + \langle g_j, g_0 \rangle / \mu \ge \beta_{j+1} for every j \le i, the stationary point of each intermediate piece lies beyond its right end point, so each piece is decreasing over its whole segment and the optimal value is

    \varphi(\alpha^* d_k) = \sum_{j=1}^{i+1} \varphi_{j-1}\big((\beta_j - \beta_{j-1}) d_k\big)

The intuition behind Lemma 1 is quite straightforward: after we cross some break point, the steepest descent direction defined at x_k may become an ascent direction, hence the minimum of (2.1) may be located at some break point in the direction -g_0.
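Lemma 1 yields an exact line search by walking the break points; a self-contained sketch (our reconstruction, not the thesis code):

```python
import numpy as np

def exact_line_search(x, g0, H):
    """Exact minimizer alpha* of h(x + alpha*d), d = -g0, per Lemma 1:
    cross break points while the current piece still descends; stop at
    an interior stationary point, or at the break point after which the
    direction turns to ascent."""
    d = -g0
    mu = g0 @ H @ g0                       # mu = d'Hd > 0
    Hd = H @ d
    bps = sorted((-x[j] / d[j], j) for j in range(len(x))
                 if d[j] != 0.0 and -x[j] / d[j] > 0.0)
    bps.append((np.inf, -1))               # sentinel: unbounded last piece
    g, beta_prev = g0.copy(), 0.0
    for beta, j in bps:
        if g @ g0 <= 0.0:                  # piece ascends: previous break
            return beta_prev               # point is the minimizer
        cand = beta_prev + (g @ g0) / mu   # stationary point of this piece
        if cand < beta:                    # lies inside the current piece
            return cand
        g = g + (beta - beta_prev) * Hd    # cross the break point ...
        g[j] -= 2.0 * np.sign(x[j])        # ... and flip its sign term
        beta_prev = beta
    return beta_prev
```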
We propose our version of the line search based algorithm, elaborated below.

Algorithm 1: Line Search Based Algorithm
Given x_0, for k = 0, 1, ...
  Step 1. While terminating conditions are not satisfied, do
  Step 2. Compute h(x_k) = \tfrac{1}{2} x_k^T H x_k - b^T x_k + \|x_k\|_1 and g_k = H x_k - b + sgn(x_k); define the quadratic model \varphi(\alpha d_k) := h(x_k + \alpha d_k) - h(x_k), where d_k = -g_k.
  Step 3. Compute an optimal steplength \alpha_k for \varphi(\alpha d_k); backtracking is performed if h(x) is non-differentiable at x_{k+1}.
  Step 4. Set x_{k+1} = x_k + \alpha_k d_k and go to Step 1.

3. The Framework of the Non-monotone Scaled Steepest Descent Method

Algorithm 1 is a monotone line search based method for solving (0.2). However, determining the optimal steplength is computationally heavy, and if the algorithm is generalized to solving (0.1), second order (Hessian) information is also needed. The Barzilai-Borwein method is comparable in practical efficiency to the conjugate gradient method for unconstrained optimization. However, as stated before, we observed that direct application of the BB stepsize fails to solve (0.2), since this problem is in effect a constrained problem. In order to deal with the L1 norm in (0.2), we choose the scaled steepest descent direction. The detailed derivation of the BB stepsize in our situation is given in Section 3.1.

3.1. Derivation of the BB stepsizes. The iterates are updated by

    x_{k+1} = x_k - \alpha_k D_k g_k

where D_k = diag(v(x_k)) and

    v(x_k)_i := \min\{|x_{k,i}|, 1\}  if |\nabla f(x_k)_i| \le 1;    v(x_k)_i := 1  otherwise.

Note that we have changed the definition of the scaling matrix. If one gradient component satisfies |\nabla f(x_k)_i| \le 1 at the current iterate x_k and |x_{k,i}| \ge 1, then we do not change the corresponding scaling factor. If |\nabla f(x_k)_i| \le 1 and |x_{k,i}| < 1 then, as described in Section 3.3, the corresponding scaling factor forces the component to become binding at zero very fast when the stepsize is close to 1.
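A sketch of this scaling rule (the tiny floor is our addition, to keep D_k invertible in the ratios below when a component sits exactly at zero):

```python
import numpy as np

def scaling_diagonal(x, grad_f, floor=1e-12):
    """v(x)_i = min(|x_i|, 1) if |grad f(x)_i| <= 1, else 1;
    returns the diagonal of D = diag(v), so D @ g == v * g."""
    v = np.where(np.abs(grad_f) <= 1.0, np.minimum(np.abs(x), 1.0), 1.0)
    return np.maximum(v, floor)
```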
Similar to the derivation in [1], we want to solve the optimization problem defined by

(3.1)    \min_\alpha \| \alpha^{-1} D_k^{-1} \Delta x - D_k \Delta g \|^2
       = \min_\alpha \big( \alpha^{-1} D_k^{-1} \Delta x - D_k \Delta g \big)^T \big( \alpha^{-1} D_k^{-1} \Delta x - D_k \Delta g \big)
       = \min_\alpha \; \langle D_k^{-1} \Delta x, D_k^{-1} \Delta x \rangle \, \alpha^{-2} - 2 \langle D_k^{-1} \Delta x, D_k \Delta g \rangle \, \alpha^{-1} + \langle D_k \Delta g, D_k \Delta g \rangle

where \Delta x = x_k - x_{k-1} and \Delta g = g_k - g_{k-1}. The minimizer of (3.1) is trivially derived as

(3.2)    \alpha_{BB1} := \alpha_k = \frac{\langle D_k^{-1} \Delta x, D_k^{-1} \Delta x \rangle}{\langle D_k^{-1} \Delta x, D_k \Delta g \rangle}

Similarly, the minimizer of \min_\alpha \| D_k^{-1} \Delta x - \alpha D_k \Delta g \|^2 is given by

(3.3)    \alpha_{BB2} := \alpha_k = \frac{\langle D_k \Delta g, D_k^{-1} \Delta x \rangle}{\langle D_k \Delta g, D_k \Delta g \rangle}

If we substitute \Delta x = x_k - x_{k-1} and \Delta g = H(x_k - x_{k-1}) + sgn(x_k) - sgn(x_{k-1}) into (3.2) and (3.3), we observe that neither BB stepsize 1 nor BB stepsize 2 is a Rayleigh quotient, due to the break point crossings. Both BB stepsizes 1 and 2 as calculated above are sometimes negative. We force the chosen stepsize \alpha_k into a predetermined interval [\alpha_{min}, \alpha_{max}], where 0 < \alpha_{min} < \alpha_{max}, to avoid negative stepsizes and to keep it bounded away from zero.
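A sketch of the two safeguarded scaled BB formulas, with `v` the diagonal of D_k (names are ours):

```python
import numpy as np

def scaled_bb_steps(dx, dg, v, a_min, a_max):
    """Safeguarded scaled BB stepsizes (3.2) and (3.3):
    BB1 = <D^-1 dx, D^-1 dx> / <D^-1 dx, D dg>,
    BB2 = <D dg, D^-1 dx> / <D dg, D dg>,
    both clipped to [a_min, a_max]; negative or extreme values are
    projected into the interval."""
    u, w = dx / v, dg * v          # D^-1 dx and D dg
    bb1 = (u @ u) / (u @ w)
    bb2 = (w @ u) / (w @ w)
    return np.clip(bb1, a_min, a_max), np.clip(bb2, a_min, a_max)
```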
3.2. Upper-boundedness of the chosen stepsize. In practice, we can set \alpha_{max} to be a constant; a concrete choice is derived at the end of this subsection. As proved in Chapter 2, we only care about the asymptotic behavior of \|D_k g_k\|^2: if this norm goes to zero, the first order necessary optimality condition is satisfied and the limit point of \{x_k\}_{k=1}^\infty is the local minimum x^*. For the convex problem h(x) as defined in (0.2), x^* is indeed the global minimum.

We use the orthogonal transformation matrix T = [v_1 \; v_2 \; \cdots \; v_n] that diagonalizes H,

    T^T H T = diag(\lambda_1, \lambda_2, ..., \lambda_n)

where H v_i = \lambda_i v_i. It is obvious that

    g_{k+1} = H x_{k+1} - b + sgn(x_{k+1})
            = H (x_k - \alpha_k D_k g_k) - b + sgn(x_k - \alpha_k D_k g_k)
            = g_k - \alpha_k H D_k g_k + e_k

where

    e_k := sgn(x_k - \alpha_k D_k g_k) - sgn(x_k) = -2 \sum_{j \in I(x_k, x_{k+1})} sgn(x_{k,j}) \, e_j

and I(x_k, x_{k+1}) refers to the index set of the axes whose break points are crossed along the direction -D_k g_k from x_k to x_{k+1}. Notice that |I(x_k, x_{k+1})| \le l as defined in the proof of Lemma 1, and each break point \beta > 0, ordered by increasing magnitude, solves x_{k+1,j} = x_{k,j} - \beta (D_k g_k)_j = 0 for some j, hence

    \beta = \frac{x_{k,j}}{(D_k g_k)_j} =
        sgn(x_{k,j}) / g_{k,j}    if |\nabla f(x_k)_j| \le 1 and v(x_k)_j = |x_{k,j}|;
        x_{k,j} / g_{k,j}         if |\nabla f(x_k)_j| > 1.

As before, if v(x_k)_j = |x_{k,j}| then \beta = 1/|g_{k,j}|; otherwise \beta can approach zero.

In conclusion,

    \|D_{k+1} g_{k+1}\| = \|D_{k+1} (g_k - \alpha_k H D_k g_k + e_k)\| \le \|D_{k+1} g_k\| \, \|I - \alpha_k H D_k\|_F + \|D_{k+1} e_k\|

where we utilize the recurrence relation between D_{k+1} and D_k. Since for diagonal matrices many matrix norms coincide, we use the Frobenius norm to derive the norm inequalities. We observe that

    \|D_{k+1} g_k\| = \sqrt{ \sum_{i=1}^n v(x_{k+1})_i^2 \, g_{k,i}^2 }    and    \|D_k g_k\| = \sqrt{ \sum_{i=1}^n v(x_k)_i^2 \, g_{k,i}^2 }

hence

    \frac{\|D_{k+1} g_k\|}{\|D_k g_k\|} \le \max_{i = 1, ..., n} \frac{v(x_{k+1})_i}{v(x_k)_i}

by observing that if positive numbers a_i, b_i, c_i satisfy a_i \le b_i c_i for all i \in \{1, ..., n\}, then \sqrt{\sum a_i^2} \le \sqrt{\sum b_i^2} \, \max_i c_i. From x_{k+1} = x_k - \alpha_k D_k g_k we have |x_{k+1,i}| \le |x_{k,i}| + \alpha_k |(D_k g_k)_i|, and a case analysis gives

    \frac{v(x_{k+1})_i}{v(x_k)_i} =
        |x_{k+1,i}| / |x_{k,i}|    if |\nabla f(x_k)_i| \le 1, |\nabla f(x_{k+1})_i| \le 1;
        1 / |x_{k,i}|              if |\nabla f(x_k)_i| \le 1, |\nabla f(x_{k+1})_i| > 1;
        |x_{k+1,i}|                if |\nabla f(x_k)_i| > 1,  |\nabla f(x_{k+1})_i| \le 1;
        1                          if |\nabla f(x_k)_i| > 1,  |\nabla f(x_{k+1})_i| > 1

(for the relevant components with |x| < 1, so that the min in v is attained at |x|). In the first case, |x_{k+1,i}| / |x_{k,i}| = |1 - \alpha_k \, sgn(x_{k,i}) g_{k,i}| \le 1 + 2\alpha_k, since |g_{k,i}| = |\nabla f(x_k)_i + sgn(x_{k,i})| \le 2. Collecting the cases, we obtain the bound

    \theta_k(\alpha) := \max_{i = 1, ..., n} \Big\{ (1 + 2\alpha) \, I_{\{|\nabla f(x_k)_i| \le 1\}} + \max\{1, |x_{k,i}| + \alpha |g_{k,i}|\} \, I_{\{|\nabla f(x_k)_i| > 1\}} \Big\}

so that \|D_{k+1} g_k\| \le \theta_k(\alpha_k) \, \|D_k g_k\|.
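The bound \theta_k(\alpha) is cheap to evaluate; a sketch following our transcription of the two-case expression above:

```python
import numpy as np

def theta(alpha, x, g, grad_f):
    """theta_k(alpha): bound on ||D_{k+1} g_k|| / ||D_k g_k||, taken as
    the maximum over coordinates of the two-case ratio bound."""
    small = np.abs(grad_f) <= 1.0
    per_coord = np.where(small,
                         1.0 + 2.0 * alpha,
                         np.maximum(1.0, np.abs(x) + alpha * np.abs(g)))
    return per_coord.max()
```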
From

    \|D_{k+1} g_{k+1}\| \le \theta_k(\alpha_k) \, \|I - \alpha_k H D_k\|_F \, \|D_k g_k\| + \|D_{k+1} e_k\|

it follows that if eventually e_k = 0 (no more sign changes) and \theta_k(\alpha_k) \|I - \alpha_k H D_k\|_F < 1, then the scaled gradient converges. We need to examine this product at the BB stepsize

    \alpha_k = \frac{\langle D_k^{-1} \Delta x, D_k^{-1} \Delta x \rangle}{\langle D_k^{-1} \Delta x, D_k [H \Delta x + sgn(x_k) - sgn(x_{k-1})] \rangle}

If \alpha_k as defined above is positive, it should satisfy \theta_k(\alpha_k) \|I - \alpha_k H D_k\|_F < 1. Otherwise, we choose \alpha_k to be the one attaining

    argmin \big\{ \theta_k(\alpha) \, \|I - \alpha H D_k\|_F : \theta_k(\alpha) \, \|I - \alpha H D_k\|_F < 1 \big\}

Recall that

    \|I - \alpha H D_k\|_F^2 = tr\big[ (I - \alpha H D_k)^T (I - \alpha H D_k) \big]
                             = tr\big[ I - \alpha H D_k - \alpha (H D_k)^T + \alpha^2 (H D_k)^T H D_k \big]
                             = n - 2\alpha \, tr(H D_k) + \alpha^2 \, tr\big[ D_k H^T H D_k \big]

Applying the transformation T to H D_k and D_k H^T H D_k, respectively, and using the similarity invariance of the trace operator, we derive

    tr(H D_k) = tr(D_k H) = \sum_{i=1}^n v(x_k)_i \lambda_i    and    tr\big[ D_k H^T H D_k \big] = tr\big[ D_k^2 H^T H \big] = \sum_{i=1}^n v(x_k)_i^2 \lambda_i^2

Therefore we get

    \|I - \alpha H D_k\|_F^2 = n - 2\alpha \sum_{i=1}^n v(x_k)_i \lambda_i + \alpha^2 \sum_{i=1}^n v(x_k)_i^2 \lambda_i^2

and

    \frac{\|D_{k+1} g_k\|}{\|D_k g_k\|} \, \|I - \alpha H D_k\|_F \le \theta_k(\alpha) \, \|I - \alpha H D_k\|_F =: m(\alpha)

Notice that \theta_k(\alpha) is piecewise linear once g_k and x_k are given, and the minimum of the quadratic factor

    n - 2\alpha \sum_{i=1}^n v(x_k)_i \lambda_i + \alpha^2 \sum_{i=1}^n v(x_k)_i^2 \lambda_i^2

is always located at

    \alpha = -\frac{b}{2a} = \frac{\sum_{i=1}^n v(x_k)_i \lambda_i}{\sum_{i=1}^n v(x_k)_i^2 \lambda_i^2}
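This minimizing \alpha can equally be computed from the traces directly, without an eigen-decomposition; a sketch (assuming only that D_k = diag(v)):

```python
import numpy as np

def frob_minimizing_step(H, v):
    """Minimizer of ||I - alpha*H*D||_F^2 = n - 2 alpha tr(HD)
    + alpha^2 ||HD||_F^2 with D = diag(v): alpha = tr(HD) / ||HD||_F^2.
    The text expresses these same two traces through the eigenvalues
    of H as sum(v_i lambda_i) and sum(v_i^2 lambda_i^2)."""
    HD = H * v                     # H @ diag(v): scales the columns of H
    return np.trace(HD) / np.sum(HD * HD)
```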
Notice that by the Cauchy-Schwarz inequality, \big( \sum_{i=1}^n v(x_k)_i \lambda_i \big)^2 \le n \sum_{i=1}^n v(x_k)_i^2 \lambda_i^2, and the minimum value of the squared norm is

    c - \frac{b^2}{4a} = n - \frac{\big( \sum_{i=1}^n v(x_k)_i \lambda_i \big)^2}{\sum_{i=1}^n v(x_k)_i^2 \lambda_i^2} \ge 0

Based on the fact that \theta_k(\alpha) is either constant over some interval or monotonically increasing, we can draw the following conclusions about the global minimum of m(\alpha) = \theta_k(\alpha) \|I - \alpha H D_k\|_F:

(1) m(\alpha)^2 is piecewise cubic, and each piece is always positive.
(2) In order to find the minimum of m(\alpha), first keep in mind that \theta_k(\alpha) is monotone.
(3) We hereby propose to use the steplength

    \alpha = \frac{\sum_{i=1}^n v(x_k)_i \lambda_i}{\sum_{i=1}^n v(x_k)_i^2 \lambda_i^2}

so as to achieve the steepest descent in terms of the magnitude of the scaled gradient. When the algorithm gets close to the minimum, the necessary conditions are almost satisfied (by "almost" we mean |\nabla f(x_k)_i| \le 1 for all i, for sufficiently large k); then \|D_{k+1} g_k\| \approx \|D_k g_k\|, and by our choice of \alpha_k we move along the scaled steepest descent direction in such a way that the scaled gradient decreases to zero exponentially.
(4) It does not matter too much if e_k \ne 0: as long as \theta_k(\alpha_k) \|I - \alpha_k H D_k\|_F < 1, the early sign-change terms e_k are essentially exponentially damped by the products \prod_j \theta_j(\alpha_j) \|I - \alpha_j H D_j\|_F.
(5) We may be in the situation that the minimum of the Frobenius norm \|I - \alpha H D_k\|_F is not smaller than 1 at all. Even in that case, the minimizer still gives guidance on how large, at most, the steplength should be. In conclusion, we can choose \alpha_{max} = \sum_{i=1}^n v(x_k)_i \lambda_i / \sum_{i=1}^n v(x_k)_i^2 \lambda_i^2, hence we take

    \alpha_k := \min\Big\{ \frac{\sum_{i=1}^n v(x_k)_i \lambda_i}{\sum_{i=1}^n v(x_k)_i^2 \lambda_i^2}, \; \max\Big\{ \alpha_{min}, \; \frac{\langle D_k^{-1} \Delta x, D_k^{-1} \Delta x \rangle}{\langle D_k^{-1} \Delta x, D_k [H \Delta x + sgn(x_k) - sgn(x_{k-1})] \rangle} \Big\} \Big\}

3.3. Constant stepsize 1 works well when the L1 norm term is dominant. We observe by simulation that the sequence generated by x_{k+1} = x_k - D_k \nabla h(x_k) actually converges to the optimum very fast when the L1 norm is dominant. The reason is simple: by setting the steplength along the scaled steepest descent direction to 1,

    x_{k+1,i} = x_{k,i} - v(x_k)_i \, g_{k,i} = x_{k,i} - |x_{k,i}| \, g_{k,i}    whenever v(x_k)_i = |x_{k,i}|

and we observe that

    x_{k+1,i} = x_{k,i} - |x_{k,i}| \big( \nabla f(x_k)_i + sgn(x_{k,i}) \big) = -|x_{k,i}| \, \nabla f(x_k)_i,    so    |x_{k+1,i}| = |\nabla f(x_k)_i| \, |x_{k,i}| \le |x_{k,i}|

when the necessary condition |\nabla f(x_k)_i| \le 1 is satisfied. Therefore it is not surprising that such iterates converge to the origin almost exponentially.
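A quick numerical check of this unit-step identity (a sketch with arbitrary data, not from the thesis):

```python
import numpy as np

# Unit scaled step on coordinates with v_i = |x_i|:
# x_new = x - |x| * (grad_f + sign(x)) = -|x| * grad_f,
# so |x_new| = |grad_f| * |x|: a contraction whenever |grad_f| <= 1.
x, grad_f = np.array([0.8, -0.5]), np.array([0.3, -0.9])
x_new = x - np.abs(x) * (grad_f + np.sign(x))
print(x_new, -np.abs(x) * grad_f)   # both print [-0.24  0.45]
```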
3.4. Our Approach: the SSD-BB Method. In Sections 3.1 to 3.3 we have established the fundamental ingredients of our proposed SSD-BB method. The full version of the algorithm is listed below.

Algorithm 2: Scaled Steepest Descent Method
Given x_0, \alpha_0; set k = 0 and fix \alpha_{min}, \alpha_{max}, 0 < \tau_1 < \tau_2 < 1, \gamma \in (0, 1], and M \in Z_+
  Step 1: If \|D_k g_k\| = 0, stop.
  Step 2: Calculate f_{max} = \max\{ f(x_{k-j}) : 0 \le j \le \min\{k, M-1\} \}, \delta_k = -\langle g_k, D_k g_k \rangle, and
          \alpha = \min\Big\{ \alpha_{max}, \max\Big\{ \alpha_{min}, \frac{\langle D_k^{-1} \Delta x, D_k^{-1} \Delta x \rangle}{\langle D_k^{-1} \Delta x, D_k [H \Delta x + sgn(x_k) - sgn(x_{k-1})] \rangle} \Big\} \Big\}
  Step 3: While f(x_k - \alpha D_k g_k) > f_{max} + \gamma \alpha \delta_k, set \alpha_{new} \in [\tau_1 \alpha, \tau_2 \alpha] and \alpha \leftarrow \alpha_{new}. (In our implementation, we choose \alpha_{new} = \frac{\tau_1 + \tau_2}{2} \alpha.)
  Step 4: Set x_{k+1} = x_k - \alpha D_k g_k and go to Step 1.

We provide the following theorem to establish the convergence result.

Theorem 2. Assume that \Omega_0 = \{x : h(x) \le h(x_0)\} is a bounded set. Let h : R^n \to R be continuously differentiable in some neighborhood N of \Omega_0, and let \{x_k\} be the sequence generated by the SSD-BB algorithm (Scaled Steepest Descent Method with Barzilai-Borwein steplength). Then either D(x_j) g(x_j) = 0 for some finite j, or the following properties hold:
(1) \lim_k \|D_k g_k\| = 0;
(2) no limit point of \{x_k\} is a local maximum of h;
(3) if the number of stationary points of h in \Omega_0 is finite, then the sequence \{x_k\} converges.

Proof. To be completed.

Remark: The parameter M controls the satisfiability of the condition f(x_k - \alpha D_k g_k) > f_{max} + \gamma \alpha \delta_k in the while loop. As M increases, it becomes more likely that the trial point gives enough decrease relative to f_{max}; if M = 1, the above algorithm reduces to a monotone algorithm. \gamma is another control parameter that sets the hardness of the same condition. Note that the inner product \delta_k = -\langle g_k, D_k g_k \rangle is negative, hence as \gamma increases the acceptance condition f(x_k - \alpha D_k g_k) \le f_{max} + \gamma \alpha \delta_k becomes more difficult to satisfy. If at some iterate x_k the while condition is very unlikely to be violated, we keep shrinking the stepsize. Because of these observations, we do not impose lower and upper bounds on \alpha_{new} inside the while loop; otherwise we might encounter an infinite loop if, at every inner iteration, we reset

    \alpha_{new} = \min\Big\{ \alpha_{max}, \max\Big\{ \alpha_{min}, \frac{\langle D_k^{-1} \Delta x, D_k^{-1} \Delta x \rangle}{\langle D_k^{-1} \Delta x, D_k [H \Delta x + sgn(x_k) - sgn(x_{k-1})] \rangle} \Big\} \Big\}
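Putting the pieces together, here is a compact sketch of Algorithm 2 (a reconstruction under the defaults of Section 4, not the thesis's Matlab implementation; we apply the backtracking test to the full objective h, and initialize \alpha_0 = \alpha_{min}):

```python
import numpy as np

def ssd_bb(H, b, x0, a_min=0.01, a_max=100.0, tau1=0.1, tau2=0.9,
           gamma=0.5, M=3, eps=1e-8, max_iter=1000):
    """Sketch of Algorithm 2: non-monotone scaled steepest descent with
    a safeguarded BB1 steplength."""
    def h(x):
        return 0.5 * x @ H @ x - b @ x + np.sum(np.abs(x))

    def grad(x):
        return H @ x - b + np.sign(x)

    def v_of(x):
        gf = H @ x - b                      # grad f: the smooth part
        v = np.where(np.abs(gf) <= 1, np.minimum(np.abs(x), 1.0), 1.0)
        return np.maximum(v, 1e-12)         # floor is our addition

    x = x0.astype(float)
    g, v = grad(x), v_of(x)
    x_prev = g_prev = None
    alpha, hist = a_min, []
    for _ in range(max_iter):
        if np.linalg.norm(v * g) == 0.0:    # Step 1: scaled gradient zero
            break
        hist = (hist + [h(x)])[-M:]         # last min(k+1, M) values
        f_max, delta = max(hist), -(g @ (v * g))
        if x_prev is not None:              # Step 2: safeguarded BB1
            u = (x - x_prev) / v
            w = (g - g_prev) * v
            alpha = min(a_max, max(a_min, (u @ u) / (u @ w)))
        while h(x - alpha * v * g) > f_max + gamma * alpha * delta:  # Step 3
            alpha *= 0.5 * (tau1 + tau2)    # alpha_new = (tau1+tau2)/2 alpha
        x_prev, g_prev = x, g
        x = x - alpha * v * g               # Step 4
        g, v = grad(x), v_of(x)
        if abs(h(x) - hist[-1]) < eps:      # stopping rule of Section 4
            break
    return x
```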
4. Simulation Results

In this section, we investigate the performance of our proposed SSD-BB method through simulations. There are many components in Algorithm 2; hereby we randomly generate 50 test cases and assess the impact of each parameter on the convergence rate. We want to choose a set of system parameters with appropriate values so as to give the best convergence rate.

4.1. The impact of scaling. We demonstrate through simulations that our scaling technique helps to solve problem (0.2): if the algorithm directly takes the steepest descent direction with the BB stepsize, it does not work at all.

4.2. The impact of the system parameters. We examine in the following subsections how the performance is influenced by the settings of the system parameters.

4.2.1. The impact of M. Table 1 gives an overview of the overall performance. We choose \gamma = 1, \alpha_{min} = 0.01, \alpha_{max} = 1/\alpha_{min}, \tau_1 = 0.1, \tau_2 = 0.9, \rho = 10. The objective function is given by

    \min_{x \in R^n} \rho \big( \tfrac{1}{2} x^T H x - b^T x \big) + \|x\|_1

the condition number of H is 13, and the dimension of the problem is 10. The terminating condition is that either the difference between two consecutive objective values is less than the typical tolerance eps = 1e-8, or the maximum number of iterations, 1000, is reached. We call an execution unsuccessful, even though it terminates in fewer than 1000 iterations, when the last stepsize is < 10^{-4}. The reason why we choose 10^{-4} is that we observe, for successful executions, the norm of the scaled gradient to be < 10^{-4}; a successful execution should terminate with a stepsize no less than \alpha_{min}. If the last steplength is < 10^{-4}, it can easily make the change in x_{k+1} = x_k - \alpha_k D_k g_k so small that the difference in objective values falls below eps even though the algorithm has not converged to the optimal point.

Comments:
(1) The failed cases with fewer than 1000 iterations are caused by very small stepsizes due to repeated calls to Step 3.
(2) Almost all successful cases terminate with stepsize \alpha_{min} = 0.01, while the norm of the scaled gradient stays below the 10^{-4} threshold.
(3) As M increases from 1 to 5, the success rate attains its maximum around M = 4.

4.2.2. The impact of \gamma. We take 3 and 4 as the candidates for M. In this section, we evaluate the impact of \gamma on the success rate.
Comments:
(1) Smaller \gamma tends to make the backtracking condition easier to violate, which in turn results in earlier termination of the while loop in Step 3. In theory, \gamma \in (0, 1]. In practice, we observe that for \gamma \in (0.3, 0.6) there is no significant difference in terms of the success rate or the number of iterations to converge. In our implementation, we choose \gamma = 0.5.
(2) Whether M = 3 or M = 4 does not make a great difference. For slowly converging sequences \{x_k\}_{k=1}^\infty, different choices of M and \gamma cannot improve the convergence rate.

4.3. The impact of the weighting factor \rho. We check the validity of our proposed SSD-BB method in different scenarios, i.e., under a change of dominance from the L1 norm to the quadratic term. The simulation results are shown in Table 3. We choose M = 3 and \gamma = 0.5.
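Before turning to the tables, the test bed can be reproduced approximately; the thesis does not spell out its random generator, so the following recipe (prescribing the condition number via H = Q diag(\lambda) Q^T) is an assumption:

```python
import numpy as np

def random_test_problem(n=10, cond=13.0, rho=10.0, seed=0):
    """One plausible recipe for a random instance matching Section 4.2.1
    (n = 10, cond(H) = 13, weighting rho); how the thesis generated its
    50 cases is not specified."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal
    lam = np.linspace(1.0, cond, n)                    # eigenvalues of H
    H = (Q * lam) @ Q.T                                # H = Q diag(lam) Q'
    b = rng.standard_normal(n)
    return rho * H, rho * b    # min rho*(0.5 x'Hx - b'x) + ||x||_1
```

Instances from this generator can be fed to the `ssd_bb` sketch above to mimic, though not replicate, the 50-case experiments.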
Table 1. Impact of M, over the 50 random test cases (unsuccessful runs are recorded as "Fail in <#iterations>"). Success rates: M = 1: 0%; M = 2: 40%; M = 3: 82%; M = 4: 84%; M = 5: 78%.
Table 2. Impact of \gamma on the success rate, over the same 50 test cases (runs whose gradient diverges are recorded as "Grad Diverge at 1000"). Success rates: M = 3, \gamma = 0.3: 90%; M = 3, \gamma = 0.6: 86%; M = 4, \gamma = 0.3: 88%; M = 4, \gamma = 0.6: 88%.
Table 3. Change of the role of dominance (runs in which a gradient component diverges are recorded as such). Success rates: \rho = 0.1: 98%; \rho = 1: 94%; \rho = 100: 56%/80%.
Comments:
(1) In the \rho = 0.1 case, the only failure, test case #10, actually attains a scaled-gradient norm at the 1e-1 level. The stepsizes are always chosen to be the lower bound \alpha_{min} = 0.01, since the BB stepsize is always out of bounds. The objective values are at the 1e-1 level as well for all cases.
(2) In the \rho = 1 case, there are 2 test cases in which it takes more than 1000 iterations to converge to the minimum, even though the stepsize is almost constant around 1.038. We can conclude that when the L1 norm is dominant, our algorithm works quite well.
(3) In the \rho = 100 case, the L1 norm is less dominant compared to the quadratic term. For those test cases that terminate at the 1000-th iteration, the norm of the scaled gradient is at the 10^{-4} level. There are quite a few cases, 10 out of 50, whose scaled gradient components show no tendency to converge to zero. In those cases, the steplength chosen in Step 2 is almost always at the lower bound \alpha_{min} = 0.01 and the objective values are less than 10^{-3}. We should bear in mind that, in order to recover the volatility surface from the observed market data, the L1 norm should not be negligible if we want the recovered volatility surface to possess the stability property.

4.4. The impact of the condition number. When the condition number of the matrix H is increased, the optimization problem (0.2) becomes more ill-conditioned. We assess the performance of our SSD-BB method in Table 4. We choose \rho = 10, M = 3 and \gamma = 0.5.
Comments:
(1) In the cond# = 10 case, the converged sequences have exactly 9 out of 10 components binding at 0. For the diverged cases, more than 1 component fails to bind at zero.
(2) As the condition number increases, the success rate decreases. Moreover, due to numerical issues in Matlab, the scaled gradient and the stepsize sometimes take complex values whose imaginary part is zero.
Table 4. Impact of the condition number, with \rho = 10, M = 3 and \gamma = 0.5 (runs in which a gradient component diverges are recorded as such). Success rates: 78% for cond# = 10 and 74% for the larger condition number tested.
Table 5. \rho = 100, M = 3 and \gamma = 0.5, tol = 1e-8. Success rate: 100%.

References
[1] Jonathan Barzilai and Jonathan M. Borwein, Two-Point Step Size Gradient Methods, IMA Journal of Numerical Analysis 8 (1988), 141-148.