A Direct Search Conjugate Directions Algorithm for Unconstrained Minimization

I. D. Coope and C. J. Price
Department of Mathematics & Statistics, University of Canterbury, Private Bag 4800, Christchurch, New Zealand.

Report Number: 188    November 1999

Keywords: derivative free, grid based optimization, positive basis, multidirectional search, conjugate directions
Abstract

A direct search algorithm for unconstrained minimization of smooth functions is described. The algorithm minimizes the function over a sequence of successively finer grids. Each grid is defined by a set of basis vectors. From time to time these basis vectors are updated to include available second derivative information by making some basis vectors mutually conjugate. Convergence to one or more stationary points is shown, and the finite termination property of conjugate direction methods on strictly convex quadratics is retained. Numerical results show that the algorithm is effective on a variety of problems including ill-conditioned problems.

Key Words: derivative free, grid based optimization, positive basis, multidirectional search, conjugate directions.

1 Introduction

There has been much recent interest in derivative free methods for unconstrained optimization [1, 6, 9]. It may be argued that methods such as discrete quasi-Newton methods which approximate derivatives with finite differences are derivative free; however, these methods have not been proven to be convergent. In this paper interest is directed at algorithms for which convergence proofs are known. A variety of provably convergent methods have been described, including ones based on line searches, trust regions, and on grids. The algorithm presented here is in the last category, and uses the convergence theory developed in [2]. The algorithm does not require C² continuity, but can exploit it by using conjugate directions to form the grids. From time to time gradient estimates are available as a byproduct, and these are used to approximate a quasi-Newton step on each such occasion. These quasi-Newton steps are not needed to establish convergence.

A minimizer of a given C¹ objective function f : ℝⁿ → ℝ is sought, where the gradient ∇f of f is locally Lipschitz. The algorithm does not make explicit use of ∇f, but minimizes
f by examining it on a sequence {G⁽ᵐ⁾}, m = 1, 2, …, of successively finer grids. Each grid G⁽ᵐ⁾ is defined by a set of n linearly independent basis vectors V⁽ᵐ⁾ = {vᵢ⁽ᵐ⁾ : i ∈ 1,…,n}. The points on the grid G⁽ᵐ⁾ are

    G⁽ᵐ⁾ = { x ∈ ℝⁿ : x = x_o⁽ᵐ⁾ + h⁽ᵐ⁾ Σᵢ₌₁ⁿ αᵢ vᵢ⁽ᵐ⁾, where αᵢ is integer ∀ i ∈ 1,…,n }

The parameter h⁽ᵐ⁾ is referred to as the mesh size, and is adjusted as m is increased in order to ensure that the meshes become finer in a manner needed to establish convergence. The point x_o⁽ᵐ⁾ is included to allow each grid to have a different origin to its predecessor. The grid points are referenced via α rather than x to avoid the accumulation of round off errors from repeated movements on G⁽ᵐ⁾. The algorithm seeks to minimize f over each grid G⁽ᵐ⁾, where a minimizer of f over a grid is defined as follows:

Definition 1 (Grid local minimum) A point x on the grid G⁽ᵐ⁾ is defined as a grid local minimum if and only if

    f(x + v) ≥ f(x)  and  f(x − v) ≥ f(x)  ∀ v ∈ V⁽ᵐ⁾.

This definition is motivated by the observation that if

    (∇f(x))ᵀ v ≥ 0  and  (∇f(x))ᵀ(−v) ≥ 0  ∀ v ∈ V⁽ᵐ⁾                    (1)

then x is a stationary point of f (see e.g. [2]). The conditions which define a grid local minimum are a finite difference approximation to this. In each main iteration of the algorithm, a grid G⁽ᵐ⁾ is selected using previous information, and a grid local minimizer of f over G⁽ᵐ⁾ is sought through a series of line searches along the directions in V⁽ᵐ⁾. In practice, a finite number of alterations to the grid are permitted during the line searches. An outline of the algorithm's form is as follows:

The algorithm outline
(i) Initialize all variables.
(ii) Execute any finite process.
(iii) Search cyclically along the directions v₁,…,vₙ for grid points which are lower than the current iterate. When a grid local minimum is found, proceed to the next step.
(iv) Execute any finite process.
(v) Form a new grid with its origin at the current lowest iterate. If stopping criteria are not satisfied, go to step (ii).
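For concreteness, the grid local minimum test of Definition 1 can be sketched as follows. This is an illustrative Python fragment with hypothetical helper names; the report itself specifies no implementation:

```python
import numpy as np

def is_grid_local_minimum(f, x, V, h):
    """Check Definition 1: f(x + h*v) >= f(x) and f(x - h*v) >= f(x)
    for every basis vector v. The columns of V are the basis vectors
    v_1, ..., v_n of the current grid; h is the mesh size."""
    fx = f(x)
    for i in range(V.shape[1]):
        v = h * V[:, i]
        if f(x + v) < fx or f(x - v) < fx:
            return False
    return True

# Example on a simple quadratic with the identity basis:
f = lambda x: float(x @ x)
V = np.eye(2)
print(is_grid_local_minimum(f, np.zeros(2), V, 0.5))            # -> True
print(is_grid_local_minimum(f, np.array([1.0, 0.0]), V, 0.5))   # -> False
```

Note that 2n function evaluations suffice in the worst case, one pair per basis vector, which is what makes the test usable without derivatives.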
It is shown in [2] that, under mild conditions, an algorithm with this framework generates a sequence of grid local minima which converge to one or more stationary points of f. For convenience this theorem is restated here, with a slight specialization to reflect the definition of a grid local minimum used herein.

Theorem 1 Given
(a) The sequence of iterates {x⁽ᵏ⁾}, k = 1, 2, …, is bounded;
(b) f(x) is continuously differentiable, and its gradient ∇f(x) is Lipschitz in any bounded region of ℝⁿ;
(c) There exist positive constants K and Δ_det such that ‖vᵢ⁽ᵐ⁾‖ ≤ K for all m and i, and |det(v₁⁽ᵐ⁾ … vₙ⁽ᵐ⁾)| ≥ Δ_det; and
(d) h⁽ᵐ⁾ → 0 as m → ∞;
then each cluster point x̂⁽∞⁾ of the subsequence {x̂⁽ᵐ⁾} ⊂ {x⁽ᵏ⁾} is a stationary point of f(x). Here each x̂⁽ᵐ⁾ is the grid local minimum of G⁽ᵐ⁾ found by the algorithm.

Proof: See [2]. □

2 General Description of the Algorithm

The members of V⁽ᵐ⁾ are chosen to maintain any known second derivative information in the form of mutually conjugate directions. The set of directions V⁽ᵐ⁾ is divided into two subsets: V_c⁽ᵐ⁾ = {v₁⁽ᵐ⁾,…,v_c⁽ᵐ⁾} and V_nc⁽ᵐ⁾ = {v_{c+1}⁽ᵐ⁾,…,vₙ⁽ᵐ⁾}. The members of V_c are regarded as mutually conjugate, whereas the members of V_nc are not. These basis vectors form the columns of the matrix V⁽ᵐ⁾ = [v₁⁽ᵐ⁾ … vₙ⁽ᵐ⁾]. For convenience the matrices V_c⁽ᵐ⁾ and V_nc⁽ᵐ⁾ will be used to refer to the first c and the last n − c columns of V⁽ᵐ⁾ respectively.

The algorithm repeatedly conducts line searches along the directions in V⁽ᵐ⁾ until a grid local minimum is found. Between grid local minima, existing members of V_c⁽ᵐ⁾ are not changed during these line searches. Each member of V_nc⁽ᵐ⁾ can be changed once between grid local minima. This occurs when v ∈ V_nc⁽ᵐ⁾ is removed from V_nc⁽ᵐ⁾, and replaced by a new conjugate direction which is then included in the set V_c⁽ᵐ⁾. These new conjugate directions are generated using the parallel subspace theorem (see e.g. [3, 7, 8]). This process continues until a grid local minimum is found.
The directions in V_c⁽ᵐ⁾ are then scaled so that they have unit estimated curvature along them. This ensures that, when c = n, V_c V_cᵀ is the inverse Hessian on a strictly convex quadratic. Each new conjugate direction changes the grid G⁽ᵐ⁾. Each such grid alteration removes a vector from V_nc, hence only a finite number of such alterations can be made without locating a grid local minimum. These alterations are permitted as part of the finite process in step (ii) of the algorithm outline.
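The claim that the scaled conjugate directions reproduce the inverse Hessian can be checked directly: if the columns of V are mutually G-conjugate and each is scaled to unit curvature, vᵀGv = 1, then VᵀGV = I and hence VVᵀ = G⁻¹. A small numerical check follows, an illustrative sketch with an arbitrary positive definite Hessian rather than code from the report:

```python
import numpy as np

G = np.array([[4.0, 1.0],
              [1.0, 3.0]])            # SPD Hessian of a strictly convex quadratic

# Build G-conjugate directions by Gram-Schmidt in the G-inner product.
v1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
v2 = e2 - (e2 @ G @ v1) / (v1 @ G @ v1) * v1   # now v2^T G v1 = 0

# Scale each direction to unit estimated curvature: v^T G v = 1.
v1 /= np.sqrt(v1 @ G @ v1)
v2 /= np.sqrt(v2 @ G @ v2)

V = np.column_stack([v1, v2])
print(V @ V.T)                        # matches inv(G) up to rounding
```

This is exactly why the algorithm can later use VVᵀ in place of (∇²f)⁻¹ when forming quasi-Newton steps.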
At each grid local minimum, if less than a full set of conjugate directions is known, then these are retained. Otherwise the members of V⁽ᵐ⁾ are re-ordered, the conjugate directions are no longer regarded as such, and the process begins again with c = 1. At each grid local minimizer, a second order estimate ĝ_v⁽ᵐ⁾ of Vᵀ∇f is obtained. On noting VVᵀ approximates the inverse Hessian, the Newton step p = −(∇²f)⁻¹∇f can be estimated. The algorithm conducts a brief search along p for a lower point before selecting the next grid. This search forms part of the finite process in step (iv).

2.1 The Line and Ray Searches

The form of the algorithm requires that a search from an iterate x along v may be abandoned only after f has been calculated at the points x + v and x − v. Hence if the algorithm searches along all n directions v₁,…,vₙ from x without finding a point lower than x, then x is a grid local minimum. If a lower point than x is located, then the algorithm searches further along that direction. More precisely, if f(x + vᵢ) < f(x) then a ray search along the ray x + αvᵢ, α > 0 is performed; otherwise if f(x − vᵢ) < f(x) a ray search along the ray x − αvᵢ, α > 0 is performed; otherwise the line search is terminated unsuccessfully.

Each ray search from x along v_o calculates f(x + αv_o) at successively larger integer values of α as long as a decreasing sequence of function values is obtained. When the last value is not lower than the second to last value, then the ray search is terminated, and the penultimate value determines the new iterate. The first two values are α = 1 and α = 2, unless v_o = −vᵢ, in which case the first and second values are α = −1 and α = 1. Each subsequent value is calculated using the formula

    α = max( α + 1, min(8α, ⌊α_q + 0.5⌋) )

Here ⌊·⌋ denotes the floor function, and α_q is defined as the minimizer of the one dimensional quadratic interpolating the last three points on the line x + αv_o at which f was calculated. If the interpolating quadratic is not strictly convex, then α_q = 8α is used.
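The step-length rule above can be sketched as follows. This is illustrative Python, assuming the three most recent samples on the ray (which may include the starting point α = 0) are supplied as (α, f) pairs:

```python
import math

def quad_min(a0, f0, a1, f1, a2, f2):
    """Minimizer of the quadratic interpolating three points,
    or None if the interpolant is not strictly convex."""
    s01 = (f1 - f0) / (a1 - a0)       # slope of chord on [a0, a1]
    s12 = (f2 - f1) / (a2 - a1)       # slope of chord on [a1, a2]
    curv = (s12 - s01) / (a2 - a0)    # leading coefficient of the interpolant
    if curv <= 0:
        return None
    return 0.5 * (a0 + a1 - s01 / curv)

def next_alpha(alpha, points):
    """One update: alpha <- max(alpha + 1, min(8*alpha, floor(alpha_q + 0.5)))."""
    (a0, f0), (a1, f1), (a2, f2) = points
    aq = quad_min(a0, f0, a1, f1, a2, f2)
    if aq is None:                    # interpolant not strictly convex
        aq = 8 * alpha
    return max(alpha + 1, min(8 * alpha, math.floor(aq + 0.5)))

# Ray search on f(alpha) = (alpha - 10)**2 after sampling alpha = 0, 1, 2:
print(next_alpha(2, [(0, 100), (1, 81), (2, 64)]))   # -> 10, the interpolated minimizer
```

The cap at 8α keeps the expansion geometric at worst, while the interpolation step lets the search jump straight to the predicted minimizer when the local model is reliable.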
3 The Main Algorithm

The basic structure of the algorithm is as follows.

The main algorithm
1. Initialize m = k = c = 1, i = 0, starting point x⁽⁰⁾. Set x_b = 'unknown', h⁽⁰⁾ = 1, h⁽¹⁾ = 1, and V⁽¹⁾ = Iₙ.
2. (a) Set i = i + 1. If i > n, set i = 1. If i = 1 set x_old = x⁽ᵏ⁾.
   (b) Execute a line search along the direction vᵢ⁽ᵐ⁾ from x⁽ᵏ⁾.
   (c) If i = c, c < n, and x_b ≠ 'unknown', then augment the set of conjugate directions as described in section 3.2.
   (d) If a grid local minimum has been found go to step 3, otherwise alter h as specified in section 3.1.
   (e) If i = n do a ray search along x⁽ᵏ⁾ + α(x⁽ᵏ⁾ − x_old), α > 0. Go to step 2(a).
3. Calculate ĝ_v⁽ᵐ⁾ and scale each member of V_c so the estimated curvature along each direction is unity.
4. Perform a 2 point line search along the quasi-Newton direction.
5. If f(x_e) < f(x⁽ᵏ⁺¹⁾), then set x⁽ᵏ⁺¹⁾ = x_e.
6. Choose h⁽ᵐ⁺¹⁾ = h⁽ᵐ⁾/s_r and update s_r.
7. If c ≥ n set c = 1, and x_b = 'unknown'. Set v₁⁽ᵐ⁺¹⁾ = vₙ⁽ᵐ⁾, and set vᵢ⁽ᵐ⁺¹⁾ = v_{i−1}⁽ᵐ⁾ for all i = 2,…,n. Orthogonalize V.
8. Set i = 0, increment m, and go to step 2.

Here i is the index of the direction being used in the line search. The quantity ĝ_v⁽ᵐ⁾ ≈ Vᵀ∇f(x̂⁽ᵐ⁾) is the estimated gradient of f(x + h⁽ᵐ⁾Vα) with respect to hα. At each grid local minimizer x̂⁽ᵐ⁾, the function value is known at each of the points x̂⁽ᵐ⁾ ± h⁽ᵐ⁾vᵢ⁽ᵐ⁾, i = 1,…,n, and so central difference estimates along each vᵢ⁽ᵐ⁾ directly yield each element of ĝ_v⁽ᵐ⁾. In step 7 the matrix V is orthogonalized by post-multiplying it by an orthogonal matrix Q, where Q is chosen so that QᵀVᵀVQ is a diagonal matrix. Orthogonalizing V in this way leaves the estimate VVᵀ of the inverse Hessian unaltered.

3.1 Choosing the mesh size

Each time a new grid is selected in step 6, h⁽ᵐ⁾ is divided by a scale down factor s_r, and s_r is then updated via the following process: if the number of line searches on the previous grid exceeds 4n + n²/2 then s_r is reduced according to the formula

    s_r = max( 1 + [s_r − 1]/4, s_min )

Otherwise, if the number of line searches on the previous grid is less than 2n then s_r is increased using the formula:

    s_r = min( 1 + 2(s_r − 1), s_max )

Here s_max ≥ s_min ≥ 1 is required. The values s_min = 1.01 and s_max = 8 were used to generate the numerical results presented herein. The reason for this adaptive strategy for reducing h is to allow grids to become fine quickly when grid local minima are being found quickly, but to avoid grids that are too fine.
In the latter event, if the grid is poorly oriented then many line searches may be made before a grid local minimum is found, and until a grid local minimum is found there is only limited scope for re-orienting the grid. The ray search in step 2(e) is also used to speed up the location of a grid local minimum on each grid.
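The adaptive update of the scale down factor s_r in section 3.1 can be sketched as follows, with the parameter values s_min = 1.01 and s_max = 8 taken from the text (the function name is ours):

```python
def update_scale_factor(s_r, num_searches, n, s_min=1.01, s_max=8.0):
    """Adaptive update of the mesh scale-down factor s_r:
    pull s_r toward 1 after a hard grid (many line searches),
    grow it after an easy grid (few line searches)."""
    if num_searches > 4 * n + n * n / 2:        # grid was hard: reduce s_r
        s_r = max(1 + (s_r - 1) / 4, s_min)
    elif num_searches < 2 * n:                  # grid was easy: increase s_r
        s_r = min(1 + 2 * (s_r - 1), s_max)
    return s_r                                  # next mesh size is h / s_r

print(update_scale_factor(4.0, 5, 10))     # easy grid: s_r grows to 7.0
print(update_scale_factor(4.0, 100, 10))   # hard grid: s_r shrinks to 1.75
```

Writing both rules in terms of s_r − 1 keeps s_r ≥ 1, so the mesh size h never increases through step 6 itself.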
For the same reason, every time n² + 8n consecutive line searches are executed without leaving step 2 the algorithm attempts to increase h at the end of step 2(d) according to the formula

    h⁽ᵐ⁾ = min( 2h⁽ᵐ⁾, h⁽ᵐ⁻¹⁾/s_min )

The use of h⁽⁰⁾ = 1 allows the algorithm to scale the initial grid up as much as is necessary to obtain a grid local minimum. These alterations are part of the finite process in step (ii) of the algorithm outline.

3.2 Generating the Set of Conjugate Directions

When f is a strictly convex quadratic, the searches along the directions in V_c⁽ᵐ⁾ allow the minimizer x_b of f over the manifold M to be calculated, where M = {x_b + V_c λ : λ ∈ ℝᶜ}. Provided a non-zero step occurs in the following n − c line searches along the directions in V_nc⁽ᵐ⁾, the sequence of iterates is translated off M. The next set of searches along the directions in V_c⁽ᵐ⁾ then allow the minimizer x_e of f on a manifold parallel to M to be calculated. The direction x_e − x_b is conjugate to all members of V_c⁽ᵐ⁾ (see e.g. [7, 3, 8]).

Using h⁽ᵐ⁾V⁽ᵐ⁾λ_new = x_e − x_b, the new conjugate direction x_e − x_b replaces the direction v_j in V_nc⁽ᵐ⁾ for which the absolute value of the j-th component (λ_new)_j of λ_new is maximal. The order of the remaining members of V_nc⁽ᵐ⁾ is retained, the new conjugate direction is transferred from V_nc⁽ᵐ⁾ to V_c⁽ᵐ⁾, and c is incremented. If (λ_new)_j = 0 for each j = c + 1,…,n then no displacement off the manifold M has occurred, in which case the update is abandoned, and x_b is set to x_e. If the update is successful, then x_b is reset to 'unknown.'

The ability to calculate the location of x_b stems from the fact that each line search provides function values at three or more points along the line in question. This allows the step to that line's exact minimizer to be calculated for a strictly convex quadratic, by minimizing the one dimensional quadratic interpolating the last three points at which f was calculated on the line.
The form of the line search guarantees this interpolating quadratic is strictly convex except when all three interpolated function values are equal. In the latter case the middle interpolated point is taken as the line's minimizer. The contiguity of the searches along the members of V_c, and conjugacy, means that the sum of these steps to each line's minimizer is the step to the minimizer x_b.

It can be shown that each update to V is either by scaling of columns, or post-multiplication by a rank 1 matrix. Hence the determinant |det(V)| in condition (c) of theorem 1 can be updated from iteration to iteration.

3.3 Scaling the members of V_c

At each grid local minimum, the directions in V_c⁽ᵐ⁾ are scaled to incorporate curvature information from the line searches along elements in V_c⁽ᵐ⁾. Let Hᵢ⁽ᵐ⁾ be the estimate of the second derivative of f at x̂⁽ᵐ⁾ along the direction vᵢ⁽ᵐ⁾. Then

    vᵢ⁽ᵐ⁺¹⁾ = vᵢ⁽ᵐ⁾ [ max( ε, Hᵢ⁽ᵐ⁾ ) ]^(−1/2)   ∀ i = 1,…,c              (2)
so that the estimate of the second derivative of f at x̂⁽ᵐ⁾ along each new direction vᵢ⁽ᵐ⁺¹⁾ is 1, for i = 1,…,c. Here ε is a small positive constant (10⁻⁸) used to avoid divide by zero problems. Although the form of the line search means that Hᵢ < 0 is impossible, Hᵢ = 0 can occur when f(x) = f(x + v) = f(x − v). The scaling of vᵢ⁽ᵐ⁺¹⁾ in (2) may result in the violation of the bound ‖v‖ ≤ K in condition (c) of theorem 1, in which case vᵢ⁽ᵐ⁺¹⁾ is scaled so that ‖vᵢ⁽ᵐ⁺¹⁾‖ = K.

3.4 Stopping Conditions

The numerical results presented herein were generated using the simple test

    ‖ĝ_v⁽ᵐ⁾‖₂ ≤ ε_acc                                                      (3)

where the stopping tolerance ε_acc was set at 10⁻⁵. The use of g_v in (3) is preferred because, given VVᵀ ≈ G⁻¹,

    ‖ĝ_v‖₂² ≈ gᵀG⁻¹g ≈ (x̂ − x*)ᵀ G (x̂ − x*)

where the Taylor series approximation g(x) = G(x − x*) has been used, and where G = ∇²f(x*). Clearly, (3) provides an estimate of the difference between the least known and optimal values of f. In addition to (3), the algorithm halted whenever h fell below 0.01 ε_acc. Such a limit is needed because, if h were allowed to become too small, then integer increments to α may produce no change to x + hVα in finite precision arithmetic. More sophisticated tests [4] may be applied to the sequence of grid local minima, but the 'infrequent' nature of this sequence reduces the value of such tests.

4 Exact Termination on a Quadratic

It has been shown in theorem 1 that the subsequence of grid local minimizers converges to a stationary point. It is now shown that the algorithm possesses the property of finite termination on strictly convex quadratics.

Theorem 2 Let f be a strictly convex quadratic of the form

    f(x) = ½ xᵀGx + aᵀx                                                    (4)

then the algorithm finds the exact minimizer x* of f(x) in a finite number of function evaluations.

Proof: First, it is shown that the algorithm generates a full set of conjugate directions unless it selects x* as an iterate before this process is complete. Let V_c be the set of conjugate directions at the j-th iteration, where x⁽ʲ⁾ has been obtained from a search along v_c.
Let M = {x⁽ʲ⁾ + V_c λ : λ ∈ ℝᶜ}, and let x_b minimize f over M. Although the searches along the members of V_c do not select x_b as an iterate, they do provide enough information
to calculate x_b exactly when f is of the form (4). It is first shown that either (a) x_b = x*; or (b) a direction v_new conjugate to every member of V_c is generated.

To show (b) occurs it is sufficient to show that the algorithm performs a set of line searches along the directions in V_c from an iterate x⁽ᵏ⁾ ∉ M, for some k > j. Together with the parallel subspace theorem, the first such set of searches yields v_new. If the algorithm takes a non-zero step along a direction in V_nc, the linear independence of V ensures the subsequent set of searches along the directions in V_c are completed, and take place off M. Otherwise the searches for V_nc make no movement, and conjugacy ensures that one set of searches along the directions in V_c will locate a grid local minimum. Steps 4 and 5 are then executed, ensuring the next iterate x satisfies f(x) ≤ f(x_b). If this inequality is strict, then x ∉ M. Otherwise x = x_b, and the next n line searches will either return x_b as a grid local minimum or move to a lower iterate (necessarily not on M). In the former case, the algorithm executes step 4 at x_b.

If x_b = x*, the solution has been found; otherwise ∇f(x_b) is non-zero (because x_b ≠ x*), and orthogonal to M. The use of central differences means that ∇f(x_b) is known exactly. Now V is of full rank, and so p = −VVᵀ∇f(x_b) is a non-zero direction of descent. The line ℓ(α) = x_b + αp, α ∈ ℝ intersects M at x_b only. Step 4 of the algorithm looks at two points on the line. These are x_b + p and x_b + α_p p, where the latter is the minimizer of f over ℓ(α). Now because p is a descent direction at x_b it follows that neither x_b + p nor x_b + α_p p lie on M. Hence step 4 moves the sequence of iterates off M.

The above argument shows the algorithm either encounters x*, or generates a full set of conjugate directions. In the latter case g_v = Vᵀ∇f, and, when c = n, the inverse Hessian (∇²f)⁻¹ = VVᵀ because of the scaling in step 3. Hence p = −Vg_v is the exact step to x*, and step 4 of the algorithm ensures that this step will be taken. □
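The mechanism used in the proof, minimizing over two parallel manifolds and taking the difference of the minimizers, can be verified numerically. The sketch below (illustrative Python, not the authors' code) minimizes a strictly convex quadratic of the form (4) exactly over two parallel affine sets spanned by the columns of V_c, and checks that x_e − x_b is G-conjugate to every column of V_c:

```python
import numpy as np

# Strictly convex quadratic f(x) = 0.5 x^T G x + a^T x, cf. equation (4).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
G = A @ A.T + 4 * np.eye(4)          # symmetric positive definite Hessian
a = rng.standard_normal(4)

def argmin_on_manifold(x0, Vc):
    """Exact minimizer of f over {x0 + Vc @ lam}: the reduced system
    (Vc^T G Vc) lam = -Vc^T (G x0 + a) sets the projected gradient to zero."""
    lam = np.linalg.solve(Vc.T @ G @ Vc, -(Vc.T @ (G @ x0 + a)))
    return x0 + Vc @ lam

Vc = np.eye(4)[:, :2]                # two search directions, not yet conjugate
x_b = argmin_on_manifold(np.zeros(4), Vc)                       # minimizer over M
x_e = argmin_on_manifold(np.array([0.0, 0.0, 1.0, 0.0]), Vc)    # parallel manifold

v_new = x_e - x_b
print(Vc.T @ G @ v_new)              # ~ zero: v_new is conjugate to both columns
```

The check works because the gradient at each manifold minimizer is orthogonal to the span of V_c, so V_cᵀG(x_e − x_b) = V_cᵀ∇f(x_e) − V_cᵀ∇f(x_b) = 0.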
5 Numerical Results

The algorithm was tested on a variety of general test problems, and on a family of quadratics.

5.1 Results for the full algorithm

The algorithm was tested on the first 19 test problems listed in [5]. The results for these problems are listed in table 1, where '# fcn' denotes the number of function evaluations performed, and f^♯ is the function value at the final iterate. The legends ‖g_v^♯‖, m^♯, and h^♯ denote the final values for the norm of the gradient with respect to hα, the number of meshes, and the final mesh size respectively. For all of these problems the algorithm was able to locate the optimal point, and terminated after satisfying the stopping condition (3). The second, starred, set of results for Powell's badly scaled two dimensional function use a required accuracy of ε_acc = 10⁻⁸ rather than 10⁻⁵. The latter, looser tolerance is achieved by points far from the solution. For ε_acc = 10⁻⁵ the final iterate was x^♯ = (1.33×10⁻⁵, 7.52), whereas for 10⁻⁸ the final iterate was x^♯ = (1.1×10⁻⁵, 9.106), which is the solution. The
11 10 I. D. Coope and C. J. Prce Problem n # fcn f ] kg vk ] m ] h ] Rosenbrock e e e-4 Freudensten & Roth e e-3 Powell badly scaled e-7 1.2e e-4 Powell badly scaled e e e-7 Brown badly scaled e e Beale e e e-5 Jennrch & Sampson e e-3 Helcal valley Helcal valley e e e-4 Bard e Gaussan e-8 1.2e e-3 Meyer e e-4 Gulf Research e e e-6 Box 3-dmensonal e e-6 Powell sngular e e e-5 Wood e e e-5 Kowalk and Osborne e-4 7.7e e-5 Brown and Denns e Osborne e-5 2.0e e-5 Bggs exp e e e-5 Osborne e e-7 Table 1: Numercal Results on 19 standard test functons for the standard algorthm. Here n s the dmenson of the problem and `# fcn' s the number of functon evaluatons performed. The quanttes n the rght hand four columns are respectvely the nal functon value, the magntude of the nal gradent estmate g v, the number of grds used, and the nal grd sze.
[Table 2 occupied the top of this page; its numeric entries were lost in extraction. For each dimension n it listed # fcn, f^♯, ‖g_v^♯‖, and ‖x^♯ − x*‖.]

Table 2: Numerical Results on a family of quadratics.

second, starred, set of results for the helical valley problem use h⁽¹⁾ = 0.9 rather than h⁽¹⁾ = 1. With the latter choice the solution x* is a grid local minimizer of the initial grid, and so the algorithm locates it artificially fast.

The algorithm was also tested on a family of quadratics of the form

    f(x) = (x − 1)ᵀ Gₙ (x − 1)

where 1 = (1, 1, …, 1)ᵀ and x⁽⁰⁾ = (1, 1/2, 1/3, …, 1/n)ᵀ. Here Gₙ is the n×n tridiagonal matrix with all diagonal elements equal to 2, and all super- and sub-diagonal elements equal to 1. Results are listed in table 2, where the ♯ superscript denotes the value of the quantity taken at the final iterate x^♯, and x* is the solution.

The results show that the algorithm is effective on a wide variety of problems which includes ill-conditioned problems. The property of exact termination on strictly convex quadratics is verified by the numerical results. The stopping condition is satisfied when f ≤ 10⁻⁵, yet the final function values are many orders of magnitude smaller than this.

5.2 Results for variations on the algorithm

Six variants of the algorithm were also tested on the 19 general test problems. These variations were obtained by deleting one or more parts of the algorithm. The first variant omits the ray search in step 2(e); the second omits the orthogonalization of V in step 7; and the third omits both the orthogonalization of V and the ray search in step 2(e). Results are presented in table 3. The fourth variant adjusts h only after a grid local minimum is found, and halves h on each such occasion. The fifth and sixth variants respectively omit step 4, and steps 4 and 5 of the algorithm. Results for these three variants are listed in table 4. The second, starred, sets of results for Powell's badly scaled function and the helical valley function are for the reasons described above.
Each variant of the algorithm obtained the solution of the Powell badly scaled function with an accuracy of 10⁻⁸, but stopped short of the solution when the required accuracy was 10⁻⁵. This was due to the nature of Powell's badly scaled function, rather than the algorithm. There are three ways the algorithm can terminate: by achieving the required accuracy; by reaching the minimum mesh size limit; and by reaching the maximum number of it-
13 12 I. D. Coope and C. J. Prce Problem n Number of functon evaluatons Full no step 2(e) no orthog. no 2(e)/orthog. Rosenbrock Freudensten & Roth Powell badly scaled Powell badly scaled Brown badly scaled Beale Jennrch & Sampson Helcal valley Helcal valley Bard Gaussan Meyer y 28527y y Gulf Research > 10 6 z Box 3-dmensonal Powell sngular Wood Kowalk and Osborne Brown and Denns Osborne Bggs exp Osborne Table 3: Numercal Results on 19 standard test functons for several varants of the algorthm. Column 4 lsts results when the ray search n step 2(e) s omtted. The results n column 5 were generated wth the orthogonalzaton of V n step 7 omtted, and the results n column 6 are for when both the orthogonalzaton of V and step 2(e) were omtted.
[Table 4 occupied this page; its numeric entries were lost in extraction. It listed the number of function evaluations on each of the 19 test problems for the full algorithm and for the variants without step 4, without steps 4 and 5, and with h halved only at grid local minima; entries marked † were terminated by the mesh size limit, and entries of the form > 10⁶, marked ‡, exceeded the function evaluation limit.]

Table 4: Numerical Results on 19 standard test functions for several variants of the algorithm. The fourth column lists results for the algorithm with the quasi-Newton step removed. The fifth column lists results when both the quasi-Newton step and the step to the estimated minimum over the manifold M were omitted. The sixth lists results for h kept constant except when a grid local minimum is found; immediately after this occurs h is reduced by a factor of 2.
erations. Results for which the algorithm terminated for the second or third reasons are marked with a † and ‡ respectively. In each case the lower limit on the mesh size h was set at 0.01 ε_acc. Entries marked with a ⋆ terminated before the optimal function value was attained.

The extra costs of steps 2(e), 3, and 4 are in terms of extra function evaluations, and so the work saved in omitting these steps is reflected in the listings in tables 3 and 4. In contrast, the savings in omitting the orthogonalization of V in step 7 take the form of reduced overheads, and so are not reflected in the tabulated figures.

Table 3 shows that deleting one of the skewer search in step 2(e) or the orthogonalization in step 7 either makes little difference, or worsens the algorithm's performance. Deleting both steps 2(e) and 7 significantly worsens the algorithm's performance on over half the problems listed. A danger with any grid method is that the grid local minimizer lies along a narrow valley which does not lie along any axis of the grid. Any significant movement along the valley requires many short movements along each of the grid axes in turn. Between grid local minimizers, opportunities to re-orient the grid are limited, and so it is possible that the algorithm will get forced into a very long zig-zagging search on one grid. The orthogonalization of V in step 7 and the skewer search in step 2(e) have been included to reduce the risk of this occurring, but they do not provide immunity. On both occasions when the algorithm exceeded the function evaluation limit, a very large number of line searches had been performed on one grid, indicating that zig-zagging was occurring.

Steps 4 and 5 perform similar functions in that both represent a step to the minimizer of an approximating quadratic on some subspace of ℝⁿ. The results listed in table 4 show that omitting step 4 improved performance on a few problems such as Rosenbrock's function, but worsened performance on others. In particular, the problems in higher dimensions required more function evaluations to solve.
Deleting both steps 4 and 5 worsened performance on most problems, particularly those of higher dimension. The algorithm was terminated by the limit on h on six runs: five of these were for the Meyer problem, and the sixth for Powell's badly scaled problem. On all but one of these runs the algorithm obtained the optimal function value. For the Meyer function with h⁽ᵏ⁺¹⁾ = h⁽ᵏ⁾/2 only, the algorithm stopped before the optimal function value was achieved. A simple calculation shows that this variant of the algorithm is limited to 25 grids: essentially the algorithm ran out of grids before reaching the solution. The same variant also ran out of grids before satisfying (3) on Powell's badly scaled problem.

The full algorithm was not the fastest variant on most of the problems, although for many problems the difference was marginal. However the full algorithm and the variant with step 4 omitted were the only two to solve all problems in a reasonable amount of time. The results indicate that the full algorithm is the more effective of these two variants in dimensions greater than about 3 or 4.
6 Conclusion

A provably convergent derivative free conjugate directions algorithm has been presented. Numerical results for general unconstrained problems show that the algorithm is effective in practice, even on problems which are ill-conditioned. The algorithm is based on a sequence of grids which are chosen to incorporate known second derivative information generated by use of the parallel subspace theorem. Consequently the algorithm retains the property of exact termination on strictly convex quadratics. This property is verified by numerical results for the family of tridiagonal quadratics. The algorithm is capable of making use of the continuity of second derivatives, but convergence is guaranteed under the weaker requirement of a C¹ locally Lipschitz objective function. A number of anti-zigzagging features were included in the algorithm. These features are not required by the convergence theory, but improved the algorithm's performance on the set of general test problems.

References

[1] Conn, A. R., K. Scheinberg, and P. Toint, On the convergence of derivative free methods for unconstrained optimization, in Approximation Theory and Optimization, M. D. Buhmann and A. Iserles, eds, Cambridge University Press, Cambridge, 1997, pp. 83-108.
[2] Coope, I. D. and C. J. Price, On the convergence of grid based methods for unconstrained optimization, Research Report 180, Department of Mathematics and Statistics, University of Canterbury, Christchurch, New Zealand.
[3] Fletcher, R., Practical Methods of Optimization, Wiley, 1987.
[4] Gill, P. E., W. Murray, and M. H. Wright, Practical Optimization, Academic Press, 1981.
[5] Moré, J. J., B. S. Garbow, and K. E. Hillstrom, Testing unconstrained optimization software, ACM Trans. Math. Software 7 (1981), pp. 17-41.
[6] Powell, M. J. D., Direct search algorithms for optimization calculations, Acta Numerica 7 (1998), Cambridge University Press, pp. 287-336.
[7] Powell, M. J. D., An efficient method of finding the minimum of a function of several variables without calculating derivatives, Computer J. 7 (1964), pp. 155-162.
[8] Smith, C. S., The automatic computation of maximum likelihood estimates, N.C.B. Sc. Dept. Report SC846/MR/40 (1962).
[9] Torczon, V., On the convergence of pattern search algorithms, SIAM J. Optimization 7 (1997), pp. 1-25.
Introducton to the R Statstcal Computng Envronment R Programmng John Fox McMaster Unversty ICPSR 2018 John Fox (McMaster Unversty) R Programmng ICPSR 2018 1 / 14 Programmng Bascs Topcs Functon defnton
More informationIV. Performance Optimization
IV. Performance Optmzaton A. Steepest descent algorthm defnton how to set up bounds on learnng rate mnmzaton n a lne (varyng learnng rate) momentum learnng examples B. Newton s method defnton Gauss-Newton
More informationMore metrics on cartesian products
More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of
More informationSolutions HW #2. minimize. Ax = b. Give the dual problem, and make the implicit equality constraints explicit. Solution.
Solutons HW #2 Dual of general LP. Fnd the dual functon of the LP mnmze subject to c T x Gx h Ax = b. Gve the dual problem, and make the mplct equalty constrants explct. Soluton. 1. The Lagrangan s L(x,
More informationMATH 829: Introduction to Data Mining and Analysis The EM algorithm (part 2)
1/16 MATH 829: Introducton to Data Mnng and Analyss The EM algorthm (part 2) Domnque Gullot Departments of Mathematcal Scences Unversty of Delaware Aprl 20, 2016 Recall 2/16 We are gven ndependent observatons
More informationInexact Newton Methods for Inverse Eigenvalue Problems
Inexact Newton Methods for Inverse Egenvalue Problems Zheng-jan Ba Abstract In ths paper, we survey some of the latest development n usng nexact Newton-lke methods for solvng nverse egenvalue problems.
More informationFoundations of Arithmetic
Foundatons of Arthmetc Notaton We shall denote the sum and product of numbers n the usual notaton as a 2 + a 2 + a 3 + + a = a, a 1 a 2 a 3 a = a The notaton a b means a dvdes b,.e. ac = b where c s an
More informationVector Norms. Chapter 7 Iterative Techniques in Matrix Algebra. Cauchy-Bunyakovsky-Schwarz Inequality for Sums. Distances. Convergence.
Vector Norms Chapter 7 Iteratve Technques n Matrx Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematcs Unversty of Calforna, Berkeley Math 128B Numercal Analyss Defnton A vector norm
More informationProblem Set 9 Solutions
Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem
More informationSome modelling aspects for the Matlab implementation of MMA
Some modellng aspects for the Matlab mplementaton of MMA Krster Svanberg krlle@math.kth.se Optmzaton and Systems Theory Department of Mathematcs KTH, SE 10044 Stockholm September 2004 1. Consdered optmzaton
More informationEEL 6266 Power System Operation and Control. Chapter 3 Economic Dispatch Using Dynamic Programming
EEL 6266 Power System Operaton and Control Chapter 3 Economc Dspatch Usng Dynamc Programmng Pecewse Lnear Cost Functons Common practce many utltes prefer to represent ther generator cost functons as sngle-
More informationPractical Newton s Method
Practcal Newton s Method Lecture- n Newton s Method n Pure Newton s method converges radly once t s close to. It may not converge rom the remote startng ont he search drecton to be a descent drecton rue
More informationMarkov Chain Monte Carlo (MCMC), Gibbs Sampling, Metropolis Algorithms, and Simulated Annealing Bioinformatics Course Supplement
Markov Chan Monte Carlo MCMC, Gbbs Samplng, Metropols Algorthms, and Smulated Annealng 2001 Bonformatcs Course Supplement SNU Bontellgence Lab http://bsnuackr/ Outlne! Markov Chan Monte Carlo MCMC! Metropols-Hastngs
More informationDepartment of Chemical and Biological Engineering LECTURE NOTE II. Chapter 3. Function of Several Variables
LECURE NOE II Chapter 3 Functon of Several Varables Unconstraned multvarable mnmzaton problem: mn f ( x), x R x N where x s a vector of desgn varables of dmenson N, and f s a scalar obectve functon - Gradent
More informationConvexity preserving interpolation by splines of arbitrary degree
Computer Scence Journal of Moldova, vol.18, no.1(52), 2010 Convexty preservng nterpolaton by splnes of arbtrary degree Igor Verlan Abstract In the present paper an algorthm of C 2 nterpolaton of dscrete
More informationThe Study of Teaching-learning-based Optimization Algorithm
Advanced Scence and Technology Letters Vol. (AST 06), pp.05- http://dx.do.org/0.57/astl.06. The Study of Teachng-learnng-based Optmzaton Algorthm u Sun, Yan fu, Lele Kong, Haolang Q,, Helongang Insttute
More informationCHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE
CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng
More information5 The Rational Canonical Form
5 The Ratonal Canoncal Form Here p s a monc rreducble factor of the mnmum polynomal m T and s not necessarly of degree one Let F p denote the feld constructed earler n the course, consstng of all matrces
More informationQPCOMP: A Quadratic Programming Based Solver for Mixed. Complementarity Problems. February 7, Abstract
QPCOMP: A Quadratc Programmng Based Solver for Mxed Complementarty Problems Stephen C. Bllups y and Mchael C. Ferrs z February 7, 1996 Abstract QPCOMP s an extremely robust algorthm for solvng mxed nonlnear
More informationCSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography
CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve
More informationModule 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur
Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:
More informationLecture 10 Support Vector Machines II
Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed
More informationLectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix
Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could
More informationCME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13
CME 30: NUMERICAL LINEAR ALGEBRA FALL 005/06 LECTURE 13 GENE H GOLUB 1 Iteratve Methods Very large problems (naturally sparse, from applcatons): teratve methods Structured matrces (even sometmes dense,
More informationLecture 12: Discrete Laplacian
Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly
More information4DVAR, according to the name, is a four-dimensional variational method.
4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The
More informationErrors for Linear Systems
Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch
More informationA Hybrid Variational Iteration Method for Blasius Equation
Avalable at http://pvamu.edu/aam Appl. Appl. Math. ISSN: 1932-9466 Vol. 10, Issue 1 (June 2015), pp. 223-229 Applcatons and Appled Mathematcs: An Internatonal Journal (AAM) A Hybrd Varatonal Iteraton Method
More informationLecture 20: November 7
0-725/36-725: Convex Optmzaton Fall 205 Lecturer: Ryan Tbshran Lecture 20: November 7 Scrbes: Varsha Chnnaobreddy, Joon Sk Km, Lngyao Zhang Note: LaTeX template courtesy of UC Berkeley EECS dept. Dsclamer:
More informationn α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0
MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector
More informationNegative Binomial Regression
STATGRAPHICS Rev. 9/16/2013 Negatve Bnomal Regresson Summary... 1 Data Input... 3 Statstcal Model... 3 Analyss Summary... 4 Analyss Optons... 7 Plot of Ftted Model... 8 Observed Versus Predcted... 10 Predctons...
More informationMatrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD
Matrx Approxmaton va Samplng, Subspace Embeddng Lecturer: Anup Rao Scrbe: Rashth Sharma, Peng Zhang 0/01/016 1 Solvng Lnear Systems Usng SVD Two applcatons of SVD have been covered so far. Today we loo
More information1 Convex Optimization
Convex Optmzaton We wll consder convex optmzaton problems. Namely, mnmzaton problems where the objectve s convex (we assume no constrants for now). Such problems often arse n machne learnng. For example,
More informationStrong Markov property: Same assertion holds for stopping times τ.
Brownan moton Let X ={X t : t R + } be a real-valued stochastc process: a famlty of real random varables all defned on the same probablty space. Defne F t = nformaton avalable by observng the process up
More informationLinear Approximation with Regularization and Moving Least Squares
Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...
More informationStat 543 Exam 2 Spring 2016
Stat 543 Exam 2 Sprng 2016 I have nether gven nor receved unauthorzed assstance on ths exam. Name Sgned Date Name Prnted Ths Exam conssts of 11 questons. Do at least 10 of the 11 parts of the man exam.
More informationNumerical Solution of Ordinary Differential Equations
Numercal Methods (CENG 00) CHAPTER-VI Numercal Soluton of Ordnar Dfferental Equatons 6 Introducton Dfferental equatons are equatons composed of an unknown functon and ts dervatves The followng are examples
More informationNorms, Condition Numbers, Eigenvalues and Eigenvectors
Norms, Condton Numbers, Egenvalues and Egenvectors 1 Norms A norm s a measure of the sze of a matrx or a vector For vectors the common norms are: N a 2 = ( x 2 1/2 the Eucldean Norm (1a b 1 = =1 N x (1b
More informationSolutions to exam in SF1811 Optimization, Jan 14, 2015
Solutons to exam n SF8 Optmzaton, Jan 4, 25 3 3 O------O -4 \ / \ / The network: \/ where all lnks go from left to rght. /\ / \ / \ 6 O------O -5 2 4.(a) Let x = ( x 3, x 4, x 23, x 24 ) T, where the varable
More informationStat 543 Exam 2 Spring 2016
Stat 543 Exam 2 Sprng 206 I have nether gven nor receved unauthorzed assstance on ths exam. Name Sgned Date Name Prnted Ths Exam conssts of questons. Do at least 0 of the parts of the man exam. I wll score
More informationA PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS
HCMC Unversty of Pedagogy Thong Nguyen Huu et al. A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS Thong Nguyen Huu and Hao Tran Van Department of mathematcs-nformaton,
More informationVQ widely used in coding speech, image, and video
at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng
More information= = = (a) Use the MATLAB command rref to solve the system. (b) Let A be the coefficient matrix and B be the right-hand side of the system.
Chapter Matlab Exercses Chapter Matlab Exercses. Consder the lnear system of Example n Secton.. x x x y z y y z (a) Use the MATLAB command rref to solve the system. (b) Let A be the coeffcent matrx and
More information8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS
SECTION 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS 493 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS All the vector spaces you have studed thus far n the text are real vector spaces because the scalars
More informationLOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin
Proceedngs of the 007 Wnter Smulaton Conference S G Henderson, B Bller, M-H Hseh, J Shortle, J D Tew, and R R Barton, eds LOW BIAS INTEGRATED PATH ESTIMATORS James M Calvn Department of Computer Scence
More informationVARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES
VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES BÂRZĂ, Slvu Faculty of Mathematcs-Informatcs Spru Haret Unversty barza_slvu@yahoo.com Abstract Ths paper wants to contnue
More informationCollege of Computer & Information Science Fall 2009 Northeastern University 20 October 2009
College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:
More informationU.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017
U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 4s Luca Trevsan September 5, 07 Summary of Lecture 4 In whch we ntroduce semdefnte programmng and apply t to Max Cut. Semdefnte Programmng Recall that
More informationP A = (P P + P )A = P (I P T (P P ))A = P (A P T (P P )A) Hence if we let E = P T (P P A), We have that
Backward Error Analyss for House holder Reectors We want to show that multplcaton by householder reectors s backward stable. In partcular we wsh to show fl(p A) = P (A) = P (A + E where P = I 2vv T s the
More informationYong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 )
Kangweon-Kyungk Math. Jour. 4 1996), No. 1, pp. 7 16 AN ITERATIVE ROW-ACTION METHOD FOR MULTICOMMODITY TRANSPORTATION PROBLEMS Yong Joon Ryang Abstract. The optmzaton problems wth quadratc constrants often
More informationInductance Calculation for Conductors of Arbitrary Shape
CRYO/02/028 Aprl 5, 2002 Inductance Calculaton for Conductors of Arbtrary Shape L. Bottura Dstrbuton: Internal Summary In ths note we descrbe a method for the numercal calculaton of nductances among conductors
More information2 STATISTICALLY OPTIMAL TRAINING DATA 2.1 A CRITERION OF OPTIMALITY We revew the crteron of statstcally optmal tranng data (Fukumzu et al., 1994). We
Advances n Neural Informaton Processng Systems 8 Actve Learnng n Multlayer Perceptrons Kenj Fukumzu Informaton and Communcaton R&D Center, Rcoh Co., Ltd. 3-2-3, Shn-yokohama, Yokohama, 222 Japan E-mal:
More information1 GSW Iterative Techniques for y = Ax
1 for y = A I m gong to cheat here. here are a lot of teratve technques that can be used to solve the general case of a set of smultaneous equatons (wrtten n the matr form as y = A), but ths chapter sn
More informationP R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /
Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons
More informationChapter - 2. Distribution System Power Flow Analysis
Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load
More information2016 Wiley. Study Session 2: Ethical and Professional Standards Application
6 Wley Study Sesson : Ethcal and Professonal Standards Applcaton LESSON : CORRECTION ANALYSIS Readng 9: Correlaton and Regresson LOS 9a: Calculate and nterpret a sample covarance and a sample correlaton
More informationA Ferris-Mangasarian Technique. Applied to Linear Least Squares. Problems. J. E. Dennis, Trond Steihaug. May Rice University
A Ferrs-Mangasaran Technque Appled to Lnear Least Squares Problems J. E. Denns, Trond Stehaug CRPC-TR98740 May 1998 Center for Research on Parallel Computaton Rce Unversty 6100 South Man Street CRPC -
More informationSingle Variable Optimization
8/4/07 Course Instructor Dr. Raymond C. Rump Oce: A 337 Phone: (95) 747 6958 E Mal: rcrump@utep.edu Topc 8b Sngle Varable Optmzaton EE 4386/530 Computatonal Methods n EE Outlne Mathematcal Prelmnares Sngle
More informationAn Algorithm to Solve the Inverse Kinematics Problem of a Robotic Manipulator Based on Rotation Vectors
An Algorthm to Solve the Inverse Knematcs Problem of a Robotc Manpulator Based on Rotaton Vectors Mohamad Z. Al-az*, Mazn Z. Othman**, and Baker B. Al-Bahr* *AL-Nahran Unversty, Computer Eng. Dep., Baghdad,
More informationLecture 2 Solution of Nonlinear Equations ( Root Finding Problems )
Lecture Soluton o Nonlnear Equatons Root Fndng Problems Dentons Classcaton o Methods Analytcal Solutons Graphcal Methods Numercal Methods Bracketng Methods Open Methods Convergence Notatons Root Fndng
More information[7] R.S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Clis, New Jersey, (1962).
[7] R.S. Varga, Matrx Iteratve Analyss, Prentce-Hall, Englewood ls, New Jersey, (962). [8] J. Zhang, Multgrd soluton of the convecton-duson equaton wth large Reynolds number, n Prelmnary Proceedngs of
More informationAPPENDIX 2 FITTING A STRAIGHT LINE TO OBSERVATIONS
Unversty of Oulu Student Laboratory n Physcs Laboratory Exercses n Physcs 1 1 APPEDIX FITTIG A STRAIGHT LIE TO OBSERVATIOS In the physcal measurements we often make a seres of measurements of the dependent
More informationCIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M
CIS56: achne Learnng Lecture 3 (Sept 6, 003) Preparaton help: Xaoyng Huang Lnear Regresson Lnear regresson can be represented by a functonal form: f(; θ) = θ 0 0 +θ + + θ = θ = 0 ote: 0 s a dummy attrbute
More informationSecond Order Analysis
Second Order Analyss In the prevous classes we looked at a method that determnes the load correspondng to a state of bfurcaton equlbrum of a perfect frame by egenvalye analyss The system was assumed to
More informationThe Algorithms of Broyden-CG for. Unconstrained Optimization Problems
Internatonal Journal of Mathematcal Analyss Vol. 8, 014, no. 5, 591-600 HIKARI Ltd, www.m-hkar.com http://dx.do.org/10.1988/jma.014.497 he Algorthms of Broyden-CG for Unconstraned Optmzaton Problems Mohd
More informationChapter 3 Describing Data Using Numerical Measures
Chapter 3 Student Lecture Notes 3-1 Chapter 3 Descrbng Data Usng Numercal Measures Fall 2006 Fundamentals of Busness Statstcs 1 Chapter Goals To establsh the usefulness of summary measures of data. The
More information3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X
Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number
More informationPattern Classification
Pattern Classfcaton All materals n these sldes ere taken from Pattern Classfcaton (nd ed) by R. O. Duda, P. E. Hart and D. G. Stork, John Wley & Sons, 000 th the permsson of the authors and the publsher
More informationform, and they present results of tests comparng the new algorthms wth other methods. Recently, Olschowka & Neumaer [7] ntroduced another dea for choo
Scalng and structural condton numbers Arnold Neumaer Insttut fur Mathematk, Unverstat Wen Strudlhofgasse 4, A-1090 Wen, Austra emal: neum@cma.unve.ac.at revsed, August 1996 Abstract. We ntroduce structural
More informationρ some λ THE INVERSE POWER METHOD (or INVERSE ITERATION) , for , or (more usually) to
THE INVERSE POWER METHOD (or INVERSE ITERATION) -- applcaton of the Power method to A some fxed constant ρ (whch s called a shft), x λ ρ If the egenpars of A are { ( λ, x ) } ( ), or (more usually) to,
More informationLeast squares cubic splines without B-splines S.K. Lucas
Least squares cubc splnes wthout B-splnes S.K. Lucas School of Mathematcs and Statstcs, Unversty of South Australa, Mawson Lakes SA 595 e-mal: stephen.lucas@unsa.edu.au Submtted to the Gazette of the Australan
More informationA New Refinement of Jacobi Method for Solution of Linear System Equations AX=b
Int J Contemp Math Scences, Vol 3, 28, no 17, 819-827 A New Refnement of Jacob Method for Soluton of Lnear System Equatons AX=b F Naem Dafchah Department of Mathematcs, Faculty of Scences Unversty of Gulan,
More information6) Derivatives, gradients and Hessian matrices
30C00300 Mathematcal Methods for Economsts (6 cr) 6) Dervatves, gradents and Hessan matrces Smon & Blume chapters: 14, 15 Sldes by: Tmo Kuosmanen 1 Outlne Defnton of dervatve functon Dervatve notatons
More informationOn the Multicriteria Integer Network Flow Problem
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 5, No 2 Sofa 2005 On the Multcrtera Integer Network Flow Problem Vassl Vasslev, Marana Nkolova, Maryana Vassleva Insttute of
More informationLINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity
LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have
More informationNewton s Method for One - Dimensional Optimization - Theory
Numercal Methods Newton s Method for One - Dmensonal Optmzaton - Theory For more detals on ths topc Go to Clck on Keyword Clck on Newton s Method for One- Dmensonal Optmzaton You are free to Share to copy,
More informationSingle-Facility Scheduling over Long Time Horizons by Logic-based Benders Decomposition
Sngle-Faclty Schedulng over Long Tme Horzons by Logc-based Benders Decomposton Elvn Coban and J. N. Hooker Tepper School of Busness, Carnege Mellon Unversty ecoban@andrew.cmu.edu, john@hooker.tepper.cmu.edu
More informationCHAPTER 4d. ROOTS OF EQUATIONS
CHAPTER 4d. ROOTS OF EQUATIONS A. J. Clark School o Engneerng Department o Cvl and Envronmental Engneerng by Dr. Ibrahm A. Assakka Sprng 00 ENCE 03 - Computaton Methods n Cvl Engneerng II Department o
More informationThe Expectation-Maximization Algorithm
The Expectaton-Maxmaton Algorthm Charles Elan elan@cs.ucsd.edu November 16, 2007 Ths chapter explans the EM algorthm at multple levels of generalty. Secton 1 gves the standard hgh-level verson of the algorthm.
More informationMath1110 (Spring 2009) Prelim 3 - Solutions
Math 1110 (Sprng 2009) Solutons to Prelm 3 (04/21/2009) 1 Queston 1. (16 ponts) Short answer. Math1110 (Sprng 2009) Prelm 3 - Solutons x a 1 (a) (4 ponts) Please evaluate lm, where a and b are postve numbers.
More informationPolynomial Regression Models
LINEAR REGRESSION ANALYSIS MODULE XII Lecture - 6 Polynomal Regresson Models Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur Test of sgnfcance To test the sgnfcance
More informationKernel Methods and SVMs Extension
Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general
More information