ALGORITHMS FOR OPTIMAL DECISIONS


1. INTRODUCTION:
   - Nonlinear decision problems
   - Basic concepts and optimality
   - Basic algorithms: steepest descent; Frank-Wolfe; barrier/SUMT
2. PRIMAL-DUAL INTERIOR POINT LP ALGORITHMS
3. QUADRATIC PROGRAMMING ALGORITHMS & PORTFOLIO OPTIMISATION
4. SQP & INTERIOR POINT ALGORITHMS FOR NLP
either
5. ALGORITHMS FOR GAMES; EQUILIBRIA & NONLINEAR SYSTEMS OF EQUATIONS
or
6. NONLINEAR PROGRAMMING ALGORITHMS FOR CONVEX CONSTRAINTS

Suggested reference books:
- Nonlinear Programming - D. Bertsekas (Athena)
- Practical Methods of Optimization - R. Fletcher (Wiley)
- Linear & Nonlinear Programming - D.G. Luenberger & Y. Ye (Springer)
- Numerical Optimization - J. Nocedal & S. Wright (Springer)

INTRODUCTION

1. THE GENERAL NONLINEAR PROGRAMMING (NLP) PROBLEM:

   min f(x) s.t. h(x) <= 0; g(x) = 0   or   max f(x) s.t. h(x) <= 0; g(x) = 0

where
   x = (x1, ..., xn),  h(x) = (h1(x), ..., hm(x)),  g(x) = (g1(x), ..., ge(x)).

Feasible region: Ω = { x | g(x) = 0; h(x) <= 0 }.
Optimality of x* for min of f(x): x* ∈ Ω and f(x*) <= f(x), for all x ∈ Ω.
Optimality of x* for max of f(x): x* ∈ Ω and f(x*) >= f(x), for all x ∈ Ω.
f, g and h linear ⇒ LP problem.

Example 1: To manufacture a product costs c/unit. It sells at price p/unit. Customer demand is d(p) units. Find the price to charge to maximise profits. The profit is (p - c) d(p). The decision variable is p. Problem: max_p (p - c) d(p).

Example 2: If k units of capital and j units of labour are used, a company can manufacture kj units of a product. Capital can be purchased at 4/unit and labour at 1/unit. A total of 8 is available to purchase k and j. How can the company maximise the units of the product that can be manufactured? We wish to solve max kj s.t. 4k + j <= 8; k >= 0; j >= 0.

Remark 1: Even if Ω is a convex set, the optimal solution of an NLP is unlikely to be at an extreme point (vertex) of Ω. In Example 2, the feasible region is bounded by the triangle ABC; consider the constant-profit curves kj = 1, kj = 2 and kj = 4. The optimal solution occurs where the constant-profit curve is tangent to the boundary of Ω. Thus the


optimal kj = 4, at k = 1, j = 4 (point D). Of course, point D is not an extreme point of Ω, because the constant-profit lines are not straight. FIGURE 1.

Remark 2: The optimal solution of an NLP may not even be on the boundary of Ω. For example, consider max x - x² s.t. 0 <= x <= 1. The optimal solution is at x = 1/2, which is clearly not on the boundary of Ω. FIGURE 2.

Example 3: Furniture House (FH) has 700 units of wood (W) for manufacturing tables (T) and chairs (C). Small productions require 2 units of W per T and 1 unit of W per C. For small productions, FH's distributor will pay 10/T and 5/C. Let x1, x2 denote respectively the number of T and C produced. For larger quantities of T's and C's, more precise manufacturing methods become worthwhile. This economy of scale (including saving and reusing shavings, scraps etc.) makes the W used dependent on x1, x2. Suppose
   units of W used for T's = x1^0.5 + x1;   units of W used for C's = 0.5 x2^0.5 + 0.5 x2.
FH's distributor requires twice as many C's as T's. For large orders, it requires an incentive discount of 0.1% for every T or C, so that
   price_T(x1 + x2) = 10 (1 - 0.001(x1 + x2));   price_C(x1 + x2) = 5 (1 - 0.001(x1 + x2)).
How many T's and C's should FH manufacture to maximise its revenue?

   max_{x1, x2} f(x1, x2) = 10 x1 (1 - 0.001(x1 + x2)) + 5 x2 (1 - 0.001(x1 + x2))
   subject to
   h1(x1, x2) = x1^0.5 + x1 + 0.5 x2^0.5 + 0.5 x2 - 700 <= 0
   h2(x1, x2) = 2 x1 - x2 <= 0
   h3(x1, x2) = -x1 <= 0;  h4(x1, x2) = -x2 <= 0.

This problem has n = 2 variables and m = 4 constraints. FIGURE 3.

2. LOCAL MAXIMA OR MINIMA

For any NLP, x̄ ∈ Ω is a local maximum if, for all x ∈ Ω


with |xi - x̄i| <= ε (i = 1, ..., n) for some ε > 0, f(x̄) >= f(x). Thus, x̄ is a local maximum if f(x̄) >= f(x) holds for all feasible x that are close to x̄. A local minimum is defined similarly: x̄ is a local minimum if f(x̄) <= f(x) holds for all feasible x close to x̄.

3. CONVEX & CONCAVE FUNCTIONS

A set S is convex if x', x'' ∈ S ⇒ αx' + (1-α)x'' ∈ S, 0 <= α <= 1. A function f(x) is convex on S if for x', x'' ∈ S, 0 <= α <= 1,
   f(αx' + (1-α)x'') <= αf(x') + (1-α)f(x'')
[the line joining f(x') and f(x'') is never below the function value at any intermediate point]. A function f(x) is concave on S if for x', x'' ∈ S, 0 <= α <= 1,
   f(αx' + (1-α)x'') >= αf(x') + (1-α)f(x'').
⇒ f(x) is convex if and only if -f(x) is concave. FIGURES 4 & 5.

Remark: For a linear function, f(x) = aᵀx + b, equality follows, so it is both convex and concave:
   aᵀ(αx' + (1-α)x'') + b = α(aᵀx' + b) + (1-α)(aᵀx'' + b).

Example 4: For x >= 0, f(x) = x² and f(x) = eˣ are convex; f(x) = x^(1/2) is concave. FIGURE 6.

Example 5: It can be shown that the sum of two convex (concave) functions is convex (concave). Thus, f(x) = x² + eˣ is convex.

Theorem: For convex S and convex f(x), a local minimum is also the global minimum; for concave f(x), a local maximum is also the global maximum.

Proof: Suppose that x* is a local minimum. Suppose also that there is another point x** ∈ S, x** ≠ x*, with f(x**) < f(x*).



Then, for αx** + (1-α)x*, α ∈ (0, 1), we have
   f(αx** + (1-α)x*) <= αf(x**) + (1-α)f(x*) < f(x*),
and as α → 0 such points are arbitrarily close to x*, which contradicts that x* is a local minimum point.

Let
   ∇f = (∂f/∂x1, ∂f/∂x2, ∂f/∂x3, ..., ∂f/∂xn).

Theorem: Let f(x) be once continuously differentiable. Then f(x) is convex over a convex set S if and only if
   f(y) >= f(x) + ∇f(x)ᵀ(y - x)   for all x, y ∈ S.
Proof: We omit the proof (see e.g. Luenberger and Ye) but motivate it by noting that the tangent at x is always below a convex function f.

Definition: The Hessian of f(x) at x is the symmetric matrix H(x1, ..., xn) whose ij-th entry is
   Hij(x1, ..., xn) = ∂²f(x)/∂xi∂xj.

Example 6: f(x1, x2) = (x1)³ + 2x1x2 + (x2)²,  H(x1, x2) = [ 6x1  2 ; 2  2 ].

Definition: H(x) is positive (negative) semi-definite if for all v ≠ 0, vᵀH(x)v >= 0 (<= 0). Furthermore, H(x) is positive (negative) definite if for all v ≠ 0, vᵀH(x)v > 0 (< 0).

Theorem: Suppose f(x) has second partial derivatives for all x ∈ S, where S contains an interior point. Then f(x) is a convex (concave) function if and only if, for all x ∈ S, H(x) is positive (negative) semi-definite.

Proof:

By Taylor expansion we have:
   f(y) = f(x) + ∇f(x)ᵀ(y - x) + ½ (y - x)ᵀ H(x + α(y - x)) (y - x)
for some α, 0 <= α <= 1. Clearly, if the Hessian is positive semi-definite everywhere, we have f(y) >= f(x) + ∇f(x)ᵀ(y - x), which, by the above theorem, implies that f(x) is convex.

Suppose the Hessian is not positive semi-definite at some point x ∈ S. By continuity of the Hessian, it can be assumed that x is an interior point of S. There is a y ∈ S such that (y - x)ᵀH(x)(y - x) < 0. Again, by continuity of the Hessian, y may be selected so that for all α, 0 <= α <= 1, (y - x)ᵀH(x + α(y - x))(y - x) < 0. This, in view of the Taylor expansion above, implies that f(y) >= f(x) + ∇f(x)ᵀ(y - x) does not hold and consequently, in view of the previous theorem, f is not convex.

Example 7: f(x1, x2) = (x1)² + x1x2 + (x2)² is a convex function on S = ℝ². We have
   ∇f(x1, x2) = (2x1 + x2, x1 + 2x2);   H(x1, x2) = [ 2  1 ; 1  2 ].
H is positive semi-definite (each principal minor is nonnegative).

Exercises:
1. Show that the intersection X = ∩i Si of any number of convex sets Si is convex. [Follows by the definition of convex Si.]
2. Show that if f(x) and g(x) are convex functions on a convex set S, then f(x) + g(x) is a convex function on S.
3. Show that if f is a convex function, { x | f(x) <= b } is a convex set (if f is concave, { x | f(x) >= b } is convex).

4. UNCONSTRAINED OPTIMISATION

   min_x f(x)   or   max_x f(x)

Definition: A point x* for which ∇f(x*) = 0 is a stationary point of f(x). If H(x*) is also positive (negative) definite, then x* is a local min (max) of f(x).

Remark: Convex (concave) f(x) ⇒ x* is a global minimum (maximum) (see the Theorem in Section 3).

Example 8: A monopolist produces a product for two types of customers (C1 & C2). If q1 units are produced for C1, then C1 is willing to pay (70 - 4q1)/unit. For C2, this is (50 - 5q2)/unit. The manufacturing cost for producing q1 + q2 units is 100 + 5(q1 + q2). To maximise profit, how much should be sold to each customer?

Objective profit function: f(q1, q2) = q1(70 - 4q1) + q2(50 - 5q2) - 100 - 5(q1 + q2)

   ∇f(q1, q2) = (70 - 8q1 - 5, 50 - 10q2 - 5) = 0  ⇒  q1 = 65/8, q2 = 9/2;

thus we have found the stationary point of f. Next we examine the Hessian
   H(q1, q2) = [ -8  0 ; 0  -10 ],
which is negative definite. Thus q1 = 65/8, q2 = 9/2 maximises f and yields the profit f(65/8, 9/2) ≈ 265.3.

Exercise: A company manufactures two products. If it charges a price pi for product i, it can sell qi units of product i, where q1 = 60 - 3p1 + p2 and q2 = 80 - 2p2 + p1. It costs 5/unit to produce product 1 and 7/unit to produce product 2. In order to maximise profit, how much of each product should be produced?

5. OPTIMALITY: LAGRANGE MULTIPLIERS

For the equality constrained optimisation problem

   min f(x) s.t. g(x) = 0   or   max f(x) s.t. g(x) = 0,   g(x) = (g1(x), g2(x), g3(x), ..., ge(x)),

the Lagrangian is defined as
   L(x, λ) = f(x) + <λ, g(x)> = f(x) + λᵀg(x) = f(x) + Σ_{i=1}^{e} λi gi(x).

The constrained minimum (maximum) is the stationary point (x*, λ*) of L with respect to (x, λ):
   ∇L(x*, λ*) = ( ∇f(x*) + Σ_{i=1}^{e} λi* ∇gi(x*) ;  g(x*) ) = 0,
and it is necessary that the Hessian of the Lagrangian
   ∇x²L(x*, λ*) = [ ∂²L(x*, λ*)/∂xi∂xj ]
is positive (negative) semi-definite on the intersection of the constraints:
   vᵀ ∇x²L(x*, λ*) v >= 0 (<= 0)   for all v: <∇gi(x*), v> = 0, for all i = 1, ..., e.
Furthermore, it is sufficient that ∇x²L(x*, λ*) is positive (negative) definite for all v satisfying <∇gi(x*), v> = 0, for all i = 1, ..., e.

For convex (concave) f and for g(x) linear, the stationary point of L is ensured to be a minimum (maximum), as the condition on the Hessian is satisfied.

The multiplier vector λ (like the dual variables in LP) gives shadow prices. The i-th shadow price (i.e. the i-th element of λ) corresponds to the i-th constraint. If the right hand side of the i-th constraint is increased by a small amount, say Δ (in either a max or min problem), then the optimal objective function value will increase by approximately Δλi.

Example 9: A company is planning to spend 10,000 on advertising. It costs 3,000/minute to advertise on TV and 1,000/minute to advertise on radio. If the company buys x minutes of TV and y minutes of radio advertising, its revenue is given by
   f(x, y) = -2x² - y² + xy + 8x + 3y.
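Before working through the multiplier solution by hand, the stationarity system of this example can be solved numerically. The sketch below assumes the data above (revenue -2x² - y² + xy + 8x + 3y, budget constraint 3x + y = 10, in thousands); setting the gradient of the Lagrangian to zero gives a 3x3 linear system in (x, y, λ).

```python
# Stationarity system for Example 9:
#   dL/dx   = -4x +  y + 8 - 3*lam = 0
#   dL/dy   =   x - 2y + 3 -   lam = 0
#   dL/dlam = 10 - 3x - y          = 0

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

A = [[-4.0,  1.0, -3.0],
     [ 1.0, -2.0, -1.0],
     [ 3.0,  1.0,  0.0]]
b = [-8.0, -3.0, 10.0]
x, y, lam = solve_linear(A, b)
print(x, y, lam)   # 69/28, 73/28, 1/4
```

The multiplier λ = 1/4 is the shadow price of the budget constraint, matching the sensitivity interpretation above.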

NLP: maximise f(x, y) = -2x² - y² + xy + 8x + 3y subject to 3x + y = 10.

Lagrangian: L(x, y, λ) = -2x² - y² + xy + 8x + 3y + λ(10 - 3x - y)

Stationary point:
   ∂L/∂x = -4x + y + 8 - 3λ = 0
   ∂L/∂y = x - 2y + 3 - λ = 0
   ∂L/∂λ = 10 - 3x - y = 0.
Eliminating λ from the first two equations gives -7x + 7y - 1 = 0, so y = x + 1/7; substituting into the constraint, 4x = 69/7, so
   x = 69/28,  y = 73/28,  λ = 1/4.

   H(x, y) = [ -4  1 ; 1  -2 ]

The Hessian of f(x, y) is negative definite; f(x, y) is concave. The constraint is linear, so the solution obtained maximises f(x, y). The firm should purchase 69/28 minutes of TV and 73/28 minutes of radio time. Since λ = 1/4, spending an extra Δ on advertising (for small Δ) would increase the company's revenues by approximately 0.25Δ.

Exercise: Labour costs 2/hour and capital costs 1/unit. If j hours of labour and K units of capital are available, then j^(2/3) K^(1/3) machines can be produced. If the budget for purchasing capital and labour is 10, what is the maximum number of machines that can be produced? What is the minimum cost method for producing 6 machines?

6. OPTIMALITY: KARUSH-KUHN-TUCKER (KKT) CONDITIONS

   min f(x) s.t. h(x) <= 0   or   max f(x) s.t. h(x) <= 0
   hi(x) <= 0  (or hi(x) <= bi ⇔ hi(x) - bi <= 0),  i = 1, ..., m.

- A constraint of the form h(x) >= b must be written as -h(x) + b <= 0 (e.g. x1 + x2 >= 1 should be rewritten as -x1 - x2 + 1 <= 0).
- An equality constraint can be replaced by a <= and a >= constraint (e.g. x1 + x2 = 1 is replaced by x1 + x2 <= 1 and x1 + x2 >= 1, i.e. -x1 - x2 + 1 <= 0). Alternatively, equality constraints are treated as in Section 5. (The general approach is in Bertsekas, Nonlinear Programming; Fletcher, Practical Methods of Optimization; Luenberger, Linear & Nonlinear Programming; Rustem, Algorithms for Nonlinear Programming & Multiple Objective Decisions.)

The Lagrangian is defined as
   L(x, μ) = f(x) + <μ, h(x)> = f(x) + μᵀh(x) = f(x) + Σ_{i=1}^{m} μi hi(x).

For f(x) and h(x) convex (local minimum ⇒ global minimum) and for a minimisation problem, the optimal solution satisfies the KKT optimality conditions:
   ∇f(x*) + Σ_{i=1}^{m} μi* ∇hi(x*) = 0, i.e.
   ∂f(x*)/∂xj + Σ_{i=1}^{m} μi* ∂hi(x*)/∂xj = 0,  j = 1, ..., n;
   μi* hi(x*) = 0;  hi(x*) <= 0;  μi* >= 0,  i = 1, ..., m.

When the convexity assumption is not necessarily satisfied at all points x for f and h, then it is necessary to formalise the second order condition that ensures the maximum or minimum. This is similar to the equality constrained case, and it is necessary that the Hessian
   ∇x²L(x*, μ*) = [ ∂²L(x*, μ*)/∂xi∂xj ]
is positive (negative) semi-definite for all v: <∇hi(x*), v> = 0, for all i with hi(x*) = 0. Note that the latter condition is related to the inequality constraints satisfied as equalities at the solution. Furthermore, it is sufficient that ∇x²L(x*, μ*) is positive (negative) definite on the intersection of the active constraints:
   vᵀ ∇x²L(x*, μ*) v > 0 (< 0),   for all v: <∇hi(x*), v> = 0, for all i with hi(x*) = 0.

The multiplier μi is the shadow price for the i-th constraint. We show now that μi is the sensitivity of the optimal value of the objective function to a change in the right hand side of the i-th constraint. For simplicity, we ignore the superscript * for the constraints. From the optimality conditions, we have ∇f(x*) + ∇h(x*)μ = 0, i.e. ∇f(x*) = -∇h(x*)μ. Let c be an arbitrary value for the right hand side of the constraints:
   h(x) = c  ⇒  ∂h/∂c = I,
and let x = x(c) (i.e. x satisfying h(x) = c is a function of c), all evaluated at x* and c = 0. Then
   I = ∂h(c)/∂c = ∇h(x*)ᵀ ∂x(c)/∂c,
   ∂f(c)/∂c = ∇f(x*)ᵀ ∂x(c)/∂c = -μᵀ ∇h(x*)ᵀ ∂x(c)/∂c = -μᵀ.
If the right hand side of the i-th constraint is increased from 0 to Δ units (for small Δ), then the optimal objective function value is changed by -μiΔ, i.e. decreased, since μi >= 0 (for a more rigorous discussion of shadow prices/Lagrange multipliers/sensitivity see Luenberger and Ye, p. 340).

The condition μi hi(x*) = 0 is a generalisation of the complementary slackness conditions of LPs. The condition implies that
   if μi > 0, then hi(x*) = 0  (i-th constraint active);
   if hi(x*) < 0, then μi = 0  (i-th constraint inactive).
Suppose that the constraint hi(x) <= 0 is a constraint on the use of a resource. Then the first implication states that if an additional unit of the resource associated with the i-th constraint is to have any value, then the current optimal solution must use all the available units of the resource, driving the i-th constraint to its bound. On the other hand, the second implication states that if some of the i-th resource is unused, then additional amounts of the i-th resource have no value. If, for Δ > 0, we increase the right hand side of the i-th constraint from 0 to Δ,

the optimal objective value must decrease or stay the same, because the increase expands the feasible region.

Example 10: For concave f(x), describe the optimal solution to max f(x) s.t. a <= x <= b or, alternatively, min -f(x) s.t. a <= x <= b.
- If ∇f(x) exists on the interval [a, b], then the optimal solution will either be at a point x* ∈ (a, b) or at a or b.
- If the solution is at some x* ∈ (a, b), then ∇f(x*) = 0.
- If the solution is at a, then ∇f(a) <= 0: for ∇f(a) > 0, moving a small distance away from a would be characterised by f(y) ≈ f(a) + ∇f(a)(y - a) and f(y) > f(a), for y > a, thereby contradicting the maximal nature of f(a). We also note that y < a is not feasible.
- If the solution is at b, then ∇f(b) >= 0.

We write the problem as min -f(x) s.t. x - b <= 0; -x + a <= 0, and the optimality conditions yield
   -∇f(x*) + μ1 - μ2 = 0
   μ1(x* - b) = 0
   μ2(-x* + a) = 0
   μ1, μ2 >= 0;  x* - b <= 0;  -x* + a <= 0.
- If μ1 = μ2 = 0, we have ∇f(x*) = 0.
- If μ1 = 0, μ2 > 0, the latter yields, from μ2(-x* + a) = 0, x* = a. Then -∇f(x*) + μ1 - μ2 = 0 yields ∇f(x*) = -μ2 ⇒ ∇f(x*) < 0.
- If μ2 = 0, μ1 > 0, μ1(x* - b) = 0 ⇒ x* = b and ∇f(x*) = μ1, so ∇f(x*) > 0.
- If μ1 > 0, μ2 > 0 we have x* = a and x* = b, which cannot occur.

Example 11: A monopolist can purchase up to 27.5 oz of chemical C for 10/oz. At a cost of 3/oz, C can be processed into an oz of P1; at a cost of 5/oz, C can be processed into an oz of P2. If x1 oz of P1 are produced,

and if x2 oz of P2 are produced,
   x1 oz of P1 sells at (30 - x1)/oz;  x2 oz of P2 sells at (50 - x2)/oz.
How can the monopolist maximise profit?

   x1 = oz of P1 produced;  x2 = oz of P2 produced;  x3 = oz of chemical processed

   max x1(30 - x1) + x2(50 - x2) - 3x1 - 5x2 - 10x3
   s.t. x1 + x2 - x3 <= 0;  x3 - 27.5 <= 0.

Also x1, x2, x3 >= 0, but as the optimal solution happens to satisfy these, we choose to avoid this complexity. The objective is concave, the constraints are linear, hence convex. So the optimal solution is determined by the first order KKT conditions, which are (written for the minimisation of the negative of the objective):
   2x1 - 27 + μ1 = 0;  2x2 - 45 + μ1 = 0;  10 - μ1 + μ2 = 0;
   μ1(x1 + x2 - x3) = 0;  μ2(x3 - 27.5) = 0;  μ1, μ2 >= 0.
- If μ1 = μ2 = 0, then 10 - μ1 + μ2 = 0 is violated. So this cannot occur.
- If μ1 = 0, μ2 > 0, we note that 10 - μ1 + μ2 = 0 would imply μ2 = -10, which would violate μ2 >= 0. Thus, this case cannot occur as well.
- If μ1 > 0, μ2 = 0, 10 - μ1 + μ2 = 0 implies μ1 = 10. Then 2x1 - 27 + μ1 = 0 yields x1 = 8.5 and 2x2 - 45 + μ1 = 0 yields x2 = 17.5. From μ1(x1 + x2 - x3) = 0, we obtain x1 + x2 - x3 = 0, and hence x3 = 26.
We need not consider the case μ1, μ2 > 0 since we have determined the solution. For Δ small, μ1 = 10 indicates that if an extra Δ oz of the chemical were obtained at no cost, then profits would increase by 10Δ. As μ2 = 0, the right to purchase an extra Δ oz of chemical would not increase profits.

Exercise: Use the KKT conditions to solve min (x1)² + (x2)² s.t. x1 + x2 - 1 = 0; x1 - x2 <= 0; x1, x2 >= 0.
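The case analysis of Example 11 can be verified numerically. The sketch below assumes the data used above (chemical limit 27.5 oz at 10/oz, processing costs 3/oz and 5/oz, prices 30 - x1 and 50 - x2) and checks stationarity, feasibility and complementary slackness at the candidate point.

```python
# KKT check for Example 11 at the candidate mu1 = 10, mu2 = 0,
# x1 = 8.5, x2 = 17.5, x3 = 26 (minimisation of the negated profit).

def grad_neg_profit(x1, x2, x3):
    # gradient of -[x1(30 - x1) + x2(50 - x2) - 3x1 - 5x2 - 10x3]
    return (2 * x1 - 27.0, 2 * x2 - 45.0, 10.0)

x1, x2, x3 = 8.5, 17.5, 26.0
mu1, mu2 = 10.0, 0.0
g = grad_neg_profit(x1, x2, x3)

# stationarity: grad f + mu1 * grad h1 + mu2 * grad h2 = 0,
# with h1 = x1 + x2 - x3 and h2 = x3 - 27.5
stat = (g[0] + mu1, g[1] + mu1, g[2] - mu1 + mu2)
print(stat)                                  # (0.0, 0.0, 0.0)
print(x1 + x2 - x3 <= 0, x3 - 27.5 <= 0)     # feasibility
print(mu1 * (x1 + x2 - x3), mu2 * (x3 - 27.5))  # complementary slackness
```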

7. THE STEEPEST DESCENT ALGORITHM

Suppose we wish to solve the unconstrained problem min_x f(x). If f(x) is a convex function, then the optimal solution will occur at a stationary point
   ∇f(x) = ( ∂f(x1, ..., xn)/∂x1, ..., ∂f(x1, ..., xn)/∂xn ) = 0.
In previous examples it was easy to determine the optimum solutions. For many functions, it may be difficult to find a stationary point satisfying optimality. We discuss the steepest descent algorithm (Cauchy) as the first iterative method for computing (approximately) optimal solutions.

Given a vector v = (v1, ..., vn), the length (norm) of v is given by
   ‖v‖ = √(vᵀv) = √((v1)² + (v2)² + ... + (vn)²).
Any n-dimensional vector represents a direction. For any direction, there are an infinite number of vectors representing that direction (e.g. (1, 1), (2, 2), (3, 3) all represent the direction pointing at a positive 45° angle). For any vector v, the normalised vector v/‖v‖ has a length of 1 and defines the same direction as v. Thus, with any direction we may associate a (unit) vector of length 1 (e.g. the direction defined by v = (1, 1), with ‖v‖ = √2, is associated with the unit vector v/‖v‖ = (1/√2, 1/√2)). For any vector v, the unit vector v/‖v‖ is called the normalised v. We shall use the normalised version of directions. The direction defined by (1, 1), (2, 2), (3, 3) will be described by the normalised vector

(1/√2, 1/√2).

Consider a function f whose first partial derivatives exist. The gradient of the function is given by ∇f(x), and we define the steepest descent direction
   d = -∇f(x)/‖∇f(x)‖.

Example 12: For f(x1, x2) = (x1)² + (x2)²,
   ∇f(x) = (2x1, 2x2),  ∇f(3, 4) = (6, 8),  ‖∇f(3, 4)‖ = 10  and  ∇f(3, 4)/‖∇f(3, 4)‖ = (.6, .8).

At any point x̂ which lies on the curve f(x1, ..., xn) = f(x̂), the vector d(x̂) = -∇f(x̂)/‖∇f(x̂)‖ will be perpendicular to the curve f(x1, ..., xn) = f(x̂).

Example 13: f(x1, x2) = (x1)² + (x2)²; at (3, 4),
   ∇f(3, 4)/‖∇f(3, 4)‖ = (.6, .8)
is perpendicular (orthogonal) to the curve (x1)² + (x2)² = 25. FIGURE 7.

If xi is altered by a small amount δ, the value of f(x) will alter by approximately δ ∂f(x)/∂xi (by definition of ∂f(x)/∂xi). Suppose we move from a point x a small length τ in a direction defined by a normalised vector Δ. By how much does f(x) change? The quantity
   d f(x + τΔ)/dτ at τ = 0, which equals Δᵀ∇f(x),
indicates a first order approximation for the amount of change. If Δ = -∇f(x̂)/‖∇f(x̂)‖, then clearly Δᵀ∇f(x̂) < 0, and moving in this direction for small τ > 0 will decrease f.

Example 14: f(x1, x2) = (x1)² + (x2)², at (3, 4). Hence ∇f(3, 4) = (6, 8).


If we move by a step τ in the direction Δ = (.6, .8), f will change (increase) by approximately
   τ Δᵀ∇f(x) = τ(.6·6 + .8·8) = 10τ.

To show that the steepest descent direction reduces the objective function, consider the second order Taylor series expansion of f(x - τ∇f(x)/‖∇f(x)‖) about x:
   f(x - τ ∇f(x)/‖∇f(x)‖) = f(x) - τ ‖∇f(x)‖ + ½ τ² (∇f(x)/‖∇f(x)‖)ᵀ H̄ (∇f(x)/‖∇f(x)‖)
where H̄ is the Hessian at some mean value between x and x - τ∇f(x)/‖∇f(x)‖:
   H̄ = H(x - ατ ∇f(x)/‖∇f(x)‖)  for some α, 0 <= α <= 1.
For τ small enough, the last term is negligible. Hence, we have the descent property
   f(x - τ ∇f(x)/‖∇f(x)‖) < f(x).

The steepest descent algorithm
Step 0: Starting at any point x0, set γ ∈ (0, 1), β ∈ (0, 1), k = 0.
Step 1: Evaluate ∇f(xk); if ‖∇f(xk)‖ <= macheps (machine precision), the optimum has been achieved, Stop. Else, evaluate -∇f(xk)/‖∇f(xk)‖.
Step 2: Determine the stepsize τk as the largest value τ = (β)^j, j = 0, 1, 2, ... (or equivalently the smallest value of j) for which the inequality
   f(xk - τ ∇f(xk)/‖∇f(xk)‖) - f(xk) <= -γ τ ‖∇f(xk)‖
is satisfied. The inequality above is known as the Armijo stepsize rule and ensures sufficient decrease of the objective along the steepest descent direction. Set
   x_{k+1} = xk - τk ∇f(xk)/‖∇f(xk)‖.
Step 3: Set k = k + 1, go to Step 1.

An earlier alternative to the Armijo rule was the solution of the problem
   min_{τ >= 0} f(xk - τ ∇f(xk)/‖∇f(xk)‖).
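The algorithm above can be sketched in code. This is a minimal sketch, not a robust implementation: the parameter values γ = 0.1 and β = 0.5 are illustrative choices (the text only requires them to lie in (0, 1)), and the test function is the quadratic used in the worked example that follows.

```python
import math

def steepest_descent(f, grad, x, gamma=0.1, beta=0.5, tol=1e-8, max_iter=1000):
    for _ in range(max_iter):
        g = grad(x)
        norm = math.sqrt(sum(gi * gi for gi in g))
        if norm <= tol:                       # Step 1: stationarity test
            break
        d = [-gi / norm for gi in g]          # normalised steepest descent direction
        # Step 2: Armijo rule -- largest tau = beta^j giving sufficient decrease
        tau = 1.0
        while f([xi + tau * di for xi, di in zip(x, d)]) - f(x) > -gamma * tau * norm:
            tau *= beta
        x = [xi + tau * di for xi, di in zip(x, d)]
    return x

f = lambda x: (x[0] - 3) ** 2 + (x[1] - 2) ** 2
grad = lambda x: [2 * (x[0] - 3), 2 * (x[1] - 2)]
x_star = steepest_descent(f, grad, [1.0, 1.0])
print(x_star)   # close to [3, 2]
```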

This univariate minimisation is, in general, an infinite procedure, whereas sufficient decrease can be achieved in a finite number of reductions of the stepsize through the Armijo rule τ = (β)^j, j = 0, 1, 2, ....

Example 15: Use the method of steepest descent to solve
   min f(x1, x2) = (x1 - 3)² + (x2 - 2)².
Choose arbitrarily x0 = (1, 1). We have
   ∇f(x1, x2) = (2(x1 - 3), 2(x2 - 2));  hence  ∇f(1, 1) = (-4, -2).
According to min_{τ >= 0} f(xk - τ∇f(xk)) we must choose τ to minimise the one dimensional function f(x0 - τ∇f(x0)):
   f(1 + 4τ, 1 + 2τ) = (4τ - 2)² + (2τ - 1)²,
which is minimised where
   df/dτ = 8(4τ - 2) + 4(2τ - 1) = 0.
Hence, τ = 0.5. The new point is
   x1 = (1, 1) + 0.5 (4, 2) = (3, 2).
As ∇f(3, 2) = 0 and f(x) is convex (Hessian!), we have arrived at an optimum.

Example 15 is deceptive. The steepest descent algorithm converges to an optimum in one step extremely rarely. Its convergence is very slow. It is used only in very large problems where other methods might be inappropriate. We mention this algorithm because it is the simplest, hence useful to motivate more complex algorithms.

Exercises:
1. For any x (≠ 0), show that x/‖x‖ has unit length.
2. Use steepest descent to solve the problem min f(x1, x2) = (x1)² + x1x2 + (x2)². Begin at (1.5, 1.5).

3. Write a program to solve min f(x1, x2) = x1x2 - x1 + (x1)² + (x2)² and observe how slow progress may become. Begin at (1.5, 1.5).
4. Show that the steepest descent direction -∇f(xk)/‖∇f(xk)‖ is the solution of
   min_d { ∇f(xk)ᵀd | ‖d‖ = 1 }.
Hence, the steepest descent direction is based on the linear approximation to the original problem, since the solution of the above is the same as
   min_d { f(xk) + ∇f(xk)ᵀd | ‖d‖ = 1 },
where f(xk) is constant.

8. CONVERGENCE

A sequence of vectors x0, x1, ..., xk, ... ∈ ℝⁿ, denoted by {xk}, is said to converge to the limit x̄ if ‖xk - x̄‖ → 0 as k → ∞ (that is, given ε > 0, there is an N such that k >= N implies ‖xk - x̄‖ < ε). If {xk} converges to x̄, we write {xk} → x̄.

An open ball around x̄ is a set of the form { y | ‖y - x̄‖ < ε }, for some ε > 0. A subset S ⊂ ℝⁿ is open if around every point x̄ ∈ S there is an open ball contained in S (i.e. for x̄ ∈ S, there is an ε > 0 such that ‖y - x̄‖ < ε implies y ∈ S). For example, the ball { y | ‖y‖ < ε } is open. The interior of any set S is the set of points x ∈ S which are at the centre of some ball contained in S. A set is said to be open if all its points are interior points. Thus, the interior of a set is always open. The interior of the ball { y | ‖y‖ <= ε } is the open ball { y | ‖y‖ < ε }.

A point x̄ is a limit point of {xk} if there is a subsequence of {xk} convergent to x̄. The subsequence is denoted in terms of a subset K of the positive integers, such that {xk} → x̄ for k ∈ K. A related concept is an accumulation point: if S ⊂ ℝⁿ and x̄ ∈ ℝⁿ, then x̄ is called an accumulation point of S if every ball around x̄ contains at least one point of S distinct from x̄.

Example 16: The set of numbers 1/n, n = 1, 2, 3, ..., has zero as an accumulation point.

A set S is closed if its complement ℝⁿ \ S is open. Alternatively, a set S is closed if every point arbitrarily close to S is a member of S (i.e. S is closed if {xk} ⊂ S and {xk} → x̄ imply that x̄ ∈ S). For example, the ball { y | ‖y‖ <= ε } is closed. The closure of any set S is the smallest closed set containing S. The boundary of a set is that part of its closure that is not its interior. A set is bounded if it lies entirely within a ball of some radius ε > 0, { y | ‖y - x‖ <= ε }. A set is compact if and only if it is closed and bounded.

For a compact set S, the Bolzano-Weierstrass theorem (Apostol, 1981) can be used to show that every infinite subset of S has an accumulation point in S. Thus, if {xk} ⊂ S (i.e. each member of the sequence is in S), then {xk} has a limit point in S. This establishes that there is a subsequence of {xk} converging to a point in S.

Examples of open, closed, bounded sets: open: x1² + x2² < 1; closed: x1² + x2² <= 1; closed & bounded: x1² + x2² <= 1, x1 >= 0, x2 >= 0.

The convergence theory associated with the algorithms we are concerned with depends on the above result. A more general result is given in Luenberger (1984; Section 6.6) for algorithms viewed as point-to-set mappings. We are concerned with algorithms that aim to solve for a fixed optimal point while generating a sequence {xk} within a compact set S such that a certain descent property is satisfied for a merit function (e.g. f(x_{k+1}) < f(xk) and, for x* solving the problem, f(x*) <= f(xk), for all k).

9. EXTENSION TO LINEARLY CONSTRAINED PROBLEMS

The simplest extension of the steepest descent direction to linear constraints is known as the Frank-Wolfe algorithm. Consider
   min f(x) s.t. Ax <= b; x >= 0
and a first order Taylor series expansion of f(x) around xk (where x is any value for x):
   f(x) ≈ f(xk) + Σ_j ∂f(xk)/∂xj (xj - xk,j) = f(xk) + ∇f(xk)ᵀ(x - xk).
Since f(xk) and ∇f(xk)ᵀxk have fixed values, they do not affect the position of the optimum and can be dropped to give a linear objective: ∇f(xk)ᵀx. Let
   xLP = arg min { f(xk) + ∇f(xk)ᵀ(x - xk) | Ax <= b; x >= 0 };
ignoring the constant terms f(xk) and ∇f(xk)ᵀxk, we have
   xLP = arg min { ∇f(xk)ᵀx | Ax <= b; x >= 0 }.

The linear objective function decreases steadily as one moves along the line segment from xk to xLP (which is on the boundary of the feasible region). However, the linear approximation to the nonlinear objective may not be particularly accurate for x far away from xk. So the nonlinear objective may not continue to decrease all the way from xk to xLP. Therefore, rather than just accepting xLP as the next trial solution, we can choose the point that minimises the

nonlinear objective function along this line segment. This point can be chosen using univariate minimisation (as in the steepest descent algorithm). The one variable for this purpose is the stepsize τ to be taken from xk towards xLP. In the statement of the algorithm below, however, we have used a stepsize rule that involves a finite number of operations. The point thus obtained becomes the new trial solution for initiating the next iteration of the algorithm. The sequence of trial solutions generated by repeated iterations converges to an optimal solution of the original problem. The algorithm stops as soon as successive trial solutions are close enough together to have essentially reached this optimal solution.

The Frank-Wolfe algorithm
Step 0: Starting at a feasible initial point x0, choose β ∈ (0, 1), set k = 0.
Step 1: Evaluate ∇f(xk).
Step 2: Determine the direction of search/descent xLP - xk by solving
   xLP = arg min { ∇f(xk)ᵀx | Ax <= b; x >= 0 }.
If ‖xLP - xk‖ <= macheps, stop: we have reached the optimum.
Step 3: Determine the stepsize τk as the largest value τ = (β)^j, j = 0, 1, 2, ... (or equivalently the smallest value of j) for which the inequality
   f(xk + τ(xLP - xk)) - f(xk) <= γ τ ∇f(xk)ᵀ(xLP - xk)
is satisfied, where γ ∈ (0, ½). The inequality above is the Armijo stepsize rule and ensures sufficient decrease of the objective along the descent direction. Set x_{k+1} = xk + τk(xLP - xk).
Step 4: Set k = k + 1, go to Step 1.

The algorithm converges to the solution as does the steepest descent method. Convergence is slow, however. For steepest descent, the estimated rate is R-linear convergence
   ‖xk - x*‖ <= C θ^(k/2);  C > 0 and 0 < θ < 1,
and for the Frank-Wolfe algorithm
   ‖xk - x*‖ <= C θ^k;  C > 0 and 0 < θ < 1.
The above is the generic definition of R-linear convergence. We shall study the convergence details of one algorithm as a model for such results.

Example 17:
   min f(x) = (x1)² + 2(x2)² - 5x1 - 8x2  s.t.  3x1 + x2 <= 6;  x1, x2 >= 0.
Note that the unconstrained minimum is defined by the conditions

   ∂f(x)/∂x1 = 2x1 - 5 = 0;  ∂f(x)/∂x2 = 4x2 - 8 = 0,
which are satisfied at x^u = (5/2, 2). Clearly, x^u violates the first constraint. Thus, more work is needed to find the constrained minimum.

Because x = (0, 0) is clearly feasible (and corresponds to the initial BFS for the LP constraints), let us choose it as x0 to start the algorithm. For this x0, in Step 1, we have
   ∂f(x0)/∂x1 = -5;  ∂f(x0)/∂x2 = -8.
So the LP problem in Step 2 is
   min -5x1 - 8x2  s.t.  3x1 + x2 <= 6;  x1, x2 >= 0.
We can use the simplex algorithm or a graphical solution to see that xLP = (0, 6). For Step 3, we need to determine the stepsize; we depart from the algorithm description to implement an older idea: univariate minimisation on the line segment from x0 to xLP (i.e. 0 <= τ <= 1). Thus, we have
   x(τ) = x0 + τ(xLP - x0) = (0, 6τ);
substituting this in the objective function yields a function of the single variable τ:
   f(x(τ)) = f(0, 6τ) = 2(6τ)² - 8(6τ) = 72τ² - 48τ.
This function is convex, and the value of τ (0 <= τ <= 1) that minimises it is in this case given by the solution of
   df(x(τ))/dτ = 144τ - 48 = 0,
so that τ0 = 1/3. Hence, we have
   x1 = x0 + (1/3)(xLP - x0) = (0, 2).
The next iteration involves the evaluations
   ∂f(x1)/∂x1 = -5;  ∂f(x1)/∂x2 = 4(2) - 8 = 0


and
   xLP = arg min { -5x1 | 3x1 + x2 <= 6; x1, x2 >= 0 } = (2, 0),
with univariate minimisation on
   x(τ) = x1 + τ(xLP - x1) = (2τ, 2 - 2τ),  0 <= τ <= 1:
   f(x(τ)) = (2τ)² + 2(2 - 2τ)² - 5(2τ) - 8(2 - 2τ) = 12τ² - 10τ - 8;
   df(x(τ))/dτ = 24τ - 10 = 0.
Hence τ1 = 5/12 and
   x2 = x1 + (5/12)(xLP - x1) = (5/6, 7/6),
and so on... FIGURE 8.

This algorithm is not important currently as an optimisation tool: it is too slow to converge. We use it to motivate more complicated algorithms.

Exercise: Consider the following linearly constrained convex programming problem:
   min f(x1, x2) = -3x1 - 4x2 + (x1)² + (x2)²  s.t.  x1 + x2 <= 3;  x1, x2 >= 0.
1. Start from (1/4, 1/4) and apply three iterations of the Frank-Wolfe algorithm.
2. Use the KKT conditions to check if the solution is correct.
3. Program the Frank-Wolfe algorithm to solve the problem. (There are many public domain LP solvers you can use. For example, there is one in Numerical Recipes in C.)

10. BARRIER FUNCTIONS FOR GENERAL CONSTRAINED PROBLEMS

In convex programming, any local minimum (maximum) is also a global minimum (maximum). In general, NLPs that arise in practice do not exactly satisfy convexity assumptions. A common approach to such problems is to apply an algorithmic search procedure that will stop when it finds a local minimum (or maximum) and then to restart it a number of times from a variety of initial trial solutions in order to find as many distinct

local minima (maxima) as possible. The best of these optima is then chosen as the decision to implement. Normally, the search procedure is one that has been designed to find a global minimum (maximum) when all the assumptions of convex programming hold, but it can also operate to find a local minimum (maximum) when they do not. One such procedure that has been widely used is the sequential unconstrained minimisation technique (SUMT) due to Fiacco & McCormick. There are actually two versions of SUMT, one of which is an exterior point algorithm that starts with infeasible solutions while using a penalty function to force convergence to the feasible region. Penalty functions are discussed later in this course. We now describe the other version, an interior point algorithm that deals directly with feasible solutions while using a barrier function to force the solution to remain inside the feasible region.

SUMT replaces the original problem by a sequence of unconstrained optimisation problems whose solutions converge to a solution (local minimum) of the original problem. The unconstrained optimisation algorithms covered in this course are the steepest descent method discussed in Section 7 and the Goldstein-Levitin-Polyak algorithm for convex programming, which, in the absence of constraints, reduces immediately to Newton or quasi-Newton algorithms for unconstrained optimisation. Each of the unconstrained problems in the SUMT sequence involves choosing a successively smaller strictly positive value of a scalar γ and then solving
   min_x φ(x, γ) = f(x) + γ B(x).
B(x) is a barrier function that has the following properties (for x that are feasible for the original problem):
1. B(x) is small when x is far from the boundary of the feasible region,
2. B(x) is large when x is close to the boundary of the feasible region,
3. B(x) → ∞ as the distance of x to the nearest boundary of the feasible region → 0.
As hi(x) <= 0, examples of B(x) are
   B(x) = -Σ_{i=1}^{m} 1/hi(x)   (for bound constraints x >= 0: Σ_j 1/xj)   (Carroll, 1961);
   B(x) = -Σ_{i=1}^{m} log(-hi(x))   (Frisch, 1955).
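The boundary-repulsion behaviour of the Frisch log-barrier can be seen numerically. A minimal sketch, for the hypothetical single constraint h(x) = x - 1 <= 0 (an illustrative example, not from the text):

```python
import math

# Frisch log-barrier for the single constraint h(x) = x - 1 <= 0:
# B(x) = -log(-h(x)) = -log(1 - x), defined for feasible x < 1.
B = lambda x: -math.log(1 - x)

for x in (0.0, 0.9, 0.99, 0.999):
    print(x, B(x))   # B grows without bound as x approaches the boundary x = 1
```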
Thus, by starting the search procedure with an interior feasible initial point (h(x0) < 0) and then attempting to decrease φ(x, γ), B(x) provides a barrier that prevents the search from ever crossing (or even reaching) the boundary of the feasible region for the original problem. Note that, for feasible values of x, the denominator of each term in Σ_j 1/xj

is proportional to the distance of x from the boundary of the nonnegativity constraint. Consequently, each term is a boundary repulsion term that has all three preceding properties with respect to the particular constraint boundary. Another attractive feature of B(x) is that, when all the assumptions of convex programming are satisfied, φ(x, γ) is a convex function of x.

What happens if the solution is at the boundary? This is the reason SUMT involves a sequence of these unconstrained optimisation problems for successively smaller values of γ approaching zero. The solution of each unconstrained optimisation problem becomes the initial point for the next. For example, each new γ may be obtained from the preceding one by multiplying with a constant β, 0 < β < 1, where typical values are β = 0.1 or 0.01. As γ → 0, φ(x, γ) approaches f(x), so the corresponding local minimum converges to a local minimum of the original problem. Therefore, it is necessary to solve only enough unconstrained optimisation problems to permit extrapolating their solutions to this limiting solution.

Example 18:
   min x1 + x2  s.t.  -4x1 - 4x2 + (x1)² + (x2)² + 7 <= 0.
Thus the feasible region is the disc with radius 1, centred at (2, 2), that is: (x1 - 2)² + (x2 - 2)² <= 1. The solution is (2 - √2/2, 2 - √2/2) with objective function value x1 + x2 = 4 - √2.

   min_x φ(x, γ) = x1 + x2 - γ log(4x1 + 4x2 - (x1)² - (x2)² - 7)

The necessary conditions for an unconstrained optimum are
   ∂φ(x, γ)/∂x1 = 1 - γ(4 - 2x1)/(4x1 + 4x2 - (x1)² - (x2)² - 7) = 0
   ∂φ(x, γ)/∂x2 = 1 - γ(4 - 2x2)/(4x1 + 4x2 - (x1)² - (x2)² - 7) = 0.
From these equations, we see x1(γ) = x2(γ), and solving one of these equations, we have
   x1(γ) = x2(γ) = 2 + (2γ - √(4γ² + 8))/4,

The solution of the problem is approached by the sequence of points x(γ) as γ → 0. We have, therefore,

    lim_{γ→0} x₁(γ) = x₂(γ) = (4 − √2)/2 = 2 − 1/√2 ≈ 1.2929.

Let us suppose that we start the process of minimisation with γ = 10. Using a suitable minimisation technique, we should be able to find the solution x₁(10) = x₂(10) ≈ 1.9502. We decrease γ by a factor of 10 and restart the optimisation with γ = 1 and the initial values for x just computed. By continuing the process, the trajectory of minima is generated for decreasing values of γ. We see that for γ small enough, say γ = 0.0001, the method produces quite a good approximation to the solution (the values below are computed from the closed form above):

    γ         x₁(γ) = x₂(γ)
    10        1.9502
    1         1.6340
    0.1       1.3411
    0.01      1.2979
    0.001     1.2934
    0.0001    1.2929

The SUMT algorithm

Step 0: Given an initial feasible point x⁰ not on the boundary, set γ⁰ > 0, β ∈ (0, 1), k = 0.

Step 1: Using xᵏ as the starting point, solve the following with an unconstrained optimisation method to find a local minimum:

    x^(k+1) = x(γᵏ) = arg min_x φ(x, γᵏ) = f(x) + γᵏ B(x).

Step 2: If ‖x^(k+1) − xᵏ‖ ≤ macheps, stop: we have reached a local optimum. Else, set γ^(k+1) = β γᵏ, k = k + 1 and go to Step 1.

Exercise:
1. Solve the exercise at the end of Section 9 with the given initial point using SUMT by applying the sequence γ = 1, 10⁻², 10⁻⁴.
2. Program SUMT to solve this problem. γ⁰ and β are data that are set by the user. Use an unconstrained optimisation subroutine from the NAG library.
3. How would we adapt SUMT to a maximisation problem?
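The three steps above can be sketched in a few lines. This is an illustrative implementation of my own, not the course code: the inner solver is a crude steepest descent with forward-difference gradients rather than a NAG routine, and for simplicity it stops when γ falls below a tolerance instead of using the ‖x^(k+1) − xᵏ‖ ≤ macheps test; all names and parameters are mine. It is applied to Example 8 with the Frisch log barrier.

```python
import math

# Illustrative SUMT sketch (my own code and simplifications, not the
# course's). The backtracking line search rejects infeasible trial
# points, so the log barrier is only evaluated inside the feasible region.

def sumt(f, B, feasible, x0, gamma0=10.0, beta=0.1, tol=1e-8):
    def phi(x, gamma):
        return f(x) + gamma * B(x)

    def grad(x, gamma, h=1e-7):          # forward-difference gradient
        base = phi(x, gamma)
        out = []
        for i in range(len(x)):
            y = x[:]
            y[i] += h
            out.append((phi(y, gamma) - base) / h)
        return out

    x, gamma = list(x0), gamma0
    while True:
        for _ in range(500):             # Step 1: inner minimisation
            d = grad(x, gamma)
            if max(abs(di) for di in d) < 1e-6:
                break
            t, fx = 1.0, phi(x, gamma)
            while t > 1e-14:             # backtrack to a feasible descent step
                y = [xi - t * di for xi, di in zip(x, d)]
                if feasible(y) and phi(y, gamma) < fx:
                    x = y
                    break
                t *= 0.5
        if gamma < tol:                  # Step 2: shrink gamma and repeat
            return x
        gamma *= beta

# Example 8: min x1 + x2 over the disc (x1 - 2)^2 + (x2 - 2)^2 <= 1.
slack = lambda x: 4*x[0] + 4*x[1] - x[0]**2 - x[1]**2 - 7   # -h(x), > 0 inside
f = lambda x: x[0] + x[1]
B = lambda x: -math.log(slack(x))                           # Frisch log barrier
sol = sumt(f, B, lambda x: slack(x) > 0, [2.0, 2.0])
assert all(abs(s - (2.0 - 1.0 / math.sqrt(2.0))) < 1e-3 for s in sol)
```

Starting from the disc centre (2, 2), the iterates trace the trajectory of minima tabulated above and finish near (2 − 1/√2, 2 − 1/√2).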


X X X E[ ] E X E X. is the ()m n where the ( i,)th. j element is the mean of the ( i,)th., then

X X X E[ ] E X E X. is the ()m n where the ( i,)th. j element is the mean of the ( i,)th., then Secto 5 Vectors of Radom Varables Whe workg wth several radom varables,,..., to arrage them vector form x, t s ofte coveet We ca the make use of matrx algebra to help us orgaze ad mapulate large umbers

More information

1. A real number x is represented approximately by , and we are told that the relative error is 0.1 %. What is x? Note: There are two answers.

1. A real number x is represented approximately by , and we are told that the relative error is 0.1 %. What is x? Note: There are two answers. PROBLEMS A real umber s represeted appromately by 63, ad we are told that the relatve error s % What s? Note: There are two aswers Ht : Recall that % relatve error s What s the relatve error volved roudg

More information

DIFFERENTIAL GEOMETRIC APPROACH TO HAMILTONIAN MECHANICS

DIFFERENTIAL GEOMETRIC APPROACH TO HAMILTONIAN MECHANICS DIFFERENTIAL GEOMETRIC APPROACH TO HAMILTONIAN MECHANICS Course Project: Classcal Mechacs (PHY 40) Suja Dabholkar (Y430) Sul Yeshwath (Y444). Itroducto Hamltoa mechacs s geometry phase space. It deals

More information

MOLECULAR VIBRATIONS

MOLECULAR VIBRATIONS MOLECULAR VIBRATIONS Here we wsh to vestgate molecular vbratos ad draw a smlarty betwee the theory of molecular vbratos ad Hückel theory. 1. Smple Harmoc Oscllator Recall that the eergy of a oe-dmesoal

More information

13. Parametric and Non-Parametric Uncertainties, Radial Basis Functions and Neural Network Approximations

13. Parametric and Non-Parametric Uncertainties, Radial Basis Functions and Neural Network Approximations Lecture 7 3. Parametrc ad No-Parametrc Ucertates, Radal Bass Fuctos ad Neural Network Approxmatos he parameter estmato algorthms descrbed prevous sectos were based o the assumpto that the system ucertates

More information