Solution of Final Exam: 10-701/15-781 Machine Learning


Solution of Final Exam: 10-701/15-781 Machine Learning
Fall 2004, Dec. 12th 2004

Your Andrew ID in capital letters:
Your full name:

There are 9 questions. Some of them are easy and some are more difficult. So, if you get stuck on any one of the questions, proceed with the rest of the questions and return back at the end if you have time remaining.

The maximum score of the exam is 100 points.

If you need more room to work out your answer to a question, use the back of the page and clearly mark on the front of the page if we are to look at what's on the back.

You should attempt to answer all of the questions.

You may use any and all notes, as well as the class textbook.

You have 3 hours. Good luck!

Problem 1. Assorted Questions (16 points)

(a) [3.5 pts] Suppose we have a sample of n real values, called x_1, x_2, ..., x_n, each sampled from a p.d.f. p(x) which has the following form:

    f(x) = α e^(−αx),  if x ≥ 0
    f(x) = 0,          otherwise                      (1)

where α is an unknown parameter. Which one of the following expressions is the maximum likelihood estimate of α? (Assume that in our sample, all x_i are larger than 1.)

1) (1/n) Σ_{i=1..n} log(x_i)     2) (1/n) max_{i=1..n} log(x_i)
3) n / Σ_{i=1..n} log(x_i)       4) n / max_{i=1..n} log(x_i)
5) (1/n) Σ_{i=1..n} x_i          6) (1/n) max_{i=1..n} x_i
7) n / Σ_{i=1..n} x_i            8) n / max_{i=1..n} x_i
9) (1/n) Σ_{i=1..n} x_i²         10) (1/n) max_{i=1..n} x_i²
11) n / Σ_{i=1..n} x_i²          12) n / max_{i=1..n} x_i²
13) (1/n) Σ_{i=1..n} e^(x_i)     14) (1/n) max_{i=1..n} e^(x_i)
15) n / Σ_{i=1..n} e^(x_i)       16) n / max_{i=1..n} e^(x_i)

Answer: Choose [7]. The log-likelihood is L(α) = n log α − α Σ x_i; setting dL/dα = n/α − Σ x_i = 0 gives α̂ = n / Σ x_i.
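A quick numerical sanity check of option 7 (our addition, not part of the exam): for data drawn from an exponential density, a grid maximizer of the log-likelihood should agree with the closed form n / Σ x_i.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_true = 2.5
x = rng.exponential(scale=1.0 / alpha_true, size=10_000)

# Closed-form MLE from the derivative of the log-likelihood: alpha_hat = n / sum(x_i).
alpha_closed_form = len(x) / x.sum()

# Numerical check: evaluate L(a) = n*log(a) - a*sum(x) on a grid and confirm
# the grid maximizer agrees with the closed form.
grid = np.linspace(0.1, 10.0, 100_000)
loglik = len(x) * np.log(grid) - grid * x.sum()
alpha_grid = grid[np.argmax(loglik)]

print(f"closed form: {alpha_closed_form:.4f}, grid search: {alpha_grid:.4f}")
```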

(b) [7.5 pts] Suppose that X_1, ..., X_m are categorical input attributes and Y is a categorical output attribute. Suppose we plan to learn a decision tree without pruning, using the standard algorithm.

b.1 (True or False - 1.5 pts): If X_i and Y are independent in the distribution that generated this dataset, then X_i will not appear in the decision tree.
Answer: False (because the attribute may become relevant further down the tree when the records are restricted to some value of another attribute) (e.g. XOR)

b.2 (True or False - 1.5 pts): If IG(Y|X_i) = 0 according to the values of entropy and conditional entropy computed from the data, then X_i will not appear in the decision tree.
Answer: False, for the same reason.

b.3 (True or False - 1.5 pts): The maximum depth of the decision tree must be less than m+1.
Answer: True, because the attributes are categorical and can each be split only once.

b.4 (True or False - 1.5 pts): Suppose the data has R records; then the maximum depth of the decision tree must be less than 1 + log_2 R.
Answer: False, because the tree may be unbalanced.

b.5 (True or False - 1.5 pts): Suppose one of the attributes has R distinct values, and it has a unique value in each record. Then the decision tree will certainly have depth 0 or 1 (i.e. will be a single node, or else a root node directly connected to a set of leaves).
Answer: True, because that attribute will have perfect information gain. If an attribute has perfect information gain it must split the records into pure buckets which can be split no more.
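The XOR example invoked in b.1 and b.2 is easy to make concrete (our sketch, not part of the exam): each attribute alone has zero information gain about y = x_1 XOR x_2, yet any correct tree must use both attributes.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy H(Y) in bits, estimated from a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(x, y):
    """IG(Y|X) = H(Y) - H(Y|X) for one categorical attribute."""
    h_y_given_x = sum(np.mean(x == v) * entropy(y[x == v]) for v in np.unique(x))
    return entropy(y) - h_y_given_x

# XOR data: y = x1 XOR x2. Each attribute alone has IG = 0 at the root,
# yet both appear in any correct decision tree (cf. b.1 and b.2).
x1 = np.array([0, 0, 1, 1])
x2 = np.array([0, 1, 0, 1])
y = x1 ^ x2
print(information_gain(x1, y))  # 0.0
print(information_gain(x2, y))  # 0.0
```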

(c) [5 pts] Suppose you have this data set with one real-valued input and one real-valued output:

    x    y
    [the three (x, y) pairs were lost in this transcription; a data set consistent with the answers below is (0, 1), (2, 1), (3, 2)]

(c.1) What is the mean squared leave-one-out cross-validation error of using linear regression? (i.e. the model is y = β_0 + β_1·x + noise)
Answer: (2² + (2/3)² + 1²)/3 = 49/27

(c.2) Suppose we use a trivial algorithm of predicting a constant y = c. What is the mean squared leave-one-out error in this case? (Assume c is learned from the non-left-out data points.)
Answer: ((1/2)² + (1/2)² + 1²)/3 = 1/2
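A short sketch that reproduces both leave-one-out numbers under the reconstructed data points above (the points are an assumption, since the original table was lost):

```python
import numpy as np

# Hypothetical data consistent with the published answers: LOO-MSE 49/27
# for the linear model and 1/2 for the constant predictor.
x = np.array([0.0, 2.0, 3.0])
y = np.array([1.0, 1.0, 2.0])

def loo_mse(x, y, predict):
    """Mean squared leave-one-out error for a fit-and-predict routine."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        errs.append((y[i] - predict(x[mask], y[mask], x[i])) ** 2)
    return np.mean(errs)

def linear(xs, ys, x0):
    b1, b0 = np.polyfit(xs, ys, deg=1)  # least-squares line y = b0 + b1*x
    return b0 + b1 * x0

def constant(xs, ys, x0):
    return ys.mean()  # best constant under squared loss

print(loo_mse(x, y, linear), 49 / 27)   # 1.8148... vs 1.8148...
print(loo_mse(x, y, constant), 1 / 2)   # 0.5 vs 0.5
```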

Problem 2. Bayes Rule and Bayes Classifiers (12 points)

Suppose you are given the following set of data with three Boolean input variables a, b, and c, and a single Boolean output variable K.

    a  b  c  K
    [the table of training records was lost in this transcription]

For parts (a) and (b), assume we are using a naive Bayes classifier to predict the value of K from the values of the other variables.

(a) [1.5 pts] According to the naive Bayes classifier, what is P(K = 1 | a = 1 ∧ b = 1 ∧ c = 0)?
Answer: 1/2.
P(K = 1 | a = 1 ∧ b = 1 ∧ c = 0) = P(K = 1 ∧ a = 1 ∧ b = 1 ∧ c = 0) / P(a = 1 ∧ b = 1 ∧ c = 0)
= P(K = 1) P(a = 1 | K = 1) P(b = 1 | K = 1) P(c = 0 | K = 1) / [P(a = 1 ∧ b = 1 ∧ c = 0 ∧ K = 1) + P(a = 1 ∧ b = 1 ∧ c = 0 ∧ K = 0)].

(b) [1.5 pts] According to the naive Bayes classifier, what is P(K = 0 | a = 1 ∧ b = 1)?
Answer: 2/3.
P(K = 0 | a = 1 ∧ b = 1) = P(K = 0 ∧ a = 1 ∧ b = 1) / P(a = 1 ∧ b = 1)
= P(K = 0) P(a = 1 | K = 0) P(b = 1 | K = 0) / [P(a = 1 ∧ b = 1 ∧ K = 1) + P(a = 1 ∧ b = 1 ∧ K = 0)].

Now, suppose we are using a joint Bayes classifier to predict the value of K from the values of the other variables.

(c) [1.5 pts] According to the joint Bayes classifier, what is P(K = 1 | a = 1 ∧ b = 1 ∧ c = 0)?
Answer: 0.
Let num(X) be the number of records in our data matching X. Then we have P(K = 1 | a = 1 ∧ b = 1 ∧ c = 0) = num(K = 1 ∧ a = 1 ∧ b = 1 ∧ c = 0) / num(a = 1 ∧ b = 1 ∧ c = 0) = 0.

(d) [1.5 pts] According to the joint Bayes classifier, what is P(K = 0 | a = 1 ∧ b = 1)?
Answer: 1/2.
P(K = 0 | a = 1 ∧ b = 1) = num(K = 0 ∧ a = 1 ∧ b = 1) / num(a = 1 ∧ b = 1) = 1/2.

In an unrelated example, imagine we have three variables X, Y, and Z.

(e) [2 pts] Imagine I tell you the following:
P(Z | X) = 0.7
P(Z | Y) = 0.4
Do you have enough information to compute P(Z | X ∧ Y)? If not, write "not enough info". If so, compute the value of P(Z | X ∧ Y) from the above information.
Answer: Not enough info.
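For concreteness, here is how the two classifiers in (a)-(d) differ mechanically (our sketch; the records below are made up, since the exam's table was lost): naive Bayes multiplies a class prior by per-attribute conditionals, while the joint classifier simply counts matching records.

```python
import numpy as np

# Hypothetical records: columns a, b, c, K.
data = np.array([
    [1, 1, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
])
a, b, c, K = data.T

def naive_posterior(k, mask_vals):
    """P(K=k | evidence) under naive Bayes: prior times per-attribute conditionals."""
    def score(kv):
        rows = K == kv
        p = rows.mean()
        for col, v in mask_vals:
            p *= (col[rows] == v).mean()
        return p
    s1, s0 = score(1), score(0)
    return (s1 if k == 1 else s0) / (s1 + s0)

def joint_posterior(k, mask_vals):
    """P(K=k | evidence) under the joint (full-table) classifier: raw counts."""
    rows = np.ones(len(K), dtype=bool)
    for col, v in mask_vals:
        rows &= col == v
    return (K[rows] == k).mean()

query = [(a, 1), (b, 1), (c, 0)]
print(naive_posterior(1, query), joint_posterior(1, query))
```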

(f) [2 pts] Instead, imagine I tell you the following:
P(Z | X) = 0.7
P(Z | Y) = 0.4
P(X) = 0.3
P(Y) = 0.5
Do you now have enough information to compute P(Z | X ∧ Y)? If not, write "not enough info". If so, compute the value of P(Z | X ∧ Y) from the above information.
Answer: Not enough info.

(g) [2 pts] Instead, imagine I tell you the following (falsifying my earlier statements):
P(Z ∧ X) = 0.2
P(X) = 0.3
P(Y) = 1
Do you now have enough information to compute P(Z | X ∧ Y)? If not, write "not enough info". If so, compute the value of P(Z | X ∧ Y) from the above information.
Answer: 2/3. Since P(Y) = 1, conditioning on X ∧ Y is the same as conditioning on X. In this case, P(Z | X ∧ Y) = P(Z ∧ X)/P(X) = 0.2/0.3 = 2/3.
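Parts (e) and (f) can be checked constructively (our construction, not part of the exam): the two explicit joint distributions below agree on P(Z|X) = 0.7 and P(Z|Y) = 0.4 (with P(X) = P(Y) = 1/2 as in the setting of (e)) yet disagree on P(Z | X ∧ Y), so those quantities cannot determine it; a similar pair exists for (f).

```python
import numpy as np

def make_joint(px, py, z_given_xy):
    """Joint P(X,Y,Z) with X and Y independent and Z drawn per cell:
    z_given_xy[x][y] = P(Z=1 | X=x, Y=y)."""
    j = np.zeros((2, 2, 2))
    for x in (0, 1):
        for y in (0, 1):
            pxy = (px if x else 1 - px) * (py if y else 1 - py)
            j[x, y, 1] = pxy * z_given_xy[x][y]
            j[x, y, 0] = pxy * (1 - z_given_xy[x][y])
    return j

def cond(j, event, given):
    """P(event | given); event and given are boolean masks over the 2x2x2 table."""
    return j[event & given].sum() / j[given].sum()

X, Y, Z = (g == 1 for g in np.indices((2, 2, 2)))

# Two joints, both with P(X)=P(Y)=1/2, P(Z|X)=0.7, P(Z|Y)=0.4 ...
j1 = make_joint(0.5, 0.5, [[0.9, 0.0], [0.6, 0.8]])
j2 = make_joint(0.5, 0.5, [[0.9, 0.3], [0.9, 0.5]])
for j in (j1, j2):
    print(cond(j, Z, X), cond(j, Z, Y), cond(j, Z, X & Y))
# ... yet P(Z|X AND Y) differs: 0.8 vs 0.5 -- hence "not enough info".
```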

Problem 3. SVM (9 points)

(a) (True/False - 1 pt) Support vector machines, like logistic regression models, give a probability distribution over the possible labels given an input example.
Answer: False

(b) (True/False - 1 pt) We would expect the support vectors to remain the same in general as we move from a linear kernel to higher order polynomial kernels.
Answer: False (There are no guarantees that the support vectors remain the same. The feature vectors corresponding to polynomial kernels are non-linear functions of the original input vectors and thus the support points for maximum margin separation in the feature space can be quite different.)

(c) (True/False - 1 pt) The maximum margin decision boundaries that support vector machines construct have the lowest generalization error among all linear classifiers.
Answer: False (The maximum margin hyperplane is often a reasonable choice but it is by no means optimal in all cases.)

(d) (True/False - 1 pt) Any decision boundary that we get from a generative model with class-conditional Gaussian distributions could in principle be reproduced with an SVM and a polynomial kernel of degree less than or equal to three.
Answer: True (A polynomial kernel of degree two suffices to represent any quadratic decision boundary such as the one from the generative model in question.)

(e) (True/False - 1 pt) The values of the margins obtained by two different kernels K_1(x, x′) and K_2(x, x′) on the same training set do not tell us which classifier will perform better on the test set.
Answer: True (We need to normalize the margin for it to be meaningful. For example, a simple scaling of the feature vectors would lead to a larger margin. Such a scaling does not change the decision boundary, however, and so the larger margin cannot directly inform us about generalization.)

(f) (2 pts) What is the leave-one-out cross-validation error estimate for maximum margin separation in the following figure? (We are asking for a number.)
[figure not reproduced in this transcription]
Answer: 0. Based on the figure we can see that removing any single point would not change the resulting maximum margin separator. Since all the points are initially classified correctly, the leave-one-out error is zero.

(g) (2 pts) Now let us discuss an SVM classifier using a second order polynomial kernel. The first polynomial kernel maps each input data point x to Φ_1(x) = [x, x²]^T. The second polynomial kernel maps each input data point x to Φ_2(x) = [2x, 2x²]^T. In general, is the margin we would attain using Φ_2(x):
A. ( ) greater
B. ( ) equal
C. ( ) smaller
D. ( ) any of the above
in comparison to the margin resulting from using Φ_1(x)?
Answer: A. (Scaling every feature vector by a factor of 2 scales the attainable margin by 2 without changing the decision boundary.)
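A quick empirical check of (g) (our sketch, not part of the exam): on any dataset separable in [x, x²] space, doubling the feature map should double the hard-margin width 2/‖w‖. The data below is made up, and scikit-learn's SVC with a large C is used to approximate a hard-margin SVM.

```python
import numpy as np
from sklearn.svm import SVC

# Toy 1-D points, labeled by a quadratic rule that is linearly separable
# in (x, x^2) space (the boundary x^2 - x = 0 is a line there).
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=40)
y = np.where(x * (x - 1) > 0, 1, -1)

def margin(features):
    clf = SVC(kernel="linear", C=1e6).fit(features, y)  # hard-margin-like SVM
    w = clf.coef_.ravel()
    return 2.0 / np.linalg.norm(w)  # geometric margin width 2/||w||

phi1 = np.column_stack([x, x**2])
phi2 = np.column_stack([2 * x, 2 * x**2])
print(margin(phi2) / margin(phi1))  # ~2: scaling features doubles the margin
```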

Problem 4. Instance based learning (8 points)

The following picture shows a dataset with one real-valued input x and one real-valued output y. There are seven training points.
[picture not reproduced in this transcription]

Suppose you are training using kernel regression using some unspecified kernel function. The only thing you know about the kernel function is that it is a monotonically decreasing function of distance that decays to zero at a distance of 3 units (and is strictly greater than zero at a distance of less than 3 units).

(a) (2 pts) What is the predicted value of y when x = 1?
Answer: 3.5

(b) (2 pts) What is the predicted value of y when x = 3?
Answer: 26/7 (the average of all seven training outputs)

(c) (2 pts) What is the predicted value of y when x = 4?
Answer: 4

(d) (2 pts) What is the predicted value of y when x = 7?
Answer: 4
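A sketch of the kernel-regression mechanics behind Problem 4. The exam's seven points were lost in transcription, so the training set below is hypothetical and its outputs need not match the exam's answers; the kernel is one concrete choice satisfying the stated conditions, and the fallback to the global mean covers queries where every kernel weight is zero.

```python
import numpy as np

# Hypothetical seven training points (the exam's figure was lost).
X = np.array([0.0, 1.0, 2.0, 5.0, 6.0, 7.0, 8.0])
Y = np.array([3.0, 4.0, 3.0, 4.0, 5.0, 4.0, 3.0])

def kernel(d):
    """One kernel meeting the problem's conditions: monotonically decreasing
    in distance and exactly zero at distance >= 3 (triangular kernel)."""
    return np.maximum(0.0, 1.0 - d / 3.0)

def predict(x0):
    w = kernel(np.abs(X - x0))
    if w.sum() == 0.0:             # no training point within 3 units:
        return Y.mean()            # fall back to the global mean of y
    return np.dot(w, Y) / w.sum()  # Nadaraya-Watson weighted average

for x0 in (1.0, 3.0, 4.0, 7.0):
    print(x0, predict(x0))
```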

Problem 5. HMM (12 points)

Consider the HMM defined by the transition and emission probabilities in the table below. This HMM has six states (plus start and end states) and an alphabet with four symbols (A, C, G and T). Thus, the probability of transitioning from state S_1 to state S_2 is 1, and the probability of emitting A while in state S_1 is 0.3. Here is the state diagram:
[the transition/emission table and the state diagram were lost in this transcription]

For each of the pairs below, place <, > or = between the right and left components of each pair. (2 pts each):

(a) P(O_1 = A, O_2 = C, O_3 = T, O_4 = A, q_1 = S_1, q_2 = S_2)
    P(O_1 = A, O_2 = C, O_3 = T, O_4 = A | q_1 = S_1, q_2 = S_2)

Below we will use a shortened notation. Specifically, we will use P(A, C, T, A, S_1, S_2) instead of P(O_1 = A, O_2 = C, O_3 = T, O_4 = A, q_1 = S_1, q_2 = S_2), P(A, C, T, A) instead of P(O_1 = A, O_2 = C, O_3 = T, O_4 = A), and so forth.

Answer: =
P(A, C, T, A, S_1, S_2) = P(A, C, T, A | S_1, S_2) P(S_1, S_2) = P(A, C, T, A | S_1, S_2), since P(S_1, S_2) = 1.

(b) P(O_1 = A, O_2 = C, O_3 = T, O_4 = A, q_3 = S_3, q_4 = S_4)
    P(O_1 = A, O_2 = C, O_3 = T, O_4 = A | q_3 = S_3, q_4 = S_4)

Answer: <
As in (a), P(A, C, T, A, S_3, S_4) = P(A, C, T, A | S_3, S_4) P(S_3, S_4); however, since P(S_3, S_4) = 0.3, the right-hand side is bigger.

(c) P(O_1 = A, O_2 = C, O_3 = T, O_4 = A, q_3 = S_3, q_4 = S_4)
    P(O_1 = A, O_2 = C, O_3 = T, O_4 = A, q_3 = S_5, q_4 = S_6)

Answer: <
The first two emissions (A and C) do not matter, since they are the same on both sides. The left-hand side thus reduces to P(O_3 = T, O_4 = A, q_3 = S_3, q_4 = S_4) = P(O_3 = T, O_4 = A | q_3 = S_3, q_4 = S_4) · 0.3 = …, while the right-hand side is P(O_3 = T, O_4 = A | q_3 = S_5, q_4 = S_6) · 0.7 = …; with the emission probabilities in the table, the right-hand side is larger.

(d) P(O_1 = A, O_2 = C, O_3 = T, O_4 = A)
    P(O_1 = A, O_2 = C, O_3 = T, O_4 = A, q_3 = S_3, q_4 = S_4)

Answer: >
Here the left-hand side is P(A, C, T, A, S_3, S_4) + P(A, C, T, A, S_5, S_6). The first term of this sum is exactly the right-hand side above. Since the second term is greater than 0, the left-hand side is greater.

(e) P(O_1 = A, O_2 = C, O_3 = T, O_4 = A)
    P(O_1 = A, O_2 = C, O_3 = T, O_4 = A | q_3 = S_3, q_4 = S_4)

Answer: <
As mentioned for (d), the left-hand side is P(A, C, T, A, S_3, S_4) + P(A, C, T, A, S_5, S_6) = P(A, C, T, A | S_3, S_4) P(S_3, S_4) + P(A, C, T, A | S_5, S_6) P(S_5, S_6), a weighted average of the two conditional probabilities. Since P(A, C, T, A | S_3, S_4) > P(A, C, T, A | S_5, S_6), the left-hand side is lower than the right-hand side.

(f) P(O_1 = A, O_2 = C, O_3 = T, O_4 = A)
    P(O_1 = A, O_2 = T, O_3 = T, O_4 = G)

Answer: <
Since the first and third letters are the same, we only need to worry about the second and fourth. The left-hand side is 0.1 · (…) = …, while the right-hand side is 0.6 · (…) = …, which is larger.
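The comparisons above become mechanical once transition and emission tables are fixed. The tables below are assumptions (the exam's were lost) that preserve the constraints visible in the solution text: P(S_1→S_2) = 1, P(A|S_1) = 0.3, branch probabilities 0.3/0.7 into the (S_3, S_4) and (S_5, S_6) paths, and P(C|S_2) = 0.1, P(T|S_2) = 0.6.

```python
# Hypothetical HMM in the spirit of the problem: S1 -> S2 w.p. 1; S2 branches
# to S3 w.p. 0.3 or S5 w.p. 0.7; S3 -> S4 and S5 -> S6 w.p. 1. Emission rows
# are made up for illustration, except the constraints noted above.
trans = {("S1", "S2"): 1.0, ("S2", "S3"): 0.3, ("S2", "S5"): 0.7,
         ("S3", "S4"): 1.0, ("S5", "S6"): 1.0}
emit = {"S1": {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3},
        "S2": {"A": 0.1, "C": 0.1, "G": 0.2, "T": 0.6},
        "S3": {"A": 0.2, "C": 0.2, "G": 0.3, "T": 0.3},
        "S4": {"A": 0.4, "C": 0.2, "G": 0.2, "T": 0.2},
        "S5": {"A": 0.1, "C": 0.4, "G": 0.4, "T": 0.1},
        "S6": {"A": 0.3, "C": 0.3, "G": 0.2, "T": 0.2}}

def joint(obs, path):
    """P(O_1..O_T, q_1..q_T): product of transition and emission probabilities.
    The start state is assumed to reach S1 with probability 1 in this toy model."""
    p, prev = 1.0, None
    for state, symbol in zip(path, obs):
        if prev is not None:
            p *= trans.get((prev, state), 0.0)
        p *= emit[state][symbol]
        prev = state
    return p

obs = "ACTA"
# Marginalize over the two possible state paths, as in part (d):
paths = [["S1", "S2", "S3", "S4"], ["S1", "S2", "S5", "S6"]]
print(sum(joint(obs, p) for p in paths))  # P(A,C,T,A)
print(joint(obs, paths[0]))               # P(A,C,T,A, q3=S3, q4=S4)
```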

Problem 6. Learning from labeled and unlabeled data (10 points)

Consider the following figure, which contains labeled (class 1 black circles, class 2 hollow circles) and unlabeled (squares) data. We would like to use two methods discussed in class (re-weighting and co-training) in order to utilize the unlabeled data when training a Gaussian classifier.
[figure not reproduced in this transcription]

(a) (2 pts) How can we use co-training in this case (what are the two classifiers)?
Answer: Co-training partitions the feature space into two separate sets and uses these sets to construct independent classifiers. Here, the most natural way is to use one classifier (a Gaussian) for the x axis and a second (another Gaussian) for the y axis.

(b) We would like to use re-weighting of unlabeled data to improve the classification performance. Re-weighting will be done by placing the dashed circle on each of the labeled data points and counting the number of unlabeled data points in that circle. Next, a Gaussian classifier is run with the new weights computed.

(b.1) (2 pts) To what class (hollow circles or full circles) would we assign the unlabeled point A if we were training a Gaussian classifier using only the labeled data points (with no re-weighting)?
Answer: Hollow class. Note that the hollow points are much more spread out, and so the Gaussian learned for them will have a higher variance.

(b.2) (2 pts) To what class (hollow circles or full circles) would we assign the unlabeled point A if we were training a classifier using the re-weighting procedure described above?
Answer: Again, the hollow class. Re-weighting will not change the result, since it will be done independently for each of the two classes, and will produce very similar class centers to the ones in (b.1) above.

(c) (4 pts) When we handle a polynomial regression problem, we would like to decide what degree of polynomial to use in order to fit a data set. The table below describes the disagreement between the different polynomials on unlabeled data, and also the disagreement with the labeled data. Based on the method presented in class, which polynomial should we choose for this data? Which of the two tables do you prefer?
[the disagreement tables were lost in this transcription]

Answer: The degree we would select is 3. Based on the classification accuracy, it is beneficial to use higher degree polynomials. However, as we said in class, these might overfit. One way to test whether they do is to check consistency on unlabeled data by requiring that the triangle inequality hold for the selected degree. For a third degree polynomial this is indeed the case, since u(2, 3) = 0.2 ≤ l(2) + l(3) = … (where u(2, 3) is the disagreement between the second and third degree polynomials on the unlabeled data, and l(2) is the disagreement between degree 2 and the labeled data). Similarly, u(1, 3) = 0.5 ≤ l(1) + l(3) = … . In contrast, this does not hold for a fourth degree polynomial, since u(3, 4) = 0.2 > l(3) + l(4) = … .
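The selection rule in (c) reduces to a few comparisons. In the sketch below, the u values are the ones quoted in the solution text; the l values and the remaining u values are invented placeholders (the exam's tables were lost), chosen so that degree 3 passes the check and degree 4 fails.

```python
# Model selection by consistency on unlabeled data.
# u[(i, j)]: disagreement between degree-i and degree-j fits on unlabeled data.
# l[i]: disagreement of the degree-i fit with the labeled data.
u = {(1, 2): 0.3, (1, 3): 0.5, (2, 3): 0.2, (1, 4): 0.6, (2, 4): 0.3, (3, 4): 0.2}
l = {1: 0.4, 2: 0.2, 3: 0.1, 4: 0.05}

def consistent(d):
    """Degree d is consistent if u(i, d) <= l(i) + l(d) for all lower degrees i."""
    return all(u[(i, d)] <= l[i] + l[d] for i in range(1, d))

for d in range(2, 5):
    print(d, consistent(d))
# The highest consistent degree wins: here degree 3 passes, while degree 4
# fails because u(3, 4) = 0.2 > l(3) + l(4) = 0.15.
```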

Problem 7. Bayes Net Inference (10 points)

For (a) through (c), compute the following probabilities from the Bayes net below. Hint: These examples have been designed so that none of the calculations should take you longer than a few minutes. If you find yourself doing dozens of calculations on a question, sit back and look for shortcuts. This can be done easily if you notice a certain special property of the numbers on this diagram.
[the Bayes net diagram with its probability tables was lost in this transcription]

(a) (2 pts) P(A | B) =
Answer: 3/8.
P(A | B) = P(A ∧ B)/P(B) = P(B | A) P(A) / (P(B | A) P(A) + P(B | ¬A) P(¬A)) = 0.21/(0.21 + 0.35) = 3/8.

(b) (2 pts) P(B | D) =
Answer: P(D | C) = P(D | ¬C), so D is independent of C and is not influencing the Bayes net. So P(B | D) = P(B), which we calculated in (a) to be 0.56.

(c) (2 pts) P(C | B) =
Answer: 5/11.
P(C | B) = (P(A ∧ B ∧ C) + P(¬A ∧ B ∧ C))/P(B) = (P(A) P(B | A) P(C | A) + P(¬A) P(B | ¬A) P(C | ¬A))/P(B) = (… + …)/0.56 = 5/11.
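Part (a) can be checked by brute-force enumeration using the numbers recoverable from the solution text (P(A) = 0.3, P(B|A) = 0.7 and P(B|¬A) = 0.5 follow from the 0.21, 0.35 and 0.56 above); the rest of the net is not reproduced, since the diagram was lost.

```python
# Enumerate the four (A, B) worlds and read off P(B) and P(A|B).
p_a = 0.3
p_b_given = {True: 0.7, False: 0.5}

joint = {}
for a in (True, False):
    for b in (True, False):
        pa = p_a if a else 1 - p_a
        pb = p_b_given[a] if b else 1 - p_b_given[a]
        joint[(a, b)] = pa * pb

p_b = joint[(True, True)] + joint[(False, True)]
print(p_b)                        # 0.56
print(joint[(True, True)] / p_b)  # P(A|B) = 0.21/0.56 = 0.375 = 3/8
```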

For (d) through (g), indicate whether the given statement is TRUE or FALSE in the Bayes net given below.
[the Bayes net diagram was lost in this transcription]

(d) [T/F - (1 pt)] I<A, {}, E>
Answer: TRUE.

(e) [T/F - (1 pt)] I<A, G, E>
Answer: FALSE.

(f) [T/F - (1 pt)] I<C, {A, G}, F>
Answer: FALSE.

(g) [T/F - (1 pt)] I<B, {C, E}, F>
Answer: FALSE.

Problem 8. Bayes Nets II (12 points)

(a) (4 points) Suppose we use a naive Bayes classifier to learn a classifier for y = A ∧ B, where A and B are Boolean random variables, independent of each other, with P(A) = 0.4 and P(B) = 0.5. Draw the Bayes net that represents the independence assumptions of our classifier and fill in the probability tables for the net.

Answer: [the Bayes net drawing was lost in this transcription; in the naive Bayes net, y is the parent of both A and B] In computing the probabilities for the Bayes net we use the following Boolean table with corresponding probabilities for each row:

    A  B  y  P
    0  0  0  0.6·0.5 = 0.3
    0  1  0  0.6·0.5 = 0.3
    1  0  0  0.4·0.5 = 0.2
    1  1  1  0.4·0.5 = 0.2

Using the table we can compute the probabilities for the Bayes net:

P(y) = 0.2
P(B | y) = P(B, y)/P(y) = 1
P(B | ¬y) = P(B, ¬y)/P(¬y) = 0.3/0.8 = 0.375
P(A | y) = P(A, y)/P(y) = 1
P(A | ¬y) = P(A, ¬y)/P(¬y) = 0.2/0.8 = 0.25
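A short enumeration check of the table and CPTs above (our addition):

```python
from itertools import product

# Verify the naive Bayes CPTs for y = A AND B with P(A)=0.4, P(B)=0.5
# by enumerating the four (A, B) worlds.
p_world = {}
for a, b in product((0, 1), repeat=2):
    p = (0.4 if a else 0.6) * 0.5  # P(B=1) = P(B=0) = 0.5
    p_world[(a, b, a & b)] = p

def prob(pred):
    return sum(p for w, p in p_world.items() if pred(*w))

p_y = prob(lambda a, b, y: y == 1)
print(p_y)                                                  # P(y)    = 0.2
print(prob(lambda a, b, y: b == 1 and y == 1) / p_y)        # P(B|y)  = 1.0
print(prob(lambda a, b, y: b == 1 and y == 0) / (1 - p_y))  # P(B|~y) = 0.375
print(prob(lambda a, b, y: a == 1 and y == 0) / (1 - p_y))  # P(A|~y) = 0.25
```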

(b) (8 points) Consider a robot operating in the two-cell gridworld shown below. [gridworld figure not reproduced in this transcription] Suppose the robot is initially in cell C_1. At any point in time the robot can execute either of two actions: A_1 and A_2. A_1 is to move to a neighboring cell. If the robot is in C_1, the action A_1 succeeds (moves the robot into C_2) with probability 0.9 and fails (leaves the robot in C_1) with probability 0.1. If the robot is in C_2, the action A_1 succeeds (moves the robot into C_1) with probability 0.8 and fails (leaves the robot in C_2) with probability 0.2. The action A_2 is to stay in the same cell, and when executed it keeps the robot in the same cell with probability 1. The first action the robot executes is chosen at random (with equal probability between A_1 and A_2). Afterwards, the robot alternates the actions it executes (for example, if the robot executed action A_1 first, then the sequence of actions is A_1, A_2, A_1, A_2, ...). Answer the following questions.

(b.1) (4 points) Draw the Bayes net that represents the cell the robot is in during the first two actions the robot executes (e.g., initial cell, the cell after the first action and the cell after the second action) and fill in the probability tables. (Hint: The Bayes net should have five variables: q_1 - the initial cell; q_2, q_3 - the cell after the first and the second action, respectively; a_1, a_2 - the first and the second action, respectively.)

Answer: [the Bayes net drawing and its tables were lost in this transcription; q_2 depends on q_1 and a_1, q_3 depends on q_2 and a_2, and a_2 is determined by a_1 since the actions alternate]

(b.2) (4 points) Suppose you were told that the first action the robot executes is A_1. What is the probability that the robot will appear in cell C_1 after it executes close to infinitely many actions?

Answer: Since actions alternate and the first action is A_1, the transition matrix for any odd action is:

    P(a_odd) = | 0.1  0.9 |
               | 0.8  0.2 |

where the p_ij element is the probability of transitioning into cell j as a result of an execution of an odd action given that the robot is in cell i before executing this action. Similarly, the transition matrix for any even action is:

    P(a_even) = | 1  0 |
                | 0  1 |

If we consider each pair of actions as one meta-action, then we have a Markov chain with the transition probability matrix:

    P = P(a_odd) · P(a_even) = | 0.1  0.9 |
                               | 0.8  0.2 |

At t = ∞, the state distribution satisfies P(q_t) = P^T P(q_t). So, P(q_t = C_1) = 0.1 · P(q_{t−1} = C_1) + 0.8 · P(q_{t−1} = C_2). Since there are only two cells possible, we have: P(q_t = C_1) = 0.1 · P(q_{t−1} = C_1) + 0.8 · (1 − P(q_{t−1} = C_1)). Solving for P(q_t = C_1) we get: P(q_t = C_1) = 0.8/1.7 ≈ 0.47.
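The 0.8/1.7 fixed point can be confirmed by simply iterating the meta-action chain (our addition):

```python
import numpy as np

# Stationary distribution of the alternating-action chain from (b.2):
# odd actions use the A1 dynamics, even actions (A2) are the identity.
P_odd = np.array([[0.1, 0.9],
                  [0.8, 0.2]])
P_even = np.eye(2)
P = P_odd @ P_even  # one odd+even meta-action

dist = np.array([1.0, 0.0])  # robot starts in C1
for _ in range(1000):
    dist = dist @ P          # propagate the state distribution one meta-step
print(dist[0], 0.8 / 1.7)    # both ~0.4706
```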

Problem 9. Markov Decision Processes (11 pts)

(a) (8 points) Consider the MDP given in the figure below. Assume the discount factor γ = 0.9. The r-values are rewards, while the numbers next to arrows are probabilities of outcomes. Note that only state S_1 has two actions. The other states have only one action each.
[the MDP figure was lost in this transcription]

(a.1) (4 points) Write down the numerical value of J(S_1) after the first and the second iterations of Value Iteration.

Initial value function: J_0(S_0) = 0; J_0(S_1) = 0; J_0(S_2) = 0; J_0(S_3) = 0.

J_1(S_1) =
J_2(S_1) =

Answer:
J_1(S_1) = 2
J_2(S_1) = max(2 + 0.9 · (0.5 · J_1(S_1) + 0.5 · J_1(S_3)), … + 0.9 · J_1(S_2)) = … [the remaining numerical values were lost in this transcription]

(a.2) (4 points) Write down the optimal value of state S_1. There are a few ways to solve it, and for one of them you may find useful the following equality: Σ_{i=0..∞} α^i = 1/(1 − α) for any 0 ≤ α < 1.

J*(S_1) =

Answer: It is pretty clear from the given MDP that the optimal policy from S_1 will involve trying to move from S_1 to S_3, as this is the only state that has a large reward.

First, we compute the optimal value for S_3:
J*(S_3) = 10 + 0.9 · J*(S_3), so J*(S_3) = 10/(1 − 0.9) = 100.

We can now compute the optimal value for S_1:
J*(S_1) = 2 + 0.9 · (0.5 · J*(S_1) + 0.5 · J*(S_3)) = 2 + 0.9 · (0.5 · J*(S_1) + 50).
Solving for J*(S_1) we get: J*(S_1) = 47/0.55 ≈ 85.45.
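A value-iteration sketch for part (a) (our addition). The MDP figure was lost, so the rewards and transitions below are assumptions reverse-engineered from the solution text (reward 2 at S_1, reward 10 on the S_3 self-loop; S_0 plays no role in the quoted computations and is omitted); under these assumptions the fixed points come out to J*(S_3) = 100 and J*(S_1) ≈ 85.45. The stopping rule is the one from part (b), statement (6) below.

```python
gamma = 0.9
# Assumed MDP: actions[s] = list of (reward, {next_state: prob}).
actions = {
    "S1": [(2, {"S1": 0.5, "S3": 0.5}),  # risky action toward S3
           (2, {"S2": 1.0})],            # deterministic action to S2
    "S2": [(0, {"S2": 1.0})],
    "S3": [(10, {"S3": 1.0})],
}

J = {s: 0.0 for s in actions}
for k in range(1000):
    # Bellman backup: J_new(s) = max_a [ r(s,a) + gamma * E[J(s')] ].
    J_new = {}
    for s, acts in actions.items():
        J_new[s] = max(r + gamma * sum(p * J[t] for t, p in nxt.items())
                       for r, nxt in acts)
    if max(abs(J_new[s] - J[s]) for s in J) < 1e-10:  # stopping rule from (b)(6)
        J = J_new
        break
    J = J_new

print(J["S3"], J["S1"])  # ~100.0 and ~85.4545 (= 47/0.55) under these assumptions
```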

(b) (3 points) A general MDP with N states is guaranteed to converge in the limit for Value Iteration as long as γ < 1. In practice one cannot perform infinitely many value iterations to guarantee convergence. Circle all the statements below that are true.

(1) Any MDP with N states converges after N value iterations for γ = 0.5.
Answer: False.

(2) Any MDP converges after the 1st value iteration for γ = 1.
Answer: False.

(3) Any MDP converges after the 1st value iteration for a discount factor γ = 0.
Answer: True, since all the converged values will be just immediate rewards.

(4) An acyclic MDP with N states converges after N value iterations for any 0 ≤ γ ≤ 1.
Answer: True, since there are no cycles and therefore after each iteration at least one state whose value was not optimal before is guaranteed to have its value set to an optimal value (even when γ = 1), unless all state values have already converged.

(5) An MDP with N states and no stochastic actions (that is, each action has only one outcome) converges after N value iterations for any 0 ≤ γ < 1.
Answer: False. Consider a situation where there are no absorbing goal states.

(6) One usually stops value iteration after iteration k+1 if: max_{0 ≤ i ≤ N−1} |J^{k+1}(S_i) − J^k(S_i)| < ξ, for some small constant ξ > 0.
Answer: True.
