Introduction to Artificial Intelligence CAP 4601 Summer 2013 Midterm Exam

1. Terminology (7 Points). Given the following task environments, enter their properties/characteristics. The properties/characteristics of the Checkers task environment is provided as an example.

Task Environment: Checkers
Fully Observable or Partially Observable: Fully Observable
Single Agent or Multiagent: Multiagent
Deterministic or Stochastic: Deterministic
Episodic or Sequential: Sequential
Static or Dynamic: Static
Discrete or Continuous: Discrete
Known or Unknown: Known

Task Environment: Robotic Car (Machine Driving)
Fully Observable or Partially Observable: Partially Observable
Single Agent or Multiagent: Multiagent
Deterministic or Stochastic: Stochastic
Episodic or Sequential: Sequential
Static or Dynamic: Dynamic
Discrete or Continuous: Continuous
Known or Unknown: Known

2. Linear Regression (1 Points). Given the following points:

{ (0, 1), (2, 1), (2, -7), (4, -7) }

We have the following plot:

Calculate the Sample Mean for the x's (a number): 2

$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i = \frac{0 + 2 + 2 + 4}{4} = \frac{8}{4} = 2$

Calculate the Sample Mean for the y's (a number): -3

$\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i = \frac{1 + 1 - 7 - 7}{4} = -3$

Calculate the Biased Sample Variance for the x's (a number): 2

$\operatorname{var}_{x,\text{biased}} = \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2 = 2$

Calculate the Biased Sample Variance for the y's (a number): 16

$\operatorname{var}_{y,\text{biased}} = \frac{1}{n}\sum_{i=1}^{n} (y_i - \bar{y})^2 = 16$

Calculate the Biased Sample Covariance between the x's and the y's (a number): -4

$\operatorname{cov}_{xy,\text{biased}} = \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) = -4$

Calculate the w0 from y = w0 + w1*x (i.e. calculate the Intercept, a number): 1

$w_1 = \frac{\operatorname{cov}_{xy,\text{biased}}}{\operatorname{var}_{x,\text{biased}}}, \qquad w_0 = \bar{y} - w_1\,\bar{x} = -3 - (-2)(2) = 1$

Calculate the w1 from y = w0 + w1*x (i.e. calculate the Slope, a number): -2

$w_1 = \frac{\operatorname{cov}_{xy,\text{biased}}}{\operatorname{var}_{x,\text{biased}}} = \frac{-4}{2} = -2$
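The hand calculation above can be checked mechanically. The following short Python sketch (not part of the original exam) mirrors it, using the point values as given above:

```python
# Biased sample statistics and the least-squares line, mirroring the hand
# calculation above (point values as given in the problem statement).
xs = [0, 2, 2, 4]
ys = [1, 1, -7, -7]
n = len(xs)

x_bar = sum(xs) / n                                    # sample mean of x = 2
y_bar = sum(ys) / n                                    # sample mean of y = -3
var_x = sum((x - x_bar) ** 2 for x in xs) / n          # biased variance of x = 2
var_y = sum((y - y_bar) ** 2 for y in ys) / n          # biased variance of y = 16
cov_xy = sum((x - x_bar) * (y - y_bar)
             for x, y in zip(xs, ys)) / n              # biased covariance = -4

w1 = cov_xy / var_x                                    # slope = -2
w0 = y_bar - w1 * x_bar                                # intercept = 1
print(f"y = {w0} + {w1}*x")
```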

3. Classification (1 Points). Given the following two classes, each with three points, find the mean of each class and then find the equation of the plane that separates those classes halfway between their respective means. Ensure that the separating plane produces positive values for Class 2 Points (Blue) and negative values for Class 1 Points (Red). In other words, find the separating plane that could be used for classification with the maximum margin between the classes and where the separating plane z is z > 0 for Class 2 Points (Blue) and z < 0 for Class 1 Points (Red).

Class 1 Points (Red): { (-2, 2), (0, 2), (2, 2) }
Class 2 Points (Blue): { (-2, 0), (0, 0), (2, 0) }

The following is the plot of these two classes. We want the following classification:

$\bar{x}_{\text{Class 1}} = \frac{1}{n}\sum_{i=1}^{n} (x_i, y_i) = \left(\frac{-2 + 0 + 2}{3}, \frac{2 + 2 + 2}{3}\right) = (0, 2)$

$\bar{x}_{\text{Class 2}} = \frac{1}{n}\sum_{i=1}^{n} (x_i, y_i) = \left(\frac{-2 + 0 + 2}{3}, \frac{0 + 0 + 0}{3}\right) = (0, 0)$

$(x, y)_{\text{midpoint}} = \frac{\bar{x}_{\text{Class 1}} + \bar{x}_{\text{Class 2}}}{2} = \frac{(0, 2) + (0, 0)}{2} = (0, 1)$

Calculate the x value of the Midpoint between the Sample Mean of the Class 1 Points and the Sample Mean of the Class 2 Points (i.e. the value of x in the midpoint (x, y)): 0

Calculate the y value of the Midpoint between the Sample Mean of the Class 1 Points and the Sample Mean of the Class 2 Points (i.e. the value of y in the midpoint (x, y)): 1

Since we want the separating plane to produce positive values for Class 2 Points (Blue), we want the normalized normal vector to point toward Class 2 Points (Blue). Since we want to find the separating plane used for classification with the maximum margin between the classes, we want to start that vector from the midpoint we just calculated; hence, the angle we need to use for this normalized normal vector is -90°. Therefore, we have:

$\hat{n}_{\text{normalized}} = (\cos\theta, \sin\theta) = (\cos(-90°), \sin(-90°)) = (0, -1)$

Calculate the x value of the Normalized Normal Vector that points towards the Class 2 Points (Blue) (i.e. the value of the x in the normalized normal vector): 0

Calculate the y value of the Normalized Normal Vector that points towards the Class 2 Points (Blue) (i.e. the value of the y in the normalized normal vector): -1

Since we have the following:

$f_{\text{classification}}(x, y) = (\cos\theta, \sin\theta) \cdot \big( (x, y) - (x, y)_{\text{midpoint}} \big) = (0, -1) \cdot \big( (x, y) - (0, 1) \big) = (0, -1) \cdot (x - 0,\; y - 1) = 0\,(x - 0) - 1\,(y - 1) = 1 + 0\,x - 1\,y$

Calculate the w0 (a number) in the equation of the separating plane z = w0 + w1*x + w2*y that creates the separating line in the level set z = 0: 1

Calculate the w1 (a number) in the equation of the separating plane z = w0 + w1*x + w2*y that creates the separating line in the level set z = 0: 0

Calculate the w2 (a number) in the equation of the separating plane z = w0 + w1*x + w2*y that creates the separating line in the level set z = 0: -1
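A small Python sketch of the same construction (class means, their midpoint, and a unit normal pointing toward the Blue class) follows; it is illustrative only and assumes the class coordinates as given above:

```python
import math

# Class means, the midpoint between them, and a unit normal pointing toward the
# Blue class; together they give the separating plane z = w0 + w1*x + w2*y.
red = [(-2, 2), (0, 2), (2, 2)]    # Class 1 (Red):  z should be negative
blue = [(-2, 0), (0, 0), (2, 0)]   # Class 2 (Blue): z should be positive

def mean(points):
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

m_red, m_blue = mean(red), mean(blue)                             # (0, 2), (0, 0)
mid = ((m_red[0] + m_blue[0]) / 2, (m_red[1] + m_blue[1]) / 2)    # (0, 1)

dx, dy = m_blue[0] - m_red[0], m_blue[1] - m_red[1]   # vector Red mean -> Blue mean
length = math.hypot(dx, dy)
nx, ny = dx / length, dy / length                     # unit normal = (0, -1)

w1, w2 = nx, ny                                       # 0 and -1
w0 = -(nx * mid[0] + ny * mid[1])                     # 1
print(f"z = {w0} + {w1}*x + {w2}*y")                  # z = 1.0 + 0.0*x + -1.0*y
```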

4. Naive Bayes Network ( Points). Using a small subset of data from an experiment at the Ft. Benning Battle Lab, construct a Naïve Bayes Network to predict a subject's first action (move or fire) in close range engagements under various circumstances. "Urban" means the subject was in a city, vice being in a forest. "Day" means the encounter happened in daylight, vice at night using night-vision equipment. "Within5m" means that the target was first observed at a range less than 5 meters, vice up to 50 meters. "MoveFirst" means that the subject's first action on detecting the threat was to move to cover, vice firing immediately.

HINT: Draw the graph of this Naïve Bayes Network, remembering that a Naïve Bayes Network assumes conditional independence.

QUESTION: Using the data table above, do the following (Do not write any decimal approximations):

1) Write the exact fraction for P( MoveFirst = true ). 3/10

2) Write the exact fraction for P( MoveFirst = false ). 7/10

3) Write the exact fraction for P( Urban = false | MoveFirst = false ). 4/7

4) Write the exact fraction for P( Urban = false | MoveFirst = true ). 1/3

5) Write the exact fraction for P( Urban = true | MoveFirst = false ). 3/7

6) Write the exact fraction for P( Urban = true | MoveFirst = true ). 2/3

7) Write the exact fraction for P( Day = false | MoveFirst = false ). 3/7

8) Write the exact fraction for P( Day = false | MoveFirst = true ). 2/3

9) Write the exact fraction for P( Day = true | MoveFirst = false ). 4/7

10) Write the exact fraction for P( Day = true | MoveFirst = true ). 1/3

11) Write the exact fraction for P( Within5m = false | MoveFirst = false ). 3/7

12) Write the exact fraction for P( Within5m = false | MoveFirst = true ). 2/3

13) Write the exact fraction for P( Within5m = true | MoveFirst = false ). 4/7

14) Write the exact fraction for P( Within5m = true | MoveFirst = true ). 1/3

15) Write the equation to predict the probability that a subject's first action is to shoot when he encounters a threat within 5 meters in a city at night. In other words, write the equation that could calculate P( MoveFirst = false | Urban = true, Day = false, Within5m = true ) in terms of only the probabilities above. Just write the probabilities (using the rules of Probability that you learned about in Chapter 13). Do NOT write the exact fractions or the decimal approximation.

P( MoveFirst = false | Urban = true, Day = false, Within5m = true ) = ( P( Urban = true | MoveFirst = false ) * P( Day = false | MoveFirst = false ) * P( Within5m = true | MoveFirst = false ) * P( MoveFirst = false ) ) / ( P( Urban = true | MoveFirst = false ) * P( Day = false | MoveFirst = false ) * P( Within5m = true | MoveFirst = false ) * P( MoveFirst = false ) + P( Urban = true | MoveFirst = true ) * P( Day = false | MoveFirst = true ) * P( Within5m = true | MoveFirst = true ) * P( MoveFirst = true ) )

Or, written out so that we can actually have a hope of being able to read it:

$P(\text{MoveFirst}{=}\text{false} \mid \text{Urban}{=}\text{true},\ \text{Day}{=}\text{false},\ \text{Within5m}{=}\text{true}) =
\dfrac{P(\text{Urban}{=}\text{true} \mid \text{MoveFirst}{=}\text{false})\,
       P(\text{Day}{=}\text{false} \mid \text{MoveFirst}{=}\text{false})\,
       P(\text{Within5m}{=}\text{true} \mid \text{MoveFirst}{=}\text{false})\,
       P(\text{MoveFirst}{=}\text{false})}
      {\begin{array}{l}
       P(\text{Urban}{=}\text{true} \mid \text{MoveFirst}{=}\text{false})\,
       P(\text{Day}{=}\text{false} \mid \text{MoveFirst}{=}\text{false})\,
       P(\text{Within5m}{=}\text{true} \mid \text{MoveFirst}{=}\text{false})\,
       P(\text{MoveFirst}{=}\text{false}) \\
       {}+ P(\text{Urban}{=}\text{true} \mid \text{MoveFirst}{=}\text{true})\,
       P(\text{Day}{=}\text{false} \mid \text{MoveFirst}{=}\text{true})\,
       P(\text{Within5m}{=}\text{true} \mid \text{MoveFirst}{=}\text{true})\,
       P(\text{MoveFirst}{=}\text{true})
       \end{array}}$
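As an illustration (not part of the exam answer), the posterior in part 15 can be evaluated directly from the fractions above with exact arithmetic; a minimal Python sketch:

```python
from fractions import Fraction as F

# P(MoveFirst | Urban=true, Day=false, Within5m=true) via the naive Bayes
# factorization, using the exact fractions computed in parts 1-14 above.
prior         = {True: F(3, 10), False: F(7, 10)}   # P(MoveFirst)
p_urban_true  = {True: F(2, 3),  False: F(3, 7)}    # P(Urban=true    | MoveFirst)
p_day_false   = {True: F(2, 3),  False: F(3, 7)}    # P(Day=false     | MoveFirst)
p_within_true = {True: F(1, 3),  False: F(4, 7)}    # P(Within5m=true | MoveFirst)

def joint(move_first):
    # Numerator of Bayes' rule for one value of MoveFirst.
    return (p_urban_true[move_first] * p_day_false[move_first]
            * p_within_true[move_first] * prior[move_first])

# Normalize over both values of MoveFirst to get the posterior of "fire first".
posterior_fire_first = joint(False) / (joint(False) + joint(True))
print(posterior_fire_first)   # an exact fraction, with no decimal approximation
```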

13 5. k-nearest Neighbor ( Poits). If k 1, the our closest eighbors would be: ad majority rule would classify our query poit as. k, the our closest eighbors would be:, If If 3 k, the our closest eighbors would be:,, k, the our closest eighbors would be:,,, k, the our closest eighbors would be:,,,, k, the our closest eighbors would be:,,,,, k, the our closest eighbors would be:,,,,,, If If 5 If 6 If 7 7 ad majority rule would classify our query poit as. ad majority rule would classify our query poit as. ad majority rule would classify our query poit as. ad majority rule would classify our query poit as. ad our query poit would be arbitrarily classified. ad majority rule would classify our query poit as. 6. Liear Regressio vs. Logistic Regressio (6 Poits). Describe the differece betwee Liear Regressio ad Logistic Regressio. Detail what is beig miimized i both methods (i.e. what is beig regressed). Liear Regressio is a Supervised Learig method used for Regressio, ot Classificatio. Liear Regressio miimizes the distace of the y value of each poit with the y value of the lie of regressio. Logistic Regressio is a Supervised Learig method used for Classificatio, ot Regressio. Logistic Regressio miimizes the distace betwee the class of a poit (expressed as either a 0 or a 1) ad the value of that poit o the Logistic Fuctio.

7. Parametric vs. Nonparametric Methods (8 Points). For the methods below, indicate whether the method is parametric or nonparametric using a "P" for parametric and an "N" for nonparametric:

Linear Regression: P
k-Nearest Neighbor: N
Bayesian Network: P
Multivariate Linear Regression: P
Logistic Regression: P
Perceptron: P
Multilayer Feed-Forward Neural Network: P
Locally Weighted Regression: N
Support Vector Machines: P
Kernel Density Estimation: N

8. Parametric vs. Nonparametric ( Points). Describe the difference between parametric methods and nonparametric methods.

Parametric methods summarize data into a fixed number of parameters whose count does not increase as the number of data points increases. Nonparametric methods do not summarize data into a fixed number of parameters; the number of parameters (if they're used at all) increases as the number of data points increases.

9. The Curse of Dimensionality (6 Points). Describe the Curse of Dimensionality (esp. as described by Pedro Domingos in "A Few Useful Things to Know about Machine Learning").

The curse of dimensionality, coined by Bellman in 1961, refers to the fact that many algorithms that work fine in low dimensions become intractable when the input is high-dimensional. But in machine learning it refers to much more. Generalizing correctly becomes exponentially harder as the dimensionality (number of features) of the examples grows, because a fixed-size training set covers a dwindling fraction of the input space. Moreover, the similarity-based reasoning that machine learning algorithms depend on (explicitly or implicitly) breaks down in high dimensions.
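To make the "dwindling fraction of the input space" point concrete, here is a tiny Python illustration (not part of the exam): the fraction of the unit hypercube covered by an axis-aligned neighborhood of fixed half-width shrinks exponentially with the dimension d.

```python
# Fraction of the unit hypercube [0,1]^d covered by an axis-aligned neighborhood
# of half-width 0.1 around a point: (2 * 0.1)^d, which vanishes as d grows.
half_width = 0.1
for d in (1, 2, 5, 10, 20):
    coverage = (2 * half_width) ** d
    print(f"d = {d:2d}: the neighborhood covers {coverage:.2e} of the space")
```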

10. Supervised vs. Unsupervised Learning ( Points). Describe the difference between Supervised and Unsupervised Learning.

Supervised Learning requires data that are labeled with their class during learning; only class membership (classification) is determined by the supervised learning method. Unsupervised Learning does not require data that are labeled with their class during learning; both the classes and class membership are determined by the unsupervised learning method.

11. A* Search (0 Points).

1) The node where h=15: 1
2) The node where h=11: 0
3) The node where h=8: 4
4) The node where h=7: 3
5) The node where h=6: 2
6) The node where h=10: 5
7) The node where h=: 0
8) The node where h=3: 0
9) The node where h=9: 0

10) The node where h=5: 0
11) The node where h=20: 0
12) The node where h=0 GOAL: 6

13) Is this heuristic admissible? If yes, why? If no, why?

No, the heuristic is not admissible because h=20 is an overestimate: single steps only cost 10, and the node where h=20 is only one step away from the goal.

Here's the exhaustive answer. First, we augment the graph above with edge weights. Next, we add the top node to the frontier, and explored is empty:

frontier = { (f=15, g=0, h=15) }        explored = { }

Entering the loop, we see that frontier is not empty, so we pop frontier to node:

node = (f=15, g=0, h=15)

Then, we test if node is the goal. Since it isn't, we add node to explored:

explored = { (f=15, g=0, h=15) }

For each child of the node, we test if it is in explored and frontier. Since they aren't, we add each child to the frontier, giving priority to the smallest f:

frontier = { (f=16, g=10, h=6), (f=17, g=10, h=7), (f=18, g=10, h=8), (f=20, g=10, h=10), (f=21, g=10, h=11) }

On the next iteration, we again see that frontier is not empty, so we pop frontier to node:

node = (f=16, g=10, h=6)

Since this node isn't the goal, we add the node to explored:

explored = { (f=15, g=0, h=15), (f=16, g=10, h=6) }

and then we look at the children of that node. For each child, we check to see if the child is in the frontier or in explored. Since each child isn't, we add these children to frontier:

frontier = { (f=17, g=10, h=7), (f=18, g=10, h=8), (f=20, g=10, h=10), (f=21, g=10, h=11), (f=25, g=20, h=5), (f=40, g=20, h=20) }

On the next iteration, we see that frontier is not empty, so we pop frontier to node:

node = (f=17, g=10, h=7)

Since node isn't the goal, we add node to explored:

explored = { (f=15, g=0, h=15), (f=16, g=10, h=6), (f=17, g=10, h=7) }

Since node doesn't have any children, we go on to the next iteration.

On this next iteration, we see that frontier is still not empty, so we pop frontier to node:

node = (f=18, g=10, h=8)

Since node isn't the goal, we add node to explored:

explored = { (f=15, g=0, h=15), (f=16, g=10, h=6), (f=17, g=10, h=7), (f=18, g=10, h=8) }

For each child of node, we check to see if the child is in the frontier or in explored. Since each child isn't, we add these children to frontier:

frontier = { (f=20, g=10, h=10), (f=21, g=10, h=11), (f=23, g=20, h=3), (f=25, g=20, h=5), (f=29, g=20, h=9), (f=40, g=20, h=20) }

On the next iteration, we see that frontier is not empty, so we pop frontier to node:

node = (f=20, g=10, h=10)

Since node isn't the goal, we add node to explored:

explored = { (f=15, g=0, h=15), (f=16, g=10, h=6), (f=17, g=10, h=7), (f=18, g=10, h=8), (f=20, g=10, h=10) }

and then we look at the children of that node. For each child, we check to see if the child is in the frontier or in explored. Since each child isn't, we add these children to frontier:

frontier = { (f=20, g=20, h=0), (f=21, g=10, h=11), (f=23, g=20, h=3), (f=25, g=20, h=5), (f=29, g=20, h=9), (f=40, g=20, h=20) }

On the next iteration, we see that frontier is not empty, so we pop frontier to node:

node = (f=20, g=20, h=0)

Since node is the goal, we have our solution, we're done, and we put zeros in the nodes that will never be expanded:

Therefore, we have the following:

a. The node where h=15: 1
b. The node where h=11: 0
c. The node where h=8: 4
d. The node where h=7: 3
e. The node where h=6: 2
f. The node where h=10: 5
g. The node where h=: 0
h. The node where h=3: 0
i. The node where h=9: 0
j. The node where h=5: 0
k. The node where h=20: 0
l. The node where h=0 GOAL: 6
m. Is this heuristic admissible? If yes, why? If no, why?

Highlighting the following: we see that our heuristic has a higher value than the actual edge cost to the goal, because edge cost = 10 < 20 = heuristic; hence, our heuristic overestimates the cost to reach the goal. Therefore, since our heuristic overestimates the cost to reach the goal, this heuristic is not admissible. In other words: No, our heuristic is not admissible because it overestimates the cost to reach the goal.
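For reference, here is a compact Python sketch of the A* loop traced above. The adjacency structure is reconstructed from the f, g, and h values in the walkthrough (the exam's actual graph figure is not reproduced), every edge costs 10, and a simplified frontier (duplicate entries allowed, explored nodes skipped) stands in for the exact frontier-membership check described above. It reproduces the expansion order 1 through 6.

```python
import heapq

# A* over the graph implied by the walkthrough above: node names carry their
# heuristic value, every edge costs 10, and the start is the node with h=15.
h = {"h15": 15, "h6": 6, "h7": 7, "h8": 8, "h10": 10, "h11": 11,
     "h5": 5, "h20": 20, "h3": 3, "h9": 9, "goal": 0}
children = {"h15": ["h6", "h7", "h8", "h10", "h11"],
            "h6": ["h5", "h20"], "h8": ["h3", "h9"], "h10": ["goal"]}

def a_star(start, goal, step_cost=10):
    frontier = [(h[start], 0, start)]          # (f, g, node); smallest f pops first
    explored = set()
    order = 0
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node in explored:
            continue                           # stale duplicate entry
        order += 1
        print(f"expansion {order}: {node} (f={f}, g={g}, h={h[node]})")
        if node == goal:
            return
        explored.add(node)
        for child in children.get(node, []):
            if child not in explored:
                heapq.heappush(frontier, (g + step_cost + h[child],
                                          g + step_cost, child))

a_star("h15", "goal")   # expands h15, h6, h7, h8, h10, goal -- order 1 through 6
```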
