Quiz 1. Name: Instructions: Closed book, notes, and no electronic devices.


Quiz 1. Name: Instructions: Closed book, notes, and no electronic devices.

1. What is the difference between a deterministic model and a probabilistic model? (Two or three sentences only.)

2. What is the difference between DATA and data? (Two or three sentences only.)

Quiz 2. Name: Instructions: Closed book, notes, and no electronic devices.

1. Suppose the derivative of a curved function f(x) at x = 3.5 is given by f′(3.5) = 0.7. Draw a graph that illustrates this fact.

2. Suppose the integral of a positive curved function over the range from 1 to 2 is 3.0; i.e., suppose that ∫₁² f(x) dx = 3.0. Draw a graph that illustrates this fact.

3. Suppose the derivative of a curved function f(x) at x = 3.5 is given by f′(3.5) = 0.7. Explain how this fact appears in a graph of the function f(x).

4. Suppose the integral of a positive curved function over the range from 1 to 2 is 3.0; i.e., suppose that ∫₁² f(x) dx = 3.0. Explain how this fact appears in a graph of the function f(x).
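Both facts can be checked numerically. A minimal Python sketch, where f(x) = 0.1x² is an assumed example function (chosen so that f′(3.5) = 0.7, as in question 1; its integral over [1, 2] is about 0.233, not 3.0):

```python
# Example function (an assumption for illustration): f(x) = 0.1 * x**2,
# which happens to satisfy f'(3.5) = 0.7 as in question 1.
def f(x):
    return 0.1 * x ** 2

def derivative(g, x, h=1e-6):
    # central-difference approximation to the slope of g at x
    return (g(x + h) - g(x - h)) / (2 * h)

def integral(g, a, b, n=100_000):
    # midpoint-rule approximation to the area under g over [a, b]
    w = (b - a) / n
    return sum(g(a + (i + 0.5) * w) for i in range(n)) * w

slope = derivative(f, 3.5)   # slope of the tangent line at x = 3.5
area = integral(f, 1, 2)     # area under the curve between 1 and 2
print(round(slope, 3), round(area, 3))
```

In a graph, the derivative appears as the slope of the tangent line at x = 3.5, and the integral appears as the area under the curve between 1 and 2.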

Quiz 3. Name: Instructions: Closed book, notes, and no electronic devices.

You are employed by a company to evaluate the credit-worthiness of loan applicants. The next person who walks into your office will have annual income Y. In the absence of any other information about that person, give your model for Y.

Quiz 4. Name: Closed books, notes, and no electronic devices.

Here is the normal q-q plot of the call center data from the book:

[Normal q-q plot shown.]

The data points do not lie on a straight line. As described in the book, how can you tell whether the differences between the points and the line are explainable by chance alone? (1-3 sentences maximum.)

Quiz 5. Name: Closed books, notes, and no electronic devices.

There is a probability θ that a car purchaser will select a red car. What does the phrase "nature favors continuity over discontinuity" tell you about how this probability relates to the age of the purchaser?

Quiz 6. Name: Closed book, notes, and no electronic devices.

1. Which mantra applies when deciding whether to use p(y|x) versus p(x|y)?
A. model produces data
B. nature favors continuity over discontinuity
C. use what you know to predict what you don't know

2. Suppose p(x,y) is a continuous joint distribution. Then ∫∫ p(x,y) dx dy =
A. The mean
B. The variance
C. The covariance
D. 1.0

3. Which formula gives you the conditional distribution p(y|x)?
A. ∫∫ p(y|x) dx dy
B. ∫ p(x,y) dx
C. p(x,y)/p(x)

4. Suppose X ~ p(x) and Y ~ p(y), independent of X. Then the joint distribution is given by p(x,y) = ___.
A. p(x)p(y)
B. ∫∫ p(x,y) dx dy
C. p(x,y)/p(x)
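The formulas in questions 3 and 4 can be illustrated with a small discrete joint distribution (the numbers below are made up for illustration):

```python
# Made-up joint distribution p(x, y) over two binary variables
joint = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.4}

def marginal_x(x):
    # p(x): sum the joint over y
    return sum(p for (xi, y), p in joint.items() if xi == x)

def conditional(y, x):
    # p(y | x) = p(x, y) / p(x), the formula asked about in question 3
    return joint[(x, y)] / marginal_x(x)

# p(y | x = 1) is a proper distribution: its values sum to 1
print(conditional(0, 1), conditional(1, 1))
```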

Quiz 7. Name: Closed book, notes, and no electronic devices.

1. You are on a drunk-driving excursion. What is the probability that you will either kill someone or be killed on this excursion? Give your best guess.

2. ___% of car purchasers at a dealership are younger. Among the younger purchasers at this dealership, 50% buy a red car. Out of the next 100 customers, about how many will be younger customers who purchase a red car?

3. In the housing expense example, what distribution was assumed for income?
A. Uniform B. Normal C. Poisson D. Bernoulli

4. In the psychometric evaluation example, what distribution was assumed for stealing behavior?
A. Uniform B. Normal C. Poisson D. Bernoulli

Quiz 8. Name: Closed book, notes, and no electronic devices.

You will randomly sample a single person from a finite population of N = 1,000 people, and ask that person, "Do you like lemonade?" The answer will be either Y = yes or Y = no. Of the 1,000 people, suppose that 280 would answer yes and 720 would answer no. Write down the population p(y) in list form.

Quiz 9. Name: Closed book, notes, and no electronic devices.

It is common to assume that data Y1, Y2 are independent and identically distributed (iid). Briefly give a real example of data Y1, Y2 that are not identically distributed:
Y1 = ___
Y2 = ___
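The "list form" of p(y) asked for in Quiz 8 can be sketched numerically (the counts come from the quiz itself):

```python
# The finite population from Quiz 8: 280 "yes" and 720 "no" answers
population = ["yes"] * 280 + ["no"] * 720   # N = 1,000 people

# p(y) in list form: each possible value of Y with its probability
p = {y: population.count(y) / len(population) for y in ("yes", "no")}
print(p["yes"], p["no"])   # 0.28 0.72
```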

Quiz 10. Name: Closed book, notes, and no electronic devices.

The graph of a continuous probability distribution function (pdf) p(y) is shown below. Guess the value of E(Y):

[Graph of p(y) versus y shown.]

Quiz 11. Name: Closed book, notes, and no electronic devices.

Here is a distribution that produces Y.

y      p(y)
1      1/2
2      1/4
3      1/4
Total  1.0

Find E(1 + 4Y) using the linearity property of expectation.
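The linearity property asked about here, E(a + bY) = a + b·E(Y), can be verified directly for this distribution; a quick Python check:

```python
# The distribution from the quiz
pmf = {1: 1/2, 2: 1/4, 3: 1/4}

E_Y = sum(y * p for y, p in pmf.items())             # E(Y) = 1.75
E_lin = sum((1 + 4*y) * p for y, p in pmf.items())   # E(1 + 4Y), computed directly

# Linearity: the direct computation agrees with 1 + 4*E(Y)
assert E_lin == 1 + 4 * E_Y
print(E_Y, E_lin)   # 1.75 8.0
```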

Quiz 12. Name: Closed book, notes, and no electronic devices.

1. (30) Show using calculus that f(y) = y² is a convex function.

2. (30) Here is a distribution that produces Y.

y      p(y)
1      1/4
2      2/4
3      1/4
Total  1.0

It is clear that E(Y) = 2; don't show this. It is also easy to show that E(Y²) = 18/4 (or 4.5); don't show this either. Instead, use your answer to 1. to answer this: How do the facts that E(Y) = 2 and E(Y²) = 4.5 illustrate Jensen's inequality?

3. (20) The mean of students' commute times is 10 minutes and the standard deviation is 4 minutes. What percentage of commute times are between 2 and 18 minutes?
A. at least 75%
B. at least 95%
C. exactly 75%
D. exactly 95%
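The facts in question 2 can be checked numerically; for the convex function f(y) = y², Jensen's inequality says E(Y²) ≥ (E(Y))²:

```python
# The distribution from the quiz
pmf = {1: 1/4, 2: 2/4, 3: 1/4}

E_Y = sum(y * p for y, p in pmf.items())       # 2.0
E_Y2 = sum(y**2 * p for y, p in pmf.items())   # 4.5

# Jensen: E(f(Y)) >= f(E(Y)) for convex f; here 4.5 >= 2.0**2 = 4.0
assert E_Y2 >= E_Y ** 2
print(E_Y, E_Y2)   # 2.0 4.5
```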

Quiz 13. Name: Closed book, notes, and no electronic devices.

All mathematical theorems are logical statements of the form "If A is true, then B is true." For example, if A is "that animal is a cow," and B is "that animal is a mammal," we know that if A is true, then B is true. The Central Limit Theorem is similar. There is an assumption (the "A"), and a conclusion (the "B"). If the assumption (A) is true, then the conclusion (B) is true. Give the assumption (the A) and the conclusion (the B) for the Central Limit Theorem.

Assumption: If ___
Conclusion: Then ___
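A simulation sketch of the theorem's if/then structure (the exponential parent distribution and the sample size below are arbitrary choices for illustration):

```python
import random

random.seed(1)
n = 50   # sample size (arbitrary)

# "A": draw many iid samples from a skewed, non-normal parent
# (exponential, mean 1) and record each sample's mean.
means = [sum(random.expovariate(1.0) for _ in range(n)) / n
         for _ in range(5000)]

# "B": the distribution of the sample mean is approximately normal,
# centered near the parent mean 1, even though the parent is not normal.
avg = sum(means) / len(means)
print(round(avg, 2))
```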

Quiz 14. Name: Instructions: Closed book, notes, and no electronic devices. Turn quiz over when done.

1. Suppose Y1, Y2 ~ iid N(μ, σ²). Show, step by step, with justifications, that (Y1 + Y2)/2 is an unbiased estimator of μ.

2. Suppose Y1, Y2 ~ iid Bernoulli(θ). Show, step by step, with justifications, that (Y1 + Y2)/2 is an unbiased estimator of θ.
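The quiz asks for algebraic proofs; purely as a sanity check, unbiasedness can also be seen by simulation (the parameter values below are arbitrary choices):

```python
import random

random.seed(0)

# Normal case: average of two iid N(mu, sigma^2) draws
mu, sigma = 5.0, 2.0
est_norm = [(random.gauss(mu, sigma) + random.gauss(mu, sigma)) / 2
            for _ in range(20000)]

# Bernoulli case: average of two iid Bernoulli(theta) draws
theta = 0.3
est_bern = [((random.random() < theta) + (random.random() < theta)) / 2
            for _ in range(20000)]

# Unbiasedness: the long-run average of each estimator sits at its target
print(round(sum(est_norm) / len(est_norm), 2),
      round(sum(est_bern) / len(est_bern), 2))
```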

Quiz 15. Name: Closed book, notes, and no electronic devices. Note: Problem 4 is a different kind of multiple choice problem.

1. (20) Which distribution is useful as a model for a process that produces occasional outliers?
A. Mixture B. Normal C. Uniform D. Beta

2. (20) Suppose Y ~ p(y), where p(y) is the Poisson pdf. Then
A. E(Y) = 1
B. E(Y) = Var(Y)
C. E(Y) = y₀.₅
D. E(Y) = λ

3. (20) The variance of an estimator θ̂ is 10 and its bias is 2. Then ESD(θ̂) = E{(θ̂ − θ)²} =
A. 12
B. 14
C. Var(θ̂)
D. ___

4. (Select all that apply; 4 points for each correct selection/non-selection.) Suppose Y1, Y2, …, Yn ~ iid p(y), where the mean of the distribution p(y) is μ. Let Ȳ = (Y1 + Y2 + … + Yn)/n. Then Ȳ is ___.
A. an unbiased estimator of μ
B. an efficient estimator of μ
C. a consistent estimator of μ
D. equal to μ when n is large
E. a normally distributed estimator of μ when n is large
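The arithmetic behind question 3 is the decomposition ESD = variance + bias², so 10 + 2² = 14:

```python
# Question 3's decomposition: E{(thetahat - theta)^2} = Var + bias^2
variance, bias = 10, 2
esd = variance + bias ** 2
print(esd)   # 14
```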

Quiz 16. Name: Closed book, notes, and no electronic devices.

1. (20) Data Y are produced by the N(μ, σ²) distribution. What is the parameter space for σ?
A. The set {0, 1}.
B. The set {−3, −2, −1, 0, 1, 2, 3}.
C. The set of all numbers between 0 and 1.
D. The set of all numbers greater than 0.

2. (60) Suppose the data Yi are produced as iid from the Bernoulli(θ) distribution. The data values are y1 = 1 and y2 = 1. Write down L(θ), the likelihood function for θ.
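For question 2's data, the iid likelihood is the product of the two Bernoulli pmf values; a minimal sketch:

```python
# Bernoulli(theta) pmf: theta**y * (1 - theta)**(1 - y) for y in {0, 1}
def L(theta, data=(1, 1)):   # the quiz data: y1 = 1, y2 = 1
    out = 1.0
    for y in data:
        out *= theta ** y * (1 - theta) ** (1 - y)
    return out

# With both observations equal to 1, L(theta) = theta**2 on [0, 1]
assert L(0.5) == 0.25
print([round(L(t / 10), 2) for t in range(11)])
```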

Quiz 17. Name: Instructions: Closed book, notes, and no electronic devices. Turn quiz over when done.

1. (40) Here is a graph of a likelihood function for a parameter θ ("theta"). The MLE θ̂ is indicated in the graph. Give your best guess (a single number) of the Wald standard error of θ̂ as indicated by the graph.

[Graph of the likelihood L.theta versus theta shown.]

2. (20) The log-likelihood function (called LL in the book) for an iid sample is equal to
A. the pdf for the sample.
B. the sum of the pdfs for each observation in the sample.
C. the sum of the logarithms of the pdfs for each observation in the sample.
D. the product of the pdfs for each observation in the sample.

3. (20) Suppose y1, y2, …, yn are produced as an iid sample from N(μ, σ²), where μ and σ are unknown parameters. Then the MLE of σ is
A. (1/n) Σᵢ (yᵢ − ȳ)²
B. (1/(n−1)) Σᵢ (yᵢ − ȳ)²
C. [(1/n) Σᵢ (yᵢ − ȳ)²]^(1/2)
D. [(1/(n−1)) Σᵢ (yᵢ − ȳ)²]^(1/2)

Quiz 18. Name: Instructions: Closed book, notes, and no electronic devices. Turn quiz over when done.

1. Draw a graph of a prior distribution that expresses your prior ignorance about the Bernoulli parameter θ. Label and number both axes.

2. Draw a graph of a posterior distribution about the Bernoulli parameter θ. Label and number both axes.

Quiz 18. Name: Closed books, notes, and no electronic devices.

1. Hans got 2 successes and 8 failures in both his thumbtack toss experiment and in his coin toss experiment. His likelihood function for the thumbtack toss data was ___ his likelihood function for the coin toss data.
A. identical to
B. shifted to the left of
C. shifted to the right of

2. The ___ is what you use to express your uncertainty about the parameters before collecting your data.
A. prior distribution
B. posterior distribution
C. likelihood function

3. Hans gives his prior distribution for μ = mean driving time as follows:

μ         p(μ)
20.0 min  0.4
___ min   0.6
Total     1.0

Hans's prior is an example of a ___ prior.
A. non-informative
B. vague
C. uniform
D. dogmatic

4. The function ___ is the kernel of the beta(___) distribution.
A. 1, 2
B. 0, 1
C. 1/3, 2/3
D. 2, 3
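Hans's 2-successes-8-failures data can be sketched numerically. Assuming (as an illustration, not from the quiz) a uniform beta(1, 1) prior, the posterior kernel is θ²(1 − θ)⁸, i.e., a beta(3, 9) distribution:

```python
# Posterior kernel under an assumed uniform prior: theta^2 * (1 - theta)^8
def kernel(theta, successes=2, failures=8):
    return theta ** successes * (1 - theta) ** failures

# The kernel peaks at successes / (successes + failures) = 0.2
grid = [i / 100 for i in range(101)]
mode = max(grid, key=kernel)
print(mode)   # 0.2
```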

Quiz 19. Name: Closed book, notes, and no electronic devices.

1. What is Markov Chain Monte Carlo?
A. A gambling strategy
B. A financial investment strategy
C. A method for simulating data from p(data | θ)
D. A method for simulating θ from p(θ | data)

2. Why is the prior p(θ) = 1, for −∞ < θ < ∞, called "improper"?
A. Because the area under the curve is infinity
B. Because distributions cannot be negative numbers
C. Because it is dogmatic
D. Because it cannot be used in practice

3. When do you need to use Bayesian statistics?
A. When the normality assumption is violated
B. When the data are not produced by a model
C. When you wish to select plausible values of the parameters
D. When you have no idea what is your prior distribution

4. "Past performance is no guarantee of future performance." So the parameters that govern future financial markets are unknown. How does the book suggest that you select these parameters?
A. By calculating them from future data
B. By simulating them from their posterior distribution, given past data
C. By asking experts (such as Jimmie Buffet) to give them to you
D. By using logistic regression on historical financial data

Quiz 19. Name: Instructions: Closed book, notes, and no electronic devices. Turn quiz over when done.

1. In the Bayesian logistic regression example, it was found that
A. Success probability increases with greater experience
B. Success probability decreases with greater experience
C. Success probability sometimes increases, sometimes decreases with greater experience
D. Success probability is independent of experience

2. In the stock return example, what was the approximate probability that the mean return, μ, was more than 0?
A. 0.0
B. ___
C. ___
D. ___

3. Why is the prior p(θ) = 1, for −∞ < θ < ∞, called "improper"?
E. Because it cannot be used in practice
F. Because the area under the curve is infinity
G. Because distributions cannot be negative numbers
H. Because it is dogmatic

4. When do you need to use Bayesian statistics?
E. When you wish to select plausible values of the parameters
F. When the normality assumption is violated
G. When the data are not produced by a model
H. When you have no idea what is your prior distribution

Quiz 20. Name: Closed book, notes, and no electronic devices. Turn quiz over when done.

1. When you say that a frequentist confidence interval is an approximate 95% interval, what does the word "approximate" mean?
A. The distribution p(y) in the model Y1, Y2, …, Yn ~ iid p(y) is not exactly normal; it is just approximately a normal distribution.
B. That the true confidence level is not exactly 95%; it is instead just approximately 95%.
C. That the endpoints of the interval are not exactly correct; they are only approximately correct.

2. Suppose Y1, Y2, …, Y10 are iid coin toss outcomes, either 0 or 1. Thus each Yi has expected value μ = 0.5 and variance σ² = 0.25. The variance of (Y1 + Y2 + … + Y10)/10 is
A. ___
B. ___
C. ___
D. 0.5

3. In the town/mountain lion analogy, the mountain lion represents ___ and the town represents ___.
A. the estimate, the estimand
B. the estimand, the estimate
C. confidence, probability
D. probability, confidence

4. When are frequentist and Bayesian conclusions similar?
A. when the normal distribution is assumed
B. when the data are free of outliers
C. when the prior is vague
D. when the data are iid
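Question 2's variance formula, Var(Ȳ) = σ²/n = 0.25/10 = 0.025, can be checked by simulation:

```python
import random

random.seed(2)

def avg10():
    # average of 10 iid fair coin tosses (0 or 1)
    return sum(random.randint(0, 1) for _ in range(10)) / 10

draws = [avg10() for _ in range(40000)]
m = sum(draws) / len(draws)
var = sum((d - m) ** 2 for d in draws) / len(draws)
print(round(var, 3))   # close to 0.25 / 10 = 0.025
```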

Quiz 21. Name: Closed book, notes, and no electronic devices. Turn quiz over when done.

1. The permutation model is an example of
A. an iid model
B. a normally distributed model
C. a null model
D. a Bernoulli model

2. Suppose Y1, Y2, …, Y16 ~ iid p0(y), where the mean of any Yi is μ, and the variance of any Yi is σ². Let Ȳ1 be the average of the first 8 observations, and let Ȳ2 be the average of the last 8 observations. Then E(Ȳ2 − Ȳ1) =
A. 0
B. ȳ2 − ȳ1
C. ___
D. ___

3. Suppose Y1, Y2, …, Y16 ~ iid p0(y), where the mean of any Yi is μ, and the variance of any Yi is σ². Let Ȳ1 be the average of the first 8 observations, and let Ȳ2 be the average of the last 8 observations. Then Var(Ȳ2 − Ȳ1) =
A. 0
B. σ²/8
C. σ²/8 + σ²/8 = σ²/4
D. ___

4. What does a p-value of ___ mean?
A. There is a ___ probability that the data are explained by chance alone
B. There is a ___ probability of seeing a difference as extreme or more extreme than the observed difference, assuming the data are produced by a null model
C. There is a ___ probability that the data are not explained by chance alone
D. There is a ___ probability of seeing a difference that is less extreme than the observed difference, assuming the data are produced by a null model
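A sketch of the permutation (null) model from question 1, with made-up data: under the null, group labels are interchangeable, so the observed difference in means is compared against differences computed after shuffling the labels:

```python
import random

random.seed(3)
group1 = [12, 15, 14, 11, 16, 13, 15, 14]   # made-up data
group2 = [10, 12, 11, 13, 9, 12, 10, 11]
observed = sum(group2) / 8 - sum(group1) / 8

pooled = group1 + group2
reps, count = 5000, 0
for _ in range(reps):
    random.shuffle(pooled)                   # the permutation null model
    diff = sum(pooled[8:]) / 8 - sum(pooled[:8]) / 8
    if abs(diff) >= abs(observed):
        count += 1

# p-value in the sense of question 4, answer B
p_value = count / reps
print(round(observed, 2), p_value)
```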

Quiz 22. Name: Closed book, notes, and no electronic devices. Turn quiz over when done.

1. Suppose a random variable V is produced by the chi-square distribution with 10 degrees of freedom. Then E(V) =
A. 0
B. 1
C. 9
D. 10

2. Suppose Y1, Y2, …, Yn ~ iid N(μ, σ²). What is the distribution of Σᵢ (Yᵢ − Ȳ)² / σ²?
A. N(0, 1)
B. N(μ, σ²)
C. chi-square with n − 1 degrees of freedom
D. chi-square with n degrees of freedom

3. Why was the confidence interval for σ so wide in the failure time example?
A. Because the normality assumption was violated
B. Because the chi-squared assumption was violated
C. Because the sample size was very small
D. Because the sample size was very large

4. When is the distribution of the statistic (Ȳ − μ)/(σ̂/√n) exactly a t-distribution?
A. When Y1, Y2, …, Yn ~ iid N(μ, σ²)
B. When Y1, Y2, …, Yn ~ iid T_df
C. When the sample size, n, is sufficiently large
D. Never
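Question 2's statistic can be checked by simulation (the values of μ, σ, and n below are arbitrary choices): its long-run average matches the mean of a chi-square with n − 1 degrees of freedom:

```python
import random

random.seed(4)
mu, sigma, n = 10.0, 2.0, 5   # arbitrary illustration values

def stat():
    ys = [random.gauss(mu, sigma) for _ in range(n)]
    ybar = sum(ys) / n
    return sum((y - ybar) ** 2 for y in ys) / sigma ** 2

vals = [stat() for _ in range(20000)]
# A chi-square with n - 1 df has mean n - 1 (cf. question 1: E(V) = df)
print(round(sum(vals) / len(vals), 1))   # close to n - 1 = 4
```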

Quiz 22. Name: Instructions: Closed book, notes, and no electronic devices. Turn quiz over when done.

1. (40) Suppose Y ~ N(10, 5²). Let T = 2Y. What is the distribution of T? Answer in words, no symbols.

2. (40) Suppose Y1, Y2 ~ iid N(0, 1). Let T = Y1/Y2. What is the distribution of T? Answer in words, no symbols.
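Question 1 can be checked by simulation: a constant times a normal random variable is again normal, with the mean and standard deviation scaled by that constant:

```python
import random

random.seed(6)
# T = 2Y where Y ~ N(10, 5^2); T should be normal with mean 20 and sd 10
ts = [2 * random.gauss(10, 5) for _ in range(20000)]
m = sum(ts) / len(ts)
sd = (sum((t - m) ** 2 for t in ts) / len(ts)) ** 0.5
print(round(m), round(sd))   # about 20 and 10
```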

Quiz 23. Name: Instructions: Closed book, notes, and no electronic devices. Turn quiz over when done.

The two-sample t test is used to compare means of two distinct groups. The data are Yij, where i denotes group (i = 1, 2) and j indicates observation within group (j = 1, 2, …, ni). State the null model that you assume to have produced your data Yij when you use the two-sample t test.

Quiz 24. Name: Instructions: Closed book, notes, and no electronic devices. Turn quiz over when done.

Here is a partial data set to be used in a standard regression analysis (as discussed in the reading) to predict Y = price of a used car of a particular make and model from X = age (in years) of the car.

Obs  X=age  Y=price
1    10     Y1
2    ___    Y2
3    ___    Y3
4    ___    Y4

State the null model for how Y1, Y2, Y3, Y4 are produced.

Quiz 25. Name: Instructions: Closed book, notes, and no electronic devices. Turn quiz over when done.

The George HW Bush ratings (X) and the Barbara Bush ratings (Y) are as follows.

Obs  X=GeorgeHWBush  Y=BarbBush
1    ___             2
2    ___             3
3    ___             4
4    ___             1
5    ___             4
6    ___             1
7    ___             1
8    ___             4

These data were used in a chi-square test in the reading. This test assumes a particular null model. Explain in words, without using symbols, the null model for how the BarbBush data (the Y data) were produced. Specifically, how were the numbers 2, 3, 4, 1, 4, 1, 1, 4 produced, according to the null model?

Quiz 26. Name: Instructions: Closed book, notes, and no electronic devices. Turn quiz over when done.

In the chapter, data were used to test conformance with a standard. The standard was that the computer chip width data values look as if produced independently by the normal distribution having mean 310 and standard deviation 4.5. How will you assume the computer chip width data values are produced when you perform power analysis? Answer in words; do not use symbols.
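A simulation sketch of the power-analysis setup: the data are assumed produced from an alternative mean (312 below is an arbitrary choice, as is n = 30) rather than the standard's mean 310, and one counts how often the test rejects:

```python
import random

random.seed(7)
n, mu0, sigma = 30, 310.0, 4.5   # the standard's mean and sd; n is arbitrary
mu_alt = 312.0                   # assumed true mean under the alternative

reps, reject = 2000, 0
for _ in range(reps):
    ys = [random.gauss(mu_alt, sigma) for _ in range(n)]
    z = (sum(ys) / n - mu0) / (sigma / n ** 0.5)
    if abs(z) > 1.96:            # two-sided z test of mean 310
        reject += 1

print(reject / reps)   # estimated power
```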

Quiz 27. Name: Closed books, notes, and no electronic devices.

Here is a data set.

[Data table with observations Obs and values y shown.]

How do you produce a bootstrap sample y*1, y*2, …, y*213 from this data set?
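The bootstrap recipe the quiz asks for can be sketched as follows (the data values below are made-up placeholders; in the quiz, the data set itself has 213 observations):

```python
import random

random.seed(5)
data = [3.1, 4.7, 2.2, 5.0, 3.8]   # made-up placeholder values
n = 213

# Bootstrap sample: draw n values from the observed data WITH replacement
boot = [random.choice(data) for _ in range(n)]   # y*1, ..., y*213

assert len(boot) == n
assert set(boot) <= set(data)   # every bootstrap value comes from the data
```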


More information

Some Probability and Statistics

Some Probability and Statistics Some Probability and Statistics David M. Blei COS424 Princeton University February 12, 2007 D. Blei ProbStat 01 1 / 42 Who wants to scribe? D. Blei ProbStat 01 2 / 42 Random variable Probability is about

More information

This does not cover everything on the final. Look at the posted practice problems for other topics.

This does not cover everything on the final. Look at the posted practice problems for other topics. Class 7: Review Problems for Final Exam 8.5 Spring 7 This does not cover everything on the final. Look at the posted practice problems for other topics. To save time in class: set up, but do not carry

More information

Bernoulli and Poisson models

Bernoulli and Poisson models Bernoulli and Poisson models Bernoulli/binomial models Return to iid Y 1,...,Y n Bin(1, ). The sampling model/likelihood is p(y 1,...,y n ) = P y i (1 ) n P y i When combined with a prior p( ), Bayes rule

More information

A Primer on Statistical Inference using Maximum Likelihood

A Primer on Statistical Inference using Maximum Likelihood A Primer on Statistical Inference using Maximum Likelihood November 3, 2017 1 Inference via Maximum Likelihood Statistical inference is the process of using observed data to estimate features of the population.

More information

Introduction and Overview STAT 421, SP Course Instructor

Introduction and Overview STAT 421, SP Course Instructor Introduction and Overview STAT 421, SP 212 Prof. Prem K. Goel Mon, Wed, Fri 3:3PM 4:48PM Postle Hall 118 Course Instructor Prof. Goel, Prem E mail: goel.1@osu.edu Office: CH 24C (Cockins Hall) Phone: 614

More information

Human-Oriented Robotics. Probability Refresher. Kai Arras Social Robotics Lab, University of Freiburg Winter term 2014/2015

Human-Oriented Robotics. Probability Refresher. Kai Arras Social Robotics Lab, University of Freiburg Winter term 2014/2015 Probability Refresher Kai Arras, University of Freiburg Winter term 2014/2015 Probability Refresher Introduction to Probability Random variables Joint distribution Marginalization Conditional probability

More information

Statistics for the LHC Lecture 1: Introduction

Statistics for the LHC Lecture 1: Introduction Statistics for the LHC Lecture 1: Introduction Academic Training Lectures CERN, 14 17 June, 2010 indico.cern.ch/conferencedisplay.py?confid=77830 Glen Cowan Physics Department Royal Holloway, University

More information

Lecture 3. G. Cowan. Lecture 3 page 1. Lectures on Statistical Data Analysis

Lecture 3. G. Cowan. Lecture 3 page 1. Lectures on Statistical Data Analysis Lecture 3 1 Probability (90 min.) Definition, Bayes theorem, probability densities and their properties, catalogue of pdfs, Monte Carlo 2 Statistical tests (90 min.) general concepts, test statistics,

More information

SPRING 2007 EXAM C SOLUTIONS

SPRING 2007 EXAM C SOLUTIONS SPRING 007 EXAM C SOLUTIONS Question #1 The data are already shifted (have had the policy limit and the deductible of 50 applied). The two 350 payments are censored. Thus the likelihood function is L =

More information

Terminology. Experiment = Prior = Posterior =

Terminology. Experiment = Prior = Posterior = Review: probability RVs, events, sample space! Measures, distributions disjoint union property (law of total probability book calls this sum rule ) Sample v. population Law of large numbers Marginals,

More information

ISQS 5349 Final Exam, Spring 2017.

ISQS 5349 Final Exam, Spring 2017. ISQS 5349 Final Exam, Spring 7. Instructions: Put all answers on paper other than this exam. If you do not have paper, some will be provided to you. The exam is OPEN BOOKS, OPEN NOTES, but NO ELECTRONIC

More information

McGill University. Faculty of Science. Department of Mathematics and Statistics. Part A Examination. Statistics: Theory Paper

McGill University. Faculty of Science. Department of Mathematics and Statistics. Part A Examination. Statistics: Theory Paper McGill University Faculty of Science Department of Mathematics and Statistics Part A Examination Statistics: Theory Paper Date: 10th May 2015 Instructions Time: 1pm-5pm Answer only two questions from Section

More information

Probability and Statistics

Probability and Statistics Probability and Statistics Jane Bae Stanford University hjbae@stanford.edu September 16, 2014 Jane Bae (Stanford) Probability and Statistics September 16, 2014 1 / 35 Overview 1 Probability Concepts Probability

More information

1. Here is a distribution. y p(y) A.(5) Draw a graph of this distribution. Solution:

1. Here is a distribution. y p(y) A.(5) Draw a graph of this distribution. Solution: ISQS 5347 Final Exam. Instructions: Open book. No loose leaf notes. No electronic devices. Put all answers on the paper provided to ou. Points (out of 200) are in parentheses. 1. Here is a distribution.

More information

Lecture 2: Categorical Variable. A nice book about categorical variable is An Introduction to Categorical Data Analysis authored by Alan Agresti

Lecture 2: Categorical Variable. A nice book about categorical variable is An Introduction to Categorical Data Analysis authored by Alan Agresti Lecture 2: Categorical Variable A nice book about categorical variable is An Introduction to Categorical Data Analysis authored by Alan Agresti 1 Categorical Variable Categorical variable is qualitative

More information

Naïve Bayes classification

Naïve Bayes classification Naïve Bayes classification 1 Probability theory Random variable: a variable whose possible values are numerical outcomes of a random phenomenon. Examples: A person s height, the outcome of a coin toss

More information

PART I INTRODUCTION The meaning of probability Basic definitions for frequentist statistics and Bayesian inference Bayesian inference Combinatorics

PART I INTRODUCTION The meaning of probability Basic definitions for frequentist statistics and Bayesian inference Bayesian inference Combinatorics Table of Preface page xi PART I INTRODUCTION 1 1 The meaning of probability 3 1.1 Classical definition of probability 3 1.2 Statistical definition of probability 9 1.3 Bayesian understanding of probability

More information

Department of Statistical Science FIRST YEAR EXAM - SPRING 2017

Department of Statistical Science FIRST YEAR EXAM - SPRING 2017 Department of Statistical Science Duke University FIRST YEAR EXAM - SPRING 017 Monday May 8th 017, 9:00 AM 1:00 PM NOTES: PLEASE READ CAREFULLY BEFORE BEGINNING EXAM! 1. Do not write solutions on the exam;

More information

Density Estimation. Seungjin Choi

Density Estimation. Seungjin Choi Density Estimation Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr http://mlg.postech.ac.kr/

More information

A.I. in health informatics lecture 2 clinical reasoning & probabilistic inference, I. kevin small & byron wallace

A.I. in health informatics lecture 2 clinical reasoning & probabilistic inference, I. kevin small & byron wallace A.I. in health informatics lecture 2 clinical reasoning & probabilistic inference, I kevin small & byron wallace today a review of probability random variables, maximum likelihood, etc. crucial for clinical

More information

01 Probability Theory and Statistics Review

01 Probability Theory and Statistics Review NAVARCH/EECS 568, ROB 530 - Winter 2018 01 Probability Theory and Statistics Review Maani Ghaffari January 08, 2018 Last Time: Bayes Filters Given: Stream of observations z 1:t and action data u 1:t Sensor/measurement

More information

STATISTICS 3A03. Applied Regression Analysis with SAS. Angelo J. Canty

STATISTICS 3A03. Applied Regression Analysis with SAS. Angelo J. Canty STATISTICS 3A03 Applied Regression Analysis with SAS Angelo J. Canty Office : Hamilton Hall 209 Phone : (905) 525-9140 extn 27079 E-mail : cantya@mcmaster.ca SAS Labs : L1 Friday 11:30 in BSB 249 L2 Tuesday

More information

Introduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak

Introduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak Introduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak 1 Introduction. Random variables During the course we are interested in reasoning about considered phenomenon. In other words,

More information

COMP2610/COMP Information Theory

COMP2610/COMP Information Theory COMP2610/COMP6261 - Information Theory Lecture 9: Probabilistic Inequalities Mark Reid and Aditya Menon Research School of Computer Science The Australian National University August 19th, 2014 Mark Reid

More information

Midterm Examination. STA 215: Statistical Inference. Due Wednesday, 2006 Mar 8, 1:15 pm

Midterm Examination. STA 215: Statistical Inference. Due Wednesday, 2006 Mar 8, 1:15 pm Midterm Examination STA 215: Statistical Inference Due Wednesday, 2006 Mar 8, 1:15 pm This is an open-book take-home examination. You may work on it during any consecutive 24-hour period you like; please

More information

Simple Linear Regression for the MPG Data

Simple Linear Regression for the MPG Data Simple Linear Regression for the MPG Data 2000 2500 3000 3500 15 20 25 30 35 40 45 Wgt MPG What do we do with the data? y i = MPG of i th car x i = Weight of i th car i =1,...,n n = Sample Size Exploratory

More information

Chapter 1 Statistical Inference

Chapter 1 Statistical Inference Chapter 1 Statistical Inference causal inference To infer causality, you need a randomized experiment (or a huge observational study and lots of outside information). inference to populations Generalizations

More information

Probability and Statistics Prof. Dr. Somesh Kumar Department of Mathematics Indian Institute of Technology, Kharagpur

Probability and Statistics Prof. Dr. Somesh Kumar Department of Mathematics Indian Institute of Technology, Kharagpur Probability and Statistics Prof. Dr. Somesh Kumar Department of Mathematics Indian Institute of Technology, Kharagpur Module No. #01 Lecture No. #27 Estimation-I Today, I will introduce the problem of

More information

Sample Problems for the Final Exam

Sample Problems for the Final Exam Sample Problems for the Final Exam 1. Hydraulic landing assemblies coming from an aircraft rework facility are each inspected for defects. Historical records indicate that 8% have defects in shafts only,

More information

Some slides from Carlos Guestrin, Luke Zettlemoyer & K Gajos 2

Some slides from Carlos Guestrin, Luke Zettlemoyer & K Gajos 2 Logistics CSE 446: Point Estimation Winter 2012 PS2 out shortly Dan Weld Some slides from Carlos Guestrin, Luke Zettlemoyer & K Gajos 2 Last Time Random variables, distributions Marginal, joint & conditional

More information

Review of Probabilities and Basic Statistics

Review of Probabilities and Basic Statistics Alex Smola Barnabas Poczos TA: Ina Fiterau 4 th year PhD student MLD Review of Probabilities and Basic Statistics 10-701 Recitations 1/25/2013 Recitation 1: Statistics Intro 1 Overview Introduction to

More information

2. Suppose (X, Y ) is a pair of random variables uniformly distributed over the triangle with vertices (0, 0), (2, 0), (2, 1).

2. Suppose (X, Y ) is a pair of random variables uniformly distributed over the triangle with vertices (0, 0), (2, 0), (2, 1). Name M362K Final Exam Instructions: Show all of your work. You do not have to simplify your answers. No calculators allowed. There is a table of formulae on the last page. 1. Suppose X 1,..., X 1 are independent

More information

Statistical Distribution Assumptions of General Linear Models

Statistical Distribution Assumptions of General Linear Models Statistical Distribution Assumptions of General Linear Models Applied Multilevel Models for Cross Sectional Data Lecture 4 ICPSR Summer Workshop University of Colorado Boulder Lecture 4: Statistical Distributions

More information

Cheng Soon Ong & Christian Walder. Canberra February June 2018

Cheng Soon Ong & Christian Walder. Canberra February June 2018 Cheng Soon Ong & Christian Walder Research Group and College of Engineering and Computer Science Canberra February June 2018 (Many figures from C. M. Bishop, "Pattern Recognition and ") 1of 143 Part IV

More information

Fall 07 ISQS 6348 Midterm Solutions

Fall 07 ISQS 6348 Midterm Solutions Fall 07 ISQS 648 Midterm Solutions Instructions: Open notes, no books. Points out of 00 in parentheses. 1. A random vector X = 4 X 1 X X has the following mean vector and covariance matrix: E(X) = 4 1

More information

Simulation. Where real stuff starts

Simulation. Where real stuff starts 1 Simulation Where real stuff starts ToC 1. What is a simulation? 2. Accuracy of output 3. Random Number Generators 4. How to sample 5. Monte Carlo 6. Bootstrap 2 1. What is a simulation? 3 What is a simulation?

More information

Chapter 16. Simple Linear Regression and Correlation

Chapter 16. Simple Linear Regression and Correlation Chapter 16 Simple Linear Regression and Correlation 16.1 Regression Analysis Our problem objective is to analyze the relationship between interval variables; regression analysis is the first tool we will

More information

Introduction to Econometrics

Introduction to Econometrics Introduction to Econometrics T H I R D E D I T I O N Global Edition James H. Stock Harvard University Mark W. Watson Princeton University Boston Columbus Indianapolis New York San Francisco Upper Saddle

More information

Bayesian inference for sample surveys. Roderick Little Module 2: Bayesian models for simple random samples

Bayesian inference for sample surveys. Roderick Little Module 2: Bayesian models for simple random samples Bayesian inference for sample surveys Roderick Little Module : Bayesian models for simple random samples Superpopulation Modeling: Estimating parameters Various principles: least squares, method of moments,

More information

Twelfth Problem Assignment

Twelfth Problem Assignment EECS 401 Not Graded PROBLEM 1 Let X 1, X 2,... be a sequence of independent random variables that are uniformly distributed between 0 and 1. Consider a sequence defined by (a) Y n = max(x 1, X 2,..., X

More information

EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix)

EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix) 1 EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix) Taisuke Otsu London School of Economics Summer 2018 A.1. Summation operator (Wooldridge, App. A.1) 2 3 Summation operator For

More information

Bayesian Methods for Machine Learning

Bayesian Methods for Machine Learning Bayesian Methods for Machine Learning CS 584: Big Data Analytics Material adapted from Radford Neal s tutorial (http://ftp.cs.utoronto.ca/pub/radford/bayes-tut.pdf), Zoubin Ghahramni (http://hunch.net/~coms-4771/zoubin_ghahramani_bayesian_learning.pdf),

More information

STAT 4385 Topic 01: Introduction & Review

STAT 4385 Topic 01: Introduction & Review STAT 4385 Topic 01: Introduction & Review Xiaogang Su, Ph.D. Department of Mathematical Science University of Texas at El Paso xsu@utep.edu Spring, 2016 Outline Welcome What is Regression Analysis? Basics

More information

Statistics Ph.D. Qualifying Exam: Part I October 18, 2003

Statistics Ph.D. Qualifying Exam: Part I October 18, 2003 Statistics Ph.D. Qualifying Exam: Part I October 18, 2003 Student Name: 1. Answer 8 out of 12 problems. Mark the problems you selected in the following table. 1 2 3 4 5 6 7 8 9 10 11 12 2. Write your answer

More information

IE 230 Probability & Statistics in Engineering I. Closed book and notes. 120 minutes.

IE 230 Probability & Statistics in Engineering I. Closed book and notes. 120 minutes. Closed book and notes. 10 minutes. Two summary tables from the concise notes are attached: Discrete distributions and continuous distributions. Eight Pages. Score _ Final Exam, Fall 1999 Cover Sheet, Page

More information

Probabilistic classification CE-717: Machine Learning Sharif University of Technology. M. Soleymani Fall 2016

Probabilistic classification CE-717: Machine Learning Sharif University of Technology. M. Soleymani Fall 2016 Probabilistic classification CE-717: Machine Learning Sharif University of Technology M. Soleymani Fall 2016 Topics Probabilistic approach Bayes decision theory Generative models Gaussian Bayes classifier

More information

Chapter 16. Simple Linear Regression and dcorrelation

Chapter 16. Simple Linear Regression and dcorrelation Chapter 16 Simple Linear Regression and dcorrelation 16.1 Regression Analysis Our problem objective is to analyze the relationship between interval variables; regression analysis is the first tool we will

More information

Stat 5101 Lecture Notes

Stat 5101 Lecture Notes Stat 5101 Lecture Notes Charles J. Geyer Copyright 1998, 1999, 2000, 2001 by Charles J. Geyer May 7, 2001 ii Stat 5101 (Geyer) Course Notes Contents 1 Random Variables and Change of Variables 1 1.1 Random

More information

Midterm. Introduction to Machine Learning. CS 189 Spring Please do not open the exam before you are instructed to do so.

Midterm. Introduction to Machine Learning. CS 189 Spring Please do not open the exam before you are instructed to do so. CS 89 Spring 07 Introduction to Machine Learning Midterm Please do not open the exam before you are instructed to do so. The exam is closed book, closed notes except your one-page cheat sheet. Electronic

More information

Lecture 1: Bayesian Framework Basics

Lecture 1: Bayesian Framework Basics Lecture 1: Bayesian Framework Basics Melih Kandemir melih.kandemir@iwr.uni-heidelberg.de April 21, 2014 What is this course about? Building Bayesian machine learning models Performing the inference of

More information