CSE 312 Final Review: Section AA
CSE 312 TAs, December 8, 2011
General Information
- The final is comprehensive: pre-midterm material is fair game.
- It is heavily weighted toward material covered after the midterm.
Pre-Midterm Material
- Basic counting principles
- Pigeonhole Principle
- Inclusion-Exclusion
- Counting the complement
- Using symmetry
- Conditional probability: P(A|B) = P(AB) / P(B)
- Law of Total Probability: P(A) = P(A|B) P(B) + P(A|Bᶜ) P(Bᶜ)
- Bayes' Theorem: P(A|B) = P(B|A) P(A) / P(B) (a small numeric sketch appears below)
- Network failure questions
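The last three formulas can be sanity-checked numerically. Below is a minimal sketch (not from the course materials) that applies the Law of Total Probability and Bayes' Theorem to a hypothetical disease-testing scenario; the numbers (1% prevalence, 95% sensitivity, 90% specificity) are made up purely for illustration.

```python
# Hypothetical numbers, chosen only to exercise the formulas above.
p_disease = 0.01          # P(D): prevalence
p_pos_given_d = 0.95      # P(+|D): sensitivity
p_pos_given_not_d = 0.10  # P(+|not D): 1 - specificity

# Law of Total Probability: P(+) = P(+|D) P(D) + P(+|not D) P(not D)
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)

# Bayes' Theorem: P(D|+) = P(+|D) P(D) / P(+)
p_d_given_pos = p_pos_given_d * p_disease / p_pos

print(f"P(+)   = {p_pos:.4f}")          # 0.1085
print(f"P(D|+) = {p_d_given_pos:.4f}")  # about 0.0876
```

The punchline of this kind of example is that even with a fairly accurate test, a positive result on a rare condition still leaves P(D|+) small, because the prior P(D) dominates.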
Pre-Midterm Material: Independence
- E and F are independent if P(EF) = P(E) P(F).
- Equivalently, E and F are independent if P(E|F) = P(E) and P(F|E) = P(F).
- Events E_1, ..., E_n are independent if, for every subset S of the events, P(∩_{i∈S} E_i) = ∏_{i∈S} P(E_i).
- Biased coin example from Lecture 5, slide 7.
- If E and F are independent and G is an arbitrary event, then in general P(EF|G) ≠ P(E|G) P(F|G).
- For a given G, equality in the statement above means that E and F are conditionally independent given G (see the enumeration sketch below).
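One concrete illustration of the last two bullets (my own example, not the biased-coin example from lecture): flip two fair coins, let E = "first flip is heads", F = "second flip is heads", and G = "the two flips match". E and F are independent, but they are not conditionally independent given G. A short exact enumeration:

```python
from itertools import product
from fractions import Fraction

# Sample space: two independent fair coin flips; each outcome has probability 1/4.
outcomes = set(product("HT", repeat=2))
p_outcome = Fraction(1, 4)

def prob(event):
    """Probability of an event given as a set of equally likely outcomes."""
    return p_outcome * len(event)

E = {o for o in outcomes if o[0] == "H"}   # first flip heads
F = {o for o in outcomes if o[1] == "H"}   # second flip heads
G = {o for o in outcomes if o[0] == o[1]}  # the two flips match

# Unconditional independence holds: P(EF) == P(E) P(F)
print(prob(E & F) == prob(E) * prob(F))          # True

# Conditional independence given G fails: P(EF|G) != P(E|G) P(F|G)
p_EF_given_G = prob(E & F & G) / prob(G)         # = 1/2
p_E_given_G = prob(E & G) / prob(G)              # = 1/2
p_F_given_G = prob(F & G) / prob(G)              # = 1/2
print(p_EF_given_G, p_E_given_G * p_F_given_G)   # 1/2 vs 1/4
```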
Distributions
- Know the mean and variance for each of the following (a quick numeric check appears below):
  - Uniform distribution
  - Normal distribution
  - Geometric distribution
  - Binomial distribution
  - Poisson distribution
  - Hypergeometric distribution
- Remember linearity of expectation and other useful facts (e.g., Var[aX + b] = a² Var[X]; in general Var[X + Y] ≠ Var[X] + Var[Y], although equality does hold when X and Y are independent).
- Remember: if X is a continuous random variable, then P(X = a) = 0 for any single point a.
- For continuous random variables, expectation is an integral: E[X] = ∫ x f(x) dx.
- Use the normal approximation when applicable.
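If scipy is available, the closed-form means and variances can be checked against scipy.stats; the sketch below uses arbitrary parameter values and assumes scipy's conventions (in particular, scipy's geometric distribution counts the number of trials up to and including the first success, so its mean is 1/p).

```python
from scipy import stats

# Arbitrary parameters, chosen only to compare closed forms with scipy.stats.
n, p, lam = 10, 0.3, 4.0
M, K, N = 20, 7, 12  # hypergeometric: population M, K successes, N draws

checks = [
    ("Uniform(0,1)",    stats.uniform(0, 1),      0.5,     1 / 12),
    ("Normal(2, sd=3)", stats.norm(2, 3),         2.0,     9.0),
    ("Geometric(p)",    stats.geom(p),            1 / p,   (1 - p) / p ** 2),
    ("Binomial(n,p)",   stats.binom(n, p),        n * p,   n * p * (1 - p)),
    ("Poisson(lam)",    stats.poisson(lam),       lam,     lam),
    ("Hypergeometric",  stats.hypergeom(M, K, N),
     N * K / M,         N * (K / M) * (1 - K / M) * (M - N) / (M - 1)),
]

for name, dist, mean_formula, var_formula in checks:
    print(f"{name:16s} mean {dist.mean():8.4f} = {mean_formula:8.4f}"
          f"   var {dist.var():8.4f} = {var_formula:8.4f}")
```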
Central Limit Theorem
- Central Limit Theorem: consider i.i.d. (independent, identically distributed) random variables X_1, X_2, ..., where each X_i has mean µ = E[X_i] and variance σ² = Var[X_i]. Then, as n → ∞,
  (X_1 + ... + X_n − nµ) / (σ√n) → N(0, 1).
- Alternatively, the sample mean X̄ = (1/n) Σ_{i=1}^{n} X_i is approximately N(µ, σ²/n).
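A quick way to see the convergence is by simulation. The sketch below (my own illustration, assuming numpy; not from the lecture materials) standardizes sums of n i.i.d. Exponential(1) draws: the coverage of the interval ±1.96 approaches about 95% and the sample skewness shrinks toward 0 as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 1.0           # Exponential(1): mean 1, variance 1
trials = 50_000

for n in (1, 5, 30, 200):
    X = rng.exponential(1.0, size=(trials, n))
    Z = (X.sum(axis=1) - n * mu) / (sigma * np.sqrt(n))   # standardized sum
    within = np.mean(np.abs(Z) <= 1.96)                   # N(0,1) gives ~0.95
    skew = np.mean(Z ** 3)                                # N(0,1) gives 0; here ~ 2/sqrt(n)
    print(f"n={n:3d}  P(|Z| <= 1.96) ~ {within:.3f}   sample skewness ~ {skew:.2f}")
```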
Tail Bounds
- Markov's Inequality: if X is a non-negative random variable, then for every α > 0,
  P(X ≥ α) ≤ E[X] / α.
- Corollary: P(X ≥ α E[X]) ≤ 1/α.
- Chebyshev's Inequality: if Y is an arbitrary random variable with E[Y] = µ, then for any α > 0,
  P(|Y − µ| ≥ α) ≤ Var[Y] / α².
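As a rough sanity check (my own sketch, not from lecture, assuming numpy), the two bounds can be compared with simulated tail probabilities for a nonnegative random variable. To bound P(X ≥ α) with Chebyshev we apply it with deviation α − µ, so the comparison below is only meaningful for α > µ.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.exponential(1.0, size=1_000_000)   # nonnegative, E[X] = 1, Var[X] = 1
mu, var = 1.0, 1.0

for alpha in (2.0, 4.0, 8.0):
    empirical = np.mean(X >= alpha)
    markov = mu / alpha                     # P(X >= alpha) <= E[X] / alpha
    chebyshev = var / (alpha - mu) ** 2     # P(X >= alpha) <= P(|X - mu| >= alpha - mu)
    print(f"alpha={alpha}:  empirical {empirical:.5f}"
          f"   Markov <= {markov:.3f}   Chebyshev <= {chebyshev:.4f}")
```

Both bounds hold but are loose for this distribution; Chebyshev improves on Markov once α is a few standard deviations above the mean because it uses the variance as well as the mean.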
Tail Bounds (continued)
- Chernoff Bounds: suppose X is drawn from Bin(n, p) and µ = E[X] = pn. Then, for any 0 < δ < 1,
  P(X > (1 + δ)µ) ≤ e^(−δ²µ/3)   and   P(X < (1 − δ)µ) ≤ e^(−δ²µ/2).
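A hedged numeric check (assuming scipy is available; parameters are arbitrary, and the exponent constant 3 is the standard upper-tail form stated above) compares the Chernoff upper-tail bound with the exact binomial tail:

```python
from math import exp
from scipy.stats import binom

n, p = 1000, 0.5
mu = n * p

for delta in (0.05, 0.1, 0.2):
    exact = binom.sf((1 + delta) * mu, n, p)   # exact P(X > (1 + delta) * mu)
    chernoff = exp(-delta ** 2 * mu / 3)       # upper-tail Chernoff bound
    print(f"delta={delta}:  exact {exact:.2e}   Chernoff bound {chernoff:.2e}")
```

The bound is valid (it always sits above the exact tail) but can be orders of magnitude larger; its value is that it decays exponentially in µ without requiring any exact computation.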
Law of Large Numbers
- Weak Law of Large Numbers: let X̄ be the empirical mean of i.i.d. random variables X_1, ..., X_n with mean µ. For any ɛ > 0, as n → ∞,
  Pr(|X̄ − µ| > ɛ) → 0.
- Strong Law of Large Numbers: under the same hypotheses,
  Pr(lim_{n→∞} (X_1 + ... + X_n) / n = µ) = 1.
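The weak law is also easy to watch numerically (a small sketch of my own, assuming numpy): the fraction of runs whose empirical mean lands more than ɛ away from µ shrinks toward 0 as n grows.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, eps, runs = 0.5, 0.05, 2_000    # Bernoulli(0.5) samples, tolerance eps

for n in (10, 100, 1_000, 10_000):
    samples = rng.random((runs, n)) < 0.5      # Bernoulli(0.5) draws
    xbar = samples.mean(axis=1)                # empirical mean of each run
    frac_far = np.mean(np.abs(xbar - mu) > eps)
    print(f"n={n:6d}:  P(|Xbar - mu| > eps) ~ {frac_far:.4f}")
```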
Random Facts
- If X and Y are random variables from the same distribution, then Z = X + Y isn't necessarily from the same distribution as X and Y (for example, the sum of two independent uniforms is not uniform).
- If X and Y are independent and both normal, then so is X + Y.
MLEs
- Write an expression for the likelihood.
- Convert this into the log-likelihood.
- Take derivatives to find the maximum.
- Verify that this is indeed a maximum.
- See Lecture 11 for worked examples (a small additional sketch appears below).
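As one small illustration of the recipe (not one of the Lecture 11 examples), here is the MLE for the parameter p of i.i.d. Bernoulli(p) observations, checked numerically against the closed form p̂ = (number of successes)/n:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.random(500) < 0.3            # 500 Bernoulli(0.3) observations
k, n = data.sum(), data.size            # k successes out of n

def log_likelihood(p):
    # log L(p) = k log p + (n - k) log(1 - p); maximizing log L is equivalent
    # to maximizing L because log is monotone increasing.
    return k * np.log(p) + (n - k) * np.log(1 - p)

# Setting d/dp log L = k/p - (n - k)/(1 - p) = 0 gives p_hat = k/n,
# and the second derivative is negative there, so it is a maximum.
grid = np.linspace(0.001, 0.999, 9_999)
p_numeric = grid[np.argmax(log_likelihood(grid))]
print(f"closed form k/n = {k / n:.4f}, numeric argmax = {p_numeric:.4f}")
```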
Expectation-Maximization
- E-step: compute the expected (complete-data) log-likelihood, i.e., the expected values of the hidden variables, using the current parameters.
- M-step: maximize that expected log-likelihood by updating the parameters.
- Iterate the two steps until convergence (see the sketch below).
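To make the two steps concrete, here is a minimal EM sketch of my own (not the course's code, assuming numpy) for a mixture of two 1-D Gaussians with known unit variance. The responsibilities computed in the E-step are the expected hidden component assignments; the M-step re-estimates the mixing weight and the two means.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic data: mixture of N(-2, 1) and N(3, 1) with weights 0.4 / 0.6.
data = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 1, 600)])

def normal_pdf(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)   # sigma fixed at 1

w, mu1, mu2 = 0.5, -1.0, 1.0                 # initial parameter guesses
for step in range(50):
    # E-step: responsibility r_i = P(point i came from component 1 | current params)
    p1 = w * normal_pdf(data, mu1)
    p2 = (1 - w) * normal_pdf(data, mu2)
    r = p1 / (p1 + p2)

    # M-step: maximize the expected complete-data log-likelihood
    w = r.mean()
    mu1 = np.sum(r * data) / np.sum(r)
    mu2 = np.sum((1 - r) * data) / np.sum(1 - r)

print(f"estimated weight {w:.3f}, means {mu1:.3f}, {mu2:.3f}")
```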
Hypothesis Testing
- H_0 is the null hypothesis.
- H_1 is the alternative hypothesis.
- Likelihood Ratio = L_1 / L_0, where L_0 is the likelihood of the data under H_0 and L_1 is the likelihood under H_1.
- Saying that the alternative hypothesis is 5 times more likely than the null hypothesis means that L_1 / L_0 ≥ 5 (a small numeric example appears below).
- Decision rule: when to reject H_0.
- α = P(reject H_0 but H_0 was true) (a Type I error).
- β = P(accept H_0 but H_1 was true) (a Type II error).
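A small hedged example with made-up numbers (assuming scipy is available): suppose H_0 says a coin has p = 0.5, H_1 says p = 0.7, and we observe 65 heads in 100 flips, with a decision rule that rejects H_0 when the likelihood ratio is at least 5.

```python
from scipy.stats import binom

n, heads = 100, 65
L0 = binom.pmf(heads, n, 0.5)   # likelihood of the data under H0: p = 0.5
L1 = binom.pmf(heads, n, 0.7)   # likelihood of the data under H1: p = 0.7
ratio = L1 / L0

print(f"L0 = {L0:.4g}, L1 = {L1:.4g}, likelihood ratio L1/L0 = {ratio:.1f}")
print("reject H0" if ratio >= 5 else "do not reject H0")
```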
Algorithms
- Better algorithms usually trump better hardware, but both are needed for progress.
- Some problems cannot be solved at all (e.g., the Halting Problem).
- Other (intractable) problems cannot (yet?) be solved in a reasonable amount of time (e.g., integer factorization).
Sequence Alignment
- Brute-force solutions take at least (2n choose n) computations of the sequence score, which is exponential time.
- Dynamic programming decreases the computation to O(n²).
- IDEA: store prior computations so that future computations can do table look-ups.
- Backtrace algorithm: start at the bottom-right cell of the matrix and find which neighboring cells could have transitioned to the current cell under the cost function σ. Time bound: O(n²).
- See Lecture 15 for a worked example (a simplified code sketch appears below).
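For reference, here is a compact sketch of the DP idea. It uses a deliberately simple scoring function (match +1, mismatch and gap −1), which is an illustrative choice and not necessarily the σ used in lecture; it fills the O(n²) table and then backtraces from the bottom-right cell exactly as described above.

```python
def align(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment by dynamic programming with a simple scoring function."""
    n, m = len(a), len(b)
    # score[i][j] = best score aligning the prefixes a[:i] and b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)

    # Backtrace: start at the bottom-right cell and follow any neighboring cell
    # that could have produced the current cell's value.
    out_a, out_b = [], []
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (
                match if a[i - 1] == b[j - 1] else mismatch):
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i, j = i - 1, j - 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j - 1]); j -= 1
    return score[n][m], "".join(reversed(out_a)), "".join(reversed(out_b))

print(align("ACACACTA", "AGCACACA"))
```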
P vs. NP
- Some problems cannot be computed at all (e.g., the Halting Problem).
- Some problems can be computed but take a long time (e.g., SAT, 3-SAT, 3-coloring).
- P: a solution is computable in polynomial time.
- NP: a solution can be verified in polynomial time, given a hint that is polynomial in the input length (see the verifier sketch below).
- A ≤_p B means that if you have a fast algorithm for B, you have a fast algorithm for A.
- NP-complete: in NP and as hard as the hardest problem in NP.
- Any fast solution to an NP-complete problem would yield a fast solution to all problems in NP.
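The "verified in polynomial time given a hint" part is easy to see in code. The sketch below (my own example) checks a proposed 3-coloring of a graph in time linear in the number of vertices and edges, even though finding such a coloring is NP-complete in general.

```python
def is_valid_3_coloring(edges, coloring):
    """Verify a proposed 3-coloring (the 'hint') in O(V + E) time."""
    if any(c not in (0, 1, 2) for c in coloring.values()):
        return False
    return all(coloring[u] != coloring[v] for u, v in edges)

# A 4-cycle is 3-colorable (in fact 2-colorable); the first hint is a valid witness.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_valid_3_coloring(edges, {0: 0, 1: 1, 2: 0, 3: 1}))   # True
print(is_valid_3_coloring(edges, {0: 0, 1: 0, 2: 1, 3: 2}))   # False (edge 0-1 clashes)
```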