PH 425 Quantum Measurement and Spin Winter SPINS Lab 1


1. Measure the spin projection S_z along the z-axis

This is the experiment that is ready to go when you start the program, as shown below. Each atom is measured to have spin up or spin down, denoted by the arrows and by the + and − symbols (we will explain the symbols in more detail later) in the figure below. The measured spin projections for these cases are S_z = ±ℏ/2. Run the experiment by selecting Do 1 (ctrl-1) under the Control menu, which sends one atom through the apparatus. Do this repeatedly so you can see the inherent randomness in the measurement process. Try running the experiment continuously (Go) and using the other fixed numbers (10, 100, 1000, 10000).

[Figure: oven and Z analyzer with spin-up (+) and spin-down (−) counters]

2. From the above experiments, and from what we have said in class, you will have surmised that the probability for a spin-up measurement is P = 1/2, with the probability for spin down being (1 − P) = 1/2. How can we be certain of this? Let's do a series of experiments and examine the statistics of the data (see appendix for information about statistics). Reset the counters and run the experiment 100 times (ctrl-3). Record the number of counts in the spin-up detector in the table below. Repeat this 10 times to fill up the table (I have already done the 10 atom case). Now put the numbers into your calculator and find the mean x̄ and standard deviation s of your data, and the standard deviation of the mean σ_m. Then calculate the experimental estimate of the probability P, its uncertainty σ_P, and the relative uncertainty σ_P/P. Do again for the 1000 and 10000 atom cases. Are you convinced that P = 1/2? How confident are you?
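The analysis asked for in #2 can be sketched in a few lines of Python (not part of the lab; the simulated counts, with M = 100 atoms per run and N = 10 runs, are assumptions for illustration):

```python
import math
import random

random.seed(1)  # fixed seed so the illustration is reproducible

M = 100  # atoms sent through the analyzer in each run
N = 10   # number of repeated runs

# Simulate N runs, each counting spin-up results out of M atoms with P = 1/2
counts = [sum(random.random() < 0.5 for _ in range(M)) for _ in range(N)]

xbar = sum(counts) / N                                         # mean
s = math.sqrt(sum((c - xbar) ** 2 for c in counts) / (N - 1))  # standard deviation
sigma_m = s / math.sqrt(N)                                     # std. deviation of the mean

P_est = xbar / M       # experimental estimate of the probability
sigma_P = sigma_m / M  # its uncertainty
rel = sigma_P / P_est  # relative uncertainty
```

Replacing the simulated counts with a column you record in the table gives the same x̄, s, σ_m, P, and σ_P the lab asks you to compute by calculator.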

No. of Atoms (M)    10                             100    1000    10000
Data (N = 10)       7, 5, 5, 8, 5, 4, 8, 2, 7, 8
x̄                   5.9
s                   2.02
σ_m                 0.64
P                   0.59
σ_P                 0.064
σ_P/P               0.11

3. Now set up an experiment to measure the spin projection S_z along the z-axis twice in succession, as shown below. You need an extra analyzer and another counter (see the SPINS notes for help). Run the experiment and note the results. Focus your attention on the second analyzer. The input state is denoted |+⟩ and there are two possible output states |+⟩ and |−⟩. What is the probability that an atom entering the second analyzer (state |in⟩ = |+⟩) exits the spin up port (state |out⟩ = |+⟩) of the second analyzer? This probability is denoted in general as P(out) = |⟨out|in⟩|², and in this specific case as P(+) = |⟨+|+⟩|². What is the probability of exiting the spin down port (state |−⟩)? What conclusions can you draw from the measurements performed in this experiment?

[Figure: two Z analyzers in series, with the spin-up output of the first analyzer used as input to the second]
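As a check on what the counters should show, here is a minimal sketch (not part of the lab) of the rule P(out) = |⟨out|in⟩|² applied to the repeated S_z measurement, with the states stored as two-component vectors in the z-basis:

```python
def inner(bra, ket):
    """<bra|ket> for states stored as lists of complex amplitudes."""
    return sum(b.conjugate() * k for b, k in zip(bra, ket))

def prob(out, inp):
    """P(out) = |<out|in>|^2"""
    return abs(inner(out, inp)) ** 2

plus_z = [1 + 0j, 0 + 0j]   # |+>
minus_z = [0 + 0j, 1 + 0j]  # |->

p_up = prob(plus_z, plus_z)     # spin-up port of the second analyzer
p_down = prob(minus_z, plus_z)  # spin-down port of the second analyzer
```

An atom that leaves the first analyzer in |+⟩ always exits the spin-up port of a second Z analyzer (p_up = 1, p_down = 0), which is what the measurement should confirm.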

4. Using the same apparatus as above (#3), change the orientation directions of the analyzers. You can choose directions X, Y, or Z, which are oriented along the usual xyz-axes of a Cartesian coordinate system (ignore the fourth direction n̂ for now). When a direction other than Z is chosen, we use a subscript to distinguish the output states (e.g., |+⟩_y). If we allow ourselves to also use the spin down port of the first analyzer as input to the second analyzer (not both up and down at the same time), then there are six possible input states and six possible output states for the second analyzer, which are listed in the table below. Your task is to measure the probabilities P(out) = |⟨out|in⟩|² corresponding to these input and output states. Remember that this is the probability that an atom leaving the first analyzer also makes it through the second analyzer to the appropriate detector, and not the total probability for getting from the oven to the detector. The experiment performed in #3 above (with both analyzers along the z-axis) gave the result |⟨+|+⟩|² = 1, which is already entered in the table. Now do all other possible combinations and fill in the rest of the table.

|⟨out|in⟩|²    |+⟩    |−⟩    |+⟩_x    |−⟩_x    |+⟩_y    |−⟩_y
⟨+|             1
⟨−|
⟨+|_x
⟨−|_x
⟨+|_y
⟨−|_y
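After you have filled in the table experimentally, you can compare it against the quantum-mechanical prediction. The sketch below (not part of the lab) uses the standard x- and y-eigenstates written in the z-basis, |±⟩_x = (|+⟩ ± |−⟩)/√2 and |±⟩_y = (|+⟩ ± i|−⟩)/√2:

```python
import math

def inner(bra, ket):
    """<bra|ket> for states stored as lists of complex amplitudes."""
    return sum(b.conjugate() * k for b, k in zip(bra, ket))

def prob(out, inp):
    """P(out) = |<out|in>|^2"""
    return abs(inner(out, inp)) ** 2

r = 1 / math.sqrt(2)
states = {
    "+z": [1 + 0j, 0 + 0j],
    "-z": [0 + 0j, 1 + 0j],
    "+x": [r + 0j, r + 0j],    # (|+> + |->)/sqrt(2)
    "-x": [r + 0j, -r + 0j],   # (|+> - |->)/sqrt(2)
    "+y": [r + 0j, 1j * r],    # (|+> + i|->)/sqrt(2)
    "-y": [r + 0j, -1j * r],   # (|+> - i|->)/sqrt(2)
}

# 6x6 table of probabilities |<out|in>|^2, rounded for display
table = {(out, inp): round(prob(s_out, s_inp), 3)
         for out, s_out in states.items()
         for inp, s_inp in states.items()}
```

Every entry comes out 1, 0, or 1/2: a second analyzer along the same axis reproduces the input with certainty, while analyzers along perpendicular axes give 50/50 results.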

Appendix A: Statistics information

As you saw in the experiments, the arrival of an atom at a measurement counter is a random process. We would like to use the results of the experiments to determine the probability P that governs that random process. In the cases where all the atoms exit one port, it is clear that the probability is 1 for that output state and zero for the other. However, if we measure 3 spin up atoms and 7 spin down atoms, then we must apply statistical analysis to help us solve the problem. Of course, those results would lead you to conclude that the probability of spin up is P(+) = 0.3 and the probability of spin down is P(−) = 0.7. However, if you performed the experiment a second time and counted 4 spin-up atoms and 6 spin-down atoms, then you would want to revise your estimates. The questions we thus wish to address are: What is the best estimate of the probability, given the experimental data, and how confident are we of that estimate?

To answer these questions, let's first discuss what results we expect to obtain if we know the probability. Assume that a random process is governed by a probability P, and that each event is independent of all other events. Now assume that we have M of these events and we count the number of successes (e.g., spin-up atoms), which we call n. The probability that we count n spin up atoms out of M total atoms is determined by the binomial probability distribution, and is given by

    f_M(n) = [M! / (n! (M − n)!)] P^n (1 − P)^(M − n)

This probability distribution is shown in Fig. A1 for the case M = 10 and P = 0.5.

[Figure A1: Binomial distribution f(n) for 10 events, peaked at n = 5]

Thus, for

example, you expect to count 3 spin-up atoms about 12% of the time (f(3) ≈ 0.12) and 5 spin up atoms 25% of the time (f(5) ≈ 0.25) in this case. The most obvious conclusion is that one single measurement of 10 atoms is not too reliable a predictor of the probability P that an atom is measured to have spin up. To reliably predict the probability we must perform repeated experiments and produce an experimental histogram of the data akin to the plot in Fig. A1. From the statistical properties of the histogram we can then estimate the probability and determine an error or uncertainty in that probability.

We generally characterize a probability distribution by 2 quantities: (1) the average or mean or expectation value, which is denoted by ⟨n⟩ or n̄, and (2) the standard deviation σ, which is the square root of the variance σ². The mean tells you where the distribution is centered, and the standard deviation tells you about the width of the distribution. The mean is obtained as a weighted average of the possible results:

    ⟨n⟩ = Σ_n n f(n),

where f(n) is the probability of recording n counts. The variance is defined as

    σ² = Σ_n (n − ⟨n⟩)² f(n).

For the binomial distribution, the mean is

    ⟨n⟩ = MP,

and the standard deviation is

    σ = √(MP(1 − P)).

Experimental data is also commonly characterized by these two quantities. Consider an experiment where a variable x is measured N times to yield a data set x_i. The mean x̄ (or average value) of this data is

    x̄ = (1/N) Σ_{i=1}^{N} x_i.

The standard deviation s of the data is

    s = √[ (1/(N − 1)) Σ_{i=1}^{N} (x_i − x̄)² ] = √[ (1/(N − 1)) ( Σ_{i=1}^{N} x_i² − N x̄² ) ].

To connect this firmly to our experiments, assume that the variable x represents the number of times a certain result was obtained in M tries (e.g., M atoms leave the oven and we measure how many end up as spin up). You would thus expect (and it is true) that the best experimental estimates of the parameters ⟨n⟩ and σ of the theoretical distribution are the experimental parameters x̄ and s. Thus the experimental estimate of the probability of obtaining the desired result (e.g., the spin-up result) is

    P = x̄/M.

What then is our uncertainty in this estimate? The first guess is to use the standard deviation of the data (divided by M to get a probability), since it is an estimate of the standard deviation of the theoretical probability distribution. However, this is not correct. The standard deviation of the data (and of the theoretical probability distribution) tells us how the data are distributed about the mean. The best estimate of the uncertainty of the mean, often called the standard deviation of the mean, is

    σ_m = s/√N,

which, as you might expect, tells us that we get a better estimate of the mean if we repeat the experiment more times.

A simple example may help to make this all more concrete. Consider an experiment where 10 coins (M) are flipped and the number of heads (x) is counted, and the experiment is repeated 100 times (N). Figure A2 represents data from the experiment. The bars of the histogram tell us how many times a given number of heads occurred. The solid circles (connected by a solid line only as a guide to the eye) are the expected values given that the probability of a head is 1/2; this is just the binomial distribution shown in Fig. A1. The data have a mean of 5.42, with a standard deviation of 1.7, which you can see gives a measure of the width of the distribution of measurements but is much larger than what you might guess is the uncertainty of the mean value. (Note that if we do more experiments (increase N), the standard deviation s will not decrease, but we expect our uncertainty in the mean (i.e., the standard deviation of the mean) to decrease.)
From this data we would estimate the probability P of a head and its uncertainty σ_P to be

[Figure A2: Experimental histogram of flipping 10 coins 100 times (number of occurrences vs. number of heads)]

    P = x̄/M = 5.42/10 = 0.542

    σ_P = σ_m/M = s/(M√N) = 1.7/(10√100) = 0.017

Note that the uncertainty is about 3% of the value of the probability. This is a common result in statistics: if you measure something N times, you can generally determine it with a precision of 1/√N. We already saw this in the standard deviation of the mean. In our counting experiments here, we are actually counting NM atoms, and it shouldn't matter whether we measure them as N groups of M or M groups of N, or any other combination; it's all the same data. This is evident if we recall that the standard deviation of the probability distribution scales as √M. Thus we expect the uncertainty in the probability to scale like:

    σ_P = σ_m/M = s/(M√N) ∝ √M/(M√N) = 1/√(NM).
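The claim that only the total count NM matters can be checked numerically. The sketch below (not part of the lab; the seed and group sizes are chosen for illustration) analyzes the same budget of 1000 simulated coin flips grouped two ways and compares σ_P to the predicted √(P(1 − P)/NM):

```python
import math
import random

random.seed(2)  # fixed seed so the illustration is reproducible

def estimate(n_groups, per_group, p=0.5):
    """Return (P estimate, sigma_P) from n_groups runs of per_group flips."""
    counts = [sum(random.random() < p for _ in range(per_group))
              for _ in range(n_groups)]
    xbar = sum(counts) / n_groups
    s = math.sqrt(sum((c - xbar) ** 2 for c in counts) / (n_groups - 1))
    sigma_m = s / math.sqrt(n_groups)  # standard deviation of the mean
    return xbar / per_group, sigma_m / per_group

# 1000 flips total, grouped as N = 100 runs of M = 10 or N = 10 runs of M = 100
P1, sigma_P1 = estimate(100, 10)
P2, sigma_P2 = estimate(10, 100)

theory = math.sqrt(0.5 * 0.5 / 1000)  # 1/sqrt(NM) scaling, about 0.016
```

Both groupings give P ≈ 0.5, with σ_P scattered around the ≈ 0.016 prediction within sampling fluctuations, illustrating that the uncertainty depends only on the product NM.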

In the coin tossing example above, NM = 1000 flips, so 1/√(NM) ≈ 3%. In the 1000 atom case shown in #2 of the lab above, NM = 10,000 atoms, so 1/√(NM) = 1%. Note that the experimental estimate of the probability in the coin tossing example above differs from what we know the real value to be by about 2.5 times the standard deviation. This is only expected to happen about 1.5% of the time, but it can happen. We expect our results to be within one standard deviation 68% of the time and within 2 standard deviations 95% of the time.
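The numbers quoted in this appendix are easy to verify. The sketch below (not part of the lab) evaluates the binomial distribution for M = 10 and P = 1/2 and checks the mean, the standard deviation, and how much probability lies within one standard deviation of the mean:

```python
from math import comb, sqrt

M, P = 10, 0.5  # the case plotted in Fig. A1

def f(n):
    """Binomial probability of n spin-up atoms out of M."""
    return comb(M, n) * P**n * (1 - P)**(M - n)

f3, f5 = f(3), f(5)  # about 0.12 and 0.25, as quoted above

mean = sum(n * f(n) for n in range(M + 1))               # = M*P = 5.0
var = sum((n - mean) ** 2 * f(n) for n in range(M + 1))  # = M*P*(1-P) = 2.5
sigma = sqrt(var)                                        # about 1.58

# Total probability of a count within one standard deviation of the mean
within_1sigma = sum(f(n) for n in range(M + 1) if abs(n - mean) <= sigma)
```

For this small discrete distribution about 66% of the probability lies within one standard deviation of the mean, already close to the 68% quoted for the large-sample (Gaussian) limit.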