Muon Lifetime Notes

Estimating correction factors on the muon lifetime (Experiment Part I)

In the first part of this experiment, you will estimate the muon lifetime by counting by hand a certain number of muon events, and then taking the average of the individual muon lifetimes. There are three correction factors you should apply to this estimate. Note that when doing the second analysis method (fitting the exponential decay curve) you do not need to apply these corrections.

Setup:

Let the singles event rate be R. This is displayed on the scaler/counter on the instrument rack. If two singles events occur with a time spacing within a certain window, then the Smart Trigger on the oscilloscope tags this as a double event and sends it to the oscilloscope screen, along with a determination of the delay between the two pulses (and sends the data to the computer in the second part of the experiment). These doubles events are what we're interested in; the singles rate is just used for estimating background and accidental count levels.

The window we will use runs from t_0 = 200 ns to t_1 = 10 μs; any two singles occurring with a separation between t_0 and t_1 will be tagged as a double event. These parameters are adjustable in the Smart Trigger setup menu, but these values will suffice. Let the measured doubles rate be D, and let T be the numerical average of the delays between the events of each double.

Correction 1: Correct for accidental events ("background")

We may consider that doubles arise from two phenomena:

1. True doubles, caused by (a) a muon entering the scintillator and depositing its kinetic energy, visible as the first pulse, and (b) the decay of that muon into an electron/positron plus neutrinos, with the decay energy being deposited into the scintillator as the second pulse, and

2. Accidental doubles, which are caused by two unrelated events which happen to occur within the specified window.
The accidental doubles rate, D_acc, is the singles rate R times the probability of a second event occurring within the window:

    D_acc = R * P(2nd | 1st) = R * R(t_1 - t_0) = R^2 * Δt    (1)

where Δt = t_1 - t_0 is the width of the window. The true muon doubles rate, D_μ, is just the difference between the total doubles rate and the accidental doubles rate:

    D_μ = D - D_acc    (2)
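As a quick check on your arithmetic, Equations (1) and (2) can be evaluated in a few lines of Python. The rate values below are illustrative assumptions, not measurements; substitute your own readings:

```python
# A minimal sketch of Eqs. (1)-(2). The rates are placeholder values.
R = 10.0        # singles rate (Hz) -- assumed for illustration
D = 0.01        # measured doubles rate (Hz) -- assumed for illustration
t0 = 200e-9     # window start (s)
t1 = 10e-6      # window end (s)

dt = t1 - t0               # window width Delta t (s)
D_acc = R**2 * dt          # accidental doubles rate, Eq. (1)
D_mu = D - D_acc           # true muon doubles rate, Eq. (2)

print(f"D_acc = {D_acc:.2e} Hz, D_mu = {D_mu:.2e} Hz")
```

With these placeholder rates, roughly ten percent of the doubles would be accidentals; your own ratio D_acc/D is the answer to the question below.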
How large is the accidentals rate compared to the muon rate? In other words, are most of your doubles events coming from real muons, or are they coming from background (accidentals)? How would this be affected by a change in the window size? By a change in the voltage applied to the photomultiplier tube?

We may also determine the average time ("delay") for each type of event:

1. The total doubles rate is D, and we take a numerical average of the delays of each event to get the average time T.

2. The accidental events, we will assume, are uniformly distributed over the window, since they are comprised of two unrelated events that happen to occur within a certain time interval of each other. Thus the average delay between accidental events, denoted T_acc, is simply the midpoint of the window:

    T_acc = (t_0 + t_1) / 2    (3)

3. Finally, the average lifetime of muon events, T_μ, which is what we're interested in, can be determined using the definition of expectation value:

    D * T = D_acc * T_acc + D_μ * T_μ    (4)

Solve this equation for T_μ, and plug in the measured D and T, the calculated T_acc, the D_acc determined from the measured singles rate R, and D_μ = D - D_acc to find the background-corrected mean muon lifetime T_μ. How large was the correction (i.e., compared to T)? Did it bring you any closer to the accepted value?

Correction 2: Correction for finite window size ("bias")

The second correction you will apply takes into account the fact that you are only using events occurring within a finite window to estimate your muon lifetime. This is a biased data sample, which may or may not be accurately representative of all events. We are rejecting both those muons which decay very quickly and also extremely long-lived muons. These may both be rare, but because their lifetimes are so widely distributed, they have a disproportionate effect on the mean.
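Before turning to this correction, the final step of Correction 1, solving Equation (4) for T_μ, can be sketched numerically. All input numbers below are illustrative assumptions; substitute your own measured R, D, and T:

```python
# Sketch of the background correction: Eq. (4) solved for T_mu,
#   T_mu = (D*T - D_acc*T_acc) / D_mu.
# Every input is an assumed placeholder, not a measurement.
R = 10.0               # singles rate (Hz)
D = 0.01               # doubles rate (Hz)
T = 2.6e-6             # average delay of all doubles (s)
t0, t1 = 200e-9, 10e-6 # window bounds (s)

D_acc = R**2 * (t1 - t0)             # Eq. (1)
D_mu = D - D_acc                     # Eq. (2)
T_acc = 0.5 * (t0 + t1)              # Eq. (3): midpoint of the window
T_mu = (D * T - D_acc * T_acc) / D_mu  # Eq. (4) solved for T_mu

print(f"T_mu = {T_mu * 1e6:.2f} us")
```

Because the accidentals sit at an average delay near the window midpoint (5.1 μs here), well above the muon lifetime, removing them pulls the estimate downward, as it should.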
The minimum time t_0 is set primarily by the electronics: if a muon decays too quickly, the setup would not be able to distinguish between the entry and decay events, since they would appear as one. The maximum time t_1 is constrained by background: you can verify using Equation 1 that the accidental doubles rate increases linearly with the window size. But very few real muon events occur beyond 10 μs (10 μs is roughly 4.5 muon lifetimes, so only about e^(-4.5) ≈ 1% of muons will be left after 10 μs). The choice of window is rather judicious in that it includes the vast majority of muons without also allowing in a huge amount of background, and it is roughly symmetrically positioned about the mean lifetime, so that the neglect of short-lived and long-lived muons at least partially cancels out. But there still needs to be a correction.

(A real-world example of a biased data set: The majority of extrasolar planets discovered so far have been so-called "hot Jupiters," named because they are quite massive and orbit very close to their parent star (closer than Mercury is to the Sun). But this is not necessarily representative of solar systems at large: our sample is strongly biased towards hot Jupiters
since the dominant method used, detection of periodic Doppler shifts of the parent star, is strongly biased towards them. Large planets cause larger Doppler shifts of their parent star, so, all else being equal, they are easier to detect than smaller planets. And planets in smaller orbits move more quickly, causing both larger Doppler shifts than equally massive planets in larger orbits, and taking less time for the periodicity in those shifts to become apparent.)

The number of radioactive particles left after a given amount of time is given by an exponential distribution:

    N(t) = N_0 exp(-t/τ)    (5)

The rate at which particles decay (i.e., the number of decays per unit time) is:

    dN/dt = -(1/τ) N_0 exp(-t/τ)    (6)

The mean lifetime of particles decaying according to this exponential distribution is found as an expectation value, which you can show is in fact just the characteristic lifetime τ:

    ⟨t⟩ = [∫_0^∞ t exp(-t/τ) dt] / [∫_0^∞ exp(-t/τ) dt] = τ    (7)

However, the average experimental muon lifetime T_μ (after applying Correction #1 above, Equation 4) is being found using only those decays that fall within the finite window [t_0, t_1], i.e.:

    T_μ = ⟨t⟩ restricted to [t_0, t_1]    (8)

        = [∫_{t_0}^{t_1} t exp(-t/τ) dt] / [∫_{t_0}^{t_1} exp(-t/τ) dt]    (9)

since all muons with lifetimes outside that range are being rejected. Thus, we state that there is a correction factor k between the two, such that

    τ = k * T_μ    (10)

where

    k = [∫_0^∞ t exp(-t/τ) dt / ∫_0^∞ exp(-t/τ) dt] / [∫_{t_0}^{t_1} t exp(-t/τ) dt / ∫_{t_0}^{t_1} exp(-t/τ) dt]    (11)
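A minimal numerical sketch of this correction factor: since t·exp(-t/τ) and exp(-t/τ) have simple antiderivatives, k can be evaluated in closed form. The value of T_μ below is an illustrative assumption (chosen near the accepted muon lifetime), not a measurement:

```python
import math

def window_mean(tau, a, b):
    """Mean decay time of an exponential distribution restricted to [a, b].

    Uses the antiderivatives G(t) = -tau*exp(-t/tau) of exp(-t/tau)
    and F(t) = -(t + tau)*tau*exp(-t/tau) of t*exp(-t/tau).
    """
    ea, eb = math.exp(-a / tau), math.exp(-b / tau)
    num = tau * ((a + tau) * ea - (b + tau) * eb)  # F(b) - F(a)
    den = tau * (ea - eb)                          # G(b) - G(a)
    return num / den

T_mu = 2.2e-6            # background-corrected estimate (s) -- assumed value
t0, t1 = 200e-9, 10e-6   # window bounds (s)

# Over [0, infinity) the mean is just tau (Eq. 7), so Eq. (11) reduces to
# k = tau / (window mean), with tau approximated by T_mu as the text suggests.
k = T_mu / window_mean(T_mu, t0, t1)
tau_unbiased = k * T_mu  # Eq. (10)

print(f"k = {k:.3f}, tau = {tau_unbiased * 1e6:.2f} us")
```

For this window, k comes out slightly below 1: cutting off the first 200 ns removes many short-lived decays and raises the window mean above τ, and this outweighs the loss of the few decays beyond 10 μs.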
Calculate k, the ratio of these two integrals (you can approximate τ in the integrals by T_μ), then use Equations 10 and 11 to determine τ, the unbiased, background-corrected estimate of the muon lifetime. How large was this correction? Did it bring you any closer to the accepted value?

Correction #3: Correction for pile-up

Pile-up is a common problem in data collection. Pile-up occurs when multiple events are confused for a single event of a different type. It can have complex and subtle effects and may be very difficult to correct, although it is relatively simple to determine whether pile-up is occurring. The most practical consequence of pile-up is that it sets an upper limit to your data collection rate (requiring you to lower the luminosity of a source or use absorbers to limit the count rate, for example). Since statistical errors decrease as the number of counts increases, the rate limit imposed by pile-up effectively limits how good your statistics can get within a given amount of time, even if you are capable of getting a high count rate.

One type of pile-up occurs in high-energy spectroscopy, where one tries to collect and determine the energy of photons one at a time. If two photons enter the detector close enough in time that the detector cannot distinguish between them, they may be confused for a single higher-energy photon; the net effect of pile-up is to blue-shift a spectrum away from lower energies and into higher energies.

Another type of pile-up occurs when one is studying time-separated events, such as the doubles events studied here. It is possible that while a muon is sitting in the scintillator waiting to decay, a stray particle may enter, create its own pulse, and effectively pre-empt the muon. The result is that a long-lived muon will be interpreted as a short-lived one, biasing the distribution downward.
(Similarly, accidental doubles with long delays are more likely to be pre-empted by an intervening event, so the mean time of accidentals T_acc is shifted downward from the window-midpoint estimate in Equation 3.)

The amounts of these shifts can be estimated by doing some integrals. Rather than actually go ahead and do these integrals, let's instead determine whether or not pile-up is a significant factor in our experiment. We may model the probability of a pile-up event as follows: the rate of pile-up events, D_pu, can be determined as the doubles rate multiplied by the probability that a single, unrelated event will occur in the time t between the two events of the double:

    D_pu = D * R * t    (12)

Note that the probability of a pile-up event increases as the time spacing t between the two events of the double increases (this is what distinguishes it from the uniform background correction introduced by Equation 4 in Correction #1). The worst-case scenario for pile-up is for long-lived events, say t = t_1 = 10 μs ≈ Δt (from Eq. 1). Then,

    D_pu ≤ D * R * Δt
Given your doubles rate D, singles rate R, and the window length Δt = t_1 - t_0, what is the maximum possible pile-up rate D_pu?

Over the time during which you collected your doubles events, how many pile-up events do you expect occurred? Is this a significant correction that you need to apply? (Alternately, how does D_pu compare to D? In other words, what fraction of doubles events do you anticipate will be affected by pile-up?)
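These estimates amount to a few multiplications; here is a short sketch with assumed (not measured) rates and run time:

```python
# Worst-case pile-up estimate: Eq. (12) with t -> Delta t.
# All inputs are illustrative placeholders; substitute your own values.
R = 10.0              # singles rate (Hz)
D = 0.01              # doubles rate (Hz)
t0, t1 = 200e-9, 10e-6
run_time = 3600.0     # total data-collection time (s) -- assumed

dt = t1 - t0
D_pu = D * R * dt         # worst-case pile-up rate (Hz)
n_pu = D_pu * run_time    # expected pile-up events during the run
fraction = D_pu / D       # fraction of doubles affected (= R * dt)

print(f"D_pu = {D_pu:.2e} Hz; ~{n_pu:.1e} events in the run; "
      f"fraction of doubles affected = {fraction:.1e}")
```

Note that the affected fraction D_pu/D is just R·Δt, the same small number that controls the accidentals rate, so for typical singles rates in this setup pile-up is a far smaller effect than Corrections 1 and 2.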