Title: Search for Strangelets at RHIC
Author: Brandon Szeliga


Abstract (outline): Introduction: What are Strangelets? How are we looking for them? (the ZDC-SMD; graphing the data; find them or not? what do we do if we find them, and what if we can't?). Main (What did I do?): root; cuts (finding them, implementing them, what was eliminated and why); acceptance for the detector (major points; GEANT problems and solutions; the program); the efficiency program. Conclusion. Acknowledgements.

Introduction: Ever since its conception, the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) has concentrated on trying to discover information about the beginning of the universe. Over the course of its experimentation,

RHIC has provided answers and data for many important questions concerning our universe, but one question still unanswered is the existence of Strangelets. Strangelets are particles consisting of equal numbers of up, down, and strange quarks; a six-quark Strangelet, for example, would consist of two up, two down, and two strange quarks. Due to the presence of multiple quarks, including the strange quark(s), Strangelets are very heavy. Also, since the up and down quarks' charges cancel each other out, the only charge within a Strangelet is due to the strange quark(s). To help in the search for Strangelets, the scientists at RHIC added a new piece of equipment to one of their current experiments, the Solenoidal Tracker at RHIC (STAR). The Zero Degree Calorimeter (ZDC), a small piece of STAR located along the beam line on either side of the center, was altered to incorporate a new small detector, the Shower Max Detector (SMD). The SMD is a position-sensitive detector that measures the energy deposited by whatever strikes it. Now, how does all of this work together to find Strangelets? When an experiment is running at STAR, two gold atoms are forced into a head-on collision at the center of STAR, flinging a mass of particles outward from the collision point. Some of these particles continue along the same direction as the beam line. They travel a short distance before they reach the DX magnet, which bends all of the particles with high charge and low mass off to the sides. This leaves the low-charge, high-mass particles (a Strangelet, perhaps?), which are only slightly affected by the magnet and so continue through it with only a small change in their path along the beam line. Once the remaining particles have exited the magnet, they hit the ZDC-SMD.
The detector records where it was hit and the amount of energy deposited. One problem we encountered was how to tell whether a large deposit of energy came from a Strangelet, or just from a cluster of relatively neutral objects that happened to hit the ZDC-SMD at the same time. This is answered by the characteristic of the SMD I mentioned: position sensitivity. If there happens to be a big spike in the amount of energy deposited, we next look at a graph of the position of the deposited energy. If the energy spike is all located in one spot, then the particle was a Strangelet, whereas if the energy was spread out over several different locations, then we know that several neutral objects hit the detector at once. Now that we have a method for detecting Strangelets, what are our goals in trying to detect them? They are quite simple: if we find a Strangelet, we wish to establish a probability for their existence within a given number of particles. If we do not manage to find any, then we wish to establish an efficiency for how thoroughly we checked. That way, researchers in future searches can tell how much data we examined, and they will have a starting point for their own experimentation. How do we plan on accomplishing these goals? First we must establish which data might contain a Strangelet. To do this, we look at the data from runs that we are certain contain no Strangelets; the rest of the data can then be treated as questionable Strangelet occurrences.

The next step is to separate bad events from the events where a Strangelet is possible. For example, we would apply a cut if the collision did not occur at the center of STAR (the time difference cut); if the detector on one side received a bigger energy deposit than the other, we would cut the side with the smaller deposit (the trigger by East/West cut); we would cut on whether the trigger Id equals 15401 or 15404 (the Strangelet Id cut); and we would cut if the collision did not occur within a specific range along the z-axis (the vertex range cut). These cuts eliminate all the data that we would consider a bad event of the experiment. The next step in our experimentation is to graph the RMS y versus the RMS x from the ZDC-SMD. This is the part of the data dealing with the position sensitivity of the detector. Since a Strangelet signal is barely spread out, its position in this graph should be close to the origin; the closer a point is to the origin, the more likely it is to be a Strangelet. So at this point we would be looking for data close to the origin, or at least within the lower quadrant. What comes next depends on what we find there. If we happen to find a possible Strangelet, we need to look into the data and find out more about it. If it so happens that we don't find any, then we need to establish an efficiency for our instrumentation. This can be done by figuring out the number of data pieces we looked at and the percentage of particles that end up hitting the detector, which in turn can be done with a simple GSTAR simulation: shoot particles toward the detector with different values of Pt, η, and φ, and record the number that successfully hit. Main Content: Upon arriving at Brookhaven, the first job I undertook was learning root, a new platform for running C++. The first few days were spent reading the root tutorials and learning how versatile root is.
I was also required to look at what Aihong and Zhangbu had done so far on the project. This was not an easy task. They gave me a program they had written to make cuts on the data and then graph the RMS y versus the RMS x. The program was hard to understand at first, but Aihong sat down with me, cut some parts out of it, and then explained it to me. Once I understood what the program was trying to accomplish, I was able to put it to work for myself. Aihong wanted me to graph several aspects of the data to see if there was any correspondence between them. Once I finished graphing these, we sat down together, and Aihong told me that some of them would be used to make cuts on the questionable Strangelet data. He told me to find cuts for the following data plots (all of which are versus the number of good primary tracks): the number of good tracks, the number of good negative tracks, the number of good primary tracks, RMS η, the number of primary tracks, mean η, mean Pt², and mean Pt. This was not hard to accomplish. All I needed to do was take the code he gave me and, now that I understood it, slightly manipulate it to graph these (the code already had the required cuts mentioned earlier, and I was to add the more specific cuts now). I would then use the program to make some cuts on each graph. The options were a two-sigma cut fitted by a polynomial of degree two or three, or a three-sigma cut with a polynomial of degree two or three. I was then to analyze all of

these graphs with their lines of cuts and judge which would give the best cut on the data. Aihong believed it would be better to stay with the two-sigma cuts, so I limited myself to those when judging our cuts.

[Figure: three graphs of the number of primary tracks versus the number of good primary tracks. Top: raw plot. Middle: two-sigma fit with a polynomial of degree 2. Bottom: two-sigma fit with a polynomial of degree 3.]

The next step was to combine all of these cuts together, graph the RMS y versus the RMS x, and analyze the resulting data. Taking the code Aihong gave me and once again slightly modifying it allowed me to do this. In the code I needed to declare the cuts I had just worked on and then include them in the final drawing process. When run, the program would check each individual piece of data to see whether it fell outside the cuts or within them; if a data point fell outside the cuts, the program would skip drawing it, and otherwise it would draw it. Upon finishing these graphs, we noticed an interesting thing: the graph using all of the cuts I made, along with all of the cuts Aihong included, was left with no questionable Strangelet data points. Deciding that our cuts must be too specific, we released most of the cuts I had made, leaving the program with the vertex range cut, the time difference cut, the trigger by East/West cut, the number of tracks versus the number of good primary tracks cut, and the Strangelet Id cut. By eliminating the cuts I had researched, the program became less specific in what it was graphing, and some Strangelet data points did appear on the graph. Finally, for a point of reference, we also graphed the data without any cuts. It is now time for us to analyze our graphs.
If we look at our first graph (the one with no Strangelet data) and our second graph (Strangelet data, but still with cuts), we begin to wonder which cut eliminated all of the points on the second graph to make it look like the first. This is a question we will investigate soon enough. Continuing on, looking at the cut graph we notice that there are no points from the questionable Strangelet data within the interesting lower quadrant of the graph. This leads us to the conclusion that there are no Strangelets within our data set when we use those desired cuts. Furthermore, if one looks at the third graph (no cuts at

all), one notices that there are many points from the questionable Strangelet data, but none within the lower quadrant of interest. This leads us to believe that this data contains no Strangelets. Having our graphs and our new question ("what eliminated the points between the second graph and the first?"), we proceeded to investigate. We needed to take the N-tuple where the Strangelet data was stored and go entry by entry, checking whether each one violated any of the cuts I had made. When one did, its run Id and event Id were written to a file named for the cut it violated. This left us with an extensive list of all the Strangelet data that violated a cut, for every cut I researched. Now that we had looked into our data and determined that there are no Strangelets within that data set, we could move on to determining the efficiency and extensiveness of our search. To do this we needed to know both the size of our search pool (the size of the Strangelet data set) and the acceptance of the ZDC-SMD. The size of the Strangelet data set is easy enough to figure out, but the acceptance of the ZDC-SMD is a little more difficult.

[Figure: two graphs of the RMS y versus the RMS x. Red dots are the questionable Strangelet data; black dots are the known data. Top: no cuts on the Strangelet data. Bottom: the basic cuts on the Strangelet data, but not the refined cuts.]

To determine the acceptance I needed to work with the GEANT simulation (a simulation that follows the path a particle takes while traveling through STAR). This meant learning how to work an entirely new program. It took a little getting used to, and required me to ask Aihong a few questions in order to understand what I was supposed to do and how I was supposed to accomplish it.
What he wanted me to do was take the GEANT simulation and get a view of the ZDC-SMD on the screen. I was then to fire a proton at it using various Pt and η values and record the η value at which the proton would first miss the detector. Throughout, I kept the other variables (φ and the z-axis vertex position) equal to zero. So, for example, I would start with a Pt of 2.5 and an η of eight. Seeing that the particle hit, I would decrease η until the particle first missed the detector; at a Pt of 2.5, the first missed η turned out to be 5.74. I continued this process until I had shot 50 particles at the detector and covered the range of Pt from zero to 10 (the maximum Pt allowed).

My next step was to take the information I had just gathered and graph it, the trick being to put P/z on the x-axis and Pt on the y-axis. This required a few formulas that Aihong supplied, and my knowledge of C++, to convert η into P/z. Once this was accomplished I went back to root and graphed these values. We ended up with a graph where everything to the right of the points represented values that would hit the detector, while everything to the left would always miss. Realizing that, because of the magnet, a particle of the opposite charge would be affected differently, I went back and re-ran everything I had just done with an electron. And lo and behold, the graphs did turn out different. Showing this to Aihong, we sat down and discussed it, and we reasoned that if we shot the particles at φ equal to 180 degrees, the proton graph would look like the electron graph, and vice versa. This turned out to be true, but then we hit a snag: what would the graphs look like at the other two major angles (φ equal to 90 and 270)? Knowing there was only one way to find out, I ran the GEANT simulation four more times (a proton and an electron at both φ equal to 90 and 270). These graphs all came out pretty much the same, yet they were still different from the φ equal to 0 and 180 graphs. This led us to the conclusion that the problem was going to be more difficult than we originally thought.

[Figure: graphs of the acceptance of the ZDC-SMD; everything to the right of the dots is a hit on the ZDC-SMD. Top: proton shot at φ = 0 (same as electron at φ = 180). Second: electron shot at φ = 0. Third: proton shot at φ = 180. Fourth: proton shot at φ = 90 (same as proton at φ = 270 and electron at φ = 90 and 270).]

After a slight discussion, Aihong decided with Zhangbu that the only thing we could do was test each Pt and η at various φ's, record the number of

hits, and then convert that to a percentage, so we would know how likely a hit is at that Pt and η. That is the basic idea of how acceptance works, but establishing a program for it is much harder. First, I needed to develop C++ code to create a kumac file I could run in GEANT, so that I would not have to sit at the terminal typing in every line to be simulated. Second, GEANT does not come equipped with a function to keep track of whether or not a particle hits a desired object. Third, GEANT is written in Fortran, a language I knew nothing about. Next, I had no idea how to hook a function I was supposed to write into a program like GEANT. I also needed to find out how to eliminate the interactions that occur when a particle hits an object: once I did manage to link a function into GEANT correctly, it would be called for every step of every particle, and if interactions occurred when a particle hit something, the function would also be called for every particle resulting from the collision, which would greatly skew our results. Finally, I needed to find a way to link the information I created for the kumac file with the information from the Fortran subroutine I would be writing, so that I could create a diagram showing the acceptance at the various levels of Pt and P/z we tested. The first problem I tackled was the easiest and also the most crucial: the interactions problem. What I needed was a way to remove any chance of a particle having an interaction that would create more particles and throw off our results. To tackle this issue, I asked Maxim Potekhin whether this could be accomplished, and how.
Maxim told me that the only way to achieve this would be to go into the geometry files for GEANT and change the material of everything into vacuum. This would let a particle pass through objects it normally could not, but it would also eliminate the interaction problem. Once again I discussed this with Aihong, and we decided that the problem we were removing was worse than the problem we were introducing. The next problem I tackled was creating the kumac file to be read. This was not a very hard process. I had the program output to a kumac file everything needed to start the GEANT simulation, along with the commands to compile and link my yet-to-be-written subroutine. Then I divided the P/z and Pt axes into a certain number of bins, and φ into another number of bins. I would systematically go through each P/z and Pt bin, randomly pick a Pt and P/z within it, convert the P/z to η, and output this information to a separate file. Then, for that bin, I would go through each of the φ-angle bins, randomly pick a φ, and output the Pt, η, and φ to the kumac in the form it requires. I repeated this process for all of the bins. Next I moved on to creating the subroutine. This was by far the hardest part of the experience: not only did I have to learn an entirely new programming language, I also had to figure out how to make the subroutine talk to the simulation. My first version of the code worked something like this: it had to be recompiled between particles, and each time it ran it would output the number of particles that had hit the detector since the last recompile. So it output something each time and had to be recompiled to reset the counter to zero; needless to say, it ran very slowly.

My remake of the function used a slightly different approach. I noticed that all particles that were going to hit the detector, or only slightly miss it, traveled through the same volume spaces up until the actual detector volume. The new function uses that fact to tell whether a particle is going to hit or miss: the moment a particle is not within one of those volumes, we know it will miss; otherwise it must hit. When the next particle is shot, the function outputs the hit (1) or miss (0) of the previous one to a file, so to get the information about one particle you need to shoot two. Finally, I created a file to graph the resulting data for us. This function takes the particle-information file created earlier and the hit file and links them together into a diagram showing the percentage of hits. It prints the graph in 2-d, where you can see which values were used in the simulation; you can then turn it into a 3-d graph, where the z-axis shows the percentage of hits at each Pt and P/z.

[Figure: acceptance graphs of the ZDC-SMD. Top: 2-d graph of the values used. Bottom: 3-d graph of the values used and the percentage of times those values hit.]

Now for the final part of my project: establishing an efficiency for our experiment. Zhangbu wanted me to create a program to display this. Efficiency is calculated by taking a cut of the graph created above along a fixed P/z, leaving Pt on the x-axis and a percentage on the y-axis. You then take the assumed Pt spectrum for Strangelets (which is Gaussian), multiply it by the Pt-versus-percentage function, and integrate the resulting function. To normalize this integral, you divide by the integral of the assumed Pt spectrum for Strangelets alone. This leaves you with a percent efficiency.
I implemented this in a program by creating a function that takes in a value of P/z and then calculates the efficiency for you.

[Figure: graph of the efficiency at the crucial values of P/z.]

Zhangbu wanted me to test several values of P/z (500, 1000, 1500, 2000, 2500, 3000), so I simply needed to rerun the program several times, entering the different values of

P/z and recording the results. To put these results into a visual, I made a different program, which takes a P/z value and plots the efficiency value obtained earlier. Looking at the resulting graph, we noticed that we had a relatively high efficiency, so we are happy to know that our detector is highly reliable at its job. This is where my time here ends; however, there is much more work to be done. A paper finalizing the results still needs to be written, and all of our results need to be rechecked. In further studies it may be possible to refine our searches and perhaps still find a Strangelet. Maybe in the future someone will look again with a bigger search pool or a more efficient detector. Conclusion: Overall, a lot of work has been done on this experiment, not just by me but by the entire team. Together we have taken a search pool of close to a hundred million and looked through it methodically. We have eliminated bad data and then refined it even further. We have taken these results and found out what eliminated certain data points, so that further studies can look into those aspects of Strangelets a little more. We have established an acceptance level for the ZDC-SMD: we now know what it takes to hit the detector and, at certain points, how likely a hit is. We know the efficiency of our search, so that future searches have a guideline to start from and know what they can do to set up a better search. I have also joined the ranks of research scientists by presenting a talk on our search at the STAR Junior meeting: I put together a presentation, presented it, and helped take questions. With all of this success and hard work, we have completed the first search for Strangelets at RHIC, with the help of everyone here and the ZDC-SMD.
Acknowledgements: I would like to thank the National Science Foundation (NSF) and Wayne State University (WSU) for giving me this opportunity and for the financial support they provided. It truly was a once-in-a-lifetime opportunity. I would like to thank Giovanni Bonvicini and the rest of the REU staff for taking time out of their lives to give us as much help as they could, for traveling here to check up on us, and for their overall patience in helping us non-physics majors. I would like to thank Aihong Tang and Zhangbu Xu for showing me the ropes of research. I would also like to thank the friends I made here for keeping me sane through the times when I thought I was getting nothing done. I hope we all remain in touch.