UNIVERSITÀ DEGLI STUDI DI MILANO Facoltà di Scienze e Tecnologie Corso di Laurea Magistrale in Fisica


Search for Dark Matter direct production in the mono-photon plus missing energy channel in pp collisions at √s = 8 TeV with the ATLAS detector

Internal supervisor: Dott. Leonardo Carminati
External supervisor: Dott.ssa Donatella Cavalli
Co-supervisor: Dott. Valerio Ippolito

Thesis by: Marta Maria Perego
Matr.
Codice P.A.C.S.: j
Anno Accademico

Stay Hungry. Stay Foolish.

Contents

Introduction

1 The Dark Matter problem
  1.1 Evidence and theoretical motivation from Astrophysics and Cosmology
      Rotation curves of stars in galaxies
      Gravitational lensing
      CMB
  1.2 Necessity of Physics beyond the Standard Model from Particle Physics
      One of the Standard Model open questions
  1.3 Dark Matter Candidates: the non-baryonic candidate zoo
      Why Particle Dark Matter?
      Neutrinos
      Axions
      WIMPs
      Kaluza-Klein states
      Other exotic Dark Matter candidates
  1.4 Dark Matter Searches
      Direct Detection
      Indirect Detection
      Collider Searches
  1.5 Effective Field Theories
  1.6 Models for physics beyond the Standard Model
      Graviton production in the ADD scenario
      Direct production of WIMPs
      Compressed Squark scenario
  1.7 Conclusions

2 The LHC and the ATLAS Detector
  2.1 The Large Hadron Collider
  2.2 The ATLAS Detector
      Introduction
      General layout of ATLAS
      Inner Detector
      Calorimetry
      Muon Detector
      Trigger System

3 Physics objects reconstruction
      Electrons and Photons
        Photons
      Jets
        Jet reconstruction in this analysis
      Muons
      Taus
      Missing transverse momentum E_T^miss
        E_T^miss reconstruction
        E_T^miss in this analysis

4 The mono-photon Analysis
      Data and simulation samples
        Data
        Simulations
        MC re-weighting
      Event Selection
      Standard Model Background
      Control Regions
      Electrons faking photons
      Jet-faking-photon background
        Basic 2D sideband method
        Photon isolation and identification definitions
        Signal leakage corrections
        Uncertainties
        2D sideband method validation
        Noise bursts
        Results
      The γ + jet background
        MC estimation
        Data-driven estimation
      Simultaneous fitting technique for background estimation
        HistFitter
        Inputs
        Systematics
      Validation Region
        Low E_T^miss Control Regions definition
        γ + jet background estimation in the Validation Region
        Simultaneous fit
      Background estimation in the Signal Region

5 Interpretations
      Model-independent limit on the presence of new physics
        Results
      Limits on Dark Matter direct production
        WIMP samples
        Results

6 Conclusions

Acknowledgements

Bibliography

Introduction

In 2012 the discovery of a new particle with properties very close to those expected for the Higgs boson of the Standard Model was announced at CERN. With this discovery the last missing piece of the Standard Model was set in place, and no significant discrepancies with respect to its predictions have been observed over a large number of precision measurements. However, we know that we do not have a complete microscopic description of nature. From the particle physics point of view, several clues, mainly based on naturalness arguments, tell us that, despite its remarkable precision in describing particle physics phenomena, the Standard Model (SM) is probably not the most fundamental theory. Hence, several Beyond Standard Model (BSM) theories have been proposed, and a big effort is ongoing among theorists to find compelling Standard Model extensions. Moreover, from cosmological measurements we know that we understand reasonably well only 5% of the total content of the Universe. Several observational probes for the existence of a new component of matter (the so-called Dark Matter, DM) have been found on a wide range of astronomical scales, but its physical properties are still unknown. Hence, one of the main unanswered questions today concerns the nature of Dark Matter. Among the most compelling DM candidates are the weakly interacting massive particles (WIMPs), which are thought to be thermal relics of the early Universe. If DM is a thermal relic, a weak-scale cross section gives the correct abundance to account for the observed fraction of Dark Matter. This coincidence is called the WIMP miracle and provides a strong indication for a WIMP (χ) in the mass range 10 MeV < m_χ < 100 TeV. Such particles are predicted in models of physics beyond the SM, such as Supersymmetry and Large Extra Dimensions.
This thesis lies among the collider Dark Matter searches carried out at the Large Hadron Collider (LHC) at CERN and presents a search for new phenomena in proton-proton collisions at √s = 8 TeV recorded by the ATLAS detector. If DM couples to nuclear matter, it can be produced in pp collisions. There are many ways in which DM searches can be pursued at colliders; one is to probe specific theories and models by looking at all the decay channels predicted by the theory. The drawback of this approach is that the scenario is complex, with a plethora of possible channels. The simplest scenario in which to reveal DM particles at colliders is the one where Dark Matter is the only new state accessible to our experiments. Once produced, if DM is weakly interacting and stable enough, it will pass through the detector without leaving any trace and without decaying into other particles. Therefore the existence of Dark Matter particles escaping

the detector may be inferred from an imbalance in the visible transverse momentum in the event, requiring an energetic SM particle from initial state radiation (ISR) to tag the event. The ISR particle can be a jet (mono-jet signature), a photon (mono-photon) or a boson (W or Z), denoting these searches as mono-X signatures. The final state investigated in this thesis is defined by the presence of an energetic photon (p_T^γ > 125 GeV) and large missing transverse momentum (E_T^miss > 150 GeV). Events with such a final state constitute a clean and distinctive signature at the LHC, and are studied to test three scenarios for physics beyond the SM: the production of WIMPs, Large Extra Dimensions, and a particular scenario of Supersymmetry. Even if at hadron colliders the mono-jet analysis is statistically favoured, the mono-photon search has some compelling features that make it competitive: photons are well-measured objects, and the main sources of background are electroweak processes which are well known and relatively easy to estimate. The mono-photon analysis is a counting analysis: it counts the number of observed events in a Signal Region (SR) designed to contain the largest fraction of expected signal events. The number of observed events in data is then compared to the Standard Model prediction. The main effort in this kind of analysis is the precise estimation of all the sources of background entering the SR. The background may be grouped in two main categories: the background coming from W/Z + jet processes (where the leptons or jets are missed or misreconstructed as a photon) and γ + jet processes, and the background coming from Z(→νν) + γ and W/Z(→ℓℓ) + γ processes (where leptons are missed). While the former category (W/Z + jet and γ + jet) is, when possible, estimated via data-driven methods, the estimation of the latter (W/Z + γ) is based on the definition of Control Regions (CR) (i.e.
background-like regions enriched in a certain source of background) enriched in W + γ and Z + γ processes. Two MC normalization factors (k_Z and k_W) for the two processes (Z + γ and W + γ) are introduced and determined from a simultaneous fit in all the regions (CRs and SR). In this thesis I focused both on the estimation of two sources of background (the jet-faking-photon and the γ + jet background) via two different data-driven methods, and on the estimation of the total background by performing the simultaneous fit. I tested the simultaneous fitting technique in a Validation Region (i.e. a region signal-like in its background composition but with a small expected signal contribution) and appropriate Control Regions. The events observed in data in the Signal Region are compared to the total background estimate in the Signal Region and translated both into exclusion limits on physics beyond the SM and into exclusion limits on WIMP pair production, in the framework of effective field theories. This thesis is organized as follows. Chapter 1 gives an introduction to the Dark Matter problem. It presents some compelling Dark Matter candidates, the approaches used to search for Dark Matter, and scenarios for physics beyond the Standard Model which are interesting for the mono-photon signature. The LHC collider and the ATLAS experiment are described in Chapter 2, while Chapter 3 describes the physics object reconstruction used by ATLAS. Chapter 4 details the mono-photon analysis and Chapter 5 shows how

results are translated into exclusion limits on physics beyond the Standard Model and on specific Dark Matter models. Chapter 6 is devoted to conclusions.

Chapter 1

The Dark Matter problem

In 2012 the discovery of a new particle with properties very close to those expected for the Higgs boson of the Standard Model was announced [1]. With this discovery the last missing piece of the Standard Model was set in place, and no significant discrepancies have been observed over a large number of precision measurements. A natural question now arises: do we have a complete microscopic description of nature? From cosmological measurements we know that we understand reasonably well only 5% of the total content of the Universe, and that the main component of the Universe is still unknown (dark). This chapter presents the main motivations for the existence of the so-called Dark Matter, both from astrophysical and cosmological measurements and from the particle physics point of view. After this introduction, a short list of the main Dark Matter candidates is discussed and the three experimental approaches pursued in Dark Matter searches are presented, focusing on collider searches. Finally, some scenarios for physics beyond the Standard Model which are interesting for the mono-photon analysis are described.

1.1 Evidence and theoretical motivation from Astrophysics and Cosmology

Over the past century, researchers have accumulated clues that the Universe contains an elusive and unknown substance, which we call Dark Matter. There are several astronomical observations and discoveries supporting this idea, but I will present only a brief summary of the most compelling evidence. In the last 30 years, thanks to precise cosmological and astrophysical measurements, our knowledge of the structure of the Universe has increased considerably. The evidence that Dark Matter is out there is described in [3], [6]. Here I will mention only three sets of evidence, to give the reader an idea of the current understanding of Dark Matter.

Rotation curves of stars in galaxies

The study of the rotation curves of stars in galaxies constitutes one of the most compelling pieces of evidence for the existence of Dark Matter. Spiral galaxies are stable gravitationally bound systems in which stars are not distributed spherically but occupy a thin disk. Consider the circular velocity V of a star and let M(r) be the mass of the galaxy inside a radius r. For stability, the centripetal acceleration must be provided by the gravitational pull:

V²/r = GM(r)/r²  (1.1)

which implies that the velocity follows Kepler's law:

V = √(GM(r)/r)  (1.2)

Astronomical observations show a behaviour different from this expectation: rotation curves become approximately constant beyond r ≈ 5 kpc. The easiest explanation of this behaviour is the presence of a huge Dark Matter halo in the galaxy: if the enclosed mass grows proportionally to r, the velocity is constant. In this way Dark Matter and visible matter combine to reproduce what we see in the measurements. Fitting the rotation curves is a complex problem; the main elements used in the fit are a stellar and gaseous disk and a Dark Matter halo modeled as a quasi-isothermal sphere [4], [3]. Figure 1.1 shows a typical rotation curve fitted with these three components.

Gravitational lensing

The second set of evidence for Dark Matter comes from gravitational lensing. The basic idea is the following: a foreground galaxy cluster acts as a gravitational lens for light rays coming from a background object (a galaxy or quasar). The background object appears as multiple distorted images, from which the mass of the lens can be reconstructed. A typical case of strong gravitational lensing is the Bullet Cluster. It consists of two colliding clusters of galaxies.
Figure 1.2 shows the clusters after the collision. The red halo is the X-ray distribution. Lensing measurements show that most of the matter is concentrated elsewhere and does not correspond to the red halo: it corresponds to the blue halo, which is the Dark Matter. This picture is particularly impressive because we see ordinary matter separated from DM. Another probe for DM coming from lensing measurements is the case of the ring of Dark Matter [7].

CMB

One of the strongest pieces of evidence comes from cosmological measurements, in particular from the study of the Cosmic Microwave Background (CMB). It is a relic thermal radiation background permeating the Universe which is almost isotropic (2.73 K). It has

Figure 1.1: Rotation curves of several galaxies. The fit is performed using three components. In each figure the solid line represents the total fit; the individual contributions are also shown: luminous component (dashed), gas (dotted) and dark matter (dash-dotted) [5].

Figure 1.2: Bullet Cluster, superposition of optical, X-ray (red) and lensing map (blue).
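As a rough numerical illustration of the Keplerian expectation of equation 1.2 versus the flat behaviour produced by a halo with M(r) ∝ r, the following sketch compares the two circular-velocity profiles. The disk mass and halo normalisation used here are invented round numbers, chosen only for illustration:

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30   # solar mass [kg]
KPC = 3.086e19     # kiloparsec [m]

M_DISK = 1e11 * M_SUN       # hypothetical luminous (disk) mass
K_HALO = (2.0e5) ** 2 / G   # halo mass per unit radius [kg/m], tuned so the
                            # halo alone gives a flat ~200 km/s contribution

def v_kepler(r):
    """Eq. (1.2): circular velocity if all the mass were the central/disk mass."""
    return math.sqrt(G * M_DISK / r)

def v_total(r):
    """Adding a halo with M_halo(r) = K_HALO * r, i.e. enclosed mass growing as r."""
    return math.sqrt(G * (M_DISK + K_HALO * r) / r)

for r_kpc in (5, 10, 20, 40):
    r = r_kpc * KPC
    print(f"r = {r_kpc:2d} kpc: Kepler {v_kepler(r)/1e3:5.0f} km/s, "
          f"with halo {v_total(r)/1e3:5.0f} km/s")
```

The purely Keplerian velocity falls as 1/√r (halving when r quadruples), while with the halo term included the curve flattens towards about 200 km/s at large radii, mimicking the observed behaviour.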

been emitted about 380,000 years after the Big Bang. Before that time, radiation and matter were tightly coupled, resulting in an ionized plasma. As the Universe expanded and cooled, the temperature decreased until the formation of neutral atoms became possible. After that time the Universe became transparent to photons: CMB radiation had its origin then. Since radiation was tightly coupled to matter before decoupling and not largely perturbed afterwards, the CMB provides us with a sort of footprint of the Universe at that time. From the study of the CMB a great deal of information can be inferred, one piece being the amount of matter. The most precise measurements of the CMB come from the PLANCK experiment, which provides the following density estimates: ordinary matter about 5%, DM about 27%, and dark energy about 68% [15].

1.2 Necessity of Physics beyond the Standard Model from Particle Physics

The Standard Model (SM) is the theory describing leptons, quarks and their strong and electroweak interactions. Its validity has been extensively tested at LEP, the Tevatron and the LHC. As far as we can tell, the data collected until now at collider experiments are well described by Standard Model predictions, as summarised in figure 1.3. Despite its remarkable precision in describing particle physics phenomena, the Standard Model is not considered to be the most fundamental theory: there are open questions and limitations for which the SM provides no explanation. For this reason several Beyond Standard Model theories have been proposed, and a big effort is ongoing among theorists to find compelling Standard Model extensions.

One of the Standard Model open questions

One of the most discussed issues of the SM is the hierarchy problem. It is related to the presence in the theory of the Higgs scalar field.
The basic problem is that the Standard Model Higgs field receives large mass corrections at energies above the electroweak scale, and fine tuning is required to compensate for them. The physical Higgs mass m_H can be written as the sum of the bare mass m_H0 and the radiative corrections:

m_H² = m_H0² + Δm_H²  (1.3)

These corrections can be written in the following form:

Δm_H² = −(λ_f²/16π²)(2Λ² + O[m_f² ln(Λ/m_f)])  (1.4)

where λ_f is the Yukawa coupling of the fermion f to the Higgs field, m_f are the fermion masses, and Λ is an energy cutoff representing the energy scale up to which the SM is valid. If the Standard Model is valid up to the Planck scale, there would be a divergent

Figure 1.3: SM cross-section measurements for many processes, compared to theoretical predictions. The data used in this plot were taken during LHC Run 1 and are generally well described by the Standard Model predictions.
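The size of the fine tuning implied by equation 1.4 can be checked with back-of-the-envelope arithmetic, keeping only the quadratically divergent Λ² term. Taking λ_f ≈ 1 (roughly the top Yukawa coupling) and pushing the cutoff Λ up to the Planck scale:

```python
import math

m_H = 125.0        # GeV, observed Higgs mass
lam_f = 1.0        # Yukawa coupling; ~1 for the top quark (assumed value)
Lambda = 1.22e19   # GeV, Planck scale used as the cutoff

# Magnitude of the quadratically divergent piece of Eq. (1.4)
delta_m2 = lam_f**2 / (16 * math.pi**2) * 2 * Lambda**2
ratio = delta_m2 / m_H**2
print(f"correction / m_H^2 ~ 10^{math.log10(ratio):.0f}")  # ~ 10^32
```

The correction exceeds m_H² by roughly 30 orders of magnitude, which is the fine-tuning problem discussed in the text.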

Lagrangian (the quantum corrections would be about 30 orders of magnitude larger than m_H², with m_H ≈ 125 GeV). To obtain a Higgs mass such as the one we observe, these corrections must cancel out, requiring a large fine tuning which seems highly unnatural. For this reason the Standard Model is not believed to be the theory of everything, and a huge number of theories try to provide a natural cancellation of these corrections. The most famous one is Supersymmetry (SUSY). Its basic idea is that each SM particle has a superpartner whose spin differs by half a unit. In SUSY theories the quantum corrections described above are cancelled by those of the corresponding superpartners above the Supersymmetry breaking scale. Another compelling feature of SUSY is that it provides a straightforward candidate for Dark Matter. SUSY scenarios are very complex and their phenomenology strongly depends on the model assumptions. SUSY has been probed at the LHC, but until now we have no striking signal for the existence of particles predicted by any of the supersymmetric extensions of the SM. Section 1.6 presents three possible scenarios of physics beyond the Standard Model which are interesting for the analysis described in this work. Before going into their description I will give a brief summary of possible Dark Matter candidates and of the ways we are looking for them.

1.3 Dark Matter Candidates: the non-baryonic candidate zoo

The pressing questions I will try to address in this section are: What is Dark Matter made of? What is the nature of Dark Matter? Is there only one kind of Dark Matter particle, or more? Which interactions beyond gravity does Dark Matter exhibit? How does Dark Matter couple to the Standard Model?
In the following sections these questions will be worked through: starting from a list of some Dark Matter candidates, I will move on to how Dark Matter searches are carried out, and finally the mono-photon search will be introduced.

Why Particle Dark Matter?

There are some questions to look at before moving to exotic candidates: why do we think of particle Dark Matter? Why not modify gravity? Could Dark Matter be baryonic matter? Studies of modified gravity theories such as MOND (MOdified Newtonian Dynamics) and TeVeS (Tensor-Vector-Scalar gravity [8]), which attempt to explain the observations with new interactions or modified gravity instead of new particles, are ongoing. In general these theories are able to describe the observations of galactic rotation curves [9]. Nevertheless, their main issue is that they must describe the whole set of observations ([10], [8], [11]), and so far nobody has been able to do so. By contrast, it is much more economical to explain everything by introducing Dark Matter particles. In this case,

well-justified theories exist and they are testable. The next question is: why not just ordinary (dark) baryons? There are strong constraints telling us that Dark Matter should be non-baryonic. CMB [15] and Big Bang nucleosynthesis (BBN) studies provide independent measurements of the baryon fraction, and their predictions agree with observations. Thanks to these studies, the possible contribution of MAssive astrophysical Compact Halo Objects (MACHOs) [12] has been ruled out and we know that DM is not a SM baryon. It is interesting to explicitly mention the case of SM neutrinos.

Neutrinos

At first glance the SM itself provides compelling DM candidates: neutrinos. Neutrinos seem to be the perfect candidate for Dark Matter: they are known to exist (without invoking exotic models) and to have mass (which requires an SM extension to be explained). However, Standard Model neutrinos cannot explain the whole Dark Matter content. Constraints on the neutrino relic density come from CMB studies [15] and double β decay experiments. It can be shown [6] that the relic density coming from neutrino contributions is not abundant enough to account for the principal component of Dark Matter.

Sterile Neutrinos

Even though there is good confidence that SM neutrinos cannot account for the main Dark Matter component of the Universe, there is still the possibility that other, not yet discovered, neutrinos are Dark Matter candidates. They are called sterile neutrinos. These particles are hypothesized to be right-handed neutrinos which do not interact via any of the known fundamental SM interactions except gravity. Several experiments (for example MiniBooNE and MicroBooNE) aim to search for them, but they have not been proved to exist yet.

Axions

The SM has another unsolved problem, the so-called strong CP problem.
This problem arises from the fact that quantum chromodynamics (QCD) has not been observed to break CP symmetry, although in principle it could, since there is no fundamental reason to conserve it. One of the solutions is the one proposed by Peccei and Quinn in 1977. This theory (Peccei-Quinn theory [13]) predicts a new scalar particle, the axion. The calculation of the axion relic density suffers from large uncertainties and depends on the production mechanism, but if axions exist in a sufficiently low mass range they can be a compelling candidate for cold Dark Matter [14]. Many experiments are ongoing all over the world to search for axions; among these is, for example, ADMX (the Axion Dark Matter eXperiment), which looks for axion conversion into photons.

WIMPs

There is a broad class of particles called Weakly Interacting Massive Particles (WIMPs) which are considered compelling DM candidates. WIMPs are thermal relics of the early Universe. If WIMPs χ have additional interactions besides the gravitational one, processes of annihilation into and production from Standard Model particles are possible. As long as the temperature of the Universe was above the DM mass, DM was in thermal equilibrium and the processes

χχ ⇄ f f̄  (1.5)

happened in both directions. When the temperature decreased, the interaction rate dropped below the expansion rate of the Universe, the equilibrium was no longer maintained and the only processes which could occur were

χχ → f f̄  (1.6)

The number density then follows the Boltzmann equation [6]:

dn_χ/dt + 3Hn_χ = −⟨σv⟩(n_χ² − n_eq²)  (1.7)

where n_χ is the particle number density, ⟨σv⟩ is the thermal average of the total cross section times the relative velocity, H is the Hubble parameter and n_eq is the number density at thermal equilibrium. The Hubble term is present because these processes take place in an expanding Universe. When the Hubble term becomes important, the annihilation freeze-out occurs: the comoving number density stops changing, and that is the relic abundance of Dark Matter we observe today. Figure 1.4 shows this process. The important point is that the later freeze-out happens, the lower the resulting DM abundance; conversely, the weaker the cross section, the earlier freeze-out occurs and the higher the DM number density. The amazing point is that, if DM is a thermal relic, a weak-scale cross section gives the correct abundance.
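The WIMP miracle can be made quantitative with the widely quoted order-of-magnitude relation Ω_χ h² ≈ 3×10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩ (see e.g. [6]); the snippet below is a rough sketch of this estimate, not a full relic-density computation:

```python
def relic_omega_h2(sigma_v_cm3s):
    """Order-of-magnitude thermal-relic abundance:
    Omega_chi h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>."""
    return 3e-27 / sigma_v_cm3s

# A typical weak-scale annihilation cross section
sigma_v_weak = 3e-26  # cm^3 s^-1
print(relic_omega_h2(sigma_v_weak))  # 0.1 -- close to the measured Omega_DM h^2 ~ 0.12
```

A cross section of weak-interaction size lands within a factor of a few of the measured Dark Matter density, which is precisely the coincidence the text calls the WIMP miracle.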
The mass range allowed for WIMPs is roughly 10 MeV < m_χ < 100 TeV, with a correspondingly wide allowed range of interaction cross sections with normal matter (for details and calculations refer, for example, to [6] and [16]). What is interesting is that many Beyond Standard Model theories predict WIMP candidates; for example, as briefly mentioned in section 1.2, the lightest neutralino is a DM candidate.

Kaluza-Klein states

In the framework of Large Extra Dimensions theories, proposed to solve the hierarchy problem, excitations of Standard Model fields are predicted to appear. These are called Kaluza-Klein states. There are several models involving Large Extra Dimensions; the one relevant for the mono-photon analysis is described in section 1.6.

Figure 1.4: Evolution of the WIMP number density in the early Universe at the time of freeze-out. The variable Y is related to n_χ and Y_eq to n_eq; refer to [21] for details.

Other exotic Dark Matter candidates

Besides the DM candidates listed so far, there is a plethora of other possibilities [21]. Among them are super-heavy DM candidates (such as WIMPZILLAs [17]), Q-balls [18], self-interacting DM [19], etc.

1.4 Dark Matter Searches

In the recent past much effort has gone into looking for Dark Matter particles and into revealing something about their nature. Several experiments are actively running and taking data. In this section I will try to address the question: how can we look for Dark Matter? The cartoon in figure 1.5 represents the three different approaches that can be used. One way is to look for DM annihilation or decay; this is the task of the so-called Indirect Detection (ID) experiments. It is also possible to look at the interaction between Dark Matter and Standard Model particles; this is the aim of Direct Detection (DD) experiments, which look for WIMPs. The last possibility is to search for DM at colliders, looking for its production in collisions of SM particles. In this section I will give a brief overview of the different approaches used in the Dark Matter search.

Direct Detection

Direct Detection experiments are mainly looking for WIMPs. Their underlying idea is simple. If our galaxy is filled by

Figure 1.5: Diagram representing the three approaches to search for DM. If DM has some coupling to ordinary matter we can detect it: 1) by looking at its annihilation products (Indirect Detection), 2) by looking for its scattering off nuclear matter (Direct Detection), 3) by producing it at colliders (collider searches).

Dark Matter particles and if the Dark Matter halo extends to our planet, there is a certain rate of WIMPs passing through the Earth. The WIMP flux on Earth is estimated to be of the order of 10⁵ (100 GeV/m_χ) cm⁻² s⁻¹ [21]. Most of them will pass through without interacting, but there will be a small, potentially detectable fraction which can interact with nuclei. The aim of Direct Detection experiments is to detect these extremely rare interactions with normal matter; in particular, they aim to measure the rate R of nuclear recoils and the nuclear recoil energies E_R caused by WIMP interactions. To make this possible it is necessary to reduce as much as possible every kind of background which could fake the WIMP signal: cosmic rays, environmental radioactivity and detector material radioactivity. Sophisticated experiments are located in underground sites (such as the Gran Sasso laboratories in Italy and SNOLAB in Canada) to reduce the cosmic ray background. Their main challenge is to make the experimental environment extremely low in radioactivity, in order to detect the recoiling nucleus kicked off by a WIMP, and to develop sophisticated background-rejection techniques. There are two classes of Direct Detection experiments, time-independent and time-dependent, which look at different energy signals and use different techniques and technologies to detect the energy released in the detector. In the following paragraphs I will consider only the example of time-independent Direct Detection experiments.
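The quoted flux estimate follows from dividing the local halo density by the WIMP mass and multiplying by a typical halo speed; a minimal sketch with assumed Standard Halo Model numbers:

```python
rho_dm = 0.3     # GeV cm^-3, local Dark Matter density (Standard Halo Model)
m_chi = 100.0    # GeV, assumed WIMP mass
v_mean = 2.2e7   # cm s^-1, i.e. ~220 km/s typical halo speed

n_chi = rho_dm / m_chi   # WIMP number density [cm^-3]
flux = n_chi * v_mean    # flux through the detector [cm^-2 s^-1]
print(f"{flux:.1e} WIMPs / cm^2 / s")  # ~6.6e4, i.e. of order 1e5
```

The result is of order 10⁵ cm⁻² s⁻¹ for a 100 GeV WIMP, consistent with the estimate quoted from [21]; the flux scales as 1/m_χ, as in the (100 GeV/m_χ) factor above.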
Event Rate

As mentioned above, a Direct Detection experiment aims to measure the rate R and the energies E_R of the nuclear recoils caused by WIMP interactions. In a time-independent DD experiment the energy dependence of the recoil rate is examined. Assume that Dark Matter interacts not only gravitationally, and consider the elastic scattering of a WIMP off a nucleus. Roughly speaking, in an experiment the rate of

expected events per unit time and per unit detector mass is:

R ≈ Σ_i N_i n_χ ⟨σ_iχ⟩  (1.8)

where the index i runs over the nuclear species in the detector, N_i is the number of target nuclei of species i, n_χ is the local WIMP number density and ⟨σ_iχ⟩ is the WIMP-nucleus cross section averaged over the relative WIMP velocity. More in detail, it is possible to show that the differential recoil rate as a function of energy [counts kg⁻¹ day⁻¹] is, basically, the convolution of the scattering cross section for an individual nucleus, dσ(V, E_R)/dE_R, with the WIMP speed distribution in the detector frame, V f(V, t):

dR(t)/dE_R = (ρ/(m_N m_χ)) ∫_{V_min}^{V_max} V f(V, t) (dσ(V, E_R)/dE_R) dV  (1.9)

where t is the time, m_N is the target nucleus mass and ρ/m_χ, the local WIMP density over the WIMP mass, is the local WIMP number density. The elastic scattering happens in the extreme non-relativistic regime; in the laboratory frame:

E_R = (μ_N² V²/m_N)(1 − cos θ*)  (1.10)

where θ* is the angle of the recoiling nucleus in the centre-of-mass frame with respect to the initial direction of the DM particle, and μ_N is the WIMP-nucleus reduced mass:

μ_N = m_N m_χ/(m_N + m_χ)  (1.11)

The recoil needs to be sufficiently large to be observed, so there is a minimum WIMP velocity which can cause a recoil of energy E_R:

V_min = √(E_R m_N/(2μ_N²))  (1.12)

For this reason the lower limit of integration in equation 1.9 is V_min. The upper limit is formally infinite, but there is a maximum velocity given by the local escape speed in the galactic rest frame for WIMPs which are gravitationally bound to the Milky Way. At this point the two ingredients needed to compute the event rate are the WIMP-nucleus cross section and the velocity distribution. The way in which a WIMP scatters off a nucleus depends on the WIMP interaction properties, basically on the strength of the WIMP-quark interaction. There are two ways a WIMP can scatter off a nucleus: via a spin-dependent interaction or a spin-independent interaction.
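Equations 1.10 and 1.11 fix the typical energy scale of the expected recoils. A small sketch (the target, WIMP mass and speed are assumed example values) evaluates the maximum recoil energy, E_R,max = 2μ_N²V²/m_N, obtained from equation 1.10 at cos θ* = −1:

```python
import math

U = 0.9315    # GeV per atomic mass unit
C = 2.998e5   # speed of light [km/s]

def e_recoil_max_keV(m_chi_GeV, A, v_kms):
    """Maximum recoil energy E_R = 2 mu^2 V^2 / m_N (Eq. 1.10 at cos(theta*) = -1)."""
    m_N = A * U                                  # target nucleus mass [GeV]
    mu = m_chi_GeV * m_N / (m_chi_GeV + m_N)     # reduced mass, Eq. (1.11)
    beta = v_kms / C                             # speed in units of c
    return 2 * mu**2 * beta**2 / m_N * 1e6       # GeV -> keV

# 100 GeV WIMP at ~220 km/s on a xenon nucleus (A = 131): a few tens of keV
print(round(e_recoil_max_keV(100, 131, 220), 1))
```

The recoils are at the keV scale, which is why DD detectors need very low energy thresholds; a lighter WIMP deposits far less energy, foreshadowing the threshold discussion below.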
So, isolating the spin-dependent (SD) and spin-independent (SI) contributions, the WIMP-nucleus cross section in equation 1.9 can be written in the following form:

dσ/dE_R = (dσ/dE_R)_SI + (dσ/dE_R)_SD  (1.13)

and, introducing the respective form factors:

dσ/dE_R = (m_N/(2μ_N² V²)) [σ_SI F²_SI(E_R) + σ_SD F²_SD(E_R)]  (1.14)

The spin-independent term arises from scalar or vector couplings, while the spin-dependent term arises from pseudoscalar or axial-vector couplings. In the spin-independent case the WIMP-nucleus cross section scales as A², amplifying the sensitivity of experiments using nuclei with large A, while in the spin-dependent case the cross section is proportional to a function of the nuclear angular momentum (calculations in [21]). The differential event rate also depends on the velocity distribution. According to the Standard Halo Model the local Dark Matter density is ρ_DM ≈ 0.3 GeV cm⁻³ and the speed distribution is Maxwellian:

f(V) = (2πσ²)^(−3/2) exp(−V²/(2σ²))  (1.15)

with an rms speed of about 270 km/s and V₀ = 220 km/s. Combining all this information, under some simplifying assumptions (a WIMP with equal couplings to protons and neutrons), it is possible to calculate the expected WIMP nuclear differential recoil rate for a spin-0 target. Current DD experiments do not aim to measure a spectrum of the rate as a function of the recoil energy (the expected interaction rate is too low to attempt to measure a spectrum); they instead attempt to identify and count WIMP-induced recoils above a certain energy threshold (set by the detector efficiency and backgrounds). For this reason the predicted total integrated WIMP-nuclear recoil rate above a certain threshold is studied for different target materials (such as xenon, argon, germanium and sodium) for a 100 GeV WIMP with a fixed scattering cross section and galactic halo parameters. The two elements which play a role in determining the shape of the distributions are the A² factor and the nuclear form factor.
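Equation 1.12 also shows why the detection threshold matters so much for low-mass WIMPs: at a fixed threshold, heavier targets require a larger minimum WIMP speed. A sketch with assumed example numbers (5 keV threshold, 10 GeV WIMP, local escape speed taken as ≈ 544 km/s):

```python
import math

U = 0.9315    # GeV per atomic mass unit
C = 2.998e5   # speed of light [km/s]
V_ESC = 544.0 # km/s, approximate local galactic escape speed (assumed value)

def v_min_kms(E_R_keV, m_chi_GeV, A):
    """Minimum WIMP speed producing a recoil E_R (Eq. 1.12):
    V_min = sqrt(E_R * m_N / (2 mu^2))."""
    m_N = A * U
    mu = m_chi_GeV * m_N / (m_chi_GeV + m_N)   # reduced mass, Eq. (1.11)
    E_R = E_R_keV * 1e-6                       # keV -> GeV
    return math.sqrt(E_R * m_N / (2 * mu**2)) * C

# A 10 GeV WIMP with a 5 keV threshold, for three target nuclei
for A, name in ((131, "Xe"), (73, "Ge"), (28, "Si")):
    v = v_min_kms(5.0, 10.0, A)
    tag = "detectable" if v < V_ESC else "above escape speed"
    print(f"{name}: V_min ~ {v:.0f} km/s -> {tag}")
```

With these numbers the required speed on xenon exceeds the galactic escape speed, while germanium and silicon can still see the recoil, illustrating why lighter targets and lower thresholds give sensitivity to low-mass WIMPs, as noted below.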
The comparison between different target materials shows that heavier elements gain in rate only if the detection threshold is low enough to take advantage of the form-factor contribution. This is the reason why DM experiments with silicon and germanium targets have good sensitivity to low DM masses (for instance the Cryogenic Dark Matter Search experiment, CDMS), while argon targets have better sensitivity to high DM masses (the DarkSide experiment). By counting the WIMP-induced recoils above a certain energy threshold it is possible to set upper limits on the WIMP-nucleon cross section. At leading order, the exclusion plots of different experiments show a common behaviour (for example fig. 1.8): at low WIMP masses there is a steep slope, reflecting the fact that each detector has an energy threshold (only recoil energies above that threshold can be detected; for this reason, DD experiments are not able to detect very low mass WIMPs), while the slope at higher WIMP masses comes from the 1/m_χ dependence of the rate.

Status up to now

Current experiments are more sensitive to spin-independent Dark Matter scattering (thanks to the A² dependence of the WIMP-nucleus cross section) than to spin-

Figure 1.6: Constraints on spin-independent WIMP-nucleon cross sections as a function of WIMP mass as of Summer 2013 [22]

dependent Dark Matter scattering (whose WIMP-nucleon cross section depends on the total nuclear angular momentum of the target nuclei). Current limits for SI searches are shown in figure 1.6, while limits for SD searches are shown in figures 1.7a and 1.7b. A summary of WIMP-nucleon spin-independent cross section limits, hints of WIMP signals and projections for the future is shown in figure 1.8. The leading experiment for spin-independent searches is LUX, which presented updated results in February 2014 [23]. So far, no statistically significant signal of DM has been observed. The CDMS-lite (low ionization threshold) experiment focused on low-mass WIMPs and in 2013 detected three candidate events [24], which set an upper limit on the cross section for a WIMP of mass 10 GeV/c². The latest results from the LUX experiment also show no evidence of WIMP signals [23]. Current direct detection dark matter experiments will reach their projected sensitivities within the next few years. Meanwhile, the second generation of DM experiments is under design. The second-generation (G2) experiments will be built as ton-scale experiments to increase their sensitivity. They will also be able to probe Higgs-mediated cross sections, and will extend their sensitivity both to GeV low-mass WIMPs and to TeV masses beyond the reach of colliders. The U.S. Department of Energy (DOE) recently decided that the future funded experiments it will bet on are LUX and CDMS for the WIMP search and ADMX for the axion search.

Indirect Detection

A big effort in the search for Dark Matter is put into Indirect Detection experiments. The idea is that, if WIMPs are thermal relics from the early Universe, WIMPs can annihilate today as well.
This is expected to happen in regions where the Dark Matter density is high, such as the galactic center where the dark matter density profile is expected to

(a) Spin-dependent WIMP-proton cross sections (b) Spin-dependent WIMP-neutron cross sections

Figure 1.7: Constraints on spin-dependent WIMP-proton cross sections (figure 1.7a) and on spin-dependent WIMP-neutron cross sections (figure 1.7b) as a function of WIMP mass, as of Summer 2013 [22]

Figure 1.8: Summary of WIMP-nucleon spin-independent cross section limits (solid curves), hints for WIMP signals (shaded closed contours) and projections (dot and dot-dashed curves) for U.S.-led direct detection experiments that are expected to operate over the next decade. Also shown is a band indicating the cross sections where WIMP experiments will be sensitive to backgrounds from solar, atmospheric, and diffuse supernova neutrinos. [22]

grow as a power law of the radius. The annihilation products are Standard Model particles (gamma rays, neutrinos, electrons, positrons, protons, antiprotons, deuterons and antideuterons). It is therefore possible to search for WIMPs by observing these potentially detectable Standard Model particles, using either ground-based instruments (like HESS, or neutrino telescopes such as IceCube) or satellites (the Fermi-LAT telescope, PAMELA, AMS). The dark matter annihilation signal depends both on particle physics properties, such as the annihilation cross section, and on the density distribution of Dark Matter. The key point is to be able to distinguish the background coming from known astrophysical processes from the Dark Matter signal. For this reason, among the secondary products, antiparticles, gamma rays and neutrinos are particularly interesting: antiparticles (positrons, antiprotons, antideuterons) are less abundant than the respective particles, while gamma rays travel in straight lines and are practically unabsorbed in the local Universe. AMS-02 recently published results on energetic cosmic-ray electrons and positrons which are consistent with a Dark Matter particle of mass of the order of 1 TeV [27]. To determine whether the observed phenomenon comes from DM or from astrophysical sources such as pulsars, other measurements are needed. A smoking-gun Dark Matter signature would be a line in the gamma-ray spectrum. The Fermi-LAT experiment has observed a signal which could be interpreted as a gamma-ray line from dark matter annihilation [25]. It cannot be considered evidence for the discovery of DM, because other explanations, such as instrumental issues, could cause the recorded excess and are still not ruled out.

Collider Searches

The third strategy adopted to look for Dark Matter is the one pursued by experiments at high-energy particle colliders.
If Dark Matter has some coupling to normal matter, it can be produced in collisions at particle accelerators. In particular, if DM couples to nuclear matter, it can be produced at the LHC. There are many ways in which DM searches can be pursued at colliders. One way is to probe specific theories and models, by looking at all the decay channels predicted by the theory. This is the strategy adopted by SUSY searches, for example; the drawback is a complex scenario with a plethora of possible channels. The simplest scenario in which to reveal DM particles at colliders is the situation where Dark Matter is the only new state accessible to our experiments. Once produced, if DM is weakly interacting and stable enough, it will pass through the detector without leaving any trace and without decaying into other particles, similarly to what neutrinos do. Therefore, the existence of Dark Matter particles escaping the detector may be inferred from an imbalance in the visible transverse momentum in the event (section 3.5). The problem is that it is impossible to trigger on an event where only invisible particles are in the final state; hence the need for a Standard Model particle to tag the event. The minimal requirement is a Standard Model particle from initial state radiation (ISR). In this simple case, the radiated SM particle is not correlated with the DM production mechanism and is only needed to tag the event. Therefore, the most model-independent DM searches at colliders are the so-called mono-X searches, characterized by large missing transverse

momentum and an energetic particle from ISR, such as a single jet (mono-jet), a single photon (mono-photon) or a boson (W or Z). These searches are a striking signature for new physics: besides WIMP pair production, they can be interpreted in terms of ADD large extra dimensions, SUSY-related searches, in some cases an invisibly decaying Higgs, and other exotic channels. At the LHC, and at hadronic colliders in general, the most sensitive search is the mono-jet. The mono-photon search is a very compelling signature at lepton colliders, but it also has good sensitivity at the LHC and, thanks to the presence of a photon, provides a very clean signature.

1.5 Effective Field Theories

To interpret results and to set limits on the existence of DM, it is important to have a theoretical structure describing how DM interacts with the SM. The simplest and most economical description is the one provided by Effective Field Theories (EFT). As the precise nature of the interaction is not known, EFTs are used to parameterize our ignorance. An EFT is the low-energy, non-renormalizable description of a fundamental theory: if the energy scale of the considered process is much lower than the masses of the mediators involved in the interaction, the mediators can be integrated out and ignored. The EFT approach is analogous to the Fermi theory, which applies at energies much smaller than the W boson mass. In the EFT approach, if there is only one operator, the interaction is described by two parameters: the DM mass (m_χ) and the suppression scale M_* (i.e. the scale above which Dark Matter particles interact with Standard Model fermions). There are some conditions which need to be verified for EFT validity. To stay in the perturbative regime, a constraint on the couplings is set (g_SM, g_DM < 4π, where g_SM and g_DM are the couplings of the mediator to SM and DM particles respectively).
Supposing a mediator of mass M, the suppression scale M_* is defined as:

\[
M_* \equiv \frac{M}{\sqrt{g_{SM}\, g_{DM}}} \qquad (1.16)
\]

The interaction can be described by an EFT if the momentum transfer Q_tr is much smaller than the mediator mass M (Q_tr < M = \(\sqrt{g_{SM} g_{DM}}\, M_*\)). The EFT approach is well suited to Direct Detection experiments, where the momentum exchange is very small (a few keV) and the masses of the particles other than the WIMPs are expected to be much heavier. The problem is that at the LHC the momentum transfer involved in the interaction can be so high that the EFT is not well justified. The LHC probes a different momentum-transfer regime with respect to Direct Detection experiments, so it is possible to compare LHC and DD results only within the EFT framework, whose assumptions may not hold. The validity of the EFT approach at the LHC is under discussion and an intense study of this topic is ongoing [26].
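These validity conditions can be sketched numerically (the function names are mine): check the perturbativity bound and the requirement Q_tr < M = √(g_SM g_DM) M_*.

```python
import math

def suppression_scale(M_mediator, g_SM, g_DM):
    """Suppression scale M* = M / sqrt(g_SM * g_DM) of eq. 1.16."""
    return M_mediator / math.sqrt(g_SM * g_DM)

def eft_valid(Q_tr, M_star, g_SM, g_DM):
    """The EFT description is justified only if the couplings are perturbative
    (g < 4*pi) and the momentum transfer stays below the mediator mass,
    Q_tr < M = sqrt(g_SM * g_DM) * M*."""
    perturbative = g_SM < 4 * math.pi and g_DM < 4 * math.pi
    below_mediator = Q_tr < math.sqrt(g_SM * g_DM) * M_star
    return perturbative and below_mediator

# direct detection: Q_tr of a few keV (~1e-6 GeV) vs a TeV-scale M* -> valid
# LHC: Q_tr can reach the TeV scale and exceed the mediator mass -> invalid
```

With unit couplings and M_* = 1 TeV, a keV-scale momentum transfer trivially satisfies the condition, while a 2 TeV momentum transfer violates it, which is exactly the tension between DD and LHC regimes described above.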

1.6 Models for physics beyond the Standard Model

As introduced in 1.2, there are problems which motivate the idea that the SM is an effective model of a more fundamental theory. In the following sections, three scenarios for physics beyond the SM will be reviewed. These models are of particular interest in this thesis because they predict new observable phenomena within the energy reach of the LHC that would appear with a mono-photon signature.

Graviton production in the ADD scenario

One of the solutions to the hierarchy problem (1.2) involves large extra dimensions. Arkani-Hamed, Dimopoulos and Dvali proposed a model (called ADD) where new spatial extra dimensions are added to the four-dimensional space-time. In particular, gravity propagates in the (4+n)-dimensional bulk of space-time, while the SM fields are confined to our usual four dimensions. In this model, the four-dimensional Planck scale, M_Pl, is related to the fundamental (4+n)-dimensional Planck scale, M_D, by:

\[
M_{Pl}^2 \sim M_D^{2+n} R^n \qquad (1.17)
\]

where n is the number of extra dimensions and R is a parameter related to their size. This relation is of particular interest because, depending on R, the gravity scale M_D could be as low as the electroweak scale M_EW. If these two scales are close to each other, the hierarchy problem would be overcome. If M_D ≈ M_EW, from eq. 1.17 R turns out to be:

\[
R \sim 10^{\frac{30}{n} - 17}\,\mathrm{cm}\,\left(\frac{1\,\mathrm{TeV}}{M_{EW}}\right)^{1+\frac{2}{n}} \qquad (1.18)
\]

The case n = 1 is excluded by experimental evidence, since in this scenario R ≈ 10¹³ cm and deviations of gravity at solar-system scales would be predicted. The ADD model is a low-energy effective field theory which predicts the existence of a physical particle, the graviton. The graviton is predicted to be a spin-two particle which mediates the gravitational interaction. In the framework of compact extra dimensions, gravitons appear as a sum of states of different mass m_i (Kaluza-Klein states).
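Equation 1.18 can be evaluated numerically; the sketch below (my own) shows why n = 1 is excluded while n ≥ 2 gives sub-millimetre extra dimensions.

```python
def add_extra_dimension_size_cm(n, M_D_TeV=1.0):
    """Size R of the n compactified extra dimensions in the ADD model,
    R ~ 10^(30/n - 17) cm * (1 TeV / M_D)^(1 + 2/n)   (eq. 1.18)."""
    return 10.0 ** (30.0 / n - 17.0) * (1.0 / M_D_TeV) ** (1.0 + 2.0 / n)

# n = 1: R ~ 1e13 cm, solar-system scale -> excluded by observations
# n = 2: R ~ 1e-2 cm, probed by short-distance tests of gravity
```

For M_D = 1 TeV the function returns about 10¹³ cm for n = 1 and about 0.01 cm for n = 2, matching the discussion above.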
This model is particularly interesting for the analysis discussed in this thesis because at the LHC the graviton modes (G) can be produced in association with a jet or a photon. In particular, fig. 1.9 shows processes where a graviton is produced in association with a quark or a gluon, while fig. 1.10a shows the production of a graviton with a photon. As gravitons do not interact within the detector, these processes have a typical mono-jet or mono-photon signature.

Direct production of WIMPs

Assuming that the WIMP particle χ is odd under some Z₂ symmetry, each coupling involves an even number of WIMPs. In the mono-photon analysis we look for the direct

Figure 1.9: Processes at leading order for graviton production in association with a quark or a gluon.

production of WIMP pairs in association with a photon. As WIMPs do not interact with the detector, the mono-photon signature consists of large missing transverse momentum (section 3.5) and an isolated photon. Different theoretical frameworks can be used to interpret WIMP production.

EFT

The first WIMP production mechanism which can be probed by the mono-photon analysis is the one where an effective field theory (section 1.5) is used to describe the possible interactions between WIMPs and partons. Figure 1.10b shows the corresponding Feynman diagram; the central circle represents our ignorance about the precise nature of the interaction. Different operators are introduced to mimic the nature of the mediator. In particular, the following ones are considered in the mono-photon analysis:

D1: scalar mediator coupling to quarks: \(\bar{\chi}\chi\,\bar{f}f\)
D5: vector mediator: \(\bar{\chi}\gamma^\mu\chi\,\bar{f}\gamma_\mu f\)
D8: axial-vector mediator: \(\bar{\chi}\gamma^\mu\gamma^5\chi\,\bar{f}\gamma_\mu\gamma^5 f\)
D9: tensor mediator: \(\bar{\chi}\sigma^{\mu\nu}\chi\,\bar{f}\sigma_{\mu\nu}f\)
C1: complex scalar mediator: \(\chi^\dagger\chi\,\bar{f}f\)

D1, D5 and C1 are spin-independent operators, while D8 and D9 are spin-dependent. For the D5, D8 and D9 operators it is possible to obtain an effective vertex by integrating out the mediator as follows:

\[
\sigma(pp \to \chi\bar{\chi}) \sim \frac{g_{SM}^2\, g_{DM}^2}{(Q_{tr}^2 - M_{mediator}^2)^2 + \Gamma^2 M_{mediator}^2} \;\to\; \frac{g_{SM}^2\, g_{DM}^2}{M_{mediator}^4} \sim \frac{1}{M_*^4}. \qquad (1.19)
\]

This is valid if Q_tr < \(\sqrt{g_{SM} g_{DM}}\, M_*\).
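The collapse of the full propagator in eq. 1.19 to the contact term can be checked numerically (a sketch with hypothetical numbers, not values from the analysis):

```python
def propagator_factor(Q_tr, M_med, Gamma):
    """Full s-channel factor 1 / ((Q_tr^2 - M^2)^2 + Gamma^2 * M^2)."""
    return 1.0 / ((Q_tr**2 - M_med**2) ** 2 + (Gamma * M_med) ** 2)

def contact_factor(M_med):
    """Contact (EFT) approximation of the same factor, 1 / M^4."""
    return 1.0 / M_med**4

# far below the mediator mass the two agree; near Q_tr ~ M the full
# propagator is resonantly enhanced and the contact approximation fails
ratio_low = propagator_factor(10.0, 1000.0, 10.0) / contact_factor(1000.0)
ratio_res = propagator_factor(999.0, 1000.0, 10.0) / contact_factor(1000.0)
```

For Q_tr = 10 GeV the ratio is essentially 1, while at Q_tr ≈ M the full propagator exceeds the contact term by orders of magnitude, which is one concrete way the EFT breaks down at LHC momentum transfers.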

(a) ADD graviton production in association with a photon. (b) DM production in the EFT framework. (c) Z′-like mediator DM in a simplified-model approach. (d) EFT DM model compatible with the Fermi-LAT line at 130 GeV. (e) Squark production and direct decay.

Figure 1.10: The mono-photon signature as a probe of the production of BSM particles in the various models studied in this analysis.

EFT model to compare the Fermi line to LHC data

As mentioned above, the Fermi-LAT experiment observed a peak at 130 GeV in the photon spectrum which could be interpreted as a DM signal. An effective field theory coupling pairs of DM particles to pairs of photons has been proposed [29], describing both a process resulting in a Fermi-like gamma line (\(\chi\bar{\chi} \to \gamma\gamma\)) and a γ + E_T^miss final state at the LHC. Figure 1.10d shows the Feynman diagram of this process.

Simplified Models

To study the validity of the EFT approach in the momentum-transfer range currently probed at the LHC, models with a Z′-like mediator with vector and pseudovector interactions can be considered. Figure 1.10c shows the Feynman diagram of this process.

Compressed Squark scenario

The mono-photon analysis is sensitive to an extension of the SUSY direct-squark simplified model. The process which can be probed by the mono-photon analysis is shown in figure 1.10e: the pair production of squarks of the first and second generations, which are taken to be degenerate in mass, followed by their decay into a quark and the lightest neutralino, which is the lightest supersymmetric particle (LSP). This process becomes interesting for the mono-photon analysis if the mass difference between the squark and the neutralino is of the order of 10 GeV (a situation referred to as a compressed spectrum). In this case the jet produced in the squark decay is very soft and often not identified as a signal jet. Zero-lepton SUSY analyses are optimized for other situations and often miss these compressed scenarios. If a photon is radiated from an initial quark or from one of the produced squarks, this process has a mono-photon signature: a single ISR photon and missing transverse momentum from the neutralinos.

1.7 Conclusions

This is the golden age of Dark Matter. Different approaches are used to discover Dark Matter particles, each able to probe a certain range of DM masses and cross sections. Collider searches are particularly compelling because they make it possible to set model-independent limits on the presence of new physics and to carry out model-independent searches (mono-X searches). Moreover, their results can be interpreted both in the framework of EFTs and in Simplified Models. With respect to Direct Detection experiments, in the EFT framework collider searches are more sensitive to low-mass WIMPs (DD experiments being limited by the detector energy threshold) and to spin-dependent interactions. Among the mono-X searches, the mono-jet and the mono-photon are the most promising at the LHC.
Even though at hadronic colliders the mono-jet has better sensitivity, the mono-photon analysis provides a particularly clean signature. Chapter 4 describes the mono-photon analysis in detail.

Chapter 2

The LHC and the ATLAS Detector

The data used in the analysis presented in this thesis were collected by the ATLAS detector in 2012 and consist of 20.3 fb⁻¹ of p-p collisions at √s = 8 TeV. ATLAS (A Toroidal LHC ApparatuS) [32] is one of the four main experiments at the Large Hadron Collider (LHC) at CERN. ATLAS is a general-purpose experiment with a broad research programme, ranging from the understanding of the mechanism of spontaneous symmetry breaking to searches for new physics beyond the Standard Model. Throughout this chapter I will discuss the main characteristics of the ATLAS detector. The description focuses on the experimental aspects relevant to my analysis, so I will omit many aspects which are not essential here. For a comprehensive and exhaustive overview, refer to [32].

2.1 The Large Hadron Collider

The LHC is designed to provide proton-proton collisions at a nominal centre-of-mass energy of 14 TeV and heavy-ion collisions with an energy of 5.5 TeV per nucleon; its design instantaneous luminosity is 10³⁴ cm⁻² s⁻¹. The LHC is located at CERN near Geneva and is built in a circular underground tunnel. It consists of a 27-kilometre ring of superconducting magnets with a number of accelerating cavities to boost the energy of the particles along the way. Inside the accelerator, two high-energy proton beams travel at relativistic speeds and are made to collide at four points. The proton beams travel in opposite directions in separate beam pipes kept at ultra-high vacuum. The two beams are kept in their orbits by thousands of superconducting magnets operating at a temperature of 1.9 K and providing a field above 8 T. Particles are accelerated and stored using a 400 MHz superconducting cavity system. The superconducting magnet system includes 1232 dipole magnets, which bend the beams, and 392 quadrupole magnets, which focus the beams so that the r.m.s. width of the beam in the transverse plane is about 17 µm.
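A quick cross-check of the quoted dipole field (my own estimate, with an assumed magnetic bending radius of about 2800 m): the field needed to keep a proton of momentum p [GeV/c] on a circle of radius ρ [m] follows from p = 0.3 B ρ.

```python
def dipole_field_tesla(p_GeV, bending_radius_m):
    """B [T] needed to bend a particle of momentum p [GeV/c] on a circle
    of radius rho [m], from the relation p = 0.3 * B * rho."""
    return p_GeV / (0.3 * bending_radius_m)

# 7 TeV protons on an assumed ~2800 m magnetic bending radius
B = dipole_field_tesla(7000.0, 2800.0)   # ~8.3 T, consistent with "above 8 T"
```

This recovers a field of about 8.3 T, in line with the value quoted in the text for the LHC dipoles.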
The four interaction points are surrounded by the LHC experiments: ATLAS [32] and CMS [33] are multipurpose experiments, designed to study high transverse-momentum events in the search for the Higgs boson and for phenomena beyond the Standard Model,

Figure 2.1: Schematic layout of the LHC. Along the ring the four main experiments are located: ATLAS, CMS, LHCb and ALICE.

LHCb [34] is an experiment devoted to b-quark physics and CP-violation studies, while ALICE [35] focuses on lead-ion collisions. A schematic layout of the LHC is shown in Figure 2.1. The LHC started operations on September 10, 2008, but immediately after, during the commissioning phase, a major accident imposed a one-year stop. During Fall 2009 operations started again, culminating in the first 900 GeV collisions, recorded by the LHC experiments from November 23, 2009, and followed shortly after by collisions at 2.36 TeV. For machine-safety reasons it was decided to limit the maximum centre-of-mass energy to 7 TeV, and the first collisions at this world-record energy took place on March 30, 2010. From then on, the number of protons per bunch and the number of bunches per beam were increased day by day. During 2012 the LHC ran at 8 TeV; the data used in this work were collected during that year. At the time of writing, the LHC is shut down for a series of updates and the experiments are undergoing essential maintenance and upgrades. This is the LHC's first long shutdown (LS1), needed to reach the nominal energy of 14 TeV. Collisions will start again in 2015 at the new energy frontier of 14 TeV. This will be an incredible milestone for the LHC, which could open unexpected scenarios for Particle

Physics, hopefully making it possible to reveal some hints of new physics.

Luminosity

The LHC has been designed with the main purpose of looking for the Higgs boson and revealing hints of new physics beyond the Standard Model. To reach these goals, not only is the highest possible centre-of-mass energy crucial, but the interaction rate must also be pushed as high as possible: the successful search for extremely rare processes characterized by low cross sections is possible only if the interaction rate is large enough to collect a significant sample of events. For this reason, one of the most important parameters denoting the quality of a collider is the luminosity. The interaction rate R = dN/dt (the number of events produced per second) is given by:

R = dN/dt = L σ  (2.1)

where L is the instantaneous luminosity and σ (measured in barns¹) is the cross section for a given process. Integrating the rate over time relates the total number of events collected in that period to the integrated luminosity L_int:

N = L_int σ.  (2.2)

The instantaneous luminosity depends only on the beam parameters. To understand the meaning of this definition, consider the simple example of a beam colliding on a fixed target. If the beam carries N particles per second and the target has density n [atoms/cm³] and thickness d, the rate of events is:

R = N n d σ  (2.3)

and the instantaneous luminosity is L = N n d [cm⁻² s⁻¹]. In the case of colliders, where two beams are made to collide, the luminosity is given by:

\[
L = \frac{N_{1b} N_{2b}\, n_b f}{4\pi \sigma_x \sigma_y}\, F \qquad (2.4)
\]

where: N_1b and N_2b are the numbers of particles per bunch in the two beams (at the LHC N_1b ≈ N_2b); n_b is the number of bunches per beam; f is the revolution frequency of the machine; σ_x and σ_y are the r.m.s.
widths of the beam distributions in the transverse plane, used to take into account the effective beam size;

¹ 1 barn [b] is defined as 1 b = 10⁻²⁸ m²

Figure 2.2: The peak instantaneous luminosity delivered to ATLAS per day versus time during the p-p runs of 2010, 2011 and 2012 (left), and the integrated luminosity recorded by ATLAS as a function of time (right). (From Ref. [31])

F is a geometric reduction factor (i.e. < 1), needed because the two beams collide with a non-zero crossing angle (they do not collide exactly head-on).

Table 2.1 summarizes some of the nominal parameters of the LHC:

Parameter | Nominal value for the LHC
Number of particles per bunch | 1.15 × 10¹¹
Number of bunches per beam | 2808
Revolution frequency | 11245 Hz
Relativistic gamma factor | 7000
Geometric luminosity reduction factor F | 0.84
Transverse beam size σ | 17 µm

Table 2.1: Some nominal parameters entering the luminosity definition for the LHC.

Figure 2.2 shows the luminosity delivered by the LHC in the past years and the integrated luminosity recorded by ATLAS as a function of time. An increase in luminosity can be achieved by changing the quantities entering equation 2.4, in particular by reducing the transverse beam size or increasing the number of particles per bunch. However, this is not so simple, due both to technological limitations and to undesired effects such as pile-up. Two classes of collision events can be considered:

Soft collisions: the most frequent events originate from long-distance inelastic collisions between the two incoming protons, in which the protons behave as elementary particles. The momentum transfer of these interactions is small; hence the particles produced in the final state have large longitudinal momentum but small transverse momentum p_T (see section 2.2.1) (⟨p_T⟩ ≈ 500 MeV). These events are called minimum bias.
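Plugging nominal parameters into equation 2.4 recovers the design luminosity. In this sketch (mine) the particles per bunch (1.15 × 10¹¹) and the revolution frequency (11245 Hz) are the standard LHC design values, assumed here since they are garbled in the table, and the inelastic cross section used for the pile-up estimate is likewise an assumption.

```python
import math

def instantaneous_luminosity(N1, N2, n_b, f_rev_hz, sigma_x_cm, sigma_y_cm, F):
    """Eq. 2.4: L = N1*N2*n_b*f / (4*pi*sigma_x*sigma_y) * F, in cm^-2 s^-1."""
    return N1 * N2 * n_b * f_rev_hz / (4.0 * math.pi * sigma_x_cm * sigma_y_cm) * F

# nominal LHC parameters (cf. Table 2.1); 17 um = 17e-4 cm
L = instantaneous_luminosity(1.15e11, 1.15e11, 2808, 11245, 17e-4, 17e-4, 0.84)

# mean pile-up per bunch crossing, mu = L * sigma_inel / (n_b * f_rev),
# with an assumed inelastic p-p cross section of ~80 mb = 8e-26 cm^2
mu = L * 8e-26 / (2808 * 11245)
```

This yields L ≈ 10³⁴ cm⁻² s⁻¹ and roughly 25 pile-up interactions per crossing, of the same order as the values discussed in the text and in figure 2.3.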

Figure 2.3: Mean number of interactions per bunch crossing, showing the pile-up conditions of the ATLAS data taking at 7 TeV and 8 TeV [31].

Hard collisions: hard collisions are rare events compared to soft collisions and are the most interesting ones from the experimental point of view. They are short-distance collisions between partons (the constituents of the protons, quarks and gluons, which carry a fraction of the total proton momentum) and are characterised by a high momentum exchange and the possible production of particles at large angle, high transverse momentum and high mass. Any hard scattering process between partons has an associated underlying event, characterized by many objects at low transverse momentum and small angle, which come from the remnants of the colliding protons.

Pile-up consists of the superposition of events: it originates mainly from soft collisions, which are not interesting and are treated as a background. The number of pile-up interactions per bunch crossing, µ, is proportional to L/f and increases with the peak luminosity. Figure 2.3 illustrates the pile-up conditions of the last years.

2.2 The ATLAS Detector

Introduction

The general layout of the ATLAS detector is shown in figure 2.4. The detector has a cylindrical symmetry around the beam line. It is 25 m in height and 44 m in length, with an overall weight of approximately 7000 tonnes. The ATLAS detector is forward-backward symmetric with respect to the interaction point. A right-handed Cartesian system is used in ATLAS (figure 2.5): the nominal interaction point is defined as the origin of the coordinate system, the z-axis is defined by the beam direction and the x-y plane is transverse to the beam direction.
The positive x-axis is defined as pointing from the interaction point to the centre of the LHC ring and the positive y-axis is defined as pointing upwards. The azimuthal angle φ is measured

Figure 2.4: General layout of the ATLAS detector.

Figure 2.5: Schematic view of the coordinate system used in ATLAS and generally in collider experiments.

Figure 2.6: Some useful pseudorapidity values and the corresponding θ values.

around the beam axis, and the polar angle θ is the angle from the beam axis. At hadron colliders it is useful to introduce another angular variable, the so-called rapidity:

\[
Y = \frac{1}{2}\ln\!\left(\frac{E + p_z}{E - p_z}\right)
\]

Indeed, in hadronic collisions the parton-parton system is boosted along the z-axis, and the rapidity is a particularly convenient variable because it has a simple transformation law under this boost (only a constant term is added under a boost along z, hence rapidity differences ΔY are invariant). For ultra-relativistic particles, the rapidity is equal to the pseudorapidity, which is linked to the polar angle θ and is defined as:

\[
\eta = -\ln\!\left(\tan\frac{\theta}{2}\right) \qquad (2.5)
\]

The plot in figure 2.6 shows the pseudorapidity for different values of θ. Throughout this work the pseudorapidity will be used instead of θ. Since protons are not elementary particles, p-p interactions involve partons, whose energies are not known. Assuming that the longitudinal momentum component of the partons is dominant with respect to their transverse momentum, it is possible to describe the interactions using kinematic quantities defined in the transverse plane, where the kinematics is closed and energy and momentum are conserved. Transverse quantities, such as the transverse momentum p_T and the missing transverse energy E_T^miss, are defined in the x-y plane. The transverse momentum p_T is defined as:

p_T = P sin(θ)  (2.6)

where P is the momentum of the particle. The distance ΔR in the pseudorapidity-azimuthal angle space is defined as \(\Delta R = \sqrt{\Delta\eta^2 + \Delta\phi^2}\).

Figure 2.7: Schematic view of the general structure of a modern collider experiment. From the interaction point (bottom) to the outer side (top) the main components are shown. Different particles undergo different interactions, hence they can be identified through the kind of interaction they are subject to.

General layout of ATLAS

Modern multi-purpose experiments at high-energy particle colliders share a common general layout. The underlying idea is to detect particles through their interactions with matter; detectors are therefore built as a series of different modules devoted to detecting different kinds of interactions. Fig. 2.7 shows a schematic view of the main components based on this principle. Starting from the interaction point, there are the tracker, the electromagnetic calorimeter, the hadronic calorimeter and the muon spectrometer. In the following sections these components are described separately.

Inner Detector

The inner part of the detector is the so-called Inner Detector (ID), a tracking detector [36]. The Inner Detector is devoted to the measurement of charged-particle

tracks. This task is essential for the momentum measurement of charged particles and for the reconstruction of secondary vertices. As an example of the ID role in particle identification, consider a photon and an electron: both are detected in the electromagnetic calorimeter, so the inner detector plays a crucial role in distinguishing them, because electrons are charged particles and in ideal conditions (ideal ID reconstruction, full detector acceptance...) have an associated track. The ID has an acceptance in pseudorapidity of |η| < 2.5 for particles coming from the interaction region, with full coverage in φ. The ID sits in a 2 T magnetic field provided by a solenoid and is composed of three complementary sub-detectors: the Pixel Detector, the SemiConductor Tracker and the Transition Radiation Tracker, illustrated in figures 2.8a and 2.8b. The Pixel detector is the innermost sub-detector; it consists of silicon pixel modules arranged in three concentric layers, providing three measurement points to reconstruct the track. The Pixel detector is very close to the beam interaction point, where an excellent track resolution is needed; its cost and enormous number of readout channels restrict its use to this inner region. The Pixel detector has an intrinsic hit-position resolution of 12 µm along R-φ and 115 µm along the z direction, and generally provides 3 points for each track crossing the detector. Beyond the pixel detector, the SemiConductor Tracker (SCT) completes the high-precision tracking, with eight hits per track expected. The SCT consists of modules of silicon strips arranged in four concentric barrels and two endcaps of nine disks each. The intrinsic hit resolution of the strips is 16 µm along R-φ and 580 µm along the z axis. The outermost ID module is the TRT.
It is made up of 4 mm diameter straw tubes, arranged parallel to the beam in the barrel region and radially in the end-caps. The TRT provides a large number of hits (typically 36 per track) but only the R-Φ information, for which it has an intrinsic accuracy of 130 µm per straw. The straw hits at the outer radius contribute significantly to the momentum measurement, since the lower precision per point compared to the silicon is compensated by the large number of measurements and the longer measured track length. The Inner Detector provides tracking measurements in a range matched by the precision measurements of the electromagnetic calorimeter. The electron identification capabilities are enhanced by the detection of transition-radiation photons in the xenon-based gas mixture of the straw tubes. The combination of precision trackers at small radii with the TRT at larger radius gives very robust pattern recognition and high precision in both R-Φ and z coordinates.

Calorimetry

The detectors dedicated to measuring the energy and position of particles (apart from muons) are the calorimeters. The calorimetric system [32] is composed of an electromagnetic compartment, dedicated to the measurement of electrons and photons, and a hadronic compartment, suited for jet reconstruction. For precise missing transverse energy measurements a full coverage in pseudorapidity (extending up to |η| < 4.9) is

Figure 2.8: (a) Inner Detector (ID) layout. (b) Detailed view of the barrel Inner Detector. The ID includes the Pixel detector, the SemiConductor Tracker (SCT) and the Transition Radiation Tracker (TRT).

provided.

Figure 2.9: View of the ATLAS calorimetric system.

The ATLAS calorimeters are sampling calorimeters: they are built as a series of alternating layers of active material and absorber material. The electromagnetic calorimeter uses liquid argon as active medium, while the barrel hadronic calorimeter employs a scintillator material.

Electromagnetic Calorimetry

The Electromagnetic (EM) Calorimeter is optimized to contain and detect electrons and photons. As mentioned before, it is built as a LAr (liquid argon) ionization chamber with lead absorbers. It is composed of two half-barrels, covering the pseudorapidity range |η| < 1.37, and two endcap regions, covering 1.52 < |η| < 2.37. The region 1.37 < |η| < 1.52 is called the crack region and suffers from poor performance because of the ID services passing through it; for this reason, in the analysis described in this work, photons entering the crack region are rejected. One of the fundamental parameters in the design of an EM calorimeter is the so-called radiation length X_0, defined as the mean distance over which a high-energy electron loses all but 1/e of its energy by bremsstrahlung. The EM calorimeter has a thickness of 24 X_0 for an almost full containment of the electromagnetic showers generated by electrons and photons. The EM calorimeter is a sampling detector with full azimuthal symmetry, obtained with accordion-shaped electrodes and lead absorbers in liquid argon. Figure 2.10 shows this geometry in detail. Between the electrodes, across the LAr medium, a high-voltage

potential is set. In this way, when a particle enters the detector, the ionization charges produced along its passage are collected by the electrodes.

Figure 2.10: Accordion geometry of the Electromagnetic Calorimeter. This geometry is designed to provide fully hermetic coverage and good segmentation.

In the barrel the accordion-shaped waves are parallel to the beam axis and their folding angle varies along the radius, in order to keep the liquid-argon gap as constant as possible. In the electromagnetic endcaps the accordion waves run axially and the folding angle varies with radius. In the endcaps the gap varies with pseudorapidity due to the accordion geometry, and the high voltage therefore needs to vary accordingly to maintain a constant calorimeter response as a function of pseudorapidity. The EM calorimeter is longitudinally segmented in three sections, called the strip, middle and back sections:

strips: the first layer starting from the interaction point. It is finely segmented in η to provide high resolution and an accurate position measurement; in addition it is used to distinguish photons from decaying neutral pions;

middle: the central layer of the EM calorimeter; being about 10 X_0 deep, it collects the largest fraction of the energy of the electromagnetic shower;

back: the last layer from the interaction point; it collects the tail of the electromagnetic shower which could leak into the hadronic calorimeter. Its thickness is about 2 X_0.

To precisely measure the energy of electrons and photons, upstream energy losses need to be accounted for. For this reason an additional very thin (~1 mm) layer, the presampler, is placed in front of the EM calorimeter to recover the energy lost by electrons and photons in the material before the calorimeter. The EM calorimeter segmentation values are shown in Table 2.2 for each section.

Table 2.2: Electromagnetic Calorimeter segmentation (Δη × ΔΦ) for the strip, middle and back sections.

The energy resolution of each sub-calorimeter was evaluated with beams of electrons and pions before their insertion in the ATLAS detector, and cross-checked in situ. The experimental measurements of the relative energy resolution, after noise subtraction, for the EM calorimeter have been fitted with the expression:

σ(E)/E = a/√E ⊕ c   (2.7)

where E is in GeV, a ≈ 10%-17% √GeV and c ≈ 0.7% [38]. The energy scale is determined by in situ measurements with a precision of a few %.

Hadronic Calorimetry

At hadron colliders a huge number of quarks, gluons and hadrons is produced. Quarks and gluons hadronize, giving rise to bundles of collimated particles reconstructed as jets, while hadrons interacting with the detector material create showers of particles from inelastic scattering off nuclei. All particles other than electrons and photons do not release all their energy in, and are not fully contained by, the EM calorimeter: those interacting via the strong interaction reach the end of the EM calorimeter without being stopped and enter the hadronic calorimeter. Hadronic showers are complicated processes, dominated by inelastic scattering off nuclei. The mean free path for nuclear interactions (i.e. the mean distance before a hadron interacts with nuclear matter) is called λ_I and scales as λ_I ∝ A^(1/3)/ρ, where ρ is the nuclear density and A the mass number of the atomic species. The hadronic calorimeter has a depth of 9.7 λ_I in the barrel and 10 λ_I in the endcaps, ensuring good containment and good resolution for jet energy measurements.
In particular, the hadronic calorimeter consists of the Tile calorimeter in the central region, the liquid-argon hadronic endcap calorimeter (HEC) and the liquid-argon forward calorimeter (FCal). The absorber materials for the HEC and FCal are copper and tungsten, while the active material is liquid argon. The Tile calorimeter is made of a steel absorber with a doped-polystyrene scintillator as active medium: particles release their energy through interactions with the scintillator material, light is consequently produced, and the energy measurement is derived from the collected light. The energy resolution of the barrel and endcap

detectors is σ(E)/E = 50%/√E[GeV] ⊕ 3%, while the energy resolution of the forward calorimeter is σ(E)/E = 100%/√E[GeV] ⊕ 10%.

Figure 2.11: Overview of the ATLAS muon spectrometer components.

Muon Detector

Muons are leptons much heavier than electrons (m_µ ≈ 200 m_e). For this reason muons typically lose their energy through ionization and are therefore the most penetrating particles. The detector devoted to identifying muons and measuring their energy and position is the outermost one from the interaction point. The muon system is shown in figure 2.11. To achieve high momentum resolution it is designed as an air-core spectrometer. A large-volume magnetic field is needed to bend the particle trajectories; it is provided by a barrel toroid in the region |η| < 1.4, by two smaller endcap magnets in the region 1.6 < |η| < 2.7, and by a combination of the two in the transition region 1.4 < |η| < 1.6. The tracking system is realized in the barrel by three concentric modules at radii 5, 7.5 and 10 m from the beam pipe, and in the endcaps by wheels at 7.4, 10.8, 14 and 21.5 m from the interaction point along the z axis. These modules are the Monitored Drift Tubes (MDTs) and, at large pseudorapidities, the Cathode Strip Chambers (CSCs). The muon detector has its own trigger system, covering the region up to |η| < 2.4 and composed of Resistive Plate Chambers (RPCs) in the barrel and Thin Gap Chambers (TGCs) in the end-caps. The trigger system provides bunch-crossing identification (BCID), well-defined p_T thresholds and a measurement of the muon coordinate in the direction orthogonal to the chambers dedicated to precision tracking. The momentum resolution σ(p_T)/p_T of the ATLAS muon spectrometer is about 2-3% over most of the kinematic range, reaching 10% for momenta of the order of 1 TeV.
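The stochastic ⊕ constant parametrizations quoted above are quadrature sums and can be evaluated numerically; a minimal sketch, using the parameter values from the text:

```python
import math

def calo_resolution(E, a, c):
    """Relative energy resolution sigma(E)/E = a/sqrt(E) (+) c,
    with the two terms added in quadrature; E in GeV."""
    return math.hypot(a / math.sqrt(E), c)

# Tile barrel/endcap parametrization from the text: 50%/sqrt(E) (+) 3%
print(round(calo_resolution(100.0, 0.50, 0.03), 4))  # -> 0.0583
```

Note how at high energy the constant term dominates: for E → ∞ the resolution approaches c, which is why the constant term drives the performance for TeV-scale jets.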

Trigger System

The bunch-crossing rate at the LHC is 40 MHz, which corresponds, at the design luminosity of 10^34 cm^-2 s^-1, to an interaction rate of about 1 GHz. Since the maximum rate at which ATLAS can record events is 200 Hz, the rate of selected events must be reduced, recording only those events of potential physical interest. For this reason a sophisticated trigger system [39] is used to reduce the event rate by a factor of about 10^7 while retaining an excellent efficiency for interesting events (high-p_T events against minimum-bias processes). In order to reduce dead time and keep a high efficiency for interesting events, the trigger system is divided into three levels, called Level 1 (L1), Level 2 (L2) and Event Filter (EF). Each level uses a larger amount of data (i.e. more detector channels) than the previous one. The aim of L1 is to make a first coarse selection, looking for high transverse-momentum objects (muons, electrons, photons, jets, and τ leptons decaying into hadrons, as well as large missing and total transverse momentum); L1 uses information from a subset of detectors. When such an object is found, a Region of Interest (RoI) is identified: the η and Φ region, the sub-detector hit and the threshold fired. This information seeds L2. The L1 output rate is 75 kHz. L2 then uses the information from all detectors around the RoI provided by L1 and performs a first reconstruction of physics objects, reducing the trigger rate to about 4 kHz before the last selection stage. This last stage is carried out by the EF, which uses the full granularity of each sub-detector and algorithms similar to the offline ones. It reduces the rate to 200 Hz, at which raw data are recorded for offline analyses.
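As a sanity check on the quoted numbers, the rate-reduction factors implied by the three trigger levels are simple arithmetic (the overall 10^7 figure refers to the ~1 GHz interaction rate rather than the 40 MHz bunch-crossing rate):

```python
# Trigger rates quoted in the text, in Hz
rates = {"interactions": 1e9, "crossings": 40e6,
         "L1": 75e3, "L2": 4e3, "EF": 200.0}

rejection_L1 = rates["crossings"] / rates["L1"]   # ~533 on bunch crossings
rejection_L2 = rates["L1"] / rates["L2"]          # ~19
rejection_EF = rates["L2"] / rates["EF"]          # 20
total = rates["interactions"] / rates["EF"]       # 5e6, i.e. of order 10^7
```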

Chapter 3
Physics objects reconstruction

This chapter introduces the procedures used in ATLAS to reconstruct the main physics objects and quantities, focusing on those relevant for the analysis described in this thesis. Only the main aspects and features of the reconstruction algorithms are presented [37].

3.1 Electrons and Photons

The main signature of electrons and photons is the presence of a cluster in the electromagnetic calorimeter. Unlike photons, electrons, being charged particles, leave a track in the inner detector pointing to the cluster. Conversely, a cluster with no tracks pointing to it can be classified as an unconverted photon candidate (i.e. a photon which has not converted into an electron-positron pair). The experimental signature of a converted photon is the presence of two tracks from a secondary vertex pointing to the cluster. As photons play a central role in this analysis, the main features of photon reconstruction are discussed in detail in this section.

Photons

Photon reconstruction

The photon reconstruction process starts from the search for energy deposits in the EM calorimeter. A cluster-finding algorithm called sliding window is used to look for potential photon clusters ([40], [41]). The algorithm starts from energy deposits ("seeds") in the EM calorimeter and forms rectangular windows of fixed size around the seed; the window size is 3 × 5 cells in η × Φ in the middle layer, and a transverse energy exceeding 2.5 GeV is required. If such a cluster is found, it is a potential photon/electron cluster. To distinguish a photon from an electron, at first order a track-matching check is performed: the track is required to be reconstructed within a window in η × Φ around the cluster barycentre, and to have an associated energy not less than 10% of the cluster energy. If a track with

these features is found, the cluster is taken as an electron candidate; otherwise it is taken as an unconverted photon candidate. Clusters matched to pairs of tracks consistent with originating from a secondary vertex are classified as converted photons. Ambiguous cases, for example when only one track compatible with a conversion vertex is found, are treated as both electron and photon candidates. The efficiency to reconstruct a photon with p_T > 25 GeV is 96%, while the efficiency to identify a photon as converted is 80%. In 3-10% of cases, electrons are reconstructed as photons.

Photon identification

Once photons are reconstructed, a list of photon candidates is available. The second step is the so-called photon identification, which aims at discriminating real photons from fake photons originating, for example, from misreconstructed jets. The photon identification is based on variables describing the longitudinal and transverse profile of the electron/photon shower in the EM calorimeter. For example, a photon deposits only a small fraction of its energy in the hadronic calorimeter, and its energy deposit in the second layer of the EM calorimeter is expected to be narrower than that of a jet. Information from the first layer (which has a high η granularity) is used to separate single photons from the photon pairs coming from the decay π0 → γγ. Below I briefly list the photon selection variables.

Energy ratios:

R_had: the ratio of the transverse energy measured in the hadronic calorimeter to the transverse energy of the photon cluster;

R_had1: the ratio of the transverse energy in the first sampling layer of the hadronic calorimeter to the transverse energy of the photon cluster.

Second EM layer variables: electromagnetic clusters are narrower than hadronic clusters.
These variables are used to quantify the lateral spread of the shower:

w_2: characterizes the lateral width of the shower in η, over a region of 3 × 5 cells in η × Φ around the centre of the photon cluster. It is defined as:

w_2 = sqrt( Σ_i E_i η_i² / Σ_i E_i − (Σ_i E_i η_i / Σ_i E_i)² )   (3.1)

where i runs over the cell index from 0 to 14 and η_i is the η position of the barycentre of the i-th cell;

R_η: measures the energy deposited immediately outside the cluster in the η direction. It is defined as:

R_η = E^S2_{3×7} / E^S2_{7×7}   (3.2)

where E^S2_{x×y} is the energy contained in x × y cells (η × Φ) of the second layer, around the cluster barycentre;

R_Φ: measures the spread in Φ of the energy of the candidate. It is defined as:

R_Φ = E^S2_{3×3} / E^S2_{3×7}   (3.3)

where E^S2_{x×y} is defined as for R_η.

Strip variables: these variables characterize the shower profile in the strip layer. They exploit the fine η granularity of the strips to distinguish single photons (which should produce a well-defined single peak) from multiple photons (for example the photon pair arising from the neutral-pion decay).

F_side: measures the lateral spread in η of the deposited energy. It is defined as:

F_side = (E^S1_{7×1} − E^S1_{3×1}) / E^S1_{7×1}   (3.4)

where E^S1_{x×y} is the energy contained in x × y strips in η × Φ around the strip with the largest energy;

w_s,3: the energy-weighted shower width in η over the three strips centred on the strip with the largest measured energy. It is defined as:

w_s,3 = sqrt( Σ_{i=0}^{2} E_i (i − i_max)² / Σ_i E_i )   (3.5)

where the index i runs over the strip number and i_max is the index of the strip with the largest energy deposit;

w_s,tot: defined in the same way as w_s,3, but measured over all the strips in a region of η × Φ covering 20 × 2 strips;

ΔE: quantifies the presence of a second peak in the energy profile. It is defined as:

ΔE = E^S1_{max2} − E^S1_{min}   (3.6)

where E^S1_{max2} is the second-largest energy deposit in the strips and E^S1_{min} is the energy of the strip with the minimum energy between the two maxima. For candidates without a distinguishable second peak, ΔE is close to zero, while candidates with a second peak have a larger value;

E_ratio: a measure of the asymmetry of the second peak relative to the first. It is defined as:

E_ratio = (E^S1_{max1} − E^S1_{max2}) / (E^S1_{max1} + E^S1_{max2}).   (3.7)
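As an illustration, the strip variables above can be computed from an array of strip energies. A minimal sketch with toy numbers (not real calorimeter data; for simplicity the second maximum is taken as the second-highest strip, whereas the real algorithm looks for a second local maximum):

```python
# Toy strip-layer energy profile in eta (arbitrary units)
E = [0.1, 0.5, 6.0, 1.2, 0.3, 0.8, 2.5, 0.4]

i_max = max(range(len(E)), key=lambda i: E[i])  # leading strip

# w_s,3: energy-weighted width over the 3 strips around i_max (eq. 3.5)
window = range(i_max - 1, i_max + 2)
w_s3 = (sum(E[i] * (i - i_max) ** 2 for i in window)
        / sum(E[i] for i in window)) ** 0.5

# Second maximum and the minimum strip between the two maxima (eqs. 3.6-3.7)
i_max2 = max((i for i in range(len(E)) if i != i_max), key=lambda i: E[i])
lo, hi = sorted((i_max, i_max2))
dE = E[i_max2] - min(E[lo:hi + 1])
E_ratio = (E[i_max] - E[i_max2]) / (E[i_max] + E[i_max2])
```

A genuine single photon gives a small w_s3 and a ΔE near zero; a π0 → γγ candidate produces a second peak and hence larger ΔE and smaller E_ratio.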

Loose and tight selection criteria

There are two sets of selection criteria, called loose and tight. The loose selection criteria are the less strict ones and are used in this analysis to perform a preliminary photon selection; the tight selection criteria are used to select the final photon candidates. The loose selection uses only the variables based on the second sampling layer and is the same for converted and unconverted photons. The so-called tight selection uses all the variables listed above, with different criteria for converted and unconverted photons; the cut values are optimized to reach the largest background rejection for a photon efficiency of 80%. The differences observed between data and MC in the discriminating variables are measured by comparing the shower-shape distributions and parametrized as simple shifts. These correction parameters, called fudge factors, are computed as the difference between the means of a given variable in data and MC; they are then applied to the photon discriminating variables in MC samples in order to obtain corrected efficiencies. The photon identification efficiency is measured in data, and scale factors are applied to correct the efficiencies predicted by MC. The identification efficiency varies as a function of E_T^γ; for the photons in this analysis it is >95%.

Electron identification

As explained above, a cluster is considered an electron candidate if a matching track is found and no conversion is flagged. This early classification allows different corrections to be applied to electron candidates and is the starting point of a more refined identification.
Three levels of electron quality are defined (loose, medium, tight):

Loose selection: performs a simple electron identification based only on limited information from the calorimeters. Cuts are applied on the hadronic leakage and on shower-shape variables derived from the middle layer of the EM calorimeter only. This set of cuts provides excellent identification efficiency but poor background rejection;

Medium selection: improves the background rejection by adding cuts on the energy deposits in the strips of the first EM layer and on tracking variables. The strip-based cuts are effective for e-π0 separation; the tracking variables include the number of pixel hits, the number of silicon hits (pixel plus SCT) and the transverse impact parameter. The medium cuts increase the jet rejection by a factor of 3-4 with respect to the loose cuts, while reducing the identification efficiency by 10%;

Tight selection: in addition to the medium cuts, applies stricter selections in order to reject electrons from conversions and the dominant background from charged hadrons. Two different final selections are available within the tight category, optimised differently for

isolated and non-isolated electrons: for isolated electrons an additional energy-isolation cut is applied to the cluster, while for non-isolated electrons tighter cuts on the TRT information further remove the background from charged hadrons.

Photon isolation

The leading photon in this analysis is selected only if it is isolated, i.e. isolated from hadronic activity. The idea is to measure the energy surrounding a photon candidate: if it exceeds a certain threshold, the photon is considered not isolated. This helps to further reject jets with a high electromagnetic content. This section explains how the photon isolation is computed; the same concepts apply to electrons. The isolation energy is measured in the calorimeters: it is defined as the scalar sum of the transverse energy in all calorimeter cells (both EM and hadronic) within a cone of some radius (typically ΔR = 0.4) around the photon. To exclude the photon energy from the sum, a rectangular region centred on the axis of the ΔR = 0.4 cone is removed; figure 3.1 illustrates this. The variable measuring this scalar sum is called EtCone40. The containment of an electromagnetic shower within the calorimeter is commonly characterized by the Molière radius: the radius of the circle (in the η-Φ plane) containing 90% of the shower energy. For the ATLAS electromagnetic calorimeter the Molière radius is approximately 4.8 cm, corresponding to 1.3 cells in the LAr barrel; hence the leakage of the photon energy outside the subtracted central core should be limited to the few-percent level. However, two more contributions, in addition to the photon itself, play a role in the isolation profile of isolated objects: the calorimeter noise and the underlying event from soft processes and pile-up.
Thus, there are four primary components of the final measured value of a given EtCone variable:

energy from the object itself that is not properly removed from the sum (I_leakage);

energy from detector noise (I_noise);

energy from the underlying event and pile-up (I_UE);

energy associated with the hard process that produced the photon candidate (I).

The measured isolation energy I_measured is the sum of these components:

I_measured = I_leakage + I_UE + I_noise + I   (3.8)

The quantity I is the variable of interest: it is the energy that comes from final-state particles produced in the same hard-scattering process as the photon candidate. So, one can measure it as follows:

I = I_measured − I_leakage − I_UE − I_noise   (3.9)
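Numerically the correction is a simple subtraction, with the UE/pile-up term estimated event by event from an ambient transverse-energy density. A minimal sketch of this decomposition, under stated assumptions: the density-times-area form of the UE term and the rectangular core size used here are illustrative placeholders, not the exact ATLAS implementation, and the noise term is treated as having zero mean:

```python
import math

def ue_correction(rho, cone_radius=0.4, core_eta=0.125, core_phi=0.175):
    """UE/pile-up term: ambient transverse-energy density rho times the
    effective cone area (cone minus the removed rectangular core).
    The core dimensions are hypothetical placeholders."""
    area = math.pi * cone_radius ** 2 - core_eta * core_phi
    return rho * area

def corrected_isolation(et_cone, leakage, rho):
    """Corrected isolation (cf. eq. 3.10): subtract the estimated photon
    leakage and the UE/pile-up term from the raw EtCone sum."""
    return et_cone - leakage - ue_correction(rho)
```

Estimating rho per event, rather than applying an average, is what absorbs the large event-by-event variations in pile-up activity mentioned above.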

Figure 3.1: A sketch of how the EtCone40 variable is computed. Around the photon direction in the η-Φ plane a cone is drawn, defining the region where the energy from all calorimeter cells is summed. The energy in a rectangle at the centre of the cone is excluded, to remove the photon energy leakage and to correctly estimate the energy deposits coming from other processes.

The effect of the noise on the isolation profile is to induce a Gaussian smearing of the measured isolation, with a width depending on the number of cells in the cone. The exclusion of the central core of cells can still leave a non-negligible fraction of the photon E_T in the cone: this is the leakage component, which for photons of large E_T dominates the isolation profile. It is possible to study the EtCone distributions as a function of the photon E_T and to extract E_T-dependent correction coefficients; these coefficients are used to compute the leakage term (I denote by I^rec_leakage the estimate of I_leakage). The technique used to account for the underlying event (I^rec_UE) is described in [45] and [46]: the procedure extracts an estimate of the ambient transverse-energy density event by event, rather than applying an average correction to all events. This has the benefit of naturally accounting for potentially large event-by-event variations in the amount of activity from the underlying event and pile-up. It is important to stress that the photon isolation energy used in this analysis is corrected both for the underlying-event (UE) contribution and for pile-up. The final corrected EtCone variable is calculated as:

I = I_measured − I^rec_leakage − I^rec_UE   (3.10)

3.2 Jets

In hadronic collisions jets play a major role.
Quantum chromodynamics (QCD) describes the strong interaction as resulting from the interactions of spin-1/2 quarks and spin-1 gluons. Quarks appear in different flavours (u, d, c, s, t, b) and each flavour appears in different colours (red, blue, green and the corresponding anticolours). A colour singlet is a combination of the three colours; a single quark cannot be a colour singlet and thus cannot appear as a physical particle. This property is called confinement. Quarks

are confined inside physical hadrons, which are always colourless colour singlets. Gluons also carry colour charge. Another important feature of QCD is asymptotic freedom: quarks interact weakly at high energies and strongly at low energies, which prevents the unbinding of baryons and mesons. A quark or a gluon therefore fragments and hadronizes almost immediately after being produced, because it cannot emerge as an isolated coloured particle: as the quark separates from the hadron it comes from, the potential energy stored in the pair increases until the creation of a quark-antiquark pair becomes favourable. The initial quark or gluon thus materializes into a collimated bunch of hadrons flying roughly in the direction of the original parton: this collimated spray of energetic hadrons is the so-called jet. The spray of hadrons is treated and reconstructed as a single object, as coming from the same parton. Jet reconstruction is crucial to resolve the partonic flow coming from the hard scattering. A jet definition that works from both the theoretical and the experimental point of view is a complicated matter. As discussed in [47], a jet definition should satisfy several important properties:

it should be simple to implement in an experimental analysis;
it should be simple to implement in a theoretical calculation;
it should be defined at any order of the theory;
it should yield finite cross-sections at any order of perturbation theory;
it should yield cross-sections relatively insensitive to hadronization.

Reference [48] presents a brief review of jets and reconstruction algorithms.

Jet reconstruction in this analysis

To reconstruct jets, the first step is to identify separate energy clusters by properly grouping single cells in the calorimeters; a jet clustering algorithm then starts from these clusters to reconstruct jets. There are two main cluster-definition algorithms, TopoTower and TopoCluster.
The TopoTower algorithm selects cells around a seed and defines a cone of fixed radius around it: the energy of the final cluster is the sum of the contributions from the cells within this cone. The TopoCluster algorithm selects seed cells according to their signal-to-noise ratio and builds a 3D cluster around them. In this analysis jets are reconstructed from TopoClusters using the anti-k_t algorithm.

The anti-k_t algorithm

The anti-k_t algorithm [49] belongs to the family of sequential recombination algorithms. A distance d_ij between each pair of TopoClusters is introduced as follows:

d_ij = min(k_ti^{2p}, k_tj^{2p}) Δ²_ij / R²   (3.11)

and a distance d_iB between object i and the beam B is defined:

d_iB = k_ti^{2p}   (3.12)

where Δ²_ij = (y_i − y_j)² + (Φ_i − Φ_j)², and k_ti, y_i, Φ_i are respectively the transverse momentum, rapidity and azimuth of particle i. R is a radius parameter, taken to be 0.4 in this analysis, and p is a parameter governing the relative power of the energy versus the geometrical (Δ_ij) scales; p = −1 corresponds to the anti-k_t algorithm. The algorithm starts from a list of clusters and groups them into jets. It works as follows:

loop over all object pairs and find the minimum among the d_ij and d_iB;
if the minimum is a d_iB, the i-th object is a jet: add it to the jet list and remove it from the input list;
if the minimum is a d_ij, combine i and j according to a recombination scheme, remove i and j from the input list and add the new object to the input list;
repeat until the input list is empty.

The idea behind the anti-k_t algorithm is the following. Consider an event with a few well-separated hard particles and many soft particles. The distance d_ij between a hard particle and a soft particle, according to definition 3.11, is determined exclusively by the transverse momentum of the hard particle and their geometrical separation, while the d_ij between similarly separated soft particles is much larger. Hence, soft particles tend to cluster with hard ones long before they cluster among themselves. More details can be found in [49] and [51].

3.3 Muons

Muons can be reconstructed using information from both the ID and the muon spectrometer. In this analysis muons are reconstructed as follows.

Combined muons (CB): the track measured in the MS is combined with a track in the Inner Detector.
The muon momentum is computed as a weighted combination of the momentum measurements from the muon spectrometer (which dominates for high-p_T muons) and the ID (which dominates at low p_T); the momentum resolution is therefore excellent for muons of any p_T. Since it relies on ID tracks, CB muons can only be reconstructed for |η| < 2.5.

Segment-tagged muons (ST): the ST muon reconstruction is based on tracks reconstructed in the ID. An ID track is identified as a muon if its trajectory, extrapolated to the muon spectrometer, can be associated with a straight track segment in the precision muon chambers. This algorithm is designed especially for low-p_T muons that do not reach the outer muon stations. The pseudorapidity coverage is |η| < 2.5, as for CB muons.
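The MS-ID weighted combination can be illustrated with a standard inverse-variance average; a minimal sketch (this is the textbook weighting scheme, not necessarily the exact ATLAS combination):

```python
def combine_momentum(p_id, sigma_id, p_ms, sigma_ms):
    """Inverse-variance weighted average of the ID and MS momentum
    measurements: the more precise measurement dominates the result."""
    w_id, w_ms = 1.0 / sigma_id ** 2, 1.0 / sigma_ms ** 2
    p = (w_id * p_id + w_ms * p_ms) / (w_id + w_ms)
    sigma = (w_id + w_ms) ** -0.5
    return p, sigma

# Low-pT muon (toy numbers, GeV): the ID is more precise, so the combined
# value sits close to p_id and the combined uncertainty beats either input
p, s = combine_momentum(20.0, 0.4, 21.0, 1.2)
```

At high p_T the MS uncertainty becomes the smaller one and the same formula automatically lets the spectrometer measurement dominate.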

3.4 Taus

The reconstruction of tau leptons in ATLAS focuses on the hadronic decay channels, since muons and electrons from tau decays cannot be distinguished from prompt ones. Hadronic decays are classified as 1-prong or 3-prong, i.e. with one or three charged particles in the final state; there is also a small fraction of 5-prong decays, which can hardly be detected in a QCD context. The dominant hadronic decay channels are characterised by the presence of pions, both charged and neutral, in the final state. Hadronic tau decays typically show up in the detector as collimated jets of hadrons, in particular pions, with a limited number of associated tracks. Since hadronic tau decays have a complex topology, both tracking and calorimeter information is exploited in the tau reconstruction. Due to the very small contribution of taus in this analysis, taus are not reconstructed here.

3.5 Missing Transverse Momentum E_T^miss

In this thesis we look for events where a pair of Dark Matter particles is produced in association with a photon. The Dark Matter particles (or any other non-SM particles which could be produced) are so weakly interacting that they leave neither a track nor any energy deposit in the detector. Since the kinematics in the transverse (x-y) plane is closed and energy and momentum are conserved, a momentum imbalance in the transverse plane may point to the presence of non-interacting particles. The energy deposits measured in the calorimeters are converted into energy-flow vectors E_i = E_i n_i, where n_i is the unit vector pointing from the interaction point to the energy deposit. For relativistic particles and an ideal detector response, if no invisible particle is emitted and all the particles produced in the event are reconstructed, one expects Σ_i E_T,i = 0.
Hence the missing transverse momentum is defined as:

E_T^miss = sqrt( (E_x^miss)^2 + (E_y^miss)^2 )    (3.13)

where

E_{x(y)}^miss = − ∑ E_{x(y)}    (3.14)

and ∑E_{x(y)} is the sum of the x (y) components of all energy deposits reconstructed in the detector. The missing transverse momentum is the signature that invisible particles are present or that some object has been partially lost. The missing transverse momentum vector is defined as the negative vector sum of the momenta of all detected particles; the vector is denoted Ē_T^miss and its magnitude E_T^miss. Its azimuthal coordinate φ^miss is computed as:

φ^miss = arctan( E_y^miss / E_x^miss )    (3.15)
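As an illustrative sketch of eqs. 3.13-3.15 (toy numbers, not analysis code), the magnitude and azimuth of the missing transverse momentum can be computed from a list of transverse energy deposits; in practice atan2 is used instead of a plain arctangent so that the quadrant of φ^miss is resolved:

```python
import math

def met_from_deposits(deposits):
    """Missing transverse momentum from transverse energy deposits.

    `deposits` is a list of (E_x, E_y) pairs in GeV. E_x^miss (E_y^miss)
    is the negative sum of the x (y) components (eq. 3.14); the magnitude
    and azimuth follow eqs. 3.13 and 3.15.
    """
    ex_miss = -sum(ex for ex, _ in deposits)
    ey_miss = -sum(ey for _, ey in deposits)
    et_miss = math.hypot(ex_miss, ey_miss)
    phi_miss = math.atan2(ey_miss, ex_miss)  # quadrant-aware arctangent
    return et_miss, phi_miss

# A single 100 GeV deposit along +y is balanced by 100 GeV of E_T^miss
# pointing along -y (phi = -pi/2).
et, phi = met_from_deposits([(0.0, 100.0)])
```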

E_T^miss reconstruction

A good measurement of E_T^miss is crucial at the LHC: both search analyses and precision measurements need an accurate estimation of E_T^miss. Indeed, the E_T^miss plays an important role in all the SM processes involving neutrinos and in processes of new physics beyond the SM. To measure the E_T^miss accurately, good hermeticity of the calorimeters is mandatory. Several effects can cause fake E_T^miss and make the E_T^miss computation a challenge; among them are dead detector regions, noise, and energy deposits caused by cosmic rays. Moreover, high pile-up can degrade the performance through fluctuations in the soft term determination (see equation 3.16), causing a wrong E_T^miss measurement. A refined algorithm is used for the E_T^miss reconstruction. In detail, to avoid overlaps between the different objects, calorimeter cells are associated with reconstructed and identified high-p_T objects in a specific order: electrons, photons, taus, jets, muons. The E_T^miss is:

E_{x(y)}^miss = E_{x(y)}^{miss,e} + E_{x(y)}^{miss,γ} + E_{x(y)}^{miss,τ} + E_{x(y)}^{miss,jets} + E_{x(y)}^{miss,soft terms} + E_{x(y)}^{miss,µ}    (3.16)

where each term is calculated as the negative sum of the calibrated reconstructed objects, projected onto the x and y directions. The E_{x(y)}^{miss,soft terms} entering equation 3.16 comes from topoclusters and tracks unmatched to any reconstructed object. Care is taken not to double count objects, since double counting would lead to fake E_T^miss. In the following list, each term entering eq. 3.16 is briefly described:

Electron term: electrons, to be taken into account in the E_T^miss computation, need to pass medium identification criteria and to have p_T > 10 GeV.

Photon term: photons are required to pass the tight selection criteria and to have p_T > 10 GeV.
If a photon is also reconstructed as an electron, it is counted as an electron and not as a photon.

τ term: taus are required to pass medium selection criteria and to have p_T > 20 GeV. If a tau is also reconstructed as an electron or a photon, the electron/photon is kept.

Jet term: jets used for the E_T^miss computation are reconstructed with the anti-kt algorithm and are required to have p_T > 20 GeV.

Soft term: it is calculated from calorimeter topoclusters and tracks not associated with high-p_T objects as defined above.

Muon term: it is computed from the momenta of the reconstructed muons. For a precise estimation, different types of muons are considered. The details of this procedure are not given here because, as explained later, in this analysis the muon term is not included in the E_T^miss reconstruction.
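The object-ordered cell association described above can be sketched as follows; the cell/claimant structures are purely illustrative, not the ATLAS implementation:

```python
# Order in which calorimeter cells are associated to objects, to avoid
# double counting: a cell already claimed by an electron is not counted
# again for, say, a jet.
PRIORITY = ['electron', 'photon', 'tau', 'jet', 'muon']

def assign_cells(candidates):
    """`candidates` maps a cell id to the set of object types claiming it.

    Returns a map from cell id to the single object type that keeps the
    cell, following PRIORITY; unclaimed cells feed the soft term.
    """
    assignment = {}
    for cell, claimants in candidates.items():
        for kind in PRIORITY:
            if kind in claimants:
                assignment[cell] = kind
                break
        else:  # no high-p_T object claimed this cell
            assignment[cell] = 'soft'
    return assignment

# Cell 1 is claimed by both a photon and a jet: the photon wins.
# Cell 3 is unclaimed and enters the soft term.
result = assign_cells({1: {'photon', 'jet'}, 2: {'jet'}, 3: set()})
```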

E_T^miss in this analysis

The E_T^miss calculation in this analysis treats taus as jets and does not include muons in the E_T^miss computation. This means that muons are counted as invisible particles, hence they contribute to the measured E_T^miss. The Signal Region is defined requiring a muon veto, so in that region the E_T^miss reconstruction with or without the muon term is exactly the same. This E_T^miss definition allows the definition of Control Regions with muons which have an E_T^miss spectrum similar to that of the Signal Region. In a similar way it is possible to define a control region with electrons (because electrons are vetoed in the Signal Region), as long as electrons are removed from the E_T^miss computation.
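Excluding the muon term from eq. 3.16 simply means leaving muon momenta out of the negative vector sum, so in a muon control region both the neutrino and the muon contribute to E_T^miss. A minimal sketch with hypothetical event numbers:

```python
import math

def met(objects, include_muons=True):
    """Negative vector sum of calibrated object momenta (GeV).

    `objects` is a list of (px, py, is_muon) tuples. With
    include_muons=False, muons are treated as invisible particles
    and therefore contribute to the E_T^miss, as in this analysis.
    """
    ex = ey = 0.0
    for px, py, is_muon in objects:
        if is_muon and not include_muons:
            continue  # muon left out of the sum -> adds to E_T^miss
        ex -= px
        ey -= py
    return math.hypot(ex, ey)

# Hypothetical W(->mu nu)+gamma event: a 125 GeV photon along +x,
# balanced by a 75 GeV muon (visible) plus a 50 GeV neutrino (invisible).
event = [(125.0, 0.0, False), (-75.0, 0.0, True)]
met_full = met(event)                        # neutrino only: 50 GeV
met_no_mu = met(event, include_muons=False)  # neutrino + muon: 125 GeV
```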

Chapter 4

The mono-photon Analysis

This chapter describes the mono-photon analysis performed with the full 2012 dataset of p-p collisions at √s = 8 TeV. This analysis repeats the one performed with the 7 TeV dataset [42]. The mono-photon final state is a compelling signature for the search for new physics such as DM direct production. Even if at hadron colliders the mono-jet final state is statistically favoured, the mono-photon final state has some compelling features which make it a competitive search. Photons are well-measured objects, and the main sources of background are electroweak processes which are well known and relatively easy to estimate. The mono-photon analysis is a counting experiment: it aims to count the number of observed events in a Signal Region (section 4.2) in data and to compare it to the Standard Model prediction (section 4.3). The main effort of this kind of analysis is the precise estimation of all the sources of background entering the Signal Region (section 4.2). In particular, the background may be grouped into two main subsets: the irreducible background, which is estimated by rescaling the MC prediction with a simultaneous fitting technique, and the reducible background which, when possible, is estimated via data-driven methods (sections 4.6 and 4.7). The simultaneous fit has been tested in a Validation Region (section 4.9) and then used for the background estimation in the Signal Region (section 4.10). Once the total background estimation is performed, the observation in data is compared to the Standard Model expectation and a model-independent limit on the presence of new physics is set (section 5.1). Moreover, model-dependent constraints are set in the context of specific models (section 5.2).

4.1 Data and Simulation samples

Data

This analysis is performed on data from proton-proton collisions at √s = 8 TeV collected by the ATLAS detector in 2012. The whole dataset is used, for a total integrated luminosity of 20.3 fb⁻¹.
The uncertainty on the luminosity is 2.8% [53].

Simulations

The Monte Carlo samples used in this analysis were produced at √s = 8 TeV using different generators. The list of samples, their generators and their cross sections can be found in Tables 4.1 and 4.2. Some processes are simulated using a filter, such that the generator only retains events that pass some selection cuts. In particular, W/Z + jet samples are generated in exclusive bins of p_T^W / p_T^Z and with orthogonal c- and b-quark filters. Samples have been generated using either a full simulation of the detector [43] or parametrizations of the calorimeter response ("fast simulations" [44]). All the W/Z + γ and γ + jet samples are full simulations, while the W/Z + jet samples are ATLAS fast simulations for the low boson-p_T range and full simulations for the high boson-p_T range.

MC re-weighting

When dealing with simulation there are two effects which need to be taken into account. MC distributions should reflect data distributions, but many experimental effects cannot be simulated and are only known after data taking. For this reason some features are not part of the simulation process but need to be taken into account at a later stage; among these experimental features are, for example, pile-up corrections and corrections to the interaction vertex position in z. Moreover, to correctly compare numbers of events between MC datasets and data, the integrated luminosity and the cross section of a given process have to be taken into account. Hence, simulation samples need to be weighted correctly, both to approximate and reproduce the data behaviour and to be normalized to the luminosity and to the generator cross section. Therefore, making use of the basic equation N = Lσ relating the number of events N, cross section σ and luminosity L, each selected MC event is weighted according to eq.
4.1:

w_MC = L σ W / N_generated    (4.1)

where w_MC is the total weight multiplying each event, W includes all the weights which need to be taken into account, σ is the cross section for the process considered, and N_generated is the number of generated events, including the sum of the weights W. In particular, W is the product of:

the pile-up weight: MC samples are reweighted in order to reproduce the pile-up conditions of the data;

the z-vertex weight: the vertex position in z is also reweighted;

the MC weight: depending on the generator used for event generation, there may be an event-generator weight associated with each event. In particular, the SHERPA samples used in this analysis are generated with an event weight different from 1;

Channel: γ + Z(νν), γ + W(eν), γ + W(µν), γ + W(τν), γ + Z(ee), γ + Z(µµ), γ + Z(ττ) — generator SHERPA.

W(eν) + jet, W(µν) + jet, W(τν) + jet — generator SHERPA; each generated in exclusive boson-p_T bins (p_T > 0 GeV; … < p_T < 140 GeV; … < p_T < 280 GeV; … < p_T < 500 GeV; p_T > 500 GeV), each bin split into b-filter, c-filter/b-veto and c-veto/b-veto samples.

γ + jet — generator Pythia; p_T^γ bins: 80 < p_T^γ < 150 GeV; … < p_T^γ < 300 GeV; p_T^γ > 300 GeV.

Table 4.1: Cross-sections for the Standard Model backgrounds considered in this analysis.

Z(νν) + jet, Z(ee) + jet, Z(µµ) + jet, Z(ττ) + jet — generator SHERPA; each generated in exclusive boson-p_T bins (p_T > 0 GeV; … < p_T < 140 GeV; … < p_T < 280 GeV; … < p_T < 500 GeV; p_T > 500 GeV), each bin split into b-filter, c-filter/b-veto and c-veto/b-veto samples.

Table 4.2: Cross-sections for the Standard Model backgrounds considered in this analysis (continued from Table 4.1). The generator filter efficiency is included in the cross-section.

k-factor: generators typically calculate cross sections at leading order or next-to-leading order. For processes where the cross section at higher order is known, a (usually constant) factor called the k-factor is applied to the expected cross section of the MC sample (listed in Tables 4.1 and 4.2);

sample filter efficiency: sometimes a filter is applied during event generation, such that the generator only retains events passing some selections, in order to increase the statistics in a particular region of phase space. The filter efficiency then needs to be taken into account to correctly weight the events of the simulated process;

photon identification efficiency: to reconstruct a tight photon a series of cuts is applied, each with an associated efficiency which needs to be taken into account. A small correction is applied to the Monte Carlo efficiency to account for data-to-MC differences in the measured efficiency.

In the following, whenever MC samples are used it is implied that all these weights are taken into account. The importance of this procedure will be stressed later.

4.2 Event Selection

To select a sample of candidate events, a sequence of kinematic cuts is applied both to data and to MC. The following preselection cuts are required both in the Signal Region and in the Control Regions:

data quality: in order to select data collected in periods in which all the subdetectors were working properly, only good luminosity blocks in the good run list (GRL) are retained;

trigger: events are required to pass an E_T^miss trigger called EF_xe80_tclcw_tight, which selects events with E_T^miss higher than 80 GeV;

good vertex: a primary vertex with at least 5 associated tracks must be reconstructed.
This requirement rejects non-collision background events;

event cleaning: events where noise bursts in the EM calorimeter or data corruption occurred are rejected;

jet cleaning: cleaning criteria are applied considering any jet with calibrated p_T > 20 GeV which does not overlap with leptons (electrons and muons) or photons;

E_T^miss cleaning: events with E_T < 0 or with any jet (before overlap removal) with p_T > 40 GeV and ∆φ(jet, E_T^miss) < 0.3 are rejected. These cuts are applied in order to avoid events where a partially lost jet could mimic high E_T^miss: a jet aligned with the E_T^miss is a hint that it is causing fake E_T^miss.
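The E_T^miss cleaning cut above can be sketched as follows; the jet kinematics are hypothetical and the thresholds are the ones quoted in the text:

```python
import math

def passes_met_cleaning(jets, met_phi, pt_min=40.0, dphi_min=0.3):
    """E_T^miss cleaning: reject the event if any jet (before overlap
    removal) with p_T above 40 GeV lies within Delta-phi < 0.3 of the
    E_T^miss direction, since such alignment hints at a mismeasured jet
    faking the E_T^miss.

    `jets` is a list of (pt, phi) pairs in GeV / radians.
    """
    for pt, phi in jets:
        dphi = abs(phi - met_phi)
        if dphi > math.pi:
            dphi = 2.0 * math.pi - dphi  # wrap into [0, pi]
        if pt > pt_min and dphi < dphi_min:
            return False  # jet aligned with E_T^miss: reject the event
    return True

# A 60 GeV jet aligned with E_T^miss fails the cleaning;
# a back-to-back jet passes it.
aligned = passes_met_cleaning([(60.0, 0.1)], met_phi=0.0)
back_to_back = passes_met_cleaning([(60.0, math.pi)], met_phi=0.0)
```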

The resulting ensemble of events consists of good detector-level quality events. To this subset, the analysis cuts are applied in order to select the events entering the Signal Region. The Signal Region is defined by the following cuts:

E_T^miss > 150 GeV;

at least one loose photon with p_T > 125 GeV and |η| < 2.37, excluding the crack (1.37 < |η| < 1.52);

the leading photon (i.e. the one with the highest p_T) must be tight and have |η| < 1.37;

the leading photon must be isolated: the energy within a cone of ∆R < 0.4 around the cluster is required to be less than 5 GeV;

the leading photon must be well separated from the E_T^miss in order to avoid events in which fake E_T^miss is caused by the partial loss of a photon (fake-E_T^miss events); hence ∆φ(γ, E_T^miss) > 0.4 is required;

jet veto: events where more than one jet with p_T > 30 GeV is reconstructed are vetoed; if a jet is present, it must be well separated from the E_T^miss to avoid fake-E_T^miss events, hence ∆φ(jet, E_T^miss) > 0.4 is required;

lepton veto: no electron (with p_T > 7 GeV and |η| < 2.47) and no muon (with p_T > 6 GeV and |η| < 2.5) must be reconstructed.

4.3 Standard Model Background

The Standard Model processes which contribute to the mono-photon Signal Region mainly come from electroweak processes. They can be classified into two categories: irreducible and reducible backgrounds. Processes which have exactly the same signature as the studied process are called irreducible background, while processes which have a different final state with respect to the mono-photon Signal Region, but which could be misclassified as signal due to experimental effects, are called reducible background. The only irreducible background for this analysis, which constitutes the dominant source of Signal Region content (~70%), is Z(νν) + γ production: the neutrinos from the Z decay produce large missing transverse momentum and the photon is produced in association with the boson.
The reducible background is given by the following processes:

W(lν) + γ (~15%):

W(eν) + γ: the electron is not reconstructed, or is reconstructed as a photon;

W(µν) + γ: the muon is not reconstructed;

W(τν) + γ: the tau can decay leptonically, with the lepton missed or not reconstructed, or hadronically, entering the Signal Region if the final-state jet satisfies the Signal Region conditions; the tau itself may also not be reconstructed;

Z/W + jet, diboson, tt̄, single-t, multi-jet (~10%):

Z(νν) + jet: the jet is reconstructed as a photon (jet faking photon);

W(eν) + jet: the electron or the jet fakes a photon;

W(µν) + jet and W(τν) + jet: the µ/τ is not reconstructed, or the τ enters the Signal Region as the one jet allowed by the jet veto; the jet fakes a photon;

tt̄, single-t and diboson: similar to the W + jet processes;

multi-jet processes: one jet is partially lost, faking high E_T^miss, and another jet fakes a photon;

Z(ll) + γ: both leptons are missed (~0.3%);

γ + jet: a jet is partially lost, faking high E_T^miss (< 0.1%).

Other processes such as tt̄ + γ have been considered, but their contribution is found to be negligible.

4.4 Control Regions

As previously explained, this is a counting experiment. This analysis aims to count the number of data events reconstructed in the Signal Region and to compare it to the expected Standard Model yield. If an excess is observed, its significance needs to be quantified and interpreted as a possible presence of new physics; otherwise, exclusion limits are set. The main effort for this kind of analysis is the accurate estimation of all the Standard Model processes entering the Signal Region. The yield of the Standard Model processes relevant to this analysis (section 4.3) needs to be quantified. A typical strategy is to measure the normalization of the different processes on data, in specific Control Regions (CR). Control Regions are background-enriched regions where the presence of the signal is minimized while the presence of a certain background is maximized. Control Regions are chosen so as not to overlap with the SR and to maximize the fraction of a certain background.
For this purpose, they are usually constructed by inverting an SR cut. All the background estimation methods used here make use of CRs. Data-driven methods are preferred whenever it is possible to use them: in particular, the jet-faking-photon and electron-faking-photon probabilities are known to be poorly described by MC, and more precise data-driven techniques have been developed. Different Control Regions are considered. In this section the definitions of the Z + γ enriched Control Regions (2µCR and 2eCR) and of the W + γ enriched Control Region (1µCR) are provided. Another Control Region has been used to study the γ + jet background, as defined in section 4.7.

W + γ-enriched Control Region (1µCR)

A W + γ enriched Control Region, denoted 1µCR, is defined by reversing the lepton veto and requiring exactly one muon reconstructed with the same criteria as the vetoed muons, plus:

the scalar sum of the track p_T within a cone of radius ∆R = 0.2 around the muon direction is required to be less than 15% of the muon p_T;

∆R(µ, γ) > 0.5, to be consistent with the generator-level cut of the Z/W + γ background samples.

The photon pseudorapidity requirement is relaxed with respect to the Signal Region selection: |η_γ| < 2.37, excluding the crack 1.37 < |η_γ| < 1.52. Since the E_T^miss definition used in this analysis treats muons as invisible particles, both the neutrino and the muon coming from the W decay contribute to E_T^miss.

Z(→µµ) + γ-enriched Control Region (2µCR)

Since the E_T^miss is reconstructed treating muons as invisible particles, Z(→νν) + γ and Z(→µµ) + γ events are similar in this analysis. Hence a Z(→µµ) + γ enriched Control Region is considered. It is defined as the 1µCR, with the difference that exactly two muons of the same quality as the vetoed muons are requested. Additionally, the dilepton invariant mass m_µµ is required to satisfy m_µµ > 50 GeV, to be consistent with the generator-level cut of the Z + γ background samples.

Z(→ee) + γ-enriched Control Region (2eCR)

As the 2µCR is not very populated, another Z + γ-enriched Control Region is built in order to reduce the statistical uncertainty associated with the Z + γ scale factor, which is essential to estimate the Z(→νν) + γ background in the SR. This CR is defined by reversing the lepton veto and requiring exactly two electrons reconstructed with the same quality criteria as the electrons counted in the veto, and with:

p_T > 10 GeV;
the scalar sum of the track p_T within a cone of radius ∆R = 0.2 around the electron inner-detector track is required to be less than 15% of the electron p_T;

∆R(e, γ) > 0.5, to be consistent with the generator-level cut of the Z + γ background samples;

the dilepton invariant mass m_ee is required to satisfy m_ee > 50 GeV, to be consistent with the generator-level cut of the Z + γ background samples.

As in the other CRs, the pseudorapidity cut is relaxed:

|η_γ| < 2.37, excluding the crack 1.37 < |η_γ| < 1.52.

Events entering this region are selected using a photon trigger instead of the usual E_T^miss trigger. The photon trigger used (EF_g120_loose) selects events which have at least one loose photon with p_T^γ > 120 GeV. Here the E_T^miss is constructed treating the two electrons as invisible particles.

4.5 Electrons faking photons

As electrons and photons have a similar signature in the EM calorimeter, and converted photons can leave one or two tracks in the ID, there is a certain probability that an electron is misidentified as a photon. In the mono-photon analysis we have to face the possibility that an energetic electron is reconstructed as an energetic photon (E_T > 125 GeV) and therefore generates a fake event in the Signal Region. To estimate this contribution, the probability that an electron is misreconstructed as a photon first needs to be quantified. The idea is to measure the electron fake rate, defined as the probability of reconstructing a photon from a true electron divided by the probability of reconstructing an electron from a true electron:

P(reco photon | real electron) / P(reco electron | real electron)    (4.2)

and to multiply this rate by the number of events reconstructed in a mono-electron region (i.e. a region similar to the mono-photon Signal Region, except that a single electron is required instead of a photon). In a similar way, mono-electron plus 1µ, plus 2µ and plus 2e regions are defined to estimate the electron-faking-photon contribution in the 1µCR, 2µCR and 2eCR respectively. The electron fake rate (equation 4.2) is measured through the so-called tag-and-probe method. The basic idea is to select events with true electrons (typically coming from Z(→ee) decays) and count the number of events in which a photon is reconstructed.
Two kinds of events are selected:

events with one good electron (tag) of E_T > 20 GeV and one electron (probe) of E_T > 125 GeV;

events with one good electron (tag) of E_T > 20 GeV and one photon (probe) of E_T > 125 GeV.

To ensure that the probe electron is a true electron and that the probe photon is an electron faking a photon, the invariant mass of the tag and probe particles is required to be compatible with the Z mass. The electron fake rate is measured as the ratio between the number of probe photons and the number of probe electrons selected in the invariant mass range m_Z ± 10 GeV. The measured fake rate has some dependence on both the photon η and E_T, hence different

η regions are defined. In each region, a parametrization as a function of E_T is performed. The fake rate measured from data is also compared to the MC Z(→ee) and W(→eν) truth values. Besides the statistical fluctuation of the data, the difference between the MC Z(→ee) truth and tag-and-probe values and the difference between the MC Z(→ee) truth and MC W(→eν) truth values are taken as systematic uncertainties. The electron fake rates used in this analysis are summarized in Table 4.3.

E_T range [GeV]   |η| ≤ 0.8 (stat+syst)   0.8 < |η| ≤ 1.37 (stat+syst)   |η| ≥ 1.52 (stat+syst)
[125, 150]        2.1 ± 0.3 ± 0.6%        2.5 ± 0.4 ± 0.8%               3.3 ± 0.5 ± 0.4%
[150, 200]        1.1 ± 0.3 ± 0.3%        1.5 ± 0.4 ± 0.4%               3.9 ± 0.7 ± 1.0%
[200, ∞]          1.1 ± 0.4 ± 0.2%        1.3 ± 0.5 ± 1.1%               3.6 ± 1.3 ± 1.0%

Table 4.3: Summary of the electron fake rates (stat+syst) measured from the data.

Once the fake rate is estimated, a data-driven mono-electron Control Region is used to estimate the electron fakes in the mono-photon Signal Region (section 4.2). In a similar way, mono-electron plus 1-muon (plus 2-muon, plus 2-electron) regions are defined to estimate the electron fake contribution in the mono-photon CRs defined in section 4.4. Table 4.4 shows the final electron fake estimates in the various regions.

Region        photon selection   E_T^miss selection     electron fakes (stat+syst)
0-muon        |η| < 1.37         E_T^miss > 150 GeV     63.4 ± 1.0 ± 27.5
              |η| < 2.37         [110, 150] GeV         38.4 ± 1.0 ± …
1-muon        |η| < 2.37         E_T^miss > 150 GeV     6.1 ± 0.4 ± 2.3
              |η| < 2.37         [110, 150] GeV         6.3 ± 0.4 ± …
2-muon        |η| < 2.37         E_T^miss > 150 GeV     0.5 ± 0.1 ± 0.2
              |η| < 2.37         [110, 150] GeV         0.3 ± 0.1 ± …
2-electron    |η| < 2.37         E_T^miss > 150 GeV     0.6 ± 0.1 ± 0.3
              |η| < 2.37         [110, 150] GeV         1.0 ± 0.2 ± 0.3

Table 4.4: Electron fakes in various regions: 0-muon Signal Region, 1-muon plus photon Control Region, 2-muon plus photon Control Region and 2-electron plus photon Control Region.
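The binned fake-rate lookup and its application to a mono-electron sample can be sketched as follows; the central values are those of Table 4.3 and the 1000-electron control sample is a purely illustrative number:

```python
# Electron->photon fake rates (central values from Table 4.3, as fractions).
# Rows: E_T bins [125,150), [150,200), [200, inf);
# columns: |eta| <= 0.8, 0.8 < |eta| <= 1.37, |eta| >= 1.52.
FAKE_RATE = {
    (125.0, 150.0): (0.021, 0.025, 0.033),
    (150.0, 200.0): (0.011, 0.015, 0.039),
    (200.0, float('inf')): (0.011, 0.013, 0.036),
}

def fake_rate(et, abs_eta):
    """Look up the fake rate for a given E_T (GeV) and |eta|."""
    if 1.37 < abs_eta < 1.52:
        raise ValueError('crack region is excluded')
    col = 0 if abs_eta <= 0.8 else (1 if abs_eta <= 1.37 else 2)
    for (lo, hi), rates in FAKE_RATE.items():
        if lo <= et < hi:
            return rates[col]
    raise ValueError('E_T below 125 GeV')

# Scaling a hypothetical mono-electron control sample: 1000 selected
# electrons with E_T ~ 130 GeV in the central barrel fake ~21 photons.
n_fakes = 1000 * fake_rate(130.0, 0.5)
```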
4.6 Jet faking photon background

The jet-faking-photon background comes from events with large E_T^miss and a highly energetic jet reconstructed as a tight and isolated photon. This kind of contamination mainly comes from Z + jet and W + jet processes. A data-driven method, more reliable than the MC prediction, is used in this analysis: the so-called two-dimensional sideband method (2DSideband) [54].

Basic 2D Sideband Method

The two-dimensional sideband method is a data-driven technique which allows the background contamination in a given signal region to be estimated. The background contamination is extracted from three control regions, so the first step is the choice of an appropriate definition of these control regions. Our signal consists of tight and isolated photons, hence we build a plane (X, Y) where the X-axis is the photon isolation energy and the Y-axis is the photon identification tightness. Figure 4.2 shows a schematic view of this method. We split the Y-axis into two bins: a bin with photon candidates that pass all the tight selection criteria, and a bin with photons which fail one or more cuts (we call them non-tight photons, par. 4.6.2). On the X-axis two regions are used in the measurement: a so-called non-isolated bin (10 GeV < E_T^iso < 45 GeV) and an isolated bin (E_T^iso < 5 GeV). The intermediate region (5 < E_T^iso < 10 GeV) is ignored and serves as a safety gap between the isolated and non-isolated regions, in order to reduce the signal leakage (i.e. the fraction of tight and isolated photons which can leak into the three control regions) into the non-isolated control regions. The region with E_T^iso > 45 GeV contains only background.

Photon isolation and identification definitions

As is clear from the previous section, two variables are of particular importance in this method: the photon identification variable and the photon isolation energy.

Photon Isolation. In this analysis, the photon isolation energy variable used is Etcone40_corrected, which corrects the raw energy deposits for leakage and underlying-event effects. As previously mentioned, a photon with isolation energy below 5 GeV is isolated, otherwise it is non-isolated. In particular, in the 2DSideband a cut of 10 GeV < E_T^iso < 45 GeV is chosen to count non-isolated photons.
Photon Identification. The photon identification variables used to identify photons are listed in section 3.1.1. If the photon passes the selection criteria defined by the discriminating variables, it is classified as tight; if it fails one of these cuts, it is classified as non-tight. To use the 2DSideband method we need to select both tight and non-tight photons. Non-tight photons are selected by relaxing the tight-photon requirements. Non-tight photons entering the 2DSideband plane are required to:

fail the tight identification criteria (i.e. be non-tight);

pass relaxed identification criteria.

To select non-tight photons, a subset of the strip discriminating variables (section 3.1.1) is masked so that the photon identification criteria are relaxed. Non-tight photons are therefore defined by masking the following variables: ∆E, F_side, w_s,3, E_ratio, which are found to be the least correlated with the isolation variable. We call these photons tight-4 photons, because four is the number of reversed variables. As explained later, a different number of reversed variables (2, 3, 5) is also considered, to account for the systematic uncertainty associated with this choice.

Figure 4.1a shows, as an example, the isolation distribution for all the different masks, for data in the 1µCR. The isolation distribution for tight photons peaks around zero, while the non-tight isolation distributions are broader and shifted away from zero, because background events are being selected; the more bits are masked, the broader the distribution. As a comparison, figure 4.1b shows the same distribution for a W(→µν) + γ MC sample in the 1µCR. As expected, only the tight distribution is populated, and it peaks around zero, because this is an example of a signal-only MC sample (i.e. tight and isolated real photons).

The signal region (SR) is the region of the plane with tight and isolated photons, and the three control regions (CR) are defined as the regions populated by:

non-tight and isolated photons (CR1);

non-tight and non-isolated photons (CR2);

tight and non-isolated photons (CR3).

If N_A, N_B, M_B, M_A are the numbers of events entering the SR, CR1, CR2 and CR3 respectively, we define N_A^bkg as the background contamination (i.e. fake photons) and N_A^sign as the real signal (i.e. tight and isolated real photons). If the following two assumptions are valid, the background contamination in the SR can be estimated in a simple way:

1. the background isolation distribution does not depend on identification, which implies that the correlation between the isolation and identification variables is negligible;

2. the leakage of real photons into the control regions is negligible, so that we can assume:

N_B = N_B^bkg,  M_A = M_A^bkg,  M_B = M_B^bkg    (4.3)

From the first assumption we derive:

N_A^bkg / N_B^bkg = M_A^bkg / M_B^bkg    (4.4)

and combining equation 4.4 with the second assumption (equation 4.3) we obtain the background contamination estimate:

N_A^bkg = N_B^bkg · (M_A^bkg / M_B^bkg) = N_B · M_A / M_B    (4.5)

The purity of the sample can then be estimated by using:

N_A^sign = N_A − N_A^bkg = N_A − N_B · M_A / M_B    (4.6)

Figure 4.1: Isolation (E_T^iso) distributions for tight and tight-2/3/4/5 photons: (a) data in the 1µCR; (b) W(→µν)+γ MC sample in the 1µCR.

Figure 4.2: Schematic view of the 2DSideband plane.

The purity is defined as

   P = N_A^sign / N_A = 1 − (N_B · M_A) / (M_B · N_A).   (4.7)

It is important to stress that the validity of these results depends on the two assumptions discussed above. Assumption 1 can only be checked on pure-background MC, by computing the correlation factor

   R = (N_A^bkg,MC / N_B^bkg,MC) · (M_B^bkg,MC / M_A^bkg,MC).   (4.8)

It is determined from MC samples and quantifies the correlation between the X and Y axes of the plane (figure 4.2) for the background. If R is compatible with unity, the correlation between the isolation and identification variables can be considered negligible; if the correlation is not negligible and a precise value of R is available, N_A^bkg and N_A^sign have to be estimated including the correlation R [54]. The W(→µν)+jet MC sample is used to compute the ratio in eq. 4.8. We find R = 1.31 ± 0.25 ± 0.65, compatible with unity within the uncertainties, and we conclude that the correlation between the isolation and identification photon variables is under control and no correction is needed.

Regarding assumption 2, as mentioned before, a blank gap is left between the isolated and non-isolated regions (5 < E_T^iso < 10 GeV safety gap) in order to reduce the signal leakage into the non-isolated region. Nevertheless, as discussed in the following paragraph, a signal-leakage correction is extracted from MC and used in the estimation of N_A^sign and N_A^bkg.

Signal Leakage Corrections

Since part of the signal can leak into the three control regions, eq. 4.5 can be affected by this leakage, resulting in an overestimation of N_A^bkg.

To take the signal leakage into account, the signal yield in each control region has to be subtracted from N_B, M_B and M_A. The event yield in each CR is the sum of the signal and background contributions, so it can be written as:

   N_B = N_B^bkg + N_B^sign = N_B^bkg + c_1 · N_A^sign,
   M_A = M_A^bkg + M_A^sign = M_A^bkg + c_2 · N_A^sign,   (4.9)
   M_B = M_B^bkg + M_B^sign = M_B^bkg + c_3 · N_A^sign,

where signal MC samples are used to compute the fractions of signal leaking into CR1, CR2 and CR3, expressed by the signal-leakage coefficients

   c_1 = N_B^sign,MC / N_A^sign,MC,   c_2 = M_A^sign,MC / N_A^sign,MC,   c_3 = M_B^sign,MC / N_A^sign,MC.   (4.10)

Therefore:

   N_A^sign = N_A − N_A^bkg = N_A − (N_B − c_1 N_A^sign)(M_A − c_2 N_A^sign) / (M_B − c_3 N_A^sign).   (4.11)

With simple computations (keeping terms at first order in the c_i) the number of signal events corrected for the signal leakage results to be

   N_A^sign = ( N_A − N_B · M_A / M_B ) / ( 1 + (c_3 N_A − c_2 N_B − c_1 M_A) / M_B ).   (4.12)

Notice that if c_1, c_2 and c_3 are equal to zero, eq. 4.12 reduces to the basic 2DSideband estimate (eq. 4.6).

Uncertainties

Statistical uncertainties. Each of the quantities derived with the 2DSideband method (N_A^bkg, N_A^sign and P) has an associated statistical uncertainty, computed by propagating the Poisson uncertainties on N_A, N_B, M_A and M_B. In particular, the statistical uncertainty on N_A^sign can be written as

   σ(N_A^sign) = sqrt( N_A + (M_A N_B / M_B)² · (1/N_B + 1/M_A + 1/M_B) ),   (4.13)
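The leakage-corrected extraction of eq. 4.12 can be sketched as follows; the c_i are the MC leakage coefficients of eq. 4.10, and the yields used in the example are illustrative.

```python
# Sketch of the leakage-corrected signal extraction, eq. 4.12 (first order
# in the leakage coefficients c1, c2, c3). Input yields are illustrative.

def leakage_corrected_signal(N_A, N_B, M_A, M_B, c1, c2, c3):
    """Signal yield in the SR, corrected for signal leakage into the CRs.

    For c1 = c2 = c3 = 0 this reduces to the basic estimate N_A - N_B*M_A/M_B.
    """
    basic = N_A - N_B * M_A / M_B
    return basic / (1.0 + (c3 * N_A - c2 * N_B - c1 * M_A) / M_B)
```

Subtracting the signal leakage lowers the yields of the control regions, so the corrected signal estimate is larger than the basic one (and, correspondingly, the background estimate is smaller), as observed in Table 4.6.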

Figure 4.3: Sketch of how the systematic uncertainties related to the choice of (a) the non-tight region and (b) the non-isolated region are evaluated.

while the uncertainties on N_A^bkg and on the purity are

   σ(N_A^bkg) = (M_A · N_B / M_B) · sqrt( 1/N_B + 1/M_A + 1/M_B ),   (4.14)

   σ(P) = (N_B · M_A) / (N_A · M_B) · sqrt( 1/N_A + 1/N_B + 1/M_A + 1/M_B ).   (4.15)

Systematic uncertainties. The results of the 2DSideband method should be independent of the choice of the three control regions (CR1, CR2 and CR3); a systematic uncertainty associated with this choice is therefore estimated. I evaluated it by:

1. repeating the procedure with different non-tight definitions, relaxing a different number of tightness cuts (the tight-2, tight-3 and tight-5 masks defined above) (fig. 4.3a);
2. repeating the procedure with different definitions of the non-isolated region: in particular, applying the method without any blank gap between the isolated and non-isolated regions, and extending the non-isolated definition to E_T^iso > 45 GeV (fig. 4.3b).

2DSideband Method Validation

To validate the method (i.e. to check that it correctly estimates the signal contamination of a sample), the following procedure is used.

MC samples are used to build a mixture with a known fraction of signal and background, so that the exact background contamination of the SR is known; the 2DSideband method is then applied to estimate the background contamination of that same MC sample. If the sideband estimate reproduces the injected contamination, the 2DSideband method is validated. I applied the method in the inclusive muon CR.

Validation Results

I considered a muon-enriched control region with all the cuts of the Signal Region (E_T^miss > 150 GeV, p_T^γ > 125 GeV, |η^γ| < 1.37) except the lepton veto, and required the presence of at least one muon with the characteristics listed in par. 4.4; I call this region the inclusive muon control region. As it is muon enriched, the W(→µν)+γ MC sample is used to simulate the signal, while the W(→µν)+jet sample is used to simulate the background; they are mixed according to the MC cross-sections in Tables 4.1 and 4.2. The purity of the resulting sample is therefore

   P = [events of the W(→µν)+γ sample in the tight and isolated region] / [total events in the tight and isolated region (W(→µν)+γ plus W(→µν)+jet)].   (4.16)

The purity from MC truth (eq. 4.16) is then compared to the value obtained with the 2DSideband method (eq. 4.7). The following results have been obtained:

- sample purity from MC truth: 74.87% ± 7.27%;
- sample purity from the 2DSideband method: 73.92% ± 3.92% ± 9.40%.

These results show good agreement, so we conclude that the 2DSideband method is validated in the muon CR. It is important to stress that it does not matter whether this purity estimated on MC samples (both from MC truth and from the 2DSideband method) reproduces the purity expected in data in the 1µCR: the purpose of this study was only to check that the 2DSideband method correctly estimates a known sample purity.

Noise bursts

It has been observed that the non-tight regions contain many events with cluster time > 2 ns.
These events are out of time with respect to the bunch-crossing clock, which makes them potential noise bursts produced by the detector and reconstructed as non-tight photons by the analysis. Figures 4.4a and 4.4b show the cluster time distributions of non-tight photons in the Signal Region and in the 1µCR: most non-tight photons have cluster time below 2 ns, but in the Signal Region a non-negligible amount of photons with cluster time greater than 2 ns

is selected.

Figure 4.4: Photon cluster time distributions for non-tight photons in the Signal Region (p_T^γ > 150 GeV) and in the 1µCR; photons are selected in both the barrel and endcap regions.

The presence of noise bursts mostly affects the 0µCR: the request of a muon reduces the probability of having, at the same time, a well-reconstructed muon and a noise burst. All the events with cluster time larger than 10 ns are rejected by the pre-selection cleaning cuts applied to photons (par. 4.2), but events with cluster time in the 2–10 ns range are not vetoed. This is a delicate point: we need to remove noise bursts, but we do not know the cluster time resolution, so cutting away all photons with cluster time greater than 2 ns would risk rejecting real photons whose cluster time is smeared above 2 ns by resolution effects. Moreover, these clusters have a visible effect only in the non-tight selections. For these reasons, the cluster-time veto at 2 ns is applied to non-tight photons only; it reduces the number of non-tight photons by a few percent both in the SR and in the CRs. In the following it is therefore implied that non-tight photons with cluster time > 2 ns are rejected.

Results

Once validated, the method is used to estimate the background contamination in the 1µCR, 2µCR, 2eleCR and SR. Figure 4.5 shows the isolation distributions in the 1µCR, 2µCR and SR. To correct N_A^sign for the signal-leakage contribution according to eq. 4.12, it is necessary to verify that the MC signal isolation distribution describes well the signal isolation distribution in data (i.e. for tight and isolated real photons).
The caveat is that the photon isolation distribution in data contains both signal and background contributions, so the background must be subtracted before the comparison. The following procedure is adopted: we assume that for E_T^iso > 20 GeV both the tight and non-tight bins of the plane of fig. 4.2 contain only background,

Figure 4.5: Isolation distributions (tight and tight-2/3/4/5 selections) in the 1µCR, SR and 2µCR, for √s = 8 TeV data corresponding to ∫L dt = 20.3 fb⁻¹.

Hence, we can normalize the non-tight photon isolation distribution to the tight one in this region (fig. 4.6a); once normalized, the non-tight distribution is subtracted bin by bin from the tight one (fig. 4.6b). The resulting histogram is the background-subtracted isolation distribution in data, which can then be compared to the MC signal isolation distribution. The MC signal samples used are:

- 1µCR: W(→µν)+γ;
- 2µCR: Z(→µµ)+γ;
- SR: Z(→νν)+γ.

Figure 4.7 shows the comparison between the MC and data signal isolation distributions in each of these regions. The MC signal distribution is shifted with respect to data, by about 1 GeV. In order not to underestimate the signal leakage, the MC has to be superimposed on the data: to compute the signal-leakage coefficients (eq. 4.10), the MC distribution is shifted by 1 GeV or, equivalently, the definition of the non-isolated region is shifted by 1 GeV for MC samples only. This shift is taken into account as a systematic uncertainty, by considering the variation of N_A^bkg with and without shifting the MC distribution.

In principle the signal-leakage fraction in CR3 could be estimated from data instead of MC, by counting the events of the data signal isolation distribution entering CR3. The problem is that the signal isolation distribution obtained from data does not have enough statistics: in the region E_T^iso > 8 GeV it fluctuates around zero, making the leakage estimate unreliable. For this reason the signal-leakage coefficients (eq. 4.10) are evaluated from MC also for CR3.

N_A^bkg in the SR and 1µCR is estimated with the leakage-corrected sideband formula. Since the 2µCR and 2eleCR suffer from a lack of statistics, a slightly different method is used to estimate the jet-fake background in these regions.
To gain statistics, the non-tight regions of the 1µCR and 0µCR are combined. We compute the ratio of non-tight isolated events to non-tight non-isolated events,

   r = N_B / M_B,   (4.17)

and estimate N_A^bkg in the 2µCR and 2eleCR as

   N_A^bkg = ( M_A − M_A^sign,MC ) · r,   (4.18)

where M_A^sign,MC is the leakage into CR3, computed as explained above. Table 4.5 shows the results for r, while Table 4.6 summarizes the fake-estimation results. Comparing the background estimates of the basic sideband method with the results corrected for the leakage (Table 4.6), one observes that, as expected, taking into

account the signal leakage, N_A^bkg decreases: if the signal leakage is subtracted from CR1, CR2 and CR3, the amount of background in these three regions decreases and, as a consequence, the background estimate in the tight and isolated region decreases too.

Figure 4.6: Isolation distributions before and after background subtraction: (a) tight and non-tight isolation distributions for data, before normalization; (b) the tight distribution for data (red), the non-tight distribution normalized in the E_T^iso > 20 GeV region (blue), and their bin-by-bin difference (green), i.e. the background-subtracted signal isolation distribution in data.

4.7 The γ + jet Background

Another source of background which can enter the Signal Region is given by γ+jet and di-jet processes.

Figure 4.7: Comparison between the signal isolation distribution in data (after background subtraction) and in MC, in the 1µCR, 2µCR and SR.

Table 4.5: Measurement of the ratio r defined in eq. 4.17 (numbers of non-tight isolated and non-tight non-isolated events, and their ratio), using photons in both the barrel and endcap regions; non-tight photons with cluster time > 2 ns are rejected. r is measured in the 1µCR and in the SR (p_T^γ > 125 GeV); the quoted error is statistical only.

Table 4.6: Estimates of N_A^sign and N_A^bkg corrected for the signal-leakage contribution, for the SR and the 1µ, 2µ and 2ele control regions, each with its photon selection (p_T^γ and |η^γ| cuts).
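The low-statistics variant of eqs. 4.17–4.18 used for the 2µCR and 2eleCR amounts to the following sketch; the numbers in the example are illustrative, not the measured values of Tables 4.5–4.6.

```python
# Sketch of the low-statistics fake estimate, eqs. 4.17-4.18: r is measured in
# the combined high-statistics non-tight regions, and the CR3 signal leakage
# M_A^sign,MC is taken from MC. Input numbers are illustrative.

def ratio_method_bkg(M_A, M_A_sign_mc, N_B_combined, M_B_combined):
    """Fake-photon background via N_A^bkg = (M_A - M_A^sign,MC) * r."""
    r = N_B_combined / M_B_combined   # eq. 4.17
    return (M_A - M_A_sign_mc) * r    # eq. 4.18
```

This replaces the per-region N_B/M_B ratio of the basic method with a single ratio measured where the non-tight statistics are sufficient.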

γ+jet events enter the Signal Region if the jet is badly reconstructed and partially lost, producing large fake E_T^miss; di-jet events mimic Signal Region events if one jet is misreconstructed as a photon and the other jet is badly reconstructed, again resulting in large fake E_T^miss. This background is expected to be very low, because in the Signal Region the jet is required to be well separated from the E_T^miss (∆Φ(jet, E_T^miss) > 0.4) precisely to reject events in which the E_T^miss is caused by the partial loss of a jet. The contamination is estimated with MC samples, and a data-driven approach is used as a cross-check.

MC Estimation

The MC estimate, computed with the Pythia samples listed in Table 4.1, is 0.38 ± 0.21(stat) (syst) events in the Signal Region. The MC samples are re-weighted as described in the MC re-weighting section. The systematic error is computed by varying each source of systematic uncertainty related to the energy/momentum scale of the reconstructed objects, their identification, reconstruction and selection efficiencies, and the overall uncertainties (trigger, luminosity) on the selection efficiency (4.8.3). Table 4.7 shows both the raw and the weighted cutflow for the γ+jet Pythia samples; the cuts applied in sequence are: all events, trigger, Good Run List, vertex, jet cleaning, E_T^miss cleaning, E_T^miss > 150 GeV, p_T^γ > 125 GeV, additional event cleaning (BCH), leading γ tight with |η| < 1.37, leading γ isolated, the ∆Φ(γ, E_T^miss) cut, at most one jet with ∆Φ(jet, E_T^miss) > 0.4, and the lepton veto.

Table 4.7: Cutflow for the γ+jet MC samples; the cuts applied are listed in detail in section 4.2. The table shows both the raw yield and the yield weighted according to the MC re-weighting procedure.

Data-driven Estimation

The data-driven method adopted to cross-check the MC estimation is described below.

Figure 4.8: Schematic view of the kinematic configurations of γ+jet events which are allowed, vetoed, or can enter the SR.

Events with a jet of p_T > 30 GeV which is not well separated from the E_T^miss are vetoed (sec. 4.2); only γ+jet events with a badly reconstructed jet of p_T < 30 GeV can enter the Signal Region, as shown schematically in figure 4.8. A control region is defined by applying all the Signal Region cuts (sec. 4.2) but reversing the ∆Φ(jet, E_T^miss) cut: requiring ∆Φ(jet, E_T^miss) < 0.4 selects pathological events in which the jet is aligned with the E_T^miss. From these events the electroweak background, coming from W/Z+jet and W/Z+γ processes, needs to be subtracted; MC simulation is used for this subtraction. The MC samples used to simulate the electroweak background are official SHERPA samples (Table 4.1). The Z/W+γ background consists of 7 full-simulation samples. For the Z/W+jet background, each process (e.g. W(→τν)+jet or Z(→µµ)+jet) is simulated in 5 p_T bins (inclusive, 70 < p_T^Z/W < 140 GeV, 140 < p_T^Z/W < 280 GeV, 280 < p_T^Z/W < 500 GeV, p_T^Z/W > 500 GeV), with massive b and c quarks, and each p_T bin is simulated in three b/c-flavour slices defined by quark filters and vetoes (b-filter; c-filter with b-veto; b-veto and c-veto). The full Z/W+jet ensemble is therefore composed of 105 samples; counting also the Z/W+γ samples, 112 MC samples need to be correctly weighted and subtracted from the data.

The γ+jet and di-jet contribution in the SR is estimated from the resulting jet p_T distribution by extrapolating it to the low-p_T (< 30 GeV) jet region. The procedure is simple, but the crucial point is to normalize the MC samples correctly. Figure 4.9 shows the jet p_T distribution for data, for the electroweak MC background and for γ+jet MC.
Figure 4.10 shows the jet p_T distribution for data after subtraction of the electroweak MC background, compared to the γ+jet distribution. The data-driven estimate suffers from a lack of statistics: only 23 events enter the control region defined above, both MC and data are subject to statistical fluctuations, and the first bins are dominated by the electroweak background. Figure 4.9 shows that in the first bin there is an excess of electroweak MC with respect to data; Table 4.8 shows the event yields of the electroweak processes which contribute most to the first bins. The estimate from data is compatible with the MC prediction (fig. 4.10).
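The subtraction step behind fig. 4.10 can be sketched in one line; the bin contents in the example are made up, not the measured yields.

```python
# Sketch of the data-driven cross-check: the (correctly normalized) electroweak
# MC is subtracted bin by bin from the data jet-pT histogram; the low-pT bins of
# the difference drive the gamma+jet extrapolation. Bin contents are made up.

def subtract_ew(data_bins, ew_mc_bins):
    """Bin-by-bin subtraction of the electroweak background from data.

    Negative bins from statistical fluctuations are kept as they are, since
    they carry information for the low-pT extrapolation.
    """
    return [d - e for d, e in zip(data_bins, ew_mc_bins)]
```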

Figure 4.9: Jet p_T distribution for data and for the electroweak MC samples.

Figure 4.10: Jet p_T distribution for data with the electroweak MC background subtracted, compared to the γ+jet distribution.

Figure 4.11: Jet p_T distribution for data and for the electroweak MC background.

Table 4.8: Raw and weighted event yields of the samples mainly contributing to the histogram in fig. 4.9: Z(→νν)+γ, Z(→µµ)+γ, Z(→ττ)+γ, W(→eν)+γ, W(→µν)+γ, W(→τν)+γ, Z+jet and W+jet.

4.8 Simultaneous Fitting Technique for background estimation

The Z(→νν)+γ, Z(→ll)+γ and W(→lν)+γ processes are estimated by performing a simultaneous fit. The idea is to build a fit using all the available information (coming from the CRs and the SR) to extrapolate their yields in the SR. A likelihood function is built as

   L = L_phys × L_constraint,   (4.19)

where L_phys is the product of the likelihood functions of the individual regions:

   L_phys = ∏_i Poiss( N_Ri^obs | µ · N_Ri^exp,sign + N_Ri^exp,Z+γ + N_Ri^exp,W+γ + N_Ri^others ),   (4.20)

where:

- R_i runs over the CRs and the SR;
- N_Ri^obs is the number of observed data events in each region;
- µ is the signal strength;
- N_Ri^exp,sign is the number of expected signal events;
- N_Ri^others is the number of expected events from backgrounds which are not normalized in the CRs (data-driven or purely MC estimates, such as the γ+jet background);
- N_Ri^exp is the expected (MC) yield of each Z/W+γ background, rescaled by a factor k_Z or k_W (equation 4.21):

   N_Ri^exp,Z(νν)+γ = k_Z · N_Ri^MC,Z(νν)+γ · ∏_n (1 + σ_Ri^syst_n · θ_syst_n),
   N_Ri^exp,Z(ll)+γ = k_Z · N_Ri^MC,Z(ll)+γ · ∏_n (1 + σ_Ri^syst_n · θ_syst_n),   (4.21)
   N_Ri^exp,W(lν)+γ = k_W · N_Ri^MC,W(lν)+γ · ∏_n (1 + σ_Ri^syst_n · θ_syst_n),

while

   N_Ri^others = N_Ri^estimated,process · (1 + σ_Ri^syst · θ_syst).   (4.22)

The systematic uncertainties are introduced in the model via nuisance parameters θ multiplied by the actual value of the uncertainty, (1 + σ_Ri^syst · θ_syst). The θ for a given systematic

is the same in every region. Each θ follows a Gaussian distribution centred at zero with unit width, so that

   L_constraint = ∏_n Gauss( 0 | θ_syst_n, 1 ).   (4.23)

While the analysis was blinded, the Signal Region could not be included in the fit to constrain the scale factors. The following procedure has been adopted to overcome this problem:

- blinded analysis: a simultaneous fit including only the three control regions (1µCR, 2µCR and 2eleCR) is performed, and the scale factors resulting from the fit are applied to the expected background in the SR;
- unblinded analysis: once the analysis is unblinded, a simultaneous fit including also the Signal Region to constrain the scale factors can be performed; alternatively, a background-only fit can be performed for the unblinded analysis too.

HistFitter

The package used to carry out this part of the analysis is HistFitter [55], a high-level user interface to perform likelihood fits and follow up with their statistical interpretation. The user interface and its underlying configuration manager are written in Python, and execute external computational software compiled in C++, such as HistFactory [56], RooStats [57] and RooFit [58]. HistFitter is a Python run script that takes a Python configuration file as input; the configuration file sets up the pdf of the control, validation and signal regions of a physics analysis. The inputs of my Python script are ROOT TTrees for all the background processes, for each region and for each systematic considered; they contain all the information on the selected events: the event weights, the region they enter and the systematic variations. For each process, the proper configuration needs to be set.
As previously mentioned, the Z/W+γ processes are normalized in the CRs (and in the SR too, for the unblinded analysis), which means that their scale factors are free parameters of the fit. Data-driven estimates are treated differently: they are not normalized, and their associated error is taken to be the statistical plus systematic error resulting from the data-driven estimation. Similarly, the γ+jet background is not fitted, and its systematic uncertainty is estimated by re-running the MC γ+jet samples with the different systematic variations.

Inputs

For each region the background yield is computed. I did not focus on the electron-fake estimation, so those values are taken from the other members of the analysis group; apart from the electron fakes, all the inputs have been produced by me.

Systematics

There is a large set of systematic uncertainties, related to the energy/momentum scale of the reconstructed objects, their identification, reconstruction and selection efficiencies, and the overall uncertainties (trigger, luminosity) on the selection efficiency. For each source of uncertainty, the expected yields are recomputed by varying the corresponding quantity by ±1σ. As this analysis is dominated by statistical uncertainties, for my thesis I used only a subset of the main systematics.

Electrons and Photons. Systematic uncertainties linked to electrons and photons come from various sources. To measure the contributions depending on energy scale and resolution, up/down variations of the energy scale and resolution are applied coherently to all electrons and photons in the event. Uncertainties depending on the electron identification and reconstruction efficiency scale factors are taken into account by varying coherently the scale factors associated with all the electrons in the event; systematics on the photon identification efficiency are applied by varying the data/MC scale factors within their uncertainty. In this thesis, I considered the systematics related to the total Z→ee scale uncertainties (EGZEE_[UP/DOWN]) and the uncertainty on the photon identification efficiency scale factor (ph_[up/down]).

Jets. A large number of uncertainties linked to the jet energy scale and resolution are provided by the jet/E_T^miss performance group; I considered the jet energy scale and resolution uncertainties, JES and JER.

Missing transverse momentum. I considered the E_T^miss energy scale uncertainty, which I will refer to as SCALEST_[UP/DOWN].

Luminosity, Trigger, PDF, scale, ISR/FSR uncertainties. I considered a 2.8% luminosity uncertainty.
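The structure of the likelihood of eqs. 4.19–4.23 can be sketched in a few lines of plain Python. This is a background-only illustration with one bin per region, one systematic per background and the signal term omitted (µ = 0); region contents, uncertainty sizes and field names are illustrative, and this is not the actual HistFitter model used in the analysis.

```python
# Background-only sketch of the likelihood of eqs. 4.19-4.23: Poisson terms per
# region (eq. 4.20), scale factors k_Z, k_W (eq. 4.21), non-fitted backgrounds
# (eq. 4.22), unit-Gaussian nuisance constraints (eq. 4.23). Names made up.
from math import lgamma, log

def nll(k_Z, k_W, thetas, regions):
    """Negative log of L_phys * L_constraint (up to constants)."""
    val = 0.0
    for R in regions:
        mu = (k_Z * R["zgam"] * (1.0 + R["sig_z"] * thetas[0])    # Z+gamma
              + k_W * R["wgam"] * (1.0 + R["sig_w"] * thetas[1])  # W+gamma
              + R["others"])                                      # non-fitted
        # Poisson term of eq. 4.20: -log P(n_obs | mu)
        val += mu - R["n_obs"] * log(mu) + lgamma(R["n_obs"] + 1.0)
    # unit-Gaussian constraints of eq. 4.23
    val += sum(0.5 * t * t for t in thetas)
    return val

# One illustrative region: 10 observed events, 6 + 4 expected, 10% systematics.
regions = [{"n_obs": 10, "zgam": 6.0, "wgam": 4.0,
            "sig_z": 0.1, "sig_w": 0.1, "others": 0.0}]
```

Minimizing this function over (k_Z, k_W, θ) is what HistFitter does, via HistFactory/RooFit, over many regions and systematics simultaneously.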
4.9 Validation Region

I validated the fit in a Validation Region (VR), chosen to be as similar as possible to the SR in background composition and statistical power, but with a small signal contamination. Events are selected as in the SR, except that a lower E_T^miss and a larger photon pseudorapidity are required, to enhance the number of events. In particular, the following cuts are required:

110 < E_T^miss < 150 GeV, |η^γ| < 2.37.

A large amount of γ+jet contamination has been observed in this region (section 4.9.2); to suppress it and bring it to the same level as in the SR, an additional cut has been added: when there is a jet in the event, a cut on the azimuthal separation between the photon and the jet is applied, ∆Φ(γ, jet) < 2.7. These cuts make the VR similar in background composition to the SR. To minimize the contamination of signal events in this region, a requirement on the azimuthal separation between the photon and the E_T^miss has been applied: ∆Φ(γ, E_T^miss) < 3.00. This cut has been optimized on a subsample of MC signals; I will not go through the details of how this was carried out. Summarizing, the Validation Region is defined by the following cuts:

- p_T^γ > 125 GeV
- |η^γ| < 2.37
- 110 < E_T^miss < 150 GeV
- ∆Φ(γ, jet) < 2.7
- ∆Φ(γ, E_T^miss) < 3.00

Low-E_T^miss Control Regions definition

The simultaneous fit is performed in the VR using three low-E_T^miss control regions. These control regions are similar to the Wγ, Z(→µµ)γ and Z(→ee)γ enriched control regions defined in 4.4 apart from the E_T^miss cut: the low-E_T^miss CRs are defined by 110 < E_T^miss < 150 GeV. Neither the ∆Φ(γ, jet) < 2.7 nor the ∆Φ(γ, E_T^miss) < 3.00 cut has been applied in these control regions. The ∆Φ(γ, E_T^miss) cut is dropped because it reduces the number of selected events too much; no bias is expected, because the ∆Φ(γ, E_T^miss) distributions have been checked to be similar in data and MC simulation.

γ + jet background estimation in the Validation Region

When the E_T^miss cut is lowered, the γ+jet background increases significantly. In the VR the E_T^miss cut could in principle be lowered down to 100 GeV to increase the statistics; the problem is that the γ+jet contribution in the [100, 110] GeV E_T^miss range is large and suffers from a large statistical uncertainty (the MC prediction is 60 ± 30 events). To avoid this problem, the E_T^miss range in the VR has been set to 110 < E_T^miss < 150 GeV. As done in the SR (section 4.7), the γ+jet background is estimated using simulation, and a data-driven approach is used to cross-check the MC estimate.

MC Estimation. From simulation (the γ+jet samples listed in 4.1), the γ+jet contribution is

   2.5 ± 2.3(stat) (syst) events.   (4.24)

Data-driven estimation. The data-driven procedure adopted is the one described in section 4.7. The problem here is that the ∆Φ(γ, jet) < 2.7 cut drastically reduces the candidate fake-E_T^miss events in data, making a data-driven estimate at that selection level difficult. Figure 4.13a shows the ∆Φ(γ, jet) distribution at the end of the selection cuts (i.e. after applying ∆Φ(γ, jet) < 2.7), and figure 4.13b the corresponding jet p_T distribution. These distributions show that the γ+jet background is strongly reduced by the ∆Φ(γ, jet) cut, but at this level it is impossible to extrapolate an estimate to the low-p_T jet region (jet p_T < 30 GeV). It is however possible to attempt the γ+jet estimate before applying ∆Φ(γ, jet) < 2.7: figure 4.12a shows the ∆Φ(γ, jet) distribution before this requirement, and figure 4.12b the corresponding jet p_T distribution. Figure 4.14 shows the final jet p_T distribution both for data and for MC: the [30, 50] GeV bin in data (data minus electroweak background) contains 8.49 ± 5.06(stat) entries.
In a conservative approach, this number can be taken as an upper limit, under the assumption that the number of events goes to zero as the jet p_T goes to zero. The amount of γ+jet events in the nominal VR (with the ∆Φ(γ, jet) < 2.7 cut applied) is then obtained by multiplying this number by the fraction of MC events which pass the ∆Φ(γ, jet) cut; this fraction is very low. Roughly speaking, this estimate confirms that, with the E_T^miss range set to 110 GeV < E_T^miss < 150 GeV, the γ+jet background, as predicted by MC, is kept under control thanks to the ∆Φ(γ, jet) < 2.7 cut.

Simultaneous fit

The simultaneous fit has been validated in the VR. Two fits have been performed:

1. a fit using only the CRs to constrain the scale factors;

Figure 4.12: Distributions before requiring ∆Φ(jet, γ) < 2.7. (a) ∆Φ(jet, γ) distribution for data, electroweak background and γ+jet MC, after requiring ∆Φ(γ, E_T^miss) < 3.0 and ∆Φ(jet, E_T^miss) < 0.4, but before requiring ∆Φ(jet, γ) < 2.7; the distribution shows that the main fraction of γ+jet events is concentrated at ∆Φ(jet, γ) > 2.7. (b) Jet p_T distribution at the same selection level.

Figure 4.13: Distributions after requiring ∆Φ(jet, γ) < 2.7. (a) ∆Φ(jet, γ) distribution for data, electroweak background and γ+jet MC, after requiring ∆Φ(jet, γ) < 2.7 to reject γ+jet events; only a small contribution from γ+jet MC remains, and only one event survives in data. (b) Jet p_T distribution after the ∆Φ(jet, γ) < 2.7 requirement.

Figure 4.14: Jet p_T distribution obtained by subtracting the electroweak samples from the data of figure 4.12b; from this distribution an estimate of the γ+jet background can be attempted.

2. a fit in which the VR is also included; in this fit the signal strength parameter has been set to zero, because no signal is expected to enter the VR.

Results of the fit of case 1 are shown in Table 4.9, and results of the fit of case 2 in Table 4.10. In both cases, the results show agreement between the fitted background and the data observation.

4.10 Background estimation in the Signal Region

The simultaneous fit has been performed to estimate the Z/W+γ background in the Signal Region, both including the CRs only (Table 4.11) and including the CRs plus the SR (Table 4.12). Figures 4.15 and 4.16 show the pre-fit and post-fit agreement between data and MC in the various regions for the CR-only fit; the post-fit plots show good agreement between data and MC. To perform the fit using both the SR and the CRs (Table 4.12), the signal strength has been set to zero; it has been checked that letting it float over positive values does not change the situation, because the best-fit value is approximately µ ≈ 0. The inclusion of the Signal Region in the fit adds more information to constrain the background, hence the uncertainty is reduced: Table 4.12 shows that including the SR the systematic uncertainty is much smaller than in the CR-only fit. This is a consequence of the particular result observed in data: since the observed events are fewer than the predicted events, the discovery fit pulls the background estimate down with respect to the CR-only fit. The background source which is least constrained by the

[Table 4.9 — columns: 1µCR, 2µCR, 2eleCR, VR; rows: observed events, total fitted background, fitted yields per background component (Z(µµ)+γ, Z(νν)+γ, Z(ee)+γ, Z(ττ)+γ, W(µν)+γ, W(τν)+γ, W(eν)+γ, γ + jet, jets faking photons, electrons faking photons) and the corresponding nominal MC expectations.]

Table 4.9: Fit results in the validation region. The fit is performed using only the low-E_T^miss CRs to constrain the scale factors. Nominal MC expectations (normalised to MC cross-sections) are given for comparison. The errors shown are statistical plus systematic uncertainties (this fit is done including the systematic uncertainties listed in 4.8.3). Uncertainties on the fitted yields are symmetric by construction, with the negative error truncated when the yield reaches zero.

[Table 4.10 — same layout as Table 4.9: columns 1µCR, 2µCR, 2eleCR, VR; rows give the observed events, the fitted yields per background component and the nominal MC expectations.]

Table 4.10: Fit results in the validation region. The fit is performed using both the low-E_T^miss CRs and the Validation Region itself to constrain the scale factors. Nominal MC expectations (normalised to MC cross-sections) are given for comparison. The errors shown are statistical plus systematic uncertainties (this fit is done including the systematic uncertainties listed in 4.8.3). Uncertainties on the fitted yields are symmetric by construction, with the negative error truncated when the yield reaches zero.

CRs is the electron fake. The fit uses the fake background as a handle because:

- it has a large uncertainty within which it can vary;
- it is less constrained by the CRs than the W/Z + γ components, because it only has a sizable contribution in the signal region.

The variation of this nuisance parameter can result in a large change of the W/Z + jet signal-region component, which is basically allowed to float freely within its Gaussian constraint. Once this nuisance parameter varies, the Z(→ νν) + γ component also changes, as the two are anticorrelated (figure 4.17). It can be checked that if the signal strength µ is allowed to float to negative values, the fit returns a negative value for µ instead of using the fake handle; in this case the best-fit value for the signal strength is negative. Since including the SR in the fit produces this significant decrease of the systematic uncertainty, we decided to take the background estimate from the CR-only fit. Hence the total background estimate in the signal region is: ± 35.42(stat) ± 25.52(syst) events. A different procedure is used for setting a model-independent limit (section 5.1).
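As an illustration of the simultaneous-fit idea (a toy sketch, not the HistFitter configuration used in the analysis), the CR-only fit can be reduced to maximising a product of Poisson likelihoods over the control regions with the scale factors k_W and k_Z as free parameters. All yields below are invented for the example, and systematic uncertainties are omitted:

```python
import numpy as np
from scipy.optimize import minimize

# Toy expected yields per control region (invented numbers): each region's
# expectation is kW * W_gamma + kZ * Z_gamma + a fixed "other" background.
regions = {
    "1muCR":  {"W": 120.0, "Z": 15.0, "other": 30.0, "n_obs": 180},
    "2muCR":  {"W": 2.0,   "Z": 60.0, "other": 5.0,  "n_obs": 75},
    "2eleCR": {"W": 3.0,   "Z": 55.0, "other": 8.0,  "n_obs": 60},
}

def nll(params):
    """Negative log-likelihood of the simultaneous Poisson fit (constants dropped)."""
    kW, kZ = params
    total = 0.0
    for r in regions.values():
        mu = kW * r["W"] + kZ * r["Z"] + r["other"]
        total += mu - r["n_obs"] * np.log(mu)
    return total

res = minimize(nll, x0=[1.0, 1.0], bounds=[(0.01, 5.0), (0.01, 5.0)])
kW_hat, kZ_hat = res.x
print(f"fitted scale factors: kW = {kW_hat:.2f}, kZ = {kZ_hat:.2f}")
# The fitted kW_hat, kZ_hat would then scale the W+gamma and Z+gamma
# predictions when extrapolating to the signal region.
```

In the real analysis each yield additionally carries the response functions ν(θ_n) encoding the systematic uncertainties, and the minimisation is performed by HistFitter.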

Figure 4.15: Pre-fit and post-fit agreement between data and MC in the various control regions: (a) 1-muon CR before the fit, (b) 1-muon CR after the fit, (c) 2-muon CR before the fit, (d) 2-muon CR after the fit, (e) 2-electron CR before the fit, (f) 2-electron CR after the fit. Each panel shows the 2012 data (√s = 8 TeV) together with the Standard Model expectation, broken down into the electron-fake, jet-fake and W/Z + γ components, with the Data/SM ratio below. The post-fit results show a good agreement between data and MC.

Figure 4.16: Pre-fit and post-fit agreement between data and MC in the signal region: (a) SR before the fit, (b) SR after the fit. The post-fit results show a good agreement between data and MC.

Figure 4.17: Correlation factors between the nuisance parameters of the fit (Lumi, alpha_egzee, alpha_errsyst, alpha_jer, alpha_jes, alpha_scalest, alpha_ph, alpha_stat, the per-region gamma_stat parameters, mu_w and mu_z). As shown in the table, the nuisance parameter of the statistical uncertainty associated to the electron fake (called alpha_errsyst in the table) is anticorrelated with the scale factor k_Z (called mu_z in the table).

[Table 4.11 — columns: 1µCR, 2µCR, 2eleCR, SR; rows: observed events, total fitted background, fitted yields per background component and the corresponding nominal MC expectations.]

Table 4.11: Fit performed in the Signal Region including the control regions only. The error associated to each yield includes both the statistical and the systematic uncertainty. Only a subset of the systematics has been considered.

[Table 4.12 — same layout as Table 4.11: columns 1µCR, 2µCR, 2eleCR, SR; rows give the observed events, the fitted yields per background component and the nominal MC expectations.]

Table 4.12: Fit performed in the Signal Region including both the control regions and the Signal Region. The error associated to each yield includes both the statistical and the systematic uncertainty. Only a subset of the systematics has been considered.

Chapter 5

Interpretations

5.1 Model Independent limit on the presence of new physics

Limits on the presence of physics beyond the SM are extracted from the comparison between the number of events measured in data and the SM predictions. 521 events are observed in data and the estimated SM prediction is ± 35.42(stat) ± 25.52(syst) events (section 4.10). The number of selected events coming from a potential new-physics process of cross-section σ is N_new = L · σ · A · ε, where L is the integrated luminosity and A · ε is the product of the acceptance of the selection (A) and the experimental efficiency to select signal events (ε). Without any hypothesis on the model of new physics, a limit on the visible cross-section, defined as σ_vis = σ · A · ε, is computed. The idea is to estimate limits by comparing the probability of the data events in the SR to be compatible with the predicted background, and with the background plus a given signal. To compute the limits, a profiled likelihood-ratio statistical test is used. The results are expressed in terms of the visible cross-section σ_vis, which is the parameter of interest (POI) of the fit. The likelihood function, the same used to perform the simultaneous fit for the background estimation (section 4.8), is built from Poisson distributions with the mean set to the mean value of the background and signal predictions. Isolating the pdf of the SR and of the CRs and isolating σ_vis in equation 4.20, the likelihood function can be written as:

L = Poiss(N_SR^obs | N_SR^exp,signal + N_SR^exp,bkg) × ∏_i Poiss(N_Ri^obs | N_Ri^exp,bkg) × L_constraint    (5.1)

The first term of equation 5.1 is a Poisson pdf which describes the probability of observing N_SR^obs events in the signal region, given as expectation value the number of expected events in the SR. The expected events in the SR are both signal and background events:

- expected signal events: N_SR^exp,signal = σ_vis · L · ∏_n ν(θ_n), where ν(θ_n) is a response

function for each nuisance parameter, defined as ν(θ_n) = (1 + σ_syst_n^{R_i} · θ_syst_n) (as described in section 4.8);

- expected background events: N_SR^exp,bkg = Σ_k β_k B_k ∏_n ν(θ_n), where k runs over the various background sources, B_k is the nominal expectation for the k-th background and β_k represents the scale factors k_W, k_Z in the case of the W/Z + γ backgrounds (while it is 1 otherwise); each background yield is multiplied by the response functions ν(θ_n), accounting for the effect of the systematic uncertainties.

The second term of Eq. 5.1 consists of the product of a Poisson pdf for each control region R_i. In the control regions only background events are expected, hence N_Ri^exp,bkg = Σ_k β_k B_k ∏_n ν(θ_n). The third term of Eq. 5.1 is a likelihood function which constrains each nuisance parameter with a Gaussian pdf, as introduced in section 4.8.

To test a hypothesised value of σ_vis, a test statistic based on the profiled likelihood ratio is used. The test statistic q_σvis used [59] is defined as:

q_σvis = −2 ln [ L(σ_vis, θ̂(σ_vis)) / L(0, θ̂(0)) ]     if σ̂_vis < 0,
q_σvis = −2 ln [ L(σ_vis, θ̂(σ_vis)) / L(σ̂_vis, θ̂) ]    if 0 ≤ σ̂_vis ≤ σ_vis,    (5.2)
q_σvis = 0                                                if σ̂_vis > σ_vis.

The test is performed as follows:

- σ̂_vis < 0: the numerator is computed for each tested value of the POI σ_vis, maximising L with σ_vis fixed; θ̂(σ_vis) is the set of nuisance-parameter values after this fit (called the conditional maximum-likelihood estimator). The denominator is maximised fixing σ_vis = 0 (hence θ̂(0) is the set of parameters which maximises L for σ_vis = 0);

- 0 ≤ σ̂_vis ≤ σ_vis: as before, the numerator is computed for each tested value of the POI σ_vis, maximising L with σ_vis fixed. The denominator is maximised with all parameters free (σ̂_vis and θ̂ are called the unconditional maximum-likelihood estimators).

To extract the upper limit on σ_vis, the modified frequentist CLs approach is used [60].
For each tested value of σ_vis the CLs is computed as:

CL_s = CL_{s+b} / CL_b = p_σvis / (1 − p_b) = [ ∫_{q_σvis^obs}^{∞} f(q_σvis | σ_vis, θ̂(σ_vis)) dq_σvis ] / [ ∫_{q_σvis^obs}^{∞} f(q_σvis | 0, θ̂(0)) dq_σvis ],

where:

- f(q_σvis | σ_vis, θ̂(σ_vis)) is the pdf of q_σvis when the value σ_vis of the POI is tested;
- f(q_σvis | 0, θ̂(0)) is the pdf of q_σvis when the background-only hypothesis is tested (i.e. the POI is set to zero).
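The CLs prescription can be illustrated with a deliberately simplified sketch: a single-bin counting experiment in which the event count itself plays the role of the test statistic and nuisance parameters are ignored (the analysis instead uses the profiled likelihood ratio of Eq. 5.2). All numbers below are invented for the example:

```python
from scipy.stats import poisson

def cls_counting(n_obs, b, s):
    """CLs for a single-bin counting experiment, using the observed count
    itself as the test statistic (low counts are signal-unlike, so the
    p-values are the one-sided probabilities P(n <= n_obs))."""
    cl_sb = poisson.cdf(n_obs, b + s)  # CL_{s+b} = P(n <= n_obs | s + b)
    cl_b = poisson.cdf(n_obs, b)       # CL_b     = P(n <= n_obs | b)
    return cl_sb / cl_b

def upper_limit(n_obs, b, cl=0.95, tol=1e-3):
    """Bisect in the signal yield s until CLs drops to 1 - cl."""
    lo, hi = 0.0, 10.0 * (n_obs + 10)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cls_counting(n_obs, b, mid) > 1.0 - cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Purely illustrative numbers (not the thesis yields): 100 events observed
# over an expected background of 105.
s_up = upper_limit(n_obs=100, b=105.0)
print(f"95% CL upper limit on the signal yield: {s_up:.1f} events")
# Dividing s_up by the integrated luminosity would convert this into a
# limit on the visible cross-section sigma_vis.
```

Because CLs divides by CL_b, a downward fluctuation of the data below the background expectation does not produce an artificially strong exclusion, which is the motivation for the modified frequentist approach.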

Figure 5.1: Evolution of the confidence level of the signal hypothesis (CLs) with the visible cross-section (σ · A · ε) of new physics. Values of σ · A · ε [fb] above the red continuous line are expected to be excluded at 95% CL.

The 95% CL limit on σ_vis is then given by the solution of the equation CL_s = 0.05. The procedure described above is used to compute observed limits. To compute expected limits (i.e. to set limits without using the number of observed events in data), an Asimov dataset is used instead of the data. The first step is to perform a simultaneous fit including both the control regions and the signal region, with the visible cross-section set to 0. The scale factors (k_W, k_Z) and the nuisance parameters related to the systematic uncertainties are extracted and then used to build an Asimov dataset in the SR. The resulting number of events in the SR is used in place of N_SR^obs.

Results

Results are computed with no systematic uncertainty attached to the signal component. The systematic uncertainties considered on the background are listed in section 4.8.3. The upper limits at 95% CL on σ · A · ε are shown in figure 5.1 and table 5.1. We can exclude at 95% CL the presence of a new-physics process with a visible cross-section above 4.04 fb. This result sets more stringent constraints than the mono-photon analysis performed on the 7 TeV dataset (4.6 fb⁻¹) [61], which excludes values of σ_vis above 6.8 fb at 95% CL.

5.2 Limits on Dark Matter direct production

In this section, exclusion limits on WIMP direct production in the framework of effective field theories are computed. As explained in section 1.5, in the framework


Higgs Signals and Implications for MSSM Higgs Signals and Implications for MSSM Shaaban Khalil Center for Theoretical Physics Zewail City of Science and Technology SM Higgs at the LHC In the SM there is a single neutral Higgs boson, a weak isospin

More information

Wesley Smith, U. Wisconsin, January 21, Physics 301: Introduction - 1

Wesley Smith, U. Wisconsin, January 21, Physics 301: Introduction - 1 Wesley Smith, U. Wisconsin, January 21, 2014 Physics 301: Introduction - 1 Physics 301: Physics Today Prof. Wesley Smith, wsmith@hep.wisc.edu Undergraduate Physics Colloquium! Discussions of current research

More information

Dark Matter Searches in CMS. Ashok Kumar Delhi University - Delhi

Dark Matter Searches in CMS. Ashok Kumar Delhi University - Delhi Dark Matter Searches in CMS Ashok Kumar Delhi University - Delhi 28th Rencontres de Blois, Particle Physics and Cosmology! May 29 June 03, 2016 Dark Matter at LHC Benchmark Models Results at 8 TeV Mono-photon

More information

Neutrinos and DM (Galactic)

Neutrinos and DM (Galactic) Neutrinos and DM (Galactic) ArXiv:0905.4764 ArXiv:0907.238 ArXiv: 0911.5188 ArXiv:0912.0512 Matt Buckley, Katherine Freese, Dan Hooper, Sourav K. Mandal, Hitoshi Murayama, and Pearl Sandick Basic Result

More information

Chapter 29 Lecture. Particle Physics. Prepared by Dedra Demaree, Georgetown University Pearson Education, Inc.

Chapter 29 Lecture. Particle Physics. Prepared by Dedra Demaree, Georgetown University Pearson Education, Inc. Chapter 29 Lecture Particle Physics Prepared by Dedra Demaree, Georgetown University Particle Physics What is antimatter? What are the fundamental particles and interactions in nature? What was the Big

More information

CMB constraints on dark matter annihilation

CMB constraints on dark matter annihilation CMB constraints on dark matter annihilation Tracy Slatyer, Harvard University NEPPSR 12 August 2009 arxiv:0906.1197 with Nikhil Padmanabhan & Douglas Finkbeiner Dark matter!standard cosmological model:

More information

ATLAS Run II Exotics Results. V.Maleev (Petersburg Nucleare Physics Institute) on behalf of ATLAS collaboration

ATLAS Run II Exotics Results. V.Maleev (Petersburg Nucleare Physics Institute) on behalf of ATLAS collaboration ATLAS Run II Exotics Results V.Maleev (Petersburg Nucleare Physics Institute) on behalf of ATLAS collaboration What is the dark matter? Is the Higgs boson solely responsible for electroweak symmetry breaking

More information

The ATLAS Experiment and the CERN Large Hadron Collider

The ATLAS Experiment and the CERN Large Hadron Collider The ATLAS Experiment and the CERN Large Hadron Collider HEP101-2 January 28, 2013 Al Goshaw 1 HEP 101-2 plan Jan. 14: Introduction to CERN and ATLAS DONE Today: 1. Comments on grant opportunities 2. Overview

More information

Elementary Particle Physics Glossary. Course organiser: Dr Marcella Bona February 9, 2016

Elementary Particle Physics Glossary. Course organiser: Dr Marcella Bona February 9, 2016 Elementary Particle Physics Glossary Course organiser: Dr Marcella Bona February 9, 2016 1 Contents 1 Terms A-C 5 1.1 Accelerator.............................. 5 1.2 Annihilation..............................

More information

Moment of beginning of space-time about 13.7 billion years ago. The time at which all the material and energy in the expanding Universe was coincident

Moment of beginning of space-time about 13.7 billion years ago. The time at which all the material and energy in the expanding Universe was coincident Big Bang Moment of beginning of space-time about 13.7 billion years ago The time at which all the material and energy in the expanding Universe was coincident Only moment in the history of the Universe

More information

Chapter 22: Cosmology - Back to the Beginning of Time

Chapter 22: Cosmology - Back to the Beginning of Time Chapter 22: Cosmology - Back to the Beginning of Time Expansion of Universe implies dense, hot start: Big Bang Future of universe depends on the total amount of dark and normal matter Amount of matter

More information

M. Lattanzi. 12 th Marcel Grossmann Meeting Paris, 17 July 2009

M. Lattanzi. 12 th Marcel Grossmann Meeting Paris, 17 July 2009 M. Lattanzi ICRA and Dip. di Fisica - Università di Roma La Sapienza In collaboration with L. Pieri (IAP, Paris) and J. Silk (Oxford) Based on ML, Silk, PRD 79, 083523 (2009) and Pieri, ML, Silk, MNRAS

More information

Particle accelerators

Particle accelerators Particle accelerators Charged particles can be accelerated by an electric field. Colliders produce head-on collisions which are much more energetic than hitting a fixed target. The center of mass energy

More information

The Story of Wino Dark matter

The Story of Wino Dark matter The Story of Wino Dark matter Varun Vaidya Dept. of Physics, CMU DIS 2015 Based on the work with M. Baumgart and I. Rothstein, 1409.4415 (PRL) & 1412.8698 (JHEP) Evidence for dark matter Rotation curves

More information

Dark matter at LHC (Mostly ATLAS) Ian Hinchliffe LBNL

Dark matter at LHC (Mostly ATLAS) Ian Hinchliffe LBNL Dark matter at LHC (Mostly ATLAS) Ian Hinchliffe LBNL 1 Dark matter at LHC 2 Two classes of searches Model dependent: dependent on other new particles Higgs to invisible Supersymmetry Model independent

More information

Effective Field Theory for Nuclear Physics! Akshay Vaghani! Mississippi State University!

Effective Field Theory for Nuclear Physics! Akshay Vaghani! Mississippi State University! Effective Field Theory for Nuclear Physics! Akshay Vaghani! Mississippi State University! Overview! Introduction! Basic ideas of EFT! Basic Examples of EFT! Algorithm of EFT! Review NN scattering! NN scattering

More information

Kaluza-Klein Dark Matter

Kaluza-Klein Dark Matter Kaluza-Klein Dark Matter Hsin-Chia Cheng UC Davis Pre-SUSY06 Workshop Complementary between Dark Matter Searches and Collider Experiments Introduction Dark matter is the best evidence for physics beyond

More information

Learning from WIMPs. Manuel Drees. Bonn University. Learning from WIMPs p. 1/29

Learning from WIMPs. Manuel Drees. Bonn University. Learning from WIMPs p. 1/29 Learning from WIMPs Manuel Drees Bonn University Learning from WIMPs p. 1/29 Contents 1 Introduction Learning from WIMPs p. 2/29 Contents 1 Introduction 2 Learning about the early Universe Learning from

More information

UNVEILING THE ULTIMATE LAWS OF NATURE: DARK MATTER, SUPERSYMMETRY, AND THE LHC. Gordon Kane, Michigan Center for Theoretical Physics Warsaw, June 2009

UNVEILING THE ULTIMATE LAWS OF NATURE: DARK MATTER, SUPERSYMMETRY, AND THE LHC. Gordon Kane, Michigan Center for Theoretical Physics Warsaw, June 2009 UNVEILING THE ULTIMATE LAWS OF NATURE: DARK MATTER, SUPERSYMMETRY, AND THE LHC Gordon Kane, Michigan Center for Theoretical Physics Warsaw, June 2009 OUTLINE! Some things we ve learned about the physical

More information

FACULTY OF SCIENCE. High Energy Physics. WINTHROP PROFESSOR IAN MCARTHUR and ADJUNCT/PROFESSOR JACKIE DAVIDSON

FACULTY OF SCIENCE. High Energy Physics. WINTHROP PROFESSOR IAN MCARTHUR and ADJUNCT/PROFESSOR JACKIE DAVIDSON FACULTY OF SCIENCE High Energy Physics WINTHROP PROFESSOR IAN MCARTHUR and ADJUNCT/PROFESSOR JACKIE DAVIDSON AIM: To explore nature on the smallest length scales we can achieve Current status (10-20 m)

More information

Simplified models in collider searches for dark matter. Stefan Vogl

Simplified models in collider searches for dark matter. Stefan Vogl Simplified models in collider searches for dark matter Stefan Vogl Outline Introduction/Motivation Simplified Models for the LHC A word of caution Conclusion How to look for dark matter at the LHC? experimentally

More information

Frontiers in Theoretical and Applied Physics 2017, Sharjah UAE

Frontiers in Theoretical and Applied Physics 2017, Sharjah UAE A Search for Beyond the Standard Model Physics Using Final State with Light and Boosted Muon Pairs at CMS Experiment Frontiers in Theoretical and Applied Physics 2017, Sharjah UAE Alfredo Castaneda* On

More information

Chapter 22 Lecture. The Cosmic Perspective. Seventh Edition. The Birth of the Universe Pearson Education, Inc.

Chapter 22 Lecture. The Cosmic Perspective. Seventh Edition. The Birth of the Universe Pearson Education, Inc. Chapter 22 Lecture The Cosmic Perspective Seventh Edition The Birth of the Universe The Birth of the Universe 22.1 The Big Bang Theory Our goals for learning: What were conditions like in the early universe?

More information

Dark Matter searches in ATLAS: Run 1 results and Run 2 prospects

Dark Matter searches in ATLAS: Run 1 results and Run 2 prospects Dark Matter searches in ATLAS: Run 1 results and Run 2 prospects Lashkar Kashif University of Wisconsin Summer School and Workshop on the Standard Model and Beyond Corfu, Greece, September 8, 2015 Outline

More information

Higgs Searches and Properties Measurement with ATLAS. Haijun Yang (on behalf of the ATLAS) Shanghai Jiao Tong University

Higgs Searches and Properties Measurement with ATLAS. Haijun Yang (on behalf of the ATLAS) Shanghai Jiao Tong University Higgs Searches and Properties Measurement with ATLAS Haijun Yang (on behalf of the ATLAS) Shanghai Jiao Tong University LHEP, Hainan, China, January 11-14, 2013 Outline Introduction of SM Higgs Searches

More information

Dark matter searches and prospects at the ATLAS experiment

Dark matter searches and prospects at the ATLAS experiment Dark matter searches and prospects at the ATLAS experiment Wendy Taylor (York University) for the ATLAS Collaboration TeVPA 2017 Columbus, Ohio, USA August 7-11, 2017 Dark Matter at ATLAS Use 13 TeV proton-proton

More information

Chapter 32 Lecture Notes

Chapter 32 Lecture Notes Chapter 32 Lecture Notes Physics 2424 - Strauss Formulas: mc 2 hc/2πd 1. INTRODUCTION What are the most fundamental particles and what are the most fundamental forces that make up the universe? For a brick

More information

Cosmology and particle physics

Cosmology and particle physics Cosmology and particle physics Lecture notes Timm Wrase Lecture 5 The thermal universe - part I In the last lecture we have shown that our very early universe was in a very hot and dense state. During

More information

A model of the basic interactions between elementary particles is defined by the following three ingredients:

A model of the basic interactions between elementary particles is defined by the following three ingredients: I. THE STANDARD MODEL A model of the basic interactions between elementary particles is defined by the following three ingredients:. The symmetries of the Lagrangian; 2. The representations of fermions

More information

The Dark Matter Puzzle and a Supersymmetric Solution. Andrew Box UH Physics

The Dark Matter Puzzle and a Supersymmetric Solution. Andrew Box UH Physics The Dark Matter Puzzle and a Supersymmetric Solution Andrew Box UH Physics Outline What is the Dark Matter (DM) problem? How can we solve it? What is Supersymmetry (SUSY)? One possible SUSY solution How

More information

Open Questions in Particle Physics. Carlos Wagner Physics Department, EFI and KICP, Univ. of Chicago HEP Division, Argonne National Laboratory

Open Questions in Particle Physics. Carlos Wagner Physics Department, EFI and KICP, Univ. of Chicago HEP Division, Argonne National Laboratory Open Questions in Particle Physics Carlos Wagner Physics Department, EFI and KICP, Univ. of Chicago HEP Division, Argonne National Laboratory Society of Physics Students, Univ. of Chicago, Nov. 21, 2016

More information

Astro-2: History of the Universe. Lecture 5; April

Astro-2: History of the Universe. Lecture 5; April Astro-2: History of the Universe Lecture 5; April 23 2013 Previously.. On Astro-2 Galaxies do not live in isolation but in larger structures, called groups, clusters, or superclusters This is called the

More information

The Discovery of the Higgs Boson: one step closer to understanding the beginning of the Universe

The Discovery of the Higgs Boson: one step closer to understanding the beginning of the Universe The Discovery of the Higgs Boson: one step closer to understanding the beginning of the Universe Anna Goussiou Department of Physics, UW & ATLAS Collaboration, CERN Kane Hall, University of Washington

More information

November 24, Scalar Dark Matter from Grand Unified Theories. T. Daniel Brennan. Standard Model. Dark Matter. GUTs. Babu- Mohapatra Model

November 24, Scalar Dark Matter from Grand Unified Theories. T. Daniel Brennan. Standard Model. Dark Matter. GUTs. Babu- Mohapatra Model Scalar from November 24, 2014 1 2 3 4 5 What is the? Gauge theory that explains strong weak, and electromagnetic forces SU(3) C SU(2) W U(1) Y Each generation (3) has 2 quark flavors (each comes in one

More information

Dark matter in split extended supersymmetry

Dark matter in split extended supersymmetry Dark matter in split extended supersymmetry Vienna 2 nd December 2006 Alessio Provenza (SISSA/ISAS) based on AP, M. Quiros (IFAE) and P. Ullio (SISSA/ISAS) hep ph/0609059 Dark matter: experimental clues

More information

Unsolved Problems in Theoretical Physics V. BASHIRY CYPRUS INTRNATIONAL UNIVERSITY

Unsolved Problems in Theoretical Physics V. BASHIRY CYPRUS INTRNATIONAL UNIVERSITY Unsolved Problems in Theoretical Physics V. BASHIRY CYPRUS INTRNATIONAL UNIVERSITY 1 I am going to go through some of the major unsolved problems in theoretical physics. I mean the existing theories seem

More information

Probing Dark Matter at the LHC

Probing Dark Matter at the LHC July 15, 2016 PPC2016 1 Probing Dark Matter at the LHC SM SM DM DM Search for production of DM particles from interacting SM particles Models tested at ATLAS assume DM WIMP Infer DM production through

More information

IMPLICATIONS OF PARTICLE PHYSICS FOR COSMOLOGY

IMPLICATIONS OF PARTICLE PHYSICS FOR COSMOLOGY IMPLICATIONS OF PARTICLE PHYSICS FOR COSMOLOGY Jonathan Feng University of California, Irvine 28-29 July 2005 PiTP, IAS, Princeton 28-29 July 05 Feng 1 Graphic: N. Graf OVERVIEW This Program anticipates

More information

32 IONIZING RADIATION, NUCLEAR ENERGY, AND ELEMENTARY PARTICLES

32 IONIZING RADIATION, NUCLEAR ENERGY, AND ELEMENTARY PARTICLES 32 IONIZING RADIATION, NUCLEAR ENERGY, AND ELEMENTARY PARTICLES 32.1 Biological Effects of Ionizing Radiation γ-rays (high-energy photons) can penetrate almost anything, but do comparatively little damage.

More information

Dark Matter and Dark Energy components chapter 7

Dark Matter and Dark Energy components chapter 7 Dark Matter and Dark Energy components chapter 7 Lecture 4 See also Dark Matter awareness week December 2010 http://www.sissa.it/ap/dmg/index.html The early universe chapters 5 to 8 Particle Astrophysics,

More information

Generic Dark Matter 13 TeV. Matteo Cremonesi FNAL On behalf of the ATLAS and CMS Collaborations Moriond EWK - March 18, 2016

Generic Dark Matter 13 TeV. Matteo Cremonesi FNAL On behalf of the ATLAS and CMS Collaborations Moriond EWK - March 18, 2016 Generic Dark Matter Searches @ 13 TeV Matteo Cremonesi FNAL On behalf of the ATLAS and CMS Collaborations Moriond EWK - March 18, 2016 Introduction From cosmological observations, 85% of the matter comprised

More information

Cosmic Background Radiation

Cosmic Background Radiation Cosmic Background Radiation The Big Bang generated photons, which scattered frequently in the very early Universe, which was opaque. Once recombination happened the photons are scattered one final time

More information

FERMION PORTAL DARK MATTER

FERMION PORTAL DARK MATTER FERMION PORTAL DARK MATTER Joshua Berger SLAC UC Davis Theory Seminar! w/ Yang Bai: 1308.0612, 1402.6696 March 10, 2014 1 A HOLE IN THE SM Van Albada et. al. Chandra + Hubble What else can we learn about

More information

components Particle Astrophysics, chapter 7

components Particle Astrophysics, chapter 7 Dark matter and dark energy components Particle Astrophysics, chapter 7 Overview lecture 3 Observation of dark matter as gravitational ti effects Rotation curves galaxies, mass/light ratios in galaxies

More information