Particle Identification in the NA48 Experiment Using Neural Networks
L. Litov, University of Sofia
Introduction
The NA48 detector was designed for the measurement of the CP-violation parameters in neutral K decays; this programme has been successfully carried out.
2002: investigation of rare K_S and neutral hyperon decays.
2003: search for CP violation and measurement of the parameters of rare charged kaon decays.
Clean particle identification is required in order to suppress the background.
In K decays: π, μ and e. Identification of muons does not cause any problems; we need the best possible e/π separation.
NA48 detector
Introduction
2003: programme for a precision measurement of charged kaon decay parameters:
- direct CP violation in K± → 3π
- Ke4 decays: ππ scattering lengths a0, a2
- radiative decays: K → πγγ, K → μνγ(γ), K → πγγγ, K → eνγ
Introduction
Introduction
The standard way to separate e and π is to use E/p, where E is the energy deposited by the particle in the EM calorimeter and p is the particle momentum:
- cut at E/p > 0.9 for e
- cut at E/p < 0.8 for π
(Figure: E/p distribution with signal, Ke3 background and control regions.)
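The cut-based separation above can be written as a small classifier. This is an illustrative sketch (the function name and the explicit rejection of the 0.8-0.9 gap are my own choices; the thresholds are the ones quoted above):

```python
def classify_eop(E, p):
    """Cut-based e/pi separation using E/p.

    E: energy deposited in the EM calorimeter (GeV)
    p: track momentum from the spectrometer (GeV/c)
    Returns 'e', 'pi', or None for the ambiguous region.
    """
    if p <= 0:
        return None
    eop = E / p
    if eop > 0.9:      # electrons deposit essentially all their energy
        return "e"
    if eop < 0.8:      # pions typically deposit only part of their energy
        return "pi"
    return None        # 0.8 <= E/p <= 0.9: ambiguous, not classified

# usage: an electron-like and a pion-like track
print(classify_eop(24.5, 25.0))  # E/p = 0.98 -> 'e'
print(classify_eop(10.0, 25.0))  # E/p = 0.40 -> 'pi'
```

The ambiguous band between the two cuts is exactly the region where the Neural Network approach described later improves on the simple cuts.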
Sensitive Variables
e and π differ in the development of e.m. and hadron showers, in particular in the lateral development; the EM calorimeter gives information on the lateral development.
From the Liquid Krypton Calorimeter (LKr):
- E/p
- Emax/Eall, RMSX, RMSY
- distance between the track entry point and the associated shower
- effective radius of the shower
Sensitive variables - E/p
E/p distribution, MC simulation. A correct simulation of the energy deposited by pions in the EM calorimeter is a problem at large E/p; it is better to use experimental events.
Sensitive variables - RMS RMS of the electromagnetic cluster
Distance
Distance between the track entry point and the center of the EM cluster
Sensitive variables - Emax/Eall, Reff
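The lateral-shower variables (Emax/Eall, RMSX, RMSY, effective radius) can be illustrated on a toy cluster of calorimeter cells. The cell representation and the function below are hypothetical sketches, not the actual LKr reconstruction:

```python
import math

def shower_variables(cells):
    """Compute simple lateral-shower variables from (x, y, E) cells.

    Returns E_max/E_all, RMSx, RMSy and the effective radius
    (energy-weighted RMS distance from the shower centre).
    """
    e_all = sum(E for _, _, E in cells)
    e_max = max(E for _, _, E in cells)
    # energy-weighted shower centre
    cx = sum(x * E for x, y, E in cells) / e_all
    cy = sum(y * E for x, y, E in cells) / e_all
    rmsx = math.sqrt(sum(E * (x - cx) ** 2 for x, y, E in cells) / e_all)
    rmsy = math.sqrt(sum(E * (y - cy) ** 2 for x, y, E in cells) / e_all)
    r_eff = math.sqrt(
        sum(E * ((x - cx) ** 2 + (y - cy) ** 2) for x, y, E in cells) / e_all
    )
    return e_max / e_all, rmsx, rmsy, r_eff

# a narrow, electron-like toy cluster: most energy in the central cell (cm, GeV)
cluster = [(0.0, 0.0, 20.0), (2.0, 0.0, 2.0), (-2.0, 0.0, 2.0), (0.0, 2.0, 1.0)]
frac, rmsx, rmsy, reff = shower_variables(cluster)
```

An electromagnetic shower gives a large Emax/Eall and a small effective radius; a hadronic shower is broader, which is what makes these variables discriminating.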
MC studies
To test the different possibilities we have used:
- simulated Ke3 decays: 1.3 M
- simulated single e and π: 800 K π and 200 K e
Using different cuts we have obtained, relative to the E/p > 0.9 cut, ε_π ≈ 5.7·10⁻⁴ while keeping ε_e > 95%.
Using a Neural Network it is possible to reach an e/π separation of ε_π < 2·10⁻⁴ (relative to the E/p > 0.9 cut) while keeping ε_e > 98%; the remaining background is ~0.1%.
Neural Network Powerful tool for: Classification of particles and final states Track reconstruction Particle identification Reconstruction of invariant masses Energy reconstruction in calorimeters
Neural Network
Neural Network
A Multi-Layer Feed-Forward network consists of:
- a set of input neurons
- one or more layers of hidden neurons
- a set of output neurons
The neurons of each layer are connected to those of the subsequent layer.
Training:
- presentation of a pattern
- comparison of the desired output with the actual NN output
- backward calculation of the error and adjustment of the weights
Minimization of the error function E = ½ Σ_j (t_j − o_j)²
NN 10-3-2-2-1
Input layer, hidden layers, output layer
Neural Network
Experimental data - e/π separation
To teach and test the performance of the NN we have used experimental data from two different runs:
- charged kaon test run 2001: electrons from K± → π±π0_D, π0_D → e+e−γ; pions from K± → π±π+π−
- Ke4 run 2001: electrons from Ke3 (K± → π0 e± ν); pions from K± → π±π+π−
Charged run - pion selection
Pions from K± → π±π+π−:
- track momentum > 3 GeV
- very tight K± → π±π+π− selection
- the track is chosen randomly
- requirement E/p < 0.8 for the other two tracks
Electron selection: K± → π±π0_D, π0_D → e+e−γ
Selection:
- 3 tracks
- distance between each two tracks > 25 cm
- all tracks in the HODO and MUV acceptance
- one of the tracks is selected randomly; the other two are required to be an e (E/p > 0.9) and a π (E/p < 0.8)
- the sum of the track charges is ±1
- three-track vertex with CDA < 3 cm
- one additional γ in LKr, at least 25 cm away from the tracks
- 0.128 GeV < m(e+e−γ) < 0.14 GeV
- 0.482 GeV < m(π±e+e−γ) < 0.55 GeV
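The selection can be expressed as a predicate over one reconstructed event. The event-dictionary fields are hypothetical names; the cut values are the ones listed above:

```python
def select_electron_candidate(ev):
    """Sketch of the K± -> π±π0_D, π0_D -> e+e−γ electron selection.
    `ev` is a hypothetical dict describing one reconstructed 3-track event;
    distances in cm, masses in GeV."""
    return (
        ev["n_tracks"] == 3
        and ev["min_track_distance"] > 25.0     # between each pair of tracks
        and ev["in_hodo_muv_acceptance"]        # all tracks in HODO and MUV
        and abs(ev["sum_charges"]) == 1         # total charge of the 3 tracks
        and ev["cda"] < 3.0                     # three-track vertex quality
        and ev["gamma_track_distance"] > 25.0   # extra gamma away from tracks
        and 0.128 < ev["m_eegamma"] < 0.14      # pi0 mass window
        and 0.482 < ev["m_pieegamma"] < 0.55    # kaon mass window
    )

# an event passing all cuts (values invented for illustration)
good = {
    "n_tracks": 3, "min_track_distance": 30.0, "in_hodo_muv_acceptance": True,
    "sum_charges": 1, "cda": 1.2, "gamma_track_distance": 40.0,
    "m_eegamma": 0.135, "m_pieegamma": 0.494,
}
```

The two mass windows are what makes the sample clean: the e+e−γ system must be compatible with a π0 and the full final state with a charged kaon.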
Electron selection: K± → π±π0_D, π0_D → e+e−γ (cont.)
Charged run E/p and momentum distributions
Charged run - NN output
The NN output is the discriminant: if out > cut, the track is classified as e; if out < cut, as π.
Charged run - NN performance
Net: 10-3-2-2-1
Input: E/p, Dist, Rrms, p, RMSx, RMSy, dx/dz, dy/dz, DistX, DistY
Teaching: π from K± → π±π+π−, e from K± → π±π0_D, π0_D → e+e−γ
K± → π0 e± ν (γ)
E/p > 0.9 alone: non-symmetric E/p distribution.
E/p > 0.9 and NN out > 0.9: symmetric E/p distribution.
(Figures: effective-mass and E/p distributions for the two selections.)
K± → π0 e± ν (γ)
E/p distributions for out > 0.9 and for out > 0.8: there is no significant change in the parameters.
K± → π0 e± ν (γ)
Effective-mass distributions, MC vs experiment: there is good agreement between the MC and the experimental distributions.
Ke4 (K± → π+π− e± ν) reconstruction with NN
A significant background comes from K± → π±π+π− when one π is misidentified as an e.
Teaching sample:
- pions from K± → π±π+π−: 800 K events
- electrons from Ke3: 220 K events
Ke4 reconstruction with NN
E/p distribution and NN output
Ke4 reconstruction with NN
e identification efficiency and π rejection factor
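Efficiency and rejection-factor curves of this kind can be produced by scanning the cut on the NN output over labelled test samples. This is a toy sketch; the Gaussian outputs stand in for real NN responses:

```python
import numpy as np

def eff_and_rejection(out_e, out_pi, cuts):
    """For each cut on the NN output, return (cut, electron identification
    efficiency, pion rejection factor = 1 / pion mis-id probability)."""
    out_e, out_pi = np.asarray(out_e), np.asarray(out_pi)
    rows = []
    for c in cuts:
        eff = np.mean(out_e > c)          # fraction of true e kept
        misid = np.mean(out_pi > c)       # fraction of pi passing as e
        rej = 1.0 / misid if misid > 0 else np.inf
        rows.append((c, eff, rej))
    return rows

# toy NN outputs: electrons peak near 1, pions near 0
rng = np.random.default_rng(1)
out_e = np.clip(rng.normal(0.95, 0.05, 10000), 0, 1)
out_pi = np.clip(rng.normal(0.05, 0.05, 10000), 0, 1)
table = eff_and_rejection(out_e, out_pi, [0.5, 0.8, 0.9])
```

Tightening the cut trades electron efficiency for pion rejection, which is the trade-off shown on the slide.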
Ke4 run - NN performance
Net: 10-3-2-2-1
Input: E/p, Dist, Rrms, p, RMSx, RMSy, dx/dz, dy/dz, DistX, DistY
Teaching: π from K± → π±π+π−, e from Ke3 (K± → π0 e± ν)
Ke4 reconstruction with NN - 1/R
Ellipse variable separating Ke4 from the K± → π±π+π− background, built from the transverse momentum and the invariant mass under the 3π hypothesis:
R² = (p_t/σ_p)² + ((M(3π) − 498 MeV)/σ_M)², with σ_p ≈ 6-7 MeV and σ_M ≈ 2.5 MeV.
(Figure: 1/R distributions for Ke3 and Ke4.)
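A sketch of the ellipse distance R under the K → 3π hypothesis; the resolution values and the mass constant follow the numbers quoted on the slide, but the function and the example event values are illustrative:

```python
import math

# approximate resolutions and mass from the slide (assumptions)
SIGMA_PT = 0.0065   # GeV, transverse-momentum resolution (~6-7 MeV)
SIGMA_M = 0.0025    # GeV, 3pi invariant-mass resolution (~2.5 MeV)
M_K = 0.498         # GeV, kaon mass used in the ellipse

def ellipse_R(pt, m3pi):
    """Distance from the K -> 3pi hypothesis in units of resolution.
    Small R (large 1/R) is K3pi-background-like; Ke4 sits at large R
    because the unmeasured neutrino spoils both pt balance and M(3pi)."""
    return math.hypot(pt / SIGMA_PT, (m3pi - M_K) / SIGMA_M)

# a background-like event: consistent with K -> 3pi
r_bkg = ellipse_R(pt=0.003, m3pi=0.4985)
# a signal-like event: large pt and wrong 3pi mass
r_sig = ellipse_R(pt=0.05, m3pi=0.46)
```

Cutting on 1/R (or equivalently requiring a minimum R) is then combined with the NN output cut, as the next slides show.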
Ke4 recognition with NN
NN output versus 1/R: the background from K± → π±π+π− is clearly separated.
Ke4 - e/π Neural Network performance
No background subtraction; a cut nnout > 0.9 visibly works very well, but what about the background?
L. Litov, Particle Identification in the NA48 Experiment Using Neural Networks, ACAT 2002
Ke4 reconstruction with NN - e/π Neural Network background
Extending the range of 1/R to 5: obviously there is background!
(Figures: Ke4 MC; 1/R and E/p distributions without and with the NN.)
Ke4 reconstruction with NN
Ke4 reconstruction with NN - e/π Neural Network performance
The background is fitted both with and without the NN; the ratio R of the fitted backgrounds (rejection factor) is the measure of performance.
Ke4 reconstruction with NN - NN rejection factor
Optimization goal: optimize the cut values for nnout and 1/R.
(Figures: rejection factor and NN efficiency versus the nnout and 1/R cuts, for signal and background.)
Ke4 reconstruction with NN - e/π Neural Network optimization
Value to minimize: the combined statistical and systematical error. The statistical error goes as N_sig^(−½); the systematical error grows with the background:
σ = 1/√N_sig + c·N_bkg/N_sig
The cut values for nnout and 1/R are chosen between the statistical and systematical limits.
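The optimization can be sketched as a scan over (nnout, 1/R) cut pairs, minimizing the combined error. The yield table and the weight c below are invented for illustration:

```python
import math

def combined_error(n_sig, n_bkg, c=1.0):
    """sigma = 1/sqrt(N_sig) + c * N_bkg / N_sig.
    c weights the systematic term; its value here is an assumption."""
    return 1.0 / math.sqrt(n_sig) + c * n_bkg / n_sig

def optimize_cuts(yields, c=1.0):
    """Pick the (nnout_cut, invR_cut) pair minimizing the combined error.
    `yields` maps (nnout_cut, invR_cut) -> (N_sig, N_bkg): hypothetical
    numbers that would come from the background fits described above."""
    return min(yields, key=lambda k: combined_error(*yields[k], c=c))

# toy yield table: tighter cuts lose signal but kill background
yields = {
    (0.80, 2.0): (10000, 300),
    (0.90, 2.0): (9700, 60),
    (0.90, 3.0): (9500, 30),
    (0.95, 3.0): (8800, 25),
}
best = optimize_cuts(yields, c=0.5)
```

With this toy table the loosest cut loses on the systematic term and the tightest on the statistical one, so an intermediate working point wins, mirroring the trade-off sketched on the slide.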
Ke4 reconstruction with NN - e/π Neural Network performance
The background can be reduced to the level of 0.3%, with a Ke4 reconstruction efficiency at the level of 95%.
Conclusions - e/π separation
e/π separation with a NN has been tested on experimental data.
For the charged K run we have obtained, relative to the E/p > 0.9 cut, ε_π ~ 3.4·10⁻⁴ at ε_e = 96%.
For the Ke4 run we have obtained a rejection factor of ~380 on experimental data: background ~0.3% at ε_eff ~ 95%.
A correct Ke4 analysis can be done without an additional detector (TRD): the background can be reduced to the level of ~1%, and ~5% of the Ke4 events are lost due to the NN efficiency.
Conclusions - NN analysis
Additionally, a Neural Network for Ke4 recognition has been developed. The combined output of the two NNs is used for the selection of Ke4 decays. The NN approach leads to a significant enrichment of the Ke4 statistics (~2 times).
This work was done in collaboration with C. Cheshkov, G. Marel, S. Stoynev and L. Widhalm.