Predictions and PDFs for the LHC


Predictions and PDFs for the LHC J. Huston, Michigan State University (huston@msu.edu) Sparty: we'll talk more about Sparty tomorrow

Two advertisements

Excitement about my visit

Understanding cross sections at the LHC We're all looking for BSM physics at the LHC. Before we publish BSM discoveries from the early running of the LHC, we want to make sure that we measure/understand SM cross sections: detector and reconstruction algorithms operating properly; SM backgrounds to BSM physics correctly taken into account; and, in particular, that QCD at the LHC is properly understood

Cross sections at the LHC Experience at the Tevatron is very useful, but scattering at the LHC is not necessarily just rescaled scattering at the Tevatron: small typical momentum fractions x for the quarks and gluons in many key searches; dominance of gluon and sea quark scattering; large phase space for gluon emission and thus for production of extra jets; intense QCD backgrounds. Or, to summarize: lots of Standard Model to wade through to find the BSM pony

Cross sections at the LHC Note that the data from HERA and fixed-target experiments cover only part of the kinematic range accessible at the LHC. We will access PDFs down to x ~ 10^-6 (crucial for the underlying event) and Q² up to 100 TeV². We can use the DGLAP equations to evolve to the relevant x and Q² range, but we're somewhat blind in extrapolating to lower x values than present in the HERA data, so the uncertainty may be larger than currently estimated: we're assuming that DGLAP is all there is; at low x, BFKL-type logarithms may become important (more later about DGLAP and BFKL)

Understanding cross sections at the LHC ...but to understand cross sections, we have to understand QCD (at the LHC): PDFs, PDF luminosities and PDF uncertainties; LO, NLO and NNLO calculations; K-factors; benchmark cross sections and PDF correlations; underlying event and minimum-bias events; jet algorithms and jet reconstruction; Sudakov form factors

Factorization Factorization is the key to perturbative QCD: the ability to separate the short-distance physics and the long-distance physics. In pp collisions at the LHC, the hard-scattering cross sections are the result of collisions between a quark or gluon in one proton and a quark or gluon in the other proton. The remnants of the two protons also undergo collisions, but of a softer nature, described by semi-perturbative or non-perturbative physics. The calculation of hard-scattering processes at the LHC requires: (1) knowledge of the distributions of the quarks and gluons inside the proton, i.e. what fraction of the momentum of the parent proton they carry (the parton distribution functions, PDFs); (2) knowledge of the hard-scattering cross sections of the quarks and gluons, at LO, NLO, or NNLO in the strong coupling constant α_s

Parton distribution functions and global fits Calculation of production cross sections at the LHC relies upon knowledge of PDFs in the relevant kinematic region. PDFs are determined by global analyses of data from DIS, DY and jet production. Three major groups provide semi-regular updates to parton distributions when new data/theory become available: MRS -> MRST98 -> MRST99 -> MRST2001 -> MRST2002 -> MRST2003 -> MRST2004 -> MSTW2008; CTEQ -> CTEQ5 -> CTEQ6 -> CTEQ6.1 -> CTEQ6.5 -> CTEQ6.6 -> CT09 -> CT10; NNPDF1.0 -> NNPDF1.1 -> NNPDF1.2 -> NNPDF2.0

Global fits With the DGLAP equations, we know how to evolve PDFs from a starting scale Q0 to any higher scale, but we can't calculate what the PDFs are ab initio (one of the goals of lattice QCD). We have to determine them from a global fit to data; the factorization theorem tells us that PDFs determined for one process are applicable to another. So what do we need? A value of Q0 (1.3 GeV for CTEQ, 1 GeV for MSTW) lower than the data used in the fit (or any prediction); a parametrization for the PDFs; a scheme for the PDFs; hard-scattering calculations at the order being considered in the fit; PDF evolution at the order being considered in the fit; a world-average value for α_s; a lot of data, with appropriate kinematic cuts; a treatment of the errors for the experimental data; and a minimization program (MINUIT)

Orders and Schemes Fits are available at LO (CTEQ6L1 or CTEQ6L, with 1-loop or 2-loop α_s), in common use with parton shower Monte Carlos, but a poor fit to data due to deficiencies of the LO MEs; LO* is better for parton shower Monte Carlos. At NLO (CTEQ6.1, CTEQ6.6, CT09): the precision level; error PDFs are defined at this order. NNLO is more accurate, but not all processes are known. At NLO and NNLO, one needs to specify a scheme or convention for subtracting the divergent terms; basically, the scheme specifies how much of the finite corrections to subtract along with the divergent pieces. Most widely used is the modified minimal subtraction scheme (MSbar), used with dimensional regularization: subtract the pole terms and the accompanying log 4π and Euler-constant terms. One may also find PDFs in the DIS scheme, where the full order-α_s correction for F2 in DIS is absorbed into the quark PDFs

LO->NLO->NNLO There is a big change in general for PDFs in going from LO to NLO, but not from NLO to NNLO

Scales and Masses Processes used in global fits are characterized by a single large scale: DIS: Q²; lepton-pair production: M²; vector boson production: M_V²; jet production: p_T of the jet. By choosing the factorization and renormalization scales to be of the same order as the characteristic scale, one can avoid some large logarithms in the hard-scattering cross section; some large logarithms in the running coupling and PDFs are resummed. Different treatments of quark masses and thresholds: zero-mass variable flavor number scheme (ZM-VFNS); fixed flavor number scheme (FFNS); general-mass variable flavor number scheme (GM-VFNS)

Data sets used in global fits (CTEQ6.6)
1. BCDMS F2 proton (339 data points)
2. BCDMS F2 deuteron (251 data points)
3. NMC F2 (201 data points)
4. NMC F2d/F2p (123 data points)
5. CDHSW F2 (85 data points)
6. CDHSW F3 (96 data points)
7. CCFR F2 (69 data points)
8. CCFR F3 (86 data points)
9. H1 NC e-p (126 data points; 1998-99 reduced cross section)
10. H1 NC e-p (13 data points; high-y analysis)
11. H1 NC e+p (115 data points; reduced cross section, 1996-97)
12. H1 NC e+p (147 data points; reduced cross section, 1999-00)
13. ZEUS NC e-p (92 data points; 1998-99)
14. ZEUS NC e+p (227 data points; 1996-97)
15. ZEUS NC e+p (90 data points; 1999-00)
16. H1 F2c e+p (8 data points; 1996-97)
17. H1 reduced cross section for ccbar, e+p (10 data points; 1996-97)
18. H1 reduced cross section for bbbar, e+p (10 data points; 1999-00)
19. ZEUS F2c e+p (18 data points; 1996-97)
20. ZEUS F2c e+p (27 data points; 1998-00)
21. H1 CC e-p (28 data points; 1998-99)
22. H1 CC e+p (25 data points; 1994-97)
23. H1 CC e+p (28 data points; 1999-00)
24. ZEUS CC e-p (26 data points; 1998-99)
25. ZEUS CC e+p (29 data points; 1994-97)
26. ZEUS CC e+p (30 data points; 1999-00)
27. NuTeV neutrino dimuon cross section (38 data points)
28. NuTeV anti-neutrino dimuon cross section (33 data points)
29. CCFR neutrino dimuon cross section (40 data points)
30. CCFR anti-neutrino dimuon cross section (38 data points)
31. E605 dimuon (199 data points)
32. E866 dimuon (13 data points)
33. Lepton asymmetry from CDF (11 data points)
34. CDF Run 1B jet cross section (33 data points)
35. D0 Run 1B jet cross section (90 data points)
In total, 2794 data points from DIS, DY and jet production, all with (correlated) systematic errors that must be treated correctly in the fit. Note that DIS is the 800-pound gorilla of the global fit, with many data points and small statistical and systematic errors, and fixed-target DIS data still have a significant impact on the global fitting, even with an abundance of HERA data. To avoid non-perturbative effects, kinematic cuts are placed on the DIS data: Q² > 5 GeV² and W² (= m² + Q²(1-x)/x) > 12.25 GeV²

Global fitting: best fit Using our 2794 data points, we do our global fit by performing a χ² minimization, where the D_i are the data points and the T_i are the theoretical predictions. We allow a normalization shift f_N for each experimental data set, with a quadratic penalty for any normalization shift. There are k systematic errors β_ij for each data point i in a particular data set, and we allow the data points to be shifted by the systematic errors, with the shifts given by the parameters s_j, again with a quadratic penalty for non-zero values of the shifts. With σ_i the statistical error for data point i, for each data set we calculate χ²_k = Σ_i (1/σ_i²) [ f_N D_i − Σ_{j=1..k} β_ij s_j − T_i ]² + Σ_{j=1..k} s_j². For a given set of theory parameters it is possible to solve analytically for the shifts s_j, and therefore to continually update them as the fit proceeds. To make matters more complicated, we may give additional weights to some experiments due to the utility of the data in those experiments (e.g. NA-51), so we adjust the χ² to be χ² = Σ_k w_k χ²_k + Σ_k w_N,k [ (1 − f_N,k) / σ^norm_N,k ]², where w_k is a weight given to the experimental data and w_N,k is a weight given to the normalization
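As a concrete illustration, here is a minimal pure-Python sketch of this χ² form with a single correlated systematic shift s, whose optimal value can be solved analytically. All numbers are invented; a real fit has ~2794 points, many systematics per data set, and is minimized with MINUIT.

```python
# Toy sketch of the CTEQ-style chi^2 with one correlated systematic
# shift s (hypothetical numbers, purely for illustration).

def chi2(data, theory, sigma, beta, s, f_norm=1.0):
    """chi^2 with data shifted by the systematic s, plus a quadratic
    penalty term s**2 for the shift itself."""
    total = sum(((f_norm * d - b * s - t) / sg) ** 2
                for d, t, sg, b in zip(data, theory, sigma, beta))
    return total + s ** 2

def best_shift(data, theory, sigma, beta, f_norm=1.0):
    """Analytic minimum of chi2 over s (set d(chi^2)/ds = 0)."""
    num = sum(b * (f_norm * d - t) / sg ** 2
              for d, t, sg, b in zip(data, theory, sigma, beta))
    den = 1.0 + sum((b / sg) ** 2 for sg, b in zip(sigma, beta))
    return num / den

data   = [10.2, 8.1, 6.3]   # measured points (toy)
theory = [10.0, 8.0, 6.0]   # predictions (toy)
sigma  = [0.3, 0.3, 0.3]    # statistical errors
beta   = [0.2, 0.2, 0.2]    # correlated systematic per point

s_opt = best_shift(data, theory, sigma, beta)
print(chi2(data, theory, sigma, beta, 0.0))    # unshifted chi^2
print(chi2(data, theory, sigma, beta, s_opt))  # shifted chi^2 is lower
```

Solving for the shifts analytically, as in `best_shift`, is what allows the fit to update them continuously as it proceeds.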

Minimization and errors Free parameters in the fit are the parameters of the quark and gluon distributions, f(x) = x^(a1−1) (1−x)^(a2) e^(a3 x) [1 + e^(a4) x]^(a5). There are too many parameters to allow all of them to remain free; some are fixed at reasonable values or determined by sum rules: 20 free parameters for CTEQ6.1, 22 for CTEQ6.6, 24 for CT09. The result is a global χ²/dof on the order of 1 for a NLO fit; it is worse for a LO fit, since the LO PDFs cannot make up for the deficiencies in the LO matrix elements
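The quoted functional form is straightforward to evaluate; a small sketch with purely hypothetical parameter values (real fits determine 20-24 such parameters across all flavors, some fixed by sum rules):

```python
import math

# Toy evaluation of the parametrization quoted above:
#   f(x) = x**(a1-1) * (1-x)**a2 * exp(a3*x) * (1 + exp(a4)*x)**a5
# The parameter values are invented, not from any published fit.

def f(x, a1, a2, a3, a4, a5):
    return (x ** (a1 - 1) * (1 - x) ** a2 * math.exp(a3 * x)
            * (1 + math.exp(a4) * x) ** a5)

params = (0.8, 3.5, 0.5, 1.0, 0.3)  # hypothetical values
print(f(0.1, *params))  # positive density at moderate x
print(f(1.0, *params))  # vanishes at x = 1 because of (1-x)**a2
```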

PDF Errors So we have optimal values (minimum χ²) for the d = 20 (22, 24) free PDF parameters in the global fit, {a_µ}, µ = 1, ..., d. Varying any of the free parameters from its optimal value will increase the χ². It's much easier to work in an orthonormal eigenvector space determined by diagonalizing the Hessian matrix, determined in the fitting process: H_µν = (1/2) ∂²χ²/∂a_µ ∂a_ν. To estimate the error on an observable X(a), due to the experimental uncertainties of the data used in the fit, we use the Master Formula (ΔX)² = Δχ² Σ_µν (∂X/∂a_µ) (H⁻¹)_µν (∂X/∂a_ν)

PDF Errors Recap: 20 (22, 24) eigenvectors, with eigenvalues spanning a range of more than 10⁶. The largest eigenvalues (low-number eigenvectors) correspond to the best-determined directions; the smallest eigenvalues (high-number eigenvectors) correspond to the worst-determined directions. It is easiest to use the Master Formula in the eigenvector basis: to estimate the error on an observable X(a) from the experimental errors, (ΔX)² = (1/4) Σ_i (X_i⁺ − X_i⁻)², where X_i⁺ and X_i⁻ are the values of the observable X when traversing a distance corresponding to the tolerance T (= sqrt(Δχ²)) along the i-th eigenvector direction
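In the eigenvector basis the Master Formula reduces to simple arithmetic over the error-PDF pairs; a short sketch with invented X_i± values:

```python
import math

# Symmetric Hessian master formula in the eigenvector basis:
#   DeltaX = (1/2) * sqrt( sum_i (X_i^+ - X_i^-)^2 )
# where X_i^+/- is the observable evaluated with the i-th pair of
# error PDFs. The numbers below are made up for illustration.

def hessian_error(x_plus, x_minus):
    return 0.5 * math.sqrt(sum((xp - xm) ** 2
                               for xp, xm in zip(x_plus, x_minus)))

# toy observable values along 3 eigenvector directions
x_plus  = [10.4, 10.1, 10.2]
x_minus = [ 9.8, 10.0,  9.9]

print(hessian_error(x_plus, x_minus))
```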

PDF Errors What is the tolerance T? This is one of the most controversial questions in global PDF fitting. We have 2794 data points in the CTEQ6.6 data set (on the order of 2000 for CTEQ6.1). Technically speaking, a 1-sigma error corresponds to a tolerance T (= sqrt(Δχ²)) = 1; this results in far too small an uncertainty for a global fit with data from a variety of processes, from a variety of experiments, from a variety of accelerators. For CTEQ6.1, we chose a Δχ² of 100 to correspond to a 90% CL limit. MSTW in the past has chosen a smaller Δχ² criterion; now, as we will see, MSTW and CTEQ agree well. The treatment for the new PDFs of both groups is more sophisticated than shown here

What do the eigenvectors mean? Each eigenvector corresponds to a linear combination of all 20 (22, 24) PDF parameters, so in general an individual eigenvector doesn't mean anything by itself. However, with 20 (22, 24) dimensions, an eigenvector will often have a large component from a particular direction. Take eigenvector 1 (for CTEQ6.1), i.e. error PDFs 1 and 2: it has a large component sensitive to the small-x behavior of the u-quark valence distribution; not surprising, since this is the best-determined direction

Aside: PDF re-weighting Any physical cross section at a hadron-hadron collider depends on the product of the two PDFs for the partons participating in the collision, convoluted with the hard partonic cross section. Nominally, if one wants to evaluate the PDF uncertainty for a cross section, this convolution should be carried out 41 times (for CTEQ6.1): once for the central PDF and 40 times for the error PDFs. However, the partonic cross section is not changing, only the product of the PDFs. So one can evaluate the full cross section for one PDF (the central PDF) and then evaluate the PDF uncertainty for a particular cross section by taking the ratio of the product of the PDFs (the PDF luminosity) for each of the error PDFs compared to the central PDF; with f_i the error PDF and f_0 the central PDF, the weight is [f_i^{a/A}(x_a, Q²) f_i^{b/B}(x_b, Q²)] / [f_0^{a/A}(x_a, Q²) f_0^{b/B}(x_b, Q²)]. This works exactly for fixed-order calculations and works well enough (see later) for parton shower Monte Carlo calculations. Most experiments now have code to do this easily, and many programs will do it for you (MCFM)
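A minimal sketch of the re-weighting trick, with invented PDF shapes standing in for what would in practice come from a library such as LHAPDF:

```python
# Toy PDF re-weighting: re-use one full calculation done with the
# central PDF f0, and weight each event by the ratio of PDF products
# for an error PDF f_i. Both PDF shapes below are invented.

def f0(x):
    return x ** -0.2 * (1 - x) ** 3          # central PDF (toy shape)

def f_i(x):
    return 1.05 * x ** -0.21 * (1 - x) ** 3  # error PDF (toy variation)

def reweight(xa, xb, f_new, f_old=f0):
    """Event weight: ratio of PDF luminosities, new set over central."""
    return (f_new(xa) * f_new(xb)) / (f_old(xa) * f_old(xb))

# per-event weight for an event with momentum fractions xa, xb
w = reweight(0.01, 0.05, f_i)
print(w)
```

Because the partonic cross section cancels in the ratio, the same event sample can be re-used for every error PDF; only the per-event weight changes.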

A very useful tool Allows easy calculation and comparison of PDFs

Uncertainties Uncertainties get large at high x; the uncertainty for the gluon is larger than that for the quarks; PDFs from one group don't necessarily fall into the uncertainty band of another (it would be nice if they did)

Uncertainties and parametrizations Beware of extrapolations to x values smaller than the data available in the fits, especially at low Q². The parametrization may artificially reduce the apparent size of the uncertainties; compare, for example, the uncertainty for the gluon at low x from the recent neural-net global fit to global fits using a parametrization. Note the gluon can range negative at low x (Q² = 2 GeV²)

Correlations Consider a cross section X(a); the i-th component of the gradient of X in the eigenvector basis is ∂X/∂z_i = (1/2)(X_i⁺ − X_i⁻). Now take two cross sections X and Y (or one or both can be PDFs). Consider the projection of the gradients of X and Y onto a circle of radius 1 in the plane of the gradients in the parton parameter space; the circle maps onto an ellipse in the XY plane. The angle φ between the gradients of X and Y is given by cos φ = (∇X·∇Y)/(ΔX ΔY) = (1/(4 ΔX ΔY)) Σ_i (X_i⁺ − X_i⁻)(Y_i⁺ − Y_i⁻). If two cross sections/PDFs are very correlated, then cos φ ~ 1; uncorrelated, then cos φ ~ 0; anti-correlated, then cos φ ~ −1. The ellipse itself is given by (δX/ΔX)² − 2(δX/ΔX)(δY/ΔY) cos φ + (δY/ΔY)² = sin²φ
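The correlation cosine is easy to compute from the same eigenvector sets used for the Master Formula; a toy sketch with invented values:

```python
import math

# Correlation cosine between two observables X and Y from the same
# eigenvector PDF sets:
#   cos(phi) = sum_i (X_i^+ - X_i^-)(Y_i^+ - Y_i^-) / (4 * dX * dY)
# Values below are invented; cos ~ +1 means correlated, ~ -1
# anti-correlated, ~ 0 uncorrelated.

def delta(plus, minus):
    return 0.5 * math.sqrt(sum((p - m) ** 2 for p, m in zip(plus, minus)))

def cos_phi(xp, xm, yp, ym):
    dot = sum((p - m) * (q - n) for p, m, q, n in zip(xp, xm, yp, ym))
    return dot / (4.0 * delta(xp, xm) * delta(yp, ym))

# Y varies exactly like X across the eigenvectors -> fully correlated
xp, xm = [10.4, 10.1], [9.8, 10.0]
yp = [5.0 + 2 * (p - 10.0) for p in xp]
ym = [5.0 + 2 * (m - 10.0) for m in xm]
print(cos_phi(xp, xm, yp, ym))  # ~ +1
```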

PDF uncertainties at the LHC (gg: Higgs, tt; qq: W/Z; gq) Note that for much of the SM/discovery range, the PDF luminosity uncertainty is small; a similar level of precision is needed in the theory calculations. It will be a while, i.e. not in the first fb⁻¹, before the LHC data start to constrain PDFs. NB I: the errors are determined using the Hessian method for a Δχ² of 100, using only experimental uncertainties, i.e. no theory uncertainties. NB II: the PDF uncertainties for the W/Z cross sections are not the smallest. NB III: the tt uncertainty is of the same order as W/Z production

Ratios: LHC to Tevatron PDF luminosities Processes that depend on qq initial states (e.g. chargino pair production) have small enhancements. Most backgrounds have gg or gq initial states and thus large enhancement factors at the LHC (500 for W + 4 jets, for example, which is primarily gq); W + 4 jets is a background to tt production both at the Tevatron and at the LHC. tt production at the Tevatron is largely through a qq initial state, and so qq -> tt has an enhancement factor at the LHC of only ~10; luckily tt has a gg initial state as well as qq, so the total enhancement at the LHC is a factor of 100. But the increased W + jets background means that a higher jet cut is necessary at the LHC; known known: jet cuts have to be higher at the LHC than at the Tevatron

Ratios: LHC to Tevatron PDF luminosities But wait, we're not running at 14 TeV, but at 10 TeV (if we're lucky), or 7. In fact, we'll probably never reach 14 TeV, just as we never reached 2 TeV at the Tevatron

Precision benchmarks: W/Z cross sections at the LHC CTEQ (CTEQ6.1) and MRST (MRST2004) NLO predictions are in good agreement with each other. NNLO corrections are small and negative; NB: this is a specific statement for the MRST2004 PDFs, i.e. not true for the MSTW2008 PDFs. NNLO is mostly a shift in normalization (K-factor); NLO predictions are adequate for most predictions at the LHC

Rapidity distributions and NNLO The effect of NNLO is just a small normalization factor over the full rapidity range. NNLO predictions using NLO PDFs are close to the full NNLO results, but outside of the (very small) NNLO error band

Correlations As expected, the W and Z cross sections are highly correlated. There is an anti-correlation between the tt and W cross sections: more glue for tt production (at higher x) means fewer anti-quarks (at lower x) for W production. There is mostly no correlation between the H and W cross sections

Heavy quark mass effects in global fits CTEQ6.1 (and previous generations of global fits) used the zero-mass VFNS. With the new sets of PDFs (CTEQ6.5 onwards), heavy quark mass effects are consistently taken into account, both in the global fitting cross sections and in the PDF evolution. In most cases, the resulting PDFs are within the CTEQ6.1 PDF error bands, but not at low x (in the range of W and Z production at the LHC). Heavy quark mass effects are only appreciable near threshold; e.g. the prediction for F2 at low x, Q at HERA is smaller if the masses of the c, b quarks are taken into account; thus, the quark PDFs have to be bigger in this region to give an equivalent fit to the HERA data, with implications for LHC phenomenology

W/Z agreement Inclusion of heavy quark mass effects affects the DIS data in the x range appropriate for W/Z production at the LHC. MSTW2008 also has increased W/Z cross sections at the LHC, due at least partially to improvements in their heavy quark scheme; now CTEQ6.6 and MSTW2008 are in good agreement (CTEQ6.5(6), MSTW08)

Correlations with Z, tt Define a correlation cosine between two quantities. If two cross sections are very correlated, then cos φ ~ 1; uncorrelated, then cos φ ~ 0; anti-correlated, then cos φ ~ −1

Correlations with Z, tt Define a correlation cosine between two quantities. If two cross sections are very correlated, then cos φ ~ 1; uncorrelated, then cos φ ~ 0; anti-correlated, then cos φ ~ −1. Note that the correlation curves to Z and to tt are mirror images of each other. By knowing the PDF correlations, one can reduce the uncertainty for a given cross section taken in ratio to a benchmark cross section if cos φ > 0; e.g. Δ(σ_W+/σ_Z) ~ 1%. If cos φ < 0, the PDF uncertainty for one cross section normalized to a benchmark cross section is larger; so for gg -> H (500 GeV), the PDF uncertainty is 4%, but Δ(σ_H/σ_Z) ~ 8%

Modified LO PDFs (LO*) What about PDFs for parton shower Monte Carlos? The standard has been to use LO PDFs, most commonly CTEQ5L/CTEQ6L, in Pythia, Herwig, Sherpa, ALPGEN/Madgraph+. But LO PDFs can create LHC cross sections/acceptances that differ in both shape and normalization from NLO, due to the influence of HERA data and the lack of ln(1/x) and ln(1−x) terms in leading-order PDFs and evolution, and they are often outside the NLO error bands. Experimenters use the NLO error PDFs in combination with the central LO PDF; even setting aside this mis-match, PDF re-weighting then incurs an error due to the non-matching of Sudakov form factors. Predictions for inclusive observables from LO matrix elements, for many of the collider processes we want to calculate, are not so different from those from NLO matrix elements (aside from a reasonably constant K-factor)

Modified LO PDFs (LO*) But we (and in particular Torbjorn Sjostrand) like the low-x behavior of LO PDFs, and rely upon them for our models of the underlying event at the Tevatron and its extrapolation to the LHC, as well as for calculating low-x cross sections at the LHC. And no one listened to me when I urged the use of NLO PDFs; thus, the need for modified LO PDFs

Parton showers and PDFs There's a difference between tree-level fixed-order predictions and those from parton shower Monte Carlos: the incoming partons can radiate hard gluons, pushing the incoming partons off mass shell. We expect a kinematic suppression of distributions formed by parton showers compared to those generated by tree-level calculations. Although this is a sizeable effect for low-Q² processes, the effect is diminished for most processes we want to calculate at the LHC

CTEQ talking points LO* PDFs should behave as LO as x -> 0, and as close to NLO as possible as x -> 1. LO* PDFs should be universal, i.e. results should be reasonable run on any platform with nominal physics scales. It should be possible to produce error PDFs, with similar Sudakov form factors and similar UE, so that PDF re-weighting makes sense. LO* PDFs should describe the underlying event at the Tevatron with a tune similar to CTEQ6L (for convenience) and extrapolate to a reasonable UE at the LHC

Where are the differences between LO and NLO partons? At low x and high x for the up quark; everywhere for the gluon. Missing ln(1−x) terms in the LO ME; missing ln(1/x) terms in the LO ME

LO and NLO distributions The shapes of the cross sections shown to the right are well described by LO matrix elements using NLO PDFs, but there are distortions that are evident when LO PDFs are used. The normalizations are not fully described using LO matrix elements (K-factor)

CTEQ mod LO PDFs Include in the LO* fit (weighted) pseudo-data for characteristic LHC processes, produced using the CTEQ6.6 NLO PDFs with NLO matrix elements (using MCFM), along with the full CTEQ6.6 dataset (2885 points): low-mass bb, to fix the low-x gluon for the UE; tt over the full mass range, for the higher-x gluon; W+, W−, Z0 rapidity distributions, for the quark distributions; the gg -> h (120 GeV) rapidity distribution. Allow the total momentum in the proton to exceed 1.00 if needed to fit the real and pseudo-data; the other sum rules stay intact. Use 1-loop or 2-loop α_s (two different fits and thus 2 different PDFs; I will concentrate on the 2-loop results here), keeping α_s(m_Z) fixed at the 1-loop (2-loop) world averages for better connection to other CTEQ PDFs. Also, there is another technique involving the use of scales

Some observations χ² improves with the momentum sum rule left free: without pseudo-data in the fit, the momentum sum increases by ~3-4%. The pseudo-data has conflicts with the global data set; that's the motivation for the modified PDFs. Requiring a better fit to the pseudo-data increases the χ² of the LO fit to the global data set by about 10-20% (although this is not the primary concern; the fit to the pseudo-data is), and prefers more momentum (1.10 for 1-loop and 1.14 for 2-loop), which mostly goes into the gluon distribution. No strong preference for 1-loop or 2-loop α_s that I can see with fits containing weighted pseudo-data; without pseudo-data, 2-loop is preferred. The normalization of the pseudo-data (the needed K-factor) gets closer to 1: 1.00 for W production (instead of 1.15), ~1.1 for tt production (instead of 1.4), ~1.4 for Higgs (120 GeV) (instead of 1.7)
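The momentum-sum bookkeeping discussed above can be checked numerically; a toy sketch with an invented total x·f(x) that integrates to exactly 1 (the LO* fits let this total float above 1, e.g. to ~1.1):

```python
# Numeric check of the momentum carried by a toy parton set:
# integrate x*f(x) summed over all partons from 0 to 1.
# The density below is invented purely to illustrate the bookkeeping.

def xf_total(x):
    # sum of x*f(x) over all partons for a made-up set;
    # integral of 6*(1-x)**5 over (0,1) is exactly 1
    return 6.0 * (1 - x) ** 5

def momentum_sum(xf, n=100000):
    """Simple midpoint integration of xf(x) on (0, 1)."""
    h = 1.0 / n
    return sum(xf((i + 0.5) * h) for i in range(n)) * h

print(momentum_sum(xf_total))  # ~ 1.0 for this toy set
```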

Results The modified LO W⁺ rapidity distribution agrees better with the NLO prediction in both magnitude and shape; the agreement at 7 and 10 TeV (not in the fit) is even better

Results

Results

Cross sections and uncertainties In the ATLAS Higgs group, we've just gone through an exercise of compiling predictions for Higgs production at LO/NLO/NNLO at a number of LHC center-of-mass energies. This has involved a comparison of competing programs for some processes, a standardization of inputs, and a calculation of uncertainties, including those from PDFs/variation of α_s, from the eigenvectors in CTEQ/MSTW and using the NNPDF approach. This is an exercise that other physics groups will be going through as well, both in ATLAS and in CMS (the ATLAS Standard Model group now, for example). There are a lot of tools/procedures out there now, and a lot of room for confusion

For example Use the Durham PDF plotter for the gluon uncertainties at the LHC, comparing CTEQ6.6 and MSTW2008. The MSTW uncertainty appears substantially less than that of CTEQ6.6, as was typical in the past, where the Δχ² used by CTEQ was ~100 and ~50 for MRST/MSTW. But the CTEQ uncertainty plotted is 90% CL; MSTW2008 is 68% CL. I've often seen plots in presentations showing PDF uncertainties without specifying whether they are at 68% or 90% CL (or sometimes even what PDF is plotted!)

...but compare on a consistent basis See A. Vicini's talk at the last PDF4LHC meeting

PDF errors So now, seemingly, we have more consistency in the size of PDF errors, at least for this particular example. The eigenvector sets represent the PDF uncertainty due to the experimental errors in the datasets used in the global fitting process. Another uncertainty is that due to the variation in the value of α_s. It has been traditional in the past for the PDF groups to publish PDF sets for variant values of α_s, typically over a fairly wide range (experiments always like to demonstrate that they can reject a value of α_s(m_Z) of 0.128). MSTW has recently tried to better quantify the uncertainty due to the variation of α_s by performing global fits over a finer range, taking into account any correlations between the value of α_s and the PDF errors. More recent studies by CTEQ and NNPDF have shown that, for their PDFs, the correlation between the α_s errors and the PDF errors is small enough that the two sources can be added in quadrature
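Under the stated assumption that the two sources are essentially uncorrelated, the combination is a simple quadrature sum; a sketch with invented numbers:

```python
import math

# Adding the PDF (eigenvector) uncertainty and the alpha_s uncertainty
# in quadrature, as is adequate when the two sources are nearly
# uncorrelated. Both inputs below are invented.

def combined_error(d_pdf, d_alphas):
    return math.sqrt(d_pdf ** 2 + d_alphas ** 2)

d_pdf    = 1.2   # pb, from the error PDFs (toy)
d_alphas = 0.6   # pb, from varying alpha_s(mZ) (toy)
print(combined_error(d_pdf, d_alphas))
```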

α_s(m_Z) and its uncertainty But of course life is not that simple: different values of α_s, and of its uncertainty, are used. CTEQ and NNPDF use the world average (actually 0.118 for CTEQ and 0.119 for NNPDF), whereas MSTW2008 uses 0.120, as determined from their fit. The latest world average (from Siggi Bethke -> PDG) is α_s(m_Z) = 0.1184 ± 0.0007. What does the error represent? Siggi said that only one of the results included in his world average was outside this range; suppose we're conservative and say that ±0.002 is a 90% CL (a joint CTEQ/NNPDF study using this premise will appear in the Les Houches proceedings). Could it be possible for all global PDF groups to use the world-average value of α_s in their fits, plus a prescribed range for its uncertainty (if not 0.002, then perhaps another acceptable value)? I told Albert that if he could persuade everyone of this, I would personally nominate him for the Nobel Peace Prize

New CTEQ6.6 α_s series: the CTEQ6.6 central value is α_s(m_Z) = 0.118. Error PDFs with α_s values of 0.116, 0.117, 0.118, 0.119 and 0.120 will be available in the next version of LHAPDF. The α_s error is roughly half the size of the PDF error.
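Since the series brackets the central value by +/-0.002, the α_s uncertainty on an observable can be estimated as half the spread between the extreme members; a sketch with hypothetical cross sections (the real numbers come from rerunning the calculation with each series member):

```python
def alphas_uncertainty(sigma):
    """Half the spread between the extreme members of the alpha_s series.

    sigma: dict mapping alpha_s(mZ) -> predicted cross section.
    """
    return 0.5 * abs(sigma[0.120] - sigma[0.116])

# Hypothetical predictions (pb) for the five series members
sigma = {0.116: 34.6, 0.117: 35.2, 0.118: 35.7, 0.119: 36.3, 0.120: 36.9}
delta_as = alphas_uncertainty(sigma)  # -> 1.15 pb
```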

(My) interim recommendation for ATLAS: Cross sections should be calculated with both MSTW2008 and CTEQ6.6. The upper range of the prediction should be given by the upper limit of the error prediction, using the prescription for combining the α_s uncertainty with the error PDFs: in quadrature for CTEQ6.6, and using the α_s eigenvector sets for MSTW2008. Ditto for the lower limit. So for a Higgs mass of 120 GeV at 14 TeV, the gg cross section limits would be 34.9 pb (defined by the CTEQ6.6 lower limit, α_s = 0.120) and 41.4 pb (defined by the MSTW2008 upper limit). Note that the central predictions for CTEQ6.6 (35.74 pb) and MSTW2008 (38.45 pb) differ not because of the gluon distributions (which coincide very closely in the relevant x range), but because of the different values of α_s used. Where possible, NNPDF predictions (and uncertainties) should be used in the comparisons as well.
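The envelope construction just described can be sketched as follows; only the 34.9 pb and 41.4 pb extremes are quoted in the text, so the two inner limits below are hypothetical placeholders:

```python
def envelope(bands):
    """Overall prediction range: the lowest lower limit and the highest
    upper limit over all PDF sets considered."""
    lowers, uppers = zip(*bands)
    return min(lowers), max(uppers)

# (lower, upper) limits in pb for gg->H, M_H = 120 GeV at 14 TeV.
# The CTEQ6.6 upper and MSTW2008 lower limits here are placeholders.
cteq66 = (34.9, 37.0)
mstw2008 = (37.2, 41.4)
lo, hi = envelope([cteq66, mstw2008])  # -> (34.9, 41.4)
```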

PDF Benchmarking 2010: benchmark processes, all to be calculated (i) at NLO (in the MSbar scheme), (ii) in 5-flavour quark schemes (definition of the scheme to be specified), (iii) at the 7 TeV [and 14 TeV] LHC, (iv) for central-value predictions and +/-68% CL [and +/-90% CL] PDF uncertainties, (v) with +/- α_s uncertainties, and (vi) repeated with α_s(m_Z) = 0.119 (prescription for combining with PDF errors to be specified). Where the processes are available, MCFM 5.7 will be used: a gzipped version prepared by John Campbell with the specified parameters (and the new CTEQ6.6 α_s series) will be shipped out this week.

Cross Sections: 1. W+, W-, and Z total cross sections and rapidity distributions; the total cross section ratios W+/W- and (W+ + W-)/Z; rapidity distributions at y = -4, -3, ..., +4; and also the W asymmetry: A_W(y) = (dW+/dy - dW-/dy)/(dW+/dy + dW-/dy). Use the following parameters, taken from PDG 2009: M_Z = 91.188 GeV; M_W = 80.398 GeV; zero-width approximation; G_F = 1.16637 x 10^-5 GeV^-2; other EW couplings derived using tree-level relations; BR(Z -> ll) = 0.03366; BR(W -> lnu) = 0.1080. CKM mixing parameters from eq. (11.27) of the PDG 2009 CKM review:

V_CKM = | 0.97419  0.2257   0.00359  |
        | 0.2256   0.97334  0.0415   |
        | 0.00874  0.0407   0.999133 |

Scales: mu_R = mu_F = M_Z or M_W.
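The asymmetry definition and the CKM input above translate directly into code; this sketch (with arbitrary illustrative rapidity-distribution values) also checks that the quoted CKM rows are normalized to the precision given:

```python
def w_asymmetry(dwp_dy, dwm_dy):
    """A_W(y) = (dW+/dy - dW-/dy) / (dW+/dy + dW-/dy)."""
    return (dwp_dy - dwm_dy) / (dwp_dy + dwm_dy)

# CKM matrix from eq. (11.27) of the PDG 2009 CKM review
V_CKM = [
    [0.97419, 0.2257, 0.00359],
    [0.2256, 0.97334, 0.0415],
    [0.00874, 0.0407, 0.999133],
]

# each row should be normalized to 1 within the quoted precision
row_norms = [sum(v * v for v in row) for row in V_CKM]

# illustrative input: dW+/dy = 3.0, dW-/dy = 1.0 gives A_W = 0.5
a_w = w_asymmetry(3.0, 1.0)
```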

Cross Sections: 2. gg -> H total cross sections at NLO, for M_H = 120, 180 and 240 GeV; zero-Higgs-width approximation, no BR; top loop only, with m_top = 171.3 GeV in sigma_0; scales: mu_R = mu_F = M_H. 3. ttbar total cross section at NLO, with m_top = 171.3 GeV; zero-top-width approximation, no BR; scales: mu_R = mu_F = m_top.

The following are optional. 4. Inclusive jet cross section distribution at NLO: use the FastNLO "fnl0004" option with sqrt(s) = 14 TeV; kT algorithm with D = 0.7; jet rapidity bin 0 < y_J < 0.8; scales: mu_R = mu_F = p_T^J. 5. Drell-Yan NLO dsigma/dM dy at y = 0, for e.g. M = 7 and 14 GeV; scales: mu_R = mu_F = M; coupling: alpha_em(0) = 1/137.036.

The Future: How well do we know the PDFs going into the start of LHC running? For much of the kinematic region, the uncertainty is pretty small, primarily because of the precision data that came from HERA, and the precision will improve as the final HERA datasets are released to the public. [Plots: PDF luminosity uncertainties for the gq, gg and qqbar initial states]

The Future, continued: Of course, as the LHC data come in, we will use them in future PDF fits. But in order to be useful, the precision has to be high, and most early data will not fulfill that requirement: the global fits are dominated not by statistical errors but by systematic errors and their correlations.

Normalization: It may be useful at the beginning of LHC running (and actually throughout) to normalize cross sections to the W/Z cross sections. One does have to worry about correlations/anti-correlations with the W/Z cross sections, though. What about Z production at high p_T? Unlike inclusive Z production, which is dominated by qqbar initial states, Z production at high p_T proceeds primarily through gq initial states.
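Such correlations can be quantified with the standard Hessian correlation cosine between two observables X and Y over the eigenvector sets; a sketch with toy numbers (not real eigenvector shifts):

```python
import math

def hessian_uncertainty(pairs):
    """Symmetric Hessian uncertainty from (plus, minus) eigenvector pairs."""
    return 0.5 * math.sqrt(sum((p - m) ** 2 for p, m in pairs))

def correlation_cosine(x_pairs, y_pairs):
    """cos(phi) = sum_i (X_i+ - X_i-)(Y_i+ - Y_i-) / (4 dX dY); values
    near +1 (-1) mean the two observables are (anti-)correlated through
    their shared PDF dependence."""
    dx, dy = hessian_uncertainty(x_pairs), hessian_uncertainty(y_pairs)
    num = sum((xp - xm) * (yp - ym)
              for (xp, xm), (yp, ym) in zip(x_pairs, y_pairs))
    return num / (4.0 * dx * dy)

# Toy example: two observables that move together along every eigenvector
cos_phi = correlation_cosine([(1.0, -1.0), (0.5, -0.5)],
                             [(2.0, -2.0), (1.0, -1.0)])  # -> 1.0
```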

[Plot: % PDF uncertainty of the gq luminosity, relevant for Z production with p_T^Z > 25 GeV/c]

For p_T^Z > 200 GeV/c, not only does the Z uncertainty become small, but a correlation also develops with the ttbar cross section, because the gluon is being probed in a similar x range.

and finally http://www.youtube.com/watch?v=adcf3copit8