Transformative Petascale Particle-in-Cell Simulation of Laser Plasma Interactions on Blue Waters


Transformative Petascale Particle-in-Cell Simulation of Laser Plasma Interactions on Blue Waters Frank Tsung Viktor K. Decyk Weiming An Xinlu Xu Han Wen Warren Mori (PI) Special Thanks to Galen Arnold & BW Consultants

Summary and Outline
- Overview of the project
- The particle-in-cell method and our main production code, OSIRIS
- Application of OSIRIS to plasma-based accelerators: production of high-quality electrons for ultra-bright XFEL light sources useful for nuclear science (Xu et al., in preparation for Nature Physics (2017))
- Higher-dimensional (2D & 3D) simulations of LPIs relevant to laser fusion:
  - Adding realism to 2D LPI simulations relevant to laser fusion
  - Controlling LPIs with temporal incoherence
  - Estimates for large-scale 3D LPI simulations (justifying the need for new algorithms on exascale supercomputers)
- Code development to reduce simulation time and move toward exascale (multi-GPU and OpenMP/MPI PIC codes)

Profile of OSIRIS + Introduction to PIC

The PIC algorithm cycles through four routines each timestep: integrate the particle equations of motion, deposit current and charge onto the grid, integrate the field equations on the grid, and interpolate the fields (E, B) back to the particle positions. Profile of the main loop:
- Field interpolation (gather): 42.9% of time, 290 lines
- Current deposition (scatter, (x,u) -> J): 35.3% of time, 609 lines
- Particle du/dt (momentum push): 9.3% of time, 216 lines
- Particle dx/dt (position push): 5.3% of time, 139 lines

The particle-in-cell method treats the plasma as a collection of computer particles. The interactions do not scale as N^2, because particle quantities are deposited onto a grid and the interactions are computed on the grid only. Because (# of particles) >> (# of grid points), the timing is dominated by the particle calculations and scales as N (orbit calculation + current & charge deposition). The code spends over 90% of its execution time in only these 4 routines; since they correspond to less than 2% of the code, optimization and porting are fairly straightforward, although not always trivial.
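The four routines above can be sketched as one cycle of a minimal 1D electrostatic PIC loop. This is a toy illustration in normalized units (assumed here for clarity), not the OSIRIS implementation, which is electromagnetic and relativistic:

```python
import numpy as np

def pic_step(x, v, q_over_m, grid_n, dx, dt, L):
    """One cycle of the four profiled routines, in 1D electrostatic form:
    deposit -> field solve -> interpolate -> push. Linear (CIC) weighting."""
    # --- charge deposition: scatter particles onto the grid
    rho = np.zeros(grid_n)
    gi = np.floor(x / dx).astype(int) % grid_n
    frac = x / dx - np.floor(x / dx)
    np.add.at(rho, gi, 1.0 - frac)
    np.add.at(rho, (gi + 1) % grid_n, frac)
    rho /= dx
    # --- field solve on the grid (periodic Poisson via FFT, normalized units)
    k = 2 * np.pi * np.fft.fftfreq(grid_n, d=dx)
    rho_k = np.fft.fft(rho - rho.mean())
    E_k = np.zeros_like(rho_k)
    E_k[1:] = rho_k[1:] / (1j * k[1:])
    E = np.real(np.fft.ifft(E_k))
    # --- field interpolation: gather grid E back to particle positions
    E_p = (1.0 - frac) * E[gi] + frac * E[(gi + 1) % grid_n]
    # --- particle push (leapfrog): du/dt, then dx/dt
    v = v + q_over_m * E_p * dt
    x = (x + v * dt) % L
    return x, v
```

Note how the scatter (deposition) and gather (interpolation) steps touch every particle; this is why they dominate the profile above.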

OSIRIS 3.0: the osiris framework, a massively parallel, fully relativistic particle-in-cell (PIC) code with a visualization and data analysis infrastructure, developed by the osiris.consortium (UCLA + IST). Contacts: Ricardo Fonseca (ricardo.fonseca@tecnico.ulisboa.pt), Frank Tsung (tsung@physics.ucla.edu). http://epp.tecnico.ulisboa.pt/ http://plasmasim.physics.ucla.edu/

Code features:
- Scalability to ~1.6 M cores (on Sequoia); sustained speed of >2.2 PFLOPS achieved on Blue Waters
- SIMD hardware optimization
- Parallel I/O
- Dynamic load balancing
- QED module
- Particle merging
- OpenMP/MPI parallelism
- CUDA/Intel Phi branches

Livingston Curve for Accelerators: Why Plasmas? The Livingston curve traces the history of electron accelerators from Lawrence's cyclotron to present-day technology. When energies from plasma-based accelerators are plotted on the same curve, they show an exciting trend: within a few years they will surpass conventional accelerators in energy. Plasma Wakefield Accelerator (PWFA): driven by a high-energy electron bunch (a drive beam followed by a trailing beam). Laser Wakefield Accelerator (LWFA, SMLWFA): driven by a single short pulse of photons.

FACET (Facility for Advanced Accelerator Experimental Tests): plasma-based accelerator experiments at SLAC, alongside LCLS (the Linac Coherent Light Source). FACET is a new facility that provides high-energy, high-peak-current e- and e+ beams for PWFA experiments at SLAC.

Recent Highlights in Plasma-Based Acceleration (Nature journals, last 10 years) -- simulations played a big role in all of these discoveries!
- 2007: 42 GeV energy gain in less than one meter (i.e., 0-42 GeV in 3 km, then 42-85 GeV in 1 m). Simulations also identified ionization-induced erosion as the mechanism limiting the energy gain (Blumenfeld et al. (2007)).
- 2014: "Full Speed Ahead" cover of Nature. 2 GeV energy gain in 36 cm of plasma with narrow energy spread. PIC simulations explained the narrow energy spread and produced quantitative agreement between simulation and experiment (Litos et al. (2014)).
- 2015: Positron experiments: in 1.3 meters of plasma, positrons gained 3-10 GeV of energy with low energy spread (1.8 to 6%). PIC simulations show that the trailing bunch flattens the wake and produces a high-quality positron beam (Corde et al. (2015)).

X-ray FEL: a coherent light source at the Angstrom scale. Can we make compact radiation sources for nuclear science?

In an X-ray FEL (XFEL), a coherent electron beam enters an undulator and bright x-rays come out; the spent electron beam is then diverted by a magnet (see right). The need for XFEL light sources can be justified by comparing light sources in terms of photon energy and brilliance. Brilliance, also called brightness, is a measure of the coherence of the photon beam (roughly the number of photons per unit volume, where the time axis corresponds to longitudinal distance divided by the speed of light). Improved longitudinal coherence further increases the brilliance, and shortening the pulse produces probes with better time resolution (on the femtosecond scale). Compared to synchrotron sources, LCLS, which began operating in 2009, represents a 9-order-of-magnitude jump in brightness. XFELs for the first time allow us to probe materials on the nuclear (Angstrom) length scale with femtosecond resolution. Conventional lasers, while providing high peak brilliance, operate in the ~micron range, which cannot resolve effects at the nuclear length scale.

Using PIC simulations, we are studying ways to generate electron beams with high enough energy and quality to produce 20 keV (0.62 Angstrom wavelength) light, comparable to that generated at LCLS. XFELs driven by these beams would allow us to probe matter on the nuclear scale (Bohr radius ~ 1 Angstrom) with femtosecond resolution. The on-axis resonant wavelength is

    lambda_r = lambda_u / (2 gamma^2) * (1 + K^2 / 2)

where lambda_u is the undulator period, gamma is the beam energy in units of the electron rest energy, and K is the undulator strength parameter.
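The resonance condition above is easy to evaluate. A short sketch, using illustrative LCLS-like parameters (the 3 cm undulator period, K = 3.5, and 13.6 GeV beam energy are assumptions for this example, not numbers from the slide):

```python
def fel_resonant_wavelength(lambda_u_m, K, beam_energy_gev):
    """On-axis FEL resonance condition quoted above:
    lambda_r = lambda_u / (2 gamma^2) * (1 + K^2 / 2)."""
    gamma = beam_energy_gev * 1e3 / 0.511  # electron rest energy 0.511 MeV
    return lambda_u_m / (2 * gamma**2) * (1 + K**2 / 2)

# Illustrative LCLS-like numbers: 3 cm period, K = 3.5, 13.6 GeV beam.
lam = fel_resonant_wavelength(0.03, 3.5, 13.6)  # ~1.5e-10 m, i.e. ~1.5 Angstrom
```

The inverse-gamma-squared scaling is the point of the whole exercise: doubling the beam energy quarters the output wavelength, which is why multi-GeV plasma-accelerated beams are interesting XFEL drivers.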

Question: Is it possible to accelerate a beam with 10^-4 energy spread using PWFA?

For the electrons to radiate coherently in an XFEL, the electrons within a slice (i.e., a small neighborhood in the longitudinal direction) must be very coherent. Theory estimates that a relative energy spread of 10^-4 is needed to produce coherent x-rays in the undulator.

Main parameters:

          I [kA]  sigma_z [um]  eps_n [um]  sigma_r [um]  Q [pC]  E_b [GeV]  sigma_Eb [keV]
Driver    2       16            1.2         0.52          269     8          80
Witness   2       6             0.4         0.3           103     8          80

Plasma: n_p = 7e16 cm^-3, k_p^-1 = 20 um

QuickPIC simulation setup:
- Box: 167 um x 167 um x 167 um
- Grid size: 40 nm x 40 nm x 163 nm

Because the beams are tightly focused and we need to resolve the beam evolution within a slice, these simulations require high resolution; Blue Waters resources are critical for this problem:
- 4000 x 4000 x 1000 grid (16 billion cells)
- 4 million particles in each bunch (642 million real electrons in the witness bunch)
- 64 billion plasma electrons (fixed ions) up to 128 billion total plasma particles (with mobile ions)
- 1/2 million core-hours per simulation
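The plasma skin depth k_p^-1 in the table follows directly from the density. A quick check in SI units (a sketch, using standard physical constants):

```python
import math

def plasma_skin_depth_um(n_cm3):
    """k_p^-1 = c / omega_p, with omega_p = sqrt(n e^2 / (eps0 m_e)) in SI."""
    e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8
    n = n_cm3 * 1e6                          # cm^-3 -> m^-3
    omega_p = math.sqrt(n * e**2 / (eps0 * m_e))
    return c / omega_p * 1e6                 # m -> microns

kp_inv = plasma_skin_depth_um(7e16)          # the table's n_p -> ~20 microns
```

This reproduces the k_p^-1 = 20 um quoted for n_p = 7e16 cm^-3, and shows why the 167 um box is about 8 skin depths across.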

Simulation results at z = 1.55 m: (1) The slice energy spread in most parts of the beam is below the requirement of 3 x 10^-4. (2) The emittance (the measure of the transverse coherence of the beam) is conserved after 1.5 meters. This work is being prepared for publication in Nature Physics (X. Xu and co-workers).

Laser-Plasma Interactions in IFE

IFE (inertial fusion energy) uses lasers to compress fusion pellets to fusion conditions; the goal of these experiments is to extract more fusion energy from the fuel than the input energy of the laser. Here the excitation of plasma waves via LPI (laser-plasma interactions) is detrimental in two ways:
- Laser light can be scattered backward toward the source and so never reaches the target.
- LPI produces hot electrons which heat the target, making it harder to compress.

The LPI problem is very challenging because it spans many orders of magnitude in both length scale and time scale. The spatial scales run from < 1 micron (the laser wavelength) to millimeters (the length of the plasma); the temporal scales run from a femtosecond (the laser period) to nanoseconds (the duration of the fusion pulse). A typical PIC simulation spans ~10 ps.

LPI length scales (NIF, the National Ignition Facility): laser wavelength 350 nm; speckle width 1-10 um; speckle length ~100 um; inner beam path > 1 mm.
LPI time scales: laser period ~1 fs; LPI growth time ~1 ps; nonlinear interactions (wave-wave, wave-particle, and multiple speckles) ~10 ps; final laser spike ~1 ns; full NIF pulse ~20 ns.

Performing a typical 1D OSIRIS simulation using NIF parameters

Experimentalists at NIF can currently reconstruct plasma conditions (such as density and temperature) using a hydro code. Using this plasma map, we perform a series of 1D OSIRIS simulations along each beam path (indicated by the dashed line), each taking ~100 CPU-hours (~1 hour on a modest-size local cluster).

1D OSIRIS simulations can predict:
- the spectrum of backscattered light (which can be compared against experiments)
- the spectrum of energetic electrons (shown below), with features due to backscatter, LDI of the backscatter, rescatter of the initial SRS, and LDI of the rescatter
- the energy partition, i.e., how the incident laser energy is divided among transmitted light, backscattered light, and energetic electrons

Parameters: I_laser = 2-8 x 10^14 W/cm^2 (density profiles from NIF hydro simulations, with I_0 = 4e14 in green and I_0 = 8e14 in red), lambda_laser = 351 nm, T_e = 2.75 keV, T_i = 1 keV, Z = 1, length = 1.5 mm, t_max up to 20 ps, 14 million particles.

We have simulated stimulated Raman scattering (SRS) in multi-speckle scenarios (in 2D)

Although SRS itself grows along the direction of laser propagation, the SRS problem in IFE is not strictly 1D: each NIF beam is made up of 4 lasers (a NIF quad), and each laser is not a plane wave but contains speckles, each a few microns in diameter. These hotspots are problematic because, according to linear theory, a situation can arise where the averaged laser is LPI-unstable only inside the hotspots (and the hotspots can move in time when colors are added near the carrier frequency), yet the LPIs in these hotspots can trigger activity elsewhere. The multi-speckle problem is inherently 2D and even 3D.

We have been using OSIRIS to look at SRS in multi-speckle scenarios. In our simulations we observed the excitation of SRS in under-threshold speckles via:
- seeding by backscattered light from neighboring speckles
- seeding by plasma-wave seeds from a neighboring speckle
- inflation, where hot electrons from a neighboring speckle flatten the distribution function and reduce plasma-wave damping

Recent experiments have shown that external magnetic fields can reduce LPI activity; this is another area of active research in our group.

(Beam-smoothing schemes, from the figure: focusing without smoothing; focusing with a phase scrambler; focusing with a phase scrambler plus smoothing by spectral dispersion (SSD).)

In the past year, we have added realistic beam-smoothing effects into OSIRIS: ISI (Induced Spatial Incoherence), SSD (Smoothing by Spectral Dispersion), and STUD (Spike Train of Uneven Duration) pulses.

LPI Simulation Results: temporal bandwidth reduces LPI

Temporal bandwidth can suppress SRS growth, and incorporating polarization smoothing further reduces SRS reflectivity (compared across RPP, ISI, and STUD beams). We use small simulations (90k core-hours each) to identify interesting parameters before starting full simulations (<1 million core-hours each): 15 speckles across and ~120 microns long, with ~100 million grid cells and ~10 billion particles each.

PIC simulation of 3D LPIs is still a challenge and requires exascale supercomputers; this will require code development in both new numerical methods and new codes for new hardware.

                  2D multi-speckle        3D, 1 speckle         3D, multi-speckle
                  (along NIF beam path)                         (along NIF beam path)
Speckle scale     50 x 8                  1 x 1 x 1             10 x 10 x 5
Size (microns)    150 x 1500              9 x 9 x 120           28 x 28 x 900
Grids             9,000 x 134,000         500 x 500 x 11,000    1,700 x 1,700 x 80,000
Particles         300 billion             300 billion           22 trillion
Steps             470,000 (15 ps)         540,000 (5 ps)        540,000 (15 ps)
Memory usage      7 TB                    6 TB                  1.6 PB
CPU-hours         8 million               13 million            1 billion (2 months on
                                                                the full Blue Waters)
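The petabyte-scale memory figure in the last column is dominated by the particle arrays. A rough sketch of the estimate, with assumed sizes (72 bytes per particle, i.e. 9 doubles, and 12 single-precision field quantities per cell; these are illustrative, not OSIRIS's actual layout):

```python
def pic_memory_tb(n_particles, n_cells,
                  bytes_per_particle=72, fields_per_cell=12, bytes_per_field=4):
    """Rough lower bound on PIC memory in TB: particle storage plus
    grid-field storage. Assumed per-item sizes are illustrative."""
    total = (n_particles * bytes_per_particle
             + n_cells * fields_per_cell * bytes_per_field)
    return total / 1e12

# 3D multi-speckle column of the table: 22 trillion particles,
# 1,700 x 1,700 x 80,000 cells -> ~1.6 PB, consistent with the table.
mem = pic_memory_tb(22e12, 1700 * 1700 * 80000)
```

At 22 trillion particles the grid contribution is two orders of magnitude smaller than the particle arrays, which is the usual PIC regime (particles >> cells).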

Designing New Particle-in-Cell (PIC) Algorithms on GPUs

On the GPU (and on multi-core CPUs), we apply a local domain decomposition scheme based on the concept of tiles:
- Particles are ordered by tile, with tiles varying from 2 x 2 to 16 x 16 grid points (a typical tile size is 16 x 16 in 2D and 8 x 8 x 8 in 3D).
- On each GPU, the problem is partitioned into many tiles, and the code associates a thread block with each tile and the particles located in that tile.
- We created a new data structure for particles, partitioned among thread blocks (i.e., particles are sorted according to their tile id, giving a local domain decomposition within the GPU): dimension part(npmax,idimp,num_blocks). Within a tile the grid data and the particle data are aligned, so the loops can be easily parallelized.

(Benchmarks below are on a Fermi M2090.)
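The tile-ordered layout can be sketched in a few lines of NumPy. This is a host-side illustration of the idea only, not the actual part(npmax,idimp,num_blocks) layout or the CUDA kernels:

```python
import numpy as np

def bin_particles_by_tile(x, y, tile_nx, tile_ny, nx, ny):
    """Sort particles by tile id so each thread block can work on a
    contiguous, tile-local chunk of the particle arrays (sketch)."""
    tiles_x = nx // tile_nx
    tile_id = (y.astype(int) // tile_ny) * tiles_x + x.astype(int) // tile_nx
    order = np.argsort(tile_id, kind="stable")
    # offsets[i] marks where tile i's particles start in the sorted arrays
    counts = np.bincount(tile_id, minlength=tiles_x * (ny // tile_ny))
    offsets = np.concatenate(([0], np.cumsum(counts)))
    return x[order], y[order], tile_id[order], offsets
```

With this ordering, the deposition and gather loops for one tile touch only that tile's slice of the grid, which is what makes the per-block shared-memory strategy work.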

Porting PIC codes to GPUs (continued): maintaining particle order

A particle manager is needed to maintain the data alignment; it runs every timestep, in three steps:
1. The particle push creates a list of particles that are leaving each tile.
2. Using this list, each thread places its outgoing particles into a direction-ordered buffer it controls.
3. Using these lists, each tile copies incoming particles from the buffers into its particle array.

This is less than a full sort, so the overhead is low when particles are already in the correct tile; it is essentially message-passing, except that a buffer contains multiple destinations. In the end, the particle array belonging to a tile has no gaps: incoming particles are moved into any holes created by departing particles, and if holes still remain they are filled with particles from the end of the array.
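The reordering step for a single tile can be sketched as follows. This is an illustrative NumPy version, not the GPU kernel: here the survivors are compacted with a boolean mask, whereas the GPU code fills each hole with a particle taken from the end of the array (moving far fewer particles), but the result is the same gap-free, tile-local list:

```python
import numpy as np

def reorder_tile(particles, tile_of, my_tile):
    """One tile's reordering: list the leavers, buffer them for delivery
    to their destination tiles, and close the gaps they leave behind."""
    leaving = np.nonzero(tile_of != my_tile)[0]   # step 1: list departing particles
    outgoing = particles[leaving].copy()          # step 2: direction-ordered buffer
    keep = np.ones(len(particles), dtype=bool)
    keep[leaving] = False
    compacted = particles[keep]                   # step 3: gap-free particle array
    return compacted, outgoing
```

Because few particles cross a tile boundary in one timestep, `leaving` is short, which is why this costs far less than a full sort.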

Evaluating New PIC Algorithms on GPU: Electromagnetic Case

2-1/2D EM benchmark with a 2048 x 2048 grid and 150,994,944 particles (36 particles/cell); optimal block size = 128, optimal tile size = 16 x 16. The GPU algorithm is also implemented in OpenMP. Hot-plasma results with dt = 0.04, c/v_th = 10, relativistic; times are per particle per time step:

                 CPU: Intel i7   GPU: Fermi M2090   OpenMP (12 CPUs)
Push             66.5 ns         0.426 ns           5.645 ns
Deposit          36.7 ns         0.918 ns           3.362 ns
Reorder          0.4 ns          0.698 ns           0.056 ns
Total particle   103.6 ns        2.042 ns           9.062 ns (11.4x speedup)

The total particle speedup on the Fermi M2090 was 51x compared to 1 Intel i7 core. The field solver takes an additional 10% on the GPU, 11% on the CPU. OK, so how about multiple CPUs/GPUs?

Evaluating New PIC Algorithms on GPU: Electrostatic Case

2D ES benchmark with a 2048 x 2048 grid and 150,994,944 particles (36 particles/cell); optimal block size = 128, optimal tile size = 16 x 16 (8 x 8 x 8 in 3D); single precision; Fermi M2090 GPUs. For multi-GPU runs, the GPU-MPI particle reordering moves particles from the GPU buffers through MPI send/receive buffers between GPUs. Hot-plasma results with dt = 0.1; times are per particle per time step:

                 CPU: Intel i7   1 GPU       24 GPUs    108 GPUs
Push             22.1 ns         0.327 ns    13.4 ps    3.46 ps
Deposit          8.5 ns          0.233 ns    11.0 ps    2.60 ps
Reorder          0.4 ns          0.442 ns    19.7 ps    5.21 ps
Total particle   31.0 ns         1.004 ns    49.9 ps    13.10 ps

The total particle speedup on 108 Fermi M2090s compared to 1 GPU was 77x (>70% parallel efficiency), and we feel we can improve on this. Currently the field solver (which uses an FFT) takes an additional 5% on 1 GPU, 45% on 2 GPUs, and 73% on 108 GPUs (there are 3 GPUs per node on our cluster, sharing one Ethernet port); we believe the efficiency should be higher for PIC codes with a finite-difference solver. We are also working on an Intel Phi version and an OpenMP + MPI version (which achieved ~80% efficiency on 50,000 cores on Edison @ NERSC)! These codes are available at the UCLA PICKSC web site: http://picksc.idre.ucla.edu/

UCLA Particle-in-Cell and Kinetic Simulation Software Center (PICKSC): an NSF-funded center whose goal is to provide and document parallel particle-in-cell (PIC) and other kinetic codes. http://picksc.idre.ucla.edu/ GitHub: UCLA Plasma Simulation Group.

Planned activities:
- Provide parallel skeleton codes for various PIC codes on traditional and new hardware systems.
- Provide MPI-based production PIC codes that will run on desktop computers, mid-size clusters, and the largest parallel computers in the world.
- Provide key components for constructing new parallel production PIC codes: electrostatic, electromagnetic, and others.
- Provide interactive codes for teaching important and difficult plasma physics concepts.
- Facilitate benchmarking of kinetic codes by the physics community, not only for performance but also to compare the physics approximations used.
- Document best and worst practices for code writing, which are often unpublished and get repeatedly rediscovered.
- Provide some services for customizing software for specific purposes (based on our existing codes).

Key components and codes will be made available under standard open-source licenses as an open-source community resource; contributions from others are welcome.