Cloud Native

How Distributed Computing is Driving the Next Generation of Mini Particle Accelerators

27 Dec 2015 10:13am

Most of us think of particle accelerators as cavernous, resource-intensive, multi-billion-dollar projects that take over large swaths of land. A prime example is the 17-mile-long Large Hadron Collider at CERN, the accelerator responsible for ground-breaking discoveries such as evidence of the so-called “God particle,” as well as the elusive pentaquark. These machines are enormous because the long-wavelength, radio-frequency electromagnetic fields they use can only add so much energy per meter, and therefore they need miles of distance to push particles close to light speed.

But these behemoths of science may soon become relics of the past. Thanks to recent advances in laser technology, future particle accelerators will shrink in size while becoming many times more powerful. What’s more, distributed computing infrastructures are playing a pivotal role in the evolution of these new accelerators: helping scientists develop new designs, crunching vast amounts of experimental data at scale, and powering the complex simulations behind such experiments.

Desktop-sized particle accelerators

Recent studies show great promise for this next generation of particle accelerators. Scientists at the US Department of Energy’s Lawrence Berkeley National Laboratory demonstrated last year that they could use lasers to push subatomic particles to record-breaking energies on a desktop-sized unit. But behind the scenes, the experiment’s success was made possible by complex computer simulations run at the National Energy Research Scientific Computing Center (NERSC).

The researchers’ findings, published in the journal Physical Review Letters, detailed how the team used a powerful laser at the Berkeley Lab Laser Accelerator (BELLA), one capable of producing a quadrillion watts (a petawatt) of power. Aiming it with extreme precision through a 500-micron-wide opening into a 3.5-inch-long tube of charged-particle gas (plasma) placed about 46 feet away, the team accelerated electrons to an estimated 4.25 giga-electron volts (GeV), an accelerating gradient roughly 1,000 times that of a conventional collider, in a compact machine about 3 million times smaller than the LHC.
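To put those figures in perspective, here is a rough back-of-envelope check written as a short Python sketch. The 4.25 GeV energy and the 3.5-inch tube length come from the article; the conventional-cavity gradient of roughly 50 MeV per meter is an assumed, representative value rather than a number quoted in the study.

    # Illustrative arithmetic only; the conventional gradient is an assumed value.
    energy_gain_gev = 4.25                  # energy reached in the BELLA experiment
    tube_length_m = 3.5 * 0.0254            # the 3.5-inch plasma tube, in meters

    laser_plasma_gradient = energy_gain_gev / tube_length_m   # ~48 GeV per meter
    conventional_gradient = 0.05                               # ~50 MeV/m, assumed RF-cavity figure

    print(f"laser-plasma gradient: ~{laser_plasma_gradient:.0f} GeV/m")
    print(f"ratio to conventional: ~{laser_plasma_gradient / conventional_gradient:.0f}x")

Running this yields a gradient of roughly 48 GeV per meter, around a thousand times the assumed conventional figure, which is where the “about 1,000 times” comparison comes from.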

As the laser pulse blasts through the plasma, it carves out a channel, displacing electrons while leaving the heavier ions behind. This separation of charge generates an immense electric “wakefield” that pulls free electrons back in after the laser has passed, accelerating them to near-light speeds over a comparatively short distance. It’s similar to how a moving boat displaces water that immediately flows back in after the boat has passed, forming a visible track of turbulence, or “wake.”
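The size of that wake is set by how dense the plasma is. The short Python sketch below evaluates the standard plasma-frequency formula; the electron density it uses is an assumed value typical of laser-plasma experiments, since the article does not quote BELLA’s exact figure.

    import math

    # SI constants
    e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8
    n_e = 1e24                    # electron density in m^-3 (10^18 per cm^3, an assumed value)

    omega_p = math.sqrt(n_e * e**2 / (eps0 * m_e))   # plasma oscillation frequency, rad/s
    lambda_p = 2 * math.pi * c / omega_p             # wavelength of the wake behind the pulse

    print(f"wake wavelength: ~{lambda_p * 1e6:.0f} microns")   # ~33 microns at this density

Compressing the accelerating structure down to tens of microns is what allows the plasma to sustain fields far beyond what metal cavities can withstand.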


In contrast, colliders like the LHC depend on long-wavelength radio-frequency fields, with their modest accelerating gradients, to propel particles, requiring large, land-intensive and costly installations that only a lucky few have access to. Compact accelerators would undoubtedly democratize research, and could potentially have a longer life span than traditional metal-lined accelerators, which begin to break down at field gradients beyond roughly 100 mega-electron volts per meter.

Nonlinear systems

Precision when using these powerful lasers is paramount. “Small changes in the setup give you big perturbations,” explains Eric Esarey, senior science advisor for the Accelerator Technology and Applied Physics Division at the Berkeley Lab. That’s why computer simulations are run before the actual experiment is carried out on the laser-plasma accelerator. The challenge lies in the fact that the physics of laser-plasma interactions is nonlinear and quite complex, so nonlinear simulation tools are needed to understand the intricate dynamics between the accelerated beam, the plasma electrons and the generated wakefield.
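Modeling those dynamics typically relies on the particle-in-cell (PIC) method, in which simulated “macro-particles” move through electromagnetic fields computed on a grid. The sketch below is a deliberately minimal, one-dimensional, electrostatic toy in Python, shown only to illustrate the basic deposit, field-solve and push cycle; production laser-plasma codes are three-dimensional, fully electromagnetic, relativistic and massively parallel.

    import numpy as np

    ng, n_part = 64, 10_000            # grid cells and macro-particles (toy sizes)
    length = 2 * np.pi                 # periodic domain, normalized units
    dx = length / ng
    dt = 0.1                           # time step in units of the inverse plasma frequency

    rng = np.random.default_rng(0)
    x = rng.uniform(0, length, n_part) # electron macro-particle positions
    v = 0.01 * np.sin(x)               # small perturbation that excites a plasma oscillation
    weight = length / n_part           # density carried by each macro-particle
    q_over_m = -1.0                    # electron charge-to-mass ratio, normalized

    for step in range(200):
        # 1) Deposit electrons onto the grid (nearest-grid-point weighting); a fixed
        #    neutralizing ion background of density 1 provides the opposite charge.
        idx = (x / dx).astype(int) % ng
        n_e = np.bincount(idx, minlength=ng) * weight / dx
        rho = 1.0 - n_e                              # net charge density (ions minus electrons)

        # 2) Solve Gauss's law dE/dx = rho spectrally for the electric field.
        k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
        rho_k = np.fft.fft(rho)
        E_k = np.zeros_like(rho_k)
        E_k[1:] = rho_k[1:] / (1j * k[1:])
        E = np.real(np.fft.ifft(E_k))

        # 3) Interpolate the field to the particles and push them forward in time.
        v += q_over_m * E[idx] * dt
        x = (x + v * dt) % length

Even in this toy, the nonlinearity is apparent: the fields depend on where the particles are, and where the particles go depends on the fields, so the two must be advanced together step by step.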

According to experts, simulations allow for better optimization, tighter control of parameters, and detailed diagnostics that might not otherwise be available during the experiment itself. Such simulations can eat up millions of processor hours and produce terabytes of output, so they require powerful supercomputers with tens of thousands of processors, spread across networked, distributed computing clusters like those at NERSC. As the contributing scientists note over at SciDAC:

Large-scale particle simulations provide essential understanding of accelerator physics to advance beam performance and stability of high-gradient, laser plasma particle accelerators. Such simulations demand both massive parallelism and careful model development.
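To get a feel for the scale behind that statement, here is some illustrative arithmetic; every figure in it is an assumed, order-of-magnitude placeholder rather than a number reported for the BELLA simulations.

    # All values below are assumptions chosen for illustration only.
    cells = 4096 * 512 * 512                # 3D grid fine enough to resolve the laser wavelength
    particles = cells * 8                   # eight macro-particles per cell
    steps = 1_000_000                       # time steps to follow the pulse through the plasma
    pushes_per_core_per_sec = 1e6           # assumed per-core particle-update rate

    core_hours = particles * steps / pushes_per_core_per_sec / 3600
    snapshot_tb = particles * 8 * 8 / 1e12  # eight double-precision values per particle

    print(f"~{core_hours / 1e6:.1f} million core-hours")      # ~2.4 million
    print(f"~{snapshot_tb:.2f} TB per particle snapshot")     # ~0.55 TB

Numbers of that magnitude are why such runs are spread across tens of thousands of cores, and why each periodic data dump lands in the terabyte range.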

Scientists are now working to develop laser accelerators and methodologies that will drive particles to energies of 10 GeV and beyond. The particle simulations being developed with the aid of distributed computing infrastructures are advancing the field tremendously, providing a common framework for studying and understanding the physics behind these experiments, while also offering important insights into how to improve the design of future laser-plasma accelerators to reach even higher energies. With tomorrow’s particle accelerators potentially becoming smaller, more precise and more accessible to the wider scientific community, we can look forward to even more exciting discoveries about the inner workings of the universe.

Read more over at Berkeley Lab and SciDAC.

Images: Berkeley Lab
