AI Tames Plasma Instabilities in Fusion Reactors, Boosting Performance

Fusion researchers led by engineers at Princeton and the U.S. Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) have successfully deployed machine learning methods to suppress harmful edge instabilities in fusion reactors without sacrificing plasma performance. Their approach, which optimizes the system’s suppression response in real time, achieved the highest fusion performance without edge bursts at two different fusion facilities, each with its own set of operating parameters. The findings, reported on May 11 in Nature Communications, underscore the vast potential of machine learning and other artificial intelligence systems to quickly quash plasma instabilities.

Achieving a sustained fusion reaction is a delicate balancing act, requiring a sea of moving parts to come together to maintain a high-performing plasma: one that is dense enough, hot enough, and confined for long enough for fusion to take place. However, as researchers push the limits of plasma performance, they have encountered new challenges in keeping plasmas under control, including one that involves bursts of energy escaping from the edge of a super-hot plasma. These edge bursts degrade overall performance and can even damage the plasma-facing components of a reactor over time.

The Limitations of Conventional Fixes

One fix for these instabilities involves using the magnetic coils that surround a fusion reactor to apply magnetic fields to the edge of the plasma, breaking up the structures that might otherwise develop into a full-fledged edge instability. Yet this solution is imperfect: while successful at stabilizing the plasma, applying these magnetic perturbations typically leads to lower overall performance.

“We have a way to control these instabilities, but in turn, we’ve had to sacrifice performance, which is one of the main motivations for operating in the high-confinement mode in the first place,” said Egemen Kolemen, associate professor of mechanical and aerospace engineering and the Andlinger Center for Energy and the Environment at Princeton, who is also a staff research physicist at PPPL.

The performance loss is partly due to the difficulty of optimizing the shape and amplitude of the applied magnetic perturbations, which in turn stems from the computational intensity of existing physics-based optimization approaches. These conventional methods involve a set of complex equations and can take tens of seconds to optimize a single point in time—far from ideal when plasma behavior can change in mere milliseconds.
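To put that mismatch in numbers, here is a quick back-of-the-envelope calculation based on the rough figures quoted above (the 20-second solver time is an assumed stand-in for "tens of seconds"):

```python
# Rough latency comparison between a physics-based optimizer and the plasma
# itself, using figures quoted in the article (assumed, not measured, values).

physics_solver_seconds = 20.0     # "tens of seconds" per optimized time point
plasma_timescale_seconds = 1e-3   # plasma behavior can shift within milliseconds

# Number of times the plasma can change state while one optimization completes:
lag_factor = physics_solver_seconds / plasma_timescale_seconds
print(f"The plasma evolves ~{lag_factor:,.0f}x faster than the optimizer responds")
# -> The plasma evolves ~20,000x faster than the optimizer responds
```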

Real-Time Optimization Through Machine Learning

The Princeton-led team’s machine learning approach slashes the computation time from tens of seconds to the millisecond scale, opening the door for real-time optimization. The machine learning model, which is a more efficient surrogate for existing physics-based models, can monitor the plasma’s status from one millisecond to the next and alter the amplitude and shape of the magnetic perturbations as needed. This allows the controller to strike a balance between edge burst suppression and high fusion performance, without sacrificing one for the other.
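The press release does not spell out the controller's internals, but a minimal sketch of such a surrogate-driven control loop might look like the following. Everything here is illustrative: the surrogate function, its input features, the candidate coil settings, and the risk threshold are hypothetical stand-ins for the authors' trained model, not their actual code.

```python
import numpy as np

# Minimal sketch of a surrogate-driven control loop. All names and values
# below are hypothetical stand-ins, not the authors' actual system.

rng = np.random.default_rng(0)

# Stand-in "surrogate": a small trained network would normally live here. It
# maps (plasma state, candidate coil settings) -> (predicted performance,
# predicted edge-burst risk) in well under a millisecond.
W = rng.normal(size=(7, 2))  # pretend these weights were fit offline to the physics code

def surrogate(state, coil_amplitude, coil_phase):
    x = np.array([*state, coil_amplitude, coil_phase, 1.0])
    perf, risk_logit = x @ W
    return perf, 1.0 / (1.0 + np.exp(-risk_logit))  # squash risk into [0, 1]

# Candidate perturbation settings evaluated every control cycle.
amplitudes = np.linspace(0.0, 1.0, 11)  # normalized coil current
phases = np.linspace(0.0, np.pi, 8)     # relative phasing between coil rows

def choose_settings(state, risk_limit=0.05):
    """Return the (amplitude, phase) pair with the best predicted performance
    whose predicted edge-burst risk stays under the limit, or None."""
    best, best_perf = None, -np.inf
    for a in amplitudes:
        for p in phases:
            perf, risk = surrogate(state, a, p)
            if risk < risk_limit and perf > best_perf:
                best, best_perf = (a, p), perf
    return best

# Millisecond-scale control loop (mock diagnostics for illustration).
for step in range(5):
    state = rng.normal(size=4)  # real-time plasma measurements in practice
    settings = choose_settings(state)
    print(f"t={step} ms -> coil settings: {settings}")
```

The balance the article describes is visible in choose_settings: rather than suppressing bursts at any cost, the controller considers only perturbation settings whose predicted burst risk stays acceptable, then maximizes predicted performance within that set.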

“With our machine learning surrogate model, we reduced the calculation time of a code that we wanted to use by orders of magnitude,” said co-first author Ricardo Shousha, a postdoctoral researcher at PPPL and former graduate student in Kolemen’s group.

The team demonstrated the success of their approach at both the KSTAR tokamak in South Korea and the DIII-D tokamak in San Diego. At both facilities, each of which has a distinct set of magnetic coils, the method achieved strong confinement and high fusion performance without harmful plasma edge bursts.

The researchers are already working to refine their model to be compatible with other fusion devices, including planned future reactors such as ITER, which is currently under construction. They are also enhancing their model’s predictive capabilities to recognize the precursors to these harmful instabilities, potentially allowing for optimization without encountering a single edge burst.

Kolemen said the current work is yet another example of the potential for AI to overcome longstanding bottlenecks in developing fusion power as a clean energy resource. “These machine learning approaches have unlocked new ways of approaching these well-known fusion challenges,” he said.

The paper, “Highest fusion performance without harmful edge energy bursts in tokamak,” was published May 11 in Nature Communications, with support from the U.S. Department of Energy, the National Research Foundation of Korea, and the Korea Institute of Fusion Energy.

