High-Energy Physicists Set Record for Network Data Transfer

Researchers have set a new world record for data transfer, helping to usher in the next generation of high-speed network technology. At the SuperComputing 2011 (SC11) conference in Seattle during mid-November, the international team transferred data in opposite directions at a combined rate of 186 gigabits per second (Gbps) in a wide-area network circuit. The rate is equivalent to moving two million gigabytes per day, fast enough to transfer nearly 100,000 full Blu-ray disks—each with a complete movie and all the extras—in a day.
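For readers who want to check those equivalences, here is a minimal back-of-envelope sketch. It assumes decimal units (1 GB = 10^9 bytes) and roughly 20 GB for a full Blu-ray movie with its extras; neither figure appears in the announcement.

```python
# Back-of-envelope check of the quoted equivalences (assumptions noted below).
RATE_GBPS = 186                 # combined two-way transfer rate, gigabits per second
SECONDS_PER_DAY = 24 * 60 * 60
GB_PER_BLU_RAY = 20             # assumed size of a full Blu-ray movie; not from the article

gigabytes_per_day = RATE_GBPS * SECONDS_PER_DAY / 8   # 8 bits per byte
blu_rays_per_day = gigabytes_per_day / GB_PER_BLU_RAY

print(f"{gigabytes_per_day:,.0f} GB per day")    # ~2,008,800 GB: "two million gigabytes per day"
print(f"{blu_rays_per_day:,.0f} discs per day")  # ~100,440: "nearly 100,000 full Blu-ray disks"
```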

The team of high-energy physicists, computer scientists, and network engineers was led by the California Institute of Technology (Caltech), working with the University of Victoria, the University of Michigan, the European Organization for Nuclear Research (CERN), Florida International University, and other partners.

According to the researchers, the achievement will help establish new ways to transport the increasingly large quantities of data that traverse continents and oceans via global networks of optical fibers. These new methods are needed for the next generation of network technology, which will allow transfer rates of 40 and 100 Gbps and is expected to be built out over the next couple of years.

“Our group and its partners are showing how massive amounts of data will be handled and transported in the future,” says Harvey Newman, professor of physics and head of the high-energy physics (HEP) team. “Having these tools in our hands allows us to engage in realizable visions others do not have. We can see a clear path to a future others cannot yet imagine with any confidence.”

Using a 100-Gbps circuit set up by Canada’s Advanced Research and Innovation Network (CANARIE) and BCNET, a non-profit, shared IT services organization, the team was able to reach transfer rates of 98 Gbps between the University of Victoria Computing Centre in Victoria, British Columbia, and the Washington State Convention Center in Seattle. With a simultaneous data rate of 88 Gbps in the opposite direction, the team reached a sustained two-way data rate of 186 Gbps between the two data centers, breaking its previous peak-rate record of 119 Gbps, set in 2009.

In addition, partners from the University of Florida, the University of California at San Diego, Vanderbilt University, Brazil (Rio de Janeiro State University and São Paulo State University), and Korea (Kyungpook National University and the Korea Institute of Science and Technology Information) took part in a larger demonstration, transferring massive amounts of data between the Caltech booth at the SC11 conference and other locations in the United States, as well as in Brazil and Korea.

The fast transfer rate is also crucial for dealing with the tremendous amounts of data coming from the Large Hadron Collider (LHC) at CERN, the particle accelerator that physicists hope will help them discover new particles and better understand the nature of matter, space, and time, solving some of the biggest mysteries of the universe. More than 100 petabytes of data (more than four million Blu-ray disks) have been processed, distributed, and analyzed using a global grid of 300 computing and storage facilities at laboratories and universities around the world, and the data volume is expected to rise a thousand-fold as physicists crank up the collision rates and energies at the LHC.
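A similar rough calculation, assuming about 25 GB per single-layer Blu-ray disk (a figure not given in the article), recovers the comparison of more than four million disks:

```python
# Rough check of the "more than four million Blu-ray disks" comparison.
PETABYTES_PROCESSED = 100
GB_PER_BLU_RAY = 25             # assumed single-layer disc capacity; not from the article

gigabytes_processed = PETABYTES_PROCESSED * 1_000_000   # 1 PB = 1,000,000 GB (decimal)
blu_ray_equivalent = gigabytes_processed / GB_PER_BLU_RAY
print(f"{blu_ray_equivalent:,.0f} Blu-ray disks")       # 4,000,000
```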

“Enabling scientists anywhere in the world to work on the LHC data is a key objective, bringing the best minds together to work on the mysteries of the universe,” says David Foster, the deputy IT department head at CERN.

“The 100-Gbps demonstration at SC11 is pushing the limits of network technology by showing that it is possible to transfer petascale particle physics data in a matter of hours to anywhere around the world,” adds Randall Sobie, a research scientist at the Institute of Particle Physics in Canada and team member.

The key to discovery, the researchers say, lies in picking out the rare signals that may indicate new physics from a sea of potentially overwhelming background noise caused by already-understood particle interactions. To do this, individual physicists and small groups located around the world must repeatedly access, and sometimes extract and transport, multiterabyte data sets on demand from petabyte data stores. That’s equivalent to grabbing hundreds of Blu-ray movies at once from a pool of hundreds of thousands. The HEP team hopes that the demonstrations at SC11 will pave the way toward more effective distribution and use of the masses of LHC data for future discoveries.

“By sharing our methods and tools with scientists in many fields, we hope that the research community will be well positioned to further enable their discoveries, taking full advantage of 100 Gbps networks as they become available,” Newman says. “In particular, we hope that these developments will afford physicists and young students the opportunity to participate directly in the LHC’s next round of discoveries as they emerge.”

More information about the demonstration can be found at http://supercomputing.caltech.edu.

This work was supported by the U.S. Department of Energy Office of Science and the National Science Foundation, in cooperation with the funding agencies of the international partners. Equipment and support were also provided by the team’s industry partners: CIENA, Brocade, Mellanox, Dell and Force10 (now Dell/Force10), and Supermicro.

Contact:
Harvey B. Newman, Professor of Physics
[email protected]
(626) 395-6656

Sonia Chernobieff
[email protected]