Internet Speed Quadrupled by International Team

From Caltech:

Internet Speed Quadrupled by International Team During 2004 Bandwidth Challenge

For the second consecutive year, the “High Energy Physics” team of physicists, computer scientists, and network engineers has won the Supercomputing Bandwidth Challenge with a sustained data transfer of 101 gigabits per second (Gbps) between Pittsburgh and Los Angeles. This is more than four times faster than last year’s record of 23.2 Gbps, which was set by the same team.

The team hopes this new demonstration will encourage scientists and engineers in many sectors of society to develop and deploy a new generation of revolutionary Internet applications.

The international team is led by the California Institute of Technology and includes as partners the Stanford Linear Accelerator Center (SLAC), Fermilab, CERN, the University of Florida, the University of Manchester, University College London (UCL) and the organization UKLight, Rio de Janeiro State University (UERJ), the state universities of São Paulo (USP and UNESP), Kyungpook National University, and the Korea Institute of Science and Technology Information (KISTI). The group’s “High-Speed TeraByte Transfers for Physics” record data transfer speed is equivalent to downloading three full DVD movies per second, or transmitting all of the content of the Library of Congress in 15 minutes, and it corresponds to approximately 5% of the rate at which all forms of digital content were produced on Earth during the test.
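
As a rough sanity check of those comparisons, the arithmetic below assumes a 4.7 GB single-layer DVD and treats the Library of Congress as roughly 10 terabytes, a commonly cited estimate that is not stated in the release itself.

```python
# Rough sanity check of the comparisons above. The 4.7 GB single-layer DVD
# capacity is standard; the ~10 TB Library of Congress figure is a commonly
# cited estimate and an assumption, not a number given in the release.
rate_gbps = 101
rate_bytes_per_s = rate_gbps * 1e9 / 8          # ~12.6 GB/s

dvd_bytes = 4.7e9
print(f"DVDs per second:    {rate_bytes_per_s / dvd_bytes:.1f}")          # ~2.7

loc_bytes = 10e12
print(f"Minutes for ~10 TB: {loc_bytes / rate_bytes_per_s / 60:.1f}")     # ~13
```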

The new mark, according to Bandwidth Challenge (BWC) sponsor Wesley Kaplow, vice president of engineering and operations for Qwest Government Services, exceeded the sum of all the throughput marks submitted by other BWC entrants in the present and previous years. The extraordinary bandwidth achieved was made possible in part through the use of the FAST TCP protocol developed by Professor Steven Low and his Caltech Netlab team. It was achieved through the use of seven 10 Gbps links to Cisco 7600 and 6500 series switch-routers provided by Cisco Systems at the Caltech Center for Advanced Computing (CACR) booth, and three 10 Gbps links to the SLAC/Fermilab booth. The external network connections included four dedicated wavelengths of National LambdaRail between the SC2004 show floor in Pittsburgh and Los Angeles (two waves), Chicago, and Jacksonville, as well as three 10 Gbps connections across the SCinet network infrastructure at SC2004 with Qwest-provided wavelengths to the Internet2 Abilene Network (two 10 Gbps links), the TeraGrid (three 10 Gbps links), and ESnet. 10 Gigabit Ethernet (10 GbE) interfaces provided by S2io were used on servers running FAST at the Caltech/CACR booth, and interfaces from Chelsio equipped with TCP offload engines (TOE) running standard TCP were used at the SLAC/FNAL booth. During the test, the network links over both the Abilene and National LambdaRail networks were shown to operate successfully at up to 99 percent of full capacity.
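
For readers curious about how FAST TCP keeps such links full, the sketch below shows the published window-update rule from Low's group, in which the congestion window is driven by measured queueing delay rather than by packet loss. The parameter values gamma and alpha are illustrative, not the settings used in the demonstration.

```python
# A minimal sketch of the published FAST TCP window-update rule (Jin, Wei,
# and Low), the delay-based protocol used on the FAST servers at the
# Caltech/CACR booth. The parameter values gamma and alpha are illustrative.

def fast_tcp_update(cwnd, base_rtt, avg_rtt, alpha=200, gamma=0.5):
    """Return the next congestion window, in packets.

    cwnd     -- current congestion window
    base_rtt -- minimum round-trip time seen on the path (propagation delay)
    avg_rtt  -- current averaged round-trip time (includes queueing delay)
    alpha    -- target number of packets kept queued in the network
    gamma    -- smoothing factor in (0, 1]
    """
    target = (base_rtt / avg_rtt) * cwnd + alpha
    return min(2 * cwnd, (1 - gamma) * cwnd + gamma * target)

# With no queueing delay (avg_rtt == base_rtt) the window grows by roughly
# gamma * alpha packets per update; as queueing delay builds, growth slows
# and the window stabilizes with about alpha packets buffered along the path.
cwnd = 1000.0
for _ in range(5):
    cwnd = fast_tcp_update(cwnd, base_rtt=0.060, avg_rtt=0.062)
    print(round(cwnd))
```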

The Bandwidth Challenge allowed the scientists and engineers involved to preview the globally distributed grid system that is now being developed in the US and Europe in preparation for the next generation of high-energy physics experiments at CERN’s Large Hadron Collider (LHC), scheduled to begin operation in 2007. Physicists at the LHC will search for the Higgs particles thought to be responsible for mass in the universe and for supersymmetry and other fundamentally new phenomena bearing on the nature of matter and spacetime, in an energy range made accessible by the LHC for the first time.

The largest physics collaborations at the LHC, the Compact Muon Solenoid (CMS) and the Toroidal Large Hadron Collider Apparatus (ATLAS), each encompass more than 2000 physicists and engineers from 160 universities and laboratories spread around the globe. In order to fully exploit the potential for scientific discoveries, many petabytes of data will have to be processed, distributed, and analyzed. The key to discovery is the analysis phase, in which individual physicists and small groups repeatedly access, and sometimes extract and transport, terabyte-scale data samples on demand, in order to optimally select the rare “signals” of new physics from potentially overwhelming “backgrounds” of already-understood particle interactions. This data will be drawn from major facilities at CERN in Switzerland, at Fermilab and the Brookhaven lab in the U.S., and at other laboratories and computing centers around the world, where the accumulated stored data will amount to many tens of petabytes in the early years of LHC operation, rising to the exabyte range within the coming decade.
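
To put those terabyte-scale samples in perspective, the illustrative arithmetic below assumes decimal terabytes and a perfectly sustained link, which real transfers rarely achieve.

```python
# Illustrative transfer times for a 1 TB data sample, assuming decimal
# units and a perfectly sustained link (real transfers rarely achieve this).
sample_bits = 1e12 * 8          # one terabyte, in bits

for link_gbps in (1, 10, 101):
    seconds = sample_bits / (link_gbps * 1e9)
    print(f"{link_gbps:>3} Gbps: {seconds / 60:6.1f} minutes per TB")
# 1 Gbps -> ~133 min, 10 Gbps -> ~13 min, 101 Gbps -> ~1.3 min
```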

Future optical networks, incorporating multiple 10 Gbps links, are the foundation of the grid system that will drive the scientific discoveries. A “hybrid” network integrating both traditional switching and routing of packets, and dynamically constructed optical paths to support the largest data flows, is a central part of the near-term future vision that the scientific community has adopted to meet the challenges of data-intensive science in many fields. By demonstrating that many 10 Gbps wavelengths can be used efficiently over continental and transoceanic distances (often in both directions simultaneously), the high-energy physics team showed that this vision of a worldwide dynamic grid supporting many-terabyte and larger data transactions is practical.

While the SC2004 100+ Gbps demonstration required a major effort by the teams involved and their sponsors, in partnership with major research and education network organizations in the United States, Europe, Latin America, and Asia Pacific, it is expected that networking on this scale in support of the largest science projects (such as the LHC) will be commonplace within the next three to five years.

The network has been deployed through exceptional support by Cisco Systems, Hewlett Packard, Newisys, S2io, Chelsio, Sun Microsystems, and Boston Ltd., as well as the staffs of National LambdaRail, Qwest, the Internet2 Abilene Network, the Consortium for Education Network Initiatives in California (CENIC), ESnet, the TeraGrid, the AmericasPATH network (AMPATH), the National Education and Research Network of Brazil (RNP) and the GIGA project, as well as ANSP/FAPESP in Brazil, KAIST in Korea, UKERNA in the UK, and the Starlight international peering point in Chicago. The international connections included the LHCNet OC-192 link between Chicago and CERN at Geneva, the CHEPREO OC-48 link between Abilene (Atlanta), Florida International University in Miami, and São Paulo, as well as an OC-12 link between Rio de Janeiro, Madrid, Géant, and Abilene (New York). The APII-TransPAC links to Korea also were used with good occupancy. The throughputs to and from Latin America and Korea represented a significant step up in scale, which the team members hope will be the beginning of a trend toward the widespread use of 10 Gbps-scale network links on DWDM optical networks interlinking different world regions in support of science by the time the LHC begins operation in 2007. The demonstration and the developments leading up to it were made possible through the strong support of the U.S. Department of Energy and the National Science Foundation, in cooperation with the agencies of the international partners.

As part of the demonstration, a distributed analysis of simulated LHC physics data was done using the Grid-enabled Analysis Environment (GAE), developed at Caltech for the LHC and many other major particle physics experiments as part of the Particle Physics Data Grid, the Grid Physics Network and the International Virtual Data Grid Laboratory (GriPhyN/iVDGL), and Open Science Grid projects. This involved the transfer of data to CERN, Florida, Fermilab, Caltech, UC San Diego, and Brazil for processing by clusters of computers, with the results finally aggregated back at the show floor to create a dynamic visual display of quantities of interest to the physicists. In another part of the demonstration, file servers at the SLAC/FNAL booth and in London and Manchester were used for disk-to-disk transfers between Pittsburgh and England. This gave physicists valuable experience in the use of large, distributed datasets and of the computational resources connected by fast networks, on the scale required at the start of the LHC physics program.

The team used the MonALISA (MONitoring Agents using a Large Integrated Services Architecture) system developed at Caltech to monitor and display the real-time data for all the network links used in the demonstration. MonALISA (http://monalisa.caltech.edu) is a highly scalable set of autonomous, self-describing, agent-based subsystems which are able to collaborate and cooperate in performing a wide range of monitoring tasks for networks and grid systems as well as the scientific applications themselves. Detailed results for the network traffic on all the links used are available at http://boson.cacr.caltech.edu:8888/.
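
As a purely illustrative sketch of the agent-based pattern described above (this is not the actual MonALISA API; the station name, metric names, and collector endpoint are hypothetical), each agent samples its local links and publishes self-describing reports that a central collector can aggregate:

```python
# Purely illustrative: a toy monitoring agent in the spirit of the agent-based
# design described above. It does NOT use the real MonALISA APIs; the station
# name, metric names, and collector endpoint below are hypothetical.
import json
import random
import socket
import time

COLLECTOR = ("monitor.example.org", 9000)   # hypothetical collector endpoint

def sample_link_gbps(link):
    """Stand-in for reading a real interface throughput counter."""
    return round(random.uniform(8.0, 10.0), 2)

def run_agent(station, links, interval=5.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        report = {
            "station": station,
            "time": time.time(),
            "metrics": {link: sample_link_gbps(link) for link in links},
        }
        # Each agent publishes its own self-describing report; a collector
        # aggregates the reports from all agents into a real-time display.
        sock.sendto(json.dumps(report).encode(), COLLECTOR)
        time.sleep(interval)

if __name__ == "__main__":
    run_agent("CACR-booth", ["NLR-wave-1", "NLR-wave-2", "Abilene-1"])
```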

Multi-gigabit-per-second end-to-end network performance will lead to new models for how research and business are performed. Scientists will be empowered to form virtual organizations on a planetary scale, sharing their collective computing and data resources in a flexible way. In particular, this is vital for projects on the frontiers of science and engineering, in “data intensive” fields such as particle physics, astronomy, bioinformatics, global climate modeling, geosciences, fusion, and neutron science.

Harvey Newman, professor of physics at Caltech and head of the team, said, “This is a breakthrough for the development of global networks and grids, as well as inter-regional cooperation in science projects at the high-energy frontier. We demonstrated that multiple links of various bandwidths, up to the 10 gigabit-per-second range, can be used effectively over long distances.

“This is a common theme that will drive many fields of data-intensive science, where the network needs are foreseen to rise from tens of gigabits per second to the terabit-per-second range within the next five to 10 years,” Newman continued. “In a broader sense, this demonstration paves the way for more flexible, efficient sharing of data and collaborative work by scientists in many countries, which could be a key factor enabling the next round of physics discoveries at the high-energy frontier. There are also profound implications for how we could integrate information sharing and on-demand audiovisual collaboration in our daily lives, with a scale and quality previously unimaginable.”

Les Cottrell, assistant director of SLAC’s computer services, said: “The smooth interworking of 10GE interfaces from multiple vendors, the ability to successfully fill 10 gigabit-per-second paths both on local area networks (LANs), cross-country and intercontinentally, the ability to transmit greater than 10 Gbits/second from a single host, and the ability of TCP offload engines (TOE) to reduce CPU utilization, all illustrate the emerging maturity of the 10 Gigabit/second Ethernet market. The current limitations are not in the network but rather in the servers at the ends of the links, and their buses.”
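
A back-of-the-envelope check of the server-side bottleneck Cottrell describes, assuming a 64-bit, 133 MHz PCI-X host bus (a common way to attach 10 GbE adapters in 2004):

```python
# Back-of-the-envelope check of the server-side bottleneck described above,
# assuming a 64-bit, 133 MHz PCI-X host bus (a common way to attach 10 GbE
# adapters in 2004).
bus_width_bits = 64
bus_clock_hz = 133e6
bus_gbps = bus_width_bits * bus_clock_hz / 1e9   # theoretical peak, ~8.5 Gbps
link_gbps = 10.0                                 # one 10 GbE interface

print(f"PCI-X theoretical peak: {bus_gbps:.2f} Gbps")
print(f"10 GbE line rate:       {link_gbps:.2f} Gbps")
print(f"Fraction of bus needed: {link_gbps / bus_gbps:.0%}")
# >100%: even in theory, one such bus cannot feed a single 10 GbE link at
# line rate, before any protocol or memory-copy overhead is counted.
```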

Further technical information about the demonstration may be found at http://ultralight.caltech.edu/sc2004 and http://www-iepm.slac.stanford.edu/monitoring/bulk/sc2004/hiperf.html. A longer version of the release, including information on the participating organizations, may be found at http://ultralight.caltech.edu/sc2004/BandwidthRecord

Contact: Robert Tindol (626) 395-3631 [email protected]

