High Energy Physicists Set New Record for Network Data Transfer

80+ Gbps Sustained Rates for Hours Set a New Standard and Demonstrate that Current and Next Generation Long-Range Networks Can Be Used Efficiently by Small Computing Clusters

RENO, Nevada--Building on six years of record-breaking developments, an international team of physicists, computer scientists, and network engineers set new records for sustained data transfer among storage systems during the SuperComputing 2007 (SC07) conference. The team was led by the California Institute of Technology, the University of Michigan, the NUST Institute of Information Technology (NIIT) in Pakistan, Polytehnica University of Bucharest in Romania, Fermilab, Brookhaven National Laboratory, and CERN, joined by partners from Brazil (Rio de Janeiro State University, UERJ, and two of the State Universities of São Paulo, USP and UNESP) and Korea (Kyungpook National University, KISTI).

Caltech's exhibit at SC07 by the High Energy Physics (HEP) group and the Center for Advanced Computing Research (CACR) demonstrated applications for globally distributed data analysis for the Large Hadron Collider (LHC) at CERN, along with Caltech's collaboration system EVO (Enabling Virtual Organizations; http://evo.caltech.edu), near real-time simulations of earthquakes in the Southern California region, experiences in time-domain astronomy with Google Sky, and recent results in multiphysics multiscale modeling.

The focus of the exhibit was the HEP team's record-breaking demonstration of storage-to-storage data transfer over wide-area networks from a single rack of servers on the exhibit floor. The demonstration, "High Speed Data Distribution for Physics Discoveries at the Large Hadron Collider," achieved a bidirectional peak throughput of 88 gigabits per second (Gbps) and a sustained data flow of more than 80 Gbps for two hours among clusters of servers on the show floor and at Caltech, Michigan, Fermilab, CERN, Brazil, Korea, and locations on the US LHCNet network in Chicago, New York, and Amsterdam.

Following up on the previous record of 17.8 Gbps among storage systems on a single 10-Gbps link at SC06, the team used just four pairs of servers and disk arrays to sustain bidirectional transfers of 18 to 19 Gbps on the "UltraLight" link between Caltech and Reno for more than a day, often touching the theoretical limit of a single link. By sustaining rates of more than 40 Gbps in both directions at times, the team showed that a well-designed and well-configured single rack of servers is now capable of saturating the next generation of wide-area network links, which have a capacity of 40 Gbps in each direction.

The record-setting demonstration was made possible through the use of seven 10-Gbps links to SC07 provided by SCinet, CENIC, National LambdaRail, and Internet2, together with a fully populated Cisco 6500E series switch-router, 10-gigabit Ethernet network interfaces provided by Intel and Myricom, and a Fibre Channel disk array provided by DataDirect Networks equipped with 4-Gbps host bus adapters from QLogic. The server equipment consisted of 36 widely available Supermicro systems with dual quad-core Intel Xeon processors and Western Digital SATA disks.

One of the key advances in this demonstration was Fast Data Transfer (FDT; http://monalisa.cern.ch/FDT), a Java application developed by the Caltech team in close collaboration with the Polytehnica Bucharest team. FDT runs on all major platforms and uses the Java NIO libraries to achieve stable disk reads and writes coordinated with smooth data flow across long-range networks. FDT streams a large set of files across an open TCP socket, so that a large data set composed of thousands of files, as is typical in high-energy physics applications, can be sent or received at full speed without the transfer restarting between files. FDT works with Caltech's MonALISA system to monitor the capability of the storage systems and the network path in real time, and moderates its sending rate accordingly to keep the data flow smooth.
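
As a rough illustration of the pattern described above, streaming many files over one open TCP connection with Java NIO so the transfer never restarts between files, a minimal sketch might look like the following. The host, port, and file names are placeholders, and the real FDT adds managed buffer pools, parallel streams, and MonALISA-driven rate control that this sketch omits.

```java
// Minimal sketch (not FDT itself): stream many files over one open TCP
// connection using Java NIO, so the transfer never restarts between files.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class MultiFileSender {
    public static void main(String[] args) throws IOException {
        // Hypothetical destination and file list, for illustration only.
        InetSocketAddress dest = new InetSocketAddress("receiver.example.org", 9000);
        List<Path> files = List.of(Paths.get("run1.root"), Paths.get("run2.root"));

        try (SocketChannel socket = SocketChannel.open(dest)) {
            for (Path file : files) {
                try (FileChannel in = FileChannel.open(file)) {
                    long size = in.size();
                    long sent = 0;
                    // transferTo() lets the kernel move data from the file to the
                    // socket without copying it through user space; it may send
                    // fewer bytes than requested, so loop until the file is done.
                    while (sent < size) {
                        sent += in.transferTo(sent, size - sent, socket);
                    }
                }
                // The TCP connection stays open between files, so the congestion
                // window is not rebuilt from scratch for each new file.
            }
        }
    }
}
```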

By combining FDT with FAST TCP, developed by Steven Low of Caltech's computer science department, and an optimized Linux kernel provided by Shawn McKee of Michigan, known as the "UltraLight kernel," the team reached an unprecedented throughput level of 10 gigabytes per second with a single rack of servers, limited only by the speed of the disk systems. The team also found that this combination of an advanced application, TCP protocol stack, and kernel, together with real-time monitoring and multiple threads to sustain the data flow, performed extremely well even on network links with significant levels of packet loss.
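
FAST TCP and the UltraLight kernel operate inside the operating system rather than in the application, so their behavior cannot be captured in a short example. What can be illustrated is the bandwidth-delay-product arithmetic that governs how much data must be kept in flight on long, fast paths, and why kernel and socket-buffer tuning matters. The sketch below uses an assumed link rate and round-trip time, not the team's actual figures, to compute that product and request socket buffers of comparable size.

```java
// Back-of-the-envelope sketch: the buffering needed to keep a long, fast path
// full is roughly bandwidth x round-trip time (the bandwidth-delay product).
// The values below are assumptions for illustration, not the team's settings.
import java.io.IOException;
import java.net.StandardSocketOptions;
import java.nio.channels.SocketChannel;

public class BdpTuning {
    public static void main(String[] args) throws IOException {
        double gbps = 10.0;        // assumed link rate
        double rttSeconds = 0.180; // assumed transatlantic round-trip time
        // 10 Gbit/s * 0.18 s = 1.8 Gbit, i.e. about 225 MB "in flight" at once.
        long bdpBytes = (long) (gbps * 1e9 / 8 * rttSeconds);
        System.out.printf("Bandwidth-delay product: %d MB%n", bdpBytes / 1_000_000);

        // Channel is not connected here; shown only to demonstrate the option calls.
        try (SocketChannel socket = SocketChannel.open()) {
            // Request buffers on the order of the BDP; the OS may clamp them to its
            // configured maximums, which is where kernel tuning comes in.
            socket.setOption(StandardSocketOptions.SO_SNDBUF, (int) bdpBytes);
            socket.setOption(StandardSocketOptions.SO_RCVBUF, (int) bdpBytes);
        }
    }
}
```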

The 10-Gbps network connections used included a dedicated link via CENIC to Caltech; two National LambdaRail (NLR) FrameNet links and two NLR PacketNet links to Los Angeles and Seattle; Pacific Wave; and the Internet2 network to Chicago. Onward links included multiple links to Fermilab provided by ESnet; two links between StarLight in Chicago and Michigan provided by MiLR; US LHCNet (co-managed by Caltech and CERN) across the Atlantic; the GLORIAD link to Korea; and Florida LambdaRail and the CHEPREO/WHREN-LILA link.

Overall, this year's demonstration, following the team's record memory-to-memory transfer rate of 151 Gbps at SuperComputing 2005 and its storage-to-storage record on a single link at SuperComputing 2006, represents a major milestone in providing practical, widely deployable applications capable of massive data transfers. The applications at SC07 exploited advances in state-of-the-art TCP-based data transport, servers (Intel Woodcrest-based systems), storage systems, and the Linux kernel over the last 24 months. FDT also represents a clear advance in basic data-transport capability over wide-area networks compared to 2005: 20 Gbps could be sustained very stably for many hours in a few memory-to-memory streams over long distances, using a single 10-gigabit Ethernet link very close to full capacity in both directions.

The two largest physics collaborations at the LHC, CMS and ATLAS, each encompassing more than 2,000 physicists and engineers from 170 universities and laboratories, are about to embark on a new round of exploration at the frontier of high energies when the LHC accelerator and their experiments begin operation next summer, breaking new ground in our understanding of the nature of matter and space-time and searching for new particles. In order to fully exploit the potential for scientific discoveries, the many petabytes of data produced by the experiments will be processed, distributed, and analyzed using a global Grid of 130 computing and storage facilities located at laboratories and universities around the world.

The key to discovery is the analysis phase, where individual physicists and small groups repeatedly access, and sometimes extract and transport, multiterabyte data sets on demand, in order to optimally select the rare "signals" of new physics from potentially overwhelming "backgrounds" from already-understood particle interactions. This data will amount to many tens of petabytes in the early years of LHC operation, rising to the exabyte range within the coming decade.

Professor Harvey Newman of Caltech, head of the HEP team and US CMS Collaboration board chair, who originated the LHC Data Grid Hierarchy concept, said, "This demonstration was a major milestone showing the robustness and production-readiness of a new class of data-transport applications, with which each of the LHC Tier-1 and Tier-2 computing clusters could acquire or distribute data using the full capacity of the current generation of 10-Gbps network links, as well as the next generation of 40-Gbps links. We also demonstrated the real-time analysis of some of the data using 'ROOTlets,' a distributed form of the ROOT system (http://root.cern.ch) that is an essential element of high-energy physicists' arsenal of tools for large-scale data analysis.

"These demonstrations provided a new, more agile and flexible view of the globally distributed LHC Grid system that spans the U.S., Europe, Asia, and Latin America, along with several hundred computing clusters serving individual groups of physicists. By substantially reducing the difficulty of transporting terabyte-and-larger-scale data sets among the sites, we are enabling physicists throughout the world to have a much greater role in the next round of physics discoveries expected soon after the LHC starts."

David Foster, head of Communications and Networking at CERN, said, "The efficient use of high-speed networks to transfer large data sets is an essential component of CERN's LHC Computing Grid (LCG) infrastructure that will enable the LHC experiments to carry out their scientific missions."

Iosif Legrand, senior software and distributed system engineer at Caltech, the technical coordinator of the MonALISA and FDT projects, said, "We demonstrated a realistic, worldwide deployment of distributed, data-intensive applications capable of effectively using and coordinating high-performance networks. A distributed agent-based system was used to dynamically discover network and storage resources, and to monitor, control, and orchestrate efficient data transfers among hundreds of computers."

Richard Cavanaugh of the University of Florida, technical coordinator of the UltraLight project that is developing the next generation of network-integrated grids aimed at LHC data analysis, said, "By demonstrating that many 10-Gbps wavelengths can be used efficiently over continental and transoceanic distances (often in both directions simultaneously), the high-energy physics team showed that this vision of a worldwide dynamic Grid supporting many terabyte-and-larger data transactions is practical."

Shawn McKee, associate research scientist in the University of Michigan department of physics and leader of the UltraLight network technical group, said, "This achievement is an impressive example of what a focused network and storage system effort can accomplish. It is an important step towards the goal of delivering a highly capable end-to-end network-aware system and architecture that meet the needs of next-generation e-Science."

Paul Sheldon of Vanderbilt University, who leads the NSF-funded Research and Education Data Depot Network (REDDnet) project that is deploying a distributed storage infrastructure, commented on the innovative network storage technology that helped the group achieve such high performance in wide-area, disk-to-disk transfers. "When you combine this network-storage technology, including its cost profile, with the remarkable tools that Harvey Newman's networking team has produced, I think we are well positioned to address the incredible infrastructure demands that the LHC experiments are going to make on our community worldwide."

The team hopes this new demonstration will encourage scientists and engineers in many sectors of society to develop and plan to deploy a new generation of revolutionary Internet applications. Multigigabit/s end-to-end network performance will empower scientists to form "virtual organizations" on a planetary scale, sharing their collective computing and data resources in a flexible way. In particular, this is vital for projects on the frontiers of science and engineering, in data-intensive fields such as particle physics, astronomy, bioinformatics, global climate modeling, geosciences, fusion, and neutron science.

The new bandwidth record was achieved through extensive use of the SCinet network infrastructure at SC07. The team used all 10 of the 10-Gbps links coming into the show floor, connected to two Cisco Systems Catalyst 6500 Series switches at the Caltech booth, together with 10-gigabit Ethernet server interfaces provided by Intel and Myricom.

As part of the SC07 demonstration, a distributed analysis of simulated LHC physics data was carried out using the Grid-enabled Analysis Environment (GAE) developed at Caltech for the LHC. The demonstration used the Clarens Web Services portal developed at Caltech, ROOT-based analysis software, and numerous architectural components developed within the GAE framework. The analysis made use of a new component in the Grid system: "ROOTlets" hosted by Clarens servers. Each ROOTlet is a full instantiation of CERN's ROOT tool, created on demand by the distributed clients in the Grid. The design and deployment of the ROOTlets/Clarens system was carried out under the auspices of an STTR grant for collaboration between Deep Web Technologies (www.deepwebtech.com) of New Mexico, Caltech, and Indiana University.

The team used Caltech's MonALISA (MONitoring Agents using a Large Integrated Services Architecture; http://monalisa.caltech.edu) system to monitor and display the real-time data for all the network links used in the demonstration. MonALISA is a Dynamic, Distributed Service System that is capable of collecting any type of information from different systems, analyzing it in near real time, and providing support for automated control decisions and global optimization of workflows in complex grid systems. It is currently used to monitor 340 sites, more than 50,000 computing nodes, and tens of thousands of concurrent jobs running on different grid systems and for different scientific communities.
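
MonALISA's agent framework is far more general than any short example can convey, but the underlying idea, periodically sampling counters and publishing rates that a transfer application can react to, can be sketched as follows. The class and method names here are invented for illustration and are not part of the MonALISA API.

```java
// Illustrative sketch only, not the MonALISA API: a tiny agent that samples a
// byte counter once per second and reports the observed throughput, the kind
// of measurement a monitoring service can feed back to a transfer application
// so it can moderate its sending rate.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class ThroughputMonitor {
    private final AtomicLong bytesTransferred = new AtomicLong();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private long lastSample = 0;

    /** Called by the transfer code each time a chunk of data is sent. */
    public void recordBytes(long n) {
        bytesTransferred.addAndGet(n);
    }

    /** Sample the counter every second and print the rate in Gbps. */
    public void start() {
        scheduler.scheduleAtFixedRate(() -> {
            long now = bytesTransferred.get();
            double gbps = (now - lastSample) * 8 / 1e9;
            lastSample = now;
            System.out.printf("throughput: %.2f Gbps%n", gbps);
        }, 1, 1, TimeUnit.SECONDS);
    }

    public void stop() {
        scheduler.shutdown();
    }

    public static void main(String[] args) throws InterruptedException {
        ThroughputMonitor monitor = new ThroughputMonitor();
        monitor.start();
        // Simulate a sender reporting roughly 1 GB of transferred data per second.
        for (int i = 0; i < 5; i++) {
            monitor.recordBytes(1_000_000_000L);
            Thread.sleep(1000);
        }
        monitor.stop();
    }
}
```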

MonALISA is a highly scalable set of autonomous, self-describing, agent-based subsystems that are able to collaborate and cooperate in performing a wide range of monitoring tasks for networks and Grid systems, as well as for the scientific applications themselves.

Vanderbilt demonstrated the capabilities of its L-Store middleware, a scalable, open-source, wide-area-capable form of storage virtualization that builds on the Internet Backplane Protocol (IBP) and logistical networking technology developed at the University of Tennessee. Offering both scalable metadata management and software-based fault tolerance, L-Store creates an integrated system that provides a single file-system image across many IBP storage "depots" distributed across wide-area and/or local-area networks. Reading and writing between sets of depots at the Vanderbilt booth at SC06 and Caltech in California, the team achieved a network throughput, disk to disk, of more than one GByte/sec. On the show floor, the team was able to sustain throughputs of 3.5 GByte/sec between a rack of client computers and a rack of storage depots. These two racks communicated across SCinet via four 10-GigE connections.

Further information about the demonstration may be found at: http://supercomputing.caltech.edu

About Caltech: With an outstanding faculty, including five Nobel laureates, and such off-campus facilities as the Jet Propulsion Laboratory, Palomar Observatory, and the W. M. Keck Observatory, the California Institute of Technology is one of the world's major research centers. The Institute also conducts instruction in science and engineering for a student body of approximately 900 undergraduates and 1,200 graduate students who maintain a high level of scholarship and intellectual achievement. Caltech's 124-acre campus is situated in Pasadena, California, a city of 135,000 at the foot of the San Gabriel Mountains, approximately 30 miles inland from the Pacific Ocean and 10 miles northeast of the Los Angeles Civic Center. Caltech is an independent, privately supported university, and is not affiliated with either the University of California system or the California State Polytechnic universities. http://www.caltech.edu

About CACR: Caltech's Center for Advanced Computing Research (CACR) performs research and development on leading-edge networking and computing systems, and methods for computational science and engineering. Some current efforts at CACR include the National Virtual Observatory, ASC Center for Simulation of Dynamic Response of Materials, Computational Infrastructure for Geophysics, Cascade High Productivity Computing System, and the TeraGrid. http://www.cacr.caltech.edu/

About CERN: CERN, the European Organization for Nuclear Research, has its headquarters in Geneva. At present, its member states are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland, and the United Kingdom. Israel, Japan, the Russian Federation, the United States of America, Turkey, the European Commission, and UNESCO have observer status. For more information, see http://www.cern.ch.

About Netlab: Caltech's Networking Laboratory, led by Professor Steven Low, develops FAST TCP. The group does research in the control and optimization of protocols and networks, and designs, analyzes, implements, and experiments with new algorithms and systems. http://netlab.caltech.edu

About NIIT: NIIT (NUST Institute of Information Technology), established in 1999 as the IT wing of the National University of Sciences and Technology, today ranks among the premier engineering institutions in Pakistan. It possesses state-of-the-art equipment and prides itself on its faculty, a team of highly capable and dedicated professionals. NIIT is also known internationally for its collaborative research links with CERN in Geneva; SLAC at Stanford and Caltech in the U.S.A.; and EPFL in Switzerland, among others. NIIT continually seeks to stay at the forefront as a center of academic and research excellence. Efforts are under way to establish collaborations with other Pakistani universities willing to participate in joint research projects. NIIT also works closely with local industrial entrepreneurs in IT, electronics, and communication engineering through its NIIT Corporate Advisory councils. http://www.niit.edu.pk

About the University of Michigan: The University of Michigan, with its size, complexity, and academic strength, the breadth of its scholarly resources, and the quality of its faculty and students, is one of America's great public universities and one of the world's premier research institutions. The university was founded in 1817 and has a total enrollment of 54,300 on all campuses. The main campus is in Ann Arbor, Michigan, and has 39,533 students (fall 2004). With over 600 degree programs and $739M in FY05 research funding, the university is one of the leaders in innovation and research. For more information, see http://www.umich.edu.

About Fermilab: Fermi National Accelerator Laboratory (Fermilab) is a national laboratory funded by the Office of Science of the U.S. Department of Energy and operated by Fermi Research Alliance, LLC. Experiments at Fermilab's Tevatron, the world's highest-energy particle accelerator, generate petabytes of data per year and involve large international collaborations that require high-volume data movement to their home institutions. Fermilab is also the Western Hemisphere Tier-1 data host for the upcoming CMS experiment at the LHC. The laboratory actively works to remain on the leading edge of advanced wide-area network technology in support of its science collaborations.

About UERJ (Rio de Janeiro): Founded in 1950, the Rio de Janeiro State University (UERJ; http://www.uerj.br) ranks among the ten largest universities in Brazil, with more than 23,000 students. UERJ's five campuses are home to 22 libraries, 412 classrooms, 50 lecture halls and auditoriums, and 205 laboratories. UERJ is responsible for important public welfare and health projects through its centers of medical excellence, the Pedro Ernesto University Hospital (HUPE) and the Piquet Carneiro Day-care Policlinic Centre, and it is committed to the preservation of the environment. The UERJ High Energy Physics group includes 15 faculty, postdoctoral, and visiting PhD physicists and 12 PhD and master's students, working on experiments at Fermilab (D0) and CERN (CMS). The group has constructed a Tier2 center to enable it to take part in the Grid-based data analysis planned for the LHC, and has originated the concept of a Brazilian "HEP Grid," working in cooperation with USP and several other universities in Rio and São Paulo.

About UNESP (São Paulo): Created in 1976 with the administrative union of several isolated institutes of higher education in the State of São Paulo, the São Paulo State University, UNESP, has 39 institutes in 23 different cities in the State of São Paulo. The university has 33,500 undergraduate students in 168 different courses and almost 13,000 graduate students. Since 1999 the university has had a group participating in the DZero Collaboration at Fermilab; this group operates the São Paulo Regional Analysis Center (SPRACE) and is now a member of the CMS Collaboration at CERN. See http://www.unesp.br.

About USP (São Paulo): The University of São Paulo, USP, is the largest institution of higher education and research in Brazil, and the third largest in Latin America. Most of the university's 35 units are located on its campus in the state capital. It has around 40,000 undergraduate students and around 25,000 graduate students. It is responsible for almost 25 percent of all Brazilian papers and publications indexed by the Institute for Scientific Information (ISI). The SPRACE cluster is located at the Physics Institute. See http://www.usp.br.

About Kyungpook National University (Daegu): Kyungpook National University is one of the leading universities in Korea, especially in physics and information science. The university has 13 colleges and 9 graduate schools with 24,000 students. It houses the Center for High Energy Physics (CHEP), in which most Korean high-energy physicists participate. CHEP (chep.knu.ac.kr) was approved as one of the designated Excellent Research Centers supported by the Korean Ministry of Science.

About GLORIAD: GLORIAD (GLObal RIng network for Advanced application development) is the first round-the-world high-performance ring network, jointly established by Korea, the United States, Russia, China, Canada, the Netherlands, and the Nordic countries, with optical networking tools that improve networked collaboration for e-Science and Grid applications. It is currently constructing a dedicated lightwave link connecting the scientific organizations of its partner countries. See http://www.gloriad.org/.

About CHEPREO: Florida International University (FIU), in collaboration with partners at Florida State University, the University of Florida, and the California Institute of Technology, has been awarded an NSF grant to create and operate an interregional Grid-enabled Center for High-Energy Physics Research and Educational Outreach (CHEPREO; www.chepreo.org) at FIU. CHEPREO encompasses an integrated program of collaborative physics research on CMS, network infrastructure development, and educational outreach at one of the largest minority-serving universities in the U.S. The center is funded by four NSF directorates: Mathematical and Physical Sciences; Scientific Computing Infrastructure; Elementary, Secondary and Informal Education; and International Programs.

About Internet2®: Led by more than 200 U.S. universities working with industry and government, Internet2 develops and deploys advanced network applications and technologies for research and higher education, accelerating the creation of tomorrow's Internet. Internet2 recreates the partnerships among academia, industry, and government that helped foster today's Internet in its infancy. For more information, visit: www.internet2.edu.

About National LambdaRail: National LambdaRail (NLR) is a major initiative of U.S. research universities and private-sector technology companies to provide a national-scale infrastructure for research and experimentation in networking technologies and applications. NLR puts the control, the power, and the promise of experimental network infrastructure in the hands of the nation's scientists and researchers. Visit http://www.nlr.net for more information.

About StarLight: StarLight is an advanced optical infrastructure and proving ground for network services optimized for high-performance applications. Operational since summer 2001, StarLight is a 1 GE and 10 GE switch/router facility for high-performance access to participating networks and also offers true optical switching for wavelengths. StarLight is being developed by the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC), the International Center for Advanced Internet Research (iCAIR) at Northwestern University, and the Mathematics and Computer Science Division at Argonne National Laboratory, in partnership with Canada's CANARIE and the Netherlands' SURFnet. STAR TAP and StarLight are made possible by major funding from the U.S. National Science Foundation to UIC. StarLight is a service mark of the Board of Trustees of the University of Illinois. See www.startap.net/starlight.

About the Florida LambdaRail: Florida LambdaRail LLC (FLR) is a Florida limited liability company formed by member higher education institutions to advance optical research and education networking within Florida. Florida LambdaRail is a high-bandwidth optical network that links Florida's research institutions and provides a next-generation network in support of large-scale research, education outreach, public/private partnerships, and information technology infrastructure essential to Florida's economic development. For more information: http://www.flrnet.org.

About CENIC: CENIC (www.cenic.org) is a not-for-profit corporation serving the California Institute of Technology, California State University, Stanford University, University of California, University of Southern California, California Community Colleges, and the statewide K-12 school system. CENIC's mission is to facilitate and coordinate the development, deployment, and operation of a set of robust multitiered advanced network services for this research and education community.

About ESnet: The Energy Sciences Network (ESnet; www.es.net) is a high-speed network serving thousands of Department of Energy scientists and collaborators worldwide. A pioneer in providing high-bandwidth, reliable connections, ESnet enables researchers at national laboratories, universities, and other institutions to communicate with each other using the collaborative capabilities needed to address some of the world's most important scientific challenges. Managed and operated by the ESnet staff at Lawrence Berkeley National Laboratory, ESnet provides direct high-bandwidth connections to all major DOE sites, multiple cross connections with Internet2/Abilene, and connections to Europe via GEANT and to Japan via SuperSINET, as well as fast interconnections to more than 100 other networks. Funded principally by DOE's Office of Science, ESnet services allow scientists to make effective use of unique DOE research facilities and computing resources, independent of time and geographic location.

About AMPATH: Florida International University's Center for Internet Augmented Research and Assessment (CIARA) has developed an international, high-performance research connection point in Miami, Florida, called AMPATH (AMericasPATH; www.ampath.fiu.edu). AMPATH's goal is to enable wide-bandwidth digital communications between U.S. and international research and education networks, as well as a variety of U.S. research programs in the region. AMPATH in Miami acts as a major international exchange point (IXP) for the research and education networks in South America, Central America, Mexico, and the Caribbean. The AMPATH IXP is home for the WHREN-LILA high-performance network link connecting Latin America to the U.S., funded by the NSF and the Academic Network of São Paulo.

About the Academic Network of São Paulo (ANSP): ANSP unites São Paulo's university networks with scientific and technological research centers in São Paulo, and is managed by the State of São Paulo Research Foundation (FAPESP). The ANSP Network is another example of international collaboration and exploration. Through its connection to WHREN-LILA, all of the institutions connected to ANSP can take part in research with U.S. universities and research centers, contributing to and helping develop new applications and services. This connectivity allows researchers to work with better data, in turn improving the quality of new scientific developments. http://www.ansp.br

About RNP: RNP, the National Education and Research Network of Brazil, is a not-for-profit company that promotes the innovative use of advanced networking, with the joint support of the Ministry of Science and Technology and the Ministry of Education. In the early 1990s, RNP was responsible for the introduction and adoption of Internet technology in Brazil. Today, RNP operates a nationally deployed multigigabit network used for collaboration and communication in research and education throughout the country, reaching all 26 states and the Federal District, and provides both commodity and advanced research Internet connectivity to more than 300 universities, research centers, and technical schools. http://www.rnp.br

About KISTI: KISTI (Korea Institute of Science and Technology Information) is a national institute under the supervision of MOST (Ministry of Science and Technology) of Korea. It plays a leading role in building the nationwide infrastructure for advanced application research by linking supercomputing resources with the optical research network KREONet2. The National Supercomputing Center at KISTI is carrying out national e-Science and Grid projects as well as the GLORIAD-KR project, and aims to become a leading institution for e-Science and advanced network technologies. See http://www.kisti.re.kr.

About Intel: Intel, the world leader in silicon innovation, develops technologies, products, and initiatives to continually advance how people work and live. Additional information about Intel is available at www.intel.com/pressroom and blogs.intel.com.

About Myricom: Founded in 1994, Myricom Inc. created Myrinet, the High-Performance Computing (HPC) interconnect technology used in many thousands of computing clusters in more than 50 countries. With its Myri-10G solutions, Myricom achieved a convergence at 10-Gigabit data rates between its low-latency Myrinet technology and mainstream Ethernet. Myri-10G bridges the gap between the rigorous demands of traditional HPC and the growing need for affordable computing speed in enterprise data centers. Myricom solutions are sold direct and through channels. Myri-10G clusters are supplied by OEM computer companies and by leading cluster integrators worldwide. Privately held and based in Arcadia, California, Myricom achieved and has sustained profitability since 1995 with 12 consecutive profitable years through 2006.

About Data Direct Networks: DataDirect Networks is the leading provider of scalable storage systems for performance- and capacity-driven applications. DataDirect's S2A (Silicon Storage Appliance) architecture enables modern applications such as video streaming, content delivery, modeling and simulation, backup and archiving, cluster and supercomputing, and real-time collaborative workflows that are driving the explosive demand for storage performance and capacity. DataDirect's S2A technology and solutions solve today's most challenging storage requirements, including providing shared, high-speed access to a common pool of data, minimizing data center footprints and storage costs for massive archives, reducing simulation computational times, and capturing and serving massive amounts of digital content. www.datadirectnet.com

About QLogic: QLogic is a leading supplier of high-performance storage networking solutions, including Fibre Channel host bus adapters (HBAs), blade server embedded Fibre Channel switches, Fibre Channel stackable switches, iSCSI HBAs, iSCSI routers and storage services platforms for enabling advanced storage-management applications. The company is also a leading supplier of server networking products, including InfiniBand host channel adapters that accelerate cluster performance. QLogic products are delivered to small-to-medium businesses and large enterprises around the world via its channel partner community. QLogic products are also powering solutions from leading companies like Cisco, Dell, EMC, Hitachi Data Systems, HP, IBM, NEC, Network Appliance, and Sun Microsystems. QLogic is a member of the S&P 500 Index. For more information go to www.qlogic.com.

About the National Science Foundation: The NSF is an independent federal agency created by Congress in 1950 "to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense...." With an annual budget of about $5.5 billion, it is the funding source for approximately 20 percent of all federally supported basic research conducted by America's colleges and universities. In many fields such as mathematics, computer science, and the social sciences, NSF is the major source of federal backing.

About the DOE Office of Science: DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the nation and ensures U.S. world leadership across a broad range of scientific disciplines. The Office of Science also manages 10 world-class national laboratories with unmatched capabilities for solving complex interdisciplinary problems, and it builds and operates some of the nation's most advanced R&D user facilities, located at national laboratories and universities. These facilities are used by more than 19,000 researchers from universities, other government agencies, and private industry each year.

Acknowledgements: The demonstration and the developments leading up to it were made possible through the strong support of the partner network organizations mentioned, the U.S. Department of Energy Office of Science, and the National Science Foundation, in cooperation with the funding agencies of the international partners, through the following grants: US LHCNet (DOE DE-FG02-05-ER41359), WAN In Lab (NSF EIA-0303620), UltraLight (NSF PHY-0427110), DISUN (NSF PHY-0533280), CHEPREO/WHREN-LILA (NSF PHY-0312038/OCI-0441095 and FAPESP Projeto 04/14414-2), as well as the NSF-funded PLaNetS, FAST TCP, and CAIGEE projects, and the US LHC Research Program funded jointly by DOE and NSF.

Written by Jill Perry

Caltech Media Relations