DOE's goals in the HPCC Program are to enable effective applications of HPCC technologies and the emerging National Information Infrastructure (NII) to scientific problems that are critical to implementing the Energy Policy Act (PL 102-486), other DOE mission programs, and the national interest. DOE's missions encompass such diverse activities as energy technologies, studies of energy supply and usage in the U.S., environmental remediation, fundamental research, investigations in the health and environmental sciences, and national security.
To better understand combustion processes and the effluent gases they produce, hydrogen/air turbulent reacting flame streams are studied. From left to right, false-color renderings of the temperature, OH radical concentration, and NO concentration are shown. Low values are in blue and high values in red. (Temperatures in the left image range from 30 C to 2,800 C, and the visible flame is in the red band.) The research and the object-oriented parallel software are the products of a collaboration among Sandia National Laboratories, Lawrence Berkeley Laboratory, and the University of California at Berkeley.
All five components of the HPCC Program are important to achieving DOE's goals. The DOE program for FY 1994 will continue to build on the foundation of joint interagency, interdisciplinary, and private sector collaborations established during the earlier phases of the HPCC Program.
Given its focus on applying HPCC technologies, DOE must effectively encourage technology transfer and collaborations among researchers in different disciplines and between researchers and their counterparts in U.S. commercial enterprises. To ensure the effectiveness of this approach, the program:
The Office of Scientific Computing has been assigned the lead responsibility for Department-wide participation in the HPCC Program.
A research program has been initiated to investigate the performance of computer systems based on an integrated analysis of algorithms used in Grand Challenge applications. In addition, a researcher at DOE's Ames Laboratory was awarded a patent for developing SLALOM, an innovative benchmarking system for any type of computer, including parallel computers. Two large university projects in parallel systems research, at the University of Illinois and at New York University, were extended. Several projects have been initiated at the Department's new High Performance Computing Research Centers to evaluate the effectiveness of early prototype HPCC systems on scientific problems important to the DOE.
Efforts continue to build on the activities initiated in FY 1992 and FY 1993. The ESnet FY 1992 fast packet services procurement will provide advanced network capabilities and services to DOE sites and researchers, in addition to accelerating the availability of these public telecommunications vendor-based services to other Federal, state, and research and education organizations. ESnet is an integral part of the Internet; it provides seamless multiprotocol connectivity to scientific resources and facilities, supports international collaborative science, and supports DOE-sponsored education activities. Other activities and accomplishments include collaborative work in packet video and voice, collaborative distributed work environment tools, multiprotocol support, GOSIP/OSI to TCP/IP gateways for e-mail and other services, and other precompetitive and short-term research and development efforts.
ESnet provides access to Energy Research facilities and resources for DOE and DOE-supported researchers and educators.
DOE participates in the existing gigabit testbeds and in the development of high-speed local area networks (e.g., HIPPI), tools needed to support remote experimentation and distributed computing, multimedia data manipulation, high-speed network management, and the standards associated with each of these technologies.
Support for Grand Challenges
DOE set up an interagency panel, composed of DOE program managers and other HPCC agency participants and chaired by the NASA HPCC program director, to evaluate proposals for the DOE Grand Challenge computational research projects. The selected projects are cofunded by other DOE programs and by industrial partners and include DOE laboratory, university, industry, and other HPCC agency participation. The six multi-year Grand Challenge projects selected in FY 1992 are:
In work jointly supported by NIH, DOE, and NSF, researchers at New York University, the Sloan-Kettering Cancer Center, and Oak Ridge National Laboratory are elucidating the effects of biochemically activated environmental chemicals, such as benzo[a]pyrene, a substance present in automobile exhaust gases, when bound to DNA. The figures illustrate calculations of two such derivatives that are mirror images of one another, (+) and (-) anti-benzo[a]pyrene-diol-epoxide [BPDE], bound to a segment of normal right-handed DNA. There is a profound difference in the health effects of these two cases: the (+) BPDE case (red figure) is tumorigenic, while the (-) BPDE case is benign.
By the end of FY 1992 DOE supported 14 computational projects. No new computational projects or Grand Challenges were started in FY 1993 due to lack of funds. In FY 1994, the DOE expects to evaluate the existing projects and may initiate new projects to balance the program in response to the goals and objectives of the DOE HPCC Program.
Projects were initiated in: object-oriented database technology (applied to and cofunded with the Superconducting Super Collider), large spatial databases, distributed computing technologies, and software environments.
The PVM (Parallel Virtual Machine) software permits a heterogeneous collection of networked Unix machines (serial, vector, or parallel) to be used as a single large parallel computer. This public-domain software is now used at hundreds of sites worldwide.
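PVM itself exposes a C and Fortran message-passing library (routines such as pvm_spawn and pvm_send); as a hedged illustration of the programming model only, not of PVM's actual interface, the following Python sketch shows a master scattering work to a pool of worker processes and gathering the results, the way a PVM master farms tasks out across the machines of the virtual computer. All names here are illustrative.

```python
# Illustrative sketch of the PVM master/worker model using Python's
# multiprocessing module. This is NOT PVM's C API; it only mirrors the
# idea of treating a collection of processors as one parallel machine
# that exchanges messages.
from multiprocessing import Process, Queue

def worker(task_q, result_q):
    """Each worker plays the role of a PVM task on a remote host."""
    while True:
        item = task_q.get()
        if item is None:           # sentinel: no more work
            break
        result_q.put(item * item)  # compute and send the result back

def run_master(values, n_workers=4):
    task_q, result_q = Queue(), Queue()
    procs = [Process(target=worker, args=(task_q, result_q))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    for v in values:               # scatter work, analogous to pvm_send
        task_q.put(v)
    for _ in procs:                # one sentinel per worker
        task_q.put(None)
    # Gather, analogous to pvm_recv; sort because arrival order varies.
    results = sorted(result_q.get() for _ in values)
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    print(run_master([1, 2, 3, 4, 5]))
```

Unlike this single-host sketch, PVM dispatches tasks across networked Unix machines of different architectures and handles data conversion between them.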
The DOE initiated projects, in cooperation with private sector firms, on computational techniques applied to catalysis, drug design, chemistry, and the materials sciences.
High Performance Computing Research Centers (HPCRCs) and Advanced Computing Resources
Two HPCRCs were created in FY 1992, at Los Alamos National Laboratory (LANL) and at Oak Ridge National Laboratory (ORNL), involving industrial and university partnerships as well as cofunding from other DOE programs. These HPCRCs conduct research in all HPCC Program component areas; operate massively parallel high performance computing prototypes (a Thinking Machines Corp. CM-5 at LANL and an Intel Corp. Paragon at ORNL); perform computational investigations in global climate research, environmental groundwater transport modeling, and materials sciences calculations; and provide HPCS resources for the other computational research projects described above. The HPCRC at LANL is also a partner in an NSF-sponsored Science and Technology Consortium. The Serial #1 Kendall Square system was installed at the ORNL HPCRC, where it will be evaluated on energy applications.
A Cray Research Inc. C-90 supercomputer system was installed in FY 1993 in the National Energy Research Supercomputer Center (NERSC) at Lawrence Livermore National Laboratory to provide high performance computer technology in a full production environment for DOE applications. The C-90 has 16 processors and is being used in a mode that encourages the use of parallel programming techniques.
National Storage Laboratory (NSL)
The NSL was established, in concert with NASA, through a CRADA among six industrial firms and NERSC. This collaboration currently involves several DOE Laboratories, about a dozen U.S. storage vendors, and a university. The NSL will help serve the data storage needs of researchers at the DOE and at other Federal agencies and will help participating vendors develop new mass storage products.
The NSL will advance the state of the art in high performance data storage systems that are capable of storing terabytes of data. Such mass storage is required by applications such as the Grand Challenges and those requiring storage of large amounts of experimental data.
Overview of the architecture of the National Storage Laboratory located at NERSC, in which different networks are used for control of the attached devices and for data transmission. This architecture overcomes the bottleneck in earlier hierarchical storage designs in which all of the data had to pass through the control computer. The technology will first be used in fusion energy and climate modeling applications. The NSL architecture is being extended in a new High Performance Storage System to support scalable parallel I/O and integration with massively parallel computers.
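The separation of control traffic from data traffic described above can be sketched in miniature. In this hedged Python illustration (the class and method names are hypothetical, not the NSL software), a coordinator brokers each transfer over the control path while the data itself moves directly between the attached devices, so the coordinator never becomes a data bottleneck.

```python
# Hypothetical sketch of separating the control path from the data path,
# the key idea behind the NSL architecture. The coordinator exchanges
# only small control messages; the actual bytes move device to device.
class Device:
    def __init__(self, name, blocks=None):
        self.name = name
        self.blocks = dict(blocks or {})

    def send_direct(self, block_id, sink):
        # Data moves over the "data network": source device to sink device.
        sink.blocks[block_id] = self.blocks[block_id]

class Coordinator:
    """Carries only control messages, never the data itself."""
    def __init__(self):
        self.log = []

    def transfer(self, block_id, source, sink):
        self.log.append((block_id, source.name, sink.name))  # control traffic
        source.send_direct(block_id, sink)                   # third-party copy

disk = Device("disk", {"b1": b"climate-run-0"})
tape = Device("tape")
ctl = Coordinator()
ctl.transfer("b1", disk, tape)
```

In the earlier hierarchical designs the `transfer` call would instead have read the block into the control computer and written it out again, doubling the traffic through a single machine.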
DOE brings expertise in information technologies to the task of developing the NII. DOE has developed communication networks and has applied computing technology to a broad range of applications and technical problems addressed in collaborations involving scientists and engineers located across the country as well as around the world. The many DOE CRADAs are evidence of industry's interest in working with the Department. DOE Laboratories have a long history of collaborating with research universities.
The early deployment of the NII is important to the successful implementation of the Energy Policy Act. The NII is critical to the Act's goals of:
In FY 1994 a candidate DOE activity is to begin developing prototype applications that demonstrate national-scale applications and use of the NII in selected energy-related areas. DOE would work closely with corresponding programs of other agencies in these efforts.
DOE supports several educational programs, including the National Education Supercomputer Program (NESP) at NERSC; the Southwest Indian Polytechnic Institute program, in which 30 Native Americans participate; and workshops that train teachers in HPCC technology and its use and prepare them to conduct such workshops themselves. FY 1993 highlights include:
"Superkids," participants in the High-School Science Students Honors Program, with the National Education Supercomputer at NERSC. Each summer this program brings one student from each state, the District of Columbia, Puerto Rico, American Samoa, the DOD Dependents Schools, and eight foreign students to Lawrence Livermore National Laboratory.
The Applied Mathematics program supports a broad range of activities at universities and DOE Laboratories in modeling, analysis, and numerical simulation of physical and biological phenomena that arise in energy and environmental systems. Most of the projects involve applications-driven studies of the mathematical and numerical tools required to understand the behavior of complex discrete and continuous systems, with an emphasis on algorithms for parallel computing. The program has cofunded with NSF the Geometry Science and Technology Center at the University of Minnesota to apply new HPCC technology to traditional mathematics problems, and has initiated new efforts in the complex nonlinear behavior that underlies most natural phenomena and in graph and group theories related to discrete phenomena and topology, for application to genome sequencing and protein structure.
Clockwise from lower left, volume renderings display the density of a bubble being hit by a shock wave. New adaptive gridding and fast, accurate, and robust numerical algorithms are used to simulate the behavior of such highly nonlinear fluid flows in complicated three-dimensional geometries. These techniques are implemented on the latest vector and massively parallel architectures. Applied mathematicians and computational scientists performed this research at Lawrence Livermore National Laboratory, Los Alamos National Laboratory, New York University, and the University of California at Berkeley.
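The adaptive-gridding idea mentioned in the caption above can be shown in its simplest form: concentrate grid cells only where the solution changes rapidly. The following one-dimensional Python sketch uses a simple jump-based refinement criterion; the production codes referred to here are three-dimensional, hierarchical, and conservative, so this is an illustration of the principle, not of those algorithms.

```python
# Minimal 1-D sketch of adaptive gridding: split a cell only where the
# solution varies sharply across it; leave smooth regions coarse.
def refine(cells, f, tol):
    """cells: list of (left, right) intervals. Split any cell whose
    endpoint values of f differ by more than tol."""
    out = []
    for a, b in cells:
        if abs(f(b) - f(a)) > tol:
            mid = 0.5 * (a + b)
            out.extend([(a, mid), (mid, b)])  # refine: halve the cell
        else:
            out.append((a, b))                # keep the coarse cell
    return out

# A steep front near x = 0.5 (like a shock) attracts resolution.
front = lambda x: 0.0 if x < 0.5 else 1.0
grid = [(i / 4, (i + 1) / 4) for i in range(4)]  # uniform coarse grid
for _ in range(3):                               # three refinement passes
    grid = refine(grid, front, 0.5)
```

After three passes the cells straddling the front have been halved repeatedly while the four coarse cells away from it are untouched, which is the efficiency gain adaptive methods deliver in shock and interface problems.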
In computer science the focus is on understanding how parallel and distributed computer systems can be applied more effectively to large-scale scientific problems. Supported projects include research in programming models and tools, improved software libraries for parallel computers, scientific visualization of large data sets, software performance analysis techniques, and message-passing utilities to facilitate distributed computing.
Begin evaluating one or two additional early prototype systems in cooperation with vendors.
Expand computer systems performance analysis project, including top-down algorithmic analysis of Grand Challenge codes at HPCRCs.
Continue development of high performance hierarchical data storage system at the NSL in NERSC.
Begin prototype project in telecommuting access to DOE experimental facilities.
Complete ESnet upgrades to 45 Mb/s at 20 sites.
Initiate ESnet upgrades to 144-622 Mb/s at selected sites.
Continue gigabit research projects, emphasizing distributed applications.
Implement production-quality packetized workstation-based video over ESnet and other Internet components.
Review, evaluate, and adjust program of Grand Challenge and computational research projects begun in 1992.
Implement fully three-dimensional algorithms based on adaptive mesh refinement techniques for computational fluid dynamics.
Deliver prototypes of discipline-oriented computing environments for Grand Challenge applications.
Conduct Grand Challenge workshop together with other HPCC participants, emphasizing progress in software tools and computing environments.
Develop policies, standards, and software engineering methodologies to ease the transition of research tools into commercial products.
Develop software to integrate the high performance hierarchical data storage system at the NSL into advanced computing environments.
Provide detailed feedback to computer vendors on experience gained with prototype computers installed at HPCRCs.
A single frame from an eight-month simulation for the Pacific Northwest using the Penn State/NCAR Mesoscale Meteorological (MM) model. The simulation was executed on the Intel Touchstone Delta at Caltech. MM model results are used in high resolution regional forecasts, allowing more efficient energy use and helping reduce storm damage. Argonne National Laboratory (ANL) and NSF-supported NCAR researchers used fast fine-grained massively parallel software and PCN, a parallel programming language developed at ANL and Caltech with support from the NSF Science and Technology Center for Parallel Computation.
Clockwise from the top, a slice of a three-dimensional model of an 8 kilometer/second impact on a planet the size of the Earth by a planet the size of Mars, at 0, 60, 120, and 200 seconds. The gravity of the larger planet later traps mantle material and forms a moon. This simulation, an example of materials modeling under extreme conditions, uses multidimensional shock physics software developed in a collaboration among several DOE Laboratories and others under DOE and ARPA sponsorship. The same software is used to model other high velocity impact phenomena, including explosives, resulting in fewer experiments, reduced cost, and increased safety.
Publish white paper reviewing applicability of NII to telecommuting and defining the enabling technologies required for and the barriers to widespread implementation of telecommuting and other forms of telework.
Evaluate scope of DOE generated information for inclusion in NII digital libraries.
Conduct cooperative evaluation of potential for energy supply and demand management using the NII.
Begin collaborative development of NII technologies for applications testbeds.
Refine and make widely available the graduate-level computational science electronic textbook.
Initiate college-level computational science electronic textbook.
Review and evaluate ongoing high school education programs and associated teacher workshops.
Initiate junior high school educational projects.
Rebalance applied math research program based on comprehensive review conducted in 1993.