HPCC Program software development efforts were originally planned to address the Grand Challenges associated with agency missions. As the Program has matured, these efforts have been expanded to support the needs of industry and improve U.S. competitiveness.
The range of applications software being developed under the Program will assure that high performance computing systems can be broadly useful to the American economy. Now these systems must be made easier to use. Experienced software developers and applications researchers, many at or connected to HPCRCs, are working to meet these needs.
It took a decade to develop a collection of efficient and robust software for vector machines, and it is widely believed that it will take at least that long for parallel systems. Performance on these systems is improving dramatically, thanks to new algorithms, maturing systems software, and a growing base of experienced users, but continued software work is needed to realize their full potential. The user community is growing so fast that demand for computer time on these systems exceeds supply.
4.1. Systems Software and Software Tools
In the model high performance computing environment, users interact with high performance systems from their workstations. This approach makes it possible for users to move almost transparently to higher performance machines as problem size grows and the software on those machines matures.
This environment is fundamentally different from, and more complicated than, traditional computing environments. Much of the systems software and many of the software tools for parallel computing have been redesigned and rewritten to realize the theoretical benefits of parallelism and to enhance user productivity:
Evolving conventions and standards enable developers to transport software to different architectures and make it look the same to users. High Performance Fortran (HPF) is an example. Coordinated by the Center for Research on Parallel Computation at Rice University, the HPF Forum is a coalition of government, the high performance computer industry, and academic groups that is developing standard extensions to Fortran for vector processors and massively parallel SIMD (Single Instruction Multiple Data) and MIMD (Multiple Instruction Multiple Data) systems.
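HPF itself extends Fortran, but the data-parallel style it standardizes can be sketched in any array language. The sketch below is a minimal, hypothetical illustration in Python with NumPy: a whole-array expression replaces an explicit element loop, leaving a compiler or runtime free to spread the work across processors, which is what HPF expresses in Fortran through array syntax and directives such as !HPF$ DISTRIBUTE.

    import numpy as np

    # Explicit element-by-element loop: the iteration order is fixed, giving a
    # compiler or runtime little freedom to distribute the work.
    def saxpy_loop(a, x, y):
        z = np.empty_like(y)
        for i in range(len(y)):
            z[i] = a * x[i] + y[i]
        return z

    # Whole-array (data-parallel) form: one statement applies the same operation
    # to every element, so elements can be updated in any order or in parallel.
    def saxpy_array(a, x, y):
        return a * x + y

    x = np.arange(1_000_000, dtype=np.float64)
    y = np.ones_like(x)
    assert np.allclose(saxpy_loop(2.0, x, y), saxpy_array(2.0, x, y))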
The software development process is evolutionary -- tool developers debug codes and make them more efficient while users and their applications place new demands on these tools, resulting in further improvements, refinements, speed-ups, and increased user-friendliness. Accelerating this cycle is one objective of the HPCC Program.
4.2. Scientific Computation Techniques
The development and analysis of fundamental algorithms for the computational models represented in the Grand Challenges are as critical to realizing the peak performance of scalable systems as are improvements in hardware. Such research includes studies of algorithms applicable to a wide range of parallel machines as well as those that take advantage of the strengths of specific architectures.
These algorithms address both numerical computation (where arithmetic calculations predominate) and non-numeric computation (where finding and moving data dominate). Widely used numerical computations include multidimensional fast Fourier transforms, fast elliptic and Riemann solvers for partial differential equations, and numerical linear algebra. The latter includes manipulating vectors and matrices, solving systems of linear equations, and computing eigenvalues and eigenvectors. Efficient algorithms attuned to specialized matrix structure (for example, dense or sparse) are especially sought. Numerical linear algebra was long assumed to be well understood, having been the subject of substantial research for vector processors. Somewhat surprisingly, algorithmic breakthroughs made as these codes were ported to parallel systems have also improved performance on vector processors.
These computations are common to so many applications that they are developed by experts to attain maximal efficiency, and their implementations are included in general-purpose reusable software libraries. When these libraries are updated with the more efficient software, users immediately observe faster execution times for their applications. Several HPCC agencies are building such libraries and making them widely available.
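As a rough illustration of why library routines attuned to matrix structure matter, the sketch below (Python with NumPy and SciPy, chosen here only for brevity; the production libraries of the era were written in Fortran and C) solves the same tridiagonal linear system twice, once with a general dense routine and once with a sparse solver that exploits the structure. The application code is a single library call in either case, so when the library is upgraded with a better algorithm the application simply runs faster.

    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    n = 2000
    # A standard 1-D Poisson (tridiagonal) matrix: almost every entry is zero.
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    A_sparse = sparse.diags([off, main, off], offsets=[-1, 0, 1], format="csc")
    A_dense = A_sparse.toarray()
    b = np.random.default_rng(0).standard_normal(n)

    x_dense = np.linalg.solve(A_dense, b)   # general dense solver: O(n^3) work, O(n^2) storage
    x_sparse = spsolve(A_sparse, b)         # sparse solver: works only on the nonzero entries

    assert np.allclose(x_dense, x_sparse)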
4.3. Grand Challenge Applications
The successful use of scalable parallel systems for Grand Challenge applications requires designing new hardware, developing new systems software and software tools, and integrating these with the idealized setting of mathematics and with the complex environment of real-world applications and users. The maturing HPCC Program is placing increased emphasis on facilitating this integration. For example, the Program sponsored the first "Workshop and Conference on Grand Challenge Applications and Software Technology" in May 1993. Some 250 people representing 34 Grand Challenge teams evaluated progress and planned future activities. A second workshop is scheduled for 1995.
As this new software becomes more efficient, stable, and robust, applications researchers are porting their software to the new parallel systems more quickly and are achieving faster run times. They are also obtaining more realistic results by taking advantage of the faster speeds, larger memories, and the opportunity to add complexity that was not possible before the new architectures became available. This realism comes through:
Entirely new approaches are being developed for cases in which existing models or their algorithmic components are inappropriate for parallel architectures.
Researchers are now benchmarking Grand Challenge applications on a variety of high performance systems to determine not only where they run fastest but also the reasons why, since demand for processors, memory, storage, and communication all affect speed. These researchers work closely with systems software and software tool developers, and they communicate intensively with hardware vendors. This feedback loop results in a wide range of improvements in high performance software and hardware.
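A minimal sketch of the kind of measurement involved appears below, in Python, using a dense matrix multiply as a hypothetical stand-in for an application kernel; real Grand Challenge benchmarks time complete applications and also track memory, storage, and communication demands across machines.

    import time
    import numpy as np

    def time_kernel(n, repeats=3):
        """Time one n x n dense matrix multiply; return the best wall-clock run."""
        rng = np.random.default_rng(0)
        a = rng.standard_normal((n, n))
        b = rng.standard_normal((n, n))
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            c = a @ b                              # the kernel being measured
            best = min(best, time.perf_counter() - start)
        return best

    for n in (256, 512, 1024):
        seconds = time_kernel(n)
        gflops = 2 * n ** 3 / seconds / 1e9        # ~2n^3 floating point operations
        print(f"n={n:5d}  {seconds:8.4f} s  {gflops:7.2f} GFLOP/s")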
As Grand Challenge applications prove the mettle of scalable parallel architectures, commercial software vendors are becoming more active in moving their software to these new machines. The success of commercial applications software is crucial to the success of the high performance computing industry.
NSF, with assistance from ARPA, has funded 16 Grand Challenge Applications Groups for three years beginning in FY 1993 or FY 1994. DOE has funded nine multi-year Grand Challenge projects, some jointly with other DOE programs, HPCC agencies, and industry. NASA, NIH, NIST, NOAA, and EPA have similar Grand Challenge groups. These groups are addressing problems in the following areas.
Improved and more realistic models and computer simulations of aerospace vehicles and propulsion systems are being developed. These will allow for analysis and optimization of designs over a broad range of vehicle classes, speeds, and physical phenomena, using affordable, flexible, and fast computing resources. Development of parallel benchmarks is an area of intensive activity. These applications are computationally intractable with traditional computing technology. NASA's Computational Aeroscience Grand Challenges include the High Speed Civil Transport, Advanced Subsonic Civil Transport, High Performance Aircraft, and Rotorcraft. This is clearly an area of significant mutual benefit to both the HPCC Program and other major NASA programs.
Simulation of a tiltrotor aircraft (V-22 Osprey) during takeoff. Shown are streaklines, rendered as smoke and computed using UFAT (Unsteady Flow Analysis Toolkit), a new time-dependent particle tracing code.
In addition, NSF has funded a Grand Challenge Applications Group to address fundamental problems in coupled field problems and geophysical and astrophysical fluid dynamics turbulence.
NSF has funded Grand Challenge Applications Groups in:
DOE's Grand Challenge projects are exploring:
Model showing initiation of water heater combustion.
Environmental Monitoring and Prediction
The environmental Grand Challenges include weather forecasting, predicting global climate change, and assessing the impacts of pollutants. High performance computers allow better modeling of the Earth and its atmosphere, resulting in improved guidance for weather forecasts and warnings, and improved global change models.
High resolution local and regional weather models are being incorporated into larger national and global weather forecasting counterparts. Several of these models are used widely by researchers to investigate and monitor the behavior of the atmosphere through numerical simulation. NOAA scientists are redesigning some of these models to take full advantage of new scalable systems. Some global models are "community models," used by researchers worldwide to compare results with observations and with other related models, and to evaluate performance. For example, a set of modular, portable benchmark codes is being developed and evaluated on several scalable systems and networked workstations at the Boulder Front Range Consortium. Funding for this work comes from ARPA's National Consortium for High Performance Computing, NOAA's Forecast Systems Laboratory, the NSF-funded National Center for Atmospheric Research, and the University of Colorado. Users of these improved models include the Federal Aviation Administration and the National Weather Service.
The track of Hurricane Emily predicted by NOAA's GFDL (yellow) and the observed track (orange).
Models of the atmosphere and the oceans are being rewritten in modular form and transported to several large parallel systems; funding sources include DOE's Los Alamos National Laboratory, NOAA's Geophysical Fluid Dynamics Laboratory (GFDL), and the Navy. A much greater level of detail is now possible -- local phenomena such as eddies in the Gulf of Mexico are being modeled, which allows for better warning of weather emergencies and improved design of equipment such as oil rigs.
Separate air and water quality models are being combined into a single model which is being transported to a variety of massively parallel systems where benchmarks are being used to evaluate performance. These models are needed to assess the impact of pollutant contributions from multimedia sources and to ensure adequate accuracy and responsiveness to complex environmental issues. Nutrient loading in the Chesapeake Bay and PCB transport in the Great Lakes are being targeted beginning in FY 1994.
In FY 1995, complexities such as aerosols, visibility, and particulates will be added; entire environmental models will be integrated into parallel computing environments with a focus on emissions modeling systems and integration with Geographical Information Systems.
EPA will acquire a large massively parallel system to be used initially for this research. A transition to an operational computing resource that supports improved environmental assessment tools is planned.
Model prediction of the amount of lead, a toxic pollutant, deposited by atmospheric processes during August 1989.
DOE's Grand Challenge projects are exploring:
NASA has funded Grand Challenge research teams in:
The environmental monitoring and prediction Grand Challenge enables better decision making by government and industry on issues that affect both the economy and the environment.
Molecular Biology and Biomedical Imaging
FY 1994 NIH accomplishments include:
A closeup view looking down on the aortic valve in a computational model of blood flow in the heart. The model was developed by Charles Peskin and David McQueen of New York University and run on the Pittsburgh Supercomputing Center's Cray C90.
FY 1995 NIH plans include:
NSF has funded Grand Challenge Applications Groups in:
DOE's Grand Challenge projects are exploring computational structural biology -- understanding the components of genomes and developing a parallel programming environment for structural biology.
Product Design and Process Optimization
Beginning in FY 1995, NIST will develop new high performance computing tools for applying computational chemistry and physics to the design, simulation, and optimization of efficient and environmentally sound products and manufacturing processes. Initial focus will be on the chemical process industry, microelectronics industry, and biotechnology manufacturing. Existing software for molecular modeling and visualization will be adapted for distributed, scalable computer architectures. New computational methods for first-principles modeling and analysis of molecular and electronic structure, properties, interactions and dynamics will be developed. Macromolecular structure, molecular recognition, nanoscale materials formation, reacting flows, the automated analysis of complex chemical systems, and electronic and magnetic properties of thin films will receive particular emphasis.
Diamond-tool turning and grinding machines are the epitome of precision manufacturing tools, capable of machining high precision optical finishes without additional polishing (such as on this copper mirror for a laser system). NIST researchers and their industrial partners are developing methods for monitoring and controlling diamond turning machines to improve the precision and production of highly efficient optics such as mirrors for laser welders.
NSF has funded a Grand Challenge Applications Group to address fundamental problems in high capacity atomic-level simulations for the design of materials.
A DOE Grand Challenge project involves first-principles simulation of materials properties using a hierarchy of increasingly more accurate techniques that exploit the power of massively parallel computing systems.
NASA's Earth and Space Science projects support research in:
NSF has funded Grand Challenge Applications Groups to address fundamental problems in: