NITRD’s 30th Anniversary Symposium Recap – Panel 1: Computing at Scale

(June 14, 2022)

This post is the first in a series highlighting the panels from the symposium, starting with Panel 1: Computing at Scale, moderated by Ben Zorn (Microsoft) and featuring distinguished panelists Luiz André Barroso (Google), Ian Foster (Argonne National Laboratory), Timothy Pinkston (USC), and Kathy Yelick (UC Berkeley). The panel led a riveting discussion celebrating the incredible past achievements of high performance computing (HPC) and cloud computing and looking ahead to where the technology is going and how it is affecting society.

SC21 BIRDS OF A FEATHER: “From Exascale to Quantum and Beyond: The Future of Federal HPC R&D”

(October 29, 2021)

Thirty years have brought a remarkable amount of new and emerging high-performance computing (HPC) technology to the United States and the global community. Why is 30 years an important marker in the computing world? It has been 30 years since the HPC Act of 1991 was signed into law, and since then we have seen an explosion of data, supercomputers faster than the creators of Star Trek could have imagined, revolutionary types of computing (exascale, quantum, and neuromorphic), and more! The Act spurred federally funded IT R&D and led to the formation of the Networking and Information Technology Research and Development (NITRD) Program, which coordinates that R&D.


THE CONVERGENCE OF HIGH PERFORMANCE COMPUTING, BIG DATA, AND MACHINE LEARNING

(September 9, 2019)

The high performance computing (HPC) and big data (BD) communities are evolving in response to changing user needs and technological landscapes. Researchers are increasingly using machine learning (ML) not only for data analytics but also for modeling and simulation; science-based simulations are increasingly relying on embedded ML models not only to interpret results from massive data outputs but also to steer computations. Science-based models are being combined with data-driven models to represent complex systems and phenomena. There is also an increasing need for real-time data analytics, which requires that large-scale computations be performed closer to the data and that data infrastructures adapt to HPC-like modes of operation. These new use cases create a vital need for HPC and BD systems to deal with simulations and data analytics in a more unified fashion.
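
The "ML model steering a simulation" pattern described above can be illustrated with a minimal sketch. The example below is a hypothetical toy, not code from the workshop report: a cheap surrogate model (a NumPy polynomial fit) is refit after every expensive simulation run and used to decide which parameter to simulate next, so that costly computation is concentrated where the surrogate predicts the most interesting response.

```python
# Toy sketch (illustrative only, not from the NITRD report) of an ML-steered
# simulation loop: a cheap surrogate is refit after each expensive run and
# used to choose the next simulation input.
import numpy as np

def expensive_simulation(x: float) -> float:
    """Stand-in for an HPC simulation: some costly physics we can only sample."""
    return np.sin(3.0 * x) + 0.1 * x**2

def fit_surrogate(xs, ys, degree=3):
    """Fit a cheap polynomial surrogate to the simulation results seen so far."""
    return np.polynomial.polynomial.Polynomial.fit(xs, ys, degree)

def steer_next_point(surrogate, candidates, visited):
    """Pick the unvisited candidate where the surrogate predicts the largest response."""
    unvisited = [c for c in candidates if c not in visited]
    return max(unvisited, key=lambda c: surrogate(c))

candidates = np.linspace(-2.0, 2.0, 41)
xs = list(candidates[::10])                 # a few seed runs to bootstrap the surrogate
ys = [expensive_simulation(x) for x in xs]

for step in range(10):
    surrogate = fit_surrogate(xs, ys)
    x_next = steer_next_point(surrogate, candidates, set(xs))
    y_next = expensive_simulation(x_next)   # the only expensive call per step
    xs.append(x_next)
    ys.append(y_next)
    print(f"step {step}: simulated x={x_next:+.2f}, result={y_next:+.3f}")

print("best parameter found:", xs[int(np.argmax(ys))])
```

In a real workflow the surrogate would typically be a trained neural network or Gaussian process and the "simulation" a large parallel job, but the control structure (simulate, refit, steer) is the same unified simulation-plus-analytics loop the report points to.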