NITRD 20th Anniversary Symposium

The Knight Conference Center, Newseum, Washington D.C.

February 16, 2012




MORE AND MOORE: GROWING COMPUTING PERFORMANCE FOR SCIENTIFIC DISCOVERY

Summary

The past two decades of growth in computing performance have enabled breakthroughs across many disciplines, from biology to astronomy, and from engineering to fundamental science. Within the energy space, computation is used to develop alternative energy sources, to design energy-efficient devices, and to understand and mitigate the impacts of energy choices. Demand for computational capability across science grows unabated, with the growth coming from three general areas.

Two decades ago, the field of high performance computing faced a major crisis: the vector supercomputers of the past were no longer cost-effective. Microprocessor performance was tracking Moore's Law, making massively parallel systems the cost-effective choice for science. But this hardware revolution came with significant challenges for algorithm and software developers. Funded in large part by Networking and Information Technology Research and Development (NITRD) Program investments, researchers adapted to that change by developing new software tools, libraries, algorithms, and applications, while also increasing the capability and complexity of the problems being simulated.
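The massively parallel systems of that era were typically programmed with message passing. As a purely illustrative sketch, not taken from the talk or from any specific NITRD-funded code, the short C program below uses MPI, one of the libraries standardized in that period, to compute a distributed dot product; the vector contents and sizes are hypothetical.

/* Illustrative sketch: a distributed dot product in the message-passing
 * style used on massively parallel systems.
 * Build and run with an MPI toolchain, for example:
 *   mpicc dot.c -o dot && mpirun -np 4 ./dot
 */
#include <mpi.h>
#include <stdio.h>

#define N_LOCAL 1000  /* elements owned by each process (hypothetical size) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process fills and reduces only its local slice of the vectors. */
    double x[N_LOCAL], y[N_LOCAL], local = 0.0, global = 0.0;
    for (int i = 0; i < N_LOCAL; i++) {
        x[i] = 1.0;
        y[i] = 2.0;
        local += x[i] * y[i];
    }

    /* Combine the per-process partial sums into a single result on rank 0. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("dot product = %f\n", global);

    MPI_Finalize();
    return 0;
}

The point of the model is visible even in this toy: no process sees the whole problem, and all coordination is explicit, which is precisely what made the transition from vector machines so demanding for software developers.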

Today, the field again faces a major revolution in computer architecture. The clock speed benefits of Moore's Law have ended, and future system designs will instead be constrained by power density and total system power, resulting in radically different architectures. The challenges associated with continuing the growth in computing performance will require broad research activities across computer science, including the development of new algorithms, programming models, system software, and computer architecture. While these problems are most evident at the high end, they limit the growth in computing performance across scales, from department-scale clusters to major computational centers.



Bio

Katherine Yelick is the co-author of two books and more than 100 refereed technical papers on parallel languages, compilers, algorithms, libraries, architecture, and storage. She co-invented the UPC and Titanium languages and demonstrated their applicability across computer architectures through the use of novel runtime and compilation methods. She also co-developed techniques for self-tuning numerical libraries, including the first self-tuned library for sparse matrix kernels, which automatically adapts the code to properties of the matrix structure and the machine. Her work includes performance analysis and modeling as well as optimization techniques for memory hierarchies, multicore processors, communication libraries, and processor accelerators. She has worked with interdisciplinary teams on application scaling, and her own applications work includes parallelization of a model for blood flow in the heart. She earned her Ph.D. in Electrical Engineering and Computer Science from MIT and has been a professor of Electrical Engineering and Computer Sciences at UC Berkeley since 1991, with a joint research appointment at Berkeley Lab since 1996. She has received multiple research and teaching awards and is a member of the California Council on Science and Technology, the Computer Science and Telecommunications Board, and the National Academies committee on Sustaining Growth in Computing Performance.
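To illustrate the self-tuning idea mentioned in the bio, the C sketch below is a hypothetical example, not the actual library's code or API: an autotuner times several candidate implementations of a kernel on the target machine and keeps the fastest. The kernel, variant names, and sizes here are invented for illustration.

/* Illustrative sketch of runtime self-tuning: benchmark candidate kernel
 * variants and select the fastest for this machine. The variants and the
 * selection loop are hypothetical, not actual library code. */
#include <stdio.h>
#include <time.h>

#define N 100000

static void variant_plain(const double *x, double *y) {
    for (int i = 0; i < N; i++) y[i] = 2.0 * x[i];
}

static void variant_unrolled(const double *x, double *y) {
    int i;
    for (i = 0; i + 4 <= N; i += 4) {   /* 4-way unrolled loop body */
        y[i]     = 2.0 * x[i];
        y[i + 1] = 2.0 * x[i + 1];
        y[i + 2] = 2.0 * x[i + 2];
        y[i + 3] = 2.0 * x[i + 3];
    }
    for (; i < N; i++) y[i] = 2.0 * x[i]; /* remainder elements */
}

typedef void (*kernel_fn)(const double *, double *);

int main(void) {
    static double x[N], y[N];
    kernel_fn candidates[] = { variant_plain, variant_unrolled };
    const char *names[] = { "plain", "unrolled" };
    int best = 0;
    double best_time = 1e30;

    /* Time each candidate on this machine and remember the fastest. */
    for (int k = 0; k < 2; k++) {
        clock_t t0 = clock();
        for (int rep = 0; rep < 100; rep++)  /* repeat for a measurable time */
            candidates[k](x, y);
        double t = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("%-8s %.4f s\n", names[k], t);
        if (t < best_time) { best_time = t; best = k; }
    }
    printf("selected variant: %s\n", names[best]);
    return 0;
}

A real self-tuned sparse library would search a much larger space, for example candidate register-block sizes chosen from properties of the matrix's nonzero structure, but the select-the-fastest-variant loop above captures the essential mechanism.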