The Big Data Interagency Working Group (BD IWG) works to facilitate and further the goals of the White House Big Data R&D Initiative.
The CPS IWG coordinates programs, budgets, and policy recommendations for Cyber-Physical Systems (CPS) research and development (R&D).
The Cyber Security and Information Assurance (CSIA) Interagency Working Group coordinates the activities of the CSIA Program Component Area.
The Health Information Technology Research and Development Interagency Working Group coordinates programs, budgets, and policy recommendations for Health IT R&D.
The Human Computer Interaction and Information Management (HCI&IM) group focuses on information interaction, integration, and management research to develop and measure the performance of new technologies.
High Confidence Software and Systems (HCSS) R&D supports development of scientific foundations and enabling software and hardware technologies for the engineering, verification and validation, assurance, and certification of complex, networked, distributed computing systems and cyber-physical systems (CPS).
The HEC IWG coordinates the activities of the High End Computing (HEC) Infrastructure and Applications (I&A) and HEC Research and Development (R&D) Program Component Areas (PCAs).
LSN members coordinate Federal agency networking R&D in leading-edge networking technologies, services, and enhanced performance.
The purpose of the SPSQ IWG is to coordinate the R&D efforts across agencies that transform the frontiers of software science and engineering and to identify R&D areas in need of development that span the science and the technology of software creation and sustainment.
This IWG was formed to ensure and maximize successful coordination and collaboration across the Federal government in the important and growing area of video and image analytics.
The Wireless Spectrum R&D (WSRD) Interagency Working Group (IWG) has been formed to coordinate spectrum-related research and development activities across the Federal government.
The Pittsburgh Supercomputing Center (PSC) is carrying out an accelerated development pilot project to create, deploy, and test software building blocks and hardware that implement functionalities specifically designed to support data-analytic capabilities for data-intensive scientific research. The project works in collaboration with, and strives to serve the needs of, a growing, diverse set of emerging and existing applications that require computing and other IT capabilities beyond those typically available to individual research groups, yet are not well served by existing HPC systems. This Data Exacell (DXC) project builds on PSC's successful Data Supercell (DSC) technology, which replaced a conventional tape-based archive with a disk-based system to economically provide the much lower latency and higher bandwidth data access necessary for data-intensive activities. DXC will implement and bring to production quality additional functionalities important to such work, including improved local performance, additional abilities for remote data access and storage, enhanced data integrity, data tagging, and improved manageability. In support of data analytics and data-intensive processing, we are acquiring hardware appropriate for running a broad collection of databases and interacting with them directly (e.g., over the Web) or as part of data-intensive workflows, thereby increasing DXC's effectiveness and applicability for the full range of data-analytic research. In the technological ecosystem, DXC fills the void between computationally intensive systems (now driving toward exascale) and shared-nothing clusters (including commercial clouds).
We will discuss DXC's current application complement; its architecture; its computational engines, including small and large database systems, a large cache-coherent memory system, and graph-analytic and data-analytic systems; and its storage systems, including shared SSDs and local and multi-petabyte shared high-performance file systems, all interconnected by a high-performance fabric in a 'data-centric' architecture. In addition, we will sketch the extension of the DXC pilot project to Bridges, the recently announced NSF-funded large-scale production facility scheduled to go online in 2016.