XSEDE Science Successes


Bridges to the Future

Bridges, available starting in 2016 at the Pittsburgh Supercomputing Center, is a new concept in high-performance computing: a system designed to support familiar, convenient software and environments for both traditional and non-traditional HPC users. It is a richly connected set of interacting systems offering a flexible mix of gateways (Web portals), workflows, Hadoop and Spark ecosystems, interactivity, and batch processing. Bridges will include:

  • Compute nodes with hardware-supported shared memory ranging from 128 GB to 12 TB per node to support genomics, machine learning, graph analytics, and other fields where partitioning data is impractical
  • GPU nodes to accelerate diverse applications, for example, in machine learning, image processing, and materials science
  • Database nodes to support data management, analytics, integration, and fusion and to drive gateways and workflows
  • Webserver nodes to host gateways and provide access to community datasets
  • Data transfer nodes with 10 GigE connections to enable data movement between Bridges and XSEDE, campuses, instruments, and other advanced cyberinfrastructure

Bridges could be a good fit for you if:

  • You want to scale your research beyond the limits of your laptop, using familiar software and user environments.
  • You want to collaborate with other researchers with complementary expertise. 
  • Your research can take advantage of any of the following:
      • Rich data collections: Rapid access to data collections will support their use by individuals, collaborations, and communities.
      • MPI jobs requiring up to 1176 cores (ensembles and other loosely coupled sets of jobs can use more than 1176 cores in aggregate); a minimal MPI sketch appears after this list.
      • Cross-domain analyses: Concurrent access to datasets from different sources, along with tools for their integration and fusion, will enable new kinds of questions.
      • Gateways and workflows: Web portals will provide intuitive access to complex workflows that run "behind the scenes."
      • Large coherent memory: Bridges' 3 TB and 12 TB nodes will be ideal for memory-intensive applications, such as genomics and machine learning.
      • In-memory databases: Bridges' large-memory nodes will be valuable for in-memory databases, which gain their performance advantage by keeping data resident in memory rather than on disk.
      • Graph analytics: Bridges' hardware-supported shared-memory nodes will efficiently execute algorithms on large, nonpartitionable graphs and complex data.
      • Optimization and parameter sweeps: Bridges is designed to run large numbers of small to moderate jobs extremely well, making it ideal for large-scale optimization problems; a parameter-sweep sketch appears after this list.
      • Rich software environments: Robust collections of applications and tools, for example in statistics, machine learning, and natural language processing, will allow researchers to focus on analysis rather than coding.
      • Data-intensive workflows: Bridges' file systems and high bandwidth will provide strong support for applications that are typically I/O bandwidth-bound. One example is an analysis whose steps are best expressed in different programming models, such as data cleaning and summarization with Hadoop-based tools, followed by graph algorithms that run more efficiently with shared memory.
      • Contemporary applications: Applications written in Java, Python, R, MATLAB, SQL, C++, C, Fortran, MPI, OpenMP, OpenACC, CUDA, and other popular languages will run naturally on Bridges.
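
As a rough, hypothetical illustration of the loosely coupled MPI pattern noted in the list above, the sketch below spreads independent work across MPI ranks and combines the partial results on rank 0. It assumes only a Python environment with the mpi4py package; the workload (summing squares) is purely illustrative and not a Bridges-specific example.

    # mpi_sketch.py -- run with, e.g., `mpirun -n 4 python mpi_sketch.py`
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank evaluates its own slice of an embarrassingly parallel workload.
    local_sum = sum(x * x for x in range(rank, 1000, size))

    # Combine the partial sums on rank 0.
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"Sum of squares over 0..999 computed by {size} ranks: {total}")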

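The optimization and parameter-sweep bullet can be pictured the same way: many small, independent evaluations running side by side. The sketch below uses only the Python standard library; the objective function and parameter grid are invented for illustration and imply nothing about Bridges' actual tooling.

    # sweep_sketch.py -- evaluate a toy objective over a small grid in parallel.
    from itertools import product
    from multiprocessing import Pool

    def objective(params):
        """Illustrative objective: squared distance from an arbitrary optimum."""
        alpha, beta = params
        return alpha, beta, (alpha - 0.3) ** 2 + (beta - 1.7) ** 2

    if __name__ == "__main__":
        grid = list(product([i / 10 for i in range(10)],
                            [j / 2 for j in range(10)]))
        with Pool() as pool:  # one worker per available core
            results = pool.map(objective, grid)
        best = min(results, key=lambda r: r[2])
        print(f"best alpha={best[0]}, beta={best[1]}, value={best[2]:.3f}")

Because the evaluations never communicate with one another, each worker here could just as easily be a separate batch job, which is the workload shape the bullet above describes.
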
Information on Submitting Allocation Requests for Bridges

More on Bridges