The Extended Collaborative Support Services (ECSS) program provides expert assistance in a wide range of
cyberinfrastructure technologies. Any user may request this assistance through the XSEDE allocation process.
The primary goal of this monthly symposium is to allow the over 70 staff members working in ECSS to exchange information about successful techniques used to address challenging science problems. Tutorials on new technologies may also be featured. Two 30-minute, technically-focused talks will be presented each month and will include a brief question and answer period. This series is open to all.
These sessions will be recorded. For this large webinar, only the presenters and host will be broadcasting audio. Attendees may submit questions to the presenters through a moderator by sending a chat message.
September 19, 2017
COSMIC2 - A Science Gateway for Cryo-Electron Microscopy with Globus for Terabyte-sized Dataset
Presenter(s): Mona Wong-Barnum (SDSC)
Principal Investigator(s): Andres Leschziner (UCSD), Michael Cianfrocco (University of Michigan)
Structural biology is in the midst of a revolution. Instrumentation and software improvements have allowed for the full realization of cryo-electron microscopy (cryo-EM) as a tool capable of determining atomic structures of protein and macromolecular samples. These advances open the door to solving new structures that were previously unattainable, which will soon make cryo-EM a ubiquitous tool for structural biology worldwide, serving both academic and commercial purposes. However, despite its power, new users of cryo-EM face significant obstacles. One major barrier is the handling of large datasets (10+ terabytes), where new cryo-EM users must learn how to interface with the Linux command line while also managing and submitting jobs to high performance computing resources. To address this barrier, we are developing the COSMIC2 Science Gateway as an easy, web-based science gateway to simplify cryo-EM data analysis using a standardized workflow. Specifically, we have adapted the successful and mature Cyberinfrastructure for Phylogenetic Research (CIPRES) Workbench and integrated Globus Auth and Globus Transfer to enable federated user identity management and large dataset transfers to the Extreme Science and Engineering Discovery Environment's (XSEDE) high performance computing (HPC) systems. With the support of XSEDE's Extended Collaborative Support Services (ECSS) and the Science Gateway Community Institute's (SGCI) Extended Developer Support (EDS), this gateway will lower the barrier to high performance computing tools and facilitate the growth of cryo-EM to become a routine tool for structural biology. This talk was previously given at PEARC'17.
First steps in optimising Cosmos++: A C++ MPI code for simulating black holes
Presenter(s): Damon McDougall (ICES)
Principal Investigator(s): Patrick C. Fragile (College of Charleston)
The goal of this ECSS project is to have Cosmos++ run effectively on Stampede2. Stampede2, at present, is made up entirely of Intel Xeon Phi nodes. These are low clock-frequency but high core-count nodes, and there are some challenges associated with running efficiently on this hardware. Although the project's end goal is to hybridise a pure MPI code, this talk will focus on some of the initial steps we have taken to improve serial performance and how these steps relate to C++ software design. Prior knowledge of compiled languages and custom types would be beneficial but isn't required.
August 15, 2017
HTC with a Sprinkle of HPC: Finding Gravitational Waves with LIGO
Presenter(s): Lars Koesterke (TACC)
Principal Investigator(s): Duncan Brown (Syracuse University), Josh Willis (Abilene Christian University)
XSEDE is supporting the LIGO project to detect signatures of gravitational waves in a stream of data generated by (currently) two observatories in the U.S., located in Washington State and Louisiana. I will report on an ECSS project tasked with improving the performance of one of the largest (most resource-demanding) pipelines, called PyCBC (Python Compact Binary Coalescence). The software evolved from a slow and performance-unaware state to a high-performing pipeline capable of utilizing Xeon, Xeon Phi, and Nvidia GPU architectures alike. Achieving high performance required only a few sprinkles of HPC (High Performance Computing) on top of an HTC (High Throughput Computing) pipeline. While the HPC pieces relevant to this particular project are all well known to ECSS staff, it may be surprising what was missing from the considerations of the software developers. Hence this is more a story of how to educate users than a story of new and groundbreaking HPC concepts. Nevertheless, I am confident that my fellow ECSS staffers will find this project interesting and enlightening.
Enabling multi-event 3D simulations for earthquake hazard assessment
Presenter(s): Yifeng Cui (SDSC)
Principal Investigator(s): Morgan Moschetti (USGS)
Researchers from the USGS use Stampede to perform a series of computationally intensive simulations for improved understanding of earthquake hazards. The calculations are made with Hercules, a finite element solver developed at CMU that combines meshing, partitioning, and solving functions in a single, self-contained code. Meshing employs a highly efficient octree-based algorithm that scales well. The simulation results are used to investigate the effects of complex geologic structure and topography on seismic wave propagation and ground-shaking hazards, and to evaluate model uncertainties in U.S. seismic hazard models. This talk will provide an overview of the current status of the seismic hazard analysis research and introduce the code's performance and the optimizations involved in supporting multi-event simulations through the ECSS project.
June 16, 2015
A Short Story of Efficiently Using Two Open-Source Applications on Stampede
Presenter(s): Ritu Arora (TACC)
This presentation will cover a summary of two challenges and solutions related to running the DROID (Digital Record Object Identification) and the FLASH astrophysics code on a large number of nodes on Stampede.
DROID is a software tool developed by The National Archives to perform automated batch identification of file formats. It is written in Java and works well when only one copy of it is run on a node. PI Jessica Trelogan from the Institute of Classical Archaeology at UT Austin has been using DROID as part of her workflow for managing a large archaeological data collection. It would take her more than two days to extract metadata from about 4.3 TB of data using DROID on a local server. Since the process of culling and reorganizing the data collection is iterative, the metadata extraction using DROID needs to be done often. The goal of the ECSS project with PI Trelogan was to provide support in leveraging Stampede for parts of her workflow, including DROID, so that the overall time taken to conduct all the steps in the workflow is reduced. The main challenge in using DROID on Stampede was executing multiple copies of it in parallel on different nodes in batch mode. An overview of this challenge and its solution strategy will be presented in this talk.
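The general pattern for this kind of problem is to split the collection into disjoint chunks and run one independent DROID instance per chunk. A minimal sketch of the partitioning step (function and path names are hypothetical; this is an illustration, not the actual ECSS solution):

```python
# Sketch: split a large collection across workers so each compute node can run
# its own independent DROID instance on a disjoint subset of the directories.

def partition_round_robin(items, n_workers):
    """Assign items to workers in round-robin order; returns one list per worker."""
    chunks = [[] for _ in range(n_workers)]
    for i, item in enumerate(items):
        chunks[i % n_workers].append(item)
    return chunks

if __name__ == "__main__":
    dirs = [f"collection/site_{i:03d}" for i in range(10)]
    for worker, chunk in enumerate(partition_round_robin(dirs, 4)):
        # Each chunk would become the input of one batch DROID run on one node.
        print(worker, chunk)
```

Each chunk would then be handed to a separate DROID invocation through the batch scheduler, so the per-node "one copy per node" constraint is respected while the nodes work in parallel.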
In another project, a copy of the FLASH astrophysics code was optimized so that it performs striped I/O on the Lustre file system. This project was proposed after it was found that a user had overloaded the Lustre servers (which eventually became unresponsive) while running FLASH on 7000+ cores. The problem was traced to the step that reads a checkpoint file. An overview of the problem and its solution will be included in this talk.
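The intuition behind striped, parallel reads can be shown with simple offset arithmetic (hypothetical parameters; this is not the FLASH code): if each rank reads a contiguous, stripe-aligned slice of the checkpoint, the requests spread across Lustre's object storage targets instead of piling onto a few servers.

```python
def stripe_aligned_slices(file_size, n_ranks, stripe_size):
    """Split the byte range [0, file_size) into n_ranks contiguous slices whose
    boundaries fall on stripe_size multiples, so each rank's read stays
    aligned with the Lustre stripes (and thus with distinct storage targets)."""
    stripes = -(-file_size // stripe_size)   # total stripes, rounded up
    per_rank = -(-stripes // n_ranks)        # stripes per rank, rounded up
    slices = []
    for rank in range(n_ranks):
        start = min(rank * per_rank * stripe_size, file_size)
        end = min((rank + 1) * per_rank * stripe_size, file_size)
        slices.append((start, end))
    return slices
```

For example, a 1 GiB checkpoint with a 4 MiB stripe size splits cleanly across 8 ranks, each reading a 128 MiB stripe-aligned slice.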
Optimization of Text Processing for the WordFlare Knowledge Graph
Presenter(s): Robert Sinkovits (SDSC)
Principal Investigator(s): Michael Douma (IDEA)
The goal of the WordFlare project is to create a tablet-based app to engage K-12 and lifelong learners in exploring language and knowledge. The app is based on a massive thesaurus and features dynamic visualizations of word relationships. Approximately 9% of the content is human-curated, while the other 91% is derived using computational methods executed on XSEDE resources. In this talk, I will describe the steps taken to accelerate two key steps in the automated text processing – optimization of the Latent Dirichlet Allocation (LDA) algorithm and the development of a fast method to simultaneously search for large numbers of words in a corpus. The speedups we obtain are highly problem dependent, ranging from 1.5-2.2x for the LDA algorithm and up to 1500x for the word search when using a large reference dictionary (e.g. the 400K words found in Wiktionary).
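A speedup that grows with dictionary size is consistent with replacing one corpus scan per word by a single pass that tests each token against a hash set of the dictionary. A minimal sketch of that idea (hypothetical names; not the project's actual code):

```python
def find_dictionary_words(corpus_tokens, dictionary):
    """Single pass over the corpus: building a set makes each lookup O(1),
    so total cost is O(|corpus| + |dictionary|) rather than
    O(|corpus| * |dictionary|) for one scan per dictionary word."""
    dict_set = set(dictionary)
    found = {}
    for pos, tok in enumerate(corpus_tokens):
        if tok in dict_set:
            found.setdefault(tok, []).append(pos)
    return found
```

With a reference dictionary of ~400K words, the per-word-scan cost disappears entirely, which is why the gains reported in the abstract are largest for the biggest dictionaries.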
May 19, 2015
ECSS experience with non-traditional HPC users
Presenter(s): Junqi Yin (NICS)
Principal Investigator(s): Annette Engel (U. Tenn), Yong Zeng (UMKC)
Mothur is an open-source bioinformatics pipeline used for biological sequence analysis that has gained increasing attention in the microbial ecology community. Because a large set of functionalities in Mothur is memory bound, it is well suited for shared memory architectures. I will discuss performance results for several Mothur commands that are popular in operational taxonomic unit analysis, and show that pipeline processes can be accelerated by orders of magnitude.
Real-time Bayesian estimation for financial ultra-high frequency data is plagued with the curse of high dimensionality. Methods have been developed to manage this problem through the use of MPI. By porting to CUDA, I'll show that an adequately equipped GPU workstation can rise to the task, producing reasonably real-time results with actual data from financial markets.
P3DFFT: a scalable open-source solution for Fourier Transforms and other algorithms in three dimensions
Presenter(s): Dmitry Pekurovsky (SDSC)
P3DFFT is an open-source package developed at SDSC. It implements three-dimensional Fourier Transforms and other algorithms in a highly scalable and efficient way, achieving good scaling on hundreds of thousands of compute cores. It has received much interest and use from scientists in diverse fields such as DNS turbulence simulations, astrophysics, oceanography, and materials science. Recently it has been the subject of an internal ECSS project aimed at making it XSEDE community software. It has been ported, tested, and documented on the largest computational systems at XSEDE, and additional features have been added to help widen its impact in the community. In this presentation I will go over the main features of P3DFFT, including the recently added ones, and review how XSEDE users can access it on XSEDE platforms.
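The scalability of 3D FFT libraries of this kind comes from a two-dimensional ("pencil") domain decomposition, where each of P_rows x P_cols tasks owns a full line of the grid in one dimension. A sketch of the underlying index arithmetic (illustrative only; this is not P3DFFT's actual API):

```python
def block(n, p, r):
    """Extent [start, end) of block r when n points are split across p tasks,
    giving the remainder one extra point per low-numbered rank."""
    base, rem = divmod(n, p)
    start = r * base + min(r, rem)
    return start, start + base + (1 if r < rem else 0)

def pencil_extents(n, p_rows, p_cols, row, col):
    """Local (y, z) extents owned by task (row, col) in a pencil decomposition
    of an n x n x n grid; each task holds all x values for its (y, z) block,
    so the 1D FFTs along x need no communication."""
    return block(n, p_rows, row), block(n, p_cols, col)
```

Transforming along the other two dimensions then requires global transposes that rotate the pencils, which is where the communication cost of such libraries lives.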
April 21, 2015
reproducibility@XSEDE: Reporting Back to our Colleagues
Presenter(s): Doug James (TACC), Carlos Rosales (TACC), Nancy Wilkins-Diehr (SDSC)
The reproducibility@XSEDE workshop (www.xsede.org/reproducibility) was a full-day event held in conjunction with XSEDE14. The workshop featured an interactive, open-ended, discussion-oriented agenda focused on reproducibility in large-scale computational science. This presentation includes (1) independent reactions to the event by three of the workshop principals; and (2) an open discussion on the topic of reproducibility in general.
March 17, 2015
Gateway Building for the Non-Linear Adjoint Coefficient Estimation (NLACE) project
Presenter(s): Lan Zhao (Purdue), Chris Thompson (Purdue)
Principal Investigator(s): Paul Barbone (Boston University)
The presenters will discuss their work providing a solution for the NLACE (Non-Linear Adjoint Coefficient Estimation) research group to make its biomechanical imaging analysis model available to the community using XSEDE resources. The research has a wide variety of medical applications, including brain scanning, bone structure analysis, and cancer detection. The Barbone group created and maintains the NLACE model and needed help with science gateway development. They have an allocation on Gordon, and the ECSS team was able to help them get their model installed there and quickly create an application for utilizing it on DiaGrid, a HubZero-based gateway for hosting scientific applications.
Real-Time Next Generation Sequencing (NGS) in the Classroom using Galaxy
Presenter(s): Josephine Palencia (PSC), Alex Ropelewski (PSC)
We present a real-world use case supporting 30 Carnegie Mellon University (CMU) bioinformatics students from three classes performing real-time next-generation sequencing (NGS) analysis. We describe the system setup, the scaling preparations, the tools, the full workflow, the data and reference files, and the lessons learned from the classroom experience.
March 6, 2015
PRACE-XSEDE Interoperability projects
Presenter(s): Morris Riedel (Juelich Supercomputing Centre), Sandra Gesing (University of Notre Dame), Shantenu Jha (Rutgers University)
Presentation Slides - Gesing
Presentation Slides - Jha
1) "Smart Data Analytics for Earth Sciences across XSEDE and PRACE", speaker Morris Riedel, Juelich Supercomputing Centre.
The ever-increasing amount of earth science data arising from measurements or computational simulations requires new 'smart data analytics' techniques capable of extracting meaningful findings from 'pure big data'. XSEDE and PRACE both provide excellent resources that enable efficient and effective data analytics, provided that suitable technical frameworks and data analysis packages are available. While we assessed tools and technologies for several earth science case studies, this webinar is driven by one particular analytics use case: automated anomaly/outlier detection in earth science time series datasets, which requires a parallel and scalable clustering algorithm able to take advantage of the interoperability between XSEDE and PRACE systems today. As one of the key results of our technology assessment project, we present our parallel and scalable implementation of the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm and show how it can be used across different infrastructures using open standards to decouple architecture from concrete implementations. Solutions will be outlined that can be used in production today if the associated resource allocations in XSEDE and/or PRACE are granted to the user. More details will be presented at the Research Data Alliance (RDA) 5th Plenary Big Data Analytics Group Session in San Diego in March 2015 and at the European Geosciences Union (EGU) 2015 Big Data for Earth Science session in Vienna in April 2015.
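For readers unfamiliar with DBSCAN, a minimal serial sketch of the algorithm follows (pure Python, illustration only; the talk concerns a parallel, scalable implementation for large time series datasets):

```python
def dbscan(points, eps, min_pts):
    """Label each point with a cluster id (0, 1, ...) or -1 for noise."""
    def neighbors(i):
        # Brute-force epsilon-neighborhood query (includes the point itself).
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                  # tentatively noise
            continue
        cluster += 1                        # i is a core point: start a new cluster
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:                        # expand the cluster
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster         # noise reached from a core: border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:      # j is also a core point: keep expanding
                seeds.extend(j_nbrs)
    return labels
```

Points that fall outside every dense region keep the label -1, which is exactly the anomaly/outlier role the use case exploits; the neighborhood queries dominate the cost and are what parallel implementations distribute.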
2) "Unicore Use Case Integration for XSEDE and PRACE", speaker Sandra Gesing, U of Notre Dame
European Team: the molecular simulation community represented by MoSGrid, SCI-BUS, and ER-flow, and the computational radiation physics community represented by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR). US: Center for Research Computing, University of Notre Dame.
The project focuses on the integration of two UNICORE use cases for the joint support of XSEDE and PRACE: the first targets the molecular simulation community, the second the computational radiation physics community. The project MoSGrid (Molecular Simulation Grid) offers a web-based science gateway supporting the community with various services for quantum chemistry, molecular modeling, and docking, using UNICORE as grid middleware.
The main technical challenge in MoSGrid has been to extend the portal infrastructure for intuitive use of the XSEDE and PRACE infrastructures via UNICORE and the corresponding credentials. The members of the computational radiation physics community involved in this project focus on the generation of advanced laser-driven sources of particle and X-ray beams. They aim to "simulate what is measured": reproducing experimental measurements and connecting them to the fundamental plasma processes at the single-particle scale. The main goal was to make the relevant tools available on both XSEDE and PRACE systems via UNICORE, allowing the exchange of common workflows that can be applied on both infrastructures. The talk will go into detail about the goals, the lessons learned, and the accomplished steps.
3) "Interoperable High Throughput Binding Affinity Calculator for Personalised Medicine", speaker Shantenu Jha, Rutgers University. European team: Prof. Peter V. Coveney (University College London) and Prof. Dieter Kranzlmüller (LRZ/LMU). US: Prof. Shantenu Jha (RADICAL, Rutgers).
To improve the ability to calculate drug-binding affinities, the CCS at UCL has developed the Binding Affinity Calculator (BAC), which builds the patient-specific models required to simulate drug performance. This process involves a complex series of steps to customise a generic model with patient-specific information and then run the calculations. As part of this XSEDE-PRACE project, BAC has been interfaced with RADICAL-Cybertools to interoperably utilize XSEDE and PRACE resources. We will present some preliminary results.
February 17, 2015
Migration to Phis and Enhancing Vectorization
Presenter(s): Jim Browne (University of Texas)
This talk will cover two topics: selection of execution modes for heterogeneous compute nodes that include Intel Xeon Phis, and enhancement of vectorization in application codes. Each lecture will be about 20 minutes. The lecture on selection of execution modes describes a systematic process for choosing among CPU-only, Phi-only, symmetric, and offload execution modes; it is based on the Quick Start Guide developed by the Stampede Technology Insertion project, which is available from the TACC web site. Enhancement of vectorization is accomplished by combining compiler static analysis with runtime measurements of execution behavior to generate recommendations for directives, pragmas, and source code changes that increase vectorization. The average performance gain obtained in the case studies was in excess of 40% for execution on the Xeon Phis. The process for enhanced vectorization is supported by a tool called MACVEC, which will soon be available for general use on Stampede.
January 20, 2015
Genomics Calculations as an Outreach Strategy
Presenter(s): Hugh Nicholas (PSC), Alexander Ropelewski (PSC)
Genomics calculations are a large part of the workload at the XSEDE centers, and they have been an effective vehicle for attracting minority scientists and teaching them how to use XSEDE resources effectively for complete genome sequencing, transcriptome analysis, and studies of individual genes. We have delivered a series of two-week workshops and longer internships that have been useful in training minority scientists and students to carry out this range of calculations on a variety of biological systems in the health and agricultural fields. We have maintained a very successful outreach program based on this type of calculation; this talk will describe the program and highlight some individual accomplishments of the scientists and students involved.
December 16, 2014
Presenter(s): Mike Norman (UCSD) Niall Gaffney (TACC) Jim Lupo (LSU)
Presentation Slides - Mike Norman
Presentation Slides - Niall Gaffney
Presentation Slides - Jim Lupo
The advanced digital resources ecosystem is constantly evolving as resources are decommissioned when they reach the end of their life and new resources are added. On this call you'll hear about the newest XSEDE resources, one of which is available now and two of which will be available in early 2015.
- Michael Norman, San Diego Supercomputing Center
- Comet, a new 2 Pflops supercomputer designed to transform advanced scientific computing by expanding access and capacity among traditional as well as non-traditional research domains.
- Niall Gaffney, Texas Advanced Computing Center
- Wrangler, a groundbreaking data analysis and management system for the national open science community. https://www.tacc.utexas.edu/systems/wrangler
- Jim Lupo, Louisiana State University
- SuperMIC, a ~1 Pflops system of Intel Xeon Phi processors, 40% of which is allocated via XSEDE