Gateways 2020 is scheduled from October 12 to October 23, with the tutorial and workshop track during the first week and the main conference track during the second week. This fifth annual Gateways conference is an opportunity for gateway creators and enthusiasts to learn, share, connect, and shape the future of gateways, while supporting and growing our community. Register for the conference by October 5.

The default time zone is Eastern Time. You can adjust it to your time zone on the right side of the schedule underneath the search box (or in the top bar, depending on the width of your screen).

Already registered for the conference and want to personalize your own schedule? Sign up for your own free Gateways 2020 Sched account. Note: Signing up for Sched is NOT the same as registering for the conference.
Tutorial Rooms
Monday, October 12
 

1:00pm EDT

Deploying Science Gateways with Apache Airavata
Limit: 50 participants

The authors present the Apache Airavata framework for deploying science gateways, illustrating how to request, administer, modify, and extend a basic gateway tenant on the middleware.

Science gateways provide science-specific user interfaces to scientific applications and data for end users who are unfamiliar with command-line interfaces or need more capabilities than those interfaces provide. Science gateway frameworks supply software that can be used to create science gateways, but that software must still be operated and maintained by the gateway provider. Hosted solutions build on these frameworks and provide configurable science gateway tenants that can be managed without the gateway provider installing or operating any science gateway software, removing the burden of running a production gateway from the provider.

This tutorial provides an overview of the Apache Airavata software framework for science gateways and focuses on how to request, configure, manage, and customize a tenant gateway on SciGaP.org, the hosted version of Apache Airavata operated by the Cyberinfrastructure Integration Research Center (https://circ.iu.edu) at Indiana University.
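
To give a concrete flavor of what working against a hosted tenant can look like from code, here is a minimal, purely illustrative Python sketch of a gateway talking to a tenant's web API. The portal URL, endpoint path, and field names below are placeholders invented for illustration, not the actual Airavata or SciGaP API; the tutorial itself covers the real interfaces and the Django-based portal in detail.

    import requests

    # Placeholder values -- a real tenant supplies its own portal URL and an
    # OAuth2 access token obtained through the gateway's login flow.
    PORTAL_URL = "https://example-gateway.scigap.org"   # hypothetical tenant URL
    ACCESS_TOKEN = "..."                                 # obtained via OAuth2 login

    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

    # Hypothetical request: list the applications this tenant exposes to its users.
    resp = requests.get(f"{PORTAL_URL}/api/applications/", headers=headers)
    resp.raise_for_status()
    for app in resp.json():
        print(app)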

  • Audience: Anyone interested in the topic is welcome to attend. Participants will benefit from prior general knowledge of how to execute scientific applications in HPC and cloud environments, but this is not required.
  • Skill Level: Intermediate
  • Prerequisite Knowledge: All interested participants are welcome. Familiarity with Python, Linux, and the basics of web development and of running scientific applications on clusters will help with the hands-on exercises but is not required. Attendees will be provided with a sample hosted Django tenant connected to SciGaP services and will have access to a virtual cluster on XSEDE’s Jetstream with pre-installed applications.
  • Technology Requirements and Setup: See https://apache-airavata-django-portal.readthedocs.io/en/latest/tutorial/gateways_tutorial/
  • Agenda: https://cwiki.apache.org/confluence/display/AIRAVATA/Gateways20+Tutorial 



Monday October 12, 2020 1:00pm - 4:30pm EDT
Tutorial Rooms

1:00pm EDT

How to Build and Engage with Your Community
Community engagement is one of the cornerstones of successful science gateways. The “if I build it, they will come” approach seldom works for science gateways: user-driven design and development, close collaboration with the user community, support measures, and outreach from the beginning of a project are among the measures that build and grow a community around a science gateway. An active and growing community contributes to the gateway’s sustainability and, thus, to a sustainable software life cycle. The goal is to enhance computational solutions instead of starting from scratch or reinventing the wheel for every project. Typically, the principal investigators (PIs) leading the creation of science gateways and the gateway creators themselves are domain experts or research computing specialists, not community engagement specialists.
This 3-hour tutorial will introduce success stories in the community engagement area and their specific outreach measures and strategies. The hands-on sessions will let participants discuss and work through exercises for their own projects and science gateways and/or examples given in advance. Exercises include (i) elevator pitches about a science gateway to different target audiences, from the user community to stakeholders to anticipated project partners; (ii) communication exercises for meetings with participants from diverse backgrounds and levels of knowledge; and (iii) BrainTrust exercises in working groups where participants can present or discuss a specific community engagement challenge they face in their project or science gateway. The exercises are inspired by the Science Gateways Community Institute’s (SGCI) successful Focus Week and the successful series of Virtual Residency workshops led by the University of Oklahoma.

  • Audience: Participants of any skill level may attend the tutorial and there are no technical or software requirements.



Monday October 12, 2020 1:00pm - 4:30pm EDT
Tutorial Rooms
 
Tuesday, October 13
 

1:00pm EDT

DataAtRisk.org Workshop
Limit: 50 participants

The DataAtRisk.org Workshop will introduce participants to the NSF science gateways project DataAtRisk.org through hands-on activities with data that is actually at risk. Workshop leaders will introduce the DataAtRisk.org platform, a data help desk for nominating data assets for in-depth data curation, and guide two tracks of data curation activities that highlight different user roles (Advocates and Heroes).

DataAtRisk.org responds to the clear need for a community-building application by connecting data in need to data expertise and resources. Data owners, managers, or others nominate assets for targeted preservation action by data management experts and volunteers, creating a community of Data Heroes who ensure the longevity of data products essential to high-quality research.

DataAtRisk.org is a web platform that creates and leverages a commons-based approach to data stewardship and preservation. Fundamentally a networking tool that connects users with data in need to data stewardship experts, DataAtRisk.org allows users to nominate or identify threatened data with substantial research value for a team of volunteers to evaluate, curate, and deposit in appropriate data repositories. This workshop will be split into two tracks according to the design of the DataAtRisk.org system: one for participants to evaluate datasets for curation requirements and another for participants to work in groups to execute pre-identified curation “tasks.”
  • Audience: The DataAtRisk.org Workshop is suitable for participants with any level of data curation skills, including those new to the topic.
  • Technology Requirements: Participants will not require any additional equipment or setup beyond a computer and internet access.



Tuesday October 13, 2020 1:00pm - 4:30pm EDT
Tutorial Rooms

1:00pm EDT

Open OnDemand, XDMoD, and ColdFront: an HPC center management toolset
The University at Buffalo Center for Computational Research (UB CCR) and Ohio Supercomputer Center (OSC) team up to offer HPC systems personnel a step-by-step tutorial for installing, configuring and using what many centers now consider vital software products for managing and enabling access to their resources. UB CCR offers two open source products - an allocations management system, ColdFront, and an HPC metrics & data analytics tool, XDMoD. OSC provides the open source Open OnDemand portal for easy, seamless web-based access for users to HPC resources. These three tools have been designed to work together to provide a full package of HPC center management and access products. In this tutorial the system administrators and software developers from OSC and UB CCR will provide an overview of the installation and configuration of each of these software packages. We’ll show how to use these three products in conjunction with each other and the Slurm job scheduler.
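
As a small, illustrative aside (not part of the tutorial materials), the raw material behind HPC job metrics is Slurm's accounting database; the sketch below, assuming the standard sacct command is on the PATH, simply dumps a few accounting fields of the kind a metrics tool such as XDMoD ultimately ingests with its own tooling.

    import subprocess

    # Illustrative only: pull recent job accounting records from Slurm via sacct.
    # XDMoD has its own ingestion pipeline; this just shows the underlying data.
    result = subprocess.run(
        ["sacct", "--allusers", "--starttime", "2020-10-01",
         "--format=JobID,User,Partition,Elapsed,State", "--parsable2"],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.splitlines():
        print(line.split("|"))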

We will begin the tutorial with a short overview of each software product and how they tie together to provide seamless management of an HPC center. We’ll spend the first half focusing on ColdFront and XDMoD. The second half will be spent on OnDemand including a demonstration of configuring interactive apps for use on the cluster. We’ll end with instructions on how to tie together XDMoD with OnDemand for access to job metrics within OnDemand. Due to the pace of this workshop, we do not anticipate attendees will be able to follow along step-by-step. We will provide a full Docker cluster-in-a-container environment with instructions for attendees to complete outside of the workshop. In addition to this, we will offer post-workshop Zoom sessions and Slack channels for each of the three software products so attendees can ask specific questions of the individual development teams.
  • Audience: Target audience is HPC system administrators and user support personnel. 
  • Skill level: No experience necessary for presentation portion of tutorial. Intermediate experience recommended for utilizing the Docker cluster-in-a-container environment.
  • Prerequisites: An understanding of HPC clusters and batch scheduling is highly recommended. Docker experience is helpful, but not required for using the toolset environment provided.
  • Technology Requirements:  Only a connection to Zoom is required (provided through the QiqoChat conference platform). However, to utilize the HPC toolset cluster-in-a-container provided as part of the tutorial, users will need to install Docker and any prerequisites. Installation information can be found here:  https://github.com/ubccr/hpc-toolset-tutorial


Tuesday October 13, 2020 1:00pm - 4:30pm EDT
Tutorial Rooms
 
Wednesday, October 14
 

1:00pm EDT

Secure Coding Practices & Automated Assessment Tools
Limit: 30 participants

High performance computing increasingly involves the development and deployment of network and cloud services to access resources for computation, communication, data, instruments, and analytics. Unique to the HPC field is the large amount of software that we develop to drive these services. These services must assure data integrity and availability, while providing access to a global scientific and engineering community.
Securing your network is not enough. Every service that you deploy is a window into your data center from the outside world, and a window that could be exploited by an attacker.

This tutorial is relevant to anyone wanting to learn about minimizing security flaws in the software they develop or manage. We share our experiences gained from performing vulnerability assessments of critical middleware. You will learn skills critical for software developers and analysts concerned with security.

Software assurance tools – tools that scan the source or binary code of a program to find weaknesses – are the first line of defense in assessing the security of a software project. These tools can catch flaws in a program that affect both the correctness and safety of the code. This tutorial is also relevant to anyone wanting to learn how to use these automated assessment tools to minimize security flaws in the software they develop or manage.
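
As a small illustration of the kind of weakness such tools report (our example, not drawn from the tutorial materials), consider shelling out to an operating system command with unsanitized user input. Most source-code scanners flag the first function below as a command-injection risk; the second avoids the shell entirely.

    import subprocess

    def archive_user_file(filename: str) -> None:
        # Flagged by most scanners: the filename is interpolated into a shell
        # command, so input like "data.txt; rm -rf ~" would also be executed.
        subprocess.run(f"tar czf backup.tar.gz {filename}", shell=True, check=True)

    def archive_user_file_safely(filename: str) -> None:
        # Safer: pass the arguments as a list so no shell ever parses the input.
        subprocess.run(["tar", "czf", "backup.tar.gz", filename], check=True)
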
  • Audience: Developers wishing to minimize the security flaws in the software they develop, and anyone involved with the development, deployment, assessment, or management of critical software.
  • Skill level: 50% beginner, 25% intermediate, 25% advanced
  • Prerequisites: To gain maximum benefit from this tutorial, attendees should be familiar with the process of developing software and at least one of the Java, C, C++ or scripting programming languages. This tutorial does not assume any prior knowledge of security assessment or vulnerabilities.
  • Advance Setup: This tutorial includes hands-on exercises, and a few steps will help you prepare. Instructions for downloading the virtual machine image we'll use for the hands-on exercises are at https://www.cs.wisc.edu/mist/tutorial-instructions.pdf. Please follow those instructions, and feel free to contact elisa@cs.wisc.edu with any questions or issues.


Wednesday October 14, 2020 1:00pm - 4:30pm EDT
Tutorial Rooms

1:00pm EDT

Securing Science Gateways with Custos Services
The authors present Custos, a cybersecurity service based on open source software that helps science gateways manage user identities, integrate with federated authentication systems, manage secrets such as OAuth2 access tokens and SSH keys needed to connect to remote resources, and manage groups and access permissions to digital objects. This tutorial will provide an overview of Custos’s capabilities, offer hands-on exercises on using its features, demonstrate to gateway providers how to integrate the services into their gateways with software development kits for the Custos API, introduce developers to the code and how to review and contribute to it, and supply gateway providers with information on how Custos services are deployed for high availability and fault tolerance and how Custos operations handle incident response.

Additional documentation for this tutorial is available at: https://cwiki.apache.org/confluence/display/CUSTOS/Custos+Gateways+2020+Tutorial
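
To sketch the general pattern (the tutorial uses the official Custos SDKs rather than this code), a gateway backend typically exchanges a service credential for the secrets it needs at job-launch time. The deployment URL, endpoint path, and field names below are hypothetical placeholders, not the real Custos API.

    import requests

    CUSTOS_URL = "https://custos.example.org"    # hypothetical deployment URL
    GATEWAY_ID = "my-gateway"                    # hypothetical tenant identifier
    SERVICE_TOKEN = "..."                        # e.g., obtained via an OAuth2 client-credentials flow

    # Hypothetical call: fetch the SSH credential the gateway uses to reach an HPC host.
    resp = requests.get(
        f"{CUSTOS_URL}/secrets/ssh/{GATEWAY_ID}",
        headers={"Authorization": f"Bearer {SERVICE_TOKEN}"},
    )
    resp.raise_for_status()
    ssh_key = resp.json()["private_key"]         # illustrative field name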

  • Audience: All interested attendees are welcome. The hands-on portions are intended for developers and gateway operators, but any interested observer can follow the tutorial. The hands-on segments assume familiarity with science gateway and web development concepts and require intermediate-level familiarity with the Python programming language.
  • Skill Level: Intermediate for hands-on.
  • Prerequisites: All interested participants are welcome, but those joining the hands-on portion should be familiar with Python, the basics of running scientific applications on clusters, and the basics of web application and REST API development.
  • Technology Requirements: A commonly used Web browser and Terminal application.


Wednesday October 14, 2020 1:00pm - 4:30pm EDT
Tutorial Rooms
 
Thursday, October 15
 

1:00pm EDT

A Deep Dive into Constructing Containers for Scientific Computing and Gateways (Part 1 of 2)
NOTE: This is a 2-part tutorial on Thursday and Friday.

In recent years, containers have been rapidly gaining traction as a way to lower the barriers to using more software on HPC and cloud resources. However, significant barriers still exist to actually doing this in practice, particularly for well-established community codes that expect to run on a particular operating system version or resource. Additional barriers exist for researchers unfamiliar with containerization technologies. While many beginner tutorials are available for building containers, they often stop short of covering the complexities that can arise when containerizing scientific computing software. The goal of this tutorial is to demonstrate and work through building and running non-trivial containers with users. We will containerize community scientific software, demonstrate how to share it with the larger community via a container registry, and then run it on a completely separate HPC resource, with and without the use of a science gateway. The subject matter will be approachable for intermediate to advanced users, and is expected to be of interest to a diverse audience including researchers, support staff, and teams building science gateways.
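
As a rough sketch of the build-and-share half of that workflow (the hands-on sessions use the Jetstream training environment rather than this exact code), the Docker SDK for Python can drive an image build from a Dockerfile and push the result to a registry. The image name and registry host below are placeholders.

    import docker

    client = docker.from_env()

    # Build an image from a directory containing a Dockerfile for the scientific code.
    image, build_log = client.images.build(
        path="./my-science-app",
        tag="registry.example.org/lab/my-science-app:1.0",  # placeholder registry/tag
    )

    # Push the image so collaborators, or an HPC container runtime, can pull and run it.
    for line in client.images.push("registry.example.org/lab/my-science-app",
                                   tag="1.0", stream=True, decode=True):
        print(line)
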
  • Audience: Gateway developers, Researchers (grad students, faculty, etc.), XSEDE Campus Champions/ACI-Ref Facilitators, Campus research computing staff. Participants do not need to be experts, but a basic comfort level with opening, editing, and saving files in a command-line environment is expected.
  • Skill level: Intermediate and Advanced users. The tutorial will be about 60-70% hands-on content. It will make use of several virtual machines (instances) on the Jetstream cloud; participants registered for the tutorial WITH the training account will have access to a development/build machine for creating new container images, a separate instance running a container registry to store the images, and a virtual HPC cluster for actually running jobs using their containerized software.
  • Prerequisites: Basic Linux knowledge (especially command-line), text editor skills (vi/vim, emacs, or nano), basic HPC concepts knowledge a plus.
  • Technology Requirements: Computer with browser, Terminal application with ssh and ability to copy-paste, and knowledge of a command-line text editor such as vi/vim, emacs, or nano.


Thursday October 15, 2020 1:00pm - 4:30pm EDT
Tutorial Rooms

1:00pm EDT

Simplifying Science Gateway Data Management with Globus
Globus is a service for transferring and sharing data in the research community. Globus is designed to manage data at the scales seen in data-intensive research and in scenarios involving multiple institutions. Globus has been continuously managing data transfers for the research community for more than ten years. This scenario-driven, 180-minute tutorial introduces intermediate to advanced researchers and science gateway developers to a series of web applications that use Globus to overcome data management challenges. Attendees will leave the tutorial with a clear understanding of the kinds of challenges Globus can help them with, online resources they can use to reproduce the solutions presented in the tutorial, and new peer contacts they will benefit from throughout their research careers.
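
For a concrete taste of the kind of task the tutorial addresses (a minimal sketch assuming the Globus Python SDK, globus-sdk v3, not the tutorial's own materials), the snippet below submits a transfer between two endpooints. The client ID, endpoint UUIDs, and paths are placeholders, and a production gateway would typically use a confidential client rather than this interactive login flow.

    import globus_sdk

    # Placeholder IDs -- a real gateway registers its own client and endpoints.
    CLIENT_ID = "00000000-0000-0000-0000-000000000000"
    SRC_ENDPOINT = "11111111-1111-1111-1111-111111111111"
    DST_ENDPOINT = "22222222-2222-2222-2222-222222222222"

    # Interactive native-app login flow (for illustration only).
    auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
    auth_client.oauth2_start_flow(requested_scopes=globus_sdk.scopes.TransferScopes.all)
    print("Log in at:", auth_client.oauth2_authorize_url())
    tokens = auth_client.oauth2_exchange_code_for_tokens(input("Auth code: "))
    transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

    # Describe and submit the transfer.
    tc = globus_sdk.TransferClient(authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token))
    tdata = globus_sdk.TransferData(tc, SRC_ENDPOINT, DST_ENDPOINT, label="example transfer")
    tdata.add_item("/data/results.tar.gz", "/ingest/results.tar.gz")
    task = tc.submit_transfer(tdata)
    print("Task ID:", task["task_id"])
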
  • Audience: Intermediate to advanced science gateway developers and researchers. Throughout the tutorial, we will invite attendees to share their own science gateway challenges and experiences, encouraging interactions between individuals with similar gateways, research interests, and professional environments.


Presenters
Lee Liming

Subscriber Engagement Manager, Globus, University of Chicago
I can tell you all about Globus, our services, and our subscriptions for campuses and other research organizations. I am also part of the XSEDE team, focused on user requirements management and integrating other services with XSEDE.


Thursday October 15, 2020 1:00pm - 4:30pm EDT
Tutorial Rooms
 
Friday, October 16
 

1:00pm EDT

A Deep Dive into Constructing Containers for Scientific Computing and Gateways (Part 2 of 2)
NOTE: This is a 2-part tutorial which began on Thursday.

In recent years, containers have been rapidly gaining traction as a way to lower the barriers to using more software on HPC and cloud resources. However, significant barriers still exist to actually doing this in practice, particularly for well-established community codes that expect to run on a particular operating system version or resource. Additional barriers exist for researchers unfamiliar with containerization technologies. While many beginner tutorials are available for building containers, they often stop short of covering the complexities that can arise when containerizing scientific computing software. The goal of this tutorial is to demonstrate and work through building and running non-trivial containers with users. We will containerize community scientific software, demonstrate how to share it with the larger community via a container registry, and then run it on a completely separate HPC resource, with and without the use of a science gateway. The subject matter will be approachable for intermediate to advanced users, and is expected to be of interest to a diverse audience including researchers, support staff, and teams building science gateways.
  • Audience: Gateway developers, Researchers (grad students, faculty, etc.), XSEDE Campus Champions/ACI-Ref Facilitators, Campus research computing staff. Participants do not need to be experts, but a basic comfort level with opening, editing, and saving files in a command-line environment is expected.
  • Skill level: Intermediate and Advanced users. The tutorial will be about 60-70% hands-on content. It will make use of several virtual machines (instances) on the Jetstream cloud; participants registered for the tutorial WITH the training account will have access to a development/build machine for creating new container images, a separate instance running a container registry to store the images, and a virtual HPC cluster for actually running jobs using their containerized software.
  • Prerequisites: Basic Linux knowledge (especially command-line), text editor skills (vi/vim, emacs, or nano), basic HPC concepts knowledge a plus.
  • Technology Requirements: Computer with browser, Terminal application with ssh and ability to copy-paste, and knowledge of a command-line text editor such as vi/vim, emacs, or nano.



Friday October 16, 2020 1:00pm - 4:30pm EDT
Tutorial Rooms
 