Network Research Exhibition

The SC Conference Series is a test bed for cutting-edge developments in high-performance networking, computing, storage, and analysis. Network Research Exhibition (NRE) demonstrations leverage the advanced capabilities of SCinet, SC’s dedicated high-capacity network.

Additionally, each year, a selection of NRE participants are invited to share the results of their demos and experiments from the preceding year’s conference as part of the Innovating the Network for Data-Intensive Science (INDIS) Workshop.

Network researchers and professionals from government, education, research, and industry are invited to submit proposals for demonstrations and experiments at the SC Conference that display innovation in emerging network hardware, protocols, and advanced network-intensive scientific applications.

NRE Topics

Topics for the Network Research Exhibition demos and experiments may include:

  • Software-defined networking
  • Novel network architecture
  • Switching and routing
  • Alternative data transfer protocols
  • Network monitoring, management, and control
  • Network security, encryption, and resilience
  • Open clouds and storage area networks
  • Automation and AI tools
  • Real-time data applications

Accepted NRE Demos

SC21-NRE-001 PDF
Hecate: Towards Self-Driving Networks in the Real World
Location: Booth 2227 (Department of Energy)

Traffic optimization and path computation are challenging tasks for network engineers. Technologies such as Google’s B4, SWAN, and MPLS-TE require meticulously designed heuristics to calculate optimal routing strategies and do not take traffic characteristics into account. We demonstrate Hecate, an AI-driven solution that applies data-driven deep reinforcement learning to traffic characteristics, network conditions, and historical behavior to determine optimal traffic engineering patterns. Hecate is designed as a stand-alone system that can be plugged into existing network setups to optimize their traffic engineering without excessive human interaction.
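
As a rough illustration of the idea, the sketch below shows a minimal reinforcement-learning control loop that picks among candidate paths based on observed link utilization. The class and function names (TrafficState, choose_path, update) and the tabular policy are simplifications assumed for illustration; they are not part of the actual Hecate system, which uses deep reinforcement learning.

```python
# Minimal, hypothetical sketch of the control loop behind a learning-based
# traffic-engineering agent. A tabular policy stands in for the deep-RL model.
import random
from dataclasses import dataclass

@dataclass
class TrafficState:
    link_utilization: tuple   # observed utilization per link, 0.0-1.0
    demand_gbps: float        # size of the flow to be placed

# Policy table keyed by (coarse state, candidate path index)
q_table = {}

def coarse(state: TrafficState) -> tuple:
    # Discretize utilization so similar traffic conditions share table entries.
    return tuple(round(u, 1) for u in state.link_utilization)

def choose_path(state: TrafficState, n_paths: int, epsilon=0.1) -> int:
    # Epsilon-greedy action selection over candidate paths.
    if random.random() < epsilon:
        return random.randrange(n_paths)
    key = coarse(state)
    scores = [q_table.get((key, p), 0.0) for p in range(n_paths)]
    return scores.index(max(scores))

def update(state: TrafficState, path: int, reward: float, lr=0.1):
    # The reward could be, e.g., the negative of the maximum link utilization
    # after placing the flow, so the agent learns to avoid creating hotspots.
    key = (coarse(state), path)
    q_table[key] = q_table.get(key, 0.0) + lr * (reward - q_table.get(key, 0.0))
```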

SC21-NRE-002 PDF
LHC Networking and NOTED
Location: Booth 2727 (StarLight)

This demo presents NOTED (Network Optimized for Transfer of Experimental Data), an experimental technique being developed by CERN for potential use by the Large Hadron Collider (LHC) networking community. This SC21 NRE will demonstrate the capabilities of NOTED on an international networking testbed. The goal of the NOTED project is to optimize transfers of LHC data among sites by addressing problems such as saturation, contention, congestion, and other impairments.

SC21-NRE-003 PDF
Towards Autonomous Quantum Network Control
Location: Booth 2227 (Department of Energy)

Quantum networks will disrupt how we think about supercomputing networks. Quantum network testbed programs prompt the development of advanced management and control strategies for distributing entanglement across a multi-node quantum network. Long-distance distribution of quantum information is needed for preliminary demonstrations and exploration of quantum network applications, but it is challenging due to errors arising from decoherence channels in quantum network components. In this demo, we address these challenges through the development of advanced control routines that improve the Quality of Entanglement (QoE) between network nodes, demonstrating a truly distributed quantum network.
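
The sketch below illustrates, at a very high level, the kind of feedback loop such control routines imply: monitor an entanglement-quality metric per node pair and react when it degrades. The threshold and the helper functions (measure_fidelity, purify, regenerate_pair) are hypothetical placeholders, not interfaces from the actual demo.

```python
# Illustrative sketch only: a feedback loop that watches an entanglement
# quality metric between node pairs and reacts when it drops. The helpers
# passed in are hypothetical placeholders for testbed-specific operations.
FIDELITY_THRESHOLD = 0.9   # assumed target QoE for this illustration

def control_step(node_pairs, measure_fidelity, purify, regenerate_pair):
    for pair in node_pairs:
        fidelity = measure_fidelity(pair)      # estimate current entanglement quality
        if fidelity >= FIDELITY_THRESHOLD:
            continue                           # entanglement is good enough
        if fidelity > 0.5:
            purify(pair)                       # try to improve the existing pair
        else:
            regenerate_pair(pair)              # decoherence too severe; start over
```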

SC21-NRE-004 PDF
Global Research Platform: A Distributed Environment for Science Research and New Knowledge Discovery
Location: Booth 2727 (StarLight)

An international collaboration has been established to design, develop, implement, and operate a highly distributed environment, the Global Research Platform (GRP), for large-scale international science collaborations. For the SC21 NRE, the GRP will provide orchestration and monitoring services for remote and show-floor science resources in support of a number of NRE projects, experiments, and demonstrations. These experiments and demonstrations showcase the capabilities of the GRP to support large-scale, data-intensive, worldwide science research. Additional demonstrations will showcase globally accessible DTN-as-a-Service capabilities and network programming, including data-plane programming with P4 and Kubernetes as a large-scale orchestrator for highly distributed workflows.

SC21-NRE-005 PDF
1.2 Tbps WAN Services: Architecture, Technology and Control Systems
Location: Booth 2727 (StarLight)

Data production among science research collaborations continues to increase, a long-term trend that will accelerate with the advent of new science instrumentation, including planned high-luminosity research instrumentation. Consequently, the networking community must begin preparing for service paths beyond 100 and 400 Gbps, including multi-Tbps WAN and LAN services. Before 100 Gbps WAN/LAN services were widely deployed, it was necessary to develop techniques to effectively utilize that level of capacity. Today, the requirements and implications of multi-Tbps WAN and LAN services must be explored. These demonstrations showcase large-scale 1.2 Tbps WAN services from the StarLight International/National Communications Exchange Facility in Chicago to the SC21 venue.

SC21-NRE-006 PDF
400 Gbps E2E WAN Services: Architecture, Technology and Control Systems
Location: Booth 2727 (StarLight)

Data production among science research collaborations continues to increase, a long-term trend that will accelerate with the advent of high-luminosity research instrumentation. Consequently, the networking community must begin preparing for service paths beyond 100 Gbps, including 400 Gbps WAN and LAN services. Before 100 Gbps WAN/LAN services were widely deployed, it was necessary to develop techniques to effectively utilize that level of capacity. Today, the requirements and implications of 400 Gbps WAN services must be explored at scale. These demonstrations showcase large-scale E2E 400 Gbps WAN services from the StarLight International/National Communications Exchange Facility in Chicago to the SC21 venue.

SC21-NRE-007 PDF
IRNC Software Defined Exchange (SDX) International Testbed Integration
Location: Booth 2727 (StarLight)

Computer science requires experimental research on testbeds at scale, including those that are implemented world-wide. iCAIR is designing, developing, implementing and experimenting with an International Software Defined Exchange (SDX) at the StarLight International/National Communications Exchange Facility, which supports a multi-service platform that enables integration of resources world-wide including computer science testbeds. This demonstration will showcase a testbed integration prototype implemented among several international testbeds.

SC21-NRE-008 PDF
IRNC Software Defined Exchange (SDX) Multi-Services for Petascale Science
Location: Booth 2727 (StarLight)

iCAIR is designing, developing, implementing and experimenting with an international Software Defined Exchange (SDX) at the StarLight International/National Communications Exchange Facility, which integrates multiple services based on a flexible, scalable, programmable platform. This SDX has been proven able to integrate multiple different types of services and to enable service isolation. Services include those based on 100 Gbps Data Transfer Nodes (DTNs) for Wide Area Networks (WANs), including transoceanic WANs, to provide high-performance transport services for petascale science, controlled using Software Defined Networking (SDN) techniques. SDN-enabled DTN services are being designed specifically to optimize capabilities for supporting large-scale, high-capacity, high-performance, reliable, high-quality, sustained individual data streams for science research.

SC21-NRE-009 PDF
StarLight DTN-as-a-Service and Kubernetes Integration for High-Performance Data Movement Support with Research Platforms
Location: Booth 2727 (StarLight)

DTN-as-a-Service focuses on moving large data sets in a cloud environment such as Kubernetes to improve the performance of data movement over high-performance networks. We implement cloud-native services for data movement within and among Kubernetes clouds through the DTN-as-a-Service framework, which sets up, optimizes, and monitors the underlying system and network. DTN-as-a-Service provides APIs to identify, examine, and tune the underlying node for high-performance data movement in Kubernetes and enables data movement over long-distance networks. To map the big-data transfer workflow to a science workflow, a controller is implemented in Jupyter notebooks, a popular tool for data science.
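
A hypothetical sketch of how such a notebook-based controller might exercise those APIs is shown below; the service URL, endpoint paths, and field names are assumptions for illustration and are not the project's published interface.

```python
# Hypothetical sketch of a Jupyter-based controller driving a DTN-as-a-Service
# style API. Endpoint paths and fields are invented for illustration only.
import requests

DTNAAS = "http://dtnaas.example.org:5000"   # assumed service endpoint

# Inspect the node: NIC, MTU, congestion control, and other tuning state.
profile = requests.get(f"{DTNAAS}/api/nodes/dtn-01/profile").json()
print(profile["nic"], profile["mtu"], profile["tcp_congestion_control"])

# Ask the service to apply high-throughput tuning (e.g., BBR, jumbo frames).
requests.post(f"{DTNAAS}/api/nodes/dtn-01/tune",
              json={"congestion_control": "bbr", "mtu": 9000})

# Start a long-distance transfer between two Kubernetes-hosted DTN pods
# and poll its status.
job = requests.post(f"{DTNAAS}/api/transfers",
                    json={"src": "dtn-01:/data/lhc", "dst": "dtn-eu:/data/lhc"}).json()
status = requests.get(f"{DTNAAS}/api/transfers/{job['id']}").json()
print(status["state"], status["gbps"])
```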

SC21-NRE-010 PDF
Kubernetes with International P4 Experimental Networks for The Global Research Platform and Other Research Platforms
Location: Booth 2727 (StarLight)

Recent successes of implemented “research platforms” have been demonstrated. These platforms are based on an architecture consisting of different orchestration techniques (e.g., Kubernetes), low management overhead, and tenant-oriented applications. This approach has focused on services for research science communities, especially for data-intensive science. For most research platform usage scenarios, isolation between different research services and other multi-tenant support functions has not been a high-priority concern. However, for research testbeds such as the International P4 Experimental Networks testbed, isolation between different tenant projects is important for a majority of its P4 research projects.

SC21-NRE-011 PDF
High Performance Data Transfer Nodes for Petascale Science with NVMe-over-Fabrics as Microservice
Location: Booth 2727 (StarLight)

PetaTrans with NVMe-over-Fabrics as a microservice is a research project aimed at improving large-scale WAN microservices for streaming and transferring large data among high-performance Data Transfer Nodes (DTNs). We are designing, implementing, and experimenting with NVMe-over-Fabrics on 100 Gbps Data Transfer Nodes (DTNs) over large-scale, long-distance networks with direct NVMe-to-NVMe service connections. The NVMe-over-Fabrics microservice connects remote NVMe devices without userspace applications, which reduces overhead in high-performance transfers. The primary advantage of the NVMe-over-Fabrics microservice is that it can be deployed on multiple DTNs as a container.
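
One plausible way a containerized NVMe-over-Fabrics microservice might attach a remote NVMe subsystem is sketched below, using the standard nvme-cli tool; the address, port, and NQN are invented for illustration and do not reflect the project's actual deployment.

```python
# A plausible sketch (not the project's code) of what an NVMe-over-Fabrics
# microservice container might do on startup: attach a remote NVMe subsystem
# so data can move NVMe-to-NVMe without a userspace copy path.
import subprocess

REMOTE_ADDR = "192.0.2.10"                       # remote DTN (example address)
SUBSYS_NQN = "nqn.2021-11.org.example:dtn-nvme"  # example subsystem NQN

def connect_remote_nvme():
    # Uses the standard nvme-cli; requires the nvme-rdma kernel module.
    subprocess.run(
        ["nvme", "connect",
         "--transport", "rdma",
         "--traddr", REMOTE_ADDR,
         "--trsvcid", "4420",
         "--nqn", SUBSYS_NQN],
        check=True,
    )

if __name__ == "__main__":
    connect_remote_nvme()
    # After this, the remote namespace appears as a local /dev/nvmeXnY device.
```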

SC21-NRE-012 PDF
In-Band Network Telemetry @ AmLight
Location: Booth 2835 (SCinet Theater)

The AmLight network uses a hybrid network strategy that combines optical spectrum and leased capacity to build a reliable, leading-edge network infrastructure for research and education. AmLight supports the high-performance network connectivity required by international science and engineering research and education collaborations involving the National Science Foundation (NSF) research community, with expansion to South America and West Africa. AmLight has implemented a flexible Software-Defined Networking (SDN) fabric to support network experimentation, flexible forwarding pipelines, and the deployment of new network functions. AmLight offers the academic community 630 Gbps of upstream bandwidth to the U.S., dynamic provisioning, network programmability, network telemetry, integration with academic distributed orchestrators, and 100G DTNs.
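
To illustrate what in-band telemetry yields, the sketch below turns simplified per-hop INT metadata (switch ID, timestamps, queue depth) into per-hop latency figures; the record layout is an assumption for illustration, not AmLight's actual report format.

```python
# Illustrative sketch of how per-hop In-band Network Telemetry (INT) metadata
# can be turned into per-hop latency and queue-depth measurements. The record
# layout is a simplified assumption, not the deployed report format.
from dataclasses import dataclass

@dataclass
class IntHop:
    switch_id: int
    ingress_ts_ns: int   # when the packet entered the switch
    egress_ts_ns: int    # when the packet left the switch
    queue_depth: int     # queue occupancy observed at egress

def summarize(hops):
    # INT-capable switches push metadata into the packet; the sink exports it.
    for hop in hops:
        latency_us = (hop.egress_ts_ns - hop.ingress_ts_ns) / 1000
        print(f"switch {hop.switch_id}: {latency_us:.1f} us in-switch, "
              f"queue depth {hop.queue_depth}")

summarize([IntHop(1, 1_000, 8_000, 12), IntHop(2, 50_000, 51_500, 3)])
```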

SC21-NRE-014 PDF
A Web-based Service Function Chaining Platform for Real-time Adaptive Networking using DTNs
Location: Booth 2835 (SCinet Theater)

Service identification, path characterization, traffic steering, and dynamic resource allocation remain the major challenges in making virtualization a service-centric solution for supporting new applications. In this project, we propose to demonstrate an orchestration and traffic steering platform that allows users to set up their service function chains (SFCs) dynamically. Flows are classified as small (IoT), medium (streaming), and large (DTN), and these services are deployed as containerized applications. The SFCs connecting these applications are deployed using the Network Service Mesh (NSM) protocol within different Kubernetes domains. User requests are translated into application and network intents, which are then implemented as SFCs.
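
A minimal sketch of the flow-classification and chain-selection idea is shown below; the size thresholds and the contents of each chain are assumptions for illustration rather than the platform's actual policy.

```python
# Minimal sketch of classifying flows and selecting a service function chain.
# Thresholds and chain contents are illustrative assumptions only.
def classify_flow(expected_bytes: int) -> str:
    if expected_bytes < 10 * 1024**2:
        return "small"     # e.g., IoT telemetry
    if expected_bytes < 10 * 1024**3:
        return "medium"    # e.g., streaming
    return "large"         # e.g., DTN bulk transfer

# Each class maps to an ordered service function chain deployed as containers.
SFC_TEMPLATES = {
    "small":  ["firewall", "iot-gateway"],
    "medium": ["firewall", "transcoder", "load-balancer"],
    "large":  ["firewall", "dtn-optimizer"],
}

def build_sfc_request(expected_bytes: int, src: str, dst: str) -> dict:
    cls = classify_flow(expected_bytes)
    # This intent-style document would be handed to the orchestrator, which
    # wires the chain across Kubernetes domains (e.g., via Network Service Mesh).
    return {"class": cls, "src": src, "dst": dst, "chain": SFC_TEMPLATES[cls]}

print(build_sfc_request(50 * 1024**3, "site-a", "site-b"))
```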

SC21-NRE-015 PDF
Bottleneck and AI-Aware Traffic Engineering for Data Intensive Sciences using GradientGraph® and NetPredict Across the Pacific Research Platform and Global Network Advancement Group Multidomain Testbed
Location: Booth 2835 (SCinet Theater)

In this Network Research Exhibition (NRE) demonstration, we will present key technologies and a new operational paradigm for next-generation networks with intelligent AI-empowered control and data planes. The target applications include the most challenging data-intensive science programs, such as the Large Hadron Collider (LHC) and the Vera Rubin Observatory, as well as many other data-intensive applications using a wide area topology spanning local, regional, national, and transoceanic distances.

SC21-NRE-016 PDF
AutoGOLE/SENSE: End-to-End Network Services and Workflow Integration
Location: Booth 2727 (StarLight)

The GNA-G AutoGOLE/SENSE WG demonstration will present key technologies, methods, and a system of dynamic Layer 2 and Layer 3 virtual circuit services to meet the challenges and address the requirements of the largest data-intensive science programs, including the Large Hadron Collider (LHC) and the Vera Rubin Observatory, as well as programs in many other disciplines. The services are designed to support multiple-petabyte transactions across a global footprint, represented by a persistent testbed spanning the US, Europe, Asia Pacific, and Latin American regions.

SC21-NRE-017
Resilient Distributed Processing and Reconfigurable Networks
Location: Booth 2727 (StarLight)

This demonstration will build on our previous demonstrations. We aim to show dynamic arrangement and rearrangement of widely distributed processing of large volumes of data across a set of compute and network resources, organized in response to resource availability and changing application demands. A real-time video processing pipeline will be demonstrated from SC21 to Naval Research Laboratory assets in Washington, DC, and back to SC21. High-volume bulk data will be transferred concurrently across the same data paths. A software-controlled network will be assembled using a number of switches and multiple SCinet 100G/400G connections from DC to St. Louis. We plan to show rapid deployment and redeployment, real-time monitoring, and QoS management of these application data flows with very different network demands. Technologies we intend to leverage include SDN, RDMA, RoCE, NVMe, GPU acceleration, and others.

SC21-NRE-018 PDF
N-DISE: NDN for Data Intensive Science Experiments
Location: Booth 2727 (StarLight)

The NDN for Data Intensive Science Experiments (N-DISE) project aims to accelerate the pace of breakthroughs and innovations in data-intensive science fields such as the Large Hadron Collider (LHC) high energy physics program and the BioGenome and human genome projects. Based on Named Data Networking (NDN), a data-centric future Internet architecture, N-DISE will deploy and commission a highly efficient and field-tested petascale data distribution, caching, access, and analysis system serving major science programs. The N-DISE project will build on recently developed high-throughput NDN caching and forwarding methods and containerization techniques, and will leverage the integration of NDN and SDN systems, concepts, and algorithms with the mainstream data distribution, processing, and management systems of CMS, as well as integration with Field Programmable Gate Array (FPGA) acceleration subsystems, to produce a system capable of delivering LHC and genomic data over a wide area network at throughputs approaching 100 gigabits per second, while dramatically decreasing download times. N-DISE will leverage existing infrastructure and build an enhanced testbed with high-performance NDN data cache servers at participating institutions.
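
As a toy illustration of the name-based retrieval and caching model underlying NDN, the sketch below serves Interests from a local content store and caches Data fetched from upstream; the names and the fetch helper are illustrative, not actual CMS/LHC catalog entries or N-DISE code.

```python
# Toy sketch of NDN-style retrieval: a consumer expresses Interest in a
# hierarchical name, and any node holding the matching Data can answer from
# its content store. Names and the fetch helper are illustrative only.
content_store = {}   # name -> data (the in-network cache)

def on_interest(name: str, fetch_upstream):
    # Serve from cache if we have it; otherwise forward and cache the result.
    if name in content_store:
        return content_store[name]
    data = fetch_upstream(name)
    content_store[name] = data
    return data

# Example: segments of an LHC dataset addressed by name rather than by host.
segment = on_interest("/ndn/cms/datasets/run2021/block042/segment=7",
                      fetch_upstream=lambda n: b"\x00" * 8192)
print(len(segment), "bytes")
```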

NRE Proposal Submissions: Open March 2, 2021

Stage 1: Preliminary Abstract

One-page document outlining the high-level goals and activities related to the proposal. Only a single experiment should be included. Additional experiments should be outlined in separate documents. Details on sources and network needs should be included if known.

  • Submission format: Download Word Template TBA
  • Submission location: NRE Submissions Dropbox
  • Submission deadline: June 16, 2021

Stage 2: Network Requirements

Document outlining the sources (city, state, country of origin), bandwidth needs, VLAN requirements and destination booth drops.

  • Submission format: Any relevant submission style will be accepted.
  • Submission location: NRE Submissions Dropbox
  • Submission deadline: June 26, 2021

Stage 3: Final Publishable Submission

Two- to three-page submission with all relevant details of the activities, drawings and other research details. This document will be published on the SCinet NRE webpage.

  • Submission format: Any relevant submission style will be accepted.
  • Submission location: NRE Submissions Dropbox
  • Submission deadline: October 29, 2021

Stage 4: Research Results and Findings

A new component of the submission process is the submission of findings and results from the demonstrations. These will be collected and provided to the INDIS workshop team for inclusion in the following year’s workshop. The handoff to INDIS will include the final publishable submission as well as a read-out of the results of the demonstration(s).

  • Submission format: Same as the Final Publishable Submission
  • Submission location: NRE Submissions Dropbox
  • Submission deadline: January 28, 2022