SC21 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Optimizing and Extending the Functionality of EXARL for Scalable Reinforcement Learning

Authors: Sai Chenna (Los Alamos National Laboratory, University of Florida); Katherine Cosburn (Los Alamos National Laboratory, University of New Mexico); Uchenna Ezeobi (Los Alamos National Laboratory; University of Colorado, Colorado Springs); and Maxim Moraru, Hyun Lim, Julien Loiseau, Jamal Mohd-Yusof, Robert Pavel, Vinay Ramakrishnaiah, Andrew Reisner, and Karen Tsai (Los Alamos National Laboratory)

Abstract: Easily eXtendable Architecture for Reinforcement Learning (EXARL) is a scalable reinforcement learning framework, part of the Exascale Computing Project (ECP)-funded ExaLearn program, designed to facilitate reinforcement learning (RL) research for complex scientific environments. In RL, agents are algorithms that interact with and learn from an environment, such as a game or a scientific simulation, with the aim of maximizing reward. We extend the functionality of the existing EXARL framework by incorporating additional agents, such as Advantage Actor-Critic (A2C/A3C) and Twin Delayed Deep Deterministic Policy Gradient (TD3), and by improving both these and the existing agents with V-trace and prioritized experience replay. We also improve the scalability of the framework by optimizing communication across multiple learners and between actors and learners, and we accelerate the data-generation pipeline of the Deep Q-Network (DQN) agent by leveraging environment processes.
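As background for the agent-environment interaction the abstract describes, the following is a minimal, self-contained sketch of the standard RL loop. It is purely illustrative and assumes nothing about EXARL's actual API: the environment, agent classes, and method names (`reset`, `step`, `act`, `learn`) are hypothetical stand-ins modeled on common RL conventions.

```python
import random

class ToyEnvironment:
    """Illustrative stand-in for a scientific environment (not EXARL's API):
    the agent must hit a hidden target action; reward is 1 on a correct guess."""

    def __init__(self, n_actions=4):
        self.n_actions = n_actions
        self.target = random.randrange(n_actions)
        self.steps = 0

    def reset(self):
        self.steps = 0
        return 0  # trivial single-state observation

    def step(self, action):
        self.steps += 1
        reward = 1.0 if action == self.target else 0.0
        done = self.steps >= 20  # fixed-length episode
        return 0, reward, done

class RandomAgent:
    """Baseline agent acting uniformly at random; a learning agent
    (e.g. DQN or TD3) would update its policy in learn() instead."""

    def __init__(self, n_actions):
        self.n_actions = n_actions

    def act(self, state):
        return random.randrange(self.n_actions)

    def learn(self, state, action, reward, next_state, done):
        pass  # placeholder for a learning update

# The standard RL interaction loop: the agent acts, the environment
# returns a reward, and the agent learns to maximize cumulative reward.
env = ToyEnvironment()
agent = RandomAgent(env.n_actions)
state = env.reset()
total_reward, done = 0.0, False
while not done:
    action = agent.act(state)
    next_state, reward, done = env.step(action)
    agent.learn(state, action, reward, next_state, done)
    total_reward += reward
    state = next_state
```

In a distributed setting such as EXARL's, many copies of this loop (actors) generate experience in parallel and ship it to one or more learners, which is where the communication optimizations described above apply.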

Best Poster Finalist (BP): no

