Passel: Improved Scalability and Efficiency of Distributed SVM Using a Cacheless PGAS Migrating Thread Architecture
Event Type
Workshop
Tags
Online Only
Algorithms
Extreme Scale Computing
Registration Categories
W
Time
Friday, 19 November 2021, 10:50am - 11:10am CST
Location
Online
Description
Stochastic Gradient Descent (SGD) is a valuable algorithm for large-scale machine learning, but it has proven difficult to parallelize on conventional architectures because of communication and memory access issues. The HogWild series of mixed logically distributed and physically multi-threaded algorithms overcomes these issues for problems with sparse characteristics by using multiple local model vectors with asynchronous atomic updates. While this approach has proven effective for several reported examples, there are others, especially very sparse cases, that do not scale as well. This paper discusses an SGD Support Vector Machine (SVM) on a cacheless migrating thread architecture, using the HogWild algorithms as a framework. Our implementations on this novel architecture achieved superior hardware efficiency and scalability over that of a conventional cluster using MPI. Furthermore, these improvements were gained using naive data partitioning techniques and hardware with substantially less compute capability than that present in conventional systems.
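The listing does not include code. As a rough, hypothetical illustration of the HogWild-style update the description refers to, the following C++ sketch runs several threads that apply sparse hinge-loss SGD steps directly to a shared weight vector without locks; all names, data, and hyperparameters here are invented for illustration and are not taken from the paper.

```cpp
#include <cstdio>
#include <random>
#include <thread>
#include <utility>
#include <vector>

// Sparse sample: (feature index, value) pairs plus a +/-1 label.
struct Sample {
    std::vector<std::pair<int, double>> features;
    int label; // +1 or -1
};

// One worker: repeatedly picks a random sample and applies an SGD step
// for the regularized hinge loss, writing straight into the shared
// weight vector with no locking (HogWild-style lock-free updates).
void worker(std::vector<double>& w, const std::vector<Sample>& data,
            double lr, double lambda, int steps, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<size_t> pick(0, data.size() - 1);
    for (int t = 0; t < steps; ++t) {
        const Sample& s = data[pick(rng)];
        // Margin: y * <w, x> over the sample's nonzero features only.
        double margin = 0.0;
        for (const auto& [j, v] : s.features) margin += w[j] * v;
        margin *= s.label;
        // Sub-gradient step touching only the coordinates present in
        // this sparse sample, as in sparse HogWild-style SGD.
        for (const auto& [j, v] : s.features) {
            double grad = lambda * w[j];
            if (margin < 1.0) grad -= s.label * v;
            w[j] -= lr * grad; // unsynchronized write by design
        }
    }
}

int main() {
    const int dim = 8;
    // Tiny toy set: the label follows the sign of feature 0.
    std::vector<Sample> data = {
        {{{0,  1.0}, {3, 0.5}}, +1},
        {{{0,  2.0}, {5, 0.2}}, +1},
        {{{0, -1.5}, {2, 0.7}}, -1},
        {{{0, -0.8}, {6, 0.4}}, -1},
    };
    std::vector<double> w(dim, 0.0); // shared model vector

    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t)
        threads.emplace_back(worker, std::ref(w), std::cref(data),
                             0.1, 0.01, 10000, 1234u + t);
    for (auto& th : threads) th.join();

    for (int j = 0; j < dim; ++j) std::printf("w[%d] = %.3f\n", j, w[j]);
    return 0;
}
```

The deliberately unsynchronized writes are the point of the HogWild approach on sparse data: concurrent updates rarely touch the same coordinates, so the occasional lost update costs little accuracy while avoiding the communication and locking overhead the description cites as the scaling bottleneck.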