
HammingMesh: A Network Topology for Large-Scale Deep Learning / Hoefler, Torsten; Bonato, Tommaso; De Sensi, Daniele; Di Girolamo, Salvatore; Li, Shigang; Heddes, Marco; Belk, Jon; Goel, Deepak; Castro, Miguel; Scott, Steve. - (2022). (Paper presented at the International Conference for High Performance Computing, Networking, Storage and Analysis (formerly the Supercomputing Conference), held in Dallas).

HammingMesh: A Network Topology for Large-Scale Deep Learning

Daniele De Sensi
2022

Abstract

Numerous microarchitectural optimizations unlocked tremendous processing power for deep neural networks, which in turn fueled the AI revolution. With the exhaustion of such optimizations, the growth of modern AI is now gated by the performance of training systems, especially their data movement. Instead of focusing on single accelerators, we investigate the data-movement characteristics of large-scale training at full system scale. Based on our workload analysis, we design HammingMesh, a novel network topology that provides high bandwidth at low cost with high job-scheduling flexibility. Specifically, HammingMesh can provide full bandwidth and isolation to deep learning training jobs with two dimensions of parallelism. Furthermore, it also supports high global bandwidth for generic traffic. Thus, HammingMesh will power future large-scale deep learning systems with extreme bandwidth requirements.
2022
International Conference for High Performance Computing, Networking, Storage and Analysis (was Supercomputing Conference)
Networking; Deep Learning; Clusters
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1661238
Warning: the data displayed have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 7
  • Web of Science (ISI): 2