Overview

Welcome to the High-Performance Deep Learning (HiDL) project created by the Network-Based Computing Laboratory of The Ohio State University. The availability of large data sets (e.g., ImageNet, PASCAL VOC 2012), coupled with the massively parallel processors in modern HPC systems (e.g., NVIDIA GPUs), has fueled a renewed interest in Deep Learning (DL) algorithms. In addition to massively parallel DL accelerators such as GPUs, the wide availability and large memory capacity of modern CPUs make them a viable alternative for DL training. This resurgence of DL applications has triggered the development of DL frameworks such as Caffe, PyTorch, TensorFlow, Apache MXNet, and CNTK. While most DL frameworks provide experimental support for multi-node training, their distributed implementations are often suboptimal. The objective of the HiDL project is to exploit modern HPC technologies and solutions to scale out and accelerate DL frameworks.

The HiDL packages are being used by more than 75 organizations in 37 countries worldwide (Current Users) to accelerate Deep Learning and Machine Learning applications. As of Feb '23, more than 2,400 downloads have taken place from this project's site. The HiDL project contains the following packages.


MPI-Driven DL Training (TensorFlow, PyTorch, MXNet) with Horovod and MVAPICH2

Horovod is a distributed training framework for popular deep learning frameworks such as TensorFlow, Keras, PyTorch, and Apache MXNet. MVAPICH2, MVAPICH2-X, and MVAPICH2-GDR provide many features to augment data-parallel distributed training with Horovod on both CPUs and GPUs; a minimal usage sketch follows the feature list below.

  • Builds with Python 2.x or 3.x and CUDA 9.x, 10.x, or 11.x
  • Full support for TensorFlow, PyTorch, Keras, and Apache MXNet
  • Optimized support at MPI-level for deep learning workloads
    • Efficient large-message collectives (e.g. Allreduce) on CPUs and GPUs
    • GPU-Direct algorithms for all collective operations (including those commonly used for model parallelism, e.g., Allgather and Alltoall)
    • Support for fork safety
  • Exploits efficient large-message collectives in MVAPICH2, MVAPICH2-X, and MVAPICH2-GDR
  • Tested with
    • Mellanox InfiniBand adapters (e.g., EDR, FDR, HDR)
    • NVIDIA K80, P100, V100, Quadro RTX 5000, and A100 GPUs
    • CUDA [9.x, 10.x, 11.x] and cuDNN [7.5.x, 7.6.x, 8.0.x]
    • TensorFlow [1.x, 2.x], PyTorch 1.x, and Apache MXNet 1.x
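
As a concrete illustration of this data-parallel path, here is a minimal sketch, assuming Horovod's TensorFlow 2 Keras API on a CUDA-aware MVAPICH2 build; the model, data, hostfile, and launch flags are placeholders rather than anything shipped with the HiDL packages:

    # Minimal sketch: Horovod data-parallel training with TensorFlow 2 Keras.
    # Launch with an MVAPICH2 launcher, for example:
    #   mpirun_rsh -np 8 -hostfile hosts MV2_USE_CUDA=1 python train.py
    import tensorflow as tf
    import horovod.tensorflow.keras as hvd

    hvd.init()  # initialize Horovod on top of the underlying MPI library

    # Pin each rank to one local GPU so ranks sharing a node do not collide.
    gpus = tf.config.list_physical_devices('GPU')
    if gpus:
        tf.config.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

    # Toy model and data standing in for a real workload.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    x = tf.random.normal((1024, 784))
    y = tf.random.uniform((1024,), maxval=10, dtype=tf.int32)

    # Scale the learning rate by the number of ranks and wrap the optimizer so
    # gradient averaging goes through MPI Allreduce, the large-message
    # collective that MVAPICH2/-X/-GDR optimize for DL workloads.
    opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
    model.compile(optimizer=opt,
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

    # Broadcast the initial model from rank 0 so every rank starts identically.
    callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
    model.fit(x, y, batch_size=64, epochs=2, callbacks=callbacks,
              verbose=1 if hvd.rank() == 0 else 0)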

Horovod Performance on MVAPICH2-X and MVAPICH2-GDR

For instructions on building Horovod with MVAPICH2-X or MVAPICH2-GDR, please refer to the Horovod Userguide.

MPI-Driven ML Training with MPI4cuML

cuML is a machine learning training library with a focus on GPU acceleration and distributed computing. MVAPICH2-GDR provides many features to augment distributed training with cuML on GPUs.

  • Based on cuML 22.02.00
    • Includes ready-to-use examples for KMeans, Linear Regression, Nearest Neighbors, and tSVD
  • MVAPICH2 support for RAFT 22.02.00
    • Enabled cuML’s communication engine, RAFT, to use the MVAPICH2-GDR backend for Python and C++ cuML applications
    • Supports KMeans, PCA, tSVD, RF, and LinearModels
    • Added a switch between the available communication backends (MVAPICH2 and NCCL)
  • Built on top of mpi4py over the MVAPICH2-GDR library (see the CUDA-aware Allreduce sketch after this list)
  • Tested with
    • Mellanox InfiniBand adapters (FDR and HDR)
    • NVIDIA A100, V100, and P100 GPUs
    • Various x86-based multi-core platforms (AMD and Intel)
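
Since MPI4cuML sits on top of mpi4py over MVAPICH2-GDR, the sketch below illustrates the kind of CUDA-aware collective that path relies on: a GPU-resident Allreduce issued directly from Python. It assumes an mpi4py built against a CUDA-aware MVAPICH2-GDR and uses CuPy for the device buffers; the buffer size and launch line are illustrative only, and this is not an MPI4cuML API example.

    # Minimal sketch: CUDA-aware MPI Allreduce from Python via mpi4py.
    # Launch, for example, with:
    #   mpirun_rsh -np 4 -hostfile hosts MV2_USE_CUDA=1 python allreduce.py
    from mpi4py import MPI
    import cupy as cp

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank contributes a GPU-resident buffer; a CUDA-aware MPI such as
    # MVAPICH2-GDR can move it GPU-to-GPU without staging through host memory.
    sendbuf = cp.full(1 << 20, rank, dtype=cp.float32)
    recvbuf = cp.empty_like(sendbuf)
    cp.cuda.get_current_stream().synchronize()  # ensure buffers are populated

    comm.Allreduce(sendbuf, recvbuf, op=MPI.SUM)

    if rank == 0:
        expected = float(sum(range(comm.Get_size())))
        print("first element:", float(recvbuf[0]), "expected:", expected)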

cuML Performance on MVAPICH2-GDR

For instructions on building cuML with MVAPICH2-GDR, please refer to the Userguide for MPI4cuML 0.5.

Announcements


MPI4cuML 0.5 (based on cuML 22.02.00) is available. It supports RAFT 22.02.00 with C++ and Python APIs, is built on top of mpi4py over the MVAPICH2-GDR library, and provides handles that let Python cuML applications (KMeans, PCA, tSVD, RF, and LinearModels) use the MVAPICH2-GDR backend. [more]

The 10th Annual MVAPICH User Group (MUG) Conference was successfully held in a hybrid manner on August 22-24, 2022, with more than 105 attendees. Slides of the presentations are available here.

The HiDL team is a partner in and contributor to the NSF-awarded $20M AI Institute on Intelligent CyberInfrastructure (ICICLE). Details.

HiDL in the News