Hidet
An open-source deep learning compiler dedicated to accelerating inference without affecting model accuracy.
Hidet is an open-source deep learning compiler written in Python. It supports end-to-end compilation of DNN models from PyTorch and ONNX to efficient CUDA kernels, applying a series of graph-level and operator-level optimizations to accelerate inference.
Currently, Hidet focuses on optimizing inference workloads on NVIDIA GPUs and requires:
Linux OS
CUDA Toolkit 11.6+
Python 3.8+
The easiest way to use Hidet is through the torch.compile() function with hidet as the backend. Below, we use the resnet18 model as an example to show how to optimize a PyTorch model with Hidet.
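A minimal sketch of this workflow, assuming torchvision is installed, a CUDA-capable GPU is available, and that importing hidet registers the hidet backend with torch.compile, as Hidet's documentation describes:

```python
import torch
import torchvision

import hidet  # importing hidet registers the 'hidet' backend for torch.compile

# Build a ResNet-18 in inference mode on the GPU (random weights are enough
# to demonstrate compilation; load pretrained weights for real use).
model = torchvision.models.resnet18().cuda().eval()
x = torch.randn(1, 3, 224, 224, device='cuda')

# Compile with Hidet as the backend: the first call triggers graph lowering,
# kernel tuning, and CUDA code generation; later calls reuse the optimized kernels.
model_opt = torch.compile(model, backend='hidet')

with torch.no_grad():
    y = model_opt(x)

print(y.shape)  # torch.Size([1, 1000])
```

The first invocation is slow because Hidet tunes and compiles kernels for the given input shapes; subsequent calls run the cached, optimized kernels.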
Through these graph-level and operator-level optimizations, Hidet improves the performance of neural network inference on NVIDIA GPUs, making it a useful tool for deep learning researchers and developers.
Here is an example demonstrating how to use Hidet Script to write kernel programs.
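A sketch of a vector-addition kernel in Hidet Script, following the pattern shown in Hidet's documentation; the exact attribute names (such as attrs.func_kind and attrs.cuda.block_dim) may differ slightly across Hidet versions:

```python
import hidet
from hidet.lang import attrs
from hidet.lang.types import f32
from hidet.lang.cuda import threadIdx, blockIdx, blockDim

n = 1024

with hidet.script_module() as script_module:

    @hidet.script
    def vector_add(a: f32[n], b: f32[n], c: f32[n]):
        # Mark this function as a CUDA kernel and set its launch configuration.
        attrs.func_kind = 'cuda_kernel'
        attrs.cuda.block_dim = 256
        attrs.cuda.grid_dim = (n + 255) // 256

        # Each thread computes one element of the output.
        i = blockIdx.x * blockDim.x + threadIdx.x
        if i < n:
            c[i] = a[i] + b[i]

# Compile the script module into a callable CUDA module.
module = script_module.build()

a = hidet.randn([n], device='cuda')
b = hidet.randn([n], device='cuda')
c = hidet.empty([n], device='cuda')
module(a, b, c)
```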
Hidet originates from the following research work:
Hidet: Task-Mapping Programming Paradigm for Deep Learning Tensor Programs (ASPLOS 2023)
Authors: Yaoyao Ding, Cody Hao Yu, Bojian Zheng, Yizhi Liu, Yida Wang, Gennady Pekhimenko