Hidet

A cutting-edge open-source deep learning compiler dedicated to accelerating inference without affecting model accuracy.


Hidet is an open-source deep learning compiler written in Python. It supports end-to-end compilation of DNN models from PyTorch and ONNX to efficient CUDA kernels. To accelerate inference, it applies a series of graph-level and operator-level optimizations.

Requirements

Currently, Hidet focuses on optimizing inference workloads on NVIDIA GPUs and requires:

Linux OS

CUDA Toolkit 11.6+

Python 3.8+

Optimize a PyTorch model with Hidet

The easiest way to use Hidet is through the torch.compile() function with 'hidet' as the backend. Next, we use the resnet18 model as an example to show how to optimize a PyTorch model with Hidet.


Leveraging advanced graph-level and operator-level optimizations, Hidet elevates the performance of neural network workloads on NVIDIA GPUs, making it an indispensable tool for researchers and developers in the field of deep learning.


Hidet Script

Here is an example to demonstrate how to use Hidet Script to write kernel programs.


Hidet originates from the following research work:

Hidet: Task-Mapping Programming Paradigm for Deep Learning Tensor Programs. Yaoyao Ding, Cody Hao Yu, Bojian Zheng, Yizhi Liu, Yida Wang, Gennady Pekhimenko. ASPLOS 2023.
