CentML @ HPCA’23

The current landscape of Machine Learning (ML) and Deep Learning (DL) is rife with non-uniform models, frameworks, and system stacks, and it lacks standard tools and methodologies for evaluating and profiling models or systems. In their absence, the state of the practice for evaluating and comparing the benefits of proposed AI innovations (be they hardware or software) on end-to-end AI pipelines is both arduous and error-prone — stifling the adoption of these innovations in a rapidly moving field.

The goal of this tutorial is to bring together experts from industry and academia to foster systematic development, reproducible evaluation, and performance analysis of deep learning artifacts. It seeks to address the following questions:

  1. Which benchmarks can effectively capture the scope of the ML/DL domain?
  2. Are the existing frameworks sufficient for this purpose?
  3. What are some of the industry-standard evaluation platforms or harnesses?
  4. What are the metrics for carrying out an effective comparative evaluation?

February 25, 2023
10:50AM-11:20PM EST

Hotel Bonaventure Montreal
900 Rue De La Gauchetière O
Montreal, Quebec, Canada
H5A 1E4

Presentation Slides

HPCA MLBench Workshop - Feb 25, 2023
