International Workshop on Machine Learning Hardware (IWMLH), Co-located with ISC2020

Presentations: a full playlist of the recorded talks is available online.

Keynote, Albert Cohen: Compiler Construction for Hardware Acceleration: Challenges and Opportunities (PDF slides)

Preferred Networks: MN-Core: Massively SIMD Deep Learning Accelerator (PDF slides)
GraphCore: Scalable Machine Intelligence Systems (PDF slides)
Groq: Groq’s Tensor Streaming Processor (PDF slides)
SambaNova: Accelerating Software 2.0 (PDF slides)
Cerebras: Wafer-scale AI for science and HPC (PDF slides)

A 90-minute live Q&A session was held on June 25th, giving the ISC2020 audience the opportunity to ask questions about both the keynote and the participants' presentations.


Invited speaker

Albert Cohen, Research Scientist, Google Research.

Compiler Construction for Hardware Acceleration: Challenges and Opportunities

This is a new golden age for optimizing compilers. We live in a heterogeneous world of domain-specific languages and accelerators, freeing programming language designers and computer architects from the chains of general-purpose, one-size-fits-all designs.
[John Hennessy and Dave Patterson’s Turing award lecture, shamelessly adapted.]

The emphasis moves from abstraction penalty removal to zero-cost abstraction by design, from optimization targeting a Von Neumann architecture to the orchestration of a distributed hierarchy of computational and memory resources, from performance through native libraries to just-in-time code generation through active libraries, and from expert-written heuristics to machine learning compilation and program synthesis. Beyond performance, compiler construction for heterogeneity also raises challenges in debuggability across abstractions and languages, formal methods, and security.

Building on applications from high-performance computing and machine learning, we will illustrate these challenges through the design of MLIR, an open-source infrastructure supported by the LLVM Foundation, built to accelerate innovation in machine learning (ML) and high-performance computing (HPC). MLIR is designed for extension and evolution, enabling research and engineering on heterogeneous compilation; it is also a research artifact that raises its own design, semantics, and algorithmic challenges.


Participating companies

Adrian Macias, Machine Learning Systems Specialist, Groq.
Kunle Olukotun, Chief Technologist and Co-Founder, SambaNova.
Matt Fyles, SVP, Software, Graphcore.
Andy Hock, Senior Director and Head of Product, Cerebras.
Yusuke Doi, VP of Computing Infrastructure, Preferred Networks.


Workshop Scope

Recent years have seen a surge of investment in AI chip companies worldwide. Most of these companies design accelerators for industrial applications rather than scientific workloads. As the use of machine learning (ML) accelerates within HPC itself, it is important that the scientific community influence the design of this new specialized hardware. Indeed, scientific computing has an uncommon set of requirements regarding platform usage and administration, and how these chips meet those demands will shape their future integration into the global scientific computing infrastructure.

The workshop will feature the participation of select AI accelerator companies, with discussions centered on these aspects.


Organizers

Pete Beckman, Argonne National Laboratory
Swann Perarnau, Argonne National Laboratory
Rosa M. Badia, Barcelona Supercomputing Center
Kentaro Sano, RIKEN
Valentin Reis, Argonne National Laboratory