77 - NHR PerfLab Seminar 2025-01-15: Efficient and Robust Hardware for Neural Networks [ID:55969]
Part of a video series: NHR PerfLab Seminar
Access: Open access
Duration: 00:44:07
Recording date: 2025-01-15
Uploaded on: 2025-01-15 15:46:04
Language: en-US

Speaker: Prof. Dr. Grace Li Zhang, Technische Universität Darmstadt

Abstract:

The last decade has witnessed significant breakthroughs by deep neural networks (DNNs) in many fields, but these breakthroughs have come at extremely high computation and memory costs. The growing complexity of DNNs has accordingly led to a quest for efficient hardware platforms. In this talk, class-aware pruning is first presented to reduce the number of multiply-accumulate (MAC) operations in DNNs. Class-exclusion early exit is then examined, which reveals the target class before the last layer is reached. To accelerate DNNs, digital accelerators such as Google's systolic arrays can be used. Such an accelerator is composed of an array of processing elements that execute MAC operations efficiently in parallel; however, these accelerators suffer from high energy consumption. To reduce the energy consumption of MAC operations, we select quantized weight values with good power and timing characteristics. To reduce the energy consumption incurred by data movement, the logic design of neural networks is presented. An analog in-memory computing platform based on RRAM crossbars will also be discussed. Finally, ongoing research topics and future research plans are summarized.
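The abstract only names the class-exclusion early-exit technique; the sketch below illustrates the general idea as a toy inference loop, where per-layer auxiliary heads progressively exclude unlikely classes and inference stops once a single candidate remains. The network shapes, the auxiliary heads, the exclusion margin, and the random weights are all illustrative assumptions, not the speaker's implementation.

```python
# Toy sketch of class-exclusion early exit: after each hidden layer, an
# auxiliary head scores all classes; classes scoring far below the current
# best are excluded, and inference exits early once one class remains.
# All shapes, thresholds, and weights are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES, HIDDEN, LAYERS = 10, 64, 4

# Random toy network: LAYERS hidden layers, each with an auxiliary class head.
layers = [rng.standard_normal((HIDDEN, HIDDEN)) * 0.1 for _ in range(LAYERS)]
heads = [rng.standard_normal((HIDDEN, NUM_CLASSES)) * 0.1 for _ in range(LAYERS)]

def early_exit_infer(x, margin=2.0):
    """Return (predicted class, number of layers actually executed)."""
    candidates = np.arange(NUM_CLASSES)           # classes still in the running
    for depth, (w, head) in enumerate(zip(layers, heads), start=1):
        x = np.maximum(x @ w, 0.0)                # hidden layer (ReLU)
        scores = x @ head                         # auxiliary class scores
        best = scores[candidates].max()
        # Exclude candidates scoring more than `margin` below the current best.
        candidates = candidates[scores[candidates] > best - margin]
        if len(candidates) == 1:                  # target class revealed early:
            return candidates[0], depth           # skip the remaining layers
    return candidates[np.argmax(scores[candidates])], LAYERS

pred, used = early_exit_infer(rng.standard_normal(HIDDEN))
print(f"predicted class {pred} after {used}/{LAYERS} layers")
```

The MAC savings come from the layers that are never executed once the candidate set collapses; the margin trades accuracy against how aggressively classes are excluded.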

For a list of past and upcoming NHR PerfLab seminar events, see: https://hpc.fau.de/research/nhr-perfl...
