tinyML Talks: Processing-In-Memory for Efficient AI Inference at the Edge

2,373 views

tinyML Foundation

1 year ago

"Processing-In-Memory for Efficient AI Inference at the Edge"
Kaiyuan Yang
Assistant Professor
Rice University
Weier Wan
Head of Software-Hardware Co-design
Aizip
Performing ever-more-demanding AI tasks on battery-powered edge devices requires continuous improvement in the energy and cost efficiency of AI hardware. Processing-In-Memory (PIM) is an emerging computing paradigm for memory-centric computations such as deep learning. It promises significant gains in energy efficiency and computation density over conventional digital architectures by reducing data-movement costs and by exploiting ultra-efficient low-precision computation in the analog domain. In this talk, Dr. Kaiyuan Yang will share his research group's recent silicon-proven SRAM-based PIM circuit and system designs, CAP-RAM and MC2-RAM. Next, Dr. Weier Wan will introduce his recent RRAM-based PIM chip, NeuRRAM. Through full-stack algorithm-hardware co-design, these demonstrated PIM systems aim to mitigate the inference accuracy loss associated with PIM hardware while retaining its energy, memory, and chip-area benefits.
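The accuracy-versus-efficiency tradeoff the abstract mentions can be illustrated with a toy behavioral model of an analog PIM matrix-vector multiply. This is a minimal sketch, not the design from the talk: weight quantization, readout noise, and ADC resolution are hypothetical stand-ins for the non-idealities that real SRAM/RRAM PIM arrays exhibit, and all parameter values are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pim_matvec(weights, x, w_bits=4, adc_bits=6, noise_sigma=0.01):
    """Toy model of one analog PIM matrix-vector multiply.

    Weights are quantized to w_bits (as if stored as cell
    conductances), column currents ideally accumulate the dot
    products, additive noise models analog readout error, and an
    adc_bits converter digitizes the result. All parameters are
    illustrative assumptions, not values from the talk.
    """
    # Quantize weights to low precision (symmetric, per-array scale).
    w_scale = np.abs(weights).max() / (2 ** (w_bits - 1) - 1)
    w_q = np.round(weights / w_scale) * w_scale

    # Ideal analog accumulation along each column (current summing).
    y = w_q @ x

    # Additive analog readout noise on the accumulated currents.
    y = y + rng.normal(0.0, noise_sigma * np.abs(y).max(), size=y.shape)

    # ADC quantization of the analog readout.
    adc_scale = np.abs(y).max() / (2 ** (adc_bits - 1) - 1)
    return np.round(y / adc_scale) * adc_scale

# Compare the noisy low-precision PIM result with the exact product.
W = rng.standard_normal((8, 16))
x = rng.standard_normal(16)
exact = W @ x
approx = pim_matvec(W, x)
print(np.max(np.abs(exact - approx)))
```

Sweeping `w_bits`, `adc_bits`, or `noise_sigma` in this model shows how aggressive analog efficiency knobs degrade output fidelity, which is the gap that the algorithm-hardware co-design described in the talk tries to close.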
