Efficient Computing for Deep Learning, Robotics, and AI (Vivienne Sze) | MIT Deep Learning Series

56,414 views

Lex Fridman

Lecture by Vivienne Sze in January 2020, part of the MIT Deep Learning Lecture Series.
Website: deeplearning.mit.edu
Slides: bit.ly/2Rm7Gi1
Playlist: bit.ly/deep-learning-playlist
LECTURE LINKS:
Twitter: / eems_mit
YouTube: / @miteemsviviennesze
MIT professional course: bit.ly/36ncGam
NeurIPS 2019 tutorial: bit.ly/2RhVleO
Tutorial and survey paper: arxiv.org/abs/1703.09039
Book coming out in Spring 2020!
OUTLINE:
0:00 - Introduction
0:43 - Talk overview
1:18 - Compute for deep learning
5:48 - Power consumption for deep learning, robotics, and AI
9:23 - Deep learning in the context of resource use
12:29 - Deep learning basics
20:28 - Hardware acceleration for deep learning
57:54 - Looking beyond the DNN accelerator for acceleration
1:03:45 - Beyond deep neural networks
CONNECT:
- If you enjoyed this video, please subscribe to this channel.
- Twitter: / lexfridman
- LinkedIn: / lexfridman
- Facebook: / lexfridman
- Instagram: / lexfridman

Comments: 46
@lexfridman
@lexfridman 4 years ago
I really enjoyed this talk by Vivienne. Here's the outline:
0:00 - Introduction
0:43 - Talk overview
1:18 - Compute for deep learning
5:48 - Power consumption for deep learning, robotics, and AI
9:23 - Deep learning in the context of resource use
12:29 - Deep learning basics
20:28 - Hardware acceleration for deep learning
57:54 - Looking beyond the DNN accelerator for acceleration
1:03:45 - Beyond deep neural networks
@gggrow
@gggrow 4 years ago
Looking forward to watching this, but shouldn't the Vladimir Vapnik lecture be coming first?
@createchannel8815
@createchannel8815 4 years ago
Me too. Invite her again.
@createchannel8815
@createchannel8815 4 years ago
Great talk. The speaker, Vivienne, was clear and concise. Very informative.
@NomenNescio99
@NomenNescio99 4 years ago
Thank you for sharing the lecture, this is the type of content I really enjoy.
@gonzalochristobal
@gonzalochristobal 4 years ago
Thank you Lex, the amount of information you have already shared is invaluable. Eternally grateful.
@JonMcGill
@JonMcGill 2 years ago
I used to be a Field Apps engineer for telecom, and she's certainly correct about the power problem with respect to chip technology. Very likeable lecturer!!
@colouredlaundry1165
@colouredlaundry1165 4 years ago
I am not an expert in Vivienne Sze's field; however, she was an extremely good lecturer. Every concept was extremely clear.
@samuelec
@samuelec 4 years ago
Impressive amount of information delivered by this lady! To watch such a densely packed, informative video I had to take more than a few breaks. I wonder how she managed to go through those 80 slides so fast, and whether anyone watched it all in one go without losing focus!
@jayhu6075
@jayhu6075 4 years ago
Many thanks for sharing with the people who cannot afford to study at MIT. Respect.
@colouredlaundry1165
@colouredlaundry1165 4 years ago
Agree with you. Respect.
@pierreerbacher4864
@pierreerbacher4864 4 years ago
The density of neurons in this channel is incredibly high.
@colouredlaundry1165
@colouredlaundry1165 4 years ago
Vivienne is incredibly smart; it is a pleasure to listen to her.
@davidvijayramchurn1860
@davidvijayramchurn1860 4 years ago
Ironically, if you call someone 'dense' in English slang, it would imply the opposite.
@summersnow7296
@summersnow7296 4 years ago
Excellent lecture 👏👏👏. Things that we don't usually think about as ML practitioners, but that are highly important. Great insights.
@nikhilpandey2364
@nikhilpandey2364 4 years ago
I was researching this on my own, and I have been doing network pruning wrong. I wouldn't mind a hit in accuracy as long as my latency budget is met, but now I think I can be far more frugal with the decrease in accuracy. Thanks a lot.
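For readers who want to see that trade-off concretely, here is a minimal sketch of magnitude-based pruning under an accuracy budget. It assumes a PyTorch model and a user-supplied evaluate() function; the sparsity levels and the 2% budget are arbitrary examples, not values from the talk.

```python
# Minimal sketch: magnitude-based pruning under an accuracy budget.
# `model`, `evaluate`, and the 2% budget are placeholders, not values from the talk.
import copy
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_for_budget(model, evaluate, baseline_acc, max_acc_drop=0.02):
    """Try increasing sparsity levels; keep the sparsest model still within budget."""
    best = model
    for amount in (0.3, 0.5, 0.7, 0.9):
        candidate = copy.deepcopy(model)
        for m in candidate.modules():
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                prune.l1_unstructured(m, name="weight", amount=amount)  # zero the smallest weights
                prune.remove(m, "weight")                               # bake the mask into the weights
        acc = evaluate(candidate)
        if baseline_acc - acc <= max_acc_drop:
            best = candidate   # still within the accuracy budget; keep the sparser model
        else:
            break              # too much accuracy lost; stop increasing sparsity
    return best
```

One caveat: zeroing weights only helps latency or energy if the hardware or runtime actually exploits the sparsity, which is closely related to the talk's point that parameter counts and MAC counts are poor proxies for latency and energy.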
@JousefM
@JousefM 4 years ago
Thanks for a rather "exotic" topic I need to learn about as an AI newbie, much appreciated Lex!
@warsin8641
@warsin8641 4 years ago
I love this. I will rewatch everything when I'm older and hopefully understand it better and more deeply. I'm only a junior in high school 😖
@tomfillot5453
@tomfillot5453 4 years ago
Maybe start by looking at Crash Course Computer Science. They give a good overview of how a computer actually works, and it should give you more context for the different types of memory, operations, and things like that. Then 3Blue1Brown has an excellent video series on neural networks. A lot of the understanding comes from calculus, but fortunately he also has an excellent video series on that!
@alterna19
@alterna19 4 years ago
Warsin, I like your avatar.
@thusspokeshabistari
@thusspokeshabistari 4 years ago
Try to watch the video slowly in segmented chunks, write down what you understand and don't understand about each segment, then Google what you don't understand and get back to viewing the video later.
@ayushdutta8050
@ayushdutta8050 3 years ago
Haha, senior year here 😅. AI has no age cutoff, thank god, haha.
@UglyG82
@UglyG82 4 years ago
Great stuff Lex. Thank you!
@UglyG82
@UglyG82 4 years ago
And thank you, Vivienne, for the fantastic insight.
@XCSme
@XCSme 4 years ago
Great video and an interesting problem. Why stop at architecture? What about using different materials for specialized DNN hardware? Maybe use lower-power transistors that are less accurate but good enough for inference. I don't think the brain's neurons are always 100% accurate and consistent, but the brain seems to be somewhat fault tolerant.
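The software analogue of "less accurate but good enough" hardware is reduced numerical precision. Here is a minimal sketch using PyTorch's dynamic quantization on a made-up toy model, just to illustrate that storing weights as 8-bit integers usually changes the outputs only slightly:

```python
# Minimal sketch: reduced-precision inference as a software stand-in for
# "less accurate but good enough" hardware. The toy model is made up.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).eval()

# Store the Linear layers' weights as 8-bit integers for inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 784)
print(model(x))      # fp32 reference output
print(quantized(x))  # int8-weight output: slightly different, often "good enough"
```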
@merlinmystique
@merlinmystique 4 years ago
Thank you, every video you post is incredibly useful. It is really hard to enter this field from scratch, though: everything you learn forces you to go learn a thousand other things, and it gets really frustrating sometimes. I hope this will get better with time.
@BlackHermit
@BlackHermit 4 years ago
FastDepth is really interesting. Could be useful for many people.
@Happy-wi7ml
@Happy-wi7ml 4 years ago
Brilliant, thank you.
@ganeshdongari7098
@ganeshdongari7098 3 years ago
Excellent
@machinimaaquinix3178
@machinimaaquinix3178 4 years ago
This was a great talk, thank goodness YouTube has a 0.75x speed mode. She talks fast!
@colouredlaundry1165
@colouredlaundry1165 4 years ago
She has a very high knowledge throughput: 10 Gb of information per second xD
@Soulixs
@Soulixs 3 years ago
Thanks Lex
@punyaslokdutta4362
@punyaslokdutta4362 4 years ago
Trade-off between the number of filters in a 3D convolution and a 4D convolution? Convolution is a matrix operation (W * input map + bias). ReLU activation is mostly used to provide non-linearity. I feel the number of filters is needed to see more abstract features. For instance, the initial layer of a CNN primarily picks up pixel-level information such as edges, cuts, and depths. The layer following it understands shapes and structures. The further layers help us understand the semantic meaning of eyes, skin, ears, nose, and face. But how will the model perform when, instead of multiple filters, we have more layers, i.e. when the information in the filters is stuffed inside additional CNN layers? Or is it done to ease computation while training?
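To make the width-versus-depth question concrete, here is a rough sketch that counts parameters for one wide convolutional layer versus a deeper stack of narrower ones; the channel counts are arbitrary examples, not values from the talk.

```python
# Rough sketch: one "wide" conv layer (many filters) vs. a "deep" stack of
# narrower ones. Channel counts are arbitrary examples, not from the talk.
import torch.nn as nn

wide = nn.Conv2d(64, 256, kernel_size=3, padding=1)        # one layer, 256 filters
deep = nn.Sequential(                                       # three layers, 64 filters each
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
)

def n_params(module):
    return sum(p.numel() for p in module.parameters())

print("wide:", n_params(wide))   # 64*256*3*3 + 256  = 147,712 weights + biases
print("deep:", n_params(deep))   # 3*(64*64*3*3 + 64) = 110,784 weights + biases
```

Roughly, depth buys more non-linearities (ReLUs) per parameter, while width buys more filters combining features at a single level; which gives better accuracy for a given compute budget is an empirical question.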
@kitgary
@kitgary 4 years ago
Genius!!!!!
@adamsimms8528
@adamsimms8528 4 years ago
I'm trying to imagine how much of this structure will remain relevant as we move into UltraRAM, which is nearly as fast as DRAM but NOT volatile, like memory-stick-style storage. What are the implications if the data can be laid out and accessed in place, where it is stored? Suddenly the whole structure is no longer useful.
@masbro1901
@masbro1901 2 years ago
1:10:37 It's 100x faster than an FPGA?? Wow, that blows my mind. I thought designing custom hardware for a specialized algorithm on an FPGA was the fastest way on the planet. Is it really?
@minhongz
@minhongz 4 years ago
So essentially, power consumption and speed are almost equivalent concerns for AI chips. Does anyone know what architecture the Tesla chips employ?
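A hedged back-of-the-envelope way to see why power and speed end up coupled: at a fixed power budget, throughput is capped by the energy each operation costs.

$$ \text{power} = \frac{\text{energy}}{\text{op}} \times \frac{\text{ops}}{\text{s}} \quad\Rightarrow\quad \frac{\text{ops}}{\text{s}} \le \frac{P_{\text{budget}}}{E_{\text{op}}} $$

For example (illustrative numbers only), a 1 W budget at 1 pJ per MAC caps throughput at 10^12 MACs/s; halving the energy per MAC doubles the achievable speed at the same power.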
@paulrautenbach
@paulrautenbach 4 years ago
While watching this I was seeing parallels with what I know about the Tesla chips from their Autonomy Investor Day presentation. The Tesla chips were designed with an energy budget in mind and so address many of the same things. One advantage the Tesla chips have is they do not need to be general purpose - so, to a large extent, only need to support a single architecture or configuration. In some cases the Tesla chips avoid storage and retrieval of intermediate data by passing outputs directly to inputs via hardware channels between successive computational stages implemented as separate hardware. A large proportion of the Tesla chips are used for static memory to implement what she called global memory. This avoids going off-chip for most values.
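The "pass outputs directly to inputs" idea described above is essentially operator fusion. Here is a rough NumPy sketch, with made-up shapes and layers, showing how computing one column at a time keeps the intermediate activations small and local instead of materializing the whole intermediate matrix (the stand-in here for writing it off-chip):

```python
# Rough sketch of the fused idea: compute ReLU(W2 @ ReLU(W1 @ x)) one column at a
# time so the full intermediate activation matrix is never materialized.
# Shapes are arbitrary examples.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((128, 64))
W2 = rng.standard_normal((32, 128))
X = rng.standard_normal((64, 1024))          # 1024 input vectors

# Unfused: the (128, 1024) intermediate is stored in full (think: written off-chip).
H = np.maximum(W1 @ X, 0.0)
Y_unfused = np.maximum(W2 @ H, 0.0)

# Fused: each column's intermediate stays small and is discarded immediately
# (think: passed straight to the next stage over an on-chip channel).
Y_fused = np.empty((32, 1024))
for i in range(X.shape[1]):
    h = np.maximum(W1 @ X[:, i], 0.0)        # length-128 vector, stays "local"
    Y_fused[:, i] = np.maximum(W2 @ h, 0.0)

assert np.allclose(Y_unfused, Y_fused)
```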
@samoha0812
@samoha0812 1 year ago
This is exactly what I wanted to hear. Thank you. Drawn in by the presentation title, I expected to hear about how an AI chip is designed to minimize energy consumption; a lot of the content focuses on the computing algorithms rather than the hardware design, but it is a great presentation that provides a comprehensive understanding of computing and energy consumption. Thank you.
@jessenochella4309
@jessenochella4309 4 years ago
Reversible computing uses less power! But you need new chip architectures and algorithms.
@jkobject
@jkobject 4 years ago
What about neuromorphic computing?
@alexanderpadalka5708
@alexanderpadalka5708 3 years ago
🗽
@fayssalelansari8584
@fayssalelansari8584 4 years ago
not bad
@vuththiwattanathornkosithg5625
@vuththiwattanathornkosithg5625 4 years ago
Tesla hardware 3.0???
@dapdizzy
@dapdizzy 4 years ago
I don't think the content presented by those youngsters is on par with the talks by the legends. You know what I mean.