L15.5 Long Short-Term Memory

6,549 views

Sebastian Raschka

3 years ago

Slides: sebastianraschka.com/pdf/lect...
-------
This video is part of my Introduction to Deep Learning course.
Next video: • L15.6 RNNs for Classif...
The complete playlist: • Intro to Deep Learning...
A handy overview page with links to the materials: sebastianraschka.com/blog/202...
-------
If you want to be notified about future videos, please consider subscribing to my channel: / sebastianraschka

Comments: 13
@virozz1024 2 years ago
Best videos I have ever seen about AI. You clarify everything, and it's pretty easy to understand.
@SebastianRaschka 2 years ago
Thanks a lot!
@hungerbear4858 3 years ago
You're so good at explaining complex topics. Thanks!
@yogeshrampariya8346 2 years ago
Thanks, professor, for the detailed explanation. I have been following the deep learning playlist, and I have not found such in-depth explanations and code anywhere else.
@harishpl02 2 years ago
Clear explanation, thanks!
@rishavkumar8341 2 years ago
Thanks for the great explanation. Nevertheless, note that at 11:06 *i sub t* is called the "input gate," whereas at 12:51 it is denoted the "input node." It is a minor erratum.
@SebastianRaschka 2 years ago
Oops, good catch!
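
For readers following along, here is one standard way to write the LSTM cell updates, with the input gate i_t and the input node g_t kept distinct (a sketch using common textbook notation, e.g., as in the Greff et al. paper linked further below; the exact weight symbols on the slides may differ):

```latex
\begin{aligned}
g_t &= \tanh\!\left(W_{xg} x_t + W_{hg} h_{t-1} + b_g\right) &&\text{(input node)}\\
i_t &= \sigma\!\left(W_{xi} x_t + W_{hi} h_{t-1} + b_i\right) &&\text{(input gate)}\\
f_t &= \sigma\!\left(W_{xf} x_t + W_{hf} h_{t-1} + b_f\right) &&\text{(forget gate)}\\
o_t &= \sigma\!\left(W_{xo} x_t + W_{ho} h_{t-1} + b_o\right) &&\text{(output gate)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot g_t &&\text{(cell state)}\\
h_t &= o_t \odot \tanh(c_t) &&\text{(hidden state)}
\end{aligned}
```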
@ibrahimwalid416 1 year ago
Thank you so much, subbed.
@736939 2 years ago
Your explanation is awesome. The only thing that always confuses me is how the forget gate knows what to forget and what to save. I understand that there is a sigmoid [-1, 1] function, but how does it learn what to save or forget? Thank you.
@SebastianRaschka 2 years ago
That's an interesting question. Its range is [0, 1], where 0 would be forget and 1 would be not forget (and values in between are "forget a bit," lol). It does depend on the input values, and my guess is that it learns to shape the inputs in a certain way over the time steps. Honestly, I have no idea why it works so well. There is a relatively recent paper, though, that revisits the LSTM, and apparently the forget gate is one of its most important components: arxiv.org/abs/1503.04069
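
To make the gating mechanics above concrete, here is a minimal NumPy sketch of a single LSTM forward step (illustrative only; the function and variable names are my own, not the course code). The forget gate is just a learned sigmoid output multiplied elementwise into the previous cell state, so "what to forget" is whatever the learned weights map the current input and hidden state to:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # Stack input and previous hidden state; each gate gets its own
    # weight matrix W[k] of shape (n_hidden, n_input + n_hidden).
    z = np.concatenate([x_t, h_prev])
    f_t = sigmoid(W["f"] @ z + b["f"])  # forget gate, entries in (0, 1)
    i_t = sigmoid(W["i"] @ z + b["i"])  # input gate
    g_t = np.tanh(W["g"] @ z + b["g"])  # input node (candidate values)
    o_t = sigmoid(W["o"] @ z + b["o"])  # output gate
    # Elementwise: an f_t entry near 0 erases that cell-state entry,
    # an entry near 1 keeps it.
    c_t = f_t * c_prev + i_t * g_t
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

# Toy usage: 3-dim input, 2-dim hidden/cell state, random weights.
rng = np.random.default_rng(0)
n_in, n_h = 3, 2
W = {k: rng.normal(size=(n_h, n_in + n_h)) for k in "figo"}
b = {k: np.zeros(n_h) for k in "figo"}
h_t, c_t = lstm_step(rng.normal(size=n_in), np.zeros(n_h), np.zeros(n_h), W, b)
```

During training, the gradient signal flowing back through c_t updates W["f"], which is the only sense in which the gate "learns" what to keep or erase.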
@736939 2 years ago
@@SebastianRaschka Thank you very much. I have asked this question so many times in forums, but it still works like magic )) It would also be interesting to know how backpropagation through the elementwise multiplication and elementwise sum steps is calculated. BTW, your newest book is awesome; I bought it from Amazon. Thank you.
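
On the backpropagation question raised above: the elementwise operations have simple local gradients. For the cell-state update c_t = f_t ⊙ c_{t-1} + i_t ⊙ g_t, the chain rule gives (a sketch that ignores the additional gradient path through h_t):

```latex
\frac{\partial \mathcal{L}}{\partial f_t} = \frac{\partial \mathcal{L}}{\partial c_t} \odot c_{t-1},\qquad
\frac{\partial \mathcal{L}}{\partial i_t} = \frac{\partial \mathcal{L}}{\partial c_t} \odot g_t,\qquad
\frac{\partial \mathcal{L}}{\partial g_t} = \frac{\partial \mathcal{L}}{\partial c_t} \odot i_t,\qquad
\frac{\partial \mathcal{L}}{\partial c_{t-1}} = \frac{\partial \mathcal{L}}{\partial c_t} \odot f_t
```

The last term shows why the cell state helps with vanishing gradients: when f_t is close to 1, the gradient passes to the previous time step almost unchanged.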