NLP Demystified 13: Recurrent Neural Networks and Language Models

9,750 views

Future Mojo

A day ago

Comments: 12
@futuremojo · 2 years ago
Timestamps
00:00:00 Recurrent Neural Networks
00:00:23 The problem with bag-of-words techniques
00:02:28 Using recurrence to process text as a sequence
00:07:53 Backpropagation with RNNs
00:12:03 RNNs vs other sequence processing techniques
00:13:08 Introducing Language Models
00:14:37 Training RNN-based language models
00:17:40 Text generation with RNN-based language models
00:19:44 Evaluating language models with Perplexity
00:20:54 The shortcomings of simple RNNs
00:22:48 Capturing long-range dependencies with LSTMs
00:27:20 Multilayer and bidirectional RNNs
00:29:58 DEMO: Building a Part-of-Speech Tagger with a bidirectional LSTM
00:42:22 DEMO: Building a language model with a stacked LSTM
00:58:04 Different RNN setups
@HazemAzim · 1 year ago
Really super mix of theory, concepts, math, and then coding. Highly underrated.
@futuremojo · 1 year ago
Thanks, Hazem! I was going for a particular mix that explored the subject at multiple levels. It's good to hear it resonated with you.
@danialb9894 · 1 year ago
I hope you provide a full and detailed course on neural networks. You're the best.
@samuelcortinhas4877 · 1 year ago
Excellent video! You really bring this subject to life.
@adityashukla9840 · 3 months ago
I'd really like to see your videos on other topics like CNNs and GANs.
@KemalCanKara · 1 year ago
Why didn't you use an embedding layer? What is its purpose here? What would have changed if we added one?
@futuremojo · 1 year ago
You'll find the answer in this text cell: colab.research.google.com/github/nitinpunjabi/nlp-demystified/blob/main/notebooks/nlpdemystified_recurrent_neural_networks.ipynb#scrollTo=2DgNpgicAMbr

"We're not using embeddings for the input. We can, but since this is a character model with just a few dozen possible choices, we can get away with one-hot encoding. There's also no reason to think a particular letter should be closer to another in vector space as we would want in a word-level model."

I haven't tried it with an embedding layer (give it a shot!). My prediction for this particular example is that it wouldn't make much of a difference since this is a character-level model.
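To make the tradeoff concrete, here's a minimal Keras sketch of the two input options. This is not code from the notebook: the vocabulary size, sequence length, and layer widths are placeholder values for illustration.

import tensorflow as tf

VOCAB_SIZE = 65   # hypothetical number of distinct characters
SEQ_LEN = 100     # hypothetical context window length

# Option 1: one-hot inputs, as the notebook's character-level model uses.
# Each timestep is a sparse vector of length VOCAB_SIZE.
one_hot_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, VOCAB_SIZE)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax"),
])

# Option 2: integer character IDs plus a trainable Embedding layer,
# which maps each ID to a dense learned vector.
embedding_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Embedding(input_dim=VOCAB_SIZE, output_dim=32),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax"),
])

With only a few dozen symbols, the one-hot version already gives each character a distinct input vector, which is why an embedding layer is unlikely to change much at the character level.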
@KemalCanKara · 1 year ago
@futuremojo Thank you very much for the answer.
@moistnar · 1 year ago
At 31:20, what is the `+/` operator? I've never seen that before in Python and I can't find much on Google.
@futuremojo · 1 year ago
That's actually a backslash (`\`), not a forward slash, and it's Python's line-continuation character rather than an operator: `+ \` is an ordinary `+` with the expression continuing on the next line. www.google.com/search?q=python+multiline+with+backslash
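For illustration, a tiny standalone example (not taken from the video) of the continuation character:

# The trailing backslash tells Python the statement continues on the
# next line, so "+ \" is just "+" followed by a line break.
total = 1 + 2 + \
        3 + 4
print(total)  # prints 10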