Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 6 - Language Models and RNNs

115,275 views

Stanford Online

Comments: 12
@kiran5918 • 1 year ago
Handled the questions so well. Crisp and clear answers.
@kartiksirwani4657 • 2 years ago
Her lectures are on point and very clear.
@adithyagiri7933 • 1 year ago
Clear and crisp lecture.
@saeedvahidian53 • 1 year ago
Very clear and to-the-point lecture. Better than Chris!
@paninilal8322 • 1 year ago
The use of precise examples is very nice.
@stanfordonline • 1 year ago
Awesome feedback, thanks for watching!
@danielsun4928 • 2 years ago
Seems better than Chris' talk.
@shrutiiyyer2783 • 3 months ago
A question here! Will the RNN keep generating the same exact sequence (during sequence generation, not training) if the starting word is the same? For example, if it randomly chooses 'the' to be the first word y0, will the rest of the sequence be the same if this generation were run twice, given that both times the first word generated was 'the'?
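
Whether the continuation repeats depends on how the next word is picked from the model's output distribution: greedy (argmax) decoding is fully determined by the start word and the weights, while sampling can diverge even from the same start. A minimal NumPy sketch of the distinction, with untrained toy weights and a made-up five-word vocabulary standing in for a trained language model:

```python
import numpy as np

wrng = np.random.default_rng(0)   # fixed toy weights, standing in for a trained model
srng = np.random.default_rng()    # unseeded sampler: differs across runs

vocab = ["the", "cat", "sat", "on", "mat"]
V, d = len(vocab), 8
W_h = wrng.normal(size=(d, d)) * 0.5   # hidden-to-hidden
W_e = wrng.normal(size=(d, V)) * 0.5   # input-to-hidden (one-hot embedding)
W_o = wrng.normal(size=(V, d)) * 0.5   # hidden-to-output logits

def step(h, tok):
    """One RNN step: new hidden state and next-word distribution."""
    h = np.tanh(W_h @ h + W_e[:, tok])       # one-hot input = a column of W_e
    logits = W_o @ h
    p = np.exp(logits - logits.max())
    return h, p / p.sum()

def generate(first="the", n=6, greedy=False):
    h, tok = np.zeros(d), vocab.index(first)
    out = [first]
    for _ in range(n):
        h, p = step(h, tok)
        tok = int(np.argmax(p)) if greedy else int(srng.choice(V, p=p))
        out.append(vocab[tok])
    return " ".join(out)

print(generate(greedy=True))    # identical every run: argmax is deterministic
print(generate())               # sampled: runs can diverge even after "the"
```

In short: if each next word is sampled from the distribution, two runs starting with 'the' will generally differ; only with argmax decoding is the whole sequence pinned down by the first word.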
@mohammedbouri8351 • 1 year ago
Why do RNNs share the same weights at each time step? I didn't understand the goal behind it.
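
The goal is that one set of parameters processes every position: the network can then handle inputs of any length, the parameter count stays fixed, and whatever it learns about processing a word applies no matter where in the sequence the word occurs. A minimal NumPy sketch (the names `W_h`, `W_x`, `b` are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
d_h, d_x = 4, 3
# One set of parameters, reused at every time step: this is the weight sharing.
W_h = rng.normal(size=(d_h, d_h)) * 0.1
W_x = rng.normal(size=(d_h, d_x)) * 0.1
b = np.zeros(d_h)

def rnn_forward(xs):
    h = np.zeros(d_h)
    for x in xs:                           # same W_h, W_x, b at every step
        h = np.tanh(W_h @ h + W_x @ x + b)
    return h

# The same fixed-size parameter set handles any sequence length.
print(rnn_forward(rng.normal(size=(5, d_x))))
print(rnn_forward(rng.normal(size=(50, d_x))))
```

Without sharing, the model would need a separate weight matrix per position, which fixes a maximum length and prevents it from generalizing what it learns at one position to another.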
@unknownhero6187 • 2 years ago
What is the physical interpretation of the hidden state and the corresponding weight matrix?
@Teng_XD • 1 year ago
Just linear combinations of vectors.
@paninilal8322 • 1 year ago
It is the inner working of an intelligent unit such as a neural network, much like actual neurons inside the brain: a different subset of neurons gets activated each time, corresponding to the respective input.
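
To make the "linear combinations" answer concrete: each hidden unit applies a nonlinearity to a linear combination of the previous hidden state and the current input, so the hidden state acts as a fixed-size running summary of the prefix read so far. A toy, untrained sketch showing that perturbing an early input changes the final hidden state:

```python
import numpy as np

rng = np.random.default_rng(1)
d_h, d_x = 4, 3
W_h = rng.normal(size=(d_h, d_h)) * 0.5
W_x = rng.normal(size=(d_h, d_x)) * 0.5

def final_hidden(xs):
    h = np.zeros(d_h)
    for x in xs:
        # Each row of W_h / W_x defines one hidden unit's linear combination
        # of the previous state and the current input, squashed by tanh.
        h = np.tanh(W_h @ h + W_x @ x)
    return h

a = rng.normal(size=(5, d_x))
b = a.copy()
b[0] += 1.0                            # perturb only the first input
print(final_hidden(a))
print(final_hidden(b))                 # differs: h carries the whole prefix
```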