DeepMind x UCL | Deep Learning Lectures | 7/12 | Deep Learning for Natural Language Processing

39,080 views

Google DeepMind


1 day ago

Comments: 34
@leixun
@leixun 4 years ago
*DeepMind x UCL | Deep Learning Lectures | 7/12 | Deep Learning for Natural Language Processing*
*My takeaways:*
*1. Plan for this lecture 0:23*
*2. Background: Deep learning and language 3:03*
2.1 Language applications use deep learning to very different extents 4:12
2.2 Why deep learning is such an effective tool for language processing 7:08
2.3 Understanding language: this is important for building language models 7:50
*3. The Transformer 22:14*
3.1 Distributed representations of words 23:40
3.2 Self-attention over word input embeddings 32:13
3.3 Multi-head self-attention 38:55
3.4 Feedforward layer 41:57
3.5 A complete Transformer block 42:23
3.6 Skip connections 42:38
3.7 Position encoding of words 46:02
3.8 Summary 50:58
*4. Unsupervised and transfer learning with BERT 54:45*
4.1 Problems in language 55:39
4.2 BERT 59:42
- Unsupervised learning
-- Masked language model pretraining 1:02:05
-- Next sentence prediction pretraining 1:05:55
- BERT fine-tuning 1:09:55
- BERT supercharges transfer learning 1:12:05
*5. Extracting language-related knowledge from the environment 1:13:55*
- Grounded language learning at DeepMind: towards language understanding in a situated agent
*6. To conclude 1:27:18*
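The Transformer items above (3.2–3.6) essentially describe a weighted mixing of value vectors followed by a feedforward layer. Below is a minimal numpy sketch of single-head scaled dot-product self-attention; the dimensions, weights, and function names are made up for illustration and are not taken from the lecture.

```python
# Minimal sketch of single-head scaled dot-product self-attention.
# All sizes and weights are illustrative, not from the lecture's slides.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) word embeddings; Wq/Wk/Wv: (d_model, d_k) projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len) similarity scores
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V                        # each output is a weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 16, 8              # e.g. a 5-word sentence, toy sizes
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                              # (5, 8): one mixed vector per word
```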
@prakhyatshankesi3749
@prakhyatshankesi3749 1 year ago
This is, hands down, the best explanation of Transformers!
@antonioskarvelas1325
@antonioskarvelas1325 7 months ago
Best explanation? Unfortunately, it was difficult for me to follow ...
@lukn4100
@lukn4100 3 years ago
Is the picture at 37:12 correct? If we take a small amount of the value of each of the other words, plus the value of the word "beetle", to the next layer, then the v term from the word "the" should be connected to lambda1, not the v term for the word "beetle". The same logic should apply to the other words and their lambdas.
@gwendal-lv
@gwendal-lv 3 years ago
I agree, there seems to be an issue with the arrows in that figure. Since the lambdas sum to 1, if the figure were right then v' would be equal to v_beetle.
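To make the point concrete: the output v' is a convex combination of all the value vectors, so it can only equal v_beetle if beetle's lambda is 1. A toy numeric check, with made-up vectors and weights purely for illustration:

```python
# v' is the lambda-weighted sum of all value vectors (lambdas sum to 1),
# so it mixes every word's value rather than copying v_beetle.
import numpy as np

v = {"the": np.array([1.0, 0.0]),
     "beetle": np.array([0.0, 1.0]),
     "drove": np.array([1.0, 1.0])}
lam = {"the": 0.1, "beetle": 0.7, "drove": 0.2}   # attention weights, sum to 1

v_prime = sum(lam[w] * v[w] for w in v)
print(v_prime)   # [0.3, 0.9]: close to v_beetle, but the other words contribute too
```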
@martinho1688
@martinho1688 4 years ago
Thank you very much for taking the time to prepare this incredible lecture series! #respectfrombrazil 🇧🇷
@ながれる季節
@ながれる季節 3 years ago
I'm completely lost. Is this a graduate level course?
@seremetvlad
@seremetvlad 4 years ago
Thank you! This is a great series of lectures!
@khadijakhaldi6468
@khadijakhaldi6468 4 years ago
Thank you so much for the very informative lecture!
@cuenta4384
@cuenta4384 3 years ago
Can anybody post the paper at the end, where it says McClelland et al., 2019?
@YeTianlinguist
@YeTianlinguist 4 years ago
Thank you for the amazing lecture. Why are there only feedforward, and not feedback, mechanisms in language models? Would that make a difference? We process language both bottom-up and top-down: our expectations about the world and our beliefs about people's intentions can influence how we process a sequence of sounds, just as top-down processes make us hallucinate certain aspects of vision. The skip-level connections allow lower-level information to feed forward, but they do not allow higher-level representations to influence representations lower down, at least not at inference time. Would it be possible to have such a structure in Transformers? Would it help?
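For reference, the skip (residual) connection mentioned above simply adds a block's input back to its output, so information only flows forward through the layers at inference time. A toy sketch of that wiring; the "sublayer" here is a stand-in and not the lecture's architecture:

```python
# Residual (skip) connection: the block's input is added to its output,
# carrying lower-level information forward; nothing flows back from later layers.
import numpy as np

def residual_sublayer(x, sublayer):
    return x + sublayer(x)   # purely feedforward at inference time

x = np.ones(4)
print(residual_sublayer(x, lambda h: 0.5 * h))   # [1.5 1.5 1.5 1.5]
```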
@luksdoc
@luksdoc 4 years ago
One of the best lectures in the series.
@6452gaurav
@6452gaurav 4 years ago
Looks like Linus Sebastian is taking the lecture :D
@user-or7ji5hv8y
@user-or7ji5hv8y 4 years ago
Not easy to follow the exact steps with the visualization and explanation provided. I think more detail would be helpful.
@lukn4100
@lukn4100 3 years ago
The explanations by the lecturer are great, but the slides don't reflect this; they are simply too poor.
@markusbuchholz3518
@markusbuchholz3518 4 years ago
An impressive amount of effort went into preparing this lecture. Thanks for sharing the knowledge and research.
@abdurrezzakefe5308
@abdurrezzakefe5308 3 years ago
I got Covid from 15:28 lol Great lectures btw, huge thanks to DeepMind and UCL!
@kirillazhitsky9842
@kirillazhitsky9842 4 years ago
It's really informative, thank you. There is only one noticeable mistake: that is not a fruit fly in the picture :)
@lukn4100
@lukn4100 3 years ago
Great lecture and big thanks to DeepMind for sharing this great content.
4 years ago
1:27:57 "We've reached the end of the lecture, because I urgently need to go now…"
@iamjameswong
@iamjameswong 4 years ago
Thanks Felix! You're a great teacher. That's it.
@JuanPabloBragaBrum
@JuanPabloBragaBrum 4 years ago
Thanks for sharing knowledge!!
@fgh509
@fgh509 4 years ago
Amazing explanation of the Transformer, thanks so much
@bryanbosire
@bryanbosire 3 years ago
Superb Lecture...Thank you
@pariveshplayson
@pariveshplayson 3 years ago
Amazing lecture!
@wy2528
@wy2528 4 years ago
Thank you for sharing the research.
@ragulshan6490
@ragulshan6490 4 years ago
Thank you for this amazing tutorial. Well organised!!
@prizmaweb
@prizmaweb 4 years ago
Excellent.
@fabb802
@fabb802 1 year ago
Thanks!
@a.gmathiu7995
@a.gmathiu7995 3 years ago
Head of search
@firstnamesecondname5341
@firstnamesecondname5341 4 years ago
He who is first shall be last, or just seen of as a twat 😁🤦🏻‍♂️🤣👍
@JigawattMusic
@JigawattMusic 4 years ago
FIRST