Contextual word embeddings in spaCy

3,896 views

Applied Language Technology

Day ago

Comments: 12
@python-programming 2 years ago
I was looking for a good video to share with a colleague to explain this concept in spaCy. This was fantastic, as always! Thanks so much.
@SP-db6sh 2 years ago
Best tutorial on such a complex topic.
@AppliedLanguageTechnology 2 years ago
Thanks!
@RaminH 2 years ago
Thank you for the clear and concise explanation. Great job!
@AppliedLanguageTechnology 2 years ago
Thanks! I'm glad you found the video helpful!
@Julia-ej4jz 1 year ago
Thank you for sharing this opportunity to learn!
@ajitkumar15 2 years ago
Great Video !!!
@gareebmanus2387 2 years ago
Thanks for a very succinct description. However, I am familiar only with the W2V (static) representation, in which you store a lookup table of word–vector pairs. When we need to use the embeddings, we simply look up the word in the table and then plug the retrieved embedding in place of the word in the input to some neural net, etc. How does one store and use contextual embeddings? Obviously the store-and-lookup paradigm won't work. Can you please explain? For example, if I wish to do NMT from English to Finnish, then what?
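A minimal sketch of the static lookup paradigm this question describes, assuming spaCy's `en_core_web_md` pipeline (which ships with static vectors) is installed; the sentence and token index are only illustrative:

```python
import spacy

# Load a pipeline that ships with static word vectors
# (assumes it has been installed:
#  python -m spacy download en_core_web_md)
nlp = spacy.load("en_core_web_md")

doc = nlp("I deposited money at the bank.")

# With static embeddings, a token's vector is just a row fetched
# from a lookup table keyed by the word form: every occurrence of
# "bank" retrieves the same vector, regardless of context.
token = doc[5]                       # "bank"
print(token.text, token.vector[:5])  # first five dimensions

# The same vector can be fetched straight from the vocabulary.
print(nlp.vocab["bank"].vector[:5])
```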
@AppliedLanguageTechnology 2 years ago
Hi Gareeb! An excellent question - in this case, you would simply use BERT or some other model capable of learning contextual embeddings to extract representations for the entire text to be translated, and then use these representations to train some sequence-to-sequence model to translate from one language to another. In this video, we simply pick out embeddings for certain words to show that despite their similar form, their representations differ. In other words, with contextual word embeddings, you typically learn representations for entire sequences as opposed to individual words, as in word2vec.
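A hedged sketch of what this reply describes: picking out contextual vectors for the same word form in two different contexts. It assumes the transformer pipeline `en_core_web_trf` is installed, and the attribute layout of `doc._.trf_data` varies between spacy-transformers versions, so treat the tensor access below as one common layout rather than a guaranteed API:

```python
import spacy
import numpy as np

# Transformer-based pipeline (assumes:
# python -m spacy download en_core_web_trf)
nlp = spacy.load("en_core_web_trf")

# The word form "bank" occurs in two different contexts.
doc = nlp("I sat on the river bank. I paid the money into the bank.")

# spacy-transformers stores the raw model output on the Doc.
# In many versions, tensors[0] holds the word-piece hidden states;
# flatten the batch dimension so the alignment indices apply directly.
trf_data = doc._.trf_data
hidden = trf_data.tensors[0]
wordpieces = hidden.reshape(-1, hidden.shape[-1])

def contextual_vector(token):
    """Average the word-piece vectors aligned to one spaCy token."""
    indices = trf_data.align[token.i].data.flatten()
    return wordpieces[indices].mean(axis=0)

bank_river = contextual_vector(doc[5])   # "bank" (of a river)
bank_money = contextual_vector(doc[13])  # "bank" (financial)

# Unlike static word2vec-style vectors, the two occurrences are not
# identical: their cosine similarity is clearly below 1.
cos = np.dot(bank_river, bank_money) / (
    np.linalg.norm(bank_river) * np.linalg.norm(bank_money))
print(cos)
```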
@gareebmanus2387 2 years ago
@@AppliedLanguageTechnology Thank you very much for your clarification. Also, thanks again for sharing your excellent course material.
@ruwang3132 1 year ago
It is a nice talk! But why does the code sometimes work and sometimes not?
@AppliedLanguageTechnology 1 year ago
Thanks! Could you be a bit more specific? Does the code raise an error?
Introduction to the CoNLL-U annotation schema
4:36
Applied Language Technology
2.8K views
Creating a spaCy Doc object manually
5:22
Applied Language Technology
1.8K views
Exploring syntactic dependencies using spaCy
3:47
Applied Language Technology
1.4K views
Visualising word embeddings in spaCy using whatlies
4:33
Applied Language Technology
1K views
Embeddings: What they are and why they matter
38:38
Simon Willison
23K views
Text Analysis with Python: Intro to Spacy
19:32
Pythonology
6K views
Vectoring Words (Word Embeddings) - Computerphile
16:56
Computerphile
294K views
Word Embedding and Word2Vec, Clearly Explained!!!
16:12
StatQuest with Josh Starmer
322K views
A Complete Overview of Word Embeddings
17:17
AssemblyAI
110K views
What are Word Embeddings?
8:38
IBM Technology
13K views
The Biggest Misconception about Embeddings
4:43
ritvikmath
18K views