One of the best channels/videos on NLP. You strike a great balance between video length and in-depth explanation of each topic. The video is also easy to follow and instructive.
@r5bc 4 years ago
I watched the masterclass playlist and all the videos in this one too. I want to thank you for your amazing work, the tremendous value you provide, and the light you shed on the topic. I can't wait for the next video. Please keep up the good work 👍👍👍👍
@anupammitra 3 years ago
I'd like to thank Rasa for the amazing work explaining these concepts. Truly exceptional.
@WahranRai 2 years ago
10:01 I think it is −log(Cij) instead of +log(Cij) (similar to a squared-error term)
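For reference, the published GloVe objective (Pennington et al., 2014) does subtract log(X_ij), matching the comment above. A minimal NumPy sketch of the per-pair loss (variable names are illustrative, not taken from the video):

```python
import numpy as np

def glove_loss(w_i, w_j, b_i, b_j, x_ij, x_max=100.0, alpha=0.75):
    """Per-pair GloVe loss: f(X_ij) * (w_i . w_j + b_i + b_j - log X_ij)^2."""
    weight = min((x_ij / x_max) ** alpha, 1.0)  # f(X_ij): damps rare pairs, caps frequent ones
    diff = np.dot(w_i, w_j) + b_i + b_j - np.log(x_ij)  # note the minus sign on log
    return weight * diff ** 2

# When the model exactly predicts log X_ij, the loss is zero:
w_i, w_j = np.ones(3), np.zeros(3)
print(glove_loss(w_i, w_j, np.log(10.0), 0.0, x_ij=10.0))  # → 0.0
```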
@firstnamelastname3106 3 years ago
Bruh, you somehow managed to explain it better than the dude from Stanford, thanks!
@duongphanai7094 a year ago
This video saved my life! Good job! Keep it up!
@SreeramAjay 4 years ago
At 9:56, should that be (minus) −log(Cij)?
@masoudparpanchi505 4 years ago
🤔🤔
@kajumilylys2617 3 years ago
Thank you for this series, it's been super helpful ❤️
@kajumilylys2617 3 years ago
I would like to know why the weights change every time we retrain the embedding layer!?
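The thread leaves this unanswered, but the usual reason is that an embedding layer starts from randomly initialized weights, so each retraining run begins from a different point and converges to different (though similarly useful) vectors. A minimal sketch, using a NumPy stand-in for a framework's embedding initializer:

```python
import numpy as np

def init_embedding_layer(vocab_size, dim, seed=None):
    """Stand-in for an embedding layer's initializer: small random values."""
    rng = np.random.default_rng(seed)
    return rng.normal(scale=0.1, size=(vocab_size, dim))

run1 = init_embedding_layer(1000, 16)            # unseeded: new weights every run
run2 = init_embedding_layer(1000, 16)            # so retrained vectors differ
fixed1 = init_embedding_layer(1000, 16, seed=0)  # fixing the seed makes runs
fixed2 = init_embedding_layer(1000, 16, seed=0)  # reproducible
```

Fixing the random seed (and any other nondeterminism, such as data shuffling) is the standard way to make retraining reproducible.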
@AttiDavidson 4 years ago
Thank you very much! Very good explanation.
@amirhosseinramazani757 2 years ago
I really enjoyed it! Thanks!
@abdulrahmanmohamed2824 4 years ago
Great explanation! But is there another one like this for BERT?
@RasaHQ 4 years ago
A few videos later in the series are about transformers/attention which are the core components of BERT.
@anupammitra 3 years ago
What word embeddings are publicly available for use, apart from GloVe?
@RasaHQ 3 years ago
Hi Anupam! There are many different word embeddings available. Some of the most common are GloVe, fastText, and word2vec. Contextual embeddings are also increasingly popular, including ELMo and BERT. (All of these are for English; embeddings are also available in other languages.)
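A toy illustration of what the static embeddings mentioned in the reply (GloVe, fastText, word2vec) provide: one fixed vector per word, typically compared with cosine similarity. The vectors below are made up for the example, not real GloVe values:

```python
import numpy as np

# Hypothetical 3-dimensional word vectors (real GloVe vectors are 50-300d).
embeddings = {
    "king":  np.array([0.8, 0.3, 0.1]),
    "queen": np.array([0.7, 0.4, 0.1]),
    "apple": np.array([0.1, 0.9, 0.7]),
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_royal = cosine(embeddings["king"], embeddings["queen"])
sim_fruit = cosine(embeddings["king"], embeddings["apple"])
# With well-trained embeddings, king/queen come out more similar than king/apple.
```

Contextual models like ELMo and BERT differ in that they produce a different vector for the same word depending on its sentence, rather than one fixed vector per word.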