BERT for Topic Modeling - EXPLAINED!

18,202 views

CodeEmporium

1 day ago

Comments: 35
@nextXstep · 3 years ago
Couldn't find a single article explaining the code behind BERTopic; the explanation in this video is absolutely perfect. Thanks!!
@CodeEmporium · 3 years ago
You're welcome :)
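Since the comment above mentions the code behind BERTopic: the step that turns clusters into topic words is class-based TF-IDF (c-TF-IDF). Here is a minimal numpy sketch of that scoring formula, not the library's actual implementation; the function name and the toy matrix are made up for illustration.

```python
import numpy as np

def c_tf_idf(term_counts):
    """Class-based TF-IDF in the BERTopic style.

    term_counts: (n_topics, n_terms) matrix, where row c holds the term
    frequencies of all documents assigned to topic c, concatenated.
    Returns a matrix of the same shape with per-topic word scores.
    """
    counts = np.asarray(term_counts, dtype=float)
    tf = counts / counts.sum(axis=1, keepdims=True)  # term frequency within each topic
    avg_words = counts.sum() / counts.shape[0]       # A: average word count per topic
    freq = counts.sum(axis=0)                        # f_t: frequency of term t over all topics
    idf = np.log(1.0 + avg_words / freq)             # rare-across-topics terms score higher
    return tf * idf
```

With two topics and three terms, `c_tf_idf([[2, 1, 0], [0, 1, 3]])` ranks term 0 highest for the first topic and term 2 highest for the second, since each is concentrated in one topic.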
@pierrickleroy780 · 2 years ago
Another clear explanation of multiple useful SOTA concepts. Cheers man, I really like your videos and your way of communicating things!
@IRiViI · 3 years ago
I just wanted to say that I love your videos!
@MLA263 · 1 year ago
This is amazing, thank you, you hero
@chetanetrx2005 · 3 years ago
Nice video! Can you please explain how a sentence transformer works at inference time, when we have only one sentence?
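On the question above: at inference time a sentence transformer needs no sentence pair. The single sentence is tokenized, run through BERT once, and its token vectors are pooled (commonly mean pooling over non-padding tokens) into one fixed-size embedding. A toy sketch of that pooling step, with made-up numbers and a hypothetical function name:

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Collapse per-token vectors into one sentence vector.

    token_embeddings: (seq_len, dim) hidden states for one sentence.
    attention_mask:   (seq_len,) with 1 for real tokens, 0 for padding.
    Padding positions are excluded from the average.
    """
    mask = np.asarray(attention_mask, dtype=float)[:, None]
    summed = (np.asarray(token_embeddings, dtype=float) * mask).sum(axis=0)
    return summed / mask.sum()
```

For example, with token vectors `[[1, 2], [3, 4], [100, 100]]` and mask `[1, 1, 0]`, the padded third vector is ignored and the sentence embedding is `[2, 3]`.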
@lepton555 · 3 years ago
Awesome job, man! Absolutely terrific! Thank you!
@lucyledezma709 · 2 years ago
Great video!!!🤗🤗🤗
@CodeEmporium · 2 years ago
Thanks a ton :)
@kabeerjaffri4015 · 3 years ago
Hey, can you make a video on unsupervised temporal action localization in video? Like when you search Google for something and it shows the time intervals where the video content matches your query. I think it's a great topic and may spark your interest. BTW, great content as usual!
@teetanrobotics5363 · 3 years ago
Amazing content bro. Please keep updating your playlists.
@yazdipour · 3 years ago
Love your BERT/NLP content
@CodeEmporium · 3 years ago
Glad you do. I'll try to make more like this :)
@emanuelgerber · 3 years ago
Thanks for making this video! Helps a lot
@gunavardhinip · 1 year ago
Brother, will you explain BART for text summarization?
@datasci4547 · 3 years ago
Very clear explanation. Thank you :)
@alexnim4873 · 3 years ago
Amazing video bro. Very helpful. Thanks so much
@CodeEmporium · 3 years ago
You are very welcome :)
@user-or7ji5hv8y · 3 years ago
Really well explained
@CodeEmporium · 3 years ago
Thanks :)
@wilfredomartel7781 · 1 year ago
Just a doubt: so a triplet dataset is better for improving embeddings? Maybe a video on how to fine-tune a non-English transformer?
@CodeEmporium · 1 year ago
Sure thing. The goal is to make a series on BERT, training and coding it from scratch, soon. In the meantime, you might enjoy the playlist called "Transformers from Scratch", where I build a translator for a non-English language. Though there is no "pre-training" and "fine-tuning", many components of BERT are similar to the transformer, so I recommend checking that out.
@wilfredomartel7781 · 1 year ago
@CodeEmporium thank you for taking the time to respond to my message. Maybe a link suggestion, my friend?
@CodeEmporium · 1 year ago
Transformers from scratch kzbin.info/aero/PLTl9hO2Oobd97qfWC40gOSU8C0iu0m2l4
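On the triplet-dataset question earlier in this thread: triplet training pulls an anchor sentence toward a semantically similar "positive" and pushes it away from a "negative", which is one of the objectives used to fine-tune sentence embeddings. A minimal sketch of the triplet objective with Euclidean distance (not sentence-transformers' actual loss class; names and numbers are illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet objective: zero loss once the positive is closer to the
    anchor than the negative by at least `margin`, otherwise penalize."""
    a, p, n = (np.asarray(v, dtype=float) for v in (anchor, positive, negative))
    d_pos = np.linalg.norm(a - p)  # anchor-to-positive distance
    d_neg = np.linalg.norm(a - n)  # anchor-to-negative distance
    return max(0.0, d_pos - d_neg + margin)
```

With `anchor=[0, 0]`, `positive=[0, 1]`, `negative=[5, 0]` the margin is already satisfied and the loss is 0; swap in a nearby negative like `[0, 1]` against a farther positive `[0, 3]` and the loss becomes positive, which is what drives the embeddings apart during fine-tuning.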
@Daniel-gy1rc · 2 years ago
Can you make a video about Google PaLM?
@anonxnor · 3 years ago
You should be able to do this with the CLS token embeddings instead of sentence embeddings from S-BERT, if you use regular BERT, right?
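Mechanically yes: you can take plain BERT's [CLS] vector as a sentence embedding, but without fine-tuning it tends to produce much weaker similarity clusters than S-BERT's pooled embeddings, which is the gap the S-BERT paper reported. The two pooling choices can be sketched side by side on a toy hidden-state matrix (function names hypothetical):

```python
import numpy as np

def cls_embedding(hidden_states):
    """Vanilla-BERT style: take the [CLS] (first position) vector."""
    return np.asarray(hidden_states, dtype=float)[0]

def mean_embedding(hidden_states):
    """S-BERT style pooling: average all token vectors."""
    return np.asarray(hidden_states, dtype=float).mean(axis=0)
```

Given hidden states `[[1, 1], [3, 3], [5, 5]]`, the CLS choice returns `[1, 1]` while mean pooling returns `[3, 3]`; same model output, different sentence vectors, and the clustering quality downstream differs accordingly.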
@pranaymehta8471 · 3 years ago
Have you tried BERTopic with bigger datasets, for instance 100k or 500k documents? From what I have seen, the sentence transformer takes a lot of time to create the N-dimensional embeddings. I am not sure if BERTopic runs things in parallel.
@CodeEmporium · 3 years ago
Good question. Sorry I'm late to this. I've worked with Sentence Transformers in general, and I can say the speed and quality really depend on which sentence transformer you use. You'll just need to choose one that balances both if speed is essential (like for online applications, as opposed to postmortem analysis). Also, BERT, and hence the sentence transformer, processes information in parallel. Sentence Transformers have a method called "encode()" where you can pass in a list of sentences and fetch the embeddings in batches.
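For large corpora like the 100k–500k case above, the practical pattern is batched encoding: sentence-transformers' `encode()` accepts a list and a `batch_size` parameter and chunks the work internally. The chunking itself can be sketched in a few lines (a generic helper, not the library's code):

```python
def batched(items, batch_size):
    """Yield successive slices of `items`, each at most `batch_size` long,
    so a large document list can be encoded chunk by chunk."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]
```

For example, `list(batched(list(range(10)), 4))` produces three batches of sizes 4, 4, and 2; each batch is what would be handed to the model in one forward pass.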
@geoffreyanderson4719 · 3 years ago
My spidey-sense tingles for me when more than half of the topics in the corpus are unclustered. Exploratory data analysis on those might reveal some easily fixed errors. Like, I might want to see what happens when you topic model just those bad bois, letting all the rest of the good bois to go home on the regular schoolbus. Maybe the Breakfast Club misfits have something in common with each other after all.
@CodeEmporium · 3 years ago
Aye, good eye. I was mostly just trying to illustrate BERT, but some analysis on this would have been a nice follow-up.
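BERTopic marks documents it could not cluster with topic id -1, so the "re-model just the misfits" idea from the comment above starts by splitting on that label. A small hypothetical helper (the -1 convention is BERTopic's; the function is illustrative):

```python
def split_outliers(docs, topics, outlier_id=-1):
    """Separate documents left unclustered (topic -1) from clustered ones,
    so the outliers can be inspected or topic-modeled on their own."""
    outliers = [d for d, t in zip(docs, topics) if t == outlier_id]
    clustered = [d for d, t in zip(docs, topics) if t != outlier_id]
    return outliers, clustered
```

The returned `outliers` list could then be fed back into a fresh topic model, or eyeballed for easily fixed issues (boilerplate, duplicates, very short texts) as the comment suggests.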
@Burakyesilvlogs · 3 years ago
Great video
@CodeEmporium · 3 years ago
Thank you 😊
@boubacarbah1455 · 2 years ago
Hello, I'm trying to reproduce your exercise, but I hit a problem when importing BERTopic ("from bertopic import BERTopic"). I get the error "no module named 'llvmlite.binding.dylib'" and could not fix it, so I wonder if you have a solution?
@maryamaziz3841 · 3 years ago
Hi, how can I use BERT for word embeddings?
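To the question above: contextual word embeddings come from BERT's per-token hidden states (with Hugging Face transformers, `model(**inputs).last_hidden_state`). One wrinkle is that a single word is often split into several subword tokens, so a common recipe is to average the hidden states at the word's subtoken positions. A toy sketch of that last step, with a hypothetical function name and made-up numbers:

```python
import numpy as np

def word_embedding(hidden_states, subtoken_indices):
    """One vector for a word: average the hidden states at the
    positions of its subword tokens.

    hidden_states:    (seq_len, dim) per-token vectors from BERT.
    subtoken_indices: positions that the word's subtokens occupy.
    """
    h = np.asarray(hidden_states, dtype=float)
    return h[list(subtoken_indices)].mean(axis=0)
```

For example, if a word was split into the subtokens at positions 1 and 2 of `[[0, 0], [2, 2], [4, 4]]`, its embedding is the average `[3, 3]`.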
@greenital · 3 years ago
in love ** divine * :D