Great video, but I have one question. Is what you have shown in this video strictly related to `BertTokenizer` from the transformers library? I heard you say a lot about word embeddings, and now I'm confused: what is the purpose of `SentenceTransformer('all-MiniLM-L6-v2')` then? Why not just use `BertTokenizer`?
@sidharth-singh a year ago
Quite helpful. I watched it after reading the BERT paper, and it makes more sense now.
@RishiGarg-ld5hx 8 months ago
To implement the BERT model, do I need to run it on Google Colab, where I have access to a GPU, or is my local Windows machine sufficient?
@RohanPaul-AI 7 months ago
It's always better to run it on a machine with a GPU. On Windows without a GPU you may be able to run it, but it will be terribly slow.
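If you do try it locally, a quick way to check whether PyTorch can see a GPU and fall back gracefully (a generic snippet, not specific to the video):

```python
import torch

# Pick the GPU if CUDA is available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")
# You would then call model.to(device) and move each input batch
# to the same device before the forward pass.
```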
@8003066717 a year ago
Are there any resources available? Actually, I want to get the BERT embeddings and pass them to a BiLSTM, and I want to learn how.
@RohanPaul-AI a year ago
Generally, the first learning resource on BERT is the official Hugging Face documentation. Then, for many implementation examples, you can search on Kaggle and GitHub.
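The wiring the question asks about is simple in PyTorch: BERT's per-token outputs have shape (batch, seq_len, 768) for base models, and that tensor feeds straight into a bidirectional LSTM. A sketch with a random tensor standing in for the BERT output (shapes and hidden size are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Stand-in for BERT's last_hidden_state: (batch, seq_len, hidden=768).
# In a real pipeline this tensor comes from a (frozen or fine-tuned) BERT model.
bert_output = torch.randn(4, 32, 768)

# BiLSTM over the per-token BERT embeddings; hidden_size=128 is a free choice.
bilstm = nn.LSTM(input_size=768, hidden_size=128,
                 batch_first=True, bidirectional=True)
out, _ = bilstm(bert_output)

print(out.shape)  # (4, 32, 256): two directions * 128 hidden units
```

A classifier head (e.g. a linear layer) would typically sit on top of `out` or on the final hidden states.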
@kevon217 a year ago
Great step-by-step. Easy to follow and helpful!
@RohanPaul-AI a year ago
Great to know you liked it, @kevon
@punithandharani a year ago
Easy to understand. Thank you so much! 🙏
@RohanPaul-AI a year ago
Glad it was helpful!
@muhdbaasit8326 a year ago
Great video. Please explain how `token_type_ids` get generated; it's required for tabular data.
@RohanPaul-AI a year ago
Thanks, @muhdbaasit8326. Good question, will try to cover that very soon. Stay tuned.