🎁 Free NLP for Semantic Search Course:
www.pinecone.io/learn/nlp
BERT has enjoyed unparalleled success in NLP thanks to two unique training approaches: masked-language modeling (MLM) and next sentence prediction (NSP).
In many cases, we might be able to take the pre-trained BERT model out-of-the-box and apply it successfully to our own language tasks.
But often, we need to pre-train the model even further for a specific use case.
Further training with MLM allows us to tune BERT to better understand the particular use of language in a more specific domain.
Out-of-the-box BERT: great for general-purpose use. BERT fine-tuned with MLM: great for domain-specific use.
In this video, we'll cover exactly how to fine-tune BERT models using MLM in PyTorch.
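Here's a minimal sketch of the idea using Hugging Face transformers with PyTorch. The file name meditations.txt, the 15% masking rate, and the hyperparameters are illustrative assumptions, not necessarily the exact setup from the video:

import torch
from transformers import BertTokenizerFast, BertForMaskedLM

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# meditations.txt is a placeholder: one passage of domain text per line.
with open("meditations.txt") as f:
    text = [line.strip() for line in f if line.strip()]

inputs = tokenizer(text, return_tensors="pt", max_length=512,
                   truncation=True, padding="max_length")
# Keep the original token IDs as the prediction targets.
inputs["labels"] = inputs["input_ids"].clone()

# Mask ~15% of tokens, skipping special tokens ([CLS], [SEP], [PAD]).
rand = torch.rand(inputs["input_ids"].shape)
mask = (rand < 0.15) \
    & (inputs["input_ids"] != tokenizer.cls_token_id) \
    & (inputs["input_ids"] != tokenizer.sep_token_id) \
    & (inputs["input_ids"] != tokenizer.pad_token_id)
inputs["input_ids"][mask] = tokenizer.mask_token_id

# One illustrative training step; note the loss here covers all positions,
# whereas stricter MLM setups set unmasked label positions to -100.
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
outputs = model(**inputs)
outputs.loss.backward()
optimizer.step()

In practice you would wrap the tensors in a Dataset and DataLoader and loop over batches and epochs; re-masking each batch on the fly also gives the model different masks every epoch.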
👾 Code:
github.com/jamescalam/transfo...
Meditations data:
github.com/jamescalam/transfo...
Understanding MLM:
• Training BERT #1 - Mas...
🤖 70% Discount on the NLP With Transformers in Python course:
bit.ly/3DFvvY5
📙 Medium article:
towardsdatascience.com/masked...
🎉 Sign-up For New Articles Every Week on Medium!
/ membership
📖 If membership is too expensive - here's a free link:
towardsdatascience.com/masked...
🕹️ Free AI-Powered Code Refactoring with Sourcery:
sourcery.ai/?YouTu...