Support StatQuest by buying my books The StatQuest Illustrated Guide to Machine Learning, The StatQuest Illustrated Guide to Neural Networks and AI, or a Study Guide or Merch!!! statquest.org/statquest-store/
@tuananhvo9506 · 1 month ago
Counting this new one, I may have roughly watched all of the StatQuest videos already. Having been deeply invested in the channel for the last few months, I feel much more confident in my quest to get my first AI-related job. Massive thanks, Josh, for relentlessly bringing the right intuition to the masses!!
@statquest · 1 month ago
Good luck with that first job!
@PradeepKumar-hi8mr · 1 month ago
Wowww! Glad to have you back, Sir. Awesome videos 🎉
@statquest · 1 month ago
Thank you!
@NottoriousGG · 1 month ago
Such a cleverly disguised master of the craft. 🙇
@statquest · 1 month ago
bam! :)
@Kimgeem · 1 month ago
So excited to watch this later 🤩✨
@statquest · 1 month ago
future bam! :)
@mbeugelisall · 1 month ago
Just the thing I’m learning about right now!
@statquest · 1 month ago
bam! :)
@nossonweissman · 1 month ago
Yay!!! ❤❤ I'm starting it now and saving it so I remember to finish later. Also, I'm requesting a video on Sparse Autoencoders (used in Anthropic's recent research). They seem super cool and I have a basic idea of how they work, but I'd love to see a "simply explained" version of them.
@statquest · 1 month ago
Thanks Nosson! I'll keep that topic in mind.
@free_thinker4958 · 1 month ago
You're the man ❤️💯👏 thanks for everything you do here to spread that precious knowledge 🌹 we hope if you could possibly dedicate a future video to talk about multimodal models (text to speech, speech to speech etc...) ✨
@statquest · 1 month ago
I'll keep that in mind!
@davidlu1003 · 1 month ago
And thx for the courses. They are great!!!!😁😁😁
@statquest · 1 month ago
Glad you like them!
@tcsi_ · 1 month ago
100th Machine Learning Video 🎉🎉🎉
@statquest · 1 month ago
Yes! :)
@THEMATT222 · 1 month ago
Noice 👍 Doice 👍Ice 👍
@davidlu1003 · 1 month ago
I love you! I will keep going and learn your other courses if they stay free. Keep them free please, I will always be your fan. 😁😁😁
@statquest · 1 month ago
Thank you, I will!
@kamal9294 · 1 month ago
Nice explanation! If the next topic is RAG or reinforcement learning, I will be even happier (or even object detection or object tracking).
@statquest · 1 month ago
I guess you didn't get to 16:19 where I explain how RAG works...
@kamal9294 · 1 month ago
@statquest Bro, but on LinkedIn I saw many RAG types and some retrieval techniques using advanced data structures and algorithms (like HNSW). That's why I asked.
@statquest · 1 month ago
@kamal9294 Those are just optimizations, which will change every month. However, the fundamental concepts will stay the same and are described in this video.
@kamal9294 · 1 month ago
@statquest Now I am clear, thank you!
@thegimel · 1 month ago
Great instructional video, as always, StatQuest! You mentioned in the video that the training task for these networks is next-word prediction; however, models like BERT have only self-attention layers, so they have "bidirectional awareness". They are usually trained on masked language modeling and next-sentence prediction, if I recall correctly?
@statquest · 1 month ago
I cover how a very basic word embedding model might be trained in order to illustrate its limitations - that it doesn't take position into account. However, the video does not discuss how an encoder-only transformer is trained. That said, you are correct, an encoder-only transformer uses masked language modeling.
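To make the masked-language-modeling objective mentioned above concrete, here is a minimal sketch of how BERT-style training inputs are corrupted; the 15% masking rate and the 80/10/10 split follow the original BERT paper, while the token strings are just toy stand-ins for real vocabulary IDs:

```python
import random

MASK, VOCAB_SIZE = "[MASK]", 30000

def mask_tokens(tokens, mask_prob=0.15):
    """Return (corrupted tokens, labels); labels are None where no loss is taken."""
    corrupted, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            labels.append(tok)          # the model must predict the original token
            r = random.random()
            if r < 0.8:
                corrupted.append(MASK)  # 80%: replace with [MASK]
            elif r < 0.9:
                corrupted.append(f"tok{random.randrange(VOCAB_SIZE)}")  # 10%: random token
            else:
                corrupted.append(tok)   # 10%: keep the token unchanged
        else:
            corrupted.append(tok)
            labels.append(None)         # no prediction needed at this position
    return corrupted, labels

print(mask_tokens("the pizza is great and statquest explains it well".split()))
```

Because the model sees tokens on both sides of each mask, it can use its bidirectional self-attention, which next-word prediction would not allow.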
@etgaming6063 · 1 month ago
This video came just in time; I've been trying to make my own RoBERTa model and have been struggling to understand how these models work under the hood. Not anymore!
@statquest · 1 month ago
BAM!
@swarupdas8043 · 16 days ago
What could be better for learning ML than having a teacher like you? Thanks for all the effort you have put in. I would buy any Udemy courses you have covering ML. Please let me know!
@statquest · 15 days ago
I have a book coming out in the next few weeks that covers all of these neural network videos, with PyTorch tutorials.
@iamumairjaffer · 1 month ago
Well explained ❤❤❤
@statquest · 1 month ago
Thanks!
@rishidixit7939 · 29 days ago
Very beautifully explained, as always. It takes a great amount of intuitive understanding and talent to explain a relatively tough topic in such an easy way. I just had some doubts:
1. For context-aware embeddings of a sentence or a document, are the individual token embeddings averaged? Does this have something to do with the CLS token?
2. Just as a variational autoencoder learns the intricate patterns of images and then creates its own latent space, can BERT (or any similar model) do that for vision tasks, or are these models only suitable for NLP tasks?
3. Are knowledge graphs made using BERT?
Any help on these will be appreciated. Thank you again for the awesome explanation!
@statquest · 29 days ago
1. The CLS token is specifically used for classification problems, and I talk about how it works in my upcoming book. That said, if you embed a whole sentence, then you can average the output values.
2. Transformers work great with images and image classification.
3. I don't know.
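To illustrate the averaging described in answer 1, here is a minimal sketch using the Hugging Face transformers library: mean pooling of the final hidden states, weighted by the attention mask so padding is ignored. Whether this beats using the CLS vector depends on the model and the task:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentence = "StatQuest is awesome"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    # last_hidden_state: (batch, tokens, hidden) context-aware token embeddings
    hidden = model(**inputs).last_hidden_state

# Average the token embeddings, using the attention mask to skip padding.
mask = inputs["attention_mask"].unsqueeze(-1)          # (batch, tokens, 1)
sentence_vec = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_vec.shape)  # torch.Size([1, 768])
```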
@tonym4926 · 1 month ago
Are you planning to add this video to the neural network/deep learning playlist?
@statquest · 1 month ago
Yes! Just did.
@aryasunil9041 · 1 month ago
Great video! When is the neural networks book coming out? Very eager for it.
@statquest · 1 month ago
Early January. Bam! :)
@barackobama7757 · 1 month ago
Hello StatQuest. I was hoping you could make a video on PSO (Particle Swarm Optimization). It would really help! Thank you, amazing videos as always!
@statquest · 1 month ago
I'll keep that in mind.
@draziraphale · 1 month ago
Great explanation
@statquest · 1 month ago
Thanks!
@alecollins01 · 1 month ago
THANK YOU
@statquest · 1 month ago
double bam! :)
@nathannguyen2041 · 1 month ago
Did math always come easy to you? Also, how did you study? Do math topics stay in your mind, e.g., fancy integral tricks in probability theory, dominated convergence, etc.?
@statquest · 1 month ago
Math was never easy for me and it's still hard. I just try to break big equations down into small bits that I can plug numbers into and see what happens to them. And I quickly forget most math topics unless I can come up with a little song that will help me remember.
@benjaminlucas9080 · 1 month ago
Have you done anything on vision transformers? Or can you?
@statquest · 1 month ago
I'll keep that in mind. They are not as fancy as you might guess.
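A minimal sketch of why they are not that fancy: a vision transformer just slices the image into patches and linearly projects each patch into a token, after which a plain transformer encoder takes over. The 16x16 patch size follows the original ViT paper; everything else here is an arbitrary toy choice:

```python
import torch

image = torch.randn(1, 3, 224, 224)      # (batch, channels, height, width)
patch = 16                                # 16x16 pixel patches, as in the original ViT

# Cut the image into non-overlapping patches and flatten each one...
patches = image.unfold(2, patch, patch).unfold(3, patch, patch)  # (1, 3, 14, 14, 16, 16)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, 14 * 14, 3 * patch * patch)

# ...then a single linear layer turns each patch into a "word" embedding,
# and the rest is an ordinary transformer encoder over this token sequence.
embed = torch.nn.Linear(3 * patch * patch, 768)
tokens = embed(patches)
print(tokens.shape)  # torch.Size([1, 196, 768])
```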
@noadsensehere9195 · 1 month ago
good
@statquest · 1 month ago
Thanks!
@aihsdiaushfiuhidnva · 1 month ago
Not many people outside the know seem to know about BERT.
@statquest · 1 month ago
yep.
@SuperRobieboy · 1 month ago
Great video! Encoders are very interesting in applications like vector search or downstream prediction tasks (my thesis!). I'd love to see a quest on positional encoding, but perhaps generalised to not just word positions in sentences but also pixel positions in an image or graph connectivity. Image and graph transformers are very cool, and positional encoding is too often discussed only for the text modality. It would be a great addition to educational ML content on YouTube ❤
@statquest · 1 month ago
Thanks! I'll keep that in mind.
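While that quest is pending, here is a minimal sketch of the classic sinusoidal positional encoding from the original transformer paper; for images or graphs, the 1-D position index would be swapped for 2-D pixel coordinates or graph-derived features, but this NumPy version shows the text-modality case:

```python
import numpy as np

def sinusoidal_positional_encoding(num_positions: int, d_model: int) -> np.ndarray:
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(same angle)."""
    pos = np.arange(num_positions)[:, None]        # (positions, 1)
    i = np.arange(0, d_model, 2)[None, :]          # (1, d_model/2)
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((num_positions, d_model))
    pe[:, 0::2] = np.sin(angles)                   # even dimensions get sine
    pe[:, 1::2] = np.cos(angles)                   # odd dimensions get cosine
    return pe

print(sinusoidal_positional_encoding(50, 64).shape)  # (50, 64)
```

Each position gets a unique pattern of sines and cosines added to its token embedding, which is what lets the attention layers tell word order apart.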
@dailygrowth7967 · 15 days ago
PIZZA GREAT!❤
@statquest · 13 days ago
:)
@epberdugoc · 1 month ago
Actually it's, LA PIZZA ES MAGNÍFICA!! ("the pizza is magnificent") ha ha
@statquest · 1 month ago
:)
@Apeiron242 · 1 month ago
Thumbs down for using the robot voice.
@statquest · 1 month ago
Noted
@ChargedPulsar · 1 month ago
Another bad video; it promises simplicity but dives right into graphs with no background or explanation.
@statquest · 1 month ago
Noted
@Austinlorenzmccoy · 1 month ago
@ChargedPulsar The video is great; the visualization helps people capture the context better. Maybe that's because I have read about it before, but it sure explains things well. If you feel you can do better, create the content and share it so we can dive in too.