Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/
@NottoriousGG · 17 hours ago
Such a cleverly disguised master of the craft. 🙇
@statquest · 5 hours ago
bam! :)
@PradeepKumar-hi8mr · 12 days ago
Wowww! Glad to have you back, Sir. Awesome videos 🎉
@statquest · 12 days ago
Thank you!
@tcsi_ · 12 days ago
100th Machine Learning Video 🎉🎉🎉
@statquest · 12 days ago
Yes! :)
@THEMATT222 · 9 days ago
Noice 👍 Doice 👍Ice 👍
@nossonweissman · 12 days ago
Yay!!! ❤❤ I'm starting it now and saving it so I remember to finish later. Also, I'm requesting a video on Sparse AutoEncoders (used in Anthropic's recent research). They seem super cool and I have a basic idea of how they work, but I'd love to see a "simply explained" version of them.
@statquest · 12 days ago
Thanks Nosson! I'll keep that topic in mind.
@kamal9294 · 9 days ago
Nice explanation! If the next topic is RAG or reinforcement learning, I'll be even happier (or even object detection / object tracking).
@statquest · 9 days ago
I guess you didn't get to 16:19 where I explain how RAG works...
@kamal9294 · 9 days ago
@statquest Bro, but on LinkedIn I've seen many RAG variants and some retrieval techniques that use advanced data structures (like HNSW). That's why I asked.
@statquest · 9 days ago
@@kamal9294 Those are just optimizations, which will change every month. However, the fundamental concepts will stay the same and are described in this video.
@kamal9294 · 9 days ago
@@statquest Now I'm clear, thank you!
@free_thinker4958 · 12 days ago
You're the man ❤️💯👏 Thanks for everything you do here to spread that precious knowledge 🌹 We hope you could dedicate a future video to multimodal models (text to speech, speech to speech, etc.) ✨
@statquest · 12 days ago
I'll keep that in mind!
@etgaming6063 · 12 days ago
This video came just in time. I'm trying to make my own RoBERTa model and have been struggling to understand how they work under the hood. Not anymore!
@statquest · 11 days ago
BAM!
@barackobama7757 · 9 days ago
Hello StatQuest. I was hoping you could make a video on PSO (Particle Swarm Optimization). It would really help! Thank you, amazing videos as always!
@statquest · 8 days ago
I'll keep that in mind.
@davidlu1003 · 4 days ago
And thx for the courses. They are great!!!!😁😁😁
@statquest · 4 days ago
Glad you like them!
@thegimel · 12 days ago
Great instructional video, as always, StatQuest! You mentioned in the video that the training task for these networks is next-word prediction; however, models like BERT have only self-attention layers, so they have "bidirectional awareness". They are usually trained on masked language modeling and next-sentence prediction, if I recall correctly?
@statquest · 12 days ago
I cover how a very basic word embedding model might be trained in order to illustrate its limitations - that it doesn't take position into account. However, the video does not discuss how an encoder-only transformer is trained. That said, you are correct, an encoder-only transformer uses masked language modeling.
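To make the masked language modeling idea concrete, here is a toy sketch of just the masking step (hypothetical function names, not BERT's actual implementation — real models mask subword tokens and mix in random/unchanged replacements):

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    # Randomly hide a fraction of the tokens; during training,
    # the model must predict the hidden originals from both
    # the left AND right context (hence "bidirectional").
    rng = random.Random(seed)
    masked, targets = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            masked.append(mask_token)
            targets.append(tok)    # training label for this position
        else:
            masked.append(tok)
            targets.append(None)   # no loss at unmasked positions
    return masked, targets

masked, targets = mask_tokens("statquest is totally awesome".split(), mask_rate=0.5)
print(masked)
```

The loss is computed only at the masked positions, which is why this objective works for encoder-only models that see the whole sentence at once.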
@Kimgeem · 10 days ago
So excited to watch this later 🤩✨
@statquest · 10 days ago
future bam! :)
@tonym4926 · 12 days ago
Are you planning to add this video to the neural network / deep learning playlist?
@statquest · 12 days ago
yes! Just did.
@aryasunil9041 · 8 days ago
Great video! When is the Neural Networks book coming out? Very eager for it.
@statquest · 7 days ago
Early January. Bam! :)
@davidlu1003 · 4 days ago
I love you. I will keep going and learn your other courses if they are always free. Please keep them free; I will always be your fan. 😁😁😁
@statquest · 4 days ago
Thank you, I will!
@mbeugelisall · 12 days ago
Just the thing I’m learning about right now!
@statquest · 12 days ago
bam! :)
@nathannguyen2041 · 12 days ago
Did math always come easy to you? Also, how did you study? Do math topics stay in your mind, e.g., fancy integral tricks in probability theory, or dominated convergence, etc.?
@statquest · 12 days ago
Math was never easy for me and it's still hard. I just try to break big equations down into small bits that I can plug numbers into and see what happens to them. And I quickly forget most math topics unless I can come up with a little song that will help me remember.
@benjaminlucas9080 · 10 days ago
Have you done anything on vision transformers? Or could you?
@statquest · 10 days ago
I'll keep that in mind. They are not as fancy as you might guess.
@draziraphale · 12 days ago
Great explanation
@statquest · 12 days ago
Thanks!
@iamumairjaffer · 12 days ago
Well explained ❤❤❤
@statquest · 12 days ago
Thanks!
@aihsdiaushfiuhidnva · 5 days ago
Not many people outside the field seem to know about BERT.
@statquest · 5 days ago
yep.
@epberdugoc · 12 days ago
Actually it's, LA PIZZA ES MAGNÍFICA!! ("the pizza is magnificent") ha ha
@statquest · 12 days ago
:)
@noadsensehere9195 · 11 days ago
good
@statquest · 11 days ago
Thanks!
@SuperRobieboy · 12 days ago
Great video! Encoders are very interesting in applications like vector search or downstream prediction tasks (my thesis!). I'd love to see a quest on positional encoding, but perhaps generalized beyond word positions in sentences to pixel positions in an image or graph connectivity. Image and graph transformers are very cool, and positional encoding is too often discussed only for the text modality. It would be a great addition to educational ML content on YouTube ❤
@statquest · 12 days ago
Thanks! I'll keep that in mind.
@Apeiron242 · 12 days ago
Thumbs down for using the robot voice.
@statquest · 12 days ago
Noted
@ChargedPulsar · 12 days ago
Another bad video: it promises simplicity, then dives right into graphs with no background or explanation.
@statquest · 12 days ago
Noted
@Austinlorenzmccoy · 10 days ago
@@ChargedPulsar The video is great; the visualization helps people capture the context better. Maybe that's because I've read about it before, but it sure explains it well. If you feel you can do better, create the content and share it so we can dive in too.