The intro song feels like Phoebe Buffay used to sing from 'Friends' 😂 BTW Nice Explanation ❤
@ahmedfarag7138 3 hours ago
2024 PHD Student 10000000 BAM
@user-hl6xe8dz9x 3 hours ago
puop poopup pooh
@rishidixit7939 5 hours ago
Very beautifully explained, as always. It takes a great amount of intuitive understanding and talent to explain a relatively tough topic in such an easy way. I just had some doubts: 1. For context-aware embeddings of a sentence or a doc, are the individual token embeddings averaged? Does this have something to do with the CLS token? 2. Just as a Variational Autoencoder learns the intricate patterns of images and creates its own latent space, can BERT (or a similar model) do that for vision tasks, or are such models only suitable for NLP tasks? 3. Are knowledge graphs made using BERT? Any help on these will be appreciated. Thank you again for the awesome explanation!
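On doubt 1, here is a minimal sketch (my assumption, not from the video) of mean pooling, one common way to turn per-token contextual embeddings into a single sentence embedding; BERT-style models can alternatively use the [CLS] token's embedding directly. The numbers are made up.

```python
# Sketch of mean pooling: average per-token contextual embeddings
# to get one sentence-level embedding. All values here are made up.
import numpy as np

# Pretend token embeddings for a 4-token sentence, 5 dimensions each
token_embeddings = np.array([
    [0.1, 0.2, 0.0, 0.5, 0.3],
    [0.4, 0.1, 0.2, 0.0, 0.1],
    [0.3, 0.3, 0.1, 0.2, 0.0],
    [0.0, 0.4, 0.3, 0.1, 0.2],
])

# Mean pooling: one vector with the same dimensionality as each token
sentence_embedding = token_embeddings.mean(axis=0)
print(sentence_embedding.shape)  # (5,)
```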
@deepaksurya2078 6 hours ago
hi, small mistake: 1 - 3/8 should be 5/8 at 10:31. Rest is all good, love your videos!
@soroya123 6 hours ago
AAAAASDF
@ritardstrength5169 8 hours ago
Thank you sir. I just suffered through a Math Stats class at a well-regarded graduate program that never explained MLEs as well as this short video.
@SST0019 8 hours ago
Bruh is so confusing
@qaziaman8194 9 hours ago
If you could please make a video on assumptions of linear regression, that would be helpful.
@person_147 10 hours ago
BAM!!! Thanks mate
@muallagun6765 11 hours ago
thisss was very fun and enjoyable to watch, thank you !!!!! :))
@mari-with-a-gun 14 hours ago
1:29 this decision tree can be simplified to either half of the "silly songs" decision and it would generate the exact same results
@statquest 13 hours ago
noted
@sdyoung2008 14 hours ago
14:30 a little let down with no BAM.
@statquest 13 hours ago
bam! :)
@salvatoreiovinella6640 14 hours ago
the psst kills me every time :)
@statquest 13 hours ago
:)
@mmmory2878 16 hours ago
You remind me of my teacher. You are so cool, dear StatQuest🤩
@statquest 13 hours ago
Thanks!
@NurEnglish 17 hours ago
I still don't get what a p-value is and why we need it.
@statquest 16 hours ago
What time point in the video, minutes and seconds, seems confusing? Is there anything in the section that starts at 0:17 that is particularly confusing?
@patolizac23 18 hours ago
my teacher keeps flying to new york and doesn't teach us crap about this so thank you for this pookie <3
@statquest 17 hours ago
Thanks!
@mehmetoguzalpkan2554 21 hours ago
BAMMM indeeed
@statquest 17 hours ago
Thanks!
@bernaridho 22 hours ago
To be consistent with your distinction between probability and likelihood, you should say probability AND likelihood instead of OR.
@statquest 17 hours ago
What time point in the video, minutes and seconds, are you referring to?
I was thinking about creating embeddings for a domain-specific task, to use later in processing research papers. The task I had in mind was interpreting research papers in the chemistry domain specifically. For that I need word embeddings trained specifically on this type of data. The problems I am encountering are: 1) How do I get this data in bulk? 2) How do I train the embeddings? Should I use an AutoEncoder to train the embeddings so that they have contextual and semantic understanding, or should I use Transformers? The trained embeddings could later be used for more tasks, like training other models to help with parsing lab submissions, other research work, etc. If anyone can help, it would be great.
@statquest 17 hours ago
This is the sort of problem that Encoder-Only Transformers are great at. Here's a video that explains why: kzbin.info/www/bejne/fXWxZ2dvjcSUmac
@ching-tangwang682 1 day ago
Tremendous explanation of how classification trees work!!!
@statquest 19 hours ago
Thanks!
@siddharthvm8262 1 day ago
Reason why we would like a low p-value for our R-squared:
Null hypothesis - the model explains no variance in the dependent variable (R-squared = 0; the predictors have no effect).
Alternative hypothesis - the model explains some variance (R-squared is greater than 0).
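A minimal sketch of that hypothesis test on toy data (not from the video): fit a simple linear regression and look at the p-value, which asks whether the model explains more variance than we would expect by chance.

```python
# Toy illustration: R-squared and the p-value testing "R-squared = 0".
# The data are made up, with a real effect baked in.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(size=50)  # y really does depend on x

slope, intercept, r, p_value, se = stats.linregress(x, y)
r_squared = r ** 2

# A tiny p-value means we reject the null hypothesis that the
# predictor explains no variance in y.
print(r_squared, p_value)
```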
@soumyadas4261 1 day ago
Nice tutorial and very well explained as always, @statquest. I have one question though. The screen at 16:22 shows x -1.30 and x 2.28 on the connections leaving the hidden layer, but in reality they are changing the y value. Is there a reason it is written as x? Also, the results from the two connections are summed to get the final result. Does this sum happen in another node? Does it have a special name, like combiner or aggregator?
@statquest 19 hours ago
The "x" is a "times" or "multiplication" symbol, not a variable. The summation is just what happens when you have more than 1 connection to a single node and can happen anywhere in the neural network.
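A tiny sketch of what that reply describes: each connection multiplies its input by a weight, and a node with multiple incoming connections simply sums them. The hidden-node activations below are made up; the weights are the ones mentioned above.

```python
# Each connection multiplies the value flowing through it by a weight;
# the receiving node adds the incoming products together.
hidden_out_1 = 0.7   # made-up activation from the first hidden node
hidden_out_2 = -0.2  # made-up activation from the second hidden node

w1, w2 = -1.30, 2.28  # the "x -1.30" and "x 2.28" on the connections

# The summation happens implicitly at the output node
output = hidden_out_1 * w1 + hidden_out_2 * w2
print(output)
```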
@keepfaith_g 1 day ago
In conclusion... shame on Shannon!!
@FlyBoyGrounded 1 day ago
Clear as mud - and I've got a degree in statistics.
@statquest 19 hours ago
What part of the video was confusing?
@anastasiasinarso2572 1 day ago
Hi! Love your videos! I have one question, in this video, you started with plotting the variables on samples as the axes, but on your newer video about PCA, you started the other way around. Why are these two videos different? Or am I getting it wrong?
@statquest 19 hours ago
There are two ways to do PCA (both lead to the same results). This specific video focuses on the old way, which is based on eigendecomposition of the variances and covariances in the data. The new way, explained in the newer video, uses Singular Value Decomposition and is more numerically stable and, in general, preferred.
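A minimal sketch on toy data (not StatQuest's code) showing that the two ways agree: eigendecomposition of the covariance matrix and SVD of the centered data recover the same principal axes.

```python
# Old way vs new way of PCA: both recover the same axes (up to sign).
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
Xc = X - X.mean(axis=0)  # center each variable

# Old way: eigenvectors of the covariance matrix
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending order
eig_axes = eigvecs[:, ::-1]             # reorder to descending variance

# New way: right singular vectors of the centered data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
svd_axes = Vt.T

# The principal axes match up to an arbitrary sign flip
for i in range(3):
    a, b = eig_axes[:, i], svd_axes[:, i]
    assert np.allclose(a, b) or np.allclose(a, -b)
```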
@alecollins01 1 day ago
THANK YOU
@statquest 1 day ago
double bam! :)
@alecollins01 1 day ago
Thanks for the KNOWLEDGE
@statquest 1 day ago
bam! :)
@ekaterinaponizovskayadevin2812 1 day ago
LOL, good video, but usually if you see "Dear friend" in an email, it is spam. And if you see "lunch", it is usually a normal email, no matter how many times the word "money" was written: it could be a discussion of a budget for a trip and where we should have lunch together. But yeah, that is why it is naive, I guess.
@statquest 1 day ago
:)
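A toy sketch of the "naive" part discussed above: the classifier scores each word independently, so repeating "money" can outweigh "lunch" even when the context says otherwise. All word probabilities below are made up.

```python
# Naive Bayes scores words independently: each word nudges the
# spam-vs-normal log-odds, regardless of the other words around it.
import math

# Hypothetical per-class word probabilities, P(word | class)
p_word_spam   = {"money": 0.20, "lunch": 0.01, "friend": 0.10}
p_word_normal = {"money": 0.02, "lunch": 0.15, "friend": 0.05}

def spam_score(words, prior_spam=0.5):
    """Log-odds of spam vs normal; positive means 'spam' wins."""
    log_odds = math.log(prior_spam / (1 - prior_spam))
    for w in words:
        log_odds += math.log(p_word_spam[w]) - math.log(p_word_normal[w])
    return log_odds

print(spam_score(["lunch", "money"]))                    # negative: leans normal
print(spam_score(["lunch", "money", "money", "money"]))  # positive: repeats flip it
```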
@tomgraner741 1 day ago
I don't know, but this BAM pushed me out of my fking chair. It's so cool to understand how it works. Thank you so much!!
@statquest 1 day ago
Thanks!
@renzo13ok 1 day ago
wow, such an amazing explanation!!!
@statquest 1 day ago
Thanks!
@openyard 1 day ago
"Take me to your leader" is a classic.
@statquest 1 day ago
Bam! :)
@Harshsaklani-p2t 2 days ago
Hey bro, we all know that you have a lot of knowledge about machine learning algorithms, and you are like a gentleman, but don't use very fancy words and don't sing in the videos. If you want to become a singer, then go for it, but don't make the videos funny by singing or murmuring.
@statquest 1 day ago
Noted
@ManEskkk 2 days ago
Thanks, you saved my life!
@statquest 1 day ago
BAM! :)
@rustybayonet 2 days ago
Infinity BAM!
@statquest 1 day ago
Thank you!
@ArshamSheykh 2 days ago
You're wonderful❤
@statquest 2 days ago
Thank you!
@DevShah-d1g 2 days ago
I didn't understand why the data sets are perpendicular to each other. Can anyone please explain? 😅
@statquest 2 days ago
We just put each variable that we measure on its own axis.
@jidgesanathkumar8038 2 days ago
Hello Josh sir, Can you please make a video on SUPPORT VECTOR REGRESSION
@statquest 2 days ago
I'll keep that in mind.
@dhruvmehta2951 2 days ago
5:33 can anyone explain this part more clearly
@statquest 2 days ago
I show what this means in this video: kzbin.info/www/bejne/jp6VdJKdiaafbsU
@jidgesanathkumar8038 2 days ago
The way you introduced SUPPORT VECTOR MACHINES..!!!☀ BAM!!!
@statquest 2 days ago
Thanks! :)
@flszen 2 days ago
GitHub Copilot Chat recommended this to me, suggesting it was the next URL I needed in my decidedly not clustering related code, lol
@statquest 2 days ago
ha!
@1gorSouz4 2 days ago
I didn't know how to say both words in my language, because in Google Translate, probability and likelihood both translate to the SAME word. Cool.
@statquest 2 days ago
In conversational English, both words mean the same thing, but, specifically in the context of statistics, they are different.
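A small sketch of that statistical distinction using a normal distribution (the numbers are only illustrative): probability fixes the parameters and asks about the data, while likelihood fixes the data and asks how well different parameters fit it.

```python
# Probability vs likelihood with a normal distribution.
from scipy.stats import norm

# Probability: given mean=0, sd=1, how probable is an observation below 0.5?
prob = norm.cdf(0.5, loc=0, scale=1)

# Likelihood: given the observed value 0.5, how well do different means fit?
like_mean0  = norm.pdf(0.5, loc=0,   scale=1)
like_mean05 = norm.pdf(0.5, loc=0.5, scale=1)

print(prob)                      # ~0.69
print(like_mean05 > like_mean0)  # True: mean=0.5 makes the data most likely
```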
@XiChen-i4g 2 days ago
Thank you!
@statquest 2 days ago
bam!
@bodashatta8429 2 days ago
god bless your double bam soul josh, thank you
@statquest 2 days ago
Thank you!
@sourinsaha2909 2 days ago
penta BAM!!
@statquest 2 days ago
:)
@adventurerwannabe 2 days ago
I really wanted to see how you would handle a categorical target variable with more than 2 categories...
@statquest 2 days ago
Noted
@Before2030 2 days ago
This is easy to understand and sooooo fun to watch. I literally laughed out loud at the "BAM"s, hahaaaa, you're so funny. People around me might think I'm watching some random Mr Beast video.
@statquest 2 days ago
Thanks! BAM! :)
@viniciuskosmota6326 3 days ago
Thank you for the class, Josh! Why not use dpred/dw1 directly in the dssr/dw1 calculation?
@statquest 2 days ago
What time point, minutes and seconds, are you asking about?
@amirbijandi5971 3 days ago
Great work. You might not know how good you are, so let me tell you: you are a gifted teacher.