I wish more people understood probability and statistics better. So much misinformation about the world is spread by media outlets that misinterpret studies. I also hate how often we talk about averages without including the variance; without it, people draw lots of wrong conclusions from averages alone.
@samsheppard8265 6 hours ago
Vampire
@azuarc 7 hours ago
I already understood probability. This video does nothing to explain what likelihood is or why it's different. The creator only acknowledges *that* it is different and how to compute it, but why do I even care?
@statquest 7 hours ago
In common conversation we use probability and likelihood interchangeably. However, statisticians make a clear distinction that is important to understand if you want to follow their logic. The good news is that they are both super simple. The bad news is that they are easy to get mixed up. This StatQuest gives you visual images that make them both easy to remember so you'll always keep them straight. NOTE: This video was originally made as a follow-up to an overview of Maximum Likelihood kzbin.info/www/bejne/jpbTiaeibr5-rcU. That video provides context that gives this video more meaning. So if you are wondering what a likelihood of 0.21 represents, please watch that video.
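If it helps to see the distinction in code, here is a minimal R sketch (the numbers here, a normal distribution with mean = 32 and sd = 2.5 and a 34-gram mouse, are just illustrative placeholders, not values from the video):

# Probability: the distribution is fixed and we ask about the data,
# e.g., the probability that a mouse weighs between 32 and 34 grams
pnorm(34, mean = 32, sd = 2.5) - pnorm(32, mean = 32, sd = 2.5)

# Likelihood: the data are fixed and we ask about the distribution,
# e.g., the y-axis value of the curve at a 34-gram mouse
dnorm(34, mean = 32, sd = 2.5)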
@kmcng 11 hours ago
Amazing explanation!
@statquest 7 hours ago
Thank you!
@ShubhamPlays 13 hours ago
Thanks for your dedication to explaining with such good visuals, and yes, not forgetting your music. Good job.
@statquest 7 hours ago
Thanks!
@cmexgd7096 14 hours ago
Thank you Josh, you are the one who makes the internet the best achievement humanity has ever achieved. You are the definition of its bright side. I wouldn't be so happy and concentrated studying all this stuff 6 hours a day if it weren't for you. Thank you again and TRIPLE BAM!!!
@statquest 7 hours ago
Thank you very much! :)
@bikashkumarpaul5922 15 hours ago
It’s amazing how effective learning can be when the content is this well-presented and thoughtfully made!
@statquest 7 hours ago
Thanks!
@dshefman 15 hours ago
Are you sure encoder-only transformers are the same as embedding models? I think they have different architectures.
@statquest 7 hours ago
There are lots of ways to create embeddings, and this video describes those ways. However, BERT is probably the most commonly used way to make embeddings with an LLM.
@acleahcim 16 hours ago
OMG, this video explains it much better than the class I'm taking did.
@statquest 7 hours ago
Thanks!
@collinsk.langat6230 16 hours ago
amazing
@statquest 7 hours ago
Thanks!
@citruskeys 18 hours ago
What a wonderful world, to open a video on an R function only to hear a lovely ukulele theme song.
@statquest 7 hours ago
bam! :)
@AdwaitMalghade-pb2ed 22 hours ago
GOAT
@statquest 7 hours ago
:)
@uniquekatnoria5380 1 day ago
I am new to statistics and have trouble understanding the formal terms stated in books. The content from this channel really makes it easy to get intuition and understand the underlying principles. Great work!!
@statquest 7 hours ago
Thank you!
@MauricioGaelPerezArroyo 1 day ago
Is he on drugs?
@statquest 7 hours ago
:) nope!
@jasrajanand5759 1 day ago
small bam
@statquest 7 hours ago
:)
@taofeecohadesanu 1 day ago
An almost 8-year-old gem, and I find it comprehensive enough for my level. Thank youuuuu.
@statquest 7 hours ago
Glad to hear it's still helpful! :)
@Otonium 1 day ago
I want a StatSquatch plushie
@statquest 1 day ago
That would be awesome!
@brpawankumariyengar4227 1 day ago
Thank you so very much Sir ❤ … I very much appreciate these videos…. Thanks a ton 😊
@statquest 1 day ago
TRIPLE BAM!!! Thank you so much for supporting StatQuest! :)
@RichGwilliam 1 day ago
You know what I love? When lecturers do falsetto freeform scat jazz when I'm trying to follow their maths.
@statquest 1 day ago
:)
@JustinShaedo 1 day ago
The likelihood that this video labeled the y-axis is zero. The probability that this video labeled the y-axis is also zero.
@amirhasansafizade2128 2 days ago
Great as always. Thanks Josh
@statquest 1 day ago
Thanks again!
@jasrajanand5759 2 days ago
amazing
@statquest 2 days ago
Thanks!
@football-soccer9853 2 days ago
Wonderful explanation. Thank you.
@statquest 2 days ago
Thanks!
@annieghostofyourapple639 2 days ago
Thank you for this video! Very helpful, one of the best on this topic!
@statquest 2 days ago
Thank you!
@starnemi6825 2 days ago
I totally didn't fall asleep in class, miss the entire lesson, and completely lose any sense of direction in my class, wdym 😁😁😁
@statquest 2 days ago
:)
@mohdyusuf9694 2 days ago
6:04 Where is this value for the squared residuals coming from?
@statquest 2 days ago
The residuals are the vertical distances from the line to the data points. We then square those distances.
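For example, a minimal R sketch (the data points and the line are made-up placeholders, not the values from the video):

# made-up data and a candidate line y = intercept + slope * x
x <- c(1, 2, 3)
y <- c(2.1, 3.9, 6.2)
intercept <- 0
slope <- 2

predicted <- intercept + slope * x  # height of the line at each x
residuals <- y - predicted          # vertical distances from the line to the points
sum(residuals^2)                    # sum of squared residuals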
@mohdyusuf9694 1 day ago
@@statquest got that!
@stuartkramer1082 2 days ago
By any chance do you have the actual small dataset handy you could put in a comment? I'd like to use it to replicate your results to test my understanding. Thanks for all the great videos!
@statquest 2 days ago
Because the process of bootstrapping requires random sampling, you'd get something similar to, but not the same as, what I have. That said, I believe the "shifted" values are -4, -3.5, -2.2, 1.2, 1.5, 1.6, 2.5, 3
@stuartkramer1082 2 days ago
@@statquest Many thanks! When I try to reproduce your results, I get close on the p-value of 63%, but I'm way off from you on the split. I get about even percentages in the tails, like 32% and 29% -- I'll keep working on it ;-)
@stuartkramer1082 2 days ago
@@statquest Well, now I'm confused ha ha ha. When I plug data.1 into my Excel variation of your model (-4, -4, -2.2, -2.2, 1.2, 1.5, 2.5, 3), I get the result you show in the video. So, that's great. And I see how raw.data turns into shifted.data, consistent with your explanation in the video of how to derive the null hypothesis distribution from the initial sample. My confusion is figuring out how data.1 (whose average is about -0.5) is derived from shifted.data, because data.1 seems to be the basis of the null hypothesis graph starting at around 3:30 in the video.
@statquest 2 days ago
@@stuartkramer1082 Oops! It looks like I messed up when calculating the numbers: I should have used "shifted.data" in the loop instead of "raw.data". That said, when corrected, the p-value is still > 0.05, so we don't need to change the result.
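In other words, the corrected loop bootstraps from the shifted values, roughly like this (a minimal sketch; the observed mean uses the ≈ -0.5 average mentioned above as a placeholder, since the raw data aren't listed in this thread):

# the shifted values from above (raw data shifted so their mean matches
# the null hypothesis)
shifted.data <- c(-4, -3.5, -2.2, 1.2, 1.5, 1.6, 2.5, 3)

# observed statistic: the mean of the original, unshifted sample
# (placeholder value)
observed.mean <- -0.5

set.seed(42)
boot.means <- replicate(10000, mean(sample(shifted.data, replace = TRUE)))

# p-value: the fraction of bootstrap means at least as far from 0 as
# the observed mean (two-sided)
mean(abs(boot.means) >= abs(observed.mean))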
@hammadrehman5753 2 days ago
I was very confused about how to proceed to the third stump (that is, how to obtain its bootstrapped dataset). ChatGPT cleared it up for me as follows: first, calculate the total error and amount of say from the bootstrapped dataset. Next, apply the stump to the main dataset to obtain the number of correct and incorrect classifications. Then apply the amount of say to the weights of the samples in the main dataset. Please correct me if any of this is incorrect.
@statquest 2 days ago
You just add all of the samples back to the pool and bootstrap from the original dataset (with updated weights).
@hammadrehman5753 1 day ago
@statquest That's what I'm confused about: how do you update the weights in the main dataset? There will be some samples that are not in the current bootstrapped dataset; how do you deal with their weights?
@statquest 1 day ago
@@hammadrehman5753 You just use whatever their old weights were and then do the normalization step.
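For concreteness, a minimal R sketch of that update (all values here are made up for illustration; amount.of.say would come from the current stump):

# weights for every sample in the original dataset
weights <- rep(1/8, 8)
amount.of.say <- 0.42  # placeholder value for the current stump

# which samples appeared in the current bootstrapped dataset, and which
# of them the stump misclassified (made-up flags)
in.bootstrap  <- c(TRUE, TRUE, FALSE, TRUE, TRUE, FALSE, TRUE, TRUE)
misclassified <- c(TRUE, FALSE, FALSE, FALSE, TRUE, FALSE, FALSE, FALSE)

# misclassified samples get more weight, correctly classified samples
# get less; samples not in the bootstrap keep their old weights
weights <- ifelse(!in.bootstrap, weights,
           ifelse(misclassified,
                  weights * exp(amount.of.say),
                  weights * exp(-amount.of.say)))

# normalization step: rescale so the weights sum to 1 again
weights <- weights / sum(weights)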
@lailatp7101 2 days ago
This is my first review on YouTube. I just have to say thank you! This is explained so well that it even looks simple. And it is funny too (I smile at each "BAM!"). I am finalizing my studies in Data Science and ML, and this is by far the best source of information I could find. I have both of your books, and after watching the videos I only need to take a short look at the books to remember what neural networks or other concepts and approaches are all about. Thank you very much for saving me so much time! Please keep teaching. And keep it funny :)
@statquest 2 days ago
Thank you very much!
@jasrajanand5759 2 days ago
beautiful
@statquest 2 days ago
Thank you! Cheers!
@Sarah-cm6nv 2 days ago
Thank you so much, sir, you explained it really clearly! No one has explained it like you did!