This was great because I needed to understand this for my exam tomorrow.
@ranaarslan8040 11 months ago
The volume is too low.
@anomalous5048 11 months ago
Thank you so much.
@shell925 a year ago
Thank you! Could you please share the homework link here, if possible?
@zaranto7023 a year ago
Thank you
@karanacharya18 a year ago
Fantastic video explanation! Crisp, clear, and formula-based. Easy to follow once you know the concepts, and this video helps clear up the confusion among fancy terms like joint, conditional, and independence.
@vagabond7199 2 years ago
The audio is not clear. Very poor quality.
@vagabond7199 2 years ago
26:43 Shouldn't it be that Smoke is conditionally independent of Alarm given Fire?
@RajarshiBose a year ago
A traditional fire alarm detects smoke, not fire, so other sources of smoke (like someone smoking) can increase the chance of the alarm going off even though no fire has broken out.
@vagabond7199 2 years ago
20:43 His explanation is quite confusing.
@vagabond7199 2 years ago
The audio is not very clear.
@Melianareginali 2 years ago
Haha
@boccaccioe 2 years ago
Good explanation of likelihood weighting, very helpful.
@aliamorsi6148 3 years ago
The content here flows extremely well. Thank you for making it public.
@ulissemini5492 3 years ago
Start at 9:22 if you know probability; if you don't, this is a terrible introduction and I'd suggest watching the 3Blue1Brown videos on Bayes' rule. A good textbook is Introduction to Probability by Blitzstein and Hwang.
@fratdenizmuftuoglu4755 3 years ago
It is just an application of a bunch of expressions without context or a logical thread. In my opinion, it doesn't teach you anything; it just gives you things to memorize.
@mmshilleh 4 years ago
Is there no need to normalize?
@connorbeveridge2517 a year ago
He forgot!
@channelforstream6196 4 years ago
Best explanation!
@songsbyharsha 4 years ago
Perfect!
@heyitsme5408 4 years ago
👍
@mdazizulislam9653 4 years ago
Thanks for your very clear explanation. For more examples on d-separation see this kzbin.info/www/bejne/r3XWkKRsn7B7mJI
@typebin 5 years ago
The sound volume is too low.
@ruydiaz7196 5 years ago
Is this really MLE? Or is it MAP? 'XD
@ruydiaz7196 5 years ago
Perfect!
@mavericktutorial4005 5 years ago
Really appreciate it.
@shreyarora771 5 years ago
Shouldn't the score of alpha A1 at 11:00 be decreased and alpha B1 be increased, since B is the right class?
@searcher94fly 6 years ago
Hi, at 4:17 didn't you do a switcheroo of the formula? Instead of P(x,y) = P(x)P(y|x), shouldn't it have been P(x,y) = P(y)P(x|y)? From what I hear in the video, that's the way you explained it.
@tubesteaknyouri 4 years ago
P(y|x)P(x) = P(x|y)P(y) because both are equal to P(x,y). See below:
P(x|y) = P(x,y)/P(y)
P(x|y)P(y) = P(x,y)
P(y|x) = P(x,y)/P(x)
P(y|x)P(x) = P(x,y)
Therefore P(y|x)P(x) = P(x|y)P(y).
@Neonb88 4 years ago
@@tubesteaknyouri And he did that so you get Bayes' rule out of it; it wasn't just for the heck of it.
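The identity discussed in this thread can be checked numerically; here is a quick sketch using a made-up joint distribution:

```python
# Check P(y|x)P(x) = P(x|y)P(y) numerically; both equal P(x,y).
# The joint distribution below is made up for illustration.
joint = {("x0", "y0"): 0.1, ("x0", "y1"): 0.3,
         ("x1", "y0"): 0.2, ("x1", "y1"): 0.4}

def marginal_x(x):
    # P(x) = sum over y of P(x, y)
    return sum(p for (xi, _), p in joint.items() if xi == x)

def marginal_y(y):
    # P(y) = sum over x of P(x, y)
    return sum(p for (_, yi), p in joint.items() if yi == y)

for (x, y), p_xy in joint.items():
    p_y_given_x = p_xy / marginal_x(x)   # P(y|x) = P(x,y)/P(x)
    p_x_given_y = p_xy / marginal_y(y)   # P(x|y) = P(x,y)/P(y)
    assert abs(p_y_given_x * marginal_x(x) - p_x_given_y * marginal_y(y)) < 1e-12
```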
@nuevecuervos 6 years ago
The content here was extremely helpful, but the audio was really poor. Still, I wouldn't have figured this out without this particular video, so thank you!
@kudamushaike 6 years ago
*For the first question: 2(-1) + (-2)(2) = -6, not -2.
@samcarpentier 6 years ago
By far the most efficient source of information about this topic I could find anywhere on the internet.
@oguzguneren4874 11 months ago
After 5 years, it's still the only one on the whole internet.
@ryanschachte1907 7 years ago
This was great!
@Mokodokococo 7 years ago
Hey, sorry, but I don't get why we sample when we already have the true distribution... I don't see how it can be useful... Does anyone have an explanation, please? :)
@michaelhsiu115 7 years ago
Great explanation!!! Thank you!
@qwosters 7 years ago
Dude, I love you all for posting these lectures, but this is a 75-minute one on how to multiply two numbers together. Soooo painful :) <3
@qbert65536 7 years ago
Really got a lot out of this, thank you!
@terng_gio 7 years ago
How do you calculate the update weight? Could you provide an example of how to calculate it?
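A minimal sketch of how that weight is computed in likelihood weighting, assuming a hypothetical two-node network Fire -> Alarm with made-up CPT numbers (not the ones from the lecture):

```python
import random

# Hypothetical network Fire -> Alarm; all numbers below are made up.
P_FIRE = 0.1
P_ALARM_GIVEN_FIRE = {True: 0.9, False: 0.05}

def weighted_sample():
    """Draw one likelihood-weighted sample with evidence Alarm = true."""
    fire = random.random() < P_FIRE       # non-evidence variables are sampled
    # Evidence variables are fixed, not sampled; each one multiplies the
    # sample's weight by the probability of the observed value given its parents.
    weight = P_ALARM_GIVEN_FIRE[fire]
    return fire, weight

# Estimate P(Fire | Alarm = true) as a weighted average over samples.
random.seed(0)
samples = [weighted_sample() for _ in range(100_000)]
estimate = sum(w for f, w in samples if f) / sum(w for _, w in samples)
# Exact answer: 0.1*0.9 / (0.1*0.9 + 0.9*0.05) = 2/3; the estimate is close.
```

The key point is that the weight is just the product of the evidence variables' CPT entries for the sampled values of their parents.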
@dissdad8744 8 years ago
Unfortunately, the explanation of calculating entropy and information gain is very unintuitive.
@hansen1101 8 years ago
Concerning ex. 2f: isn't the largest factor generated 2^4? The join on all factors containing T generates a table over 4 variables (say f2'), of which one is summed out to get f2, so f2' has size 2^4.
@user-ze4qq8mm1q 5 years ago
This is a good thought, but the given observation value +z is a constant, not a variable. So although it is contained in f2(U, V, W, +z), the only variables of f2 are U, V, W; hence 2^3 = 8.
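The point about factor size can be made concrete with a short sketch; the variable names follow the exchange above:

```python
from itertools import product

# A factor's table has one row per assignment of its *unassigned* variables.
# In f2(U, V, W, +z), the evidence +z is fixed, so only U, V, W vary.
variables = ["U", "V", "W"]                        # binary, unobserved
rows = list(product([False, True], repeat=len(variables)))
print(len(rows))  # 2^3 = 8 rows, not 2^4
```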
@zaman866 8 years ago
Thanks for the video. I am just wondering how we normalize to sum to 1 in part g. Can you give a numerical example? Thanks
@hansen1101 8 years ago
+Zs Sj Assume f5 gives you a vector with 2 entries for +y and -y, say [1/5, 3/5]. To normalize this vector, simply divide each coordinate by the sum of all coordinates: [1/5 * 5/4, 3/5 * 5/4] = [1/4, 3/4].
@zaman866 8 years ago
Thanks
@zaman866 8 years ago
hansen1101 Do you know why we should normalize this, and how it became non-normalized in the first place?
@hansen1101 8 years ago
+Zs Sj In this particular case you are calculating a distribution of the form P(Q|e), where e is an instantiation of some evidence variables. By definition this form has to sum to 1 over all instances of the query variable Q (i.e., P(q1|e) + P(q2|e) = 1 in the binary case). Be careful: there are queries of other forms that need not sum to 1, for which normalization is not necessary (e.g., P(Q,e) or P(e|Q)). This became non-normalized after applying Bayes' rule and only working with the term in the numerator, leaving out the joint probability over the instantiated evidence variables in the denominator. Therefore you have to rescale at the end.
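The rescaling step described above is a one-liner in code; this sketch uses the [1/5, 3/5] example from earlier in the thread:

```python
# Variable elimination leaves an unnormalized factor proportional to P(Q, e);
# dividing by the sum over the query variable yields the conditional P(Q | e).
def normalize(factor):
    total = sum(factor.values())
    return {q: p / total for q, p in factor.items()}

unnormalized = {"+y": 1/5, "-y": 3/5}   # proportional to P(Q, e)
posterior = normalize(unnormalized)     # ≈ {'+y': 0.25, '-y': 0.75}
```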
@vedhasp 9 years ago
Can anybody please explain the results on the slide at 1:05:11 for the given probability tables?
@vedhasp 9 years ago
+sahdeV OK, I got it... The observation we have is +u, not -u. So there are 4 ways in which +u is possible:
Rain, Rain, Umbrella (TT-U): 0.5*0.7*0.9
Sun, Rain, Umbrella (FT-U): 0.5*0.3*0.9
Sun, Sun, Umbrella (FF-U): 0.5*0.7*0.2
Rain, Sun, Umbrella (TF-U): 0.5*0.3*0.2
The T-U probability is therefore (63+27)/(63+27+14+6) = 0.818, and the F-U probability is (14+6)/(63+27+14+6) = 0.182.
For the next stage, the time-based update alone gives us:
B'(T) = 0.818*0.7 + 0.182*0.3 = 0.6272
B'(F) = 0.818*0.3 + 0.182*0.7 = 0.3728
The observation (+u) based update then gives us:
B(T) = 0.6272*0.9 / (0.6272*0.9 + 0.3728*0.2) = 0.883
B(F) = 0.3728*0.2 / (0.6272*0.9 + 0.3728*0.2) = 0.117
@ilyaskarimov175 5 years ago
@@vedhasp Thank you very much.
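The numbers worked out in the comment above can be reproduced with a short forward-algorithm sketch (model values taken from that comment: P(rain'|rain) = 0.7, P(+u|rain) = 0.9, P(+u|sun) = 0.2, uniform prior):

```python
# Forward algorithm for the umbrella HMM: alternate time updates
# (transition model) with observation updates (sensor model + renormalize).
def normalize(v):
    s = sum(v)
    return [x / s for x in v]

b = [0.5, 0.5]                 # prior belief over (rain, sun)

# t = 1: time update, then observe +u
b = [b[0] * 0.7 + b[1] * 0.3, b[0] * 0.3 + b[1] * 0.7]
b = normalize([b[0] * 0.9, b[1] * 0.2])   # ≈ [0.818, 0.182]

# t = 2: time update, then observe +u again
b = [b[0] * 0.7 + b[1] * 0.3, b[0] * 0.3 + b[1] * 0.7]
b = normalize([b[0] * 0.9, b[1] * 0.2])   # ≈ [0.883, 0.117]
```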
@WahranRai 9 years ago
Audio not goooooooood
@WahranRai 9 years ago
1:17:14 This is a bad example for LCV!!! This case never happens, because the MRV heuristic will color SA blue (only one color left!!!)