Spring 2016 Exam 8 Solutions
24:19
8 years ago
Spring 2016 Exam Section 6 Solutions
13:13
Spring 2016 Exam Section 5 Solution
29:16
Spring 2016 Exam Section 3 Solution
24:05
Lecture 9 MDPs II
1:15:07
8 years ago
Lecture 8 MDPs I
1:13:42
8 years ago
Lecture 5 CSPs II
1:23:06
8 years ago
Lecture 4 CSPs I
1:15:51
8 years ago
Comments
@brandoncazares8452 1 month ago
This was great because I needed to understand this for my exam tomorrow.
@ranaarslan8040 11 months ago
The volume is too low.
@anomalous5048 11 months ago
Thank you so much.
@shell925 1 year ago
Thank you! Could you please share the homework link here, if possible?
@zaranto7023 1 year ago
Thank you
@karanacharya18 1 year ago
Fantastic video explanation! Crisp, clear, and formula-based. Easy to follow once you know the concepts, and this video helps clear up the confusion among fancy terms like joint, conditional, and independence.
@vagabond7199 2 years ago
The audio is not clear. Very bad audio.
@vagabond7199 2 years ago
26:43 Shouldn't it be that Smoke is conditionally independent of Alarm given Fire?
@RajarshiBose 1 year ago
A traditional fire alarm detects smoke, not fire, so if there are other causes of smoke, like someone smoking, they can increase the chance of the alarm going off even though no fire has broken out.
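To make that structure concrete, here is a minimal Python sketch of a chain-structured model Fire -> Smoke -> Alarm; all CPT numbers are made up for illustration, not taken from the lecture. It checks numerically that in a chain, Alarm is conditionally independent of Fire given Smoke.

# Chain-structured model: Fire -> Smoke -> Alarm.
# All CPT numbers below are made up for illustration.
P_fire = {True: 0.1, False: 0.9}
P_smoke = {True: {True: 0.9, False: 0.1},    # P(smoke | fire)
           False: {True: 0.2, False: 0.8}}
P_alarm = {True: {True: 0.95, False: 0.05},  # P(alarm | smoke)
           False: {True: 0.01, False: 0.99}}

def p_alarm_given(smoke, fire):
    # P(alarm | smoke, fire) computed from the joint P(f) P(s|f) P(a|s).
    num = P_fire[fire] * P_smoke[fire][smoke] * P_alarm[smoke][True]
    den = sum(P_fire[fire] * P_smoke[fire][smoke] * P_alarm[smoke][a]
              for a in (True, False))
    return num / den

# Same value whether fire is True or False: Alarm is independent of Fire given Smoke.
print(p_alarm_given(smoke=True, fire=True))   # 0.95
print(p_alarm_given(smoke=True, fire=False))  # 0.95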
@vagabond7199 2 years ago
20:43 His explanation is quite confusing.
@vagabond7199 2 years ago
The audio is not so clear.
@Melianareginali 2 years ago
Haha
@boccaccioe 2 years ago
Good explanation of likelihood weighting, very helpful
@aliamorsi6148 3 years ago
The content here flows extremely well. Thank you for making it public.
@ulissemini5492 3 years ago
Start at 9:22 if you know probability. If you don't, this is a terrible introduction, and I'd suggest watching the 3b1b videos on Bayes' rule. A good textbook is Introduction to Probability by Blitzstein and Hwang.
@fratdenizmuftuoglu4755 3 years ago
It is just an application of a bunch of expressions without context or a logical delivery. In my opinion, it does not teach anything; it just gives things to memorize.
@mmshilleh 4 years ago
Is there no need to normalize?
@connorbeveridge2517 1 year ago
He forgot!
@channelforstream6196 4 years ago
Best Explanation
@songsbyharsha 4 years ago
Perfect!
@heyitsme5408 4 years ago
👍
@mdazizulislam9653 4 years ago
Thanks for your very clear explanation. For more examples on d-separation, see kzbin.info/www/bejne/r3XWkKRsn7B7mJI
@typebin 5 years ago
The sound volume is too low.
@ruydiaz7196 5 years ago
Is this really MLE? Or is it MAP? 'XD
@ruydiaz7196 5 years ago
Perfect!
@mavericktutorial4005 5 years ago
Really appreciate it.
@shreyarora771 5 years ago
Shouldn't the score of alpha A1 at 11:00 be decreased and alpha B1 be increased, since B is the right class?
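For anyone puzzling over the same point, here is a minimal sketch of the standard multiclass perceptron update, with made-up weights and feature values: on a mistake, the predicted class's weights are decreased by the feature vector and the true class's weights are increased by it.

import numpy as np

def perceptron_update(weights, x, y_true):
    # Standard multiclass perceptron step: on a mistake, demote the
    # mispredicted class and promote the true class by the feature vector x.
    y_pred = max(weights, key=lambda c: float(np.dot(weights[c], x)))
    if y_pred != y_true:
        weights[y_pred] = weights[y_pred] - x
        weights[y_true] = weights[y_true] + x
    return weights

# Toy numbers (not from the video): class A wins the score race,
# but B is correct, so alpha_A goes down and alpha_B goes up.
w = {"A": np.array([2.0, 2.0]), "B": np.array([1.0, 1.0])}
x = np.array([-1.0, 2.0])
print(perceptron_update(w, x, "B"))   # {'A': [3., 0.], 'B': [0., 3.]}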
@searcher94fly 6 years ago
Hi, at 4:17 didn't you swap the formula? Instead of P(x,y) = P(x)P(y|x), shouldn't it have been P(x,y) = P(y)P(x|y)? From what I hear in the video, that's the way you explained it.
@tubesteaknyouri 4 years ago
P(y|x)P(x) = P(x|y)P(y) because both are equal to P(x,y). See below:
P(x|y) = P(x,y)/P(y), so P(x|y)P(y) = P(x,y).
P(y|x) = P(x,y)/P(x), so P(y|x)P(x) = P(x,y).
Therefore P(y|x)P(x) = P(x|y)P(y).
@Neonb88 4 years ago
@@tubesteaknyouri And he did that so you get Bayes' rule out of it; it wasn't just for the heck of it.
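A quick numeric check of that identity, with made-up probabilities:

p_x = 0.3                 # made-up P(x)
p_y_given_x = 0.5         # made-up P(y | x)
p_y_given_notx = 0.25     # made-up P(y | not x)

p_xy = p_x * p_y_given_x                  # P(x,y) = P(x) P(y|x)
p_y = p_xy + (1 - p_x) * p_y_given_notx   # total probability
p_x_given_y = p_xy / p_y                  # Bayes' rule: P(x|y) = P(x,y)/P(y)

# Both sides equal P(x,y), so Bayes' rule holds:
assert abs(p_x_given_y * p_y - p_y_given_x * p_x) < 1e-12
print(p_x_given_y)        # ~0.4615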
@nuevecuervos 6 years ago
The content here was extremely helpful, but the audio was really poor. Still, I wouldn't have figured this out without this particular video, so thank you!
@kudamushaike 6 years ago
*For the first question: 2(-1) + (-2)(2) = -6, not -2.
@samcarpentier 6 years ago
By far the most efficient source of information about this topic I could find anywhere on the internet
@oguzguneren4874 11 months ago
After 5 years, it's still the only one on the whole internet.
@ryanschachte1907 7 years ago
This was great!
@Mokodokococo 7 years ago
Hey, sorry, but I don't get why we sample when we already have the true distribution... I don't see how it can be useful. Does anyone have an explanation, please? :)
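One common answer, with a minimal rejection-sampling sketch (network and numbers made up): exact inference has to sum over exponentially many assignments in a large network, while sampling from the known CPTs gives a cheap, anytime approximation of conditional queries.

import random

# Tiny network Rain -> WetGrass, made-up CPTs.
def sample_rain():
    return random.random() < 0.3                      # P(+rain) = 0.3

def sample_wet(rain):
    return random.random() < (0.9 if rain else 0.1)   # P(+wet | rain)

# Estimate P(+rain | +wet) by rejection: keep only samples with +wet.
kept = []
for _ in range(100_000):
    r = sample_rain()
    if sample_wet(r):
        kept.append(r)
print(sum(kept) / len(kept))   # about 0.27 / 0.34 = 0.79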
@michaelhsiu115 7 years ago
Great explanation!!! Thank you!
@qwosters 7 years ago
Dude, I love you all for posting these lectures, but this is a 75-minute one on how to multiply two numbers together. Soooo painful :) <3
@qbert65536 7 years ago
Really got a lot out of this, thank you!
@terng_gio 7 years ago
How do you calculate the update weight? Could you provide an example of how to calculate it?
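In likelihood weighting, a sample's weight is the product, over the evidence variables, of the probability of the observed value given the sampled values of its parents. A minimal sketch with a made-up two-node network (Cloudy -> Rain, evidence Rain = +r):

import random

P_CLOUDY = 0.5
P_RAIN_GIVEN = {True: 0.8, False: 0.2}   # P(+r | cloudy), P(+r | not cloudy)

def weighted_sample():
    weight = 1.0
    cloudy = random.random() < P_CLOUDY   # non-evidence variable: sampled
    rain = True                           # evidence variable: fixed at +r
    weight *= P_RAIN_GIVEN[cloudy]        # multiply in P(+r | sampled parents)
    return cloudy, rain, weight

print(weighted_sample())   # e.g. (True, True, 0.8) or (False, True, 0.2)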
@dissdad8744 8 years ago
Unfortunately, the explanation of calculating entropy and information gain is very unintuitive.
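For readers stuck on the same point, a small worked example with made-up counts: the entropy of a two-class node, and the information gain of a candidate split.

from math import log2

def entropy(pos, neg):
    # Shannon entropy of a two-class node with pos/neg examples.
    total = pos + neg
    return -sum((c / total) * log2(c / total) for c in (pos, neg) if c)

# Made-up split: parent (8+, 8-) into children (6+, 2-) and (2+, 6-).
parent = entropy(8, 8)                                # 1.0 bit
children = 0.5 * entropy(6, 2) + 0.5 * entropy(2, 6)  # ~0.811 bits
print("information gain:", parent - children)         # ~0.189 bits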
@hansen1101 8 years ago
Concerning ex. 2f: isn't the largest factor generated 2^4? The join over all factors containing T generates a table over 4 variables (say f2'), of which one is summed out to get f2, so f2' has size 2^4.
@user-ze4qq8mm1q 5 years ago
This is a good thought, but the given observation value +z is a constant, not a variable, so although it is contained in f2(U, V, W, +z), the only variables of f2 are U, V, W, hence 2^3 = 8.
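The counting rule behind this thread, as a tiny sketch (binary variables assumed, as in the exam question): a factor's table has one entry per assignment to its free variables, and instantiated evidence such as +z adds no dimension.

def factor_size(free_vars, domain_size=2):
    # One table entry per joint assignment to the free variables.
    return domain_size ** len(free_vars)

print(factor_size(["U", "V", "W", "T"]))  # intermediate join containing T: 2^4 = 16
print(factor_size(["U", "V", "W"]))       # f2(U, V, W, +z) after summing out T: 2^3 = 8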
@zaman866 8 years ago
Thanks for the video. I am just wondering how we normalize to sum to 1 in part g. Can you give a numerical example? Thanks.
@hansen1101 8 years ago
+Zs Sj Assume f5 gives you a vector with 2 entries for +y and -y, say [1/5, 3/5]. To normalize this vector, divide each coordinate by the sum of all coordinates (here 4/5): [1/5 * 5/4, 3/5 * 5/4] = [1/4, 3/4].
@zaman866 8 years ago
Thanks
@zaman866 8 years ago
hansen1101 Do you know why we should normalize this, and how it became non-normalized in the first place?
@hansen1101 8 years ago
+Zs Sj In this particular case you are calculating a distribution of the form P(Q|e), where e is an instantiation of some evidence variables. By definition this form has to sum to 1 over all instances of the query variable Q (i.e. P(q1|e) + P(q2|e) = 1 in the binary case). Be careful: there are queries of other forms that need not sum to 1, for which normalization is not necessary (e.g. P(Q,e) or P(e|Q)). This became non-normalized after applying Bayes' rule and only working with the term in the numerator, leaving out the joint probability over the instantiated evidence variables in the denominator. Therefore you'll have to rescale at the end.
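The rescaling step from this thread, as a tiny sketch:

def normalize(vec):
    # Divide each entry by the sum so the entries total 1.
    total = sum(vec)
    return [v / total for v in vec]

# The example above: [1/5, 3/5] sums to 4/5, and dividing by the sum
# (equivalently, multiplying by 5/4) gives [1/4, 3/4].
print(normalize([1/5, 3/5]))   # [0.25, 0.75]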
@vedhasp 9 years ago
Can anybody please explain the results on the slide at 1:05:11 for the given probability tables?
@vedhasp 9 years ago
+sahdeV OK, I got it. The observation we have is +u, not -u. So there are 4 ways in which +u is possible:
Rain, Rain, Umbrella (TT-U): 0.5 * 0.7 * 0.9 = 0.315
Sun, Rain, Umbrella (FT-U): 0.5 * 0.3 * 0.9 = 0.135
Sun, Sun, Umbrella (FF-U): 0.5 * 0.7 * 0.2 = 0.07
Rain, Sun, Umbrella (TF-U): 0.5 * 0.3 * 0.2 = 0.03
The T-U probability is therefore (0.315 + 0.135) / (0.315 + 0.135 + 0.07 + 0.03) = 0.818, and the F-U probability is (0.07 + 0.03) / 0.55 = 0.182.
For the next stage, the time-based update alone gives us
B'(T) = 0.818 * 0.7 + 0.182 * 0.3 = 0.6272
B'(F) = 0.818 * 0.3 + 0.182 * 0.7 = 0.3728
and the observation (+u) based update gives us
B(T) = 0.6272 * 0.9 / (0.6272 * 0.9 + 0.3728 * 0.2) = 0.883
B(F) = 0.3728 * 0.2 / (0.6272 * 0.9 + 0.3728 * 0.2) = 0.117
@ilyaskarimov175 5 years ago
@@vedhasp Thank you very much.
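The computation in the thread above is the HMM forward algorithm; here is a short sketch that reproduces those numbers for the umbrella world (P(rain_0) = 0.5, rain persists with probability 0.7, P(+u | rain) = 0.9, P(+u | sun) = 0.2).

P_INIT = {"rain": 0.5, "sun": 0.5}
P_TRANS = {"rain": {"rain": 0.7, "sun": 0.3},
           "sun":  {"rain": 0.3, "sun": 0.7}}
P_UMBRELLA = {"rain": 0.9, "sun": 0.2}   # P(+u | state)

def forward_step(belief):
    # Time update: push the belief through the transition model.
    predicted = {s: sum(belief[p] * P_TRANS[p][s] for p in belief)
                 for s in belief}
    # Observation update: weight by P(+u | state), then normalize.
    unnorm = {s: predicted[s] * P_UMBRELLA[s] for s in predicted}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

b1 = forward_step(P_INIT)   # {'rain': 0.818..., 'sun': 0.181...}
b2 = forward_step(b1)       # {'rain': 0.883..., 'sun': 0.117...}
print(b1, b2)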
@WahranRai 9 years ago
Audio not goooooooood
@WahranRai 9 years ago
1:17:14 It is a bad example for LCV! This case never happens, because the MRV heuristic will color SA blue (only one color left!).
@Scientity 9 years ago
This is very helpful! Thank you