What a good channel I've come across. This will definitely become part of my library, extending my problem-solving toolbox as a developer. Thank you!
@brandoncazares8452 · 1 month ago
I struggled to understand this in my class; I'm glad I watched this video. These are very helpful.
@nelsonekos · 3 months ago
What is the link to the next video immediately after this?
@nelsonekos · 3 months ago
He mentioned the next class at the end. What video is the next class?
@gunhatornie · 4 months ago
remember when??
@Ama-be · 4 months ago
Thanks so much for this video! It was very insightful for me👍🏾
@nehalkalita · 4 months ago
11:49 Can anyone tell me why phi(A, C) = P(C|A) and not P(A)P(C|A)?
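A likely answer, inferred from standard variable-elimination conventions rather than from the video itself: each node contributes exactly one factor, so P(A) already lives in its own factor φ(A) and is not folded into φ(A, C). A worked sketch:

```latex
% One factor per node: P(A) lives in \phi(A), so \phi(A, C) holds only P(C \mid A).
P(A, C) \;=\; \phi(A)\,\phi(A, C) \;=\; P(A)\,P(C \mid A)
% Defining \phi(A, C) = P(A)\,P(C \mid A) instead would double-count P(A):
% \phi(A)\,\phi(A, C) = P(A)^2\,P(C \mid A) \neq P(A, C).
```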
@thegrey53 · 5 months ago
Heat Death
@imtryinghere1 · 6 months ago
Never take this down, Bert. I send it to everyone who asks me for a good overview of the main models for classic ML, even though it's 9 years old.
@reanwithkimleng · 6 months ago
❤❤❤❤❤
@reanwithkimleng · 6 months ago
Thanks for helping me❤❤❤
@saisheinhtet2446 · 7 months ago
awesome
@HoangTran-bu2ej · 7 months ago
What is 'phi', sir?
@Ahmed.r.a · 8 months ago
Thank you for this brilliant explanation. I wish there were a question with a solution to practice on.
@morzen5894 · 9 months ago
Very unclear and confusing. Using Venn diagrams to represent some of the probabilities, and giving a detailed example of the math with numbers to show how it runs, would be of great help for people discovering the subject. I am fairly sure this is a great video for people who already understand the subject or have some grasp of it, but for newcomers it is very confusing. Not to mention the jump in difficulty between the first part, which is quite easy to understand (although Venn diagrams would help), and the second part, which looks like Elvish.
@newbie8051 · 10 months ago
Very simple explanation, thanks!
@zoltankurti · 10 months ago
The bot sacrificed the bishop because of the depth-3 search. It saw bishop to b5, pawn takes bishop, and then queen takes rook. It didn't see that you can recapture the rook; that's depth 4. It thought it could take a rook for a bishop and a pawn.
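A toy illustration of the horizon effect described above: the tree, move names, and scores below are made up (material in pawns from White's point of view), not taken from the video's engine. A 3-ply search stops right after "queen takes rook" and likes the sacrifice; one more ply reveals the refutation.

```python
# Each node is (static_score, {move_name: child}); scores are hypothetical
# material counts from White's perspective, in pawns.
recapture = (-7, {})                                    # bishop + queen for a rook: 5 - 3 - 9
after_qxr = (+2, {"queen gets recaptured": recapture})  # looks like a rook for a bishop
after_pxb = (-3, {"queen takes rook": after_qxr})       # down a bishop for the moment
after_bb5 = (0,  {"pawn takes bishop": after_pxb})      # position after the sacrifice Bb5

def minimax(node, depth, maximizing):
    """Plain depth-limited minimax: evaluate statically at the horizon."""
    static, children = node
    if depth == 0 or not children:
        return static
    values = [minimax(child, depth - 1, not maximizing) for child in children.values()]
    return max(values) if maximizing else min(values)

# After the bot's own move Bb5, a depth-3 search has 2 plies left; depth 4 has 3.
print(minimax(after_bb5, 2, False))  # +2: the sacrifice looks like it wins material
print(minimax(after_bb5, 3, False))  # -7: the extra ply sees the recapture
```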
@cosmopaul8773 · 10 months ago
Thanks for the great video! Helped me a lot in understanding this stuff for my uni course :)
@BradyL-e7z · 10 months ago
you suck at following any sort of linear pace. Fuck youtube videos
@BradyL-e7z · 10 months ago
"Oh, let me explain how the product rule works, and let me show you how the simple probability of A and B works," then immediately reference 7 more advanced terms without any introduction, like we all know them perfectly. Cunt, you just taught nothing, congrats.
@niyaali2379 · 11 months ago
great stuff!!
@RishiRajvid · 11 months ago
from Bihar (INDIA)
@UsefulMotivation365 · 11 months ago
With all due respect to you, though not to the people who created this "variable elimination" thing: this variable "elimination" sounds like bullshit, because you are already computing all the possible states of the variable you are going to eliminate, meaning you aren't really eliminating anything. Or am I wrong?
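For what it's worth, a minimal numeric sketch of why elimination still pays off; the chain A → B → C → D and its CPT numbers are made up for illustration, not taken from the video. The sum over the eliminated variable's states is done once, inside a small factor, rather than once per joint assignment.

```python
import itertools

# Hypothetical binary chain A -> B -> C -> D with made-up CPTs.
pA   = {0: 0.6, 1: 0.4}
pBgA = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8}   # P(B|A)
pCgB = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.4, (1, 1): 0.6}   # P(C|B)
pDgC = {(0, 0): 0.5, (0, 1): 0.5, (1, 0): 0.1, (1, 1): 0.9}   # P(D|C)

def p_d_enumeration(d):
    # Naive enumeration: one product term per joint assignment of (A, B, C).
    return sum(pA[a] * pBgA[a, b] * pCgB[b, c] * pDgC[c, d]
               for a, b, c in itertools.product((0, 1), repeat=3))

def p_d_elimination(d):
    # Sum out A, then B, then C; every intermediate table has just 2 entries.
    tauB = {b: sum(pA[a] * pBgA[a, b] for a in (0, 1)) for b in (0, 1)}
    tauC = {c: sum(tauB[b] * pCgB[b, c] for b in (0, 1)) for c in (0, 1)}
    return sum(tauC[c] * pDgC[c, d] for c in (0, 1))

assert abs(p_d_enumeration(1) - p_d_elimination(1)) < 1e-12
```

The variable is "eliminated" in the sense that no table built after the inner sum mentions it again: for a chain of n variables with k states each, enumeration touches about k^(n-1) product terms per query, while elimination does O(n·k²) work.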
@dr.merlot1532 · 1 year ago
absolutely useless.
@deeplearn6584 · 1 year ago
Thanks for the great explanation! Finally understood the implementation of HMMs.
@siomokof3425 · 1 year ago
6:52
@ytpah9823 · 1 year ago
🎯 Key Takeaways for quick navigation:
00:00 📊 Probabilistic graphical models, such as Bayesian networks, represent probability distributions through graphs, enabling the visualization of conditional independence structures.
01:34 🎲 Bayesian networks consist of nodes (variables) and directed edges representing conditional dependencies, allowing the representation of full joint probability distributions.
03:21 🔀 Bayesian network structures reveal conditional independence relationships, simplifying the calculation of conditional probabilities and inference.
09:10 🧠 Naive Bayes and logistic regression can be viewed as specific Bayesian networks, with the former relying on conditional independence assumptions.
11:55 📜 Conditional independence is a key concept in Bayesian networks, defining that each variable is independent of its non-descendants given its parents.
15:15 ⚖️ Inference in Bayesian networks often involves calculating marginal probabilities efficiently, which can be achieved through variable elimination, avoiding full enumeration.
23:54 ⚙️ Variable elimination is a technique used in Bayesian networks to replace summations over variables with functions, reducing computational complexity for inference.
24:05 🧮 Variable elimination is a technique used to compute marginal probabilities efficiently by eliminating variables one by one.
28:07 ⏱️ In tree-structured Bayesian networks, variable elimination can achieve linear time complexity for exact inference.
29:02 📊 Learning in a fully observed Bayesian network is straightforward, involving counting probabilities based on training data.
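The 11:55 takeaway, compressed into the standard factorization formula (standard PGM notation, not transcribed from the video):

```latex
% Local Markov property: each X_i is independent of its non-descendants
% given its parents, which yields the Bayesian-network factorization
P(X_1, \dots, X_n) \;=\; \prod_{i=1}^{n} P\!\left(X_i \mid \mathrm{Pa}(X_i)\right)
```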
@JebbigerJohn · 1 year ago
This is so good!!!
@EdupugantiAadityaaeb · 1 year ago
What is the name of the textbook?
@jub8891 · 1 year ago
Thank you so much. You explain the subject very well and have helped me to understand.
@theedmaster7748 · 1 year ago
My professor for AI explained this so badly that I had no idea what was going on. Thanks for this in-depth and logical explanation of these topics
@seanxu6741 · 1 year ago
Fantastic video! Thanks a lot!
@hongkyulee9724 · 1 year ago
Thank you for the good explanation :D
@Iris-pb2er · 1 year ago
the BEST lecture about fairML
@JazzLispAndBeer · 1 year ago
Great for getting up to speed again!
@elita__ · 1 year ago
I don't want to learn technology, but I want you so bad, bro.
@tuongnguyen9391 · 1 year ago
This is very nice. In LDPC decoders, this numerical stuff happens very often. Back then I used MATLAB's vpa, but it was very slow. Thank you so, so much!!
@stevendurr · 1 year ago
Fantastic video! Thanks so much for making this.
@alvynabranches1 · 1 year ago
Next time prepare for streams before going live. Check "Machine Learning with Phil"
@alvynabranches1 · 1 year ago
Can you please update the code on your GitHub repository? The one currently on GitHub is too old, and I get an error: `ValueError: shapes (395,) and (790,) not aligned: 395 (dim 0) != 790 (dim 0)`.
@rajkiran1982 · 1 year ago
Can you please let me know what software you use for writing? Is it Notes in Zoom? Seeing your face surely makes it better.
@anto1756 · 1 year ago
Is this playlist still useful if I want to use deep learning instead of reinforcement learning?
@ea1766 · 1 year ago
Top tier video without a doubt.
@margotgorske6986 · 1 year ago
When a flock leaves a tree, it is as if the tree went bare!
@bitvision-lg9cl · 1 year ago
Very impressive; you make the model crystal clear. Now I know that computing with a Bayesian network is nothing more than calculating a probability (for discrete variables) or a probability distribution (for continuous variables) efficiently.
@rezaqorbani1327 · 2 years ago
Great explanation! Thank you for the video!
@efchen1590 · 2 years ago
Very good explanation!
@lemke5497 · 2 years ago
Exactly what I was looking for. Can't wait to get some free time in a few weeks to start the project myself.
@quaternion3267 · 2 years ago
Thank you! What is your chess rating?
@robotech2566 · 2 years ago
The hardest part of the Python language is that you can't easily pass variables/functions between parent/child/sibling scopes, and that's your problem too at 34:56.
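A minimal sketch of the scoping behavior the commenter seems to mean; the 34:56 code is not visible here, so the names below are hypothetical. Inner functions can read enclosing names freely, but rebinding them requires `nonlocal` or `global`:

```python
# Hypothetical sketch of Python scoping across "parent/child" levels.
counter = 0                 # module ("parent") level

def outer():
    best_move = None        # enclosing-function level

    def inner():
        global counter      # rebind the module-level name
        nonlocal best_move  # rebind the enclosing-function name
        counter += 1
        best_move = "e4"    # without `nonlocal`, this would create a new local

    inner()
    return best_move, counter

print(outer())  # ('e4', 1)
```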