What an amazing video! Congratulations on your work!
@georgeboutselis5031 16 days ago
I don't know if the sum of two independent normal variables has 2 peaks..
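In case it helps to see the distinction the comment is pointing at: the sum of two independent normals is itself normal (one peak), whereas a mixture of two normals can have two peaks. A minimal numpy sketch, with all parameters chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Sum of two independent normals N(-3, 1) and N(3, 1): the sum is N(0, var=2),
# i.e. a single-peaked Gaussian, even though the two means are far apart.
a = rng.normal(-3.0, 1.0, n)
b = rng.normal(3.0, 1.0, n)
summed = a + b

# A 50/50 MIXTURE of the same two normals is what has two peaks.
pick = rng.random(n) < 0.5
mixture = np.where(pick, a, b)

hist_sum, _ = np.histogram(summed, bins=41, range=(-8, 8), density=True)
hist_mix, _ = np.histogram(mixture, bins=41, range=(-8, 8), density=True)
mid = 20  # bin centered near x = 0
print("sum     density near 0 vs near 3:", hist_sum[mid], hist_sum[mid + 8])
print("mixture density near 0 vs near 3:", hist_mix[mid], hist_mix[mid + 8])
```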
@minditon3264 16 days ago
Great place to learn. Aren't you making more videos on the latest YOLO models and the new methods they bring?
@sujathaontheweb3740 20 days ago
@kapil How did you think of formulating the problem as p(w|x, t)?
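For readers wondering about the formulation: writing the posterior over the weights is just Bayes' rule applied to the regression weights, assuming the usual notation where x are the inputs, t the targets, and w the weights:

```latex
% Posterior over the weights given inputs x and targets t (standard notation assumed):
p(w \mid x, t) \;=\; \frac{p(t \mid x, w)\, p(w)}{p(t \mid x)}
\;\propto\; \underbrace{p(t \mid x, w)}_{\text{likelihood}} \; \underbrace{p(w)}_{\text{prior}}
```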
@sujathaontheweb3740 20 days ago
This channel should already have a million likes and 10 million subscribers!! 🎉🌈💛💡
@ahsentahir4473 21 days ago
Excellent! That was an amazing tutorial!
@newbie8051 23 days ago
Ah, this feels interesting. I got to know about Contrastive Learning through a paper we had to read as part of our coursework, which was on Probabilistic Contrastive Learning (nicknamed ProCo). Will explore this more in the coming months, thanks for the simple explanation! You sound like a young, excited prof who takes the time to explain obscure stuff.
@rickc.ferreira 27 days ago
18:18 Shouldn't it be the inverse of f of x instead of f of x?
@GabrielOduori a month ago
I am only just learning Bayesian filters and have yet to find any other material that explains them like you did in this video. They are a huge part of the work I recently embarked on! Thank you.
@KapilSachdeva a month ago
🙏
@simonson6498 a month ago
Thx so much!
@kiit8337 a month ago
Professor, can you suggest any good tutorials or books to start from estimation theory and work up to somewhat advanced stats? Also, can you tell us how much statistics we need in ML fields?
@iliasaarab7922 a month ago
Awesome explanation!
@scheingor7968 a month ago
Sir, thank you for your explanation. However, may I ask to confirm whether I've grasped it correctly: is the kernel count based on the input channels and the filter-bank count based on the output channels? If that's the case, with 3 input channels and 2 output channels, is the first feature map added to the last 2 feature maps, or averaged in? Or, with 4 input channels and 2 output channels, is the first feature map added to the second and the third to the fourth, to end up with 2 feature maps? Thank you for your attention.
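A minimal numpy sketch of how a standard convolution layer combines channels, assuming the usual convention (each output channel's filter spans all input channels, and the per-input-channel responses are summed, not averaged or paired); all shapes and names here are illustrative:

```python
import numpy as np

# Standard conv layer: one filter per OUTPUT channel, and each filter has one
# kernel slice per INPUT channel. The per-input-channel responses are SUMMED
# to form that output feature map.
in_ch, out_ch, k = 3, 2, 3
x = np.random.randn(in_ch, 8, 8)          # input:   (in_ch, H, W)
w = np.random.randn(out_ch, in_ch, k, k)   # weights: (out_ch, in_ch, kH, kW)

H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
out = np.zeros((out_ch, H, W))
for o in range(out_ch):
    for c in range(in_ch):                 # every input channel contributes...
        for i in range(H):
            for j in range(W):
                out[o, i, j] += np.sum(x[c, i:i+k, j:j+k] * w[o, c])  # ...via a sum
print(out.shape)  # (2, 6, 6): one feature map per output channel
```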
@manueljohnson1354 a month ago
Excellent
@ameliadunstone6161 a month ago
Thanks for this informative video! Here is a link to the detailed balance paper as the ones in the description are not accessible any more: kkhauser.web.illinois.edu/teaching/notes/MetropolisExplanation.pdf
@rishidixit7939 a month ago
What is the meaning of a realization of a Random Variable?
@KapilSachdeva a month ago
A sample of a Random Variable. A random variable has an associated probability distribution. A realization is a sample from that distribution.
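A tiny illustration of the distinction, using numpy's random API (the distribution and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
# The random variable X ~ N(0, 1) is the abstract object with a distribution;
# each concrete number produced by sampling it is one realization of X.
realization = rng.normal(0.0, 1.0)      # a single realization
realizations = rng.normal(0.0, 1.0, 5)  # five independent realizations
print(realization, realizations)
```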
@JackHoff-w8c a month ago
fantastic video. thank you
@nick45be a month ago
At 3:13, why do you refer to x=0.6 as the first value while immediately after you refer to it as the first sample? Shouldn't x=0.6 be a single value from a sample of data?
@KapilSachdeva a month ago
It’s the first sample.
@fayezalhussein7115 a month ago
so clear and good
@mumbo2526 a month ago
Very good video! I just have a question about 12:07: once we know that the derivatives are x^i, what allows us to just write x^i there in the formula? i is only the exponent of x_n, and I can't see any sum iterating over "i" or anything clarifying which i I need to use. I find that part of the formula quite difficult to follow during those steps. Only after everything is transformed into vectors and matrices does "i" disappear and the formula become readable again.
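One way to see where the x^i comes from, assuming the usual polynomial least-squares setup (the video's notation may differ slightly): differentiating with respect to a particular weight w_i kills every term of the polynomial except the one whose exponent is that same i, so there is one such equation for each i rather than a sum over i:

```latex
% Assuming the polynomial fit y(x_n, w) = \sum_j w_j x_n^j and the squared error
% E(w) = \tfrac{1}{2}\sum_n \big(y(x_n, w) - t_n\big)^2:
\frac{\partial}{\partial w_i}\, y(x_n, w)
  = \frac{\partial}{\partial w_i} \sum_j w_j x_n^j
  = x_n^i
  \quad\text{(all terms with } j \neq i \text{ vanish)},
\qquad
\frac{\partial E}{\partial w_i} = \sum_n \big(y(x_n, w) - t_n\big)\, x_n^i .
```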
@HàCôngNgaVNUHanoi 2 months ago
Amazing
@kapilkhurana4343 2 months ago
Kapil Sachdeva ji, thanks very much for clearing up my doubt about the Bayesian equation inherently using the marginal distribution. You really are a great teacher 🎉❤
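For reference, the point alluded to here, written with the marginal (evidence) appearing explicitly in the denominator:

```latex
% Bayes' rule; the denominator is the marginal of the data, obtained by
% integrating the joint over the parameter:
p(\theta \mid x) \;=\; \frac{p(x \mid \theta)\, p(\theta)}{p(x)},
\qquad
p(x) \;=\; \int p(x \mid \theta)\, p(\theta)\, d\theta .
```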
@MuhammadRandhawa-u7t 2 months ago
the CV model has detected a good boi
@ChrisOffner 2 months ago
Wonderful video, thank you so much. Your style is very pleasant.
@bivekpokhrel5625 2 months ago
Perfect, thank you!
@rakeshojha6395 2 months ago
Sir, I am an assistant professor of Bioinformatics and I am using this for molecular phylogenetics... thank you so much, sir, for this video.
@KapilSachdeva 2 months ago
🙏
@AndyLianQ 2 months ago
Thanks for the extraordinary job done! Helped me get a real quick grasp of the meaning of the paper.
@leoalanya4290 2 months ago
How can we just sample from the target distribution in order to calculate the acceptance ratio? Doesn't that defeat the purpose of the algorithm? Why wouldn't I just take samples from the target directly? Or is the problem that we are not able to draw independently from the target?
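On the point behind this question: Metropolis–Hastings only needs to evaluate the (possibly unnormalized) target density at candidate points; the candidates themselves are drawn from the proposal, which is why the method is useful precisely when direct sampling from the target is not possible. A minimal sketch with an arbitrary illustrative target and a Gaussian random-walk proposal:

```python
import numpy as np

def unnormalized_target(x):
    # We can EVALUATE this density (up to a constant) but not sample it directly.
    return np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

rng = np.random.default_rng(0)
x = 0.0
samples = []
for _ in range(10_000):
    proposal = x + rng.normal(0.0, 1.0)        # draw from the proposal, not the target
    ratio = unnormalized_target(proposal) / unnormalized_target(x)
    if rng.random() < min(1.0, ratio):         # acceptance needs only density evaluations
        x = proposal
    samples.append(x)
print(np.mean(samples), np.std(samples))
```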
@SM-mj5np 2 months ago
You're awesome.
@Blu3B33r 2 months ago
5:57 gave me the aha!-moment. Thank you so much!
@snehamishra5275 2 months ago
So do the bounding boxes come with the labeled data... or do we create the bounding boxes ourselves?
@JagadesanGanesan 3 months ago
What a clear explanation ! A gem.
@mechros4460 3 months ago
Exactly what I was looking for, thank you!
@somashreechakraborty1129 3 months ago
Brilliant explanation! Thank you so much!
@alexandrkazda7071 3 months ago
Thank you, the tutorial helped me a lot to get started with Einops.
When we were calculating Pr(x > 5), what is the role of h(x) here? Can't we just use p(x)?
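Assuming the setup where Pr(X > 5) is rewritten as an expectation E_p[h(X)] with h the indicator of the event: h(x) is what turns the probability into an average that can be estimated from samples of p. A minimal sketch (the exponential distribution is just an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
# h(x) = 1 if x > 5 else 0 turns Pr(X > 5) into an expectation E_p[h(X)],
# which is estimated by averaging h over samples drawn from p.
samples = rng.exponential(scale=2.0, size=1_000_000)  # samples from an illustrative p
h = (samples > 5.0).astype(float)                     # indicator h(x)
print(h.mean(), np.exp(-5.0 / 2.0))                   # Monte Carlo vs exact tail prob.
```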
@AlixChace-x7d 3 months ago
I have a question: is it possible for the sum of probabilities for a future state to be greater than 1, as in the case of s3 at 14:04 in the video? It seems it should always sum to 1.
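Without seeing the matrix at 14:04, one common source of confusion is worth noting: in a transition matrix, each row (the outgoing probabilities from one state) must sum to 1, but a column (the probabilities of arriving at one state from different states) need not. A small check with an illustrative matrix:

```python
import numpy as np

# Row-stochastic transition matrix: P[i, j] = Pr(next = s_j | current = s_i).
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.5, 0.4, 0.1],
])
print(P.sum(axis=1))  # rows:    [1.  1.  1. ] -> each must sum to 1
print(P.sum(axis=0))  # columns: [1.5 1.  0.5] -> no constraint to sum to 1
```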