*GitHub Code* - github.com/explainingai-code/DDPM-Pytorch
*DDPM Implementation Video* - kzbin.info/www/bejne/rKaZln6qmq-Km9k
Note: There's a typo at 19:49. In the denominator, the variance should use the cumulative product of alphas till t, not till t-1. So \bar{\alpha}_t instead of \bar{\alpha}_{t-1}.
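For reference, the corrected expression in the video's notation (the denominator here is q(x_t | x_0) from the Bayes expansion):

```latex
q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ (1-\bar{\alpha}_t)\,I\right), \qquad \bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s
```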
@anirbantarafdar28 • 2 months ago
After spending a month on the web/YouTube reading blogs and watching videos, I finally found this GEM. It's the best explanation of the mathematics in the entire universe. It took me 10 days to grasp all these excellent, mind-blowing thoughts behind the DDPM. Kudos to you. Keep posting on recent topics.
@Explaining-AI • 1 month ago
Thank you for these kind words :) Yes, I definitely plan to keep posting videos on both recent as well as slightly older (but still relevant) topics.
@amirzarei4955 • 11 months ago
Without a doubt the best video ever made on the subject of DDPM. Even better than the original paper. Thank you very much for that. ❤
@Explaining-AI • 11 months ago
I am truly humbled by your generous comment (it brought a big smile to my face :) ). Thank you so much for the kind words.
@adityapuranik8469 • 5 months ago
I genuinely can't agree more ❤️
@ASHISH-wv9wi • 3 months ago
I find it the best source for DDPM maths on the web. It took me a week to watch this video. Keep uploading these kinds of math derivation videos. Thanks!
@shizhouhuang4872 • 10 months ago
This is the best video I have watched on the diffusion model.
@Explaining-AI • 10 months ago
Thank you :)
@sladewinter • 4 months ago
Thanks man, this really helped clear some fundamental doubts which remained even after going through multiple articles on DDPMs. Terrific job!
@raghavamorusupalli7557 • 3 months ago
I learnt the maths of diffusion models from this lecture. Thank you.
@sushilkhadka8069 • 11 months ago
This is a great video. I completely understood everything up to "Simplifying the Likelihood for Diffusion Models". I'll need to replay it multiple times, but the video is very helpful. Please make more such videos diving into the maths. Most YouTubers leave out the maths while teaching DL, which is crazy because it's all math.
@Explaining-AI • 11 months ago
Thank you for saying that! And yes, the idea is to dive into the maths, as doing that also gives me the best shot at ensuring I understand everything.
@raul825able • 5 months ago
Wow! This is an incredibly clear explanation of the complex mathematics behind DDPM. Thank you so much, Tushar! This video is a real gem. The formulas may seem intimidating at first, but it's amazing how such a complex model can be derived from a fundamentally simple idea.
@Explaining-AI • 5 months ago
Thank you for this :)
@bayesianmonk • 6 months ago
I watched your video again, and cannot give you enough compliments on it! Great job!
@Explaining-AI • 6 months ago
@bayesianmonk Thank you so much for taking the time to comment these words of appreciation (that too, twice) 🙂
@learningcurveai • 8 months ago
Best explanation of the diffusion process, with its connection to the VAE framework!
@Explaining-AI • 8 months ago
Thank you for the kind words!
@raul825able • 5 months ago
Absolutely! Bringing in the VAE really helped me understand the concept in a clearer way.
@daryoushmehrtash7601 • 5 months ago
Thanks. Many interesting nuggets that I had missed when reading the paper.
@HassanHamidi-v8s • 3 months ago
Definitely the best explanation I've ever seen on this topic. Keep it up! :)
@Explaining-AI • 3 months ago
Thank You!
@tejasstanley • 1 month ago
Beautiful video; the effort put into making it must have been enormous. Thanks a lot!!!
@Explaining-AI • 1 month ago
Thank you for the appreciation :)
@mycotina6438 • 7 months ago
Superb, the math doesn't look all that scary after your explanation! Now I just need pen and paper to let it sink in.
@Explaining-AI • 7 months ago
Thank You!
@vikramsandu6054 • 7 months ago
I don't have enough words to describe this masterpiece. VERY WELL EXPLAINED. Thanks. :)
@Explaining-AI • 7 months ago
Thank you so much for this appreciation :)
@frommarkham424 • 2 months ago
We making it outta the hood with this tutorial🗣🔥💯
@efstathiasoufleri6881 • 7 months ago
Great video! It was very helpful for understanding DDPM! Thank you so much! :)
@Explaining-AI • 7 months ago
Thank you :) Glad that the video was helpful to you!
@alicapwn • 5 months ago
Excellent, clear explanation of diffusion.
@Explaining-AI • 5 months ago
Thank You!
@arpitanand4693 • 9 months ago
This video was absolutely amazing! Also, giving yourself a rating of 0.05 after spending 500 hrs on a topic is crazy (not that I would know, because I am about a 0.0005 on this scale). Waiting eagerly for the next one!
@Explaining-AI • 9 months ago
Thank you so much! The scale was more to indicate how much I don't know (yet) 😃 I have already started working on Part 2 of the Stable Diffusion video, so that should be out soon.
@kotakviraj4016 • 12 days ago
14:30 Actually, the reason we also model the reverse process as Gaussian is that the distribution q(x_{t-1} | x_t, x_0) is Gaussian with a certain mean and variance (this can be derived using Bayes' rule; there is no very high-level math there). Notice that it is conditioned not only on x_t but also on x_0. In theory, however, the reverse-process distribution is p(x_{t-1} | x_t), so we can't say the reverse process is itself Gaussian; we are just assuming/approximating it as one.
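For reference, the closed form that Bayes' rule yields (equations 6-7 in the DDPM paper, in the video's notation):

```latex
q(x_{t-1} \mid x_t, x_0) = \mathcal{N}\big(x_{t-1};\ \tilde{\mu}_t(x_t, x_0),\ \tilde{\beta}_t I\big), \qquad
\tilde{\mu}_t = \frac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_t}{1-\bar{\alpha}_t}\,x_0
             + \frac{\sqrt{\alpha_t}\,(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_t}\,x_t, \qquad
\tilde{\beta}_t = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\,\beta_t
```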
@nitishkumar-k3w2i • 1 month ago
About the explanation of this paper, I can only say "just wooow.." ❤
@Noname-e7b • 4 months ago
Damn, you really earned that sub! Great work :)
@Adityak1997 • 8 months ago
Hey, very helpful video. I'm making a project for our image processing course on the DiffPIR paper, and this video explains everything in sequence. All the calculations skipped in the paper are explained here, with proper intuition. Thanks 👍 Edit: Just one question: what about the term E[log(p(x_0|x_1))]? What is the idea behind it, and does the model minimize it?
@Explaining-AI • 8 months ago
Thank you! This term is the reconstruction loss, which is similar to what we have in VAEs. It measures, given a slightly noisy image x1 (t=1), how well the model is able to reconstruct the original image x0 from it. In an actual implementation, this is minimized together with the summation terms themselves. So during training, instead of uniformly sampling timesteps from t=2 to t=T (to minimize the summation terms), we sample timesteps from t=1 to t=T, and when t=1 the model is learning to denoise x1 (rather, to reconstruct x0 from a noisy x1). The only difference happens during inference, where at t=1 we simply return the predicted denoised mean, rather than returning a sample from N(mean, scheduled variance) as we do for t=2 to t=T.
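In code, the idea looks roughly like this. A minimal sketch of the sampling loop, with hypothetical names (the repository's actual code may differ); sigma_t^2 = beta_t is one standard choice of the scheduled variance:

```python
import torch

@torch.no_grad()
def sample(model, betas, shape):
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)                 # x_T ~ N(0, I)
    T = betas.shape[0]
    for t in reversed(range(T)):           # 0-indexed; t=0 is the paper's t=1
        eps = model(x, torch.tensor([t]))  # predicted noise
        mean = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x = mean + betas[t].sqrt() * torch.randn_like(x)  # sample N(mean, sigma_t^2 I)
        else:
            x = mean                       # final step: return the predicted mean only
    return x
```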
@DLwithShreyas • 8 months ago
Legendary video
@hendrikchiche8718 • 3 months ago
Great video, thank you! Some of the maths would need more explanation though, such as at 12:59, where you assume epsilon(t), epsilon(t-1), ..., epsilon(0) are all the same and factor them out as a new term named epsilon.
@Explaining-AI • 3 months ago
Thanks! And I agree with you. In hindsight, if I made the video again, I think it would be over an hour long at least, because there are a few aspects that I now think I could/should have gone into in more detail. Regarding the epsilon terms, I did talk about this briefly @12:16, where I mention that this can be done because the sum of two independent Gaussian random variables remains a Gaussian, with mean and variance being the sums of the two means and the two variances.
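Concretely, the step being applied there is the following, with \epsilon_t, \epsilon_{t-1} \sim \mathcal{N}(0, I) independent:

```latex
\sqrt{\alpha_t - \alpha_t\alpha_{t-1}}\ \epsilon_{t-1} + \sqrt{1-\alpha_t}\ \epsilon_t
\ \sim\ \mathcal{N}\big(0,\ (\alpha_t - \alpha_t\alpha_{t-1}) + (1-\alpha_t)\big)
\ =\ \mathcal{N}\big(0,\ 1-\alpha_t\alpha_{t-1}\big)
```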
@AR-on4wm • 11 months ago
Yes, in theory the forward process and the reverse process are the same, given that the process is a Wiener process (Brownian motion). Intuitively, if you have a microscopic view of Brownian motion, the forward and the reverse process look similar (i.e. random). kzbin.info/www/bejne/jnS4naF-hZaHhK8
@Explaining-AI • 11 months ago
Thank you for sharing the video link
@lucamautino8632 • 3 months ago
Amazing job! I'm studying DDPMs for my thesis and this is the best resource you can find, by far!
@Explaining-AI • 3 months ago
Thank You :)
@ShivangiTomar-p7j • 5 months ago
Wow this was awesome!!
@Explaining-AI • 5 months ago
Thank you
@jeffreyyoon3618 • 9 months ago
Appreciate your hard work🎉
@Explaining-AI • 9 months ago
Thank you for that :)
@miladmas3296 • 6 months ago
Amazing video! Thanks
@prathameshdinkar2966 • 7 months ago
Very nice! Keep the good work going!!
@Explaining-AI • 6 months ago
Thank You!
@tusharmadaan5480 • 10 months ago
Had a blast, Tushar bhai!
@Explaining-AI • 10 months ago
Thank you 😀
@TirthRadadiya-hp9sq • 10 months ago
Your explanation is really easy to understand. I have one request: can you make a video on virtual try-on models, like DiOr or TryOnDiffusion, which give good results? A paper explanation and an implementation would both really help. I have been trying to understand them for over a month but still couldn't understand anything.
@Explaining-AI • 10 months ago
Thank you! Yes, I will add it to my list. It might take some time to get to it, but whenever I do it, I will have both an explanation and an implementation.
@TirthRadadiya-hp9sq • 10 months ago
@@Explaining-AI Thank you Tushar
@BrijrajSingh08 • 6 months ago
Nice explanation..!
@Explaining-AI • 6 months ago
Thank You!
@bhushanatote • 9 months ago
Hi, a very good attempt at explaining the DDPM, and thank you for sharing the information. Kudos! To answer your question at 14:22 (why is the reverse process also a diffusion?): in the reverse process, after the U-Net predicts the noise, we check whether we are at t=0 (the x0, original-image state). If we are, our output is the mean (which has the same shape as the image); if we are not at t=0, our output is mean + variance (with this variance we are adding noise again, based on x0). Hope this helps!
@genericperson8238 • 7 months ago
Are you sure you're answering the question? You're talking about an implementation detail. Could you please elaborate on the mathematical intuition?
@gregkondas6457 • 11 months ago
This is a great video! Thanks!
@Explaining-AI • 11 months ago
Thank you! Glad that the video was of help
@ahrismile • 1 month ago
Reeeeeeeeally appreciate it
@himanshurai6481 • 10 months ago
Amazing tutorial! Thanks for putting this up. Waiting for the Stable Diffusion video. When can we expect that? :)
@Explaining-AI • 10 months ago
Thank you @himanshurai6481 :) It will be the next video that gets uploaded on the channel. I will start working on it from tomorrow.
@himanshurai6481 • 10 months ago
@@Explaining-AI looking forward to that :)
@EricCartman9003 • 1 month ago
Can you tell me what exactly the operation of adding noise to each pixel is? Why is the process a distribution? Shouldn't it be a deterministic function?
@bayesianmonk • 8 months ago
Amazing video, thanks a lot for all the effort you put into this. Just out of curiosity, what do you use for animating the formulas?
@Explaining-AI • 8 months ago
Thank you for the kind words! For creating the equations I use editor.codecogs.com, and then Canva for all the animations.
@bayesianmonk • 8 months ago
I thought you were using manim @@Explaining-AI
@Explaining-AI • 8 months ago
I haven't yet given it a try. I started with Canva for the first video and found I was able to do everything that I wanted to (in terms of animations), so I just kept using it.
@mrinmoybanik5598 • 2 months ago
I think there's a typo at 19:49: in the denominator, the variance should be \sqrt{1-\bar{\alpha}_t}\,I instead of \sqrt{1-\bar{\alpha}_{t-1}}\,I.
@Explaining-AI • 1 month ago
Yes, you are right. It should be \bar{\alpha}_t instead of \bar{\alpha}_{t-1}. It's correct in the next step, but I messed up in the starting expression. Thank you. I have now added this error to the pinned comment.
@surfingmindwaves • 8 months ago
Thank you for this fantastic video on DDPMs, it was super helpful. One thing I'm having trouble understanding is the derivation at 12:29: how can we go from the 3rd line to the 4th line on the right side? I mean this part: \sqrt{\alpha_t - \alpha_t\alpha_{t-1}}\,\epsilon_{t-1} + \sqrt{1-\alpha_t}\,\epsilon_t ... to the next line, where these two square roots are combined: \sqrt{1-\alpha_t\alpha_{t-1}}\,\epsilon?
@Explaining-AI • 8 months ago
In the third line, just view the epsilon terms as samples from Gaussians with 0 mean and some variance. So the two epsilon terms in the third line are just adding two Gaussians. Then we use the fact that the sum of two independent Gaussians ends up being a Gaussian with mean as the sum of the two means (which here for both is 0) and variance as the sum of the two variances. This is why we can rewrite it in the 4th line as a sample from a Gaussian with 0 mean and variance as the sum of the individual variances present in the third line. Do let me know if this clarifies it.
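A quick numerical sanity check of this step (illustrative alpha values only):

```python
import torch

torch.manual_seed(0)
a_t, a_tm1 = 0.9, 0.95
eps_tm1 = torch.randn(1_000_000)
eps_t = torch.randn(1_000_000)
# Combine the two independent zero-mean Gaussian terms from the 3rd line
combined = (a_t - a_t * a_tm1) ** 0.5 * eps_tm1 + (1 - a_t) ** 0.5 * eps_t
print(combined.var().item())   # ~0.145, matching the merged variance
print(1 - a_t * a_tm1)         # 0.145
```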
@surfingmindwaves • 8 months ago
@@Explaining-AI Yes, perfectly! Thank you for the quick response, that makes sense :)
@tanishmittal5083 • 10 months ago
The reverse process can't be computed, as the process we are doing is not reversible. This can be derived using nonlinear dynamics.
@Sherlock14-d6x • 4 months ago
Hey, good explanation. At timestamp 19:42, aren't the square roots of all the covariance matrices missing? Please correct me if I am wrong.
@Explaining-AI • 4 months ago
Thank you! Do you mean that the variance should be sqrt(1 - alpha_t)? If you see the formulation for xt @12:00, you can see that xt = sqrt(alpha_t) x_(t-1) + sqrt(1 - alpha_t) e, where e has mean zero and variance 1. This means sqrt(1 - alpha_t) e will have mean 0 and variance (1 - alpha_t), which is what is used @19:42. Let me know if I misunderstood your question.
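As a sketch in code (helper names are mine, not the repository's; e ~ N(0, I)):

```python
import torch

def forward_step(x_prev, alpha_t):
    # One noising step: x_t = sqrt(alpha_t) x_{t-1} + sqrt(1 - alpha_t) e
    e = torch.randn_like(x_prev)
    return alpha_t ** 0.5 * x_prev + (1 - alpha_t) ** 0.5 * e

def forward_jump(x0, alpha_bar_t):
    # Closed form for jumping straight from x0 to x_t
    e = torch.randn_like(x0)
    return alpha_bar_t ** 0.5 * x0 + (1 - alpha_bar_t) ** 0.5 * e
```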
@Sherlock14-d6x • 3 months ago
@@Explaining-AI At 28:18, why are we just returning the mean in the last step? Is the variance value 0 for timestep t=0?
@prathameshdinkar2966 • 4 months ago
At 4:27, in the definition of dXt, "random mean and zero variance" is mentioned, but at the bottom, when you do the re-parameterization, N(0, I) is mentioned, i.e. zero mean and unit variance. Isn't that different from the definition?
@Explaining-AI • 4 months ago
I think there is a comma missing (sorry about that confusion). It should actually be "random, mean zero & variance µ(Xt, t)dt". The last term needs to have mean zero and variance µ(Xt, t)dt.
@prathameshdinkar2966 • 4 months ago
@@Explaining-AI Thanks for the clarification
@AniketKumar-dl1ou • 10 months ago
Hats off, bhai!
@Explaining-AI • 10 months ago
Thank you!
@Sherlock14-d6x • 4 months ago
I had a doubt. At 17:11, if we had removed this x0 term, we would have gotten stuck further ahead, and the ground-truth reverse function and the approximating reverse function would effectively be representing the same thing, as both don't have the information of x0. Am I right in saying this?
@Sherlock14-d6x • 4 months ago
I just wanted to know, for an image, how will the end result be a normal distribution with mean 0, considering it has values between 0 and 1 after normalization?
@Sherlock14-d6x • 4 months ago
At 28:11, isn't it good to predict the computed noise, along with the timestep?
@Explaining-AI • 4 months ago
If we don't use the x0 conditioning, then what we would get is the KL divergence between q(xt|xt-1) and p(xt|xt+1). You can take a look at page 8 of this tutorial, arxiv.org/pdf/2208.11970, for that derivation; they also explain the resulting problems on page 9. We would then end up with the task of computing an expectation over samples of two random variables, xt-1 & xt+1 (high variance), drawn from the joint distribution q(xt-1, xt+1 | x0) (which we don't know how to compute). This is simplified when we add the x0 conditioning, which we see later in the video: the expectation is now over samples of one random variable, xt, drawn from q(xt|x0), and what we end up with is something we can easily compute. In the tutorial I linked, this change is made on page 9.
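Written out, the decomposition you get with the x0 conditioning (eq. 5 in the DDPM paper) is:

```latex
\mathbb{E}_q\Big[
\underbrace{D_{\mathrm{KL}}\big(q(x_T \mid x_0)\,\|\,p(x_T)\big)}_{L_T}
+ \sum_{t>1} \underbrace{D_{\mathrm{KL}}\big(q(x_{t-1} \mid x_t, x_0)\,\|\,p_\theta(x_{t-1} \mid x_t)\big)}_{L_{t-1}}
- \underbrace{\log p_\theta(x_0 \mid x_1)}_{L_0}
\Big]
```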
@Explaining-AI • 4 months ago
@@Sherlock14-d6x That's because at each timestep you are destroying the original structure a bit and adding a noise component. If you look @7:15 in the video, you can see that the original values were in the range -6 to 6, but that didn't matter: as we kept destroying the original structure and adding noise repeatedly, we ended up with a normal distribution @7:25.
@Explaining-AI • 4 months ago
@@Sherlock14-d6x Sorry, I didn't get this question. Could you elaborate a bit?
@NishanthMohankumar • 7 months ago
crazy stuff
@easyBob100 • 8 months ago
28:11 The algorithm for sampling, namely step 4, looks a lot different from what you explain. Why is that? To me, it looks like they take the predicted noise from xt, do a little math to it, subtract it from xt, then add a little noise to get xt-1. You kind of just ran through it like it was nothing, but it doesn't look the same at all.
@Explaining-AI • 8 months ago
Hello, do you mean the formulation of mu + sigma*z and step 4 of Sampling? They are actually the same; it just requires taking the 1/sqrt(alpha_t) factor out and simplifying the second term. Have a look at this: imgur.com/a/LJL73z1
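For anyone else reading, writing the mean out explicitly shows that mu + sigma*z is exactly step 4 of the sampling algorithm:

```latex
x_{t-1} = \underbrace{\frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(x_t, t)\right)}_{\mu_\theta(x_t,\,t)} + \sigma_t z, \qquad z \sim \mathcal{N}(0, I)
```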
@easyBob100 • 8 months ago
@@Explaining-AI Thank you, now I remember. Shift and scale. :)
@MediocreGuy2023 • 2 months ago
Can you tell me what will happen if enough noise is not added in the forward process?
@Explaining-AI • 2 months ago
If you don't add enough noise in each step, then the final distribution (assuming the same number of steps) would not really be Gaussian (in fact, it would still retain some of the original image structure). So the model wouldn't be able to generate images, because during generation you would be asking the model to denoise a random sample (from a Gaussian distribution), which it would never have seen during training, and hence the samples generated by this model would most likely be non-meaningful images.
@MediocreGuy2023 • 2 months ago
@@Explaining-AI Thanks for your help. I understood some parts of your reply.
@Explaining-AI • 2 months ago
@@MediocreGuy2023 Which specific part did you have difficulty understanding? I can try rephrasing it to clarify a bit more.
@MediocreGuy2023 • 2 months ago
@@Explaining-AI After training, we expect the diffusion model to output random samples (similar to the original distribution) from arbitrary noise. I mean that we don't run the forward process anymore after training. In that case, what can lead to non-meaningful image generation?
@Explaining-AI • 2 months ago
@@MediocreGuy2023 Yes, you are right, but we expect the model to be able to do the reverse (Gaussian to original distribution) ONLY if the forward-process end state is indeed Gaussian. In your specific case, when enough noise is not added in the forward process, the distribution at the end state after 1000 timesteps won't really be Gaussian (it will be some other distribution D). We can expect the model to do the reverse only if the starting point is a sample from D, but since we don't know D, we can't sample from it. And a sample from a Gaussian (which is what we usually use during inference), when fed as the starting point of the reverse process, will not be something the model has ever seen during training, so it doesn't know how to go from xT to xT-1 with this sample.
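To make this concrete, here is a tiny check of how much of x0 survives the forward process under different schedules (a sketch; the schedule values are just examples):

```python
import torch

def signal_left(beta_start, beta_end, T=1000):
    # sqrt(alpha_bar_T): the coefficient of x0 that remains in x_T
    betas = torch.linspace(beta_start, beta_end, T)
    return torch.cumprod(1 - betas, dim=0)[-1].sqrt().item()

print(signal_left(1e-4, 0.02))   # ~0.006 -> x_T is essentially pure noise
print(signal_left(1e-5, 1e-4))   # ~0.97  -> x_T is still mostly the original image
```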
@vinayakkumar4512 • 7 months ago
I derived the whole equation for the reverse diffusion process, and at 21:26, in the last term of the equation in the last line, I did not get \sqrt{\bar{\alpha}_{t-1}}. Could you share the complete derivation? Also, the third-to-last line seems to be incorrect: it should be \bar{\alpha}_{t-1} instead of \bar{\alpha}_{t-1}^2.
@Explaining-AI • 7 months ago
Hello, yes, the square on \bar{\alpha}_{t-1} is a mistake, which gets corrected in the next line. Thank you for pointing that out! Regarding the last term in the last line, it is \bar{\alpha}_{t-1}, which just comes from rewriting \bar{\alpha}_t in the last term of the second-to-last line as \alpha_t \bar{\alpha}_{t-1}.
@vinayakkumar4512 • 7 months ago
@@Explaining-AI Ahh yes, ignorant me. Thank you for your time deriving the equations. I have not found this derivation anywhere else yet :)
@genericperson8238 • 7 months ago
Great video, but as feedback, I'd suggest breathing and pausing a bit after each bigger step. You're jumping between statements really fast, so you don't give people time to think a little about what you just said.
@Explaining-AI • 7 months ago
Thank you so much for this feedback, it makes perfect sense. I will try to improve on this in future videos.
@isaacbautista3721 • 1 month ago
Just pause the video, dude. I love the tempo, keep it coming.
@SagarSarkale • 5 months ago
I have a doubt at this timestamp: kzbin.info/www/bejne/fmWYnXlqqLqan6csi=mzOMzB0uACX8mPd6&t=528 - when you do the summation of the GP, won't the common factor be sqrt(1-beta)? Hence the final summation equation seems wrong to me; I need some help understanding that formulation. Captions at the timestamp: "... the rest of the terms are all gaussian with zero mean but different variances however since all are independent we can formulate them as one gaussian with mean zero and variance as sum of all individual variances." Thanks
@Explaining-AI • 5 months ago
Hello, yes, the factors multiplying each zero-mean, unit-variance Gaussian are indeed sqrt(B), sqrt(B*(1-B)) and so on. But this means that each of the terms individually is a Gaussian with variance B, B*(1-B) and so on. The sum of these Gaussians will be a Gaussian with variance B + B*(1-B) + B*(1-B)*(1-B) + ... and zero mean. The GP that I am referring to is this summation of variances, which is why, when I use the formulation, I use the terms B and 1-B rather than sqrt(B) and sqrt(1-B), to say that the final Gaussian will be a zero-mean, unit-variance Gaussian, as the summation of variances (using the GP sum) is 1. Let me know if this clarifies your doubt.
@SagarSarkale • 5 months ago
@@Explaining-AI I did not fully understand this: "The sum of these gaussians will be a gaussian with variance B + (B * (1-B)) + B(1-B)(1-B) ... and zero mean." So I did some digging around it, and the key point is this: the sum of two independent normally distributed random variables is normal (+ your explanation in the video about Markov processes helped). Proof: en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables#Proof_using_convolutions This allows you to combine all the terms together as distributions and not as algebraic terms. I think I get it now. Let me know if my interpretation is lacking something. Thanks
@Explaining-AI • 5 months ago
@@SagarSarkale Yes. Sorry, I should have clarified this a bit more in the video. Just to add more details for somebody else reading: if X and Y are independent random variables each drawn from Gaussian distributions, X+Y is also Gaussian, with mean as the sum of their means and variance as the sum of the individual variances. The means of all the Gaussian distributions here are 0, so the distribution created by summing all these terms (each of which is generated with 0 mean and some variance) will be another Gaussian with mean 0 and variance as the sum of these variances. To compute this variance we use that GP formula, which ends up proving that the variance is 1.
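A two-line numerical check of that geometric sum (B chosen arbitrarily; with varying betas the same cancellation gives 1 - \bar{\alpha}_t):

```python
# The variances B + B(1-B) + B(1-B)^2 + ... sum to 1 - (1-B)^T, which -> 1 as T grows.
B, T = 0.02, 1000
print(sum(B * (1 - B) ** k for k in range(T)))  # ~1.0
print(1 - (1 - B) ** T)                         # same value, via the geometric-series formula
```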
@SagarSarkale • 5 months ago
@@Explaining-AI Yes. Thanks for the detailed reply and, of course, the video. Much help 🙌
@jakeaustria5445 • 5 months ago
Why does the noise need to be normal? Can't it be uniform?
@Explaining-AI • 4 months ago
Thank you for this question. I don't think the noise distribution MUST be normal. There are papers which have experimented with non-Gaussian distributions. In arxiv.org/pdf/2106.07582 the authors experiment with Gamma distributions, and in arxiv.org/pdf/2304.05907 the authors experiment with Uniform and a few other distributions, with the aim of determining which noise distribution leads to better generated data. In DDPM, the authors used Gaussian noise. What were the exact reasons for using only Gaussian noise? I don't really know the answer to that. From the perspective of the model being a Markov chain of latent variables, a lot of simplifications occur because the noise is Gaussian. For instance, the property that the sum of two Gaussian distributions is another Gaussian enables us to sample states at any timestep in the Markov chain without worrying about all previous timesteps (xt in terms of x0 rather than xt-1). But apart from the math being simpler, is there any advantage of using Gaussian noise over non-Gaussian noise purely in terms of generation results (and if so, why)? And under what conditions (if any) is non-Gaussian noise better? Unfortunately, I don't know the answers to these yet. If you come across more information on this particular topic, please do share it here.
@jakeaustria5445 • 4 months ago
@@Explaining-AI I'm not an expert in ML, but I tried using a uniform distribution as noise. Here's what I found. Consider x_(t+1) = a*x_t + (1-a)*u, 0 < a < 1
@acatisfinetoo3018 • 11 months ago
bruh my brain is exploding from the math😅
@Explaining-AI • 11 months ago
Yes, this one indeed requires a lot of math to understand, which is why I tried to put forth every detail :) Though maybe I could have presented it in a simpler manner.
@alicapwn • 5 months ago
now do flow matching
@Explaining-AI • 5 months ago
Added this one to my list :)
@sanchittanwar6073 • 1 month ago
Amazing, but please remove the background music.
@Explaining-AI • 1 month ago
Yeah, I have gotten the feedback that the background music is distracting. Sorry about that. I have taken care of this in my recent videos.
@anshumansinha5874 • 9 months ago
Hi, did you count 500 hrs as only on diffusion? Or including previously learned concepts like VAEs, ELBO, KLD, etc.?
@Explaining-AI • 9 months ago
Hello, that number was just for diffusion, as for 4-5 weeks all I was doing during the day (I don't work as of now) was understanding diffusion, and then, after that, implementation. And I give myself ample time to understand things at my own speed, so somebody else could understand the same, or much more, in less time :) But that number was just a means to express on a scale how much I still don't know, and how the video is just my current understanding of it all. Nothing more than that!
@anshumansinha5874 • 9 months ago
@@Explaining-AI Thanks for the reply. I also try to time myself while learning, as I think a definite number (lower bound) is required to build the concepts of any topic. That's why I was curious whether 500 hours was a calculated number, as Andrej Karpathy in his blogs also recommends an average figure of 10,000 hours to become a good beginner in machine learning.