The intuitive derivation part is really beautiful. I was previously finding it hard to grasp the concept from just the paper derivation, but this makes so much sense. To me personally, the intuitive derivation as well as the final code (especially the simple sampling part) seems even more helpful when I think about applying different interpolation methods for different noise.
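As a rough illustration of that last point (this is a sketch, not code from the video): swapping the interpolation path is a one-function change. `linear_path` is the straight-line interpolation used in flow matching, and `trig_path` is a hypothetical variance-preserving alternative with a cosine schedule.

```python
import math
import torch

def linear_path(x0, x1, t):
    """Straight-line (rectified flow) interpolation: x_t = (1 - t)*x0 + t*x1.
    The velocity regression target along this path is the constant x1 - x0."""
    return (1 - t) * x0 + t * x1

def trig_path(x0, x1, t):
    """A hypothetical variance-preserving alternative using a cosine schedule:
    x_t = cos(pi*t/2)*x0 + sin(pi*t/2)*x1."""
    return math.cos(math.pi * t / 2) * x0 + math.sin(math.pi * t / 2) * x1

x0 = torch.randn(2)            # a noise sample
x1 = torch.tensor([1.0, 0.0])  # a data sample
mid = linear_path(x0, x1, 0.5) # halfway along the straight line
```

Both paths start at the noise sample at t = 0 and end at the data sample at t = 1; only the schedule in between differs.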
@AntonVictorin · 2 days ago
Hey man! Fantastic video! I am working on a student project where we try to apply Flow Matching to tumour image data, and your video (and code) really helped me understand the foundations of the stuff going on!
@audioentropy6242 · 23 days ago
Really nice series on diffusion models. Thanks for doing them. I'd like to understand the ideas behind single-step generation models more, such as Consistency Models or Rectified Flow models.
@outliier · 23 days ago
@@audioentropy6242 thank you! My idea for the next video is distillation, which kind of aligns with what you suggested.
@audioentropy6242 · 23 days ago
@@outliier Nice, looking forward to that. Will be very helpful for me🙏
@sushicommander · 25 days ago
😃 Thank you for making this. I’ve been looking for an explainer like this for ages. Always learn new nuggets of info I missed from papers.
@MrDanielnis123 · 19 days ago
While studying for a uni course, I came across your channel and found it very helpful. Keep up the good work!
@gnorts_mr_alien · 25 days ago
Amazing video. The effort you spend to really understand things, and the creativity you bring to making them simple to teach, really shows.
@outliier · 25 days ago
@@gnorts_mr_alien thank you :3
@danielrose9754 · 24 days ago
I love the way you showed how to move from the FM to the CFM objective! In Yaron Lipman’s talk on the topic, he simply mentioned that the gradients coincide, but your derivation helps a lot with the understanding! As a follow-up, I would be very interested in a video on conditioning of FM models with CFG, like it was done in the paper “Scaling Rectified Flow Transformers for High-Resolution Image Synthesis” (Esser et al.). Keep up the great work!
@JGLambourne · 23 days ago
I see. Very nice. I remember reading the paper and not understanding how the x_1 data was matched with the x_0 noise. I did suspect that it was just random, but thought that wouldn't work. The way you explain it makes it seem more reasonable that it would.
@outliier · 20 days ago
Yea, honestly the way x_1 is matched with x_0 at random feels a bit weird, given that we eventually apply an MSE loss. There is probably room for an even better method for generative models.
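For reference, that random pairing is literally one line in the conditional flow matching training loss. Here is a minimal PyTorch sketch of it, where `model(xt, t)` is any hypothetical network predicting a velocity:

```python
import torch

def cfm_loss(model, x1):
    """Conditional flow matching loss with random noise-data pairing.
    x1: a batch of data samples; x0 is paired with x1 purely at random."""
    x0 = torch.randn_like(x1)       # random pairing: independent Gaussian noise
    t = torch.rand(x1.shape[0], 1)  # uniform time in [0, 1]
    xt = (1 - t) * x0 + t * x1      # point on the straight line at time t
    target = x1 - x0                # constant velocity along that line
    return torch.nn.functional.mse_loss(model(xt, t), target)
```

Because x0 is drawn independently of x1, lines between random pairs cross, and the MSE regression averages the conflicting targets into the marginal velocity field.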
@msanterre · 24 days ago
I love these videos. I always struggle a bit to get to fully understand these concepts by reading the papers, but how you explain it really makes it stick.
@wolfeinstien313 · 23 days ago
Finally! I've been waiting for this video. Just started watching it, but I know it is going to be great! Thank you :)
@팽도리-v6s · 12 days ago
Love this video. This is so simple and wonderful. Thanks❤
@gustavgille9323 · 24 days ago
Awesome video! Impressive how you can break it down to such a simple idea 👍
@jojodi · 23 days ago
Really great video. I implemented this in pytorch just based on my memory of your intuitive description and it...just worked? Transformed 2d gaussian noise to a "ring"-shaped distribution. Pretty awesome :)
@outliier · 23 days ago
@@jojodi oh that is awesome
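For anyone curious, a minimal end-to-end version of the experiment described above (2D Gaussian noise transformed into a ring-shaped distribution) might look like the sketch below. The architecture, hyperparameters, and ring parameterization are all assumptions for illustration, not the commenter's actual code:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_ring(n):
    """Target distribution: points on a noisy unit circle."""
    theta = 2 * torch.pi * torch.rand(n)
    r = 1.0 + 0.05 * torch.randn(n)
    return torch.stack([r * torch.cos(theta), r * torch.sin(theta)], dim=1)

# Velocity network v(x, t): input is (x, t) concatenated, output is a 2D velocity.
model = nn.Sequential(
    nn.Linear(3, 64), nn.SiLU(),
    nn.Linear(64, 64), nn.SiLU(),
    nn.Linear(64, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training: regress the constant velocity (x1 - x0) along random straight lines.
for step in range(2000):
    x1 = sample_ring(256)
    x0 = torch.randn_like(x1)       # noise, paired with data at random
    t = torch.rand(256, 1)
    xt = (1 - t) * x0 + t * x1      # point on the line at time t
    loss = ((model(torch.cat([xt, t], dim=1)) - (x1 - x0)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling: Euler-integrate dx/dt = v(x, t) from noise at t=0 to data at t=1.
with torch.no_grad():
    x = torch.randn(512, 2)
    n_steps = 100
    for i in range(n_steps):
        t = torch.full((512, 1), i / n_steps)
        x = x + model(torch.cat([x, t], dim=1)) / n_steps
```

If training succeeds, the sampled points `x` should cluster near radius 1, i.e. the ring.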
@cpldcpu · 24 days ago
The authors of this paper arrived at the same approach, but derived it rather by intuition: "Iterative α-(de)Blending: a Minimalist Deterministic Diffusion Model" (arxiv 2305.03486).
@zuchti5699 · 25 days ago
You always give me such good pointers and intuitions to go on and read the papers. Thanks so much for your work!
@outliier · 25 days ago
@@zuchti5699 thank you! Could you point me to an exact timestamp where this could be improved?
@scotth.hawley1560 · 21 days ago
Great video as usual. Keep it up!
@outliier · 20 days ago
Thanks :3
@spartancoder · 25 days ago
Extraordinary video.
@felipedilho · 25 days ago
You have made some incredible videos. I come from a Mathematical Physics background and I really appreciate your mathematical exposition of these machine learning papers on image generation! Thank you very much!!! If I could suggest another topic for a video, it would be Nvidia's new open-source model SANA, which claims to be more efficient than the majority of image synthesis generative models. They talk about methods such as DiT with linear attention and a Flow-DPM-Solver, which I don't know whether these are the same as Flow Matching.
@cpldcpu · 24 days ago
Thank you for the nice video! I always found diffusion models to be packaged in way too much math compared to the actual implementation.
@outliier · 20 days ago
100%
@peichunhua7138 · 18 days ago
Amazing! It's just that now I feel a little bad for myself for having spent hours trying to understand the math of DDPM...
@outliier · 18 days ago
@@peichunhua7138 haha well
@alivecoding4995 · 17 days ago
Great! 😊
@EkShunya · 25 days ago
another banger video
@abelhutten4532 · 24 days ago
Nice videos! I'd like to see RL videos, especially model-based RL.
@abelhutten4532 · 23 days ago
Many of the methods involve image generation for goal setting or as part of the world model, and there is lots of research involving multimodal models and learning compressed latent spaces. It would fit well with your other videos. Something like Dreamer v3 would be cool, just as an example. Showing a computer program learning to mine diamonds in Minecraft is also very spectacular.
@seehow-ai · 21 days ago
Great video! At 14:50 you mentioned it's a canonical transformation for Gaussians; does this still hold for non-Gaussian distributions?
@outliier · 21 days ago
@@seehow-ai yea you can also use other distributions
@RadRebel4 · 25 days ago
amazing video bro
@amortalbeing · 25 days ago
Thanks a lot
@SN-uc3vr · 25 days ago
You're awesome! I would be curious to know whether you're a PhD, in industry, or self-taught?
@outliier · 25 days ago
@@SN-uc3vr mostly self-taught; I did a bachelor's in AI in Germany and now I work in industry at Luma AI :3
@apartmenttour4000 · 25 days ago
Hi, at around 9:26, is q(x_1) the probability density of the marginalization variable x_1? Thanks
@outliier · 25 days ago
Yea, and technically this could also just be written as p_1(x_1), as far as I understand it. But this is how the paper wrote it too, so I kept that part.
@michaelchung9004 · 5 days ago
Did you make a mistake? kzbin.info/www/bejne/bZSwq5mhjKuKnqs Isn't it pred - target? Thanks @outliier
@outliier · 5 days ago
@@michaelchung9004 for the MSE loss you can have it either way; it will give the same results.
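A quick numeric check of this point: squaring makes the MSE symmetric in its two arguments, and the gradient with respect to the prediction is the same either way, so training is unaffected by the ordering.

```python
import torch
import torch.nn.functional as F

pred = torch.randn(4, 2, requires_grad=True)
target = torch.randn(4, 2)

# Same loss value regardless of argument order: (p - t)^2 == (t - p)^2.
loss_ab = F.mse_loss(pred, target)
loss_ba = F.mse_loss(target, pred)

# Same gradient w.r.t. the prediction in both cases:
# d/dp (p - t)^2 = 2(p - t) and d/dp (t - p)^2 = -2(t - p) = 2(p - t).
g_ab = torch.autograd.grad(loss_ab, pred)[0]
g_ba = torch.autograd.grad(loss_ba, pred)[0]
```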