Tutorial on Denoising Diffusion-based Generative Modeling: Foundations and Applications

128,381 views

Arash Vahdat


This video presents our tutorial on Denoising Diffusion-based Generative Modeling: Foundations and Applications. The tutorial was originally presented at CVPR 2022 in New Orleans, where it received a lot of interest from the research community. After the conference, we decided to record it again and share it broadly. We hope this video helps you start your journey into diffusion models.
Visit this page for the slides and more information:
cvpr2022-tutorial-diffusion-m...
Outline:
0:00:00 Introduction (Arash)
0:08:17 Part 1: Denoising Diffusion Probabilistic Models (Arash)
0:52:14 Part 2: Score-based Generative Modeling with Differential Equations (Karsten)
1:47:40 Part 3: Advanced Techniques: Accelerated Sampling, Conditional Generation (Ruiqi)
2:37:39 Applications 1: Image Synthesis, Text-to-Image, Semantic Generation (Ruiqi)
2:58:29 Applications 2: Image Editing, Image-to-Image, Superresolution, Segmentation (Arash)
3:20:42 Applications 3: Discrete State Models, Medical Imaging, 3D & Video Generation (Karsten)
3:35:20 Conclusions, Open Problems, and Final Remarks (Arash)
Follow us on Twitter:
Karsten Kreis: / karsten_kreis
Ruiqi Gao: / ruiqigao
Arash Vahdat: / arashvahdat
#CVPR2022 #generative_learning #diffusion_models #tutorial #ai #research

Comments: 64
@mipmap256 1 year ago
Don't complain that the audio is terrible. It is diffused with Gaussian noise. You need to decode the audio first.
@Vanadium404 8 months ago
💀
@piyushtiwari2699 1 year ago
It is amazing to see how one of the biggest companies in the world is collaborating to produce tutorials but couldn't invest in a $10 mic.
@listerinetotalcareplus 1 year ago
lol cannot agree more
@Vanadium404 8 months ago
That was hard 💀
@TheAero 8 months ago
Just goes to show that lots of people and companies don't try for excellence.
@atharvramesh3793 8 months ago
I think they are doing this in their personal capacity. It's a re-recording.
@conchobar0928 3 months ago
lmao they rerecorded it and gave a shout out to the people who commented about the audio!
@ksy8585 10 months ago
So my question is whether any diffusion model can denoise the audio of this fantastic tutorial.
@danielhauagge 1 year ago
Awesome video, thanks for posting. One thing though, the audio quality is pretty bad (low volume, sounds very metallic).
@Vikram-wx4hg 1 year ago
Dear Arash, Karsten, and Ruiqi, thanks a ton for putting this up! I was referring to your tutorial slides earlier, but this definitely helps much more.
@maerlich 1 year ago
This is a brilliant lecture!! I've learned so much from it. Thank you, Prof. Arash, Ruiqi, and Karsten!
@redpeppertunaattradersday1967 1 year ago
Thanks for the comprehensive introduction! It is really helpful :)
@saharshbarve1966 10 months ago
Fantastic presentation. Thank you to the entire team!
@prabhavkaula9697 1 year ago
Thank you for recording and uploading the tutorial. It is helpful for understanding the sudden boom in diffusion models and compares the techniques very well. I wanted to know: are the slides for Part 3 slightly different from the slides on the website (e.g., slide 11)?
@karthik.mishra 2 months ago
Thank you for uploading! This was very helpful!!
@weihuahu8179 5 months ago
Amazing tutorial -- very helpful!
@ankile 1 year ago
The content is fantastic, but the sound quality makes it materially harder to follow everything. A good microphone would lift the quality enormously!
@amortalbeing 1 year ago
Good stuff. Thanks guys
@deeplearningpartnership 1 year ago
Thanks for posting.
@danielkiesewalter3097 1 year ago
Thanks for uploading this video! Great resource, which covers the topics at a good depth and pace. The only point of critique I have would be to use a better microphone next time, as it can be hard at times to understand what you are saying. Other than that, great video.
@Vikram-wx4hg 1 year ago
A request: Please record this video again. I come to it many times, but the audio recording quality makes it very difficult to comprehend.
@nikahosseini2244 1 year ago
Thank you, great lecture!
@mehdidehghani7706 1 year ago
Thank you very much
@Vikram-wx4hg 1 year ago
Why do I feel that there is an audio issue (in recording) with this video for the first two speakers?
@windmaple 1 year ago
Would have been great if they had run the diffusion process on the audio.
@90bluesun 1 year ago
same here
@parsarahimi71 1 year ago
Nice job Arash ...
@Farhad6th 10 months ago
The quality of the presentation was so good. Thanks.
@bibiworm 1 year ago
3. 1:58:35 On the previous slide, it says that variational diffusion models, unlike diffusion models with a fixed encoder, include learnable parameters in the encoder. So are the training objectives in this paper for the reverse diffusion process only, or for the entire forward and reverse diffusion process? I also do not understand what the speaker means when he says that if we want to optimize the forward diffusion process in the continuous-time setting, we only need to optimize the signal-to-noise ratio at the beginning and the end of the forward process.
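On the signal-to-noise-ratio remark, one way to read it, following the Variational Diffusion Models paper (Kingma et al., 2021) and roughly using that paper's notation (a sketch from memory, not a quote of the slide):
```latex
q(x_t \mid x_0) = \mathcal{N}\!\big(x_t;\ \alpha_t x_0,\ \sigma_t^2 I\big),
\qquad
\mathrm{SNR}(t) = \frac{\alpha_t^2}{\sigma_t^2}.
```
In continuous time, the diffusion term of the variational bound can be rewritten as an integral over v = SNR(t), so its value depends on the learnable noise schedule only through the endpoints SNR(0) and SNR(1); if I recall the paper correctly, the shape of the schedule in between only affects the variance of the Monte Carlo estimate of that integral, which is what the learnable encoder is optimized to reduce. The objective still trains both the forward (schedule) parameters and the reverse model.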
@awsaf49 1 year ago
Thank you for the tutorial. Feedback: the audio quality is quite bad; I'm having a hard time understanding the words even with the YouTube transcription.
@maksimkazanskii4550 1 year ago
Guys, please apply the diffusion process to the audio. Excellent material, but almost impossible to listen to due to the audio quality.
@bibiworm 1 year ago
2. 2:00:38 "Can take a pre-trained diffusion model but with more choices of sampling procedure." What does this mean? Would it be possible to find answers in the paper listed in the footnote? Thanks.
@bibiworm 1 year ago
This feels like a semester's course in 4 hours. I have so many questions. I am just going to ask, hoping someone can shed some light. 1. 2:05:26 I don't quite understand the conclusion there: since these three assumptions hold, there is no need to specify q(x_t|x_{t-1}) as a Markovian process? What's the connection there? Thanks.
@liuauto 1 year ago
22:08 is there any theory to explain why we can use a Gaussian to do the approximation as long as beta is small enough?
@piby2 1 year ago
Yes, search for Kolmogorov's forward and backward equations for Markov processes.
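A sketch of the standard argument (along the lines of Sohl-Dickstein et al., 2015, which traces it back to Feller, 1949), not a rigorous proof:
```latex
q(x_{t-1} \mid x_t) \;\propto\; q(x_t \mid x_{t-1})\, q(x_{t-1}).
```
The forward factor, viewed as a function of x_{t-1}, is Gaussian with variance of order beta_t, so for small beta_t it is sharply peaked near x_t. Expanding log q(x_{t-1}) to second order around that peak contributes only a locally quadratic (hence Gaussian) correction, so the product is approximately Gaussian in x_{t-1}, and the approximation improves as beta_t goes to 0.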
@howardkong8927 1 year ago
Part 3 is a bit hard to follow. A lot of formulae are shown without an explanation of what they mean.
@zongliangwu7461 20 days ago
Do you have an open source denoiser to denoise the recording? Thanks
@theantonlulz 1 year ago
Good god the audio quality is horrible...
@piby2 1 year ago
Fantastic tutorial, I learnt a lot. Please buy a good microphone for the future, or set up a GoFundMe for a mic; I will be happy to donate.
@Cropinky 1 month ago
thx
@smjain11 1 year ago
At around 16:24 the marginal is equated to the joint. I didn't quite comprehend it. Can you please explain?
@smjain11 1 year ago
Is the reason that we are generating the Gaussian at time t by multiplying a diffusion kernel (which is itself a Gaussian) with the Gaussian at time t-1? So the joint of the t-1 Gaussian with the kernel at t-1 gives the marginal at t. And the problem setup is to learn the reverse diffusion kernel at each step.
@dhruvbhargava5916 1 year ago
The point of the equation is to demonstrate that, since we don't have the exact PDF at all time steps, we can't just sample x(t) from q(x(t)) directly. Instead we sample x(0) from the initial distribution (i.e., sample a data point from the dataset) and then transform it using the diffusion kernel to obtain a sample at timestep t. Do this for enough data points and you can approximate the distribution q(x(t)). q(x(t)) is marginal, i.e. independent of x(0). Now, coming to the equated part: marginal probability is not being equated to joint probability. Let's see how! x(0) is not a value in itself, it is a random variable; to avoid confusion, let's replace x(0) with i.
> Now i is a random variable which can take any value in the given domain (the dataset); let us assume an image dataset.
> Then q(i) describes the distribution of our dataset.
> q(i=image(1)) describes the probability of i being image(1).
> Now let i(t)=x(t).
> q(i(t)) describes the approximate distribution of noisy images which we got by repeatedly sampling images from the initial dataset at time step 0 and then carrying out the diffusion process for t time steps (t convolutions).
> q(i(t), i) describes the joint probability distribution over all possible pairs of values of i and i(t).
> q(i(t)=noisy_image, i=image(1)) describes the joint probability of the pair occurring, i.e. the probability that we started with i = image(1) and then after t time steps ended up with i(t) = noisy_image, which equals q(i=image(1)) * q(i(t)=noisy_image | i=image(1)). Here image(1) is the first image in the dataset and noisy_image is a certain noisy image sampled from q(i(t)).
> Now imagine we calculated q(i(t)=noisy_image, i) for all possible values of i (i.e. all possible images in the dataset as the starting point) and then added all these probabilities. What we would end up with is the probability of getting i(t) = noisy_image independent of what value we chose for the random variable i; this value is represented by q(i(t)=noisy_image).
> The integral of q(i) * q(i(t)|i) di gives us the above-mentioned quantity. An important thing to note is that the video explanation assumes the dataset to be continuous when explaining this part, whereas in my explanation I assumed a training set of images, which is discrete, so the integral can be substituted by a summation over all samples.
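In the tutorial's setting, this marginalization can be written compactly (a sketch using standard DDPM notation, not a quote of the slide):
```latex
q(x_t) = \int q(x_0)\, q(x_t \mid x_0)\, dx_0,
\qquad
q(x_t \mid x_0) = \mathcal{N}\!\big(x_t;\ \sqrt{\bar\alpha_t}\, x_0,\ (1 - \bar\alpha_t)\, I\big),
\quad
\bar\alpha_t = \prod_{s=1}^{t} (1 - \beta_s).
```
So q(x_t) is a mixture of Gaussians with one component per data point, which is why it is approximated by sampling x_0 from the dataset and applying the kernel rather than computed in closed form.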
@RishiYashParekh 1 year ago
What is the probability distribution q(x_0)? Because that is the original image itself, so will it be a distribution?
@meroberto8370 1 year ago
It's not known. Check 26:47. You try to approximate it by decreasing the divergence between the distribution you get when adding some noise to the image (it becomes Gaussian as you do it) and the reverse process where you generate the image through the model distribution (also Gaussian). So, in other words, by decreasing the q(x|noise)/p(x) divergence you approximate the data distribution without knowing it.
@bibiworm 1 year ago
@meroberto8370 So by the equations on page 25, all we need is x_t produced by the forward diffusion, some hyper-parameters such as beta, the forward-diffusion epsilon, which is known, and the backward-diffusion epsilon, which is estimated by the U-Net...
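That reading matches the usual simplified DDPM training objective. A minimal sketch in PyTorch, assuming an epsilon-prediction network model(x_t, t) (e.g. a U-Net) and a 1-D tensor betas for the noise schedule on the same device as the data; the function and argument names are illustrative, not taken from the tutorial:
```python
import torch
import torch.nn.functional as F

def ddpm_training_loss(model, x0, betas):
    """Simplified DDPM objective: the network predicts the noise injected at a random timestep."""
    T = betas.shape[0]
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)       # one random timestep per sample
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t]                 # \bar{alpha}_t for each sampled t
    alpha_bar = alpha_bar.view(-1, *([1] * (x0.dim() - 1)))          # reshape to broadcast over data dims
    eps = torch.randn_like(x0)                                       # forward-diffusion noise (known)
    x_t = alpha_bar.sqrt() * x0 + (1.0 - alpha_bar).sqrt() * eps     # sample x_t ~ q(x_t | x_0)
    return F.mse_loss(model(x_t, t), eps)                            # compare predicted vs. true noise
```
A training step would then just do something like loss = ddpm_training_loss(unet, batch, betas); loss.backward().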
@houzeyu1584 3 months ago
Hello, I have a question about 10:36. I know N(mu, std^2) is a normal distribution; how should I understand N(x_t; mu, std^2)?
@user-jp5cb8gm7y 3 months ago
It indicates that x_t​ is a random variable distributed according to a Gaussian distribution with a mean of μ and a variance of σ^2.
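Written out for the scalar case (the semicolon just separates the random variable from the distribution's parameters):
```latex
\mathcal{N}(x_t;\ \mu, \sigma^2)
= \frac{1}{\sqrt{2\pi\sigma^2}}
  \exp\!\left(-\frac{(x_t - \mu)^2}{2\sigma^2}\right),
```
i.e. the same density as N(mu, sigma^2), with its argument x_t made explicit; in the multivariate case, sigma^2 is replaced by a covariance matrix such as sigma^2 I.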
@mehmetaliozer2403 5 months ago
waiting for diffusion workshop 2023 records 🙏
@steveHoweisno1 1 year ago
I am confused by a very basic point. At 22:38, he says that q(x_{t-1}|x_t) is intractable. But how can that be? It's very simple, since x_t = sqrt(1-beta)*x_{t-1} + sqrt(beta)*E where E is N(0,I). Therefore x_{t-1} = (1-beta)^{-1/2}*x_t - sqrt{beta/(1-beta)}*E = (1-beta)^{-1/2}*x_t + sqrt{beta/(1-beta)}*R where R is N(0,I) (since the negative of a Gaussian is still Gaussian). Therefore q(x_{t-1}|x_t) = N(x_{t-1}; (1-beta)^{-1/2}*x_t, beta/(1-beta)*I). What gives?
@Megabase99 1 year ago
A little bit late, but from what I have understood the real problem is that q(x_{t-1}|x_t) depends on the marginal distribution of x_t, which needs the entire dataset to be calculated: you don't have a sample that defines x_t|x_0, you are just asking for the distribution of x_t over all x_0. However, q(x_{t-1}|x_t, x_0) is computable; given a starting sample x_0, you can describe the distribution of x_t related to x_0.
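To connect the two comments above: the subtlety in the algebraic inversion is that E is not independent of x_t, so rearranging the update rule does not give the conditional distribution of x_{t-1} given x_t. A sketch of the Bayes'-rule view in standard DDPM notation (with alpha_bar_t the cumulative product of (1 - beta_s)):
```latex
q(x_{t-1} \mid x_t) = \frac{q(x_t \mid x_{t-1})\, q(x_{t-1})}{q(x_t)},
\qquad
q(x_{t-1} \mid x_t, x_0)
= \mathcal{N}\!\Big(x_{t-1};\ \tilde{\mu}_t(x_t, x_0),\ \tilde{\beta}_t I\Big),
\quad
\tilde{\beta}_t = \frac{1 - \bar\alpha_{t-1}}{1 - \bar\alpha_t}\,\beta_t .
```
The first expression involves the marginals q(x_{t-1}) and q(x_t), which depend on the unknown data distribution, hence "intractable"; conditioning on x_0 removes that dependence and yields the closed-form Gaussian posterior used in the ELBO derivation.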
@coolarun3150 10 months ago
It's a nice lecture, but the DDPM explanation doesn't quite work as a standalone tutorial; if you already have a basic idea of the DDPM process, then it will make sense. Good for revising the math behind DDPM, but not a detailed tutorial if you are new to it.
@jeffreyzhuang4395 1 year ago
The sound quality is horrible.
@Vanadium404 8 months ago
Can you add captions please? The voice quality is so bad and the auto-generated captions are not accurate. Thanks for the tutorial btw, kudos!
@rarefai 1 year ago
You may still need to re-record this tutorial to correct and improve the audio quality. Resoundingly poor.
@MilesBellas 4 days ago
The audio is too garbled. Maybe use an AI voice for clarity?
@andreikravchenko8250 1 year ago
audio is horrible.
@anatolicvs 1 year ago
Please, purchase a better microphone......
@manikantabandla3923 11 months ago
Part 2 voice is terrible.
@wminaar 1 year ago
poor video quality, pointless delivery
@aashishjhaa 1 month ago
Bro, buy a mic please and re-record this; your voice sounds so muddy.
@Matttsight 9 months ago
Why the f all these people didn't have the common sense to buy a good mic, I don't know. What is the use of this video if it doesn't deliver its value? And another bunch of people write research papers so that no one in the world can understand them. The ML community will only improve if this knowledge is made accessible.
@Bahador_R 8 months ago
Wonderful! I really enjoyed it!