Lesson 22: Deep Learning Foundations to Stable Diffusion

6,372 views

Jeremy Howard

A day ago

Comments: 5
@michaelmuller136 4 months ago
Great lecture, especially about the different samplers, thank you!
@piotr.czapla 2 years ago
I have preliminary chapters. Let's see if YouTube lets me add them so that it is easier to improve on them. Chapters:
00:00 - Intro
00:30 - Cosine schedule (22_cosine)
06:05 - Sampling
09:37 - Summary / notation
10:42 - Predicting the noise level of noisy Fashion MNIST images (22_noise-pred)
12:57 - Why .logit() when predicting alpha bar t
14:50 - Random baseline
16:40 - mse_loss: why .flatten()
17:30 - Model & results
19:03 - Why are we trying to predict the noise level?
20:10 - Training diffusion without t - first attempt
22:58 - Why isn't it working?
27:02 - Debugging (summary)
29:29 - Bug in DDPM - paper that casts some light on the issue
38:40 - Karras (Elucidating the Design Space of Diffusion-Based Generative Models)
49:47 - Picture of target images
52:48 - Scaling problem (scalings)
59:42 - Training and predictions of modified model
1:03:49 - Sampling
1:06:05 - Sampling: problems of composition
1:07:40 - Sampling: rationale for rho selection
1:09:40 - Sampling: denoising
1:15:26 - Sampling: Heun's method (FID: 0.972)
1:19:00 - Sampling: LMS sampler
1:20:00 - Karras summary
1:23:00 - Comparison of different approaches
1:25:00 - Next lessons
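
The chapter list opens with the cosine schedule (22_cosine). For readers skimming, here is a minimal PyTorch sketch of that schedule, assuming the standard Nichol & Dhariwal formulation; the notebook itself may use a simplified variant.

    import math
    import torch

    def alpha_bar(t, T, s=0.008):
        # Cumulative signal level alpha_bar(t) under a cosine noise schedule
        # (Nichol & Dhariwal form); alpha_bar falls smoothly from 1 to ~0 as t -> T.
        f = lambda u: torch.cos((u / T + s) / (1 + s) * math.pi / 2) ** 2
        return f(t) / f(torch.tensor(0.0))

    # Example: signal levels for a 1000-step schedule.
    abar = alpha_bar(torch.linspace(0, 1000, 1001), T=1000)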
@myfolder4561 2 months ago
@6:04 (kzbin.info/www/bejne/bHPXkmSqjbeInK8). Just to seek clarification: the 'denoise' function essentially calculates x_0_hat (the unbiased estimate of x_0) at any given timestep, right? It's equivalent to making the best unbiased estimate of x_0 (i.e. the completely denoised image in the original data distribution) in a single denoising step, as opposed to the recursive approach of the multi-step reverse diffusion process, which iterates through a series of x_0_hat estimates and weighted averages against the noisy image x_t at each step of the reverse/denoising process. A single denoising step, albeit unbiased, would produce a pretty unsatisfactory outcome that's way off the original data distribution, given the high variance of this approach.
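
A minimal sketch of the one-step estimate described above, assuming the standard DDPM forward process x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps; the lesson's actual 'denoise' helper may differ in naming and details.

    import torch

    def x0_hat(x_t, eps_hat, alpha_bar_t):
        # One-step estimate of the clean image x_0, obtained by inverting the
        # forward process using the model's noise prediction eps_hat in place
        # of the true noise. alpha_bar_t is a tensor broadcastable against x_t.
        return (x_t - (1 - alpha_bar_t).sqrt() * eps_hat) / alpha_bar_t.sqrt()

In the multi-step sampler this estimate is recomputed and blended with x_t at every step, which is what keeps the variance of the final sample manageable.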
@myfolder4561 2 months ago
@kzbin.info/www/bejne/bHPXkmSqjbeInK8 Hope someone can shed some light and wisdom. Does the modified model incorporating c_skip look like it's consistently unable to denoise images on the noisy end of the spectrum? Given that the modified model's objective for noisier images is to emphasize finding the original image, does that mean it's not really doing what it set out to do?
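
For context, c_skip comes from the Karras et al. preconditioning covered at 52:48 (Elucidating the Design Space of Diffusion-Based Generative Models). A minimal sketch of those scalings, following the paper's formulation rather than the exact notebook code; sigma_data = 0.5 is the paper's default, while the lesson estimates it from the dataset.

    import torch

    def karras_scalings(sigma, sigma_data=0.5):
        # Preconditioning from Karras et al.: the denoised output is
        #   D(x, sigma) = c_skip * x + c_out * model(c_in * x, sigma)
        # At low noise c_skip is close to 1, so the model only predicts a small
        # correction; at high noise c_skip goes to 0, so the model output is
        # essentially the clean image itself.
        total_var = sigma ** 2 + sigma_data ** 2   # sigma is a tensor of noise levels
        c_skip = sigma_data ** 2 / total_var
        c_out = sigma * sigma_data / total_var.sqrt()
        c_in = 1 / total_var.sqrt()
        return c_skip, c_out, c_in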
@satirthapaulshyam7769 1 year ago
17:00 flattened MSE
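
On the .flatten() question at 16:40: a plausible reading, assuming the noise-level predictor outputs shape [batch, 1] while the targets have shape [batch], is that flattening avoids an unintended broadcast inside mse_loss.

    import torch
    import torch.nn.functional as F

    preds = torch.randn(32, 1)   # hypothetical model output: one noise level per image
    target = torch.rand(32)      # hypothetical targets: one alpha_bar value per image

    # Without flattening, broadcasting would compare [32, 1] against [32] and
    # silently produce a [32, 32] difference (PyTorch warns about the mismatch).
    loss = F.mse_loss(preds.flatten(), target)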
Related videos:

Lesson 23: Deep Learning Foundations to Stable Diffusion
1:40:56
Jeremy Howard
7K views
Lesson 19: Deep Learning Foundations to Stable Diffusion
1:30:03
Jeremy Howard
10K views
Why Does Diffusion Work Better than Auto-Regression?
20:18
Algorithmic Simplicity
448K views
How might LLMs store facts | DL7
22:43
3Blue1Brown
1M views
Lesson 20: Deep Learning Foundations to Stable Diffusion
1:45:42
Jeremy Howard
9K views
Lesson 21: Deep Learning Foundations to Stable Diffusion
1:55:47
Jeremy Howard
8K views
Visualizing transformers and attention | Talk for TNG Big Tech Day '24
57:45
Lesson 12: Deep Learning Foundations to Stable Diffusion
1:50:24
Jeremy Howard
18K views
Diffusion Models (DDPM & DDIM) - Easily explained!
18:28
Soroush Mehraban
9K views