Lesson 23: Deep Learning Foundations to Stable Diffusion

  5,944 views

Jeremy Howard

1 day ago

(All lesson resources are available at course.fast.ai.) In this lesson, we work with Tiny ImageNet to create a super-resolution U-Net model, discussing dataset creation, preprocessing, and data augmentation. The goal of super-resolution is to scale a low-resolution image up to a higher resolution. We train the model using the AdamW optimizer and mixed precision, achieving an accuracy of nearly 60%. We also explore the potential for improvement by examining the results other models achieve on Tiny ImageNet, as listed on the Papers with Code website.
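
A minimal sketch of the training setup described above, in plain PyTorch: the AdamW optimizer plus mixed precision via autocast and a gradient scaler. The lesson itself builds on the miniai framework from earlier lessons; the tiny model and random tensors here are stand-ins so the snippet runs on its own.

    import torch, torch.nn as nn, torch.nn.functional as F

    # Toy stand-ins so the sketch runs end to end; the lesson uses Tiny
    # ImageNet crops and the U-Net built in the notebook instead.
    model = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                          nn.Upsample(scale_factor=2), nn.Conv2d(32, 3, 3, padding=1))
    lo_res = torch.rand(8, 3, 32, 32)   # low-resolution inputs
    hi_res = torch.rand(8, 3, 64, 64)   # high-resolution targets

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, lo_res, hi_res = model.to(device), lo_res.to(device), hi_res.to(device)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
    scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

    # One mixed-precision training step: forward pass in float16 where safe,
    # loss scaled to avoid underflow in the float16 gradients.
    with torch.autocast(device_type=device, dtype=torch.float16, enabled=(device == "cuda")):
        pred = model(lo_res)
        loss = F.mse_loss(pred, hi_res)  # simple pixel-wise loss to start with
    opt.zero_grad()
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()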
We discuss the limitations of using a plain convolutional neural network for image super-resolution and introduce the U-Net, a more efficient architecture for this task. We implement perceptual loss, which compares the features of the output image and the target image at an intermediate layer of a pre-trained classifier model. After training the U-Net with this new loss function, the output images are less blurry and more similar to the targets.
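
A sketch of perceptual (feature) loss as described above: run both the model output and the target through a frozen pre-trained classifier and compare activations at an intermediate layer. The lesson uses the classifier it pre-trained on Tiny ImageNet; torchvision's VGG16 is substituted here only because it is easy to download.

    import torch, torch.nn.functional as F
    from torchvision.models import vgg16, VGG16_Weights

    # Frozen slice of a pre-trained classifier up to an intermediate layer.
    feats = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
    for p in feats.parameters():
        p.requires_grad_(False)

    def perceptual_loss(pred, target, pixel_weight=0.1):
        # Compare intermediate activations, plus a small pixel-wise term
        # (a combined loss; in practice inputs should be ImageNet-normalized).
        return F.mse_loss(feats(pred), feats(target)) + pixel_weight * F.mse_loss(pred, target)

    # Dummy usage with two batches of 64x64 RGB images.
    pred, target = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
    print(perceptual_loss(pred, target))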
Finally, we discuss the challenges of comparing different models and their outputs. Perceptual loss improves the results significantly, but there isn't a clear metric to use for the comparison. We then move on to gradually unfreezing pre-trained networks, a favorite trick at fast.ai: we copy the weights from the pre-trained model into our model and train it for one epoch with the down path's weights frozen, which yields a significant improvement in loss.
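
A minimal sketch of that weight-copying and freezing step, with hypothetical attribute names ("down_path" stands in for whatever the notebook calls the encoder; the tiny architecture exists only to make the snippet self-contained):

    import torch
    from torch import nn

    class TinyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.down_path = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
            self.head = nn.Conv2d(32, 3, 3, padding=1)
        def forward(self, x):
            return self.head(self.down_path(x))

    unet, pretrained = TinyNet(), TinyNet()  # "pretrained" stands in for the trained classifier

    # 1. Copy matching weights from the pre-trained model into ours.
    unet.down_path.load_state_dict(pretrained.down_path.state_dict())

    # 2. Freeze the down path and train the rest for the first epoch...
    for p in unet.down_path.parameters():
        p.requires_grad_(False)

    # 3. ...then unfreeze everything and fine-tune the whole network.
    for p in unet.down_path.parameters():
        p.requires_grad_(True)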

Comments: 11
@satirthapaulshyam7769
@satirthapaulshyam7769 9 months ago
1:06:00 ModuleList is just like Sequential, but it doesn't auto-forward; we need to define the forward method ourselves (see the sketch below)
@thehigheststateofsalad
@thehigheststateofsalad 2 months ago
I see you are trying to help. However, can you combine all your comments into one and delete the others? It's just messy.
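
On the ModuleList point at 1:06:00 above, a minimal illustration of the difference: nn.Sequential supplies the forward pass for you, while nn.ModuleList only registers the layers (so their parameters are tracked) and you write forward yourself.

    import torch
    from torch import nn

    layers = [nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 2)]

    # nn.Sequential defines forward for you: input flows through each layer in order.
    seq = nn.Sequential(*layers)

    # nn.ModuleList only holds the layers; the forward method is up to you.
    class WithList(nn.Module):
        def __init__(self):
            super().__init__()
            self.layers = nn.ModuleList(layers)
        def forward(self, x):
            for layer in self.layers:
                x = layer(x)
            return x

    x = torch.rand(1, 4)
    print(seq(x))          # works out of the box
    print(WithList()(x))   # same result, via the hand-written forward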
@satirthapaulshyam7769
@satirthapaulshyam7769 9 months ago
38:00 no need for an augmentation callback
@mattzucca4102
@mattzucca4102 1 year ago
😮
@satirthapaulshyam7769
@satirthapaulshyam7769 9 months ago
1:09:00 weight initialization of the U-Net
@satirthapaulshyam7769
@satirthapaulshyam7769 9 months ago
49:00 super-resolution
@satirthapaulshyam7769
@satirthapaulshyam7769 9 months ago
1:28:00 replacing some layers of the U-Net with a pre-trained classifier
@satirthapaulshyam7769
@satirthapaulshyam7769 9 months ago
57:00 why in super-resolution do we squeeze down and then unsqueeze again, why not use stride 1?
@satirthapaulshyam7769
@satirthapaulshyam7769 9 months ago
34:00 trivial augmentations
@satirthapaulshyam7769
@satirthapaulshyam7769 9 months ago
59:00
@satirthapaulshyam7769
@satirthapaulshyam7769 9 months ago
1:22:00 combined loss