Great lecture as usual, the pretraining part was nice, thank you!
@thanhphamduy6923 ай бұрын
0:00:00 Introduction and Bug Admission: Discussion of a bug in the FID calculation from the previous lesson and its implications.
0:05:52 Moving to Tiny ImageNet: Rationale for switching datasets and introduction to Tiny ImageNet.
0:07:43 Processing Tiny ImageNet Data: Manual data processing steps, creating datasets, and handling training/validation sets.
0:10:29 WordNet Categories and Data Transformations: Explanation of WordNet categories, creating dictionaries for mapping, and transforming data.
0:16:57 WordNet Hierarchy and Data Augmentation: Exploring the WordNet hierarchy and implementing data augmentation techniques (padding, flipping).
0:22:27 Trivial Augment: Introduction to Trivial Augment and its advantages over resource-intensive augmentation methods.
0:29:29 Comparing Results and Improving the Model: Discussing results, comparing with published research, and introducing techniques like pre-activation ResNets.
0:48:16 Break
0:48:27 Super Resolution Introduction: Defining the super-resolution task and its data requirements.
0:52:20 Super Resolution with Autoencoders: Exploring the limitations of autoencoders for super resolution.
0:59:29 Introduction to UNets: Explanation of UNets, their architecture, and the copy-and-crop mechanism.
1:07:07 Implementing a UNet: Code walkthrough of a UNet implementation, including down-sampling and up-sampling paths.
1:16:13 Perceptual Loss: Introducing perceptual loss and its benefits for image generation tasks.
1:24:59 Fine-tuning and Transfer Learning: Applying gradual unfreezing and transfer learning techniques to the UNet.
1:36:39 Conclusion and Exercise Suggestions: Wrapping up the lesson and proposing exercises like segmentation, style transfer, and other image-to-image tasks.
(generated by Gemini)
@satirthapaulshyam7769 Жыл бұрын
1:06:00 nn.ModuleList is just like nn.Sequential, but it doesn't auto-forward; we need to define the forward method ourselves.
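A minimal PyTorch sketch of the difference the comment above points out (the class name `TinyNet` is illustrative, not from the lesson): `nn.Sequential` calls its children in order automatically, while `nn.ModuleList` only registers them, so you must loop over it in a hand-written `forward`.

```python
import torch
from torch import nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        # ModuleList registers the modules (so their parameters are tracked),
        # but does NOT define how data flows through them.
        self.layers = nn.ModuleList([nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)])

    def forward(self, x):
        # We supply the forward pass ourselves, layer by layer.
        for layer in self.layers:
            x = layer(x)
        return x

# An nn.Sequential with the same modules needs no forward method at all.
seq = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

x = torch.randn(3, 4)
print(TinyNet()(x).shape)  # torch.Size([3, 2])
print(seq(x).shape)        # torch.Size([3, 2])
```

This extra control is exactly why UNet implementations use `ModuleList`: the forward pass must save each down-path activation for the skip connections, which a plain `Sequential` cannot express.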
@thehigheststateofsalad6 ай бұрын
I see you are trying to help. However, can you combine all your comments into one and delete the others? It's just messy.
@satirthapaulshyam7769 Жыл бұрын
38:00 no need for an augmentation callback
@satirthapaulshyam7769 Жыл бұрын
1:28:00 replacing some layers of the UNet with a pretrained classifier
@satirthapaulshyam7769 Жыл бұрын
57:00 why, in super resolution, squeeze down and then unsqueeze back up? why not just use stride 1?