The intuition for the denoising autoencoder is presented really well by adding noise to the original image and recreating the image with the AE 💯
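A minimal sketch of that idea in PyTorch (layer sizes and the noise level are made up, not the notebook's values): corrupt the input, reconstruct, and compare against the clean original.

```python
import torch
import torch.nn as nn

# Toy denoising autoencoder: reconstruct the clean image from a corrupted input.
class DenoisingAE(nn.Module):
    def __init__(self, d=28 * 28, h=30):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d, h), nn.Tanh())
        self.decoder = nn.Sequential(nn.Linear(h, d), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
x = torch.rand(16, 28 * 28)                            # pretend batch of flattened images in [0, 1]
x_noisy = (x + 0.3 * torch.randn_like(x)).clamp(0, 1)  # add noise to the input
loss = nn.MSELoss()(model(x_noisy), x)                 # target is the clean image, not the noisy one
loss.backward()
```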
@alfcnz 23 days ago
😊😊😊
@rajgupt 3 years ago
After a long time I have found an instructor who is fun as well as easy to follow on complex topics... cheers
@alfcnz 3 years ago
🥳🥳🥳
@ShahNisargAnishBEE 3 years ago
Wow, you put so much work into your lectures, from animations and visualizations to code and explanations. Really enjoyed learning from your lectures, thanks~~
@alfcnz 3 years ago
🥳🥳🥳
@ZobeirRaisi 3 years ago
Excellent, thanks for your effort in teaching us in the best way ☺️
@alfcnz 3 years ago
😇😇😇
@buoyrina9669 2 years ago
Like your visualization at 50:04
@alfcnz 2 years ago
Thanks! 🤗🤗🤗
@rohitashwachakraborty5725 3 years ago
OMG, you really make GANs come alive. 😂😂
@alfcnz 3 years ago
🥳🥳🥳
3 years ago
Brilliant!
@alfcnz 3 years ago
😀😀😀
@datonefaridze1503 2 years ago
Wow, that was really great
@alfcnz 2 years ago
😅😅😅
@melalastro 3 years ago
Epic
@alfcnz 3 years ago
😬😁✌🏻
@Aristocle 1 year ago
7:40 cool
@alfcnz 1 year ago
😀😀😀
@Na_v-op8ng 1 year ago
Greetings, Dr Canziani. Thanks very much for your class. May I ask why the decoder input is half of the encoder output? Best.
@alfcnz 11 months ago
You need to tell me minute:second for me to be able to answer your question.
@ShihgianLee 3 years ago
Whoooooa! Another fantastic delivery with cool animation, storyline, and accent by professor Alfredo! 😊 Is there an energy-based VAE paper that I can read? Thanks!
@alfcnz 3 years ago
No, there is not. I'll add a chapter on EBVAE in our book.
@guillaumevermeillesanchezm2427 3 years ago
My 2cts: GANs are perfectly fine to use and stable now that we have the R1 / R0 regularizers. All GANs used to explode because the discriminator's gradients were not bounded; it tried to give fake samples infinite energy and zero energy to real points. As the generator performed better and fake samples got closer to real samples, the gradient grew steeper and steeper, leading to explosion. This has changed with those regularizers. In the limited-data regime, though, D is prone to overfitting and gives bad gradients; use R0 or differentiable data augmentation. Don't use spectral norm though, as it has been shown to provoke spectral collapse and be unstable.
@alfcnz 3 years ago
Any reference you can share about these claims? 🙂
@guillaumevermeillesanchezm2427 3 years ago
@@alfcnz Sure thing!
R1 regularizer (re-used in StyleGAN): arxiv.org/abs/1801.04406
R0 / 0-GP: arxiv.org/abs/1902.03984
Spectral collapse (instability shown but not characterized in BigGAN and follow-up works exhibiting that only about 60% of runs converge): arxiv.org/abs/1908.10999
Data augmentation in GANs: arxiv.org/abs/2006.06676, arxiv.org/abs/2006.10738
Hope that helps :) My own experience, doing _a lot_ of GANs, largely supports those claims.
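As a rough illustration of the R1 penalty from the first reference above (a PyTorch sketch; the function name and γ value are placeholders, not from the lecture), the idea is to penalize the squared norm of the discriminator's gradient on real samples so it cannot grow without bound:

```python
import torch

def r1_penalty(discriminator, real_images, gamma=10.0):
    """R1 regularizer: penalize the squared norm of the discriminator's
    gradient w.r.t. real inputs so it stays bounded."""
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images)
    grad, = torch.autograd.grad(scores.sum(), real_images, create_graph=True)
    return 0.5 * gamma * grad.pow(2).flatten(start_dim=1).sum(dim=1).mean()

# Usage sketch inside the discriminator step, e.g.:
# d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros) + r1_penalty(D, real)
```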
@fuzzylogicq 3 years ago
Curious, are the video overlay animations done live during the lecture or added in editing before posting on YouTube?
@alfcnz 3 years ago
Post editing.
@santhoshnaredla8243 3 years ago
Can you please also implement ITL autoencoders?
@alfcnz 3 years ago
Information-theoretic learning autoencoder? It has 16 citations and it's from 2016. I don't think it's worth it?
@Gabriel-qi8ms 2 years ago
Hey Alfredo! Thank you for the excellent explanations and videos! I have a question regarding the test and training loss of the VAE. Can you explain why the test loss is lower than the training loss? Even when I switch off model.eval(), I still get lower values. Thank you in advance! :)
@alfcnz 2 years ago
You need to add a timestamp to your question if you're referring to specific parts of the video. Otherwise it's impossible for me to answer.
@gianlucafugante8614 3 years ago
easy money, king
@alfcnz 3 years ago
🤑🤑🤑
@ahmedbahaaeldin750 3 years ago
Thanks a lot for the great, detailed lectures. If you can, please list some math books that can give students solid foundations for deep learning, especially probability, calculus, and statistics books.
@alfcnz 3 years ago
mml-book.github.io/
@pastrop2003 3 years ago
Great presentation! I have to admit the story of Italians in Sicily making fake money and pushing it to the Swiss never gets old... :)
@alfcnz 3 years ago
Haha, I feared I'd pushed it a little too far 😅
@pastrop2003 3 years ago
Nah, it's all good... A number of years ago I was working in Europe. We had a very international team, and the running joke was that Europe is Heaven and Hell mixed together, with Heaven being German mechanics, French cuisine, British police, Swiss management, and Italian lovers, and Hell being German police, French mechanics, British cuisine, Swiss lovers, and anything managed by Italians... I'm not sure I agree with all of the above now, yet when I feel like teasing my European friends, it still works somehow 😂
@alfcnz 3 years ago
@@pastrop2003 🤣🤣🤣
@pastrop2003 3 years ago
@@alfcnz you may like this: github.com/eriklindernoren/PyTorch-GAN It is not maintained, sadly, but maybe some of your students will be interested... I found this repo when I had an idea of building a sort of "all things image" clone of Huggingface, but life intervened. Building good APIs is time-consuming. It made me respect the Huggingface folks even more than I already did...
@НиколайНовичков-е1э 3 years ago
Alfredo, I wanted to ask one question. Why don't you use the cmap='Greys' parameter when displaying digits in imshow? Is it on purpose? I think digits look better when they are drawn in black on a white background. I still have this question from the last time I saw this part :).
@alfcnz 3 years ago
Because the human eye can better discriminate changes in brightness when coupled with colour. Namely, purple is the minimum value, green is the mean, and yellow the maximum. Using a grey scale colour map would simply be less informative when analysing the data / feature maps / kernels / matrices.
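For comparison, a small matplotlib sketch of the two options (the image is a random stand-in, not the notebook's data):

```python
import numpy as np
import matplotlib.pyplot as plt

img = np.random.rand(28, 28)    # stand-in for a digit / feature map

fig, (ax0, ax1) = plt.subplots(1, 2)
ax0.imshow(img)                 # default 'viridis': purple = min, green = middle, yellow = max
ax0.set_title('default (viridis)')
ax1.imshow(img, cmap='Greys')   # grey scale: black on white, fewer perceptual cues
ax1.set_title("cmap='Greys'")
plt.show()
```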
@liongkhaijiet5014 3 years ago
Just to be sure, the VAE notebook on the course website is not up to date, right?
@alfcnz 3 years ago
I fear I've just reverted it to the old one on my machine. The idea was to push the newer version to the new repo. Instead, I undid the edits because the notebook got corrupted. If you feel like editing it yourself to match the video, I'd love to review it and merge your pull request into the new repo. Also, the beta hyperparameter needs some serious tuning. Right now the KL strength is basically nothing compared to the reconstruction cost. We should monitor U, V, and C independently.
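For anyone tuning β themselves, a hedged sketch of logging the two terms separately (function and variable names, and the choice of BCE, are assumptions, not the course notebook's code):

```python
import torch
import torch.nn.functional as F

def vae_loss(x_hat, x, mu, logvar, beta=0.5):
    # Keep the reconstruction and KL terms separate so their relative
    # magnitudes can be monitored while tuning beta.
    recon = F.binary_cross_entropy(x_hat, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kld, recon.detach(), kld.detach()
```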
@gg8265 3 years ago
Hello Alf, I'm watching your videos to study deep learning myself this summer. It seems DLSP21 is not fully public; are there no practicum videos from week 10 to week 14? And I wonder what the best order of lectures is if I want to watch the missing part from DLSP20. Thanks so much for all your inputs, I really love your passion and purple background!
@alfcnz 3 years ago
I've been trying to push forward with research lately. Yes, there are 4 more practica and 14 lectures that need to come online.
@sekfook97 3 years ago
Hi, at 31.37 you mentioned that mu is the first output and logvar is the second output? May I know why, since I thought what you did was reshape the output of the encoder. Thanks
@alfcnz 3 years ago
Use : to separate minutes from seconds, like 31:37, so it becomes a link I can use when I'm reading these from mobile. Otherwise it's a pain to figure out what you're talking about.
@sekfook97 3 years ago
Haha ya, I should do that, thanks
@alfcnz 3 years ago
Answering your question: there's no reason. You could have swapped the two, and everything would have been the same. The net doesn't know or care what you use its outputs for until you insert them into an energy term, which is going to send back gradients when you minimise the loss.
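To make the point concrete, a tiny sketch (dimensions are invented, not the notebook's): the encoder just emits 2·d numbers, and chunking them into μ and log σ² is an arbitrary convention that only gains meaning through the loss.

```python
import torch

d = 16                          # latent dimensionality, purely illustrative
h = torch.randn(8, 2 * d)       # pretend encoder output for a batch of 8
mu, logvar = h.chunk(2, dim=1)  # first half -> mu, second half -> logvar
# Swapping the two halves would train just as well: the split only acquires
# meaning through how mu and logvar enter the reparametrization and the loss.
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
```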
@sekfook97 3 years ago
I see, thank you so much for the explanation! Really enjoyed the lecture.
@alfcnz 3 years ago
@@sekfook97 sweet 😊😊😊
@heshamali5208 2 years ago
Could we use diffusion models as a self-supervision technique, like a VAE? E.g., after training a diffusion model, we take the U-Net and do segmentation with it. Is this considered a good idea for self-supervision?
@Dream4514 3 years ago
Thanks Alfredo for these amazing videos, they're helpful. I would like to ask: I'm learning programming and I'm still at the beginning, and I want to learn machine learning and deep learning. What is your advice for me? Where should I start?
@alfcnz 3 years ago
You're very welcome 🐱🐱🐱 I think it depends on what your ultimate goal is. Are you interested in pursuing a research career or an application and engineering one?
@Dream4514 3 years ago
@@alfcnz I'm thinking about an application and engineering path, and I'm a very ambitious person. I'm planning to learn machine learning, deep learning, and computer vision, but my problem is that I don't know how to find the road map. Should I first learn programming, for example web/mobile, or start directly with machine learning, deep learning, etc.? What is your advice for me?
@alfcnz 3 years ago
You don't necessarily need to know programming in order to learn machine learning. Machine learning is a combination of linear algebra and numerical analysis, which implies you will use a computer to run the computations. On the other hand, if you plan to focus on applications, then ML is only going to be a small fraction of what you'll be doing, and serious object-oriented programming is going to be essential to develop maintainable projects.
@skilz8098 3 years ago
@@Dream4514 I'm thinking a nice starting point would be to check out sentdex's channel, found here: kzbin.info He has various playlists, and I would suggest starting with his neural networks series first: get an understanding of the network structures, the different layers, the different types of operators or sigma functions, and the output layers, then get into forward, backward, and bidirectional propagation, as well as error reduction. Then you could dive into the AI algorithms of the various different types, such as unsupervised and supervised learning, and from there you should be able to follow along with machine learning or deep learning... And the best way to learn isn't just by watching the videos but by following along... Before worrying about "programming or writing code" in any given language (in both of these cases Python), build your algorithms first, write them down on paper, and make sure they are right; then, once you have the appropriate algorithms, convert them into your language's syntax and watch the magic happen... That would be one of my suggestions. If you want to dive a little deeper and see many of the practical applications based on research, then you can check out the YouTube channel Two Minute Papers: kzbin.info He has many excellent and interesting topics and content, mostly pertaining to AI algorithms, neural nets, and machine learning, as well as some graphics or physics simulations and animations. Great stuff! Oh, and don't forget that Google itself is one of your mightiest resources!
@Dream4514 3 years ago
@@skilz8098 thank you so much, appreciate 🌹
@kalokng3572 2 years ago
Hi Alfredo, I've been playing around with your VAE code and encountered a situation where the loss goes to NaN after a few epochs. So, by first-principles thinking, I tried to inspect the two components of the loss function: the reconstruction loss and the KL divergence. I found that the KLD loss becomes NaN, so I further inspected the mu and variance of the latent. After 20 epochs of training, most dimensions of the mean and variance are very close to 0 and 1, except for a few dimensions that become NaN. As a next step I'm going to try tuning the beta parameter, since the KLD is very small compared with the reconstruction error. However, I'm just wondering why this affects only a few dimensions of the latent's mean and variance, making them NaN while all the other dimensions seem to behave "normally".
@kalokng3572 2 years ago
Or is it actually not a problem with beta but with the weight initialization before training?
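One way to chase that down (a sketch, not a fix from the lecture): log the per-dimension KL and bound logvar, since exp(logvar) overflowing is a common source of NaNs.

```python
import torch

def kld_per_dim(mu, logvar):
    # Per-dimension KL between N(mu, sigma^2) and N(0, 1), averaged over the batch.
    return -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean(dim=0)

# Sketch of what to log inside the training loop:
# kld = kld_per_dim(mu, logvar)
# if not torch.isfinite(kld).all():
#     print('non-finite KL in dims:', torch.nonzero(~torch.isfinite(kld)).flatten())
# A blunt safeguard is to bound logvar before exponentiating it:
# logvar = logvar.clamp(-10, 10)
```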
@НиколайНовичков-е1э 3 years ago
Does the bear in the background also listen to the lectures? :)
@alfcnz 3 years ago
Of course 🐻 Vincenzo listens to all my lectures! He's my most attentive student and has never missed a class! 😊😊😊