Thanks for the video! Very informative! Just to check: at 1:03:42, should point 3 be "... save back the result to HBM"?
@swapankumarsarkar1737 · 1 day ago
Dear Sir, please make a video with a detailed explanation of the code of the diffusion model. It would be helpful. Thanks for your understanding and the valuable video.
@codevacaphe3763 · 2 days ago
Always a fan of your videos. Your explanations are very informative and helpful for beginner data scientists. Thank you very much.
@parmanandchauhan6182 · 3 days ago
Great content, deep understanding after watching the video.
@satishsingh9053 · 3 days ago
Please use a bigger font size. It's difficult to read.
@kennymickorr · 3 days ago
I literally fell asleep, where did this come from?
@bevandenizclgn9282 · 3 days ago
Best explanation I found on YouTube, thank you!
@user-dh3up2iw7o · 3 days ago
Amazing video.
@prethasur7376 · 3 days ago
Life saver 😭 Thank you so very much, lots of love and gratitude 💙💙
@carlosjaviergarcialopezdeh9213 · 4 days ago
awesome video man!
@rishabhvarshney2234 · 4 days ago
Superb explanation. Please make a video on RNNs in depth, with the math as well.
@venkateshdesai3150 · 4 days ago
Amazing!! I finally understood everything. Good job; all your videos show in-depth understanding.
@Marko-lq6hi · 4 days ago
39:25 Renormalization will not bring the values into the range [0, 1]. Take for example the sample (-2, 0, 0, 0, 0, 0, 0, 0, 2): renormalization would rather squeeze (or stretch) the values depending on their range. As a matter of fact, a sample with variance 1 and mean 0 will surely have at least one value outside the range [-1, 1]. In any case, amazing videos, thank you.
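The point above is easy to verify numerically. A quick NumPy sketch, using the sample from the comment: standardization fixes the mean at 0 and the standard deviation at 1, but it does not bound the values to [0, 1] or [-1, 1].

```python
import numpy as np

# The sample from the comment above
x = np.array([-2.0, 0, 0, 0, 0, 0, 0, 0, 2.0])

# Standardize: subtract the mean, divide by the standard deviation
z = (x - x.mean()) / x.std()

# Mean and std are now fixed at 0 and 1, but the range is not bounded:
# the extreme values land at roughly +/-2.12, outside [-1, 1]
print(z.min(), z.max())
```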
@11_neelabh80 · 4 days ago
Yes, this is what I wanted.
@brandonheaton6197 · 5 days ago
Best explanation of splines I have seen. Legit 100%.
@user-jb3ht1wq5l · 5 days ago
PLEASE explain spacetimeformer
@Mortazaghafaripour · 5 days ago
Great 👍
@ChukwuemekaAmblessedchinenye · 6 days ago
Hello sir, can you please make a tutorial on PyTorch to follow along with your PyTorch projects? Thank you in advance.
@sandipanpaul1994 · 6 days ago
One of the best videos on BERT on YouTube.
@saranyav2581 · 6 days ago
Thank you for the amazing explanation
@TheBlackNight971 · 6 days ago
Both the VAE and the AE map the input onto a latent space; the difference lies in the **structure** of this latent space. The AE's latent space is not as "well-organized" as the VAE's.
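What organizes the VAE's latent space is the KL term it adds to the AE-style reconstruction loss, pulling each encoded distribution toward the standard normal prior. A minimal NumPy sketch of that regularizer (the closed-form KL divergence of a diagonal Gaussian from N(0, I); the helper name is my own):

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    # This is the extra term in the VAE loss; a plain AE has no such
    # pressure, so its latent space ends up less "well-organized".
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

# A posterior that already matches the prior pays no penalty...
print(kl_to_standard_normal(np.zeros(4), np.zeros(4)))      # 0.0
# ...while one that drifts away from it is penalized
print(kl_to_standard_normal(np.full(4, 2.0), np.zeros(4)))  # 8.0
```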
@niysniys1490 · 6 days ago
Hello, thanks for your great video!! There is a puzzle confusing me a lot: I am wondering how to train a diffusion model with CFG. I think that, matching the two inputs, the targets should also be two images. So what is the target image for the conditional input? And what is the target image for the unconditional input? 😀😀😀
@tenzinlhakpa1672 · 6 days ago
Amazing work, thank you so much!
@jerrylin2790 · 6 days ago
I was immersed in the video when, all of a sudden, Umar spoke to his cat in Chinese... haha... now I understand why some comments are left in Chinese.
@jerrylin2790 · 5 days ago
Generated a cat on my first try... so amazing. Thank you Umar.
@sharyakbar2086 · 7 days ago
Can someone please help me figure out how to run this to produce images from text? I have placed all the files as in the GitHub repository, but when I run the demo.ipynb file it gives me this error: TypeError: 'weights_only' is an invalid keyword argument for Unpickler()
@suriyars4487 · 7 days ago
Can you please share your slides for this, as well as for the "Attention Is All You Need" paper, in .pptx (PowerPoint) format?
@suriyars4487 · 6 days ago
@Umar Jamil
@snehotoshbanerjee1938 · 7 days ago
Umar, you are a great teacher. I have not seen such a great explanation of the transformer. Your transformer-from-scratch coding is also awesome; you clearly understand which parts need more explanation. Thanks for your effort.
@ChukwuemekaAmblessedchinenye · 7 days ago
Can you make a tutorial video on models like Perplexity that use live web search?
@ChingyuenLiu · 7 days ago
Hello Umar, you always produce the most concise and clear content ever! I was wondering if you are planning to do any video on Stable Diffusion 3, now that the paper is out? It would be really great if you could help explain how flow matching helps or changes regular diffusion models! Thank you again for your content and work. 非常感谢! (Thank you very much!)
@ziyadmuhammad3734 · 8 days ago
Thanks!
@bensimonjoules4402 · 8 days ago
Amazing content, thanks! I'm very excited about the continual learning properties of these networks.
@agenticmark · 8 days ago
Please do a video where you show the process from scratch so we can do this with voice models ✊🏼
@wolfie6175 · 9 days ago
Good video, quality content.
@terryliu3635 · 10 days ago
I learnt a lot from following the steps in this video and creating a transformer myself, step by step!! Thank you!!
@ariouathanane · 10 days ago
Awesome explanation. Is the CLS token important just because there are no zero values with the other tokens?
@AyushRaj-nt3ot · 10 days ago
Sir, your explanation is just beyond awesome!!! Thank you so much for creating such content. Sir, I didn't get the residual connections part. As I am from India and was working on Indic languages, I had to write more code, but that's okay. I would be grateful if you could please help me understand the beam search code, the one you also gave in the GitHub files. Also, could you give the code for evaluating the BLEU score? And again, thank you so much for such comprehensive content. We'd love to see more of your videos, especially on generative AI!

P.S.: I didn't understand how you wrote it. What I've understood is that we have to take the input of the previous layer, add it to the output of the same layer, and then apply layer norm to that. Basically Add and then LayerNorm. Please help me correct myself!
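On the P.S.: the order described there (add the sublayer's output to its input, then apply LayerNorm) is the post-norm "Add & Norm" block from the original Transformer paper. A minimal NumPy sketch, with a stand-in sublayer and without the learned gain/bias, just to show the data flow:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the last (feature) dimension
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def add_and_norm(x, sublayer):
    # Residual connection: add the sublayer's output to its own input,
    # then layer-normalize the sum ("Add & Norm")
    return layer_norm(x + sublayer(x))

x = np.random.default_rng(0).normal(size=(2, 4, 8))  # (batch, seq, d_model)
out = add_and_norm(x, lambda h: 2.0 * h)             # toy sublayer
print(out.shape)  # (2, 4, 8)
```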
@freeweed4all · 10 days ago
You're really great, you explain well, and it's easy to follow you. Keep it up!
@harshitkumar5147 · 10 days ago
This is just awesome!
@codevacaphe3763 · 10 days ago
Hi, I just happened to see your video. It's really amazing; your channel is so good, with valuable information. I hope you keep this up, because I really love your content.
@andreanegreanu8750 · 11 days ago
Very clear, well explained, top notch!
@raviparihar3298 · 11 days ago
Best video I have ever seen on the whole of YouTube on the transformer model. Thank you so much sir!
@cristiwally · 11 days ago
The constant you scale x by comes from averaging over a bunch of examples generated by the VAE, in order to ensure they have unit variance, with the variance taken over all dimensions simultaneously: scale_factor = 1 / std(z).
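That recipe takes only a few lines to reproduce. A hedged sketch, with random arrays standing in for the latents z that would in practice come from the trained VAE encoder:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for latents encoded from a batch of examples; in practice
# these would come from the trained VAE encoder
z = rng.normal(loc=0.0, scale=5.0, size=(64, 4, 32, 32))

# Std taken over all dimensions simultaneously, as described above
scale_factor = 1.0 / z.std()

# Scaling by this constant gives the latents unit variance
print((z * scale_factor).std())  # 1.0 (up to float rounding)
```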
@shajidmughal3386 · 11 days ago
I came here from your VAE video. After that, should I be doing the 5-hour-long Stable Diffusion one or this one? What do you suggest?
@jerrylin2790 · 5 days ago
I watched the 5-hour one first, then came to this. Now I would say I know how to train the model, thanks to Umar.
@rafa_br34 · 11 days ago
Great video! I'm wondering: is there any reason to save the positional encoding vector? I don't see why you would need to save it, since it seems to always have the same values, given that the init parameters don't change.
@shajidmughal3386 · 11 days ago
Great explanation. Clean!!! Reminds me of school, where our physics teacher taught everything practically and it felt so simple. subs+1👍
@elieelezra2734 · 11 days ago
Good vid boss
@expectopatronum2784 · 11 days ago
23:39 -> loved that intuitive explanation!
@beincheekym8 · 11 days ago
Brilliant video! Really clear and with just the right amount of details!