What are Diffusion Models?

196,986 views

Ari Seff

1 day ago

This short tutorial covers the basics of diffusion models, a simple yet expressive approach to generative modeling. They've been behind a recent string of impressive results, including OpenAI's DALL-E 2, Google's Imagen, and Stable Diffusion.
Errata:
At 12:39, parentheses are missing around the difference: \epsilon(x, t, y) - \epsilon(x, t, \emptyset). See i.imgur.com/PhUxugm.png for the corrected version.
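For reference, the corrected classifier-free guidance combination can be sketched in a few lines of NumPy (variable names are illustrative stand-ins for the conditional and unconditional network outputs):

```python
import numpy as np

def cfg_epsilon(eps_cond, eps_uncond, s):
    """Classifier-free guidance: move the unconditional noise prediction
    toward the conditional one, scaled by guidance weight s.
    Note the parentheses around the difference (the point of the errata)."""
    return eps_uncond + s * (eps_cond - eps_uncond)

# Toy example with stand-in predictions
eps_uncond = np.zeros(4)
eps_cond = np.ones(4)
print(cfg_epsilon(eps_cond, eps_uncond, 2.0))  # [2. 2. 2. 2.]
```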
Timestamps:
0:00 - Intro
1:07 - Forward process
3:07 - Posterior of forward process
4:16 - Reverse process
5:34 - Variational lower bound
9:26 - Reduced variance objective
10:27 - Reverse step implementation
11:38 - Conditional generation
13:45 - Comparison with other deep generative models
14:34 - Connection to score matching models
Special thanks to Jonathan Ho and Elmira Amirloo for feedback on this video.
Papers:
Feller, 1949: On the Theory of Stochastic Processes, with Particular Reference to Applications (digitalassets.lib.berkeley.ed...)
Sohl-Dickstein et al., 2015: Deep Unsupervised Learning using Nonequilibrium Thermodynamics (arxiv.org/abs/1503.03585)
Ho et al., 2020: Denoising Diffusion Probabilistic Models (arxiv.org/abs/2006.11239)
Song & Ermon, 2019: Generative Modeling by Estimating Gradients of the Data Distribution (arxiv.org/abs/1907.05600)
Dhariwal & Nichol, 2021: Diffusion Models Beat GANs on Image Synthesis (arxiv.org/abs/2105.05233)
Nichol et al., 2021: GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models (arxiv.org/abs/2112.10741)
Saharia et al., 2021: Palette: Image-to-Image Diffusion Models (arxiv.org/abs/2111.05826)
Ramesh et al., 2022: Hierarchical Text-Conditional Image Generation with CLIP Latents (arxiv.org/abs/2204.06125)
Saharia et al., 2022: Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding (arxiv.org/abs/2205.11487)
Song et al., 2021: Denoising Diffusion Implicit Models (arxiv.org/abs/2010.02502)
Nichol & Dhariwal, 2021: Improved Denoising Diffusion Probabilistic Models (arxiv.org/abs/2102.09672)
Kingma et al., 2021: Variational Diffusion Models (arxiv.org/abs/2107.00630)
Song et al., 2021: Score-Based Generative Modeling through Stochastic Differential Equations (arxiv.org/abs/2011.13456)
Links:
YouTube: / ariseffai
Twitter: / ari_seff
Homepage: www.ariseff.com
If you'd like to help support the channel (completely optional), you can donate a cup of coffee via the following:
Venmo: venmo.com/ariseff
PayPal: www.paypal.me/ariseff

Comments: 112
@user-zd9cz1lq6b · 1 year ago
Thank you for not shying away from the math. That level of detail is very much needed for us in industry.
@Raven-bi3xn · 1 year ago
Yours is one of the most top-notch channels out there. Explanation, clarity, REFERENCING, graphics: everything is 10 out of 10! Love it!
@AZTECMAN · 1 year ago
I've watched this video more than once... I learn a little more each time. Great work, Ari!
@PrajwalSingh15 · 2 years ago
Thank you for creating a video on this topic. I was wandering the web for a brief explanation of it.
@Mutual_Information · 2 years ago
Very cool. I've been meaning to learn about diffusion models. All I knew was that they 1) have been beating GANs recently and 2) do that noise-to-image mapping. Very interesting to see the details here. Exciting stuff. Also, glad to see you're back making content :)
@bbcailiang7 · 1 year ago
Love your channel, extremely high quality videos! One of the best I have ever seen.
@lucidraisin · 2 years ago
Thank you Ari! Your whole channel is gold 🙏
@sunhongfu · 8 months ago
Explained extremely clearly and succinctly! I'm sure you put a lot of effort into making the video. THANKS!
@KastanDay · 2 years ago
Thank you for the enormous amount of effort you put into research communication. You're a real Grant Sanderson in the making ;)
@tianzhh6699 · 1 year ago
High-quality video! It helped me a lot to better understand diffusion models. Thanks!
@vincentaxm8322 · 1 year ago
The quality is insane!!!!!! Thank you very much!
@HH-mf8qz · 5 months ago
This is an amazing channel: very instructive, well structured, and easy to understand. Instant sub, and I hope you make more videos over the coming months and years.
@mikhaeldito · 2 years ago
Best explanation of diffusion models. Thanks!
@EvolvedSungod · 7 months ago
Thank you. You are the only page I've found that makes any attempt at explaining this aspect of generative AI. The part about starting with random noise, making a change, and comparing to the example db/training data is something all the pissed off artists don't seem to understand.
@bingbingsun6304 · 2 years ago
Subscribed. Very good introduction to diffusion models. I am looking forward to more AI introductions from you. Thank you!
@ChocolateMilkCultLeader · 1 year ago
The quality of high-level ML content on YouTube these days is insane. And you're right up there with the best, my friend.
@RaghavendraK458 · 1 year ago
Thanks for the informative video. Please continue making such videos
@valentinakaramazova1007 · 3 months ago
Extremely good video, not shying away from the math. More like this is needed.
@JoxTeTV · 1 year ago
Thank you a lot for this video, very good introduction to the subject.
@oualidzari2176 · 5 months ago
High-quality video! Thank you!
@fugufish247 · 1 month ago
Fantastic explanation!
@anonymous102592 · 10 months ago
Thanks man, wonderful explanation
@zeio-nara · 1 year ago
Excellent explanation, thanks
@CEBMANURBHAVARYA · 1 month ago
Nicely explained, thanks
@videowatching9576 · 2 years ago
Fascinating. I enjoyed it at 0.5x speed. I don't really understand the math, but I appreciate going through this explanation. Maybe make a version of this that walks through each step in even more detail, or with visuals that explain the diffusion math without assuming knowledge of the notation? The objective being: explaining how diffusion works, more broadly why it works, and even more broadly what that means for what else might be built with diffusion, for example using diffusion to generate videos.
@Starkl3t · 1 year ago
I love how you had to watch this at half speed
@GS-tk1hk · 1 year ago
Explaining the math in depth to someone new to the subject would take an entire university-level course in Bayesian probability and statistics. It's probably a better idea to brush up on the relevant concepts first and then return to the video when ready; there's already a shortage of high-quality videos on advanced ML that assume some prerequisite knowledge.
@onkz · 10 months ago
@GS-tk1hk Hi. This is the kind of comment that makes me aware of the current situation surrounding ML and related subjects. Thanks for the information. It appears that there's no easy route to learning about the inner workings of modern/popular ML paradigms; the current set of YouTube content is either hyper-simple (basic info, "hey this is cool" type stuff, GitHub demos, small examples, easy UIs/frontends for advanced models, etc.) or there's content like this video with 100% in-depth information, mathematical detail, graphs, explanation of the processes behind the scenes, etc... I'm a CompSci grad who's worked in software dev for a decade. I love the work I've done in a dozen languages over time and understand most coding paradigms and statistics. However, I never really got into the mathematical notation side of my hobbies. If I was writing an algorithmic piece in Python or Java, it would not have any real mathematical documentation associated with it. Videos like this make me realize that there's a whole host of computer scientists who genuinely keep the "scientist" denomination alive. I just wish there was more content available for people who are new to ML but have absolutely zero experience of this level of mathematics outside of college-level algebra.
@muhammadosamanusrat6748 · 5 months ago
Agreed. I am also a graduate, but I don't understand the math behind it.
@Ssiil · 2 years ago
Thank you for the kind explanations
@Galinator9000 · 1 year ago
Great explanation, thanks!
@mfrank1844 · 1 year ago
Thanks for this video, Ari. I will admit the math is a little bit beyond me, but I'm slowly understanding various aspects of this process. One thing I've been trying to wrap my mind around: is it fair to say that there is some position in latent space that represents the solution from a particular piece of training data? I.e., are there 3 discrete solutions to the guidance of "arctic fox" stored somewhere? Or, in the conditional setting, is "arctic fox" constantly getting pushed/pulled on by the inference of more and more "arctic fox"-labelled data (therefore never having a true latent-space representation of a single image)?
@jayseb · 1 year ago
Fantastic video. Thank you.
@dklvch · 1 year ago
Great stuff. Subscribed.
@Ayanamiame · 1 year ago
You helped me tremendously with my grant proposal which includes a part of using conditional diffusion for image translation. Thank you so much 😁
@CodeEmporium · 1 year ago
This is really nice! Thanks so much for sharing! I'll be hopping onto this topic soon too!
@monsieur910 · 2 years ago
Wow your videos are great!
@chrislloyd1734 · 1 year ago
How can a model that is only 3.2 GB produce an almost infinite number of image combinations from just a simple text prompt, with so many language variables? What I am interested in is how a prompt of, say, "a monkey riding a bicycle" can produce something that visually represents the prompt. How are the training images tagged and categorized to do this? As creative people we often say that an idea is still misty and not formed yet. What strikes me about this diffusion process is the similarity to how our minds seem to work at a creative level. We iterate and de-noise the concept until it becomes concrete, using a combination of imagination and logic. It is the same process that you described to arrive at the finished formula. What also strikes me about the images produced by these diffusion algorithms is that they look so creative and imaginative. Even artists are shocked when they see them for the first time and realize a machine made them. My line of thinking here is that we use two main tools to acquire and simulate knowledge and experience: images and language. Maybe this input is then stored in a similar way to a diffusion model within our memory. Logic, creativity and ideas are just a consequence of reconstituting this data according to our current social or environmental needs. This could explain our thinking process and why our memory is of such low resolution. The de-noising process could also explain many human conditions such as depression, and even why we dream, etc. This brings up the interesting question: could a diffusion model be created to simulate a human personality? Or provide new speed-think concepts and formulas for solving a multitude of complex problems, for that matter? The path would be: 1) the diffusion model produces an idea/concept, 2) ask a model like GPT-3 to check whether it works, 3) feed back to the diffusion model and keep iterating until it does, in much the same way as de-noising a picture. Just a thought from a diffusion brain.
@Veptis · 1 year ago
I looked for a video that explains the concept well, and this video did help me understand it better. For large transformer language models, people are "probing" the layers and activations to try to understand how the model works and what its latent weights mean, essentially trying to find signs of intelligence by looking for linguistic concepts. Now I am wondering how probing such a diffusion model could work, and whether it will be possible to extract and maybe even inject intermediate representations of the many hidden layers. And whether it shows any parallels to, for example, computer rendering, drawing and painting.
@vinhkhuc1565 · 2 years ago
Thanks for the great tutorial! It captures most of the recent papers on diffusion models. Looks like there is a typo at 12:39 in the formula for classifier-free diffusion guidance: on the right-hand side, there should be an open bracket after the guidance scale "s" and a closing bracket at the end.
@ariseffai · 2 years ago
Thanks! This is now under errata in the description.
@Jaeoh.woof765 · 7 months ago
As a physicist, it's fascinating that physics concepts are used in deep learning.
@JerryChi · 2 years ago
Great video! Is there a better way to learn diffusion models other than just reading all the linked papers from top to bottom?
@miguelcampos867 · 5 months ago
Great video!
@khaledsakkaamini4743 · 5 months ago
Great video, thank you Ari
@NaudVanDalen · 8 months ago
I'd never in a billion years come up with this.
@welcomeaioverlords · 1 year ago
Very nice!
@dd884e5d8a · 2 years ago
This is good!
@mahab944 · 1 year ago
Great Video! It's sad I can press the like button only once for this video.
@dingran · 1 year ago
Really great content! How did you make the animations?
@cipcapcopcup · 1 year ago
Kudos ❤
@PokeballmasterInc · 2 years ago
Thanks for the great video, Ari. Do you have any course recommendations to better understand the basics behind Diffusion Models? I would love to learn more about them.
@KarmCraft · 1 year ago
Watched it twice but the math is still above me; fell asleep very nicely though 👍 The style from 3Blue1Brown is very appropriate here. Maybe revisit the topic with longer explanations, better derivations, and higher production quality once you're bigger? Thanks for the hard work you put into these.
@sunhongfu · 8 months ago
Thanks!
@jayurbain · 1 year ago
Excellent. Why compute the loss on the noise rather than the Gaussian? Thanks.
@digitalghosts4599 · 1 year ago
Thank you so much for this wonderful explanation! It takes true skill and dedication to be able to explain complicated formulas with such organic fluency that they become intuitively understandable. You saved me a lot of digging with this video! Do you have a Patreon? I wish I could support you somehow!
@ariseffai · 1 year ago
Thank you for the very generous comment! Glad you enjoyed it. If you'd like, you can buy me a coffee via the "Thanks" button under the video :)
@stormzrift4575 · 11 days ago
Amazing
@koustubhavachat · 2 years ago
This channel is a hidden gem
@kunalsamanta4353 · 1 year ago
Can you recommend a resource (in statistics/probability) to get to grips with the intricate mathematical details of the video?
@iFloxy · 1 year ago
Also curious!
@blasttrash · 1 year ago
2:23 What does that capital N function mean? Does it mean we are pulling x_t from a Gaussian distribution whose mean is given by the 2nd thingy after the semicolon and whose variance is given by the last thingy before the ending bracket?
@5ty717 · 11 months ago
Very good
@emmanuelkoupoh7979 · 2 years ago
Thank you for your explanations. At 02:00, what does the I represent? (The I in Beta_t * I)
@ariseffai · 2 years ago
That's the identity matrix :) So when we multiply it by Beta_t, we get a square matrix with Beta_t at each diagonal element and zeros elsewhere.
@emmanuelkoupoh7979 · 2 years ago
@@ariseffai Thanks
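As a small illustrative sketch of the point above (toy values, stand-in variable names): the covariance β_t·I just means independent noise of equal variance β_t added to each dimension.

```python
import numpy as np

beta_t = 0.02          # noise variance at step t (toy value)
x_prev = np.zeros(3)   # stand-in for a 3-pixel image x_{t-1}

# Covariance beta_t * I: a diagonal matrix with beta_t on each diagonal
# element and zeros elsewhere
cov = beta_t * np.eye(3)

# One forward diffusion step: x_t ~ N(sqrt(1 - beta_t) * x_{t-1}, beta_t * I)
rng = np.random.default_rng(0)
x_t = rng.multivariate_normal(np.sqrt(1 - beta_t) * x_prev, cov)
print(cov)  # beta_t on the diagonal, zeros off-diagonal
```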
@CommanderCraft98 · 1 year ago
At 3:55 you say that choosing a small beta such that x_t is close to x_{t-1} justifies using a unimodal Gaussian for the "posterior of the forward process q(x_{t-1} | x_t)". This is the same as saying "for the transition probability of the reverse process p_theta(x_{t-1} | x_t)", right? I am mostly confused because the forward process itself is referred to as the posterior in the paper, but you are talking about the posterior OF the forward process, right? Please correct me if I am speaking nonsense. Awesome video, thanks!
@pastuh · 1 year ago
You listen even if you can't understand... It's like in music: you miss some notes, but it still sounds good :))
@gvyyuy · 1 year ago
I lost you right after the image of the dog 😂
@yashashgaurav09 · 1 year ago
At 8:29 I could not get the step when I tried doing it by hand. Any place that can explain that to me? I wish I could paste pictures here 😅
@tomkent4656 · 1 year ago
My brain is hurting!
@frankvandermeulen4415 · 1 year ago
At 9:32, the expectation E_q is only over the 3rd term, right?
@nevokrien95 · 1 year ago
The ELBO is taken from BNNs, and I believe the name is kind of misleading. It's basically the log of Bayes' law, where we ignore the part that depends on the probability of the evidence. That part is not affected by the model, so it doesn't affect the derivatives.
@brandonakey6616 · 7 months ago
Uhm, do you have an ELI5 version? XD
@enes_duran · 4 months ago
Great work, really. There is a mistake in the equation on the rightmost side at 12:46.
@Bangy · 1 year ago
Thanks. Now I know how to debate anti diffusion luddites.
@rubenguerrero393 · 1 year ago
Does anyone know where to find the links to the papers he refers to in the tutorial?
@nz69 · 1 year ago
It's in the description.
@TanTan-ey5eu · 2 years ago
Can someone tell me why, at 8:29, p_theta(x_0|x_{1:T}) * p_theta(x_{1:T}) = p_theta(x_{0:T})?
@DrumsBah · 1 year ago
In case you didn't figure this out: the Markov assumption and the conditional independence of each step of p_theta mean that p_theta(x_{0:T}) can be decomposed into the product of the per-step terms p_theta(x_i | x_{i+1:T}).
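Spelled out, the identity in question is just the product rule, which the Markov structure of the reverse process then simplifies into per-step factors:

```latex
p_\theta(x_{0:T}) = p_\theta(x_0 \mid x_{1:T})\, p_\theta(x_{1:T})
                  = p(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t)
```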
@1PercentPure · 5 months ago
king
@Tanaka-6850 · 1 year ago
OK, I don't understand any of those mathematical formulas... but I want to understand them. Where should I start?
@DrumsBah · 1 year ago
If you have familiarity with probability and calculus, read up on variational inference and Langevin dynamics. These are the building block concepts of the formulas in this video
@HelloWorlds__JTS · 2 years ago
Incredibly insightful explanation here, but mostly useful for those who already know what you mean. I think you'll need to be a bit more careful about how you word things before you'll find a broader audience of ambitious newbie learners. Great work!
@whimahwhe · 1 year ago
I lack the technical mathematical skills to understand this explanation. What field of maths should I get into in order to understand it?
@DrumsBah · 1 year ago
First probability and calculus, then variational inference and Langevin dynamics
@whimahwhe · 1 year ago
@@DrumsBah thanks
@malekibrahim7697 · 2 years ago
Excellent video. Could someone please help me understand what is meant by the "posterior of the forward process"? From Google, I see "posterior" means later in time. But the question is posed as q(x_{t-1} | x_t) = ? (3:23). In my eyes, this equation is saying "the forward process of the sample at a previous step given that the next step is unknown". Is this interpretation correct? And if so, how is this task related to finding the posterior? I'm sorry, I've been trying to wrap my head around this for several hours now and have not been able to figure out what a posterior distribution means, or even what is meant by "distribution". A distribution of what? Pixel values? Measurements? What are the x and y axes shown at 3:23? Any help on this would be greatly appreciated.
@HelloWorlds__JTS · 2 years ago
Yeah, it seems that was worded a bit carelessly. The forward distribution is q, which describes how the noise is added in the forward direction. But in the context of your question about how this relates to a posterior, it's the reverse path that is relevant: it's the posterior of q with respect to the reverse path. And "distribution" is a standard term for probability, as in probability distribution.
@anikatabassum8689 · 2 years ago
Nice tutorial. It would be great if you could also add links to the papers mentioned in the tutorial.
@quadhd1121 · 9 months ago
Can u drop some new video ?
@kabokbl2412 · 1 year ago
To think that I want to be a data analyst... it's slightly scary watching the math holding the world together.
@stylishskater92 · 1 year ago
Am I stupid for still not fully understanding this beyond the basic concept of how noise is used? The video is very well produced, and I think it's great teaching content, but maybe I lack the mathematical foundation to follow all the explanations here.
@zahar1875 · 1 year ago
I didn't get anything, but it sounds smart.
@pradkadambi · 2 years ago
How am I supposed to read the semicolon notation in the normal distribution's mean parameter (reverse process section)? I take this to mean that x_{t-1} is parameterized by the \mu_\theta function acting on x_t, given knowledge of the time step. In that case, why isn't the same notation applied to the covariance (i.e. \Sigma_{t-1} ; \gamma_\theta(x_t, t), where \gamma is some function)?
@Piineapple. · 1 year ago
Quite hard to understand; some notation isn't defined (like I, what theta is, why there is a sum over tuples on theta while theta doesn't appear in x_t and t, how the Gaussian function at the beginning is defined). This math is quite different from classical deep learning approaches, so I am a bit lost.
@ravishanker8539 · 1 year ago
Excellent explanation. Email to reach out.
@Sciencehub-oq5go · 9 months ago
I still don't understand how this enables models to draw something like a cow wrapped in spaghetti with such high fidelity. What component in this process incorporates the world knowledge of how to creatively combine such concepts and draw them photorealistically?
@gurusystem · 1 year ago
It would be nice to explain the math notation a little before diving into the formulae.
@NoahElRhandour · 2 years ago
Only 5k subscribers is far too few!!!
@lord_of_mysteries · 4 months ago
It uses a diffusion model... search it up, and you'll then realize why it can't inspect images.
@user-xh9pu2wj6b · 1 month ago
what are you even talking about?
@computing218 · 1 year ago
Do people not find these probability symbols annoying to read? The actual computation flow is so much more intuitive for people to understand; these simple ideas don't have to be explained in such long, abstract, ugly formulas.
@taseronify · 1 year ago
The problem with these "explanation" videos is that they don't explain WHY WE ADD NOISE TO A COHERENT IMAGE. What is our goal? And why do we REVERSE IT? What do we achieve by this?
@Chex_Mex · 1 year ago
The reverse part is the part we actually want: we're training the AI to get an image out of noise. We want to be able to generate images out of nothing, basically. But we have to start from somewhere, so as part of the training process we gradually turn actual images into noise and then train the AI on how well it can get back to the original. For applications, you can see the AI-generated art that's out there right now, with DALL-E 2 and Midjourney as some examples.
@xiaohanma2584 · 10 months ago
The expression "add noise" is only there to make the reverse logic easier to compare to. In reality, noise is not added by humans, or at least not intentionally. Think of an old photo from the 1900s in which you can't see people's faces clearly, or security footage where you can't tell who is who due to the lack of light or low-quality cameras. Those are noises. To reverse is to make images appear clearer (although not necessarily more accurate) when the originals are not. That is the achievement, and something people working in imagery-related professions have always wanted. This is only the most basic idea though; diffusion can achieve much more than just enhancing photos.
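The training recipe described in the replies above (noise a clean image with the closed-form forward process, then regress the added noise) can be sketched in NumPy. This is a hedged toy sketch, not the actual DDPM implementation: `model` is a stand-in for a trained noise-prediction network, and the schedule values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)     # illustrative linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)    # cumulative product \bar{alpha}_t

def model(x_t, t):
    # Stand-in for a trained noise-prediction network epsilon_theta(x_t, t)
    return np.zeros_like(x_t)

def training_loss(x0):
    t = rng.integers(T)                        # sample a random diffusion step
    eps = rng.standard_normal(x0.shape)        # the noise to be predicted
    # Closed-form forward process: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps
    # Simple reduced-variance objective: mean squared error on the noise
    return np.mean((model(x_t, t) - eps) ** 2)

loss = training_loss(np.zeros((8, 8)))
print(loss)  # near 1.0 for the zero "model", since eps has unit variance
```

A real implementation would backpropagate this loss through a neural network; here the zero model just shows the shape of the objective.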
@Aisaaax · 7 months ago
I've got to say, this video loses me immediately as soon as you get into formulaic notation. And I am a driver software dev who does math regularly. I just don't think this video is well explained unless you are already very good at math. 😮
@seahammer303 · 3 months ago
As a beginner I understood nothing. Change the title, because these are not the basics.
@Whit3Whisk3y · 11 months ago
This video is a great summary. I absolutely recommend reading this paper before/after to better understand the math and intuitions behind it: arxiv.org/pdf/2208.11970.pdf
@RafiKamal · 1 year ago
Thanks!
@ariseffai · 1 year ago
Thanks Rafi!