L3 Flow Models -- CS294-158-SP20 Deep Unsupervised Learning -- UC Berkeley -- Spring 2020

30,056 views

Pieter Abbeel


Comments: 27
@not_a_human_being 4 years ago
This is some great content - very easy to follow, nicely paced, not rushed, and not more complex than it has to be!
@user-or7ji5hv8y 3 years ago
These videos should have a million views. Worth more than Bitcoin.
@rj-nj3uk 1 month ago
At 42:28, that student almost stumped Pieter with that question. That was a nice question.
@rj-nj3uk 1 month ago
and Pieter gave a great explanation too.
@rj-nj3uk 1 month ago
But he still got confused by the "why a mixture of Gaussians is a valid flow" argument.
@PaxiKaksi 4 years ago
Woah. Thanks for making the lecture publicly available :3
@Michael-vs1mw 3 years ago
How do we work with Gaussian mixtures at 56:40? What does it mean for the flow for x1 to be the CDF of a Gaussian mixture? These CDFs require integration, since the Normal CDF is an integral. But here we're somehow obtaining mixture CDFs without any integration whatsoever. How does this work?
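A short aside on this question: no numerical integration is needed at evaluation time, because the Normal CDF has a standard closed-form expression in terms of the error function, and the CDF of a mixture is just the weighted sum of the component CDFs. A minimal sketch (illustrative names, not the course code):

```python
import math

def normal_cdf(x, mu, sigma):
    # Phi((x - mu) / sigma) via the error function; no quadrature involved.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def mixture_cdf(x, weights, mus, sigmas):
    # The CDF of a mixture is the convex combination of component CDFs.
    return sum(w * normal_cdf(x, m, s)
               for w, m, s in zip(weights, mus, sigmas))

# z = F(x) maps x through the mixture CDF into (0, 1).
z = mixture_cdf(0.3, weights=[0.5, 0.5], mus=[-1.0, 1.0], sigmas=[0.5, 0.5])
```

Since the mixture CDF is strictly increasing (its derivative is the mixture PDF, which is positive everywhere), it is invertible, which is what makes it usable as a flow.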
@huaijinwang6908 4 years ago
I cannot understand how neural nets are related to the Gaussian mixture model at kzbin.info/www/bejne/gHPFZqaJeJV9pbs. How can we impose f_\theta to be a Gaussian mixture CDF (maybe some assumption here with the KL divergence in the objective function, I'm not sure)?
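One plausible reading of the slide (a sketch under assumptions, not the course's released code): the mixture weights, means, and scales are learnable parameters (or outputs of a network); the per-dimension flow f_theta is the resulting mixture CDF; and training is plain maximum likelihood via change of variables, where log |df/dx| is simply the log of the mixture PDF, since the derivative of a CDF is its density. No KL term is imposed by hand.

```python
import torch

K = 5  # number of mixture components
logits = torch.zeros(K, requires_grad=True)      # pre-softmax mixture weights
mus = torch.randn(K, requires_grad=True)         # component means
log_sigmas = torch.zeros(K, requires_grad=True)  # component log-scales

def flow_and_logdet(x):
    w = torch.softmax(logits, dim=0)
    comps = torch.distributions.Normal(mus, log_sigmas.exp())
    z = (w * comps.cdf(x.unsqueeze(-1))).sum(-1)               # z = F(x) in (0, 1)
    density = (w * comps.log_prob(x.unsqueeze(-1)).exp()).sum(-1)
    return z, torch.log(density + 1e-12)                       # log |dF/dx|

x = torch.randn(128)                 # a batch of 1-D data
z, logdet = flow_and_logdet(x)
loss = -logdet.mean()                # base p_z = Uniform(0,1) has log-density 0
loss.backward()                      # gradients flow into the mixture params
```

With a uniform base distribution, maximizing this likelihood is exactly fitting the Gaussian mixture density to the data, which is why the lecture can call a mixture CDF a valid (if simple) flow.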
@AbhinavKumar-tr7ww 4 years ago
When we are manipulating in the latent space (at 1:43:04), we add the smile vector to the latent representation of the image we want to modify. Is the latent space still Euclidean, so that such addition is valid?
@shuminghu 4 years ago
I suppose by Euclidean space you were referring to a property like "P + (v + w) = (P + v) + w" (P being a point, v and w being vectors) rather than a distance metric like the L2 norm. My feeling is that, in this case, they are not asserting the space is Euclidean, but that P + (v + w) = (P + v) + w holds for w orthogonal to v, so that the order doesn't matter, i.e. they found an independent vector/direction in latent space to represent, for example, smile.
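To make the mechanics concrete, here is a minimal sketch of the usual procedure (the diagonal "flow" below is a toy invertible stand-in, not e.g. Glow's network): the attribute vector is the difference of latent means between images with and without the attribute, and editing is plain vector addition in z-space followed by the inverse flow.

```python
import numpy as np

A = np.diag([2.0, 0.5, 1.5])  # toy invertible map standing in for the flow

def encode(x):
    return x @ A.T                 # f(x) -> z; rows are samples

def decode(z):
    return z @ np.linalg.inv(A).T  # f^{-1}(z) -> x

x_smiling = np.random.randn(100, 3) + 1.0  # toy "images" with the attribute
x_neutral = np.random.randn(100, 3)        # toy "images" without it

# Attribute direction: difference of latent means over the two groups.
v_smile = encode(x_smiling).mean(axis=0) - encode(x_neutral).mean(axis=0)

x = np.random.randn(3)                        # image to edit
x_edited = decode(encode(x) + 1.5 * v_smile)  # add the smile vector, invert
```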
@saihemanthiitkgp 4 years ago
At 1:06:30, if we condition z2 on x1 instead of z1, then training can be parallelized, can't it? That way, both training and sampling can be faster.
@saihemanthiitkgp 4 years ago
Oh, then sampling will be slower, as x2 will depend on x1.
@rangugoutham7249 4 years ago
@@saihemanthiitkgp Regarding your actual question: conditioning z2 on x1 is what happens in autoregressive flow (AF) modeling, and you are right that x2 will then inherently depend on x1, making sampling slower.
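To illustrate the trade-off in this thread, here is a small sketch of a Gaussian autoregressive flow (the conditioner mu_sig is a toy stand-in, not the course code): the forward pass z = f(x) can be evaluated for all dimensions in parallel, since every z_i depends only on the observed x, while sampling is inherently sequential, since each x_i needs the previously generated x_{<i}.

```python
import numpy as np

def mu_sig(x_prev):
    # Toy conditioner: shift and scale for dimension i given x_{<i}.
    # A real model uses a masked neural net (e.g. MADE) here.
    return np.tanh(x_prev).sum(), 1.0 + 0.5 * abs(np.sin(x_prev.sum()))

def forward(x):
    # z_i = (x_i - mu(x_{<i})) / sig(x_{<i}); every term uses only the
    # given x, so in practice all i can be computed in parallel.
    z = np.empty_like(x)
    for i in range(len(x)):
        mu, sig = mu_sig(x[:i])
        z[i] = (x[i] - mu) / sig
    return z

def sample(z):
    # x_i = mu(x_{<i}) + sig(x_{<i}) * z_i; inherently sequential,
    # because x_{<i} must already have been generated.
    x = np.empty_like(z)
    for i in range(len(z)):
        mu, sig = mu_sig(x[:i])
        x[i] = mu + sig * z[i]
    return x

z = np.random.randn(5)
x = sample(z)
assert np.allclose(forward(x), z)  # forward exactly inverts sampling
```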
@shuminghu 4 years ago
For a 2D autoregressive flow, z2 is conditioned on x1. Doesn't that mean dz2/dx1 should also be included in the loss?
@shuminghu 4 years ago
Ah, it's not needed. dz1/dx2 is 0, so only dz1/dx1 and dz2/dx2 survive when evaluating the determinant of the Jacobian.
@shuminghu 4 years ago
This also generalizes to N-D autoregressive flows since the Jacobian is a lower triangular matrix.
@SimonSlangen 3 years ago
Was just wondering this while watching the video, so thanks for being one of the rare people who come back to answer their own question.
@SimonSlangen 3 years ago
On second thought, showing equivalence to the determinant formula just shows that the decomposition formula likely wasn't in error. But it doesn't answer the question of why we don't need a |dz2/dx1| term. -- I believe it's by construction: f2(x2; x1) != f2(x2, x1), i.e. f2 is parametrized by x1. Another way to write it would be f2_{x1}(x2).
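For completeness, the 2-D bookkeeping written out (standard change-of-variables algebra, consistent with the answers above): with z1 = f1(x1) and z2 = f2(x2; x1), the Jacobian is lower triangular, so dz2/dx1 does appear in the matrix but drops out of its determinant.

```latex
% Jacobian of the 2-D autoregressive flow z_1 = f_1(x_1), z_2 = f_2(x_2; x_1):
J = \begin{pmatrix}
      \dfrac{\partial z_1}{\partial x_1} & 0 \\[2mm]
      \dfrac{\partial z_2}{\partial x_1} & \dfrac{\partial z_2}{\partial x_2}
    \end{pmatrix},
\qquad
\det J = \frac{\partial z_1}{\partial x_1} \cdot \frac{\partial z_2}{\partial x_2}.
```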
@leoguti85 4 years ago
Thanks for this excellent material. How would the flow model work in the case where we have data with mixed-type variables, i.e., continuous + discrete variables? Should we have a different invertible transformation for each type of variable? Thank you!
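One common answer for the discrete part (a standard trick from the flow literature, offered as a hedged sketch rather than a thread-confirmed answer) is uniform dequantization: add Uniform(0,1) noise to each discrete variable so the joint data becomes continuous, then fit a single flow to the concatenation of the dequantized and continuous columns.

```python
import numpy as np

def dequantize(x_discrete, rng):
    # x_discrete holds integers (counts, pixel values, ordinal categories);
    # adding Uniform(0,1) noise spreads each integer over a unit interval,
    # making the marginal continuous and flow-friendly.
    return x_discrete + rng.uniform(0.0, 1.0, size=x_discrete.shape)

rng = np.random.default_rng(0)
x_disc = rng.integers(0, 256, size=(4, 3)).astype(np.float64)
x_cont = rng.normal(size=(4, 2))

# One flow over the concatenated, now fully continuous, feature vector.
x = np.concatenate([dequantize(x_disc, rng), x_cont], axis=1)
```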
@sinancalsr726 4 years ago
Hi, thanks a lot for sharing the course and the materials. There is a jump in the video around 1:09:22. I was okay until that point, but it became harder to follow after that. I especially couldn't understand the volume part.
@Shottedsheriff 4 years ago
I guess it was just before the pizza break (take a look at the slides from this lecture; after the jump it is just the next slide). However, the content might be more challenging.
@GenerativeDiffusionModel_AI_ML 2 years ago
Is it log(exp(sigma))?
@bender2752 4 years ago
So a flow is one kind of latent variable model, right? Since it has a Gaussian (or other) distribution-based latent space.
@shuminghu 4 years ago
People usually refer to latent variable models as the ones whose latent space has fewer dimensions than the data. A flow's latent space has the same dimension as the data. Lecture 4 has more details.
@LucaPuggini 4 years ago
Is there any reference paper for flow models?
@tingchen6586 4 years ago
The last page of the lecture slides: docs.google.com/presentation/d/1WqEy-b8x-PhvXB_IeA6EoOfSTuhfgUYDVXlYP8Jh_n0/edit#slide=id.g4f883b39d8_2_34