1:07:12 I think it's because we optimize against a score, and the arg_max/arg_min doesn't change when the scores are offset or scaled, e.g. optimizing x - 1 is the same as optimizing x. Skipping the sampling process is basically similar to linearization (as in the EKF) or momentum: linearly extrapolate using the result of the previous time step.
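To make both points concrete, here's a minimal Python sketch (not from the video; the score array and the states are made up) showing that argmax is invariant under a positive affine transform of the score, and what a one-step linear extrapolation from the previous time step looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=10)

# argmax is invariant under any positive affine transform of the score:
# offsetting (x - 1) or scaling (2 * x) never changes which entry wins.
assert np.argmax(scores) == np.argmax(scores - 1) == np.argmax(2 * scores)

# "skipping" a step by linear extrapolation from the previous result,
# in the spirit of momentum / EKF linearization mentioned above
x_prev, x_curr = 0.8, 0.5                  # hypothetical states at t-1 and t
x_next_guess = x_curr + (x_curr - x_prev)  # linear extrapolation to t+1
print(x_next_guess)                        # -> 0.2
```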
@coolarun3150 • a year ago
so far detailed and awesome!!!
@howardjeremyp • a year ago
Glad you think so!
@michaelmuller136 • 4 months ago
Very interesting, thank you!!
@bayesianmonk • 8 months ago
Sometimes explaining the math helps more than avoiding it, and no heavy math is involved here anyway. I found the explanation of DDIM not very clear. Thanks for the course and videos.
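For readers bridging the same gap: below is a minimal sketch of the DDIM update from Song et al. (2020), Eq. 12. The function and argument names (abar_t, abar_prev, eta) are ours, not the course notebook's; eta=0 gives the deterministic DDIM sampler, while eta=1 behaves like DDPM.

```python
import torch

def ddim_step(x_t, eps_pred, abar_t, abar_prev, eta=0.0):
    """One DDIM update (Song et al. 2020, Eq. 12).
    x_t: current noisy sample; eps_pred: the model's noise prediction;
    abar_t, abar_prev: cumulative alpha-bar (scalar tensors) at the current
    and previous (less noisy) timestep; eta=0 is deterministic DDIM."""
    # predict the clean image x0 from the noise estimate
    x0_pred = (x_t - (1 - abar_t).sqrt() * eps_pred) / abar_t.sqrt()
    # stochasticity: sigma = 0 for pure DDIM, DDPM-like when eta = 1
    sigma = eta * ((1 - abar_prev) / (1 - abar_t)).sqrt() \
                * (1 - abar_t / abar_prev).sqrt()
    # deterministic "direction pointing to x_t" term
    dir_xt = (1 - abar_prev - sigma**2).sqrt() * eps_pred
    noise = sigma * torch.randn_like(x_t) if eta > 0 else 0.0
    return abar_prev.sqrt() * x0_pred + dir_xt + noise
```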
@thehigheststateofsalad • 8 months ago
We need another session to explain this process.
@maxkirby8500 • 8 months ago
Yeah. I've been spending quite a bit of time trying to bridge the gap by reading through the papers and such, but maybe that's intended...
@satirthapaulshyam7769 • a year ago
29:59 Samples in these diffusion models are between -1 and 1.
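A minimal sketch of that scaling convention, assuming float images originally in [0, 1] (the helper names are hypothetical; the point is that inputs are rescaled to [-1, 1] to match the scale of the unit-Gaussian noise):

```python
import torch

def to_model_range(img):            # img: float tensor in [0, 1]
    return img * 2 - 1              # -> [-1, 1], the training range

def to_image_range(x):              # invert after sampling; clamp for safety
    return ((x + 1) / 2).clamp(0, 1)

x = torch.rand(3, 32, 32)           # a fake image in [0, 1]
assert to_model_range(x).min() >= -1 and to_model_range(x).max() <= 1
```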
@kettensaegenschlucker • a year ago
1:27:35 - Wondering what made you cheer, Johno ... 😂 Edit: spoiler alert, but there is a resolution shortly after
@frankchieng • 8 months ago
I thought that in the class WandBCB(MetricsCB), in def _log(self, d), the check if self.train: should be changed to if d['train'] == 'train'.
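A hedged sketch of the suggested change, assuming the miniai-style callback where _log receives a dict d whose 'train' key holds the string 'train' or 'eval'; the import path, class layout, and metric-key prefixes follow the course notebooks but aren't verified here:

```python
import wandb
from miniai.learner import MetricsCB  # assumed import path per the course repo

class WandBCB(MetricsCB):
    # Suggested fix: branch on the phase recorded in the logged dict itself
    # (d['train'] is 'train' or 'eval' in MetricsCB) rather than on
    # self.train, which may not reflect the current phase when _log runs.
    def _log(self, d):
        if d['train'] == 'train':
            wandb.log({f'train_{k}': v for k, v in d.items()})
        else:
            wandb.log({f'valid_{k}': v for k, v in d.items()})
```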