Something I forgot to mention: they generated 100x more training data and filtered it to pick the 'best' results for training! One way to try and improve the data quality, I guess :)
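For anyone curious what that filtering might look like in practice, here's a rough sketch of the "generate lots, keep the best" idea. If I remember the paper right, they score candidate before/after image pairs with CLIP-based metrics (including a directional CLIP similarity between the caption change and the image change) and keep the top-scoring ones; the function and model names below are my own illustration, not the paper's actual code:

```python
# Sketch of CLIP-based filtering of generated image pairs (illustrative only).
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def image_embed(img):
    inputs = processor(images=img, return_tensors="pt")
    with torch.no_grad():
        emb = model.get_image_features(**inputs)
    return emb / emb.norm(dim=-1, keepdim=True)

def text_embed(text):
    inputs = processor(text=[text], return_tensors="pt", padding=True)
    with torch.no_grad():
        emb = model.get_text_features(**inputs)
    return emb / emb.norm(dim=-1, keepdim=True)

def directional_similarity(img_before, img_after, cap_before, cap_after):
    """Cosine similarity between the change in image space and the change in caption space."""
    d_img = image_embed(img_after) - image_embed(img_before)
    d_txt = text_embed(cap_after) - text_embed(cap_before)
    return torch.nn.functional.cosine_similarity(d_img, d_txt).item()

def keep_best(candidate_pairs, cap_before, cap_after, k=1):
    """Score every generated (before, after) pair and keep the top-k."""
    scored = sorted(candidate_pairs,
                    key=lambda p: directional_similarity(p[0], p[1], cap_before, cap_after),
                    reverse=True)
    return scored[:k]
```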
@Yenrabbit a year ago
Also, apologies about the one errant Slack notification that made it into the edited video!
@shubhchaurasia2437 a year ago
@@Yenrabbit o
@lkewis a year ago
Was hoping you'd cover this, great video!! Thanks
@adityakharbanda3290 5 months ago
Could you please explain the loss function in a bit more detail? Thanks
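Not the video author, but for what it's worth: my understanding is that it's the standard latent-diffusion noise-prediction objective, just with the encoded input image added as extra conditioning alongside the instruction text. A rough PyTorch-style sketch (`unet`, `vae_encode`, `text_encode` and `scheduler` are placeholder components I'm assuming, not names from the paper's code):

```python
import torch
import torch.nn.functional as F

def edit_diffusion_loss(unet, vae_encode, text_encode, scheduler,
                        x_edited, x_original, instruction):
    """Noise-prediction (epsilon) MSE loss with the input image as extra conditioning.
    Sketch only: the components passed in are placeholders for the usual
    latent-diffusion pieces (UNet, VAE encoder, text encoder, noise scheduler)."""
    z0 = vae_encode(x_edited)          # latent of the target (edited) image
    c_img = vae_encode(x_original)     # latent of the input image (conditioning)
    c_txt = text_encode(instruction)   # embedding of the edit instruction

    noise = torch.randn_like(z0)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (z0.shape[0],), device=z0.device)
    z_t = scheduler.add_noise(z0, noise, t)   # diffuse the *edited* image latent

    # The image conditioning is concatenated channel-wise to the noisy latent.
    unet_in = torch.cat([z_t, c_img], dim=1)
    noise_pred = unet(unet_in, t, encoder_hidden_states=c_txt)

    return F.mse_loss(noise_pred, noise)
```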
@aliteshnizi672 a year ago
Thanks for the great video! Question: at inference time, is z_t a randomly sampled vector, or is it a diffused version of the input image? Because if it's the latter, then they're passing the original image information in two ways (the initial latent and the image conditioning).
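In case it helps anyone wondering the same thing: as far as I can tell from the paper and the diffusers pipeline, the initial latent at inference is pure Gaussian noise, and the input image only enters through the conditioning channels concatenated to the UNet input at each step (plus the image guidance term), so the image information isn't passed in twice. A simplified sketch with my own placeholder names (classifier-free guidance omitted for brevity):

```python
import torch

@torch.no_grad()
def sample(unet, scheduler, vae, image_latent, text_emb, num_steps=50):
    """Sketch of the sampling loop: the latent starts as pure noise, and the
    input image only enters via `image_latent`, concatenated at every step."""
    scheduler.set_timesteps(num_steps)
    z = torch.randn_like(image_latent)   # random initial latent, NOT a noised input image
    for t in scheduler.timesteps:
        unet_in = torch.cat([z, image_latent], dim=1)   # channel-wise image conditioning
        noise_pred = unet(unet_in, t, encoder_hidden_states=text_emb)
        z = scheduler.step(noise_pred, t, z).prev_sample
    return vae.decode(z)
```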
@parthwagh3607 4 months ago
This is not working on the updated Stable Diffusion WebUI and Forge.
@imaine-qn5vv a year ago
Thanks for the cool review. I had a question while reading the paper: I think this model can not only do overall style transfer but also make localized object changes. But there's no direct hint that the model can infer where on the image to make the change, e.g. via masking or swapping word attention maps. My guess is that this localizing ability comes from the generated dataset (instructions from GPT and images from Prompt2Prompt), even though balancing the guidance levels might also have an effect. What's your opinion on this?
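On the guidance point: the paper combines two classifier-free guidance scales, one for the text instruction and one for the input image, and balancing them is what trades off "follow the instruction" against "stay close to the input image". Roughly how the combined noise estimate is formed, as I read it from the paper (variable names and the default scales below are my own assumptions):

```python
import torch

def guided_noise(unet, z_t, t, image_latent, text_emb, null_image, null_text,
                 s_img=1.5, s_txt=7.5):
    """Two-scale classifier-free guidance (sketch). `unet`, the null conditionings
    and the default scale values are placeholders, not the paper's code."""
    eps_uncond = unet(torch.cat([z_t, null_image], dim=1), t, encoder_hidden_states=null_text)
    eps_img    = unet(torch.cat([z_t, image_latent], dim=1), t, encoder_hidden_states=null_text)
    eps_full   = unet(torch.cat([z_t, image_latent], dim=1), t, encoder_hidden_states=text_emb)

    # Image guidance pulls the result toward the input image; text guidance toward the instruction.
    return (eps_uncond
            + s_img * (eps_img - eps_uncond)
            + s_txt * (eps_full - eps_img))
```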
@LIMBICNATIONARTIST a year ago
Amazing!
@kornellewychan a year ago
Love you bro, you will make me a billionaire, great fucking work!!!