I downloaded this beautiful model the day it was released, and today I'm already watching your tutorial )) Beautiful work! Thank you so much!
@KLEEBZTECH · a month ago
Thanks for watching!
@Martin-bx1et · a month ago
I really appreciate your talk-through on this.
@Nalestech · a month ago
Super helpful! Thank you!
@KLEEBZTECH · a month ago
You're welcome!
@SouthbayCreations · a month ago
Fantastic video, a lot of good solid information. The lora came out great!! Thanks for the shout out!!
@gumvue.studio · a month ago
Thanks, this is very helpful.
@KLEEBZTECH · a month ago
You're welcome! I'm not sure how useful it is, but I figure the more info people have, the better we can all learn.
A month ago
I'm no expert either, but I was thinking: maybe try to come up with a non-real trigger word, because when you write "miniature person" it triggers all the neural paths responsible for plastic miniatures. Maybe if you switch the word to "m1n1ppl" or something like that, and avoid using "miniature" in your prompt, it might pop out better results, closer to your LoRA training data. But as I said, I'm just guessing :D Edit: Oh, I'm just now at 10:00 and I see you did that :D I should comment only after watching the full video next time :D PS: You just earned a subscriber :3
@oskar4239 · a month ago
This is awesome!
@jmagick7281 · a month ago
Works well. I got realistic renders after 5 tries, using Mini-People and Faetastic-Details at a strength of 1.0. Would love to see a V2 of this cool LoRA. Thanks! Liked and subscribed.
@KLEEBZTECH · a month ago
Thanks! I do hope to improve it soon. Working on more source material to allow for more options. I keep trying different detail LoRAs but in my testing I was not fully satisfied with most. I am using my Hyper Detailed Illustration LoRA at low weight since that can seem to help.
@thewebstylist · a month ago
Wowww BIG UPS brah!! 🎉👏🏻💯
@MichauxJHyatt · a month ago
Pretty dope!
@mauricioc.almeida2482 · a month ago
Thank you very much for your Stable Diffusion tutorials. They're always great! Is it possible to install Fooocus on my local machine?
@KLEEBZTECH · a month ago
@@mauricioc.almeida2482 It's an older video, but the install method is the same: kzbin.info/www/bejne/oGK6poSkmdKafKc
@mauricioc.almeida2482 · a month ago
@@KLEEBZTECH Hello, thank you for your response. I may have asked the question incorrectly. Let me try again: is it possible to install the miniature (safetensors) model in Fooocus 2.5.5? I already have Fooocus installed locally. I tried with the current version (Fooocus 2.5.5) and the result was people with plastic doll characteristics. I'm running the checkpoint "juggernautXL_v8Rundiffusion" in Fooocus.
@KLEEBZTECH · a month ago
No. I have not released an SDXL version. I have tried and will attempt again in the near future but so far the results have always been terrible.
@CyberwizardProductions · a month ago
I would have 1) used Photoshop to scale people down and position them on objects, then used those images for the data set, and 2) used a unique trigger word that wasn't likely to be in the base model's training set.
@KLEEBZTECH · a month ago
I tried a unique trigger; it made no difference in the end. I thought about scaling down images of people, but I just didn't feel I would get them looking right. I'm no expert at that.
@KLEEBZTECH · a month ago
Plus, Flux knows what miniature people are but makes them plastic, so I figured this would just replace that look.
@alpenjager3986 · 22 days ago
Where have you gone? I haven't seen your text-to-image content for a long time.
@KLEEBZTECH · 18 days ago
New videos soon. Things outside of KZbin have been taking up my time recently.
@deadpool_gamer8948 · 7 days ago
In Fooocus, how do I use the program with the CPU?
@metanulski · a month ago
Some random tips: have ChatGPT write a Python program for VS Code that does screen captures every x seconds from a video. Tell ChatGPT that you also want to specify the start and end time, and that the filename has to include the video name and the timestamp. Then you can start with some screenshots from the full video and go into detail based on the results. No need for Resolve and going through 100,000 captures ;-)