Krita AI, UltraSpice Model Review
9:56
Comments
@DiogoVKersting 19 hours ago
That's really cool. Thank you
@jcsoto 3 days ago
Thanks, but your head is all over the controls we are supposed to see.
@Niberspace 6 days ago
Can it be used with the Digital Artworks XL style or does it have to be realistic photo?
@streamtabulous 5 days ago
Works with cartoons, art, anything; it does not have to be faces.
@Niberspace 5 days ago
@@streamtabulous cool, any chance you could show how to combine this method with the Digital Artworks XL style? I really want my own faces in them
@johnny5805 10 days ago
This was the first SD AI generator I used. And then Fooocus came out and blew it out of the water. Easy Diff is like banging two rocks together in comparison to Fooocus.
@streamtabulous 9 days ago
Have you seen my Krita videos? ED has been abandoned for a long time.
@ikvzo 10 days ago
How do I fix Krita only opening .ckpt files and not .safetensors files? Thx
@andreiirimia414 12 days ago
One question: how many repetitions did you put on this one? From the output it seems like you put one, and it came out great. I tried with 20 and mine got cooked; it also took a lot of time.
@streamtabulous 12 days ago
Yes, it's the default setting of 1 that I went with; it worked well for me. I did play with it and, like you found too, too many repetitions take too long and mess up the results. I found adding duplicate images to increase the data set was better. Sometimes I'd mirror images in the data set, which gave better results on models I was not happy with.
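The duplicate-image trick described above can be scripted instead of done by hand. Here is a minimal sketch, assuming a flat folder of training images; the `_dup` suffix, folder layout, and extension list are illustrative choices, not OneTrainer requirements:

```python
# Hypothetical helper for growing a small training data set by duplicating images.
import shutil
from pathlib import Path

def duplicate_dataset(src_dir: str, copies: int = 1) -> int:
    """Copy every image in src_dir `copies` extra times
    (img.jpg -> img_dup1.jpg, img_dup2.jpg, ...). Returns how many files were written."""
    written = 0
    src = Path(src_dir)
    for img in sorted(src.iterdir()):
        if img.suffix.lower() not in {".jpg", ".jpeg", ".png", ".webp"}:
            continue
        if "_dup" in img.stem:  # skip copies we made on an earlier run
            continue
        for n in range(1, copies + 1):
            shutil.copy(img, src / f"{img.stem}_dup{n}{img.suffix}")
            written += 1
    return written
```

The mirroring variant mentioned in the comment would need an image library; with Pillow, for example, `Image.open(path).transpose(Image.FLIP_LEFT_RIGHT)` produces the flipped copy (tooling assumption, not something shown in the video).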
@andreiirimia414 12 days ago
@streamtabulous Thank you very much for the detailed tutorial; it's the first one that really works for me, even if I burned the first model. It's a really good starting point and I am grateful that you responded. Very useful 🤗
@andreiirimia414 12 days ago
@@streamtabulous One more question, and sorry to bother: have you tried using the mask feature in OneTrainer's tools? It's supposed to train only on the area that is masked. I was wondering if that would result in a better model and if you ever tried it.
@streamtabulous 11 days ago
@@andreiirimia414 I have not used masks, but that's simply about not wanting the AI to take the background into account, and I normally mask images manually before I put them in my data set anyway. I believe masking can help when a data set has a small number of images.
@CrazyCamo 13 days ago
Most of the time I don't actually watch the videos anyway.
@patrickputz574 14 days ago
Oops, I use Daz 3D only for the characters, which I save as PNGs; then I use that layer with ControlNet in Krita.
@patrickputz574 14 days ago
Hello friend, to get exactly the right poses and many other things that are too random with an AI, I use Daz 3D; it's free and you can even control the phalanges... Have a nice day. After months of testing, Krita is for me the best AI solution.
@obezuna 18 days ago
Greetings, I have a question: I have a 2060 Super 8 GB and I want to train Pony LoRA models. Which is better to use now, Kohya_ss or OneTrainer?
@streamtabulous 16 days ago
OneTrainer, hands down; fewer issues.
@obezuna 16 days ago
@@streamtabulous Thanks
@hasanhuseyindincer5334 19 days ago
👍👍
@saravanakgopi 20 days ago
Sir, is it possible to run Krita AI with Intel's latest B580 card plus an Intel i5 14000f desktop? Or shall I go with Nvidia? Please recommend which is better for Krita AI generative images, thanks.
@streamtabulous 19 days ago
Unfortunately the open-source community uses Nvidia, and it's made to work with CUDA and tensor cores, so it works best with Nvidia; other cards have to software-emulate that using the GPU's power. So my recommendation is Nvidia, with the card having as much VRAM as possible.
@ligmasBallers 24 days ago
I have one question: does it work on MOBILE DEVICES?!?!?!
@streamtabulous 23 days ago
Laptops with Nvidia GPUs; not iPad, Android, etc. Acly was working on cloud GPU support, though, but that would be pay-to-use.
@ligmasBallers 13 days ago
@@streamtabulous dang it
@yuramisuper a month ago
In my case, the server path mentioned in the video is located in the parent folder 'krita' rather than under 'pykrita'. Because of this, even though I modified the .yaml file, the paths for the models within Krita are still incorrectly specified. There is a reply below mentioning the same issue, and since I have a similar case, I am posting this question. Could it be that this problem arises from following different YouTube tutorial videos where people mention different path specifications when installing the ai_diffusion plugin?
@patrickputz574 a month ago
Thank you for all this information, which is very helpful. I'm taking the opportunity to wish you an excellent year 2025, and if you have time, do you know how to install ControlNet for Flux in Krita? Thank you again and a very, very happy new year 2025.
@davidzachary7152 a month ago
It makes the same set of pictures with the same prompt every time.
@Dusterlog a month ago
I use Kohya, but now it takes longer than training 1.5 models did. I updated my GPU recently and now have a 4070 Ti Super with 16 GB VRAM, but training still takes a long time. In Kohya I know how to set everything up, but in OneTrainer steps and epochs are set up differently, and you don't have to set up a 100_name folder or specify it in the UI table. So can someone tell me what settings I should use? For example, I have 168 images; how many steps should I input in OneTrainer, and where?
@Erlandsson1964 a month ago
Kohya needs a newer Python now, and it is so broken it is impossible to get it to work. Incredibly enough, I had a semi-working Kohya; most presets would not work, among other things. I then tried to reinstall. I should not have done that. I have now sat for 4 whole days trying to get it working again. Now it won't complete training; it randomly stops with gibberish I don't understand. Sometimes, IF I am lucky and only train quickly, for around 15 minutes, I can get all the files. It complains about suddenly not being able to read Python files which are obviously in their correct folders, and about this and that. I'll soon give up.
@streamtabulous a month ago
I moved to OneTrainer due to Kohya and its issues: kzbin.info/www/bejne/bZKvkKaprZiXa6csi=7VTO6SKm1MKudQYR
@arkofknowledge3724 a month ago
Thank you for your tutorials. There will always be people with negative feedback, and they are often more vocal. Don't pay them any attention and just keep doing the good work.
@petertremblay3725 a month ago
Question: if I use a different Flux model like ''unet\fluxFusionV24StepsGGUFNF4_V2GGUFQ5KS.gguf'', do I have to train with this model, or am I forced to download the huge Flux dev model for it to work? Or maybe FP8 would do?
A month ago
Hi. I tried following the guide, but I still couldn't get it to work. I just wanted to add one model as a test, but I still can't see it in Krita. Even though I placed it directly into the models\checkpoints folder, it doesn't show up in the list in Krita; only the default models are there. So I tried mapping it according to your guide, but it still didn't work. What am I doing wrong? In the web UI the model is visible. Thanks!
@yuramisuper a month ago
I have a similar case 😭😭😭
@davidpiscopo9301 a month ago
I found this really hard to follow. He glossed over a number of things and made assumptions. Also, his image at the top right obscured the part of the video where he was selecting things.
@streamtabulous a month ago
I flipped the video window in newer videos; it's honestly very easy to do. I'll redo and update a new video of this method soon.
@TobagoTech a month ago
@streamtabulous, slow and steady is the way to go. I rather enjoyed the video; I was able to take the required notes while you spoke rather than always having to pause, thank you for that. I installed and ran OneTrainer based on your instructions and it worked great from the start. I sat for about 20 minutes while the epoch ran its 100 cycles with a batch size of 4, monitoring my samples folder to see my 100th image getting better at representing what I look like. After the final run I got an error that states:
Creating Backup E:/One Trainer/workspace/Run\backup\2024-12-19_16-25-29-backup-400-100-0
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
Saving E:/One Trainer/LoRA Models Made/My-LoRA.safetensors
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
As I see this as a settings-tweaking issue, I'm looking for guidance in fixing this error.
@AItekenen-k7s a month ago
This series is very useful. I was born with cerebral palsy, and drawing is difficult with spasms in the hands and fingers. I use Stable Diffusion Forge WebUI, but I'm searching for a more integrated workflow for adding text, etc.
@streamtabulous a month ago
Text is a tricky one. Flux is best, but my system can hardly handle it, and the streaming system not at all. When I'm back to doing some videos I'll show a trick of using Krita text with the AI, using that as the basis to render.
@DarthVegas1 a month ago
Thanks so much, this is the only tutorial where I can understand what's even happening 😅. Thanks a lot for keeping it updated as well. (Update) I just read about your struggles. I deeply feel you. I have debilitating anxiety, social anxiety, OCD... all of which rob me of just living life. But I keep going and hope one day I'll be able to afford the right therapy.
@francescoalicino5648 a month ago
Hi, do you do coaching? Or do you offer this kind of work on an hourly basis?
@streamtabulous a month ago
I've just always done this stuff for myself. I started the videos as people asked how, and what I meant; it was easier to show, so I started this channel. I did set up a Twitch, but felt people would get bored.
@francescoalicino5648 a month ago
@@streamtabulous OK, but do you teach how to do it?
@RayHutchins-z1k a month ago
Confused about whether I am supposed to do anything further with the prompt.txt file once I paste in the text from the command window. What directory does it need to be in to be used?
@shobley a month ago
Thanks for making this. I have had mixed results when training on images that have been manipulated by AI. I suspect that the "latent" representation of an AI-manipulated image has unusual weights in its "vectors" (not really sure about the terminology), and this results in strange-looking final output images.
@patrickputz574 a month ago
Excellent
@patrickputz574 a month ago
A very, very nice tutorial, thank you very much. Good thing you're here.
@ItsXanderDee a month ago
Dude, this guide was amazing. I'm training as I type this! I'm shook! 😂 Thank you!
@Fai2012 a month ago
So Kohya doesn't work in a virtual environment? As of now my system Python is 3.13, but I set up a virtual environment with 3.10.6; Kohya seems to ignore it, though. It would be nice if that could work.
@streamtabulous a month ago
You'd need a VM that uses the real hardware, not virtual hardware, and I'm not sure how CUDA drivers would go in a VM. I wonder if a dual-boot system with a Linux OS could be a good option.
@Rico_Roberts a month ago
Just to let you know, Euler is pronounced "Oiler", named after the Swiss mathematician and physicist Leonhard Euler. Great video btw.
@Z-Rollz a month ago
@streamtabulous Dear streamtabulous, is there a way I can use Krita AI on my wimpy laptop across town and have it connect to my hulkish dinosaur PC at home? My backup would be using a TeamViewer-style app, but COULD the laptop point to my home PC working as a server?
@streamtabulous a month ago
It can apparently do that, the same way it connects to a cloud PC. I'm just not sure how stable it is, and Acly said it was limited, so certain things don't work. The how-to will be on the Git; you should be able to ask for help there: github.com/Acly/krita-ai-diffusion
@ExarduffmanTheFirst a month ago
The higher the number of repeats for the training set, the more the NN will favour the earlier images. Lower repeats and higher epochs will average the model out across all images.
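The repeats-versus-epochs trade-off above can be put in numbers with the usual step-count arithmetic. This is a generic sketch of the common convention; individual trainers may count steps slightly differently:

```python
# Illustrative step-count formula; names are not OneTrainer's exact UI fields.
import math

def optimizer_steps(images: int, repeats: int, epochs: int, batch_size: int) -> int:
    """Each epoch shows every image `repeats` times, and `batch_size`
    images share a single optimizer step."""
    return math.ceil(images * repeats / batch_size) * epochs

# Two ways to spend the same budget on a 100-image set at batch size 4:
heavy_repeats = optimizer_steps(100, repeats=20, epochs=1, batch_size=4)   # 500
many_epochs   = optimizer_steps(100, repeats=1, epochs=20, batch_size=4)   # 500
```

Both runs cost the same 500 steps, but the second one reshuffles the full data set twenty times, which is the "averaging out" effect the comment describes.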
@testingbeta7169 a month ago
Kudos to you for keeping going despite what life threw at you. All the best from my heart, and don't mind the naysayers.
@monsmc5526 2 months ago
Hello, the pose tool does not work for me. I click the button to generate a control layer, but it does not generate the skeleton, and sometimes it generates a black layer. What is happening?
@JanSaskaMusic 2 months ago
Hello. Where will I find the "AI image generation" tool? I don't see it in the right toolbar. Thanks
@drfelipe4943 2 months ago
Adding only this command, --disable-cuda-malloc, worked for me. It works at 1024x1024 resolution with fp32, very fast. My GPU is a 1660 Super 6 GB.
@Merotryz 15 days ago
Where do you add that?
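For anyone else wondering: `--disable-cuda-malloc` is a ComfyUI launch flag, so it goes on whatever command (or .bat file) starts the ComfyUI server. A sketch of the launch line, assuming a standard ComfyUI install; the paths and the Windows .bat name come from the portable build and may differ in your setup:

```shell
# Linux/macOS: from the ComfyUI folder, with its Python environment active
python main.py --disable-cuda-malloc

# Windows portable build: edit run_nvidia_gpu.bat so its launch line ends with the flag, e.g.
# .\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --disable-cuda-malloc
```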
@TheSwordinTheWind 2 months ago
Has anything changed about this? I am trying this on a 3070 Ti (8 GB) with your config after changing to bf16, and it seems to be taking about 90 minutes per epoch for a 100-image LoRA. Most images are lower than 800p resolution, and a few are under 1500p.
@streamtabulous 2 months ago
My dad and my cat died 2 months ago, back to back, so I haven't used it since then and am unsure if it's changed. Make sure you have the CUDA toolkit installed; that makes a major difference.
@TheSwordinTheWind 2 months ago
@@streamtabulous Sorry to hear that, mate; hope you are doing better now. I do have CUDA 11.8 installed for Kohya_ss and other AI stuff. I am not sure if I am supposed to do anything different for this. I'm trying to narrow down what's causing the long times, but no luck yet, other than 100 images vs 30 images, if that could be a problem.
@streamtabulous 2 months ago
@@TheSwordinTheWind No, I have used just 100 images in around one and a half to two hours. Can't think of anything atm. What I installed: cuda_11.8.0_522.06_windows, python-3.10.6-amd64, vs_BuildTools (Visual Studio): www.mediafire.com/file/h27lrfzbqf8n07t/vb_what_to_install_.png/file
@ssduo5574 2 months ago
I have successfully trained a LoRA on my own face and the samples looked really good, but for some reason, no matter what I do in Stable Diffusion, it generates an image of some other person, not me; not even close to me, actually. I don't know what I'm doing wrong. I've tried using all sorts of trigger words found in the captions and even used the name of the concept, but it doesn't work. Can anyone help me?
@streamtabulous 2 months ago
Make sure the prompt has the trigger, e.g.:
<lora:Audrey 2>
Trigger with weight:
<lora:Audrey 2:1.2>
Other examples, including the directory to the LoRA file:
<lora:SDXL/My Loras/Audrey 2>
<lora:SDXL/My Loras/Animal_Muppet:0.7>
<lora:Animal_Muppet:0.7>
@hmmyaa7867 2 months ago
Hey, I just want to tell you that finding your channel is like finding a diamond to me. It may change my life. A lot has been going on in life; I'm 19, didn't graduate high school due to health reasons, and bla bla bla. I'm going to take a shot at this kind of field. Whatever happens, I just want to express how grateful I am for the amazing knowledge you share.
@patrickputz574 2 months ago
Excellent, lots of very interesting things; thank you very much and have a nice day. I'm subscribing...
@tomreiner8676 2 months ago
Thanks!
@999ipad 2 months ago
Hi. I want to use the same 5 prompts on 10 different pictures. How can I do that without having to repeat the same steps on each picture?
@User27j 2 months ago
How do you turn on the negative prompt in Krita? Thank you, bro
@sizlax 2 months ago
If no one has said this yet: take off your glasses when taking pictures of yourself for AI. Same if you wear hats, scarves, anything that covers the part of your body that you want the model to learn. Glasses can be added to the image during generation.
Alternatively, you could have a set of images specifically in which you wear the glasses, adjusting your head position and angle relative to the camera and varying the light settings, then add those to the dataset while specifically tagging things like lighting, position, and angle. As another alternative, you could use those images to create a second (glasses) LoRA to use with the first, and adjust the strength of that LoRA as needed. In that LoRA you would avoid tagging any hair or facial features (theorizing with this), so the two LoRAs don't get confused and the second primarily focuses on the glasses.
That's the beauty of doing a LoRA of yourself: you are your own IP, so if you're willing to put in the time, you can diversify the hell outta the dataset and have it create a perfect 'you' every time. Now, setting up the camera, if you're using a smartphone and don't want the 'selfie' pose every time, is a different challenge in itself.
Edit: sorry, it's 3am and I was only half paying attention. I went back over part of the video and realized that you already mentioned the glasses thing.
@sizlax 2 months ago
That tip about the clothing is actually pretty smart. ChatGPT, and even an online search I did, suggested that if I didn't want the model trained on the clothing and only wanted it to focus on the character, I shouldn't add the clothing tags, which made sense to me; but what you've just said actually lines up well with what I've experienced in image generation when stuff shows up that I don't want in there.