5 EASY WAYS TO MAKE BETTER AI ART | Stable Diffusion 2023

  26,771 views

Binks

A year ago

A lot of people are struggling with generating AI art to their liking on a local machine, not Midjourney or DALL-E. Here are 5 easy ways to improve your workflow and start creating better images this year. If you liked this video, don't forget to like and sub!
Discord: / discord
Protogen x3.4 Photorealism: civitai.com/models/3666/proto...

Comments: 50
@etp7393 · A year ago
The quality of the video is great, definitely trying AI art now!!!
@randomvideosoninternet8954 · 10 months ago
This video is gold; in years to come it will be one of the best videos out there and become a classic.
@jahonky5573 · A year ago
I appreciate the continuous uploads!
@binks_live · A year ago
thanks so much jihad (you're clearly very good at valorant from this comment I can just tell)
@chaerazard · A year ago
Thank you for the tips, I am very inspired by your work
@arbrian683 · A year ago
Thank you so much for the information
@morphman86 · A year ago
For the iteration process, I've found that the revamped loopback script is quite handy. Instead of batching, you can run iterative prompt changes, generate a few dozen (or a few hundred) images, and pick your favourite from those to iterate again. I found this gives much faster results than batching 2-4 images, changing the prompt, scale, or denoising, then running again 10-20 times. Just don't forget to tick the box to save the prompt. You don't want to lose a prompt set you really like just because you tinkered with one or two values for the next iteration.
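The save-the-prompt-with-every-iteration habit above can be sketched as a tiny helper. This is purely illustrative: `save_iteration` and its field names are hypothetical, not part of the actual loopback script, which has its own checkbox for this.

```python
import json
from pathlib import Path

def save_iteration(out_dir, step, prompt, params):
    """Record the prompt and settings used for one loopback iteration
    in a sidecar JSON file next to the (hypothetical) image output."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    meta = {"step": step, "prompt": prompt, **params}
    sidecar = out_dir / f"iter_{step:03d}.json"
    sidecar.write_text(json.dumps(meta, indent=2))
    return sidecar

# Example: log three iterations with a tweaked denoising strength each time,
# so a good prompt set is never lost between runs.
for i, denoise in enumerate([0.6, 0.5, 0.4]):
    save_iteration("loopback_run", i, "portrait, ornate crown",
                   {"denoising_strength": denoise, "cfg_scale": 7})
```

The point of the sidecar file is that each image keeps its exact generation settings, so any iteration can be reproduced later.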
@AI_Generated_21 · A year ago
Very useful video! Great job! Subscribed!
@binks_live · A year ago
Glad it was helpful! Thanks so much for the sub!
@swannschilling474 · A year ago
Turning sampling steps up also always helps when doing img2img, and playing with embeddings and hypernetworks is a great way to get different styles!
@binks_live · A year ago
I have a video in the works on hypernetworks, if you have any tips let me know!
@swannschilling474 · A year ago
@@binks_live I realized that the right hypernetwork helps a lot to get your faces right, and it's a lot faster than the Restore Faces option... 😊
@addisonavery_ · A year ago
In this sea of hype around the new era of AI, it's refreshing to find a channel with no nonsense and clear instructions. Thank you! I'm working on a graphic novel, and sometimes I add a general prompt and it repeatedly generates a very similar character, even with the seed set to -1. Now, this isn't necessarily bad, as I'd like to use the same character throughout, but I'd also like to explore my options before it "locks" in. Is this a bug with the SD WebUI? Also, I've noticed that once I open around 4-5 instances of the WebUI (not running simultaneously) I begin to see a degradation of quality; specifically, there seems to be a faint orange-peel-like texture over my images. Is this normal? Sorry for the long message.
@binks_live · A year ago
Hey Addison! Sorry I missed this comment, but hopefully this can still be useful. The 'seed' variable has the most impact from generation to generation, but some models can be predisposed to produce similar-looking faces. Unfortunately, this can only be avoided by using a different model with more training data or by training the model further yourself. As for the degradation of quality, if you're running multiple instances at a time, they could be using up a lot of your available VRAM, or you could be right about it being a bug within WebUI. I'm looking into a better solution and considering developing my own if I can't find one I'm a big fan of. If you have any questions, feel free to join my Discord! discord.gg/JvcYXZr86q
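For reference, the seed convention discussed above can be sketched like this. It is a minimal illustration of the "-1 means random" behavior; `resolve_seed` is a hypothetical helper, not actual WebUI code.

```python
import random

def resolve_seed(seed: int) -> int:
    """Mimic the WebUI convention: -1 means 'pick a fresh random seed'
    for every generation, while any other value is used as-is so the
    same settings reproduce the same image."""
    if seed == -1:
        return random.randrange(2**32)
    return seed

print(resolve_seed(1234))  # a fixed seed passes through unchanged
print(resolve_seed(-1))    # a fresh random 32-bit seed each call
```

This is why similar faces can still appear with seed -1: the seed changes, but the model's learned biases do not.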
@nackedgrils9302 · A year ago
I've also found that using SD a lot sometimes gives me poorer and poorer results, and I've also encountered the xformers bug that makes every output image look scrambled. The solution I've found was to reinstall it (I've already reinstalled it twice in a single week). I've also noticed that using img2img and re-running the output again messes up the colours if I'm using a model that doesn't have a .vae file.
@thebrokenglasskids5196 · A year ago
You can save a lot of those roulette rounds in img2img by using inpainting at that stage instead. Just mask the crown and run batches of 4 until you get what you want, using "inpaint masked only" + "original", and clear the prompt out and replace it with a prompt specifically for what you want changed in the masked area. That way all you're altering is the crown, and you keep everything in the original that you liked, instead of having it change as well and fixing one problem while creating others.

I recommend creating a separate custom inpainting model based on whatever model you're using to render. There are tutorials around the internet on how to do that, so I won't get into it here, but it's not difficult and can be an invaluable tool for getting exactly what you want into your image, giving you near-total control of your subject matter instead of relying on Russian-roulette-style trial-and-error batch runs.

Also, you can increase the quality a lot by sending the image back to img2img at the end and refining the details further with multiple low-denoise rounds. Start with a batch of 4 at around 0.25 denoise and up the samples to 60. Pick the best result and repeat the process, lowering the denoise further while increasing the samples to 100 or more. Keep repeating this and choosing the best result until you're at 0.1 denoise and maxed out at 150 samples. Refining the subtle details like this makes a huge difference in getting that perfect end result before upscaling.

I wouldn't leave upscaling "as is" either, tbh. Dialing in a good mix between two upscalers per model can yield better results in my experience, especially if you're going for lifelike realism. You should also turn CodeFormer visibility up to 1 and set the strength to match what you have in the main settings under Face Restoration, so you get an upscaled face that looks consistent with your base renders of the image. You can then tweak it from there in subsequent passes to get it right where you want it.
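The progressive refinement described above (denoise falling from 0.25 to 0.1 while samples climb from 60 to 150) can be written down as a schedule. `refine_schedule` is a hypothetical helper for planning the rounds, not a WebUI feature:

```python
def refine_schedule(rounds=4, start_denoise=0.25, end_denoise=0.10,
                    start_steps=60, end_steps=150):
    """Linearly interpolate the img2img refinement rounds: denoising
    strength falls from start to end while sampling steps rise, so each
    pass makes smaller, more detailed changes than the last."""
    sched = []
    for i in range(rounds):
        t = i / (rounds - 1) if rounds > 1 else 1.0
        denoise = round(start_denoise + t * (end_denoise - start_denoise), 3)
        steps = round(start_steps + t * (end_steps - start_steps))
        sched.append((denoise, steps))
    return sched

# refine_schedule() -> [(0.25, 60), (0.2, 90), (0.15, 120), (0.1, 150)]
```

Each tuple is one img2img round: run a batch of 4 at that denoise/steps pair, keep the best result, and feed it into the next round.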
@nackedgrils9302 · A year ago
Wow, I didn't know there was an upscaler in Extras. I've always used the img2img SD upscale script with underwhelming results most of the time, so I have to re-roll with different settings, which is extremely time-consuming on my setup (20 min to upscale 2x). I also couldn't figure out how to run batches of images in txt2img; I thought the setting for it was the "Batch Size" slider, which my setup wouldn't run with any value other than 1. Now I'll be able to prompt, go do something else, and choose which image to work with when I get back! It's such a pain to be using this on a laptop, but SD has now convinced me to save up to build a proper PC!
@viralvideocli · A year ago
AI shorts kzbin.info/www/bejne/bpCbaXmsipqll6s
@maurisnake15 · A year ago
Which prompts were you using? Amazing results
@SirSalter · A year ago
Take a sip of your drink every time he says “go ahead”.
@binks_live · A year ago
You got me laughing uncontrollably in the airport, thank you! 🤣
@JohnVanderbeck · A year ago
I've been turning Restore Faces OFF a lot lately. I find the option just ruins the faces. It smooths them out, makes them blurrier and saps detail. It makes them look very photoshopped. Turning it off I get much more detailed and real feeling faces and any issues like screwed up eyes or teeth I can just fix later.
@thebrokenglasskids5196 · A year ago
The effect of restore faces depends on the model being used and how it was trained when it was created. For some models it helps, others it hurts. Also depends on the prompts being used. Especially the negative ones.
@user-zv4xg2bj4h · A year ago
Like 😄 ~subscribed~ ♡
@hypnotic852 · A year ago
I just stumbled across your videos, and I have to say they're extremely helpful. Do you plan on making a video on how to master prompt crafting? I've been on an endless journey trying to find the answer but always come up short.
@binks_live · A year ago
I certainly can start working on one, check back in a few days! Can you tell me a bit more about what you’re looking for?
@hypnotic852 · A year ago
I've been using Stable Diffusion to create characters, and when I describe their clothing or physical characteristics in detail, a lot of it is lost. I've even tried adding weights to the parts the AI wasn't picking up, but when it picks those up, other parts are lost. It's just a massive headache.
@sketchionic6356 · A year ago
Are you also going to make a tutorial about installing that UI you have for us? Please, thank you.
@vintorpraiseandworship · A year ago
Amazing tips! Can you reply with that negative prompt?
@izmi2938 · A year ago
I searched for the negative prompt guide you mentioned, but I found nothing. Help?
@hplovecraftmacncheese · A year ago
Does the sampling method make a big difference? A lot of people use Euler a. I'm new to Stable Diffusion, so I'm just referring to the tutorials I've seen.
@pixelpuppy · A year ago
Some sampling methods change drastically with the number of steps. You can use the same seed and try different samplers to see the difference. Most of them just use different ways to resample the diffusion, trading speed for quality. If you turn up the live-preview frequency, you can see how these samplers work: Euler a does a sort of blurry painting that it refines over time, while DDIM does all these wacky colors to define edges.
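A same-seed sampler comparison like the one above can be scripted against the WebUI's HTTP API (assuming it was launched with the `--api` flag). `sampler_comparison_payloads` is a hypothetical helper, and the exact request fields can vary slightly between WebUI versions:

```python
def sampler_comparison_payloads(prompt, seed, samplers):
    """Build one txt2img request body per sampler, all sharing the same
    fixed seed, so the sampling method is the only variable between runs.
    Field names follow the AUTOMATIC1111 /sdapi/v1/txt2img API."""
    return [
        {"prompt": prompt, "seed": seed, "sampler_name": sampler,
         "steps": 30, "cfg_scale": 7}
        for sampler in samplers
    ]

payloads = sampler_comparison_payloads(
    "castle at sunset", 1234, ["Euler a", "DDIM", "DPM++ 2M Karras"])
# Each payload could then be POSTed to http://127.0.0.1:7860/sdapi/v1/txt2img
# and the resulting images compared side by side.
```

Because every request shares the seed, any difference in the outputs comes from the sampler itself, not from random initialization.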
@hplovecraftmacncheese · A year ago
@@pixelpuppy I haven't experimented much because there are millions of different ways to use prompts and tweak settings. I've just been trying to find what the experienced people seem to recommend.
@creatiffshik · A year ago
Early-step generations in Stable Diffusion look somewhat impressionistic, and that's cool! Moving further into refinement, the image tends to shift toward something more mainstream and obvious, but on first impression these early images are somewhat more emotional and give a dopamine shot. I think there's a way to keep these early generations - like the first two steps - and do some hand-made work around them to refine them further. They tend to look like my friend Alish's paintings, made from a photo but keeping the overall moiré feeling of low-quality blooming film and lens, which gives the sense of an easy shot made to catch a short-lived moment of life. I also tend to feel these early generations look like a mediocre (in a good, everyday sense) beauty at its best.
@DecoTunes28 · A year ago
Where can I download this software and is it still free?
@cloudofzero · A year ago
Anyone else love reading tiny words? Still a good video.
@bazingatnt · A year ago
I checked their site, but there is nothing like what you use, just a simple site with really bad results. How can we access the same panel as yours?
@Which-Way-Out · A year ago
He's using the AUTOMATIC1111 WebUI
@BlastGorilla5253 · A year ago
Very respectfully, I am asking: I don't know how to get that software or how to install it. Please help me, I am very eager to learn and generate AI art. Humble request 🙏🏻
@carlosruiz6179 · A year ago
Search for how to install "stable diffusion", then learn with those videos.
@thekleroterion · A year ago
Hi, can this be done from Google Drive? Colab?
@binks_live · A year ago
Not as far as I know. Some Hugging Face models have online versions that are VERY slow but do work. Let me know if you have any more questions!
@thekleroterion · A year ago
@@binks_live I've got like 50 models and a script that loads them on Colab, but I don't know which ones are better, or the name and the model file link
@azaharrahat2512 · A year ago
What is the site name?
@Which-Way-Out · A year ago
He's using a locally installed version of Automatic1111
@ArielTavori · A year ago
Dude, you have absolutely got to lock the seed in order to compare what Restore Faces does and does not do. The example you show at the beginning suggests it made the whole image better and changed the composition, etc., which is absolutely not the case. If you lock the seed and regenerate the exact image again, you will see the only changes it makes are to the face/hair region; and with a solid model doing close-ups of a face, it frequently actually RUINS the face, making it much lower resolution. It also makes a huge difference which algorithm you have selected in the settings. GFPGAN is excellent at protecting the identity of a specific person without changing them, but it has limited usefulness in making a 'good' face 'better'. CodeFormer, on the other hand, can make a beautiful face out of a complete mess, but it will not protect the original identity and may even change the race.
@ArielTavori · A year ago
FYI, there's also an option in the settings to "save a copy before performing restore faces" so you can keep both files and choose the best for each individual image.
@damarh · A year ago
This is like mining for crypto, except instead of losing your life savings, you get an e-waifu.
@witness1013 · A year ago
Most of these explanations are wrong