Negative Embeddings - ULTRA QUALITY Trick for A1111

  87,223 views

Olivio Sarikas

1 day ago

Negative embeddings can help a lot to improve your image quality. Here is how to use them in A1111. I also show you my unsharpen trick to get much better results when upscaling.
#### Links from the Video ####
huggingface.co...
huggingface.co...
huggingface.co...
huggingface.co...
Support my Channel:
/ @oliviosarikas
Subscribe to my Newsletter for FREE: oliviotutorial...
How to get started with Midjourney: • Midjourney AI - FIRST ...
Midjourney Settings explained: • Midjourney Settings Ex...
Best Midjourney Resources: • 😍 Midjourney BEST Reso...
Make better Midjourney Prompts: • Make BETTER Prompts - ...
My Affinity Photo Creative Packs: gumroad.com/sa...
My Patreon Page: / sarikas
All my Social Media Accounts: linktr.ee/oliv...

Comments: 121
@OlivioSarikas 1 year ago
#### Links from the Video #### huggingface.co/yesyeahvh/bad-hands-5/tree/main huggingface.co/datasets/Nerfgun3/bad_prompt/tree/main huggingface.co/nick-x-hacker/bad-artist/tree/main
@havemoney 1 year ago
Thanks as always for the URL :D
@Mandraw2012 1 year ago
Hey there @OlivioSarikas, I wanted to know: is that an extension you use to get images from your clipboard onto your img2img canvas at 4:20?
@medmen04 1 year ago
@@Mandraw2012 That's an Opera GX thing
@precursor4263 1 year ago
Are there any embeddings for bad eyes? I know there's the face restoration option, but that usually makes the images photorealistic, and sometimes it doesn't work very well for artsy stuff. I don't want to be inpainting eyes, considering I'm working with batch img2img
@LouisGedo 1 year ago
👋
@fenrir20678 1 year ago
Quick little tip: instead of copying and pasting or memorizing the names of the negative embeddings, just click the "Show/hide extra networks" button under the Generate button. There you can see all of your embeddings. Click once in the negative prompt, then simply select which negative embedding you would like to use.
@polystormstudio 1 year ago
Thanks for the tip!
@S4SA93 1 year ago
That's nice, but it doesn't add the pointy brackets. So I wonder: does it need the brackets if it doesn't add them itself?
@nickkatsivelos6613 1 year ago
@@S4SA93 I think it is all taken care of. Here is the output when I did a run: "Textual inversion embeddings loaded(4): bad-artist-anime, bad-ar..." No braces, just a comma between each; I had other negative prompt text in there with it.
@S4SA93 1 year ago
@@nickkatsivelos6613 Yeah, it seems to work without the brackets, but then I'm wondering why he adds them
@SantoValentino 1 year ago
What fork are you running? Because that's not in auto1111… I see it in the vladmandic fork
@benjamininkorea7016 1 year ago
Very nice Photoshop process. I realized that working artistically with Photoshop can save a lot of trouble: for example, just brush out an extra finger instead of inpainting 20x and hoping. But the sharpening trick is really a game changer!
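The sharpening trick discussed above can also be scripted. Here is a minimal sketch of the underlying unsharp-mask math (result = original + amount × (original − blurred)), using NumPy with a naive box blur as a stand-in for Photoshop's Gaussian blur; the array values and parameters are illustrative, not taken from the video:

```python
import numpy as np

def unsharp_mask(img, amount=1.0, radius=1):
    """Sharpen by adding back the difference between the image and a
    blurred copy: result = img + amount * (img - blurred)."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    blurred = np.zeros_like(img, dtype=float)
    # Naive box blur: average the k*k neighbourhood of every pixel.
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# A flat region stays untouched; only local contrast (detail) is boosted.
flat = np.full((8, 8), 0.5)
step = np.where(np.arange(8) < 4, 0.2, 0.8) * np.ones((8, 8))
print(np.allclose(unsharp_mask(flat), flat))   # True: nothing to sharpen
print(unsharp_mask(step)[0, 4] > step[0, 4])   # True: edge side pushed brighter
```

This is also why the trick helps img2img: feeding a sharpened copy back in gives the sampler stronger local contrast to interpret as detail.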
@AI_EmeraldApple 1 year ago
There are other embeddings like ng_deepnegative_v1_75t, bad-image-v2-39000, bad-picture-chill-75v, verybadimagenegative_v1.3, and Unspeakable-Horrors-64v that work with many models too!
@Vitaliy_zl 1 year ago
You can also use an edge-detection filter in Photoshop, invert the resulting image (Ctrl+I), and use it as a mask on the sharpened image to avoid oversharpening artifacts like the ones shown in this video
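That masking idea can be sketched numerically: compute an edge map, invert it, and blend the sharpened copy through the mask so pixels on hard edges keep their original values (where halos would appear). A minimal NumPy sketch, where the gradient-magnitude edge detector and the random test image are stand-ins for Photoshop's filters and your actual render:

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.random((64, 64))                   # stand-in for the upscaled render
sharp = np.clip(1.5 * base - 0.25, 0.0, 1.0)  # stand-in for the oversharpened copy

# 1. Edge detection (gradient magnitude instead of Photoshop's filter).
gy, gx = np.gradient(base)
edges = np.hypot(gx, gy)
edges /= edges.max()

# 2. Invert (Ctrl+I): the mask is bright AWAY from edges.
mask = 1.0 - edges

# 3. Blend the sharpened copy through the mask: flat areas get the full
#    sharpening, hard edges keep the original pixels, so no halo artifacts.
result = mask * sharp + (1.0 - mask) * base
```

Each output pixel is a per-pixel mix of the original and sharpened values, weighted by how far it sits from an edge.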
@AltoidDealer 1 year ago
Heya, I used your cocktail (minus the anime one) and it's great! However, I also tested adding the popular "easynegative" embed to see what would happen... After comparing dozens of outputs with/without it, I determined that at 0.5 weight it improved images even further. Note that I was testing on realistic images and omitted the anime neg embed you showed.
@rproctor83 1 year ago
Be careful with embeddings: they are normally trained on specific models, and when those models are updated but the embeddings are not, you will get a bit of distortion. As the models progress while the embedding stays the same, that distortion becomes more and more prevalent. To further complicate things, the embeddings will affect your other networks like LoRA and LyCORIS, which, if trained on some other model, can drastically alter the results in a negative way. Not to mention things like Clip Skip and CFG; they will also greatly alter the results of the embeddings.
@Ureroll 1 year ago
Nice tip. It actually makes sense that a sharper image would produce finer details when re-upscaling. For the opposite reason, I would be careful with upscaling after those blurring touch-ups in the editor and would leave it as a last step. Any manual blurring or smearing, in my experience, has a high chance of being interpreted as part of the background, unless a higher denoise is set, but that mangles everything at that point. Go back and forth long enough and the color-shifting monster will get you: the colors slowly shift, a really dark blue drifts toward purple, and the blacks go up in gamma. I have not found a real solution for that issue. I tried the option in the settings and the cutoff plugin; nothing has really worked so far. It would be so cool to just paint something in manually or smear off an extra finger in Photoshop, send it to img2img for a beauty pass, go back to Photoshop, work some more... but the colors move around too fast for that workflow. Is there a ControlNet just for the tones and hue? That would be massive!
@nio804 1 year ago
One of my favourite tricks is to use LoRAs with negative weights. You can get some fun effects with the right LoRA
@moon47usaco 1 year ago
That's an excellent idea, I will try that soon... =]
@eugeniusro 1 year ago
In Stable Diffusion it is very helpful to use negative prompts. Interacting with the AI, I was amazed at how similar it is to human thinking; come to think of it, we were programmed the same way, including with negative prompts such as the Ten Commandments from the Bible. 😀
@michail_777 1 year ago
I noticed that the GFPGAN visibility / CodeFormer settings help a lot when generating any persona. In the end, it all depends on the models. Thanks for the link to the text hints.
@justspartak 1 year ago
Delightful result! 👍 After sharpening, the skin appears better and there is more detail throughout the image.
@coda514 1 year ago
Great info as always. Also, you have a really nice looking virtual home. 😉
@Hakaan911 1 year ago
Embeds use the same syntax as a normal prompt, not the LoRA syntax
@nalisten 1 year ago
Thank you Olivio for being so consistent 🙏🏽🙏🏽👑💪🏾
@TheElement2k7 1 year ago
Thanks for the tips, something I will check out 😊
@CaptainFutureman 1 year ago
Very nice, but I would recommend trying a different sharpening method than unsharp masking. I haven't tried it yet, but I would bet using a high-pass filter would not give you the artifacts along the rim of the cloak.
@optimoos 1 year ago
Uber cool info as always. Highly appreciated, Olivio!
@Rjacket 1 year ago
Something I thought was strange when testing out this process of negative prompts: if you have TI embeds like "", having a comma between each negative drastically changes the output, i.e. ", " as opposed to "". Have you ever dealt with this? Do you know why it is happening? Changing the position of a negative also affected the output: using only the embeds around each negative TI with no commas in between, but changing the order of, say, 5 negative TIs. I would really like to see a video on this type of testing; what is the rhyme and reason?
@globalnucleartrue 1 year ago
How is it better than SD Upscale? SD Upscale seems simpler and faster.
@OlivioSarikas 1 year ago
SD Upscale just upscales the image. img2img renders a new image with a lot more detail that the original didn't have
@kuromiLayfe 1 year ago
@@OlivioSarikas SD Upscale also applies a few negative-prompt img2img passes to fix things that would otherwise cause the upscaler to make the bigger image uglier instead of more enhanced. Negative embeddings are just regular embeddings, but trained on the worst results instead of the best quality.
@arielm9847 1 year ago
I appreciate the video, but I feel like something is missing after 4:40. After sharpening the upscaled image and bringing it back into img2img, what did you do with it? Did you upscale again at an even higher resolution (2048x3072) for more details? Did you run Generate at the same resolution, just hoping more details would be added? Or are you just suggesting this workflow before going into inpainting to tweak specific areas?
@OlivioSarikas 1 year ago
No, I rendered it with the same settings again, but with the sharpened input image
@arielm9847 1 year ago
@@OlivioSarikas Gotcha. Thank you, and thanks for all your videos. They are very helpful.
@snatvb 1 year ago
You can use Ctrl+C / Ctrl+V to copy-paste into A1111 from anywhere :)
@HAJJ101 1 year ago
Thanks for making this tutorial! I've been trying to figure out how to train and get this idea working. So it's basically just training on images you don't want and putting that training in the negative embedding? Do these people usually train on class images that generate messed-up faces, like "person", "woman", etc.? Then use a different class for the negative training after?
@12MANY 1 year ago
Thanks a lot Olivio.
@JDRos 1 year ago
Aren't the brackets and weight only for LoRA and LoCon?
@AIAddict-88 1 year ago
Thanks so much, I learn so much from your videos! :)
@xzypergods9867 3 months ago
Whenever I use negative embeddings, this error always shows up: "RuntimeError: expected scalar type Half but found Float"
@hplovecraftmacncheese 1 year ago
When I add the negative embeddings from the extra networks button, it doesn't use the angle brackets, but for LoRA it does. Do you need the angle brackets for the negative embeddings?
@OlivioSarikas 1 year ago
No, you don't need them
@blizado3675 1 year ago
Useful, but for img2img upscaling I first need more VRAM. With Extras I can go to an insane resolution; maybe that works there too? 🤔 Need to test that. And I need to test that negative prompt stuff more.
@Nottiex 1 year ago
Sorry if it was asked already, but what is the plugin or whatever that enables choosing the VAE / clip skip at the top of the main page in the UI?
@treblor 1 year ago
It's in the Automatic1111 settings: Settings / User Interface / Quicksettings list. Change it to: sd_model_checkpoint, sd_vae, CLIP_stop_at_last_layers
@Nottiex 1 year ago
@@treblor Oh, thank you very much
@Charkel 1 year ago
Why don't I have an embeddings folder? :(
@terrence369 1 year ago
Why do images of human characters created by AI give results with two heads and more fingers than there should be? And sometimes those fingers look like an alien creature's tentacles/hands. Is the neural technology built upon aliens embedded into a human interface?
@Shingo_AI_Art 1 year ago
I always have these 4, and most of the time they give amazing results. However, is there a reason behind the use of pointy brackets instead of parentheses? 🤔
@AltoidDealer 1 year ago
I was wondering the same, so I simply tested both ways. I got consistently better outputs with the pointy brackets as shown in the vid
@manipayami294 1 month ago
Why don't I have a Restore Faces button?
@EmilioNorrmann 1 year ago
Are they mandatory in the neg prompt?
@wkdpaul 1 year ago
Not for embeddings; those brackets are for LoRA. Using just the name of the embedding works fine
@OlivioSarikas 1 year ago
Really? I didn't know that. Thank you
@PizzaTimeGamingChannel 1 year ago
@@OlivioSarikas Also, you can use standard parentheses for those negative embeddings, i.e. (bad-artist:0.8). You don't even need to put "by bad-artist" or anything; just the negative embed is fine. :)
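As the replies above note, an embedding is invoked simply by writing its filename in the prompt text, optionally weighted with parentheses. As a hedged sketch, here is what that looks like as a txt2img request body for A1111's built-in API (assumes the webui was launched with --api; the embedding names below are examples, so use whatever filenames your downloaded .pt/.safetensors files actually have):

```python
import json

payload = {
    "prompt": "portrait photo of a woman, intricate details, sharp focus",
    # Embeddings are plain tokens; (name:0.8) weights one down to 80%.
    "negative_prompt": "bad-hands-5, bad_prompt_version2, (bad-artist:0.8)",
    "steps": 20,
    "width": 512,
    "height": 768,
}
body = json.dumps(payload)

# Sending it (left commented so the sketch stays self-contained):
# import urllib.request
# req = urllib.request.Request("http://127.0.0.1:7860/sdapi/v1/txt2img",
#                              data=body.encode(), method="POST",
#                              headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as r:
#     result = json.load(r)   # result["images"][0] is a base64-encoded PNG
```

Whether typed into the UI or sent over the API, the negative prompt string is identical, which is why the bracket question is purely cosmetic.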
@ocoro174 1 year ago
Yeah, but all these models seem to be focused on faces and people. How do you get Midjourney-like doodles/cartoons/food etc.?
@Simsonlover222 1 year ago
you are a hero, i love u
@cobraeconomics4881 1 year ago
How does your upscale method compare to Topaz Gigapixel?
@dinogators8323 3 months ago
thx
@sneirox 1 year ago
i fell in love with her
@koguister 1 year ago
The embeddings folder does not exist. Should I create one, or did I install something wrong?
@Arty-vy6zs 1 year ago
Another one that is used a lot is EasyNegative
@metanulski 1 year ago
I don't see any improvement in the negative embeddings example. The two-neg-embeds image had 7 fingers, and the all-negs one has some extra leaves, but that's it.
@darcasvisual 1 year ago
Hello colleague, how do you keep the characteristics of the character's face and just change the clothes, among other things?
@S4SA93 1 year ago
Unsharp Mask with 1 / 1 / 0 does nothing to my picture in Photoshop. What am I missing?
@BlackJade_OFM 1 year ago
So how do you actually know what negatives are in the neg embedding? Is there a way to see which negatives are actually used?
@hishamzireeni8932 1 year ago
@Olivio, how could you take an actual photograph and render it with AI for whatever prompt while maintaining the face? I.e., creating an avatar or turning an image of your face into many different renders. How could that be done?
@OlivioSarikas 1 year ago
Check my video on LoRA training: kzbin.info/www/bejne/b363YqFvbK6Hl6c
@shadowdemonaer 1 year ago
Alright, but how would one go about training their own negative embeddings?
@OlivioSarikas 1 year ago
Basically like a normal embedding, but with the stuff you don't want to have
@shadowdemonaer 8 months ago
For things like EasyNegative, you can just type it in and improve your images right away. So are they only tagging their training images with EasyNegative? Are they tagging everything like usual? Usually when someone trains something, like a character, if they didn't want the hair style to change, they would only tag the things in the image they want changed: if the eyes change color, they'd tag the eye color, but they wouldn't tag the hair. So, for a basic example, if you wanted to make a neg embed so that eyes with too many highlights never happen again, you would only tag the eyes, right? Or is this incorrect? That's all that holds me back. @@OlivioSarikas
@Rasukix 1 year ago
Is it not better to just use highres fix from the get-go?
@sophytes1430 1 year ago
Why the < > (greater-than and less-than) signs?
@MarcioSilva-vf5wk 1 year ago
So it's basically a high-pass filter with an overlay
@bryan98pa 1 year ago
Nice videos, but maybe you need to add more steps to gain more details.
@skyevent8356 1 year ago
With anime girls I always get weird eyes, no matter what I write in the negative prompt
@AlexSmith-qw5qg 1 year ago
Should I download these embeddings (bad-artist etc.) from Hugging Face, or do they work if I just use them in negative prompts without downloading?
@Jordan-my5gq 1 year ago
You need to download the embeddings, because when you type them in the negative prompt they are replaced by their values. You do not know their values, so you must download them. (Sorry if my English is bad, I am learning. Hope you understand my comment ^^)
@babamaheshvrrajrajeshvre9963 1 year ago
I want to learn a lot about photography. I only have a phone, no other device, no laptop or computer. So how can I use AI tools? Free ones.
@norko7422 1 year ago
My images look bad when I go above 512 in 1.5-based models. What's the issue?
@norko7422 1 year ago
Same problem in 2.1 models above 768...
@Vitaliy_zl 1 year ago
Do all Stable Diffusion users have a habit of counting fingers on ANY image, or is it just me?
@blizado3675 1 year ago
The less work you have to put into creating an image, the more you tend to be a perfectionist. :D
@TheRealBlackNet 1 year ago
I have an RTX 3080 Ti and can't go bigger than 1024 without getting a CUDA out-of-memory error. What card do people use to go up to 1500? I help myself with Ultimate Upscale, but most times I see the checkerboard. Is there a trick?
@Tigermania 1 year ago
Try changing the line in your webui-user.bat to this: set COMMANDLINE_ARGS=--precision full --no-half --medvram
@treblor 1 year ago
Can also try: set COMMANDLINE_ARGS= --medvram --upcast-sampling
@snoweh1 1 year ago
I have a 3080 10GB and I can go higher than 1024.
@TheRealBlackNet 1 year ago
@@treblor Thanks!
@user-gu9vf3cc4u 1 year ago
How do I use it in the negative prompt? Should we use it like ?
@peace.n.blessings5579 1 year ago
What are the system requirements for running Stable Diffusion?
@Max-sq4li 1 year ago
At minimum an RTX 3060 12GB. More VRAM = more stable, and more features to work with
@TrentSterling 1 year ago
I run it locally on a 1060 6GB. It's slow, but in theory any card with 4GB of VRAM can do it. So the minimum is smaller than that, haha.
@AIAddict-88 1 year ago
I could run it locally with a GTX 980, but I recently upgraded to a 3060 Ti, which is much faster. The 980 worked, though!
@dlep9221 1 year ago
I'm using A1111 with an RTX 2080S, 8GB. It's running very well (with NVIDIA CUDA & the --xformers option)
@mr_frank9016 1 year ago
Successfully using it on a GTX 1650 4GB card. I can generate up to 1024px, but slowly (1 to 3 minutes per image). "Extras" upscaling takes around the same time, but img2img upscaling to 8K can take an hour with all the steps involved.
@isycoolro 1 year ago
Hello Olivio! Can I have a one-on-one consultation with you? Do you have an email where I can contact you? Thanks.
@support8804 1 year ago
What is A1111? How do I install it?
@Steamrick 1 year ago
Automatic1111. Look at his older videos or google it
@havemoney 1 year ago
automatic1111 >>> go google
@Tigermania 1 year ago
Search for how to install Automatic1111 Stable Diffusion
@Max-sq4li 1 year ago
It's AI software that generates images from text
@Jordan-my5gq 1 year ago
@@Max-sq4li Stable Diffusion is the AI. A1111 is an interface to interact with Stable Diffusion.
@Akami-hz8xz 1 year ago
You made a mistake including Photoshop, which is irrelevant.
@MarkDemarest 1 year ago
FIRST 🎉
@NiteshSaini1 1 year ago
Instead of AI, I see this more as programming work, which doesn't improve the user's artistic skills but can help them become a programmer. Manual work will always be the true art. AI will be a disaster for mankind, created and improved by mankind.
@13RedCorpse 1 year ago
Time will tell.
@hectord.7107 1 year ago
You don't seem to know much about art, then. Creating art is not just using a pen or a pencil; it's the entire process, including the idea, the composition, and the execution. Many people are just copying and pasting prompts to get a nice picture, but the ones doing great things are using AI as one more tool, combined with Photoshop and other tools, and some insane art will be created in the near future that could never have been made by human hand alone.
@DarkStoorM_ 1 year ago
@@hectord.7107 This is what no one understands. People jump from video to video, bashing everyone in the comments for using AI whenever a new convenient tool is released. Funnily enough, I even found someone commenting on 3Blue1Brown's recent video that he would stop watching 3B1B because he used AI images (the video contains images *transformed* by another artist aided by Midjourney). People don't seem to realize that it's not just about _typing words into boxes_ and spamming pretty images over the internet, making artists mad. This argument is getting really annoying and is already obsolete. People already create *insane* images, completely *transforming* the base txt2img result, which immediately throws the copyright argument straight into the trash can. Thanks to the inpainting tool in Stable Diffusion, we can make amazing high-resolution transformations from a simple Photoshop sketch, still putting *massive* amounts of tedious, manual work into the result image, creating it piece by piece, utilizing creativity to the max, while keeping the sketched composition, which is *your work*. Using artists' names is practically useless nowadays, because it has very little impact on this process, just like any random word in the prompt. People, rather than starting nonsense and useless dramas all over the internet, use this to your advantage and stop being a baby :)
@yoteslaya7296 1 year ago
Thanks for the info, but I'm not paying for Photoshop
@blizado3675 1 year ago
Like he said, any image software that has sharpening features will work. There are also free open-source alternatives.
@yoteslaya7296 1 year ago
@@blizado3675 Which ones?
@Самый-лучший-комент_наверно 1 year ago
Feels like a workaround; why such contortions with the upscale? =="
@clumsymoe 1 year ago
nick-x-hacker/bad-artist is a little off, a very sus nick choice. On Hugging Face it shows "no pickles detected", but you can never be 100% sure