NEXT-GEN NEW IMG2IMG In Stable Diffusion! This Is TRULY INCREDIBLE!

185,812 views

Aitrepreneur

A day ago

Comments: 344
@Rafael64_ (a year ago)
Not even a week goes by without a significant image-to-image enhancement. What a time to be alive!
@SnowSultan (a year ago)
If this works as well as it appears to, this is both game-changing and life-changing for artists like myself that work in 3D but want more illustrated or toony results. I've waited 24 years to be able to make true 2D art with 3D methods. Even if it's not perfect yet, this gives me hope.
@muerrilla (a year ago)
I'm playing around with the scribble model and I'm absolutely blown away!
@Smokeywillz (a year ago)
Topaz Studio 2 was coming close, but THIS is next level
@PriestessOfDada (a year ago)
I had the same thought. Makes me want to train my cc4 characters
@SnowSultan (a year ago)
@@PriestessOfDada I've had decent luck using untextured DAZ figures as ControlNet pose references, but I don't know what the results would be if you train a checkpoint or LoRA on a complete 3D character. If you can still get 2D or anime results from it...well, I'll have a lot of training to do. ;)
@mrhellinga9440 (a year ago)
This is pure gold
@o.b.1904 (a year ago)
The pose one looks great; you can pose a character in a 3D program and use it as a base.
@mactheo2574 (a year ago)
What if you use your own body to pose?
@vielschreiberz (a year ago)
Perhaps it will be useful with some simplified solutions like Daz 3D, or with a pose library
@Amelia_PC (a year ago)
Yup. I've been using Daz to help me with comic book character poses for years, and it takes only seconds to put a character in a different pose (anyone who says it takes longer is a newcomer or doesn't have much experience with 3D programs).
@pladselsker8340 (a year ago)
And yeah, the 3D software thing is actually a good idea if you implement decent inverse kinematics on it. It can probably save time and be faster than a Google search for simple (or really specific and complex) poses. I was learning how to do that last night in Blender; I'm almost done with the model, and it's actually not too hard to make. You don't even have to render anything, btw: you just take a screenshot when the angle and everything look okay, then paste that into the webui. Works like a charm.
@sonydee33 (a year ago)
Exactly
@NC17z (a year ago)
This extension is amazing! I'm having an absolute blast with it. It is solving so many of my problems with matching the look of realistic photos to the image and prompt I'm feeding it. Thank you so much for what you do. You've been my first go-to on YouTube for weeks!
@peterbelanger4094 (a year ago)
I'm having fun with the sketch preprocessor. Runs fine on my GPU (GTX 1060 6GB), not even using the --lowvram option.
@Alex-dr6or (a year ago)
This is exactly what I need. Last night I was having fun with the blend option in Midjourney and wished SD had something similar. This video came at the perfect time
@coda514 (a year ago)
Saw info about this on Reddit, I knew you would put out a how-to video so I waited to install. Glad I did, you did not disappoint. Sincerely, your loyal subject.
@Aitrepreneur (a year ago)
HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx
@MrArrmageddon (a year ago)
Do we know if these are safe? I don't even know how to scan pickles anymore; I've avoided them for months. Amazing video, by the way. Thank you.
@joachim595 (a year ago)
“Type cimdy”
@peterbelanger4094 (a year ago)
👍👍👍👍👍👍👍 Great extension! Paused the video and got it all downloaded and installed before I finished it. Runs fine on my GTX 1060 6GB, even without the lowvram option; actually 10x faster without it. Only the xformers option is needed.
@MrArrmageddon (a year ago)
@@peterbelanger4094 If you can, Peter: I have an RTX 4080 16GB. I've never used xformers. Should I look into it? And if so, what purpose does it serve? lol If you can't explain, that's fine.
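(For context on the thread above: --xformers and --lowvram are launch options for the Automatic1111 webui, set in webui-user.bat rather than installed separately. A minimal sketch, assuming a stock Windows install; the flags shown are just the ones discussed in these comments:)

```shell
:: webui-user.bat sketch (Windows batch); assumes a stock Automatic1111 install.
:: --xformers enables memory-efficient attention (faster images, lower VRAM use).
:: --lowvram trades speed for memory; the commenters above found it unnecessary
:: on 6 GB cards once xformers is enabled.
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers
call webui.bat
```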
@zwojack7285 (a year ago)
What SD version are you using? I only have... 1.5, I think, and the extension only appears in txt2img, not in img2img
@ristopaasivirta9770 (a year ago)
My biggest complaint about SD has been the lack of control. To make comic books and the like, you need to be able to precisely control the pose of the characters. Gonna see how well this holds up. Thank you for the video!
@Unnaymed (a year ago)
It's epic, the power of Stable Diffusion is upgraded! ❤️
@xnooknooknook (a year ago)
I really like txt2img but img2img has been where I've spent most of my time. Scribble mode looks amazing! I need it in my life.
@GolpokothokRaktim (a year ago)
I recently experimented with BlueWillow and I'm really amazed by it. BlueWillow recently launched V2 with a brand-new model upgrade; now I get better-quality images with more aesthetic outputs
@DjDiversant (a year ago)
Installed it just a couple of hours before the vid. Thx for the tut!
@gloxmusic74 (a year ago)
Installed it straight away and can honestly say I'm super impressed 👍👍👍
@ysy69 (a year ago)
Downloading the models now and will try. Thank you so much; this seems very powerful. In fact, I've been spending more time on img2img lately. Even in its current state it is fantastic... can't imagine the possibilities with this new extension
@voEovove (a year ago)
Yet again, you have managed to blow my mind! Thank you for showing this amazing new functionality! Feels like these tools are getting more insane every single day.
@SuperEpic-vb8nq (a year ago)
This is absolutely amazing; my only complaint is that it doesn't seem to work with batch img2img. If that gets working, it could easily solve the issue with stable diffusion videos where details tend to be "sticky" due to the seed not shifting with the video. This could help stabilize it. Edit: after an update, it works with batch img2img and does exactly what I wanted. What a time to be alive!
@Lowbow0 (a year ago)
Yoo, do you make videos in Stable? Because I do, and I'm interested in batch mode/animation: consistent characters and places. Can we connect on Insta?
@TheDoranMaster (a year ago)
Awesome, thanks for sharing! Btw, if you click the down arrow just to the right of the letters LFS, it downloads the models without opening another tab. Small but useful tip :)
@OsakaHarker (a year ago)
K, you forgot to copy the checkpoints into ControlNet/annotator/ckpts; if you do, the hand pose works amazingly using the openpose_hand preprocessor and the openpose model. Thank you for this amazing video, this changes a lot about how we create images
@rmb6037 (a year ago)
Where are those? I don't see them on the GitHub page
@OsakaHarker (a year ago)
@@rmb6037 On the models link page, go back once into ControlNet and then enter the annotator/ckpts folder
@ThisOrThat13 (a year ago)
@@rmb6037 Look within "annotator", just above the models folder, then in ckpts for those files.
@ThisOrThat13 (a year ago)
That is where I'm lost now. Where would those files (.pth & .pt) go? Just into the model folder with everything else?
@OsakaHarker (a year ago)
@@ThisOrThat13 I noticed they were auto-downloading, but that wasn't working for me, so I put them all in \extensions\sd-webui-controlnet\annotator\ckpts
@kylehessling2679 (a year ago)
I've been wishing for this since day one of using SD! This is going to be so useful for generating versions of my graphic design work!
@dreamzdziner8484 (a year ago)
Wow. So exciting. Thank you dear Overlord 💪🙏🏽
@A_Train (a year ago)
Thanks for being on the bleeding edge of this and imparting your knowledge to artists like me. My question is: what is the best way to use Stable Diffusion in Blender? I used a version like 3 months ago, and now that seems so outdated.
@s.foudehi1419 (a year ago)
This is truly next-level stuff. I'm glad I found this video. Has anyone already tried creating a depth map with ControlNet and then using it to create a 3D model in Blender? There are some good tutorials on here as well; you might want to check those out :)
@purposefully.verbose (a year ago)
I saw people talking about this concept on several other channels, and all were like "I hope this comes out for auto1111", and I'm all "it is!" and linked this video. Hopefully you get more subs.
@yeahbutcanyouredacted3417 (a year ago)
Amazing tool; it solves a lot of the RNG needed to get closer to the designs we're looking for. Ty again, Aitrepreneur, for helping get my home studio going
@haidargzYT (a year ago)
Cool 😮 The AI community keeps surprising us every day
@EntendiEsaReferencia (a year ago)
I've been waiting for the 1.5 depth model, and now it's here, and with a few friends 🤗🤗
@h8f8 (a year ago)
Never knew you could type cmd in the top directory bar... thank you so much
@JohnVanderbeck (a year ago)
I'm drooling at the thought of using the pose model!
@kuromiLayfe (a year ago)
Can't wait for this to be an extension for txt2img too: prompt for a specific character, then add a scribble or preprocessed image to the script to get the described character in the pose you want.
@BlackDragonBE (a year ago)
It already has this.
@BlackDragonBE (a year ago)
@@ClanBez Open the txt2img tab with the extension this video explains installed. At the bottom of the tab you can use ControlNet just like in img2img, including the scribble model. By providing a prompt and a scribble, you can generate images with lots of control. I suggest lowering the Weight to 0.25-0.5 to start with, as you can otherwise get some weird results depending on your drawing skills. Good luck.
@zwojack7285 (a year ago)
For some reason it only shows up in txt2img for me lmao
@thebonuslvl (a year ago)
Magic... so much more to come. Thank you for keeping us on the leading edge.
@JohnVanderbeck (a year ago)
So, a few thoughts after playing with this. I zeroed in on the pose model specifically, because one of the things I've been trying (and failing) to do for a long time now is mix my own photography with generation, and having any sort of control over posing was nearly impossible. Until now! First off, this works in txt2img as well, so you can supply a pose reference image, do your normal txt2img prompts, and get a completely new generation in that pose. Mind fracking blown! That said, temper expectations for now, as the pose estimation is not that accurate. This is rather surprising, actually, given that 2D pose estimation has been a pretty well-solved problem for longer than SD has been popular, so I'm not sure what's up there. Still, it has already let me start making fusions of my studio photography with SD generations, and it is amazing!
@ysy69 (a year ago)
Are you saying this extension can also be used in txt2img?
@JohnVanderbeck (a year ago)
@@ysy69 Yes! That's mostly where I'm using it right now, actually. I'm taking some of the models I've shot in studio, bringing them into SD in txt2img, putting the photo in for the ControlNet pose, and then using txt2img to generate a completely new person in roughly the same pose as the one I photographed.
@Seany06 (a year ago)
@@ysy69 txt2img is where it works, not img2img
@Seany06 (a year ago)
According to GitHub it should be possible with the openpose model to control the skeleton, but Gradio isn't easy to work with. I'm sure in a few months we'll have a lot more control. These tools are insane so far!
@ysy69 (a year ago)
@@JohnVanderbeck When you say model, you're referring to SD custom models, right, and not people as models? When you bring a photo into SD, that is img2img... in txt2img one doesn't use an image as reference, so I guess you meant img2img and then using the prompt to change it into a new person, correct?
@XaYaZaZa (a year ago)
My favorite youtuber 🧡
@sebastianclarke2441 (a year ago)
Why have I only heard about this for the first time today!? Wow!!
@Irfarious (a year ago)
I love the way you say "down below"
@BenPhelps (a year ago)
Brilliant. Any tips to install on M1 Mac?
@cybermad64 (a year ago)
Thanks a lot for sharing your Miro boards; those are normally tests I would run myself to understand how the system works. You're saving us a lot of tech-investigation time! :)
@AmirZaimMohdZaini (a year ago)
This feature is finally able to make a new image with the exact style of the original input picture.
@upicks (a year ago)
Simply amazing, thanks for the video! This makes img2img even better than I could have imagined.
@toddzircher6168 (a year ago)
Thank you for the wonderful walkthrough of this new extension. I have a lot of 2D pose sheets/sketches from various artists and can totally see using them with ControlNet.
@harambae117 (a year ago)
This looks like a lot of fun and really good for professional use. Thanks for sharing dear AI overlord
@Eins3467 (a year ago)
Thanks for another great vid! I'm more interested in the openpose model, because then you won't need to prompt a pose as much. It seems in the video it can retain some details like the clothes' color, so it also needs a prompt to change them. Very interesting. Edit: Some more interesting things - It can accept characters provided the model knows them (Azur Lane characters, for example) - As in the video, it eats up VRAM fast; my Linux box almost crashed one time lol.
@thailandbestbest8172 (a year ago)
Using it with an RTX 2060 6GB without --lowvram. Just extract the model and it will be around 1.44 GB instead of 5.7; the extraction script is in the GitHub repo. :) :) :) And thank you very much, you are doing a wonderful job!!!
@Rickbison (a year ago)
I finished my last short with the old img2img. Downloading all the models; let's see how this goes.
@desu38 (a year ago)
Goddamn, the webui just keeps getting more and more powerful. 😯
@StrongzGame (a year ago)
I need this video on a flash drive, for reference, forever
@cinemantics231 (a year ago)
This just keeps getting better and better! Thanks for putting this together. Is there any way to merge two different images? Like take the pose from one image and implement it in the style or background of another?
@pastuh (a year ago)
It's called inpainting. Just use the Photoshop plugin for this: paint over (or place an image) on a different layer and click inpaint
@SteveWarner (a year ago)
Top notch training! Thanks for this comprehensive overview! Looking forward to testing this out!
@Varchesis (a year ago)
This is insanely great! Thanks for sharing this info.
@mikishomeonyoutube2116 (a year ago)
This is TRULY INCREDIBLE!
@angelicafoster670 (a year ago)
Now we need a way to combine a pose + a consistent character together.
@Fingle (a year ago)
NO WAY THIS IS INSANE
@dthSinthoras (a year ago)
So now we have this, which seems better than the previous depth models; we have LoRA, which seems better than hypernetworks; and x versions of video creation (I've lost track of which is best here)... I would LOVE a "state of the art" video covering what is outdated, what the useful variations are for each task, etc. :)
@rmb6037 (a year ago)
Could be a monthly thing
@kallamamran (a year ago)
Finally! If you can say that for something that didn't take 24h 😃
@오오와아아앙 (a year ago)
Thanks again for the good content! The pose thing looks interesting 👀
@AnimatingDreams (a year ago)
That's amazing! Will it work on Colab as well?
@duplicatemate7843 (a year ago)
does it work on colab?
@IlRincreTeam (a year ago)
This is VERY impressive
@jivemuffin (a year ago)
Nice, comprehensive video -- and thanks for the Miro board in particular! Makes me think there's great potential for AI workflows in there. :)
@jucabalmacabro (a year ago)
Amazing. I'm buying a new PC to be able to play with Stable Diffusion. IN LOVE.
@ThisOrThat13 (a year ago)
1TB isn't enough anymore with all the models I have installed just for text2text.
@jucabalmacabro (a year ago)
@@ThisOrThat13 Omg, my new PC has only 1TB, I'm screwed
@ThisOrThat13 (a year ago)
@@jucabalmacabro Most mobos will hold two M.2s now. The second one could be a 2TB for SD and games. Or use a portable HD to swap models in and out.
@Cneq (a year ago)
Man, ControlNet openpose + VR full-body tracking with 11 points can seriously open up some possibilities.
@rickardbengtsson (a year ago)
Great breakdown
@itsalwaysme123 (a year ago)
There is a safetensors version of the models that takes up *significantly* less space! But other than that, golden.
@ryanp515 (5 months ago)
This is cool. I was wondering, could this be used to make line art? That would be a time-saver with poses, etc.
@Smiithrz (a year ago)
Thank you so much for making this video. I was just talking about how this stuff will solve a lot of the posing problems we have. Can't wait to try it with my model photography as references 👏🏻
@MrOopsidaisy (a year ago)
Are you able to create an updated installation video? I've been out of the loop for a few months with Stable Diffusion and feel lost with all the updates... :(
@pladselsker8340 (a year ago)
This is THE extension everyone needed. I don't know if people realise it, but this is a MEGA GAME CHANGER. Up until now we ONLY had tools for changing the details of the image: inpainting, pix2pix, img2img, LoRAs, etc. All of this is JUST for the details, and it can even make control of the composition harder to achieve (especially for TIs and LoRAs; they affect it negatively quite a lot in general). This new extension now handles the composition of the images. It is literally filling a hole in the AI image-making process. One other such hole that I hope someone or a team will fill now is being able to make LoRAs of things that don't yet exist, like OCs.
@lefourbe5596 (a year ago)
Someone like me? Totally agree. I strive to turn my bad CG models into glorious anime characters in Blender, using Stable Diffusion as a render engine. I already have Dreambooth and LoRA ready to use, but lack consistency in character features to make a nice animation. I lack time, sadly, and I'm still an amateur; definitely interested in a Discord gathering people together to find the best settings for everyone
@pladselsker8340 (a year ago)
@@lefourbe5596 I think the Touhou AI project is one of the best servers for that. A lot of people throw in information every day, and some of them share their whole workflow if you ask. It's so hard to keep up with them haha.
@duplicatemate7843 (a year ago)
Hi sir, how do I download these files if I'm using Stable Diffusion via Google Colab?
@rickland1810 (a year ago)
Amazing videos! Thank you. Just a suggestion: maybe the part where you download models should come first in your vids, so they download while we do the other steps. I already know your videos, so I get ahead of this. But again, thank you.
@shongchen (a year ago)
Hello, I have a question: how do I find the "Control Stable Diffusion with human pose" tab? Thank you for sharing.
@digitalkm (a year ago)
Awesome, thank you!
@Amelia_PC (a year ago)
Please consider talking about ControlNet with Anime Line Drawing for Stable Diffusion when it's available to the public! :D It's a revolution for comics/manga and animation.
@gilz33 (a year ago)
Can I install this on my MacBook M1 Max?
@boythee4193 (a year ago)
I had to restart the whole thing to get the models part to show up, but it did work. Too late to test it out now, though :)
@unknowngodsimp (a year ago)
This is awesome, but I'm kind of confused about how the three "inputs" relate to each other. Perhaps this is me just not understanding img2img. Basically, my question is: how do the prompt, the image, and the second image (for the depth map) relate to each other in the resulting image? Doesn't this new extension mean we could (also) use only a prompt plus the depth-map image to generate an image? I would love an in-depth answer 🙏
@TomiTom1234 (a year ago)
Something you could have covered in the video is inpainting using this method. I noticed on their site that you can inpaint part of an image. I wish you had explained that.
@thays182 (a year ago)
Is there a way to use img2img and ControlNet to move an existing character, with clothing and style, into a new pose/position, and have it still be that initial character? (Without a LoRA, only going off one image)
@puma21puma21 (a year ago)
Downloaded the models and moved them to the models folder, but ControlNet doesn't show
@RyanBigNose (a year ago)
this is awesome
@nathancanbereached (a year ago)
I'd love to see a video/cartoon where you transition from one scene to another by slowly increasing the MiDaS weight. It would be a very trippy, dreamlike way to transition to a totally different place.
@SentinelTheOne (a year ago)
The pip install doesn't work for me. However, it worked via this in the CMD window: python -m pip install opencv-python
@azmodel (a year ago)
Absolutely Crazy. thanks!
@madrockon7357 (a year ago)
[3:06] Yeah, and I learned that the hard way, by trying to run the depth map at 1024x1024.
@CCoburn3 (a year ago)
Wouldn't it be great if you could move the "joints" in the openpose model to make changes to the pose? But scribble comes close. Good video.
@Daeca (a year ago)
If you're using Automatic1111, there's an extension called OpenPose Editor. Installing it creates a new tab at the top where you can edit the "joints" directly, then save as needed. You can even add a background image so you can properly pose your scene.
@CCoburn3 (a year ago)
@Daeca Thanks. And there are some websites that allow posing models that include hands; that should help with the deformed-hand problem.
@bryan98pa (a year ago)
Wooow, I like this new tool!!
@gohan4585 (a year ago)
Thank you sensei bro 🙏
@ayanechan-yt (a year ago)
To use the seg model: activate the venv (venv\Scripts\activate on Windows), then pip install prettytable, then restart the webui.
@girasan (a year ago)
thank you so much 🙂
@jameshughes3014 (a year ago)
I guess it's time to buy a new hard drive, because I need this.
@mrrooter601 (a year ago)
This is great; the hand one seemed to work for me, at least on one base image. It refuses to work at all with Waifu Diffusion 1.4e2, though.
@artfoolmonkey2866 (a year ago)
Hi, and thanks for your amazing guides. I followed the steps, but for some reason I only have access to controlnet-m2m and can't find any other options. Are there any other prerequisites before installing the ControlNet extension, or something to configure?
@Warzak77 (a year ago)
People are going to batch-colorize manga scans with this! Amazing discovery
@MarkHarris-bt4po (a year ago)
Another useful video, cheers. I have a question you might know the answer to: I want to train some models (probably LoRAs) to recognize designer clothing, so a bunch of items in a particular category, such as shirts (long sleeve/short sleeve/distressed, Nehru collar, etc.). I'm not sure which method is best, or whether I should fully describe the source images or leave some parts out of the captions to get the best results. Can you make any recommendations?
@pladselsker8340 (a year ago)
I would suggest using around 50 to 200 images of whatever you're trying to generate and seeing how it does with such a dataset. Then iterate on it until you're happy with the LoRA you made.
@ModestJoke (a year ago)
I noticed you didn't activate your Python virtual environment before installing opencv-python. Do you not use a venv when running Stable Diffusion? I thought it was on by default, and I'm not even aware of a way to disable it (there could be one, though). When I run the webui-user.bat file, the very first line of output notes that it's running a virtual environment. If you install into your system's main Python folder instead of the venv, will it even work? If anyone out there is having trouble getting it to work and thinks this might be the cause, try this: on Windows, open a command line in your webui folder, then enter the command (with quotes!): "venv/Scripts/activate". This runs activate.bat in that folder to begin using the virtual environment inside your webui folder. Then run the pip command to install opencv-python into that environment. On Linux, you'll have to look up how to activate the virtual environment before installing with pip.
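(The venv point above can be demonstrated with a throwaway environment: while a venv is active, `python` and `pip` resolve to the venv's copies, so packages land inside it rather than in the system Python. A minimal sketch in Linux/macOS syntax; the /tmp path is only for illustration:)

```shell
# Create and activate a disposable virtual environment.
python3 -m venv /tmp/demo_venv
. /tmp/demo_venv/bin/activate

# The active interpreter now reports the venv as its prefix, so a
# "pip install opencv-python" here would install into the venv only.
python -c "import sys; print(sys.prefix)"

deactivate
```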
@alephstar (a year ago)
There is no way to properly thank you for this. I've had the pip issue since I installed it a few days ago, and this was the only thing that solved my troubles. Thank you so, so much for taking the time to write this comment.
@Pertruabo (a year ago)
Lord, you're a godsend. Hope this gets pinned; thanks a lot!!
@JonahHache (a year ago)
THANK YOU!!
@martinkaiser5263 (a year ago)
Did exactly the same steps, but ControlNet is not showing up in the webUI
@ChrisCapel (a year ago)
Man, I've followed the instructions but I'm not even seeing a ControlNet tab. Anyone else having this problem?
@RyanBigNose (a year ago)
Can we use this for batch?
@dayswdan (a year ago)
Hi! Just want to ask if this is possible: we have multiple pre-made (not AI-generated) portraits, and I want to use a customer's cat, dog, or other pet image as the head on my pre-made portraits.
@xd-vf1kx (a year ago)
So cool! I love ya!
@oldaccountfornow1111 (a year ago)
Big Thanks
@ThatOneGuyWithAReallyLongName. (a year ago)
Is anybody else having a problem where ControlNet just... stops working? I had it working just fine this morning, and now it's refusing to acknowledge my depth maps. In fact, I was having a hell of a time even getting it to /make/ a depth map. I finally got it to spit one out, and now I'm trying to use that depth map in txt2img and it's ignoring it. ControlNet is Enabled, the depth map has been placed in ControlNet, no preprocessor, and control_sd15_depth is selected as my model. What am I doing wrong?! E: Fixed it; the depth model for ControlNet just decided to self-corrupt. Deleted and reinstalled the control_sd15_depth model and I'm back up and running.
@RyanBigNose (a year ago)
I can't install the first one; can anyone help me figure out why?
D:\ai\stable-diffusion-webui>pip install opencv-python
'pip' is not recognized as an internal or external command, operable program or batch file.
@SolracNaujMauriiDS (a year ago)
The same thing happened to me with CMD, but it worked for me with Anaconda
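(A common cause of this error: pip's Scripts directory isn't on the Windows PATH even though Python itself is. Running pip as a module through the interpreter avoids depending on PATH; the exact interpreter name, python vs python3 or a full path, depends on the install:)

```shell
# If `pip` is not recognized but `python` is, invoke pip as a module.
python -m pip --version              # confirms pip is reachable this way
python -m pip install opencv-python  # the install command from the video, run via -m
```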