M1 Mac running Stable Diffusion NATIVELY - getting good

34,925 views

Alex Ziskind

Comments: 75
@RSV9 · 1 year ago
I did a short test: installed SD and Automatic1111 on two machines. A: Lenovo Legion 5, AMD Ryzen 7 5800H, 64 GB RAM, NVIDIA GeForce RTX 3050 Ti 4 GB, launched with --xformers. B: Mac mini M2 Pro base model, 16 GB RAM. Text-to-image with standard settings. On average: Lenovo 3.5-4.6 iterations/s, M2 Pro 1.3-1.6 iterations/s, with up to 4 GB of swap used on the Mac mini.
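(For context on the comparison above: a minimal sketch of the two launch setups, assuming the standard AUTOMATIC1111 webui scripts. xformers is a CUDA-only optimization, so it applies to the Nvidia laptop but not to the Mac.)

```bash
# Windows/Linux with an NVIDIA GPU: enable the xformers attention optimization
# (on Windows this usually goes in webui-user.bat as COMMANDLINE_ARGS=--xformers)
./webui.sh --xformers

# Apple Silicon: xformers requires CUDA, so the Mac launches without it and
# relies on PyTorch's MPS (Metal) backend instead
./webui.sh
```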
@alexanderjenkins · 2 years ago
Crazy, I was just wondering if something like this existed... thanks!
@AZisk · 2 years ago
yeah, this stuff is getting more available and accessible
@yriccoh · 2 years ago
Version 1.51 explains the sliders. Increase the steps above 40 and you'll see an improvement.
@RSV9 · 1 year ago
I would like to see a speed comparison between a Mac mini M2 Pro and a Windows laptop with a dedicated graphics card, for example an RTX 3050 Ti. Is it worth installing SD on Apple Silicon?
@GraylandSmith · 1 year ago
I would love to see a comparison between the M1 Max with 64 GB of RAM and the Mac Studio. I need an upgrade, and I want a Mac that'll be great for Stable Diffusion, AI, Unity, AR/VR creation, video editing and rendering, big 4K/8K files (animations), heavy After Effects, and various VFX software. I especially need something that's good for editing and rendering huge files and projects FAST... I need a beast of a machine.
@goodlux777 · 1 year ago
I'm going through the same thought process, wanting a single machine to do it all, but Nvidia GPUs seem to always beat Macs and get updates months in advance.
@ScottLahteine · 2 years ago
Nice. I just went through the process of installing Stable Diffusion from the command line on the MacBook Pro (M1 Pro), using the lstein fork rather than magnusviri, and although the install went pretty smoothly it was a bit of a hassle, requiring an install of MiniConda for a start. The InvokeAI web interface was easier to install than web-ui, but I want to be able to use both with the same SD install, so hopefully the sidecar web-ui setup will not be too difficult. My MacBook has only 16 GB of RAM and Stable Diffusion uses it all, hitting the VRAM pretty hard; on an 8 GB machine it's going to take the SSD out to the woodshed. I'm going to install it on the Mac Studio M1 Ultra later, and if it's any faster or more impressive there I'll let you know!
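(A rough sketch of the conda-based install flow described above, from the era of the lstein / early InvokeAI fork. Exact file, environment, and script names varied between releases, so treat these as assumptions.)

```bash
# install Miniconda first, then roughly:
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
conda env create -f environment-mac.yaml   # env file name differed across versions
conda activate invokeai                    # older builds named the env "ldm"
python scripts/preload_models.py           # fetch the model weights / checkpoints
python scripts/invoke.py --web             # CLI REPL, or --web for the InvokeAI web UI
                                           # (older releases used scripts/dream.py)
```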
@woolfel · 2 years ago
I was trying to install the InvokeAI fork too, but I was having all sorts of issues. It turns out I didn't have LLVM installed, which caused the InvokeAI "create env" step to fail. The joys of native code dependencies.
@ScottLahteine · 2 years ago
To follow up: wow, just wow. Like Neo: "Whoa." On the Mac Studio Ultra with 64 GB of RAM it uses around 16 GB at baseline and another 8 GB to diffuse a 512x512 image, taking under 25 seconds to generate each image (50 steps). The GPU is pegged to the limit the whole time. Meanwhile, lstein/stable-diffusion is now invoke-ai/InvokeAI, so… And it's not bad! Now to try some image-to-image, and in-painting, and…
@woolfel · 2 years ago
@@ScottLahteine I was considering submitting a pull request to mention "you need LLVM". It took me about 2 hours to track down that LLVM was causing create env to die. I'm impressed that my M1 Max with a 24-core GPU is faster than an RTX 2060 6 GB: 32 sec vs 65 sec for 1 image.
@ScottLahteine · 2 years ago
@@woolfel The instructions I followed specified that Xcode had to be installed as the first step, which is where most Mac geeks should get their LLVM. On other platforms I didn't notice whether it was mentioned, but a helpful message at startup or during install would be good!
@woolfel · 2 years ago
@@ScottLahteine I have Xcode installed, but what I forgot is that I had updated macOS. After I ran brew config, I saw that clang was null. The fix was simple enough: I just needed to update the additional Xcode command line tools to the latest version, and then conda created the env just fine :) I created a pull request for their macOS install docs to note the lmdb error.
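(The fix described above boils down to a couple of commands; a minimal sketch, assuming Homebrew and the Xcode Command Line Tools.)

```bash
brew config | grep -i -E "clang|clt"   # Clang/CLT showing N/A means the tools are missing or stale
xcode-select --install                 # (re)install the Xcode Command Line Tools
sudo xcode-select --reset              # optional: reset the active developer directory to the default
```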
@garynagle3093 · 2 years ago
Pretty cool. A nice change, and yet the same great humor.
@iham1313 · 9 months ago
A comparison between M1, M2, and M3 with a (mostly) identical setup (Max models with the same CPU/GPU cores and RAM), using different AI tools like Ollama, Automatic1111, ComfyUI and others, would be awesome, in order to see the performance (mainly speed) differences across the generations.
@ShyBoyEnt · 1 year ago
How does it run on the M2 Mac mini base model (8 GB RAM / 256 GB SSD)?
@tipoomaster · 1 year ago
Is it using the GPU or the Neural Engine on Apple Silicon?
@goodlux777 · 1 year ago
Would love to know how DiffusionBee performs on the M1 Ultra! Could you tell us the seconds per iteration at a given size, say 512x512, in DiffusionBee? I'm actually running it on an Intel Mac (usually between 1-2 seconds per iteration, ouch), but I'm thinking of upgrading to a Mac Studio. Not sure if I should go with an Nvidia-based PC instead.
@stephenirving9846 · 1 year ago
I have an M1 Max and I get 1-4 it/s as well. I'm guessing the code isn't optimized for Macs yet.
@francute2u · 2 months ago
Is there any Stable Diffusion build that is optimized for Mac? You can't do much in DiffusionBee, and Stable Diffusion web UI is pretty bad on a Mac compared with a Windows laptop with Nvidia. There are lots of models on Civitai that don't work in DiffusionBee.
@aeonlancer · 2 years ago
Whatever it is, don't forget not to touch the red button. Remember, monsters have a very bad sense of humor.
@compteprivefr · 1 year ago
This doesn't leverage Apple's Core ML, so it's not as performant as it could be. The developer seems to have ghosted the project: a PR was opened to include the Core ML improvements, but he simply hasn't been active.
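(For reference, the Core ML route mentioned here is what Apple's own apple/ml-stable-diffusion project provides; a minimal sketch, with module and flag names recalled from that repo and possibly out of date.)

```bash
git clone https://github.com/apple/ml-stable-diffusion.git
cd ml-stable-diffusion
pip install -e .
# generate with pre-converted Core ML models (-i points at the converted model directory)
python -m python_coreml_stable_diffusion.pipeline \
  --prompt "a photo of an astronaut riding a horse on mars" \
  -i ./coreml-models -o ./output --compute-unit ALL
```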
@MeinDeutschkurs · 1 year ago
DiffusionBee has come far within the last 7 months. Currently I'm on Automatic1111.
@ShyBoyEnt · 1 year ago
How does it run on the M2 Mac mini base model? Have you heard any information?
@MeinDeutschkurs · 1 year ago
@@ShyBoyEnt M2 Mac mini: DiffusionBee or AUTOMATIC1111? On the M1 Max both run quite nicely, but the results from AUTOMATIC1111 are way better (512x512 in 7 seconds at 27 iterations).
@Opelawal · 1 year ago
Would you recommend a MacBook Air M1 with 16 GB RAM for the following workload: Xcode, 2 simulators, OBS (for recording or live streaming), and a few Chrome tabs? Thanks for all you do.
@CastroDemaria · 1 year ago
Good, but "Draw Things" on M1 works better: extra models, LoRAs, etc. can be used. DiffusionBee works well on M1, but it's too limited. And about your title: to be less spammy, I suggest you clearly mention DiffusionBee.
@florentinhonorius613 · 1 year ago
I have an M1 Pro with a 16-core GPU, 10-core CPU, and 16 GB RAM. Will it be good? And can I run custom Stable Diffusion models? Thanks 😊
@wpherigo1 · 2 years ago
I'm running it on my M1 MacBook Air, 16 GB
@woolfel · 2 years ago
I have Stable Diffusion working on my Windows workstation with an RTX 2060 6 GB. The standard version wouldn't run, so I had to use a fork that is optimized for 4 GB of video memory. Each txt2img run takes 2 minutes. The default script generates 5 images and uses all of the video memory.
@somebrains5431 · 2 years ago
You're in luck: all of a sudden there are 30-series GPUs everywhere. I put off trying this on a 56050u laptop with no dedicated GPU. It's cool; I'm not sure how I'd integrate it into anything, but it's fun to play with.
@woolfel · 2 years ago
@@somebrains5431 I already own several Nvidia video cards. Even though I can afford the RTX 30 series, I'm not gonna buy one; Nvidia has gotten too greedy and arrogant. I just compared InvokeAI running on my M1 Max MBP with a 24-core GPU / 32 GB, and it is faster than my RTX 2060 workstation: M1 Max MBP (24-core GPU, 32 GB) - 30 to 32 seconds for 1 image; Ryzen 3700X, RTX 2060 6 GB, 64 GB - 62 seconds. I'm hoping the RTX 40 series is a big flop and Nvidia gets humbled. If the RTX 3090 drops down to 500, then I'd consider buying one. The RTX 4090 is too power hungry and I'd need to upgrade my PSU, so it's totally not worth it.
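(The 4 GB-friendly fork mentioned earlier in this thread is typically run like the sketch below; the script and flag names are from memory of the commonly used "optimizedSD" fork and may differ in other forks.)

```bash
# lower-VRAM fork: loads the model in parts so 4-6 GB cards can run txt2img
python optimizedSD/optimized_txt2img.py \
  --prompt "a castle on a hill at sunset" \
  --H 512 --W 512 --n_samples 1 --ddim_steps 50
```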
@matthieuhenocque7824 · 2 years ago
Alright, I may have a request for once. Could you run a Resident Evil Village benchmark on your M1 Ultra? I can't find any on YouTube or Reddit, I trust your professionalism, and you work too much. Have a very nice weekend!
@MarkMenardTNY · 2 years ago
I really want to try this, but I just can't bring myself to install it without a code audit.
@AZisk · 2 years ago
oh what’s the worst that could happen? :)
@njpme · 2 years ago
@@AZisk malware? 🤔
@MrSamPhoenix · 1 year ago
What is diffusion?
@jasonhoffman6642 · 2 years ago
Did you check to see whether it was using the SoC's GPU or whether it was all running on the CPU?
@AZisk · 2 years ago
GPUs
@honestview · 1 year ago
5:51 You know the prompt you pasted was Midjourney-specific... --ar 4:3 --version 3 --no text are all commands for Midjourney only, not Stable Diffusion.
@AZisk · 1 year ago
Thanks. Kindly follow up with the SD versions. Cheers.
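(A rough mapping of those Midjourney-only flags onto Stable Diffusion settings, shown here via the Hugging Face diffusers library purely as an illustration; in AUTOMATIC1111 the same things are the width/height sliders and the negative-prompt box. The model name and prompt are placeholders.)

```bash
python3 - <<'EOF'
# minimal sketch: what "--ar 4:3" and "--no text" become in Stable Diffusion terms
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")                 # Apple Silicon GPU via Metal / MPS
image = pipe(
    "a lighthouse on a rocky coast at sunset",
    negative_prompt="text",           # roughly replaces Midjourney's "--no text"
    width=512, height=384,            # 4:3 aspect ratio instead of "--ar 4:3"
    num_inference_steps=40,
).images[0]
image.save("lighthouse.png")
EOF
```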
@rickardbengtsson · 1 year ago
Sweet
@oliver_ai · 2 years ago
Interested in the M1 Ultra video 😅
@totem168 · 2 years ago
I wonder how much CPU it uses. Are all cores working, or only some?
@woolfel · 2 years ago
CPU usage is relatively low compared to GPU usage, from what I see.
@somebrains5431 · 2 years ago
It pegs my GPU according to Activity Monitor. It uses less than 10% total CPU on an M1 Air, depending on your text prompt, supplied image, and in-painting/out-painting. RAM use made me close out everything so it had 8 GB to work with; it was reserving 9 GB and change for the model. Hey, it runs, and my lap is now slightly warmer.
@woolfel · 2 years ago
@@somebrains5431 That's what I see on the M1 Max with a 24-core GPU. GPU history shows 98%, and the CPU was only using 2 efficiency cores. Peak memory usage for the InvokeAI port was about 8 GB for me.
@somebrains5431 · 2 years ago
@@woolfel Cool, it's always nice to know a project will scale with resources. It might be worthwhile to see the difference between CPU and GPU diffusion, and it would be useful to know when the SoC encoders are used. Anyone working in this space would want to plan hardware upgrades accordingly.
@woolfel · 2 years ago
@@somebrains5431 From personal experience with TensorFlow, running on the CPU is at minimum 5x slower on easy stuff and much slower on medium stuff. It's not really worth using the CPU for TensorFlow or PyTorch.
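(A quick way to confirm the GPU path is actually available to those frameworks on Apple Silicon; a minimal sketch, assuming PyTorch and tensorflow-macos with the Metal plugin are installed.)

```bash
# PyTorch: the MPS (Metal) backend should report as available
python3 -c "import torch; print('MPS available:', torch.backends.mps.is_available())"
# TensorFlow: with the tensorflow-metal plugin the GPU shows up as a device
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```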
@peteburkeet · 1 year ago
Actually, I am running DiffusionBee on an M1 right now, and it's really slow; it actually made this video stop playing. The download took 10 minutes. Pretty useless.
@dr.mikeybee · 2 years ago
I run this on my Mac mini from the command line. There's a how-to on my channel.
@edmondhung6097 · 2 years ago
Does the NPU help, or is it just the GPU?
@woolfel · 2 years ago
Running powermetrics alongside the app shows it mainly uses the GPU, some CPU, and zero ANE.
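(The observation above comes from Apple's powermetrics tool; a minimal sketch of sampling GPU, CPU, and Neural Engine activity while an image generates.)

```bash
# sample CPU, GPU, and ANE power every second while the app runs (Ctrl+C to stop)
sudo powermetrics --samplers cpu_power,gpu_power,ane_power -i 1000
```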
@sweealamak628 · 2 years ago
Tried DiffusionBee before. It's creepy.
@grugbrain · 2 years ago
You forgot to try "Alex Ziskind" 🤓🤪🤪
@AZisk · 2 years ago
😂
@sotonin · 1 year ago
Sadly, it's horribly restricted. You can't do high-resolution generation. Unusable.
@honestview · 1 year ago
6:23 It's Midjourney... --ar is the aspect ratio flag for Midjourney.
@max00r · 9 months ago
The worst is when you make something about a topic you have no clue about...
@wkuser · 2 years ago
Has anyone tried running this with only 8 GB of RAM?
@njpme · 2 years ago
The swap will be crazy
@woolfel · 2 years ago
If you use the InvokeAI fork of Stable Diffusion, it's optimized to run on 4 GB of video memory. On Windows the original one needs at least 10 GB to run. There's an optimized fork that will run on 4 GB for those who are running 6 GB Nvidia cards.
@somebrains5431 · 2 years ago
@@woolfel On x86 hardware it's much easier to throw in a RAM upgrade, or grab a used 3070 as a starting point and scale the GPU as your wallet allows.
@woolfel · 2 years ago
@@somebrains5431 For TensorFlow and PyTorch the biggest factor isn't system memory, it's GPU memory. On Newegg an RTX 3070 Ti 8 GB is still 650-750, so that won't really help. To run the non-optimized Stable Diffusion you'd need at minimum 10 GB, which means a 12 GB RTX 3080 for 700-800 bucks. For my money, an M1 Max with 32 GB of unified memory is better bang for the buck.
@honestview · 1 year ago
DiffusionBee sucks, man... Out of the box you get crappy images like the ones you were getting. To get better images you need "model" files that go up to 10 GB, built from many training images. You need to install them in DiffusionBee, but unfortunately it doesn't support the latest models, so you're stuck with crappy images.
@AZisk · 1 year ago
It's amazing what a few months of advancements in AI can do.
@Zherebtsow · 1 year ago
Unfortunately, most of the results from DiffusionBee are garbage and can't be used as art for anything) just some strange, creepy pictures) Why did you say it "looks pretty cool"? It looks pretty bad, to be honest... Let's call things by their names) it's trash.
@AZisk · 1 year ago
Easy to say this a few months in the future :)
@Ss-zg3yj · 11 months ago
Seems like my M1 Max MBP was a mistake. I've had so many issues with my audio interface (RME Babyface Pro FS) over the last few years that it has been unusable for music production. Now it turns out it's complete crap for AI. What the hell, Apple? This could be my last MacBook.
@AZisk · 11 months ago
I use an even older RME (Fireface 800) and it worked fine once I enabled something or other (I forgot what, sorry). Check out the RME forums.
@Ss-zg3yj · 11 months ago
@@AZisk I have been sitting on their forum for the last 2 years with no fix. It's some issue with M1 Pro/Max compatibility; other models seem to work fine.
@ismailfateen · 2 years ago
Yes, I saw DiffusionBee before; it's pretty impressive 🥲