I did a short test, installing SD with Automatic1111 on two machines:
A. Lenovo Legion 5: AMD Ryzen 7 5800H, 64 GB RAM, NVIDIA GeForce RTX 3050 Ti 4 GB, --xformers
B. Mac mini M2 Pro, base model, 16 GB RAM
Text-to-image with standard values, on average:
Lenovo: 3.5-4.6 iterations/s
M2 Pro: 1.3-1.6 iterations/s (the Mac mini used up to 4 GB of swap memory)
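For anyone who wants to reproduce this kind of A/B timing, a minimal sketch using the Hugging Face diffusers library is shown below; the model ID, prompt, and step count are illustrative assumptions, not necessarily what the commenter used.

```python
# Minimal sketch: time Stable Diffusion text-to-image on CUDA (PC) or MPS (Apple Silicon).
# Assumes torch and diffusers are installed; model ID and settings are illustrative.
import time

import torch
from diffusers import StableDiffusionPipeline

# Pick the best available backend: CUDA on an Nvidia GPU, MPS on Apple Silicon, else CPU.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)

steps = 20
start = time.time()
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=steps).images[0]
elapsed = time.time() - start

# Rough figure: includes text-encoder/VAE overhead, so a UI's reported it/s will read a bit higher.
print(f"{device}: {elapsed:.1f} s total, ~{steps / elapsed:.2f} iterations/s")
image.save("benchmark.png")
```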
@alexanderjenkins · 2 years ago
Crazy, I was just wondering if something like this existed... thanks!
@AZisk · 2 years ago
yeah, this stuff is getting more available and accessible
@yriccoh · 2 years ago
Version 1.51 explains the sliders. Increase the steps above 40 and you'll see an improvement.
@RSV9 · 1 year ago
I would like to see a speed comparison between a Mac mini M2 Pro and a Windows laptop with a dedicated graphics card, for example an RTX 3050 Ti. Is it worth installing SD on Apple Silicon?
@GraylandSmith · 1 year ago
I would love to see a comparison between the M1 Max with 64 GB RAM and the Mac Studio. I need an upgrade, and I want a Mac that'll be great with Stable Diffusion, AI, Unity, AR/VR creation, video editing and rendering, huge 4K/8K files (animations), heavy After Effects, and various VFX software. I need something that's especially good for editing and rendering huge files and projects fast... I need a beast of a machine.
@goodlux777 · 1 year ago
I'm going through the same thought process, wanting a single machine to do it all, but Nvidia GPUs seem to always beat Macs and get updates months in advance.
@ScottLahteine · 2 years ago
Nice. I just went through the process of installing Stable Diffusion from the command line on my M1 Pro MacBook Pro (using the lstein fork, not magnusviri), and although the install went pretty smoothly it was a bit of a hassle, requiring an install of MiniConda, for a start. The InvokeAI web interface was easier to install than web-ui, but I want to be able to use both with the same SD install, so hopefully the sidecar web-ui setup will not be too difficult. My MacBook has only 16 GB of RAM and Stable Diffusion uses it all, hitting the VRAM pretty hard. On an 8 GB machine it's going to take the SSD out to the woodshed. I'm going to install it on the Mac Studio M1 Ultra later, and if it's any faster or more impressive there I'll let you know!
@woolfel · 2 years ago
I was trying to install the InvokeAI fork too, but I was having all sorts of issues. Turns out I didn't have LLVM installed, which caused the InvokeAI create-env step to fail. The joys of native code dependencies.
@ScottLahteine · 2 years ago
To follow up: wow, just wow. Like Neo: "Whoa." On the Mac Studio Ultra with 64 GB of RAM it uses around 16 GB at baseline and another 8 GB to diffuse a 512x512 image, taking under 25 seconds to generate each image (50 steps). The GPU is pegged to the limit the whole time. Meanwhile, lstein/stable-diffusion is now invoke-ai/InvokeAI, so… And it's not bad! Now to try some image-to-image, and in-painting, and…
@woolfel · 2 years ago
@@ScottLahteine I was considering submitting a pull request to mention "you need LLVM". It took me like 2 hours to track down that LLVM was causing create-env to die. I'm impressed my M1 Max 24-GPU is faster than an RTX 2060 6 GB: 32 sec vs 65 sec for 1 image.
@ScottLahteine · 2 years ago
@@woolfel The instructions I followed specified that Xcode had to be installed as the first step, which is where most Mac geeks should get their LLVM. On other platforms I didn't notice whether it was mentioned, but a helpful message at startup or during install would be good!
@woolfel · 2 years ago
@@ScottLahteine I have Xcode installed, but what I forgot is that I had updated macOS. After I ran brew config, I saw that clang was null. The fix was simple enough: I just needed to update the additional Xcode tools to the latest version, and then conda created the env just fine :) I created a pull request for their macOS install docs to note the lmdb error.
@garynagle3093 · 2 years ago
Pretty cool. A nice change, and yet the same great humor.
@iham1313 · 9 months ago
A comparison between M1, M2, and M3 with a (mostly) identical setup (Max models with the same CPU/GPU cores and RAM) using different AI tools, like Ollama, Automatic1111, ComfyUI, and others, would be awesome, in order to see the performance (mainly speed) differences across the models.
@ShyBoyEnt · 1 year ago
How does it run on the base-model M2 Mac mini (8 GB RAM, 256 GB SSD)?
@tipoomaster · 1 year ago
Is it using the GPU or the Neural Engine on Apple Silicon?
@goodlux777 · 1 year ago
Would love to know how DiffusionBee performs on the M1 Ultra! Could you tell us the seconds per iteration at a given size, say 512x512, in DiffusionBee? I'm actually running it on an Intel Mac (usually between 1-2 sec per iteration, ouch), but I'm thinking of upgrading to a Mac Studio. Not sure if I should go with an Nvidia-based PC instead.
@stephenirving9846 · 1 year ago
I have an M1 Max and I get 1-4 it/s as well. I'm guessing the code isn't optimized for Macs yet.
@francute2u · 2 months ago
Is there any Stable Diffusion that is optimized for Mac? You can't do much in DiffusionBee, and Stable Diffusion WebUI is pretty bad on a Mac compared with a Windows laptop with Nvidia. There are lots of models on Civitai that don't work in DiffusionBee.
@aeonlancer · 2 years ago
Whatever it is, don't forget not to touch the red button. Remember, monsters have a very bad sense of humor.
@compteprivefr · 1 year ago
This doesn't leverage Apple's Core ML, so it's not as performant as it could be. The developer seems to have ghosted the project: a PR was opened to include the Core ML improvements, but he simply hasn't been active.
@MeinDeutschkurs · 1 year ago
DiffusionBee has come far in the last 7 months. Currently, I'm on Automatic1111.
@ShyBoyEnt · 1 year ago
How does it run on the base-model M2 Mac mini? Have you heard any information?
@MeinDeutschkurs · 1 year ago
@@ShyBoyEnt M2 Mac mini: DiffusionBee or AUTOMATIC1111? On the M1 Max both run quite nicely, but the results from AUTOMATIC1111 are way better (512x512 in 7 seconds at 27 iterations).
@Opelawal · 1 year ago
Would you recommend a MacBook Air M1 with 16 GB RAM for the following workload: Xcode, 2 simulators, OBS (for recording or live streaming), and a few Chrome tabs? Thanks for all you do.
@CastroDemaria · 1 year ago
Good, but "Draw Things" on M1 works better; extra models, LoRAs, etc. can be used. DiffusionBee works well on M1, but it's too limited. And about your title: to be less spammy, I suggest you clearly mention DiffusionBee.
@florentinhonorius613 · 1 year ago
I have an M1 Pro with 16 GPU cores, 10 CPU cores, and 16 GB RAM. Will it be good? And can I run custom Stable Diffusion models? Thanks 😊
@wpherigo1 · 2 years ago
I'm running it on my M1 MacBook Air, 16 GB
@woolfel · 2 years ago
I have Stable Diffusion working on my Windows workstation with an RTX 2060 6 GB. The original wouldn't run, so I had to use a fork that is optimized for 4 GB of video memory. Each txt2img run takes 2 minutes. The default script generates 5 images and uses all of the video memory.
@somebrains5431 · 2 years ago
You're in luck: all of a sudden there are 30-series GPUs everywhere. I put off trying this on a 56050u laptop with no dedicated GPU. It's cool; I'm not sure how I'd integrate it into anything, but it's fun to play with.
@woolfel · 2 years ago
@@somebrains5431 I already own several Nvidia video cards. Even though I can afford an RTX 30-series, I'm not gonna buy one. Nvidia has gotten too greedy and arrogant. I just compared InvokeAI running on my M1 Max MBP with 24 GPU cores / 32 GB, and it is faster than my RTX 2060 workstation:
M1 Max MBP, 24 GPU cores, 32 GB: 30 to 32 seconds for 1 image
Ryzen 3700X, RTX 2060 6 GB, 64 GB: 62 seconds
I'm hoping the RTX 40-series is a big flop and Nvidia gets humbled. If the RTX 3090 drops down to 500, then I'd consider buying one. The RTX 4090 is too power hungry and I'd need to upgrade my PSU, so it's totally not worth it.
@matthieuhenocque7824 · 2 years ago
Alright, I may have a request for once. Could you run a Resident Evil Village benchmark on your M1 Ultra? I can't find any on YouTube or Reddit, I trust your professionalism, and you work too much. Have a very nice weekend!
@MarkMenardTNY · 2 years ago
I really want to try this, but I just can't bring myself to install it without a code audit.
@AZisk · 2 years ago
oh what’s the worst that could happen? :)
@njpme · 2 years ago
@@AZisk malware? 🤔
@MrSamPhoenix · 1 year ago
What is diffusion?
@jasonhoffman6642 · 2 years ago
Did you check to see if it was using the SoC GPU or if it was all running on the CPU?
@AZisk · 2 years ago
GPUs
@honestview · 1 year ago
5:51 You know the prompt you pasted was Midjourney-specific... --ar 4:3 --version 3 --no text are all parameters for Midjourney only, not Stable Diffusion.
@AZisk · 1 year ago
Thanks. Kindly follow up with the SD versions. Cheers.
@rickardbengtsson · 1 year ago
Sweet
@oliver_ai · 2 years ago
Interested in the M1 Ultra video 😅
@totem168 · 2 years ago
I wonder how much CPU it uses. Are all cores working, or only some?
@woolfel · 2 years ago
CPU usage is relatively low compared to GPU usage, from what I see.
@somebrains5431 · 2 years ago
It pegs my GPU according to Activity Monitor. It uses less than 10% total CPU on an M1 Air, depending on your text string, supplied image, in-painting, or out-painting. RAM use made me close out everything so it had 8 GB to work with. It was reserving 9 GB and change for the model. Hey, it runs, and my lap is now slightly warmer.
@woolfel · 2 years ago
@@somebrains5431 That's what I see on the M1 Max 24-GPU. GPU history shows 98%, and the CPU was only using 2 efficiency cores. Peak memory usage for the InvokeAI port was about 8 GB for me.
@somebrains5431 · 2 years ago
@@woolfel Cool, always nice to know a project will scale with resources. It might be worthwhile to see the difference between CPU and GPU diffusion. It will be useful to know when the SoC encoders are used. Anyone working in this space would want to plan hardware upgrades accordingly.
@woolfel · 2 years ago
@@somebrains5431 From personal experience with TensorFlow, running on the CPU is at minimum 5x slower on easy stuff and much slower on medium stuff. It's not really worth using the CPU for TensorFlow or PyTorch.
@peteburkeet · 1 year ago
Actually I am running DiffusionBee on an M1 right now and it's really slow; it actually made this video stop playing. The download took 10 minutes. Pretty useless.
@dr.mikeybee · 2 years ago
I run this on my Mac mini from the command line. There's a how-to on my channel.
@edmondhung6097 · 2 years ago
Does the NPU help, or is it just the GPU?
@woolfel · 2 years ago
Running powermetrics while the app works shows it mainly uses the GPU, some CPU, and zero ANE.
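To check the same thing from the Python side (for the PyTorch-based UIs rather than DiffusionBee), a small sketch like the following shows whether the MPS backend, i.e. the Apple GPU, is available and in use. This is an illustrative snippet, and note that PyTorch does not target the ANE at all.

```python
# Sketch: confirm whether PyTorch can use the Apple GPU via the MPS backend.
# The Neural Engine (ANE) is not exposed to PyTorch, so SD runs on the GPU and/or CPU.
import torch

print("MPS built:    ", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
x = torch.randn(2048, 2048, device=device)
y = x @ x  # this matmul runs on the Apple GPU when device is "mps"
print("Ran matmul on:", y.device)
```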
@sweealamak628 · 2 years ago
Tried DiffusionBee before. It's creepy.
@grugbrain · 2 years ago
Forgot to try "Alex Ziskind" 🤓🤪🤪
@AZisk · 2 years ago
😂
@sotonin · 1 year ago
Sadly it's horribly restricted. You can't do high-resolution generation. Unusable.
@honestview · 1 year ago
6:23 It's Midjourney... --ar is aspect ratio for Midjourney.
@max00r · 9 months ago
The worst is when you do something you have no idea about...
@wkuser · 2 years ago
Has anyone tried running this with only 8 GB of RAM?
@njpme · 2 years ago
The swap will be crazy
@woolfel · 2 years ago
If you use the InvokeAI fork of Stable Diffusion, it's optimized to run on 4 GB of video memory. On Windows the original one needs at least 10 GB to run. There's an optimized fork that will run on 4 GB for those who are running 6 GB Nvidia cards.
@somebrains5431 · 2 years ago
@@woolfel On x86 hardware it's much easier to throw in a RAM upgrade or grab a used 3070 as a starting point and scale the GPU as your wallet allows.
@woolfel · 2 years ago
@@somebrains5431 For TensorFlow and PyTorch, the biggest factor isn't system memory, it's GPU memory. On Newegg an RTX 3070 Ti 8 GB is still 650-750, so that won't really help. To run the non-optimized Stable Diffusion you'd need at minimum 10 GB, which means a 12 GB RTX 3080 for 700-800 bucks. For my money, an M1 Max with 32 GB of unified memory is better bang for the buck.
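For context on what those low-VRAM forks are doing, the diffusers library exposes a comparable memory-saving switch, attention slicing, which trades a little speed for a much smaller peak VRAM footprint. The sketch below illustrates the general technique under that assumption, not the exact code of the forks mentioned above; the model ID is illustrative.

```python
# Sketch: reduce peak GPU memory for Stable Diffusion (useful on ~4-6 GB cards).
# One common low-VRAM technique; the specific forks discussed above may differ.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,   # half precision roughly halves GPU memory use
).to("cuda")

pipe.enable_attention_slicing()  # compute attention in slices instead of all at once

image = pipe("a watercolor painting of a lighthouse", num_inference_steps=30).images[0]
image.save("lighthouse.png")
```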
@honestview · 1 year ago
DiffusionBee sucks, man... Out of the box you get crappy images like the ones you were getting. To get better images you need "model" files that go up to 10 GB, built from many images distilled into an algorithm. You need to install them in DiffusionBee, but unfortunately it doesn't support the latest models, so you're stuck with crappy images.
@AZisk · 1 year ago
It's amazing what a few months of advancements in AI can do.
@Zherebtsow · 1 year ago
Unfortunately most of the results from DiffusionBee are garbage and can't be used as art for anything, just strange, creepy pictures. Why did you say "looks pretty cool"? It looks pretty bad, to be honest... Let's call things by their names: it's trash.
@AZisk · 1 year ago
easy to say this a few months from the future :)
@Ss-zg3yj · 11 months ago
Seems like my M1 Max MBP was a mistake. I've had so many issues with my audio interface (RME Babyface Pro FS) over the last few years that it's been unusable for music production. Now it turns out it's complete crap for AI. What the hell, Apple? This could be my last MacBook.
@AZisk · 11 months ago
I use an even older RME (Fireface 800) and it worked fine once I enabled something or other (I forgot, sorry). Check out the RME forums.
@Ss-zg3yj · 11 months ago
@@AZisk I've been sitting on their forum for the last 2 years with no fix. It's some issue with M1 Pro/Max compatibility. Others seem to work fine.
@ismailfateen · 2 years ago
Yeah, I saw DiffusionBee before; it's pretty impressive 🥲