Most of the KZbin-based laptop reviews are aimed at video processing, for people who create videos on KZbin. I'm glad you're bringing in some AI here, which is what I've been looking for. We need more channels like yours.
@gaiustacitus4242 · 1 month ago
The system recommendations for running Llama 3.1 405B include 16 to 32 A100 GPUs with 80 GB RAM or H100 GPUs with 96 GB RAM, 2 TB of system memory, AMD EPYC or Intel Xeon processors, a minimum of 4 TB of fast NVMe SSD for the model plus additional storage for dataset handling, logs, checkpoints, backups, etc., and power supplies capable of delivering 10 kW or more. A well-built computer for running this model will cost between $550,000 and $1,200,000, and that's not counting electrical work and provisions for an adequate cooling system.
@xcreate · 1 month ago
Apple silicon walks into the room...
@MrSamPhoenix · 1 month ago
And yet the Apple processors do very well with it anyway.
@Dragoneye_Dragoneye_ · 1 month ago
Way too cheap 😅 This kind of computer is way too boring 😂 PS: who the fuck has the money to build such a computer 😅
@Teluric2 · 1 month ago
@xcreate No way.
@fVNzO · 1 month ago
You could easily get the model running on a sub-$100K system at full Q8. The whole system should consume around 3 kW. You don't need 2 TB of system memory at all, or the most expensive EPYC CPUs, especially in plural. It can be done on a single socket and motherboard, and run off a single plug in your house at 230 V. What you're proposing would be incoherently overkill for just this workflow. Perhaps if you ran double precision the system would climb into the $200K range, but that's about it.
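The disagreement above comes down to simple arithmetic: weight memory is parameter count times bits per weight. A back-of-envelope sketch (weights only; KV cache, activations, and framework overhead are extra):

```python
# Rough weight-memory footprint of Llama 3.1 405B at common precisions.
# Weights only -- KV cache, activations, and serving overhead are extra.

PARAMS = 405e9  # parameter count of Llama 3.1 405B

def weight_gb(params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB for a given precision."""
    return params * bits_per_weight / 8 / 1e9

for name, bits in [("FP16", 16), ("Q8", 8), ("Q4", 4)]:
    print(f"{name}: ~{weight_gb(PARAMS, bits):,.0f} GB")
```

At Q8 that's roughly 405 GB of weights, which is why a single-socket box with a few hundred GB of fast memory can host the model for inference; 2 TB of RAM and dozens of GPUs are training-scale provisioning, not an inference requirement.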
@Aaron_Bleu · 28 days ago
Watching this on my 16GB RAM Intel MacBook Pro made me feel like I should be in a mud hut in Africa. Great video
@xcreate · 27 days ago
The 16GB Intel Macs are still beasts for most tasks. I remember when I worked on my first iOS app (which actually got a Staff Pick from Apple), I was using the cheapest second-hand first-gen 2GB RAM MacBook Air from eBay. Loved that computer.
@Aaron_Bleu · 27 days ago
@xcreate That's awesome, and I do still love this Intel Mac. It's perfect for basic stuff like content consumption, email, word processing and PowerPoints. The battery is pretty bad, though, and it gets REALLY hot when I ask it to hold multiple applications open 😂 I mostly have it docked.
@xcreate · 27 days ago
Something you can do to really help battery life is to disable Turbo Boost. I did a guide a few years ago about it; not sure if it still works as I no longer have an Intel Mac, but back then it helped a lot: kzbin.info/www/bejne/rXqroZVniZt_iNE
@Aaron_Bleu · 27 days ago
@xcreate Oooo, I'll check it out. Appreciate your content, brother.
@ABUNDANCEandBEYONDATHLETE · 19 hours ago
Same. Mine gets so hot. Sensors say 100°C at the die! How is that sustainable? It burns my nuts off; I need ice packs just to use it. I swear it was fine when I got it, on the original software.
@chocoloco9985 · 1 month ago
I had a MacBook Air M1 as my first Mac, with 8GB of RAM. Red memory-pressure peaks all the time; I had to restart the computer multiple times per day. I love having a bunch of open windows, and that simply was not possible on 8GB. Now I have an M3 Max with 128GB and have not had a problem.
@sriramsadineni · 1 month ago
I need help deciding between 128GB and 64GB of RAM for running a local LLaMA model to train on 2 million records as of today. I'm a newbie to machine learning.
@xmanreturn · 1 month ago
Wait for the Mac Studio M4 Ultra.
@westdecker · 1 month ago
Thank you for pushing these tests like this. Very helpful!
@psznt · 1 month ago
Hi! Thanks for the video, just what I was looking for :) Can you share the workaround for ComfyUI with the 128GB?
@Mrdastapleton · 20 days ago
Hi, could you please point us to the workaround for the 128GB RAM issue?
@xcreate · 15 days ago
Sure, just set the height to 512
@MeinDeutschkurs · 1 month ago
I'd need 256GB of unified memory to have ~160GB of VRAM available at full memory bandwidth; Apple does not let the GPU address the full memory as VRAM. Why that much? The M2's 192GB is not enough for a 128k context window if the model has more than 3B params. (Yes, already quantized.)
@xcreate · 1 month ago
Hopefully they'll give us more RAM with the M4 Ultra
@MeinDeutschkurs · 1 month ago
@xcreate Yeah! Hopefully!
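The 128k-context memory wall discussed above is mostly KV cache, which grows linearly with context length. A rough sketch, using a hypothetical Llama-70B-style config (80 layers, head dim 128, fp16 cache) purely as an illustration:

```python
# Estimate KV-cache size: the cache stores one key and one value vector
# per layer per token, so memory grows linearly with context length.

def kv_cache_gb(context_len: int, n_layers: int = 80, n_kv_heads: int = 8,
                head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem  # K and V
    return context_len * per_token / 1e9

# Grouped-query attention (8 KV heads) vs. full multi-head (64 KV heads):
print(f"GQA, 128k ctx:    ~{kv_cache_gb(131072):.0f} GB")
print(f"no GQA, 128k ctx: ~{kv_cache_gb(131072, n_kv_heads=64):.0f} GB")
```

Without grouped-query attention the cache alone passes 300 GB at 128k tokens, before counting any weights, which is consistent with 192GB of unified memory running out.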
@TerkleTon · 1 month ago
Any LLM >= 70B will run terribly. I did this experiment last year with the M3 and came away disappointed. The M4 unfortunately doesn't speed things up enough for large models.
@xcreate · 1 month ago
You're disappointed, I'm impressed. Equilibrium has been reached.
@설리-o2w · 1 month ago
A 72B model at quant 4 runs at around 10-12 tokens per second on the M4 Max. That is more than usable. Find me another way to run a 72B quant-4 LLM locally that's cheaper than a Mac??
@descendency · 1 month ago
@설리-o2w I'm mildly curious what an M4 Ultra Mac Studio, with presumably 256GB of memory, will be able to do.
@xcreate · 1 month ago
Me too
@ingalz11 · 1 month ago
I'm running 70B 4-bit quantized on a 64GB M3 Max, getting around 5-6 t/s. That's pretty usable for me, considering the A6000 gets about 10-15 t/s, costs more, and isn't portable. Based on what I see, I predict the base M4 Ultra is likely to reach near parity with, or slightly exceed, the A6000.
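The t/s figures in this thread line up with a simple memory-bandwidth model: single-stream decoding reads every weight once per token, so tokens per second is at most bandwidth divided by model size. A sketch (bandwidth figures are Apple's published specs; treat the results as ceilings, not benchmarks):

```python
# Bandwidth-bound decode ceiling: t/s <= memory bandwidth / bytes per token.

def decode_ceiling_tps(model_gb: float, bandwidth_gbs: float) -> float:
    """Upper bound on decode tokens/sec for a bandwidth-bound machine."""
    return bandwidth_gbs / model_gb

model_q4_70b = 70e9 * 4 / 8 / 1e9  # ~35 GB of weights at 4-bit
print(f"M4 Max (546 GB/s): <= {decode_ceiling_tps(model_q4_70b, 546):.1f} t/s")
print(f"M3 Max (400 GB/s): <= {decode_ceiling_tps(model_q4_70b, 400):.1f} t/s")
```

Real throughput lands below the ceiling (attention, KV-cache reads, scheduling), so the reported 10-12 t/s on an M4 Max and 5-6 t/s on a 64GB M3 Max are plausible.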
@sleepyelk5955 · 1 month ago
Do you have any good advice to avoid those crashes? Is there no way to tell exo to grab less memory on certain systems?
@roopneetsingh · 1 month ago
I'm currently using a 2017 MacBook Pro (Intel, 8GB) for my day-to-day work and small Excel models in Parallels. Now I need to use Parallels EXTENSIVELY for building predictive models in Excel (large sheets) and Python coding for data science models (datasets ~10GB). Should I go for a MacBook Pro M4 Pro with the 12-core CPU / 16-core GPU and 24GB or 48GB of unified memory, or the 14-core CPU / 20-core GPU with 24GB or 48GB?
@xcreate · 1 month ago
Apple usually do a 2-week easy-returns policy, so if you're short on money you can go for the lower-spec one and test it out; try things like running Excel natively on macOS to alleviate the VM memory hog. But if I were you, I'd go for the higher-spec one. You're doing it for work, it's a business case: focus on being more productive rather than tinkering.
@petersuvara · 1 month ago
Been a while! Welcome back
@SophiaHestia · 25 days ago
Hello 😊 Is a 64GB M4 Pro with the 20-core GPU OK for making a 1-minute 2D animated short (After Effects, Toon Boom Animation...)?
@xcreate · 25 days ago
Yeah, more than good enough. Apple usually have a 2-week easy-returns policy, so you can always try it out for yourself.
@mitchchen7389 · 4 hours ago
Great review. I got myself a 64GB M4 Pro mini and was wondering if I should have gone for 128GB. This helped me answer that, thanks!
@kwameryan · 1 month ago
For music production with lots of large virtual instruments and headroom for multitasking, 128GB is the way to go.
@SkyNinjaScythe9Realm · 1 month ago
Test Autodesk 3ds Max on the M4 Max
@sder-v2v · 1 month ago
Great to see you post stuff again!
@tivtag · 1 month ago
How is the nano-texture display for text clarity (coding)?
@gor5685 · 1 month ago
I'm wondering the same thing!
@xcreate · 1 month ago
I'll give it a go next time I go to the Apple store
@acasualviewer5861 · 1 month ago
How dumb that code crashes because of too much system RAM... some programmer really messed up there.
@xcreate · 1 month ago
It happens
@acasualviewer5861 · 1 month ago
@xcreate I wonder how? I don't even know how to write code like that. if (total_system_ram > 64) exit(-1) ???
@xcreate · 1 month ago
Yeah, things of that nature. As it's an asset, it could be something as simple as:

size = available_ram * 0.9;
working_memory = malloc(size);
if (size > really_big_size)
    exit(-1); // must be an error, so exit to be safe
@HayesHaugen · 24 days ago
@acasualviewer5861 You've heard of 32 bits, right? We have 64-bit processors now, but many people still think 32 bits is fine. The maximum value 32 bits can hold is just over 4 billion (2 to the power of 32). If you have 128 billion bytes of memory, it is not that hard to have more than 4 billion of something in that memory. So the program sees that there is plenty of space for 10 billion items, but then sticks them in a list that can only hold 4 billion items, and the program crashes. That's a simplified way of looking at it, but I hope you get the idea. Related: ~2 billion seconds from Jan 1, 1970 will be a problem in 2038.
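The 32-bit failure mode described above is easy to demonstrate: a count that is perfectly reasonable on a 128 GB machine silently wraps modulo 2^32 when forced into a 32-bit field. A minimal sketch:

```python
# An item count that fits fine in 128 GB of RAM silently wraps when stored
# in an unsigned 32-bit field -- one way "too much RAM" crashes a program.
import ctypes

items = 10_000_000_000                   # 10 billion entries
wrapped = ctypes.c_uint32(items).value   # what a 32-bit counter would hold
print(items, "->", wrapped)              # wraps modulo 2**32
assert wrapped == items % 2**32
```

The wrapped value (about 1.4 billion here) is exactly the kind of nonsense number that a downstream sanity check would treat as an error and abort on.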
@PINKGOFFICIAL · 1 month ago
I'm using image stacks of 200 files at 90MB-200MB per image, and running AI tools on them. I bought the 16" M4 Max with 128GB of RAM and an 8TB SSD.
@xcreate · 1 month ago
Congrats mate, are you gonna get the new Mac Studios too when they come out, hopefully with 256GB and M4 Ultras?
@djfreaknique1587 · 1 month ago
I hate you!!!!
@sleepyelk5955 · 1 month ago
I got the 16" 128GB with 4TB, for kind of the same reason, and I also use VMs for pen tests and development... but I am very disappointed by the keyboard. May I ask: do you also notice a different feel and sound from the keys in the middle of the keyboard compared with those at the left and right? To me it sounds like there is some kind of space underneath those keys, and it's kind of annoying having a different feel while typing.
@DaveEtchells · 1 month ago
How about groups of agents running in smaller models as a use case for 128 GB? Multi-agent systems seem to be capable of some very impressive stuff, even running relatively modest models.
@khoifoto · 1 month ago
My memory pressure is already in the yellow and red with 64GB. Large Photoshop files and a bunch of high-res meshes in Illustrator eat a lot of RAM.
@rogerhuston8287 · 7 days ago
128GB would allow me to use PGGB to upsample DSD audio. Of course, that's only enough for one channel at a time; I'd need 256GB to do both channels at the same time.
@M.W.777 · 1 month ago
Just found your channel! Thanks for the tips and heads up on my next purchase -- You have a new sub :)
@greanch1234 · 1 month ago
I need 128GB to run a swarm of containers for my application locally.
@shawn_bullock · 1 month ago
128 GB for LLMs, VMs, and more than 5 Chrome tabs
@SoulCal24 · 1 month ago
Xcreate, can we get your opinion on the nano-texture vs glossy screen?
@xcreate · 1 month ago
I'll give it a go next time I go to the Apple store
@materialvision · 1 month ago
Can you link to the text-to-video project?
@xcreate · 1 month ago
Sure, it's just Stable Diffusion. I'll try to do a setup guide for Mac if you can't get it to work.
@arozendojr · 1 month ago
I have the impression that the MacBook uses swap memory even when it has free RAM; it seems some operating system programs use swap exclusively.
@gaiustacitus4242 · 1 month ago
If you're going to run a 70B LLM, then you need to purchase not only the 128GB of RAM but also the 8TB SSD. Otherwise, the constant swapping will greatly shorten the life of the storage. I've seen MacBook Pros which, six months from purchase, had less than 250GB of the original 1TB remaining as usable storage.
@PeterRince · 1 month ago
Or maybe some apps have... memory leaks, and the "dead" memory ends up in a swap file until the app itself is terminated.
@Teluric2 · 1 month ago
It's that way. If a Mac has 8 gigs, that doesn't mean the Mac is using 8 gigs.
@KidIndia · 1 month ago
Hey my man, I do After Effects, Cinema 4D + Redshift + Houdini. I got a 64GB M4 Max. Do you think I made the right choice? I thought 128GB was a bit too expensive.
@mishalw210 · 1 month ago
4TB?
@KidIndia · 1 month ago
@ 1TB
@xcreate · 1 month ago
Apple usually have a 2-week easy-returns policy, so just give it a go with the projects you use and see how you get on. 64GB is way more than enough for 90% of my use cases. And think about it: if you overspend this year, you're less likely to justify an upgrade next year, when everything will be faster again.
@KidIndia · 1 month ago
@ OK, thank you for the advice my man.
@leemehdinur3766 · 1 month ago
The only reason you need the 128 is if you want to run the mothership from the other side of the galaxy!!!!
@krishnansrinivasan830 · 1 month ago
Awesome & Thanks :)
@ericguizzetti · 1 month ago
This was so far over my head that my neck hurts 😅
@saibalaji4713 · 1 month ago
Long time no see
@mralxndr2846 · 18 days ago
I'd use it for Q8 Llama 3.3 70B.
@BubbleVolcano · 1 month ago
Llama 70B might work as fast as some of the online chatbots.
@fintech112 · 28 days ago
Nicely done! I'm just jumping into LLM/AI stuff and was confused; only your video was able to clear my mind. Gonna buy 64GB for now, and once I'm good enough with LLMs I'll get something super heavy to play with just for fun. Thanks man!
@sleepyelk5955 · 1 month ago
Thanks for the great coverage. I also got a 16" with 128GB of RAM and am really thinking about returning it, because it is not "perfect" in the way I thought it would be... as you said, still bugs in software with 128GB, plus the "funny" coil whine. Do you know where it comes from? I can run every Cinebench 24 test, compile big projects, play WoW and Cyberpunk etc. with no coil whine; it only appears when I use Ollama and a text is generated. Why? And concerning the keyboard: you currently own several 16" versions, right? Did you notice any difference in the behaviour of the keys themselves? I get a different feel and sound from the ones in the middle of the keyboard, as if there were some kind of space underneath, which leads to a loose feeling; they sound rather hollow and feel different. Do you also notice such behaviour?
@xcreate · 1 month ago
Apple usually do a 2-week easy-returns policy (it might even be longer for the Christmas gifting period), so there's no harm in trying it out, and if you can't justify it this year, it'll at least keep you more open to upgrading next year when everything will be faster again. Coil whine is normal; while it is louder on the M4 Max than the M1, they are pumping more watts into them this year, and I remember the old AMD GPUs being loud too. I mainly use an external keyboard, so I haven't had any issues with the internal one so far. Have you?
@sleepyelk5955 · 1 month ago
@xcreate Thanks for your fast response :) I also guess the coil whine is kind of normal given the performance. I mean, I have a 4090 FE in my gaming PC and it goes crazy with coil whine; here it is way less noticeable. I'm currently doing some tests, but with 2 VMs running plus Ollama I'm already at 70GB of RAM, so 128 does make sense to me. I will do some further tests to check whether I run into trouble/bugs. My internal keyboard is really bad, as stated above: a different feel and sound from certain keys, especially the ones in the middle, as if there were a space below. They make a kind of hollow sound and don't feel as stiff as the ones at the edges. Not sure if that's normal for the 16" nowadays or if I got bad luck with the keyboard.
@GatoPaint · 1 month ago
Buying a custom-built MacBook is a bad investment JUST BECAUSE of the resale value, AND because it's hella expensive now; with more upgrades it will also be harder to resell. I'd upgrade something like an M4 Max base model only if I hated money or used too much internal storage; other than that, it's not a good decision, I'd say.
@xcreate · 1 month ago
Yes, 100% agree. Laptops are tools, not investments; if you don't need the upgrades for a business case, you shouldn't buy them.
@ugurcamtas2919 · 1 month ago
Where have you been, mate...
@xcreate · 1 month ago
Happily using my M1 Max, and you?
@ugurcamtas2919 · 1 month ago
@ All good, was waiting for a new video from you ;)
@mimimoo4493 · 1 month ago
95% of the comments section are probably just browsing Chrome.
@TheCyberCraft · 1 month ago
Amazing content!! Let's gooo
@nomad4x · 1 month ago
128GB, local LLMs
@JamesLee-lq8qb · 14 days ago
128GB on a laptop is insane!
@michaellatta · 1 month ago
Larger AI models and other coding activities.
@ThePurpleSnork · 1 month ago
You can be Mac for life, or you could buy an x64-based PC, drop in some Quadro RTX cards, and smoke a $5,000 MacBook. The new MacBooks are awesome for most things, but serious AI workloads aren't one of them.
@kninezbanks · 1 month ago
A flagship Quadro GPU alone costs more than a maxed-spec MacBook Pro 16, and that Quadro maxes out at 48GB of RAM. Tests show that Nvidia's desktop flagship is only two, sometimes three, times as fast as the M4 Max, while using 300 to 450 watts versus the 50 watts of the M4 Max, and the Mac has 128GB of GPU memory. The M4 Ultra will be a single monolithic chip with 256GB of GPU memory, while Nvidia's next gen will max out at 96GB. So the M4 Ultra will probably be better value than an Nvidia card for AI workloads.
@jamesprentor8433 · 1 month ago
If you've got the money and want to go for it, go ahead; it's definitely better to get a higher-throughput setup. But the MacBooks are the cheapest things you can get for such high effective VRAM. And it's a laptop, too, that doesn't use much power. It's just a bit of a tradeoff, I guess.
@ThePurpleSnork · 1 month ago
@kninezbanks You don't need a flagship Quadro; you can buy several older cards. The M4 Ultra isn't going to be a better value in any way, and I say that as a person typing on an M4 Max. The words Apple and value rarely go together.
@diverman1023 · 1 month ago
@ThePurpleSnork And yet every other point he made is very valid. The alternative you suggest is bulky, has massive power consumption, runs hot, and frankly, hunting for used GPUs that were used for mining crypto and are still very expensive for what they offer isn't a great alternative.
@kninezbanks · 1 month ago
@ThePurpleSnork "You don't need a flagship Quadro." You don't need a 4070 or above to play games either; a 4060 is capable enough to play any game. But if you want higher graphics, higher resolutions, higher framerates, you pay for more expensive graphics cards. You don't need a flagship Quadro, but if you need a lot of memory you don't have many options, since memory capacity is linked to price. The M4 Ultra will offer up to 256GB of unified CPU & GPU memory. If you are working with gigantic-parameter models, it will outperform all Quadro cards once those run out of memory. The M4 Ultra will start at $4,000; with 256GB it will probably cost around $6,500. There are several reviewers showing the M4 Max beating the 4090 & Quadro flagships at the highest parameter counts. It is what it is.
@awsom82 · 1 month ago
Take the base M4, it's sufficient.
@sv120 · 17 days ago
Sufficient for what? Target practice?
@djititude2373 · 29 days ago
Thanks for your video, very technical with strong opinions. Just found you; when possible, please post something similar with a Windows PC with an NVIDIA recommendation, since from your statements a CUDA-core solution is better than the Apple silicon solution. I intend to do a PhD in data science, but focused on computer architecture, networking, and clustering rather than on code. Best regards. kzbin.info/www/bejne/e6aboINtealgqNE
@robertfloyd4287 · 1 month ago
Mac Mini Ultra! I don't think Apple is going to let that happen.
@acasualviewer5861 · 1 month ago
I think he meant Mac Studio
@DanielNeubauer · 1 month ago
Since the Mac Mini Pro is already in fan-noise city, I doubt that will happen, but in his defense he meant the Mac Studio Ultra.