Awesome video! I would love to see more LLM or other DL architectures benchmarked between the M3 Max and the RTX 4090m laptop. A definitive video saying the M3 Max is X% better/worse than the 4090m for RNN, CNN, or transformer architectures would be a gold mine for other AI/ML devs like me!
@asjsjsienxjsks673 • 10 months ago
No, it's not faster. You're not using faster-whisper. Also, the Python implementation absolutely uses the GPU: set the device to mps.
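(For reference, a minimal sketch of what "set device to mps" means in practice; this is my illustration, not from the video. It assumes PyTorch is installed and falls back to CPU otherwise.)

```python
# Pick the fastest PyTorch device available, so Whisper's GPU path is
# used on Apple Silicon (mps) or Nvidia (cuda); falls back to cpu.
def pick_device() -> str:
    try:
        import torch
        if torch.backends.mps.is_available():  # Apple Silicon GPU
            return "mps"
        if torch.cuda.is_available():          # Nvidia GPU
            return "cuda"
    except ImportError:
        pass
    return "cpu"

# hypothetical usage with openai-whisper:
#   model = whisper.load_model("base", device=pick_device())
print(pick_device())
```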
@parmeshwarmathpati2916 • 10 months ago
Yes, can we discuss hardware setups for building LLMs?
@stephanemignot100 • 10 months ago
Try that unplugged...
@asjsjsienxjsks673 • 10 months ago
@@stephanemignot100 Of course, man. If you plug it in, it's faster; if you leave it unplugged, it's slower. I'm not debating the fact that the M3 Max is a wonderful chip. All I'm saying is that the Nvidia 4090 at its peak capability is faster. If you want to say the battery life is worse, I'm absolutely not denying that, but the M3 Max GPU is not faster than the 4090.
@RunForPeace-hk1cu • 10 months ago
@@asjsjsienxjsks673 The 4090 doesn't have 19GB of VRAM 😂
@asjsjsienxjsks673 • 10 months ago
@@RunForPeace-hk1cu where did I say that?
@roccellarocks • 10 months ago
Watched tens of your videos before upgrading from my old i9 MacBook Pro to my M3 Max MacBook Pro. Nowadays I still watch your videos (even if I already have an M3 MacBook) because I like the way you make your content - pragmatism, tone of voice, length and cuts. 👏
@tybaltmercutio • 10 months ago
How much RAM did you get? I cannot decide between 36 GB, 48 GB or maybe even 64 GB (for future proofing).
@marc-andrevoyer9973 • 10 months ago
@@tybaltmercutio Same situation as OP; went with the 14" and 64GB of RAM.
@Physbook • 10 months ago
I just keep my i9 MacBook Pro alongside my Alienware with an RTX 4090.
@randysavage7351 • 10 months ago
Found your channel from a Fireship vid ~2 years ago. Awesome stuff!
@Itcornerbg • 10 months ago
Hey, amazing video, very useful. 5:18 - I'd be interested to see a video on how to install Whisper with GPU support.
@AZisk • 10 months ago
Coming soon!
@Itcornerbg • 10 months ago
@@AZisk - I'm already testing with an Nvidia P40, but it was interesting to see your results.
@skyhawk21 • 7 months ago
Can you make iPad and iPhone app versions of these tests so we can benchmark the M4 on iPad in a couple of days?
@mr_ww • 10 months ago
Thank you! What is the correct way to compare my current AMD Radeon Pro 5300M 4GB (MacBook Pro 2019) to Apple M-series silicon, in terms of the MacBook gaming experience? I play a game from time to time and would like to make sure an M chip won't take that away from me :)
@Zhiyuai • 10 months ago
To fine-tune Llama on the M3 Max, what size of Llama works? How fast? Can you release a video on this topic?
@stephensiemonsma • 10 months ago
Wow, exciting results! I was always optimistic that Apple's unified memory architecture would pay dividends in certain workloads, and MLX appears to be effectively exploiting that paradigm shift. Keep up the good work! Love the channel!
@donaldadugbe6912 • 7 months ago
Why don't you run Linux on the 4090 PC?
@RolexChan • 26 days ago
Because he wants to compare two notebooks.
@Fledermaus-20 • 10 months ago
Very nice video, but can you try Faster Whisper for Python on your devices?
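(For anyone curious, a hedged sketch of the faster-whisper API the commenter is referring to; assumes `pip install faster-whisper` and a hypothetical local file "audio.mp3". The lazy import keeps the function importable without the package installed.)

```python
# Transcribe an audio file with faster-whisper (CTranslate2 backend).
def transcribe(path: str, model_size: str = "base") -> str:
    from faster_whisper import WhisperModel  # lazy: needs faster-whisper
    # int8 keeps memory low; "auto" picks CUDA when available, else CPU.
    model = WhisperModel(model_size, device="auto", compute_type="int8")
    segments, _info = model.transcribe(path)
    return " ".join(seg.text.strip() for seg in segments)

# usage (hypothetical file): print(transcribe("audio.mp3"))
```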
@crearg8259 • 10 months ago
Wait, what! When I checked a week or two ago, Whisper still didn't support Metal!
@asjsjsienxjsks673 • 10 months ago
I've been using Whisper with Metal via Python and whisper.cpp for months now.
@divyanshbhutra5071 • 10 months ago
Nvidia seriously needs to up their game on VRAM capacity. But why would they, when their competitors are as useless as Intel and AMD?
@utkarsh1874 • 10 months ago
Or Apple.
@divyanshbhutra5071 • 10 months ago
@@utkarsh1874 Apple chips have a lot of memory.
@PSYCHOV3N0M • 10 months ago
@@divyanshbhutra5071 Nvidia is working on ARM. They'll release something more powerful (even without tight optimization) than what Apple can ever hope to achieve.
@RunForPeace-hk1cu • 10 months ago
And kill off the H100 market? 😂😂😂😂😂 You're so naive.
@RunForPeace-hk1cu • 10 months ago
@@utkarsh1874 The M2 Ultra has 192GB of memory 😂😂😂😂 What are you on about?
@hifidality • 1 month ago
If you aren't using MacWhisper Pro already, it's the most convenient way to use Whisper on a Mac, including some very nice touches for subtitles and podcasts.
@hsniranjanrao • 1 month ago
NVIDIA and Windows do support UML. Some configuration issue.
@GabrielThaArchAngel • 10 months ago
Hey Alex, I was wondering if you have a video planned on your EDC as a software engineer. I've been looking for a light case I can carry around for my 16" MacBook with the 12.9" iPad. Trying to get ideas of what you use.
@falcon60 • 1 month ago
Do an M4 Pro test.
@ToySeeker • 10 months ago
Hi Alex! ❤ love ya my guy 😊your videos are incredible! Can’t wait to fork 🍴
@chrisa5304 • 10 months ago
Want to watch the Stable Diffusion one. Want to meet up? I'm in the DMV.
@Anshulb04 • 10 months ago
7:23 Vision Pro Light Seal Cushion spotted 👀
@AZisk • 10 months ago
You got me. I still have mine.
@Alexis_Noukan • 7 months ago
In French bilot sounds like “be low”
@rupertchappelle5303 • 10 months ago
Two MacBook Pros died after 14 months. If I could buy a new one every year, that would be just GREAT. 8GB of RAM is not enough, but Apple figures that profits are better than selling a computer with enough memory to do the job. "Job" - does that remind you of someone??? Too bad we are Cooked.
@saintsscholars8231 • 10 months ago
How would a Mac Studio M2 32GB stack up vs the MBP M3?
@cogidigm • 10 months ago
Could you please make a video on Stable Diffusion with ComfyUI on Mac? I don't know why nobody has ever made any videos about it.
@mannkeithc • 10 months ago
My apologies if I am being dumb, but why wouldn't you use an NPU for this machine learning process? I thought this is the sort of task NPUs were designed for, and maybe even better at than a GPU. And if you could, how would the performance compare when running on an Apple Silicon NPU (on paper, the M3 NPU is 18 TOPS for FP16)? And as every processor manufacturer is now getting on the AI bandwagon, you could even extend it to compare the performance of the AMD 7000 series with AI NPU (10 TOPS; 8000-series NPUs, 16 TOPS) or Intel's Meteor Lake Core Ultra with NPU (10 TOPS). Of course, the processor I would really like to see would be Qualcomm's Snapdragon X Elite with its 45 TOPS NPU, but that's yet to be released.
@paultparker • 1 month ago
NPUs are nowhere near as fast as a good GPU. They mainly exist to do low-power ML, presumably in the background, like summarizing text threads as messages arrive.
@paultparker • 1 month ago
As far as power consumption goes, they are insanely efficient, even more so than the Apple SoC GPU cores.
@johnkost2514 • 10 months ago
RTX 4090m is equivalent to the desktop RTX 3080 btw.
@GlobalWave1 • 5 months ago
Actually, it's more like a 3090. The fact that Apple gets that much performance from an SoC GPU is insane. I know AMD's Strix Halo SoC is going to try to compete with the M4.
@PratimGhosh1986 • 10 months ago
WSL uses Hyper-V; there is no way around it. MSI laptops are always noisy. If you need a powerful and less noisy Windows laptop, the Lenovo Legion 9i is a better choice.
@AZisk • 10 months ago
Haven't tried that one yet. Thanks
@rondobrondo • 10 months ago
Part of Apple's long game here is to absolutely dominate the mobile market in every way, and part of that domination is going to require robust machine learning capabilities and speed, even for the small models that are better suited to mobile machine learning applications. They make their machines able to run small models insanely fast, and that's where they're going to have a huge edge in the future.
@parmeshwarmathpati2916 • 10 months ago
Hi Alex, can I get a mentorship session? I'm ready to pay. It's for a hardware setup for building LLMs.
@weeee733 • 10 months ago
Is there any way to run MLX inside an Xcode iOS project?
@devluz • 10 months ago
Very interesting video... but why do you have so many laptops lying around? :o
@AZisk • 10 months ago
for testing
@RahulPrajapati-jg4dg • 10 months ago
Hi, can you suggest which laptop is best for LLM + deep learning work? Can you please help me?
@stephenthumb2912 • 10 months ago
I've not had good luck running AI workloads on WSL or WSL2 with a discrete GPU. Everything, including the docs, says my GPU is being used, but performance is pathetic.
@anirudha366 • 10 months ago
Can you make a video on how to install Llama using MLX?
@dqieu • 10 months ago
Have you tried timing all the machines with the model already loaded in the GPU's RAM, to test raw compute power? It would also be a fairer comparison with cloud-hosted solutions. Anyway, it's wild that Apple hasn't sent anything to the only ML/AI reviewer on YouTube. AI/ML is the core reason for me to upgrade from the M1/2 to the M3 Max.
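(A small illustrative harness for what the commenter suggests: warm-up runs pay the one-time model-load/compile cost, and only subsequent runs are timed. This is my sketch, not the video's methodology.)

```python
import time

def bench(fn, warmup: int = 1, runs: int = 3) -> float:
    """Return the best wall-clock time of `runs` calls, after warm-up."""
    for _ in range(warmup):
        fn()  # first call pays one-time costs (load, compile, caches)
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - t0)
    return best

# hypothetical usage: bench(lambda: model.transcribe("audio.mp3"))
```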
@Slav4o911 • 10 months ago
The 4090 should be faster as long as the model fits in its VRAM... if the model spills outside it, it will be slower.
@MrLocsei • 10 months ago
"PC Master Race" on suicide watch !! 😂 (and yes, it's quite probably the M-series chips' Unified Memory architecture that's making the difference here)
@D0x1511af • 10 months ago
lolz... the limitation here is the PCIe bottleneck, not the Nvidia GPU... if the NVLink protocol ran on PCs, it would destroy the M3 Max, day and night.
@gytispranskunas4984 • 10 months ago
?... Lol, are you aware that Nvidia is making an ARM SoC themselves? You know what that means, don't you? I hate Nvidia's pricing, but I know one thing: these guys don't play when it comes to performance. Everyone knows that when Nvidia releases an ARM-based SoC in the coming years, it's going to destroy everything on the market, like it always does. Also... this laptop does NOT have an RTX 4090. Not even close...
@AZisk • 10 months ago
If Nvidia starts making the entire SoC, they might beat Apple, but they are doing too well in just discrete GPUs to try that.
@sas408 • 10 months ago
@@gytispranskunas4984 Why do you hate Nvidia's pricing? They cost the same as AMD while providing RT cores, CUDA, and better stability. Quality and R&D cost money too.
@ClearGalaxies • 10 months ago
PC users huffing copium in the comments section 😂
@Dadgrammer • 10 months ago
Hmm, this difference may come from RAM/VRAM sharing on ARM Macs. The GPU can use up to 75% of RAM as VRAM. I don't know which of the 64/96/128GB RAM versions you have, but in all cases that will be more VRAM than the 20GB in the 4090.
@hevesizeteny4046 • 9 months ago
Hello guys! This might sound weird, but how can I look at my subscriptions? :D
@SmirkInvestigator • 10 months ago
Anybody have a roadmap for learning what makes a language or framework perform better on one architecture or another? How clever can tensor operations get? Python I get, but what's the difference between MLX, C++ with GGML, JAX, and Mojo?
@DimensionFlux • 10 months ago
Great to see more MLX content. Please do a comparison with Stable Diffusion MLX vs PC!
@markclayton8977 • 10 months ago
Alex, I found your channel when researching my M3 Max laptop purchase. I love your benchmark methodology, but I also wish I could copy some of your workflows. If you added a code repository to your membership, I would join!
@AZisk • 10 months ago
As much as I'd like you to join, there is no need to join to see my repos. This is a "better late than never" repo of my tests which I recently started: github.com/alexziskind1/machine_tests
@softwareengineeringwithkoushik • 10 months ago
Hi Alex, how are you?
@AZisk • 10 months ago
yo!
@PhantomEverythingSaif • 7 months ago
Bro literally has a dozen Macs!
@chandanankush • 10 months ago
My takeaway is some fancy tech words to explore next week 😢
@Buqammaz • 10 months ago
We want more content about MLX
@nasirusanigaladima • 10 months ago
First again, from X to YouTube. Every day I get more impressed with the Apple chips and unified memory 😊
@DevVaynerov • 10 months ago
2nd
@AZisk • 10 months ago
too fast
@Nucleosynthese • 6 months ago
I would definitely use a desktop pc for this kind of stuff, not a laptop.
@yesyes-om1po • 8 months ago
Too bad the proprietary silicon is anchored to a POS company like Apple; I don't want to spend 800 dollars on an extra 64GB of memory.
@shawnandrews329 • 6 months ago
Thanks for the analysis. I'm a Java developer, and I want to get into AI with Python, C++, and Java. I'm looking for a new laptop; I've never owned a Mac. Because laptop GPUs are not swappable, that's really the driving factor. If you had to buy one or the other for AI software development, would you buy the M3 Max, an Intel laptop with an RTX 4090m, or something else?
@dhk2958 • 7 days ago
@@shawnandrews329 M3 is enough
@user343-r6g • 10 months ago
I want a MacBook that has Apple silicon soooooo badddd 😭😭😭😭😭
@markclayton8977 • 10 months ago
What’s your use case? The battery life on even the M1/M2 chips is phenomenal, the M3 chip mostly just adds performance. If you’re using it for light tasks, save some $$$ and get an M1 or M2 series chip
@user343-r6g • 10 months ago
@@markclayton8977 I'm a photographer. I use Adobe PS, Lr, and LrC, plus Xcode for the camera app I'm working on, and I need to connect two displays.
@ClearGalaxies • 10 months ago
🥵
@stendall • 10 months ago
Soooo, the real title of this video should be: MLX extremely poorly optimized for CUDA cores.
@yvesvandenbroek6055 • 10 months ago
MLX does not run on PCs, and there are no CUDA cores on Apple Silicon 🤷♂
@darshank8748 • 10 months ago
Google has a better transcriber, called USM, in their Vertex API, tbh.
@KeiGGR • 10 months ago
Then why is the YouTube one still trash?
@CybernetonPL • 9 months ago
Doesn't matter if there's like zero software to use on Apple silicon; devs always target Windows. Only billionaire devs support Mac, or browser game devs.
@RSV9 • 8 months ago
But … 8 GB on MacOS is like 16 GB on Windows 🤔
@Mostafaabobakr7 • 10 months ago
Red eyes! Check whether that's normal.
@Mr_Beowulf • 1 month ago
The electricity bill this month is $2000 😅
@geog8964 • 10 months ago
Thanks.
@aravjain • 10 months ago
Great video, Alex! You have some really enjoyable content on your channel. Are you able to send me one of your old M-series Macs? I'm a student and I'm trying to learn some ML/AI stuff.
@Buqammaz • 10 months ago
Finally MLX 🔥
@AndysTV • 10 months ago
The insanely-fast-whisper model is actually way faster on the 4090.
@InsideGreatness-gh8wc • 10 months ago
Hide your kids, hide your wife
@hariharan.c8009 • 10 months ago
Hi, for machine learning in college: Lenovo LOQ (i5-12450H, 8GB RTX 4060, 80k) vs. IdeaPad (Ryzen 7 5800H, 6GB RTX 3060, 71k)?
@user-sam4465 • 10 months ago
But with Windows laptops you will spend only a few dollars on upgrading RAM, while for Apple you'll spend much more.
@lesleyhaan116 • 10 months ago
and you are stuck to a wall outlet
@jasonwun6113 • 10 months ago
Well, you need to carefully specify the use case for the RAM. In the AI world, the only RAM that matters is the RAM on the graphics card, and that is not cheap to upgrade compared to a Mac.
@andrewdunbar828 • 10 months ago
French "l" sounds like "l". If it were a double "ll", it would've sounded like "y".
@AZisk • 10 months ago
Darn. Should have asked my wife before the vid.
@johnbreaker3874 • 10 months ago
With all of those machines, you should do a giveaway xD, as I need your M3 Max, moahahaha.
@CybernetonPL • 9 months ago
Actually, if this is true then you didn't pick the best machine for the competition, because there are a bazillion non-Apple laptops; the mathematical consequence is that one of them has to beat the Mac, so clickbaiting us with this title is awful.
@Intel101-pe1et • 7 months ago
What kind of mathematics is that?
@CybernetonPL • 7 months ago
@@Intel101-pe1et Statistics, bro, plus probability.
@kashalethebear • 8 months ago
Whisper isn't AI... no true AI exists yet, lol.
@saidd. • 10 months ago
I have no idea why the heck I am watching this now, but everything you say sounds cool. :)) PS: no idea how to code at all; wish I could.
@LiangQi • 28 days ago
Perhaps the 4090 has better performance on "bare" Linux? 🤣
@einstien2409 • 9 months ago
Use a simple RTX 4060 laptop without being plugged in.
@Upscale_King • 3 months ago
@@einstien2409 it doesn't have enough VRAM.
@Terzaghi12 • 10 months ago
And Apple dares to say 8GB is enough.
@AZisk • 10 months ago
Not for ML. Nobody said 8GB is enough for ML.
@CrYou575 • 10 months ago
Microsoft said 640kB was enough.
@franciscogaxiola9450 • 6 months ago
PC owners came just to comment hate and useless personal opinions. Pathetic.
@Mabeylater293 • 10 months ago
Serious question: why would anyone buy a Windows PC when you can buy a Mac that not only can run Windows but runs Windows BETTER than a Windows PC??? I'm buying a computer soon and would appreciate the feedback. Thanks.
@olepigeon • 10 months ago
If power usage isn't your concern, then a PC can and will be faster; a 14th-gen Core i9 + RTX 4090 will likely dominate in all benchmarks. For truly mobile performance (as in on battery, not plugged into a wall), Apple undeniably has the best product on the market right now. So long as you don't want to play any games on it.
@ishiddddd4783 • 9 months ago
For mobile platforms Apple makes sense; for in-house usage, it still lags behind by a lot, unless you are already deep into the Apple ecosystem or simply prefer it. In pretty much every benchmark, the only metric Apple is going to win is power usage, which matters a lot in laptops; in desktops, not so much, when a machine that uses more power will get the job done far quicker.
@euuIgor • 7 months ago
Mac is terrible for gaming.
@k7amv • 7 months ago
lol, your side profile is pretty.
@MrDovman • 10 months ago
What is the purpose of this computing power? Do you need it every moment of your day? And if you don't have it, is it a serious issue? I have a Mac Mini M2 at home. I also have two Windows PCs. I have no affection for these two machines that heat up, blow, and scream, making a loud noise to reach the power you're talking about. Not to mention the poor-quality plastics that crack and the miserable battery life of the laptop (whose power supply is larger and heavier than my Mac Mini M2). The production of PCs should be stopped.
@ClearGalaxies • 10 months ago
Apple beats the competition, as usual 🥱 #PCMasterRace? More like #PCObsolete 😂 /j
@crestofhonor2349 • 10 months ago
PCs are still better in multiple ways that Macs aren’t. Far from irrelevant
@ClearGalaxies • 10 months ago
@@crestofhonor2349 you're right. I was just trolling 💚