Zero to Hero LLMs with M3 Max BEAST

121,991 views

Alex Ziskind

A day ago

Comments: 319
@AZisk 5 months ago
JOIN: youtube.com/@azisk/join
@AdamsTaiwan 4 months ago
Just tried LM Studio on my desktop. I was able to connect VS Code on my 8-year-old notebook to it using Code GPT. Pretty nice, but I'm still looking for a solution that can scan my whole VS solution and tell me where to fix my problems.
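Hooking an editor up to LM Studio works because LM Studio serves an OpenAI-compatible HTTP API locally. A minimal sketch of what such a client sends, assuming LM Studio's usual default of port 1234 (adjust the base URL and model name to your setup; this just builds the request without sending it):

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions format.
# Port 1234 is its usual default; adjust BASE_URL to your setup.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt, model="local-model", temperature=0.7):
    """Build an OpenAI-style chat-completion request for a local server."""
    payload = {
        "model": model,  # the locally loaded model is used regardless
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return req, payload

req, payload = build_chat_request("Explain this regex: ^\\d{3}-\\d{4}$")
print(payload["messages"][0]["role"])  # → user
```

Sending it with `urllib.request.urlopen(req)` (while LM Studio's server is running) returns the usual OpenAI-style JSON response; tools like Code GPT do exactly this under the hood when pointed at a custom base URL.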
@MaxTechOfficial 9 months ago
Keep up the good hustle, Alex! -Vadim
@AZisk 9 months ago
Thanks Vadim!
@univera1111 9 months ago
@@AZisk If I may ask, can you replicate this on Linux or Windows and see which is easier for users? Or you can just say here.
@zt9233 9 months ago
@@univera1111 Also benchmarks.
@abhishekjha9041 9 months ago
@@AZisk Sir, please make a video about MacBook Pro specifications for machine learning. I'm so confused about what to buy: the 16-inch with 30-core GPU and 96GB RAM, the 16-inch with 40-core GPU and 64GB RAM, or the M3 Pro with 18-core GPU and 36GB RAM. Other people are confused like me, so please make a separate video on that. It's a request.
@abhishekjha9041 9 months ago
@@AZisk And I have a question. I did some research and found out that Delaware has zero sales tax, which means if I buy a $2,500 MacBook Pro there I don't have to pay any tax on it. Is that true, sir?
@bawbee27 7 months ago
Incredibly helpful - this is the video everyone with an Apple Silicon machine trying to do LLMs should see!
@giovannimazzocco499 9 months ago
Excellent stuff. I searched YouTube for weeks to find benchmarks of DNN models on the M3. This is the first and only one I've found so far. There's a ton of videos on video editing, graphics, gaming, and music production on M3s, but for fresh material about machine learning on Apple Silicon I'm pretty convinced you're the only game in town. Keep it up. Looking forward to seeing more benchmarks.
@gargarism 9 months ago
I think the very first thing I'll try on my already-ordered M3 Max will be to follow what you did. The whole reason I bought the M3 Max is to work with machine learning. So thanks a lot!
@AZisk 9 months ago
Good choice!
@zt9233 9 months ago
@@AZisk Is the M3 Max as good as Nvidia for this?
@pec8377 9 months ago
@@zt9233 No, it's not. Unless you want to run large models that won't fit into Nvidia's cards, they will always beat the M3 GPU. Maybe not when the ANE is activated, but none of the tools presented here supports Core ML.
@MikeBtraveling 9 months ago
@@zt9233 If you're looking for a laptop to work with LLMs locally, you can't really beat the Mac for models larger than 7B.
@JasonHorsnell 9 months ago
Just got myself an M3 Max and found your videos. You've saved me SO MUCH TIME... Very much appreciated...
@danieljohnmorris 6 months ago
How much RAM?
@JasonHorsnell 6 months ago
36GB Max base. More than enough for my purposes atm.
@TimHulse 5 months ago
Same here!
@tonbii 7 months ago
I bought an M1 Max with 64GB three years ago to do this kind of work. I am so happy to find this video.
@catarinamoreira4805 9 months ago
This is fantastic! Thank you so much! More content on LLMs, please!
@joshgarzaBI 6 months ago
Awesome video here. I'm bummed I didn't do it sooner. I had never seen my M1 (16GB) freeze before. Great teaching here!
@theperfguy 9 months ago
I have to commend you for your effort. I haven't seen any other reviewer show a use case beyond media consumption, synthetic benchmarks, and video encoding and editing. You are perhaps the only YouTuber I know who tries out other things like code compile times and ML workloads, which is what will actually run on the majority of these high-end machines.
@AZisk 9 months ago
Glad it was helpful!
@aimademerich 6 months ago
Thank you for the GPU setting in LM Studio at 15:00!! Can you do more videos on proper GPU setup for LLMs on M1-M3?
@LukeBarousse 9 months ago
Interesting, I didn't know about LM Studio; that makes things A LOT cleaner
@paraconscious790 2 months ago
This is amazing, awesome, super crisp yet easy to understand with absolute engagement 🙌👌🙏
@joshbarron7406 9 months ago
I would love to see a tokens/second benchmark between the M2 Max and M3 Max. Trying to decide if I should upgrade.
@abhinav9058 8 months ago
Hey, did you upgrade?
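A tokens-per-second comparison between machines is easy to script yourself. Below is a minimal, hypothetical sketch: `generate` stands in for whatever inference call you use (llama.cpp bindings, Ollama's API, etc.); only the timing arithmetic is the point.

```python
import time

def measure_tps(generate, prompt, n_runs=3):
    """Average generated tokens per second over a few runs.

    `generate` is any callable that returns the list of generated
    tokens; swap in your own model call here.
    """
    rates = []
    for _ in range(n_runs):
        start = time.perf_counter()
        tokens = generate(prompt)
        elapsed = time.perf_counter() - start
        rates.append(len(tokens) / elapsed)
    return sum(rates) / len(rates)

# Stub model for illustration: pretends to emit 50 tokens.
fake_generate = lambda prompt: ["tok"] * 50
print(f"{measure_tps(fake_generate, 'hello'):.1f} tok/s")
```

Running the same prompt and quantization on both an M2 Max and an M3 Max with this harness gives directly comparable numbers, which is more reliable than comparing figures quoted across different videos.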
@atldeadhead 9 months ago
I enjoy all your videos, but this one was particularly interesting. I look forward to future videos that explore machine learning leveraging the power of the M3 Max. Fantastic stuff, Alex. Thank you!
@dennisBZC 4 months ago
Hey Alex, I've been watching many of your videos, mostly for comedy - I find the way you explain things to a non-tech mortal hilarious - but occasionally I copy your instructions and try my luck testing a few things for fun. I'm not one for cutting code, but I still watched the whole thing just to get to LM Studio and download a model to try out on my M3 Max. I tried Phi-3, thinking Microsoft might be better than the others. I don't have a clue what I'm doing, but it seems to work a little. You are a LEGEND! Keep up the great work. Love to see how you train your AI in due course. I keep shouting at it to "sit"... my MacBook hasn't moved, so I guess it's quite obedient.
@devdeal4146 6 months ago
Just got the M3 Max with 48GB RAM. Excited to see how it works with your tutorial. Thanks!
@amermoosa 9 months ago
Amazing. Just shrinking the whole second year of engineering college into 17 minutes. Incredible 😊
@mr.w7803 9 months ago
Dang!! Dude, this video sold me on that M3 Max configuration… this is EXACTLY what I want to do on my machine
@hamiltonwmr189 9 months ago
If you're going to do any intensive task on a MacBook, keep it charged at 80% using AlDente. Don't run the models on battery: churning through charge cycles will damage its health. Keep it on the power adapter with charging capped at 80%. I did some intensive training on my M1 Pro and it went from 100% to 96% battery health in a year.
@CitAllHearItAll 7 months ago
4% loss in 1 year is normal. I'm at 2+ years on an M1 Pro with 86% battery health. You're either trippin or trollin.
@ismatsamadov 9 months ago
I subscribed a few months ago, and I have never seen such quality content. Thanks, Alex! Keep going.
@AZisk 9 months ago
thx 🙏
@SebastianWerner82 9 months ago
Great to see you creating videos with this type of content as well.
@scosee2u 9 months ago
I really love your videos and how you explain these cutting-edge concepts! Would you consider researching, or interviewing someone, to make a video about quantization options and how they impact using LLMs for coding? Thanks again for all you do!
@AZisk 9 months ago
Possibly!
@jorgeluengo9774 4 months ago
Thank you Alex, this is an amazing video. I will look into the software development tools installation.
@AZisk 4 months ago
Awesome! Thanks
@anthonyzheng7274 9 months ago
You are awesome! This is great. I bought an M3 Max several days ago and am really having a great time playing around with LLMs.
@facepalmmute3619 9 months ago
The bass in your voice on the MBP speakers is phenomenal.
@stanchan 9 months ago
The performance of the M3 is amazing. Waiting for the refreshed Studio, as the M3 Ultra will be a beast. Hoping it will have the 256GB RAM as predicted.
@Mrloganphillips1 6 months ago
I had so much fun with this project. I just got an M3 Max and wanted a project to work on. After I got Llama running, I made a bash script to run the command and trigger a second bash script that opens a browser window to the IP address after a 5s delay, to let the server get up and running first. Then I made a Shortcuts button to run it. Now I have an on-demand LLM with an easy on/off button.
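The fixed 5-second delay in a launcher like that can be replaced by polling until the server actually answers. A small sketch of that idea; the server command, port, and URL below are placeholders for whatever llama.cpp or Ollama invocation you use:

```python
import subprocess
import time
import urllib.request
import webbrowser

def wait_until_ready(probe, timeout=30.0, interval=0.5):
    """Poll `probe` until it returns True or we time out.

    More robust than a fixed sleep: the browser only opens once
    the server actually answers.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False

def server_is_up(url):
    """True if an HTTP GET to `url` succeeds."""
    try:
        urllib.request.urlopen(url, timeout=1)
        return True
    except OSError:
        return False

def launch_and_open(cmd, url):
    """Start the server command, wait for it, then open the browser."""
    server = subprocess.Popen(cmd)
    if wait_until_ready(lambda: server_is_up(url)):
        webbrowser.open(url)
        return server
    server.terminate()
    return None

# Example (hypothetical command and port; adjust to your setup):
# launch_and_open(["ollama", "serve"], "http://localhost:11434")
```

The same polling loop works from a bash script with `curl --silent --fail` in a `while` loop, if you prefer to stay in shell.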
@JohnSmith762A11B 8 months ago
Excellent. Many thanks for putting this together! 🥂
@FrankHouston-v5e 5 months ago
Best LLM build video on YouTube ❤. I'm buying my 36GB MacBook Pro M3 Max with the 14-core CPU and 30-core GPU. Planning on launching a YouTube AI/ML channel soon 🧐.
@BenWann 4 months ago
I couldn't agree more - I wanted to really sink my teeth into ML since it's been a while - and I bought an MBP M3 Max after seeing your comparisons. Sorry I couldn't use an affiliate code - Micro Center had a killer deal on it :(. I look for your videos to drop now, and look forward to what you come up with next.
@someone5781 3 months ago
So excited for your next video on training on the M3 Max!
@abhinav23045 9 months ago
That fan noise is like: feel the power of AGI.
@AZisk 9 months ago
😆
@christopherr8441 9 months ago
If only we could directly access and use the Apple Neural Engine for things like this. Imagine the speed and performance gains.
@MikeBtraveling 9 months ago
I bought a maxed-out M3 Max to do this. Please run the larger models with Ollama. When using LM Studio you need to make sure you're using the correct prompt template for the model; I think that was your issue.
@mrsai4740 13 days ago
@@MikeBtraveling I'm curious, were you able to run a larger model like a 70B Llama?
@juangarcia-wp2zr 9 months ago
Very cool content, thanks. I feel very curious now to try out some of these LLMs.
@nikolamar 9 months ago
Alex this is AWESOME!!! Thank you!
@bobybobybobo 8 months ago
Just tried this on an M1 Max; token generating speed is about 15% slower, i.e., 208 vs 238. So the $2100 M1 is still holding up OK compared to the $3500 M3 for this LLM experiment...
@01_abhijeet49 5 months ago
These models run soooo well on my RTX 3060 desktop. At least my investment was worth it.
@RadAlzyoud 9 months ago
Brilliant. Thanks for sharing.
@jeffersonmp4 4 months ago
How do you know how to do all these steps?
@XNaos 9 months ago
Finally, I waited for this
@JonNordland 9 months ago
I would love some videos where the M3 is pushed a bit harder, for instance 70B models. The 70B models are much more useful for real work.
@AZisk 9 months ago
Noted!
@brandall101 9 months ago
I have the 48GB variant so I can't do 70B... but 34B models run fairly slowly as is; I'm seeing a reported 11-12 tok/sec in LM Studio, so I'd expect a 70B to be about 5-6 tok/sec. It's also pushing 110W during inference. For me personally, that's just not performant enough for real use, so I opted not to go through the hassle of swapping for a 64GB BTO.
@AZisk 9 months ago
@@brandall101 For what it would cost to get a high-end 128+GB Mac AND all the SSD space you'd need for the ML models, I would just get a 4080 or 4090. The only problem is memory requirements.
@brandall101 9 months ago
@@AZisk The common thing is to buy a pair of 3090s to get a nice middle ground between performance, memory, and cost... those can be had used for about $1500. I just don't think the hardware is quite there... yet. A couple more generations and I think we'll be golden.
@geoffseyon3264 9 months ago
I hope Apple is reading this thread…
@davidpsp89 9 months ago
Super interesting and useful. I'll take this opportunity to ask about MATLAB again and its real performance, since Apple's numbers on its page are not realistic.
@DavidCampero26 8 months ago
Hi Alex! I would love to see a comparison between the M3 Max 14/30 and the M3 Max 16/40 running the same LLM workloads. I read that many people are going with the base M3 Max, and I'd like to see how much difference there is. If you know of someone who did it, please let me know!! I want to buy a laptop as soon as possible!! Thanks!!
@bdarla 9 months ago
Super helpful! I hope you will continue with further relevant videos!
@_mansoor 7 months ago
Awesome, thank you. Hello Alex!!! 🎉🎉
@terra8net 2 months ago
THX... Great LLM content for Mac users :)
@estebanguillen8110 9 months ago
Great video, looking forward to the LLM fine-tuning video.
@suburbanflyer 9 months ago
Thanks for this Alex! Just got an M3 Max, so it'll be great to try out some new things on it. This definitely looks interesting!
@yashen12345 9 months ago
YES, AWESOME! More on this please. Do you have an M3 Max with 128GB available? Typically the smaller and quantized models you showcased perform worse than the bigger ones. I want to see Llama 2 70B at 8-bit running on an M3 Max with 128GB - the largest, most powerful model that can fit in a MacBook. Let's push this thing to the absolute limit and see how it performs. Llama 2 70B can actually match ChatGPT 3.5 performance, so if we can run it, we'd have OUR OWN ChatGPT that's actually as good as ChatGPT running LOCALLY ON A MACBOOK. THAT'S INSANE. Please can I have a video on this?
@toddturner6 9 months ago
Actually it will run Mistral 180B in the top configuration with 128GB RAM. Edit: typo. Falcon 180B.
@yashen12345 9 months ago
@@toddturner6 ? Mistral is 7B.
@toddturner6 9 months ago
@@yashen12345 Falcon 180B (typo).
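A quick sanity check on what fits in RAM: a model's weight footprint is roughly parameter count times bytes per parameter, plus overhead for the KV cache and runtime buffers. A rough sketch; the 20% overhead factor is a loose assumption, not a measured number:

```python
def model_memory_gb(n_params_billion, bits_per_weight, overhead=1.2):
    """Rough memory estimate for a quantized model: weights times
    quantization width, padded by `overhead` for KV cache and buffers."""
    weight_gb = n_params_billion * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
    return weight_gb * overhead

for name, params, bits in [
    ("Llama 2 7B Q4", 7, 4),
    ("Llama 2 70B Q8", 70, 8),
    ("Falcon 180B Q4", 180, 4),
]:
    print(f"{name}: ~{model_memory_gb(params, bits):.0f} GB")
```

By this estimate, a 70B model at 8-bit wants roughly 84GB, which is why it needs the 96GB or 128GB configurations, and a 180B model only fits in 128GB at around 4-bit quantization.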
@jameshancock 9 months ago
Nice! Thanks! FYI, when you change the preset you're changing how it feeds input into the LLM, which is what caused it to go nuts.
@tomdonaldson8140 9 months ago
Love it! Looking forward to the training video(s). Now I want a Mac Studio M3 Ultra! Oh, no such thing yet? Come on, Apple! We're waiting!!!
@MuhammaddiyorMurodov-l5n 9 months ago
Thank you so much for making this video; it was really helpful. Please do more coding videos like this, testing on the M3 MacBook and pushing it to the limits. I think you are the best channel for this because you have the knowledge and the intention to do these things, and it will be a win-win situation for both of us.
@sujithkumar8261 9 months ago
Are you using the MacBook M3 base variant?
@Andrew-v2g 9 months ago
Alex, thanks.
@AZisk 9 months ago
You bet!
@justisabelll 9 months ago
Great video, really looking forward to the next few ML-related ones. You might have had better results with LM Studio, though, if you disabled mlock after enabling the Metal GPU. Also, the model output looks nicer if you enable markdown in the settings.
@yinoussaadagolodjo4549 9 months ago
How to disable mlock? Can't find it!
@fallinginthed33p A month ago
Now the Snapdragon X Elite laptops are also pretty good at local LLMs if you run a specific quantized format.
@vincentnestler1805 7 months ago
Thanks!
@AZisk 7 months ago
🤩 thanks!
@keithdow8327 9 months ago
Thanks!
@AZisk 9 months ago
🤩 thanks!
@PMX 7 months ago
7:15 No, the column for text generation is S_TG t/s; in your case about 20 tokens per second for f16. And at 7:46, again, that's the wrong column: the correct column (S_TG t/s) shows the correct value, 33.61, which is exactly what I get on an M2 Max for a 7B Q8_0 (the GPU improvements on the M3 Max over the M2 Max don't have much impact on LLM inference; they're mostly useful for 3D rendering apps like Blender, and for games). The column you are using (S t/s) is (prompt processing tokens + generated tokens) / total time, which is a meaningless number: you can make it as fast as prompt-processing speed just by using a large prompt with a very short generation, or as slow as text-generation speed with a short prompt and a long generation.
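The distinction is easy to see with a little arithmetic. Using made-up but realistic numbers (prompt processing is far faster than generation on these machines):

```python
def combined_tps(pp_tokens, pp_time, tg_tokens, tg_time):
    """The misleading combined figure: all tokens over all time."""
    return (pp_tokens + tg_tokens) / (pp_time + tg_time)

def generation_tps(tg_tokens, tg_time):
    """The number that matters for chat: generated tokens per second."""
    return tg_tokens / tg_time

# Prompt processing at ~500 tok/s, generation at ~20 tok/s (illustrative).
long_prompt = combined_tps(pp_tokens=2000, pp_time=4.0, tg_tokens=20, tg_time=1.0)
short_prompt = combined_tps(pp_tokens=50, pp_time=0.1, tg_tokens=500, tg_time=25.0)

print(f"long prompt, short reply: {long_prompt:.0f} t/s combined")   # ~404
print(f"short prompt, long reply: {short_prompt:.0f} t/s combined")  # ~22
print(f"generation speed in both cases: {generation_tps(20, 1.0):.0f} t/s")
```

The same machine, at the same generation speed, reports combined throughput anywhere from ~22 to ~404 t/s depending only on the prompt/reply ratio, which is why the S_TG column is the one to quote.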
@pbdivyesh 9 months ago
You're a good lad, thank you! 🎉😅
@geog8964 9 months ago
Thanks, Alex.
@stephenthumb2912 9 months ago
Thanks for testing. It's interesting that even with enough memory there's still some slowness on the bigger model quants. My base M2 with 8GB can barely run the Q4 7Bs... I prefer Ollama from the CLI, which runs at usable tok/s. It's sort of OK with LM Studio, but generally I need to run 3Bs or below with Q4 quants. Orca Mini 3B is sort of my default test standard on 8GB Macs, including the MacBook Air. Can confirm: checking the Metal box causes runaways. Funnily enough, textgen runs fine with Metal support as well.
@ergun_kocak 9 months ago
3 to 5 times faster than a full-spec M1 Max 64GB. Thank you very much for the video 👍
@neodim1639 9 months ago
Try Ollama instead
@eldee8704 7 months ago
Awesome tutorial! I bought the 14" MacBook Pro M3 Max base model just to try this out.. lol
@astrohgamingZero 5 months ago
Looks good. I use text-generation-webui, and the chat/chat-instruct modes or input presets can make or break some models.
@laobaGao-y7f 9 months ago
Is that the 96GB version of the M2 Max? What do you think: I want to deploy my own 13B model locally (train it with some relatively sensitive data), maybe even make it my 'digital clone'. Do you think the 38-core 96GB M2 Max is a suitable choice?
@SergeyZarin 9 months ago
Thanks, great video explaining!
@AZisk 9 months ago
Glad it was helpful!
@chillymanny714 9 months ago
This is a great video. I think if you were to make videos teaching intro/intermediate data analysts how to build LLM apps, or a series trying different applications on Macs with M chips, it would be a big hit. I will try to replicate your approach.
@syedanas2083 9 months ago
I look forward to that
@DivineZeal 6 months ago
Great video! Thinking about getting the MBP M3 for LLMs.
@ChitrakGupta 9 months ago
That was really good. I learned something, and it was fun to run on the new M3 Max.
@nickwind2584 9 months ago
I learned more about AI in just 15 minutes with Alex than I did taking an entire AI class in college.
@timelesscoding 8 months ago
Interesting stuff. I wish I could understand a little more. Thanks.
@kman41000 9 months ago
Awesome video man!
@AZisk 9 months ago
Glad you enjoyed it
@onclimber5067 9 months ago
Maybe I'm a bit late, but I'll ask anyway. If you were buying a new laptop right now with a budget of about $2,500-3,000, which would you pick? I'm currently considering: a Dell XPS 15 with 32GB RAM, i9-13900H, 1TB, RTX 4070; an M3 Pro with 36GB, 1TB; or an M2 Pro with 32GB, 1TB. Can't decide, haha.
@MikeBtraveling 9 months ago
Very interested in the topic and would love to see you do more in this space.
@boraoku 9 months ago
In my experience trying different open LLMs for code generation, my recommendation is: don't waste your time unless you can't access OpenAI for some reason…
@RobertMcGovernTarasis 4 months ago
Llama 3's output for this was pretty decent, and it even broke down the regex. Haven't tested it yet :) (mostly because I don't know Python), but the JavaScript version certainly worked. LM Studio is really nice; it's just a shame you can't use it to benchmark in quite the same way. My poor M1 Air only gets 10.71 tok/s with Llama 3 7B q4_k_m *cries*
@MeinDeutschkurs 9 months ago
Thanks for the video, but there's something I don't understand: we explored this together, yet concluding with "find another model" isn't what I expected. Achieving satisfaction through a successful demonstration, despite the difficult journey, is essential. Now I feel like I've wasted my time. I could have just downloaded LM Studio and figured it out through trial and error myself.
@AZisk 9 months ago
This video is not about finding the best model, but about getting set up with an environment that lets you use any model you want.
@MeinDeutschkurs 9 months ago
@@AZisk, and that's why this demonstration was not satisfying. But yes, it was a demonstration: an unsatisfying demonstration. :)
@TimHulse 5 months ago
That's great, thanks!
@theoldknowledge6778 9 months ago
This LM Studio is Lit 🔥
@camsand6109 9 months ago
Glad I subscribed. You've been on a roll lately (new subscriber).
@rhoderzau 9 months ago
I was away when my M3 Max (40-core GPU, 2TB, 64GB) arrived, so I've only just got hold of it now. Looking forward to giving LM Studio a go and finally learning how everything works rather than just what the outcome is.
@melody-cheung 9 months ago
I still recommend Ollama. Easier installation and a friendly local WebUI.
@jigyansunanda 9 months ago
Looking forward to your model-training video.
@juliana.2120 9 months ago
Ohh, I love that you use conda here because it really helps me keep my hard drive clean with all those different AIs :D I'm an absolute beginner, so I'm afraid of installing stuff I can't find later on. Some people say it's "outdated" and runs into errors too often, but I can't really judge that. Is that true?
@innocent7048 9 months ago
Very interesting article. I will try this :-)
@AZisk 9 months ago
🤩 thanks so much!
@petercheung63 9 months ago
Ziskind is a surname meaning "sweet child"
@Xilefx7 9 months ago
Can you test LLM performance in low power mode? I believe Apple needs to optimize how they handle the thermals of the MacBook Pro with the M3 Max.
@akashtriz 7 months ago
Why is it that no one questions the Metal GPU hardware for bungling up the model? Llama seemed less wacky on the CPU.
@davidpsp89 9 months ago
LM Studio is an ideal environment for this on a Mac, since it doesn't consume as much GPU; for that we'd need Nvidia, which isn't an option on M chips.
@PIZZA_KITTY 6 months ago
I really want the M3 Max but wish it was a little cheaper 😱
@mercadolibreventas 9 months ago
Keep it up! Good job! Can you do a video on getting LLaMA Factory set up on the M3? Thanks!
@PietroSperonidiFenizio 9 months ago
I might need to upgrade from my Apple IIc.
@uninoma 8 months ago
Cool, thank you!!!! 🤟
@kingmargie1182 9 months ago
Great job!
@justingarcia500 9 months ago
Hey, could you do a low-battery-mode test on the M3 Max as you did with your M1 Max a while back?
@Jorge-ls9po 6 months ago
Nice vid. Now, with the M3 Max, should I stick to 64GB of unified RAM for these sorts of tasks? A jump to 128GB will cost me a thousand bucks more. Cheers!
@42Odyssey 9 months ago
Thanks for the video, Alex. As for me, my laptop at work is the famously loud MBP 16 Intel i9. My personal machine is a 14" M3 Max 64GB. I have the two laptops right here, and the 16" Intel is louder than my 14" M3 Max in my opinion. Maybe it's a 16" thing...
@AZisk 9 months ago
When not under stress, the Intel will keep being loud and the Apple Silicon will stay silent. But when the fans hit over 3500 rpm, the M3 Max is louder than any of the others I've heard.
@brandall101 9 months ago
The main thing with the Intel machines is the GPU. Any moderate load will push it into chaos. With the Max you have to really push it hard: either high-performance gaming or inference will do it.
@JS-ih4qi 9 months ago
@@AZisk I read that the 14" can throttle from heat due to its smaller fans. Would this affect how fast an LLM responds once it's set up on the computer? I'm looking at the biggest M3 Max chip with 64GB RAM and 4TB. Appreciate any advice.