Is Apple's M2 Max Good for Machine Learning?

36,383 views

Machine Learning with Phil

1 year ago

The new M2 Max is indeed a powerful processor for machine learning, best suited for those who need to run large models and who value mobility and the Mac ecosystem. If you don't need mobility, large models, or the Mac ecosystem, then it's a much harder sell: for comparable money, you can get faster desktop/laptop systems.
Of course, buy what you want!
Learn how to turn deep reinforcement learning papers into code:
Get instant access to all my courses, including the new Prioritized Experience Replay course, with my subscription service. $29 a month gets you 42 hours of instructional content plus future updates, added monthly.
Discounts available for Udemy students (enrolled longer than 30 days). Just send an email to sales@neuralnet.ai
www.neuralnet.ai/courses
Or, pick up my Udemy courses here:
Deep Q Learning:
www.udemy.com/course/deep-q-l...
Actor Critic Methods:
www.udemy.com/course/actor-cr...
Curiosity Driven Deep Reinforcement Learning:
www.udemy.com/course/curiosit...
Natural Language Processing from First Principles:
www.udemy.com/course/natural-...
Just getting started in deep reinforcement learning? Check out my intro level course through Manning Publications.
Reinforcement Learning Fundamentals
www.manning.com/livevideo/rei...
Here are some books / courses I recommend (affiliate links):
Grokking Deep Learning in Motion: bit.ly/3fXHy8W
Grokking Deep Learning: bit.ly/3yJ14gT
Grokking Deep Reinforcement Learning: bit.ly/2VNAXql
Come hang out on Discord here:
/ discord
Website: www.neuralnet.ai
Github: github.com/philtabor
Twitter: / mlwithphil
Links used in the video:
www.macworld.com/article/1475...
www.apple.com/shop/buy-mac/ma...
github.com/tlkh/tf-metal-expe...

Comments: 80
@brandonw1604
@brandonw1604 1 year ago
One thing: if you're comparing just laptops, the Macs can come out ahead, since the GPU has shared system memory.
@gamer12353
@gamer12353 1 year ago
Just want to add that if you're a software developer and you spend a good amount of time in front of your laptop, you want it to be perfect to work with: the feel, everything. For most SEs, price is not the key factor for something that important. For me personally, MacBooks are great, simply because I love the Unix-based OS together with the fact that it just works. The design and the connection to all of my other Apple devices just make the experience much better and more enjoyable. When it comes to machine learning, I mostly connect to a VM via SSH and code locally. What I also do locally is testing, hyperparameter tuning... so a laptop that's capable of doing that in a reasonable time is more than enough. There's a reason you see so many SEs working in front of a MacBook instead of other brands, and I think my points give a good explanation of why that's the case.
@MachineLearningwithPhil
@MachineLearningwithPhil 1 year ago
Yup, lots of reasons why someone would want the Mac ecosystem
@Biedropegaz
@Biedropegaz 1 year ago
@@MachineLearningwithPhil But it still sounds like an Apple fanboy
@rishabdhar6900
@rishabdhar6900 1 year ago
@@Biedropegaz I can vouch for what he said. My time is simply too precious to waste on finding the right machine first and then installing Linux on it (Windows is a joke of an OS when it comes to terminal support, and I prefer to automate a lot of my tasks). Secondly, Linux compatibility on so many Windows machines is iffy. Linux is great as a server but trash as a UI OS.
@ameliabuns4058
@ameliabuns4058 10 months ago
Same. I had a Dell XPS 15 for 1.5 weeks; work had to return it. It kept BSODing on compile, all the fans ramped up like hell, the battery lasted so little, and it was so, so loud even at idle. They got me a refurbished 14" MacBook Pro with an M2 Pro and 16GB for less than the XPS 15 with the 12th-gen processor and similar specs. God, the Mac can be hell at times, but 99% of the time it's super reliable and incredible to use. We have a Windows server anyway for Windows builds etc. On the desktop side, though, I'd 100% go custom-built. The Intel and Ryzen offerings are insanely good, and so are the overpriced RTX cards.
@conjugategradient5498
@conjugategradient5498 3 months ago
I agree, except for me Windows is what offers the best developer experience. I love the Visual Studio IDE, its integration with .NET, support for other languages, etc. I personally hate working on a laptop. For any development work, I need my standing desk, my Herman Miller chair, and my ultrawide monitor. That's why I don't buy laptops for dev work, or if I must use a laptop, it always stays docked at my desk.
@woolfel
@woolfel 1 year ago
I spent the weekend benchmarking a CreateML object detection project with the Pascal VOC dataset. I was able to crank the batch size up to 1024, and the M2 Max 38-GPU/96G used 80G during training. To me, that is impressive. On my own CIFAR-10 benchmark, the M2 Max is faster than my RTX 2060 6G / Ryzen 3700X / 64G DDR4 XMP overclocked. If I crank the batch size up to 1024 on my CIFAR-10 benchmark, TensorFlow complains about memory but still runs. Back with TensorFlow 1.x it would just die trying to allocate GPU memory. The joy of how TensorFlow 1.x allocates GPU memory. I also benchmarked the InvokeAI version of Stable Diffusion; the M2 Max is faster than the RTX 2060 6G.
@jmurtha80
@jmurtha80 1 year ago
I understand that your desktop is probably all you have, but the M2 chipset is much newer than what you're currently running, as is the RTX 2060. I'm not sure what would be the perfect comparison, perhaps a Ryzen 5600-5700 CPU and an RTX 3060-3070? Also, what was the performance gap? I'm not sure I understand what your desktop machine was able to do in comparison to the M2 Max MacBook Pro... I don't have a Mac, but I'm looking into either an M1 Pro or M2 Pro. I have a Ryzen 5600X, an RTX 3070, and 32GB DDR4 3000MHz (Corsair Dominator, older sticks, could easily upgrade to 3600MHz).
@woolfel
@woolfel 1 year ago
@@jmurtha80 I have 2 Linux systems with other Nvidia video cards. I do light experimentation only, since memory is the biggest limitation when training models. If you look at Jeff Heaton's channel on YouTube, he generally recommends "get as much video memory as you can afford." Having more CUDA power is nice, but once you hit the memory limit, it's kinda downhill from there. The disk cache can't keep up and your CUDA cores will be waiting for data. Think of 3D games that don't cache data effectively: performance is generally 5-10x worse. TensorFlow tries to allocate 100% of the VRAM at the start. If the batch size won't fit in memory, it will spit out warnings. At the end of the day, it comes down to how much video memory you need to run useful experiments. If 6G is enough, no point giving Nvidia ransom money. If 24G isn't enough, you have to ask "do I want to spend crazy money on a Tesla A100 40G?" If you need more than 40G, you're gonna train in the cloud.
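For reference, modern TensorFlow can be told to grow GPU memory on demand instead of reserving it all at startup, which makes it easier to see what a given batch size actually uses. A minimal sketch using the standard tf.config API (generic, not tied to the benchmark above):

    import tensorflow as tf

    # By default TF reserves (nearly) all VRAM at startup; with memory
    # growth enabled it allocates incrementally, so OOM shows up at the
    # point where the batch size really stops fitting.
    for gpu in tf.config.list_physical_devices('GPU'):
        tf.config.experimental.set_memory_growth(gpu, True)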
@thanatosor
@thanatosor 1 year ago
Thanks man, exactly the kind of comparison I needed to see.
@NetvoTV
@NetvoTV 8 months ago
I'm going to get the new 16-core M3 Max 16-inch MacBook Pro, but should I get 48GB, 64GB, or 128GB of RAM?
@stephenthumb2912
@stephenthumb2912 10 months ago
In my experience, unfortunately none of the larger LLMs are trainable, or even runnable for inference at practical speed, on even a 4090. Until the hardware catches up to the software and Linux installs on consumer desktops are standardized, the cost/performance/time ratio is just too poor for me. I don't like it, but cloud GPUs are the most practical source of TFLOPS for training and inference on, I'd say, 13B+ LLMs. I'd be very happy to be proven wrong. It's a very good discussion. Ironically, I don't like the centralization of compute required to work with these large models, even the open-source ones.
@klaymoon1
@klaymoon1 8 months ago
Great video! Can you please cover the M2 Ultra or M3 Max as well? I know the 4090 is the way to go on a low budget, but I'm curious if Apple is a valid option now.
@seanwfindley
@seanwfindley 1 year ago
I just bought the 96GB MacBook Pro. Already hitting 70GB usage with deep learning experiments.
@MachineLearningwithPhil
@MachineLearningwithPhil 1 year ago
Nice.
@seanwfindley
@seanwfindley 1 year ago
@@MachineLearningwithPhil Thanks. Great video btw. And I do think having a couple of systems for experiments is important. I have a 12900k with a 4090 and a couple of other systems. Anything that needs brute force can probably be offloaded to the cloud (just watch your wallet lol). Keep up the great work with your content, sir!
@user-ob7fd8hv4t
@user-ob7fd8hv4t 7 months ago
Is it the 96GB version of the M2 Max? What do you think: I want to deploy my own 13B model locally (and train it with some relatively sensitive data), or even have it become my "digital clone". Do you think the 38-GPU-core 96GB M2 Max is a suitable choice?
@MachineLearningwithPhil
@MachineLearningwithPhil 7 months ago
Llama2:13B fits on my 24GB RTX Titan so the M2 should be just fine.
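As rough napkin math on why that fits (standard per-parameter sizes, not figures from the video): weight memory is roughly parameter count times bytes per weight, before KV-cache and activation overhead.

    params = 13e9
    for name, bytes_per_weight in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
        gb = params * bytes_per_weight / 1024**3
        print(f"{name}: ~{gb:.0f} GB of weights")
    # fp16: ~24 GB, int8: ~12 GB, int4: ~6 GB. A quantized 13B model
    # fits comfortably in 24 GB; full fp16 is right at the edge.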
@PKperformanceEU
@PKperformanceEU 1 month ago
@@MachineLearningwithPhil You really compared 11800H garbage to an M2 Max and even said the Tiger Lake garbage is faster?? If you had seen the M1 Max SPEC2017 review, or personally tested the M2 Max with SPEC2017, you wouldn't be this blatantly ignorant!! M2 Max > RPL 13700HX, and M2 Max = R9 5900X. Stop spreading misinformation out of ignorance and x86 bias!! As for the GPU, the M2 Max gets close to a 4080m in gaming and 3D rendering scenarios. Yes, it's slower in ML, but that's down to the Metal libraries, not the hardware itself. The M2 Max is about 2 times faster than a 10nm 11800H. I hope you have a good day
@user-vz4wu7gi5p
@user-vz4wu7gi5p 1 year ago
Could you do a video on local vs. cloud GPUs for ML?
@ahmedhamadto8756
@ahmedhamadto8756 6 months ago
This is done using the GPU on Apple silicon devices; what about the Apple Neural Engine? From what I know, it's meant for inference. How does it perform on inference, as opposed to model training, compared to an RTX 4090? The main use I'm looking at right now is edge computing. If the Apple Neural Engine can infer on images as fast as or faster than an RTX 4090, that would be great, since the form factor also matters: I can fit a bunch of Mac minis in the same space as one headless RTX 4090 box.
@mikhailkhlyzov6205
@mikhailkhlyzov6205 1 year ago
When you mentioned power consumption and power efficiency, my main question was about overheating. Laptops tend to have poor cooling compared to desktops, so does that affect the Mac vs. dedicated-GPU-laptop question?
@MachineLearningwithPhil
@MachineLearningwithPhil 1 year ago
The Nvidia-GPU-based laptops may have issues with heat management; that's really going to depend on the manufacturer. Truthfully, it could very well play into one's decision, and waiting for proper gaming benchmarks is probably a reasonable idea.
@woolfel
@woolfel 1 year ago
I bought my son an RTX 3080 laptop a few years back and it runs HOT. Now that he works as an ML engineer, he uses a Tesla A100 and only uses the RTX laptop for gaming. My M2 Max is much quieter than my RTX 2060 6G when training models. I built my system with a Cooler Master H500, a big heat sink, and 64G DDR4 XMP overclocked. On power efficiency Nvidia loses big time, but that's not why people buy Nvidia video cards.
@Vikram-wx4hg
@Vikram-wx4hg 1 year ago
Wonderful video and discussion! Waiting for your video on: What sort of system to buy/build for AI?
@ameliabuns4058
@ameliabuns4058 10 months ago
Why am I watching this? I've only just started linear algebra; by the time I learn anything that needs good hardware, it'll all be way different and cheaper, lol. But more cores or threads don't automatically translate to more performance, and on the laptop side there's thermal throttling plus power and battery constraints. Was that a desktop or laptop 3090?
@vincelibrandi
@vincelibrandi 1 year ago
Found your video looking for AI performance specs for Apple silicon. Great video, but you should note that you're comparing the desktop 3080 to a mobile GPU. We really need AI specs for the mobile edition of the 3080 (which isn't actually a 3080: it's limited to 135W by default and has 30% fewer cores). Taking that into account (back-of-the-napkin math: reduce by 30% for the cores and by 30% for the power), it would be about ~290 images/sec. So 140 img/sec from 42W is pretty good. But like you said, really for mobile applications.
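That back-of-the-napkin estimate, written out (the ~590 img/s desktop starting point is inferred from the comment's own numbers, not a figure from the video):

    desktop_imgs_per_sec = 590                          # implied desktop 3080 figure
    mobile_estimate = desktop_imgs_per_sec * 0.7 * 0.7  # -30% cores, -30% power
    print(round(mobile_estimate))                       # ~289 images/sec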
@haralc
@haralc 1 year ago
Btw, does training on an M-series chip use the GPU cores or the Neural Engine? 🤔
@SadmanPinon
@SadmanPinon 1 year ago
Same question. I wonder if the Neural Engine cores are even used.
@PatrickDunca
@PatrickDunca 1 year ago
Rather than spending more time on the conversations about the merits and drawbacks of walled gardens, I just want to know if this tool is right for me. Am I someone who would benefit from its massive unified memory, or can I cross it off my list? I'd love to know who this machine shines for. Is this a great workstation because, unlike something with multiple GPUs (if they even fit) spitting out heat and fan noise, this one is home-friendly? Those are the two major selling points I see. Larger models correlate with better inference results; same for full-fat models compared to those quantized to 4 bits. I think there might be a use case for this machine, particularly the Mac Studio M2 Ultra. I expect the result will be "it can load the model but it can't run it fast enough," but I haven't seen anyone show this while using both the GPU and Neural Engine cores at the same time. I would love to see someone push one of these Macs and find where its strengths really lie.
@user-xc9mo9qh4q
@user-xc9mo9qh4q 1 year ago
Hi, is the M2 Max good or not for ML?
@loktevra
@loktevra 5 months ago
Why does everybody compare Apple's laptops with desktop video cards? Where can I find a comparison of an Apple MacBook or Mac Studio with mobile video cards?
@broimnotyourbro
@broimnotyourbro 2 months ago
Try running the 70B Llama 3 model and see how it goes. On my Mac Studio M2 Ultra it runs faster than ChatGPT. On my PC with a 4090 it's a couple of seconds per token. It's night and day (but the PC is faster with smaller models like llama3:8b). A 4090 doesn't have enough VRAM to run large models.
@broimnotyourbro
@broimnotyourbro 2 months ago
There is literally no comparable system for a large model like this for less than $20K (the Studio Ultra with 128GB RAM costs ~$5K). I wish Nvidia would ship a card with more VRAM, but they won't, because it would cannibalize sales of their $25K data center cards. A 4090 with 64GB RAM would scream.
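The same napkin math as earlier in the thread, for the 70B case (rough, assuming a ~4-bit quantization):

    params = 70e9
    weights_gb = params * 0.5 / 1024**3  # 0.5 bytes per weight at 4-bit
    print(round(weights_gb))
    # ~33 GB of weights alone, before the KV cache: well over a 4090's
    # 24 GB, but comfortable inside 128 GB of unified memory.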
@jiananli6419
@jiananli6419 1 year ago
Another important point: for deep RL, especially for tutorial examples, we don't need a GPU for training, because it takes extra time to transfer data from CPU to GPU. Could you please compare training speeds between CPU-only (like an Intel Core i9) and M1/M2 on tutorial examples like DDPG?
@nathanx.675
@nathanx.675 1 year ago
The time it takes to transfer data to CUDA is almost nothing compared to the time it takes to train a DL model.
@jiananli6419
@jiananli6419 1 year ago
@@nathanx.675 For large models it is almost nothing (e.g., 1 second out of 10 seconds, 10%), but for small models like those used in tutorial examples, its share can grow (1 second out of 2 seconds is 50%). I'm just sharing my experience on Ubuntu with an Intel Core CPU and an RTX card. In real applications we'll always use CUDA anyway, so the issue matters less there.
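A minimal PyTorch sketch of how one might measure that split on an Nvidia machine (the toy model is hypothetical, not code from the video; the torch.cuda.synchronize calls stop async kernel launches from skewing the timings):

    import time
    import torch

    device = torch.device("cuda")
    model = torch.nn.Linear(64, 4).to(device)  # tiny tutorial-sized network
    opt = torch.optim.Adam(model.parameters())
    x = torch.randn(256, 64)

    t0 = time.perf_counter()
    xb = x.to(device)                          # host -> device transfer
    torch.cuda.synchronize()
    t1 = time.perf_counter()

    loss = model(xb).pow(2).mean()             # one forward/backward step
    opt.zero_grad(); loss.backward(); opt.step()
    torch.cuda.synchronize()
    t2 = time.perf_counter()

    print(f"transfer {t1 - t0:.6f}s, step {t2 - t1:.6f}s")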
@ivanssaponenko8384
@ivanssaponenko8384 1 year ago
There is no separate memory for CPU and GPU in the M2 architecture. It uses "unified memory", which is shared between CPU and GPU and can be accessed instantly by both, so the transfer time is effectively zero. That wouldn't make much difference here, though: you mentioned tutorials, and tutorials are usually set up to train quickly and to limit model size so they're accessible to users with limited hardware.
@jiananli6419
@jiananli6419 1 year ago
@@ivanssaponenko8384 This is cool. So no worries about the data transfer issue for Mac users. Thanks for sharing this information.
@rishangprashnani4386
@rishangprashnani4386 10 months ago
Buy an M1/M2 MacBook Air for $1000 and a $2500 high-end PC. You're set, because mostly you'd be doing high-end training on a workstation anyway. If you really need mobility, then the M2 Max is good. High-end Windows laptops sometimes run too hot.
@ceyerwakilpoor9712
@ceyerwakilpoor9712 1 year ago
I have a desktop 3090 and an M2 Max; on BART (a variation of BERT, a standard transformer model) the 3090 is only about 2.5x faster. This was run on PyTorch. Alex Ziskind did a benchmark in TensorFlow, and there the Apple SoC does incredibly well.
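For anyone wanting to reproduce that kind of comparison, PyTorch exposes Apple-silicon GPUs through the 'mps' backend. A minimal device-selection sketch (generic boilerplate, not the commenter's actual benchmark):

    import torch

    # Pick CUDA on an Nvidia box, Apple's Metal backend on an M-series Mac.
    if torch.cuda.is_available():
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    model = torch.nn.Linear(512, 512).to(device)
    x = torch.randn(8, 512, device=device)
    print(device, model(x).shape)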
@Megasunami
@Megasunami 8 months ago
M2 Max with 30 or 38 GPU cores? I'm planning to buy. Thanks.
@ceyerwakilpoor9712
@ceyerwakilpoor9712 8 months ago
@@Megasunami 38
@PereMartra
@PereMartra 1 year ago
The reason for me is the silence! Most of the time I work in the same room where my family is watching TV, or at night with everyone sleeping... and I love the silence of my MacBook M1 Pro. Of course I don't get the same performance, but the shared memory is a good point too. I have the 16 GB model, and compared with Nvidia laptops with 16 GB, the MacBook is cheaper. My MacBook is the only Apple product I own; I'm really far from being an Apple fan.
@kuchesezik
@kuchesezik 1 year ago
no mention of the machine learning cores? 🤔
@imelliam
@imelliam 4 months ago
He doesn’t know anything. He came here to tell us Apple is bad.
@JJ-fq3dh
@JJ-fq3dh 1 year ago
I had a Sager laptop with a 3080; doing almost anything spun the fans up to ear-piercing noise. Got rid of it and went with an M1 MacBook Pro 16. It's quiet, which was the most important thing to me. Running the same tasks as the Sager it's not quite as fast, but so much better. I don't mind macOS; it's very good once you get the hang of it, and Time Machine backups are a plus. If I need CUDA or need to train models, I use Colab; I didn't want an Nvidia GPU in a laptop with Google's Colab available. Might look at how the M2 Max was the same speed as an RTX 3070 training ResNet. kzbin.info/www/bejne/j2OpgIidlM-ibc0
@CosmicReef
@CosmicReef 1 year ago
I can't run my LLMs. I've never encountered an LLM only one or two GB in size; I very often run into trouble because 16 GB is not enough. And there are larger models out there, and from personal tests these larger, unshrunk models are far better than what I use at home (with Google Colab, of course). So yes, memory really is the bottleneck (at least here). The bad news is I don't like Macs. So maybe wait for a 5090? Or use two 40-series cards?
@MachineLearningwithPhil
@MachineLearningwithPhil 1 year ago
You might see if you can NVLink two 3090s together (the 4090 dropped NVLink support).
@CosmicReef
@CosmicReef 1 year ago
@@MachineLearningwithPhil I think that's how it will end up. It seems two high-end GPUs are the max as long as I can't pay for an Nvidia A100. If I bought AMD GPUs instead, I'd end up with 64 GB of VRAM (instead of 48). Do you think it's better to go for AMD?
@user-pf9jv1fl2n
@user-pf9jv1fl2n 9 months ago
If you have the money to spend, get yourself an M2 Ultra with 192GB of unified memory. It's the only consumer device you can get with that ridiculous amount of VRAM; it can even run a 180-billion-parameter model at about 6.30 tokens per second, which is insane! I have a 4090, and the biggest model I could ever run was a 30-billion-parameter model. Unless you get yourself an A100 (and you'd need a couple of them), the M2 Ultra is honestly your best bet! kzbin.info/www/bejne/kJ6UiqKajLSar7csi=lysCVebe4VmDsmdo
@bdd2ccca96
@bdd2ccca96 1 year ago
A fan of the Tech Report? I didn't know. Will the web ever return to the good old days?
@MachineLearningwithPhil
@MachineLearningwithPhil 1 year ago
I wish, but I'm not holding my breath.
@ryshask
@ryshask 3 months ago
One thing to keep in mind is that it's way easier to resell a Mac than a custom PC build. I love custom PC builds... but I've had a very difficult time getting anything more than $1200-1500 after a few years, and if it's 3-4 years old you're not going to get much at all. There's no way for people to vet how good a PC builder you are, and there are a lot of used PCs on the market. I definitely think the PC is still the better way to go, but it's something to bear in mind if you're on the fence.
@MachineLearningwithPhil
@MachineLearningwithPhil 3 months ago
Definitely worth consideration. I've had success parting stuff out rather than selling the whole thing.
@samdcbu
@samdcbu 1 year ago
I don't see why anyone would want to train models on a laptop in a cafe. I don't want my MacBook pinned at 100% for hours in a coffee shop. I just SSH into my workstation using Tailscale and can run anything I need that way.
@Techning
@Techning 1 year ago
The question is: does deep learning performance really matter that much on laptops? Large models will probably always be trained on workstations with full-size GPUs. The important thing for laptops is that you can develop, test, and run the code in general. That's where unified memory comes into play on the new MacBooks; you don't even need the M1/M2 Max to get 32GB of memory, an M1/M2 Pro is fine.
@MachineLearningwithPhil
@MachineLearningwithPhil 1 year ago
Yup, great point. If you're doing serious development, you're probably training on a cloud / larger desktop system.
@laden6675
@laden6675 1 year ago
Local hardware is always cheaper than cloud.
@haralc
@haralc 1 year ago
Google Colab gives you 16GB. You don't even need a GPU of your own, and you can move around with just a tablet.
@MachineLearningwithPhil
@MachineLearningwithPhil 1 year ago
Yup, Colab is still an option. It's slower, but free is the best price; the performance per dollar spent is infinite!
@fishxsea
@fishxsea 4 months ago
Keep in mind: if you need a laptop, anything outside of Apple HAS to be plugged into a power brick to get full power. Macs don't; they run at full power on battery.
@MachineLearningwithPhil
@MachineLearningwithPhil 4 months ago
That's pretty cool. I didn't know that.
@pweddy1
@pweddy1 1 year ago
I don't think I've seen anyone else talk about the AI performance other than quoting what Apple stated. There are legitimate reasons not to buy the Mac; just look at the upgrade prices for RAM and storage. You can upgrade to 2TB of storage and 32GB of RAM elsewhere for what they charge to get 32GB of "unified memory," which just seems to be a marketing term for RAM that's no faster than dual-channel memory on a mainstream desktop.
@NinjaKiller1022
@NinjaKiller1022 1 year ago
Those laptops he mentioned need to be plugged in to access the power he's talking about. No thanks.
@IamSH1VA
@IamSH1VA 8 months ago
I want performance for my ML training; I don't care whether it's plugged in or not... and most people will agree on that.
@albertvolcom730
@albertvolcom730 7 months ago
Actually not
@qiyuechen7853
@qiyuechen7853 1 year ago
I think I'll stick with Google Colab
@ivansorokin1107
@ivansorokin1107 1 year ago
Guys, I'm new to ML. Do you think the new 15-inch M2 MacBook is suitable for ML tasks?
@goobtron
@goobtron 11 months ago
It's more than enough for beginners. You could start with Google Colab notebooks with their free GPUs if you need to do any training on deep learning models.
@anjalidas5530
@anjalidas5530 4 months ago
Is the M2 Air good for AI and ML?
@imelliam
@imelliam 4 months ago
Linux nerd introduces a video explaining why a Mac (obviously the best ML choice) is actually a bad choice. Yeah, I don't like battery life or performance or more RAM for my models; I prefer to run on 4GB or a slow CPU. 😂 wtf
@tsizzle
@tsizzle 1 year ago
Thanks for your thoughts on the new M2 Max MacBooks. I'm curious whether you have any suggestions for someone who wants a powerful enough laptop to run various Linux VMs (Ubuntu, RHEL, etc., though not concurrently) using something like VMware Fusion or Parallels. Would you still go with the M2/Pro/Max MacBooks, or an x86_64 AMD/Intel laptop (with or without discrete Nvidia RTX graphics)? I do realize that on the M1/M2 MacBooks one would need to run ARM-based versions of Linux as the guest OS. I'm not sure whether Docker containers/images created in that guest OS would be compatible with Docker in a non-ARM (x86_64) environment. Also, if one were going to spend so much money on the M2 Max... should one consider a laptop from Lambda Labs? lambdalabs.com/deep-learning/laptops/tensorbook/specs
@MachineLearningwithPhil
@MachineLearningwithPhil 1 year ago
I would always go x86 over ARM, simply because the only ARM-based option is Apple and I'm not into their ecosystem. As for running cross-architecture Docker images, I think it's possible. I found these instructions for running x86 Docker images on the Nvidia Jetson, which is ARM-based: www.stereolabs.com/docs/docker/building-arm-container-on-x86/ I believe Docker also supports emulation, so it should be feasible with some tinkering.
@tsizzle
@tsizzle 1 year ago
Thanks for the suggestions!
@woolfel
@woolfel 1 year ago
ARM Linux on M1 isn't there yet, and it won't use the GPU. tensorflow-macos + tensorflow-metal use the GPU and work well. PyTorch GPU support on M1 works, but it's not as mature as TensorFlow's. Getting CUDA working on Linux has gotten better, but it's still a pain. Back in TF 1.x days it was complete torture getting GPU support working: if you forgot to turn off auto-update, it would install the latest Nvidia drivers and break your TensorFlow GPU setup. Nvidia's Linux GPU docs are kinda painful. Jensen would rather you use Nvidia's cloud ("buy more, save more"). Nvidia has gotten too greedy; even though I have several Nvidia video cards, I refuse to pay the jacked-up prices.
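For reference, the Apple-silicon TensorFlow setup mentioned above is just two packages plus a device check. A minimal sketch (package names as Apple published them at the time; versions not pinned):

    # pip install tensorflow-macos tensorflow-metal
    import tensorflow as tf

    # With tensorflow-metal installed, the M-series GPU shows up as a
    # regular GPU device and ops are placed on it automatically.
    print(tf.config.list_physical_devices('GPU'))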
@tsizzle
@tsizzle 1 year ago
@@woolfel Thanks for the feedback and insights! What are the alternatives, then, if one doesn't use the CUDA framework? Is OpenCL developed/robust enough?
@woolfel
@woolfel 1 year ago
@@tsizzle I like OpenCL in theory, but Apple abandoned it a while back. Nvidia supported it, abandoned it, and now kind of supports it again. I'm biased, so take it with a handful of salt, but Nvidia treats OpenCL like an ugly stepchild it doesn't want.