INSANE Machine Learning on Neural Engine | M2 Pro/Max

  194,059 views

Alex Ziskind

1 day ago

Comments: 220
@NilavraBhattacharya · 1 year ago
Thank you from the bottom of our hearts for doing these tests, and not joining the video editing and Final Cut Pro bandwagon.
@AZisk · 1 year ago
Happy to help!
@somebrains5431 · 1 year ago
Video editing doesn’t seem to hammer ram and gpu cores. Maybe that will change with M3.
@coder-wolf · 1 year ago
Truly! Alex is one of very few who actually focuses on software devs when reviewing the Macs. Hugely appreciated!
@CAA84788 · 1 year ago
Yes, this is really helpful. I always had to try to use those video editing/photoshop results as a proxy for what I really wanted to know. Great resource!
@Khari99 · 1 year ago
As an amateur data scientist, I can't tell you how happy I am seeing you do these tests because of how niche our field is. Now I just need to figure out how to use CoreML with all the models I was working with in Tensorflow lol
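For the TensorFlow-to-CoreML migration mentioned above, here is a minimal conversion sketch, assuming coremltools and tensorflow-macos are installed on macOS (ct.convert and ct.ComputeUnit are real coremltools APIs; the MobileNetV2 model is just a placeholder example):

```python
import importlib.util

def ane_eligible(compute_units: str) -> bool:
    """Core ML only schedules work on the Neural Engine when the
    compute-unit setting includes it ('ALL' or 'CPU_AND_NE')."""
    return compute_units in {"ALL", "CPU_AND_NE"}

# The conversion itself only runs where the libraries are available.
if importlib.util.find_spec("coremltools") and importlib.util.find_spec("tensorflow"):
    import coremltools as ct
    import tensorflow as tf

    keras_model = tf.keras.applications.MobileNetV2(weights=None)
    # Newer coremltools versions convert to an ML Program and save an .mlpackage.
    mlmodel = ct.convert(keras_model, compute_units=ct.ComputeUnit.ALL)
    mlmodel.save("MobileNetV2.mlpackage")
```

With ComputeUnit.ALL, macOS decides per-layer whether the CPU, GPU, or ANE runs each op; there is no API to force the ANE.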
@mr.anirbangoswami · 1 year ago
How hard is it going to be?
@ef7496 · 1 year ago
@@joytimmermans wow 😮 man, how much experience do you have? Why don't you make a video on that? Please, I'm looking to start learning all of this. Can you help me with a roadmap?
@rhard007 · 1 year ago
Your content is the best for developers on YouTube. You should have a million subs. Thank you for all you do.
@MarsRobotDeveloper · 1 year ago
Thank you!
@AZisk · 1 year ago
glad you enjoyed
@drweb1210 · 1 year ago
The ANE is something different. As I understand it, it's designed for matrix (tensor) calculations, in contrast to the CPU. I've trained NNs using Python and TF, and I know you can format the trained model so it can utilize the ANE on iPhones using Swift; the performance is amazing IMO. However, now that I've started to go a bit deeper into Swift, I want to try and train models on the ANE 😅. Awesome video btw. Love this kind of content, glad I found this channel.
@ZhuJo99 · 1 year ago
Well, how it's done is not as important as the final result. We don't use computers to focus on the tool itself, but to get the job done. And it seems Apple did a pretty good job with their processors :)
@riteshdhobale8210 · 11 months ago
Hey, I'm currently in the 1st year of CS, specialising in AI/ML, and I'm confused about which laptop I should buy: a Windows machine with a GPU, or an M1 or M2? Please help, and let me know whether Macs are the thing for an AI/ML engineer.
@drweb1210 · 11 months ago
@@riteshdhobale8210 Consider this. If you are just starting, you will most probably work with more basic models, math, and general coding (Python, R, JS, Mojo...). For most tasks the M1, M2, and M3 Macs will do just fine; they are much more than you'll need for learning.

When you start with DL (deep learning), the Macs are still good but start to get very expensive (M1/M2/M3 Max). This does not mean that you HAVE to buy a Max model; things will just go faster on them, and the Pro models will do OK as well. Most DL is done on GPU clusters anyway (AWS, Azure...).

All that being said, if your main focus is to go for DL directly (which I don't recommend, but most people do), or you want to play with already existing models, then go for a laptop with a good GPU.

The last thing to take into account is VRAM. The one thing that separates the Macs and PCs (for now) is the underlying memory architecture. Because Macs' GPUs and CPUs share (in a sense) the RAM, you can load much larger models into memory to do inference. As a simple example, an RTX 4060 has 8GB of VRAM, while the Macs can access much more RAM (up to 128GB), but this costs a lot of money. If you go for a Mac, go for the 16-inch. Hope this helps.
@acasualviewer5861 · 1 year ago
What I wonder is if the high RAM M2 Maxes (like 64GB or 96GB) can train significantly more complex models or use significantly bigger batches simply because they have more ram than most discrete GPUs.
@sharathkumar8422 · 1 year ago
Data Loading: The dataset used to train a model is first loaded into the system's RAM before it can be utilized. If the dataset is large and the RAM is insufficient, it can't load the entire dataset at once, which can slow down training as data needs to be constantly loaded and unloaded.

Batch Processing: Deep learning models are typically trained in batches due to computational limitations. The batch size (the number of data points the model sees at once) directly affects how much RAM is used. Larger batches require more memory but can lead to faster and sometimes more stable training. However, if the batch size is too large for the available RAM, it will cause an out-of-memory error.

Model Size: Larger, more complex models (more layers, more nodes) have more parameters and thus require more memory to store them. Additionally, during training the system also needs to store other information such as gradients and intermediate layer outputs for backpropagation, which further increases RAM usage.

Parallelism: If you're using a framework that supports it, and you have sufficient RAM, you can train multiple models or multiple parts of a model simultaneously, which can significantly speed up training.

Speed: RAM is much faster than disk storage. So the more data your RAM can hold, the quicker the access time, and thus the faster your model can train.

Swapping: If your system runs out of RAM, it will start swapping data to disk, which is a much slower process and can drastically slow down training.
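The batch-size point above can be made concrete with a quick back-of-the-envelope calculation (a sketch: real training also needs memory for weights, gradients, optimizer state, and activations, so actual usage is much higher):

```python
def batch_input_bytes(batch_size: int, height: int, width: int, channels: int,
                      bytes_per_value: int = 4) -> int:
    """Memory for one float32 input batch, ignoring all framework overhead."""
    return batch_size * height * width * channels * bytes_per_value

# CIFAR-10 images are 32x32x3, so a 1024-image float32 batch alone is:
mib = batch_input_bytes(1024, 32, 32, 3) / 2**20  # 12.0 MiB
```

Scaling batch_size up multiplies this linearly, which is why a batch that fits on a 64GB machine can trigger an out-of-memory error (or swapping) on a 16GB one.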
@acasualviewer5861 · 1 year ago
@@sharathkumar8422 I understand the theory. But I want to see benchmarks. In theory the M2 Max with maxed out RAM is great for ML. But I'd like to see some benchmarks in practice. Performance is also based on bottlenecks, and if the GPU is insufficient, it doesn't matter how much RAM you have, you can use a GPU with inferior RAM and it will still blow you out of the water.
@igordemetriusalencar5861 · 1 year ago
Now I really want you to get your hands on a 4090 new notebook to test it to the limits.
@AZisk · 1 year ago
believe it or not, I was considering this, but you are literally the first one that asked.
@lightytan5404 · 1 year ago
@@AZisk yes, please. So far nvidia new gen seems promising. But how it stands up against M2 Pro/Max?!
@GlobalWave1 · 1 year ago
@@lightytan5404 the new laptops with Intel's new HX processors and the 4090 sound like helicopters, and forget battery life, but they are monsters. 😂
@drreg2513 · 1 year ago
@@GlobalWave1 better to have a helicopter than a 105°C CPU/GPU
@LeicaM11 · 1 year ago
Should not squeeze a 4090 into a small laptop!
@roberthuff3122 · 10 months ago
🎯 Key Takeaways for quick navigation:

00:00 🖥️ Introducing the Machines and Models for Machine Learning Testing
- Brief introduction of the M2 Pro and M2 Max and the models to be tested
- Introduction of the machines to be used, including a PC laptop

01:23 🔍 Deeper Insight into the Models
- Explanation of the various models, such as ResNet, MobileNet, DistilBERT, and BERT Large

02:48 💻 Detailed Documentation of Training Deep Learning Models
- Description of setting up a conducive environment for dependencies
- Highlights of the heavy dependency on memory for training

04:12 ⏳ Experimentation on Multiple Devices
- Descriptions of how tests are kicked off on different machines
- Real-time feedback on how different machines respond to the test

06:05 📈 Wrapping Up and Analyzing the First Set of Results
- A look at the results obtained from the various machines
- Commentary on the impact of GPU core count on the results

07:55 🔄 Another Round of TensorFlow Experiments Focusing on the PC
- Explanation of another TensorFlow experiment that includes the PC
- Description of the different batch sizes and the dataset used

10:12 🔮 Insights from the Third TensorFlow Experiment with Different Batch Sizes
- Analysis of the outcome of the second batch of experiments
- Insights on how power usage directly impacts speed and results

11:50 🧠 Focusing on the Usage of Apple's Neural Engine
- Introduction of a test that demonstrates the Apple Neural Engine
- Commentary on the impressive results driven by the Neural Engine

13:54 📈 Results from Experiments with the Neural Engine
- Description of the results obtained from the Neural Engine experiments
- Observations and insights drawn from the results

15:28 🔚 Conclusion and Closing Remarks
- Final thoughts on all conducted experiments
- Encouragement to subscribe for more tests and software developer content

Made with HARPA AI
@AZisk · 10 months ago
wow thanks!
@SonuPrasad-xt4yr · 1 year ago
Great video! Never thought ANE would be that powerful, Thank you for sharing your expertise and providing such valuable content. Keep up the good work!
@AZisk · 1 year ago
Glad you enjoyed it!
@tudoriustin22 · 1 year ago
3:17 - one way I prevented swap on my M2 Pro 14-inch MacBook Pro with 16GB RAM was to disable swap, and I haven't run into any problems. Performance will degrade if you do this on an 8GB 13-inch MacBook with M2 or M1, because those rely on swap for performance, but on the 16GB machine with swap off, not only does it help SSD longevity, it also never runs into any performance hiccups.
@gerryakbar · 1 year ago
Hi Alex, really, thanks for your many ML benchmarks on Apple Silicon. However, I think the benchmark configs could differ: they should be maxed out for each architecture. You could use a bigger batch size on Apple Silicon since it uses unified memory.
@sivovivanov · 1 year ago
Great video! I'm looking at an M2 Pro for PyTorch related work - any chance we can get a video on some of that as well? Thanks
@kitgary · 1 year ago
I am interested to see how the 14" M2 Max performs. Is there any significant difference between the 14" and 16"?
@aaryanchadha · 1 year ago
Thermal throttling: to prevent fan noise and overheating, Apple reduces CPU and GPU performance on the 14-inch. Also, there's no high performance mode on the 14-inch, which a lot of devs use since battery life is great.
@tomasdvoracek6098 · 1 month ago
What is the window on the right that's monitoring memory, GPU, memory pressure, etc., please?
@yongjinhong5533 · 1 year ago
Hey Alex, have you tried increasing the number of CPU workers? As most of the computation overhead is in transferring data from CPU to GPU in the macs.
@user-er3pz8ev6d · 1 year ago
Thanks, machine learning is the thing I was looking for, and only you are making such tests.
@AZisk · 1 year ago
Glad to hear that!
@avocado9227 · 1 year ago
Excellent video. Keep it up!
@AZisk · 1 year ago
Thank you very much!
@fezroldan9545 · 1 year ago
Hey Alex, I saw your prior video. I'm on the verge of purchasing a new laptop. I'm not a gamer, but rather an engineer focused on learning analyst tools along with ML tools. I currently have a 15-inch MacBook Pro from 2018 and am looking to trade it in and buy a new MacBook Pro or build my own PC. Been with Mac for a while; still love Apple and MacBooks. Your videos have been a great help in determining which laptop to choose and in understanding fundamental applications based on a laptop's specifications and the end application. Keep it up Alex!
@zakpro4007 · 1 year ago
ultimately which mac did you buy?
@remigoldbach9608 · 1 year ago
Great variety of test in your videos ! Amazing !
@daveh6356 · 1 year ago
Cheers Alex, great to see something actually using the ANE. Something's seriously wrong with the M1 Ultra: it has double the M1 Max's resources, including a second CPU & ANE, and more GPU cores, even if they're a little weaker. Any chance you could check the CoreML config to see if the extra resources can be used?
@Techning · 1 year ago
As a PyTorch user these type of comparisons with benchmarks written in PyTorch running on the M1/M2 GPU would be awesome :) I believe the results will probably be similar though.
@MosWaki7 · 1 year ago
They're actually very different. TensorFlow has been optimized for Apple SoCs, but PyTorch is nowhere close in performance on Apple SoCs.
@Part-Time-Larry · 1 year ago
@@MosWaki7 was going to say just this.
@PeHDimebagPrague · 3 months ago
Is it possible to train using the ANE?
@edmondj. · 1 year ago
Thank you very much, you made it clear the m2 max is not worth buying :)
@AZisk · 1 year ago
Whoa! Thank you so much for the tip. I upgraded to M2 Max only because I have to make YouTube videos; otherwise I was super happy with my old M1 Max.
@edmondj. · 1 year ago
@@AZisk I understand. I will be waiting for you for the m3 too 😛👋.
@haralc · 1 year ago
Can you also compare the ANE with Intel's Compute Stick 2?
@MarkMenardTNY · 1 year ago
I think the total wattage usage of all of the Macs is lower than what my 3090 would draw. His saying the M1 Ultra drew like 59 watts almost made me laugh.
@mahdiamrollahi8456 · 1 year ago
In PyTorch we need to specify the device, like cpu or cuda. What do we have to do to use the GPU or ANE on Apple Silicon?
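A sketch of the device selection the question asks about: PyTorch exposes Apple's GPU as the "mps" (Metal Performance Shaders) backend, and the ANE is not reachable from PyTorch at all, only via Core ML. The torch calls below are real PyTorch APIs; the guard just keeps the sketch runnable where torch isn't installed.

```python
import importlib.util

def pick_device(mps_ok: bool, cuda_ok: bool) -> str:
    """Preference order: Apple GPU, then CUDA, then CPU."""
    if mps_ok:
        return "mps"
    if cuda_ok:
        return "cuda"
    return "cpu"

if importlib.util.find_spec("torch"):
    import torch

    device = torch.device(pick_device(
        torch.backends.mps.is_available(),
        torch.cuda.is_available()))
    # Tensors and models are moved to the chosen backend as usual:
    x = torch.ones(2, 2, device=device)
```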
@giovannimazzocco499 · 1 year ago
Did you consider repeating the benchmark for M3s?
@riccardrosen2073 · 1 year ago
One thing YT has taught me is that productivity is only about creating videos. Thanks.
@SGLogic-O-Maniac · 1 year ago
Was the ANE test model training or model inferencing? Can we expect to use the ANE for training PyTorch/TensorFlow anytime in the future? I am blown away by the efficiency of the M1/M2 lineup. I never thought I would say this, but I kinda want to trade in my Ryzen 5800H/RTX 3060M Legion laptop and a kidney or two for that shiny M2 Max 16-inch.
@eksjoker · 1 year ago
All info I've found so far on ANE for training has been a dead end. So I'd love to know what happened here as well.
@marcosarti9029 · 1 year ago
I was waiting for this video with all my heart! Finally!
@MiladMokhtari1995 · 1 year ago
these new macbooks are so cool i wish I could afford one :( great video!
@lyncheeowo · 1 year ago
Thanks for making the amazing video! One question: what's that chart? How do I visualize the CPU & GPU use on my MacBook Pro 14" 2023?
@samz905 · 1 year ago
Very helpful info, thank you! With Nvidia 40 series laptop coming out soon, it would be very interesting to see how M2 GPUs perform against the likes of 4090, 4080 in deep learning tasks
@woolfel · 1 year ago
in terms of raw compute power, the 40 series has more. The main limitation is memory and how much you need. if 16 or 24G is enough, paying the NVidia tax is better than apple tax. If you need more than 24G, M2Max might be the cheaper tax. Only way to know is take your workload, run some benchmarks and then figure which fits your needs.
@AdamTal · 1 year ago
Can you compare base stock m3 max to top stock m3 max (I don’t mean customization, just stock options) any ML benchmarks would be great. Thank you
@shiyammosies5975 · 5 months ago
If I have to use a local LLM, say for pair programming, now or in the future, which one would you suggest? Kindly help: an M2 Mac Mini with 16GB RAM and 1TB (external NVMe M.2 SSD), or an M2 Pro Mac Mini with 16GB RAM and a 512GB SSD? Here in India the cost difference is huge. Let me know which will serve me better in the long run for programming, mild video editing, and mostly running LLMs locally for pair programming, etc.
@SiaTheWizard · 1 year ago
Amazing examples and tests Alex. I was actually looking for YOLO test for Mac and this was the best video I've seen. Keep it up!
@revan6614 · 1 year ago
Thanks a lot for this! Does the M2 Pro Mac Mini (16-core neural engine) perform similarly? I'm trying to decide between the M1 Max Mac Studio and the M2 Pro Mac Mini for machine learning. They both have the same specs aside from the Studio having a 24-core GPU compared to the 16-core GPU of the Mac Mini. Would the difference of the M2 Pro over the M1 Max be more worth it than the 24-core GPU vs. 16-core?
@ZhuJo99 · 1 year ago
Depends on usage. The M2 Pro has more high-performance cores than the M1 Max. The M1 Max has more GPU cores.
@PerpetualPreponderer · 1 year ago
Schwarzenegger at the end of this video: "I'LL BE BACK..."
@theoldknowledge6778 · 1 year ago
Amazing video! Can you do a comparison video running YOLO? I’m very curious to know how many fps these machines can pull up and it’s a more visual test. Thank you!
@RitwikMukhopadhyayBlueThroated · 7 months ago
Hi @AZisk, could you please do a similar comparison with an Intel Core Ultra 9 laptop, not to mention checking its GPU and NPU?
@_Einar_ · 1 year ago
Lovely video as always! I've got a question: which MacBook would you consider 'worth the money' for a data scientist and/or an ML/AI engineer? Obviously it depends on the work one does, but it seems some tasks require terabytes of RAM, not gigabytes, and so upgrading to 96 won't cut it anyway. On the other hand, going too low will force one to always use cloud services. At this point, I've tried an M2 MacBook Pro 16" base model (16GB RAM), and I've run out of RAM computing scattering transforms on a relatively small (2GB) dataset. So the choice for me must be in the range of 32-96, I suppose.
@lucasalvarezlacasa2098 · 1 year ago
I'd probably say 64GB is the sweet spot.
@08.edenaristotingkir86 · 1 year ago
Should I get the 14-inch M2 Max with the 38-core GPU and 32GB of RAM, or the 30-core GPU and 64GB of RAM? Does RAM really play a big role in training?
@bioboy4519 · 4 months ago
yes
@BrickWilbur2020 · 1 year ago
Is there any way I can get photo analysis in Apple Photos to work faster?
@niharjani9611 · 1 year ago
Which IDE did you use on the M2 MacBook Pro? Hoping for an answer 😅
@as-qh1qq · 1 year ago
Subscribed! Was looking for some academic benches and got them. Any chance of benching on simulation workloads like fluid or EM sims?
@ritammukherjee2385 · 1 year ago
Hey Alex, can you make a video on running GPT4All Falcon on an M2 MacBook Air?
@watch2learnmore · 1 year ago
It would be great if you could revisit the Neural Engine's impact now that you're benchmarking with local LLMs.
@alphazutn1274 · 1 year ago
More on the ANE please. If you can find a test made for PyTorch/TensorFlow and that also has a version for CoreML and compare Windows vs Mac.
@MarkoChuValcarcel · 1 year ago
Thank you Alex for this great video. I'm very impressed with the results comparing CIFAR-10 with a 1024 batch size, because it shows the difference in RAM speed between Apple M processors and the RTX 3070; the RTX is faster doing the calculations. By the way, my desktop RTX 3070 took 301.787s, 270.702s, and 234.107s with 64, 128, and 1024 batch sizes. During the test my RTX used 178 watts!!! And we have to add the power usage of the CPU and other components.

Another interesting thing I've noticed: if you assume that every GPU core in the M1 Max, M2 Max, and M1 Ultra has the same processing power, you can almost predict the images/second starting from the M1 Max times, just by doing a simple arithmetic operation. This is very interesting because in deep learning training the performance of the M1 Ultra scales proportionally to the number of GPU cores it has, something that is not true for other tasks.

Finally, it would be very interesting to compare the inference speed with a Jetson Xavier NX or a similar board, because these NVIDIA boards cost more than US$1000. I think a Mac Mini M2 Pro could be faster than the Jetsons at inference and could replace them in some tasks. The Jetsons have many advantages over a Mac Mini of course, for example many encoding/decoding engines to process many video streams in parallel. But who knows, maybe someday we will see a robot with a Mac Mini with an M processor in it.
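The "simple arithmetic" the comment describes amounts to linear scaling by GPU core count (a sketch with made-up numbers, not figures from the video):

```python
def estimate_seconds(base_seconds: float, base_cores: int, target_cores: int) -> float:
    """Naive linear-scaling estimate: assume each GPU core contributes equally,
    so training time shrinks in proportion to the core count."""
    return base_seconds * base_cores / target_cores

# Hypothetical example: if a 24-core GPU takes 300 s on a workload,
# the same workload on a 38-core GPU would be estimated at ~189.5 s.
est = estimate_seconds(300.0, 24, 38)
```

In practice memory bandwidth, batch size, and framework overhead break this linearity, which is exactly what the benchmarks in the video probe.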
@JBoy340a · 1 year ago
Thanks for doing this test. It was quite eye opening. I am getting a M2 14" pro and was wondering about the requirements and how much upgrading the base models memory would help. Looks like I would have to upgrade the memory and go to Max to get a big performance increase. Since I have access to cloud based systems with GPUs and TPUs I think I will just go with the base system.
@AZisk · 1 year ago
glad it was helpful. Thanks for your courses 😊
@JBoy340a · 1 year ago
@@AZisk thanks for the kind words.
@rubenhanjrahing7324 · 1 year ago
We really need a lot more content just like this.
@abhinav9058 · 20 days ago
Do this again for the M4 Max.
@MeinDeutschkurs · 1 year ago
Great! Thank you very much. I'm interested in AUTOMATIC1111 API Stable Diffusion image-creation differences between these devices. And there is also a question: is there any way to bring Stable Diffusion onto the Neural Engine?
@pierrew1532 · 1 year ago
Sorry, so regarding the comparison with the PC laptop: is the extra $$$ of the M2 (Max) worth it for data science work?
@ADHDintothewild · 1 year ago
great dude!
@AZisk · 1 year ago
Thanks!
@haralc · 1 year ago
How can the Apple chips' Neural Engine performance differ within the same generation when they have the same number of Neural Engine cores?
@urluelhurl · 1 year ago
What are the advantages of using an M2 Max, which does not have a dedicated GPU, when for a similar price I could buy a P15 with an RTX 5000 that comes already equipped with Ubuntu and Nvidia data science packages?
@gdotone1 · 1 year ago
Interesting, the processors never get to 90%+ usage... is that the OS, microcode, or hardware?
@the_cluster · 1 year ago
The ANE benchmark results for the M1 Ultra are astonishing. Especially where the M2 Pro was faster. Indeed, according to the specification, the M1 Ultra chip contains twice as many Neural Engine cores - 32, while the rest have only 16. The M1 Ultra was supposed to be faster than any M1 / M2 Max or Pro; in this case, it does not matter that the M2 has a slightly higher clock speed or more GPU cores. However, 32 ANE cores do not always give a performance boost. Very strange.
@joloppo · 1 year ago
Can the Mx Max chips actually run 3+ monitors at the same time? Have you made a vid about this perhaps? Thinking of buying one.
@AZisk · 1 year ago
Yes and yes :)
@ZhuJo99 · 1 year ago
straight from Apple's website: M2 Max Simultaneously supports full native resolution on the built-in display at 1 billion colors and: Up to four external displays: Up to three external displays with 6K resolution at 60Hz over Thunderbolt and one external display with up to 4K resolution at 144Hz over HDMI
@blacamit · 1 year ago
Hello Alex! Can you tell me which Linux distributions would be ideal for starting a career in programming? I'm a newbie Java programmer.
@r12bzh18 · 1 year ago
What program are you using to monitor the GPU? I have installed something called iStats, but yours looks a bit different. Great video! I installed tensorflow-macos and tensorflow-metal in a venv virtual environment, but sometimes I get errors and it stops. Tricky install!
@AZisk · 1 year ago
iStatistica
@aceflamez00 · 1 year ago
istatistica
@arhanahmed8123 · 1 year ago
Hey Alex, it's nice to see that TensorFlow is working well on the M2 chip. Anyway, where do you live, Alex?
@jackyhuang6034 · 1 year ago
I want to learn machine learning. Should I get the M1 Pro, which is way cheaper, or the latest M2 Pro?
@林彥承-l6e · 1 year ago
This is the video I need. Thank you!!
@42Odyssey · 1 year ago
Hi Alex, thanks for this video ! (and your interesting channel :) ) I vote for a DALL·E alternative to run on your M1 Ultra/MacBooks arsenal to generate funny images for the next video ! :)
@xavhow · 1 year ago
In the future, if these ML frameworks can tap into the M1/M2's Neural Engine and cooperate with the GPU, could it be even faster?
@_xkim00 · 1 year ago
Great comparisons. I am planning to get 96GB of RAM for my DL models as well.
@henryjiang9990 · 1 year ago
So which one should I get?
@kingcoherent · 1 year ago
Thanks for this! I ended up opting for a 24GB M2 Air because I thought it would be a while before Apple Silicon would be at all useful for (real-world) DL; perhaps I'll be needing another machine sooner than expected! Doesn't the Nvidia have almost 6K cores? I would absolutely expect it to trounce the Apples, even though it's old. I was most surprised by how close some of these benchmarks were. But ultimately, until more of the DL frameworks add proper support for Apple Silicon, it's a bit of a moot point, and I imagine most (Mac-based) developers will continue to use cloud resources for their work. Of course, once there is proper CoreML support in TensorFlow / PyTorch, then you may train/refine on a laptop.
@zyzzyva303 · 1 year ago
Nice video Alex.
@carloslemos6919 · 1 year ago
Any idea how to run OpenAI Whisper on the Apple Neural Engine?
@cosmincadar3655 · 1 year ago
Statistics look cool, but can someone explain which end-user use cases can benefit from ANEs? E.g., if I own one of these MacBooks, in which scenario would I benefit the most from ANEs? Thanks.
@rupeshvinaykya4202 · 1 year ago
Which software did you use for screen recording? Please share.
@AZisk · 1 year ago
On mac i use this : prf.hn/click/camref:1100libNI. more specifically the toolbox has different screen cap features
@FranciscoLeite-w3i · 1 year ago
Hi Alex, do you think the Mac Studio M2 Max with the 30-core GPU would be enough for machine learning?
@cipritom · 1 year ago
Very informative! Thank you!
@lula5425 · 1 year ago
Sir, please test engineering software in parallel on the Mac, like CAD and CAE: SolidWorks or Ansys.
@Bikameral · 1 year ago
Thank you Peter and Alex for the video. Could you do this with pytorch as well ?
@aady392 · 1 year ago
Great video Alex. Thanks so much. Did you try the M2 with 24GB in a similar test?
@MilushevGeorgi · 10 months ago
15:00 Why is the M1 Max doing better than the M1 Ultra?
@cheettaah · 1 year ago
Wondering why Apple didn't put the Ultra into a MacBook. 60W is still much less than the 175W graphics cards in Windows laptops.
@AIBeautyAlchemist · 1 year ago
I've been playing with Stable Diffusion recently. I first installed it on my M1 Pro MacBook Pro; it works, but slowly. Now I'm trying to train some LoRAs. Does that work on M1 chips? Does anyone have experience training Stable Diffusion models or LoRAs on a Mac, and how does it compare to RTX GPUs???
@arijr71 · 1 year ago
Thanks, great video and comparison between the different MBP models! Planning to ditch my Intel MBP and (maybe) my Linux RTX PC. The Apple Neural Engine has insane potential for computation on Apple Silicon Macs. Is it really the case that the Apple Neural Engine does not expose any proper API for a generic (Python) OSS tech-stack DL training workflow? Only Apple CreateML for training and CoreML for inference?
@trustmub1 · 1 year ago
Acoustic windscreen (whatever that is)... lol, that killed me 🤣
@usptact · 1 year ago
Alex, how did you run tf-metal-experiments? I followed all the steps and got errors during runtime. Some Google searching showed this was due to the tensorflow-macos version being too high: 2.11 installed in my case.
@AZisk · 1 year ago
try downgrading tf
@usptact · 1 year ago
@@AZisk Thanks, that did the trick. The version 2.11 was no good, downgrading to 2.9 worked!
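For anyone hitting the same wall, the downgrade described above can be pinned explicitly (a sketch: the tensorflow-macos 2.9 / tensorflow-metal 0.5.0 pairing follows Apple's published compatibility table, so check the current tensorflow-metal release notes before copying):

```shell
# Fresh virtual environment with pinned, known-compatible versions
python3 -m venv tf-venv
source tf-venv/bin/activate
pip install "tensorflow-macos==2.9" "tensorflow-metal==0.5.0"

# Sanity check: the Metal plugin should expose the Apple GPU
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```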
@AZisk · 1 year ago
@@usptact nice!
@TheNameOfJesus · 1 year ago
This video is interesting. But the reason I'm hesitating to get the M2 is that I don't know how much it will speed up my neural-engine limited application. Basically, my app is the MacOS Shortcuts app which is running in an infinite loop doing an OCR of the entire 4K screen. It takes 4.1 seconds to perform OCR on the entire screen. If the M2 can do the same work in less than that, I'll probably buy it. I realize that my situation may also be dependent upon SSD speed, because the screen has to be put into a file to be sent to the Shortcut which does the OCR. Is there a way to do an OCR of the screen without putting the screen into a file first?
@antor44 · 1 year ago
Very interesting video, but the data is explained too quickly; I have to set the YouTube player speed to at most 75%.
@krosser2123 · 1 year ago
Nice reference for Star Trek fans. 👏
@renaudg · 1 year ago
Thanks for the video ! It's just a shame that even after talking to this guy who stressed the importance of memory, you still go ahead and compare the M2 Pro (dragged down by its 16GB and clearly swapping) to the others. Chances are it would do much much better with the same amount of RAM. Isn't there a test out there that uses a much smaller dataset ?
@AZisk · 1 year ago
i have what i have :)
@renaudg · 1 year ago
@@AZisk Sure ! But it's easy for the viewer to forget that the M2 Pro is at a huge disadvantage here, especially since you don't mention it again after the intro, not even when you add a "sad trombone" sound to the M2 Pro's first bad result ! Maybe "M2 Pro (16GB)" in the comparison charts would have made the caveat more obvious too.
@cgmiguel · 1 year ago
Nice benchmarks! I'm wondering how many kWh show up on your energy bill 😮
@AZisk · 1 year ago
only shows up as a spike when i bring a desktop with an RTX card in here :)
@peterwan816 · 1 year ago
I would love to know what kinds of models the Neural Engine can handle, and how fast it handles them in comparison to the CPU and GPU. 😂😂😊
@balubalaji9956 · 1 year ago
asitop is not working anymore! Anybody had any luck?
@PizzaSlinga · 2 months ago
There's a way to skirt performance bottlenecks, and when you can do it, it's just better. Look into transfer learning, y'all. If you're doing this on a laptop, someone will always have a better model (approaching).
@furoriangurinu6601 · 1 year ago
The point about LLMs not being able to run is false. A friend of mine ran a huge Vision Transformer and a huge BERT model on her M1 Air with the 16GB base config. The swap is insane on these machines.
@onclimber5067 · 1 year ago
Would be amazing to see this test done with the new M3 models, since they are supposed to have a much better GPU.
@axedemon2274 · 1 year ago
Hey, I am a student studying machine learning. Can I buy the M2 Pro for coding and stuff?
@stanchan · 1 year ago
I don't get why you would use a laptop for training and inference. A better comparison would be an A6000 Ada vs. a Mac Studio M1 Ultra.
@MarkPharaoh · 1 year ago
Because many of us don't sit in a single room our entire working lives.
@martini380 · 1 year ago
Why would you do that? A mobile 3070 is on par with the MacBooks. Even a regular 3070 would be plenty against the M1 Ultra; an RTX 6000 Ada just destroys it.
@ef7496 · 1 year ago
You made it so technical that for beginners it's difficult to understand which one was the best overall. Can someone tell us the final result of which one has the best results overall? Thanks.
@jaskiratanand586 · 1 year ago
The RTX 3070 is not a GPU for ML training; one targeted at ML is the RTX 3080 with 16GB VRAM.
@davidchodur7773 · 1 year ago
I think you can also utilise TensorFlow with the ML cores. I tried it on an M1 Air and did some machine learning where the CPU and GPU were basically idle.
@choc3732 · 1 year ago
How was the performance? Also how did you do this? lol