This video shows how Arm CPUs, using Neon, SVE, SME, and Kleidi technology, are transforming AI and machine learning, in many cases eliminating the need for a dedicated NPU or GPU. I'll explore matrix multiplication optimization, highlight Arm's collaboration with Meta on its Llama LLM, and demonstrate the speed of a large language model running on an Android smartphone using only CPU acceleration. Developers, researchers, and anyone interested in high-performance AI on standard hardware should watch.
---
Unleashing the Power of AI on Mobile: LLM Inference for Llama 3.2 Quantized Models with ExecuTorch and KleidiAI - community.arm....
Arm Developer Hub: arm.com/dev-hub
#garyexplains