In this video, I show you how to accelerate Transformer inference with Optimum, an open source library by Hugging Face, and Intel OpenVINO.
I start from a Vision Transformer model fine-tuned for image classification and quantize it with OpenVINO. Running benchmarks on an AWS c6i instance (Intel Ice Lake architecture), we speed up the original model by more than 20% and divide its size by almost 4, with just a few lines of simple Python code and only a tiny accuracy drop!
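For context, here is a small self-contained sketch of the arithmetic behind post-training INT8 quantization, which is why the quantized model is roughly 4x smaller (1 byte per weight instead of 4). This is not the Optimum/OpenVINO API shown in the video, just the underlying affine scale/zero-point scheme, with illustrative weight values:

```python
# Illustrative sketch of affine INT8 quantization (not the Optimum API):
# FP32 weights are mapped to 8-bit integers via a scale and zero point.

def quantize(weights, num_bits=8):
    """Map floats to integers: w_q = clamp(round(w / scale) + zero_point)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / (qmax - qmin) or 1.0  # avoid div-by-zero
    zero_point = round(qmin - w_min / scale)
    q = [min(qmax, max(qmin, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q_weights, scale, zero_point):
    """Recover approximate floats: w ≈ (w_q - zero_point) * scale."""
    return [(q - zero_point) * scale for q in q_weights]

weights = [-0.42, 0.0, 0.17, 0.98, -1.0]  # toy FP32 weights
q, scale, zp = quantize(weights)
recovered = dequantize(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))

# FP32 stores 4 bytes per weight, INT8 stores 1: a 4x size reduction,
# at the cost of a small per-weight rounding error (at most ~one step).
size_ratio = 32 // 8
```

The roughly 4x size reduction mentioned above comes directly from the 32-bit-to-8-bit storage change; the "tiny accuracy drop" corresponds to the bounded rounding error each weight incurs.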
⭐️⭐️⭐️ Don't forget to subscribe to be notified of future videos ⭐️⭐️⭐️
⭐️⭐️⭐️ Want to buy me a coffee? I can always use more :) www.buymeacoffee.com/julsimon ⭐️⭐️⭐️
- Optimum: github.com/huggingface/optimum
- Optimum docs: huggingface.co/docs/optimum/o...
- Intel OpenVINO: docs.openvino.ai/latest/index...
- Original model: huggingface.co/juliensimon/au...
- Code: gitlab.com/juliensimon/huggin...