LM Studio 0.3.4 ships with an MLX engine for running on-device LLMs super efficiently on Apple Silicon Macs.
MLX support in LM Studio 0.3.4 includes:
Search & download any supported MLX LLM from Hugging Face (just like you've been doing with GGUF models)
Use MLX models via the Chat UI, or from your code through an OpenAI-compatible local server running on localhost (see the first sketch after this list)
Enforce LLM responses in specific JSON formats, thanks to Outlines (see the structured-output sketch below)
Use vision models like LLaVA via the chat UI or the API, thanks to mlx-vlm (see the vision sketch below)
Load and run multiple simultaneous LLMs. You can even mix and match llama.cpp and MLX models!
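
Here's a minimal sketch of calling an MLX model from Python through the local server. It assumes the server is running on LM Studio's default port 1234, and the model id is just an example; use whichever model you've actually loaded:

```python
from openai import OpenAI

# Point the standard OpenAI client at LM Studio's local server.
# The api_key can be any placeholder; it isn't checked locally.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="mlx-community/Meta-Llama-3.1-8B-Instruct-4bit",  # example model id
    messages=[{"role": "user", "content": "Explain MLX in one sentence."}],
)
print(response.choices[0].message.content)
```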
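
For structured output, here's a sketch assuming the server accepts an OpenAI-style response_format carrying a JSON schema (the schema, model id, and prompt below are illustrative):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# An illustrative schema: force the reply to be an object
# with a string "title" and an integer "year".
schema = {
    "name": "book",
    "schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "year": {"type": "integer"},
        },
        "required": ["title", "year"],
    },
}

response = client.chat.completions.create(
    model="mlx-community/Meta-Llama-3.1-8B-Instruct-4bit",
    messages=[{"role": "user", "content": "Name a classic sci-fi novel."}],
    response_format={"type": "json_schema", "json_schema": schema},
)
print(response.choices[0].message.content)  # JSON matching the schema
```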
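
And a vision sketch, sending a base64-encoded image using the OpenAI-style multimodal message format; the model id and file path are placeholders:

```python
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Encode a local image as a base64 data URL.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="mlx-community/llava-v1.6-mistral-7b-4bit",  # placeholder vision model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```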
lmstudio.ai/bl...
🔗 Links 🔗
❤️ If you want to support the channel ❤️
Support here:
Patreon - patreon.com/1littlecoder
Ko-Fi - ko-fi.com/1lit...
🧭 Follow me on 🧭
Twitter - twitter.com/1littlecoder
LinkedIn - linkedin.com/in/amrrs