OLLAMA | Want To Run UNCENSORED AI Models on Mac (M1/M2/M3)

2,975 views

AI DevBytes

1 day ago

OLLAMA | How To Run UNCENSORED AI Models on Mac (M1/M2/M3)
One sentence video overview: How to use ollama on a Mac running Apple Silicon.
🚀 What You'll Learn:
* Installing Ollama on your Mac M1, M2, or M3 (Apple Silicon) - ollama.com
* Downloading Ollama models directly to your computer for offline access
* How to use ollama
* How to harness the power of open-source models like llama2, llama2-uncensored, and codellama locally with Ollama.
Chapters
00:00:00 - Intro
00:00:15 - Downloading Ollama
00:01:43 - Reviewing Ollama Commands
00:02:29 - Finding Open-Source Uncensored Models
00:05:39 - Running the llama2-uncensored model
00:07:25 - Listing installed ollama models
00:09:18 - Removing installed ollama models
🦙 Ollama Commands:
View Ollama Commands: ollama help
List Ollama Models: ollama list
Pull Ollama Models: ollama pull model_name
Run Ollama Models: ollama run model_name
Delete Ollama Models: ollama rm model_name
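The commands above combine into a typical first session. A minimal sketch (model name taken from the video; the guard keeps the script a harmless no-op on machines without Ollama installed):

```shell
# Typical Ollama session using the commands listed above.
# Skipped gracefully if Ollama (ollama.com) is not installed.
if command -v ollama >/dev/null 2>&1; then
  ollama pull llama2-uncensored        # download the model (several GB)
  ollama list                          # confirm it appears in the installed list
  ollama run llama2-uncensored "Hi"    # one-shot prompt; omit the prompt for interactive chat
  ollama rm llama2-uncensored          # remove the model to free disk space
else
  echo "ollama not installed"
fi
```

Inside an interactive `ollama run` session, type /bye to exit.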
📺 Other Videos you might like:
🖼️ Ollama & LLava | Build a FREE Image Analyzer Chatbot Using Ollama, LLava & Streamlit! • Mastering AI Vision Ch...
🤖 Streamlit & OLLAMA - I Build an UNCENSORED AI Chatbot in 1 Hour!: • Build an UNCENSORED AI...
🚀 Build Your Own AI 🤖 Chatbot with Streamlit and OpenAI: A Step-by-Step Tutorial: • Build AI Chatbot with ...
🔗 Links
Ollama - ollama.com
Ollama Models - ollama.com/models
🧑‍💻 My MacBook Pro Specs:
Apple MacBook Pro M3 Max
14-Core CPU
30-Core GPU
36GB Unified Memory
1TB SSD Storage
ℹ️ Other info you may find helpful👇
Check whether your computer can run a given LLM with this HuggingFace tool: huggingface.co/spaces/Vokturz/can-it-run-llm
Remember that you will need a GPU with sufficient memory (VRAM) to run models with Ollama. If you are unsure how much GPU memory you need, check out HuggingFace's "Model Memory Calculator" here: huggingface.co/docs/accelerate/main/en/usage_guides/model_size_estimator
Also, here is an article that walks you through the exact mathematical calculation, "Calculating GPU memory for serving LLMs": www.substratus.ai/blog/calculating-gpu-memory-for-llm
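To make that calculation concrete: the substratus.ai article estimates memory (GB) ≈ parameters-in-billions × (quantization bits ÷ 8) × 1.2, where the 1.2 factor adds roughly 20% overhead. A quick sketch (the function name is my own):

```shell
# Rough VRAM estimate per the substratus.ai formula:
#   GB ≈ params_in_billions * (bits_per_weight / 8) * 1.2   (20% overhead)
estimate_vram_gb() {
  awk -v p="$1" -v q="$2" 'BEGIN { printf "%.1f\n", p * (q / 8) * 1.2 }'
}

estimate_vram_gb 7 4     # llama2 7B at 4-bit quantization -> ~4.2 GB
estimate_vram_gb 70 4    # a 70B model at 4-bit            -> ~42.0 GB
estimate_vram_gb 7 16    # llama2 7B at fp16               -> ~16.8 GB
```

By this estimate, a 36GB unified-memory Mac like the one above has comfortable headroom for 7B and 13B models, while 70B models are a tight fit even with 4-bit quantization.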
_____________________________________
🔔 Subscribe to @AIDevBytes for more tutorials and coding tips
👍 Like this video if you found it helpful!
💬 Share your thoughts and questions in the comments section below!
GitHub: github.com/AIDevBytes
🏆 My Goals for the Channel 🏆
_____________________________________
My goal for this channel is to share the knowledge I have gained over 20+ years in the field of technology in an easy-to-consume way. My focus will be on offering tutorials related to cloud technology, development, generative AI, and security-related topics.
I'm also considering expanding my content to include short videos focused on tech career advice, particularly aimed at individuals aspiring to enter "Big Tech." Drawing from my experiences as both an individual contributor and a manager at Amazon Web Services, where I currently work, I aim to share insights and guidance to help others navigate their career paths in the tech industry.
_____________________________________
#ollama #mac #apple #llama2 #aichatbot #ai

Comments: 5
@JoshFKDigital · 2 months ago
Should post the commands in the description 😁
@AIDevBytes · 2 months ago
👍 Thanks for the feedback! The commands are now in the description.
@everry3357 · 2 months ago
How's the response time with your MacBook Pro specs? Does it come anywhere close to ChatGPT-4?
@AIDevBytes · 2 months ago
Once the model loads into GPU memory for the first time, follow-up responses seem only slightly slower than GPT-4. It's honestly not too noticeable if you are running on hardware specs similar to or better than those I listed in the description.