Vast AI: Run ANY LLM Using Cloud GPU and Ollama!

6,311 views

WorldofAI

1 day ago

Comments: 13
@intheworldofai 4 months ago
Want to HIRE us to implement AI into your Business or Workflow? Fill out this work form: td730kenue7.typeform.com/to/WndMD5l7
💗 Thank you so much for watching, guys! I would highly appreciate it if you subscribe (turn on the notification bell), like, and comment what else you want to see!
📆 Book a 1-on-1 Consulting Call With Me: calendly.com/worldzofai/ai-consulting-call-1
🔥 Become a Patron (Private Discord): patreon.com/WorldofAi
🧠 Follow me on Twitter: twitter.com/intheworldofai
Love y'all and have an amazing day, fellas. Thank you so much!
@jackpre3399 4 months ago
If we're planning to run the Llama 3.1 405B model on a cloud GPU, the cost will be: GPU cost: using NVIDIA H100 GPUs at $3.33/hr, 3 GPUs over 10 hours comes to around $99.90. Storage cost: at $0.03/GB/hr, 820GB over 10 hours comes to about $246.00. Total estimated cost: $345.90 for a 10-hour session!!!
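That arithmetic checks out. A minimal sanity-check sketch in Python (the hourly rates are the commenter's figures, not official Vast.ai pricing):

```python
# Rough cost estimate for a 10-hour Llama 3.1 405B session on rented H100s.
# All rates are the figures quoted in the comment above, not official pricing.
GPU_RATE_PER_HR = 3.33          # $/GPU/hr for an NVIDIA H100 (per the comment)
STORAGE_RATE_PER_GB_HR = 0.03   # $/GB/hr (per the comment)
NUM_GPUS = 3
STORAGE_GB = 820
HOURS = 10

gpu_cost = GPU_RATE_PER_HR * NUM_GPUS * HOURS               # 3.33 * 3 * 10 = 99.90
storage_cost = STORAGE_RATE_PER_GB_HR * STORAGE_GB * HOURS  # 0.03 * 820 * 10 = 246.00
total = gpu_cost + storage_cost                             # 345.90

print(f"GPU: ${gpu_cost:.2f}  Storage: ${storage_cost:.2f}  Total: ${total:.2f}")
```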
@magicandr 4 months ago
That's crazy
@randomfacts11223 4 months ago
Bro, you don't need a 4080 to run a model like phi-2. A GTX 1660 Ti is enough for that small model. Even Llama 3.1 8B can run on that.
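For anyone who wants to try a small model locally, a minimal sketch using the ollama Python client (assumes the Ollama server is running and the model was pulled beforehand, e.g. with `ollama pull phi`):

```python
# Minimal sketch: chat with a small local model through the ollama Python client.
# Assumes the Ollama server is running and the model has already been pulled.
import ollama

response = ollama.chat(
    model="phi",  # phi-2; small enough for a few GB of VRAM when quantized
    messages=[{"role": "user", "content": "Explain VRAM in one sentence."}],
)
print(response["message"]["content"])
```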
@PhuongTran-ud2br 4 months ago
My laptop's GPU is a 3060 with 6GB VRAM. Can I run Llama 3 on this, or some smaller model? Sorry for my bad English :D
@j0hnc0nn0r-sec 4 months ago
I run them both on a 1080 with 16 GB RAM. No problems.
@brunodutra5566 4 months ago
@PhuongTran-ud2br Yes, you can run it locally.
@faisalferoz6859 1 month ago
I need to run training on a 4-terabyte dataset. Kindly suggest a solution.
@algorsmith8381 4 days ago
Damn, here in Jan 2025, how did I miss this video 😑 I missed out on so much money.
@JayS.-mm3qr 4 months ago
So which models would one run on this thing, and why? I've used LLMs in LM Studio, AnythingLLM, and Ollama with a 4060 Ti 16GB. Never had a problem. But I don't get the hugest models. Are the huge ones really so much better that it would justify paying for this? Can I use Claude Sonnet on this Vast AI site?
@intheworldofai 4 months ago
[Must Watch]:
Zed AI: Opensource AI Code Editor - FREE Claude 3.5 Sonnet + Ollama Support!: kzbin.info/www/bejne/nJCue4Ssep6Yq5o
Cursor Composer: Develop a Full-stack App Without Writing ANY Code!: kzbin.info/www/bejne/i4DaeoypnpWFpKs
Aider UPDATE: Generate Full-Stack Applications! Huge Update! (Opensource): kzbin.info/www/bejne/n3PHpGVjnJiCq7M
@intheworldofai 4 months ago
Replit Agent: Easiest Way for ANYONE To Create ANY Application! - kzbin.info/www/bejne/h4ekZmmsidJ0Z5I
@flrn84791 2 months ago
"12 of 24 GB VRAM" for phi 2 🤡🤣 First, learn to read, it says "CAN RUN ON 12GB TO 24GB VRAM GPUs", not "NEEDS 12 TO 24GB OF VRAM". Second, that's not even remotely true, phi 2 at Q4 quantization needs 1.6GB of VRAM, so any GPU with 2 GB of VRAM can load and run it. Third, yikes, way to try and sell vast AI uh. Alright bye, thanks for the laugh.