The best way to support this channel? Comment, like, and subscribe!
@hpongpong · 3 months ago
Great concise presentation. Thank you so much!
@DevelopersDigest · 3 months ago
Thank you! 🙏
@ryanroman6589 · 3 months ago
this is super valuable. awesome vid!
@DevelopersDigest · 3 months ago
Thank you! 🙏
@rembautimes8808 · 3 months ago
Thanks, very nice tutorial
@DevelopersDigest · 3 months ago
Thank you
@brunozwietisch · 25 days ago
I'm looking to learn how to use Llama. What's the minimum configuration needed to run the 8B version? Here in Brazil the dollar-to-real exchange rate is 6 to 1, and by the end of the month the budget gets tight for those of us who want to learn.
@DevelopersDigest · 24 days ago
Groq has a free tier for Llama, as does Cloudflare!
@alejandrogallardo1414 · 3 months ago
For models around 70B, I'm getting timeout issues with vanilla Ollama. It works on the first pull/run, but times out when I need to reload the model. Do you have any recommendations for keeping the same model loaded persistently?
@DevelopersDigest · 3 months ago
github.com/ollama/ollama/pull/2146
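(For context: Ollama's API accepts a keep_alive field that controls how long a model stays loaded after a request; a negative value keeps it loaded indefinitely. A minimal sketch of such a request body; the model name and prompt below are assumptions, not from the video.)

```python
import json

# Sketch of an Ollama /api/generate request body that keeps the model
# resident in memory. keep_alive: -1 asks the server to keep the model
# loaded indefinitely, avoiding the reload timeout on subsequent calls.
# Model name and prompt are placeholder assumptions.
payload = {
    "model": "llama3.1:70b",
    "prompt": "Hello",
    "keep_alive": -1,
}
request_body = json.dumps(payload)
print(request_body)
```

You could POST this to `http://localhost:11434/api/generate`; setting the `OLLAMA_KEEP_ALIVE` environment variable on the server achieves a similar effect globally.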
@rehanshaikh2708 · a month ago
How can I use this endpoint with LangChain's ChatOllama?
@nexuslux · 3 months ago
Can you use Open WebUI?
@danielgannage8109 · 3 months ago
This is very informative, thanks :) Curious why you used a g4dn.xlarge GPU ($300/month) instead of a t3.medium CPU ($30/month)? I assumed the 8-billion-parameter model was out of reach on regular hardware. What is the largest model the g4dn.xlarge GPU can handle? For perspective, I have a $4K MacBook (16 GB RAM) that can really only run the large (150 million) or medium (100 million parameter) sized models, and I think the t3.medium CPU on AWS can only run the 50-million-parameter small model.
@dylanv3044 · 3 months ago
Maybe a dumb question: how do you turn the streamed data you receive into readable sentences?
@DevelopersDigest · 3 months ago
You could accumulate tokens, split on sentence-ending punctuation (., !, ?, etc.), and then send each response chunk after a grouping function like that.
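(A minimal sketch of that accumulate-and-split idea, assuming the stream yields plain token strings; the example tokens are made up.)

```python
import re

def sentences_from_stream(tokens):
    """Accumulate streamed tokens and yield complete sentences.

    A sentence is treated as complete at ., !, or ? followed by
    whitespace -- a simple heuristic, not a full sentence tokenizer.
    """
    buffer = ""
    for tok in tokens:
        buffer += tok
        # Split off any complete sentences accumulated so far.
        parts = re.split(r"(?<=[.!?])\s+", buffer)
        # Everything except the last piece is a finished sentence.
        for sentence in parts[:-1]:
            yield sentence.strip()
        buffer = parts[-1]
    if buffer.strip():
        yield buffer.strip()

# Example with made-up tokens:
stream = ["Hel", "lo there. ", "How are", " you? ", "Fine"]
print(list(sentences_from_stream(stream)))
# -> ['Hello there.', 'How are you?', 'Fine']
```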
@ConAim · 2 months ago
Stay away from AWS; it will cost you an arm and a leg in the long run.