Love being able to run this locally. Great vid 👏👏👏
@alexanderheppening 4 months ago
Great video, really helpful. THX
@Nick__X 5 months ago
awesome content dude!
@raminderpalsingh123 4 months ago
Great video. I am still getting charged for gpt-4o usage even though my llamafile seems to be working fine. Is that expected? Also, how can I switch the code to point to gpt-4o-mini, to minimize my OpenAI API costs even further? THANKS A BUNCH!
@SixTimesNine 6 months ago
Very good vid - thanks!
@vladimirmiletin4486 6 months ago
Nice, thanks
@RealLexable 3 months ago
Do you know something about the data protection rules of serper for using their Google API?
@MikeHeavers 3 months ago
I think generally it’s not so much what tool you use as what you’re scraping and what the intended use is. Most sites have their own stipulations or licenses on what you can and can’t do with their site / content
@procrastinatingrn3936 3 months ago
So a free perplexity?
@86dansu 4 months ago
But you used an OpenAI key and a Serper key, so this might not be running entirely locally, right?
@MikeHeavers 4 months ago
The RAG bit, grabbing content from the internet, will require being online, correct. Inference is local, provided you are using Llamafile.
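To be concrete about the local half: llamafile exposes an OpenAI-compatible HTTP API on localhost (port 8080 by default), so only the search step touches the network. A minimal stdlib sketch; the function name and the placeholder model string are my own, and actually sending the request requires a llamafile server running:

```python
import json
import urllib.request

def build_chat_request(prompt, base_url="http://localhost:8080/v1"):
    """Build a chat request for llamafile's OpenAI-compatible endpoint.

    llamafile listens on port 8080 by default and needs no real API key,
    but the OpenAI wire format still expects an Authorization header.
    """
    body = json.dumps({
        "model": "LLaMA_CPP",  # llamafile serves whichever model it was launched with
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer no-key",
        },
    )

# With a llamafile server running locally:
# with urllib.request.urlopen(build_chat_request("Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the base URL points at localhost, nothing in the inference step leaves your machine.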
@iliadobrevqqwqq 6 months ago
Tks
@dbreardon 6 months ago
Now everyone needs a new computer and a CUDA graphics card, which are massively expensive due to crypto mining and now AI servers. Local runs way too slow on my 3-4 year old laptop. Will have to see if the new Intel and AMD chips with embedded NPUs provide any support for running multiple LLMs on local machines.
@practical-ai-prototypes 5 months ago
Fair point; local performance is not as good as running on cloud infrastructure. Seems like "AI-enabled" PCs will be the new trend.
@gymlin123 6 months ago
But don't I still have to pay for tokens?
@practical-ai-prototypes 5 months ago
You don't have to pay for tokens if you run a local model.
@practical-ai-prototypes 6 months ago
Update - I made an `app-input.py` script that allows you to create your own agent and task just by answering some questions in the command line.
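For anyone who can't run it, the idea is roughly this; the function name and the injectable `ask` parameter are my own illustrative choices, not the actual script's code:

```python
def collect_agent_spec(ask=input):
    """Build an agent/task spec by asking questions on the command line.

    `ask` is injectable so the function can also be driven non-interactively;
    the field names mirror the role/goal/backstory and task description
    that an agent framework like CrewAI expects.
    """
    return {
        "role": ask("What role should the agent play? "),
        "goal": ask("What is the agent's goal? "),
        "backstory": ask("Give the agent a one-line backstory: "),
        "task": ask("Describe the task the agent should perform: "),
    }

# spec = collect_agent_spec()  # answer the four prompts interactively
```

The returned dict can then be fed into whatever agent/task constructors the rest of the script uses.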
@JofnD 6 months ago
Seems very useful! Is there an update video for this?
@practical-ai-prototypes 5 months ago
@@JofnD No, but the same instructions apply; just run `python app-input.py` from the command line.
@mattportnoyTLV 3 months ago
Probably should've mentioned this is in beta. Don't you think?
@themax2go 3 months ago
You talk about running locally but use OpenAI... sad
@MikeHeavers 3 months ago
Yeah, this is a fair point. I should do another version where I use an open model.
@ryanbthiesant2307 6 months ago
Not good. Does not show the problems of CrewAI working with Ollama or any other LLM. CrewAI persistently asks for an OpenAI key. The good part: I discovered the Mozilla LLM server (llamafile), thank you. CrewAI is really bad.
@mandelafoggie9359 5 months ago
So what is better than CrewAI?
@practical-ai-prototypes 5 months ago
You don't have to use OpenAI or the API key; you can just remove it from the code. The Ollama sample file in the GitHub repo shows you how to use Ollama. Note that Ollama is not an LLM itself; it just lets you run LLMs locally.
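If CrewAI's OpenAI-backed default still complains about a missing key even when you point it at a local model, a common workaround is to set placeholder environment variables before the framework is imported; the exact variables your version reads may differ, so treat this as a sketch:

```python
import os

# A placeholder satisfies the key check; no request ever reaches OpenAI
# when the base URL points at a local server (Ollama's default port is 11434).
os.environ["OPENAI_API_KEY"] = "sk-no-key-required"
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"
```

With those set, the OpenAI-compatible client talks to the local Ollama endpoint and the fake key is never validated.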
@practical-ai-prototypes 5 months ago
@@mandelafoggie9359 You can try AutoGPT if you want; I found it harder to use.
@ryanbthiesant2307 5 months ago
@@practical-ai-prototypes Thank you, I will check that out again. I think it still asks for a key, even a fake one, even if you want to use Ollama.
@dougsilkstone 4 months ago
Having a blast with your content, Mike. We're doing really similar things on YT 🫡 thanks for the awesome vids
@MikeHeavers 4 months ago
Wow, a ton of content. I'll check this out. Thanks for the comment.
@dougsilkstone 4 months ago
@@MikeHeavers Ah, just finding my feet. It's good fun!