Automate AI Research with Crew.ai and Mozilla Llamafile

11,984 views

Practical AI through Prototypes


1 day ago

Comments: 30
@mattc3265 6 months ago
Love being able to run this locally. Great vid 👏👏👏
@alexanderheppening 4 months ago
Great video, really helpful. THX
@Nick__X 5 months ago
awesome content dude!
@raminderpalsingh123 4 months ago
Great video. I am still getting charged for gpt-4o usage even though my llamafile seems to be working fine. Is that expected? Also, how can I switch the code to point to gpt-4o-mini, to further minimize my OpenAI API costs? THANKS A BUNCH!
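One way to do the model switch is to hand each agent an explicit LLM. This is a minimal sketch, not the video's code; the class and argument names follow langchain_openai and crewai and may differ between versions.

```python
# Minimal sketch: request the cheaper gpt-4o-mini model explicitly instead of
# letting CrewAI fall back to its default OpenAI model.
from crewai import Agent
from langchain_openai import ChatOpenAI

cheaper_llm = ChatOpenAI(model="gpt-4o-mini")  # cheaper than gpt-4o

researcher = Agent(
    role="Researcher",
    goal="Summarize recent AI research",
    backstory="A cost-conscious research assistant.",
    llm=cheaper_llm,  # agents without an explicit llm use the library's default model
)
```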
@SixTimesNine 6 months ago
Very good vid - thanks!
@vladimirmiletin4486 6 months ago
Nice, thanks
@RealLexable 3 months ago
Do you know anything about Serper's data protection rules for using their Google search API?
@MikeHeavers 3 months ago
I think generally it's not so much what tool you use as what you're scraping and what the intended use is. Most sites have their own stipulations or licenses on what you can and can't do with their site/content.
@procrastinatingrn3936 3 months ago
So a free Perplexity?
@86dansu 4 months ago
But you used an OpenAI key and a Serper key, so this might not be running entirely locally, right?
@MikeHeavers 4 months ago
Correct, the RAG bit (grabbing content from the internet) requires being online. Inference is local, provided you are using Llamafile.
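For context, a minimal sketch of what local inference via Llamafile looks like, assuming the llamafile is already running with its default OpenAI-compatible server on localhost:8080 (the model string and key below are placeholders, not the video's exact code):

```python
# Minimal sketch: send a chat completion to the llamafile server on this
# machine instead of api.openai.com, so inference never leaves the laptop.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # llamafile's local OpenAI-compatible endpoint
    api_key="sk-no-key-required",         # no real key is needed for local inference
)

response = client.chat.completions.create(
    model="LLaMA_CPP",  # llamafile serves whatever model it was built with; the name is a placeholder
    messages=[{"role": "user", "content": "Summarize today's AI research headlines."}],
)
print(response.choices[0].message.content)
```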
@iliadobrevqqwqq 6 months ago
Tks
@dbreardon 6 months ago
Now everyone needs a new computer and a CUDA graphics card, which are massively expensive due to crypto mining and now AI servers. Local runs way too slow on my 3-4 year old laptop. Will have to see if new Intel and AMD chips with embedded NPUs provide any support for running multiple LLMs on local machines.
@practical-ai-prototypes 5 months ago
Fair point - local performance is not as good as running on cloud infrastructure. Seems like "AI-enabled" PCs will be the new trend.
@gymlin123 6 months ago
But don't I still have to pay for tokens?
@practical-ai-prototypes 5 months ago
You don't have to pay for tokens if you run a local model
@practical-ai-prototypes 6 months ago
Update - I made an `app-input.py` script that allows you to create your own agent and task just by answering some questions in the command line.
@JofnD 6 months ago
Seems very useful! Is there an update video for this?
@practical-ai-prototypes 5 months ago
@JofnD No, but the instructions are the same: just run `python app-input.py` from the command line.
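For anyone who can't locate the script, here is a rough sketch of what an `app-input.py`-style script could look like. This is a hypothetical reconstruction, not the author's actual file, and crewai argument names may vary by version.

```python
# Hypothetical reconstruction: build one CrewAI agent and one task from
# answers typed at the command line, then run the crew.
from crewai import Agent, Task, Crew

role = input("Agent role (e.g. 'AI research analyst'): ")
goal = input("Agent goal: ")
description = input("Task description: ")

agent = Agent(
    role=role,
    goal=goal,
    backstory="An agent configured interactively from the command line.",
    verbose=True,
)

task = Task(
    description=description,
    expected_output="A short written summary of the findings.",
    agent=agent,
)

# Run the single-agent crew and print whatever it produces.
crew = Crew(agents=[agent], tasks=[task])
print(crew.kickoff())
```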
@mattportnoyTLV 3 months ago
Probably should've mentioned this is in beta. Don't you think?
@themax2go 3 months ago
You talk about running locally but use OpenAI... sad
@MikeHeavers 3 months ago
Yeah, this is a fair point. I should do another version where I use an open model.
@ryanbthiesant2307 6 months ago
Not good. It does not show the problems of CrewAI working with Ollama or any other LLM. CrewAI persistently asks for an OpenAI key. The good part: I discovered the Mozilla LLM server, thank you. CrewAI is really bad.
@mandelafoggie9359 5 months ago
So what is better than CrewAI?
@practical-ai-prototypes 5 months ago
You don't have to use OpenAI or the API key; you can just remove it from the code. The Ollama sample file from the GitHub repo shows you how to use Ollama. Note that Ollama is not an LLM itself; it just allows you to run LLMs locally.
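A minimal sketch of that setup, assuming a local Ollama server with the llama3 model already pulled; this is not the repo's exact sample, and the langchain_community and crewai names may differ between versions:

```python
# Minimal sketch: give a CrewAI agent a local Ollama model so no OpenAI key is
# needed. Assumes `ollama serve` is running and `ollama pull llama3` was done.
from crewai import Agent
from langchain_community.llms import Ollama

local_llm = Ollama(model="llama3")  # talks to the local Ollama server

researcher = Agent(
    role="Researcher",
    goal="Summarize recent AI research",
    backstory="A fully local research assistant.",
    llm=local_llm,  # overrides the default OpenAI model
)
```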
@practical-ai-prototypes 5 months ago
@mandelafoggie9359 You can try AutoGPT if you want; I found it harder to use.
@ryanbthiesant2307 5 months ago
@practical-ai-prototypes Thank you, I will check that out again. I think it still asks for a key, even a fake key, even if you want to use Ollama.
@dougsilkstone 4 months ago
Having a blast with your content, Mike. We're doing really similar things on YT 🫡 thanks for the awesome vids.
@MikeHeavers 4 months ago
Wow - ton of content, I'll check this out. Thanks for the comment.
@dougsilkstone 4 months ago
@MikeHeavers Ah, just finding my feet. It's good fun!