How To Connect Llama3 to CrewAI [Groq + Ollama]

35,245 views

aiwithbrandon

Comments: 65
@dataninjuh2135 · 3 months ago
This man knows what the people want, getting up and running with LLMs and Agents for the F R E E 😮‍💨! “Now this is pod racing!” 😂🙏🏻👍
@theBookofIsaiah33ad · 8 months ago
Man, I do not know how to create and write code, but you have made a video and I think I can do this! Bless you, my friend!
@bhancock_ai · 8 months ago
Thank you! I'm confident you can do it! Let me know if you need help with anything!
@darkyz543 · 7 months ago
Your channel is THE real gold mine. Thank you so much.
@ronaldshyu · 7 months ago
I totally agree!
@GregPeters1 · 9 months ago
Hey Brandon, welcome back after your vacay!
@bhancock_ai · 9 months ago
Feels good to be back! I'm recharged and ready to go!
@CodeSnap01 · 9 months ago
Refreshed after a short vacation... hope to see you frequently!
@MariodeFelipe · 9 months ago
The quality is 10/10 thanks mate
@bhancock_ai · 8 months ago
Thank you Mario!
@AndyPandy-ni1io · 8 months ago
@bhancock_ai Is /llama3-crewai the repo for this video? On GitHub I see automate-youtube-with-crewai, crewai-updated-tutorial-hierarchical, crew-ai-crash-course, nextjs-crewai-basic-tutorial, crew-ai-local-llm, and crewai-groq-tutorial, and I can't work out which one relates to this video.
@Imakemvps · 7 months ago
I hope we can get access to your Skool soon so I can learn from your group! It's been a few days.
@protovici1476 · 9 months ago
Excellent video! It would be interesting to see these frameworks inside LightningAI Studios. Also, I saw CrewAI will be taking a more gold-standard approach to their code structuring in the near future.
@bhancock_ai · 9 months ago
Thank you! And you're definitely right about CrewAI moving towards YAML. When CrewAI+ drops, I plan on making a lot more content around this new format for you guys! And, I haven't tried out LightningAI Studio yet so I'll definitely have to try it out this weekend. Thanks for the suggestion!
@protovici1476 · 9 months ago
@bhancock_ai Great! I like the YAML approach. William Falcon, who started LightningAI (PyTorch Lightning), likes my posts on LinkedIn when I mention them, as I'm a huge fan of developing with it. I will be studying your approach with the latest updates and hopefully with their Studio. Thanks!!
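(For readers curious what that YAML approach looks like: a minimal sketch of the config-file format CrewAI has been moving toward. The file name and field names here are assumptions based on CrewAI's docs; verify against the current schema before use.)

# agents.yaml -- sketch only; file name and fields assumed from CrewAI's docs
researcher:
  role: "Senior Research Analyst"
  goal: "Uncover the latest developments in {topic}"
  backstory: "A veteran analyst known for thorough, well-sourced research."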
@d.d.z. · 8 months ago
Friendly comment: you look better with glasses, more professional. Great content.
@bhancock_ai · 8 months ago
Hey D! Thanks! I love wearing glasses and hate my contacts so I think I might need to go full glasses mode 🤓
@MichaelDavison-mv8dr · 4 months ago
Can't get the tools.search_tools module to run; it says not found. I've tried the pip install command, and just the install command with the tool's name, with no luck. Any ideas, please?
@tapos999 · 8 months ago
Thanks! Your CrewAI tutorials are top-of-the-shelf stuff. Do you have any CrewAI project with Streamlit connected to show the output in a UI? Thanks!
@Storytelling-by-ash · 8 months ago
I got an error; then I noticed that we need a search API key. I added that but still get this error:
pydantic_core._pydantic_core.ValidationError: 1 validation error for Task
expected_output
  Field required [type=missing, input_value={'description': "Analyze ...e business landscapes.)}, input_type=dict]
@ravinayak5457 · 7 months ago
Did you get this resolved?
@ravinayak5457 · 7 months ago
This is resolved by adding expected_output to your Task.
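(A minimal sketch of that fix; the agent and the wording here are placeholders, not the exact code from the video:)

from crewai import Task

research_task = Task(
    description="Analyze the market and business landscape for the given company.",
    expected_output="A bullet-point summary of the key findings.",  # required by newer CrewAI versions
    agent=researcher,  # placeholder: an Agent defined elsewhere
)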
@kepenge · 8 months ago
Appreciate your support (with this content); the only drawback was the need to subscribe to get access to a project that isn't yours. 😞
@shuntera · 8 months ago
That is using a very old version of CrewAI. If you run it with the current version, it fails because the Tasks lack the expected_output parameter.
@nathankasa6220 · 9 months ago
Thanks! Is Claude 3 Opus still not supported, though? How come?
@aboali-pl7ib · 5 months ago
thank you for your help 🤘🤘
@thefutureisbright · 9 months ago
Brandon excellent tutorial 👍
@bhancock_ai · 9 months ago
Thanks man! I really appreciate it!
@Omobilo · 8 months ago
Great stuff. Maybe a silly question, but when it was fetching data from the remote website (the analysis part), does it read it remotely, or does it capture screenshots and download text to feed into its prompt? And if so, does that cached data get cleared, or does it need to be cleaned up eventually? I hope it simply reads remotely without much data saved locally, as I plan to use this approach to analyze many websites without flooding my local storage.
@tusharparakh6908 · 6 months ago
Can I use this and deploy it live so that other people can use it? Is it free only when running locally, or is it also free when deployed?
@jarad4621 · 8 months ago
Please, for the love of god, somebody explain to me why we are using Ollama to download local models and then using Groq anyway to run the model in the cloud. Why can't we just skip the Ollama part? I see all the videos using Ollama with Groq and I don't understand this aspect! Thank you. Does Ollama do something special to make it work better with CrewAI than a direct Groq connection?
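(Short answer: you don't need Ollama to use Groq. Ollama runs the model on your own machine; Groq runs it in its cloud. They are alternatives, and tutorials often show both so you can pick one. A sketch of how each is typically wired into CrewAI through LangChain; model names and imports reflect the library versions current at the time and may have changed:)

import os
from crewai import Agent
from langchain_groq import ChatGroq            # cloud-hosted Llama 3, nothing to download
from langchain_community.llms import Ollama    # local Llama 3, needs `ollama pull llama3` first

# Option A: Groq's hosted Llama 3 -- skips Ollama entirely
groq_llm = ChatGroq(
    groq_api_key=os.environ["GROQ_API_KEY"],
    model_name="llama3-70b-8192",
)

# Option B: a local Llama 3 served by Ollama
local_llm = Ollama(model="llama3")

# Either one can back an agent; they are interchangeable here
researcher = Agent(
    role="Researcher",
    goal="Research the given topic.",
    backstory="A diligent analyst.",
    llm=groq_llm,  # or local_llm
)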
@geneanthony3421 · 4 months ago
I ran the code on Ollama and on Groq and I'm getting a loop: "It seems we encountered an unexpected error while trying to use the tool. This was the error: 'organic'", followed by [INFO]: Max RPM reached, waiting for next minute to start.
@ag36015 · 8 months ago
What would you say are the minimum hardware requirements to make it run smoothly?
@AnjuMohan-d8c · 6 months ago
Can someone help me? I got the following when I ran llama3 in Ollama:
Created a chunk of size 1414, which is longer than the specified 1000
Created a chunk of size 1089, which is longer than the specified 1000
Created a chunk of size 1236, which is longer than the specified 1000
@Ryan.Youtube · 8 months ago
This is awesome! 😎
@bhancock_ai · 8 months ago
Thanks! 😁
@jarad4621 · 8 months ago
Hi Brandon, the Groq rate limit is a big issue for my use case. Can I use this same method with another hosted Llama 3 70B and CrewAI, like the OpenRouter API, or can any API be used instead of Groq with your method?
@jarad4621 · 8 months ago
Oh I see, it has to be an API already supported by LangChain, correct? Or it won't work?
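(Roughly, yes: CrewAI hands the LLM object to LangChain, so anything LangChain can call will work. OpenRouter exposes an OpenAI-compatible API, so a sketch like this should work; the base URL and model slug are assumptions taken from OpenRouter's docs:)

import os
from langchain_openai import ChatOpenAI

# OpenRouter speaks the OpenAI wire protocol, so ChatOpenAI can point at it
openrouter_llm = ChatOpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
    model="meta-llama/llama-3-70b-instruct",  # slug assumed; check OpenRouter's model list
)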
@ryana2952 · 8 months ago
Is there an easy way to build no-code AI assistants or agents with Groq? I know zero code.
@jalapenos12 · 8 months ago
Just curious why VSCode doesn't display file types on Mac. I'm going bonkers trying to figure out what to save the Modelfile as.
@bhancock_ai · 8 months ago
Hey! There actually isn't a file type for that file. You can just leave it how it is. Hope that helps!
@jalapenos12 · 8 months ago
@bhancock_ai Thanks for the quick response. I figured out that ".txt" works for those of us on other operating systems.
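(For anyone else stuck here: the Modelfile is just a plain text file with no extension, and ".txt" also works, as noted above. A minimal sketch of the kind of Modelfile used in CrewAI tutorials like this one; the stop tokens are Llama 3's chat special tokens, but verify them against your model:)

# Modelfile -- build with: ollama create crewai-llama3 -f Modelfile
FROM llama3

PARAMETER temperature 0.7
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"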
@madhudson1 · 8 months ago
Good luck getting local, quantized models to reliably function-call or use any kind of 'tool'. They need so much more supervision, which is where frameworks like LangGraph can help, rather than CrewAI.
@QiuyiFeng-t2j · 4 months ago
Max RPM reached, waiting for next minute to start. How do I solve this?
@shuntera · 9 months ago
With both the Groq 8B and 70B, and the crew's max_rpm set at either 1 or 2, I do get it halting for a while with: [INFO]: Max RPM reached, waiting for next minute to start.
@bhancock_ai · 9 months ago
The problem is that Groq is so fast that it ends up processing too many tokens, hitting a rate limit, and failing. To get around that, we have to slow down our crew by setting the max RPM. Feel free to bump it up to get your crew to move faster!
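(Where that knob lives, as a sketch; the agents and tasks are placeholders, and the value should be tuned to your Groq tier's rate limits:)

from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer],        # placeholders: Agents defined elsewhere
    tasks=[research_task, write_task],  # placeholders: Tasks defined elsewhere
    process=Process.sequential,
    max_rpm=2,  # requests per minute; raise it if Groq stops rate-limiting you
)

result = crew.kickoff()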
@reidelliot1972 · 8 months ago
Great content as always! Do you know if it's sustainable to use a single GroqCloud API key to host LLM access for a multi-user app? Or would a service like AWS SageMaker be better for simultaneous users? Cheers!
@bennie_pie · 8 months ago
Thank you for this and for the code. How does Llama 3 compare to Dolphin-Mistral 2.8 running locally for the more junior agents, do you know? Dolphin-Mistral, with its extra conversation/coding training and bigger 32k context window, appeals! I've had agents go round in circles creating nonsense with other frameworks because they don't remember what they are supposed to do! A big context window definitely could bring some benefits! I try to avoid using GPT-3.5 or 4 for coding for this reason. I'd then like to use Claude 3 Opus, with its 200k context window and extra capability, for the heavy lifting and oversight!
@clinton2312 · 8 months ago
Thank you :)
@togai-dev · 7 months ago
Hey Brandon, great video by the way. There seems to be an error as such:
It seems we encountered an unexpected error while trying to use the tool. This was the error: Invalid json output: Based on the provided text, a valid output schema for the tool is:
{
  "tool_name": str,
  "arguments": {
    "query": str
  }
}
This schema defines two keys: `tool_name`, which should be a string, and `arguments`, which should be a dictionary containing one key-value pair. The key in this case is `query`, with the value being another string.
'str' object has no attribute 'tool_name'
@mikesara7032 · 9 months ago
You're awesome, thank you!
@bhancock_ai · 9 months ago
Thanks Mike! You're awesome too!
@ZombieGamerRealm · 8 months ago
Guys, do you know any way to run CrewAI and/or Llama on a GPU? CPU-only is soooo slow.
@ArseniyPotapov · 8 months ago
llama_cpp (what Ollama is based on) or vLLM.
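(Note: Ollama already uses a supported NVIDIA/AMD GPU automatically when it detects one, so a CPU-only run usually means the model doesn't fit in VRAM. If you want a dedicated OpenAI-compatible GPU server instead, here is a vLLM sketch; the model name is an assumption, and it requires a CUDA GPU with enough VRAM:)

pip install vllm
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Meta-Llama-3-8B-Instruct \
    --port 8000
# then point CrewAI/LangChain at http://localhost:8000/v1 as an OpenAI-compatible endpoint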
@pratyushsrivastava6646 · 9 months ago
Hello sir, nice content.
@bhancock_ai · 9 months ago
Thanks Pratyush! I really appreciate it!
@pratyushsrivastava6646 · 9 months ago
@bhancock_ai How can I connect with you?
@magnuscarlsson5067 · 8 months ago
What graphics card do you use in your computer when running locally with Ollama?
@krysc4d · 8 months ago
The key is VRAM. I can run Llama 3 70B smoothly on an RTX 3090, hitting about 16 GB of VRAM (if I remember correctly).
@rauljauregi6615 · 9 months ago
Nice! 😃
@deadbody408 · 9 months ago
You might want to revoke those keys you revealed, if you haven't already.
@miaohf · 8 months ago
Very good video demonstration. I noticed that you chose to use Serper search in the video. I would like to know the difference between Serper and DuckDuckGo search, and how to choose between them. If you know, please explain. Thank you.
@raghuls1469 · 8 months ago
Hello Brandon, thanks for the awesome video. I was trying to do the same setup with CrewAI, but I am getting an error while running. I added the error message below:

Traceback (most recent call last):
  File "D:\crew_ai\crew.py", line 114, in <module>
    result = crew.kickoff()
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\crew.py", line 252, in kickoff
    result = self._run_sequential_process()
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\crew.py", line 293, in _run_sequential_process
    output = task.execute(context=task_output)
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\task.py", line 173, in execute
    result = self._execute(
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\task.py", line 182, in _execute
    result = agent.execute_task(
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\agent.py", line 207, in execute_task
    memory = contextual_memory.build_context_for_task(task, context)
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\memory\contextual\contextual_memory.py", line 22, in build_context_for_task
    context.append(self._fetch_stm_context(query))
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\memory\contextual\contextual_memory.py", line 31, in _fetch_stm_context
    stm_results = self.stm.search(query)
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\memory\short_term\short_term_memory.py", line 23, in search
    return self.storage.search(query=query, score_threshold=score_threshold)
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\memory\storage\rag_storage.py", line 90, in search
    else self.app.search(query, limit)
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\embedchain\embedchain.py", line 635, in search
    return [{"context": c[0], "metadata": c[1]} for c in self.db.query(**params)]
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\embedchain\vectordb\chroma.py", line 220, in query
    result = self.collection.query(
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\chromadb\api\models\Collection.py", line 327, in query
    valid_query_embeddings = self._embed(input=valid_query_texts)
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\chromadb\api\models\Collection.py", line 633, in _embed
    return self._embedding_function(input=input)
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\chromadb\api\types.py", line 193, in __call__
    result = call(self, input)
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\chromadb\utils\embedding_functions.py", line 188, in __call__
    embeddings = self._client.create(
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\openai\resources\embeddings.py", line 113, in create
    return self._post(
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\openai\_base_client.py", line 1232, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\openai\_base_client.py", line 921, in request
    return self._request(
  File "D:\crew_ai\.my_crew_env\Lib\site-packages\openai\_base_client.py", line 1012, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: 404 page not found
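(The traceback bottoms out in the short-term-memory embedding call: with memory enabled, CrewAI embeds text through an OpenAI-style endpoint by default, and a wrong or missing base URL yields this 404. Two hedged workarounds; the embedder parameter shape is an assumption based on CrewAI's docs:)

from crewai import Crew

# Option A: turn crew memory off entirely
crew = Crew(agents=[researcher], tasks=[research_task], memory=False)

# Option B: keep memory but embed locally through Ollama
crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    memory=True,
    embedder={
        "provider": "ollama",
        "config": {"model": "nomic-embed-text"},  # embedding model assumed; `ollama pull` it first
    },
)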
@ronaldshyu · 7 months ago
Have you tried using ChatGPT to solve your error? I did, and I had a good result.