Holy shit, I was almost having a panic attack from not being able to fix the OpenAI API key error, and you made it so simple and easy! Thank you so much!
@ColeMedin 22 days ago
You are so welcome!!
@TheZEN2011 25 days ago
I've been hoping something like this would be developed. I'm so impressed. I love the application of using it with different models simultaneously. And, of course, running it all on my computer. You are awesome!
@digitalchild 24 days ago
You have been able to do this for over a year with CrewAI, and it's far superior to Swarm.
@ColeMedin 22 days ago
Thanks so much man!
@ColeMedin 22 days ago
@digitalchild True, but I love how simple Swarm is; that's part of why it makes such a great educational framework.
@TheZEN2011 22 days ago
@digitalchild I like knowing how it's working. Having control over the fine details is important to me, and especially the ability to use it with my small language model on my own computer. It's a game changer for me.
@digitalchild 21 days ago
@ColeMedin Yes, this is very true.
@DigitalDesignET 25 days ago
You are making better engineers out of people, thanks for that.
@ColeMedin 22 days ago
I appreciate it!
@YUAN_TAO 25 days ago
Oh man, it's 1:40 am over here... Now there's no chance of me getting any sleep tonight 😂 Thank you Cole for another excellent video 😊
@ColeMedin 25 days ago
Haha wow well I hope you are able to sleep in a bit 😂 You bet!!
@moses5407 25 days ago
All great information in each of your presentations. It would really be handy if hardware requirements were mentioned for those of us who have minimal hardware, so we know what we can dive into and what not. Thanks for all your work, that's awesome!
@ColeMedin 22 days ago
You bet Moses - thank you! I typically don't mention hardware requirements since it all depends on the LLM you choose to use. If you want to use a 32b parameter model or bigger, for example, you'd need something like a 3090 GPU. On the other hand, most computers can run a 7b parameter model or smaller.
@realgouravverma 25 days ago
Please compare Swarm and n8n, because one is code based and the other is no code. Please analyse which one is better for us, because as a no-code user I want to understand what I am missing in n8n. Thanks for the video.
@clementgirard2045 25 days ago
I agree 👍
@gettin2itboi 25 days ago
Great question!
@ColeMedin 22 days ago
You are welcome! I love your suggestion and I've noted it down!
@resourceserp 8 days ago
Hi real gourav, can we connect?
@realgouravverma 8 days ago
@resourceserp OK, what is it about?
@snehasissnehasis-co1sn 25 days ago
Please create a frontend UI to run this project... it would make this beautiful script look even more awesome. Great job!
@ColeMedin 22 days ago
I certainly might! Thank you!
@BenjaminK123 24 days ago
Great video! You can also use Groq, since it also uses the OpenAI format.
@ColeMedin 22 days ago
Thank you! I actually tried using Groq, but it seems their OpenAI-compatible API doesn't support function calling, so I was getting errors.
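For anyone following along, here is a minimal sketch of the pattern being discussed: pointing Swarm at an OpenAI-compatible endpoint by handing it a custom OpenAI client. The base URL assumes Ollama's default local server and the qwen2.5-coder:7b tag is a placeholder; Groq's endpoint could be swapped in the same way, keeping in mind the function-calling caveat above.

```python
# Minimal sketch: use Swarm against an OpenAI-compatible endpoint.
# Assumptions: Ollama's OpenAI-compatible API at its default localhost:11434/v1,
# and a model tag you have already pulled locally.
from openai import OpenAI
from swarm import Swarm, Agent

# Any non-empty api_key works for a local Ollama server.
ollama_client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

client = Swarm(client=ollama_client)

agent = Agent(
    name="Assistant",
    model="qwen2.5-coder:7b",  # placeholder tag; use whatever model you have pulled
    instructions="You are a helpful assistant.",
)

response = client.run(agent=agent, messages=[{"role": "user", "content": "Hello!"}])
print(response.messages[-1]["content"])
```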
@catarapatara6511 25 days ago
Surprised not to see a video after the bolt.new video today 😅 Any update on installing a local project into your repo?
@ColeMedin 25 days ago
Haha it's tough to keep up, especially with the other Bolt.new fork content I'm already planning! I'll have more updates on the project, including this feature, this weekend!
@DoubleD_87 25 days ago
Great, simple explanation! Thank you!
@ColeMedin 25 days ago
Thank you! You bet!
@ChrisMartinPrivateAI 24 days ago
Going back a few videos to ChatLLM: it has RouteLLM, which interprets the prompt and routes it to the best LLM for the job. What does the group think about taking a broader view of this Swarm agent router implementation and building a chat request router in front of a defined group of Ollama-hosted LLMs? Have it easily trainable by the user for their use cases and their questions. Forcing non-technical users to learn the capabilities of each model through trial and error is, IMO, an unnecessary headwind to general adoption. PS - hat tip to the Ollama team, who had the forethought to provide OpenAI compatibility and the ability to tweak variables, precluding Cole from having to fork.
@derekwise 25 days ago
I think you made a great video on the technical side. Next time you make a video, can you start with what this does for the end user? For instance, what I could even use this for.
@ColeMedin 22 days ago
Thanks for the suggestion! I totally agree that's important too.
@MrTapan1994 25 days ago
@Cole really awesome content! How about some content on how to use all these individual components (be that n8n, Swarm, or ...) in an end-to-end use case? Understanding how it works and implementing these in a production-ready environment are completely different things. I will be looking forward to content like this. Thanks again for the awesome videos.
@ColeMedin 22 days ago
You are so welcome! And thank you for the suggestion! A lot of my upcoming content is going to be focused on combining a lot of the tools I've been covering in the way you described.
@phatboi9835 6 days ago
I love this ability and appreciate you showing us how to do it. I am very curious about how someone could leverage this to create a secondary income stream. I see tons of videos about code assist but nothing about how to use it to create funds.
@ColeMedin 5 days ago
You bet! Could you clarify more what you are asking? Like how you could make a side hustle with this?
@ginocote 25 days ago
You'll have to do like you did with Bolt, meaning a forked version for the community with multiple LLMs, LOL. Like trying Groq with Llama 3.2 90b, etc. My idea is letting the main agent choose the right and best LLM for simple and complex tasks, with tools if needed. For example, using a small model to scrape weather or stock prices, or adding tools with free APIs and returning the result to a small LLM.
@ColeMedin 22 days ago
Haha yeah I might! Though a lot of providers like Groq don't support function calling with their OpenAI-compatible API, unfortunately. I love your thoughts here!
@Techonsapevole 23 days ago
Nice integration! Qwen2.5 is very smart for its size.
@ColeMedin 22 days ago
Thanks and yes it sure is!
@anandkanade9500 25 days ago
I recently came to know about Cofounder AI. I didn't see it all, but I think it's worth exploring and explaining to people.
@ColeMedin 22 days ago
Yeah I've heard a lot about it recently, I agree!
@IslandPlumber 24 days ago
I did this myself and called it llama swarm. I made it so mine connects to a SQL database to help keep track of progress. This way you can run with free models through OpenRouter. It helps shrink the token count. It writes files to the local computer as well.
@resourceserp 8 days ago
Do you have a channel / video?
@chrismahoney3542 24 days ago
YES. This is fantastic.
@ColeMedin 22 days ago
Thank you! :D
@hrmanager6883 25 days ago
Great contribution 👍
@ColeMedin 25 days ago
Thank you very much! :D
@MustRunTonyo 23 days ago
Nice vid! If you could do the same with Langflow or Flowise, which are more "production ready" than Swarm, it would be really appreciated :)
@ColeMedin 22 days ago
Thanks and I appreciate the suggestions! I definitely want to do some content on both in the future.
@MustRunTonyo 22 days ago
@ColeMedin Appreciate your vids, Cole! I'm trying to find the best platform to build these agentic bots, so videos like these are surely helpful :)
@philamavikane9423 25 days ago
Mr Cole, did you see that M4 Max MB Pro? ... By the way, the Bolt fork project is awesome. I'm having so much fun with it.
@ColeMedin 22 days ago
I'm glad you are!! What about the M4 Max MB Pro?
@prabhakarchandra-un3cy 23 days ago
Need a video on Cofounder AI. It's amazing.
@ColeMedin 22 days ago
So I've heard! I will be looking into it!
@lancemarchetti8673 25 days ago
Now there's Skyvern... I can't keep up anymore lol
@UnbekannterArnoNym 25 days ago
If I use two different models in a script and switch back and forth between them, will each model be removed from the video memory every time I switch, potentially leading to longer runtimes because both models can't fit into the memory at the same time?
@ColeMedin 22 days ago
Good question! I believe there will be latency because of that, which is definitely something to keep in mind. Typically when you are using multiple LLMs at once, one is much smaller and meant more for routing tasks while the large one handles the complicated stuff.
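As a rough illustration of that routing setup (not the exact code from the video), the sketch below pairs a small router model with a larger coder model using Swarm's handoff-by-returning-an-agent convention. The model tags and the local Ollama endpoint are assumptions; substitute whatever you actually run.

```python
# Hedged sketch of the "small router, big worker" pattern described above.
from openai import OpenAI
from swarm import Swarm, Agent

client = Swarm(client=OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"))

coder_agent = Agent(
    name="Coder",
    model="qwen2.5-coder:7b",  # larger model that handles the complicated work
    instructions="Write and explain code for the user's request.",
)

def transfer_to_coder():
    """Hand the conversation off to the coding agent."""
    return coder_agent

router_agent = Agent(
    name="Router",
    model="qwen2.5:3b",  # small, fast model that only decides where to route
    instructions="If the user asks for code, call transfer_to_coder. Otherwise answer briefly.",
    functions=[transfer_to_coder],
)

response = client.run(
    agent=router_agent,
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(response.agent.name, "answered:", response.messages[-1]["content"])
```

Note that on a single GPU the two models may still be swapped in and out of VRAM between turns, which is where the latency mentioned in the question comes from.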
@ffffffffffy 16 days ago
I tried this out, but the results were pretty iffy. I tried to add on to this code by creating a "help" agent that would just read back some example actions that could be taken, given the table schema in the context. I figured that would be a reasonable use case in this scenario, given that a user wouldn't actually know what kind of data they can get out of a db. However, it was tripping up somewhat often, oftentimes suggesting that I write a specific query rather than executing it on its own. Other times it was just getting into hallucination loops or saying I should google it. This is on the models you used here: qwen2.5-coder 7b and qwen 3b for the router.
@ColeMedin 13 days ago
Yeah I get it, smaller LLMs still have a ways to go as far as performance for these kinds of things!
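For reference, a help agent along those lines might look something like the sketch below. The schema, model tag, and instruction wording are all made up for illustration, and as the exchange above suggests, small models may still ignore the "don't ask the user to write SQL" constraint.

```python
# Hypothetical "help" agent that only suggests example questions for a database.
from swarm import Agent

# Assumed example schema, included directly in the agent's instructions.
TABLE_SCHEMA = """
invoices(id INTEGER, customer TEXT, amount REAL, created_at TEXT)
customers(id INTEGER, name TEXT, email TEXT)
"""

help_agent = Agent(
    name="Help Agent",
    model="qwen2.5:3b",  # small model; larger ones tend to follow these instructions more reliably
    instructions=(
        "You explain what the user can ask about this database. "
        "Given the schema below, list 3-5 example questions the SQL agent could answer. "
        "Do not ask the user to write SQL themselves.\n" + TABLE_SCHEMA
    ),
)
```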
@dewilton7712 25 days ago
You don't have to use Ollama; the exact same approach works with the LM Studio server as well.
@ColeMedin 22 days ago
True!
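If anyone wants to try that, the only change from the Ollama setup is the base URL (LM Studio's local server defaults to port 1234) and the model identifier; the one shown here is a placeholder for whatever the server lists.

```python
# Same sketch as before, but pointed at LM Studio's OpenAI-compatible server.
from openai import OpenAI
from swarm import Swarm, Agent

# LM Studio usually listens on localhost:1234; the api_key just needs to be non-empty.
client = Swarm(client=OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio"))

agent = Agent(
    name="Assistant",
    model="qwen2.5-coder-7b-instruct",  # placeholder id; use a model loaded in LM Studio
    instructions="You are a helpful assistant.",
)

response = client.run(agent=agent, messages=[{"role": "user", "content": "Hi there!"}])
print(response.messages[-1]["content"])
```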
@MustRunTonyo 24 days ago
Can this be done in n8n? I think the n8n SQL agent isn't as good as the one in Swarm... but maybe I'm wrong!
@ColeMedin 22 days ago
Yeah it certainly could be! Swarm honestly makes it even easier to set up though IMO.
@sebastianpodesta 19 days ago
Could Swarm be added to the AI stack to work with n8n and WebUI?
@ColeMedin 18 days ago
It certainly could! That is one thing I am considering adding, though there are a couple of great contenders for agent frameworks to add in!
@Henry-xy3fr 24 days ago
Is it also possible to use this with Claude Sonnet or Gemini?
@ColeMedin 22 days ago
Certainly!!
@ThexBorg 19 days ago
Are you downloading the AI models to your local computer?
@ColeMedin 18 days ago
I sure am! Using Ollama it's entirely local.
@ThexBorg 18 days ago
Wow, that's awesome. It's like Tony Stark in that scene installing different AIs in his suits from disk. I expect future builds of Windows will have an integrated LLM in the OS.
@SOL-5004 25 days ago
This is just what I was looking for!!! Escape from the ChatGPT API 😂😂😂
@ColeMedin 25 days ago
Haha I'm glad!!
@digitalchild 24 days ago
I've found CrewAI to be far better in terms of results and capabilities. It has always supported multi-model agents, including Ollama.
@ikjb8561 24 days ago
The issue with CrewAI is that it requires an LLM for every task it does... You cannot mix and match AI and non-AI tasks.
@StefanEnslin 22 days ago
My gripe with CrewAI is that it calls home. I wanted to work and test my local code without internet and couldn't, because it couldn't call home. I don't know if they've addressed this or not, but I work with sensitive data that I can't risk getting out, so to speak.
@ColeMedin 22 days ago
Yeah, I know a lot of people who love CrewAI! I think Swarm is great for education though, since it is so simple.
@programminganytime4332 17 days ago
Do you know how I can use Hugging Face LLMs with agents? I couldn't find anything related to this on the internet.
@ColeMedin 13 days ago
I've actually had trouble using HuggingFace with function calling... much better luck with Ollama for me. I think their API is limited in that way.
@ginocote 25 days ago
I played with a 3b model and my computer froze for 10 minutes 🙄
@ColeMedin 22 days ago
Yikes, I'm surprised for a smaller model like that! What are your specs?
@bryanprehoda 25 days ago
AutoGroq was doing this months ago.
@ColeMedin 22 days ago
True! I love how simple Swarm is though - it's definitely a superior educational framework IMO.
@PraexorVS 25 days ago
Zoom in, dude.
@AINMEisONE 25 days ago
Wow! You are so awesome! I will send you $100 USD. I just emailed you...