There was a glitch in my editing around the 2:40 mark so I had to take a tiny part out of the video, my bad everyone! That is why it is a bit choppy there. Thanks to those of you who pointed it out! The Local AI and n8n sections are LIVE in the oTTomator Think Tank (also home to the oTToDev community!) - head over and come be a part of this super fast-growing community! thinktank.ottomator.ai Local AI Section: thinktank.ottomator.ai/c/loca... n8n Section: thinktank.ottomator.ai/c/n8n/
@cuvy500 9 days ago
Reminder: The URL of "Local AI Section" is not fully pasted here.
@robfinito4671 9 days ago
I've been scouting the whole YouTube universe for this EXACT video only to have it here... Thanks
@AI-GPTWorkshop 8 days ago
Super excited for our future collaboration, Cole!
@Benj-n7t 8 days ago
Super excited for you guys, @coleMedin
@loicbaconnier9150 8 days ago
Funny, I just put a comment on your channel with Cole Medin's code link before seeing that you will collaborate. Add Nate and it will be a monster collaboration.
@entranodigital 8 days ago
Totally agree with you on this. It's gonna be a legendary trio @@loicbaconnier9150
@weissahmad8675 8 days ago
This is awesome. You and Zubair are the best there is right now in this space. Looking forward to this collaboration
@Benj-n7t 7 days ago
Super cool. You guys are my two fav content creators in this space. Looking forward to it.
@DigitalAlchemyst 8 days ago
YOOOO I can't wait to see the collab with Zubair, you two are top-tier creators in this space, stoked for this one
@Benj-n7t 7 days ago
Agreed.
@ColeMedin 7 days ago
Thank you man! I can't wait either!
@johngoad 7 days ago
Nice, I have been using Flowise since the beginning. It really blew up for me when I started calling it via API from other applications.
@ward_jl 8 days ago
Super valuable video. Flowise and n8n are my favorite tools to build AI applications with!
@ColeMedin 8 days ago
Thanks man! Yeah they're great!
@T33KS 8 days ago
Great hands-on content as always! I have asked many creators the following question, but haven't really gotten a convincing answer: why use Flowise with n8n when you can replicate the same RAG flow in n8n? In other words, n8n has all the tools needed to build complex RAG agents. (I love Flowise btw, just trying to find a reason not to ditch it completely for an n8n-only flow)
@ColeMedin 7 days ago
Thank you! And this is a great question. Even though you can create anything in n8n that you could with Flowise, it still makes sense to build a lot of things with Flowise (especially to prototype your agents quickly) because Flowise has so many more integrations built on top of LangChain than n8n. As an example, all the document loaders in LangChain are available in Flowise, so you can quickly ingest documents into a vector DB from sources like the Brave search API (just one example) - something you could also do in n8n, but only through a very custom setup. I hope that makes sense!
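To make that concrete, here is a rough Python LangChain sketch of the kind of ingestion pipeline a couple of Flowise nodes give you out of the box (the loader, embedding model, and in-memory Qdrant below are illustrative choices, not the exact setup from the video):

```python
# Rough RAG ingestion sketch with LangChain - the same building blocks Flowise
# exposes as drag-and-drop nodes. Assumes `langchain-community`,
# `langchain-text-splitters`, and a local Ollama with an embedding model pulled.
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Qdrant

# 1. Load documents (Flowise ships many more loaders: PDFs, Notion, web search, etc.)
docs = WebBaseLoader("https://docs.flowiseai.com/").load()

# 2. Split into chunks sized for retrieval
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 3. Embed the chunks and store them in a vector DB (in-memory Qdrant for the demo)
embeddings = OllamaEmbeddings(model="nomic-embed-text")
vectorstore = Qdrant.from_documents(chunks, embeddings, location=":memory:", collection_name="demo")

# 4. Retrieve
print(vectorstore.similarity_search("What is Flowise?", k=3)[0].page_content)
```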
@T33KS 6 days ago
@ColeMedin thanks for your reply. I see your point. For prototyping it makes absolute sense, not for production though, since I am seeing API response latency and lag from Flowise. So I'm not sure what the best approach is for production and deployment (Bedrock maybe?)
@alx8439 6 days ago
On Linux with a recent Docker installation you should also use "docker compose" instead of "docker-compose". Compose was previously a separate binary, but now it is a Docker plugin.
@ColeMedin 1 day ago
Got it, thanks for calling that out!
@alx8439 6 days ago
Thanks for taking the time to put together this tutorial.
@ColeMedin 1 day ago
You bet! Glad you like it!
@johanw2267 8 days ago
Keep pumping out dem videos.
@ColeMedin 7 days ago
Will do! haha
@digital.artistic.solutions 7 days ago
Great work! Your workflows are a real game-changer. I've expanded my local AI stack with Supabase and LocalAI instead of Postgres, Qdrant and Ollama. LocalAI is OpenAI-API compatible and works seamlessly with the OpenAI nodes in n8n. It's such a huge improvement. Definitely a valuable addition! #keepAIlocal
@ColeMedin 7 days ago
Sounds awesome! Nice!!
@digital.artistic.solutions 7 days ago
@@ColeMedin Absolutely! Building a self-hosted AI stack with Supabase, n8n, Flowise, LocalAI, Open WebUI, AllTalk TTS, ComfyUI, and SearXNG is definitely a solid foundation for a local AI setup. ;) This idea would be sick content for more videos! I'm totally stoked to see all the cool projects and explanations you'll share. Let's level up our knowledge of local AI solutions together! 🚀💪
@eyewitness4560 8 days ago
Quick note: if you are one of the 3 people like me who use Windows, have an AMD card and some technical know-how, you can get all this running on your GPU using WSL and Ubuntu 22.04, as that has ROCm support. I run Ollama and Open WebUI in the WSL instance and run the rest in Docker Desktop on Windows. It takes a bit of faffing about to get the ports and integration up, but your previous two vids and an AI assistant absolutely enable anyone who can reasonably prompt or follow orders.
@ColeMedin 8 days ago
That's amazing - thank you for the guidance on this! I don't have an AMD card myself, but I know a lot of people who got one for gaming at one point and now want to do LLM inference with it, so that's super helpful.
@andianders3565 6 days ago
I really like your content. Could you at some point talk about how to update your starter kit whenever you've added some features to it? Anyway, your content allows me as a noob to get my hands on local AI. Cheers!
@ColeMedin 1 day ago
Thank you very much!! To update the starter kit you basically just have to run the same commands you used to install it - but it'll go MUCH quicker since you already have everything installed.
@andianders3565 1 day ago
@@ColeMedin Thank you so much! I'm really impressed by the opportunities n8n offers me for my daily tasks.
@christopheboucher127 8 days ago
It would be great to add long-term memory like Zep for agents, and the graph/vector memory database from Zep too. Great job, thanks for all your precious content!
@mtofani91 3 days ago
Hey Cole! It's always great to see a new video here - thanks for giving us an overview of Flowise. I'm curious: what are your hardware specs for this test? Also, how do you see the competition among these low-code tools? There are tons of options out there, but n8n still seems like one of the most robust and solid choices when it comes to nodes/widgets for our AI agents. Saludos!
@AE-yp4kc 4 days ago
If you are planning on using n8n for commercial purposes, be prepared to open your wallet to pay a license fee. From their Sustainable Use License: "You may use or modify the software only for your own internal business purposes or for non-commercial or personal use. You may distribute the software or provide it to others only if you do so free of charge for non-commercial purposes."
@MarkLewis00 3 days ago
Composio has a free forever plan
@xaviruiz8345 8 days ago
Great content Cole!! Thanks so much for your videos, I'm learning a lot 😊 Idea for your next videos: n8n, Ollama and the Llama 3.2 vision model. I didn't find any content about it!
@ColeMedin 7 days ago
Thank you so much! Great suggestion :D
@lucaseatp 9 days ago
What do you think about Dify? Great video
@ColeMedin 9 days ago
Thank you! I haven't tried Dify yet but it is on my list of the dozens of platforms to try out! haha I have heard great things about it
@jeremiealcaraz 8 days ago
OMG! You're the best!!!
@ColeMedin 7 days ago
Thanks Jeremie!
@kenneththompson4450 6 days ago
Thank you for the content, I’m using your bundles along with trying to integrate Fabric (an open-source framework for augmenting humans using AI). I’d love to see you add this to the bundle, or if I ever get time maybe I will lol. Ty
@ColeMedin 1 day ago
That's a great idea!
@TheAleksander22 9 days ago
You've got some overlapping audio starting at 2:45 (Flowise Overview section)
@ColeMedin 9 days ago
Thank you for the heads up! I am editing this part out now!
@TheWiiZZLE 9 days ago
@@ColeMedin Thanks for the quick response, I was about to DM you about it.
@AriellePhoenix 9 days ago
Thought I had another tab playing lol
@loicbaconnier9150 8 days ago
The killer docker compose would integrate Flowise with these key components: Ollama (run state-of-the-art language models locally), n8n (create automated AI workflows), Qdrant (vector database for semantic search), Unstructured (advanced document processing), Argilla (data labeling and validation), Opik (model evaluation and monitoring), JupyterLab (interactive development environment), and LiteLLM as an LLM router. What do you think?
@ColeMedin 8 days ago
I love it! Some of these tools I am definitely thinking about adding into the local AI starter kit as I continue to expand it!
@fusilad 8 days ago
Thanks for the heads up about the default Ollama context size.
@ColeMedin 8 days ago
Yeah you bet!
@roccobooysen3611 7 days ago
Could you perhaps make a video about the tech architecture setup for these platforms? As an example: I understand that you can just install Flowise and n8n, but there is security that should be implemented on the network so that it doesn't expose your entire network.
@ColeMedin 7 days ago
Yeah this is a great suggestion for a video, definitely agree it's needed. Thank you!
@ChrisMartinPrivateAI 8 days ago
Perfect sequencing. Been holding off doing a deep dive into custom LangGraph coding and NOW I know why. Prototype (maybe more) in Flowise. VectorShift is very easy to learn and get agent-type flows working with, but it is on a managed tier. Is Flowise the bolt.new of LangGraph? That is the direction I am heading.
@ColeMedin 7 days ago
Thank you! Yeah VectorShift is great, but it's unfortunate it isn't open source. If Flowise had an AI integrated to build workflows like Zapier does, it certainly would be the Bolt.new of LangGraph! That would be incredible...
@BirdManPhil 4 days ago
My current k8s stack: ArgoCD (GitOps), Traefik (front-end reverse proxy), Prometheus (stack stats), Grafana (stats dashboard), Redis (cache and queue management for n8n workers), pgAdmin4 (UI for PostgreSQL), PostgreSQL w/ pgvector (Postgres with vector capability for hybrid semantic RAG), Qdrant (optional vector database for pure vector RAG), Langfuse (monitor LLM usage stats), n8n (tools for Flowise), Flowise (main agentic flow builder). I want to add Langflow at some point.
@ColeMedin 4 days ago
I love it!!
@unknownhacker9416 9 days ago
Hi Cole... May I know your PC specifications...?
@ColeMedin 9 days ago
I share it here! thinktank.ottomator.ai/t/my-2x-3090-gpu-llm-build/1714
@Tennysonism 2 days ago
Cole, I'm enjoying this project! I set up the last published environment with n8n and everything is working great. How do I pull the updates from git that include Flowise? Can I update my current build or do I need to start from the beginning?
@The-ism-of-isms 8 days ago
If you make a playlist series on what we can achieve with the Flowise + n8n combination, that would be a game changer for the community 🙌😍. NO ONE IS DOING THAT, I HOPE YOU DO IT. Thank you boss, teach us more with Flowise and n8n combinations.
@ColeMedin 7 days ago
Yeah that sounds great, thank you for the suggestion and the kind words!
@josejaner 8 days ago
Will it be possible with Swarm to have a hybrid of models, for example using 'gpt-4o' or 'gpt-4o-mini' for the triage or first-level agent, and Ollama with models like 'qwen2.5-coder' (which works very well and fast) for the agents that manage tools (last level)? 🤔 I don't know if this is possible; my intention is to reduce costs and increase perimeter data security.
@ColeMedin 7 days ago
Yes this is definitely possible! I would look into the Agent Flows in Flowise if you're looking to do that there!
@mike-s7c 6 days ago
Great channel Cole! Can you do a session on how to replicate this on a VPS?
@ColeMedin 1 day ago
Thank you Mike! Yeah I am planning on doing that in the future!
@johnclarkson6120 5 days ago
🎉 you know what, you're a genius
@ColeMedin 4 days ago
Haha thank you very much!
@AnimikhAich 8 days ago
Question: once I've built a local Flowise workflow, what is the best way to export that and deploy it? I'd like to deploy it on a server. How do I do that? Any suggestions are welcome.
@ColeMedin 7 days ago
Great question! You can click on the code icon in the top right of the builder to export the agent. You can turn it into an API endpoint to use with any frontend you want to build!
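As a rough illustration, calling a deployed chatflow from your own code is just an HTTP request; in Python it looks something like this (host, port, and chatflow ID are placeholders - copy the real endpoint from that API dialog in Flowise):

```python
# Rough sketch of calling a deployed Flowise chatflow over its REST API.
# The host, port, and chatflow ID are placeholders - grab the real endpoint
# from the API dialog in the Flowise builder.
import requests

API_URL = "http://localhost:3000/api/v1/prediction/<your-chatflow-id>"

def ask_agent(question: str) -> str:
    response = requests.post(API_URL, json={"question": question}, timeout=120)
    response.raise_for_status()
    return response.json().get("text", "")

print(ask_agent("Summarize the documents I ingested yesterday."))
```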
@jdaiexplore 7 days ago
Help needed. I want to add a button called 'Improve text'. When clicked, the text the user has entered in a long text field should be improved by an AI model and shown in a popup that asks the user whether to use the AI-generated text or the existing text. How do I do this in my web tool?
@TheDJHollywood 9 days ago
At the 2:45 mark you have a duplicate audio track. I spent a couple of minutes looking for another open browser, then I paused this one and both voices were gone, not just one.
@ColeMedin 9 days ago
Yeah found it, thanks! I'm editing it out now.
@TheDJHollywood 9 days ago
@@ColeMedin I found it funny that I was looking for a second browser even AFTER the duplicate track was done, but I thought it was just behind and you would talk again.
@prabhic 8 days ago
Thank you very much. Very helpful, just what is needed, at the right time.
@ColeMedin 8 days ago
You bet! Glad it is helpful!
@NoCodeKing69 7 days ago
Can't you do all of this purely in n8n? What is the reason to add another tool like Flowise to the stack?
@ColeMedin 7 days ago
Thank you! And this is a great question. Even though you can create anything in n8n that you could with Flowise, it still makes sense to build a lot of things with Flowise (especially to prototype your agents quickly) because Flowise has so many more integrations built on top of LangChain than n8n. As an example, all the document loaders in LangChain are available in Flowise, so you can quickly ingest documents into a vector DB from sources like the Brave search API (just one example) - something you could also do in n8n, but only through a very custom setup. I hope that makes sense!
@kalilbarry3773 8 days ago
Hey Cole, what do you think about Langflow? Is there a reason you chose Flowise instead?
@ColeMedin 7 days ago
They are super similar tools! Honestly there wasn't a huge reason I picked Flowise over Langflow - I would have to dive into using Langflow more before I could really do an in-depth comparison.
@ahmedalikhan6154 8 days ago
Is there any way to create agents like that through coding in Python? Or can we convert these automations into Python?
@ColeMedin 7 days ago
Great question! Flowise is built on top of LangChain (the JavaScript version, and the same components exist in the Python LangChain library), so anything you build with this can be recreated in Python code using LangChain!
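As a rough illustration (not the exact export Flowise generates), a simple chatflow wiring a prompt into an Ollama chat model corresponds to a few lines of Python LangChain like this - the package and model names are assumptions, so adjust them to your install:

```python
# Rough Python LangChain equivalent of a simple Flowise chatflow
# (prompt -> Ollama chat model -> text output). Assumes `langchain-core`
# and `langchain-community` plus a local Ollama with llama3.1 pulled;
# swap in whatever model and prompt your flow actually uses.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.chat_models import ChatOllama

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}"),
])
llm = ChatOllama(model="llama3.1", temperature=0)

chain = prompt | llm | StrOutputParser()
print(chain.invoke({"question": "What does a vector database do?"}))
```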
@alx8439 6 days ago
Apart from Langflow, there's already a plethora of agentic libraries for Python.
@GustavoRabelo93 9 days ago
What are the recommended specs for the GPU? I have a 3080ti, would that be enough?
@ColeMedin 9 days ago
Depends on the local LLM you want to run! A 3080ti will definitely be able to run ~20b parameter models and smaller pretty fast! You could try 32b parameter models as well but that is where it would start to be pushing it.
@seniorcaution 8 days ago
Should be able to run 20b without quantization. I've been able to run 32b and even some 70b models with quantization on my 12GB card
@alx8439 6 days ago
@@seniorcaution nah, the 3080 Ti has 12 GB of VRAM. Models with original weights (non-quantized) are 16-bit = 2 bytes per parameter. There is no way you can fit a 20B model (40 GB of weights) into 12 GB of VRAM. But if you take a 4-5 bit quant, then yes.
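A quick back-of-the-envelope version of that math (weights only - real usage also needs headroom for the KV cache and runtime overhead):

```python
# Back-of-the-envelope VRAM needed just for the model weights at different
# precisions (ignores KV cache and runtime overhead).
def weight_gb(params_billion: float, bits_per_param: int) -> float:
    return params_billion * 1e9 * bits_per_param / 8 / 1e9  # bytes -> GB

for bits in (16, 8, 4):
    print(f"20B parameters @ {bits}-bit ≈ {weight_gb(20, bits):.0f} GB of weights")
# 16-bit ≈ 40 GB, 8-bit ≈ 20 GB, 4-bit ≈ 10 GB -> only a ~4-bit quant
# gets close to fitting on a 12 GB card.
```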
@LA_318 8 days ago
@colemedin I've really been enjoying your videos and trying to replicate what you've been doing. I think it's very valuable. One thing I'm specifically trying to do myself is create a workflow that takes screenshots of what I'm doing and then uses the latest Llama Vision LLM to convert them into a document that can be saved for later, so I can ask my local AI about projects I've worked on and details I may have missed when taking notes for builds on other projects that don't necessarily have anything to do with AI. I've been having trouble setting this up, but if this interests you I'd love to see you make a video about how to make something like that work. I have other ideas on how to integrate it with other things, but if I can't figure this out then I certainly can't move on to more complicated workflows.
@ColeMedin 8 days ago
Thank you! Love your use case too. I certainly haven't created enough videos on vision based generative AI so this would be something awesome to cover!
@alx8439 6 days ago
I'm pretty sure there are already some open source projects doing this: 1) take a screenshotting tool that can be executed from the CLI on your operating system - there are plenty of them, and some are even included with your OS; 2) write a simple script (bash / cmd) to take a screenshot and save it to a file, then, as the next step in that script, call Ollama with the latest Llama vision model to process the screenshot and extract whatever information you want, and save the Ollama output as a text file - the script itself is just 3 lines of code (see the sketch below); 3) now you have a pair of files - the screenshot and a text file describing what is on it - which you can save anywhere: a local folder, Google Drive, a database; 4) put that script on a schedule, like once a minute; 5) reuse any RAG ingestion pipeline to make embedding vectors out of the screenshot description and put it + the original screenshot + a datetime into any vector database of your choice; 6) use that vector database and embedding model as a RAG tool with any model.
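A minimal Python sketch of step 2, assuming the ollama Python client, pyautogui for the screenshot, and a llama3.2-vision model already pulled locally (all illustrative choices, not the only way to do it):

```python
# Sketch of step 2: grab a screenshot, have a local vision model describe it,
# and save the description next to the image for later RAG ingestion.
# Assumes `pip install ollama pyautogui` and `ollama pull llama3.2-vision`.
from datetime import datetime
import pyautogui
import ollama

stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
shot_path = f"screenshot_{stamp}.png"
pyautogui.screenshot().save(shot_path)

response = ollama.chat(
    model="llama3.2-vision",
    messages=[{
        "role": "user",
        "content": "Describe what the user is working on in this screenshot.",
        "images": [shot_path],
    }],
)

with open(f"screenshot_{stamp}.txt", "w") as f:
    f.write(response["message"]["content"])
```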
@sebastianpodesta 9 days ago
Hi Cole, can I update my old AI Kit installation to this?
@ColeMedin 9 days ago
You sure can Sebastian! You can run the same Docker compose command you used to start it for the first time and it'll update everything and you won't lose any workflows or anything else you've built with the kit!
@sebastianpodesta 8 days ago
@ thanks a lot for all your work!!
@ColeMedin 7 days ago
You bet!
@go5495 7 days ago
Of course you are an AI agent - being so unearthly helpful is not a quality all humans have.
@jaggyjut 8 days ago
With Cursor AI and Windsurf, which can code, is there still going to be demand for no-code platforms?
@ColeMedin 8 days ago
Super fair question! Until LLMs can consistently write error-free code, no-code platforms like Flowise and n8n are still going to be by far the fastest way to create your agents. But even once we get to that point, it can still take a while to set up your environment, get the LLM to truly understand what you are trying to build, and execute on it.
@wulfrum5567 1 day ago
How do I update my n8n from Docker?
@frosti7 6 days ago
I'm actually looking for a cloud solution that's suitable for a team - is there a recommendation?
@ColeMedin 1 day ago
Cloud solution for what exactly? For running local AI?
@AriellePhoenix 9 days ago
Flowise isn't accepting new sign-ups, they have a waitlist 😭 Looks cool, though!
@ColeMedin 9 days ago
You can download and run it yourself locally like I did in this video! I believe that is just for their cloud offering!
@sneakymove 9 days ago
The problem with Flowise is you could be on the waiting list forever. I have not been able to download or start anything for the last 3 weeks.
@ColeMedin 9 days ago
Hmmm... Flowise is open source so you can just install it and run it now like I showed in the video! Are you maybe referring to a cloud offering they have?
@augmentos 9 days ago
Yasss
@mitchellrcohen 9 days ago
Thanks!
@ColeMedin 9 days ago
You bet!
@Luke-Barrett 8 days ago
Can't this all be done in n8n?
@ColeMedin 8 days ago
Fair question, Luke! In the end everything in Flowise can be done in n8n, but certainly not as fast. Loading documents for RAG for example is one thing you can do MUCH faster with Flowise, just because of all the integrations and prebuilt tools we have there since it's built on top of LangChain. So what I like doing a lot of times is prototyping with Flowise and then eventually moving things into n8n - it's an easy transition too.
@marcosbeliera1 8 days ago
Nice! But it seems very similar to n8n Agents to me.
@ColeMedin 8 days ago
Thanks! And yeah it is similar, part of why they integrate so well together. Flowise makes it easier to do some things like load documents into RAG because of all the different document loaders there are that aren't available in n8n. Just one example!
@АлександрУшанов-щ3е 8 days ago
No subtitles?
@ColeMedin 8 days ago
They should be generated by default! Are you not seeing them when you usually do?
@АлександрУшанов-щ3е 7 days ago
@@ColeMedin They appeared, but not right away - somewhere around six hours later, although before they appeared almost immediately. And thanks for the videos!
@ColeMedin 7 days ago
Great! You bet! Not sure why it took extra time this time around.
@ShubzGhuman 9 days ago
Well, I'm awake
@Maxim_Kulakov 5 days ago
You need to think about people watching your video with studio headphones, because the audio effects you have throughout the whole video ARE REALLY LOUD!
@alx8439 6 days ago
Also, all these so-called "agent flows" I'm seeing are kind of flawed. They don't have loopbacks and self-reflection. Imagine that an internet search didn't return anything meaningful. In an ideal situation the agent should detect that and try another round of searching with different wording. But instead, whatever garbage came back gets used.
@ColeMedin 1 day ago
Yeah that is fair! Though those kinds of things can be built into these workflows! The way I see it, it's up to the user to make that happen.
@alx8439 6 days ago
The logging / transparency of what is happening in Flowise is shit.
@ColeMedin 1 day ago
Yeah I agree, that's why I tend to use it for just prototyping
@terrorkiller645 7 days ago
Can you make like a playlist series where you give a full in-depth explanation of how to download these things, as well as make a space where people can ask questions if they get stuck on something?
@ColeMedin 7 days ago
Yeah I am working on this! And over at thinktank.ottomator.ai I have a community for people to post these kinds of questions!
@mahdihussein2826 7 days ago
Can you share how to deploy a RAG agent on Instagram using n8n, please?
@PyJu80 8 days ago
Don't know if you believe in the universe. But I come to YouTube when I hit a roadblock. Then who pops up? My main man Cole. The f**king G.O.A.T. With the exact solution to my problem. EVERY TIME. Hats off. I don't subscribe to many things, but I get excited for your next project. NGL, I can see bolt.new or someone coming to you with a lot of money for this. You're all over YouTube, even Claude 3.5 is impressed with you and your news 😁. Man I love your work. One day, one day, you will integrate VS Code and allow for live file changes, and mate, I wouldn't blame you for taking the $2b and running 🤑🤑🤐🤐. 👊👊
@PyJu80 8 days ago
Ohhhhh, and if you can convert the above into an OpenAI-like base URL to use as a model in your oTToDev, then I'm flying to the USA to bow to the G.O.A.T.
@ColeMedin 8 days ago
Haha thank you so much - that means a lot to me! $2b would be a fever dream... I would take it and keep teaching on YouTube :)
@PyJu80 8 days ago
@@ColeMedin Don't forget me on the lads' holiday. All-expenses spring break on you. 😅😅😂😂❤🎉