Flowise + n8n - The BEST No Code + Local AI Agent Combo

  18,270 views

Cole Medin

1 day ago

Comments: 134
@ColeMedin
@ColeMedin 9 days ago
There was a glitch in my editing around the 2:40 mark so I had to take a tiny part out of the video, my bad everyone! That is why it is a bit choppy there. Thanks to those of you who pointed it out! The Local AI and n8n sections are LIVE in the oTTomator Think Tank (also home to the oTToDev community!) - head on over and come be a part of this super-fast-growing community! thinktank.ottomator.ai Local AI Section: thinktank.ottomator.ai/c/loca... n8n Section: thinktank.ottomator.ai/c/n8n/
@cuvy500
@cuvy500 9 days ago
Reminder: the "Local AI Section" URL is not fully pasted here.
@robfinito4671
@robfinito4671 9 days ago
I've been scouting the whole YouTube universe for this EXACT video only to find it here... Thanks
@AI-GPTWorkshop
@AI-GPTWorkshop 8 days ago
Super excited for our future collaboration, Cole!
@Benj-n7t
@Benj-n7t 8 days ago
Super excited for you guys, @coleMedin
@loicbaconnier9150
@loicbaconnier9150 8 days ago
Funny, I just left a comment on your channel with Cole Medin's code link before seeing that you two will collaborate. Add Nate and it will be a monster collaboration.
@entranodigital
@entranodigital 8 days ago
Totally agree with you on this. It's gonna be a legendary trio @@loicbaconnier9150
@weissahmad8675
@weissahmad8675 8 days ago
This is awesome. You and Zubair are the best there is right now in this space. Looking forward to this collaboration
@Benj-n7t
@Benj-n7t 7 days ago
Super cool. You guys are my two fav content creators in this space. Looking forward to it.
@DigitalAlchemyst
@DigitalAlchemyst 8 days ago
YOOOO, I can't wait to see the collab with Zubair. You two are top-tier creators in this space, stoked for this one!
@Benj-n7t
@Benj-n7t 7 days ago
Agreed.
@ColeMedin
@ColeMedin 7 days ago
Thank you man! I can't wait either!
@johngoad
@johngoad 7 days ago
Nice, I have been using Flowise since the beginning. It really blew up for me when I started calling it via API from other applications.
@ward_jl
@ward_jl 8 days ago
Super valuable video. Flowise and n8n are my favorite tools to build AI applications with!
@ColeMedin
@ColeMedin 8 days ago
Thanks man! Yeah they're great!
@T33KS
@T33KS 8 days ago
Great hands-on content as always! I have asked many creators the following question, but haven't really gotten a convincing answer: why use Flowise with n8n when you can replicate the same RAG flow in n8n? In other words, n8n has all the tools needed to build complex RAG agents. (I love Flowise btw, just trying to find a reason not to ditch it completely for an n8n-only flow.)
@ColeMedin
@ColeMedin 7 days ago
Thank you! And this is a great question. So even though you can create anything in n8n that you could with Flowise, it still makes sense to build a lot of things with Flowise (especially to prototype your agents quickly) because Flowise has so many more integrations built on top of LangChain than n8n. As an example, all the document loaders in LangChain are available in Flowise, so you can quickly ingest documents into a vector DB from Brave web API searches (just as an example) and other sources that you could handle in n8n, but only through a very custom setup. I hope that makes sense!
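To make that concrete, here is a minimal sketch of that kind of document ingestion using LangChain's Python packages (Flowise wires up the equivalent pieces visually). The URL, model names, and Qdrant settings below are placeholder assumptions, not anything from the video:

```python
# Minimal sketch: load a web page, split it, and push embeddings into Qdrant.
# Assumes a local Ollama and the Qdrant instance from the local AI starter kit;
# the URL, model names, and collection name are placeholders.
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import Qdrant
from langchain_ollama import OllamaEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

docs = WebBaseLoader("https://example.com/some-article").load()  # any LangChain loader could go here
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

vector_store = Qdrant.from_documents(
    chunks,
    OllamaEmbeddings(model="nomic-embed-text"),  # local embedding model served by Ollama
    url="http://localhost:6333",                 # Qdrant endpoint
    collection_name="documents",
)
print(vector_store.similarity_search("What is this article about?", k=3))
```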
@T33KS
@T33KS 6 days ago
@ColeMedin thanks for your reply. I see your point. For prototyping it makes absolute sense. Not for production though, since I am seeing API response latency and lag from Flowise. So I'm not sure what the best approach is for production and deployment (Bedrock maybe?).
@alx8439
@alx8439 6 days ago
On Linux with a recent Docker installation you should also use "docker compose" instead of "docker-compose". Compose was previously a separate binary, but now it is a Docker plugin.
@ColeMedin
@ColeMedin 1 day ago
Got it, thanks for calling that out!
@alx8439
@alx8439 6 days ago
Thanks for taking the time to put together this tutorial.
@ColeMedin
@ColeMedin 1 day ago
You bet! Glad you like it!
@johanw2267
@johanw2267 8 days ago
Keep pumping out dem videos.
@ColeMedin
@ColeMedin 7 days ago
Will do! haha
@digital.artistic.solutions
@digital.artistic.solutions 7 days ago
Great work! Your workflows are a real game-changer. I’ve expanded my local AI stack with Supabase and LocalAI instead of Postgres, Qdrant and Ollama. LocalAI is OpenAI API compatible and works seamlessly with the OpenAI nodes in n8n. It’s such a huge improvement. Definitely a valuable addition! #keepAIlocal
@ColeMedin
@ColeMedin 7 days ago
Sounds awesome! Nice!!
@digital.artistic.solutions
@digital.artistic.solutions 7 days ago
@@ColeMedin Absolutely! Building a self-hosted AI stack with Supabase, n8n, Flowise, LocalAI, Open WebUI, AllTalk TTS, ComfyUI, and SearXNG is definitely a solid foundation for a local AI setup. ;) This idea would be sick content for more videos! I'm totally stoked to see all the cool projects and explanations you'll share. Let's level up our knowledge of local AI solutions together! 🚀💪
@eyewitness4560
@eyewitness4560 8 days ago
Quick note: if you are one of the three people like me who use Windows, have an AMD card and some technical know-how, you can get all this running on your GPU using WSL and Ubuntu 22.04, as that has ROCm support. I run Ollama and Open WebUI in the WSL instance and run the rest in Docker Desktop on Windows. It takes a bit of faffing about to get the ports and integration up, but your previous two vids and an AI assistant absolutely enable anyone who can reasonably prompt or follow instructions.
@ColeMedin
@ColeMedin 8 days ago
That's amazing - thank you for the guidance on this! I don't have an AMD card myself, but I know a lot of people who got one for gaming at one point and now want to do LLM inference with it, so that's super helpful.
@andianders3565
@andianders3565 6 days ago
I really like your content. Could you at some point talk about how to update your starter kit whenever you've added some features to it? Anyway, your content allows me as a noob to get my hands on local AI. Cheers!
@ColeMedin
@ColeMedin 1 day ago
Thank you very much!! To update the starter kit you basically just have to run the same commands you used to install it - but it'll go MUCH quicker since you already have everything installed.
@andianders3565
@andianders3565 1 day ago
@@ColeMedin Thank you so much! I'm really impressed by the opportunities n8n offers me for my daily tasks.
@christopheboucher127
@christopheboucher127 8 days ago
It would be great to add long-term memory like Zep for agents, and the graph/vector memory database from Zep too. Great job, thanks for all your precious content!
@mtofani91
@mtofani91 3 days ago
Hey Cole! It's always great to see a new video here - thanks for giving us an overview of Flowise. I'm curious: what are your hardware specs for this test? Also, how do you see the competition among these low-code tools? There are tons of options out there, but n8n still seems like one of the most robust and solid choices when it comes to nodes/widgets for our AI agents. Cheers!
@AE-yp4kc
@AE-yp4kc 4 days ago
If you are planning on using n8n for commercial purposes, be prepared to open your wallet to pay a license fee. From their "Sustainable Use License": "You may use or modify the software only for your own internal business purposes or for non-commercial or personal use. You may distribute the software or provide it to others only if you do so free of charge for non-commercial purposes."
@MarkLewis00
@MarkLewis00 3 days ago
Composio has a free forever plan
@xaviruiz8345
@xaviruiz8345 8 days ago
Great content Cole!! Thanks so much for your videos, I'm learning a lot 😊 Idea for your next videos: n8n, Ollama and the Llama 3.2 vision model. I didn't find any content about it!
@ColeMedin
@ColeMedin 7 days ago
Thank you so much! Great suggestion :D
@lucaseatp
@lucaseatp 9 days ago
What do you think about Dify? Great video!
@ColeMedin
@ColeMedin 9 days ago
Thank you! I haven't tried Dify yet but it is on my list of the dozens of platforms to try out! haha I have heard great things about it
@jeremiealcaraz
@jeremiealcaraz 8 days ago
OMG! You're the best!!!
@ColeMedin
@ColeMedin 7 days ago
Thanks Jeremie!
@kenneththompson4450
@kenneththompson4450 6 days ago
Thank you for the content, I’m using your bundles along with trying to integrate Fabric (an open-source framework for augmenting humans using AI). I’d love to see you add this to the bundle, or if I ever get time maybe I will lol. Ty
@ColeMedin
@ColeMedin 1 day ago
That's a great idea!
@TheAleksander22
@TheAleksander22 9 days ago
You've got some overlapping audio starting at 2:45 (Flowise Overview section)
@ColeMedin
@ColeMedin 9 days ago
Thank you for the heads up! I am editing this part out now!
@TheWiiZZLE
@TheWiiZZLE 9 days ago
@@ColeMedin Thanks for the quick response, I was about to DM you about it.
@AriellePhoenix
@AriellePhoenix 9 days ago
Thought I had another tab playing lol
@loicbaconnier9150
@loicbaconnier9150 8 days ago
Wouldn't the killer docker compose be one that integrates Flowise plus these key components?
- Ollama: run state-of-the-art language models locally
- n8n: create automated AI workflows
- Qdrant: vector database for semantic search
- Unstructured: advanced document processing
- Argilla: data labeling and validation
- Opik: model evaluation and monitoring
- JupyterLab: interactive development environment
- LiteLLM: LLM router
What do you think?
@ColeMedin
@ColeMedin 8 days ago
I love it! Some of these tools I am definitely thinking about adding into the local AI starter kit as I continue to expand it!
@fusilad
@fusilad 8 days ago
Thanks for the heads up about the default Ollama context size.
@ColeMedin
@ColeMedin 8 days ago
Yeah you bet!
@roccobooysen3611
@roccobooysen3611 7 days ago
Could you perhaps make a video about the tech architecture setup for these platforms? As an example: I understand that you can just install Flowise and n8n, but there is security that should be implemented on the network so that it doesn't expose your entire network.
@ColeMedin
@ColeMedin 7 days ago
Yeah this is a great suggestion for a video, definitely agree it's needed. Thank you!
@ChrisMartinPrivateAI
@ChrisMartinPrivateAI 8 days ago
Perfect sequencing. Been holding off doing a deep dive into custom LangGraph coding and NOW I know why. Prototype (maybe more) in Flowise. VectorShift is very easy to learn and gets agent-type flows working, but it is on a managed tier. Is Flowise the bolt.new of LangGraph? That is the direction I am heading.
@ColeMedin
@ColeMedin 7 days ago
Thank you! Yeah VectorShift is great but it's unfortunate it isn't open source. If Flowise had an AI integrated to build workflows like Zapier does, it certainly would be the Bolt.new of LangGraph! That would be incredible...
@BirdManPhil
@BirdManPhil 4 days ago
My current k8s stack:
- ArgoCD - GitOps
- Traefik - front-end reverse proxy
- Prometheus - stack stats
- Grafana - stats dashboard
- Redis - cache and queue management for n8n workers
- pgAdmin4 - UI for PostgreSQL
- PostgreSQL w/ pgvector - Postgres with vector ability for hybrid semantic RAG
- Qdrant - optional vector database for pure vector RAG
- Langfuse - monitor LLM usage stats
- n8n - tools for Flowise
- Flowise - main agentic flow builder
I want to add Langflow at some point.
@ColeMedin
@ColeMedin 4 days ago
I love it!!
@unknownhacker9416
@unknownhacker9416 9 days ago
Hi Cole... May I know your PC specifications...?
@ColeMedin
@ColeMedin 9 days ago
I share them here! thinktank.ottomator.ai/t/my-2x-3090-gpu-llm-build/1714
@Tennysonism
@Tennysonism 2 days ago
Cole, I'm enjoying this project! I set up the last published environment with n8n and everything is working great. How do I pull the updates from Git that include Flowise? Can I update my current build or do I need to start from the beginning?
@The-ism-of-isms
@The-ism-of-isms 8 days ago
If you make a playlist series on what we can achieve with the Flowise + n8n combination, that would be a game changer for the community 🙌😍. NO ONE IS DOING THAT, I HOPE YOU DO IT. Thank you boss, teach us more Flowise and n8n combinations.
@ColeMedin
@ColeMedin 7 days ago
Yeah that sounds great, thank you for the suggestion and the kind words!
@josejaner
@josejaner 8 days ago
Would it be possible with Swarm to have a hybrid of models? For example: using "gpt-4o or mini" for the triage or first-level agent, and having the agents that manage tools (last level) use Ollama with models like 'qwen2.5-coder', which works very well and fast 🤔 I don't know if this is possible; my intention is to reduce costs and increase perimeter data security.
@ColeMedin
@ColeMedin 7 days ago
Yes this is definitely possible! I would look into the Agent Flows in Flowise if you're looking to do that there!
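Outside of Flowise, the same split is easy to sketch in Python: a hosted model doing triage and a local Ollama model doing the tool-level work. This is only a rough illustration of the idea - the model names and routing prompt are assumptions, not anything from the video:

```python
# Rough sketch of a hybrid setup: a hosted model triages the request,
# a local qwen2.5-coder model (via Ollama) handles the actual work.
# Model names and the routing prompt are placeholder assumptions.
from langchain_ollama import ChatOllama
from langchain_openai import ChatOpenAI

triage_llm = ChatOpenAI(model="gpt-4o-mini")        # first-level agent, hosted
worker_llm = ChatOllama(model="qwen2.5-coder:7b")   # last-level agent, fully local

def answer(request: str) -> str:
    route = triage_llm.invoke(
        "Reply with exactly one word, CODE or CHAT, for this request: " + request
    ).content.strip().upper()
    if route == "CODE":
        # Sensitive, tool-level work stays on the local model
        return worker_llm.invoke(request).content
    return triage_llm.invoke(request).content

print(answer("Write a Python function that reverses a string."))
```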
@mike-s7c
@mike-s7c 6 days ago
Great channel Cole! Can you do a session on how to replicate this on a VPS?
@ColeMedin
@ColeMedin 1 day ago
Thank you Mike! Yeah I am planning on doing that in the future!
@johnclarkson6120
@johnclarkson6120 5 days ago
🎉 you know what, you're a genius
@ColeMedin
@ColeMedin 4 days ago
Haha thank you very much!
@AnimikhAich
@AnimikhAich 8 days ago
Question: once I've built a local Flowise workflow, what is the best way to export that and deploy it? I'd like to deploy it on a server. How do I do that? Any suggestions are welcome.
@ColeMedin
@ColeMedin 7 days ago
Great question! You can click on the code icon in the top right of the builder to export the agent. You can turn it into an API endpoint to use with any frontend you want to build!
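As a rough illustration of that last part, a saved chatflow can be called over HTTP from any frontend or script. This sketch assumes a default local Flowise install on port 3000 and a placeholder chatflow ID:

```python
# Minimal sketch: call a Flowise chatflow as an API endpoint.
# Assumes Flowise is running locally on port 3000; the chatflow ID is a placeholder.
import requests

CHATFLOW_ID = "your-chatflow-id-here"
url = f"http://localhost:3000/api/v1/prediction/{CHATFLOW_ID}"

response = requests.post(url, json={"question": "What documents do you have about n8n?"})
response.raise_for_status()
print(response.json())  # the agent's answer comes back as JSON
```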
@jdaiexplore
@jdaiexplore 7 days ago
Help needed. I want to add a button called "Improve text". When clicked, the user's input text in a long text field should be improved by an AI model and shown in a popup, and the user asked whether to use the AI-generated text or the existing text. How do I do this in my web app too?
@TheDJHollywood
@TheDJHollywood 9 days ago
At the 2:45 mark you have a duplicate audio track. I spent a couple of minutes looking for another open browser tab, then I paused this one and both voices were gone, not just one.
@ColeMedin
@ColeMedin 9 days ago
Yeah found it, thanks! I'm editing it out now.
@TheDJHollywood
@TheDJHollywood 9 days ago
@@ColeMedin I found it funny that I was still looking for a second browser tab even AFTER the duplicate track ended, but I thought it was just lagging behind and would start talking again.
@prabhic
@prabhic 8 days ago
Thank you very much. Very helpful, just what is needed, at the right time.
@ColeMedin
@ColeMedin 8 days ago
You bet! Glad it is helpful!
@NoCodeKing69
@NoCodeKing69 7 days ago
Can't you do all of this purely in n8n? What is the reason to add another tool like Flowise to the stack?
@ColeMedin
@ColeMedin 7 days ago
Thank you! And this is a great question. So even though you can create anything in n8n that you could with Flowise, it still makes sense to build a lot of things with Flowise (especially to prototype your agents quickly) because Flowise has so many more integrations built on top of LangChain than n8n. As an example, all the document loaders in LangChain are available in Flowise, so you can quickly ingest documents into a vector DB from Brave web API searches (just as an example) and other sources that you could handle in n8n, but only through a very custom setup. I hope that makes sense!
@kalilbarry3773
@kalilbarry3773 8 days ago
Hey Cole, what do you think about Langflow? Is there a reason you chose Flowise instead?
@ColeMedin
@ColeMedin 7 days ago
They are super similar tools! Honestly there wasn't a huge reason I picked Flowise over Langflow - I would have to dive into using Langflow more before I could really do an in-depth comparison.
@ahmedalikhan6154
@ahmedalikhan6154 8 days ago
Is there any way to create agents like that through coding in Python? Or can we convert these automations into Python?
@ColeMedin
@ColeMedin 7 days ago
Great question! Flowise is built on top of LangChain which is a Python framework, so anything you build with this can be converted to Python code using LangChain!
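To give a flavor of what that looks like, here is a tiny sketch of a Flowise-style "prompt -> chat model" flow rebuilt with LangChain's Python packages and a local Ollama model. The model name and prompt are placeholder assumptions:

```python
# Tiny sketch: a Flowise-style "prompt -> chat model" flow rebuilt in Python with LangChain.
# Assumes Ollama is running locally; the model name is a placeholder.
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import ChatOllama

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant for questions about n8n and Flowise."),
    ("human", "{question}"),
])
llm = ChatOllama(model="qwen2.5:7b")

chain = prompt | llm  # LCEL: pipe the prompt into the model
print(chain.invoke({"question": "When should I prototype in Flowise vs build in n8n?"}).content)
```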
@alx8439
@alx8439 6 days ago
Apart from Langflow, there is already a plethora of agentic libraries for Python.
@GustavoRabelo93
@GustavoRabelo93 9 days ago
What are the recommended specs for the GPU? I have a 3080ti, would that be enough?
@ColeMedin
@ColeMedin 9 days ago
Depends on the local LLM you want to run! A 3080ti will definitely be able to run ~20b parameter models and smaller pretty fast! You could try 32b parameter models as well but that is where it would start to be pushing it.
@seniorcaution
@seniorcaution 8 days ago
Should be able to run 20b without quantization. I've been able to run 32b and even some 70b models with quantization on my 12GB card
@alx8439
@alx8439 6 days ago
@@seniorcaution Nah, the 3080 Ti has 12 GB of VRAM. Models with original weights (non-quantized) are 16-bit = 2 bytes per parameter. There is no way you can fit a 20B model (40 GB of weights) into 12 GB of VRAM. But if you take a 4-5 bit quant, then yes.
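The back-of-the-envelope math is just parameters x bytes per weight. A tiny sketch of that estimate (weights only - KV cache and runtime overhead add more on top):

```python
# Rough VRAM estimate for model weights only (ignores KV cache and runtime overhead).
def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB, good enough for a sanity check

print(weight_vram_gb(20, 16))  # ~40 GB at fp16 -> far beyond a 12 GB card
print(weight_vram_gb(20, 4))   # ~10 GB at a 4-bit quant -> plausible on 12 GB
```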
@LA_318
@LA_318 8 days ago
@colemedin I've really been enjoying your videos and trying to replicate what you've been doing. I think it's very valuable. One thing I'm specifically trying to do myself is create a workflow that takes screenshots of what I'm doing and then uses the latest Llama Vision LLM to convert them into a document that can be saved for later, so I can ask my local AI about projects I've worked on and details I may have missed when taking notes for my builds - even for projects that don't necessarily have anything to do with AI. I've been having trouble setting this up, but if this interests you I'd love to see you make a video about how to make something like that work. I have other ideas on how to integrate this into other things, but if I can't figure this out then I certainly can't move on to more complicated workflows.
@ColeMedin
@ColeMedin 8 days ago
Thank you! Love your use case too. I certainly haven't created enough videos on vision-based generative AI so this would be something awesome to cover!
@alx8439
@alx8439 6 days ago
I'm pretty sure there are already some open source projects doing this:
1) Take a screenshotting tool that can be executed from the CLI on your operating system. There are plenty of them, and some are even included with your OS.
2) Write a simple script (bash/cmd) to take a screenshot and save it to a file. Then, as the next step in that script, call Ollama with the latest Llama vision model to process that screenshot and extract whatever information you want. Save the Ollama output as a text file. The script itself is just a few lines of code (a minimal sketch of this step is below).
3) Now you have a pair of files: the screenshot and a text file describing what is on it. You can save them anywhere - a local folder, Google Drive, a database.
4) Put that script on a schedule, like once a minute.
5) Reuse any RAG ingestion pipeline to make embedding vectors out of the screenshot description and put it + the original screenshot + the datetime into any vector database of your choice.
6) Use that vector database and embedding model as a RAG tool with any model.
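A minimal sketch of the Ollama call from step 2, using the ollama Python client - the model name and file paths are placeholder assumptions:

```python
# Minimal sketch of step 2: describe a screenshot with a local vision model via Ollama.
# Assumes the ollama Python client is installed and a vision-capable model has been pulled;
# the model name and file paths are placeholders.
import ollama

response = ollama.chat(
    model="llama3.2-vision",            # placeholder vision model
    messages=[{
        "role": "user",
        "content": "Describe everything visible in this screenshot in detail.",
        "images": ["screenshot.png"],   # the screenshot taken in step 1
    }],
)

with open("screenshot.txt", "w") as f:  # the text description to feed into steps 3-5
    f.write(response["message"]["content"])
```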
@sebastianpodesta
@sebastianpodesta 9 days ago
Hi Cole, can I update my old AI kit installation to this?
@ColeMedin
@ColeMedin 9 days ago
You sure can Sebastian! You can run the same Docker compose command you used to start it for the first time and it'll update everything and you won't lose any workflows or anything else you've built with the kit!
@sebastianpodesta
@sebastianpodesta 8 days ago
@ thanks a lot for all your work!!
@ColeMedin
@ColeMedin 7 days ago
You bet!
@go5495
@go5495 7 days ago
Of course you are an AI agent - being so unearthly helpful is not a quality all humans have.
@jaggyjut
@jaggyjut 8 days ago
With Cursor AI and Windsurf, which can code, is there still going to be demand for no-code platforms?
@ColeMedin
@ColeMedin 8 days ago
Super fair question! Definitely - until LLMs can consistently write error-free code, no-code platforms like Flowise and n8n are still going to be by far the fastest way to create your agents. But even once we get to that point, it can still take a while to set up your environment, get the LLM to truly understand what you are trying to build, and execute on it.
@wulfrum5567
@wulfrum5567 1 day ago
How do I update my n8n from Docker?
@frosti7
@frosti7 6 days ago
I'm actually looking for a cloud solution that's suitable for a team - is there a recommendation?
@ColeMedin
@ColeMedin 1 day ago
Cloud solution for what exactly? For running local AI?
@AriellePhoenix
@AriellePhoenix 9 days ago
Flowise isn't accepting new sign-ups; they have a waitlist 😭 Looks cool, though!
@ColeMedin
@ColeMedin 9 days ago
You can download and run it yourself locally like I did in this video! I believe that is just for their cloud offering!
@sneakymove
@sneakymove 9 days ago
The problem with Flowise is you could be on the waiting list forever. I haven't been able to download or start anything for three weeks now.
@ColeMedin
@ColeMedin 9 days ago
Hmmm... Flowise is open source so you can just install it and run it now like I showed in the video! Are you maybe referring to a cloud offering they have?
@augmentos
@augmentos 9 days ago
Yasss
@mitchellrcohen
@mitchellrcohen 9 days ago
Thanks!
@ColeMedin
@ColeMedin 9 days ago
You bet!
@Luke-Barrett
@Luke-Barrett 8 days ago
Can't this all be done in n8n?
@ColeMedin
@ColeMedin 8 days ago
Fair question, Luke! In the end everything in Flowise can be done in n8n, but certainly not as fast. Loading documents for RAG for example is one thing you can do MUCH faster with Flowise, just because of all the integrations and prebuilt tools we have there since it's built on top of LangChain. So what I like doing a lot of times is prototyping with Flowise and then eventually moving things into n8n - it's an easy transition too.
@marcosbeliera1
@marcosbeliera1 8 days ago
Nice! But it looks very similar to n8n Agents.
@ColeMedin
@ColeMedin 8 days ago
Thanks! And yeah it is similar, part of why they integrate so well together. Flowise makes it easier to do some things like load documents into RAG because of all the different document loaders there are that aren't available in n8n. Just one example!
@АлександрУшанов-щ3е
@АлександрУшанов-щ3е 8 days ago
No subtitles?
@ColeMedin
@ColeMedin 8 days ago
They should be generated by default! Are you not seeing them when you usually do?
@АлександрУшанов-щ3е
@АлександрУшанов-щ3е 7 days ago
@@ColeMedin They appeared, but not right away - around six hours later, although before they used to appear almost immediately. And thanks for the videos!
@ColeMedin
@ColeMedin 7 days ago
Great! You bet! Not sure why it took extra time this time around.
@ShubzGhuman
@ShubzGhuman 9 days ago
Well, I'm awake.
@Maxim_Kulakov
@Maxim_Kulakov 5 days ago
You need to think about people with studio headphones watching your video, because the audio effects you have throughout the whole video ARE REALLY LOUD!
@alx8439
@alx8439 6 days ago
Also, all these so-called "agent flows" I'm seeing are kind of flawed. They don't have loopbacks and self-reflection. Imagine that the internet search didn't return anything meaningful. In an ideal situation the agent should detect that and try another round of search using different wording. But instead, whatever garbage was returned gets used.
@ColeMedin
@ColeMedin 1 day ago
Yeah that is fair! Though those kinds of things can be built into these workflows! It's up to the user to make that happen, is the way I see it.
@alx8439
@alx8439 6 days ago
The logging/transparency of what is happening in Flowise is shit.
@ColeMedin
@ColeMedin 1 day ago
Yeah I agree, that's why I tend to use it for just prototyping
@terrorkiller645
@terrorkiller645 7 days ago
Can you make like a playlist series where you give a full in-depth explanation of how to download these things, as well as make a space where people can ask questions if they get stuck on something?
@ColeMedin
@ColeMedin 7 days ago
Yeah I am working on this! And over at thinktank.ottomator.ai I have a community for people to post these kinds of questions!
@mahdihussein2826
@mahdihussein2826 7 days ago
Can you share how to deploy a RAG on Instagram using n8n, please?
@PyJu80
@PyJu80 8 days ago
Don't know if you believe in the universe, but I come to YouTube when I hit a roadblock. Then who pops up? My main man Cole. The f**king G.O.A.T. With the exact solution to my problem. EVERY TIME. Hats off. I don't subscribe to many things, but I get excited for your next project. NGL, I can see bolt.new or someone coming to you with a lot of money for this. You're all over YouTube, even Claude 3.5 is impressed with you and your news 😁. Man, I love your work. One day, one day, you will integrate VSC and allow for live file changes, and mate, I wouldn't blame you for taking the $2b and running 🤑🤑🤐🤐. 👊👊
@PyJu80
@PyJu80 8 days ago
Ohhhhh, and if you can convert the above into an open_ai_like base URL to use as a model in your oTToDev, then I'm flying to the USA to bow to the G.O.A.T.
@ColeMedin
@ColeMedin 8 days ago
Haha thank you so much - that means a lot to me! $2b would be a fever dream... I would take it and keep teaching on YouTube :)
@PyJu80
@PyJu80 8 days ago
@@ColeMedin Don't forget me on the lads' holiday. All-expenses spring break on you. 😅😅😂😂❤🎉