Build Your Own ChatGPT Alternative FREE (NO CODE Required)

55,098 views

Leon van Zyl

A day ago

Comments
@leonvanzyl
@leonvanzyl 4 ай бұрын
What do you think about the new Llama 3.2 models? Remember to hit the like button and subscribe to boost the video in the YT algorithm.
@lingzhang2061
@lingzhang2061 4 ай бұрын
Awesome video! Thanks, Leon. I was wondering if we could use Llama 3.2 models in the sequential multiagent chatflow?
@AaronBlox-h2t
@AaronBlox-h2t 3 ай бұрын
Cool video, but Ollama sucks lemons on both my Intel MacBook Pro and my Win10 system (Intel Arc A770 16GB, 64GB DDR4 RAM, 5000TB 7400MB/sec SSD). But that's because I chose Arc; if you have Nvidia or an M1 to M3 it should work fine. Regarding Llama 3.2, they are great, I'm downloading them from Hugging Face with a custom Python script.
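For anyone curious what such a download script can look like, here is a minimal Python sketch using the huggingface_hub library; the repo id, token and target folder are placeholders, and gated Meta repos require you to accept the license on the Hub first.

# Minimal sketch: pull Llama 3.2 weights from Hugging Face.
# Assumes `pip install huggingface_hub`; repo id, token and folder are placeholders.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="meta-llama/Llama-3.2-3B-Instruct",   # placeholder repo id
    local_dir="./llama-3.2-3b-instruct",          # where the files land
    token="hf_your_token_here",                   # required for gated repos
)
print("Download complete.")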
@errorsinconduct
@errorsinconduct 3 ай бұрын
Thanks for the tutorial. Everything else works except the chatflow, which says "fetch failed" when I try to prompt it. The document side works and retrieves the 4 most relevant documents. What could be the problem?
@pavanj2238
@pavanj2238 3 ай бұрын
@leonvanzyl fetch failed is the error message in chat window
@msclt1982
@msclt1982 3 ай бұрын
Thank you so much Leon, your videos are crisp and clear. I have created a chatbot, as shown in the video, using LLaMA and configured it on my VM. The chatbot has memory and is responding to questions as expected. However, the problem I am facing is that the bot takes a lot of time to load and respond to each query. I have already built a chatbot with OpenAI, and while it was a different approach, I didn't encounter this kind of slowness. Has anybody faced this issue? Do you have any solution for this?
@ygdiget4119
@ygdiget4119 4 ай бұрын
I really love your videos! I would really love to see an in-depth video of you comparing Langflow, Flowise, n8n, VectorShift, etc!
@dliedke
@dliedke 4 ай бұрын
Very nice, I followed the guide and worked just fine. Thanks a lot for the detailed tutorial. I generated some document data samples with Claude :D
@leonvanzyl
@leonvanzyl 4 ай бұрын
Great to hear!
@msclt1982
@msclt1982 3 ай бұрын
I have created a chatbot, as shown in the video, using LLaMA and configured it on my VM. The chatbot has memory and is responding to questions as expected. However, the problem I am facing is that the bot takes a lot of time to load and respond to each query. I have already built a chatbot with OpenAI, and while it was a different approach, I didn’t encounter this kind of slowness. Anybody faced this issue? Do you have any solution for this?
@mayowao
@mayowao 2 ай бұрын
@@msclt1982 It might be because Llama 3.2 runs locally through Ollama while OpenAI is hosted and accessed via an API key, so your machine's hardware specs may be the Achilles heel. I'll know for sure when I try it out.
@mayowao
@mayowao 2 ай бұрын
exactly what I thought to do too, likely via Bolt
@BrianDevJourney
@BrianDevJourney 3 ай бұрын
Great tutorial Leon!! I reached out to your email about a sponsor opportunity. Cheers!
@leonvanzyl
@leonvanzyl 3 ай бұрын
Saw the mail. Thank you. Will respond soon.
@rauldiaztorres
@rauldiaztorres 4 ай бұрын
Great, Leon!!! Thanks for the tutorial, very simple and it works fine! 🏅🏆🎖
@leonvanzyl
@leonvanzyl 4 ай бұрын
You're welcome 🤗
@Augmented_AI
@Augmented_AI 4 ай бұрын
Do you have a graph rag workflow?
@leonvanzyl
@leonvanzyl 4 ай бұрын
Not yet. Will create one soon 😄
@Augmented_AI
@Augmented_AI 4 ай бұрын
@@leonvanzyl Lekker, looking forward to it :).
@PIOT23
@PIOT23 4 ай бұрын
Brilliant video as always 👏🏼
@KiWelten
@KiWelten 4 ай бұрын
Amazing tutorial, exactly what I was looking for!
@leonvanzyl
@leonvanzyl 4 ай бұрын
You're welcome 🤗
@donutguy1281
@donutguy1281 2 ай бұрын
Fantastic tutorial! Exactly what I needed. Chrome's AI generated results referenced you on this topic, lol
@leonvanzyl
@leonvanzyl 2 ай бұрын
For real!? Haha, that's awesome 😎
@laflammedenis
@laflammedenis 3 ай бұрын
Great video. Standing by for my Flowise invite - would love to see "How to Fine Tune llama 3.2"
@leonvanzyl
@leonvanzyl 3 ай бұрын
Flowise invite? It's free, and we used the free version in this video. If you're referring to their paid cloud service then you can skip the waiting list by using my affiliate link.
@AhmedMeklad
@AhmedMeklad 3 ай бұрын
Wow, I think this is an advantage for the developers and the geeks.
@leonvanzyl
@leonvanzyl 3 ай бұрын
It's awesome.
@maniecronje
@maniecronje 4 ай бұрын
Thank you Leon, another amazing Flowise tutorial. Unfortunately, hallucination when using RAG is still an issue, so it's not something you can use in production yet. Looking forward to the next Flowise video. Thanks, Leon!
@leonvanzyl
@leonvanzyl 4 ай бұрын
Glad you liked it! Let me know what you'd like me to cover next 👍
@johannesdeboeck
@johannesdeboeck 3 ай бұрын
@@leonvanzyl If you allow me to answer that question :) A video covering Flowise + a knowledge graph would be great! This would reduce LLM hallucinations as well
@jawwadhussain8457
@jawwadhussain8457 3 ай бұрын
you made my life complete :P with love
@leonvanzyl
@leonvanzyl 3 ай бұрын
Hehe, glad I could help 😀
@researchandbuild1751
@researchandbuild1751 3 ай бұрын
I got it working locally, nice video, thank you so much for showing this. I wonder, though, how I could embed this into a product? It seems I'd have to at least run a Node server to host the chatbot? (There's an API sketch just below this thread.)
@M3Services168
@M3Services168 Ай бұрын
Dear sir, did you manage to deploy this wonderful solution onto a server to serve the public?
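On embedding this into a product: Flowise exposes each chatflow over a REST prediction endpoint, so any backend you already run can call it and pass the answer on to your own UI. A minimal Python sketch, assuming the requests library; the chatflow ID, API key and host are placeholders (a locally run Flowise listens on port 3000 by default).

# Minimal sketch: call a Flowise chatflow from your own backend.
# Assumes `pip install requests`; chatflow ID, API key and host are placeholders.
import requests

FLOWISE_URL = "http://localhost:3000/api/v1/prediction/<your-chatflow-id>"
headers = {"Authorization": "Bearer <your-api-key>"}  # only needed if the flow is protected

resp = requests.post(
    FLOWISE_URL,
    json={"question": "What does the knowledge base say about returns?"},
    headers=headers,
    timeout=120,
)
resp.raise_for_status()
print(resp.json().get("text"))  # the chatbot's answer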
@КравчукІгор-т2э
@КравчукІгор-т2э 3 ай бұрын
Thanks for the content!
@wgaitanartunduaga
@wgaitanartunduaga 2 ай бұрын
Thanks Leon, great tutorial. It does a wonderful job. Please can you make a video where the chatbot has the option to save the conversation history and export it as a PDF, TXT or DOCX file?
@brianglong11
@brianglong11 2 ай бұрын
Outstanding! I have set up a knowledge base of data and information collected by a forestry company and want to extract relevant information for annual reports. Is there a way to create a Flowise AI flow with a template of queries that I could run to populate the annual report? Thanks again!
@ariclenescorreia6048
@ariclenescorreia6048 4 ай бұрын
Thank you so much for this video it is just what I needed
@jakkalsvibes
@jakkalsvibes 3 ай бұрын
Thank you Leon for the detailed explanations. I just need to know: is there a bug with clearing the chat history, or am I missing something?
@leonvanzyl
@leonvanzyl 3 ай бұрын
Hey Jakkalsvibes (I like your channel by the way 🔥). I'm not aware of any issues. What issue are you experiencing?
@jakkalsvibes
@jakkalsvibes 3 ай бұрын
@@leonvanzyl On the Chatflow chat screen, when I press Clear Chat I get 'Error: chatMessagesService.removeAllChatMessages - SQL_READONLY: attempt to write a readonly database'
@jakkalsvibes
@jakkalsvibes 3 ай бұрын
@@leonvanzyl Thanks, I like your channel too 🦊. When I press 'Clear Chat' I get: Error: 'chatMessagesService.removeAllChatMessages - SQLITE_READONLY: attempt to write a readonly database'
@jakkalsvibes
@jakkalsvibes 3 ай бұрын
@@leonvanzyl Sorted.
@M3Services168
@M3Services168 7 күн бұрын
Great Tool & Great Tutorial!!! Thank you :-) Regarding the "Conversational Retrieval QA Chain", is there any way to display the name of the source document alongside the fragments/chunks of "Return Source Documents"? It would help the administrator very much in deciding whether to include or exclude certain documents in the RAG process.
@mayowao
@mayowao 2 ай бұрын
Thanks for the walkthrough video
@leonvanzyl
@leonvanzyl 2 ай бұрын
You're welcome 🤗
@nick8292
@nick8292 27 күн бұрын
Amazing! Thank you very much Leon. And Happy New Year 2025!
@leonvanzyl
@leonvanzyl 27 күн бұрын
Happy new year 🎉
@sdffsdsfdsfd
@sdffsdsfdsfd 4 күн бұрын
Hello, I'm not seeing the Buffer Memory node in the new Flowise version (flowise@2.2.4). Is there another way?
@SyntharaPrime
@SyntharaPrime 4 ай бұрын
It is amazing. Great effort and very useful. Thanks a lot.
@SirSalter
@SirSalter 2 ай бұрын
Take a sip every time he says "go ahead".
@leonvanzyl
@leonvanzyl 2 ай бұрын
🤣 SirSalter, go ahead and have a sip
4 ай бұрын
Thank you. Any chance of doing a video on how to deploy this and use it in the cloud or on a website?
@saiy88
@saiy88 4 ай бұрын
Ty! Can you do a tutorial on how you can host and deploy a lead generation chatbot on a website? Thanks!
4 ай бұрын
Sorry, found it on your channel already!
@M3Services168
@M3Services168 Ай бұрын
Great idea! Where on this channel can you find the website deployment solutions?
@Boundlessmofo
@Boundlessmofo Ай бұрын
How can I implement this on my online store? Thanks for the detailed guide, btw!
@noway-j5j
@noway-j5j 7 күн бұрын
Great videos Leon, love the SA accent. In my Flowise I don't see the upsert config. Please let me know what to do to get this.
@leonvanzyl
@leonvanzyl 7 күн бұрын
Are you on the latest version?
@noway-j5j
@noway-j5j 6 күн бұрын
@@leonvanzyl My current version is 1.8.4 and the latest version is flowise@2.2.4, published 9 days ago. When I run npm update flowise I just get "up to date in 417ms"?
@Kafuixx
@Kafuixx 3 ай бұрын
Hi Leon, love this video as a coding noob looking to create an enterprise AI app for a business. Is that possible using this method? And if so, how can this be embedded into that product?
@PrensCin
@PrensCin 3 ай бұрын
How can we get the Llama 3.2 11B model into Ollama's list of available models?
@faxiran
@faxiran 9 күн бұрын
Do you need to install Faiss and how?
@MyFukinBass
@MyFukinBass 4 ай бұрын
Amazing. Thanks for the content. Is it possible to take this bot elsewhere? Say if I ever wanted to have a bot like this on an app like Slack or Discord?
@researchandbuild1751
@researchandbuild1751 3 ай бұрын
I installed flowise using their own documentation and during the NPM install it has a TON of warnings about unsupported packages lol!
@RedCloudServices
@RedCloudServices 4 ай бұрын
How can you setup a vision model to read from a pdf.
@showcaseshot
@showcaseshot 3 ай бұрын
When I type something in the chat box I get the error "fetch failed". Do you know how to fix this issue? I searched the web and got a suggestion to reinstall Ollama, but the issue still persists.
@TabindaQudrat
@TabindaQudrat 3 ай бұрын
Getting same issue. do you find anyway to resolve it?
@soundlab4831
@soundlab4831 2 ай бұрын
Hi, in Base URL enter 127.0.0.1:11434
@BilalKundi-lm5zz
@BilalKundi-lm5zz Ай бұрын
I am trying to download and set up the Ollama environment, which includes running the llama3.2 model (a file size of approximately 42GB). Since Ollama runs its models locally, it does not require an active internet connection once the setup is complete. My questions are:
1. Website integration: after successfully setting up Ollama on my system, how can I configure and connect the chatbot to a website for real-time interaction?
2. Offline configuration: for environments with no internet access, how can I customize and deploy a chatbot effectively using Ollama?
The aim is to ensure smooth functionality both for online website integration and in fully offline scenarios.
@leonvanzyl
@leonvanzyl Ай бұрын
Websites will not be able to access Ollama on your local machine. You'll have to host it in the cloud (which can be quite expensive), or use services like Groq, HuggingFace, Replicate, etc.
@leonvanzyl
@leonvanzyl Ай бұрын
Also, there are smaller versions of Llama 3.2 (like the 3b model).
@BilalKundi-lm5zz
@BilalKundi-lm5zz Ай бұрын
​@@leonvanzyl are you saying I can’t run the chatbot on a local server within a LAN environment? 🤔 Additionally, if hosting on the cloud is too expensive for me, how can I integrate the chatbot with my website to make it accessible to clients?
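A LAN setup is possible in principle: Ollama serves a small HTTP API on port 11434, and if the machine running it is started with OLLAMA_HOST=0.0.0.0 so it listens beyond localhost (do verify that against the Ollama docs), a backend on the same network can forward your website's questions to it. A rough Python sketch, where the LAN IP and the model tag are placeholders:

# Rough sketch: query an Ollama instance running on another machine in the LAN.
# Assumes `pip install requests`; 192.168.1.50 and the model tag are placeholders.
import requests

OLLAMA_URL = "http://192.168.1.50:11434/api/chat"

payload = {
    "model": "llama3.2",  # whichever tag you pulled with `ollama pull`
    "messages": [{"role": "user", "content": "Summarise our leave policy."}],
    "stream": False,      # return a single JSON object instead of a stream
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["message"]["content"])

Clients on the public internet still can't reach this unless you host it somewhere they can, which is the point Leon is making above.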
@jonathanmisaelmusser8090
@jonathanmisaelmusser8090 3 ай бұрын
Great video! But I have a problem, the chatbot doesn't always respond, or I see the response when I close and reopen the window, what could be happening?
@pategoubusinesscenter1307
@pategoubusinesscenter1307 4 ай бұрын
thank you for this video, please hox can i add it to a website ?
@leonvanzyl
@leonvanzyl 4 ай бұрын
I have a video on my channel that shows how to add Flowise chatbots to websites. Obviously you won't be able to use a local instance of Flowise for that, so you could either self host Flowise or use their cloud service. Instead of using Ollama, use the Groq Chat node instead.
@henriquemoniz2091
@henriquemoniz2091 3 ай бұрын
Is flowise collecting any info (resources based on metadata and namespaces) that could potentially extract private information?
@leonvanzyl
@leonvanzyl 3 ай бұрын
No, you can literally disconnect your internet connection and run this solution fully local.
@ageell2004
@ageell2004 3 ай бұрын
thanks for the hard work. however how do we able to "export" the chat bot into a single site instead have to start with the chat bubble?
@LamaChandrasena
@LamaChandrasena 4 ай бұрын
Very useful. Thanks. Is it possible to automate document loading (as new docs become available), for example by running a script or copying files into a folder, or does it have to be done manually through the UI each time, as you demonstrated? Thanks. (A scripted approach is sketched at the end of this thread.)
@SoyPorteroYT
@SoyPorteroYT 4 ай бұрын
This would be really useful
@boteeeh
@boteeeh 3 ай бұрын
Just watch his n8n tutorials, there he made with google docs.
@AaronBlox-h2t
@AaronBlox-h2t 3 ай бұрын
Sure...look at the source code and integrate it. Oh wait, it's done via Flowise.....
@SoyPorteroYT
@SoyPorteroYT 3 ай бұрын
@@AaronBlox-h2t limitations of no-code/low-code
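Following up on the automation question at the top of this thread: Flowise also exposes document upserts over its API, so a scheduled script could push new files into the vector store instead of the UI. The sketch below reflects my understanding of the Flowise vector upsert endpoint, so treat it as an assumption and check the API reference in your own instance; the chatflow ID, API key and file name are placeholders.

# Hedged sketch: push a new document into a Flowise chatflow from a script,
# e.g. from a cron job watching a folder, instead of upserting through the UI.
# Assumes `pip install requests`; chatflow ID, API key and file are placeholders.
import requests

UPSERT_URL = "http://localhost:3000/api/v1/vector/upsert/<your-chatflow-id>"
headers = {"Authorization": "Bearer <your-api-key>"}  # only if the flow is protected

with open("new_policy.pdf", "rb") as f:
    resp = requests.post(
        UPSERT_URL,
        headers=headers,
        files={"files": ("new_policy.pdf", f, "application/pdf")},
        timeout=600,
    )
resp.raise_for_status()
print(resp.json())  # Flowise reports what was added or updated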
@mortu90
@mortu90 4 ай бұрын
Hi Leon, Thank you so much for all the work you’re doing-it’s greatly appreciated! I’ve just become one of your subscribers and I’m really looking forward to your tutorials. I have a question that’s been driving me a bit crazy: Is it possible to implement a chatflow developed in FlowiseAI to work with the frontend of OpenWebUI? I tried to create a pipeline and uploaded it to OpenWebUI but couldn’t get it to work. Is this achievable? Thanks again!
@mortu90
@mortu90 4 ай бұрын
As a follow-up, I was also wondering about integrating a chatflow developed in N8N. I feel that combining the power of N8N or Flowise with OpenWebUI would be incredibly powerful to leverage their awesome UI capabilities.
@leonvanzyl
@leonvanzyl 4 ай бұрын
Thanks for the sub 💪. I don't think there is a way to add Flowise to Open Web UI. May I ask why you would want to use Web UI instead of the Flowise/ Chatflow UI?
@mortu90
@mortu90 4 ай бұрын
@@leonvanzyl Thanks for getting back to me! I thought the Web UI might offer more flexibility for managing user permissions (but I could be wrong!). My idea is to leverage Flowise’s power to build the chatflow logic and then use WebUI to manage access and provide a more polished frontend experience for users. Does that sound like a good approach, or do you think I’m missing something?
@Col-pd2zd
@Col-pd2zd 3 ай бұрын
Is it possible to use LLaVA with Ollama to have the client upload a photo that the model can view? I know it doesn't have the option, like OpenAI has, but maybe you know a workaround?
@msclt1982
@msclt1982 3 ай бұрын
Thank you so much Leon, your videos are crisp and clear. I have created a chatbot, as shown in the video, using LLaMA and configured it on my VM. The chatbot has memory and is responding to questions as expected. However, the problem I am facing is that the bot takes a lot of time to load and respond to each query. I have already built a chatbot with OpenAI, and while it was a different approach, I didn't encounter this kind of slowness. Has anybody faced this issue? Do you have any solution for this?
@leonvanzyl
@leonvanzyl 3 ай бұрын
Your VM might need some more horsepower
@msclt1982
@msclt1982 3 ай бұрын
@@leonvanzyl Wow, I didn't expect such a quick response from you. That's great :). Anyway, I am using an Azure VM and have already checked this approach; the bot's responsiveness improved after increasing the VM's CPU and RAM. Thank you so much.
@YanasChanell
@YanasChanell 2 ай бұрын
Thank you for the video. What are the minimal system requirements to run all of this on PC?
@leonvanzyl
@leonvanzyl 2 ай бұрын
I think you can run this on a potato. Seriously, we tested the model on a 10 year old Acer Spin with an i3 processor, and onboard graphics, and it works.
@adrianobernagozzi-ik1tecno362
@adrianobernagozzi-ik1tecno362 4 ай бұрын
Great video. I'm training it on my own data, but the text I'm using is in TXT format and is very large; sometimes the questions get answers that make sense and other times they don't. Do you have any tips?
@donatocontreras7073
@donatocontreras7073 3 ай бұрын
How can I use more than two document stores at the same time, if I can only use one Document Store (Vector) node?
@leonvanzyl
@leonvanzyl 3 ай бұрын
Absolutely! I would use a tool agent and attach two retriever tools.
@donatocontreras7073
@donatocontreras7073 3 ай бұрын
@@leonvanzyl please! Could you show in a video how to do it?
@karthikeyakuncham6929
@karthikeyakuncham6929 3 ай бұрын
Hey, nice video Leon. In my RAG chatbot I am using a tool agent, and I need to pass the context that I have embedded in the vector DB into the chat prompt template, so that the agent can analyse whether the question is related to the docs or context provided. How can I pass it? I just want my chatbot not to answer unrelated questions. Can you please help me with this?
@DarkLineSnes
@DarkLineSnes 3 ай бұрын
Why do we need this vector thing? I didn't understand what it's for.
@leonvanzyl
@leonvanzyl 3 ай бұрын
The vector store is where the knowledge base is stored.
@ShaunyTravels.
@ShaunyTravels. 3 ай бұрын
Knew that if there is one place to come to get help, it's here!! Leon, I need help connecting a multimodal model from Ollama in Flowise for text and images. Do you have a video about that? I looked but can't find one 😢
@leonvanzyl
@leonvanzyl 3 ай бұрын
Hehe, my PC won't be able to run a model with image capabilities 😊. Will see what I can do once the larger Llama 3.2 models become available
@santhoshkrishnan6269
@santhoshkrishnan6269 3 ай бұрын
getting fetch failed error. please help
@TabindaQudrat
@TabindaQudrat 3 ай бұрын
Did you resolve the error as i am also getting the same error.
@soundlab4831
@soundlab4831 2 ай бұрын
Hi, in Base URL enter 127.0.0.1:11434
@RajuK-mn6lk
@RajuK-mn6lk 2 ай бұрын
@@TabindaQudrat Im also getting the same error. Any idea how to resolve it
@TabindaQudrat
@TabindaQudrat 2 ай бұрын
@@RajuK-mn6lk Actually it requires high computational power, so I just dropped the idea of using Flowise. Instead, you can do the same thing through code; a new generative AI library has been launched. Use it and ta-da.
@warrior-dl9vn
@warrior-dl9vn 3 ай бұрын
I get fetch error for any prompts in the chat? what could be the issue?
@damirrekic4271
@damirrekic4271 3 ай бұрын
I had this issue too, but after upgrading the node to version 20.18 , it works properly
@univer6979
@univer6979 3 ай бұрын
@@damirrekic4271 you are right
@M3Services168
@M3Services168 8 күн бұрын
Yes, I re-installed NodeJs v22.13.1 over v22.12.0, it worked without doing any other thing.
@BilalKundi-lm5zz
@BilalKundi-lm5zz Ай бұрын
I followed the instructions to create a chatbot, but its response time is very slow and delayed. Compared to the BotPress chatbot, it is noticeably less responsive. Could this be due to a missing configuration or an overlooked step?
@leonvanzyl
@leonvanzyl Ай бұрын
I don't think you're comparing apples with apples here. You can't run botpress locally. The response times of Ollama depend on your own hardware.
@Aibots777
@Aibots777 3 ай бұрын
Do I have to add each doc to the loader? Why can't I point it at a folder to load? Because I've got a 16 TB file that will not fit.
@leonvanzyl
@leonvanzyl 3 ай бұрын
If I'm not mistaken you can use the folder or file loader for that. That is a massive file though, so you might want to clean up that data first, to text only, before uploading it.
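If it helps, this is roughly the kind of pre-cleaning pass Leon is suggesting: walk the source folder, keep only plain-text content, and write trimmed .txt files into a separate folder that you then point the loader at. The folder names are placeholders, and anything that isn't already text (PDFs, Office files) would need its own extractor first.

# Rough sketch: reduce a large, messy folder to plain-text files before loading
# it into the Flowise document store. Folder names are placeholders; formats
# other than .txt/.md would need a proper extractor (e.g. a PDF library).
from pathlib import Path

SOURCE = Path("raw_data")    # placeholder: your original dump
TARGET = Path("clean_text")  # placeholder: the folder you point the loader at
TARGET.mkdir(exist_ok=True)

for path in SOURCE.rglob("*"):
    if path.suffix.lower() not in {".txt", ".md"}:
        continue  # skip anything that isn't already plain text
    text = path.read_text(encoding="utf-8", errors="ignore")
    cleaned = "\n".join(line.strip() for line in text.splitlines() if line.strip())
    out = TARGET / (path.stem + ".txt")
    out.write_text(cleaned, encoding="utf-8")
    print(f"wrote {out} ({len(cleaned)} characters)")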
@youriwatson
@youriwatson 4 ай бұрын
Thanks for the video Leon! When making chatbots for clients, do you encounter any cybersecurity concerns? for example, i could imagine clients worrying about using a third-party script on their website, because it could be vulnerable to XSS attacks, user privacy issues etc. Would like to hear your (or someone else's thoughts on this)
@leonvanzyl
@leonvanzyl 4 ай бұрын
Good question. I've had to create some custom integrations for clients where all sorts of security and Auth systems are in place. I feel that Flowise provides enough flexibility in the platform, especially when using their APIs and the new SDKs, to build whatever you want within the requirements from the client.
@APOLOVAILS
@APOLOVAILS 3 ай бұрын
Looks really nice, but I couldn't install Flowise. I get a "Cannot find module 'colorspace'" error when I try to start it with npx. Hope I will find some help on the web. Thanks anyway!
@tommoves9935
@tommoves9935 2 ай бұрын
Thanks for the video and setup. With OpenAI, I get it to run great. However... when I try to use the Conversational Retrieval QA Chain or Conversation Chain node in combination with ChatOllama, I get a "fetch failed" error (whatever model I use). When I run Ollama (+ model) with a regular LLM Chain node, it functions as well - but obviously no RAG here. Too bad... Has anyone any idea?!? I would appreciate that very much!!!
@jawwadhussain8457
@jawwadhussain8457 3 ай бұрын
I have an issue with MSSQL Server: I am trying to get data from another MSSQL Server machine. Kindly make a small video on this.
@sridhartn83
@sridhartn83 3 ай бұрын
Subscribed and liked :-). This is really helpful and simple, something I really wanted to learn and set up. Thanks a lot. I have set up the chatbot for my PDF repository, but is there a way to open just the chatbot, something like the Ollama Open WebUI chat? What I mean is: once it's set up in Flowise, opening localhost on port 3000 takes me to the Flowise page, I click on the chatbot, the whole chatflow is shown there, and then I have to click the chat icon to start a chat. Is there a way to set up a URL and port I can put in the browser that lands me directly on a web UI showing just the Flowise chatbot I configured?
@sridhartn83
@sridhartn83 3 ай бұрын
Never mind, I found the URL in the chatbot's settings option, along with the other APIs. Great, thanks again. Really appreciate you making this video.
4 ай бұрын
Very good explanations here, thank you! Can I ask what the minimum hardware requirements are to run these locally? Do we need a tensor chip or a CUDA GPU? How much GPU VRAM and PC RAM do we need for this?
@leonvanzyl
@leonvanzyl 4 ай бұрын
This model is tiny (like 2GB). It should run on a potato 😄
3 ай бұрын
@@leonvanzyl 🤣🤣🤣🤣🤣🤣🤣 i adored this answer.. thank youuu 🤘🤘
@Karacho-u7v
@Karacho-u7v 2 ай бұрын
Something's missing from the process: "AppData\Roaming\npm\node_modules\flowise\dist\commands\start.js: Cannot find module 'langchainhub'". How do I get it? Pip doesn't work on Windows.
@M3Services168
@M3Services168 Ай бұрын
I got the same error too.
@M3Services168
@M3Services168 Ай бұрын
I had to separately install langchain and langchainhub using:
npm install -S langchain
npm install -S langchainhub
If there are further issues, I had to run:
npm audit fix
npm audit fix --force
@mishaelo689
@mishaelo689 4 ай бұрын
Is it possible to host ollama on coolify?
@JoaoCarlos-bp8ip
@JoaoCarlos-bp8ip 3 ай бұрын
Thanks Leon, another great tutorial. What is the minimum hardware requirements to run this setup?
@leonvanzyl
@leonvanzyl 3 ай бұрын
I think you can run this on a potato 🤣. Seriously, I managed to get this running on a 5-year-old i3 laptop with an onboard GPU.
@JoaoCarlos-bp8ip
@JoaoCarlos-bp8ip 3 ай бұрын
@@leonvanzyl Maybe the lack of an Nvidia GPU? Mine is an AMD, because so far I haven't gotten it to answer.
@JoaoCarlos-bp8ip
@JoaoCarlos-bp8ip 3 ай бұрын
I've found the issue. It was the way I wrote the Ollama server address; it worked when I used 127.0.0.1:11434 instead of localhost.
@univer6979
@univer6979 3 ай бұрын
How can I set it up on a workstation which doesn't have internet? How to download the models for offline installation?
@leonvanzyl
@leonvanzyl 3 ай бұрын
You'll need an internet connection to download the models from Ollama. There's probably a way to drop a model into the Ollama folder, but I haven't tried that yet
@univer6979
@univer6979 3 ай бұрын
@@leonvanzyl I copied the files to the offline pc and restarted, it worked. Thank you. One more question - is there a way to install flowise offline? File copy isn't working in case of flowise
@tecnopadre
@tecnopadre 4 ай бұрын
Well, for files with 1 KB it's awesome. When you load files with let's say 20 MB, the embedding doesn't work. Maybe it's the Ollama computer. Trying to find out.
@leonvanzyl
@leonvanzyl 4 ай бұрын
Interesting. I'll actually follow up on this. I've personally loaded some massive docs IRL and haven't experienced issues. Did you remember to split that doc?
@tecnopadre
@tecnopadre 4 ай бұрын
@@leonvanzyl Yes I did. I always do, with the Recursive Character Splitter (showing 1-50 of 124 chunks, 76,210 characters). For the upsert I had to use Voyage embeddings and Faiss; I couldn't make it work with Ollama Embeddings because my PC's CPU and memory couldn't handle it, although it has an NPU. This is something Ollama has to work on: how to run Ollama on the NPU instead of the CPU and memory.
@alexandruscinteie7449
@alexandruscinteie7449 3 ай бұрын
@@leonvanzyl I have the same issue as the original commenter. I uploaded a 600-something KB PDF and it can't return any answers. I followed your instructions to the letter; the only difference is that I used the smaller llama3.2:1b model. All I get from the bot is a "Hmm... I'm not sure" answer. Any advice would be greatly appreciated! My intention is to create a document chatbot to summarize and extract data from larger files.
@AlphaMoury
@AlphaMoury 4 ай бұрын
Is it possible to load multiple models in parallel using Ollama and Flowise?
@leonvanzyl
@leonvanzyl 4 ай бұрын
Sounds interesting. What would be the use-case?
@Opeyemi.sanusi
@Opeyemi.sanusi 3 ай бұрын
I can't create a document store. It says created but it's not created
@leonvanzyl
@leonvanzyl 3 ай бұрын
Haven't had that issue myself. Maybe someone in the comments knows. Try to update FW perhaps?
@Opeyemi.sanusi
@Opeyemi.sanusi 3 ай бұрын
@@leonvanzyl Other people on GitHub had the same issue, but I was able to fix it. I switched to Node.js version 18.20.4 and, instead of installing it in my Documents or Downloads folder, I created a new folder directly in my root directory at ~/User/AI/, which helped avoid permission issues. After navigating to the folder in the terminal, I ran npm install -g flowise and that did the trick!
4 ай бұрын
Awesome! How can we publish this on our website? Hosting it?
@leonvanzyl
@leonvanzyl 4 ай бұрын
Hey there! I have plenty of videos on my channel related to deploying Flowise and embedding it into websites. Ollama is meant for running the models locally, but in the cloud you could use Groq to run open source models for free. Again, I have videos on using Groq as well 😊
@karthikeyanvelayutham2817
@karthikeyanvelayutham2817 2 ай бұрын
Hi Leon, I tried your way to create an AI chatbot. The only problem is that I'm not able to load a document, bro. I'm using a Windows machine. Is there a way you can help here?
@leonvanzyl
@leonvanzyl 2 ай бұрын
I also use a Windows machine. What issues are you experiencing? You should check out my Document Store video if you're new to using Document Stores 🤞
@kbtvn
@kbtvn 3 ай бұрын
Can OpenWebUI or Koteamon do a similar thing?
@leonvanzyl
@leonvanzyl 3 ай бұрын
I doubt you can build multi-agent apps, or agentic rag solutions, with those platforms 😊 . HEHE, seriously though - Flowise allows for some way more advanced AI solutions than those interfaces.
@longsyee
@longsyee 4 ай бұрын
Just wondering when will the next update for llama vision that can read images 😆
@leonvanzyl
@leonvanzyl 4 ай бұрын
I'm keeping an eye out 👀. Look forward to putting it to the test.
@inout3394
@inout3394 3 ай бұрын
Epic thx
@leonvanzyl
@leonvanzyl 3 ай бұрын
You're welcome 🤗
@ashypeshy
@ashypeshy 4 ай бұрын
I am on the waiting list to get access.
@leonvanzyl
@leonvanzyl 4 ай бұрын
If you use my affiliate link you will skip the waiting list
@colabwork1910
@colabwork1910 4 ай бұрын
I don't know when I will get the registration link from Flowise. It's been almost 15 days now since I requested account registration. Any suggestions, please?
@leonvanzyl
@leonvanzyl 4 ай бұрын
You can avoid the waiting list if you use my link apparently.
@shayanumar6824
@shayanumar6824 3 ай бұрын
Can we integrate these chat bots in our own websites?
@leonvanzyl
@leonvanzyl 3 ай бұрын
Absolutely! Check out the web embed video on my channel
@shayanumar6824
@shayanumar6824 3 ай бұрын
@@leonvanzyl One more question: it is taking a long time to generate the response. Is that dependent on my PC?
@shayanumar6824
@shayanumar6824 3 ай бұрын
@@leonvanzyl I have 8 GB of RAM but the response still takes a lot of time. Is 8 GB sufficient or not?
@stanTrX
@stanTrX 3 ай бұрын
Thanks, Flowise looks good, but is it free?
@leonvanzyl
@leonvanzyl 3 ай бұрын
Yes, it's open source and free to use
@muraliytm3316
@muraliytm3316 2 ай бұрын
Hello sir, I got the following error while loading the database: "Oh snap! The following error occured when loading this page. Status: 500 Error: documentStoreServices.insertIntoVectorStore - Error: documentStoreServices._insertIntoVectorStoreWorkerThread - Error: Could not import faiss-node. Please install faiss-node as a dependency with, e.g. npm install -S faiss-node. Error: Could not locate the bindings file."
@DanielLaLiberte
@DanielLaLiberte 2 ай бұрын
I got a similar error: Error: documentStoreServices.insertIntoVectorStore - Error: documentStoreServices._insertIntoVectorStoreWorkerThread - Error: ENOENT: no such file or directory, mkdir 'C:\Users\danie\sentience\"C:\Users\danie\OneDrive\Desktop\faiss"' It looks like the path is appended to the current folder where I built Flowise, in the sentience folder. Yes, I just replaced the path, including the double quotes, with 'faiss' and it worked!
@muraliytm3316
@muraliytm3316 2 ай бұрын
@@DanielLaLiberte Yeah, you can fix it by installing Flowise in Docker. Then, if you use Ollama in the workflow, change the base URL to host.docker.internal:11434.
@RameshBaburbabu
@RameshBaburbabu 4 ай бұрын
Awesome, and thanks for uploading. I would like to know how we can implement voice input and receive voice output from my internal RAG system.
@pategoubusinesscenter1307
@pategoubusinesscenter1307 4 ай бұрын
I'm interested in this too.
@leonvanzyl
@leonvanzyl 4 ай бұрын
I don't think this is possible within Flowise itself. Would be awesome though.
@meuchat_oficial
@meuchat_oficial 4 ай бұрын
And what about calling a function?
@leonvanzyl
@leonvanzyl 4 ай бұрын
Yes, these models support function calling 💪
@BirdManPhil
@BirdManPhil 4 ай бұрын
It's not free. You get a 14-day trial, then you pay $35/mo for 10k predictions/mo to use your agent. Am I wrong, or does it clearly state that on the PRICING page?
@leonvanzyl
@leonvanzyl 4 ай бұрын
Nope, it's free and open source. Follow the instructions in the Flowise section of the video to set it up on your machine. What you're referring to is their managed cloud service which is definitely not needed.
@大支爺
@大支爺 3 ай бұрын
sponsored AD
@leonvanzyl
@leonvanzyl 3 ай бұрын
Nope. I did not get paid by Flowise, Meta or Ollama for this video. It wasn't created with their knowledge or permission either. I'm always blown away by comments like this. These videos are a lot of effort to create. Dude, I would happily accept a sponsorship deal if it means funding the channel, allowing me to create more videos. So even if the videos WERE sponsored, decide whether the content is for you and move on.
@nafang-x3u
@nafang-x3u 2 ай бұрын
Dear Leon, I'm getting this error; I'd appreciate it if you could help me fix it. My environment:
C:\gnLAMA>node -v
v20.18.1
and Flowise same as yours:
C:\gnLAMA>npx flowise start
2024-11-24 20:30:46 [INFO]: Starting Flowise...
2024-11-24 20:30:46 [INFO]: 📦 [server]: Data Source is initializing...
2024-11-24 20:30:51 [ERROR]: ❌ [server]: Error during Data Source initialization: SQLITE_ERROR: no such column: ChatFlow.isPublic
QueryFailedError: SQLITE_ERROR: no such column: ChatFlow.isPublic
    at handler (C:\gnLAMA\node_modules\typeorm\driver\sqlite\SqliteQueryRunner.js:88:37)
    at replacement (C:\gnLAMA\node_modules\sqlite3\lib\trace.js:25:27)
    at Statement.errBack (C:\gnLAMA\node_modules\sqlite3\lib\sqlite3.js:15:21)
2024-11-24 20:30:51 [INFO]: ⚡ [server]: Flowise Server is listening at :3000
@pongtomar
@pongtomar 3 ай бұрын
Hi Leon, thank you for sharing this clip. It is very useful for me. I am now encountering an issue after the upsert in Flowise, even though I am using Ubuntu 24.04 and the path /home/myname/vector: "Error: documentStoreServices.insertIntoVectorStore - Error: documentStoreServices._insertIntoVectorStoreWorkerThread - Error: Request to Ollama server failed: 404 Not Found". Please give me some advice. Banatus
@kuochingliew
@kuochingliew 4 ай бұрын
I'm stuck at Faiss, but got this error: Status: 500 Error: documentStoreServices.insertIntoVectorStore - Error: documentStoreServices._insertIntoVectorStoreWorkerThread - Error: Request to Ollama server failed: 404 Not Found
@leonvanzyl
@leonvanzyl 4 ай бұрын
Did you set up Ollama? That message is saying that Ollama is not running.
@TheCcamera
@TheCcamera 4 ай бұрын
@@leonvanzyl Having the same issue (posted above in the comments). Ollama is up to date and running fine otherwise. I also successfully upserted vectors to Faiss with Ollama embeddings for small documents, but with larger documents it fails constantly. As a workaround, is it possible to use different embeddings with Ollama models?
@surbhosale
@surbhosale 3 ай бұрын
Getting the same error even with a small document (I am using Faiss with Ollama embeddings).