Simple Overview of Text to SQL Using Open-WebUI Pipelines

  6,704 views

Jordan Nanos

1 day ago

Comments: 27
@kyudechama · 26 days ago
Hi Jordan, thanks for the video! You mentioned running vLLM and Ollama on a single GPU. How do you prevent vLLM from claiming all available VRAM? Can memory be allocated dynamically?
@jordannanos · 26 days ago
@@kyudechama Ollama does allocate memory dynamically, but vLLM does not. I just use Docker to restrict which GPU(s) the vLLM runtime has access to.
@jordannanos · 26 days ago
@@kyudechama And, more specifically, the NVIDIA Container Toolkit within Docker.
@kyudechama · 26 days ago
@@jordannanos Thanks for the quick response. Would you mind sharing the docker command you used to deploy?
@jordannanos · 26 days ago
@@kyudechama docker run --gpus '"device=0"' -v /home/jordan/.cache/huggingface:/root/.cache/huggingface -p 8000:8000 vllm/vllm-openai:latest --model meta-llama/Llama-3.3-70B-Instruct
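On the VRAM question above: besides restricting which device vLLM sees, vLLM exposes a `--gpu-memory-utilization` flag to cap how much of the GPU it pre-allocates. A sketch, reusing the command above; the 0.6 fraction is illustrative, and sharing a card between vLLM and Ollama can still run out of memory under load:

```shell
# Pin vLLM to GPU 0 and cap its pre-allocation at ~60% of VRAM,
# leaving headroom for other workloads on the same card.
docker run --gpus '"device=0"' \
  -v /home/jordan/.cache/huggingface:/root/.cache/huggingface \
  -p 8000:8000 \
  vllm/vllm-openai:latest \
  --model meta-llama/Llama-3.3-70B-Instruct \
  --gpu-memory-utilization 0.6
```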
@kyudechama · 25 days ago
@@jordannanos Thank you! 😊
@MustRunTonyo · 2 months ago
I have used the Open WebUI standard pipeline, and it looks like I can't put more than one table in the DB_table field. That's too big a downside! Did you come across a solution?
@martinsmuts2557 · 4 months ago
Hi Jordan, thanks. I am missing the steps where you created the custom "Database RAG Pipeline with Display". On the Pipelines page you filled in the database details and set the text-to-SQL model to Llama 3, but where do you configure the connection between the pipeline valves and "Database RAG Pipeline with Display" so that it shows up as a selectable option?
@jordannanos · 4 months ago
@@martinsmuts2557 It's a single .py file that is uploaded to the pipelines container. I'll cover that in more detail in a future video.
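For readers unfamiliar with that single .py file: an Open WebUI pipeline is a Python module defining a `Pipeline` class, where `valves` hold the user-editable settings shown on the Pipelines page and `pipe()` handles each chat message. The sketch below is a hypothetical minimal skeleton, not the author's actual code; the real framework uses pydantic models for valves (stubbed here with a stdlib dataclass so the sketch is self-contained), and the valve names are assumptions for illustration.

```python
from dataclasses import dataclass


class Pipeline:
    @dataclass
    class Valves:
        # Assumed valve names, for illustration only.
        DB_HOST: str = "localhost"
        DB_NAME: str = "postgres"
        TEXT_TO_SQL_MODEL: str = "llama3"

    def __init__(self):
        # The name is what appears in the model selector in Open WebUI.
        self.name = "Database RAG Pipeline with Display"
        self.valves = self.Valves()

    def pipe(self, user_message: str, model_id: str,
             messages: list, body: dict) -> str:
        # A real pipeline would generate SQL from user_message here
        # (e.g. via llama-index) and run it against the database.
        return f"[{self.valves.TEXT_TO_SQL_MODEL}] would translate: {user_message}"


pipeline = Pipeline()
print(pipeline.pipe("total sales by region", "llama3", [], {}))
```

Once the file is uploaded to the pipelines container, the class's `name` is what makes it appear as a selectable option.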
@KunaalNaik · 4 months ago
@@jordannanos Do create this video soon!
@jordannanos · 4 months ago
@@KunaalNaik @martinsmuts2557 Just posted a video reviewing the code: kzbin.info/www/bejne/n325qnidrayVnZY The repo is here: github.com/JordanNanos/example-pipelines
@swarupdas8043 · 4 months ago
Hi. Could you link us to the source code of the pipeline?
@jordannanos · 4 months ago
The code is here: github.com/JordanNanos/example-pipelines and a video reviewing the code: kzbin.info/www/bejne/n325qnidrayVnZY
@RedCloudServices · 3 months ago
Jordan, thanks. I have a single-GPU RunPod setup. Would you recommend just adding a Dockerized PostgreSQL to the existing pod? And is the Python code (using LangChain) stored in the pod's pipeline settings? This sort of reminds me of AWS serverless Lambda, but simpler.
@jordannanos · 3 months ago
@@RedCloudServices If you'd like to save money, I would run Postgres in Docker on the same VM you've already got; that will also simplify networking. Over time you might want to start and stop those services independently (say, for an upgrade to Docker or the VM), or scale them independently. In that case you might want separate VMs for your DB and your UI, or consider running Kubernetes. And yes, the Python code is all contained within the pipelines container. It uses llama-index, not LangChain (though you could use LangChain too); that was just a choice I made.
@jordannanos · 3 months ago
@@RedCloudServices In other words, you'll need to pip install the packages that the pipeline depends on inside the pipelines container. Watch the other video I linked for more detail on how to do this.
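The install step above can be sketched as follows. This assumes the pipelines container is named `pipelines`, and the package list is illustrative (whatever the pipeline's imports require):

```shell
# Install the pipeline's dependencies inside the running pipelines container.
docker exec -it pipelines pip install llama-index sqlalchemy psycopg2-binary

# Restart so the pipeline module is re-imported with its dependencies available.
docker restart pipelines
```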
@RedCloudServices · 3 months ago
@@jordannanos Yep! Just watched it. I just learned Open WebUI does not allow vision-only models or multimodal LLMs like Gemini. I was hoping to set up a pipeline using a vision model 🤷‍♂️ Also, it's not clear how to edit or set up whatever vector DB it's using.
@peter102 · 4 months ago
Nice video, saw the link from Twitter. My question: is there a way to speed up the results after you ask it a question?
@jordannanos · 4 months ago
Yes, working to improve the LLM response time and the SQL query time.
@renatopaschoalim1209 · 3 months ago
Hey Jordan! Can I change your pipelines to work with SQL Server?
@jordannanos · 3 months ago
@@renatopaschoalim1209 Yes, it's tested with Postgres and MySQL. If you know how to connect to SQL Server from Python, you'll be able to use the pipeline.
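On swapping in SQL Server: if the pipeline connects through a SQLAlchemy-style URI (an assumption; the actual pipeline's config may differ), the change mostly comes down to the dialect/driver prefix. A hypothetical helper, for illustration; note that real `mssql+pyodbc` URIs usually also need an ODBC `driver=` query parameter:

```python
def build_db_uri(dialect: str, user: str, password: str,
                 host: str, port: int, database: str) -> str:
    """Build a SQLAlchemy-style connection URI for a given dialect."""
    drivers = {
        "postgres": "postgresql+psycopg2",
        "mysql": "mysql+pymysql",
        "mssql": "mssql+pyodbc",  # SQL Server via pyodbc
    }
    return f"{drivers[dialect]}://{user}:{password}@{host}:{port}/{database}"


print(build_db_uri("mssql", "sa", "secret", "db.example.com", 1433, "sales"))
# -> mssql+pyodbc://sa:secret@db.example.com:1433/sales
```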
@random_stuf_yt · 4 months ago
hi