Local RAG Using Llama 3, Ollama, and PostgreSQL

9,619 views

Timescale


1 day ago

Comments: 11
@BertrandDunogier · 3 months ago
Thank you, clear and straightforward.
@awakenwithoutcoffee · 3 months ago
Great, looking forward to learning how to utilize TimescaleDB for our AI start-up!
@TimescaleDB · 3 months ago
Awesome! That's great to hear. Please let us know how it goes, or if you have any questions. 😁
@awakenwithoutcoffee · 2 months ago
@TimescaleDB for sure!
@renobodyrenobody · 2 months ago
Sir, thanks a lot for your video. I did not follow your presentation exactly, but it is great and saves a lot of time. Thanks, thanks, thanks!
@NicolasEmbleton · 2 months ago
Very cool. Thanks. Nice summary.
@matarye9745 · 3 months ago
Nice tutorial!
@TimescaleDB · 3 months ago
Thanks!!
@nanomartin · 3 months ago
Quite useful tutorial! Thanks for bringing it. A few questions come to mind: 1. How could I get rid of the proprietary Docker image? I bet it is possible to take our existing PG instance and just drop in the necessary extensions, and it should work as well, but I just want to confirm. 2. It looks easy to delegate to the PG extension to communicate with Ollama and get the embeddings; however, I see too many round trips in that approach. For a programmatic system that would have to do tens of thousands of runs a day, how does this perform? Is there a more "straight" way to pull embeddings from Ollama and query PG?
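On the second question, one lower-round-trip pattern is to call Ollama's HTTP embeddings endpoint directly from application code and batch the inserts into PostgreSQL yourself, rather than having the database extension make one call per row. A minimal sketch, assuming a local Ollama server, a pgvector-enabled database, and an illustrative `documents(content, embedding)` table; the model name, DSN, and table are placeholders, not anything from the video:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # Ollama's default local endpoint


def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    """Fetch one embedding from a locally running Ollama server."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": model, "prompt": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]


def to_pgvector(vec: list[float]) -> str:
    """Format a list of floats as a pgvector literal, e.g. [0.1,0.2,0.3]."""
    return "[" + ",".join(f"{x:g}" for x in vec) + "]"


def ingest(texts: list[str], dsn: str = "dbname=rag") -> None:
    """Embed texts client-side, then write them to Postgres in one batch."""
    import psycopg2  # requires a PostgreSQL instance with pgvector installed

    rows = [(t, to_pgvector(embed(t))) for t in texts]
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO documents (content, embedding) VALUES (%s, %s::vector)",
            rows,
        )
```

Compared to letting the extension call out per row, this keeps one HTTP request per document to Ollama and a single batched write to Postgres; at tens of thousands of runs a day you would likely also want to batch or parallelize the embedding requests themselves.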
@fakkmorradi · 3 days ago
sweet