Llama-3 🦙 with LocalGPT: Chat with YOUR Documents in Private

16,412 views

Prompt Engineering

1 day ago

Comments: 41
@engineerprompt 7 months ago
If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag
@TraveleroftheSoul7674 7 months ago
There is a problem in the code: even when I ingest new files, it still answers from the last file I deleted and makes a mess. How do I handle this? I tried different prompts, but it's not working for me.
@backofloca5326 19 days ago
When I install the requirements, it takes forever. Why?
@Ankara_pharao 8 months ago
May I use Llama 3 with languages other than English?
@sauravmukherjeecom 8 months ago
Yes, you can. Around 5 or 10 percent of the total training data (I forget the exact figure) is in languages other than English, which is close to the size of Llama 2's entire training set.
@engineerprompt 8 months ago
Yes, you can, as pointed out. You also want to make sure to use a multilingual embedding model.
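For reference, a minimal sketch of embedding non-English text with a multilingual sentence-transformers model; the model name here is just one example choice, not necessarily what localGPT ships with.

```python
# Minimal sketch: embedding non-English text with a multilingual model.
# The model name below is an example choice, not localGPT's default.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

docs = [
    "Wie funktioniert die lokale Dokumentensuche?",      # German
    "¿Cómo funciona la búsqueda local de documentos?",   # Spanish
]
embeddings = model.encode(docs, normalize_embeddings=True)
print(embeddings.shape)  # (2, 384) for this particular model
```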
@vetonrushiti19 6 months ago
Does localGPT work on an Ubuntu machine without an NVIDIA GPU?
@heikg 22 hours ago
You don't need an NVIDIA GPU specifically; you just need enough VRAM to offload the local LLM into. You can also try the GGUF format, which allows offloading onto system RAM if you don't have enough VRAM, but that slows down token generation a lot.
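As an illustration of the GGUF route mentioned above, a minimal sketch with llama-cpp-python that splits layers between GPU and system RAM; the model path and layer count are placeholders for your own setup.

```python
# Minimal sketch: loading a GGUF model with llama-cpp-python and offloading
# only part of it to the GPU; the remaining layers stay in system RAM (slower).
# The model path and n_gpu_layers value are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # hypothetical file
    n_gpu_layers=20,  # layers to offload to whatever VRAM you have; 0 = CPU only
    n_ctx=4096,       # context window
)

out = llm("Summarize what a RAG pipeline does in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```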
@soarthur 4 months ago
This is very interesting and great work. There is a Mozilla project called llamafile that packages a local LLM into one simple executable file. It can also run on the CPU instead of requiring a GPU, which makes running LLMs on older hardware possible, and it has seen great performance improvements. It would be great if LocalGPT could work with llamafile. Thank you.
@zahidahmad1894 8 months ago
I want to build a specific conversational chatbot with a very small amount of data. How can I do that?
@kingfunny4821 8 months ago
Can I use this offline? And can I save a conversation so that I can refer to it later or when creating a new conversation?
@sauravmukherjeecom 8 months ago
Yes. For memory, you will have to send the past conversation as context. Try looking into one of the RoPE-trained models with a longer context length.
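A rough sketch of the "send the past conversation as context" idea described above; the prompt template is an assumption for illustration, not localGPT's actual format.

```python
# Rough sketch: carrying chat memory by stuffing past turns into the prompt.
# The template below is illustrative, not localGPT's actual prompt format.
history = []  # list of (question, answer) tuples from earlier turns

def build_prompt(question: str, context: str) -> str:
    past = "\n".join(f"User: {q}\nAssistant: {a}" for q, a in history)
    return (
        "Use the context to answer the question.\n"
        f"Context:\n{context}\n\n"
        f"Conversation so far:\n{past}\n\n"
        f"User: {question}\nAssistant:"
    )

# After the model answers, remember the turn so the next prompt includes it.
def remember(question: str, answer: str) -> None:
    history.append((question, answer))
```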
@bobby-and2crows 8 months ago
Yeah fella
@engineerprompt 8 months ago
This is designed for offline use. localGPT has a save_qa flag that will let you save your conversations, and you can load them later.
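For illustration, a sketch of what logging Q&A pairs to a CSV could look like; the file name and column layout are assumptions, not necessarily what the save_qa flag writes.

```python
# Sketch: logging question/answer pairs to CSV so a chat can be reviewed later.
# The file name and column layout are assumptions, not localGPT's exact output.
import csv
from datetime import datetime
from pathlib import Path

LOG_FILE = Path("local_chat_history/qa_log.csv")  # hypothetical location

def save_qa(question: str, answer: str) -> None:
    LOG_FILE.parent.mkdir(parents=True, exist_ok=True)
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "question", "answer"])
        writer.writerow([datetime.now().isoformat(), question, answer])
```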
@colosys 8 months ago
Could you help me configure localGPT with pgvector embeddings? :$ I'm seriously struggling
@adityamishra611 7 months ago
I am getting this error: You are trying to offload the whole model to the disk
@azizjaffrey123 8 months ago
Please keep this version of the code available for future use. If you update the code and people can't find the version shown in a video, they skip it, which is what I did on your old LocalGPT video before starting this one. The old code was compatible with my GPU, but I can't clone it since that version no longer exists.
@NovPiseth 8 months ago
Hello, thanks for the great video; it helped me a lot. Could you help me add Pandas and PandasAI? That would help with analyzing data from Excel and/or CSV files. Thanks.
@pablolbrown 7 months ago
Any idea when support for Apple Silicon M3 is coming?
@engineerprompt 7 months ago
It already supports Apple Silicon. Make sure you correctly install the llama.cpp version; instructions are in the README.
@thegooddoctor6719 8 months ago
By far, LocalGPT is the most robust RAG system out there, thank you. But I'm running it on an i9-13900/4090 GPU system; are there any plans to make the RAG system a bit faster? It can take up to 5 minutes to come back with a response. Thanks again, very cool.
@engineerprompt 8 months ago
Yes, I am experimenting with using Ollama for the LLM, and I think that will increase the speed. Working on major updates, stay tuned :)
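For anyone curious what serving the LLM through Ollama could look like, a minimal sketch against Ollama's local REST API; the model tag is a placeholder and this is an illustration, not localGPT's actual Ollama integration.

```python
# Minimal sketch: generating a completion from a locally running Ollama server.
# Assumes `ollama serve` is running and the llama3 model has already been pulled;
# this is an illustration, not localGPT's actual Ollama integration.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "In one sentence, why can a local LLM speed up a RAG pipeline?",
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```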
@laalbujhakkar 8 months ago
On an M2 MBP with 16 GB, Ollama + Llama 3 8B + AnythingLLM returns answers in seconds…
@thegooddoctor6719 8 months ago
@laalbujhakkar Then again, I'm having it search 300 MB of documents.
@o1ecypher 8 months ago
A .exe or a GUI for Windows would be nice, something Gradio-based like Stable Diffusion, please.
@zahidahmad1894 8 months ago
4 GB GPU, 16 GB RAM. Will Llama 3 work fine?
@FranchGuy 8 months ago
Hi, is there a way to contact you for a private project?
@engineerprompt 8 months ago
There is a link in the video description, or email me at engineerprompt at gmail.
@EDRM-my5rd 7 months ago
I tested ingestion and querying with the PDF edition of Financial Accounting: International Financial Reporting Standards, Eleventh Edition, using default parameters, and the answers were 80% wrong, particularly for sample journal entries from the context.
Question: provide an example of VAT journal entries
Answer: The sales revenue is recorded as a debit to the "Sales Revenue" account, which increases the company's assets.
@Player-oz2nk 8 months ago
Very interested in how to correctly ingest CSV files, and in the supported formats and limitations.
@sauravmukherjeecom 8 months ago
CSVs are tricky. You can either add the data to a database and query it, or create text chunks out of it.
@Player-oz2nk 8 months ago
@sauravmukherjeecom Assuming that for larger CSVs importing directly into a DB would make more sense, and for smaller files we could chunk them.
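A small sketch of the "create text chunks out of it" option from this thread: each CSV row is flattened into "column: value" text and rows are grouped into chunks. The file name and chunk size are arbitrary example choices.

```python
# Sketch: turning CSV rows into text chunks for ingestion.
# Each row becomes "column: value" pairs; rows are grouped into fixed-size chunks.
# The file name and chunk size are arbitrary example choices.
import csv

def csv_to_chunks(path: str, rows_per_chunk: int = 20) -> list[str]:
    chunks, current = [], []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            current.append("; ".join(f"{col}: {val}" for col, val in row.items()))
            if len(current) == rows_per_chunk:
                chunks.append("\n".join(current))
                current = []
    if current:
        chunks.append("\n".join(current))
    return chunks

# Example: chunks = csv_to_chunks("sales.csv"); then embed each chunk as a document.
```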
@shaonsikder556 8 months ago
Which screen recorder do you use?
@engineerprompt 8 months ago
Screen.Studio
@ai-folk-music 8 months ago
Why use this over something like AnythingLLM?
@engineerprompt 8 months ago
They solve the same problem. My goal with localGPT is to provide a framework for testing different RAG components like Lego blocks.
@engineerprompt 8 months ago
Want to learn RAG beyond the basics? Make sure to sign up here: tally.so/r/3y9bb0
@kunalr_ai 8 months ago
😂 I'm not understanding anything.. where do I start?
@engineerprompt 8 months ago
There is a playlist on localGPT on the channel; that will be a good starting point :)