If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag
@TraveleroftheSoul7674 · 7 months ago
There is a problem in the code: even when I ingest new files, it still answers from (and makes a mess with) the last file I deleted. How do I handle this? I tried different prompts, but it's not working for me.
@backofloca5326 · 19 days ago
When I install the requirements, it takes forever. Why?
@Ankara_pharao · 8 months ago
May I use Llama 3 with languages other than English?
@sauravmukherjeecom · 8 months ago
Yes, you can. Around 5 or 10 percent of the total training data (I forget the exact figure) is in languages other than English, which is close to the size of Llama 2's entire training set.
@engineerprompt · 8 months ago
Yes, you can, as pointed out. You also want to make sure to use a multilingual embedding model.
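A minimal sketch of what that change might look like, assuming localGPT exposes the embedding model as a constant (the exact file and variable name may differ in your version, and the model shown is just one example of a multilingual embedding model on Hugging Face):

```python
# constants.py (sketch — file and variable names assumed; check your localGPT version)
# Swap the default embedding model for a multilingual one so that non-English
# queries and documents land close together in the vector space.
EMBEDDING_MODEL_NAME = "intfloat/multilingual-e5-large"  # example multilingual model
```

After changing the embedding model, re-run ingestion so the document vectors are rebuilt with the new model.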
@vetonrushiti19 · 6 months ago
Does localGPT work on an Ubuntu machine without an NVIDIA GPU?
@heikg · 22 hours ago
You don't need an NVIDIA GPU; you just need enough VRAM to offload the local LLM into. You can also try the GGUF format, which allows offloading onto RAM if you don't have enough VRAM, but that slows down token generation a lot.
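To make the VRAM/RAM split concrete, here is a rough sketch of how you might pick the `n_gpu_layers` value for llama-cpp-python (the library localGPT uses for GGUF models). The per-layer memory figure is an illustrative assumption, not a real constant; it varies with model size and quantization:

```python
def pick_gpu_layers(total_layers: int, vram_gb: float, gb_per_layer: float = 0.35) -> int:
    """Offload as many transformer layers as fit in VRAM; the rest run from system RAM.

    gb_per_layer is a rough per-layer memory estimate (an assumption) — measure it
    for your own GGUF file, since it depends on model size and quantization level.
    """
    fits = int(vram_gb // gb_per_layer)
    return max(0, min(total_layers, fits))

# Hypothetical usage with llama-cpp-python (the model path is a placeholder):
# from llama_cpp import Llama
# llm = Llama(model_path="llama-3-8b.Q4_K_M.gguf",
#             n_gpu_layers=pick_gpu_layers(total_layers=32, vram_gb=8.0))
```

With `n_gpu_layers=0` everything runs from RAM on the CPU (slow but functional); with a value covering all layers the whole model sits in VRAM.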
@soarthur · 4 months ago
This is very interesting and great work. There is a Mozilla project called llamafile that packages a local LLM into one simple executable file. It can also run on the CPU instead of requiring a GPU, which makes running LLMs on older hardware possible, with great performance improvements. It would be great if localGPT could work with llamafile. Thank you.
@zahidahmad1894 · 8 months ago
I want a specific conversational chatbot built on a very small amount of data. How can I do it?
@kingfunny4821 · 8 months ago
Can I use this offline? And can I save a conversation so that I can refer to it later or when creating a new conversation?
@sauravmukherjeecom · 8 months ago
Yes. For memory, you will have to send the past conversation as context. Try looking into one of the RoPE-trained models with a longer context length.
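One way to "send the past conversation as context" is simply to prepend recent turns to the new prompt, trimming so it stays inside the model's context window. A minimal sketch (the `User:`/`Assistant:` format is illustrative, not any particular model's chat template):

```python
def build_prompt(history, question, max_turns=4):
    """Prepend the most recent (user, assistant) turns to the new question.

    max_turns caps how much history is replayed so the prompt stays inside the
    model's context window; tune it for your model and token budget.
    """
    recent = history[-max_turns:]
    lines = [f"User: {q}\nAssistant: {a}" for q, a in recent]
    lines.append(f"User: {question}\nAssistant:")
    return "\n".join(lines)
```

For long sessions you would typically combine this with summarizing older turns rather than dropping them outright.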
@bobby-and2crows · 8 months ago
Yeah fella
@engineerprompt · 8 months ago
This is for offline use. localGPT has a save_qa flag that will let you save your conversations and load them later.
@colosys · 8 months ago
Could you help me configure localGPT with pgvector embeddings? :$ I'm seriously struggling.
@adityamishra611 · 7 months ago
I am getting this error: "You are trying to offload the whole model to the disk."
@azizjaffrey123 · 8 months ago
Please keep this version of the code around for future use. If you update the code and people can't find the version shown in a video, they skip it, which I personally did with your old LocalGPT video before watching this one. The old code was compatible with my GPU, but I can't clone it since that version no longer exists.
@NovPiseth · 8 months ago
Hello, thanks for the great video; it has helped me a lot. Could you help me add Pandas and PandasAI? That would help me analyze data from Excel and/or CSV files. Thanks.
@pablolbrown · 7 months ago
Any idea when support for Apple Silicon M3 is coming?
@engineerprompt · 7 months ago
It already supports Apple Silicon. Make sure you correctly install the llama.cpp version. Instructions are in the README.
@thegooddoctor6719 · 8 months ago
By far, localGPT is the most robust RAG system out there. Thank you! I'm running it on an i9-13900/RTX 4090 system, though. Are there any plans to make the RAG system a bit faster? It can take up to 5 minutes to come back with a response. Thanks again, very cool.
@engineerprompt · 8 months ago
Yes, I am experimenting with using Ollama for the LLM, and I think that will increase the speed. Working on major updates, stay tuned :)
@laalbujhakkar · 8 months ago
On an M2 MBP with 16 GB, Ollama + Llama 3 8B + AnythingLLM returns in seconds…
@thegooddoctor6719 · 8 months ago
@laalbujhakkar Then again, I'm having it search 300 MB of documents.
@o1ecypher · 8 months ago
An .exe or a GUI for Windows would be nice, something Gradio-based like Stable Diffusion, please.
@zahidahmad1894 · 8 months ago
4 GB GPU, 16 GB RAM. Will Llama 3 work fine?
@FranchGuy · 8 months ago
Hi, is there a way to contact you for a private project?
@engineerprompt · 8 months ago
There is a link in the video description, or email me at engineerprompt at gmail.
@EDRM-my5rd · 7 months ago
I tested the ingest and query model with a PDF edition of Financial Accounting: International Financial Reporting Standards, Eleventh Edition, using the default parameters, and the answers were 80% wrong, particularly with sample journal entries from the context:
> Question: Provide an example of VAT journal entries.
> Answer: The sales revenue is recorded as a debit to the "Sales Revenue" account, which increases the company's assets.
@Player-oz2nk · 8 months ago
Very interested in how to correctly ingest CSV files, plus the supported formats and limitations.
@sauravmukherjeecom · 8 months ago
CSVs are tricky. You can either add the data to a database and then query it, or create text chunks out of it.
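A minimal sketch of the first option (loading the CSV into a database and querying it), using only the standard library; the in-memory SQLite database, table name, and all-TEXT schema are illustrative choices:

```python
import csv
import io
import sqlite3

def csv_to_sqlite(csv_text: str, table: str = "data") -> sqlite3.Connection:
    """Load CSV text into an in-memory SQLite table, so questions about the
    data can be answered with exact SQL instead of fuzzy text-chunk retrieval."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    conn = sqlite3.connect(":memory:")
    cols = ", ".join(f'"{col}" TEXT' for col in header)
    conn.execute(f'CREATE TABLE "{table}" ({cols})')
    placeholders = ", ".join("?" for _ in header)
    conn.executemany(f'INSERT INTO "{table}" VALUES ({placeholders})', body)
    return conn
```

The chunking route, by contrast, just serializes rows (or row groups) to text and embeds them like any other document; it works for small files but loses the ability to aggregate or filter precisely.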
@Player-oz2nk · 8 months ago
@sauravmukherjeecom Assuming that for larger CSVs importing directly into a DB would make more sense, and for smaller files we could chunk.
@shaonsikder556 · 8 months ago
Which screen recorder do you use?
@engineerprompt · 8 months ago
Screen.Studio
@ai-folk-music · 8 months ago
Why use this over something like AnythingLLM?
@engineerprompt · 8 months ago
They solve the same problem. My goal with localGPT is to be a framework for testing different RAG components like Lego blocks.
@engineerprompt · 8 months ago
Want to learn RAG beyond basics? Make sure to sign up here: tally.so/r/3y9bb0
@kunalr_ai · 8 months ago
😂 I don't understand anything... where do I start?
@engineerprompt · 8 months ago
There is a playlist on localGPT on the channel; that will be a good starting point :)