Private Chat with your Documents with Ollama and PrivateGPT | Use Case | Easy Set up

67,654 views

Prompt Engineer

A day ago

Important: I forgot to mention this in the video. Please delete the db and cache folders before putting in your own documents. Otherwise it will answer from my sample document.
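For reference, a minimal reset from the repo root might look like the following (the folder names are the defaults the scripts create in the video, so adjust if yours differ):
rm -rf db __pycache__          # the cache folder mentioned above may also exist; delete it too
cp /path/to/your/document.pdf source_documents/   # placeholder path
python ingest.py
python privateGPT.py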
In this video, we are going to chat with your documents using privateGPT powered by Local LLMs from Ollama.
PrivateGPT: Interact with your documents using the power of GPT, 100% privately, no data leaks
Learn step by step how to set this up on your own system.
I have set up a public GitHub repository for all the Ollama open-source models.
You can clone the repo and get started with any particular folder for testing out any feature of Ollama.
The code can be accessed from this link:
www.patreon.com/posts/private...
Install Ollama: ollama.ai/
Let’s do this!
Join the AI Revolution!
#ai #privateGPT #ollama #webui #github #python #llm #largelanguagemodels
CHANNEL LINKS:
☕ Buy me a coffee: ko-fi.com/promptengineer
🧛‍♂️ Join my Patreon: / promptengineer975
❤️ Subscribe: / @promptengineer48
💀 GitHub Profile: github.com/PromptEngineer48
🔖 Twitter Profile: / prompt48
🤠Join this channel to get access to perks:
/ @promptengineer48
TIME STAMPS:
0:00 Intro
1:28 Install Ollama
2:23 Test Ollama
3:30 My GitHub Repos
5:21 Starting VS Code Editor
7:16 Setting up Virtual Environment
8:42 Install the requirements
10:18 Files for Chatting
10:58 Ingest the files
11:54 Run privateGPT
14:21 Summarize
15:24 Conclusion
15:40 Join Me
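For quick reference, the terminal steps the timestamps above walk through are roughly the following (a condensed sketch; the environment name, Python version, and model follow the video's defaults, so check the repo if yours differ):
conda create -n private1 python=3.11
conda activate private1
pip install -r requirements.txt
ollama pull mistral
python ingest.py        # embeds everything in source_documents/
python privateGPT.py    # starts the chat loop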
🎁Subscribe to my channel: / @promptengineer48
If you have any questions, comments or suggestions, feel free to comment below.
🔔 Don't forget to hit the bell icon to stay updated on our latest innovations and exciting developments in the world of AI!

Comments: 412
@justhuman9551
@justhuman9551 6 ай бұрын
Just found your channel and subbed after watching this video. Very good quality video! Keep up the great content creation! I am impressed with your motivation to answer the questions from your comment section. Not every YouTube channel cares about answering subscriber questions and making content around what people comment, so very good job!
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Thank you. It's my pleasure to be talking with my viewers..
@Paul-gg3cr
@Paul-gg3cr 6 ай бұрын
I've been looking for this for months. Thank you a lot, dude! Subscribed :)
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Thanks for the sub!
@mjackstewart
@mjackstewart 2 ай бұрын
Dude … This is AMAZING! I was just looking for pushes in the right direction, but this actually does exactly what I was attempting to do! Thank you!
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
Glad I could help!
@enceladus96
@enceladus96 5 ай бұрын
You've saved me from going down my RAG rabbit hole. The code is extremely detailed, clean, and easy to understand too. God bless.
@PromptEngineer48
@PromptEngineer48 5 ай бұрын
Thanks for tuning in.
@batzizou
@batzizou 12 күн бұрын
Good work! I found your video well done!
@AlexanderDeplov-sd5pg
@AlexanderDeplov-sd5pg 3 ай бұрын
Great tutorial, one of the best on the web!! Thanks for your time and effort! Upvoted 👍
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
Thanks so much.
@charlesbiggs7735
@charlesbiggs7735 Ай бұрын
Awesome effort!! Your code worked right off the bat. Thanks for saving me a LOT of time.
@PromptEngineer48
@PromptEngineer48 Ай бұрын
Thanks
@davidpe76
@davidpe76 6 ай бұрын
Great video, took me a few tries getting Ubuntu configured (using wsl under windows) and updated before it would build the scripts, but I am very impressed. Thanks for all the effort you put into these videos 😁
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
🤗 welcome. Trying to bring the best.
@waynesbigw2305
@waynesbigw2305 3 ай бұрын
What is "wsl under windows"? I'm running Linux. No Windows here. His instructions in the video don't work for my system at all.
@roaming934
@roaming934 3 ай бұрын
Are you using the code from privateGPT's primordial version? What great work! By the way, they now officially support integration with Ollama. You probably wanna make a video about how to set that up.
@BeyondTheSide
@BeyondTheSide Ай бұрын
I would like to say that this works just as well even today. Many thanks to the Prompt Engineer; you have made my life and others' a lot easier.
@PromptEngineer48
@PromptEngineer48 Ай бұрын
Thanks for watching
@user-wr4yl7tx3w
@user-wr4yl7tx3w 6 ай бұрын
This is really high-quality content, especially given the effort made in editing. The subtitles are a nice addition also.
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Thank You for noticing the efforts... 😍 -- With Love (Prompt Engineer) However, more than the subtitles, I want the main content to be more engaging.
@salvatorespampinato3788
@salvatorespampinato3788 3 ай бұрын
Thank you very much, great video.
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
You are welcome!
@jorgitozor
@jorgitozor Ай бұрын
Nice video, very informative! What do you use to generate subtitles? thanks
@PromptEngineer48
@PromptEngineer48 Ай бұрын
Thanks. CapCut.
@frankbradford2869
@frankbradford2869 3 ай бұрын
Hi, I did what you said with some hesitation, but it worked as you said. This is one good program to use to get a good look at a document's content and meaning. Thanks. BTW, is there a way to let the program give a full response without telling it to continue its explanation?
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
I think there should be a verbose flag, which you can set to False.
@rickgeyer9685
@rickgeyer9685 4 ай бұрын
Really great video! This has saved me a ton of time as I am building an AI model to ingest thousands of documents and query them. Very straightforward. What made you choose mistral as the model to use? What is the largest model that Ollama has? I liked and subscribed! God bless you!
@PromptEngineer48
@PromptEngineer48 4 ай бұрын
Glad it helped! I chose Mistral because it's small and replies accurately for my use cases. Umm, the largest model I think is ollama.ai/library/megadolphin
@salahdinwaji7498
@salahdinwaji7498 Ай бұрын
Thank you for the amazing video! A quick question: are these local LLMs safe to use with private data? I want to use it for work but I don't know if the info will be shared with Meta.
@PromptEngineer48
@PromptEngineer48 Ай бұрын
You can switch off the internet. Safe 🔐 or not, we cannot guarantee; it may happen that once you connect to the internet the data gets transferred.
@salahdinwaji7498
@salahdinwaji7498 Ай бұрын
@@PromptEngineer48 Okay, so the advantage of running an LLM locally is just to save some $$ on API calls?
@gabriel-gr
@gabriel-gr 3 ай бұрын
This has been very instructive, thanks! Is there an LLM that's better than Mistral at working with very technical documents, i.e. lengthy API implementation documents? I set up my environment exactly as instructed, got my docs indexed and could get some answers on them. But things get murky when I get very specific, with incorrect or incomplete answers.
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
Try with Mixtral 8x7B.
@weihe2047
@weihe2047 2 ай бұрын
Thank you very much; all the videos you have made are great! However, when I was building the local LLM, I found that there are multiple frameworks, such as privateGPT, localGPT, LangChain, etc. Similarly, I found that there are very many choices for the LLM as well as the vector database (e.g., Hugging Face vs. Ollama), which gave me a big headache. I was wondering if you could make a video that explains your recommendations for each part of the process of building a RAG-based personal local document chat LLM?
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
That would be a very good video but quickly less relevant, as every day we have so many updates. But this is something I can create. Thanks for the idea. Definitely will work on that.
@weihe2047
@weihe2047 2 ай бұрын
@@PromptEngineer48 Thank you for your response! It's true that, as you say, the various programs are moving so fast. Since I'm hoping to be able to build something myself via langchain, I'm starting to work based on your github project, and some of the other out-of-the-box projects (e.g. open-webui, privateGPT, etc.) are just too heavy for me to get into and modify.
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
Cool.
@sergeaudenaert
@sergeaudenaert 2 ай бұрын
Thank you! Very clear
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
Welcome
@samshosho
@samshosho 3 ай бұрын
Thanks for the great effort. I just have a question. When a file is ingested and I then want to ingest a different file, should I delete the db folder first, so as not to mix older ingested files with the current one I want to ingest? Also, after ingesting one of my CSV files, I asked a few questions. The answers I was getting were far off and were actually from another source, which I didn't provide; it was from a PDF book about getting rich or something. And I only ingested a CSV file with numbers!
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
Yes, you need to delete the db folder first.
@drkvaladao776
@drkvaladao776 27 күн бұрын
Hi, while setting up the virtual environment I'm getting an error at 7:39. What programs do I need? I have installed Miniconda and it's still not running the line. Thanks.
@PromptEngineer48
@PromptEngineer48 23 күн бұрын
Then try with Anaconda.
@donniealfonso7100
@donniealfonso7100 4 ай бұрын
Followed your instructions here and installed it on a Raspberry Pi 4. It works, but of course painfully slowly, and the chip approaches 145 degrees F, which slows things down as well. But it works, and I may try it on a Pi 5. I was using a PDF manual for a Viking drill press as the document. I have to try something with just text.
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
Hmm. I see.
@user-nd2zy9so8k
@user-nd2zy9so8k 3 ай бұрын
I ran "pip install tqdm", but the system still tells me: ModuleNotFoundError: No module named 'tqdm'
@natehedgeman
@natehedgeman Ай бұрын
Make sure you have installed all the packages listed in the requirements.txt file. Rewind the video; he explains how to do it all at once with pip install -r requirements.txt. If you have done that, make sure you are working in the correct environment, the same environment you installed the Python packages in. He explains that as well.
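If the module still is not found, one sanity check (assuming the environment from the video is named private1) is to activate it and reinstall there:
conda activate private1
pip install -r requirements.txt
python -c "import tqdm, langchain"   # should exit silently if both resolve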
@fabriziocasula
@fabriziocasula 6 ай бұрын
Thank you! Sorry, but I don't see the old chat interface :-) I have 2 questions: how can I remove an ingested document that I don't need? And is it possible to chat via the Docker interface, or is it only for the terminal?
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
1. You can remove the file, then delete the db folder and the __pycache__ folder, and then run python ingest.py and python privateGPT.py again. 2. A web UI is not integrated here right now, but that is on my pipeline as well. I am working on that.
@jfranz8491
@jfranz8491 6 ай бұрын
Inspirational video, great work, subscribed. Could you look into integrating a web gui for requests and responses? I'm looking for a local LLM + Documents + URL for tailored and personalized chatbot knowledge store. I use perplexity right now but want to curate my own documents for tailored content. Keep up the amazing videos!
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
For this you can use the cat kzbin.info/www/bejne/f6Sqh4aQe8aCpqc
@RafikKouissar
@RafikKouissar 6 ай бұрын
Outstanding work you did bro. I was able to get mine running following your instructions with no issues. Does this support llama2 as well? My other question, what are the steps to run privategpt ui? Thanks a lot, this is brilliant!
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Yes, Llama 2 as well. The PrivateGPT UI I shall have to look into.
@Paulo-ut1li
@Paulo-ut1li 6 ай бұрын
Thanks, that's a great video! I've been testing privateGPT for some time and I would love to know if you're experiencing hallucinations from the chat. And yes, Mistral seems to be a good model, but Zephyr and Dolphin seem to give better answers with a little less performance, depending on the context. Still, I couldn't get rid of some hallucinations; I would say the reliability of the information is 45-65%.
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Yes, hallucinations are there. We need a better model in the future.
@pankajagarwal1980
@pankajagarwal1980 Ай бұрын
Well explained. Can you suggest how we can pass in a OneNote file if we want to?
@gabrielalejandroverapinto1974
@gabrielalejandroverapinto1974 2 ай бұрын
This is great. Can you add or show how we could add GPU integration, even better if it is through a GUI, with privateGPT 2.0?
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
Alright
@chjpiu
@chjpiu 2 ай бұрын
Excellent job. Thank you so much for sharing. Could you please let me know if it can also run on Windows, as Ollama was recently released for Windows a few months ago?
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
Yes. Sure
@r0ntuber
@r0ntuber Ай бұрын
Thanks for doing this. It seems like when you clone the repository, you need to delete everything in the db folder or it will mess up the results for the information you are trying to input yourself.
@heinzpeterklein9383
@heinzpeterklein9383 6 ай бұрын
Awesome idea. Now use Streamlit or Flask as a GUI and the solution is perfect. Thanks for the inspiration. Questions: 1. Which OS are you using? 2. Which Python version? 3. Do you use the CPU or the GPU? Would an M3 with 128 GB also be sufficient for quick training / fine-tuning of Hugging Face models up to 20B? Thanks in advance for the answer. Hp
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
1. macOS, pretty basic: MacBook Air M2, 8 GB. 2. Python 3.9+. 3. M3 with 128 GB: hmm, when you say fine-tuning it depends on the model, but here is a rough calculation. If you have a 20B-parameter model in 32 bits, then you need 20 x 32 / 8 = 80 GB of GPU memory, so your system should be able to do the fine-tuning. Otherwise go for 4-bit quantization; then the requirement drops by 8 times, requiring only 80 / 8 = 10 GB of GPU memory.
@iclonethefirst
@iclonethefirst 4 ай бұрын
Thank you a lot for your effort to create an easier way to get up and running. Could you make a video explaining what all of the parts of "requirements.txt" are needed for? I would like to understand how everything works in detail
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
Sure thing!
@mega225
@mega225 3 ай бұрын
Great video. Is there a way to know the limitations on the number of ingested documents or their size? or the maximum number of chunks that the model can deal with? Thanks
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
There seems to be a limitation of 166 embeddings per batch.
@joseffb7821
@joseffb7821 4 ай бұрын
Can you still use the ollama API to search your documents? or does it need to be via the console?
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
API can do that
@RealBassPhat
@RealBassPhat Ай бұрын
Very interesting, easy to follow. I tested this with a music instrument manual, and it wasn't giving accurate answers at all. Any ideas on how to improve this? It's unusable for this type of document. Makes me wonder how accurate it would be with any content. Thank you!
@PromptEngineer48
@PromptEngineer48 Ай бұрын
That was pretty old stuff. Please watch the recent videos on my channel.
@user-xd5gd4pc9h
@user-xd5gd4pc9h 6 ай бұрын
It is another great video, thank you! And I need your further help. I use Ollama in a Docker container; I think it is easier to organize, and I use WSL2 Ubuntu. I think it is related to LangChain and Ollama, and I cannot find the info in the LangChain docs. Do you want to try that? Though you are using a Mac. Thank you!
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
yes. I shall start working on that.
@frankbradford2869
@frankbradford2869 3 ай бұрын
This works very well, but it has issues ingesting docx, pptx and ods files without an extra pip install.
@BetterEveryDay947
@BetterEveryDay947 Ай бұрын
Can you tell how to use other models like llama3, instead of mistral?
@omsen2805
@omsen2805 Ай бұрын
Great video! Can I connect LangChain with it? Or is it included? I'm a newbie at this :D
@PromptEngineer48
@PromptEngineer48 Ай бұрын
Yes, you can.
@mohamedsabirudeen9249
@mohamedsabirudeen9249 2 ай бұрын
If I run ollama pull mistral I get an error that says "could not connect to ollama app, is it running?" Please give me a solution for it.
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
Simple: before typing ollama pull mistral, run ollama serve.
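Concretely, a sequence like this usually works, with the server started first in another terminal or in the background:
ollama serve &
ollama pull mistral
ollama run mistral "Say hello"   # quick check that the model responds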
@mohamedsabirudeen9249
@mohamedsabirudeen9249 2 ай бұрын
Actually I'm using it on Linux, but even after I gave the command ollama serve, I'm getting "no GPU detected". Please give a solution for this! @@PromptEngineer48
@RedCloudServices
@RedCloudServices 6 ай бұрын
Thank you ❤ Can the LLM be given a system prompt to only retrieve answers related to the source PDF or CSV and nothing else? Also, which UI can be used instead of the command prompt with this setup?
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
By default it will retrieve answers related to the source only. you should see the source locations as well.
@RedCloudServices
@RedCloudServices 6 ай бұрын
@@PromptEngineer48 I revised my comment to ask about the UI
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
@@RedCloudServices UI? Streamlit, Flask, Gradio, etc.
@dreadmadseen
@dreadmadseen 23 күн бұрын
Success!! Thank you!!
@PromptEngineer48
@PromptEngineer48 23 күн бұрын
Welcome
@MatiasFedericoWolters
@MatiasFedericoWolters Ай бұрын
Hi, a quick question: how can I change the model, for example to llama3 8b-instruct-q6_K? Please.
@PromptEngineer48
@PromptEngineer48 Ай бұрын
Go to line 12 of the privateGPT.py file and change mistral to whatever model your heart desires.
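As a rough sketch (the exact variable names in privateGPT.py may differ, so treat the commented line below as an assumption and check the file):
ollama pull llama3:8b-instruct-q6_K
# then, on the line in privateGPT.py where the LLM is created, swap the model string, e.g.
#   llm = Ollama(model="llama3:8b-instruct-q6_K", callbacks=callbacks)
# instead of model="mistral", and re-run python privateGPT.py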
@ganeshnayak9459
@ganeshnayak9459 15 күн бұрын
Great tutorial! Why is it saying "loading 235 new documents" when there is only one in the source_documents folder? I had 2 in mine and it said 8; wondering why.
@PromptEngineer48
@PromptEngineer48 15 күн бұрын
It's because of the chunking.. I had put only one document but it was chunked into many pieces.
@nd21209
@nd21209 2 ай бұрын
Thanks for a great video! While ingesting a Word document with docx extension, I get an error No module named 'docx'. Is there something I am missing?
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
Try installing the module. Paste the error into ChatGPT.
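In this case the missing docx module usually comes from the python-docx package (the import name and the package name differ), so running this inside the same environment may be enough:
pip install python-docx
# some loaders extract text via docx2txt instead; if the error persists, try
pip install docx2txt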
@petergab734
@petergab734 2 ай бұрын
I got an error when I typed ollama run mistral: a message saying ollama command not found. I get this from within the terminal of Visual Studio Code, but I can run Ollama from the Mac's terminal window no problem. Did I forget to do something? Thank you!!!
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
Try changing the type of terminal that you are using, I mean zsh or Command Prompt.
@rgm4646
@rgm4646 5 ай бұрын
great video!
@PromptEngineer48
@PromptEngineer48 5 ай бұрын
Thanks
@frankbradford2869
@frankbradford2869 3 ай бұрын
Hi, first I want to say this is an amazing program and I feel as if I am starting off in this field on the shoulders of giants. I just had the program ingest 16 PDF files and I got this error: "cannot submit more than 166 embeddings at once. Please submit your embeddings in batches of size 166 or less." Is there a limit to the files I can give the program at once? I look forward to your answer.
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
We need to hunt for the code where 166 is mentioned, or perhaps that's a limitation of the model's allowable length itself. You can try with other models, please.
@BetterThanTV888
@BetterThanTV888 6 ай бұрын
How would you do this if you have Ollama in Docker? Or even a cloud GPU like RunPod? Or Linode? Seems like a good video for the future, as you explain and teach better than the majority of creators.
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Okay. Will create a video soon on the specific use case that you have mentioned.. I mean by hosting on docker and chatting with docs. Thanks for the comment
@AlperYilmaz1
@AlperYilmaz1 6 ай бұрын
same here, I'm using dockerized ollama.. would be great to have privategpt with dockerized ollama..
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Okay. Got the requirements Now it's my turn to create that. 😄
@tier1recon836
@tier1recon836 6 ай бұрын
Would like to see Ollama with an OpenAI-style assistant or similar that can use a file and have assistants act on the file, such as executing code or cleaning up data, etc.
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Got it. Let me do some research.
@swapnil0402
@swapnil0402 3 ай бұрын
Hi, thanks for the tutorial. I am able to run the model using Ollama on Windows, but after everything the project runs and asks for "Enter a query", and if I type a question it gets stuck there. In the Problems panel I am getting issues such as: Import "langchain.chains" could not be resolved (Pylance), Import "langchain.embeddings" could not be resolved (Pylance), Import "langchain.callbacks.streaming_stdout" could not be resolved (Pylance), etc. Can you please help me understand and resolve this issue? Thanks.
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
There are bugs on Windows. You can try with Linux on Windows (WSL).
@renierdelacruz4652
@renierdelacruz4652 6 ай бұрын
Great video, thanks for sharing it. Can you create video content about making Cheshire Cat work on Ollama? It must be interesting.
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Thanks for the idea! On it.
@Elrevisor2k
@Elrevisor2k 6 ай бұрын
Where is the knowledge base stored? Does it keep track of all PDFs already processed?
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
There are two folders created automatically, named db and cache.
@user-xl5vc3mu9q
@user-xl5vc3mu9q 4 ай бұрын
Hello, I keep getting this error: ERROR: Could not find a version that satisfies the requirement onnxruntime>=1.14.1 (from chromadb) (from versions: none); ERROR: No matching distribution found for onnxruntime>=1.14.1
@JDSchuitemaker
@JDSchuitemaker 4 ай бұрын
I had the error for ChromaDB too. If you Google for it you will probably find an answer. For ChromaDB this solved it for me:
sudo apt install python3-dev
sudo apt-get install build-essential -y
@JohnDo-ntchaknow
@JohnDo-ntchaknow 15 күн бұрын
If my company has a pre-existing Data Dictionary, is there a way to allow Ollama to integrate it so that it better understands the data I am working with?
@PromptEngineer48
@PromptEngineer48 15 күн бұрын
yes. that could be included technically.
@user-wr4yl7tx3w
@user-wr4yl7tx3w 6 ай бұрын
do you know how to stop Ollama afterwards? it continues to run in the background even after trying to end the process multiple times.
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
An icon appears at the top (in the menu bar). Click it and quit.
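On Linux, where there is no menu bar icon, one of these usually works (assuming the standard install script, which sets Ollama up as a systemd service):
sudo systemctl stop ollama    # if it runs as a service
pkill ollama                  # otherwise stop the background process directly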
@7ali1124
@7ali1124 3 ай бұрын
Great video! It is very cool. I noticed that you implemented Ollama on your Mac, but can you update this to install it on a server or in the cloud so it can provide this service to your friends? That would be helpful.
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
Yes you can! I will try to bring in a video
@user-xq9no3so4f
@user-xq9no3so4f 5 ай бұрын
Great video!! However, if I follow the instructions my results are different. I created the source_documents folder and put in another PDF file. When I then execute "python3 ingest.py" the ingestion seems to work fine. But when I afterwards execute privateGPT.py and start to interact with the LLM, it still responds from the "Think and Grow Rich" book.
@PromptEngineer48
@PromptEngineer48 5 ай бұрын
Delete the db and cache folder
@arvindelayappan3266
@arvindelayappan3266 4 ай бұрын
@@PromptEngineer48 Can we not append the PDF files? Do we have to keep removing them? When a new file is added and ingested, it should add the document into its store and be able to respond from both documents, shouldn't it?
@kamcarlson1413
@kamcarlson1413 4 ай бұрын
@@arvindelayappan3266 did you ever figure this out?
@williamwong8424
@williamwong8424 6 ай бұрын
Great video. Now can you do it in Streamlit so there's a user interface to chat, and how can we host it online? Like Render?
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Okay. Streamlit and render integration got it. Will do that.
@nufh
@nufh 6 ай бұрын
About the context window: I have noticed that it cannot exceed 2k tokens even though Mistral can support up to 8k. From what I have tested so far, it's like the bot identifies itself as GPT-3; is it because of the openai library?
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Yes. That is because we start with everything compatible with the OpenAI API, then we shift to open-source APIs. We could instead work with open-source APIs from the start. 😁
@nufh
@nufh 6 ай бұрын
@@PromptEngineer48 So with Ollama, the context window will not be limited to 2k only, right? It will scale based on the model's capability.
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Yes.
@nufh
@nufh 6 ай бұрын
@@PromptEngineer48 I wish I could test it right now. Windows users need to wait.
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
We can try LocalAI. I will come up with a video
@hyde8118
@hyde8118 3 ай бұрын
Interesting idea for an integration. But I think that since you have no bugs in this process, it should be automated. Also, Ollama is nothing more than a click-to-run tool to download and deploy different sorts of AI models, so in fact you don't really need it to run Mistral with PrivateGPT. Or am I wrong?
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
Now with more interesting integrations, we can scrap privateGPT itself and use Ollama to code up our projects natively. Yes, you are right.
@SorobanWorld
@SorobanWorld 6 ай бұрын
Amazing!
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Thanks!
@faridullahkhan1
@faridullahkhan1 6 ай бұрын
Is it possible to have this setup on a Windows machine? Or can I use a vLLM server to host the model with the OpenAI API on Windows?
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Not possible for Windows. For Windows you can try LM Studio.
@arvindelayappan3266
@arvindelayappan3266 4 ай бұрын
What is the system configuration that you are using, and what is the response time for a query?
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
Just a basic 8GB M1
@cuoi123
@cuoi123 4 ай бұрын
Hi, Ollama is running and I input a query, but I receive no answer; the terminal is blank. What should I do?
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
Try with different LLMs. a smaller version please
@kashifrit
@kashifrit 2 ай бұрын
Can privateGPT be run with a web-type interface similar to your previous video?
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
Is there anything which is not possible. 😅😀
@frankbradford2869
@frankbradford2869 3 ай бұрын
How do I remove the embedded Think and Grow Rich PDF file? I ask because when I add another file, the query goes back to this PDF and quotes it.
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
delete the db and cache folders
@frankbradford2869
@frankbradford2869 3 ай бұрын
Thank you, forgive me for sounding slow. You do mean remove or delete the chroma.sqlite3 file?@@PromptEngineer48
@frankbradford2869
@frankbradford2869 3 ай бұрын
If I delete the cache folder named db, will this affect how the program ingests the files I supply, or will it create a new db and then ingest the new files?
@spiazzigiovanni7330
@spiazzigiovanni7330 5 ай бұрын
Is it possible to load legacy code (i.e. VB6) and a database schema, and ask how this code works?
@PromptEngineer48
@PromptEngineer48 5 ай бұрын
It should be possible.
@panfeng2879
@panfeng2879 Ай бұрын
Is there a limitation on the max number of personal documents that I can upload to PrivateGPT?
@PromptEngineer48
@PromptEngineer48 Ай бұрын
No, but then the vector store gets confused and is not able to retrieve the relevant chunks.
@user-tx1oo5xv4u
@user-tx1oo5xv4u 3 ай бұрын
Excellent! BTW, can I upload many of my own docs?
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
Yes
@RamondeBruyn
@RamondeBruyn 6 ай бұрын
Thank you for this great content! I was able to get this working on my M1 Mac. I was able to run the `python ingest.py` and `python privateGPT.py` commands. However, when I asked it to summarize the document I had uploaded, it referenced the "Think and Grow Rich" document that you showed in the video, rather than the test document I uploaded to the source_documents folder. How do I clear out the embeddings from the "Think and Grow Rich" document, or clear the Chroma db embeddings completely, before running the ingest.py command again?
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Delete the db and cache folder
@RamondeBruyn
@RamondeBruyn 6 ай бұрын
@@PromptEngineer48 Thank you! Got it to work exactly as expected! Thank you for all the great content!
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Welcome
@darshanpatil1663
@darshanpatil1663 2 ай бұрын
I am getting the sqlite3 error about using an unsupported version; even the link specified does not solve my error. I get the error when I run the ingest.py file.
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
Were you able to solve it?
@user-xd5gd4pc9h
@user-xd5gd4pc9h 6 ай бұрын
I cannot update the db; when I ask the agent about the newly added document, it still gives answers about the document in this video. It is kind of confusing.
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Delete the db and cache folder.. then upload your own document.
@user-xd5gd4pc9h
@user-xd5gd4pc9h 6 ай бұрын
Thank you, you are right, and really helpful!@@PromptEngineer48
@davidaliaga4708
@davidaliaga4708 5 күн бұрын
Do you have the pdf document you tried? Would like to try it myself
@PromptEngineer48
@PromptEngineer48 4 күн бұрын
It's the think and grow rich book. Just search for the book on the internet
@DataTheory92
@DataTheory92 3 ай бұрын
which is the best vision model to extract entities from complex invoices ?
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
If you are talking about open source, then I have to go with ollama.com/library/llava. If closed source, then OpenAI.
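With Ollama, trying LLaVA on a local file can be as simple as the following (the image path is a placeholder; for multimodal models the CLI picks up image paths included in the prompt):
ollama pull llava
ollama run llava "List the line items in this invoice: /path/to/invoice.png"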
@kusumahaja
@kusumahaja 18 сағат бұрын
Hello @PromptEngineer48, I'm new to Python and want to learn this. I followed the instructions in your great video, but had many errors when installing the modules in requirements.txt. Any update?
@PromptEngineer48
@PromptEngineer48 18 сағат бұрын
Why don't I come up with an updated video? Please give me like a week or so.
@kusumahaja
@kusumahaja 9 сағат бұрын
@@PromptEngineer48 very nice... thank you sooo much....
@thoamslau14
@thoamslau14 Ай бұрын
I'm having trouble activating private1. It says I need to run conda init first, but even after that I still couldn't activate private1.
@PromptEngineer48
@PromptEngineer48 Ай бұрын
please install anaconda first. www.anaconda.com/download
@betagroobox
@betagroobox 3 ай бұрын
Wonderful, thank you! My dream would be to feed my local model with all my books in epub or pdf format just once and the model will keep a memory of those. Then from there I have so many ideas but not sure if feasible, maybe someone can help? 1) for each book create a mind map of concepts 2) a diagram of how each book is related to the others (citations, same authors, same topic, related concepts) 3) given a question or a topic the system can point me to which book is better to read. Probably impossible at the moment, isn't it?
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
Wonderful idea. I will dedicate time for a POC
@betagroobox
@betagroobox 3 ай бұрын
@@PromptEngineer48 Awesome! Another one could be to automatically find the book category. Since the system has already ingested all the books it already knows the discussed topics in each book and from there it can assign each to an ontology of category and subcategories. Like non-fiction/self-help, fiction/novel, non-fiction/self-help/personal growth, and so on. I have other 100 of these ideas, ping me if you need more :D
@SujithAbraham
@SujithAbraham 3 ай бұрын
I would also be interested in something like this if possible as it would be amazing to do this in a repository of books that you know. If you could do this with a non-trivial number of books, say, 100-150 (from Project Gutenberg), it would be a great application of local LLMs.
@Clammer999
@Clammer999 Ай бұрын
I couldn't get Conda to work after installing it. The installed files are in /opt/miniconda3, but whenever I run conda, it says Command Not Found.
@PromptEngineer48
@PromptEngineer48 Ай бұрын
You need to install Anaconda: www.anaconda.com/download. However, you could use a .venv instead of conda; we just need a virtual environment.
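A minimal sketch with Python's built-in venv module instead of conda:
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt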
@Clammer999
@Clammer999 Ай бұрын
@@PromptEngineer48 OK, managed to get Conda working. However, when I run python3 ingest.py, I get an error: line 6, in from tqdm import tqdm: ModuleNotFoundError: No module named 'tqdm'
@Clammer999
@Clammer999 Ай бұрын
OK, made more progress but now stuck on pymupdf. Tried installing it but keep getting the message: Requirement already satisfied: /opt/anaconda3/envs/privategpt/lib/python3.11/site-packages
@PromptEngineer48
@PromptEngineer48 Ай бұрын
pip install tqdm should have worked
@SamdarshiPali
@SamdarshiPali 6 ай бұрын
Is similar solution possible using 'LM Studio' for users having Windows machines?
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Not right now. But if you follow the LM Studio Discord, they have plans.
@DustinHare
@DustinHare 6 ай бұрын
@@PromptEngineer48 I installed WSL on my Windows machine and followed your tutorial; everything worked fine. Great tutorial btw, thank you :)
@harishhari605
@harishhari605 Ай бұрын
Hi, can you create a video on how to clean our own data in a CSV file so that it answers our queries most effectively?
@PromptEngineer48
@PromptEngineer48 Ай бұрын
Yes I can. But to be clear: you want to use a data-cleaner LLM which will give you a clean CSV file?
@harishhari605
@harishhari605 Ай бұрын
@@PromptEngineer48 Okay pls go ahead
@cringeberry3027
@cringeberry3027 6 ай бұрын
It would be great if you created a video where you work on an entire project consisting of dozens of files, related to coding: adding or editing some feature. Thinking about that, DeepSeek Coder would be the best choice, I suppose.
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
Noted! Will try to implement it. DeepSeek Coder is very good actually. Beats so many benchmarks. 😄
@cringeberry3027
@cringeberry3027 6 ай бұрын
@@PromptEngineer48 Also, will it be necessary to use MemGPT to store different contexts in separate chat windows? I'm a bit frustrated trying to figure out the bundle of everything required to do what I mentioned in my initial comment =)
@PromptEngineer48
@PromptEngineer48 6 ай бұрын
A working MemGPT solution is not yet developed. For a big project like yours it needs some exploration. I think I have to go for contextual sparsity.
@cringeberry3027
@cringeberry3027 6 ай бұрын
@@PromptEngineer48 Bing came back with following bundle: Yes, you can use **AutoGen**, **MemGPT**, and **DeepSeek Coder** together to chat with an entire project like a Ruby on Rails (RoR) project consisting of dozens of files. But it got concerns regarding token limit: "It might be challenging due to the token limit. Each file would need to be processed separately, and the total number of tokens across all files would need to be within the token limit. If the total exceeds the limit, you might need to truncate or otherwise reduce the size of your input1. Remember, a token in this context is not necessarily a word. It could be as short as one character or as long as one word. For example, “ChatGPT is great!” is encoded into six tokens: [“Chat”, “G”, “PT”, " is", " great", “!”]
@MA_808
@MA_808 3 ай бұрын
Thanks!
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
Thank You so much !!
@randomscandinavian6094
@randomscandinavian6094 2 ай бұрын
I'm getting CondaError: Run 'conda init' before 'conda activate' during my installation. I did try conda init but then it says "no action taken". As usual I can't get a step-by-step tutorial to work.
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
So you were able to create a conda environment?? Using conda create -n your-name python=3.11??
@randomscandinavian6094
@randomscandinavian6094 2 ай бұрын
Yes. Followed everything up until the activation part
@randomscandinavian6094
@randomscandinavian6094 2 ай бұрын
Although I don't get the (base) in front of my path like you did after Preparing transaction: done Verifying transaction: done Executing transaction: done
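If conda init reports no action taken and the (base) prefix never shows up, a sequence worth trying (assuming a bash or zsh shell and the env name private1 from the video) is:
conda init bash      # or: conda init zsh
exec $SHELL          # restart the shell so the profile changes load
conda activate private1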
@MerguVinay
@MerguVinay 22 күн бұрын
I am facing this error: conda : The term 'conda' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + conda create -n private1 python=3.11 + ~~~~~ + CategoryInfo : ObjectNotFound: (conda:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException
@PromptEngineer48
@PromptEngineer48 20 күн бұрын
Install Anaconda: docs.anaconda.com/free/anaconda/install/windows/
@harishhari605
@harishhari605 2 ай бұрын
Could you provide suggestions on how to enhance the conversational capabilities of this bot?
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
Prompts and proper chains ⛓️
@user-xi4mw9rx1l
@user-xi4mw9rx1l Ай бұрын
Finally, finally, the Windows version came, though I didn't notice it until I saw this video and remembered it.
@PromptEngineer48
@PromptEngineer48 Ай бұрын
yes. bro
@veniagl3984
@veniagl3984 Ай бұрын
Can I use this to extract PDF information from 100 PDFs? I need the same information extracted from each PDF and stored in rows, so I need a table of 100 x (items to extract), i.e. extract Total Assets from a balance sheet (which will be my 1st column), and I need to do this task for 100 companies. Can I use this code to do that? I feel this is more of a many-to-one thing rather than many-to-many. Thanks so much for your content!
@PromptEngineer48
@PromptEngineer48 Ай бұрын
If I understand that correctly, that could be hard coded. I think we don't need an LLM here
@Alexiy25raffasan
@Alexiy25raffasan Ай бұрын
It would be great to feed local AI with project code or framework, and be able to ask it questions about the code.
@PromptEngineer48
@PromptEngineer48 Ай бұрын
nice idea. i will try to implement the same.
@adityadeshmukh
@adityadeshmukh 3 ай бұрын
Can you provide a similar use-case setup for Windows as well, now that Ollama is available on Windows?
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
No difference. Just use the code on windows. Make sure to install Ollama on windows.
@varun_tech7
@varun_tech7 2 ай бұрын
Is there a way to view the actual embedding values from ChromaDB?
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
Sure there is.
@varun_tech7
@varun_tech7 2 ай бұрын
@@PromptEngineer48 How exactly can I view them ?
@MA_808
@MA_808 3 ай бұрын
Is there a way for links, images and even videos to be parsed into PrivateGPT ???
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
Not with this setup. But yaa we could make that work
@DiminencoIulian
@DiminencoIulian 2 ай бұрын
Is there a possibility to get responses only from your documents?
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
Yes, with a good prompt. At the beginning of the prompt, mention that it is a chatbot and should answer based on these documents only, plus some modifications. But this works with the OpenAI API; I have tested it in a current project I am doing on some RAG applications.
@ryanwales9399
@ryanwales9399 2 ай бұрын
Keep getting an error when running python3 ingest.py; it says line 8, no module named langchain.
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
pip install langchain langchain_community
@yashyaadav
@yashyaadav 5 күн бұрын
Are we using poetry here or not? because that part was not there in the video.
@PromptEngineer48
@PromptEngineer48 4 күн бұрын
Yes.
@yashyaadav
@yashyaadav 4 күн бұрын
Is the full code available in the GitHub repository, or are there some scripts missing via .gitignore?
@PromptEngineer48
@PromptEngineer48 4 күн бұрын
No. Everything in GitHub repo
@hpsfresh
@hpsfresh Ай бұрын
How does it know it should use mistral if I have several models downloaded?
@PromptEngineer48
@PromptEngineer48 Ай бұрын
that is hardcoded
@hpsfresh
@hpsfresh Ай бұрын
@@PromptEngineer48 Actually not. This is written in the config. Take a look.
@bx1803
@bx1803 2 ай бұрын
I want to try to give it some ability to troubleshoot for me, like conduct pings and traceroutes.
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
looks good
@gaganpreetsingh-6453
@gaganpreetsingh-6453 4 ай бұрын
Is it possible to use privateGPT with a hosted open-source model?
@PromptEngineer48
@PromptEngineer48 3 ай бұрын
Quite possible
@ahmadsiddiqui7998
@ahmadsiddiqui7998 2 ай бұрын
@promtEngineer, can you host it with a basic UI, so people could upload their docs and ask questions without doing all of this hard work 🙈, and also not keep anyone's personal documents with you?
@PromptEngineer48
@PromptEngineer48 2 ай бұрын
Ok