I love your videos. Before starting the setup, could you make sure your code is future-proof by sharing the Python/Conda version you are using? Preferably, start with the `pyenv install` command. Could you also commit the `requirements.txt` file with the version numbers you used? Thank you 🙏
@AtharvaWeginwar 10 months ago
Hi sir, I am getting the error "cannot import name 'VectorStoreIndex' from 'llama_index' (unknown location)". Can you help me with this?
@balvendarsingh9905 10 months ago
Same issue for me @@AtharvaWeginwar
@NavyaSravaniSomu 9 months ago
Can you also use an OCR model to read images in the PDF?
@chrisdsilva7114 9 months ago
Hey Krish, I've been following a lot of your videos lately, especially the Road to Gen AI repo. I wanted to ask: is this similar to the Chat with PDF app we built with the Gemini model? If so, what makes this RAG and not that, or is it the same?
@ajg3951 11 months ago
This session is fantastic! It would be great if you could also demonstrate how to change the default embedding, specify which embedding the model is using, and explain how to switch between different models such as GPT and other LLMs. Additionally, it would be helpful to cover how to use this dataset to answer specific questions.
@Broke_gamer22 9 months ago
You are amazing and your videos taught me more than any of my graduate professors could. Thank you
@faqs-answered 8 months ago
I really love the way you teach these hard concepts with so much enthusiasm that it sounds so easy. Thank you so so much.
@phanindraparashar8930 11 months ago
Much-awaited series. It would be nice to have even more complex RAG applications.
@1murali5teja 11 months ago
Thanks for the video, I have been constantly learning from your videos.
@ariondas7415 11 months ago
Please use open-source LLMs. As a student, it's difficult to come up with a budget for an OpenAI API key. BTW, just wanted to thank you for everything you're doing!!
@manikumar-vr3kp 4 months ago
Hey, for the above task there is no need for the OpenAI API.
@rafiali7315 4 months ago
But how?
@narsimharao8565 11 months ago
Hi Krish sir, thank you very much for this video ❤
@bernard2735 11 months ago
Thank you - this was a great tutorial. Liked and subscribed.
@bawbee27 8 months ago
I love how verbose this is. Thank you!
@enestemel9490 5 months ago
The source nodes are not simply other curated answers. Instead, they are the similar indexes retrieved from the vector store based on the query. These similar indexes serve as the primary source for constructing the final answer. In essence, the vector store identifies and retrieves the most relevant information from similar contexts or data points, which are then used to generate the final response.
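As a rough, library-free sketch of what the comment above describes, with toy vectors standing in for real embeddings (names and numbers here are illustrative only):

```python
import math

def cosine_similarity(a, b):
    """Similarity score between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve_top_k(query_vec, indexed_docs, k=2):
    """Return the k most similar (score, text) pairs - i.e. the source nodes."""
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in indexed_docs]
    return sorted(scored, reverse=True)[:k]

# Toy "index": (text, embedding) pairs
docs = [("attention paper", [1.0, 0.1]), ("yolo paper", [0.1, 1.0])]
top = retrieve_top_k([0.9, 0.2], docs, k=1)
```

The retrieved nodes are then handed to the LLM as context; the final response is synthesized from them, not picked from them verbatim.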
@khalidal-reemi3361 10 months ago
Eagerly waiting for a video that includes databases.
@deepak_kori 10 months ago
Thank you sir for making such videos, these are amazing 🤩🤩
@RanjitSingh-rq1qx 11 months ago
Wow sir, I was waiting for this video ❤
@jcneto25 11 months ago
Excellent Tutorial. Thanks
@seanrodrigues1842 11 months ago
Since we are using OpenAI, does it mean we are using one of the GPT models? There were no parameters in the code to choose which LLM to use. How do we select a particular OpenAI model?
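For reference, newer LlamaIndex versions (v0.10+) let you pick the OpenAI model explicitly through the global Settings object. A minimal sketch, assuming the llama-index-llms-openai integration package and the import paths below (check them against your installed version):

```python
def configure_llm(model_name="gpt-3.5-turbo"):
    # Imports are deferred so this sketch only needs llama-index
    # installed when the function is actually called (v0.10+ layout).
    from llama_index.core import Settings
    from llama_index.llms.openai import OpenAI

    # Every query engine built afterwards uses this model;
    # if nothing is set, LlamaIndex falls back to an OpenAI default.
    Settings.llm = OpenAI(model=model_name, temperature=0.1)
    return Settings.llm
```

That fallback is why the video works without ever naming a model.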
@bevansmith3210 11 months ago
Great channel Krish! Is it possible to create a RAG/LLM model that interacts with a database to answer statistical questions: what is the max, min, median, mean? Basically, a chatbot for non-technical users to interact with spreadsheets.
@akshatapalsule2940 11 months ago
Thank you so much Krish!
@r1234-e1e 11 months ago
I am using the Mistral open-source model, and I want to store the relevant documents that are retrieved. How do I do it?
@lixiasong3459 11 months ago
Thank you very much, Sir. In your LlamaIndex playlist, it says five videos so far, but 2 unavailable videos are hidden. Do I have to pay and become a member to be able to see the full playlist? Thanks again for the amazing videos!
@piyush-A.I.-IIT 11 months ago
Thanks! Just a quick question: for indexing the documents, does it call the OpenAI API internally? I understand that for retrieval it calls the OpenAI API to formulate the final answer, but I am unclear whether it calls the API for indexing. I need to index a 10,000-page document, so I have to account for the cost if it calls the OpenAI API.
@vivekshindeVivekShinde 11 months ago
As per my knowledge, indexing doesn't use OpenAI; retrieval does. Correct me if I am wrong.
@ajg3951 11 months ago
@@vivekshindeVivekShinde Indexing does use the API, because by default we are using OpenAI's text embedding. Indexing involves embedding the text and storing it in the respective folders. However, you are free to change the embedding from OpenAI to any other open-source option available for this purpose.
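A sketch of swapping the default OpenAI embedding for a local open-source one, assuming the llama-index-embeddings-huggingface package and a v0.10+ layout (verify the names against your docs):

```python
def use_local_embeddings(model_name="BAAI/bge-small-en-v1.5"):
    # Deferred imports: only needed when the function is called.
    from llama_index.core import Settings
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding

    # Indexing now embeds locally instead of calling the OpenAI API,
    # so building the index no longer costs API credits.
    Settings.embed_model = HuggingFaceEmbedding(model_name=model_name)
    return Settings.embed_model
```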
@manikumar-vr3kp 4 months ago
@@ajg3951 no
@aryansawant2374 13 days ago
Does anyone know how we can use Groq here instead of OpenAI? I have updated the code to the recent LlamaIndex libraries by going through the documentation, but on using the query retriever an error shows up about a missing OpenAI API key. So is LlamaIndex totally based on OpenAI?
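LlamaIndex is not tied to OpenAI, but it defaults to OpenAI for both the LLM and the embeddings, so overriding only the LLM still triggers the missing-key error. A sketch that overrides both, assuming the llama-index-llms-groq and llama-index-embeddings-huggingface packages (model and package names per recent docs; verify locally):

```python
def build_groq_query_engine(data_dir="data", groq_api_key="YOUR_GROQ_KEY"):
    # Deferred imports; v0.10+ package layout assumed.
    from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding
    from llama_index.llms.groq import Groq

    # Override BOTH defaults; leaving either unset falls back to OpenAI.
    Settings.llm = Groq(model="llama3-8b-8192", api_key=groq_api_key)
    Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

    documents = SimpleDirectoryReader(data_dir).load_data()
    index = VectorStoreIndex.from_documents(documents)
    return index.as_query_engine()
```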
@pranavgaming7634 11 months ago
Amazing energy Krish. I am your student in the Master Gen AI class. I am trying this project but I am getting an ImportError while loading VectorStoreIndex, SimpleDirectoryReader from llama_index. I have tried loading only one, but status quo. Could you please guide me to fix it?
@linalogvina6001 9 months ago
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
@shashankag5361 9 months ago
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
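For anyone hitting the same ImportError: llama-index v0.10 moved the top-level names into the llama_index.core namespace. A minimal sketch with the new paths (assumes a data/ folder of PDFs and whatever LLM/embedding defaults you have configured):

```python
def build_index(data_dir="data"):
    # New v0.10+ import path; the old `from llama_index import ...`
    # fails with "cannot import name 'VectorStoreIndex'".
    from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

    documents = SimpleDirectoryReader(data_dir).load_data()
    return VectorStoreIndex.from_documents(documents, show_progress=True)
```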
@pavankonakalla4668 11 months ago
So is it more powerful than Azure AI Search? Or does it do the same thing as AI Search (Azure Cognitive Search)?
@alexaimlllm 11 months ago
Thanks Sir. May I know where we used OpenAI here? Can we use an open-source model like Llama-2?
@aravindraamasamy9453 11 months ago
Hi Krish, I have a doubt regarding the project I am doing. From a PDF file I need to create an Excel file with 5 columns, and the info in the Excel file can be filled from the PDF. Can I get an approach to solve this problem using an LLM? I am looking forward to hearing from you.
@livealil 7 months ago
Are you going to cover how to do the LangChain integration that was mentioned in the first video of the series and is included in the diagram pulled up at 25:09 (same as the first video)?
@pritioli8429 7 months ago
Great tutorial! Thanks for sharing!
@ParulSharma-fx5zw 7 months ago
In your playlists, you have been using the OpenAI LLM with some API key, but where are you using it and linking it with LlamaIndex? Something is missing here....
@alfatmiuzma 11 months ago
Hello Krish, thanks for this informative video on RAG and LlamaIndex. I have one doubt: when you query "what is attention is all you need", the source with a 0.78 similarity score is chosen as the final response instead of the source with a 0.81 similarity score. Why?
@AbdelilahBo 10 months ago
Hello, thank you so much for this video. I have a question about summarization-style questions over LLM documents. For example, with a vector database of thousands of documents that have a date property, can I ask the model how many documents I received in the last week?
@RishikantMallick a day ago
I am facing this - ImportError: cannot import name 'VectorStoreIndex' from 'llama_index' (unknown location)
@summa7545 10 months ago
Hello Krish, first of all, I'd like to thank you for all your guidance. Your videos are my main source of study. Now, my query related to this video: the code has changed from the one you are showing. Most of it remains the same, with the addition of core to the library path. But I couldn't find the equivalent for VectorIndexAutoRetriever, mainly the keyword arguments to use inside it. Currently it asks for vector_store_info apart from index and similarity_top_k.
@CheggAnonymous 7 months ago
In your playlists, you have been using the LLM with some API key, but where is the RAG here?
@shobhitagnihotri416 11 months ago
We can do the same thing in LangChain, so what is the difference?
@sravan9253 9 months ago
Instead of using an LLM to generate embeddings of the input data, we are using LlamaIndex here to embed and index it?
@SanketSancheti-h5e 4 months ago
In .env, how do we store our API keys? I mean, with or without quotes around the value?
@achukisaini2797 11 months ago
Sir, I need your help. I am using LlamaIndex and saving the embeddings in Pinecone using a sentence transformer, but I am not able to connect to Pinecone.
@StutiKumari-yn5ws 8 months ago
Hi Krish, if there is an option to store the index on the hard disk, then why do we need a vector store like Chroma DB?
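Both options exist: the built-in persistence serializes the default in-memory store to disk, which is fine for small projects, while a dedicated vector DB such as Chroma adds scaling, metadata filtering, and concurrent access. A sketch of the disk option, per the v0.10+ API (verify the paths locally):

```python
def save_and_reload_index(index, persist_dir="./storage"):
    # Deferred imports; v0.10+ layout assumed.
    from llama_index.core import StorageContext, load_index_from_storage

    # Writes the docstore, index metadata, and vectors into persist_dir.
    index.storage_context.persist(persist_dir=persist_dir)

    # Later (e.g. on the next run): rebuild without re-embedding anything.
    storage_context = StorageContext.from_defaults(persist_dir=persist_dir)
    return load_index_from_storage(storage_context)
```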
@dhruvasthana5270 3 months ago
*SPOILER ALERT* It's not a 27-min project! I think that holds for every one of sir's project videos :)
@codeCoffee_cc 22 days ago
I've been at this for 3 days. As a beginner, so many pieces of code that work fine in the video are not working in my project.
@ashusoni6448 9 months ago
Sir, as you know, libraries like LlamaIndex are still undergoing rapid changes, so please try to mention the exact version of each library in the requirements.txt file.
@lordclayton 8 months ago
Can anyone link the llamaindex playlist that Krish Naik has started? I can't seem to find it somehow
@abhayjoshi2121 6 months ago
Hi Krish, great video. Do you plan to use an open-source LLM? Reason being, private data is the key in all industries.
@tonydavis2318 8 months ago
Out of curiosity, why are you using python 3.10 instead of the current stable version 3.12?
@ambarpathakkilbar 11 months ago
Very basic question - is LlamaIndex using the OpenAI API key you initialized in the os environment?
@ambarpathakkilbar 11 months ago
Also, where exactly did you use OpenAI? I am not able to understand it.
@vivekshindeVivekShinde 11 months ago
No, I think LlamaIndex is not using the OpenAI API key. Also, he didn't use it anywhere in the project. Like he said, in the future we will create more complex conversational bots; maybe at that time he'll use it. He just added that OpenAI part for the sake of maintaining the future flow. I might be wrong. Feel free to correct me.
@subhamjyoti4189 11 months ago
@@vivekshindeVivekShinde VectorStoreIndex is using OpenAI internally for generating embeddings.
@omerilyas7347 10 months ago
@@subhamjyoti4189 I don't think VectorStoreIndex uses OpenAI's embeddings.
@shrihanscreativeworld8813 6 months ago
Hi sir, really excellent content. I am following all your playlists to learn Gen AI. I have one query about this video: why are we not using any embedding model here? It's mentioned in the title that this is a RAG application using LlamaIndex and OpenAI, but I didn't find any call to OpenAI here. Please correct me if anything is wrong in my understanding.
@qzwwzt 11 months ago
Sir, congrats on your lessons! I'm from Brazil. I tried other PDFs in Portuguese. At the end, the response text came in English: "These are the enteral nutritional requirements for preterm infants weighing less than 1500g." Is it possible to get everything in Portuguese? Thanks a lot.
@marcelobeckmann9552 7 months ago
Did you try asking the LLM to translate the response to Portuguese?
@Decoder_Sami 10 months ago
from llama_index import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader("data").load_data()

ImportError: cannot import name 'VectorStoreIndex' from 'llama_index' (unknown location)

How can I fix this issue? Any suggestions, please!
@Decoder_Sami 10 months ago
Yes, I got it. The correct code should be like this:
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader("data").load_data()
@tonydavis2318 8 months ago
@@Decoder_Sami That little tidbit took me about 3 hours to figure out. Thanks for posting!
@amritsubramanian8384 8 months ago
Great video
@ANKITKUMAR-oc7pt 7 months ago
Can't see the PDF - where have you uploaded the PDF?
@shrankhalmohite6387 6 months ago
Does it fetch data from the internet if it doesn't find the answer in the PPT indexes?
@ANKITKUMAR-oc7pt 7 months ago
Sir, you haven't provided the Attention PDF and the YOLO PDF.
@Chuukwudi 8 months ago
Thank you Krish. Important notes: llama_index doesn't support Python 3.12. If you decide to use Python 3.11, then while importing you will need to use `from llama_index.core import ...`.
@fatimazehra5962 8 months ago
Which location is to be added in method_location?
@keepguessing1234 8 months ago
My requirements.txt is not able to install... it's throwing an error.
@AniketGupta-et7zw 11 months ago
Hi Krish, can you also make a roadmap video on data engineering?
@sanjaykrish8719 8 months ago
Can LlamaIndex be used with Llama? And why is it named after Meta's Llama?
@keepguessing1234 7 months ago
The .env file... how did you make it? Do you keep the original key value in it, or the name we gave, 'OPENAI_API_KEY'?
@ardensarmiento8417 5 months ago
It should contain something like this: OPENAI_API_KEY='insert your OpenAI key'
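For reference, both quoted and unquoted values work with python-dotenv. A minimal stand-in that mimics what load_dotenv() does, just to make the format concrete (the file name and keys below are examples only):

```python
import os

def load_env_file(path=".env"):
    """Minimal stand-in for python-dotenv's load_dotenv(): reads
    KEY=VALUE lines into os.environ. Quotes around the value are
    optional, which answers the question above: both formats work."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip("'\"")

# Demo: quoted and unquoted values load the same way
with open("demo.env", "w") as f:
    f.write("OPENAI_API_KEY='sk-demo-123'\n")
    f.write("OTHER_KEY=plain-value\n")
load_env_file("demo.env")
```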
@prasanthV-ji1ub 10 months ago
We are getting an error:

ModuleNotFoundError Traceback (most recent call last)
c:\Users\wwwdo\Desktop\LLAMA_INDEX\Llamindex-Projects\Basic Rag\test.ipynb Cell 1, line 4
      1 ## Retrieval augmented generation
      3 import os
----> 4 from dotenv import load_dotenv
      5 load_dotenv()
ModuleNotFoundError: No module named 'dotenv'

even though I tried adding python-dotenv to requirements.txt.
@rishabkhuba2663 9 months ago
Same here. Did you find a workaround?
@ShaikKarishma-c8q 6 months ago
Create a file with the name .env
@awakenwithoutcoffee 8 months ago
Hi there Krish, amazing tutorial once again, but I'm running into the issue that the "maximum context length is 8192 tokens". How can we best chunk per PDF page/chapter if the PDF size is > 8k tokens? *EDIT*: Our use case is the following: we want to retrieve 100% accurate text from a page or chapter. Is this possible, or does the AI only know how to summarize?
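A library-free sketch of the chunking idea: split the document into overlapping windows so each chunk stays well under the context limit (word counts are a rough proxy for tokens here; in LlamaIndex a node parser / text splitter does this for you):

```python
def chunk_words(text, chunk_size=200, overlap=20):
    """Split text into overlapping word windows. The overlap keeps
    sentences that straddle a boundary retrievable from both chunks."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

# Demo on a synthetic 1000-word "document"
pages = " ".join(f"word{i}" for i in range(1000))
chunks = chunk_words(pages, chunk_size=200, overlap=20)
```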
@saumyajaiswal6585 11 months ago
Thank you for the awesome video. Can you please share the best approach for a multi-PDF chatbot? The PDFs can have text, images, and tables, and the answer should contain (or be drawn from) the text, images, and tables from the PDFs themselves wherever required.
@vivekshindeVivekShinde 11 months ago
Facing a similar issue. Let me know if you find something; it'll be helpful.
@saumyajaiswal6585 11 months ago
@@vivekshindeVivekShinde Sure... did you find any solution?
@nelohenriq 9 months ago
What about doing all this using only open-source models from HF?
@sanyadureja2556 14 days ago
Please make a video on multimodal RAG using LlamaIndex.
@JeevankumarKodali-c5k 11 months ago
Where can I find those PDFs used in the project?
@srishtisdesignerstudio8317 11 months ago
Krish Ji, hi. We are into the stock market and use ML, mainly LSTM and Weka, and to some extent KNIME and RapidMiner, for building simple models involving moderate data sets of 4,000 to 7,000 instances (may go up to 10,000) and 8-10 features, hence not very big models by serious ML standards. We saw you building an LSTM on TF in one of your videos, on your GTX 1650 laptop we guess. We have been training our models on CPU only till now, and it consumes a lot of time. However, we have recently started working with a sentiment library and wish to use it in our models to make some auto-trading bots. Could you please guide us on our laptop purchase? I mean, will a 1650 be good enough, or do we need to invest heavily? We have shortlisted some budget gaming laptops in the 70-80k range with an RTX 4050 or 3050. Your valuable suggestions will be of great help; we don't want to waste our money, and we think you are quite well versed in the subject.
@Dream-lp7km 11 months ago
Sir, in companies, do people work in Google Colab or Jupyter Notebook?
@ishratsyed77 10 months ago
The llama-index installation is giving errors. Any suggestions?
@rajum9474 5 months ago
Can someone help me? Where can I find the PDF files used in this video?
@udaysharma138 11 months ago
Can you please create a video on how to summarize a long PDF with Mistral or Llama-2 to get a very efficient output? With OpenAI we have a large context length, but with these open-source LLMs we are restricted while summarizing a large PDF.
@kunalbose6360 11 months ago
Can we have some content where we fine-tune as well, plus feedback or advanced RAG for QnA ❤❤ Or the triplets approach for RAG?
@Raaj_ML 7 months ago
Thanks... but in this video, no OpenAI was used. Please correct me if I am wrong.
@ardensarmiento8417 5 months ago
There is - it's in the .env file mentioned at 2:56. It should contain something like this: OPENAI_API_KEY='insert your OpenAI key'
@Raaj_ML 5 months ago
@@ardensarmiento8417 But where is it used?
@karmicveda9648 11 months ago
🔥🔥🔥
@harik5591 11 months ago
Can you create an application that indexes images and builds a prompt with similarity search over a given image's content?
@abax_ 11 months ago
Sir, can you please use an open-source model in the next video, such as Google PaLM? I tried using the PaLM model, but VectorStoreIndex constantly demands an OpenAI API key. I even took help from the docs, but I am only able to get a response without chaining the PDF.
@MangeshSarwale 11 months ago
Sir, I don't have the paid OpenAI key, so while running the code I am getting the error (RateLimitError: You have exceeded your current quota) at the line index = VectorStoreIndex.from_documents(documents, show_progress=True). Please tell me how to solve this.
@kishanpayadi8168 11 months ago
Either create a new account and get free but limited access for 30 days, or use Gemini Pro.
@shravaninevagi5729 9 months ago
Did you find any alternative? I am stuck here as well.
@aiml.meetsolanki 8 months ago
@@shravaninevagi5729 In the source file .venv\Lib\site-packages\llama_index\core\embeddings\utils.py, change the below to use GooglePaLMEmbedding, which works in my case. Install llama-index-embeddings-google with "pip install llama-index-embeddings-google":

"""Embedding utils for LlamaIndex."""
import os
from typing import TYPE_CHECKING, List, Optional, Union

if TYPE_CHECKING:
    from llama_index.core.bridge.langchain import Embeddings as LCEmbeddings
from llama_index.core.base.embeddings.base import BaseEmbedding
from llama_index.core.callbacks import CallbackManager
from llama_index.core.embeddings.mock_embed_model import MockEmbedding
from llama_index.core.utils import get_cache_dir
from llama_index.embeddings.google import GooglePaLMEmbedding

EmbedType = Union[BaseEmbedding, "LCEmbeddings", str]

def save_embedding(embedding: List[float], file_path: str) -> None:
    """Save embedding to file."""
    with open(file_path, "w") as f:
        f.write(",".join([str(x) for x in embedding]))

def load_embedding(file_path: str) -> List[float]:
    """Load embedding from file. Will only return first embedding in file."""
    with open(file_path) as f:
        for line in f:
            embedding = [float(x) for x in line.strip().split(",")]
            break
    return embedding

def resolve_embed_model(
    embed_model: Optional[EmbedType] = None,
    callback_manager: Optional[CallbackManager] = None,
) -> BaseEmbedding:
    """Resolve embed model."""
    from llama_index.core.settings import Settings
    try:
        from llama_index.core.bridge.langchain import Embeddings as LCEmbeddings
    except ImportError:
        LCEmbeddings = None
    # Check if embed_model is 'default' or not specified
    if embed_model == "default" or embed_model is None:
        # Initialize Google PaLM embedding
        embed_model = GooglePaLMEmbedding()
    return embed_model
@RahulAthreyaKM 11 months ago
Can we use Gemini with LlamaIndex?
@allaboutgaming836 11 months ago
Getting an error importing CohereRerank:
ImportError: cannot import name 'CohereRerank' from 'llama_index.core.postprocessor'
It also causes an error while importing SimilarityPostprocessor:
from llama_index.core.indices.postprocessor import SimilarityPostprocessor
@DataDorz 10 months ago
What is the need for OpenAI in this video?
@thespritualhindu 9 months ago
For response synthesis. Once the relevant nodes are retrieved, they are passed as context to the LLM (OpenAI) model, and then the LLM answers the user's query in a much better way.
@chinnibngrm272 11 months ago
Hi sir, previously I tried this with Gemini Pro. In that project, while extracting text from a 32-page PDF, it's not extracting all the text, which is why I am not able to get perfect answers. What do I have to do, sir? Please help me solve this.
@krishnaik06 11 months ago
Use this technique, it will work.
@chinnibngrm272 11 months ago
@@krishnaik06 Sure sir
@chinnibngrm272 11 months ago
@@krishnaik06 Thank you so much sir for helping lots of students.... You are amazing 😍 Waiting for more projects. Also one request from my side, sir: please share some project ideas with us as assignments. It will help us do them on our own. Please sir... please share some application ideas.
@akj3344 11 months ago
@@chinnibngrm272 omg stop begging.
@manjushreegs1063 6 months ago
Sir, can you please upload a video on multi-doc RAG using LlamaIndex and an agent other than OpenAI?
@kamitp4972 11 months ago
Sir, can you please make an implementation video on TableGPT?
@kamalakantanayak3250 11 months ago
How is this different from the embedding technique?
@kishanpayadi8168 11 months ago
As far as I understand, RAG is based on embeddings for similarity search. LlamaIndex is just a framework to build applications on top of it.
@nagrajkethavarapu3938 5 months ago
Should we purchase the OpenAI API?
@AliYar-e4u 5 months ago
Can we use this for the Urdu language as well?
@MirGlobalAcademy 11 months ago
Why don't you use VS Code for writing code? Why are you using PyCharm inside VS Code?
@ArunkumarMTamil 11 months ago
Please teach about Direct Preference Optimization.
@Munnu-hs6rk 7 months ago
You should make the same video using open-source LLMs. If we can build the project for free, why should we pay? And also make an end-to-end Streamlit app.
@satyamoahnty 4 months ago
How do we add chat memory?
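LlamaIndex chat engines ship with memory built in, but the core idea is just replaying recent turns into the next prompt. A library-free sketch of a sliding-window memory buffer:

```python
class ChatMemoryBuffer:
    """Keeps the last `max_turns` (user, assistant) exchanges and
    renders them as context for the next prompt."""

    def __init__(self, max_turns=5):
        self.max_turns = max_turns
        self.turns = []

    def add_turn(self, user_msg, assistant_msg):
        self.turns.append((user_msg, assistant_msg))
        # Drop the oldest turns once over the window
        self.turns = self.turns[-self.max_turns:]

    def as_prompt_context(self):
        lines = []
        for user_msg, assistant_msg in self.turns:
            lines.append(f"User: {user_msg}")
            lines.append(f"Assistant: {assistant_msg}")
        return "\n".join(lines)

# Demo: with a 2-turn window, the oldest exchange falls out
memory = ChatMemoryBuffer(max_turns=2)
memory.add_turn("What is RAG?", "Retrieval-augmented generation.")
memory.add_turn("Which index?", "A vector store index.")
memory.add_turn("Which LLM?", "An OpenAI model by default.")
```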
@harshsingh7842 11 months ago
How do I create an OpenAI API key? Please help me with this doubt.
@bindupriya117 8 months ago
Can you make a hands-on video on RAFT for RAG?
@aibyak 10 months ago
ERROR: Failed building wheel for greenlet
Failed to build greenlet
ERROR: Could not build wheels for greenlet, which is required to install pyproject.toml-based projects
Getting this error while installing the frameworks from requirements.txt.
@sachinborgave8094 9 months ago
I don't see the PDFs.
@nefelimarketou1892 9 months ago
Thank you!
@bhanu866 8 months ago
How can we start this in Colab?
@Innocentlyevil367 11 months ago
Hey Krish, can you do an end-to-end project on model fine-tuning?
@Magnus-kd7wd 4 months ago
You should really look into Poetry instead of using conda. Also, why are you assigning an environment variable to the same environment variable? If you can `getenv` it, it's already in `os.environ`.
@vijjapuvenkatavinay8207 11 months ago
I'm getting a rate limit error, sir.
@kishanpayadi8168 11 months ago
Then use Gemini, my friend.
@manzoorhussain5275 11 months ago
ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'c:\\programdata\\anaconda3\\lib\\site-packages\\__pycache__\\typing_extensions.cpython-39.pyc'
Consider using the `--user` option or check the permissions.
WARNING: Ignoring invalid distribution -umpy (c:\programdata\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution - (c:\programdata\anaconda3\lib\site-packages)
Getting these errors and warnings when I try to install the packages from the requirements.txt file. Kindly help.
@khanmahmuna 8 months ago
Please, can you build a project on a document summarization app using RAG and an LLM, without downloading the LLM locally and without using a GPT model? It would be very helpful, or if any viewer can guide me through this, that would be very helpful too.
@lakshman587 5 months ago
Why do you keep saying "open API"? It is the OpenAI API key.
@svdfxd 11 months ago
With all due respect, the speed at which you are posting videos makes it very difficult to keep up with the learning pace.