Thanks, this is tremendously helpful. One point to note: you need to upload the embed file, not the sentence file -> upload_file(bucket_name, embed_file_path)
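For anyone hitting the same confusion, here is a minimal sketch of the two files involved. The record field names and values are illustrative assumptions following the pattern discussed in the thread: the sentence file maps ids to text and stays local for lookups, while the embedding file is the one passed to upload_file.

```python
import json

# Hypothetical records illustrating the two files discussed in the thread.
# The sentence file maps ids to raw text and is kept locally for lookups;
# the embed file maps the same ids to vectors and is the one uploaded
# with upload_file(bucket_name, embed_file_path).
sentences = [
    {"id": "0", "sentence": "Vertex AI Vector Search serves nearest-neighbor queries."},
    {"id": "1", "sentence": "Embeddings are generated before the index is built."},
]
embeddings = [
    {"id": "0", "embedding": [0.12, -0.03, 0.88]},
    {"id": "1", "embedding": [0.45, 0.27, -0.19]},
]

sentence_file_path = "sentences.json"  # stays local, used to map ids back to text
embed_file_path = "embeddings.json"    # this is the file to upload to the bucket

# Write both files in JSON-lines form, one record per line.
for path, records in [(sentence_file_path, sentences), (embed_file_path, embeddings)]:
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")
```

After retrieval, the ids returned by the index are resolved back to text through the local sentence file, which is why only the embedding file belongs in the bucket.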
@ScottJohnson-d3x · 2 months ago
Excellent learning session, Janakiram!
@jagatmohansarvari5681 · 2 months ago
Really helpful for understanding the concepts of embedding and retrieval. Thanks.
@edubr2011 · 6 months ago
Excellent video! Thanks for sharing the code too.
@Janakirammsv · 6 months ago
Glad it was helpful!
@thecopt11 · 6 months ago
Best tutorial. Big thanks for sharing.
@sureshkumarselvaraj8911 · 6 months ago
Great video! What is the difference between the Vertex AI Search service and Vector Search for RAG applications? Which one is better at retrieving relevant documents for a RAG application that deals with 100+ PDF documents? Can you share some insights?
@ShahidGhetiwala-dg3ol · 6 months ago
Great video, thank you so much!
@Ahsan_Akhtar1 · 2 months ago
Really helpful. I have a question: I have multiple PDF files. How do I handle them?
@digiplouxinc.6688 · 2 months ago
In your video you say "sentence_file_path". However, shouldn't it be "embed_file_path"? The create_tree_ah_index function should receive the GCS bucket of the embedded data, not the text with the ids, right?
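For reference, a sketch of how that call might look with the google-cloud-aiplatform SDK; the display name and neighbor count are illustrative assumptions. The key point from the comment above is that contents_delta_uri should point at the embedding data in GCS, not at the sentence file:

```python
def create_index(embed_bucket_uri: str, dimensions: int):
    """Create a Tree-AH index from embeddings already uploaded to GCS.

    `embed_bucket_uri` must point at the embedding data (e.g. the folder
    containing the uploaded embed file), not the sentence/id text file.
    """
    # Deferred import so the sketch can be read without GCP libraries installed.
    from google.cloud import aiplatform

    return aiplatform.MatchingEngineIndex.create_tree_ah_index(
        display_name="rag-demo-index",        # hypothetical name
        contents_delta_uri=embed_bucket_uri,  # e.g. "gs://my-bucket/embeddings/"
        dimensions=dimensions,                # must match the embedding model's output size
        approximate_neighbors_count=10,       # illustrative value
    )
```

Calling this with a URI that holds the raw sentences would build an index over nothing usable, so the embed file upload has to happen first.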
@MarceloFerreira-rl6hh · 5 months ago
Great job! Thanks a lot. What’s the difference between this approach and using LangChain?
@GAURAVRAUT007 · 5 months ago
Excellent video. Can you please do the same with LangChain with retrieval?
@AhmedBesbes · 6 months ago
Thanks for the tutorial! Instead of going through the ids in the JSON file to fetch the sentences, is it possible to integrate those directly as metadata in the index?
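For context on what the index itself can hold: a Vector Search datapoint carries an id, the embedding, and optionally "restricts" (namespace/allow string tags used for filtered matching), rather than arbitrary text payloads, which is why the sentences are looked up by id after retrieval. A sketch of the datapoint JSON shape, with made-up values:

```python
import json

# A Vector Search datapoint: id + embedding, plus optional "restricts"
# (namespace/allow tags evaluated as filters at query time). There is no
# free-form metadata field for the sentence text itself, so the text is
# fetched by id from a separate store after the nearest-neighbor lookup.
datapoint = {
    "id": "42",
    "embedding": [0.10, 0.20, 0.30],  # illustrative 3-dimensional vector
    "restricts": [{"namespace": "source", "allow": ["faq.pdf"]}],
}
jsonl_line = json.dumps(datapoint)  # one datapoint per line in the upload file
```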
@Hitish99999 · 5 months ago
Thanks for the tutorial. I am a bit confused about which file should be uploaded to the bucket: the sentence file or the embedding file?
@arvindmathur6574 · 6 months ago
Great!
@vikasbammidi1340 · 5 months ago
Can you please do a video on how to use the same in LangChain with retrieval?
@GAURAVRAUT007 · 5 months ago
+1
@JulianHarris · 6 months ago
Nice. Are you OK to share the Colab notebook?
@Janakirammsv · 6 months ago
Yes, sure. Please check the description. I have added the links.
@dhananjaypathak15 · 2 months ago
I want the same thing in Node.js. Can someone please help with which library to use?
@TomFord-mv2mx · 6 months ago
Great video. One question: I noticed you used a different model (gecko) than Gemini Pro for the embeddings. Is this OK to do? I assumed the models needed to be the same for both training and inference? Thanks again.
@Janakirammsv · 6 months ago
Text embedding models are independent of LLMs. You only have to ensure that the same embedding model is used for indexing the documents and the query. This is critical to retrieving the context based on the similarity.
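To make that answer concrete, here is a small sketch (the model name and helper function are illustrative, not from the video) where a single function is reused for both indexing and querying, so both sides land in the same embedding space:

```python
def embed(texts, model_name="textembedding-gecko@003"):
    """Embed texts with one fixed model.

    Reusing this same function (and therefore the same model) for both
    the indexed documents and the incoming query keeps the vectors in
    the same embedding space, which similarity search depends on.
    """
    # Deferred import so the sketch can be read without the SDK installed.
    from vertexai.language_models import TextEmbeddingModel

    model = TextEmbeddingModel.from_pretrained(model_name)
    return [e.values for e in model.get_embeddings(texts)]

# doc_vectors  = embed(documents)    # at indexing time
# query_vector = embed([query])[0]   # at query time -- same model, same space
```

The LLM that generates the final answer (Gemini Pro here) never sees the vectors, only the retrieved text, which is why it can differ from the embedding model.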
@wanderlust8367 · 3 months ago
The code link you have shared is incomplete; load_file and a few other things are missing.
@tarunrey619 · 6 months ago
Thanks for sharing your knowledge. Can you share the notebook?
@Janakirammsv · 6 months ago
Please check the description. I have added the links.
@AlaGalai-m9l · 5 months ago
Why always Python? Is there any way to use JS?
@khondakersajid1138 · 6 months ago
Possible to share the notebook?
@Janakirammsv · 6 months ago
The code is available at gist.github.com/janakiramm/55d2d8ec5d14dd45c7e9127d81cdafcd and gist.github.com/janakiramm/7dd73e83c92a0de0c683ed27072cdde2