
Retrieval-Augmented Generation chatbot, part 1: LangChain, Hugging Face, FAISS, AWS

22,323 views

Julien Simon
9 months ago

In this video, I'll guide you through the process of creating a Retrieval-Augmented Generation (RAG) chatbot using open-source tools and AWS services, such as LangChain, Hugging Face, FAISS, Amazon SageMaker, and Amazon Textract.
Part 2: Retrieval-Augmented Ge... - scaling indexing and search with Amazon OpenSearch Serverless!
⭐️⭐️⭐️ Don't forget to subscribe to be notified of future videos. Follow me on Medium at / julsimon or Substack at julsimon.substack.com. ⭐️⭐️⭐️
We begin by working with PDF files in the Energy domain. Our first step is to use Amazon Textract to extract the text from these PDFs. Following the extraction, we break the text down into smaller, more manageable chunks. These chunks are then embedded with a Hugging Face feature extraction model and stored in a FAISS index for efficient retrieval.
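Here is a minimal sketch of that indexing flow (illustrative S3 path, chunking parameters, and model name, not the exact notebook code):

```python
# Minimal indexing sketch: Textract -> chunks -> embeddings -> FAISS.
# Assumes the classic LangChain module layout; names and parameters are illustrative.
from langchain.document_loaders import AmazonTextractPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

# Extract text from a PDF with Amazon Textract
loader = AmazonTextractPDFLoader("s3://my-bucket/energy-report.pdf")
documents = loader.load()

# Split the extracted text into overlapping chunks
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(documents)

# Embed the chunks and store them in a local FAISS index
embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-small-en-v1.5")
vectorstore = FAISS.from_documents(chunks, embeddings)
vectorstore.save_local("faiss_index")
```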
To ensure a seamless workflow, we use LangChain to orchestrate the entire process. With LangChain as the backbone, we query a Mistral Large Language Model (LLM) deployed on Amazon SageMaker, injecting semantically relevant context retrieved from the FAISS index so the chatbot can provide accurate, context-aware responses.
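And a sketch of the query side, assuming a Mistral model already deployed behind a TGI-style SageMaker endpoint (the endpoint name, region, and generation parameters are placeholders):

```python
# Minimal query sketch: wrap the SageMaker endpoint as a LangChain LLM and build a RetrievalQA chain.
import json

from langchain.chains import RetrievalQA
from langchain.llms.sagemaker_endpoint import SagemakerEndpoint, LLMContentHandler

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # TGI-style request body; adjust if your container expects a different schema
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output) -> str:
        return json.loads(output.read().decode("utf-8"))[0]["generated_text"]

llm = SagemakerEndpoint(
    endpoint_name="mistral-7b-endpoint",   # placeholder endpoint name
    region_name="us-east-1",               # placeholder region
    content_handler=ContentHandler(),
    model_kwargs={"max_new_tokens": 512, "temperature": 0.1},
)

# Retrieve the most relevant chunks from FAISS and stuff them into the prompt
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),
)
print(qa.run("What is the trend for solar energy investment in China?"))
```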
- Notebook: gitlab.com/juliensimon/huggin...
- LangChain: www.langchain.com/
- FAISS: github.com/facebookresearch/faiss
- Embedding leaderboard: huggingface.co/spaces/mteb/leaderboard
- Embedding model: huggingface.co/BAAI/bge-small...
- LLM: huggingface.co/mistralai/Mist...

Comments: 61
@jacehua7334 · 9 months ago
always making great and timely videos.
@juliensimonfr · 9 months ago
Glad you like them!
@caiyu538 · 9 months ago
Thank you for your lectures.
@juliensimonfr · 9 months ago
You are very welcome
@justwest · 9 months ago
thanks julien, one can learn so much from these!
@juliensimonfr · 9 months ago
That's the idea 😀
@AaronWacker · 6 months ago
The RAG chatbot you demonstrate is an excellent lesson in using HuggingFaceEmbeddings. I had wondered how to do this outside of GPT, generically enough to have your own vector DB on demand for any model. Thanks for covering this, really great stuff!
@juliensimonfr · 6 months ago
Glad it was helpful!
@iAkashPaul · 9 months ago
Hey Julien, great job with the video. For Q&A over a corpus, I'd recommend generating hypothetical questions for each paragraph and ingesting them as well, since they have better similarity to the user input (which is usually a question) and can also help constrain the model to answer only closed-domain questions.
@juliensimonfr · 9 months ago
Yes, that's a nice trick. I tried to keep things simple here ;)
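In sketch form, that trick could look something like this (illustrative code, reusing the llm, embeddings, and chunks objects from the main pipeline; the prompt wording is an assumption):

```python
# Generate hypothetical questions per chunk and index the questions,
# keeping the original passage as metadata so it can be returned at query time.
from langchain.schema import Document
from langchain.vectorstores import FAISS

question_docs = []
for chunk in chunks:
    prompt = f"Write three questions that the following passage answers:\n\n{chunk.page_content}"
    for line in llm(prompt).splitlines():
        question = line.strip()
        if question:
            question_docs.append(
                Document(page_content=question,
                         metadata={"source_text": chunk.page_content})
            )

question_index = FAISS.from_documents(question_docs, embeddings)
```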
@devilliersduplessis7904 · 8 months ago
Hey Julien, Thanks for an insightful talk last night at the AWS center!
@juliensimonfr · 8 months ago
You're welcome. Thanks for coming!
@kuzeyiyidiker1344 · 3 months ago
Thanks for this clear explanation.
@juliensimonfr · 3 months ago
Glad it was helpful!
@ComFomeTo · 8 months ago
Thanks a lot! It was very, very helpful.
@juliensimonfr · 6 months ago
You're welcome.
@DCTekkie · 3 months ago
Thank you, gonna check it out tomorrow!
@juliensimonfr · 3 months ago
Have fun!
@edinsonriveraaedo292 · 6 months ago
Hi Julien, thanks for your video, pretty clearly explained ;-)
@juliensimonfr · 6 months ago
Glad it was helpful!
@badbaboye · 3 months ago
Thanks for the video!
@juliensimonfr · 2 months ago
You're welcome!
@VenkatesanVenkat-fd4hg · 9 months ago
Super video, thanks for using open-source solutions...
@juliensimonfr · 9 months ago
Glad you liked it
@ccc_ccc789 · 4 months ago
Thanks!
@juliensimonfr · 4 months ago
You bet!
@jingqiwu2865 · 9 months ago
Thanks Julien! Very nice video. Very curious whether there is any comparison between bge-small and ada-002 when used for RAG.
@juliensimonfr · 9 months ago
Hi, please check our embeddings leaderboard at huggingface.co/spaces/mteb/leaderboard. ada-002 is #15, bge-small is #8 :)
@GeigenAkademie · 8 months ago
Thanks Julien for the good tutorial! Some people use Pinecone; do you see differences/advantages of using FAISS over Pinecone? Thank you.
@juliensimonfr · 6 months ago
FAISS is a simple, lightweight, open-source solution. Pinecone is a fully managed, closed-source DB running in the cloud. It depends what you're looking for, and how much work you want to put into managing the solution :)
@aishwaryakumar6504 · 8 months ago
Hi Julien, thank you for this video, it's helping me learn a lot. I was trying to run the code. When I attempt the zero-shot example, my output is quite different from what's shown in the video. I tried to split it, but I get something like this - [answers: * 1) The trend is to invest more in solar energy in China. * 2) The trend is to invest less in solar energy in China. * 3) The trend is to invest the same amount of money in solar energy in China. * 4) The trend is to invest more in solar energy in the United States. * 5) The trend is to invest less in solar energy in the United States. ] Can you please explain why this is happening and how it can be fixed?
@whemmakatatt5311 · 9 months ago
godlike
@krishnasunder9491 · 3 months ago
Thanks, it was really informative. Can you demonstrate fine-tuning LLMs with LoRA and QLoRA? In your experience, does RAG perform better than fine-tuning?
@juliensimonfr · 3 months ago
Llama 2 fine-tuning with QLoRA: kzbin.info/www/bejne/kJbZZ3lmiZZ_abs. IMHO RAG and fine-tuning solve different problems and are complementary. RAG lets you access fresh company data and gives you some domain adaptation. Fine-tuning gives you better domain adaptation and lets you customize guardrails and tone of voice.
@anserali551 · 4 months ago
SageMaker with the LangChain streaming option is generating output
@coolcurly9736 · 8 months ago
It throws KeyError: 'Blocks' after running the cell with boto3.client('textract'), raised by loader.load() from the LangChain parser.
@Invincible615 · 7 months ago
Thanks for the tutorial. In my case, I can't use Mistral due to some restrictions on my AWS test account. I used FLAN-T5 instead, but it gives this error: ValueError: Error raised by inference endpoint: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (422) from primary with message "Failed to deserialize the JSON body into the target type: missing field `inputs` at line 1 column 503".
@juliensimonfr · 6 months ago
The input format for T5 is quite different, so sending a Mistral-formatted message won't work. Not sure what restriction you're facing, but maybe TinyLlama would work? I think you would only have to adapt the prompting format in the content handler.
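For illustration, "adapting the content handler" could look roughly like this (a sketch, assuming the endpoint runs a standard Hugging Face inference container that expects an `inputs` field, which is what the 422 error hints at):

```python
import json

from langchain.llms.sagemaker_endpoint import LLMContentHandler

class T5ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # FLAN-T5 takes a plain instruction, no [INST]...[/INST] chat wrapper,
        # but the request body still needs an "inputs" field.
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output) -> str:
        # Hugging Face inference containers commonly return [{"generated_text": "..."}]
        return json.loads(output.read().decode("utf-8"))[0]["generated_text"]
```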
@Abhisekgev · 4 months ago
I want to embed a large amount of data. If my notebook instance is a CPU-only ml.t3.medium, is it possible to deploy the embedding model on a GPU instance such as ml.g5.large to make processing faster?
@juliensimonfr · 4 months ago
Sure, that's what you would do for production.
@Thirumalesh100 · 6 months ago
Great video! But what if the user's question refers to the chat history and contains shorthand like he/she/that/it? How do you handle such cases?
@juliensimonfr · 6 months ago
LangChain has different ways to handle this, e.g. python.langchain.com/docs/modules/memory/types/buffer
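A minimal sketch of the buffer-memory approach from that link, assuming the llm and vectorstore objects from the notebook:

```python
# Give the retrieval chain conversational memory so follow-up questions
# with pronouns ("it", "he", "that") can be resolved against previous turns.
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chat = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    memory=memory,
)

chat({"question": "Which country leads solar energy investment?"})
chat({"question": "How much did it invest last year?"})  # "it" resolved from chat history
```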
@Thirumalesh100 · 6 months ago
Thanks @juliensimonfr. Basically it's a question-rephrasing request that passes the entire chat history. I tried this approach, but it has cost and token-limit problems. Looking for an alternative.
@rnronie38 · 4 months ago
Can you tell me how to get the key for SageMaker to work here?
@juliensimonfr · 4 months ago
Not sure what you mean. Are you looking for a SageMaker tutorial? See docs.aws.amazon.com/sagemaker/latest/dg/gs.html
@kevinngo3722 · 9 months ago
Hi Julien. The code is not working when I try to run it. I think the error I am getting is related to SageMaker credentials. I just made an account but don't know where to get the information I need to plug into your code to make this work.
@juliensimonfr · 9 months ago
Start here: docs.aws.amazon.com/sagemaker/latest/dg/howitworks-create-ws.html. Create a notebook instance and make sure its IAM role includes the SageMakerFullAccess and TextractFullAccess managed policies. Once you've done that, the notebook will run as is.
@kevinngo3722 · 9 months ago
@juliensimonfr Thanks for your reply! It seems that this leads me to create a Jupyter notebook. How do I integrate this to do what you're showing in Colab in the tutorial?
@da-bb2up · 8 months ago
Thanks for the video :) Can you update your vector database with a few new lines (if you want to add data to your knowledge base) automatically, by running a Python script or something like that?
@juliensimonfr · 8 months ago
Sure, you can keep adding embeddings anytime you want.
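Something along these lines, assuming the embeddings object and the saved index from the indexing step (newer LangChain versions may also require an allow_dangerous_deserialization flag when loading):

```python
# Load the existing FAISS index, add new text, and persist it again.
from langchain.vectorstores import FAISS

vectorstore = FAISS.load_local("faiss_index", embeddings)
vectorstore.add_texts(["New paragraph to add to the knowledge base."])
vectorstore.save_local("faiss_index")
```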
@da-bb2up · 8 months ago
@juliensimonfr Oh, that's nice :) Thanks for the answer!
@SebastienStormacq · 9 months ago
Thank you Julien - this is super useful and comes at the right time during my writing season (you know what I'm talking about :-) ). As someone else mentioned in the comments, I also received an error when calling Textract. I solved it by adding `pip install amazon-textract-textractor -qU` - hope it might help others.
@juliensimonfr · 9 months ago
Ok, good to know. Thanks Seb and good luck with the writing ;)
@SebastienStormacq · 9 months ago
also `pip install faiss-cpu` :-)
@rnronie38 · 4 months ago
How can I call it from my React frontend?
@juliensimonfr · 4 months ago
A SageMaker endpoint is an HTTPS API, so you can plug it in anything. You should be able to find lots of examples out there.
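For what it's worth, SageMaker endpoints expect SigV4-signed requests, so a React frontend typically calls a thin backend (for example, a Lambda function behind API Gateway) that invokes the endpoint with boto3. A rough sketch, with a placeholder endpoint name and request schema:

```python
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

def handler(event, context):
    # API Gateway proxy integration: the question arrives in the request body
    body = json.loads(event["body"])
    response = runtime.invoke_endpoint(
        EndpointName="mistral-7b-endpoint",   # placeholder endpoint name
        ContentType="application/json",
        Body=json.dumps({"inputs": body["question"]}),
    )
    result = json.loads(response["Body"].read())
    return {"statusCode": 200, "body": json.dumps(result)}
```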
@debojitmandal8670 · 7 months ago
Why are you deploying on SageMaker first?
@juliensimonfr · 6 months ago
Because I don't want to manage any infrastructure :)
@Azazello1482 · 1 month ago
Seems like a great video, but I can't move past the starting line. You seem to be skipping over very important details about how to deal with the Hugging Face tokens, AWS security keys, regional compatibility settings with SageMaker, etc. For example, when running the copied SageMaker code, I get "ValueError: Must setup local AWS configuration with a region supported by SageMaker", but no region I try seems to work. Did you cut all the authentication code from your demo? Obviously you don't want to disclose security keys, but at least show/explain that part of the setup code and simply redact the sensitive information.
@juliensimonfr · 1 month ago
How about going through Hugging Face 101 and SageMaker 101 first?
@Azazello1482 · 1 month ago
@juliensimonfr Yes, clearly I'll need to do this! Nonetheless, as an educator myself, I think my point is still useful. It helps learners when you mention the parts you skip over. You don't have to teach them in this video, but it would be helpful to point out that there are steps one must perform that are not shown here.