You guys are the RAG masters! Thank you for the informative videos.
@AI-Makerspace · 7 months ago
Hahaha thanks @taylorfans1000!
@AI-Makerspace · 1 year ago
Slides & Colab from the event! Slides: www.canva.com/design/DAFvFEhCJtg/Mthlo-nWXAPck3iK3JaB7Q/edit?DAFvFEhCJtg& Google Colab Code: colab.research.google.com/drive/1wwGLuEreZJfpxTvFeMLFbWb1GkRkXFwS?usp=sharing
@mohamedchorfa · 1 year ago
Same url for both?
@fozantalat4509 · 1 year ago
Please share the github repo link as well.
@LeonardoRocha0 · 1 year ago
As @mohamedchorfa pointed out, the Colab link is the same as the Canva one. Could you fix it with the correct URL?
@AI-Makerspace · 1 year ago
Whoops, @LeonardoRocha0 and @mohamedchorfa - fixed!
@sivi3883 · 8 months ago
Awesome video again! These videos are blowing up! My company has 200K+ PDFs ranging from 100 pages to 10,000+ pages. Will this framework work for data at that scale? I'm wondering how long it would take to create the synthetic triplets for the millions of chunks that 200K+ PDFs would produce. Would love to hear your thoughts!
@pshreyaan · 1 year ago
This is super interesting, I hope the channel blows up and we get to see more similar content. 👍
@AI-Makerspace · 1 year ago
Same!
@alchemication · 1 year ago
Super interesting content. Thanks for posting! The piece I often find missing is an actual example of using this thing (is it 1 model? 2 models? do we still use a vector DB?), along with some discussion of the practical side of things. What if my data changes? Nevertheless, awesome work - cheers, guys!
@AI-Makerspace · 1 year ago
In this case we're doing simultaneous fine-tuning of both the embedding model and the LLM! And yes, we're still using a vector DB here for all of our documents. When data changes significantly, re-training (e.g., fine-tuning) should definitely be considered. Of course, what exactly you mean by "changes" matters - which metrics are you measuring, how much have they moved, and how have you noticed performance degrading (or not) from the actual user's perspective? Those are good places to start. Thanks for your support @alchemication!
@chrisogonas · 1 year ago
Thanks Team, for that educative session. 👍
@AI-Makerspace · 1 year ago
You bet @chrisogonas!
@pavellegkodymov4295 · 1 year ago
Thanks for the video, guys. The most useful part was definitely the notebooks with Chris's commentary. Thumbs up, subscribed. I also appreciate the information given by Greg, but it seemed a bit high-level - either not explained deeply enough or explained in an overly complex way. It would probably be better to cover less material but in more detail, with examples. The aim, I assume, is not for viewers to watch and think "wow, that guy knows a lot, although I don't understand anything," but actually to learn something. Straight and maybe not "nice" feedback, but I hope it helps. You guys are doing a great job sharing insights and helping others. I'm far behind on all this so far - I haven't yet reached the point where I need to fine-tune. I'm working now mostly on the retrieval step and different strategies like pre-filtering of text (keyword search) before retrieval from the vector store. But fine-tuning the embedding model might well be one of the solutions for me. For now I'm struggling to automatically "emphasize" (give higher priority to) domain-relevant words inside the question over regular non-relevant ones like "in", "please", etc., so as to get more relevant chunks within the K chunks selected and increase the chance of providing more relevant context to the LLM. So thanks for the hints, and well done!
@AI-Makerspace · 1 year ago
Thanks for your comments, and appreciate the feedback!
@micbab-vg2mu · 1 year ago
Thank you - the topic of RAG is very interesting.
@nenjaplays945 · 7 months ago
Thanks a lot, Team! Question: after fine-tuning, how does one save the fine-tuned model to disk?
@AI-Makerspace · 6 months ago
You can use the `.push_to_hub()` method to push the completed model to the Hugging Face Hub!
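For the disk part of the question, `save_pretrained()` writes the model to a local directory, while `.push_to_hub()` uploads it to the Hub. A minimal sketch (a tiny public model stands in for your fine-tuned model, and the local path is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Tiny public model used here as a stand-in for your fine-tuned model
model = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2")
tokenizer = AutoTokenizer.from_pretrained("sshleifer/tiny-gpt2")

# Save locally to disk (writes config.json, weights, tokenizer files)
model.save_pretrained("./my-finetuned-model")
tokenizer.save_pretrained("./my-finetuned-model")

# Or push to the Hugging Face Hub (requires `huggingface-cli login`):
# model.push_to_hub("your-username/my-finetuned-model")
```

Loading it back later is just `AutoModelForCausalLM.from_pretrained("./my-finetuned-model")`.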
@li-pingho1441 · 1 year ago
This video saved my life!!!!! Amazing work!
@AI-Makerspace · 1 year ago
We love to hear that!
@mosca204 · 5 months ago
Can I use a custom model as a generator?
@AI-Makerspace · 5 months ago
Yes!
@aswathmg · 10 months ago
Thank you for that nice video guys. You mentioned that there are open source models for generating synthetic data. Can you please suggest any?
@AI-Makerspace · 10 months ago
Generating synthetic data is best done with the most powerful models you can find, so today we'd highly recommend using GPT-4 for this. If you'd like to leverage open-source models, then once you see what the OpenAI models can do, we recommend using that data quality as a baseline to compare against outputs from the latest and greatest OS models like Solar, Mistral, Yi, Llama 2, etc.!
@arkabagchi8689 · 1 year ago
Great video on E2E RAG pipelines and where/when to fine-tune (the embedding model, the LLM itself, or retrieval models). I was wondering if you had a source or links to relevant literature that specifically discusses this E2E evaluation framework (arXiv papers or something similar)? Thanks a ton and keep up the work in this retrieval pipeline space. Knowledge-augmented language models are going to be amazing.
@AI-Makerspace · 1 year ago
Thanks for the great comment! Definitely start with the E2E RAG paper: arxiv.org/abs/2210.02627. Beyond that, check out the DALM docs and source code from Arcee: github.com/arcee-ai/DALM
@arkabagchi8689 · 1 year ago
Hell yes, I was looking for just such a paper on E2E RAG. Thanks a ton. I found the Arcee GitHub a couple of hours ago and am going to parse through it in the coming days! I'm trying to develop a RAG pipeline for certain finance domains, so this is all ultra-relevant. @AI-Makerspace
@AI-Makerspace · 1 year ago
@arkabagchi8689 Of course! Keep us posted as you build, ship, and share!
@nasiksami2351 · 5 months ago
What open-source platform/library can I use to create synthetic data, instead of using OpenAI?
@AI-Makerspace · 5 months ago
Llama 3 70B is a decent replacement! Be mindful of any licensing restrictions!
@peregudovoleg · 9 months ago
Hey guys. At ~14:30 you talk about generating answers with question-context pairs, but the generate_answer func never uses the context, just the question. Wouldn't it be better to give the LLM the question AND the context? The question, I would assume, might sound like "What did Arthur decide to do at the end?", and without the context the LLM might give a suboptimal answer. I also haven't found a link to the Colab notebook - did you share one? Thanks for the great talk!
@AI-Makerspace · 9 months ago
Here's the link - it's in a pinned comment: colab.research.google.com/drive/1wwGLuEreZJfpxTvFeMLFbWb1GkRkXFwS?usp=sharing And yes, you're absolutely correct! Using the context would likely improve the generated answers. I believe the intent was to use both the query and the context - but we must've made a typo that prevented both from being utilized!
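A provider-agnostic sketch of what a corrected `generate_answer` could look like - conditioning on both the question and the retrieved context. (The prompt wording and the `llm` callable are illustrative, not the notebook's actual code.)

```python
def generate_answer(question: str, context: str, llm) -> str:
    """Generate an answer grounded in the retrieved context.

    `llm` is any callable mapping a prompt string to a completion string
    (an OpenAI chat wrapper, a local Llama pipeline, etc.).
    """
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt)
```

Keeping the LLM behind a plain callable makes it easy to swap providers without touching the answer-generation logic.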
@AI-Makerspace · 9 months ago
@AI-Makerspace I've updated the code to reflect this.
@ashritkulkarni9186 · 1 year ago
Thanks Team, this is a great video. How do we make queries to the DALM after training and fine-tuning?
@AI-Makerspace · 1 year ago
You'll query it the same way you would any other Hugging Face model! You can call it directly with a pipeline, using the "text-generation" task!
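A minimal sketch of that (a tiny public model stands in here - point `model=` at wherever you saved your fine-tuned model, e.g. a local directory or a Hub repo ID):

```python
from transformers import pipeline

# "sshleifer/tiny-gpt2" is a tiny public stand-in; replace with your
# fine-tuned model path, e.g. "./my-finetuned-model"
generator = pipeline("text-generation", model="sshleifer/tiny-gpt2")
result = generator("What is retrieval-augmented generation?", max_new_tokens=32)
print(result[0]["generated_text"])
```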
@nelohenriq · 7 months ago
Can you explain how to achieve the same without OpenAI? Thanks in advance
@AI-Makerspace · 7 months ago
You only need OpenAI to generate the synthetic dataset (QAC triplets), so you could substitute any process that results in question-answer-context triplets for OpenAI.
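One provider-agnostic way to sketch that substitution (the prompts and the `llm` callable are illustrative; `llm` could wrap Llama 3, Mistral, or any other model):

```python
def make_qac_triplet(context: str, llm) -> dict:
    """Build one question-answer-context triplet from a document chunk.

    `llm` is any callable mapping a prompt string to a completion string.
    """
    question = llm(
        f"Write one question that can be answered from this passage:\n{context}"
    )
    answer = llm(
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer concisely:"
    )
    return {"question": question, "answer": answer, "context": context}
```

Mapping this over every chunk yields the same triplet format the OpenAI-based process produces, regardless of which model backs `llm`.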
@zd676 · 9 months ago
Quick question: if I have a query-rewriting model that turns a user's input query into a simplified query (or a set of simplified sub-queries) to be sent to the retriever, can I leverage this framework to train the query rewriter, retriever, and generator (LLM) end-to-end?
@AI-Makerspace · 9 months ago
This process can be used with any data you'd like - so long as there is a retrieval step and a generation step in the process!
@abd-m3s · 8 months ago
Very good, thanks!
@saka242 · 1 year ago
Great tutorial - thank you! However, a lot of useful information was posted in the chat window, and it gets lost when the tutorial ends. I don't know if there is a practical solution to this, unfortunately.
@Star-rd9eg · 1 year ago
do you have it?
@saka242 · 1 year ago
Do I have what?
@AI-Makerspace · 1 year ago
Hi @saka242 - thanks for the note! We did not intentionally take it down, have double checked the streaming settings, and are hoping that it shows up within the next day or so 🙏
@AI-Makerspace · 1 year ago
Looks to be working now!
@eagle0829 · 1 year ago
Could you share your keynote slides? The video resolution only goes up to 720p.
@eagle0829 · 1 year ago
I just saw the slides shared!!!! Appreciate the great work - it really saved my day!
@robertcringely7348 · 11 months ago
Stop reading from a script, Greg, and ditch the goofy hat. It's distracting.
@AI-Makerspace · 11 months ago
Thanks for the feedback @robertcringely7348
@zd676 · 9 months ago
Dude, rock the way you are man! Love the style and presentation! If a hat can be distracting, I can't imagine how you would get anything done in today's age lol.
@JuanMartinez-oq8cd · 6 months ago
Who cares, really. If it ain't broke, don't fix it. Awesome content guys!