Retrieval-Augmented Generation (RAG)

32,568 views

Connor Shorten

Days ago

Comments: 29
@AbdennacerAyeb 4 years ago
Welcome back! It's been a long time since you posted on KZbin. We were waiting for you. :) Thank you for sharing your knowledge.
@connor-shorten 4 years ago
Thank you so much! Really glad you liked the video as well!
@arielf376 4 years ago
Glad to see you again. I was getting worried. Your videos are great, thanks so much for the content.
@connor-shorten 4 years ago
Thank you so much!
@whatsinthepapers6112 4 years ago
New background, who dis?? Great to see you back making videos!
@connor-shorten 4 years ago
Lol, thank you so much!
@سودانتوك 4 years ago
Welcome back, thanks for the great work you are doing.
@connor-shorten 4 years ago
Thank you so much!
@katerinamalakhova9872 4 years ago
We’ve missed you so much 🤍
@connor-shorten 4 years ago
Thank you so much!
@DistortedV12 3 years ago
Really great paper. To some extent all of NLP can be treated as a QA task.
@connor-shorten 3 years ago
Thanks! I think the "Text-in, Text-out" unifying framework for all tasks really set the stage for this, interesting stuff!
@himatammineedi6307 3 years ago
Can you explain why this RAG model seems so popular? It seems like all they've done is connect a pre-trained retrieval model to a pre-trained seq2seq model and train them together. They also just do a simple concatenation of the retrieved passages with the initial input before feeding it to the seq2seq model. This all seems like really basic stuff, so am I missing something here? You could also get rid of the retrieval model entirely if you already knew which documents you wanted the seq2seq model to use, and just concatenate those with the original input directly.
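Part of the answer is that the pieces are not merely glued together: the retriever is trained end-to-end through the generator's loss, with the retrieved passages marginalized out as a latent variable, so the model learns what to retrieve rather than needing the right documents specified up front. For readers who want to poke at the wiring, here is a minimal sketch using the Hugging Face port of the paper's model (a sketch, assuming transformers plus its faiss and datasets dependencies are installed; use_dummy_dataset=True swaps the full Wikipedia index for a tiny demo index):

```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

# The released checkpoint bundles the DPR question encoder, the retriever's
# passage index, and the BART generator that were trained jointly.
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq", retriever=retriever
)

# The pipeline the comment describes: encode the question, retrieve passages,
# concatenate each passage with the question, and let the seq2seq model generate.
inputs = tokenizer("who wrote the origin of species", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```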
@imranq9241 2 years ago
Very nice video! Really excited to try these techniques out
@kevon217 1 year ago
Very comprehensive overview, thanks!
@thefourthbrotherkaramazov245 10 months ago
Can someone expand on the snippet at 4:45 explaining how the query works with the encoded samples? In the video, the speaker states, "And then when we ask a query, like we have this new x sequence with a mask at the end of it, we're going to treat that like a query, encode that query, and then use this maximum inner product search...". My understanding is that we encode the masked x (where x is the input) with the same query encoder as what encodes the context information, then use MIPS to find essentially the most similar context to x, which is then processed by the generator to append to x. Any help clarifying would be much appreciated.
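That reading matches the paper: the query encoder embeds the masked input x, maximum inner product search (MIPS) scores it against the pre-computed passage embeddings, and each top-scoring passage is concatenated with x as context for the generator (rather than being appended as the answer itself). A minimal numpy sketch of just the MIPS step, with random vectors standing in for the outputs of the two BERT encoders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: in the paper these come from the document encoder d(z) and the
# query encoder q(x); random vectors here just to show the search step itself.
doc_embeddings = rng.standard_normal((10_000, 768))  # one row per passage
query_embedding = rng.standard_normal(768)           # the encoded masked input x

# MIPS: score every passage by inner product with the query, keep the top-k;
# the generator then conditions on (retrieved passage ++ x).
scores = doc_embeddings @ query_embedding
top_k = np.argsort(scores)[-5:][::-1]
print("top-5 passage ids:", top_k)
print("scores:", scores[top_k])
```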
@shaz7163 4 years ago
Amazing video. What about fine-tuning this for different tasks? The authors say we do not need to fine-tune the document encoder, but what about the other components? Any comments on that?
@connor-shorten 4 years ago
Fine-tuning the document encoder would be very tedious because you would need to continually rebuild the index and the centroids that speed up the nearest-neighbor search. Fine-tuning the query encoder and the BART seq2seq generator is much easier, and any NLP task can be set up this way, as in the text-input, text-output formulation. I cover that in more detail in the T5 video if you're interested.
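In code, that division of labor is just a question of which parameters stay frozen and which ones the optimizer sees. A toy PyTorch sketch, with small Linear modules standing in for the paper's BERT document encoder, BERT query encoder, and BART generator:

```python
import torch

# Hypothetical stand-in modules; in the real model these are two BERTs and a BART.
doc_encoder = torch.nn.Linear(768, 768)    # frozen: its passage index is pre-built
query_encoder = torch.nn.Linear(768, 768)  # fine-tuned per task
generator = torch.nn.Linear(768, 768)      # fine-tuned per task

# Freeze the document encoder so the nearest-neighbor index never needs rebuilding.
for p in doc_encoder.parameters():
    p.requires_grad = False

# Only the query encoder and generator receive gradient updates.
optimizer = torch.optim.AdamW(
    list(query_encoder.parameters()) + list(generator.parameters()), lr=3e-5
)
```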
@shaz7163 4 years ago
@@connor-shorten Yeah, I went through those videos. So basically their document encoder is trained on the 21 million Wikipedia passages, and that's more or less enough for the network to encode any type of document into a vector, right? My other question is: what if I want to look at a different set of documents? How should I index them?
@bivasbisht1244 1 year ago
@@shaz7163 Did you get an answer to that? Cuz I have the same question :(
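The usual recipe for a custom corpus is: embed every new document with the (frozen) document encoder, then build a fresh FAISS index over those vectors and search it with encoded queries. A minimal sketch, with random vectors as hypothetical placeholders for real DPR embeddings:

```python
import faiss
import numpy as np

d = 768  # embedding width of the document encoder
doc_vectors = np.random.rand(1_000, d).astype("float32")  # stand-in for encoded docs

# Exact maximum-inner-product index over the new corpus.
index = faiss.IndexFlatIP(d)
index.add(doc_vectors)

# At query time: encode the query, then retrieve the 5 highest-scoring passages.
query = np.random.rand(1, d).astype("float32")
scores, ids = index.search(query, 5)
print(ids[0], scores[0])
```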
@TheAmyShows 1 year ago
Any ideas on methodologies to evaluate the performance of the retrieval mechanism within the RAG model? Thanks!
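One common approach is to treat retrieval as a ranking problem and measure hit rate / recall@k against annotated gold passages (MRR and nDCG are the graded variants). A minimal sketch with made-up passage ids:

```python
def recall_at_k(retrieved_ids, gold_ids, k=5):
    """Fraction of queries for which at least one gold passage
    appears among the top-k retrieved passages (hit rate @ k)."""
    hits = sum(
        1 for retrieved, gold in zip(retrieved_ids, gold_ids)
        if set(retrieved[:k]) & set(gold)
    )
    return hits / len(gold_ids)

# Hypothetical example: three queries, their top-5 retrieved passage ids,
# and the annotated gold passage ids for each.
retrieved = [[3, 7, 1, 9, 4], [2, 8, 5, 0, 6], [1, 4, 3, 2, 9]]
gold = [[7], [9], [4, 3]]
print(recall_at_k(retrieved, gold, k=5))  # -> 0.666..., 2 of 3 queries hit
```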
@sandeepunnikrishnan9885 1 year ago
Is it possible for you to add a link to the PPT presentation used for this video in the description?
@alelasantillan 3 years ago
Great video!
@machinelearningdojo 4 years ago
Ooooooooooo 🙌😎
@MrjbushM 4 years ago
nice!
@bivasbisht1244 1 year ago
I want to know if RAG is a model, a framework, or just an approach. The question might be dumb to ask, but I really want to know.
@DEEPTHIRAJAGOPALGAJENDRA 9 months ago
approach
@riennn2 4 years ago
Nice one
@connor-shorten 4 years ago
Thank you!