Welcome back! It's been a long time since you posted on YouTube. We were waiting for you. :) Thank you for sharing your knowledge.
@connor-shorten 4 years ago
Thank you so much! Really glad you liked the video as well!
@arielf376 4 years ago
Glad to see you again. I was getting worried. Your videos are great, thanks so much for the content.
@connor-shorten 4 years ago
Thank you so much!
@whatsinthepapers6112 4 years ago
New background, who dis?? Great to see you back making videos!
@connor-shorten 4 years ago
Lol, Thank you so much!
@سودانتوك 4 years ago
Welcome back, thanks for the great work you are doing.
@connor-shorten 4 years ago
Thank you so much!
@katerinamalakhova9872 4 years ago
We’ve missed you so much 🤍
@connor-shorten 4 years ago
Thank you so much!
@DistortedV12 3 years ago
Really great paper. To some extent all of NLP can be treated as a QA task.
@connor-shorten 3 years ago
Thanks! I think the "Text-in, Text-out" unifying framework for all tasks really set the stage for this, interesting stuff!
@himatammineedi6307 3 years ago
Can you explain why this RAG model is so popular? It seems like all they've done is take a pre-trained retrieval model, connect it to a pre-trained seq2seq model, and train the two together. They also just concatenate the retrieved passages with the original input before feeding it to the seq2seq model. This all seems like really basic stuff, so am I missing something here? You could also drop the retrieval model entirely if you already knew which documents you wanted the seq2seq model to use, and just concatenate those with the original input directly.
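For reference, here's roughly what that wiring looks like end to end, a minimal sketch using the public Hugging Face RAG wrappers (not the authors' original code; the checkpoint name and the dummy index are just for illustration):

```python
# Minimal RAG sketch: a DPR-style retriever bolted onto a BART seq2seq
# generator; retrieved passages are concatenated with the input question.
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq", retriever=retriever
)

inputs = tokenizer("who wrote the origin of species?", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```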
@imranq9241 2 years ago
Very nice video! Really excited to try these techniques out
@kevon217 1 year ago
Very comprehensive overview, thanks!
@thefourthbrotherkaramazov245 10 months ago
Can someone expand on the snippet at 4:45 explaining how the query works with the encoded samples? In the video, the speaker states, "And then when we ask a query, like we have this new x sequence with a mask at the end of it, we're going to treat that like a query, encode that query, and then use this maximum inner product search...". My understanding is that we encode the masked x (where x is the input) with the query encoder, then use MIPS to find essentially the most similar context passages to x, which are then concatenated with x and processed by the generator. Any help clarifying would be much appreciated.
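Concretely, I picture the retrieval step as something like this rough sketch, using the DPR question encoder and FAISS as stand-ins (not the paper's actual code; the checkpoint name and the random placeholder passage matrix are assumptions for illustration):

```python
# Sketch of the query-side retrieval: encode the masked query with the DPR
# question encoder, then run maximum inner product search (MIPS) over
# pre-computed passage embeddings.
import faiss
import numpy as np
import torch
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

tok = DPRQuestionEncoderTokenizer.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base"
)
q_encoder = DPRQuestionEncoder.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base"
)

# Placeholder for the real passage embeddings built offline with the
# document (context) encoder: shape (num_passages, 768).
passage_embeddings = np.random.randn(1000, 768).astype("float32")
index = faiss.IndexFlatIP(768)  # inner-product index == MIPS
index.add(passage_embeddings)

query = "The Divine Comedy was written by [MASK]."
with torch.no_grad():
    q_emb = q_encoder(**tok(query, return_tensors="pt")).pooler_output.numpy()

scores, passage_ids = index.search(q_emb, 5)  # top-5 most similar passages
```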
@shaz7163 4 years ago
Amazing video. What about fine-tuning this for different tasks? The authors say we do not need to fine-tune the document encoder, but what about the other components? Any comments on that?
@connor-shorten 4 years ago
Fine-tuning the document encoder would be very tedious because you would need to continually re-build the index and the centroids that speed up the nearest-neighbor search. Fine-tuning the query encoder and the BART seq2seq generator is much easier, and any NLP task can be set up this way, as in the text-input, text-output formulation. I cover that in more detail in the T5 video if you're interested.
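If you're using the Hugging Face implementation, fine-tuning looks roughly like this sketch: only the question encoder and generator parameters go into the optimizer, while the document encoder and its pre-built index are never touched (the checkpoint name and learning rate are just placeholders):

```python
# Sketch: fine-tune the query (question) encoder and the BART generator only;
# the document encoder and its pre-built FAISS index stay fixed.
import torch
from transformers import RagRetriever, RagTokenForGeneration

retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
model = RagTokenForGeneration.from_pretrained(
    "facebook/rag-token-nq", retriever=retriever
)

trainable = list(model.rag.question_encoder.parameters()) + \
            list(model.rag.generator.parameters())
optimizer = torch.optim.AdamW(trainable, lr=3e-5)
# ...standard seq2seq training loop over (input, target) pairs goes here...
```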
@shaz7163 4 years ago
@@connor-shorten Yeah, I went through those videos. So basically their doc encoder is trained over the 21 million Wikipedia passages, and that's kind of enough for the network to encode any type of document into a vector, right? My other question is: what if I want to look at a different set of documents? How should I index them?
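My guess (not verified) is that you'd embed your own passages with the DPR context encoder and build a FAISS inner-product index over them, roughly like this sketch (the checkpoint names are just the public DPR ones):

```python
# Guess at indexing a custom document set: encode each passage with the DPR
# context encoder and add the vectors to a FAISS inner-product index.
import faiss
import torch
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

ctx_tok = DPRContextEncoderTokenizer.from_pretrained(
    "facebook/dpr-ctx_encoder-single-nq-base"
)
ctx_encoder = DPRContextEncoder.from_pretrained(
    "facebook/dpr-ctx_encoder-single-nq-base"
)

my_docs = ["First custom passage ...", "Second custom passage ..."]
with torch.no_grad():
    embeddings = ctx_encoder(
        **ctx_tok(my_docs, padding=True, truncation=True, return_tensors="pt")
    ).pooler_output.numpy()

index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)
```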
@bivasbisht1244 1 year ago
@@shaz7163 did you get the answer to that ? cuz i have the same question :(
@TheAmyShows 1 year ago
Any ideas on methodologies to evaluate the performance of the retrieval mechanism within the RAG model? Thanks!
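(The simplest thing I can think of is recall@k against a small hand-labeled set of gold passages; a hypothetical helper, just to make the idea concrete:)

```python
# Hypothetical helper (not from the video): score the retriever in isolation
# with recall@k over questions for which a gold passage id has been labeled.
def recall_at_k(ranked_ids_per_question, gold_ids, k=5):
    """ranked_ids_per_question: one ranked list of passage ids per question."""
    hits = sum(1 for ranked, gold in zip(ranked_ids_per_question, gold_ids)
               if gold in ranked[:k])
    return hits / len(gold_ids)

# Example: gold passage found in the top 5 for the first question only -> 0.5
print(recall_at_k([[3, 7, 9, 1, 4], [8, 2, 6, 0, 5]], gold_ids=[9, 42]))
```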
@sandeepunnikrishnan9885 1 year ago
Is it possible for you to add a link to the PPT presentation used in this video to the description?
@alelasantillan 3 years ago
Great video!
@machinelearningdojo 4 years ago
Ooooooooooo 🙌😎
@MrjbushM 4 years ago
nice!
@bivasbisht1244 1 year ago
I want to know if RAG is a model, a framework, or just an approach? The question might be dumb to ask, but I really want to know.