This is the best video on summarization! Thank you so much for such a simplified explanation!
@edlitera · 2 years ago
Glad it was helpful!
@lalithsasubilli8103 · 1 year ago
Very helpful! Finally, a video that helps!
@BinySD1 · 2 years ago
Thank you very much! I followed every part of this video and got a much better idea of how this works. If you would be willing to help with abstractive text summarization for low-resource local languages, that would be the greatest help of all for those of us dealing with this issue, like me. Thank you again!
@jeremyheng8573 · 2 years ago
Good tutorial! Looking forward to new tutorial videos.
@rockpaper5578 · 1 year ago
Do I need to install PyTorch and SentencePiece for Pegasus?
@guimaraesalysson · 1 year ago
Great video
@yasminesmida2585 · 4 months ago
Can I use Pegasus to generate a French summary?
@-PhamAnhNguyet · 1 year ago
Can you show how to convert a long text into an outline?
@an56533 · 2 years ago
Very nice! Please make a video on summarizing texts longer than this.
@jaimebaquerizo5475 · 1 year ago
Incredible video!! Do you know how I can fix this warning? With the summarization pipeline I still get only one sentence, like with Pegasus: UserWarning: Neither `max_length` nor `max_new_tokens` has been set, `max_length` will default to 64 (`self.config.max_length`). Controlling `max_length` via the config is deprecated and `max_length` will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
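A minimal sketch of one way to address this warning, assuming the Hugging Face `transformers` summarization pipeline shown in the video; the model name and token limit below are illustrative assumptions, not necessarily the exact settings from the video:

```python
from transformers import pipeline

# Illustrative checkpoint; substitute the model used in the video.
summarizer = pipeline("summarization", model="google/pegasus-xsum")

text = "Replace this with the document you want to summarize."

# Passing max_new_tokens explicitly (as the warning recommends) overrides the
# 64-token default coming from the model config and removes the warning.
summary = summarizer(text, max_new_tokens=128, do_sample=False)
print(summary[0]["summary_text"])
```

Note also that `google/pegasus-xsum` is fine-tuned on the XSum dataset, which targets single-sentence summaries, so a checkpoint trained on a dataset with longer reference summaries may help if one sentence is not enough.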
@couples_I_like_are_real · 2 years ago
Thank you for this great tutorial, but I really want to ask a question: do you know how to deal with the kernel dying when I download the model with pegasus_model = PegasusForConditionalGeneration.from_pretrained(model_name)? If you have time to reply to my question, I'd really appreciate it. Thank you!
@cesaranzola6950 · 2 years ago
Great tutorial, thanks so much! Is this an efficient way of summarizing very long documents, say 20 pages?
@edlitera · 2 years ago
For longer documents, you will need a different approach. Here are a few ideas:
Option 1. Multi-step summarization: first use an extractive summarization technique (e.g. TextRank) to 'pre-summarize' the text down to a length a typical transformer model can handle, then use a transformer model to summarize the 'pre-summarized' text.
Option 2. Chunking: split your text into sections small enough to be processed by a transformer, summarize each chunk, then combine the summaries (a minimal sketch follows below).
Option 3. Look at different models (for example, Longformer). However, even these models have an upper limit, so the previously mentioned techniques scale better for very long text.
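To illustrate Option 2, here is a minimal chunking sketch, assuming the Hugging Face `transformers` summarization pipeline from the video; the model name, chunk size, and summary lengths are illustrative assumptions rather than the exact settings used in the video:

```python
from transformers import pipeline

# Illustrative checkpoint; swap in the model used in the video.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def summarize_long_text(text, chunk_words=400, max_length=80, min_length=20):
    """Naive chunking: split the text into ~chunk_words-word pieces,
    summarize each piece, then join the partial summaries."""
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    partial_summaries = [
        summarizer(chunk, max_length=max_length, min_length=min_length,
                   do_sample=False)[0]["summary_text"]
        for chunk in chunks
    ]
    # For very long documents, the combined partial summaries can be passed
    # through the summarizer once more as a second pass.
    return " ".join(partial_summaries)

long_document = "Replace this with a long document, e.g. 20 pages of text."
print(summarize_long_text(long_document))
```

Splitting on raw word counts is crude; splitting on sentence or paragraph boundaries usually gives more coherent per-chunk summaries.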
@salamaawad9135 · 3 years ago
Do you have a full tutorial on NLP using PyTorch, or on deep learning and ML? Thanks!
@edlitera · 3 years ago
Hi, we are planning on launching an NLP course early next year. If you are interested in this topic, subscribe to our channel to get announcements and see more tutorials like this one.