Thanks for putting together the description and implications of the paper as well as demoing the code. I agree with your comments about keeping GNNs in mind for future development, as it appears that all of the various forms of RAG will likely only take us so far. But, in the interim, it's great to see improvements like SELF-RAG. Along with these methods, are you familiar with David Shapiro's approach using SPRs (Sparse Priming Representations)? I'm wondering if it could be used as a compression strategy, reducing the number of tokens used per self-reflective/critique step. Perhaps as part of the critique, we could also generate SPRs that could then be trained into the main model, thus reducing the number of times the main model requests a retrieval action.
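A minimal sketch of the compression idea, purely hypothetical: `stub_llm_compress` stands in for a real LLM call that would distill a critique step's context into a few priming statements (here it just keeps the first sentence of each paragraph as a cheap proxy), and `token_count` is a crude whitespace estimate rather than a real tokenizer. Nothing here is from the SELF-RAG or SPR codebases.

```python
def stub_llm_compress(text: str, max_points: int = 3) -> list[str]:
    # Stand-in for an LLM that distills text into a few priming statements.
    # As a cheap proxy, keep the first sentence of each paragraph.
    points = []
    for para in text.split("\n\n"):
        first_sentence = para.strip().split(". ")[0]
        if first_sentence:
            points.append(first_sentence.rstrip(".") + ".")
    return points[:max_points]

def spr_compress(critique_context: str) -> str:
    # Turn a verbose critique context into a short SPR-style bullet block.
    points = stub_llm_compress(critique_context)
    return "\n".join(f"- {p}" for p in points)

def token_count(text: str) -> int:
    # Crude whitespace tokenizer, just to estimate savings.
    return len(text.split())

context = (
    "The retrieved passage discusses transformer attention in depth. "
    "It covers scaled dot-product attention and multi-head layouts.\n\n"
    "The draft answer cites the passage correctly. "
    "No additional retrieval appears necessary for this segment."
)
spr = spr_compress(context)
print(token_count(context), "->", token_count(spr))
```

The point is only that each critique pass could carry forward the compressed SPR instead of the full retrieved context, so later reflection steps spend fewer tokens.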
@AGIBreakout 1 year ago
I just read about SPRs. I agree with your comments. I'd like to see more people putting together code from different sources. LongFormer / Self-Taught Optimizer (STOP) / SPR might be a good combination of complementary codebases.
@damujen 1 year ago
Best AI channel, thank you
@stuartpatterson1617 1 year ago
AGI will only use the present LLMs like an encyclopaedia. Great content, cheers!
@arsalino1116 1 year ago
sure..
@bado_badooo 1 year ago
Great video!
@StoianAtanasov 1 year ago
Thanks for the wonderful presentation.
@matten_zero 1 year ago
Self RAG + Dave Shapiro's SPR is THE WAY
@henkhbit5748 1 year ago
Love Self-RAG, because it's open source. I especially love your ice cream 😂 In which Hilbert space can I buy strawberries with Riemann flavor and an Escher twist? 🤔