Dude, you're a hero for making these videos! Definitely earned a subscription from me.
@FaultyTwo 1 year ago
I'm really glad a channel like yours exists.
@DED_Search 10 months ago
I was reading the Zephyr paper. It led me to the DPO paper, which landed me on your channel. I'm so happy I found it. Keep it up!
@Anonymous-bu9ch 7 months ago
Absolute Goldmine!!
@Poqets 1 year ago
Finally, a video on this!
@prof_shixo 1 year ago
Nice one, thanks for sharing. Replacing the reward model with the MLE term looks appealing when we have ground truth (a generated reply paired with a reference reply). Still, the main advantage of a reward model is that it can score new samples when no ground truth is available (i.e., no reference replies during self-play training on new datasets), so how would the MLE loss work in such scenarios?
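For context on the loss being discussed: DPO replaces the explicit reward model with an implicit reward defined by the log-ratio between the policy and a frozen reference model, evaluated on preference pairs. A minimal sketch of that objective (variable names are illustrative, not taken from the paper or the video):

```python
import math

def dpo_loss(pi_chosen_logp, pi_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-pair DPO loss from sequence log-probabilities.

    The implicit reward of a response is the beta-scaled log-ratio
    of the trained policy to the frozen reference model.
    """
    r_chosen = beta * (pi_chosen_logp - ref_chosen_logp)
    r_rejected = beta * (pi_rejected_logp - ref_rejected_logp)
    margin = r_chosen - r_rejected
    # Bradley-Terry negative log-likelihood of preferring chosen over rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Note that the loss needs a chosen/rejected pair, which is exactly why it depends on labeled preference data rather than scoring arbitrary new samples the way a standalone reward model can.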
@unclecode 1 year ago
Thank you so much! I truly enjoyed your video and the way you explain things. There are moments, however, where I find myself a bit lost when you don't delve deeper, and I wish you could expand on those points. Could you recommend a video or article to help me become more familiar with the basic concepts behind papers in this field? Understanding these basics would make it much easier for me to grasp the material. I feel there are gaps in my knowledge that, if I worked on them, would make these papers and their mathematical notation easier to understand. I'm focusing on papers in the area of transformers and model training. Any suggestions would be greatly appreciated. By the way, you definitely earned my subscription as well.
@gabrielmongaras 1 year ago
Thanks for the feedback! I usually try to hit the sweet spot between assumed knowledge and what I put in the videos so they don't get too long. I'll keep that in mind for future videos! I usually assume knowledge of MLPs (feed-forward networks), convolutional neural networks, sometimes attention ("Attention Is All You Need"), and the normal training process along with the loss functions that go with it, such as NLL and MSE. Most of these concepts are covered in intro ML classes such as Andrew Ng's, or in a textbook. As for textbooks, I don't know of one that stands out among the rest; most are probably similar to each other. As for mathematical notation, I'm not a mathematician, but reading papers in general has helped me get a better understanding of it, though I still lack a lot of knowledge of basic mathematical concepts. Hope this helps!
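For anyone filling in those prerequisites, the two loss functions mentioned can be written in a few lines. A minimal illustration (not code from the video):

```python
import math

def mse(predictions, targets):
    # Mean squared error: average squared difference, used for regression
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(predictions)

def nll(probs, target_index):
    # Negative log-likelihood of the correct class, used for classification;
    # probs is a probability distribution over classes
    return -math.log(probs[target_index])
```

Language-model training uses NLL over the vocabulary at each token position, which is the cross-entropy loss you see in most papers.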
@YashVerma-ii8lx 9 months ago
Hey @Gabriel, can you please clear up a doubt: at 12:58, why can't we just directly backpropagate the loss like we do in simple fine-tuning? I'm not understanding it. Could you share any relevant resources?
@MacProUser99876 9 months ago
How DPO works under the hood: kzbin.info/www/bejne/gKaQoXmAg8uCnLs