ORPO: Monolithic Preference Optimization without Reference Model (Paper Explained)

18,250 views

Yannic Kilcher

22 days ago

Paper: arxiv.org/abs/2403.07691
Abstract:
While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence. In this paper, we study the crucial role of SFT within the context of preference alignment, emphasizing that a minor penalty for the disfavored generation style is sufficient for preference-aligned SFT. Building on this foundation, we introduce a straightforward and innovative reference model-free monolithic odds ratio preference optimization algorithm, ORPO, eliminating the necessity for an additional preference alignment phase. We demonstrate, both empirically and theoretically, that the odds ratio is a sensible choice for contrasting favored and disfavored styles during SFT across the diverse sizes from 125M to 7B. Specifically, fine-tuning Phi-2 (2.7B), Llama-2 (7B), and Mistral (7B) with ORPO on the UltraFeedback alone surpasses the performance of state-of-the-art language models with more than 7B and 13B parameters: achieving up to 12.20% on AlpacaEval2.0 (Figure 1), 66.19% on IFEval (instruction-level loose, Table 6), and 7.32 in MT-Bench (Figure 12). We release code and model checkpoints for Mistral-ORPO-α (7B) and Mistral-ORPO-β (7B).
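As a rough illustration of the objective the abstract describes, here is a minimal sketch of the per-pair ORPO loss, L = L_SFT + λ·L_OR, built from the odds ratio of the chosen and rejected responses. The λ value and the length-averaged log-probability inputs are assumptions for illustration, not values from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def log_odds(logp):
    # odds(y|x) = p / (1 - p); input is log p, output is log odds
    p = math.exp(logp)
    return logp - math.log(1.0 - p)

def orpo_loss(logp_w, logp_l, lam=0.1):
    # Sketch of the per-pair ORPO objective: the usual SFT (NLL) loss on
    # the chosen response plus a weighted odds-ratio penalty contrasting
    # the chosen (y_w) and rejected (y_l) responses.
    # logp_w / logp_l: assumed length-averaged log-probabilities of the
    # chosen and rejected responses; lam is a hypothetical weight.
    l_sft = -logp_w
    l_or = -math.log(sigmoid(log_odds(logp_w) - log_odds(logp_l)))
    return l_sft + lam * l_or
```

Because there is no reference model, the penalty depends only on the policy's own probabilities, which is what lets ORPO fold alignment into the SFT pass.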
Authors: Jiwoo Hong, Noah Lee, James Thorne
Links:
Homepage: ykilcher.com
Merch: ykilcher.com/merch
YouTube: / yannickilcher
Twitter: / ykilcher
Discord: ykilcher.com/discord
LinkedIn: / ykilcher
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: www.subscribestar.com/yannick...
Patreon: / yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

Comments: 46
@r9999t
@r9999t 20 days ago
Glad you're back to technical content this time. Any AI YouTuber can give us the latest AI news, but you're just about the only one who can give technical insight into the stories.
@lone0017
@lone0017 20 days ago
6 videos in 7 days, I'm having a holiday and this is such a perfect-timing treat.
@EternalKernel
@EternalKernel 20 days ago
Thank you for being awesome Yannic, I send people from the classes that I "TA" for to you because you're reliably strong with your analysis.
@justheuristic
@justheuristic 20 days ago
The main loss function (7) looks like it can be meaningfully simplified with school-level math.

L_or = -log(sigm(log(odds(y_w|x) / odds(y_l|x)))), where sigm(a) = 1/(1 + exp(-a)) = exp(a) / (1 + exp(a)).

Let's assume that both odds(y_w|x) and odds(y_l|x) are positive (because softmax). By plugging in the sigmoid, we get

L_or = -log( exp(log(odds(y_w|x) / odds(y_l|x))) / (1 + exp(log(odds(y_w|x) / odds(y_l|x)))) )

Note that exp(log(odds(y_w|x) / odds(y_l|x))) = odds(y_w|x) / odds(y_l|x). We use this to simplify:

L_or = -log( [odds(y_w|x) / odds(y_l|x)] / (1 + odds(y_w|x) / odds(y_l|x)) )

Finally, multiply both numerator and denominator by odds(y_l|x) to get

L_or = -log( odds(y_w|x) / (odds(y_w|x) + odds(y_l|x)) )

Intuitively, this is the negative log of (the odds of the good response) / (odds of the good response + odds of the bad response). If you minimize the average loss over multiple texts, it's the same as maximizing the odds that the model chooses the winning response in every pair (of winning + losing responses).
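The algebraic simplification above can be sanity-checked numerically. A minimal sketch, where `p_w` and `p_l` are assumed stand-in values for p(y_w|x) and p(y_l|x):

```python
import math

def log_odds(p):
    # odds(p) = p / (1 - p); its log is the logit
    return math.log(p / (1.0 - p))

def orpo_loss_original(p_w, p_l):
    # form (7): -log(sigmoid(log(odds_w / odds_l)))
    z = log_odds(p_w) - log_odds(p_l)
    return -math.log(1.0 / (1.0 + math.exp(-z)))

def orpo_loss_simplified(p_w, p_l):
    # simplified form: -log(odds_w / (odds_w + odds_l))
    odds_w = p_w / (1.0 - p_w)
    odds_l = p_l / (1.0 - p_l)
    return -math.log(odds_w / (odds_w + odds_l))

# the two forms agree for any probabilities strictly inside (0, 1)
for p_w, p_l in [(0.9, 0.1), (0.5, 0.5), (0.2, 0.7)]:
    assert abs(orpo_loss_original(p_w, p_l) - orpo_loss_simplified(p_w, p_l)) < 1e-9
```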
@peterszilvasi752
@peterszilvasi752 19 days ago
Good job! I suppose you mean `odds(y_l|x)` instead of `odds(y_l)` in the final equation.
@justheuristic
@justheuristic 19 days ago
@@peterszilvasi752 thanks! good catch :) /* fixed the previous comment */
@lucidraisin
@lucidraisin 19 days ago
very cool! thank you for this
@peach412
@peach412 20 days ago
26:30 that 'really?' and the following struggle with basic math is WAAAAY too relatable
@borisbondarenko314
@borisbondarenko314 18 days ago
I really like the more technical content from you. I usually read tech news on Telegram, and your ML News videos are great but fairly ordinary and simple. Paper explanations like this are a real contribution to the DS community: such videos seed new ideas and deepen understanding of the field for those who try to dive in. Of course they're less popular because the material is harder for the audience, but they're much more interesting. So thank you for this format.
@tensorturtle1566
@tensorturtle1566 20 days ago
Great to see research from my homeland of South Korea represented!
@Dogo.R
@Dogo.R 20 days ago
Woo allegiance to tribes!!... .. ..
@jawadmansoor6064
@jawadmansoor6064 20 days ago
do you know Seoul?
@cvabds
@cvabds 20 days ago
There is only one Korea
@blender6426
@blender6426 20 days ago
Nice, I was waiting for this after you mentioned ORPO in ML News :))
@I-0-0-I
@I-0-0-I 20 days ago
Thanks for explaining basic terms along with the more complex stuff, for dilettantes like myself. Cheers.
@kaikapioka9711
@kaikapioka9711 19 days ago
Thx again yan! 🎉
@fearnworks
@fearnworks 20 days ago
You are on fire!
@max0x7ba
@max0x7ba 5 days ago
That log of probability is also a power transform often used to narrow or widen a distribution.
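A minimal sketch of that idea, assuming a small discrete distribution and a power β: raising probabilities to the power β and renormalizing is equivalent to scaling log-probabilities by β, which sharpens (β > 1) or flattens (β < 1) the distribution.

```python
def power_transform(probs, beta):
    # Raise each probability to the power beta and renormalize.
    # Equivalent to multiplying log-probabilities by beta
    # (i.e. sampling at temperature 1/beta).
    powered = [p ** beta for p in probs]
    total = sum(powered)
    return [p / total for p in powered]

probs = [0.7, 0.2, 0.1]
sharper = power_transform(probs, 2.0)   # beta > 1 narrows the distribution
flatter = power_transform(probs, 0.5)   # beta < 1 widens it
assert sharper[0] > probs[0] > flatter[0]
```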
@meselfobviouslyme6292
@meselfobviouslyme6292 20 days ago
Thank you Mr Kilcher for delving into the paper "ORPO: Monolithic Preference Optimization without Reference Model".
@jondo7680
@jondo7680 6 hours ago
You should make a video just focusing on log and explaining its role in neural networks.
@gauranshsoni4011
@gauranshsoni4011 20 days ago
Keep them comin
@jellyfishnexus3132
@jellyfishnexus3132 20 days ago
Nice!
@MyCiaoatutti
@MyCiaoatutti 19 days ago
"Specifically, 1 - p(y|x) in the denominators amplifies the gradients when the corresponding side of the likelihood p(y|x) is low". I think (1 - p(y|x)) has two different meanings here: it can be the result of differentiation by coincidence and also the "corresponding side" of the likelihood, i.e., 1 - p(y|x). So, when it says the "corresponding side" of p(y|x) is low, it means that 1 - p(y|x) is low.
@wwkk4964
@wwkk4964 20 days ago
What's going on, is it Yannic bonanza time of the year! Loving these addictive videos
@yannickpezeu3419
@yannickpezeu3419 20 days ago
I liked the self-deprecation at 32:00 haha
@chrise8153
@chrise8153 20 days ago
Wow, good timing to go on YouTube
@axelmarora6743
@axelmarora6743 20 days ago
great! now apply ORPO to a reward model and round we go!
@Zed_Oud
@Zed_Oud 14 days ago
27:57 "the corresponding side" Maybe they mistakenly switched the y_w and y_l givens in the denominators?
@mantasorantas5289
@mantasorantas5289 20 days ago
Would be interesting to see how it compares to KTO. I'd guess that KTO outperforms it and is easier to implement, as you don't need pairs of inputs.
@SLAM2977
@SLAM2977 19 days ago
There seems to be a conceptual problem: where are the preferences coming from, given that they are expressed over multiple responses to the same prompt? Suppose we wish to fine-tune a foundation model for chat. We would not have the preferences before having done SFT and gathered some responses on chat-template-formatted prompts; that would force us to do SFT first and then SFT + odds-ratio loss. Doable, but surely not a single-pass approach.
@syeshwanth6790
@syeshwanth6790 19 days ago
Where do y_w and y_l come from? Are they from the training dataset, or does the LLM being trained generate them, which are then labelled by humans or reward models as W and L?
@drdca8263
@drdca8263 20 days ago
0:52 : I wish we had a different term for this other than “alignment”
@TheRyulord
@TheRyulord 20 days ago
"Preference tuning" is used to describe it pretty often
@drdca8263
@drdca8263 20 days ago
@@TheRyulord thanks!
@rectomgris
@rectomgris 20 days ago
makes me think of PPO
@thunder89
@thunder89 20 days ago
The comparison at the end between the OR and the PR should also discuss the influence of the log-sigmoid, right? And, more importantly, how the gradients for the winning and losing outputs would actually look with these simulated pairs... It feels a bit hand-wavy why the log-sigmoid of the OR should be the target...
@john_blues
@john_blues 20 days ago
I don't even know what the title of this video means 😵‍💫. But I'm going to watch anyway.
@Jason-lm2yq
@Jason-lm2yq 17 days ago
Can you do one on the Kolmogorov-Arnold Network from MIT?
@davidhauser7537
@davidhauser7537 7 days ago
Yannic, can you do the xLSTM paper?
@amber9040
@amber9040 20 days ago
I feel like AI models have gotten more stale and same-y ever since RLHF became the norm. Playing around with GPT-3 was wild times. Hopefully alignment moves in a direction with more diverse ranges of responses in the future, and less censorship in domains where it's not needed.
@dinoscheidt
@dinoscheidt 20 days ago
LLMs are what Machine Learning has always been: input output. Quality data makes the cake…. no matter how many fancy mixers you bring to the table.
@Embassy_of_Jupiter
@Embassy_of_Jupiter 20 days ago
why hat, indeed
@iworeushankaonce
@iworeushankaonce 1 day ago
*posts videos almost every day* *KAN paper dropped, disappears for 2 weeks* I hope you're alright man 🫂🤗