Reinforcement Learning through Human Feedback - EXPLAINED! | RLHF

11,107 views

CodeEmporium

5 months ago

We talk about reinforcement learning through human feedback (RLHF), the technique that ChatGPT, among other applications, makes use of.
ABOUT ME
⭕ Subscribe: kzbin.info...
📚 Medium Blog: / dataemporium
💻 Github: github.com/ajhalthor
👔 LinkedIn: / ajay-halthor-477974bb
PLAYLISTS FROM MY CHANNEL
⭕ Reinforcement Learning: • Reinforcement Learning...
⭕ Natural Language Processing: • Natural Language Proce...
⭕ Transformers from Scratch: • Natural Language Proce...
⭕ ChatGPT Playlist: • ChatGPT
⭕ Convolutional Neural Networks: • Convolution Neural Net...
⭕ The Math You Should Know: • The Math You Should Know
⭕ Probability Theory for Machine Learning: • Probability Theory for...
⭕ Coding Machine Learning: • Code Machine Learning
MATH COURSES (7 day free trial)
📕 Mathematics for Machine Learning: imp.i384100.net/MathML
📕 Calculus: imp.i384100.net/Calculus
📕 Statistics for Data Science: imp.i384100.net/AdvancedStati...
📕 Bayesian Statistics: imp.i384100.net/BayesianStati...
📕 Linear Algebra: imp.i384100.net/LinearAlgebra
📕 Probability: imp.i384100.net/Probability
OTHER RELATED COURSES (7 day free trial)
📕 ⭐ Deep Learning Specialization: imp.i384100.net/Deep-Learning
📕 Python for Everybody: imp.i384100.net/python
📕 MLOps Course: imp.i384100.net/MLOps
📕 Natural Language Processing (NLP): imp.i384100.net/NLP
📕 Machine Learning in Production: imp.i384100.net/MLProduction
📕 Data Science Specialization: imp.i384100.net/DataScience
📕 Tensorflow: imp.i384100.net/Tensorflow

Comments: 11
@RameshKumar-ng3nf 11 days ago
Brilliant, bro 👌. Excellent explanation. I never understood RLHF from reading so many books and notes. Your examples are GREAT and simple to understand 👌 I am new to your channel and have subscribed.
@neetpride5919 5 months ago
Great video! I have a few questions:
1) Why do we need to manually train the reward model with human feedback if the point is to evaluate responses of another pretrained model? Can't we cut out the reward model altogether, rate the responses directly using human feedback to generate a loss value for each response, then backpropagate on that? Does it require less human input to train the reward model than to train the GPT model directly?
2) When backpropagating the loss, do you need to do recurrent backpropagation for a number of steps equal to the length of the token output?
3) Does the loss value apply equally to every token that is output? It seems like this would overly punish some words, e.g. if the question starts with "why", the response is likely to start with "because" regardless of what comes after. Does RLHF only work with sentence embeddings rather than word embeddings?
@0xabaki 3 months ago
1) I think the point is to minimize the human feedback volume: humans give just enough responses to train a model that handles all future feedback. That way humans won't always have to give feedback; instead they lay the basis, and can probably come back to re-evaluate what the reward model is doing, so it is still acting human. (2) and (3) seem more specific to the architecture of ChatGPT than to either PPO or RLHF; I would look into the other GPT-specific videos he made.
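To make the point above concrete, here is a minimal sketch of training a reward model on human preference pairs with a Bradley-Terry style pairwise loss; once trained, it can score responses in place of a human. The tiny network, embedding dimension, and random tensors are illustrative assumptions, not the video's actual implementation:

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a (pooled) response embedding to a scalar reward."""
    def __init__(self, emb_dim: int = 16):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(emb_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.score(emb).squeeze(-1)

reward_model = RewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Stand-ins for embeddings of response pairs where human labelers
# preferred `chosen` over `rejected` (in practice these would come
# from the language model being evaluated).
chosen = torch.randn(8, 16)
rejected = torch.randn(8, 16)

for step in range(100):
    # Bradley-Terry pairwise loss: push r(chosen) above r(rejected).
    loss = -torch.nn.functional.logsigmoid(
        reward_model(chosen) - reward_model(rejected)
    ).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A relatively small set of human comparisons can train this scorer, which then provides reward signal for unlimited policy updates; that is the sense in which the reward model economizes on human input.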
@manigoyal4872 5 months ago
What about the generation of rewards? Will there be another model to check the relevance and precision of the answer, since we have a lot of data?
@sangeethashowrya0318 2 months ago
Sir, please make a video on function approximation in RL.
@theartofwar1750 2 months ago
At 6:58, you have an error: PPO is not used to build the reward model.
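For context on the correction above: in RLHF, PPO updates the policy (the language model) against the learned reward; it is not how the reward model itself is built. Below is a minimal sketch of PPO's clipped surrogate objective; the tensor values are made-up numbers for illustration, not anything from the video:

```python
import torch

def ppo_clip_loss(logp_new: torch.Tensor,
                  logp_old: torch.Tensor,
                  advantages: torch.Tensor,
                  clip_eps: float = 0.2) -> torch.Tensor:
    # Probability ratio between the current and the behavior policy.
    ratio = torch.exp(logp_new - logp_old)
    # Clipping the ratio keeps each update close to the old policy.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    # Maximize the clipped surrogate, i.e. minimize its negation.
    return -torch.min(unclipped, clipped).mean()

# Toy example: advantages would be derived from reward-model scores.
logp_old = torch.log(torch.tensor([0.3, 0.5, 0.2]))
logp_new = torch.log(torch.tensor([0.4, 0.4, 0.2]))
advantages = torch.tensor([1.0, -0.5, 0.3])
print(ppo_clip_loss(logp_new, logp_old, advantages))
```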
@manigoyal4872 5 months ago
It acts as a randomizing factor, depending on whom you are getting feedback from.
@0xabaki 3 months ago
Haha, quiz time again: 0) when the person knows me well, 1) D, 2) B if proper human feedback, 3) C
@ayeshariaz3382 1 month ago
Where to get your slides?
@manigoyal4872 5 months ago
Aren't we users the humans in the feedback loop for OpenAI?
@akzytr 5 months ago
Yeah, however OpenAI has the final say on what feedback goes through.