GPT-2: Language Models are Unsupervised Multitask Learners

  30,436 views

Yannic Kilcher

1 day ago

Comments: 22
@bobsalita3417 5 years ago
We need more paper talkers such as Yannic. Yes, Two Minute Papers is great, but there are many papers worthy of discussion, many opinions needed, and many worthy methods of analysis.
@jcorey333 11 months ago
It's such a shame that the field stagnated after this. Nothing bigger or better than GPT2. Maybe someday.
@michaelcarlon1831 5 years ago
These paper-talks are great!
@xaxfixho 4 months ago
We need more of this
@eab4984 2 years ago
Would be awesome if Yannic made the video on Byte Pair Encoding mentioned at 18:30.
@ben2258 4 years ago
Did you ever end up making a video that discusses byte pair encoding?
@YannicKilcher 4 years ago
Not yet :)
@ambujmittal6824 4 years ago
kzbin.info/www/bejne/Y2Gsm3ljbLR1adU&ab_channel=Rasa Here you go. :)
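For readers who don't want to wait for a video: byte pair encoding, the subword tokenization GPT-2 uses, repeatedly merges the most frequent adjacent symbol pair. A minimal sketch (the toy word frequencies and merge count are invented for illustration, not taken from the paper):

```python
# Minimal byte pair encoding (BPE) training sketch: start from characters,
# repeatedly merge the most frequent adjacent symbol pair.
from collections import Counter

def bpe_train(word_freqs, num_merges=10):
    """Learn merge rules from a {word: frequency} dict."""
    # Represent each word as a tuple of symbols (initially characters).
    vocab = {tuple(w): c for w, c in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        # Count all adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for syms, count in vocab.items():
            for a, b in zip(syms, syms[1:]):
                pairs[(a, b)] += count
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent pair
        merges.append(best)
        # Rewrite every word, fusing each occurrence of the best pair.
        new_vocab = {}
        for syms, count in vocab.items():
            out, i = [], 0
            while i < len(syms):
                if i + 1 < len(syms) and (syms[i], syms[i + 1]) == best:
                    out.append(syms[i] + syms[i + 1])
                    i += 2
                else:
                    out.append(syms[i])
                    i += 1
            new_vocab[tuple(out)] = count
        vocab = new_vocab
    return merges

merges = bpe_train({"low": 5, "lower": 2, "lowest": 2}, num_merges=3)
print(merges)  # first merge is ('l', 'o'): "lo" occurs in all 9 word instances
```

Real GPT-2 BPE additionally operates on raw bytes and caps merges within word boundaries; this sketch only shows the core merge loop.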
@dongilseo5727 3 years ago
Thanks for sharing this video. I just found that GPT2 models will be available soon at Ainize Teachable NLP for free fine-tuning.
@dongilseo5727 3 years ago
@Web Front-end You can just search with 'teachable nlp'! (it seems links are auto-deleted on youtube)
@kumarsubham2078 4 years ago
Great video! Btw, is the model released now, and are the weights available?
@YannicKilcher 4 years ago
Yes, I think so
@harmitchhabra989 3 years ago
I think a neural network is essentially a function that we can't express explicitly. The function is generated and fine-tuned using the training data; then we pass it an input whose output we want to know, and since the function was fitted to the dataset we gave it, we can expect a prediction similar to the dataset. Essentially, NNs can be used to roughly map huge pieces of data to each other and then use that mapping to obtain outputs for inputs whose outputs are otherwise unknown to us. Also, to check whether a given input is similar to the other inputs in our dataset, we can feed it to a trained neural network and look at the network's accuracy to judge its similarity to the training inputs. This could be used for a recommendation system like YouTube's.
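The comment above frames a network as a parameterized function fitted to data and then reused on new inputs. A minimal NumPy sketch of that framing (the toy target x², the one-hidden-layer architecture, and all hyperparameters are invented for illustration):

```python
# A neural network as a tunable function: fit a tiny one-hidden-layer net
# to samples of an "unknown" target function, here y = x**2.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 32).reshape(-1, 1)  # training inputs
y = X ** 2                                 # target outputs the net must learn

# Parameters of f(x) = tanh(x @ W1 + b1) @ W2 + b2
W1, b1 = rng.normal(0, 0.5, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def loss():
    return float(np.mean((forward(X)[1] - y) ** 2))

lr, loss_before = 0.1, loss()
for _ in range(5000):                      # plain full-batch gradient descent
    h, pred = forward(X)
    g = 2 * (pred - y) / len(X)            # dLoss/dPred
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = g @ W2.T * (1 - h ** 2)           # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(loss_before, loss())  # loss drops: the function now tracks x**2 on the data
```

After training, calling `forward` on inputs the network never saw returns predictions shaped by the dataset it was fitted to, which is the "mapping" the comment describes.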
@neuron8186 3 years ago
OpenAI is more like ClosedAI
@ambujmittal6824 4 years ago
How can we say that GPT is not simply overfitting, since it has literally seen so much data that any downstream task would already have been covered in the training dataset?
@YannicKilcher 3 years ago
Not necessarily. They do deduplication against the downstream tasks.
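For context, the GPT-2 paper quantifies train/test contamination by checking 8-gram overlap between WebText and each evaluation set. A simplified set-based sketch of that overlap check (the paper itself uses Bloom filters of 8-grams; the example sentences here are invented):

```python
# Simplified n-gram overlap check in the spirit of GPT-2's contamination
# analysis: what fraction of a test document's 8-grams appear in training data?
def ngrams(text, n=8):
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def overlap(test_doc, train_ngrams, n=8):
    """Fraction of the test doc's n-grams that also occur in the training set."""
    test = ngrams(test_doc, n)
    if not test:
        return 0.0  # doc shorter than n tokens: nothing to compare
    return len(test & train_ngrams) / len(test)

train = ngrams("the quick brown fox jumps over the lazy dog every single day", 8)
print(overlap("the quick brown fox jumps over the lazy dog", train, 8))  # 1.0
```

A high overlap fraction flags a downstream example as likely memorized rather than generalized, which is the concern the question raises.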
@user-or7ji5hv8y 5 years ago
Is there a good video that explains how transformers work?
@YannicKilcher 5 years ago
kzbin.info/www/bejne/n3XYnZulhpejqNE
@Xnaarkhoo 2 years ago
First ten minutes no substance - don’t have more time to waste here
@mannacharya4088 1 year ago
7:10 got me rolling on the floor laughing
@meditationMakesMeCranky 5 years ago
Hilarious!
GPT-3: Language Models are Few-Shot Learners (Paper Explained)
1:04:30
Yannic Kilcher
213K views
Ilya Sutskever - GPT-2
38:43
Matroid
10K views
GPT2 Explained!
11:12
Connor Shorten
29K views
LLaMA: Open and Efficient Foundation Language Models (Paper Explained)
41:07
Big Bird: Transformers for Longer Sequences (Paper Explained)
34:30
Yannic Kilcher
24K views
L19.5.2.4 GPT-v2: Language Models are Unsupervised Multitask Learners
9:03
Sebastian Raschka
4.5K views
The True Story of How GPT-2 Became Maximally Lewd
13:54
Rational Animations
1.9M views