OpenAI CLIP: Connecting Text and Images (Paper Explained)

135,784 views

Yannic Kilcher

Comments: 97
@hmate1119 3 years ago
This channel is insanely good. It deserves even more recognition. Great work! Subscribed.
@MachineLearningStreetTalk 3 years ago
This is a really important paper. I suggest people pay particular attention to Yannic's "robustness to data shift" section if you are short on time. I hope we can get the authors on to discuss this!
@ghostlv4030 3 years ago
The idea is so simple that it's hard to believe it is this effective! Okay, I see, NLP is really useful in vision now.
@jonatan01i 3 years ago
Thank you so much for this, especially for not keeping the promise on cutting the video short!
@ПавелШтыков-х9т 2 years ago
Man, you have a talent for explaining hard things! And your English is awesome!!
@shengyaozhuang3748 3 years ago
Interestingly, similar training methods have been explored in information retrieval for finding documents relevant to a given query. So a good application of CLIP could be searching for a photo on the internet using a text query.
@dreadfulbodyguard7288 3 years ago
Google Images?
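That retrieval use is easy to sketch with the released model. A minimal text-to-image search example, assuming the open-source CLIP package from github.com/openai/CLIP; the photo folder and the query string are placeholders:

# Minimal text-to-image retrieval sketch with the released CLIP weights.
# Assumes `pip install git+https://github.com/openai/CLIP.git` and a local
# folder of photos; the paths and the query string are placeholders.
import glob
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

paths = sorted(glob.glob("photos/*.jpg"))
images = torch.stack([preprocess(Image.open(p)) for p in paths]).to(device)

with torch.no_grad():
    image_feats = model.encode_image(images)
    text_feats = model.encode_text(
        clip.tokenize(["a photo of a dog on a beach"]).to(device))

# Cosine similarity = dot product of L2-normalized embeddings.
image_feats = image_feats / image_feats.norm(dim=-1, keepdim=True)
text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
scores = (image_feats @ text_feats.T).squeeze(1)

for idx in scores.topk(min(5, len(paths))).indices.tolist():
    print(paths[idx], float(scores[idx]))

Every image only needs to be embedded once; a new query then costs just one extra text encoding plus a dot product.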
@alteshaus3149 3 months ago
Thank you so much for this video! It really helped me understand CLIP! Best regards from Vienna!
@jeshweedleon3960 3 years ago
Imagine this but with more sensory data - audio, video, text, hell, any string of bytes even. Wild...
@charrylee3671 3 months ago
Such a good video. I understood CLIP after your explanation.
@vsiegel 3 years ago
Trained on "the internet" - so technically speaking, it is a porn classifier, right? Unless it used a separate algorithm for "adult image filtering". Fascinating! (And funny!)
@G12GilbertProduction 3 years ago
In a 8 of 20 examples presented in this paper review is really measured by different compilers of models, but not only this same in 20, 45, 60 bites for a 1mm³ pixel outer the third output layer.
@MeatFingerSteam 3 years ago
Absolutely loved the Alec meme, thanks!
@Xaelum 3 years ago
Just imagine a version of CLIP trained on random YouTube video frames + titles or subtitles.
@aminasadi1040 2 years ago
Thanks a lot for this awesome video! The explanations are very digestible even for a beginner.
@ThetaPhiPsi 3 years ago
Just for people watching this later: they revised the results for STL-10 in another version of the paper. On p. 40 they write "We updated the STL10 scores from the previous version of this paper after fixing a CUDA-related bug."
@ophir1080 3 years ago
Great video, thanks for sharing! Just one question of mine: why are we 100% sure that all these old, well-known datasets are not just subsets of the images CLIP was trained on?
@uniqued4ve 2 years ago
I'm missing your critique points a bit here! But thanks, a good intro to CLIP.
@TechNewsReviews 21 days ago
Very good explanation....👌
@bukovelby 2 years ago
Just a brilliant overview!
@Kram1032 3 years ago
Can't wait for this to be done to, like, entire movies. "Just" take the actual movie scripts as text input and the entire resulting movies (the frames) as image input, and add the modality of sound on top. Could also add a bunch of other production data if available (such as, say, concept art, or voices and music unmixed or even making-of documentaries and interviews or entire books which those movies are based on etc.) Between (such versions of) CLIP and Dall-E you probably could make entire movies from scratch with just writing out scripts, and then refine them by giving some concept art or something. I mean that level is a long ways off I expect - mostly due to how much data needs to be fit into a model that has to be long-time coherent etc. - just the memory requirements as of right now would be quite insane. But *in principle* I think this could be possible. Resource-needs aside, I suspect adding a sound modality wouldn't even be that difficult in CLIP, right? You'd basically do the same symmetric contrastive classification but add a third concept to it dealing with sound.
@p.z.8355 3 years ago
Do they have a specific strategy to sample the batches? Maybe sampling totally unrelated captions initially, e.g. dogs and planes, then at a later stage in training sampling more subtly differing captions, e.g. different breeds of dogs.
@YannicKilcher 3 years ago
I think it's just purely random.
@p.z.8355 3 years ago
@@YannicKilcher merci :)
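For reference, the paper itself gives numpy-style pseudocode for this step (its Figure 3). A rough PyTorch rendering of one training step on a randomly drawn batch, with the encoders, projection matrices, and data loading left abstract:

# Rough PyTorch sketch of one CLIP training step on a randomly drawn batch.
# image_encoder, text_encoder, W_i, W_t and the learned temperature follow
# the paper's pseudocode; the loader is assumed to yield aligned (images, texts).
import torch
import torch.nn.functional as F

def clip_step(images, texts, image_encoder, text_encoder, W_i, W_t, logit_scale):
    # Embed both modalities and project into the joint embedding space.
    I_f = image_encoder(images)           # [n, d_i]
    T_f = text_encoder(texts)             # [n, d_t]
    I_e = F.normalize(I_f @ W_i, dim=-1)  # [n, d_e]
    T_e = F.normalize(T_f @ W_t, dim=-1)  # [n, d_e]

    # n x n cosine similarities, scaled by the learned temperature.
    logits = logit_scale.exp() * I_e @ T_e.t()

    # The i-th image matches the i-th caption, so the targets are the diagonal.
    labels = torch.arange(len(images), device=logits.device)
    loss_i = F.cross_entropy(logits, labels)      # image -> text direction
    loss_t = F.cross_entropy(logits.t(), labels)  # text -> image direction
    return (loss_i + loss_t) / 2

With purely random batches, the off-diagonal "negatives" are simply whatever other image-caption pairs happened to land in the same batch.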
@jenishah9825 3 years ago
I can't thank you enough for making such useful videos.
@oflasch 3 years ago
Great explanation! 👍
@ashrafg4668 3 years ago
Thank you for the explanation!
@naifalkhunaizi7847 1 year ago
Truly great explanation!
@key2thacity87 1 year ago
Hey @YannicKilcher / all, it seems like OpenAI is only referring to performance on the banana class at 39:05 (Figure 13), not claiming that zero-shot CLIP outperforms the ResNet in general on ImageNet. Earlier in the paper (8:15) they achieve 40% accuracy on ImageNet. Is 39:05 (Figure 13) showing 72% accuracy on bananas or overall?
@SaidAzizov-c3l 1 year ago
Excellent! Thank you a lot!
@ShivamSingh-xf8nb 1 year ago
Amazing explanation!
@Qumeric 3 years ago
It's weird that ImageNet-A performance is higher than ordinary ImageNet performance.
@norik1616 3 years ago
Could it be because the images are more artistic, i.e. closer to the labeled images people put on the internet?
@akhilezai 3 years ago
Hey Yannic! I want to know what software you use to "extend" your PDF with empty space that you use to write notes. Please tell us.
@tsunamidestructor 3 years ago
OneNote, afaik
@fayeq1745 3 years ago
I was also wondering about that and figured out it might be OneNote.
@akhilezai 3 years ago
So I found another way to do it: using LaTeX's includepdf.
@tsunamidestructor 3 years ago
@@akhilezai you could also use LiquidText if you have an iPad
@akhilezai 3 years ago
@@tsunamidestructor thanks! I was sure it was possible in some apps on the iPad, but I own a Samsung Tab S7+.
@antonio.7557 3 years ago
Thanks Yannic, great video! But the biggest question I have is how they got this dataset of images + descriptions 🤔
@bhavikdhandhalya 7 months ago
I thought you would explain how the images and words are processed so that they have some connection. No issue.
@44Kokoloko 2 years ago
Am I understanding this right: CLIP training results in both a text encoder and an image encoder that can numerically represent the proximity between word and image representations, as vectors. These encoders can then be used on different datasets to good effect. In other words, it builds on the findings around text embeddings (word2vec) to train corresponding "image embeddings" in a way that allows matching an image embedding to a text embedding. Text embeddings having proved able to encode relations between concepts in embedding space (king - man + woman = queen), you can then move between text and image representations of these concepts. Does that sound right? Also, what is the pretraining done on?
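That shared-space picture can be checked directly with the public checkpoint. A small sketch, assuming the Hugging Face transformers CLIP wrappers and a placeholder image path; the candidate captions are made up:

# Score one image against a few candidate captions with the public CLIP
# checkpoint on the Hugging Face hub; "example.jpg" is a placeholder.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

captions = ["a photo of a dog", "a photo of a cat", "a diagram of a neural network"]
inputs = processor(text=captions, images=Image.open("example.jpg"),
                   return_tensors="pt", padding=True)

with torch.no_grad():
    out = model(**inputs)

# logits_per_image holds the scaled cosine similarities of the image vs. each caption.
probs = out.logits_per_image.softmax(dim=-1).squeeze(0)
for caption, p in zip(captions, probs.tolist()):
    print(f"{p:.3f}  {caption}")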
@theocachet6496 2 years ago
Do they check that Ti != Tj for i != j, with (i, j) indexes within a minibatch? If not, then sometimes there may be a conflict in the contrastive loss (the same similarity being maximized as a positive and minimized as a negative in the same computation). Do we agree?
@florianhonicke5448 3 years ago
New video from Yannic!!! Saved my day :D
@maryamaghili1148 3 years ago
Thank you for your great work! So is there any way we could find the actual labels (text) they used for training? I need to use this model for some classification tasks, but I am wondering how to organize the labels. I only have images with no annotations.
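One common way to organize this, sketched below assuming you only have class name strings and unlabeled images (the label list and file name are placeholders): turn each class name into a text prompt, embed the prompts once, and assign every image to the nearest prompt.

# Zero-shot classifier built only from label names, using the released CLIP package.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["cat", "dog", "car"]
prompts = clip.tokenize([f"a photo of a {label}" for label in labels]).to(device)

with torch.no_grad():
    # Class embeddings come from text alone - no image annotations are needed.
    class_emb = model.encode_text(prompts)
    class_emb = class_emb / class_emb.norm(dim=-1, keepdim=True)

    image = preprocess(Image.open("unlabeled.jpg")).unsqueeze(0).to(device)
    img_emb = model.encode_image(image)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)

pred = (img_emb @ class_emb.T).argmax(dim=-1).item()
print("predicted label:", labels[pred])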
@user-vr3bl6cn9e 4 months ago
After understanding the paper, how should we approach understanding the code?
@prabhupadpradhan489 3 years ago
Is the dataset that was used for pretraining the model (referred to in the paper as WebImageText) available for public use?
@srinathtangudu4899 2 years ago
Your videos are so good. Thanks :)
@GiangNguyen-of4qf 3 years ago
Best video explanation ever, Yannic :)
@frankd1156 3 years ago
Very good, Yannic...
@raphaelsaeed 1 year ago
Well explained
@herp_derpingson 3 years ago
18:20 This symmetric classification looks like a good idea. I wonder if we can use this for all classification tasks in general. 28:40 If you look at the datasets it is weak at, they involve some form of counting or arithmetic. This paper is a big deal. Kudos to the authors.
@YannicKilcher 3 years ago
Good thought, but if you apply this to standard classification, you always have the same N labels, which would just reduce to the classic cross-entropy loss.
@omkarpanhalkar1857 3 months ago
What is linear probing between two visual models?
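Linear probing generally means freezing each pretrained visual model, extracting features with it, and training only a linear classifier on top; comparing two visual models by linear probing means comparing those probe accuracies. A minimal sketch, where encode_images is a placeholder for whichever frozen backbone is being evaluated (e.g. CLIP's image encoder or a ResNet trunk):

# Linear probe: frozen backbone features + a linear classifier on top.
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe_accuracy(encode_images, X_train, y_train, X_test, y_test):
    # Frozen features: the backbone is never updated, only queried.
    F_train = np.asarray([encode_images(x) for x in X_train])
    F_test = np.asarray([encode_images(x) for x in X_test])

    clf = LogisticRegression(max_iter=1000)  # the linear "probe"
    clf.fit(F_train, y_train)
    return clf.score(F_test, y_test)

Running this once per backbone on the same dataset gives the head-to-head comparison the question asks about.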
@dl569 1 year ago
Thank you a lot!
@GUINTHERKOVALSKI 1 year ago
24:55 "I think prompt engineering will become quite a bit more relevant"
@Abdulazizab2 3 years ago
Great explanation! But I wonder how they measure the accuracy of zero-shot prediction. Is it by checking only whether the output contains the original label word, or some sort of combination? I assume the output of zero-shot CLIP would be a sentence.
@gocomputing8529 1 year ago
It's a bit late, but I'll answer for future readers. As explained in the video, classification is performed by creating a prompt for each label. For example, if you know the inputs are photos, you would use 'a photo of {label}'. As the video shows, the prompt you choose is really important for some applications (datasets).
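The paper goes one step further and ensembles several such templates per class by averaging their normalized text embeddings into a single classifier weight per class. A small sketch of that idea; the three templates and the class names here are illustrative stand-ins, not the paper's full list:

# Prompt ensembling: embed several templates per class and average the
# normalized text embeddings into one classification weight per class.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

templates = ["a photo of a {}.", "a blurry photo of a {}.", "a drawing of a {}."]

def class_weights(labels):
    weights = []
    with torch.no_grad():
        for label in labels:
            tokens = clip.tokenize([t.format(label) for t in templates]).to(device)
            emb = model.encode_text(tokens)
            emb = emb / emb.norm(dim=-1, keepdim=True)
            mean = emb.mean(dim=0)
            weights.append(mean / mean.norm())
    return torch.stack(weights)  # [num_classes, d], used like a linear layer on image embeddings

W = class_weights(["banana", "granny smith", "lemon"])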
@eliteari 8 months ago
Great video
@morkovija 3 years ago
Chuckled at that narrator cut! x)
@xingjian417 9 months ago
Thanks for sharing
@yaka169 2 years ago
Does this work similarly to a Siamese network, or how? I'm quite confused.
@willrazen 3 years ago
"We'll forgive it"
@ranam 3 years ago
Can I make an OCR text recognizer with it?
@simonstrandgaard5503 3 years ago
Mind blown again.
@black-snow 3 years ago
"random GoPro fallen into a bunch of bananas" xD
@imranq9241 2 years ago
Is it zero-shot if you consider image captioning as a single task?
@DajesOfficial 1 year ago
It is zero-shot in the sense of not using dataset-specific data; otherwise it is obviously heavily trained.
@h3rtc 3 years ago
That Alec meme is fire haha!
@jonatan01i 3 years ago
29:38 Voice borrowed from Josh from Let's Game It Out
@morkovija 3 years ago
Funny how we all watch the same channels
@chandrahmmouleb9611 2 years ago
Super hit
@emilyme9478 2 years ago
👍👍
@antonio.7557 3 years ago
Shouldn't this easily beat the ImageNet state of the art if you actually fine-tune it on the full ImageNet dataset?
@p.z.8355 3 years ago
Why do you even need a prompt? Can't you just use the original label set?
@DajesOfficial 1 year ago
They show in the paper, and it is demonstrated in the video, that prompt engineering adds about 5 percentage points of accuracy.
@nakshatrasingh9202 3 years ago
Switch Transformer, Google. Video please 😭😭🙏🙏🙏
@Lee-vs5ez 3 years ago
Better to do NLP with vision
@harinkumar1073 3 years ago
44:00 "human model" lmao
@jointcc2 2 years ago
"logit" XDDDD