CLIP - Keras Code Examples

7,721 views

Connor Shorten

1 day ago

Comments: 23
@MachineLearningStreetTalk · 4 years ago
Well done Connor! Looking forward to checking this out!!
@connor-shorten · 4 years ago
Thanks, I really appreciate it! Really fun going through these examples, definitely struggled with the batch loss in contrastive learning though haha
@googoopiano · 4 years ago
Thank you for sharing your explanations of Keras code examples. I've only looked at examples that I need, but I want to try more after watching your video.
@connor-shorten · 4 years ago
That’s awesome to hear! Thank you!
@scarletrazor1102 · 4 years ago
I've tried out training CLIP in PyTorch with the released OpenAI code, and honestly, with the helper functions they've provided, I find it much easier. Appreciate this though, since it actually lets me learn Keras and TF better.
@theodorosgalanos9663 · 4 years ago
May I ask what resources you needed for that? Does it still work okay without the big batch size?
@scarletrazor1102 · 4 years ago
@theodorosgalanos9663 Check out the GitHub repo and all the issues on there - it has some hints from one of the authors on how to train that make it easy enough if you browse the code. Apart from that you just need to write a training loop; fetching data might be tough, that's all. Batch sizes can be an issue; I'm experimenting with a training loop that breaks a large batch into smaller ones to sort of simulate a larger batch, accumulating gradients across these mini-batches for one training iteration.
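The gradient-accumulation idea described above can be sketched with a toy example. For a per-example loss such as mean squared error, summing mini-batch gradients (weighted by mini-batch size) and dividing by the full batch size reproduces the large-batch gradient exactly; for CLIP's contrastive loss it is only an approximation, since each mini-batch only sees its own negatives. The linear model and data below are made up for illustration:

```python
import numpy as np

# Toy linear model: gradient of mean squared error w.r.t. weights w.
def grad_mse(w, X, y):
    # d/dw mean((Xw - y)^2) = (2 / N) * X^T (Xw - y)
    return 2.0 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 5))   # "large batch" of 32 examples
y = rng.normal(size=32)
w = rng.normal(size=5)

# Large-batch gradient computed in one shot
full_grad = grad_mse(w, X, y)

# Same gradient, accumulated over 4 mini-batches of 8
accum = np.zeros_like(w)
for start in range(0, 32, 8):
    Xb, yb = X[start:start + 8], y[start:start + 8]
    accum += grad_mse(w, Xb, yb) * len(yb)  # weight by mini-batch size
accum /= 32  # normalize by the full batch size before the optimizer step
```

In a real training loop you would take one optimizer step with the accumulated gradient, then zero the accumulator.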
@theodorosgalanos9663 · 4 years ago
@scarletrazor1102 Thanks, will take a look. I saw somewhere that the batch size was in the thousands (although I wonder if that was DALL-E, I might be confused here) and was worried about how reproducible the quality of the results might be.
@PurpleRivar · 1 year ago
@scarletrazor1102 Can you please share the best CLIP code or GitHub repos that you think perform well? Do you have code for fine-tuning?
@nathancooper1001 · 4 years ago
Awesome to see these new models having code available so quickly. Thanks for going through them! One thing though: the video is small, so the text is hard to read, especially on mobile.
@connor-shorten · 4 years ago
Ah damn, sorry about that! I'll figure out a good cropping for next time. I'll try to write a corresponding article if I can find the energy haha
@HomayDanaeiMehr · 1 year ago
It seems you haven't been here for a long time! Thanks for this video. Is there a Colab file for this code? Do you have a more up-to-date lesson about CLIP - writing the code from scratch, or fine-tuning?
@aflatkhan6725 · 2 years ago
What are features['image'] and features['caption'] in the DualEncoder class? Where did they come from?
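(For context on the question above: in the Keras example, these presumably come from the tf.data input pipeline, which yields each batch as a dict of features that train_step then unpacks. A minimal, framework-free sketch of that shape, with made-up toy images and captions:)

```python
# Minimal sketch of a data pipeline that yields each batch as a feature
# dict, which is the shape a train_step like DualEncoder's unpacks via
# features["image"] and features["caption"].
# The toy images and captions below are made up for illustration.
def make_batches(images, captions, batch_size=2):
    assert len(images) == len(captions)  # one caption per image
    for start in range(0, len(images), batch_size):
        yield {
            "image": images[start:start + batch_size],
            "caption": captions[start:start + batch_size],
        }

images = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]
captions = ["a cat", "a dog", "a car", "a tree"]
batches = list(make_batches(images, captions))
```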
@tomwoodruff5967 · 2 years ago
Where can one find that Colab notebook?
@dgrxd · 4 years ago
I'm waiting for the weekly update, please! Thanks.
@connor-shorten · 4 years ago
Thank you so much for the interest in the series. Taking a quick rest from this but will be back soon!
@socurem6151 · 2 years ago
A little confused about the loss function.
@user-or7ji5hv8y · 4 years ago
I guess this makes a good case for learning both PyTorch and TensorFlow.
@connor-shorten · 4 years ago
Haha awesome, glad to hear it!
@mahimanzum · 4 years ago
I don't think you explained the loss function correctly; it's not as complicated as you explained here. The predictions are the matrix multiplication, and the targets are just the average of the image and caption similarities - that's it. There's no normalizing going on in this whole process. Please correct me if I'm wrong or if I'm misunderstanding something.
@scarletrazor1102 · 4 years ago
Am I understanding this wrong, or wouldn't it be simpler to just get that [[i1.t1, i2.t1, ..., in.t1], [t2.t1, ...], ...] matrix by matrix-multiplying the image and text encoder outputs and doing a cross-entropy loss with arange(batch_size)?
@mahimanzum · 4 years ago
@scarletrazor1102 That would be the obvious choice, but the one they implemented works as well conceptually, I think.
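The scheme discussed in this thread - logits from the matrix product of the two encoder outputs, labels equal to arange(batch_size), symmetric cross-entropy in both directions - is the loss in the CLIP paper's pseudocode. A NumPy sketch, with made-up embeddings and temperature for illustration:

```python
import numpy as np

def clip_loss(image_emb, text_emb, temperature=1.0):
    """Symmetric cross-entropy over the image-text similarity matrix.

    Entry (i, j) of `logits` pairs image i with text j; the correct
    pairing is the diagonal, so the labels are simply arange(n).
    """
    # L2-normalize so the dot products are cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = image_emb @ text_emb.T / temperature  # shape (n, n)
    n = logits.shape[0]
    labels = np.arange(n)

    def cross_entropy(logits, labels):
        # log-softmax along the last axis, then pick the true class
        logp = logits - np.log(np.sum(np.exp(logits), axis=1, keepdims=True))
        return -np.mean(logp[np.arange(n), labels])

    # image->text and text->image directions, averaged
    return (cross_entropy(logits, labels) + cross_entropy(logits.T, labels)) / 2

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))  # toy batch: 4 pairs, 8-dim embeddings
# Matched pairs (identical embeddings) should score a lower loss than
# the same embeddings paired up in the wrong order.
matched = clip_loss(emb, emb, temperature=0.1)
shuffled = clip_loss(emb, emb[::-1], temperature=0.1)
```

The Keras example's soft-target variant (averaging image-image and text-text similarities to build the targets) is a different but related choice; the hard arange labels above are the version in the original paper.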