How to train a model to generate image embeddings from scratch

14,108 views

Underfitted

1 day ago

Comments: 41
@emrahe468 6 months ago
I had been working on a similar problem for a few weeks and had already implemented most of the code you mentioned (after much trial and error). But after watching your video, I realized that I had missed a few crucial details, like the dense layer and the loss function. Your clear instructions and fantastic tutorial really saved me tons of time. I wish you had released this video earlier, but regardless, thank you very much! 🙏
@underfitted 6 months ago
Thank you!
@dcrasto 5 months ago
Thanks!
@underfitted 5 months ago
Thanks!
@LuisAlvarado-hm3br 7 months ago
Great, insightful video with an original approach to explaining embeddings. Most explanations focus on text, so it's refreshing to see image embeddings for a change. It's also fantastic to see such an influential paper used as a reference for the implementation. Thank you!
@chidubem31 7 months ago
Cool explanation! I always wondered how embeddings worked at the lower level.
@ojaspatil2094 1 month ago
Thank you for the intuitive explanation!
@toddroloff93 7 months ago
Great video. I like the enthusiasm and passion you display in your videos. The way you break things down and explain things is great. Thank you.
@underfitted 7 months ago
Thanks
@ThetaPhiPsi 7 months ago
Contrastive loss explained nicely! It's a shame nobody uses it. I have a couple of improvements to add: (1) you can use the model itself to compare pairs and take the loss to discriminate results (though the embedding alone works fine for a class of downstream tasks); (2) you can further compute the ROC AUC and optimize your threshold on the training data (I used a sigmoid to squish the loss between 0 and 1). Works nicely!
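A minimal sketch of the thresholding idea in this comment (not code from the video). The names `embed`, `pairs`, and `labels` are assumptions: a trained embedding model and a held-out set of image pairs labeled 1 for same digit, 0 for different.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def pair_scores(embed, pairs):
    a = embed.predict(pairs[:, 0])      # embeddings of the first image in each pair
    b = embed.predict(pairs[:, 1])      # embeddings of the second image
    d = np.linalg.norm(a - b, axis=1)   # Euclidean distance per pair
    return 1.0 / (1.0 + np.exp(d))      # sigmoid squishes distances into (0, 0.5]; higher = more similar

scores = pair_scores(embed, pairs)
print("ROC AUC:", roc_auc_score(labels, scores))

# Optimize the decision threshold, e.g. by maximizing Youden's J statistic (TPR - FPR).
fpr, tpr, thresholds = roc_curve(labels, scores)
best_threshold = thresholds[np.argmax(tpr - fpr)]
```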
@kalinduSekara 7 months ago
Clear and great explanation 💯
@Aclodius 7 months ago
You're doing the Lord's work
@sachinmohanty4577 7 months ago
Beautiful explanation ❤ loved the tutorial 😊
@wilfredomartel7781 2 months ago
Great explanation!
@KoenYskout 7 months ago
I experimented with changing the embedding size to 2 and visualizing the result on a 2-D plot, colored by label. It's easy to see how all (or most) digits with the same label are clustered together by the embedding, while digits with a different label are moved apart.
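A minimal sketch of the visualization this comment describes, assuming `embed` is the model retrained with an embedding size of 2 and `(x_test, y_test)` is the MNIST test split:

```python
import matplotlib.pyplot as plt

points = embed.predict(x_test)   # shape (n, 2): one 2-D embedding per image
plt.scatter(points[:, 0], points[:, 1], c=y_test, cmap="tab10", s=5)
plt.colorbar(label="digit")
plt.title("2-D embeddings of MNIST test images")
plt.show()
```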
@mehershahzad-n5s 3 months ago
Impressive clip
@raheemnasirudeen6394 6 months ago
A great explanation
@LanreOladele 5 months ago
I'd sincerely like to see how you'd go about this with 3D images while implementing triplet loss.
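For reference, a hedged sketch of triplet loss itself, which is independent of the input's dimensionality: for 3D images you would swap in a 3D convolutional encoder and keep the loss unchanged. The tensors `anchor`, `positive`, and `negative` are assumed batches of embeddings from a shared encoder (this is not code from the video):

```python
import tensorflow as tf

def triplet_loss(anchor, positive, negative, margin=1.0):
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=1)  # squared distance to a same-class example
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=1)  # squared distance to a different-class example
    # Push each anchor at least `margin` closer to its positive than to its negative.
    return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))
```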
@sam.scrolls 3 months ago
Thank you for the wonderful explanation; I understood the importance of the loss function here. If I want to create an embedding for an image containing multiple objects, could you give some insight into how that can be done?
@yaseromar1539 7 months ago
What a magnificent explanation! Every time I watch one of your videos I feel enjoyment and excitement, and I can see the same in the way you talk about machine learning 🤩
@underfitted 7 months ago
Thanks!
@LanreOladele 6 months ago
@Underfitted, thank you for this amazing video. How would you ideally do the same using 3D images?
@arashsheikh65 3 months ago
Thank you!
@ddemmkkimm 7 months ago
1:51 An image is not 2D data; it is (width × height)-dimensional data, i.e., one dimension per pixel.
@underfitted 7 months ago
I meant you need 2 dimensions to represent one image: 1 dimension to represent height and 1 to represent width.
@thevoyager7675 7 months ago
Thanks for the nice explanation! Could we use these image embeddings for classification tasks? If so, how?
@underfitted 7 months ago
You could. You can create 10 template embeddings, representing each digit. To classify a new image, compare it to all 10 embeddings and select the closest one.
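A minimal sketch of that template approach, assuming `embed` is the trained embedding model and `(x_train, y_train)` is labeled MNIST data (the names are illustrative, not from the video):

```python
import numpy as np

emb = embed.predict(x_train)
templates = np.stack([emb[y_train == d].mean(axis=0) for d in range(10)])  # one mean embedding per digit

def classify(image):
    e = embed.predict(image[None, ...])[0]                        # embed the new image
    return int(np.argmin(np.linalg.norm(templates - e, axis=1)))  # index of the closest template
```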
@KoenYskout 7 months ago
I would say: transform the input into its embedding and classify based on the embedding coordinates. I suspect a simple KNN classifier would already do well, because similar digits are moved closer together, and different digits further apart, in the embedding space.
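A sketch of this KNN variant under the same assumptions (a trained `embed` model and standard MNIST splits):

```python
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(embed.predict(x_train), y_train)                         # fit on training embeddings
print("accuracy:", knn.score(embed.predict(x_test), y_test))     # evaluate on test embeddings
```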
@gemini_537 7 months ago
Gemini 1.5 Pro: This video is about creating image embeddings from scratch using a neural network.

The speaker starts by explaining what embeddings are and why they are important. Embeddings are a way of representing data points as vectors in a high-dimensional space. Similar data points will have similar embeddings, while dissimilar data points will have dissimilar embeddings. This makes embeddings useful for tasks such as finding similar documents or images.

The speaker then introduces the Siamese network, a type of neural network that takes two inputs and outputs a measure of similarity between them, and explains how to use it to train a model that generates image embeddings.

Next, the speaker trains the model on a dataset of handwritten digits. The model learns to generate embeddings such that similar digits (e.g., two different images of the digit 3) have similar embeddings, while dissimilar digits (e.g., an image of 3 and an image of 7) have dissimilar embeddings.

Finally, the speaker shows how to use the trained model to generate embeddings for new images, and concludes by discussing some applications of image embeddings.
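For readers who want to see the shape of such a setup, here is a minimal Keras sketch of a Siamese model with contrastive loss; the layer sizes and margin are illustrative and not necessarily the video's exact architecture:

```python
import tensorflow as tf
from tensorflow import keras

# Shared encoder: maps a 28x28 image to an embedding vector.
encoder = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(32),                      # the 32-dimensional embedding
])

# Siamese model: both images pass through the same encoder;
# the output is the Euclidean distance between their embeddings.
img_a = keras.Input(shape=(28, 28))
img_b = keras.Input(shape=(28, 28))
distance = tf.norm(encoder(img_a) - encoder(img_b), axis=1, keepdims=True)
siamese = keras.Model([img_a, img_b], distance)

def contrastive_loss(y_true, d, margin=1.0):
    # y_true = 1 for same-digit pairs, 0 for different-digit pairs:
    # pull similar pairs together, push dissimilar pairs beyond the margin.
    y_true = tf.cast(y_true, d.dtype)
    return tf.reduce_mean(y_true * tf.square(d) +
                          (1.0 - y_true) * tf.square(tf.maximum(margin - d, 0.0)))

siamese.compile(optimizer="adam", loss=contrastive_loss)
```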
@chuanana 6 months ago
Thank you for the video! Is the distance between image embeddings of different labels (3 vs. 7) expected to be greater than 1? I got (1.0468788, 1.087123). Since we normalized the inputs, I expected the embedding distance to be normalized as well. Is there an expected range for the distance?
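Not an answer from the video, but as a general point: normalizing the inputs does not bound the embedding distances, because the network is free to output vectors of any magnitude, so distances above 1 are expected. One way to get a bounded range is to L2-normalize the embeddings themselves, which caps the Euclidean distance between any two of them at 2. A sketch, where `emb_a` and `emb_b` are assumed batches of embeddings:

```python
import numpy as np

def l2_normalize(e):
    return e / np.linalg.norm(e, axis=1, keepdims=True)  # scale each row to unit length

d = np.linalg.norm(l2_normalize(emb_a) - l2_normalize(emb_b), axis=1)  # guaranteed to lie in [0, 2]
```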
@user-wm8xr4bz3b 6 months ago
Thanks for the video! So am I right to say that this process is supervised learning?
@underfitted 6 months ago
This one is supervised, yes
@АлексГладун-э5с 7 months ago
amazing
@ian-haggerty 7 months ago
Funny, it wasn't too long ago that MNIST wasn't a "toy" problem. The history of computer vision is rather short. Are we writing the beginning of it?
@underfitted 7 months ago
Probably
@privateprivate-g3j 2 months ago
It lacks a lot of context; it's just about trying some functions. What about the mathematical concepts?
@sad_man_no_talent 7 months ago
9000+ power
@alliedeena1141 9 hours ago
Is this even from scratch?! Using external libraries doesn't mean it's from scratch.
@ajanieniola9172 2 months ago
Please LangGraph
@anime_comp 4 months ago
Way too basic for people who already know about neural networks; good enthusiasm, though.