C4W4L04 Triplet loss

129,019 views

DeepLearningAI

Comments: 62
@jaswant2578 • 4 years ago
Explained crystal clearly!
@holgip6126 • 6 years ago
Excellent and calm explained.
@zingg7203 • 2 years ago
Calmly
@hiankun • 4 years ago
To force the net to learn better, choose Andrew Yang as the negative sample for Andrew Ng. :-p
@songpandy9590 • 1 year ago
Clearly explained. Thank you.
@logicboard7746 • 2 years ago
The fun fact at the end is mind blowing :-)
@ahhhwhysocute • 3 years ago
Thank you, this was very well explained and easy to understand !
@allancentis300 • 3 years ago
very nicely done :)
@alessandrocornacchia8125 • 4 years ago
Thanks for your clear explanation
@pranavgandhiprojects • 5 months ago
Loved the explanation... thanks!
@IgneousGorilla • 3 years ago
Why is the squared norm used instead of just the norm, the actual euclidean distance?
@Darkev77 • 2 years ago
If you get an answer please lmk
@fabiansvensson9588 • 2 years ago
Hello! Why is it mandatory to have multiple images of the same person for training? I'm creating an algorithm where I want to use SNNs to map photos of a product to a system-generated image of the same product. The problem is that the products are personalized, so each product is unique; I therefore only have one image of each product. Why can't I simply create triplets from tons of different images? Why are several photos of the same person/product required?
@phoenix-hg8oq • 3 years ago
Thanks a lot, sir. What is the significance of the anchor here? Why do we choose the anchor and positive from the same class?
@kwon0128 • 5 years ago
Excellent and absolutely the best explanation, thank you.
@gorgolyt • 3 years ago
Why don't we just seek to always minimise |f(A) - f(P)|^2 - |f(A) - f(N)|^2 ? Instead of the max method which only minimises this when it's >= -alpha.
@Darkev77 • 2 years ago
If you found the answer please lmk
@samuelleecong4585 • 2 years ago
From my understanding, it's to ensure a larger gap between the positive and negative examples, so the model is trained to be more sensitive in identifying differences: the margin forces the distance difference between the positive and negative pairs to exceed the alpha value (maybe 5 or 6) rather than some tiny value like 0.1.
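The margin mechanics discussed in this thread can be sketched in a few lines of NumPy (a minimal illustration, not the course's code; all names are made up):

```python
import numpy as np

def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    """L(A, P, N) = max(||f(A)-f(P)||^2 - ||f(A)-f(N)||^2 + alpha, 0)."""
    d_pos = np.sum((f_a - f_p) ** 2)   # squared distance anchor-positive
    d_neg = np.sum((f_a - f_n) ** 2)   # squared distance anchor-negative
    return max(d_pos - d_neg + alpha, 0.0)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # close to the anchor
n = np.array([2.0, 0.0])   # far from the anchor

# The negative is farther than the positive by more than alpha, so
# max(..., 0) clips the loss to exactly zero: this triplet no longer
# pushes the embeddings anywhere. Without the max, training would keep
# spending effort on triplets that already satisfy the constraint.
print(triplet_loss(a, p, n))   # 0.0
```

This shows why the `max` is used rather than minimizing the raw difference: once a triplet satisfies the constraint by the margin, its gradient is zero and training focuses on the triplets that still violate it.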
@raphaelnoronha1419 • 5 months ago
OK, but what am I looking at in the end? (1) The weights W that transform the image matrix Ii into a vector (such as f(A)), i.e. Ii * W = f(Ii) = a vector with n dimensions? (2) Should those parameters W be constant for any image?
@Acha413 • 3 years ago
Hello Professor, I have a query. Why do we use squared norms to compute the distance between two examples (A,P or A,N)? Isn't cosine a better way to compute the similarity between two examples? Squared norms might have a large magnitude even when the examples are close to each other (that is, the angle between them is small). Hence, is cos(A,N)
@Darkev77 • 2 years ago
Please lmk if you found an answer
@Acha413 • 2 years ago
We square the distance (f(A) - f(P))^2 because we don't want negative values: (3-5)^2 = (5-3)^2, so the order doesn't matter. In machine learning we also want to compute the gradient (first-order derivative) of the loss in order to minimize it. We could consider the absolute value instead, but its derivative at 0 does not exist: abs(loss) is continuous yet not differentiable at 0. The square is easier to differentiate, and it also penalizes large errors more than small ones, so the model focuses on reducing the larger errors. For these two reasons, I think the square is the better choice of loss function.
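The differentiability point above can be checked numerically with central differences (a tiny sketch; `numgrad` is an illustrative helper, not a library function):

```python
def numgrad(f, x, h=1e-6):
    # central-difference estimate of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

square = lambda x: x * x

# Squared error: the gradient at 0 is well-defined (and equals 0).
print(numgrad(square, 0.0))   # 0.0

# Absolute error: the one-sided slopes at 0 disagree, so no gradient exists there.
right_slope = (abs(0.0 + 1e-6) - abs(0.0)) / 1e-6   # +1 from the right
left_slope = (abs(0.0) - abs(0.0 - 1e-6)) / 1e-6    # -1 from the left
print(right_slope, left_slope)
```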
@sharadchandakacherla8268 • 1 year ago
Thank you sir.
@felixgao7417 • 8 months ago
Question on choosing triplets that are hard to train on: how is that done in practice when you have a million pictures? Does a human have to judge pairs of images whose distances d(A,N) and d(A,P) are close, and select those for training?
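No human is needed: FaceNet-style training typically mines hard or semi-hard triplets online, inside each minibatch, using the current embeddings. A rough NumPy sketch of semi-hard negative mining (the batch layout and names are illustrative assumptions):

```python
import numpy as np

def mine_semi_hard(emb, labels, alpha=0.2):
    """For each anchor-positive pair in a batch, pick a semi-hard negative:
    farther than the positive but still inside the margin,
    i.e. d(A,P) < d(A,N) < d(A,P) + alpha."""
    # pairwise squared distances between all embeddings in the batch
    d = np.sum((emb[:, None, :] - emb[None, :, :]) ** 2, axis=-1)
    triplets = []
    for a in range(len(emb)):
        for p in range(len(emb)):
            if p == a or labels[p] != labels[a]:
                continue
            semi_hard = [j for j in range(len(emb))
                         if labels[j] != labels[a]
                         and d[a, p] < d[a, j] < d[a, p] + alpha]
            if semi_hard:
                # hardest of the semi-hard negatives: smallest d(A, N)
                triplets.append((a, p, min(semi_hard, key=lambda j: d[a, j])))
    return triplets

batch = np.array([[0.0, 0.0], [0.1, 0.0],    # person 0
                  [0.3, 0.0], [5.0, 0.0]])   # person 1
labels = [0, 0, 1, 1]
print(mine_semi_hard(batch, labels))
```

The key design choice is that mining happens against the *current* embeddings, so the set of "hard" triplets changes as the network improves; no manual curation of a million pictures is required.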
@linzhu5178 • 1 year ago
How do we differentiate this loss?
@vikramsandu6054 • 3 years ago
Well explained. Thanks for the video.
@LeenaGurgPhysics • 4 years ago
Can I use one of these face recognition pre-trained models to recognise plants?
@TimeKnowledgePower • 4 years ago
@leena Perhaps the first few layers, since they look for patterns of complex shapes such as faces, but the deeper the layers go, the less likely the model is to transfer well.
@LeenaGurgPhysics • 4 years ago
@@TimeKnowledgePower Thank you for your suggestion. Will try this!
@sandipansarkar9211 • 3 years ago
Nice explanation.
@meghnavasudeva4898 • 4 years ago
Now I can understand how DeepFakes would have been named :D
@furkatsultonov9976 • 3 years ago
or it could have been named as FakeNet lol
@debarunkumer2019 • 4 years ago
What if the pictures are of the same person from two different timelines? Will the model still work?
@clemsch90 • 4 years ago
If you train a neural network that's detecting throats, is it called "deepthroat"?
@WooblyBoobly • 4 years ago
someone's asking the real questions
@kaustubhparmar4274 • 4 years ago
ballsDeep
@nailcankara8164 • 4 years ago
Can I choose a big margin to make training harder?
@petarulev6977 • 2 years ago
I don't understand why the loss should be minimized instead of maximized. After all, we want the difference between f(true) and f(neg) to be as large as possible.
@Frosp • 1 year ago
Because you want to minimize f(true) and maximize f(neg), and maximizing f(neg) is the same as minimizing -f(neg).
@AvinashSingh-bk8kg • 3 years ago
@08:15 - 9:21: We need multiple samples of a person in our database to train the model using triplet loss, while only one sample image of a person is needed at test time. Is that what Mr. Andrew meant to say? If so, how can we call it one-shot learning, given that we train our model on multiple images of a person? Kindly clarify.
@osiris1102 • 3 years ago
I think we only need to train the network once, and we can use random pictures from the internet to train it to tell whether two given pictures are of the same person. Then, given two images, it should be able to tell us whether they show the same person.
@nidanoorain6339 • 3 years ago
Triplet loss: it drives the updates to the network's parameters (weights and biases). When a new face is added we don't retrain the network, since that requires huge amounts of computation; we retrain only when a large amount of new data is added. The output of the model is not a classification but a function that generates an encoding (128-d) for faces.
One-shot: the new face is given to the same architecture with the same weights and biases (trained with triplet loss), and similarity with the stored faces is computed using a similarity function, which returns true for the same face and false for a different one. Instead of retraining the model, we just create the embedding of the image with the pretrained model and dump it to a pickle file (or similar) for classification. Correct me if I'm wrong, hope it helps.
@ajaykannan6031 • 3 years ago
@nidanoorain6339 "then we can add the new face by giving the image to the network to generate an embedding" — so, in one-shot, is the network adding the new images and then training on them too during the testing phase?
@nidanoorain6339 • 3 years ago
@ajaykannan6031 It doesn't train in one-shot; it's like a classification task. For example, if a new face needs to be added, give the image to the model and it creates an embedding. Now compare that embedding with the previous ones (the embeddings of the previous faces should also be saved in a file) using a similarity function with some threshold: if the embedding distance is, say, < 0.5, add the new face's label and embedding to the saved feature file. Altogether, one-shot only says whether it's the same person or not; we need to write the condition that saves the person to the database with their embedding. Hope it helps.
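The workflow described in this thread, precompute embeddings once and compare new faces against them with a threshold, no retraining, can be sketched as follows (the gallery, names, and the 0.5 threshold are all illustrative assumptions):

```python
import numpy as np

def recognize(new_emb, gallery, threshold=0.5):
    """Compare a new face embedding against stored ones; return the closest
    identity if its squared distance is below the threshold, else None."""
    best_id, best_d = None, float("inf")
    for person_id, emb in gallery.items():
        d = float(np.sum((new_emb - emb) ** 2))
        if d < best_d:
            best_id, best_d = person_id, d
    return best_id if best_d < threshold else None

# Toy gallery of precomputed embeddings (in practice, f(x) from the network).
gallery = {"alice": np.array([1.0, 0.0]), "bob": np.array([0.0, 1.0])}

print(recognize(np.array([0.9, 0.1]), gallery))   # alice (d = 0.02)
print(recognize(np.array([5.0, 5.0]), gallery))   # None -- unknown face
```

Enrolling a new person is then just adding one more entry to the gallery dictionary; the network itself is never retrained.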
@harrrymok • 4 years ago
Thanks for the nice presentation. When I load the FaceNet model via tf.keras.models.load_model('facenet_keras.h5') with TensorFlow 2.2, it keeps raising an error (ValueError: bad marshal data (unknown type code)). Is there any idea how to mitigate it?
@nidanoorain6339 • 3 years ago
For anyone trying to load the model: try it with Keras 2.3.1, and if that doesn't work, downgrade TensorFlow to 2.0.0 with Keras 2.3.1. Hope it helps.
@kbstudios8402 • 6 years ago
How can we validate a dataset learnt using a Siamese network and triplet loss?
@donm7906 • 6 years ago
by testing whether it can recognize you, or 100 persons.
@GilangD21 • 6 years ago
With the similarity function? The point is that the model produces only two labels, "same" or "not same". We then validate whether it produces the correct label, as we do in a usual supervised neural network.
@samuelbarham8483 • 6 years ago
You take the triplet loss over a heldout validation set.
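Both replies above can be combined into a concrete check: hold out pairs of images, embed them, and measure verification accuracy at a distance threshold (an illustrative sketch with toy vectors standing in for the network's embeddings; the threshold is an assumption):

```python
import numpy as np

def pair_accuracy(pairs, labels, threshold):
    """pairs: list of (emb1, emb2); labels: 1 if same person, else 0.
    Predict 'same' when the squared distance is below the threshold."""
    correct = 0
    for (e1, e2), y in zip(pairs, labels):
        pred = 1 if np.sum((e1 - e2) ** 2) < threshold else 0
        correct += (pred == y)
    return correct / len(labels)

# Toy held-out pairs: one close pair (same person), one far pair (different).
pairs = [(np.array([0.0, 0.0]), np.array([0.1, 0.0])),
         (np.array([0.0, 0.0]), np.array([2.0, 0.0]))]
labels = [1, 0]
print(pair_accuracy(pairs, labels, threshold=0.5))   # 1.0
```

Using identities that never appeared in training makes this a genuine test of the embedding, not of memorized faces.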
@md.rijoanrabbi99 • 5 years ago
May I encode my own image using a 2-layer neural network instead of an Inception block? Is this supervised learning, or does the encoding occur in the Inception/FaceNet unit?
@abdengineer6225 • 4 years ago
Hello, can I detect a person who is wearing a mask?
@nayeonhan1406 • 4 years ago
@abdengineer6225 Yeah, you can detect people wearing masks! You can detect whether a person is wearing a mask here: www.pyimagesearch.com/2020/05/04/covid-19-face-mask-detector-with-opencv-keras-tensorflow-and-deep-learning/ and you can detect a person wearing a mask here: openaccess.thecvf.com/content_cvpr_2017/papers/Ge_Detecting_Masked_Faces_CVPR_2017_paper.pdf You can also get datasets of people wearing masks here: datatang.ai/dataset/info/image/1084 These are only a few models and papers I found; you may find many others. Hope this helps!
@julessci2716 • 2 years ago
Nobody explains like A. Ng
@sylus121 • 2 years ago
Good for facial expression recognition
@black-snow • 4 years ago
damn, upvote 665, off by one
@sadkntt • 5 years ago
Thanks so much, brilliant! Please add some Python code.
@gabrielwong1991 • 3 years ago
Choose triplets that are hard to train on.... So use entire training set as asian because we all look the same :D? haha
@arisioz • 1 year ago
Is this your wife?
@jsonbourne8122 • 1 year ago
You thought it was his wife, but it was me, Alpharius!
@fwang1252 • 6 years ago
Good explanation. What's wrong with his eyes?
@aayushpaudel2379 • 4 years ago
you are a well-trained face recognizer !! :D