Wow!! A few days back I was trying to train a Variational Autoencoder and was thinking about the idea of latent space and how I could make the encoder network better. I came up with the idea of using an MSE loss on the latent vectors of two images. Sadly it didn't work, so I figured my idea wasn't good, but now I'm wondering why, since it's basically the same thing as a Siamese network. Btw, thanks for this talk!!
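
For what it's worth, here is a minimal numpy sketch of the latent-MSE idea I mean, with a toy linear projection standing in for the VAE encoder; `encode`, `latent_mse`, and all the shapes are just placeholders, not the actual training setup:

```python
import numpy as np

def encode(x, W):
    # toy linear "encoder": flatten the image and project it to latent space
    return x.reshape(-1) @ W

def latent_mse(z1, z2):
    # mean squared error between two latent vectors
    return np.mean((z1 - z2) ** 2)

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 4))      # 4-D latent space for a 4x4 "image"
img_a = rng.normal(size=(4, 4))
img_b = rng.normal(size=(4, 4))

# the loss I tried: pull the two latent codes toward each other
loss = latent_mse(encode(img_a, W), encode(img_b, W))
print(loss)
```

(In a real setup this term would be added to the usual VAE reconstruction + KL loss and backpropagated through the encoder.)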