Full video list and slides: www.kamperh.com/data414/
Introduction to neural networks playlist: • Introduction to neural...
Yet another introduction to backpropagation: www.kamperh.com/notes/kamper_...
Comments: 5
@harshadsaykhedkar1515 · 27 days ago
This is one of the better explanations of how we go from maximum likelihood, to the NLL loss, to the log of the softmax. Thanks!
@allantourin · 6 months ago
Thanks Herman. I'm following a PyTorch tutorial and got lost when I saw the cross-entropy computation equal the NLL one. This definitely filled the gap.
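The equivalence the comments point to — cross-entropy with a hard (one-hot) target is exactly the negative log-likelihood of the log-softmax — can be sketched in plain Python. This is an illustrative sketch, not the video's own code; the function names are my own, and PyTorch's `F.cross_entropy` fuses the same two steps internally:

```python
import math

def log_softmax(logits):
    # Log-softmax with max-subtraction for numerical stability:
    # log(exp(z_i) / sum_j exp(z_j)) = z_i - logsumexp(z)
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return [z - log_sum for z in logits]

def nll_loss(log_probs, target):
    # Negative log-likelihood: minus the log-probability of the true class
    return -log_probs[target]

def cross_entropy(logits, target):
    # With a one-hot target, cross-entropy reduces to NLL of log-softmax
    return nll_loss(log_softmax(logits), target)

logits = [2.0, 1.0, 0.1]
target = 0
print(cross_entropy(logits, target))  # ≈ 0.4170
```

Maximizing the likelihood of the correct class is the same as minimizing this quantity, which is why the two losses coincide in practice.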