Learning Representations: A Challenge for Learning Theory, COLT 2013 | Yann LeCun, NYU

935 views

Preserve Knowledge

6 years ago

Slides: videolectures.net/site/normal_...
Perceptual tasks such as vision and audition require the construction of good features, or good internal representations of the input. Deep Learning designates a set of supervised and unsupervised methods to construct feature hierarchies automatically by training systems composed of multiple stages of trainable modules.The recent history of OCR, speech recognition, and image analysis indicates that deep learning systems yield higher accuracy than systems that rely on hand-crafted features or "shallow" architectures whenever more training data and more computational resources become available. Deep learning systems, particularly convolutional nets, hold the performances record in a wide variety of benchmarks and competition, including object recognition in image, semantic image labeling (2D and 3D), acoustic modeling for speech recognition, drug design, handwriting recognition, pedestrian detection, road sign recognition, etc. The most recent speech recognition and image analysis systems deployed by Google, IBM, Microsoft, Baidu, NEC and others all use deep learning and many use convolutional nets.While the practical successes of deep learning are numerous, so are the theoretical questions that surround it. What can circuit complexity theory tell us about deep architectures with their multiple sequential steps of computation, compared to, say, kernel machines with simple kernels that have only two steps? What can learning theory tell us about unsupervised feature learning? What can theory tell us about the properties of deep architectures composed of layers that expand the dimension of their input (e.g. like sparse coding), followed by layers that reduce it (e.g. like pooling)? What can theory tell us about the properties of the non-convex objective functions that arise in deep learning? Why is it that the best-performing deep learning systems happen to be ridiculously over-parameterized with regularization so aggressive that it borders on genocide?
