Lenka Zdeborová - Statistical Physics of Machine Learning (May 1, 2024)

11,764 views

Simons Foundation

14 days ago

Machine learning provides an invaluable toolbox for the natural sciences, but it also comes with many open questions that the theoretical branches of the natural sciences can investigate.
In this Presidential Lecture, Lenka Zdeborová will describe recent trends and progress in exploring questions surrounding machine learning. She will discuss how diffusion- or flow-based generative models sample (or fail to sample) challenging probability distributions. She will present a toy model of dot-product attention that exhibits a phase transition between positional and semantic learning. She will also revisit some classical methods for estimating uncertainty and their status in the context of modern overparameterized neural networks. More details: www.simonsfoundation.org/even...
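For context on the attention mechanism mentioned above, the sketch below implements plain single-head dot-product attention in NumPy and builds token inputs as a sum of positional and semantic parts, so attention can in principle key on either signal. It is only an illustration of the standard mechanism; the shapes, random weights, and positional/semantic split are assumptions for exposition, not Zdeborová's actual toy model.

```python
import numpy as np

def dot_product_attention(X, W_q, W_k, W_v):
    """Single-head dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v             # linear projections
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise dot products
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # convex mix of value vectors

rng = np.random.default_rng(0)
n_tokens, d = 8, 16
semantic = rng.normal(size=(n_tokens, d))           # token-content embeddings
positional = np.eye(n_tokens, d)                    # crude positional code
X = semantic + positional                           # tokens carry both signals
W_q, W_k, W_v = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
print(dot_product_attention(X, W_q, W_k, W_v).shape)  # (8, 16)
```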

Comments: 9
@atabac 11 days ago
Wow, if all teachers explained things the way she does, complexities would be simplified.
@kevon217 8 days ago
Excellent talk. Love the connections and insights.
@ozachar 9 days ago
As a physicist, but a non-expert-in-AI viewer: very interesting insights. Over-parameterization (size) "compensates" for a sub-optimal algorithm. It is also non-trivial that it doesn't lead to getting stuck fitting the noise. Organic neural brains (human or animal) obviously don't need so much data, and are actually not that large in number of parameters (if I am not mistaken). So there is certainly room for improvement in the algorithm and structure, which is exactly her direction of research. A success there would be very impactful.
@nias2631 4 days ago
FWIW, if you consider a brain's neurons as analogs to the neurons in an ANN, then the human brain, at least, has far more complexity. Geoffrey Hinton points out that the mechanism of backprop (the chain rule) for adjusting parameters is far more efficient than biological organisms in its ability to store patterns.
@nias2631 4 days ago
That efficiency is what worries him, and it also points to a need for a definition of sentience arising under learning mechanisms different from our own.
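As a quick illustration of the "backprop is the chain rule" point raised in the thread above, here is a generic two-layer regression network trained by hand-written backpropagation. This is a textbook sketch, not anything from the talk; the synthetic data, layer sizes, and learning rate are arbitrary assumptions.

```python
import numpy as np

# Toy two-layer network trained by backprop (the chain rule), squared loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))                 # 32 samples, 4 features
y = np.sin(X.sum(axis=1, keepdims=True))     # synthetic regression target

W1 = rng.normal(size=(4, 8)) * 0.5
W2 = rng.normal(size=(8, 1)) * 0.5
lr = 0.1

for step in range(200):
    # Forward pass.
    h = np.tanh(X @ W1)                      # hidden activations
    pred = h @ W2
    loss = ((pred - y) ** 2).mean()

    # Backward pass: each gradient is one application of the chain rule.
    g_pred = 2 * (pred - y) / len(y)         # dL/dpred
    g_W2 = h.T @ g_pred                      # dL/dW2
    g_h = g_pred @ W2.T                      # dL/dh
    g_W1 = X.T @ (g_h * (1 - h ** 2))        # dL/dW1, through tanh'
    W1 -= lr * g_W1
    W2 -= lr * g_W2

print(f"final loss: {loss:.4f}")
```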
@theK594 12 days ago
Fantastic lecture! Very clear and well structured! Thank you, díky 🇨🇿!
@shinn-tyanwu4155 9 days ago
You will be a good mother. Please make many babies 😊😊😊
@forcebender5079 12 days ago
To understand the black box inside machine learning, we will need to rely on more advanced AI: let more advanced AI analyze the black box in turn and crack its mechanisms. It is impossible to understand the black box's inner workings through human effort alone today.
@jiadong7873 12 days ago
huh?