Panel Discussion: Open Questions in Theory of Learning

4,867 views

MITCBMM

1 day ago

Comments: 12
@carpediemcotidiem · 25 days ago
00:05 Understanding natural and artificial intelligence requires a foundational theory.
03:23 Discussion of key theoretical questions in deep learning.
08:55 Neural networks exhibit similar feature detectors despite different training tasks.
10:53 Exploring representational kernels in embedding spaces for various models.
14:55 Exploring convergence of event representations across modalities using kernels.
17:09 Categories in vision emerge from complex neuronal responses, not unique patterns.
21:45 High-capacity discrimination emerges in pre-trained networks for few-shot learning.
24:00 Strong correlation exists between neural representations and deep learning performance.
28:32 Analyzing neural representations and performance in speech recognition systems.
30:43 The emergence of effective representations in learning remains a challenging problem.
34:52 Modularity enhances learning efficiency and robustness in complex systems.
36:53 Challenges in achieving modularity in deep learning networks.
40:59 Networks exhibit fault tolerance and mutational robustness for evolution.
42:58 The challenges of training feedforward networks for modular solutions.
46:52 Language models excel by predicting the next word in a sequence.
49:03 Increasing problem complexity impacts model learning speed and effectiveness.
53:01 Process supervision accelerates learning in language models compared to outcome supervision.
55:01 Language models struggle with complex problem-solving in asynchronous learning contexts.
1:00:19 Sparse deep networks can approximate high-dimensional functions efficiently.
1:03:01 Compositionality in functions mitigates the curse of dimensionality (see the note after this list).
1:09:29 Exploration of learning theory and representation emergence in neural networks.
1:12:04 Progress in learning theory lacks understanding of deep networks' representation quality.
1:16:44 Exploration of gradient-based learning alternatives in neural networks.
1:18:50 Exploration of gradient descent and its biological relevance in learning.
1:23:24 Learning involves reward-based mechanisms and the importance of pre-training.
1:25:33 Learning is primarily evolutionary rather than individual cultural acquisition.
1:29:59 Challenges in replicating biological network structures in machine learning.
1:32:27 Exploration of modularity in neural networks and implications for deep learning.
1:36:02 Discussion on modularity and the evolving landscape of supervised and unsupervised learning.
1:37:50 Self-supervised learning differs from supervised learning by predicting missing data.
Crafted by Merlin AI.
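A note on the 1:03:01 item above: the way this claim is usually quantified in the approximation-theory literature (paraphrased here from memory of Poggio and collaborators' results, not taken from this recording) is a comparison of how many parameters shallow versus structure-matched deep networks need to reach accuracy ε:

```latex
% Rough form of the compositionality result (Poggio et al., quoted from memory):
% approximating an n-variable function of smoothness m to accuracy \epsilon.
%
% Shallow network, generic f -- exponent grows with the ambient dimension n:
N_{\mathrm{shallow}} = O\!\bigl(\epsilon^{-n/m}\bigr)
%
% Deep network whose graph matches a compositional f, where each constituent
% function takes only d \ll n inputs -- exponent depends on d, not n:
N_{\mathrm{deep}} = O\!\bigl(n \, \epsilon^{-d/m}\bigr)
```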
@michaelmoore7568 · 1 month ago
I'm totally confused: in zero-shot learning, do you set the weights of a new neural network to the weights of the old one and then see how that works?
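For what it's worth, what this comment describes sounds like zero-shot evaluation of a pretrained network: copy the old network's weights into a fresh model and measure performance on new data with no further training. A minimal sketch of that reading in PyTorch (the ResNet-18 backbone and the `new_task_loader` are placeholders, not anything stated in the talk):

```python
# Sketch of "copy the old weights and see how it does": zero-shot evaluation
# of a pretrained network on a new task, with no gradient updates at all.
# The backbone choice and the data loader are placeholder assumptions.
import torch
import torchvision.models as models

# Pretrained "old" network.
old_net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# "New" network with the same architecture; its weights are simply
# overwritten with the old network's state dict -- no training happens.
new_net = models.resnet18(weights=None)
new_net.load_state_dict(old_net.state_dict())
new_net.eval()

@torch.no_grad()
def zero_shot_accuracy(model, loader):
    """Measure accuracy on the new task without updating any weights."""
    correct, total = 0, 0
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# accuracy = zero_shot_accuracy(new_net, new_task_loader)  # hypothetical loader
```

If the new task uses a different label set than pretraining, the common variant freezes the copied backbone and compares its embeddings to class prototypes instead of reusing the old classifier head.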
@AlgoNudger · 1 month ago
Thanks.
@memegazer · 1 month ago
"if finding modular sultions is finding the needle in the hey stack how has evolution done that" Recursive hyperlibrary expansion imo Where I use the term hyper library to be a similar concept to a hyperwebster of efficient decidable algorithms building a library towards wolfram's ruliad. This places the largest natural constraint on storage imo.
@memegazer · 1 month ago
Perhaps the most efficient error-corrected storage method would be to find some hyperwebster in the currently known primes as an addressing scheme for labeling efficient algorithms.
@memegazer · 1 month ago
great stuff
@peoplemachine439 · 1 month ago
this is the stuff that is truly relevant
@alisalehi7696 · 1 month ago
Fantastic
@memegazer · 1 month ago
"back prop" imo as models approach closer to real time embedding, for example something like test time training this will beging to approach what is being sought however this would require a great deal of effort to use current bench mark questions become solvable in virtual environments to simulate "practical" exploration space solutions decompressing current benchmark question into virtual enviroments is a not insignificant task I agree however I feel there is too much objection over how to accurately represent virtual enviroments and than simply allowing a learning algorithm to be simulated in an embodied way in that enviroment I am less concerned with accurate one to one representations of currently know features and motiffs of phsyical reality, due to my suspicions about currently undisclosed hyperwebster modularity of the deeper substrates. To my view there is a path for using current AI to begin to automate the training of "agi" That AI must be given at least a simulated oprotunity to learn from a multimodal simulation of an enviroment that provides trail and error feedback that is approaching an aproximation of human level perception/simulation.
@memegazer · 1 month ago
Cracking this feat will enable deployment of autonomous agents that can then elevate to an embedding operating at human-level perception by being installed into physical robotics, and begin applying their learning function in a closer-to-one-to-one substrate of human perception/qualia/sentience/simulation/etc., for whatever relevant term you prefer to parse. I will grant that this makes my previous statements about storage being the primary constraint seem refuted, but I would argue that these computational challenges are more local in a temporal sense.
@memegazer · 1 month ago
lol... sorry for the word salad... perhaps a chat model can represent my thinking in better faith than I am able to, due to my lack of technical expertise.
@randomchannel-px6ho · 1 month ago
When it comes to machine intelligence, I think this is one area where the loose analogy in our language to human behavior really breaks down. Can we truly consider what contemporary neural network models do "learning" in the same sense as the human brain and its neurons? No, not at all really. I'd bet money that a true breakthrough of such magnitude will require novel hardware like a neuromorphic computer, in addition to a robust theory that describes why these things actually work, and with it who knows what novel head-spinning complications will arise.