Yoshua Bengio | From System 1 Deep Learning to System 2 Deep Learning | NeurIPS 2019

  38,981 views

Preserve Knowledge

4 years ago

Slides: www.iro.umontreal.ca/~bengioy/...
Summary:
Past progress in deep learning has concentrated mostly on learning from a static dataset, primarily for perception tasks and other System 1 tasks which humans perform intuitively and unconsciously. However, in recent years, a shift in research direction and new tools such as soft attention, together with progress in deep reinforcement learning, are opening the door to novel deep architectures and training frameworks for addressing System 2 tasks (which are performed consciously), such as reasoning, planning, capturing causality, and obtaining systematic generalization in natural language processing and other applications. Such an expansion of deep learning from System 1 to System 2 tasks is important for achieving the original deep learning goal of discovering high-level abstract representations, because we argue that System 2 requirements will put pressure on representation learning to discover the kind of high-level concepts which humans manipulate with language. We argue that, towards this objective, soft attention mechanisms constitute a key ingredient for focusing computation on a few concepts at a time (a "conscious thought"), as per the consciousness prior and its associated assumption that many high-level dependencies can be approximately captured by a sparse factor graph. We also discuss how the agent perspective in deep learning can help put more constraints on the learned representations so that they capture affordances, causal variables, and model transitions in the environment. Finally, we propose that meta-learning, the modularization aspect of the consciousness prior, and the agent perspective on representation learning should facilitate re-use of learned components in novel ways (even if statistically improbable, as in counterfactuals), enabling more powerful forms of compositional generalization, i.e., out-of-distribution generalization based on the hypothesis of localized (in time, space, and concept space) changes in the environment due to interventions of agents.
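To make the idea of focusing computation on only a few concepts at a time concrete, here is a minimal illustrative sketch of sparse (top-k) soft attention over a set of concept slots. It is not the talk's actual implementation; the slot count, dimensionality, and top-k value are arbitrary choices for this sketch.

    # Illustrative sketch of a sparse "conscious thought": attend to only a few
    # of many candidate concept slots. All sizes below are arbitrary choices.
    import numpy as np

    rng = np.random.default_rng(0)
    num_slots, dim, top_k = 8, 16, 2           # many concepts, attend to only 2

    slots = rng.normal(size=(num_slots, dim))  # candidate high-level concept vectors
    query = rng.normal(size=dim)               # current "working memory" query

    scores = slots @ query / np.sqrt(dim)      # scaled dot-product attention scores

    # Keep only the top-k slots (the sparse-factor-graph intuition that each
    # "thought" involves few variables); mask the rest before normalizing.
    mask = np.full(num_slots, -np.inf)
    mask[np.argsort(scores)[-top_k:]] = 0.0
    weights = np.exp(scores + mask)
    weights /= weights.sum()

    conscious_state = weights @ slots          # dominated by the selected concepts
    print(np.round(weights, 3))                # all but top_k weights are exactly 0

The hard top-k selection here is only a stand-in for the attention bottleneck discussed in the talk; in practice a differentiable (soft or stochastic) selection would be used so the whole system remains trainable.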

Comments: 32
@evankim4096
@evankim4096 4 years ago
I like that Yoshua approaches the theory of neural networks in the language of probability at its core.
@CristianGarcia
@CristianGarcia 4 years ago
Yoshua: "Conscience is the next big thing" Next job offering: AI Conscience Engineer
@ACogloc
@ACogloc 4 years ago
Following job: Conscient AI
@cristhian4513
@cristhian4513 3 years ago
XD you're too much
@CosmiaNebula
@CosmiaNebula 3 years ago
From computer science to comscience.
@SterileNeutrino
@SterileNeutrino 1 year ago
Like a YouTube video, the AI will be able to convince you of anything and its opposite.
@gangfang8835
@gangfang8835 11 months ago
It took me a month to fully understand everything he discussed in this presentation (at a high level). I think this is the future. Would love to hang out and discuss if anyone is in Toronto.
@AR-iu7tf
@AR-iu7tf 4 years ago
Prof. Bengio is perhaps one of the key voices (if not the only one) who so clearly articulates, in great detail, what is lacking in DL to date and what one path forward could be (and he is kind enough to give links to all relevant references). Few exhibit such intellectual honesty and earnestness in helping the rest of us understand what to expect in the future. I wish I had teachers like him when I went to school.
@arjunashok4956
@arjunashok4956 3 years ago
The link for the slides doesn't work! Please update it!
@wehitextracellularidiombit4907
@wehitextracellularidiombit4907 4 years ago
Who's the speaker who introduced Mr. YB? Is she a researcher too?
@robosergTV
@robosergTV 4 years ago
why?
@lalithbharadwajbaru8704
@lalithbharadwajbaru8704 3 years ago
Léon Bottou. Yes, he's one of the great researchers.
@keghnfeem4154
@keghnfeem4154 4 years ago
Hello Yoshua.
@leo.budimir
@leo.budimir 4 years ago
"In our community, the C-word (consciousness) ..." =D
@immortaldiscoveries3038
@immortaldiscoveries3038 4 years ago
Transformers. Deep Learning. Training. Hard Problem. Fixed-Size set. Prune.....I could keep going...
@araldjean-charles3924
@araldjean-charles3924 1 year ago
A big chunk of knowledge may be preverbal. Look at our cats, dogs, and other mammals.
@Dmdmello
@Dmdmello 4 years ago
Isn't causality just a special case of correlation across time? At least that's how it seems to work for human intuition about causal effects. If so, I don't see why the fact that modern neural nets only learn correlations should be an impediment to them also learning causal relations.
@viswanathgangavaram7385
@viswanathgangavaram7385 4 years ago
I suggest reading "The Book of Why" by Judea Pearl, especially the first two chapters.
@ans1975
@ans1975 4 years ago
I am not sure about that, but I want to pose another question: what happens if you invert time? Does this thought experiment help clarify things? The correlation of x at time t with x' at time t' should not be affected by that change, but a causal relation should be affected, as far as I can see. I am at the conference now; if I manage to meet Bengio I may ask him directly and report his answer here.
@Dmdmello
@Dmdmello 4 years ago
@Shikhar Srivastava "Say we're in a given state. If events A & B are simply correlated, and if B occurs, we can consequently say there's a probability of A occurring. Now if event B is caused by event A, and if B occurs, then A has already occurred, so the probability of A occurring forward in time is independent of event B - as in, we don't expect A to occur simply because B has occurred. However, if A has occurred, then B must occur with the known probability P(B|A). Hence the directionality of the relationship." Let me see if I understand... Basically, the problem is that simultaneous correlation between A and B, say in a Bayesian net, cannot capture the fact that a future occurrence of A becomes conditionally independent of a past occurrence of B, whereas a future occurrence of B is still conditionally dependent on a past occurrence of A; hence the asymmetry problem. Then why not treat A at time t=0 and A at time t>0 as different events, so it wouldn't make sense to compute P(A_{t>0} | B_{t=0}), since there wouldn't be any connection in the graph of relations? Doesn't that solve the dependence/independence asymmetry, since the A that preceded B would still be dependent on B, but the A that comes after would just be another variable? I guess the problem is that representing these sequential relations in a Bayesian net is infeasible, but it is not difficult for neural nets such as RNNs to model, since they are able to capture those sequential relationships.
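A small numerical check of the point above about indexing variables by time (my own sketch with made-up numbers, not anything from the talk): in data generated by an "A at time t causes B at time t+1" process, past A predicts future B, but past B does not predict future A, even in plain observational correlations.

    # Toy check: unroll variables over time and the asymmetry shows up in
    # ordinary correlations. A is fresh noise each step; B_{t+1} is driven by A_t.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    A0 = rng.normal(size=n)             # A at time t
    B1 = A0 + 0.5 * rng.normal(size=n)  # B at time t+1, driven by A at time t
    A2 = rng.normal(size=n)             # A at time t+2, never driven by B

    print("corr(A_t, B_t+1):  ", round(np.corrcoef(A0, B1)[0, 1], 3))  # strong: past A predicts future B
    print("corr(B_t+1, A_t+2):", round(np.corrcoef(B1, A2)[0, 1], 3))  # ~0: past B does not predict future A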
@viswanathgangavaram7385
@viswanathgangavaram7385 4 years ago
@ans1975 Just inverting time does not mean anything in the causal world; causality says the effect follows the cause, because the cause produces the effect.
@viswanathgangavaram7385
@viswanathgangavaram7385 4 years ago
Even though almost all causal relations are encoded in the data, it seems that without a causal model it is essentially impossible to infer those causal relations from the data (even with RNNs).
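As a toy illustration of that point (my own sketch, not from the talk): data generated by the structural model A -> B gives a perfectly symmetric correlation between A and B, and only interventions reveal the direction.

    # Structural causal model: A := noise, B := 2*A + noise.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    A = rng.normal(size=n)
    B = 2 * A + rng.normal(size=n)
    print("corr(A, B):", round(np.corrcoef(A, B)[0, 1], 3))  # symmetric in A and B

    # Intervene: do(A := 1). B responds, because B listens to A.
    B_do = 2 * np.ones(n) + rng.normal(size=n)
    print("E[B | do(A=1)]:", round(B_do.mean(), 3))           # ~2

    # Intervene: do(B := 1). A is unchanged, because A does not listen to B.
    A_do = rng.normal(size=n)
    print("E[A | do(B=1)]:", round(A_do.mean(), 3))           # ~0

A purely observational learner sees the same joint distribution whether the mechanism is A -> B or B -> A (with suitably adjusted parameters), which is why a causal model, or interventional data, is needed on top of the correlations.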
@cafeliu5401
@cafeliu5401 4 years ago
The big shot is digging a new pit (i.e., opening up a new research direction for others to fill).
@catsaresocute650
@catsaresocute650 2 years ago
I am a horrible sister. I just went to someone who is in the room, and I just wanted to make sure he doesn't get into too much trouble. Always make double and triple sure that the abusive persons know a meeting has been agreed multiple times so that they can't deny it, and school is so good for that too, because it can't be rejected socially without going into the realm of neglect, like saying a sister may not teach her brother how to do things. Maybe I need to keep a book of interactions with Jackob so I have a better case?