From Deep Learning of Disentangled Representations to Higher-level Cognition

72,622 views

Microsoft Research

1 day ago

One of the main challenges for AI remains unsupervised learning, at which humans are much better than machines, and which we link to another challenge: bringing deep learning to higher-level cognition. We review earlier work on the notion of learning disentangled representations and deep generative models and propose research directions towards learning of high-level abstractions. This follows the ambitious objective of disentangling the underlying causal factors explaining the observed data. We argue that in order to efficiently capture these, a learning agent can acquire information by acting in the world, moving our research from traditional deep generative models of given datasets to that of autonomous learning or unsupervised reinforcement learning. We propose two priors which could be used by an agent acting in its environment in order to help discover such high-level disentangled representations of abstract concepts. The first is based on the discovery of independently controllable factors, i.e., jointly learning policies and representations such that each policy can independently control one aspect of the world (a factor of interest) computed by the representation, while keeping the other, uncontrolled aspects mostly untouched. This idea naturally brings to the fore the notions of objects (which are controllable), agents (which control objects) and self. The second prior is called the consciousness prior and is based on the hypothesis that our conscious thoughts are low-dimensional objects with strong predictive or explanatory power (or are very useful for planning). A conscious thought thus selects a few abstract factors (using the attention mechanism which brings these variables to consciousness) and combines them to make a useful statement or prediction. In addition, the concepts brought to consciousness often correspond to words or short phrases, and the thought itself can be transformed (in a lossy way) into a brief linguistic expression, like a sentence. Natural language could thus be used as an additional hint about the abstract representations and disentangled factors which humans have discovered to explain their world. Some conscious thoughts also correspond to the kind of small nugget of knowledge (like a fact or a rule) which has been the main building block of classical symbolic AI. This therefore raises the interesting possibility of addressing some of the objectives of classical symbolic AI focused on higher-level cognition using the deep learning machinery, augmented by the architectural elements necessary to implement conscious thinking about disentangled causal factors.
See more at www.microsoft.com/en-us/resea...

Comments: 61
@johntanchongmin · 1 year ago
Interpolating in abstract space is exactly what Stable Diffusion does. This idea is really impactful.
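To make the comment above concrete, here is a minimal sketch of latent-space interpolation (NumPy only; `encode`/`decode` are hypothetical stand-ins for a real model such as a VAE or a latent diffusion model):

```python
import numpy as np

def lerp(z0, z1, t):
    """Linearly interpolate between latent vectors z0 and z1 at t in [0, 1]."""
    return (1.0 - t) * z0 + t * z1

# Hypothetical encoder/decoder of a latent-variable model:
# z0, z1 = encode(image_a), encode(image_b)
z0, z1 = np.random.randn(512), np.random.randn(512)  # stand-in latents

# Walking a straight line in latent space tends to yield semantically
# smooth transitions, unlike interpolating raw pixels.
frames = [lerp(z0, z1, t) for t in np.linspace(0.0, 1.0, 8)]
# images = [decode(z) for z in frames]
```

Pipelines that assume Gaussian latents often prefer spherical interpolation (slerp) so intermediate points keep a plausible norm, but the principle is the same.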
@itaybenou · 6 years ago
Wonderful video. You can't help but admire his approach to what AI is, and the way he manages to convey these concepts. Brilliant!
@flamingxombie · 6 years ago
The intuition for why current speech models can't produce good unconditional samples (see WaveNet) is simply mind-blowing. Phonemes occupy a small number of bits compared with the overall signal (~10 phonemes/s versus 16k samples/s)! (A back-of-the-envelope check follows after this thread.)
@mjParetoQuant · 2 years ago
I liked that point very much. However, it is also quite obvious: we don't analyse the signal with our brain, we analyse the sentences and the meaning.
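A back-of-the-envelope check of the bit-rate gap raised in this thread; all figures are ballpark assumptions (16 kHz sampling, 16-bit samples, roughly 10 phonemes per second drawn from an inventory of about 40):

```python
import math

# Ballpark assumptions, not measurements:
sample_rate_hz = 16_000       # samples per second
bits_per_sample = 16
phonemes_per_s = 10           # rough speaking rate
inventory_size = 40           # rough phoneme inventory

raw_bits_per_s = sample_rate_hz * bits_per_sample             # 256,000 bit/s
phoneme_bits_per_s = phonemes_per_s * math.log2(inventory_size)  # ~53 bit/s

print(raw_bits_per_s / phoneme_bits_per_s)  # -> ~4,800x gap
```

The linguistic content is several orders of magnitude smaller than the raw waveform, which is why a model that spends its capacity on the waveform can sound crisp yet babble incoherently.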
@bingeltube · 3 years ago
Thanks for the interesting talk! Please post the slides as well!
@ProfessionalTycoons · 5 years ago
amazing talk.
@silberlinie · 6 years ago
Deepening learning into a higher cognitive level: very good. What and where are the relevant works, and who is working on this approach?
@catsaresocute650 · 1 year ago
On Lex's podcast, just mentioning this example would have sufficed to get the idea across: raw speech is unrecognizable gibberish (because of the sheer amount of data), but separate out the gibberish to get a basic feel for intonation and sound, treating speech as the vocalization of certain tones that humans recognize as speech, and you get a functional way to work it out.
@tempvariable · 5 years ago
At 12:07: are cognitive states low-dimensional, and if so, are they sparse? If they are both sparse and low-dimensional, that contradicts what he said in his MSS talk in 2012, where he states that high-dimensional and sparse is better than low-dimensional.
@maloxi1472 · 1 year ago
He's allowed to change his opinion and improve his theories over time. That's (interestingly enough) the kind of stuff that general intelligence allows
@rahuldeora5815 · 4 years ago
Someone should write a detailed blog post explaining the material in this talk.
@siarez · 5 years ago
Who is the gentleman at 1:09:35 asking a question and bringing up gradual learning?
@DeadRabbittt · 5 years ago
It's Patrice Simard from Microsoft Research.
@nguyenngocly1484 · 3 years ago
You can turn artificial neural networks inside-out by using fixed dot products (weighted sums) and adjustable (parametric) activation functions. The fixed dot products can be computed very quickly using fast transforms like the FFT, and the overall number of parameters required is vastly reduced. The dot products of the transform act as statistical summary measures, ensuring good behaviour. See fast transform (fixed filter bank) neural networks. The variance equation for linear combinations of random variables is very useful for understanding dot products in neural networks, especially in conjunction with the cosine angle. Also, ReLU is a switch: the electricity in your house is a sine wave; turn on a switch and the output is f(x)=x, the same sine wave as the input, while off(x)=0. A ReLU neural network is then a switched composition of dot products. If the switch states are known, there is a linear mapping between the input vector and the output vector, which you can check with various metrics.
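A minimal sketch of the fixed-transform recipe described in the comment above, assuming per-coefficient two-slope activations as the only learned parameters (an illustration of the general idea, not any particular published implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def two_slope_activation(x, a, b):
    # Per-coefficient "switched" activation: slope a[i] where x[i] >= 0,
    # slope b[i] otherwise (a parametric generalization of ReLU).
    return np.where(x >= 0.0, a * x, b * x)

def fixed_transform_net(x, params):
    # Alternate a fixed fast transform (an orthonormally scaled FFT, real
    # part kept for simplicity) with cheap adjustable activations.
    # Only the activation slopes are learned: 2n parameters per layer
    # instead of n**2 for a dense weight matrix.
    for a, b in params:
        x = np.fft.fft(x).real / np.sqrt(len(x))  # fixed "weighted sums"
        x = two_slope_activation(x, a, b)         # adjustable nonlinearity
    return x

n, depth = 256, 3
params = [(rng.normal(size=n), rng.normal(size=n)) for _ in range(depth)]
y = fixed_transform_net(rng.normal(size=n), params)
print(y.shape)  # (256,)
```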
@ahmadchamseddine6891 · 6 years ago
He is a genius.
@johntanchongmin · 1 year ago
At 1:09:33 there was a question on gradual change in hypothesis space from very few samples (theory revision). I feel that neural nets may be quite ill-suited to fast revision of learnt knowledge, since the weights take a long time to change by backpropagation. What I believe is necessary is to imbue some form of learnable external memory bank from which we draw our knowledge (in addition to neural nets), so that we can simply change that knowledge bank and learn new concepts instantly.
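A toy sketch of what such an external memory bank might look like: an attention-style key-value store in which adding a fact is a single write rather than many gradient steps. The design is hypothetical, loosely in the spirit of memory-augmented networks:

```python
import numpy as np

class KeyValueMemory:
    """Toy external memory: facts are written in one step and read back
    with an attention-style soft nearest-neighbor lookup."""

    def __init__(self, dim):
        self.keys = np.empty((0, dim))
        self.values = np.empty((0, dim))

    def write(self, key, value):
        # "Learning" a new fact is a single append: instant, in contrast
        # to slowly shifting weights by backpropagation.
        self.keys = np.vstack([self.keys, key])
        self.values = np.vstack([self.values, value])

    def read(self, query, temperature=1.0):
        # Softmax over key/query similarities, then a weighted sum of values.
        scores = self.keys @ query / temperature
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ self.values

mem = KeyValueMemory(dim=4)
mem.write(np.array([1.0, 0, 0, 0]), np.array([0.0, 1, 0, 0]))
mem.write(np.array([0.0, 1, 0, 0]), np.array([0.0, 0, 1, 0]))
print(mem.read(np.array([1.0, 0, 0, 0])))  # weighted toward the first value
```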
@juliocardenas4485 · 1 year ago
The camera work negatively affects a wonderful lecture.
@muckvix · 6 years ago
Does anyone have a link to the slides? And come on, camera people, it's not a beauty pageant; it's OK to show the slides instead of the speaker's face :)
@oudarjyasensarma4199 · 4 years ago
@@ProfessionalTycoons 404 not found!!! Can you share a revised link? Thanks in advance!!!
@reidalbecker3830 · 3 years ago
medium.com/@SeoJaeDuk/archived-post-from-deep-learning-of-disentangled-representations-to-higher-level-cognition-b848fdc0de2c
@ewfq2 · 4 years ago
I want to talk with the guy asking about barycentres and the Wasserstein distance!
@ewfq2 · 4 years ago
around 1:06:00
@ewfq2 · 4 years ago
What's the NeurIPS optimal transport tutorial mentioned?
@scose · 5 years ago
Sampling rate × bit depth is a big overestimate of the amount of information in speech audio signals; look at the compression ratios that audio codecs can achieve.
@jonabirdd · 4 years ago
51:00 I like the idea of a two-level system but disagree with the mutual information criterion.
@nathanbittner1452 · 4 years ago
Interesting. Could you say a bit more on this?
@dr.mikeybee · 5 years ago
Doesn't translation into an abstract space necessitate a loss of information?
@tomm7273 · 5 years ago
Yes, but the benefits of dimensionality reduction far outweigh that. You don't need to consider every pixel of a picture to reason about the objects contained within that picture and their features.
@zachundisclosed6706 · 5 years ago
There is also information stored in the decoder.
@nauy · 2 years ago
No, not if the information content is low-dimensional to begin with. Consider a circle of radius r rendered at location (x, y) on a bitmap. The information in pixel space is high-dimensional (the number of pixels in the bitmap), but the same circle can be transformed into a three-dimensional parameter-space representation, (x, y, r), with no loss of information: the circle in pixel space can always be regenerated from the parameter-space representation. (A small code sketch follows after this thread.)
@dr.mikeybee · 2 years ago
@@nauy Thanks. I've been learning about matrix transformations and PCA lately. It took me a few years to get here.
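The circle example in this thread is easy to verify in code. A small sketch (recovery is exact only up to rasterization, so the round-trip check uses rounded parameters):

```python
import numpy as np

def render_circle(x, y, r, size=64):
    """Rasterize the circle (x, y, r) onto a size-by-size binary bitmap."""
    yy, xx = np.mgrid[0:size, 0:size]
    return (np.hypot(xx - x, yy - y) <= r).astype(np.uint8)

def recover_params(bitmap):
    """Read (x, y, r) back out of the high-dimensional pixel representation."""
    ys, xs = np.nonzero(bitmap)
    x, y = xs.mean(), ys.mean()           # the centroid gives the center
    r = np.hypot(xs - x, ys - y).max()    # the farthest pixel gives the radius
    return x, y, r

img = render_circle(20, 30, 10)           # 64 * 64 = 4096 pixels
x, y, r = recover_params(img)
print(round(x), round(y), round(r))       # -> 20 30 10
# Regenerating from the 3 recovered parameters reproduces the bitmap:
print(np.array_equal(render_circle(round(x), round(y), round(r)), img))  # True
```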
@runvnc208 · 4 years ago
Sounds right to me. But why do they assume that the traditional neural net and deep learning are the best or only possible fundamental structures and processes for a system with these capabilities of disentangled abstractions working together with granular representations?
@moisesfelipe9596 · 4 years ago
Good point. I'd like to hear something along those lines. I feel the current popularity of NNs and DL has led a lot of people to not consider alternatives, losing other ways of solving problems and useful insights.
@moisesfelipe9596 · 4 years ago
For example, when I heard the term disentangling I couldn't stop thinking of it as a fancy (and potentially more sophisticated) way to refer to blind source separation.
@MartinLichtblau · 5 years ago
Humans use fuzzy approaches, while computers use precise numbers. Which one can work in this complex world?
@tomm7273 · 5 years ago
Computers can use fuzzy approaches as well. Almost all modern machine learning techniques are fuzzy.
@MartinLichtblau · 5 years ago
@@tomm7273 If you mean deep learning, I'd say: yeah, the direction seems OK. But the way computers work, any representation they use is quantitative, and they are absolutely precise with those numbers. Humans, by contrast, think in qualitative terms, like "this rough concept is very similar to that one". Indeed they can't quantify things precisely, but that is what makes humans more capable of dealing with all this ambiguous complexity.
@ahilanpalarajah3159 · 5 years ago
@@MartinLichtblau Why do you think fuzziness has to contradict precise numbers? I'm not arguing that it doesn't; I'm just asking, because we can fuzzify and work with vagueness to eliminate as much of the search space as possible.
@MartinLichtblau · 5 years ago
@@ahilanpalarajah3159 It's complicated, but in its basic sense it doesn't. I just couldn't find simple terms to tell them apart. Perhaps it's better to say accurate vs. approximate, or rigid vs. flexible...
@mikepict9011 · 4 years ago
Humans have emotion; humans care... Robots will never care.
@yeodongyoun6780 · 2 years ago
Wow, the subtitles are terrible :( ... "GAN" becomes "gown", "k-means" becomes "keys", lol.
@zlh · 4 years ago
46:00, re: attention as gating conscious and unconscious thoughts: can you imagine a machine that can widen and narrow its aperture of attention to accomplish different tasks?