DeepMind - From Generative Models to Generative Agents - Koray Kavukcuoglu

  34,575 views

The Artificial Intelligence Channel


Comments: 23
@RaviAnnaswamy 6 years ago
The key seems to be that the setup provides a "grounded bottleneck for the learned representation" by requiring that the representation learned is not a description but an action plan, hence the connection to reinforcement learning. In one of the Turing Award lectures from the 1970s (Newell and Simon, I think), a distinction is drawn between two ways of describing a circle: a circle is the set of points equidistant from a given point, or a circle is the path traced by a point moving at a constant distance around a given center. The first is a static description, the second a procedural one. The first type can guarantee accuracy, but it cannot guarantee minimalism and hence generality. If VAE and GAN are of type one, IMPALA/SPIRAL are of type two. So this approach looks at a picture and generates a program (a plan of actions) to reproduce it; it sees a digit and infers the strokes needed to draw it. Since the real information content of an MNIST digit image is far less than its 784 raw bits, the representation SPIRAL learns seems far more economical. This has mind-boggling implications. Btw, brilliant delivery; amazed by the precision of thought and words throughout.
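The declarative-vs-procedural distinction this comment draws can be made concrete with a toy sketch. This is purely illustrative (not code from the talk or the SPIRAL paper); the function names are made up for the example.

```python
import math

# 1) Declarative: a circle as a predicate -- the set of points
#    at distance r from the center (a membership test).
def on_circle(x, y, cx=0.0, cy=0.0, r=1.0, tol=1e-9):
    """Is (x, y) on the circle within tolerance tol?"""
    return abs(math.hypot(x - cx, y - cy) - r) < tol

# 2) Procedural: a circle as the trace of an action plan --
#    step around the center, emitting points as you go.
def trace_circle(n_steps=8, cx=0.0, cy=0.0, r=1.0):
    """Generate points by executing a small drawing program."""
    return [(cx + r * math.cos(2 * math.pi * k / n_steps),
             cy + r * math.sin(2 * math.pi * k / n_steps))
            for k in range(n_steps)]

# Every point the procedure traces satisfies the declarative description:
points = trace_circle()
assert all(on_circle(x, y, tol=1e-6) for x, y in points)
```

The procedural form is the more compact "program" for generating the object, which is the sense in which an inferred stroke plan can be a more economical representation of a digit than its raw pixels.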
@RaviAnnaswamy 6 years ago
Also listened to Prof. Bengio's presentation at Microsoft on disentangled representation learning, and Prof. LeCun's talks on additional approaches to unsupervised learning. 2018 seems to have begun a new direction and already delivered fruits in unsupervised learning, and this marriage of unsupervised learning with reinforcement learning is awesome.
@dr.mikeybee 6 years ago
Fascinating. Using various models as components in training seems like the best way forward. Brilliant.
@citiblocsMaster 6 years ago
Koray is an archon of Dan Ariely and George Clooney
@berkk1993 6 years ago
I thought it was George Clooney.
@EngIlya 6 years ago
No links to the research papers he was talking about?
@GuillermoValleCosmos 6 years ago
What is the explanation of the contrastive loss at 16:30?
@GuillermoValleCosmos 6 years ago
Ah, I see: there is a typo. The second term in the loss should have two different conditionings, c_1 and c_2. See here: arxiv.org/pdf/1711.10433.pdf#page=6
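The corrected form the comment points at can be illustrated with a toy sketch: a contrastive objective scores a sample under its matched conditioning c_1 and subtracts a (weighted) score under a mismatched conditioning c_2, pushing the model to actually use its conditioning. This is an assumption-laden simplification with made-up names, using scalar Gaussians in place of the paper's actual distributions.

```python
import math

def gaussian_logp(x, mu, sigma):
    """Log-density of a scalar Gaussian N(mu, sigma^2) at x."""
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (x - mu) ** 2 / (2 * sigma ** 2))

def contrastive_loss(x, mu_c1, mu_c2, sigma=1.0, gamma=0.1):
    """Toy contrastive objective: reward likelihood under the matched
    conditioning c_1, penalize likelihood under a mismatched c_2."""
    matched = gaussian_logp(x, mu_c1, sigma)      # term conditioned on c_1
    mismatched = gaussian_logp(x, mu_c2, sigma)   # term conditioned on c_2
    return -(matched - gamma * mismatched)

# A sample near the matched conditioning incurs a lower loss
# than one near the mismatched conditioning:
good = contrastive_loss(x=0.1, mu_c1=0.0, mu_c2=5.0)
bad = contrastive_loss(x=4.9, mu_c1=0.0, mu_c2=5.0)
assert good < bad
```

The point of the two distinct conditionings is exactly the typo fix: with c_1 = c_2 the two terms partially cancel and the loss no longer rewards conditioning-dependence.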
@utomo8 6 years ago
Can IMPALA achieve good natural language understanding? Natural language understanding has very big potential in the coming years.
@litman3980 4 years ago
As a Turk, I felt proud.
@ilyadaaa9284 4 years ago
🇹🇷
@enesmahmutkulak 2 years ago
How nice that there are Turks working in this field. 🇹🇷
@oldschoolgreentube 6 years ago
We are engineering our own extinction.
@eyeofhorus1301 6 years ago
+oldschoolgreentube newbz
@vcool 6 years ago
Luddite
@oldschoolgreentube 6 years ago
Absolutely. @vcool
@FacePalmProduxtnsFPP 6 years ago
Gross
@MONOLITH-yd4vq 6 years ago
FacePalmProductions Just think: people want this 🤖
@FacePalmProduxtnsFPP 6 years ago
MONOLITH 2045 I'm sitting here scrolling through it again, trying to remember what it even is...
@FacePalmProduxtnsFPP 6 years ago
MONOLITH 2045 Now I remember why I said gross; I lost interest so fast. 😂 Jabril and Siraj Raval are FAR more interesting and fun, and equally informative.
@shubhampateria2267 6 years ago
FacePalmProductions People like Jabril and Siraj are not qualified enough to speak at ICLR!
@FloydMaxwell 6 years ago
Most of these DeepMind talks are like that joke about the comedians' convention. They were all so familiar with the jokes that they had them numbered. Someone would get up and say "217." Much laughter. Someone else would get up and say "133." More laughter. Then someone got up and said "351." Hysterical laughter. A newcomer asked why the third person's joke was so funny. Another comedian answered, "It was a new joke." The point is that unless you invest more time in analogies, and front-load your talks with the reasons to listen, you just end up talking code to people already familiar with the code.