The key seems to be that the setup provides a "grounded bottleneck for the learned representation" by requiring that what is learnt is not a description but an action plan, hence the connection to reinforcement learning. In one of the Turing lectures from the 1970s, Simon and Newell (I think) distinguish between two ways of describing a circle: a circle is the set of points equidistant from a given point, or a circle is the path traced by a point moving at a constant distance around a given center. The first is a static description, the second a procedural one. While the first type can guarantee accuracy, it cannot guarantee minimalism, and hence generality. If VAEs and GANs are of type one, IMPALA/SPIRAL is of type two. So this approach looks at a picture and generates a program (a plan of actions) to reproduce it; it sees a digit and infers the strokes needed to draw it. Since the real information content of an MNIST digit image is far less than 784 bits, the representation learned by SPIRAL seems far more economical. This has mind-boggling implications. Btw, such brilliant delivery; amazed by the precision of thought and words throughout.
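To make the static-vs-procedural contrast concrete, here is a toy Python sketch (my own illustration, not from the talk; the function names and numbers are made up):

import math

# Static (declarative) description: test whether a point lies on the circle.
# It certifies membership exactly, but says nothing about how to draw the circle.
def on_circle(x, y, cx=0.0, cy=0.0, r=1.0, eps=1e-9):
    return abs((x - cx) ** 2 + (y - cy) ** 2 - r ** 2) < eps

# Procedural description: a short action plan that traces the circle,
# analogous to SPIRAL emitting strokes rather than pixel values.
def trace_circle(cx=0.0, cy=0.0, r=1.0, steps=100):
    return [(cx + r * math.cos(2 * math.pi * k / steps),
             cy + r * math.sin(2 * math.pi * k / steps))
            for k in range(steps)]

The procedural form is a small program whose length barely depends on image resolution, which is the economy being pointed at.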
@RaviAnnaswamy (6 years ago)
I also listened to Prof. Bengio's presentation at Microsoft on disentangled representation learning, and Prof. LeCun's on additional approaches to unsupervised learning. 2018 seems to have begun a new direction and already delivered fruits in unsupervised learning, and this marriage of unsupervised learning with reinforcement learning is awesome.
@dr.mikeybee (6 years ago)
Fascinating. Using various models as components in training seems like the best way forward. Brilliant.
@citiblocsMaster (6 years ago)
Koray is an archon of Dan Ariely and George Clooney
@berkk1993 (6 years ago)
I thought it was George Clooney.
@EngIlya (6 years ago)
No links to the research papers he was talking about?
@GuillermoValleCosmos (6 years ago)
What is the explanation of the contrastive loss at 16:30?
@GuillermoValleCosmos (6 years ago)
Ah, I see, there is a typo: the second term in the loss should have two different conditionings, c_1 and c_2. See here: arxiv.org/pdf/1711.10433.pdf#page=6
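For reference, my reading of the corrected contrastive term (a sketch only; the notation and the weight \gamma are mine, not copied from the paper):

\mathcal{L}_{\text{contrast}} = D_{\mathrm{KL}}\!\big(P_S(\cdot \mid c_1)\,\|\,P_T(\cdot \mid c_1)\big) \;-\; \gamma\, D_{\mathrm{KL}}\!\big(P_S(\cdot \mid c_1)\,\|\,P_T(\cdot \mid c_2)\big), \qquad c_1 \neq c_2

where P_S is the student and P_T the teacher: the divergence is minimised when both are conditioned on the same information, and pushed up when the conditionings differ.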
@utomo8 (6 years ago)
Can IMPALA achieve good natural language understanding? Natural language understanding has very big potential in the coming years.
@litman3980 (4 years ago)
As a Turk, I felt proud.
@ilyadaaa9284 (4 years ago)
🇹🇷
@enesmahmutkulak (2 years ago)
How nice that there are Turks working in this field. 🇹🇷
@oldschoolgreentube (6 years ago)
We are engineering our own extinction.
@eyeofhorus1301 (6 years ago)
+oldschoolgreentube newbz
@vcool (6 years ago)
Luddite
@oldschoolgreentube (6 years ago)
Absolutely, @vcool.
@FacePalmProduxtnsFPP (6 years ago)
Gross
@MONOLITH-yd4vq (6 years ago)
FacePalmProductions, just think, people want this 🤖
@FacePalmProduxtnsFPP (6 years ago)
MONOLITH 2045 I'm sitting here scrolling through it again trying to remember what it even is...
@FacePalmProduxtnsFPP (6 years ago)
MONOLITH 2045, now I remember why I said gross; I lost interest so fast... 😂 Jabril and Siraj Raval are FAR more interesting and fun, and equally informative.
@shubhampateria2267 (6 years ago)
FacePalmProductions People like Jabril and Siraj are not qualified enough to speak at ICLR!
@FloydMaxwell (6 years ago)
Most of these DeepMind talks are like that joke about the comedians' convention. They were all so familiar with the jokes that they had them numbered. Someone would get up and say "217". Much laughter. Someone else would get up and say "133". More laughter. Then someone got up and said "351". Hysterical laughter. A newcomer asked why the third person's joke was so funny. Answered another comedian, "It was a new joke." - - - - - The point is that unless you invest more time in using analogies, and front-loading your talks with the reasons to listen to them, you just end up talking code to people already familiar with the code.