You might like these time frames (pause playback at the illustrations there):
- 9:26 (pure NLP)
- 10:37 (saliency detection -> conditional learning / learning by exception)
- 13:04
- 28:47 (MAC logic)
- 41:30 (hybrid VP/NLP composition tree allowing visual info to be fed into the NLP processing STM)
- 48:10
- 53:49
- 54:34
- 59:37
- 1:02:40 (a moment of insight): "use attention over abstracted disentangled concepts and then do multi-step reasoning by having an iterative attention process over different time steps."
- 1:04:38
@ronhightower6549 · 5 years ago
Great! Thanks for doing this. I found it extra helpful because the cameraman spent little time on the actual slides. There should be a tool that automatically generates the same list of time codes from a video--shouldn't be too hard to do.
@petrafebrianto1045 · 3 years ago
@Stephan thanks, that's really helpful
@FranckDernoncourt · 5 years ago
Talk starts at 3:03
@dr.mikeybee · 5 years ago
So basically, if I understand this CLEVR work, one builds a classifier and runs a scene through it, which builds a database in which every object, object characteristic, and relationship to other objects is classified. Then the question is translated via an end-to-end differentiable NN into some query language? So the classifier would be considered a structural prior -- as would anything like convolutional layers. Finally, another translation net would take the query-language answer and turn it into whatever the user's language is.
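Here's a minimal sketch in Python of the pipeline as I picture it, on a toy scene; every name and the hard-coded program below are made up for illustration, not the actual CLEVR or MAC code:

```python
# Step 1: a "classifier" populates a structured scene database (toy version).
scene = [
    {"id": 0, "shape": "cube", "color": "red", "size": "large"},
    {"id": 1, "shape": "sphere", "color": "blue", "size": "small"},
    {"id": 2, "shape": "cylinder", "color": "red", "size": "small"},
]

# Step 2: a question-translation net would emit a program in some query
# language; here the program for "How many red things are there?" is hard-coded.
program = [("filter", "color", "red"), ("count",)]

def execute(program, scene):
    state = scene
    for op, *args in program:
        if op == "filter":
            attr, value = args
            state = [obj for obj in state if obj[attr] == value]
        elif op == "count":
            state = len(state)
    return state

# Step 3: a third net would render the answer back into the user's language.
print(execute(program, scene))  # -> 2
```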
@dr.mikeybee · 5 years ago
For MAC nets, does the entire database exist as part of the net? I assume that in order for a query to be run, this would be the case, and the RU would actually be a weight on one "connection" from a database entry to the RU cell within the MAC node. Is this the case, or is there a function that's called to access an "external" database? I'm getting the impression that the MAC net is a "one size fits all" NN, but it is called iteratively with different control values, thus storing results. Rinse and repeat. And would the key/value pairs making up the control unit's options act as "function loaders"?
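Here's a toy sketch of the iterative loop as I picture it, with plain softmax attention over a knowledge-base tensor held inside the net; the real MAC cell wraps learned projections around every step, so this is only an illustration, not the published architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d = 8                                  # toy hidden size
kb = rng.normal(size=(10, d))          # knowledge base: 10 "object" vectors
question = rng.normal(size=(5, d))     # 5 question-word vectors

control = np.zeros(d)
memory = np.zeros(d)
for step in range(4):                  # fixed number of reasoning steps
    # Control unit: attend over the question to choose this step's focus.
    control = softmax(question @ control) @ question
    # Read unit: attend over the knowledge base, guided by control + memory.
    retrieved = softmax(kb @ (control + control * memory)) @ kb
    # Write unit: fold the retrieved information into memory.
    memory = 0.5 * memory + 0.5 * retrieved
print(np.round(memory, 2))
```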
@skierpage · 5 years ago
37:12 "it's not quite clear how it does the counting" ! He builds machines he can't fully understand, cool!
@babadook4173 · 2 years ago
At 10:59, why can't we see the slide? Don't focus on Prof. Manning; put the slides on, please.
@hossein.amirkhani · 5 years ago
It is fun and interesting to watch Manning's lectures.
@dr.mikeybee · 5 years ago
Great lecture. I learned much more than I failed to understand.
@dr.mikeybee · 5 years ago
Is there any advantage to using a tree structure rather than a table or tables? Is it simply that there is hierarchy in a tree structure? Conversely, is there any advantage to a table? My initial thought is that either would work, but there would be more overhead with a table. There would be better middleware tools, however.
@andenandenia · 5 years ago
To evolve reasoning AI, I guess you must be able to measure how well it is reasoning. Maybe it's easier, then, to reason about moving things in space: building a stack of boxes, or building a bridge (even just a plate a car can drive over to get to the other side of a river), because that's measurable.
@manfredadams3252 · 5 years ago
The neurons they use in machine learning are just point neurons that essentially encapsulate a single function. Real neurons don't work like that at all.
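For illustration, a "point neuron" in this sense is nothing but a weighted sum pushed through a fixed nonlinearity; a toy sketch, not any particular framework's code:

```python
import numpy as np

def point_neuron(x, w, b):
    # The whole "neuron": one weighted sum plus one fixed nonlinearity.
    # No dendritic compartments, no spike timing, no local chemistry.
    return max(0.0, float(w @ x + b))   # ReLU activation

x = np.array([0.2, -1.0, 0.5])
w = np.array([0.7, 0.1, -0.3])
print(point_neuron(x, w, b=0.05))       # a single number out
```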
@aviraljain9121 · 3 years ago
This was a very interesting talk. However, I am a novice in AI and ML, and I don't think I understood a lot of the topics in it. Does anyone have suggestions on how I can learn more about the foundations of what this talk covers? Any suggestions will be appreciated.
@444haluk · 3 years ago
13:53 - you can answer associatively, meaning even a (2D + 1 fake D) understanding can work well. Real intelligence is about asking the questions; children always ask questions. Curiosity and attention are better indicators of intelligence than answering.
@stephanverbeeck · 5 years ago
The only part where this gets to a level of real grasp is at kzbin.info/www/bejne/Y2Otg5ysaLKsmdU, timestamp 1:02:40: "doing multi-step reasoning by having an iterative attention process over different time steps." But that should have been the first sentence of the talk, with everything else built on it, instead of being left as a final side thought, like "oh yeah, this could be it". If you know that, why waste so much effort on dumb statistical calculus (aka dynamic programming, 1950-2020)?
@nighthawkviper6791 · 5 years ago
We can track mycelial networks and their neurobiopathy, and the physics of it should yield an easily replicable blueprint for you to encapsulate in code. It should bring mathematical light and 3-dimensional modelling to morphic resonance.
@badhumanus · 5 years ago
This is frustrating. I'm always amazed to watch deep learning experts compare neural nets to the brain while ignoring the gigantic pink elephant in the room. Do you people realize that, unlike the brain, a deep neural net is blind to things it has not seen before? Do you realize that a deep neural net is essentially an old-fashioned, rule-based expert system (IF pattern A THEN label X) and just as brittle? If it encounters a situation for which there is no rule, it fails catastrophically. This is a fatal and fundamental flaw of deep learning. It is the reason that self-driving car companies drive their cars millions of miles in the vain hope of capturing all possible situations. How silly is that? Why then do you keep insisting that deep learning has a role to play in general intelligence? Please stop the hype and the pseudoscience before you get crushed by the pink elephant in the room.
@badhumanus · 5 years ago
@warriorgavalaki7336 Being in its infancy is irrelevant. AGI is coming, but no thanks to deep learning. A deep neural net has nothing in common with the brain and never will. There is no backprop in the brain, no labels, and no gradients. Also, neural nets process vectors (data), whereas the brain processes precisely timed data transitions, aka spikes. The brain is a massive timing mechanism. DNNs and brains are completely different animals.
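For contrast, here is a toy leaky integrate-and-fire neuron; all the constants are arbitrary, but it shows that the output is a list of spike *times* rather than a vector:

```python
import numpy as np

dt, tau = 1.0, 20.0          # time step and membrane time constant (ms)
v_th, v_reset = 1.0, 0.0     # threshold and reset potential (arbitrary units)
current = np.concatenate([np.full(50, 0.06), np.zeros(50)])  # input on, then off

v, spike_times = 0.0, []
for t, i_in in enumerate(current):
    v += (dt / tau) * (-v + i_in * tau)  # leaky integration of the input
    if v >= v_th:                        # threshold crossing = a spike
        spike_times.append(t)            # what matters is *when* this happens
        v = v_reset
print(spike_times)
```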
@badhumanus · 5 years ago
@warriorgavalaki7336 Sorry, I said all I wanted to say on this topic. Thanks for the input.
@snippletrap · 5 years ago
@badhumanus Where is AGI coming from?
@badhumanus · 5 years ago
@snippletrap I am convinced it will emerge from research in event-based neuromorphic computing and spiking neural networks. I predict that, although it's a needle-in-the-haystack kind of thing, once it is found it will turn out to be much simpler than most people in the field expect. I also believe it will most likely come from some lone-wolf maverick working alone. The mainstream AI community is clueless, in my opinion; they're lost in the woods.
@JustNow42 · 5 years ago
I wonder why it's called a neural network when it has very little to do with nerve cells or their organisation. Actually, not much has happened since the perceptron was invented (in the 1940s or so) except that we have got better computers. The structure of the neocortex is obviously very different, and there is a multitude of different neural cell types.
@marble296 · 5 years ago
It's trying to replicate the properties of a neuron. What you mention about different types of neural cells could be interesting.
@kobilica999 · 5 years ago
How can you say "not much has happened"? If you studied ML, you'd see a LOT has evolved since the invention of the perceptron.
@JustNow42 · 5 years ago
There are about 13-14 different types of neural cells. The neocortex is about 2.5 mm thick and about the size of a large handkerchief when spread out. It is organized in pieces of about 1 mm² by that 2.5 mm, so about 150,000 pieces. Each piece has 6 layers and about 100,000 neurons. This little computer is connected to sensors and motor circuits, to the other pieces, and much more, and it can also predict what it thinks will happen. (That is how it learns: if the prediction is wrong, a correction is needed.) Many, including me, think that a circuit cannot be intelligent if it does not interact with its space. Now this mass of small computers votes on the action to take; it is not a deep-learning circuit, and there is no backpropagation. This is why I say that not much progress has been made. Everything is wrong with the current models if we expect the circuit to think.
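Rough arithmetic behind those numbers (my own estimates, just checked with a short script):

```python
n_pieces = 150_000                 # ~1 mm^2 columns over the whole sheet
area_m2 = n_pieces * 1.0 / 1e6     # 1 mm^2 each -> total area in m^2
neurons = n_pieces * 100_000       # ~100,000 neurons per piece

print(f"sheet area ~ {area_m2:.2f} m^2")   # ~0.15 m^2, handkerchief-sized
print(f"total neurons ~ {neurons:.1e}")    # ~1.5e10, the right order of magnitude
```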
@kobilica999 · 5 years ago
@JustNow42 There is a model called a predictive coding neural network, which has error propagation.
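A minimal sketch of a single predictive-coding layer (linear, one data point, made-up step sizes; real models in the Rao & Ballard 1999 line stack several such layers):

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(scale=0.1, size=(4, 2))  # generative weights: causes -> data
x = rng.normal(size=4)                  # observed input
z = np.zeros(2)                         # latent "cause" estimate

for _ in range(200):
    error = x - W @ z                   # prediction error ("error units")
    z += 0.1 * (W.T @ error)            # inference: adjust causes to cut error
    W += 0.01 * np.outer(error, z)      # learning: local, Hebbian-like update
print(np.round(x - W @ z, 3))           # residual error shrinks toward zero
```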
@kobilica999 · 5 years ago
@JustNow42 And there are models based on the Hebbian/Oja rule, which is basically an approximation of brain plasticity.
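For illustration, Oja's rule in a few lines; the data and learning rate are arbitrary, but the weight vector should settle on the first principal component of the inputs:

```python
import numpy as np

rng = np.random.default_rng(3)
# Correlated 2-D inputs whose principal axis is roughly (1, 1)/sqrt(2).
data = rng.normal(size=(5000, 2)) @ np.array([[1.0, 0.9], [0.9, 1.0]])

w = rng.normal(size=2)
for x in data:
    y = w @ x                     # linear neuron output (Hebbian activity)
    w += 0.01 * y * (x - y * w)   # Hebbian term y*x minus the y^2*w decay
print(w / np.linalg.norm(w))      # ~ +/-(0.707, 0.707)
```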