I agree about the superb quality of all of Jeff's presentations and interviews. Jeff Hawkins and his Numenta and NuPIC teams, as well as his Redwood Institute, are probably the leading researchers in the world right now in AI with the true goal of achieving machine intelligence, not just shortcuts and cherry-picking with only commercial interests in mind. Like Jeff says, almost as an understatement: biology matters! I would dare say: if you do not want to go to the grave without having understood the principles of human intelligence, then watch this video and stay tuned to Jeff Hawkins.
@madmanzila 8 years ago
I feel the same way... I've been keeping an eye on this for some 8 years now and I totally agree with your sentiment. I've read On Intelligence three times and I think this man is on the money: he doesn't exaggerate, and he's straightforward and humble in the right ways. The only group bound to succeed, in my opinion. Class A stuff. Thanks.
@jonacacarr3839 9 years ago
Especially valuable is the summary in the last 5 minutes outlining the differences between the current approaches to machine intelligence.
@hegerwalter 9 years ago
What I really love about all of this is the open-source aspect and the quality of the presentations. Even the lectures and presentations are free. Yes, this is true of some material out there, e.g. MIT OpenCourseWare, Stanford, and Coursera, but it's still hard to find. I remember learning about finite elements 30 years ago; much of that material required a large payment, usually aimed at corporate budgets. I agree with Jeff that the fact that this approach is biologically inspired means it is most probably the approach that will lead to true machine intelligence. Biology has had more "trials and time" to find the best approach. The comparison against Deep Learning and Watson really drives the point home.
@J2897Tutorials 7 years ago
Fortunately, a lot of it isn't just open source, it's libre (non-proprietary), which means anyone can use it in their own work as long as it's kept libre.
@agu3rra 7 years ago
Haven't seen the whole video yet, but can anyone explain why (≈13:00) Jeff states there are 4 layers, draws 5 of them, and numbers them 2/3, 4, 5, 6? Why not number them 1-6? And what's the distinction between cells within the same 2/3 layer?
@Numenta 7 years ago
Hi Andre, good question. Many neuroscience textbooks label the layers of the cortex this way, so he was following that convention. In a talk this past December at MIT, Jeff went into more detail on the layers and filled in some of what we've learned since this 2014 talk. If you're curious about the layers, you may want to watch that video. You can also see the diagram of the layers in the thumbnail for the video here: cbmm.mit.edu/video/have-we-missed-half-what-neocortex-does-allocentric-location-basis-perception
@DHorse 5 years ago
From the lectures, some (or all?) of these layers are broken down further by cell types within them. Of course that's something Numenta should speak to.
@matheusaraujo7393 7 years ago
Today in 2018, how much do you think HTM theory understands of those 6 layers? (At that time, in 2014, it was high-order inference (98%), sensory-motor inference (80%), motor sequences (50%), and attention/feedback (10%).)
@Numenta 7 years ago
Hi Matheus, great question. Jeff spoke about this a couple of months ago and commented that the 6-layer view actually leaves out quite a bit. We now know there are about 12 different cellular layers / different cell types, and in the talk he gives his proposal for what many of the layers are doing. I'd recommend watching the video of this talk: www.numenta.com/resources/papers-videos-and-more/resources/jeff-hawkins-mit-talk/ He walks through the proposal at 42:20, if you want to skip to that part, but if you want the context for how we got there, the full video will help.
@firetv7835 6 years ago
10:48 But when the weight of a synapse is changed to zero, that would effectively mean the synapse "is being lost", just as increasing the weight of a synapse (to a positive or negative value) that previously had a weight of zero would be the equivalent of a new synapse being formed, wouldn't it?
@NumentaTheory 6 years ago
In our theory, when the permanence of a synapse drops low enough, it is lost completely, as you suggest. But this isn't the same as setting the weight to zero in an artificial neural network. There are still several differences: we don't allow negative permanences, we have slightly different rules for synaptogenesis than simply changing the permanence of an existing synapse, and we treat the contribution of an active synapse as binary (1 if the permanence is over threshold, 0 otherwise).
@firetv7835 6 years ago
HTM School First of all, thank you for the quick response; I'm really curious to learn about artificial neural networks :D Having the weights of the synapses be only either 1 or 0 (if I understand you correctly) is definitely an interesting approach, which should also make finding the most appropriate weights a lot faster and more straightforward. However, I am wondering how inhibitory synapses (I'm assuming such a thing exists) are represented in this model, since one would assume it logical to use a negative weight value for such synapses. I'm sure there is a logical reason you came to the conclusion not to use negative weights for synapses, and I'm curious to find out why that is the case :D
@NumentaTheory 6 years ago
The Temporal Memory (one part of our model) distinguishes the weight (what we call a permanence) of a synapse from its activation contribution to the post-synaptic cell. So we do have non-binary "weights", but we consider the synapse to be connected or not connected based on whether this weight is above some fixed threshold. When a neuron is active, it provides input to downstream neurons only through the connected synapses, and this contribution is just "1" (binary). So the growth of synapses is non-binary but the activation contribution is binary. This seems to more closely match how real synapses work. You can read more about this in the neuron paper: numenta.com/resources/papers/why-neurons-have-thousands-of-synapses-theory-of-sequence-memory-in-neocortex/ In the biology, neurons provide either excitatory or inhibitory connections to other neurons, but not both. Our models primarily capture the functioning of excitatory neurons, which neuroscientists generally believe to be the information-carrying cells (and they make up ~80% of neurons in the neocortex). We implicitly model some inhibitory cells in how we implement competition between minicolumns in the Spatial Pooler and inhibition within a minicolumn in the Temporal Memory. There are many types of inhibitory neurons, and we don't know what many of them are doing functionally.
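The permanence-vs-weight distinction described in this thread can be sketched in a few lines of Python. This is an illustrative toy, not Numenta's actual NuPIC implementation: the class names, the connected threshold of 0.5, and the increment/decrement values are made-up parameters for demonstration. The key ideas from the thread are all here, though: permanences are scalar and never negative, the activation contribution is binary based on a fixed threshold, and a synapse whose permanence reaches zero is pruned entirely rather than kept around as a zero weight.

```python
# Toy sketch of HTM-style synapse permanences (hypothetical parameters,
# not NuPIC's real API or constants).

CONNECTED_THRESHOLD = 0.5   # synapse counts as "connected" above this
PERM_INCREMENT = 0.1        # reinforcement for synapses from active cells
PERM_DECREMENT = 0.05       # decay for the rest

class Synapse:
    def __init__(self, presynaptic_cell, permanence):
        self.presynaptic_cell = presynaptic_cell
        # Permanence is a scalar in [0, 1] -- never negative.
        self.permanence = permanence

    def contribution(self, active_cells):
        # Activation contribution is binary: 1 only if the presynaptic
        # cell is active AND the synapse is connected, else 0.
        connected = self.permanence >= CONNECTED_THRESHOLD
        return int(connected and self.presynaptic_cell in active_cells)

def adapt(segment, active_cells):
    """Hebbian-style update: strengthen synapses from active cells,
    weaken the rest, and prune synapses whose permanence reaches 0
    (the synapse is 'lost', not merely zero-weighted)."""
    for syn in segment:
        if syn.presynaptic_cell in active_cells:
            syn.permanence = min(1.0, syn.permanence + PERM_INCREMENT)
        else:
            syn.permanence = max(0.0, syn.permanence - PERM_DECREMENT)
    # Remove dead synapses entirely, in place.
    segment[:] = [s for s in segment if s.permanence > 0.0]
```

A dendritic segment's total input would then be the sum of these binary contributions, with the segment becoming active when that overlap exceeds its own threshold; the real-valued permanences matter only for learning, never for activation.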
@DHorse 5 years ago
"The Temporal Memory (one part of our model) distinguishes between the weight (what we call a permanence) of a synapse from the activation contribution to the post-synaptic cell." The papers should be fun. I knew you couldn't have a purely binary output and come anywhere close to matching the brain as you describe it. You output a real or an integer while the spike is binary. The dendrite zones involved are integer-based, but the feedback to them must be a real.
@swat033 9 years ago
It's been nearly a year since this talk was given. What's the state of the art now with layers 4 and 5?
@drq3098 9 years ago
+Michal Najman What is the point of this discouraging curiosity?
@swat033 9 years ago
+DRQ discouraging?
@sebastiannarvaez7638 7 years ago
They uploaded a playlist with the state of the art yesterday. I hope it still interests you :) PL3yXMgtrZmDrlePl0jUIZWKwQwUgOfxA-
@swat033 7 years ago
Thanks. I can't really open it, though.
@JCUDOS 6 years ago
Haha, yeah, that link seems suspiciously short... All their playlists are on this page, so it has to be one of those: kzbin.infoplaylists?disable_polymer=1
@boyonstilts3121 9 years ago
What's the difference between this and whole brain emulation?
@nicktraynor 10 years ago
I'm interested in possible future implementations of other brain structures, particularly the hippocampus and the amygdala.
@nicktraynor 10 years ago
***** Thanks for the tip! Looking it up now.
@lasredchris 5 years ago
These algorithms - modeling of the world
@nicktraynor 10 years ago
I hope this method of extracting principles of operation of the cortex goes well, rather than having to dig into all the particulars of brain wetware. Neurotransmitters, for instance, are not mentioned by Mr Hawkins at all.
@madmanzila 8 years ago
I may be completely off here, but from what I gather, different neurotransmitters may account for differences in synaptic strength. The resulting neuronal firings are still doing the same thing in terms of creating these sparse representations (adrenaline, for example, may direct all attention to the execution of an attack or escape plan in a mammalian brain). I take it that different neurotransmitters may be acting as dominant area communicators in brains, repressing inputs from other areas. For the time being, HTM is clearly problem-solving without the emergency modalities of real brains. But it's exciting to see what someone who knows what they are talking about might say (I do not); just swinging some conjecture.
@DHorse 5 years ago
44:00 Err... Jeff? There's no mathematical foundation for how a computer works? Hmmmm.
@DHorse 5 years ago
47:30 AGI by 2030 Jeff. The brain by 2050.
@bm5543 4 years ago
😂😂 what is he smoking?
@DHorse 4 years ago
@@bm5543 I am sure he and I smoke the same things. Usually I see pink bunnies but my view of mathematics changes in the other direction.
@DHorse 4 years ago
05/20 Hmmm. I misunderstood what he was saying there. No formal definition for the physical layer. Agreed. And all I care about is what works and informs. We certainly agree there. Jeff shrugs... 👍
@lasredchris 5 years ago
Numenta: discover the operating principles of the neocortex, and create technology for machine intelligence based on those principles.
@Ivorforce 7 years ago
47:00 Critical mistake! He forgot we are actually also the best species at throwing stuff and we are exceptional at jogging / sweating. It's not just our brain, Jeff!!!
@climatechangedoesntbargain9140 5 years ago
GPL - seriously?
@alexanderb.7899 9 years ago
You can't really understand high-order inference at 98% if you feel you understand only 10% of the attention/feedback theory and 50% of the motor-sequences theory. The same goes for 50% motor sequences with only 10% attention/feedback. That contradicts your own ideas, Jeff! We see what we look at! You see what you imagine! How can you be 98% sure in "seeing" if you are only at 10% and 50% in the theory about where you are currently looking?
@Niki007hound 8 years ago
+Alexander B. You made an interesting observation, but your conclusions are perhaps a little hasty. The operations termed "inference", "sensory-motor inference", "motor sequences" and "attention/feedback" are completely different operations. They very likely share features, but understanding one does not require the others. Keep in mind that in physiology, scientists learn by empirical observations that confirm a hypothesis. You cannot observe all functional areas simultaneously, due to the complexity of the measuring techniques. You can very definitely make precise measurements of one layer to confirm certain functional mechanisms and yet not be as far along in other layers. Where you are right is that you cannot understand the full function of the neocortex without all of these layers and their interactions. But this does not contradict anything Jeff said.
@victorpanzica261 7 years ago
The reason mathematics cannot explain his work is that mathematics itself is a product of the hierarchical brain structure. His critics want to have the foxes guard the henhouses and then claim they can't find the hens.
@kharyrobertson3579 9 years ago
I love this, but please stay out of trying to invade people's personal lives. It has been argued that utilities like this cause more harm than good.