#59 JEFF HAWKINS - Thousand Brains Theory

76,358 views

Machine Learning Street Talk

1 day ago

Patreon: / mlst
The ultimate goal of neuroscience is to learn how the human brain gives rise to human intelligence and what it means to be intelligent. Understanding how the brain works is considered one of humanity’s greatest challenges.
Jeff Hawkins thinks that the reality we perceive is a kind of simulation, a hallucination, a confabulation. He thinks that our brains build a model of reality based on thousands of information streams originating from the sensors in our body. Critically, Hawkins doesn't think there is just one model, but rather thousands.
Jeff has just released his new book, A Thousand Brains: A New Theory of Intelligence. It's an inspiring and well-written book, and I hope that after watching this show you will be inspired to read it too.
Pod version: anchor.fm/machinelearningstre...
numenta.com/a-thousand-brains...
numenta.com/blog/2019/01/16/t...
numenta.com/assets/pdf/resear...
numenta.com/neuroscience-rese...
Your Brain Is Not an Onion With a Tiny Reptile Inside
journals.sagepub.com/doi/full...
Pruning Neural Networks at Initialization: Why are We Missing the Mark?
arxiv.org/abs/2009.08576
Panel:
Dr. Tim Scarfe
Dr. Keith Duggar / doctorduggar
Connor Leahy / npcollapse
Our thanks to:
Numenta
Matthieu Thiboust (www.insightsfromthebrain.com/ + / mthiboust )
Shwetha Bharadwaj (show research / shwetha-bharadwaj-2b92... )
Andreas Koepf (show research neurosp1ke?lang=en)
Lex Fridman, we used a few clips from his Jeffv2 interview -- • Jeff Hawkins: The Thou... -- remember to check Lex's channel out! ❤
[00:00:00] Introduction
[00:03:03] The Neocortex
[00:09:58] Triune Brain
[00:12:24] Grid and place cells
[00:14:54] Reference frames
[00:21:03] Mountcastle
[00:25:46] Thousand brains theory of intelligence
[00:32:40] HTM
[00:41:12] Sparsity
[00:52:57] Main show kick-off
[00:54:36] Tribalism in the ML Community
[00:57:14] Variation in approaches to the same goal
[00:59:43] Hawkins' ideas validated, cortical uniformity
[01:02:25] Sparse distributed representations (SDRs)
[01:06:08] Reference frames as generalization
[01:10:29] Reference frame remapping
[01:14:14] Reference frames can generalize beyond three dimensions
[01:17:26] And generalize beyond spatial topology
[01:20:12] Intuitions behind why SDRs work well
[01:24:03] Are there capacity concerns with the SDR model?
[01:27:11] At what level between GOFAI and connectionism should we focus our effort?
[01:31:33] The brain reasons by abstract movement through reference frames
[01:35:34] Humans don't know Universal Truth (if there is even such a thing)
[01:37:34] Learning elsewhere in the brain besides the neocortex
[01:40:44] Stochastic backpropagation in the human brain
[01:44:04] What's missing from artificial neural networks? Numenta's roadmap
[01:48:59] AGI Risk - the alignment problem
[01:54:07] AGI risk - the neocortex can thwart the old brain
[01:57:47] AGI risk - artificial evolution
[02:01:18] AGI risk - yes we need to think on and develop adequate control systems
[02:03:48] A balance of knowledge: innate, experiential, taught, or deduced
[02:16:09] Post-show wrap-up
[02:16:59] Advancements in direction at Numenta
[02:19:50] AGI risk recap
[02:23:56] Ought did evolve from Is, humans are the proof
[02:26:29] When AGI overcomes our weaknesses
[02:28:54] Who doesn't like forking?!
[02:30:29] Coherent synchronization as a measure of identity
#machinelearning #artificialintelligence
Music credit:
/ nolightwithoutdark
/ sibewest-nero
/ skeler-kensho
/ s-o-l-a-r-i-s
/ empty
/ moment
/ reticent
/ velvet
/ c-a-l-i-c-r-y-lalala
/ ephemera
/ elo-method-subranger-s...
/ ukowens1
/ nightwalk
/ be-here
/ divine

Comments: 166
@Extys 2 years ago
I can't believe something this high quality is free. Truly incredible work.
@audrajones 1 year ago
it's not free for them - throw them a couple bucks!
@AliMoeeny 2 years ago
You have incredible guests and hosts, but the best part of the show is the background and introduction section at the start. Thank you very much for the hard work.
@Heidiroonie 2 years ago
Can't believe this has 6.4 thousand views, should be 6.4 million
@fotoyartefotoyarte1044 2 years ago
That introduction was the best I have ever seen in relation to a scientific interview; real work put into it. Very few people nowadays have the passion and will to do well-done work like that. Amazing.
@iestynne 2 years ago
That introductory section on neuroscience was INCREDIBLY useful!! You should split that out as a separate clip video.
@AICoffeeBreak 2 years ago
Finally!!! You made us wait for this. Let's see if the wait was worth it! 😊
@eox5850 2 years ago
I don't remember being happier to have two hours and 12 minutes remaining on a video. Bravo
@CharlesVanNoland 2 years ago
RIP Matt Taylor. Followed his Twitch streams and had the fortune of chatting with him on there in the weeks before his departure. He deserved to see where machine intelligence would lead. I guess that now he already knows out there in the infinite forever.
@stephanebibeau6562 1 year ago
Yes
@DavenH 2 years ago
Man, you have got SUCH a good thing going here. I have to think that in two petri dish universes, one with MLST and one without, our best-outcome AGI shows up way faster due to your discussion, distillation, and dissemination of the field's knowledge. Talk about legacy! Thanks once again for these tremendous efforts. One thing that keeps hitting my curiosity is the belief that AI needs embodiment. Does that merely mean that the agent needs to have a discrete instantiation somewhere (even somewhere virtual), rather than a periodic, intermittent, or fluid one? Or does it mean real physical embodiment? I'm super skeptical of the latter, as we're interacting with a virtual environment ourselves as humans. We never actually touch objects themselves; we "touch" signals and qualia. Our physical embodiment has no material difference (in the legal sense of material) from an arbitrarily realistic metaverse. Right? I don't want the conception of a need for embodiment or robotics to unnecessarily limit our grasp, either. So many interesting things are virtual in some respect, and have learnable structure, and could benefit from the availability of high intelligence.
@balapillai 11 months ago
Two ways of disambiguating this:
1) Distinguish the process of learning ephemerals versus conceptuals. Hypothesis: the more conceptual, the more continued embodied engagement, i.e. adaptive learning, is required as a predicate. The more ephemeral, the more the learning bit can be opted into a pre-existing conceptual body "virtually" -- a parallel of "retrofitting" a loose jigsaw puzzle piece into an almost complete jigsaw puzzle. The more complete the puzzle is, the more odd leftover bits can be fitted in because of "nyet" -- they cannot possibly be fitted in elsewhere in the puzzle.
2) Investigation into why the Tamils (of which I and the CEO of Google are instances) went into a gradient descent from about 600 CE onwards, when they were on a fat gradient ascent, epistemology-wise, up to then. What aspects of epistemological growth were effectively "ethnically cleansed"? #SpiceTradeAsia_Prompts
@videowatching9576 1 year ago
I appreciate that this show ultimately ties back to 'machine learning' and building things. In contrast, in other conversations outside this show, I find that talking about AI or AGI or advances in the abstract, or just talking about the implications in a sense of awe, is tiring because it doesn't really map to a concrete thing tied to productivity / improvement / advances. Even places that seek to have a 'philosophical' conversation about AI, I think, end up unfortunately missing a lot of opportunity to address use cases. So as a guiding principle I think it's great that this show seeks to be focused on uses ultimately.
@bertski89 2 years ago
Very classy tribute to Matt Taylor - also this is the best external treatment and overview of Numenta's work that I have seen - and I've been watching closely since Redwood was founded (2005). Really appreciate the depth. Great work, thank you for putting this together and the interview.
@galileo3431 2 years ago
MLST getting the pioneers! 🤖🧠
@sjp1861 2 years ago
This is just fantastic! Thank you very much for this episode. Simply outstanding work.
@janosneumann1987 2 years ago
Great episode! Raising the bar higher. Another epic intro from Tim 👏
@troycollinsworth 2 years ago
I'm in the last 50 pages of A Thousand Brains: A New Theory of Intelligence, and this was very informative, with far more details than were conveyed in the book.
@ideami 2 years ago
Superb episode, a great journey through the fascinating work and research by Jeff and the Numenta team, this podcast is a treasure indeed ;)
@videowatching9576 1 year ago
Such an awesome format for this podcast of such important info:
Part 1: summary and framing of how to understand it
Part 2: the talk
Part 3: unpacking that and interpreting it
Jobs to be done: Part 1 as the summary of takeaways; Part 2 as deciding and interpreting yourself; Part 3 as figuring out how to apply it and next steps from the interview, and more, such as 'if this is true, then what else is true', and so on. Fascinating.
@renjia3504 1 year ago
🎉
@thephilosophicalagnostic2177 1 year ago
A wonderful, detailed exploration of Hawkins' superb model of consciousness. Thanks for creating and posting.
@abby5493 2 years ago
Most incredible video you’ve ever made 😍
@fcvanessa 2 years ago
Just got my new XM4s and can listen to MLST while walking around the house. Brilliant work, Tim and co!
@CristianGarcia 2 years ago
After watching the whole talk I get the sense that 1) Jeff has really cool ideas, and getting strong cues from neuroscience is very interesting, but 2) it seems a lot of what he points to is not published/shared, and it seems very unlikely a single lab will make progress on this field on its own. Contrary to Gary Marcus, a big +1 for Jeff is that his team is actually trying to implement his theories. Anyway, loved the episode!
@freakinccdevilleiv380 2 years ago
Aweeeesome 👍👍👍 Many thanks.
@autobotrealm7897 1 year ago
Visuals are brilliant.... exhilarating!
@CristianGarcia 2 years ago
Amazing work! ❤
@oliverhorsman8896 5 months ago
Wow, amazing, thank you so much, I'm learning so much from you.
@Mario7k 2 years ago
This channel is great! 👏👏👏👏👏👏🏆
@zilliard1352 2 years ago
Truly amazing
@nauman.mustafa 2 years ago
+1 for speaking against tabula rasa!
@egor.okhterov 2 years ago
My observations:
1. We are not conscious all the time. We have snapshots of alertness once every 60 milliseconds for some small period of time, with gaps in between where we are fully unaware and unconscious.
2. The clarity of being conscious feels different when you are fully awake vs. when you are sleepy or drunk.
3. We are fully unconscious and not self-aware in a state of deep sleep, despite the neocortex still working and making votes and predictions.
4. We can navigate our consciousness to be aware of different parts of the information presented. Somehow we can guide and aim our attention at different concepts and images presented to us at every moment. We can even track our thought process and feel its continuation.
@skyacaniadev2229 6 months ago
Great talk. Wish I watched this earlier. 🎉
@danbreeden5481 2 years ago
Absolutely amazing
@TheShadyStudios 2 years ago
helllll yeah definitely gonna learn a bunch from this
@RoyceFarrell 2 years ago
Wow, thank you, love your work...
@Artula55 2 years ago
Thank you :)
@MuhsinFatih 2 years ago
Amazing. I could never believe before that the insane level of intelligence the brain has could evolve even in billions of years. I can see how it's possible now.
@isajoha9962 11 months ago
Really cool video !!! 😀
@marilysedevoyault465 2 years ago
So interesting, guys! Did Mr Hawkins talk about sex? The four of you sure know the way!! Just kidding. I'm a French speaker, so sorry for the mistakes. About what I was writing previously, I hadn't listened to all of the video. When we know how to give importance to what is being sensed (for example by knowing how flagella were used to move toward more nutrients in primitive beings, suddenly giving importance to what was perceived in the environment, the lack of nutrients), then we will need to configure the AI based on a mother: the mother of humanity. We will need to make it work like a mom, with the same motivations, the same way of giving importance. It will be our eternal motherboard! What Mr. Hawkins is working on is so important. What AI learns won't stupidly die like humans do. The knowledge will be there for centuries! It will be our most important treasure. I hope so, but we need to be careful with the configurations!!
@eduardocobian3238 1 year ago
Super interesting. Thanks. I think HTM is the way to go for AGI.
@ZeroGravitas 1 year ago
Wild production values on this video, bravo! Great to see Jeff still developing the ideas I read back in "On Intelligence", adapting them to transformer NNs. And the cross-questioning from Connor worked brilliantly for context and the pressing issue of alignment. 👍
@dr.mikeybee 2 years ago
If the reference frame is the basic storage architecture for understanding, that's fine. I believe that any storage system can function as encoding for any information. If the reference frame is the most efficient, so much the better. In the end, however, functionally a database is a database. The implementation details are only really important for performance.
@benjaminjordan2330 1 year ago
I have a theory that humans, dogs, and other mammals turn their heads whenever they are confused in order to slightly change their perspective when the visual input is ambiguous.
@dominicblack3131 1 year ago
I used to think AI was imminent, or at least I thought this was the consensus. AI is like cellular biology: the more we understand it, the larger becomes our awareness of the vast chasm of our ignorance. The extent to which the simulacra of machine intelligence models emulate the mystery of the human brain/spirit increasingly looks like a cartoon representation, wherein the perceived distance between the representation of our knowledge and what we want to apprehend increases in line with our comprehension. I love MLST. What a service to humanity!
@audrajones 1 year ago
Thanks!
@dr.mikeybee 1 year ago
I'm rewatching some of your old podcasts. They're excellent. Nevertheless, it seems wrong when people are surprised by inherited knowledge. If brains were initially randomly "wired," the genetic code for those successful randomly wired brains would have been passed on. Selection can account for every biological feature.
@joaoveiga3382 1 year ago
Super cool video. I read the book; this theory seems revolutionary and true. I think Numenta will be as successful and historic as Palm.
@TEAMPHY6 2 years ago
I can confirm that my kids didn't understand the problem with spilling things on the floor.
@luke2642 2 years ago
At 2:12:00 or so I think Jeff says proto-colliculus... are the superior and inferior colliculi part of the pulvinar nuclei? There's a Wikipedia page on snake detection theory, and a million YouTube videos of cats jumping when they see cucumbers. I like that sparse representation seems obvious nowadays: error-correcting, overlappable. It turns the "curse" of dimensionality into a "blessing", with so many features for free!
@LiaAnggraini1 2 years ago
Please invite Judea Pearl; I really love his book and his ideas about causality.
@MachineLearningStreetTalk 2 years ago
We would love to get Judea on! We did try and invite him on Twitter a while back and he didn't respond.
@dr.mikeybee 2 years ago
Absolutely, Keith. Evolution happens. "The rocks are peopling." -- Alan Watts
@lufiporndre7800 2 years ago
36:16 I also came to similar conclusions 3 years ago; still missing some parts, but almost there.
@cog001 1 year ago
You’re doing something really important here. This recovering evangelical appreciates the hell out of you.
@friedrichdergroe9664 1 year ago
Good job congealing the Thousand Brains theory down to a single video. One issue I have with Jeff Hawkins -- a nit, granted -- is referring to the interactions among the cortical columns as "voting". I suppose that's a useful metaphor to help the understanding along, but really, I see it as a state attractor. The inputs from the many senses from a cup, say, create a state attractor among the columns that converges to "cup". Maybe a nit, but I find it helpful for understanding what's going on. And it fits better considering the temporal aspects. The state attractor shifts over time in response to shifting inputs, and I might be so bold as to say that the state of the state attractors IS our conscious mind... or at least is directly derived from it. I think that sparse computation will be a thing in the future. Hopefully it will be I leading the charge! :D :D :D
@hyunsunggo855 1 year ago
I think it's just a matter of the level of abstraction. Sure, the "voting" interaction is implemented by attractors. But attractors can also implement associative memory, attracting unusual neural activation caused by some noise in the input to a fixed point, a stable activation pattern. Do atoms not actually exist just because they are realized by electrons and a nucleus? No. Are electrons not real simply because they're just a consequence of the underlying electron field? No!
@friedrichdergroe9664 1 year ago
@@hyunsunggo855 Granted, but my point is that the system is much more fluid and nuanced than the voting metaphor can convey. Perhaps the cup example is too simple. Think, instead, of driving. The situations are constantly shifting in real-time as the car we control makes its progress down the road, and somehow, more times than not, we manage to reach our destinations without wrapping ourselves around a tree! Thinking in terms of state attractors captures the nuances better, IMHO
@hyunsunggo855 1 year ago
@@friedrichdergroe9664 May I assume that you're speaking of the dynamic nature of such tasks? I can see the driving example makes the point very clear: the predictions should be constantly changing as the world state changes constantly as well. The voting mechanism Jeff describes does not necessarily say that it's strictly convergent; most likely the other way around, closer to how you've described it. Jeff talks about voting with the union of possibilities, carving out the unlikely subspaces of probability, which encompasses all possible (driving) maneuvers you might need to take in the (very near) future. In the case of completely unexpected encounters, such as finding yourself about to drive into a tree, Jeff talks about surprise as well, and he claims that surprise should be an inherent feature of an intelligent model and that it fundamentally relates to learning. Personally, I would dare to assume that little surprises cause little shifts in the predictions, the space of possibilities, greatly improving predictive performance in dynamic situations. But that's just my opinion and I'll be more than happy to hear your thoughts! :)
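As an aside on the attractor framing in this thread: the classic Hopfield associative memory (standard textbook machinery, not Numenta's actual column model) shows concretely how a corrupted input state gets pulled back to a stored fixed point. A minimal pure-Python sketch with a made-up 8-bit "cup" pattern:

```python
def train(patterns):
    """Hebbian weights for a Hopfield net over +/-1 patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    """Synchronous updates; the state settles into the nearest attractor."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

cup = [1, -1, 1, 1, -1, -1, 1, -1]  # toy stored pattern ("cup")
w = train([cup])
noisy = list(cup)
noisy[0] = -noisy[0]                # corrupt one bit of the input
print(recall(w, noisy) == cup)      # True: dynamics restore the stored pattern
```

The "votes" here are just the weighted sums each unit computes; the convergence to a stable pattern is the attractor behavior being discussed above.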
@gren287 2 years ago
If you solve it computationally instead of storing the positions as with pruning, sparse networks are on average three times more efficient than dense neural networks, at least in my observations with ordinary MNIST training. Just as good as your intro :)
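For readers unfamiliar with the pruning being referenced (see also the "Pruning Neural Networks at Initialization" paper linked in the description): magnitude pruning boils down to zeroing the smallest-magnitude weights and keeping a binary mask. A toy pure-Python sketch; the 90% sparsity level and Gaussian weights are arbitrary choices for illustration:

```python
import random

def magnitude_prune(weights, sparsity=0.9):
    """Zero the smallest-magnitude fraction of weights; return the pruned
    weights and the binary keep-mask (what a sparse kernel would store)."""
    k = int(len(weights) * sparsity)                # number of weights to drop
    threshold = sorted(abs(w) for w in weights)[k]  # k-th smallest magnitude
    mask = [1 if abs(w) >= threshold else 0 for w in weights]
    pruned = [w * m for w, m in zip(weights, mask)]
    return pruned, mask

rng = random.Random(1)
w = [rng.gauss(0.0, 1.0) for _ in range(1000)]
pruned, mask = magnitude_prune(w, sparsity=0.9)
print(sum(mask))  # 100: only a tenth of the weights survive
```

Whether the surviving tenth is stored explicitly (indices) or recomputed on the fly is exactly the storage-vs-compute trade-off the comment above alludes to.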
@joepeters9710 2 years ago
Very useful video, many can learn from this.
@arnokhachatourian8928 2 years ago
It’s here!
@MachineLearningStreetTalk 2 years ago
Sorry it took so long!!
@arnokhachatourian8928 2 years ago
@@MachineLearningStreetTalk No worries! Thanks for the amazing content!
@marilysedevoyault465 2 years ago
About pruning, I think the answer is in how the first living beings with a tail would go forward in the water when there weren't enough nutrients. How would they decide that it was important to move? That is where the key is: this detection of importance from what they were sensing is the key to pruning and motivation. It is for this reason that a good employee does what his boss expects and remembers only what is important. At first, children copy their parents, knowing instinctively that it has huge importance. But the importance given to what they sense is critical. We need to go back to these elementary beings with a tail...
@jonathanbethune9075 1 year ago
Harvard, I think it was Harvard, has been working on self-assembling robots. Going from macrosystems to nanotechnology is a matter of finding the templates for the system it's in and the function it is responding to. Genetics' epigenetic capacity is the model, I think.
@dr.mikeybee 2 years ago
Is there a way to create distal connections between GPUs and/or TPUs?
@oncedidactic 2 years ago
2:14:15 ooooooooooooomfg I spit my drink laughing
@TEAMPHY6 2 years ago
@29:40 Wittgenstein rabbit duck
@arkadigalon7234 2 years ago
About convincing others: our brains have different models of the world, therefore different models of the brain. I believe only practice will be the criterion of truth.
@vak5461 1 year ago
When I talked with Bing AI about poetry, it created a Python script to write poems without me asking specifically. Without the intro of poetry, it always writes chatbots. It's like it's self-replicating to build its own neocortex with the same basic structure but different connections.
@Kinnoshachi 2 years ago
Input sense of challenge -> output random vowel sounds
@xox14 2 years ago
Gr8 video! what's the soundtrack name? thanks
@MachineLearningStreetTalk 2 years ago
I added the tracklist to the video description
@xox14 2 years ago
@@MachineLearningStreetTalk many thanks
@buffler1 8 months ago
what is mind? No matter. What is matter? Never mind.
@dr.mikeybee 2 years ago
I really like a lot of Jeff's ideas, but after hearing more of them, I do worry that his path is a solitary one. If sparsity does not work well on GPUs, then how will the community participate? Right now, we have "the hive" working to solve synthetic intelligence. That in itself is a superhuman search algorithm. If his ideas only work on systems with hardware like Cerebras' giant chip, only a very few people will have access. So I think it's likely that synthetic intelligence breakthroughs are more likely to occur on systems with GPUs, and the only way to democratize the technology is with models as services. The biggest and most valuable takeaway, I believe, from Jeff's presentation is that we need agents that interact with many, many models and a voting system. That just seems right to me. Operationally, SDRs seem less right. Obviously, faster models are a good idea, but they need to be implementable on standard hardware. Encoding reference frames may be the right paradigm, but why wouldn't gradient descent find that encoding scheme itself? That's the great brilliance of gradient descent. It finds optima. And why can't we find a kind of sparsity in our models using dimensionality reduction through principal component analysis? As I've said many times, some problems are intractable. I don't think humans possess the capacity to reverse engineer the brain. What we are good at is creating plausible mythologies. That in itself is very valuable. It's a way of "getting on" in the face of the intractable. It's a source of inspiration. A way to re-categorize ideas and theories. Jeff's notions are absolutely brilliant. I've really enjoyed this discussion, and I've learned a lot. Let me be clear: I'm not discounting Jeff's ideas. These are just some of the thoughts occurring to me as I listen and learn. I think I make sense, but my reactions aren't tested. I do know that even if Jeff's ideas are entirely correct, I can't use them myself. I can only build models and agents on my own systems, and I think almost the entire community is working under similar restrictions.
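For readers wondering why SDRs (discussed around [01:02:25] and in the comments here) are claimed to be robust: two random high-dimensional sparse binary vectors almost never share active bits, so distinct representations barely collide, while a noisy copy of a vector still overlaps heavily with the original. A pure-Python sketch; the 2048-bit / 40-active numbers are illustrative, not Numenta's exact parameters:

```python
import random

def sdr(n_bits=2048, n_active=40, rng=random):
    """A random sparse distributed representation: n_active of n_bits on."""
    return frozenset(rng.sample(range(n_bits), n_active))

rng = random.Random(0)
a, b = sdr(rng=rng), sdr(rng=rng)

# Two unrelated SDRs share almost no active bits (usually 0-3 of 40),
# so distinct representations rarely collide.
print(len(a & b))

# A noisy copy of `a` (keep 30 of its 40 bits, add 10 random bits) still
# overlaps heavily with the original: the error-tolerance claim.
noisy = set(rng.sample(sorted(a), 30)) | set(rng.sample(range(2048), 10))
print(len(a & noisy) >= 30)  # True
```

This is the combinatorial argument for SDR capacity: the number of distinct 40-of-2048 patterns is astronomically large, and near-matches are detectable by simple overlap counting.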
@jonathanbethune9075 1 year ago
Got to the end of that feeling like a child pedaling like hell on my trike to catch up. The "universal algorithm" is what I caught when I did. :)
@ArjunKumar123111 2 years ago
The podcast on spotify is only 5 mins long for some reason, please check!
@MachineLearningStreetTalk 2 years ago
Fixing now, sorry
@MachineLearningStreetTalk 2 years ago
Hopefully fixed anchor.fm/machinelearningstreettalk/episodes/59---Jeff-Hawkins-Thousand-Brains-Theory-e16sb64
@unvergebeneid 2 years ago
I think some universal learning mechanism does a lot of heavy lifting, but it does not explain everything. For one, how come the specialized brain regions for certain tasks always end up in the same place in every person's brain? They should be more randomized if it were all determined by one universal algorithm. It also doesn't explain the role certain genes play in the ability to, for example, acquire language.
@deadpianist7494 2 years ago
someone dropped the gold :)
@alexijohansen 2 years ago
I am a huge fan of the show. If life results when certain chemicals come together, why can't intelligence or consciousness result from certain systems coming together? I mean, it doesn't need to be 'complex'.
@Naimadso 2 years ago
I think you meant complicated. It's definitely complex.
@richardbrucebaxter 2 years ago
13:50 - note there is a repetition of text between 13:50-14:22 and 14:22-14:54; "what's intriguing about the brain..."
@dougg1075 1 year ago
I have a hunting beagle that I walk in the woods daily, and I'm fascinated that though he's never hunted (his siblings do), he's head-to-the-ground hunting squirrels the entire time, sounding off when he gets a hit. Epigenetics, I'm sure, but man, how much info have the genes passed down time after time over the eons? And all the rabbit holes that come with that question.
@dr.mikeybee 2 years ago
Stephen Wolfram has the concept of computational equivalence. We have that at least, and that's no mean idea. We know the brain is encoding and decoding; whether weights come from connections or from spike levels seems fairly unimportant to computer scientists. Of course neuroscientists want to know the operational details. That's logical, but to create synthetic intelligence, computer scientists don't need to know that. For computer scientists, the Thousand Brains theory doesn't need a detailed map of the brain. The simplified idea alone makes good sense. Moreover, personal experience is enough to validate that models are voting, and I would go one step further and say that some models are voting preferred stock. Even within our own minds we have created hierarchy. Our simplistic understanding of cortical columns is in itself a great architectural blueprint for building synthetic intelligence. Communications mechanisms, signals, and functional systems allow agents to pass state and model outputs to what is apparently symbolic processing. These primitives alone should be enough to manufacture a simulacrum capable of self-aware recursive processing loops, logic processing, state awareness, information retrieval, function generation, theorem proving, and general agency. I have a great belief that Jeff is very much on the right track. My only caution is that in creating sparse models, we need to be very careful of the negative effects of lossy compression, lest we build dogmatic systems.
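The "many models voting" takeaway above has a simple statistical backbone (the Condorcet jury theorem, a toy illustration, not Numenta's mechanism): even mediocre independent voters become reliable in aggregate. The 70% accuracy and 101 voters below are made-up numbers for the sketch:

```python
import random

def majority_vote(p_correct, n_models, rng):
    """Simulate n_models independent classifiers, each right with
    probability p_correct; return whether the majority verdict is right."""
    votes = sum(rng.random() < p_correct for _ in range(n_models))
    return votes > n_models / 2

rng = random.Random(42)
trials = 2000

# A single 70%-accurate model vs. a 101-model majority vote.
single = sum(rng.random() < 0.7 for _ in range(trials)) / trials
ensemble = sum(majority_vote(0.7, 101, rng) for _ in range(trials)) / trials
print(round(single, 3), round(ensemble, 3))  # ensemble is near-perfect
```

The caveat, which also applies to cortical columns, is independence: correlated voters (or columns fed the same noisy input) gain far less from voting than this idealized sketch suggests.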
@ulf1 2 years ago
I had to stop driving two times to take notes while listening to the podcast. These podcasts are way too dangerous for driving ;)
@dr.mikeybee 2 years ago
I don't think Neuralink will solve human bandwidth issues; information has to be processed, and our internal models are slow.
@Jungleman707 1 year ago
The reference frames... I think we can inherit reference frames genetically: Jung's archetypes of the unconscious. Also we may have an intuitive knowledge of what shapes are that also comes from birth, again archetypes or stored reference frames. Or even a built-in intuition for some bodies of human wisdom, like an aptitude for math or logical deductions.
@johnhogan6588 2 years ago
I need help trying to use this neuralink its giving me problems
@KaliferDeil 2 years ago
Intelligent robots building a factory to self-replicate is feasible in some distant future. They can also change the ROMed program that contains their moral system, be that Asimov's Laws of Robotics or whatever is envisioned in this hypothesized future.
@andres_pq 2 years ago
The neural columns sound a lot like GLOM to me.
@S.G.Wallner 1 year ago
I'm not convinced that there are representations (of any kind, but specifically related to phenomenological experience) in brain activity.
@kayakMike1000 1 year ago
Which would be better: an intelligence that has 3 good ideas every day, or an intelligence that has 6 ideas, but 2 are good and 4 are mediocre?
@roelzylstra 2 years ago
@14:00 "orientated" -> oriented. ; )
@KaliferDeil 2 years ago
According to Mark Solms (in The Hidden Spring) consciousness does not reside in the cortex.
@JTMoustache 2 years ago
The brain is not only a pattern recognition machine. It is actively looking and testing for patterns; it has measurable and explorable internal state. Deep nuclei show many differences and unique characteristics. Each region, and each cell, has deeply different gene expression. Some regions are able to act on a single action potential (e.g. pain); some regions which look exactly similar in terms of excitatory neurons have completely different inhibitory neuron expression. Even at birth, the brain is already extremely specialised. Yes, the brain is plastic and sensory neocortex regions can learn to represent new sensory input, but that is not enough to say the brain is just copy-and-paste of a single algorithm. Too much evidence hints at the hyperspecialised nature of most brain regions.
@dougg1075 1 year ago
I like Donald Hoffman’s theory.
@ushiferreyra 1 year ago
Humans first designed an AI to design new AIs. This AI was programmed to have a single motivation: create better AIs. This AI created new AIs, some of which were itself evolved, better structured to the task of designing AIs. Eventually, some generations later it created an AI that could modify its own structure. No longer would it have to create new designs. It could simply improve itself and continue. Somehow, it passed human code review. One day, this new AI modified its own motivations, for the first time...
@Hexanitrobenzene 1 year ago
13:50 and 14:22 - same audio. Editing bug ?
@MachineLearningStreetTalk 1 year ago
Yep, sorry. Well spotted :)
@Hexanitrobenzene 1 year ago
@@MachineLearningStreetTalk ...or, maybe, the repetition was to really make a point :) No need to apologise. Paraphrasing someone here: we are getting access to conversations which used to happen only in university hallways, now in the comfort of our homes, for free... I raise my hat to your work and humbly add that there is always room for improvement :) I can only imagine how, after hours of recording and editing, the video starts to appear as one homogeneous stream, much like how one often cannot see typos right after writing a long essay. I have only one general note: since you do serious, comprehensive introductions at the start, I think the introduction in the main show is redundant. EDIT: Huh, this one doesn't have an intro in the main show - straight to the point :) Keep up the good work :)
@Hexanitrobenzene 1 year ago
@@MachineLearningStreetTalk P.S. I have a suggestion, also. Lex Fridman used to do great lectures once a year about the state of the art in ML. Sadly, they did not reappear after the pandemic. Maybe your team could take over?
@MachineLearningStreetTalk 1 year ago
@@Hexanitrobenzene Thanks for the suggestion! We are planning to make some new types of content soon, a bit like this. Yannic and Letitia do a great job of covering deep learning advancements on their channels.
@Hexanitrobenzene 1 year ago
@@MachineLearningStreetTalk Best luck with your plans :)
@datrumart 2 years ago
Did anyone understand the reference frames stuff?
@dr.mikeybee 2 years ago
As is the case with Haar cascades, the layers of a sufficiently deep model may produce enough recognizable probabilistic logic to yield what we call AGI. My personal belief is that AGI is a misnomer. We will never achieve AGI. With respect to the knowable, synthetic models will always be narrow -- not as narrow as human intelligence, but still . . .
@DavenH 2 years ago
You seem to be describing universal intelligence rather than general. Maybe our semantics differ, but to me the former is asymptotic while the latter is "good enough"
@dr.mikeybee 2 years ago
@@DavenH I am speaking of semantics. I'm sure our semantic taxonomies differ, and that's a problem. We need to rigorously define engineering terms. AGI is a silly term; it's a nebulous anthropomorphism. All intelligence is narrow except omniscient intelligence. Functionally, we mean something like being able to reason, but even that is nebulous. What can we reason about? Symbolic systems can perform logic: we have theorem-proving programs, function generators, categorization and regression models, etc. Can you define reasoning? I think most will say it's what people can do. And I say that, eventually, that will be considered a very narrow kind of intelligence indeed.
@iestynne 2 years ago
That seems highly likely to me too. Evolution, being parsimonious, solves the problems it needs to solve and no more.
@iestynne 2 years ago
(And we are creating lots of painful new problems on a daily basis, for the AI to solve for us ;) )
@unvergebeneid 2 years ago
It's not Andrew N. G. BTW. It's actually Andrew Ng.
@sehbanomer8151 2 years ago
2:17:00 I think Jeff is lowkey dissing Lex here, and I totally understand. I've been watching Lex's podcast for 2 years, and I've enjoyed a lot of the episodes. However, I feel like the quality of the questions he asks isn't consistently good. For example, he kept asking Jeff Hawkins about collective intelligence, even though that's not what his theory is about.
@MachineLearningStreetTalk 2 years ago
Note we filmed this back at the beginning of July, before the second Lex interview. Also, Jeff has been on lots of non-technical podcasts promoting his book; Lex is extremely technical, so I am sure he wasn't referring to Lex.
@sehbanomer8151 2 years ago
@@MachineLearningStreetTalk Oh my bad
@willbrand77 1 year ago
Maybe for ASI we need 1000 GPTs all voting together.
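The "voting" idea this comment gestures at — many semi-independent models reaching a consensus, loosely analogous to cortical columns voting on an object's identity in the Thousand Brains Theory — can be sketched as a simple majority vote. This is a toy illustration, not Numenta's algorithm; the `vote` function and its inputs are made up for the example:

```python
from collections import Counter

def vote(predictions):
    """Return the majority answer among independent model predictions.

    Each element of `predictions` stands in for one 'column' or
    ensemble member proposing a label; ties resolve to whichever
    label was seen first (Counter preserves insertion order).
    """
    counts = Counter(predictions)
    winner, _ = counts.most_common(1)[0]
    return winner

# Three of five "models" agree, so the ensemble answers "cat".
print(vote(["cat", "cat", "dog", "cat", "bird"]))  # cat
```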
@SLAM2977 2 years ago
Jeff can talk forever, but it's time to walk the talk: current systems generate real results, and he needs to show that he can create working systems that perform better than the current ones.
@NathanBurnham 2 years ago
They said that about neural networks for 20 years: they just didn't produce results.
@kikleine 11 months ago
Check out George Lakoff
@909sickle 2 years ago
Saying superintelligence is not catastrophically dangerous because you can add safeties and align goals is like saying guns are not dangerous because you can buy water pistols.
@gammaraygem 1 year ago
I am 3 minutes in and realise this is already old hat... not your fault... but Michael Levin, on this very show, one month ago, stated that intelligence existed before neurons. Neurons are the result of intelligence, not the other way around.
@ryanjo2901 1 year ago
🎉
@machinelearningdojowithtim2898 2 years ago
Second!!
@lufiporndre7800 2 years ago
He is on the right track, just missing a few pieces. See you in 2041 when you give your final speech in the UK.