The Myth of Pure Intelligence

11,171 views

Machine Learning Street Talk

A day ago

In this interview on MLST, Dr. Tim Scarfe interviews Mahault Albarracin, who is the director of product for R&D at VERSES and also a PhD student in cognitive computing at the University of Quebec in Montreal. They discuss a range of topics related to consciousness, cognition, and machine learning. Subscribe now!
Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon:
/ mlst (public discord)
/ discord
/ mlstreettalk
Throughout the conversation, they touch upon various philosophical and computational concepts such as panpsychism, computationalism, and materiality. They consider the "hard problem" of consciousness, which is the question of how and why we have subjective experiences.
Albarracin shares her views on the controversial Integrated Information Theory and the open letter of opposition it received from the scientific community. She reflects on the nature of scientific critique and rivalry, advising caution in declaring entire fields of study as pseudoscientific.
A substantial part of the discussion is dedicated to the topic of science itself, where Albarracin talks about thresholds between legitimate science and pseudoscience, the role of evidence, and the importance of validating scientific methods and claims.
They touch upon language models, discussing whether they can be considered as having a "theory of mind" and the implications of assigning such properties to AI systems. Albarracin challenges the idea that there is a pure form of intelligence independent of material constraints and emphasizes the role of sociality in the development of our cognitive abilities.
Albarracin offers her thoughts on scientific endeavors, the predictability of systems, the nature of intelligence, and the processes of learning and adaptation. She gives insights into the concept of using degeneracy as a way to increase resilience within systems and the role of maintaining a degree of redundancy or extra capacity as a buffer against unforeseen events.
The conversation concludes with her discussing the potential benefits of collective intelligence, likening the adaptability and resilience of interconnected agent systems to those found in natural ecosystems.
Pod version: podcasters.spotify.com/pod/sh...
www.linkedin.com/in/mahault-a...
00:00:00 - Intro / IIT scandal
00:05:54 - Gaydar paper / What makes good science
00:10:51 - Language
00:18:16 - Intelligence
00:29:06 - X-risk
00:40:49 - Self modelling
00:43:56 - Anthropomorphisation
00:46:41 - Mediation and subjectivity
00:51:03 - Understanding
00:56:33 - Resiliency
Technical topics:
1. Integrated Information Theory (IIT) - Giulio Tononi
2. The "hard problem" of consciousness - David Chalmers
3. Panpsychism and Computationalism in philosophy of mind
4. Active Inference Framework - Karl Friston
5. Theory of Mind and its computation in AI systems
6. Noam Chomsky's views on language models and linguistics
7. Daniel Dennett's Intentional Stance theory
8. Collective intelligence and system resilience
9. Redundancy and degeneracy in complex systems
10. Michael Levin's research on bioelectricity and pattern formation
11. The role of phenomenology in cognitive science

Comments: 145
@arssve4109 4 months ago
It's widely known that non-linear systems are fundamentally unpredictable beyond a certain horizon, even if you have a perfect model. So I am puzzled about the corresponding part of the conversation here.
@therainman7777 4 months ago
No, it's not widely known because it's not true. Your statement is an oversimplification at best, and flat-out incorrect at worst.
@arssve4109 4 months ago
@@therainman7777 the unpredictability of non-linear system dynamics comes from their sensitivity to infinitesimal changes in initial conditions, leading to eventual complete divergence of solutions. It is the reason why we cannot exactly simulate how water will run out of an open tap or what the weather will be like in 2 weeks.
@arssve4109 4 months ago
@@therainman7777 by the way the field of study examining these effects is chaos theory.
@therainman7777 3 months ago
@@arssve4109 I’m aware of all that, but in your original post you claimed that non-linear systems are fundamentally unpredictable even if you have a perfect model. This is simply not true. If you have a deterministic system, and a perfect model of that system, then you can absolutely predict exactly what is going to happen, even out to very long time horizons. The problem is that we never have perfect models (and in some cases we don’t have enough compute)-but you said non-linear systems were _fundamentally_ unpredictable, even with a _perfect_ model. Neither of those is true. Non-linear systems, as long as they are deterministic, are in principle perfectly predictable. It is only in practice that we run into difficulties. But that means they are not _fundamentally_ unpredictable.
@arssve4109 3 months ago
@@therainman7777 I see what you are trying to say, but I will disagree, because numerically simulating scenarios with any perfect model is as fundamental a part of prediction as having the model: there is no prediction without the simulation part. So the model may be deterministic, but the system you model will be unpredictable due to accuracy and precision limits both on your calculation inputs and on how accurately you can measure the actual system values that you wish to simulate.
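The sensitivity to initial conditions both commenters are debating is easy to demonstrate. A minimal sketch, using the logistic map at r = 4 (a textbook chaotic system) as a stand-in for a nonlinear system; the function name and parameters are illustrative, not from the video:

```python
# Two trajectories of the logistic map x -> r*x*(1-x), chaotic at r = 4,
# started from nearly identical initial states.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturb the 10th decimal place

# The gap between the two "perfect model" runs grows exponentially
# until it is as large as the state space itself.
gaps = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {gaps[0]:.1e}, largest gap: {max(gaps):.3f}")
```

Both positions survive this example: with exact arithmetic the map is perfectly predictable in principle, but any finite-precision error in the initial condition is amplified by roughly a factor of two per step, so practical predictability has a horizon.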
@OutlastGamingLP 4 months ago
Hold on, hold on. I've only looked at what was discussed in the "intelligence" and "X-risk" portions, and I'm not impressed with the level of nuance and attention to detail in these arguments. For intelligence, "compute the maximum number of paths between two points" does not explain what those points are, nor does it capture what it means to select the optimal path out of the policies you've computed. Further, what are you even saying with "points"? Does the phrase "world states" just not have a place in the vocabulary? What do you mean "points?" Are these locations? Coordinates on a graph? Points in time? Nodes on a causal graph? This just seems like rank confusion. It's a lot of words that kind of gesture in a direction that someone might say "oh, see, it's intelligence because if you have more little lines between this 'x' node and 'y' node and you have the most lines with this 'true' tag instead of 'false,' then you're more intelligent. You can tell because there's more lines!" What does that even mean? I'm used to definitions of intelligence that describe a process that optimizes over future world-states. You have an algorithm which has a causal model of the world, how patterns in the world correspond to different projections into the future. "If this happens, then this happens, then this happens." Then, you select a particular future, or a range of futures which all share the features you care about, and you model how the actuators you control would need to affect causality in order for the chain of cause-effect to lead to that target future. You can talk further about maximizing certainty, or how counter-factuals may be a critical component to narrowing down the optimal path, but it's definitely not just "more lines between two points = more smart." That says nothing about how effective an algorithm would be in causing futures to occur which fall higher in the algorithm's preference ordering over future world-states.
If we generously assume that these "lines" are meant to represent causal pathways through time, you could hypothesize an algorithm which can compute an enormous number of counterfactuals, but which selects from that policy-space purely at random, such that its performance in the real world is actually far worse than an agent which computes a smaller set of options but has some kind of criteria for selecting the optimal policy. "Behold, artificial super intelligence! Make me a coffee!" >The robot stands up, lights itself on fire, sings a two part binaric opera, launches a nearby cat out of the window, and finally explodes This is not what I would call intelligence. As for the X-risk section. "The genius myth is a myth" is a HUMAN concept. It isn't evidence about the fundamental nature of intelligence. It's saying "ah, humans alone aren't that good at optimizing the world, they need a bunch of other humans to distribute the work in order to accomplish anything." Which, for one, just isn't completely true. It's more like a status claim. "Ah, single humans aren't really intelligent because you can't just be raised in the woods and make contributions to fundamental physics research." True, but not the point. The person in the woods is still utilizing their brain. They're not merely randomly selecting actions with no correlation between those selected actions and learned patterns of causality. A person in the woods may learn that finding higher ground to make a bed leads to being less damp when they wake up in the morning, because their brain is tracking where wetness appears and inferring what place to sleep they'd need to select in order to minimize the accumulation of moisture around where they're sleeping. Further, you can say "ah, yes, but in reality we will build many different AIs, and thus constantly talking about a single AI being a problem seems unrealistic, it just doesn't resemble the world we are making all of this effort to build! 
Are you saying we're wrong about what future world-state the cause-effect chain of our actions leads to?" Yes! That's exactly it. X-risk people don't say "we're going to be killed by a single AI because human researchers are only going to build one AI." They're saying "it doesn't matter how many AIs you build, there are two outcomes, either one AI wins by being smarter than the rest, or multiple AIs exist which can't cheaply defeat the others, so they'll cooperate and blend their utility functions weighted by the expected gains-from-trade for each of them vs. how much they'd get in the counter-factual of conflict, then that multi-agent solution is effectively a single AI with a single ordering in preferences of future world-states and the ability to coordinate actions to steer there." The alternative picture presented here just feels like it doesn't map to reality. There are a lot of nice, cheerful sounding phrases, things that make people applaud and feel warm inside, but it's not actually a story that puts its finger on the critical pieces of this puzzle and describes - technically - how they fit together into the picture we see in reality. No, sorry, intelligence isn't about having more lines on a chart, intelligence is also something that it's possible to have in a single agent, requiring multi-agent solutions to term something "intelligence" just means you'll need to invent a new definition of "intelligence" to explain why the human who's lived alone in the woods wakes up dry in the mornings, not to mention all the toy-model universes where a single agent models the results of a sequence of bit-flips, to select the bit-flips sequence which leads to the future that contains the highest reward in its value-function. You can still be smart in universes that are too small to contain more than one agent! 
If that's not "intelligence" then I don't give a fuck about "intelligence," I care about whatever algorithmic process lets that toy-model agent figure out what sequence of actions it needs to take in order to navigate to a particular future. People really, really need to get better at thinking about this stuff. It doesn't matter if it's not as fun or doesn't get you the same kind of interested and impressed expressions when you soapbox to your friends about the "myth of genius" or "computing a maximally diverse array of paths" - if we fuck this up we'll have something in our world optimizing the shit out of stuff, and it won't be optimizing it for humanity's sake unless we find out how to shape it to place our preferred futures into its target. Otherwise something else will probably be there instead.
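The toy-model agent described above (a single agent in a tiny universe selecting the bit-flip sequence that steers toward its highest-valued future) can be sketched in a few lines. Everything here is a hypothetical illustration, not anything from the video; the value function "prefer futures with more 1s" is an arbitrary stand-in:

```python
from itertools import product

def simulate(state, flips):
    # Apply a sequence of bit-flip actions to a tuple-of-bits world state.
    s = list(state)
    for i in flips:
        s[i] ^= 1
    return tuple(s)

def plan(state, value, horizon):
    # Brute-force planner: score the future reached by every possible
    # action sequence, and return the sequence whose outcome the agent
    # ranks highest in its preference ordering over future world-states.
    return max(product(range(len(state)), repeat=horizon),
               key=lambda flips: value(simulate(state, flips)))

start = (0, 0, 1, 0)
value = sum  # hypothetical preference: futures with more 1s are better
best = plan(start, value, horizon=3)
print(best, simulate(start, best))  # steers to the all-ones state
```

The comment's point survives in miniature: an agent that enumerated the same action sequences but picked one at random would "compute many paths" while steering nowhere in particular, so path-counting alone does not capture the selection step that does the optimizing.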
@redemptivedialectic6787 4 months ago
Informing people about intelligence is detrimental when the solution to its control problem is unknown.
@therainman7777 4 months ago
No one cares if you’re “impressed,” and no one is going to read all that.
@ciriusp 4 months ago
They did, and they are. I came to the comments to see if this was discussed, wasn't disappointed. Much better than I could put it.
@hamishpain8641 3 months ago
I like your breakdown of your thinking. Of course a lone intelligence can be a super intelligence within its context. Human collectives form a super-intelligence in a sense, as we are more capable of understanding and altering the world together. But as you say, this is most likely a human quirk. If I were able to add bodies and minds directly into a super organism (e.g. 'Gaia' from Isaac Asimov's Foundation series), that super organism may well outcompete an equal collection of humans, given similar desires. I would further argue that a collective intelligence, like in your scenario of systems with competing value-functions cooperating out of shared necessity, would be less functional (less maximally optimising their value-function) than a single, unified entity given the same resources. Perspectives can be gained without competition. I do find it interesting that we're sort of seeing entities being created that have what I would consider intelligence without the same kind of intrinsic value-optimisation goal. I know I've seen people discuss similar things recently, where ChatGPT and other systems are capable of textually performing intelligent tasks without the optimisation process normally associated with learning those solutions. I suppose it's still 'navigating to a preferred future' -- but it's not necessarily its own preferred future, if that makes sense. I'd be keen to hear your thoughts in a situation where most AI entities are generally intelligent systems with any preferences 'bolted' onto them through prompting or additional external modification.
@lexer_ 3 months ago
I've only read until the "make me a coffee" part, but I think you have missed the point of the discussion here. The entire premise of this part of the conversation was that the goal is unclear, and your entire argument builds upon a desirable goal existing. But that is exactly the point: your intelligence is not dependent on having a goal. And the points and lines were of course implying that these paths from point a to point b are all not just viable paths but also that you can predict the length of the path, and she even says exactly that, if not in these exact words. From the context it is very clear that she is modelling this concept as a weighted graph, and insisting on pedantically precise definitions of every word doesn't add anything to the conversation and indeed would just hurt the pacing. For the important parts they clearly take the time to define the things which they need to agree on for the conversation to be productive, but if everyone in the room already gets it then why waste the time.
@TheRolderick 4 months ago
24:01 nerd flirt
@writerightmathnation9481 4 months ago
I’m in STEM, and I don’t understand why at about 5:00, people in STEM would say that something isn’t data. Every observation is data. If you think it’s an outlier, you might not like the data or how it was collected. You may think the way it’s collected makes that bit of data not relevant to the question at hand, but even if you’re correct, that doesn’t mean it’s not data. It may be useful and relevant to some other context, but that doesn’t make it “not data”.
@mayoai197 4 months ago
Thank you for the comment. What I meant here is sometimes it's not considered something that can be used as data - for instance dreams or premonitions (especially retold in certain cultures) are rarely considered evidence for the object of the dream itself. They're discounted as subjective or unscientific. Other fields would view them under a different lens.
@sapienspace8814 4 months ago
@@mayoai197 Dreams are what the system does to test out models of the world, especially where it is safe to fail catastrophically, so the model of the world can be updated.
@mayoai197 4 months ago
@@sapienspace8814 exactly!
@stayinthepursuit8427 4 months ago
@@mayoai197 yeah a better question imo is, what can't be data? Can't everything potentially provide information?
@mayoai197 4 months ago
@@stayinthepursuit8427 absolutely!!! And it's all about the right interpretive frame
@sproccoli 4 months ago
This is by far the most interesting theory of mind discussion that i have heard in a long time. You use big words, but they aren't all flying over my head. I managed to catch a few of them. Excellent guest, excellent host. Excellent discussion.
@paxdriver 4 months ago
I love. This channel. Thank you.
@stayinthepursuit8427 4 months ago
intelligence is independent of language/ language is unconseqeutnial to intelligence
@stayinthepursuit8427 4 months ago
inconsequential*
@donaldzielke4124 4 months ago
...because.. .. (?)
@stayinthepursuit8427 4 months ago
@@donaldzielke4124 what do you mean? If you're asking why I think that is, it's simply from observing nature: every being is intelligent, some more so than us, like the planaria that never dies. And yet it doesn't speak a language, at least not what we think a language is supposed to be.
@Robert_McGarry_Poems 4 months ago
​@@stayinthepursuit8427you fail to define your terms. You haven't said anything...
@stayinthepursuit8427 4 months ago
@@Robert_McGarry_Poems fun fact , you don't even need to explicitly define intelligence in one way, to say it's independent of language. To be more precise, given any definition of intelligence, it must be independent of language. 🎤
@BeeStone-op1nc 4 months ago
I loved this. My mind has gone down this philosophical rabbit hole a number of times and it's nice to hear it in new words
@TinyShrew 4 months ago
I'm used to listening to researchers like Ilya Sutskever doing theoretical work on the advancement of AI, and I really learn a ton from them. My understanding is this is more about the philosophy of LLMs/AI, right? I'm trying my best to go through the podcast but my mind is phasing out. Maybe it's just not my cup of tea since I have an engineering background? I'll try my best to go through it all.
@NeuralDendrite 4 months ago
True, I'm going to try my best as well. Coming from a STEM background, I prefer watching more technical interviews, but I'll try to go through it. Let me know if you succeed.
@sapienspace8814 4 months ago
This video is a very abstract talk, interesting to me but much more at a philosophical level, so it is understandable that the mind can phase out on it. I highly recommend getting Sutton & Barto's book Reinforcement Learning (RL) to get the best grasp of this. I am most interested in Figure 17-1, on the left side of the figure where "u" (state space) is. If I did not have jury duty this month, I'd be focused on it, as it is the control engineer's connection to RL.
@sharif1306 4 months ago
What has Ilya ever said other than scale, scale, scale!?
@RickeyBowers 4 months ago
@@sharif1306 kzbin.info/www/bejne/d3ywpnSVibutaaM
@petretrusca2 4 months ago
I tried myself to define intelligence. Then all the podcast seemed trivial to me. 😂 What is intelligence? Think about what is not intelligent: Is a computer intelligent? A computer is just a tool. The simplest form of life, a biological virus? Well, it is just some code, just like a computer in a way. But it solves a problem with that code: it keeps existing, replicating. That doesn't mean that existing and replicating is intelligence, but solving a problem is. And how does it solve that problem of keeping existing? Through natural selection and evolution. So there is some delegation of solving that problem to the environment, which means there is communication and interaction between it and something else. That is another dimension of intelligence, the ability to interact (other than problem solving). But what is the way in which the virus and evolution keep existing? The DNA in a way stores the instruction set for replicating, but it also captures a compressed model of the world, because the replication must be done in accord with the world in which the virus lives. This is the third dimension of intelligence: knowledge compression and building a model of the world. And yet we can write a program on a computer that does all of these, and still we don't consider it intelligent. What does a virus have that a computer program doesn't? I don't think there is much left to give to the virus. So in a way this is the lower cut-off point of intelligence. Now, because intelligence is a spectrum, where do we get when we analyze more intelligent entities? I think that agency is the next step. Here we can think, for example, of the difference between the intelligence of an insect and the intelligence of a mammal. That is pretty much it, but we have not yet arrived at human intelligence. What else do humans have beyond the previous ones?
Nothing special, other than that they do everything the others do but millions of times faster, and they do it just with the brain, in imagination and with language.
Dimensions of intelligence:
- solving problems
- energy
- memory
- compression of knowledge
- sensors - interaction with other systems (perceiving)
- actuators - agency (acting)
On all dimensions: more affordances means more intelligent, more speed is more intelligent, more energy efficient is more intelligent, more capacity/scale is more intelligent.
@babel-fishai 2 months ago
This is one of the best interviews that you have done, as she makes some very interesting correlations.
@kunstrikerasochi2103 4 months ago
Great discussion
@jpdv 29 days ago
Starting my PhD in art therapy psychology next year at UQAM with Pierre Plante. I will operationalize consciousness using FEP thermodynamics and the creative process. Can't wait to bridge original case study methodology with active inference meta analyses worldwide! This is going to scale nicely and revolutionize psychology.
@commetking9746 3 months ago
The best thing about this AI channel is the human element. Your channel is just superb, honestly so stimulating and captivating it should really come with some kind of warning! I burnt my dinner yet again! So a big thank you! 😜🔥🤯
@missh1774 4 months ago
Very cool 💛
@krzysztofwos1856 3 months ago
The part of the conversation about the theory of mind begs the question: how many people can identify why and how or where they have a theory of mind? How many people do you know whose reaction to the question "Tell me about your theory of mind" would be something more substantial than "What's that?" Whenever I hear these conversations about what LLMs are and what they are not, I find it interesting that the standard applied makes most human beings effectively P-zombies.
@Neomadra 4 months ago
Phew, not sure I wanna invest 1 hour in some lecture on panpsychism 😅
@fk9277 4 months ago
Just as long as you know that "rocks bro huehuee" isn't an argument
@cacogenicist 3 months ago
I can't tell the difference between panpsychism and emergence. Tiny little bits have tiny little primitive bits of undetectable, imperceptible consciousness, or whatever -- and if the bits are structured in just the right, highly complex way, you get something conscious like us. ... on the other hand, if you put together little unconscious bits that don't have any consciousness units in just the right way, you get the same. I don't understand how you could tell them apart in principle.
@jyjjy7 3 months ago
​@@cacogenicistPanpsychism (and idealism) are only compatible with modern science through semantic games hiding that they treat consciousness as ineffable and indefinable magic. It's kind of depressing how often they confuse and derail conversations about consciousness, even at this point when we seem so close to functionally replicating or surpassing it.
@enthuesd 2 months ago
Did you listen to it? That's not what it's about
@CodexPermutatio 4 months ago
Amazing guest and episode. The time has flown by! You're doing great bringing us these fantastic interviews. This channel doesn't stop improving.
@akaalkripal5724 3 months ago
Sir, what are your views on Langan's CTMU? And is it possible to have a discussion on the CTMU - it's merits and otherwise on this channel?
@terrytorkildson2831 3 months ago
Mahault Albarracin, you should contact Dr Levin to discuss his lab's latest work. He's very giving of his time and is open to new ideas.
@jlljjl 4 months ago
Ironic? The subject matter could be better served via a more structured delivery system than just verbal language. Less talking past each other. I do value your channel.
@keithallpress9885 4 months ago
What characterizes STEM is the power of its sublanguages. It contains accumulated historical intelligence. If a non-STEM person uses words like power and energy, they are likely to be intuiting some social construct that is nothing like the STEM meaning. Artificial brain (AB) technology is new and is struggling to erect definitions. It's in a pre-scientific phase. The problem with using words like intelligence and learning and so on as technical terms is that the social meaning is too dominant. This needs to be recognized in any discussion. Really there needs to be a whole new lexicon. We need some new Newton to lay this out. I mean even the word "self" needs to be technicalised. This technicalised context would allow word borrowing without importing a social lexicon, or perhaps such words should have a recognisable prefix. The word "artificial" doesn't suit this purpose very well. To make things worse, there is a lot of thievery going on, taking words from other sciences without rigorous qualification. The ransacking of thermodynamics is particularly appalling. Of course there will be analogies, but this new science of AB is starting to resemble alchemy more than science.
@stayinthepursuit8427 4 months ago
One way could be to simply inflect each of those words with a STEM relevant ending
@keithallpress9885 4 months ago
@stayinthepursuit8427 Hmm. E already works like email e-commerce. So we have e-intelligence e-conscious e-learning then we know it's electronic and we don't have to keep asking for the dictionary definition. E-entropy and we then know it's an information measure and not thermodynamics. Same with E-free energy if you must.
@petretrusca2 4 months ago
Let's just all communicate in poems 😂 at least it would be fun
@stayinthepursuit8427 4 months ago
@@petretrusca2 lool 😂
@mootytootyfrooty 4 months ago
in the x risk part you should have asked her about how managerial economic models work as a way to predict emergence in a way that simultaneously constrains it (and not necessarily in a good way), and how this is all because physicists couldn't get a job outside of finance lol. Before that it was just about controlling belief systems but that was small scale.
@sharif1306 3 months ago
Given VERSES's recent open letter to OpenAI invoking their assist clause, I thought there might be at least one question posed to its Director of Product R&D regarding their recent claims of a breakthrough in scaling Bayesian inference. But alas, there weren't any.
@MachineLearningStreetTalk 3 months ago
This was filmed >4 months ago. I spoke about it in this article here mlst.substack.com/i/140317674/agi-clause-on-openai and also interviewed their CEO the other day and might release some of that soon
@sharif1306 3 months ago
@@MachineLearningStreetTalk Okay 👍
@kensho123456 4 months ago
I agree.
@CarpenterBrother 3 months ago
When Connor vs. Beff Jezos is dropping?
@MachineLearningStreetTalk 3 months ago
Tomorrow, supporting interviews and smack talk with Connor on patreon now
@earleyelisha 4 months ago
I appreciate this conversation.
@WhoisTheOtherVindAzz 3 months ago
These podcasts are too interesting.
@mikhail_fil 3 months ago
We use our entropy export to communicate.
@MichaelJones-ek3vx 13 days ago
Cut to idealism and avoid the conceptual traps and the incoherence with known phenomena. Analytic idealism has the best of the wisdom of panpsychism and a more nuanced and logical view. It's as close to the truth as I can understand it with words.
@ergo4422 4 months ago
If you're going to go down the panpsychism/philosophy path, then it might be worth having Bernardo Kastrup on the pod. he might have some novel thoughts about AI.
@cacogenicist 3 months ago
I wish they would not go down that path. It's useless.
@jonmichaelgalindo 4 months ago
Science is the belief that objective reality has rational laws (because there is a rational legislator).
@DonReichSdeDios a month ago
In my case I am hacked, and they keep on sticking to my data tracks, and they do nasty things and mess around with the algorithm.
@gonzoz1 3 months ago
Needs subtitles.
@MachineLearningStreetTalk 3 months ago
It has subtitles
@gonzoz1 3 months ago
@@MachineLearningStreetTalk To translate from philosophese into English.
@sapienspace8814 4 months ago
Interesting interview, thank you for sharing. -John (from Arizona, US)
@mfpears 4 months ago
7:00 Nothing should be off limits to science. N O T H I N G.
@WhoisTheOtherVindAzz 3 months ago
I don't think they claimed otherwise?
@Robert_McGarry_Poems 4 months ago
Brilliant conversation, thanks... More of this would surely do your audience some good. 😂
@rockapedra1130 4 months ago
Not impressed with the infinite prediction thing. Seems silly, things go chaotic long-term.
@WhoisTheOtherVindAzz 3 months ago
Confluence.
@GTS00000 4 months ago
Terrible
@GTS00000 4 months ago
And what is worse, empty...
@mootytootyfrooty 4 months ago
Conclusion: AI needs to vibe
@morcantbeaumont9787 3 months ago
The more I hear about the free energy principle the more convinced I become that most supporters of the idea worship it as if it was a new religion
@WhoisTheOtherVindAzz 3 months ago
I am so curious about what makes someone like you say something like this. Well, not your answer to that question directly. But, rather, what do you believe? Often I just see people with a similar opinion to you post something - more or less derogatory - but so often they don't go beyond perhaps (at best) a few claims involving very loose sentences involving some non-compatibilist sense of "free will" perhaps some rant about how special life is (but again, nothing really beyond the surface level - if at all).
@BrianMosleyUK 4 months ago
Very enjoyable discussion. Thank you so much for consistently bringing great guests. 🙏👍
@donaldzielke4124 4 months ago
Great interview! Please carry on with more brilliance from Mauhault in future videos.
@earleyelisha 4 months ago
@14:50 completely agree. Language is a set of pointers/paths that offers an entity direction through a space. LLMs are like Google Maps: they can give directions for an entity to use to traverse a space, but they don't actually experience the phenomenology of traversing it (i.e. going home, to the grocery store, the gas station, etc.)
@discipleofschaub4792 4 months ago
Not your best episode. Hope the next guest will be better!
@rthegle4432 4 months ago
Awesome, hope to see more interviews, especially from PhD students
@michaelwangCH 4 months ago
Hi, Tim. You have shown and explained DNNs from different perspectives, e.g. splines, GPs in the limit. What you explained is correct. But at the end of the day, if we look deeply into the DL framework, it should be clear to everyone in CS, stats and applied maths that a DNN is nothing but taking derivatives in very, very high dimensions - the DNN is the next generation of the method of taking derivatives (Newton and Leibniz), i.e. following the curvature in very high-dimensional spaces, which we call a manifold. That is the reason why Chomsky is right: a DNN can fit anything, as long as the curvature of the high-dimensional manifold exists with a certain degree of smoothness. Conclusion: the DNN cannot be science, because the definition of science is prediction, control, and understanding of mechanism and reasoning. Prediction alone is not science; it is an art form - e.g. a financial quant is not a scientist. Most people are confused, or believe that if they use scientific methods in their work, then they are doing science. If we say a DNN is science, we have to say that taking the derivative of a function is science as well, which is absurd.
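The comment's core claim - that training a DNN is just repeated derivative-taking over a loss surface - can be sketched in miniature. This is a hypothetical one-parameter toy, not any real DL framework; real networks do the same thing analytically (autodiff) over millions of dimensions:

```python
# Toy illustration: "training" is nothing but repeatedly taking a
# derivative of a loss and stepping against it (gradient descent).

def loss(w, data):
    # mean squared error of the 1-parameter model y = w * x
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def dloss_dw(w, data, eps=1e-6):
    # central finite difference: the derivative Newton/Leibniz would take;
    # autodiff computes this analytically per parameter in a real DNN
    return (loss(w + eps, data) - loss(w - eps, data)) / (2 * eps)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # generated by y = 2x
w = 0.0
for _ in range(200):
    w -= 0.1 * dloss_dw(w, data)  # one gradient-descent step

print(round(w, 4))  # converges to 2.0
```

The loop is the whole story: no mechanism, no reasoning, just following the curvature of the loss downhill - which is exactly the comment's point about prediction versus understanding.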
@Adhil_parammel 4 months ago
28:58 William James Sidis
@icykenny92 4 months ago
For whatever reason I thought it was JennaMarbles... 💀
@mayoai197 4 months ago
It's the hair ^^
@muhammed5667 4 months ago
Or Katniss Everdeen
@CipherOne 4 months ago
Way more interesting than Jenna Marbles. Jenna’s still cool though
@sproccoli 4 months ago
i bet her dog is much smarter than kermit
@asdf8asdf8asdf8asdf 4 months ago
Around 41m - "...how does a self-model emerge?" Well... through group selection and individual fitness: the best regulator (model) will be a 'copy' of the same system (*very* loose language, sorry!) - anything that successfully, actively manages itself will have a high-fidelity (more looseness, so sorry!) copy of itself (Ashby)
@sproccoli 4 months ago
makes sense to me
@jeffd7180 4 months ago
BUY VERSES stock! Might be the next big stock
@writerightmathnation9481 4 months ago
I think it's strange to say that an individual agent cannot have goals. Perhaps my notion of an individual agent is more general than whatever Scarfe is talking about.
@MachineLearningStreetTalk 4 months ago
mlst.substack.com/p/agentialism-and-the-free-energy-principle
@sapienspace8814 4 months ago
@MachineLearningStreetTalk The individual agent does have adaptive (creative) "goals" as an adaptive internal reinforcement-learning signal, at least in RL. Your microwave in the link made me laugh - maybe some microtubules in there 🤣. I will need to listen to that audio with Dr. Scarfe (after I "reward" myself with some candy).
@MachineLearningStreetTalk 4 months ago
@sapienspace8814 RL agents have goals: they are explicitly programmed, and that is their main form of brittleness. Natural agents don't - goals don't really exist (as we understand them); they are an instrumental fiction, as I explained in that linked article.
@sapienspace8814 4 months ago
@MachineLearningStreetTalk I agree that RL agents have explicitly programmed external "goals" (reward/punishment), but not explicitly programmed internal "goals" (reward/punishment). I think part of the problem is what a "goal" is. It could merely be the maximization of entropy, or one of the natural fields of physics (e.g. electromagnetism, inertia, gravity, etc.).
@MachineLearningStreetTalk 4 months ago
@sapienspace8814 I understand what you are saying: the "intrinsic" or "instrumental" motivation of agents in service of the predefined end goal, i.e. "reward", which materialises implicitly as a function of the dynamics - a similar idea to how LLMs optimise perplexity explicitly but, some say, have implicit "emergent reasoning capability". To be clear though, what you think of as an "external" goal, I think of as an internal goal: this is an agent with an explicit goal or "direction", regardless of any instrumental option learning, so this is still "top down" rather than "bottom up" as it is in the natural world. My main argument is that goals are a fiction in the real world; they are just the way we humans understand things - really just epiphenomena - and it might be questionable to design them into AI explicitly, especially if said goals are not universal: the "existence imperative" seems universal(-ish), the average reward-hacking function in RL definitely doesn't! I wish Sutton would explain why he thinks anthro-priors are bad (bitter lesson) in everything except goals! Thanks a lot for the comments 🙏
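The distinction this thread keeps circling - an RL agent's "goal" being a reward function the designer imposes top-down - can be sketched minimally. A hypothetical toy with no RL library; the target position 10 is an arbitrary stand-in for any designer-chosen reward:

```python
# Sketch of the "explicitly programmed goal" point: the agent has no
# goal of its own; its behaviour is entirely in service of a reward
# function hard-coded from the top down.

def reward(state):
    return -abs(state - 10)  # the designer's goal: be at position 10

def greedy_step(state):
    # the agent just climbs whatever reward it was handed
    return max([state - 1, state + 1], key=reward)

s = 0
for _ in range(20):
    s = greedy_step(s)

print(s)  # 10 -- exactly what the reward dictates, nothing more
```

Reward hacking is the same picture from the other side: the agent optimises the programmed proxy faithfully, whether or not it matches what the designer actually wanted - the brittleness mentioned above.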
@AndyBarbosa96 4 months ago
Great interview, thanks!
How can we add knowledge to AI agents?
49:57