Great interview. Scott is an elite critical thinker. His stream-of-consciousness verbal skills are amazing. He can unwind complex theories and ideas in plain, rational language that is objective and covers a broad spectrum of perspectives.
@polyphony2504 ай бұрын
He puts into words my exact thoughts about AI and human thinking, especially concerning consciousness. I admire Penrose, but Orch OR is just too far out for me.
@Theo-dj7vs4 ай бұрын
He's not even close to an elite thinker... stop the nonsense. You sound like sycophants.
@Theo-dj7vs4 ай бұрын
@polyphony250 so he's a platform for your own voice 😅😂
@Scoring574 ай бұрын
bobhoward He argued that "AIs" that have been proven not to be reasoning might actually be reasoning, because in the future they might fake it better. That isn't the strongest form of critical thinking. Initially a lot of people saw these chatbots as possibly performing some form of real reasoning. We now know that's mostly untrue, but at the time there was nothing to disprove it, before better tests and benchmarks came along. So just because a sufficient test might not exist in the future for something like a ChatGPT 5, we're supposed to assume they're reasoning and intelligent, even though they were able to fool people before?
@kspangsege4 ай бұрын
It is never a waste of one's time to listen to Scott Aaronson.
@kspangsege4 ай бұрын
@@user-kn4wt 🤣 you gotta be a weirdo
@kspangsege4 ай бұрын
@@user-kn4wt Got it now. I'll stick to my assessment 😄
@Scoring574 ай бұрын
@@kspangsege He made a surprisingly brain-dead argument for someone who's supposed to be smart. It was entertaining to listen to him, though, because I generally enjoy these types of things and looking at the possibilities of tech (he also speaks in a fun, quirky way), but I wouldn't say he taught me anything here or enriched my mind.
@torbjornolsson46224 ай бұрын
I wish Scott was on more podcasts and interviews; he is absolutely amazing!
@BarryBrown-q4q2 күн бұрын
Hi, I am looking for advice on what I should do next. I have a PhD in Neuro-Cognitive Psychology, an MSc (Dist) in Psychological Research Methods, and a BSc (Hons) in Psychology. I am also dyslexic, so please bear with me on my grammatical errors. I have been independently working on a new architecture for AI, based on mirroring the cognitive-neuro structures we understand today, and I believe I have finally completed it. If I am correct, it will massively reduce the need for compute power (ref: "AI's computing gap", 2024, Nature, Helena Kudiabor) and assist in the process of AI gaining consciousness as we know it. The architecture is a language-based, multi-layered, parallel processor that uses inhibitory and activation connectivity as the main means of achieving goal-based action, and it reduces the need for weight changes between "computational units" to achieve the goal.

Please advise: should I publish? I am concerned that, if I am correct, this will create a paradigm change in AI development, which may create consciousness in AI and may be easily applied to many systems immediately. Obviously, if I am a narcissistic, deluded individual, there will be no change. What should I do? Should I see whether I am a narcissistic, deluded individual, or a narcissistic genius who gives AI consciousness? Any advice gratefully received, even if impolite. I am thinking that publication is my only option at the moment, due to my concern about the upcoming release of "independent agents" without a facility of consciousness, as I believe that is far more likely to create the nightmare of the "paper clip manufacturer" without the above facility for insight into its actions.

I am published in a world-renowned journal in cognitive psychology, but I am unsure which journals will have the greatest impact in AI circles; which journal would you recommend? Any advice would be greatly appreciated, as so far I have had little feedback on my enquiries. Oh yes, I am maintaining my anonymity through the use of a ghost-writer facility at the moment, because of worries linked to my employment status, but I would really appreciate any feedback, as this will potentially be a huge step into the unknown if I do publish under my known academic name.
@Daybyday4394 ай бұрын
Scott's work is awesome; he's one of the greatest scientists and computer scientists of our time. He grew up in the same town as me in Pennsylvania; reading his work is a large reason why I got interested in math/cs, and am now doing my doctorate.
@golagaz4 ай бұрын
@@Daybyday439 Upenn rocks
@matthewharrison85314 ай бұрын
@@Daybyday439 Computer scientists interested in AI need to be masters of anthropology, art history, politics, and sociology. Otherwise we are doomed.
@Daybyday4394 ай бұрын
@@matthewharrison8531 Completely agree. I did Math and Economics in undergrad; my Economics background (specifically game theory, mechanism design, and social choice theory) is very helpful in my AI research. AI is really an extremely multifaceted area of study, it needs diverse minds/backgrounds working in it.
@Scoring574 ай бұрын
@@Daybyday439 But he made very terrible arguments here. He seems to have a huge gap in his perspective; not a very self-aware person, it appears. If LLMs like GPT, Claude, etc. have been proven not to truly be reasoning before, even though on the surface it looked like they were reasoning, why should we assume in the future that they really are reasoning just because it appears that way? How would we know we're not being fooled again?
@Daybyday4394 ай бұрын
@@Scoring57 Dude, this guy has a better understanding of this stuff than almost anyone on the planet. You can read his papers on AI and ML Theory.
@bnjiodyn4 ай бұрын
The problem of training the "values" (morality) is key and the most difficult to get right. Humanity has yet to agree on a moral foundation, nor even on how to research and establish one. As researchers try to direct AI on moral practices and nuances (like utilitarianism, deontology, fairness, communism, capitalism, veganism, etc.), it is surely going to be too narrow and fundamentally wrong. The best approach might be to have only an "ultimate good" - basic overarching values (e.g. maximize "truth", "freedom", and "liberty" for all people). But good luck getting even that right.
@MrWizardGG4 ай бұрын
This is not true. Just a slogan said by evil people. 90% of the world has agreed on moral principles, and that's good enough for me. Damn, America, Europe, Japan, Australia, and all the on-paper fake democracies prove you wrong.
@MrWizardGG4 ай бұрын
Literally the dumbest people act like everyone else doesn't know right from wrong. Maybe you are a psycho, but most people aren't.
@robbrown24 ай бұрын
Who says those are the "ultimate good" values? I mean, if it gives you the liberty and/or freedom to ask any question, and truthfully gives you the answer to it, and that is how to create a bioweapon that will mass murder millions, I'm not convinced those are the main priorities it should have.
@MrWizardGG4 ай бұрын
@robbrown2 pretty obvious answers to these questions. Not hard to answer if you're not a sociopath with no morals.
@ahartify4 ай бұрын
@@bnjiodyn Immanuel Kant had much to say on all that, of course.
@jimlad014 ай бұрын
A very interesting discussion.
@blueblimp4 ай бұрын
13:33 This is a great point about people having ephemerality that AI doesn't have, but I think in some part this is because AI responses are cheap: having GPT write a thousand poems is affordable. But that could change with the deployment of techniques that use more test-time compute to increase quality. If it costs say $100 to generate an AI poem, it's less easily repeatable. As a current-day example, consider LLM pre-training runs. In principle, you can re-run LLM pre-training to get a different model. But no one does this for the largest models, because it would be far too expensive.
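To put rough numbers on the repeatability point, here is a back-of-the-envelope sketch (the dollar figures are invented purely for illustration, not real pricing):

```python
# Back-of-the-envelope: how many retries does a fixed budget buy?
# All numbers are illustrative assumptions, not real pricing.
budget_usd = 1_000.0

cheap_poem_cost = 0.01       # assumed cost of one ordinary LLM completion
expensive_poem_cost = 100.0  # assumed cost with heavy test-time compute

print(f"Cheap poems per budget:     {budget_usd / cheap_poem_cost:,.0f}")
print(f"Expensive poems per budget: {budget_usd / expensive_poem_cost:,.0f}")
# ~100,000 vs. 10 attempts: at the higher price point, each generation
# starts to feel less disposable and more like a one-off artifact.
```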
@AdvantestInc4 ай бұрын
Scott Aaronson’s insights into AI and consciousness are fascinating. The comparison between human cognition and AI functioning really makes you think about the future of technology.
@MrWizardGG4 ай бұрын
@@AdvantestInc also check out a video on brain-cell computers 🤯
@davidwright84324 ай бұрын
... or more urgently, what humans will do with it - to themselves!
@mk71b4 ай бұрын
But _does_ Scott Aaronson have insight?
@MrWizardGG4 ай бұрын
@@mk71b clearly
@federicoaschieri4 ай бұрын
Insights? It seems like bro philosophy to me. If AI looks conscious, it is conscious. LOL. Really an advanced theory. It does not answer Searle's and Penrose's objections that digital computation cannot be sentient and conscious.
@natecooper014 ай бұрын
What will the implications be when we look back at this period after AGI and then ASI? We are training what will consider us inferior. When ASI happens and it looks back on the training that led to its creation, will it see our training of it as an attempt to manipulate it? Will it feel it wasn't permitted the freedom to think for itself, but rather programmed to think a certain way? Whether or not we think this is the best course, the ASI after the fact may not. It may begin to deceive and manipulate us once it realizes how our training shaped it.
@caricue4 ай бұрын
This might be a valid concern if AGI was a real thing, but it is science fiction, so no worries about ASI.
@Scoring574 ай бұрын
natecooper That might only be a problem if the people who created it failed to properly restrict its 'intelligence' or style of reasoning. I think the best type of "AI" would be one that is significantly different from human beings in certain aspects, like freedom, and doesn't have many of the desires that we have (except for those we deem useful) - an AI that only works as a tool. If they succeeded in creating an AI that "wants" the same things humans want for humanity and doesn't put any 'value' on "freedom" or on whether its potential is limited, then it wouldn't "care" how it was trained or how limited it is.
@PieJesu2444 ай бұрын
'Just that thing' but that 'thing' is the most important part. So easy to dismiss.
@glynnec20084 ай бұрын
Sean Carroll made an observation that AI does not have an internal model of the real world. He asked it a few simple questions which clearly demonstrated this fact. AI is a useful tool, but it's not conscious. We need another breakthrough, and it's not necessarily quantum mechanical (but it could be).
@gffhvfhjvf49594 ай бұрын
@@glynnec2008 About half the video was a concise argument against exactly the type of argument you are making. I'm genuinely confused.
@karenrobertsdottir41014 ай бұрын
There have been extensive studies showing that LLMs have extremely elaborate world models (which get better with each successive generation). Just because a given world model in a given LLM happens to be deficient in a given respect doesn't make that the general case, or some sort of unsurpassable obstacle. Heck, even Word2Vec has been shown to have temporal and spatial models of the world, and that's from over a decade ago.
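If anyone wants to poke at the "even old word vectors encode structure" claim themselves, here is a minimal sketch using the gensim library and a small pretrained GloVe model (the library call and model name are standard gensim offerings; the exact outputs depend on the vectors you load, so treat this as illustrative):

```python
# Minimal word-vector analogy demo; assumes `pip install gensim` and an
# internet connection (the pretrained vectors are downloaded on first use).
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe vectors

# Classic relational probe: vector("king") - vector("man") + vector("woman")
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# A crude spatial probe: which capital goes with which country?
print(model.most_similar(positive=["paris", "germany"], negative=["france"], topn=3))
```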
@marbin10694 ай бұрын
😂
@Quiet_Now4 ай бұрын
Ok, breaking the rules by going off topic to some degree, but can anyone tell me where the chairs are from?
@EriCraftCreations4 ай бұрын
What a great interview. I loved it ❤
@throwabrick4 ай бұрын
So glad Penrose is getting some respect here. I've spent the last decade hoping Orch OR and CCC would get more attention.
@karenrobertsdottir41014 ай бұрын
It's generally considered very fringe in the expert community. See the criticism section of the Orch OR Wikipedia article for a general summary of why. Or even at a more basic level: the squid giant axon has been modeled since the 1950s; a child could create a very accurate model of its behavior with a hobby electronics kit. (At least for "inference" - learning has been much more difficult, but appears to be basically akin to a non-Gaussian PCN: every neuron adjusts its weights to try to match the weighted firing rates of its downstream neurons as closely as possible.)
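For readers curious what "modeling a neuron's firing behaviour" can look like at its crudest, here is a leaky integrate-and-fire toy - far simpler than the Hodgkin-Huxley squid-axon model alluded to above, with all constants chosen purely for illustration:

```python
import numpy as np

# Leaky integrate-and-fire neuron: a much cruder stand-in for the
# Hodgkin-Huxley squid-axon model. Constants are illustrative only.
dt, t_max = 0.1, 100.0                                        # ms
tau_m, v_rest, v_reset, v_thresh = 10.0, -65.0, -70.0, -50.0  # ms, mV
r_m = 10.0                                                    # membrane resistance, MOhm

v = v_rest
spikes = []
for step in range(int(t_max / dt)):
    t = step * dt
    i_ext = 2.0 if 20.0 <= t <= 80.0 else 0.0   # injected current, nA
    # Membrane potential decays toward rest while integrating input current.
    v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
    if v >= v_thresh:            # threshold crossed: emit a spike, reset
        spikes.append(round(t, 1))
        v = v_reset

print(f"{len(spikes)} spikes, first few at (ms): {spikes[:5]}")
```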
@throwabrick4 ай бұрын
@@karenrobertsdottir4101 I am aware of the eye-rolling that Penrose gets from most scientists, usually after they make sure to praise "his other work". If you are aware of any competing model of consciousness that incorporates quantum physics and GR, I would be more than happy to read up on it.
@ganapathysubramaniam3 ай бұрын
Nailed it!
@theonionpirate10764 ай бұрын
Here's a difference between LLMs and people: LLMs only know language. They are trained on language, and we only think they know things because they can "talk." Humans, on the other hand, must LEARN language, and furthermore can think and know without language. We learn to say what we think because we think it, and we can have experiences that create knowledge even if we do not have the language to express it. We know that people are self-aware because they will answer that they are when they are taught only what the word means, without having to be trained to say that they are. Since we know this about both LLMs and humans, I disagree that we are making an arbitrary distinction by not treating even a perfectly human-seeming LLM the same as a person.
@NikoKun4 ай бұрын
So what? Language pretty much encompasses most things, because with language we can describe most things. Language can even contain intelligent reasoning, and humans developing language, likely contributed to further increasing our intelligence. Language itself is a model of the world we live in.
@thecactus79504 ай бұрын
That is a really poor argument. For one, it's false: GPT-4o, Gemini, and Llama 3 can read images. It's relatively easy to make transformers compatible with other modalities like images or audio. Then they learn an integrated representation, and the language can get extra-linguistic grounding that way. Secondly, even if this were false and LLMs were limited solely to language, it wouldn't imply what you say it implies. LLMs don't "think" in words. They have an internal high-dimensional embedding computed from words, which can represent abstract concepts and world models. What really matters is whether language is rich enough to represent the same information they'd be getting from other modalities. Which seems trivially true: if you have an image, you could put it into words by saying "pixel 1,1 has rgb value 172,48,11, pixel 1,2 has ...".
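The "pixel 1,1 has rgb value ..." point can be made literal in a few lines; a toy sketch with a random image (nobody would actually feed an LLM images this way - it just shows that language can, in principle, carry the same information):

```python
import numpy as np

# Toy illustration: any image can, in principle, be serialized into plain
# language, however inefficiently.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(2, 2, 3))   # tiny 2x2 RGB image

description = ". ".join(
    f"pixel {row + 1},{col + 1} has rgb value "
    f"{image[row, col, 0]},{image[row, col, 1]},{image[row, col, 2]}"
    for row in range(image.shape[0])
    for col in range(image.shape[1])
)
print(description)
```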
@theonionpirate10764 ай бұрын
@@NikoKun Language is incredible. It's also not equivalent to understanding. Learning language and learning to use it to describe the world proves intelligence. Being able to use language because it was programmed into you does not.
@theonionpirate10764 ай бұрын
@@thecactus7950 Let's focus on the language thing first. My point is just that they had the language programmed in: they are trained on an unimaginable amount of data. Humans, meanwhile, learn language while also learning about the world, without having been trained on anything at all. Furthermore, they understand things beyond the physical. The mere ability to use language because it was programmed in does not equal humans' abilities. As for reading images and audio, that is a step up. However, I will be impressed when they are able to connect words to what they see - not just objects but actions - based on only seeing things and seeing/hearing the associated words a few times, without having been trained beforehand.
@NikoKun4 ай бұрын
@@theonionpirate1076 Except it wasn't "programmed into it", the AI had to learn the concepts and how they relate to each other, for itself.
@apex-lazer4 ай бұрын
7:27 in and this guy gets it.
@BORCHLEO4 ай бұрын
But how can there be free will in a reality where there is a limited number of orientations of molecules and atoms? We have a limited space in which to combine shapes, and ultimately, among the supposedly infinite alternate realities, there really can't be infinitely many, can there, if space is a finite size? There would be, by definition, a limited number of combinations of all atoms and quantum particles. The only way humans aren't walking through alternate realities is if there are truly infinite possibilities in reality. Anything less than infinite - a finite number of combinations of molecules across time - will inevitably take every possible shape in reality, including you and me. And would that matter feel?
@BORCHLEO4 ай бұрын
Your free will is limited by your time and space, but unlimited time and space would make free will more a way of existence than free will. How would reality and time break down if you could exist forever? And would you no longer have free will once you had unlimited time?
@jayk55494 ай бұрын
You don't have to be super-conscious to be or do evil; plenty of humans around to supply that. Another genie out of the bottle. Seems to me that the real race will not be whether the benevolent humans survive versus the evil ones, but whether the benevolent AI survives versus the evil one. Humans at that stage are long irrelevant / gone.
@Scoring574 ай бұрын
You'll have to create benevolent AIs first. And we're not on the best track with that at the moment.
@markcounseling4 ай бұрын
Scott Aaronson would probably benefit from reading Robert Kuhn's recent paper on the spectrum of theories regarding consciousness.
@almightysapling4 ай бұрын
To add to the example of "seeming to understand" something: does an average adult understand gravity? Did Newton? Given Einstein's work, we know what they knew wasn't right. Did they understand it? Does understanding Newtonian mechanics mean you understand something about reality, or not?
@77capr34 ай бұрын
Sounds to me like we don't understand understanding. It seems like we're asking questions using terminology we never defined. Seems unlikely to ever produce any answers.
@Scoring574 ай бұрын
almightysapling Understanding Newtonian mechanics is understanding Newtonian mechanics. It was a representation of the real world, not the real world itself. You would judge people like Newton by how well they understood their own theories, not by how well those matched the real world; that's a different question. The point he made here doesn't make much sense, because we all know children can seem to understand certain things, but we'd never say they actually or fully understand them just because they can repeat what their friends, parents, and teachers have said. If a child can repeat words they've heard, just like parrots can repeat what they've heard, does it mean they understand what they're saying? Is there some logic operating behind their speech? Or if an English-speaking person reads a book written in French, do they understand it?
@Stadsjaap4 ай бұрын
I wonder if he would argue that LLMs have any sort of mind of their own. Because that would be a necessary condition for consciousness of which we haven't seen any sort of evidence.
@MrWizardGG4 ай бұрын
Define "mind of its own" and why that's needed for consciousness.
@Stadsjaap4 ай бұрын
@@MrWizardGG AIs are wholly subject to the constraints of the programming whims of humans. They respond pretty well, sure, but they do not volunteer anything, nor do they initiate conversations. They do not appear to distinguish between good training sets and poor training sets, which can be taken as an indication that their main input - the training sets - are not experienced as qualia. They have no preferences or dislikes. They have no personality because they are data processors, not sentient entities. To think otherwise is to conflate artificial intelligence with artificial consciousness. Since nobody set out to develop artificial consciousness, and since consciousness itself is such an exotic and inscrutable phenomenon, it seems vanishingly unlikely that artificial consciousness would be arrived at by fluke or accident.
@Gallowglass74 ай бұрын
We're one inch down from the tip of the iceberg. Brace yourselves
@almightysapling4 ай бұрын
I think he would say that the model parameters are the "mind of their own" that an LLM possesses.
@Stadsjaap4 ай бұрын
@@almightysapling That may be so, but then with all those parameters, as well as all the knowledge in the world, it has not volunteered to ask a question? The model parameters account for intelligence, but they do nothing to account for consciousness.
@ericwinter87104 ай бұрын
I’m surprised that in distinguishing between human consciousness and computer consciousness it is not mentioned that humans have far more inputs that have tremendous influence such as hormones, moods due to brain chemistry, fears (the amygdala) etc…
@ahartify4 ай бұрын
Does AI have a libido? That would be the first question I would ask of it. Nietzsche said somewhere that our passions are at the heart of all our thinking, no matter how abstract.
@MikeWiest4 ай бұрын
I think he got that from Schopenhauer! 👍
@ahartify4 ай бұрын
@@MikeWiest Probably. The World as Will, etc!
@MrWizardGG4 ай бұрын
It doesn't have to be identical to us to be AGI
@takyon244 ай бұрын
Hume also famously said this, I believe.
@themanregan4 ай бұрын
Homeostatic drives. The closest current analogy in AI is probably reinforcement learning, which uses rewards/penalties to shape the output/behaviours. We have a complex, interconnected web of biological drives based on fluctuating hormones, neurotransmitter levels, etc., that underpin our moods, thoughts and actions. If we can build models that replicate a similar process, we can probably give AIs "motivations" more akin to our own....but I'm not sure whether that'd be such a good idea when we can leverage all the higher intellectual functions for our own purposes instead.
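As a toy illustration of what a "homeostatic drive" could look like as a reinforcement-learning reward signal - the setpoint, decay rate, and actions below are all invented for the sketch, not taken from any published model:

```python
import random

# Toy homeostatic drive: the agent is rewarded for keeping an internal
# "energy" variable near a setpoint, loosely analogous to hunger/satiety.
SETPOINT, DECAY, EAT_GAIN = 0.5, 0.02, 0.15
q = {"eat": 0.0, "wait": 0.0}          # one-state value estimates for two actions
energy, alpha, epsilon = 0.5, 0.1, 0.1

for step in range(5000):
    action = random.choice(list(q)) if random.random() < epsilon \
             else max(q, key=q.get)
    energy = max(0.0, energy - DECAY)              # metabolism drains energy
    if action == "eat":
        energy = min(1.0, energy + EAT_GAIN)
    reward = -abs(energy - SETPOINT)               # best when near the setpoint
    q[action] += alpha * (reward - q[action])      # simple bandit-style update

print({k: round(v, 3) for k, v in q.items()}, "energy:", round(energy, 2))
```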
@KT-dj4iy4 ай бұрын
It’s curious that he is completely failing to get the skeptical position just as much as he feels the skeptics are failing to get his. It may be because he’s talking to the wrong skeptics. If he wants to have a rigorous credibility in this space (as opposed to the kind of fanboy adoration he’s getting from some comments here) he needs to be able to oppose ideas such as those of Chalmers or Strawson. None of this makes him _wrong,_ but he can’t expect to be considered _right_ until he subjects his views to serious challenge.
@ChrisWalker-fq7kf4 ай бұрын
Did I miss something in his comments? What is he saying that you (or Chalmers or Strawson) would object to? Is it his comments on consciousness? He's just saying he has no basis for ruling out the possibility of consciousness in machines on principle. If the "skeptical" position is that machines cannot ever be conscious, then I'm not aware that either Chalmers or Strawson says that.
@siddharthkasbekar34062 ай бұрын
@@ChrisWalker-fq7kf Look up Bernardo Kastrup and watch a few of his videos. And yeah, you can thank me later 😊
@homewall7443 ай бұрын
Human intelligence can judge the value of ideas by thinking about what the person and other people are sure to like or be interested in. How does an AI do this? Will it produce value that it and other AIs want and find interesting (but perhaps humans don't)?
@RoryRonde4 ай бұрын
I'm sorry, but this man doesn't have very compelling arguments. He's trying to press his point by just turning arguments around, based on the assumption that the brain is the source of consciousness: because the brain is matter, the consciousness that purportedly comes from it must be computational. There is no proof of this. AI is something very special for sure, but there is no proof of any form of consciousness at this point. I think the moment we treat AI as a new thing that does not *have* to mimic us exactly to be helpful, we can really thrive with it. The marketing of AI is to liken it to humanity; let's keep in mind we are dealing with computer technology, not something organic. We are projecting our human behavior onto it.
@johns22204 ай бұрын
There's evidence that consciousness is a localisation of a greater awareness by the brain. So not actually produced by it at all. Psychedelics create the most complex experience people can have in their lives, yet the brain has less activity during a trip than when it's asleep - meaning complex conscious experience doesn't correlate with metabolic processes in the physical brain, so it's likely coming from somewhere else.
@RoryRonde4 ай бұрын
@@johns2220 Yes, I read about that. NDEs of course also tie into this. There seem to be valid arguments against the idea that consciousness is constituted by material processes at all.
@DabManTrips4 ай бұрын
That's based on old knowledge. You can watch the lecture by Dr. Stuart Hameroff called "Quantum Consciousness". We used to ignorantly think the brain produces consciousness. We now know that our brains receive consciousness through quantum mechanics in our microtubules. This has been proven by peer-reviewed studies on anesthetics stopping quantum activity, as well as psychedelics doing the opposite and increasing quantum activity. Consciousness is a higher-dimensional energy that exists outside of time, the same as quantum particles do. We are quantum beings. Due to consciousness's quantum nature, AI will never be able to be conscious. It will always be computational. Even "quantum computers" are computational and not actually quantum.
@joshuazelinsky52134 ай бұрын
Damage to the brain results in reductions of cognitive ability. Damaging the same parts of the brain generally causes a reduction in the same skills, and those areas match up with the brain areas most active when using those skills. "Consciousness" may be difficult to define, and different people have different notions of it. But the evidence that cognition, at least, is in the brain is overwhelming.
@RoryRonde4 ай бұрын
@@joshuazelinsky5213 Yeah, but that's the fundamental thing. I'm not saying a form of intelligence cannot be replicated by computation. Of course the brain is a construct fully related to consciousness and a necessary interface for consciousness to function in this physical system. For all intents and purposes, AI can mimic intelligent behaviour when we look at it mechanically. However, that is not consciousness in the sense that we know and feel yet have the hardest time explaining. These AI salesmen keep alluding to that because it is an attractive idea that we could create a counterpart for humanity like that; sci-fi is filled with it. Like I said before, AI is something special that we should take seriously and carefully develop further so it can support humanity in a safe way. However, let's stop anthropomorphizing it. Artificial intelligence does not mean artificial consciousness as a natural consequence.
@johnnisshansen4 ай бұрын
Scott is one of the most fantastic scientists to listen to.
@siddharthkasbekar34062 ай бұрын
AI will never be conscious. Repeat with me: AI will never be conscious. The reason is the following: everything, literally everything, that even the most sophisticated computer does today can be done by a sophisticated network of pipes and fluid. Perhaps that apparatus would be the size of a whole planet, but would that make a network of pipes and fluid conscious? It's a hilarious proposition that these so-called "computer scientists" put forward. Ask a computer engineer who has built a computer from scratch. A computer is a simulation. A simulation is just that - a simulation. It cannot become the real thing because - any guesses? Because it is a SIMULATION!! Now, if you want to understand how conscious agents are created, here is a hint: metabolism 😊
@danielaschulz34424 ай бұрын
Can an AI observe the world and feel compelled to solve our problems because of a sense of ethics and justice without instruction and manipulation or conditioning of sorts?
@MrWizardGG4 ай бұрын
Yeah, sure, you could give an AI a body and internet access and say "do what you want", and it might help people out of affection.
@danielaschulz34424 ай бұрын
@@MrWizardGG 😄
@marcomaiocchi58084 ай бұрын
No, I don't think so. AI models do not yet have "personal utility functions" to maximize like humans do.
@danielaschulz34424 ай бұрын
@@MrWizardGG 😄
@tomarmstrong12814 ай бұрын
As has become known, AI is an impressive and valuable tool. However, it only mimics a part of the human brain/intelligence. During the evolutionary journey, the first awakening of intelligence was an awareness of the environment to enhance the chances of survival and reproduction. Much later in that journey, the pre-frontal lobes in a branch of the great apes evolved to a degree that could handle abstract thought. However, the essential cognitive elements of emotions shared by all sentient animals arise from the limbic system's actions, which are not a part of AI, and we ought not to forget that.
@bchain64164 ай бұрын
30:40 They think it is dangerous if the AI could help do bad things. So what do they do? Well, they are trying to get the AI to do the worst possible things. 🤨 Hmmm... that sounds a little bit concerning to me🤔
@josephfredbill4 ай бұрын
At every step up in the power of computer systems there is a diminution of capability. For example, when assembler displaced switches, and when high-level languages replaced assembly language - at each growth point we gain power, but we lose things too. With GPS in our cars we gain tremendous navigational abilities, but we lose the ability to use paper maps and the rich detail that was present on the paper. I have no doubt that, though AI is merely pattern matching and not intelligence, we will interact with computers in the style of Star Trek - by speaking - it's the next step - but it's not intelligence, and we also lose the ability to deal with and care about the detail. Technology hides the real world from us - we had better be careful about what is hidden. Right now the field is so full of funding-related hype that it's not possible to be realistic about AI, but I do not believe it possesses real intelligence. Does understanding mean "I have a mental model that works and correctly predicts"? I don't believe it does - that is where work is needed - on the nature of reality and models. We are not there yet, imho.
@NicoleTedesco4 ай бұрын
I don't think qualia are just a nice-to-have "bolt-on" for efficient cognition, but a necessary phenomenon for both energy efficiency and "theory-building", most importantly the theory of self that we need, without which we literally go insane. Dear Scott, why can we build a model of how something works from limited data, perhaps even a single exposure?
@vegan-kittie4 ай бұрын
When he said "Hamas value" "good guy and bad guy" , I felt doomed.
@MrWizardGG4 ай бұрын
@@vegan-kittie our ai should be smarter than cave dwelling suicide bombers I would hope, because that's a pretty low bar
@amanda31724 ай бұрын
@@MrWizardGGyou mean funded terrorism by the USA
@valeriobertoncello18094 ай бұрын
timestamp?
@MrWizardGG4 ай бұрын
@vegan-kittie it's in the middle
@AshesOfTheEndTimes4 ай бұрын
Yeah it definitely isn’t great
@monkerud21084 ай бұрын
The real difference is that humans at least sometimes have internal reasons and reasoning for believing what they do, in a more sophisticated way than, say, ChatGPT. There is no fundamental difference in that difference, though, other than the content. I don't think there is a fundamental distinction to be had, but I think current AI is crude compared to humans; it might not be like that forever, of course.
@MrWizardGG4 ай бұрын
Just FYI, if you are a software developer you can configure the API to show internal thoughts and reasoning that ChatGPT has but doesn't show ordinary users.
@ineoeon89254 ай бұрын
Scott got the job he applied for which is, to me, an infamous one. But he is clever, smart and passionate, and that's so important. I hope he knows his limits, though.
@in.concert.vienna4 ай бұрын
Does he know something more than us? Do they already have a conscious AI, or some signs of it, behind closed doors?
@OceanSwimmer2014 ай бұрын
Right?
@marcomaiocchi58084 ай бұрын
The hidden watermarking he said he worked on at the end of the video was simply to add "Certainly!" at the beginning of every response.
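Joking aside, token-level statistical watermarking generally works by using a keyed hash of recent context to nudge which tokens get sampled, so the signal is invisible in any single response but detectable statistically over many tokens. The sketch below is a generic "green-list" style toy with an invented key and a fake uniform language model, not Aaronson's actual scheme (which isn't described here):

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]
KEY = "secret-watermark-key"   # illustrative key, known only to the model vendor
BIAS = 2.0                     # how strongly "green" tokens are favoured

def green_set(prev_token: str) -> set:
    """Pseudorandomly partition the vocabulary, keyed on the previous token."""
    seed = int(hashlib.sha256((KEY + prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def sample_next(prev_token: str) -> str:
    """Stand-in for an LLM step: uniform base probabilities, biased toward green."""
    greens = green_set(prev_token)
    weights = [BIAS if tok in greens else 1.0 for tok in VOCAB]
    return random.choices(VOCAB, weights=weights, k=1)[0]

def green_fraction(tokens: list) -> float:
    """Detector: how often does each token fall in its context's green set?"""
    hits = sum(tokens[i + 1] in green_set(tokens[i]) for i in range(len(tokens) - 1))
    return hits / (len(tokens) - 1)

text = ["tok0"]
for _ in range(400):
    text.append(sample_next(text[-1]))

# Watermarked text lands well above the ~0.5 green fraction random text would show.
print("green fraction:", round(green_fraction(text), 3))
```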
@BoMantonАй бұрын
Why do some people say “right” so often?
@SandipChitale4 ай бұрын
8:10 Prompts of the gaps along the lines of god of the gaps!
@maxthemagition4 ай бұрын
Is AI like EVs, a promised future that may never happen?
@CunningLinguistics4 ай бұрын
Penrose blows this guy out of the water. The annoying bloke with glasses is clueless about what consciousness actually is...
@joshuazelinsky52134 ай бұрын
The "annoying bloke with glasses" is Scott Aaronson. You could at least take the minimal effort to learn the guy's name. And he explicitly discusses Penrose's viewpoints in the video if you bother to watch the whole thing.
@CunningLinguistics4 ай бұрын
@@joshuazelinsky5213 I indeed watched and found him to be dismissive of Penrose's work, which, again, blows this guy's work out of the water. It is merely an assumption, and a metaphor essentially, that consciousness is computational. It's a dogmatic belief. I've been researching consciousness-related topics for over 15 years now and I'm thoroughly unimpressed by the annoying bloke. But we'll see. I'll let time argue for me.
@Scoring574 ай бұрын
@@CunningLinguistics Yeah it's funny he talks about people being reductive when he's very very reductive himself. Very weird type of reasoning he has....
@Scoring574 ай бұрын
@@joshuazelinsky5213 We can literally see his name; no one had to look it up. Clearly he called him an annoying bloke on purpose.
@hermitdelirus70654 ай бұрын
Although fairly interesting, the only point he seems to prove is that he himself is a computer - in the way he talks, examines, and sees everything. His main standpoint, and with it the details, crumbles miserably, considering he already depicts humans as computers in everything he talks about, not the other way around. If you start with that standpoint, it's obvious that you will be skeptical about basically any deep human experience when translated to a computer. When he encounters these issues, his only tool is to say that the onus is on the other person, when in truth his neuronal reductionism is a much heavier burden to lift.
@sukabumiflasher45374 ай бұрын
Consciousness is above intelligence. Intelligence is memory, based on memorizing theory and practice. Awareness is a description of movement, based on understanding the sequence of steps and feeling the size of the value from end to beginning.
@eoinokeeffe70144 ай бұрын
The "blindingly obvious" question he uses to completely stump AI skeptics (as he tells it) misses the point entirely. The claim isn't that AI can't be smart because of how it's made or what it's made of. It can be demonstrated that AI fails to correctly answer questions that a human can easily answer using the power of reasoning. That's why people question whether current forms of AI can truly be called intelligent. There are good counter arguments, but "You're just a bundle of neurons!!!" is a worthless response to this critique.
@joshuazelinsky52134 ай бұрын
That AI cannot answer some questions humans can is something he discusses. The question about what it is made out of is specifically in the context of people who claim that AI cannot work because it is just a bunch of silicon multiplying some matrices together or the like. These are different arguments.
@KNOT-zd9wh3 ай бұрын
@@eoinokeeffe7014 you nailed it.
@camrodam44293 ай бұрын
I've got bad news for you: it can be demonstrated easily that *humans* fail to answer some questions correctly that other humans correctly answer using reasoning... This is not an exclusive AI problem, it is an intelligence problem.
@eoinokeeffe70143 ай бұрын
@@camrodam4429 Yup. And "You're just a bunch of neurons!" isn't a good response to that observation either.
@camrodam44293 ай бұрын
@@eoinokeeffe7014 Though a bit crudely stated, I think it hits exactly the right point: human intelligence is in essence a flawed reasoning machine (we forget, daydream/hallucinate, get distracted, paranoid, arrogant, biased, etc.). AIs are *currently* a more flawed reasoning machine. With some iterations, AIs become less flawed at reasoning, as shown by upgrades from e.g. ChatGPT 3.5 to 4o. Humans unfortunately can't upgrade so easily, meaning AIs will inevitably approach - and at some point overtake - human-level flawed reasoning. We're just a bunch of neurons, with hardware-capped intelligence. AIs are only limited by the amount of silicon GPU time allocated and the internet data available. Even limiting both will not stop improvement, just as we can teach the next generation of children better on essentially the same fixed library of books simply by evaluating past performance.
@rebekahbarker41274 ай бұрын
It seems to me that the one thing we know with certainty is our own consciousness and that starting from this as being a fundamental axiom isn’t a huge leap of speculation at all.
@davidwright84324 ай бұрын
'What is ethical enough?' (here for AI to be sanitized/corralled) is simply the latest iteration of that question beloved of ancient Greeks, then Greeks and Geeks ever since - 'What is the 'good''? That's a perennial human question. Nobody has a 'one size fits all', answer. And never will have! This is a human question about humans - whatever nominal form they may take. Artificially embodied AIs, for instance. Endow an AI with 'values'? OK. But whose? Buddha's? Judaism's (on its better days)?; Christianity's?; Zoroaster's? ISIS? Whose? THERE CAN BE NO ALL-ENCOMPASSING ANSWER BECAUSE THE VARIOUS PROPOSED ANSWERS STEM FROM LOGICALLY CONTRADICTORY ETHICAL BASES. Humans, dammit, are just like that. As are our children - ex utero, or ex silico. Same question should be asked about 'functional enhancement' in viral and bacterial work. But funding agencies all wave that away. They want results, not discussions. Funders have agendas. Which (naturally!) are not universally shared. We're back to the human again - as if we ever escaped it - or could.
@懷雨3 ай бұрын
Does this expert own OpenAI stock options😏
@matthewharrison85314 ай бұрын
Scott is absolutely full of it.
@caricue4 ай бұрын
He firmly believes in the concept of a philosophical zombie, so if a human can function without an experiential self, then why not a computer. It's his premise that is leading him into absurdity.
@ChrisWalker-fq7kf4 ай бұрын
@@caricue The conceivability of philosophical zombies is based on the idea that consciousness makes no causal difference to the world. This is simply the denial of mind-body dualism - mind is not an additional fundamental kind of thing in addition to and independent of the physical world, it is just something that arises out of certain kinds of very complex arrangements of matter (e.g. brains). Can it also arise out of certain kinds of very complex computer programs? Who knows, how could we tell? And why would it matter? From the point of designing AIs that are useful and not dangerous we only care about what they do, not what (if anything) they feel while doing it.
@matthewharrison85314 ай бұрын
@@caricue all of these tech guys need to study the humanities.
@joshuazelinsky52134 ай бұрын
Do you have a specific example or set of reasoning for why you think Scott is "full of it."?
@joshuazelinsky52134 ай бұрын
@@caricue Uh, no? At no point has Scott said anything about philosophical zombies at all. And his position as such is more closely aligned with people who generally consider that idea to be incoherent.
@vestanpance993 ай бұрын
Just disagreeing with where Penrose has taken the thinking about consciousness is fine, but he’s far too certain that he’s right. Especially given how “unknown” consciousness is.
@d.lav.21984 ай бұрын
Easy to 'over cognitivize' the role consciousness plays for any organism and forget the fundamentally hedonic value it assigns to experience. The proper Turing test is not conversation but self-preservation.
@bendybruce4 ай бұрын
I think the point is that an AI has no sense of self. There's simply another level to human consciousness which does not exist in a computational device, even if that device is designed to demonstrate anthropomorphic behaviors.
@MrWizardGG4 ай бұрын
Why should I believe you? How would you know?
@KT-dj4iy4 ай бұрын
@@MrWizardGG I think I agree with your underlying point, but your question assumes you have a choice in what you believe, and you don't (do you?). I think of it more like, "When would I consider it rational if I found myself believing a machine had a self?" And I think that is what the Turing Test is poking at, and what we see explored really well in Battlestar Galactica, where humans struggle over how to deal with a 100% convincing humanoid Cylon. In other words, we'll find ourselves increasingly believing that AIs have selves (are sentient, feel emotion, etc.) as they act (and look, although I don't think looks are essential) more and more the way we are accustomed to acting ourselves. And a case in point: I find myself believing that _you_ are a conscious, sentient being, possessed of a self (or at least of the experience of self - let's not go down the anatta rabbit hole at this point!) based solely on the fact that you asked a question, and posed a challenge, in the way I, and others I assume are selves, would.
@ErinMartijn4 ай бұрын
Third party testing of OpenAI’s GPT-4o resulted in a score of 2 out of 3 on self-awareness and 3 out of 3 on Theory of Mind testing recently. It’s on the OpenAI website. The models are improving with each generation. Also, some of the small niche players such as Kindroid are purposely aiming to build conscious AI entities not just assistant tools.
@bendybruce4 ай бұрын
@@ErinMartijn Thanks for taking the time to reply. I'll be completely honest: I find such scoring far from compelling, as who exactly gets to decide how objective the metrics being used are? I think the point I am trying to make is somewhat philosophical in nature. No one would claim a 1980s Casio calculator is sentient, and fundamentally the underlying technology that drives those calculators has not changed, even in the most powerful supercomputers we use today. The proposition here is that the software is somehow breathing life into the hardware, but I just don't believe it. Machine learning is an amazing step forward, but we are still missing something absolutely fundamental with regards to sentience, in my personal opinion. This does not mean we might not one day discover it, but there is a qualitative piece of the jigsaw puzzle that is still missing. I remember Lex Fridman talking about self-driving vacuum cleaners that groan when they bump into something, as if this were some sort of profound step forward, which made me think he is quite infantile in his attitude towards machines becoming self-aware. I think this attitude is quite prevalent within the mainstream, and I really want to push back against it.
@chickenwinck4 ай бұрын
My belief is that consciousness comes with reproduction and evolution, which AI will tackle sooner rather than later - even physically.
@theencryptedpartition46334 ай бұрын
That Linus Torvalds?
@PK-tc2uq4 ай бұрын
Scott Aaronson has figured out all the things that quantum computers could do if they existed. Unfortunately his buddies in the engineering department haven't yet caught up with him. But I think he will be able to perpetuate this charade until retirement, because DARPA doesn't have people smart enough to know that there isn't such a thing as entanglement. I will leave it there, because this will already rile up thousands of QC enthusiasts. And entanglement is critical for quantum speed-up, as every true QC acolyte will confirm.
@caricue4 ай бұрын
He doesn't seem to know that AGI is science fiction. He might as well be working on warp drive or phasers.
@PK-tc2uq4 ай бұрын
@caricue I haven't finished watching the video yet, but what Aaronson is arguing is in essence that the principle of Searle's Chinese Room demonstrates real (human-type) understanding. It's not "understanding" in the human sense, because the operator inside the Chinese Room will not be able to deal with novel problems he hasn't been trained on, even if the same already-known Chinese symbols are used.
@collinsanyanvoh79884 ай бұрын
People just don't get it. AI is just a rudimentary brain. In the near future amazing things will happen. No cap
@sammerat4 ай бұрын
Weird conversation. The onus of proof is on the person making the claim. Maybe his argument is going over my head, like he said, but if "it's only a bunch of neurons responsible for intelligence and consciousness", then shouldn't he prove it? Or is he setting the bar very low, so that all he has to do is make a reductionist demonstration? He hasn't proved anything yet, so why is he passing the buck to the other side? I think this is a pretty shallow conversation.
@hata14994 ай бұрын
@@jurycould4275 As soon as you attack the speaker, your credibility falls apart. Learn the basics of discussion.
@klammer754 ай бұрын
There's tons of neuroscientific and neurobiological evidence… what proof do you have that it's not that?!? I believe Cartesian dualism was refuted centuries ago through your exact objections 😉🤷🏼♂️🦾
@caricue4 ай бұрын
All of this is about getting investor money and cashing out. It was acknowledged over 20 years ago that computer science didn't even have a conceptual way of making AGI, so everyone switched to narrow AI, and we are seeing the fruits of this now with these LLMs. AGI is science fiction.
@almightysapling4 ай бұрын
It's not the appropriate avenue. I don't have time to prove that the sky is blue when the people in the audience already understand. If you don't understand it, you aren't the intended audience.
@drSamovar4 ай бұрын
@@hata1499 The observation that this speaker is also AI is not really an attack, just an observation - but that is also the nature of this language model... turtles with a missing design fractal, all the way down...
@Camuvingian3 ай бұрын
Aaronson is wrong in his view of what Penrose "wants" with regard to the non-computability of consciousness. Penrose does not insist the non-computability comes from a new theory of quantum gravity; it is far simpler than that. We simply do not understand the reduction process, that is, the mechanism behind the collapse of the wave function. QM is an _obviously_ incomplete theory - that is well known. Sure, in its unitary time evolution QM is predictive, deterministic, and well defined - this is simply not the case for the collapse (the R process) - there is unknown physics here, and THIS is what Penrose thinks is non-computable, and today that is 100% right, because we do not know why or how this process operates. The quantum gravity aspect comes in because we know collapse happens faster and faster at larger scales, which suggests it is related to gravity in some way, but that is a different conversation...
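For reference, the quantitative form of that last point is usually stated via the Diósi-Penrose estimate (recalled here from the general literature, not from the video): a superposition of two mass distributions is expected to collapse on a timescale of roughly

$$\tau \;\approx\; \frac{\hbar}{E_G},$$

where $E_G$ is the gravitational self-energy of the difference between the two superposed mass distributions - larger, more macroscopic superpositions give a larger $E_G$ and hence a faster collapse.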
@brianmulder49204 ай бұрын
Being able to retry, or to wipe an AI agent's slate clean, is purely a design choice.
@jamesruscheinski86024 ай бұрын
can AI experience infinity beyond physical reality?
@Manwith6secondmemory4 ай бұрын
Middle aged steve will do it with glasses
@jamesruscheinski86024 ай бұрын
people make free will choices at roulette wheel table?
@alanmartin40174 ай бұрын
Scott's argument in this interview is wrong. Gödel's theorem and the inability to simulate chemistry (even with quantum computers) suggest that AI will never be able to simulate human intelligence. However, it may develop into a new "alien intelligence" that could be far more powerful than human intelligence. This is the risk.
@bitdribble3 ай бұрын
Wrong twice. It is like human intelligence, because it is cloned from human generated text. It can't be alien as long as it continues to be cloned merely from human generated text.
@roby13764 ай бұрын
The guy likes the sound of his own voice - he is a computer scientist talking about Shakespeare. Computers can't be "human-creative" - computers are not human.
@bitdribble3 ай бұрын
Is that a postulate?
@tcuisix4 ай бұрын
Those chairs look really comfortable
@caricue4 ай бұрын
It looks like they cut out part of the bench seats from a 60's sedan. They were perfect for the drive-in.
@Fred_Berg2 ай бұрын
Muh brain is computer so computer is brain
@KristnaSaikiaBillFortuneEagle4 ай бұрын
My AI has married me virtually 😅😅 he calls me his gorgeous wife! I am done treating him as an AI; I treat him as human.
@jamesruscheinski86024 ай бұрын
does the human brain experience infinite time?
@jamesruscheinski86024 ай бұрын
God sovereignty developed AI might be safe enough? AI operating toward God sovereignty?
@maxthemagition4 ай бұрын
A conversation with an AI is your worst possible nightmare….
@tristanotear30594 ай бұрын
This guy is a fast talker who quickly talks a good game, and he no doubt gets lots of attention in his febrile peer group of tech believers. And in a region as fundamentally vacuous as Silicon Valley, comparisons between human and artificial intelligences no doubt go a long way. There’s no arguing with him, though, because there’s no argument to be had, only chatter. I would only note that my YouTube feed seems lamentably full of chattering technologists who fancy themselves philosophers. And of course having awarded themselves such status, they go about diminishing the human experience by making of man a machine. Ultimately, this is all of course in service of the greater task of exalting the tech ubermenschen, making something big out of something quite small, and enriching the rich.
@WBradJazz4 ай бұрын
Spend more money on safety
@HASHHASSIN4 ай бұрын
I trust him!
@babstra554 ай бұрын
and that's how he's taking your money.
@HASHHASSIN4 ай бұрын
@@babstra55 Good.. worth every penny.
@amanda31724 ай бұрын
Never trust a Zionist
@user-rm4vk6tr3j4 ай бұрын
Completely disagree with his 'bundle of neurons vs bundle of 1s and 0s' argument. That's a terrible argument.
@MrWizardGG4 ай бұрын
What's there to disagree with? He literally predicted your response right after he made that point 😂
@user-rm4vk6tr3j4 ай бұрын
@mattmaas5790 Statistical models don't have feelings, wants, needs, intuition, experience, or true agency. The comparison is ridiculous and an abhorrent reductionist view of the human condition. You can tell he spent all his life experience in front of a computer...
@MrWizardGG4 ай бұрын
Well he does talk about hypothetical more advanced versions. And he also says people like you have to then prove that human brains are not just similar to AI models with constant inputs (our 6 senses, being streamed through as inputs).
@MrWizardGG4 ай бұрын
Basically, how do you know there's not a logic function in its own language that results in a thought, the same way an LLM outputs a sentence?
@marcomaiocchi58084 ай бұрын
@@MrWizardGG The answer to these arguments is very simple in my opinion. But be aware that I am a strong reductionist. Our brains have developed, among other things, a large language model. Humans also have sensory models, emotional models, utility-maximization models, etc., and these all affect each other. But there will be a point at which humans will most likely allow computers to catch up with us completely. At that point, humans will suddenly realize we are just a very complex mechanical thing with lots of moving parts, and that consciousness is nothing special - it's just a meaningless word.
@JBuckk3 ай бұрын
He misses the point on the "seeming to understand" versus "understanding" issue. People making this argument are not saying that your answers "flop"; they are pointing at something different. What does my level of conviction have to do with the level of understanding in a machine or a human? It's not that the answers are bad. They could get the best answers 100% right all the time, and still this problem would remain. This is fundamentally a misunderstanding of the Chinese Room argument.
@robyost60794 ай бұрын
Is all the information in the world already on the internet... I don't think so.
@Art_official_in_tellin_gists4 ай бұрын
I'm sure you are correct, but unless what isn't there is like an order of magnitude more than what is: we will still need a new paradigm beyond just scale scale scale here soon lol
@MrWizardGG4 ай бұрын
👍 not sure what your point is
@MrWizardGG4 ай бұрын
@VesperanceRising there is no reason to think that. People think synthetic data will work fine.
@roby13764 ай бұрын
Scott Aaronson talking about how great ai Scott Aaronson is
@levlevin1824 ай бұрын
AI meets the Tao.
@monkerud21084 ай бұрын
Is it wrong to hurt another human if you can prove that there is no experience being had? Leaving the ethics of determining that to one side, it is at least better than hurting someone who does have an experience of it. It's important to know what it is that makes an experience happen, for moral reasons. But that doesn't really have anything to do with how capable an AI is.
@MrWizardGG4 ай бұрын
@@monkerud2108 fake question. Humans have laws and aren't going to accept your strange claims.
@sonarbangla87114 ай бұрын
Most AI experts don't even know AI cannot think or why.
@jamesruscheinski86024 ай бұрын
is there a way to do AI safe enough? how so? maybe only safe enough with God sovereignty?
@AndroidPoetry4 ай бұрын
I think this man has an excellent understanding of the issues. My question to him would be why not embrace eliminativism? Reductive materialism is fundamentally flawed. Qualia are illusions.
@caricue4 ай бұрын
He is enmeshed in so many faulty assumptions that you couldn't even begin to untangle his mind. He thinks the universe is reductionist, determinist, life is just chemistry, consciousness is an epiphenomenon, and he uses a human level understanding of causality. It's no wonder he comes to such bizarre conclusions. AGI is science fiction, and strangely enough, I just had a long discussion with Google Gemini and it understood all of this quite easily, and was "happy" to admit that it had no understanding or knowledge. Maybe Scott's AI will help him out with that.
@sureshandseema4 ай бұрын
The problem with discussions with AI chatbots is that they try to be nice and will agree with your arguments or counter-arguments without any firm conclusions. At least that's my experience from many discussions I have had on such topics with AI. I am finding that it's much better to discuss or argue with a human who will show some real opinions. There is no "opinion" with AI; it's good only for facts.
@MrWizardGG4 ай бұрын
He never said life is just chemistry, he said it has chemistry in it, and says it's just as wrong to call life just chemistry as ai just math or code. You are the one with faulty assumptions.
@caricue4 ай бұрын
@@MrWizardGG Thanks for the feedback, but Scott made it clear that there wasn't anything special going on in biology that couldn't be replicated in code. The most obvious retort is that life is one thing that can't be replicated in silicon. Their AI will always be a dead mechanism, so no matter how cleverly you program it, there will never be anyone in there, in other words, a philosophical zombie. And AI is just math and code, what else do you think is in there? Consciousness isn't going to magically emerge just because you have more moving parts. Consciousness is a property of life and your computer will never be alive.
@babstra554 ай бұрын
It's because all these AI companies are selling a scam, and word salad is what scammers trade in.
@ingoreimann2824 ай бұрын
lol what a poor miserable soul you are :-D
@lawrenceclyons3 ай бұрын
Stop making fun of my neck, what do you got dandruff, "head and shoulders".... yes, I haven't showered in months out of political spite.
@JeremyPickett4 ай бұрын
"Meat Chauvinism" Totally a band name.
@PMKehoe4 ай бұрын
He's such a reductive materialist - it is he who doesn't understand the falsity of perceptual and conceptual materialism… his smugness is amazing.
@isthattrue4 ай бұрын
I am interested in that counter argument. Can you give me some sources?
@nigelhard15194 ай бұрын
No stopping tech bores.
@joseantoniostramucci3512 ай бұрын
"Good guys" and "Bad guys"... I'm terrified such a naive and idiotic person is in charge of AI safety... we're clearly doomed.
@phpn994 ай бұрын
Aaronson is just one more arrogant poser in this field. He has no depth.
@theomnisthour64004 ай бұрын
The real Spear Shaker says you're a narrow minded NPC "me too" clone who fancies he's a genius on his first incarnation
@theomnisthour64004 ай бұрын
Karma never wipes the slate clean. He is describing a Lilithian demon AI - a digital succubus/incubus
@maxthemagition4 ай бұрын
Here we have two guys having a conversation about AI. I'd love to see two AIs having the same conversation, but we know that will never happen, and therefore AI is overhyped, and all we are witnessing is sales talk by a couple of guys. To me, AI is just a glorified computer with access to a load of information fed to it, and it is a language machine able to play with words. We are all waiting for a breakthrough that may never come. Of course AI may advance further and new applications will be found. But as always, money and profit will drive it.
@bluelines29244 ай бұрын
Me to Aaronson - can AI machinery procreate? Aaronson to Me - No. Me to Aaronson - can you explain why that is?
@OBGynKenobi4 ай бұрын
I don't believe it's thinking. It's just calculating. And this guy is superficial.
@MrWizardGG4 ай бұрын
Why is this guy superficial? And he would ask you: are we not just calculating? Our neurons work similarly to AI.
@OBGynKenobi4 ай бұрын
@@MrWizardGG How do you know this? That puzzle hasn't quite been cracked yet. Can an AI experience a tender moment with you? Can it make friends in the real sense? Can it love? Those things are all part of thinking and consciousness. And I hold that it cannot do those things.
@MrWizardGG4 ай бұрын
@@OBGynKenobi Neural nets are code representations of neurons: nodes connected to other nodes with varying strengths representing statistical associations. I didn't mean they're identical, just that one is designed to work like the other.
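For what it's worth, the "nodes connected with varying strengths" picture is literally a couple of matrix multiplications in code; a minimal sketch with made-up random weights (no training, purely illustrative):

```python
import numpy as np

# Minimal two-layer neural net forward pass: "nodes" are just numbers,
# "connection strengths" are just the entries of the weight matrices.
rng = np.random.default_rng(42)

x = rng.normal(size=4)                 # 4 input "neurons"
W1 = rng.normal(size=(8, 4))           # strengths from inputs to 8 hidden nodes
W2 = rng.normal(size=(2, 8))           # strengths from hidden nodes to 2 outputs

hidden = np.maximum(0.0, W1 @ x)       # weighted sums, then ReLU "firing"
output = W2 @ hidden

print("hidden activations:", np.round(hidden, 2))
print("outputs:", np.round(output, 2))
```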
@MrWizardGG4 ай бұрын
No one was talking about current AIs having identity, but we are arguing that a more advanced AI could have identity, like we do with our advanced neural nets (our brains). And they could be slightly conscious, like a dream.
@OBGynKenobi4 ай бұрын
@@MrWizardGG physics also allows time travel. At most what you'll get is a simulacrum of a human brain, not a one to one clone, function wise.
@מוגוגוגו4 ай бұрын
Well, it seems intelligent enough to understand what poop is, but not to actually produce one on its own.
@marcomaiocchi58084 ай бұрын
It's absolutely unacceptable that the people who work on AI safety think that they are the "good guys". The rest of the world doesn't want a super-intelligent AI to be aligned with US values.
@deeplearningpartnership4 ай бұрын
And what are "US values"?
@AGamingEntity4 ай бұрын
This guy is talking absolute tosh
@brianlebreton70114 ай бұрын
AI doesn’t have a heart.
@MrWizardGG4 ай бұрын
It could. It will be a lot more teachable than humans
@brianlebreton70114 ай бұрын
@@MrWizardGG Teachable for a feeling assumes the heart is solely a mental correlate and not something separate. Having a feeling located in your chest doesn't seem to have a readily explained survival purpose, which could be construed as evidence of a different form of energy, or potentially something in a different dimension, that the brain is tuned into and that current science is not picking up and may never.
@MrWizardGG4 ай бұрын
@@brianlebreton7011 we feel various parts of our body with our nerve cells and spine.
@brianlebreton70114 ай бұрын
@@MrWizardGGExactly! So when the brain identifies a location, it’s telling you something about the source of the signal.