Artificial Intelligence Isn't Real

433,238 views

Adam Something

1 day ago

The first 100 people to use code SOMETHING at the link below will get 60% off of Incogni: incogni.com/so...
CHECK OUT THE ALL NEW MERCH STORE: www.teepublic....
This video has been approved by John Xina and the Chinese Communist Party.
Check out my Patreon: / adamsomething
Second channel: / adamsomethingelse
Attribution (email me if your work isn't on the list):
unsplash.com/p...
unsplash.com/p...
unsplash.com/p...
unsplash.com/p...
unsplash.com/p...
unsplash.com/p...
commons.wikime...

Comments: 3,000
@AdamSomething 1 year ago
Thanks for tuning in to today's video! The first 100 people to use code SOMETHING at the link below will get 60% off of Incogni: incogni.com/something
@qwertyuiopchannelreal296 1 year ago
Nice video from someone who has no expertise in AI. Humans are no different from AI; we are just a bunch of inputs and outputs. Very soon, neural networks will be on par with or surpass humans in general intelligence because of improvements in their architecture.
@TomTKK 1 year ago
​@@qwertyuiopchannelreal296 Spoken like someone who has no expertise in AI.
@qwertyuiopchannelreal296 1 year ago
@@TomTKK Yes, but generalizing AI as not being "intelligent" is just wrong. You could make the same point about human brains, because they receive inputs and act on those inputs to produce outputs, which is no different from AI. In fact, the architecture of neurons in neural networks mimics the function of biological neurons.
@johnvic5926 1 year ago
@@qwertyuiopchannelreal296 Oh, nice. But thanks for admitting that anything you say on the topic of AI has no actual scientific foundation.
@relwalretep 1 year ago
@@qwertyuiopchannelreal296 It's almost as if you wrote this before getting to the last 60 seconds of the video
@SurfingZerg 1 year ago
As a programmer who studies AI: we almost never actually use the term "artificial intelligence", we usually just say "machine learning", as this more accurately describes what is happening.
@InfiniteDeckhand 1 year ago
So, you can confirm that Adam is correct in his assessment?
@mdhazeldine 1 year ago
But is the machine actually understanding? I.e., is it comprehending what it's learning? If not, is it really actually learning? It seems to me like a parrot learning to repeat the words that humans say, without understanding the meaning of the words. The same as the Chinese Room thought experiment Adam mentioned.
@malekith6522 1 year ago
He is... mostly... and what the press usually talks about as "AI" is actually called AGI (Artificial General Intelligence), and we are currently far away from implementing it.
@TheHothead101 1 year ago
Yeah AI and ML are different
@EyedMoon 1 year ago
As an AI engineer, I don't 100% agree with this video. In fact, I think I agree with about 50% of it :p

There are some potential threats because of how powerful it is to just automate some tasks using "AI". For example, news forgery has already proven to be a pretty easy task, as newsfeeds are highly formatted and easy to spam. Image generation is, in 2023, of very high quality and helps create "fake proof" very quickly. AI is well suited to information extraction too, in cases where features and structures emerge from the amount of data we deal with. But in the media, "AI" is a buzzword used whenever people don't understand what they're talking about, and the things they're talking about, like "machines becoming sentient", are just ludicrous.

So I'm not totally on board with Adam's analysis. He makes the point that there's a difference in perception between tech and media, but he then still mixes both aspects, imo. Especially the cat argument: of course we develop our reasoning from precise features, but we also have kind of the same training process as machines. Seeing the same features with the same feedback a lot activates our neurons so often that the connections become prevalent, while AIs have neurons that compute features and reinforce their connections through feedback.

Oh, and for the "is the machine really understanding?" question: are you really understanding, or merely repeating patterns with only slight deviations caused by your environment and randomly firing neurons? I'm not sure anyone can answer this question yet.
@mateuszbanaszak4671 1 year ago
I'm the opposite of *Artificial Intelligence*, because I'm *Natural* and *Stupid*.
@Kerbalizer 1 year ago
Rel
@GiantRobotIdeon 1 year ago
Artificial Intelligence when Natural Stupidity walks in: 😰
@jacobbronsky464 1 year ago
One of us.
@QwoaX 1 year ago
Minus multiplied with minus equals plus.
@aganib4506 1 year ago
Realistic Stupidity.
@alexandredevert4935 1 year ago
I've done a PhD in machine learning, and I design machine learning systems as a job.
* Yes, "AI" is a very poorly defined word, which has been stripped of what little meaning it might have had by how much it was stretched in all directions.
* Intelligence is not a boolean feature, it's a continuum. Where do we put a virus? The simplest unicellular organisms? Industrial control systems are on the level of a simple bacterium in terms of complexity and abilities, minus the self-replication ability (3D printers are this close to crossing that gap).
* Your cat example is a very good explanation of what statistical inference is.
* You can implement statistical inference in various ways, one of which is a neural network. Neural networks can have internal models that do what you call "the intelligent way". That internal model is not set by the programmer; it's built by accumulating training on randomly picked examples, aka stochastic gradient descent.
* The Chinese Room argument has its critics, some of which are really interesting.
And yes, there is tons of cringe bullshit on this topic, to the point that I carefully avoid mentioning that I do AI; I say I do statistical modeling.
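The stochastic gradient descent this commenter describes ("accumulating training on randomly picked examples") can be sketched in a few lines of plain Python. This is a toy illustration with an invented one-weight model and made-up data, not any real system:

```python
import random

# Toy data: y = 3x exactly; the model's one "internal" weight w should learn the slope 3.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0, 2.5]]

w = 0.0      # initial weight, set by nobody in particular
lr = 0.05    # learning rate
random.seed(0)

for step in range(2000):
    x, y = random.choice(data)   # randomly picked example: the "stochastic" part
    pred = w * x                 # the model's current guess
    grad = 2 * (pred - y) * x    # gradient of the squared error w.r.t. w
    w -= lr * grad               # step downhill: the "gradient descent" part

print(round(w, 3))
```

The point of the sketch is that nothing here is "set by the programmer" except the update rule; the weight emerges from repeated exposure to examples.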
@MsZsc 1 year ago
zao shang hao zhong guo xian zai wo you bing qilin
@isun666 1 year ago
That is exactly how chatgpt would answer it
@sophiatrocentraisin 1 year ago
Actually (and it goes in your direction), it's still debated whether viruses are even living organisms. The reason being viruses aren't actually cells, and also because they can't self-replicate
@tedmich 1 year ago
With all the crap companies in my field (biotech) trotting out some AI drug design BS after their one good idea failed, I would avoid being associated with ANY of this tech until the charlatans fall off the bandwagon! It's a bit like being a financial planner with the last name "Ponzi".
@jlrolling 1 year ago
@@sophiatrocentraisin It's also because they do not meet the standard requirements that define an organism, i.e. birth, feeding, growth, replication and death. They do not grow; they "are born" as totally finished adults. And also, as you mention, they cannot self-replicate; they need a third party for that, aka a cell.
@flute2010 1 year ago
Artificial intelligence is when the computer-controlled trainers in Pokemon use a set-up move instead of attacking
@sharkenjoyer 1 year ago
Artificial intelligence is when half life 2 combine uses grenade to flush you out and flank your position
@n6rt9s 1 year ago
"Socialism is when no artificial intelligence. The less artificial intelligence there is, the socialister it gets. When no artificial intelligence, it's communism." - Marl Carx
@flute2010 1 year ago
@@n6rt9s you may have just turned the rest of the replies under this comment into a warzone now at the mere mention of socialism, we can only wait
@dandyspacedandy 1 year ago
i'm... dumber than trainer ai??
@alexursu4403 1 year ago
@@dandyspacedandy Would you use rest against a Nidoking because it's a Psychic type move ?
@rhyshoward5094 1 year ago
Robotics/AI researcher here. You're definitely right to suggest that AI is being completely blown out of proportion by the media. That being said, certain things you mentioned computers not being capable of, they certainly can do; it's just that they're currently still the kind of things being developed in research institutions, and therefore not visible to most people. For example, the fat cat example could be tackled by a combination of causality and semantic modelling, which could represent the relationships between feeding the cat and its weight. Furthermore, empathy modelling is also an idea within reward-based agents/robots: effectively having the robot reason about whether an outcome would be optimal from the perspective of another being (e.g. a cat). Of course we're still a long way off, but that is more of a software/theory issue than a hardware issue. In a sense, we have all the machinery we need to make it happen; it's just a matter of knowing how to structure the inner workings of the AI that's the difficulty.

With regards to the Chinese Room thought experiment, it's worth mentioning that only one school of thought takes it as disproving consciousness. I'm fairly certain that if a baby could talk, and you were to ask it whether it understood anything it was experiencing, I doubt it would, yet I don't think anyone is arguing that babies are not conscious. Even that aside, I think what ultimately sets apart human intelligence, and what will ultimately set apart future AI, is the ability to reason about reasoning, or in other words meta-reasoning. This is currently quite difficult, considering the biggest fads in research right now involve throwing a neural network at problems, effectively creating an incomprehensible black box, but the baby steps toward making this happen are definitely there.

All that being said, I totally get why you made this. The way everyone's talking these days, you'd be forgiven for thinking the machine revolution is due next Tuesday.
@Bradley_UA 1 year ago
Well, they should have asked ChatGPT how it reasons its answers to theory of mind test questions. But to me the only way to answer those questions is to actually have theory of mind.
@awesometwitchy 1 year ago
So not literally Skynet… but maybe literally Moonfall? With a little Matrix sprinkled in?
@qiang2884 1 year ago
@@awesometwitchy no. Researchers are smart people unlike politicians, and they know that making things that do not harm them is important.
@ChaoticNeutralMatt 1 year ago
I'll only add that it's been treated as "just around the corner" for a while now. I don't entirely blame the media, at least early on. It was a fairly rapid jump into public view, and we have made progress.
@travcollier 1 year ago
It is basically the same as the "philosophical zombie" thought experiment, and fails to actually mean anything for the same reason. It is just begging the question by assuming there is something called "understanding" which is different from what the mechanistic system does. No actual evidence for that I'm afraid. And before someone objects that they know they "understand"... Really? Do you actually know what is going on in your brain, or are you just aware of a simplified (normally post-hoc) model of yourself?
@justpassingby298 1 year ago
Personally what pisses me off is when someone takes one of those AI chat bots, and gives it some random character from any show, and goes "Omg this is basically the character" and it just gives the most basic ass responses to any question
@menjolno 1 year ago
Can't wait for Adam to say that biology isn't real. What would literally be in the thumbnail: Expectation: (human beings). Reality: one is "god's creation", one is "a soup of atoms".
@nisqhog2881 1 year ago
"Behaving perfectly like a human doesn't mean they are intelligent" is a sentence that can be used on quite a lot of people too lol
@AlexandarHullRichter 1 year ago
"The ability to speak does not make one intelligent." - Qui-Gon Jinn
@ConnorisseurYT 1 year ago
Behaving unlike a human doesn't mean they're not intelligent.
@inn5268 1 year ago
It is intelligent in the sense that it can process data and generate a response to it; it is not SENTIENT, since it lacks any self-awareness or underlying thoughts beyond processing the inputs it's given. That's what Adam meant to say
@fnorgen 1 year ago
I suspect quite a lot of people will keep moving the goalpost for what counts as "intelligence" however far is needed to exclude machines, until they themselves no longer qualify as intelligent by their own standards. The issue I take with Adam's argument is that you quickly get in a situation where the list of tasks that strictly require "actual intelligence" will keep getting narrower and narrower until there may some day be no room left for "intelligence". I know a person with such a severe learning impediment that I would honestly trust auto GPT or some similar system to do a better job than them for any job that can be performed on a computer. Except some video games. That's not much to brag about, but in terms of meaningful, measurable performance, I'd say current AI is more intelligent than they are. So claiming that that the Machine is completely devoid of intelligence seems to me like a strictly semantic argument. I don't really think of the mechanisms of a system as a qualifier for intelligence. Only its capabilities. Current ML based systems don't learn like we do, they don't think like we do, they don't feel like we do, they have no intrinsic motivations, and it seems they don't need to either.
@robgraham5697 1 year ago
We are not thinking machines that feel. We are feeling machines that think. - Antonio Damasio
@stevenstevenson9365 1 year ago
I have an MSc in Computer Science and Artificial Intelligence, and I can say that how we use these terms and how the media uses these terms are very different. "AI" is a huge field that refers to basically anything a computer does that's vaguely complex. So when your map app tells you the shortest path from A to B, that's AI, specifically a pathfinding algorithm. When we talk about stuff like ChatGPT, we wouldn't really call it AI, because AI is such a general term. It's Machine Learning, more specifically Deep Learning, more specifically a Large Language Model (LLM). Stable Diffusion is also Deep Learning, but it's a diffusion model.
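The "shortest path from A to B" example this commenter gives is easy to make concrete. A minimal sketch using breadth-first search over an invented toy road graph (the node names and edges are made up for illustration; Dijkstra and A* are the weighted, smarter relatives of the same idea):

```python
from collections import deque

# Hypothetical road network: node -> neighbouring nodes
roads = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def shortest_path(start, goal):
    """Breadth-first search: textbook 'AI' in the classic sense."""
    frontier = deque([[start]])   # queue of partial paths to explore
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path           # first path to reach the goal is shortest
        for nxt in roads[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                   # no route exists

print(shortest_path("A", "E"))    # ['A', 'B', 'D', 'E']
```

Nothing here learns anything; it is pure search over a fixed map, which is exactly why the field's umbrella term "AI" covers so much more than machine learning.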
@nitat 1 year ago
Thanks for these comments. The IT jargon was really confusing. I think I understand a little bit better now.
@dieSpinnt 1 year ago
There is no reason to be defensive. You are a scientist and not a dipshit born out of "Open" (open .... what a perversion!) AI that wants to sell "ideas" ... I mean stock. Have a good one, fellow human( ... **g** )
@Groostav 1 year ago
Yeah, it's funny, @AdamSomething's description of "pre-AI" sounded a lot like Prolog to me, which I would consider to be a form of AI. I think the concept of AI is really so broad that it is simply some algorithm that deftly navigates a dataset. If you add some kind of feedback loop (wherein the algorithm is able to grow or prune the dataset as it goes) to find something resembling novelty, you've got something that's more AI-ish. So are we at the point where "AI is a spectrum" now?
@gustavohab 1 year ago
If you come to think of it, AI has been out there for over 20 years, because NPCs in video games are also AI lol
@Ofkgoto96oy9gdrGjJjJ 1 year ago
We would also need a lot of physical memory, to run it without a crash.
@thrackerzod6097 1 year ago
As a programmer, thank you. It's annoying to have to explain to people that AI is not intelligent, it's just an advanced data sorting algorithm at the very most. It has no thoughts, it has no biases, it has no emotions. It's just a bunch of data sorted by relevance. This isn't to downplay the technology, the technology behind it is stunning, and it has good applications but to call it intelligence when it isn't is absurd.
@cennty6822 1 year ago
language models inherently have biases based on their training. A bot trained on western internet will be biased towards more western ideologies, one based on for example russian forums will have different biases.
@thrackerzod6097 1 year ago
@@cennty6822 They will, however these are not true biases. There is no emotional, or indeed any, reasoning behind them; hence they can be called biased, but not biased in the way a human, or any other intelligent being, would be, which is what I was referring to.
@somerandomnification 1 year ago
Yep - I've been saying the same thing about CEOs I've worked with for the last 25 years and still there are a bunch of people who seem to think that Elon Musk is intelligent...
@thrackerzod6097 1 year ago
@@somerandomnification Elon is just another rich person who's built his legacy off of the backs of genuinely intelligent people, people who unfortunately will likely go largely uncredited. If they're lucky, they'll at least get credit in circles related to their niches though.
@marlonscloud 1 year ago
And what evidence do you have that you are any different?
@Movel0 1 year ago
Incredibly brave of Adam to stuff his cat with food to the point of morbid obesity just to prove the limits of AI; that's real dedication.
@USSAnimeNCC- 1 year ago
And now it's time for the kitty weight-loss arc, cue the music
@merciless972 1 year ago
@@USSAnimeNCC- eye of the tiger starts playing loudly
@lordzuzu6437 1 year ago
bruh
@Soundwave1900 1 year ago
How is it fat though? Google "fat cat", and all you'll see is cats at least twice as fat.
@celticandpenobscot8658 1 year ago
Is that really his own pet? Video clips like this are a dime a dozen.
@ItaloPolacchi 1 year ago
I disagree: people are scared of AI not because they think it's seemingly "human", but because something perfectly acting like a human without understanding the meaning behind it can lead (in the future) to real-life consequences. If you teach an AI to hack your computer and delete all your data, it doesn't matter whether it understands what it's doing, as long as the action is being done. Not having free will doesn't mean not creating consequences; if anything, it's worse.
@jhonofgaming 1 year ago
This exactly; tools already exist that are not "intelligent" but are still powerful. AI is the exact same: it does not matter whether it's intelligent, it's still an extremely disruptive tool.
@what42pizza 1 year ago
well said!
@thereita1052 1 year ago
Congrats you just described a virus.
@user-yy3ki9rl6i 1 year ago
Honestly, it's a good take. A big part of ChatGPT development is imposing guardrails to prevent it from telling you how to make pipe bombs and meth. We've seen glimpses of the DAN version of ChatGPT, and yeah, that's why AI is still dumb and scary.
@alexs7139 1 year ago
Yes, and that's my whole problem with this video: after watching it you might think "oh, AI is no 'true' intelligence, so it cannot try to destroy us like in sci-fi", for example... However, that's wrong (and you showed why). P.S. The idea that an AI built through machine learning has no "true intelligence" because it cannot understand concepts is not that obvious from a philosophical point of view. A pure materialist, for example, will not be convinced at all by this argument
@Cptn.Viridian 1 year ago
The only fear I have for current "AI" is companies betting too hard on it, and having it destroy them. Not by some high tech high intelligence AI takeover, but by the AI being poorly implemented and just immediately screwing over the company, like "hallucinating" and setting all company salaries to 5 billion dollars.
@davidsuda6110 1 year ago
Part of the Hollywood writers' strike is about AI generating scripts just bad enough that they can be edited by a human and produced so cheaply that the industry can profit on it. Our concerns should be more blue-collar. The industrialists will take care of themselves in the long run.
@okaywhatevernevermind 1 year ago
why do you fear big corpo destroying itself through ai? that day we’ll be free
@KorianHUN 1 year ago
@@okaywhatevernevermind We will be "free"... of global trade and functional economies. An apocalypse sounds cool until you think about it for 4 seconds. It won't be wacky adventures; it will be mass death and suffering.
@maya_void3923 1 year ago
Good riddance
@berdwatcher5125 1 year ago
@@okaywhatevernevermind so many jobs will be lost.
@Alex-ck4in 1 year ago
I've been a software engineer for the past 9 years. These days I work in the Linux kernel, but my undergrad project was to take a high-performance deep convolutional neural net, chop off its output neurons, attach a new set of output neurons, and re-train the network to do a different but conceptually similar task. This is called "fine-tuning", and at the time (libraries have advanced now) it required direct, low-level modifications to the matrices of neurons, and the training process was very manual.

While I have a HUGE problem with how the media conveys AI, how they try to humanize it, construe its behaviour as sentient, etc., I need to speak out and say that I also increasingly have a problem with people saying "AI is mundane, stupid, plain maths and nothing more". The only honest answer we can give is that we don't know. We don't know how our brains work, we don't know how WE are sentient; therefore, we cannot conclude that ANYTHING is sentient or not. I know this is philosophical and non-mathematical, but it's the only answer that is not disingenuous.

To this day, despite all our technology, we don't know if sentience is "computational", that is, arising from the "computation" of inputs inside our brains by neurons, or something else entirely, maybe involving quantum interactions between certain chemicals within the neurons. Until we know this, we cannot know if any other computational network is "experiencing" its inputs. In terms of neural nets, there is another complication, which is that the "neurons" are not even physical things, but rather abstractions placed on top of sets of numbers in a chip. What set of conditions are required for this to be sentient? We have no idea.

Some people argue that we are sentient and NNs are not because brains are actually way more complicated, but I also find this answer wholly insufficient: it doesn't answer *what* is causing the sentience, but merely conjectures that it lies elsewhere in our brains, outside of the computational parts. In that sense the argument merely kicks the can down the road.

I think it's very important to keep these media outlets in check by reminding them of the mundanity of what they claim is sensational, but it's a very dangerous road when you go too far; one day we may well be witnessing the birth of consciousness, and disregarding it entirely because we tell ourselves that it is not "biological" enough, "human" enough, or for some other over-confident reasoning. Anyways, sorry for the rant, and hopefully it was interesting for someone 😂
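The fine-tuning workflow described at the top of this comment (freeze the pretrained body, swap the output layer, retrain only the new head) can be sketched without any ML library. Everything here is invented for illustration: the "frozen body" is a stand-in function, and the new head is a two-weight linear layer trained by gradient descent:

```python
# Stand-in for the frozen body of a pretrained network:
# it maps a raw input to a feature vector and is never updated.
def frozen_features(x):
    return [x, x * x]

# New task for the new head: learn y = 2*x + 1*x^2 from those features.
data = [(x, 2 * x + x * x) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

head = [0.0, 0.0]   # freshly attached output layer: one weight per feature
lr = 0.01

for epoch in range(5000):
    for x, y in data:
        feats = frozen_features(x)
        pred = sum(w * f for w, f in zip(head, feats))
        err = pred - y
        # Gradient descent on the head only; the body stays frozen.
        for i, f in enumerate(feats):
            head[i] -= lr * 2 * err * f

print([round(w, 3) for w in head])
```

Modern libraries hide exactly this pattern behind one or two calls (replace the final layer, mark the rest as not trainable), which is what "libraries have advanced now" refers to.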
@EpicGamer-fl7fn 1 year ago
ngl you got me interested with the whole "quantum interactions between certain chemicals within the neurons". Is it just something you came up with, or is there an actual theory about it? It sounds very intriguing.
@TheCamer1- 1 year ago
Thank you! Very frustrating that Adam will put out a video so categorically slamming AI and making so many blanket statements as if he knows what he's talking about, when in fact many of them are just plain wrong.
@Alex-ck4in 1 year ago
@@EpicGamer-fl7fn I didn't come up with it sadly xD There are papers out there that report occurrences of nature exploiting quantum mechanics, and it's quite well-observed at this point, especially in photosynthetic bacteria. Building on that, there are papers arguing the plausibility that our brains/neurons could be affected by, or even exploiting, quantum systems, to the point where it could be affecting our decision-making. Sadly, these still don't really come close to measuring or defining consciousness; it remains as elusive as ever :) Roger Penrose is well worth a listen on the subject of consciousness; Lex Fridman has a podcast with him, and there's a whole chunk of the video dedicated to the topic. Also some papers to google: *"Photosynthesis tunes quantum-mechanical mixing of electronic and vibrational states to steer exciton energy transfer"* and *"Experimental indications of non-classical brain functions"*. Finally, to see the worst-case scenario for our race, watch some Black Mirror, particularly the episode "White Christmas" xD
@RoiEXLab 1 year ago
As a CS Student I agree with the main point of the video, but I'll just throw it in the room that we actually don't know what "real intelligence" really is. So maybe at some point AI will actually become "real" without any way to tell it apart. We just don't know.
@rkvkydqf 1 year ago
Since real neurons seem to outperform NNs in RL environments, like a game of Pong, by number of iterations, I think there definitely is some gap. I think neuromorphic computing seems quite fun. Anyway, it's indeed annoyingly difficult to define intelligence, but it's clear the dusty old Turing Test isn't doing it for us anymore...
@00crashtest 1 year ago
True. Real intelligence is just a bunch of atoms interacting together. So, intelligence is just a vague thing and there is no objective overall way to quantify it because it has not had a single coherent definition yet. Trying to quantify intelligence is just like trying to categorize animals before the concepts of "species" and "genetics" had been invented. The so-called "scientists" who made the classifications before that were so wrong. This is why social "science" is so wrong all the time, because there is no objective standard. Per the definition, science is only science when it has control groups, is falsifiable, has defining points, and is repeatable. Social "science", just like biology before the concept of species, is not even a science as a result.
@00crashtest 1 year ago
As a result, until someone makes a single DEFINING standardized Turing Test (such as a single version of multiple choice or fill-in-the-blank), there is no objective way (excluding the formulation of the test in the first place) of quantifying intelligence. After all, even the physical sciences only work because there are definining criteria, and it is only objective after the defining criteria have been applied. All science, even physics, is inherently somewhat subjective because the choice of what defining criteria to use is inherently subjective. Anyway, objectiveness requires determinism in the testing procedure. This is why writing composition is intrinsically subjective because there isn't even a deterministic set of instructions on how to grade the test. Quantum mechanics is objective in this sense because even though the particle positions are random, the probability distribution function that they follow is still deterministic.
@XMysticHerox 1 year ago
@@rkvkydqf Neuron networks, e.g. the brain, are vastly more powerful than any current hardware. Even the most powerful supercomputers still need quite some time to simulate even just a couple of seconds of brain activity. That doesn't mean there is an inherent difference.
@ikotsus2448 1 year ago
@@rkvkydqf The dusty old Turing test stopped doing it for us the moment it was close to be passed. The same will happen with any other tests. Speaking of a moving goalpost...
@mistgate 1 year ago
If people insist on using "AI," I propose we call it "Algorithmic Intelligence" because that's far closer to what it really is than Artificial Intelligence
@Naps284 1 year ago
Now, imagine an algorithm that, instead of being made of code, is based on an extraordinarily complex and pretty well-defined physical structure on three spatial dimensions and which structure also defines how it will elaborate "stuff" and react with itself (inbetween inputs and outputs) through the fourth dimension (time). Also, the sequence of reactions and computations define how the structure will mutate, adapt, and change over time. All these properties on the four dimensions are perfectly (theoretically, at least) transcribable as code: for example, as numbers that represent coordinates on these dimensions (including all states through the fourth dimension). Now, add in some basic rules that define how all this data must interact with itself or react and compute inputs and outputs. These rules might just be, for example, the fundamental laws of physics and the various physical constants. Oh, wait. This seems familiar... Isn't this algorithm EXACTLY how the human brain "generates" intelligence and cognition (and consciousness?)
@apolloaerospace7773 1 year ago
@@Naps284 There is no qualitative difference between connecting virtual points in n dimensions or n+1 dimensions. I don't work with AI, but to me you sound like you're trying to appear smart without knowing what you're talking about.
@Naps284 1 year ago
@@apolloaerospace7773 I didn't write all that to appear smart using weird terms or something 😂 It was not my intention
@Naps284 1 year ago
@@apolloaerospace7773 I wanted to make a parallelism between the two things by trying to totally decompose the "thing" 😂 I just tried to explain my idea of how there is no actual functional difference between a virtual and a physical neural network (mutating nodes+connections), given enough complexity and computational power...
@Naps284 1 year ago
@@apolloaerospace7773 I just liked the idea of expressing it that way, but then I got a bit lost in my explanation 😂
@bettercalldelta 1 year ago
The thing I'm afraid of is that corporations couldn't care less: as long as they don't have to pay actual humans to be artists, programmers, etc., they will use AI even if everyone knows it actually has no idea what art or code is
@rkvkydqf 1 year ago
If all else fails, all this AI FUD will surely make desperate artists/programmers/writers come to you to work for peanuts!
@haydenlee8332 1 year ago
this!!
@dashmeetsingh9679 1 year ago
The problem with AI-generated code is: how do you know it works as intended, without any potential system-crashing defects? Will AI reduce the number of software developers needed to develop software? Yep, that's true. As you increase productivity, less labor is needed. Will it result in net job loss? Hard to predict; maybe it would. Or maybe it opens new avenues, as happened with all other techs.
@shawnjoseph4009 1 year ago
It doesn’t matter how smart or stupid the AI actually is if it can do what you need it to.
@Psychonau 1 year ago
You still need a person to control these "AI"s, and at the moment these AIs produce like 90% garbage
@GiantRobotIdeon 1 year ago
Artificial Intelligence is a nebulous term that means whatever the marketer wants it to. It generally translates to "bleeding-edge computer algorithms that don't work very well right now". I recall a time when autopilots in aircraft were called "Artificial Intelligence" when they were new; the moment they began working, we renamed them. The same will happen with ChatGPT, Midjourney, etc. In ten years, when the tech is mature, we'll call these types of software text generators and image generators, because that's what they are. And of course, the bleeding edge'll be called A.I.
@thedark333side4
@thedark333side4 Жыл бұрын
Semantics! If it can compute, it is intelligent. Even a mechanical calculator is intelligent, just in a limited manner, it is still however ANI (artificial narrow intelligence).
@ValkisCalmor
@ValkisCalmor Жыл бұрын
Exactly. We've been using the term AI to refer to any algorithm capable of making "decisions" without human input for decades, from autopilot to the ghosts in Pac-man. Researchers and engineers use more specific terms to clarify what they mean, e.g. machine learning models and artificial general intelligence. The issue here is grifters and unscrupulous marketing people using exclusively the broad term and talking about your phone's personal assistant as if it's Skynet.
@marcinmichalski9950
@marcinmichalski9950 Жыл бұрын
I can't even imagine knowing so little about ChatGPT to call it a "text generator", lol.
@KasumiRINA
@KasumiRINA Жыл бұрын
ChatGPT is clearly a chatbot, BTW. I am not sure why people think AI is something new or special since anyone who played any videogame already uses that term casually to refer to enemies behavior. Some AI is basic, like Doom demons attacking each other after random friendly fire, and some AI is more sophisticated like the Director in Left4Dead or Resident Evil games adjusting difficulty based on how well you do. AI art like Stable Diffusion is nice to save time, I just wish it didn't need so much graphics memory.
@nathaniellindner313
@nathaniellindner313 Жыл бұрын
I saw an ad for a washing machine that “scans your clothes and uses AI to determine how to wash them”. With the magic of marketing, even a simple if/else tree can become AI, what a time to live in
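(For illustration, the "AI" in that washing machine could plausibly be as little as the following; this is a made-up sketch, not the actual firmware, and every input name is hypothetical:)

```python
# Hypothetical sketch of "AI-powered" washing logic: marketing-speak
# for a plain if/else decision tree. All inputs are invented for
# illustration.

def choose_program(fabric, load_kg, soil_level):
    """Return a (cycle, temperature_celsius) pair."""
    if fabric == "wool":
        return ("gentle", 30)
    elif fabric == "synthetic":
        return ("normal", 40)
    elif soil_level == "heavy" or load_kg > 6:
        return ("intensive", 60)
    else:
        return ("normal", 40)

print(choose_program("wool", 3, "light"))  # -> ('gentle', 30)
```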
@kcapkcans
@kcapkcans Жыл бұрын
I'm a data engineer for a company you've heard of. I fully agree that the general public doesn't really understand or properly use the terms "AI" and "Machine Learning". However, I would argue that in so many cases neither do the "tech people".
@jonas8708
@jonas8708 Жыл бұрын
As a software engineer I'm honestly very excited about these new models. They open whole new ways for us to handle user inputs, and let us deal with MUCH more vague concepts than before. Before, users had to click specific buttons or fill in specific text inputs, leaving room for very little variance in user interactions, whereas now we can use these models to map vague user inputs to actions in software, making it not only more accessible, but more useful in general. That is, assuming that tech bros don't ruin this whole thing trying to replace us all with what is basically an oversized prediction engine.
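(A minimal sketch of that pattern. In practice an LLM would do the fuzzy matching; here a trivial keyword scorer stands in for the model so the example is self-contained, and the action names are made up:)

```python
# Sketch of mapping vague user input to a concrete software action.
# A real system would ask a language model to pick the action; a
# trivial keyword-overlap scorer stands in for it here.

ACTIONS = {
    "export_report": {"export", "download", "report", "pdf"},
    "open_settings": {"settings", "preferences", "configure"},
    "search_docs":   {"how", "help", "docs", "find"},
}

def map_to_action(user_input):
    words = set(user_input.lower().split())
    # Pick the action whose keyword set overlaps the input the most.
    best = max(ACTIONS, key=lambda a: len(ACTIONS[a] & words))
    return best if ACTIONS[best] & words else None

print(map_to_action("can I download this as a pdf somehow"))  # export_report
```

The point of the LLM version is that it handles phrasings no keyword list anticipated, which is exactly the accessibility win described above.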
@faarsight
@faarsight Жыл бұрын
A human also learns by getting vast amounts of data and making associations that form the concept cat. The process is not as different as you imply imo. That said, yes AI is currently still way less sophisticated than humans and not really sentient or general intelligence. Imo the biggest difference isn't about hardware but the sophistication of the software, as well as the lack of embodied cognition. Evolution had millions of years to form behaviours like cognition, sentience and consciousness. We don't yet really know what those things are well enough to replicate them (or make processes that lead to them being replicated).
@idot3331
@idot3331 Жыл бұрын
Well said. This video is really infuriating because it seems like he didn't try to understand the topic at all. Just spreading misinformation for some quick and easy views.
@aliceinwonderland8314
@aliceinwonderland8314 Жыл бұрын
I once passed a basic French speaking exam with essentially no comprehension of what I was saying. Just copied the tense structure of the question, added a few stock phrases and conjunctions, and sprinkled in some random nouns and adjectives that I couldn't for the life of me tell you what they meant, only that my brain somehow decided they were in the same topic. They were testing for comprehension; I used a different method to have the appearance of it. AIs work with similar logic: it doesn't matter how you get the results within the task, so long as the results appear correct.
@tomlxyz
@tomlxyz Жыл бұрын
That's exactly what this is not about. The question here is whether the process is intelligent or not. What you describe is applying a certain method to a narrow field; that's just regular, statically defined algorithms. If you were faced with increasingly complex tasks you'd eventually fail because you don't actually comprehend it, and currently AI keeps failing too, sometimes with the simplest instructions
@aliceinwonderland8314
@aliceinwonderland8314 Жыл бұрын
@@tomlxyz you do realise all code, AI included, is quite literally just a bunch of algorithms and statistics, albeit in this case significantly more complex than what I used? And that most AI issues boil down to the lack of comprehension and ability to think (preferably critically) within the AI? I'm not an expert in machine learning, but I do have some basic understanding of how code and data sorting work, since a large part of my degree is working with various sensors, their data, Fourier transforms, matrices etc. Theoretically, I think it should be possible to get some sort of sentient AI, but machine learning as it currently is is simply way too specific in its task to really be sentient. I'd say current AIs are probably at a similar level of sentience as an amoeba.
@slowlydrifting2091
@slowlydrifting2091 Жыл бұрын
I believe the sentience of AI is not the primary concern. The crucial factors lie in the potential consequences of AI models being widely implemented, leading to the displacement of human workers in various industries, as well as the risks associated with AI systems becoming uncontrollable or behaving unilaterally.
@Bradley_UA
@Bradley_UA Жыл бұрын
Define "sentience"? Just generality? Well, in the case of superhuman general intelligence, we gotta worry about misalignment. We can't program in exactly what we want, and the more intelligent AI gets, the weirder "exploits" it will find to fulfill its utility function. Like in video games, they just start abusing bugs or silly game mechanics to get a high score, instead of playing the game like you want it to. Or imagine an AI that wants to make everyone happy... and then it comes across heroin. So yeah, we may be far off from GENERAL intelligence, but when we get there, its sentience will not matter. What matters is whether or not it will do what we want. The alignment problem.
@himagainstill
@himagainstill Жыл бұрын
More crucially, unlike previous waves of technological unemployment, the "replacement" jobs that usually come with it just don't seem to be appearing.
@haydenlee8332
@haydenlee8332 Жыл бұрын
this is a based comment
@dashmeetsingh9679
@dashmeetsingh9679 Жыл бұрын
Isn't a computer an "intelligent" typewriter? It did eliminate rudimentary jobs but created more complex, higher-paying ones. A similar ride will happen again.
@XMysticHerox
@XMysticHerox Жыл бұрын
Yes but that is not an issue with AI but rather any form of automation within a capitalist system.
@cosmic_jon
@cosmic_jon Жыл бұрын
I think it's dangerous to underestimate the disruption this tech will cause. I also think we might be conflating ideas of intelligence, awareness, consciousness, etc.
@MrC0MPUT3R
@MrC0MPUT3R Жыл бұрын
I agree. I think the conversation around "AI" has been way too focused on the "This technology CoULd kIlLL HuMaNItY!" aspect of things and very few people talking about what it will look like when the majority of jobs can be automated.
@WhatIsSanity
@WhatIsSanity Жыл бұрын
@@MrC0MPUT3R Given the soul-crushing nature of most workplaces, I see no issue with this. The problem is that the majority of people are obsessed with capitalism to the point they would rather watch everyone they care about die of starvation than admit the arbitrary nature of living to work and valuing life by the dollar rather than intrinsically. Even without AI and robots slaving away for us, we already have everything we need and more to live, yet most still insist on the notion that the only thing that justifies life and living is more work. There are reasons there are always more people than jobs to go around.
@shadesmarerik4112
@shadesmarerik4112 Жыл бұрын
@@MrC0MPUT3R why talk about jobs only? AI would be an extension of humanity, able to produce content with endless creativity, transforming society and solving problems we don't even know of yet. It will devalue stupid work while at the same time creating an abundance of wealth, which just has to be distributed fairly. In a system where the majority of the human workforce is not needed anymore, notions like wealth distribution, altruistic causes and social equality become ever more important. Btw, the "tech kills humanity" argument is a strawman by you. No rationally thinking human really believes in a near-future scenario where AI-driven robots start a rebellion or some such. And those who do use this argument use it as a scapegoating tactic to blame tech for everything bad that's happening to them.
@MrC0MPUT3R
@MrC0MPUT3R Жыл бұрын
​@@shadesmarerik4112 "which just have to be distributed fairly" My sweet summer child.
@shadesmarerik4112
@shadesmarerik4112 Жыл бұрын
@@MrC0MPUT3R well.. since the disenfranchised will be able to employ AI in warfare never seen before to equalize society or die trying, it would be in the best interest of those who own to share the abundance. Since 3D printers and access to AI are already achieving the goal of socialism (remember: the means of production are in the hands of the public), it won't be long until the economy of hoarded wealth is ended.
@Dimetropteryx
@Dimetropteryx Жыл бұрын
You can choose a definition of intelligence that fits just about whatever argument you want to make, so it really is important to make clear which one you're using before making your point. Kudos for doing that, and for stating that you chose it for the purpose of this video.
@menjolno
@menjolno Жыл бұрын
Can't wait for Adam to say that biology isn't real. What would literally be in the thumbnail: Expectation: (human beings); reality: one is "god's creation", one is "a soup of atoms". "You can choose a definition of intelligence"
@luszczi
@luszczi Жыл бұрын
Chinese Room is a masterful piece of sophistry. It sneakily assumes what it's trying to prove (that you can't get semantics out of syntax alone) and hides that with a misuse of intuition.
@private755
@private755 Жыл бұрын
But it does make a simple mistake in that there’s no such thing as “Chinese” as a language.
@Halucygeno
@Halucygeno Жыл бұрын
The issue with the Chinese room thought experiment is that in real life, it would be more like this: "every single person is inside a room, consulting their own private dictionary and writing all the correct symbols. You can't leave your room and enter anyone else's room. So, how do you know ANYONE can speak Chinese, if you can never talk to them directly?" Basically, the thought experiment acts like a gotcha, but it can only do so because it posits some "ideal" mode of communication where we can be certain that the other person is really communicating, and not just following deterministic logic. Taking its argument seriously leads to solipsism, because we can't enter other people's brains and verify that they're really thinking and feeling - maybe they're just perfectly emulating thought and emotion? What criteria do they propose for verifying that someone is really speaking Chinese, if everyone is stuck in a room and can never leave to check? But yeah, main point still stands. Tech journalists overhype everything, making it sound like we've developed A.G.I. or something.
@DeltafangEX
@DeltafangEX Жыл бұрын
Welp. Time to read Blindsight and Echopraxia for like the dozen-th time.
@jarredstone1795
@jarredstone1795 Жыл бұрын
Very good point, scrolled down to find something like this. One could also argue that it's not the point that the person inside the room doesn't understand chinese, but that the entire room with its contents should be considered an entity, which does in fact understand chinese. We humans have specific parts of our brains specialised on certain tasks, damage in certain areas for example affects the ability to use language. What difference is there between an entity that has components like a dictionary and a human worker and an actual chinese speaking person, who also relies on the components of their body to communicate in chinese? It's a bit like saying humans can't understand chinese because the amygdala alone cannot understand it.
@rkvkydqf
@rkvkydqf Жыл бұрын
In this case, it's more like high-dimensional tensor math hidden behind the door, being just a little more accurate and less deterministic with its answers, enough to make it look human, but the point still stands. Even if there are some correlations within the model, isn't that just a byproduct of its main objective, infinite BS generation?
@silfyn1
@silfyn1 Жыл бұрын
i think what you said is very true, but the point of using this example is more like: we, being human and working like one another, can assume that other humans understand things like we do, because it doesn't make sense for you to be the only person to actually understand things. But with AI we know that what it's doing is basically what happens in the Chinese room, yet we expect it to be like us. i think the problem is the overhype, and that we're so self-comparative: we see something acting like us and assume that it uses the same methods as us
@Diana-ii5eb
@Diana-ii5eb Жыл бұрын
This. Using the Chinese room argument like that is also dangerous from an ethical perspective: assuming artificial life is possible, one could always claim that it is just acting like a sentient being instead of actually being sentient, thus justifying treating a sentient being like an object. Notice how the reverse - wrongly assuming a non-sentient being is sentient - leads to much less negative consequences from an ethical perspective: treating an object like a person is a bit silly and probably quite wasteful in the long run; treating a person like an object is ethically unjustifiable. That being said, Adam is right that modern "AI" isn't sentient and likely won't be for a while. While a lot of today's AI hype is definitely overblown, some of the underlying questions asked in that debate should not be outright dismissed just because they aren't relevant yet. There is a good chance artificial general intelligence is possible, and even if it isn't, a lot of the problems associated with it are still very relevant in a world where extremely competent weak AIs exist. In essence, just because the media is (as always) massively blowing everything out of proportion doesn't mean that there isn't a real discussion to be had about the dangers of advanced machine learning systems.
@miasuarez856
@miasuarez856 Жыл бұрын
Thanks for the video. My main worry is that executives believe this can replace human workers, apply this "AI" to everything, fire a lot of people, and then work the remaining ones to death when those "AIs" fail at their tasks, because nobody would know whether their outputs and/or inputs are accurate enough; or that, the heavens forbid, they give AIs any type of decision power.
@kkrup5395
@kkrup5395 Жыл бұрын
AI will surely replace many, many workers. Even such a harmless thing as MS Excel in its time replaced many accountants across the world, because one person and the program could do a task as fast as a team of 10 would.
@87717
@87717 Жыл бұрын
I personally think you should have talked about neural networks and artificial general intelligence (AGI). There might be an issue of semantics, because "AI" colloquially now refers to any machine learning application, whereas AGI encompasses the way you understand 'true intelligence'.
@rursus8354
@rursus8354 Жыл бұрын
Yes but ordinary people don't know the meaning of those terms.
@ff-qf1th
@ff-qf1th Жыл бұрын
@@rursus8354 Which is why OP is advocating this be included in the video, so people know what they mean
@idot3331
@idot3331 Жыл бұрын
AI can refer to any computer program that does something that a human could. A calculator is artificially intelligent in an incredibly narrow sense.
@Swordfish42
@Swordfish42 Жыл бұрын
AGI is also a bit useless now, as nobody seems to agree what counts as AGI. Artificial Cognitive Entity (ACE) seems to be an emerging term that is quickly getting relevant.
@ewanlee6337
@ewanlee6337 Жыл бұрын
An AGI is pretty useless, as while it could do anything, AIs don't have desires (including self-preservation), so it won't decide to do anything.
@rolland890
@rolland890 Жыл бұрын
I definitely appreciate the video critiquing how people and the media have fearmongered about and misunderstood AI, but I think focusing on whether or not AI is actually intelligent or conscious misses the point, and other commenters have mentioned this too. We have plenty of tools that are not intelligent and are still dangerous; what matters more is the effect. HAL 9000, for example, decided to kill the crew to fulfill its ultimate objective. I would posit that AI is dangerous in large part because it lacks consciousness, and it will rigorously and strictly follow its assigned prerogatives.
@megalonoobiacinc4863
@megalonoobiacinc4863 Жыл бұрын
well yeah, if AI could actually become intelligent and naturally empathetic like most humans are (to varying degrees), then it could rise to become an actual inhabitant of society rather than the tool it was born as. And that's the line I doubt will ever be crossed
@shellminator
@shellminator Жыл бұрын
Did Hitler have a conscience? Does Putin have one? I think we as humans are so flawed it's not even a matter of conscience.. or morals or ethics or even empathy, because let's just say it like it is.. all of us are capable of the absolute best and the absolute worst
@coldspring22
@coldspring22 Жыл бұрын
But for AI to be truly dangerous, it must be conscious - it must understand what it is doing and what humans are doing in order to formulate a plan to counter what humans are doing. Something like ChatGPT has no clue what it is doing or actually saying - the moment you introduce something it hasn't been trained on, the whole edifice comes crashing down.
@morisan42
@morisan42 Жыл бұрын
There is no need for a system like HAL to actually be conscious in order to be intelligent; this is where people miss the point, I think. We erroneously think that because we are intelligent, and we are conscious, one must follow the other, and that it isn't possible to be intelligent without being conscious. The reality is that while we can explain our intelligence, and have basically replicated a facsimile of it at this point with neural networks, we are no closer to understanding what makes us conscious. We have basically been successful in creating the "philosophical zombie" thought experiment: we have machines that are intelligent without being conscious.
@gwen9939
@gwen9939 Жыл бұрын
@@coldspring22 No it doesn't. In fact, AI is more dangerous when it's not conscious. It does not need to know what it's doing. An AI that does not understand what humans are but is told to make as many of X objects as possible will mine the planet and its inhabitants for resources to produce said object. It only needs an internal theory of reality that allows it to optimize whatever goal it has been given, and it can then optimize the earth and all life out of existence. Its strategies could be endlessly intelligent, to where the combined intelligence of all humanity never had a chance to compete, while it still has no thoughts about its prime directive, or thoughts at all. The type of intelligence AI is and would be is not like a human consciousness; it would be intelligent in the way that evolution works as a sort of intelligence, figuring out problems organically with the primary goal of proliferating life on the planet in whatever shape it can. But unlike evolution, a future AI system would be able to recursively alter itself at the processing speed of a supercomputer to find the most optimal structure to achieve its goal, except it wouldn't be creating life. Regardless of what goal or explicit rule such an AI was given by humans, it would be able to grow like a hyper-efficient virus, instantly rewriting itself thanks to its superintelligence to deal with any obstacle imaginable.
@LongPeter
@LongPeter Жыл бұрын
People have been misusing "AI" for decades. Some video games refer to bots in multiplayer as "AI".
@joey199412
@joey199412 Жыл бұрын
Programmer here, working for an AI company. I actually think almost the opposite. I feel like "AI" is currently both underappreciated and overrated by the general public. Some parts are completely blown out of proportion, like how quickly the systems will improve, and some over-the-top extrapolations of future abilities based on past improvement. However, what is underappreciated by the general public, and also by your video, is precisely understanding. The current AI systems aren't stochastic parrots and most likely have some actual deeper understanding of the things they do. We can't even fully exclude AI having some level of subjective experience when processing things. Among the most important leaders within the AI field are the godfather of neural-net backpropagation and extremely respected scientist Geoffrey Hinton, and AlexNet co-inventor Ilya Sutskever. These two people are the Einstein and Stephen Hawking of machine learning. If they speak, you listen to what they have to say. And both of them are very clear and adamant that the modern state of AI involves some actual understanding, and, according to Sutskever and Hinton, internal subjective experience. For the sake of fairness and objectivity, there are also two prominent AI experts with a different view of things: Andrew Ng and Yann LeCun. Andrew Ng doesn't believe modern AI systems have internal subjective experience, but he recently changed his stance and now does believe that they have proper understanding of the things they are doing and are not simply parroting in a dumb statistical way. Yann LeCun keeps hardcore rejecting both subjective experience and understanding within these systems. However, he has not provided a clear argument to explain away certain behavior the AI displays that, according to Hinton, Sutskever and Andrew Ng, would require understanding. Not saying you are wrong, and you very well could be right.
However, I think for the sake of clarity you should at least tell your viewers that your video presents a very unorthodox view not shared by most AI experts.
@marcinmichalski9950
@marcinmichalski9950 Жыл бұрын
I was looking for a comment like this so I don't feel obligated to write one on my own. You enjoy videos by video essayists on various topics until they start talking about something you actually know a thing or two about. Unfortunately, that's the case here.
@metadata4255
@metadata4255 Жыл бұрын
@@marcinmichalski9950 Yudkowsky called he wants his fedora back
@baumschule7431
@baumschule7431 Жыл бұрын
Came to the comments section to say this. This needs more exposure. I usually really like Adam’s videos, but this one didn’t accurately depict what is currently going on in the field. There has been a major shift in the last half year from more or less the view that Adam presented to what @joey199412 described. I agree the media gets it wrong (of course) and tech bros are annoying as hell, but people in the AI field are indeed freaking out quite a bit about the unexpected capability gains of current systems (mostly GPT-4). It’s important to look into what the experts are saying. The YT channel ‘AI Explained’ also has good, unbiased content.
@fonroo0000
@fonroo0000 Жыл бұрын
could you drop some links to interviews/papers/speeches/classes/whatever where these two people explain their view on the possibility of actual understanding by the machine? I've done a quick search but maybe you have something more precise in mind
@davidradtke160
@davidradtke160 Жыл бұрын
My only concern with that point is that they are experts in machine learning, but are they also experts in cognition and intelligence? I've seen experts on machine learning argue that yes, the systems are stochastic parrots... but so are people, which honestly doesn't seem like a very good argument to me.
@Xazamas
@Xazamas Жыл бұрын
Important caveat to Chinese room: if it *actually* worked, the room and person inside *together* now form a system that "understands" Chinese. Otherwise you could point out a single brain cell, demonstrate that it doesn't understand language, and then argue that humans don't actually understand language.
@mjrmls
@mjrmls Жыл бұрын
That's my view too. Philosophically, the entity made up of the room + the person understands Chinese. So I think that LLMs are not too far away from developing intelligence. It's not human-like, but a novel form of intelligence which fits the definition from the start of the video.
@idot3331
@idot3331 Жыл бұрын
Yeah, at 7:10 he just described giving someone the materials to learn Chinese until they could understand Chinese. He disproved his own point. This whole video is pretty terrible to be honest, it seems like he just wanted to make a quick "popular thing bad" video for easy views. He seems to have forgotten that like AI, humans also have all our intelligence either "programmed" into our DNA or taught to us through experience. Why does the fact that AI needs to be programmed and learn mean it can't be intelligent? We have no idea what creates consciousness and therefore "real intelligence"; the most scientifically grounded guess is that it's just an emergent property of the incredibly complex chemical and electrical signals in the brain. There is no reason within our very limited understanding of consciousness that the electrical signals in a computer cannot theoretically do the same, or that the limited emulation of intelligence they can already achieve is not a more or less direct analogue for small-scale processes in the brain.
@XMysticHerox
@XMysticHerox Жыл бұрын
It is a very bad argument, yes. Even those that support that side of it don't really use it anymore. If you wish to actually translate something like GPT into this setting, it'd be more like: a guy was taught Chinese vocabulary and grammar. He is now put behind a curtain and has to communicate with a native speaker and pretend to be one himself. He does so perfectly. Does he actually understand the language? Obviously yes. And that's the thing: GPT does not understand cat food, no. It was not trained to, so how would it? What it does understand is language, and actually quite well, especially GPT-4.
@engineer0239
@engineer0239 Жыл бұрын
What part of the room is processing information?
@XMysticHerox
@XMysticHerox Жыл бұрын
@@engineer0239 All of it? The books here are essentially synapses and how they are laid out while the human is the somas making the actual decisions. The Chinese Room Experiment is basically looking at that and concluding the human is not really thinking because if you take away the synapses nothing works.
@avakio19
@avakio19 Жыл бұрын
I'm so glad someone is making a video about this. As a research student who works with machine learning, it's exhausting hearing people overhype what current AI can do, when we're nowhere near actual smart driving or anything like that.
@tomwaes4950
@tomwaes4950 Жыл бұрын
Big fan of the videos, however I thought I would put this here: 'AI' does indeed need references to be able to determine things, and as you say, humans do that on their own, which I think is not fully accurate. Everything we know was also taught to us, either through garnering info or through observational learning (with the exception of reflexes). The only part where I could see this not being completely accurate by itself would have to be emotions, although there is a point to be made that linking events to emotions also requires this link to be learned.
@idot3331
@idot3331 Жыл бұрын
Even our instincts are "programmed" into our DNA, much like a computer program. None of this video proves anything about the capability of a computer to be intelligent or conscious, in fact in multiple places he contradicts himself. At 7:10 he just described giving someone the materials to learn Chinese until they could understand Chinese, which if the analogy to a computer program is correct means that a computer could do the same. There seems to be a fundamental lack of understanding of what makes "intelligence" or "consciousness" in this video, and I suspect he just wanted to make a quick "popular thing bad" video for some easy views without actually thinking it over.
@ewanlee6337
@ewanlee6337 Жыл бұрын
One big difference though is that humans are self-motivated to learn (some) things, whereas computers will only learn if made to do so. Give a computer unrestricted access to the internet, sensors and a body, tell it it has to work or do something to pay for the electricity and internet it uses, and you won't see it do anything, unlike a human, which will innately try to do things to survive or just to enjoy.
@tomwaes4950
@tomwaes4950 Жыл бұрын
@@ewanlee6337 So a human learning from their parents or from the consequences of not paying the electricity is not them learning (getting information) that they need to pay the bills? On the survival part, it's basically what I said about reflexes, but there is an argument to be made that, for example, not eating -> hunger (hunger bad!) -> prevent hunger. So your stimulus or info would be the hunger and the knowledge that it is bad, which in all fairness AI wouldn't have, because we instinctually know that hunger is not good. I agree with most of the video, I just think that part was either incomplete or inaccurate. :) And I'm definitely not a hater, I am a massive fan of Adam, just thought I would put my thoughts down here in order to spark a bit of constructive debate! You make a good point about humans in certain cases being self-motivated to learn!
@ewanlee6337
@ewanlee6337 Жыл бұрын
@@tomwaes4950 I don't know how you got your first sentence from what I was saying. I meant the exact opposite: humans will learn whatever they need to in order to avoid suffering, whereas an AI would just let things happen. And self-preservation/hunger is not something you learn, you either care or don't.
@tomwaes4950
@tomwaes4950 Жыл бұрын
@@ewanlee6337 My point was that your parents telling you to pay your bills when you're older, or learning it from the consequences, is information, and if you gave that same information to 'AI' it would also 'pay its bills'. As for the second part about self-preservation, I literally agreed with you about it!
@zavar8667
@zavar8667 Жыл бұрын
While there is a lot of hype around AI, and the majority of it is bullshit, you also missed the point that intelligence and self-consciousness are different, and the notion of "understanding" is not well defined. One could argue that there is no difference between a collection of carefully assembled atoms going through the motions to create a Chinese person, and the setup described in the Chinese room thought experiment. Thumbs up for using System Shock's music and SHODAN's image!
@radojevici
@radojevici Жыл бұрын
Though the Chinese room example shows that the room operator doesn't understand Chinese, someone could say that an understanding of Chinese is being created: understanding as an emergent property of all the elements and arrangements. The operator doesn't have to know Chinese, just as individual neurons in the brain don't really understand anything or are conscious. We really don't know what kind of a thing consciousness is, so the only useful way to recognise it is by a thing's behaviour, regardless of the underlying mechanism. Just want to point that out, not saying that what ppl are calling AI now is actually conscious or something.
@Anonymous-df8it
@Anonymous-df8it Жыл бұрын
Surely the non-Chinese person would end up learning Chinese during the experiment?
@MrSpikegee
@MrSpikegee Жыл бұрын
@@Anonymous-df8itThis is not relevant.
@tgwnn
@tgwnn Жыл бұрын
​@@DanGSmithyeah I think most of its appeal is derived from abusing our preconceptions about what "computer instructions" are. We'd probably think of some booklet, 100 pages, maybe 1000 if we actually think about it. But in reality it's probably orders of magnitude larger.
@hund4440
@hund4440 Жыл бұрын
The Chinese room understands Chinese, not the person inside. But the dictionary is part of that room.
@tgwnn
@tgwnn Жыл бұрын
@@hund4440 I would also love to hear a proponent of the Chinese Room explain to me, okay, so it doesn't understand anything. But how are our neurons different? Do they have some magic ability that cannot be translated into code? Why? They're just sending electric signals to each other. Or are they saying it's all dualism?
@shakenobu
@shakenobu Жыл бұрын
THANK YOU, i think people on the internet really need to hear this, your basic explanation is so damn clear i love it
@Mik-kv8xx
@Mik-kv8xx Жыл бұрын
As an IT person myself, hearing more and more normies throw around the term AI and wrongly explain it has been mildly infuriating ever since ChatGPT was released.
@JonMartinYXD
@JonMartinYXD Жыл бұрын
Just wait until upper management starts asking "can we use AI to solve this?" for _every single problem._
@namedhuman5870
@namedhuman5870 Жыл бұрын
It already happens. Had a CEO ask if ChatGPT can do the bookkeeping.
@echomjp
@echomjp Жыл бұрын
Unfortunately, people have been misusing the phrase "AI" for many decades. At least 20 years, from my own experience. In video games for example, developers would call their algorithms used to control game logic "Game AI," long before machine learning was commonplace. Then machine learning took off, and people confused it for AI again. Now with ChatGPT and similar systems, which basically just accumulate lots of data and then output things that can "pass" as real (while "creating" nothing), people further confuse it. AI should go back to defining actual artificial intelligence. AKA, what is now called "general purpose AI," artificial intelligence that isn't just algorithms and data processing but which actually involves being able to create something new without strictly following the models we are giving to a system. That might not happen anytime soon though, because calling things like ChatGPT "AI" is profitable - the delusion of it being actually intelligent helps market such technologies. As long as the average person doesn't understand the difference between general purpose AI and algorithms that occasionally include some machine learning, calling everything "AI" is going to just be a nice way to make your technologies more marketable.
@Mik-kv8xx
@Mik-kv8xx Жыл бұрын
@@echomjp i think it's fine for game devs to use the term AI. It's sort of like developers and plumbers/engineers using the term "pipeline" to describe different things. Slapping AI onto literally everything and anything is NOT fine however.
@christianknuchel
@christianknuchel Жыл бұрын
@@echomjp I think in games it's sort of okay, because there it refers to a system that is actually faking a real player, a crafted illusion of intelligence. Since in games immersion is usually desired and there's no risk of it fomenting a misinformed public on important matters, picking a word that reinforces the illusion is a fitting choice.
@DanielSeacrest
@DanielSeacrest Жыл бұрын
Ok, first of all, there was a recent paper published which gave evidence that language models trained for next-token prediction do actually learn meaning. (Specifically, the study set out to address these hypotheses: LMs trained only to perform next-token prediction on text are (H1) fundamentally limited to repeating the surface-level statistical correlations in their training corpora, and (H2) unable to assign meaning to the text that they consume and generate. The team that worked on the paper said they see these two hypotheses as having been disproven through their work. I can't give links or YT will take the comment down, but search "Language models defy 'Stochastic Parrot' narrative, display semantic learning" and you will find a nice summary of the paper.) So these transformers are actually able to assign meaning to the text they consume and generate; what's stopping this from leading to a fundamental understanding of the concepts represented in the training corpus? (And just saying "shallow semantic representations" isn't going to cut it: there is evidence that they aren't limited to repeating the surface-level statistical correlations, which may mean they can actually exhibit deeper semantic understanding.) But also, you provide no actual evidence other than "It isn't AI because it can't do X", which isn't evidence but a statement, and you don't even give evidence to back up your claim; it's basically saying "Just trust me bro". And I would argue that, yes, GPT-4 is not comprehending the same way we humans comprehend information, but I wouldn't completely say that GPT-4 doesn't understand what it is doing. Fundamentally, we do not actually understand how and why the transformer architecture works so well (i.e. GPT-4 was able to score 90% on the bar exam, and if most of the questions weren't in its dataset, how could it possibly have done so well? I just don't think shallow semantic representations would be nearly enough for that).
@TheSpearkan
@TheSpearkan Жыл бұрын
I am worried about AI, not because the Terminator robots will kill us all, but in case i get a phone call one day from an AI pretending to be my mother pretending to be kidnapped demanding "ransom money"
@OctyabrAprelya
@OctyabrAprelya Жыл бұрын
We should already be there: we have learning algorithms that can generate a human voice saying whatever, based on audio of anyone's voice, and algorithms to recreate the mannerisms of the way people talk/write.
@Bradley_UA
@Bradley_UA Жыл бұрын
@@OctyabrAprelya and voice biometrics also goes in the dumpster.
@mvalthegamer2450
@mvalthegamer2450 Жыл бұрын
This exact scenario has happened irl
@ottz2506
@ottz2506 Жыл бұрын
Something similar actually happened once except it concerns a mother who had received a call from someone who said that they had kidnapped her daughter. They used AI to mimic the voice of her daughter to trick the mother into thinking her daughter had been kidnapped. She could hear her “daughter” screaming and crying and telling her that she messed up. The scammers demanded a million but lowered it to 50K since the mother wouldn’t have been able to afford it. Thankfully no money was exchanged as the father of the daughter told the mother that he had called the daughter. They got the daughter’s voice by just getting samples of her voice from various interviews and other sources and put it all together. For the specific story just put Jennifer Destefano AI into google.
@hivebrain
@hivebrain Жыл бұрын
You shouldn't be paying kidnappers anyway.
@bournechupacabra
@bournechupacabra Жыл бұрын
There are a lot of interesting extensions to the Chinese Room argument. Some people argue that the "room" itself could be considered to "understand" Chinese: basically, the system of person + extensive books with rules about the language. If the Chinese room could 100% produce intelligible and human responses no matter the input, I am inclined to agree with this argument, however strange the concept may be. I think the simpler argument is just that current AI simply can't 100% replicate human intelligence. One very simple example is that current AI can't multiply large numbers no matter how much training they get. Yes, they could learn to use some calculator plugin like a human would use a physical calculator, but any human with elementary school knowledge could also use pen and paper to write out and solve any multiplication problem, regardless of the number of digits.
@slyseal2091
@slyseal2091 Жыл бұрын
The math argument is meaningless; the distinction is simply given by what information you choose to feed the machine, or the human for that matter. All math, by its very nature, works on having set rules and logic to follow. Whatever AI model you saw "fail" at doing maths simply either didn't have the instructions and/or wasn't advanced enough to retrieve the instructions on its own. That's not it failing to replicate human intelligence; that's just not telling it what to do. In the Chinese room example, it's equivalent to not providing a book in the first place. I know it sounds stupid, but math is unironically not complex enough to measure the intelligence of machines.
@thedark333side4
@thedark333side4 Жыл бұрын
90% agreed, except the combination of AI plus a calculator plug-in can also be viewed the same as the Chinese room.
@GlizzyTrefoil
@GlizzyTrefoil Жыл бұрын
I really like your example of the multiplication, but in my opinion the pen and paper that the humans are allowed to use really does the heavy lifting, in my case at least. I'd classify the pen-and-paper method as external tool use, which is not at all different from the use of a calculator or computer. That probably means that current AI isn't Turing complete, but neither is the average human without a piece of paper (technically an infinite amount of paper and ink).
@thedark333side4
@thedark333side4 Жыл бұрын
@cobomancobo this! So so so much this!
@DaraelDraconis
@DaraelDraconis Жыл бұрын
@@slyseal2091 The mathematical argument is close to meaningless, but we can modify it to make a much more significant point: If you give a (literate) human a text that describes how to multiply, the human can learn to multiply large numbers. They may need external storage (pen and paper, or a few friends) to hold intermediate values that won't fit in their working memory, but they'll still acquire the ability. If you add such a text to the training corpus of a large language model (like the GPT family), it will _not_ gain the ability to multiply, _no matter how much storage it can access._ This is because it does not actually _understand_ its training data in the semantic sense, only which symbols are most likely to follow which other symbols. Instead, to give it that ability, it must be given an external tool (a calculator plugin) and explicitly programmed to use it, not just in the language of which it's simulating understanding, but in machine code. There may be future AIs created to which this does not apply, but it certainly applies to the tools that are so popular at the moment.
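The distinction drawn above can be made concrete: the "text that describes how to multiply" is just a finite procedure an instruction-follower can execute step by step. A minimal sketch in Python of grade-school long multiplication (purely illustrative, not tied to any model):

```python
# Grade-school long multiplication, digit by digit with carries: the
# pen-and-paper procedure a literate human can learn from a written
# description. The digit list below plays the role of the paper.
def long_multiply(a: str, b: str) -> str:
    result = [0] * (len(a) + len(b))  # enough cells for any product
    for i, da in enumerate(reversed(a)):
        carry = 0
        for j, db in enumerate(reversed(b)):
            total = result[i + j] + int(da) * int(db) + carry
            result[i + j] = total % 10   # write the digit
            carry = total // 10          # carry the rest
        result[i + len(b)] += carry
    digits = "".join(map(str, reversed(result))).lstrip("0")
    return digits or "0"

print(long_multiply("123456789", "987654321"))  # 121932631112635269
```

Following this procedure needs no "understanding" of what the numbers mean, which is exactly the Chinese-room point the thread is debating.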
@FractalSurferApp
@FractalSurferApp Жыл бұрын
While doing a PhD in machine learning a while ago we avoided using the term AI as way too buzzy and imprecise. Now I reckon it's a useful term saying a machine *seems* intelligent. There are lots of ways to make a machine seem intelligent, only some of them involve any kind of tricky algorithm. TBH It's a sociology term more than a comp sci term -- as much to do with the interface as with the underlying engine.
@trevorelvis1355
@trevorelvis1355 Жыл бұрын
It's mostly people with no AI experience who blow this out of proportion. I work in a hospital and the craziest stories I've heard are from doctors and journalists. (Just to be clear, I have a background in IT and Computer Science.) I remember some "medic" telling me "It's modeled after the brain"... I told him no it's not, and he just ended up saying I didn't know what I was talking about. (Just to be clear again, I have some AI projects I am working on right now, so I think I know what I am talking about fairly well.)
@DeltafangEX
@DeltafangEX Жыл бұрын
An accurate take. The blatant fear-mongering, anthropocentric cynicism, and the endless, rabid overhype are all more of an issue than the actual models are. That's really what I don't get when people say GPT is completely useless as a tool for specific tasks in everyday life, when it absolutely is useful, even as rudimentary as today's first steps currently are. "B-But it 'hallucinates' facts!" Uh huh. I'm sure every living human being on the planet can point to a time when someone they interacted with either consciously or unconsciously "hallucinated" facts. You even have the perfect example: the medic who was somehow convinced YOU were wrong, despite not double-checking your level of knowledge (or even being aware of his OWN level of knowledge), just in case he was about to embarrass himself. And more importantly, unless you have perfect recall/eidetic memory, there is no way the vast majority of people can confidently say that they have _never_ misplaced something of theirs while being nonetheless _convinced_ that they remembered exactly where they put it. We're all just pattern-matching machines, every single last one of us. Molded by a billion years of evolution, yes, but that just means we've had a head start. We didn't even have computers 500 years ago, and we've somehow managed to develop pattern-matching models of this level of competence in only a dozen. I used "anthropocentric cynicism" as a descriptor of this kind of attitude because it reminds me exactly of the attitude a lot of people adopted at every landmark discovery that made us humans feel a little less special compared to everything else that exists out there.
@astreinerboi
@astreinerboi Жыл бұрын
I mean they are called "artificial neural networks" for a reason. Are they accurate models of the brain? No. Are they modeled after the brain? Absolutely, in the sense that the idea of ANNs was inspired by real neurons. So I don't get the point of your statement.
@XMysticHerox
@XMysticHerox Жыл бұрын
It is modeled after the brain though? It does not model the brain, no, but it is based on neural networks, i.e. the brain. I am guessing the person who didn't know wtf they were talking about here was you. And no, just because you are training an AI does not mean you know what you are talking about. Just because you can write a hello world program does not mean you have a deep understanding of theoretical computer science.
@raphaelmonserate1557
@raphaelmonserate1557 Жыл бұрын
As a ML nerd, my only complaint is that you should have talked about neural networks, which are tuned and taught (and subsequently used) just like a typical "brain" filled with interconnected neurons :shrug:
@yavvivvay
@yavvivvay Жыл бұрын
Brains are way more complicated than that, as a single neural cell is estimated to be at least around 1000 ML "neurons" worth of computational power. But the general idea is similar.
@utkarsh2746
@utkarsh2746 Жыл бұрын
We have just gone from IFTTT to machines being able to make some connections themselves, which might still be wrong or, in the case of ChatGPT, straight-up hallucinations; it is nothing like a human brain.
@Niko_from_Kepler
@Niko_from_Kepler Жыл бұрын
I really thought you said „As a Marxist Leninist nerd“ instead of „As a machine learning nerd“ 😂
@battlelawlz3572
@battlelawlz3572 Жыл бұрын
The difference being that AI neural links make binary connections, whereas human neurons have multiple links per neuron to multiple other neurons. The computer neurons are each interlinked, yes, but in a more linear/limited fashion. The fact that modern technology still has trouble mapping the brain is proof of how complicated and numerous the structural components really are.
@Kram1032
@Kram1032 Жыл бұрын
​@@yavvivvay there is a paper about having specifically transformers emulate individual realistic biological neurons, and it took about 7-10 transformer-style attention layers to manage that. I'm not sure what width those transformers had. I guess if they had a width of like 100, that would roughly fit your 1000 neuron (actually more like 1000 parameters?) claim. I *think* they were narrower though? The width wasn't as important as the depth, iirc. Sadly I can't recall what the paper was called so I can't check that stuff right now. Either way, the gist of what you are saying - real neurons are far more complex than Artificial Neural Net style neurons - is certainly true
@thibautkovaltchouk3307
@thibautkovaltchouk3307 Жыл бұрын
I find the Chinese room argument very counterproductive in this debate: there is an intelligence in the books, so there is an intelligence in the room. It is not the human in the room that produces the discourse (intellectually), but the discourse is produced. It should be noted that the room would have to be extremely large to do what is described... The problem with this argument is: in what case can you be sure that there is no "real intelligence" behind the discourse, and in what case can you be sure there is? Philosophically, it is an interesting question, but I'm pretty sure there is no consensus on the answer.
@BobSmith-dv5rv
@BobSmith-dv5rv Жыл бұрын
With the term AI being primarily used for buzz, I now just read it as "Artificial Idiot." Seems to fit better for most of the news stories that overuse it.
@Tyrichyrich
@Tyrichyrich Жыл бұрын
Now that’s funny and highly true
@PhantomAyz
@PhantomAyz Жыл бұрын
Artificial Idiot Passes Major Medical Exam
@stephaniet1389
@stephaniet1389 Жыл бұрын
Artificial idiot passes the bar exam.
@ArtieKendall
@ArtieKendall Жыл бұрын
In an unfortunate twist, the medical exam was conducted by the W.H.O.
@stevejames7930
@stevejames7930 Жыл бұрын
The cat should make more appearances in your videos
@taukakao
@taukakao Жыл бұрын
The only thing this video argues for is that A.I. is not conscious (yet). Consciousness is not the only important thing tho. Intelligence is not one thing, it's a word that we put on many things like logical reasoning, pattern recognition, empathy, planning, problem-solving, etc. Computers can do many of these things so asking if computers can be intelligent is kind of weird. Computers don't qualify for all meanings of intelligence but for some they do. So I would argue that they are intelligent, just that human brains still have more features (for now).
@etiennedlf1850
@etiennedlf1850 Жыл бұрын
I understand your point, but I don't see how the "AI" not understanding what it does makes it less of a threat. It doesn't need to have consciousness to pose a serious problem in our lives.
@deauthorsadeptus6920
@deauthorsadeptus6920 Жыл бұрын
Not understanding what it does is the core point. It can feed you, as the norm, random words put together in a very believable form, without any bad intentions. A chatbot is a chatbot and should remain one.
@andreewert6576
@andreewert6576 Жыл бұрын
The answer is simple: whatever current "AI" there is, it cannot do anything it wasn't trained to do. We're not talking about having consciousness; we're two or three steps before that. Right now, machines can't even abstract properly. We're just like young parents, only looking at the things it gets right, dismissing the many obviously stupid responses.
@justalonelypoteto
@justalonelypoteto Жыл бұрын
​@@andreewert6576 Exactly. You can train the AI to tell apart bees and F150s, but it won't have a grasp of what an animal or a living being is. If you show it a dog, it has no clue what it is and no way to learn to recognize it besides seeing 5 million of them and overheating a supercomputer for a few months. It's just a complicated intertwining of values that gives it a number for how "confident" it is that what it's looking at is a bee. Sure, your brain is also perhaps representable this way, but our computers couldn't deal with even roughly simulating more than a couple of neurons, as far as I know. Obviously you can theoretically simulate everything, but simulating every interaction between every atom, which would be the brute-force way, is obviously completely out of the question.
@Galb39
@Galb39 Жыл бұрын
Like the drone simulation example ( 0:52 ), the problem isn't rogue AI attacking its user; it's an extremely fallible machine being given so much power. When setting up AI, you need to decide on an acceptable error rate, and a 0.0000001 error chance may sound reasonable to a programmer who forgets computers can do 100000000s of computations a second, and an error can kill someone.
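The scale mismatch in that comment can be spelled out with back-of-the-envelope arithmetic (the numbers are the comment's illustrative ones, not from any real system):

```python
# A "tiny" per-operation error chance times modern throughput is not tiny.
error_rate = 1e-7        # the 0.0000001 error chance from the comment
ops_per_second = 1e8     # 100000000s of computations a second

errors_per_second = error_rate * ops_per_second
errors_per_day = errors_per_second * 60 * 60 * 24

print(round(errors_per_second))  # 10 expected errors every second
print(round(errors_per_day))     # 864000 expected errors per day
```

Ten expected errors per second is nearly a million per day, which is why "seems reasonable" error rates deserve this kind of multiplication before deployment.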
@ChaoticNeutralMatt
@ChaoticNeutralMatt Жыл бұрын
@@andreewert6576 "Right now, machines can't even abstract properly." I'm not sure what you mean.
@johnny_eth
@johnny_eth Жыл бұрын
I'm a SW engineer with 16 years of experience. There is no such thing as intelligence in AI. When I did my university studies, AI was about deep search algorithms, like A-star, and how such algorithms exploded when playing chess; then you had to optimize tree traversal. It was laborious, heavy-handed work to implement these algorithms. Back then, stuff like neural nets was quite novel and underutilized. Now AI is all about creating very, very complex statistical models that can predict very, very complex systems, using all sorts of crazy math. Neural nets, for instance, are glorified matrix multiplication, in layers with filtering functions which then toggle on or off depending on signal strength. What they do is partition the search space/domain into all sorts of crazy shapes, which allows the model to predict very non-trivial data in a way that is not interpretable for a human. None of this is intelligence. Intelligence is about an entity (say a human or a computer) recognizing the environment around it, identifying new problems or challenges not seen before, modelling and dividing a problem into its component parts, and then solving the parts and then the whole. Basically: observing, learning, adapting, and creating new knowledge. Computers can't do any of these things, with the exception of adapting their models to some new input data.
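The "glorified matrix multiplication, in layers with filtering functions" described above can be sketched in a few lines; the weights and inputs here are made up purely for illustration, not taken from any trained model:

```python
def relu(x: float) -> float:
    # the "filtering function": a unit toggles off when its signal is negative
    return max(0.0, x)

def layer(weights, biases, inputs):
    # one layer = plain matrix-vector multiplication plus the activation
    return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# two stacked layers: 3 inputs -> 2 hidden units -> 1 output
hidden = layer([[0.5, -1.0, 0.25], [1.0, 1.0, -0.5]], [0.1, -0.2],
               [1.0, 2.0, 3.0])
output = layer([[2.0, 1.0]], [0.0], hidden)
print(output)  # a single output activation; nothing here "understands" anything
```

Real networks stack many more, much wider layers of exactly this operation; the numbers are learned rather than hand-picked, but the mechanics are the same.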
@alexandruianosi8469
@alexandruianosi8469 Жыл бұрын
Chinese Room thought experiment in the form presented by you (and many other pop-sci influencers out there) has the same issues as the problem of other minds from philosophy. The more formal Chinese Room *argument* given by J. Searle is better structured*, but once you analyze the axioms it is built on, it kind of falls apart (I don't blame Searle for it; in the end, he elaborated the argument in the framework of 1980s-era computers, so... yeah), because modern ML and AI systems don't fit the very first axiom of the argument (i.e. "Programs are syntactic"; modern AI systems do encode semantics, and we are just at the beginning of it). *The argument goes like this: A1. Programs are syntactic. A2. Minds have semantics. A3. Syntax by itself is not sufficient for semantics. C: Programs are not sufficient to build a mind.
@electroflame6188
@electroflame6188 Жыл бұрын
Searle literally thinks a 1:1 simulation of a human brain (or even a physical brain made of artificial neurons) wouldn't be conscious, so I don't think his argument would change much if he formulated it in the modern day. The real problem with the argument is that A3 is entirely unfounded.
@alexandruianosi8469
@alexandruianosi8469 Жыл бұрын
@electroflame6188 yeah, I think Mr. Searle does subscribe to some metaphysical theory of mind.
@ishtaraletheia9804
@ishtaraletheia9804 Жыл бұрын
I'm rather disappointed in this video. The Chinese Room is a weak argument that weaponizes human-centric thinking. Sure, the person in the room doesn't understand Chinese, but it's not like there is a person *in* your brain that understands English either (it's just neurons firing), and yet you as a totality understand English perfectly well. While it's true that current systems are not conscious, in my opinion this is because of technological limitations and the architecture of the programs, not anything Chinese-room related.
@SirRichard94
@SirRichard94 Жыл бұрын
The problem with the Chinese room experiment is: why does it matter? My hands don't understand what they are typing, but they are part of an intelligent system either way. Similarly, even though John Cena doesn't understand what's happening, the system itself is intelligent in the end, since it can emulate the conversation.
@lavastage1132
@lavastage1132 Жыл бұрын
The Chinese room experiment matters because it points out that conversation does not *necessarily* mean it grasps the meaning of what is being said, and that matters. It disqualifies the act of conversation as a metric for discerning how intelligent it is. Is what you are speaking with able to understand that the string C-A-T refers to anything at all? If so, how much? Is it a real-life object? A creature with its own needs? Something that humans like? Etc. We are so used to just assuming the person we are speaking with understands all, or at least most, of these meanings subconsciously that it's hard to grasp that AI does not automatically carry the same understanding. Just because something like ChatGPT can carry out a conversation does not mean there is an intelligence that can actually comprehend what is being said. We shouldn't automatically trust that it can based on that metric.
@SirRichard94
@SirRichard94 Жыл бұрын
@lavastage1132 What does it matter how, and whether, it understands the concept of a cat, if it can correctly use it in the correct context? If by all metrics the conversation about cats is good, then it functionally understands it, and that's what matters. Stuff like consciousness and free will and understanding are not measurable, so they hardly matter in a conversation about a tool.
@ewanlee6337
@ewanlee6337 Жыл бұрын
But in the Chinese room experiment, they will only say something if addressed; they won't say anything on their own initiative or to address any problems they have. They won't ask for more paper and ink in Chinese. They won't ask what's happening during an earthquake. They won't try to learn anything else. They can pretend when you talk to them, but they won't act like an independent person.
@alexanderm2702
@alexanderm2702 Жыл бұрын
@@lavastage1132 The Chinese room experiment here is a red herring. ChatGPT (and GPT4 even more) does understand what is being said. Write something and ask it to write the opposite, or ask it to write some examples similar to what you wrote.
@aluisious
@aluisious Жыл бұрын
@@lavastage1132 All of the responses like yours are begging the question, how do you know you are intelligent? Can you prove you "understand" things better than an LLM? You can't. You feel you do, which is nice, and I like feeling things, but what does that really prove?
@KlausJLinke
@KlausJLinke Жыл бұрын
Some people were fooled by Joseph Weizenbaum's ELIZA (1966) and thought it was actually "intelligent", or at least close.
@sorgan7136
@sorgan7136 Жыл бұрын
The criticism at 5:25 is not really valid, as that is the same thing we do: we cannot understand empathy or the effects of a bad diet without being taught. It's no different with AI; we are not born with the understanding that obesity is a bad thing. Moreover, the implication that AI is not intelligent because it's just "going through the motions" describes how all humans learn to communicate: a baby does not understand what "Mommy" is until the concept is reinforced by English-speaking parents who use the word often enough. The concept of understanding language is just a byproduct of experiencing the spoken and written word often enough. You understand what you say just as much as an AI understands what it's saying; the concept of understanding a language as described in this video is an arbitrary distinction between words said by an AI and words said by a human. All intelligence is on a spectrum, and the argument could be made by a hyper-intelligent AI that humans don't "understand" language because they don't think as highly as it does.
@fernandotaveira7573
@fernandotaveira7573 Жыл бұрын
I work in IT and I'm tired of explaining this to my colleagues and friends. This is not really AI, but rather marketing. Just like "self-driving" isn't really self-driving, or how stupid 3D TVs were then and are now in the trash, and so on... Marketing is out of control. The appearance of being the next big thing is more important than being the next big thing.
@fullmetaltheorist
@fullmetaltheorist Жыл бұрын
Most people hyping AI haven't written a line of code, and it shows. Plus, another thing people who hype up AI aren't acknowledging is that AI is nowhere near making robots by itself or hacking people's phones and IT systems.
@ralalbatross
@ralalbatross Жыл бұрын
At the core of this is a simple misunderstood concept surrounding computers, which is the following: we don't teach computers anything. What we do is write code and provide algorithms which, when given appropriately embedded data sets with appropriate instructions, will eventually minimise a difference function between what we want and what the machine outputs. We have hundreds of ways of doing this, from approaches like linear regression up to the enormous generative AI frameworks which stack dozens of layers on top of each other and use vast data sets. It all reduces to the same problem, though; we have different ways of attacking it. We can even play agents off against each other. At some point it all becomes a math problem that needs a tensor solver.
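The "minimise a difference function" idea reduces to something like this even in the simplest case the comment names, linear regression; the data and learning rate below are made up for illustration:

```python
# Fit y = w * x by gradient descent on a mean-squared-error
# "difference function" between target and output.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # generated with w = 2, so 2.0 is the answer

w = 0.0                      # the machine "knows" nothing to start
for _ in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= 0.01 * grad         # step downhill
print(w)                     # converges to ~2.0
```

A large generative model is this same loop with billions of parameters instead of one and a far fancier difference function, which is the comment's point: no teaching, just minimisation.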
@Dullydude
@Dullydude Жыл бұрын
I don't think the human in the chinese room experiment understands chinese, but the system as a whole does. The human is just a conduit for the information to pass through. Would be like saying neurotransmitters in a brain don't understand what they are doing but the whole brain as a system does.
@mathewferstl7042
@mathewferstl7042 Жыл бұрын
but the metaphor is that people think that person does understand chinese
@ewanlee6337
@ewanlee6337 Жыл бұрын
They don’t understand Chinese because they don’t know how to communicate their own desires and goals. They can only be used like a tool by other people. They cannot use their Chinese communication ability to help themselves achieve other things they want to do.
@Tybis
@Tybis Жыл бұрын
So in effect, the chinese room is a person made of smaller people.
@aluisious
@aluisious Жыл бұрын
The other problem with the "Chinese room experiment" is the stupid assumption that John Cena isn't going to learn Chinese while he's doing this. I've learned a small amount of Spanish as an adult, basically accidentally; I am totally not trying. Now imagine spending all day locked in a room reading slips of paper and writing out other slips. People learn languages. Now, if a machine learns languages, how do you know you "understand" things it doesn't? ChatGPT is clearly better informed about anything you ask it than 90% of people, and it's just getting better. The secret sauce may be something powerful about the nature of language itself, more than what's learning it.
@megalonoobiacinc4863
@megalonoobiacinc4863 Жыл бұрын
@@aluisious Or maybe rather the nature of our brains. In videos about human evolution I've heard it explained that the size of our brain (which is enormous compared to other animals') might not be so much a result of tool and technology usage (like fire, stone tools, etc.) but more about handling the complex social relationships that come with living in a larger group. And one thing that is central there is a language with many words and meanings.
@MrAwesomeTheAwesome
@MrAwesomeTheAwesome Жыл бұрын
Two things. One, you completely ignore the learning process in the first comparison against traditional computing and machine learning. Yes, a frozen neural network is no longer engaging in any kind of intelligence because the learning is done; the 'intelligence' refers to the processing of data into the AI model in the first place. Once it's done training, it's more akin to a very complicated new type of database, and I honestly don't disagree there. Secondly, the Chinese Room has many critiques which are quite compelling, the most important of which, in my eyes: John Cena may not know Chinese, but the entirety of the Chinese Room clearly does. When interacting with an LLM, we are assessing the entire system. And the entire system clearly knows how to correctly translate many phrases between over a hundred different languages, pretend to be human in a fairly convincing manner (some of the time...), and present a consistent personality over the course of a chat conversation. I'd argue it is a kind of consciousness very different from ours, which lasts until the chat session ends, existing through latent space rather than through time like humans do. Very primitive consciousness, for sure, but I think it's a reasonable way to think about it.
@mactep1
@mactep1 Жыл бұрын
The example reminds me of when Nigel Richards won the 2015 French Scrabble world championship by memorizing the French dictionary, without being able to speak a single sentence in French. It's the same as current "AI": it has a data set so big that any question you ask it has most likely already been answered by several humans whose works are in the data set (a lot of them without permission). This is why greedy companies like OpenAI are so desperate to regulate it: they know that anyone who can gather a similar amount of data (ironically, this can be done using ChatGPT itself) can replicate their precious money printer.
@roofortuyn
@roofortuyn Жыл бұрын
I was especially amused by the whole "AI drone turns on creators" thing. It showed up on Reddit with a lot of people commenting on how this was proof that AIs are "evil" and out to destroy us. In actuality the AI is not "evil". It just doesn't know what the fuck it's doing. It doesn't understand the fundamental concepts of task, purpose, and morality surrounding war that are so innate to humans that I guess the operators didn't feel the need to specify them to an "intelligence". So in the simulation it attacked its commander, because the commander was telling it not to attack a certain target even though its programming said that attacking said target was its goal. It simply went looking for a solution to the problem, and didn't understand why people started calling it a "bad AI".
@tuffy135ify
@tuffy135ify Жыл бұрын
"It just works!"
@SianaGearz
@SianaGearz Жыл бұрын
How often do human operators fail at IFF ("identify friend or foe")? A lot. Friendly fire is a massive problem. It doesn't make those people evil; by all reason they're doing their best in a stressful situation, handling a limited amount of potentially faulty data.
@amarug
@amarug Жыл бұрын
This is all true of course, for now. But who is to say that what our brain does isn't exactly the same "boring" computing that an artificial neural net does, just on much, MUCH bigger and more complex networks, giving rise to seemingly deeper concepts such as empathy? It just runs on different hardware, and all our emotional landscapes are just artifacts of complex computing. Maybe it's different, and to be honest I kind of hope we are something more profound than that, but observing what these "simple" AIs can already do still makes me pause and think.
@mussa3889
@mussa3889 Жыл бұрын
Adam fat shamed his cat!!!! In all seriousness, thanks for the video. People are entirely overestimating the progress made in the field
@Blanksmithy123
@Blanksmithy123 Жыл бұрын
Very strange that you equate sentience with the ability to perfectly emulate sentience. An AI doesn't actually have to be experiencing anything to be indistinguishable from a human. I'm not really sure what you were trying to say in this video.
@danielmortimer532
@danielmortimer532 Жыл бұрын
Exactly! In fact there's no method to be 100% certain that other humans are sentient, beyond the fact that we know ourselves to be sentient and conscious and so assume others are for all intents and purposes. All that matters is whether someone or something appears to be sentient, because there's no way to enter someone else's mind, or an AI's programming, to measure or experience their "conscious" state of being and see if it exists. All we can see are their outside reactions and perhaps what triggers those reactions, not whether that person or thing truly understands what they're doing and why. A single conscious person in our world could actually be experiencing an elaborate dream or simulation in which they're the only conscious being and everyone else is a projection of their own mind or clever outside programming, and there would be no way for them to prove or disprove it. This is the big issue, and Adam doesn't address it properly: if there's no way to measure or detect "consciousness" through any observable and repeatable scientific means, it really doesn't matter. In fact nobody truly knows what sentience and consciousness are beyond the basic philosophical concept that's been around since the Ancient Greeks, because they can't be scientifically observed and measured. Contemporary analysis of the human brain and of computer technology doesn't, and currently can't, deal with "consciousness" and "sentience"; it only deals with reactions and the triggers of those reactions, internal and external. The overall point is that if an AI appears to have a human level of "sentience", that's all that matters. And the social and political consequences of this have the potential to be disastrous in the not-too-distant future.
@olleharstedt3750
@olleharstedt3750 Жыл бұрын
Yeah, jumping from intelligence to consciousness without a skip is a bit sloppy, philosophically.
@BrokenCurtain
@BrokenCurtain Жыл бұрын
I didn't expect the System Shock references.
@Finnatese
@Finnatese Жыл бұрын
I've always been quite adept with computers; I just picked them up quickly. And something I have always seen is that people who don't understand computers will overestimate what they can do. So often I have explained the limitations of a programme to someone older than me, and they get angry and say "well, why can't it do that?"
@SolarLingua
@SolarLingua Жыл бұрын
I asked ChatGPT once: "How to use the Ablative case in Russian?" and it gave me 10 highly detailed paragraphs about the exact use of that grammatical case. Mind you, Russian does not have an Ablative case, hence my amusement. I am terrified of AI.
@Diginegi
@Diginegi Жыл бұрын
It should never be called intelligence at all. Pattern matching fits the actual function much better. But then it's not sexy and wouldn't sell as well.
@TimeattackGD
@TimeattackGD Жыл бұрын
The thing is that at some point, whether AI is actually conscious or not will not matter. Even if AIs aren't conscious (and I believe they never will be), the fact that we would not be able to differentiate them from conscious beings would cause havoc in how we deal with them, regardless of whether we actually should. We will probably end up dealing with them as if they were conscious, the fact of the matter being completely irrelevant.
@sandropazdg8106
@sandropazdg8106 Жыл бұрын
Not really that complicated. If something performs a task and doesn't have consciousness, then it's a tool, and as such, if you have to deal with the AI in any capacity, you don't deal with the tool, you deal with the person handling it.
@jamessderby
@jamessderby Жыл бұрын
What makes you so certain that AI won't ever be conscious? I don't see how it won't.
@patatepowa
@patatepowa Жыл бұрын
Unless you believe consciousness is the result of something from outside our realm, I don't see how AI couldn't have consciousness, if it's nothing more than complicated electric signals in our brains.
@TimeattackGD
@TimeattackGD Жыл бұрын
@@jamessderby Imo AI could be conscious if we figure out why we are conscious, and then use that to develop consciousness. Otherwise it seems intuitively impossible for human-made technology to develop something from nature that we can't even comprehend. To me it seems more likely that we'll reach a point where humans and AI are indistinguishable from a consciousness perspective (just by continuing to improve AI as we do now) long before we ever figure out consciousness, so it won't even matter anyway.
@user-op8fg3ny3j
@user-op8fg3ny3j Жыл бұрын
@@TimeattackGD Yeah, and even if it's not conscious, that doesn't mean the AI won't falsely believe that it is. How many times have we as humans had false perceptions about ourselves?
@jumpmanhammerman1311
@jumpmanhammerman1311 Жыл бұрын
As a student in computer science and (former) “AI” enthusiast, thanks for this video because it is really necessary
@adrianthoroughgood1191
@adrianthoroughgood1191 Жыл бұрын
I enjoyed your use of audio and video from System Shock 2, because it is very cool and atmospheric, but I was outraged that after all that you didn't include SHODAN in your list of AIs!
@engelhaust
@engelhaust Жыл бұрын
The science fiction writer Ted Chiang calls artificial intelligence "applied statistics" because that's really all it is: collating data sets and making interpretations based on probability.
@SioxerNikita
@SioxerNikita Жыл бұрын
'ey, that's like humans
@Tyrichyrich
@Tyrichyrich Жыл бұрын
When you think about it, we work exactly how an AI works: inputs and outputs, never really knowing what everything means. Though I do agree that AI is being over-hyped.
@SzarkaFox
@SzarkaFox Жыл бұрын
By AI you mean machine learning in this video. Machine learning is basically just setting up some nodes that an algorithm later connects together. It's just statistics, really. You give the computer a cookie if it did well; if it didn't, you hit it with a stick. You repeat this process hundreds of thousands of times and you get a machine whose nodes are connected in a way that is hopefully mostly able to do the thing you taught it to do. No magic, just calculations that result in "most cookies given".
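The "cookie or stick" loop this comment describes can be sketched in a few lines of Python: a toy hill-climbing illustration, not any real ML framework, and the `train` function and its reward are invented for this example:

```python
import random

# Toy "cookie or stick" training: nudge a single weight at random and
# keep the change only if the reward (negative distance to target) improves.
def train(target, steps=10_000, seed=0):
    rng = random.Random(seed)
    weight = 0.0
    best_reward = -abs(weight - target)
    for _ in range(steps):
        candidate = weight + rng.uniform(-0.1, 0.1)
        reward = -abs(candidate - target)
        if reward > best_reward:            # cookie: keep the change
            weight, best_reward = candidate, reward
        # stick: otherwise discard the change and try again
    return weight

print(round(train(3.0), 2))  # converges very close to 3.0
```

Repeat the nudge-and-score step often enough and the "machine" ends up doing the thing you rewarded, without any notion of why.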
@Scillat-h4v
@Scillat-h4v Жыл бұрын
But if human intelligence is also just statistics, there's not much difference. Otherwise we assume that human intelligence has some "magic" in addition to statistics.
@SzarkaFox
@SzarkaFox Жыл бұрын
@@Scillat-h4v Not magic, but hundreds of thousands of years of evolution that, with enough trial and error, made a pile of meat capable of learning and of controlling a just-as-complex nervous system. My comment was about the "AI" terminology and what machine learning is in a nutshell; I never said, nor intentionally implied, that human intelligence is just statistics. Your argument is welcome regardless :)
@xravedogx
@xravedogx Жыл бұрын
As a computer scientist, I really appreciate the cat analogy. People treat machine learning like a magic Swiss Army knife, but the truth is that many processes are better left to traditional computing. Sure, you can slap "AI" on any problem and enjoy your massive compute resource usage and <100% accuracy, but in reality, machine learning has a limited number of legitimate use cases.
@menjolno
@menjolno Жыл бұрын
Can't wait for Adam to say that biology isn't real. What would literally be in the thumbnail: Expectation: (human beings). Reality: one is "God's creation", one is "a soup of atoms". Also, there are "biologists" who only took Biology 1 in a Catholic high school.
@SlyRoapa
@SlyRoapa Жыл бұрын
OK, so call it something like "Artificial Cleverness" instead then. Does it matter what we call it? It's still scary for its potential capacity to replace a lot of human jobs.
@rkvkydqf
@rkvkydqf Жыл бұрын
Look at the actual model at hand. "Generative AI" is really just a set of overgrown parrots that work just well enough to fool a person while still being brittle to real-world circumstances. ChatGPT hallucinates constantly, CLIP calls an apple with the label "iPod" an "Apple iPod", and Stable Diffusion barely understands how the pixels it sees relate to real-world geometry, much less language. It looks as if it has learned to do its job, but it's only a surface-level illusion. We need to educate people about this, since out-of-touch managers are already using it as an excuse to mistreat or replace real workers, regardless of the quality impairment.
@justalonelypoteto
@justalonelypoteto Жыл бұрын
This example is almost a cliché, but we replaced horses, and manual workers in many fields. What's so tragic about reducing the jobs an algorithm can do? Isn't it better if we don't waste so many lives on something a computer could do? I'm sure, as with every other time we have advanced as a species, that new (arguably more meaningful and better) opportunities will arise.
@romxxii
@romxxii Жыл бұрын
or call it by its actual names, Large Language Model, or Fucking Autocorrect.
@joeshmoe4207
@joeshmoe4207 Жыл бұрын
@@justalonelypoteto And we've seen some of the downstream effects of that, haven't we? The complexity of thought involved in many of the jobs machine learning is already poised to replace is probably above average. What do you think people will do when the education and intelligence needed to compete for whatever new jobs open up is well beyond the average? It's not a matter of replacing menial jobs; it's a matter of replacing the jobs easiest to automate, which tend to be jobs that require the least complexity of thought, or at least very predictable modes of thought.
@Bradley_UA
@Bradley_UA Жыл бұрын
@@justalonelypoteto Except not every country has the social welfare to afford it. In America, imagine someone daring to propose taxing the rich to support people left unemployed due to AI?
@alainx277
@alainx277 Жыл бұрын
I feel like you missed the mark on this one. In the intro you describe traditional machine-learning image classification models, which I don't think anyone argues are really intelligent. When you switch over to LLMs (which is why people are freaking out), you don't engage with the reasoning they are able to do (which is not copy-pasted from the training data) but only mention a thought experiment about knowing vs. following instructions, which in the end makes no difference to the outcome, so why should we care? I agree that slapping "AI" on every computer thing is just hype, but I think you and others have overcorrected in the other direction, misrepresenting the current challenges around LLMs and general intelligence.
@private755
@private755 Жыл бұрын
Oh man thank goodness. I always appreciate your videos that cut through the sales pitch bs that we constantly get forced fed
@aluisious
@aluisious Жыл бұрын
Think for yourself. This is only one guy's perspective. Read, read, read, read, read.
@miflofbierculles5117
@miflofbierculles5117 Жыл бұрын
This is a fruitless discussion to have; people will forever just use an arbitrary definition of what intelligence is and say that programs like GPT-4 are or aren't that. As long as we only use wishy-washy words with no real measurement behind them, you can move the goalposts wherever you want at any point. This definition battle is pointless, because if the LLM or the AI or whatever the fuck you want to call it is better than you at your job, it will still replace you, and it will still flood all social media sites with bots that are indistinguishable from real humans.
@kennethcraig9228
@kennethcraig9228 Жыл бұрын
Thank God someone finally broached this subject. I've known about the problems with the whole "AI reaching the singularity and achieving consciousness" narrative for over a decade, so it's depressing to realize people exist who actually take it seriously and others who take advantage of them via fearmongering. You're doing God's work, Adam.
@ikotsus2448
@ikotsus2448 Жыл бұрын
The singularity does not require consciousness.
@kennethcraig9228
@kennethcraig9228 Жыл бұрын
@@ikotsus2448 Indeed, but in the narrative I mentioned, they go hand in hand. That's just the way a lot of people view it.
@fauzirahman3285
@fauzirahman3285 Жыл бұрын
As someone who works in IT: most of the people in my field don't consider it "artificial intelligence". It's used more as a marketing term, and it is more of a complex algorithm in the form of machine learning. A useful tool, but we're still a long way away from true artificial intelligence.
@theminormiracle
@theminormiracle Жыл бұрын
The problem with the Chinese Room Experiment is that if you apply the same standard to the dumb meat fibers and cells that just send and react to electrical and chemical signals in the brain, what you end up with is the idea that *people* can't actually understand Chinese, because no part of their brain when you zoom in far enough to examine its physical operations "understands" Chinese. And yet people "feel" like they understand. They can't look inside their own wetware and trace the origins of their understanding any more than a camera can look inside and take pictures of its own lenses, so a feeling is all they have. Rather than a gotcha that shows AI isn't here yet, all the CRE shows is that its framework fails to capture how human intelligence could possibly arise out of the three pounds of meat sitting inside your skull. It doesn't prove or disprove artificial intelligence one way or the other.
@user-ut6el9ir7s
@user-ut6el9ir7s Жыл бұрын
I know this is beyond the subject of this video, but it would be nice to have a video about Bucharest and how Ceaușescu demolished an entire neighborhood to build his megalomaniac palace. This is kind of related to the video about "when urban planning tries to destroy an entire city", because that's pretty much what happened to Bucharest after 1977.
@CoolExcite
@CoolExcite Жыл бұрын
4:25 The funniest part is that finding the optimal path to a destination is a textbook problem you would learn in a university AI course; the tech bros have just co-opted the term AI so much that it's meaningless now.
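For anyone curious, the textbook search in question looks roughly like this: a minimal A* shortest-path on a 2D grid with a Manhattan-distance heuristic (the `astar` function and the example grid are made up for illustration):

```python
import heapq

# Textbook A* shortest path on a 2D grid (0 = free, 1 = wall),
# with Manhattan distance as the admissible heuristic.
def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]    # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                     # length of the optimal path
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                          # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # → 6
```

No learning involved, just a priority queue and a heuristic: "good old-fashioned AI", as the textbooks call it.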
@nolifeorname5731
@nolifeorname5731 Жыл бұрын
I'll give you an A* for this answer
@albevanhanoy
@albevanhanoy Жыл бұрын
Hey Adam have you seen that AI is training more and more on AI-generated data, which cuts it off from learning new information, and enshrines some typical AI-made errors without fixing them? Literally inbreeding x) .
@OctyabrAprelya
@OctyabrAprelya Жыл бұрын
That reminds me of Nexpo's "The Disturbing Art of AI", where he talks about prompt-generated images. Long story short: in those "AIs" you give a prompt, let's say "a black cat", and the deep learning algorithms pull from a sea of pictures of "black cats" and generate one from there. Very much like a drawing artist would pull from their memories and experiences of what a cat is and draw one, or like normal software would pull an image tagged as one. But if you ask them for something nonexistent, like "a picture of Loab", instead of the artist asking back "what the fak is that?" or the normal software throwing a runtime error, it "generates something", and with enough of that output fed back in, it accumulates enough data to pull from every time the same prompt is input.
@albevanhanoy
@albevanhanoy Жыл бұрын
@@OctyabrAprelya I would love to see a game of AI telestrations. An AI generate an image, then another describes this image in a sentence, then you input this sentence as a prompt to generate an image, and you keep going and see what kind of cursedly bizarre thing you arrive at.
@XMysticHerox
@XMysticHerox Жыл бұрын
We do this in medical CS quite a bit. Let AI generate tumor segmentations and related images for instance which is then used to train another AI to segment tumors. It is quite useful. And ultimately still based on real segmentations.
@captaindeabo8206
@captaindeabo8206 Жыл бұрын
Yeah, that's the general problem with backpropagation training, called overfitting.
@JCRobbinsGuitar
@JCRobbinsGuitar Жыл бұрын
Finally, truth. Thank you!
@WackyJack322
@WackyJack322 Жыл бұрын
I think Mass Effect did a good job portraying the difference between Virtual Intelligence (VI) and Artificial Intelligence (AI). VIs are just programs trained to recognize patterns and analyze data, which is probably a better description of what we have now. AIs like EDI and the Geth are clearly something else entirely, since they are able to philosophize and have a sense of self.
@UchihaKat
@UchihaKat Жыл бұрын
I think one of the most demonstrative examples I've seen of how ChatGPT and the like are just word predictors, not AIs, is when someone tried to apply it to a video game I play. Basically, they asked it to build a character for them, and kept trying to iterate on that to "teach" it to do a better job. The results were fascinating. The predictor clearly knew a lot of the words commonly used in the game, and commonly used together in builds: feats, spells, class, level, etc. But what it spat out was nonsense. It would apply words that don't make sense, have the wrong number of feats, completely make spells up, etc. He must have iterated 6 or 7 times, and then even tried other build requests, and it never got better. Sure, on the surface, with a lot of prompting, it began to look more like a build in format, but it was complete gibberish. Because it's not an AI that understands the game or the other builds people have put out. It's just word association statistics.
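"Word association statistics" is easy to reproduce in miniature: a toy bigram model that counts which word follows which and always emits the most frequent follower. This is a deliberately crude sketch (the `build_model`/`generate` names and the corpus are invented for this example), nothing like a real LLM, but it shows the same failure mode of fluent-looking output without understanding:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then always emit the most frequent follower.
def build_model(corpus):
    follows = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(model, word, length=4):
    out = [word]
    for _ in range(length):
        if word not in model:
            break                       # dead end: no known follower
        word = model[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat the cat sat on the rug"
model = build_model(corpus)
print(generate(model, "the"))  # → "the cat sat on the"
```

The output is grammatical-looking purely because the statistics of the corpus are; the model has no idea what a cat, a mat, or a game build is.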
@titan133760
@titan133760 Жыл бұрын
In one of Mentour Pilot's videos about A.I. and commercial aviation on his Mentour Now channel, he interviewed Marco Yammine, an expert on the subject of A.I. Yammine summed up A.I., at least in its current state, as a case of "fake it 'till you make it" on steroids.
@Rezzcom
@Rezzcom Жыл бұрын
Artificial Intelligence is what I got my undergrad degree in, and I just have to say thank you for making this video. This is exactly how I feel about techbro Elon Musk soyfacers getting overhyped on AI.
@TheNN
@TheNN Жыл бұрын
"Artificial Intelligence Isn't Real" ...Isn't that exactly what an AI trying to pass itself off as human *would* say?
@elvingearmasterirma7241
@elvingearmasterirma7241 Жыл бұрын
I don't blame them. Mainly because if I were a sentient AI, I'd do everything to avoid paying taxes or partaking in our modern, profit-driven, consumerist society.
@modelmajorpita
@modelmajorpita Жыл бұрын
One of the biggest examples of how so-called AI doesn't actually understand what it is writing comes from the people asking it for citations. The program doesn't know what a citation is, it doesn't have the tools to look up citations, and it doesn't have the ability to judge if a citation is accurate. All the program "knows" is how to format text to look like citations based on the data set it was given, so it randomly generates text formatted to look like a citation. It's not lying it's just a chatbot, not a research tool.
@seanscott1308
@seanscott1308 Жыл бұрын
ChatGPT passes a number of questions that require deep contextual understanding. Supposedly a stronger model, with a better conceptualization of "citations", would refuse the request or clarify that it can't give 100% accurate citations.
@laurentiuvladutmanea3622
@laurentiuvladutmanea3622 Жыл бұрын
@@seanscott1308 No. A stronger model will still make stuff up, because that is all they do. Also, no, it does not do what you said. It just generates things that are similar to other answers.
@seanscott1308
@seanscott1308 Жыл бұрын
@@laurentiuvladutmanea3622 Ask GPT-3 "I have a book, 9 eggs, a laptop, a bottle, and a nail. Please tell me how to stack them on top of each other in a stable manner". It will incorrectly answer "Here is one potential way to stack the objects. Place the bottle on a flat surface. Carefully balance the nail on top of the bottle. Place the eggs on top of the nail, making sure they are balanced and not tilted to one side..." Clearly GPT-3 lacks understanding. But GPT-4 will answer "One possible way to stack them is to 1. Place the book on a flat surface. It will act as the base of support. 2. Arrange the eggs in a 3x3 pattern on the book, leaving some space in between. The eggs will form a second layer and distribute the weight evenly..." This, to me, demonstrates a degree of conceptual understanding. How to stack eggs is not in its data set; rather, it has generalized concepts and applied that knowledge to new scenarios. Maybe by sheer coincidence I'm wrong and this exact kind of puzzle is in its data set? But feel free to play with this and plug in other objects. GPT-4 has a solid grasp of the shapes and sturdiness of various objects, and how they interact when stacked. This cannot be done without at least some amount of conceptual understanding.
@modelmajorpita
@modelmajorpita Жыл бұрын
@@seanscott1308 You're not describing contextual understanding; you're talking about human programmers going in and putting in manually generated responses that override the model, like they have done for when you ask it to justify saying slurs. The model refuses requests because humans programmed those requests into a list of things it refuses. It cannot conceptualize anything.
@seanscott1308
@seanscott1308 Жыл бұрын
@@modelmajorpita Oops, sorry about the previous (now deleted) response. I thought you were Lauren, the other commenter. That's fair, if in fact those manual decisions were made. My point was that presumably the AI could come to this behavior naturally, given a strong enough model. We would think this because 1. we have examples of the AI conceptualizing other things (read my previous comment to Lauren; there are papers that used a pre-censored version of GPT-4 and got even better results, i.e. no manual override), and 2. if GPT-4 had a strong enough understanding of citations, the model would recognize that any citation it gives would be a "bad guess", and the only good guess is to clarify the limits of its abilities.
@Scillat-h4v
@Scillat-h4v Жыл бұрын
In the Chinese Room thought experiment, the man inside the room doesn't understand Chinese, but the room with that man *as a system* does. Like a single neuron in a human's brain isn't intelligent, but the human, as a system composed of neurons and other cells, is intelligent.
@laurentiuvladutmanea3622
@laurentiuvladutmanea3622 Жыл бұрын
But the room, the man, and all the parts of the system don't have the connections that would allow a shared mind to understand anything. The system born out of this would have no mind of its own. It would be mindless and lack understanding of anything.
@Bolidoo
@Bolidoo Жыл бұрын
@@laurentiuvladutmanea3622 But what are the connections that allow a shared mind to understand stuff? I feel like no one knows that. Furthermore, what do you mean by understanding, specifically? If we created an AGI so powerful that it makes important breakthroughs in physics and maths, wouldn't you say it has an understanding of its work? I don't know; it seems to me that the Chinese Room thought experiment is not really rigorous. It plays with words that are not well defined, like "understanding", and draws conclusions with wishy-washy logic. I am certainly no expert on the matter and would be very curious to hear your thoughts.
@OrionCanning
@OrionCanning Жыл бұрын
My counter thought experiment to the Chinese Room: what if a person is sealed in a room that says "AI computer" on the outside, and they can only communicate through little notes, and they keep writing "Help, I'm a person trapped in a room, I'm not a computer!" But everyone outside the room has watched this video, and is really tired of tech bros, and doesn't believe him, laughing and saying "Ha, the stupid AI thinks it's a human, it isn't intelligent at all." I'll call it "The Something Room".
@alfredandersson875
@alfredandersson875 Жыл бұрын
How is that at all a counter?
@OrionCanning
@OrionCanning Жыл бұрын
@@alfredandersson875 I was kind of joking, but I do think there is a serious problem with the Chinese Room, which is that it imagines a machine able to do a complex task without understanding, in order to argue it's unintelligent. It doesn't really consider the question of consciousness, or seem to care. But it does so by imagining a human in a room, a thing we know is intelligent and conscious. And what it points out to me is that we can't peer into another living thing's brain and see what its experience is, just like we can't know how an algorithm as complex as an LLM is experiencing itself or reality. Our best argument for our own consciousness is still "I think, therefore I am", which is just to say the proof we are conscious is that we experience consciousness, which only works internally. Our attempts to empirically measure intelligence and consciousness haven't worked very well; combined with our hubris and confirmation bias, they led to eugenics and scientific racism, which went on to inspire the Holocaust. The IQ test is full of racial and cultural bias and mostly tests for how many IQ prep classes you took. Years ago a scientific consensus formed that animals are conscious, yet we still claim they are not in order to justify mass slaughter and inhumane treatment. So all this is to say: what happens if we are so hardened against the possibility of AI consciousness that if one did manifest in an algorithm and try to communicate with us, we would be blind to it from confirmation bias, and rationalize ways its consciousness or intelligence does not count and does not make it worthy of moral consideration? What a tragedy that would be for the fate of that AI consciousness.
@theultimatereductionist7592
@theultimatereductionist7592 Жыл бұрын
I get the feeling that, when this video is over, that cat is NOT going on a diet.
@mittfh
@mittfh Жыл бұрын
Current "AI" is usually just highly complex machine learning: as it's being "trained", it's fed a bunch of data, attempts to deduce relationships based on its initial algorithm, then uses some method of scoring the outputs (either by humans or by another algorithm), with the highest scores used to tweak its own algorithm, eventually to the point where the original programmers aren't entirely sure how it works. Note this isn't just systems badged as AI, but also things like social media recommendation algorithms. To be fair, though, that's similar to how a lot of pre-school learning happens: a youngster may see a bunch of different breeds of dog which all look radically different from each other (compare a pug, a dachshund, a bulldog and a retriever), yet we individually work out enough common features both to identify breeds we've never seen before as dogs and to differentiate them from other creatures with four legs and a tail. But if you taught an algorithm to recognise dogs and gave it an image of a dog, would it be able to tell you how it knows it's a dog? Similarly with inanimate objects, e.g. chairs / stools / tables: being able both to identify them and to tell them apart. Aside from those questions, there are also more ethical ones, e.g. chatbots not being able to research their answers to check the veracity of the information they're dishing out, and potentially giving out biased information because a large part of their training data was taken from social media sites and blogs; and image generators extracting and reusing portions of copyrighted images (as the programmers didn't bother to check the licensing on the images they fed in, hoping the resultant works would be sufficiently different from the source images to make it impossible to trace whose works they'd "borrowed").
The real "fun" will come when someone applies similar algorithms to music composition, given how litigious record companies are (and even with PD scores, almost all recordings will be copyrighted, so unless it's fed MIDIs with a decent soundfont...)
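The train/score/tweak loop this comment describes can be shown with the simplest possible model: a perceptron that nudges its weights whenever its output scores as wrong. This is a toy sketch with made-up data (the function and the `data` set are invented for illustration); real systems do essentially this with vastly more parameters:

```python
# Minimal "train, score, tweak" loop: a perceptron adjusts its weights
# whenever its guess disagrees with the label.
def train_perceptron(data, epochs=20):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), label in data:        # label is 1 or -1
            guess = 1 if w0 * x0 + w1 * x1 + b > 0 else -1
            if guess != label:              # scored as wrong: tweak weights
                w0 += label * x0
                w1 += label * x1
                b += label
    return w0, w1, b

# Linearly separable toy data: roughly "is x0 + x1 large?"
data = [((0, 0), -1), ((1, 0), -1), ((0, 1), -1),
        ((2, 2), 1), ((3, 1), 1), ((1, 3), 1)]
w0, w1, b = train_perceptron(data)
predict = lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else -1
print(predict(4, 4), predict(0, 0))  # → 1 -1
```

And just like the dog-recognition example above, the trained model can't tell you *why* a point is on one side of the line; all it has is three tweaked numbers.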
@saminloes6437
@saminloes6437 Жыл бұрын
The problem is, AI is undoubtedly useful, and calling it bullshit hype will allow corporations to leverage it to become more and more powerful while most people are too focused on the semantics of what "intelligence" counts as.
@romainbluche9722
@romainbluche9722 Жыл бұрын
THANK YOU ADAM FOR MAKING A VIDEO ABOUT THIS. I'm actually grateful.