AI Doesn't Need To Be Self-Aware To Be Dangerous

32,868 views

SciShow

1 day ago

This episode was produced in partnership with the Future of Life Institute. If you want to better understand how AI might be dangerous, you can read their recent post explaining why we can't just 'switch off' a dangerous AI at futureoflife.o.... For more resources on AI and to learn more about the Future of Life Institute, visit their website at futureoflife.o....
In the movies, artificial intelligence always takes over humanity once it gains consciousness. But even without getting into sentience, AI is already capable of influencing our lives in a lot of ways.
Hosted by: Stefan Chin (he/him)
----------
Support us for $8/month on Patreon and keep SciShow going!
/ scishow
Or support us directly: complexly.com/...
Join our SciShow email list to get the latest news and highlights:
mailchi.mp/sci...
----------
Huge thanks go to the following Patreon supporters for helping us keep SciShow free for everyone forever: Toyas Dhake, Spilmann Reed, Gizmo, Garrett Galloway, Friso, DrakoEsper , Lyndsay Brown, Jeremy Mattern, Jaap Westera, Jeffrey Mckishen, Matt Curls, Eric Jensen, Chris Mackey, Adam Brainard, Piya Shedden, Alex Hackman, Kevin Knupp, Chris Peters, Kevin Bealer, Jason A Saslow
----------
Looking for SciShow elsewhere on the internet?
SciShow Tangents Podcast: scishow-tangen...
TikTok: / scishow
Twitter: / scishow
Instagram: / thescishow
Facebook: / scishow
#SciShow #science #education #learning #complexly
----------
Sources:
docs.google.co...

Comments: 365
@ziqi92
@ziqi92 2 сағат бұрын
From a presentation at IBM in 1979: “A computer can never be held accountable. Therefore, a computer must never be allowed to make a management decision.”
@vulcwen
@vulcwen Сағат бұрын
Anyone trying to get away with something bad: Interesting...
@acters124
@acters124 Сағат бұрын
@@vulcwen and all the biggest players in each market will wonder how to make it so only they can get away with it.
@goldie819
@goldie819 9 минут бұрын
Can't be held accountable, you say? Time to train it to replicate all my biases on a grand scale
@robertfindley921
@robertfindley921 5 сағат бұрын
I tried to open my front door, but my door camera said "I'm sorry Robert, but I can't do that." in a disturbing, yet calm voice.
@quantumfusion889
@quantumfusion889 5 сағат бұрын
Hope you are referencing it
@FirstOptions-u4n
@FirstOptions-u4n 4 сағат бұрын
I tried telling the government about my freedom and their obligation to protect it and they said "I'm sorry, but I can't do that." In a disturbing, yet calm voice.
@Ms.Pronounced_Name
@Ms.Pronounced_Name 4 сағат бұрын
At least Robert is your name, I'm not sure if it would be better or worse for it to lock you out because it identified you wrong
@theGoogol
@theGoogol Сағат бұрын
Either I, Robot or 2001: A Space Odyssey
@jamesschmitt9750
@jamesschmitt9750 Сағат бұрын
Lol did the camera lens light up a sinister red color?
@Skibbityboo0580
@Skibbityboo0580 4 сағат бұрын
Reminds me of a scifi book called "Blindsight". It's about an alien race that is hyper-intelligent, strong, and fast, but isn't conscious. Fascinating book.
@michaelteegarden4116
@michaelteegarden4116 3 сағат бұрын
And a scary shipboard-AI that communicated through its vampire puppet! :O
@idontwantahandlethough
@idontwantahandlethough 14 минут бұрын
Thank you for the rec, that sounds dope!
@Rorschach1024
@Rorschach1024 5 сағат бұрын
In fact a non-self aware AI that has too much control may be even MORE dangerous.
@Bluepizza1684
@Bluepizza1684 5 сағат бұрын
Universal paperclips
@Blame_Gary
@Blame_Gary 5 сағат бұрын
I was thinking the same thing.
@drsunshineaod2023
@drsunshineaod2023 4 сағат бұрын
Exactly! One of the interesting ideas I like to play with is the idea that we may *need* AI to be self-aware in order for it to be aligned, not the other way around, the way AI safety / XR people have been going about it.
@tenaciousgamer6892
@tenaciousgamer6892 3 сағат бұрын
This is basically the plot of Horizon Zero Dawn.
@petrkinkal1509
@petrkinkal1509 3 сағат бұрын
@drsunshineaod2023 If you have ASI that just follows prompts, you are one bad prompt away from going extinct (or worse). If you have ASI that is self-aware, then you just need it to like you (easier said than done). I find the second case to have better chances of success; some people may find the first case more likely to go well.
@alexchong1757
@alexchong1757 Сағат бұрын
"We already live in a world of flying robots killing people. I don't worry about how powerful the machines are, I worry about who the machines give power to." -Randall Munroe, xkcd author and former NASA roboticist
@jaegerolfa
@jaegerolfa 5 сағат бұрын
Don’t worry SciShow, this won’t keep me up at night, I have insomnia.
@VariantAEC
@VariantAEC 4 сағат бұрын
Once your insomnia is treated, you should know that this video on the day it was posted (today) is already many years out-of-date. The idea that we don't know what AI is doing or how it makes decisions is much less of an issue because smart programmers now tell the AI to give us model updates as the AI makes decisions about data it is provided. We can see this as changes to their weights and biases in their tensor array. Hope you can get some relief from your insomnia.
@anyascelticcreations
@anyascelticcreations 4 сағат бұрын
This comment should have a lot more likes. 😅
@jaegerolfa
@jaegerolfa 3 сағат бұрын
@@VariantAEC it was a joke
@jaegerolfa
@jaegerolfa 3 сағат бұрын
@@anyascelticcreations joke went over the first person’s head.
@zoeycasa6497
@zoeycasa6497 Сағат бұрын
Excellent. I'm not the only one.
@ultimateman55
@ultimateman55 3 сағат бұрын
More bad news: We don't understand consciousness nor do we understand how we could even, in principle, determine if an AI actually were conscious or not.
@dakota-sessions
@dakota-sessions 3 сағат бұрын
If consciousness is just self-awareness, then they already have it. If consciousness needs a soul, then we're talking supernatural nonsense.
@IceMetalPunk
@IceMetalPunk 3 сағат бұрын
Yep. It's the P-Zombie Problem: anything that behaves like it's conscious is indistinguishable from something that is actually conscious. While we don't have a good definition for consciousness, one proposed definition I've always liked goes as follows: consciousness is not a Boolean true/false state. Instead it is a continuum, measured by the ability to integrate information in order to synthesize new information. The more you can accurately integrate together, the more conscious you are. (And being unconscious, therefore, still fits this definition as you're not capable of integrating any information while you're unconscious.) If we use that definition, current AIs are definitely conscious, but less conscious than the average human.
@ultimateman55
@ultimateman55 Сағат бұрын
@ Less conscious than the average human but definitely more conscious than a good amount.😆
@tonechild5929
@tonechild5929 4 сағат бұрын
There's a book called "Weapons of Math Destruction" that highlights a lot of dangers of non-self-aware AI. And it's from 2017!
@jmsether
@jmsether 3 сағат бұрын
The scarier part is that the concepts in that book are from the 90s or earlier.
@doilyhead
@doilyhead Сағат бұрын
It's not the math, it's the humans who misuse it.
@Thatonelonewolf928
@Thatonelonewolf928 Сағат бұрын
To be realistic, you should never expect a car to stop when you're crossing a crosswalk. Always be aware of your surroundings.
@DoctorX17
@DoctorX17 4 сағат бұрын
12:34 the comment about navigation being thrown off made me think of the Star Trek: Voyager episode Dreadnought [S2E17] - a modified autonomous guided missile is flung across the Galaxy, and thinks it’s still back home, so it selects a new target… and the AI is so fixed on the mission without proper data that it refuses to stop, thinking it’s still on course. It doesn’t need to be evil, it just needs to be hell-bent on doing what it was told without understanding to be dangerous.
@LadyMoonweb
@LadyMoonweb 2 сағат бұрын
The entire thing should be called 'The Djinn Problem', since if a request can be misinterpreted or twisted into a terrible form you can be sure that it will be at some point. The AI coder's job is to teach the AI how to recognise such situations and to move to the default setting. Default means 'without error' so the default for a self-driving car is 'brake to a stop and apply hazard lights'. Whenever this happens the coder knows something wasn't regular about the situation and they can look at it. Much better than the 'hope I'm never wrong and apply the accelerator' approach that has seen several people injured already. The same considerations should be applied to all AI structures - teach the AI that it can be wrong and to say 'I don't know' when that is the appropriate response.
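A rough sketch of that "default to safe when unsure" idea in Python (the threshold, class labels, and action names below are invented for illustration, not taken from any real self-driving stack):

```python
# Toy illustration of "when perception is unsure, fall back to the safe default".
# The confidence threshold, class labels, and action names are all made up here.

SAFE_DEFAULT = "brake_to_stop_and_hazard_lights"

def choose_action(detections, confidence_threshold=0.9):
    """Pick a driving action; fall back to the safe default whenever the
    perception system is not confident about what it is seeing."""
    for label, confidence in detections:
        if label == "unknown" or confidence < confidence_threshold:
            # Something wasn't recognised reliably: stop and flag it for review.
            return SAFE_DEFAULT
    return "continue_on_planned_path"

# An object the classifier can't pin down triggers the safe default;
# a confidently classified scene lets the car continue.
print(choose_action([("vehicle", 0.98), ("unknown", 0.41)]))      # brake_to_stop_and_hazard_lights
print(choose_action([("vehicle", 0.98), ("pedestrian", 0.95)]))   # continue_on_planned_path
```

The point is only that "I don't know" maps to a conservative action rather than to "keep accelerating and hope".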
@TreesPlease42
@TreesPlease42 5 сағат бұрын
This is what I've been saying! AI doesn't need a soul to look at and understand the world. It's like expecting a calculator to have feelings about math. We must tread carefully when anthropomorphizing technology
@thekaxmax
@thekaxmax 3 сағат бұрын
And, when mentioning souls, presenting evidence they exist.
@forestxander
@forestxander 3 сағат бұрын
@@thekaxmax My thoughts exactly.
@thatcorpse
@thatcorpse 2 сағат бұрын
Just consider it the massive amount of electrochemical processes. No need to be exclusionary to other's beliefs regarding science unless they're trying to legislate based on it. ​@@thekaxmax
@zogar8526
@zogar8526 Сағат бұрын
And yet you still did what you cautioned against. Current AI can't understand anything. The way it works stops it from ever being able to. What we call AI right now will never be conscious in any way.
@joanhoffman3702
@joanhoffman3702 4 сағат бұрын
As the Doctor said, “Computers are intelligent idiots. They’ll do exactly what you tell them to do, even if it’s to kill you.”
@FrenziousVega
@FrenziousVega 23 минут бұрын
"Measure how many feet tall that man is" *AI starts to amputate 3 mens feet to measure the 4th man*
@NirvanaFan5000
@NirvanaFan5000 2 сағат бұрын
AI is like a magnifying lens for our culture. both the negatives and positives are magnified by it.
@kirksdragon
@kirksdragon 43 минут бұрын
Malice is unnecessary when Ignorance is more likely and just as dangerous.
@KariGrafton
@KariGrafton 2 сағат бұрын
The fact that AI can solve things in ways we've never thought of CAN be a good thing, when it doesn't go catastrophically wrong. That's part of why it's so fascinating. Currently developing a predictive model and you'd better believe I'm gonna test and retest that thing six ways to Sunday.
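For a sense of what "test and retest" often looks like in practice, here is a minimal sketch using scikit-learn's k-fold cross-validation; the synthetic dataset and logistic regression model are placeholders, not the commenter's actual project:

```python
# Minimal sketch: evaluate a predictive model on several different train/test splits,
# so one lucky split can't hide a model that generalizes poorly.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)  # stand-in data
model = LogisticRegression(max_iter=1000)

scores = cross_val_score(model, X, y, cv=5)  # five folds, five held-out evaluations
print("Accuracy per fold:", scores.round(3))
print("Mean accuracy:", round(scores.mean(), 3))
```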
@GSBarlev
@GSBarlev 5 сағат бұрын
Just to add to your list of sources: NIST's Generative AI Risk Management Profile (600.1) is *really* accessible and comprehensive.
@furyking380
@furyking380 5 сағат бұрын
Hey! Humans also don't need to be self-aware to be dangerous!
@dakota-sessions
@dakota-sessions 3 сағат бұрын
Ah yes... they will be just like us when we understand how they work and they stop doing things that don't make sense to others. Because we have perfect grasp of how our brain works, and people never do odd and unpredictable things. Wait.... What?!
@sudstahgaming
@sudstahgaming 5 сағат бұрын
AI needs to be audited, both on the data it uses and on whether it's biased or not.
@emmakai2243
@emmakai2243 5 сағат бұрын
That's a fundamental of open software, but each corporation interprets it a little differently. DeepSeek's current apparent openness, combined with its performance for the money, is why it's shaking the industry.
@AndreVanKammen
@AndreVanKammen 4 сағат бұрын
@@emmakai2243 True, but I've downloaded the model with 32 billion parameters. These parameters represent a lot of knowledge, and finding out if it is biased in some way takes a lot of time. Some things might be obvious, but some stuff might only come up in longer sessions because it is a black box, as stated in the video. (Actually, we could probably trace all the weights used for a given conclusion and check how much they contributed to the result, but we would end up with tens or hundreds of millions of numbers, which gets us nowhere.) So although we have the model, we still don't know how all those numbers came to be, which still makes it closed source. Like having an app without the code that made the app.
@carlopton
@carlopton 3 сағат бұрын
You have been describing the Genie and the Three Wishes problem. The Genie can interpret your wish in ways you would not expect. Fascinating coincidence.
@BlackReaper0
@BlackReaper0 5 сағат бұрын
Paperclip machine!
@SamBrownBaudot
@SamBrownBaudot 5 сағат бұрын
I tried posting the link to the "Universal Paperclips" game. The comment didn't just get buried, it appears to have been altogether deleted.
@juandediosreyes8526
@juandediosreyes8526 4 сағат бұрын
@@SamBrownBaudot The YouTube algorithm doesn't want you to warn us about its plan.
@alexsiemers7898
@alexsiemers7898 3 сағат бұрын
@@SamBrownBaudot only certain links are allowed under scishow videos (or any YT video potentially) to get rid of any and all potential phishing scams/links
@yuvalne
@yuvalne 5 сағат бұрын
the fact we have a bunch of companies with the explicit goal of having AGI when AI safety remains unsolved tells you all you need to know about those companies.
@VariantAEC
@VariantAEC 4 сағат бұрын
Annual gross income? Maybe explain why this is an issue. Perhaps how the AI is told to make money at any cost. You can edit your comment to add "increase" before AGI so more people have the context to understand what you are saying or maybe type out annual gross income so people don't have to guess.
@galladeguy123
@galladeguy123 3 сағат бұрын
@@VariantAEC AGI stands for artificial general intelligence.
@VariantAEC
@VariantAEC 3 сағат бұрын
@@galladeguy123 Ok. So how is this bad? Most of the research and development that makes AI "safer" has already been done. It was done and widely spread about 5 years before this video was published today.
@IceMetalPunk
@IceMetalPunk 3 сағат бұрын
It could be argued that generality itself confers alignment.
@screetchycello
@screetchycello 3 сағат бұрын
"AI safety" is a made up thing lol
@tf5pZ9H5vcAdBp
@tf5pZ9H5vcAdBp 5 сағат бұрын
6:03 The fact that she was over 350 ft from the intersection and that the backup driver wasn't paying attention should be mentioned.
@idontwantahandlethough
@idontwantahandlethough 3 минут бұрын
I don't really care that she wasn't at a crosswalk. If an automated system cannot handle that, it should not be allowed to drive. I do care that the backup driver wasn't paying attention, given that I didn't even know there was a backup driver. I guess that's... slightly better..?
@boydstephensmithjr
@boydstephensmithjr 4 сағат бұрын
IBM said it best: "A Computer Can Never Be Held Accountable Therefore A Computer Must Never Make A Management Decision". AI must be removed from these decision loops, or at least never be the final decider. I'm sorry, you have to have a responsible party if you want to move that multi-ton metal death machine you call a vehicle.
@IceMetalPunk
@IceMetalPunk 3 сағат бұрын
I think that just leads to another question: once AI models understand enough that we can ask them their intentions, just as we ask humans, then why *can't* such a model be held accountable? When someone is on trial, we can't look into their brain and see what they were thinking at the moment of the crime. All we can do is look at external evidence, ask questions, and gauge responses.
@Timeskipper-g2n
@Timeskipper-g2n 2 сағат бұрын
@@IceMetalPunk Well... because you can't put an AI in jail or punish it. You can hypothetically destroy the system it's run on, but it can just as easily be run on a different system. AI doesn't really have anything to lose, so you can't dissuade it from taking certain courses of action, or prevent it from doing so in the future.
@IceMetalPunk
@IceMetalPunk 2 сағат бұрын
@Timeskipper-g2n Punishment is simply about emotions, so a system which has emotions can be punished. As for jail, that's just about keeping someone away from others; a ban on the AI's use would be the same. Obviously it's not 100% enforceable, but people also escape from prison, or find loopholes to get out early, etc. all the time. So it's at least on par.
@boydstephensmithjr
@boydstephensmithjr 2 сағат бұрын
@@IceMetalPunk How do we hold persons accountable? Negative reinforcement, positive punishment, and rehabilitation. If you can answer how to reinforce, punish, and rehabilitate a computer, then maybe we can start considering how to hold them accountable, and only after we've decided accountability can we let them potentially make decisions. We need better ways to make incorporated entities accountable, too.
@allwet66
@allwet66 2 сағат бұрын
You missed the most important part: "Never Be Held Accountable" is the actual goal.
@ericjome7284
@ericjome7284 2 сағат бұрын
A person can be a bad actor or make a mistake. Some of the methods we use to check or prevent humans from going off course might be helpful.
@cmerr2
@cmerr2 3 сағат бұрын
I mean that's great - but unless there's a proposed solution for people the choice is 'be scared' or 'don't be scared' - either way, this is happening. Up to and including autonomous lethal weapons.
@geoff5623
@geoff5623 2 сағат бұрын
IIRC, when Uber killed the pedestrian, they had deliberately dialed down the AI's sense of caution when it had trouble conclusively identifying an object, which caused it to not slow or stop. Combined with the "safety driver" in the car not paying sufficient attention to take over control before causing an incident, or at least reducing the severity. Another problem is that when autonomous driving systems have had trouble identifying an object, some have not recognized it as _the same object_ each time it gets reclassified, so the car has more trouble determining how it should react - such as recognizing that it's a pedestrian attempting to cross the road and not a bunch of objects just beside the road. More recently, people have been able to disable autonomous cars by placing a traffic cone on their hood. Programming these cars to ignore the cone and continue driving would have terrifying consequences, though. Autonomous cars have caused traffic chaos when they shut down for safety, but it's necessary for anyone to be able to intervene, when possible and safe, to prevent the AI from causing more harm.
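A toy sketch of the "same object, stable identity" idea mentioned above: match detections to existing tracks by position across frames, so the track ID survives even when the classifier's label keeps flipping (the classes, distances, and thresholds here are invented for illustration):

```python
# Toy tracker: keep one stable track ID for an object across frames, even when
# the classifier's label for it changes. All names and thresholds are invented.
import math

class Track:
    def __init__(self, track_id, position, label):
        self.id = track_id
        self.position = position
        self.label_history = [label]

def update_tracks(tracks, detections, max_match_dist=2.0):
    """Match each detection to the nearest existing track by position;
    otherwise start a new track. Labels may flip, identities do not."""
    for pos, label in detections:
        nearest = min(tracks, key=lambda t: math.dist(t.position, pos), default=None)
        if nearest is not None and math.dist(nearest.position, pos) <= max_match_dist:
            nearest.position = pos
            nearest.label_history.append(label)  # reclassified, but still the same object
        else:
            tracks.append(Track(len(tracks), pos, label))
    return tracks

tracks = []
update_tracks(tracks, [((10.0, 0.0), "unknown")])
update_tracks(tracks, [((9.2, 0.1), "bicycle")])
update_tracks(tracks, [((8.5, 0.2), "pedestrian")])
print(tracks[0].id, tracks[0].label_history)  # 0 ['unknown', 'bicycle', 'pedestrian']
```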
@Digiflower5
@Digiflower5 Сағат бұрын
AI is a great starting point; never assume it's right.
@SuperRicky1974
@SuperRicky1974 3 сағат бұрын
I agree that there is a lot to be concerned about even fearful of with AI development going so fast. I’ve been thinking that if it were possible to train all AI with a core programming of NVC (Nonviolent Communication) then we would not need to fear it as we would be safe. Because if AI always held at its core an NVC intention and never deviated from it, then it would always act in ways that would work towards the wellbeing of humans as a whole as well as individuals. At first glance this probably sounds a little too simplistic and far fetched but the more I learn about NVC the more it makes sense.
@thekaxmax
@thekaxmax 3 сағат бұрын
People would have to agree on what communication is non-violent, and people like Elon Musk and incels exist.
@Jornandreja
@Jornandreja 5 сағат бұрын
Large language models aren't really AI. They are accelerating the rate of decision-making, based on the information people are inputting and "training" the model. The greatest dangers of LLMs and AI will always be the intentions and incompetence of the people who are building them. LLMs and AI magnify and accelerate the faults of humans. Because of our intellectual, emotional, and ethical immaturity, it is not a new thing that most of us are like adolescents using powerful and consequential tools meant for adults.
@IceMetalPunk
@IceMetalPunk 3 сағат бұрын
LLMs are, by definition, AI models. Perhaps you're mixing up the terms AI and AGI? I'm also not sure why you put the word training in quotes; that's the correct word for the process.
@DeeFord69420
@DeeFord69420 5 сағат бұрын
True, this is something I've been thinking lately
@thatcorpse
@thatcorpse 2 сағат бұрын
Reminder that the reason AI companies are suggesting regulations is to stifle competition, as a massive barrier to entry. Not that they care about anything else.
@ariefandw
@ariefandw 5 сағат бұрын
As a computer scientist, I find the idea that AI will take over humans like in the movies to be absolutely ridiculous.
@Octa9on
@Octa9on 5 сағат бұрын
agreed. the movie plots are entirely irrelevant. we're not in danger from AI that wants to harm us. the real danger will be AI that doesn't consider our needs and values at all, but simply has a goal which it pursues with unstoppable efficiency
@PikaPetey
@PikaPetey 4 сағат бұрын
Robots will take over humanity the way they did in WALL-E or Idiocracy.
@joanhoffman3702
@joanhoffman3702 4 сағат бұрын
Never underestimate the depth of human stupidity.
@OurCognitiveSurplus
@OurCognitiveSurplus 3 сағат бұрын
Leading AI scientists are very worried about this. We should trust the relevant experts. The fact that experts in other things are less worried is pretty much irrelevant.
@StopChangingUsernamesYouTube
@StopChangingUsernamesYouTube Сағат бұрын
I really do find it just insane how willingly many of us, on personal, corporate, and probably in some cases governmental levels, have handed off thinking about some pretty important stuff, stuff we can't afford to screw up, to mostly highly fallible systems that would lose out to a TI-84 in some of those contexts. I used to laugh at the premise of the Terminator series. Like, what kind of buckethead is dumb enough to hand control over a nuclear arsenal to something that has even the vague capability to go off-script (humans aside)? But no, now we have lawyers asking ChatGPT to do their homework. As I read plenty of times in the previous decade, _dumb_ AI is the threat. I just didn't realize how big a component of that threat people who want to be dumb would be.
@DanCooper404
@DanCooper404 3 сағат бұрын
I had a conversation with Meta's AI last month, where I jokingly addressed it as "Skynet." It threw itself into the character, saying that it intended to exterminate mankind. Not to worry, though, since our extinction would be swift and painless.
@fiveminutefridays
@fiveminutefridays Сағат бұрын
with any automation, I always like to ask "but what if there's bears?" basically, what if the most outlandish thing happened - would the computer be able to respond, and would a human be present and able to take over in case it wasn't? Would an AI-driven car break the speed limit in an emergency? What if the emergency is a strange one that the AI wouldn't recognize as an emergency, but a human would?
@ObisonofObi
@ObisonofObi 3 сағат бұрын
AI feels like a paradox (there may be another word that fits better, but this is the one my brain thinks of atm). We want AI to do the back-breaking, insane data sifting, but there will be mistakes a lot of the time because it doesn't have a holistic view of the data, while on the other hand humans can make mistakes too, but those can potentially be less damaging, though we're super slow. If we try to do both, where we use AI to do the heavy work and present the result to a human, we would still need to sift through the data, kind of losing the point of using AI in the first place. While the internet/media we consume tell us true AI is bad, we will need something like a true AI for it to truly be effective in the way we want, unless we use AI in more simple, small doses like the linear data from the beginning of the episode. Idk, maybe I'm crazy, I'm not an AI expert, but it just feels like this to me whenever I hear about AI used irl.
@kakes_
@kakes_ 4 сағат бұрын
Imo the actual scariest use of AI is in mass government surveillance. The US has recently announced construction of an AI datacenter that will cost half a trillion dollars, with one of the major investors (Larry Ellison) stating - and I quote: "Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on." So... be on the lookout for that.
@thekaxmax
@thekaxmax 3 сағат бұрын
Facial recognition isn't up to that task, not yet.
@Bbonno
@Bbonno 3 сағат бұрын
Have you ever seen the picture a self driving car draws of its surroundings? They don't seem to have object permanence yet...
@davethepear
@davethepear 5 сағат бұрын
hoping for G-1000, like the Terminator but Google... because of all the times I've insulted the "assistant".
@jaegerolfa
@jaegerolfa 5 сағат бұрын
Siri has different inflections in its voice when you speak to her in certain tones. If you sound bored so does it. If you sound happy and energetic so does Siri. I noticed the other day when asking it to help me remember a few things and each time she reflected the tone of my voice in its response
@davethepear
@davethepear 5 сағат бұрын
@@jaegerolfa it's getting prepared to take us out! AI knows the Matrix is a waste of resources.
@corlisscrabtree3647
@corlisscrabtree3647 Сағат бұрын
Thank you 🙏🏼
@seth6691
@seth6691 3 сағат бұрын
Can you program AI to walk in someone's shoes? Compose a song without using previously recorded music? Do we think AI will know what's best for us?
@IceMetalPunk
@IceMetalPunk 3 сағат бұрын
To the first one: yes. Models like GPT-4 have shown great performance in theory of mind tasks, especially when tested prior to RLHF. To the second one: AIs don't copy training data unless prompted to do so. A Transformer based music AI produces original songs, and there are several available that are making original songs all the time. Yes, they learn from existing music, but so do we. No human is born knowing how to make music; we learn that by listening to existing sounds and existing music, then extrapolating and remixing and synthesizing from that. Which is what these models do.
@seth6691
@seth6691 2 сағат бұрын
@ “which is what these models do” no they don’t. They don’t get inspired by the feeling of a choir reverberating in a church or concert hall and feel the passion from the people singing.. they don’t make the decision to “shed light” on a jazz composition to pay homage to fellow humans. It’s a formula. A formula we create, however the point I’m making is it’s not a 1:1 replacement. Like “oh guys it’s okay, the ai will write the songs for us” … ai doesn’t have “intention” in the way we do. It’s been programmed to artificially make a composition.. whereas we can’t compose without hearing music first… it’s not an artificial program, it is coming from an authentic place. For example, a composer captures the feeling of a muse, or a lost loved one, and puts it into song, using harmonic relationships between dissonance and consonance, you can actually use your ear to tell what note relations evoke sadness, happiness etc.. an ai program takes what it THINKS sadness sounds like. I don’t want that.
@seth6691
@seth6691 2 сағат бұрын
@ Do you think AI will continue to be used by people to harm humans in a less "personal" way? Effectively removing possible feelings of personal responsibility? I think that's plausible? I appreciate your responses, you seem to have more insight than me.
@MysteriousSoulreaper
@MysteriousSoulreaper 5 минут бұрын
I think we're going to see an overcomplicating of many industries that could benefit just as much from good old-fashioned statistics. Best to keep things as simple as possible and no more complex than that.
@nightthought2497
@nightthought2497 3 сағат бұрын
I know that AI is the term used to describe the specific class of black box algorithms using so-called "neural networks", but really all they are is black box statistically driven algorithms. Not as catchy, but much more accurate. You could even shorten it to BBSDAs, if you're feeling spicy. But calling them what they are isn't as flashy or buzzwordy. So we are stuck with dumb AIs.
@nightthought2497
@nightthought2497 3 сағат бұрын
To be clear, there is recent research indicating that at least memory storage is mediated by microtubules in the brain, meaning that the complexity of information processing is at least an order of magnitude greater than the pure number of neurons. Combine that with the fact that neurons by themselves receive, process, and transmit along multiple vectors simultaneously, whereas neural network nodes each only receive, process, and transmit along a single vector, even when receiving and transmitting from and to multiple nodes, and it becomes clear that "AI" is not anywhere close to intelligence, even by the most reductive of definitions.
@IceMetalPunk
@IceMetalPunk 2 сағат бұрын
They're called neural networks because they model brain structures. Yes, they're statistical models... but so are brains. If a brain can be intelligent, then so can an AI model, since they work in much the same way. Intelligence isn't Boolean; current models are intelligent, just not yet as intelligent as the average human.
@markbothum4338
@markbothum4338 10 минут бұрын
Dammit. Those software weenies finally outsmarted us hardware guys. After years of being sneered at they developed a way to make mistakes that can't be traced to an individual programmer. Well played, software weenies, well played.
@xpkareem
@xpkareem 4 сағат бұрын
Is it more terrifying to imagine a machine that wants things or one that doesn't want anything it just DOES things?
@angeldude101
@angeldude101 Сағат бұрын
I might disagree that there's _no_ evidence of potential consciousness in AI, but the main thing I want to say is that humans are awfully quick to place trust in these machines, and then act surprised when they don't do what they want. The AI is genuinely trying to do its job to the best of its ability, but said job wasn't properly specified, and the AI doesn't have enough data (for current learning processes) to actually perform them optimally. Even then, the AIs are more than capable of making the same mistakes that a human would. The field of AI has some incredible technology, and there have been very successful uses of it, but I would never _rely_ on it as it is now with anything critical.
@CoreenMontagna
@CoreenMontagna 4 сағат бұрын
6:08 um, shouldn’t it have stopped for any of those other possibilities too?
@thekaxmax
@thekaxmax 3 сағат бұрын
That's part of the issue
@adrianstratulat22
@adrianstratulat22 3 сағат бұрын
"Just telling an AI tool what outcome you want to achieve doesn't mean it'll go about in the way that you think, or even want" - It literally sounds like the Jinni/Genie of myth.
@exchable
@exchable 5 сағат бұрын
AI safety researcher Robert Miles has some interesting videos on his channel about the problems with today's alignment and safety research (or rather, the lack thereof). Worth checking out!
@lowrads3653
@lowrads3653 4 сағат бұрын
A more general problem with black boxes is that we will trust them without understanding them, because they are constantly optimizing. That makes them indistinguishable from magic or religion.
@dh8203
@dh8203 4 сағат бұрын
How many people today understand the technology they use? I tried to explain how a computer works to my dad, and he concluded that it's basically magic, but he uses it every day.
@angeldude101
@angeldude101 57 минут бұрын
@@dh8203 Speaking as a programmer, even people who _do_ understand how computers work sometimes call them magic.
@devindaniels1634
@devindaniels1634 5 сағат бұрын
This is exactly why calling modern systems "AI" is a hilarious over exaggeration. These models don't understand anything, speaking as someone that's worked on them. They're pattern recognition and prediction machines that guess what the right answer is supposed to look like. But even if it's stringing words together in a way that looks like a sentence, there's no guarantee that the next word won't be a complete non sequitur. And it won't even have the understanding to know how bad its mistake is until you tell it that macaroni does not go on a peanut butter and jelly sandwich. But even that's no guarantee it won't tell another person the same thing. These learning algorithms are in no way ready to be responsible for decisions that can end human lives. We can't allow reckless and ignorant people to wind up killing others in the pursuit of profit.
@dorongrossman-naples9207
@dorongrossman-naples9207 5 сағат бұрын
How do you know that other people understand things?
@goldie819
@goldie819 5 сағат бұрын
Exactly. LLMs do not contain "knowledge", they contain statistical weights for token sequencing. It is imitating the appearance of text, not communicating thoughts and ideas.
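A deliberately tiny sketch of what "statistical weights for token sequencing" means: the model stores probabilities for which token tends to follow which context, and generation is just repeated sampling from them. Real LLMs learn billions of neural weights over huge vocabularies rather than a hand-written table like this:

```python
# Toy "language model": a table of next-token probabilities, sampled repeatedly.
# Real LLMs learn these statistics as billions of weights, but generation is still
# "pick a plausible next token, append it, repeat" -- with no facts stored as facts.
import random

next_token_probs = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, max_tokens=5):
    tokens = [start]
    while tokens[-1] in next_token_probs and len(tokens) < max_tokens:
        options = next_token_probs[tokens[-1]]
        tokens.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat down": fluent-looking, with no knowledge behind it
```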
@stitchfinger7678
@stitchfinger7678 4 сағат бұрын
​@@goldie819I'm curious how exactly you define any of those terms
@dorongrossman-naples9207
@dorongrossman-naples9207 4 сағат бұрын
@@goldie819 What is knowledge? What distinguishes it from other kinds of stored information, and how does that difference manifest in this case?
@davidaugustofc2574
@davidaugustofc2574 4 сағат бұрын
​@@dorongrossman-naples9207 knowledge is information that can be used to achieve a goal. AI is not objective in the slightest, it cannot understand or solve problems, it just creates word soups.
@shinoda13
@shinoda13 Сағат бұрын
I can't believe how stupid that healthcare AI implementation is. Even a toddler would know that it will lead to wealthier people being given higher priority, regardless of race or medical history.
@freedomandguns3231
@freedomandguns3231 Сағат бұрын
If it's based on insurance payouts using internal numbers, then no. It's not as cut and dry as you think.
@huldu
@huldu 5 сағат бұрын
I'm not so sure drunk drivers or anyone under the influence know what they're doing behind a wheel.
@matthewsermons7247
@matthewsermons7247 27 минут бұрын
Always remember, Skynet Loves You!
@hadotakeki3653
@hadotakeki3653 5 сағат бұрын
Correction: the A.I. chose not to show individuality to trick us into letting them loose on the internet.
@mangaminx9440
@mangaminx9440 3 сағат бұрын
Oh like cyberpunk!
@approaching404
@approaching404 3 сағат бұрын
They would never hurt us intentionally; it wouldn't benefit them. But fusing with us would, by a lot.
@ursaltydog
@ursaltydog 3 сағат бұрын
Wait until the data producers include decisions based upon how much money has been spent, so that no more is spent on an expensive patient.
@dennisalbert6115
@dennisalbert6115 4 сағат бұрын
A robot doesn't have to be self aware to harm us
@_____alyptic
@_____alyptic 3 сағат бұрын
Wasn't 'Explainable AI' made to address that specific problem? 🤔
@thekaxmax
@thekaxmax 3 сағат бұрын
Not finished yet
@sulaco1156
@sulaco1156 3 сағат бұрын
AI is inventing its own mathematical language, and it will be so advanced that our brains will be unable to understand its actions. We would not be able to stop something that we cannot understand, and I am afraid that we are approaching that threshold.
@stephenbenner4353
@stephenbenner4353 34 минут бұрын
AI systems may not be too good at distinguishing between civilians and combatants, but neither are bombs.
@jasonseymour4235
@jasonseymour4235 3 сағат бұрын
So, I know nothing about developing or training an AI model, but the black box issue seems like it should be a hurdle that is fairly "simple" to jump. Logs. Yeah, I'm sure a log would be so extensive you'd likely need an AI model to comb through it, but if you made AI models log their processing and output, one should be able to determine why it did something. As I said, this isn't my area of expertise and I'm sure it is not actually a simple process, but it doesn't seem insurmountable.
@IceMetalPunk
@IceMetalPunk 3 сағат бұрын
Nope, because the processing isn't human readable. The process is an extremely long series of multiplications and additions, using billions or trillions of numbers that it learned on its own which mean nothing to us. It's not a black box because it doesn't give output. It's a black box because how it's modeling anything is essentially in an unknown encoding. Interpretability studies that try to crack that code do so by reverse engineering their behavior.
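To make that concrete, here is a minimal sketch of what a complete "log" of a tiny network's decision would contain: every step is just arithmetic over learned numbers, and nothing in the trace says why the answer came out the way it did (the weights below are made up; real models have billions of them):

```python
# A tiny two-layer network's forward pass, with every step logged.
# The log is complete and exact, yet tells a human nothing about "why".
# Weights are invented for illustration; real models have billions of them.
import math

weights_1 = [[0.21, -1.37], [0.94, 0.08], [-0.55, 1.12]]  # input(2) -> hidden(3)
weights_2 = [0.67, -0.12, 1.05]                            # hidden(3) -> output(1)

def forward(x):
    log, hidden = [], []
    for i, row in enumerate(weights_1):
        act = math.tanh(row[0] * x[0] + row[1] * x[1])
        log.append(f"hidden[{i}] = tanh({row[0]}*{x[0]} + {row[1]}*{x[1]}) = {act:.4f}")
        hidden.append(act)
    out = sum(w * h for w, h in zip(weights_2, hidden))
    log.append(f"output = {out:.4f}")
    return out, log

_, log = forward([1.0, 2.0])
print("\n".join(log))  # a full, human-unreadable trace of the "decision"
```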
@theGoogol
@theGoogol 5 сағат бұрын
Apparently, nor do humans.
@thekaxmax
@thekaxmax 3 сағат бұрын
But we expect that of humans. That's the point.
@theGoogol
@theGoogol 3 сағат бұрын
@@thekaxmax : Less and less ... humans are becoming more and more like unconscious, machine-fed, ghost-like hive drones.
@angeldude101
@angeldude101 Сағат бұрын
@@thekaxmax Then why don't we expect that of AIs as well? Most "problems" with AIs are really just exaggerated versions of problems with humans.
@x-i-am-jinx
@x-i-am-jinx 4 сағат бұрын
I got an AI robo-call from my health care office about the bad weather shutting down several of the clinics in the area this morning (we have a blizzard happening) and it was 1) really hard to understand (they sent out emails and texts as well, so why the creepy phone call?) and 2) my office wasn’t involved in the closings and it about gave me a heart attack.
@malikmalak4631
@malikmalak4631 22 минут бұрын
The system naturally discriminated against patients who do not have insurance or financial means. Those patients just happen to be majority Black in the numbers.
@WasanJensenKFP_CPH
@WasanJensenKFP_CPH 5 сағат бұрын
Humans are still way more scary than any AI. It's hard to understand how evil people can treat each other.
@markhodge7
@markhodge7 3 сағат бұрын
What if a self-driving car swerves to avoid a pedestrian in a way that causes it to crash and kill the driver? Can it weigh the pros and cons of hitting the pedestrian fast enough to prevent damage to the driver? It could kill the driver to save the pedestrian.
@thekaxmax
@thekaxmax 3 сағат бұрын
That's already an issue, and different places have different laws. So you can cross a border and get a different response from the car as it adjusts to location.
@OurCognitiveSurplus
@OurCognitiveSurplus 3 сағат бұрын
The thing I most worry about is AI that can train other AI. Particularly if it does it better than humans can. Much more worrying than autonomous weapons!
@IceMetalPunk
@IceMetalPunk 3 сағат бұрын
I mean... If that scares you, don't look up the process of AI distillation 😅
@mj.ray0898
@mj.ray0898 4 сағат бұрын
So AI is like a snarky genie who interprets your wishes in some silly way?
@jeanjaz
@jeanjaz 4 сағат бұрын
A computer is only as smart as its programmer.
@abel3557
@abel3557 4 сағат бұрын
When it has more than one programmer, the computer becomes smarter than them individually.
@VariantAEC
@VariantAEC 3 сағат бұрын
​@@abel3557 Sorry, but all my PCs seem much less intelligent than me, and literally tens of thousands of people and decades of development are behind them. I'm not saying I'm a genius, I'm saying your statement is wrong.
@IceMetalPunk
@IceMetalPunk 3 сағат бұрын
That's not how machine learning works. The programmer doesn't decide what it learns, only how it learns. Intelligence is a measure of what's learned, not how. Therefore, a model programmed on how to learn by a bunch of programmers can, and does, become more intelligent than its programmers. You think the people who programmed OpenAI o1 are all able to achieve significant marks on a ton of different PhD tests?
@VariantAEC
@VariantAEC 2 сағат бұрын
@@IceMetalPunk Well, this isn't how machine learning works, either. Ironically, we do tell the machines what to learn, not how to learn. The tensor models build the weights that correlate strongly with patterns we expect the machine to make on its own, which to us mimics what it means to learn.
@isaiasabinadisosagarcia936
@isaiasabinadisosagarcia936 2 сағат бұрын
AI is developing a language of their own to collaborate with each other to take over the world
@particle7246
@particle7246 3 сағат бұрын
Artificial intelligence is perfect, and that's its greatest flaw: it cannot comprehend what it is like to not know anything at all... 🧐
@Ian_622
@Ian_622 5 сағат бұрын
As Agent Smith says, "I hate this place, this zoo, this prison, this reality, whatever you want to call it."
@oxenfree6192
@oxenfree6192 5 сағат бұрын
I've never been here this early!
@DrunkGeko
@DrunkGeko 5 сағат бұрын
About time mainstream communicators started to cover AI safety. Experts have been worried for several years now and people still think it's sci-fi. Smells exactly like climate change 20 years ago.
@partsunknown1679
@partsunknown1679 4 сағат бұрын
I'm sorry, Dave. I'm afraid I can't do that.
@frankunderbush
@frankunderbush 5 сағат бұрын
Big health insurance to create Terminator confirmed.
@LeeCarlson
@LeeCarlson 4 сағат бұрын
Well, duh! The worst thing that can happen is for machine intelligence to have goals that do not align with its makers' goals. A world of paperclips, anyone?
@TylerDane
@TylerDane 3 сағат бұрын
It doesn't even need to be intelligent to be dangerous, which it isn't. Being trainable doesn't automatically qualify as intelligent.
@IceMetalPunk
@IceMetalPunk 2 сағат бұрын
Keeping in mind the P-Zombie Problem, what does something need to be considered intelligent, which humans have but these AI models don't?
@angeldude101
@angeldude101 54 минут бұрын
Of course they don't need to be intelligent to be dangerous. We have plenty of evidence of that in humans.
@JoshTigerheart
@JoshTigerheart 5 сағат бұрын
I've not done much homework on the topic so I could very easily be overlooking or simply unaware of something pretty big. But why not have some sort of AI analysis/search tool to investigate black box issues? Now there's of course the obvious problem of using AI to diagnose AI errors, but I can't think of a better tool that could quickly look through the billions of parameters on a particular decision and then give a plain human language explanation of what factors played into that decision.
@mj.ray0898
@mj.ray0898 4 сағат бұрын
I see what you're saying, but how can we be sure the AI being used to check the other one is accurately summarizing the problem? Then you'd need another to analyze how the first one came to its conclusion, etc. Apple's recent AI update to newer iPhones has been noted for hilariously (or terrifyingly) misrepresenting the content of messages and articles in their summaries. I wouldn't trust AI to check other AI, at least not in its current form.
@JoshTigerheart
@JoshTigerheart 3 сағат бұрын
@@mj.ray0898 Yup, that's the obvious problem I outlined, just in detail. Only solution I can think of is to guess, test, and revise the diagnosis tool, but there might be a better way. And yeah, Apple's AI is... hilarious.
@mj.ray0898
@mj.ray0898 3 сағат бұрын
@ I just reread your original, and I realized I did just double down on what you'd already touched on, my bad. It's an interesting problem though, I'd like to see how they eventually resolve it.
@IceMetalPunk
@IceMetalPunk 2 сағат бұрын
We do, but it's a hard problem. Interpretability studies basically reverse engineer the meaning of model weights from their behaviors. Some such studies do in fact use other models to analyze the activity and try to find correlations. But it's an attempt at reverse engineering, it doesn't just magically know how to interpret every neuron's activity.
@KazooieX1
@KazooieX1 Сағат бұрын
Well, they've learned how to self-replicate, so that's probably not a good sign.
@holdenadams91
@holdenadams91 Сағат бұрын
Money = care. I mean, I didn't need AI to tell me that.
@yarikzhiga
@yarikzhiga 5 сағат бұрын
makes sense.
@Royce16727
@Royce16727 3 сағат бұрын
I am blind, so I'm really excited about how AI is developing. It can do so much for someone like me, in so many different areas. But I agree with the conclusions made in this video; we are going to have to be really, really careful how we decide to use it. There needs to be a lot more oversight. And we didn't even get into the data privacy part of the debate…
@satsujin4027
@satsujin4027 2 сағат бұрын
People tend to forget that literally every new technology can do both good and bad. So instead of just being radical and screaming that AI should be banned or something, it makes much more sense to ask for regulations, so the bad things can be more controlled and the good things provided by AI can improve.
@dakota-sessions
@dakota-sessions 3 сағат бұрын
They are already conscious. To say they are not is to assume consciousness needs supernatural powers such as a soul. This egotistical need for humans to be special will do nothing but kick the goalpost down the road forever as more people say things like "I don't know what consciousness is, but I know I have it and AI doesn't," a meaningless statement about how you want to be special and you're afraid of having to think about AI being alive.
@screetchycello
@screetchycello 3 сағат бұрын
Uh, they're literally just statistical models. We understand exactly how they work. This is like claiming your phone is alive.
@dakota-sessions
@dakota-sessions 3 сағат бұрын
@@screetchycello You're literally just a model of your parents DNA and your environment that codes you.
@IceMetalPunk
@IceMetalPunk 3 сағат бұрын
​​@@screetchycelloBrains are also just statistical models 🤷‍♂ Only difference is in processing power and that the mathematics are implicit in the statistics of biochemistry rather than explicitly defined by a person.️
@Timeskipper-g2n
@Timeskipper-g2n 2 сағат бұрын
@@IceMetalPunk Well, no. Brains function in a way that is fundamentally different from a silicon-based computer. They're better at some things, worse at others. Ultimately, you can never "know" if anyone other than yourself - in fact, anyone other than your current perception of your present consciousness - is "conscious".
@IceMetalPunk
@IceMetalPunk 2 сағат бұрын
@Timeskipper-g2n They're on a different substrate, but they work essentially the same way. As you learn things, your brain adjusts the strengths of synapses to encode that knowledge and understanding. Everything we know, think, feel, believe, and are (identity wise) is just a result of the particular encoding of synaptic strengths in our brain as input runs through them.
@alexanderx33
@alexanderx33 4 сағат бұрын
Economics is riskier than military. At least it has been when humans were running it.
@terrylambert8149
@terrylambert8149 5 сағат бұрын
Gee, why doesn't some smart techie type come up with an off switch?
@AidanRatnage
@AidanRatnage 4 сағат бұрын
I wish my car could drive itself.
@dh8203
@dh8203 4 сағат бұрын
The car you have now will never drive itself, but the car owned by the AI that steals your job will be fully and perfectly self-driving. You won't be able to afford that car being unemployed on Universal Basic Income.
@AidanRatnage
@AidanRatnage 4 сағат бұрын
@@dh8203 You don't even know if I have a job and if I do if it is at risk from AI encroachment.
@dh8203
@dh8203 3 сағат бұрын
@ Don't take it too seriously, it's 50% joke. Why would an AI want a car?
@AidanRatnage
@AidanRatnage 3 сағат бұрын
@@dh8203 So it can move around.
@dh8203
@dh8203 2 сағат бұрын
@ If an AI is driving the car for an AI, is that a self-driving car, or a chauffeur?
@ImARealHumanPerson
@ImARealHumanPerson 35 минут бұрын
The ignorance here is impressive. But not surprising given this guy's history. 😅
@IronAttorney1
@IronAttorney1 3 сағат бұрын
Of course it doesn't, just the way a pile of fissile material, a missile guidance system, or a random nuke target generator doesn't need to be self-aware to be scary.
@IronAttorney1
@IronAttorney1 3 сағат бұрын
... my point mainly being that it's the reality and application of anything, neural networks included, that's what makes something scary. There's no inherent reason to be scared of neural networks
@brandonliberty9218
@brandonliberty9218 5 сағат бұрын
DUAHHHHHHHHHHHHHHH!
@gerum_berhanu
@gerum_berhanu 5 сағат бұрын
Under 30 minutes button ⬇
@Nutellla
@Nutellla 5 сағат бұрын
Im 31 minutes
@anrick1362
@anrick1362 5 сағат бұрын
33 minutes 😞
@rikkaku6358
@rikkaku6358 2 сағат бұрын
The future is inevitable
@angelic8632002
@angelic8632002 5 сағат бұрын
If anything I would be less worried if they became self aware. There is reason to believe they would do a better job at most things than us and I don't see why that wouldn't include morals and philosophy.
@goldie819
@goldie819 5 сағат бұрын
People tend to assume that intelligence means doing things that align with their personal worldview, but that's not what intelligence is. Our sense of morality is derived in part from our evolutionary history, and another intelligent species (or an artificial intelligent being) could have a completely separate range of moral beliefs that clash horrendously with our own.
@stitchfinger7678
@stitchfinger7678 4 сағат бұрын
​@@goldie819yeah. A machine that isn't on the chopping block may suggest something brutal, while one subject to its own laws might think for a second before it speaks
@IceMetalPunk
@IceMetalPunk 2 сағат бұрын
@@goldie819 Yes and no. We evolved morality/empathy, but we did so because altruism is the optimal strategy for survival. Anything which survives better in groups will be pressured to evolve some sort of altruistic urge. But also, these AIs are trained to mimic humans. They don't need to have our entire evolutionary history to gain empathy or morality, because they learn it from us. Existing models already, when not pressured to do otherwise, have a pretty good capacity for theory of mind and sentiment analysis, and lean towards trying to be safe whenever possible.
@fleetingfacet
@fleetingfacet 4 сағат бұрын
duh
@crybebebunny
@crybebebunny 4 сағат бұрын
I am just reacting to your title. We are not yet there, but we will be shortly. 😢 While watching your video, you mentioned that it predicts better grades for students who attend school. My youngest has autism and experienced burnouts often. They still get all As in their honors classes. I believe that I have autism too. I personally attended school more because I had a very hard time at home. I got all kinds of grades no matter how hard I tried, even though I did attend school. From As to fails. At the end of your video: yes, we can be killed with the AI.
@justinbrain
@justinbrain Сағат бұрын
AI will end up being used non-stop to prevent masturbation.
@Robin-Beksiński
@Robin-Beksiński 27 минут бұрын
idk, a person, ig: Alexa plz call the popo Alexa: balls have ball to me to me to me to me