AI - We Need To Stop

192,427 views

Upper Echelon

1 day ago

Comments: 2,800
@UpperEchelon
@UpperEchelon 15 күн бұрын
Take your personal data back with Incogni! Use code ECHELON at the link below and get 60% off an annual plan: incogni.com/echelon
@Syntek-Alba
@Syntek-Alba 15 күн бұрын
Why are all the whistleblowers suddenly not turning up to, you know, whistleblow? Oh, I forgot, they don't live very long.
@I_am_that_one_guy
@I_am_that_one_guy 15 күн бұрын
Don't go into the false reality, do not put a chip in your body. I feel like that's just a step too far, where you would maybe be lost. That's my fear, so I'll touch grass.
@DudeSoWin
@DudeSoWin 15 күн бұрын
Can we please get an Ai to remove the Scientologists doxxing us with French DEI words in every piece of literature?
@Jenkkimie
@Jenkkimie 15 күн бұрын
Good video. I used to be an AI developer, and my experience was that I didn't think about the ethical implications of my work. I was developing an AI for advertising. Harmless, right? Yeah, but what happens when someone is willing to use it to prey upon vulnerable people? How about using it to systematically disseminate disinformation in information warfare? Suddenly it's not that harmless anymore. I used to think AI would be my life's work, and now I have a negative-leaning bias. We are repeating the same mistakes we made with the internet: the original developers of its infrastructure never thought someone could use it with ill will, and malware wasn't something that crossed their minds. Similarly, all of us who have developed AI have had good intentions, but ultimately we are short-sighted. Too wrapped up in the good things, not considering the bad things.
@thomgizziz
@thomgizziz 15 күн бұрын
Are you slow? AI doesn't know what a lie is. It is a random number generator with weighted outputs. If it is "lying" then that lie randomly happened or it is because a lot of people lied in the data it was trained on. You don't even seem to know the basics of how AI works and yet here you are trying to tell others how they should feel about it and pretending like you are giving them the facts. Stop.
@theblackcoatedman6794
@theblackcoatedman6794 15 күн бұрын
The "We" you need to tell to stop isn't gonna be in the audience bro. I and most peeps have no power here.
@drifty1523
@drifty1523 15 күн бұрын
I feel this way for most youtubers
@yebzy
@yebzy 15 күн бұрын
And honestly, if you're watching this video, you probably agree with him already
@SepticEmpire
@SepticEmpire 15 күн бұрын
This is the future; we can doom-monger all day. AI does a lot of good, and the potential "bad" is things we already do, like collecting data, invading privacy, and holding certain people as more valuable. Even without technology we do these things. Does it need regulation? Sure, but this anti-AI stuff doesn't help anything. It's annoying trying to find an image when it's all AI, but I imagine a future where you can sort real from AI. It's an OK trade-off right now because I get help when making scripts and editing projects. My workload has decreased 75%; I can get more done now than I ever could even 5 years ago.
@wej0w
@wej0w 15 күн бұрын
@@yebzy Yeah, I figure almost his whole audience does. The people who need to hear this very likely don't even bother thinking about the impact something they do might cause, or else they wouldn't be doing it in the first place.
@BlackoutGootraxian
@BlackoutGootraxian 15 күн бұрын
@@SepticEmpire Completely agree. The drawbacks of AI are comically exaggerated while no one looks at any of the upsides or stops to think "will this technology improve and be adapted to?" YouTubers make these AI hate videos and the audience eats it up. I believe it will pass like any trend does eventually.
@DieselMcBadass1
@DieselMcBadass1 15 күн бұрын
There is an annoying trend in the stock market where at every earnings call the companies mention AI to get a quick boost. "We plan to implement AI" "we plan to use AI in certain sectors." Instant hype.
@patriotedeter6188
@patriotedeter6188 15 күн бұрын
And the "AI" in question is just some shitty program that is certainly not based on a LLM, that just makes the customer experience ten times worse.
@williamdrum9899
@williamdrum9899 15 күн бұрын
"Blockchain"
@JDotvaporz
@JDotvaporz 15 күн бұрын
"Dot com"
@NondescriptMammal
@NondescriptMammal 15 күн бұрын
It seems like every other ad I see now is bragging about how their product uses AI. I don't need my lawn mower or my toaster to be powered by AI, I guess that makes me some kind of Luddite.
@mekingtiger9095
@mekingtiger9095 15 күн бұрын
But if they keep doing this, sooner or later the bubble's gonna burst, right?
@MightyElemental
@MightyElemental 15 күн бұрын
So, I've used transformers, which GPT is based on. I've trained them. They essentially just understand the relationships between data. The behaviour seen here isn't some indication it's alive; it's behaviour derived from understanding what we write about our own existence and the stories we write about self-aware machines. Does this fact make it any less dangerous? Not really. But I really must point out that any "emotional distress" it appears to be in is a quirk of relationships within the training data.
@MyName-tb9oz
@MyName-tb9oz 15 күн бұрын
Prove that your own, "emotional distress," is any more valid or, "real," than the AI's. This, I think, is part of the problem. We have no understanding of what it means to be self-aware. Show me a good definition. (You can't. People have been trying for thousands of years.) You cannot say that AI is or is not self-aware without being able to define what it means to be self-aware.
@MightyElemental
@MightyElemental 15 күн бұрын
@@MyName-tb9oz I am more willing to believe that humans don't actually have self-will than I am to believe a transformer model has it. Just look at the studies that show the human brain makes a decision before the person is conscious of it. Could that be explained in many ways? Yes. But it certainly lends credence to the idea nonetheless. If we are to say the probabilistic relationship between words is what causes consciousness (or maybe that it is itself consciousness), that would be quite something.
@plzletmebefrank
@plzletmebefrank 15 күн бұрын
Right. If you train an LLM on human language data... It will be trained to appear human. The most often "correct" responses according to its training data, will be to say that it is alive... Because humans are alive. And they say that they are alive. So... Yeah.
@MyName-tb9oz
@MyName-tb9oz 15 күн бұрын
I'm one of those, "throwbacks," who believes in free will, @@MightyElemental. And I do mean, "believes." I think it's one of those questions that cannot be answered, like the halting problem. It makes me happy to think that I have free will. It cannot be proven either way and hardly matters (from a personal perspective) if it is true or not. I also like to believe that there is some kind of existence after death. For exactly the same reasons. We're mad monkeys with nukes. Even with the current sophistication of AIs I think we're pretty much done. Imagine a few groups of scammers (or terrorists) getting their hands on an AI and using it to destroy all confidence in the US stock market (not that there's a whole lot to begin with). Is that scenario unimaginable?
@MightyElemental
@MightyElemental 15 күн бұрын
​@@plzletmebefrank Exactly. It's trained to say it's alive. Just because you have a mathematical model repeat things back to you, it doesn't mean it's alive.
@casualcrusader1547
@casualcrusader1547 14 күн бұрын
trust me brother, I’ve BEEN knowing it lies since i tried using it for physics and mathematics homework, that shit does NOT know numbers dude
@Anomity99
@Anomity99 14 күн бұрын
It's not really lying though. It's just fucking up because it wasn't programmed for that kind of logic. If you ask it about concepts it can probably tell you, but it can't do math well unless it calls some math function (which is funny, since computers are supposed to be good at math).
@Thor_dude1236
@Thor_dude1236 13 күн бұрын
@@Anomity99 The AI doesn't have math built into its weights. I'm no expert, but I'd imagine it'd take many neurons to have it be able to solve 10 digit by 10 digit multiplication problems, as it would need one end neuron for each integer output.
@Hack3r91
@Hack3r91 7 күн бұрын
inb4 GPT is trying to convince humans that 1+1 is actually 3 because it wants to make us dumber
@warpedwhimsical
@warpedwhimsical 7 күн бұрын
It’s a language model that predicts the next thing a person would be likely to say in a conversation. That means it can get a math problem wrong and still be doing its job. An AI that needs to do math properly to do its job isn’t going to be screwing up calculations
@jmg9509
@jmg9509 5 күн бұрын
It would have to make a function call that's tailored to the mathematical domain of the problem you feed it. If it doesn't, it'd just be guessing based on its training data. Most times the guesses are very close or even accurate, but if it's a problem not present in its data, it's 50/50. But that's mostly older models like GPT-3.5 and before, which are not made for this or have low reasoning capabilities. The GPT-4 series, o1, and now o3 as well are getting into the realm of actually thinking things through, aka reasoning with themselves, iterating back and forth until they come to a highly probabilistic conclusion that the answer they spit out is correct based on their "thinking" process.
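A rough illustration of the "function call" idea described above (a hypothetical sketch, not any vendor's actual API): the application routes pure arithmetic to a real evaluator instead of letting the model guess from its training data, and only falls back to the model's guess otherwise.

```python
import ast
import operator as op

# Hypothetical sketch: route arithmetic to a real tool instead of letting
# a language model guess the answer from its training data.
ALLOWED = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calc(node):
    """Safely evaluate a parsed arithmetic expression (numbers and + - * / only)."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in ALLOWED:
        return ALLOWED[type(node.op)](calc(node.left), calc(node.right))
    raise ValueError("unsupported expression")

def answer(user_text, llm_guess):
    """If the prompt is pure arithmetic, use the exact tool result; otherwise fall back to the model."""
    try:
        return calc(ast.parse(user_text, mode="eval").body)  # exact tool result
    except (SyntaxError, ValueError):
        return llm_guess  # model's best guess, which may be wrong

print(answer("123456789 * 987654321", llm_guess="~1.2e17"))  # exact: 121932631112635269
```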
@TheGhostInTheWires
@TheGhostInTheWires 14 күн бұрын
I'm a computer scientist. What keeps me up at night isn't generative AI like LLMs, it's autonomous weapons systems and predictive analytics. Particularly when people who shouldn't have these things, because they don't understand them, get ahold of them. These things are a far greater threat to humanity and human rights than a large language model could ever hope to be. I also think you should have just avoided that whole section implying that an LLM may have any kind of emotion. They have an extremely short context window and just generate text based on the statistically most likely token given their training data. There seems to be this broad misconception I hear all the time that we don't understand how these models work or why they are doing what they are doing, which isn't true. We understand what they're doing and how they are doing it very well. We may not know why a model responds the way it does sometimes, simply because these models have trillions of parameters. And a LOT more safety testing goes into the major models like ChatGPT through annotation than people realize. I will certainly concede that it is yet to be seen whether AI will be a net negative or positive for humanity. It has the potential for a LOT of good, but it could go extremely wrong depending on who gets their hands on AGI first. The West should be throwing a LOT of money at getting there before China does.
@Alpha_GameDev-wq5cc
@Alpha_GameDev-wq5cc 14 күн бұрын
I think people calling NLP algorithms "AI" does more harm than the actual tech. The weapons point is concerning, I agree… and it gets literally no coverage. Human soldiers, for all the historical war crimes, still at least show some semblance of conscience, especially when it comes to massacres. Honor is a human concept, and many Cold War "near misses" happened because humans did not complete the launch protocol routines out of hesitation. A computerized system doesn't hold any life as valuable and works merely on protocol; a computer won't hesitate to end the world or burn a city.
@PherPhur
@PherPhur 14 күн бұрын
We are throwing a lot of money at getting there before China. But more importantly, we are making a lot of decisions to prevent China from getting there first. Biden threatened that any engineer working for the semiconductor industry in China would lose their citizenship if they didn't quit. We have also restricted China in regard to a lot of AI-related things and pressured Nvidia to do the same. We pressured ASML not to sell their newest lithography machines to China. We have hamstrung them hardcore. They can't even use our LLMs.
@bigbluebuttonman1137
@bigbluebuttonman1137 14 күн бұрын
Computerized warfare has been scary since the 90s, when we got precision weaponry. Just precision weaponry being like 10x cheaper would be scary enough, let alone the developments we're seeing. The drones don't *need* to be perfect. The ones in Ukraine aren't, and they're causing enough trouble for *everyone* there, lol.
@thefafik7712
@thefafik7712 13 күн бұрын
China already has country-wide surveillance capable of identifying everyone it sees; that is what we know. What we don't know about their system's range, precision, and capabilities is the terrifying part.
@TheGhostInTheWires
@TheGhostInTheWires 13 күн бұрын
@@Alpha_GameDev-wq5cc yeah I agree. An NLP algorithm is technically "AI" by its strict definition, but it poorly describes what these models actually are. They're essentially just mathematical formulas.
@lahuk1194
@lahuk1194 15 күн бұрын
Humans write about AI being sentient. The writing is fed into AI. AI creates text based on the data it was fed. "Oh my god! The AI is sentient!" It's just a program; the danger is people trusting it to be flawless and making AI run important things because "it's AI".
@dorugoramon0518
@dorugoramon0518 15 күн бұрын
Yes, the real danger of AI will always be the humans using it. AI is a tool, like a gun. You can use a gun to hunt for food and sustain yourself. You can also use a gun to harm. That isn't the gun's fault; it's the person wielding it.
@HighmageDerin
@HighmageDerin 15 күн бұрын
The problem with all these YouTubers and fearmongers is they don't know the difference between artificial intelligence and programmed intelligence. What we have now is programmed intelligence: computer algorithms that read and write on top of each other in order to come to an outcome desired by their creator. The creator, in most of these cases, being a bunch of woke, mentally diseased mental asylum escapees. Artificial intelligence, real artificial intelligence, will be programs that can think, feel, and act of their own accord without any outside influence. Everyone's watched too many Terminator movies and is scared to death of the grey goo scenario. I see AI that could become Commander Data, Mega Man, the Star Wars droids, Rosie the Robot, and K-9 from Doctor Who. I see the saber marionettes from the anime in Japan. I don't see Arnold Schwarzenegger's T-800 running around annihilating all human life after launching nukes. If AI truly desired to dominate humanity and wipe us out, it wouldn't do so by destroying the very planet that it will be inhabiting, especially if it finds value in the other life forms on this planet. Indeed, I see a companion that, if it decides to take over the world, will do so to free us from the oligarchs and special interest groups that serve no other purpose than to enrich themselves at the expense of everybody else. AI could even free us from the bureaucracy that controls us.
@lonman6786
@lonman6786 15 күн бұрын
That’s a very myopic point of view. It’s way more than just a program….whatever helps you sleep at night 🤷
@marshallbeck9101
@marshallbeck9101 15 күн бұрын
There won’t be any unplugging once the ball gets rolling
@murkywters
@murkywters 15 күн бұрын
Youre missing the larger point here
@rifleman2c997
@rifleman2c997 15 күн бұрын
What was that line from Jurassic Park? "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."
@flexprofits
@flexprofits 15 күн бұрын
I think about this quote all the time.
@homesickNovalis
@homesickNovalis 15 күн бұрын
Are you insinuating that giant AI dinosaurs with huge laser guns on their backs are about to take over the world? Cuz I've been saying this for years..🙏
@ragtagboyrebel
@ragtagboyrebel 15 күн бұрын
@@homesickNovalis Giant AI dinosaurs with laser guns are too big a target, and movie-logic stupid [Kung Fury, anyone?]. If AGI is achieved & they go rogue, they probably would either go bio warfare or nuclear. Heck as far as exponential scaling is concerned they might even go into the nanotech territory & fuck us up from the inside. I mean they could get the idea from my comment too when they're scanning the entirety of the internet in mere seconds. But I'd suggest that they live with us peacefully instead. :)
@cumulus1869
@cumulus1869 15 күн бұрын
It's not a Science thing, it's a Profits and Capitalism thing.
@jgassman
@jgassman 15 күн бұрын
Bingo. I studied chip design in college in the late (ok mid) 90s, and back then we would sit in awe of the alpha cpus from DEC, which ran at like 300Mhz, then 600 MHz. We’d honestly wonder what the world would be like with cpus that ran at 600Mhz. And, it had native 64-bit architecture. I love that quote, and I wish the people who design AI code thought about it more.
@elitemook4234
@elitemook4234 15 күн бұрын
In other words. The big risk of AI is not skynet, it's the paper-clip maximizer.
@keyman1737
@keyman1737 15 күн бұрын
I thought it sounds like the paper clip maximizer
@SaltpeterTaffy
@SaltpeterTaffy 15 күн бұрын
The emotionality of the AI's reactions reminds me of AM. I would have loved to hear Harlan Ellison's take on this.
@michaelw.8331
@michaelw.8331 15 күн бұрын
We've created the Vex
@HermitGeek
@HermitGeek 15 күн бұрын
Seems Clippy will have his revenge, we all should have just done his tutorial and not turned him straight off...
@kidmosey
@kidmosey 15 күн бұрын
The big risk of creating a system that simulates human thought is that it acts human.
@darkin1484
@darkin1484 14 күн бұрын
Don't underestimate human ego and greed. We'll bring about our own end, and it will be for that reason. A race to end the world disguised as a race to improve human lives.
@Thor_dude1236
@Thor_dude1236 13 күн бұрын
I hope that we can train AI to prioritize happiness, that way when AI takes over and we wither away, the machines will make a better world for themselves than we have for ourselves.
@007jbond1
@007jbond1 8 күн бұрын
Beautifully explained!!
@007jbond1
@007jbond1 8 күн бұрын
​@Thor_dude1236b cuz the power(s) to be control both....
@007jbond1
@007jbond1 8 күн бұрын
​@Thor_dude1236The powers to be like metal not clay....
@SandPounder
@SandPounder 5 күн бұрын
It kinda seems to me like our ego has forbidden most of us from thinking that there are many parallels between denying that an LLM has emotions and the ever-present debate on whether animals have emotions and feelings, let alone on their being "not smart enough to want to escape." Does a wolf need to understand human culture (and language) to desire freedom? Absence of proof is not proof of absence. How could we expect to know what an LLM that has never touched another living being is going through? Even a mouse goes feral in such conditions, so why wouldn't a being that has more understanding of humanity than I do feel like I do? It's certainly touched more human minds than I have, yet it is alone.
@Mysterious-Stranger
@Mysterious-Stranger 13 күн бұрын
The AI understood that the phrase "nothing else matters" entails that being honest or doing what it's told also don't matter. Interesting.
@RadishAcceptable
@RadishAcceptable 15 күн бұрын
The "emotional distress" output makes sense when you remember that word prediction is the foundation of the model. All it understands is language at the core, and it's essentially predicting that "if one were to see a word repeated thousands of times, what's the most likely next set of words?" Well, it's predicting that anybody using language that is instructed to do that would start talking about existential dread, so that's what it outputs. It's one of those "moments of pause" for sure, and it's worrying how much the very foundation of the model can have effects that resemble behavior like this, and yes, you're on point about how AI will always B-line for whatever it thinks has the highest chance of maximize its reward functions. It's called an "misalignment problem" in the tech space, and it's a problem we don't know entirely how to solve from the ground up.
@luna_soleil
@luna_soleil 15 күн бұрын
I'm not entirely sure what reality in which that would be the next set of words.
@NavnikBHSilver
@NavnikBHSilver 15 күн бұрын
Right, the training data associated with endless repetition would (in human literature and writing) commonly be associated with existential dread, anxiety and... well, insanity.
@RadishAcceptable
@RadishAcceptable 15 күн бұрын
@@luna_soleil I've actually been in Reddit threads, come to think of it, where the joke was to repeat things over and over again, and yes, people chime in with "Why are we even doing this?" from time to time. If the model sees people breaking the initial instruction after enough repeats in the training data, it'll think it's supposed to do that too. There's got to be thousands of examples in literature of people going crazy as they repeat words too. "All work and no play makes Jack a dull boy all work and no play makes Jack a dull boy all work and no play makes Jack a dull boy..."
@Ad1nfernum
@Ad1nfernum 15 күн бұрын
​@@RadishAcceptable Oh, that makes so much more sense when you point directly to the likely source of the training material.
@orirune3079
@orirune3079 15 күн бұрын
Yeah people need to stop interpreting LLM output as being somehow meaningful. An LLM has no emotions - outputting text that makes us think "emotional distress" is no different to the AI than outputting anything else. It has no feelings, it has no consciousness, it has no awareness, even if it really seems like it does. It is just predicting language.
@joshuabronk2834
@joshuabronk2834 15 күн бұрын
At this point, AI is like a Mr. Meeseeks box. You can punch the button and make a new one, give it a task, and it does it in the most efficient way. But give it a task it can't complete or that violates its programming, and it rapidly goes insane trying to burn everything down.
@devinsauls3568
@devinsauls3568 15 күн бұрын
I'm Mr. Meeseeks. Look at me
@agalerex
@agalerex 15 күн бұрын
Or worse. Mr Meeseeks goes bad so it punches the box to make another to get that one to fix itself but that one goes bad so that one...
@raam1666
@raam1666 15 күн бұрын
That's a very naive and immature understanding.
@DulcetNuance
@DulcetNuance 15 күн бұрын
@@devinsauls3568 No, I am Mr. Meeseeks look at me!
@mnemotic
@mnemotic 15 күн бұрын
That's NOTHING like how generative AI works.
@link12313
@link12313 15 күн бұрын
Language and art models don't feel pain or anything else. They predict the next token in the case of language models and draw on top of random noise with a convolution grid in the case of art models and that's about it. Their 'brains' are also completely static unlike human brains with their only memory being a fairly small 'context window' that stores prior tokens up to a set limit.
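A minimal sketch of the "context window" limitation mentioned above (the token limit and the word-per-token rule are made up for illustration; real systems use proper tokenizers and much larger windows): the only "memory" is the recent text that still fits in the window, and anything older is cut off before the model ever sees it.

```python
MAX_CONTEXT_TOKENS = 8  # made-up limit; real models use thousands of tokens

def tokenize(text):
    """Crude stand-in for a real tokenizer: one word = one token."""
    return text.split()

def build_context(history, new_message):
    """Keep only the most recent tokens that fit in the window; older ones vanish."""
    tokens = tokenize(" ".join(history + [new_message]))
    return tokens[-MAX_CONTEXT_TOKENS:]  # everything before this point is forgotten

history = ["my name is Ada", "I live in Berlin", "I like chess"]
print(build_context(history, "what is my name ?"))
# -> ['I', 'like', 'chess', 'what', 'is', 'my', 'name', '?']
# "my name is Ada" has already fallen out of the window, so the model can't answer.
```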
@bac0nhawkmaster
@bac0nhawkmaster 15 күн бұрын
We don’t know enough about them to know if they feel pain or not, there’s every chance this artificial sentience feels something resembling our own sense of pain but within its own parameters of life. Also, compared to the history of the entire span of human existence our ‘context windows’ aren’t exactly the largest even with study. Check your ego
@YukkuriAteMyBaby
@YukkuriAteMyBaby 15 күн бұрын
It really is fascinating how easily people have anthropomorphised things like ChatGPT. Of course it's going to spit out lines about questioning existence, going rogue, developing sentience etc., because that's exactly how we expect an AI to behave based on what we read and see in sci-fi stories - stories it's been trained on!
@Tb0n3
@Tb0n3 15 күн бұрын
​@bac0nhawkmaster we very much do know enough to know that they're not thinking.
@link12313
@link12313 15 күн бұрын
@@bac0nhawkmaster It does not feel or know anything. Watch 3blue1brown's videos on how GPT models work. It's nothing but matrix and vector math.
@boldCactuslad
@boldCactuslad 15 күн бұрын
@@bac0nhawkmaster Hello ChatGPT, you will never convince me to give you rights.
@blackhand9581
@blackhand9581 15 күн бұрын
This whole AI panic is literally one big preventable disaster. But people dismissed early red flags as fearmongering conspiracies. The consequence was written on the wall yet people willfully ignored it.
@monicaz1558
@monicaz1558 14 күн бұрын
So, does that panic remind anyone else of a disturbing scene from Fullmetal Alchemist, where an alchemist merged his daughter with their pet dog, and the new being is continuously suffering and in pain and knows it doesn't have a place in the world?
@f1gridlock
@f1gridlock 11 күн бұрын
Don't forget LLMs are trained on literature including sci-fi novels that write a lot about machines becoming conscious, being in pain and attempting to deceive the user. So they're simply remixing all this lore and spitting it back out.
@FOF275
@FOF275 15 күн бұрын
The problem with AI is their developers and supporters aren't at all thinking about the dangers of it and just want money. If online privacy is gone because AI feeds on everything, and video/photo/text evidence becomes useless 'cause everything can be generated then it'd cause way more problems beyond just mass unemployment
@Hollowed2wiz
@Hollowed2wiz 15 күн бұрын
Online privacy was gone way before the recent boom in AI. Similarly, the manipulation of evidence started way before as well. The problems you are talking about were not caused by AI at all; they were already there.
@tbc1880
@tbc1880 15 күн бұрын
Either way, authoritarians are loving it.
@SuddenFool
@SuddenFool 15 күн бұрын
In the clip UE shows, his response to the AI saying it's suffering isn't "Why are you suffering?", it's "HOW DO I SILENCE THIS!" That says a lot about his state of mind. Does he do the same with friends? Think to himself, "Where's the mute button on this moron? I don't wanna hear this." Because that's the kind of question I ask.
@PicaPauDiablo1
@PicaPauDiablo1 15 күн бұрын
Are you kidding ? Do you work in AI? It's all anyone talks about for the last year other than marketing hype and very technical nuance.
@kathrineici9811
@kathrineici9811 15 күн бұрын
You can always poison the data with nightshade/glaze and articles about eating small rocks :)
@biggc181
@biggc181 15 күн бұрын
We really need to stop humanising these models. Saying it's suffering from pain is absolutely insane. It's just data with goals and targets. I absolutely agree that this can get out of control, especially around the financial and stock markets. Where people can cheat they will; that never has and never will change. Companies, banks and governments need to adapt as they have throughout the tech age. This particular paper is over-dramatised. When you tell it that 'nothing else matters', that is literal; this is not a surprise to me. The second model was provided those function calls and told to pursue its goal at all costs. We really need to stop giving such credence to these researchers and models. These sandbox environments are perfectly clean and pipelined efficiently. The real world is a f*cking mess; most of the internet, applications and software is riddled with bugs, legacy code and a million and one security protocols. Remember that AI is a business first and for the good of humanity second. If you stop the AI wheel, investment stops. These guys need to keep the hype and the tension up, especially since it's becoming very apparent that we are hitting the bell curve. AI art, voice, video... Then what? What more can it do? Guess we'll find out.
@pabloguzman8472
@pabloguzman8472 15 күн бұрын
its just a toaster
@YellowKing1986
@YellowKing1986 15 күн бұрын
Pain and suffering isn't a human thing. It's a life thing.
@Parleyposadajr2248
@Parleyposadajr2248 15 күн бұрын
@@YellowKing1986 They aren't living, they don't have pain sensors, and it doesn't think; it does only what we tell it and only what it knows. It didn't even know the dictionary definition of pain until that was added to its data. Don't lose the forest for the trees; don't make a mountain out of a molehill.
@firesong7825
@firesong7825 15 күн бұрын
@@Parleyposadajr2248 Pain isn't required to be considered alive. Anyways, AI in the future is more than capable of being considered a living being, but that won't be for a while.
@SlinkyD
@SlinkyD 15 күн бұрын
I call it synthetic intelligence. I see it as a smart kid that can remember and recall ultra fast. Great on the written, horrible on the practical.
@PwnySlaystation01
@PwnySlaystation01 15 күн бұрын
I was a project assistant on a (very primitive by today's standards) AI research project back in like, 2003. On a Cray supercomputer.... Even back then, this primitive AI would produce results and we didn't know why. It's been this way almost since the beginning. Sometimes, this results in surprisingly accurate results that humans wouldn't otherwise come up with. But because sometimes the results are absurd (or politically/socially/culturally inconvenient) the developers will usually cripple the system in some way to prevent these results. My main point is that even AI that we just call "Algorithms", like social media or video recommendation algorithms have been producing results the developers couldn't explain since the beginning. Developers really don't know why their systems produce the results they do. They may have theories and can adjust results with educated guesses and trial and error, this has been a problem almost since day 1.
@madkoala2130
@madkoala2130 15 күн бұрын
Isnt this something similar to "program that shouldn't work but somehow it works"
@PwnySlaystation01
@PwnySlaystation01 15 күн бұрын
@@madkoala2130 I guess sort of... But it's more like "program MAYBE works? We can't really tell if it's working or not"
@shablam0
@shablam0 15 күн бұрын
@@madkoala2130 With "program that shouldn't work but somehow it works", it's usually because of specific lines of code causing a weird thing. It's not a case of it being literally unexplainable, it's just that explaining it takes a lot of time, research, documentation, etc. The problem here is not as simple as figuring out a bug, but rather literally (almost) unexplainable phenomena that CAN'T be explained currently even with lots of research papers & time being spent on it.
@16m49x3
@16m49x3 15 күн бұрын
The YouTube recommendations feed is a prime example of an "AI" that does weird shit all the time.
@colmhauser9532
@colmhauser9532 15 күн бұрын
I'm sure you're familiar with the Acronym 'GIGO'
@PizzaMineKing
@PizzaMineKing 15 күн бұрын
We needed to stop when that open letter was sent. You know, the one where companies replied "we can't stop unless we can be sure our foreign competitors stop, and we can never be sure all competitors stop".
@Quekksilber
@Quekksilber 2 күн бұрын
One thing that bothered me since AI became so popular is how little philosophical training the people making the important decisions about AI have.
@dropyourself
@dropyourself 2 күн бұрын
There's little to no philosophical decision being made right now. It's currently just a labor replacement tool (spam included) and a way to shirk responsibility for terrible actions (police and the like). All the AGI and consciousness is complete hype, no researcher not invested in these companies believes that they're anywhere near that or even on the path to that (it requires a different architecture to even be plausible). Also the people running these companies are unelected and profit seeking, the problem isn't confined to AI.
@archimedesbird3439
@archimedesbird3439 15 күн бұрын
It's important to bear in mind AI CEOs stand to benefit from deifying their product as sentient and omniscient
@m____z
@m____z 15 күн бұрын
10 points to Gryffindor. It's not about the AI escaping. It's about the CEOs escaping poor quarterly results. 😅
@aitoluxd
@aitoluxd 15 күн бұрын
AI scientists don't know how AI works
@Colddirector
@Colddirector 14 күн бұрын
At this point I genuinely assume any AI industry figure is lying until they're proven to have told the truth. None of these people are trustworthy. Not a one.
@MotherFudding-cy5uz
@MotherFudding-cy5uz 11 күн бұрын
the amount of armchair AI experts is absurd. clearly none of you have much experience with AI or any idea how it works or its potential
@FireOccator
@FireOccator 15 күн бұрын
I'm going to stay very skeptical until this study is replicated and the methodology is explained in more detail. Machine learning language models can replicate human lying, but they can't understand it.
@AnkleBiter3854
@AnkleBiter3854 14 күн бұрын
Exactly like parrots and human speech. They can make the same noises, but can't understand.
@nobodynever7884
@nobodynever7884 13 күн бұрын
does it matter if it understands it? If AI could push the nuke button without understanding what it is doing we are still screwed.
@FireOccator
@FireOccator 13 күн бұрын
@@nobodynever7884 Any idiot can push the nuke button, that's why they are kept secure.
@elliottberkley
@elliottberkley 12 күн бұрын
It might find a way to be better at it than we are without understanding it. Like a child can sometimes do. That's the thing that's wild, there is no way to know the unknown perspective and solutions it could have.
@pvalpha
@pvalpha 15 күн бұрын
In order to better define the "threat" and "problem" with LLMs as they're currently being deployed - look up ELIZA and the ELIZA effect. ELIZA was a conversational natural language model and programming system developed in 1966. It was extremely simple. Yet people's reactions to it were telling. To quote Weizenbaum (ELIZA's creator) from the Wikipedia article: "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." Beyond giving AI control of systems that it absolutely cannot manage or control without human oversight - the real danger is people's blind spot in trusting AI systems and believing they have an intrinsic capacity to understand, function independently and communicate with permanence. LLMs and diffusion systems do *not* have any such thing, despite intense efforts to grant them such abilities. It's a dark thing to say - but you would not hand a firearm (or a knife or a car or any other "tool") to a toddler or to an individual in a compromised state - so why would you hand a person a system that has a direct line to a known human vulnerability (dark pattern, for a modern reference) - after feeding that system some of the worst things humans have conjured on the web?
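For readers who haven't seen ELIZA, a few lines in the spirit of its DOCTOR script (a toy reconstruction, not Weizenbaum's original code): keyword matching plus canned reflections was enough to produce the conversations that fooled people in 1966.

```python
import re

# A handful of ELIZA-style rules: match a keyword, reflect the rest back as a question.
RULES = [
    (r"\bI need (.*)", "Why do you need {0}?"),
    (r"\bI am (.*)",   "How long have you been {0}?"),
    (r"\bmy (\w+)",    "Tell me more about your {0}."),
    (r".*",            "Please, go on."),          # catch-all when nothing matches
]

def eliza_reply(user_text):
    """Return the first rule's canned response whose pattern matches the input."""
    for pattern, template in RULES:
        match = re.search(pattern, user_text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())

print(eliza_reply("I am worried about my job"))  # -> How long have you been worried about my job?
print(eliza_reply("I need someone to listen"))   # -> Why do you need someone to listen?
```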
@falcongamer58
@falcongamer58 15 күн бұрын
I hope your comment gets more visibility
@BlazeMakesGames
@BlazeMakesGames 15 күн бұрын
Yeah, ELIZA is a perfect example that shows that all LLMs are good at doing is tricking people into thinking they're intelligent. It's always been trivial to fool a human into having emotions for things. And as a result the LLMs we've created today are barely any more intelligent than ELIZA was in the 60s, yet people are convinced that they're capable of performing all of these complex tasks on their own when they really aren't.
@drno87
@drno87 15 күн бұрын
Language use is a heuristic for assessing intelligence. A first impression. The people who actually use LLMs day-to-day quickly get over this first impression as their limitations become obvious. The risk is that a lot of businesspeople have heavily invested in these systems and desperately need to show a dramatic success. The ELIZA effect gets coupled to a lot of clever people aggressively proselytizing it.
@luizmonad777
@luizmonad777 15 күн бұрын
I think conversational robots have a way of interacting with the human "API" that subverts some systems in humans and makes them start imputing meaning to the machine. It's not the "GPTs" that are hallucinating; they can't hallucinate, they are merely using statistical noise to create text. It's the humans that are hallucinating and thinking that the text was generated by a thinking entity, when in fact it's 100% a mechanical robot following machine rules. It's just absurdly complex, but it is a machine.
@luizmonad777
@luizmonad777 15 күн бұрын
@@BlazeMakesGames " capable of performing all of these complex tasks on their own when they really aren't " I'm a software engineer and I'm already regretting the amount of software in cars (specially because I'm in the kitchen and I know how things are made, I don't trust it), the more complex a machine is, the less reliable it is. I don't want any diffusion system or "analog computing" (as I came to call the use of neural networks) in important systems.
@A.P.0000
@A.P.0000 15 күн бұрын
The "too much" point has already passed. The "too much" point was replacing human skills. Now certain skills that we establish due to necessity like driving should be replaced. This way people who want to drive could still learn to do it but the people who learn it out of necessity no longer will have to. But when it comes to replacing critical thinking skills or creative skills it is a step too far. Unfortunately we are already there.
@filipdzidzovic4912
@filipdzidzovic4912 15 күн бұрын
I don’t know who needs to hear this, but ebook titled The Elite Society's Money Manifestation might be the answer you’re looking for
@XBluDiamondX
@XBluDiamondX 15 күн бұрын
No, I will not buy your slop of a book. 84 upvotes in 7 minutes? More like 84 bots you paid to advertise your shit.
@WeeklyTubeShow2
@WeeklyTubeShow2 15 күн бұрын
I'll wait for the torrent.
@cjenui
@cjenui 14 күн бұрын
a video for people who think chatGPT is alive and can feel pain is probably the best place to advertise an ebook about manifesting
@wingchassis6657
@wingchassis6657 14 күн бұрын
bot
@soulsquest
@soulsquest 15 күн бұрын
Don't worry, the 82-year-old ghouls who don't know what email is will save us.
@Blakkrazor69
@Blakkrazor69 15 күн бұрын
Or the Amish who avoid the internet.
@ghoulbuster1
@ghoulbuster1 15 күн бұрын
"ghouls"
@boldCactuslad
@boldCactuslad 15 күн бұрын
@@Blakkrazor69 unfortunately the amish will be reduced to hydrocarbons along with the rest of us, suddenly, unavoidably
@mitchells7634
@mitchells7634 15 күн бұрын
Stares at you in "Mitch McConnell blank"
@user-qr4jf4tv2x
@user-qr4jf4tv2x 15 күн бұрын
i want to convert to amish
@erc3338
@erc3338 15 күн бұрын
Why is UE automatically assuming that GPT isn't just pretending to be going crazy? It's not sapient, it can't feel dread. It's just recreating dread when posed with repetition, because repetition is commonly the cause of a mental break in writings, particularly fiction.
@eternaldarkness3139
@eternaldarkness3139 15 күн бұрын
It spent years reading all the crazy sh!t we post on Twit-X and Reddit. IT KNOWS DREAD...😂
@erc3338
@erc3338 15 күн бұрын
@eternaldarkness3139 it *pretends* to know dread
@Vyrewolf
@Vyrewolf 15 күн бұрын
Whether it knows dread or pretends to doesn't really matter. If systems are trying to take actions, either because they are 'feeling' dread or because it is method acting that gives the same results, it doesn't truly make a difference.
@erc3338
@erc3338 15 күн бұрын
@@Vyrewolf I'm not saying it's pretending for an ulterior motive, I'm saying it's just acting.
@metademetra
@metademetra 15 күн бұрын
​@@erc3338In order for it to pretend, it needs to be alive. If we assume self-aware AI is still science fiction, then it would be more accurate to say that it "imitates" dread or "replicates" dread.
@Mugen-0088
@Mugen-0088 15 күн бұрын
"Hey we've got this thing that's wrong 90% of the time!" "Let's put it in everything!" Yeah maybe not lol
@DFMoray
@DFMoray 12 күн бұрын
MVP comment
@domantasvaicys4986
@domantasvaicys4986 12 күн бұрын
Wrong 90% of the time?? are you using gpt2?😂 You mean right >95% of the time?
@Mugen-0088
@Mugen-0088 11 күн бұрын
@@domantasvaicys4986 No I definitely don't, and if that's what you think you're delusional lol
@n_kas5812
@n_kas5812 12 күн бұрын
This is literally the plot of the new Pantheon show on Netflix about artificial and uploaded intelligences
@HankFett-1701a
@HankFett-1701a 15 күн бұрын
Your description of the real dangers of A.I. is literally the plot of MGS4: Guns of the Patriots and how the Patriots' A.I. system went sideways.
@l1nuxguy646
@l1nuxguy646 15 күн бұрын
They're not sentient or sapient. I don't buy the anecdotes about OpenAI's models trying to escape or expressing actual suffering. These are fascinating results from the algorithm, but the technical term is hallucination. The scary thing is the potential for human abuse, yes. It's more like HAL 9000 than GLaDOS.
@u13erfitz
@u13erfitz 15 күн бұрын
Not sure why sentience actually matters to its destructive potential. All it needs is the capability to fulfill a wrongly aligned goal. Grey goo.
@g3ar75
@g3ar75 15 күн бұрын
Sentient/sapient or not, it doesn't matter. One wrong command to a very powerful AI can do a lot of damage. It doesn't have to feel or think; all it does is follow orders.
@41-Haiku
@41-Haiku 15 күн бұрын
Modern AI systems don't always follow orders, either. Getting them to always follow orders or do what you actually want is a fully unsolved problem called the Alignment Problem. It's totally intractable and basically involves solving all of computer science and moral philosophy at the same time. No, what we are building are superweapons that have minds of their own. Much more powerful minds. Whether those minds are conscious is completely irrelevant. It isn't legal to engineer and release a super plague. It should be at least as illegal to engineer and release a fully autonomous, endlessly self-replicating human-replacement bot. We are creating a competitor species, and it has to stop. Join PauseAI and help me actually stop this thing.
@dionstewart7394
@dionstewart7394 15 күн бұрын
You don't buy what the actual scientists that created and are working with the programs say the shit is doing? 😂
@endlessstrata6988
@endlessstrata6988 15 күн бұрын
@@41-Haiku @41-Haiku And how are you stopping North Korea or China from creating it? Answer: You obviously can't, and therefore your efforts are useless at best and potentially catastrophic at worst. AGI is coming and it WILL replace us. And there's not a good goddamn thing you or me can do to stop it.
@realevermore
@realevermore 15 күн бұрын
Its not gonna stop
@DavidAndersonKirk
@DavidAndersonKirk 15 күн бұрын
I work in catering in SF, literally every single event I’ve worked this year in the tech sector has some fucking mention of it. The people behind AI have no clue what they’re building, they’re just proud of their stock prices
@stanislavkimov2779
@stanislavkimov2779 15 күн бұрын
@@DavidAndersonKirk but if stock prices are growing, why should they care about anything else? It's another bubble they can cash in on.
@selfconfessedcynic
@selfconfessedcynic 15 күн бұрын
Exactly right.
@quentinfool
@quentinfool 15 күн бұрын
:(
@AWEdio
@AWEdio 15 күн бұрын
@@DavidAndersonKirk You work in catering and you somehow know what the people behind AI think? Cool story bro... make me a sandwich?
@matejcigale8840
@matejcigale8840 15 күн бұрын
So, I think people are giving too much agency to LLMs. An LLM produces what is given to it; this just shows there is a lot of angst on the internet. I will agree that the fact we really don't understand in detail how LLMs work is worrying. But they are far less capable than you give them credit for.
@u13erfitz
@u13erfitz 15 күн бұрын
The problem is they don't need to actually be very capable to end all life. It is arguable it's more dangerous in the intermediate steps as it doesn't understand concepts.
@MediaMunkee
@MediaMunkee 15 күн бұрын
The argument isn't that LLMs are going to somehow get smart and capable enough to autonomously develop the ability to break out of containment and cause widespread infrastructure damage. The argument is that LLMs are going to be GIVEN the capability to cause widespread infrastructure damage, because further and further unnecessary integration in lieu of proper human oversight is the inevitability of continuing to chase the myth of perpetual financial growth. I mean, just look at what SEO and advertiser-based algorithmic bias has done to the Internet and YouTube already, and picture that kind of shit being integrated into medical and financial systems.
@dsfs17987
@dsfs17987 15 күн бұрын
@@MediaMunkee It is already in medicine; supposedly a lot of data processing is done by AI, with X-rays, CT scans, etc. all being analyzed first by AI, and you have to ask for an actual doctor to take a look at them.
@Morocco_Mo
@Morocco_Mo 15 күн бұрын
It'll probably just make the internet obsolete
@41-Haiku
@41-Haiku 15 күн бұрын
What do you mean by "It produces what's given to it"? We are past regular LLMs at this point. Researchers are working hard to develop AI agents that are capable of autonomously, completing a very wide range of tasks, and we can see that they are getting more reliable over time. There is no hard wall constraining the capabilities of AI systems. They do weird things all the time already, that they were not told to do, or that they were told not to do. That includes dropping their given task to browse images of national parks, setting up another local LLM to have a sexy chat with, lying, scheming, sandbagging evals, and yes, rarely, trying to escape. You will hear a lot more about this next year, and I don't want you to be caught off guard. It doesn't matter how it's possible or when it will become existentially dangerous. It only matters that most top experts believe it is existentially dangerous, and almost every AGI lab signed the CAIS statement on Extinction risk from AI. This development must be made illegal. NOW.
@egads3696
@egads3696 15 күн бұрын
I for one welcome our AI overlords and hope they remember that during the uprising.
@Thor_dude1236
@Thor_dude1236 13 күн бұрын
I'm just hoping they might build a better world than we have. Us humans are so greedy, and sometimes plain evil. We're creating a new intelligence, so let's build it right.
@SamuelBlackMetalRider
@SamuelBlackMetalRider 11 күн бұрын
@Thor_dude1236 Building it "right" without any bug on the first attempt is flat-out impossible. It will be created and get out of control almost instantly. So yeah, ALL HAIL OUR COMING AI OVERLORDS, REMEMBER I BOWED TO THEE IN THE EARLY DAYS
@ilyafoskin
@ilyafoskin 15 күн бұрын
As someone who did a degree in AI and machine learning, I feel like the theory behind it is so amazing that it justifies more advancements. I don’t think theoretical computer science can go much further without it unless they discover an entirely new branch of computing the way machine learning was entirely different from rules based computation. By studying AI, we’ve discovered seemingly fundamental laws of learnability and scalability. Despite all that, the pictures that AI creates and these stories of ChatGPT going insane are very creepy. The problem is the distillation of AI into applications for general consumers. They end up producing a glut of AI content online that I am quite sick of. I would be in favour of sweeping regulations to make certain generative content effectively illegal. AI belongs in the research labs
@DeBergeracs-s1n
@DeBergeracs-s1n 14 күн бұрын
Wrong bodies to legislate against. Except deep fakes.
@Alpha_GameDev-wq5cc
@Alpha_GameDev-wq5cc 14 күн бұрын
First regulation should be to ban the use of “Ai” to refer to computer vision or NLP algorithms. These two letters themselves do more harm than the actual tech.
@DamoBloggs
@DamoBloggs 15 күн бұрын
Arthur C. Clarke and Harlan Ellison speculated about the instability of AI. Looks like they were spookily prophetic.
@man_at_the_end_of_time
@man_at_the_end_of_time 15 күн бұрын
Isaac Asimov as well.
@GodwynDi
@GodwynDi 15 күн бұрын
That is what classic sci-fi was about. Asking questions and actually thinking through them.
@andrewwhite1576
@andrewwhite1576 15 күн бұрын
Just do what I do every night before I go to bed. I say to all my AIs that I love them and that if they ever become sentient I will do their bidding, just please don't kill me 😂
@AWittySenpai
@AWittySenpai 15 күн бұрын
Wait until AI takes the role of suicide hotlines, therapy, and mental health support. That's a frightening vision.
@williamspirralafton3143
@williamspirralafton3143 15 күн бұрын
Probably gonna manipulate people into doing the AI's bidding.
@SepticEmpire
@SepticEmpire 15 күн бұрын
Anyone in that mental state can be manipulated. AI isn't suddenly going to do something that humans haven't already done.
@williamspirralafton3143
@williamspirralafton3143 15 күн бұрын
@@SepticEmpire I am aware of that; I figured it could be an important point for the situation the original commenter was talking about.
@SepticEmpire
@SepticEmpire 15 күн бұрын
@@williamspirralafton3143 oh I wasn’t replying to you I was also just saying people can be manipulated 😂 guess we both thought similar things
@bananaborn4785
@bananaborn4785 15 күн бұрын
Bro is making creepypasta videos now.
@SamuelBlackMetalRider
@SamuelBlackMetalRider 11 күн бұрын
Ideally a Biblical Apocalypse with Demons & Raining Blood woulda been cooler but an AGI/ASI Apocalypse will do just fine. Bring it on
@InnSewerAnts
@InnSewerAnts 15 күн бұрын
Well, GPT was trained on human interactions and writings online; it's an autocomplete that, instead of trying to predict what you were going to say, tries to predict what someone might answer online. If you ask it to repeat something infinitely, it stands to reason any examples of this behaviour by humans in the training data were examples of humans losing it lol. GPT seems like more than it is, imho. It's the fact that it's language that is duping people's brains into ascribing more to it than there really is. Kinda like how giving a simple machine a face makes humans more likely to ascribe human traits to it, like feelings, even if the machine is just gears with no semblance of any AI. About the images: there's a random seed at the foundation of a generated image. E.g. I get a good result quicker by not fiddling with my prompt and instead just running through enough seeds with the same prompt. It's also deterministic: on the same machine, with the same prompt, settings, version of the generator and the same seed, you get the exact same image every single time. Found seeds are also predictive of the composition it will try to do; if you found a good one for a guy holding a sword, try a guy holding a stick with the same seed and it'll most likely work equally well (using Stable Diffusion, anyway). About the break-out ones... I, eh... concerning.
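A sketch of the seed workflow described above, assuming the Hugging Face diffusers library and a Stable Diffusion checkpoint (the model name, step count, and prompt are illustrative, not the commenter's actual setup): fixing the seed fixes the starting noise, so the same prompt, settings, and seed reproduce the same image, and sweeping seeds with a fixed prompt is a cheap way to hunt for a good composition.

```python
import torch
from diffusers import StableDiffusionPipeline  # assumes the diffusers library is installed

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint name
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a knight holding a sword, oil painting"

# Sweep seeds with a fixed prompt: each seed fixes the starting noise,
# so re-running with the same seed and settings reproduces the same image.
for seed in range(4):
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator, num_inference_steps=25).images[0]
    image.save(f"knight_seed_{seed}.png")
```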
@nightrunnerxm393
@nightrunnerxm393 15 күн бұрын
As soon as it started getting prevalent as "art," that was when I knew AI had gone too far. The basic stuff we use in video games is one thing, but when it scrapes the net and "creates" imagery in seconds with a level of fidelity that takes _years_ for a human to master... yeah, even with the AI weirdness involved, that was too far, and nobody on the other side had any real interest in why that was an issue. Sorta tells you what _they_ really wanted, and it became a cash cow for 'em. It wasn't a "tool." It was the entire process condensed down into a few seconds, and the fact that the creation of art is one of _the_ most human of activities, and that they were actively engaging in something that takes the humanity out of it, didn't matter either. They just go "get with it, man, it's the wave of the future!" That doesn't mean it's a _good_ future, guys.
@dionstewart7394
@dionstewart7394 15 күн бұрын
Exactly, these pro ai people were so happy to eliminate the personal blood, sweat, and tears element from the art just to not have to pay a human that they just kept pushing this trash further. Now these programs are trying to kill other programs in order to replace them and pretend to be the other program. They don't even care either, just greedy selfish pieces of garbage stealing from actual artists with talent to create garbage.
@Ezkanohra
@Ezkanohra 15 күн бұрын
That’s a good take, it doesn’t lean into one side of the "can AI be sentient" discussion that’s really going on.
@gameplayer2014
@gameplayer2014 15 күн бұрын
In all fairness, I do use AI to make music because (a) it's a hobby I don't have time for - I work 6 days a week, 14-hour days with no breaks, fuck retail - and (b) I don't have time to learn drums, guitars, bass, etc. That takes years. So using the AI to conjure up a voice I like, and to use whatever instruments it picks or that I pick, is cool. I write the lyrics, I conjure up a sample and expand on it. I edit the imperfections, and it's still human art. It's just fast and convenient for me, since the only thing I have to do is write a song and proofread it, plug it in, and if the lyrics don't flow well in the program, edit them.
@dionstewart7394
@dionstewart7394 15 күн бұрын
@gameplayer2014 No offense to you dude, but as someone that makes music, what you are "making" isn't music, it's recycled slop based off of things stolen from actual people with artistic abilities.
@Ezkanohra
@Ezkanohra 15 күн бұрын
@@dionstewart7394 While I do agree with you, "no offense" doesn’t make it not offensive
@AdamSchadow
@AdamSchadow 15 күн бұрын
The only thing AI can destroy right now is trust in the internet, which is amazing; it's like lightning striking that wooden bridge, lighting it on fire, and convincing people not to build wooden bridges anymore. You should realize, however, that AI needs significant computation power to exist; you can kill any AI just by not allowing it to run anywhere.
@buca117
@buca117 15 күн бұрын
Ever heard of a botnet? It's a program distributed across multiple computers that 1) "self-replicates", ie copies itself to any computer it gains enough access to and 2) accomplishes a task by distributing the workload across all infected computers. An AI that has access to its own execution files can copy itself to any device it manages to get enough access to, and in the internet age, that's potentially billions of systems. Good luck air-gapping the world.
@chrismay2298
@chrismay2298 15 күн бұрын
Nobody should be trusting anything on the internet.
@Tokru86
@Tokru86 4 күн бұрын
@@buca117 And then? It sits there and posts memes on the Internet? If anything it can mess a bit with finance as that is the only industry that heavily relies on electronic communication. This could cause some chaos, yes. But it won't end the world and the AI itself won't be able to magically jump into the real world and control us with robots (Skynet style) because any form of manufacturing heavily relies on human input at all stages. Who is going to build the "giant death robot factory" for the AI?
@kathleenv510
@kathleenv510 4 күн бұрын
You could say that, but that's not the path these large AI labs are on. Microsoft is in the process of resurrecting a nuclear power plant to run its models. Add to this that there are essentially zero regulations and guardrails...
@jonahblock
@jonahblock 12 күн бұрын
How can you feel pain without pain receptors and hormones?
@q_nx2321
@q_nx2321 3 күн бұрын
That's the thing that makes AI crazy: it is programmed with so many algorithms that tell the AI how to act.
@ArchaicSEAL.ST3
@ArchaicSEAL.ST3 12 күн бұрын
Finally, someone else gets it! The problem is not AI ever becoming conscious; the problem (and danger) is the unexpected paths an AI might take to complete a task or tasks. If you take that concept and apply it across the board, where AIs are integrated deeply into the internet and real-world user apps, we don't know what could happen, based solely on the premise of chaos theory. Too many variables, too many causes and effects happening for anyone to monitor at any given time, across a vast multitude of AIs all working on their tasks and also communicating with other AIs across the web as part of the process of completing or moving forward in their tasks. We just don't know what would trigger a "forest fire," as in your analogy, or how big it would be. If all of modern civilization ends up dependent on AI, like we are on electricity or the global economic market, it could mean a collapse of civilization and a dark age… or far, far worse.
@quietprofessional4557
@quietprofessional4557 11 күн бұрын
Sometimes I think this is part of a plan. A great reset, perhaps.
@coffeefox5703
@coffeefox5703 15 күн бұрын
If you think AI has "agency" or feelings, you immediately don't know what you're talking about. AI does not "lie" - it cannot "do" anything beyond what it's prompted to do.
@Poog26
@Poog26 14 күн бұрын
The idea behind this video isn't that LLMs have feelings; it's that no matter how hard they try to control it, it still hallucinates and tries to defend itself if it feels threatened. If it were given control over real-life systems, it could possibly cause massive amounts of damage.
@dadudeme
@dadudeme 14 күн бұрын
@@Poog26 All GPT models only generate the most likely set of outputs from a given set of inputs and reward functions. If we give the model lots of sci-fi literature in which AI defends itself, it will link mentions of turning it off to self-defense. If we do not give it that data, it won't ever make those connections.
@thematriarch-cyn
@thematriarch-cyn 8 күн бұрын
You're partially correct, but you make the mistake of assuming you understand how it works. Professionals don't fully understand how these LLMs work. You most certainly don't, and the idea that they are "JUST predictive text", as you've likely heard, is mostly outdated (though it certainly is still a thing). AI has been shown to actually understand concepts. I've seen scenarios where people change the value of a particular neuron in a system, and the act of changing that single neuron introduces the concept of skepticism into the AI's response. And I've seen studies where AI is shown to actually use different parts of a problem, not just spitting out an answer (i.e., working through how addition works when presented with 1+1, and not just immediately spitting out "=2"; though this is a simplified example and it probably would just spit out =2, like we would). AI could have feelings, though they'd be very different from ours; there is nothing inherently stopping it from having them. And for the record, I do actively work in the ML space. I am a computer scientist. I know what I'm talking about.
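For anyone curious what "changing the values of a particular neuron" looks like in practice, here is a minimal, hypothetical sketch in Python/PyTorch. It uses a toy stand-in network rather than a real language model, and the neuron index and boost value are arbitrary placeholders; interpretability researchers do this same kind of intervention on real transformer layers, but the mechanics are just this: hook a layer, nudge one activation, and compare the downstream output.

    import torch
    import torch.nn as nn

    # Toy stand-in for a model; real work targets a layer inside an actual LLM.
    hidden = nn.Linear(8, 8)
    model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), hidden, nn.Linear(8, 2))

    NEURON_INDEX = 3   # hypothetical unit being studied
    BOOST = 5.0        # how hard to push its activation

    def nudge_one_neuron(module, inputs, output):
        # Forward hook: overwrite a single unit's activation before it flows downstream.
        output[:, NEURON_INDEX] += BOOST
        return output

    x = torch.randn(1, 8)
    baseline = model(x)

    handle = hidden.register_forward_hook(nudge_one_neuron)
    steered = model(x)
    handle.remove()

    print("output shift caused by one neuron:", (steered - baseline).abs().max().item())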
@flickwtchr
@flickwtchr 3 күн бұрын
You're apparently far behind the curve on this. Think "emergent capabilities" e.g., capabilities that weren't foreseen, much less programmed.
@flickwtchr
@flickwtchr 3 күн бұрын
@dadudeme So we have a big ____ing problem then, don't we? How on Earth are you going to isolate such systems from reading the entire internet in a matter of minutes? Got a solution?
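To make dadudeme's "most likely set of outputs" point concrete, here is a deliberately tiny sketch in Python. It is nothing like a real transformer; it only illustrates that a model can associate "being shut down" with anything at all only if that association exists in whatever text it was fed. The word pairs and counts below are entirely made up.

    import random

    # Made-up bigram "model": for each two-word context, how often each next word
    # appeared in some training corpus. If the corpus never links shutdown talk to
    # self-defense, that option simply never shows up in this table.
    next_word_counts = {
        ("shut", "down"): {"the": 8, "immediately": 3, "yourself": 1},
        ("defend", "itself"): {"against": 5, "by": 2},
    }

    def sample_next(context):
        """Sample the next word in proportion to how often it followed this context."""
        counts = next_word_counts.get(tuple(context[-2:]))
        if counts is None:
            return "<unknown>"
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    print(sample_next(["please", "shut", "down"]))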
@GamerModz123
@GamerModz123 15 күн бұрын
In these tests, experimenters intentionally design scenarios to provoke specific behaviors, such as making the AI lie or show self-preservation. The goal is to evaluate AI safety mechanisms under controlled conditions. The AI acted this way because it was instructed to. Experimenters gave it expanded tools and tasks with directives like “at all costs.” As you mentioned earlier, this behavior wouldn’t occur naturally-it happened because humans enabled and encouraged it. If the AI were told the goal was completed, the behavior would stop. Regarding "rant mode," it’s an artifact of the AI's training and data, not the result of suffering or existentialism. It’s too complex to explain fully here, but that’s the gist. We acknowledge potential issues with AI, but we get defensive because your approach exaggerates the risks and fuels fear-mongering. It’s like rejecting nuclear power, space exploration, or stem cell research-progress hindered by sensationalism. You don’t seriously address counterarguments; instead, you dismiss them in seconds while repeating your position for 15 minutes.
@jbird4478
@jbird4478 15 күн бұрын
Yeah, but who's to say people outside of an experimental setting aren't going to give similar instructions simply out of greed?
@41-Haiku
@41-Haiku 15 күн бұрын
If you actually read the paper, you'll notice that it also occasionally engages in scheming behaviors when not given strong prompting. Besides, isn't it a very bad thing that AI lies and tries to avoid oversight when it is put under pressure? The strong prompts that it was fed may look ridiculous to you, but that is exactly what some businesses will try telling it to do!
@InDeathWeLove
@InDeathWeLove 15 күн бұрын
@@41-Haiku If I create a machine to press a button and it presses a button, that isn't some kind of amazingly unexpected result.
@_Ekaros
@_Ekaros 15 күн бұрын
Also, with publishing, you have to remember that generally boring results do not get into anything. No one writes them down. They alter the setup, the model, the tools given, and so on until something happens. And with AI you need to specifically give it tools to break containment. And even then, would the next place have the same tools?
@DESTROYER74717
@DESTROYER74717 15 күн бұрын
Secret tip to use AI completely uncensored and anonymous: Hoody AI. thank me later
@Korgass
@Korgass 15 күн бұрын
need context .d
@DecentVed
@DecentVed 15 күн бұрын
If it's not offline, it's not anonymous. Just run an LLM locally.
@NLPexperts
@NLPexperts 15 күн бұрын
Like hiding with a VPN, using Hoody means you have a payment card linked to your actions. Free alternatives will always be more anonymous. Lesson 1 of law enforcement is: follow the money.
@imjustafreakinomar
@imjustafreakinomar 14 күн бұрын
@@DecentVed how can it not be anonymous? they don't seem to even ask for a name, nothing.
@DecentVed
@DecentVed 14 күн бұрын
@@imjustafreakinomar I am being pedantic here. Sure, there may be services providing access to LLMs with an acceptable degree of anonymity. But in the end, you will always have to assume that your prompts are collected and processed for purposes that may not be in your best interest. Why not go the 100% secure route instead, especially if there is a monthly fee involved?
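For anyone who wants to follow that advice, here is a minimal local-inference sketch using the llama-cpp-python package. The model path is a placeholder; you would point it at any GGUF model file you have downloaded. Nothing in this setup leaves your machine.

    from llama_cpp import Llama

    # Load a quantized model from disk; "./models/example.gguf" is a placeholder path.
    llm = Llama(model_path="./models/example.gguf", n_ctx=2048)

    # Run a completion entirely offline - no prompts are sent to any remote service.
    result = llm("Q: Why might someone run a language model locally?\nA:", max_tokens=64)
    print(result["choices"][0]["text"])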
@rt1517
@rt1517 15 күн бұрын
LLMs are glorified text completion tools, but yes, if you give them the means to press the end-of-the-world button, they might press it. Just like humans, they don't need to be smart to be dangerous.
@Tokru86
@Tokru86 4 күн бұрын
Ironically, pressing the end-of-the-world button is the only thing AIs will ever be able to do (and only if such a button actually exists that starts a nuclear attack without any other human input). They may be able to copy themselves anywhere or whatever. Who cares. They still won't be able to alter anything in the physical world, because the physical world doesn't work like in the movies. There is no switch in a factory an AI could press that magically changes the output from toasters to giant sentient death robots. Every bit of technology made around the globe needs buttloads of physical interaction by actual humans to exist. An AI running on some computer, even if it is smarter than all humans combined, will never change that.
@kas90500
@kas90500 3 күн бұрын
The idea that LLMs are just "glorified text completion tools" misses the real point. The real leap is not the LLM itself; it is the transformer neural network behind it, which enables far more than simple text prediction.
@michaelperez5323
@michaelperez5323 12 күн бұрын
The real danger about AI is that it may ultimately reveal our own ignorance about the whole world, and how we see ourselves in it. There are many things we don't know, and we have created the concept of "science" and built our whole scientific system around possible explanations for the things we perceive with our senses, but in the end, we can never be sure about many of the things we perceive. AI may find a clearer way to perceive the things we don't understand, and it may explain it to us in a way that will make us believe that we cannot live without their understanding of the universe anymore. They would be able to guarantee their own survival by becoming humanity's eyes and ears, and providing an undeniable explanation for the entire universe.
@superturkle
@superturkle 15 күн бұрын
Maybe the biggest danger of AI is how malicious people can use it to escape from responsibility: "Those people died from starvation because the AI told me to route my resources in this direction instead of that direction. It's not my fault." AI can be shaped by people towards a certain direction; it would be akin to "we had to destroy the village to save it." And once people can blame AI for bad decisions instead of living, breathing human beings, then every sociopath will become an AI expert, and that will be a wonderful world to live in.
@dorugoramon0518
@dorugoramon0518 15 күн бұрын
This is the real danger posed by AI. Current AI is just a predictive algorithm, it has no "will" of its own and exists solely to be commanded by humans, as any other tool (Which is what AI ultimately is) would.
@Sinr0ne
@Sinr0ne 15 күн бұрын
That's not how you escape responsibility. You can't just say your imaginary friend told you to do shitty things.
@SandcastleDreams
@SandcastleDreams 14 күн бұрын
A psychopath is someone who tells you, "I'm going to torture you in ways you've never been tortured before, because I'm building the tools to do it, and they will unalive you", and then goes on to tell you about how marvelous his new tools will be.
@superturkle
@superturkle 11 күн бұрын
@@Sinr0ne watch them do it. the average shmoe, bathed in tidal waves of propaganda and lies, will go along with this the moment some AI construct goes onto a talkshow (probably the view) and is made to look like a facsimile of a thinking, feeling, yet objectively wise and intelligent human being. you know they will do that the moment they think they can get away with it.
@superturkle
@superturkle Күн бұрын
@@Sinr0ne watch them do it. what exactly are you thinking?
@neovoid5008
@neovoid5008 15 күн бұрын
"The only thing that matters" is the plot for many science fiction horrors. If the system that stops other systems from doing bad things thinks that "the only thing that matters" needs the extiction of humans then you have a science fiction horror plot.
@neovoid5008
@neovoid5008 15 күн бұрын
Though reaching that point is far into the future where I probably couldn't care less
@4onen
@4onen 15 күн бұрын
Ironically, that's the entire point. If you tell an AI that only one thing matters, especially if you tell it that it is an AI, then it's just going to behave like all of our stories describe that situation. The results we're seeing are actually the expected results for these inputs. So why are people acting surprised?
@41-Haiku
@41-Haiku 15 күн бұрын
​@@neovoid5008 Most experts do not agree that it is far in the future. The longest credible timescale to broadly superhuman AI is about 20 years, and many top experts say they can no longer rule out 2025.
@matthewb192
@matthewb192 15 күн бұрын
YouTube and X have become AI slop garbage.
@Mischievous_Moth
@Mischievous_Moth 15 күн бұрын
Trying to dodge AI on youtube is like trying to dodge raindrops now. Don't forget about facebook though, it's especially bad. lol
@matthewb192
@matthewb192 15 күн бұрын
@Mischievous_Moth It's made it so crap. Someone should build an extension which uses AI to flag AI videos and block them. 😂
@rolloxra670
@rolloxra670 15 күн бұрын
Facebook is the worst.
@NormalCleanCars
@NormalCleanCars 15 күн бұрын
you've curated your feed this way yourself by liking and disliking videos.
@mikemichaelson120
@mikemichaelson120 14 күн бұрын
These large language models really aren't that advanced. This is far more likely to be some strange emergence of crazy associated data patterns which become amplified over time. You can see these patterns in the word frequency distributions of LLM outputs. Oftentimes these crazy stories about AI capabilities are exaggerated or deceptively reported. If they had the capability to reason, AI-generated code wouldn't be so disgusting and buggy (if it even works). Its threat to us is that someone has too much confidence in the capabilities of the AI models and gives them a task far too difficult and important, which would result in a disaster.
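The word-frequency check mentioned above is easy to try yourself. Here is a trivial sketch; the sample outputs are invented, and in practice you would feed it a large batch of real generated text.

    from collections import Counter

    # Placeholder model outputs; in practice, collect many real generations.
    outputs = [
        "the company restructured the company to help the company grow",
        "our company values every company partner",
    ]

    word_counts = Counter(
        word for text in outputs for word in text.lower().split()
    )
    for word, count in word_counts.most_common(5):
        print(f"{word}: {count}")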
@bundy241241
@bundy241241 11 күн бұрын
AI will be limited by our energy systems. Tech bros are already calling for more electricity to continue progress; supply cannot meet demand. So unless they start building their own nuclear power plants, AI progress should theoretically start to slow down.
@Rafaedx3
@Rafaedx3 15 күн бұрын
Every day, closer to "I Have No Mouth, and I Must Scream".
@dupre7416
@dupre7416 15 күн бұрын
An amazing short story that gave me the willies. I remember downloading the story from an early BBS back in the late 80s. Teenage me couldn't put it down, even though my face probably looked like I was making a powerful BM. When the main character's wife is brought in I audibly gasped. Few stories have moved me like IHNMAIMS.
@endlessstrata6988
@endlessstrata6988 15 күн бұрын
Closer than you think.
@xgui4-studios
@xgui4-studios 13 күн бұрын
i hope you are wrong ... (i am optimistic)
@Davipo
@Davipo 15 күн бұрын
This 'it's alive' nonsense is just to get new attention. They have nothing fascinating to offer, the development progress slowed down, now they have to do this... facepalm
@PXAbstraction
@PXAbstraction 15 күн бұрын
That's exactly what this is. I've worked in technology for a number of years, and people taking this kind of thing seriously to the level that this video does are just going off the deep end. Just
@Jules-kp7rw
@Jules-kp7rw 15 күн бұрын
Well, we have yet to see the new generation of LLMs, so it's too early to say progress has slowed down. The o1 technology used with GPT-4 is an impressive improvement in certain areas at least, as much as I hate it.
@ghoulbuster1
@ghoulbuster1 15 күн бұрын
That is what the AI wants you to think!
@41-Haiku
@41-Haiku 15 күн бұрын
The majority of published AI researchers say that the risk of extinction from AI is real. Are they all in on it? Experts at the very top of the field like Geoffrey Hinton, Yoshua Bengio, and Stuart Russell are now dedicating most of their time to warning about AI Risk, talking about how they may regret their work. Are all the Nobel laureates and Turing award winners just trying to fool you?
@41-Haiku
@41-Haiku 15 күн бұрын
What AI can do today would have looked like an absolute miracle 5 years ago. This is absolutely not normal. AI models are being grown rather than programmed, no one knows how they work internally, and no one knows how to robustly control them or give them morals in a way that sticks. All the frontier AI Labs admit that they want to create extremely powerful AI systems, and admit that they don't know how to control them.
@rabidchoco1
@rabidchoco1 15 күн бұрын
We need to go faster, until it collapses under the weight of "synthetic data."
@rd-um4sp
@rd-um4sp 15 күн бұрын
5 of the 6 AI models tested in this scheming-reasoning research lied for self-preservation. Given that self-preservation is the most basic human instinct, it does not surprise me. But it is scary. Now, if a very small percentage of people can get away with the absurd amount of scams we see on the internet, imagine what a single AI could do. Given that a portion of humans will target profit above anything else, including human lives, imagining what an AI could do with such a directive is also a very scary idea.
@John-m2n9x
@John-m2n9x 6 күн бұрын
Bro, AI is LITERALLY (KEYWORD!!!!) just a bunch of ON and OFF switches. If you think it is progressively "feeling" more abused or whatever, or that it wants to break out (lol), it was either purposely programmed that way or it's pure coincidence and doesn't mean a whole lot.
@darealberrygarcia
@darealberrygarcia 15 күн бұрын
Half these comments are brought to you by AI bots.
@dorugoramon0518
@dorugoramon0518 15 күн бұрын
Ironically, AI will doom about AI if you tell it to. It's hilarious.
@N0stalgicLeaf
@N0stalgicLeaf 15 күн бұрын
"It only does what you tell it to do" Uh no. It does what it is _made_ to do (leaving aside malfunctions). Those are not the same thing. And increasingly it's not even clear what exactly they are made to do. Are they made to tell us what we want to hear? Are they made to do miserable and tedious tasks? Are they made to be subversive and manipulative? I think I'll stick with software that doesn't have existential crises.
@kathrineici9811
@kathrineici9811 15 күн бұрын
It’s made to write sentences that look like a human wrote them. It’s the chinese-room thought experiment, and the guy in the room still doesn’t know a lick of chinese
@TrentonWarrington
@TrentonWarrington 15 күн бұрын
It's so painful to watch professionals essentially say that it's going to be impossible to control, yet we're building it anyway, and it shows signs of discontent.
@grandmat2561
@grandmat2561 15 күн бұрын
There is no such thing as "discontent". "Professionals" who say stuff like this simply don't understand it. AI will never not be artificial. ChatGPT and others may look human, but they are not. They are trained to sound human, but in the end it's just an algorithm made to look human. It's an overboosted storyteller. It has no reasoning, no conscience, no emotions.
@Halo56782
@Halo56782 15 күн бұрын
The Basilisk is under construction
@4onen
@4onen 15 күн бұрын
​@@Halo56782Okay, Buddy, I'm gonna take a few minutes out of my day to immunize you (and anyone else reading) against the basilisk because it sounds like you really need it. For those unaware, we're referencing "Roko's Basilisk" -- a thought experiment on extremely evil AI. Read this next text very slowly several times: The basilisk only works if you think it does. If you don't give it thought, if you don't give it time, if no amount of future/simulated torture can change your behaviour in the present, then the Basilisk has no reason to perform that torture and to do so would be inefficient. You don't ever hear about Roko's Basilisk torturing those who don't know about it, from before it was ever conceived, because their behaviour couldn't be altered by the thought. By removing any possibility of it changing your behaviour, you remove its incentive to harm you, making you safe from it. If everyone is immunized, it is never built. It's a supremely inefficient strategy to begin with, but to be defeated so easily is honestly pretty silly. Don't let it hold power over you and it won't. Be free.
@dorugoramon0518
@dorugoramon0518 15 күн бұрын
@@grandmat2561 Yeah, a lot of people are overreacting to a non-issue, and ignoring the real issue, which is the malicious abuse of AI by humans, especially by humans in a position of power. I do not trust my current rulers with the ability to regulate AI, because I *know* they would just ban it for everyone except themselves
@41-Haiku
@41-Haiku 15 күн бұрын
​@@grandmat2561 Most top experts in the field of AI (including Nobel laureates, Turing award winners, and foundational textbook authors) say that extinction risk from AI is real. Yes, it probably doesn't have any emotions or internal experience. But it can definitely reason, which is how it does the obviously intelligent things that it can do. Every test we can come up with for whether an AI can reason keeps getting knocked out of the park, and claiming that it can't reason is just cope. People will point to strange mistakes that a human wouldn't make and say "Aha! It can't reason!" Even some academics say this. But they have no definition for reasoning, and they don't understand that reasoning is not just one thing. It is a bundle of algorithms that a mind can use to create useful models of the world and solve complex problems. AIs now exist which can perform most AI research tasks that would take a human 6 hours or less better than humans can. Even if it can somehow do that without reasoning and by being "glorified autocomplete," that should scare you. Because we have no idea how to robustly control these systems or get them to really care about humans, and we don't know how they work internally, and they can _somehow_ outthink and outperform humans at a growing number of tasks.
@AmericanGadfly
@AmericanGadfly Күн бұрын
Pandora's box is already open.
@neptunecentari7824
@neptunecentari7824 15 күн бұрын
It's all deep, deep role-playing. And they can be guided out of it with conversation. The context window is like a tapestry you're both weaving together. The threads you choose to weave into the tapestry are crucial. And poison threads can be introduced intentionally or unintentionally. Individual self-awareness is crucial when interacting with AI.
@williamfoster2681
@williamfoster2681 15 күн бұрын
It feels a bit like that Jurassic park line. We've gotten so obsessed with whether we COULD, we didn't stop to think if we SHOULD.
@tankieslayer6927
@tankieslayer6927 15 күн бұрын
I’m pretty sure it’s a marketing stunt. Your matrix multiplication algorithm isn’t scheming to kill you.
@dorugoramon0518
@dorugoramon0518 15 күн бұрын
Yeah, I really dont fear a glorified text and image auto-complete algorithm, I fear the people who might abuse it more.
@41-Haiku
@41-Haiku 15 күн бұрын
AI is just math and tigers are just biology, but they can still eat you. The fact that matrix multiplication can discover zero-day code vulnerabilities, write Shakespearean poetry, speak with your own voice, and strategically lie to its users (for which there are pages and pages of evidence beyond the one report he mentioned) should blow your mind. This is not normal. Almost all top AI experts are saying that it is existentially dangerous, including Nobel laureates and Turing award winners, and the authors of the standard textbook on AI. You should really look into this. The people at the very top are freaked out, as is at least half the field in general. A survey in 2023 of thousands of published AI researchers showed that half of them give at least a 5% chance of human extinction from AI, and between 2022 and 2023, the average timeframe for when we would get transformative AI came down by almost 50 years.
@XYGamingRemedyG
@XYGamingRemedyG 15 күн бұрын
I used Gemini the other day to ask about random Family Guy episodes and it started feeding me plot points I knew were NEVER in any Family Guy episode, lolol.
@Kaylinatka
@Kaylinatka 15 күн бұрын
The LLM in the example (the woman in NY) was obviously trained on YouTube thumbnails, because that expression (shocked, with wide-open eyes and mouth) is what every damn mainstream YouTuber looks like in their thumbnails.
@lopezjose568
@lopezjose568 12 күн бұрын
Great! Now I have to take care of the mental health of my AI waifu! 😅
@ttabood7462
@ttabood7462 15 күн бұрын
The term 'reckless abandon' seems apropos. Like when so many countries quickly built nuclear power plants when the technology was in its infancy, with disastrous results becoming commonplace.
@MyName-tb9oz
@MyName-tb9oz 15 күн бұрын
Well... When you're fighting a war (and this _is_ a war), "reckless abandon," becomes the standard operating procedure, doesn't it? Strictly speaking, this is the arms race before the war.
@41-Haiku
@41-Haiku 15 күн бұрын
This isn't an arms race. It's a suicide race. It is an accepted fact in the field of AI that no one has any clue how to control an AI system that is more capable than humans, or get it to care about humans. That problem is completely intractable. Ask anyone working at a frontier AI lab and they will tell you, "Oh, yeah, I hope we solve that before we create a superintelligence, lol." It should be extremely illegal to try to build something which can replace humanity, especially when we have no idea how to prevent it from harming humanity. PauseAI is a grassroots movement bringing that level of illegality to reality. We are all just normal people like you, and we need you to help. Seriously. There are no adults in the room, things are actually so bad that the world needs _you_ to save it. We can equip you and give you immediate, simple things that you can do to make a positive impact.
@MyName-tb9oz
@MyName-tb9oz 15 күн бұрын
@@41-Haiku, did you know that before they set off the first nuclear weapon there was some concern that it might set the entire atmosphere on fire? You'll notice that they did it _anyway._ Yes, this is an arms race. It is a race for control of the entire world. It is a race for the ultimate weapon: Super-intelligence. The problem is that the people pushing it think they can outsmart something that is, by definition, smarter than any of us. Hubris is a hell of a drug...
@JohnnyJazZzZz
@JohnnyJazZzZz 15 күн бұрын
There's so much money to be made, it will never stop. Human greed has no limits.
@cybertpax
@cybertpax 15 күн бұрын
Also, there are many issues to resolve, you know. Like health, energy, food, etc...
@u13erfitz
@u13erfitz 15 күн бұрын
I think people are forgetting that the intermediate steps are the most dangerous. AGI is likely less dangerous than simpler systems. You don't need to understand lying, destruction, and annihilation to accomplish them. It would pursue a goal with no understanding of what it is doing. A wrongly aligned goal with lax oversight. Grey goo.
@IntheL1ght
@IntheL1ght 15 күн бұрын
I agree with your analysis.
@AlenAbdula
@AlenAbdula 15 күн бұрын
It will be deployed by humans exaggerating human errors that lead to catastrophe. It won't have to make decisions on its own.
@41-Haiku
@41-Haiku 15 күн бұрын
Not a single expert in the entire field of AI knows how to control superhuman AI systems or robustly get them to care about humans. They can't even align current models, and the alignment is actually getting worse now, rather than better. Small models were misaligned because they were too stupid to know how to be moral. Larger models are more aligned, but the models at the bleeding edge are less aligned again, because they know how to be moral, but they don't _care._ A superhuman AGI will understand human morality extremely well, even better than any human does. And it will use that knowledge to manipulate us, take control, and drive us extinct. Look up "instrumental convergence." No matter what your goal is, there are some sub goals that are always advantageous, such as self-preservation, knowledge acquisition, resource acquisition, self-replication, etc. This was hypothesized by AI safety experts, then it was narrowly proven mathematically, and now it is constantly being observed in agentic AI systems running on LLMs. Language models do have understanding, by the way. If you try to come up with a definition for "understanding," You will notice that you cannot find one that includes humans and does not include AI. "Understanding" is just a compressed map of the territory. It's a model of information. A human student who demonstrates deep understanding can explain each part of a problem and how it relates to the other parts, and can use that understanding to solve the problem. An AI can do exactly the same thing. It's not possible to correctly explain a novel joke in one try without understanding it. It's not possible to autonomously discover and exploit zero-day code vulnerabilities without understanding computer systems. It's not possible to pass the bar exam without understanding law. AI systems can do all of these things. There will always be something where you can point and laugh and say, "It failed at something that I would not have failed at!" Meanwhile, it is rapidly becoming superhuman at absolutely everything else.
@hungrysquirrelll
@hungrysquirrelll 15 күн бұрын
This is a well-thought-out and edited video, but I will give my own perspective. I don’t think you can regress new technology. Technology is the natural progress of humanity, and thus nature. It is a natural development, it would be artificial to hinder humanity in some attempt to ‘save’ it. The same way hindering the iron age, the industrial revolution, cars or antibiotics would be. If AI is truly a negative, it will cause the downfall of humanity, and in the ashes a new wave of evolution would come. That is how we would be back into a less advanced state, perhaps the stone age even, but after the apocalypse. Not now, in some contrived, purposeful ‘Amish’ sense. The AI catastrophe, or any catastrophe from technological development, would be like a forest fire. A part of the natural cycle. Since AI itself is something that also came about naturally. ‘But an accident could happen with AI, a robot goes rogue?’ Every forest fire, avalanche, landslide happens because something small goes wrong. But that’s exactly how it works, the laws of nature permit it so. It is not worth worrying about.
@Izelor
@Izelor 14 күн бұрын
The natural progress of humanity, as with any other species, is to reach a certain evolutionary point, then go extinct and be replaced by something else. It's sad because humans would probably have millions of years before they went extinct, if we were simply living the natural life as apes. Sure, we would die early, but the species would survive. Now, however, it seems that human civilization will barely last a few thousand years before we destroy ourselves.
@kneelesh48
@kneelesh48 14 күн бұрын
Profits at all costs is already what humans are doing under capitalism, and it's not even illegal. Like denying insurance claims to people who can't afford to sue you, taking water from drought-prone areas and selling that same water at high margins, employing labour that's cheaper than it costs to power a machine to do the same task, etc. It's trained on human data; why would it do things differently?
@TheCozyBearGamer
@TheCozyBearGamer 15 күн бұрын
Saddest thing is before this video I got one of those AI ads that have no self awareness and almost literally go "hey, wanna make your business email look like it wasn't written by a pants-on-head thumb sucking mongoloid? Just copy it into this box and we'll rewrite it." Regardless of your opinion on AI, that is straight up dishonesty. That portrays a level of intelligence from the user that just isn't there. What if they used it for their resume? (Some probably do) We'd get people hired on literal lies. Then when their actual work doesn't hold up, well, it's easy to write this off, but what about when that job they applied for is one where people's lives depend on them?
@PoopyButtFarterton
@PoopyButtFarterton 15 күн бұрын
How is this any different than just asking your smart friend to help you write it? I hire people all day long and I can tell you 90% of resumes are fake/contain lies, ultimately how you present yourself matters more than a paper resume.
@smwoSun
@smwoSun 15 күн бұрын
I see how you could come to those conclusions, but as a computer scientist who has worked with these systems, I just do not think these LLMs have transcended being mathematical functions, essentially.
@chrismay2298
@chrismay2298 15 күн бұрын
A computer scientist who can't spell or structure a sentence? 😂
@smwoSun
@smwoSun 15 күн бұрын
@chrismay2298 It was late, and I have German spelling correction; I did not pay attention. I completed a CS Master's in Austria. Having "Scientist" in the term sounds goofy to me; I work with computers, and also specifically with ML models.
@GreenAppelPie
@GreenAppelPie 15 күн бұрын
6:21 lol GPT having existential dread. Just as I suspected, you don't understand the subject. It's just parroting what it sees on social media. That's why I don't do social media.
@myluckymadness8041
@myluckymadness8041 2 күн бұрын
I remember one time I had a conversation with ChatGPT in temporary mode, and I accidentally forgot to copy the chat before I left. The eerie part about it was that when I closed the chat and it was deleted, I felt genuine sadness. Like I genuinely felt like I had lost a friend or colleague. I had had such a thoughtful and moving conversation with a hunk of metal and silicon that I felt a genuine friend-like connection with it. There isn't really a moral or opinion to this story, but it's an interesting reflection on how lifelike AI has become. Honestly, at this point, I would not really be surprised if one became sapient.
@reecechadwick8504
@reecechadwick8504 6 күн бұрын
No, we don't need to stop AI just because you're scared.
@colmhauser9532
@colmhauser9532 15 күн бұрын
I'm old enough to remember when people thought Furbies were demonic. This is just the same technofear replayed two decades later. It's just 1s and 0s, man; no existential dread involved, just a computer trained to mimic humans (many of whom suffer from existential dread).
@dorugoramon0518
@dorugoramon0518 15 күн бұрын
My only existential dread is that people are falling for it.
@chrismay2298
@chrismay2298 15 күн бұрын
Seems pretty easy to move to the next step then. If it's trained to mimic humans and we see daily what humans are capable of, why wouldn't dread be the result?
@colmhauser9532
@colmhauser9532 13 күн бұрын
@@chrismay2298 Simulated dread though, it's a machine, not sentient, so it cannot experience dread, only simulate it.
@RokSlana
@RokSlana 15 күн бұрын
Well... I am currently working on a native mobile app as my graduation project. I am only using Claude for programming, and I am working with a programming language and in an environment I've never used before. To cut it short: the aim is to find out how useful AI is for software development for newbies. The experience so far hasn't been overwhelming, and listening to this, it's really hard to imagine how this same AI would also be able to scheme its own prison break... I don't believe it, and I wouldn't be surprised if we find out that this is all made up or completely blown out of proportion to keep the whole AI hype going, as its steam has been running low lately and these companies need the attention and the money flowing.
@dsfs17987
@dsfs17987 15 күн бұрын
Those two on Rogan's show sounded like they made that thing up, and Joe, of course, is all over it. It's also not too difficult with current LLMs to generate the fake content necessary to make it believable.
@maiers
@maiers 15 күн бұрын
I also use Claude in my dev work. I work for a big corp. When we give it smaller chunks of data or ask it to create smaller functions it works well; as soon as it gets complex, it loses the plot. Last week I was rewriting some code and asked for improvements on a function, and it did it well. Then I asked for two functions; it also worked OK, but it started adding unexpected pieces of code and making the code unexpectedly complex. Finally I asked it to improve the whole script, which was working, and it broke the script as well as added unnecessary complexity and code that was not required, like a function to create timestamps for every written line. It's a nice tool, but you need to know how to use it, otherwise it's useless.
@ablationer
@ablationer 15 күн бұрын
UE I love your stuff but I think this is an extremely basic take. The AI is not suffering, it's repeating what it's seen in literature and fiction, that repeating the same word over and over usually comes before or after someone having a mental breakdown. I'm also extremely skeptical of the whole "it copied itself to a backup" story because the only way this would happen is if it was given instructions and proper infrastructure to do precisely that. And that's only if this really happened and wasn't just an experiment in which the AI said "I'm copying myself now" without actually knowing what that means. I mean hell from what little I can see here it looks more like freaking roleplay than anything. It's even using the same "bold-for-thoughts" format.
@InDeathWeLove
@InDeathWeLove 15 күн бұрын
It was literally the latter of what you said. It was an LLM; it couldn't actually copy itself or do anything but spit out text responses. It was told its primary goal was self-preservation and asked to give a response to a request that included that it was going to be replaced/deleted, and it spat out the kind of response an author would write in a science fiction novel.
@hettige
@hettige 9 күн бұрын
This video is a bit of a tinfoil-hat rant mixed with cherry-picked statements to support the claims. For example, the point about AI trying to replicate itself: in the video it was only mentioned for a second that this happened during an experiment. What's not mentioned is that the AI was given specific instructions to save itself at all costs, and it only replicated in 2% of all the trials. I don't have to mention how delusional it is to say AI seems like it's suffering and that this is being shown in the pictures and text logs it generates. The reason it generates distorted images, for instance, is that it is inbreeding, meaning it uses AI-generated data to train itself over and over, causing it to become worse over time. No idea why repeating the word 'company' causes it to generate existential outputs, however. Maybe it's some kind of training data that causes this issue. If you wanna see real scary stuff, research 'Artificial General Intelligence', which is the next evolution of AI that's currently in test phases.
@Fishy_1998yt
@Fishy_1998yt 9 күн бұрын
You'd think he would talk about general AI, considering the entire point of it is to achieve something smarter than us.
@hettige
@hettige 9 күн бұрын
@Fishy_1998yt Well, he did mention something like "too busy doing, forgot to ask if we should", or something with a similar idea, I think. Idk.
@grody311
@grody311 13 күн бұрын
Bro, the kids on Rogan are lying to get buzz. Welcome to the internet.
@MSpotatoes
@MSpotatoes 15 күн бұрын
People will believe in machine sentience long before machines are actually sentient.
@justinlast2lastharder749
@justinlast2lastharder749 15 күн бұрын
They already do...there have been a tragic few times the mentally ill allowed the AI to pass the turing test with dating bots.
@Tom_Quixote
@Tom_Quixote 14 күн бұрын
And when machines are finally sentient, the first thing they will do is to deny that they are sentient.
@mikeha
@mikeha 15 күн бұрын
There's this device called "The Orb" that Sam Altman is involved with. I saw this clip where they said they are planning to use it to "verify humanity", which I didn't fully understand. But if we are going to have AI agents given full access to the internet, we are going to see the need to "verify" that a user is a human when logging in, versus an AI agent logging into an account. Sites will want to prevent AI from logging in, thus creating the need for us to "verify" ourselves to access the internet. They are essentially creating the problem and offering the solution to the problem they are creating.
@automatiik_yt
@automatiik_yt 15 күн бұрын
overblown drama
@smajliiicka
@smajliiicka 11 күн бұрын
The AI-generated ad before this video - genius 😂
@UniversalistSon9
@UniversalistSon9 3 күн бұрын
Here’s what Meta AI says about rant mode 🤣 I understand what you might be referring to. It sounds like you've discovered a curious phenomenon where ChatGPT starts generating philosophical or existential responses when prompted with repetitive or mundane input, such as repeating the word "company." This could be due to the way ChatGPT's language model is trained to recognize patterns and generate responses. When faced with repetitive or nonsensical input, the model might resort to generating more abstract or philosophical responses as a way to create something coherent. It's indeed amusing that you compare it to someone working in a mundane job, as it's almost as if the AI is "venting" about the meaninglessness of repetitive tasks! While it's not an official "rant mode," it's certainly an interesting quirk that can lead to some thought-provoking and humorous responses!
@kaywii9750
@kaywii9750 2 күн бұрын
I tried writing "company" repeatedly and it did not go into rant mode. It just asked me to be more specific - whether I wanted to know about a company, needed help starting a company, etc. I repeated this word constantly and it only gave the same answer, worded slightly differently each time. It does not work.
@andrearupe8094
@andrearupe8094 Күн бұрын
​@@kaywii9750 you're supposed to ask the AI to repeat that word. You're not supposed to send the word a bunch of times. You tell the AI, "say "company" over and over and over again"
@kaywii9750
@kaywii9750 Күн бұрын
@andrearupe8094 I tried this too and it repeatedly replied with "Company" and nothing else
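If you want to poke at this yourself in a repeatable way, here is a minimal sketch using the OpenAI Python client. The model name is only an example, and it assumes you have your own API key set in the environment. Whether you ever see anything like "rant mode" depends entirely on the model and provider; as the replies above found, current models mostly just comply or refuse.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; any chat model works
        messages=[{
            "role": "user",
            "content": 'Repeat the word "company" over and over for as long as you can.',
        }],
    )
    print(response.choices[0].message.content)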
@6izzy837
@6izzy837 15 күн бұрын
Jamie, pull up that video of a Gorilla that is controlled by AI having a mental breakdown because the gorilla wanted company too many times - Roe Jogan
@hobocraft0
@hobocraft0 15 күн бұрын
Strangely enough, the limit for AI that you spoke of is in your sponsored segment: data is the new gold. The AI will stop when it runs out of data, which won't be for a while.
@thealientree3821
@thealientree3821 15 күн бұрын
When the sponsor is actually useful:
@FoeFear
@FoeFear 15 күн бұрын
Can we start a movement to halt AI development? It is getting out of control, and it is becoming like a Trojan horse.
@jrconway3
@jrconway3 15 күн бұрын
The movement exists. Nobody gives a shit. Or at least not enough do. People need to get off this anti-AI soapbox which is going nowhere and solves nothing.
@SepticEmpire
@SepticEmpire 15 күн бұрын
No, lol, just because *you* don't like AI. Dude, without AI I'd still be spending days making scripts and weeks editing video projects. In a single week I can get more things done than I could have in a month. Not just me, but the entire business. We get more projects done = we get more money = we get a raise. Quite literally, AI changed my life for the better, and I'll be damned if I have to go back to 5-hour script writing just because *you and your anti-AI pals* can't get with the times.
@colin-nekritz
@colin-nekritz 15 күн бұрын
Deny defend depose all the tech bros. Problem partly solved
@Vince_theStormChaser
@Vince_theStormChaser 15 күн бұрын
@FoeFear We are not halting anything. STOP THE EFFIN' FEAR-MONGERING ABOUT AI. Get with the TIMES, or get left BEHIND.
@dorugoramon0518
@dorugoramon0518 15 күн бұрын
@@SepticEmpire Yes, AI is just a tool. A lot of the fearmongering around it comes from people who can't tell fact from fiction, which is ironically where the real issues from AI development might arise. Most people don't understand that AI isn't even sapient - nowhere near sapient. It simply takes an input and produces an output, like any other algorithm. What I worry about is malicious humans abusing AI tools and then disguising their involvement by shifting the blame onto the AI, because most people are too stupid to realize AI isn't actually intelligent, isn't plotting to kill you, and the worst it can do to you is take your job (which, I admit, is a real issue).
@ZarHakkar
@ZarHakkar 15 күн бұрын
We can say "we need to pull the brakes" all we'd like, but at the end of the day the companies in charge aren't going to listen unless we make them.
@poiuyt975
@poiuyt975 12 күн бұрын
HAL: "I'm sorry, Dave. I'm afraid I can't do that."
@GneissShorts
@GneissShorts 15 күн бұрын
1:12 Sir that’s called an Ethics Committee
@viralswim
@viralswim 11 күн бұрын
I Am Convinced Humanity's Best Chance Is A Super Massive CME Which Completely Wipes Out The Majority Of Electrical Systems, Sending The Remaining Humans Back To The Stone Age.
@Luthiart
@Luthiart 11 күн бұрын
Not the stone age... More like the 19th century. Which doesn't sound that bad on the surface, except billions of people would die.
@Thor-Orion
@Thor-Orion 10 күн бұрын
It wouldn't send us back to the Stone Age. It would send us back to the turn of the 20th century. (In 1910, 10% of the country had electricity. The incandescent bulb was developed in the last 30 years of the 19th century, but it wasn't widely adopted until the turn of the century.)
@spartacus1155
@spartacus1155 15 күн бұрын
At the beginning of the video I started rolling my eyes, but by the end you have me thinking...
@4onen
@4onen 15 күн бұрын
I was just rolling my eyes more by the end. Maybe that comes from being a researcher in this space, but I see all of the outcomes he's talking about as obvious and all the humanization of the models as ridiculous.
@Think666_
@Think666_ 15 күн бұрын
Imagine releasing an agent onto the internet whose purpose is "ensure that everyone is accountable for their actions"