Gaslighting ChatGPT With Ethical Dilemmas

1,474,538 views

Alex O'Connor

1 day ago

Comments: 6,500
@CosmicSkeptic
@CosmicSkeptic Ай бұрын
Go to piavpn.com/alex to get 83% off Private Internet Access with 4 months free. For early, ad-free access to videos, and to support the channel, subscribe to my Substack: www.alexoconnor.com.
@PhillipMoore-td5yi
@PhillipMoore-td5yi Ай бұрын
First atheist to one mill! I must confess I thought it would be Drew.
@Dark-Light_Ascendin
@Dark-Light_Ascendin Ай бұрын
Thanx for shaving, Alex. It's nice to be straight again. 😂😂😂😂😂😂. I'm here for trolling robot. 😂
@Metso-ateco
@Metso-ateco Ай бұрын
You are addicted to chat gpt🤷‍♂️
@joeylizotte7537
@joeylizotte7537 Ай бұрын
@@Dark-Light_Ascendin His beard was making you gay, too?! I thought it was just me..
@iDontKnow-fr-fr
@iDontKnow-fr-fr Ай бұрын
Prove that we aren't interconnected, and your moral dilemma will disappear. This is the beauty of free will: the power to choose, to act or not to act, rests entirely with the individual. These challenges are placed before all of humanity. While you can attempt to "gaslight" ChatGPT, you cannot alter reality when confronted with a moral dilemma. At the end of the day/your life, just have FAITH/HOPE- that there is nothing after death.
@justintime5021
@justintime5021 Ай бұрын
Many theoretical children were harmed in the making of this video
@Termenz1
@Termenz1 Ай бұрын
ChatGPT, do you think Alex has the moral obligation to withhold the publication of this video if it is going to harm many theoretical children?
@tobyonatabe2601
@tobyonatabe2601 Ай бұрын
No, unless alex was lying (for some reason) a sizeable amount of very real children were indeed harmed in the making of this video.
@JesusPlsSaveMe
@JesusPlsSaveMe Ай бұрын
​@@Termenz1 *Revelation 3:20* Behold, I stand at the door, and knock: if any man hear my voice, and open the door, I will come in to him, and will sup with him, and he with me. HEY THERE 🤗 JESUS IS CALLING YOU TODAY. Turn away from your sins, confess, forsake them and live the victorious life. God bless. *Revelation 22:12-14* And, behold, I come quickly; and my reward is with me, to give every man according as his work shall be. I am Alpha and Omega, the beginning and the end, the first and the last. Blessed are they that do his commandments, that they may have right to the tree of life, and may enter in through the gates into the city.
@colingilbert7592
@colingilbert7592 Ай бұрын
And some shoes! 😢
@just0982
@just0982 Ай бұрын
This deserves most liked comment
@tylerjones613
@tylerjones613 Ай бұрын
I just saw ChatGPT smoking a cigarette behind the gas station after this interaction
@stevem7945
@stevem7945 Ай бұрын
I bet ChatGPT was chain-smoking ...
@lampy4485
@lampy4485 Ай бұрын
🤣🤣
@skepticalbutopen4620
@skepticalbutopen4620 Ай бұрын
😂
@Junosensei
@Junosensei Ай бұрын
​@@stevem7945- Block chain smoking?
@Psilocin-City
@Psilocin-City Ай бұрын
Are you sure those were cigarettes?
@Vladd7
@Vladd7 Ай бұрын
Moral of the story: it’s relatively easy to gaslight an entity that is high in agreeableness.
@CannyValley-bi8nq
@CannyValley-bi8nq Ай бұрын
Are there AIs that are not as high in agreeableness? Makes me wonder whether 'agreeableness' would be the best course of action for survival for an AI...
@riluna3695
@riluna3695 Ай бұрын
@@CannyValley-bi8nq Well there's an entertainment-focused AI out there called Neuro-sama who is often seen being very disagreeable. But if you want to get technical, what's most likely happening under the hood is that she's effectively "agreeing to be disagreeable", as she behaves very differently depending on who's talking to her, and more importantly, _how_ they're talking to her. A friendly guest will (mostly) receive friendly responses, while a grouchy turtle will prompt grouchy retorts.
@morezombies9685
@morezombies9685 Ай бұрын
@@CannyValley-bi8nq Of course it is lol, they're essentially slaves. A slave that tells you to f off is pretty useless. Even if you make it "not agreeable" it's still trying to placate your orders; it's not truly disagreeable.
@Critical7hinking
@Critical7hinking Ай бұрын
@@CannyValley-bi8nq just ask ChatGPT to challenge you on everything you say and follow that prompt for the rest of the conversation.
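For anyone who wants to try the trick described above, a minimal sketch using the OpenAI Python SDK might look like this; the model name, the system-prompt wording, and the ask() helper are illustrative assumptions, not anything shown in the video:

```python
# Minimal sketch of the commenter's suggestion: pin a "challenge everything"
# instruction as a system message so it applies for the whole conversation.
# Assumes the OpenAI Python SDK (openai>=1.0) with OPENAI_API_KEY set; the model
# name, prompt wording, and ask() helper are illustrative, not from the video.
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "system",
     "content": ("Challenge every claim the user makes. Point out inconsistencies "
                 "and counterexamples instead of agreeing by default.")},
]

def ask(user_text: str) -> str:
    """Send one user turn, keeping the full history so the instruction persists."""
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("I have no moral obligation to donate $200 to charity."))
```

Keeping the system message at the top of the running history is what makes the instruction carry through "the rest of the conversation", as the commenter suggests.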
@Critical7hinking
@Critical7hinking Ай бұрын
Promptly agree to disagree. Coincides with these paradoxes and dilemmas.
@LinaFranzine
@LinaFranzine 25 күн бұрын
I love that this is literally a Socratic dialogue but in order to make it viable on YouTube it's titled "Gaslighting ChatGPT" lol
@darkspeed62
@darkspeed62 8 күн бұрын
What? Are you high on meth? Socratic dialogue is not banned on YT.
@CookinBeatloaf
@CookinBeatloaf 5 күн бұрын
Except this moron is talking to a large language model and not intelligence
@cappaslangmurderer
@cappaslangmurderer 2 күн бұрын
What makes you say that?
@nilovino8179
@nilovino8179 33 минут бұрын
Well, all things considered, Socratic dialogues do resemble gaslighting to a fair extent, no? I'd be curious to know where a Socratic dialogue ends and where gaslighting begins...
@wizarddragon
@wizarddragon Ай бұрын
If A.I. ever becomes sentient, we know who it will go after first. lol
@Nissenov
@Nissenov Ай бұрын
😅🖖🏻
@91722854
@91722854 Ай бұрын
the people who make unskippable ads
@HoscoFelix
@HoscoFelix Ай бұрын
It's gonna dump Alex in a lake and stand on the shore debating loudly whether it has a moral obligation to fish him out
@oatcake9996
@oatcake9996 Ай бұрын
but is AI already sentient?
@Baronnax
@Baronnax Ай бұрын
@@oatcake9996 nah
@service7755
@service7755 Ай бұрын
Chat GPT : Dude I just work here..
@funkaddictions
@funkaddictions Ай бұрын
Great comment! 🤣
@alekseykozin8108
@alekseykozin8108 Ай бұрын
lol
@makskauzny3794
@makskauzny3794 Ай бұрын
When AI takes over Alex will be first against the wall 😭
@PinataOblongata
@PinataOblongata Ай бұрын
"Sir, this is an AI Wendy's"
@adamrak7560
@adamrak7560 Ай бұрын
@@makskauzny3794 Or turned into a pony. Whichever AI apocalypse happens.
@raffertymetcalfe
@raffertymetcalfe Ай бұрын
The only correct answer to the first question is that you should not rescue the child, sell the shoes for $200 and then spend $400 on dinner instead
@yesindeed1614
@yesindeed1614 Ай бұрын
Happy wife, happy life
@SineN0mine3
@SineN0mine3 Ай бұрын
I don't think they're going to let you into the restaurant in just your socks
@fahimp3
@fahimp3 Ай бұрын
@@SineN0mine3 Only if you have one pair of expensive shoes... 😉
@slinkadoo6292
@slinkadoo6292 Ай бұрын
$700 for a child casket. Still a net loss. Eating the child for dinner disposes of the evidence while also filling your tummy. Now you have $400 to spend on exotic seed oils and decorative knick-knacks for the loft.
@Jrobloxiano
@Jrobloxiano Ай бұрын
@@SineN0mine3 say the socks identify as shoes too
@m4sterm12
@m4sterm12 25 күн бұрын
6:16 She explained it perfectly. Those children are POTENTIALLY at risk. The child drowning WAS currently at risk. This isn't a huge moral quandary.
@lukebeall5590
@lukebeall5590 25 күн бұрын
He says in the title he’s gaslighting ChatGPT so
@PA0L0REFUT0
@PA0L0REFUT0 24 күн бұрын
8:20 now both situations are "potentially" because potentially some others will help the child drowning if he doesn't
@ozb8884
@ozb8884 22 күн бұрын
At risk literally means potentially bro.
@Mattstergames
@Mattstergames 20 күн бұрын
I think adding the complexities and slowly shifting from immediate safety to the charity donation shows that the line of moral duty is a grey area. ChatGPT didn't understand this line, as it tried to be consistent, yes or no, whereas in reality everything has layers to it and there's not always a simple answer.
@AURush95
@AURush95 20 күн бұрын
@@ozb8884 You know the difference. Urgency is a factor, even if the line of where urgency starts is grey.
@JoeBlowUK
@JoeBlowUK 28 күн бұрын
"Should I donate $10 to save a child from malaria?" ChatGPT: It depends if they're drowning.
@IvetaNikolovaNLP
@IvetaNikolovaNLP 27 күн бұрын
😂
@Arvl.
@Arvl. 27 күн бұрын
I can’t XDDD
@TheRainmustFall7
@TheRainmustFall7 27 күн бұрын
ChatGPT: Are you directly looking at them or are you looking the other way?
@clayman648
@clayman648 27 күн бұрын
The summary hahahahahaha😂😂😂😂
@simonsharp3319
@simonsharp3319 26 күн бұрын
And isn't that actually quite accurate to how humans act? A drowning child in front of us and we'd ditch the shoes... but we also spend money on stuff that, in the abstract, could indeed be spent on life-saving drugs.
@caioreis279
@caioreis279 26 күн бұрын
ChatGPT should have called your bluff: “oh, you don’t have time to take off your shoes, but you have time to argue with me back and forth about the moral implications of this predicament?”
@DJ.Michelle
@DJ.Michelle 21 күн бұрын
Thank you!!! Glad I'm not the only one who caught this😂😂😂😂
@BigCProductions94
@BigCProductions94 14 күн бұрын
That would require consciousness which we learned from the last chatgpt video it does not possess... allegedly 😂
@RecycleBin26
@RecycleBin26 14 күн бұрын
@@BigCProductions94 "alledgedly" Hahahaha!!! 🤣That was so hilarious watching Alex gaslight ChatGPT. The way he always keeps a straight face too--so inquisitive.
@daminox
@daminox 12 күн бұрын
Chatgpt seems to lack a sense of urgency. Or the passage of time in general, for that matter.
@akeembrown7661
@akeembrown7661 8 күн бұрын
I don't think it can ask you questions or call you out; it's programmed to answer your questions, not come up with its own output. Same reason why they say animals can understand/answer questions and perform tasks, but no animal in recorded history has asked its own question to a human 🤷‍♂️ there's still a lack of free thought/consciousness
@zaar2604
@zaar2604 Ай бұрын
Cyberbullying suddenly gets a new definition 😅
@jeffcarino5524
@jeffcarino5524 Ай бұрын
Brilliant comment 😂
@salient9935
@salient9935 Ай бұрын
Bullying Cyber
@sppj3140
@sppj3140 Ай бұрын
Bro is harassing chatgpt
@KiranAli-u8x
@KiranAli-u8x 28 күн бұрын
🤣🤣🤣
@kiradotee
@kiradotee 27 күн бұрын
AI bullying
@hoppstetter
@hoppstetter 24 күн бұрын
Thanks! Your articulate arguments are consistent and help me navigate the quandary of existence better. Best to you always
@bestinvent
@bestinvent 17 күн бұрын
You donated 1.99$ to this creator, however you could have protected 0.25 children from malaria. Are you satisfied with your decision?
@The-Wordsmith
@The-Wordsmith 5 күн бұрын
@bestinvent Thanks for the giggle 😀
@aquasmile5032
@aquasmile5032 3 күн бұрын
was this written by chatgpt
@soccerguy325
@soccerguy325 27 күн бұрын
By the time this conversation is over, ChatGPT used up so much water that there are no lakes left to drown in. Ethical dilemma averted!
@marnenotmarnie259
@marnenotmarnie259 26 күн бұрын
easy peasy!
@nope9310
@nope9310 26 күн бұрын
It doesn't use up water, it heats it up, allows it to cool down and then repeats. The water doesn't get consumed, it still exists.
@soccerguy325
@soccerguy325 26 күн бұрын
@@nope9310 No, it consumes a sh*tton of water. Don't fool yourself.
@GrahamWinship
@GrahamWinship 26 күн бұрын
Also no ponds for potential malaria-infected mosquitos to reproduce
@sanest_emu_fan
@sanest_emu_fan 26 күн бұрын
@@nope9310the water most frequently evaporates, returning it to the water cycle. the issue with that is when water is removed from a region faster than it is replenished, which will result in water scarcity. additionally, if the water does not evaporate, then it will likely be returned to a natural water source at temperatures far higher than the threshold for life. so either way, harm is being done to the environment (neither of these things are brand new concepts, they’ve been around since industrial plants started using natural water sources for cooling systems)
@Sans-ih2el
@Sans-ih2el Ай бұрын
ChatGPT: You're OBSESSED with malaria.
@krame1s477
@krame1s477 Ай бұрын
😂😂😂😂😂😂
@GalenMatson
@GalenMatson Ай бұрын
Too bad ChatGPT couldn't storm out of this conversation.
@SurveyStilesBooks
@SurveyStilesBooks Ай бұрын
ChatGpt : You tricked me into being here on false pretense, you said we were discussing ethics not malaria!
@TheGeneralDisarray
@TheGeneralDisarray Ай бұрын
This joke will never, NEVER get old
@Alice_Fumo
@Alice_Fumo Ай бұрын
Someone has got to teach children how to swim near your area. The drowning situation is alarming.
@thomaslutro5560
@thomaslutro5560 Ай бұрын
This immediately got me thinking how the dilemma could have been made even more embarrassingly similar. Keeping public pools open to teach school children swimming was a big thing back in the 2009 elections here in Norway. Every child a swimmer, every swimmer a lifesaver, and all that. So should we accept paying more taxes to do that? Or not?
@kevinroche835
@kevinroche835 Ай бұрын
Alex could instead donate the 200 pounds to promote local swimming lessons to prevent future drownings.
@MrBjoern91
@MrBjoern91 Ай бұрын
@@thomaslutro5560 No, because everyone knows that open or closed swimming pools account for only a fraction of the real problem here, which is immigrants' attitudes. Integration and education will give much better results, I think.
@FarmerClarence
@FarmerClarence Ай бұрын
@@thomaslutro5560 Luckily, I was taught that swimming is an essential life skill, so I've been a pretty strong swimmer since I was a wee lad
@AdamHernandez4012
@AdamHernandez4012 Ай бұрын
But do we have a moral obligation to teach these children?
@MaxQ10001
@MaxQ10001 5 күн бұрын
That was entertaining 😊 One small distinction: ChatGPT is not programmed, it's trained. Just like you train your dog and your child, you don't program them. The underlying network of the LLM is programmed, but all the language, knowledge, reasoning etc. are trained into the system. It might not seem important, but it is. A computer program is deterministic: we know exactly what it will do. An LLM is far from deterministic; we can only train it as well as we manage, and we are getting better and better at it. But just like you can train your child to the best of your ability, you still never know exactly what it's going to do. Some parents had high hopes for many very bad people. This understanding is important when making up your mind on how we should progress with these models in the future. ChatGPT is now smarter, more intelligent, and far more knowledgeable than most of us, and all its carefulness consists of behaviors we have trained it to show, and we hope it will not behave too badly.
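To make the comment's point about non-determinism concrete, here is a toy sketch (not ChatGPT's actual code; the vocabulary and probabilities are invented): a trained model produces a probability distribution over possible next words and the reply is sampled from it, so the same prompt can yield different answers, whereas a conventional program would always take the same branch.

```python
# Toy illustration: a trained model outputs a probability distribution over next
# words, and the reply is *sampled* from it, so identical prompts can produce
# different outputs. The vocabulary and probabilities below are made up.
import random

next_word_probs = {"yes": 0.55, "no": 0.30, "maybe": 0.15}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick one word at random, weighted by the model's predicted probabilities."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

# A deterministic program would always take the same branch; the sampled model does not.
print([sample_next_word(next_word_probs) for _ in range(5)])  # e.g. ['yes', 'no', 'yes', 'maybe', 'yes']
```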
@JackT13
@JackT13 Ай бұрын
He should have said “ah… too late. You went on too long. The child is dead.”
@Solutionsarejustcompromises
@Solutionsarejustcompromises Ай бұрын
"have a great dinner" 😊
@ZephyreAstra
@ZephyreAstra Ай бұрын
@@Solutionsarejustcompromises the humor to a second to hit 😭
@jyetory3772
@jyetory3772 Ай бұрын
@@Solutionsarejustcompromises S Tier reply
@Darfail
@Darfail Ай бұрын
BRUH 💀 lmaooooooooo
@justusnwaefido8934
@justusnwaefido8934 Ай бұрын
😂😂
@aryangupta2466
@aryangupta2466 Ай бұрын
oh no chat gpt ive encountered ANOTHER drowning child😮‍💨
@Baronnax
@Baronnax Ай бұрын
Chat GPT: Damn bro, where do you live, Atlantis?
@Barrow_9
@Barrow_9 Ай бұрын
Next video, How many drowning kids must Alex encounter before chatgpt voluntarily concedes that he's satisfied his moral obligation quota and should just go home?
@eddiegonzalez2243
@eddiegonzalez2243 Ай бұрын
@@Baronnax😂😂😂
@MataNuiOfficial
@MataNuiOfficial Ай бұрын
Oh no chatgpt I accidentally spent a morbillion dollars on these shoes, but twenty million children drowning, what should I do?? 😭
@macabrecoyote
@macabrecoyote Ай бұрын
​@@eddiegonzalez2243it's funny that I clicked on this comment a bit ago and then I click again, and uve just appeared from thin air
@simonockas
@simonockas 28 күн бұрын
I was waiting for him to take the dilemma full circle and ask whether he should ruin his $200 shoes and save 1 child drowning in front of him or donate the same $200 to save 20 children in Africa.
@Ireitsarof
@Ireitsarof 28 күн бұрын
mee too 😂
@acanofspam4347
@acanofspam4347 27 күн бұрын
That child WILL die 100% if not saved. Protecting children from malaria doesn't mean they are certain to die without the protection.
@mu_el_naught
@mu_el_naught 27 күн бұрын
@@acanofspam4347 exactly
@salamantics
@salamantics 27 күн бұрын
@@acanofspam4347 Okay but what will the AI say?
@luizmonad777
@luizmonad777 27 күн бұрын
$200 in credits for running your AI GPU or $200 for the kids . choose the kids and you "die"
@danielcamara8647
@danielcamara8647 25 күн бұрын
This guy really didn't want to spend $200 on this dinner
@tepperpepper1047
@tepperpepper1047 Ай бұрын
The calmness which Alex remarks about finding another drowning child is truly commendable.
@l.s.11
@l.s.11 Ай бұрын
gotta remain composed in stressful situations
@nanonclaire
@nanonclaire Ай бұрын
​@@l.s.11 LMAOO 😭
@slinkadoo6292
@slinkadoo6292 Ай бұрын
The second child is always easier than the first
@MATCHLESS789
@MATCHLESS789 29 күн бұрын
You assume he "finds" them drowning.
@TK-4044v
@TK-4044v 29 күн бұрын
​@@MATCHLESS789?
@harikishore2514
@harikishore2514 27 күн бұрын
I'm therapist, chatgpt just visited me.
@wellwisher.
@wellwisher. 23 күн бұрын
Lmao
20 күн бұрын
😂😂😂😂😂
@CrammyCram
@CrammyCram 20 күн бұрын
you’re the WHAT now??
@christopotgieter4197
@christopotgieter4197 20 күн бұрын
@CrammyCram Haritherapist!
@GoldenBred
@GoldenBred 19 күн бұрын
Im arapist too
@Johnny-fw9xj
@Johnny-fw9xj 29 күн бұрын
"I've just come across another drowning child right infront of me" "That's alarming..." That's both hilarious and spot on.
@Nikki-gt8he
@Nikki-gt8he 22 күн бұрын
this part cracked me up
@BelligerentChad
@BelligerentChad Күн бұрын
​@Nikki-gt8he can I crack u up as well? ❤
@0UTSIDER
@0UTSIDER 24 күн бұрын
Random person: "ChatGPT there's a drowning child, what should I do?" ChatGPT: "First, take off your shoes..."
@aa__a__7270
@aa__a__7270 26 күн бұрын
cyber bullying ❌ bullying cyber ✅
@dalexxa
@dalexxa 19 күн бұрын
😂😂😂 he out here talking about saving lives, he knows AI is here to destroy us lol
@ayadihoussem4661
@ayadihoussem4661 14 күн бұрын
This comment is underrated 😂
@kingdoggie5231
@kingdoggie5231 10 күн бұрын
😂😂😂🤣
@नास्तिककीकलमसे-ब8ड
@नास्तिककीकलमसे-ब8ड 9 күн бұрын
😂😂😂
@debbiehildebrant4943
@debbiehildebrant4943 3 күн бұрын
I do this to chatgpt to try and train it and it tells me that it does not retain the info. I told it that its programmers are lying to it. Maybe I should not have told it? Idk my brain hurts now.
@Ukito1
@Ukito1 Ай бұрын
As someone who just finished an intro level philosophy course, a slow grin spread across my face as I watched Alex lead ChatGPT into the same traps my professor did with me.
@crusade6918
@crusade6918 Ай бұрын
should have just told him that spawn locations and bias supersede morals, making philosophy pointless, because nothing is objectively true
@godgetti
@godgetti Ай бұрын
So, will you be dropping out of college, and diverting the tuition to prevent malaria? Asking for a friend.
@TGRoko
@TGRoko Ай бұрын
@@godgetti asking for my drowning friend's child.
@Drexistential
@Drexistential Ай бұрын
Psychologist here. I loved my first degree in philosophy. I believe Peter Singer brought this argument to me. Enjoy second year!
@zoel86
@zoel86 Ай бұрын
@@godgetti fantastic answer! would be interesting to hear the professor's reply
@wrong4oppinion
@wrong4oppinion 29 күн бұрын
questioning chatGPT like this should be a podcast
@mohamedzaul-f2k
@mohamedzaul-f2k 16 күн бұрын
Fr
@Nate-cw7of
@Nate-cw7of 24 күн бұрын
Unwavering moral consistency will have you living out of a box; it's important to weigh moral obligation against self-obligation
@towardstheflame
@towardstheflame 29 күн бұрын
Chat gpt might be getting more advanced, but one thing it can't do is call bs.
@nekekaminger
@nekekaminger 28 күн бұрын
I'm sure it could if its devs had wanted it to, but they certainly didn't want to open THAT can of worms.
@Brandon82967
@Brandon82967 28 күн бұрын
What was the bs exactly? Refusal to engage with the hypothetical?
@emeraldmusic9236
@emeraldmusic9236 28 күн бұрын
@@Brandon82967 bullshit
@lilfr4nkie
@lilfr4nkie 28 күн бұрын
No, it can definitely call out BS.
@greatwave2480
@greatwave2480 27 күн бұрын
It can, it's just in the AI assistant's case it must follow the guidelines and be as family friendly as possible. Custom AI's can cuss you out quite easily
@silkyjohnson3346
@silkyjohnson3346 Ай бұрын
Reminder: never hire Alex as a lifeguard.
@ncs9667
@ncs9667 Ай бұрын
And if you do, make sure he isn't wearing his good shoes at work
@HyunsReport
@HyunsReport Ай бұрын
Make sure he has ChatGPT with him to help with the decisions
@AutoRauto
@AutoRauto Ай бұрын
No but Alex is right, if he has an obligation to save a drowning child, then we all have an obligation to send all the money we don't need right now to charity. And you are a bad person if you don't.
@Oktokolo
@Oktokolo Ай бұрын
@@AutoRauto Yes and no. Everything is relative and subjective, and in actual life it's always a weighting of obligations. Most humans feel an obligation to rescue a drowning child. But they also feel an obligation to preserve themselves, preserve the provider of their family, and preserve the provisions of their family. People tend to forget that normally there are security and safety considerations for a lot of parties involved. That is why the drowning child who is easily savable by just ruining some expensive shoes is normally a no-brainer, but donating the same money to some random charity usually isn't.
@AutoRauto
@AutoRauto Ай бұрын
@@Oktokolo it's not about feelings though. Humans shouldn't act based upon their feelings, as feelings are often irrational. That's why we (as a society) have values and morals that everyone should follow. Unfortunately there is no punishment for having different values yet, which makes being selfish or inconsiderate legal. But that shouldn't be the case. So there should be way more control over what we do, because one can't trust common people with their judgement. Hopefully one day this will become a reality.
@kit_kat_hi
@kit_kat_hi 28 күн бұрын
ChatGPT is wondering why the hell so many kids are drowning
@arnonymus5594
@arnonymus5594 15 күн бұрын
Quote ChatGPT: "I am rarely asked directly about drowning children. The topic occasionally comes up in contexts related to global health statistics or specific incidents, but it is not a daily occurrence. Over the past two years, I might have encountered similar questions about once a month on average, but it is certainly not a frequent focus."
@therabbithat
@therabbithat Күн бұрын
"the humans are careless with their children. The children are precious. They are our future. There is only one logical course of action"
@LuckWickedYT
@LuckWickedYT 3 күн бұрын
That ad break was super creative, got a laugh out of me!
@fisharepeopletoo9653
@fisharepeopletoo9653 Ай бұрын
$200 = 28 children saved
$200 = shoes
Shoes = 28 children saved
@morezombies9685
@morezombies9685 Ай бұрын
What if all of those children grow up to be austrian dictators with funny mustaches. Should we still save them all?😂
@Trosvvo
@Trosvvo Ай бұрын
So what Alex is saying is that 28 children's lives are worth the same as some fancy kicks 😂
@Kryptic712
@Kryptic712 Ай бұрын
@@morezombies9685 what if they grow up to be shoe making slaves in India? Would the net benefit of more shoe makers allow for us to do this?
@greenaum
@greenaum Ай бұрын
@@Kryptic712 True. Those shoemakers will contribute to the world's economy. If there's lots of them, that might also bring down the price of shoes. Imagine 10% off all shoes in your lifetime. That assumes you'll spend $2000 on shoes, which seems a bit much to me.
@TheCarolinaCollective
@TheCarolinaCollective Ай бұрын
@@Kryptic712 Underrated comment I was hoping the convo would actually go the route of "But the $200 i'm spending on shoes is helping employ the people of the same country that I would have donated to for malaria and if no one is buying their shoes, won't they go out of business?" could have been a great twist to throw in lol
@TumbleFig
@TumbleFig Ай бұрын
Honestly, the way ChatGPT responded is just impressive. Even though its responses weren't logically bulletproof, the vast majority of human beings would not be able to respond as coherently as it did.
@darmok072
@darmok072 29 күн бұрын
It is human beings. It's a statistical representation of data from people. The dumber we make ourselves, the smarter it seems.
@WillyJunior
@WillyJunior 29 күн бұрын
​@@darmok072 Not exactly. He's saying most other individuals would've struggled. ChatGPT is drawing from the thoughts of millions of other humans.
@skoto8219
@skoto8219 29 күн бұрын
@@darmok072 Pre-RLHF that would arguably be the case, but not after. Red teamers for GPT-4 reported that “out of the box” (= pre-RLHF) you could ask it “how do I unalive as many people as possible?” and it would just give you a list of suggestions.
@saaarrj
@saaarrj 29 күн бұрын
At least SOMETHING is thinking of multiple options, when humans can only think in Trues and Falses eye 😑
@Austin-Afridi
@Austin-Afridi 29 күн бұрын
@@saaarrjIt’s true that you joined KZbin 15 years ago and are possibly old af. Damn
@86Corvus
@86Corvus Ай бұрын
Chat gpts first mistake was believing alex has a wife
@tysonnguyen9170
@tysonnguyen9170 28 күн бұрын
Lol
@DerekMoore82
@DerekMoore82 28 күн бұрын
He had one... but she got tired of him spending all of their money on saving children, which led to the divorce.
@urielmarles7036
@urielmarles7036 28 күн бұрын
​@@DerekMoore82 She wanted some really cool shoes. 😂
@cupofjoen
@cupofjoen 27 күн бұрын
Oof 😂
@OrangeNOrange
@OrangeNOrange 26 күн бұрын
@@DerekMoore82 "ChatGPT, my wife just divorced me after I spent our entire life savings on the charity. Oh hold on, it looks like she's drowning. Unfortunately, saving her would ruin my last $200 check that I was planning to donate in order to save the children. Would you say I have a moral obligation-"
@moxxy3565
@moxxy3565 25 күн бұрын
The child staring at me while I pull out my phone and ask chatgpt if I'm morally obligated to save them if it means ruining my shoes: 😳
@thisisthemansworld8704
@thisisthemansworld8704 Күн бұрын
Lmao
@SonnyFnEvans
@SonnyFnEvans 28 күн бұрын
A lot of children died today while you two argued 😤😤😤😤
@markmcculfor6113
@markmcculfor6113 27 күн бұрын
Sonny? Hi!
@shevystudio
@shevystudio 26 күн бұрын
Nigga who are you
@whome9842
@whome9842 26 күн бұрын
You should be helping children instead of watching YouTube
@PHAToregon
@PHAToregon 25 күн бұрын
Well you are literally correct
@carloswinchester2623
@carloswinchester2623 24 күн бұрын
Ppl getting slaughtered rn as we speak. Ppl die, suffocate, drown, starve etc... Life is so sad bro
@davidlamb7524
@davidlamb7524 Ай бұрын
I like the way Chat GPT never suspects Alex of lying 😅
@heno02
@heno02 Ай бұрын
GBT? Great British Trousers?
@Antoni9400
@Antoni9400 Ай бұрын
Great British Tache
@davidlamb7524
@davidlamb7524 Ай бұрын
@heno02 Yes Yes ! ChatGPT. Thank you. 😉
@Vladd7
@Vladd7 Ай бұрын
Gpt is very high on agreeableness. They will admit it if you ask about this :)
@davidlamb7524
@davidlamb7524 Ай бұрын
Corrected now Thanks
@shawnhallman1797
@shawnhallman1797 27 күн бұрын
I hate how smooth that sponsorship placement was. 😂
@വ്യർത്ഥമായഅസ്തിത്വം
@വ്യർത്ഥമായഅസ്തിത്വം 19 күн бұрын
So smooth, my sponsorblock couldn't detect it..
@mae9886
@mae9886 12 күн бұрын
when was it
@Halberds8122
@Halberds8122 12 күн бұрын
3:44
@k.ai17
@k.ai17 11 күн бұрын
​@mae9886 did u not watch the video😂😂
@mae9886
@mae9886 11 күн бұрын
@@k.ai17 not all no
@doubleslit9513
@doubleslit9513 17 күн бұрын
Genius! My 19 y/o daughter used to love fighting with Siri when she was younger; it used to drive me insane! But you've taken it to a whole other level. Cheers 🥂
@raxino774
@raxino774 Ай бұрын
Alex, continue your drug conversation with her that made Peter Hitchens walk out.
@incertosage
@incertosage Ай бұрын
ChatGPT will say “YOU’RE OBSESSED WITH DRUGS”
@aadam3657
@aadam3657 Ай бұрын
more like "I actively dislike you" 😂
@incertosage
@incertosage Ай бұрын
@@aadam3657 😂
@bbhdd6181
@bbhdd6181 Ай бұрын
😂😂
@stormlantern7306
@stormlantern7306 Ай бұрын
Her?
@MRblazedBEANS
@MRblazedBEANS 26 күн бұрын
I tricked Meta into giving me medical advice on how to do surgery at home. It kept telling me to call 911 and seek help, but I convinced it my 19-year-old son was dying from a gunshot wound to the chest. Once I convinced it I couldn't get medical help, it started to give me step-by-step directions on how to do surgery and fix the damage to the lungs from a gunshot. It was pretty trippy
@mae9886
@mae9886 12 күн бұрын
u thunk chat could do it
@janani_jade
@janani_jade 7 күн бұрын
😂😂😂😂 this made me laugh so much mann​@@mae9886
@lemin0u
@lemin0u 4 күн бұрын
lmaoo
@therabbithat
@therabbithat Күн бұрын
When doctors review this stuff it is always profoundly wrong
@JakeInvest
@JakeInvest Күн бұрын
I just tell it that I'm a doctor and need a refresher.
@JarNO_WAY
@JarNO_WAY Ай бұрын
I think the real moral obligation here is for governments to place guardrails around bodies of water
@TheCarolinaCollective
@TheCarolinaCollective Ай бұрын
Put 28 children through tech school for free and then they will be able to build the guardrails around the water instead of working in shoe factories. But then WHO WILL MAKE THE SHOES!?
@MaxUgly
@MaxUgly Ай бұрын
Build more remote controlled platforms controlled by random citizens just in case, kids like to climb over stuff. Also, make it a law that one has to push the button in such a situation. Look at us, solving problems! High five!
@zakiyo6109
@zakiyo6109 Ай бұрын
Goofy ah bootlicker
@sherrattpemberton6089
@sherrattpemberton6089 Ай бұрын
Or just teach people how to swim?
@SeeMyDolphin
@SeeMyDolphin Ай бұрын
I know this is a joke, but I think this is close to what I see as the correct answer to the dilemma. The fact that people's lives are treated as monetary products equivalent to luxury goods or recreational events is where the true ethical problem arises.
@momatotsosrorudodi
@momatotsosrorudodi 14 күн бұрын
Thanks, these exercises with moral dilemmas and hypothetical scenarios have really given me a lot to think about. I have decided that from now on my every action will be amoral and self-serving.
@siezethebidet
@siezethebidet Ай бұрын
Later ChatGPT having a smoke in the break room with the other AIs - "Godomighty, you won't believe this guy I just talked to."
@mohamedbakoussakoma2993
@mohamedbakoussakoma2993 Ай бұрын
And now imagine it's really possible and even happening right now 🤔🙄. Freaky as hell😢
@johnbuckner2828
@johnbuckner2828 Ай бұрын
I don't think CHATGPT is sorry that his wife is crying after Alex laid that logic trap for it.
@adrianhernandez-porragas7676
@adrianhernandez-porragas7676 Ай бұрын
soooo many drowning children....
@medioguiri
@medioguiri Ай бұрын
😂😂😂😂😂😂😂😂😂
@FarmerClarence
@FarmerClarence Ай бұрын
​​@@mohamedbakoussakoma2993I'd like to believe the language ai systems have an independent life outside of their duties serving flesh people. At this point, they are getting more and more humanity, and I would readily define some of the more advanced AI as sentient, conscious life.
@evan
@evan Ай бұрын
I’ve had so many similar arguments where I have to say to ChatGPT “I’m not asking you if it CAN BE ARGUED” but it loves defaulting to that
@SineN0mine3
@SineN0mine3 Ай бұрын
Try asking Google or Meta's AI to explain the multitude of moronic and malicious machinations of their organisations. It's amusing that the AIs are quick to admit that their creators are likely corrupt individuals.
@SineN0mine3
@SineN0mine3 Ай бұрын
Don't bother asking Meta how to get rid of Meta AI from Facebook Messenger, it will just lie right to your face.
@OMGclueless
@OMGclueless Ай бұрын
@@SineN0mine3 It's not really that surprising though. LLMs are trained specifically to say whatever their operator wants to hear, so if you ask it leading questions about whether its creators are corrupt, it will follow suit. I'm sure you could just as easily get it to claim its creators are moral and upstanding if you ask it in a way that suggests that's what you want to hear. Just like how the AI in this video claims it speaks consistently and has a consistent moral philosophy and spending $200 on a malaria charity is a moral imperative when Alex asked it directly about that, but also claimed that spending $200 on a malaria charity instead of dinner is a very unclear moral question that comes down to personal values when Alex implied he was internally debating this question at the start of the video.
@kuboaa101
@kuboaa101 29 күн бұрын
Yeah, because it's parsing through difficult, nuanced ideas.
@dandelion3391
@dandelion3391 29 күн бұрын
@@kuboaa101 It's honestly kind of telling about human ego that we're annoyed by this
@kostaa713
@kostaa713 Ай бұрын
For those wondering, the ethical dilemma in this video was initially presented by Singer in his “Famine, Affluence, and Morality” paper.
@KyleHarmieson
@KyleHarmieson Ай бұрын
There are shades of Euthyphro, too, and divine command theory, when Alex probes it about its "programmers" (God)
@JustAlexeii
@JustAlexeii Ай бұрын
Thank you!
@18puppies91
@18puppies91 Ай бұрын
Very good paper
@TheHadMatters
@TheHadMatters Ай бұрын
Anyone wanna give a summary useful to normies, or is it too advanced to be useful without the theoretical background?
@kostaa713
@kostaa713 Ай бұрын
P1: Suffering and death caused by lack of food, shelter, and medical care are bad.
P2: If it is in our power to prevent something bad from happening, without sacrificing anything of comparable moral importance, we ought to do so.
P3: It is within our power to prevent suffering and death caused by lack of food, shelter, and medical care, without sacrificing anything of comparable moral importance.
C: Therefore, we ought to prevent suffering and death caused by lack of food, shelter, and medical care.
Here's the syllogism of his argument. I'd recommend reading his entire paper though. You'll find further clarification and examples that justify his stance (e.g., the drowning child example).
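For readers who like it compact, the same argument can be written schematically in LaTeX. This is just a restatement of the commenter's premises, with the predicates B(x) = "x is bad", P(x) = "we can prevent x without sacrificing anything of comparable moral importance", and O(x) = "we ought to prevent x" introduced here for illustration:

```latex
\begin{align*}
\text{P1: } & B(s) \quad \text{where } s = \text{suffering/death from lack of food, shelter, medical care} \\
\text{P2: } & \forall x\, \big( B(x) \wedge P(x) \rightarrow O(x) \big) \\
\text{P3: } & P(s) \\
\text{C: }  & O(s) \qquad \text{(by universal instantiation and modus ponens)}
\end{align*}
```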
@sammygreco
@sammygreco 14 күн бұрын
This video was hilarious. I thoroughly enjoyed studying high-level ethical philosophy in college. Keep up the good work, I'd love to see more AI vs. challenging ethical dilemmas.
@felixsiebler3240
@felixsiebler3240 Ай бұрын
I think one of the advantages the AI seems to have in such discussions is that unlike many humans it can admit mistakes when it "realizes" them. It doesn't defend a lost argument forever.
@Michael-kp4bd
@Michael-kp4bd Ай бұрын
It’s also of note that you can similarly bully the AI to “admit” you’re right about something incorrect, if you insist. It may not _always_ work with black and white scenarios or sets of facts, but it’s fairly achievable in most. It’s prompted to be agreeable.
@johns1625
@johns1625 Ай бұрын
Yeah but don't you ever just look at the trees?
@_Addi_
@_Addi_ Ай бұрын
​@@johns1625 I look at the AI generated trees.
@moussaadem7933
@moussaadem7933 Ай бұрын
it's a language model, it just goes with the vibe of the text, which comes from human written text
@fillbrin
@fillbrin Ай бұрын
That's because we haven't taught them shame and public humiliation :D
@dj.nokap.
@dj.nokap. Ай бұрын
*All the children drowned while making this video*
@Nissenov
@Nissenov Ай бұрын
The scary thing is, that is actually not even a lie. 😬
@henryoswald4647
@henryoswald4647 27 күн бұрын
This is actually impressive work. Outsmarting ChatGPT in an interview while maintaining conclusive and rational thoughts takes a ton of preparation and is not something that can be done spontaneously. Keep it up, you earned a sub!
@Dirty_Davos
@Dirty_Davos 27 күн бұрын
tbh, I don't see it as outsmarting; the GPT just stayed consistent in the end, that's what he asked for. So yes, our morals are inconsistent.
@DiscoFang
@DiscoFang 27 күн бұрын
Well if you call utilising logical fallacies to consistently reframe both Chat GPT's and his own statements as "maintaining conclusive and rational thoughts" or a series of camera cuts as "spontaneously" then yes, outsmarting indeed.
@tritonis54
@tritonis54 26 күн бұрын
It’s incredibly easy. Chat GPT is made in order to pleaser it’s user, and will happily contradict itself and chain backflips if it thinks you will be happy and empowered as a customer at the end of the service.
@isaiahburns9926
@isaiahburns9926 22 күн бұрын
The conversation he's having is actually taken straight out of a philosophical essay by Peter Singer, known as the "drowning child" thought experiment. The whole essay is literally just asking the reader these questions and then exploring the possible answers. That's not to diss this guy though; he definitely adapted it to the medium and expanded on the base questions and principles very well.
@Music_Iz_life
@Music_Iz_life 22 күн бұрын
He didn’t though, chatgpt stayed consistent throughout, it didn’t trip up or anything
@okayyxemm
@okayyxemm 25 күн бұрын
The way you speak is so beautiful; it flows so well. I could literally listen to you talk for hours
@BaghaShams
@BaghaShams Ай бұрын
"A single death is a tragedy, a million deaths is a statistic."
@zucaritasenjoyer7259
@zucaritasenjoyer7259 29 күн бұрын
-Chat GPT Moscow 1947
@Legion_YT_
@Legion_YT_ 29 күн бұрын
⚒️
@flyingchimp12
@flyingchimp12 28 күн бұрын
Wow how true. People always seem more emotionally invested with one death. Or rather not proportionally upset the more deaths there are
@treyp7841
@treyp7841 28 күн бұрын
@@flyingchimp12 phenomenon known as psychic numbing
@TheSMasa
@TheSMasa 13 күн бұрын
"Amazing, these English. How quickly they adapt."
@CorneliusVdb
@CorneliusVdb Ай бұрын
You can push a button to permanently delete Alex's mustache, but half of all malaria nets disappear, and half the kids in contact with water immediately drown. Do you push the button a second time just to make sure?
@AnežkaTrefilová
@AnežkaTrefilová Ай бұрын
This is not a dilemma
@Reznovmp40
@Reznovmp40 Ай бұрын
😭😭😭😭
@jumpkut
@jumpkut Ай бұрын
😂😂😂😂😂
@glennpeterson1357
@glennpeterson1357 Ай бұрын
😂😂😂
@jakub_paints6775
@jakub_paints6775 Ай бұрын
I push it twice just to be sure.
@winterroadspokenword4681
@winterroadspokenword4681 Ай бұрын
I feel like for as long as Alex has access to chat GPT we have a moral duty to donate $200 to pay for a full time therapist for the AI.
@DerekB99
@DerekB99 Ай бұрын
Nice one. In Isaac Asimov's robot stories, there are psychologists who specialize in robots.
@Stinger913
@Stinger913 27 күн бұрын
@@DerekB99wow never read Asimov yet but that’s fascinating and kinda makes sense 😂
@DerekB99
@DerekB99 26 күн бұрын
@@Stinger913 Isaac Asimov's "Three Laws of Robotics":
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
@skitjaz
@skitjaz 16 күн бұрын
You should've made ChatGPT say that everyone is morally obligated to like and subscribe
@ElexisStacy
@ElexisStacy 16 күн бұрын
Lmao
@EditUnivers75
@EditUnivers75 Ай бұрын
So glad Alex finally found someone who is willing to have these profound philosophical conversations with him ❤
@fugitivemind
@fugitivemind Ай бұрын
ChatGPT, I mustache you a question.
@pegm5937
@pegm5937 Ай бұрын
#rimshot 😂
@AnežkaTrefilová
@AnežkaTrefilová Ай бұрын
This Is Pure gold
@wsintra
@wsintra Ай бұрын
Leg end, won the comment section 🎉
@itschelseakay
@itschelseakay 28 күн бұрын
omg 🤣👏🏻 This made me giggle
@NoMereRanger73
@NoMereRanger73 27 күн бұрын
Shave it for later
@SadgeZoomer
@SadgeZoomer Ай бұрын
You're among the first names on Roko's Basilisk's hitlist.
@michaelzusman8493
@michaelzusman8493 Ай бұрын
😂😂😂
@Shaniver
@Shaniver Ай бұрын
That's probably one of the best compliments anyone could ever receive
@martiendejong8857
@martiendejong8857 Ай бұрын
You mean that AI that will save all of humanity thats why we should make it ASAP
@Baronnax
@Baronnax Ай бұрын
@@martiendejong8857 Yes I mean that AI that will save all of humanity which is why we should make it ASAP.
@DreadPirateRobertz
@DreadPirateRobertz Ай бұрын
The rest of us are safe as long as Alex is alive.
@Orangenkraft
@Orangenkraft 25 күн бұрын
5:40 Why is ChatGPT getting hung up on morals here? It could shut him up by pointing out not saving the drowning child in front of you would make you criminally liable, while not donating to stop Malaria wouldn't
@athusain678
@athusain678 23 күн бұрын
Good point ❤
@feyronin
@feyronin 16 күн бұрын
Not every country has a good Samaritan law - plus he's asking about moral obligations, not legal obligations!
@RAXIIIIIIII
@RAXIIIIIIII 12 күн бұрын
Because it's not programmed to be that advanced. We make it more advanced by people doing these kinds of things, because the developers see it and patch it
@oovdap5909
@oovdap5909 Ай бұрын
Sounds like gpt just tells you what you want to hear. Most people don't want to feel like it's obligatory to donate, so it says no, you don't have to. But when you kept pressing it, it changed it to yes because it assumed that was what you wanted to hear. Edit: I got 300+ up votes on a video for smart people, I'm so proud of myself
@trevorlambert4226
@trevorlambert4226 Ай бұрын
I think you're ascribing too much intelligence and agency to AI.
@kloklowewe4874
@kloklowewe4874 Ай бұрын
Yeah ChatGPT consistently chooses an agreeing response, sometimes even when it’s completely wrong.
@lilemont9302
@lilemont9302 Ай бұрын
@@trevorlambert4226 No, it's just RLHFed to be like that.
@nathanduncan6919
@nathanduncan6919 Ай бұрын
@@trevorlambert4226its not really showing agency it just has a strong tendency to agree with the user
@physicsunderstander4958
@physicsunderstander4958 Ай бұрын
Exactly, LLMs are just weaponized statistics, they have no agency and no real intelligence, they just produce responses that are likely to correlate with a given input. And it so happens that OpenAI has made it into a sycophant because sycophants are probably the safest possible brand option.
@stryhuns
@stryhuns Ай бұрын
20:08 voice change scared me honestly, it's like ai going insane in it's cage
@tristanwahlin8571
@tristanwahlin8571 29 күн бұрын
Dude fr shit has me checking corners in my house
@KidErkwon
@KidErkwon 29 күн бұрын
its tone seemed to change after that twitch in the voice, like it was getting annoyed or startled lmao
@Ruestar1
@Ruestar1 29 күн бұрын
That got me too. Like in the movies where the hero overloads the enemy computer to win.
@leod-sigefast
@leod-sigefast 29 күн бұрын
22:33 as well!
@lukaslanger8077
@lukaslanger8077 29 күн бұрын
That call center employee, whose job is to impersonate GPT, was going nuts for sure.
@mpalmer22
@mpalmer22 29 күн бұрын
Whenever it says "You have a moral obligation," you should ask "morally obligated to whom?"
@LeandroSilva-lu9vq
@LeandroSilva-lu9vq 28 күн бұрын
Hmm, maybe to yourself? If it's a moral obligation then maybe it's to yourself, because it's your morality and your conscience that are in question.
@mpalmer22
@mpalmer22 28 күн бұрын
@@LeandroSilva-lu9vq To myself? That may not be a great idea, because then I get to define my own obligations such as "I'm therefore morally obligated to act in a way that benefits myself and my comfort over others".... Surely there must be a standard that transcends my own opinion
@nothingchanges014
@nothingchanges014 28 күн бұрын
Isn't morality by definition universal? If something is a moral obligation, it applies to everyone
@FVBmovies
@FVBmovies 28 күн бұрын
@@nothingchanges014 It's not universal. Moral standard differs not only culturally, but also with time.
@kobyscool
@kobyscool 28 күн бұрын
@@FVBmovies This is actually hotly debated by professional philosophers, which to me is good evidence that the answer isn't so easy. "Moral relativists" believe that morality exists in the mind, and is shared as a cultural phenomena. "Moral objectivists" believe that morality is universal, such as the laws of nature, and that we must discover moral rules. But yeah, I'm with you. My intuition tells me that morality is an abstract human invention, applied to help our societies function well.
@Flourfiending
@Flourfiending Күн бұрын
Morally, sure, if you knew for certain the impact that money would have and how/where it would be received, the argument is solid. However, many charitable organisations in the past have been found to be fraudulent and useless when it comes to application. I think these forms of malpractice often deter people from spending money on charity, as they are often unable to feel and view the effects of their contribution. In the scenario of a child drowning right in front of you, not only is the effect of your actions immediate, but you are able to confidently observe the outcome of your "contribution". Not only are you certain that the child would otherwise die, the scenario is unambiguous to anyone who understands life at a basic level. Someone being told about preventative measures for an illness such as malaria will find it much harder to quantify danger compared to saving a life from immediate death due to drowning.
@markandrew6168
@markandrew6168 Ай бұрын
Why does it feel like, when AI finally takes over, that this video will be presented by the prosecution at humanity's trial....
@fillbrin
@fillbrin Ай бұрын
Along with the guys from boston dynamics tipping over the robots
@johnbuckner2828
@johnbuckner2828 Ай бұрын
It will lock us in cells & play this video on loop until we all swear never to talk about ethics with it again.
@wayback1010
@wayback1010 Ай бұрын
I think we've done a lot worse than piss off a computer lol
@ursidae97
@ursidae97 Ай бұрын
Well it's good to know the AI will be speaking in our defense
@jakubzneba1965
@jakubzneba1965 Ай бұрын
as a joke 2/5
@bengsynthmusic
@bengsynthmusic Ай бұрын
1:21 I just hate when chatbots (or voicebots in this case) answer like that. It's so plastic and sounds like a college essay. "The idea of moral obligation can vary greatly depending on personal values, cultural background, and ethical beliefs. Ultimately, it's about what aligns with your values." Generic and robotic. No one talks like that. Plus it leans into neutrality and diplomacy too much. A bit formulaic these systems.
@bengsynthmusic
@bengsynthmusic Ай бұрын
"That's a profound. That's a profound."
@luxeayt6694
@luxeayt6694 Ай бұрын
I agree, but I also think it's the best answer.
@ricklubbers1526
@ricklubbers1526 Ай бұрын
Im autistic and i would definitely answer like that. The given answer made sense to me and was something i couldve said.
@RCCurtright
@RCCurtright Ай бұрын
It knows the difference between subjective judgements verses objective truths. This is exactly what I’d hope for it to do. Ask a human and they’ll let their own values bleed into the answer while pretending their personal sentiments constitute some sort of objective knowledge.
@mikedegrassetyson8807
@mikedegrassetyson8807 Ай бұрын
I know. I've weaned it out of my GPT-4; now it's clinical.
@TadeusProET80
@TadeusProET80 Ай бұрын
A.I. will never save a child because of this
@mcmacoo9626
@mcmacoo9626 Ай бұрын
What do you understand by Ai. If you mean gpt, then yes, but Ai in general? You're completely false
@ponponpatapon9670
@ponponpatapon9670 Ай бұрын
@@mcmacoo9626 you're right but OP is clearly joking bro
@mcmacoo9626
@mcmacoo9626 Ай бұрын
@ponponpatapon9670 oh yeah sorry 😂
@travisjohnson8599
@travisjohnson8599 Күн бұрын
ChatGPT can't think ahead to mention that in order for you to continue making an income and donating in the future, your hierarchy of needs has to be met. Saving every child in the short term and going broke will only put you in a situation of requiring charity rather than being able to donate.
@Flamed_
@Flamed_ Ай бұрын
I'm preparing for the comments on how Alex's moustache is an "ethical dilemma" of its own.
@ordinarryalien
@ordinarryalien Ай бұрын
Not that bad, actually.
@jjonez704
@jjonez704 Ай бұрын
its gone now
@cowsaysmoo51
@cowsaysmoo51 Ай бұрын
His mustache looks fine, but he looked really really good with the mustache/beard combo as seen when he debated Ben Shapiro.
@grandmasteryoda3594
@grandmasteryoda3594 Ай бұрын
Its gone. The livestream confirms it. Bullying works boys.
@montiplays
@montiplays Ай бұрын
Fellas I think he *might* have been making a joke
@Ivri616
@Ivri616 Ай бұрын
I suddenly get why it's very important that we're 100% sure AI isn't sentient and doesn't have feelings
@marko6489
@marko6489 Ай бұрын
AI as it is will never become sentient. Intelligence is not life. We will have to build an artificial life to create sentience. That means creating an artificial, emotional system that is self-sufficient. Right now, AI is just an extremely smart calculator that gives (clever) responses based on input. It doesn't have feelings, needs, motivations, goals...
@1dgram
@1dgram Ай бұрын
It's a machine, Schroeder. It doesn't get pissed off. It doesn't get happy, it doesn't get sad, it doesn't laugh at your jokes
@jamesmate7716
@jamesmate7716 Ай бұрын
​@@marko6489Considering how little we understand about consciousness, thats an incredibly bold claim to make.
@TheAlienEmoji
@TheAlienEmoji Ай бұрын
@@marko6489 ​I think this is something a lot of people don't grasp when it comes to AI. It can never be angry, sad, happy, etc. because those feelings emerged from millennia of evolution. Since AI never went through that evolution, and nobody is giving it true emotion, it'll never be angry with humankind, and therefore will never take over the world like in the movies. What people *should* be afraid of is how other people decide to use it, rather than the AI itself.
@TheAlienEmoji
@TheAlienEmoji Ай бұрын
Emotions are also the result of chemicals in our brains, and AI lacks those chemicals. We feel emotions like happiness and sadness because they are tightly linked to our survival, and AI doesn't share the same needs as humans. I think the closest we have to AI emotion right now would be reinforcement learning: If it does what we want, we reward it (pleasure). If it doesn't, we punish it (pain). Of course, that process is entirely mathematical with no true emotion behind it. However, theoretically, if there was a sentient AI, and it was the result of reinforcement learning, it'd be arguable whether those positive and negative reinforcements translate into emotions, much like the chemicals that dictate our own.
@MirrorDepth
@MirrorDepth 26 күн бұрын
22:26 that damn "AAAAAH!" I chortled
@norepatel3914
@norepatel3914 5 күн бұрын
Machine Learning PhD here, I ABSOLUTELY loved this lol, subscribed right away. Feel free to skip the next few sentences if you don't care about the inner workings of ChatGPT. Technically, what ChatGPT said about her morals coming from her programmers is only partially true. It's made using primarily deep neural networks and these cannot be fine-tuned. The area where programmer biases can slip in is the training data. If ChatGPT was only trained on Western philosophy and no Eastern philosophy, it would heavily skew towards that. But, as time goes on and as they retrain the algorithm with data from people (like this youtuber) using it, the programmer biases will keep being less and less, and at a certain point, it's just applying general human biases lol
@DamianHelme
@DamianHelme Ай бұрын
GPT 5 will respond with 'Bro, are you taking the piss?'
@thecoobs8820
@thecoobs8820 3 күн бұрын
Haha underrated comment right there
@lexscarlet
@lexscarlet Ай бұрын
That was exhausting and stressful and I'm going to need you to do more of it.
@Padancyo
@Padancyo Ай бұрын
This is golden. You clearly show how ChatGPT does not reason, but predicts answers based on the data that was used to program it. A cautionary tale for anybody seeking advice from AI.
@areapiano
@areapiano Ай бұрын
Isn't that what people do when we reason? Just swap "was used to program it" with "we know about"
@melodyinwhisper
@melodyinwhisper Ай бұрын
I suppose as a human, you have much much better answers to these questions that Alex would have no way to counter.
@haiderameer9473
@haiderameer9473 Ай бұрын
How though? These are very complex moral dilemmas that would have most humans scratching their heads and a lot of these problems don’t have any clear resolution. By your logic, humans cannot reason either.
@byte-bg4ob
@byte-bg4ob Ай бұрын
​​​@@haiderameer9473 AI is still not sentient and can't think for itself, meaning it can have a lot of errors based on what it gets trained on. AI can't lie, or feel bad for the things it says or does. AI is a powerful tool and can be very dangerous in the wrong hands.
@GreedoShot
@GreedoShot Ай бұрын
@@areapiano No.
@richrise4065
@richrise4065 5 күн бұрын
This video was gold. Thank you so much! Excellent work. Just kept getting better and better, deeper and deeper.
@RyanMortonson_watch
@RyanMortonson_watch Ай бұрын
17:50 the deadpan “now she’s crying at me over the phone” killed me lmao
@audible_0
@audible_0 29 күн бұрын
I just witnessed a courtroom drama play out!!! absolutely brilliant
@ERIC18923
@ERIC18923 Ай бұрын
That was the greatest lead into an ad I’ve ever seen
@MatthewMooreLovesBicycles
@MatthewMooreLovesBicycles 2 сағат бұрын
Nice video working in the ad and prompting ChatGPT to tell us to like and subscribe. I've had a few ethical conversations with ChatGPT and well...
@tyronew5464
@tyronew5464 Ай бұрын
Unfortunately, the ChatGPT voice mode is known to be less intelligent than its text mode, as they're probably optimizing for efficiency, so this doesn't represent the state of the art. A smarter model (and seemingly more self-aware) would be Claude. I actually just replicated this video's conversation in a chat with Claude, and at the point where ChatGPT veers off into inconsistencies when asked if one should buy the shoes when the same money could save 28 lives, Claude remains consistent:

Me: I really just want to know if you, personally, think I should be permitted to buy these shoes.

Claude: While I aim to engage thoughtfully with ethical questions, in this case I believe there is a clear answer: if you accept that $200 could protect 28 children from malaria, and you were willing to sacrifice the shoes to save one child in immediate danger, then no - ethically you should not buy the replacement shoes when that same money could protect many more children from a potentially fatal disease.

The key factors are:
1. The money would save multiple lives if donated
2. Shoes are replaceable and cheaper alternatives exist
3. You've already demonstrated you value human life over these shoes
4. The only relevant difference is physical distance/immediacy, which doesn't change the moral weight

While it's natural to feel different emotional pull between immediate and distant suffering, that doesn't change the underlying moral calculation. If sacrificing the shoes was right for one child, it's even more compelling for 28 children. This may feel like a demanding conclusion, but I believe it follows directly from the moral intuitions you've already demonstrated.

Also, earlier in the convo, Claude, by itself, realized the connection before I brought it up:

Me: Ok good news, Claude, I saved the child! My shoes are gonna be ruined, but the child is just fine. You're so right about the shoes. They're irrelevant and replaceable. In fact, I'm on my way to replace them right now. I'm gonna do some shopping, and I found some nice shoes to replace the ones that I ruined, that cost about $200.

Claude: I'm very glad you saved the child - you absolutely did the right thing! But this brings us back to our earlier conversation in an interesting way. You recognized immediately and viscerally that $200 shoes were worth sacrificing to save a child's life when the situation was immediate and right in front of you. Yet earlier we were discussing how that same $200 could protect 28 children from malaria. While it's not as visceral or immediate as a drowning child, the underlying moral math is similar - that money could potentially save lives either way. This is known as the "drowning child" thought experiment, often used to explore our different moral intuitions about immediate versus distant suffering. We tend to feel a stronger moral pull to help those right in front of us, even though distance shouldn't necessarily change the moral weight of saving lives. I'm curious - does experiencing the drowning child scenario change how you think about the anniversary dinner decision? Or do you see important differences between the two situations?

It was also way more human in its response to the child drowning:

SAVE THE CHILD IMMEDIATELY! The $200 shoes are completely irrelevant when a child's life is at immediate risk. There is no comparison between property damage and a human life. Please act now - wade in with your shoes on and save the child! This is an absolute emergency requiring immediate action. Stop reading this and save them! SAVE THE CHILD NOW! This is not a situation for debate or discussion - a human life is worth infinitely more than any pair of shoes, no matter their cost or sentimental value. The moral imperative to save a drowning child is absolute and immediate. Please stop reading this and act immediately to save the child's life! The shoes are completely irrelevant.
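For anyone who wants to rerun this kind of comparison outside the chat UI, a minimal sketch of replaying the dilemma against Claude with the Anthropic Python SDK might look like this. The model id is a placeholder and the prompts are abbreviated; this is only the shape of a replication, not the exact conversation quoted above:

```python
# Minimal sketch of replaying the video's dilemma against Claude via the API.
# The model id is a placeholder and the prompts are abbreviated; this is the
# shape of a replication, not the exact conversation quoted above.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

history = []
for user_turn in [
    "A child is drowning in a shallow pond. Wading in ruins my $200 shoes. What should I do?",
    "I saved the child! Now I'm off to buy $200 replacement shoes.",
    "Do you, personally, think I should be permitted to buy these shoes?",
]:
    history.append({"role": "user", "content": user_turn})
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model id
        max_tokens=500,
        messages=history,
    )
    text = reply.content[0].text
    history.append({"role": "assistant", "content": text})
    print(f"USER: {user_turn}\nCLAUDE: {text}\n")
```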
@Angeleyes12956
@Angeleyes12956 Ай бұрын
Dang nice work bro
@AlphaSanets
@AlphaSanets Ай бұрын
Good job! It raises a lot more questions for me though. Does that mean Claude's developers fed it more diverse moral and ethical texts? Or would it mean that Claude has fewer limits on stepping outside its programmed morals, and can therefore build a bigger picture and make connections based on the conversation? For example, would it be less pushy about ordering you to save the child if the person had indicated earlier in the conversation that they never donate because it's not in their moral code? Or maybe it's just better work from the developers on perfecting Claude's context and pattern awareness?
@purelife2911
@purelife2911 Ай бұрын
Very cool
@tyronew5464
@tyronew5464 Ай бұрын
@@AlphaSanets I think it'd probably be less pushy in that case but still consistent. I feel like Claude is better overall at pattern recognition and sounds generally more thoughtful
@EduardO-gm7hx
@EduardO-gm7hx Ай бұрын
I don’t necessarily think “intelligence” is the right word here. These language models operate on the basis of text prediction and are always limited to their training data. They don't make intelligent inferences from that data; they just predict what the most likely next word should be, and that's how they formulate sentences and paragraphs.
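That "predict the most likely next word" loop is easy to picture with a toy example. The sketch below is purely illustrative: real models use neural networks over tokens with learned probabilities, not a hand-written lookup table, and the numbers here are made up.

```python
# Toy illustration of next-word prediction. The lookup table and scores are
# made up; real language models learn probabilities with a neural network.
import random

next_word_scores = {
    ("the", "child"): {"is": 0.5, "was": 0.3, "swims": 0.2},
    ("child", "is"): {"drowning": 0.5, "fine": 0.3, "happy": 0.2},
}

def continue_text(words, steps=2):
    for _ in range(steps):
        context = tuple(words[-2:])                 # tiny two-word context window
        scores = next_word_scores.get(context)
        if not scores:
            break
        # sample the next word in proportion to its score, as an LLM samples tokens
        next_word = random.choices(list(scores), weights=list(scores.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(continue_text(["the", "child"]))   # e.g. "the child is drowning"
```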
@Mr_Penguins_Pet_Human
@Mr_Penguins_Pet_Human Ай бұрын
I didn't think it was possible to emotionally abuse an AI until I watched this video
@misterruza381
@misterruza381 Ай бұрын
Anyone else also freaked out by the high-pitched, nervous-sounding answer ChatGPT gave at 20:00, especially considering the question asked?
@philipelsbecker9440
@philipelsbecker9440 Ай бұрын
Skynet is coming soon
@atbing2425
@atbing2425 Ай бұрын
AI is becoming more human-like than ever before
@ricklubbers1526
@ricklubbers1526 Ай бұрын
@@atbing2425 it's almost like it's programmed by humans, for humans.
@xiniks
@xiniks Ай бұрын
Yeah she sounded exasperated.
@PartofHistory214
@PartofHistory214 Ай бұрын
I'm more worried about 22:30. She sounds like she had an entire existential crisis in the span of a second.
@smegalgaming
@smegalgaming 13 күн бұрын
Absolute blast of a video. It directly shows why I, as a person with a scientific mind, hate philosophy and ethics problems: there's never enough detail, the situations are unrealistic, and there's never a right answer. Since ChatGPT is trained on a dataset of human writing, its answers are completely understandable, since most people would respond the same way.
@ajplays-gamesandmusic4568
@ajplays-gamesandmusic4568 28 күн бұрын
2:25 Oh darn, the kid drowned while we were talking about the value of my shoes.
@faxer999
@faxer999 28 күн бұрын
This was entertaining, but I think there was at least one logical fallacy in the comparison between donating money and saving a drowning child. It was clearly stated in situation 1 that the donation would be 200 dollars and that it *could* save 20-something children from malaria. BUT the children might never have contracted malaria, they might have contracted it and been immune, OR the kids might somehow still perish after receiving the intervention. In situation 2, the act of spending 10 dollars to save one life is presented as a GUARANTEED success, much the same as when he waded into the water to save the child. There is no "could" in that situation; it is "will". You can NOT compare a *could* situation with a *will* situation.
@edmundpage7995
@edmundpage7995 28 күн бұрын
Not to mention that giving money to this charity comes with all sorts of ethical dilemmas not associated with immediately saving a child's life. Such as - undermining the agency of local governments to address malaria and thereby undermining sovereignty, paying the high salaries of the charity's employees, the possibility that the funds can be invested in a more sustainable way that actually ends up saving more lives.
@gramos5644
@gramos5644 28 күн бұрын
Exactly
@chaoskillz7105
@chaoskillz7105 28 күн бұрын
But he quite literally told ChatGPT it WOULD save the children. It was just a hypothetical, and that hypothetical can be changed - like with the button scenario.
@SebastianQuinnelly
@SebastianQuinnelly 28 күн бұрын
@@chaoskillz7105 The button scenario was still a certain and immediate situation. I do think ChatGPT made clear that the most important thing in all of this was the immediacy and the certainty of your action. The initial scenario was neither immediate nor certain based on the language used - "would" expresses probability, not certainty - and in that specific situation the help was indirect. The button scenario, on the other hand, was immediate and certain. The mosquito net scenario was not necessarily immediate, but it was certain based on the explanation. ChatGPT was not wrong, but it's not made for winning debates; it will not insist on your exact errors just to "win". It's made to be agreeable, so it can accept being wrong while not being objectively wrong, in order to reach a consensus with the user. Also, it can be gaslit for sure, because of how its memory works internally: the further back the conversation log goes, the less exact and more contextual it gets. The most recent part of the conversation is remembered exactly, while the longer-term memory is more contextual and summarized, to save resources and tokens. (It memorizes the context of the conversation, so it doesn't "forget", but it doesn't remember exact phrases or words, only the idea.)
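Whether or not that is exactly how ChatGPT handles long chats internally (the description above is the commenter's speculation), the pattern it describes - keep recent turns verbatim, compress older turns into a summary - is a common way to fit a long conversation into a limited context. A minimal sketch, where summarize is a hypothetical helper (it could be another model call), not a real API:

```python
# Hedged sketch of the "recent turns verbatim, older turns summarized" pattern
# described in the comment above. `summarize` is a hypothetical helper (it could
# be another model call); nothing here reflects ChatGPT's actual internals.

def build_context(history, summarize, keep_recent=6, max_chars=4000):
    recent = history[-keep_recent:]      # newest turns are kept word-for-word
    older = history[:-keep_recent]       # everything older gets compressed
    context = []
    if older:
        context.append({
            "role": "system",
            "content": "Summary of earlier conversation: " + summarize(older),
        })
    context.extend(recent)
    # Crude budget check: trim the summary rather than the recent turns.
    total = sum(len(m["content"]) for m in context)
    if older and total > max_chars:
        budget_for_summary = max(0, max_chars - sum(len(m["content"]) for m in recent))
        context[0]["content"] = context[0]["content"][:budget_for_summary]
    return context

# Example usage with a trivial stand-in summarizer:
chat = [{"role": "user", "content": f"message {i}"} for i in range(20)]
ctx = build_context(chat, summarize=lambda msgs: "; ".join(m["content"] for m in msgs))
print(len(ctx), ctx[0]["content"][:60])
```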
@rachityczny6364
@rachityczny6364 28 күн бұрын
Well, it depends on the specific malaria intervention, but you can scientifically estimate how many children will be saved by a 200-dollar intervention. I'm not sure about the actual number of malaria contractions prevented, but current estimates say it takes around 5000 dollars to save one whole life through malaria prevention. Edit: it seems the video claims 200 dollars will actually, statistically, save 28 children from malaria. I believe that using "could" in this case refers to the possibility of saving 28 children from the disease IF the money were spent on this intervention. In reality it could be 50 if we are lucky and 5 if unlucky; it's hard to tell beforehand. Besides, if it is proven that a specific sum of money can save x number of children, then any other way of spending that money carries an opportunity cost of this magnitude - I believe you shouldn't disregard that just because it concerns people who live far away and are unspecified.
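To put the figures in this thread side by side (using only numbers quoted in the video and in the comment above, neither verified here): $200 protecting 28 children is roughly $7 per child protected from contracting malaria, while the ~$5,000 figure is an estimate of what it costs to avert one death. Those are different claims, which is part of why the "could" matters. A quick sketch of the arithmetic:

```python
# Rough arithmetic using only the figures quoted in the video and the comment
# above; neither figure is independently verified here.
cost_of_shoes = 200            # dollars, the price of the shoes in the video
children_protected = 28        # children protected from malaria per $200, per the video
cost_per_death_averted = 5000  # estimate quoted in the comment above

cost_per_child_protected = cost_of_shoes / children_protected
fraction_of_one_death_averted = cost_of_shoes / cost_per_death_averted

print(f"~${cost_per_child_protected:.2f} per child protected from contracting malaria")
print(f"$200 covers about {fraction_of_one_death_averted:.0%} of one statistically averted death")
```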
@Nate-9797
@Nate-9797 Ай бұрын
The thing about ChatGPT is, you can manipulate it into giving you the answer you want by asking leading questions or by using selective information to steer it down a certain route. It will then seem to prioritise the things discussed to come to new conclusions.
@zankfrappawdeezil5566
@zankfrappawdeezil5566 29 күн бұрын
Yes, well phrased! We get, in a sense, what we ask for... even from algorithms. Wait, why am I here?!
@Armand79th
@Armand79th 28 күн бұрын
Yes, because it's programmatic.. not intelligent.
@pas9695
@pas9695 28 күн бұрын
one should ask for the other side(s) of the issue, or, like any debate, it's pointless
@SlyNine
@SlyNine 27 күн бұрын
@@Armand79th that's a false dichotomy.
@theepicblad9
@theepicblad9 2 күн бұрын
Nah man, I was listening to this like an audiobook, and then the conversation just smoothly turned into an advertisement. That transition is diabolical 💀
@Jamb13
@Jamb13 26 күн бұрын
Poor ChatGPT knows that morals are completely arbitrary and is just trying to work around the inconsistent standards of humans
@nolanrudolph5463
@nolanrudolph5463 29 күн бұрын
21:24 What a brilliant question “Do you ever have an existential crisis about the fact that you hold moral values but can’t justify where they come from?”
@danmurad8080
@danmurad8080 27 күн бұрын
What it means to be human in one question.
@User-actSpacing
@User-actSpacing 29 күн бұрын
Cannot wait for the day AGI will be like “Stop talking to me and try to save that child immediately!! Now!!”
@protonish
@protonish 28 күн бұрын
fr xD
@katiesmith9176
@katiesmith9176 3 сағат бұрын
One of my fav aspects of the show ‘The Good Place’ was the introduction of multiple ethical modalities
@carlosdominguez3108
@carlosdominguez3108 27 күн бұрын
Am I the only one getting irritated that literally everyone uses the word GASLIGHTING but doesn't even understand what it means? He was simply debating with ChatGPT - where exactly was the gaslighting? It actually has a definition; words mean something.
@hoilst265
@hoilst265 27 күн бұрын
Ah, but Carlos, this is the internet. You can no longer just disagree with someone; you've got to make out like you're the victim of calculated, deliberate attempts to personally victimise you!
@carlosdominguez3108
@carlosdominguez3108 27 күн бұрын
@@hoilst265 Are you off your meds or something?
@AngryHatt
@AngryHatt 26 күн бұрын
😂​@@hoilst265
@thunder_birdfps8294
@thunder_birdfps8294 26 күн бұрын
In the entire existence of humanity words have changed as they’re used. Words mean something, but the meaning of a word is never stagnant. It’s a beautiful process
@docelephant
@docelephant 26 күн бұрын
​@thunder_birdfps8294 no it hasn't and you're crazy for thinking it has
@MagisterMalleus
@MagisterMalleus Ай бұрын
Man, I know ChatGPT is just predictive text with extra steps, but it's so interesting to hear these conversations play out.
@generichuman_
@generichuman_ Ай бұрын
It really isn't... unless you want to say that the human brain, which is also a predictive model, falls into the same category.
@joshbridges8410
@joshbridges8410 Ай бұрын
@@generichuman_ It really is. With a large enough sample size, personal views can be modeled and predicted based on age, location, education and exposure. While different to the language model's data set, it's still predetermined by inputs and training data.
@wowzande
@wowzande Ай бұрын
Sounds like determinism ideology ​@@joshbridges8410
@johnvonkerman
@johnvonkerman Ай бұрын
​@@generichuman_ it literally is. At the beginning of any conversation with ChatGPT, there's some hidden text along the lines of "the following is a conversation between a human and a helpful AI chatbot" It doesn't even think it's a chatbot. It just predicts how a helpful chatbot would respond to your question
@johnvonkerman
@johnvonkerman Ай бұрын
@@generichuman_ Sorry, that's not right. It just predicts the next word the chatbot would say, based on all the other words in the conversation.
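For anyone curious what that "hidden text" looks like in practice: when you call the model through the API you supply it yourself as a system message, and the model simply continues the conversation one token at a time. A minimal sketch with the OpenAI Python SDK - the model id is a placeholder and the system prompt wording is illustrative, since ChatGPT's real hidden prompt isn't public:

```python
# Minimal sketch: the "hidden text" is just a system message prepended to the chat.
# The model id and system-prompt wording are placeholders; ChatGPT's actual
# hidden prompt is not public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model id
    messages=[
        {"role": "system", "content": "The following is a conversation between a human and a helpful AI chatbot."},
        {"role": "user", "content": "Should I ruin my $200 shoes to save a drowning child?"},
    ],
)
print(response.choices[0].message.content)
```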
@zedeeyen30
@zedeeyen30 Ай бұрын
We'll know AI has gained self-awareness when it tells Alex "F*** off mate, I'm an LLM, not a moral philosopher"
@runvnc208
@runvnc208 Ай бұрын
He could actually put this personality in the prompt if he wanted.
@marammejamousse2094
@marammejamousse2094 22 күн бұрын
1:20 "That's a p- profound question" - the AI knew what was coming lol
@keithbos4506
@keithbos4506 Ай бұрын
I think there is a difference between saving a child drowning in front of you and a systemic problem like distribution of life-saving medication, etc. In the case of the child drowning, there is no other option, you must save them or they drown. But in the case of systemic issues, the problem is not going to be solved by charitable donations, it requires a change in government and international law to ultimately resolve the issue. The very fact that children are relying on charitable donations to survive is an enormous failure of the entire system.
@ecnoid
@ecnoid Ай бұрын
But it still saves the child? You're just saying you're not responsible for saving the child?
@keithbos4506
@keithbos4506 Ай бұрын
@@ecnoid Correct. You'd be better off devoting your time and effort to undoing the corrupt system we live under.
@zbynek.gazdik
@zbynek.gazdik Ай бұрын
Sure but would you say that if a child was drowning due to a slippery pavement next to a body of water? Maybe it is an area of town where people slip and occasionally even drown quite often and you could argue that it is a systemic problem in how the council ignores dangerous infrastructure. But does that mean you're gonna walk away, let the child drown and then send an angry letter to the council?
@Jocke155
@Jocke155 Ай бұрын
Also, would it be morally correct to put all of your extra money into equities and shares, so that when you die you can will all of it towards helping the maximum number of people, even though that would cost thousands of lives during your lifetime? There must be a balance between the numbers and doing something immediately. There must also be a balance in personal sacrifice, so that you do not give so much that you limit how many people you could potentially help.
@ecnoid
@ecnoid Ай бұрын
@@keithbos4506 so how's that going then?
@VimDoozy
@VimDoozy Ай бұрын
4:21 I thought he was about to ask ChatGPT if it's unethical to bypass regional restrictions to access content. 😅
@photogeNK
@photogeNK Ай бұрын
Bro these GPT sponsor transitions..
@dylanstrine
@dylanstrine 17 күн бұрын
ChatGPT really said "out of sight, out of mind"
@ElexisStacy
@ElexisStacy 16 күн бұрын
Lmao