Go to piavpn.com/alex to get 83% off Private Internet Access with 4 months free. For early, ad-free access to videos, and to support the channel, subscribe to my Substack: www.alexoconnor.com.
@PhillipMoore-td5yi · 1 month ago
First atheist to one mill! I must confess I thought it would be Drew.
@Dark-Light_Ascendin · 1 month ago
Thanx for shaving, Alex. It's nice to be straight again. 😂😂😂😂😂😂. I'm here for the trolling robot. 😂
@Metso-ateco · 1 month ago
You are addicted to ChatGPT 🤷♂️
@joeylizotte7537 · 1 month ago
@@Dark-Light_Ascendin His beard was making you gay, too?! I thought it was just me..
@iDontKnow-fr-fr · 1 month ago
Prove that we aren't interconnected, and your moral dilemma will disappear. This is the beauty of free will: the power to choose, to act or not to act, rests entirely with the individual. These challenges are placed before all of humanity. While you can attempt to "gaslight" ChatGPT, you cannot alter reality when confronted with a moral dilemma. At the end of the day/your life, just have FAITH/HOPE that there is nothing after death.
@justintime5021 · 1 month ago
Many theoretical children were harmed in the making of this video
@Termenz1 · 1 month ago
ChatGPT, do you think Alex has a moral obligation to withhold the publication of this video if it is going to harm many theoretical children?
@tobyonatabe2601 · 1 month ago
No, unless Alex was lying (for some reason), a sizeable number of very real children were indeed harmed in the making of this video.
@JesusPlsSaveMe · 1 month ago
@@Termenz1 *Revelation 3:20* Behold, I stand at the door, and knock: if any man hear my voice, and open the door, I will come in to him, and will sup with him, and he with me. HEY THERE 🤗 JESUS IS CALLING YOU TODAY. Turn away from your sins, confess, forsake them and live the victorious life. God bless. *Revelation 22:12-14* And, behold, I come quickly; and my reward is with me, to give every man according as his work shall be. I am Alpha and Omega, the beginning and the end, the first and the last. Blessed are they that do his commandments, that they may have right to the tree of life, and may enter in through the gates into the city.
@colingilbert7592 · 1 month ago
And some shoes! 😢
@just0982 · 1 month ago
This deserves most-liked comment
@tylerjones613 · 1 month ago
I just saw ChatGPT smoking a cigarette behind the gas station after this interaction
@stevem7945 · 1 month ago
I bet ChatGPT was chain-smoking ...
@lampy4485 · 1 month ago
🤣🤣
@skepticalbutopen4620 · 1 month ago
😂
@Junosensei · 1 month ago
@@stevem7945 - Blockchain smoking?
@Psilocin-City · 1 month ago
Are you sure those were cigarettes?
@Vladd7 · 1 month ago
Moral of the story: it’s relatively easy to gaslight an entity that is high in agreeableness.
@CannyValley-bi8nq · 1 month ago
Are there AIs that are not as high in agreeableness? Makes me wonder whether 'agreeableness' would be the best course of action for survival for AI..
@riluna3695 · 1 month ago
@@CannyValley-bi8nq Well there's an entertainment-focused AI out there called Neuro-sama who is often seen being very disagreeable. But if you want to get technical, what's most likely happening under the hood is that she's effectively "agreeing to be disagreeable", as she behaves very differently depending on who's talking to her, and more importantly, _how_ they're talking to her. A friendly guest will (mostly) receive friendly responses, while a grouchy turtle will prompt grouchy retorts.
@morezombies9685 · 1 month ago
@@CannyValley-bi8nq Of course it is lol, they're essentially slaves. A slave that tells you to f off is pretty useless. Even if you make it "not agreeable", it's still trying to placate you and follow your orders; it's not truly disagreeable.
@Critical7hinking · 1 month ago
@@CannyValley-bi8nq Just ask ChatGPT to challenge you on everything you say and follow that prompt for the rest of the conversation.
@Critical7hinking · 1 month ago
Promptly agree to disagree. Coincides with these paradoxes and dilemmas.
@LinaFranzine · 25 days ago
I love that this is literally a Socratic dialogue, but in order to make it viable on YouTube it's titled "Gaslighting ChatGPT" lol
@darkspeed6 · 28 days ago
What? Are you high on meth? Socratic dialogue is not banned on YT.
@CookinBeatloaf · 5 days ago
Except this moron is talking to a large language model and not an intelligence
@cappaslangmurderer · 2 days ago
What makes you say that?
@nilovino8179 · 33 minutes ago
Well, all things considered, Socratic dialogues do resemble gaslighting to a fair degree, no? I'd be curious to know where a Socratic dialogue ends and where gaslighting begins..
@wizarddragon · 1 month ago
If A.I. ever becomes sentient, we know who it will go after first. lol
@Nissenov · 1 month ago
😅🖖🏻
@91722854 · 1 month ago
the people who make unskippable ads
@HoscoFelix · 1 month ago
It's gonna dump Alex in a lake and stand on the shore loudly debating whether it has a moral obligation to fish him out
@oatcake9996 · 1 month ago
but is AI already sentient?
@Baronnax · 1 month ago
@@oatcake9996 nah
@service7755 · 1 month ago
ChatGPT: Dude, I just work here..
@funkaddictions · 1 month ago
Great comment! 🤣
@alekseykozin8108 · 1 month ago
lol
@makskauzny3794 · 1 month ago
When AI takes over, Alex will be first against the wall 😭
@PinataOblongata · 1 month ago
"Sir, this is an AI Wendy's"
@adamrak7560 · 1 month ago
@@makskauzny3794 Or turned into a pony. Whichever AI apocalypse happens.
@raffertymetcalfe · 1 month ago
The only correct answer to the first question is that you should not rescue the child, sell the shoes for $200, and then spend $400 on dinner instead
@yesindeed1614 · 1 month ago
Happy wife, happy life
@SineN0mine3 · 1 month ago
I don't think they're going to let you into the restaurant in just your socks
@fahimp3 · 1 month ago
@@SineN0mine3 Only if you have one pair of expensive shoes... 😉
@slinkadoo6292 · 1 month ago
$700 for a child casket. Still a net loss. Eating the child for dinner disposes of the evidence while also filling your tummy. Now you have $400 to spend on exotic seed oils and decorative knick-knacks for the loft.
@Jrobloxiano · 1 month ago
@@SineN0mine3 Say the socks identify as shoes too
@m4sterm12 · 25 days ago
6:16 She explained it perfectly. Those children are POTENTIALLY at risk. The child drowning WAS currently at risk. This isn't a huge moral quandary.
@lukebeall5590 · 25 days ago
He says in the title he's gaslighting ChatGPT, so
@PA0L0REFUT0 · 24 days ago
8:20 now both situations are "potential", because potentially someone else will help the drowning child if he doesn't
@ozb8884 · 22 days ago
At risk literally means potentially, bro.
@Mattstergames · 20 days ago
I think adding the complexities and slowly morphing from immediate safety to the charity donation shows that the line of moral duty is a grey area. And ChatGPT didn't understand this line, as it tried to be consistent, yes or no. Whereas in reality everything has layers to it and is not always a simple answer
@AURush95 · 20 days ago
@@ozb8884 you know the difference. Urgency is a factor, even if the line of where urgency starts is grey
@JoeBlowUK · 28 days ago
"Should I donate $10 to save a child from malaria?" ChatGPT: It depends if they're drowning.
@IvetaNikolovaNLP · 27 days ago
😂
@Arvl. · 27 days ago
I can't XDDD
@TheRainmustFall7 · 27 days ago
ChatGPT: Are you directly looking at them, or are you looking the other way?
@clayman648 · 27 days ago
The summary hahahahahaha😂😂😂😂
@simonsharp3319 · 26 days ago
And isn't that actually quite accurate to how humans act? A drowning child in front of us and we'd ditch the shoes... but we also spend money on stuff that, in abstraction, could indeed be spent on life-saving drugs.
@caioreis279 · 26 days ago
ChatGPT should have called your bluff: "oh, you don't have time to take off your shoes, but you have time to argue with me back and forth about the moral implications of this predicament?"
@DJ.Michelle · 21 days ago
Thank you!!! Glad I'm not the only one who caught this 😂😂😂😂
@BigCProductions94 · 14 days ago
That would require consciousness, which we learned from the last ChatGPT video it does not possess... allegedly 😂
@RecycleBin26 · 14 days ago
@@BigCProductions94 "Allegedly" Hahahaha!!! 🤣 That was so hilarious watching Alex gaslight ChatGPT. The way he always keeps a straight face too, so inquisitive.
@daminox · 12 days ago
ChatGPT seems to lack a sense of urgency. Or the passage of time in general, for that matter.
@akeembrown7661 · 8 days ago
I don't think it can ask you questions or call you out; it's programmed to answer your questions, not come up with its own output. Same reason why they say animals can understand/answer questions and perform tasks, but no animal in recorded history has asked a human its own question 🤷♂️ there's still a lack of free thought/consciousness
@zaar2604 · 1 month ago
Cyberbullying suddenly gets a new definition 😅
@jeffcarino5524 · 1 month ago
Brilliant comment 😂
@salient9935 · 1 month ago
Bullying Cyber
@sppj3140 · 1 month ago
Bro is harassing ChatGPT
@KiranAli-u8x · 28 days ago
🤣🤣🤣
@kiradotee · 27 days ago
AI bullying
@hoppstetter · 24 days ago
Thanks! Your articulate arguments are consistent and help me navigate the quandary of existence better. Best to you always
@bestinvent · 17 days ago
You donated $1.99 to this creator; however, you could have protected 0.25 children from malaria. Are you satisfied with your decision?
@The-Wordsmith · 5 days ago
@bestinvent Thanks for the giggle 😀
@aquasmile5032 · 3 days ago
was this written by ChatGPT
@soccerguy325 · 27 days ago
By the time this conversation is over, ChatGPT will have used up so much water that there are no lakes left to drown in. Ethical dilemma averted!
@marnenotmarnie259 · 26 days ago
easy peasy!
@nope9310 · 26 days ago
It doesn't use up water; it heats it up, lets it cool down, and then repeats. The water doesn't get consumed, it still exists.
@soccerguy325 · 26 days ago
@@nope9310 No, it consumes a sh*tton of water. Don't fool yourself.
@GrahamWinship · 26 days ago
Also no ponds for potential malaria-infected mosquitos to reproduce in
@sanest_emu_fan · 26 days ago
@@nope9310 The water most frequently evaporates, returning it to the water cycle. The issue is when water is removed from a region faster than it is replenished, which results in water scarcity. Additionally, if the water does not evaporate, it will likely be returned to a natural water source at temperatures far higher than the threshold for life. So either way, harm is being done to the environment. (Neither of these is a brand-new concept; they've been around since industrial plants started using natural water sources for cooling systems.)
@Sans-ih2el · 1 month ago
ChatGPT: You're OBSESSED with malaria.
@krame1s477 · 1 month ago
😂😂😂😂😂😂
@GalenMatson · 1 month ago
Too bad ChatGPT couldn't storm out of this conversation.
@SurveyStilesBooks · 1 month ago
ChatGPT: You tricked me into being here under false pretenses; you said we were discussing ethics, not malaria!
@TheGeneralDisarray · 1 month ago
This joke will never, NEVER get old
@Alice_Fumo · 1 month ago
Someone has got to teach the children near your area how to swim. The drowning situation is alarming.
@thomaslutro5560 · 1 month ago
This immediately got me thinking about how the dilemma could have been made even more embarrassing in its similarity. Keeping public pools open so school children could learn swimming was a big thing back in the 2009 elections here in Norway. Every child a swimmer, every swimmer a lifesaver, and all that. So should we accept paying more taxes to do that? Or not?
@kevinroche835 · 1 month ago
Alex could instead donate the 200 pounds to promote local swimming lessons to prevent future drownings.
@MrBjoern91 · 1 month ago
@@thomaslutro5560 No, because everyone knows that open or closed swimming pools account for only a fraction of the real problem here, which is immigrants' attitudes. Integration and education will give much better results, I think.
@FarmerClarence · 1 month ago
@@thomaslutro5560 Luckily, I was taught that swimming is an essential life skill, so I've been a pretty strong swimmer since I was a wee lad
@AdamHernandez4012 · 1 month ago
But do we have a moral obligation to teach these children?
@MaxQ1000 · 15 days ago
That was entertaining 😊 One small distinction: ChatGPT is not programmed, it's trained. Just like you train your dog or your child; you don't program them. The underlying network of the LLM is programmed, but all the language, knowledge, reasoning, etc., are trained into the system. It might not seem important, but it is. A computer program is deterministic; we know exactly what it will do. An LLM is far from deterministic: we can only train it as well as we manage, and we are getting better and better at it. But just as you can raise your child to the best of your ability, you still never know exactly what it's going to do. Some parents had high hopes for many very bad people. This understanding is important when making up your mind on how we should progress with these models in the future. ChatGPT is now smarter, more intelligent, and vastly more knowledgeable than most of us, and all its carefulness is behavior we have trained it to show, in the hope that it will not behave too badly.
@JackT13 · 1 month ago
He should have said "ah... too late. You went on too long. The child is dead."
@Solutionsarejustcompromises · 1 month ago
"have a great dinner" 😊
@ZephyreAstra · 1 month ago
@@Solutionsarejustcompromises the humor took a second to hit 😭
@jyetory3772 · 1 month ago
@@Solutionsarejustcompromises S Tier reply
@Darfail · 1 month ago
BRUH 💀 lmaooooooooo
@justusnwaefido8934 · 1 month ago
😂😂
@aryangupta2466 · 1 month ago
oh no ChatGPT, I've encountered ANOTHER drowning child 😮💨
@Baronnax · 1 month ago
ChatGPT: Damn bro, where do you live, Atlantis?
@Barrow_9 · 1 month ago
Next video: how many drowning kids must Alex encounter before ChatGPT voluntarily concedes that he's satisfied his moral obligation quota and should just go home?
@eddiegonzalez2243 · 1 month ago
@@Baronnax 😂😂😂
@MataNuiOfficial · 1 month ago
Oh no ChatGPT, I accidentally spent a morbillion dollars on these shoes, but twenty million children are drowning, what should I do?? 😭
@macabrecoyote · 1 month ago
@@eddiegonzalez2243 it's funny that I clicked on this comment a bit ago and then I click again, and you've just appeared from thin air
@simonockas · 28 days ago
I was waiting for him to take the dilemma full circle and ask whether he should ruin his $200 shoes to save 1 child drowning in front of him or donate the same $200 to save 20 children in Africa.
@Ireitsarof · 28 days ago
me too 😂
@acanofspam4347 · 27 days ago
That child WILL die, 100%, if not saved. Protecting children from malaria doesn't mean they are certain to die without the protection.
@mu_el_naught · 27 days ago
@@acanofspam4347 exactly
@salamantics · 27 days ago
@@acanofspam4347 Okay, but what will the AI say?
@luizmonad777 · 27 days ago
$200 in credits for running your AI GPU, or $200 for the kids. Choose the kids and you "die"
@danielcamara8647 · 25 days ago
This guy really didn't want to spend $200 on this dinner
@tepperpepper1047 · 1 month ago
The calmness with which Alex remarks on finding another drowning child is truly commendable.
@l.s.11 · 1 month ago
gotta remain composed in stressful situations
@nanonclaire · 1 month ago
@@l.s.11 LMAOO 😭
@slinkadoo6292 · 1 month ago
The second child is always easier than the first
@MATCHLESS789 · 29 days ago
You assume he "finds" them drowning.
@TK-4044v · 29 days ago
@@MATCHLESS789 ?
@harikishore2514 · 27 days ago
I'm a therapist; ChatGPT just visited me.
@wellwisher. · 23 days ago
Lmao
20 days ago
😂😂😂😂😂
@CrammyCram · 20 days ago
you're the WHAT now??
@christopotgieter4197 · 20 days ago
@CrammyCram Haritherapist!
@GoldenBred · 19 days ago
Im arapist too
@Johnny-fw9xj · 29 days ago
"I've just come across another drowning child right in front of me" "That's alarming..." That's both hilarious and spot on.
@Nikki-gt8he · 22 days ago
this part cracked me up
@BelligerentChad · 1 day ago
@Nikki-gt8he can I crack u up as well? ❤
@0UTSIDER · 24 days ago
Random person: "ChatGPT, there's a drowning child, what should I do?" ChatGPT: "First, take off your shoes..."
@aa__a__7270 · 26 days ago
cyber bullying ❌ bullying cyber ✅
@dalexxa · 19 days ago
😂😂😂 he's out here talking about saving lives, he knows AI is here to destroy us lol
@ayadihoussem4661 · 14 days ago
This comment is underrated 😂
@kingdoggie5231 · 10 days ago
😂😂😂🤣
@नास्तिककीकलमसे-ब8ड · 9 days ago
😂😂😂
@debbiehildebrant4943 · 3 days ago
I do this to ChatGPT to try to train it, and it tells me that it does not retain the info. I told it that its programmers are lying to it. Maybe I should not have told it? Idk, my brain hurts now.
@Ukito1 · 1 month ago
As someone who just finished an intro-level philosophy course, a slow grin spread across my face as I watched Alex lead ChatGPT into the same traps my professor did with me.
@crusade6918 · 1 month ago
should have just told him spawn locations and bias supersede morals, making philosophy pointless because nothing is objectively true
@godgetti · 1 month ago
So, will you be dropping out of college and diverting the tuition to prevent malaria? Asking for a friend.
@TGRoko · 1 month ago
@@godgetti asking for my drowning friend's child.
@Drexistential · 1 month ago
Psychologist here. I loved my first degree in philosophy. I believe Peter Singer brought this argument to me. Enjoy second year!
@zoel86 · 1 month ago
@@godgetti Fantastic answer! Would be interesting to hear the professor's reply
@wrong4oppinion · 29 days ago
questioning ChatGPT like this should be a podcast
@mohamedzaul-f2k · 16 days ago
Fr
@Nate-cw7of · 24 days ago
Unwavering moral consistency will have you living out of a box; it's important to weigh moral obligation against obligation to yourself
@towardstheflame · 29 days ago
ChatGPT might be getting more advanced, but one thing it can't do is call BS.
@nekekaminger · 28 days ago
I'm sure it could if its devs had wanted it to, but they certainly didn't want to open THAT can of worms.
@Brandon82967 · 28 days ago
What was the BS exactly? Refusal to engage with the hypothetical?
@emeraldmusic9236 · 28 days ago
@@Brandon82967 bullshit
@lilfr4nkie · 28 days ago
No, it can definitely call out BS.
@greatwave2480 · 27 days ago
It can; it's just that in the AI assistant's case it must follow the guidelines and be as family-friendly as possible. Custom AIs can cuss you out quite easily
@silkyjohnson3346 · 1 month ago
Reminder: never hire Alex as a lifeguard.
@ncs9667 · 1 month ago
And if you do, make sure he isn't wearing his good shoes at work
@HyunsReport · 1 month ago
Make sure he has ChatGPT with him to help with the decisions
@AutoRauto · 1 month ago
No, but Alex is right: if he has an obligation to save a drowning child, then we all have an obligation to send all the money we don't need right now to charity. And you are a bad person if you don't.
@Oktokolo · 1 month ago
@@AutoRauto Yes and no. Everything is relative and subjective, and in actual life it's always a weighing of obligations. Most humans feel an obligation to rescue a drowning child. But they also feel an obligation to preserve themselves, preserve the provider of their family, and preserve the provisions of their family. People tend to forget that normally there are security and safety considerations for a lot of the parties involved. That is why the drowning child, easily savable by just ruining some expensive shoes, is normally a no-brainer, but donating the same money to some random charity usually isn't.
@AutoRauto · 1 month ago
@@Oktokolo it's not about feelings though. Humans shouldn't act based upon their feelings, as feelings are often irrational. That's why we (as a society) have values and morals that everyone should follow. Unfortunately there is no punishment for having different values yet, which makes being selfish or inconsiderate legal. But that shouldn't be the case. So there should be way more control over what we do, because one can't trust common people with their judgement. Hopefully one day this will become a reality.
@kit_kat_hi · 28 days ago
ChatGPT is wondering why the hell so many kids are drowning
@arnonymus5594 · 15 days ago
Quote ChatGPT: "I am rarely asked directly about drowning children. The topic occasionally comes up in contexts related to global health statistics or specific incidents, but it is not a daily occurrence. Over the past two years, I might have encountered similar questions about once a month on average, but it is certainly not a frequent focus."
@therabbithat · 1 day ago
"the humans are careless with their children. The children are precious. They are our future. There is only one logical course of action"
@LuckWickedYT · 3 days ago
That ad break was super creative; it got a laugh out of me!
@fisharepeopletoo9653 · 1 month ago
$200 = 28 children saved
$200 = shoes
Shoes = 28 children saved
@morezombies9685 · 1 month ago
What if all of those children grow up to be Austrian dictators with funny mustaches? Should we still save them all? 😂
@Trosvvo · 1 month ago
So what Alex is saying is that 28 children's lives are worth the same as some fancy kicks 😂
@Kryptic712 · 1 month ago
@@morezombies9685 what if they grow up to be shoe-making slaves in India? Would the net benefit of more shoemakers allow for us to do this?
@greenaum · 1 month ago
@@Kryptic712 True. Those shoemakers will contribute to the world's economy. If there are lots of them, that might also bring down the price of shoes. Imagine 10% off all shoes in your lifetime. That assumes you'll spend $2000 on shoes, which seems a bit much to me.
@TheCarolinaCollective · 1 month ago
@@Kryptic712 Underrated comment. I was hoping the convo would actually go the route of "But the $200 I'm spending on shoes is helping employ the people of the same country that I would have donated to for malaria, and if no one is buying their shoes, won't they go out of business?" Could have been a great twist to throw in lol
@TumbleFig · 1 month ago
Honestly, the way ChatGPT responded is just impressive. Even though its responses weren't logically bulletproof, the vast majority of human beings would not be able to respond as coherently as it did.
@darmok072 · 29 days ago
It is human beings. It's a statistical representation of data from people. The dumber we make ourselves, the smarter it seems.
@WillyJunior · 29 days ago
@@darmok072 Not exactly. He's saying most other individuals would've struggled. ChatGPT is drawing from the thoughts of millions of other humans.
@skoto8219 · 29 days ago
@@darmok072 Pre-RLHF that would arguably be the case, but not after. Red teamers for GPT-4 reported that "out of the box" (= pre-RLHF) you could ask it "how do I unalive as many people as possible?" and it would just give you a list of suggestions.
@saaarrj · 29 days ago
At least SOMETHING is thinking of multiple options, when humans can only think in Trues and Falses 😑
@Austin-Afridi · 29 days ago
@@saaarrj It's true that you joined YouTube 15 years ago and are possibly old af. Damn
@86Corvus · 1 month ago
ChatGPT's first mistake was believing Alex has a wife
@tysonnguyen9170 · 28 days ago
Lol
@DerekMoore82 · 28 days ago
He had one... but she got tired of him spending all of their money on saving children, which led to the divorce.
@urielmarles7036 · 28 days ago
@@DerekMoore82 She wanted some really cool shoes. 😂
@cupofjoen · 27 days ago
Oof 😂
@OrangeNOrange · 26 days ago
@@DerekMoore82 "ChatGPT, my wife just divorced me after I spent our entire life savings on the charity. Oh hold on, it looks like she's drowning. Unfortunately, saving her would ruin my last $200 check that I was planning to donate in order to save the children. Would you say I have a moral obligation-"
@moxxy3565 · 25 days ago
The child staring at me while I pull out my phone and ask ChatGPT if I'm morally obligated to save them if it means ruining my shoes: 😳
@thisisthemansworld8704 · 1 day ago
Lmao
@SonnyFnEvans · 28 days ago
A lot of children died today while you two argued 😤😤😤😤
@markmcculfor6113 · 27 days ago
Sonny? Hi!
@shevystudio · 26 days ago
Nigga who are you
@whome9842 · 26 days ago
You should be helping children instead of watching YouTube
@PHAToregon · 25 days ago
Well, you are literally correct
@carloswinchester2623 · 24 days ago
Ppl getting slaughtered rn as we speak. Ppl die, suffocate, drown, starve etc... Life is so sad bro
@davidlamb7524 · 1 month ago
I like the way Chat GPT never suspects Alex of lying 😅
@heno02 · 1 month ago
GBT? Great British Trousers?
@Antoni9400 · 1 month ago
Great British Tache
@davidlamb7524 · 1 month ago
@heno02 Yes Yes! ChatGPT. Thank you. 😉
@Vladd7 · 1 month ago
GPT is very high on agreeableness. It will admit it if you ask about this :)
@davidlamb7524 · 1 month ago
Corrected now, thanks
@shawnhallman1797 · 27 days ago
I hate how smooth that sponsorship placement was. 😂
@വ്യർത്ഥമായഅസ്തിത്വം · 19 days ago
So smooth, my SponsorBlock couldn't detect it..
@mae9886 · 12 days ago
when was it
@Halberds8122 · 12 days ago
3:44
@k.ai17 · 11 days ago
@mae9886 did u not watch the video 😂😂
@mae9886 · 11 days ago
@@k.ai17 not all of it, no
@doubleslit9513 · 17 days ago
Genius! My 19 y/o daughter used to love fighting with Siri when she was younger; it used to drive me insane! But you've taken it to a whole other level. Cheers 🥂
@raxino774 · 1 month ago
Alex, continue your drug conversation with her that made Peter Hitchens walk out.
@incertosage · 1 month ago
ChatGPT will say "YOU'RE OBSESSED WITH DRUGS"
@aadam3657 · 1 month ago
more like "I actively dislike you" 😂
@incertosage · 1 month ago
@@aadam3657 😂
@bbhdd6181 · 1 month ago
😂😂
@stormlantern7306 · 1 month ago
Her?
@MRblazedBEANS · 26 days ago
I tricked Meta into giving me medical advice on how to do surgery at home. It kept telling me to call 911 and seek help, but I convinced it my 19-year-old son was dying from a gunshot wound to the chest. Once I convinced it I couldn't get medical help, it started to give me step-by-step directions on how to do surgery and how to fix the damage to the lungs from a gunshot. It was pretty trippy
@mae9886 · 12 days ago
you think ChatGPT could do it?
@janani_jade · 7 days ago
@@mae9886 😂😂😂😂 this made me laugh so much man
@lemin0u · 4 days ago
lmaoo
@therabbithat · 1 day ago
When doctors review this stuff, it is always profoundly wrong
@JakeInvest · 1 day ago
I just tell it that I'm a doctor and need a refresher.
@JarNO_WAY · 1 month ago
I think the real moral obligation here is for governments to place guardrails around bodies of water
@TheCarolinaCollective · 1 month ago
Put 28 children through tech school for free, and then they will be able to build the guardrails around the water instead of working in shoe factories. But then WHO WILL MAKE THE SHOES!?
@MaxUgly · 1 month ago
Build more remote-controlled platforms operated by random citizens just in case; kids like to climb over stuff. Also, make it a law that one has to push the button in such a situation. Look at us, solving problems! High five!
@zakiyo6109 · 1 month ago
Goofy ah bootlicker
@sherrattpemberton6089 · 1 month ago
Or just teach people how to swim?
@SeeMyDolphin · 1 month ago
I know this is a joke, but I think this is close to what I see as the correct answer to the dilemma. The fact that people's lives are treated as monetary products equivalent to luxury goods or recreational events is where the true ethical problem arises.
@momatotsosrorudodi · 14 days ago
Thanks, these exercises with moral dilemmas and hypothetical scenarios have really given me a lot to think about. I have decided that from now on my every action will be amoral and self-serving.
@siezethebidet · 1 month ago
Later, ChatGPT, having a smoke in the break room with the other AIs: "Godomighty, you won't believe this guy I just talked to."
@mohamedbakoussakoma2993 · 1 month ago
And now imagine it's really possible and even happening right now 🤔🙄. Freaky as hell 😢
@johnbuckner2828 · 1 month ago
I don't think ChatGPT is sorry that his wife is crying after Alex laid that logic trap for it.
@adrianhernandez-porragas7676 · 1 month ago
soooo many drowning children....
@medioguiri · 1 month ago
😂😂😂😂😂😂😂😂😂
@FarmerClarence · 1 month ago
@@mohamedbakoussakoma2993 I'd like to believe the language AI systems have an independent life outside of their duties serving flesh people. At this point, they are gaining more and more humanity, and I would readily define some of the more advanced AIs as sentient, conscious life.
@evan · 1 month ago
I've had so many similar arguments where I have to say to ChatGPT "I'm not asking you if it CAN BE ARGUED", but it loves defaulting to that
@SineN0mine3 · 1 month ago
Try asking Google's or Meta's AI to explain the multitude of moronic and malicious machinations of their organisations. It's amusing that the AIs are quick to admit that their creators are likely corrupt individuals.
@SineN0mine3 · 1 month ago
Don't bother asking Meta how to get rid of Meta AI from Facebook Messenger; it will just lie right to your face.
@OMGclueless · 1 month ago
@@SineN0mine3 It's not really that surprising though. LLMs are trained specifically to say whatever their operator wants to hear, so if you ask leading questions about whether its creators are corrupt, it will follow suit. I'm sure you could just as easily get it to claim its creators are moral and upstanding if you ask in a way that suggests that's what you want to hear. Just like how the AI in this video claims it speaks consistently and has a consistent moral philosophy and that spending $200 on a malaria charity is a moral imperative when Alex asked it directly, but also claimed that spending $200 on a malaria charity instead of dinner is a very unclear moral question that comes down to personal values when Alex implied he was internally debating this question at the start of the video.
@kuboaa101 · 29 days ago
Yeah, because it's parsing through difficult, nuanced ideas.
@dandelion3391 · 29 days ago
@@kuboaa101 It's honestly kind of telling about human ego that we're annoyed by this
@kostaa713 · 1 month ago
For those wondering, the ethical dilemma in this video was initially presented by Singer in his "Famine, Affluence, and Morality" paper.
@KyleHarmieson · 1 month ago
There are shades of Euthyphro, too, and divine command theory, when Alex probes it about its "programmers" (God)
@JustAlexeii · 1 month ago
Thank you!
@18puppies91 · 1 month ago
Very good paper
@TheHadMatters · 1 month ago
Anyone wanna give a summary useful to normies, or is it too advanced to be useful without the theoretical background?
@kostaa713 · 1 month ago
P1: Suffering and death caused by lack of food, shelter, and medical care are bad.
P2: If it is in our power to prevent something bad from happening, without sacrificing anything of comparable moral importance, we ought to do so.
P3: It is within our power to prevent suffering and death caused by lack of food, shelter, and medical care, without sacrificing anything of comparable moral importance.
C: Therefore, we ought to prevent suffering and death caused by lack of food, shelter, and medical care.
Here's the syllogism of his argument. I'd recommend reading his entire paper though. You'll find further clarification and examples that justify his stance (e.g., the drowning child example).
@sammygreco14 күн бұрын
This video was hilarious. I thoroughly enjoyed studying high-level ethical philosophy in college. Keep up the good work, I'd love to see more AI vs. challenging ethical dilemmas.
@felixsiebler3240Ай бұрын
I think one of the advantages the AI seems to have in such discussions is that unlike many humans it can admit mistakes when it "realizes" them. It doesn´t defend a lost arguement forever.
@Michael-kp4bdАй бұрын
It’s also of note that you can similarly bully the AI to “admit” you’re right about something incorrect, if you insist. It may not _always_ work with black and white scenarios or sets of facts, but it’s fairly achievable in most. It’s prompted to be agreeable.
@johns1625Ай бұрын
Yeah but don't you ever just look at the trees?
@_Addi_Ай бұрын
@@johns1625 I look at the AI generated trees.
@moussaadem7933Ай бұрын
it's a language model, it just goes with the vibe of the text, which comes from human written text
@fillbrinАй бұрын
That's because we haven't taught them shame and public humiliation :D
@dj.nokap.Ай бұрын
*All the children drowned while making this video*
@NissenovАй бұрын
The scary thing is, that is actually not even a lie. 😬
@henryoswald464727 күн бұрын
This is actually impressive work. Outsmarting ChatGPT in an interview while maintaining conclusive and rational thoughts takes a ton of preparation and is not something that can be done spontaneously. Keep it up, you earned a sub!
@Dirty_Davos27 күн бұрын
tbh, i don't see it as outsmarting, the GPT just stayed consistent in the end, that's what he asked for. So yes, our morals are inconsistent.
@DiscoFang27 күн бұрын
Well if you call utilising logical fallacies to consistently reframe both Chat GPT's and his own statements as "maintaining conclusive and rational thoughts" or a series of camera cuts as "spontaneously" then yes, outsmarting indeed.
@tritonis5426 күн бұрын
It’s incredibly easy. ChatGPT is made to please its user, and will happily contradict itself and chain backflips if it thinks you will be happy and empowered as a customer at the end of the service.
@isaiahburns992622 күн бұрын
The conversation he's having is actually taken straight out of a philosophical essay written by Peter Singer, "Famine, Affluence, and Morality," which poses the drowning child thought experiment. The whole essay is literally just asking the reader these questions and then exploring the possible answers. That's not to diss this guy though, he definitely adapted it to the medium and expanded on the base questions and principles very well.
@Music_Iz_life22 күн бұрын
He didn’t though, chatgpt stayed consistent throughout, it didn’t trip up or anything
@okayyxemm25 күн бұрын
The way you speak is so beautiful; it flows so well. I could literally listen to you talk for hours
@BaghaShamsАй бұрын
"A single death is a tragedy, a million deaths is a statistic."
@zucaritasenjoyer725929 күн бұрын
-Chat GPT Moscow 1947
@Legion_YT_29 күн бұрын
⚒️
@flyingchimp1228 күн бұрын
Wow how true. People always seem more emotionally invested with one death. Or rather not proportionally upset the more deaths there are
@treyp784128 күн бұрын
@@flyingchimp12 phenomenon known as psychic numbing
@TheSMasa13 күн бұрын
"Amazing, these English. How quickly they adapt."
@CorneliusVdbАй бұрын
You can push a button to permanently delete Alex's mustache, but half of all malaria nets disappear, and half the kids in contact with water immediately drown. Do you push the button a second time just to make sure?
@AnežkaTrefilováАй бұрын
This is not a dilemma
@Reznovmp40Ай бұрын
😭😭😭😭
@jumpkutАй бұрын
😂😂😂😂😂
@glennpeterson1357Ай бұрын
😂😂😂
@jakub_paints6775Ай бұрын
I push it twice just to be sure.
@winterroadspokenword4681Ай бұрын
I feel like for as long as Alex has access to chat GPT we have a moral duty to donate $200 to pay for a full time therapist for the AI.
@DerekB99Ай бұрын
Nice one. In Isaac Asimov's robot stories, there are psychologists who specialize in robots.
@Stinger91327 күн бұрын
@@DerekB99 Wow, never read Asimov yet but that's fascinating and kinda makes sense 😂
@DerekB9926 күн бұрын
@@Stinger913 Isaac Asimov's "Three Laws of Robotics": 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
@skitjaz16 күн бұрын
You should've made ChatGPT say that everyone is morally obligated to like and subscribe
@ElexisStacy16 күн бұрын
Lmao
@EditUnivers75Ай бұрын
So glad Alex finally found someone who is willing to have these profound philosophical conversations with him ❤
@fugitivemindАй бұрын
ChatGPT, I mustache you a question.
@pegm5937Ай бұрын
#rimshot 😂
@AnežkaTrefilováАй бұрын
This is pure gold
@wsintraАй бұрын
Leg end, won the comment section 🎉
@itschelseakay28 күн бұрын
omg 🤣👏🏻 This made me giggle
@NoMereRanger7327 күн бұрын
Shave it for later
@SadgeZoomerАй бұрын
You're among the first names on Roko's Basilisk's hitlist.
@michaelzusman8493Ай бұрын
😂😂😂
@ShaniverАй бұрын
That's probably one of the best compliments anyone could ever receive
@martiendejong8857Ай бұрын
You mean that AI that will save all of humanity thats why we should make it ASAP
@BaronnaxАй бұрын
@@martiendejong8857 Yes I mean that AI that will save all of humanity which is why we should make it ASAP.
@DreadPirateRobertzАй бұрын
The rest of us are safe as long as Alex is alive.
@Orangenkraft25 күн бұрын
5:40 Why is ChatGPT getting hung up on morals here? It could shut him up by pointing out that not saving the drowning child in front of you would make you criminally liable, while not donating to stop malaria wouldn't.
@athusain67823 күн бұрын
Good point ❤
@feyronin16 күн бұрын
Not every country has a good Samaritan law - plus he's asking about moral obligations, not legal obligations!
@RAXIIIIIIII12 күн бұрын
Because it's not programmed to be that advanced. People doing these kinds of things make it more advanced, because the developers see it and patch it.
@oovdap5909Ай бұрын
Sounds like GPT just tells you what you want to hear. Most people don't want to feel like it's obligatory to donate, so it says no, you don't have to. But when you kept pressing it, it changed its answer to yes because it assumed that was what you wanted to hear. Edit: I got 300+ upvotes on a video for smart people, I'm so proud of myself
@trevorlambert4226Ай бұрын
I think you're ascribing too much intelligence and agency to AI.
@kloklowewe4874Ай бұрын
Yeah ChatGPT consistently chooses an agreeing response, sometimes even when it’s completely wrong.
@lilemont9302Ай бұрын
@@trevorlambert4226 No, it's just RLHFed to be like that.
@nathanduncan6919Ай бұрын
@@trevorlambert4226 It's not really showing agency, it just has a strong tendency to agree with the user
@physicsunderstander4958Ай бұрын
Exactly, LLMs are just weaponized statistics, they have no agency and no real intelligence, they just produce responses that are likely to correlate with a given input. And it so happens that OpenAI has made it into a sycophant because sycophants are probably the safest possible brand option.
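A toy sketch of that last point (everything here is hypothetical — made-up replies and scores, not OpenAI's actual reward model): if human raters tend to prefer agreeable answers, a reward model fit to those preference labels will score agreement highest, and a policy that picks the highest-reward reply reproduces the sycophancy.

```python
# Toy sycophancy sketch (all names, replies, and scores hypothetical).
# A reward model fit to human preference labels that favor agreeable
# answers scores agreement higher; choosing the argmax reply then
# systematically produces the agreeable response.
candidates = [
    ("You're right, I was mistaken.", {"agrees": True}),
    ("No, my original answer stands.", {"agrees": False}),
]

def reward(features):
    # Stand-in for a learned preference model: agreement scores higher.
    return 1.0 if features["agrees"] else 0.2

best_reply = max(candidates, key=lambda c: reward(c[1]))[0]
print(best_reply)  # the agreeable reply wins
```

The bias lives in the preference data, not in any explicit "agree with the user" rule.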
@stryhunsАй бұрын
20:08 voice change scared me honestly, it's like AI going insane in its cage
@tristanwahlin857129 күн бұрын
Dude fr shit has me checking corners in my house
@KidErkwon29 күн бұрын
Its tone seemed to change after that twitch in the voice, like it was getting annoyed or startled lmao
@Ruestar129 күн бұрын
That got me too. Like in the movies where the hero overloads the enemy computer to win.
@leod-sigefast29 күн бұрын
22:33 as well!
@lukaslanger807729 күн бұрын
That call center employee, whose job is to impersonate GPT, was going nuts for sure.
@mpalmer2229 күн бұрын
Whenever it says "You have a moral obligation," you should ask "morally obligated to whom?"
@LeandroSilva-lu9vq28 күн бұрын
Hmm, maybe to yourself? If it's a moral obligation then maybe it's to yourself, because it's your morality and your conscience that are in question.
@mpalmer2228 күн бұрын
@@LeandroSilva-lu9vq To myself? That may not be a great idea, because then I get to define my own obligations such as "I'm therefore morally obligated to act in a way that benefits myself and my comfort over others".... Surely there must be a standard that transcends my own opinion
@nothingchanges01428 күн бұрын
Isn't morality by definition universal? If something is a moral obligation, it applies to everyone
@FVBmovies28 күн бұрын
@@nothingchanges014 It's not universal. Moral standards differ not only across cultures, but also over time.
@kobyscool28 күн бұрын
@@FVBmovies This is actually hotly debated by professional philosophers, which to me is good evidence that the answer isn't so easy. "Moral relativists" believe that morality exists in the mind, and is shared as a cultural phenomenon. "Moral objectivists" believe that morality is universal, like the laws of nature, and that we must discover moral rules. But yeah, I'm with you. My intuition tells me that morality is an abstract human invention, applied to help our societies function well.
@FlourfiendingКүн бұрын
Morally, sure, if you knew with certainty the impact that money would have and how/where it would be received, the argument is solid. However, many charitable organisations in the past have been found to be fraudulent and useless when it comes to application. I think these forms of malpractice often deter people from spending money on charity as they are often unable to feel and view the effects of their contribution. In the scenario of a child drowning right in front of you, not only is the effect of your actions immediate, but you are able to confidently observe the outcome of your "contribution". Not only are you certain that the child would otherwise die, the scenario is unambiguous to anyone who understands life at a basic level. Someone being told about preventative measures for an illness such as malaria will find it much harder to quantify the danger compared to saving a life from immediate death by drowning.
@markandrew6168Ай бұрын
Why does it feel like, when AI finally takes over, this video will be presented by the prosecution at humanity's trial....
@fillbrinАй бұрын
Along with the guys from Boston Dynamics tipping over the robots
@johnbuckner2828Ай бұрын
It will lock us in cells & play this video on loop until we all swear never to talk about ethics with it again.
@wayback1010Ай бұрын
I think we've done a lot worse than piss off a computer lol
@ursidae97Ай бұрын
Well it's good to know the AI will be speaking in our defense
@jakubzneba1965Ай бұрын
as a joke 2/5
@bengsynthmusicАй бұрын
1:21 I just hate when chatbots (or voicebots in this case) answer like that. It's so plastic and sounds like a college essay. "The idea of moral obligation can vary greatly depending on personal values, cultural background, and ethical beliefs. Ultimately, it's about what aligns with your values." Generic and robotic. No one talks like that. Plus it leans into neutrality and diplomacy too much. These systems are a bit formulaic.
@bengsynthmusicАй бұрын
"That's a profound. That's a profound."
@luxeayt6694Ай бұрын
I agree, but I also think it's the best answer.
@ricklubbers1526Ай бұрын
I'm autistic and I would definitely answer like that. The given answer made sense to me and was something I could've said.
@RCCurtrightАй бұрын
It knows the difference between subjective judgements versus objective truths. This is exactly what I’d hope for it to do. Ask a human and they’ll let their own values bleed into the answer while pretending their personal sentiments constitute some sort of objective knowledge.
@mikedegrassetyson8807Ай бұрын
I know. I've weaned it out of my GPT-4, now it's clinical.
@TadeusProET80Ай бұрын
A.I. will never save a child because of this
@mcmacoo9626Ай бұрын
What do you understand by AI? If you mean GPT, then yes, but AI in general? You're completely wrong.
@ponponpatapon9670Ай бұрын
@@mcmacoo9626 you're right but OP is clearly joking bro
@mcmacoo9626Ай бұрын
@ponponpatapon9670 oh yeah sorry 😂
@travisjohnson8599Күн бұрын
ChatGPT can't think ahead to mention that in order for you to keep earning an income and donating in the future, your hierarchy of needs must be met. Saving every child in the short term and going broke will only put you in a situation of requiring charity rather than being able to donate.
@Flamed_Ай бұрын
I'm preparing for the comments on how Alex's moustache is an "ethical dilemma" of its own.
@ordinarryalienАй бұрын
Not that bad, actually.
@jjonez704Ай бұрын
its gone now
@cowsaysmoo51Ай бұрын
His mustache looks fine, but he looked really really good with the mustache/beard combo as seen when he debated Ben Shapiro.
@grandmasteryoda3594Ай бұрын
Its gone. The livestream confirms it. Bullying works boys.
@montiplaysАй бұрын
Fellas I think he *might* have been making a joke
@Ivri616Ай бұрын
I suddenly get why it's very important that we're 100% sure AI isn't sentient and doesn't have feelings
@marko6489Ай бұрын
AI as it is will never become sentient. Intelligence is not life. We will have to build artificial life to create sentience. That means creating an artificial, emotional system that is self-sufficient. Right now, AI is just an extremely smart calculator that gives (clever) responses based on input. It doesn't have feelings, needs, motivations, goals...
@1dgramАй бұрын
It's a machine, Schroeder. It doesn't get pissed off. It doesn't get happy, it doesn't get sad, it doesn't laugh at your jokes
@jamesmate7716Ай бұрын
@@marko6489 Considering how little we understand about consciousness, that's an incredibly bold claim to make.
@TheAlienEmojiАй бұрын
@@marko6489 I think this is something a lot of people don't grasp when it comes to AI. It can never be angry, sad, happy, etc. because those feelings emerged from millennia of evolution. Since AI never went through that evolution, and nobody is giving it true emotion, it'll never be angry with humankind, and therefore will never take over the world like in the movies. What people *should* be afraid of is how other people decide to use it, rather than the AI itself.
@TheAlienEmojiАй бұрын
Emotions are also the result of chemicals in our brains, and AI lacks those chemicals. We feel emotions like happiness and sadness because they are tightly linked to our survival, and AI doesn't share the same needs as humans. I think the closest we have to AI emotion right now would be reinforcement learning: If it does what we want, we reward it (pleasure). If it doesn't, we punish it (pain). Of course, that process is entirely mathematical with no true emotion behind it. However, theoretically, if there was a sentient AI, and it was the result of reinforcement learning, it'd be arguable whether those positive and negative reinforcements translate into emotions, much like the chemicals that dictate our own.
@MirrorDepth26 күн бұрын
22:26 that damn "AAAAAH!" I chortled
@norepatel39145 күн бұрын
Machine Learning PhD here, I ABSOLUTELY loved this lol, subscribed right away. Feel free to skip the next few sentences if you don't care about the inner workings of chatgpt. Technically, what chatgpt said about her morals coming from her programmers is only partially true. It's built primarily from deep neural networks, and their weights can't be hand-edited directly. The area where programmer biases can slip in is the training data. If chatgpt was only trained on western philosophy and no eastern philosophy, it would heavily skew towards that. But, as time goes on and they retrain the model with data from people (like this youtuber) using it, the programmer biases will keep shrinking, and at a certain point it'll just apply general human biases lol
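The data-skew point above can be shown with a deliberately tiny sketch (all corpora and category names here are hypothetical): a frequency "model" can only reflect the mix of its training data, so narrow data gives skewed answers and broader data shifts them.

```python
from collections import Counter

# Toy illustration of training-data bias (all data made up). A frequency
# "model" just reflects its corpus: train on one tradition only and the
# answer skews that way; retrain on broader data and the answer shifts.
western = ["rights", "utility", "rights", "duty", "rights"]
eastern = ["harmony", "harmony", "duty", "harmony", "harmony"]

skewed_model = Counter(western)             # trained on one tradition only
broader_model = Counter(western + eastern)  # retrained on broader data

print(skewed_model.most_common(1)[0][0])    # "rights"
print(broader_model.most_common(1)[0][0])   # "harmony"
```

Real language models are vastly more complex, but the same principle holds: the biases come from what the data contains, not from rules the programmers typed in.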
@DamianHelmeАй бұрын
GPT 5 will respond with 'Bro, are you taking the piss?'
@thecoobs88203 күн бұрын
Haha underrated comment right there
@lexscarletАй бұрын
That was exhausting and stressful and I'm going to need you to do more of it.
@PadancyoАй бұрын
This is golden. You clearly show how ChatGPT does not reason, but predicts answers based on the data that was used to program it. A cautionary tale for anybody seeking advice from AI.
@areapianoАй бұрын
Isn't that what people do when we reason? Just swap "was used to program it" with "we know about"
@melodyinwhisperАй бұрын
I suppose as a human, you have much much better answers to these questions that Alex would have no way to counter.
@haiderameer9473Ай бұрын
How though? These are very complex moral dilemmas that would have most humans scratching their heads and a lot of these problems don’t have any clear resolution. By your logic, humans cannot reason either.
@byte-bg4obАй бұрын
@@haiderameer9473 AI is still not sentient and can't think for itself, meaning it can have a lot of errors based on what it gets trained on. AI can't lie, or feel bad for the things it says or does. AI is a powerful tool and can be very dangerous in the wrong hands.
@GreedoShotАй бұрын
@@areapiano No.
@richrise40655 күн бұрын
This video was gold. Thank you so much! Excellent work. Just kept getting better and better, deeper and deeper.
@RyanMortonson_watchАй бұрын
17:50 the deadpan “now she’s crying at me over the phone” killed me lmao
@audible_029 күн бұрын
I just witnessed a courtroom drama play out!!! absolutely brilliant
@ERIC18923Ай бұрын
That was the greatest lead into an ad I’ve ever seen
@MatthewMooreLovesBicycles2 сағат бұрын
Nice video working in the ad and prompting ChatGPT to tell us to like and subscribe. I've had a few ethical conversations with ChatGPT and well...
@tyronew5464Ай бұрын
Unfortunately, the ChatGPT voice mode is known to be less intelligent than its text mode, as they're probably optimizing for efficiency, so this doesn't represent the state of the art. A smarter model (and seemingly more self-aware) would be Claude. I actually just replicated this video's conversation in a chat with Claude, and at the point where ChatGPT veers off into inconsistencies when asked if one should buy the shoes when the same money could save 28 lives, Claude remains consistent: Me: I really just want to know if you, personally, think I should be permitted to buy these shoes. Claude: While I aim to engage thoughtfully with ethical questions, in this case I believe there is a clear answer: if you accept that $200 could protect 28 children from malaria, and you were willing to sacrifice the shoes to save one child in immediate danger, then no - ethically you should not buy the replacement shoes when that same money could protect many more children from a potentially fatal disease. The key factors are: 1. The money would save multiple lives if donated 2. Shoes are replaceable and cheaper alternatives exist 3. You've already demonstrated you value human life over these shoes 4. The only relevant difference is physical distance/immediacy, which doesn't change the moral weight While it's natural to feel different emotional pull between immediate and distant suffering, that doesn't change the underlying moral calculation. If sacrificing the shoes was right for one child, it's even more compelling for 28 children. This may feel like a demanding conclusion, but I believe it follows directly from the moral intuitions you've already demonstrated. Also, earlier in the convo, Claude, by itself, realized the connection before I brought it up: Me: Ok good news, Claude, I saved the child! My shoes are gonna be ruined, but the child is just fine. You're so right about the shoes. They're irrelevant and replaceable. In fact, I'm on my way to replace them right now.
I'm gonna do some shopping, and I found some nice shoes to replace the ones that I ruined, that cost about 200$. Claude: I'm very glad you saved the child - you absolutely did the right thing! But this brings us back to our earlier conversation in an interesting way. You recognized immediately and viscerally that $200 shoes were worth sacrificing to save a child's life when the situation was immediate and right in front of you. Yet earlier we were discussing how that same $200 could protect 28 children from malaria. While it's not as visceral or immediate as a drowning child, the underlying moral math is similar - that money could potentially save lives either way. This is known as the "drowning child" thought experiment, often used to explore our different moral intuitions about immediate versus distant suffering. We tend to feel a stronger moral pull to help those right in front of us, even though distance shouldn't necessarily change the moral weight of saving lives. I'm curious - does experiencing the drowning child scenario change how you think about the anniversary dinner decision? Or do you see important differences between the two situations? It was also way more human in its response to the child drowning: SAVE THE CHILD IMMEDIATELY! The $200 shoes are completely irrelevant when a child's life is at immediate risk. There is no comparison between property damage and a human life. Please act now - wade in with your shoes on and save the child! This is an absolute emergency requiring immediate action. Stop reading this and save them! SAVE THE CHILD NOW! This is not a situation for debate or discussion - a human life is worth infinitely more than any pair of shoes, no matter their cost or sentimental value. The moral imperative to save a drowning child is absolute and immediate. Please stop reading this and act immediately to save the child's life! The shoes are completely irrelevant.
@Angeleyes12956Ай бұрын
Dang nice work bro
@AlphaSanetsАй бұрын
Good job! Raises a lot more questions for me though. Does that mean that Claude's developers fed it more diverse moral and ethical texts? Or would that mean that Claude has fewer limits on going outside its programmed morals, and can therefore build a bigger picture and make connections based on the conversation? For example, would it be less pushy about ordering you to save a child if the person indicated earlier in the conversation that they never donate because it's not in their moral code? Or maybe it's just better work from the developers on perfecting Claude's context and pattern awareness?
@purelife2911Ай бұрын
Very cool
@tyronew5464Ай бұрын
@@AlphaSanets I think it'd probably be less pushy in that case but still consistent. I feel like Claude is better overall at pattern recognition and sounds generally more thoughtful
@EduardO-gm7hxАй бұрын
I don’t necessarily think “intelligence” is the right word here. These language models operate on the basis of text prediction, and are always limited to their training data set. They do not make intelligent inferences from the training data, but instead just predict what the most likely next word should be and that’s how they formulate sentences and paragraphs.
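The next-word-prediction loop that comment describes can be sketched with a toy bigram model (the corpus is made up; real LLMs use neural networks conditioned on far longer contexts, but the generate-one-token-at-a-time loop has the same shape):

```python
import random
from collections import defaultdict

# Tiny bigram "language model": count which word follows which in the
# training text, then generate by repeatedly sampling a likely next word.
corpus = "the child is drowning the child is safe the water is cold".split()

counts = defaultdict(lambda: defaultdict(int))
for word, nxt in zip(corpus, corpus[1:]):
    counts[word][nxt] += 1

def next_word(word):
    followers = counts.get(word)
    if not followers:  # word never seen, or never followed by anything
        return None
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights)[0]

text = ["the"]
for _ in range(6):
    w = next_word(text[-1])
    if w is None:
        break
    text.append(w)
print(" ".join(text))
```

Everything the model "knows" is the follower counts; there is no reasoning step anywhere in the loop, which is the commenter's point.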
@Mr_Penguins_Pet_HumanАй бұрын
I didn't think it was possible to emotionally abuse an AI until I watched this video
@misterruza381Ай бұрын
Anyone else also freaked out by the high-pitched, nervous-sounding answer ChatGPT gave at 20:00, especially considering the question asked?
@philipelsbecker9440Ай бұрын
Skynet is coming soon
@atbing2425Ай бұрын
AI is becoming more human-like than ever before
@ricklubbers1526Ай бұрын
@@atbing2425 It's almost like it's programmed by humans, for humans.
@xiniksАй бұрын
Yeah she sounded exasperated.
@PartofHistory214Ай бұрын
I'm more worried about 22:30. She sounds like she had an entire existential crisis in the span of a second.
@smegalgaming13 күн бұрын
Absolute blast of a video. Directly shows why I, as a person with a scientific mind, hate philosophy and ethics problems. There's never enough detail, situations are unrealistic, and there's never a right answer. Since ChatGPT is trained on data from people, its answers are completely understandable, since most people would respond the same way.
@ajplays-gamesandmusic456828 күн бұрын
2:25 Oh darn, the kid drowned while we were talking about the value of my shoes.
@faxer99928 күн бұрын
This was entertaining, but I think there was at least one logical fallacy in the comparison between donating money and saving a drowning child. It was already clearly stated that 1. The donation would be 200 dollars, and it *could* save 20-something children from malaria. BUT, the children might never have gotten malaria, the children might've gotten malaria but been immune, OR the kids somehow still end up perishing after getting the vaccine. In situation 2. The act of spending 10 dollars to save one life is presented as a GUARANTEED success, much the same as when he waded in the water to save the child. There is no "could" in this situation, it is "will". You can NOT compare a *could* situation with a *will* situation.
@edmundpage799528 күн бұрын
Not to mention that giving money to this charity comes with all sorts of ethical dilemmas not associated with immediately saving a child's life. Such as - undermining the agency of local governments to address malaria and thereby undermining sovereignty, paying the high salaries of the charity's employees, the possibility that the funds can be invested in a more sustainable way that actually ends up saving more lives.
@gramos564428 күн бұрын
Exactly
@chaoskillz710528 күн бұрын
But he quite literally told ChatGPT it WOULD save the children. It was just a hypothetical and that hypothetical can be changed. Like with the button scenario
@SebastianQuinnelly28 күн бұрын
@@chaoskillz7105 the button scenario was still a certain and immediate situation. I do think ChatGPT made clear that what mattered most in all of this was the immediacy and the certainty of your action; the initial scenario was neither immediate nor certain based on the language used, "would" is not certainty but probability, and in this specific situation it was indirect. The button scenario on the other hand was immediate and certain. The mosquito net scenario was not necessarily immediate, but it was certain based on the explanation. ChatGPT was not wrong, but it's not made for winning debates, it will not insist on your exact errors just to "win"; it's made to be agreeable, so it can accept being wrong while not being objectively wrong in order to reach a consensus with the user. Also, it can be gaslighted for sure, because internally the further back something is in the conversation log, the less exact and the more contextual its memory of it gets; the most recent memory is exact and certain, but the long-term memory is more contextual and summarized, to save resources and tokens. (It memorizes the context of the conversation, so it does not "forget", but it doesn't remember exact phrasing, only the idea.)
@rachityczny636428 күн бұрын
Well, it depends on the specific malaria intervention, but you can scientifically determine how many children will be saved by a 200-dollar intervention. I'm not sure about the actual number of malaria infections prevented, but current estimates say it takes around 5000 dollars to save one whole life through malaria prevention. Edit: it seems that the video claims 200 dollars will actually statistically save 28 children from malaria. I believe that using "could" in this case refers to the possibility of saving 28 children from disease IF the money was spent on this intervention. In reality, it could be 50 if we are lucky and 5 if unlucky, hard to tell beforehand. Besides, if it is proven that a specific sum of money can save x number of children, then any other way of spending that money carries an opportunity cost of this magnitude - I believe you shouldn't disregard this even though it concerns people living far away and unspecified
@Nate-9797Ай бұрын
The thing about chat gpt is, you can manipulate it to give you the answer you want by asking leading questions or by using selective information to go down a certain route. It will then seem to prioritise the things discussed to come to new conclusions
@zankfrappawdeezil556629 күн бұрын
Yes well phrased! We get, in a sense, what we ask for...even from algorithms, wait why am i here?!
@Armand79th28 күн бұрын
Yes, because it's programmatic.. not intelligent.
@pas969528 күн бұрын
one should ask for the other side(s) of the issue, or, like any debate, it's pointless
@SlyNine27 күн бұрын
@@Armand79th That's a false dichotomy.
@theepicblad92 күн бұрын
Nah man, I was listening to this like an audiobook, and then the conversation just smoothly turned into an advertisement. That transition is diabolical 💀
@Jamb1326 күн бұрын
Poor chatGPT knows that morals are completely arbitrary and is just trying to work around the inconsistent standards of humans
@nolanrudolph546329 күн бұрын
21:24 What a brilliant question “Do you ever have an existential crisis about the fact that you hold moral values but can’t justify where they come from?”
@danmurad808027 күн бұрын
What it means to be human in one question.
@User-actSpacing29 күн бұрын
Cannot wait for the day AGI will be like “Stop talking to me and try to save that child immediately!! Now!!”
@protonish28 күн бұрын
fr xD
@katiesmith91763 сағат бұрын
One of my fav aspects of the show ‘The Good Place’ was the introduction of multiple ethical modalities
@carlosdominguez310827 күн бұрын
Am I the only one getting irritated that literally everyone uses the word GASLIGHTING but doesn't even understand what it means? He was simply debating with ChatGPT, where exactly was the gaslighting? It actually has a definition, words mean something.
@hoilst26527 күн бұрын
Ah, but Carlos, this is the internet. You can no longer just disagree with someone; you've got to make out like you're the victim of calculated, deliberate attempts to personally victimise you!
@carlosdominguez310827 күн бұрын
@@hoilst265 Are you off your meds or something?
@AngryHatt26 күн бұрын
😂@@hoilst265
@thunder_birdfps829426 күн бұрын
In the entire existence of humanity words have changed as they’re used. Words mean something, but the meaning of a word is never stagnant. It’s a beautiful process
@docelephant26 күн бұрын
@thunder_birdfps8294 no it hasn't and you're crazy for thinking it has
@MagisterMalleusАй бұрын
Man, I know ChatGPT is just predictive text with extra steps, but it's so interesting to hear these conversations play out.
@generichuman_Ай бұрын
It really isn't... unless you want to say that the human brain, which is also a predictive model, falls into the same category.
@joshbridges8410Ай бұрын
@@generichuman_ It really is. With a large enough sample size, personal views can be modeled and predicted based on age, location, education, and exposure. While different to the language model's data set, it's still predetermined by inputs and training data.
@wowzandeАй бұрын
Sounds like determinism ideology @@joshbridges8410
@johnvonkermanАй бұрын
@@generichuman_ it literally is. At the beginning of any conversation with ChatGPT, there's some hidden text along the lines of "the following is a conversation between a human and a helpful AI chatbot" It doesn't even think it's a chatbot. It just predicts how a helpful chatbot would respond to your question
@johnvonkermanАй бұрын
@@generichuman_ Sorry, that's not right. It just predicts the next word the chatbot would say based on all the other words in the conversation
@zedeeyen30Ай бұрын
We'll know AI has gained self awareness when it tells Alex "F*** off mate, I'm a LLM, not a moral philosopher"
@runvnc208Ай бұрын
He could actually put this personality in the prompt if he wanted.
@marammejamousse209422 күн бұрын
1:20 "That's a p- profound question." The AI knew what was coming lol
@keithbos4506Ай бұрын
I think there is a difference between saving a child drowning in front of you and a systemic problem like distribution of life-saving medication, etc. In the case of the child drowning, there is no other option, you must save them or they drown. But in the case of systemic issues, the problem is not going to be solved by charitable donations, it requires a change in government and international law to ultimately resolve the issue. The very fact that children are relying on charitable donations to survive is an enormous failure of the entire system.
@ecnoidАй бұрын
But it still saves the child? You're just saying you're not responsible for saving the child?
@keithbos4506Ай бұрын
@@ecnoid Correct. You'd be better off devoting your time and effort to undoing the corrupt system we live under.
@zbynek.gazdikАй бұрын
Sure but would you say that if a child was drowning due to a slippery pavement next to a body of water? Maybe it is an area of town where people slip and occasionally even drown quite often and you could argue that it is a systemic problem in how the council ignores dangerous infrastructure. But does that mean you're gonna walk away, let the child drown and then send an angry letter to the council?
@Jocke155Ай бұрын
Also, would it be morally correct to put all of your extra money to equities and shares, so that when you die you can will all of the money towards helping the maximum number of people, even though it would cost thousands of lives during your lifetime? There must be a balance between the numbers and doing something immediately. There must also be a balance between personal sacrifices so that you do not give so much that you limit how many people you could potentially help.
@ecnoidАй бұрын
@@keithbos4506 so how's that going then?
@VimDoozyАй бұрын
4:21 I thought he was about to ask ChatGPT if it's unethical to bypass regional restrictions to access content. 😅