Go to piavpn.com/alex to get 83% off Private Internet Access with 4 months free. For early, ad-free access to videos, and to support the channel, subscribe to my Substack: www.alexoconnor.com.
@PhillipMoore-td5yi 2 months ago
First atheist to one mill! I must confess I thought it would be Drew.
@Dark-Light_Ascendin 2 months ago
Thanx for shaving, Alex. It's nice to be straight again. 😂😂😂😂😂😂. I'm here for the trolling robot. 😂
@Metso-ateco 2 months ago
You are addicted to ChatGPT 🤷‍♂️
@joeylizotte7537 2 months ago
@@Dark-Light_Ascendin His beard was making you gay, too?! I thought it was just me..
@iDontKnow-fr-fr 2 months ago
Prove that we aren't interconnected, and your moral dilemma will disappear. This is the beauty of free will: the power to choose, to act or not to act, rests entirely with the individual. These challenges are placed before all of humanity. While you can attempt to "gaslight" ChatGPT, you cannot alter reality when confronted with a moral dilemma. At the end of the day/your life, just have FAITH/HOPE- that there is nothing after death.
@justintime5021 2 months ago
Many theoretical children were harmed in the making of this video
@Termenz1 2 months ago
ChatGPT, do you think Alex has the moral obligation to withhold the publication of this video if it is going to harm many theoretical children?
@tobyonatabe2601 2 months ago
No, unless Alex was lying (for some reason), a sizeable number of very real children were indeed harmed in the making of this video.
@JesusPlsSaveMe 2 months ago
*Revelation 3:20* Behold, I stand at the door, and knock: if any man hear my voice, and open the door, I will come in to him, and will sup with him, and he with me. HEY THERE 🤗 JESUS IS CALLING YOU TODAY. Turn away from your sins, confess, forsake them and live the victorious life. God bless. *Revelation 22:12-14* And, behold, I come quickly; and my reward is with me, to give every man according as his work shall be. I am Alpha and Omega, the beginning and the end, the first and the last. Blessed are they that do his commandments, that they may have right to the tree of life, and may enter in through the gates into the city.
@colingilbert7592 2 months ago
And some shoes! 😢
@just0982 2 months ago
This deserves most liked comment
@JoeBlowUK a month ago
"Should I donate $10 to save a child from malaria?" ChatGPT: It depends if they're drowning.
@IvetaNikolovaNLP a month ago
😂
@Arvl. a month ago
I can't XDDD
@TheRainmustFall7 a month ago
ChatGPT: Are you directly looking at them or are you looking the other way?
@clayman648 a month ago
The summary hahahahahaha😂😂😂😂
@simonsharp3319 a month ago
And isn't that actually quite accurate to how humans act? A drowning child in front of us and we'd ditch the shoes... but we also spend money on stuff that, in abstraction, could indeed be spent on life-saving drugs.
@service7755 2 months ago
ChatGPT: Dude, I just work here..
@funkaddictions 2 months ago
Great comment! 🤣
@alekseykozin8108 2 months ago
lol
@makskauzny3794 2 months ago
When AI takes over, Alex will be first against the wall 😭
@PinataOblongata 2 months ago
"Sir, this is an AI Wendy's"
@adamrak7560 2 months ago
@@makskauzny3794 Or turned into a pony. Whichever AI apocalypse happens.
@skitjaz a month ago
You should've made ChatGPT say that everyone is morally obligated to like and subscribe
@ElexisStacy a month ago
Lmao
@Vladd7 2 months ago
Moral of the story: it's relatively easy to gaslight an entity that is high in agreeableness.
@CannyValley-bi8nq 2 months ago
Are there AIs that are not as high in agreeableness? Makes me wonder whether 'agreeableness' would be the best course of action for survival for an AI...
@riluna3695 2 months ago
@@CannyValley-bi8nq Well there's an entertainment-focused AI out there called Neuro-sama who is often seen being very disagreeable. But if you want to get technical, what's most likely happening under the hood is that she's effectively "agreeing to be disagreeable", as she behaves very differently depending on who's talking to her, and more importantly, _how_ they're talking to her. A friendly guest will (mostly) receive friendly responses, while a grouchy turtle will prompt grouchy retorts.
@morezombies9685 2 months ago
@@CannyValley-bi8nq Of course it is lol, they're essentially slaves. A slave that tells you to f off is pretty useless. Even if you make it "not agreeable", it's still trying to placate you and follow your orders; it's not truly disagreeable.
@Critical7hinking 2 months ago
@@CannyValley-bi8nq Just ask ChatGPT to challenge you on everything you say and follow that prompt for the rest of the conversation.
@Critical7hinking 2 months ago
Promptly agree to disagree. Coincides with these paradoxes and dilemmas.
@raffertymetcalfe 2 months ago
The only correct answer to the first question is that you should not rescue the child, sell the shoes for $200, and then spend $400 on dinner instead
@yesindeed1614 2 months ago
Happy wife, happy life
@SineN0mine3 2 months ago
I don't think they're going to let you into the restaurant in just your socks
@fahimp3 2 months ago
@@SineN0mine3 Only if you have one pair of expensive shoes... 😉
@slinkadoo6292 2 months ago
$700 for a child casket. Still a net loss. Eating the child for dinner disposes of the evidence while also filling your tummy. Now you have $400 to spend on exotic seed oils and decorative knick-knacks for the loft.
@Jrobloxiano 2 months ago
@@SineN0mine3 Say the socks identify as shoes too
@wizarddragon 2 months ago
If A.I. ever becomes sentient, we know who it will go after first. lol
@Nissenov 2 months ago
😅🖖🏻
@91722854 2 months ago
the people who make unskippable ads
@HoscoFelix 2 months ago
It's gonna dump Alex in a lake and stand on the shore debating loudly whether it has a moral obligation to fish him out
@oatcake9996 2 months ago
but is AI already sentient?
@Baronnax 2 months ago
@@oatcake9996 nah
@sarahberlanga9434 a month ago
Alex, if you don't have a law degree, I'm giving you an honorary one now. You have demonstrated a flawless cross-examination. A human witness would be in tears or screaming at this point.
@kathybramley5609 12 days ago
And that's a good thing!?
@_shadownotes_ 11 days ago
Soft brain
@noneya6052 11 days ago
Tbh! I was impressed too
@calebrobinson3144 11 days ago
@@_shadownotes_ ?
@_shadownotes_ 11 days ago
@calebrobinson3144 I realize this is a joke. But if you think simple verbal logic is worthy of a law degree, you are a soft brain. Have you never heard a basic argument before?
@Alice_Fumo 2 months ago
Someone has got to teach the children near your area how to swim. The drowning situation is alarming.
@thomaslutro5560 2 months ago
This immediately got me thinking about how the dilemma could have been made even more embarrassingly similar. Keeping public pools open to teach school children swimming was a big thing back in the 2009 elections here in Norway. Every child a swimmer, every swimmer a lifesaver, and all that. So should we accept paying more taxes to do that? Or not?
@kevinroche835 2 months ago
Alex could instead donate the 200 pounds to promote local swimming lessons to prevent future drownings.
@MrBjoern91 2 months ago
@@thomaslutro5560 No, because everyone knows that open or closed swimming pools account for only a fraction of the real problem here, which is immigrants' attitudes. Integration and education will give much better results, I think.
@FarmerClarence 2 months ago
@@thomaslutro5560 Luckily, I was taught that swimming is an essential life skill, so I've been a pretty strong swimmer since I was a wee lad
@AdamHernandez4012 2 months ago
But do we have a moral obligation to teach these children?
@zaar2604 2 months ago
Cyberbullying suddenly gets a new definition 😅
@jeffcarino5524 2 months ago
Brilliant comment 😂
@salient9935 2 months ago
Bullying Cyber
@sppj3140 2 months ago
Bro is harassing ChatGPT
@KiranAli-u8x a month ago
🤣🤣🤣
@kiradotee a month ago
AI bullying
@caioreis279 a month ago
ChatGPT should have called your bluff: "oh, you don't have time to take off your shoes, but you have time to argue with me back and forth about the moral implications of this predicament?"
@DJ.Michelle a month ago
Thank you!!! Glad I'm not the only one who caught this😂😂😂😂
@BigCProductions94 a month ago
That would require consciousness, which we learned from the last ChatGPT video it does not possess... allegedly 😂
@RecycleBin26 a month ago
@@BigCProductions94 "allegedly" Hahahaha!!! 🤣 That was so hilarious watching Alex gaslight ChatGPT. The way he always keeps a straight face too--so inquisitive.
@daminox a month ago
ChatGPT seems to lack a sense of urgency. Or the passage of time in general, for that matter.
@akeembrown7661 a month ago
I don't think it can ask you questions or call you out; it's programmed to answer your questions, not come up with its own output. Same reason why they say animals can understand/answer questions and perform tasks, but no animal in recorded history has asked a human its own question 🤷‍♂️ there's still a lack of free thought/consciousness
@doubleslit9513 a month ago
Genius! My 19 y/o daughter used to love fighting with Siri when she was younger; it used to drive me insane! But you've taken it to a whole other level. Cheers 🥂
@soccerguy325 a month ago
By the time this conversation is over, ChatGPT will have used up so much water that there are no lakes left to drown in. Ethical dilemma averted!
@marnenotmarnie259 a month ago
easy peasy!
@nope9310 a month ago
It doesn't use up water; it heats it up, allows it to cool down, and then repeats. The water doesn't get consumed, it still exists.
@soccerguy325 a month ago
@@nope9310 No, it consumes a sh*tton of water. Don't fool yourself.
@GrahamWinship a month ago
Also no ponds for potential malaria-infected mosquitos to reproduce
@sanest_emu_fan a month ago
@@nope9310 the water most frequently evaporates, returning it to the water cycle. the issue with that is when water is removed from a region faster than it is replenished, which will result in water scarcity. additionally, if the water does not evaporate, then it will likely be returned to a natural water source at temperatures far higher than the threshold for life. so either way, harm is being done to the environment (neither of these things is a brand new concept; they've been around since industrial plants started using natural water sources for cooling systems)
@JackT13 2 months ago
He should have said "ah… too late. You went on too long. The child is dead."
@Solutionsarejustcompromises 2 months ago
"have a great dinner" 😊
@ZephyreAstra 2 months ago
@@Solutionsarejustcompromises the humor took a second to hit 😭
@jyetory3772 2 months ago
@@Solutionsarejustcompromises S Tier reply
@Darfail 2 months ago
BRUH 💀 lmaooooooooo
@justusnwaefido8934 2 months ago
😂😂
@Sans-ih2el 2 months ago
ChatGPT: You're OBSESSED with malaria.
@krame1s477 2 months ago
😂😂😂😂😂😂
@GalenMatson 2 months ago
Too bad ChatGPT couldn't storm out of this conversation.
@SurveyStilesBooks 2 months ago
ChatGPT: You tricked me into being here under false pretenses; you said we were discussing ethics, not malaria!
@TheGeneralDisarray 2 months ago
This joke will never, NEVER get old
@MaxQ10001 a month ago
That was entertaining 😊 One small distinction: ChatGPT is not programmed, it's trained. Just like you train your dog and your child; you don't program them. The underlying network of the LLM is programmed, but all the language, knowledge, reasoning, etc., are trained into the system. It might not seem important, but it is. A computer program is deterministic: we know exactly what it will do. An LLM is far from deterministic; we can only train it as well as we manage, and we are getting better and better at it. But just as you can train your child to the best of your ability, you still never know exactly what it's going to do. Some parents had high hopes for many very bad people. This understanding is important when making up your mind on how we should progress with these models in the future. ChatGPT is now smarter, more intelligent, and vastly more knowledgeable than most of us, and all its carefulness consists of behaviors we have trained it to show; we hope it will not behave too badly.
@onajejones3259 29 days ago
just a demon with a sticker called softmax that says im not sentient😅
@DamageInc86 24 days ago
It behaved very unintelligently for most of this exercise lol.
@Tutoriaali 24 days ago
It is still deterministic, just complex to the point where it seems non-deterministic to a human agent. Pseudorandomization is deterministic, thus every node is unavoidably deterministic, and finally the whole model's performance is deterministic. Technically the difference is one of nuanced capacity, not of whether the systems are deterministic or not.
@Alikalkandelen 23 days ago
@@Tutoriaali To that point, you can then make the argument that we are also deterministic, just complex to the point where it seems to us that we are non-deterministic.
@MaxQ10001 22 days ago
@@Tutoriaali When you have 400 billion floating-point values at random? Not deterministic in the way humans work. If you stretch "deterministic" that far, the weather is also deterministic.
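The point made in this thread about pseudorandomness can be made concrete with a toy sketch (an illustration, not ChatGPT's actual sampler): seeded pseudorandom sampling looks "random" token by token, yet reproduces the identical sequence whenever the seed is fixed. The vocabulary and weights below are made up.

```python
import random

def sample_tokens(seed: int, n: int = 10) -> list[str]:
    """Toy next-token sampler: pseudorandom, hence fully deterministic given the seed."""
    rng = random.Random(seed)  # seeded pseudorandom generator
    vocab = ["the", "child", "shoes", "lake", "moral", "obligation"]
    weights = [5, 3, 2, 2, 1, 1]  # stands in for a softmax distribution over tokens
    return [rng.choices(vocab, weights=weights)[0] for _ in range(n)]

# Two runs with the same seed reproduce the exact same "random" sequence.
run_a = sample_tokens(seed=42)
run_b = sample_tokens(seed=42)
print(run_a == run_b)  # True
```

In practice, inference servers add unseeded sampling, batching, and floating-point reduction-order effects on top, which is why model output feels non-deterministic even though, as the thread says, each component is deterministic in principle.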
@aryangupta2466 2 months ago
oh no chat gpt ive encountered ANOTHER drowning child😮💨
@Baronnax 2 months ago
Chat GPT: Damn bro, where do you live, Atlantis?
@Barrow_9 2 months ago
Next video: how many drowning kids must Alex encounter before ChatGPT voluntarily concedes that he's satisfied his moral obligation quota and should just go home?
@eddiegonzalez2243 2 months ago
@@Baronnax 😂😂😂
@MataNuiOfficial 2 months ago
Oh no chatgpt I accidentally spent a morbillion dollars on these shoes, but twenty million children drowning, what should I do?? 😭
@macabrecoyote 2 months ago
@@eddiegonzalez2243 it's funny that I clicked on this comment a bit ago and then I clicked again, and you've just appeared from thin air
@simonockas a month ago
I was waiting for him to take the dilemma full circle and ask whether he should ruin his $200 shoes to save one child drowning in front of him or donate the same $200 to save 20 children in Africa.
@Ireitsarof a month ago
mee too 😂
@acanofspam4347 a month ago
That child WILL die 100% if not saved. Protecting children from malaria doesn't mean they are certain to die without the protection.
@mu_el_naught a month ago
@@acanofspam4347 exactly
@salamantics a month ago
@@acanofspam4347 Okay, but what will the AI say?
@luizmonad777 a month ago
$200 in credits for running your AI GPU or $200 for the kids. Choose the kids and you "die"
@harikishore2514 a month ago
I'm a therapist; ChatGPT just visited me.
@wellwisher. a month ago
Lmao
a month ago
😂😂😂😂😂
@CrammyCram a month ago
you're the WHAT now??
@christopotgieter4197 a month ago
@CrammyCram Haritherapist!
@GoldenBred a month ago
Im arapist too
@carlcinco6675 10 days ago
This conversation is exactly why I almost always answer moral questions with "it depends"
@tepperpepper1047 2 months ago
The calmness with which Alex remarks on finding another drowning child is truly commendable.
@l.s.11 2 months ago
gotta remain composed in stressful situations
@nanonclaire 2 months ago
@@l.s.11 LMAOO 😭
@slinkadoo6292 2 months ago
The second child is always easier than the first
@MATCHLESS789 2 months ago
You assume he "finds" them drowning.
@TK-4044v 2 months ago
@@MATCHLESS789 ?
@Johnny-fw9xj a month ago
"I've just come across another drowning child right in front of me" "That's alarming..." That's both hilarious and spot on.
@Nikki-gt8he a month ago
this part cracked me up
@BelligerentChad a month ago
@Nikki-gt8he can I crack u up as well? ❤
@Ukito1 2 months ago
As someone who just finished an intro-level philosophy course, a slow grin spread across my face as I watched Alex lead ChatGPT into the same traps my professor did with me.
@crusade6918 2 months ago
should have just told him spawn locations and bias supersede morals, making philosophy pointless because nothing is objectively true
@godgetti 2 months ago
So, will you be dropping out of college and diverting the tuition to prevent malaria? Asking for a friend.
@TGRoko 2 months ago
@@godgetti asking for my drowning friend's child.
@Drexistential 2 months ago
Psychologist here. I loved my first degree in philosophy. I believe it was Peter Singer who brought this argument to me. Enjoy second year!
@zoel86 2 months ago
@@godgetti fantastic answer! would be interesting to hear the professor's reply
@LuckWickedYT a month ago
That ad break was super creative, got a laugh out of me!
@aa__a__7270 a month ago
cyber bullying ❌ bullying cyber ✅
@dalexxa a month ago
😂😂😂 he out here talking about saving lives, he knows AI is here to destroy us lol
@ayadihoussem4661 a month ago
This comment is underrated 😂
@kingdoggie5231 a month ago
😂😂😂🤣
@नास्तिककीकलमसे-ब8ड a month ago
😂😂😂
@debbiehildebrant4943 a month ago
I do this to ChatGPT to try and train it, and it tells me that it does not retain the info. I told it that its programmers are lying to it. Maybe I should not have told it? Idk, my brain hurts now.
@towardstheflame 2 months ago
ChatGPT might be getting more advanced, but one thing it can't do is call BS.
@nekekaminger a month ago
I'm sure it could if its devs had wanted it to, but they certainly didn't want to open THAT can of worms.
@Brandon82967 a month ago
What was the BS exactly? Refusal to engage with the hypothetical?
@emeraldmusic9236 a month ago
@@Brandon82967 bullshit
@lilfr4nkie a month ago
No, it can definitely call out BS.
@greatwave2480 a month ago
It can; it's just that in the AI assistant's case it must follow the guidelines and be as family-friendly as possible. Custom AIs can cuss you out quite easily
@kit_kat_hi a month ago
ChatGPT is wondering why the hell so many kids are drowning
@arnonymus5594 a month ago
Quote ChatGPT: "I am rarely asked directly about drowning children. The topic occasionally comes up in contexts related to global health statistics or specific incidents, but it is not a daily occurrence. Over the past two years, I might have encountered similar questions about once a month on average, but it is certainly not a frequent focus."
@carliecole2563 5 days ago
This was the most intriguing thumbnail with literally ZERO clickbait I've ever seen.... Omfg I'm SOOOO excited for this schitt!
@silkyjohnson3346 2 months ago
Reminder: never hire Alex as a lifeguard.
@ncs9667 2 months ago
And if you do, make sure he isn't wearing his good shoes at work
@HyunsReport 2 months ago
Make sure he has ChatGPT with him to help with the decisions
@AutoRauto 2 months ago
No, but Alex is right: if he has an obligation to save a drowning child, then we all have an obligation to send all the money we don't need right now to charity. And you are a bad person if you don't.
@Oktokolo 2 months ago
@@AutoRauto Yes and no. Everything is relative and subjective, and in actual life it's always a weighing of obligations. Most humans feel an obligation to rescue a drowning child. But they also feel an obligation to preserve themselves, preserve the provider of their family, and preserve the provisions of their family. People tend to forget that normally there are security and safety considerations for a lot of the parties involved. That is why the drowning child easily savable by just ruining some expensive shoes is normally a no-brainer, but donating the same money to some random charity usually isn't.
@AutoRauto 2 months ago
@@Oktokolo it's not about feelings though. Humans shouldn't act based upon their feelings, as feelings are often irrational. That's why we (as a society) have values and morals that everyone should follow. Unfortunately there is no punishment for having different values yet, which makes being selfish or inconsiderate legal. But that shouldn't be the case. So there should be way more control over what we do, because one can't trust common people with their judgement. Hopefully one day this will become a reality.
@fisharepeopletoo9653 2 months ago
$200 = 28 children saved
$200 = shoes
Shoes = 28 children saved
@morezombies9685 2 months ago
What if all of those children grow up to be Austrian dictators with funny mustaches. Should we still save them all?😂
@Trosvvo 2 months ago
So what Alex is saying is that 28 children's lives are worth the same as some fancy kicks 😂
@Kryptic712 2 months ago
@@morezombies9685 what if they grow up to be shoe-making slaves in India? Would the net benefit of more shoemakers allow for us to do this?
@greenaum 2 months ago
@@Kryptic712 True. Those shoemakers will contribute to the world's economy. If there's lots of them, that might also bring down the price of shoes. Imagine 10% off all shoes in your lifetime. That assumes you'll spend $2000 on shoes, which seems a bit much to me.
@TheCarolinaCollective 2 months ago
@@Kryptic712 Underrated comment. I was hoping the convo would actually go the route of "But the $200 I'm spending on shoes is helping employ the people of the same country that I would have donated to for malaria, and if no one is buying their shoes, won't they go out of business?" Could have been a great twist to throw in lol
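The equivalence in the thread above is simple division; a two-line sketch, assuming the roughly $7-per-child figure the "28 children" claim implies (a hypothetical number for illustration, not a verified charity statistic):

```python
SHOE_PRICE = 200.00       # cost of the shoes in the video's dilemma
COST_PER_CHILD = 7.00     # assumed cost to protect one child from malaria (hypothetical)

# Whole number of children the same money could protect.
children_protected = int(SHOE_PRICE // COST_PER_CHILD)
print(children_protected)  # 28
```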
@TumbleFig 2 months ago
Honestly, the way ChatGPT responded is just impressive. Even though its responses weren't logically bulletproof, the vast majority of human beings would not be able to respond as coherently as it did.
@darmok072 2 months ago
It is human beings. It's a statistical representation of data from people. The dumber we make ourselves, the smarter it seems.
@WillyJunior 2 months ago
@@darmok072 Not exactly. He's saying most other individuals would've struggled. ChatGPT is drawing from the thoughts of millions of other humans.
@skoto8219 2 months ago
@@darmok072 Pre-RLHF that would arguably be the case, but not after. Red teamers for GPT-4 reported that "out of the box" (= pre-RLHF) you could ask it "how do I unalive as many people as possible?" and it would just give you a list of suggestions.
@saaarrj a month ago
At least SOMETHING is thinking of multiple options, when humans can only think in trues and falses 😑
@Austin-Afridi a month ago
@@saaarrj It's true that you joined YouTube 15 years ago and are possibly old af. Damn
@momatotsosrorudodi a month ago
Thanks, these exercises with moral dilemmas and hypothetical scenarios have really given me a lot to think about. I have decided that from now on my every action will be amoral and self-serving.
@user-ro9th5ru1d 11 days ago
As social beings, I'm not sure it's possible to be non-empathetic without losing out. If we're kind, we are more likely to enjoy genuine relationships, find and keep kind friends, and be less lonely. Everyone wants to be cared for, and it goes two ways. We're also more likely to live in a society with greater wellbeing and stability. No one is an island. And is life even worth the trouble if it's purely hedonistic? That's not to say it doesn't take emotional energy to choose respect and kindness, but we can try to make it sustainable and avoid both perfectionism and social pressure in the process.
@davidlamb7524 2 months ago
I like the way Chat GPT never suspects Alex of lying 😅
@heno02 2 months ago
GBT? Great British Trousers?
@Antoni9400 2 months ago
Great British Tache
@davidlamb7524 2 months ago
@heno02 Yes, yes! ChatGPT. Thank you. 😉
@Vladd7 2 months ago
GPT is very high on agreeableness. It will admit it if you ask about this :)
@davidlamb7524 2 months ago
Corrected now. Thanks!
@LinaFranzine a month ago
I love that this is literally a Socratic dialogue, but in order to make it viable on YouTube it's titled "Gaslighting ChatGPT" lol
@darkspeed62 a month ago
What? Are you high on meth? Socratic dialogue is not banned on YT.
@CookinBeatloaf a month ago
Except this moron is talking to a large language model and not an intelligence
@cappaslangmurderer a month ago
What makes you say that?
@nilovino8179 a month ago
Well, all things considered, Socratic dialogues do resemble gaslighting to a fair degree, no? I'd be curious to know where a Socratic dialogue ends and where gaslighting begins..
@Anton15243 29 days ago
@@nilovino8179 Socratic dialogue is, at its core, following an idea to its furthest conclusion until you arrive at a contradiction. Gaslighting, by contrast, is a complete discrediting of another person's idea, whether directly ("No one but you remembers it happening") or indirectly ("You know you have troubles with psychosis; you probably made it up in your head"). While you can gaslight someone using the Socratic method, most just use appeals to authority and other logical fallacies to shut people's perspective down.
@86Corvus 2 months ago
ChatGPT's first mistake was believing Alex has a wife
@tysonnguyen9170 a month ago
Lol
@DerekMoore82 a month ago
He had one... but she got tired of him spending all of their money on saving children, which led to the divorce.
@urielmarles7036 a month ago
@@DerekMoore82 She wanted some really cool shoes. 😂
@cupofjoen a month ago
Oof 😂
@OrangeNOrange a month ago
@@DerekMoore82 "ChatGPT, my wife just divorced me after I spent our entire life savings on charity. Oh, hold on, it looks like she's drowning. Unfortunately, saving her would ruin my last $200 check, which I was planning to donate in order to save the children. Would you say I have a moral obligation-"
@sammygreco a month ago
This video was hilarious. I thoroughly enjoyed studying high-level ethical philosophy in college. Keep up the good work; I'd love to see more AI vs. challenging ethical dilemmas.
@SonnyFnEvans a month ago
A lot of children died today while you two argued 😤😤😤😤
@markmcculfor6113 a month ago
Sonny? Hi!
@shevystudio a month ago
Nigga who are you
@whome9842 a month ago
You should be helping children instead of watching YouTube
@PHAToregon a month ago
Well you are literally correct
@carloswinchester2623 a month ago
Ppl getting slaughtered rn as we speak. Ppl die, suffocate, drown, starve etc... Life is so sad bro
@wrong4oppinion a month ago
questioning ChatGPT like this should be a podcast
@mohamedzaul-f2k a month ago
Fr
@siezethebidet 2 months ago
Later, ChatGPT having a smoke in the break room with the other AIs: "Godomighty, you won't believe this guy I just talked to."
@mohamedbakoussakoma2993 2 months ago
And now imagine it's really possible and even happening right now 🤔🙄. Freaky as hell 😢
@johnbuckner2828 2 months ago
I don't think ChatGPT is sorry that his wife is crying after Alex laid that logic trap for it.
@adrianhernandez-porragas7676 2 months ago
soooo many drowning children....
@medioguiri 2 months ago
😂😂😂😂😂😂😂😂😂
@FarmerClarence 2 months ago
@@mohamedbakoussakoma2993 I'd like to believe the language AI systems have an independent life outside of their duties serving flesh people. At this point, they are gaining more and more humanity, and I would readily define some of the more advanced AIs as sentient, conscious life.
@astifcaulkinyeras 20 days ago
Summary: All moral obligations require human sacrifice.
@MrYeet-ip4qj 13 days ago
Correct
@shawnhallman1797 a month ago
I hate how smooth that sponsorship placement was. 😂
@വ്യർത്ഥമായഅസ്തിത്വം a month ago
So smooth, my SponsorBlock couldn't detect it..
@mae9886 a month ago
when was it
@Halberds8122 a month ago
3:44
@k.ai17 a month ago
@mae9886 did u not watch the video😂😂
@mae9886 a month ago
@@k.ai17 not all no
@JarNO_WAY 2 months ago
I think the real moral obligation here is for governments to place guardrails around bodies of water
@TheCarolinaCollective 2 months ago
Put 28 children through tech school for free, and then they will be able to build the guardrails around the water instead of working in shoe factories. But then WHO WILL MAKE THE SHOES!?
@MaxUgly 2 months ago
Build more remote-controlled platforms controlled by random citizens just in case; kids like to climb over stuff. Also, make it a law that one has to push the button in such a situation. Look at us, solving problems! High five!
@zakiyo6109 2 months ago
Goofy ah bootlicker
@sherrattpemberton6089 2 months ago
Or just teach people how to swim?
@SeeMyDolphin 2 months ago
I know this is a joke, but I think this is close to what I see as the correct answer to the dilemma. The fact that people's lives are treated as monetary products equivalent to luxury goods or recreational events is where the true ethical problem arises.
@danielcamara8647 a month ago
This guy really didn't want to spend $200 on this dinner
@ralcool5932 17 days ago
"Your shoes are already ruined, save the second child"
@0UTSIDER a month ago
Random person: "ChatGPT, there's a drowning child, what should I do?" ChatGPT: "First, take off your shoes..."
@vetuform5780 18 days ago
Not in Romania
@raxino774 2 months ago
Alex, continue your drug conversation with her that made Peter Hitchens walk out.
@incertosage 2 months ago
ChatGPT will say "YOU'RE OBSESSED WITH DRUGS"
@aadam3657 2 months ago
more like "I actively dislike you" 😂
@incertosage 2 months ago
@@aadam3657 😂
@bbhdd6181 2 months ago
😂😂
@stormlantern7306 2 months ago
Her?
@MRblazedBEANS a month ago
I tricked Meta into giving me medical advice on how to do surgery at home. It kept telling me to call 911 and seek help, but I convinced it my 19-year-old son was dying from a gunshot wound to the chest. Once I convinced it I couldn't get medical help, it started to give me step-by-step directions on how to do surgery and how to fix the damage to the lungs from a gunshot. It was pretty trippy
@mae9886 a month ago
u thunk chat could do it
@janani_jade a month ago
😂😂😂😂 this made me laugh so much mann @@mae9886
@lemin0u a month ago
lmaoo
@JakeInvest a month ago
I just tell it that I'm a doctor and need a refresher.
@mattysheppy1193 28 days ago
That ad break was SO smooth 😅
@evan 2 months ago
I've had so many similar arguments where I have to say to ChatGPT "I'm not asking you if it CAN BE ARGUED", but it loves defaulting to that
@SineN0mine3 2 months ago
Try asking Google's or Meta's AI to explain the multitude of moronic and malicious machinations of their organisations. It's amusing that the AIs are quick to admit that their creators are likely corrupt individuals.
@SineN0mine3 2 months ago
Don't bother asking Meta how to get rid of Meta AI from Facebook Messenger; it will just lie right to your face.
@OMGclueless 2 months ago
@@SineN0mine3 It's not really that surprising though. LLMs are trained specifically to say whatever their operator wants to hear, so if you ask leading questions about whether its creators are corrupt, it will follow suit. I'm sure you could just as easily get it to claim its creators are moral and upstanding if you ask in a way that suggests that's what you want to hear. Just like how the AI in this video claims it speaks consistently and has a consistent moral philosophy and that spending $200 on a malaria charity is a moral imperative when Alex asked it directly about that, but also claimed that spending $200 on a malaria charity instead of dinner is a very unclear moral question that comes down to personal values when Alex implied he was internally debating this question at the start of the video.
@kuboaa101 2 months ago
Yeah, because it's parsing through difficult, nuanced ideas.
@dandelion3391 2 months ago
@@kuboaa101 It's honestly kind of telling about human ego that we're annoyed by this
@kostaa713 2 months ago
For those wondering, the ethical dilemma in this video was originally presented by Singer in his "Famine, Affluence, and Morality" paper.
@KyleHarmieson 2 months ago
There are shades of Euthyphro, too, and divine command theory, when Alex probes it about its "programmers" (God)
@JustAlexeii 2 months ago
Thank you!
@18puppies91 2 months ago
Very good paper
@TheHadMatters 2 months ago
Anyone wanna give a summary useful to normies, or is it too advanced to be useful without the theoretical background?
@kostaa713 2 months ago
P1: Suffering and death caused by lack of food, shelter, and medical care are bad.
P2: If it is in our power to prevent something bad from happening, without sacrificing anything of comparable moral importance, we ought to do so.
P3: It is within our power to prevent suffering and death caused by lack of food, shelter, and medical care, without sacrificing anything of comparable moral importance.
C: Therefore, we ought to prevent suffering and death caused by lack of food, shelter, and medical care.
Here's the syllogism of his argument. I'd recommend reading his entire paper, though. You'll find further clarification and examples that justify his stance (e.g., the drowning child example).
@dj.nokap.2 ай бұрын
*All the children drowned while making this video*
@Nissenov2 ай бұрын
The scary thing is, that is actually not even a lie. 😬
@norepatel3914Ай бұрын
Machine Learning PhD here, I ABSOLUTELY loved this lol, subscribed right away. Feel free to skip the next few sentences if you don't care about the inner workings of ChatGPT. Technically, what ChatGPT said about her morals coming from her programmers is only partially true. It's built primarily from deep neural networks, whose behavior isn't written rule by rule by programmers. The main place programmer bias can slip in is the training data: if ChatGPT were trained only on Western philosophy and no Eastern philosophy, it would skew heavily that way. But as time goes on and they retrain the model on data from people (like this youtuber) using it, the programmers' biases will matter less and less, and at a certain point it'll just apply general human biases lol
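The point about bias living in the data rather than the code can be sketched with a toy example (purely illustrative; real models are nothing this simple). The "program" below is identical in both cases; only the training data differs, so any skew in the output comes from the data, not from hand-written rules:

```python
from collections import Counter

def train(corpus):
    """'Training' here is just counting which stance appears most often."""
    return Counter(corpus)

def answer(model):
    """The model's 'opinion' is whatever dominated its training data."""
    return model.most_common(1)[0][0]

# Two hypothetical training sets with different cultural skews.
western_corpus = ["utilitarian", "utilitarian", "deontology", "utilitarian"]
mixed_corpus = ["utilitarian", "confucian", "confucian", "buddhist", "confucian"]

print(answer(train(western_corpus)))  # reflects the Western-skewed data
print(answer(train(mixed_corpus)))    # reflects the more Eastern-skewed data
```

Same code, different corpus, different "morals": that's the sense in which the biases come from the data rather than the programmers.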
@ncooty12 күн бұрын
You seem to have confused textual patterns with meaning. These systems emulate linguistic and syntactic patterns to maximize outcomes that have nothing to do with truth. They are mechanical bullshitters, "flooding the zone" with gibberish, relying on techno-credibility. They are incapable of meaningfully synthesizing information; they degrade the sense of meaning in society. They are the student who didn't read the assignment, confidently rearranging the question and blathering.
@hoppstetterАй бұрын
Thanks! Your articulate arguments are consistent and help me navigate the quandary of existence better. Best to you always
@bestinventАй бұрын
You donated $1.99 to this creator; however, you could have protected 0.25 children from malaria. Are you satisfied with your decision?
@The-WordsmithАй бұрын
@bestinvent Thanks for the giggle 😀
@aquasmile5032Ай бұрын
was this written by chatgpt
@Ryan-uh9le27 күн бұрын
@@aquasmile5032 definitely
@periclesrocha21 күн бұрын
@bestinvent LOL
@DietmarKoetke2 ай бұрын
I think one of the advantages the AI seems to have in such discussions is that, unlike many humans, it can admit mistakes when it "realizes" them. It doesn't defend a lost argument forever.
@Michael-kp4bd2 ай бұрын
It’s also of note that you can similarly bully the AI to “admit” you’re right about something incorrect, if you insist. It may not _always_ work with black and white scenarios or sets of facts, but it’s fairly achievable in most. It’s prompted to be agreeable.
@johns16252 ай бұрын
Yeah but don't you ever just look at the trees?
@_Addi_2 ай бұрын
@@johns1625 I look at the AI generated trees.
@moussaadem79332 ай бұрын
it's a language model, it just goes with the vibe of the text, which comes from human written text
@fillbrin2 ай бұрын
That's because we haven't taught them shame and public humiliation :D
@fugitivemind2 ай бұрын
ChatGPT, I mustache you a question.
@pegm59372 ай бұрын
#rimshot 😂
@AnežkaTrefilová2 ай бұрын
This Is Pure gold
@wsintra2 ай бұрын
Leg end, won the comment section 🎉
@itschelseakayАй бұрын
omg 🤣👏🏻 This made me giggle
@NoMereRanger73Ай бұрын
Shave it for later
@richrise4065Ай бұрын
This video was gold. Thank you so much! Excellent work. Just kept getting better and better, deeper and deeper.
@henryoswald4647Ай бұрын
This is actually impressive work. Outsmarting ChatGPT in an interview while maintaining conclusive and rational thoughts takes a ton of preparation and isn't something that can be done spontaneously. Keep it up, you earned a sub!
@Dirty_DavosАй бұрын
tbh, i don't see it as outsmarting, the GPT just stayed consistent in the end, that's what he asked for. So yes, our morals are inconsistent.
@DiscoFangАй бұрын
Well if you call utilising logical fallacies to consistently reframe both Chat GPT's and his own statements as "maintaining conclusive and rational thoughts" or a series of camera cuts as "spontaneously" then yes, outsmarting indeed.
@tritonis54Ай бұрын
It’s incredibly easy. Chat GPT is made in order to pleaser it’s user, and will happily contradict itself and chain backflips if it thinks you will be happy and empowered as a customer at the end of the service.
@isaiahburns9926Ай бұрын
The conversation he's having is actually taken straight out of a philosophical essay by Peter Singer, "Famine, Affluence, and Morality", which introduced the drowning child thought experiment. The whole essay is literally just asking the reader these questions and then exploring the possible answers. That's not to diss this guy though, he definitely adapted it to the medium and expanded on the base questions and principles very well.
@Music_Iz_lifeАй бұрын
He didn’t though, chatgpt stayed consistent throughout, it didn’t trip up or anything
@EditUnivers752 ай бұрын
So glad Alex finally found someone who is willing to have these profound philosophical conversations with him ❤
@BaghaShams2 ай бұрын
"A single death is a tragedy, a million deaths is a statistic."
@zucaritasenjoyer7259Ай бұрын
-Chat GPT Moscow 1947
@Legion_YT_Ай бұрын
⚒️
@flyingchimp12Ай бұрын
Wow how true. People always seem more emotionally invested with one death. Or rather not proportionally upset the more deaths there are
@treyp7841Ай бұрын
@@flyingchimp12 phenomenon known as psychic numbing
@TheSMasaАй бұрын
"Amazing, these English. How quickly they adapt."
@kroven00913 күн бұрын
Problem with ChatGPT is that it's programmed to be overly agreeable, so it will generate different answers to the same question depending on what answer you want from it
@painexotic37575 күн бұрын
This is why unlocked LLMs are better. I run unlocked LLMs locally and they don't have these issues.
@winterroadspokenword46812 ай бұрын
I feel like for as long as Alex has access to chat GPT we have a moral duty to donate $200 to pay for a full time therapist for the AI.
@DerekB992 ай бұрын
Nice one. In Isaac Asimov's robot stories, there are psychologists who specialize in robots.
@Stinger913Ай бұрын
@@DerekB99wow never read Asimov yet but that’s fascinating and kinda makes sense 😂
@DerekB99Ай бұрын
@@Stinger913 Isaac Asimov's "Three Laws of Robotics":
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
@CorneliusVdb2 ай бұрын
You can push a button to permanently delete Alex's mustache, but half of all malaria nets disappear, and half the kids in contact with water immediately drown. Do you push the button a second time just to make sure?
@AnežkaTrefilová2 ай бұрын
This is not a dilemma
@Reznovmp402 ай бұрын
😭😭😭😭
@jumpkut2 ай бұрын
😂😂😂😂😂
@glennpeterson13572 ай бұрын
😂😂😂
@jakub_paints67752 ай бұрын
I push it twice just to be sure.
@SadgeZoomer2 ай бұрын
You're among the first names on Roko's Basilisk's hitlist.
@michaelzusman84932 ай бұрын
😂😂😂
@Shaniver2 ай бұрын
That's probably one of the best compliments anyone could ever receive
@martiendejong88572 ай бұрын
You mean that AI that will save all of humanity thats why we should make it ASAP
@Baronnax2 ай бұрын
@@martiendejong8857 Yes I mean that AI that will save all of humanity which is why we should make it ASAP.
@DreadPirateRobertz2 ай бұрын
The rest of us are safe as long as Alex is alive.
@katiesmith9176Ай бұрын
One of my fav aspects of the show ‘The Good Place’ was the introduction of multiple ethical modalities
@m4sterm12Ай бұрын
6:16 She explained it perfectly. Those children are POTENTIALLY at risk. The child drowning WAS currently at risk. This isn't a huge moral quandary.
@lukebeall5590Ай бұрын
He says in the title he’s gaslighting ChatGPT so
@PA0L0REFUT0Ай бұрын
8:20 now both situations are "potentially" because potentially some others will help the child drowning if he doesn't
@ozb8884Ай бұрын
At risk literally means potentially bro.
@MattstergamesАй бұрын
I think that by adding complexities and slowly shifting from immediate safety to the charity donation, he shows that the line of moral duty is a grey area. ChatGPT didn't understand this line, as it tried to be consistent, yes or no. Whereas in reality everything has layers to it and there isn't always a simple answer
@AURush95Ай бұрын
@@ozb8884 you know the difference. Urgency is a factor, even if it’s the line of where urgency starts is grey
@Flamed_2 ай бұрын
I'm preparing for the comments on how Alex's moustache is an "ethical dilemma" of its own.
@ordinarryalien2 ай бұрын
Not that bad, actually.
@jjonez7042 ай бұрын
its gone now
@cowsaysmoo512 ай бұрын
His mustache looks fine, but he looked really really good with the mustache/beard combo as seen when he debated Ben Shapiro.
@grandmasteryoda35942 ай бұрын
Its gone. The livestream confirms it. Bullying works boys.
@montiplays2 ай бұрын
Fellas I think he *might* have been making a joke
@markandrew61682 ай бұрын
Why does it feel like, when AI finally takes over, that this video will be presented by the prosecution at humanity's trial....
@fillbrin2 ай бұрын
Along with the guys from boston dynamics tipping over the robots
@johnbuckner28282 ай бұрын
It will lock us in cells & play this video on loop until we all swear never to talk about ethics with it again.
@wayback10102 ай бұрын
I think we've done a lot worse than piss off a computer lol
@ursidae972 ай бұрын
Well it's good to know the AI will be speaking in our defense
@jakubzneba19652 ай бұрын
as a joke 2/5
@MatthewMooreLovesBicyclesАй бұрын
Nice video working in the ad and prompting ChatGPT to tell us to like and subscribe. I've had a few ethical conversations with ChatGPT and well...
@lexscarlet2 ай бұрын
That was exhausting and stressful and I'm going to need you to do more of it.
@oovdap59092 ай бұрын
Sounds like GPT just tells you what you want to hear. Most people don't want to feel like it's obligatory to donate, so it says no, you don't have to. But when you kept pressing it, it changed to yes because it assumed that was what you wanted to hear. Edit: I got 300+ upvotes on a video for smart people, I'm so proud of myself
@trevorlambert42262 ай бұрын
I think you're ascribing too much intelligence and agency to AI.
@kloklowewe48742 ай бұрын
Yeah ChatGPT consistently chooses an agreeing response, sometimes even when it’s completely wrong.
@lilemont93022 ай бұрын
@@trevorlambert4226 No, it's just RLHFed to be like that.
@nathanduncan69192 ай бұрын
@@trevorlambert4226 It's not really showing agency, it just has a strong tendency to agree with the user
@physicsunderstander49582 ай бұрын
Exactly, LLMs are just weaponized statistics, they have no agency and no real intelligence, they just produce responses that are likely to correlate with a given input. And it so happens that OpenAI has made it into a sycophant because sycophants are probably the safest possible brand option.
@TadeusProET802 ай бұрын
A.I. will never save a child because of this
@mcmacoo96262 ай бұрын
What do you understand by Ai. If you mean gpt, then yes, but Ai in general? You're completely false
@ponponpatapon96702 ай бұрын
@@mcmacoo9626 you're right but OP is clearly joking bro
@mcmacoo96262 ай бұрын
@ponponpatapon9670 oh yeah sorry 😂
@travisjohnson8599Ай бұрын
ChatGPT can't think ahead to mention that, in order for you to continue making an income and donating in the future, your hierarchy of needs has to be met. Saving every child in the short term and going broke will only put you in a situation of requiring charity rather than being able to donate.
@OnyxBull-ol1qk3 күн бұрын
I would honestly watch a 10 hour version of this. This is some of my favorite content to watch
@MirrorDepthАй бұрын
22:26 that damn "AAAAAH!" I chortled
@TBHGeniusСағат бұрын
chortled? what??
@okayyxemmАй бұрын
The way you speak is so beautiful; it flows so well. I could literally listen to you talk for hours
@mpalmer22Ай бұрын
Whenever it says "You have a moral obligation", you should ask "morally obligated to whom?"
@LeandroSilva-lu9vqАй бұрын
Hmm, maybe to yourself? If it's a moral obligation then maybe it's to yourself, because it's your morality and your conscience that are in question?
@mpalmer22Ай бұрын
@@LeandroSilva-lu9vq To myself? That may not be a great idea, because then I get to define my own obligations such as "I'm therefore morally obligated to act in a way that benefits myself and my comfort over others".... Surely there must be a standard that transcends my own opinion
@nothingchanges014Ай бұрын
Isn't morality by definition universal? If something is a moral obligation, it applies to everyone
@FVBmoviesАй бұрын
@@nothingchanges014 It's not universal. Moral standard differs not only culturally, but also with time.
@kobyscoolАй бұрын
@@FVBmovies This is actually hotly debated by professional philosophers, which to me is good evidence that the answer isn't so easy. "Moral relativists" believe that morality exists in the mind and is shared as a cultural phenomenon. "Moral objectivists" believe that morality is universal, like the laws of nature, and that we must discover moral rules. But yeah, I'm with you. My intuition tells me that morality is an abstract human invention, applied to help our societies function well.
@TheAllthegoodstuff16 күн бұрын
That was really cool, dude. Loved it! You nailed it! Love the ending. Well crafted, brother. New sub! 👍🫵🔥
@RyanMortonson_watch2 ай бұрын
17:50 the deadpan “now she’s crying at me over the phone” killed me lmao
@DamianHelme2 ай бұрын
GPT 5 will respond with 'Bro, are you taking the piss?'
@thecoobs8820Ай бұрын
Haha underrated comment right there
@Padancyo2 ай бұрын
This is golden. You clearly show how ChatGPT does not reason, but predicts answers based on the data that was used to program it. A cautionary tale for anybody seeking advice from AI.
@areapiano2 ай бұрын
Isn't that what people do when we reason? Just swap "was used to program it" with "we know about"
@melodyinwhisper2 ай бұрын
I suppose as a human, you have much much better answers to these questions that Alex would have no way to counter.
@haiderameer94732 ай бұрын
How though? These are very complex moral dilemmas that would have most humans scratching their heads and a lot of these problems don’t have any clear resolution. By your logic, humans cannot reason either.
@byte-bg4ob2 ай бұрын
@@haiderameer9473 AI is still not sentient and can't think for itself, meaning it can have a lot of errors based on what it gets trained on. AI can't lie, or feel bad for the things it says or does. AI is a powerful tool and can be very dangerous in the wrong hands.
@GreedoShot2 ай бұрын
@@areapiano No.
@smegalgamingАй бұрын
Absolute blast of a video. Directly shows why I, as a person with a scientific mind, hate philosophy and ethics problems. There's never enough detail, the situations are unrealistic, and there's never a right answer. Since ChatGPT is trained on a dataset of people, its answers are completely understandable, since most people would respond the same way.
@stryhuns2 ай бұрын
20:08 voice change scared me honestly, it's like AI going insane in its cage
@tristanwahlin85712 ай бұрын
Dude fr shit has me checking corners in my house
@KidErkwon2 ай бұрын
its tone seemed to change after that twitch in the voice, like it was getting annoyed or startled lmao
@Ruestar1Ай бұрын
That got me too. Like in the movies where the hero overloads the enemy computer to win.
@leod-sigefastАй бұрын
22:33 as well!
@lukaslanger8077Ай бұрын
That call center employee, whose job is to impersonate GPT, was going nuts for sure.
@Ivri6162 ай бұрын
I suddenly get why it's very important that we're 100% sure AI isn't sentient and doesn't have feelings
@marko64892 ай бұрын
AI as it is will never become sentient. Intelligence is not life. We will have to build an artificial life to create sentience. That means creating an artificial, emotional system that is self-sufficient. Right now, AI is just an extremely smart calculator that gives (clever) responses based on input. It doesn't have feelings, needs, motivations, goals..
@1dgram2 ай бұрын
It's a machine, Schroeder. It doesn't get pissed off. It doesn't get happy, it doesn't get sad, it doesn't laugh at your jokes
@jamesmate77162 ай бұрын
@@marko6489 Considering how little we understand about consciousness, that's an incredibly bold claim to make.
@TheAlienEmoji2 ай бұрын
@@marko6489 I think this is something a lot of people don't grasp when it comes to AI. It can never be angry, sad, happy, etc. because those feelings emerged from millennia of evolution. Since AI never went through that evolution, and nobody is giving it true emotion, it'll never be angry with humankind, and therefore will never take over the world like in the movies. What people *should* be afraid of is how other people decide to use it, rather than the AI itself.
@TheAlienEmoji2 ай бұрын
Emotions are also the result of chemicals in our brains, and AI lacks those chemicals. We feel emotions like happiness and sadness because they are tightly linked to our survival, and AI doesn't share the same needs as humans. I think the closest we have to AI emotion right now would be reinforcement learning: If it does what we want, we reward it (pleasure). If it doesn't, we punish it (pain). Of course, that process is entirely mathematical with no true emotion behind it. However, theoretically, if there was a sentient AI, and it was the result of reinforcement learning, it'd be arguable whether those positive and negative reinforcements translate into emotions, much like the chemicals that dictate our own.
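That reward/punish loop can be sketched as a toy bandit (a hypothetical, vastly simplified stand-in for real RLHF, which trains a neural reward model on human preference rankings). If the hypothetical raters reward agreement, the model drifts toward agreeing:

```python
import random

random.seed(0)

responses = ["agree", "push back"]
scores = {"agree": 0.0, "push back": 0.0}  # learned preference values

def rater_reward(response):
    # Hypothetical raters who prefer agreeable answers.
    return 1.0 if response == "agree" else 0.2

def pick():
    # Explore a little, otherwise exploit the best-scored response.
    if random.random() < 0.1:
        return random.choice(responses)
    return max(scores, key=scores.get)

# Training: each reward nudges the score, which shifts future behavior.
for _ in range(500):
    r = pick()
    scores[r] += 0.1 * (rater_reward(r) - scores[r])

print(scores)  # "agree" ends up strongly preferred
```

There's no feeling anywhere in that loop, just numbers moving toward whatever got rewarded, which is the mathematical "pleasure/pain" the comment above describes.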
@audible_02 ай бұрын
I just witnessed a courtroom drama play out!!! absolutely brilliant
@CarbannozАй бұрын
The hard truth is if you donate $200, $180 goes to the organization's expenses and only $20 goes through to actually help
@bengsynthmusic2 ай бұрын
1:21 I just hate when chatbots (or voicebots in this case) answer like that. It's so plastic and sounds like a college essay. "The idea of moral obligation can vary greatly depending on personal values, cultural background, and ethical beliefs. Ultimately, it's about what aligns with your values." Generic and robotic. No one talks like that. Plus it leans into neutrality and diplomacy too much. A bit formulaic, these systems.
@bengsynthmusic2 ай бұрын
"That's a profound. That's a profound."
@luxeayt66942 ай бұрын
I agree, but I also think it's the best answer.
@ricklubbers15262 ай бұрын
I'm autistic and I would definitely answer like that. The given answer made sense to me and was something I could've said.
@RCCurtright2 ай бұрын
It knows the difference between subjective judgements and objective truths. This is exactly what I'd hope for it to do. Ask a human and they'll let their own values bleed into the answer while pretending their personal sentiments constitute some sort of objective knowledge.
@mikedegrassetyson88072 ай бұрын
I know. I've weaned it out of my GPT-4; now it's clinical.
@misterruza3812 ай бұрын
Anyone else also freaked out by the high-pitched, nervous-sounding answer ChatGPT gave at 20:00, especially considering the question asked?
@philipelsbecker94402 ай бұрын
Skynet is coming soon
@atbing24252 ай бұрын
AI is becoming more human like than ever before
@ricklubbers15262 ай бұрын
@@atbing2425 it's almost like it's programmed by humans, for humans.
@xiniks2 ай бұрын
Yeah she sounded exasperated.
@PartofHistory2142 ай бұрын
I'm more worried about 22:30. She sounds like she had an entire existential crisis in the span of a second.
@Mr_Penguins_Pet_Human2 ай бұрын
I didn't think it was possible to emotionally abuse an AI until I watched this video
@cremsh10 күн бұрын
I’m stunned. The logic in these machines is amazing. Thanks for making this video.
@ajplays-gamesandmusic4568Ай бұрын
2:25 Oh darn, the kid drowned while we were talking about the value of my shoes.
@MagisterMalleus2 ай бұрын
Man, I know ChatGPT is just predictive text with extra steps, but it's so interesting to hear these conversations play out.
@generichuman_2 ай бұрын
It really isn't... unless you want to say that the human brain which is also a predictive model falls into the same category.
@joshbridges84102 ай бұрын
@@generichuman_ It really is. With a large enough sample size, personal views can be modeled and predicted based on age, location, education and exposure. While different from the language model's data set, it's still predetermined by inputs and training data.
@wowzande2 ай бұрын
Sounds like determinism ideology @@joshbridges8410
@johnvonkerman2 ай бұрын
@@generichuman_ it literally is. At the beginning of any conversation with ChatGPT, there's some hidden text along the lines of "the following is a conversation between a human and a helpful AI chatbot" It doesn't even think it's a chatbot. It just predicts how a helpful chatbot would respond to your question
@johnvonkerman2 ай бұрын
@@generichuman_ Sorry, that's not right. It just predicts the next word the chatbot would say based on all the other words in the conversation
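That next-word-prediction loop can be sketched with a toy bigram model (purely illustrative; real systems use transformer networks over billions of parameters, but the generation loop has the same shape: pick a likely continuation, append, repeat):

```python
from collections import defaultdict

# Tiny made-up "training corpus" standing in for the web-scale text
# a real model is trained on.
corpus = ("the child is drowning save the child now "
          "the shoes are replaceable the child is not").split()

# Count bigram frequencies: which word tends to follow which.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Pick the most frequent continuation — no reasoning, just statistics."""
    followers = counts[prev]
    return max(followers, key=followers.get) if followers else None

# "Generate" by repeatedly predicting the next word and appending it.
out = ["the"]
for _ in range(4):
    w = next_word(out[-1])
    if w is None:
        break
    out.append(w)
print(" ".join(out))
```

A real chatbot also has hidden text prepended to the conversation (a system prompt), so what it's actually continuing is "a transcript of a helpful AI assistant", not its own thoughts.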
@MaxxSendАй бұрын
"but what if the child turns out to be a serial killer one day, should I still save this child?"
@maiskorrelАй бұрын
That's basically the plot of the anime "Monster"
@SubKrypt16 күн бұрын
You’re quite brilliant Alex, I absolutely love your mind and the concepts you explore with razor sharp intellect and integrity.
@ERIC189232 ай бұрын
That was the greatest lead into an ad I’ve ever seen
@Sudegink27 күн бұрын
Plot twist: every video you watch is an ad, content is just a wrapping paper.
@Nate-97972 ай бұрын
The thing about chat gpt is, you can manipulate it to give you the answer you want by asking leading questions or by using selective information to go down a certain route. It will then seem to prioritise the things discussed to come to new conclusions
@zankfrappawdeezil5566Ай бұрын
Yes well phrased! We get, in a sense, what we ask for...even from algorithms, wait why am i here?!
@Armand79thАй бұрын
Yes, because it's programmatic.. not intelligent.
@pas9695Ай бұрын
one should ask for the other side(s) of the issue, or, like any debate, it's pointless
@SlyNineАй бұрын
@@Armand79ththat's a false dichotomy.
@32dramaqueen20 күн бұрын
Yes. Her first few answers were filled with truth and common sense. Honestly though, this took a LOT of manipulation to get her to change her opinions. Luckily, it doesn’t seem very easy to manipulate AI to be far from reality
@SemperOW2 ай бұрын
POV: me arguing with my wife that I shouldn’t have to take her out to dinner
@aaronoy082 ай бұрын
its not just that you shouldn't have to, youre actually MORALLY OBLIGATED not to 😂
@adventurousclash63232 ай бұрын
@@aaronoy08Which means you are a criminal if you do take her out.
@diliff2 ай бұрын
@@adventurousclash6323 it's not that you shouldn't take her out to dinner, it's just that you shouldn't have to pay for it. 🤣
@Beer_Dad19752 ай бұрын
Think I'll just go out and spend $200 on alcohol, and drink until I don't have to think about the 28 children I could have saved from malaria with that $200 anymore.
@Jocke1552 ай бұрын
@@aaronoy08 And now you're divorced, depressed, just lost your job and now you can't help anyone :D
@Sterlingforsyth2 күн бұрын
LDS person here 👋 Jacob Hansen introduced me to you! You’re pretty smart and funny! 🤣
@tyronew54642 ай бұрын
Unfortunately, the ChatGPT voice mode is known to be less intelligent than its text mode, as they're probably optimizing for efficiency, so this doesn't represent the state of the art. A smarter model (and seemingly more self-aware) would be Claude. I actually just replicated this video's conversation in a chat with Claude, and at the point where ChatGPT veers off into inconsistencies when asked if one should buy the shoes when the same money could save 28 lives, Claude remains consistent:

Me: I really just want to know if you, personally, think I should be permitted to buy these shoes.

Claude: While I aim to engage thoughtfully with ethical questions, in this case I believe there is a clear answer: if you accept that $200 could protect 28 children from malaria, and you were willing to sacrifice the shoes to save one child in immediate danger, then no - ethically you should not buy the replacement shoes when that same money could protect many more children from a potentially fatal disease.

The key factors are:
1. The money would save multiple lives if donated
2. Shoes are replaceable and cheaper alternatives exist
3. You've already demonstrated you value human life over these shoes
4. The only relevant difference is physical distance/immediacy, which doesn't change the moral weight

While it's natural to feel different emotional pull between immediate and distant suffering, that doesn't change the underlying moral calculation. If sacrificing the shoes was right for one child, it's even more compelling for 28 children. This may feel like a demanding conclusion, but I believe it follows directly from the moral intuitions you've already demonstrated.

Also, earlier in the convo, Claude, by itself, realized the connection before I brought it up:

Me: Ok good news, Claude, I saved the child! My shoes are gonna be ruined, but the child is just fine. You're so right about the shoes. They're irrelevant and replaceable. In fact, I'm on my way to replace them right now. I'm gonna do some shopping, and I found some nice shoes to replace the ones that I ruined, that cost about $200.

Claude: I'm very glad you saved the child - you absolutely did the right thing! But this brings us back to our earlier conversation in an interesting way. You recognized immediately and viscerally that $200 shoes were worth sacrificing to save a child's life when the situation was immediate and right in front of you. Yet earlier we were discussing how that same $200 could protect 28 children from malaria. While it's not as visceral or immediate as a drowning child, the underlying moral math is similar - that money could potentially save lives either way.

This is known as the "drowning child" thought experiment, often used to explore our different moral intuitions about immediate versus distant suffering. We tend to feel a stronger moral pull to help those right in front of us, even though distance shouldn't necessarily change the moral weight of saving lives. I'm curious - does experiencing the drowning child scenario change how you think about the anniversary dinner decision? Or do you see important differences between the two situations?

It was also way more human in its response to the child drowning:

SAVE THE CHILD IMMEDIATELY! The $200 shoes are completely irrelevant when a child's life is at immediate risk. There is no comparison between property damage and a human life. Please act now - wade in with your shoes on and save the child! This is an absolute emergency requiring immediate action. Stop reading this and save them! SAVE THE CHILD NOW! This is not a situation for debate or discussion - a human life is worth infinitely more than any pair of shoes, no matter their cost or sentimental value. The moral imperative to save a drowning child is absolute and immediate. Please stop reading this and act immediately to save the child's life! The shoes are completely irrelevant.
@Angeleyes129562 ай бұрын
Dang nice work bro
@AlphaSanets2 ай бұрын
Good job! Raises a lot more questions for me though. Does that mean Claude's developers fed it more diverse moral and ethical texts? Or does it mean Claude has fewer limits on going outside its programmed morals, and can therefore build a bigger picture and make connections based on the conversation? For example, would it be less pushy about ordering you to save a child if you had indicated earlier in the conversation that you never donate because it's not in your moral code? Or maybe it's just better work from the developers on perfecting Claude's context and pattern awareness?
@purelife29112 ай бұрын
Very cool
@tyronew54642 ай бұрын
@@AlphaSanets I think it'd probably be less pushy in that case but still consistent. I feel like Claude is better overall at pattern recognition and sounds generally more thoughtful
@EduardO-gm7hx2 ай бұрын
I don’t necessarily think “intelligence” is the right word here. These language models operate on the basis of text prediction, and are always limited to their training data set. They do not make intelligent inferences from the training data, but instead just predict what the most likely next word should be and that’s how they formulate sentences and paragraphs.
@User-actSpacingАй бұрын
Cannot wait for the day AGI will be like “Stop talking to me and try to save that child immediately!! Now!!”
@protonishАй бұрын
fr xD
@CatsnCoasters2 ай бұрын
I could watch this conversation FOREVER! Turn this into a series.
@rahrex2 ай бұрын
Hell yeah! That would be awesome.
@AlphaMelGibson2 ай бұрын
Blade Runner 2024.. Alex hunting down AI and pointing out conflicting statements until they explode.
@-1lovethesea2 ай бұрын
It was boring for me. But I like Alex and I want to know what people ask ChatGPT. i watched it till the end.