Newcomb's Problem and the tragedy of rationality

  113,511 views

Julia Galef

8 years ago

I describe my favorite paradox, "Newcomb's Problem," the related "Parfit's Hitchhiker" dilemma, and what they reveal about rationality.
The corresponding podcast episode is here: rationallyspeakingpodcast.org/...

Comments: 2,000
@meatrobot7464
@meatrobot7464 3 жыл бұрын
"OK murderously greedy samaritan, I'll take the ride and give you your blood money. But it's in a clear box in a tent in a carnival."
@markeddy8017
@markeddy8017 3 жыл бұрын
Hahaha
@thatn_ggajandro3197
@thatn_ggajandro3197 3 жыл бұрын
This is the funniest shit I’ve read in a while. Thank you
@meatrobot7464
@meatrobot7464 3 жыл бұрын
@@thatn_ggajandro3197 in that case it's the most productive youtube comment I've left in a while, so thank you
@davidshipp623
@davidshipp623 3 жыл бұрын
Perfection!
@joelambert1784
@joelambert1784 3 жыл бұрын
This joke is on another level hahaha
@danielraju4458
@danielraju4458 3 жыл бұрын
Imagine the Buddha walking into the tent, the thought reader goes into a recursive infinite loop. The Buddha looks at the Scientist, smiles and says, 'The root of all suffering is desire'.
@imunderyourbedrun8227
@imunderyourbedrun8227 3 жыл бұрын
You funny guy
@clarkkent3730
@clarkkent3730 3 жыл бұрын
TRUTH
@AshiStarshade
@AshiStarshade 3 жыл бұрын
Nice!
@cjortiz
@cjortiz Жыл бұрын
Preach
@SlimThrull
@SlimThrull 3 жыл бұрын
1:52 I flip a coin. Heads, I take both boxes. Tails, I take the one box. Since the machine wouldn't be able to tell which box I was choosing, I'd have a 50/50 shot of getting a million dollars.
@CaptainWumbo
@CaptainWumbo 3 жыл бұрын
Imo the machine is scanning what you would have done before you had that information about the catch. If it's scanning what you would do after that information, your decision comes true no matter what, so might as well have a million. Flipping a coin doesn't affect it really, because the choice is binary and you weren't going to flip a coin if you didn't know about the catch. So it still knows what you would have done, unless you go around flipping coins for every important choice even when there is no catch, in which case the machine retires in shame and now you live in the carnival tent forever.
@benweieneth1103
@benweieneth1103 3 жыл бұрын
SlimThrull, I'd rather have a near-certain chance at $1M.
@SlimThrull
@SlimThrull 3 жыл бұрын
@@CaptainWumbo That's just it. *I* am not making the choice at all. I'm allowing a random event to make the choice. Since neither I nor the AI know what will happen the AI can't possibly know what I would pick.
@deskryptic
@deskryptic 3 жыл бұрын
nice
@Deto128
@Deto128 3 жыл бұрын
@@SlimThrull sure, but if you just pick box B then you have a 100% chance of making a million. In your scheme, if the AI can't determine your action and just decides randomly, then it still has a 50 percent chance of not putting in the million dollars.
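A quick way to see what this thread is arguing about is to write out the expected values. The sketch below is mine, not from the video: it assumes the predictor is right with probability p against anyone who picks deterministically, and can do no better than a 50/50 guess against a coin-flipper.

# Expected-value sketch (assumptions: the predictor is right with probability p
# about anyone who chooses deterministically, and can only guess 50/50 against
# someone who flips a coin). Payoffs follow the video: $1,000 visible,
# $1,000,000 in the opaque box if the predictor expected one-boxing.

def ev_one_box(p):
    # The opaque box is filled whenever the predictor foresees one-boxing.
    return p * 1_000_000

def ev_two_box(p):
    # You always get the visible $1,000; the million is there only if the
    # predictor wrongly expected you to one-box.
    return 1_000 + (1 - p) * 1_000_000

def ev_coin_flip():
    # Your coin and the predictor's guess are independent, so each of the
    # four outcomes has probability 0.25.
    return 0.25 * (1_001_000 + 1_000 + 1_000_000 + 0)

if __name__ == "__main__":
    for p in (0.5, 0.9, 0.99, 1.0):
        print(f"p={p}: one-box {ev_one_box(p):,.0f}, two-box {ev_two_box(p):,.0f}")
    print(f"coin flip: {ev_coin_flip():,.0f}")

Under those assumptions the coin flip averages about $500,500, while a committed one-boxer gets p × $1,000,000, so randomizing only looks attractive if you think the predictor is barely better than chance.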
@darkranger116
@darkranger116 3 жыл бұрын
me wanting $1,000 and two boxes for my cats: *it's free real estate*
@hugofontes5708
@hugofontes5708 3 жыл бұрын
but imagine all the stuff that comes in boxes you could get with 1M
@jimmybolton8473
@jimmybolton8473 2 жыл бұрын
I don't know what to do, but you are so sweet and easy on the eyes that I just love watching you give me advice that I can't make sense of. It's okay, I'm good with that. ❤️
@ChibiRuah
@ChibiRuah Жыл бұрын
Honestly, the rational act vs. rational character distinction makes a lot of sense. One tackles choices in the moment, maximizing in the moment, while the other is fixed and fated ahead of time to make the choices that maximize expected output overall.
@UteChewb
@UteChewb 3 жыл бұрын
Parfit's Hitchhiker suggests to me that for rational beings in real life the evolution of trust and reciprocity is vital, as is the social thing called honour. This avoids the tragedy of rationality in real life, by such rational agents committing to an agreement. Sticking to the agreement increases your social standing via reliability, and that improves your future survival.
@jcbarendregt
@jcbarendregt 3 жыл бұрын
The second example reminds me of a Seinfeld joke about getting the check for dinner. “Why would I pay for this. I’m not hungry now. I just ate”
@williambendix9957
@williambendix9957 3 жыл бұрын
The notion that Seinfeld is a "selfish agent" actually explains a lot
@stephenlawrence4821
@stephenlawrence4821 2 жыл бұрын
The Parfit's Hitchhiker paradox is also based on belief in contra-causal free will. The mistake is to hold the past fixed when considering both options.
@oriongurtner7293
@oriongurtner7293 3 жыл бұрын
Answer to Parfit’s Hitchhiker: have him drive to Newcomb’s Carnival Tent He’ll get his 1000 dollars
@DifferentName
@DifferentName 7 жыл бұрын
Thinking about this problem convinced me to vote. For years, causal decision making led me to skip voting because the chance of my vote making a difference is approximately zero. But this kind of reasoning leads to a world where a large number of intelligent rational people don't vote, which I suspect does make a difference. I would prefer to live in a world where intelligent rational people vote, so now I act accordingly.
@PuzzleQodec
@PuzzleQodec 6 жыл бұрын
That is the best answer I've seen. The logical follow-up is to tell others about it. Causal decision making dictates that it doesn't make a difference, so why would you. But it's important that people know where it fails.
@richardgates7479
@richardgates7479 6 жыл бұрын
Actually, I think people just use that as an excuse. The real reason they don't vote is that jury duty pools used to be drawn from voter rolls; now I think they use DMV records, and people haven't updated their reasoning.
@skynet4496
@skynet4496 4 жыл бұрын
I vote when I like the choice. In the case of Hillary vs Trump, I did not vote in order to show that I don't think either choice was good. That is also a rational choice: as South Park joked, choosing between a turd sandwich and a total douche...
@michaelmccarty1327
@michaelmccarty1327 4 жыл бұрын
On the other hand, this might convince someone not to vote, since it's because of idiots like us that we get the same crooks in there every four years!
@chemquests
@chemquests 3 жыл бұрын
@@skynet4496 love that episode, agree those are the type of choices we get, fundamentally I’m just trying to keep evangelicals out of power.
@Dalesmanable
@Dalesmanable Жыл бұрын
As a practical engineer, I’d tease the philosophers by tossing a coin to decide my actions. As a practical problem, this is making a mountain out of a molehill. The big money is only in the opaque box if I take just that one box, so I will take it every time, the predictor then making my dreams come true.
@Phoenixm88
@Phoenixm88 7 жыл бұрын
You have just become one of my favourite teachers on YouTube. Thank you, for being you, for simply doing what you do, and for it happening to be exceptional in quality. Good work.
@TheLazyVideo
@TheLazyVideo 3 жыл бұрын
The Newcomb problem with a predictor which is 90% accurate is equivalent to a problem where there is a 90% probability that an evil time-traveler goes back to change the payout of the unknown box. A rational actor who knows there's a 90% chance they will be punished for greed by an evil time traveler will choose to play non-greedily. This is also similar to the class of iterative game problems which is different from the class of single game problems. Humans as social creatures are geared toward acting optimally/rationally in iterative game problems, which may look irrational under the lens of a single game.

A famous example is a game with 2 players and $20. Each player can vote to split or vote to steal. If they vote to split, they each get $10. If one votes to steal, he gets $20 and the other gets $0. If both vote to steal, then both get $0. A rational player will always choose to steal because it either increases his reward (from $10 to $20 if the other player chooses to split), or keeps his reward the same (stays at $0 if the other player chooses to steal).

Now, here's the interesting part: the game gets an additional "punish" action, where any player who is stolen from can pay out of his own pocket $50 to penalize the other player $200. No rational actor would ever choose to punish in such a way because they would lose $50, and a rational actor only looks selfishly at his own interest, not in whether others are rewarded or punished. However, in iterative game theory, where you will again and again play with that same actor, or with other actors in his clan who might learn from his experiences, it's important to reward or punish them to train their behavior. So for $50 cost you train the other player (punishing him $200, which will re-weight his neural network) to make him act fairly in future games and build your reputation as someone who won't tolerate unfair treatment.

This is the element often missing in game theory: adding an option to rewrite the opponent's neural network for future games. Hustlers are very good at this, they will intentionally lose a game of billiards over a bet of $20, and then ask for a $100 game, hoping you think they stink, when really they're hustling you and will certainly win. Poker is very similar as well, as you can train your opponents over the course of many hands.

Iterative games are way more complicated than single games. And games with evil time-travelers are just iterative games in disguise. Instead of time-traveling, the predictor could merely observe how you played past games which builds up your reputation for being either greedy or not. And then you play a million games, each time under amnesia, but only one is with real money but you don't know which. And the predictor in each game has memory of how you played all the past ones.
@mymyscellany
@mymyscellany 3 жыл бұрын
Don't you mean the computer having 90% accuracy would be equivalent to an evil time travel changing the box 10% of the time, not 90%? Maybe I'm misunderstanding
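The split/steal/punish dynamic described above is easy to play with numerically. This is a toy sketch under made-up assumptions (the $50 cost, the $200 fine, and an opponent who starts out stealing but switches to splitting once punished are all mine, not from the comment or the video):

# Toy model of the split/steal game with a "punish" option, played repeatedly
# against the same opponent. All numbers are assumptions for illustration:
# punishing costs the punisher $50 and fines the thief $200, and a thief who
# has been punished switches to splitting for the remaining rounds.

def play(rounds, punish_when_robbed):
    my_total = 0
    opponent_steals = True               # opponent starts out greedy
    for _ in range(rounds):
        if opponent_steals:
            my_total += 0                # opponent takes the whole $20
            if punish_when_robbed:
                my_total -= 50           # pay to fine the thief $200 ...
                opponent_steals = False  # ... which trains them to split
        else:
            my_total += 10               # both split: $10 each
    return my_total

if __name__ == "__main__":
    for n in (1, 5, 20):
        print(n, "rounds: punish =", play(n, True),
              "| never punish =", play(n, False))

With these numbers, punishing loses money over a single round but pulls ahead once the game repeats for enough rounds, which is the point the comment makes about iterated games and reputation.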
@jsrjsr
@jsrjsr 7 ай бұрын
I love this channel. It puts to the forefront the way rational people think.....❤
@uptown3636
@uptown3636 3 жыл бұрын
If I were Parfit's hitchhiker, I would tell the motorist to stop by the carnival on our way back into town. Both paradoxes solved in one fell swoop.
@greenman5255
@greenman5255 7 жыл бұрын
The obvious answer is to: *Take the Brain Scanning Device* and make untold *Billions* of dollars.
@beckyevans6961
@beckyevans6961 7 жыл бұрын
Wow! How did nobody think of that!?
@danielfogli1760
@danielfogli1760 7 жыл бұрын
Ditto with the hitchhiker's: Punch the guy and steal his car
@OolTube02
@OolTube02 6 жыл бұрын
And then give him $1,000 later just to fuck with everybody.
@edthoreum7625
@edthoreum7625 6 жыл бұрын
or just stay home and watch YT. Don't care about boxes, scanning machines or $ 😈 Then am I an anti-skeptic?
@stevemckenzie5144
@stevemckenzie5144 Жыл бұрын
I fell in love with her and her David Bowie eyes about five years ago. Still here.
@thomaskember4628
@thomaskember4628 3 жыл бұрын
When I was at university, whenever studying logic came up, I always had a headache. This video illustrates why.
@davidford694
@davidford694 3 жыл бұрын
It seems to me that the deep tragedy of rationality is its close association with selfishness.
@clarkkent3730
@clarkkent3730 3 жыл бұрын
exactly!!!
@PraniGopu
@PraniGopu Жыл бұрын
I would say it's not a tragedy but an indication of how we should approach morality.
@davidford694
@davidford694 Жыл бұрын
@@PraniGopu Say on.
@JeffNippard
@JeffNippard 8 жыл бұрын
I don't remember if Dan Dennett covered this one in Intuition Pumps, but he did such a great job deconstructing similar thought experiments that I'd be curious what he would have to say about it.
@kenanfidan4744
@kenanfidan4744 6 жыл бұрын
lmao big fan never thought id see you here
@danielche2349
@danielche2349 3 жыл бұрын
@@kenanfidan4744 LOOL sameee
@Vesemir668
@Vesemir668 3 жыл бұрын
Wow, its jeff!
@mate123bur
@mate123bur 3 жыл бұрын
watchu doing here Jefe!
@djnathaniel2699
@djnathaniel2699 3 жыл бұрын
Are you why I keep getting recommended her videos?
@stopthephilosophicalzombie9017
@stopthephilosophicalzombie9017 7 жыл бұрын
Parfit's Hitchhiker reminds me of that scene in the Granite State episode of Breaking Bad when Walter asks the cleaner (I don't think he was ever given a name, but he was played by Robert Forster) if he could trust him to get what was left of his fortune to his kids, and Forster says: "Would you believe me if I said I would?"

Here are two criminals (selfish agents) who have been motivated in their lives of crime by a ruthless rationality. In Walt's case, he initially is motivated by a desire to help his family, but when we see him turn down a chance to work for Grey Matter, he turns it down out of overweening pride. Forster has a front business and an underground business in 'disappearing' criminals with false identities. Both work in cash, with no expectations of trust beyond that of necessity and that which can be backed up by violence. The only way Walt would have been able to persuade Forster to pay would have required muscle that Forster already knows he doesn't have. Hence Walt's plan to drive back to New Mexico and put the screws to his old partners in Grey Matter is brilliant, and takes full advantage of their imperfect information and effete gullibility.
@harrylongbottom12186
@harrylongbottom12186 3 ай бұрын
Great video! However, one minor issue is that the predictor can’t be 100% accurate in their prediction otherwise your decision matrix condenses down into a 2 by 1 rather than a 2 by 2 and therefore you should always pick the opaque box. You have to stipulate that it is possible (even if very unlikely) for the predictor to be wrong to allow for the fact that it’s possible that by picking both boxes, you will get a better result than picking just the opaque box which allows causal decision theory to suggest picking both boxes.
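That point about fallibility can be made concrete with the usual payoff table. The sketch below uses the dollar amounts from the video; the accuracy parameter p, and the assumption that it applies symmetrically to either choice, are added here for illustration.

# The 2x2 payoff table behind the clash between the dominance argument and the
# expected-value argument. Payoffs are the video's; the accuracy parameter p
# (applied symmetrically to either choice) is an added assumption.

payoff = {
    ("one",  "one"):  1_000_000,  # you one-box, predictor expected one-boxing
    ("one",  "both"): 0,          # you one-box, predictor expected two-boxing
    ("both", "one"):  1_001_000,  # you two-box, predictor expected one-boxing
    ("both", "both"): 1_000,      # you two-box, predictor expected two-boxing
}

# Dominance: whatever the predictor did, taking both boxes pays $1,000 more.
for prediction in ("one", "both"):
    assert payoff[("both", prediction)] == payoff[("one", prediction)] + 1_000

def expected_value(choice, p):
    # Expected payoff if the predictor foresees your actual choice with probability p.
    other = "both" if choice == "one" else "one"
    return p * payoff[(choice, choice)] + (1 - p) * payoff[(choice, other)]

if __name__ == "__main__":
    for p in (1.0, 0.9, 0.5005, 0.5):
        print(f"p={p}: one-box {expected_value('one', p):,.1f}, "
              f"two-box {expected_value('both', p):,.1f}")

Taking both boxes is better in each column (the dominance argument), yet once the prediction tracks your actual choice with probability p > 0.5005, one-boxing has the higher expected value; at p = 1 the off-diagonal cells never occur, which is the degenerate 2-by-1 case the comment describes.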
@adamwhite2641
@adamwhite2641 8 жыл бұрын
It seems to me that the problem is improperly defined. The two box option is a red herring and meaningless. You are choosing between the $1000 and the $1M, and your probability of getting the $1M is entirely dependent on the accuracy of the scanning machine. So the real problem can be simplified to "is the machine accurate?" If the machine is 100% accurate then always choose $1M, because the scanning machine can predict what you will choose and will predict you will choose $1M. If it is not 100% accurate then take its accuracy and measure it against your personal risk tolerance and choose that way.
@camspiers
@camspiers 8 жыл бұрын
Adam White Assume for the moment that the machine is 100% accurate. While we agree that we should one-box (as if we do we get 1M assuming 100% accuracy), there are decision theories that tell us not to because they focus more on what you can effect in the future by the decisions you make, and, in this case, they reason there can be no effect on future outcomes given that the money is already in the box (or not), so you should take both. So you are entirely wrong. It doesn't solely depend on the accuracy of the scanning machine or oracle. It entirely depends on the decision theory the decision maker holds at the time of the oracle's prediction. It is a meta game. Hold a decision theory that leads you to two-box, and you get $1000; hold a decision theory that leads you to one-box, and you get 1M.
@adamwhite2641
@adamwhite2641 8 жыл бұрын
+Cam Spiers Your argument here is changing the meaning of accuracy. You are essentially assuming the scanning machine is incapable of predicting the existence of the meta game. That, to me, does not represent 100% accuracy. Meta layers are still pieces of information. If the scanning machine does not have access to these pieces of information then it is not 100% accurate. It all comes down to the claim that the machine does what people say it does, make a perfect scan and a perfect prediction. If perfect prediction does not mean what it seems it should mean, then the paradox is a useless puzzle. It's like playing a game of chess where your opponent has a pet eagle that will steal your pieces at random.
@gavinjenkins899
@gavinjenkins899 7 жыл бұрын
Cam Spiers: If any decision theory would suggest that I choose both boxes, then that decision theory is simply an irrational one in this context to work from, so the rational person wouldn't work from it, thus guaranteeing them $1,000,000. After you leave the tent, go back to considering it as an option if you want, but while inside, those are objectively suboptimal choices to consider.
@williamward9755
@williamward9755 5 жыл бұрын
How could you “know” it is 100% accurate? Certainly not by its previous record, even if it has been 100% correct thus far.
@walkerszczecina2804
@walkerszczecina2804 5 жыл бұрын
William Ward because that’s what the thought experiment is, you are meant to suspend your disbelief and just assume the machine is accurate. It’s the point of the problema
@jonnymahony9402
@jonnymahony9402 8 жыл бұрын
Have you heard about Douglas Hofstadter's superrationality? Very interesting.
@justinorellana6675
@justinorellana6675 3 жыл бұрын
Wish I had a friend I could have conversations like this with. Great video! :)
@otiebrown9999
@otiebrown9999 3 жыл бұрын
Sometimes the question becomes, "do I ignore all money - and do the right thing". This is the story of my life.
@bneymanov
@bneymanov 3 жыл бұрын
Wolpert and Benford's paper "The lesson of Newcomb's paradox" resolved this issue. There's no paradox. It's just the ambiguity of the English language that allows the probability structure to be interpreted in two different ways.
@mybaldbird
@mybaldbird 3 жыл бұрын
Thanks for the reference. Is it fair to say the "realist solution" hand-waves away the stated link b/w prediction and outcome? What would be a better way of stating the "paradox" in both forms so that it is no longer a paradox (i.e. a form of the paradox where the fearful solution is always right and a form of the paradox where the realist solution is always right)?
@bg3841
@bg3841 3 жыл бұрын
Tbh man I think reducing it to a linguistic problem is a bit lame. I'm sure the problem could be restructured to fix whatever issues they found and people would still struggle to reconcile their intuitions with logic. Which is enough for it to remain a valid thought experiment.
@forestplanemountain
@forestplanemountain 3 жыл бұрын
My 14yr old: “Get your friends who went to the carnival with you to try the different options and split the proceeds” which highlights the biggest flaw with Newcomb’s problem: who goes to a carnival on their own?
@gormold4163
@gormold4163 3 жыл бұрын
If you have four friends, have the first two go in for the million. If the box fails twice, the next two go for the two boxes, and you all come out $400 better off. If the first two succeed, the last two follow suit, and all of you win.
@SylviusTheMad
@SylviusTheMad 3 жыл бұрын
I remember when we covered this in school. I immediately chose to flip a coin and randomise my choice, thus defeating the experiment.
@smockboy
@smockboy 3 жыл бұрын
Clever. Even if the reliable predictor described in the experiment predicts that you will make that decision, it's been described as a reliable predictor of choice, not a reliable predictor of true randomness, so it will not be able to predict the coin toss. This defeats the thought experiment by changing it from a set dichotomy of choice, sure, but it doesn't really address the crux of the problem - which is to say, the thought experiment lays out an illustration of a problem: that of two equally valid, equally rational but diametrically opposed decisions with no rational way of determining which one to take in a given instance. Your 'solution' doesn't really solve that problem, so much as avoid addressing it by giving up on rationality altogether in favour of random chance.
@SylviusTheMad
@SylviusTheMad 3 жыл бұрын
@@smockboy I disagree. Flipping the coin produces the highest expected payout consistent with rational action. There either is $1 million in the box or there is not. Therefore, choosing both boxes dominates choosing only one. However, if we choose both the expected payout is only $100. Choosing only one has an expected payout of $1000000, but that's not a rational choice. Randomising, though, has an expected payout of $250100, because we don't know whether the second box will contain money if we do something other than choose.
@fenzelian
@fenzelian 3 жыл бұрын
@@SylviusTheMad Not only is this the right answer and a rational answer, it is a practical answer, in that it forms the basis for game-theory optimal mixed strategies in poker and other games where one player is trying to guess what the other player will do.
@Jone952
@Jone952 3 жыл бұрын
You must be really smart and quick witted
@ADavidJohnson
@ADavidJohnson 3 жыл бұрын
@@fenzelian One of the things I don't particularly like about this sort of thought experiment is that supposedly it shouldn't matter whether the guaranteed box holds $1 or $100 or $1,000 or $100,000 if the other box has $1 million. But it very much does, and whether a person takes guaranteed money is highly dependent on what their needs are. "Is this enough to change my life?" is probably the most important single factor that goes into what you're going to choose, and it seems completely irrational not to consider how important that is. If you're rich enough, it seems nice and logical to get $1,000 for nothing. But the poorer you are, the more you know that $1,000 will disappear to any number of uncontrollable costs, and the only thing that can get you out of the hole you're in is an unbroken string of decades of good luck or else a massive amount of wealth. "But doesn't a poor person need $1000 more?" They need many thousands more, and their income will fluctuate wildly regardless, so they understand their circumstances perfectly.
@OMGclueless
@OMGclueless 3 жыл бұрын
This is at the root of a crazy theory of mine: The evolutionary basis of romantic love is to allow rational actors to credibly commit to support each other in times of hardship. Committing to support each other through sickness and hardship is a rational, beneficial arrangement for both parties, but if either party does end up requiring more support than they can give in return the rational decision would be for the more capable party to abandon the relationship. But, if you can both demonstrate an irrational, innate connection that transcends rationality, then you can credibly enter into a contract that has some chance of surviving even if one person would be better served by leaving. Or to put it another way, love is irrational. But because of love we can credibly commit to support each other in irrational ways, which is a better state of the world than purely rational actors can achieve.
@renato360a
@renato360a 3 жыл бұрын
the assumption "if either party does end up requiring more support than they can give in return the rational decision would be for the more capable party to abandon the relationship" is not necessarily valid or at least is ill-defined. There many rational reasons for partnering up, including external, societal factors that go beyond what both parties can exchange. Also a lot of the benefits of partnering comes from yourself, your own body, psychology and organizational advantage: for example, if your partner takes more than they give, you can think of them in a way analogous to a secretary, someone you pay for a service that's useful to you. Not to mention that balance is impossible due to randomness. There's a cost to abandon a partner and seek another. Many times this cost can be steep, so this can also be a factor lessening the value of that decision.
@whirled_peas
@whirled_peas 3 жыл бұрын
There's some meat to this but it's muddied by the value of reproduction in a relationship.
@evannibbe9375
@evannibbe9375 3 жыл бұрын
@@whirled_peas This itself is an irrational desire relative to only working for your own benefit within your life; however, given that reproduction is a terminal goal for enough people, the rationality of it cannot be measured. Having children is basically a worse repeat of the Parfit paradox if you want the children to support you later in life (as a different terminal goal if reproduction is not your terminal goal) because a child could not credibly commit to something that is against their greedy self interest to leave you out to dry when they haven’t been born yet.
@robertelessar
@robertelessar 3 жыл бұрын
My first thought was to ask myself why I would believe this side-show "scientist". It must be some kind of con.
@Andre-pl2vg
@Andre-pl2vg 5 жыл бұрын
I've just found your channel. I like it very much.
@opcn18
@opcn18 8 жыл бұрын
Best funded scientist ever...
@daddyleon
@daddyleon 8 жыл бұрын
+Emerson White I could see why... put such an apparatus at strategic places, and the powers that be wouldn't face a threat ever.. They'd know who'd oppose them and can deal with them easily... if they're clever about it.
@SecondSight
@SecondSight 8 жыл бұрын
If you're being told that selecting the single box would give 1 million dollars, then that should be included as a signal in the causal decision theory, and thus the opaque box becomes the only rational choice. In any other case where you don't have this information, normal decision theory applies. In a way, in all situations there can only be one rational choice, or am I missing something?
@Bmmhable
@Bmmhable 7 жыл бұрын
I've been having the same thought as I watched this.
@KGello
@KGello 7 жыл бұрын
Mike Sampat but assuming that the machine has unlimited capabilities, if you manage to land on the decision to take only one box, you will have a million dollars. If not, you will have a thousand. A rational person would choose only one box. Again, assuming they believe the scientist and the machine is totally accurate.
@vandertuber
@vandertuber 7 жыл бұрын
coax, unless I misunderstand the paradox, I would always choose the opaque box. I know myself well enough that even before I met the scientist and saw her two boxes, given the circumstances, I would take the one box. UNLESS you don't know about the brain scan.
@smalin
@smalin 7 жыл бұрын
Mike Sampat, assuming that the scanner can analyze your intentions/plans, your intention/plan to take both boxes determines that you will only get $1000.
@DamianReloaded
@DamianReloaded 6 жыл бұрын
You were told the machine would put the million dollars in the opaque box _after_ you entered the room. You can pick the opaque box but that doesn't mean you were planning on doing so when you walked in the room. To be able to walk out with a million dollars you'd have to be 100% certain that that's what you were going to do before being told about the mind reading machine. If you don't have that information, then the whole mind reading machine thing is just noise, and, from the information _you actually have_ you will walk away with money 100% of the time by taking both boxes.
@smalin
@smalin 7 жыл бұрын
Thought experiment. I intend to open the "?" box. The brain scanning machine scans my brain. Then, somebody else steps in (takes my place), and opens both boxes. The machine knows that I will only open the "?" box, and will therefore put the $1,000,000 in it. My surrogate will get $1,001,000.

Could I do this by myself? Could I "become a different person" between the time the machine scanned my brain and when I made my choice, such that the machine would think I was going to only open one box? Newcomb's Problem specifies that I cannot --- that the machine can tell whether I would "become a different person" (and would fill the box accordingly).

What if I were as capable as the brain scanning machine? Could the machine know what I am going to do?

My take on this. This is the same question as the barber paradox (the barber who shaves everybody who does not shave themselves), and the answer is: this is not possible. A machine (that knows more than I do) can know what I'm going to do, or I (if I completely understand the inputs to the machine and its function) can know what a machine is going to do, but not both.
@melekhine
@melekhine 3 жыл бұрын
I lost you. Why not both?
@generalthl8078
@generalthl8078 3 жыл бұрын
@@melekhine The set of things the machine knows and the set of things that I know cannot both contain each other, since one has to be larger than the other.
@AzureAzreal
@AzureAzreal 3 жыл бұрын
I think the answer here is that we don't know. We don't know how the machine arrives at its prediction; so if it can take into account your ability to switch mind states in the way you describe and your intention to do so, then it may still be able to predict your behavior. I think the idea would be that it knows everything you intend to do when you pass through it, because it ran the simulations multiple times to get a probabilistic outcome of events. We may be overcomplicating it at this point though, haha. I think we are just supposed to take that for granted in the problem because you don't even know what the machine does until you are standing in front of the box, so you wouldn't be able to account for it earlier.
@kingfisher1638
@kingfisher1638 3 жыл бұрын
Ok since this popped up in my recommended videos I decided to break down the problem using statistics instead of studying for my finals:

Four possible outcomes:
A: (1000+0) partial machine victory
B: (1000+1000000) subject wins
C: (0) machine wins
D: (1000000) partial subject victory

from two choices:
opaque box: I
both boxes: II

Let w = the subject's estimation of the probability of the machine (between 0 and 1)
Let m = the actual probability of the machine (between 0 and 1)

from tree diagram: P(x) is probability of x
P(I) = w
P(II) = (1-w)
P(A) = m(1-w)
P(B) = (1-m)(1-w)
P(C) = (1-m)w
P(D) = mw

and E(x) is the expectation value of x
E(A) = (1000)m(1-w)
E(B) = (1001000)(1-m)(1-w)
E(C) = (0)(1-m)w
E(D) = (1000000)mw
E(II) = (1000)m(1-w) + (1001000)(1-m)(1-w)
E(I) = (0)(1-m)w + (1000000)mw
E(X) = (1000)m(1-w) + (1001000)(1-m)(1-w) + (0)(1-m)w + (1000000)mw
= 1000m - 1000mw + 1001000 - 1001000m - 1001000w + 1001000mw + 0 + 1000000mw
= (1001000 - 1000000m - 1000w + 1000000mw)

To plot the phase diagram:
wI = Integral((1000000)mw/(1001000 - 1000000m - 1000w + 1000000mw))dw[0,1]
wI = Integral((1000000)xy/(1001000 - 1000000x - 1000y + 1000000xy))dy[0,1]
wI = (((-1+1000x)+(-1001+1000x)log(1000))-((-1001+1000x)log(1001-1000x)))1000x/(1-1000x)^2
wII = Integral(((1000)m(1-w) + (1001000)(1-m)(1-w))/(1001000 - 1000000m - 1000w + 1000000mw))dw[0,1]
wII = Integral(((1000)x(1-y) + (1001000)(1-x)(1-y))/(1001000 - 1000000x - 1000y + 1000000xy))dy[0,1]
wII = (((-1+1000x)-1000log(1000))-(-1000log(1001-1000x)))(-1001+1000x)/(1-1000x)^2

test evaluations: (you can probably ignore these)
E(I) = (0)(1-m)w + (1000000)mw
E(II) = (1000)m(1-w) + (1001000)(1-m)(1-w)
E(X) = (1001000 - 1000000m - 1000w + 1000000mw)
E(I,m=w=1) = 1000000
E(II,m=w=1) = 0
E(X,m=w=1) = 1000000
E(I,m=w=0) = 0
E(II,m=w=0) = 1001000
E(X,m=w=0) = 1001000
E(I,m=1,w=0) = 0
E(II,m=1,w=0) = 1000
E(X,m=1,w=0) = 1000
E(I,m=0,w=1) = 0
E(II,m=0,w=1) = 0
E(X,m=0,w=1) = 1000000

to optimize w in terms of m, take the double integral of the expectation value of one choice over the expectation value of both choices from 0 to 1 for both m and w:
E(w) = (Double integral(E(I)/E(X))[dm,dw]0->1) using wolfram =~0.354714
E(1-w) = (Double integral(E(II)/E(X))[dm,dw]0->1) using wolfram =~0.355563

The double integral values ~0.35 are the y intercept for the optimal assumption of the subject w with minimal variance with m; conversely, swapping the integrals gives ~0.569868, which is the x intercept for the optimal actual probability that the machine can predict the subject m with minimal variance with w.

If w =~0.35 and m =~0.57 the difference in the probabilities of the outcomes is minimized:
P(A) = m(1-w) =~0.3672440
P(B) = (1-m)(1-w) =~0.2771930
P(C) = (1-m)w =~0.1529390
P(D) = mw =~0.2026240
E(I) = ~202624
E(II) = ~367 + 277470 = ~277837
E(X) =~1001000 - 569868 - 355 + 202624 = ~633401

Translated back into English: If the subject consistently estimates that the machine will predict his actions only ~35% of the time, he maximizes his chances at getting 1000000 or 1001000 dollars. If the machine reliably predicts actions 56% of the time, it maximizes the chances at paying out 0 or 1000 dollars.

To visualize, use: www.wolframalpha.com/widgets/view.jsp?id=3a55a38f5f96deb7a6064d9dac177151 and input:
Equation 1: (((-1+1000x)+(-1001+1000x)log(1000))-((-1001+1000x)log(1001-1000x)))1000x/(1-1000x)^2
Equation 2: (((-1+1000x)-1000log(1000))-(-1000log(1001-1000x)))(-1001+1000x)/(1-1000x)^2
From wI and wII above, you will see the intersection point at ~(0.35,0.57).

The 4 regions bounded by the functions are the regions in which each of the 4 outcomes is maximized. The top region is the $0 region and the bottom is the $1000 region; the left is the $1001000 region and the right is the $1000000 region.
@crashraynor7291
@crashraynor7291 3 жыл бұрын
@@kingfisher1638 lol ok that was pretty good. You kind of missed the point of the exercise, which is only partially logical - this is why difference engines will never achieve consciousness. There is a "let it ride" sub-routine in actual consciousness. For some it's massively influential, others barely notice it, but everyone has "...hold my beer" to some degree. The clear problem is that to do this with a strictly logical entity like a computer or statistics (you have made some assumptions that you didn't share), you'd have to include a randomizing agent... which... strictly speaking, computers can't do, as you likely know. Still, that's only faking. People don't buy scratchers because they expect to win, they buy them because hope is a pleasant sensation. Hedonism is where the "random" comes from; your model needs to reflect that. A difference engine can't get excited or hopeful and so can only fudge the Turing test.

As exciting and absurd as your statistical analysis is, that's not a good way to make a decision. It's the prisoner's dilemma: if you both rat you're both F'ed, if neither of you rats you're both half F'ed, if one of you rats he gets to leave. Statistically you should rat every time, but that's not usually the correct choice. You're playing to not lose, rather than to win. (Look at Pascal's Wager if you don't know it already.) From a utility gain standpoint yes, your odds of not-losing dictate that you should take both boxes, assuming a low probability that the machine is that good. I'm taking the ? box only. Is that strictly logical? No. Is it rational? Absolutely.

The problem inherent in statistical analysis is the assumption that I'd rather mostly lose than entirely lose (I'd rather the 1000 to 0)... well, in hindsight sure, but let's roll the dice, eh? If you introduce a binary win condition to your scenario, what changes? Say greater than 1 million is a win, and everything else is a loss. That's closer to human rationale; what does your model read with that limitation? What choice should I make in that case?
@devilsolution9781
@devilsolution9781 3 жыл бұрын
Your channel's about to blow up, I think.
@Lucky10279
@Lucky10279 3 жыл бұрын
So here's my thought: The real question we should be asking is, "Which action will _cause_ me to have the most money?" The evidence has nothing to do with it. To think it does is making the common error of thinking that correlation = causation. Just because the carnival lady has always made correct predictions in the past doesn't mean that the _results_ of those predictions caused the predictions themselves. The prediction is what it is and the question isn't "what is the prediction?" it's "what action can I take that will result in the most money?"
@ZipMapp
@ZipMapp 3 жыл бұрын
Finally someone that points out the non-issue, non-paradox that this question is
@kaugh
@kaugh 3 жыл бұрын
I mean I don't see a paradox either, but I likewise don't see your reasoning. The results influence my decision easily. If say 75% walk out with a million and 5% walk out with 1 million and 1k. I'd easily conclude the best option is single box. It then would depend on likelihood and evidence not maximums. Without that knowledge though of course go for that 5% and maximize. I made a comment already saying if we "know" that it isn't possible to walk out with 1.001 mil then there is no tough choice. Only an obvious choice.
@ZipMapp
@ZipMapp 3 жыл бұрын
@@kaugh Yes but this is not how logic works. Probabilities only explain the distribution of an event realization. In this case you have one event at hand, which is already realized, which means that you don't care about probabilities anymore. No matter the amount of introspection you'll do, taking both boxes is a strictly better choice than taking one. I'll push one step further. If I were to repeat this experiment one million times, knowing from first time that I'll repeat it, I'll always take two boxes because I prefer to be a rational man than an irrational millionaire. But back to our sheep, it's not like the box content can change, and it's not as if changing your behavior would alter your opportunities in a near future. This experiment tells more about you than about probabilities. Are you willing to change yourself in order to cater to intangible future opportunities. In terms of economics it is an unreasonable decision to not open both boxes
@kaugh
@kaugh 3 жыл бұрын
@@ZipMapp perhaps you know this thought experiment more than I do, but I don't even see where you could conclude 'introspection' is necessary at all. I think you then presumed to think, or imply at least, that I held the belief that I could hope myself into being an irrational single box millionaire. I simply don't think that way. Although, I can find it completely rational to take only one box if* you can associate your taking of the single box as being causal in the result of what's in that box. The only paradox I see is the reversal of causation, which is a given provided by some magic machine. You can reject this given, but that doesn't mean you are more rational in any sense. Additionally, if you know the results of any number of other occurrences it should at least influence your decision. You seem to say, if you saw 99.9% of people walking out with a single box with a million dollars or two boxes of just 1000, that it is irrational to go in with the goal of making the most money and select just the one box? For me it's as easy to imagine as something like seat belts. A small portion cause death in an accident, but many more save lives. Vice versa, many more people die not wearing one than live not wearing one. So my action to buckle up is almost entirely statistically based on results that may or may not be actualized in my future. Regardless of my hopes or introspections. I can admit perhaps there are some aspects to this thought experiment that give it a little more depth. If so, my apologies for the confusion.
@ZipMapp
@ZipMapp 3 жыл бұрын
@@kaugh Well I was not particularly discussing your point. I just highlight the fallacy of this mode of thinking. "99% of people who took a single box became millionaires" simply means in this case that when people made the "choice" to pick a single box, it was already predetermined they would take a single box, hence there was already money in the box. The untold part in this is that these same people would also be millionaires had they taken both boxes (obviously, if one box contains a million, taking one additional box won't make it less). There is only an illusion of choice. This experiment tests your belief "Would I change my behavior, would that change my destiny". Except there is no such thing as destiny. You either take X (which is in the ? box), or you take X+1000. The fact that whatever percentage of people won a million is only an empirical distribution; it does not tell you anything about the current event besides prior expectations and statistics. It's completely different from the following example: If you flip a coin 100 times and get 70 heads, then you will start betting on heads (empirical distribution is 7/10, 3/10). Because then you are able to re-toss the coin, and gain value out of the empirical information. In our case there is no re-tossing. Whatever happens in your mind won't change the content of the box. If the robot indeed scanned your brain, and defined what to put in the box, the amount in the box is out of your control.
@timeme5460
@timeme5460 3 жыл бұрын
I'm going with the Schrödinger approach: the machine basically predicts the future, so I can influence the decision of the machine by my actions in the future.
So it follows that the contents of the box are not yet decided until I make my decision. So the rational thing to do is to pick the decision which gives me the most money.
If you had two buttons, one gives you both boxes, with the opaque one empty, while the other button first fills the box with money and then gives it to you, it would be clear what to do. I think it's the same situation. Don't look at causality in the normal way, since the machine's ability to predict the future changes the causality of the situation.
@wabalabadubdub8199
@wabalabadubdub8199 3 жыл бұрын
Exactly, I wanted to comment just that. The paradox stems from the fact that there's an agent that can predict the future, not due to rationality. It's almost like a grandfather's paradox but in reverse. Any situation that includes time travel or predicting the future almost always leads to contradictions.
@oumdead9542
@oumdead9542 3 жыл бұрын
That's not right, and she addresses the objection in the video. You don't need perfect prediction of the future; even a very imperfect prediction will do. And the prediction is based purely on observed correlations; there is no retrocausality. For example, I don't need a magical machine to predict that if I ask a random child in the street to jump three times for no reason, most of them will do it but most adults will just ignore me. To make this prediction I used a really crappy brain scan: my eyes. If you had access to much more sophisticated tools, you could make better predictions, but there is no fundamental difference.
@evannibbe9375
@evannibbe9375 3 жыл бұрын
@@oumdead9542 It doesn’t need to predict the future in order to act as though it is predicting the future; ergo you can act as though it predicts the future in order to get the most money.
@AsianNinjaGod
@AsianNinjaGod 6 жыл бұрын
Reminds me of the first Harry Potter book, when he gets the stone.
@DKshad0w
@DKshad0w 8 жыл бұрын
The obvious answer is to flip a coin, that way you have a 50/50 chance of getting a $1,000,000.
@measureofdoubt
@measureofdoubt 8 жыл бұрын
DKshad0w Clever!
@vectorshift401
@vectorshift401 8 жыл бұрын
Julia Galef How does that help? It won't change what's in the box. Even if the predictor had a perfect record up to that point it won't change what's in the box. It's not a matter of probability either. Considering anything else is magical thinking. There is no evidence that leaving the clear box behind will put money in the other box. The "rational character" is misnamed. It's being applied to include someone who is confused in this situation and behaves irrationally. They should be called "people who are frequently rational but fall apart in intellectually challenging situations".
@vectorshift401
@vectorshift401 8 жыл бұрын
Somebody gave a thumbs up on my response so I thought it best to update my thinking on this. Following Julia's advice in a subsequent video of hers I began to rethink the situation along other lines.

A possible situation is that a person might walk out with both boxes getting only the $10,000 but, now knowing the situation, come back another day with the following strategy. In the interim they could have their brain hard wired to only take the opaque box. This ensures that the scanner will detect what will happen and so put the $1,000,000 into the opaque box. That being done, when they walk out with that box they will get the million dollars. So what is the best strategy? Decide to take both boxes or just the opaque one. The game is the same in both cases but the conditions of the player have changed. (It need not be the same player; the strategy will apply to anyone who knows the setup with enough time to pre-commit in this manner.)

One radical difference is that when confronted with the two boxes the subject no longer has a choice as to what they will do. The aspect of choice was removed before they walked into the tent. At this point they can't make a rational choice or an irrational choice. They may feel like they are making a choice, it may look to others like a choice is being made, but that is no longer possible. If they made a choice then it was when they decided to have their brain hard wired to force them to take only the opaque box.

And this gets to the core of the situation. The description of the situation posits the possibility of a choice being made and predictability of what that choice will be. In any situation where predictability applies, is the concept of a choice within that situation possible? Given some closed system where what will happen over the next ten minutes can be correctly predicted, can anything within that system be properly said to make any choices at all? They may feel like they are making a choice but if their choice is predetermined it isn't a choice. The problem gets its force from this. It posits a contradiction: a choice being made in a system with prediction. It is well hidden but definitely there. In such situations people will rely on their presuppositions and unknowingly apply those. Being a contradiction, the rule of explosion applies and any conclusion can be reached.

Thanks to Julia for the encouragement to try a variety of viewpoints in analysing a situation.
@curly35
@curly35 8 жыл бұрын
+DKshad0w Why not flip a 90% biased coin? Or just use a 100% biased coin to always get 1,000,000...
@gavinjenkins899
@gavinjenkins899 7 жыл бұрын
Which is much worse than the 100% chance of getting $1,000,000 by simply grabbing the opaque box only. So... why? The only advantage would be getting that extra little $1,000 out of your strategy. But +50% chance for a thousand is way way not worth the LOSS of a 50% chance at a million (going from 100% to 50%). Also, no coin or other random factor is mentioned as being available in the problem, so it'd be cheating anyway, but more importantly, it's worse than one of your main, normal options.
@bp56789
@bp56789 3 жыл бұрын
If something can predict my behaviour with certainty, it's probably running a simulation of me. So, when I face the decision, I could be a simulation. If I take 2 boxes in the simulation, and then they give me the choice in real life, they'll know I'll take the 2 boxes, so I'll only get $1000. Since I can't know when I'm in a simulation, I should always take only 1 box.
@alienzenx
@alienzenx 8 жыл бұрын
This seems contrived somehow. The first problem relies on determinism. The second demands that the hitchhiker is sociopathic, the other guy is telepathic, and there is no way of creating an incentive for the hitchhiker to hold his end of the bargain. In such a situation the offer would never be made to the hitchhiker in the first place. I think that is the flaw in both cases actually. In the first problem you are effectively removing choice and then asking someone to make a choice. It's a phoney problem.
@vandertuber
@vandertuber 7 жыл бұрын
Also, with the hitchhiker, couldn't you haggle first, then arrive at a bargain that you would uphold, and perhaps shake on it?
@Alkis05
@Alkis05 3 жыл бұрын
Julius Caesar's answer to Parfit's Hitchhiker: Give the money when you get to the town alright, then go to the nearest Roman garrison and crucify all the pirates.
@Qstandsforred
@Qstandsforred 3 жыл бұрын
I have a solution. This paradox is a lot like the prisoner's dilemma. In the prisoner's dilemma, it is always rational to defect in a given round. However, when you play multiple rounds, it's usually rational to cooperate. Thus, if you put some weight on future rounds, it can be rational to take the opaque box. If you take only the opaque box now, that means you are the type of person who could get a million dollars in the future. This also applies across game types. For example, if you take the opaque box now, you are more likely to be the sort of person who'd pay $1000 for a ride to town. Thus, if you treat this paradox as merely a single round in the game, it is rational to take only the opaque box. Additionally, you can prepare for this paradox by making this decision any time it comes up in everyday life.
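A tiny numerical version of that analogy, with invented payoffs (3 each for mutual cooperation, 1 each for mutual defection, 5 for a lone defector, 0 for a lone cooperator, against a partner who simply copies your previous move):

# Toy version of the iterated-game point: defecting wins any single round of a
# prisoner's-dilemma-style game, but against a partner who reciprocates your
# last move, cooperation wins once future rounds carry weight. Payoffs are
# invented: both cooperate 3, both defect 1, lone defector 5, lone cooperator 0.

def total(my_strategy, rounds):
    partner_cooperates, score = True, 0   # partner starts friendly (tit-for-tat)
    for _ in range(rounds):
        i_cooperate = (my_strategy == "cooperate")
        if i_cooperate and partner_cooperates:
            score += 3
        elif i_cooperate:
            score += 0                    # I cooperated, partner defected
        elif partner_cooperates:
            score += 5                    # I defected against a cooperator
        else:
            score += 1                    # both defected
        partner_cooperates = i_cooperate  # partner copies my move next round
    return score

if __name__ == "__main__":
    for rounds in (1, 10):
        print(rounds, "round(s): always cooperate =", total("cooperate", rounds),
              "| always defect =", total("defect", rounds))

With these made-up payoffs, defecting wins the single round 5 to 3 but loses 14 to 30 over ten rounds, which is the sense in which being the kind of agent who cooperates (or one-boxes) can pay even when each individual choice looks suboptimal.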
@portland-182
@portland-182 7 жыл бұрын
Take the opaque box, enjoy your million dollars and the beautiful red sky at sunset!
@StrategicGamesEtc
@StrategicGamesEtc 3 жыл бұрын
$999,000 after you pay off the businessman.
@kokopelli314
@kokopelli314 3 жыл бұрын
My rational conclusion is that she's mad, and likely deluded or purposefully lying. Context is everything and reasoned decisions rely on a continuum of priors.
@mynameisnotyours
@mynameisnotyours 3 жыл бұрын
So pick the opaque box.
@ollihella
@ollihella 3 жыл бұрын
I would randomize my decision by flipping a coin just before choosing, checkmate!😄
@gJonii
@gJonii 3 жыл бұрын
You'd have 50% chance of getting million dollars in the box that way. If you always chose the opaque box, you'd get 1M 100% of the time.
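For comparison, here is the expected-value arithmetic behind that reply, under the thread's assumption that a coin flip defeats the scan, so the opaque box is filled with probability 0.5 regardless of how the coin lands:

```python
# Expected values for the two strategies discussed above, under the thread's
# assumption that a coin flip defeats the scan: against a coin-flipper the
# opaque box gets filled with probability 0.5, independent of the actual flip.

MILLION, THOUSAND = 1_000_000, 1_000

ev_always_one_box = 1.0 * MILLION              # the predictor foresees it and fills the box
ev_coin_flip = 0.5 * MILLION + 0.5 * THOUSAND  # box filled half the time; the transparent
                                               # $1,000 is only grabbed on heads
print(ev_always_one_box)  # 1000000.0
print(ev_coin_flip)       # 500500.0
```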
@sean748
@sean748 3 жыл бұрын
Screw the cash, I'm trying to maximize how many free boxes I can get.
@ianhinson2829
@ianhinson2829 3 жыл бұрын
The box problem is simply about "Do I believe them?" no matter how you dress it up.
@SFDestiny
@SFDestiny 3 жыл бұрын
It seems she doesn't actually understand the material, and this is an observation I'd prefer to deny. "I want to believe the pretty lady"
@sykes1024
@sykes1024 3 жыл бұрын
Then add to the problem whatever method of proof you would like. Say they let you watch as many people as you like do the same thing. Say they let you do it yourself and open one or both boxes as many times as you like (but not actually getting to keep the money at the end) before you choose for real. Then what do you do?
@ianhinson2829
@ianhinson2829 3 жыл бұрын
@@sykes1024 There's no quandary there. My post was not about prescribing whether or not a person ought to believe them, but only that the choice they make will depend solely on that. You provided a scenario in which a person could confidently believe that they will get $1M if they take the opaque box only. So of course, in that case, they should take the opaque box. That follows what I said, not disproves it.
@TheShadowsCloak
@TheShadowsCloak 3 жыл бұрын
@@SFDestiny She understands it quite clearly. The problem is in the question's set up, not her comprehension. The fact that you seem to believe that she doesn't comprehend the material, while giving a thorough and reasoned explanation makes evident either a) your own lack of comprehension or b) that you didn't pay attention/actually watch through.
@SFDestiny
@SFDestiny 3 жыл бұрын
@@TheShadowsCloak I've already said I'd prefer she understood. Your assertions increase my dissatisfaction without shedding new light. Yes, you're partisan. But why bother telling *me*?
@linhtoan1851
@linhtoan1851 8 жыл бұрын
I don't really like those problems you proposed, because IF the machine that scanned my brain is accurate, then it should know that I would choose the opaque box only, and thus I'd win the million dollars, because that's what I would do regardless of whether the scientist told me about the brain scan or not. In my opinion, the possible reward for taking the risk for a million is much higher than not taking a risk and only getting a thousand, so I don't really think choosing both boxes is all that rational.

Now, with the hitchhiker scenario, I think it's irrational to suppose this situation with two selfish agents, because we'd be getting nowhere. However, if they were decent people, and the psychologist still offered a ride for a thousand, then it would be rational for the hitchhiker to pay up, because that would bring the most utility to both parties. Unless the hitchhiker is in a massive financial crisis, the thousand dollars he spends can be made up with his future paychecks - assuming he works. As for the psychologist giving the other guy a ride, he would be happy with his money instead of getting angry over being cheated; in a moment of anger he might lash out or do something he regrets. One moment can ruin his entire life, or multiple people's lives.

TRUE rationality takes into account emotions, because we humans are emotional creatures and there's just no way around it. So I believe TRUE rationality would promote kindness and fairness, instead of going for such a selfish move as not paying someone when you promised you would.
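The expected-value claim in the first paragraph above is easy to check directly. A short sketch, assuming the brain scan predicts your actual choice correctly with some probability p (a free parameter of the thought experiment, not a figure from the video):

```python
# A short check of the expected-value reasoning in the comment above.
# Assumption: the brain scan predicts your actual choice correctly with
# probability p.

def ev_one_box(p: float) -> float:
    # The opaque box is filled exactly when the scan predicted one-boxing.
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    # You always keep the $1,000; the million shows up only if the scan got you wrong.
    return (1 - p) * 1_000_000 + 1_000

for p in (0.5, 0.6, 0.9, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))
# One-boxing pulls ahead as soon as p > 0.5005, so the scan only has to beat
# a coin toss (barely) for the "risky" single box to be the better bet.
```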
@walkerszczecina2804
@walkerszczecina2804 5 жыл бұрын
You're saying, assuming the brain scan didn't happen, you still would choose only box B? That doesn't make any sense. That would be a 50/50 chance of a million, but choosing both is a 50/50 chance of a million plus a guaranteed $1,000.
@r.b.4611
@r.b.4611 7 жыл бұрын
Assume the woman is lying, take both boxes, also beat her up and search the tent for money after checking both boxes.
@HebaruSan
@HebaruSan 7 жыл бұрын
Agree. There's no rational reason to put a million bucks in the box, ever.
@jasondads9509
@jasondads9509 7 жыл бұрын
"mad" scientist