"If your preferences aren't transitive, you can get stuck in a loop" Yuuuup, that's my life right there
@ChazAllenUK (4 years ago)
"If your preferences aren't transitive, you can get stuck in a loop" "If your preferences aren't transitive, you can get stuck in a loop" "If your preferences aren't transitive, you can get stuck in a loop" Oh dear!
@armorsmith43 (4 years ago)
Falnésio Borges yes. People are naturally irrational.
@MatildaHinanawi (3 years ago)
This is actually interesting, because in my experience, if you do these sorts of "what's your favourite X" tests you end up with intransitive results; at least the first time you do it, your results include a LOT of inconsistencies, because certain pairs were juxtaposed in such a way as to create "intransitive impressions", leaving lots of items wildly misordered on the final list. I would go so far as to say that a lot of human preferences (many of them more subtle) are NOT coherent!
@paradoxica424 (2 years ago)
Another thing is the evolved behaviour of novelty seeking… which definitely implies non-transitive preferences.
@cheshire1 (2 years ago)
@@paradoxica424 Novelty seeking doesn't imply intransitivity. It just means that you value a world state where you know about the thing more than one where you don't. Humans do have very inconsistent preferences though.
@MaximAnnen-j1b (1 year ago)
@@cheshire1 Do you value a world where you know everything more than the world where you know nothing? In the former you cannot do novelty seeking.
@cheshire1 (1 year ago)
@@MaximAnnen-j1b The difference is novelty seeking as a terminal goal versus as an instrumental goal. I personally value it terminally. I also value knowledge itself, and knowledge is instrumentally incredibly useful, so I would still choose being omniscient over knowing nothing. I don't think I'll have to worry about running out of things to learn, though.
@cheshire1 (1 year ago)
@@MaximAnnen-j1b Even valuing novelty seeking terminally does not imply circular preferences. It just means there's a term in the utility function for the experience of learning something.
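A minimal sketch of that idea (the weights and fields are invented for illustration): the utility is an ordinary sum of terms, one of which rewards the experience of learning, so novelty seeking needs no circularity.

```python
# Sketch: novelty seeking as an ordinary term in a utility function.
# All weights and state fields here are hypothetical.
def utility(world_state):
    u = 0.0
    u += 1.0 * world_state["comfort"]          # terminal value: comfort
    u += 2.0 * world_state["things_learned"]   # terminal value: the experience of learning
    return u

# Learning something new raises utility, but the ranking of world
# states is still a single consistent ordering, not a cycle.
print(utility({"comfort": 5, "things_learned": 3}))  # 11.0
print(utility({"comfort": 5, "things_learned": 4}))  # 13.0
```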
@IIIIIawesIIIII (4 years ago)
Human utility functions are dynamically reference-frame- and time-dependent. Humans are composed of countless antagonistic agents. It might well be that a general AI could have analogues to "moods" and "subjectivity" as well, to moderate its own inner goal conflicts, especially in cases where the AI's model of the world is using complementary information from its situation in time.
@EvertvanBrussel (1 year ago)
I like this idea. I'd love to see a video that goes deeper into this and explores what kind of consequences such an implementation could have, both positive and negative. @RobertMilesAI, did you already make a video about this? If not, would you be willing to make one?
@Paul-iw9mb (7 years ago)
5:47 "Human intelligence is kind of badly implemented". Damn, God, you had one job!
@RobertMilesAI (7 years ago)
To be fair he canonically had an entire universe to implement and a one week deadline
@ragnkja (7 years ago)
Can't an omnipotent being stretch deadlines?
@bytefu (7 years ago)
Nillie God may be omnipotent, but certainly isn't infinitely smart: not only was he unable to stretch the deadline, he actually forced it upon himself, despite having infinite resources and nobody above him in the hierarchy of power. No wonder we have inconsistencies if we were created in his image.
@DamianReloaded (7 years ago)
Judaism is very symbolic in how it expresses things. Also, the Book of Genesis was the last one to be written, probably around the time when the Library of Alexandria already existed, or when it had started to make sense for it to exist. In any case, it'd be hard to prove or disprove God's existence using a book as a premise. ^_^
@wolframstahl1263 (7 years ago)
It might be easier to prove or disprove if our goddamn intelligence was decently implemented.
@inyobill (5 years ago)
Wow, assembly-level systems engineer for 34 years and I'd never heard of HCF. _Always_ something new to learn.
@BrightBlackBanana (7 years ago)
this channel is gonna be great
@Chloexxx61 (7 years ago)
I believe it's already been great.
@wolframstahl1263 (7 years ago)
We are witnessing the birth of something amazing, and in a few years we can tell our children, our friends or our machine overlords that we were there from the beginning.
@cupcakearmy (7 years ago)
humbugtheman again ;P
@forthehunt3335 (7 years ago)
Let's make youtube great again!
@nand3kudasai (7 years ago)
lol i love the edits on this video
@MIbra96 (7 years ago)
Please do a video on AGI designs that aren't meant to behave like agents.
@michaelmoran9020 (3 years ago)
The point is that they basically have to be. If the mathematics of agents can't apply, then it's incoherent.
@99temporal (3 years ago)
If they make decisions based on inputs and act upon those decisions, they're agents... and that's it.
@esquilax5563 (3 years ago)
@@michaelmoran9020 looks like you didn't read the text at 06:35
@michaelmoran9020 (3 years ago)
@@esquilax5563 I stand by what I said regardless. Even if something isn't designed explicitly as an agent, I would be extremely surprised if you couldn't model it as one.
@SangoProductions213 (5 years ago)
I mean...nontransitive preferences seem exceedingly common for people to have.
@raizin4908 (5 years ago)
I believe it's not so much that the preferences are non-transitive, but that we are optimizing for multiple variables, and we can't optimize for all of them at the same time. For example, our preference between eating and sleeping changes throughout the day depending on how hungry and sleepy we are feeling at any point in time. We might prefer talking with friends when we're low on social interaction, or watching a video on AI safety when we're low on intellectual stimulation. But although our preferences shift around based on all kinds of want-and-need-based variables, at any single point in time there is an order of transitive preferences for that particular moment.
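A toy illustration of that point (all numbers are invented): the utility depends on internal variables like hunger and tiredness, so the ranking of activities shifts over the day, yet at any fixed moment it is still a single transitive ordering.

```python
# Toy model: preferences over activities depend on internal needs.
# Need levels and the benefit table are invented for illustration.
def utility(activity, needs):
    benefit = {"eat": {"hunger": 1.0},
               "sleep": {"tiredness": 1.0},
               "socialise": {"loneliness": 1.0}}
    return sum(needs[k] * w for k, w in benefit[activity].items())

morning = {"hunger": 0.9, "tiredness": 0.2, "loneliness": 0.5}
evening = {"hunger": 0.3, "tiredness": 0.8, "loneliness": 0.5}

for needs in (morning, evening):
    ranking = sorted(["eat", "sleep", "socialise"],
                     key=lambda a: utility(a, needs), reverse=True)
    print(ranking)  # different order at different times, but transitive each time
```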
@nowheremap (5 years ago)
I don't believe that's possible; we're just bad at examining our own preferences, which also change all the time.
@shadowsfromolliesgraveyard6577 (5 years ago)
@@nowheremap There are rational cases where preferences are nontransitive, like rock-paper-scissors, Penney's game, or Efron's dice
@randomnobody660 (5 years ago)
@@shadowsfromolliesgraveyard6577 I don't think that makes sense. Assuming you are talking about the fact that one option wins against another, you are completely missing the point. Say you are playing rock-paper-scissors with me over something you really care about: you prefer winning to not winning. That means you are indifferent between winning with rock, winning with paper, and winning with scissors, all of which you prefer to tying with rock, etc. Out of the 9 possible world states between the two of us, you prefer 3 to 3 others, and those to the remaining 3. Within each set of 3 you are indifferent. Your preference is clearly transitive. If that's not what you meant, well, what did you mean?
@shadowsfromolliesgraveyard6577 (5 years ago)
@@randomnobody660 Okay, consider 3 guys: a rich, plain, dumb guy vs a poor, hot, 100-IQ guy vs a middle-class, ugly genius. Weighing all measures equally, someone's preferences would go Guy 1 < Guy 2 < Guy 3 < Guy 1, because each guy beats the previous one on 2 of the 3 measures. That's a rational non-transitive preference.
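That comparison rule can be checked directly. A small sketch (the scores are invented) showing that "prefer whoever wins on more attributes" produces a cycle even though each attribute on its own is perfectly ordered:

```python
# Pairwise "wins on more attributes" comparison, with invented scores
# for wealth, looks and smarts. Each guy beats the previous one 2-1.
guys = {
    "Guy 1": (3, 2, 1),   # rich, plain, dumb
    "Guy 2": (1, 3, 2),   # poor, hot, average
    "Guy 3": (2, 1, 3),   # middle-class, ugly, genius
}

def beats(a, b):
    wins = sum(x > y for x, y in zip(guys[a], guys[b]))
    return wins > len(guys[a]) / 2

print(beats("Guy 2", "Guy 1"))  # True
print(beats("Guy 3", "Guy 2"))  # True
print(beats("Guy 1", "Guy 3"))  # True -> a preference cycle
```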
@mybigbeak (7 years ago)
I would rather be out with friends than practice juggling. But I can't go out with my friends as I have work to do. But work is boring so I might just take 5 minutes to practice juggling. whoops I just spent all day juggling.
@RanEncounter (6 years ago)
I would have to disagree with this video about humans making stupid decisions. Humans making bad decisions is evolutionarily compulsory; otherwise we would not take large risks and do stupid things to achieve a goal. Sometimes a thing that looks stupid on paper is actually a genius approach (even if we don't know why it is genius), although most of the time it is not. If we do not have people who take these risks, we will progress a lot more slowly or even come to a halt. It is a necessary thing for learning and making breakthroughs.
@himselfe (7 years ago)
Preferences represent desire, intelligence is what we use to reconcile those desires with reality. I would imagine you already know this, but I think it's an important distinction to make when discussing utility functions. In terms of quantifying preferences, I think your trilemma of possibilities falls short. A > B OR B > A are a given, but A ~ B implies that you don't care whether A or B is the outcome. In reality there is a fourth possibility: You want both A and B to happen, and there we enter the realm of cognitive dissonance. If you've ever seen Star Trek: Voyager, think of the episode where the doctor descends into insanity because he could not reconcile with his choice to save one patient over the other. I think that episode demonstrates the problem of trying to resolve cognitive dissonance through purely logical functions quite well. I would argue it's a mistake to treat intelligence as purely a logical function, logic forms only a fraction of what constitutes intelligence.
@Malevolent_bacon (4 years ago)
At 4 minutes in you say you can't have A>B, B>A, and A~B, but that's the problem: we think that way. For example, I like pepperoni pizza and sausage pizza about the same, with little to no preference beyond a coin flip, but I may also be deciding based on outcomes for others, such as a friend who gets heartburn from pepperoni rather than sausage. It seems like the fourth state, which you say can't happen, should be programmed in, to allow for the case where A and B are so close that the AI should move on and continue evaluating based on a separate reward function.
@fleecemaster (7 years ago)
The only thing that comes to mind is when I really want both A and B to happen. I'm not indifferent, because I score both as 10. Then I tend to do whatever is possible to make both happen. Maybe in a logical case, you are forced to pick 1 option and just have to decide which you want more. But in real world cases you tend to try and change all the other parameters you can to make both possible.
@RobertMilesAI (7 years ago)
Bear in mind that A and B are *world* states, so they include all the other parameters as well. There's no such thing as "A and B both happen" - each one is an entire world state, so the thing you're thinking of where you get the stuff in A and also the stuff in B, is really its own world state, call it C. You're indifferent between A and B but you prefer C to either of them
@fleecemaster (7 years ago)
Gotcha! Wow! Thanks a lot for the reply! Really love the channel! :)
@maxsperling4862 (5 years ago)
I think that your statement at 5:01, "If you have an ordering of world states, there will exist a set of numbers for each world state that corresponds to that ordering", isn't quite accurate. I don't necessarily think this detracts from the main point you're arguing, but I think it's worth mentioning. If the number of world states is countable, then your claim is totally fine. If not, though, it isn't necessarily true. Not all uncountable totally ordered sets correspond to subsets of the real numbers. If the set of world states is (best modeled as) something with cardinality larger than that of the real numbers, then it'd be impossible for it to correspond to a subset of the reals. I think this is probably fine though? I'm certainly not an expert, but I would assume that any actual AGI would have preferences that can be modeled countably, due to physical limitations. In that case, the ordering certainly corresponds to a subset of the real numbers. Also, from watching your other videos, most of the properties of the utility function that you talk about, and that are relevant to the problems you discuss, seem like purely order-theoretic properties. They don't really seem to rely on the values the utility function takes actually being "numbers" to begin with, so long as there's a total order on them. Still, though (in my non-expert opinion), it seems probably fine to think intuitively about utility functions in terms of real numbers, especially in an informal context like here on YouTube. Of course, that's just a vague judgment call, so I'm sure someone can poke some holes in it.
@flamingspinach (4 years ago)
I had the same thought. I'd like to point out, though, that you don't need the set of world states to be cardinally larger than the real numbers for there to be an ordering on world states that can't be embedded in the real numbers. If your preference order is order-isomorphic to the long line, i.e. ω₁ × [0,1), then it would have the same cardinality as the real numbers but would not be possible to embed into them, because the image of ω₁ × {0}, which is an uncountable well-ordered set, would need to be an uncountable well-ordered subset of ℝ, and no such subset exists. (I think the Axiom of Choice is needed to show this.)
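For reference, a sketch of the fact used here, that ℝ has no uncountable well-ordered subset; the rationals can be picked canonically from a fixed enumeration of ℚ, so this particular step doesn't actually need the Axiom of Choice.

```latex
% Claim: every W \subseteq \mathbb{R} that is well-ordered by < is countable.
\begin{aligned}
&\text{For each non-maximal } w \in W,\ \text{let } s(w) = \min\{x \in W : x > w\}. \\
&\text{Pick } q(w) \in \mathbb{Q} \cap (w, s(w)) \text{, e.g. the first such rational in a fixed enumeration of } \mathbb{Q}. \\
&\text{For } w < w' \text{ we have } s(w) \le w', \text{ so the intervals } (w, s(w)) \text{ are pairwise disjoint and } q(w) \ne q(w'). \\
&\text{Hence } q \text{ is an injection of } W \text{ (minus its maximum, if any) into } \mathbb{Q},\ \text{so } W \text{ is countable.}
\end{aligned}
```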
@MetsuryuVids (7 years ago)
I didn't know you had a channel. I always like your appearances on Computerphile.
@antipoti (5 years ago)
I absolutely love your subtle humor hidden all over your serious arguments! :D
@yazicib1 (7 years ago)
"Badly implemented" is a strong phrase. The erratic and random behaviour of humans has the benefit of not getting stuck in local optima. That is being human. AGI should definitely implement such things, which would change its utility functions, similar to DNA replication errors resulting in the amazing diversity of life. Errors and randomness are definitely essential in AGI.
@rwired (7 years ago)
5:40 "... human beings do not reliably behave as though they have consistent preferences but that's just because human intelligence is kind of badly implemented our inconsistencies don't make us better people it's not some magic key to our humanity or secret to our effectiveness or whatever ..." -- but it may be the key to solving the control problem!
@nowheremap (5 years ago)
Artificial Stupidity? Neat!
@naturegirl1999 (4 years ago)
The control problem? Question: has anyone coded an AI with no specific objective? Humans aren't born knowing anything about the world; we change our minds about what we want very frequently. Humans don't have set reward functions, so why can't we program AIs that can also learn and find their own reward functions?
@circuit10 (4 years ago)
@@naturegirl1999 This whole video is about why we do have a reward function even if we don't realise it. Think of the reward function as happiness. If nothing made you feel anything and you had no instincts, you wouldn't do anything. Similarly, the AI would do nothing or do random things. Machine learning basically works by generating a random program, seeing how well it does a task, and if it does well, tweaking that program a bit more, etc. If you have no reward function, you're just generating random useless programs.
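A stripped-down sketch of that loop (the task and the scoring are made up): without the reward line there is nothing to compare candidates by, and the search degenerates into generating random junk.

```python
import random

# Hypothetical task: find a list of 8 numbers whose sum is close to 42.
def reward(candidate):
    return -abs(sum(candidate) - 42)   # higher is better

best = [random.uniform(0, 10) for _ in range(8)]
for _ in range(2000):
    tweak = [x + random.gauss(0, 0.5) for x in best]   # small random change
    if reward(tweak) > reward(best):                   # keep it only if the reward improves
        best = tweak

print(round(sum(best), 2))  # ends up near 42 only because a reward guided the search
```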
@hexzyle (4 years ago)
The bad implementations result in things like prejudice, lying to ourselves about what we want while we act inconsistently (cognitive dissonance allows people to do terrible things while telling themselves they're not at fault), confirmation bias, etc. It's not stupidity, it's ignorance. It wouldn't solve the problem; it would just mean the AI intentionally doesn't care about our values instead of just accidentally not caring.
@soranuareane (7 years ago)
Thank you for citing the lovely HCF instruction. I believe most modern processors have analogues. For Intel, it's to disable the thermal interrupt system and go into a SHA-256 loop. For SPARC it's probably to break a glass window. For AMD it's likely something to do with writing primes to SCR. I believe Atmel has something?
@levipoon5684 (7 years ago)
The indifference relation needs to be transitive for a utility function to exist. But suppose we have a sequence of very similar world states where, for any two consecutive states, the difference is not detectable by the agent concerned, so neither is preferred. If we consider the first and last states in this sequence, the differences may accumulate to a detectable level, and then there can be a preferred state. Wouldn't that mean that this indifference relation isn't really transitive?
@lukegodfrey1103 (5 years ago)
It would still be transitive; it would just imply the agent has limited accuracy when evaluating its utility function.
@cheshire1 (2 years ago)
The strict comparison > is transitive as long as you can't find states A, B, C where A > B and B > C but not A > C, and you can't in your example. There would be states where A ~ B and B ~ C but not A ~ C. That means the equivalence relation of being indifferent is not transitive, but it doesn't need to be.
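A tiny illustration of that distinction (the threshold value is invented): if the agent only notices utility gaps above some threshold, "indifferent" is intransitive, while strict, detectable preference stays transitive.

```python
# Just-noticeable-difference model: gaps of 0.5 or less are undetectable.
EPS = 0.5
u = {"A": 1.0, "B": 1.3, "C": 1.6}

def prefers(x, y):      # strict, detectable preference
    return u[x] - u[y] > EPS

def indifferent(x, y):  # difference too small to notice
    return abs(u[x] - u[y]) <= EPS

print(indifferent("A", "B"), indifferent("B", "C"))  # True True
print(indifferent("A", "C"))                         # False: indifference is not transitive
print(prefers("C", "A"))                             # True: strict preference still behaves
```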
@tylerowens (7 years ago)
I think intransitive preferences could come into play when the agent doesn't have all options available at any given time. The loop only occurs if I have the option to move between the three cities at will. I can't think of any concrete examples right now, but I can entertain the possibility that you might make different choices between three things if only given pairs of the things. In that case, you'd have to alter your utility function to be able to take in not only the world state but also the decision space.
@SlackwareNVM (7 years ago)
The editing is pure gold! Really well explained video on top of that. Great job!
@BrightBlackBanana (7 years ago)
Human inconsistencies might not always be counter-productive. E.g. self-deception can be evolutionarily beneficial in some cases, as it allows humans to better deceive others or to find motivation more easily, and these could lead to inconsistencies in behaviour. Source: a university lecture I fell asleep in once.
@bytefu (7 years ago)
If you compare simply being better at deceiving others with doing the same but with self-deception as a bonus, you can easily see that the latter is clearly counter-productive. Also, evolution is not a synonym for improvement. Our lives would certainly improve if we had consistent preferences, though, because consistency leads to easier optimization of the activities we need to perform. It would be much harder, if not impossible, to write an optimizing compiler for a programming language if CPU instructions executed in varying numbers of clock cycles; programs would be slow and have unpredictable lags.
@DamianReloaded (7 years ago)
I kind of agree. Bad decisions could be a feature rather than the result of a negligent implementation: they could play the role of "bias" or "noise" that helps with climbing out of local minima/maxima. Once in a while you get randomly seeded, and instead of doing what you always do, you do something silly or genius.
@victorlevoso8984 (7 years ago)
The problem is that:
1. Not all flaws in human reasoning are actually useful. Evolution can't look ahead and didn't have that much time (comparatively) to improve the new cognitive features in humans, so you are going to find really negligent implementation, like you can find in a lot of other things evolution did; evolution can easily get stuck in local maxima.
2. A lot of flaws in human reasoning actually were useful... in the ancestral environment. Now a lot of things have changed and those flaws aren't useful anymore.
3. No, doing random things isn't very useful; there are a lot more ways of failing than succeeding, and most flaws of human reasoning don't help people climb out of local maxima.
@DamianReloaded (7 years ago)
Humans aren't logical "by nature". Teaching humans to think logically is very difficult and quite expensive (energy-wise), and it doesn't have a great rate of success. Thus, I believe, it'd be very rare for a human being to arrive at a solution mainly by adhering to logical reasoning; we would most likely resort to brute-forcing the problem until we stepped on a solution. Acting "randomly" could help shorten the path to the next optimal solution. It'd be a lottery, but even a lottery has better odds than trying every possible step in order when your health bar is ticking down. ^_^
@victorlevoso8984 (7 years ago)
Yes, thinking using System 2 is costly; to think carefully about things you need time and energy, and you can't always do that. But that doesn't mean people solve problems by brute force; generally problems are too complex for that. When we don't have time to think carefully we rely on intuitions, which isn't irrational or random: intuitions are perfectly valid algorithms (worse, but cheaper, and mostly relying on previous experience) that our brain executes without us being consciously aware of it. People don't try completely incoherent and random actions; what seems random is actually something we subconsciously decided could have good results. If it were a lottery every time we chose something without thinking, we wouldn't survive everyday life. It's possible there is some randomness involved, but we certainly aren't randomly trying all possibilities (if we were, we wouldn't be able to play Go, let alone solve complex "real life" problems), and I don't know what trying every possible step "in order" would even mean in real-life situations. Evolution does try things randomly, and that's why it spends millions of years making little improvements, but humans aren't doing that.
Returning to the original topic: what Rob said was bad mind design is that humans don't have preferences coherent with a utility function, which doesn't have anything to do with how we determine which actions will have more utility, but with the fact that we don't actually have coherent preferences. Also, even if some apparent bugs in human minds are actually features, some of those features are completely outdated and others are clearly bad design. For example, the fact that I need all my concentration and mental energy to multiply two-digit numbers: I can do complex image processing subconsciously but can't multiply big numbers, and that's while consciously spending time and a lot of energy; unconsciously, people can only barely multiply. On top of this, our intuitions stop working on exponential functions.
@Niohimself (7 years ago)
Now hold on; You assume that any sensible AI would pick an ideal world state and go to it straight like an arrow. That's a bit of "ends justify means" reasoning. What if we come from a different direction: by saying that in certain situations, some action is better than another, regardless of the big picture? I.e. no matter what world state we end up in, we MUST behave a certain way. I believe that placing safe behavior above absolute rationality and efficient goal-directed planning results not in the most optimal possible AI, but in one that we, as humans, can more easily cooperate with.
@JonnyRobbie (7 years ago)
Ok, I love it, but there are a few technical notes I'd like to mention. First, you have to play around with your camera a bit and set some kind of AE lock; those exposure/white-balance cuts were pretty glaring. That brings us to my second issue: jump cuts. Please don't put a cut behind every two words. Try to use jump cuts as little as possible. Let the information you just presented breathe a little. If you feel like you mispronounced or stuttered, don't cover it up with jump cuts. Try to do your editing in larger passages and just reshoot those as a whole if you've made a mistake. Thanks for the series.
@RobertMilesAI (7 years ago)
Yeah. I figured since I was using the same camera in the same place with the same lights, I didn't need to do white balance again, but some light must have got in from outside, since I was shooting at a different time of day. White balance every time, I guess is the lesson. The jump cuts are a stylistic choice mostly, I'm going for a vlogbrothers-style thing, but obviously those guys are actually good at it, which makes a big difference. You're right though, some of the particularly ugly cuts are there because I didn't want to reshoot. Often I wanted to leave a little more space but hadn't left enough time after a take before looking away at my notes or whatever. Before starting this channel I'd never owned a DSLR or used any video editing software. It's no fun doing all your learning and mistakes in front of thousands of people, so thank you for being kind and constructive with your feedback :)
@phaionix8752 (7 years ago)
I thought the jump cuts were fine. Feels just like the other vloggers' videos do.
@sallerc (7 years ago)
I would say it depends on the content, in this kind of videos it's good to have some time in between to actually grasp what you are saying.
@p0t4t0nastick (7 years ago)
this channel's gonna blow up, at least among its "science" peer channels. let's take the ride from this channel's humble beginnings to some epic output in the near distant future, hurray!
@NathanTAK (7 years ago)
I'll put on Ride of the Bitshift Varialkyries.
@RobertHildebrandt (7 years ago)
Maybe humans have multiple utility functions, whose importance changes depending on whether they are satisfied? For example, if you don't have anything to drink, the "drink something" utility function increases in importance. If the goal "having enough water in your body" is not reached, it may start to outweigh the importance of other utility functions after a while, for example "behave in a way other people won't throw you out of the tribe for". Then you might stop caring what others think and just drink out of the flower pot. After you have drunk, the importance of the "drink enough" utility function is reduced and social norms start to outweigh it again. Is there something I am missing about utility functions?
@BrosOfBros (7 years ago)
So, if I understood Rob's video correctly, you should not think of a utility function as a motivation or goal. You should think of it as a function that takes in an entire state and gives that state a numerical rating. So in your example, you talk about a person with two motivational pressures, "drinking enough" and "adhering to social norms". These motivations do not have their own individual utility functions. Instead, there is one utility function that takes in these two aspects of the state as parameters and returns some rating. Your example can be represented like this:
Scenario A: utilityFunction(low water, adhering to social norms) = -1000
Scenario B: utilityFunction(sufficient water, not adhering to social norms) = -200
Scenario C: utilityFunction(sufficient water, adhering to social norms) = 100
There are not separate utility functions so much as motivations that affect the utility function's result. Some of those motivations (such as having enough to drink) may have more of an impact on the result than other motivations (such as adhering to social norms). For that reason, a human would want to ensure they have sufficient water before worrying about adhering to social norms.
@bytefu (7 years ago)
Then there is a higher order utility function, so to speak, which incorporates those functions along with rules of assigning weights to them.
@sonaxaton (7 years ago)
Like Artem said, you could have smaller utility functions that determine the value of specific goals, and the overall utility function combines them in some way to produce a utility for the entire world.
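A small sketch of that structure (the weights and fields are invented): each "motivation" is just a sub-score, and one overall utility function combines them, so hydration can dominate social norms without there being two competing utility functions.

```python
# One utility function built from weighted sub-scores. Numbers are invented.
def utility(state):
    hydration_term = 10.0 * state["hydration"]      # dominates when hydration is low
    social_term    =  2.0 * state["social_norms"]   # matters, but less
    return hydration_term + social_term

print(utility({"hydration": 0.1, "social_norms": 1.0}))  # 3.0  (parched but polite)
print(utility({"hydration": 1.0, "social_norms": 0.0}))  # 10.0 (drank from the flower pot)
print(utility({"hydration": 1.0, "social_norms": 1.0}))  # 12.0 (best of both)
```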
@gastonsalgado7611 (7 years ago)
Hmmm, I don't completely buy that preferences must be transitive. For example, let's say I have 6 different objectives in mind, and between two different worlds I prefer the one that beats the other on more objectives. This sounds like a valid comparison method to me... but this method isn't always transitive, as with non-transitive dice, for example. Does anyone have a solution for this? Isn't my world comparator a valid one?
@cheshire1 (2 years ago)
@@gastonsalgado7611 Let's say someone presents you with dice A, B, C with the sides (2,4,9), (1,6,8) and (3,5,7) respectively and allows you to pick one. They guarantee to grant as many of your objectives as the number on the die shows. Let's say you pick C at first. These dice are intransitive, which means B will beat C _more than half_ the time. The other person offers to let you switch to B if you pay a penny. Surely switching to the better die is worth a penny? But wait, once you hold B, A beats B more than half the time; and then C beats A. Would you really go around in circles, paying a penny each time until you're out of money? Of course not. This illustrates that 'beats the other more than half the time' is not a smart decision criterion. A better one is to look at the _expected value_ of each die. Every one of them has an expected value of 5, so you would be indifferent about which to pick. Expected value is always transitive, so no one could pump money out of you using a similar trick. A superintelligent AI would not let you pump money out of it.
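Those numbers can be checked directly; a quick sketch that computes both the pairwise win rates and the expected values for the dice described above:

```python
from itertools import product

dice = {"A": (2, 4, 9), "B": (1, 6, 8), "C": (3, 5, 7)}

def win_rate(x, y):
    rolls = list(product(dice[x], dice[y]))
    return sum(a > b for a, b in rolls) / len(rolls)

# Each is 5/9 ~ 0.56: A beats B, B beats C, C beats A -> a cycle.
print(win_rate("A", "B"), win_rate("B", "C"), win_rate("C", "A"))
# Expected values are all 5.0, so by that criterion you're indifferent.
print({d: sum(v) / len(v) for d, v in dice.items()})
```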
@wesr9258 (1 month ago)
I will note that assigning everything a single real number doesn't account for everything, simply because the cardinality of the real numbers is likely smaller than the cardinality of the set of different states of the world (though for each set of world states you are indifferent between, you only need to choose one); and even if they have the same cardinality, you still might be unable to do it. For example, let's say each possible state of the world is a 2-dimensional point with coordinates x and y, and you prefer (a, b) to (c, d) iff a > c OR (a = c AND b > d). The proof is pretty tricky, but even though the cardinality of the set of all pairs of real numbers is equal to that of the set of all real numbers, there is no way to make each coordinate point correspond to a real value f(a, b) such that f(a, b) > f(c, d) in all cases where (a, b) is preferable to (c, d).
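For the record, a compact version of that tricky proof, written under the assumption that such an f existed:

```latex
% Suppose f : \mathbb{R}^2 \to \mathbb{R} represents the lexicographic order.
% For every a, (a,0) \prec (a,1), so f(a,0) < f(a,1); pick a rational
% q(a) with f(a,0) < q(a) < f(a,1). Then
\begin{aligned}
a < a' \;&\Longrightarrow\; (a,1) \prec (a',0)
       \;\Longrightarrow\; f(a,1) < f(a',0) \\
       &\Longrightarrow\; q(a) < f(a,1) < f(a',0) < q(a'),
\end{aligned}
% so a \mapsto q(a) injects the uncountable set of first coordinates into
% \mathbb{Q}, a contradiction. Hence no such f exists.
```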
@Omega0202 (4 years ago)
Transitivity can fail if you compare on multiple values. I have 3 board games but only 1 hour of spare time:
A is great but I'm sure I WON'T finish it: (10, 0)
B is good but I'm NOT SURE if I'll finish it: (5, 5)
C is meh but I'm sure I WILL finish it: (1, 10)
My decision rule is: if either has an unsure finish (0 < y < 10), pick the best (higher x). Otherwise, if one is a sure finish (y == 10), pick it. Otherwise, pick the likeliest (better x). The result is: A > B, B > C, C > A.
@doublebrass (7 years ago)
you did a great job of explaining the arguments in the comments section without being condescending, way more than most channels can say. love your content!
@gabrielc8399 (5 years ago)
Wait a sec. A well-ordered set cannot always be embedded into ℝ (for example if we order P(ℝ), though I'm not sure that's possible). In this case that's not important, because we can consider only a finite set of world states.
@6iaZkMagW7EFs (7 years ago)
Sometimes people have intransitive preferences, but the loops are usually too large for the person to notice, like the way M.C. Escher draws a staircase that connects to itself. For example, you can have a series of objects with desirable characteristics. If each object has a single characteristic that is much better than the previous object's, while every other characteristic is diminished only slightly, the person will choose it.
Object 1: 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
Object 2: 2.0, 0.8, 0.8, 0.8, 0.8, 0.8
Object 3: 1.8, 1.8, 0.6, 0.6, 0.6, 0.6
Object 4: 1.6, 1.6, 1.6, 0.4, 0.4, 0.4
Object 5: 1.4, 1.4, 1.4, 1.4, 0.2, 0.2
Object 6: 1.2, 1.2, 1.2, 1.2, 1.2, 0.0
Object 1: 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
Each object has one characteristic that is boosted greatly, while the others are reduced slightly. If the person tends to focus on the single biggest gain, they will move forward in the loop. If the person focuses on the number of characteristics that improve, rather than the greatest magnitude of change, they will move backward.
@zaq1320 (5 years ago)
I'd argue that the incoherency of human preferences is actually an extremely valuable part of our intelligence and behaviour, and it's hard to show that directly because it's hard to talk about our preferences accurately... because they're incoherent. It is, however, possible to talk about current AI safety research concretely, and I think that serves as an excellent kind of "proof by contradiction". How would humans behave if we did have coherent preferences? AI safety research shows: probably really, really badly. Worse than we already behave. As a side note, I'm pretty sure our incoherency of preference comes down to three things: the incomparability of world states, i.e. maybe I feel like Cairo, Beijing and Amsterdam all have unique and irreducible advantages, and if I went to one I would regret not going to the other two (importantly, I'm not ambivalent; I'd point to the common phrase "it's apples and oranges" as evidence of this); the malleability of preferences, i.e. going to Cairo changes how much I want to go to Amsterdam, so writing a linear list is pretty pointless; and errors, since often humans make decisions without realizing they're doing so.
@cogwheel42 (7 years ago)
2:45: I think a meaningful distinction can be made between indifference and ambivalence. If the individual preference magnitudes of A and B are large, that says something different than if they are small. If I am ambivalent about two issues and am faced with a situation where I need to manifest my preferences in some way, I am more likely to expend energy investigating, trying to tilt the balance in favor of one side or another than if I am genuinely indifferent. To put it another way, indifference says "either choice is fine" but ambivalence would mean "neither choice is fine"
@cheshire1 (2 years ago)
What would you do if given the choice between A and [do nothing] and you're ambivalent? [do nothing] is always an available policy option. If given the choice between A and B and you decide to [do nothing], that just means you have higher preference for [do nothing] than either A or B.
@insidetrip101 (7 years ago)
Interesting. What you say about "preferences" I think is actually true of humans. I know it's a cliché, but there's truth to the age-old "the grass is greener" phrase. It seems to me that one of the problems with AI is that maybe it requires something completely irrational underneath it. Hopefully that's the case, because I think general AI is absolutely terrifying. EDIT: I should have listened to the end of the video, because you basically say that you think our contradictory values and preferences lead to "bad decisions". I'm just unconvinced, namely because you have to buy into the utilitarian ethic (i.e. that the goal of ethics ought to be to achieve the greatest amount of pleasure for the greatest number of people), and I don't think that's true at all. I didn't want to bring up the idea that you dismiss earlier, namely that you "can" come up with a "number" for "preferable world states". But that's also problematic, because most of our literature, philosophy, and religion from all of history seems to indicate that there's something about suffering that makes life meaningful, i.e. that the greatest amount of pleasure for the greatest number of people (unlimited pleasure for all people) might not actually be the desirable world state. Intuitively this does seem to make sense to me. Do you actually think the world would be better if everyone had vapid lives like Kim Kardashian or Paris Hilton?
@mc4444 (7 years ago)
The existence of a utility function is just assumed at 5:21, but it isn't demonstrated like the things before it. Since the existence of such a function is part of a world state, I think there might be a way to construct a proof similar to the Halting problem that shows a utility function couldn't exist in every instance.
@wioetuw912 (5 years ago)
If each world state is associated with a single value, then by definition there is a function that maps the world states to those values. A function is just a set of ordered pairs (a, b) where the first element of the pair is different for each pair in the set. There doesn't necessarily need to be an explicit formula that you can use to calculate the value from the argument. The function itself would still exist.
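In code terms, the point is that a utility "function" can literally be a lookup table rather than a formula; a toy sketch with invented world states and values:

```python
# A utility function given purely as a set of (world_state, value) pairs.
# No formula is involved; states and values are invented for illustration.
utility = {
    "cup of tea on the desk": 10.0,
    "tea spilled on the laptop": -50.0,
    "no tea, intact laptop": 0.0,
}

best_state = max(utility, key=utility.get)
print(best_state)  # "cup of tea on the desk"
```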
@bntagkas (7 years ago)
I have come to the line of thought that the reason we have 'inconsistencies', what I would call being programmed to fail the script, would be specifically designed (on the speculative assumption that we are some sort of AI in a sim) exactly because it makes us interesting. Stuff like sneezing, getting anxious, fear, love, etc., in as much sense as I can make of it, makes us often choose the 'wrong' choice, assuming our subconscious mind could optimize for the 'right' answer every time, 'right' in my view being whatever best triggers the intake of dopamine and so on. But think about how we are slowly but surely becoming gods ourselves, meaning masters of our bodies and our environments, able to manipulate all kinds of the pixels we are made of to give results that are in accordance with our preferences (whether our preferences are preprogrammed or not is anyone's guess). If you extrapolate us to where we will be many years in the future, you might see, as I do, a being that is able to create worlds but has lost all the stuff that makes life fun and meaningful through constant optimization of intelligence, eventually rooting out the illogicalities and faults that were making the now-god so interesting. It's like you have beaten the game for the most part; the magic is gone. You kind of want to see how it can be interesting again, but are reluctant to actually reprogram the mistakes into yourself... Maybe you make a simulated universe that you can use for many things: an arms race to create another god, where if the sim god 'wins' the game of life he can be your friend (natural selection/evolution), and/or to observe, for your viewing pleasure, how interesting these creatures are as they make mistakes, kind of like how your own kind was before they solved virtually all the problems of life. But hey, what do I know, I'm just a crazy guy who likes to listen to philosophy and science while gaming...
@john.ellmaker (5 years ago)
The trouble starts with preferences changing over time: if I like a song and listen to it too much, I become indifferent to it or even dislike it. Just like if I eat too many oranges I won't want any citrus for a while, but starved of it I'll prefer it. Can anything truly be a coherent preference?
@bwill325 (7 years ago)
I think you gloss over the cost of consistency a bit too quickly. Human decision-making lacks consistency largely because we are lazy; in other words, the work required to maintain a consistent belief structure is too high. Requiring consistency will hinder efficiency, particularly as complexity grows.
@ZT1ST (4 years ago)
I don't know that I agree that lacking consistency is because we are lazy - lacking consistency could be an evolved trait given that consistency is more exploitable. Like in Hawks vs Doves - if both Doves in a contest just stand there and posture until the other one leaves, that's bad for both of them, even if not as bad as running away from a Hawk.
@TheDefterus (7 years ago)
I just love the quick cut to "completeness" after assuming preferences exist :D
@morkovija (7 years ago)
Jokes aside - this would be one of the very few channels I'd consider supporting through patreon and buying merch from. We need more of you Rob! You got us hooked!
@morkovija (7 years ago)
Yass! Got to the end of the video =)
@RavenAmetr (6 years ago)
Rob, I like your talks, even though I generally have a hard time agreeing with your reasoning. "Human intelligence is kind of badly implemented": oh, is it that simple? Let's just call "bad" everything that doesn't fit the theory (sarcasm). I see the fuzziness and ambiguity of humans' utility function as a direct consequence of the generality of human intelligence. As a machine learning expert, you are certainly aware of problems such as sparse rewards and the fact that intrinsic rewards are not scalable. When constructing the utility function you cannot ignore these problems, or you'll never achieve generality of search/optimization. Therefore, the utility function must be as far as possible from any concrete problem. In the human case, we call this function "curiosity". I think this should drastically change the agenda of AI safety.
@OnEiNsAnEmOtHeRfUcKa (5 years ago)
Human intelligence IS badly implemented. It has all sorts of ridiculous flaws that only really serve to let us screw ourselves over. While there do exist some flaws that are an inherent consequence of working generally with ambiguous, incomplete information, there's a much greater number of flaws that aren't. Evolution only ever really makes things "barely good enough", and our minds are no exception. Also, he covers the "curiosity" function with regard to AI in another video.
@Nulono (2 years ago)
I'd be really interested to see a video about non-agent AGIs.
@donnyaxe78 (7 years ago)
Why are there only three options (A > B, B > A, A ~ B)? What if both A and B are terrible options? You would not be ambivalent if both were terrible. Why is a fourth option not available, i.e. neither A nor B?
@MarceloCantosCoder (6 years ago)
The > < ~ question only talks to the relative preference of one over the other. In other words, if your choices were restricted to A or B, you have those three ways to make your choice. There isn't a fourth. You may in fact want neither A nor B, but there are still three choices. If you dislike A more, then A < B. If you dislike B more, then A > B. If you dislike them equally, then A ~ B.
@Fweekeh (7 years ago)
Fascinating series, I love it! One question I have: given that utility functions will require definitions, and given that the AI is capable of learning and will have some degree of malleability and self-malleability, maybe to maximise its reward it will start changing definitions, moving the goalposts so to speak (I'm sure there are many anthropomorphic comparisons with human intelligence I could make here, post-hoc rationalisation and so on: "I couldn't get that job, but you know what, I didn't want it anyway"). Maybe T-bot will decide that it is easier to achieve its tea goal by just redefining tea, or who the target is. I suppose this also brings up the issue of how its learning and changing may alter its initial utility functions (or are utility functions meant to be something 'hard-wired'?). Would be interested to hear your thoughts on this!
@juliusapriadi (1 year ago)
One possible answer is in another Rob video: apparently it depends on the type of agent whether it would manipulate its utility function or fight hard to preserve it. If it can rewrite itself, the most efficient thing to do is probably to set all world states to the same value and do nothing, idling indefinitely. I'm looking forward to Rob's videos about non-agent AGI, since there's probably an answer to such ideas there.
@valsyrie (6 years ago)
I agree with your arguments that any coherent *goal-directed* AI will have to have a utility function, or it will behave as if it has one. But it seems like this is still making a couple unwarranted assumptions. It seems like it could be possible (and maybe quite desirable) to build an AI that has *deontic* restrictions or biases on the kinds of actions it takes- that is, it would make decisions based at least partly on properties of the potential actions themselves, not just the world states that it predicts these actions will produce. It could have coherent, fully specified, transitive preferences over world states, but *also* have preferences over properties of its own potential actions, and some decision system that takes both kinds of preferences into account. Maybe these deontic restrictions could be defined in terms of subgoals. Maybe you could have some restriction like, "never act with the intention of killing anyone." This could be a good restriction for an early AI even if we can imagine situations where we might want an AI to kill someone (i.e. to defend other humans). Of course, if you define "world states" broadly enough, it could include the properties of the AI's actions as part of world states. But it seems like it could still be quite useful from both a conceptual and a technical perspective to maintain a distinction between world states and properties of actions, and have separate sets of preferences for each.
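A rough sketch of what such a two-layer decision rule could look like (the action names, the restriction flag and the numbers are all made up): actions that violate a deontic restriction are filtered out first, and utility over outcomes only ranks the remainder.

```python
# Deontic filter + utility maximisation. Everything here is illustrative.
actions = {
    # name: (violates_restriction, expected_outcome_utility)
    "negotiate":       (False, 6.0),
    "call_for_help":   (False, 4.0),
    "threaten_lethal": (True, 9.0),   # highest utility, but deontically forbidden
}

permitted = {a: u for a, (forbidden, u) in actions.items() if not forbidden}
choice = max(permitted, key=permitted.get)
print(choice)  # "negotiate": best outcome among the actions that pass the restriction
```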
@Devlin20102011 (5 years ago)
Hey, I'd just like to note your statement about how the world states have to be A>B, B>A or A~B. I don't think that really works, though; it's sort of assuming a two-valued logic, which isn't necessarily how humans think. For example, a reasonable human thought is "Is Jim bald? Well, he's not bald, but he's also not not bald." Or at a stop sign: "I should really drive through the intersection, but I should really wait for this person to drive through the intersection." We commonly hold minor contradictions within our logic that are important for how we view the world, which I feel would need to be handled outside of simply A>B, B>A, A~B.
@quickdudley (7 years ago)
You're assuming a utility function has to be a total ordering of world states: partial orderings are also consistent but are not amenable to being numbered.
@XxThunderflamexX (5 years ago)
What would it mean to have only partial ordering of world states? What makes up a 'group' of world states such that a designer would never want an intelligent actor to try to decide between members of different groups?
@ookazi1000 (5 years ago)
@2:47: DID for the win. We've got conflicting preferences all the time. It's a fun ride.
@erykfromm (7 years ago)
I would disagree that inconsistency in our utility function does not help us progress. You need some source of mutations to create variations, which are then tested for their fitness by natural selection.
@unholy1771 (5 years ago)
I'm sorry, but that's incredibly dumb
@LadyTink (4 years ago)
3:00 I kinda feel like option 3 is incomplete. Indifference implies a lack of concern or care; in reality, something like ambivalence is also possible. With our "hidden and fractured and conflicting" utility functions, we humans can often be unsure which world state we would prefer. I know that in AI we often gauge the confidence in an answer, so it's like ambivalence is saying you're confident in conflicting world states as being the answer, often to the exclusion of any middle-ground options.
@marvinvarela (7 years ago)
We need the utility function to be update-able at run-time.
@whatsinadeadname (5 years ago)
I know this is an old video, but there are two additional relations between A and B: ambivalence and undefined. Ambivalent means you aren't indifferent, but are simultaneously unwilling, or 'torn', about choosing between the two states. Undefined is akin to the evaluation null == null.
@cheshire1 (2 years ago)
What would you do if given the choice between A and [do nothing] and you're amibvalent between the two? And what would you do if you're 'undefined'?
@Verrisin (7 years ago)
2:40 ... What if I prefer A(T) > B(T) but also A(T+1) < B(T+1)? (And there is no way to turn A(T) into B(T+1).) Does a world state already include all of its future / causal consequences of picking that state?
@totoslehero6881 (4 years ago)
Normally you would just have a utility function for each time frame telling you how much you prefer things happening now versus in the future (i.e. the functions already take into account the fact that you have to wait for those future benefits, so the utility functions are comparable between time frames), and then maximise those functions along the possible chains of events that can be reached from any initial state.
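A minimal sketch of that setup (the rewards and the discount factor are invented): one utility per time step, discounted so that sooner and later benefits are comparable, evaluated along each chain of events reachable from the start.

```python
# Compare two possible chains of events by discounted utility. Numbers invented.
GAMMA = 0.9   # how much less a benefit one step later is worth

def discounted_utility(chain):
    return sum(GAMMA ** t * reward for t, reward in enumerate(chain))

chain_small_now = [5.0, 0.0, 0.0]   # small benefit immediately
chain_big_later = [0.0, 0.0, 8.0]   # bigger benefit after waiting

print(discounted_utility(chain_small_now))  # 5.0
print(discounted_utility(chain_big_later))  # 8 * 0.81 = 6.48, so waiting wins here
```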
@addymant (1 year ago)
Something worth noting is that even if we assume that all humans are perfectly rational and have transitive preferences, if you aggregate those preferences to create a societal ranking using pairwise majorities (i.e. a majority prefer A to B, so the group prefers A to B), it's been proven that you will sometimes get an intransitive ranking (see Condorcet, 1785). Obviously humans can't just vote on every possible world state, but ideally a superintelligence would be able to intuit very well what a majority of people prefer to happen.
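The classic three-voter example makes this concrete; a quick sketch using the standard textbook rankings:

```python
# Each voter ranks three options, best first: the standard Condorcet example.
voters = [("A", "B", "C"),
          ("B", "C", "A"),
          ("C", "A", "B")]

def majority_prefers(x, y):
    return sum(v.index(x) < v.index(y) for v in voters) > len(voters) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}:", majority_prefers(x, y))
# All three print True, so the group ranking is a cycle even though
# every individual voter's ranking is transitive.
```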
@SignumInterriti (1 year ago)
I get transitivity, but what about stability? My human preferences are changing constantly; which of three cities I would rather be in can change just with mood, or weather, or external input like watching a movie about one of them or talking to someone who was there. I do consider this instability of preferences, in other words the ability to change my mind, an important feature of my humanity.
@levipoon5684 (7 years ago)
If humans don't have a well-defined utility function because of their inconsistent preferences, is it possible that the most desirable AGI is one that doesn't have consistent preferences?
@darkknight6391 (7 years ago)
Would you please enable the subtitle creation option?
@RobertMilesAI (7 years ago)
I didn't know that was an option I had to enable. Thanks!
@arthurguerra3832 (7 years ago)
I wrote Portuguese subtitles for this one. The option is not enabled for the other videos, though.
@vanderkarl3927 (3 years ago)
R.I.P. community contributions 😔
@darkknight6391 (3 years ago)
@@vanderkarl3927 It's something that takes a lot of time, unfortunately
@gsainsbury86 (6 years ago)
Could you not treat changing preferences as a result of the agent itself being part of the world state? E.g. if you are thirsty, you would like a drink; if you are not thirsty, having had a drink has lower utility. It seems to me that if you have had a change in preferences, that must have had some reason, even if you don't consciously understand it?
@firefoxmetzger9063 (7 years ago)
I agree with the method of using utility functions for AI. However, there is one flaw, at least in the way it is presented here. The utility function only cares about what the states of the two worlds are and which one is preferred, but the transition between those states is equally important. Take for example the trolley problem in philosophy, or any other moral thought experiment; with this audience I'd probably use the autonomous Mercedes that saves passengers over bystanders... I think a utility function should be locally defined (i.e. how good is it to get into state B from A using X) rather than globally (how good is B). This will sometimes cause inconsistent, "poorly designed" human behavior, but will overall produce (I think) more intelligent behaviour. There is a lot more I would want to say, but it's just a YouTube comment so I'll keep it short and end here.
@turkey343434 (4 years ago)
Robert, we also know that there are certain preference relations which are "rational", in other words complete and transitive, but which are provably not representable with a utility function. What if our preferences are more like these? An example is a lexicographic preference relation that is "rational" and yet cannot be represented by a utility function: let A = [0, 1]×[0, 1] and define lexicographic preferences ≿ over A such that for all x, y ∈ A, x ≿ y iff x₁ > y₁, or both x₁ = y₁ and x₂ ≥ y₂.
@asailijhijr (5 years ago)
In many ways, it seems that the act of raising a child (or interacting with young people in a way that is intended to be beneficial) is comparable to refining their utility function. They have some ideas about the world and when they act on them, you police their bad decisions (or behaviours associated with bad decisions) in order for them to improve their ideas about the world and act on them.
@JulianDanzerHAL9001 (3 years ago)
5:45 Also, a function can be rather complicated and time-dependent, etc.
@sonaxaton (7 years ago)
I think I don't have a problem with defining a utility function as you said and assigning numbers to world states. I think the problem is more that it's hard for a human to design a utility function that matches what they want. Isn't that what the utility function is really meant to be, after all? It's supposed to represent what the designer of the AI wants, so that the AI does what the designer wants. So the difficulty is for the designer to make a utility function that matches their own human "utility function". Like in the stamp collector AI: the designer didn't account for all these other possible world states because they were bad at defining their own utility function. So I think you could actually boil it down to the problem of defining what a human's utility function is. Or, more generally, what human society's utility function is.
@Xartab (7 years ago)
As he said in one Computerphile video, writing the utility function from scratch would basically mean solving ethics. Good luck with that (not ironic, I'm trying too. Good luck to us all).
@sallerc (7 years ago)
That's indeed a challenge. I listened to a Sam Harris podcast today where they talked about this (among other things). Very interesting. The episode is called: Waking Up With Sam Harris #84 - Landscapes of Mind (with Kevin Kelly)
@platinummyrr (2 years ago)
One interesting mathematics/philosophy problem is that it might be possible to have an uncountable (unlistable) number of world states, and therefore *not* be able to provide a one-to-one mapping between world states and natural numbers.
@mafuaqua (7 years ago)
Is transitivity really valid in all circumstances? Consider chess players: If A beats B and B beats C, it does not mean that A beats C.
@outaspaceman (7 years ago)
I remember when 'Fuzzy Logic' was a thing..
@robertglass1698 (5 years ago)
On preferences. I either prefer A over B, B over A or I'm indifferent. But what if A is chocolate ice cream and B is vanilla ice cream and my preference depends on which I had last because what I prefer more than A or B is variety?
@indigo-lily (7 years ago)
Damn, that is a sweet Earth
@RobertMilesAI (7 years ago)
FINALLY. Three months I've been waiting for somebody to notice that reference
@fleecemaster (7 years ago)
Haha! Maybe if there was lots of explosions on it? :)
@SecularMentat (6 years ago)
"Rawund"
@PopeGoliath (6 years ago)
Just watched the video for the first time. Caught it instantly. I'm just late to the party. ;)
@giulianobernardi4500 (7 years ago)
But Rob, if I may say so, your videos are great!
@aforcemorepowerful (1 year ago)
Did you already make the video about non-agent approaches? Or do I not understand the subject well enough?
@tim57243 (7 years ago)
You left out the part about reasoning about probabilities. If A has utility U(A) and B has utility U(B), and we define L to be a lottery that yields A with probability a and B with probability (1-a), then we should have U(L) = a * U(A) + (1-a) * U(B). I do not know how to prove this would have to hold for any rational agent, but it is familiar enough that I would expect to be able to look it up.
@drdca8263 (7 years ago)
I thought you were going to go through the vNM utility function proof. Notably, just going by "there are some reals with the same ordering" doesn't really justify maximizing the expected value. Which, of course, you know. But I feel like talking about preferences between "lotteries" is pretty important to explain?
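For anyone who wants to look it up, this is the von Neumann–Morgenstern utility theorem; the usual informal statement over lotteries is roughly:

```latex
\textbf{vNM theorem (informal).} If $\succsim$ over lotteries is
(1) complete, (2) transitive,
(3) continuous: $A \succsim B \succsim C \Rightarrow \exists\, p\in[0,1]:\ pA+(1-p)C \sim B$, and
(4) independent: $A \succsim B \Rightarrow pA+(1-p)C \succsim pB+(1-p)C$ for all $p\in(0,1]$ and all $C$,
then there is a $u$ with $A \succsim B \iff \mathbb{E}[u(A)] \ge \mathbb{E}[u(B)]$,
and $u$ is unique up to positive affine transformations $u' = a\,u + b$, $a>0$.
```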
@sekoia2812 (1 year ago)
I swear I recognize some of the tunes you put at the end of your videos. This one seems to be the same as "Goodbye Mr. A" by The Hoosiers. Did you just pick it out of a list or..? How did you get that song?
@terdragontra8900 (5 years ago)
With finitely many coherent preferences you can map the preferences to a subset of the real numbers so that "is preferred to" maps to "is greater than", but with an infinite number of preferences this isn't necessarily true (of course, this is arguably not of much practical significance).
@visualdragon (4 years ago)
April 2020. Something went horribly wrong with the 3 Earth function and now we have Murder Hornets. Thanks Rob.
@billykotsos4642 (7 years ago)
Amazing insight as always, delivered in a fun way. Keep it up, Rob!!
@quenchize (6 years ago)
I think transitivity is a flawed assumption. It assumes that the set of possible world states is "well ordered", and it most probably is not. If you take a single parameter, say how much money I have, then my list of world states considering only that is well ordered. With a more complex utility function, however, it is easily possible for the set not to be well ordered. The best you can get is that I can rate all world states as better than, worse than, or indifferent to my current state. This is because our information about any future state is limited, and so our decisions are boundedly rational. For example, we rate losing something we have as worse than not having it in the first place. Say state A is that there are hungry people on the streets, state B is that they are all provided for and there is no hunger, and state C is that you have a million pounds. A person in state A may well say "I prefer B to C". Now they get a million pounds and are in state C. If it costs a million pounds to solve the hunger problem, does this person still prefer state B? Some would, many would not. If you drop the assumption that the set is well ordered, that could lead the way to a safer utility function. If your agent's utility function is to achieve a Pareto-optimal state over all the other humans' and agents' utility functions, where does that break?
@olicairns8971 (7 years ago)
Continuity is also important for utility representations of preferences. Suppose I prefer having more money to less money, but prefer having more free time to less free time irrespective of the amount of money I have. In this case you cannot represent my preferences with numbers even if they are complete and transitive.
@benaloney7 жыл бұрын
This channel makes my utility function higher
@andrewdunbar8285 жыл бұрын
The second basic assumption is wrong. Preferences are not transitive even if we like to think they are, much like we think every room is straight and square until we try to wallpaper or tile it. Preferences are actually like Rock Paper Scissors, because each thing has multiple attributes that come into play when making comparisons, and the sets of strengths and weaknesses are not always orthogonal across all things, even cities.
@TheOneMaddin5 жыл бұрын
I am a mathematician, so let me be nitpicky: there are total orders that cannot be embedded into the reals. Here is one set of consistent preferences that cannot be represented by a utility function: let's say there are two liquids (honey and water), and you can have any positive (real) amount of them. You always prefer having more honey; if you can have more honey, you would trade any amount of water for that. However, among worlds with the same amount of honey, you prefer the one with the most water. This set of preferences has no utility function. The reason is set-theoretic in nature.
@ryanhastings71515 жыл бұрын
The model you're talking about is known in standard economic theory, and is referred to as a lexicographic preference ordering (en.wikipedia.org/wiki/Lexicographic_preferences). There are many ways to address this, including extending the domain of your utility function to include infinitesimals (generally by using the hyperreal numbers), multiplying the utility you assign to honey by an arbitrarily huge factor (once this factor becomes large enough, e.g. it exceeds the number of moles of honey obtainable in the observable universe, this approach becomes functionally indistinguishable from a true lexicographic ordering), etc. More to the point, however, human values don't appear to be lexicographic--it's not the case that you would trade e.g. arbitrarily large amounts of aesthetic satisfaction for an arbitrarily small increase in sexual pleasure. So it's questionable in the first place how relevant these kinds of preference orderings are to the study of constructing artificial agents that attempt to preserve our values.
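To make the honey-and-water example concrete, here is a small sketch of a lexicographic comparison next to the "huge finite factor" approximation mentioned in the reply above. The amounts and the 1e12 weight are made up for illustration; the point is that any finite weight eventually disagrees with the true lexicographic ordering:

```python
# Lexicographic preference: more honey always wins; water only breaks ties.
# Quantities and the weight below are illustrative, not from the comments above.

def prefers_lex(world_1, world_2):
    """world = (honey, water). True if world_1 is strictly preferred lexicographically."""
    honey_1, water_1 = world_1
    honey_2, water_2 = world_2
    if honey_1 != honey_2:
        return honey_1 > honey_2   # honey dominates completely
    return water_1 > water_2       # water only matters when honey is tied

def weighted_utility(world, honey_weight=1e12):
    """Approximation: a single real-valued utility with a huge finite weight on honey."""
    honey, water = world
    return honey_weight * honey + water

# The approximation agrees on modest quantities of water...
print(prefers_lex((2, 0), (1, 1e6)), weighted_utility((2, 0)) > weighted_utility((1, 1e6)))   # True True
# ...but disagrees once the water difference outweighs the finite factor:
print(prefers_lex((2, 0), (1, 1e13)), weighted_utility((2, 0)) > weighted_utility((1, 1e13)))  # True False
```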
@mrdrsir37814 жыл бұрын
I feel like a lot of activities get worse if repeated. There's like a rate of decay in the utility of actions, and that's why humans have varied lives and most of us don't just do one thing over and over.
@simonscience58465 жыл бұрын
Actually, there is a fourth option for the preferability of world states: I don't know. It's possible to not know what you want, or how you want the world to be. Say you have never tasted a tomato, and I ask you to rate a world state where you eat a tomato; the only right answer is "I don't know". Edit: Made this comment in the middle of the video, and I kind of want to expand on it: this fourth "I don't know" option kind of destroys the world-state thesis, because it's actually really hard for us to rate some world states as better than others. Not because of any conflicting logic, but because of missing information. Not ignorance, mind you: mostly because it might be hard for you to make an informed decision about a world state without already being in that world state. Often we use assumptions about the missing information to judge a world state, and when that makeshift guesswork collides with the rigor of something like an AGI, well, shyte hits the fan. Mind you, I don't think utility functions are stupid, they are very useful, but there is still a case to be made for why my needs are complex and nuanced.
@Jaqen-HGhar7 жыл бұрын
Great video, I probably like hearing you talk about this kind of stuff more than anyone else on YouTube. As soon as I get settled I'm going to start hitting up your Patreon. Right now all I can do is watch them as a Red subscriber (at least I hope that sends some amount your way) and tell everyone I can that this is where it's at. Keep it up and remember, though it's tempting, quality over quantity.
@davidturner98273 жыл бұрын
6:03 "inconsistency... is just making us make bad decisions". That's a dangerous line of thought from an AI safety perspective! I thought we wanted to align AI to human preferences, and there's plenty of evidence (e.g. Tversky 1969) to suggest that even transitivity isn't guaranteed to be a feature of such an alignment. And of course, utility monsters are all too easy to create...
@leftaroundabout7 жыл бұрын
“...there will exist a set of numbers for each world-state that correspond to that ordering. Perhaps you could just take them all in order and give each one a number according to where it falls in the ordering...” - no, you couldn't, because there are exponentially many states (exponential in the number of quantities you consider about the world).
@gabrielkwiecinskiantunes89507 жыл бұрын
Man, this is good stuff. Thanks, Robert!
@Unbathed7 жыл бұрын
How do intransitive entities such as Grime Dice (singingbanana.com/dice/article.htm) fit in? The example of the preference cycle {Amsterdam, Beijing, Cairo} appears to echo the preference cycle in Grime Dice, but the explanation "human reason is ill-designed" does not appear to map onto the design of Grime Dice.
@RobertMilesAI7 жыл бұрын
In Grime Dice, the intransitivity applies to strategies, not preferences. If you have a coherent preference ordering over dice colours, you can pick the colour of die you prefer and all is well. You'll lose against an opponent who understands the dice, but your preferences are over dice colours not over winning and losing. You lose but you don't care; you got to pick the blue die so you're happy. On the other hand, if you want to win the game, there's nothing incoherent about picking whichever die is most likely to win. Your preferences are over winning and losing not over dice, so the intransitive property of the dice isn't an issue. This is a "human intelligence is badly implemented" thing, just because we assume in this example that strategies must be transitive, but in fact rock-paper-scissors type situations are common.
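For anyone who wants to see that distinction concretely, here is a sketch with a standard nontransitive trio of dice. These face values are illustrative, not the actual Grime dice numbers; the point is that the "beats" relation between dice can cycle even though a player's preference ordering (winning over losing) is perfectly transitive:

```python
from itertools import product

# A well-known nontransitive dice set (illustrative values, not the Grime dice faces).
dice = {
    "A": [2, 2, 4, 4, 9, 9],
    "B": [1, 1, 6, 6, 8, 8],
    "C": [3, 3, 5, 5, 7, 7],
}

def win_probability(die_x, die_y):
    """Probability that a roll of die_x is strictly higher than a roll of die_y."""
    wins = sum(1 for x, y in product(die_x, die_y) if x > y)
    return wins / (len(die_x) * len(die_y))

for first, second in [("A", "B"), ("B", "C"), ("C", "A")]:
    p = win_probability(dice[first], dice[second])
    print(f"{first} beats {second} with probability {p:.3f}")  # 0.556 each time: a cycle
```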
@gastonsalgado76117 жыл бұрын
But couldn't you apply this to a comparator function? For example, having 6 different preferences, and choosing the world that beats the other on more of these. Isn't this a valid comparator? If it is, could it be exploited to solve some AI problems, like changing the utility comparator? And if it's not, why?
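A short answer to the question above, as far as I can tell: a "beats the other world on more of the criteria" comparator is perfectly well defined, but it is not guaranteed to be transitive. This is the Condorcet paradox from voting theory, so it does not give an agent a coherent preference ordering to maximize. A minimal sketch with three made-up criteria (the idea is the same with six) showing the cycle:

```python
# Sketch: a "majority of criteria" comparator can be intransitive (a Condorcet cycle).
# Scores are invented for illustration; higher score = more preferred under that criterion.
worlds = {
    "X": {"crit1": 3, "crit2": 1, "crit3": 2},
    "Y": {"crit1": 2, "crit2": 3, "crit3": 1},
    "Z": {"crit1": 1, "crit2": 2, "crit3": 3},
}

def majority_prefers(w1, w2):
    """True if w1 beats w2 on more criteria than it loses on."""
    wins = sum(worlds[w1][c] > worlds[w2][c] for c in worlds[w1])
    losses = sum(worlds[w1][c] < worlds[w2][c] for c in worlds[w1])
    return wins > losses

print(majority_prefers("X", "Y"))  # True (wins on crit1 and crit3)
print(majority_prefers("Y", "Z"))  # True (wins on crit1 and crit2)
print(majority_prefers("Z", "X"))  # True (wins on crit2 and crit3)
# X > Y > Z > X: the comparator cycles, so it cannot come from any utility function.
```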
@TheDGomezzi7 жыл бұрын
Interesting that your new Computerphile video is all about AIs that are unsure about their utility functions.
@codyniederer47567 жыл бұрын
This editing is gold.
@KaiHenningsen4 жыл бұрын
I wouldn't be so quick to assert that the bugs in our intelligence implementation aren't what causes us to be (halfway) safe to be around. (Apart from the latter half being unproven.) Though I suspect goal confusion is more important than a well-functioning utility function - not that these two aren't connected. However, I suspect that from our basic makeup, we only have short-term goals. Long-term goals are probably learned. I suspect we have short-term final goals and long-term instrumental goals, which is a rather paradoxical situation, and I suspect that's relevant to how society works.
@DrFlashburn3 жыл бұрын
Great video. You say our inconsistencies don't make us better people at around minute six. Actually, neuroticism, which can be similar to what you're talking about, has been found in certain situations to make people more successful. And if you squint, inconsistencies in preferences can be thought of as a machine learning algorithm that doesn't get stuck in a local maximum or minimum, like some technique for better exploration of the design space. Just a thought.
@petersmythe6462 Жыл бұрын
Keep in mind also that as soon as you introduce uncertainty, the numbers, and not just their order, start to become meaningful. If you would rather have a good cup of tea than a bad cup of tea, and rather have a bad cup of tea than be stung by millions of wasps, that says nothing about how large the relative difference between each of these states is. What does show the relative difference is how you rank uncertain combinations of states. Like, if you're offered a 10% chance of being stung by millions of wasps and a 90% chance of having a good cup of tea, how high does the wasp chance need to be before you decide you should take the bad cup of tea instead?
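A back-of-the-envelope version of that question, with utility numbers invented purely for illustration:

```python
# Where is the indifference point between "bad tea for sure" and
# "good tea with probability (1 - p), stung by millions of wasps with probability p"?
# All three utility values are made up for the sake of the example.
U_good_tea = 10.0
U_bad_tea = 9.0
U_wasps = -1000.0

# Indifference: (1 - p) * U_good_tea + p * U_wasps = U_bad_tea, solved for p:
p = (U_good_tea - U_bad_tea) / (U_good_tea - U_wasps)
print(p)  # ~0.001: with these numbers, even a ~0.1% wasp risk favours the guaranteed bad tea
```

The ordering good tea > bad tea > wasps stays the same whatever numbers you pick, but the indifference probability depends entirely on the relative gaps, which is exactly the comment's point.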
@Verrisin7 жыл бұрын
How is time related to world states? I can easily have a loop where A(t) > B(t) & B(t+1) > A(t+1) & A(t+2) > B(t+2) ... Now which do I prefer? ...
@chair5475 жыл бұрын
Cats sure have intransitive preferences when they want to go outside
@Falcondances7 жыл бұрын
I died laughing at the "Halt and Catch Fire" screenshot
@y__h7 жыл бұрын
6:59 Obligatory example of Irrational Behaviour 😂
@sarajohnsson4979 Жыл бұрын
But being able to find *a* utility function that matches an agent's preferences doesn't mean that utility function will be much use when trying to model that agent's behaviour in the presence of uncertainty, right? If we assume a utility maximizer, sure, ranking outcomes is enough. But surely there'd be a limit to how competent a pure utility maximizer can get in a non-deterministic environment (and our world doesn't appear to be deterministic). On the other hand, if we're instead assuming an *expected* utility maximizer, a ranking of outcomes is no longer enough, because knowing an agent thinks A > B > C, you can't tell whether it would choose "Guaranteed B" over "50/50 chance of A or C".
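A minimal sketch of that last point, with made-up numbers: two utility functions that agree on the ordering A > B > C but disagree about the lottery:

```python
# Same ordering A > B > C, different behaviour under uncertainty.
# All utility values are invented for illustration.
U1 = {"A": 3.0, "B": 2.5, "C": 1.0}
U2 = {"A": 100.0, "B": 2.5, "C": 1.0}

def prefers_lottery(U):
    """Does an expected-utility maximiser with utilities U take 50/50 A-or-C over a sure B?"""
    expected_lottery = 0.5 * U["A"] + 0.5 * U["C"]
    return expected_lottery > U["B"]

print(prefers_lottery(U1))  # False: takes the guaranteed B
print(prefers_lottery(U2))  # True: takes the 50/50 gamble on A
```

So to model behaviour over lotteries you need the actual numbers (up to a positive affine transformation), not just the ranking.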
@atimholt4 жыл бұрын
I’d say preference is more multi-dimensional. It’s entirely possible that the human brain evolved incoherent preferences because coherent ones are physically impossible to reconcile in some concrete sense. I saw a silly music video on PBS (I think?) once. It was meant to teach about percentages, but it did so by having a boyfriend explain to his girlfriend why she only got “four percent of [his] love”. The whole song was him divvying his love up amongst family, friends, etc. But something like love isn’t really amenable to divvying or ranking. We’re *hoping* that that’s possible, but it requires creating something that isn’t at all like human intelligence from the get-go, meaning our assumptions about how it *can* work might be something evolution steered past eons ago. But that’s just what AI research *is*, isn’t it? Are terminal goals something that can ever be rigorously described and abided by, or is there a deeper subtlety to the math that forbids it? Is that even a practical concern, or can we avoid the paperclip apocalypse with what *is* possible and at least *some amount* more rigorous than evolved systems?