
Defining Harm for AI Systems - Computerphile

34,652 views

Computerphile

A year ago

How do we measure harm to improve the performance of AI in the real world? Dr Hana Chockler is a Reader in Computer Science at King’s College London.
EXTRA BITS: Defining H...
Links from Hana:
title: A Causal Analysis of Harm. authors: Sander Beckers, Hana Chockler, Joe Halpern. conference: NeurIPS'22. link: proceedings.neurips.cc//paper...
title: Quantifying Harm. authors: Sander Beckers, Hana Chockler, Joe Halpern. conference: IJCAI'23. link: arxiv.org/abs/2209.15111
/ computerphile
/ computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments: 156
@EDoyl A year ago
Some very old philosophical questions have sat with no answer or multiple contentious answers for a long time, and now the computers need solid explicit answers to all of them. That's quite a daunting problem.
@weksauce 11 months ago
False. Computers don't need any of these questions answered. Nor do humans.
@boldCactuslad 10 months ago
@@weksauce Enjoy your default ending to the AI apocalypse, friend, because that's all you can hope for without answers.
@benjaminclehmann A year ago
Worth noting that defining harmful actions as those which decrease someone's utility is a utilitarian idea. Utilitarian ethics (where what is moral is determined only by how it impacts some goal, such as a utility function) is very useful, but it regularly contravenes human morality. Utilitarianism leads to an idea of morality that can be reasoned about much more readily (which is why economics originated as an offshoot of utilitarianism), but it usually also leads to a morality that we would object to. Think of all the supervillains who assume the ends justify the means. This isn't a criticism; utilitarianism is very useful, and there's a reason it's the most realistic way we can rigorously define harm without relying on human judgement. It's just that utilitarianism can be very easily misused; the history of science can be ugly, and often utilitarianism is a prerequisite for those ugly deeds. As a note, Dr. Chockler talks about a simple change in utility until she gets into her probabilistic example, but in general the utility function of some moral philosophy can be a lot more complicated: it can be a (potentially probabilistic) social preference function that considers multiple people and some notion of equity.
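To make the utility-difference idea in this comment concrete, here is a minimal sketch. The zero floor on harm, the utility values, and the choice of default are illustrative assumptions, not the formal definition from the linked Beckers, Chockler & Halpern papers:

```python
# Hypothetical sketch: harm as a shortfall in utility against a default
# outcome. Values and the default choice are assumptions for illustration,
# not the formalism of the papers linked in the description.

def harm(utility_actual: float, utility_default: float) -> float:
    """Harm is how far the actual outcome falls below the default;
    outcomes at or above the default count as zero harm."""
    return max(0.0, utility_default - utility_actual)

def expected_harm(outcomes: list[tuple[float, float]],
                  utility_default: float) -> float:
    """Probabilistic extension: outcomes is a list of
    (probability, utility) pairs."""
    return sum(p * harm(u, utility_default) for p, u in outcomes)

# Doctor's-dilemma flavour: surgery cures (utility 1.0) with p = 0.9 but
# fails badly (utility 0.0) with p = 0.1; the default is staying on
# medication at utility 0.5.
print(expected_harm([(0.9, 1.0), (0.1, 0.0)], utility_default=0.5))  # 0.05
```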
@thuokagiri5550 A year ago
Philosophers touch anything and suddenly it turns into this convoluted deep dark rabbit hole. And I love it
@ahmadsalama6447 A year ago
Ikr, not just in computer systems, everything man
@odorlessflavorless A year ago
and make anything deranged? 😂
@raffriff42 A year ago
It’s great when philosophers debate while millions die.
@austinbutts3000 10 months ago
The medical device industry in the US has largely addressed this problem of harm. The FDA mandates that the manufacturer disclose information about the design to them in proportion to the amount of harm the device could cause. This especially applies to software. Good luck getting that approach through to the autonomous driving industry in this environment.
@behemoth9543 A year ago
If AI is ever introduced into societies on a global scale, it will very likely be another area where US, and to a lesser extent European, customs and social structures become an inherent part of a technology and drive their cultural dominance. It's truly fascinating how the internet has already led to a major cultural homogenization of English-speaking people across the world, and "soft power" is a huge driver of geopolitical reality as well. The example of tips is a great one for this rift too. A waiter expecting a tip in this way is going far beyond anything that could be considered reasonable in most of the world and would probably cause a lot of customers to never visit that restaurant again if he voiced that displeasure to them.
@Norsilca 9 months ago
Or said another way, a restaurant not paying a living wage to its employees!
@SecularMentat A year ago
It seems to me 'measuring' any of the systems that lead to the definition of harm is a difficult task. Granted machine learning can fudge a lot of it. It would have to know what the preferred state of an agent would be first. That alone is a huge definitional issue I'd imagine.
@pleasedontwatchthese9593 A year ago
I agree; what counts as harm is an opinion that will need to be learned
@SecularMentat A year ago
​@@pleasedontwatchthese9593 I think it'd have to be an individual target for each person that the machine knows. But maybe have a 'baseline' for 'average human'. But then, if you let an agent work on those assumptions, it seems like, to maximize its utility function to minimize harm, the machine would by default never take action. Because all actions seem to carry some possibility of harm.
@bengoodwin2141 A year ago
These are all things that humans do already, unconsciously
@SecularMentat A year ago
@@bengoodwin2141 yup. We're evolved for it for sure. Machines will take a bit of coaxing to get it right. Heck humans sometimes aren't great at threat perception. We range from jumping at shadows to wanting to pet the fluffy bear.
@Insaniaq A year ago
I love the edit at 2:21 where Bob watches a Computerphile video, got me cracked up 😂
@bornach A year ago
Poor Bob. He was about to get to the best bit of that video just when the car crash happens
@aarocka11 A year ago
I initially read that as haram. Lol
@saamboziam5955 A year ago
🤣🤣🤣🤣
@uropig A year ago
💀
@MartinMaat A year ago
Basically the same. We want the least haramful outcome at all times.
@WilkinsonX A year ago
I read Harambe 🦍💔
@aarocka11 A year ago
@@WilkinsonX dicks out for harambe 😭🦍❤️
@ungodly_athorist A year ago
Was Goofy harmed by being called Pluto?
@Veptis A month ago
A rarely discussed topic is how this moral compass of values changes globally by culture. Some prefer themselves, some prefer others, some prefer wealth and social status, others the many, etc. You can optimize a dilemma decision machine for the geographical region with the target of causing the least societal outcry. Or just get rid of cars.
@MarkusSimpson A year ago
I love Dr Chockler's chilled demeanour, definitely one of my favourite teachers 🤓
@chanm01 A year ago
This is all interesting from an academic POV, but if we're actually gonna do anything with these definitions and criteria, I think you probably need to talk to one of the law professors. Sure, AI presents a bunch of novel fact patterns, but I somehow doubt that the suits which arise are going to be heard as if no prior case law exists.
@paulbennett1349 A year ago
With the doctor's dilemma, maintenance of the current level of the condition is not a harm of zero. Sure, people get used to being in pain their entire lives, but I don't think any of them would consider it to be a static level of harm. The lack of hope of improvement is what drives many to suicide. So calculation of harm is only as robust as our understanding of all the variables. Since most people seek to investigate to the first point of exhaustion (where is my brain happy to stop) rather than the last (can I demonstrate that any other factors must be insignificant), I can see some rather large consequences.
@doubleru A year ago
In the first example, why is Bob suing his own car's manufacturer, rather than whoever was responsible for creating a hazard in the first place by stopping their car in the middle of traffic that was so intense that there is literally no way for Bob's car to come to a halt in time to avoid a crash? Because as the video itself points out, we need to trace the causality in order to measure harm, and the main cause of the crash was the hazard on the road, not how Bob's car reacted to it.
@supermax64 A year ago
From his point of view the car chose to throw itself in the fence. Also he's more likely to get a million dollar payout from the manufacturer than a random person. I'm sure some people would or will try to sue unless it's explicitly ruled that the manufacturer is never responsible (which would be surprising, at least at the start).
@mpouhahahha A year ago
I fell asleep and it's still 11am🤤
@samuelthecamel A year ago
The problem is that harm is completely subjective, despite how much we would like to think that it's objective.
@charlesrussell6183 9 months ago
great look at the big picture
@salvosuper A year ago
The one thing harming the waiter is the unethical work culture
@brookrichardson1373 A year ago
Why do all of these AI driving scenarios always involve vehicles without working brakes?
@klutterkicker A year ago
So imagine that you're at a time before you get into this scenario, when you have the option of 1.) driving fast, and 0.5% of the time you get into a deadly scenario, or 2.) driving slow, and 0.1% of the time you get into a deadly scenario. We're kind of back at the doctor's dilemma with medicine vs surgery, but is driving slow actually a harm? And what if instead of poring over all of these decisions we used that development time to improve our traffic prediction, and we could avoid 20% of possible deadly scenarios... would that have a chance to replace more sophisticated last-resort decision-making?
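For what it's worth, a quick expected-harm calculation with this comment's numbers; the unit harm per death and the inconvenience cost assigned to driving slow are made-up weights for illustration:

```python
# Expected-harm comparison for the two policies in the comment above.
# p_deadly values come from the comment; harm_per_death = 1.0 and the
# inconvenience cost of driving slow are assumed weights.

def expected_harm(p_deadly: float, harm_per_death: float = 1.0,
                  extra_cost: float = 0.0) -> float:
    return p_deadly * harm_per_death + extra_cost

fast = expected_harm(0.005)                    # 0.0050
slow = expected_harm(0.001, extra_cost=0.002)  # 0.0030, if slowness itself costs 0.002
# Better traffic prediction that avoids 20% of deadly scenarios scales
# the risk term of either policy by 0.8:
fast_predicted = expected_harm(0.005 * 0.8)    # 0.0040
print(fast, slow, fast_predicted)
```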
@vadrif-draco A year ago
Well said. The example in the video just forced us into situation 1.) and then told us to deal with it, without considering how the situation itself could've been avoided.
@user-sl6gn1ss8p A year ago
@@vadrif-draco I think that's a common problem with utilitarianism: it usually doesn't challenge the reasons for things
@don_marcel A year ago
I need more Dr. Chockler explanations! Undercover hilarious wit
@weksauce 11 months ago
What's the "default" is the wrong question. Harm is irrelevant. Everything should make the choice with the best expected benefit minus cost. The real questions are how much agents should value other agents' expected benefit minus cost, how much agents should be expected to spend acquiring information to make their expectations more accurate, and how finite agents ought to approach expected values of options that have very small probabilities (Pascal's Muggings and such).
@nunyobiznez875 A year ago
10:36 The standard tipping rate in the US is actually 15%. Though, some like to give more, and I think some people just find it easier to calculate 20% in their head.
@bornach A year ago
At a restaurant I remember being offered a choice at the bottom of the bill: 15%, 20%, 25%. Cannot recall if this was in CA or TX. Apparently there are regional differences.
@BTheBlindRef 7 months ago
Yes, 15-18% is "service was decent, as expected". 20%+ is "wow, the service was great or went above and beyond". Especially where I live, where all service workers are guaranteed full minimum wages before tips already. I might consider a higher tip rate reasonable in some other places in the US where tipped service workers are allowed to be paid under standard minimum wage with the expectation that tips more than make up the difference.
@kuronosan A year ago
If there is harm to the waiter not getting a tip, the waiter is being harmed by the restaurant owner, not the customer.
@kalizec A year ago
This is exactly what I wanted to add here as well. The example of not tipping is so extremely poorly chosen that the entire video suffers from it. The example misses the entire point of determining cause, only to try and calculate harm on a non-causal factor. The restaurant at least has a contract with the waiter. The customer definitely does not have a contract with the waiter. It is possible that terms and conditions apply to the customer visiting the restaurant, but I've yet to see or hear of a single restaurant going after a customer for violating those terms by not tipping the waiter enough, so that's clearly not a thing. P.S. People who argue that tipping is a social contract can easily be countered with the following argument: society itself is not honouring a social contract that people, waiters included, deserve a decent wage. So, if social contracts are to be considered binding, then the harm is still not perpetrated by the customer but by the society.
@rauljvila A year ago
I find the tip scenario perfect for illustrating the problem with this approach: all the philosophical issues are hidden under the rug of the "default value". Many people won't agree that a 20% tip is the default value when there is no law forcing you to tip. EDIT: In fairness, she acknowledges this point at the end of the Extra Bits video:
> in the example of the hospital and the organ harvesting the default might be the treatment that is expected in our current norms. But you are absolutely right, I mean this all definitely involves discussion about societal norms right.
@MrRedstoner A year ago
@@kalizec And really, the answer is that the US would need to fix their laws; otherwise whoever is making wage decisions would be harming stakeholders in the restaurant, and on the chain goes.
@cwtrain A year ago
Fuggin' thank you! Defining the system inside of exploitive capitalist constructs made me sick.
@pleasedontwatchthese9593 A year ago
​@kalizec I think you're reading way too much into it. It's a contrived example. What if the waiter is the owner? Why does the restaurant only take cash tips? Etc. I mean, none of that matters; they just wanted to show how to work out more and less harm, not try and fix capitalism, lol
@salat A year ago
16:20 Solving moral dilemmas by weighting _everything_? How? E.g. should a high price&protection car preferably crash into "weaker" cars because that guarantees a higher probability that its passengers survive, while the passengers of the weaker car won't? Should it always crash into other price&protection cars? Who would want to buy such a crash magnet, and how much would it cost to insure? We've had this discussion before here on the channel, right?
@eidane1 A year ago
I think the problem is explaining harm to an AI when the people trying to explain it do not understand it themselves...
@zzzaphod8507 A year ago
Why isn't the option of the car going more slowly and stopping before hitting the obstacle considered as an option?!
@rosameltrozo5889 A year ago
You're missing the point
@phizc A year ago
​@@rosameltrozo5889 Not really. At least not in terms of the lawsuit. For the car, the obvious option is to just injure the driver instead of killing him. But for the purpose of the lawsuit, the situation didn't pop into existence when the car decided to swerve into the guard rail. Why didn't the car notice the stationary car? A corner? Why did it go so fast around the corner that it wouldn't have time to avoid the stationary car?
@rosameltrozo5889 A year ago
@@phizc You're missing the point
@phizc A year ago
@@rosameltrozo5889 explain.
@rosameltrozo5889 A year ago
@@phizc It's not about the technical details, it's a thought experiment to show the difficulties of making AI "understand" what humans understand more or less intuitively, such as harm
@vermeul1 A year ago
Obviously the AI is not driving according to “expecting the unexpected”
@arsilvyfish11 A year ago
A great video covering the need-of-the-hour stuff for AI!
@jsherborne92 A year ago
I feel for Bob
@Raspredval1337 A year ago
BUT there's another fitness function: expenses. Imagine you're an autonomous car manufacturer, and your car has decided to crash into a safety fence. Now the passenger is alive, injured, and going to sue somebody, just because they're mad. And there's an option to passively crash into another car instead, leaving us with no angry passengers who would try to sue anybody. And it's even somewhat cheaper to make an AI which doesn't care. Makes you think, doesn't it
@AcornElectron A year ago
Rob looks different somehow.
@sabrinazwolf A year ago
I think that's Goofy, not Pluto.
@juliusapriadi A year ago
and next, the car factors in the likely penalties for either its owner or manufacturer to make its decision. For example, diplomats are not held liable legally, so the diplomat's car might opt for killing some kids if that meant a better outcome for its diplomat passenger. I'd expect a system designed in favor of the manufacturer, not the passenger, as long as there's no regulation telling manufacturers to prioritize otherwise. Another thought: all those theoretical concepts are beautiful in their logic, but decisions of politicians and managers are not logical, and often irrational. So I find it difficult to predict the adoption of AI based on whether we'll solve harm or AI safety - it's very possible that the (rarely expert) people in charge will simply "press the button" and see what happens.
@RAFMnBgaming A year ago
A diplomat, however, is more at risk of scrutiny over their actions and of losing their job over an incident.
@muche6321 A year ago
I believe most politicians and managers are rational. It's just that they optimize for other values than you'd want them to. E.g. politicians optimize for staying in power/getting re-elected. Sometimes that means improving the lives of all people within an area; sometimes it means improving the lives of a select group of people at the expense of other groups in the area and ignoring the opinion of those other groups through gerrymandering, populism, etc.
@eljuano28 A year ago
So, is anyone talking about the fact that Bob is a crappy driver?
@IngieKerr A year ago
_You_ can :) but that won't solve the AI alignment issue. One has to start from the principle "Assume all operators of this machine are foolish" :) Alternatively, he might have had a horrible reaction to thinking about categories of homomorphic cube tessellation symmetries, but then arguably that was Bob's own fault for watching a Mike Pound video about programming.
@eljuano28 A year ago
@@IngieKerr you're my kind of nerd 🤓
@dgo4490 A year ago
Obviously, the fairest and most non-discriminate outcome is everyone ded... Equality and all!
@FHBStudio A year ago
That was the Soviet way, and it's still prevalent today.
@raffriff42 A year ago
“That’s what I call thinking!” ~Majikthise, HHGTTG
@bengoodwin2141 A year ago
To me, *some* of these seem obvious. You can cause harm without it being wrong if it was still the least bad outcome, and "achievable" is relative; in the second example, making the "default" "achievable" requires harm
@jursamaj A year ago
The tipping example is flawed. If anybody is causing harm, it's the employer who isn't paying a living wage, so that he can have artificially low prices.
@shandrio A year ago
​@@jursamaj But you are changing the frame of reference... You have to narrow it down to only the waiter-client relation in this example to be able to theorize about the problem. Of course, later in real life when you take into consideration all the players, the problem gets WAY WAY harder...
@arletottens6349 A year ago
The rule can be simple: minimize your own blame. Which means: stick to the rules, drive safely, and only use accident avoidance that does not create additional dangers.
@randomusername6 A year ago
Great, now all that's left to do is defining "blame"! I got it: "blame is responsibility for causing harm". Oh, wait...
@underrated1524 A year ago
Most of society goes by this simple rule. This works, but it does lead to people spending a *lot* of their time and effort playing "blame hot potato". Turning your problems into everyone else's problems is a solid strategy from your point of view, but if everyone does it the problem never gets solved.
@C00Cker A year ago
But then, there is the issue of harming others by being unnecessarily pedantic about following the general rules if the situation requires breaking them. Also, most rules are based on the fact that it is almost impossible to coordinate well in real time. With AI agents, the cars can share the current situation on the road and prevent most of the accidents.
@Squeesher A year ago
I love her voice, could listen to 1000 videos with her teaching
@theancientagoracorner2379 A year ago
Poor Bob. Always gets screwed in all use cases. 😅
@timng9104 A year ago
feels like game theory
@OcteractSG A year ago
It seems like AI is going to be adopted regardless, and the world will have to scramble to figure out the ethical problems before things get too far off the rails.
@supermax64 A year ago
The penalty for waiting is too great because other countries won't.
@Cassius609 A year ago
harm considered harmful
@FHBStudio A year ago
There's also the problem that not all "harm" is "bad". Some suffering is necessary suffering. Sacrifice is suffering, and there is no guarantee of a worthwhile payoff. If we start from the premise that harm must always be minimized, sacrifice becomes impossible. Growth and investment become impossible.
@ApprendreSansNecessite A year ago
You mean sacrificing yourself or "sacrificing" someone else? Because no one would say the former is harm since you do this to yourself, while the latter should be renamed "taking advantage of"
@FHBStudio A year ago
@@ApprendreSansNecessite There's a difference between harming yourself and sacrificing yourself. However, the difference to us isn't always clear, let alone to a machine.
@Mr.Leeroy A year ago
What accent is that?
@welemmanuel A year ago
"Quantify harm"... engineers trying, and failing, not to be relativistic; this is why technocracy is so appealing to them. I'm not saying it's useless to measure it; the problem is the ruler: morality is arbitrary on a utilitarian worldview
@atlantic_love A year ago
LOL at all the channels trying to ride the "AI" train before it peters out.
@SkyFpv A year ago
Choices which are ethical are not the same as choices which are moral. Ethics concerns justice and reduces a person's blame. Morals ignore justice (allowing forgiveness) and instead consider culture and emotion. You HAVE to separate these two metrics before you can draw a conclusion in these examples.
@nilss1900 A year ago
Why couldn’t the car just brake instead of crashing?
@supermax64 A year ago
Too close for the brakes to work in time.
@initialb123 11 months ago
@@supermax64 Then the driver (the AI?) was driving too fast. The primary responsibility is to be able to stop in time, or in American, "to stop short". Road users have a responsibility not to hit stationary objects. If you fail to stop in time, it's bad news for you; you are liable, man or machine.
@BunnyOfThunder A year ago
Option 4: Drive at a safe speed so you can stop without harming anyone.
@CeruleanSounds 11 months ago
I think we should sue AI
@jeromethiel4323 A year ago
Without empathy, it's almost impossible to quantify harm. And computers do not have empathy, and I cannot even think of a way to emulate empathy in a digital system.
@jursamaj A year ago
I think a bigger issue is that no machine we now have or can expect any time soon has any actual comprehension. You can't have empathy without having comprehension 1st.
@bornach A year ago
A lot of people lack empathy too. That doesn't prevent them rising to the top of society where they run companies which create the AI for self driving cars
@TiagoTiagoT 11 months ago
WTF? The harm isn't not tipping, the harm is the employer not paying their workers a fair wage for their work.
@xileets A year ago
WHY does the autonomous car not detect the hazard and stop? (Because it's a Tesla? heh) Seriously tho, this seems like a necessary function of the vehicle, a reasonable expectation for the user, and therefore the manufacturer's or designer's fault for not implementing it.
@IngieKerr A year ago
The point of the example given is that it is assumed a-priori that it absolutely _cannot_ stop in time without breaking the laws of physics, for any number of reasons that would not deem it to be directly a fault of the car's systems. [e.g. car in front arrived suddenly in front from a side-road without right-of-way, car was not visible due to some transient obstruction... etc].
@phizc A year ago
​@@IngieKerr But she also talked about a lawsuit resulting from it. There the OP's points do matter. Unless, of course, the stationary car teleported to where it was immediately before the AI decided to swerve.
@xileets A year ago
@@IngieKerr Good point. I would accept this; however, because it IS something highly, HIGHLY unlikely, a hazard appearing suddenly out of nowhere, it's not so useful. Consider how this would happen: sinkhole, plane crashing onto the road, etc... it would have to appear WITHOUT WARNING, inside the anticipated safe stopping distance of the vehicle, in order to be a useful analysis. I understand that this is a thought experiment, but being both intimately familiar with philosophy and philosophical discussion, and a risk management engineer, I feel that these highly hypothetical scenarios are far less helpful in teaching and demonstrating "potential" risks, harms, threats, etc. Concrete examples also have caveats to consider, like engineering oversight, but here we are avoiding physics-breaking solutions in a statistics-breaking problem. Far too fanciful a scenario to demonstrate what is a simple problem. BUT I see your point, don't get me wrong. I understand now what they were trying to show.
@supermax64 A year ago
No amount of sensors can make the car precognitive. Some actions from other drivers WILL result in a crash even with the best efforts from the car to minimize said crash. The thought experiment specifically focuses on one such case that is BY DESIGN inevitable.
@chuckgaydos5387 A year ago
Maybe the A.I. could examine our laws, news, and literature in order to determine which of its options would be considered to be the most reasonable to most of society. Of course, this would have to be done in advance since there wouldn't be time to do it when the situation arises. Since there likely would be no objectively best course of action, we'd at least get something that we could live with.
@RAFMnBgaming A year ago
It is important to understand that laws can (and should be able to) change to reflect our state as a society, and some are best given de facto leeway beyond what they say, like accidental shoplifting of small things often being forgiven without charges, or piracy being accepted for preservation, so fixing on a specific set of laws at a specific time does come with problems.
@chuckgaydos5387 A year ago
The A.I. would have to keep itself up to date.
@RAFMnBgaming A year ago
@@chuckgaydos5387 The problem is that if your objective is to enforce the current laws as best as possible, that implicitly means protecting them from being changed to anything else so you can continue to enforce them in the future. There's a real risk of being trapped in a cultural-legal limbo until the next Carrington event by an AI trying to maintain the status quo.
@chuckgaydos5387 A year ago
@@RAFMnBgaming The objective is not to enforce current laws. It's to have the A.I. make decisions that will be acceptable to most of human society. Rather than have people try to program the A.I. to do this, the A.I. could observe our opinions and figure it out for itself.
@muche6321 A year ago
It seems to me this could lead to a thing similar to airline ticket overbooking, where the equilibrium is between the number of people not showing up and the number of people overbooked. If you're the Bob who got bumped, you might feel harmed and get compensated for it. But that harm is the result of other people wanting the cheapest tickets for themselves.
@hoseja A year ago
This person wants to dictate what you're not allowed to do.
@initialb123 A year ago
If I can't make out some words and the auto-generated closed captions can't understand what's being said, perhaps the speaker needs to acknowledge their heavy accent and consider some pronunciation classes. If you have no trouble following along, I commend you; however, neither the auto closed caption system nor I could determine what some of the words were.
@DavidAguileraMoncusi A year ago
Time stamps?
@nickjwowrific A year ago
I would say that if you are a native English speaker, you should be embarrassed that you can't understand what she is saying and should maybe interact with more people outside of your country. If English is your second language, then I would assume that you know how difficult learning a language is and should be more understanding. Contrary to what you think, her job is not making these videos; she is helping make one because the channel thought she had something interesting to talk about. People have lives outside of just trying to entertain you and are allowed to spend their free time however they like.
@uropig A year ago
first
@erikanderson1402 A year ago
… how about we just build some decent trains. Autonomous cars are a scam and a waste of resources
@SiMeGamer A year ago
Then go and build a train. Trains are some of the most inefficient forms of transportation. The costs of operating and maintaining trains, stations and tracks are terrible. That's why you don't see private companies entering the train business unless it's under government subsidies. And if you are going to argue for the government to do it/be involved, then you will be entering a completely separate debate about the morality of taxes, which is a much broader philosophical avenue. Autonomous vehicles, when finally put into practice, will result in much lower traffic because of shared rides, autonomous taxi services, car pools and far fewer occurrences of jams, blockades and accidents. And the more this technology develops and enters the traffic ecosystem, the more we could afford to make smaller vehicles due to higher safety standards, which will take even less space. Perhaps we will find a more sustainable train solution. Who knows? What we do know for a fact is that if AI vehicles operate at the presumed standard, then traffic will be much better for everyone. I love public transportation as a concept, but it is really hard to do well because of many, many considerations, some of which are moral (taxes, for example). So in the meantime, as we figure out public transportation and urban design, I encourage the development of autonomous vehicles. They could save us a lot of headaches until we are ready for better public transportation solutions.
@erikanderson1402 A year ago
@@SiMeGamer by no objective metric is that true. Maintaining a fleet of cars needed to move the same number of people as a modern train costs way more and has a much lower level of asset utilization. It is much more efficient by every conceivable metric.
@erikanderson1402 A year ago
@@SiMeGamer well incidentally, train companies were previously forced to provide passenger rail as a public service. I think we should just reconstitute those policies because they were quite effective. And a fleet of cars constitutes a lot more possible points of failure than an effective public transport system.
@muche6321 A year ago
​@@SiMeGamer Let's compare trains with cars. Operating trains requires people who need to be trained and paid. Operating a private car requires one person who is not paid. Their training is also usually unpaid, done by parents/friends, followed by a formal test. Both trains and cars require maintenance. Stations could be compared to parking lots/garages. Stations' maintenance is paid by the transportation company, whereas parking lots/garages are paid for by the companies/people that want to attract customers/for themselves. Tracks are again maintained by the transportation company. Roads' maintenance is paid by the government from taxes. In summary, the costs of operating trains are concentrated in the train company, whereas most of the costs of operating cars are spread out to other parties.
@omegahaxors3306 A year ago
What people thought AI safety was: "Either we hit this car or hit this baby, this decision is very important" What AI safety was probably going to be: "Either we hit this baby or we take longer to arrive at destination" What AI safety actually ended up being: "Baby detected, speeding up to hit baby, baby has been eliminated"
@hurktang A year ago
"the adoption of AI system is not gonna happen until we figure all this out" So Candide...
@mibo747 A year ago
Where is the man?
@muche6321 A year ago
Behind the camera?
@justwanderin847 A year ago
Just say NO to government regulation of computer programming.
@kibels894 A year ago
"Obviously related to AI systems" yeah because they're obviously harmful lmao
@bersl2 A year ago
Harm is when my artist and writer friends have their work fed into the machine without their informed consent or fair compensation. >:(
@arletottens6349 A year ago
There's no law that requires consent or compensation for looking at your work and learning from it.
@kuhluhOG A year ago
@@arletottens6349 This is more of a philosophical thing: Is an AI learning from something and a human learning from something the same thing? Some people (especially companies which push AI) will say yes. Other people (especially artists) will say no. The question is now what society at large will answer, and that will take time.
@maltrho A year ago
No it certainly is not. Your friends' sales are in no way affected, and the machine does not use their work in any direct way. It is like complaining that writers use public language and words created by other persons without any payment.
@kuhluhOG A year ago
@@maltrho Whether the sales are affected depends on the output of the AI. Some AI tools are at this point specifically made to mimic specific artists (even living ones) as closely as possible
@maltrho A year ago
@@kuhluhOG They mimic well-known artists' styles, not your totally unknown friends', and practically (if not absolutely) nobody uses chatbots for 'free' fiction literature.
@omegahaxors3306 A year ago
Tipping culture needs to die. Just raise your prices. Rich people don't tip anyway so all that does is make things more expensive for people who already have the hardest time paying in the first place. Besides, these days tips just go straight to the CEO anyway.