Speaking as an American, I say we give the AI a shot at running things. At least we'll have some kind of intelligence involved.
@houseofvenusMD · 1 month ago
I would argue that AI has already been running things, or at the very least social media algorithms have. They have influenced (read: driven) our thoughts and actions for the past 15 years.
@bobcousins4810 · 1 month ago
The problem I have with the Three Laws of Robotics (or four laws) is that, practically speaking, they are impossible to implement. Even the First Law, that a robot "should not allow a human to come to harm," means a robot must recognise that a surgeon who amputates a limb is doing so for the long-term benefit of the patient. Is the robot supposed to confirm the doctor's diagnosis that amputation is necessary? That would make the robot a medical expert, reviewing every such case. Note that the Law says "a human," so presumably it applies to all 7 billion humans.

Regardless of the utility of Asimov's Laws, though, his books are a great exploration of the ethics of robotics, written at a time when anything like ChatGPT was far off in the future. Assuming robots (or "AI agents" generally) do become smart enough, at some point we will need to address these issues. The biggest one is the euphemistically named "alignment" problem, which already affects tools like ChatGPT, tools that may be regarded in the future as hopelessly primitive. It is already difficult to make ChatGPT (or the other systems) follow the rules set for them; people keep finding creative ways to get the models to break their own rules. It may even turn out to be impossible to get any AI to follow rules reliably.

I suspect we will have decades of the AI equivalent of "grey goo": AI becomes pervasive but is never 100% correct, works in ways we don't really understand, and can never be completely controlled. We can see this happening already, with organizations making incorrect decisions using AI and then telling people the AI must be right.
@androidfarmer8863 · 1 month ago
Oh - I'm sorry, you must have mistaken us for a species that listens.
@AllBrakesNoGas · 1 month ago
Seems like Asimov developed his opinion after playing Fallout 4.
@YevgenySimkin · 1 month ago
I love Asimov, but I just finished rereading The Caves of Steel and he gets absolutely everything in his predictions about the future wrong. Not sure it makes any sense to treat his robot-specific predictions as any less off base than the rest 🤷♂️
@bobcousins4810 · 1 month ago
As someone said, "nothing dates faster than the future." What may have seemed plausible at the time now seems laughably wrong. But I think it is a mistake to treat SF as being about "making predictions". SF generally explores alternative realities, mostly set in the future, but sometimes in the present or the past. I don't think any SF author is saying "this is 100% what is going to happen", but rather "this is one of an infinite number of possibilities, which I turned into a story".
@YevgenySimkin · 1 month ago
@@bobcousins4810 Absolutely, and it would be silly to expect these authors to be prophets rather than just dreamers. The thing Asimov gets wrong has to do with population growth and food availability, and I don't fault him for it; it would have required seer-level prescience in '54 to know what reality in 2024 would look like. I'm just saying it's silly to take anything he got right about robots and AI as any kind of prognostication, since all of that is just as likely to be totally off base as the rest of his predictions.