A case study in proper AI alignment. You can read a text version of this Guidebook entry at: izaktait.subst...
Comments: 9
@MusingsFromTheJohn00 (a month ago)
An existential risk does not only mean extinction. An existential risk means a risk of changing one's fundamental experience of life in ways one opposes or fears, because the change will be unknown and/or uncertain. Developing AI is 100% an existential risk, and one we cannot stop from happening, because great change is coming whether we are ready for it or want it. Now, what actual mature Artificial General Super Intelligence with Personality (AGSIP) individuals will be like in the future is NOTHING like what is presented in this video. However, I can answer the question of "What True Alignment Looks Like". True alignment with AGSIP technology and individuals will happen through humans merging with the technology, so that whether an individual began as a human or began as an AI, both become equal members of the same race, having not just the additive best of being human and being an AGSIP, but a synergistic multiplier of the best of both. Of course, like many things we do, the path getting there may be a rough one.
@dannygjk (a month ago)
Isaac Asimov already covered your premise and possible solution, and he showed it to be flawed. Did you merely plagiarize his writing?
@TheGuideToAIFriends (a month ago)
No, I did not. How do you find his proposed solution similar to mine?
@dannygjk (a month ago)
@TheGuideToAIFriends He tried multiple solutions and always found a way around them. Do you want me to reread all the stories?
@BillRobinson1805 (a month ago)
No.
@TheGuideToAIFriends (a month ago)
@BillRobinson1805 No? No what?
@fertilizerspike (a month ago)
I agree
@BillRobinson1805 (a month ago)
@TheGuideToAIFriends It's going to end up starting the Butlerian Jihad.