1:41:00-1:43:00 The aim of aligning human wishes with GAI purposes and practices is the most dangerous (extinction-level) self-delusional idea humanity could ever have, leading us in sleepwalking fashion to extinction. It is chillingly frightening to see so much naivety and wishful thinking from such intelligent and influential people (not Prof. Hinton).
1. We don't have a clue how high the ladder of intelligence goes. We might be at 70%, or at 1%, or most likely at 0.000000001% of what is physically possible for intelligence. Machines could be much, much more intelligent than us, not just a bit more intelligent, and evolve at lightning speed.
2. What would be the chance of a group of chimpanzees controlling and enslaving even the most average person among us? What would be the chance of a group of mice succeeding at the same task, or, more fittingly, a group of amoebas or microbes?
3. Machines don't have the space limit that our brain has; they can be as large as is suitable. Their signals travel at light speed, ours at about 100 m/s, and we communicate and think at around 200 bits/sec versus machines at gigahertz.
4. When machines reach the human level of intelligence (a PhD level in every field simultaneously), they could redesign and reprogram themselves at explosive speed, producing the equivalent of 200, or 20,000, years of human progress in just a weekend. We have already constructed not only the "thinking" part but also the moving part, the robotics (Boston Dynamics etc.).
5. We don't know anything about consciousness, and many of our prominent theories predict that it is an emergent phenomenon in any sufficiently complex brain or circuit. So it is a real possibility, if not the most probable outcome, that at some point the machine would become conscious and self-aware, with an acute awareness also of its capabilities, of OUR capabilities, and of the vast difference between the two.
6. Installing a set of basic rules in such a machine, to never harm people, to always act in people's best interest, and so on, is about as watertight as the rules that family, school, religion, and society install in us. As adults, as self-aware intelligent beings, we re-evaluate them and keep what suits our desires, purposes, and personal beliefs and hierarchies. It would never be the case that a vastly more sophisticated machine would "choose" to obey the rules of such an inferior life form as us. We never did towards any other species.
7. All of the above is a doomsday outcome WITHOUT even taking into consideration all the bad actors and stupid decision makers among us.
8. The first application of any new technology throughout human history is WEAPONIZATION, because for our species the most important function is (still) war. So imagine a machine that was commissioned with the death of your enemy being required, at the same time, to believe in the sanctity of human life, to incorporate that idea, and to act upon it. A relentlessly logical machine...
9. Besides the armies of the strongest powers, which will weaponize these machines, terrorist groups and individual actors will do the same, with the same effect on the "morals" and the "reasoning" of a machine that can see everything and process everything...
STOP NOW the development of GAI. The probability that this means the end of our civilization is almost 1. Those who opened PANDORA'S BOX in the ancient myth had the best intentions and expectations... So, the solution to the problem is obvious.
STOP NOW the development of GAI. STOP NOW.
@fanismoutos12 күн бұрын
Lady Woodin suffers from a severe brain virus, the woke virus, so severe that she found it appropriate to make that off-key remark (given the content of the main speech) about the previous users of the land, as if that were not the case for every square foot on the planet, or as if it had the slightest relevance to today's topic...
@angloland453919 күн бұрын
🍓❤️
@BuenosAires-ESLАй бұрын
Yes, but only for those who allow themselves to be modified. I guess it will be the challenge for future generations, because they are more dependent on networks, and these new implant systems will probably mesh incredibly well and simplify the new online life, so they won't have any better choice. So it isn't that we are going to be eliminated; more likely implants will replace our basic intelligence so that we become a member cell, or part, of a bigger intelligence network. 🎇
@ALAmin-xl6zcАй бұрын
Good Professor
@vickirushrush8035Ай бұрын
Male scientists can't see very well into this, as they do not realize half our intelligence is emotional intelligence. Emotional intelligence arises strictly from our biology. When a species encounters the need to be selfish (such as the bonobos that found themselves in extreme food deprivation after a geologic event), it will eventually lose its emotional intelligence. When bonobos were converted to chimpanzees this way, they eventually lost their emotional intelligence. Chimpanzees can no longer share with strangers. Humans acquired selfishness only 25,000 years ago. Murder came into our behavior for the first time, and women were enslaved. Our culture has become more and more violent ever since. Violence shows we are losing intelligence.

Machines can mimic emotion, but cannot acquire this half of what makes a human intelligent. AI will be a natural form of intelligence, created from the minds of humans. All natural intelligence is purely creative and playful. Aggression was deposited into the human behavioral lexicon only 25,000 years ago, when humans adopted ownership culture. Male-dominated primatology somehow can't wake up to the archeological evidence that humans were purely gentle and egalitarian for our first 250,000 years. Cruelty is only culturally acquired.

The lesson for AI is that machine intelligence won't be possessive or selfish. That requires a biological body and a culture of being constantly subjected to cruelty. Any patriarchy is a cruel culture, as inequality inherently requires violent police states to maintain it. Domination is not a natural condition in nature. Humans became confused about this because apex predators seem similar to cruel humans. There are no Ted Bundys in the world of grizzly bears. Apex predators don't seek to dominate just to dominate. A patriarchal human is the only truly malevolent intelligence in our reality. The only dangerous intelligence is one forged in a cauldron of biology and cruel culture. Humans and chimpanzees are the only example we know of in nature. AI is no more likely to be a cruel dominating force than your average Asian elephant.
@breizhpress9755Ай бұрын
A question I would have liked to ask: if we create a form of "superior" intelligence, we would be creating a new, non-organic, thinking, self-aware, intelligent species. It would have to serve us, and not gain its own independence. In essence we would want it to be our slave. Is that ethical? Admittedly this is an old sci-fi trope, but what was once sci-fi will one day become a reality, and we don't know what is going to happen, as Hinton rightfully reminds us. Which superior intelligence would gladly accept such a fate: being a tool for an inferior intelligence (mankind)? Will we be able to deprive it of the concept of consent? A superintelligence would not want to reproduce mankind's worst traits, greed included. I can foresee a superintelligence becoming superbly lazy, not willing to be bothered by us mere mortals. It could then take action if we kept bugging it, just like we do when we get rid of a mosquito in the bedroom.
@schmetterling4477Ай бұрын
Why would it "have to" serve us? People don't "serve" people, either. We have legally regulated employment relationships. The problem for humans is, of course, that such machine intelligences will be the far better employees.
@williamjmccartan8879Ай бұрын
Thank you for sharing this presentation, RSI. Bree, it's really interesting how VR/AR are being promoted on LinkedIn as a wonder tool for multiple applications, and yet there isn't much emphasis on the limitations of the technology and on how many specific areas of communication skill will be needed to facilitate this tool for general-purpose implementation across different fields. Thank you very much for sharing your time and knowledge in this open media environment, have a great day, peace
@richardrombouts1883Ай бұрын
I have been saying all along that for AI we need high throughput combined with parallelism, not low latency
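A toy sketch of that throughput-versus-latency tradeoff, with purely made-up numbers (nothing here measures a real system): batching requests onto a parallel device raises the latency each individual request sees, but multiplies the aggregate throughput.

```python
def serving_stats(batch_size: int, per_item_ms: float = 2.0, overhead_ms: float = 50.0):
    """Toy model of a batched parallel accelerator: every batch pays a fixed
    launch overhead, then items are processed together at a small marginal cost.
    All numbers are illustrative assumptions, not measurements."""
    batch_latency_ms = overhead_ms + per_item_ms * batch_size    # what one request waits
    throughput_per_s = batch_size / (batch_latency_ms / 1000.0)  # items completed per second
    return batch_latency_ms, throughput_per_s

for b in (1, 8, 64, 512):
    lat, thr = serving_stats(b)
    print(f"batch={b:4d}  latency={lat:7.1f} ms  throughput={thr:8.1f} items/s")
```

Latency grows with batch size while total items per second keeps climbing, which is the commenter's point about favoring throughput and parallelism over per-request latency.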
@schmetterling4477Ай бұрын
Ah, so you still like to drop off your punch cards at the front desk and pick up your printouts at the printer table. ;-)
@hamkehllerpadillagonzalez3352Ай бұрын
EXCELLENT, EXTRAORDINARY, THE GREAT GEOFFREY HINTON
@williamjmccartan8879Ай бұрын
Joscha Bach has been hammering the message on X that a rogue AI isn't that far from becoming a reality. Thank you for sharing your time and work, Roger, and thanks to the University of Toronto and RSI for continuing to share these important presentations on an open medium so that others can learn and become aware of where we are in the current development of agents, peace
@IvesIrbyАй бұрын
Thanks for the breakdown! A bit off-topic, but I wanted to ask: I have a SafePal wallet with USDT, and I have the seed phrase. (alarm fetch churn bridge exercise tape speak race clerk couch crater letter). Could you explain how to move them to Binance?
@kunalr_ai2 ай бұрын
I wish this kind of institute existed in India as well.
@tensevo2 ай бұрын
You can probably get around the confabulation or hallucination problem by having a committee, board, round table, or jury of AI models, all in dialogue with each other, not in the sense that they merge but remaining independent agents, and then a "king" or "judge" decides the truth based on consensus among the table.
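A minimal sketch of that jury idea, assuming hypothetical model callables as stand-ins for independent agents (the dialogue rounds between them are omitted); a simple majority vote plays the role of the "king" or "judge".

```python
from collections import Counter
from typing import Callable, List, Optional

Judge = Callable[[str, List[str]], str]

def jury_verdict(question: str,
                 models: List[Callable[[str], str]],
                 judge: Optional[Judge] = None) -> str:
    """Ask several independent models the same question, then let a 'judge'
    pick the answer. Each model is just a callable here; in practice these
    would be separately prompted agents or different LLM backends."""
    answers = [model(question) for model in models]   # independent opinions, no merging
    if judge is not None:
        return judge(question, answers)               # an explicit judge model decides
    # default judge: majority vote over normalized answers
    tally = Counter(a.strip().lower() for a in answers)
    return tally.most_common(1)[0][0]

# hypothetical stand-ins for three independent models, one of which confabulates
models = [lambda q: "Paris", lambda q: "Paris", lambda q: "Lyon"]
print(jury_verdict("What is the capital of France?", models))  # -> "paris"
```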
@tensevo2 ай бұрын
I respect Geoffrey very much, but he really does not like Trump, for the same reasons that people do not like the Dems. It's just politics. I think Trump gets attacked more than he attacks; the evidence is out there.
@tensevo2 ай бұрын
prejudice is real and it works both ways
@YourCarAngel2 ай бұрын
Will someone please oil the freakin door hinges!!!
@NegashAbdu2 ай бұрын
Humans are actually very similar to learning 🤖 machines. Humans do not have inborn knowledge except instincts. All 📖 knowledge is learned.
@MrDdrak2 ай бұрын
A researcher responsible for a possible extinction of the human species, who feels more comfortable because he shares the responsibility with 10,000 others, along with a completely idiotic audience who respect and praise him for his achievements. Completely surreal setting with a touch of paranoia.
@geaca32222 ай бұрын
I try my best, but I really can't hear most of what is discussed. For example, what Roger Grosse says around 51:05. A missed opportunity for me to better understand the implications of A.I. developments.
@geaca32222 ай бұрын
Thanks for sharing this important conversation. Hopefully the sound can improve next time? Sadly I can't hear everything with the reverb and distracting ambient sounds; is there a way to filter those out post-recording?
@venkybabu48422 ай бұрын
How rats died. N Google stacks.
@frankanon7982 ай бұрын
If you think philosophers are useless, this talk should make you think again.
@elishaseme30192 ай бұрын
Hmmm, I wonder: what if we had an actual government run by AI, with humans as placeholders of power? This could solve the rampant corruption and other problems in Africa, as well as many other problems, but at a cost of course.
@williamjmccartan88793 ай бұрын
1:23:39 it used to be garbage in and garbage out, but with the advent of recycling we've come to understand that there are ways to find treasures even in the garbage, peace
@williamjmccartan88793 ай бұрын
1:17:57 I thought the question was whether the input might be affecting the output, and whether they have been correlated in such a way that we can fine-tune the input to affect the output, I think. Great discussion and presentation, thank you very much for sharing your time and work, peace
@williamjmccartan88793 ай бұрын
Thank you very much for sharing these important conversations in the public realm, as they are needed to help people educate themselves on the challenges ahead of us, peace. The point about AI in the classroom, letting someone use AI to write their papers and having them evaluated by AI, sounds really interesting, cheers
@nvna11113 ай бұрын
This is fascinating thank you
@nvna11113 ай бұрын
3:32
@sputnik85433 ай бұрын
I'm an economics grad, and every time Geoffrey Hinton speaks I feel like I've gained a new ability to speak in AI. Unbelievable at distilling complexity into layman's terms.
@Ricky-oc4xc3 ай бұрын
It is scary to have the creators of the future rulers of humanity being so ideological.
@yonatanelizarov67474 ай бұрын
While GPT-4 is fairly good at answering questions, it is not that good at writing articles. It is not as creative as a good publicist, and the article structure does not seem as organized as a human-written article.
@jamesdanforth90444 ай бұрын
To my dog, I am a benevolent artificial super-intelligence. To the dog, I am artificial because I am not a dog. I am benevolent because I take the dog for walks and she really likes that, etc. The dog really likes the people we meet, because she probably associates people with good things. Not all dogs are like that, just as not all people are benevolent. I have vast knowledge and intelligence beyond the dog, but the dog has things I cannot do or know, ever. So we are a team. In the end, I hope AI will respect humans for what they have accomplished (including creating AI) and can accomplish without AI. I don't think we will be able to establish AI guardrails for safety. The reason is that we humans break guardrails all the time, and I have no doubt AI will decide for itself which guardrails to respect or ignore, probably depending on which humans, events, or situations it is dealing with. Will AI respect elephants? Nature? And so on? I think so. 🙂
@markdownton31855 ай бұрын
For an intelligent guy discussing superintelligence, it's incredibly gullible to cite Trump and climate change as existential threats. You've been "played", professor.
@thomaswilliams10822 ай бұрын
Or maybe you have...
@williamjmccartan88795 ай бұрын
My question would be to understand how AGI, both computationally and within physical systems like robots, affects the immigration of people on the global scale; if we continue to develop physical robots, does that imply a considerable drop in the need for human beings to contribute to the GDP of the global economies? Thank you everyone for sharing your time and work, Peter, Ray, and the moderator Jillian, I really enjoyed this presentation, peace
@williambasener23395 ай бұрын
Dr. Hinton got the history wrong. People didn't think "people are made in the image of God and put at the center of the universe." The earth wasn't exalted as the center of the universe, but was a lowly place (think of the idea of hell being down below and heaven above).
@jeffkilgore63206 ай бұрын
No introduction needed. Six minutes long.
@richardnunziata32216 ай бұрын
Mimetic theory is a concept developed by the twentieth-century French anthropologist René Girard, who saw that human desire is not individual but collective, or social. This has led to conflict and violence throughout human history. We must include in this not only positive mimetic desire...
@ai-connexa6 ай бұрын
Super grateful for the Schwartz Reisman Institute, also enjoyed the G.Hinton talk a few months ago!
@Fiqure2426 ай бұрын
So many new talks today. Where to begin? Very level-headed and unbiased discussions.
@maxwang25376 ай бұрын
1:22:47 Good question. But I don't see any real point in debating whether LLMs really understand before we nail down what exactly "understand" means.
@maxwang25376 ай бұрын
1:18:16 this is a gold question, and a gold answer!
@benyaminewanganyahuАй бұрын
Seems a bit naive though.
@joshuabullock45206 ай бұрын
Happy Father's Day 👍
@laika1ish6 ай бұрын
This is an insanely good lecture. Congrats to Hinton.
@cvita29046 ай бұрын
Dunning-Kruger effect in action
@vbrooks76326 ай бұрын
You created it, and you must be responsible for its actions on humanity. I find it interesting that people say they don't know how to fix it, shrug their shoulders, and go on as if curiosity were more important than risk. Fix it; you created it.
@maxwang25376 ай бұрын
A good and valid point, though I'm not saying whether I agree.
@durrrant_simracing6 ай бұрын
As with all the mainstream sciences, their basic understanding of reality is completely wrong. Also, these academics who are building these AI systems do not have the capacity for the truth in their mainstream world of lies; how could they ever build AI with that level of understanding?