Rebecca Gorman | This House Believes Artificial Intelligence Is An Existential Threat | CUS

1,977 views

Cambridge Union

9 months ago

Rebecca Gorman speaks as the second speaker for the Proposition on the motion, in the Debating Chamber on Thursday 12th October 2023.
The rapid growth in the capabilities of AI has struck fear into the hearts of many, while others herald it as mankind's greatest innovation. From autonomous weapons to cancer-curing algorithms to a malicious superintelligence, we aim to discover whether AI will be the end of us or the beginning of a new era.
............................................................................................................................
Rebecca Gorman
Rebecca Gorman is the Founder and CEO of Aligned AI, an IT consulting group working to make sure AI functions in alignment with human ethics and values. She was named in REWork's Top 100 Women Advancing AI in 2023, nominated for VentureBeat's Women in AI Award for Responsibility and Ethics in AI, and is a member of Fortune's Founders Forum.
Thumbnail Photographer: Nordin Catic
............................................................................................................................
SUBSCRIBE for more speakers:
/ @cambridgeunionsoc1815
............................................................................................................................
Connect with us on:
Facebook: / thecambridgeunion
Instagram: / cambridgeunion
Twitter: / cambridgeunion
LinkedIn: / cambridge-union-society

Comments: 16
@The7dioses · 4 months ago
Can someone please explain how these systems kill teenagers? Hello?
@cleitondecarvalho431 · 4 months ago
Be patient, you'll know how once the TV news starts noticing it.
@The7dioses · 4 months ago
@@cleitondecarvalho431 I don't watch TV news. Since you already know how, why don't you share what you know instead?
@maxmustermann7794 · 3 months ago
Encouraging criminal behaviour or even suicide. GPT-J already did that last year. That does not mean any human can be convinced to act that way, but it certainly is possible, as you can find out from the numerous articles about someone who did commit suicide after talking to a chatbot for months. I do not watch the news either, but I like to understand whatever sparks my curiosity. Long story short, a man in Belgium ended his life after GPT-J encouraged him over the course of months. Something that could maybe have been prevented, maybe not, but it certainly did not discourage him from taking those actions; on the contrary, as I said above. Search for it, you'll even find the transcript of their conversations, which I can describe with one simple word: horrifying.
@The7dioses · 3 months ago
@@maxmustermann7794 Thank you for the valuable information and for responding. I will definitely look into this.
@singingway · 3 months ago
Algorithms can lead a person down a path of illogic, "driving straight to the bottom of the brain stem," as they control what the user sees in response to the user's comments, choices and actions. Young minds are particularly vulnerable to being influenced to be dissatisfied with the self, insecure, unsure, confused about self-identity (who am I and what do I really believe?). And teens who have taken drastic actions or self-harmed have sometimes been shown to be heavy media users made depressed by that media consumption.
@singingway · 5 months ago
Her points: 1. AI does not currently always do what we intend it to do; her example is that AI applications meant to add entertainment to social media instead cause deaths among teenagers. 2. AI systems have been deployed at scale for 20 years; some people have benefitted, some have been harmed. 3. Machine learning doesn't work in edge cases. 4. Example: what happens if we allow AI to decide when to fire nuclear missiles. 5. It is not built for the purposes to which it is being deployed. 6. The key is to build it such that it follows our instructions, and then give it good instructions.
@ohmydaise · 3 months ago
thx
@jimhiggs6281 · 9 months ago
Now, her we can listen to!
@AntonioVergine · 5 months ago
No, the point is not that AI is safe if it does exactly what we asked it to do. The point is that we do not know what the AI has understood about our values and intentions. So an AI, if instructed to do so, could solve world hunger, but at the cost of something else we did not expect. The problem is that we can't know the reasoning behind the AI's choices, so we can't tell whether that reasoning is flawed once the AI is more intelligent than us. An example? In chess, you will see an AI making moves that look very bad, but you consider them bad only because you're not smart enough to see the full picture, while the AI is. In a similar way, we will give autonomous powers to AI, but we will not be able to be sure that the final results of what we ask for will not carry a threat to humanity.
@MrMick560 · 6 months ago
Can't say she put my mind at ease.
@richmacinnes4173 · 6 months ago
9 billion people on the planet, and it only takes one person to make a mistake, at best. Guaranteed someone will use it for their own goals, and within hours of it being released.
@danremenyi1179 · 4 months ago
Poor