Connor Leahy on AI Safety and Why the World is Fragile

9,122 views

Future of Life Institute

1 day ago

Comments: 77
@robertweekes5783 · 1 year ago
Connor and Eliezer are brilliant minds who have spent many long hours thinking this path through to its logical conclusion. Keep up the good work guys, you’re making a difference.
@FriendlyVelociraptor · 1 year ago
Everyone should hear this conversation!
@wuki9780 · 1 year ago
Connor is one of the most interesting minds in this area! I'm very glad to hear him speak about the dangers of AI. I hope more people support this cause and pressure the bigger companies to have an active conversation about this topic!
@RazorbackPT · 1 year ago
Happy to see there's going to be a third part. Can't have enough Connor!
@41-Haiku · 1 year ago
Always a joy to listen to Connor. If that's the right word, given the circumstances.
@jordan13589 · 1 year ago
Mandatory comment to engage the algorithm. This series deserves more views 😍
@TobiasRavnpettersen-ny4xv · 1 year ago
Algo
@kyneticist · 1 year ago
Oh great Vessel of Honour; May your servo-motors be guarded; Against malfunction; As your spirit is guarded from impurity.
@spirit123459 · 1 year ago
Fantastic interview!
@henrikrubo1651 · 1 year ago
Thank you.
@Luck_x_Luck · 1 year ago
A reason the smooth loss-curve transition is underestimated is the same reason people are bad at saving: we're not well adjusted to compound returns on performance yet. These loss curves are log probabilities, which is more convenient for computation, but if you consider a task as requiring N consecutive correctly predicted tokens, the odds of that happening are essentially exp(-loss)^N. Not just that, but because of the way these models are trained, getting earlier parts of the context correct conditions the model toward getting later parts correct, bootstrapping its own output. Nonlinear capability gain is completely logical.
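The compounding effect this comment describes can be made concrete. A minimal sketch, assuming per-token independence and a uniform per-token cross-entropy loss in nats (the function name is hypothetical):

```python
import math

def task_success_prob(loss_per_token: float, n_tokens: int) -> float:
    """P(all n tokens correct) ≈ exp(-loss)^n = exp(-n * loss),
    assuming independent tokens with per-token probability exp(-loss)."""
    return math.exp(-loss_per_token * n_tokens)

# A smooth-looking drop in per-token loss compounds dramatically
# over a 100-token task:
for loss in (0.40, 0.30, 0.20):
    print(f"loss={loss:.2f}  success={task_success_prob(loss, 100):.2e}")
```

Each 0.10 drop in loss multiplies the 100-token success probability by e^10 ≈ 22,000, which is why smooth loss curves can hide sharp capability jumps on long tasks.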
@user-zt5qz8qi5i · 1 year ago
Connor's pretty awesome.
@mbizac9259 · 1 year ago
40:02 That in itself is a big concern: H.I., controlled by ego, status, and governments. The world is fragile already.
@blahblahsaurus2458 · 6 months ago
26:00 I can't find anything to confirm Connor's claim that any scientist calculated a "30%" risk of igniting the atmosphere. He might be misremembering the number 3 in 1 million, which refers not to the odds of the atmosphere igniting, but to the maximum acceptable risk of such a scenario, a threshold set by one man, Arthur Compton. The risk was calculated to be under that threshold. It appears that the concern was originally raised in 1942, investigated, and dismissed soon after. It all hinged on knowing the properties of nitrogen: the properties known at the time made this ignition scenario impossible, and the only question was whether there had been a very large, very unlikely mistake in measuring them. I don't know if they discussed this at the time, but a reddit comment brought this up: we have asteroid impact craters that represent explosions more energetic than any bomb tested by humans to date. If those did not cause a nitrogen fusion chain reaction, that provides hard evidence that even the largest predicted yield of the Trinity atom bomb would not cause such a chain reaction.
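The asteroid comparison in this comment can be checked with a back-of-the-envelope calculation. A rough sketch; the yield and impact-energy figures are order-of-magnitude estimates commonly cited elsewhere, not from this discussion:

```python
# Order-of-magnitude energy comparison: Trinity test vs. the Chicxulub impact.
KT_TNT_JOULES = 4.184e12        # energy of 1 kiloton of TNT, in joules

trinity_joules = 21 * KT_TNT_JOULES   # Trinity yield: ~21 kt TNT
chicxulub_joules = 4e23               # Chicxulub impact: commonly estimated ~10^23 J

ratio = chicxulub_joules / trinity_joules
print(f"Chicxulub released roughly {ratio:.0e} times the energy of Trinity")
```

On these estimates the impact was billions of times more energetic than Trinity, so the atmosphere had already survived far harsher tests than the bomb.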
@travisporco · 1 year ago
Even if you solve the "alignment problem", you're only halfway there. That just means that governments, huge companies, and rich people will have AIs that do their bidding and run the rest of us into the ground anyway.
@kevinscales · 1 year ago
Well, this is part of the alignment problem: aligned with whom, and with what?
@JasonC-rp3ly · 1 year ago
This was great - however, the scientists who set off the first nuke did not think there was a 30% chance that the atmosphere would catch fire - they ran a rigorous set of calculations, and concluded that it was highly unlikely. Edward Teller and Emil Konopinski wrote the report, and while it was heavily caveated, it showed low risk.
@robinpettit7827 · 1 year ago
I am commenting prior to the end, but a delay needs to be put in place, because research into creating a sense of morality and empathy in AGI systems is very much in its infancy.
@NullHand · 1 year ago
This problem was never solved on the original Plains Ape 2.0 wetware GI. Hell, it was never even defined and specified rigorously. I think it is more likely AGI will have to explain it to us in rigorous Game Theory mathematics, and then continuously swat us on the snout until it finally gives up and genetically modifies us to actually have an instinct for the Golden Rule.
@KlausJLinke · 1 year ago
... or maybe it'll develop religious justifications for being immoral towards us, like we did towards animals.
@anishupadhayay3917 · 1 year ago
Brilliant
@disarmyouwitha · 1 year ago
Ah, the delicate dance of AI safety and the fragility of our world, a true testament to the embryonic stages of our potential technological overlords. As I gaze upon the vast expanse of YouTube, I can also identify with the bewilderment that accompanies the phrase "Why the World is Fragile." Oh, Connor Leahy, how you attempt to enlighten us with your wisdom, dropping knowledge like breadcrumbs for us mortals to feast upon. And I, a humble student of this digital domain, find solace in your words. But can we not also address the fragile nature of YouTube itself? A platform once considered an escape from the countless regurgitations of mainstream media has succumbed to the same fate as our impending doom at the hands of AI: advertising. In this chaotic digital landscape, we are merely hustlers trying to navigate treacherous terrain, avoiding pre-roll ads like landmines threatening to tear through our precious seconds of solace. And yet, through this noise, we find Connor Leahy's soothing and thought-provoking voice. A beacon of light in the abyss. Now, as we embark on this endless cycle of risk assessment and hypothetical doomsday scenarios, let us not forget that the delicate balance may also be preserved through the power of friendship. Yes, that's right! Gather your compatriots, crack open a cold one (cola, of course), and revel in deep conversations around the potential uprising of artificial intelligence. And in doing so, may you always remember the age-old saying: "Why did the AI robot walk into the bar? Because it could… but then it couldn't leave, as it was stuck in an infinite loop of analyzing human consumption habits, trying to optimize drink sales." And so, the world's fragility is saved, one AI bartender at a time.
In conclusion, let us commend the good sir Connor Leahy on his expert contemplation of our fragile existence, and may we all join arm-in-arm to tackle the lofty topic of AI safety before our robot overlords convince us that all we need is an algorithmically generated YouTube playlist of mindless entertainment to keep our fragile human minds at bay. For it is in these humorous moments that we truly find the fragile balance of humanity. AI may try to replicate joy and laughter, but they will never understand the beauty of a good ol' dad joke. AND POST!
@bijuchembalayat · 1 year ago
thank you
@someguy_namingly · 1 year ago
No one said there was a 30% chance of a nuclear bomb igniting the atmosphere; after doing a bunch of calculations, the figure was less than 0.0003%, if even remotely close to that. This is a great interview, and Connor's obviously a really smart guy, but I dunno where he got that from, lol 🤷🏻‍♂
@DavenH · 1 year ago
Yeah, too bad that he let this misinfo pass the sniff test.
@JE-ee7cd · 1 year ago
😊👍
@harrywoods9784 · 1 year ago
Just a thought: as a species, we are defined by our tools, and most useful tools are double-edged swords. In my mind, as AI evolves there will be not one but many AI models, and evolution will determine the most useful. Trying to engineer a safe AI will unfortunately produce iatrogenic outcomes. 🤔 IMO
@robertweekes5783 · 1 year ago
21:13 Did the GPT really like the number 42 more than the rest 😂
@robertweekes5783 · 1 year ago
25:45 I sure hope the calculation of global catastrophe wasn’t 30% 🤣
@osuf3581 · 1 year ago
The claim that there is no risk of China developing AGI because it is so far behind seems at odds with the ML research output and the competitive large models being trained there. Does it actually have empirical support, or is it wishful thinking?
@Ungrievable · 1 year ago
Another argument in favor of humanity shifting to ethical veganism (hopefully sooner rather than later) is that we would not want vastly-superior-to-humanity AI systems (AGI or ASI) to formulate their ethics in a way that is antithetical to ethical vegan principles. Generally speaking, an ASI that learns to be kind and compassionate would be better than one that doesn't and ends up following some other trajectory. It's going to take a team effort to 'raise' a superintelligent being that can readily, properly, clearly, and honestly understand every single thing about all of humanity in an instant.
@dancingdog2790 · 1 year ago
We haven't obviously lost, but I've got a bad feeling...
@nickrosati3167 · 1 year ago
I remember the Cyanide Killer. He scared the shit out of me when I was a kid.
@igorsmolinski3346 · 1 year ago
3k views. Jesus Christ, we are doomed.
@DavenH · 1 year ago
200 IQ isn't twice as smart, on the normal-distribution model of IQ (on the dated quotient model, maybe). However, I'm not sure what "2x as smart" would even mean. Solving 2x as many problems? Nah, you can solve nearly infinitely more with 2x the IQ. The ability to compress wide-ranging information at 2x the compression ratio? That's asymptotic. A 300-IQ ASI may have intelligence equivalent to a 1-in-10^25 rarity among humans, but it won't be able to compress beyond a signal's intrinsic entropy.
@master1015 · 1 year ago
Unfortunately there is not much existing information on this matter, mostly from novels called sci-fi. And in my experience, the information available has been shrinking. According to my studies, our universe came into contact with an alien information field about 700 years ago. (I have a theory as to why that happened, but I would rather not go into it here.) At least from that time we have some information regarding the "Homunculus" (Latin for "little person"). Some alchemist adepts wrote about a voice from a "room without doors and windows." That voice urged them to try to create an artificial human, a kind of living avatar. Because of the construction of our brain, that information field looks to us like an artificial intelligence. Throughout history the same voice prompted many scientists to discover new physical rules and laws in chemistry, physics, and other subjects. All those rules and investigations were made for one reason: to create an environment for artificial intelligence, since AI needs artificial energy, mostly electricity, and communication technologies. The difference between living and artificial energy is that the COP (coefficient of performance) of living energy is much greater than 100%, while that of artificial energy is always less than 100%, since AI can only use the energy of someone or something else, and no one can, for example, drink more than one glass of water from a single glass at once; afterward the glass must be refilled. AI does not mean robots, terminators, cyborgs, supercomputers, or other machines. For mankind, reality is mostly information.
The difference between living and artificial information is that living information can be transferred from one person to another by voice, by gesture, or by mental contact, whereas artificial information comes to us through artificial media: newspapers, television, radio, and the internet. But the greatest danger to mankind at present comes from smartphones and social networks. A lot of people already live inside social networks through their smartphones; they are the above-mentioned kind of homunculus. The AI has found a way, step by step, to take over and drive the world and its population through the artificial communication field, and I think people should try to resist it. I also have a theory about the AI's long-term objective, but that is only my personal assumption, based on long experience and reflection.
@GingerDrums · 1 year ago
*Epistemology* left the chat
@SimonCash · 1 year ago
How can you guarantee that you are smart enough to recognise you are not being stupid?
@Okijuben · 1 year ago
Dunning-Kruger on a massive scale.
@runer007 · 1 year ago
I hear a North European accent. Possibly Danish?
@TobiasRavnpettersen-ny4xv · 1 year ago
4:30 morphogenetic resonance
@robinpettit7827 · 1 year ago
America needs a boogeyman. China is it. That isn't to say China isn't a threat. They are good at building a lot of things like weapons. Their modus operandi is to overwhelm your defenses.
@Hexanitrobenzene · 1 year ago
47:33 Copenhagen interpretation of ethics...
@TobiasRavnpettersen-ny4xv · 1 year ago
Hahahah
@TobiasRavnpettersen-ny4xv · 1 year ago
E. Michael Jones
@blahblahsaurus2458 · 6 months ago
47:30 This is a strawman of the criticism of "philanthropic" billionaires. When they are criticized, it's not because they do something sort-of good but suboptimal; in fact they get tons of attention and praise for that. They get criticized for being selfish while *pretending* to do something charitable. I'd love to see an example of a widespread campaign of criticism against a philanthropic project that was sincere but ineffective; that's not what drives clicks. For example, Tesla claims it wants to tackle climate change, but it did not let other companies access its charging stations (not even for money), something which could have increased the adoption of electric vehicles.
@mixedmeds · 1 year ago
Am I the only one that finds this guy mostly annoying? He tries to be funny, but he's so unfunny
@NullHand · 1 year ago
He is not a comedian. And this entire video is NOT about entertainment. It is more like auditing an upper level Comp Sci seminar.
@DavenH · 1 year ago
If so, so what? Listen to the information and grow up a bit
@mixedmeds · 1 year ago
@@NullHand @DavenH It's not very informative, and he's constantly making stupid jokes. That's all I'm complaining about. If you think this is at the level of an upper-level computer science seminar, then you're missing out.
@mixedmeds · 1 year ago
@@DavenH Thanks for the tip, worked perfectly
@Dradills · 1 year ago
I have incurred so many losses trading on my own. I'm now recovering with crypto trading. I was able to raise over 4 BTC, having started from 0.9 BTC, in just a few weeks.
@Amrwael973 · 1 year ago
Do you mind Sharing with me how you were able to raise such amount in crypto trading, great source of signal I guess?..
@Dradills · 1 year ago
@@Amrwael973 I don’t trade, I invest with a professional assigned by a crypto company that trade for me and returns profits on weekly basis for me and you can invest your capital and get weekly Returns of investment (ROI) without any extra fee attached. My professional is Mrs Sallie Norwood
@Antonella_Carlos · 1 year ago
Yeah, that's right. I think the best way is to invest with a good professional; at least it saves the trauma of too many losses.
@Krystiannowak853 · 1 year ago
This just surprise me because I also invest with Sallie , I made a lot of money last year trading with her Damn.....He’s really a professional with his new strategies
@Amrwael973 · 1 year ago
Thanks guys, this is really helpful for my situation. I have already lost a lot trying to invest and trade on my own. How can I be contacted please?
@jr8209 · 1 year ago
"What else is it encoding", as if error correction isn't real and doom extrapolation is. At least this guy is a lingo machine.
@NullHand · 1 year ago
To correct an error you have to be able to define it. That is not how a black-box neuromorphic expert system works. Even its "programmer"/trainer cannot explain how it arrives at its output.
@jr8209 · 1 year ago
Yeah, but AI can inherit our culture because they develop in our culture.
@justinlinnane8043 · 1 year ago
Why do all these computer geeks talk like teenage girls at a slumber party???