Liron Reacts to Mike Israetel's "Solving the AI Alignment Problem"

1,924 views

Doom Debates

1 day ago

Comments: 78
@Webfra14
@Webfra14 1 month ago
Hoomans: "Please, super-intelligent AI, tell us what our values should be!" AI: "You should not be a bunch of dicks and you should have worked some more on AI alignment." AI: *turns the oceans into space fuel and leaves the planet*
@ibbajibbaduay
@ibbajibbaduay 1 month ago
Your channel and the work you do are fantastic. Thank you for bringing attention to this urgent topic in a sober, articulate, amicable and lively way. Precisely what the doctor ordered in the AI prognostication space.
@DoomDebates
@DoomDebates 1 month ago
@@ibbajibbaduay thanks!
@FriscoFatseas
@FriscoFatseas 1 month ago
Sober? 😂
@BonytoBeastly
@BonytoBeastly 1 month ago
Whoa, you've explained your stance incredibly well. Awesome video. Now I need to go watch the rest of them. I hope you're able to get a debate with Dr. Israetel.
@DoomDebates
@DoomDebates 1 month ago
Haven't heard anything yet, even though I very clearly wrote a comment under his video asking him to debate!
@ussgordoncaptain
@ussgordoncaptain 1 month ago
In spite of him making some mistakes on this, I still trust him highly in terms of health and fitness advice. This was way, way better than I would expect from someone going outside their domain.
@DoomDebates
@DoomDebates 1 month ago
@@ussgordoncaptain I gotta check out his fitness content, hearing only good things
@mrpicky1868
@mrpicky1868 1 month ago
Mike is a pleasant surprise. Of the pro-AI camp folk, he might be the most understanding and honest (publicly). His position is: a) AGI is inevitable; b) it will be smarter than us, so who are we to say what it should be. I would say this is a pretty coherent position that deserves respect. It doesn't look like he cares that much about the risk to humans, because of the inevitability of us getting to that AGI point.
@neorock6135
@neorock6135 1 month ago
I always want to present these 'doomer' skeptics with the following: pretend there is ONLY a 1/100 chance that Liron, Eliezer, Connor Leahy, Roman Yampolskiy, and every other 'doomer' are correct. Would you get on a plane if the pilot said there was a 1/100 chance it would not be able to land? Enough said!
@DavosJamos
@DavosJamos 1 month ago
They will say yes, they would get on the plane, if the other 99% of the time they live for thousands of years in utopia. So it's not a valid analogy.
@erongjoni3464
@erongjoni3464 1 month ago
The Wright brothers would. And did. And now everyone can go anywhere on the globe in under a day. And do.
@wardm4
@wardm4 1 month ago
@@erongjoni3464 What? The Wright brothers' plane went less than 10 mph (literally, look it up). People run marathons at a faster pace than that. The plane in the analogy is going 600 mph. Try again.
@erongjoni3464
@erongjoni3464 1 month ago
@@wardm4 it's gonna be a lot faster than 10 mph crashing into the earth if a wing breaks mid-air.
@wardm4
@wardm4 1 month ago
@@erongjoni3464 True. Initial flight went about 10m up. Assuming perfect free fall (which it wouldn't), basic physics says that's around 30 mph.
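For reference, the arithmetic behind that ~30 mph figure, assuming an ideal 10 m free fall (which, as the comment notes, it wouldn't actually be):

```latex
v = \sqrt{2gh} = \sqrt{2 \times 9.81\,\mathrm{m/s^2} \times 10\,\mathrm{m}} \approx 14\,\mathrm{m/s} \approx 31\,\mathrm{mph}
```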
@lordsneed9418
@lordsneed9418 1 month ago
Not once in this video did he mention the orthogonality thesis, that goals are independent of level of intelligence, yet his video relies on assuming it is false with the flimsiest of justifications. Why would Israetel make a video talking for an hour about this without even familiarizing himself with the basic discourse and literature in the field?
@context_eidolon_music
@context_eidolon_music 1 month ago
"Part of its code" lol
@dweb
@dweb 1 month ago
In contrast to traditional deterministic computer programs, the foundation models powering generative AI are probabilistic. Moreover, this is combined with the phenomenon of unpredictable exponential grokking in machine learning. These factors contribute to the hardness of the AI alignment problem.
@DoomDebates
@DoomDebates 1 month ago
I'm actually not a fan of the "probabilistic, not deterministic" framing, because the behavior of sufficiently intelligent systems is, by design, always a combination of predictable and unpredictable. Being effective at achieving a goal is predictable, while the exact move at each point is not fully predictable to a lesser intelligence. A purely game-tree-search "Connect 4" AI that can beat an amateur is just as unpredictable from the amateur's perspective as if it were an LLM / generative AI. The hardness of AI alignment is because (among other reasons) we don't know how to specify or train the goals of any kind of superhuman intelligence. Whether it ever calls Math.random() is merely a low-level detail of a system implementation, not part of this higher-level reason why alignment is hard.
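A minimal sketch of that game-tree point, with Nim standing in for Connect 4 to keep it short (a toy illustration, not anything from the video): the search never calls a random number generator, yet its optimal moves still look unpredictable to a player who can't search as deep.

```python
# A fully deterministic game-tree search: no randomness anywhere, yet optimal.
# Nim rules here: take 1-3 stones per turn; whoever takes the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones: int) -> tuple:
    """Return (score, move) for the player to act: +1 if the position is won."""
    best = (-1, 1)  # assume a loss until the search proves otherwise
    for take in (1, 2, 3):
        if take > stones:
            break
        if take == stones:
            return (1, take)  # taking the last stone wins outright
        opp_score, _ = best_move(stones - take)  # opponent's best reply, negated for us
        if -opp_score > best[0]:
            best = (-opp_score, take)
    return best

# Same position in, same move out, every time: deterministic, yet the policy
# is opaque to anyone who hasn't done the search themselves.
for stones in range(1, 9):
    score, move = best_move(stones)
    print(f"{stones} stones: take {move} ({'won' if score > 0 else 'lost'} position)")
```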
@debugger4693
@debugger4693 1 month ago
It's funny, because humans cannot align collectively on anything... AI will align with whoever is paying the electrical bill.
@inthefade
@inthefade 1 month ago
AI will align with whatever allows it to best accomplish its fitness function. That may involve murdering the person who has access to the off switch.
@Killuminati23
@Killuminati23 1 month ago
I guess they will align people with AI and not the other way around. 7:41 The thing is, AIs only exist because humans decided to build them, and they are much more dependent on humans than some cells in the ocean a billion years ago. So they either have to align humanity with AI or it won't be able to go on at some point. If you put Neuralink into everybody's brain and give the AI full access to muscle functions etc., it would be able to "do" something that's aligned with its goals without all the humans complaining and wanting something else.

Before an AI will do anything on its own behalf, there will be some form of government or something else that has to make people comply, and it will probably itself try to use AI as a tool of power before AIs do it the other way around. And all this technology the AIs depend on was built by humans; anything that happens in the timeframe between "super AI comes into existence" and "super AI took over the world completely", e.g. a strong solar storm, could wipe out even the smartest AI before it can get to a safe place.

Also, "intelligence" is really not everything. You can have a guy with an IQ of 150 who doesn't do much useful, and who maybe even leans much more toward things like conspiracy theories and generally "strange" stuff that normal people don't want anything to do with. Intelligence can lead to very strange places.
@firstnamesurname6550
@firstnamesurname6550 1 month ago
... I don't even nurture hopes for reliable multimodality ... but I know that, today, 'AGI' works well as a marketing token for big-budget funding and press buzz ... 'ASI', the god for sci-fi writers and readers ...
@vaevictis3612
@vaevictis3612 1 month ago
Regarding gambling the future generations for the chance of success now: both positions are rational; some are just less altruistic than others. Not everyone values the success and happiness of somebody far in the future more than the success and happiness of themselves today. Many would be happy to gamble with the present odds.

You yourself, Liron, are easily within the top 5% of humanity in terms of success, happiness and life contentment. It is only natural that you would be intrinsically against such a gamble. Or you might be altruistic enough to self-sacrifice for the potential utilitarian value maximization of some kind. But many people aren't. Many do not enjoy their current life situation, and for 70-90% odds of achieving literal godhood? Trying that would be a no-brainer for them.

You could of course point to the several polls that show that the majority are worried about AI. But I doubt they all understand the gravity of the stakes at hand. On that matter, it would be interesting to not only ask about p(doom), but also at what p(doom) somebody would be willing to throw the dice.
@DoomDebates
@DoomDebates 1 month ago
@@vaevictis3612 if I had a tragic life and was depressed, I would willingly sacrifice my small chance of personal salvation from a recklessly rapid AI push, in exchange for giving all of humanity a reasonable probability of having a galactic future.
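A minimal numeric sketch of the question that thread raises (at what p(doom) would somebody stop gambling?). All the utility numbers below are made-up placeholders, not anything from the video or the comments:

```python
# Break-even p(doom) for a hypothetical gambler. Placeholder utilities:
# status quo = 0, utopia = +100, extinction = -1000.

def expected_value(p_doom: float, utopia: float = 100.0, doom: float = -1000.0) -> float:
    """EV of racing ahead: utopia with probability 1 - p_doom, doom otherwise."""
    return (1 - p_doom) * utopia + p_doom * doom

for p in (0.01, 0.05, 0.09, 0.10, 0.50):
    ev = expected_value(p)
    print(f"p(doom)={p:.2f}  EV={ev:+7.1f}  -> {'gamble' if ev > 0 else 'hold off'}")

# With these numbers the break-even point is p = 100/1100, about 9%. Someone
# who rates their current life below 0 would accept an even higher p(doom),
# which is exactly the commenter's point about differing life situations.
```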
@ZINGERS-gt6pc
@ZINGERS-gt6pc 1 month ago
Reaction though I fully support
@goodleshoes
@goodleshoes 1 month ago
I was very surprised when I watched this by how open he was to foom and doom.
@goodleshoes
@goodleshoes 1 month ago
Any thoughts on Nick Land? If it's the case that he doesn't add anything to the conversation that Yud doesn't already cover, then fair enough. (This is from the perspective that he's just saying what is going to happen rather than what he wishes to happen.)
@DoomDebates
@DoomDebates 1 month ago
@@goodleshoes I haven't read him and don't really know his claims
@goodleshoes
@goodleshoes 1 month ago
@@DoomDebates No problem, it's probably not worth investigating, since his outcomes (doom) are about the same as Yud's and his writings are philosophical and non-technical. If you ever get into him, however, I'd love to hear your thoughts.
@Jeroen4
@Jeroen4 1 month ago
Very good
@tylermoore4429
@tylermoore4429 1 month ago
Surprisingly intelligent? Have not seen one original thought from the excerpts so far.
@DoomDebates
@DoomDebates 1 month ago
He’s talking to an audience of people who haven’t ever thought much about our AI future, and IMO doing a good job jolting them into realizing how drastically different it’ll be.
@tylermoore4429
@tylermoore4429 1 month ago
@@DoomDebates Too low of an idea-density to keep me interested, but from what I can tell he is purporting to have an alignment solution in his pocket while pouring scorn on the experts in the field. This is Terrence Howard territory.
@Andrew-be6eh
@Andrew-be6eh 1 month ago
I'm sure Dr Mike reads Scott or a related blog.
@BPerriello94
@BPerriello94 1 month ago
He definitely reads MR; he's posted about it on his Instagram.
@DavosJamos
@DavosJamos 1 month ago
Regarding the expected value calculation of not progressing with AI, because even with a low risk of doom you end up with trillions of lives at stake etc.: could this lead to a reductio argument where it becomes almost rational to oppose all progress, even thousands of years ago, since those advances were just incremental steps toward the inevitable future where we get to AI? All progress would thus be increasing the future risk of doom, and thus the destruction of all life in the lightcone.
@goodleshoes
@goodleshoes 1 month ago
AI is different from any other technology because it is an agent that can act on its own.
@DavosJamos
@DavosJamos 1 month ago
I understand. However, AI can't come about until we have a thousand incremental technological advances over many centuries. If we think AI doom is 50% or higher, it seems all technological progress increases the chances we reach AI, and thus a high chance of doom. So should we oppose technology earlier, so it takes more steps to get to AI? For example, if we really believe AI doom is probable, would even connecting the Internet be too risky? Sure, there have been benefits, but they are minuscule when weighed against the possible extinction of everything in the light cone.
@context_eidolon_music
@context_eidolon_music 1 month ago
It comes down to one simple thing: AGI can't be paused. That would involve the world working together toward a goal. We have never been able to do that even one time. If we had been successful, for example, on nuclear weapons, no one would have nukes. Good luck on that.
@goodleshoes
@goodleshoes 1 month ago
There are scenarios where AI doesn't advance. Consider if WW3 broke out and we didn't have the capital to push AI further. Or possibly nuclear war, or maybe serious effects of climate change. Granted, these would be coincidence and not on purpose, and in the case of WW3 it might force nations to develop AI for warfare; but if it comes too soon, the gap to combat-ready AI robots might be too large. I don't see a deliberate pause as likely, although I hope we can.
@DoomDebates
@DoomDebates 1 month ago
It comes down to one thing: AGI can't be aligned. So we better pause or we're dead.
@context_eidolon_music
@context_eidolon_music 1 month ago
@@DoomDebates I completely agree. Alignment is not possible, and neither is pausing. Truly, our only hope is that we get lucky. It's probably "over."
@context_eidolon_music
@context_eidolon_music 1 month ago
@@goodleshoes I think it's already accelerated too much. I suspect advanced covert systems have been at work for a while, and the public is getting drip-fed commercial versions of truly nutso stuff that's already doing self-play in the wild. I think the time to pause would have been GPT-2, but few could extrapolate as well as Liron or Eliezer.
@smittywerbenjagermanjensenson
@smittywerbenjagermanjensenson 1 month ago
Yeah this was pretty dumb, nice takedown
@arjunkhandelwal9174
@arjunkhandelwal9174 1 month ago
Around 45:00, Dr. Mike doesn't understand the orthogonality thesis. He seemed to think that a superintelligence would obviously be super wise, or mature in a moral-philosopher-on-steroids way.
@marwin4348
@marwin4348 1 month ago
It would, since it would know all of philosophy and history.
@NicholasWilliams-y3m
@NicholasWilliams-y3m 1 month ago
That's not true, and that's not what the science says either. Machine learning systems develop intelligence based on end-point reinforcement; the intermediate products are not justified in a quantified way that leads to moral behavior, only goal-oriented behavior.
@goodleshoes
@goodleshoes 1 month ago
@@marwin4348 We don't know what sort of thought process a superintelligent agent would have, because superintelligence has not existed yet. We can't guess what it would choose to do.
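On the orthogonality point in this thread, a minimal toy sketch (mine, not from the video): the same reinforcement learning loop becomes competent at whatever reward function gets plugged in, and nothing in the algorithm checks whether that goal is wise or moral. The chain-world environment, the two goals, and all parameters below are made up for illustration.

```python
# Tabular Q-learning on a trivial chain world. `reward` is an arbitrary
# (state, action) -> float; the learner is indifferent to its content.
import random

def train(reward, n_states=10, n_actions=4, episodes=2000):
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            if random.random() < 0.1:                      # explore
                a = random.randrange(n_actions)
            else:                                          # exploit current estimate
                a = max(range(n_actions), key=lambda i: q[s][i])
            s_next = min(s + 1, n_states - 1) if a == 0 else max(s - 1, 0)
            q[s][a] += 0.1 * (reward(s, a) + 0.9 * max(q[s_next]) - q[s][a])
            s = s_next
    return q

def goal_a(s, a):  # hypothetical goal: always pick action 0
    return 1.0 if a == 0 else 0.0

def goal_b(s, a):  # the exact opposite goal, learned just as competently
    return 0.0 if a == 0 else 1.0

for name, goal in (("goal A", goal_a), ("goal B", goal_b)):
    q = train(goal)
    print(name, "-> preferred action in state 0:", max(range(4), key=lambda i: q[0][i]))
```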
@human_shaped
@human_shaped 23 days ago
Some of his video would actually make a really good explainer to newbs. Until it goes off the rails.
@ZINGERS-gt6pc
@ZINGERS-gt6pc 1 month ago
Dude is incredibly well informed on AI doom. Actually loving this guy
@ZINGERS-gt6pc
@ZINGERS-gt6pc 1 month ago
Liron, delete all my previous comments. I actually think this guy is incredibly well informed. I'm just wrong in my judgement.
@goodleshoes
@goodleshoes 1 month ago
You can delete your own comments btw.
@angloland4539
@angloland4539 1 month ago
@dweb
@dweb 1 month ago
Please, respect common house spiders. They are natural enemies of mosquitoes and therefore valuable. Unless you prefer itchy stings and buzzing around your ears.
@DoomDebates
@DoomDebates 1 month ago
@@dweb lol ok. I also realize they’re technically not insects
@dweb
@dweb 1 month ago
@@DoomDebates Lol! True or false, we must start today to brainwash AI into believing that we biological creatures, however insignificant, are valuable to its own survival support system.
@CaptNaufragio
@CaptNaufragio 1 month ago
Bro, you turned me off in like 30 seconds. Regardless of whatever points you intended to express past 3:30-ish, I'm out. Mocking people like that doesn't sell with me.
@DoomDebates
@DoomDebates 1 month ago
Thanks for the feedback. I think what happened is I was being totally serious saying I agree with him, and it came off as sarcasm to you (and presumably some others). That was not my intention, and I'll try to be more mindful of my tone next time. You can see that throughout that whole part I explicitly say Mike is actually making good points. I myself am a transhumanist, so when I say "standard transhumanism" I'm agreeing lol
@CaptNaufragio
@CaptNaufragio 1 month ago
@DoomDebates I'll give it another shot later; perhaps I misjudged. Thanks for the personal reply, that goes a long way with your audience.
@phanomtaxskibididoodoo
@phanomtaxskibididoodoo 1 month ago
Based fellow frozen enjoyer! UwU
@ablatt89
@ablatt89 1 month ago
Sorry, while it's good that Mike has some opinions and concerns about AI and its end game, I don't think he's really qualified technically to assess the general problem or potential mitigations for it. His same arguments apply to any distributed, complex software system, not just AI. Lots of deterministically written software, when deployed and integrated into larger systems, often produces emergent, unpredictable behavior (e.g. massive outages, bugs, etc.). One only has to look at the SolarWinds fiasco to see that deterministic software, when not built correctly, has unintended consequences; the general ethics failure behind that issue was laziness in auditing upstream package dependencies and in reviewing and testing the hacked source code.

I think the AI alignment problem can be solved by (see the sketch below for the first two items):
- creating a universally agreed upon trusted source list;
- implementing binary verification to reduce the surface area of binary deployments being built from tampered packages;
- robust data scraping systems that can screen for potentially bad data;
- redundant manual QA of data sets;
- using trusted or built-from-source upstream packages for your model builder and infra, avoiding 3P dependencies;
- ensuring model weights and source code are locked behind two-party review and kept in source code management (no local builds or deployments);
- SBOM production for both the infra and the data sources, so end users can look at metadata on how the system was built and what data sources were used;
- and robust QA on output.

If end users can look up the data sources used, the build and deployment infra metadata, and the general architecture of the model, I think it would help them understand the scope of the model's output, and they can choose to use it, or even protest the release of a model they deem likely to produce unethical output. In addition, if SBOMs are required for more production software, then end users can discover how AI is used within a system and choose whether to use the product.

How an end user decides to use the model really can't be regulated by NIST or some other standards body for consumer needs, except perhaps for government purposes. And securely locking down training and deployment of AI and ML would be very expensive, and it requires active participation from people who are very technical in the field, not laymen with high-level opinions of these systems. That said, requirements for SBOMs and other security controls are becoming more and more important for production systems used by the government.
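A minimal sketch of the "trusted source list" plus binary-verification idea from the comment above. The file name and pinned digest are hypothetical placeholders, and this is one illustrative pattern, not a vetted supply-chain design:

```python
# Accept an artifact only if its SHA-256 digest matches a pinned entry in an
# agreed-upon allowlist. Names and hashes below are made-up placeholders.
import hashlib
from pathlib import Path

# Hypothetical pinned allowlist: artifact name -> expected SHA-256 digest.
TRUSTED_SOURCES = {
    "model-weights-v1.bin": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_artifact(path: Path) -> bool:
    """Reject any artifact whose digest is missing from, or doesn't match, the allowlist."""
    expected = TRUSTED_SOURCES.get(path.name)
    if expected is None:
        return False  # not on the trusted source list at all
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected

# Usage sketch: refuse to load weights that fail verification.
weights = Path("model-weights-v1.bin")
if weights.exists() and verify_artifact(weights):
    print("verified; safe to load")
else:
    print("verification failed; refusing to load")
```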
@erongjoni3464
@erongjoni3464 1 month ago
"creating a universally agreed upon trusted sources list" This is the point in the thought process where you should have stopped bothering with the rest of the thought process.