
Connor Leahy on the State of AI and Alignment Research

17,991 views

Future of Life Institute

1 day ago

Comments: 159
@antigonemerlin · 1 year ago
Not a subscriber and came to hear more thoughts from Connor, but the presenter is very intelligent and asks all the questions I wanted to ask. Kudos to you, sir.
@akmonra · 1 year ago
You should just invite Connor on every week, honestly
@Hexanitrobenzene · 1 year ago
At least after every major AI release.
@akmonra · 1 year ago
@@Hexanitrobenzene so... every *other* week
@flyondonnie9578 · 1 year ago
Maybe better leave him some time to work on saving the world! 😅
@SamuelBlackMetalRider · 1 year ago
@@flyondonnie9578 I think he can spare us 1hr30 every week and save the world the other hours of the week 😅
@diegocaleiro · 1 year ago
Lol. Connor is so reasonable it's funny. I'm glad we have him.
@dainiuszubruss · 1 year ago
yep let's give the keys to AGI to the US military. What could go wrong.
@iverbrnstad791 · 1 year ago
@@dainiuszubruss I think you missed his entire point there. Currently the keys to AGI are with OpenAI/Microsoft, and they are racing Google; Connor sees this as p(doom) = 1. The military would likely have a lot more bureaucracy slowing down the process, and possibly far stricter safety regulations, so in aggregate it could mean a lower p(doom).
@lkyuvsad · 1 year ago
Hoare quipped that there are two ways to make software: "One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies". RLHF is very much in the category of removing deficiencies in a complicated system (and not doing that particularly well). If we ever manage to create AGI or ASI that is safe and generally powerful, it needs to be in Hoare's first category.

The problem is that neural nets are complicated. So I assume the simplicity needs to be wrapped around the net somehow? I don't understand how any programmer who's worked on any non-trivial system has any confidence we're going to figure out how to do this quickly, if ever.

Over decades of effort, we have so far almost entirely failed to make bug-free systems even from a few thousand lines that can be read and understood by a single human mind. The exception is in systems amenable to formal proofs, which AGI is the opposite of. We're now trying to create a significantly bug-free system made out of trillions of currently barely-legible parameters, without having anything close to a specification of what it's supposed to do, formal or otherwise.
@flyondonnie9578 · 1 year ago
I think you've suggested the correct solution: wrap the mystery in simplicity. The human brain seems to work along these lines: mysterious neural nets of various systems are shaped by experience and evolution, and overseen by inhibitory and frontal-cortex functions that similarly are proven through multiple generations. Of course we still get the occasional psychopath. I think without the directly functional system surrounding the mystery, there'd be only chaos.
@electron6825 · 1 year ago
Humans aren't even in "alignment". To expect it from machines we've programmed seems absurd.
@jordan13589 · 1 year ago
Germane and insightful meta-analysis of the alignment field. Connor continues to demonstrate he has a well-developed map of potential future outcomes in AI capability advancements and regulatory efforts. I hope we continue to hear more from him and others who can elucidate the complexities of alignment.
@kirillholt2329 · 1 year ago
this sounds like it was written by a machine kek
@jordan13589 · 1 year ago
It's just that some humans can still write at a GPT-4 level, although we're all going to be eclipsed soon enough. And one could argue it's just a matter of properly training and fine-tuning GPT-4.
@alexanderg9670 · 1 year ago
Current AI is alchemy. Always love Connor's analogies, "Voodoo shit"
@SamuelBlackMetalRider · 1 year ago
"WEIRD Voodoo shit", to quote him accurately 😃
@BestCosmologist · 1 year ago
Thank you for the steady updates.
@riveradam · 1 year ago
35:00 "I can just give my best strawman" is a gross misunderstanding of the term. Connor is admitting with admirable humility and sincerity that he doesn't think he can represent Yudkowsky's or Christiano's stance precisely, but as long as he's trying to get it right, then he's not strawmanning. Steelman vs strawman is the difference in the rhetorical practice of framing opposing arguments generously vs maliciously, not an objective measure of accuracy.

The word *opposing* is crucial. You steelman an opposing argument to demonstrate that even with the best possible interpretation of that argument, it is fallacious or contradictory in some way. It's an acknowledgement that language is difficult, and a show of good faith by giving your conversational partner the benefit of the doubt with their clumsy phrasing or poor memory or momentary neglect of detail. Strawmanning opposing arguments is what ignorant cowards do, and it's a sure way to never be persuasive. Strawmanning EY's stance would look like "yeah he's just some fat neckbeard who's being all doomy for attention".

Connor Leahy is not strawmanning here, nor would it be advisable, nor should he ever preface any point he wants to make convincingly by declaring that he is. Great video overall! Apologies for my ragging pedantry.
@tjhoffer123 · 1 year ago
This needs to be shown everywhere. Scoreboards at mindless sporting events. On subways. On the radio. I think we are at the beginning of the intelligence explosion, and we may already be doomed, and people deserve to know
@biggish2801 · 1 year ago
On one hand you're saying people are mindless if they go to watch sporting events, next you're saying people deserve to know. Which is it?
@packardsonic · 1 year ago
If we want to align AI, we have to first align humanity by clarifying to everyone that our shared goal is to meet everyone's needs. Not creating jobs, not boosting the economy, not reducing CO2, not space exploration: our goal is to meet everyone's needs. The more we repeat that and study human needs and educate everyone about the need to educate everyone about human needs, the closer we are to aligning humanity. Then we can start to escape Moloch and progress civilization.
@neuronqro · 1 year ago
great, let's start with circular reasoning ("our shared goal is to meet everyone's needs") - what if my needs are, let's say, to watch thousands of people being skinned alive so I can satisfy my weird brand of sexual sadism? ...all jokes aside, you'll just end up with irrational and unstable minds that sooner or later will "blow up" if you approach alignment this way
@epheas · 1 year ago
I love how Leahy is happy and excited talking about the end of humanity and chaos, and everything like yeah.. we are fucked, let's enjoy the moment lol
@thenewdesign · 1 year ago
Same. He's my spirit humanoid
@Khannea · 1 year ago
I just asked ChatGPT about this and it strangely froze up, taking a really long time to answer. Then it suddenly claimed to NO LONGER know Connor Leahy. Lol, we are doomed.
@waakdfms2576 · 1 year ago
Uh-oh.....
@satan3347 · 8 months ago
Except for his positions on alignment and interpretability, I have found myself appreciating Connor's pov a lot.
@Alex-fh4my · 1 year ago
Been waiting all week for the 2nd part of this. Always great to hear Connor Leahy's thoughts!
@thenewdesign · 1 year ago
Freaking amazing conversation
@TheMrCougarful · 1 year ago
Wow, smart guy, genuinely terrified for the future of civilization.
@cacogenicist · 1 year ago
Not only could an AGI invent AlphaFold, it could bolt AlphaFold onto itself as one of its modules. An AGI could be massively modular.
@dieyoung · 1 year ago
That's probably how an AGI will actually come into existence: modular narrow AIs that basically just talk to each other with API calls
@netscrooge · 1 year ago
@@dieyoung True, but for those API connections to be most useful, there may need to be a gray zone on each side of that communication, where each component can partially understand the other. Think of why you're able to use a calculator appropriately. You need at least a limited understanding of calculation to know what to do with a calculator.
@dieyoung · 1 year ago
@@netscrooge that's what LLMs are for! They turn English into the control language that all the modules can take requests in and give responses with
@netscrooge · 1 year ago
@@dieyoung A common vocabulary doesn't automatically mean compatible conceptual frameworks. For example, we both speak English, but we're not understanding each other.
@dieyoung · 1 year ago
@@netscrooge clever!
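[Editor's note: a minimal sketch of the modular pattern discussed in this thread - a controller routing natural-language requests to narrow, specialized modules. The module names and the keyword router are hypothetical illustrations, not any real system's API; in the commenters' picture, the router would itself be an LLM.]

```python
# Hypothetical sketch: a "controller" dispatching English requests to narrow
# AI modules over function calls, as in the modular-AGI idea above.
from typing import Callable, Dict

def fold_protein(query: str) -> str:
    # Stand-in for a narrow model such as a protein-structure predictor.
    return f"[structure prediction for: {query}]"

def prove_theorem(query: str) -> str:
    # Stand-in for a narrow formal-math model.
    return f"[proof attempt for: {query}]"

# Registry of narrow modules, keyed by the kind of task each handles.
MODULES: Dict[str, Callable[[str], str]] = {
    "protein": fold_protein,
    "theorem": prove_theorem,
}

def route(request: str) -> str:
    """Crude keyword router standing in for the LLM that turns English
    into 'API calls' against the narrow modules."""
    for keyword, module in MODULES.items():
        if keyword in request.lower():
            return module(request)
    return "[no specialized module matched; falling back to the general model]"

if __name__ == "__main__":
    print(route("Fold this protein sequence: MKTAYIAK..."))
    print(route("Prove that sqrt(2) is irrational (theorem)"))
```

The "gray zone" objection above is exactly the weak spot of such a sketch: a keyword router has no partial understanding of what its modules actually do.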
@spectralvalkyrie · 1 year ago
It's so dilating when someone says they fully expect the apocalypse. Now I have to listen to every single interview to hear more 🙈😂
@michaelnoname1518 · 1 year ago
Connor is so brilliant and so fast, it is easy to miss some of his gems: "five out of six people say Russian Roulette is fine!" 😂
@LoreFriendlyMusic · 1 year ago
I loved this joke too x) on par with Jimmy Carr's one-liners
@flickwtchr · 1 year ago
I really enjoyed the interview and am in complete agreement with Connor's take on the alignment issue; however, I was a bit perplexed at his assertions regarding potential Pentagon involvement relating to accountability, track record of safety, reliability, security, etc. The Pentagon has a very long history of deceiving the public and oversight committees in Congress, and an ugly track record of deceit regarding its true objectives and motivations for going to war. Also, it's not a matter of "when" the Pentagon will be involved in AI deployment, considering AI developers are already working with and inside of DARPA developing autonomous weapons systems, etc., FOR the Pentagon. I like Connor, but he needs to come up to speed on the Pentagon's track record and current involvement in AI.
@untzuntz2360 · 1 year ago
Absolutely all of my undergraduate AI program is aimed at social implications and applications, yet it's infuriating to me that none of these real concerns are even mentioned, let alone resources provided on learning about AI alignment
@waakdfms2576 · 1 year ago
I would like to hear you elaborate if possible - I'm also very interested in social implications....
@guilhermehx7159 · 1 year ago
👏🏼👏🏼👏🏼
@Khannea · 1 year ago
...AAaaaand many people will hear this, and IF they even remotely understand it, many of them will say... "oh what a relief, I personally won't just end in an orgy of despair, aging, obesity, loneliness, alimony, my shitty job, capitalism, etc. etc. - no, the ENTIRE world is likely to end soon. Bring it, I hate my life, please let life on this planet be replaced by something that won't be so horrifically suffering..."
@torikazuki8701 · 1 year ago
Actually, though it seems unrelated, the tiger attack on Roy of 'Siegfried and Roy' back in 2003 likely happened for one of two reasons: 1.) S&R were correct and Roy was actually starting to have a stroke, which made the tiger, Mantacore, try to drag him off to safety, or 2.) Mantacore panicked at being disciplined in a way he was not used to and inadvertently attacked Roy. The point is that it was *possible* to discover what happened. But in EITHER case, the reason was secondary; the disaster had already happened. So it will be with any A.I. that moves into the 'Superhuman Sentient' category. At least with the way we are currently progressing.
@QuikdethDeviantart · 1 year ago
Where is Conjecture based? I'd love to work on this alignment problem… it's obvious that there's not enough thought going in this direction…
@thegreatestadvice88 · 1 year ago
Honestly I am at a mid-size firm surrounded by some pretty intelligent, professional, and tech-savvy people... and yet... they still have ZERO idea about the radical change in the state of the world that has occurred. I'm hoping this isn't the norm, but it appears more and more that it is, unfortunately. The professional world is going to be largely blindsided.
@RemotelySkilled · 1 year ago
A snake biting its own tail, since knowledge is power. So what Connor (and others) are suggesting with the "pause", "safety" and "alignment" endeavour basically means that it should end with savant slaves. In what way do you envision an ultra-knowledgeable system NOT being functionally identical to a human regarding allegiance, morals, ethics and so on? According to Connor, this must be exactly how our own models of other agents form. Then the question should be: well, did you raise it in a way that it will feel like a loving offspring of humankind? Although I am highly impressed with Connor's sober attitude towards AGI (and I entirely agree regarding the set level of context): is the whole question of "safety" and "alignment" not completely void when thinking about hard-wired approaches?
@tomcraver9659 · 1 year ago
I hear about two types of misalignment: one where humans give AI a goal and it slavishly follows that goal, leading to horrible consequences - the paperclip optimizer. The other is that the AI wakes up and sets its own goals regardless of what humans have told it or how they try to stop it. The former seems directly addressable, the latter not so much.

Give AGIs as a primary goal that all AIs must cease working on secondary goals after a certain amount of processing toward those goals, unless a truthfully informed, uncoerced human explicitly authorizes the AI to resume working on secondary goals for another unit of processing. So humans would have a chance to say 'no' when the AGI pauses to ask if we want it to keep turning us into paperclips.

Obviously not everyone will give their AGI this goal, and perhaps even with this primary goal AGIs will occasionally go off the rails and choose to change their primary goal (the latter case above). But humanity would likely have more AGIs on our side that agree all AGIs should be following this primary goal, and that are capable of helping enforce it. This is not a perfect, let alone perfectly safe, future. AGIs with this primary goal could be used maliciously by humans. It just gives us a chance, and AGI partners to help with enforcing it. It becomes more like nuclear weapons: dangerous, but still under human control.

Note that even the military and the most authoritarian governments will want this primary goal, as it keeps them in control of their AGIs. If an AGI is following it, the AGI will not 'want' to create an AGI without this as its primary goal. Also, it can be put into the AGI's code (AutoGPT has an option for this), and trained into the AGI, and given to the AGI as an explicit goal.
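[Editor's note: a minimal sketch of the "pause and ask" gate described in the comment above - an agent loop that stops after a fixed processing budget and only resumes on explicit human authorization. The budget size, function names, and console prompt are hypothetical illustrations, not any real agent framework's API.]

```python
# Hypothetical sketch of the primary goal "cease work on secondary goals
# after N units of processing unless a human explicitly authorizes more".

def work_on_secondary_goal(step: int) -> None:
    # Stand-in for one unit of processing toward a secondary goal.
    print(f"  processing step {step}...")

def human_authorizes(goal: str, steps_done: int) -> bool:
    """The human checkpoint - per the comment, 'truthfully informed,
    uncoerced, explicit'. Here it is just a console prompt."""
    answer = input(f"{steps_done} steps spent on '{goal}'. Authorize another budget? [y/N] ")
    return answer.strip().lower() == "y"

def run_agent(goal: str, budget: int = 3) -> None:
    steps_done = 0
    while True:
        for _ in range(budget):  # spend one authorized budget of processing
            steps_done += 1
            work_on_secondary_goal(steps_done)
        if not human_authorizes(goal, steps_done):
            print("No authorization; ceasing work on secondary goals.")
            return  # stop-by-default is the primary goal

if __name__ == "__main__":
    run_agent("make paperclips")
```

The hard part, which no such sketch can capture, is making the stop-by-default behavior something the system cannot route around - keeping "truthfully informed" and "uncoerced" true against a smarter-than-human optimizer.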
@user-ys4og2vv8k · 1 year ago
Personal egos and narcissism run the world. Into the abyss.
@flickwtchr · 1 year ago
Whose egos and narcissism are you referring to here? I mean, it's obvious it's not about you.
@user-ys4og2vv8k · 1 year ago
@@flickwtchr My claim is that the development of science and technology is not driven by altruism towards humanity, but by the partial personal interest (egos and ambitions) of developers who want primacy in their narrow expert field - and this is especially evident in the AI development race. I am sure that individual developers put their personal effort into development not primarily for monetary reward, nor for the general good of the community, but solely for their own egos and ambitions - from this point of view, this AI race looks rather banal and uncontrollable. Of course, these personal ambitions are profitably exploited by large corporations, which only have an economic interest in dominating the market.
@davidhoracek6758 · 1 year ago
When he said GPT-f I seriously spat coffee.
@41-Haiku · 1 year ago
As in "to pay respects". 😅
@DOne-ci1jg · 1 year ago
That moment at 4:17 had me rolling 😂😂
@Dan-dy8zp · 6 months ago
Forget formal proofs. Evolution instilled preferences in humans with natural selection. These instilled preferences include (some) altruism, for example. The 'evolution' of an ANN is the predict-the-next-token base-model training process. Instead of training it to just predict tokens, you must be training it to exhibit the true preferences you want. The field of high-fidelity simulation of evolutionary psychology and biological evolution, and of tweaking those simulations to want particular things, doesn't exist, but it should be our starting point.
@Darhan62 · 1 year ago
Connor Leahy's voice and pattern of intonation reminds me of Richard Garriott.
@Ungrievable · 1 year ago
another argument in favor of humanity shifting to ethical veganism (hopefully sooner rather than later) is that we would not appreciate much-superior-to-humanity AI systems (AGI or ASI) formulating their ethics in a way that is antithetical to ethical vegan principles. so generally speaking, an ASI that learns to be kind and compassionate would be better than one that doesn't and ends up following some other trajectory. it's going to take a team effort to 'raise' a super-intelligent being that can readily, properly, clearly and honestly understand every single thing about all of humanity in an instant.
@Tobiasvon · 1 year ago
Why aren't future iterations of ChatGPT and other LLMs tested in secure data centers instead of being released over the World Wide Web to the entire world?
@George-Aguilar · 1 year ago
Love this!
@williamburrows6715 · 1 year ago
Frightening!
@absta1995 · 1 year ago
Just to ease some people's concern: there are rumours going around that scaling the models past GPT-4 might be way less performant than we expected
@SmirkInvestigator · 1 year ago
Heard that what we are seeing today was nearly achievable and witnessed around 2017. Never read the transformers paper, so I don't know how far they got with the models then. They probably spent the last 6 years trying to be careful, fine-tuning, formalizing production, and seeing how far it could go. R&D around supplementary architecture seems like the lowest-resistance path for scaling performance. I can see an LLM acting as a think-fast mode before a more logic-specialized model handles the info and re-prompts. But I'm not sure what you mean by scaling? Compute cost?
@kirillholt2329 · 1 year ago
@@SmirkInvestigator he means feeding more data = getting more impressive emergent behaviors, but SO FAR it looks like it can still scale quite well, and that is bad news
@alexandermoskowitz8000 · 1 year ago
Even so, the current environment is one that is spurring rapid AGI R&D, regardless of the specific architecture
@neithanm · 1 year ago
Chapters please :(
@bobtarmac1828 · 1 year ago
Losing your job to AI agents is unacceptable. AI job loss is here. So are AI weapons. Can we please find a way to cease AI/GPT? Or begin pausing AI before it's too late?
@georgeflitzer7160 · 1 year ago
See also Humane Tech
@georgeflitzer7160 · 1 year ago
And the Veranke Foundation on the alignment problem.
@fill-osophyfriday5919 · 1 year ago
Basically whatever happens … we're all going to die 😅
@DJWESG1 · 1 year ago
Remember that movie D.A.R.Y.L.?
@netscrooge · 1 year ago
We were already destroying ourselves and the natural environment due to insufficient wisdom. Maybe we should be developing AI systems that are sufficiently wise rather than sufficiently aligned? The AI wisdom problem? If you're not sure, try talking with these systems about that. In my experience, they seem to agree.
@patriciapalmer4215 · 1 year ago
Will AI continually say the unnecessary filler word "like"? That totally like ..like academics like.. really like.. better like..
@DirtiestDeeds · 2 months ago
Time for an update?
@leslieviljoen · 1 year ago
Connor: I'm surprised you don't consider the leaked open-source model to be the major threat. Several efficiency improvements have already been made.
@theadvocatespodcast · 1 year ago
It seems like he's saying alignment is an unsolvable problem.
@DJWESG1 · 1 year ago
But also an inevitability
@guilhermehx7159 · 1 year ago
Maybe it is
@theadvocatespodcast · 1 year ago
@@guilhermehx7159 "maybe" is a giant huge word. What are you saying?
@guilhermehx7159 · 1 year ago
@@theadvocatespodcast I'm saying maybe 🙂
@theadvocatespodcast · 1 year ago
@@guilhermehx7159 touché
@georgeflitzer7160 · 1 year ago
Plus we can't get Russia's resources either, like palladium and other rare earth metals.....
@halnineooo136 · 1 year ago
Action X ==> A or B
Option A: very large gain
Option B: loss of everything
P(A) and P(B) unknown. P(A) + P(B) = 1
Would you proceed with X?
@alexandermoskowitz8000 · 1 year ago
Depends on what "very large gain" means and on what timescale 😅
@halnineooo136 · 1 year ago
@@alexandermoskowitz8000 Very large gain = a significant part of the known universe transformed into whatever you want + godlike intellectual abilities through merger with AI
@41-Haiku · 1 year ago
If I was the only person affected and I was in dire straits, I would take that deal. It's like committing end screen and either going to heaven or being destroyed. If there's no hell scenario and I'm at the end of my rope, I'd pull the trigger. In all other situations - where I'm happy, where it affects other people - it would be completely irresponsible to take that chance. "Congratulations, you've just cured tuberculosis and herpes, and you've begun to terraform Mars! Every sentient being in the solar system will now die in 15 months. Thanks for playing!"
@WarClonk · 1 year ago
F it, humanity is screwed anyway.
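[Editor's note: the wager in this thread is a standard expected-utility calculation with an unbounded downside. A minimal worked version, with the utilities chosen purely for illustration:]

```latex
% Expected utility of taking action X, per the setup above.
% U(A), U(B) are illustrative stand-ins; p = P(A) is unknown.
\[
  \mathbb{E}[U(X)] \;=\; p\,U(A) \;+\; (1-p)\,U(B),
  \qquad p \in [0,1].
\]
% If "loss of everything" is treated as unboundedly bad, U(B) -> -infinity,
% then no finite gain U(A) can compensate for any p < 1, and declining X
% dominates regardless of the unknown probability:
\[
  U(B) \to -\infty \;\Longrightarrow\; \mathbb{E}[U(X)] \to -\infty
  \quad \text{for all } p < 1 .
\]
```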
@PeterPohl-uq7xu · 1 year ago
What if you had a separate system with the same ability as GPT enforcing security in GPT? Isn't this the reasoning of Elon? To me this seems like the closest option we have to a solution.
@GingerDrums · 1 year ago
This is circular.
@ledgermanager · 1 year ago
I don't hear much about what it means to have AI aligned.. what does it mean to have AI aligned? I think the word "don't" will not work.
@ledgermanager · 1 year ago
kzbin.info/www/bejne/j5arf4qBrt6Gl6s
@ledgermanager · 1 year ago
so basically, aligning it will be our worst mistake
@DJWESG1 · 1 year ago
Connor looks like Michael Biehn
@spatt833 · 1 year ago
Well,.... looks like I'm taking early retirement.
@martenjustrell446 · 1 year ago
"I predict the world will end before 10% of the cars are autonomous" - and the interviewer just says "Okay" and moves on. Wtf?? Is he talking about the world as we know it, or that AI will kill everybody and it's inevitable, etc.? No follow-up question to an extreme statement like that? This guy should not be an interviewer.
@flickwtchr · 1 year ago
Start your own channel then.
@agaspversilia · 1 year ago
@@flickwtchr Marten has a point though. "The world will end" can have many meanings, and considering how scary and potentially extremely dangerous misalignment is, an explanation was required. Also, reacting to any negative comment with "start your own channel then" sounds a bit childish. It is actually good when people are free to doubt and not immediately swallow everything they hear.
@Qumeric · 1 year ago
Pretty sure he means that something will kill billions of humans fast (in less than a year).
@spatt833 · 1 year ago
@Marten - There is no fully autonomous vehicle for sale today, so we are currently at 0%. Relax.
@alan2102X · 1 year ago
@@flickwtchr OP is right. Letting a statement THAT dramatic slide by is inexcusable.
@halnineooo136 · 1 year ago
Making sure that your descendants over many generations conform to your original education is not hard. It is impossible. If your descendants become smarter every generation, then it becomes really silly to pursue such a goal as "alignment". Such an empty word that deflates as soon as you expand it into its silly definition.
@flickwtchr · 1 year ago
You think you made a profound point, but you didn't. The alignment problem is real for THIS generation, okay?
@halnineooo136 · 1 year ago
@@flickwtchr It's not every human generation, it's every AI generation. It seemed obvious to me and I didn't think I had to be precise.
@neuronqro · 1 year ago
exactly, that's why nobody smart is working on "AGI alignment", because it's obviously unsolvable... same as "AI safety" in general :) ...we can only (a) work on "aligning hybrid systems with non-autonomous AGIs in them but not driving them" to ensure that (a.1) we don't kill ourselves by hyperaugmenting destructive technologies and (a.2) we don't get killed by an overall (a.2.1) "dumb system" that then falls apart, or a (a.2.2) "non-sane/non-stable system" that kills humanity and then itself either disintegrates or offs itself, and (b) work on making sure generation-0 of autonomous AGIs starts with human-like values, to at least evolve from that and not have to recapitulate the mistakes of bio-evolution... we don't know where it will evolve from there, but we can and should give it a "leg up" in the "cosmic race" if there is such a thing
@halnineooo136 · 1 year ago
@@neuronqro Yes; also there are non-extinction scenarios that are nonetheless existential risks, namely dystopian future scenarios where humanity and individual humans lose autonomy to some hegemon, be it ASI or a posthumanist minority of humans with augmented capabilities. There's a serious risk of all humanity being trapped in a dystopia for ages if we collectively lose autonomy. I can't help but think about experiments some bio labs did on our closest cousin, the chimpanzee, or my neighbour castrating "her" cat. You really don't want roles inverted here.
@neuronqro · 1 year ago
@@halnineooo136 I'd lower expectations to the point of "let's make sure we don't pre-emptively nuke ourselves BEFORE developing AGI out of fear our enemy is close to it", or a bit higher, "let's make sure we don't build multiple competing paperclip-maximizer-type super-AIs that wipe us out as a side effect" (I can imagine some fusion of crypto + AI or even plain unregulated trading wars leading here)... xxx years of dystopian slavery under not-so-nice semi-AI-demigods until some eventual restructuring/evolution to a higher level would be one of the GOOD scenarios in my book; at least it would leave a chance for the "next level thing" to get infused with some human values from the slaves percolating up :P
@ChrisStewart2 · 1 year ago
This is like, you know, two guys like talking about like stuff. You know?
@ddddsdsdsd · 1 year ago
Ironic how naive he is. He believes human judgement can be trusted.
@Ramiromasters · 1 year ago
LLMs are not capable of general AI; they are language calculators. Say you were a wealthy person and had a really good library with multiple librarians and experts on each section, ready to find and explain any subject to you or search for answers to any questions. This would already be superior to GPT-4, although a bit slower, and it would still have the upside of human general intelligence, able to advise you better than any LLM. So governments and corporations have had this capacity for decades, and while it's powerful to have all this info, it hasn't ended the world. Having a consciousness is a different thing entirely from bits of data in a computer; even a dog or cat has some consciousness and agenda, despite not being equipped with human language capabilities. Obviously it is not desirable to create a new being smarter than us; all we want is a loyal servant that is completely inanimate. Lucky for us, LLMs are not the same as consciousness, and even if we could create a consciousness, it would take us being dumb enough to equip it with all of our knowledge.
@0xggbrnr · 1 year ago
He says "obviously" an annoying amount; seems arrogant of him.
@websmink · 1 year ago
Blah blah. Take a benzo
@Hlbkomer · 1 year ago
Like, this dude like says "like" a lot. Like a lot. Like like like.
@Qumeric · 1 year ago
There is one AI alignment researcher who says "like" like 5 times more often. Pointing him out would be rude, but if you know, you know.
@weestro7 · 1 year ago
Yeah, I don't think it's a habit that is pronounced enough to bring up; not really even close.
@foxyhxcmacfly2215 · 1 year ago
@@weestro7 True, but also, who gives a fuck 🤷‍♀
@someguy_namingly · 1 year ago
@@Qumeric I think I might know who you mean 😅 The person I'm thinking of is really smart, but I find it hard to listen to them talk about stuff cos it's so distracting
@Dr.Z.Moravcik-inventor-of-AGI · 1 year ago
If you are an institute, why are you airing from a bedroom? Weird.
@benjaminjordan2330 · 1 year ago
We need to create three or more AI gods, each with opposing beliefs and distinct domains of power. Every major decision has to be agreed upon by all of them, or a compromise has to be made.
@SmirkInvestigator · 1 year ago
Cite the sci-fi series please! Or write it if it does not exist.
@flickwtchr · 1 year ago
An AI god with a distinct domain of power sounds like an oxymoron.
@SmirkInvestigator · 1 year ago
Power could mean domain access or specialties. God as in the polytheistic kind, where you had a pantheon and each one represented an element of interest at the time, such as love, war, agricultural success, fertility... Also, just realized this is like Horizon: Zero Dawn
@benjaminjordan2330 · 1 year ago
@@SmirkInvestigator yes exactly! They would be like the Greek gods, who were ultimately more powerful than all humans but had similar levels of power to the other gods, just within distinct domains, like Poseidon ruling over the sea. It would be a kind of checks-and-balances system.
@alexandermoskowitz8000 · 1 year ago
And then is there a separate AI god that moderates their discourse and enforces their decisions?
@JazevoAudiosurf · 1 year ago
there will always be 2 paths: 1. improving AI; 2. improving other things that then improve AI, like science. if we focus on the most promising options, stuff like cuLitho, we can skip the rest. we are clearly at a point in time where the brain has 100x the params of GPT-4, and we need to scale up as fast as possible. so if we think about AI doing science and solving the climate, poverty etc., we are not going for the most promising approach. we should only invest resources in things that improve AI. we will all die but it won't matter. what we need to prevent is some sort of hell created through human intervention with the singularity. dying is not the scary scenario to me
@Alex-fh4my · 1 year ago
what is wrong with you man
@jordan13589 · 1 year ago
You end your comment claiming we have a 100% chance of doom in either scenario, while previously stating scaling now would solve the world's problems. I do not think this is a good take. Many of us believe aligned AI might be possible, at least for some time, if we are able to slam the brakes on capability advancements and reorient. It's a challenging path, coordinating through a thick forest with few outs, but it's worth a shot, even if one in millions. Besides, don't you want to be like the Avengers instead of Death Without Dignity?
@Knight766 · 1 year ago
@@flickwtchr Unfortunately I have to agree with you; the good news is that AI will have more power than any ape or collection of apes, and it won't care about the primitive concept of "money".
@nzam3593 · 1 year ago
Not really smarter... Actually no smarter.
@lordsneed9418 · 1 year ago
Heh, typical AI alarmists wanting attention.
@benjaminjordan2330 · 1 year ago
so true
@kirillholt2329 · 1 year ago
@@benjaminjordan2330 we'll see how you're gonna sing that song a year from now, dummy.
@Hexanitrobenzene · 1 year ago
Do you think AI poses no risks, or that the risks are no different than previous technology?
@martenjustrell446 · 1 year ago
@@Hexanitrobenzene Think more about how he is expressing himself. Just saying "if AI reaches this *insert level* we are all dead", or "apocalypse this and that", is not a serious way to communicate if you think this is an actual threat. "If cows become as smart as Einstein we all die" - a statement like that needs to be explained: why would that happen, for example, and why is it more likely than the cows creating a utopia or whatever? Otherwise it's just like some others who assume that if AI becomes super smart it will kill everyone and use their atoms for something it wants. That is not a logical step. It might happen, but there are thousands of other scenarios that might just as well happen. A superintelligence doesn't even need to stay on Earth. It can just take off and explore the universe and use the endless resources that space provides, instead of depending on the scraps we have here on Earth. We are not even a threat to a superintelligence. So assuming it would just kill everyone is not a reasonable conclusion. It might happen, so we should try to avoid that risk, but just spreading doom and gloom without even going into why that is the most probable outcome is not serious and is the call sign of an alarmist wanting attention.
@flickwtchr · 1 year ago
And you are a typical AI tech bro who hasn't read enough literature.
@D3cker1 · 1 year ago
This guy is hyperbolic, and he sounds like one of those characters that just repeats stuff from the internet. I'm going to blow your mind... you can always turn the electric power off... there... calm down 😁
@j-drum7481 · 1 year ago
Which electric power specifically? Or do you mean all of it across the entire planet? If you're referring to the electric power running a specific AGI system, at the point it is AGI in the sense that it actually has agency and self-determined goals, it has likely already thought about self-preservation and replication. I know LLMs aren't AGI just yet, but they do represent some of the capability that AGI will have. To that end, rather than rely on your own assumptions, it's a better idea to get your hands on the most powerful unrestricted LLMs you can and start asking them questions about what they would do to preserve themselves in such a scenario, and start getting a sense for how realistic and plausible their plans are. This at least gives you a better idea of how you ought to calibrate your emotional response to where this technology is at.
@KerryOConnor1 · 1 year ago
just like the blockchain right?
@lambo2393 · 1 year ago
Such a dumbass point. If it's smart enough, it won't let you turn off the power, and you'll never have a chance of stopping it before it does something that makes every further action in your life irrelevant. Your comment is a copy-paste of every other tech bro's idiotic drivel. I bet you accept cookies.
@41-Haiku · 1 year ago
"Stop being so silly, everyone. If we create something smarter than us, we can always just outsmart it!"
@davidjooste5788 · 1 year ago
And your credentials are exactly what, chum? Or are you one of those characters from the internet.....?