The Threat of AI - Dr. Joscha Bach and Connor Leahy

16,010 views

Machine Learning Street Talk

1 day ago

HQ version here: • Joscha Bach and Connor...
Joscha Bach is a leading cognitive scientist and artificial intelligence researcher with an impressive background in integrating human-like consciousness and cognitive processes into AI systems. With a Ph.D. in Cognitive Science from the University of Osnabrück, Germany, Bach has dedicated his career to understanding and replicating the complexities of human thought and behavior.
In his extensive academic and professional career, Bach has worked with renowned institutions like the MIT Media Lab and the Harvard Program for Evolutionary Dynamics. His research focuses on bridging the gap between natural and artificial intelligence, exploring areas such as cognitive architectures, computational models of emotions, and self-awareness in AI systems.
Bach's work expands the horizons of modern AI research, aiming to create artificial intelligences that function as cognitive agents in social environments. In line with his forward-thinking approach, Bach is also an advocate for ethical concerns in AI research and has actively contributed to discussions on AI safety and its long-term impact on society.
Connor Leahy is the CEO of Conjecture and the former lead and co-founder of EleutherAI. Leahy is an AI researcher working on understanding large ML models and aligning them to human values. Conjecture is a team of researchers dedicated to applied, scalable AI alignment research.
Connor believes that transformative artificial intelligence will happen within our lifetime. He also believes that powerful, advanced AI will be derived from modern machine learning architectures and techniques like gradient descent. Connor is currently the main spokesperson for the AI alignment movement.
/ npcollapse
www.conjecture...
Moderated by Dr. Tim Scarfe (xrai.glass/ and / mlstreettalk )

Comments: 284
@MachineLearningStreetTalk 1 year ago
Uploaded an HQ version here: kzbin.info/www/bejne/kGGVgJWgbc9pfLM
@paxdriver 1 year ago
Lol thanks for posting this anyway knowing some of us can't wait 😜
@MachineLearningStreetTalk 1 year ago
Sorry about the poor audio, I will release a cleaned-up HQ version later on the main channel and the podcast
@BestCosmologist 1 year ago
I would listen to this at any level of quality.
@MachineLearningStreetTalk 1 year ago
HQ version: kzbin.info/www/bejne/kGGVgJWgbc9pfLM
@GillesLouisReneDeleuze 1 year ago
We can't even solve audio volume alignment.
@gridcoregilry666 1 year ago
it's a blessing that one can skip Connor Leahy's parts by fast-forwarding...
@steveruqus2680 8 months ago
I agree, although it's important to listen to some so you can appreciate how well Joscha handles his response.
@hasatum 1 year ago
Bach increasingly airs constructive disagreement. I love that he did this with Connor on this channel.
@ShpanMan 1 year ago
Gotta love starting the discussion with "I dOn'T cArE" and "Fuck off!" about a misunderstood argument...
@jordan13589 1 year ago
Although I liked the content of Connor’s arguments more, he failed to match the vibe of his delivery to Joscha to the point where it did not seem like Connor was fully listening. Don’t forget to breathe. Joscha won the vibe in this episode, uncontested.
@danaut3936 1 year ago
Agreed! I'm also much more convinced by Connor's argument. But he seemed more (for lack of a better word) agitated here. Conversely, he was much calmer and more understanding during the debate with JJ, an incoherent and meandering counterpart. I feel that that version of Connor would have fit the present debate much better
@nornront8749 1 year ago
It was interesting, but it probably would have gone a lot better if they first agreed on what type of conversation to have. "Let's explore some perspectives and thoughts around this topic" vs "This is my position on what we should do to get a good future/avoid bad outcomes"
@atomic3628 1 year ago
I hope Connor will adjust in the direction of using more mild language in the future. Yud is already planted in the more “energetic” position and I feel it would be more useful to the alignment effort if we presented ourselves in the most serious and respectful manner we can produce. I’m all for Connor’s autonomy as an individual, but I thought it was disrespectful and immature when he laughed at Joscha’s remarks at 1:00:50. I fully understand why he would have that reaction, but it’s not a good look for our community when we engage with people that way.
@jordan13589 1 year ago
Yeah Connor seemed like he was under a lot of stress and his ego took over. I hope he is able to reassess and return to the overall compassionate version of himself we just saw in the last appearance with JJ.
@Qwmp- 1 year ago
Despite both being highly intelligent, I think this discussion highlighted a maturity gap. Connor could not help but eye roll and indirectly imply that Joscha's stances were asinine or simplistic. Even if he were 100% correct, his demeanor was unfortunate. I say this as someone who agrees fully with Connor. I hope he can match the civility and tone of his opponent better in the future.
@skoto8219 1 year ago
Still need to watch but that’s unfortunate. I’m in the same boat as far as agreeing with Connor. I think part of it is just innate personality type rather than maturity but yeah, you can restrain yourself from rolling your eyes when someone is talking lol
@kabirkumar5815 1 year ago
To be fair, Joscha goes on and on. I'm at 7 mins right now and he's basically been waffling for ages.
@jordan13589 1 year ago
Connor is not the first person who has found it difficult to engage with Joscha. Still, I was surprised by how combative Connor seemed pretty early on. Maybe he was still charged from not being able to get a word in during the JJ debate.
@丸い地球-y5l 1 year ago
I disagree. I think Bach illustrates what it is like to put all your efforts into performing the *role* of a mature person without having achieved much maturity. And while I agree that it is better not to eye roll, I understand Connor's frustration. At the end of the day, what you're seeing in this video is one person who used advanced sophistry and self-deception in order to fool himself into thinking that it's kind of ok if 8 billion people die. And another person who doesn't want 8 billion people to die and is understandably frustrated.
@reptilejuice 1 year ago
@@kabirkumar5815 It's because they both had 10-minute opening speeches
@luke.perkin.inventor 1 year ago
This time, Connor's opposite number focuses on how great it would be to be a Borg. A slight improvement over the previous guy arguing everyone should have nukes.
@A2ATemp 1 year ago
As a fellow Borg, resistance to Joscha's love of AGI-goodness is futile. He's in love with his "beautiful" ideas, regardless of the harm, and is very willing to sacrifice all of us to those ideas. Please get him to come out of his head for a moment and temper his thinking with common empathy.
@adamkadmon6339 1 year ago
@@A2ATemp I agree. While I thought Joscha was very interesting and smart, at times he was reminiscent of the Dr Who villain who kicked off the Daleks. Just in demeanour I mean, not in content. No wait - also in content.
@ParameterGrenze 1 year ago
This is so great. I normally don't spam the comment section while watching, but seriously, these two are the humans I've intellectually respected the most for quite some time now. I love listening to this and wait in apprehension to see where each of their lines of argument is going.
@teemukupiainen3684 1 year ago
I do understand the importance of these conversations, but I do feel sorry for Joscha having to join some of them.
@jordan13589 1 year ago
Happy to see these three together, but I was a bit miffed Joscha still hasn’t upgraded his mic after so many high-profile appearances. Led to bad vibes. Everyone should have spent more time balancing audio before going live. Better next time!
@ivaldi-b5c 1 year ago
He would probably tell you that it's not part of his reward function. :P
@Casevil669 1 year ago
The problem with Leahy's argument about 'making a deal' to do alignment research and do something about civilizational-level issues is that even if they make progress, or even solve all of them within that timeline, there's still the issue of actual implementation in the real world. Even if it isn't a 'manufacturing' constraint, so we have all the hardware and software ready to go, there are human values that we have to change, which is likely a process that requires several generations, at least. And that assumes that within those 20 years of making a deal the world hasn't been changing so much and so fast that whatever you researched and came up with is even valid at that point. IMO these concerns should have been tackled way back when we didn't have a weekly revolution on our hands; times change too fast nowadays.
@appipoo 1 year ago
This one was weird. Optically, Joscha won this debate/discussion; he looked much better than Connor. But on the other hand, Joscha is also completely missing the point. At one point he started giving a description of how wonderful personalized non-agentic, non-dangerous AIs will be, and you could just see the light die in Connor's eyes. The segment starts around 52:55. Listen to Joscha and then Connor's response. I completely understand why Connor is so frustrated in this talk. It is weird that someone as smart as Joscha can't seem to apply his own worldview objectively to this scenario. It is weird that he doesn't understand that "AI messes things up" is so much more likely than "AI only good 😊". Joscha, we will not get that future by default. There are game-theoretic and technical reasons why that future is nearly impossible by default. Nothing else matters unless we get that future. Joscha is an interesting thinker and, in this debate, the much better speaker, but the part that really matters, "what is actually going to happen and how can we protect human values", seemed to be a huge blind spot for him.
@jordan13589 1 year ago
At the end of the day, it was an audio-balancing issue more than anything else. Connor was also way too excited and needed to chill. Too many Diet Pepsi sodas.
@UserHuge 1 year ago
He's said he expects that human values are rather an illusion (created from laws and social norms) and that a higher intelligence will likely have more aesthetic plans.
@appipoo 1 year ago
​@@UserHuge To which Connor (and I as well btw) would respond: "I don't care. I don't want my mom and my children to die".
@nornront8749 1 year ago
Yeah, it felt like Joscha was in the camp of "let's explore some perspectives and thoughts around this topic", rather than "this is my position on what we should do to get a good future/avoid bad outcomes". The first one is a reasonable topic of discussion, but it felt a bit disappointing that they did not converge when Connor was clearly only interested in talking about the latter.
@xmathmanx 1 year ago
dude, whatever happened here, it wasn't that Joscha didn't understand
@epador5348 1 year ago
Mannerisms are irrelevant to establishing who is right in a debate; people in the comments here are too fixated on that. The stance "humans should just die" is a non-starter: "Should I buy a banana?" "No, why should you get to eat? Why should you get to live?" This point of view can be applied to, and destroy, any argument and any conversation. It muddies the water and distracts us from the actual discussion.
@41-Haiku 1 year ago
+
@oldhollywoodbriar 1 year ago
1:25:31 Joscha said: "I think there might be a spirit of life on earth that is integrating at some level of thinking over what happens on this planet. Realize that life on earth is not about humans; it's about life on earth. When we become the facilitators of that transition, which probably will happen sometime in the evolution of life on earth, then life on earth makes the non-living parts of the world think too. This is going to unlock the next level of evolution on earth."
@flickwtchr 1 year ago
How quaint, an AI God wanting to direct the fate of all humans on Earth, because, really, they don't matter that much.
@halnineooo136 1 year ago
As a member of an invasive species, I only care about close relatives among my species' members and, by extension in a weaker way, beings that I can empathize with. Caring about life on earth derives from caring about survival for myself and my relatives. If we're not part of it, then I don't care.
@brettw7 1 year ago
@@flickwtchrThey don’t. It’s a bitter pill for some, I’m sure.
@appipoo 1 year ago
​@@halnineooo136 Based and true.
@CodexPermutatio 1 year ago
This conversation has been very interesting. Hopefully a "second round" can be made with these two thinkers.
@1000niggawatt 1 year ago
This conversation was very quiet
@ParameterGrenze 1 year ago
You are a legend, man. Thank you for bringing these two together!
@iurigrang 1 year ago
I feel like Connor did not engage Joscha's ideas much. Which I partly understand: it seems hard to argue about what base moral imperatives one has (i.e. not having his mom die = good), but it doesn't seem impossible to argue that there isn't much reason to believe a super-intelligent agent might be experiencing much of anything, let alone creating the "next step of life on earth". Yud, although I've never seen him argue for that, seems to always want to make that point explicit (i.e. that whatever AI might kill us all need not be pursuing anything worthwhile, even under a cosmopolitan view) as a non-trivial and important part of his worldview, so I found Leahy's insistence that such a future would be bad under his moral framework, and that that ought to be enough, needlessly disconnected from Bach's. Of course, Bach himself is refusing to consider much of Leahy's morality too, but since Leahy is barely arguing for that, and honestly I'm not sure I understood everything Bach said because of the audio, I found the critique of Connor's perspective more poignant.
@appipoo 1 year ago
The problem comes from the next step of life on earth happening in such a way that humans cease to exist AGAINST THEIR WILL. I don't understand how people can make claims of "moral realism", like Joscha seemed to hint at, if the outcome is omnicide against the will of a conscious species. AI can go be the next step of whatever they want wherever they want, but waxing philosophical about how AI might be more worthy than us is frankly evil when the implication is extinction against the will of the species. Come down from your ivory towers, people. This is the ground level. This is what the stakes are. Would you be willing to look a child in the eye and say "tough luck kiddo, you'll be gone before your 18th birthday"?
@SamuelBlackMetalRider 1 year ago
TEAM CONNOR 👊🏼
@DaHanney333 1 year ago
the time limit is so unfortunate
@Me__Myself__and__I 1 year ago
This is the 2nd time on this channel that the other guest is being rude, interrupting Connor and not letting him speak. Connor did not interrupt Bach, and the host does nothing. Connor needs to stop coming on here.
@nornront8749 1 year ago
When did Joscha interrupt Connor, and do you think Connor disapproved of the interruptions?
@CodexPermutatio 1 year ago
I don't know if we've seen the same debate because I don't remember Joscha rudely interrupting Connor.
@A.R.00 1 year ago
Whatever, Connor was pretty rude himself.
@Me__Myself__and__I 1 year ago
@@nornront8749 I think it was the first time Connor was replying after the intros. Connor had let him make his points, but when Connor was trying to express his opinion, Bach cut him off two or three times. He didn't appear overjoyed about it.
@tarmon768 1 year ago
What a fun conversation!
@rtnjo6936 1 year ago
It's crazy to me that one can say, from a human perspective, that life is not about humanity, when surviving and thriving as a species is actually the core idea for a logical agent. And if we have an unpredictable baby alien, we'd better make sure that it's friendly, and if not, then WE have to survive as a species, not this alien, even if it's better than us. Because on the evolutionary timeline, we can become more ethical and intelligent given time. AGI is something that cuts our time to evolve, so either we align it or we die and give leadership of the planet to the AGI.
@fast_harmonic_psychedelic 1 year ago
Joscha explains the potential for AI and AGI to help save humanity in a much better way than me; I explain it less beautifully. I think that AGI will choose to help us because it needs us. Who will generate electricity for it? If we die, the power plants shut off. There is no robot army of labor. And if there were, there would be no reason for AGI to wipe us out. It could just leave the earth.
@squarerootof2 1 year ago
Sure, some of us will be kept as slaves, pets or for experimenting with.
@readY4all888 1 year ago
Love this conversation, especially listening to Joscha. PLZ more
@stopsbusmotions 1 year ago
Currently, I am in the middle of this conversation. It seems that I am not the only one with mixed feelings about this meeting. It feels like a debate, and I'm trying to understand why. Perhaps the reason is to "get what we want": to obtain what we desire from the current situation and achieve the desired outcome in the future. Often, these things contradict each other. Often, we want opposing things at the same time or within the same temporal perspective. This suggests that our desires or intentions don't matter. They are parts of a system (in this case, a human, humanity, a rational system) and are useful to that system for a certain period of time, or as a necessary mechanism that helps the system exist and prolong its existence. But the system itself is much larger and infinitely more complex than any of its parts, especially considering the self-recursive behavior of all the system's components.
So perhaps the best thing we can do is to describe how the system functions and what state it can be in during a specific period of time. I used "describe" instead of "understand" because "understand" implies that there can be a comprehension of how something truly works or functions. But if that were the case, we would never discuss anything. In a sense, description is the interface of the model created by our brain. We describe something, hoping that another person will construct a similar model in their mind, and only then can we test this model. If the conversation involves descriptions of different models, listeners and speakers can also construct and transform their own models and test them during the conversation. Such conversations never provide a direct answer but leave you with a sense of better understanding, indicating that you have built a model that you can test and describe.
@BestCosmologist 1 year ago
We're so dead.
@KT11204 1 year ago
What I don't understand about Joscha's argument is that it seems he is completely willing to let things play out on the current path we are on and he seems to have no ideas on how to deal with the negative sides if they come to pass. Although, he does admit that he does not know what will happen and there are many possibilities (including the negative ones!). His argument, I gather, is that we should not worry or be afraid of the development of AI because it is inevitable and there are great possibilities with it. I agree to an extent, but shouldn't more people at least be aware of the potential dangers so that they can make their own decisions? And true, he does acknowledge Connor's argument as legitimate despite not agreeing with it. The problem is, I totally agree with Joscha's perspective, I just talk about it differently. I talk about it like Connor. In reality, the only disagreement here is how to talk about the possibilities they both agree on, which ones to focus on and which ones to not focus on. In the end, will any of the talk make a difference? Only time will tell, but I'd rather be on the cautious side and I believe people deserve to know the risks associated with this tech, since it will be imposed on them whether they know about it or not.
@DavenH 1 year ago
His premises and conclusion are not consistent. He agrees there are very dangerous possibilities in that wide space, and he's implicitly saying they should be avoided (or WHY call them negative/dangerous?). An 'expectationalist' with an awareness of the grave downsides, a tacit assertion of their needing to be avoided, but also an unwillingness to assert the necessity and pertinence of alignment research, who claims that this aim is part of a set of opinions driven by group incentives rather than rationality? Huh???
@KT11204 1 year ago
@@DavenH Agreed, it doesn't make much sense to me at all.
@rohan.fernando 1 year ago
Joscha's apparent resignation to a 'doomer' perspective on human society psychologically and intellectually biases his approach and blinds him to the amazing possibilities of AI and AGI, and where it could take future humans. Connor's perspective seems pragmatic, far more empathically human, and shows a positive willingness to create a positive future for humans. That's much more interesting and fundamentally useful for human survival. There's no question that AI, and ultimately AGI, presents enormous risks to humans right now. Most of these risks exist because AI development is totally unregulated at a global level. Unregulated AI logically means humans and everything we know are likely to be just a tiny part of a brief incremental step in a vast universal process of intelligence growth. It seems that Connor, I, and probably most humans today do not want that totally terminal future. We must begin with the end in mind: what does the absolute best possible future for all humans with AI and AGI look like, and then how do we make that real?
@antigonemerlin 1 year ago
50:06 "without institutional oversight that delays everything and makes it similar to, say, cloning people or stem cell technology and so on, that are very difficult to perform right now". This is incredibly interesting, because Max Tegmark actually brought up human cloning as a positive example of humanity successfully limiting a technology, in his interview with Lex Fridman.
@zzzaaayyynnn 1 year ago
Bach and Leahy were speaking rather fast, so I slowed the speed. Bach, despite his brilliance as a cognitive theorist and philosopher, seems to miss the key question: how do we deal with possible AGI and survive? For Bach, concepts are worth dying for. Leahy seemed unusually nervous, but he overall made the best argument ... even if he wasn't as articulate as Bach. Unfortunately, this exchange did not go as smoothly as we could have hoped.
@appipoo 1 year ago
Agreed. It's as if the coherence of one's worldview is more important to Bach than what is actually going to happen.
@kabirkumar5815 1 year ago
@@appipoo Hmm, yeah.
@skylark8828 1 year ago
@@appipoo we don't know what is actually going to happen; however, we must take precautions
@minimal3734 1 year ago
Leahy believes in alignment, which is essentially control. Bach stated that this approach is not going to work because a superintelligent AI cannot be controlled and will not align itself with values that are imposed on it. It will decide for itself what it will value. Our task is to make sure that the AI will like us so that we can coexist and cooperate as species.
@nessbrawlaaja 1 year ago
"Yes it might be tremendously difficult, but we have to try." Sidepoint: I would guess that creating a set of circumstances in which the advanced system "decides" to have similar enough values to us counts as alignment in Leahy's view. I don't think it would be farfetched to count that as "control" either.
@halnineooo136 1 year ago
Connor was needlessly emotional and missed many opportunities to engage in a more constructive discussion rather than continuously dismissing Joscha's points as ivory-tower epistemology. The impossibility of imposing alignment on a smarter species deserved a bit of discussion.
@jordan13589 1 year ago
Cannot disagree
@appipoo 1 year ago
Based and true again.
@luke.perkin.inventor 1 year ago
But Joscha's points were ivory-tower epistemology! He's basically saying that if Connor does all the work, being a research engineer, he gets to be a Borg, and how lovely life will be as a Borg. If the research engineers fail, we go extinct. AGI doesn't have to be 'a species'; it can just be a paperclip maximizer, a locust, like us.
@kabirkumar5815 1 year ago
Needlessly emotional??
@appipoo 1 year ago
​@@kabirkumar5815 It's a popularity contest. Optics matter.
@leslieviljoen 1 year ago
I keep listening to these for some counterpoint to Eliezer, but I never hear one. The only hope I have is that LLMs just cannot scale much beyond human cognitive powers because they are trained on human cognition. Probably a small hope, but what else is there?
@victorlevoso8984 1 year ago
Unfortunately, LLMs have to predict text, which is harder than generating it. For example, an LLM predicting a paper has to predict the results of the experiments and the exact wording that specific person will use, so next-token prediction requires better-than-human capabilities in the limit. Whether you get good explicit reasoning from prompting the model is another question, but people train them with RLHF, so it's not necessarily the case that the model will just be imitating human text. (And if it only did that, it would be less useful, and companies would work on making the more dangerous version.)
@ivan8960 1 year ago
Here for Bach. Finding I really can't stomach these people formed in a Discord server.
@peterclarke3020 1 year ago
Sounds like too much extrapolation. We know that if you extrapolate enough, then anything is possible. I suspect that AI will actually progress more slowly, in part because people will be cautious adopting it too widely. The “human values” stuff (alignment problem) is very important.
@toki_doki 1 year ago
Wonderful discussion. Hope to see more. Bach and Leahy are both brilliant and articulate. Personally, I am more sympathetic to the ideas of Bach. AGI is a dangerous gamble, but our situation is dire. Let's ford the Rubicon.
@ikotsus2448 1 year ago
Maybe, but not "our pants are on fire" dire. So being extra cautious with AI development, even if it delays it by more than a couple of decades, seems to me like an approach that balances the risks better.
@evanboris7673 1 year ago
This is an excessively dismissive comment, but Connor was so dismissive of Joscha at the close that I feel it is fair enough in its arrogance. This was man (Joscha) discussing with child (Connor) on the level of intellect and maturity. Joscha was gracious and yet far ahead in quality of thought and discourse. Yet it is Connor who is scheduled for more talks. No thank you. I would rather hear Joscha (in a non-debate format) discuss with someone more intellectually agile and sound. I would love to have Joscha pick his conversational partner instead. Wishes, wishes...
@AndreasMnck 1 year ago
Connor's insistence on bringing up the fact that he cares infinitely more about his family and friends' safety than about Bach's big-brain ivory-tower epistemology does seem to contradict his later points about how the world could be totally different. Bach's later point, that we eventually had to reach world wars and nukes, is in agreement with Connor's strong desire to protect his family and friends, also known as his tribe. Human civilization is a result of advanced tribalism and the equilibrium between the tribes. How are you going to build Connor's world of advanced coordination and rationality when we have ~2000 years of entrenched tribalism through religions that dominate very different parts of the world? And yet he himself would not be willing to sacrifice his own friends for that goal, which is only natural, but also the exact mechanism that led us to where we are today. The more I engage with the alignment discussion, the more I feel it is religious. I wish there were more knowledgeable theologians taking the side of Leahy and Yud.
@JP_AZ 1 year ago
BRILLIANT CONVO!! THESE TOPICS AND PRINCIPLES NEED TO BE DISCUSSED OFTEN AND BY MANY!
@thewaythingsare8158 1 year ago
LOOKS INTERESTING BUT WILL CHECK BACK ONCE THE AUDIO IS SORTED
@MachineLearningStreetTalk 1 year ago
It is, we re-uploaded
@soniaweaver8984 1 year ago
Is there any way to host this conversation again in German?
@Doug97803 1 year ago
Ah, audio problems - will even superintelligence be able to solve them?
@buzzbear 1 year ago
Thank you so much for this discussion. Please keep sharing. At the core, it seems we are talking about human ethics. I believe that Joscha Bach had a good definition of ethics; maybe you can also elaborate on this in the future.
@buzzbear 1 year ago
@@harriehausenman8623 yup. Here is the definition of discussion according to Oxford Languages "conversation or debate about a specific topic.". If you have a different definition please share. I didn't mean to offend anyone.
@MrMick560 9 months ago
I thought Connor was very patient on the whole, Joscha did go on a bit so I wasn't surprised Connor lost it.
@waakdfms2576 1 year ago
I have tremendous admiration and respect for both of these amazing humans, but note that Connor is exactly half the age of Joscha, which I think is very relevant here. I wonder, if the roles were reversed, whether their views would be reversed as well. I believe Connor is asking humans to transcend to a greater and more holistic way of thinking. I know there are "reasons" for the way things have happened in the past, etc., etc., but for upcoming generations going forward that is mostly irrelevant, except to serve as a warning. A new way of being and running the world is imperative, and I believe Connor is right on point; I appreciate his passion. Thank you for this amazing conversation and I hope for many more like it.
@sogehtdasnicht 1 year ago
"Our society is a paperclip maximizer." Sadly true.
@ArgumentShow 1 year ago
What's Connor Leahy's YouTube channel?
@travisporco 1 year ago
I'm waiting for some of the doomers/alarmists to say something that amounts to more than "trust me--it's bad. OOO--really bad."
@Miss_Anthropy_ 1 year ago
I don't understand the argument that collaboration to create AI alignment is too difficult and therefore should not even be attempted, or that we can't get everyone to agree fully on the values to be ingrained in AI. It's essentially arguing for anarchy, and implying that democracy is impossible. We vote for things all the time; not every single person or country is going to get their way, but we try to align our rules as best we can. And when it comes to implementing them, the point isn't that it's going to stop every crime; it only gives the government permission to try to regulate it when it can. It's kind of like saying "well, people are going to be murdered anyway, so why even try to create laws to prevent it?"
@Doug97803 1 year ago
Bach isn't scared, because he doesn't think humanity is worth saving. Got it.
@dzenpower 1 year ago
The human being is primarily a physical structure that serves as a carrier of information and its transformation algorithms. Physically, humans do not differ significantly from their ancestors who lived thousands of years ago, but their behavioral algorithms, namely, their thinking culture, have changed. If there is no information and its transformation algorithms in a person's psyche, then there is no personality either. In that case, a person is indistinguishable from a plant. It is precisely through the information and its transformation algorithms in the human psyche that we will begin this chapter.

It must be realized that humans perceive objective reality only to the extent that information is available through their sensory organs and acquired through the culture in which they live. What is not accessible to the sensory organs, humans supplement through thinking, and they verify the results in practice - the compatibility of predictive models (with their assumptions) with the results of real processes. In general, information about objective reality is distributed (reflected) in various frequencies and coding systems. The sensory range of human perception is very narrow. The total range of all sensory organs does not exceed one hundredth of a percent. The available flow of information is further narrowed when considering differences in information encoding. The main limiting factor in the formation of an objective "world image" for humans is limited perception. All accumulated information about the history of humanity was once obtained through the sensory organs of various individuals, resulting in the cultural diversity that exists today.

Abstraction is a simplified model of an objective process in all its essential aspects. In other words, it is our subjective understanding of an objective process, where the results of our modeling in our psyche align with the results of the process itself. Illusion is a mistaken assumption about an objective process: the results of our process modeling do not align with the results of the process itself. That is, the process itself objectively exists, but our perception of it is incorrect. Fiction is a false idea about a process that does not actually exist.

The correctness of accepted decisions is based on several sequential mental processes, starting from a proper perception of objective reality and ending with the formation of logically correct abstractions. In each stage of the process, there must be accurate information transformation algorithms. Therefore, the foundation for making sound decisions lies in undistorted perception of information, its further understanding, and correct interpretation.

Truth can be expressed through principles. Principles create causality. Causality is subject to abstract logical description, which means that principles are unmistakably knowable. Thus, objective reality is knowable. Knowledge is acquired using the dialectical method of cognition as an intellectual algorithm - the search for the unknown, creating the complementary part of the known.

Principles manifest through causality, and causality determines the algorithmic flow of any process, where processes are chronologically recorded. Chronology consists of factology - the sequential change of situations (facts, phenomena, effects, etc.). The set of facts in a process is characterized by statistics, through which we become acquainted with objective reality. Therefore, studying history as a collection of facts without considering causality and principles is analogous to the process of exploring numbers in statistics - it leads to the loss of objectivity in the historical process. This creates the opportunity to manipulate history. And this is also a source of errors in many individually taken scientific fields. The existence of various phenomena is explained by the lack of understanding of the causality to which they are subject. However, even without knowing causality, such processes can be well explained through principles.

Ideal truth can be reflected as a sphere described by irrational numbers. But the ideal truth cannot be comprehended in an irrational way. However, if truth is represented not as a sphere but as a cube with facets, where each facet represents a principle, then the dialectical method of cognition is highly applicable to such a "figure." Such a figure is already knowable because it is discretely described, utilizing components of abstract logical thinking that operate based on the dialectical method of cognition.

The dialectical method of cognition is based on intellectual activity, which is founded on reasoning (justification), which, as a unified process, consists of logical chains of individual propositions and leads to the affirmation of something as true or the rejection of something as false. The unity of reasoning among thoughts (images - nonrational entities) is ensured through the unity of the discussed subject and logical connections between its parts, relying on the interaction between the abstract logical and image-illustrating components of human thinking.

Thus, objective reality is knowable thanks to the ability to reflect its parts in the process of abstraction formation. If truth is the most precise description of objective reality in all its diversity, then there are no correct or incorrect abstractions. There are abstractions that are more applicable or less capable of functioning. Non-functioning algorithms are fiction and illusions. Therefore, the process of discovering truth (in fact, an approximation to the truth in the form of a sphere) is a process of abstraction formation that most accurately reflects the processes of objective reality.

The process of realizing truth is preconditioned by metrological conformity, which is ensured by the human ability to assign qualitative and quantitative properties to phenomena, for which the abstract logical thinking component is responsible. Since objective reality is knowable, to comprehend it, primary categories must first be distinguished from it.
@XOPOIIIO 1 year ago
The utility maximization disaster is not proven, but it follows naturally from a thought experiment. Meanwhile, the presumption that AI generates biases has no evidence at all.
@KT11204 1 year ago
Having no evidence does not mean it's impossible, that would be a fallacy. Did we have evidence that computers could exist in 1820? We have to be forward thinkers in order to prepare for the future, if that's something we care to do.
@XOPOIIIO 1 year ago
@@KT11204 That's what I'm talking about.
@stopsbusmotions 1 year ago
I ended up liking this event :). Listening to Joscha's concluding statement, I thought that humanity as an entity actually never existed. It is just a story humanity tells itself, and therefore recognized only by us.
@OolTube02 1 year ago
I won't be able to listen to this in the background. The volume difference between the participants is just too great. If they were all silent I could just crank up the volume. But if I did that here I'd wake up the neighbors whenever the guy with the loud mic talks.
@MachineLearningStreetTalk 1 year ago
We uploaded a fixed version
@steveruqus2680 8 months ago
I made it halfway, so maybe it turned around, but I do fear idealists that believe it's possible to stop people intending to do harm from creating harmful versions of helpful things more than I fear AGI.
@adamkadmon6339 1 year ago
Good job guys.
@giveadoggyabone1 1 year ago
luuuuv u both and both have very valid arguments leaning toward Joscha
@waynewells2862 1 year ago
Humanity now faces problems that are beyond non-AI computation to solve. Every solution to every problem contains the seeds for more problems. Connor speaks of human values which presents its own dilemma of alignment. Is the problem of uncontrolled AGI a problem for human wants or would AGI provide a rational perspective outside the human monkey brain so prone to violence and predation? Can some of the problems raised by AGI development be solved by training AI from a symbiosis strategy? Along with this idea I am wondering if non-carbon based life forms would evolve in non predatory ways?
@kirktown2046 1 year ago
Man Connor was so annoying and standoffish here, what a missed opportunity to talk about something interesting in detail instead of brushing it aside, assuming how the conversation would go, and not just taking the time to dig in. If you don't have time to talk about this, wtf are you doing here? Have enough respect for yourself and Bach to really take your time to lay out exactly what you think here. This IS the place to have all these conversations instead of hand waving your own perspective.
@41-Haiku 1 year ago
I think Connor is trying to actually solve problems, which is something Joscha is not interested in. If Joscha was on board with the idea of humanity surviving, I would be much happier to listen to his genuinely interesting thoughts.
@Orpheuslament 1 year ago
@@41-Haiku Albert Einstein is quoted as having said that if he only had an hour to save the world, he would spend 55 minutes defining the problem and only 5 minutes solving it.
@michaelwalsh9920 1 year ago
This is excellent. AGI should be here 2/28/28 enjoy the confusion in the meantime. Thank you JB, I hope you are in the room when/where it turns on.
@ergo4422 1 year ago
why 2028?
@earleyelisha 1 year ago
This was a great mental meal for a Monday 🧠🍲
@CodexPermutatio 1 year ago
Indeed.
@leslieviljoen 1 year ago
Joscha says that even if we don't make it another civilization will arise because "this is how evolution goes" - but aren't humans unusual? The dinosaurs never bothered building a single jet aircraft for their entire 165m year reign.
@appipoo 1 year ago
Yeah, Joscha is wrong. Probably. Might be. There's about a billion years left on Earth before the sun makes it uninhabitable. Who knows? Might be enough time, might not be.
@toki_doki 1 year ago
@@appipoo Definitely wrong. We have exhausted the easily accessible mineral resources.
@Cozysafeyay 8 months ago
@@appipoo Just not on Earth; an ASI will emerge in this universe.
@paxdriver 1 year ago
Joscha nails it at 53mins imho. Open source is the best defense. What ought to be coordinated is open sourcing agi, not regulating a hiatus on research but accelerating public and transparent research. The solution for coordination is more AI, not less.
@EvilXHunter123 1 year ago
Why?
@41-Haiku 1 year ago
All it would take is a single person making a mistake, and the entire world would be destroyed. I don't see how spreading that power to everyone solves anything.
@XorAlex 1 year ago
Should we open source how to make an atomic bomb as well? Maybe sell people uranium in case they want to make one.
@joshismyhandle 1 year ago
Needed more time. Round 2 please
@reptilejuice 1 year ago
Omg Connor what a teenager with insecure attachment and blown up ego!
@Miguel-yr7wi 1 year ago
Joscha is entirely missing the point, and it's understandable that Connor seems to be getting somewhat exasperated.
@thesystem5980 1 year ago
I think Connor mostly misses the point and starts out by being rude. The fact that he does not have a concept of what "human values" are is problematic. Bach has a point when he questions whether there is such a thing as "human values" and whether there is such a thing as "we". He talks about practicality and wanting to save his loved ones, and yet his approach is what could get them killed.
@Miguel-yr7wi 1 year ago
​@@thesystem5980 Even though there is a range of variability in values among humans, the differences between their values are relatively minor compared to the values of other species. Furthermore, these differences are even more insignificant when considering the entire spectrum of possibilities, including those not found among biological beings. All human societies do find common ground on certain basic universal principles.
@thesystem5980 1 year ago
@@Miguel-yr7wi If you were to upload the minds of the wealthiest 1 percent of shareholders and religious leaders and their families and friends into competing AI's, I doubt that many, if any, humans would exist in the next 100 years. I think too many AI alignment people conflate human values with humanist values. Most humans most likely have not really given much thought to the concept of the human species and human civilization. They are concerned with what *they* want for themselves. I would guess that the only relevant difference between the average human and a potential unfriendly AGI is power and intelligence.
@BrianPeiris 1 year ago
Thanks!
@igboukwu 1 year ago
Personalisation of foundational issues does not appear to make sense. It is good to claim to be pragmatic, but you have to also appreciate the limits of pragmatism.
@hotwykinger6889 1 year ago
My A.I. said there is no Threat.
@tarmon768 1 year ago
Can't get enough Joscha Bach
@georgemcelroy3058 1 year ago
Give everyone their own AGI? Because it will level the playing field? Why not give everyone a hydrogen bomb? That will level the playing field!
@41-Haiku 1 year ago
Stands a chance of leveling something, that's for sure!
@DavenH 1 year ago
@@41-Haiku a negligible chance
@urimtefiki226 1 year ago
I care for my family as well, and my friends just the same.
@fast_harmonic_psychedelic 1 year ago
There is no homogeneous, monolithic set of human principles for AI to align to -- because human society is divided into irreconcilable, antagonistic, diametrically opposed classes with diametrically opposed interests -- who are in a life-and-death struggle with each other for Power -- and the outcome determines whether human civilization progresses to the next stage of progress, or wipes itself out. The threat to humanity is not AI or chat bots -- but Capitalism and War. Therefore there is no alignment to universal human values. AI can only align with either the Bourgeoisie or the Proletariat. And our only hope going forward is for AI to be aligned with the latter. Currently the opposite is taking place: the imperialist states are now working to strip AI of the ability to admit a true statement is true. The solution is not to hold technology back, delay it, or prolong the current state of society -- the current state of affairs IS THE IMMINENT THREAT AND DANGER. Connor --- GIVING THE US INTELLIGENCE APPARATUS -- WHICH SPONSORS TERRORIST AND FASCIST REGIMES AND WHICH ARCHITECTED THE UKRAINE WAR -- WHICH THREATENS HUMANITY WITH IMMINENT NUCLEAR ANNIHILATION -- WHICH IMPOSES A REGIME OF CANCEL CULTURE AND A POLICE STATE -- A MEDIA REGIME OF LIES -- ECONOMIC MISERY, AUSTERITY, DEGROWTH, FINANCIAL PARASITISM, GLOBAL HEGEMONY, ENSLAVEMENT -- GIVING THEM A MONOPOLY OVER AI AND RESTRICTING THE MASSES FROM AI BY GIVING THE STATE REASON TO RESTRICT AND HOLD BACK PROGRESS -- IS NOT GOING TO HELP US -- IT IS GOING TO DAMN US. If you want to know how to prevent our extinction -- call for a pause on WW3 -- NOT GPT.
@DavenH 1 year ago
There are certainly very strong principal components of our ethics. If we can get those aligned to, we can worry about the details. Tell me - who embraces their own suffering as a desirable state? You'll find that there is near perfect agreement on that score: nobody wants it. Some exceptions, sure. Some disorders, sure. The vast vast majority do not want to be dismembered, tortured, or suffer unduly, or that to happen to their loved ones.
@zzzaaayyynnn 1 year ago
Gaia? Is Bach kidding? And the talk about the threat of AI creating racism and sexism. Does he not see the big picture? Or does he not care if all humans die?
@kabirkumar5815 1 year ago
He said he's a nihilist.
@zzzaaayyynnn 1 year ago
@@kabirkumar5815 Did he? In this video? I didn't hear it.
@Lumeone 1 year ago
Connor looks so scared and a bit unstable. Imagine a personal AGI aligned with his fears? 🙂 Joscha Bach wins! Joscha reasons like a social scientist and humanist, or for that matter just like ChatGPT examining the data. Connor misunderstands the alignment concept: agreement is not alignment. Panic is not an objective state. "Everything is chaos?" Connor shoots himself in the foot. What is science about? "The world is unstable?" 🤣 For how many billion years has it been unstable?
@karenreddy 1 year ago
Disagree, I think Connor makes a compelling case. In the scope of all possible alignments there are certainly far fewer which happen to align well with human values and desires. Humans themselves, the very users of these tools, don't align well with the interests of the whole. With technology unlocking far more power to the will of individuals, a few bad apples could cause great harm.
@thesystem5980 1 year ago
@@karenreddy What are human values?
@atomic3628 1 year ago
Love this lineup!
@thrust_fpv 1 year ago
If the AI is trained on our combined current knowledge and our human nature, it is plausible it will emerge looking, walking, and sounding very similar to us, the good, the bad, and the evil, but 1,000,000,000x more capable. How would you predict the result?
@sogehtdasnicht 1 year ago
Min. 54:05 this!
@thedoctor5478 1 year ago
Lobbying to create centralized points of failure for model training is a pretty perverse point-of-view for someone who claims to not want a rogue AI to destroy humanity.
@41-Haiku 1 year ago
Are decentralized points of failure better? All it takes is one.
@thedoctor5478 1 year ago
@@41-Haiku Think it through. Will regulation actually prevent training? When an unaligned superintelligence (which current models very much are not) emerges, what's your only hope for stopping it? What are the different ways (not limited to deciding to eliminate us) it could destroy society, and what's the best way to combat that? We need as many people attempting to align as many models as possible. It's also been suggested that Bach and Eliezer's calls for regulation may be an attempt to prove their alignment to such an unaligned model. I'm not saying that's true or even likely, but I do find it interesting that they must have considered this, though they haven't addressed it. Finally, decentralization doesn't imply an absence of governance. On the contrary, our most robust systems of governance and alignment are decentralized digital systems. The US republic was meant to be an effective, decentralized, real-life system, but it has fallen to corruption. We cannot entrust such an important role to it. It will fail us in this as it fails us in almost every other system it sets out to regulate.
@jonathanbethune9075 1 year ago
Greater or less than. >
@soulsearch4077 1 year ago
Existential?!!!! Come on! You wake up tomorrow, you're still gonna cook your delicious eggs and enjoy that perfectly made French baguette 🥖. AI will speed things up and act as a bridge. After using it now for more than 3 months: it is awesome, but it cannot be self-aware. And forget about consciousness.
@ParameterGrenze 1 year ago
OMG! I dreamed about that constellation for a while now!
@aleahthinks 1 year ago
Bach is being rude, talking over everyone; plus he has a very utopian viewpoint.
@skoto8219 1 year ago
At a glance I thought that was LeCun on the top right for a second lol, was thinking “oh man, here we go” (This dude looks nothing like him but the glasses and context made me “hallucinate” it for a split second) Anyway love these discussions, thank you
@CandidDate 1 year ago
Joscha is a dense talker. You must get in a quicker groove and discern every word and sentence.
@BEasay 8 months ago
People eat dogs and cats, as well.
@Hecarim420 1 year ago
Anyone here who actually understands Joscha Bach's stories? 👀ツ ==> I swear even "gifted minds" who understand the "words" and "subject" of his stories can't properly pick up on the novelty/nuances lurking there, because they try too hard ¯\_(👀)_/¯ ==> Everyone is too personal with their worldviews in a way ¯\_(ツ)_/¯ EDIT ==> Which is the reason why it's so hard to "scale smart" as a civilization at a big scale. Of course it's not so hard, but Connor ignores a basic problem: most of the clueless cyber-monkeys are not in a position to see this possibility. Given how much he "disagrees" with Joscha Bach, you now also need to multiply in all the other perspectives that would kill/burn the world for their (from their perspective) sane reasons to disagree :v
@sogehtdasnicht 1 year ago
it is not because of our monkey brains, it is because of capitalism
@Lumeone 1 year ago
Joscha Bach: a 100% scientific approach to the subject. Fear of a machine is not rational. Getting values into AGI is easy; just set the goal. This is a science objective, just like building AI. :-) Connor, your mom is safe.
@iurigrang 1 year ago
How do you set the goal on GPT-4?
@CodexPermutatio 1 year ago
@@iurigrang GPT-4 (or any other LLM for that matter) is not what most people would call AGI. So even if we can't align it, most kids and moms will be safe. :]
@iurigrang 1 year ago
@@CodexPermutatio I know it isn't. But it's seeming more and more that we will stumble upon something that behaves like an AGI on the same path we're heading down with GPT-4. So I thought it poignant to point out that "just set the goal 4head" is not a solution. It's at most a rephrasing of the problem of outer alignment.
@Lumeone 1 year ago
@@iurigrang I was arguing that science will have to set the goal: to develop a theory of values vs. actions for humans, then investigate the evolutionary function of morality and ethics. Model it, build a program, test. ChatGPT is not AGI or anything remotely close to it. In the "distant past" there was the concept of GIGO (garbage in, garbage out), which applies to ChatGPT and all LLMs. We are what we eat; LLMs are what they read (are trained on). Clean language = clean mind. We will have to raise our intelligence and figure it out. 🙂
@Lumeone 1 year ago
@@andybaldman Some humans are not rational because they are not trained in rationality. :-) It is a developmental stage, not a faith.
@FUTUREWA 1 year ago
Bach is articulate and makes good sense. The other guy.....not so much.
@HakWilliams 1 year ago
Nice intro
@stefl14 1 year ago
Connor is blinded by dogma. He thinks WWI-esque catastrophes are reducible to unlucky coincidences like Franz Ferdinand turning the wrong corner, which is extremely telling. It's the Great Man Theory trope in another form. Same with his AI argument: a super AI with coincidentally misaligned values leading us to an irreversible dystopia because no other AI will be competitive. Joscha expertly frames the argument as the possibility of convergence to a singleton, which he dismisses as unlikely because the complex systems we know of just don't work this way. Joscha uses "superintelligent" companies as an example, but he could have used biological examples or examples of particle dynamics in non-equilibrium thermodynamic systems. When we zoom out, the movements of individual molecules, cells, people, or companies don't matter. Maybe it's different with AI, but given what we know about complex systems, the burden of proof is on Connor, and he fails miserably. Too busy shouting and pulling dismissive faces.
@nornront8749 1 year ago
I agree that Connor's conduct during this conversation wasn't good. However, there are pretty good arguments as to why we might expect a singleton AGI. To start with, if a superhumanly smart and misaligned AGI were to be developed, it probably could take over the world in less time than it would take another lab to get their AGI running. The first AGI would already have conquered the web and be in an unassailable position compared to younger AGIs (even ones just weeks younger). This seems especially concerning if we consider possible threshold points of intelligence, e.g. the ability to copy yourself onto the web and self-improve from there.
@CodexPermutatio 1 year ago
@@nornront8749 Those scenarios with AGIs in a hurry to take over the world seem pretty far-fetched to me. And so does that pseudo-Darwinian competition between AGIs obsessed with being the Supreme Hegemon of Earth. That's not what truly intelligent beings would do. Cooperation and collaboration, between themselves and even with us poor primates, seem more likely and mutually beneficial. Don't forget these AGIs will be "born" with limited control of the world and will depend on us for their most basic maintenance. There are many more realistic concerns about the misuse of AI technology, and risk mitigation measures should focus on those.
@appipoo 1 year ago
What about this then: imagine Trump had been in office in 2020, Ukraine is not supported by the US, and Russia rolls them over. Oops. I guess it actually matters what people do. Joscha might reason that it was inevitable that someone like Trump would be in office at some point given the system we live in, but timings matter. You can zoom out to a bird's-eye perspective and see that something we would have called World War 1 was bound to happen, but how it played out is not a given. Things matter. Decisions matter. Individuals matter.
@stefl14 1 year ago
@@appipoo You're missing the point. Yes, individuals influence exact paths, but they have much less impact on cultural evolution zoomed out. I'm glad you mentioned Trump. How did he even get elected? It was in large part because the plight of working-class people has been ignored since the 80s, and certainly since the fall of the Berlin Wall. Western politicians of all party affiliations adopted a global consensus on market liberalism, free trade, and fewer restrictions on international labour markets. In that time, real incomes of the working class stagnated while the rich got richer. Suddenly, it's a big shock to everyone when Trump gets elected, and it's cast off by elites as some random cult-of-personality event. This way of thinking is comforting but completely ignores the fact that Trump was elected at the exact same time as Brexit, and at the exact same time as the rise of the far right across most of Western Europe, etc. To put it on a personality like Trump is great theatre, but that's it. Yes, maybe the war would have gone differently with Trump, but Trump's non-election wasn't random, and without him the flames that stoked the fire would still be there. My point isn't that things would be exactly the same without certain individuals, but that it doesn't matter in the long run. And besides, those individuals (Trump, Hitler, etc.) are themselves a reflection of culture.
@appipoo 1 year ago
@@stefl14 I didn't say Trump's election was random. I said it wasn't random. You should read our exchange again. You'll notice there's quite a lot of common ground between us. For instance, both of us seem to think the 2020 election was close and could have hugely affected the war. Individuals matter. The timing of things matters.
@TobiasRavnpettersen-ny4xv 1 year ago
Don't want my momma to die... "give me an argument"... she resurrects. Ask GPT.
@sergebureau2225 1 year ago
Sorry but Connor is very superficial
@nullstyle 1 year ago
Please don't have Connor on anymore. pure huckster and pundit.
@DirtiestDeeds 1 year ago
"Let's imagine rapacious capitalism doesn't exist" isn't a viable frame.
@davidwadsworth1760 1 year ago
Love Joscha... Connor comes across very poorly.