Yudkowsky says insane things with a straight face ("bomb the datacenters"). Cantrill says sane things with the veins on his neck popping out. Still prefer the latter.
@bcantrill 1 year ago
🤣
@gJonii 1 year ago
Given that neither the talk nor your comment managed to actually get Yudkowsky's claims in context, I'm kinda unsure if this is deliberate lying or if the basic concept is just that hard to grasp. The basic concept is fairly simple: you have to make a choice, either you ban making things that kill all of us, backed by threat of force... or you don't. A ban means you have to be ready to bomb data centers if they are used to endanger humanity. If you are not prepared to do that, there is no ban, and none of this discussion matters. Yudkowsky stated he doesn't think a ban is realistic, so any talk of slowing down the extinction of humanity is meaningless, and the bombing of data centers came up largely in the context of demonstrating how far we are from treating AI seriously. But yeah, reassuring lies are about all we have left; I'm just sad the anger is directed at the folks who tried to prevent the disaster.
@edgeeffect 1 year ago
That's the best speaker-biog for Bryan Cantrill I've ever seen.
@edgeeffect 1 year ago
"Everything is a conspiracy when you don't understand how anything works." - some guy on The Internet. "It's either firmware OR humanity and YOU HAVE TO pick a side" - Bryan Cantrill
@kamikaz1k 1 year ago
Loved where it was going, but then it ended with "it's our humanity," which is a bit b/s, especially since he had been talking about concrete reasons why it'll be ok. The final reason should be that reality has too much detail, so until AI has an accelerated way to experiment in reality and learn from the physical world, there is always going to be a gap/bottleneck.
@nbuuck 1 year ago
I had an utterly wrong preconception about the argument Cantrill would make here, partly given the framing of the talk when it was mentioned on social media, but also given how often tech entrepreneurs and venture capital investments are discussed on the Oxide and Friends series. I expected this argument would be a slightly different, economic one, with the premise that we shouldn't abandon the economic opportunities simply out of fear or concern. I also had a different inferred understanding of what "AI doomerism" means: I thought those of us in the IT security space, given our concerns and stereotypical skepticism, were being labeled as doomers _a la_ Suppressive Persons in the eyes of the Church of Scientology. I was relieved that Cantrill acknowledged some of the risks at the end of the talk, even if not in my preferred placement. Once I understood that, at least from Cantrill's perspective, AI doomers are those with perhaps irrational, actual-Doomsday-scenario concerns, I felt less threatened by the premise and the term "AI doomers." That said, I lament that we're spending time addressing irrational doomerism (I guess that doesn't sound redundant in my head, hence my misconceptions) given that it is nominatively irrational, when we could put more air time and dialog toward the security, privacy, and social concerns, and maybe even theorize solutions, the latter of which I've heard very little of in the oceans of worry being written about AI. That doesn't mean I think Cantrill shouldn't have focused on the irrational nature of AI doomerism... he may not feel like a sufficient authority on AI security and privacy to compose such a talk. I certainly have a bit better understanding of some of the schools of thought about AI and how we label them after listening to this. Thanks, Bryan!
@nblr2342 1 year ago
Once again, a terrific talk. Very dense and with an enjoyable pace. Glad to hear they got the acoustics fixed, even if it's just by using a handheld mic. Pro tip: invest in a good DPA head mic setup.
@_ingoknito 1 year ago
AI as force multiplier for human flaws: absolutely!
@datenkopf 1 year ago
What does he say about Lex Fridman at 39:24? (I think the subtitles are wrong, or I don't get it.)
@julienlegoff6139 1 year ago
Get the Narcan!
@patmelsen 1 year ago
36:48 Interestingly, this train of thought also kind of summarizes the position that climate change naturalists hold, where they say that we should not let an unspecified fear of climate change stop us from making the best of this planet (which may involve burning mineral oil).
@theyruinedyoutubeagain 10 months ago
Bryan is one of the most brilliant people I know and, while I wholeheartedly agree with his stance on the idiocy of AI scaremongering, this reflects a shockingly poor understanding of the opposing point of view and reeks of stunted thinking. It feels like an application of the common trope of exceptional people having unwarranted confidence when discussing things outside their domain.
@BspVfxzVraPQ 1 year ago
If my autocompletion causes an "existential threat" then that is on you, not me. If you hook up my autocomplete to the nuclear button... like, oh, blame the autocomplete. That is so robophobic...
@cowabunga2597 1 year ago
He is gonna have a heart attack in the middle of the talk. Nice talk btw.
@GeorgeTsiros 8 months ago
No, he won't. This is what gives him life. I am like him when I am explaining stuff.
@ahabkapitany 8 months ago
This was really embarrassing to listen to. 1. Take a midwit tweet. 2. Use it as a strawman. 3. Shout for half an hour arguing with said midwit tweet. I came here expecting him to take this topic seriously; instead I just found Don't Look Up energy.
@ts4gv 10 months ago
introducing x-risk with such a dismissive tone won't work for much longer (i hope). this was a frustrating & bad presentation. :/
@dlalchannel 8 months ago
Is his claim that AI will *never* be able to solve the engineering problem(s) that he and his team did?
@jscoppe 1 year ago
Argument by YELLING REALLY LOUDLY. Bryan seems like the Cenk Uygur of AI debate. "OF COURSE!!" Also, I loved when the nerd told the other nerds to touch grass.
@allesarfint 1 year ago
"Intelligence is not Enough", tell me about it. Suffering my whole life because of this.
@yono2815 3 months ago
Chill dude...
@a2aaron 8 months ago
what if it turns out that firmware is actually super reliable, it's just that bryan was cursed by a wizard at birth to always have firmware issues
@maxcohn3228 1 year ago
Really solid audio on this talk
@navicore 1 year ago
Thanks for this reasoned sanity.
@masonlee9109 11 months ago
Love Cantrill, but it is a pretty short-sighted take on AI x-risk to dismiss the possibility of agentified super intelligence.
@captainobvious9188 1 year ago
Learn even a little bit about how modern AI works? It's nowhere near any of the AI in fiction, as believable as they are.
@vmachacek 1 year ago
I'm watching this talk for the 10th time now, still entertaining...
@palindromial 1 year ago
Skip to 15:30 if you want to avoid the cringy bits. The engineering bits are pure gold though. A+++ would watch again.
@aeriquewastaken 1 year ago
Cringy bits?! Those were great!
@palindromial 1 year ago
@aeriquewastaken I didn't find what Bryan has to say cringy, but the bits he cites are nevertheless cringy to me. So overall, I much preferred the rest of the talk.
@cepamoa1749 1 year ago
He only knows how to scream... tiring...
@420_gunna 1 year ago
stimulant check *banging credit card on table*
@mcneeleypeter 3 months ago
A humorous but not very intelligent take.
@Ergzay 1 year ago
Pretty good talk until the ending part where he suddenly re-invokes a bunch of nebulous "dangers".
@ginogarcia8730 1 year ago
i want what this guy's smoking
@jeffg4686 1 year ago
Capitalism versus Socialism -- head to head. This is the real discussion. Everyone's too afraid -- too programmed -- to see past capitalism.
@GeorgeTsiros 8 months ago
Once again, the software was the problem. Once again, shit coding is to blame. We're never going to be engineers. We're just keyboard jockeys.