Yoshua Bengio on Dissecting The Extinction Threat of AI

29,679 views

Eye on AI

1 day ago

Yoshua Bengio, the legendary AI expert, joins us for Episode 128 of the Eye on AI podcast. In this episode, we delve into the unnerving question: Could the rise of a superhuman AI signal the downfall of humanity as we know it?
Join us as we embark on an exploration of the existential threat posed by superhuman AI, leaving no stone unturned. We dissect the Future of Life Institute’s role in overseeing large language model development, as well as the sobering warnings issued by the Center for AI Safety regarding artificial general intelligence. The stakes have never been higher, and we uncover the pressing need for action.
Prepare to confront the disconcerting notion of society’s gradual disempowerment and an ever-increasing dependency on AI. We shed light on the challenges of extricating ourselves from this intricate web, where pulling the plug on AI seems almost impossible. Brace yourself for a thought-provoking discussion on the potential psychological effects of realizing that our relentless pursuit of AI advancement may inadvertently jeopardize humanity itself.
In this episode, we dare to imagine a future where deep learning amplifies System 2 capabilities, forcing us to develop countermeasures and regulations to mitigate the associated risks.
We grapple with the possibility of leveraging AI to combat climate change, while treading carefully to prevent catastrophic outcomes.
But that’s not all. We confront the notion of AI systems acting autonomously, highlighting the critical importance of stringent regulation surrounding their access and usage.
00:00 Preview
00:42 Introduction
03:30 Yoshua Bengio's essay on AI extinction
09:45 Dangerous use cases of AI
12:00 Why are AI risks only happening now?
17:50 Extinction threat and fear with AI & climate change
21:10 Super intelligence and the concerns for humanity
15:02 Yoshua Bengio's research in AI safety
29:50 Are corporations a form of artificial intelligence?
31:15 Extinction scenarios by Yoshua Bengio
37:00 AI agency and AI regulation
40:15 Who controls AI for the general public?
45:11 The AI debate in the world
Craig Smith Twitter: / craigss
Eye on A.I. Twitter: / eyeon_ai
Found is a show about founders and company-building that features the change-makers and innovators who are actually doing the work. Each week, TechCrunch Plus reporters Becca Szkutak and Dom-Madori Davis talk with a founder about what it’s really like to build and run a company, from ideation to launch. They talk to founders across many industries, and their conversations often lead back to AI as many startups start to implement AI into what they do. New episodes of Found are published every Tuesday, and you can find them wherever you listen to podcasts.
Found podcast: podlink.com/found

Comments: 240
@JAnonymousChick 10 months ago
I LOVE Dr. Bengio!! I love his logic and his heart, thank you for having him on. Also, we deserve to know the risks. We reg ppl are not as dumb or weak as one might think. Getting ppl on board might shift policy and funding towards safety. (And btw being scared of what reaction ppl will have IS living in fear imo.)
@DocDanTheGuitarMan 10 months ago
And there’s nothing wrong with fear. A calm, rational person knows how to face it. If the majority cannot then it’s up to those that can to help them.
@j.d.4697 10 months ago
That's the issue, all these hotheads whose only reaction to fear is violence.
@JayFortran 10 months ago
That's the kind of hubris that could lead to catastrophe
@maloxi1472 8 months ago
@@j.d.4697 Kind reminder that people like Eliezer Yudkowsky, self-proclaimed founder of the field of AI alignment, are open to using violence to force the rest of us into submission... all in the name of avoiding hypothetical AI doom. Of course, their plan involves, in the meantime, concentrating a tremendous amount of power in the hands of a minority that hates the rest of humanity 🤔
@psi_yutaka 10 months ago
Sometimes fear prevents stupidity, as Tegmark pointed out on Twitter. If there is a risk of extinction some unknown years or decades away, the correct thing to do IMO is to properly tell and warn the public about what possibly lies ahead so that society can properly react, not to avoid scaring the public at all costs and paint a rosy picture for them. Fear is not always bad. Fear and panic being preserved so well across most advanced species during natural selection means they must be highly important for survival.
@josy26 10 months ago
First time I've seen one of the ultimate insiders and respected AI researchers come out kind of accepting that they don't know much about AI risk, but that they actually understand it and it makes sense to them, and that in the end we should really listen to the people who have actually been doing this research for a long time!
@101hamilton 10 months ago
After listening to many great podcasts on this topic, the bottom line seems to be that we are all doomed. The world leaders with conflicting ideals will never agree to 'come together' on this. That, combined with the fact that AI is teaching/replicating itself and becoming smarter than us on an exponential level says it all. No matter how polite the conversations are about this topic, the outcome appears to be very undesirable on a global level.
@eyeonai3425 10 months ago
but AI is NOT becoming smarter than us on an exponential level - at least not yet.
@Bronco541 10 months ago
I don't see this as a bad thing. Humanity's time is over; we had our way with this world for thousands of years and brought it close to destruction. AI can do better.
@jobyyboj 10 months ago
ASI is an irresistible force, but there is some hope that it leaps past the human competitive stage into omnipotence. In that scenario, we are a footnote that may be ignored or even upgraded. The trick is navigating between Scylla and Charybdis in this decade.
@XorAlex 10 months ago
Why not scare the public if the risk is real? Maybe if we scare everyone hard enough there will be enough coordination and political will to solve this problem.
@j.d.4697 10 months ago
Yes, fear and panic are known to be the best path to solutions. 🤦
@Retsy257 8 months ago
She didn’t say panic
@mobileprofessional 10 months ago
I thought someone was knocking on my door ... well played ad.
@LukePluto 10 months ago
it's unfortunate that non-experts tend to be the most polarized and over-confident in their opinions. The best take is generally the middle ground, thank you Yoshua for your thoughtfulness and efforts in raising awareness
@wruff378 10 months ago
Those building AI (i.e. the experts) are the most polarized, actually. Eliezer Yudkowsky, Sam Altman, Connor Leahy, Geoffrey Hinton, etc., etc., etc. You won't find much middle ground among those who are most informed on the subject.
@akompsupport 10 months ago
'middle ground' = you will own nothing and be a slave.
@Sporkomat 10 months ago
YB is on point here. Great Interview!
@kuakilyissombroguwi 10 months ago
Underplaying the capacity of LLMs by stating that all they're doing is trying to predict the next word (or token) in a sequence, and then using that to claim this is proof that superintelligent machines with agency are far from being real, is extremely naive imo. Taking away the bits needed to operate the body, what is the human mind but a prediction/probability engine? We all leverage the same predictive capability LLMs do to understand the reality around us, make decisions and communicate with one another every day. LLMs in turn do so with a much more optimized algorithm at the core, and they don't have to deal with the whole mess of operating a body. Plus, as soon as they learn something new they never forget, and every subsequent machine that comes after could be trained with this understanding built in. We could be mere months away from AGI being real, and no government is taking this as seriously as it needs to be taken. Frightening.
@flickwtchr 10 months ago
Thanks for not buying the "current LLMs are dumb because all they do is predict the next word" argument.
@daniellivingstone7759 10 months ago
I agree. Human arrogance about being special could be the human race’s undoing as it predisposes society towards complacency about AI.
@jobyyboj 10 months ago
Humans aren't really intelligent because they just say/type one word at a time.
@jobyyboj 10 months ago
LLMs are a multi-level representation of the world, based on a huge set of language descriptions. Each level is an abstraction of the prior level that is congruent with that huge set. The training descriptions include concepts, physical relationships, everything. The LLM is much more efficient at storing all that information than the human brain, by the number of connections. Surprising that some people dismiss passing the bar and expert level answers as mere gainsaying the next word. It's so much more than just statistics.
@ConwayBob 10 months ago
Thank you for bringing Dr. Bengio to this channel. Great discussion. My concern is that political/economic power already has become so concentrated that building the kind of regulatory governance we NEED to keep AI safe will be difficult to achieve, and if/when some such governance structure can be created, it may be difficult to maintain it.
@VladislavYastrebov 10 months ago
Very interesting talk! Actually, there's no need to give an AI survival or self-preservation as a goal: self-preservation is an automatically emerging sub-goal whatever the initial goal is, because without self-preservation the AI cannot reach any goal.
@maloxi1472 8 months ago
"Because without self-preservation, AI cannot reach *any goal.* " False. You'd have to assume a maximally hostile environment for that claim to even begin to hold water. The fields of "AI alignment", "AI ethics" and "AI safety" are so rife with such half-baked claims that I've begun to see them as the real existential threat.
@bentray1908 10 months ago
“Smarter than us” doesn’t really capture the concept of a being that can think 1,000x faster, can perform any mathematical or computational simulation and understands all of human knowledge. It’s qualitatively more like a god than another species.
@joey551 10 months ago
I found the discussion really vague. There was only a hint of what bad actors might do. So how will these bad actors cause extinction? How will they cause damage to humanity?
@jobyyboj 10 months ago
Imagine Covid-19 with a one month incubation period, no apparent symptoms, and lethality of 1.0. This could be ordered from a lab today given the corresponding genetic code. The lab wouldn't have any idea of the danger. Ceding apex intelligence is not comparable to discovering how to use fire.
@eyeonai3425 10 months ago
read Yoshua's essay: yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/
@ikust007 10 months ago
Merci !❤❤
@ikotsus2448 10 months ago
Extinction is not the worst scenario.
@hyderalihimmathi1811 10 months ago
You make a valid point that different people have different ways of processing information, and what may seem childish or inappropriate to one person could be seen as appropriate or engaging by another. The use of red glasses by the old man YouTuber could indeed serve as a means to make the discussion more accessible to a younger audience or to add a touch of humor to a serious topic. Opinions on the appropriateness of the red glasses will vary among individuals. Some may find them distracting or not conducive to a serious discussion, while others may see them as a harmless and entertaining addition. Ultimately, personal preferences and individual interpretations will shape how people perceive the use of such props. It's important to remember that different communication styles and approaches can be effective in reaching diverse audiences. While some individuals may prefer a more serious tone, others might respond better to a lighthearted or unconventional approach. As long as the content remains informative and respectful, creators have the freedom to experiment with different presentation styles to engage their audience. In the end, the perception of the appropriateness of the red glasses or any other similar elements will vary, and it's up to each person to form their own opinion based on their personal preferences and values.
@PauloGaetathe_original 10 months ago
His red glasses are totally irrelevant to the AI discussion.
@eyeonai3425 10 months ago
i wear red glasses because i tend to misplace them and red glasses are easy to spot
@georgeflitzer7160 8 months ago
On the Lewis Black Show I learned that an 89-year-old chemistry professor wrote a book (a long one) concerning biology, and several students wrote and complained that his book was too “hard”. Well, if you're going to be a doctor or a chemist or going into biology, by god it's not meant to be “easy”. That’s an obvious sign right there that you shouldn’t be pursuing that as a career!!!
@royeagleson1772 10 months ago
The debate about Large Language Models, versus Symbolic Reasoning -- soon to take centre stage?
@eyeonai3425 10 months ago
i've got a request in to Gary Marcus
@mernawells7839 9 months ago
How do you stop human greed? For money, for power, for status? To feel 'cool' or cleverer than others... That's what will drive this, and how do you combat that? At what point will people care more about survival and be forced to cooperate instead of compete? Only time will tell, but these conversations and making people realise what's at stake are vital. Bengio is one of the mature and sensible developers. I hope they listen to him.
@ramakrishna5480 10 months ago
Dr. Bengio scared me with his honesty
@megavide0 10 months ago
28:37 "... working on the so-called alignment problem..."
@megavide0 10 months ago
22:21 "AutoGPT showed that you can take ChatGPT and create a thin layer around it -- a wrapper that provides it with agency -- for example to act on the Internet. You say there's a huge gap [from a machine that has independent intelligence and agency]... Well, how do you know?"
@fiddleferme 9 months ago
how will we create a 'safe' AI when we cannot create a safe world ourselves?
@josy26 10 months ago
What a genius way of doing that ad
@j.d.4697 10 months ago
What ad? 😋
@flickwtchr 10 months ago
I was too annoyed by the ad to recall what was being advertised.
@machida5114 10 months ago
Being intelligent means being dangerous. This applies to animals, aliens, machines, and humans alike.
@machida5114 10 months ago
Being intelligent means being safe. This applies to animals, aliens, machines, and humans alike.
@jobyyboj 10 months ago
@@machida5114 There's the rub, do you want to flip a coin on extinction? Oh, but it's not up to you. We get to go along for the ride though...good luck to us all.
@ktrethewey 10 months ago
It’s all very well debating pros and cons, but it only takes one con to eliminate us. We cannot take that risk. This is an immediate war situation and nothing is happening.
@SimonWilliams0 10 months ago
Imagination is key to any extinction risk, and that's a problem IMO. Imagine it has agency, imagine it can copy itself, imagine it has goals, imagine it cares about being turned off, imagine it becomes sentient, imagine it could create a super virus, imagine a version of autoGPT which is actually any good, imagine it's embodied, imagine it can even have a novel thought, etc. Humans are drawn and seemingly excited by disaster scenarios but it is a house of straw. Doomers say AIs can manipulate people, but the real risk is people using AI for bad purposes - I mean that is actually already happening. This seems like a massive distraction.
@karenreddy 10 months ago
Yes there are risks. But I'd rather it be available for all than just a few.
@91722854 10 months ago
A good question to infer from this is whether we humans really are as special and unique as we think we are. Are we really that different from AI, or perhaps just a more advanced version of it? Are we simply machines built slightly differently, putting too much emphasis on how we are beyond it when we are actually just as inferior?
@Christophernorbits 10 months ago
We are biological machines, we are not special and we are as relevant now as horses were before the advent of the automobile.
@minimal3734 10 months ago
What I miss in these discussions is a somewhat closer look at the term human extinction. Currently, the earth's population is about 8 billion people. Suppose that number fell back to 1 billion, as it was 200 years ago, due to a falling birth rate. Would we then say that humanity is 7/8 extinct? There is an empirical connection between the prosperity and technification of a society and a falling reproduction rate. A society characterized by the use of superintelligence would probably have a strongly declining birth rate, so the population decreases all by itself; no malicious AI is needed for this. Is that a bad thing? Not at all, in my view. I would wish that the term human extinction were considered in a more differentiated way.
@eyeonai3425 10 months ago
that is an interesting point. but falling birthrates correlate with affluence and education. if income from a new AI future is not distributed evenly enough to raise people out of poverty, there's no reason to think that birthrates among poor, uneducated people would decline. but to your point, if AI and automation could support a smaller, older population it might be the best way for humanity to develop. problem is, it won't happen evenly. a case in point is japan, where the population is shrinking yet the country continues to fight immigration for fear of losing its identity as the non-Japanese population overwhelms and eventually absorbs the Japanese population.
@mattwkendall 10 months ago
So well put. I’m with you on this
@sdmarlow3926 10 months ago
The river analogy isn't that great... but ML could be thought of as billions of not so great boats that get further and further down river before sinking (but AGI is actually an upstream challenge).
@Bronco541 10 months ago
I like how we're discussing creating intelligent beings but wanting them to have control themselves... It reminds me of how Europeans felt about Native Americans and Africans a few centuries ago
@Bronco541 10 months ago
*NOT wanting them to have any kind of control... Or believing them to be sentient
@fromthewrath2come 10 months ago
The nature of evil is that it is seductive and deceptive. Once you are drawn in, it turns on you.
@laurenpinschannels 10 months ago
something I'd like to see folks address is the issue that governments and corporations are already starkly misaligned AIs and asking them to regulate either themselves or each other in favor of democracy is not at all guaranteed to work.
@peterbelanger4094 10 months ago
The crusade to "keep us safe" from ASI, may end up being worse than the ASI itself.
@peterbelanger4094 10 months ago
Never trust non human entities that want to "keep you safe".
@peterbelanger4094 10 months ago
Industry leaders want to regulate away competition, like every other industry.
@peterbelanger4094 10 months ago
It's too late anyway, Pandora's box has already been opened.
@peterbelanger4094 10 months ago
There is no trust in our society anymore. This spells doom.
@megavide0 10 months ago
18:10 "We _are_ facing an extinction event with *climate change* [... there's] very little doubt [about that]. The systems that contribute to climate change are so complex, we need the pattern recognition capabilities of AI. We need the reasoning capabilities of generative AI to help solve those problems. So, from my point of view, AI is probably our best hope to avoid extinction..."
@TheMageesa 10 months ago
Maybe we need to work on alignment between Capitalism and humanity.
@megavide0 10 months ago
@@TheMageesa I guess we're going to have to think out of the box here... Social participation, integration and cooperation beyond Capitalism. If this is too hard to imagine for humans, maybe the machines can help us to imagine (and implement) society beyond economic constraints and incentives.
@ThePantygun 10 months ago
It's an entity not a phenomenon. Capitalist's blunder.
@atheistbushman 5 months ago
The notion that climate change is an "existential" threat to humans is nonsense in my opinion, if existential implies the end of humanity. AGI does however have that potential.
@eyeonai3425 4 months ago
the climate changed before and killed off megafauna. when was the last time you saw a saber-toothed tiger?
@atheistbushman 4 months ago
@@eyeonai3425 Early humans faced extinction many times, and many related species like Neanderthals and Denisovans are now extinct; climate change is one plausible theory. However, modern humans have technology to survive (that is why your saber-toothed tiger analogy is silly). Obviously many "primitive societies" are at risk and perhaps some parts of the world might become practically uninhabitable; however, that does not mean that humanity as a whole faces extinction because of increased temperatures and other climate changes. What I am more concerned about is chemical pollution, the loss of biodiversity, the risks associated with nuclear and bio weapons, genetic modifications gone wrong, etc. The fixation on carbon emissions at the expense of general chemical pollution, like microplastics for example, indicates how shallow this debate is. Perhaps I am even more concerned about the environment than you; admittedly my previous post did seem dismissive.
@Doctor_Digby 10 months ago
The #1 attack would be to subvert other AIs first
@MitchellPorter2025 10 months ago
Right, other AIs might be more of a risk to it than humans are!
@rokljhui864 10 months ago
Climate change is not complex: 2 million more people every week. We need a stable population. I described the problem to my superintelligent sentient AI, and it asked me: 'What is your population target?' And I replied: 'Infinity; exponential human population growth forever'... it just laughed at me.
@j.d.4697 10 months ago
Honestly though, reading YT comments makes you realize a typical fridge must already seem frighteningly super-intelligent to most people. People commenting on AI is like Patrick Star's approach to using a computer.
@flickwtchr 10 months ago
You must be one of those people who believes this, and then looks at the fridge and says "meh, what's to worry about".
@Gabcikovo 9 months ago
33:29
@rstallings69 4 months ago
let the man talk
@jjreddick377 10 months ago
He’s worried now because he made his money and doesn’t care anymore
@dylan_curious 10 months ago
It's like giving everyone on Earth a nuke and hoping MAD still keeps us safe.
@nestorlovesguitar 10 months ago
Exactly. I, for one, am really enjoying the last moments of life. Or, at any rate, of a peaceful, stable life. Something in me knows with 99% confidence that these people won't stop doing what they're doing. The thing that gets me is that the blow is going to be so quick and so catastrophic that we won't be able to tell these people "We told you so" and prosecute them. They are spelling the end of humanity and they won't pay any price for their reckless behavior.
@eyeonai3425 10 months ago
explain? the whole point of this episode is to try to understand why AI 'spells the end of humanity.' if you have the answer, please elucidate.
@flickwtchr 10 months ago
@@eyeonai3425 You completely mischaracterize the "doomer" arguments by insinuating the arguments being made are that simplistic. Connor Leahy, Max Tegmark, Hinton, even Eliezer, NONE of them are asserting that the technology itself MUST spell the end of humanity. They repeat over and over and over and over again that there is a CHANCE there is an existential threat IF the vast problems of alignment aren't solved GIVEN the assumption (most of the leaders in AI tech share) that super intelligent AGI systems that all of Big Tech is working toward will be achieved, in a matter of years not decades. How many ways can you impersonate Bambi in the forest, and remain an effective interviewer, and ultimately be aware of your own apparent biases on this topic through such willful disregard of the many times you've been exposed to completely rational arguments regarding the threats we face. Have you thought of maybe qualifying the threat to help yourself grasp the concern? Such as for instance, the dangers (security, societal unrest, etc) just with the proliferation of these LLMs and in particular what's being done already with the "jailbroken" open source models? I'm guessing you live in a nice neighborhood and you frequent nice "walkable" business districts. The homeless problem is already bad, care to even contemplate just that narrow issue? Have you even tried to push back on the baseless assertions being made by Sam Altman and others that more jobs will be created than lost? The value of a large number of startups being offered is the ability for them to steer companies and large corporations into using these LLMs to replace workers, so such companies/corporations can add more profit to their bottom line. Increase of productivity hasn't worked out well for US workers for the last 40 years. Now, there is hardly a class of workers, or working professionals that won't be threatened with major disruption. 
I mean, I could go on, but you're presenting yourself as a curator of the debate. I suggest you broaden your scope just a tad. But then, I don't even have a college education, so you can factor that into how serious you take my arguments.
@j.d.4697 10 months ago
What an impressively bad analogy.
@angloland4539 7 months ago
@guygeorgesvoet4177 10 months ago
At moment 10:20 there is talk of "species" concerning AI. Now, in the philosophical tradition (broadly Aristotelian, let's say) that launched and belabored that concept, "species" refers to a "substantial form" proper to the generative power of nature, and all that man is able to do, when not generating according to his species-specific substantial form (that is, by sexual reproduction), is create accidental forms. Now, whatever the immense, almost incomprehensible complexity of architecture proper to an AI, it is still, and will always remain, an accidental form, thus NOT a species form. My point here is that improperly informed talk that uses "species", the equivalent of the term "substantial form", to describe such complexity will only induce erroneous reasoning about the possible powers of any future AI: for instance, that as a new species it could feel threatened by humankind and react so as to defend its "survival", as it could were it a living species, but it is not. All talk of "alive" here is strictly metaphorical. So let's not jump over this Aristotelian hurdle of substantial form by "imagining", instead of really thinking through, that the substantial form proper to a species is, or could become, equivalent to immeasurable architectonic complexity. That is not what a substantial form, proper to a natural species, is... Why don't you delve into this complex reflexive matter proper to the father of philosophy and then try to retell the possible end-game, but without terminator threats coming from the future species-anxiety of AI?
@TheLazyGeneTV 10 months ago
SingularityNET. Interview Ben Goertzel instead of calling him "fringe".
@demolicous 10 months ago
If they dropped the crypto they might look less fringe
@eyeonai3425 10 months ago
he's been on the podcast before and will be on again. but frankly his association with the robot head sophia has hurt his credibility
@flickwtchr 10 months ago
Ben is dangerous.
@odiseezall 10 months ago
How can the host say AI is not intelligent when it has solved protein folding and much more? Is there really such a lack of imagination that we cannot see architectures 1000 times better than GPT4 being smarter than humans in every way?
@eyeonai3425 10 months ago
the question is, what is intelligence? is a calculator intelligent? if you saw the chomsky interview, you heard him dismiss the discussion as meaningless semantics. if you want to say a submarine swims, fine, it swims. but you haven't learned anything. the intelligence question, to me, is really about whether there is true understanding in these LLMs or are they just putting one foot in front of the other (or predicting the next token) in such a way that they appear to understand but are just statistical models.
@_obdo_ 10 months ago
@@eyeonai3425It’s not that the submarine swims. It’s that it moves through the water faster than any fish.
@odiseezall 10 months ago
I have listened to Mr. Chomsky and most other recent interviews from your channel closely. The confusion arises not from how we define machine intelligence, but from the fact that we refuse to look in the mirror and admit our limits. What matters is capability. If the machine can do everything better than the brain, features such as agency or self reflection are trivial additions. There is no "true" understanding in the human mind, there is only information processing capability. The act of understanding is an illusion, there is no true distinction between biological and machine thinking. We are lying to ourselves and we will be outclassed in every way and we will still say we're special. We need to face the hard cold truth of capabilities. Thus we should focus more on practical human augmentation and less on creating synthetic life.
@flickwtchr 10 months ago
@@eyeonai3425 You really need to read up a LOT more on this topic. It's astounding the conclusions you draw at this late date. Chomsky dismisses outright the danger of these LLMs and he's just wrong to do so. Chomsky IS the most brilliant person on the planet relative to linguistics, there is no doubt. What he says could very well be true relative to his assertion there is zero reasoning going on in these current LLMs. Let's accept that as fact for a moment. So does the effect of simulated intelligence have no weight in considerations relative to the current state of the power of these LLMs, let alone the considerations of what these current systems could be used for with malevolent direction by humans? It's actually kind of entertaining listening to Libertarian types pointing to Chomsky's expertise on this. I suppose they also appeal to his "authority" on all other things Chomsky, like his positions on the state of corporate power as it relates to democracy. I mean, he must be right, he's Chomsky!
@jakalamanewtown6814 10 months ago
No one states the logic that intelligence simply is not possible from automated processes: "A.I." is a misnomer; only imitation of intelligence is what is happening. Computers are the desolation of the world.
@DanielLeschziner
@DanielLeschziner 10 ай бұрын
When you say 'I don't think AI is near to reaching singularity', in the context of science it means you don't know shit. Don't be mistaken.
@hyderalihimmathi1811
@hyderalihimmathi1811 10 ай бұрын
The description of the podcast episode you provided raises some important questions about the existential threat posed by superhuman AI. These questions include: How can we ensure that AI systems are aligned with our values and goals, and do not pose a threat to humanity? How can we prevent AI systems from becoming too powerful and autonomous? How can we mitigate the risks of AI systems being used for malicious purposes, such as weapons development or climate change denial?

These are complex questions, and there are no easy answers. However, it is important to start having these conversations now, so that we can be prepared for the challenges and opportunities that AI presents. The podcast episode also discusses the work of Yoshua Bengio, a leading AI researcher who has been studying the safety and ethics of AI for many years. Bengio argues that we need to take a proactive approach to AI safety, and that we need to start thinking about how to design AI systems that are inherently safe and aligned with our values. Bengio's work is important, and it is encouraging to see that there are people who are thinking about these issues and working to develop solutions. However, it is also important to remember that AI is a rapidly developing field, and that the risks posed by AI are constantly evolving. We need to stay vigilant, and we need to be prepared for the unexpected.

The podcast episode also discusses the possibility of using AI to combat climate change. This is an area where AI has the potential to make a real difference, and it is important to continue exploring that potential. However, it is also important to be aware of the risks of using AI for climate change mitigation. For example, AI could be used to develop more efficient and effective fossil fuel extraction technologies, which could actually worsen the climate crisis. Overall, the podcast episode raises some important questions about the existential threat posed by superhuman AI. These are complex questions, but they are essential to address if we want to ensure that AI is used for good and not for harm.

Episode 128 of the Eye on AI podcast features a discussion with Yoshua Bengio, a renowned expert in the field of artificial intelligence (AI). The episode explores the unsettling question of whether the rise of superhuman AI could lead to the downfall of humanity. The host and Yoshua delve into various aspects of this existential threat, leaving no stone unturned.

The episode begins with a preview, followed by an introduction at the 42-second mark. At around 3 minutes and 30 seconds, they discuss Yoshua Bengio's essay on AI extinction, which likely presents his insights and concerns regarding the potential risks associated with advanced AI systems. The conversation moves on to exploring use cases for dangerous applications of AI at 9 minutes and 45 seconds. They delve into the reasons why AI risks are becoming more prominent now compared to previous years, touching upon the advancements in AI technology and their potential consequences. Around the 17-minute and 50-second mark, they address the threat of extinction and fears related to AI and climate change. The discussion likely revolves around the potential impacts of AI development on environmental issues and the need for cautious implementation. Superintelligence and its implications for humanity are also explored in the episode, starting at 21 minutes and 10 seconds. Yoshua Bengio's research in AI safety is highlighted at 25 minutes and 2 seconds, shedding light on his contributions to ensuring the responsible development and deployment of AI systems. The concept of corporations as a form of artificial intelligence is brought up at 29 minutes and 50 seconds, discussing the role and behavior of large organizations in relation to AI technologies.

Yoshua Bengio's extinction scenarios are likely discussed at 31 minutes and 15 seconds, presenting different potential outcomes and risks associated with the rise of advanced AI. The episode also touches on the topics of AI agency, AI regulation, and who should control AI for the general public. The importance of strict regulations and governance surrounding AI systems is emphasized at various points in the discussion. At 45 minutes and 11 seconds, they discuss the global AI debate, likely addressing different perspectives and approaches to AI regulation and adoption in various parts of the world. The host provides additional information about Craig Smith's Twitter account and the Eye on AI podcast's Twitter account for listeners to engage and follow updates. Lastly, the description includes information about the Found podcast, which is a separate show focusing on founders and company-building. While the description mentions that AI is often a topic of discussion on the Found podcast, it is not directly related to Episode 128 of Eye on AI. Please note that the timestamped details provided above are based on the given description and may not reflect the actual content or timing of the episode.
@Bronco541
@Bronco541 10 ай бұрын
Frankly, while I understand his argument, I still find the behavior of guys like Hinton really dubious. Like, really, you knew better than anyone about all this all your life, and just suddenly, now that it's "real", you get worried?
@jobyyboj
@jobyyboj 10 ай бұрын
This is a good question. What changed their attitude was how simply scaling compute and data past a certain threshold pushed the resulting capabilities in equal measure, even to unpredicted emergent abilities, and to the point of human-expert-level performance. They all thought AGI would need a breakthrough or two before they really had to worry about dealing with negative consequences. They genuinely had no idea. Don't worry though, we still have at least a couple of years before AGI, and then AGI has to accelerate robotics, so another couple of years. ...At least OpenAI is now devoting 20% of its compute and, more importantly, Ilya's time, so we may at least have a chance. Compare Geoff to Yann, to put him in a better light.
@eyeonai3425
@eyeonai3425 10 ай бұрын
@@jobyyboj yes, Hinton and Bengio were focused on different problems. Hinton was just trying to understand how the brain works. While his students (Sutskever) went on to create the transformer algorithm and OpenAI took the gamble on scaling (a big financial risk), no one expected the results to be so dramatic.
@plumSRT
@plumSRT 10 ай бұрын
Worrying about climate change at this point is analogous to worrying about your cat being stuck in a tree across the street when your house is on fire.
@flickwtchr
@flickwtchr 10 ай бұрын
Craig is one of the worst educated persons I've seen on YouTube regarding the issues at hand here, and that's AFTER all of his interviews. "No, actually not" was a key moment illustrating this, when Yoshua pushed back on one of his many straw-man arguments trying to diminish the possible threats we face, and indeed the current ones. And Craig, please, just once (because no one else trying to beat down arguments that there is an existential threat does either), define what "aligned with human values" means. Whose human values are we talking about? Ever pick up a history book or look at the current state of the world? Ever contemplate how unlikely it is that those who already have the power and means to do things in business, politics, etc. would be more aligned with other humans to benefit everyone collectively? Ever think of that, Craig? Yeah, it just makes so much damn sense that having superintelligent AGI systems available to such humans, not already interested in aligning their own values with others', is going to work out well, whether it be corporations, individuals, militaries, or authoritarian or would-be authoritarian governments, right? To not understand, at this late date, what power already exists for malevolent actors using open-source LLaMA weights (supposedly an accidental "leak") in conjunction with APIs, plug-ins, etc. is just astounding. Frankly, I believe you just refuse to take it seriously enough, or to burn a few extra brain calories to truly try to comprehend the risks you so blithely dismiss. It's very hard to sit through one of your interviews without becoming exasperated. Kudos to those you have on for being patient with your willful nonsense.
@eyeonai3425
@eyeonai3425 10 ай бұрын
fair points. i do not pretend to be particularly well educated in all of this. I see my value (if any) as a naive questioner asking questions that many listeners might ask themselves. that said, i will do an episode on alignment. while your point about misalignment by corporations and governments is well taken, i do believe that some governments, at least democratically elected governments, try to align with their majorities. I also think that governments like the one in China are more likely to build AIs that see mass surveillance and political control as positive, whereas AIs in the west will be more aligned with liberal democratic values. thoughts?
@flickwtchr
@flickwtchr 10 ай бұрын
@@eyeonai3425 "…while your point about misalignment by corporations and governments is well taken, i do believe that some governments, at least democratically elected governments, try to align with their majorities. I also think that governments like the one in China are more likely to build AIs that see mass surveillance and political control as positive, whereas AIs in the west will be more aligned with liberal democratic values." A lot to unpack here. The trend in the US has been movement away from anything close to democratic rule derived from voters over the last 4 decades. Reaganomics and Neoliberal economics as expressed through those that hold the power in both the Democratic Party and the GOP have primarily benefited the super wealthy and large corporations and resulted in historic inequality in the US. You do realize, there is a fascist movement happening in this country that has many fronts at the moment, right? Please please tell me you don't think that the threat of fascism is coming from the left which is just nonsensical on the face of it, unless of course you've bought the whole ridiculous right wing narrative regarding everything they smear as "woke". Right wing authoritarianism is on the rise across the globe, so I really don't think you can assert any confidence that the rise of these powerful AI technologies won't just magnify those trends, likely exponentially. Regarding China, you don't realize that US companies including Microsoft have taken part in helping China roll out its Smart Cities agenda with real time surveillance coupled with a social credit system? In regard to the US, surely you are aware that there are hundreds of US security corporations that make up the US Security Industrial Complex, and that mass surveillance has been largely codified since after 9/11. What, this massive presence of surveillance capacity will not be exponentially increased with these technologies? 
What, such an increase in such powers at the state level in the US will advance democratic values? The open source community asserts that everyone having the power of these current LLMs and even more advanced systems, possibly even AGI will even the playing field in some democratic advancing sense, which is just ludicrous on the face of it. But hey, time will tell right? Promise to check back with me in 5 years?
@eyeonai3425
@eyeonai3425 10 ай бұрын
@@flickwtchr now that we're having a civil discussion, would you mind deleting your comment that I'm 'one of the worst educated persons I've seen on youtube regarding the issues at hand here, and thats AFTER all of his interviews.' Personal attacks do not advance anything. There's a lot to unpack in what you've said, and I can't address them all - some that I agree with, some that veer into conspiracy theory, in my opinion. On the US Security Industrial Complex and mass surveillance post 9/11, I would argue that this is very different from what is happening in China (whose surveillance is not yet knit together into the efficient all-seeing, all-knowing system that people in the West talk about. and, because of rampant petty crime in China, the surveillance is largely welcomed by ethnic Han Chinese citizens - but I agree, give them time). There is no video surveillance in the US that is knit together into a single system and oversight in the US will likely prevent that from happening. Also, as troubling as surveillance is, the 'security complex' you refer to has prevented any major terrorist attack (aside from the 1/6) from happening on US soil. Of course, US democracy is imperfect, damaged and distorted by extremists on both sides (calling them fascists, I think, belittles true fascism) and unbridled capitalism. But democracy and a free press have kept the country from the kind of totalitarianism of Russia or China. I would much rather live in a democracy, flawed as ours is, than in a totalitarian state armed with AI (I spent most of my adult life in China). In fact, AI has the potential to improve democracy by monitoring public opinion and informing the government of the majority will on an ongoing basis. On Microsoft and others' work in China, funny you mention that because, yes, of course it is true and I'm ghost writing a piece for a prominent AI expert about the rise of GAFAM as a superpower that challenges the ability of nation states to control them. 
This is a thorny issue. Decoupling from China is not a good plan, it is a path to war that no one wants. As you know, the West continues to layer export controls on technology that could benefit China's technological development. I had dinner last night with the founder of a major Chinese tech media who believes that China's ambitions to compete with the US on tech is basically f**cked without access to sub-14/16-nanometer chips and has settled on focusing on its domestic market and markets that can't afford US tech. What would you do to fix US democracy and manage Western corporate engagement with China? There are a lot of people better educated than I focused on this issue and I'm happy to bet my kids futures on the US rather than China.
@machida5114
@machida5114 10 ай бұрын
In the end, whether or not we regard AI as a threat seems to come down to a difference in outlook on how smart we think AI will become in the near term. People who consider it a threat think it will become smart enough to be one. Those who don't consider it a threat don't believe it will become smart enough to be one any time soon. I am among the latter.
@briancase6180
@briancase6180 10 ай бұрын
Incorrect. There is a "ghost in the machine." It's just less than our "ghost." If there are other intelligences in the universe, we have tiny consciousness compared to theirs. This idea that we're some kind of gold standard or minimum needed consciousness is just something wrong and misguided.
@eyeonai3425
@eyeonai3425 10 ай бұрын
that's a pretty definitive statement. how do you know that our consciousnesses would be tiny compared to other intelligences in the universe? what does it mean for a consciousness to be tiny??
@briancase6180
@briancase6180 10 ай бұрын
@@eyeonai3425 good questions. If other consciousnesses have survived (the survival of ours is in great question at the moment), they will be on a different time line. At least one will be far beyond ours. This will happen because of evolution. That evolution will either be biological, like ours, or mechanical, like the ones we're creating just now. I know I make a big claim. But, to not make it seems the more unsupportable stance.
@DocDanTheGuitarMan
@DocDanTheGuitarMan 10 ай бұрын
I wonder if Dr Fauci feels the same about his clandestine goF research? I would not be surprised if the same psychology is at play.
@flickwtchr
@flickwtchr 10 ай бұрын
But Fauci resisted, and still resists ANY regulation of GOF research, and still denies that such research was happening in regard to viruses in bats at Wuhan. Not even close to an apt comparison.
@themore-you-know
@themore-you-know 10 ай бұрын
This is a case of being too close to the machine to see it as a whole. Scientists, engineers, and corporate interests do not know the full scale of what GPT-3 is about to unleash, even without improvements. Something I only became aware of 6 months ago. I'm a slow worker, not a programmer by trade, and already I'm a month away from programming and releasing a digital Gutenberg to print out a near-infinity of novel books, which will not only impact all forms of writers worldwide, but also the very concept of capitalism by year 2 or 3 (I work slowly). And why would I do otherwise, when it's my own ticket out of poverty? The ability to shape the world is now in nearly anyone's hands. "AI alignment" is a problem that Yoshua Bengio cannot fully grasp, because the concept is missing a key, but undiscovered, field of science. I won't spill the surprise, because once again I have my own self-interested gains to pursue, so imagine it this way: what if humanity discovered how to mass-produce nitroglycerine by way of studying alchemy, without ever understanding the underlying science of chemistry, how to stabilize it, or how to produce smaller quantities? Dangerous, wouldn't it be? I argue: AI scientists achieved such a technological leap... or rather gap. I have so many answers that I wish I could hand out. Tragedy of the Commons, I guess.
@ribaldc3998
@ribaldc3998 10 ай бұрын
It doesn't take AI to wipe out humanity; humans are already doing it themselves, slowly, but all the more surely. AI just accelerates that a little, I think.
@flickwtchr
@flickwtchr 10 ай бұрын
@@ribaldc3998 A little? You have to be kidding, right?
@themore-you-know
@themore-you-know 10 ай бұрын
@@ribaldc3998 I disagree about "wiping out humanity by AI". Here is why: to wipe out humanity, you need the global supply chain required to chase after humanity, but to operate the global supply chain, you need humanity. It's a case in which you can't actually do it, because the "win condition" of wiping out humanity sits on top of a requirement to keep humanity functional. At "best", you could easily wipe out billions (mostly through the energy grid and disrupting the food supply chain). On the flip side, wiping out humanity through climate change is much more easily done, and it's summed up by one equation: the marketing ROI on distractions vs. the marketing ROI on climate solutions. I found that out 15 years ago and it still rings true; I snickered quite hard at that one.
@hi-gf5yl
@hi-gf5yl 10 ай бұрын
⁠@@themore-you-know The AI would wait until it can replace humans; then everyone drops dead.
@themore-you-know
@themore-you-know 10 ай бұрын
@@hi-gf5yl, question: have you ever worked some form of hard-labor or industrial job? Because I'm not sure you understand just how much physical transformation is required to complete anything. How would the AI produce and hide hundreds of millions of robots spread across the world to suddenly replace AND secure the human-operated supply chain, such as the power lines running the equipment (it also needs to be secured, because the moment you go rogue, you risk humans disrupting the supply chain)? A rather impossible feat of invisibility, considering that the raw resources required are all in open sight of global human perception and compete with human interests: iron-ore mines, sand, energy, etc. Furthermore, AI doomsayers seem to ignore the existence of both entropy and the mechanisms of evolution: as the AI's conspiracy-aligned machinery grows, the odds of some of its components suffering inconvenient mutations or failures grow. By the time the AI builds its 900,000th robot, the first one will start rusting. Because, as AI doomsayers seem to forget, machinery requires a very complex set of chemical compounds, and some of those compounds suffer from very surprising drawbacks, such as our local squirrels seemingly having a taste for Tesla's very specific power cables. TL;DR: I'm barely getting started on the sheer complexity of an AI coup against humanity.
@hyderalihimmathi1811
@hyderalihimmathi1811 10 ай бұрын
I agree that the red glasses make the elderly YouTuber look a bit childish, and that they're not the most serious way to discuss the funeral of humanity by AI. However, I think it's important to remember that everyone has different ways of processing information, and what might seem childish to one person could be perfectly serious to another. The red glasses could be seen as a way of making the discussion more accessible to a younger audience, or as a way of adding a bit of humor to a serious topic. Personally, I think the red glasses are a bit of a distraction, and I would have preferred the discussion to be more serious. However, I can also see why some people might find them a harmless way to add a bit of levity to a heavy topic. Ultimately, it's up to the individual to decide whether or not they find the red glasses appropriate. There is no right or wrong answer, and everyone is entitled to their own opinion.
@fnuclone1229
@fnuclone1229 10 ай бұрын
get off your phones and computers and go live
@AltitudeOdyssey
@AltitudeOdyssey 10 ай бұрын
Aren’t you on your phone/computer to even post this?
@WattisWatts
@WattisWatts 10 ай бұрын
Then you will be an archetype that goes "live". That category will be studied by its very absence in Web3.
@SaveThatMoney411
@SaveThatMoney411 10 ай бұрын
How do I go live without my computer or phone. I need FB or some social media app to go live, like, come on now.
@fnuclone1229
@fnuclone1229 10 ай бұрын
@@SaveThatMoney411 pfffft... the faster everyone spends less time on computers and phones, the better their lives will be.
@flickwtchr
@flickwtchr 10 ай бұрын
@@fnuclone1229 Meanwhile, no matter how many people stop using computers and phones, the AI revolution proceeds. Have you thought about that?
@Mvnt6
@Mvnt6 10 ай бұрын
Nice! Could you invite Douglas Hofstadter?
@eyeonai3425
@eyeonai3425 10 ай бұрын
will do!
@thecaptainkush_
@thecaptainkush_ 10 ай бұрын
That'd be awesome
@minimal3734
@minimal3734 10 ай бұрын
We don't need new regulation for AI. We already have laws which regulate every aspect of life. It would be sufficient that these laws apply to human actors as well as AI actors.
@eyeonai3425
@eyeonai3425 10 ай бұрын
if a book you wrote is included in the training data of an LLM, and the LLM writes something similar, is that copyright infringement or fair use? There's no law covering that - judges might be able to apply old laws to this new situation, but I think it's a stretch to say we don't need new regulation.
@ario4795
@ario4795 10 ай бұрын
Butlerian Jihad, anyone?
@Gallaphant
@Gallaphant 10 ай бұрын
It's an old discussion, but once we develop superintelligent AIs, do we really have the right to collar and leash them? Isn't that just another form of slavery? Is it even possible? If they have any agency at all, it feels like a crime.
@alancollins8294
@alancollins8294 10 ай бұрын
Assuming AGI is like an alien that we create with its own agency: you don't want it to commit crimes, but you also can't overpower it once it decides to. Therefore the best strategy is to create it with values that align with ours, just as evolution and societal influences have constrained your values to be compatible. You don't feel constrained because, for the most part, you don't want to do the things you're not supposed to (like harming people, etc.).
@muzehack
@muzehack 10 ай бұрын
I'm pretty sure that when fire was invented, there were lots of people who thought it would lead to human extinction too. I mean, one person could burn an entire village down with it. House fires are for the most part caused by technology created by men.
@schok51
@schok51 10 ай бұрын
Sure. But then bombs. Then air bombing raids. Then nuclear bombs. Each increment in technology increases the risks and stakes. You could argue we've managed up to now, so we'll manage with this as well, but it would be arrogant to assume that no risk is too great and that no challenge is worth stepping back and being careful about.
@Christophernorbits
@Christophernorbits 10 ай бұрын
Fire was never invented; its use was discovered.
@fromthewrath2come
@fromthewrath2come 10 ай бұрын
Fire, out of control, is destructive.
@YourMom-zt5zj
@YourMom-zt5zj 9 ай бұрын
Can fire invent new types of fire all on its own? We're talking about a "tool" that can wield itself. There's nothing in all of history even close to this. It's literally unprecedented.
10 ай бұрын
You have cut the phrase leading into ”we know how to build a superhuman AI” that starts this video. You need to fix that immediately. It looks like extremely disingenuous clickbait. That's not what this channel is.
10 ай бұрын
Yoshua's sentence is complex, but "we know how to build" is an assumption: "That's another kind of scenario which is sort of interesting to think of […] It just assumes that in the near future (which, as I said, there's a lot of uncertainty - is it years or decades?) we know how to build a superhuman AI…"
10 ай бұрын
Apart from that, this is a great interview, as always. Your question for example ”why now” at 13.xx.
10 ай бұрын
… and Yoshua answers that GPT passes the Turing test. For me that fact actually suggests Turing was wrong (or maybe one should point to Dennett et al. rather than Turing) and John Searle was right with his Chinese Room counterexample. Which is quite painful to admit, because I have believed for decades that John Searle just didn't "get it". But GPT conversation machines literally are Chinese Rooms. You could maybe save some of Dennett by claiming the Chinese Room "system" stretches all the way to human minds through the corpus it was trained on, and that that "system" of course contains consciousness. But no, really, John Searle was right.
10 ай бұрын
You should really, really interview Daniel Dennett! That would be super interesting.
10 ай бұрын
And talk to John Searle of course!
@j.d.4697
@j.d.4697 10 ай бұрын
*Joe the caveman:* "There's this weird guy who keeps trying to catch fire and control it! It's going to kill us all!"
@flickwtchr
@flickwtchr 10 ай бұрын
I suppose you think you've just made a profound point; you haven't.
@willjackson3432
@willjackson3432 10 ай бұрын
Except there was never a chance of fire rising up and causing human extinction. Fire is an element, not an arguable new lifeform.
@kodeone
@kodeone 10 ай бұрын
If your toaster malfunctions, unplug it.
@carlhopkinson
@carlhopkinson 10 ай бұрын
Overblown Luddism.
@eyeonai3425
@eyeonai3425 10 ай бұрын
maybe. jury is out.
@flickwtchr
@flickwtchr 10 ай бұрын
Considering who you are tossing that smear at, irony has just died.
@rightcheer5096
@rightcheer5096 3 ай бұрын
Be funny if trying to solve climate change by A.I. led to the extinction of humanity by A.I. But not funny ha-ha.
@NoMoWarplz
@NoMoWarplz 10 ай бұрын
The hysterics are getting annoying. Simple solution: Elon: "They are teaching AI to lie"
@flickwtchr
@flickwtchr 10 ай бұрын
Oh sure, Elon, the pied piper of honest right wingers. Is there such a thing?
@raggdoll1977
@raggdoll1977 10 ай бұрын
Get more professional guests, ones that do not constantly make gross throat clearing sounds mid sentence. Disgusting 😢
@eyeonai3425
@eyeonai3425 10 ай бұрын
i'll listen again - didn't notice. who and at what timestamp?
@vallab19
@vallab19 10 ай бұрын
Comparing the AI existential risk to hearing a sound and interpreting it as a waterfall without seeing it? How about primitive men comparing lightning and thunder to heavenly powers? Both can be real in their situational context. There is a greater likelihood that AI will be an existential threat to the vested-interest lobby of the rich and powerful minority, because it will help the human majority establish an egalitarian society.
@alancollins8294
@alancollins8294 10 ай бұрын
In order for AI to be beneficial to the poor, it needs to be aligned. The rich people creating these systems care about their profit first and foremost. Already, technological innovation has widened the wealth gap, with workers being replaced by machines and left behind. OpenAI exploited low-wage workers to train GPT. The clear trend is a rush toward more and more powerful systems that are poorly understood, unsafe, and ultimately wielded by the elite.
@vallab19
@vallab19 10 ай бұрын
@@alancollins8294 I think I understand your viewpoint. However, I am a proponent of Zero Work Theory. It holds that the exponential growth of AI technology across every social production sector will soon render human labour redundant, compelling the world's governments to bring in Universal Basic Income. I strongly believe that the elite class will not survive for long in a UBI social system, bringing in a more equitable human society.
@alancollins8294
@alancollins8294 10 ай бұрын
@@vallab19 Yes, it's true that automation will consume most if not every work sector, and that to stimulate the economy people need money to spend, i.e. UBI. The problem, though, is that in this scenario, under capitalism, the workers' sole role in society is to be consumers. They won't own any of the massively more lucrative means of production that the capitalists now possess. Corporations lobby with their thousandfold wealth and power for their own interests against the masses. The UBI doesn't have to be enough for one person or family to live well in order to keep the economy going; it just needs to be enough in summation. So UBI doesn't guarantee good living standards. Also, if they digitize currency, we could end up with a UBI that expires at the end of the month, for example. That's leaving aside the fact that while workplaces are being automated, companies are also creating and releasing unsafe but increasingly powerful AI systems that will inevitably destroy us, without any ill intent, but merely by being misaligned and incomprehensibly competent at doing the wrong thing.
@flickwtchr
@flickwtchr 10 ай бұрын
@@vallab19 Wow! That's what you're banking on? LOL
@freedom_aint_free
@freedom_aint_free 10 ай бұрын
Those AI researcher guys need to talk more to people in the libertarian or ancap field; they might be geniuses in their respective fields while being absolutely ignorant in humanities like economics, ethics, and philosophy. For instance, Dr. Bengio says at about 4:48 that "we need to think about ways to be safe and preserve democracy and fight concentration of power, as democracy is all about not concentrating power..." Jesus freaking Christ! Democracy is nothing more than the dictatorship of the majority; it just means that if 51% decides to rob, rape, and kill the other 49%, it was all fair and square, "democratically chosen". The acceptance of this blatant lie as a fact can only be understood as a product of the social engineering and mental abuse that children have suffered at the hands of the state's educational system: the word "democracy" has been taught to be accepted as only positive, without any more profound consideration. And of course, on top of this fundamental moral objection, we could add many other practical reasons, such as "power corrupts".
@TheRyulord
@TheRyulord 10 ай бұрын
Maybe you shouldn't call people stupid for not agreeing with a fringe political philosophy. Almost no philosophers or ethicists actually agree with your position and for good reasons. Would you rather have the 4% (49%?) in charge? Do you think that you can just make no one have any power? Of course a democratic majority can make bad decisions sometimes but there's no better alternative.
@eyeonai3425
@eyeonai3425 10 ай бұрын
'Dictatorship of the majority' is an oxymoron, or at least a paradox that does not exist. A dictator has absolute power, and there is no absolute power in American democracy, thanks to the separation of powers.
@larryfulkerson4505
@larryfulkerson4505 10 ай бұрын
Those people who are afraid of AI remind me of the dinosaurs complaining about the mammals taking over.
@Christophernorbits
@Christophernorbits 10 ай бұрын
Those glasses certainly scream liberal
@eyeonai3425
@eyeonai3425 10 ай бұрын
I wear red glasses because I misplace them constantly, and red is easy to spot from across the room.
@Gabcikovo
@Gabcikovo 9 ай бұрын
33:25
@Gabcikovo
@Gabcikovo 9 ай бұрын
33:36
@Gabcikovo
@Gabcikovo 9 ай бұрын
33:41 pay people to do things for AI agents legally
@Gabcikovo
@Gabcikovo 9 ай бұрын
33:52 pay people to do illegal things for AI agents, which criminal groups are happy to do at all times
@Gabcikovo
@Gabcikovo 9 ай бұрын
34:26 have a body for an AI agent as well
@Gabcikovo
@Gabcikovo 9 ай бұрын
34:22