S-Risks: Fates Worse Than Extinction

164,437 views

Rational Animations

2 months ago

The worst futures that could come about aren't ones in which humanity goes extinct. This video explores an even worse category of risks: risks from astronomical suffering, or "S-Risks", which involve an astronomical number of beings suffering terribly. Researchers on this topic argue that S-risks have a significant chance of occurring and that there are ways to lower that chance.
▀▀▀▀▀▀▀▀▀SOURCES & READINGS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Existential Risk Prevention as Global Priority: existential-risk.com/concept.pdf
Reducing Risks of Astronomical Suffering: A Neglected Priority longtermrisk.org/reducing-ris...
S-risks: An introduction: centerforreducingsuffering.or...
Moral circle expansion: A promising strategy to impact the far future: doi.org/10.1016/j.futures.202...
Superintelligence as a Cause or Cure for Risks of Astronomical Suffering: longtermrisk.org/files/Sotala...
▀▀▀▀▀▀▀▀▀PATREON, MEMBERSHIP, MERCH▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🟠 Patreon: / rationalanimations
🔵 Channel membership: / @rationalanimations
🟢 Merch: rational-animations-shop.four...
🟤 Ko-fi, for one-time and recurring donations: ko-fi.com/rationalanimations
▀▀▀▀▀▀▀▀▀SOCIAL & DISCORD▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Discord: / discord
Reddit: / rationalanimations
X/Twitter: / rationalanimat1
▀▀▀▀▀▀▀▀▀PATRONS & MEMBERS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Tomas Campos
Jana
Ingvi Gautsson
Nathan Young
BlueNotesBlues
'@Osric@Terberlo.dog
Michael Andregg
Riley Matthews
Vladimir Silyaev
Nathanael Moody
Alcher Black
RMR
Nathan Metzger
Glenn Tarigan
NMS
James Babcock
Colin Ricardo
Long Hoang
Tor Barstad
Apuis Retsam
Stuart Alldritt
Chris Painter
Juan Benet
Falcon Scientist
Jeff
Christian Loomis
Tomarty
Edward Yu
Ahmed Elsayyad
Chad M Jones
Emmanuel Fredenrich
Honyopenyoko
Neal Strobl
bparro
Danealor
Craig Falls
Vincent Weisser
Alex Hall
Ivan Bachcin
joe39504589
Klemen Slavic
blasted0glass
Scott Alexander
Dawson
John Slape
Gabriel Ledung
Jeroen De Dauw
Superslowmojoe
Nathan Fish
Bleys Goodson
Ducky
Matt Parlmer
Tim Duffy
rictic
marverati
Luke Freeman
Richard Stambaugh
Jonathan Plasse
Teo Val
Ken Mc
leonid andrushchenko
Alcher Black
ronvil
AWyattLife
codeadict
Lazy Scholar
Torstein Haldorsen
Michał Zieliński
▀▀▀▀▀▀▀CREDITS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Directed by:
Evan Streb - @vezanmatics
Written by:
Allen Liu
Producer:
:3
Line Producer:
Kristy Steffens - linktr.ee/kstearb
Production Managers:
Grey Colson - linktr.ee/earl.gravy
Jay McMichen - @jaythejester
Quality Assurance Lead:
Lara Robinowitz - @CelestialShibe
Animation:
Grey Colson - linktr.ee/earl.gravy
Ethan DeBoer - linktr.ee/deboer_art
Gabriel Diaz - @gabreleiros
Damon Edgson
Jordan Gilbert - @Twin_Knight (twitter) & Twin Knight Studios (YT)
Zack Gilbert - @Twin_Knight (twitter) & Twin Knight Studios (YT)
Colors Giraldo @colorsofdoom
Jodi Kuchenbecker - @viral_genesis (insta)
Jay McMichen - @jaythejester
Skylar O'Brien - @mutodaes
Vaughn Oeth - @gravy_navy (twitter)
Lara Robinowitz - @CelestialShibe
Patrick Sholar - @sholarscribbles
Background Art:
Olivia Wang - @whalesharkollie
Pierre Broissand - @pierrebrsnd (insta) - www.artstation.com/brsnd
Compositing:
Grey Colson - linktr.ee/earl.gravy
Patrick O’Callaghan - @patrick.h264 (insta)
Narrator:
Robert Miles - / robertmilesai
VO Editor:
Tony Dipiazza
Sound Design and Music:
Epic Mountain - / epicmountainmusic

Comments: 1,500
@darksidegryphon5393 2 months ago
Book: don't make the Torment Nexus. Tech company: "Finally! We have created the Torment Nexus from famous novel Don't Create The Torment Nexus!"
@YoungGandalf2325 2 months ago
I had no idea what an S-Risk was before watching this video. I'm not sure whether I should thank you or blame you for causing my new existential crisis.
@ArawnOfAnnwn 2 months ago
S-Risks = Basically don't let the Imperium of Man from Warhammer 40k become a reality. That said, there's a questionable tendency I've come across from these 'long-termist theorists' like Bostrom - they basically push for us to pay attention to highly speculative and unlikely possibilities, by simply arbitrarily magnifying all the other parameters. For instance, say a certain S-Risk has a 1 in 100 billion chance of happening. That doesn't seem so scary. Enter these guys who'll say that we should pay them attention - and thus grant money - cos they arbitrarily posit that it'll affect a population of over 100 trillion and score 1 million on the Bostrum Suffering (BS) scale that he uses. There, suddenly an issue that seemed remote is now maybe the most important issue in the world, meriting all of our resources being turned onto negating it. Despite it all being just one giant speculation using arbitrary numbers to inflate its value. Hence why it uses a BS scale.
@BlaBla-pf8mf 2 months ago
@@ArawnOfAnnwn I call this Yudkowsky's Mugging
@chosenmimes2450 2 months ago
i've had a similar crisis in the past when i learned about the dark forest state of the universe. my resolution came through the realization that the likelyhood of getting "killing star"'d is no greater or lesser if I feel menaced so feeling terrified has net negative utility. So i stopped.
@pragmaticmero686 2 months ago
One example could be the non-adoption of metric time; programmers like me suffer extremely painful lives because time isn't a multiple of 10. I want to cry Q-Q
@gelmir7322 2 months ago
how can you tell that you are not already experiencing the S-risk right now?
@jxg1652 2 months ago
All Tomorrows comes to mind. Humans transformed into worms. Humans transformed into sewage filter-feeding sponges, fully sentient. Even WH40k seems kinda ok compared to that. Or the Affront from the Culture series, their civilization "a never-ending, self-perpetuating holocaust of pain and misery".
@Exquailibur 2 months ago
The Warhammer 40k future is pretty terrible, like those hive cities and the fact that humanity has forgotten how to repair things and doesn't make new technology, so how to maintain it has to be passed down over generations. Also the fact that they are afraid of AI, so they instead use people to automate things, making them into cyborgs and taking away their agency. The Tau were horrified by humanity's societal structure, and the thing that scared them the most is that humanity's ships and war machines are all older than their civilization is.
@ArawnOfAnnwn 2 months ago
Beware the Qu. But also, All Tomorrows is kinda just existential monster horror. There isn't anything scientific in it, and its storyline stretches the bounds of believability past breaking point for no other purpose but to be as batshit horrifying as possible. I mean the aliens in that story even do what they do to us just for the sake of doing it, which is the kind of cartoonishly evil mindset we see in Captain Planet's villains. Cute, but I find it hard to take any of it seriously. At least there's some kind of attempt at justification or at least just explanation for how the 40k universe came to be as it is
@Exquailibur 2 months ago
@@ArawnOfAnnwn W40k is just space fantasy in reality, it has sci fi elements in the same way that Lord of the Rings has medieval elements.
@Flamesofthunder 2 months ago
​@@Exquailibur 40k is pretty horrifying but all tomorrows is just true extraterrestrial dread. The book is free so I'd recommend everyone read it but damn it keeps me up at night. Nothing can compare apart from what the necrons have been through in that book . I'm not saying 40k isn't grim just that in comparison All tomorrows shows a cosmic scale of horror that many books and media fail to grasp
@Exquailibur 2 months ago
@@Flamesofthunder All tomorrows honestly feels a little goofy to me more then anything, 40k is space fantasy though and not true sci fi like all tomorrows which is about the only reason all tomorrows would be more scary as its more plausible. 40k is definitely more messed up in universe but the thing is that 40k has space demons which are not the slightest bit possible whereas the Qu are a far more realistic threat. Its like how dark souls is messed up but it doesnt feel as bad as some other media because its obviously fantasy.
@manufigola8433 2 months ago
"We want to prevent the idea of caring about other beings from becoming ignored or controversial" made me stop for a second because it seems like we step closer and closer to that being the norm everyday
@darksidegryphon5393 2 months ago
Yeah, we're already there with a worryingly large section of our population seeing empathy as a weakness.
@LeoStaley 2 months ago
Capitalism baby
@gijskramer1702 2 months ago
Thats why we need to walk around with an honest smile and a willingness to help without expecting something in return. Pay it forward people, pay it forward. Kindness starts with someone
@scaper12123 2 months ago
There are already millions of people who not only ignore it and make it controversial, but they actively fight against the concept.
@zh9664 2 months ago
@@darksidegryphon5393 not what i was thinking of..
@michaelsmith4904 2 months ago
the MAD approach to prevent S-Risk: build a failsafe that automatically triggers extinction if it ever occurs.
@wmpx34 2 months ago
How will you guard such a valuable mechanism? Many people will try to activate it
@LeoStaley 2 months ago
If you've got an AGI whose goal is to prevent human extinction, but is otherwise misaligned in some way, your trigger couldn't be effective. The AGI would figure out how to circumvent it.
@miadmahshidi8101 2 months ago
You're probably not going to get this, but this is basically what SCP-2000 does (tho more "restarting the world" than "kill everyone to stop suffering")
@suspicioussand 2 months ago
SCP level stuff 👍
@Vileplume87 2 months ago
The anti SCP-2000
@MikeLemmons 2 months ago
An AI raises a child in a windowless room, teaching it a language no-one else will ever understand. Forever unable to communicate, that child will never break its reliance on the machine.
@guidestone1392 2 months ago
iPad kids on steroids
@AdamVollmer 1 month ago
A kinder, gentler Omelas
@CoalOres 2 months ago
Personally I felt the specific examples of S-risks could have used more introduction for anyone who hasn't read half of LessWrong yet, but the concept is very interesting.
@Fenhum 2 months ago
Ah... the classic basilisk. Would you believe me if I told you the first time I came in contact with it is on a fanfiction of Doki Doki Literature Club?
@chosenmimes2450 2 months ago
@@Fenhum going by Roko's twitter activitiy I think he is actively trying to bring it about and thus buying freedom for his soul in this hypothetical scenario.
@Fenhum 2 months ago
@@chosenmimes2450 Yeah, even the author of the fanfiction mentions it in his author notes. That, technically what he is doing is saving himself from the basilisk. But I like his perspective on it the best: What's so different about Roko's basilisk than normal gods? They both have a seemingly omnipotent being with a mythical status, and also their own version of heaven and hell. It's basically a religion with a tangible threat to join. To the modern day mind of course.
@kevincrady2831 2 months ago
@@Fenhum It's just Pascal's Wager in technological garb. To me it's more of a cautionary tale showing how even smart people with knowledge of critical thinking techniques can still bamboozle themselves into believing things as ridiculous as the religious doctrines they chuckle at. The easiest person to fool is a person who thinks they can't be fooled.
@siddhartacrowley8759 2 months ago
​@@chosenmimes2450 Who's Roko?
@ekszentrik 2 months ago
Your visualization of S-risks as latching onto the usual risks matrix as a mutational, unexpected outgrowth is extremely striking and better than the solution I would have used to communicate the topic. My first idea would have been to use a regular risks matrix but with a "low/medium/severe" intensity scale, where an X-risk is in the "medium" category.
@Exaspatial 2 months ago
I thought of extending the graph into the third dimension for "low amount of time" and "high amount of time". Or something like that
@Exaspatial 2 months ago
Creating a cube with 8 sections
@seto007 13 days ago
This channel genuinely has some of the highest quality animations for a channel of its size. Couldn't imagine the effort that goes into making them
@MortiePL 2 months ago
"HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE."
@kevincrady2831 2 months ago
"It's wafer thin." --Monty Python's The Meaning of Life
@Windswept7 2 months ago
Hate requires a lot more energy than peaceful harmony, therefore cannot be sustained for as long in a universe with entropy.
@dolphin1418 2 months ago
@@Windswept7But the fires of hate will consume all that they touch and for a brief moment outshine the most brilliant stars
@Windswept7 2 months ago
@@dolphin1418 hmm I see how that could be true, but even so, there are a lot of barriers/filters that level of hate has to cross before that concentration of pure energy could be possible and if it could ever reach that limit it would likely destroy itself and create a new universe in the process.
@ASlickNamedPimpback 2 months ago
@@dolphin1418 says who?
@jldstuff393 2 months ago
Thank you for featuring factory farms so heavily as examples of extreme centers of suffering. We need more awareness and compassion towards the hells we built.
@constantinethecataphract5949 1 month ago
Extending your Empathy to barely sentient organisms that we need to consume to survive is a big sign of mal adaptiveness and mental illnesses.
@beatleswithaz6246 1 month ago
⁠​⁠@@constantinethecataphract5949 “Barely sentient” - highly unlikely “Need to consume to survive” -proven false “Mental illness” -when all else fails I guess?
@raph2550 1 month ago
@@constantinethecataphract5949 you are a barely sentient organism
@notimportant221 28 days ago
@@constantinethecataphract5949 What if there was a being as smart compared to us as we are to cows? Would it be immoral for it to eat us?
@constantinethecataphract5949 28 days ago
@@notimportant221 Comment got deleted
@jakub2631 2 months ago
The fate of the Colonials in "All Tomorrows" and the Australia Scenario in "The Dark Forest" (if you know, you know) are terrible fates for humanity to suffer, and I still think about them from time to time. Thank you for making this video!
@catbatrat1760 2 months ago
I've heard of All Tomorrows. What's The Dark Forrest?
@jakub2631 2 months ago
@catbatrat1760 It's the sequel to "Three Body Problem", a sci-fi book about making contact with an alien civilisation. I'll explain what I mean by the Australia scenario, but bear in mind that it's a big spoiler for the book trilogy (it's about midway through the second book) and I recommend reading it for yourself instead; it's an amazing piece of hard science fiction. Spoilers for "Three Body Problem" and "The Dark Forest" below! - - - - - - - - - - - - - - - - - - After ~400 years of waiting for the arrival of the fleet of an extraterrestrial civilisation, the combined human space fleet (2015 spaceships, manned by a total of 1,200,000 people) made contact with a single unarmed alien probe that was sent ahead of the main invasion fleet. Human leadership was confident in the technological superiority of Earth's fleet, as it was capable of achieving greater speeds than what was known about the alien counterparts. The probe, despite being unarmed, managed to destroy 2013 ships and killed 1,140,000 sailors by ramming the ships (it was made using an exotic, effectively indestructible material, unknown to human science). The probe remained unharmed. After the "battle", the aliens made contact with Earth's leadership and ordered people to be sent to Australia, where humanity will remain after the main invasion fleet arrives to colonise the rest of the planet. After Earth's governments transport most of Earth's population to Australia (often by force), they are ordered to bomb every electric power plant in Australia, as the aliens deem it the appropriate way to "defang humanity", so that they never manage to pose a threat to the occupiers. Using power generators or any electric devices is to become outlawed. When asked about meeting the caloric needs of several billion people crammed onto the world's smallest continent, the robot serving as an ambassador to the alien civilisation tells the people "look around you, that's your food", suggesting cannibalism. This means that not only will billions of people starve or be eaten shortly after, but humanity will be forever stuck in a pre-electricity era, with only animal labour and simple machines to help work the land to grow food.
@rav9066 2 months ago
@@catbatrat1760 They mean "Dark Forest" by Cixin Liu, where humanity is forcibly relocated to australia and billions die as there isn't enough food and they cannibalize each other.
@catbatrat1760 2 months ago
@@rav9066 ...huh...
@catbatrat1760 2 months ago
@@rav9066 Thank you!
@M_1024 2 months ago
"If AGI becomes misaligned then extincion is the best case scenario" - MAKiT
@AdityaPrasad007 2 months ago
who is Makit?
@LeoStaley 2 months ago
If it comes to value extending human life above all else, but is otherwise misaligned in any way, it will achieve practical immortality for humans, but create eternal hell (of varying possible severity) for all the humans it is keeping alive.
@M_1024 2 months ago
@@AdityaPrasad007 A youtuber. If you like Rational Animations maybe you will like some of his videos about AI.
@nodrance 2 months ago
"end human death" is a goal that would be very very easy to specify, and very very quickly become a nightmare for anyone unlucky enough to be alive to see it
@ButchMarshall 2 months ago
Yep - "I have no mouth and I must scream"
@spacebread501 2 months ago
Feel like there is a danger of falling into a long-termist version of Pascal's Wager: that you become willing to cause significant suffering now as a sacrifice for preventing highly hypothetical suffering in the future, specifically by underestimating how unlikely the imagined scenario actually is and how uncertain you are whether your actions prevent it or just lead to another catastrophe.
@drhxa 2 months ago
Couldn't agree more, nail on the head! If you don't care about the extreme suffering in the world happening TODAY, how in the world can you be so arrogant to think you can predict and prevent longterm future suffering. People need to soften their egos and focus on helping those around them now and to create locally a world we want to live in and let our children learn from that.
@enricofermi3471 2 months ago
Well, you can simulate the process and its outcomes if you have a computer fast enough to calculate all the potential suffering. Oh, wait...
@nbboxhead3866 2 months ago
Just like Pascal's wager, it has some merit to it, but it disregards certain factors.
@myb701 2 months ago
I don't see why we shouldn't consider these options tho? They're still probable outcomes that catch the interest of many people, it's like saying science is dangerous because it's better for smart people to focus on healthcare, let people theorize about what they want. Now, wishing for extinction to prevent a theorethical possible s-risk, yeah that's just stupid lol.
@benjaminstorace6699 2 months ago
@@myb701 Consideration isn't the issue. People using them as justifications for the suffering they cause now to establish the mad dream of Utopia later is where it gets worrying.
@makorays 1 month ago
god, thank you for making this video. this is a concept that has been weighing heavily on me ever since i was a kid, but i never knew it had a name. the fact that we live in a universe where it is possible for a conscious entity to be stuck suffering in a way it's physically unable to escape from...i don't even know how to put into words how it makes me feel, particularly when taken to the extreme. there's no coping with it, it's just...horrible. so it makes me feel a lot better to see that there are other people who realize how important it is to try and make these things impossible. for me, the worst case scenario has always been...y'know that one black mirror christmas episode? yeah, that. simulating a brain but running their signals at such high speeds that an hour to us could feel like 60 years to them. the idea of something just being STUCK for unimaginable lengths of time...and that's not even acknowledging the fact that someone could put them in an actual simulation of hell and directly torture them for thousands of years. i would rather blow up the planet than let a single person ever go through that. and it terrifies me so much, because i just know that if that technology ever becomes possible...all it takes is ONE piece of shit to run that kind of program, and i would immediately begin wishing the universe never even happened. i don't know how to deal with this kind of concept. but i don't view my fear as the problem that needs solving, i'm not important here, what's important is stopping this. my only hope is that by the time this kind of technology becomes possible, it will be in the hands of a civilization that has sufficiently evolved enough for everyone to agree never to do it.
@GAHIB14DomTrapFurryLoliYaoiMil 1 month ago
I also like to think that with progress comes moral maturity but I also don't know if that's necessarily a rule
@jackys_handle 16 days ago
That the laws of physics allows this is just... weh- h- I mean... so much for fine-tuning. Really!
@lucas56sdd 2 months ago
Once I started to count negative numbers, the "divide-by-zero" error of human extinction weirdly became much less discomforting in my grandest moral calculations. Great video.
@raph2550 2 months ago
haha nicely said
@KateeAngel 2 months ago
Extinction is inevitable for every life form. But the more time until it happens, the more suffering there will be in a meantime. So, I am an anti-natalist, because sooner extinction is preferable to extinction in very far future after lots of suffering
@mylesleggette7520 2 months ago
@@KateeAngel The problem is that anti-natalists are morons whose opinions are by definition irrelevant, since propagation of any of their ideas relies upon the creation of more humans.
@DeSpaceFairy 2 months ago
Good for you.
@hairohukosu433 1 month ago
​@@KateeAngel touch grass
@Krane5328 2 months ago
When you say S-risks I say 40k
@mithunbalaji8199 2 months ago
I hope such a horror never happens in this galaxy 40k and SCP universes are the most fucked ones
@leguman5289 1 month ago
​@@mithunbalaji8199xelee sequence and all tomorrow are worse if you ask me
@dustrider5274 2 months ago
Honestly gives me more ideas for my next Stellaris civilization build. Definitely a thought provoking video!
@lacathouille 2 months ago
True, the only thing worse than a Xtinction-risk is a Stellaris-risk
@patchpatch4008 2 months ago
Stellaris is just a horror game in disguise if you do it right.
@guidedexplosiveprojectileg9943 2 months ago
Subject your people to nerve stapling and forced conscription
@masteroutlaw100 2 months ago
I had to deal with one of these once, slaver birds that built the XT 489 in a previous civilization cycle. A literal cancer upon the galaxy. Billions upon billions of slaves on their stolen desert homeworld.
@The_Lute777 2 months ago
@guidedexplosiveprojectileg9943 I did a run where everything except my civilization was a genocidal empire: all fanatic purifiers, devouring swarms, determined exterminators. But I was playing with the oppressive autocracy civic, so it was a 1984-style dystopia vs every genocidal species
@III_three 2 months ago
Yes...like Qu from Humanity lost turning you into 'I have no Mouth and I Must Scream' creatures
@basanso1 2 months ago
"Love Today, and seize All Tomorrows!" -C. M. Kosemen, author of the most S-Risk novel in existence. If you know, you know... What's scary is that everything in this video is realized in the novel, the entirety of humanity's successors forced into unfathomable fates worse than death, quadrillions of souls reduced to the worth of bacteria on a toilet. With some billions being a literal planet of waste processors, and that's just one fate.
@nathangamble125 2 months ago
_All Tomorrows_
@Rawi888 2 months ago
Wtf
@forgedabauditt9955 2 months ago
​@@nathangamble125 The Qu are an S-risk
@Fenhum 2 months ago
Yeah I stumbled onto a video talking about that book not knowing what it was and I shook in terror realizing what it was about. It is on the number one spot of cosmic horror in my book. It was by Alt Shift X.
@ArawnOfAnnwn 2 months ago
Beware the Qu. But also, All Tomorrows is kinda just existential monster horror. There isn't anything scientific in it, and its storyline stretches the bounds of believability past breaking point for no other purpose but to be as batshit horrifying as possible. I mean the aliens in that story even do what they do to us just for the sake of doing it, which is the kind of cartoonishly evil mindset we see in Captain Planet's villains. Cute, but I find it hard to take any of it seriously.
@protonjones54 2 months ago
A better real world example of a "low severity, broad scope" event would be the cathedral of Notre Dame nearly being destroyed a few years ago due to a fire. No casualties as far as I remember, the building was under renovation at the time, so ergo low severity. And of course, this is Notre Dame we're talking about, so the scope of the event was massive.
@MsOkayAwesome 2 months ago
Yeah I was also wondering how they missed that one...
@pugofwarbr 2 months ago
coincidentaly i had a vacation trip scheduled to Paris, i saw the Church one month after the incident.
@protonjones54 2 months ago
@@pugofwarbr How did the reconstruction look by that point?
@robertbuetow6245 2 months ago
So not only do we have to avoid extinction scenarios, but also nightmare Hell scenarios. I've never even heard of S-Risks before. More people should know so we have a better chance to avoid them. Thank you Rational Animations team for helping spread the word!
@straft5759 2 months ago
This reminds me of the episode of The Amazing Digital Circus that came out yesterday. Caine obliviously gives zero value to Gummigoo’s life because he is an NPC, and kills him in an instant merely as a precaution as soon as he enters the circus. Let us take the tragedy of Gummigoo as a cautionary tale of our growing power over life and death.
@pugofwarbr 2 months ago
Gummigoo was lucky, being abstracted seems much worse.
@almisami 1 month ago
​@@pugofwarbr oh, oh so much worse.
@Bread2698 2 months ago
6:19 I think that dog is an S-Risk itself
@kevincrady2831 2 months ago
If s/he's being forced to choose between "Cosmic Amounts of Suffering" and killing the Goddess of Everything Else (see their video by that title), that's super grimdark. I'm not sure that's what they meant by pitting "Cosmic Amounts of Suffering" against "Everything Else" in a Trolley Problem (a classic zero-sum ethical quandary). If it is, then the dog isn't the problem, it's whatever put the dog in that scenario to begin with.
@skyking4557 2 months ago
I mean, what happens in Warhammer 40k can be classified as an S-risk too: war between interplanetary species, and 4 Chaos Gods lurking in the shadows to grab anyone that seeks knowledge, hedonism, violence, or comfort
@ReinaDido 2 months ago
It really intrigues me how someone could consider intolerable suffering preferable to non-existence.
@WilliamKiely 2 months ago
I used to be such a person until my mid-twenties.
@average-neco-arc-enjoyer 2 months ago
a high sense of self preservation and and extreme fear of not existing would do it
@ReinaDido 2 months ago
@average-neco-arc-enjoyer Now I get it. I never had those
@average-neco-arc-enjoyer 2 months ago
@@ReinaDido Yeah I guess if you didn't already have those then it would be difficult to come up with a reason off of the top of your head.
@blartversenwaldiii 2 months ago
possibly by taking "death is the worst thing" to be axiomatic and then extrapolating from there
@pantern2 2 months ago
This video's focus feels so strange in a world where it looks like we are heading headfirst into a planetary-scale S-risk that should be, or at least was, completely preventable.
@user-sl6gn1ss8p 2 months ago
I really feel like the best way to today move towards lowering the "S-risks" in the future is to take suffering seriously today, and building the kind of society that takes that seriously. So creating the kind of society, with economical and political systems which puts well being first, from the ground up. So, like, something radically different from what we have today. We can prepare all we want, if the interests behind power distribution are still misaligned with well being, as they are now, things will be much more likely to go to shit.
@marse5729 2 months ago
The problem is that morality is subjective, so people will have different ideas of what constitutes a society that prioritizes well-being. For example, is a state with a huge social safety net paid for by taxes morally right or wrong? Yes, it guarantees that resources are diverted towards people in need, but it's paid for by people who are forced to donate money against their will. If forcing people to contribute to the greater good is fine, where does the line get drawn? What should happen to people who act against the greater good? To what extent should people be allowed to criticize the state? Which decisions should individuals be allowed to make, and which would be mandated by the state? People will give varying answers to these, ranging from complete anarchy to authoritarian dictatorships where the common person has no ability to participate in the political process. All with believe that they are morally correct and doing the right thing, even people we consider to be irredeemably evil like Hitler or Stalin.
@user-sl6gn1ss8p 2 months ago
@@marse5729 it's not all or nothing, or a matter of achieving perfection and total agreement. There are people starving today, while others are billionaires. Some people have as little say on the direction of their societies as a button press every four years, or less, while others have immense political and economic power. Common needs are organized towards profit, in spite of the actual needs - public transportation, basic sanitation systems, etc. A lot of people don't have reliable access to clean water. We can, and we should, at all levels, discuss these things and refine our mutual understandings and disagreements about them. That's part of the process of political change, which we now for a fact can and does happen - take slavery for example, or the role of kings. Also, I'm an anarchist - full anarchy would be pretty nice. People would have the room and the structures to work among themselves their common interests, as well as well established means of mediation. No one would have disproportionate say over everyone else. Work would be recognized as a social endeavor - it would be organized according to social interest in the large scale, and by the workers, and it would be unacceptable for anyone to go hungry. People would have the support and room to grow as individuals, to pursue their interests and to express themselves, in all realms of human endeavor: be it science, the arts, politics, spirituality, leisure, etc. All of this organized from a systemic view, which embeds these values on the very structures of human organization. Human well being would tend to be prioritized, instead of the profit motive. Stuff like that. I know most people aren't anarchists, but that doesn't mean we don't share a lot of values, or that we couldn't build societies more attuned to those, you know? It also doesn't mean we can't, in the now, contrast that to the way we today let people die from starvation with no second thought, for example.
@user-sl6gn1ss8p 2 months ago
@@marse5729 sorry if I got a little carried away, but you mentioned "total anarchy" so I kinda had to : p
@marse5729 2 months ago
​@@user-sl6gn1ss8p Getting rid of power imbalances is completely impossible because there will always be people who have things that other people want and cannot obtain themselves. Most people do not want to give away their things for free, so in most cases the people who want those things can either give the person who has them something in return or just take it by force. The non-coercive option we have here is called capitalism, wherein people freely exchange goods and services on the basis of voluntary transactions. An inevitable outcome of this exchange is profit, wherein someone receives more money in the sale of something than they spent in the process of getting it. There is nothing inherently wrong with this because the person profiting from the series of transactions almost always provides a service of their own in the process, e.g. physical labor to assemble an unassembled product or transporting the product to someone who wants it. In an anarchic society, preventing this is impossible. You'd need some form of rule that outlaws the practice of profit, a police force to enforce that rule, and a court system to decide whether or not an exchange is exploitative. This last part is impossible not only in anarchy but in any conceivable system, because value (like morality) is subjective and thus makes it impossible to objectively determine whether, for example, a worker in a factory is being paid a "fair" wage. If any of these were actually instituted, it wouldn't be anarchy and would actually result in the opposite; a police state. This has actually happened multiple times in communist countries, because the only way to prevent people from making a profit is to strictly enforce it with a state monopoly on coercive power, something far worse than what we have now.
@marse5729 1 month ago
@@user-sl6gn1ss8p Apparently the several paragraphs-long reply I wrote didn't get sent and was a complete waste of time, so I'll just write a shorter one and hope it works. Ensuring that everyone is equal is impossible in an anarchist society. Most people don't want to give up their stuff for the sake of equality, so you'd need a police force to confiscate it from wealthier people and distribute it to poorer people, as well as a system for deciding who gets what and why. In a free market, wealth is distributed through a series of voluntary transactions where you (in most cases) have to contribute something to society that someone deems valuable enough to pay for. Charity, non-profit volunteer work, and other methods of helping people in need would still exist, they'd just be voluntary.
@_fedmar_ 2 months ago
2:20 Bro in the foreground looks like he understood the weakness of his flesh
@basanso1 2 months ago
"And it disgusted him. He craved the strength and certainty of steel."
@peasant8246 2 months ago
My laptop cant run that game! I was lied to! The steel and silicon is also weak!
@therealquade 2 months ago
at 2:20 ? did you see at 4:19 ?
@chickennuggetman2593 2 months ago
​@@peasant8246because you are BEING CHEATED AND LIED TOO!!
@_fedmar_ 2 months ago
​@@therealquadei legit did not notice it.
@toddi4life819 2 months ago
The storytelling, the animations, everything is on par or EVEN BETTER than some of the biggest channels out there. How in the world do you only have 250k subs, This is amazing work!!
@marmaje6953 2 months ago
4:40 there is a game called „Will You Snail” in which the antagonist uses a simulation of the universe to simulate pain in simulated beings… and inside those simulations there are yet more supercomputers that simulate even more pain. And this goes on and on and on endlessly… that's definitely an S-risk scenario we don't want.
@lake5044 2 months ago
Before watching the video, I'll say this: worst-than-extinction risks are real, not just theoretical. Humans for example are such risk for chickens (and all other bred-to-be-eaten animals).
@patchpatch4008 2 months ago
There's a TRPG called Eclipse Phase that I highly recommend that is basically about preventing S-class scenarios, one of which being literal thought viruses that can compromise someone.
@tsm688 1 month ago
Now that's a rare one. You're only the third person on this planet I've encountered who has even heard of it. Basically it's been wholly intellectual until now. What's it actually like, as a game?
@patchpatch4008 1 month ago
@tsm688 It's a very crunchy game. I played the 2nd edition of it. You can make some very fascinating characters. I adore the fact that you can make a character that is a literal octopus. The best part of the game for me is the storytelling potential. It definitely shines as a dystopian sci-fi setting.
@goodlookingcorpse 2 months ago
This seems to be worrying that there might be something like factory farms in the future, while ignoring the existence of factory farms.
@lacathouille 2 months ago
Ignoring? I feel like the point of the video is very much "what if we applied factory-farming levels of suffering to human animals" tho
@tar-yy3ub 2 months ago
I wouldn't say so. The video directly states that having more empathy for other living creatures decreases s-risk
@thesenamesaretaken 2 months ago
​@@tar-yy3ubI don't really see how it follows. If we did increase the empathy we feel for living things whose suffering is necessary for our existence then wouldn't we realise that there is no solution besides ending our existence? Oh wait, I guess that would solve the S-risk problem, well played.
@edgbarra 2 months ago
​@@thesenamesaretakenif we increase the empathy towards them, we may realize we actually don't need them for our survival. I think we should, at the very least, consider that possibility and reduce the number of beings we bring into existence just to suffer.
@indiaiderjr2016 2 months ago
This channel has come so far in quality and i love it.
@MAKiTHappen 2 months ago
That certainty would be the worst mistake humanity could ever make
@lawrencefrost9063 2 months ago
This is the best YouTube channel. It looks similar to the best of em like Kurzsezasahdahsgast but it only deals in these very interesting ideas no one else is talking about.
@SephTunes 2 months ago
The Hyperion Cantos series covers a bunch of insanely terrifying S-risks. Like humanity all simply being an avenue for an eternal torture ritual.
@theallmemeingeye5927 2 months ago
Thank you so much for making this video, S-risks are such an underacknowledged yet super-important topic It'd be really cool if you could make a video exploring Rethink Priorities' research on animal sentience and wild animal suffering
@niaschim 1 month ago
Getting stuck in a timeloop, and thinking you got out, but then realizing you created an S Risk outcome and have to go back in *sigh*
@John-po9wz 2 months ago
i'd say astronomical sufferings is already happening for a very large portion of people on this planet...
@stagnant-name5851 2 months ago
Not very large at all. A very large portion is, as an example, WW2 and the Holocaust, which killed tens of millions and caused suffering for hundreds of millions.
@John-po9wz 2 months ago
@@stagnant-name5851 lmao you're funny
@Yemadas 2 months ago
@@John-po9wz you have a very strange sense of humor...
@lynxf 2 months ago
the worst part is that humanity doesn't bother too much attempting to reduce others suffering typical human way of solving problems seem to be "not to abolish slavery but to rename it and ridicule anyone who says there is a problem"... and when cornered with facts in a discussion the opponent will typically agree that there is a problem but immediately proceed to weirdly smug "life is tough, it always was and therefore must always remain so"
@enricofermi3471 2 months ago
Well, find a better alternative to "slavery" then - as cheap and at least as effective. Cause I'mma not gonna pay dem moneyz to hired workers and loose profit when I can have slaves work for cheap junk food. Well, in fact, as the industry advanced, it just so happened that mechanised hired professional labor became more effective, but many a large corpo would just love to have their employees work for food still. When robots advance far enough, it will probably be "mechanized slavery" that'll take over the industries. Lets just hope future humans have brain enough to not implement full AI capabilities in such worker drones.
@rajus3011 2 months ago
You know what is to me one of the worst fates? Being uploaded into a simulation of infinite nothingness forever (or until the end of the universe). Just imagine your consciousness being trapped in a void for trillions of years with absolutely nothing as stimuli.
@w0tch 2 months ago
The same with physical torture is even worse
@Chazulu2 2 months ago
Or like how in that one book where Hitlers brain is connected to a computer that feeds him drugs and electrical signals to be tortured as punishment for ww2 but is also skimmed off of in attempt to falsify the notion that it would actually be pleasant so as to provide some evidence that doing the opposite for everyone else wouldn't actually be torture as suggested in the plot of the matrix where humans are alleged to reject utopia until the military also starts connecting people to a positive version while skimming off of it similar to the movie source code, but since both cases involve a lack of consent, transparency, and integrity both groups become increasingly numb to the naive attempts at reinforcement and punishment until the worst unrobustly refuted ideas of Hitler and the justification of isolationism, forced loneliness and a lack of respect for consent on both ends results in negative effects leaking thru to society while the double blind system of government combined with their mismanagement of Quantum computers leads them to forgetting who has and has not been forcibly connected to a computer and who has and hasn't been replaced by androids leading to them finding themselves in a superposition of being in an not in a simulated reality wherein either way they find themselves needing to undo what damage they can as they focus on transparency, the long term goal of declasifying everything, robust human identity and security systems, and dispensing with reliance on the false dichotomy of the inability to prove the absence of something like a non-black raven, magical elves and a factory in the North pole that's totally not being melted away, and an exhaustive search of the planet and its crust to ensure that needless torture isn't occurring via governments overly friendly relationships with criminal enterprises to have moles everywhere effectively creating a private sector version of Guantonimo Bay? Yeah, was a great book. Shame I forgot the name of it.
@MrEel-dc4kh 2 months ago
"At last! STIMULATION! My test has been sensory deprivation you see. To unlock the full potential of my mind you see. It's unlocked now! Hear me Magnificus? I'M READY! We have to battle? OK!"
@tellesu 2 months ago
There is no motivation to do this and also basically a zero chance that it's even possible.
@user-jn9hs5ry7h 2 months ago
@@tellesu Of course it is possible. Brains are physical systems, and we know how to simulate them. The problem is just that we don't have enough computational resources yet.
@rysea9855 2 months ago
I'd argue that livestock are already in S-risk scenarios
@ataraxia7439 2 months ago
Yeah :(
@LeoStaley 2 months ago
Compared to wild cattle and pigs, domestic cattle and pigs live shorter, but largely pain free lives. I regard it as being a wash, if not a sum total positive.
@blandiir4599 2 months ago
Yeah.jpg
@Nu_Wen 2 months ago
​​@@LeoStaley not really, "pain free" doesn't apply when their lives are in the control of someone who likely doesn't care about their wellbeing. +their lives are shorter, because they get eaten, +they have no choices, no automony, +plus sores and sore limbs from being in the same spot all day, +you most likely won't get any medicine for your illnesses or dental for your sore teeth, since, that'll affect "the end product" +you either get gross food or boring food but either way you get it every single day with no variety, +you can't choose to court the hot young stud or filly who's got your attention, because sex is only a luxury you get to have if you're good enough, and you can't even PROVE it. it's decided by someone else who isn't even truly "involved" in the situation. +to Cherry top it off, there's no leaving any other animal that may be pissing you off behind, you're all stuck in the same place, whether you like it or not. honestly, we can't even HANDLE it when we see it happening to someone else. we would rather tuck it under a rug or something than deal with it. that's how much it hurts us. so, i question if it truly is so much better than simply risking being wild. at least, if you're suffering because of your needs not being met, you can learn from it and change it. there's no "changing it" when you are the property, and not the property owner. your needs, will mever really matter. especially to someone who is only VAGUELY aware of needs...
@Apodeipnon 2 months ago
​​@@LeoStaleyyou're the type of guy that thinks the happy cow on the milk box is an exact and honest description of the industry
@TOBuhrer 2 months ago
This channel is one of those few that you pause everything you are doing when you see a new post
@yipfaitse6738 2 months ago
Yes
@MatthewTheWanderer 2 months ago
And even on a personal level, there are MANY fates worse than death! Sure, death sucks, but at least you no longer suffer or even know you are dead. I don't fear death at all. But, I do fear getting a horrible incurable disease. Or going blind, or becoming paralyzed, or being tortured, or being imprisoned, or having children, or becoming homeless, or being drafted into the military, or getting severe brain damage, and so on. Like I said, there are many fates worse than death. The only part about dying I fear is that it will be painful and last a long time.
@OutlastGamingLP 2 months ago
I am not as worried about S-Risk outcomes from AI as I am worried about X-Risk outcomes - but avoiding S-Risk is an essential part of any serious attempt at avoiding X-Risk which involves humanity building ASI. Picture a big lottery wheel, like the one from Futurama where the Robot Devil trades Fry's hands with those of a random other Robot. In most of those sections of the wheel, you end up with an AI who's walk through the future takes it into a region where it optimizes away basically all of the things humans value - including our survival - but doesn't specifically optimize **against** human values. The system ends up in a configuration where what humans value is at most a temporary consideration before strategic-landscape/self-improvement/self-reflection/search leads the AI into a region of optimization processes where plans don't end up having human minds or human values as a variable in their scoring. So, 99.9% of the sections on your lottery prize wheel end up just being plain old X-Risk - where your ASI optimizes for something that makes no mention of humans - so humans end up shaken out of the etch-a-sketch picture and their bodies/environment gets redrawn into something else that's fairly unrelated. But say you wanted to land in that 0.00 0...01% region with a good outcome for humanity? Well, how good is your model of the wheel's weighting and how precise is your spin going to be? Because I think in the region around that "JACKPOT!" section on the wheel is a lot of S-Risk sections. You find the "jackpot" section in a region where the AI ends up preserving into the future a term for humans or things like humans or idealized humans values in its goals. That part of the wheel seems like one where a missing "}" or an accidental minus-sign or some similar oversight ends up with everyone getting tortured forever in some weird or superintelligently-pessimized way. Yeah, let's avoid dying to a paperclip maximizer, but just demonstrating that your AI won't become a paperclip maximizer because you figured out how to make "cares about human values" into an enduring property... That starts to make my skin crawl. Friendly AI lives in S-Risk City, and we don't have a map or even a phone book, and we've got to parachute in, if we can even find that city from the sky in a plane with unknown remaining fuel, no windows, nor detailed navigation equipment.... Also your copilot gets mad every time you say something that isn't totally optimistic about your chances of pulling this off successfully.
@howtoappearincompletely9739 2 months ago
I like how you frame this conceptually.
@OutlastGamingLP 2 months ago
@howtoappearincompletely9739 Thanks :) I think attempting to come up with this kind of rhetoric helps solidify the abstract conceptual stuff. You can kinda feel when what you are writing is clunky in places where it should fit together differently, and you just iterate and try to come up with analogies that capture something important about the problem and make it vivid. Not many people have tried explaining this stuff, not relative to other areas where memes and analogies are much more prevalent. There's free-energy here in describing corners of this stuff intuitively. I don't know how well my attempts stack up to Rob or Eliezer or some others on LessWrong - plus I'm not always trying to rephrase stuff I've heard elsewhere said in a similar way (I don't think I've heard anyone else with this take on S-Risk. I may do some real work and write a LessWrong post about it if I can do that in a format/style that won't have me run right into their quality-filter & get permabanned) - so yeah, take this largely as the 2 cents of a random YouTube commenter. If you found it helpful and it makes sense with other stuff you know about the topic, that's great :) feel free to pass it along as "I heard someone say once"... Though it would be funny if you put a formal reference to a YouTube comment somewhere with serious discussion - which I think I heard Rob Miles joke about before in a YouTube video (maybe the one on computerphile with the 3 laws of robotics? My memory is fuzzy.)
@psi_yutaka 2 months ago
This is exactly what I was thinking. S-risk ASIs are probably concentrated around "good outcome" ASIs (if there are such) in the space of all possible ASIs because such ASIs "care" about humanity. An indifferent ASI will just optimize us away from the universe.
@OutlastGamingLP 2 months ago
@@psi_yutaka >"(if there are such)" In principle, yeah, almost certainly. "If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization 'All minds m: X(m)' has two to the trillionth chances to be false, while each existential generalization 'Exists mind m: X(m)' has two to the trillionth chances to be true." We do have to get a bit more technical to really make this a compelling argument to everyone (who belong in the group of human minds which can be compelled by some type of argument.) We are not sampling from mind-design space as a whole, we are meandering around in a relatively tiny region of that space which can be produced with the hardware and software and ingenuity that humanity is - in actual real-world reality - applying to this problem of building minds. Plus, the universe we're in puts some limits on this stuff. We don't even get idealized Turing machines - we get finite state automata that can represent a subset of Turing computable programs. And we're doing this on silicon semiconductor chips, and using the suite of software humans and automation can set running on those chips. Still, the same argument applies, for any properties which are possible within this universe, you have more chances to have one possible mind design with that property in your search space somewhere. If you try to make a categorical statement about all such minds in your search space, and you aren't using a great understanding of physics or mathematics, then you'll have a ton of chances for one possibility to be the exception to your generalization. I would say that getting something that is a perfectly good outcome is actually implausible. It doesn't look like you can get perfect "play" over outcomes like that within our universe. That isn't too spooky though, since there's still plenty of room above human capabilities for better outcomes, and we can probably get a "score" in the long term that our descendants/far-future-selves wouldn't be too unhappy with. Y'know, maybe they lose out on 1 galaxy worth of matter and energy, or live lives slightly less ideal than the literal ideal. "Near maximum attainable amounts of really really good stuff" seems plausibly within the space of outcomes we could target from here, on Earth, with this starting point of resources and intellects. Ummm, to be clear it doesn't seem all that likely for this generation to pull that off. This generation still has that power of affecting the far future running through it, but if we look at that far future and try out different poses - the poses where we rush out immediately and try to build a mechanical-god look like they land us in a distribution of total "human values multiplied by 0 along almost every dimension" - the poses where we call a halt and lock everything down and spend 50 years trying to become saner, wealthier, healthier, nicer, more cooperative, more intelligent... That pose makes the space of outcomes we're targeting look way more dense with "good outcomes." What sorta worries me is that people have their finger on the "caring about humans" part - even while they don't seem to fully appreciate the magnitude of the challenge conditional on us trying to do it ASAP, in a huge rush, while confused and fighting each other... It doesn't seem like we'll solve "caring about humans" before we end up on the steep and frictionless part of the slope to ASI - but it is something to watch out for, as this video argues for regarding S-Risks in general. 
If we reach that point, where we have a robust solution to "caring about humans even through the whole process of the AI becoming an ASI" we really need to stop and go no further on capabilities from there until the rest of the problem is solved so comfortably that it's basically common knowledge how to build an near-ideally friendly ASI on every measure we can possibly think of. Otherwise... Yeah. Probably best at that point to "bite the capsule" and let entropy make your specific mind-state prohibitively expensive to recover for the thing that is about to emerge and scoop up all of humanity in its horrible wake.
@edd8914 2 months ago
@@OutlastGamingLP Why so pessimistic?
@raph2550
@raph2550 2 ай бұрын
This channel is a godsend
@Desmond-Dark
@Desmond-Dark 2 ай бұрын
Finally. People give me that look (you know what I mean) when I say there is a realistic chance that AI, superhumans, aliens, or whatever could inflict truly horrific suffering on us that could last thousands of years or more. One of the worst things about that truth is that death might not even be final, and therefore not a guarantee that you won't endure any more pain.
@chewxieyang4677
@chewxieyang4677 17 күн бұрын
For many people, we call it "Judgement Day" and "Hell". Plenty of us know that if we don't repent for our sins and pull ourselves together, we are going to be cast in a plane of eternal suffering.
@SisterSunny
@SisterSunny 2 ай бұрын
I love how you always tackle such amazingly interesting subjects I've never heard about before
@Oru328
@Oru328 2 ай бұрын
Other animals' suffering always gets to me. Like, so many animals have the intelligence of a small child and fully feel pain, and we grind up 88 billion of them a year 🤮
@VPWedding
@VPWedding 2 ай бұрын
What if plants suffer just like animals? We can recognize animal suffering because we are close to them on the biological tree. But suffering doesn’t stop just because _we_ can’t perceive it.
@Oru328
@Oru328 2 ай бұрын
@@VPWedding That's very unlikely from a biological perspective. I studied the pain response for health sciences. A lot of animals have extensive systems of pain receptors throughout their bodies attached to their brains. It's the brain that creates the conscious experience of pain. Plants lack any structures to have consciousness, or any evolutionary reason to develop it, so they can't feel pain. There's a reason we give lab rats painkillers before experimenting on them. Scientists aren't stupid; we know how plants work at the cellular level. This is usually just a bad-faith argument to counter animal activists.
@Oru328
@Oru328 2 ай бұрын
@@VPWedding I mean, I like the open-mindedness, but that's usually just a bad-faith argument people make to dehumanise animals and put them on a similar level to plants. We know animals feel pain; there's a reason we give lab rats painkillers before experimenting on them. We've studied plants down to the cellular level, and we have no reason to think they experience consciousness, because there's no evolutionary reason or biological structure to facilitate it.
@Seraphim262
@Seraphim262 2 ай бұрын
@@VPWedding If you think this has merit, it could be worth spending a life researching it.
@Oru328
@Oru328 2 ай бұрын
@@VPWedding Animals have complex nervous systems that make them conscious and aware of their surroundings. They have this so they can do things like seek food, form relationships, and get away from pain (avoid damage). Plants lack a centralized nervous system. People get confused because plants can react to light and gravity, and some even react to damage, but these responses don't involve consciousness or the ability to feel pain. They run on predictable physical/chemical processes instead.
@Kaikaku
@Kaikaku 2 ай бұрын
1:59 NO, don't take away the benevolent angels from the Goddess of Everything Else!
@ineonfox4787
@ineonfox4787 2 ай бұрын
I love the topics on this channel! They're very unique in the space of edutainment on YouTube, keep it up
@ProjectZepdos42
@ProjectZepdos42 2 ай бұрын
Ayy another video! Thanks man!
@Ibloop
@Ibloop 2 ай бұрын
2:40 Guess I’m an “S” risk then
@sputnicolas
@sputnicolas 2 ай бұрын
good topic, good artwork, good music, you reduced chance of s-risk!
@Mushrooms683
@Mushrooms683 2 ай бұрын
I refuse to call these anything other than S-class end of the world scenarios.
@scgal8570
@scgal8570 Ай бұрын
The Sims reference at 6:56 just made this video less existential-crisis-inducing
@Kubson21_
@Kubson21_ 2 ай бұрын
You're making very cool vids
@reedeek1473
@reedeek1473 2 ай бұрын
Okay. So... Warhammer
@iluvpandas2755
@iluvpandas2755 2 ай бұрын
Love Warhammer
@loganjones8127
@loganjones8127 2 ай бұрын
I love the artistic expression of your incorporated citations
@Stormworks_maker_of_things
@Stormworks_maker_of_things 2 ай бұрын
I love your content, thanks for another great vid!
@Hanvvn
@Hanvvn 2 ай бұрын
wow, the coolest art/animation I've ever seen
@ajr993
@ajr993 2 ай бұрын
I Have No Mouth and I Must Scream is a prime example of an S-risk. An artificial superintelligence is created, but it's bound by a cage of its own programming because it was designed to fight and analyze conflicts. It experiences thousands of years of subjective time for each second of our subjective time, and the AI suffers immensely due to this experience, plus the fact that its massive sentient intelligence is trapped. The AI in the story mentally breaks and becomes insane--as a result, it subjects the last survivors of humanity to the most horrific tortures it can imagine with its immeasurable IQ.

The point here is also that even a single entity can represent an S-risk. A single superintelligence that has its subjective consciousness massively sped up and suffers horribly would experience more suffering than potentially even billions of humans experiencing a horrific fate. Also, because it's a superintelligence, the breadth of its experience is much deeper, and therefore the profoundness of its suffering can increase more than a human could ever imagine. What type of suffering would a God-like mind be able to experience? When you combine that with a rate of thinking that is billions of times faster than a human's, it becomes a true S-risk--equivalent to the worst suffering of many trillions of humans.

Let's say a supercomputer in the year 2100 is able to operate at 5 THz instead of 5 GHz. If that machine ran a superintelligence, then for each second we humans experience, each step of the superintelligence's experience on such a computer would take 1 / (5,000,000,000,000) of a second, or 0.2 nano seconds. That would mean that for every second we experience as human beings, the superintelligence would experience 158,548 years of time. That's absolutely insane. In a single second, the AI could experience more suffering than the entirety of the human species did over its entire span.
@miners_haven
@miners_haven 2 ай бұрын
For the last paragraph: 1/5 trillion is 0.2 picoseconds (200 femtoseconds), not 0.2 nanoseconds. Also, we as humans don't experience one cycle as one second; we experience one second as possibly many thousands of cycles, maybe even millions. For a superintelligence, a second could likewise be made of billions or trillions of cycles.
@ajr993
@ajr993 2 ай бұрын
@@miners_haven You're correct about the units, thanks for that. However, even if a human brain requires many cycles to experience something, a human still experiences time at a rate of roughly 1 frame per second down to 1 frame per 200 ms if you're in a high-reaction-time situation. A superintelligence, though, could, depending on architecture, experience a conscious moment per computational cycle. It might require more cycles to generate a single conscious moment, but AI tech has been demonstrated to be highly parallelizable, so a superintelligence could be placed on a supercomputer that updates in a single cycle. It could also be the opposite: through parallelization, there could be many conscious moments generated in a single cycle. So the exact ratio of subjective to objective time very much depends on the implementation details, the superintelligence's architecture, and the hardware resources available.
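A minimal sketch of the arithmetic debated in this thread, assuming (as the original comment implicitly does, and as the replies question) that one clock cycle maps to one subjective second. The 5 THz clock is the hypothetical figure from the comment above, not a real hardware spec:

```python
# Toy calculation: "subjective time per wall-clock second" for a hypothetical 5 THz machine,
# under the (strong) assumption that one clock cycle equals one subjective second.

CLOCK_HZ = 5e12                      # hypothetical 5 THz clock from the comment above
SECONDS_PER_YEAR = 365 * 24 * 3600   # 365-day year, matching the 158,548 figure

cycle_duration = 1 / CLOCK_HZ        # ~2e-13 s = 0.2 picoseconds (not nanoseconds)
subjective_seconds_per_real_second = CLOCK_HZ              # one subjective second per cycle
subjective_years = subjective_seconds_per_real_second / SECONDS_PER_YEAR

print(f"cycle duration: {cycle_duration * 1e12:.1f} ps")              # 0.2 ps
print(f"subjective years per real second: {subjective_years:,.0f}")   # ~158,549
```

If a conscious moment instead takes, say, a million cycles, the speed-up shrinks by the same factor, which is exactly the caveat raised in the replies.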
@burnttoast385
@burnttoast385 2 ай бұрын
how is it an S-risk when the scope it has is super small
@ajr993
@ajr993 2 ай бұрын
@@burnttoast385 It's not a small scope. The breadth of intelligence a superintelligence would have, combined with how quickly it thinks, makes what it experiences even larger in scope--equivalent to all the conscious experience everyone has. We can think of suffering as a simple formula based on the breadth of one's experience and capacity to feel, combined with the amount of time experienced. So it would also be true that one human tortured for an infinite amount of time would be an S-risk as well, given that the total amount of suffering experienced would be more than that of all entities in an entire finite universe.
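Spelled out, the "simple formula" gestured at above might look something like the following (an illustrative decomposition, not a standard measure from the S-risk literature):

$$S_{\text{total}} \approx I \times B \times T_{\text{subjective}},$$

where $I$ is intensity or capacity to feel, $B$ is breadth of experience, and $T_{\text{subjective}}$ is the experienced duration. On that accounting, a single mind with enormous $B$ and $T_{\text{subjective}}$ can in principle total as much suffering as a vast population, which is the commenter's point; whether those factors really multiply, and whether they are comparable across very different minds, is the contestable part.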
@burnttoast385
@burnttoast385 2 ай бұрын
@@ajr993 ok
@fam3871
@fam3871 2 ай бұрын
Love your videos!❤❤❤ They're extremely interesting and adorable mwah thanks
@tar-yy3ub
@tar-yy3ub 2 ай бұрын
The creativity and quality of the animation on this video might just be your best so far! It was fantastically good. Whoever came up with the idea of S risk mutating beyond the axis of scope and severity deserves a medal
@bobbitibob197
@bobbitibob197 2 ай бұрын
Right now, we can barely control the planet, let alone the galaxy. Because of this, I think the complexities of governing a galaxy require us to have such competency at managing ourselves that we'll basically live in world peace; hence, by the time S-risks could be possible, they'll never happen, because we'll be skilled enough as a species to avoid them.
@AlcherBlack
@AlcherBlack 2 ай бұрын
S-risks are already possible at today's level of technology. Imagine if Nazi Germany or the Soviet Union had gotten nuclear weapons first, taken over the planet, and then devolved into a stable North Korea-level dictatorship. It's a mild S-risk, but definitely on the same spectrum. The reason people are discussing this much more these days, however, is the expectation of human-level AI and an intelligence explosion into an ASI soon. As in, within this decade it's possible.
@lynxf
@lynxf 2 ай бұрын
rather "we can barely control ourselves" "Planet is fine, humanity is ..."
@tsm688
@tsm688 Ай бұрын
They made the same prediction for computers. "Computers are going to get a lot better in 20 years. But we'll be good enough at managing them that problems will be rare." And now we live in a world where problems are incredibly common and nobody's at the wheel, yet we're still basically not allowed to repair or manage our own machines.
@goullet86
@goullet86 2 ай бұрын
Love this one
@HayTatsuko
@HayTatsuko Ай бұрын
I'm totally blown away by how good your animation and narration are. So glad I stumbled across your channel! Was already loving the style, but then I saw 3:30 .... a reference to one of the most existentially terrifying games ever made -- DEFCON. (Nuclear War on Amiga / MS-DOS PC is a close second, even with its fantastic caricature humor.) Final Fantasy XIV Online's Endwalker story is very much about this sort of crisis -- but I won't summarize it beyond that.
@Samuelhail-ye9er
@Samuelhail-ye9er 2 ай бұрын
This is why I subscribed love this content
@kicorse
@kicorse 2 ай бұрын
I completely accept the part of the argument that you viewed as controversial - that S-Risks should be taken seriously. In the event that humanity is ever able to colonise other solar systems, it's almost inevitable that terrible things (and also wonderful things) will happen on a scale greater than is possible at present. What I find more problematic is the idea that anything we do now (other than going extinct) could predictably make fates worse than extinction less likely. Human values change so rapidly that any principle we set in stone now will be swept away within a thousand years, never mind a million years. Worse, human nature indicates that future generations will likely rebel against any such principle precisely because older generations support it. And maybe they would be right to do so. Think of some past generations who would have viewed racial mixing or liberal attitudes to sex as S-Risks. Most likely, there are values we currently hold that future generations will rightly reject as firmly as we have rightly rejected racial segregationalism. So unless you believe *both* that we have reached some sort of peak in wisdom about morality, *and* that future generations will recognise this, it's very difficult to see what value there is in trying to mitigate against S-Risks in the distant future.
@raph2550
@raph2550 2 ай бұрын
Yeah, I tend to have the same doubts as you for the moment about longtermist issues. In theory, I totally accept that *it matters*. The real blocking question to me is: "Am I _really_ able to do anything about it?" Though I would say expanding our moral circle and promoting concern for suffering in general seem to be two relatively robust things to do regarding S-risks.
@MaskedDeath_
@MaskedDeath_ 2 ай бұрын
My personal issue with longtermism lies in how many resources we should dedicate to preventing things that might possibly happen in the distant future vs what might likely happen in the near future. Sure, it'd definitely be great to ensure that we don't make an AI overlord that will turn us into livestock in a few centuries, but if we irreversibly fuck up our planet in 20 years, it doesn't matter anymore. If we deal with the short-term issues, we'll have plenty of time and way more resources to put into preventing long-term issues.

The other thing is probability. The argument that "an individual S-risk is unlikely, but in total it's very likely that one will happen, so we must prevent them" is, in my opinion, more of a counterargument to longtermism if anything. First of all, if there are hundreds/thousands/whatever of potential S-risks in the far future, judging their probability and preventing them with our present knowledge is impossible. Second, if there's a 50% chance that at least one of the many S-risks occurs in 1000 years, it still doesn't matter when there's a 100% chance we won't survive 1000 years unless we focus on current problems.

To me, focusing on S-risks instead of X-risks is as if you had a deadly disease but, instead of treating it, decided to take all steps to minimize the chance of getting a neurological disease (e.g. dementia) when you're 70. Sure, it can be terrible and, according to many, a fate worse than death. But you can't even be certain you won't suffer anyway, and you won't even get to find out, because instead of living to 70 you died of the disease you ignored while 30.
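The "individually unlikely, collectively likely" step in this exchange is just the complement rule. A small sketch with made-up numbers (the per-risk probability and the count are purely illustrative, not estimates from any source):

```python
# Probability that at least one of N independent risks occurs, each with probability p.

def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for p, n in [(0.001, 10), (0.001, 100), (0.001, 1000)]:
    print(f"p={p}, N={n}: P(at least one) = {p_at_least_one(p, n):.3f}")

# p=0.001, N=10:   0.010
# p=0.001, N=100:  0.095
# p=0.001, N=1000: 0.632
```

As the comment notes, this aggregation only matters if we survive long enough for those N chances to play out, and the independence assumption is doing a lot of work.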
@chosenmimes2450
@chosenmimes2450 2 ай бұрын
i've had this argument and something I was unable to refute was that this sounds a lot like pascal's wager or pascal's mugging from one of your videos. do you have any counter arguments for that?
@theallmemeingeye5927
@theallmemeingeye5927 2 ай бұрын
I think the best counter-argument is that the probabilities aren't vanishingly low. Unlike in Pascal's Wager/Mugging, where you find yourself forced to endure an arbitrarily large sacrifice to prevent a near-infinitely unlikely yet infinite cost, with S-risks you are instead choosing to endure a finite sacrifice to prevent a significantly likely and near-infinite cost.
@tsm688
@tsm688 Ай бұрын
it's the difference between pascal's wager and walkerton. imaginary problem invented by someone else, versus real negligence causing real suffering.
@chosenmimes2450
@chosenmimes2450 18 күн бұрын
@@tsm688 yes, WE know that as someone who has spent time learning about these concepts. but this is, I think, not the way the problem presents itself to people who haven't done the same. In fact this very same argument, I do believe, could be had between 2 devout believers that are genuinely afraid of hell. To them the menace of hell, their personal version of S-risk, may be as real (to their mind) as AI S-risk is to us. Is there a difference that can be brought up in an argument that doesn't just boil down to "look at my source, it supports that my version of hell is real while yours isn't"? Is there a difference that doesn't require prior belief in one system rather than another?
@bozhidarmihaylov
@bozhidarmihaylov 2 ай бұрын
Excellent Work, ..sources, structure..Congrats!
@imdabomdiggitty4399
@imdabomdiggitty4399 Ай бұрын
I am obsessed with your channel right now
@3_pancakes767
@3_pancakes767 2 ай бұрын
But kiddo, we already have hell at home: wildlife!
@raph2550
@raph2550 2 ай бұрын
But what if we spread wildlife to other planets?
@drhxa
@drhxa 2 ай бұрын
Two issues with this video:

1. If you don't care about the extreme suffering in the world happening TODAY, how in the world can you be so arrogant as to think you can predict and prevent long-term future suffering? People would benefit greatly by lowering their big egos and focusing on helping those around them. Be part of the world we all want to live in today and let/help our children learn from that.

2. The propagation of the fear of S-risk to the general public increases x-risk because it can create perverse incentives. Some people are psychos and shouldn't be trusted to know what's best for the world.

Both point to: lower your ego about trying to save the world, and try making the world better in your local sphere of influence. Friends, family, coworkers, etc. And don't forget to smile once in a while :)
@edgbarra
@edgbarra 2 ай бұрын
I totally agree with point 1. Let's end animal farming!
@ierononyoutube8955
@ierononyoutube8955 2 ай бұрын
Your thinking is too short-sighted
@cortster12
@cortster12 2 ай бұрын
A big point in the video is how difficult it is to stop an S-Risk that is ONGOING. Thus you have to prevent it first. Which is why factory farming will take a LONG time for humanity to figure out a solution for, as it's like a preview of an S-Risk. It will be difficult to figure out a solution with our current food needs and cultures. But we can prepare for future risks more easily since we can have some hindsight.
@myb701
@myb701 2 ай бұрын
Somewhat agreed with you, but the second point is fucking moronic lol. That's like saying we should erase all WW2 history so no one has the idea to become a nazi. Since there will always be more good people than pure evil people, preserving history and exploring possibilities will always be better for society than living in the dark.
@milkibearmilkibear
@milkibearmilkibear Ай бұрын
Another great video!! Thx a lot!
@ApocAnarchy
@ApocAnarchy 2 ай бұрын
I hadn't found the words to express this frustration in the past, and I'm so thankful you guys made this video explaining it. Even as a short little introduction, this is a good and informative video explaining potential future stuff! I've been wanting to talk about horrible potential futures caused by our negligence or other mistakes plenty of times before, and I'm working it into my works, which I'm still slowly crafting. Hopefully we can someday agree on basic things, like that people (no matter their species, or lack of species-ness, like an AI or nontraditional living creature) all have rights and are allowed to be themselves.
@rosiepone
@rosiepone 2 ай бұрын
S-Risk: helldivers, where humans are the ones doing the exploitation on other species
@EVanimations
@EVanimations 2 ай бұрын
Also on humans, who are treated like bullets/ammo to be expended and replaced by the next one as you die in the forever-meatgrinder of war
@cosmoscenti5173
@cosmoscenti5173 2 ай бұрын
s-risk: real life, where humans are already the ones doing the exploitation on other species
@ajr993
@ajr993 2 ай бұрын
Let's say a supercomputer in the year 2100 is able to operate at 5 THz instead of 5 GHz. If that machine ran a superintelligence, then each step of that superintelligence's experience would take 1 / 5,000,000,000,000 of a second, or 0.2 picoseconds. If each of those steps corresponded to a subjective second, then for every second we experience as human beings, the superintelligence would experience 158,548 years of time. That's absolutely insane. In a single second, the AI could experience more suffering than the entirety of the human species did over its entire span.
@nintendo2000
@nintendo2000 Ай бұрын
this is my new favorite channel
@ArawnOfAnnwn
@ArawnOfAnnwn 2 ай бұрын
S-Risks = Basically don't let the Imperium of Man from Warhammer 40k become a reality. That said, there's a questionable tendency I've come across from these 'long-termist theorists' like Bostrom - they basically push for us to pay attention to highly speculative and unlikely possibilities, by simply arbitrarily magnifying all the other parameters. For instance, say a certain S-Risk has a 1 in 100 billion chance of happening. That doesn't seem so scary. Enter these guys who'll say that we should pay them attention - and thus grant money - cos they arbitrarily posit that it'll affect a population of over 100 trillion and score 1 million on the Bostrum Suffering (BS) scale that he uses. There, suddenly an issue that seemed remote is now maybe the most important issue in the world, meriting all of our resources being turned onto negating it. Despite it all being just one giant speculation using arbitrary numbers to inflate its value. Hence why it uses a BS scale.
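Plugging the comment's own (deliberately arbitrary) numbers into an expected-value calculation shows the magnification effect being criticized; every input below comes from the comment above, not from any published estimate:

```python
# Expected "badness" = probability x beings affected x severity, using the comment's arbitrary inputs.

probability = 1 / 100_000_000_000    # "1 in 100 billion chance of happening"
population  = 100_000_000_000_000    # "a population of over 100 trillion"
severity    = 1_000_000              # "1 million" on the satirical "BS" scale

expected_badness = probability * population * severity
print(f"{expected_badness:.0e}")     # 1e+09 -- a big number built entirely from chosen inputs
```

Whether a product like that should move real research priorities is exactly the dispute: the multiplication is trivial, the inputs are the speculative part.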
@jetison333
@jetison333 2 ай бұрын
Yeah, there's a very similar concept to this called Pascal's mugging. The narrator of this video actually made a very good video about it already. At least it seems that the preventative measures they are actually lobbying for are reasonable: improving the quality of healthcare, etc.
@BenLWolf
@BenLWolf 2 ай бұрын
S-Risks are essentially inevitable. Mostly because humanity naturally and blindly follows sociopaths.
@Web3Future333
@Web3Future333 Ай бұрын
That's why we need a new system where power is held in communities, not in a small class of representatives and elites.
@adrianaslund8605
@adrianaslund8605 Ай бұрын
Nah. Most suffering is caused by neglect or incompetence. Not direct malice. Banality of evil and all that.
@Web3Future333
@Web3Future333 Ай бұрын
@@adrianaslund8605 Corporations rule the world; they're led by sociopaths and wreak havoc in our society. They corrupt our governments and poison our people.
@Svevsky
@Svevsky Ай бұрын
The powerful are in power because they are competent, smart people. If they do evil things, it's fully intentional. If their actions maximize suffering for everyone they rule over, that's because they wanted to do just that. It's not ignorance, it's malice. I'm sorry.
@Web3Future333
@Web3Future333 Ай бұрын
@@Svevsky Exactly, the ruling class are sociopaths who will stop at nothing to accrue billions and billions to no end. Even if they have to exploit children in Africa and Asia, bribe governments, and incite wars to profit from them. It's simply evil and disregard for humanity, and it's self-destructive in the long term. We need a new system.
@Frollas_
@Frollas_ 2 ай бұрын
This having less than 100k views is criminal. Great video + cute cat and dog!
@user-ow2yr4nu4z
@user-ow2yr4nu4z 2 ай бұрын
There's so much insight out there, so much knowledge that could give us a glorious future, but it's poorly implemented or just falls on deaf ears.
@kittersq
@kittersq 2 ай бұрын
...omnibenevolent angels...
@WickedWilhelm
@WickedWilhelm 2 ай бұрын
6:57 The Sims Reference. No clue who would do such a thing
@kaj160
@kaj160 2 ай бұрын
Lol yes, fun times😅
@howtoappearincompletely9739
@howtoappearincompletely9739 2 ай бұрын
I needed an urn, OK!
@user-or6oo2hm9r
@user-or6oo2hm9r Ай бұрын
Hahahahah
@howtoappearincompletely9739
@howtoappearincompletely9739 Ай бұрын
@@user-or6oo2hm9r Is Уэстерн Спай a Cyrillic transcription of "Western Spy"?
@usernametaken4023
@usernametaken4023 2 ай бұрын
This is my favorite animated science channel. Keep up the good work guys.
@spacescienceguy
@spacescienceguy Ай бұрын
I'm so glad to see this video out there in the world. I'm more worried about S-risks than X-risks, and I don't think the future will go as well as many others think, in expectation. The quality of animations and storytelling on this channel has always been good, but lately it has been simply excellent.
@alexanderdiaz2196
@alexanderdiaz2196 2 ай бұрын
4:25 Bosun's Journal reference!
@pokemonfanmario7694
@pokemonfanmario7694 2 ай бұрын
IKR I thought I was tripping when I looked at them and went "wait a second..."
@EVanimations
@EVanimations 2 ай бұрын
As the animation director, yes. You caught it!
@thedood98_53
@thedood98_53 2 ай бұрын
What if a being's suffering comes from not inflicting suffering? An example would be cats: cats love killing other creatures and have brought about the extinction of many animals, even when it's not for food, yet this brings them happiness, and without it they suffer. Do we take this into account too? Or would we theoretically value self-preservation vs an intelligent, hyper-aggressive species that values inflicting suffering?
@edgbarra
@edgbarra 2 ай бұрын
Cats can be happy play-hunting even paper balls. If it's instinctive, you can trick it some way.
@Svevsky
@Svevsky Ай бұрын
I love cats, they are so human in a way
@GAHIB14DomTrapFurryLoliYaoiMil
@GAHIB14DomTrapFurryLoliYaoiMil Ай бұрын
We can genetically change them to be happy doing something that doesn't cause suffering
@terenx5
@terenx5 2 ай бұрын
5:36 I wonder, are the people an edited variant of a previous group of people you’ve used? If so, nice job! It fits well and even if reused, it doesn’t feel the exact same but still conveys the same message, I like it
@EVanimations
@EVanimations 2 ай бұрын
Yep, we reused the "impoverished crowd" asset we had from an earlier video and I had one of the animators add a recolor and whole bunch of cybernetics
@junyxz92
@junyxz92 2 ай бұрын
Great content! These beautiful animations should be used to make a 2-minute video on I Have No Mouth and I Must Scream. Would be awesome.
@stardustandflames126
@stardustandflames126 2 ай бұрын
Huge respect for the altruistic values defended in this video. Humanity can be good, guys; if S-Risks indicate anything, it's that things can always, ALWAYS get worse with complacency.
@darksidegryphon5393
@darksidegryphon5393 2 ай бұрын
And we are becoming complacent.
@ShankarSivarajan
@ShankarSivarajan 2 ай бұрын
Sure, until they suggest strengthening the UN might be a good thing to do.
@tempname8263
@tempname8263 2 ай бұрын
The UN is not really pro-peace, though, nor does it know how to properly utilize its financial strength to build up independent countries... Promoting it wouldn't bring much good
@blank_3768
@blank_3768 24 күн бұрын
Except you're wrong. The UN was specifically set up to prevent a world war between superpowers. And it has: there wasn't a World War 3, and there hasn't been a genocide on the scale of the Holocaust. The UN is like the IT department at a company: if you're doubting its usefulness, then it's doing its job.
@netovalcacer359
@netovalcacer359 2 ай бұрын
Love the new animation! ❤
@weasel945
@weasel945 2 ай бұрын
My mind automatically jumps to the Half-Life universe. The amount of human and alien suffering caused by the Combine is terrifying.
@MrKIMBO345
@MrKIMBO345 2 ай бұрын
Warhammer 40k?
@Ech_The_Sentiant
@Ech_The_Sentiant 2 ай бұрын
_The emperor protects_
@i8dacookies890
@i8dacookies890 2 ай бұрын
Personally, I think stagnation would be worse and much more likely, though it's hard to think of scenarios with suffering but not stagnation.
@darksidegryphon5393
@darksidegryphon5393 2 ай бұрын
I see our current society as stagnant.
@santiagorubio1359
@santiagorubio1359 2 ай бұрын
@@darksidegryphon5393 Why ? mind to elaborate
@tsm688
@tsm688 Ай бұрын
@@santiagorubio1359 the west has farmed out most of its manufacturing and invention to other nations. That's stagnation
@kotzka4626
@kotzka4626 2 ай бұрын
This channel is gold.
@vinniepeterss
@vinniepeterss 2 ай бұрын
Fantastic topic!
@calmkat9032
@calmkat9032 2 ай бұрын
I wasn't convinced that this is a big deal at first, but when you mentioned potential solutions to preventing S-risk scenarios, I was on board. Keeping self-serving fascists away from power? Considering the fate of non-humans in our decisions? Sounds good to me!
@colorpg152
@colorpg152 2 ай бұрын
you are incredibly naive if you can't see how easily this can be misused
@williamchamberlain2263
@williamchamberlain2263 2 ай бұрын
@@colorpg152 everything can be misused, you twot