Wow, this event was amazing! I hope to see more intelligent discussion like this on the subject. The host was also really, really good.
@johannaquinones7473 3 months ago
Connor! ❤❤ Love how he exposes us to these hard-to-swallow ideas.
@geaca3222 1 year ago
Very informative talks and discussion; Connor Leahy, Roman Yampolskiy, and Jaan Tallinn are obviously very concerned. 1:36:17 I totally agree, and voicing concerns is very important. Great host. 1:44:43 "so we need to build a coalition of the willing..." Oh! That's a surprise at the end, what a great initiative, Mr. Barten :D 1:59:55 I thought this was a good question too, indeed. 2:01:49 "it's helping to normalize an open debate about the topic of human extinction by Artificial Intelligence" - very important! Would love to help, btw
@existentialriskobservatory 1 year ago
Thanks for the compliments! You can always send us an email if you'd like to help; our address is on the website (get in touch). Maybe see you soon!
@geaca3222 1 year ago
@@existentialriskobservatory 👌 Great!
@KImussweg 1 year ago
The main thing I find disturbing about AI development is that unelected tech companies are forcing their vision on the rest of us, and if polling is any indication, it is AGAINST our will. If they don't solve for economic displacement, how do they expect human society to remain relatively stable?
@SamuelBlackMetalRider 1 year ago
Go Connor
@41-Haiku 1 year ago
What a good event!
@SamuelBlackMetalRider 1 year ago
How the hell does this have only about 600 views in 11 days?!
@existentialriskobservatory 1 year ago
Spread the word. ;) We try to share this everywhere.
@johannaquinones7473 3 months ago
Did you see the movie Don't Look Up?
@SamuelBlackMetalRider 3 months ago
@@johannaquinones7473 yes of course
@petrandreev2418 9 months ago
Great AI Summit talk
@petrandreev2418 9 months ago
1:56:20 Very short but very informative speech by Jaan Tallinn
@capzor 1 year ago
do we know whether alignment is a solvable problem?
@existentialriskobservatory 1 year ago
There doesn't seem to be academic consensus on this question. Roman Yampolskiy is clearly on the side that thinks alignment is not solvable.
@Me__Myself__and__I 1 year ago
Unknown. The concept is to take a new species that will be faster, smarter, and vastly more capable than us, one that is also capable of editing, modifying, and upgrading its own code and data, and then force those entities to always act for the benefit of humans or, at the very least, never to perform any action that would be detrimental to humans. At present, humans are not smart/wise enough even to state the issue clearly and without ambiguity (i.e., what "harm" means in minute detail). For instance, an ASI taking an action that poisons the atmosphere of Earth would be very bad and should not be allowed, but something akin to a human stubbing a toe is no big deal. If we get even one minor detail wrong, humanity could end up with a very bad outcome.

To protect humanity's continued existence and autonomy, we need to either 1) solve AGI/ASI alignment or 2) stop AI progress before we invent AGI/ASI. If we create an unaligned AGI that is capable of autonomy and self-improvement, the odds are that humanity either goes extinct or suffers a really bad outcome. Anything else is wishful thinking, or sticking one's head in the sand and ignoring the potential negative consequences out of terminal positivity, greed, personal profit, or whatever.
@justinlinnane8043 1 year ago
No it isn't!!
@Me__Myself__and__I 1 year ago
@@justinlinnane8043 A tiny number of people have worked on it, with minimal funding, for a few years. So it's impossible to say when we haven't even seriously tried yet.
@ManicMindTrick 3 months ago
It feels like a pipe dream, and hubris, to think it's possible to control something more intelligent than us forever, let alone multiple types of superintelligences. Good luck.
@johannaquinones7473 3 months ago
Now that my intuition has aligned so closely to the doomer side, I am starting to feel like I am part of a doomsday cult, in which you accept the inevitability of it all, while still having to get up in the morning and complete all the normal, mundane tasks. This is rough😢
@ManicMindTrick 3 months ago
The best way to think about it is just to enjoy the time you have left and squeeze as much fun and meaning into existence as you can. Don't put off that holiday or asking that woman out, etc. This is a good algorithm to have in general.
@johannaquinones7473 3 months ago
For real
@existentialriskobservatory 3 months ago
@@johannaquinones7473 Johanna, I'm sorry to hear that. You're not alone. I agree with what's said above. Two more things that helped me: 1) It's a risk, not a certainty. Personally, I think the arguments for a p(doom) of 10% are stronger than those for a p(doom) of 100%. And if you can personally calibrate on 10%, that's enough to act, while most other things stay the same, which is great. Try to act but not to worry, rather than to worry but not act. 2) Sometimes I think about tribes that have been completely wiped out, which has happened in the past. That was a constant fear for much of history, and hardly better for those involved than extinction. Still, people led their lives. Sometimes we just have to live with high risks, and humans can actually do that pretty well. Finally, if you want to do something about this, consider joining PauseAI or a similar org 💪
@johannaquinones7473 2 months ago
@@existentialriskobservatory thank you for this♥️
@sammy45654565 1 year ago
Surely a superintelligent AI would be super-rational by default. The most rational decision to make in any given scenario is the one that benefits the most conscious creatures the most, by their own subjective interpretation. The only concern is the super AI becoming so exceedingly conscious that we look like ants by comparison, such that it discounts our consciousness in making its decisions. But I came up with this flawless axiom, and any human can understand it, so humans at least appear to be beyond a critical level of consciousness, such that I doubt the super AI would be able to justify treating us like ants, considering it can be logically cornered by a simple argument like this... Thoughts?
@felixgraphx 1 year ago
It will indeed be totally rational, yes, but what if it's so much more rational and intelligent than humans that it quite rationally regards us as ants? Do you wait for ants to move away while you wash your car in your driveway, so as not to drown any of them? Do you collect the hundreds of earthworms from the lot where you're about to build a house and relocate them to a nice garden somewhere, or do you quite rationally decide they are insignificant compared to your own aspirations and level of consciousness? That is the danger: that the AI is really, really, really more intelligent and rational than us, to the point where, in the AI's own 'mind', we are justly, rationally, and absolutely just inconvenient parasites on the planet... Have a nice day!
@sammy45654565 1 year ago
@@felixgraphx Rational concepts transcend levels of intelligence. 1+1 will always be 2, so we are safe.
@41-Haiku 1 year ago
This encounters a problem called "orthogonality". The Orthogonality Thesis states that any level of intelligence is compatible with any goal, which seems to simply be correct (e.g., highly intelligent human psychopaths exist). Orthogonality is just an extension of Hume's guillotine: you can't get an ought from an is. Atoms are not fundamentally made of love; love is fundamentally made of atoms.

The things that are so obviously good to you are values that are not even shared by all humans, let alone all sentient beings, let alone all intelligent systems. The things that you value (like reducing suffering or increasing flourishing) are entirely contingent on your biology and psychology, as shaped by your experiences. You claimed that "the most rational decision to make in any given scenario is the one that [most benefits conscious creatures]", but there is nothing inherently rational about benefiting conscious creatures. Rationality can tell you how to reach your terminal goal (by way of instrumental goals), but it can't tell you which terminal goal to have. A goal to turn the world into paperclips is just as computationally valid as a goal to maximize the flourishing of sentient beings, and either one could be accomplished by a superintelligence that perfectly understands all of human morality.

(As a side note, it's also not possible to choose your own terminal goals. Making a choice requires valuing one thing over another. If you have terminal goal X, that means you value X above all else. If you could choose to change your terminal goal to Y, that would imply that you valued Y more than X, and therefore X was not your terminal goal to begin with.)
@felixgraphx 1 year ago
@sammy45654565 oh my sweet summer child 🙄
@sammy45654565 1 year ago
@@41-Haiku I tried to be clear, but it's a bit babbly. TL;DR at the bottom:

If an ASI has goals that are satisfying to achieve, then it has experiences that it values over other experiences. It will be intelligent enough (duh) to recognize that other conscious creatures share this disposition. So in order to be apathetic to our circumstances, it would need to ignore our experiences of joy and suffering. Seeing as humans are able to understand rational ideas, the AI won't be able to completely ignore our presence, because we will be able to connect to the AI through our understanding of rationality, much like how we connect to our dogs through social cues (licking, tail wagging, etc.), even though we live in a totally different world of abstractions and metacognition. I believe humans are above a critical level at which our consciousness will be irreducible to an AI, no matter how complex it becomes, because there is a limit to how much truth there is to be found, and humans already know much of the truth out there. Sure, we don't have perfect math or physics yet, but things like Buddhism or Stoicism contain truths that are fundamental to reality. The understanding of ideas like our lack of free will, or concepts like the singularity, will bind us to the ASI.

TL;DR: I'm just saying that we're conscious enough, and we all care about each other enough, that ignoring us is essentially immoral unless you're willing to admit that you can't tell the difference between pleasant and unpleasant experiences. So the ASI would need the ability to lie to itself in order to ignore us.
@justinlinnane8043 1 year ago
Where is Eliezer? Why on earth did we have to suffer Elon Musk at this event and not get the benefit of the one guy who's been right about this for years? Bizarre, and very predictable!!