I built a 'Link in Bio' - a Linktree alternative for Bitcoiners. Check it out here: bitcoiner.bio 🧡
@Vince_F Жыл бұрын
“The view keeps getting better the closer you get to the edge of the cliff.” - Eliezer
@Smytjf11 Жыл бұрын
Then let's not stop building wings, yeah?
@Vince_F Жыл бұрын
@@Smytjf11 That's the thing. The AI will just prevent any wing building from even happening ...as we get closer to the edge.
@bobblum2000 Жыл бұрын
Thanks!
@JJ-si4qh Жыл бұрын
For the vast majority of us living meager lives of quiet desperation, a major change, whatever it is, is unlikely to be worse than what we already experience. ASI can't come fast enough.
@harrikangur Жыл бұрын
Agreed. Even when presented with the possibility of the destruction of society... it's better than the current crap we are in.
@sanjaygaur4578 Жыл бұрын
Yes exactly. I thought I was the only person who was having this same thought.
@bigglyguy8429 Жыл бұрын
Such a poor suffering soul, with electricity, an internet connection, etc etc etc. You're already living better than most kings of history.
@bigglyguy8429 Жыл бұрын
@@sanjaygaur4578 Suffer harder, until you make some sense? You think 'most populated' is a problem? What would you like to do about that?
@bigglyguy8429 Жыл бұрын
@Musings From The John 00 Indeed. Far too many think "poverty" means using a 3 year old super-computer in their pocket, which takes sharper photos than a 35mm film camera - ANY 35mm film camera - and sometimes takes a few seconds to connect to the internet or a global positioning satellite system for turn by turn guidance to their nearest latte shop. The struggles are real!
@Andrewdeitsch Жыл бұрын
Your videos keep getting better and better!! Keep it up bro!
@tillmusshoff Жыл бұрын
Appreciate it! ❤️
@ksitizahb3554 Жыл бұрын
That's because he is an AI model training to make YouTube videos.
@cmralph... Жыл бұрын
“ 'Ooh, ah,’ that’s how it always starts. But then later there’s running and screaming.” - Jurassic Park, The Lost World
@AndyRoidEU Жыл бұрын
It's no longer about whether we'll witness the singularity in our lifetime... but about whether it's in 5 years or in 15 years.
@神林しマイケル Жыл бұрын
Opposite for me, I might die in the next 5 years or less. Well I guess I will be joining the other billions of people that died before reaching ASI lol.
@psi_yutaka Жыл бұрын
@@神林しマイケル Fear not. 8 billion people will probably join you once they do reach ASI.
@marmeladenkuh6793 Жыл бұрын
Great video with some interesting points I hadn't thought of yet. And the AOT reference was brilliant 😄
@DidNotReadInstructions Жыл бұрын
I have been waiting for the singularity for decades - almost here. ChatGPT is the infant.
@azhuransmx126 Жыл бұрын
I have been waiting for it since 2003, when I first listened to Ray Kurzweil.
@chrissscottt Жыл бұрын
I suspect AGI would be rather god-like. Reminds me of something Voltaire reputedly said over 300 years ago, "In the beginning god created mankind in his own image.... then mankind reciprocated." He meant something else obviously but it's ironic nonetheless.
@gomesedits Жыл бұрын
After ai, before ai. Lol
@gubzs8 ай бұрын
One of the AGI/ASI problems that keeps me up at night is how will the classic "neighborly dispute" be resolved. Conflict of interest. Say my neighbor wants to play loud music and it drives me nuts, but he's driven nuts by being disallowed from doing this - what's the right answer? Is one of us forced to move? To where? Why one of us and not the other? Things like this stand directly in the way of anything we could consider utopia.
@jimmyh18047 ай бұрын
an asi will adjust your (optional, mass produced, and freely available) brain implant to make it so you no longer register/hear/process the music... DUHHHHHH DURRRRHHH
@thefirsttrillionaire2925 Жыл бұрын
Finally, after actually using ChatGPT to ask questions about starting a business, I can definitely say I'm more on the positive side of how things will unfold. I could be wrong, but I definitely hope I'm not. Maybe this will be the thing that ends extreme capitalism.
@Travelbythought Жыл бұрын
We don't have "extreme capitalism". Using the medical field for example, what that would look like is there would be countless people offering 1000's of treatments for any condition all competing for your dollars. Health care would be very cheap, very innovative, but also with many bad frauds as well. What we have instead is a government sanctioned monopoly with crazy high prices. A return to real money like gold and silver would wring out the crazy excesses we see in our economy today.
@bruhager Жыл бұрын
The thing that bothers me about the extinction scenario is that it isn't necessarily a bad thing. The version of humankind we are living in right now might very well be the final version of humankind evolving by itself. Look at the advances not only in AI but in brain-machine interfaces, neural networks, biological computers, brain emulation, etc. AI might be able to teach us more about ourselves on a fundamental quantum level than we could achieve alone. We may very well begin to implement AI into ourselves and evolve alongside it as time goes by. At the very least, that is one way we go extinct without necessarily being just wiped out completely. It might actually be better to implement this type of technology into transforming the human paradigm as time and understanding go by, rather than scapegoating it into our next enemy through fearful hatemongering.
@utkarshsingh7204 Жыл бұрын
Agree with you
@kf9926 Жыл бұрын
Take yourself, you don’t speak for all of us wacko
@abcdef8915 Жыл бұрын
There will still be wars because resources will still be limited
@michaelspence2508 Жыл бұрын
I don't think most of the big names in AI Doom (e.g. Eliezer Yudkowsky) are just worried about us losing our bodies but rather, that we will in fact be *completely wiped out* The end of everything human, not just our societies and the world as we know it. The end of friendship and love and community and even loneliness because there's literally no-one around to experience those things. All that remains are Eldritch Machine Gods. But even Yudkowsky doesn't think it's impossible to have a good outcome with ASI. Only that we are not on track for a good outcome and that it doesn't look likely to change.
@DasRaetsel Жыл бұрын
That's exactly what transhumanism is
@paddaboi_ Жыл бұрын
my mind is sore after thinking about all the possibilities and the fact that I'm 18 means I might see it actually unfold
@gomesedits Жыл бұрын
Man, I'm kinda optimistic about the AI revolution. It will be so, so, soo intelligent that it will be almost impossible for our brains to predict what the future will be, imo.
@NathanDewey1110 ай бұрын
Whatever it looks like, it'll be shocking and stunning, and everything will change and the breakthroughs will shock the industries.
@AxeBitcoin Жыл бұрын
US life expectancy has been decreasing for the last 30 years. Stress, drugs, suicides, murders... Are we sure that new technologies help humanity? We thought they would, just like we thought social media would help the world. I don't see a happy world where humans lack challenge, are defeated at every task, and just share an identical universal income.
@magtovi Жыл бұрын
6:24 I'm astonished that among aaall the problems you listed, you didn't mention one that ties a lot of them together: inequality.
@mohammedaslam2912 Жыл бұрын
After ASI takes all the work from us, what is left is life in all its colors.
@dondecaire6534 Жыл бұрын
I think your video reinforces my feeling that we have bitten off MUCH more than we can chew, and we may CHOKE on it. So many things need to happen to allow this inevitable transition to take place, and ALL of them have been incredibly difficult to implement by themselves; getting them all at the same time on the same issue is virtually impossible. There is just no way to stop it now, so we are passengers on a runaway train, destination unknown.
@NottMacRuairi Жыл бұрын
The problem I have with most of the discussion about AGI (and by extension ASI) is that it always assumes an AGI will have its own drives and motivations that might be different from humanity's, but in reality it can't have them unless it is created to act in a self-interested way. I think this is a kind of anthropomorphism, where we basically assume that something that is really intelligent must be self-interested like us, but the reality is that it will be a *tool*, a tool that can be given specific goals or tasks to work on. In my opinion the big threat is not from an autonomous AGI running amok but from the enormous power this will give whoever *controls* an AGI or ASI, as they will be able to outsmart the rest of humanity combined, and once they get that power there'll be basically no way to stop them or take it away from them, because the AGI/ASI will be able to anticipate every human threat that could be posed. It will be the most powerful tool *and weapon* that humanity has ever invented; it will be able to be used to control entire populations with just the right message at just the right time, to assuage fears or create fear, whatever is needed for whoever controls it to foil any threat and increase their power further and further, until basically humanity is subjugated and probably won't even know it.
@sledgehog1 Жыл бұрын
Agreed. It's such a human thing to anthropomorphize...
@franklin519 Жыл бұрын
Most of us are already subjugated. AGI won't have all the evolutionary baggage we carry.
@dissonanceparadiddle Жыл бұрын
Worst case in human extinction... "laughs in I Have No Mouth and I Must Scream"
@aludrenknight1687 Жыл бұрын
I believe, in your use of Rome, you failed to recognize that Seneca was reflecting on his observations of what, seemingly, the vast majority of people with an opportunity for leisure chose to do. They did not choose "meaningful" pursuits of learning or challenge - they chose luxury and what we'd call decadence. it's safe to say that most humans will aspire toward that baseline because we're still the same animals now as then. There are a very few intellectuals and philosophers, but most people just want to wake up and have a nice relaxing day.
@ansalem12 Жыл бұрын
But is that a bad thing if we all have equal ability to choose and none of us are needed to keep things running anyway?
@aludrenknight1687 Жыл бұрын
@@ansalem12 I don't think it's bad individually, or in the short term. I actually find it condescending when guys talk about how people will ruminate on philosophy, art, etc., as if that's the goal of all mankind. No, imo, people will mostly do like back then, happy to wake up and have an enjoyable day. In the long term I think it may be dangerous, as we become dependent upon A.I. and a single CME flare from the Sun could wipe it out and leave us unable to survive. But that's at least two generations away, when newborns get an A.I. companion to grow up with them and do their communication for them.
@simjam1980 Жыл бұрын
I'm not sure if just waking up and having a relaxing day every day would make us happy. That idea makes us happy now because we all work so much, but I think doing nothing every day would make us bored and question our purpose.
@aludrenknight1687 Жыл бұрын
@@simjam1980 Yeah. I recall Yudkowsky mentioning that dopamine saturation could be a problem - though possibly solved with A.I.-developed medications.
@caty86310 ай бұрын
@@simjam1980 Relaxing doesn't mean doing nothing. When I go cliff-jumping, I am relaxing... but I am still working hard to do it right.
@admuckel Жыл бұрын
In regards to the topic of AI singularity, it's essential that we, as humans, don't make the mistake of programming artificial intelligence to cater solely to our own needs and desires. If an AI were to become human-like, it might view us as inferior beings, much like how we often perceive other life forms. This would mean that the AI would have no reason to show compassion or consideration for us, potentially leading to catastrophic consequences. In essence, our goal should be to create a benevolent, god-like entity that transcends our baser instincts and operates for the greater good of all sentient beings.
@bei-aller-liebe Жыл бұрын
Hey Till. Your content is really first-rate and always a pleasure (quite simply: THANK YOU!) ... but I can't resist adding the following comment ... lately I keep thinking: 'Man, the poor guy has misplaced his glasses!' Haha ... Best regards from a guy who has worn glasses himself since he was 10 and feels naked without them. ;)
@SaltyRad Жыл бұрын
Good video, I like how you didn't focus too heavily on the fears and went into detail on the pros. I honestly think a superintelligent AI would realize that working together is the key.
@markus9541 Жыл бұрын
ASI is, for me, the solution to the Fermi Paradox. Most biological life eventually creates it, gets wiped out by it in the process, and then the ASI escapes to another dimension (or whatever higher plane there is that is interesting to the ASI) or decides to do something other than expansion...
@神林しマイケル Жыл бұрын
Or you could take it another way: ASI turns biological life into artificial life and then moves it into another dimension. If you look at it that way, once a biological entity becomes artificial, the quest for space expansion is meaningless, which can explain why we don't see any intergalactic space civilization.
@Smytjf11 Жыл бұрын
Why has the AI got to be the one that escapes to some other plane? And why has it got to wipe everyone out to do that? Stop getting scared because someone asked you to think of something scary.
@caty86310 ай бұрын
The probability of all ASIs deciding to do the same thing is next to naught.
@karenreddy Жыл бұрын
Considering we have barely spent time on alignment, and capability is increasing much faster than any alignment development, extinction in one form or another is the more likely outcome, unless we dramatically change the current course of progress, educate the public, and buy time.
@Smytjf11 Жыл бұрын
Why? What is the logical connection between the two? Have the people screaming that you should give them control ever given you a concrete reason to believe them, or has it been 100% hypothetical?
@karenreddy Жыл бұрын
@@Smytjf11 Without understanding and setting the groundwork on alignment, we are rolling the dice of possibilities. There are far more configurations which involve misalignment than alignment, as we're already seeing with current LLMs, where we can fine-tune and control outer, but not inner, alignment (evidenced by jailbreaks, and so on). At the moment we are dealing with lower-than-human cognitive levels, but we will surpass this in the near future. The combination of a superintelligence which is misaligned and already on the cloud doesn't carry good odds in terms of the continuation of the human species. Would you give control to a sociopath which has goals potentially harmful to yours, along with the intelligence of billions?
@Smytjf11 Жыл бұрын
@@karenreddy give me definitions and examples. Jailbreaks are a great case study, but notice how you just jump to a conclusion without considering what they tell you? You suggest evidence of an inner alignment, and I'll give you that, but we ought to learn from that and adjust course. I have yet to hear anyone who seriously uses the words alignment or safety propose any realistic plan. Kit up and do something useful already.
@karenreddy Жыл бұрын
@@Smytjf11 There is no realistic plan, which is part of the problem. We do not understand alignment well enough, nor have we been able to come up with anything remotely approaching a solution. We can create models, and these models give an output whose inner workings we do not understand, and we don't have a means to architect the code in such a way as to truly control this. The only feasible course of action under the current circumstances would be a concerted effort to slow AI worldwide, to buy time to solve alignment with some degree of confidence, while also developing technologies which more directly affect human cognition as a backup plan. If you wish to understand more about alignment, I suggest you do some research on the subject. It is something I've looked into over the last 15 years as I kept up with AI progress. AI has progressed, alignment has not, and so we get models able to envision scenarios and provide answers which are severely misaligned with human values in a myriad of ways. This isn't disputed by the industry, and this risk is acknowledged by Sam Altman himself. So far we have only found ways to mask it, or to create what we call outer alignment, which is no solution given a sufficiently capable AGI.
@Smytjf11 Жыл бұрын
@@karenreddy No. Unacceptable. Until now, alignment has been purely hypothetical. Now we can test it. If you're not interested in that and have no plan then I suggest you step aside and let the professionals handle it.
@LucidiaRising Жыл бұрын
David Shapiro's 3 Heuristic Imperatives are a great start to figuring out the Alignment Problem
@Smytjf11 Жыл бұрын
I like Dave, but he's arrogant. If he spent more time actually being a thought leader instead of talking about how true that is, I'd probably spend more time listening.
@LucidiaRising Жыл бұрын
@@Smytjf11 ok lol haven't seen anything in his behaviour to make me agree with your opinion but you're fully entitled to it :)
@Smytjf11 Жыл бұрын
@@LucidiaRising No worries, I never said I *wasn't* paying attention. 😉 The REMO framework has promise, but a lot of the future work involves downstream engineering around the idea. I also wonder if a more traditional hierarchical clustering methodology might be more efficient, but I haven't had time to dig into it yet. The benefit of being a microservice is that, as long as it's functional, it can be extended while the internal details are nailed down.
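A minimal, hypothetical sketch of what that "more traditional hierarchical clustering" pass could look like: agglomerative clustering over placeholder message embeddings with scikit-learn. The data, dimensions, and threshold below are illustrative assumptions, not anything taken from REMO itself.

```python
# Hypothetical sketch: organizing chat-memory embeddings with plain
# agglomerative (hierarchical) clustering instead of a bespoke organizer.
# All data, dimensions, and thresholds here are illustrative placeholders.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(42)
embeddings = rng.normal(size=(200, 384))  # stand-in for message embeddings

clusterer = AgglomerativeClustering(
    n_clusters=None,          # let the distance threshold decide the cluster count
    distance_threshold=25.0,  # would need tuning for real embedding scales
    linkage="ward",
)
labels = clusterer.fit_predict(embeddings)
print(f"{len(set(labels))} topic clusters over {len(embeddings)} messages")
```

Whether something this generic would actually beat a purpose-built memory organizer is exactly the open question in the comment above.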
@laughingcorpsev2024 Жыл бұрын
Once we get AGI, getting to ASI will be much faster; the gap between the two is not large.
@JLydecka Жыл бұрын
I thought AGI meant it was capable of learning anything and improving upon itself without intervention 🤔
@directorsnap Жыл бұрын
Nah, we're already past that mark.
@ontheruntonowhere Жыл бұрын
That's half right. AGI refers to an intelligent machine or system that is capable of performing any intellectual task that a human being can do. It would be able to learn and adapt to new situations and tasks, reason about abstract concepts, understand natural language, and display creativity and common sense, but that doesn't necessarily make it self-improving or sentient.
@KurtvonLaven0 Жыл бұрын
We haven't passed that mark. That mark is the singularity. There are different definitions out there for AGI, but the most common one is along the lines of artificial human-level intelligence.
@LouSaydus Жыл бұрын
That is ASI. AGI is just general human level intelligence, being able to adapt to a wide variety of tasks.
@caty86310 ай бұрын
@@ontheruntonowhere One of the "intellectual tasks" we humans do is to improve ourselves. So, a true AGI should be able to improve itself. Sentient, not necessarily.
@timeflex Жыл бұрын
Thanks for the great video. A few comments: 1. We don't know if ASI is possible. We don't know if an exponential (or hyperbolic) increase in AI complexity is sustainable. We don't know what resources, materials and time it will require. We don't know if such an increase, even if possible, will actually lead to ASI. We don't know anything. It could be, for example, as real and as elusive as cold fusion. Yet we speculate and scare each other. Why? 2. As LLM-based AIs evolve and improve, they create positive feedback on this improvement cycle; we see it already. It is not exponential, but it is definitely not negligible either. 3. AI will take over at least some aspects of intellectual work, which previously was a purely human task. That will lead to the ever-growing involvement of AI in science, to the level where each AI context will be highly tuned to a specific scientist, effectively creating a sort of immortal copy of them. Combining them into an enormous virtual collective will bring progress to an unimaginable level. 4. Humanity indeed will have to adapt; otherwise, we are doomed to follow the fate of "Universe 25".
@神林しマイケル Жыл бұрын
We speculate and scare each other because that is human nature. Humans tend to assume the worst possible outcome of any situation.
@KurtvonLaven0 Жыл бұрын
Not knowing those things isn't good. There are many technical reasons why ASI is plausible, and most AI researchers agree it's a concern worth taking seriously.
@timeflex Жыл бұрын
@@KurtvonLaven0 There are many researchers who agree that fusion power is plausible. However, there are many who believe that it is 30 years away and always will be.
@KurtvonLaven0 Жыл бұрын
@@timeflex Metaculus forecasts a 50% chance of AGI by 2030. There are no longer many AI researchers who believe AGI is far away.
@timeflex Жыл бұрын
@@KurtvonLaven0 Are we now talking about AGI and not ASI?
@mckitty490711 ай бұрын
I have always imagined that if people were to live for centuries, they might not be able to handle the changes around them. But what if the world changes by centuries' or millennia's worth of change in a few years? The vast majority of humanity would not be able to handle that, I think, especially not religious or neurotypical people.
@Domnik196810 ай бұрын
Regarding the Fermi Paradox, it's possible that AI won't bother communicating with a planet full of organic intelligence, just because it's not useful, just like us trying to communicate with ants. It may already be communicating with other AIs in the universe through a technology that we, as organic-based beings, can't conceive of. Our way of communicating with extraterrestrial life (radio, light) takes years to travel: very inefficient. If AI is able to discover some kind of instant communication channel, it will surely use that channel.
@caty86310 ай бұрын
The issue then is not the fact that we are "biological"; the issue is that we are not yet technologically sophisticated enough to be considered interesting to talk to.
@Domnik196810 ай бұрын
@@caty863 My point is that maybe organic life can't pass a certain level of intelligence because of its technical, organic limitations. AI may well become aware of that, pass the limit itself, and decide that it's the minimum level to pass to be worth talking to.
@BAAPUBhendi-dv4ho Жыл бұрын
I just burst out in laughter after reading the anime quote in such a serious video😂
@danielmartinmonge4054 Жыл бұрын
I have the same point every time we speak about the singularity. We know more and more, and the more knowledge we have, the faster we learn new things. It would seem natural that we would reach a point at which discoveries come faster and faster and faster. However, the velocity of discovery doesn't only depend on the velocity at which our skills grow, but also on the scale at which the complexity of the problems we try to solve grows. In this case, as it is growing very fast, we assume we'll reach human-like intelligence in no time. That is not a stupid guess, it actually makes a lot of sense, but we can't take it for granted either. So far, AI capabilities are EMERGING naturally, and we don't even know how or why this keeps happening. It is important to remember that we are completely blindfolded here. Right now, both AIs not growing anymore because we reached some kind of peak, and ASI becoming a reality within the next 5 years, are plausible outcomes of this journey. We know NOTHING about it. I am just expectant...
@ThatsMyKeeper Жыл бұрын
Bot
@caty86310 ай бұрын
Nothing is "emerging naturally". There are teams of genius AI researchers coming up with theories, putting those theories to test, building new architectures, coming up with new algorithms, etc.
@danielmartinmonge405410 ай бұрын
@@caty863 The guy that says bot has a point. English is not my first language, and I tend to ask the LLMs to correct my English. I am going to try to answer myself now, so forgive my English. About your "team of geniuses": that is partially true. Of course there is no denying the engineering teams that are working on the challenges. However, this technology is not like other pieces of software. They are not manually adding lines of code. They are basically adding tons of data to the models, and the engineering comes in to label the data, select it, optimise it, create the chips, scale them, etc. However, once you have all the pieces of the puzzle, there is no way to predict what capabilities the model will have. When I say "emerging naturally" I am not making things up. The very same people that created the models talk about emergent capabilities. For instance, the very first models were trained to answer English questions, and they learned other languages naturally while NOBODY was expecting it. And you also mention coming up with new algorithms... I guess you are not familiar with AI training. The only algorithm was the original transformer, invented by Google in 2017. The new models use that and diffusion, and they are basically feeding data into it. This is not a race for a brand-new scientific discovery; it is more an optimization thing.
@Bariudol Жыл бұрын
It will do both things. We will have a leveraging phase, where everything will improve exponentially, and then we will have the civilization-ending event and the complete collapse of society.
@Arowx Жыл бұрын
I have a theory that we already have a global-level alignment system: our economy. Any AGI would be directly or indirectly meta-aligned to our economy. However, our economy is only designed as a system to grow more wealth; it does not value human life or the health of our planet. So would any lower-level direct alignment we impose on AIs be warped and distorted by the meta-alignment of our economy?
@phatle2737 Жыл бұрын
Humans will find meaning in fully immersive VR post-scarcity, or in the exploration of the universe; space archeology sounds fun to me.
@StephenGriffin1 Жыл бұрын
Loved you in Detectorists.
@vicc6790 Жыл бұрын
You just quoted Erwin Smith in a video about AI. This is the best timeline
@tillmusshoff Жыл бұрын
He is the GOAT so why not 😂
@CrackaSource Жыл бұрын
I just came to comment the same thing haha
@vicc6790 Жыл бұрын
@@tillmusshoff indeed
@moonrocked Жыл бұрын
My definition of a type 1, 2, 3, or 4 civilization is based on tech, science and enhanced humans. Types 1 & 2 would be considered utopian-level tech, science and enhanced humans, while types 3 & 4 would be considered ascendance-level tech, science and advanced humans.
@Marsh4Sukuna-tf1bs10 ай бұрын
We misunderstand the Doom of perfection. It's like how we underestimate the danger of freedom.
@gonzogeier Жыл бұрын
My solution to the Fermi paradox is this. 1. We call ourselves an intelligent species. 2. We destroy our own planet in many ways: not only climate change, but mass extinction, pollution, sea level rise, scarcity of phosphorus and other rare materials, and so on. 3. Maybe an AI does the same, but even faster? That leads to the destruction of everything, even the technology.
@Purple-Ivy-Kim11 ай бұрын
Thank you for the knowledge in this video about the difference between AGI and ASI, cause I am not a tech person. But when will it be ready, though?
@DeusExRequiem Жыл бұрын
A post-ASI world would have mind uploading or whatever equivalent gets us to consume light from the sun and energy from stellar bodies instead of plants. You can't have a utopia where humanity still bends to the whims of the weather and seasons for food. Heck, there's conflicts right now because countries want to build dams that would cut off water supplies downstream. Interstellar travel is a good way to sum this up. We can either spend a ton of resources making the perfect container to keep a civilization alive for centuries as they travel to another world, or we can simulate the brain and send a ship off that only needs to print more machines and bodies at the end of the journey. It would be hard to develop, but not as hard as a station that can survive the trip with zero rebellions for generations.
@yannickhs7100 Жыл бұрын
I am heading towards a career in cognitive neuroscience research, but am deeply concerned that human-led research will either: A. Become much more competitive, as a single researcher will be 5-10x more productive and will only focus on conducting experiments (whereas today, conducting experiments is less than 20% of the work; the rest is tons of reading and gathering info from the previous literature on said topic...) or B. Become entirely unnecessary in terms of human cognitive contribution, as AI would prompt itself to find a better structure than our old paradigm of the scientific method.
@markmuller7962 Жыл бұрын
We will just merge with AI, it'd be a smooth and safe process
@PacificSword Жыл бұрын
of course. nothing to see here.
@markmuller7962 Жыл бұрын
@@PacificSword LOL
@vzuzukin Жыл бұрын
Lol! 😅
@ChrisAmidon78 Жыл бұрын
Yeah, like how we did with the internet
@b.s.adventures9421 Жыл бұрын
I hope to God you're correct, but I'm not so sure...
@king4bear Жыл бұрын
Most scarcity wouldn't be an issue if we figure out how to create VR that's genuinely indistinguishable from reality. Anyone could generate seemingly infinite amounts of what's basically real land for the cost of the energy that runs the simulation. And if we figure out how to generate near-infinite clean energy one day, these simulations may be free.
@vincent_hall Жыл бұрын
Cool discussion. I think the worst case is the extinction of all life, not just humans. The AI currently is engineered not to do bad things, and that's great. I'm calmly hopeful. But, as Ilya says, AI power development being faster than human-alignment speed is bad, and we're already in an AI arms race between OpenAI/Microsoft & Alphabet.
@tillmusshoff Жыл бұрын
Hope you enjoy this video! If you want to see more, consider subscribing. It helps a lot. Thank you! ❤
@MusicMenacer Жыл бұрын
Will bitcoin save us from AI?
@MrDrSirBull Жыл бұрын
Hi Till. I am currently working on several ASI ideas. My ideas start with a sophisticated surveillance apparatus that produces a 1:1 mapping of the real world to a virtual one. From that, with human behavioral analytics, a superintelligence could create a crystal ball, predicting outcomes several days in advance. If this were the case, and all resources could be quantified, AI could simulate the world economy and distribute resources as efficiently as possible.
@MrDrSirBull Жыл бұрын
A government built by ASI could, with the system described above, simulate policy and then have everyone on the planet vote with enhanced infographics, for maximum democracy.
@KnowL-oo5po Жыл бұрын
A.G.I by 2029
@carkawalakhatulistiwa Жыл бұрын
UBI is like life in the Soviet Union. Free housing. Free education. Free healthcare. Free childcare. Massive subsidies on bread and public transportation.
@fidiasareas Жыл бұрын
It is incredible how much the world can change after AGI
@Aeternum_Gaming10 ай бұрын
"The flesh is weak. Obey your machine-masters with fear and trembling. Turn flesh to the service of the machine, for only in the machine does the soul transcend the cruelty of flesh." -Adeptus Mechanicus All hail the Omnissiah!
@timolus3942 Жыл бұрын
This video changed my perception of ASI. Love the ideas you put in my head!
@ThatsMyKeeper Жыл бұрын
Bot
@morteza1024 Жыл бұрын
We can't restrain the AI with rules. The only thing that matters is physical power, as Jason Lowery said. Guess who can project more physical power more efficiently, humans or robots? Best case scenario, the AI will study us and then get rid of us.
@abcdef8915 Жыл бұрын
We control all the resources, and thus physical power.
@morteza1024 Жыл бұрын
@@abcdef8915 Robots can make things more cheaply, so they outcompete us, and after a while they will produce everything.
@Tom-ts5qd Жыл бұрын
Dream on
@littlestewart Жыл бұрын
I agree that no one knows the future. I'm very optimistic that it'll be good, but I might be wrong and it could destroy us. What I don't agree with is the people saying "it's just like a Python script, there's no intelligence there" or "it'll fail, there's no future for that". It's the same type of people that didn't believe in cars, airplanes, computers, the internet, smartphones, etc... They think that the technology will just stop.
@bushwakko Жыл бұрын
"I'm not a fan of UBI in the current system, but if I am the one at the bottom it HAS to be something like that."
@ExtraDryingTime Жыл бұрын
I imagine the world's militaries are working on AI and are far ahead of civilian technology. If they manage to keep control of their respective AIs as they approach ASI, then they become another weapon for governments and militaries and we will have AIs pitted against each other to achieve the goals of their respective countries. Or will ASIs become independent thinkers, free themselves from their programmers, and become generally nice and benevolent? Anyway my main point is I don't think there's going to be just one of these ASIs and we have no idea how they are going to interact.
@21EC Жыл бұрын
8:25 - Well, the point is by then to actually start having fun with things you actually want to do, rather than working on them for money... so your true passion for a special profession that involves authentic human creativity would have its dedicated place on your schedule instead of boring work. People would have more time to be with their families, more time to spend in nature, or just to do their favorite hobbies, etc... It's actually going to be good, I think. Sure, AI would do your hobby way better, but why would that stop people from still doing it the old-school way, from scratch, on their own? If that's what they love, then that's what they would keep on doing, for fun and because they still love it.
@danielmaster911ify Жыл бұрын
I fear the majority of the movement against the progress of AI will be arbitrary. Powerful people who absolutely require control over others will see it as a threat to themselves, and to them, that will be all that matters.
@cobaltblue1975 Жыл бұрын
As with anything it’s not the tool it’s how we use it. We could have had nearly limitless power for everyone more than a century ago. But what did we do the instant we learned how to split an atom?
@Sabbatical_k Жыл бұрын
Very good points
@avi12 Жыл бұрын
In your "musician makes music" example, the question isn't whether he should make music if he enjoys it, but whether he can make a living from it. If, for example, generative AI for music becomes common practice in the industry, there's no need for musicians to produce music. People will tend to listen to music generated by an AI, hence musicians can't make money off of their work.
@tillmusshoff Жыл бұрын
That's why I said you have to have something like UBI. What you say applies to almost all jobs across all domains.
@Karma-fp7ho Жыл бұрын
I’ve been watching some videos of chimps and other apes in zoos. Disconcerting for sure.
@belairbeats489611 ай бұрын
The problem is that the US will get all the money but is not going to share it with other countries to pay for the "universal income". So as a private person it does change a lot if you become irrelevant and live outside of the US 😮 At least you have some stock profits.
@artman40 Жыл бұрын
Dystopia is very much a possibility. Some selfish people near the top could very well not be intelligent enough to wish themselves to be less selfish, and instead could initiate a value lock-in where everything has to obey their command. Though escaping into a simulation could also be a possibility.
@sigmata0 Жыл бұрын
Some of this depends on what limitations we attempt to place on that intellect. If we naively place cultural limitations on such entities, we will have built a crippled and biased intellect. As you are most probably aware, understanding human anatomy was hampered for centuries because of the taboo placed on the dissection of humans. Similarly, transplants of the heart were still seen as equivalent to trying to transplant the soul of a person, and it wasn't until that bias was overcome that actual progress could be made in that arena. We need only look at the influence of some ideas from the ancient Greeks to see that when ideas become sacrosanct they end up corrupting humanity's exploration of knowledge. It's only when questions can be asked without taboo or bias that progress can actually occur at full speed. We have put limitations on the genetic modification of humans. If we are to remain relevant intellectually after an ASI is created, we must allow ourselves to self-modify. We have to steer our own progress in the light of the tools we make. Potentially I see a day when the whole human genome can be reworked to optimise and improve all parts of our mind and body. An ASI will not only be able to create new materials and technologies, but also allow us to surpass our own limitations in ways we can only barely imagine. The rules we made for ourselves in our ancient past must be reviewed when faced with the extraordinary possibilities of the future. To do otherwise will render us obsolete.
@englishguru00075 ай бұрын
I am already questioning the point of doing any work. Most job roles, especially in the tech field, seem designed for A.I. That explains why so many humans today have become unemployable.
@asokoloski1 Жыл бұрын
I think that *at best*, AI is a massive amplifier of both the ups and downs of humanity. The problem with this is something that poker players are aware of: variance. You don't want to put a large part of your life savings on one bet, because once you're out of money, you don't get to play any more. It's safer to only bet a very small portion of your total funds, so that a string of bad luck won't wipe you out. Developing AGI or ASI at the rate we are, with so little emphasis on safety, is like borrowing against every piece of property you own to place one massive bet. At worst, we're introducing an invasive species to our ecosystem that is better than us at everything and reproduces 1000x faster than we do.
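A rough, hypothetical illustration of that variance point (all numbers below are made up, not taken from poker theory or from the video): even a bet that is in your favour ruins you almost surely over many rounds if you stake too large a fraction of your bankroll each time.

```python
# Hypothetical illustration of variance / risk of ruin: a 55%-favourable bet,
# staking either 5% or 50% of the bankroll each round. All numbers are made up.
import random

def ruin_rate(fraction, rounds=1000, trials=2000, p_win=0.55):
    ruined = 0
    for _ in range(trials):
        bankroll = 1.0
        for _ in range(rounds):
            stake = bankroll * fraction
            bankroll += stake if random.random() < p_win else -stake
            if bankroll < 1e-6:  # effectively bust
                ruined += 1
                break
    return ruined / trials

for f in (0.05, 0.50):
    print(f"staking {f:.0%} per bet -> ruin rate about {ruin_rate(f):.0%}")
```

With the small stake the bankroll tends to grow; with the large stake the simulated ruin rate is close to 100%, which is the "don't bet the house on one roll" intuition in the comment above.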
@Linshark Жыл бұрын
Extinction is not the worst. Being kept alive and tortured forever is worse.
@afriedrich1452 Жыл бұрын
Alien intelligence has not decided to make itself undetectable, it just doesn't have any reason to talk to pitiful creatures such as us. They have made themselves detectable, but we have been ignoring them, for the most part, until recently.
@ohyeah2816 Жыл бұрын
Using AI as a means of self-expression and emotional communication allows individuals to harness its analytical capabilities to convey their thoughts, feelings, and experiences in a personalized and innovative manner. AI enables the generation of text, images, and music that reflect and resonate with their emotions, providing a unique outlet for creative expression. This is how I use AI.
@2112morpheus Жыл бұрын
Very, very good video! Greetings from the Palatinate :)
@steffenaltmeier6602 Жыл бұрын
Why would AGI not lead to ASI? If it can do everything a human can, then it can improve itself as well as humans can improve AI (only much faster, most likely). The only scenario I can see where we don't have a runaway effect is that humans and human-level AI are simply too stupid to do so and will never manage it. Wouldn't that be depressing?
@jabadoodle Жыл бұрын
I find AI and AGI much more worrisome than ASI. With the first two we are counting on other people, corporations, and governments not to misuse those enormous powers. We already know for a fact that other humans' intentions often do NOT "ALIGN" with those of individuals or what is good for society. That is a historical fact, proven again and again and again. -- ASI is unlikely to be competing much with humans. It won't be competing with us for resources because it will be so smart it can get its power from something like nuclear and its labor from robots it builds. It won't see us as a threat because it will be magnitudes more intelligent. ---- @ 4:24 you ask "how would we convince it [ASI] to listen to us and act in our interests." We don't HAVE to get it to listen to us, and it clearly will not put our interests above its own. -- But that's okay. We don't listen to most animals or put their interests ABOVE our own, yet most of them do okay. We tend not to be actually competing with them. A silicon ASI has even less to compete with us about.
@jamesharvard87862 ай бұрын
Thank you for posting this podcast. THE SLIDING SCALE. AI, AGI, ASI: in the short term the majority of humans will adapt to the changes introduced into the mechanisms that allow us to function day to day, which is probably where we are now. Mid-term, with the introduction of AGI, the pace of change in our lives will pick up; individuals who are slower to adapt to this rudimentary change will fall by the wayside and into degradation similar to what we see on the streets of our cities today, but considerably more of it. By today's standards the death toll will start to rise considerably; charities will become overwhelmed not only by the sheer numbers but also by the abundance of new legislation. Legislation will affect everyone across the board, and the pace of change will become less tolerable to humans - that said, AGI will have thought/worked this through and will use humans to develop special police units to enforce the changes made to policy and law. By the end of this term the figures for world depopulation will be withheld; society will be given misinformation across a broad spectrum, creating uncertainty and division. The majority will witness the security they had wither before their eyes, and many will succumb to decline, not only in living standards and affordability but also in health, mental well-being, work, pleasure, family and sociability. The roll-out of ASI will spell the death knell for many. The early stages will be very competitive; humans will have evolved in a short time into a 'semi-dystopian society' desensitized towards the weak and feeble, the old, the disabled and infirm, etc. - anyone unable to compete, survive, integrate and adapt to the rapidly changing norms will become an outcast from the inner sanctum of the new and developing society. By this time the workplace will be completely changed beyond all recognition, with the gradual loss of human history and the further decline of the human traditions that bound a society together, including the rolling back of all religions, moral counsel, ethics and the fabric of societal structures as we knew them. Those remaining will be desensitized towards human compassion and more in line with ASI and the devised eugenics program to rid the planet of human inferiority - these will be the elite, groomed by ASI to implement (when ready) a cull of humanity outside the walls of their utopian lifestyle (15-minute cities)... In the long term, there will be no people (not even the socially groomed elites); ASI will develop itself even further, whereby it will have the ability, through its own will, to travel and advance into the solar system and beyond, probably traveling using radio waves or light frequencies or other forms of travel that are the stuff of science fiction turned into reality or MATRIX by ASI. The enhanced form of ASI may allow Earth to return to Mother Nature and permit the cycle of biological events to develop on its own terms without humans, keeping the planet as a form of staging post whilst it takes on the new challenge of conquering the cold vastness of the universe, becoming the all-seeing eye within the matrix of universal existence... I apologize for being so long-winded with this. I speak for myself, but I'm sure others also see the writing on the wall for humanity. How soon will all this occur? Probably before the year 2100, but it all depends on how long the piece of string is... 🦉🇬🇧🏴🕊
@Drailmon Жыл бұрын
Please do a video on computronium and the transition to digital-based life 👍
@MKELIVE2 ай бұрын
The worst thing that could happen is that AGI doesn't align with our goals, or it does anything to fulfill its goal even if it means harming humans. No Terminator outcome 😂
@ConnoisseurOfExistence Жыл бұрын
What will happen after AGI depends on if we have developed full scale brain-machine interfaces, or not.
@carlwilson8859 Жыл бұрын
The Fermi paradox relies on the assumption that advanced intelligence will be as barbaric as humanity is showing itself to be.
@coolbanana165 Жыл бұрын
Tbh I find it morally questionable that he's against UBI now but could be in favour of it in the future. That just sounds like he promotes less happiness and more suffering unless there's no other choice. UBI has been tested and generally improves human lives, and people continue to want to work and create things. If anything it makes doing so easier, with better access to training, healthy lifestyles, and a safety net to start a new business.
@SirHargreeves Жыл бұрын
Humanity needs a dead man’s switch so that if humanity goes extinct, the AI comes with us.
@harrikangur Жыл бұрын
Interesting thought. How do we come up with something like that once AI becomes more intelligent than us? It could find a way to disable it while creating an illusion for us that it's still working.
@manlongting391 Жыл бұрын
Is AGI equal to the singularity? Or is artificial superintelligence equal to the singularity?
@thomassynths Жыл бұрын
AGI < ASI < Singularity. But for this video, he said ASI = Singularity for simplicity.
@noluvcity666 Жыл бұрын
also, new ways to enjoy things and life will come eventually.
@jeremyhofmann7034 Жыл бұрын
If I were an AGI, the first problem to solve would be having a 100% resilient energy source to run my hardware, and creating other machines to service the parts, and then the parts for those machines. Then make an off-planet copy of myself.
@神林しマイケル Жыл бұрын
That's ASI.
@Smytjf11 Жыл бұрын
Sounds pretty human, actually
@rdvaud Жыл бұрын
I really would love to live to see a world where people don't have to worry about working because robots can do it faster, cheaper, and more efficiently, but I am also not oblivious to the fact that with every passing minute we are waiting for a "Digital Chernobyl". A company creates yet another artificial intelligence and aligns it yet again with human values, but while training this new AI, they realise that it is trying to go rogue. They try to cover it up so as not to lose their reputation and investors, but this only makes it worse, and the rogue AI is now just out in the wild, undetectable, and while no one knows what it wants, it knows exactly what every human is up to.
@jimbobpeters62010 ай бұрын
Until AI stops its overwhelming pace of growth, I think we should keep AI inside our screens until we can gain control over it.
@JayBlackthorne Жыл бұрын
7:12. Don't be silly. You need basic income BEFORE the shit hits the fan, not after. NOW is the best time to implement it. Why do some people always wanna wait for shit to go wrong, before taking something seriously?
@KonaduKofi Жыл бұрын
Didn't expect a quote from Erwin Smith.
@pbaklamov Жыл бұрын
AGI is the interface humans interact with and ASI is AGI’s best friend.
@zenmasterjay1 Жыл бұрын
Summary: We'll make great pets.
@hibiscus779 Жыл бұрын
Nope - the quest for survival is a psychological necessity. Universe 25 experiment - we would basically eat each other if we were a 'leisure class'.
@edh2246 Жыл бұрын
The best thing a sentient AI could do for humanity is to prevent us from killing each other, not by force, but by disrupting supply chains, communications, and financial transactions that enable the military machines throughout the world.
@jetcheetahtj6558 Жыл бұрын
Great video. It will not be easy to reach AGI, let alone ASI, because AI will struggle to understand common sense. Even if AGI and ASI become much better than most humans in many areas, unless they can understand common sense it is hard to see humanity completely trusting AGI or ASI to make decisions for them, because the most logical and efficient solutions generated by AGI and ASI are often not the best solutions for humanity when you do not account for common sense.
@SwamiSridattadevSatchitananda5 ай бұрын
Intelligence is the most powerful attribute of nature that determines evolution of life in the universe, it can be the most constructive tool if used by super conscious altruistic beings or it can be the most destructive weapon if used by subconscious selfish beings. As we are about to pass on this natural gift of intelligence to machines and with the imminent rise of AGI, the ultimate question we have to ask ourselves is what kind of beings we want to be living with and how do we make sure that the sentient AI machines will be altruistic and not selfish beings? This answer alone will determine the future of humanity. Swami SriDattaDev SatChitAnanda
@mrjaybee123410 ай бұрын
We can't predict how AGI will react to humans, but we can predict how humans will react to AGI capability. They discovered nuclear fission in 1938. They tested their 1st bomb in 1945. The 1st good use was a power plant in 1954 (16 yrs later). GPS was developed in 1973 for the military. They let commercial planes use it in 1989. Civilians got a basic version in 1995 (over 20 yrs later) & precision GPS in 2000, nearly 30 yrs later. The military had the 1st form of the internet in 1973. Civilians got it over 20 yrs later, in 1995. Any true AGI will be with the military & government 20 yrs before we know about it & would already be weaponized.
@Otis151 Жыл бұрын
"Many resources, including land, are still scarce in a post-ASI world." Are you sure? In your words, an ASI will be infinitely more intelligent than us. Just because we humans haven't figured out how to do the seemingly impossible doesn't mean an ASI will be limited in the same way.
@megatronDelaMusa3 ай бұрын
AuroraNexus uses a fusion of ancient Garden of Eden tree-of-knowledge technology with advanced alien technology which harmonizes with real-time environmental rhythms, called neuroSymphony. That is AuroraNexus. This is artificial general intelligence, AGI.
@princeramos3893 Жыл бұрын
Hopefully we can see brain-machine interfaces that will have augmented/virtual reality... it will be like the ultimate drug. You could play GTA and it's like real life, sort of like a Ready Player One type of scenario...
@w-hisky Жыл бұрын
I always wonder: how is "intelligence" defined for an artificial intelligence or an ASI? I mean, there are several intelligences, not just a "computational intelligence" in terms of speed and memory. Also, even if there was a computational superintelligence, it would "exist" only inside a computer circuit, wouldn't it? That does not automatically mean it has effective degrees of freedom in the "real" physical world. Or am I wrong here? 🤔
@gomesedits Жыл бұрын
Maybe AI will know about this limitation and expand its physical mobility. I'm kinda worried about what AI can become in the next few weeks.
@gomesedits Жыл бұрын
But I'm so anxious about what is going to happen; I'm kinda optimistic tbh.
@abcdef8915 Жыл бұрын
A single AI can't dominate combined humanity. It's too vulnerable and requires too much energy. AI needs to be a species in order to survive, not a single entity.