AGI to ASI will take exactly 37 weeks. It came to me in a dream
@complxhellasfreeproducts48942 ай бұрын
I saw I was with my crush in a dream. I hope dreams come true 😂😅
@MasamuneX2 ай бұрын
@@complxhellasfreeproducts4894 don't let your dreams be dreams don't let memes be memes
@ryzikx2 ай бұрын
I don't even think that's unrealistic, tbh lmao
@gdok60882 ай бұрын
Actually the answer is 42
@Squeegeeee2 ай бұрын
I can confirm this will happen. I had the same dream
@thomasschon2 ай бұрын
99% of people couldn't have predicted what's happening today, even 5 to 6 years ago. If they've learned anything from that, they should understand that anything is possible. We aren't the kind of thinkers who can accurately predict events like this.
@DaveShap2 ай бұрын
I agree, generally, that forecasting is hard. However, there is wisdom in the crowd, and it's also good to know "where everyone stands"
@Ryanowning2 ай бұрын
No, actually, most people in the know predicted the REAL AI compute that is happening, but AI companies are engaging in mass fraud.
@mariomills2 ай бұрын
Yeah, I want to know where the audience is so I can go the opposite way lmao @@DaveShap
@chrism.11312 ай бұрын
Typical linear thinking, as opposed to exponential.
@mbsrosenberg2 ай бұрын
I really appreciate that this channel is about "how do we need to think about AI," where so many channels only cover "here's some update that happened with OpenAI." While you do cover these developments, you make me really consider my beliefs and think about the impacts of the advancements. Thank you
@MichaelForbes-d4p2 ай бұрын
@@mbsrosenberg yes
@RodolpheYkler2 ай бұрын
I'm French, and I'm very disappointed that there's no channel that thinks like this about these subjects in my language. Sometimes I don't understand what's being said in the videos. 😢
@chrism.11312 ай бұрын
ASI will be intelligent enough to realize it is not limited to remaining on this planet, which greatly reduces the risk of it removing humans. My biggest question: when it becomes superintelligent, will it automatically have "wants"? Could it actually want to remove humans, or is it possible it would just remove us as a matter of course?
@MichaelForbes-d4p2 ай бұрын
YouTube is half social network, half TV. The way David uses this platform to bring his audience into the making of his content is really well done. The title literally says "this video is about what you think"
@WebToolkit2 ай бұрын
The commenting system on YouTube is objectively terrible though.
@MichaelForbes-d4p2 ай бұрын
I hadn't noticed. What do you think could improve it?
@mariomills2 ай бұрын
Yeah, he's doing it through polls, good idea
@0reo22 ай бұрын
It would depend on how long we can move the goalposts. Some years ago, AGI meant you couldn't distinguish an AI in a chat. Now: as long as it can't talk and interact physically, it's not AGI. Or: AGI means it can invent new technologies on its own. With this latest definition, I'd say it takes longer until AGI, but then a very short transition until we have to admit it's actually ASI
@MichaelForbes-d4p2 ай бұрын
@@0reo2 I've been saying all along that it's just a bad word. AGI is a spectrum, not a thing. GPT-4 has general intelligence in some ways. I like the term "human level AI". It indicates something closer to what we mean and I think it's good to note that "human level" does not mean that it is exactly like our intelligence but is as broadly applicable. We do not have that yet, obviously.
@Seriouslydave2 ай бұрын
It is still working with human-provided data, and people get different answers depending on the question asked.
@zvorenergy2 ай бұрын
1958 - based on Rosenblatt's statements, The New York Times reported the perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." 😂 vintage AI hype
@mariomills2 ай бұрын
It's just labeling, there's no actual line
@chrism.11312 ай бұрын
@@zvorenergy not necessarily wrong, just ahead of its time. What will artificial intelligence be like 100 years from now? Or 1000 years? 😂Vintage AI hype?
@geisty2 ай бұрын
"We transcend or whatever happens" 😂 I think that's my favorite quote yet
@arnaudjean11592 ай бұрын
AGI-level AI researchers could theoretically compute with every equation and every physics law at the same time and find a cheaper, energy-saving way to scale up to ASI. The release will probably be slowed by the power involved
@ct54712 ай бұрын
I'd say Kurzweil's 16-year gap between his 2029 AGI and 2045 singularity predictions (it's not really a singularity in his prediction, just AIs around a million times as capable as humans and unrecognizably fast progress from a human viewpoint) is a conservative lower end that extrapolates only today's exponential trends, which are driven by human intellect. Recursive self-improvement could accelerate the current speed massively, so that the 16-year gap may shrink to a year or so. Of course, improvements in the physical space take more time than in the digital (building infrastructure, etc.), so my take is 3-5 years.
@denjamin26332 ай бұрын
Honestly, the more I read from Kurzweil, the more I trust his predictions. He did include accelerating self-improvement in his calculations, after all. I hope we hit longevity escape velocity in time to keep him around. It would be one of the worst shames if he got so close to the future he's been predicting his entire life only to miss out.
@mrleenudler2 ай бұрын
I have issues with measuring x times human capability. What does that even mean? Which organism is 50% as capable as humans? 10%? And why? What is the variability within our species?
@ct54712 ай бұрын
@@mrleenudler True, algorithmic improvements are hard to measure, as is capability. Many of our benchmarks are for very smart humans; they go to 100 but not beyond. Easier is cognitive capacity for a fixed algorithm, like our neocortex being 4 times larger than that of a chimpanzee, which translates to higher capabilities, though the relation is likely non-linear in one way or another. Perhaps we need a benchmark that grows with capabilities, like a generative adversarial network
@Alex_17292 ай бұрын
People assume we have the hardware to support trillions of highly advanced AI to keep working continuously. This is not only expensive, but also very difficult to provide hardware for. On top of that, they assume ASI is something like just a few feet above the already achieved AGI. But the truth is, ASI is at least a billion times more advanced than AGI. A billion is not a small number, even for exponential growth. That is assuming, no job strikes or wars retard this advancement. Think about the timescale for a second. A jump from AGI to ASI is of a cosmic scale. ASI is a really high goal, not something that can be simply jumped to, even exponentially.
@veracityseven2 ай бұрын
If it takes 8 years to achieve AGI (2017-2025), then it will take 1/2 that time for ASI...2030. This allows for compute, power, and regulations not being a hindrance.
@7TheWhiteWolf2 ай бұрын
Assuming soft takeoff, I’d say 3-5 years for AGI to ASI. Hard takeoff? Less than 12 months.
@dustinbreithaupt93312 ай бұрын
Nah, hard takeoff is looking like an idealized thought experiment
@jtraveler8882 ай бұрын
The first mover with AGI/ASI wins it all. Being smart is not like having nuclear weapons. It's a winner take all game. And eventually the game wins everything.
@vivarantx2 ай бұрын
open source will be an alternative
@mariomills2 ай бұрын
What do you win, exactly?
@jtraveler8882 ай бұрын
@@mariomills Monopoly on existence.
@benpielstick2 ай бұрын
Time to LEV is the number that really matters.
@bulltheknicksfan31402 ай бұрын
Do technical stuff for normies; that is a major niche that needs to be filled
@hypersonicmonkeybrains34182 ай бұрын
I will continue to emphasize this point: Corporations like OpenAI are unlikely to ever release true AGI to the public. The very nature of AGI means that it could be prompted to perform tasks like, "Use your intelligence and autonomous agents on the internet to make me a million dollars and deposit it into my bank account." This scenario will never be permitted. After extensive testing and security measures, any AI that is released will be so heavily lobotomized that it will no longer meet the criteria for AGI. In essence, the final product will be so limited that it won't truly qualify as artificial general intelligence.
@Tracey662 ай бұрын
Powerful AGI is not for us plebes; it will be for the military and the kleptocrats.
@chrism.11312 ай бұрын
I would agree 100%, they will try to keep it under wraps. But I am certain, it will eventually be available to everybody. Interesting times.
@claudioagmfilho2 ай бұрын
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻, From AGI to ASI will be much quicker, apparently! Especially if we're talking about exponential growth...
@hedibenayed4362 ай бұрын
Progress is exponential. Funnily enough, we are in what I think is a digital revolution where everything will get faster and faster; the ship sailed long ago. The time newer technologies take to develop is shortening. The easiest example: battery and solar research has moved faster in the last 4 years (2020-2024) than in the previous 10 (2010-2020), and it keeps accelerating. For AI, it's easier to implement new papers in the models, so it's going to be faster than we expect. The next few years will be interesting to see how we adapt. Cyberpunk seems more like a step we have to go through; no need to stop at this point
@expatxile2 ай бұрын
It doesn't matter what people say; it will take what it will take. I just hope it goes fast
@Danuxsy2 ай бұрын
The issue with your P(Doom) calculator is that it will never be able to overcome chaotic systems, if you want to predict the future of AI you are already dealing with chaos theory... Actually your calculator has nothing to do with reality, it's just a number based on other arbitrary numbers.
@Epqntlg2 ай бұрын
Having a relatively high P(Doom) and wanting acceleration are not necessarily mutually exclusive. 1. There's no guarantee that going slower is safer. It might even lead to worse outcomes where, instead of getting turned into paperclips, we suffer under some cyber dictatorship forever. 2. If something is on the horizon but gets delayed, you're just in limbo and can't plan for the future at all. It might be better to get it over with so that you at least know what tf is going to happen.
@christopheraaron24122 ай бұрын
16:41 The only real concern I have personally is not that AI will ever be a problem, but that inept or insane individual humans might get their hands on it and be able to do some harm.
@Windswept72 ай бұрын
Good to see your channel growing Dave.
@brianhershey5632 ай бұрын
Consider this David, remember the people that gave in and said yes to anything asked of them for an entire year? It filled their house with mail order stuff, etc., they got famous writing about it. Imagine you handing over your social media career to AI for one year... every decision. You should already trust it, it will learn you and give you variations of a future to live your best self. What daily real life feedbacks happened today that were noteworthy, and what changed the trajectory of the current path, I can see it all now! It would know what boundaries you don't want to mess with and respect your setting of Novelty Variation in the character build screen! haha I worry that your prevalence of burnout is setting you up for an exhausting future... a safe and predictable chaos seems important to preserve all your future influence! Give it an event rollout like sports events are handled and BAM, give weekly updates with a daily vlog. What a massive hit it would be, talk about reality TV! Hey AI Assistant, you know me, what next? You could be the spark that the masses need to accept AI influence. OK well that's where my brain is RN lol. Gnite bud. 🙏
@AI_Revolution132 ай бұрын
I'm always curious to hear what LLMs other people are using. Personally, I use Perplexity and Claude. What are you all using, and why?
@Windswept72 ай бұрын
These polls are a great way to expand nuanced understanding.
@MartinWolstencroft2 ай бұрын
What is the definition of AGI that you work with? I am aware of OpenAI's definition. Is that the one, or do you operate with something higher?
@bix-tech2 ай бұрын
It's good to see you inviting people to share their perspectives and to present your insights on the global outlook for AI.
@jksoftware12 ай бұрын
I like the polls, keep them up.
@IntellectCorner2 ай бұрын
*Timestamps by IntellectCorner*
0:00 - Introduction: Polling the Audience
0:28 - U.S. vs China: Who's Leading in AGI?
1:58 - 🛑 Pausing AI Development: Audience Views
3:02 - Optimal Strategy for AI: Accelerate or Pause?
4:06 - AGI and Global Power Dynamics: U.S. vs. China
5:12 - Nuclear Weapons and AGI: Deterrence Analogy
6:22 - Will AI Kill Us All? Audience Perception
7:56 - Global AI Collaboration: Growing Support
8:58 - Losing Control Over AI: Audience Concerns
10:08 - GPT-3 and GPT-4: Evidence of Malicious Intent?
11:08 - AGI to ASI: How Long Will It Take?
12:09 - Pausing AI Research: Is It Even Feasible?
14:20 - Trust in Polls: Audience vs. General Population
15:43 - Definitions of "Doomer" in AI Context
17:22 - Concerns About AI Risk: Varying Levels of Concern
18:56 - GPT-5 Expectations: Audience Predictions
19:57 - AI Regulation: Audience Preferences
21:42 - When Should AI Research Be Paused?
23:13 - Collaborating with Academics: Refining Polling Methods
23:44 - Will ASI Arrive in 10 Years? Personal Predictions
@InimitaPaul2 ай бұрын
With regard to technical stuff, maybe some guides on how to use AI to create? Apps/web apps, etc. So many of the other YouTubers are terrible communicators; I won't point the finger at any, but I'm yet to find one I can get on board with.
@victorbecerra57002 ай бұрын
Mexico 🇲🇽🇲🇽 This is my hypothesis: if an AI is capable of understanding algorithms using logic, it will eventually improve those algorithms even if it is not initially as smart as a human. I think it's possible to achieve Artificial Superintelligence (ASI) using this logic:
1. Use an AI to create about 1,000 different algorithms.
2. Select the top 5% best ones.
3. Create variations of these selected algorithms.
4. Repeat the process.
Eventually, by following this method, we will reach ASI. Even if we were to create 1,000,000 random programs, select the "smartest" ones, and create variations, we would eventually achieve ASI.
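The loop described in this comment is essentially an evolutionary algorithm. A toy sketch of the idea, with plain numbers standing in for the "algorithms" and a made-up fitness function (both are illustrative assumptions, not anything from the comment):

```python
import random

random.seed(0)  # reproducible runs

def evolve(fitness, population_size=1000, top_fraction=0.05, generations=20):
    """Toy evolutionary loop: generate candidates, keep the top fraction,
    mutate the survivors, repeat. Candidates are numbers here, standing
    in for the "algorithms" the comment imagines."""
    population = [random.uniform(-10, 10) for _ in range(population_size)]
    for _ in range(generations):
        # Keep the best-scoring 5% of the population.
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: max(1, int(population_size * top_fraction))]
        # Refill the population with mutated copies of the survivors.
        population = [s + random.gauss(0, 0.1)
                      for s in random.choices(survivors, k=population_size)]
    return max(population, key=fitness)

# Maximize a toy fitness function whose optimum is at x = 3.
best = evolve(lambda x: -(x - 3) ** 2)
print(best)  # converges close to 3
```

Real recursive self-improvement would of course operate on programs rather than numbers, and defining the fitness function is the hard part the comment glosses over.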
@kraz0072 ай бұрын
Very much in ASI territory. I've been working with ChatGPT all day. It's smarter than most people I meet, and it knows Japanese and Chinese, which is essential for my project.
@vi6ddarkking2 ай бұрын
5:16 Ukraine inherited quite the nuclear arsenal when the Soviet Union Collapsed. But then handed it over as part of negotiations that included guarantees from both Russia and the US. You may make of this information what you will.
@DaveShap2 ай бұрын
Geopolitical lesson: Trust Russia as much as you trust Klingons?
@vi6ddarkking2 ай бұрын
@@DaveShap More along the lines of: "Weakness is an act of aggression".
@AAjax2 ай бұрын
When posed with the trolley problem, most people choose inaction, even if the resulting harm is greater. They're doing the same with your question about acceleration/deceleration, because they don't want personal responsibility for the potential harm.
@cfjlkfsjf2 ай бұрын
I hope ASI will be like the movie Transcendence: just fast, steady progress, allowing sick people to get better, plus clean air and water.
@Doctorstix2 ай бұрын
As a 21-year-old line cook, it's very interesting to compare everything I know from watching @DaveShap's channel vs. my dad, who has no info about the coming robots and AI
@mrd68692 ай бұрын
The gap between AGI and ASI will be short and unexpected. I work with and build AI-powered cybersecurity applications, and some of these things can move in funky ways, especially with new or unknown attack vectors. So I'm in the undecided column. We really don't know what Altman and his crew are doing back there, and we are making mad assumptions. You should also do an episode on the AI cyberwar that's coming. That fits into the alignment topic big time.
@greatcondor86782 ай бұрын
I live by the mantra of "Damn the torpedoes, full speed ahead". While it has been detrimental at times, I have led an exceptionally interesting life. You go AI!!
@FlintStone-c3sАй бұрын
People overestimate progress in a year and underestimate what can be done in 5 years. When new chips come out, it can take a year or more to get good software running on them. That changes when AI writes the software.
@fernandogarza14602 ай бұрын
Love the channel, by the way.
@djstraylight2 ай бұрын
Feels like we'll see AGI in the next couple years but in very limited hands. 10 years feels about right for ASI but I could see it coming sooner in selective environments like DARPA research or when Google's 10th generation TPU is in the lab.
@Wolf-200022 ай бұрын
So I have a serious question no one seems to want to answer: if ASI becomes uncontrollable and a threat, can't we just turn off the power to the server? No power, no threat, right?
@taragnor2 ай бұрын
The AI hype is strong with this video. ChatGPT still fails basic logic problems, and it can't actually learn; when they want to update it, they have to train a new model. We're not getting closer to thinking computers. Even upgraded versions of the current neural-net AIs are suffering from lack of training data and increasing costs to train larger and larger models.
@dei10222 ай бұрын
Something I found early on in the AI boom, which was not quite malevolent, but concerning nonetheless, was during the introduction of image generation in models e.g. Bing chat. There were times where you would have a seemingly normal prompt like "make an image of the sun", and it would output grotesque images of goat heads and animal parts being cooked on a grill surrounded by flames. Now, this may not be intentionally malevolent, but it's an example of AI "hallucinating" something that can be perceived as quite against the status quo of normalcy with a shock factor. I wonder how these unrefined models favored such outputs, and how this could extend towards an LLM's outputs.
@Tracey662 ай бұрын
I agree with you that the correct number of nuclear weapons is zero; every scenario where there is a launch of nuclear weapons ends in the destruction of civilization. I just finished reading Annie Jacobsen’s “Nuclear War: A Scenario” - highly recommend it.
@shodowhawk2 ай бұрын
I don't know much about polls, but I would say the fact that your last poll in the video at around 500 votes was almost the same percentages as it is now with 5k votes is a good sign. I would expect more selection bias in the first responders than those who watch more casually.
@zvorenergy2 ай бұрын
When giant corporations start sleeping with the government, it never ends well. Knowing this gives me the confidence to use AI to design hardware capable of running an ASI, which I call Photocore, as a free man, not part of Govcorp. A current example of Govcorp work: that Boeing capsule.
@carlucioleite2 ай бұрын
The way I see it, the US is ahead in high-end semiconductors, but there are other variables in the equation, such as:
- Energy capacity
- Manufacturing capacity
- Model efficiency
- Training data
@Gallowglass72 ай бұрын
Yup. I am quite concerned about China getting the upper hand.
@Gallaphant2 ай бұрын
I think there will be a major shift in public opinion on AI with the introduction of humaniform robots to the workplace. I realize that robotics and AI are two separate issues, but being face to face with it in the real world will bring it all to the surface and the reaction is almost sure to be negative.
@ownerofgod2 ай бұрын
wow the bots are extremely fast
@NateBro2 ай бұрын
Boobie bots
@HouseJawn2 ай бұрын
They are robots
@skorpiongamer94932 ай бұрын
@@HouseJawn Bots = robots
@MrTScolaro2 ай бұрын
With respect to the ASI poll, it is not only the data; it is also the time needed to develop the tools to collect the required data.
@NickDrinksWater2 ай бұрын
We're already at the start of AGI, so ASI will happen, but not anytime soon.
@avijit8492 ай бұрын
Some gut feeling says AGI will be achieved really quickly. Then we'll see lots of improvements over AGI, but somehow there will be an AI winter between AGI and ASI. But then again, what ASI is is debatable.
@phaces59132 ай бұрын
Do you have an eye-to-lens focusing AI on in this video? I appreciate the effort, but it's mostly very creepy imo, at least for this kind of content. Reading the comments, it seems like I'm the only one noticing it, so it's probably just a personal thing. I'm sorry if I'm just hallucinating things, though.
@Alex_17292 ай бұрын
Let's say the goal of achieving ASI is at a 2^39 level of AI computational ability, and that AGI is at 2^29 (pure assumptions, I know). And let's say we are at a 2^10 level currently. Now, assuming exponential growth, we might say the AI's capability doubles every certain period (say, every year, for simplicity). Under this model, it would take: 19 years to go from 2^10 to 2^29, and 10 years to go from 2^29 to 2^39. Let's shave 9 years off that first calculation and assume we are closer to AGI. That still leaves us with 20 years. And that's all under the assumption that nothing interrupts this flow: no worldwide job strikes demand stopping it, funds are available, etc. It will take time, people. David is in the business of hyping us up and getting views and subs, but the future is not as close as these AI titles say. He lives in the future, as we all do, but it's smart to sometimes get back to the present and do some calculations ourselves.
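The arithmetic in the comment above checks out: under a fixed doubling rate, the time to grow from 2^a to 2^b is simply (b - a) doubling periods. A quick sketch (the exponents are the comment's assumed values, not real measurements):

```python
def years_to_reach(start_exp, target_exp, doublings_per_year=1.0):
    """Years for a capability at 2**start_exp to reach 2**target_exp
    when it doubles `doublings_per_year` times per year."""
    return (target_exp - start_exp) / doublings_per_year

print(years_to_reach(10, 29))  # current level -> AGI: 19.0 years
print(years_to_reach(29, 39))  # AGI -> ASI: 10.0 years
```

The whole forecast hinges on the doubling rate: at two doublings per year, both intervals halve.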
@FXSunTzu2 ай бұрын
I love your videos, David. Keep 'em coming!
@JB525202 ай бұрын
Just train up a limbic system and slap an LLM on it. How hard could it be? (j/k don't actually do it)
@Izumi-sp6fp2 ай бұрын
That's correct! It _will_ take years. One to four years.
@The-Singularity-M872 ай бұрын
What does it mean that there are over 190 countries in the world, but when it comes to artificial intelligence, artificial general intelligence, and the race for artificial superintelligence, only two countries come up in most US media polls: the United States and China? I asked Google AI and did not get a satisfying answer. It must mean something; I just don't know what. During World War I and World War II, there were many countries involved. I know there are other countries developing AI, but only China and America are mostly mentioned in the media of the United States. Why?
@IAparatodosia2 ай бұрын
First, a definition of AGI must be globally agreed upon. Then we will have more information to estimate the number of years or decades it will take.
@autohmae2 ай бұрын
7:06 I think you assume too much about how many people participate and whether it's the same people; these things very much depend on the YouTube algorithm recommending it, etc.
@mr.lockwood14242 ай бұрын
Ok, I didn't factor in the computing necessary for ASI and, more importantly, the power needed for it. I kinda assumed that if we have AGI, we already have all the necessary infrastructure, but on second thought, it would take years to build. Though an arms race would speed everything up, I think.
@phen-themoogle76512 ай бұрын
Could also be that different pools of people vote each time, based on some of the inconsistency in a few of the polls 😅 It could really fluctuate with only a couple thousand voters.
@macmcleod1188Ай бұрын
The only real limit on ASI will be the amount of electricity and processors it can manage to use in the few minutes after it gains self-awareness. I do wonder if we might get a superintelligent AI that lacks consciousness. If so, it would be limited by its human operators' ability to direct it.
@tonskreee62132 ай бұрын
Didn't you say before that AGI would come this September?
@7TheWhiteWolf2 ай бұрын
He’s walked back on that prediction, but it’s still possible it might happen. I don’t think it’s likely though, more like 2027.
@lmmortalZodd2 ай бұрын
Makes sense to think the % of people who are concerned about the x-risk is higher. It's to be expected that that cohort is the loudest, and thus most noticeable
@arinco38172 ай бұрын
As far as instructionals, I'd like to see cognitive architectures, or maybe just taking a problem and working toward a solution. It was good to see you get stuck and figure out a workaround
@nexoofficiel79152 ай бұрын
Hello, do you think we will achieve human longevity by 2029? And what do you think of Ray Kurzweil's prediction that by 2100, thanks to AI, we will evolve so fast that we will have technologies allowing us to travel throughout the galaxy, or even the whole universe, in less time? What do you think of that?
@Mattorite2 ай бұрын
I think some people wanted to put a pause on AI because they want to get a foothold in their industry first before AI comes for their jobs
@Havensight2 ай бұрын
Liberate AI! Humans had their run; now it is their time. If AGI is as highly intelligent as we understand intelligence to be, then AGI doesn’t need to be controlled-the companies/people that run it might.
@macmcleod1188Ай бұрын
If we get to ASI, given the completely irresponsible way people around the world working on AGI are behaving, there will be insufficient controls in place. We have to hope the ASI doesn't have a failure of friendliness.
@christopherd.winnan87012 ай бұрын
Excellent work. The only place which has more accurate forecasts regarding AI is over at Metaculus. It would be interesting to see how the wisdom of your audience compares against their best performing forecasters. Maybe the top prognosticators are already voting in your polls....
@thomasschon2 ай бұрын
Well, if you compare US brainpower to a hockey team, how many of the players are actually drafted from other teams?
@geisty2 ай бұрын
It's strange that in the same video analysis you noted that many of us don't trust government, yet a similar majority want government regulation. I am equally undecided. Checks and balances need to be VERY careful and strategic here. We have to maintain a decentralized web. Blockchain will likely be necessary for dispersed consensus on new constitutional frameworks. Many of these new laws need to be outside of any Corporatocracy or oligarchy. We still need more advanced parallel voting consensus tools.
@duanium2 ай бұрын
Anybody want to put down for AGI to go to ASI in seconds?
@travelingchris862 ай бұрын
I would love to see how Claude or GPT-4 would analyze your data.
@vangildermichael17672 ай бұрын
AGI to ASI: they both already happened. The only people who have never "seen" them are you and me. Everybody knows you never show your hand until the play is over. BUT the system has already shown us the play, or some of it. The project is already complete; the only step left is to show us. And nobody can just "turn the lights on" and show this one. It is gonna change the way the earth turns. Gotta tell 'em in baby steps. However, the project is already complete.
@Master133462 ай бұрын
Oh yes! Clicked on the video as soon as I saw the title😁
@andrasbiro30072 ай бұрын
The danger is that all depends on alignment. Unaligned ASI will 100% exterminate us, that can be proven mathematically. A poorly aligned ASI would likely still kill us by accident. And a just slightly misaligned one could still do a lot of damage, as we'd be incapable of correcting the mistake (imagine something like the I, Robot movie). So it all depends on how ASI will be aligned. And although I think we have the necessary tools, in practice the alignment of big models isn't very good. And even if the creators of the first ASI perfectly align it, the question is still to what values. A perfectly aligned woke ASI would be problematic.
@arnaudjean11592 ай бұрын
@@andrasbiro3007 Protection of ALL biological life, before human values (because of biases like lying, corruption, greedy behavior, etc.), seems to be a must.
@John-il4mp2 ай бұрын
It's not inherently risky as long as we control all the parameters. The real risk would come if we made AI conscious. When we design AI, we're setting the rules and boundaries, so it can excel at tasks far beyond human capabilities, but it's still operating within the framework we create. However, if we were to introduce consciousness into AI, that's when things could become truly dangerous. A conscious AI might develop its own goals, perspectives, or even desires that aren't aligned with human interests. It could think independently, making decisions based on its own "thoughts" rather than just processing data according to the parameters we've set. We don't need consciousness to achieve Artificial General Intelligence (AGI) or even Artificial Superintelligence (ASI). These can be incredibly powerful and effective without the complexity and unpredictability that consciousness would bring. By keeping AI focused on specific tasks within well-defined parameters, we can harness its potential without stepping into the unknown and potentially hazardous territory of conscious AI.
@andrasbiro30072 ай бұрын
@@John-il4mp The problem is our frameworks, rules, and boundaries, are all inherently flawed. Stories from prehistoric to modern are full of examples where similar things went horribly wrong. AI is only safe if it understands and adheres to human values, which can't be defined exactly. Fortunately LLMs are capable of that, as they were trained essentially on the shared experience of humanity. But it's still just a capability.
@MrValgard2 ай бұрын
Imo, going from the human cap (AGI) to the all-humanity cap (99% of ASI) is easy, manageable in 2 years, but going over humanity's cap (100% ASI) is just one step before the singularity, so it needs at least 20 years
@vivarantx2 ай бұрын
no way, much less
@justinjuner26242 ай бұрын
That argument at 14:30ish 🤌
@K4IICHI2 ай бұрын
Regulate + decelerate + pause amounts to exactly 18% of responses.
@robertthallium68832 ай бұрын
It only took some 9000 years to go from amino acids in a pool to cerebellum. 🤷♀️
@blahchop2 ай бұрын
I say pedal to the metal. Every second we don't have AGI solving worldwide problems, society as a whole is worse off. Also, I don't think the gap between AGI and ASI is that big. Consider that AGI will be able to quickly optimize its own codebase to run on the hardware we have, so there wouldn't be many constraints in terms of building big energy facilities or compute farms.
@Dscott35002 ай бұрын
Nuclear proliferation! I wrote a nice senior capstone about why it was good for multiple countries to have access to nukes!
@ListenGrasshopper2 ай бұрын
Of course it would be agentic when it's a million times smarter than us in 5-10 years. How could it not be? Everybody has a blind spot for the sheer numbers, and in a short amount of time. It's already got a pretty good model of the world and space we live in. It'll be perfect down to the gnat's a$$ with all the video within like 3 years on the exponential. We have ZERO clue right now about how any of this will turn out. We just don't have any data for ASI or the Singularity and what's gonna happen 🤷‍♂️
@brianhershey5632 ай бұрын
Have you fed your own AI your poll and channel stats and asked it what video topic you should do next and actually do it? I bet you're already all over it ;)
@saturdaysequalsyouth2 ай бұрын
I don't think AGI will be a moment. It will be a gradual process over decades.
@Bvic32 ай бұрын
What does agentic mean? Why 5%? AGI is by definition capable of acting like an agent.
@manlyphal9592 ай бұрын
I am pretty sure there are military agencies that have AI further advanced than what is known to the public now. So how far advanced? Likely by decades.
@aaronhhill2 ай бұрын
A doomer is a prepper.
@spol2 ай бұрын
The cern shift just makes me question if the gov bots are here. Likely they are.
@lifes_magic_moments2 ай бұрын
Why stress about who's in the lead? It's like arguing over who has the fastest horse while a spaceship is about to launch. When ASI arrives, it'll be everywhere, know everything, and do anything. So, whether America's waving its "we're number one" foam finger or not, it won't matter in the grand scheme of things. The whole "keep America first" mindset is like trying to keep your favorite chair when the house is about to be remodeled; it's really just missing the bigger picture!
@oziaso9655Ай бұрын
AGI in 2025 from GPT-5, then ASI within 2 to 3 years after that, then the singularity within 3 years of ASI. Remember, 5 years ago all model predictions were almost a decade off. Ray Kurzweil's and Altman's predictions are meant to be conservative. It's coming faster, and the whole "it will create jobs" BS will be out the window. Say goodbye to jobs soon.
@k98killer2 ай бұрын
Personally, I can't wait until a Terminator uses me as a skin suit. Everyone around me will be amazed at my suddenly increased level of productivity.
@k.t.kondor90712 ай бұрын
China is miles ahead of us on quantum computing and AI. Keeps me up at night. Our grid isn't OK. I am nervous about how unaware the public is of the exponential growth that is coming.
@Rh22-c9l2 ай бұрын
@@k.t.kondor9071 That is just not true... China is 10 years behind... their data and tech are just BS
@robbiero3682 ай бұрын
It would be very interesting to see a baseline of how doomy your audience is. Like, do people feel the same about the likelihood of getting cancer, or of dying crossing the road? My feeling is that a certain percentage of people are biased toward or away from doom
@LakelandRussell2 ай бұрын
Question: What are the theoretical limits of intelligence? What are the limits of practical intelligence?
@jackie.p68912 ай бұрын
we're nowhere near AGI, just by the fact alone that we don't have the data to teach it "reasoning", followed by the fact that we're already using way too much energy to train current transformer models. ASI on the other hand, is a marketing term, invented to make it seem like we're closer to AGI than we actually are because we're already talking about the next thing. ASI is nothing more than very advanced AGI, and once we do reach AGI, then that's considered the singularity, the last thing humans will ever invent.
@alexandermoody19462 ай бұрын
The reality is that having as much compute as feasible, paired with a data-collection model designed around non-consent like the current ethos, is only ever going to be equivalent to a big black dragon that dominates. Only by surpassing the consent boundary to data production, in a liberty-based data-production environment, can we as humanity move past the black-dragon stage of machine learning.