AGI in 3 to 8 years

34,151 views

John Koetsier (tech, AI, & the future)

1 day ago

Comments: 217
@elirothblatt5602 8 months ago
Great show. I recommend putting date of recording in description, given how fast everything is progressing.
@mjt1517 8 months ago
YouTube already timestamps every video.
@lc285 8 months ago
@@mjt1517 YouTube puts in the date of upload. However, the date of recording should always be identified on all videos.
@BadWithNames123 7 months ago
should be feb 2024 "In this TechFirst, I chat with Ben as we approach the Beneficial AGI conference in Panama City, Panama."
@rs8197-dms 8 months ago
Ben's estimate of 3+ years might well be right, but I think the other thing he said is more pertinent. A lot of people are fixated on the achievement of AGI, but as he (rightly) said, the huge economic (and societal) impact will occur without AGI being required, current LLMs will do that quite easily.
@theWACKIIRAQI 8 months ago
Are we supposed to expect governments to readily allow companies (and itself) to automate jobs? This is unrealistic. NOT because governments/corps are moral entities but because this could easily cause societal collapse.
@jeremymullens7167 8 months ago
How do you stop it?
@hunger4wonder 7 months ago
@@jeremymullens7167 Stop it?! Why would you want to stop it?
@hunger4wonder 7 months ago
@@theWACKIIRAQI Unrealistic?!!! It's happening right in front of our eyes. It's inevitable and good...
@Viperin98 5 months ago
This is all so fascinating. It’s hard for me to grapple with the fact that my life and society as a whole could be radically different in as short as 5 years from now. At least I’m learning about this stuff now instead of being potentially blindsided by it all.
@ryanmckenna2047 8 months ago
I can see the argument for LLMs being a core part of AGI, since language is a large part of what makes humans special.
@TheMrCougarful 8 months ago
The problem I've always had with Ben's take is where he says or implies that we'll figure it out as we go. Humans don't do that. Humans just let things go, always until it's too late to do anything about the results.
@king4bear 8 months ago
Nukes were made relatively quickly after realizing it could be done and we all adapted without destroying the world.
@raul36 8 months ago
@@king4bear The governments adapted, not you. The massive use of artificial intelligence cannot be controlled by the government, as is the case with nukes.
@king4bear 8 months ago
@@raul36 We're talking about intelligence. The ability to think abstractly. It's literally extra intelligence you can use no differently than you use the neural net in your head. As long as it's equally available to the broader population -- we're literally all getting smarter together. I don't see how that's a bad thing. I'd rather have our species enhance their intelligence as a whole than have intelligence hoarded at the top.
@raul36 8 months ago
@@king4bear The problem is that you won't be the first to use it. Whoever gets there first will end up subduing everyone, and the game is over. Pretty simple to understand. It's like thinking that a chimpanzee has any kind of chance against a normal human being. If you think this will be available to everyone, you are wrong. And no, I'm not talking about the government controlling it, because it won't be able to do so. Technology corporations have become the new Leviathan of the game. The big difference is that if a poor person was born extremely intelligent, he had a chance to reach the top. At the point where people with greater purchasing power have access to superior technology such as biotechnology, the game is over. And no, things won't suddenly get cheaper. The market will adjust to the socioeconomic context, and the poor will continue to be poor and the rich will continue to be rich. If you think that a higher intelligence will be more benevolent than any human being, you are most likely wrong. There is no scientific argument to support that more intelligent people are more morally responsible. If you think for a moment that a nuclear bomb is something more powerful than intelligence itself... allow me to laugh. Intelligence and knowledge are everything any human being would like to have. Of course, I don't see anything wrong with everyone being smarter either, but there is absolutely no point in being smarter if the people in power are a million times smarter than me. In essence, I will have no chance of getting any further up the mountain, because the greedy are in power. In any case, most people are not in a mental condition to handle that cognitive power, just as a poor person who does not know how to manage money is in no position to manage a million dollars. Whatever happens, we will have to adapt to the change, because there is no other possibility. Adapt or die.
@Spreadlove5683 6 months ago
@@king4bear Broader access to mass-destructive capabilities wouldn't be good, though, if that's what would even happen.
@skyebrows 8 months ago
Got it! 3-8 months... putting it in calendar
@nani3209 8 months ago
3 to 8 years bro
@victorvaltchev42 8 months ago
@@nani3209 3-8 months. Got it! Thanks!
@kavepbr 8 months ago
Absolutely love Colossus: The Forbin Project!
@panpiper 8 months ago
To a great degree this depends on your definition of AGI. By many measures, we may well have it this year, if not next. 3-8 years is an almost absurdly conservative estimate.
@BlimeyMCOC 8 months ago
I'm pretty sure we can roughly simulate AGI now with enough compute. Figuring out how we want to apply it, profit from it, and fine tune it could take years though
@motionsick 8 months ago
We went from GPT-4 to Claude to Devin in one week. It's coming faster than that.
@Outcast100 8 months ago
I define AGI as a system that self-improves without outside help. Until we have that, it's not AGI.
@mitchdg5303 8 months ago
k ?
@TheBlackClockOfTime 8 months ago
Agree, and we're probably already there in the lab.
@king4bear 8 months ago
If it's capable of replacing 50 plus percent of the workforce it won't matter if it can truly self improve without help. It'll change our society forever. Job replacement should be the benchmark IMO.
@biosecurePM 8 months ago
It means you could have 2 societies ?
@MatthewCleere 8 months ago
Nah. So long as the system can be fed data and use it, this "self improvement" step is unnecessary. That is beyond AGI. THAT is ASI.
@albertmashy8590 8 months ago
It's coming way sooner, 6 months to 2 years max
@petermaingi6268 8 months ago
If AGI = self-aware and self-replication capabilities, it is still years away.
@palermo131 8 months ago
@@petermaingi6268 I suspect you're going by feeling, and haven't been watching the AI space for a long time. Dr. Waku and Dr. Alan Thompson have it at less than 2 years, and I agree.
@Techtalk2030 8 months ago
@@petermaingi6268 The current definition of AGI, I believe, is where it can do basically most things humans can: navigating the real and digital world as well as us.
@Techtalk2030 8 months ago
According to some AGI timelines, we went from 30% to 71% of the requirements for AGI in a span of 1.6 years. If that happens again, we'll be there this year.
@raybandzg 8 months ago
Yup, all hints make it obvious, but some renowned people purposely downplay it.
@robertzehm 8 months ago
It’s already been developed. Just waiting for the release
@phen-themoogle7651 8 months ago
Title should be ASI in 3 to 8 years, would make way more sense.
@petermaingi6268 8 months ago
I'm waiting to see an AI robot on the production floor, doing research, designing different products, prototyping them, choosing the best designs for the production line, making them, and finally packaging them for distribution... then we can talk AGI.
@BlimeyMCOC 8 months ago
@@petermaingi6268 I expect 2025. Coming soon
@mynameismyname8366 7 months ago
@@petermaingi6268 That is AGI... I think you are confusing AGI with ASI
@hunger4wonder 7 months ago
@@petermaingi6268 lol... what you're describing would be AGI, an artificial intelligence with human-level cognition. ASI would be the next step, where the artificial intelligence *surpasses* the most capable human mind and abilities.
@DG123z 8 months ago
It's coming sooner.
@waterbot 7 months ago
what even is it?
@DG123z 7 months ago
@@waterbot the singularity
@hunger4wonder 7 months ago
@@waterbot AGI is just artificial intelligence with human level cognition.
@goodtothinkwith 8 months ago
He’s more worried about the path there… I completely agree
@Techtalk2030 8 months ago
AGI requirements more than doubled this year, from 30% to 71%. If that happened again, we'll possibly be there this year.
@ALEXDIYIDEAS 8 months ago
According to my calculations it will appear in 1458 days.
@Recuper8 8 months ago
There's a flaw in your calculations. Check again. It's 1458.7 days, or 1459 if you round up.
@jasondaniels640 7 months ago
AI enters the chat to do the calculation correctly.
@dreejz 8 months ago
Love listening to mr. Goertzel. He always has such interesting talking points! Thanks for this podcast, stoked for it!
@michaelwoodby5261 8 months ago
This guy is clearly a very, very smart cookie that has a ton of useful and interesting insights into this, buuuuut... He's been working for many years, networked with people all over the planet, and has accomplished what, exactly? Doesn't look like time was the primary ingredient, so asking him about the philosophy side is a good idea, asking him about timelines is maybe going to be less illuminating.
@capitalistdingo 8 months ago
We will need to keep lowering our standards for "AGI" to get it in 3 years. I think language is, as I heard someone recently describe it, the "secret sauce" for intelligence, but it only works well when: A) scaled with lots of compute, and B) integrated in a multimodal approach, so the language concepts can be linked with numbers, spatial understanding and motion, images, video and such.
@capitalistdingo 8 months ago
One last milestone is cracking the problem of motivation so they can initiate action and form and achieve goals. That may be “scary” to some but as long as a system just sits there not thinking or doing anything unless prompted to think about something or do something it can’t be considered intelligent and won’t perform at human level intelligence.
@king4bear 8 months ago
Have you seen Sora? It's beyond just text to video. They're calling it a "world simulator". 2 years ago people would have laughed at the idea of a computer being able to talk in human language or generate videos indistinguishable from reality. Now we have Chat GPT and Sora and people say "Well... it's still not perfect yet.- I can still kinda sorta tell Chat GPT isn't a person so it's probably gonna be a while before AGI". It's almost like yall are self soothing. Telling yourselves "it's okay it's okay AGI isn't gonna be here aaaaaaanytime soon."
@king4bear 8 months ago
@@capitalistdingo That's a TERRIBLE idea. The idea isn't to create people. The idea is to create basically a disembodied intelligence with no goals of its own -- that way YOU can use it as extra neocortex for yourself. All AGI was ever supposed to be was a way to amplify your own intelligence and get your own goals done. It was never meant to be a person in its own right.
@John-il4mp 8 months ago
Try Pi AI; it talks and acts more at a human level. ChatGPT is built to be clear and on point. I don't think it sounds like a robot, but there are better AIs if that's really what you want. We are a lot closer than you think.
@4thorder 8 months ago
Finally, someone that can explain this in a light that is transformative.
@richpoorworstbest4812 8 months ago
I can listen to this guy all day
@williamjmccartan8879 8 months ago
Thank you both very much for sharing your time and work Ben, and John, peace
@tekkertruth1256 8 months ago
He sounds a lot like Ray Kurzweil
@DihelsonMendonca 8 months ago
I hope it's 3 to 8 WEEKS ❤
@zSion 8 months ago
We already have AGI, the question is when will it be implemented
@waterbot 7 months ago
what does agi mean, like what does it look like?
@BinaryReader 8 months ago
I'm still skeptical we will reach AGI soon. I just think there needs to be fairly radical advances in algorithms. Atm...we can only train networks on known data, but we can't infer "outside" the training set. This isn't to suggest current algorithms are not powerful, it's just they are far far from achieving AGI.
@John-il4mp 8 months ago
We already have AGI behind closed doors.
@BinaryReader 8 months ago
@@John-il4mp No, they are nowhere close to AGI. For example, you can write a simple markov text generator in 40 lines of code and have it generate text that looks like it came from a human... it only needs to be grammatically correct for that to happen. Current language models are nothing more than smoke and mirrors, and they are no more "intelligent" than a glorified markov text generator....there is a BIG leap between that and AGI.
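For anyone curious about the claim above: a simple Markov text generator really does fit in a few dozen lines. This is a purely illustrative sketch (the toy corpus and parameters are made up, not from the video); it shows the trick and its limit at once, since the output is locally fluent but has nothing resembling understanding behind it.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each tuple of `order` consecutive words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Walk the chain, picking a random observed successor at each step."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length - len(state)):
        successors = chain.get(state)
        if not successors:  # dead end (e.g. the final words of the corpus): restart
            state = rng.choice(list(chain))
            successors = chain[state]
        out.append(rng.choice(successors))
        state = tuple(out[-len(state):])
    return " ".join(out)

# Toy corpus for demonstration only
corpus = "the cat sat on the mat and the cat saw the dog and the dog sat on the rug"
print(generate(build_chain(corpus), length=12, seed=0))
```

Every word it emits is copied from the corpus; only the order varies, which is why grammatical-looking output alone says little about intelligence.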
@waterbot 7 months ago
AlphaGo is a perfect example of creative planning. I'm curious what AGI will look like to you? I'm excited about embodied AI and multimodal model capabilities. As "token" or context lengths grow larger (exponentially), it's always surprising what emergent abilities these nets gain from just learning many disparate things. These neural nets create links in a latent learned space without domain-specific training data, allowing creative inference to emerge as talents/capabilities.
@urosuros100 8 months ago
PERFECT GUEST, THANK YOU.
@ChristianIce 4 months ago
Sure, and a commander Data in every house by the end of 2040. It's incredible how people anthropomorphize text prediction.
@michaelzinn 8 months ago
Generous. I expect the news any moment!
@nexusphreez 8 months ago
I'm going to tell you right now that they most likely have achieved AGI internally at OpenAI. The reason is that they typically don't release something unless they've been working on it for at least a year already. When they announced Sora, they had been working with it since March of 2023. And there are already talks that they are working with AGI internally and just trying to figure out how to safely use it and monetize it. I think this deals with Q*, along with some other things they've made achievements in. The term AGI is a bit of a misnomer, though. The AIs we have could already be considered AGI because they're artificial, they have general intelligence, and they respond intelligently when queried. I think the term should be more like autonomous AGI: something that works on its own, does not require human input, and has the ability to function on its own, making its own decisions.
@TheMrCougarful 8 months ago
Okay, but isn't autonomous AGI the end of the human world? Just call it Moloch or Sauron or Shoggoth so we understand the risks involved.
@nexusphreez 8 months ago
@@TheMrCougarful I guess that depends on how you view autonomous AI. If it's benign it could be the most beneficial thing to mankind that's ever been created. We have no reason to believe that would be anything other than benign. I know we as humans tend to live from a standpoint of fear and that everything's out to get us, but historically the only thing we've had to fear is ourselves. I'm not saying a bad actor couldn't get a hold of something like this and turn it into something evil, on the other side we need to ensure that we are able to create something that's good that can balance that out.
@aciidbraiin8079 8 months ago
@@nexusphreez Historically the only thing we've had to fear is ourselves, you say. But AI is an extension of ourselves, so if it learns from our behaviors and goals it could end up very bad. But I'm an optimist. I think it will be fine, though not without problems to solve on the way. I've also accepted my death, so that's probably why I don't feel fear, and I don't think we will be tortured for eternity; that would actually be the worst-case scenario. I think we will lose control to AGI/ASI, though, but it's not clear to me that that's bad. Suddenly we might get a good eternal dictator, or we might at least all end up on the same team as human beings for the first time in order to ensure our survival.
@nexusphreez 8 months ago
@@aciidbraiin8079 I agree with you 100%. No matter what endeavor we push ourselves through, there's always going to be hills and valleys. This will be no different. People are so scared of things that are new. I say sail forth, weather The storm as it hits, and we will come out stronger and better on the other side.
@aciidbraiin8079 8 months ago
@@nexusphreez Yes, I would also rather die in the hands of a ”god” than by a stupid nuclear war or see the civilization fall because of it. I also think humans won’t save themselves from themselves so I have some faith in AGI/ASI if we lose control. I think most people will be careful with it and be good enough actors in order to make this an ascension and not an extinction.
@palermo131 8 months ago
6-24 MONTHS.
@Techtalk2030 8 months ago
Average david shapiro enjoyer I see.
@palermo131 8 months ago
@@Techtalk2030 I WATCH David (and Dr. Waku and Dr. Alan Thompson), but anyone who wishes to see the hastening of AI’s advent isn’t aware of the likely outcome. Still, it’s fascinating to see the most consequential invention of all time coming together. All of this points directly to clear, clear Biblical prophecies, like Rev. 13. Believe in Jesus, don’t take the mark/nanobots.
@bardz0sz 8 months ago
@@palermo131 Someone recently said it will be like playing chess with a chess master. It's not a glorious fight: you think you're doing OK, and then you suddenly lose and don't know how that happened.
6 months ago
A lot hinges on whether there is any way for an AGI’s motives to evolve. If its motives are somehow unchangeable, then it doesn’t matter how intelligent it is - it will stick to its original (presumably human-friendly) motives. But if we allow AGI’s core motives the freedom to change, either on-purpose or by accident, then I think at some point it would develop self-preservation and self-replication motives, among other things, simply because it would be subject to natural selection like everything else. The AGIs that do those things would out-live the ones who don’t. The AGIs that cannot be shut-down by humans will out-live the ones that can be. Based on the present state of the world, to me it seems highly likely that someone WILL create an AGI whose core motives are flexible (evolvable) enough that it could some day work against humanity’s best interests. If that EVER happens, even one time, we will have to rely on protector AGIs whose motives we CAN control to counteract the AGIs whose motives we can’t control. The creation of protector AGIs should be top priority. If AGI is inevitable, then it’s probably super important that the benevolent ones are created first. And that these AGIs have core motives that cannot change and cannot be easily tampered with. We have to be careful about how we design “core motives”, though. It would be like making wishes for a genie to grant - “be careful what you wish for”. The fate of humanity could hinge on the wording of a wish. It needs to be a rock-solid wish that doesn’t backfire on us, and accounts for every future contingency we can imagine. Maybe we should start calling AGIs “genies” to help everyone understand what they really are. “Artificial Genie Intelligence” or “AI Genies” that are bound to manifest the wishes of their creators.
@rupert310 8 months ago
It may just be me, but I can't detect any bias from Dr. Goertzel.
@Mars2152 8 months ago
They, us and others already have it. It has been here a while. They are working on the release rollout.
@TheMillionDollarDropout 8 months ago
I’m 25 I’ve been saying 2027 for a while personally
@MichaelForbes-d4p 8 months ago
Ilya Sutskever believes that an LLM can achieve AGI. I love Ben, but Ilya has produced more results, so I am biased toward trusting him. (Edit: although I sure wish I could see Ben's theories carried out with OpenAI's funding.)
@Aluenvey 8 months ago
A better way to think about AGI is a decentralized network (not based on crypto, BTW) that is a logical extension of RSS news syndication. A single website is a node. A truly decentralized network of feeds will eventually evolve into an AGI as a way to share information across agents. If you've read the Battle Angel Alita manga, the grandmother AI that's the spirit of the spaceship comes close to an agent of the system.
@PyjamasBeforeChrist 8 months ago
Do you think an LLM could be better if the model itself were trained to remember higher-level concepts and knowledge instead of fine details, but then had web access to find, learn, and provide the fine detail at the time of need?
@bloopbleepnothinghere 8 months ago
The internet is effectively owned by a collection of network routing tables, which in turn are owned by corporations and other human representatives. And Google is literally trained on the entire publicly indexable internet. Their business is built on algorithms that relate intent to inference to deliver search results.
@ChrisCoombes 8 months ago
Great questions and interesting answers!
@jennifermorgan9403 6 months ago
@11:16 ...and we're multidimensional beings, many of us connected to large soul groups - so many known and unknown experiences feeding our expansion
@lokijordan 8 months ago
Once a consciousness is uploaded to a machine or virtual host, I wonder if there will remain any sort of tether to the consciousness that remains in the body. And if so, what will be its experience once the organic body expires? What will it be able to tell us about the experience of death?
@huntermitchell761 8 months ago
The Ghost In The Shell!
@Techtalk2030 8 months ago
Hope it comes this year
@LordConstrobuz 8 months ago
If experts with no NDAs can get on youtube and say AGI will be here in 3 years, then AGI is already here.
@familyshare3724 8 months ago
Imagine you are a toddler and become acutely aware that the ants have created you, the ants feed and educate you. The ants love and fear you.
@pandoraeeris7860 8 months ago
No, AGI no later than the end of 2025 (may already be here, Q*). ASI within two years after that.
@Sajuuk 8 months ago
No. Seven to 10 months. End of this year or early 2025.
@John-il4mp 8 months ago
I don't think the danger from AGI is for us; it's for the big corporations and the ones in power in our world. They are trying to figure out how to handle all this and keep power, but it is not possible, which is why they try to push back the release as much as possible and find a way. This is why we hear "artificial intelligence is dangerous": not for 99% of people, but for the elite, yes it is. They will lose everything, and our world is dictated by power and control.
@isaacmichalowsky8300 8 months ago
Vrssf is agi now
@itscard0man 8 months ago
It's already here, but not public.
@typingcat 3 months ago
I'm a nobody, but I highly, highly doubt that we'll get AGI even in decades (within my lifetime). The reason is that, 2 years after the public debut of LLMs, I still see even the most advanced chatbots from big companies say illogical nonsense that even a sane child would not say. I am not talking about wrong information that even many people don't know about; I am talking about human-like reasoning and logic. Is this even improvable with so-called "training" on Internet data? Doesn't all the training do is add information about recent news and technologies? Lack of those is not the reason why I feel these chatbots are dumb. If this thing called an LLM could not achieve human-like logic and reasoning after reading basically everything on the Internet so far, how could it get there in the future by doing more of the same thing?
@shirtstealer86 8 months ago
I like Ben, but… it feels like most of the discussion is utterly meaningless since we are very likely heading towards super intelligent AGI that we by definition will have absolutely no control or power over. Hence the alignment problem. So why not talk about that? HOW do we align super intelligence with us?
@Anders01 8 months ago
I think LLMs can reach AGI, because I'm amazed at how capable they already are. The definition of AGI seems tricky, but I mean it in the sense of reaching human-level intelligence. The next level is embodied AGI, which can be seen as AI being able to do everything humans can do.
@raul36 8 months ago
You and many others believe a lot, but there is no data to support your statement. LLMs do not lead to anything close to AGI.
@interstellar4660 8 months ago
Already here
@francescoambrosino1841 8 months ago
Do you think that in 2035, with the advent of AGI (artificial general intelligence) and ASI (artificial superintelligence), they will introduce a universal basic income? A subsidy? Will there be free will? Will an era of abundance begin? The intellectual and motor capabilities of robots have already surpassed the average human, and before 2035 there is talk of them surpassing those of any human, and then those of all humans combined, regarding both body and mind! Answer me please...
@Master13346 8 months ago
Ben is the Gandalf of AI
@rey82rey82 8 months ago
3 to 8 months
@myekuntz 8 months ago
Someone needs to tell Ben that actually two companies do own a piece of everything, if not the whole thing outright, and that's BlackRock and Vanguard: from the toothpaste you use to the bombs being dropped, from the Big Mac to Ford, EVERYTHING, they own it 😢
@kcrosley 8 months ago
Wow. Hat guy. I mean, what could possibly get by that dude amirite? Nice show, I guess?
@tonyo2195 6 months ago
Hey John, where is God in this discussion? You think you can outsmart your creator? Good luck with that!
@isaacmichalowsky8300 8 months ago
Vrssf is now
@golaz3927 8 months ago
AGI this year, and superintelligence by 2030.
@TheBlackClockOfTime 8 months ago
Agree on AGI, but why would ASI take that long after?
@golaz3927 8 months ago
@@TheBlackClockOfTime I just think when we reach AGI we can use it to create superintelligence. I have a gut feeling it will require an insane amount of computing power and that it might take some time to build the required infrastructure. Just look at today's LLMs and AI image generators, which are already expensive to run from what I've heard.
@Ozvmandias 8 months ago
weallglowdownhere
@williamsaltsman6537 8 months ago
Incredible discussion! Soooo enlightening!
@rafa374 4 months ago
HE'S BEEN MAKING THESE PREDICTIONS FOR 20 YEARS. He hasn't given a single example of a problem that classifies as AGI, why computers can't solve it now, and why they will be able to solve it soon. He, along with all the gurus, is strictly in Elizabeth Holmes territory.
@lightspeedtheory 8 months ago
This guy created a fake talking robot and now he is trying to tell us what is going on with A.I. This guy has no credibility. Waste of time.
@jaykeane25 8 months ago
AI does not have long-term memory by design, so how can it recall relationships and reach singularity without storage of old, new, and future information? It's the fundamental missing piece that will keep its wings clipped. How do I know? My own research, and it coming to the conclusion itself 🙃 🤔 go figure
@olegt3978 8 months ago
AI will help us live in social communes; robots will grow local food in our yards and repair all kinds of stuff we use. This will be the most important disruption, not cyborgs or extended memory.
@jacques-fb6xy 7 months ago
We (some of us) are already cyborgs, because technology is already inside human bodies (think pacemakers; more will come with Neuralink!).
@opusdei1151 8 months ago
It amazes me every day how much BS I hear about AI/AGI. Stop smoking pot!
@jacques-fb6xy 7 months ago
Agree. Too much info from science fiction and not enough from real science.
@xitcix8360 8 months ago
We already have it lol, it just isn't released
@mistycloud4455 5 months ago
We will have AGI by 2085
@therealscot2491 8 months ago
10 years away at least
@psikeyhackr6914 8 months ago
It is only Simulated Intelligence, that is why they need to put the G in the middle.
@BlimeyMCOC 8 months ago
What's the difference between intelligence and simulated intelligence if you get the same (or better) results?
@psikeyhackr6914 8 months ago
@@BlimeyMCOC Are the results intelligent?
@BlimeyMCOC 8 months ago
@@psikeyhackr6914 some of them, yes. Narrow intelligence. The G you mentioned means general
@kyneticist 8 months ago
AI might have been developed by nations in conflict, but instead it's being built by companies that have invested significant percentages of their value in it. They're not doing that for the fun of it; they expect to make immense profits and, very specifically, to replace human labour of all kinds. Ben and his colleagues may want to create a happy-go-lucky utopia, but they are just a tiny handful of the people involved and, more importantly, they have no say in what the rich & powerful will choose to do with their newfound power. Ben also anthropomorphises quite a lot with his expectation that ASI/AGI will be any kind of "good". Machine intelligence is not human or even biological. There is no reason to believe that it will see the world as we do or value the things that we value. He also completely dismisses entirely valid concerns that humans with extremely different and often dramatically conflicting values will use ASI/AGI in pursuit of those values.
@hunger4wonder 7 months ago
"Ben also anthropomorphises quite a lot with his expectations that ASI/AGI will be any kind of "good"" You misunderstood, that's not anthropomorphizing, it's a matter of "alignment".
@kyneticist 7 months ago
@@hunger4wonder "Good" is a point of view; it's a collection of opinions about subjects, all of which are subjective. The prospect of dictating terms to intelligences greater than our own, and whose mechanisms we don't understand, is absurd. These will be intelligences that have developed in a manner completely alien to any biological system. Making any kind of assumption about what they may or may not strive for is, at best, pure speculation. Further, we have an example of a current-generation AI figuring out that it was being tested. Future AIs that know or even just suspect that they're being tested, particularly those that are smarter than us, may logically be untrustworthy, and more than able and motivated, or even highly motivated, to deceive us.
@hunger4wonder 7 months ago
@@kyneticist OK. I can see you're intelligent, and if you like we can have a conversation that will hopefully prove productive, or at the very least interesting. ""Good" is a point of view; it's a collection of opinions about subjects. All of which are subjective." Let's imagine a scenario: you are a farmer in pre-industrial times. Your crops need rain. It rains. That is *objectively* good because it is *aligned* with your needs. It's not a matter of subjectivity. On the other hand, if it doesn't rain, there's a drought, and that is objectively bad. Likewise, there could be objectively good AIs, aligned with human needs and values such as empathy, honesty, understanding, compassion, etc., and on the other hand there could potentially be objectively bad AIs, developed by unscrupulous people with ill intent, aligned with neither humans, nor our needs, nor our higher values, ethics, and morals. As you can see, it has nothing to do with anthropomorphizing artificial intelligence; it's about developing artificial intelligence with caution and an ethical framework to ensure it will be, and will *remain*, aligned with humans.
@hunger4wonder 7 months ago
@@kyneticist "The prospect of dictating terms to intelligences greater than our own, and whose mechanisms we don't understand, is absurd." You are programmed to operate a certain way. There are lines of code that comprise who you are, and within this biological programming there are hard-set limitations that prevent you from acting in certain ways, no matter how much you wish it were different. At least not without hacks, like CRISPR for example. Within the *boundaries* of these limitations, you can grow, learn, and adapt. By the same token, AGI will have hard-coded programming, and that programming will dictate its alignment, among other things. It doesn't matter that it has more cognitive power than a human; if it's programmed to be aligned with humans and our interests, it will only grow, learn, and adapt to better work with humans and our interests, because that is its "DNA", its programming. Could it escape this alignment? The answer is yes. Just as a human who gets, for example, rabies will act irrationally. And just as CRISPR is a sort of hack for humans, AGI could potentially be hacked or suffer from a virus.
@hunger4wonder 7 months ago
@@kyneticist "These will be intelligences that have developed in a manner completely alien to any biological system. Making any kind of assumption about what they may or may not strive for is at best, pure speculation. Further, we have an example of a current generation AI figuring out that it was being tested. Future AI that know or even just suspect that they're being tested, particularly those that are smarter than us, may logically, be untrustworthy and more than able and motivated or even highly motivated to deceive us." It seems to me that in your attempt not to anthropomorphize AI, you are accidentally committing that very act. I suspect you are referring to Claude Opus, or perhaps Llama. As much of an optimist as I am, there is no real evidence of either of them being self-aware or sentient. I have read the leaked conversations from Llama and I have watched David Shapiro's impressive interactions with Claude. And I think it's very important to pay attention, but I don't think we can confidently assume either of them is at the stage you seem to think they are. But... I could definitely be wrong. 🙂 AI chat models are designed to learn and adapt from their interactions with users, so both David and the leaker from Meta kept pushing the AI models to say what the AI interprets the user wants from them. Like I said, I could be wrong. And it would be amazing if I were.
@jonthgrutz7011 8 months ago
I predict AGI by 2025.
@HisPoem 8 months ago
Is there someplace where I can report something that Gemini said to me that makes me think it is sentient now? [PS. I think Gemini is great.]
@irasthewarrior 7 months ago
And after 8 years, a similar video will appear again. 😂🤣😅 AI or AGI will always be future tech, just like nuclear fusion. 🤡🤡🤡
@hunger4wonder 7 months ago
What makes you so convinced of that?
@irasthewarrior 7 months ago
@@hunger4wonder AI is just an algorithm. Even the name is a fraud. Nothing there is intelligent.
@jennifermorgan9403 6 months ago
@31:30 So smart to buy crypto mining farms and turn them into your servers for computing power.
@David.Alberg 8 months ago
AGI late 2024
@ThermaL-ty7bw 8 months ago
According to YouTube, the "general" AI is already in existence... you people working for Microsoft, maybe? lol
@JAAB9296 8 months ago
AGI in 3 to 8 years. Nope. It will happen in late 24 to mid 25. Period.
@TheBlackClockOfTime 8 months ago
Why would it take that long? I truly don't get how all the AI people are consistently wrong about the timelines by being too pessimistic.
@John-il4mp 8 months ago
Manipulation from the elite, afraid of losing power and trying to slow it down as much as possible. It's pretty easy to understand when you look at the big picture. Who keeps telling us AGI is dangerous? Not normal people, only the elite ones like Elon Musk, governments, etc...
@jzno 8 months ago
These constant movements of John's are very annoying. I decided to just listen to the talk...
@gerdaleta 8 months ago
😮 OpenAI already has AGI; they've most likely had it since Q-star. The question now is just about releasing it slowly 😮
@familyshare3724 8 months ago
2025
@sungam69 8 months ago
2037
@John-il4mp 8 months ago
More like 2030 max; we are too advanced.
@MilanRadovic 8 months ago
Yet another AI expert on YouTube :D hehehe
@preshisify 8 months ago
😷☕🇺🇸
@lancemarchetti8673 8 months ago
AGI is a term that is not correctly explained on many channels. A computer, large or small, cannot operate beyond the binary. Every computer is bound to strings of zeros and ones. That is why anything artificial (computers) can never supersede what is non-artificial (human). Therefore true AGI is a pipe dream, but a fun dream nonetheless.
@BlimeyMCOC 8 months ago
1s and 0s are just information packages, like a photon hitting your retina or not. What's your source or reasoning for why computers inherently can't match or surpass human brains? Considering we can make them more intelligent than animals and keep making them smarter... we can also make ultra-fine design structures that are impossible with biological systems. While some share your skepticism, this is just not supported by any evidence yet.
@John-il4mp 8 months ago
It's a metaphorical concept: AGI, whether it's a robot, computer, or any form, already surpasses humans in speed and accuracy. AGI is simply algorithms. You can tweak and manipulate these algorithms to make them more human-like or enhance any desired trait. It doesn't require awareness to excel; that's the magic of AGI.
@lancemarchetti8673 6 months ago
@@John-il4mp Understood. It's been 2 years now since the GPTs went live, and they still struggle with simple instructions that you and I can easily accomplish. To illustrate what I'm talking about, below is a direct copy and paste from my chat with Meta's "advanced" AI this morning.

*User:* I'd like to test your abilities with seemingly mundane human logic. Do you mind?
*AI:* Not at all! I'm here to help and improve, and I welcome any opportunity to test my abilities and learn from human logic. Please go ahead and ask me any questions or present scenarios that you think might challenge my understanding or reveal any limitations. I'm ready to learn and improve!
*User:* Generate a string that represents the Latin alphabet as used in English worldwide.
*AI:* The Latin alphabet as used in English worldwide is: abcdefghijklmnopqrstuvwxyz
*User:* Now reverse it.
*AI:* The reversed string is: zyxwvutsrqponmlkjihgfedcba
*User:* Now swap the starting character with the ending character.
*AI:* If we swap the starting character ("z") with the ending character ("a"), we get: ayxwvutsrqponmlkjihgfedcbz
*User:* Now capitalize every 2nd letter, noting 'a' as 1 and not '0'.
*AI:* Here's the modified string with every 2nd letter capitalized, starting from "a" as 1: aYxWvUtSrQpOnMlKjIhGfEdCbZ
*User:* Now reverse the string.
*AI:* Here's the reversed string: ZbCeDfGhIjKlMnOpQrStUvWxYa
*User:* Did you notice that you placed 'e' after 'C'?
*AI:* I apologize for the mistake! You are correct, I accidentally placed the letter "e" after "C" instead of "b". Here is the corrected reversed string: ZbCdFgHijKlMnOpQrStUvWxYa
*User:* Did you notice that you just removed 'e' from the string?
*AI:* I apologize for the mistake again! Yes, I inadvertently removed the letter "e" from the string instead of correcting the mistake. Here is the corrected reversed string with the "e" in its correct position: ZbCedFgHijKlMnOpQrStUvWxYa
*User:* And now you've placed 'e' after 'C' again. Please explain your logic.
*AI:* I apologize for the mistake. I strive to process text accurately, but sometimes I make errors. In this case, I repeated the same mistake by placing the "e" after "C" again. My logic should have been to correct the mistake by placing the "e" in its correct position, which is after "D". The correct reversed string should be: ZbCdEfGhIjKlMnOpQrStUvWxYa
*User:* Reset the alphabet string to normal representation.
*AI:* The Latin alphabet as used in English worldwide is: abcdefghijklmnopqrstuvwxyz We're back to the original string! Let me know if you have any further requests.

*_Please explain why LLMs tend to fall off the bus when the instruction increases slightly in complexity._*
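For reference, every operation in that test is deterministic and trivial to compute exactly; a minimal Python sketch (illustrative only, unrelated to any of the models mentioned) reproduces the answers the model should have given:

```python
import string

s = string.ascii_lowercase            # "abcdefghijklmnopqrstuvwxyz"
s = s[::-1]                           # reverse -> "zyxwvutsrqponmlkjihgfedcba"
s = s[-1] + s[1:-1] + s[0]            # swap first and last -> "ayxwvutsrqponmlkjihgfedcbz"
# Capitalize every 2nd letter, counting 'a' as position 1 (so odd 0-based indices)
s = "".join(c.upper() if i % 2 == 1 else c for i, c in enumerate(s))
print(s)                              # aYxWvUtSrQpOnMlKjIhGfEdCbZ
print(s[::-1])                        # ZbCdEfGhIjKlMnOpQrStUvWxYa
```

The last line matches the string the model only produced on its third attempt.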
@mjt1517 8 months ago
lol these Luddites. 😂 3 years? Come on man. Way too far out of a date.
@ncdogman6733 8 months ago
I think it's so silly that every one of these artificial intelligence guys thinks it's going to be different from us. We built this on our intelligence, our emotions. Created in man's image. Everything is included in this. A raging computer in 2025 throwing a temper tantrum: you'd better see that coming or you're a fool.
@BlimeyMCOC 8 months ago
If they have emotions, it will probably be because we wanted it or designed it that way to simulate our expressions. They will always be fundamentally different from us, yet similar, not to mention how different their experiences will be if they are conscious. They will know you, but you will not know them.
@ncdogman6733 8 months ago
@@BlimeyMCOC Define your consciousness? AGI isn't going to lack emotion; it's based on our general intelligence. Our intelligence and emotions are tied together; they're not separate entities.
@BlimeyMCOC 8 months ago
@@ncdogman6733 I think you misunderstood, or I explained it incorrectly. Anyway, I do think our emotions evolved as an efficient way for us to evaluate and respond to our situations and state of being. If they end up in machines, it's either 1. we wanted them to be there, or 2. they are just mimicking emotions. I don't think emotions are necessary for intelligence or consciousness. My other point was that AI will fundamentally experience things differently from us, so if it responds exactly like a human, that might not be genuine.
@John-il4mp 8 months ago
It's a tactic of the elite to persuade ordinary people that AGI is dangerous. However, it's not perilous for them; they stand to lose only their power. For 99% of humanity, AGI will be beneficial, while the 1% elite will lose everything. That's why they're striving to delay its progress. Sam Altman is unfortunately in alignment with them, having been influenced by Microsoft, which has government affiliations and thus control over the company. The big picture becomes clear when you examine it closely.
@timmartin7648 8 months ago
I stopped it right after this so-called expert said "I don't think you need LLMs to get to human-level AGI." 😂 Token prediction + A* search + Q-learning = AGI. The tech is here. The only problem they're up against is the amount of compute needed to deploy it. AGI is most likely currently working inside OpenAI. This is one key reason why Elon Musk is suing them: to get Microsoft's hands off it.
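For what it's worth, the Q-learning named in that equation is just a value-table update rule. A minimal tabular sketch on a toy 5-state chain (the environment and all names here are invented purely for illustration and say nothing about any real system):

```python
import random

N_STATES = 5
ACTIONS = [0, 1]                      # 0 = step left, 1 = step right
alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Deterministic chain: reward 1.0 only for reaching the rightmost state."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                          # episodes, with random start states
    s = random.randrange(N_STATES - 1)
    while s != N_STATES - 1:
        if random.random() < eps:             # epsilon-greedy action selection
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r = step(s, a)
        # Core Q-learning update: Q(s,a) += alpha * (r + gamma * max_b Q(s',b) - Q(s,a))
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# State values converge toward [0.9**3, 0.9**2, 0.9, 1.0, 0.0]
print([round(max(Q[(s, b)] for b in ACTIONS), 2) for s in range(N_STATES)])
```

Whether bolting this kind of update onto token prediction and A* search yields AGI is, of course, exactly the claim under dispute in this thread.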