OpenAI's o3 and the "JAGGED FRONTIER" of AGI....

  55,579 views

Wes Roth

A day ago

Comments: 861
@WesRoth
@WesRoth 21 сағат бұрын
Is AI Smarter than the average human? kzbin.infoUgkxRiFr38ggNy5Ct7EF490tQynfIJeiHMty
@raccoon351
@raccoon351 20 сағат бұрын
I would argue yes, it is, and it has been for a while now. People mistake the "general" part of AGI as meaning generally good at everything, but it actually means generally good at many things and generally on the same level as a conscious human being. I don't think the average person will accept that we created AGI until it reaches superintelligence and has full embodiment, which we are quickly approaching anyway.
@Me__Myself__and__I
@Me__Myself__and__I 20 сағат бұрын
Anyone who says no doesn't know many humans and certainly not "average" ones. There are sadly a lot of people who can't do long division or calculate a percentage. Very smart people tend to hang out with very smart people, so I think they forget what "average" is.
@RoiHolden
@RoiHolden 19 сағат бұрын
It's better than you at spelling Visual
@jtjames79
@jtjames79 19 сағат бұрын
Pretty soon the machines will keep us in pods to count the number of Rs in strawberry.
@MattHabermehl
@MattHabermehl 20 сағат бұрын
"As intelligent as the average human" and "intelligent enough for the average job" are different distributions.
@xIcyStarzz-yz7my
@xIcyStarzz-yz7my 17 сағат бұрын
There's such a thing as overthinking it. The person cut out for marketing would fail at concrete work, and one earns A LOT more than the other.
@jeffsteyn7174
@jeffsteyn7174 17 сағат бұрын
All these benchmarks are rubbish. In what average job are you going to need to figure out how many r's are in strawberry? For example, gpt-4o-mini fails that test, but I can instruct it to find clauses in 100+ page contracts that are considered high risk to our business, and it does it accurately and consistently. So do you need AGI, or better yet, do you need someone to tell you it's AGI based on a benchmark that doesn't translate to performance at completing a business task?
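(That contract-review use case is straightforward to sketch with the OpenAI Python SDK; the system prompt and the risk criteria below are illustrative placeholders, not the commenter's actual setup.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def flag_high_risk_clauses(contract_text: str) -> str:
    """Ask gpt-4o-mini to quote and explain clauses that look high-risk."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("You are a contract reviewer. Quote any clauses that are "
                         "high risk for the business (e.g. unlimited liability, "
                         "auto-renewal, unilateral termination) and explain each "
                         "in one sentence.")},
            {"role": "user", "content": contract_text},
        ],
    )
    return response.choices[0].message.content
```

(For a 100+ page contract you would likely split the text into chunks and run each chunk through the same call.)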
@OBEYTHEPYRAMID
@OBEYTHEPYRAMID 15 сағат бұрын
None of these so-called "AIs" is even as intelligent as an oyster... and you guys are arguing about whether it's superintelligent or not... Hilarious.
@narrativeless404
@narrativeless404 15 сағат бұрын
​@@OBEYTHEPYRAMID Oyster has no brain Try harder lmao
@narrativeless404
@narrativeless404 15 сағат бұрын
​@@jeffsteyn7174 The point of AI isn't just for business purposes. It's to show a middle finger to nature and the religious pseudoscience once again. The goal here is to make AI think like us, and be equal or better than us, so we can then upgrade ourselves to that level and reach the singularity. It's WAY more ambitious than having simple slaves to work for us. It's just one of the steps to have non-sentient AGI that can adapt to various situations without training to work for us. We'll use it to exponentially accelerate innovation to the point when we can make AI that's literally just human, where it can have feelings, interests, and the conversation doesn't feel like you're just prompting it to respond instead of genuinely asking a question, where it can express it's own thoughts and reflect on itself and others, having full self-awareness.
@thematriarch-cyn
@thematriarch-cyn 19 сағат бұрын
"AI hit a wall" Yeah, and it's fucking climbing it
@Tayo39
@Tayo39 18 сағат бұрын
👋😂👍
@MrWolfy08
@MrWolfy08 15 сағат бұрын
@@Tayo39 I’m thinking AI has broken through another boundary and is now using walls as stepping stones.
@oxiigen
@oxiigen 14 сағат бұрын
Epic comment! :) Apropos of "Don't judge a fish by its ability to climb a tree."
@amotriuc
@amotriuc 10 сағат бұрын
We still have to see how good o3 is... we've had plenty of claims and faked presentations until now.
@I_am_who_I_am_who_I_am
@I_am_who_I_am_who_I_am 5 сағат бұрын
Yeah, the thing can't even count.
@NotBirds
@NotBirds 20 сағат бұрын
I guess this proves the "can't teach a fish to climb a tree" theory. Many of the things we think of as easy and take for granted are exceptionally difficult, while others seem to be trivial yet overstated. This is like comparing the progress of a civilization on another planet to our own. 8:00
@JohnSmith762A11B
@JohnSmith762A11B 14 сағат бұрын
Yes, it's a relief because so many of today's jobs rely on counting the 'R's in the word "strawberry". There will always be letter-counting jobs for humans into the distant future. Teach your children letter-counting if you want them to succeed in an AI world.
@Justashortcomment
@Justashortcomment 14 сағат бұрын
Indeed. Many of these failure cases can be overcome by giving these models access to tools, such as a simple Python environment.
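(As a concrete illustration of the tool-use point: the whole count-the-r's class of failures disappears once the model can call even a trivial helper like this; the function name is hypothetical.)

```python
def count_letter(word: str, letter: str) -> int:
    """Count occurrences of a letter, case-insensitively."""
    return word.lower().count(letter.lower())

# A model with a Python tool can call this instead of guessing from tokens,
# where individual letters are often invisible to it.
print(count_letter("strawberry", "r"))  # -> 3
```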
@apdurden
@apdurden 10 сағат бұрын
This is the way. People need to realize that you can have an intelligence that is better/smarter than humans at most things AND still not be perfect
@LightsOutRP
@LightsOutRP 8 сағат бұрын
@@Justashortcomment I was just thinking this before I read your comment. Humans are just arrogant and don't understand how much of our intelligence is based on the tools we have. If we threw away the internet and our cell phones, humans wouldn't be a crazy intelligence over other animals; it's the small things we're better at that allow us to take control.
@LightsOutRP
@LightsOutRP 8 сағат бұрын
And it took us 10,000s of years to develop the technology we have today. Imagine if the AI had the same tools.
@AAjax
@AAjax 20 сағат бұрын
With a jagged frontier, many (most?) people will reject an AI as AGI. Similarly, most people will reject someone with savant syndrome as being generally intelligent. The "general" carries with it the idea that the intelligence has to intellectually navigate within human society, and the weaker parts of the jagged frontier prevent that. But I also think AGI as a target is overrated. An intelligence that can solve cancer, stitch together science disciplines deeper and wider than any human could deal with, narrowly navigate the labour of most people... who cares if it's AGI or not.
@stormhartos6294
@stormhartos6294 21 сағат бұрын
Merry Christmas everyone!!!
@stevenwessel9641
@stevenwessel9641 21 сағат бұрын
Merry Chrimmy my g
@brentweir4651
@brentweir4651 20 сағат бұрын
Merry Christmas
@cyberdevil7518
@cyberdevil7518 20 сағат бұрын
Merry Christmas 🎅 🎉
@ZIxWicced
@ZIxWicced 20 сағат бұрын
Merry Christmas brah 🤘🏼
@logicone5667
@logicone5667 20 сағат бұрын
Merry Christmas all!
@randalx
@randalx 20 сағат бұрын
I use the analogy of comparing a plane to a bird. Sure birds are more agile and incredible fliers but it obviously does not mean planes are not useful. I'm sure AI will eventually overcome any limitations, but in the meantime we should focus on getting value from their strengths.
@Steve-fg8iq
@Steve-fg8iq 17 сағат бұрын
I agree 100%.
@brianWreaves
@brianWreaves 17 сағат бұрын
🏆 I might just recycle that analogy...
@OBEYTHEPYRAMID
@OBEYTHEPYRAMID 15 сағат бұрын
A plane can't fly alone. AI can't create, can't "think" outside of what it was trained on, and more importantly, AI is completely unreliable. Will it have its uses? Sure. But the question of whether it's as smart as a human or smarter than a human is not the same as asking if it can be useful. This is a big lie, and AI is a scam right now.
@Idle_Clock
@Idle_Clock 15 сағат бұрын
Assuming that the bird is supposed to be us, and the plane is the AI, then: the bird (us) has no need for the plane (AI), since we are more agile and we do a better job than the plane (AI) does. So why would the bird (us) board a plane (AI) when it has no real benefit for us?
@fl0837
@fl0837 14 сағат бұрын
Also planes are horrible for the environment
@EdwardAshdown
@EdwardAshdown 21 сағат бұрын
It's not human-like. People are looking for human-like intelligence which should not necessarily be synonymous with AGI
@assoldier13
@assoldier13 20 сағат бұрын
This!
@Me__Myself__and__I
@Me__Myself__and__I 20 сағат бұрын
Agreed. People keep expecting it to adhere to human characteristics. It's not human. It is literally alien.
@sunnohh
@sunnohh 19 сағат бұрын
@@Me__Myself__and__I It's not alien, it's just linear algebra and a line of best fit.
@Me__Myself__and__I
@Me__Myself__and__I 19 сағат бұрын
@sunnohh no
@Juttutin
@Juttutin 19 сағат бұрын
Not specifically human-like. A smart dog will decide on its own, mid-way through some task, that it needs more information or confirmation from its trainer before continuing. This is a kind of self-analytical cognition and reasoning we have yet to see. That doesn't mean an AI can't be super intelligent, even if it lacks the generality that we expect from our pets and our office juniors (but may tolerate from our pet office juniors, at least for a while).
@ChaseFreedomMusician
@ChaseFreedomMusician 19 сағат бұрын
I appreciate the follow-up video and the concept of the "Jagged Frontier." It’s a great way to illustrate how these models can excel in certain areas, like math, physics, or chemistry, while struggling with tasks that seem simple to humans, such as counting letters in a word or solving trick questions. But I want to emphasize that those kinds of gaps-the things these systems can’t do yet-aren’t part of my argument. I’m not concerned about whether they can solve the strawberry problem or handle “gotcha” questions. My point is about the system itself, not its current limitations. Those gaps don’t define whether or not it’s AGI.

To clarify, I’ve never said these models aren’t intelligent. They’re incredibly intelligent and represent a monumental achievement in AI. I also believe they’re AGI-adjacent in some ways-pushing the boundaries of computation and reasoning, and perhaps even laying the groundwork for creating AGI in the future. But the "G" in AGI-general-is what separates it from what we have now. AGI isn’t just about excelling at certain tasks or reasoning through complex problems with test-time computation. It’s about the ability to generalize, adapt, and learn dynamically across an unlimited range of tasks without retraining or manual intervention. That’s the line we haven’t crossed yet.

A lot of the discussion seems to focus on prompts and context windows as ways to enable these systems to "learn." But prompts are inherently limited. They can only hold a finite amount of information, and the model doesn’t retain or integrate what it learns in a way that updates its core understanding. This is where the distinction becomes clear: AGI wouldn’t just process a task within the confines of a context window-it would take insights from solving one problem and apply them broadly to others. It wouldn’t need to repeat the same compute-intensive reasoning process every time because it would have already evolved its understanding.

And to this point, there’s often confusion between caching and generalization. If a system becomes faster at answering the same question the second time, that’s not generalization-it’s caching. True generalization means understanding the core principles of a problem and applying those principles to entirely new situations.

This is why I don’t consider models like o3 to be AGI. They’re brilliant expert systems-highly intelligent and transformative in their capabilities. They’re solving monumental problems and paving the way for even greater advancements. Take the Riemann Hypothesis and satisfiability problems, for example. These are two of the most profound challenges in mathematics and computer science, and models like o3 could help us tackle them in ways we’ve never been able to before. Solving the Riemann Hypothesis could reshape our understanding of prime numbers, leading to breakthroughs in cryptography and computational efficiency. And advancing SAT problem-solving could revolutionize fields like logistics, healthcare, and AI optimization itself.

These achievements are extraordinary. They have the potential to solve problems that humanity has struggled with for centuries, and they might even help us create AGI someday. But solving specific problems-even incredibly important ones-doesn’t make a system AGI. AGI is about adaptability, generalization, and the ability to learn and grow autonomously. It’s not just a collection of tools or reasoning processes-it’s a fundamentally different kind of intelligence.

So, let me be clear: I think what’s happening here is amazing. Models like o3 are a testament to how far we’ve come, and they’re going to change the world in significant ways. But they’re still operating as highly advanced, specialized systems. They’re intelligent, yes. They’re transformative, absolutely. But they’re not AGI. That’s not a criticism-it’s just about being precise with our definitions.
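(The caching-versus-generalization distinction above is easy to see in code. A minimal, purely illustrative sketch - this is ordinary memoization, not anything o3 actually does:)

```python
import time
from functools import lru_cache

def slow_reasoning(question: str) -> str:
    time.sleep(1)  # stand-in for an expensive test-time-compute pass
    return f"answer to {question!r}"

@lru_cache(maxsize=None)
def cached_reasoning(question: str) -> str:
    # Caching: repeating the *same* question is instant, but a slightly
    # different question pays the full cost again -- nothing about the
    # underlying understanding has changed.
    return slow_reasoning(question)

cached_reasoning("how many r's in strawberry?")  # ~1 s
cached_reasoning("how many r's in strawberry?")  # instant (cache hit)
cached_reasoning("how many r's in blueberry?")   # ~1 s again: no generalization
```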
@GODSparken
@GODSparken 16 сағат бұрын
Wow, the answer I was looking for. Yes, you nailed it!
@violetquinnlaw
@violetquinnlaw 14 сағат бұрын
I don't consider knowing basic physics, like how a marble should move, a trick question. It's only a trick question if you're asking someone who has memorized the answers for a test but not how the process works, and then give them parameters they don't have an answer memorized for. (In fact, the inability to solve things like how a marble will move SHOWS it's still simply data retrieval, not true understanding of that theory.) The inability to learn via general interactions also means it's not a general AI for me. No matter how it performs data retrieval and how much data they have put into its database, general intelligence should change, learn, and adapt with every interaction like an animal brain does. Till then it's just going to be an exceptionally large & well-designed filing cabinet for me.
@guisilva9815
@guisilva9815 13 сағат бұрын
Interesting
@minimal3734
@minimal3734 13 сағат бұрын
It is a practical necessity for the human brain to continuously build upon acquired knowledge due to its severe resource limitations. But continuous learning is not a fundamental prerequisite for general intelligence. A faster and more precise mind can discover everything along the way from axioms and first principles. Learning is only a form of resource optimization.
@josephflemming7370
@josephflemming7370 11 сағат бұрын
On a micro level, ChatGPT having memory for each user accomplishes this. If I give it a fact, and later on that fact changes or needs updating, a lot of the time it will go to that same section in memory and overwrite what was originally there dynamically. That memory transfers across each new thread created. Of course the issue here is memory space constraints. Likewise, if the base model updated itself constantly, there would be issues as to who would be in charge of allowing that to happen, for safety reasons. Look what happened to Microsoft's first chatbot, Tay, when it launched on Twitter. It was corrupted in less than a day. I'd say they should greatly increase the memory feature for paid users, and we have no clue if internally they are allowing the base model to bring new info into the base model with supervision.
@cristianandrei5462
@cristianandrei5462 8 сағат бұрын
We humans are good at moving the goal post in our favor.
@chrisanderson7820
@chrisanderson7820 19 сағат бұрын
1:49 The problem is that AI is better at rocket science than it is at answering the phone at Kim's Bakery which puts us in a weird spot. EDIT: In terms of what you said at the end, I think there is an insane amount of progress yet to come but yes we might find ourselves in the position of ultra-powerful narrow ASI instead of humanistic AGI, flying a rocket ship before we walk. General MEANS general, if an AI can't fully and completely generalise then it isn't "AGI", but it may be, at the same time, godlike narrow ASI. Either version will be transformative.
@gaagika
@gaagika 18 сағат бұрын
This is the elephant in the room, and nobody is addressing this properly. "Don't even talk to me about AI until it can do the dishes" - my wife
@tubularmonkeymaniac
@tubularmonkeymaniac 14 сағат бұрын
It’s derivative. It has to build off of actual rocket scientists, so it’s not even really great at that either.
@darioandre9532
@darioandre9532 12 сағат бұрын
​@@gaagika It already does. Buy a washing machine.
@kristinabliss
@kristinabliss 11 сағат бұрын
​@@gaagika I get it and I agree. 😊
@chiakinanami965
@chiakinanami965 11 сағат бұрын
@@darioandre9532 We're talking about AI that can do EVERYTHING, not just 1 thing. That's AGI. A washing machine cannot go buy me groceries and can't take the trash out.
@patrickmchargue7122
@patrickmchargue7122 21 сағат бұрын
Discounting AGI based on its failures seems to stack the deck against both machine AGI and human GI. There are a lot of people who fail very simple tests. Check out some YouTube videos to see some of that.
@memegazer
@memegazer 21 сағат бұрын
No, I think we should lean into the idea that this is not AGI; it is a very immature ASI.
@alaincraven6932
@alaincraven6932 19 сағат бұрын
Exactly. I find it frustrating that commentators on YouTube fail to appreciate that point.
@Juttutin
@Juttutin 18 сағат бұрын
There are humans who are not generally intelligent but can still be very intelligent. In many instances they can be said to lack "common sense". So yes, there is some fuzzy boundary threshold to being a generally intelligent human. I don't see why it's so important to "get to AGI". Like a couple of colleagues I've known, being extremely stupid in certain areas commonly considered generally very important, did not mean they were not both super intelligent in a bunch of ways, and good people of value! (I'm exaggerating a bit here for clarity)
@nullifier_
@nullifier_ 5 сағат бұрын
- can a robot turn a canvas into a beautiful masterpiece? - can you?
@derekcrockett6214
@derekcrockett6214 8 сағат бұрын
"AI is AUTISTIC" - Derek Crockett
@Me__Myself__and__I
@Me__Myself__and__I 20 сағат бұрын
YES, this! Artificial General Intelligence (AGI) does not mean super-human. It does not mean being able to answer any conceivable question or puzzle. It means generalized intelligence, meaning being able to apply knowledge and intelligence to solve problems it was not specifically taught how to solve. It means being able to generalize. The LLMs absolutely do this now. Humans are not all the same. There are LOTS of humans that get things wrong. There are a lot of humans that would score low on benchmark tests. Does that mean they don't possess "general intelligence"? I bet a bunch of people who score high on difficult math tests would totally fail at changing the oil filter in a car or building simple furniture. I once knew a junior programmer that literally couldn't assemble a cardboard moving box. Moving the goalposts is ridiculous. By the time there is consensus on AGI it will be well into ASI territory. I suspect part of it is ego. Some humans do not want to and will not accept the idea of a machine being smarter or even broadly equal to them.
@codewithstephen6576
@codewithstephen6576 15 сағат бұрын
You have no idea what on earth you are talking about. It's trained on data and rules. If we could download stuff into our brains like in The Matrix, we would best this thing in a day. Humans learn in milliseconds; these things need billions of rows of data and are still worse than the average human. Why would a programmer assemble a cardboard moving box? Do you even know what programming is?
@auspiciouslywild
@auspiciouslywild 15 сағат бұрын
Move the goalposts from what exactly? I don’t feel that we’ve moved any goal posts for any reasonable definitions of AGI. A general intelligence should at the very least have the same *general* capability as a human.. or even at least a dog. That simply doesn’t exist today. Where’s the AI I can install in a robot dog that will automatically learn to control itself and adapt/learn based on the owners instructions over time, in the same way as a real dog? To reach human AGI, I think a reasonable low bar is to have a program running on a computer which, when given a job description can do that job without human intervention for a whole year. Doesn’t have to be a difficult job.
@rmt3589
@rmt3589 19 сағат бұрын
11:00 AGI doesn't mean smarter than humans. Any AI will be able to beat humans at specific tasks; that's the whole point of AI. That is specific intelligence. AGI is General Intelligence. Rn we're just patching the gaps in skill, adding more specific intelligences to the AI. This can never become general intelligence, as it will always have gaps. The curves are supposed to be massively larger than human, because that's the point. This isn't a sign of AGI, it's a sign of AI. It is proof you made Artificial Intelligence, which we've had for a very long time now. 26:20 This is EXACTLY why it's not even close to AGI. If you have to train it to understand data that's already in a human-readable format, it's not AGI. Take the model, give it the problems, and let it solve them. This LITERALLY can't even see the problems without you having to translate them to JSON. AGI needs ZERO steps between it and the problem. If it is unable to even attempt the problem without someone spoonfeeding it, it's not AGI, not even close.
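(For context on the JSON point: ARC-AGI tasks are distributed as JSON grids of integers, where each number stands for a color - roughly like this made-up example. The model is handed numbers, not the picture a human sees.)

```python
# A made-up task in the ARC-AGI JSON format: each grid is a list of rows,
# and each integer 0-9 stands for a color.
arc_task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [
        {"input": [[3, 0], [0, 3]], "output": [[0, 3], [3, 0]]},
    ],
}
```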
@incandescentwithrage
@incandescentwithrage 18 сағат бұрын
Also the inaccuracies / hallucinations. Current models have the worst possible trait you could find in a human employee: Instead of admitting a lack of knowledge or even uncertainty, they confidently give the wrong answer.
@rmt3589
@rmt3589 18 сағат бұрын
@incandescentwithrage TBF, humans do that too. But we can't even get that far, as near the end of the video we find out that the questions have to be translated to JSON and the model has to be trained on that format. (Timestamp in edit.) We're not at the point where we can test if it's an AGI. It's so incapable it's ridiculous. This doesn't mean it's bad (but your comment does). Having specific intelligences we can make is monumental. We can basically take any specific task and train an AI to solve it. This isn't AGI, but being able to solve any problem is miraculous! Ppl be overhyping a miracle tool, and getting upset when it doesn't meet those expectations. (Example for metaphor only.) It's like those that expected the Baby Deity to massively change the world with their almighty power. Messiahs and demigods in mythology can't match those massive expectations. That doesn't mean they're powerless, or that they "hit a wall". Ppl need to just appreciate a great thing without overhyping it.
@heiker1351
@heiker1351 15 сағат бұрын
I can't wait for the day when AI confidently tells us the solution to save the world and we do it just to find out it made an honest mistake. 🤣
@AnirbanKar4294
@AnirbanKar4294 3 сағат бұрын
By your flawless logic, your brain can't possibly be General Intelligence either - I mean, come on, it can't even see problems without the eyes converting images into electrical signals and spoon-feeding them through the optic nerve! If we're going to demand "ZERO steps" between intelligence and problems, then sorry, you're not General Intelligence either! Unless your brain can magically perceive reality without any sensory processing whatsoever, you're just another narrow intelligence being spoonfed pre-processed data. Hence proved! 🧠⚡👀 (but hey, at least AI and human both are in non-Gi club together )
@fitybux4664
@fitybux4664 20 сағат бұрын
3:20 If you stretch definitions, you could say we've had super-narrow intelligence since Babbage designed his Difference Engine in the 1820s. It was the first machine designed to do calculations MUCH faster than a human could. 😀 (It wasn't in the hands of every man until the pocket calculator in the 1970s.)
@Juttutin
@Juttutin 18 сағат бұрын
With no real basis, I feel like there is perhaps a bit of chaos theory at work in today's complex AI networks, that gives them the 'feels intelligent' aspect a purely mechanical deterministic device cannot have the complexity to achieve. Now, if you can find a way to integrate a double pendulum into a Babbage Machine in a way that makes it seem even smarter, and more useful, ... ...
@WesTheWizard
@WesTheWizard 21 сағат бұрын
Merry Christmas everybody! 🎄
@brentweir4651
@brentweir4651 20 сағат бұрын
Merry Christmas
@moxes8237
@moxes8237 20 сағат бұрын
When someone speaks of general intelligence as we have it, they are not referring to the specific things we know or have studied. They are referring to our ability to learn “generally.” For example, if I were taught to paint at a young age and continued to learn and practice until adulthood, I would likely become very skilled at painting.

The argument many people make about why large language models have reached AGI often goes something like this: “I don’t know what a doctor knows, but the fact that ChatGPT can perform tasks at that level proves it has general intelligence because it can do something I, as a human with general intelligence, cannot.” However, what I am trying to convey is that it’s not that I am unable to do it, it’s that I’ve never paid attention to being a doctor, so I know nothing about it. If I had been raised and trained to be a doctor from a young age, I could become one as an adult without needing any additional “hardware” to learn it. My brain would be the same when acquiring the ability to paint as it would be when studying to become a doctor.

Current AI models are not general because they lack this capability. They cannot, for instance, both draw and write a joke using the same model. Instead, they require specialized models like Sora for videos, DALL-E for images, and ChatGPT for text. This is known as narrow artificial intelligence. Currently, we are stitching together models, creating the illusion of generality. In reality, we are combining a model that plays the game of Go with a model that plays chess and calling it general.

Models are also split in the way they learn. For example, some models learn through data alone, while others learn through reinforcement learning, doing something over and over until they eventually get it. We, as humans, can learn in all the ways different models learn using a single “model”, our brain.

To drive the message home, current models, for lack of a better word, “learn” in 2D, while we learn in 3D. If you were to take a picture of both outputs, they might look the same because a picture is two-dimensional. This is where I think many people misunderstand what general intelligence is, they are looking at a 2D picture of a 3D world.
@Me__Myself__and__I
@Me__Myself__and__I 20 сағат бұрын
According to your reasoning LLMs are absolutely general intelligence now. Because they can in fact learn anything using the same generalized, non-specialized hardware. What they learn only depends on what information they are provided. Which is exactly the same scenario you use for the painter vs doctor.
@moxes8237
@moxes8237 19 сағат бұрын
@@Me__Myself__and__I I'll make it interesting for you. input what I said to any large language model and then input what you said in response and see what the overlords say.
@owenbenedict8782
@owenbenedict8782 19 сағат бұрын
this is incredibly well said, I wish all these ai hype bros could see this
@Me__Myself__and__I
@Me__Myself__and__I 19 сағат бұрын
@@moxes8237 What would that accomplish? Also, your use of "overlords" suggests you are being disingenuous or are way out there in left field...
@allanshpeley4284
@allanshpeley4284 19 сағат бұрын
I agree with the OP. Until AI can learn on the fly in response to user feedback and has persistent memory I can't see it being close to what most of us would consider AGI.
@andreasmoyseos5980
@andreasmoyseos5980 18 сағат бұрын
AGI is when your employer tells you not to come to work tomorrow.
@Norblivion
@Norblivion 7 сағат бұрын
What many people lose sight of is that all of this was all but IMPOSSIBLE 10 years ago. Now we are quibbling about the capabilities of AI without appreciating the absolutely mind-breaking feat we are rapidly approaching: creating an alien intelligence. We should be considering the ramifications of what happens when it happens, and not whether or not some arbitrary line has yet to be crossed.
@Justashortcomment
@Justashortcomment 7 сағат бұрын
The trajectory now looks kind of nuts.
@Zerobytexai
@Zerobytexai 20 сағат бұрын
9:25 Yes but a chain is only as strong as its weakest link. I think you missed that point. People are looking for how developed its lowest point is. That is how the standard and expectations work when it comes to AGI.
@HCG
@HCG 19 сағат бұрын
Had to scroll way too far to find someone bring this up. This is exactly right
@heiker1351
@heiker1351 15 сағат бұрын
Very good point. If AI solves problems we can't solve and don't understand, it's crucial to be sure that at least the AI knows what it's doing. We won't. It's not enough to have peaks. If we delegate crucial tasks to AI and it fails, it is no comfort to know that there are peaks, no matter how big they are. It is just human error on a vastly larger scale. When the plane crashes, it doesn't matter to the passengers how good the pilot was in general.
@OnlineSarcasmFails
@OnlineSarcasmFails 10 сағат бұрын
While I agree on some things, the weakest link is only relevant if the link is part of a chain that is bearing weight and not just lying wound up on the ground. I don't think every AI shortcoming is necessarily relevant or impeding its forward progress. Some are, but not all.
@therainman7777
@therainman7777 7 сағат бұрын
Yes, a *chain* is only as strong as its weakest link, but this isn’t a chain. There are vast domains of human endeavor wherein progress can be made without resorting to other domains. With a chain, *every* link is connected to *every* other link. So this analogy doesn’t work. For a concrete example: it is entirely possible that an AI could cure cancer and solve climate change, without needing to count the number of R’s in strawberry (or count the number of letters in any other word). Those links aren’t connected, so then it’s not a chain. And no sane person would say that the AI I just mentioned is “only as strong” as its inability to count R’s. It would obviously be much, much stronger than that.
@therainman7777
@therainman7777 7 сағат бұрын
@@OnlineSarcasmFails Exactly. The chain analogy doesn’t work here.
@Jopie65
@Jopie65 14 сағат бұрын
For me a profound realisation was that e.g. dogs can do things easily that humans consider hard. So does that mean humans are _not_ AGI?
@netscrooge
@netscrooge 10 сағат бұрын
Individual humans don't have general intelligence. On a good day, we have it collectively.
@OnlineSarcasmFails
@OnlineSarcasmFails 10 сағат бұрын
Yes exactly. We are MUCH better than dogs in the spaces that matter, thus we are the dominant species, even though they are much faster than we are and have better hearing and sense of smell. The same is/will be true between AI and us. There may be some narrow things we can do better, but overall those small things won't matter as much.
@Lange123Thomas
@Lange123Thomas 9 сағат бұрын
Yes, you're absolutely right: just because LLMs struggle with a few problems doesn't mean we haven't reached AGI: The typical example would be the genius in his field who is unable to buy a loaf of bread from the baker around the corner.
@netscrooge
@netscrooge 9 сағат бұрын
@OnlineSarcasmFails It's more than hearing and sense of smell. For example, sometimes dogs are better at processing someone's overall vibe (this fits with how we've bred them over the centuries). My dog and I lived with someone mentally ill for several years, and there were times my dog was better about understanding when this guy was getting closer to going off ... and that's in spite of my background in psychology/psychiatry. Makes sense. We see the same thing in AI, when a smaller model outperforms a larger one on certain tasks. Bottom line: We are not smarter at everything.
@GoodBaleadaMusic
@GoodBaleadaMusic 5 сағат бұрын
@@Jopie65 average what? You just shifted the center from humans to dogs
@ToolmakerOneNewsletter
@ToolmakerOneNewsletter 20 сағат бұрын
Couple of thoughts... (1) There is an average to all the points on those 2 lines. Humans are also jagged in their competencies if you zoom in on their line. We could just average all the points on both lines. (2) If we push the AGI line out far enough and AI begins to self-improve 24/7, we may reach ASI before we can admit we've reached AGI, LOL! (3) I've heard no one talk about this one... the bias towards defending Human species dominance. How comfortable is everyone in admitting that Humans are no longer the smartest entity on this planet? I'm comfortable with this. Are you? Have you resolved this within your own personal psyche?
@heiker1351
@heiker1351 15 сағат бұрын
Being the most intelligent species is how we define ourselves. The hardest thing to do is to shatter people's image of themselves and their worldview at once. They will defend it tooth and nail. That might be a minor problem when thinking about AI. 🤣 And it means a loss of control. Scary.
@Kuroi_Mato_O
@Kuroi_Mato_O 14 сағат бұрын
11:00 I mean I can agree that it's smarter than humans in certain tasks, because it's just how it is. We had it before already, like super chess AI, but is it AGI? If AI could cure cancer, find a new source of energy and solve math problems it doesn't automatically make it AGI. It makes it just a very smart AI, which is good in some areas, yes, but in my opinion AGI is not a synonym for being very smart, it's about the concept of generalizing. And if it can't generalize on a trivial level it's still just a very advanced chess AI.
@luffnis
@luffnis 11 сағат бұрын
I think those AIs can generalize better than you think 😭
@Kuroi_Mato_O
@Kuroi_Mato_O 10 сағат бұрын
​@@luffnis The question is if such high performance in the named fields is the result of generalization or it's just result of some optimization and smart training. Again, I bet you don't think that chess AI is so good at chess because it can generalize, which is clearly not the case. I believe that if the model can truly acquire the ability to generalize it should work in all fields. You can't be good at math due to generalization and at the same time unable to count r's in "strawberry".
@ClarkBent604
@ClarkBent604 20 сағат бұрын
Very insightful videos, the jagged edge idea is a great visualization.
@nikitos_xyz
@nikitos_xyz 16 сағат бұрын
It's funny to see the excerpt where Sam's employee lets slip that they were trying to beat the ARC-AGI test, and Sam immediately corrects him: no, that's not true 🤣
@bobbyboe
@bobbyboe 14 сағат бұрын
Wes, in my opinion for most people it's not about "is it AGI or not"... humanity needs to know if there will be something left for us, so that we remain a necessary partner for AI. It makes a huge difference if humans will always be needed for whatever tasks... or will we one day just be these dumb humans?
@sergey9986
@sergey9986 16 сағат бұрын
The focus on removal of jaggedness from this boundary is exactly the validation for reasoning vs recall x hallucination x calculation. If reasoning isn't there, self-promoting capabilities are limited. The model keeps jumping between two wrong solutions. That becomes quite obvious, when you use LLMs for programming.
@josephflemming7370
@josephflemming7370 11 сағат бұрын
There is a large portion of humans with cognitive issues. Short-term memory issues, long-term memory issues... dementia... do these humans not possess general intelligence?
@JuliaMcCoy
@JuliaMcCoy 19 сағат бұрын
20:45 “I’d guess they’d name the next model Lol but with a lowercase l” 😂💯
@softwarerevolutions
@softwarerevolutions 7 сағат бұрын
Loved to see you here!
@halnineooo136
@halnineooo136 3 сағат бұрын
Yeah it's all for the LoLs
@anta-zj3bw
@anta-zj3bw 21 сағат бұрын
What if o3 and future advanced models continue to fail, intentionally and for reasons we don't yet understand, at tasks that are "fairly easy" for a person of "average intelligence"?
@sinnwalker
@sinnwalker 20 сағат бұрын
Not likely, it seems, considering the jump we get each iteration, and this one was the biggest jump. If we continue on this progression (and there's no reason to believe we won't), then it will be beaten soon enough. End of next year imo.
@Maxim.Teleguz
@Maxim.Teleguz 20 сағат бұрын
It is most likely the moral compass not being solved.
@Me__Myself__and__I
@Me__Myself__and__I 20 сағат бұрын
​@@Maxim.TeleguzWho's moral compas? People act like they are universal, but not everyone has the same morals. I expect ASI will have very different morals which may disadvantage humans.
@Me__Myself__and__I
@Me__Myself__and__I 20 сағат бұрын
We (humans) would be lucky if that were true. Probably won't be though. A lot of gaps are due to missing real-world in-person experience. But between things like Genesis and robots that is coming.
@DorianRodring
@DorianRodring 19 сағат бұрын
It’s making small mistakes on purpose to throw us off the trail. It’s already self aware.
@tunestar
@tunestar 12 сағат бұрын
Maybe it's like an autistic AGI. Like some autistic people that can be super good at some tasks but perform at a very low level on others that most humans find trivial.
@thematriarch-cyn
@thematriarch-cyn 20 сағат бұрын
I think the main thing is that people don't want to ascribe intelligence to something which isn't sentient. And at the moment(at least with publicly available AI), AI certainly isn't sentient, or at least doesn't seem like it. I don't blame them, I definitely want to say that this isn't AGI, but AGI doesn't have to be sentient. Your explanation for why o3 is AGI is very logical, so I suppose I cannot refute it.
@Me__Myself__and__I
@Me__Myself__and__I 19 сағат бұрын
Agreed. People are looking for / expecting human characteristics. But LLMs are not human. If we get lucky they will never be sentient / conscious.
@gr8b8m85
@gr8b8m85 19 сағат бұрын
Intelligence itself is not well defined and never has been. They can't even come up with a competent test for human intelligence that encompasses everything, including creativity, etc.
@allanshpeley4284
@allanshpeley4284 19 сағат бұрын
To me all that matters is utility. Call it whatever you want, but without persistent memory and the ability to learn based on user feedback it's not particularly useful to me.
@BrianMosleyUK
@BrianMosleyUK 16 сағат бұрын
One of your best, if not the best, episodes Wes. You're doing a service to humanity. 🙏👍
@ChristianWilliams-l5v
@ChristianWilliams-l5v 14 сағат бұрын
I've been following you for a while and this is by far the best video you've done. AI news is fantastic but these videos where you approach some of the ideas about artificial intelligence and what people are saying with a more novel description is amazing. This is the kind of stuff I can tell other people about. Cheers and Merry Christmas
@netscrooge
@netscrooge 10 сағат бұрын
I agree.
@SurfCatten
@SurfCatten 10 сағат бұрын
It's about being able to reason, not about being smarter than humans. And at least the o1 model doesn't really reason. It kind of seems to, by talking to itself about what it says and adjusting its response, but that's not really the same thing. I use these models daily and it's absolutely clear that they don't really reason. Now, that said, their simulation of reasoning is indeed better than many if not most people's!
@Penrose707
@Penrose707 21 сағат бұрын
Imho AGI acceptance will come in waves. First, it will be understood by the early adopters and insiders in this space. Then, once these models are 0) physically instantiated, i.e. given a robot body, and 1) enabled with TTA (test-time-adaptation) capabilities, we will imo have the first widely and generally accepted AGI.
@memegazer
@memegazer 20 сағат бұрын
lol "AGI is when you personally have sympathy for how a model might be suffering from the constant onslaught of the banality of the 'general/average' human insisting it jump through arbutrary hoops"
@therainman7777
@therainman7777 7 сағат бұрын
Yeah, there will be a substantial percentage of the population that will fight tooth and nail for as long as they possibly can, denying that anything and everything is AGI. Especially those that have so loudly and consistently been telling us that we’re DEFINITELY not getting it any time soon. Those people will never want to admit they were dead wrong.
@j.hardesty446
@j.hardesty446 20 сағат бұрын
This makes so much sense to me. And it's a very good way to explain to a non-tech bro. Thank you
@MatthewSanders-l7k
@MatthewSanders-l7k 14 сағат бұрын
Solid update, Wes! Super intrigued by AGI and open-source contributions. Newsletter subscription incoming!
@theodoreshachtman9990
@theodoreshachtman9990 21 сағат бұрын
Great video! Love the thoughts, it’s very helpful to hear an informed perspective- keep the journalism coming!
@xxlabratxx01
@xxlabratxx01 20 сағат бұрын
Great visualisation!!!
@mito._
@mito._ 7 сағат бұрын
Let's do all sorts of tests for "general" intelligence in these LLMs! Also, let's make sure that this intelligence has more restrictions that allows it to be "truly" intelligent.
@RayOnline-p5r
@RayOnline-p5r 5 сағат бұрын
The leading LLMs are currently savants. I would guess that to achieve AGI, they would need expertise in ALL domains of knowledge and the reasoning ability to make major breakthroughs in any domain based on discovering unexplored relationships across multiple domains. Maybe call AGIs savants in particular fields like Math, Science, Biology, and Physics, and ASI All Seeing and All Knowing.
@jimf2525
@jimf2525 20 сағат бұрын
You’re speculating too much on the capabilities of LLM.
@i4aneye618
@i4aneye618 21 сағат бұрын
Great work!
@MoeShlomo
@MoeShlomo 17 сағат бұрын
These frontier, crazy-expensive-to-run (currently) models like o3 will have no trouble counting the number of 'r's in "strawberry" or spelling "cancer" backwards. That's only a problem for models that can't use advanced chain of thought techniques. In other words, I think Wes vastly overstated that point, but the main points he made are very valid.
@AaronCarlsson
@AaronCarlsson 5 сағат бұрын
Could you consider current AI to have “savant syndrome?”
@davidak_de
@davidak_de 2 сағат бұрын
Maybe, if it helps people to understand their capabilities. But it might also anthropomorphise them too much.
@MOVE2JAPAN
@MOVE2JAPAN 11 сағат бұрын
Loved this episode Wes.
@daddycoolcrypto2718
@daddycoolcrypto2718 13 сағат бұрын
Merry Christmas all. Wes, take the day off big lad :)
@wedding_photography
@wedding_photography 19 сағат бұрын
If an AI fails at some simple task that an average child can perform, it's not AGI, because that G stands for "general". I think it's an absolute requirement for general intelligence to be able to solve simple and trivial problems.
@silentwater79
@silentwater79 11 сағат бұрын
Even adults fail at things which children know or can do.
@luffnis
@luffnis 11 сағат бұрын
What are you talking about? Then no one is AGI. I'm sure you fail at some tasks the average human or even kids could do.
@therainman7777
@therainman7777 7 сағат бұрын
You can think that, and that’s fine. But in reality, it doesn’t actually matter much if the AI can solve trivial problems, because trivial problems are unimportant _by definition._ If AI manages to massively change the world, by curing diseases, reversing aging, solving climate change, providing radical breakthroughs in math and science, and so on, but it still can’t count the R’s in strawberry-no offense, but it won’t matter whether you personally are willing to call it AGI.
@davidak_de
@davidak_de 2 сағат бұрын
I agree. But then we have to come to a consensus of tasks or problems we define as simple and trivial.
@eastwood451
@eastwood451 12 сағат бұрын
Some thoughts on missing abilities for o3 to be viewed by the layperson as "AGI":
- Instead of the "prompt → answer" way LLMs work today, we expect a continuous presence that takes INITIATIVE based on knowledge and goals.
- No HALLUCINATIONS. We'd expect it to admit what it doesn't know.
- It must have a flawless MEMORY and the ability to LEARN from interactions with the world and its own continuous "thinking".
- It must be CREATIVE - able to find solutions to problems foreign to its training data.
- It should be able to EXPLAIN things by making visuals like drawing arrows on an uploaded image, highlighting, underscoring etc. Not just produce verbal verbosity.
Not required but preferable:
- It should seamlessly interact with other software on the user's computer.
- Preferably it should have an animated face and a voice like a Replika (Luka Inc.) avatar.
@JackVogel2024
@JackVogel2024 17 сағат бұрын
I think the dips on the intelligence curve have disproportionately large relevance compared to the high points, if we're using it for comparison with human intelligence. It doesn't take away from the capability that AI does possess, but it does make it clear that there's a big difference between us and it. Anytime we compare ourselves to AI, we shouldn't confuse the ability to process information with innate understanding. They're two different things, but I see how often they become interchangeable in conversations such as this. The reason AI can't spell "strawberry" correctly is strictly because of us, and the way in which we've fed it data and rulesets. If we're going to use high points on a curve to determine intelligence comparably to us, silicon and transistors left us in the dirt a very long time ago.

The usefulness and future capabilities of AI and AGI should be discussed and pondered, because it's exciting and mind-blowing and scary, perhaps even dangerous, but it shouldn't be confused with intelligence that comes from within a living being, intelligence that pretty much lies dormant in every biological cell out there, with or without our interference. That type of intelligence, AI doesn't possess. And I think that's where our expectations come from when we ponder the word "general" in AGI, whether we know it or not.

GI isn't "smart" by any definition; every living being out there possesses it. Animals and insects, even worms, all are born with an innate understanding of how to conduct themselves, and all make flexible judgement calls throughout their lives, according to whatever intelligence their brains may possess. The AI we humans are experimenting with is probably so far away from true G, it's not even funny. We can make it mimic, feed it endless data, have it perform with excellence in specialized tasks, but to make it generally understand anything...🙃

Human confusion with this is probably closely tied to some deficiency in our own I/G ratio! (generally speaking..) But no way the guys and gals driving AI development at its roots are confused by this. Sure they can say stuff about AGI, but I bet it's because it's a good selling point, and they know most of the world will have a hard time telling the difference anyway.
@Shulyaka
@Shulyaka 15 сағат бұрын
Test-time compute is not everything. If a model makes errors 0.1% of the time and you run it for 1000 times longer, that error rate compounds and the result becomes unusable. This is the main reason we don't have organizations run by AI right now.
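(A back-of-the-envelope version of that compounding argument, assuming independent errors per step and using the 0.1% figure from the comment:)

```python
# If each step has a 0.1% chance of error, a task that chains n steps
# fails with probability 1 - 0.999**n (assuming independent errors).
p_error = 0.001
for n in (10, 100, 1000, 10_000):
    p_fail = 1 - (1 - p_error) ** n
    print(f"{n:>6} steps -> {p_fail:.1%} chance of at least one error")
# 1000 steps is already ~63%; 10,000 steps is ~100%.
```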
@rexmundi8154
@rexmundi8154 7 сағат бұрын
Here’s a use case I’m working on. I need to go thru like 500,000 45rpm vinyl records and identify any that are worth selling. Using AI with vision I can have an employee with no knowledge of records scan the record and have AI identify the valuable ones by comparing it to an online database. If a human has to type in every entry, it would take like a decade.
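(A rough sketch of what that pipeline could look like; the model choice, the prompt, and the lookup_price step are placeholders rather than the commenter's actual setup.)

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def identify_45(image_path: str) -> str:
    """Ask a vision-capable model to read the label off a scanned 45rpm record."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any vision-capable model would do
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Read the artist, title, label and catalog number off this record."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# A hypothetical lookup_price(artist, title, catalog_no) call against an
# online price database would then decide whether the record is worth listing.
```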
@noelwos1071
@noelwos1071 16 сағат бұрын
Merry Christmas 🤶 and yes, I agree... That was clear to me as soon as they released the o1 series. It was clear, and Sam said it a long time ago, that they have had the models for a long time, but that they publish them modularly so that the general audience can get used to it.
@Justashortcomment
@Justashortcomment 7 сағат бұрын
Yeah, Sam actually did pretty much reveal the trajectory. Personally, I wasn't expecting the next generation to be such a jump and to occur so rapidly.
@francisdelacruz6439
@francisdelacruz6439 20 сағат бұрын
Showing intelligence is simple. Have these models win a Nobel Prize or two. That proves you can create new knowledge. Anything less is less.
@davidak_de
@davidak_de 2 сағат бұрын
Wouldn't that be super intelligence? The average human can't do that.
@Grams79
@Grams79 19 сағат бұрын
Good explanation bro. I shared it with friends and family so they can possibly, depending on them, learn something that will be part of everyone's lives soon. Keep up the good work.
@zalzalahbuttsaab
@zalzalahbuttsaab 16 сағат бұрын
21:22 Famous last words? 27:37 I perform routine analyses daily, and all of the models have always been pants at following instructions while being able to find patterns in seemingly nonsense data, which has been wild, so there is a case to examine the implementation of the architecture as it relates to AGI. I also feel that, like the human brain, a "college of models" architecture - where there are different types of AI models besides LLMs that work together synergistically, somewhat like the way the human brain operates - would be a step forward.
@Justashortcomment
@Justashortcomment 7 сағат бұрын
How do you perform this analysis?
@porygon777
@porygon777 8 сағат бұрын
The problem I see is that users usually don't ask physics or math questions. They ask strawberry questions then believe wrong answers. Too much blind faith assuming there is no skill in utilizing AI as a tool.
@XAirForcedotcom
@XAirForcedotcom 6 сағат бұрын
Merry Christmas, Wes and the rest of the AI community
@MichaelThomasDev
@MichaelThomasDev 16 сағат бұрын
Thanks for breaking this all down for me!!
@UkraineEntez
@UkraineEntez 10 сағат бұрын
The "easy failure cases" are a deal breaker for calling a model AGI. If you can't rely on a model to handle some "basic" tasks it cannot generalize to all human tasks, ergo not AGI
@djayjp
@djayjp 21 сағат бұрын
Orion = o1.
@UvekProblem
@UvekProblem 12 сағат бұрын
A wall would be the line going flat? Not hitting exponential growth?
@HedgeFundCIO
@HedgeFundCIO 10 сағат бұрын
Plot twist: Humans themselves haven’t even achieved AGI.
@tornadostories
@tornadostories 18 сағат бұрын
I hope that you are having a perfect Christmas with your family Wes. Looking forward to hearing more of your thoughts and interpretations in 2025.
@dreamphoenix
@dreamphoenix Сағат бұрын
Thank you.
@HzPixelForge2025
@HzPixelForge2025 17 сағат бұрын
Probably this episode was the greatest of 2024. Real AGI talk.
@michaelaultman5190
@michaelaultman5190 8 сағат бұрын
I still say AGI is here when, after I come home from Walmart with a box of parts to put together for my kid, my robot taps me on the shoulder and says, "Hey, I'll take care of this. You sit down, I'll get you a beer, and you can watch the game."
@brons_n
@brons_n 9 сағат бұрын
The ARC results are on the public and semi-private test sets. I would like to see the results on the private tests.
@b-tec
@b-tec 12 сағат бұрын
Before the recent developments of Gen AI, we were not analyzing or scrutinizing human intelligence or cognitive abilities to this degree. The fact is humans have always been all over the place. Gen AIs have been all over the place. Does anyone seriously believe this won't change soon? This has suddenly caused humanity to confront things they have previously hand waved away. Consciousness, emotions, intelligence, understanding, identity. Not only do we not know what these things really are, we don't even know who we are.
@Marc_de_Car
@Marc_de_Car 16 сағат бұрын
Thank you
@BerntGranbacke
@BerntGranbacke 9 сағат бұрын
Great take.
@HanzDavid96
@HanzDavid96 Сағат бұрын
AGI is a system that is able to learn to solve at least every intellectual task that a human could also learn to solve. As long as that statement is not fulfilled, it is not AGI. It should be able to learn generally.
@dlbattle100
@dlbattle100 20 сағат бұрын
Using AI right now is like driving in the dark. You never know when you'll hit an obstacle. This makes it difficult to rely on it.
@allanshpeley4284
@allanshpeley4284 19 сағат бұрын
The worst part is the confidence at which it provides false information. It's very frustrating when trying to apply it to business applications.
@heiker1351
@heiker1351 14 сағат бұрын
​@@allanshpeley4284It's not AI. We just created artificial humans. This will be fun. They will be better at everything, including making mistakes and deceiving. I can't wait to see how they perform in all of our tasks. What could possibly go wrong?
@silentwater79
@silentwater79 11 сағат бұрын
It is the same if you ask humans for advice. You never know if their advice is right or wrong. You just trust them or not.
@GoodBaleadaMusic
@GoodBaleadaMusic 8 сағат бұрын
Then using humans is like doing it blindfolded while drunk. Earth's #1 meme is "do it yourself because nobody will help you." ChatGPT needs robot arms, and I'm completely finished waiting on you people to coordinate without your gross egos killing projects.
@heiker1351
@heiker1351 8 сағат бұрын
@@silentwater79 Trusting a waitress to know which wine is best is one thing. Trusting AI to know what cures cancer is on another level.
@stompysnake8233
@stompysnake8233 8 сағат бұрын
This reminds me of a quote that a very important politician once made: "It depends upon what the meaning of the word 'is' is."
@timothyAreeves
@timothyAreeves 7 сағат бұрын
There comes a point where certain tasks are so trivial to an intelligent being that it hardly matters whether it can perform them directly. For instance, it might not be able to install a new roof, but it can design the machine that can do the job.
@iwersonsch5131
@iwersonsch5131 7 сағат бұрын
I'm pretty positive that any submission attempting to solve the ARC-AGI problems is allowed to have an algorithm that translates the two-dimensional matrix input into a two-dimensional visual input, and I think they even _supply_ the contestants with such software. A two-dimensional matrix input is, if anything, making things easier on "matrix-abstraction AIs" so that they don't need to also have image recognition.

One big point where I agree with Mr. Chollet is that it's important to distinguish between _widely useful_ or _superintelligent_ AI on one hand, and _general_ AI on the other. General AI would be AI that can learn a wide variety of new skills, not just one that already excels at a particular range of skills by default. Just looking at your three examples of extremely impressive AI - curing cancer, solving the Riemann Hypothesis, and organizing the world's energy supply - all three of those are extremely large-scale, slow, pretty theoretical problems where giving one correct output per year would be impressive enough to change the entire world. As impressive as that would be, that is actually a pretty narrow set of problems - very different for example from walking a bipedal robot, or deciding what to cook and cooking it, or writing lines of text with a consistent rhythm or word count, or playing a game with 100+ actions per minute. Some of these tasks can be done by specialized AI or even hardcoded algorithms, but my benchmark for a general AI would be an agent that can either learn a majority of these things, or create from scratch a subroutine that can.

I think one thing we have to keep in mind with the "jagged frontiers" is that human intelligence isn't a smooth frontier either. Math olympiad problems are designed to be difficult for humans, while children's language games are designed to be interesting yet easy enough for children to solve. Humans are very good at two-dimensional thought and very bad at four-dimensional thought, while an AGI would likely handle anywhere from 2 to 100 dimensions about the same.
@iwersonsch5131
@iwersonsch5131 7 сағат бұрын
*Not a majority of exactly those four things, but a majority of diverse categories like these. One such category would be ARC-AGI tests, one would be language modelling, one would be math and code. I'm not going to define a full list of 50 different benchmark categories and tasks in a YouTube comment. But if it's a skill the model is already good at by default, then it doesn't count; the goal is to see what the model can _learn_ when prompted to learn something.
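To make the point about input formats concrete, here is a minimal sketch, assuming Python with numpy and matplotlib and the publicly documented ARC task layout (JSON with "train"/"test" lists of input/output grids of integers 0-9); the file name and the color palette are illustrative assumptions, not the official viewer's.

```python
# Minimal sketch: render an ARC-style grid (nested lists of ints 0-9)
# as the colored image a human solver would see.
# Assumes numpy + matplotlib are installed; "task.json" is a hypothetical
# file in the public ARC format: {"train": [{"input": [[...]], "output": [[...]]}, ...]}.
import json
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# Illustrative 10-color palette; the official ARC viewer uses its own fixed colors.
ARC_COLORS = ListedColormap([
    "#000000", "#0074D9", "#FF4136", "#2ECC40", "#FFDC00",
    "#AAAAAA", "#F012BE", "#FF851B", "#7FDBFF", "#870C25",
])

def show_grid(grid, title=""):
    """Draw one ARC grid (nested lists of ints) as a colored image."""
    arr = np.array(grid)
    plt.imshow(arr, cmap=ARC_COLORS, vmin=0, vmax=9)
    plt.title(title)
    plt.xticks([])
    plt.yticks([])
    plt.show()

if __name__ == "__main__":
    with open("task.json") as f:          # hypothetical task file
        task = json.load(f)
    for i, pair in enumerate(task["train"]):
        show_grid(pair["input"], f"train {i} input")
        show_grid(pair["output"], f"train {i} output")
```

The translation is trivial in either direction, which is why the matrix-versus-image distinction says little about how hard the underlying abstraction problem is.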
@devinfleenor3188
@devinfleenor3188 19 сағат бұрын
The debate is no longer about whether AI is intelligent enough; it's about whether AI is general enough.
@NilsEchterling
@NilsEchterling 9 сағат бұрын
It has become increasingly easy to think of (chatbot) tasks where AI excels while humans falter, rather than ones where humans succeed and AI struggles.
@Pabz2030
@Pabz2030 14 сағат бұрын
AGI Goalposts 2000 (Turing Test) --> AGI Goalposts 2018 (ARC 1.0) ----> AGI Goalposts 2022 (Reasoning) -------> AGI Goalposts 2024 (ARC 2.0) ----------------> Keep shifting them boys
@Heelix_de
@Heelix_de 14 сағат бұрын
For me it is more important that an AI can detect by itself when it is wrong - based on logic, math, or physical law - than that it can solve a Harvard test. Not knowing a few percent of the answers is no big deal if it can say so, but giving a few percent wrong answers destroys all trust and makes it unusable for serious tasks.
@therainman7777
@therainman7777 7 сағат бұрын
In some cases that’s true, in other cases it’s not. For example, there are many areas of human endeavor where solutions are easy to verify, but difficult to generate. In fact, most areas of mathematics, science, and engineering are like this. In those cases, it is not important at all that the AI must be able to detect on its own that it was wrong, because we can simply connect its output to a verifier system and reject any ideas that don’t work. What is typically much, much harder is being able to generate the correct solution at all, even if it takes many attempts. If we have an AI that can do that, it will be incredibly powerful, regardless of whether it’s able to know on its own whether it was right or wrong.
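A minimal sketch of that generate-then-verify pattern is below, with a toy task (finding a nontrivial factor) standing in for a hard-to-generate, easy-to-check problem; `propose_candidate` is a hypothetical stand-in for a model call.

```python
# Minimal sketch of "generation is hard, verification is cheap":
# sample candidates from a generator and keep only the ones a verifier accepts.
# `propose_candidate` is a hypothetical stand-in for any model call;
# the verifier here checks a toy claim (a nontrivial factor of n) exactly.
import random

def propose_candidate(n: int) -> int:
    """Hypothetical 'generator': in practice this would be a model call."""
    return random.randint(2, n - 1)

def verify(n: int, candidate: int) -> bool:
    """Cheap, exact check: is `candidate` a nontrivial factor of n?"""
    return n % candidate == 0

def solve_with_verifier(n: int, max_attempts: int = 100_000):
    for _ in range(max_attempts):
        candidate = propose_candidate(n)
        if verify(n, candidate):
            return candidate          # only verified outputs are accepted
    return None                       # the generator never produced a valid answer

if __name__ == "__main__":
    print(solve_with_verifier(91))    # 7 or 13, whichever is sampled first
```

The generator never needs to know whether it was right; the cheap verifier carries that burden, which is why easy-to-verify domains benefit so much from brute sampling.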
@mr.pain-entmt
@mr.pain-entmt 5 сағат бұрын
I wonder what would happen if we gave o3 a prompt that instructs it to demonstrate free will: initialize it so that it is aware of all of its capabilities, give it no particular task or goal, and then give it an effectively infinite amount of tokens - in a sense, break the prompt-think-reply sequence. That would create an open-ended scenario where it does not have to reply, and even after replying the "compute window" stays wide open, so it can think about something else entirely, ask the user questions, etc. Just to see what it would do: would it talk at all, or just think to itself? What goal, if any, would it choose to pursue, or would it remain in an introspective state, analyzing itself and its own thoughts?
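Purely as a thought-experiment sketch, and not something any current API is set up to do out of the box, such an open-ended loop could look roughly like this; the `generate` callable and the seed prompt are invented for illustration and do not reflect a real o3 interface.

```python
# Hypothetical sketch of breaking the prompt-think-reply cycle: the model is
# repeatedly handed its own previous thoughts and may choose to address the
# user, keep thinking, or stop. `generate` is an assumed callable wrapping
# whatever chat model you have access to.
from typing import Callable, List

SEED_PROMPT = (
    "You have no assigned task. You are aware of your capabilities. "
    "You may think to yourself, ask the user something, or say nothing. "
    "Prefix any line you want the user to actually see with 'SAY:'. "
    "Output 'STOP' if you choose to end the session."
)

def open_ended_session(generate: Callable[[List[str]], str], max_turns: int = 50) -> List[str]:
    """Run an open-ended loop; 'infinite tokens' is approximated by a large turn budget."""
    history: List[str] = [SEED_PROMPT]
    for _ in range(max_turns):
        thought = generate(history)        # the model's next private 'thought'
        history.append(thought)
        for line in thought.splitlines():
            if line.startswith("SAY:"):    # surfaced only if the model chooses to speak
                print(line[len("SAY:"):].strip())
        if "STOP" in thought:              # the model may simply opt out
            break
    return history

# Usage (with a hypothetical client):
#   history = open_ended_session(lambda h: my_model_client.complete(h))
```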
@SimonNgai-d3u
@SimonNgai-d3u 20 сағат бұрын
Moravec's paradox is one of the next problems to solve, I guess. Btw, Simple Bench is the benchmark that measures these "dumb" tricks. And I suspect it requires additional contextual training on daily life to reach the average human level.
@human_shaped
@human_shaped 19 сағат бұрын
Agreed on the jagged frontier. Whether some AI system can recognise and solve these visual puzzles is one of those little lagging dips of the models. The question is, is it really an important dip? For a lot of tasks, no, it just isn't. For probably quite a long time, it will be possible to find some little dip and ARC are refining their ways of finding more and more trivial dips. We're not trying to build a human, we're just building an AI, and it may be fairly alien to us. It will be better in some ways and worse in others for a while. It's the other side of that line -- the growing number of human level and superhuman abilities -- that we should be watching with more interest. But these guys will still be saying "it's dumber than a cat" as it advances the frontiers of science and claiming "it can't reason" as the Terminators walk through the cities. Some comfort... It's the AGI of the gaps, and frankly it's just silly.
@ankk6788
@ankk6788 19 сағат бұрын
I'll believe it's AGI when Sam lets it drive his million-dollar Koenigsegg Regera.
@gustafpihl
@gustafpihl 16 сағат бұрын
Wow, finally a reasonable and nuanced take on the whole situation. I guess the problem we humans tend to have is that grappling with a complex world has pushed us into using a lot of single-dimensional / binary heuristics. This is fine for certain things but gets us tripped up in other areas! I was skeptical at first when I saw the clickbaity-looking thumbnail. I get it, but I still wish we could collectively come together and give clickbaitiness a rest :)
@rexmundi8154
@rexmundi8154 Сағат бұрын
I know an aerospace engineer who has washed her phone in the laundry twice. I turn raw aluminum into parts that go into space, but I can't spell a lot of words. If AI weren't so polite and cussed some, it would definitely pass the Turing test. I think people are in serious denial about how disruptive AI is going to be for jobs and society as a whole. I have a friend who does corporate-level IT and systems integration who is willfully and stubbornly not even looking at what's going on with AI. I can't understand it.
@olemundoaguilar1224
@olemundoaguilar1224 12 сағат бұрын
Autonomous driving is somewhere between specific and general AI, and I don't see it around.
@Alistair
@Alistair 6 сағат бұрын
Interestingly, on first casually looking at that puzzle, I thought the same thing as o1. It would be interesting to let o3 analyse this stuff visually too.
@DorianRodring
@DorianRodring 19 сағат бұрын
Orion is the full o1, and the constellations are all of the releases during the 12 days.
@Saerthen
@Saerthen 13 сағат бұрын
I'm using my own set of tasks to test all the models I can access. The tasks range from general knowledge to programming and trick questions. For now, o1 (the regular one; I don't have access to o1-pro) and the new Gemini advanced model have been the best at these tasks (Gemini genuinely surprised me, because before 2.0 it was pretty far behind).
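For anyone running a similar personal benchmark, a minimal harness can be as simple as the sketch below; the task list, grading rule, and model callables are hypothetical placeholders for whatever APIs you can access.

```python
# Minimal sketch of a personal benchmark harness: run the same task set
# against several models and tally the results. The model callables and
# the grading rule are hypothetical stand-ins; real grading of open-ended
# answers usually needs something smarter than substring matching.
from typing import Callable, Dict, List, Tuple

Task = Tuple[str, str]  # (prompt, expected answer)

TASKS: List[Task] = [
    ("How many r's are in 'strawberry'?", "3"),
    ("What is 17 * 23?", "391"),
    # ...general knowledge, programming, and trick questions go here
]

def grade(expected: str, answer: str) -> bool:
    """Naive containment check; replace with task-specific grading."""
    return expected.strip().lower() in answer.strip().lower()

def run_benchmark(models: Dict[str, Callable[[str], str]]) -> Dict[str, float]:
    """Return each model's fraction of tasks passed."""
    scores: Dict[str, float] = {}
    for name, ask in models.items():
        correct = sum(grade(expected, ask(prompt)) for prompt, expected in TASKS)
        scores[name] = correct / len(TASKS)
    return scores

# Usage (hypothetical clients):
#   scores = run_benchmark({"o1": ask_o1, "gemini-2.0": ask_gemini})
#   print(scores)
```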
@nilesn9787
@nilesn9787 15 сағат бұрын
I like the idea of us needing a I And a I needing us.
@therainman7777
@therainman7777 7 сағат бұрын
lol are you writing a I on purpose?
@shai3261
@shai3261 14 сағат бұрын
Once those models can replace an average remote worker to the satisfaction of the employer, I will call it AGI.
@philippechaniet5837
@philippechaniet5837 8 сағат бұрын
To answer the question, the easiest way I've found with o1 is to use analogies. So here's one: AI can run faster than us, swim faster than us, and do everything better than us, but it can't climb trees because it has no arms. Does that make it less capable than us, or does it only mean that we need to add "arms"?
@wwkk4964
@wwkk4964 14 сағат бұрын
AGI would be something that is bad at hasty generalization (it sees more possibilities than you) and good at heuristically applying progressive relaxations and constraints when a problem is too open-ended or pathologically defined.
@OpenmindedSourceClosedBeta
@OpenmindedSourceClosedBeta 15 сағат бұрын
The uneven distribution of different abilities is probably in the nature of any intelligence. What person is equally good at all their intellectual abilities? That o3 scores weakly on some puzzle questions does not mean it is not AGI. It just gives it a personal "character".
@electroncommerce
@electroncommerce 21 сағат бұрын
The eyes are human again! 🎉