lol… slowing down… gosh… it's hard to keep up. o1 was such a huge shift, and we just have the preview version at the moment.
@MrRandomPlays_1987 (46 minutes ago)
Right, also have you heard about the Extropic thermodynamic computer that is supposedly like 100 million times faster than average GPUs when it comes to running AI? This thing is gonna let us run AI stuff in complete real time, I think. Crazy.
@icegiant1000 (15 hours ago)
I don't need an AI agent to be vastly smarter than me. I need one integrated better. I'm a developer, and going back and forth between Visual Studio and ChatGPT o1-preview is a giant pain in the butt.
@Novascrub (14 hours ago)
Copilot supports o1 now
@aelisenko (13 hours ago)
Have you tried Cursor? It's very good but not perfect; it still needs more integration, but it's already extremely useful.
@jessedbrown1980 (12 hours ago)
Pythagora, Cursor, Copilot - but these are really weak models. I tested a system in SF last week that will set the foundation for no-code conversation, and they were working on mind-reading interfaces, so shortly we won't even need to speak to the AI to get it to help with development.
@Kalamith (11 hours ago)
I need both.
@aelisenko (11 hours ago)
@@jessedbrown1980 Cursor can use any model you choose (o1-preview, o1-mini, etc.), and you can even provide your own API token.
@o_o825 (15 hours ago)
If you’re feeling pessimistic, look at where AI-generated images and videos were just a few years ago.
@TheRealUsername (15 hours ago)
It's still not good enough
@ja-fl8rb (15 hours ago)
That's two different things.
@XYang2023 (15 hours ago)
very different model/mechanism
@ricciozztin (15 hours ago)
Just don't look too closely at the hands and fingers produced by today's AI.
@aaronsatko3642 (15 hours ago)
@@ricciozztin That has been fixed for over a year now by most of the recent models.
@DorianRodring (7 hours ago)
Bro, I understand that you want to pump out as many videos as possible, but slow down and take a breath. Your videos lately have so many editing problems: overlapping audio, your voice talking about two things at the same time, repeated lines, or sentences that just get randomly cut off. Money is good, but put care into what you're making.
@HansKonrad-ln1cg (15 hours ago)
Don't confuse test-time training, like in the paper used for ARC, with test-time compute, like o1 uses. Test-time training means the model is literally continuously training or learning, much like humans. Test-time compute just means the model is allowed to think for longer and produce thinking tokens that are not necessarily visible to the user (also like humans).
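A rough way to see the difference, as a purely illustrative pseudo-PyTorch sketch (model.generate and model.loss here are made-up stand-ins, not any real API):

import torch

def test_time_compute(model, prompt, think_tokens=1024):
    # o1-style: weights stay frozen; the model just spends extra tokens
    # "thinking" before it writes the visible answer.
    hidden_reasoning = model.generate(prompt, max_new_tokens=think_tokens)
    return model.generate(prompt + hidden_reasoning, max_new_tokens=256)

def test_time_training(model, task_examples, steps=10, lr=1e-5):
    # ARC-paper-style: the weights are actually updated on the test task's
    # few examples at inference time, i.e. the model keeps learning.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        for x, y in task_examples:
            loss = model.loss(x, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model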
@BloomrLabs (15 hours ago)
Diminishing returns? Even if true, who cares? The progress to date is still amazing. The tools we have today are sufficient to build superintelligent systems, and it will continue to get better and better from here.
@Novascrub (14 hours ago)
lol
@MrRandomPlays_1987 (45 minutes ago)
Yeah, and also take into account Extropic's super-fast computer (supposedly 100 million times faster than current GPUs), which would accelerate AI even faster.
@galzajc1257 (15 hours ago)
And Dario Amodei also said in a Lex Fridman interview a few days ago that they saw no slowdowns.
@crawfordscott3d (11 hours ago)
Pretty certain he mentioned data limits. Synthetic data has not been proven to work yet and has shown signs of leading to corruption.
@flightevolution8132 (3 hours ago)
@@crawfordscott3d We know synthetic data can work along a dozen different axes. As a simple example, AlphaGo Zero was trained on zero human Go data and STILL surpassed any human player in short order by just playing itself (synthetic training).
@ronannonbarbarian9174 (an hour ago)
@@flightevolution8132 Yes, but that is very narrow in scope, and so was the data it got for training. LLM output (and input, as a matter of fact) is quite wide in scope, if not universal, covering all possible subjects of human activity.
@GoodBaleadaMusic (15 hours ago)
We were all normal humans a year ago. It could stay the way it is right now and still completely reconstruct our whole reality. I've heard heads of business, math, science, and philosophy all be incredibly wrong about and threatened by AI, because it makes them normal and makes everyone superhuman. Because now love, justice, and creative desire are the strongest drivers, not acting perfect for 40 years so you can play golf at the good golf course.
@eric.waffles (12 hours ago)
Video Summary: AI analysts say they've hit a wall. TheAIGRID: "This is crazy because..." Sam Altman: "There is no wall" TheAIGRID: "Now this is crazy because..." End of video
@The_MegaClan (5 hours ago)
thanks for saving my time. I appreciate it
@SimonNgai-d3u (4 hours ago)
I think o1 is really an underrated breakthrough in AI history. It unlocks LLMs' reasoning ability with algorithms, not just more data. I guess when we have used up all the data for training the base models, it's time to get back to using our intelligence to unlock those algorithms.
@blazearmoru (15 hours ago)
AI is hitting a wall! That wall is AGI and the moment it breaks through everyone gonna be shitting themselves.
@TravelingChad (13 hours ago)
I think 4o and Sonnet are solid models. They should really focus on building agents that use these models while working to make them less prone to hallucinations and more affordable. Agents will end up using way more tokens but will produce better results, and that’s going to be key. Let the agents figure things out, and if they need to switch to an o1, o2, or o3 model, they should have that flexibility. Plus, if they could pause training new models for a while, they’d free up a ton of compute power to improve memory and speed.
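As a purely illustrative sketch of that kind of flexibility (the model names and the call_llm helper are assumptions, not a real API), an agent could default to a cheap model and escalate only when needed:

CHEAP_MODEL = "gpt-4o"          # fast, affordable default
REASONING_MODEL = "o1-preview"  # slower, used only for hard steps

def call_llm(model: str, prompt: str) -> str:
    # Stand-in for whatever client the agent actually uses.
    raise NotImplementedError

def agent_step(task: str) -> str:
    draft = call_llm(CHEAP_MODEL, task)
    # Escalate to the reasoning model only when the cheap draft looks uncertain.
    if "not sure" in draft.lower():
        return call_llm(REASONING_MODEL, task)
    return draft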
@Michael-px9il (12 hours ago)
But they won't pause training because of the arms race. Nobody wants to be left behind, especially superpowers. This is why an alleged budget of 30% for safety research has been thrown out the window and it's more like 5%... which I think is a bit stupid. If proper precautions are not taken, nobody's going to be here to enjoy AGI for very long.
@ct5471 (13 hours ago)
It could be that base models plateau (after all, they are trained on human, not superhuman, data) in the accuracy of a single inference output. o1-type systems, which will soon be agentic, will however still exponentially drive absolute intelligence as inference cost and time diminish. If you have a very good mathematician who is a million times faster than a normal mathematician of the same intelligence, one year is like a million years, and the gained insights build on top of each other. Plus they will act in swarms of self-specializing models. Today we may have 20 million scientists in the world; with agents you could scale that up to 20 billion and more, all communicating at electric speed. In total, the base model doesn't need to be superhuman, but the collective of agents will be. It's subjective time and scalability. If you apply reinforcement learning via self-play, they will likely approach and perhaps slightly surpass peak human performance (in single inference) across domains. Self-play in chess quickly allowed for massively superhuman abilities, but there we have quick, immediate feedback for ground truth. Via agents interacting with the world this could be achieved as well, though for some things it may be slower (for others not, especially theoretical stuff). Still, as we apply more agents after a period of plateauing slightly above human level, this might also allow continued increases in the absolute intelligence of base-model capabilities. But increasing reasoning-cycle speed and scaling up the number of agents is likely sufficient on its own to achieve something we may call superintelligence.
@gweneth5958 (15 hours ago)
Why would anyone quote Gary Marcus? That is honestly beyond my understanding; that man is crazy... I just listened to some interviews he was in, and I couldn't listen to him; it was just plain horrible.
@MrRandomPlays_1987 (48 minutes ago)
What's the deal with the weird cut-off outros? It's really bizarre how TheAIGRID's videos almost always end early, cutting off before he finishes saying the full last sentence of the video.
@PCRetroTech (15 hours ago)
You've repeated the same incorrect information about ARC-AGI as yesterday. Read the comments to understand the difference with o1.
@polodog7458 (10 hours ago)
Oh man, the cliff hanger "On the other side, they are right that this next paradigm....." 13:14
@shibafujiwatches2808 (13 hours ago)
Really like your videos. I had no idea what “the wall” was.
@punk3900 (4 hours ago)
It is not slowing down; it's accelerating at a lower rate.
@MojtabaSaffar-p1v (6 hours ago)
The solution could be fine-tuned, specialized AI and test-time compute, not synthetic data.
@andreaskrbyravn855 (12 hours ago)
It is almost always like this with humans too: either you are good at writing or painting and being creative, or you are good at logic, math, and coding.
@zerge69 (11 hours ago)
Are you really quoting an article from 2022?
@zen1tsu-sam (10 hours ago)
I think Altman means that, even within the previous paradigm, there are no walls or limits.
@zen1tsu-sam (9 hours ago)
The transformer architecture is like the rockets from 2,000 years ago, which the Chinese used to play with for fireworks. But then space rockets came into being. While both are essentially rockets, they are fundamentally different. In the same way, AI skeptics often focus on the transformer architecture, but the true innovation, the real product, is the space rocket, and it is far beyond those fireworks.
@tomcraver9659 (4 hours ago)
Within months, Meta will probably release an open-source inference-time model, with a version small enough to run adequately on a PC with a decent GPU. If they're clever, Meta will build in the (optional) ability to have the local model contact their cloud model (for a small fee) for advice on the 5% most critical planning steps of 'thinking step by step'. The LLM vs. neurosymbolic thing is a non-issue; if necessary, they'll be integrated, and AI progress will continue.
@notacreativenamee (15 hours ago)
It's like eating raw steak (LLMs) and expecting it to be a meal. Yes, high-quality raw steak makes the base for an amazing dish, but it's what you do with it that gets you that final delicious outcome.
@ct5471 (12 hours ago)
Imagine we get an AI winter… but it only lasts a few days, because the next paradigm was already introduced in the past. Did we even have an AI winter, then?
@blijebij (13 hours ago)
It makes sense, but it probably just introduces saturation for another series in time. In the not-too-distant future, it will likely shift to a hardware limitation.
@zandrrlife (15 hours ago)
First off, that research was about using test-time-training layers to solve ARC; OpenAI isn't using this because it's literally a brand-new architectural change lol. Learn what you speak on, bro. Concerning performance saturation: I'm not even going to write a long comment on why that is cap asf, outside the obvious financial reasons you would have for stating this. Models can't even get past AI Explained's SimpleBench. Even a GPT-3-to-GPT-4 jump plus test-time compute wouldn't let them saturate it, based on current performance. If they saturate any benchmark, it's because of data contamination. Gary is 100% right that symbolic-LM is the true solution, but the pretraining dataset needs to reflect that too. Imagine getting GPT-4-level performance from being fed only raw data; it's a miracle, honestly. It turns out, as I've said for over a year, this caveman style of performance scaling is misguided: throwing raw data at the model and trying to perturb behavior post-training was never going to work. Again, check Meta's "Physics of Language Models" series; all of that needs to be learnt at pretraining. Failed first principles. Research shows these models reason in a symbolic, autoregressive manner, so how about we build out from that? It informs everything else, including data composition.
@OpenAITutor (10 hours ago)
Nope, there's no Evaluator Wall; I am building a new LLM Evaluator that will change a few things. :)
@Justin_Arut (15 hours ago)
Admitting it won't live up to the hype lowers investor confidence. Gotta keep that money train rolling.
@jessedbrown1980 (11 hours ago)
There is no wall, because synthetic data will build a bridge over the human limit of available information. It's not actually a problem for deep learning to learn more with bigger systems, but they still need more data to get better, so we should move away from information silo-isolation if we want to keep the explosion going. It is not about the ability of the systems to improve; it is about the data that is available to them.
@brakertrax (13 hours ago)
So, they’re basically stating that look
@Cory-v4w (10 hours ago)
Give me that temporal precedence non-spuriousness. Just give it to me.
@supernewuser (15 hours ago)
bro talk into the microphone or equalize your sound levels
@froilen13 (14 hours ago)
Or use text-to-speech AI. It's not that hard.
@Unkn0wn1133 (9 hours ago)
OMG, I'm sure half of the "leakers" are just him, doing it to cause drama 😂
@speaknice8078 (13 minutes ago)
Hear that bubble "Pop" yet?
@Niels_F_Morrow (10 hours ago)
I view AI, in its current incarnation, as a tool! And with billions of dollars on the line and a company to defend, what else would you expect a CEO to do? I do not fault Sam for defending their product and their hard work. Where I find fault is when anyone ascribes, directly or indirectly, this tool to be more than what it really is... And since Intel and AMD have had nightmare releases of late, it may come down to what Jensen can do to extend/prolong using silicon in the longer term. And, realistically speaking, it is extremely problematic when you rely on huge amounts of power and larger and larger amounts of compute to compete with an organ that weighs 3 to 4 pounds, is slightly larger than both of your fists put together, and is extremely adept at using as much or as little of whatever you feed it.
@mediawolf1 (11 hours ago)
12:07 I disagree. I think highly advanced (by AI standards) reasoning is required for AIs to feel normal. Right now chatting with them feels like talking to a dumb machine with knowledge. Currently they can spout off on all manner of topics, and their output can be useful for learning something, but it's also transparent that they have no idea what they're talking about.
@boofrost (12 hours ago)
Mate, do you actually review your videos before you post them? Your cutting is rather on the rough side, I'd say.
@22_Letters (13 hours ago)
Both sides can't be right. The side claiming AI has hit a wall is wrong.
@zen1tsu-sam (10 hours ago)
OpenAI is definitely testing a new model, but it's certainly not Orion; if Orion even exists, it was likely trained at least a year ago. I believe they're currently focused on training o2.
@atypocrat1779 (15 hours ago)
maybe our jobs are safe for now.
@Trohn6969 (15 hours ago)
AI right now isn’t cost effective. It also is wrong a lot of the time. It’s not taking any jobs in the next 2-3 years. Maybe not even 5-10.
@theforsakeen177 (14 hours ago)
@Trohn6969 Bro, you're capping. X replaced 80% of its staff with AI, and 25% of Google's code is AI-generated. We're cooked.
@Jorn-sy6ho (4 hours ago)
There is no spoon
@draken5379 (4 hours ago)
Gary isn't right at all. He is talking about deep learning in general, all of it. He isn't just talking about 'GPT'. GPT is just a name; it means nothing. They are all LLMs.
@MrlegendOr (14 hours ago)
This can't be serious; everyone is inclined to trust everything that comes out of Sam's mouth. Everyone is forgetting that the main and only business he has is AI; of course he will never say that the only product he has is struggling to meet its growth objectives... You are all so naive 🙃
@googleaccountuser3116 (14 hours ago)
I thought AI was a real thing.
@alberteinstein8661 (15 hours ago)
AI winter
@RadiantNij (15 hours ago)
You guys (you, the Matts, and Wes) need to fact-check instead of feeding us hearsay... please 🙏🏾
@pablopepsibra8547 (7 hours ago)
Ending vids halfway through your sentence needs to stop. Other than that, keep it up, dude.
@Qui6Below (15 hours ago)
Never been this early to a vid
@avraham4497 (15 hours ago)
Never say “never”
@Qui6Below (15 hours ago)
@@avraham4497 never
@borisbalkan707 (12 hours ago)
Gary Marcus lol
@f0urty5 (15 hours ago)
Gary Marcus is a CLOWN and you should stop giving him airtime; you are harming the US AI industry by doing so.