Q-Star LEAKED: Internal Sources Reveal OpenAI Project "Strawberry" (GPT-5?)

  64,854 views

Matthew Berman

1 day ago

An article from Reuters has new information about Q-star and project Strawberry. Let's take a look!
Subscribe to my newsletter for your chance to win the Asus Vivobook Copilot+ PC: gleam.io/H4TdG...
(North America only)
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewber...
Need AI Consulting? 📈
forwardfuture.ai/
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
👉🏻 Instagram: / matthewberman_ai
👉🏻 Threads: www.threads.ne...
👉🏻 LinkedIn: / forward-future-ai
Media/Sponsorship Inquiries ✅
bit.ly/44TC45V
Links:
www.reuters.co...

Comments: 323
@matthew_berman
@matthew_berman 2 ай бұрын
Do you think this will be GPT-5? Subscribe to my newsletter for your chance to win the Asus Vivobook Copilot+ PC: gleam.io/H4TdG/asus-vivobook-copilot-pc (North America only)
@Azupiru
@Azupiru 2 ай бұрын
I am really excited for the AI to get better with Cuneiform and the various languages represented in cuneiform. It will frequently mix up signs even though it has the data concerning the correct answer from various sign lists. It's very strange that it hasn't connected them properly. I had to correct it multiple times earlier before it realized that it was wrong.
@LiquidAIWater
@LiquidAIWater 2 ай бұрын
No. If it is a totally different approach than transformers, then they should stay just barely ahead of the competition with iterations before rolling it out, since all angles have to be addressed, one of which is monetization. Otherwise, they'd just be copied by their competitors.
@laurennichelson7913
@laurennichelson7913 2 ай бұрын
I'm so excited for this giveaway. My grandma didn't see I had set my laptop on the hood of her car for her to grab (I was moving out of an abusive situation), and it flew off her car when she was on the highway, so I literally am a tech worker with no computer lmao
@karenrobertsdottir4101
@karenrobertsdottir4101 2 ай бұрын
This reminds me of an architecture idea I had for how to learn via synthesis at training time:
1) Use a multi-token prediction model, akin to the one Meta released recently, simultaneously processing the states of the current input tokens. Predict the upcoming tokens.
2) Using the current state, in a separate batch starting from the same point, first predict a custom "deduction" token that triggers logical deduction of additional short facts derived from the info that's been provided in the current context. (To add this behavior to unsupervised training, you'll need to start with unsupervised training, then do a finetune that teaches how to respond to a deduction token, then go back to unsupervised training.)
3) Generate (in a simultaneous batch) numerous short multi-token deductions - again, using the current state, so you're not having to recalculate the state (like you have to do with normal synthesis). This should be very fast.
4) To ensure that all deductions are sufficiently different from each other rather than all being the same thing, slightly "repel" their hidden states to increase their cosine distances from each other (see the sketch below).
5) Now you have a big batch of deductions of upcoming tokens, as well as the non-deduction prediction of the real upcoming tokens. Establish a gradient off of the batch and backpropagate.
The beauty part IMHO is that not only should it learn - quickly - from its (quickly generated) deductions about the world, but it should also learn to get better at deducing, because it's also learning how best to respond to the deduction token. E.g., the deduction process should self-reinforce.
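A minimal sketch of step 4 only (the cosine "repulsion"), assuming PyTorch; the tensor shapes, the loss weight, and the way it plugs into the training objective are stand-ins, not anything confirmed by the comment or the article:

```python
import torch
import torch.nn.functional as F

def cosine_repulsion_loss(hidden_states: torch.Tensor) -> torch.Tensor:
    """Penalize pairwise cosine similarity among candidate-deduction hidden states.

    hidden_states: (num_deductions, hidden_dim), one row per candidate deduction.
    Minimizing this pushes the rows apart, which is one way to read "repel their
    hidden states to increase their cosine distances".
    """
    normed = F.normalize(hidden_states, dim=-1)               # unit-length rows
    sim = normed @ normed.T                                   # pairwise cosine similarity
    sim = sim - torch.eye(sim.size(0), device=sim.device)     # ignore self-similarity
    return sim.clamp(min=0).mean()                            # only push apart similar pairs

# Hypothetical usage: add it (with a small weight) to the usual next-token loss.
deduction_states = torch.randn(8, 4096, requires_grad=True)   # stand-in for real states
loss = 0.1 * cosine_repulsion_loss(deduction_states)
loss.backward()
```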
@handsanitizer2457
@handsanitizer2457 2 ай бұрын
Until I see anything I'm going to assume it's gpt trying to stay in the news cycle. I.e. fake leak
@Douchebagus
@Douchebagus 2 ай бұрын
Sounds like OpenAI is feverishly trying to generate hype while Claude 3.5 Sonnet is slapping their asses.
@arxmechanica-robotics
@arxmechanica-robotics 2 ай бұрын
Very true.
@dot1298
@dot1298 2 ай бұрын
But OpenAI has one advantage: users don't need phones to access ChatGPT/4o, whereas you need a phone to access Claude 3.5 at Anthropic…
@dot1298
@dot1298 2 ай бұрын
For me this is an absolute showstopper for C3.5, as LMSYS doesn't support threads.
@HIIIBEAR
@HIIIBEAR 2 ай бұрын
But Q-star is a peer-reviewed paper. There is zero chance OpenAI didn't apply the tech.
@HIIIBEAR
@HIIIBEAR 2 ай бұрын
@@arxmechanica-robotics Not at all true. Q* is detailed in a peer-reviewed paper.
@fynnjackson2298
@fynnjackson2298 2 ай бұрын
All talk - no shipping from OpenAI. Matt, you are pumping out top-notch vids; your efforts are much appreciated.
@ploppyploppy
@ploppyploppy 2 ай бұрын
Is this the new thing? YouTube title: "Leaked". Seems to be a lot of 'leaks'. So many, in fact, that they have become meaningless.
@misterfamilyguy
@misterfamilyguy 2 ай бұрын
Yeah, leaks are the new content, right? Just like comments and likes, pretty soon the algorithm will be promoting for leaks.
@yak-machining
@yak-machining 2 ай бұрын
Views, views, views. YouTubers want views.
@misterfamilyguy
@misterfamilyguy 2 ай бұрын
@@yak-machining I don't know, are you sure maybe you watch it one more time?
@HIIIBEAR
@HIIIBEAR 2 ай бұрын
Not at all. It's only meaningless if you're anti-science, because Q* is a peer-reviewed concept that we know OpenAI is adapting.
@mpvincent7
@mpvincent7 2 ай бұрын
Hyper-meaningful but I agree, lots of background noise and lots of frauds...
@fuzzylogicq
@fuzzylogicq 2 ай бұрын
So basically "make some noise about us again because we are not the only big shots in the game" and all that AGI internally and redefining things without showing any new results. Will believe it when i see it (like the omni voice etc)
@ringpolitiet
@ringpolitiet 2 ай бұрын
This video contains zero leaks or revelations.
@qwazy01
@qwazy01 2 ай бұрын
Forgot to preface with "Spoiler Alert"
@mrbrent62
@mrbrent62 2 ай бұрын
Saying you are the first comment and actually being 4th is like asking ChatGPT to tell you how many words are in your answer.
@panzerofthelake4460
@panzerofthelake4460 2 ай бұрын
bro why would you even care!!!
@matthew_berman
@matthew_berman 2 ай бұрын
even humans hallucinate
@ryzikx
@ryzikx 2 ай бұрын
first
@kniiz
@kniiz 2 ай бұрын
🤣
@vaisakh_km
@vaisakh_km 2 ай бұрын
Not the same, because there is a delay on YouTube's part, so the 4th person couldn't see the other 3 comments... and that's not hallucination...
@herteert299
@herteert299 2 ай бұрын
I asked ChatGPT the question you're talking about at 6:50 and its answer is correct. Look at the answer:
My question: Imagine that you are at the North Pole and I ask you to walk 200m in one direction, then you must turn left and walk 200m. Question: have you crossed your starting point?
ChatGPT's answer: "No, you have not crossed your starting point. Here's why:
1. You start at point A at the North Pole.
2. You walk 200 meters in any direction (say south).
3. Then you turn left (let's say this means you start walking east) and walk another 200 meters.
At the North Pole, any initial direction is effectively a southward movement. So after walking south, when you turn left, you are walking parallel to a latitude (parallel circle). To better understand, let's consider some specific cases:
- **If you are at the exact geographic North Pole**: Any initial direction is south. After turning left, you walk 200 meters east or west around the pole, and you have not crossed your starting point.
- **If you are not exactly at the North Pole, but close to it**: After 200 meters towards the south, you are at a certain latitude. Turning left (east) and walking 200 meters will take you to another location along this latitude. You do not cross your starting point.
In these cases, your walk does not take you to your starting point, because the directions change depending on your initial position and the curvature of the Earth."
Matthew, did you test the question?
@lucifermorningstar4595
@lucifermorningstar4595 2 ай бұрын
I don't know why, but Yann LeCun has the ability to always be wrong.
@paulmichaelfreedman8334
@paulmichaelfreedman8334 2 ай бұрын
The better puzzle to give is: 1. You start at the North Pole. Walk 200 meters south. 2. Then walk 200 meters east. 3. Finally, walk 200 meters north. Where do you end up? Now, starting anywhere but at the North Pole you'd end up somewhere else than where you started. That's a good nut to crack for an AI. Or: Imagine a house. Each of the 4 walls of the house faces south. A bear walks by; what color is it? These are examples of puzzles for a 12-year-old, which seems fitting for the current state of AI in general.
@herteert299
@herteert299 2 ай бұрын
@@paulmichaelfreedman8334 Ok... This is the answer ChatGPT gave me: "The house described in the riddle has all four walls facing south, which implies that the house is located at the North Pole. At the North Pole, the only direction away from the house is south. The bear in this location would be a polar bear, which is white. Therefore, the bear is white." 😄 Amazing!
@wurstelei1356
@wurstelei1356 2 ай бұрын
@@herteert299 What if the house has only windows on the other 3 sides and stands in Texas ?
@mpvincent7
@mpvincent7 2 ай бұрын
Can't wait for it to be released! Thanks for keeping us up-to-date!!!
@oiuhwoechwe
@oiuhwoechwe 2 ай бұрын
Strawberry, out of the Cheech and Chong films. lol
@ColinTimmins
@ColinTimmins 2 ай бұрын
My mom and dad loved their movies when my brother showed them all those years ago! That was a good memory for me. =]
@arxmechanica-robotics
@arxmechanica-robotics 2 ай бұрын
For AI to give you accurate information about how the physical world operates, it will need a physical robot body to experience context.
@NakedSageAstrology
@NakedSageAstrology 2 ай бұрын
Or just be trained on first person view cameras that people can wear all day, and then submit at night.
@arxmechanica-robotics
@arxmechanica-robotics 2 ай бұрын
@NakedSageAstrology Yes, however interacting with the physical surroundings and objects, using arms etc., will increase this a lot more than cameras alone.
@NakedSageAstrology
@NakedSageAstrology 2 ай бұрын
@@arxmechanica-robotics Perhaps you are right. Maybe we need to go deeper, direct access to the nervous system and the information flow through it.
@arxmechanica-robotics
@arxmechanica-robotics 2 ай бұрын
@@NakedSageAstrology it's at least the approach we are taking. Time will tell.
@tvolk131
@tvolk131 2 ай бұрын
Or a fairly accurate simulation of the real world. I'm not sure how computationally feasible this would be, but I find it interesting that it might be possible for an AI to have a (relatively) accurate internal model of the physical world without having ever interacted with it directly.
@gab1159
@gab1159 2 ай бұрын
Honestly, OpenAI is really underwhelming. They're now in full-on hype mode as they're losing market share to Anthropic. They've fallen into the typical Silicon Valley trap of overhyping and underdelivering. I'm getting quite annoyed with them. Hearing them talk about what they're doing, you'd expect we'd already have AGI by now, when in reality they've made close to no progress since GPT-4 (4o is worse...).
@4.0.4
@4.0.4 2 ай бұрын
4o, worse? At what?
@michaelwoodby5261
@michaelwoodby5261 2 ай бұрын
Do we hate OpenAI for shipping a product for money instead of focusing on research, or for making strides in the tech but not giving us access? Those are mutually exclusive.
@4.0.4
@4.0.4 2 ай бұрын
@@michaelwoodby5261 for not being *_open_* AI, personally.
@Avman20
@Avman20 2 ай бұрын
As Matthew mentioned, we're definitely seeing a slowing of the cadence of significant releases from OpenAI (and other frontier model companies). This likely has mostly to do with safety. As these models get ever more complex, the concern of the developers must be shifting towards understanding emergent capabilities. When you give a machine the ability to internally reflect on problems from multiple angles by recombining trained knowledge, I think any possibility of understandability goes right out the window. The question then is, how do you establish effective guardrails on such a system?
@furycorp
@furycorp 2 ай бұрын
It has nothing to do with safety; there is plenty of public research and there are plenty of models to review. The tech is what it is.
@justindressler5992
@justindressler5992 2 ай бұрын
I have been thinking about the multi-shot behaviour of LLMs lately. I feel like when asking a model to correct a mistake, it always falls back to finish-the-sentence or fill-in-the-blanks behaviour. I started to think the problem is that the model has no noise: it looks at the previous examples, but it already came up with the most probable answer the first time. Maybe what needs to happen is that during multi-shot operations, intentional noise is added in the areas where the errors are. Kind of like how stable diffusion models work: start with a noisy signal, then iterate to form a coherent answer.
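A rough sketch of that idea, assuming access to the raw logits at sampling time; the function name and the noise schedule are hypothetical, just to make the diffusion analogy concrete:

```python
import torch

def sample_with_retry_noise(logits: torch.Tensor, attempt: int,
                            noise_scale: float = 0.5) -> torch.Tensor:
    """Sample a token, adding more Gaussian noise to the logits on each retry.

    Attempt 0 samples from the plain distribution; later correction attempts get
    progressively noisier logits, so the model doesn't just re-walk the same
    most-probable path (loosely analogous to the noisy start in diffusion models).
    """
    noisy = logits + attempt * noise_scale * torch.randn_like(logits)
    probs = torch.softmax(noisy, dim=-1)
    return torch.multinomial(probs, num_samples=1)

# Hypothetical usage over three correction attempts.
vocab_logits = torch.randn(32000)            # stand-in for one position's logits
for attempt in range(3):
    token_id = sample_with_retry_noise(vocab_logits, attempt)
```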
@MilesBellas
@MilesBellas 2 ай бұрын
Topic idea: Ollama integration with Stable Diffusion, ComfyUI, Maya/Blender, Krita and Griptape nodes?
@BlueBearOne
@BlueBearOne 2 ай бұрын
The way that reads to me suggests there hasn't been a breakthrough with the LLM itself, but maybe an ancillary process that processes the information? The way I see the brain is that it isn't one organ. Unconventional, but it fits. Sections of it operate in a chorus, an orchestrated dance, to deliver significantly more than the sum of its parts. I've often wondered if this is the way to true AGI: an LLM for each of these "parts", and each LLM would have its own ancillary process mimicking the brain. Of course, that is so much easier said than done, now isn't it?
@seanmurphy6481
@seanmurphy6481 2 ай бұрын
When OpenAI does announce their next model, do you think they'll actually release it at this rate? Sora got announced, it was never released. GPT4o voice mode got announced and it was never released. I'm beginning to wonder if anything will actually come out from OpenAI?
@mikesawyer1336
@mikesawyer1336 2 ай бұрын
People keep saying their voice model wasn't released, but I have it on my Android phone? Are people talking about the desktop version?
@BlayneOliver
@BlayneOliver 2 ай бұрын
😂😂 well said
@1sava
@1sava 2 ай бұрын
What are you talking about? OpenAI has been shipping things on time since their release. The hiccups with GPT-4o are because they rushed its announcement to steal the spotlight from Google's I/O event. Sora was also announced for the same reason. Also, they're factoring in the US election in November. Releasing GPT-5 and Sora prior to that might be detrimental. GPT-5 is done training and safety testing, I believe. The biggest hurdle towards its release is probably the huge computational infrastructure it requires, which is probably why Microsoft is now turning to Oracle to help them with OpenAI.
@1sava
@1sava 2 ай бұрын
@@mikesawyer1336 The voice mode you have access to right now is the old voice mode they released last year. People are talking about the new voice mode with frame-by-frame video monitoring and voice/narration effects.
@BlayneOliver
@BlayneOliver 2 ай бұрын
@@1sava hoorah!
@drlordbasil
@drlordbasil 2 ай бұрын
Can't wait to touch this model with my keys.
@ryzikx
@ryzikx 2 ай бұрын
strawberry reasoning 🤤
@aim2helpU
@aim2helpU 2 ай бұрын
I worked in this field 30 years ago and solved most of the problems you're talking about. I walked away because I was worried about who would use it. I agree with your thoughts about keeping it local. I still think the world isn't really ready for this. Too much medieval thinking on the part of our politicians.
@billywhite1403
@billywhite1403 2 ай бұрын
I'm a little confused, maybe somebody could help me understand. It seems All of the LLMs can already do a bit of reasoning, The leading ones more than the others. Many of them show the kind of initiative that suggests understanding of intention, which itself is contiguous or synonymous with planning. So I don't see what's different than what we already have - both in approach and output - except possibly increased resources to devoted to planning, memory, maybe incorporating agents. But I don't see the sea change, does anybody else?
@yuval_kahan
@yuval_kahan 2 ай бұрын
Was waiting for your video about it because I wasn't sure the Q-star leak was real. Thank you!
@halneufmille
@halneufmille 2 ай бұрын
From an outsider's perspective, this Q-star / strawberry thing sounds like everything LLMs wish they were. Sort of saying to investors "We know Chat GPT still hallucinates and in no way justifies the billions we put in, but there's this top secret project right around the corner that will solve everything so just keep giving us money."
@dmitriitodorov968
@dmitriitodorov968 2 ай бұрын
I'm pretty sure at some point we'll reach a point where technically AGI exists, but running it is much more expensive than using a human. I mean, not because it's not optimized, but because of fundamental constraints and energy costs.
@brianlink391
@brianlink391 2 ай бұрын
I literally wrote a paper on how to reutilize existing data, but I used AI to help out. Now I'm paranoid that someone intercepted it and is using it too. I guess it's okay, but still... how ironic is that?
@dennisg967
@dennisg967 2 ай бұрын
Thanks for the video. I just had the thought that maybe if you keep asking a model the same questions, and you do it in public, those questions are actually leaked into the training set of a model, and the model responds correctly the next time simply because it has already seen this problem along with the solution. Try rephrasing your problems each time you pose it to a model. See if it changes the model's answer.
@MichaelForbes-d4p
@MichaelForbes-d4p 2 ай бұрын
The scale they released for gauging the progress toward AGI doesn't make any mention of AGI. Which level is AGI... AGI?
@ishanparihar4032
@ishanparihar4032 2 ай бұрын
What I think this could be is different models, like an MoE, at each step of the thinking process, working in a sequential, thinking-like process where each step is worked on by a single model in the MoE.
@jasonshere
@jasonshere 2 ай бұрын
OpenAI's 5 Level Scale System seems a bit weird. It doesn't appear to be a linear scale and there appears to be a lot of crossover so that an AI can achieve different parts of multiple levels.
@JG27Korny
@JG27Korny 2 ай бұрын
Meta has terminology of type 1 and type 2 prompting. "Type 2 prompting" refers to techniques that go beyond simple input-output mappings, delving into more complex, multi-step problem-solving or task-completion approaches with prompts and/or agents. The next step is to use all of your context window to ride the wave. The AI is at its best when you are deep into the conversation, and there LLMs with longer context windows have their edge. Nobody is testing the models where it matters. The reason is that towards the end of the context window the AI already has a world model built from the history of your interaction, and that is unique. From a practical standpoint: make and continue longer conversations, and use type 2 prompting, which can be agents or a single prompt with reasoning (see the sketch below). For that reason Perplexity works very well, as it uses search engine results as a world model. And the open-source Perplexica with type 2 prompting, even with a small local LLM, competes with the heavyweights Perplexity Copilot and Bing Copilot.
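One possible reading of "type 2 prompting" is a draft/critique/revise loop over the same growing context; in this sketch `llm` is a placeholder for whatever model you call, not a real API:

```python
def llm(prompt: str) -> str:
    """Placeholder for whatever chat model you call (API or local); not a real API."""
    raise NotImplementedError

def type2_answer(question: str, rounds: int = 2) -> str:
    """Draft, critique, revise: later steps lean on the context built by earlier ones."""
    draft = llm(f"Answer step by step:\n{question}")
    for _ in range(rounds):
        critique = llm(f"Question:\n{question}\n\nDraft answer:\n{draft}\n\n"
                       "List any logical errors or missing steps.")
        draft = llm(f"Question:\n{question}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\n"
                    "Write an improved answer.")
    return draft
```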
@hotlineoperator
@hotlineoperator 2 ай бұрын
OpenAI's roadmap to AGI says this will be a level 2 product, on a scale of 1 to 5. The current GPT is a level 1 product. It's expected to be ready at the end of this year, and we'll see when it is released to the public.
@zhanezar
@zhanezar 2 ай бұрын
OpenAI needs to show something really amazing; Claude 3.5 Opus is probably there on standby.
@1sava
@1sava 2 ай бұрын
OpenAI has nothing to worry about. Anthropic is barely catching up to GPT-4, a model that was done training 2 years ago. Also, although Anthropic Artifacts is a good feature, their overall interface doesn’t measure up to OpenAI’s
@Pawnsappsee
@Pawnsappsee 2 ай бұрын
Dude, 3.5 Sonnet is far superior to OpenAI's GPT-4.
@szghasem
@szghasem 2 ай бұрын
Sounds like the corresponding debates in the 90s on AI as it was understood back then. Is it time for a radically new idea to get us to AGI, using LLMs...?
@ScottAshmead
@ScottAshmead 2 ай бұрын
To think local models are not going to send anything back to a corp is like saying your phone will not send any information to Google/Apple... we can hope though.
@OviDB
@OviDB 2 ай бұрын
I wonder if pretraining is where going over the same data over and over and over again happens, until the transformer learns the underlying relations.
@Webnotized227
@Webnotized227 2 ай бұрын
Seems like Anthropic forgot they had an Android app to release until they watched your video yesterday.
@jaymata1218
@jaymata1218 2 ай бұрын
I suspect at some point, OpenAI will no longer be open to the public. We'll continue to get basic models, but the advanced versions will be deemed too dangerous for the masses. We might enter an era where the top 1% of the population has access to advanced models that solidify their position and advance so rapidly that open source can't keep up.
@Hailmich10
@Hailmich10 2 ай бұрын
Matthew, thanks for the video; I appreciate your comments on potential math problems. Is there any hard evidence that OpenAI or others have figured out how their models currently perform on various math problems? This capability seems easy to measure (is the model performing at the college level, master's level, PhD level, etc.). IMO, if we are still at the high school level in terms of math ability, we are some time away from AGI/SGI, and math ability will be an important predictor of reasoning ability and of where we are on the trajectory towards AGI/SGI.
@mesapysch
@mesapysch 2 ай бұрын
As an annotator, this is great news. LLMs are horrible with logic.
@denijane89
@denijane89 2 ай бұрын
I really don't understand the struggle for AGI. I like Claude, it's useful, it helps you do things, that's the idea of the AI - to get an AI assistant that will save you time. The type of fully independent AGI they describe is really questionable goal. It's like having a kid and expecting it to solve all your problems. That never happens.
@user-wr4yl7tx3w
@user-wr4yl7tx3w 2 ай бұрын
sounds like aspiration still, the same thing everyone is working on as well.
@MyLittleBitOfEverything
@MyLittleBitOfEverything 2 ай бұрын
I said take as much time as you need. So the agent just told me it needs 7.5 million years to answer the Ultimate Question Of Life, The Universe, And Everything.
@gatesv1326
@gatesv1326 2 ай бұрын
I would think that Level 4 OpenAI is referring to should be what we expect of AGI. Level 5 to me is ASI.
@oguretsagressive
@oguretsagressive 2 ай бұрын
12:10 this looks severely undercooked. Level 3 can be achieved before level 2. Level 5 is claimed to be lower than AGI, but a single human is a GI that cannot perform the work of an entire organization, therefore is below level 5.
@OnigoroshiZero
@OnigoroshiZero 2 ай бұрын
With OpenAI's scale, their Level 5 is actually ASI, AGI is their Level 3, and we will have it before the summer of 2025 at the latest.
@executivelifehacks6747
@executivelifehacks6747 2 ай бұрын
Yes. They have a large vested interest in delaying when their AI is classed as AGI. Exercise in narrative control
@zubbyemodi6356
@zubbyemodi6356 2 ай бұрын
I actually think OAI has developed, and is ready to deploy, a couple of models that would blow our minds, and they are really just waiting to be in step with the acceptable rate of AI evolution, so it doesn't scare people too much.
@4lrk
@4lrk Ай бұрын
Level 3 is basically Tony Stark's Jarvis.
@WJohnson1043
@WJohnson1043 2 ай бұрын
In an agentic system of LLMs, why not have a mode where one of its LLMs is trained by the others so that the system on the whole learns?
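One way to make that concrete is plain knowledge distillation, where one model is trained to match the output distribution of another; a minimal sketch in PyTorch, with toy tensors standing in for real model outputs:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Standard knowledge-distillation loss: train the student to match the
    (frozen) teacher's softened token distribution."""
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_logp = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(student_logp, teacher_probs, reduction="batchmean") * temperature ** 2

# Toy usage with made-up shapes (batch, vocab); in the agentic setting the
# "teacher" logits would come from the other models in the system.
student_logits = torch.randn(4, 32000, requires_grad=True)
teacher_logits = torch.randn(4, 32000)
distillation_loss(student_logits, teacher_logits).backward()
```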
@questionableuserr
@questionableuserr 2 ай бұрын
Perplexity AI already has capabilities just like this, though, except for the multi-step aspect. You should do a video testing it out.
@rudomeister
@rudomeister 2 ай бұрын
This depends on whether "Strawberry" can solve issues in my Keras prediction models, and not just pretend to.
@nick1f
@nick1f 2 ай бұрын
Very exciting changes. I can't wait to test ChatGPT 5 when it's released. And who knows how far AI will advance in the next five years...
@cagdasucar3932
@cagdasucar3932 2 ай бұрын
I thought Q* was supposed to be the application of AlphaGo's learning algorithm to ChatGPT. I think it's basically controlled chain of thought with Monte Carlo tree search. Am I wrong?
@andrewsilber
@andrewsilber 2 ай бұрын
That's certainly what it sounds like, though people's interps seem to suggest post-training RL to tease out more reasoning capability. But from my perspective, it seems clear that indeed what needs to be done is to build a System 2 agent using DQL which leverages LLMs as just one tool among many. As far as I am aware, current LLM inference is not recurrent or arbitrarily recursive -- it's just a straight shot through the decoder, which would put an upper bound on the amount of "work" it can get done. I believe OpenAI did mention however that whatever it was they were doing would be capable of doing protracted research projects, which certainly does point in that direction.
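For readers who want the "search over reasoning steps" idea in concrete form, here is a deliberately simplified best-first sketch; a real Q*-style system would presumably use MCTS with rollouts and a trained value network, and nothing here is confirmed about OpenAI's actual method. `propose` and `score` are hypothetical callables (an LLM proposing next steps and a reward/value model):

```python
import heapq
from typing import Callable, List, Tuple

def search_reasoning(question: str,
                     propose: Callable[[str], List[str]],   # e.g. an LLM proposing next steps
                     score: Callable[[str], float],         # e.g. a reward/value model
                     max_expansions: int = 50) -> str:
    """Best-first search over partial chains of thought."""
    best_text, best_score = question, score(question)
    frontier: List[Tuple[float, str]] = [(-best_score, question)]   # max-heap via negation
    for _ in range(max_expansions):
        if not frontier:
            break
        _, partial = heapq.heappop(frontier)                # most promising partial chain
        for step in propose(partial):
            candidate = partial + "\n" + step
            s = score(candidate)
            if s > best_score:
                best_text, best_score = candidate, s
            heapq.heappush(frontier, (-s, candidate))
    return best_text

# Hypothetical usage: plug in your own propose/score functions.
# answer = search_reasoning("Prove that ...", propose=my_llm_steps, score=my_value_model)
```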
@rghughes
@rghughes 2 ай бұрын
I don't think it will be level 2; it'll likely be "2-like" or a 'proto-2'. Something to keep the investors investing long enough for them to _actually_ reach 'proper-level-2'. Just my 2 cents.
@samueltucker8473
@samueltucker8473 2 ай бұрын
Beware of the rush to market missing unknown variables, the extra complexity of inter-compartmental effects, and programming that sometimes taps the pendulum at the wrong moment.
@honkytonk4465
@honkytonk4465 2 ай бұрын
Sounds super cool
@Yipper64
@Yipper64 2 ай бұрын
12:35 I believe we will be forever stuck on level 3. Unless we start with something entirely different.
@mihailion2468
@mihailion2468 2 ай бұрын
LeakAI
@marcc0183
@marcc0183 2 ай бұрын
Hey Matthew, a question that has nothing to do with this video: a long time ago you said that you were going to clone yourself, but I haven't found anything... would it be possible to do it with the new technologies and models that exist now?
@marco114
@marco114 2 ай бұрын
OpenAI needs to release what they teased us with.
@TheRagingUnprofessional
@TheRagingUnprofessional 2 ай бұрын
Everyone is doing it wrong. The data sets are simply too large for current tech, we need to break each model down into categories or purposes. A one-size-fits-all model will NEVER out-perform purpose based models.
@Luizfernando-dm2rf
@Luizfernando-dm2rf 2 ай бұрын
It will if it can cross-reference different domains to get better results. Is that possible or feasible, though? No idea.
@4.0.4
@4.0.4 2 ай бұрын
I'm sure those fools at multi billion dollar AI companies have no clue what they're doing, you go tell them 😂
@ronaldpokatiloff5704
@ronaldpokatiloff5704 2 ай бұрын
our universe is total AI
@MichaelChanslor
@MichaelChanslor 2 ай бұрын
5:22 - Thank you!
@misterfamilyguy
@misterfamilyguy 2 ай бұрын
I'm excited for this. I really believe that this will improve everyone's lives immensely. I just don't know all of the ways that it will.
@mickelodiansurname9578
@mickelodiansurname9578 2 ай бұрын
Can someone correct me if I got this wrong... My understanding is this is an alteration to the standard methodology of fine-tuning, right, and they own the algos, so they do the fine-tuning compute. Okay, fine, got it. Now, will they allow me to fine-tune GPT-4o on my own dataset? Or is it just a one-off fine-tune of their choice, take it or leave it? I'm going to assume this is an alteration to the algorithms used in fine-tuning, which till now are the regular Adam and AdamW (transformer) and gradient descent... mostly. So this is something new? Have I got this right?
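For context, the "regular" recipe that comment refers to is just gradient descent driven by an optimizer like AdamW; a toy sketch of that loop in PyTorch is below (the model, data and hyperparameters are stand-ins, and nothing public says how, or whether, Strawberry changes this):

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 2)                        # toy stand-in for the network being tuned
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(16, 128)                         # stand-in for a fine-tuning batch
y = torch.randint(0, 2, (16,))                   # stand-in labels

for step in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)                  # forward pass
    loss.backward()                              # backprop computes gradients
    optimizer.step()                             # AdamW applies the gradient-descent update
```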
@PeteBuildsStuff
@PeteBuildsStuff 2 ай бұрын
Thank you, Berman. Question: are you generally afraid of, or generally bullish on, this tech in general? It's my theory that the more informed a person becomes, the less they say "yeah, but it's just weird, or just a little trippy," or express some other form of being afraid without being sure why. What do you think?
@leomaxwell972
@leomaxwell972 2 ай бұрын
Digging through my registry, AI be installing some weird stuff, like TrollTech, and Project QT, any chance it's related? xD
@JRS2025
@JRS2025 2 ай бұрын
I feel like this is named after LLMs' struggle to recognise that "strawberry" has 3 R's in it.
@I-Dophler
@I-Dophler 2 ай бұрын
Pray for Donald Trump, god save the Trump.
@Sorpendium
@Sorpendium 2 ай бұрын
Strawberry is a really good name and they should use it for their most advanced model. If strawberry comes out and it's not very good I'm going to be disappointed. 😂
@kacperbochan5597
@kacperbochan5597 2 ай бұрын
Am I the only one who saw pimples instead of strawberries on the thumbnail?
@bujin5455
@bujin5455 2 ай бұрын
7:50. I'm not sure why everyone is saying that OpenAI is holding back GPT-4o's voice, when I've had this feature for months now. I think it's just on limited rollout.
@Pawnsappsee
@Pawnsappsee 2 ай бұрын
Man, it's an improved version of that voice feature; nobody has access to it.
@bujin5455
@bujin5455 2 ай бұрын
@@Pawnsappsee If you say so. It does all the things that were shown in the demo. Not to mention that it initially had the Sky (Scarlett Johansson) voice (which has been removed at this point).
@Hohohohoho-vo1pq
@Hohohohoho-vo1pq 2 ай бұрын
@@bujin5455 If you have the new voice feature you will have a camera option while using voice mode; do you have it? The old voice mode is just text-to-speech and speech-to-text. The new voice mode is not like that; it's actually trained to input and output sound.
@bujin5455
@bujin5455 2 ай бұрын
@@Hohohohoho-vo1pq is that the only improvement? Camera operation? I thought the entire thing was new. I certainly didn't have it prior to the announcement, I had to go update my app. At the time, they said it was available now, and I spent a couple of hours talking to Sky. Which disappeared a couple of days later. I don't know if it had the camera at the time when I was first using it. It doesn't have it now.
@Hohohohoho-vo1pq
@Hohohohoho-vo1pq 2 ай бұрын
@@bujin5455 what. I literally explained to you after that that it's a completely different thing from the old voice mode. No one has the new voice mode yet. Go back and read the last paragraph of my comment.
@TheBann90
@TheBann90 2 ай бұрын
So it's not really leaked, just another rumor of a product in development that is still 16-18 months away. And by then it will already have 10 competitors... OpenAI have lost it...
@colonistjester1552
@colonistjester1552 2 ай бұрын
I have found that Sonnet 3.5 and all Claude services don't serve well at scale.
@robertheinrich2994
@robertheinrich2994 2 ай бұрын
Is it like with the laptop? Oh, it's a new AI, but only for one area of the world. Sorry, still salty about you offering an email newsletter and a way to win a laptop, just to learn: no, you are neither allowed to participate in winning a laptop nor are you allowed to actually subscribe to the newsletter.
@rakoczipiroska5632
@rakoczipiroska5632 2 ай бұрын
So the Strawberry makers may have discovered practising as an element of learning. Who would have thought that? 😉
@maudentable
@maudentable 2 ай бұрын
OpenAI is already testing GPT-5. Haven't you realized GPT-4o is occasionally super-smart and super-slow?
@djayp34
@djayp34 2 ай бұрын
People are complaining about Sora or the voice update. I can't wait for the "But what the h* is Anthropic doing?" era. Hype machine at its best.
@camelCased
@camelCased 2 ай бұрын
Right, it is important to think in concepts, not words or tokens (especially not just spitting the next most likely word/token based on statistics alone). Also, humans learn fast because they are aware of their weak spots and can initiate self-learning. Children ask questions about things they want to know and they receive not only textual but also sensory information. And, of course, the feedback loop is important for self-critique and validation to know what I know and what I don't know. Maybe one day we'll indeed have constantly-learning AI algorithms that become smarter based on their experience with millions of users and real-world sensory input. Sensory input should always have higher priority than any textual information because physical world does not lie and come up with different interpretations, it just is what it is.
@attilazimler1614
@attilazimler1614 2 ай бұрын
I agree, the current version of ChatGPT is really dumb - and it got there from being slightly less dumb prior :D
@marcusk7855
@marcusk7855 2 ай бұрын
When the logic and reasoning are equal to or better than human reasoning, we will have AGI. And there is no reason a computer won't far outdo human reasoning with the right algorithm.
@jayv_tech
@jayv_tech 2 ай бұрын
The first thing I learnt: how to pronounce Reuters 😆
@jarnMod
@jarnMod 2 ай бұрын
I'm an investor by trade, pardon the pun, but I have no idea what the energy consumption of AI development will be.
@gamersgabangest3179
@gamersgabangest3179 2 ай бұрын
I am still waiting for the chatGPT voice thing.
@AINMEisONE
@AINMEisONE 2 ай бұрын
Think Strawberry Fields Forever. Isn't it revealing, the names being used in AI... are we going back to the LSD era?
@jim-i-am
@jim-i-am 2 ай бұрын
What you describe around 3:10 sounds a LOT like grokked models, i.e. overfitting models during training is a sign of memorization... then you continue training until underpinning "models" emerge. (Wow, that made it sound really easy to get that convergence. It's not.) Just remember: 42 :D @code4AI has some really good videos on the subject if you're interested in diving down the rabbit hole.
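For anyone curious what "grokking" looks like in practice, here is a toy sketch of the classic setup (a small network on modular addition, strong weight decay, trained far past the point of memorizing the training split); the architecture and hyperparameters are illustrative, not any particular paper's exact recipe:

```python
import torch
import torch.nn as nn

P = 97
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))   # all (a, b) pairs
labels = (pairs[:, 0] + pairs[:, 1]) % P                         # target: (a + b) mod P
perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(pairs) // 2], perm[len(pairs) // 2:]

model = nn.Sequential(nn.Embedding(P, 64), nn.Flatten(),
                      nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, P))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(100_000):   # the point: keep training long after train accuracy hits 100%
    opt.zero_grad()
    loss_fn(model(pairs[train_idx]), labels[train_idx]).backward()
    opt.step()
    if step % 10_000 == 0:
        with torch.no_grad():
            acc = (model(pairs[test_idx]).argmax(-1) == labels[test_idx]).float().mean()
        print(f"step {step}: held-out accuracy {acc:.3f}")
```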
@MikeWoot65
@MikeWoot65 2 ай бұрын
It's clear Reasoning will be the thing that gets us to whatever AGI is
@hypersonicmonkeybrains3418
@hypersonicmonkeybrains3418 2 ай бұрын
The general public, the common man on the street will never even get to use a Level 3 AI, because if they did the first thing they would use it for is writing a prompt that ordered the AI agent to make them lots of money, and if everyone did that, then it would crash the financial system and many other things that governments don't want peasants being able to do. If we are to see Level 3 AI then it will be heavily restricted and lobotomized such that it can't be used to make money, and for that reason the public will never be given access to AGI. Don't hope for it or wish for it, it's never going to happen.
@szebike
@szebike 2 ай бұрын
An openai "Level 3" bot running locally sounds dangerous in my opinion if it can reason and is smart don't know if its a good idea... That being said having a giant levle 3 AI in the hands of OpenAI or Microsoft or Meta doesn'T sound like fun either.
@MariuszKen
@MariuszKen 2 ай бұрын
Yeah, and lvl 7 knows like a god, lvl 10 knows so much that it can create new universes... every AI knows it.
@florisvaneijk785
@florisvaneijk785 2 ай бұрын
I just bought my own AI computer with these specs: 2x RTX 4070 12GB VRAM, Intel Core i5 14400F, 64GB DDR5 memory. Any recommendations on what models to use? I want a ChatGPT replacement so I can use it for my job without the privacy concerns.
@dot1298
@dot1298 2 ай бұрын
try (Meta) Llama 3 8B or even 70B
@dot1298
@dot1298 2 ай бұрын
gemma2 is also (reportedly) good
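If you go the local route, a common setup is a local inference server plus a tiny client script; a sketch assuming an Ollama server on its default port and a model you've already pulled (e.g. `ollama pull llama3:8b`):

```python
import requests

# Assumes a local Ollama server on its default port and a model you've pulled.
# With 2x 12GB of VRAM an 8B model runs comfortably; a 70B model only fits
# heavily quantized and will spill into system RAM, so expect it to be slow.
def ask_local(prompt: str, model: str = "llama3:8b") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local("Summarize this email in two bullet points: ..."))
```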
@underbelly69
@underbelly69 2 ай бұрын
Why not let OpenAI access your model's agentic findings? It benefits everybody down the line: evil can be audited, novel things can be shared.
@ps3301
@ps3301 2 ай бұрын
We should call it raspberry
@Foloex
@Foloex 2 ай бұрын
By "leaked" you mean "announced" ? You make great content, please refrain from using clickbait titles, I find it disrespectful to your audience.
@DanTheBossBuster
@DanTheBossBuster 2 ай бұрын
Does anyone appreciate the irony here... the "hard part" is getting the computers to be logical :)
@3thinking
@3thinking 2 ай бұрын
Me in 2022: ChatGPT is genius! Mind blown! Me in 2024: This thing is so dumb...
@aditya_p_01
@aditya_p_01 2 ай бұрын
Strawberry 🍓
@Darkt0mb5
@Darkt0mb5 2 ай бұрын
They purposely release leaks because everybody's bored with this.
@robertvliegenthart5527
@robertvliegenthart5527 2 ай бұрын
So the "Titanic talk" is actually complete nonsense and investors will be left holding the bag as it goes higher...? Would appreciate an answer ✌🏼
@Zalktislaima
@Zalktislaima 2 ай бұрын
5 sounds like early ASI rather than AGI to me. AGI seems like somewhere between 3-4. Most humans seem to be more in the 3 with occasional glimmers into 4 though they also don't have the knowledge base of 2 or most often even 1. Something that was regularly truly innovating would at worst be in the top tiny percentile of humanity though I question how much of that in many cases is more taking other innovations and having awareness on how to market (with potentially rare exceptions like perhaps Tesla or Leonardo or what have you). There is no human that can do 5 so I don't see why that would be AGI rather than early stage ASI.
@Hohohohoho-vo1pq
@Hohohohoho-vo1pq 2 ай бұрын
People move goal posts and overestimate the abilities of the average human
@dhamovjan4760
@dhamovjan4760 2 ай бұрын
Interesting, but really not a "leak"; please use "leak" a bit more carefully to avoid the cry-wolf effect. Miqu was leaked; this is almost like a press release.
@hypersonicmonkeybrains3418
@hypersonicmonkeybrains3418 2 ай бұрын
GPT-5 doing deep research on wikipedia = AGI......