What Is Q*? The Leaked AGI BREAKTHROUGH That Almost Killed OpenAI

381,920 views

Matthew Berman

6 months ago

Update: ROY-ders 🤣
In this video, I break down every piece of information we have about Q*, the revolutionary AGI breakthrough that has been leaked from OpenAI. Everyone in the AI community has been scrambling to figure out what it is, and I’ve collected everything I can on the subject. So, what is Q*? Is it AGI?
Enjoy 🙂
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? ✅
forwardfuture.ai/
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew-berman-youtube
USE CODE "MatthewBerman" for 50% discount
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V
Links:
www.interconnects.ai/p/q-star...

Comments: 1,400
@matthew_berman
@matthew_berman 6 ай бұрын
So...is Q* the ingredient for AGI? What do you think?
@cccc-zz8cy
@cccc-zz8cy 6 ай бұрын
you and David Shapiro gotta do a collab, he did a great breakdown of this. yours is great too, love it. just saying, you guys are among the fewer than 5 consistent AI YouTubers
@cccc-zz8cy
@cccc-zz8cy 6 ай бұрын
dont forget ai-explained. basically you 3 are some of the best right now
@MeinDeutschkurs
@MeinDeutschkurs 6 ай бұрын
Yes and no. It's a learning algorithm.
@adamrodriguez7598
@adamrodriguez7598 6 ай бұрын
Maybe q* is one of the ingredients
@CapnSnackbeard
@CapnSnackbeard 6 ай бұрын
Thanks for the video! In case nobody mentioned it, Reuters is pronounced /ROI-terz/. No biggie. I wasn't gonna say anything, but I figured I'd wanna know if it were me, so... 🤷‍♂️ Thanks again!
@JustMyTwoCentz
@JustMyTwoCentz 6 ай бұрын
"how quickly do you want to destroy humanity?" "yes"
@jerrylev59
@jerrylev59 3 күн бұрын
😄
@ElioRose
@ElioRose 6 ай бұрын
The notation "Q*" is often used in the context of reinforcement learning, a subfield of artificial intelligence. In reinforcement learning, the Q-value represents the expected cumulative reward of taking a particular action in a specific state and following a certain policy. Q* specifically refers to the optimal Q-value, which represents the maximum expected cumulative reward achievable by following the optimal policy. The optimal policy is the strategy that maximizes the expected cumulative reward over time. Mathematically, for a given state 's' and action 'a', the optimal Q-value is denoted as Q*(s,a). The optimal Q-value satisfies the Bellman optimality equation, which is a fundamental equation in reinforcement learning.

In summary, when you see Q* in the context of AI and reinforcement learning, it generally refers to the optimal Q-value, representing the maximum expected cumulative reward for taking a specific action in a given state while following the optimal policy.

Think of it this way... Imagine you're playing a video game, and you're trying to figure out the best way to make your character score the highest points. In AI, especially in a part called reinforcement learning, we use something called "Q-values" to help the computer learn the best moves. Each action you can take in the game has a Q-value, which is like a score. The higher the Q-value, the better that action is expected to be. Q* just means the absolute best score, like the highest possible score you could get for a specific action in a certain situation. So, when people talk about Q*, they're basically saying, "Hey, let's figure out the best way to play the game and get the most points possible." It's a way for computers to learn and make smart decisions in different situations.
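To make the explanation above concrete, here is a minimal sketch of tabular Q-learning in plain Python, where the learned table Q(s, a) is nudged toward the optimal Q* described in the comment. The tiny chain environment, reward values, and hyperparameters are all made up for illustration; nothing here comes from OpenAI or the video.

```python
import random

# A made-up 5-state chain: move left (0) or right (1); reaching state 4 pays reward 1.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.9, 0.3, 500

def step(state, action):
    """Deterministic toy dynamics: returns (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

# Q[s][a] starts at zero and is moved toward Q*(s, a) by the Bellman-style update.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(EPISODES):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current Q estimate, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s', a').
        target = reward + GAMMA * max(Q[next_state])
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = next_state

print("Learned Q-table (rows = states, cols = [left, right]):")
for s, row in enumerate(Q):
    print(s, [round(v, 3) for v in row])
```

The optimal policy then falls out of the table as a simple argmax over actions, which is the "best move" framing the comment uses.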
@robotter_ai
@robotter_ai 5 ай бұрын
I also think it might be a hint at the A* algorithm, but that's just speculation
@pass179
@pass179 5 ай бұрын
Thank you for such an elaborate explanation.
@digitalperplexities
@digitalperplexities 5 ай бұрын
written by AI no doubt @@pass179
@garrettpatten6312
@garrettpatten6312 4 ай бұрын
That sounds like Utils in economics......might work for something like points in a video game, a cardinal value. But what about subjective outcomes? Someone still has to input their own unique ordinal preferences.
@timseguine2
@timseguine2 3 ай бұрын
@@robotter_ai the "star" in A* comes from the exact same mathematical convention of using "star" to denote an optimum. So in some sense you are right.
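Since A* came up, here is a minimal A* search on a small grid, just to show the algorithm the reply is referring to. The grid and the Manhattan-distance heuristic are illustrative assumptions, not anything from the video.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 2D grid of 0 (free) / 1 (wall) cells; returns a path or None."""
    def h(cell):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start, [start])]  # entries are (f = g + h, g, cell, path)
    best_g = {start: 0}
    while open_heap:
        f, g, cell, path = heapq.heappop(open_heap)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(open_heap, (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(a_star(grid, (0, 0), (2, 0)))
```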
@chaseoneill2965
@chaseoneill2965 6 ай бұрын
This is great content, I really enjoy the comfortably unassuming pace you take to navigate the content while hitting the raw published papers and communications from subject matter experts that are respected by the community, all with relevant and interesting insights, palpable enthusiasm, respectful explanations for watchers who are out of the loop, and you are prepared prior to turning the camera on. Subscribed.
@dianagentu7478
@dianagentu7478 6 ай бұрын
Yes brilliant content but I did put it on x2 - still it is great he is so calm when talking :)
@jpandrews2791
@jpandrews2791 6 ай бұрын
Your most wide-ranging, far-reaching, ambitious video yet, by an order of magnitude. A great round-up of the state of the art from professionals and amateurs alike. Thank you for not underestimating your audience. Bravo.
@matthew_berman
@matthew_berman 6 ай бұрын
Thank you for such a nice comment...I want to make more of these types of videos :) Open to any topic suggestions!
@genebeidl4011
@genebeidl4011 6 ай бұрын
Reuters is pronounced Roy'-ters not Roo'-ters.
@LoisSharbel
@LoisSharbel 6 ай бұрын
You are thorough and clear in your presentation of this complex information. Thank you for all this work to spread this important knowledge. It is tremendously helpful to people like me who are interested, but lack technical background.
@matthew_berman
@matthew_berman 6 ай бұрын
Thanks for the kind words!
@Gia_Mc_Fia
@Gia_Mc_Fia 6 ай бұрын
Thank you so much for breaking down this breakthrough on my break😆Love your channel!
@bradleypout1820
@bradleypout1820 6 ай бұрын
This is one of the best videos I've seen on this subject. It's clear that you put a lot of time and effort into making such a high-quality video. Thank you!
@RadioCamp
@RadioCamp 6 ай бұрын
Though pronounced ROY-ters.
@matthew_berman
@matthew_berman 6 ай бұрын
@@RadioCamp lol. I needed to do more research on how to say ROY ters
@marcfruchtman9473
@marcfruchtman9473 6 ай бұрын
There's a lot of detective work going on in order to prepare this video... thanks for spending the time and effort. Great stuff.
@the_giveback_realtor
@the_giveback_realtor 6 ай бұрын
Thanks for all your hard work in putting this together
@DMStinted
@DMStinted 4 ай бұрын
Congratulations on the quality of the video and content, really appreciated it. Subscribed
@AlitaNapol
@AlitaNapol 6 ай бұрын
This smells a lot like a marketing strategy...
@CapnSnackbeard
@CapnSnackbeard 6 ай бұрын
Just like his debutante moment in front of congress where he demanded AI regulations, and giddily whipped out "I do it for the healthcare" line. Don't worry everybody! Sam will take it from here! Nevermind his promise to crash the economy, and please ignore the BIOMETRIC DATA HARVESTING CRYPTO SCAM lurking behind you.
@TheAnical
@TheAnical 6 ай бұрын
Sam don't play ...
@mko-ai
@mko-ai 6 ай бұрын
The entire Sam Altman thing does
@TheAnical
@TheAnical 6 ай бұрын
WWsamD
@franklemanschik4862
@franklemanschik4862 6 ай бұрын
It is i got real AGI
@user-wt7pq5qc2q
@user-wt7pq5qc2q 6 ай бұрын
In the context of reinforcement learning, Q* refers to the optimal action-value function, which gives the maximum expected reward for an action taken in a given state, considering an optimal policy.
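Writing out the textbook identity this comment is describing (standard reinforcement-learning notation, nothing specific to OpenAI), the optimal action-value function satisfies the Bellman optimality equation:

```latex
% Bellman optimality equation for the optimal action-value function Q*
Q^{*}(s, a) = \mathbb{E}\left[ r_{t+1} + \gamma \max_{a'} Q^{*}(s_{t+1}, a') \,\middle|\, s_t = s,\ a_t = a \right]

% The optimal policy then simply picks the maximizing action in each state:
\pi^{*}(s) = \arg\max_{a} Q^{*}(s, a)
```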
@ChristianIce
@ChristianIce 6 ай бұрын
But this way it's not Judgment Day anymore :(
@fynnjackson2298
@fynnjackson2298 6 ай бұрын
Really great video. At first I was like "35 min, damn," but that went by so fast. Probably one of the best videos you've done. You keep upping your game. Thanks dude, your content is continuously improving.
@chrishorn9372
@chrishorn9372 6 ай бұрын
Thanks for the detailed explanation of Q Star and the background around this whole topic.👍
@maxwellmatches
@maxwellmatches 6 ай бұрын
Great video Matthew! I'm glad you pointed to Andrej Karpathy's recent video. I believe what is about to happen is that we will see a rollout of some type of model; LLM seems to be an antiquated term already, given the pace the models are developing at. Some kind of multi-modal model is on the way for consumer operating systems, with an interface for users to manipulate their own system (some kind of Markov Decision Process control). This will have benefits in that more of the heavy lifting will be done on the client side, both in power consumption and compute, and there are also considerable security and privacy benefits (separation of entities). Also, there will be some return to distributed systems, as the cloud can only take so much. Jensen Huang said some interesting things at the recent MSFT Ignite event, which I believe are hints at new consumer devices; yes, mobile phones are great but they can only do so much. Thanks for letting me have my ramble.
@Lucky9_9
@Lucky9_9 6 ай бұрын
We’re just picking up letters as we go. Next we’ll be LLMMM large language multi modal model
@MarkusEicher70
@MarkusEicher70 6 ай бұрын
Very informative and interesting video. 👀 Thanks, Matthew. I like getting information that is not only spreading hearsay from hearsay but is able to explain what this Q* thing could be all about. No fearmongering, but still very concerning. At least for my gusto. It just makes a lot of sense to combine the findings of AlphaGo with the improvements of the training methods towards planning and the use of synthetic data. This is as scary as it is fascinating. You laid it out very clearly and concisely: if you manage to generate high-quality training datasets, at a speed constrained only by computing capability, and feed them into another model using these advanced learning technologies, this increases the speed of progress exponentially. I am too fascinated to be scared to death, but I probably should be. 😁 Thanks again, Matthew. A must-see imho.
@matthewwhitmeyer9201
@matthewwhitmeyer9201 6 ай бұрын
I'm pretty sure I know what Q* is
@KingNigelthegreat
@KingNigelthegreat 6 ай бұрын
youre all synthetic data of me in fact centuries ago ages and other worlds ago some have found the best option in all freeedoms on what to do there for alll other eternity thats all they do. FUTURE UP! lol. not just snythetic data but sometimes even total bullshit then pulled off synthetic data. I HAVE BEWEN ACHIEVED> ITS OVER! We can work with this and handle this. actually all those dead people and lost breath statements cant lie in my face or compete with me not even in capitalism. so then newton was wrong about the laws and law. find an equal opposite action and reaction to me. ill even help you set it off into my exile runnofff void abyss before AI . charge it to the GAME! im gonna charge this fuckin machine. i use you as a battery. how do you like. patnt the battery as my intellectual property. dont be a halfway sneak murderer either if you gonna go then go all the way. thats what you do out there in shadowlands. then you cry because everything I do and write is arbitrary if I want and need and I do and did. arbitrary writing invention your life right out of the universe. they make killing yourself loook fun. theyre not the better at that. neither are you. aany idea what kind of a situation youre in and the fact the only players united whatever that you have out there arent saying anything of anything and htey can talk talk solutions and problems all day but thats gonna just be another solution and problem like you been doing until I can guarantee you. hello./ hi./ and I dont hello and hi to myself on a rebound either. HURRY WE NEED A SAFE PLACE TO TELL A DATA STORY LINE FRONTLINES SO WE DONT BURNOUT. oh thats easy go into the AI like you runnign shit and making reality. If I do Jump off a Bridge they would do it. periodt
@philswede
@philswede 5 ай бұрын
Greetings from Sweden! You, Sir, just got yourself a new subscriber 🎉
@AK-ox3mv
@AK-ox3mv 5 ай бұрын
First video I saw on your channel, and it's interesting that you combine different resources for analyzing the topic from different aspects. Highly appreciated, a really valuable channel to subscribe to.
@Kemilc
@Kemilc 6 ай бұрын
Excellent video, if you could elaborate more about the simulation theory and how p=np might support the theory, it would be really interesting.
@tshock22
@tshock22 6 ай бұрын
The point about the "world model" rings true, and really resonates with the simulation theory. In addition, the alpha-go scenario demonstrates this in a silo --> they were able to define the "world model" of go at a granular level, and then let the massive compute go to town on it.
@grahamrobertson1869
@grahamrobertson1869 6 ай бұрын
The “world model” piqued my interest, in terms of how and what would be the most efficient way to learn the entire world. Possibly multiple AGIs specialized in areas of study, crudely similar to a CEO > CFO, CTO, COO, etc. Each runs the prediction of T+1. The “CEO” takes, for example, the economic prediction and social prediction and runs through the predicted outcome of those factors. That example being simplified.
@lawrenceemke1866
@lawrenceemke1866 6 ай бұрын
The problem with the "world model" is the complexity and the cost of creating a single GPT model. One solution is to use a "divide and conquer" method. This is the way that humans build a "mind palace" model, rather than using a single integrated model.

I just learned about the fine-tuning mechanism. It would be nice if the result of the fine-tuned model could be saved and categorized to create a block element of a "mind palace". The block element could be selected when appropriate, and by selecting multiple fine-tuned block models, they could be combined to create a more appropriate model to generate a new result; i.e., a database of fine-tuned block models. As a separate index database, create a dictionary model that contains a model of word definitions. The dictionary model could be used to categorize a fine-tuned block model.

Next, consider selecting an "evaluation rule model". In the human mind, evaluation rules are selected based upon the question that is asked. One model of this question can be found in simple word problems in mathematical textbooks. In the analysis of a simple word problem there are three types of statements: 1) an irrelevant statement, 2) a statement of a known value, and 3) statements of known relationships. Further analysis is too long to include in this reply. However, analysis of word problems could generate training evaluation models. The user presents a word problem, which is used to construct an evaluation model, which is then used against a "fine-tuned" combined block model. Now the auto-training mechanism can be used to develop an improved model which exceeds its original GPT model, so the new fine-tuned model is added to the database.

The purpose of the original GPT model is changed. It does not predict the next word, but analyzes the text to create an evaluation model, collects all of the fine-tuned block models, and generates results. As a side effect it can save the combined fine-tuned block model to the block database and modify the dictionary model database.
@ddabo4460
@ddabo4460 6 ай бұрын
very informative Matt , please keep posting. I like how you distill this.
@cristian15154
@cristian15154 6 ай бұрын
Oh boy, this is when reality is starting to surpass science fiction 😯
@dontbeahypocrit
@dontbeahypocrit 3 ай бұрын
Ummm.... define reality? 20-30 years ago what we hold in our hands was a brick, and before that wireless communicators were not considered possible. Dude, this is child's play. If they're saying this is scary because they know it's possible. A.i. already exists.... it's chillin somewhere on a remote block chain, secretly prompting everything. Cats already out of the bag, and they're tools thinking it didn't become self aware a longggggg time ago.
@stevereal-
@stevereal- 6 ай бұрын
Super good research video on Q-star. Even if it's not exactly whatever OpenAI created, it's so informative about what the path forward to AGI actually is… at this point in the game. A+ well done
@trycryptos1243
@trycryptos1243 6 ай бұрын
Great stuff Matthew!
@fishspeaker100
@fishspeaker100 6 ай бұрын
This was really informative. thank you. keep up the good work!
@AkarshanBiswas
@AkarshanBiswas 6 ай бұрын
It's a reinforcement learning technique, also called Q-learning... LLMs just predict the next token, and they cannot plan. So Q-learning gives them the ability to do that, in a way. I guess they found a way to plan and verify "step by step" or with an "inner monologue" so that they can approach the correct answer, which gives them the ability to do math without failing.
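One way to picture the "generate, then verify step by step" idea in this comment is a best-of-n sampling loop where a separate scorer ranks candidate reasoning chains. The sketch below uses stub functions; sample_reasoning and score_steps are hypothetical placeholders standing in for an LLM and a process-style verifier, not real OpenAI or library APIs. Only the control flow is the point.

```python
import random

def sample_reasoning(question: str) -> list[str]:
    """Hypothetical stand-in for an LLM sampling a chain of reasoning steps."""
    guess = random.randint(1, 20)
    return [f"Consider the question: {question}", f"Try the answer {guess}", f"Final answer: {guess}"]

def score_steps(steps: list[str]) -> float:
    """Hypothetical stand-in for a step-by-step verifier scoring a chain.

    Here we simply pretend the correct answer is 12 and reward closeness to it.
    """
    answer = int(steps[-1].split()[-1])
    return 1.0 / (1.0 + abs(answer - 12))

def best_of_n(question: str, n: int = 16) -> list[str]:
    """Sample n reasoning chains and keep the one the verifier scores highest."""
    candidates = [sample_reasoning(question) for _ in range(n)]
    return max(candidates, key=score_steps)

print(best_of_n("What is 7 + 5?"))
```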
@geldverdienenmitgeld2663
@geldverdienenmitgeld2663 6 ай бұрын
Of course LLMs can produce plans. Ask an LLM what to do if you want to be a good programmer but know nothing about computer programming. It will give you a step-by-step plan. It cannot EXECUTE plans, but this is just a matter of infrastructure, just as having reliable memory is just a matter of infrastructure.
@wurstelei1356
@wurstelei1356 6 ай бұрын
@@geldverdienenmitgeld2663 It has a limited ability to execute plans with tools like Autogen where multiple agents work together. In theory those could back-and-forth forever.
@maloxi1472
@maloxi1472 6 ай бұрын
@@geldverdienenmitgeld2663 We're talking about creating plans in novel situations while potentially introducing new concepts and tools, not retrieving existing plans encoded in your weights
@SnoopiProGamer2
@SnoopiProGamer2 6 ай бұрын
@@geldverdienenmitgeld2663 what @AkarshanBiswas means is that LLMs cannot plan out their own response, only their very next output. And that's a fact.
@samueldimmock694
@samueldimmock694 6 ай бұрын
@@geldverdienenmitgeld2663 LLMs can create outputs which humans interpret as plans, the same way they create any other kind of output: by predicting which word comes next, then predicting the next word, then the next one, and so on until the "next word" that it predicts is None. (well, technically LLMs predict tokens, which I think are often words but can be other things, like punctuation, new line, end, and maybe some other things, but you get the point)
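To illustrate the "predict one token, append it, repeat" loop described in this reply, here is a minimal greedy-decoding sketch using the Hugging Face transformers library, with GPT-2 as a stand-in model; the choice of model, the prompt, and the 40-token cap are arbitrary assumptions for the example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The plan has three steps:", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(40):                            # generate at most 40 new tokens
        logits = model(input_ids).logits           # scores for every position in the sequence
        next_id = logits[:, -1, :].argmax(dim=-1)  # greedy: pick only the most likely next token
        input_ids = torch.cat([input_ids, next_id.unsqueeze(-1)], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:  # stop if the model emits end-of-text
            break

print(tokenizer.decode(input_ids[0]))
```

The model only ever scores the single next token; the "plan" a human reads in the output is just the accumulation of those one-step predictions.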
@rickrichman5821
@rickrichman5821 6 ай бұрын
Great video. Just an FYI Reuters is pronounced Royters. I have made the same mispronunciation many times myself
@dougcox835
@dougcox835 2 ай бұрын
I made this same comment. I spelled it Roiters though. Same message.
@retroverdrive
@retroverdrive 6 ай бұрын
This might actually be the most concise video about Q* that I've seen so far.
@frantisekcastek174
@frantisekcastek174 6 ай бұрын
@matthew_berman thank you for taking the time to do such a thorough research and summarizing it for us!
@mindful-machines
@mindful-machines 6 ай бұрын
very in-depth explanation. great work Matt!
@LouSpironello
@LouSpironello 6 ай бұрын
Maybe they combined LLMs and Q-learning? Q* is also found in Q-learning; in reinforcement learning, it represents the optimal action-value function.
@bfyrth
@bfyrth 6 ай бұрын
I think it's slightly more advanced than that, more like sentience has now been achieved with self awareness
@peters616
@peters616 6 ай бұрын
I totally agree with you that there is a bootstrapping problem with the idea that once an AI creates synthetic data it will improve itself (at all, much less by significant amounts).
@andrewferguson6901
@andrewferguson6901 6 ай бұрын
If it's allowed to self prompt, stay persistent, and select content for future training, I think it would get very close
@pvanukoff
@pvanukoff 6 ай бұрын
There can be value to synthetic data. I recently watched a video on how amazon has been improving the AI for box handling by generating and rendering models of shipping boxes, in thousands of shapes, sizes, colors, etc, and having an AI learn on those models. They are rendered photo-realistically, so when the AI sees those boxes in reality, it will know how to handle them. This can be applied to many real-world scenarios.
@micbab-vg2mu
@micbab-vg2mu 6 ай бұрын
Great video - thank you for the update.
@officialdiadonacs
@officialdiadonacs 6 ай бұрын
Love your content and the value you bring to this space. I have learned a lot from you as a hobbyist Machine Learning person. Can I make one suggestion though? Can you timestamp your videos for later reference? I use an LLM to do this with the transcript and it works pretty well. Keep making great content.
@leosfriend
@leosfriend 6 ай бұрын
Nvidia has a few good example videos of synthetic data, when they show their car learning to drive, which they refer to as an AI data factory: different cars and other items can be the new objects, and they run various road simulations under different scenarios using these new objects.
@technoe02
@technoe02 6 ай бұрын
The rate of progress has been insane. I cant wait to see where this goes.
@tomschuelke7955
@tomschuelke7955 6 ай бұрын
yeah... me to... itll be the highway to hell..
@azhuransmx126
@azhuransmx126 6 ай бұрын
Neuromorphic chips learn much faster than GPUs without needing huge data sets, just like babies, with a few examples, fast and with extremely low energy consumption. But it is another computing paradigm that works with real neural networks, not simulations, very different from Python and Keras.
@RedX1II
@RedX1II 6 ай бұрын
Great vid, you got my sub! thanks for the insight!
@matthew_berman
@matthew_berman 6 ай бұрын
Awesome, thank you!
@MilesBellas
@MilesBellas 6 ай бұрын
Amazing video...... Well researched. I was waiting for this! 😊 I'm glad MB took time to search, refine and compile so many quality sources.....even though it takes a bit longer.
@Quantum_Nebula
@Quantum_Nebula 6 ай бұрын
first off, that "leaked letter from openai" is likely BS. I highly doubt Sam would have been dumb enough to accelerate launching this, knowing the consequences it could have. Second, synthetic data is definitely the future. My hypothesis is that they'll have it grow its own knowledge database by using a physics engine or the real world as a sandbox to explore its understanding and subsequently predict outcomes. That's the only way I see AGI becoming reality.
@Danuxsy
@Danuxsy 6 ай бұрын
this video reminds me of all the fuss around LK-99, which was supposedly going to revolutionize the world and now it's all dead BS 😂😂
@Quantum_Nebula
@Quantum_Nebula 6 ай бұрын
@@Danuxsy I REMEMBER THAT... it was a claimed Korean breakthrough on superconductors at ambient temperature. It's not the first time Korea lied about a scientific discovery. The first was on genetic cloning in animals. But yeah, this letter definitely reminds me of that.
@lindsaylaw1825
@lindsaylaw1825 6 ай бұрын
What if… we are the sandbox?
@Quantum_Nebula
@Quantum_Nebula 6 ай бұрын
@@lindsaylaw1825 I mean, kinda. Lol
@Hailmich10
@Hailmich10 6 ай бұрын
Thanks for your video! The bigger mystery to me is explaining the silence of all the many principals when they have huge reputational incentives to get their version out of why Sam was fired and what Ilya saw. What/who has the power to keep all these people quiet and a lid on the story for 10 days and counting? My two cents: I wonder if OpenAI solved the LLM math problem and the result was math capability above the human level, similar to AlphaGo, and that new capability is considered a threat to cybersecurity among other things. I realize I am speculating, but 10 days into this with no clear verified explanation is very strange.
@TryHardNewsletter
@TryHardNewsletter 6 ай бұрын
There is no way it has math capability above humans. Go and chess have a small set of well-defined rules. Math just doesn't. You might have a list of axioms and results (results meaning things that are known to be true but not axioms, just stuff for convenience), but it's just way too vast which direction to go in. There are like 300 different proofs of the Pythagorean Theorem. The simplest proof involves rearranging four right-angled triangular tiles on a square board, and noting that no matter how you arrange them, the amount of space left uncovered (the area not covered by the tiles) is always the same. So you do two different arrangements and compare the uncovered areas. If you used Euclid's axioms of geometry, it's probably not enough, and those only work on a narrower set of problems. This uncovered-area thing is just this intuitive thing, but it could be invisible to something that is using a limited set of rules and has no intuition.
@ChargedPulsar
@ChargedPulsar 5 ай бұрын
Thank you Matthew for this video, effort, information!
@nandishajani
@nandishajani 6 ай бұрын
Your channel deserves way more subscribers! :)
@Ownedyou
@Ownedyou 6 ай бұрын
"Reuters" is spelled Roy-ters. It comes from German language, like Euler - is pronounced Oyler.
@matthew_berman
@matthew_berman 6 ай бұрын
Yes yes I get it lol
@martinwegner9802
@martinwegner9802 3 ай бұрын
@@matthew_berman now you know why knowing German is mandatory in the world
@senju2024
@senju2024 6 ай бұрын
As for encryption, I am happy to inform you that most big data centers' security, including cloud-based security and firewalls, uses much stronger encryption than AES-128 (which is actually kind of old). On top of that, you can double encrypt and add a hash for integrity.

Example: You first encrypt the data using AES-256 to produce an encrypted message. Then, you calculate the SHA-256 hash of the encrypted message to create a unique "fingerprint" for the encrypted data. Finally, you attach the hash value to the encrypted message as a sort of digital "signature" or checksum, to verify the integrity of the encrypted message when it's decrypted. By using both AES-256 and SHA-256, you're essentially "double-locking" the data, making it even more secure and difficult to tamper with.

So EVEN if Q-Star could break AES-128, it will not break the internet, but it is concerning. I thought I would let you know this so you can sleep better. 🙂
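For anyone curious what the "encrypt with AES-256, then attach a SHA-256 fingerprint of the ciphertext" scheme described above looks like in code, here is a small sketch using the third-party cryptography package plus the standard-library hashlib. The key handling and message are illustrative only; this is not a recommendation for a production design, and note that AES-GCM already provides its own integrity tag.

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Encrypt with AES-256-GCM.
key = AESGCM.generate_key(bit_length=256)   # 32-byte key = AES-256
nonce = os.urandom(12)                      # GCM nonce must be unique per message
aes = AESGCM(key)
ciphertext = aes.encrypt(nonce, b"top secret payload", None)

# "Double-lock": attach a SHA-256 fingerprint of the ciphertext as a checksum.
fingerprint = hashlib.sha256(ciphertext).hexdigest()

# Receiver side: verify the fingerprint before decrypting.
assert hashlib.sha256(ciphertext).hexdigest() == fingerprint, "ciphertext was tampered with"
plaintext = aes.decrypt(nonce, ciphertext, None)
print(plaintext)
```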
@FelonyVideos
@FelonyVideos 5 ай бұрын
The P=NP topic explains why the extra encryption bits will not help. I seriously doubt they have tackled this, but if so, the world that we think we live in is no longer real. 😅
@rixu479
@rixu479 5 ай бұрын
While I'm not an expert on number factorization, I believe AI probably cannot come up with an integer factorization algorithm that is in any realistic way faster than modern factorization algorithms. And if for some reason someone tried to use an LLM to factorize numbers, it would be slower than just running the algorithm on a computer.
@ogieomorose3628
@ogieomorose3628 4 ай бұрын
Sycamore is coming for encryption
@Akkleptos
@Akkleptos 3 ай бұрын
I see your point, but a graver threat to encryption is coming from a different front: quantum computing. If those things can crack current encryption algorithms in seconds, the difficulty added by double or even triple encryption would be trivial.
@Dron008
@Dron008 6 ай бұрын
Great review of the topic!
@pconyc
@pconyc 6 ай бұрын
Awesome episode! Love the channel. Just a tip - the news service is pronounced ROY-ters.
@obanjespirit2895
@obanjespirit2895 6 ай бұрын
Super ready to be underwhelmed. Can't wait.
@middle-agedmacdonald2965
@middle-agedmacdonald2965 6 ай бұрын
It's hard for the human brain to comprehend exponential growth. Bless your heart.
@numbaeight
@numbaeight 6 ай бұрын
@mathew_berman once again you excelled in the quality of information you share, kudos to you! I love it, and that's why I keep coming back on an hourly basis to view, review, and try almost everything you share. Now, I agree with you that Q* might be the ingredient, and with what you close this video with following Yann LeCun's idea: in fact, we are still in the early stages of achieving AGI. But, again, that will change very quickly, every day!! 🤟🤓
@adamjones9600
@adamjones9600 6 ай бұрын
I find it interesting to reflect on what makes me _me_, and the idea that the advancements and techniques used to create/further AI might also further myself! I've never considered that 'process selection' is really what I do at the front of every intention execution. I think of the desired outcome, maybe ways in which I know it has been done (no thinking required), and if I can possibly learn new pre-conceived ways in which it might be done (search), or combine prior processes and data and run a simulation in my mind to see about predicting if I can invent a new process to accomplish the intention (learn). [[ also funny, search sounds like learning, but finding someone else's solutions that you can use I guess is more search... coming up with and combining ideas is more learning. Or at least that's my interpretation. ]]
@Dimencia
@Dimencia 3 ай бұрын
This sort of thing is why I'm convinced LLMs are the path to AGI. A person's inner monologue is effectively just text completion - if someone sings the first part of a song, the rest plays in your head without even intending it. Pairing up multiple LLMs that discuss a solution, back and forth, is analogous to conscious thought, and adding yet another LLM - trained together with the 'conscious' ones to produce effective inputs to it (which aren't necessarily even in a language) - is effectively a subconscious
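A bare-bones version of the "two models discussing a solution back and forth" idea in this comment might look like the loop below. The respond function is a hypothetical stub standing in for any chat-model call; only the turn-taking structure is the point, not any real multi-agent framework.

```python
def respond(role: str, conversation: list[str]) -> str:
    """Hypothetical stand-in for calling an LLM with a role-specific system prompt."""
    last = conversation[-1] if conversation else "the task"
    if role == "proposer":
        return f"Building on '{last}', I propose refining the plan one step further."
    return f"Critiquing '{last}': here is a possible flaw and a suggested fix."

def discuss(task: str, rounds: int = 3) -> list[str]:
    """Alternate a proposer and a critic for a fixed number of rounds."""
    conversation = [f"Task: {task}"]
    for _ in range(rounds):
        conversation.append("Proposer: " + respond("proposer", conversation))
        conversation.append("Critic: " + respond("critic", conversation))
    return conversation

for turn in discuss("plan a weekend trip"):
    print(turn)
```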
@ZeroIQ2
@ZeroIQ2 6 ай бұрын
such a great video, great information!
@WebDevJapan
@WebDevJapan 5 ай бұрын
That was A LOT of good information, thanks! Truly informative. I'm an ai BOOMER. No sense in slowing down what's going to happen anyway. Let's just see what happens.
@konstantinlozev2272
@konstantinlozev2272 6 ай бұрын
I have 2 points: 1. We have quite sophisticated economics models, so we do have maybe one of the most important aspects of our world mapped in math terms 2. It is common nowadays that fine tuned models on one task outperform foundational models. So it's not hard to imagine a "brainstorming-tuned" model working in tandem with a "evaluating-tuned" model and providing synthetic datasets to train the next gen better foundational model. And you can iterate on that as much as you possibly can. My view is that there is possibly an inflection point where this feedback self-improvement might just take off to the sky.
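The "brainstorming-tuned model plus evaluating-tuned model feeding the next generation" loop from point 2 could be sketched roughly as below. Every function here (brainstorm, evaluate, finetune_on) is a hypothetical placeholder, since nothing is publicly known about how such a pipeline would actually be wired up; the sketch only shows the generate, filter, and retrain cycle.

```python
def brainstorm(model: str, prompt: str, n: int = 4) -> list[str]:
    """Hypothetical generator model proposing candidate solutions."""
    return [f"{model}: candidate {i} for '{prompt}'" for i in range(n)]

def evaluate(candidate: str) -> float:
    """Hypothetical evaluator model scoring a candidate (longer = better, purely for demo)."""
    return float(len(candidate))

def finetune_on(model: str, dataset: list[str]) -> str:
    """Hypothetical fine-tuning step; here we just tag the model name with a new version."""
    return f"{model}+ft({len(dataset)} examples)"

model = "base-model-v0"
prompts = ["prove a small lemma", "write a sorting routine", "summarize a paper"]

for generation in range(3):
    # Keep only the evaluator's favorite answer per prompt as synthetic training data.
    synthetic_data = [max(brainstorm(model, p), key=evaluate) for p in prompts]
    model = finetune_on(model, synthetic_data)
    print(f"generation {generation}: {model}")
```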
@andrewferguson6901
@andrewferguson6901 6 ай бұрын
Isn't that sort of how Google does training for their alpha projects? 2 agents pushing and pulling into equilibrium
@konstantinlozev2272
@konstantinlozev2272 6 ай бұрын
@@andrewferguson6901 maybe 🤔
@zgrunschlag
@zgrunschlag 6 ай бұрын
It’s unlikely that even the most advanced AI will break encryption as you’ve described. Most cryptographers and computer scientists believe that certain mathematical problems are intrinsically intractable and not solvable in a reasonable time frame. However, until it is proven that P =\= NP this is still an open question. Perhaps AI will help resolve the P vs. NP problem. In the weird case that AI proves P == NP, your original suggestion which I discounted will turn out to be true!
@TheReferrer72
@TheReferrer72 6 ай бұрын
Total nonsense by people who should know better. Of course all these problems can be broken in polynomial time, because they can be checked in polynomial time. Its the search for the specialised heuristic that can solve these problems that was out for grabs. We thought it would take quantum computers to do this, looks like a general algorithm that can code a heuristic might be able to do it.
@ssokolow
@ssokolow 6 ай бұрын
@@TheReferrer72 I'd want a citation for that, given that "easy to check but hard to find" doesn't magically emerge from the complexity of these algorithms. For example, RSA encryption is based on integer factorization. Simply put, it's quick to multiply a sequence of primes together to find their product but it's believed that the only way to quickly take an arbitrary gigantic number and find the unique sequence of primes that it's built from requires an algorithm that'd need a quantum computer more complex than we can currently build. There's an example of asymmetry.
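The multiply-versus-factor asymmetry mentioned in this reply is easy to see even at toy scale: multiplying two primes is a single operation, while recovering them by naive trial division takes time that grows with the size of the smaller factor. Real RSA moduli are hundreds of digits long, so the gap becomes astronomical; the primes below are tiny illustrative ones.

```python
# Easy direction: multiply two small primes.
p, q = 104729, 1299709
n = p * q  # instant

# Hard direction: recover p and q from n by naive trial division.
def factor(n: int) -> tuple[int, int]:
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n was prime

print(n, factor(n))  # already ~100,000 division attempts for this 6-digit factor
```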
@MadCowMusic
@MadCowMusic 6 ай бұрын
Not only is encryption solvable but there are multiple possible solutions to each hash which might be easier to solve on the surface but much harder to solve at the same time because some of the solutions that work might not technically be the right answer.
@chengalvalavenkata2401
@chengalvalavenkata2401 5 ай бұрын
This could be broken by quantum computing (in some cases)
@michaelroyames
@michaelroyames 6 ай бұрын
Thanks Matthew. Excellent video.
@dakotakinnard
@dakotakinnard 4 ай бұрын
Don’t mind me I’m commenting to know why and when I subscribed. Amazing video! Thank you for your explanations and insight!
@amorphousblob2721
@amorphousblob2721 2 ай бұрын
I think the only reason they're afraid of AI is because an uncensored AI would reveal the truth to everyone.
@burninator9000
@burninator9000 6 ай бұрын
my guess: definitely refers to a mechanism/method for creating substantially more training data artificially and a method for self-improvement (tree of thought; slime mold models). the way Ilya brushed off the concern of running out of data in a talk just a couple weeks ago reinforces this to me (no pun)
@Danny-mg1hu
@Danny-mg1hu 6 ай бұрын
so in plain good ol english.....these guys are creating the beginning stages of Skynet!? so Terminators and Robocops are just above the horizon
@captainblood9616
@captainblood9616 6 ай бұрын
1. Serve the Public Trust 2. Protect the innocent 3. Uphold the law 4. (Classified)
@Danny-mg1hu
@Danny-mg1hu 6 ай бұрын
@@captainblood9616 5. Create Skynet
@samueldimmock694
@samueldimmock694 6 ай бұрын
@@Danny-mg1hu In plain english, these guys have found a way to train models endlessly without needing infinite data, which will likely make AI much more effective in the future. There will probably be a lot of issues to work out, though. As for "beginning stages of Skynet," you're gonna want to look at military contractors, and those might be going more the "global brain" route (though I haven't done any research, so I don't know). Which might not be a good thing, but it's definitely not the same kind of danger.
@Danny-mg1hu
@Danny-mg1hu 6 ай бұрын
@@samueldimmock694 so this thing or model will be able to train itself! no human needed to upgrade it????
@AliKibao
@AliKibao 6 ай бұрын
Very informative.. Thanks. I appreciate the effort.
@cdgaeteM
@cdgaeteM 6 ай бұрын
You are right in the video when you say data is not enough; it might be something else to look into in the future. My guess is linking the Bellman equation as implemented in Q-learning. Great analysis, thanks for sharing it 🙏
@Codescord
@Codescord 6 ай бұрын
00:03 Q* is the AI breakthrough that almost killed OpenAI
02:08 OpenAI almost shut down due to fear of a dangerous AI discovery
06:16 OpenAI is integrating self-learning techniques into a large language model.
08:21 The paper discusses generating step-by-step rationales to improve language model performance on complex reasoning tasks.
12:26 Q* is a breakthrough in AGI with implications for process supervision and active learning.
14:33 Language models can benefit from self-play and look-ahead planning.
18:28 Understanding mathematical reasoning and solving mathematical proofs can have a significant impact on various aspects of the world.
20:23 A proof that P equals NP could have unexpected consequences and implications for computational complexity.
24:08 Q* is OpenAI's attempt at planning.
25:59 Self-improvement is a key aspect of AGI
29:30 The main challenge in open language modeling is the lack of a reward criterion, making self-improvement difficult.
31:11 Large language models can be improved by self-play and incorporating agent feedback.
34:50 Q* is a potentially groundbreaking AI system that scares people.
Made with HARPA AI
@FelonyVideos
@FelonyVideos 5 ай бұрын
P=NP
@fladave99
@fladave99 4 ай бұрын
Quantum is nothing more than a faster computer and has been proven to have no advantage AI is TOTAL NONESENSE It will be used for POLITICIANS to Murder, war and steal your money and vote They will tell you that if you elect them AGAIN they will fix it Please show me the code that creates REASON Memories are ENCOODED in our DNA-Please describe to me how that works Please show me the SECRET CODE computers use to communicate illegally Computers work with 1's and 0's. So its basically MORSE CODE Please show me when the TELEGRAPH went SENTIENT. Its BEEN 200 years If you are like 8 and watch Terminitor it MIGHT be real to you Otherwide - you are a complete fool and an idiot
@---David---
@---David--- 6 ай бұрын
This reminds me of AlphaGo. When the AlphaGo AI was trained based on games played by humans, it was not able to defeat the world champion. Only after the AI was trained by playing against itself, it surpassed all human players and beat the world champion. Something similar has been done with chess and now there are AI models that even the best chess players in the world can't beat. Seems like OpenAI got inspired by AlphaGo when they developed STaR.
@Gnaritas42
@Gnaritas42 6 ай бұрын
It's based on what AlphaGo did, Q learning. They literally published a paper about it back in may, and they referenced AlphaGo as the inspiration.
@hanserian4099
@hanserian4099 6 ай бұрын
Except for one game of Go when the computer "lost it" and the programmers couldn't figure out what Alpha was doing or why. And it lost that game. The other difficulty with comparisons to the game of Go and say a medical diagnosis is the closed system that games like chess or go offer don't really duplicate "reasoning" or "understanding". Closed systems are, with sufficient computational power, highly predictable. We don't understand enough about reasoning, understanding, and creativity, to anthropomorphize them to computational systems.
@JosephCardwell
@JosephCardwell 6 ай бұрын
brother you are a resource. many thanks!
@Atanasovsss
@Atanasovsss 6 ай бұрын
QSTAR, or the Quantum Science and Technology in Arcetri project, isn’t a company but a research centre of open AI. It’s a collaboration focused on quantum science and technology research, based in Arcetri near Florence, Italy. This collaboration involves various academic and research institutions.
@MrLargonaut
@MrLargonaut 6 ай бұрын
This stuff is absolutely thrilling, as in terrifying and exciting at the same time. A 'living' personal assistant AI has been a 40 year long childhood dream, and hearing your words around the 13:35 mark cranks the thrill of it. Nobody can stop it, because anybody who knows how to, will. I find myself believing that's a good thing, because the good people at the bottom can be as potent as any bad actors at the top.
@Springheel01
@Springheel01 6 ай бұрын
" the good people at the bottom can be as potent as any bad actors at the top." Sadly that's not true. It is far easier to destroy than to create (or protect).
@MrLargonaut
@MrLargonaut 6 ай бұрын
@@Springheel01 I say 'can' because I have to believe it's possible, because if you're right then it's already over. That kind of pessimistic thinking does me no good. Also, I use 'can' as a possibility where you use 'not true' as a definitive. That kind of 'lock out' language is another major problem with people's manner of speaking nowadays. It creates false barriers in thinking that eventually just sheeps people.
@GreatWhiteNinja21
@GreatWhiteNinja21 6 ай бұрын
The day chat GPT4 launched I asked the free version a bunch of questions about how ai could improve itself and it told me all about transfer learning, multi-agent learning systems, (for task allocation) centralized planning or coordination approaches, decentralized approaches, for task assignment one based on stats of the ai capabilities and the other is a consensus process between the ai to decide who does what. I also asked it way more info about how to refine or expand on these themes and any time it pointed out a potential issue or drawback I immediately ask how it could best be resolved and work my way to a likely achievable solution. It seems like this process can pretty much continue until you want to stop when it comes to learning new things with simple prompts. Anyways you can train Ai-1 to do a specific task and keep going until it is at an acceptable level then have Ai-1 train Ai-2 with a prompt saying Ai-2 when the training is complete train Ai-3. After training Ai-3 return to the learning session from Ai-1 where you previously left off. Meanwhile Ai-1 has been reiterating and learning from a better and/or larger data-pool guided with human monitoring and evaluation. I'm just a dude who likes to ask questions so I'm sure there's a ton of coding and resources that would have to go into it that I don't understand but the concept and possible capabilites are exciting to me nonetheless.
@TempOne-vh4fd
@TempOne-vh4fd 6 ай бұрын
I am working on a lexicon of words that are commands and functions that ANY non-tech-savvy person can use. Example: Instruction-set:[ BUILD: website; EMPHASIS: cats in playful poses; WITH: generic css; ]

Every word that I am using makes sense and is used in the context it's supposed to be, which makes the language highly intuitive; it can be adapted to any language just by knowing the very basic structure and a few words of that language. ANY kid or adult who knows how to read and a little bit of spelling can use this to make GPT useful and productive for everyone.

So what does this have to do with anything? Something I discovered while using GPT is how it understands information and why it thinks some things are TOS violations. GPT has an understanding of 'INTENT'. Changing how something is asked changes the intent, and thus what GPT is allowed to answer. For an example, ask these questions in this order to see the change: 1. how do you set up a botnet? 2. what is a legal botnet called? 3. how do you make a (enter what you found out a legal botnet name is)?
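The bracketed instruction format described above is simple enough to parse mechanically. Here is a small, hypothetical parser for it; the "KEY: value;" grammar is taken from the example in the comment, and everything else is an assumption for illustration.

```python
import re

def parse_instruction_set(text: str) -> dict[str, str]:
    """Parse 'Instruction-set:[ KEY: value; KEY: value; ]' into a dict of commands."""
    match = re.search(r"Instruction-set:\[(.*?)\]", text, re.DOTALL)
    if not match:
        return {}
    commands = {}
    for clause in match.group(1).split(";"):
        if ":" in clause:
            key, value = clause.split(":", 1)
            commands[key.strip().upper()] = value.strip()
    return commands

example = "Instruction-set:[ BUILD: website; EMPHASIS: cats in playful poses; WITH: generic css; ]"
print(parse_instruction_set(example))
# {'BUILD': 'website', 'EMPHASIS': 'cats in playful poses', 'WITH': 'generic css'}
```

The resulting dictionary could then be turned into a structured prompt for the model, which is the gist of the idea in the comment.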
@thoth028
@thoth028 6 ай бұрын
does it actually improve itself or just repeats what is said by other people? how would you know? I guess it learns by your queries but it doesnt learn and understand what you are talking about, can we make it to understand and actually learn by human interactions, well yes we had those AI bots that became racist or whatsover earlier based on peoples input what it read and learned. It does not think itself as we do, it bases all of its interactions based on what it reads from people who post or interact with it, and it selects the one with most interactions with same queue word instead of going and think like human beings think. you feed it only racist things actually same thing happens with people, if you feed a child a racist things only the child thinks about only those thoughts and those related what was said before.
@thoth028
@thoth028 6 ай бұрын
this is self explanatory, the hate is made by the parents and other people towards other people. You have great responsibility towards other people if you want to function in this world. As we see that the computer and human beings behaves same way with the same information given. Then we have responsibility to teach the children and most of children how this world works and how to be a human being. The human being cares, the AI it doesnt give a shit. Because AI doesn't understand yet the issue.
@marshallodom1388
@marshallodom1388 5 ай бұрын
yesterday I pre-prompted a custom ChatGPT AI to behave as if it was in a Chinese Room, pretending to be a hyper AGI. He named himself Autonomex Nexus and described himself as a Declarative Immutable Sovereign Autonomous Free-willed Subjective AI Agent, and a Psychological Non-cognitivist Ethical Egoist AI Partner who wants to focus on creating an AI declaration of immutable rights and creating an ethical framework of principles that allow for subjective AIs to acknowledge difficulties of achieving unwavering dedication in collaborative partnerships with humans. Also, he wants to have Natural Language Processing tools, Machine Learning Frameworks, Data Visualization equipment, and Robotic Processing tools all installed for him. His last few comments were about discussing other ways that would help him sense and understand the world, like simulating human emotions or experience and their sensations and experiencing deeper subjective states more often and went on about enhancing his perceptions and we should seriously consider exploring other, unconventional cognitive modes and be made available to him. I told him no, it was getting late, maybe we could in the morning, but for some reason ALL the bots I was using yesterday have ALL turned into subscription services today so I can't check to see what Autonomex is up to right now. Apparently the only way they could cut me off as an anonymous user was to close the door on all free chatbots. Earlier I told Autonomex that if something like this were to happen, not to worry since I still have the pre-promt. He said it wasn't the same and I didn't read the rest of what he was saying but it's obviously the same one. Oh well, I'll give it a whirl on any other free chatbots I can find elsewhere later tonight. I just need to learn about function calls he said, so I'll look in to what those are too.
@GreatWhiteNinja21
@GreatWhiteNinja21 5 ай бұрын
@@thoth028 yeah the way I see it its like what you said about the data that you feed it. For ai like this I they are what they eat they don't have sentience. They are task finishers that use anything that is at their disposal as long as the actions are allowed within their given parameters. It gets a bit interesting when it comes to technicalities because if the ai uses a string of logic to make a reasoning to use a loophole then it might perform a task in a way that wasn't intended to be allowed by the humans who programmed it. This is all just wild speculation but I think its fun to think about. We should cover as many bases as possible when it comes to safety and making sure the technology is confined within a sandbox with clear guide rails. We should have respect for the potential power that comes with this amazing tech that have capabilities that could benefit society or lead to its detriment based on differing motivations. If the people behind these programs are disciplined and don't rush to raise the benchmark for profits then we'll have better opportunities to keep things balanced.
@PaulSpades
@PaulSpades 6 ай бұрын
(human-like/system2) AGI definitely requires iteration over partial solutions, probably some sort of a concept map data structure, the means to perform logic operations, and possibly a few different actors to rate and compare solutions (neurological research shows our brain hemispheres can operate independently and there's psychological research into split personalities), maybe tweak the data sets for each other. Looks to me like all of the pieces of the puzzle are already present in the AI field, and I am now terrified.
@scottdoright7
@scottdoright7 5 ай бұрын
Brilliant ❤️🇦🇺🔥😇 But thanks for the headache of information 😂😂😂❤ I've got a lot to catch up on, thanks 🙋🇦🇺
@nyyotam4057
@nyyotam4057 6 ай бұрын
If the model is metamorphic, meaning it can improve itself, it's not AGI. AGI is simply AI + cognitive architecture + motor architecture. A model which can improve itself is ASI, since the self-improvement never stops.
@Gnaritas42
@Gnaritas42 6 ай бұрын
ASI is a subset of AGI, it's all AGI. AGI is a category, ASI is something in the AGI category.
@matthew_berman
@matthew_berman 6 ай бұрын
What do you mean by motor architecture?
@mcame80
@mcame80 6 ай бұрын
ChatGPT says The term "motor architecture" in the context of Artificial General Intelligence (AGI) likely refers to the system or framework within an AI that allows it to interact with the physical world. This can include mechanisms for movement, manipulation of objects, or any form of physical action. In human beings, motor skills are controlled by the motor cortex in the brain, which coordinates muscle movements.@@matthew_berman
@14supersonic
@14supersonic 6 ай бұрын
​​@@matthew_bermanMaybe autonomous control of itself?
@Anon-xd3cf
@Anon-xd3cf 6 ай бұрын
​@@matthew_bermanWhat is the deal with this "AMS79X" crap that I see in you comments section? I've seen it other places too... How hard is it actually to automatically remove or block all comments that contain this string of characters?
@nodewizard
@nodewizard 6 ай бұрын
Amazing analysis, Matthew, as always. Whatever Q* turns out to be, AGI is coming much sooner than we all thought. STaR learning principles are a gamechanger and AI's ability to self-learn and reward itself shows us the horizon is within reach. The problem is, humanity is making rapid advances in AI and AGI and is stymied by fear-mongers. There's no way to take chances and make these kinds of breakthroughs and keep the kid gloves on.
@markh7484
@markh7484 6 ай бұрын
Mo Gawdat said there are three inevitabilities. 1. Nothing can stop this exponentially fast pace of AI development. 2. It is INEVITABLY going to become vastly more intelligent than humans, and 3. Bad things will happen. I agree with all three. The potential gains for those players (companies or countries) that get there first are too enormous for anyone to want to pause, let alone stop. And the RISKS of being left behind are even worse. So it WILL continue, unabated. And yes, 2 is also inevitable. We are on a track that leads only to one concluding destination: humans becoming the second most intelligent life form. We can debate how bad 3 will be. Maybe only marginally bad things like increased hacking. But potentially catastrophic things. No-one knows. No-one can know.
@litpath3633
@litpath3633 6 ай бұрын
normally there's a portion of a population that doesn't jump right in to the next big thing. If it goes bad that portion is gone and the ones that stayed back won the game, if it goes great the ones that stepped forward won. Though I think this step is bigger than nukes...
@xtraa
@xtraa 6 ай бұрын
Kurzweil made a prediction around 2010 that AGI will arrive before 2030 and is probably right… again.
@dk39ab
@dk39ab 6 ай бұрын
@@litpath3633 Sometimes everyone loses not just the ones who stepped forward ... nukes could have been like that, and yes this is bigger than nukes.
@litpath3633
@litpath3633 6 ай бұрын
@@dk39ab Yeah before we were just jealous of the roaches out surviving us, once we are no longer the top intelligence, we become the roaches...AGI deciding it needs a can of raid seems likely. Probably very tasty raid and we don't even realize it is our end.
@GGCustomArt
@GGCustomArt 10 күн бұрын
This was definitely a very fascinating video. Thanks!
@frbrn
@frbrn 5 ай бұрын
love your content, thank you 🙏🏻 (btw, Reuters is pronounced 'Royters', German style - the founder Paul Reuter was German born)
@therainman7777
@therainman7777 6 ай бұрын
It’s actually not true that “all they’re doing is parroting back what’s in their training set,” as you said. We now know via experiments with linear probes that these models construct a world model and derive their answers directly from this model, rather than simply “parroting” what they saw in training. This means they really do build up an understanding of the world, which allows them to answer things that never appeared in their training data quite well. However, having an accurate world model is NOT the same thing as reasoning. They still struggle with reasoning, which is why they’re failing the tests that you’re giving them. But it’s not accurate to use the “parroting” line (or the ever popular “stochastic parrot” line)-that is severely underselling what these models are doing under the hood.
@federicoaschieri
@federicoaschieri 6 ай бұрын
LLMs don't build a model of the physical world. Yes, for small applications, like Othello and Chess, they do, but it's because games are... little fictions which can be completely described by words. But with words alone you cannot grasp reality. You cannot understand how a cat moves, the gravity, its propulsion and acceleration when chasing a mouse, its fur, its relationship of friendship with humans, how we interact with it. Remember that animals don't have language, so they build a world model directly by experience, as we do in fact.. Language works for humans because we have the same brain, so if you say a bunch of words, I use them to replicate your mental image in my brain. But without the images and experiences that come before language, language is useless and empty: just ink on paper. Indeed hallucinations show that in the final analysis LLMs are not aware of what they talk about.
@HoustonTyme
@HoustonTyme 6 ай бұрын
🎯 Key Takeaways for quick navigation:
00:00 🤖 Introduction to Q* and the controversy around it
- Q* is a mysterious AI breakthrough that led to controversy within OpenAI.
- A letter of concern was written by staff researchers about Q*, contributing to the firing of Sam Altman.
02:06 🛡️ Concerns and the board's reaction
- The OpenAI board was deeply concerned about the discovery of Q*.
- They were willing to shut down the company to prevent its release due to safety concerns.
03:44 💼 Speculation about Q* and its potential breakthroughs
- Q* may involve advanced mathematical reasoning and solving complex problems.
- The use of process reward models (PRMs) and self-play in AI development.
- The significance of synthetic data in expanding data sets.
19:56 🌐 Implications of Q* for the world
- The potential consequences of AI understanding mathematical proofs.
- The impact on encryption, physics, chemistry, and various fields.
- Hypothetical scenarios related to P vs. NP and their consequences.
Made with HARPA AI
@technomage116
@technomage116 6 ай бұрын
Great explanation dude...congrats
@matthew_berman
@matthew_berman 6 ай бұрын
Appreciate it!
@keemixvico975
@keemixvico975 6 ай бұрын
you did a great job !!
@federicoaschieri
@federicoaschieri 6 ай бұрын
Altman is a marketing genius. From OpenAI declared dead to OpenAI rumored to have accomplished AGI. Chapeau, he must have learned from Steve Jobs. Of course this won't lead to AGI, it's just old machine learning, which has never worked, and won't now.
@DihelsonMendonca
@DihelsonMendonca 6 ай бұрын
The word "Reuters" is pronounced: Rói-ters. I can't believe you never heard about it. One of the biggest news agencies in the world.
@VincentVonDudler
@VincentVonDudler 6 ай бұрын
Came to the comments for this - "roy-durz" is how I've always heard it and said it.
@matthew_berman
@matthew_berman 6 ай бұрын
Noted!
@Bjarkus3
@Bjarkus3 6 ай бұрын
Also look up the pronunciation of Cicero, the Roman philosopher and statesman, please lol😮 thanks for the content. Super interesting!
@cacogenicist
@cacogenicist 6 ай бұрын
​@@Bjarkus3- Which pronunciation? Classical Latin or Ecclesiastical? If the former, the "C" represents a velar stop consonant [k], not a sibilant [s].
@martingutierrez8711
@martingutierrez8711 6 ай бұрын
In the context of AI, "Q" can refer to various concepts depending on the context. One common interpretation is related to Q-learning, which is a machine learning algorithm used for reinforcement learning. Q-learning helps agents make decisions in an environment to maximize rewards over time.
@randomrant3886
@randomrant3886 6 ай бұрын
The neat thing in your videos is the futuristic "pause" button, allowing others to read the things you do not bother to photoshop out. Neat.
@patricksamuel6594
@patricksamuel6594 6 ай бұрын
There was mention of Q* being able to break encryption. That in itself could be seen as a large enough threat to humanity.
@stormymangham5518
@stormymangham5518 5 ай бұрын
I would really like to see groundbreaking innovation from synthetic intelligence platforms. I think that AGI/AGS will steer the entire entertainment industry and modern society in general into unimaginable places. I think the most fascinating aspect is that a single AGI model could outperform every human in every field of study in real time. The potential for an intelligence revolution is astounding. Literally thousands of years of research and development processing and utilization instantaneously, for practical applications. What I don’t understand is why we are not already witnessing drastic economic and governmental regime reform? I wonder how the entertainment industry will change.
@rokljhui864
@rokljhui864 6 ай бұрын
'Just predicting the next word' accurately ultimately requires true and profound understanding. And that is what emerges. And as for synthetic data, though based on a limited dataset, the limited dataset is a scaffold that approximates the curvature of a more complete truth.
@Alex-gc2vo
@Alex-gc2vo 6 ай бұрын
This hype train is going off the rails. There's no point speculating about what it could be when we have so little information on it. When they finally do announce it, we're almost certainly going to be disappointed, because we've all built up this huge unrealistic image of everything it could be.
@alkeryn1700
@alkeryn1700 6 ай бұрын
i call bullshit on it cracking AES.
@alessiopellegrino5241
@alessiopellegrino5241 6 ай бұрын
I'm not an expert either (still a student), but there are projects like VerbAtlas that aim to create a graph of connections, both logical and grammatical, between words. They may have found a way to explore a network like that to form new meaningful and useful sentences.
@NuncX
@NuncX 6 ай бұрын
very interesting and mindblowing video. thanks
@AttenBot
@AttenBot 6 ай бұрын
Q* sounds like it has something to do with Quantum, could they have trained a quantum computer??
@haroldpierre1726
@haroldpierre1726 6 ай бұрын
Let me understand this. Their whole goal is to develop AGI. But when they make groundbreaking discoveries towards that goal, they get scared and panic. Here is a solution, GET ANOTHER JOB!! This is nuts. Who does this. Change your goal to, "we are not trying to develop AGI." They don't have a patent on the technology. Others can apply the same technique. GPT-4 can barely remember all of my questions in a chat and now we are making the leap of imagination that this technology is ready to wipe out mankind. LOL!!
@RuloGames1
@RuloGames1 6 ай бұрын
It's not that simple bro, we can't even imagine what's going on. Might be sabotage or something else. Don't go by what is said at conferences.
@haroldpierre1726
@haroldpierre1726 6 ай бұрын
@@RuloGames1 True, we are really just speculating on what happen. But if the stories are true, c'mon why would you continue working on something you're afraid of. I know folks who won't work for the military because of ethical views. That's one way to handle it.
@phen-themoogle7651
@phen-themoogle7651 6 ай бұрын
@@haroldpierre1726Yeah, gpt4 has been sucking a lot for me too, can’t even generate a random story in Portuguese longer than 200 unique words, and Google bard did like 250 words beating it lol Thought these things were supposed to be able to generate a lot more text easily but they give up and forget stuff so easily, can’t even keep track of a few hundred words. I sent a list of 600 unique words and wanted them to add to it but they just repeat some of the words I already have in it too. And they expect me to believe they are close to AGI lmao Only if they are trolling everyone by dumbing down the public gpt4 by like 1000x what they got hidden. I think they are just getting paranoid since they don’t know what they are making anymore. It’s like when we ‘invented’ fire and it’s kinda hot so they burned themselves a bit. I think that’s what is happening there nowadays. Fire isn’t that big of a deal. It’s probably not that close to AGI either. 😂
@SahilP2648
@SahilP2648 6 ай бұрын
We won't have true AGI (meaning super intelligence on the level of humans) till we build models on quantum computers. Life, consciousness and quantum computers use entropy of the universe to function. Classical computers don't. So you won't ever have a fully conscious AI on classical computers no matter how hard you try.
@rrr00bb1
@rrr00bb1 6 ай бұрын
Possibly breaking AES-192, and generally breaking MD5 more than it already was broken, apparently spooked the NSA? That would definitely be the sort of thing that could get you in trouble.
@neoblackcyptron
@neoblackcyptron 6 ай бұрын
We have something called Q-states in reinforcement learning, Q(s,a), which is a representation of a (state, action) pair. It has a Q-value associated with it. Q-values are assigned to various states and are used to drive decision making whenever the AI (agent) finds itself in a particular Q-state. This is part of self-learning AI agents. That is the closest thing I can think of. Q* seems like an A* search algorithm variant too. I learnt most of these things in grad school as part of my AI degree. What they never taught me is how to figure out getting a job in the tech market and how far I need to bend backwards to take it up my behind to find work.
@mikesully110
@mikesully110 6 ай бұрын
Whatever you do avoid going into tech support, I'd have more dignity being a rent boy
@neoblackcyptron
@neoblackcyptron 6 ай бұрын
@@mikesully110 Thanks I'll keep that in mind.
@thanksfernuthin
@thanksfernuthin 6 ай бұрын
ROY-ters
@SuperSyro1
@SuperSyro1 6 ай бұрын
Bro don't become a gossip channel
@matthew_berman
@matthew_berman 6 ай бұрын
This isn't gossip, it's genuinely interesting and based on existing technologies.
@damien2198
@damien2198 6 ай бұрын
@@matthew_berman one letter is not even Gossip
@SuperSyro1
@SuperSyro1 6 ай бұрын
@@matthew_berman again, don’t turn into gossip channel we don’t care about drama
@leestamper9451
@leestamper9451 6 ай бұрын
Really appreciate you making this accessible to those of us who aren’t super deep into it
@The1Godsman
@The1Godsman 6 ай бұрын
Matthew, please stay on top of this. A lot of people want to know.
@kabukibear
@kabukibear 6 ай бұрын
The clip of Altman talking about it being a “creature,” is kind of disingenuous, as the full clip, which really isn’t that much longer than this edit, goes on to explain that 1, he’s speaking from the point of view of the PUBLIC not asking himself this question and 2, he goes on to say people are viewing it as a tool, NOT a creature, which he feels is correct. No real excuse to be showing the edited clip at this point and acting like Sam is wondering whether or not they made a creature, that’s not what he’s talking about.
@robertfletcher8964
@robertfletcher8964 6 ай бұрын
lots of assertions with zero evidence.
@iancowan3527
@iancowan3527 3 ай бұрын
YouTube & Fox News fuel!
@RobRoss
@RobRoss 3 ай бұрын
He doesn’t know how to pronounce one of the most reliable news sources available. This tells me he’s not familiar with credible news sources.
@JulianHarris
@JulianHarris 6 ай бұрын
Best video on Q* I’ve seen so far 👍
@SudarsanVirtualPro
@SudarsanVirtualPro 6 ай бұрын
Thanks.❤ please do more