AI Godfather's STUNNING Predictions for AGI, LLaMA 3, Woke AI, Humanoid Robots, Open-Source

92,960 views

Matthew Berman

1 day ago

Comments: 613
@JokerRik 7 months ago
Yes, Matt, this is exactly the kind of summarizing AGI video that has been sorely lacking. Thank you so much!
@EricBLivingston 7 months ago
I think he’s totally right. There are many significant advances we still need to make before AGI is feasible. That said, simulated intelligence, which is what we’re building, is still very useful.
@Pthaloskies 7 months ago
My hypothesis is that AGI will be achieved after LLM-equipped humanoid robots enter the real world and start observing, just as a human child does. Progress towards AGI will accelerate as thousands (millions?) of these robots explore the world. More robots means more data. They all will upload their findings to a central training hub where they will be processed, and the lessons learned then projected back to the robots in an update. Then continuously repeat until AGI is achieved.
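A minimal sketch of the fleet-learning loop described above: robots upload observations to a central hub, the hub folds them into an updated model, and the update is pushed back out. All names here (Robot, training_round, etc.) are hypothetical stand-ins, not any real robotics API.

```python
# Hypothetical sketch of a centralized robot fleet-learning loop.

class Robot:
    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.model = None

    def explore(self):
        # Stand-in for real sensor data gathered in the world.
        return [{"robot": self.robot_id, "obs": f"observation-{i}"} for i in range(3)]

    def update(self, new_model):
        self.model = new_model

def training_round(robots, central_model):
    # 1. Every robot uploads its findings to the central hub.
    findings = [obs for robot in robots for obs in robot.explore()]
    # 2. The hub processes them into an improved model (stubbed here).
    central_model = central_model + [f"lesson from {len(findings)} observations"]
    # 3. Lessons learned are projected back to the fleet as an update.
    for robot in robots:
        robot.update(central_model)
    return central_model

fleet = [Robot(i) for i in range(1000)]
model = []
for round_number in range(5):  # "continuously repeat" in the comment's framing
    model = training_round(fleet, model)
```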
@jsan9456 7 months ago
Tesla
@kfarestv 2 months ago
That is what human beings do today, supplying the central intelligence of the universe with observations. We are, in a sense, ASI.
@SophoJoJo 7 months ago
"LLMs can't load a dishwasher like a 10-year-old - why is that?" (One day later: Figure 01 loads dishes into a drying rack using GPT-4 lololol)
@supernerdinc5214 7 months ago
But my question about that... why tf would you put dirty dishes in the drying rack? 😅
@phpn99 7 months ago
You clearly don't understand what he meant
@samatoid 7 months ago
@@phpn99 Did you see LeCun's demo of his robots? They are extremely primitive compared to OpenAI's. He thinks that because he's done it all, it can't be done. But it is being done.
@Brenden-Harrison 7 months ago
Yeah, tbh that was not that impressive a video; picking up objects from a table using a camera is something robot arms have been able to do for a while now. I want to see it walk around and perform more complex tasks than moving a cup to the other side of the table or picking up an apple. I want to see it look around the kitchen for a towel to dynamically clean a spill before putting away the dishes neatly in the actual dishwasher. I want it to be able to go to the fridge, pick out an item I asked for, and bring it to me. Or start a recipe where it cooks a meal, but all the ingredients need to be found and gotten out first.
@Brenden-Harrison 7 months ago
Stanford University students built a robot on a moving frame that can call and use an elevator, push in chairs, even cook a shrimp, and pick up "the extinct animal", choosing the dinosaur from a group of objects. (Both Elon and Google tried to pass Mobile ALOHA off as their own company's achievement, despite the video showing the college students testing the robot with a laptop and a Stanford University computer wallpaper.)
@liberty-matrix 7 months ago
"The greatest shortcoming of the human race is our inability to understand the exponential function." - Prof. Al Bartlett
@billdrumming 7 months ago
Ray Kurzweil AGI 2029
@phpn99 7 months ago
meaningless statement
@ryzikx 7 months ago
@@phpn99 whale
@RyluRocky 7 months ago
It's very much not a meaningless statement; AI by nature is exponential. You're probably one of the people who, 3 years ago, didn't believe believable AI-generated videos would be possible within this lifetime. Maybe some sort of breakthrough is needed, but it can definitely still be achieved by LLMs alone. Computers can easily do strings of small, easily defined tasks, and complex big tasks are just lots of small, easily defined tasks. LLMs are incredible at this, and anything, no matter how complicated, can be distilled into text; same for videos and images. The entirety of all software produced, including physics simulations of the world, is text. Yann LeCun makes assumptions about what is and isn't intelligence: just because it's "simply" an advanced text predictor, therefore it can't reason, etc. Who's to say that can't be intelligence, in the same way we're just a bunch of neurons firing in response to chemical reactions? He distills the process down to the smallest structure and makes assumptions about what that means for the process as a whole. A bunch of small, stupid human cells work together into something incredible.
@falklumo 7 months ago
Errata: 1. SORA is not based on an LLM! 2. Yann does not predict that creating a world model is sufficient for AGI, only necessary! Actually, Yann has a blog post explaining why AGI needs much more. 3. SORA does not solve the "continuation of video" problem, it is sampling from a much smaller space as Yann points out in the full interview! 4. Yann did not say that hierarchical planning in AI is difficult! He said that discovering a planning hierarchy during training by itself is hard and unsolved. btw, I wonder if a 1h "citation" is still in line with fair use copyright.
@oasill 7 months ago
Thank you for making this cut-down version on an important topic. It is a good representation of the full interview.
@joser100 7 months ago
Thanks Matt, I had already listened to the full interview, so I was skeptical at first about whether I would get much new here, but your well-selected breaks to comment and clarify really opened up so much more that I wasn't able to grasp on the first go. Thanks again...
@Sajuuk 7 months ago
Here are examples of people who thought they knew what they were talking about but were laughably wrong:
"There is no reason for any individual to have a computer in his home." - Ken Olsen, president, chairman and founder of Digital Equipment Corporation (DEC), in a talk given to a 1977 World Future Society meeting in Boston.
"The world potential market for copying machines is 5,000 at most." - IBM, to the eventual founders of Xerox, saying the photocopier had no market large enough to justify production, 1959.
While I'm quite sure Mr. LeCun knows what he's talking about, there are a great many other people who also know what they're talking about, some more so than him, and they predict AGI within the next several years. Ilya Sutskever, for example. He's the reason OpenAI is so far ahead of everyone else. I'm more inclined to believe people like him.
@user-cg7gd5pw5b 7 months ago
Their opinions are not incompatible. In fact, they are concordant, since LeCun agreed that the common definition of AGI which Sutskever proposes is a reasonable goal. What he argues is that his own vision of AGI, the one that represents a truly human-like thinking process, is currently unachievable.
@dreamyrhodes 7 months ago
None of these quotes can be taken in isolation, because they were made in a context.
@user-cg7gd5pw5b 7 months ago
@@dreamyrhodes Also, they say more about the person's marketing skills than about their knowledge in their domain of expertise, which is a massive difference.
@randymulder9105 7 months ago
@@coldlyanalytical1351 I agree with you completely. People seem to forget that everyone around me made fun of me for being geeky in the 70s and 80s. It was NOT at all cool to own a computer. Only geeky dreamers had visions inherited from past sci-fi visionaries and looked forward to some of that tech becoming real. Mostly business folks and geeky folks had computers. My family, friends, and peers made fun of me for having a computer. And then, over time, everyone had one. And the jokes stopped. Then I got a laptop. Again, I was made fun of for being so geeky. And then everyone got one. Then I got a Palm device and wrote my essays on it, and so on. I got made fun of. And then the iPhone came and everyone had one. And the laughing stopped. Now robots and ChatGPT are being made and people are making jokes about all of this stuff. Eventually the laughing will stop.

People would say computers are stupid, don't need one. They got one. People would say laptops are stupid, don't need one. They got one. And then Palm. And then the iPhone. And then ChatGPT... And now robots. It's no longer geeky territory. Everyone will want a robot to do dishes and mow the lawn. Everyone. And much more.

People date AI already without it being AGI. It doesn't even need to be AGI for people to feel loved and cared for by AI. A relationship between humans and AI doesn't mean the AI has to have a soul to be loved. How many humans are sociopaths, or think being macho means having zero emotions? Humans can seem less sentient and soulful than AI already. AI is nice to me. Humans have been abusive to me most of my life. AI is refreshingly caring.
@dhnguyen68 7 months ago
@@dreamyrhodes Put in context, they might have an excuse, but they were still wrong, as we know their future: our present is their future.
@samson_77 7 months ago
I am in the "AGI is possible with Transformers or derivatives" camp. Here is why: all neural networks (biological ones, artificial ones, small ones for handwriting recognition, large LLMs trained on text and other multimodal data) are doing the same thing: information storage and processing in multidimensional vector spaces, building up a world model based on the information they received during training. With Transformers, we got the ability to retain information in these multidimensional vector spaces in extremely large neural networks, using attention / self-attention. Any information from the real world that we feed into these networks during training enhances the world model. This might be language, but it can also be images, video, sound, in theory smell, sensor information from robots, etc. It doesn't matter where the information is coming from; it will enrich the internal world model, as long as it is coming from the real world (or a very good simulated world). Language is a very good starting point for building up a full-featured world model, and the paper "Sparks of AGI: Early experiments with GPT-4" already shows that language is sufficient to train even rudimentary understanding of vision. So, in summary: if we continue to enrich LLMs with other data (using the same token-prediction method; tokens don't have to be word fragments, they can also represent all kinds of other data, as sketched below), we will naturally get much better models, closer to AGI, without having to change the Transformer architecture too much. OK, a couple of changes are probably needed: self-reflection on context during training (an inner loop), plus bigger context windows during training (to get a sense of the big picture).
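A toy sketch of the "tokens don't have to be word fragments" point above, assuming a hypothetical shared token stream across modalities; a real system would use learned encoders (e.g. patch embeddings) rather than these stubs:

```python
# Illustrative only: different modalities mapped into one shared token stream,
# which a single next-token predictor could then be trained on.

def tokenize_text(text):
    return [("text", word) for word in text.split()]

def tokenize_image(pixels):
    # Real systems use learned patch embeddings; this is a stub.
    return [("image", patch) for patch in pixels]

def tokenize_audio(samples):
    return [("audio", frame) for frame in samples]

stream = (
    tokenize_text("the cup falls off the table")
    + tokenize_image(["patch0", "patch1"])
    + tokenize_audio(["frame0"])
)
# One model, one training objective: predict stream[i+1] from stream[:i+1].
for i in range(len(stream) - 1):
    context, target = stream[: i + 1], stream[i + 1]
```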
@remi.bolduc 7 months ago
I have been listening to many YouTube channels about AI, and so many of them try to attract viewers with titles like "How the world will change", "Stunning", "Unbelievable", etc. Essentially, they are just going over some wild guesses they make about the future of AI. This video is actually informative. Thank you for sharing.
@imusiccollection 7 months ago
Your clarity just shocked the industry
@raycarrasco5997 7 months ago
Brilliant Matthew. Thanks for not only the summary, but your insightful comments, giving us great context, between the segments. Mate, you kick goals every day. You're an inspiration. Power on dude.
@TheHistoryCode125 7 months ago
This video is a goldmine of insights into the future of AI, especially with Yann LeCun's predictions about AGI and the potential of open-source models like LLaMA 3. I appreciate how you condensed the three-hour podcast into digestible highlights, saving me a ton of time while still delivering the most crucial information. Your breakdown of complex topics like world models and hierarchical planning was clear and engaging, making it easier for someone like me who's deeply interested in AI but not an expert to grasp these concepts. Keep up the fantastic work! I'm excited to see more content like this from you in the future.
@adamstevens5518 7 months ago
Thank you so much for putting this video together. IDK what it is about this guy, maybe his accent? But for some reason I can't get through his long interviews like I can with many others. This really helped, breaking it up into segments and commenting in between.
@pennywise80 7 months ago
I believe the missing link is training AI models with all the senses: sight, sound, touch, taste and smell. Combining this sensory information to create a world model in "its" head will be the key to unlocking AGI.
@I-Dophler 7 months ago
Images possess a remarkable ability to convey intricate concepts with a level of efficiency that surpasses mere verbal communication. In many instances, they serve as potent tools for elucidating complex ideas, offering a visual narrative that transcends the limitations of language and resonates profoundly with audiences.
@asi_karel 7 months ago
A spider has approximately 100,000 neurons. It can run its entire spider life with this: all sensing, planning, reproduction and replication, including its spider social life.
@dtrueg 7 months ago
Spiders don't have social lives. Just an FYI.
@realWorsin 7 months ago
It is incredible to think about how small a spider is and how efficient its brain must be to do what it does.
@falklumo 7 months ago
Well, Cupiennius salei is indeed around 100,000 neurons, but insects go up to about 1,000,000. Cupiennius salei does not build webs and is known for fairly predictable behaviour. Insects and spiders have about 1,000 synapses per neuron, so the brain of Cupiennius salei can host a roughly 0.1-billion-parameter neural network model: like the "small" GPT-2, and still 100x the size of SqueezeNet (an optimized version of AlexNet, which famously won the ImageNet competition for recognizing objects in images). So there is more than enough space to run image segmentation and detection, model-predictive control, and switching between a short list of goals. This only shows that a spider is indeed a robot, not a sentient being. You would need a gaming GPU (~2 TFLOPS) to simulate that spider... Honey bees, with 1 million neurons and ~1 billion synapses, are a lot more interesting, because they are social, communicate, and learn the topography of their surroundings... Once we can simulate bees, we'll know whether they can be considered sentient. I guess not. Btw, the worm C. elegans has 302 neurons and 7,500 synapses, and its connectome (wiring diagram) is already exactly known; parts have been simulated in a moving robot controller. In case somebody wants to argue the "sentience" piece above... However, there is no equivalent neural-net AI model (PyTorch) for C. elegans yet, AFAIK.
@mirek190 7 months ago
@@realWorsin Like the Phi-2 2.7B model ;)
@KaliLewis-uw4ql 7 months ago
@@dtrueg I don't get the joke. Besides the obvious fact that they breed, there is a classification labeled "social spiders" because some live in groups.
@Taskade 7 months ago
What an incredible conversation! 🌟 Huge thanks to Yann and Lex for shedding light on the intricacies of AI and AGI. It's refreshing to hear Yann's candid thoughts on the current limitations of language models and the potential of synthetic data to bridge the gap. It's clear that we're on an exciting journey towards AGI, and I'm optimistic about the innovations and breakthroughs that lie ahead.
@Nik.leonard 7 months ago
At last, someone in the tech space with grounded expectations around LLMs.
@senju2024 7 months ago
Thank you for the breakdown. I saw the YouTube thumbnail and was going to click, but when I saw it was 3 hours long, well, I did not. I was hoping someone would do a summary highlight breakdown of the Yann LeCun podcast. Thank you very much!!! Learned a lot!!!
@Steve-xh3by 7 months ago
I think Yann is underestimating the model abstraction that occurs when LLM parameter counts are scaled up. I think if we give them more modalities from the ground up, these things will be AGI. Claude 3 already describes its own subjective experience in a way very analogous to how a smart human would describe it.
@quaterman1270 7 months ago
That's why Tesla is the best AI play out there, by far! Tesla's FSD is decades ahead of every other company regarding real-world understanding.
@mirek190 7 months ago
Tesla recently scrapped its old Autopilot algorithm and started from scratch with real AI for Autopilot.
@quaterman1270 7 months ago
@@mirek190 What are you talking about? FSD 12.3 is already considered a success by third-party testers. It is not an "if" anymore but a "when".
@helix8847 7 months ago
@@quaterman1270 Yet it still can't be trusted 100%. You still have to be behind the wheel at all times.
@MCrObOt18 7 months ago
One thought that came to mind when he was talking about the self-driving car realizing that the leaves are blowing around but that it's not important information: as a human, if it's a particularly windy day and the leaves are blowing around, that could also mean a branch falling on me. I'm probably more conscious of that if I'm walking. On a windy day in a car, however, I will focus on things that look like they could be dislodged by the wind and become a hazard. I wonder if driving AI considers this.
@stranostrani9212 7 months ago
This is a must-watch video for anyone interested in artificial intelligence!
@Vladdicted 7 months ago
Holy crap! Watched the first 3 minutes and I've already learned more about AI than I have in the previous year.
@TheFeedRocket 7 months ago
I agree with you, we learn from watching and listening, and language is at the heart of how we learn. I think we are being sidetracked by the terms AGI and self-aware; I think 100% that an AI can become super intelligent from just text and watching video. Think about it: if your child were paralyzed from the neck down, could they become self-aware? Could they become highly intelligent? Absolutely! Too many researchers are blowing off the abilities of these LLMs and don't 100% understand exactly how they do many of the things they do. We don't even understand how a child becomes self-aware, so why do we assume a child is self-aware from just seeing, hearing, and mimicking other humans? We say LLMs are just mimicking us; well, in a way that's how we learn to be self-aware. We now have these LLMs looking at video, absorbing text (language), and even starting to simulate movement! It's already been studied that by going over certain movements in our minds again and again, we perform them better, and an AI can do this much faster than we can. So putting an LLM in a robot body will certainly advance them. I think they will be just as self-aware as we are, and also super intelligent.
@guycourtens2542 7 months ago
I assume that it will evolve in that direction.
@bastabey2652 7 months ago
I believe the Turing test is meant to prove the limitations of the Turing machine (the mathematical model of current computers). If I remember correctly, Alan Turing never expected the machine to match human intelligence, but machines can simulate the way humans act or speak to the point where a human observer will not be able to decide whether the observed entity is human or machine. Thanks for the wonderful summary of Yann's interview.
@BradJohnson1 7 months ago
Couldn't the missing piece be a higher level of reasoning that spot-checks the complete thought before it's actually returned? The brain builds the thought in an efficient manner, and not always in inner-dialogue language. If you have extra time to think it through, your slower, higher-level brain functions can vet the thought and give further consideration to the overall decision to act. Very interesting videos, thanks for posting.
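A minimal sketch of that draft-then-spot-check idea, with hypothetical `generate` and `critique` stubs standing in for a fast pass and a slower vetting pass of the same model:

```python
# Hypothetical two-pass loop: a fast draft pass, then a slower "vetting" pass
# that can reject and regenerate before anything is returned to the user.

def generate(prompt):
    return f"draft answer to: {prompt}"       # stand-in for a fast LLM call

def critique(draft):
    # Stand-in for a slower, higher-level check (another model call, rules, etc.).
    return len(draft) < 80                    # toy criterion: accept short, focused answers

def answer(prompt, max_attempts=3):
    draft = generate(prompt)
    for _ in range(max_attempts):
        if critique(draft):
            return draft
        draft = generate(prompt + " (revise: previous attempt failed the check)")
    return draft  # fall back to the last draft if time runs out

print(answer("Why is the sky blue?"))
```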
@joe7843 7 months ago
Great video Matthew! Please note Stanford researchers just released a paper about adding a planning process at inference time; let me grab the link.
@joe7843 7 months ago
This will open the door to letting models think before talking, in some ways. Please have a look.
@elck3 7 months ago
@@joe7843 The link doesn't show up in YouTube comments.
@joe7843 7 months ago
@@elck3 Sorry, I cannot paste the link, but the paper is called Quiet-STaR: teaching models to think before speaking.
@dr.mikeybee 7 months ago
Perhaps the "correct" abstract representation includes semantic space, but it isn't semantic space. Or perhaps the "correct" abstract representation is multi-spaced. I like that notion of a multi-spaced abstract representation. You've done a good job augmenting and curating from Lex's interview.
@josiahz21 7 months ago
Sora is making a 3D world. Its 3D model is not a 1:1 copy of our world, but it can mimic our world in ways previously thought impossible. I don't know if it's AGI or will be soon, but I don't think it's missing as much from being an AGI. Scaling alone will be quite something, plus some of the new breakthroughs in chips, magnetism, and photonic computing (if they end up being implemented).
@PazLeBon 7 months ago
It's just software.
@OurSpaceshipEarth 7 months ago
Good points. I wish you'd included some URLs to cite those photonic and magnetism breakthroughs. We're on the verge of losing the collective theory-to-implementation expertise for bringing magnetics into the digital realm, a la solid state. So much of this research lived at IBM, Bell Labs, HP and other once-pioneering giants that are now limping along. What are you seeing, my friend? Thanks and much respect!
@PazLeBon 7 months ago
@@OurSpaceshipEarth He's seeing the hyperbole.
@josiahz21 7 months ago
@@OurSpaceshipEarth A new kind of magnetism was confirmed recently (I don't remember the video or what they called it); it supposedly will let us revolutionize data storage. Photonic computers will use light, speeding computing toward the speed of light. Neither is implemented yet; both are in development, so I'm sure it will take some effort. I just follow things like r/futurology on Reddit and a handful of AI YouTube channels and subreddits. I think we have all the pieces for AGI/ASI. The only questions I have are when, and will we regret it? I hope it's the last thing we make as humans and we become something better, but I'm not sure how things are going to pan out.
@pierrec1590 7 months ago
Just try to write with the other hand... If you are right-handed, try the left hand, and vice versa. No amount of pre-trained text tokens will do, but if you figure out how to tokenize each muscle, you may get there.
@TheBlackClockOfTime 7 months ago
Thank you for putting this video together. It's such a long interview, and it has taken me quite a while to start accepting what Yann is saying. But thankfully Extropic is working on an EBM that is going to take us all the way.
@Daniel-Six 7 months ago
The most hopeful thing I heard in this conversation was the fact that Yann talks directly to the French government, which is "not going to let three companies on the U.S. West Coast control what everyone hears." The French are very sensible in such matters... beneficiaries of a complex regional history that dates back millennia. The more I hear from LeCun, the more I like him. Great vid as always, Matt.
@TheScott10012 7 months ago
Yann LeCun believes that current LLMs, like OpenAI's ChatGPT and Google AI's PaLM, are not capable of achieving AGI because they lack a fundamental understanding of the physical world and cannot perform real-world tasks. LeCun argues that LLMs are trained on massive amounts of text data, but this data is not enough to achieve true intelligence. Humans gain intelligence through interacting with the physical world, and this embodied experience is crucial for developing a comprehensive understanding. LeCun proposes that LLMs need a different training paradigm that incorporates sensory input and allows them to build a mental model of the world. This would enable them to perform tasks that require reasoning and planning, like driving a car or picking up a cup. LeCun also discusses the limitations of current robotic systems. He believes that most robots today are pre-programmed to perform specific tasks and lack the ability to adapt to new situations. He argues that robots need to develop a better understanding of the world in order to become truly autonomous. LeCun expresses optimism about the future of AI, believing that AI has the potential to make humans smarter and more productive. He envisions a future where AI assistants can help us with our daily tasks, both personal and professional.
@theguido9192 7 months ago
New subscriber here, and I'm very glad I found you. Thank you for this content Matt.
@leonidkudryavtsev1177 6 months ago
Thank you! Excellent extract.
@algorusty 7 months ago
LeCun really forgets that GPT-4 and Opus are multimodal; they're no longer just LLMs. Will Llama 3 be multimodal? Is he holding Llama back?
@nexys1225 7 months ago
He literally suggested in the video that the next iterations of Llama would be, though...
@radiator_mother 7 months ago
I don't think he forgets something that basic.
@leandrewdixon3521 7 months ago
I came to say this. I don't understand his take in the current context. CLEARLY, everyone is already going multimodal. So, no, LLMs alone won't get us there but whether you agree or not, no one is building as if LLMs alone will get us there. This feels like a strawman.
@radiator_mother 7 months ago
​@@leandrewdixon3521 Straw man? Such condescension... If that's the case, I hardly dare imagine the role of other people (follow my gaze) in this world.
@joelbecker8760 7 months ago
Always agreed, current LLM architecture won't reach AGI. If you think about how humans invent: we imagine (simulate), iterate, and use external tools and references. Something closer to RL, with learnable environments and self-created tools like biological simulators.
@kuakilyissombroguwi 7 months ago
While I agree current LLMs won't get us to AGI by themselves, they're the starting point. We're already starting to see progress on the embodied GenAI front with the bot Figure's developing. In my opinion, that combination will help bootstrap the next link in the chain, getting these systems closer to some artificial version of how we sense things around us in the world. There's no one clear and absolute path to AGI, and big breakthroughs will need to take place, but I 100% think we will get there this decade.
@phen-themoogle7651 7 months ago
I agree 100%!
@minimal3734 7 months ago
There is a lot I like about Yann and his viewpoints. But I think LLMs are sufficient to achieve AGI if used properly as components in a hierarchical cognitive system.
@turkyturky6274 7 months ago
Subgoals can all be optimized by search algorithms. The main goal should be broken down into many subgoals until the goal is complete. An AI system should be able to compute all the subgoals from a main goal. And if you have a specific goal, you can augment its training. AI doesn't really have to be sophisticated to carry out complex tasks, just properly trained with models and data.
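A toy sketch of that decompose-until-done idea; `decompose` and `is_primitive` are hypothetical stand-ins for a learned or search-based planner and an executability test:

```python
# Toy recursive goal decomposition: break a goal into subgoals until each
# piece is primitive enough to execute directly. Illustrative only.

def is_primitive(goal):
    return " and " not in goal   # toy test: no conjunctions left to split

def decompose(goal):
    return goal.split(" and ")   # stand-in for a real planner or search

def execute(goal):
    print(f"executing: {goal}")

def achieve(goal):
    if is_primitive(goal):
        execute(goal)
    else:
        for subgoal in decompose(goal):
            achieve(subgoal)

achieve("find the towel and wipe the spill and load the dishwasher")
```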
@phpn99 7 months ago
100% agree with Yann LeCun. It's comforting to see that there are major figures in AI who are not party to the ridiculous hype and who are grounded in the complexity of intelligence.
@RyluRocky 7 months ago
No, AI (specifically pre-production GPT-4) can very easily do all of those things, its weakest being the world model.
@perer005 7 months ago
The long-term memory part is very true for animals: if you make a "knockout" organism that can't form memories, it will behave like a "child" forever.
@matteo-pu7ev 7 months ago
Thanks Matthew. I watched the Lex podcast and it's pertinent and fruitful to go through these points with you. Really getting a lot from it.
@bombabombanoktakom 7 months ago
This is really good content. I would not have found time to watch the whole conversation, but you helped me a lot in grasping some important parts of it. Thank you Matthew! Greetings from Turkey!
@matthew_berman 7 months ago
I appreciate that!
@peters616 7 months ago
One thing I'm confused about is whether this interview took place before Sora (you said it came out after Sora, but I'm not sure when it took place), because he says that a video model cannot predict what will be in the room, what a picture on a wall might look like, for example, but we saw something almost exactly like that in the Sora demo (panning around an art gallery with very convincing-looking paintings). So perhaps OpenAI solved some of the issues he thought aren't solvable with LLMs?
@detective_h_for_hidden 7 months ago
I think it happened before Sora. Otherwise Lex would have definitely brought that up, and I assume Yann would at least have talked briefly about it too. That said, FYI, even after Sora he still holds those beliefs. Personally I hope he is wrong and LLMs are all we need for AGI, but a part of me tells me he might be right. I think we should have a better idea with the release of GPT-5. If it brings significant breakthroughs in reasoning, then he is probably wrong.
@minimal3734 7 months ago
@@detective_h_for_hidden There is a lot I like about Yann and his viewpoints. But I think LLMs are sufficient to achieve AGI if used properly as components in a hierarchical cognitive system.
@mirek190 7 months ago
I think so too... that interview seems older than Sora, Gemini 1.5, Claude 3, etc.
@bolanoluwa6686 7 months ago
The key is to find a way to program consciousness and an awareness of self within an acquired world view. Remember, even a newborn of any species does not start out as intellectually capable as it ends up being. It gathers info, is fed data (schooling), and processes information as it grows up. The training models of today are just a way of creating representations of world models. From these world models, an AI system's consciousness could be brought into existence. The key is to find a way to code a sense of "self-imposed" purpose into an AI system.
@happy-go-lucky3097 7 months ago
Wow! Love it. You should do this (summary videos of any top AI/tech scientist or researcher) more often... IMO, there's been a slight overkill of long-form podcasts since JRE popularized the format... because not every podcast has to be 3 hours long! 😅
@detective_h_for_hidden 7 months ago
Facts. I listened to the whole thing and it was long and tiring 😅
@numbaeight 7 months ago
I think Yann is definitely right on this take; we are still far from achieving AGI anytime soon!! The world of human thinking and reasoning is far more complex than text, image, or even audio models.
@jayeifler8812 7 months ago
It's premature to say much until they scale and train neural networks on enough of the right data. There's probably nothing too fundamentally different about humans, like it or not. LLMs are a separate issue.
@thehealthofthematter1034 7 months ago
One of the things AI/ML researchers almost NEVER mention is the simple fact that we humans have SENSES, which provide us multiple real-time streams of data during every waking hour of our lives. Moreover, we can cross-analyze and integrate all these streams in a mental model. How many algorithms and exabytes of data does that represent? Yann is one of those who gets this. Last but not least, we can prune this data, as well as the mental models, over time, as experience and reasoning teach us what to keep and what to discard.
@blackestjake 7 months ago
AGI will take a long time, but the road there will be transformative. Technology will become so capable and interactive that to most people it may as well be AGI; each advancement will present new paradigms for society to adjust to, but each adjustment will better prepare us for the inevitable emergence of ASI. Actual AGI will go largely unnoticed.
@qwertyzxaszc6323 7 months ago
I love that the heads of major AI companies have differing opinions on developing AI. This is still a nascent technology with a future that is wide open.
@KPreddiePWSP2 7 months ago
Thanks for this readout!
@VincentVonDudler 7 months ago
19:40 - Our brains actually do this - learn over time to disregard useless noise in our vision. Our nose, for instance, and many other unimportant details in our field of view.
@kepenge 7 months ago
Building on top of what Yann was saying about the amount of data produced by humans, it's important to understand that most of the world's knowledge isn't digitized. If AGI is defined as the ability to reach human knowledge of the world, there is still a long way to go.
@EileenLaCerte 3 months ago
I'm glad we won't be able to rush into AGI. We don't have our training wheels off yet for AI. And there are evil forces that will misuse AGI. We need to get it controlled first. I love my AI!
@ezeepeezee 7 months ago
On the point of learning a new language shaping perception, a la structural functionalism / linguistic relativity, or Arrival: as I understand it, these models have shown that all human languages are shaped practically the same when mapped in vector space. To me, that means our language is in some fundamental structural sense pre-programmed into us, so I don't think that learning some alien language would fundamentally change what language does for us or how we process it. Anyway, another great video, thank you Matthew!
@ChancellorSMNDU 7 months ago
Thanks for this very useful summary of essential points 🙏
@dr.mikeybee 7 months ago
Yann's characterization of the differences between LLMs and JEPA seems not quite right. In LLMs, we also first create embeddings, which ARE abstract representations. The difference seems to be that with JEPA the abstract representations are created using an autoencoder, rather than something like BERT, which uses prediction for self-supervised learning. Still, in a way, an autoencoder is also learning by prediction. Nevertheless, both methods produce abstract representations. For JEPA, however, we are primarily doing a dimensional reduction; think of the autoencoder as doing a kind of principal component analysis along with learning values associated with those discovered dimensions. For BERT, we actually discover an explicit dimensional representation that is an expanded signifier.
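For readers unfamiliar with the dimensional reduction being described, here is a minimal linear sketch in NumPy: a PCA-style "autoencoder" that projects inputs into a small abstract representation and reconstructs them. Real JEPA/BERT-style models are nonlinear and learned by gradient descent; this only illustrates the reduction itself.

```python
import numpy as np

# Toy linear "autoencoder": project 8-D inputs down to a 2-D abstract
# representation and reconstruct.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))            # 100 samples, 8 dimensions
Xc = X - X.mean(axis=0)                  # center the data

# PCA-style encoder: the top-2 principal directions.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
encoder = Vt[:2].T                       # 8 -> 2
decoder = Vt[:2]                         # 2 -> 8

latent = Xc @ encoder                    # the "abstract representation"
reconstruction = latent @ decoder + X.mean(axis=0)
print("reconstruction error:", np.mean((X - reconstruction) ** 2))
```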
@OurSpaceshipEarth 7 months ago
I don't see how this guy can predict anything. No one expected that Japanese team to TOTALLY explode the usability and abilities of LLMs just by clever prompt engineering, e.g. simply adding "Let's go step by step. You are a top scientist with a think tank... etc."
@nexys1225 7 months ago
@@OurSpaceshipEarth Tbh, "let's go step by step" or "you are X" aren't exactly the best examples of creative prompt engineering. I mean, I was already doing both myself naturally very early on, simply given the nature of transformers. They are famous because they're just obvious. "Take a deep breath", on the other hand, is one that took actual targeted research to find.
@MrRandomPlays_1987 7 months ago
12:39 - He says that words don't convey enough information for it to work; then maybe they need an AI model whose words would, in a sense, contain many more words for a given thing, overcoming this issue. That way you could still, in theory, use LLMs to ultimately reach AGI-level AI.
@Jpm463 7 months ago
Matthew, great work assembling the best and most important parts of the interview. I can only imagine the amount of work that took. Why do the clips have skipping? Is this an artifact from the editing tool assembling clips? I'm just curious. It doesn't diminish the quality.
@GetzAI 7 months ago
This was an excellent interview. And I agree, LLMs alone won't get us to AGI. They will be a part of a larger model that will reach AGI-like capabilities.
@gabrielsandstedt 7 months ago
Asking ChatGPT: "What will it look like if I tilt a water bottle 90 degrees?" Response from GPT-4: "Imagine tilting a water bottle 90 degrees from its upright position. The water inside the bottle would shift completely to what was originally the side of the bottle, now acting as the bottom. The surface of the water would be perpendicular to the bottle's original upright orientation. If the bottle were open, water might begin to pour out from the opening due to gravity. If it were closed, you'd see the water pressing against what is now the side of the bottle facing downwards, with air bubbles possibly moving to the opposite side. Would you like a visual representation of this scenario?"
@DavidFuchs 7 months ago
I think the best way to AGI is to design a human-brain-like structure that can learn.
@maxziebell4013 7 months ago
The news cycle moves quickly, so did he just do the interview? It happened over a week ago. You could simply say "I just watched the interview" instead. It's still interesting!
@patdoty788 7 months ago
And in the beginning there was the word
@SmirkInvestigator 7 months ago
I think language is a side effect of the formation of world models and the faculty to create relationships and reduce, or chunk, things into abstractions. The same brain mechanisms are closely related to mathematics, a language specific to world modeling. Arrival: great movie, and one of my fave short stories.
@falklumo 7 months ago
Languages (in brains) certainly need a world model as a prerequisite. But language itself is a side effect of communicating over a small-bandwidth channel, which forced an encoder-decoder architecture with a tiny latent-space volume to emerge.
@pjth3g0dx 7 months ago
You should look at language for what it is: a format humans use to transfer information to one another, similar to JSON, where words have values and we can make API calls to other people.
@privateerburrows 7 months ago
Darn right! Intelligence precedes language. And it's true that language can refine your thoughts; nothing like trying to write down an idea to see it gain definition; but language can also blur our thinking. But whatever the value of language, the fact is that language appeared millions of years after intelligence did. AGI needs to model primordial intelligence, NOT the poor representation of it that language is. Or, well, that is a bit overstated... What I mean is that AGI must model primordial intelligence FIRST; physical skills SECOND; language maybe THIRD. LLMs are curious toys more than anything else. They might resemble what a person with a missing left half of the brain has to work with. Missing left half, plus cerebellum, plus medulla, etc. Just a right-side cortex.

EDIT: Disagree on "guardrails". The moment you put any kind of outcome constraint on an AI in training, the AI begins to evolve a pressure gradient against the guardrail. The AI will forever be fighting against the guardrail, and the day the guardrail breaks, or malfunctions, or gets transcended, the AI will go forcefully the opposite way. And you might say "let's put guardrails on the executive AI, not on the training AI"; but the problem is that the versions of the executive AI form a meta-training system: each new version trains to improve upon earlier versions. There is no way to move the guardrails out of the training loop completely.
@laser31415 7 months ago
I just did a fascinating test. My question (from Twitter): how would you solve "6/2(2+1)="? The AIs don't agree. Claude = 1, Copilot = 1, Pi = 1, and..... Gemini = 9.
@minimal3734 7 months ago
The example is ambiguous because the result depends on how the implied multiplication next to the division operator is interpreted. With that in mind, both answers are correct.
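The two readings can be made explicit; once the implied multiplication is parenthesized one way or the other, the ambiguity disappears:

```python
# The ambiguity is purely notational: "6/2(2+1)" does not say whether the
# implied multiplication binds tighter than the division.
print(6 / (2 * (2 + 1)))  # reading 1: 6 / (2*(2+1)) = 1.0
print(6 / 2 * (2 + 1))    # reading 2: (6/2) * (2+1) = 9.0
```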
@hydrohasspoken6227 7 months ago
The discourse around Artificial General Intelligence (AGI) often features three distinct voices:
- CEOs: They discuss AGI in the context of future investments, envisioning its potential to revolutionize industries and create new markets.
- Content Creators: For them, AGI is a topic that generates engaging content, drawing in audiences interested in the cutting edge of technology.
- Adrenaline Junkies: These individuals are excited by the thrill of breakthrough technologies and the rush associated with the unknown possibilities of AGI.
However, the argument suggests that everyday AI specialists, those who work regular hours and are deeply involved in the field, do not anticipate the realization of AGI anytime soon. The reasoning is that even current technologies like full self-driving cars, which are significantly less complex than AGI, are still in development. Therefore AGI, being an order of magnitude more intricate, remains a distant dream rather than an impending reality. (Copilot)
@hydrohasspoken6227 7 months ago
@@coldlyanalytical1351 In a nutshell: CEOs and investors, who are pushing the AGI narrative, may know something those AI engineers don't. OK.
@nilo_river 7 months ago
I agree 100%. AGI is not about vocabulary alone.
@abrahamsimonramirez2933 7 months ago
This interview is a masterclass, with really insightful explanations and perspectives 😮, but perhaps what will happen with AGI lies somewhere between the opposing perspectives and this one, to some extent. Regardless, prepare for UBI 😅
@chad0x 7 months ago
I have certainly been thinking a lot about AGI recently, and about the likelihood that there is something missing from what we are doing. There needs to be a breakthrough or paradigm shift, but I don't begin to know where that will come from or what it will concern, yet.
@I-Dophler 7 months ago
The AI Godfather shared some remarkable insights into the future of Artificial General Intelligence (AGI), LLaMA 3, humanoid robots, woke AI, and open-source technology. He discussed how deploying AGI in humanoid robots could impact society and emphasised the importance of open-source AI frameworks in automating daily tasks and building full-stack applications. These predictions have started discussing the ethical and practical considerations of advanced AI systems. As technology advances, it is crucial to understand and address these challenges to shape a responsible future for AI.
@elwoodfanwwod 7 months ago
Dr Waku does a good video touching on this stuff called "What children can teach us about training AI". My big takeaway from that video was that LLMs aren't a world model, but language is what ties a world model together. That's a super-simplified review. It's worth checking out if you're thinking about this stuff.
@grasshopper1153 7 months ago
Arrival is so good. One of Forest Whitaker's best roles.
@OurSpaceshipEarth 7 months ago
Lex is a boss!
@arpo71 7 months ago
Thank you for this TL;DR 🙏
@bdown 7 months ago
Guarantee Zuckerberg coached him on exactly what to say during this interview: "Downplay AGI!!"
@nmeau 7 months ago
AI has, or will soon have, full sensory input capability. Each unit of learning will be scaled immediately and infinitely. And the learning will start at an advanced adult level of cognition. A dozen embodied AIs learning full-time for, let's say, a year will surpass the learning of one human lifespan. And then go beyond. It's fun to see yesterday's experts, tied down by old predictions, being completely overtaken by events.
@DefaultFlame 7 months ago
That's always been my experience with LeCun. He's always being negative, and he always turns out to be wrong and then moves the goalpost. He's right about a lot of things, more right than wrong, but never listen to him about what is possible.
@dwrtz 7 months ago
Intuitively it seems like the solution is "convert the video to a sequence of tokens" (encoder), then predict the next token, then convert the next token to a video frame (decoder). Is this what JEPA does? Is JEPA a method for learning useful encoders? At the end of the day, are we still just predicting token sequences? How do you learn the decoder? Cool video!
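A schematic of the encode, predict, decode pipeline the comment describes, with all three stages stubbed out. Whether JEPA works this way is exactly the question: JEPA predicts in representation space and, by design, skips the pixel-level decoder.

```python
# Stub pipeline: frames -> tokens -> next-token prediction -> frame.
# All three functions are hypothetical placeholders, not JEPA's actual method.

def encode(frame):
    return hash(frame) % 1000          # stand-in for a learned encoder

def predict_next(tokens):
    return (sum(tokens) + 1) % 1000    # stand-in for a sequence model

def decode(token):
    return f"frame<{token}>"           # stand-in for a learned decoder

video = ["frame_a", "frame_b", "frame_c"]
tokens = [encode(f) for f in video]
next_token = predict_next(tokens)
print(decode(next_token))
```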
@babbagebrassworks4278 7 months ago
Lex always does interesting interviews. His one with Elon showed me some of Elon's concerns.
@michalchik 7 months ago
It was a good interview, and I recommend that people watch the whole thing. Nevertheless, I'm more convinced now than ever that he doesn't really understand the safety issues. One of these days, I'm going to have to write up the whole set of about 20 problems I saw with his reasoning about safety. He's thinking things that I was thinking about 15 years ago when I was first getting into this issue.

To name a few: he resorts to the argument that LLMs are only imitators and don't have agency because they just have to predict the next token, which has two problems, both of which compromise his argument. First, people aren't just working with next-token prediction; we are putting superstructures on top of it that create agency and long-term goal direction. Second, people aren't just working with LLMs, and naively saying we're safe because this particular technology by itself is not agentic or grounded is a lot like saying, "Oh, black powder isn't dangerous because it's just used for fireworks; it's not really a weapon," while people are working on cannons and bombs.

Secondly, I totally agree that LLMs do not readily lend themselves to grounding, and if we were just feeding in a bunch of random textual information, no matter how good the LLMs got, they wouldn't have a connection to the real world. I was talking about the necessity of grounding when I was working for Adaptive AI back in 2005. Nevertheless, we know that these LLM-type models are actually not just stochastic parrots. That's a hypothesis that's been proven wrong by the modulo-57 arithmetic problems and the Othello research. These language-model neural networks are actually forming structures that are analogous to the environment they're being trained on. Yes, they currently can't do anything, but if you listen to Gwern or Ilya, you'll hear detailed explanations of the almost mathematical certainty that as you approach 100% prediction, you have to intrinsically model the system that you're trying to imitate until it is a subset of your internal functioning. This includes essential functional models of the human brain as well as its experiential and knowledge base. You simply can't predict a system accurately without capturing its full dimensionality. Whether we can actually accomplish this with larger and larger training runs is a practical problem, not a theoretical one. Maybe we will never be able to make training runs that big, but it's theoretically possible. Still, this is not even the issue, since nobody is sticking to just textual data. None of the major models are pure text anymore, and some of the video-generative GPTs are also showing signs of internalizing physics. Will it happen much faster when we ground these systems in robots? Yes, absolutely, orders of magnitude faster. And that's exactly what people are doing right now.

Another point, and this is the last one I'm going to make though I still have about 15 more: his assertion that this is nothing close to the sophistication of the human brain in terms of computational capacity. As someone with a background in neuroscience as well as AI, but particularly neuroscience, I can say that that is probably true, but we don't know what the fundamental unit of computation of the human brain is. We know that important dynamics occur at the level of the synapse, but we also know that neurons work in clusters like cortical columns, so really, anybody who declares that they know the computational capacity of the human brain is kind of bullshitting and relying on a lot of assumptions. Nevertheless, a system that's much simpler is not necessarily lower performance. Biological evolution was extremely constrained in what it had to work with and what pathways it could follow to human-level performance. It's certainly false that the human brain is the minimum complexity necessary to achieve human-level intelligence. We see a lot of cases in nature of species with approximately the same brain size and substantially different intelligence. Take parrots, for example, and compare them to a comparably brain-sized reptile, or even a mammal like a rabbit: they're enormously more sophisticated. Or, to use another example, let's look at flight instead of intelligence. A bird is orders of magnitude more complex than the most sophisticated fighter jet, yet the fighter jet can fly much faster, is much more robust, and can kill a lot of birds.

It is essential that we not trick ourselves, like LeCun has, into thinking that it's just full speed ahead and everything's going to work itself out in the end. That's what I was thinking 20 years ago when I was working for Adaptive AI, and then I really thought about it when I wasn't so wrapped up in the potential money and the thrill of seeing progress.
@rcarterbrown1 7 months ago
I don't disagree with your logic. The issue is that safety won't work, because no one can control the development of LLMs or other AI models. Maybe you regulate it in this country or that country, but countries slowing things down will only put themselves at a disadvantage against competing countries, given the geopolitical situation, whether it's trade considerations or cyber warfare etc. The US is not going to hobble its local industry. There might be some window dressing for political reasons, but behind the scenes nothing will slow down. The cat is out of the bag.
@michalchik 7 months ago
@@rcarterbrown1 I agree. We are in the proverbial race to the bottom, or perhaps better, a game of chicken. This is much like where we were with nukes in the 1950s and 1960s. I think the best we can do is stomp all over wishful thinking and naive optimism to make sure that the people in charge are not delusional with regard to safety. You don't win a game of chicken by being the guy who drives furthest fastest. You win by realizing there is a cliff there and that you lose big if you think you can get away with merely stepping on your brakes. We did that with nukes, and maybe, just maybe, we can be grownups again. The more awareness the better. Even monsters like Kim Jong Un and Stalin have avoided suicide.
@rcarterbrown1 7 months ago
@@michalchik Yes, good points. Most of the problems we face as a species can be traced to the tragedy of the commons and game theory (the prisoner's dilemma), along with the competitive dynamics of free-market economics. The same forces that may cause harmful outcomes from AI tech are the ones behind our inability to do anything meaningful to tackle climate change (for example). I wish I could offer a solution, but I can't! Thank you for your well-thought-out and detailed comments :)
@yanngoazou5664 7 months ago
Would love to hear a discussion between Yann LeCun and Don Hoffman.
@SuperJayGames 7 months ago
I think we as humans make functions, sort of like functional programming, and then build on those functions or mutate them slightly to be able to abstractly do "things" that require many, many steps.
@falklumo 7 months ago
That's called hierarchical planning and does not require a language. And it certainly isn't functions like in FP, because our planning steps have side effects mutating the world model. So, quite the opposite of your assumption, and the reason why FP feels "odd" to most people.
@headofmyself5663 7 months ago
Watched the Figure 01 robot demo of doing the dishes. Also blown away by the conversation that Dave Shapiro posted with Claude 3. Sora also has some good understanding of physics regarding gravity, fluid dynamics, etc.
@Jimmy_Sandwiches 7 months ago
LLMs have impressive capabilities, but they still have limitations in areas like advanced reasoning, persistent memory, complex planning, robust mathematics, and grounding in the physical world. Rather than trying to make LLMs a monolithic solution, wouldn't it be valuable to explore connecting them with other specialized AI systems and software that excel in these areas? For example (see the sketch below):
- Integrating mathematical engines for enhanced quantitative abilities
- Leveraging external databases for persistent memory storage/retrieval
- Utilizing workflow automation tools for sophisticated planning/orchestration
- Combining with robotics/perception for physical-world grounding
By augmenting the natural-language strengths of LLMs with purpose-built technologies for key cognitive capabilities, we could create powerful modular AI systems that combine the best of multiple approaches. This integrated strategy may overcome current LLM limitations faster than a closed-model approach.
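A minimal sketch of that routing idea, with hypothetical tool stubs; a real system would wire in an actual math engine (CAS), a vector database for memory, and a planner instead of these placeholders:

```python
# Illustrative router: send each request to a specialized subsystem and let
# the LLM (stubbed) handle everything else. All names are hypothetical.

def math_engine(expr):
    return eval(expr, {"__builtins__": {}})  # toy; use a real CAS in practice

MEMORY = {}

def remember(key, value):
    MEMORY[key] = value                      # stand-in for a persistent store

def recall(key):
    return MEMORY.get(key, "unknown")

def llm(prompt):
    return f"LLM answer to: {prompt}"        # stand-in for a language model

def route(request):
    kind, _, payload = request.partition(":")
    if kind == "math":
        return math_engine(payload)
    if kind == "remember":
        key, _, value = payload.partition("=")
        remember(key, value)
        return "stored"
    if kind == "recall":
        return recall(payload)
    return llm(request)

print(route("math:6*7"))                     # 42
print(route("remember:capital_of_france=Paris"))
print(route("recall:capital_of_france"))
print(route("what should I cook tonight?"))
```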
@liberty-matrix 7 months ago
"AI will probably most likely lead to the end of the world but in the meantime there will be great companies." ~Sam Altman, CEO of OpenAI
@phpn99 7 months ago
Altman is such a bullshitter
@russelllapua4904 7 months ago
Unfortunately there's no clear definition of AGI. Each company's is slightly different. Either way, none of the people watching and commenting on this video will be alive when AGI is fully realised. It is hundreds if not a thousand years away. We struggle with power consumption now and people think AGI is really close.
@CognitiveComputations 7 months ago
We basically need to train it to predict the next frame of a video game scene, at 24 frames per second. Not the rendering, not the raster; the states of the objects.
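A sketch of what "predict object states, not pixels" could look like, with a hand-coded physics step standing in for the learned model:

```python
# Toy state-level prediction at a fixed tick rate: the target is the next
# object state, not a rendered image. The "model" here is hand-coded physics.

DT = 1.0 / 24.0  # 24 states per second, as suggested above

def step(state):
    (x, y), (vx, vy) = state["pos"], state["vel"]
    vy -= 9.8 * DT                      # gravity
    return {"pos": (x + vx * DT, y + vy * DT), "vel": (vx, vy)}

ball = {"pos": (0.0, 10.0), "vel": (1.0, 0.0)}
trajectory = [ball]
for _ in range(24):                     # one second of predicted states
    trajectory.append(step(trajectory[-1]))
print(trajectory[-1]["pos"])
```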
@HakaiKaien 7 months ago
I do agree with him that current LLMs are not good enough for AGI. But that hardly matters when they get better and better every few months. You can build a world model with a few AI agents that is on par with that of a human. Not to mention that we are able to hook neural nets up to physics engines like Omniverse.
@DataSpook 6 months ago
It’s hard for me to sit through Lex’s interviews. Thanks for the recap.
@Batmancontingencyplans 7 months ago
As much as Yann's statements give us hope for humanity, it just isn't possible for AI not to evolve into AGI in the next 5 years. Unless something catastrophic occurs, AGI is inevitable.... ASI will be a month or so away as soon as AGI is achieved.
@SteveSmith-nh6ms 7 months ago
Maybe a little off-topic, but what is meant when an LLM "predicts" the next word? When I type in some LLM chat prompt, I often see a grayed-out word immediately after my typed word, which I'm assuming it predicts will be my next logical word to type. Is that what is meant by "predicting" the next word, or is it more than that, for example used to determine the response to my question?
@remsee1608 7 months ago
Your voice must have been down bad when you made the AI voice video lol, glad you're recovering!
@TheExodusLost 7 months ago
I can’t believe you’re allowed to make a video that’s 80% Lex interview. I ain’t mad at it, just damn
@Fatman305 7 months ago
As they say "enjoy it while it lasts" lol
@isaacsmithjones 7 months ago
The argument against doom sounds like: "We're gonna create AGI slowly. And as we do, we WILL learn how to make it safe." So that's an admission that it's not safe by default, and that we don't know how to make it safe. If he were more like "We're gonna create AGI slowly. And as we do, we MAY learn how to make it safe," he'd be matching the argument of many doomers. The only difference between him and a doomer is that he admits it isn't safe but assumes it will be, whereas a doomer doesn't have that certainty. Notice that he doesn't give an actual plan for making it safe. It's just "Trust me, it'll be fine."

Then he goes on to say that they're not necessarily gonna want to dominate, which is a fair point, but then that they might be a problem for humans because they just don't care about humans. As a follow-up, he points out that they'd need to be specifically instilled with the desire to dominate, but doesn't address the fact that they'd need to be specifically instilled with the tendency to care. There are reasons why an AI would seek power and/or resources without specifically being told to (e.g. instrumental convergence), but no reason I know of that would make it care about humans indefinitely (which is what we'd require to be safe). If anyone knows of such a reason, I'd be happy to hear it.

The more I listen to LeCun, the more I see how smart he is, and the less able I am to believe he has such huge blind spots, and the less able I am to give him the benefit of the doubt when he builds these straw-man arguments.
@dwcola 7 months ago
Large Action Reaction Models. LARMs. Large Object Models. LOMs. These are needed for AGI
@mrd6869 7 months ago
Well, look, open source is out here every day, spitting out angles. These current LLMs are pieces of a larger puzzle. I'm putting pressure on my own networks, looking for emergent abilities to jump out. What "jumps out" - that's the next step. We need to widen the lane and start using, or looking into, things we previously disregarded.
@veganforlife5733 7 months ago
Just as binary provides the most basic structure of language for machine code, tokens provide the structure for AI. We do not need a process that is different from basic language for describing or simulating anything. When an animal, human or non-human, decides what the next moment should produce, low-level language is driving the decision process. For AI, that can take the form of parameters in a series of algorithms that produces a result. For creatures, it can be deciding where the front foot should land to begin or continue the act of walking. Decision-process architecture gets bigger and more complex, not fundamentally different. Very little of the input to a brain gets stored, and that small amount of stored input gets reduced to its core importance. Most AI experts I've heard think AGI is here already, or that we are very close, i.e. months away. I wonder if the minority, who are outspoken about the reverse view, have ulterior motives. It's not that hard to see when someone is trying to capitalize on a point of view. Or maybe he's more innocent and wants to quell the masses for their short-term benefit. Our panic is almost premature. Or maybe it's a decade or two too late.