Understanding AI from the nuts and bolts

35,867 views

Machine Learning Street Talk

1 day ago

Comments: 218
@foxabilo 10 months ago
I don't think I've seen so much cope this year; this pair will still be saying LLMs can't do X after they are better at everything than any human on earth. "But it's still just matrix multiplications, it can't smell"... yet.
@lightluxor1 10 months ago
Indeed. Something is really wrong with them. The talk reminded me of the old claims that man would never be able to fly an object in the sky.
@mannfamilyMI 10 months ago
Amen brother
@AndreanPatrick 10 months ago
.
@marcodasilva1403 8 months ago
Great comment. Never has a subject made me feel smart quite the way AI does. I've never in my life seen so many seemingly intelligent people talking complete nonsense.
@SchmelvinMoyville 8 months ago
@@marcodasilva1403 oh shut the fuck up, dude. If you knew shit you'd be on the show, not typing your pretentious ass comment.
@Seehart 10 months ago
Some good points, but I have to say I disagree with the consensus expressed in this conversation. It would be nice to include someone with the counter-hypothesis, because without that, it devolves into a strawman. LLMs as described, out of the box, are analogous to a genius with a neurological defect that prevents reflection before saying the first thing that comes to mind. Various LLM cluster models have addressed that weakness. Tools such as tree of thought have greatly improved the ability to solve problems which, when done by a human, require intelligence. If you want to know if these systems can reason, I recommend starting with the paper "GPT-4 can't reason". Then follow up with any of several papers and videos that utterly debunk the paper. (Edit: accidentally clicked send too soon)
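For readers who haven't seen it: the tree-of-thought idea is to branch several candidate "thoughts" at each step, score them, and expand only the most promising ones, rather than committing to the model's first completion. A minimal sketch of that search loop, where `propose` and `score` are stand-ins for LLM calls:
```python
# Minimal tree-of-thought-style search sketch (illustrative only).
# `propose` and `score` stand in for LLM calls; any heuristic works.
import heapq

def propose(state, k=3):
    # Placeholder: an LLM would generate k candidate next "thoughts" here.
    return [state + f" -> step{i}" for i in range(k)]

def score(state):
    # Placeholder: an LLM or verifier would rate how promising a partial
    # solution looks; here we just prefer longer (more developed) states.
    return len(state)

def tree_of_thought(root, depth=3, beam=2):
    frontier = [root]
    for _ in range(depth):
        # Expand every state in the beam, then keep only the best `beam`.
        candidates = [s for state in frontier for s in propose(state)]
        frontier = heapq.nlargest(beam, candidates, key=score)
    return max(frontier, key=score)

print(tree_of_thought("problem"))
```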
@jd.8019 10 months ago
I think this is a fantastic episode. I know I'm personally biased, as I was the fan who was mentioned at the end of the talk, but something to keep in mind is that Brandon has been doing this for years: the better part of two decades. As mentioned many times, the nuts and bolts are his jam, and if you are someone thinking about going into ML/data science for a career or school, this is an example of someone who has been looking at ML as a long-term project, in addition to applying those frameworks to real-world problems and applications. Additionally, he's been trying to synthesize and democratize that core knowledge into bite-sized packets of information that are very understandable, even for those with a limited amount of formal mathematical training. Personally, I don't think I've seen an MLST episode where either Tim or his guest(s) have spent more time smiling and laughing while staying on topic. That enthusiasm is so infectious! These are complicated and serious topics, but that injection of light-hearted, fun exploration makes them seem approachable. Here's a question for us to think about: did anyone feel lost during this episode? Now pick another MLST episode at random and ask yourself the same question. Few of the people who appear on this channel (experts in this field, mind you) could honestly say there wasn't a part of almost any randomly chosen episode where they felt that confusion, to varying degrees. With that thought out of the way, I feel I could present this episode to my mother or a high school student, and they, for the most part, would be able to follow along reasonably well and have a blast while doing so; that says something! So, major kudos to Tim for a fantastic episode, and kudos to Brandon for being a part of it!
@Self-Duality 10 months ago
“So as soon as you take an action, you break your model.”
@JD-jl4yy 10 months ago
16:06 You can also strip a human brain down to its neurons and find no consciousness. Unless this guy has solved the hard problem of consciousness, his statement doesn't really mean anything...
@marcodasilva1403 8 months ago
This is such an underrated comment. I skipped to this timestamp and decided not to watch the video. These guys clearly aren't the sharpest tools in the shed 😂
@paxdriver 10 months ago
I love sharing this channel's new releases. Love your work, Tim.
@dr.mikeybee 10 months ago
No, LLMs are not super-intelligence. On the other hand, they can summarize context from large corpora and basically have access to everything they've been trained on. People don't have those abilities. So if your point is that LLMs and the agents that run them aren't perfect, you have a point.
@keithallpress9885 10 months ago
The story of the robot arm reaching under the table is exactly what humans do when they hack bugs in games. Like the old ex-army gunner back in the 80s whom my managers challenged with an artillery game problem. He immediately loaded the maximum charge, shot the shell directly through the mountain, then went back to his job muttering "Stupid game" as he departed.
@ModernCentrist 10 months ago
I don't think these guys know how close we are to AGI God.
@JD-jl4yy 10 months ago
3:10 Disagreed, you can come up with an abstract riddle or puzzle that doesn't exist but requires logical reasoning to solve and GPT4 will do it just fine. It has the capability to reason, not just to store and retrieve information.
@Srednicki123 10 months ago
The jury is still out on this one. Didn't GPT-4 solve 10/10 coding problems from a well-known coding test from 2020, but 0/10 from the same test from 2022?
@JD-jl4yy 10 months ago
@@Srednicki123 Memorization certainly helps, but it _can_ reason. It can't reason consistently, but the fact that it sometimes can says enough. If a capability is demonstrated like this, getting the consistency up is just an engineering problem that will be solved.
@ShpanMan 10 months ago
How crazy is it that this "expert" is so incredibly wrong on the main point of all of this? At least saved me time watching this ignorant discussion.
@pallharaldsson9015 9 months ago
13:12 "I would include other sequences of other types of inputs". Yes, LLMs are limited to text, or a linear sequence of tokens. We also think in pictures, at least 2D (a projection of the 3D we see), and we can infer the 3D and have a mental model of that, even with a time dimension (4D spacetime), but most of us are very bad at thinking about 4D spatial, or 5D, etc., since we can never see it. But my point is: while we CAN traverse e.g. 2D images (or matrices) sequentially, pixel by pixel across, then line by line, that's not at all how we do it, so are linear sequences limiting? To be fair, we do NOT actually see the whole picture; the eyes jump around (saccades) and the mind gives the illusion you're seeing it whole, so maybe images, and thoughts in general, are represented linearly? When you play music, it's linear in a sense, but for each instrument, or e.g. for each finger that plays the piano. So again, sequences are OK, at least parallel sequences. How do the text-to-image (or -to-video) models work? The input is linear, but the output is 2D, and we also have the reverse process covered. Those diffusion processes start with pure random noise, but I've not yet understood how each successive step toward a clear picture can work. Does it work linearly, because of the linear prompt? [Sort of in the way of a painter's brush.]
@BrianMartensMusic 10 months ago
LLMs used for coding can sometimes be dangerous and lead you down completely the wrong pathway to doing something. I asked ChatGPT, with some back and forth, to write some code to read a .wav file and run a very basic dynamic range compressor on the samples. It had no concept of the fact that .wav files are going to be PCM blocks and just assumed that samples would be []float64. JetBrains AI Assistant was much more helpful, in my experience, and knew that you would need a library to decode PCM blocks (and directed me to the most popular one!). It's a bit of a niche subject, but it was rather alarming to me.
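For the curious, this is roughly the decoding step the comment says was skipped - .wav audio really is raw PCM that must be unpacked before you can touch the samples. A minimal sketch using Python's standard library for 16-bit mono files, with a crude hard-knee compressor (the filename and parameters are placeholders; real audio code would use a proper library):
```python
import wave, struct

# Read 16-bit PCM samples from a .wav file - the decoding step the
# comment says the model skipped by assuming float samples.
with wave.open("input.wav", "rb") as w:
    assert w.getsampwidth() == 2 and w.getnchannels() == 1  # 16-bit mono only
    raw = w.readframes(w.getnframes())
samples = struct.unpack(f"<{len(raw) // 2}h", raw)  # little-endian int16

# Very basic hard-knee compressor on normalized samples.
threshold, ratio = 0.5, 4.0
def compress(s):
    x = s / 32768.0
    if abs(x) > threshold:
        x = (threshold + (abs(x) - threshold) / ratio) * (1 if x > 0 else -1)
    return int(x * 32767)

out = [compress(s) for s in samples]
```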
@_tnk_ 10 months ago
Love this episode! Brandon is super eloquent and the topics discussed were illuminating.
@mattborman5780 10 months ago
Rohrer describes our ability to excessively imbue things with agency. Rohrer is guilty of making this very mistake in his assumptions about human intelligence and understanding. Language models are certainly not humans, but humans are almost certainly more similar to language models than Rohrer would like to think. Much of human learning and processing doesn't play out via formal or symbolic systems, rather, we are also very skilled statistical learners. Rohrer seems immune to decades of research coming from Connectionism. There is considerable evidence that we are capable of learning not just surface features, but also abstract category structures and models via statistical learning. The extent to which these processes underlie human cognition is an open, empirical question. I do agree with Rohrer that our definitions of intelligence have been too anthropocentric. We need not use human intelligence as a lens for viewing a model. If we choose to use that lens, as Rohrer has done here, we should do so in an informed way.
@TheMarcusrobbins 10 months ago
I don't understand this confusion over whether it is intelligent or not. It clearly is. The intelligence is the prediction of the next word. The understanding is the statistical inference. The words we speak betray the structure of the world through their statistical correlations. Our subjective experience of the world *is* what statistical correlations feel like, viscerally. It's really not that difficult to understand. Yet everyone seems to be simultaneously confused by and unable to release this notion of Cartesian dualism. Experience is ambient and implicit in the physical world; it doesn't need to be conjured, like some biblical miracle. It just is. And so it is obvious that GPT-4 or whatever has subjective experience just as everything else does, AND it is obvious that much of that subjective experience aligns with our own due to the isomorphic set of statistical correlations within the machine and our own minds. It's absolutely obvious to me, and it really bugs me that no one else can see it. Maddening.
@jmanakajosh9354 10 months ago
I don't think it's that people refuse to understand, I think it's that "smart" people have been told they're smart their whole life and their paycheck relies on them being blind to it. They will be replaced.
@Komaruluten 10 months ago
Okay, so we have to make a distinction here. There are two main kinds of definitions to use for “intelligent”: an outcome-oriented definition and a process-oriented definition. In other words, the ends and the means. Now, the ends of both human and LLM intelligence are pretty similar: We can both learn and solve problems. However, humans are still superior at solving novel problems, aka problems that are outside of the dataset. Okay, why is this? It’s because of the means. The means that humans use to learn and solve problems are very different and more applicable to novel problems than what LLMs use. The means by which LLMs are intelligent is simply by predicting the next word. What comes after “scrambled?” Eggs, obviously. So what are the means by which humans are intelligent? Well, researchers are still actively attempting to figure this out, and they are still not even close to understanding the full complexity of the answer to this question. However, there is one pretty strong consensus in the research: Relational reasoning is central to human intelligence. LLMs are already engaging in one form of relational reasoning: understanding covariance. They understand which words usually go together and which don’t. However, merely understanding covariance is not enough to replicate the extraordinary abilities of human intelligence. There are many types of relationships that LLMs must use at a foundational architecture level in order to replicate our intelligence like opposition, comparison (including temporality), causality, hierarchy, and spatial. Expanding on what I said earlier about relational reasoning being central to human intelligence, human intelligence is in large part the efficiency with which we build, maintain, and compare sets of relational bindings in working memory. What does this mean? Example from Chess: 1. **Building**: Recognizing patterns and relationships between pieces, like how they can move and work together. 2. **Maintaining**: Keeping track of the game's changing dynamics, like remembering opponent moves and adjusting strategies. 3. **Comparing**: Evaluating different moves by imagining their outcomes and choosing the best one based on strategy and potential future positions. As for whether AI is conscious or not, I truly do not know. It’s far too early to say.
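The "understanding covariance" piece has a concrete minimal form: just count which words follow which. A toy sketch of the comment's "scrambled -> eggs" example:
```python
from collections import Counter, defaultdict

corpus = "i like scrambled eggs . i like fried eggs .".split()

# Count which word follows which: the simplest form of "covariance".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# Predict the most likely next word.
print(following["scrambled"].most_common(1))  # [('eggs', 1)]
print(following["like"].most_common(2))       # [('scrambled', 1), ('fried', 1)]
```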
@countofst.germain6417 10 months ago
Lmao, this is so wrong for so many reasons.
@memegazer 10 months ago
@@Komaruluten I agree this is a valid point, but to be fair, few who want to argue these types of semantics bother to establish this context. Which I think is a kind of moving of the goalposts when somebody is making the case that, say, an LLM has sufficient intelligence to model language. So discussions turn into people talking past each other about what is philosophically significant in that example.
@memegazer 10 months ago
@@Komaruluten So, using the LLM example again: the idea behind why it should be called "intelligent" is that the LLM is what arrived at the model of language used for prediction, so it is not just a statistical parrot in the sense of doing a database search. That framing would give the wrong impression about what the engineers know about how the model arrives at a given output relative to an input, or how accurately the engineers can predict the expected output. These issues are not trivial in my view, because they center on alignment and what we intend these models to be useful at doing.
@d.lav.2198 10 months ago
As regards agency in LLMs, Searle's Chinese Room Argument springs to mind. You cannot reduce agency to syntax. Agency is a response (individually, socially and evolutionarily) to the problems, cares and concerns that embodiment entails for the organism.
@d.lav.2198 10 months ago
Guess I should have watched to the end first!!
@psi4j 10 months ago
Why does Sam Harris think he’s qualified to speak on any subject?
@SchmelvinMoyville 8 months ago
Get ‘em tiger
@objective_psychology 10 months ago
Another layer of complexification is missing. Current AI models are much more akin to a brain *region* than to a whole brain.
@didack1419 10 months ago
As a next step, one big-enough LLM could be used to power a concert of subagents to create more dynamic systems.
@objective_psychology 10 months ago
@@didack1419 Probably, yes. I have a growing suspicion, the more I study AI and neurology, that consciousness arises at such a level, where it can focus the channeling of output (and/or short-term memory) to one particular corner of the network (one brain region) at a time, and so this is where higher-level properties necessary for our consciousness, like selective attention, awareness, and decision-making, seem to emerge. As all good neurologists and psychologists realize by now, we do not consciously perceive everything going on in the brain simultaneously, and if we did, it wouldn't work. Some mechanism takes all this and funnels it into a single output, and the input-output model of AI has held up in recreating this for at least the lowest level. But many in the field have made the mistake of assuming it ends there, thinking the collapsing of (sensory) input is the end goal. Now we're realizing that we need some other process to handle a bunch of different outputs as its inputs and "decide" what to do with them. Easier said than done, but it seems to be the only path forward.
@objective_psychology 10 months ago
I forgot to mention, crucially, that the final bundle of outputs is a continuous process which feeds back into the network at least at the lower levels, resulting in a continual feedback loop. This is where self-awareness and the immediate perception of the flow of time come in.
@fteoOpty64 10 months ago
I love the description of a car in terms of the personality it seems to possess. Yes, most who are into cars have had such an experience. In fact, I used to race mine on the race track. There were days when I did not feel like pushing very hard, but the car seemed to "want it". The result was an unexpectedly enjoyable and surprisingly exciting drive. It was the pleasant surprise I needed at the time.
@devfromthefuture506 10 months ago
I love the quarantine vibe. The remote podcasts today give me nostalgic feelings.
@XOPOIIIO 10 months ago
ChatGPT is not designed to be boring; it's boring because its training objective is to predict the next token, and a token is easier to predict when it's more mundane and obvious.
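To be precise, the pretraining objective is a loss rather than a reward: cross-entropy on the next token, i.e. -log p(correct token). The comment's intuition still holds - the loss is lowest when the continuation is the predictable, mundane one. A toy sketch with made-up logits:
```python
import math

# Cross-entropy for one next-token prediction: -log p(correct token).
logits = {"eggs": 3.0, "toast": 1.0, "zebra": -2.0}  # toy model outputs
target = "eggs"

z = sum(math.exp(v) for v in logits.values())
p = math.exp(logits[target]) / z            # softmax probability of target
loss = -math.log(p)
print(f"p(eggs)={p:.3f}, loss={loss:.3f}")  # confident -> low loss
```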
@belairbeats4896 10 months ago
8:12 "... you don't do something worthy with capabilities, if ChatGPT can replace you..." I think there is the big issue... what if people don't have the capabilities? How will they earn money and find purpose? And if the bar set by ChatGPT and agents gets higher, more and more people will fall into this category... what will people do to earn money and find purpose if they are not needed?
@agenticmark 10 months ago
This has been, without a doubt, my favorite so far. The simulation and reward shaping parts were so cool to hear about from pros!
@SerranoAcademy 10 months ago
Great interview! Brandon is one of the very top ML popularizers out there. Huge fan!
@CYBERPOX 10 months ago
I concur, it all boils down to semantics... if all of these intelligent opposing positions to your point can conceive this, the model will break. What is the empirical truth of how an LLM can operate? It needs a computer, something to compute it. How does a computer compute? By the simple cycling of power, on and off, 0s and 1s, continuously, incredibly complex, and at light speed. How does a human brain function fundamentally? Well, it's provided blood by a pump, and the pump is in a feedback loop with the brain and other organs. Some organs provide the ability to convert nutrients, distribute, filter, etc. In the end this system creates ELECTRICITY. This electricity is cycled on and off, 0s and 1s, in a highly complex algorithmic pattern... "Conscious experience" is the process of stimulus and reaction, on and off. I can elaborate virtually endlessly, but food for your brain at the least. 37:01 - 39:01
@markwaller650 10 months ago
Amazing conversation - thanks!
@michaelwangCH 10 months ago
In today's academia they do not care about fundamentals; the only thing they care about is the number of publications - the more the better. It is not their job to explain to stats and ML students how DNNs work at a fundamental level, because most professors and TAs do not know, so they ignore those uncomfortable questions. At the end of the day students have to figure it out by themselves, and MIT is no exception. After college you have some fuzzy concepts about ML, but you do not understand it at a fundamental level. You end up with trial and error - a waste of 8 years of your life.
@Jononor 10 months ago
Many seem to be making arguments about human intelligence analogous to the God-of-the-gaps argument: human intelligence is whatever machines and animals cannot (yet) do. This will likely leave us less and less room as technology progresses.
@terjeoseberg990 10 months ago
I used to spend all day writing code. Now I spend all day debugging code.
@MagusArtStudios 10 months ago
The thing that is critically wrong with these AI models is that they are trained on text. After watching the video, I have a lot in common with this guy you're interviewing, and I agree on the idea of open-sourcing code because it can often be vague and hard to understand. I have also built AI systems and found that the AI can have all the text-vision it needs to understand the environment, but it's still critically limited by the input of text.
@darylallen2485 10 months ago
But why should processing a symbolic representation of the world be a limitation to seeing? Humans don't process light waves in the brain. Light hits a photoreceptor in the eye which triggers a synaptic signal that travels down the optic nerve. It's a symbolic representation of light. There is no obvious reason ai shouldn't be able to see using a similar mechanism.
@didack1419 10 months ago
@darylallen2485 Because the words themselves might be limited in how much information they convey. The reason we can communicate content to other humans is that they have also learned about physical reality. The relations between tokens could be too underdetermined for this to be possible.
@MagusArtStudios 10 months ago
@@darylallen2485 Newer AI systems, such as Tesla's, are using vision only as input to train their systems. The AI can do well with text-vision input, but it wouldn't be able to truly experience the world beyond the words in the text-vision description, which is just not how we experience the world at all.
@mattborman5780 10 months ago
@@didack1419 If anything, the trajectory of language models over the last 30 years suggests that the knowledge that can be encoded by and is recoverable from language well exceeds what we previously thought. It seems like we're still discovering those limits. Your objection raises a couple of important points, though. Language models are trained on text generated by humans whose knowledge and language use reflect their embodied experience and what they've learned about the physical world. They may miss out on aspects of knowledge by only borrowing from human embodied experience via language. And just because a language model can learn about physical reality from language doesn't mean that humans must have learned it that way. Results showing surprising alignment in color judgments by blind and sighted people suggest that humans can learn a lot about the physical world from language alone. This suggests that maybe language isn't such a limited signal.
@TheManinBlack9054 10 months ago
The Discord link has expired. Please update it.
@lionardo 10 months ago
You can represent any type of input with symbols. What is your view on hybrid systems like neuro-symbolic ANNs and HD computing with ANNs?
@agenticmark 10 months ago
Brandon was such a delightful guest! This was great.
@SecondaryChuckle 10 months ago
Good Mythical Morning 90 minute AI special
@EskiMoThor 10 months ago
What's missing from current humanity? I mean, how do we make better people - more people like Brandon?
@jacobtanner2903 1 month ago
Some of this nuts and bolts argument feels like saying something like: I took the car apart to the nuts and bolts and realized that a car cannot drive because it is made out of nuts and bolts and nuts and bolts cannot drive.
@CharlesVanNoland 10 months ago
I consider curiosity and exploration to be identical. The trick is making it intrinsically rewarding to discover novelty at higher and higher levels of abstraction. After you've learned all the basic possible sensory inputs and motor outputs to be had they cease to be rewarding but then you chain them together to do something new that's rewarding unto itself because it's a new pattern but at a higher level of abstraction. Then you build hierarchies of these reciprocal sensory/action chains (which includes just internal volition within an internal model to form internal thought or an internal monologue) to achieve new conceptually novel activities and actions. This naturally requires a capacity for even detecting abstract patterns. It's the learning itself and detection of successively more complex and abstract patterns that drives curiosity/explorative behavior. What I've been interested in more recently is all of the research about the basal ganglia (striatum, globus pallidus, etc) and the interaction between the cerebellum and cortex. Cerebellum literally means "little brain" and they've discovered that it's not just a motor output stabilizer, but is critical for the cortex to do everything it does - from vision to audition to thought and of course controlled and refined motor output. If the neocortex is like a logarithmic spatiotemporal hierarchical pattern detector/generator then the cerebellum is a sort of linear time window for associating virtually anything going on in the cortex via feedback through the thalamus, and with feedback from the basal ganglia for reward prediction/integration into behavior. Seeing that someone like Brandon here is thinking a lot of the same things that I have about a general intelligence and robotics makes me really excited that we're definitely super close to a future of infinite abundance and humans being able to transcend preoccupation with sustaining our biological existence (aka 'The Singularity'). This was a great MLST, thanks Tim!
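The "intrinsically rewarding to discover novelty" idea has a standard concrete form in RL: pay the agent a bonus equal to its prediction error about what happens next, so transitions it has already learned stop paying out. A rough sketch of that mechanism (the linear forward model is a deliberate oversimplification):
```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny learned forward model: predicts next observation from current one.
W = np.zeros((4, 4))

def curiosity_bonus(obs, next_obs, lr=0.1):
    pred = W @ obs
    error = next_obs - pred
    # Intrinsic reward = how surprised the model was (prediction error).
    bonus = float(np.mean(error ** 2))
    # Update the model, so repeated transitions become boring over time.
    W[:] += lr * np.outer(error, obs)
    return bonus

obs, nxt = rng.random(4), rng.random(4)
for step in range(5):
    print(round(curiosity_bonus(obs, nxt), 4))  # bonus shrinks as model learns
```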
@fhsp17 10 months ago
Hey guys, you do know that what AI does is called associative reasoning, right? You can even make new data a reasoning path if you introduce it gradually in context - aka a phase change from spatial to semantic. You're welcome.
@fhsp17 10 months ago
There's so much you guys are missing in training, you have no idea.
@Max-hj6nq 10 months ago
Could you explain this comment for a layman?
@jmanakajosh9354 10 months ago
GPT knows more about physics and color than you. It knows more about physics and color than me... but you think it can't do a metaphor? 12:00 Absolutely braindead 😂 but we're clearly not using the same tool.
@videotrash 10 months ago
It 'knows' so much about physics that it actually also 'knows' just as many incorrect things that have been written down in various places as it does know accurate facts. Some would argue that effectively makes it more of a highly sophisticated search engine (in regard to theoretical information - the same is obviously not the case for writing stories etc) than an entity with any consistent knowledge about the world whatsoever.
@minimal3734 10 months ago
@@videotrash "'knows' just as many incorrect things that have been written down in various places as it does know accurate facts" I disagree. I'm with Geoffrey Hinton, who said about LLMs "these things do understand".
@bgtyhnmju7 10 months ago
Great chat. Brandon - you're a cool guy. I enjoy your thoughts on things, and your general enthusiasm. Cheers dudes.
@johntanchongmin 9 months ago
Great conversation! I really agree with almost all of Brandon's points. General agents may not be considered intelligent, as they cannot be optimal in all scenarios. RL is brittle and hard to train! Great insights.
@dag410 10 months ago
Great talk!
@costadekiko 10 months ago
Great chat, very enjoyable! Brandon is quite well-spoken! PS. For the uninitiated (if there are even any such following this channel): Brandon, I guess for the sake of keeping things as accessible as possible, mostly describes language modelling in the very last segment, not really the Transformer. The Transformer is just one way to do it.
@paigefoster8396 10 months ago
12:00 "[Won't understand range of embodied experiences] unless it is explicitly represented in text, which is just a very narrow drinking straw." Indeed!!!
@chazzman4553 10 months ago
Best channel for AI in the world and our galaxy also.
@paxdriver 10 months ago
52:14 I conjecture some natural functions, like refractive index or the diffusion of subsurface scattering algorithms from nature, might be that nudge. My research project aims to repurpose ray-tracing render engines to find physics-based rendering functions that apply to activation/cost functions in machine learning. It'd be awesome if Rohrer investigated that bridge from physics to apply to exploration.
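Purely to make the conjecture concrete, here is what plugging a physics function in as a nonlinearity might look like - Beer-Lambert attenuation used as a saturating activation. This is a speculative toy illustrating the commenter's idea, not an established technique:
```python
import numpy as np

def beer_lambert(x):
    # Transmitted intensity through an absorbing medium: I = I0 * exp(-x).
    # Used here, speculatively, as a saturating nonlinearity for x >= 0.
    return 1.0 - np.exp(-np.maximum(x, 0.0))

x = np.linspace(-2, 4, 7)
print(beer_lambert(x))  # 0 for negative inputs, saturates toward 1
```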
@ShpanMan 10 months ago
The irony of this guy being much more confidently wrong than GPT-4.
@federicoaschieri 10 months ago
"Religious beliefs in the capabilities of LLMs", I laughed out loud. So true and so sad. Thanks guys, very interesting discussion.
@vslaykovsky 10 months ago
I like these ASMR interviews, they make me feel AI in my guts!
@robyost6079 10 months ago
I've seen lots of bad history essays, including those written by LLMs like ChatGPT. It's usually shallow, generic, and beautifully written beyond what most undergraduates are capable of. However, simply regurgitating facts is not what historical thinking or good historical writing are about regardless of what data scientists might think. Teachers who are particularly enamored of using ChatGPT for everything may not be qualified in their subject to be in front of a classroom.
@theatheistpaladin 10 months ago
When you make decisions based on goals and change the system, that falls into system dynamics.
@diga4696 10 months ago
Absolutely amazing. Thank you.
@slmille4 10 months ago
“If GPT is helpful, then what you’re doing is probably not particularly intelligent” that explains why ChatGPT is so good at regular expressions 😅
@lightluxor1 10 months ago
Amazing hubris!
@andrewcampbell7011 10 months ago
“religious belief in the capability of language models”. Amen brother. These things are wonderful tools for some very specific use cases, but suggesting they have human intelligence is an injustice to the miracle of human cognition.
@jeff__w 10 months ago
Brandon Rohrer was an ideal guest! Of course what he had to say was insightful and fascinating but also he was charming and engaging.
@ashred9665 10 months ago
Good conversation, calls a spade a spade. Cuts through the hype.
@DJWESG1 10 months ago
This is where my thinking led also. It would need to experience its inputs, not simply have them programmed.
@NullHand 10 months ago
But this is EXACTLY what the new artificial neural networks DO. LLMs are just the most recent, trendiest subset. There is NOT a bank of programmers keying in all the millions of responses and associations that the current "AIs" are called upon to produce. The "programmers" basically set up a curriculum (a database of tokens) and craft a reward/punishment metric that, in the lingo of the industry, is used to "train up" an AI. This is experiential learning, just a little more abstracted than how we experience it as children. Just as our children are taught about poisonous snakes from books, rather than direct experience in the wild.
@keizbot 1 month ago
If we assume the world is materialist, then humans can also be stripped down to nuts and bolts, with nothing remarkable left over either.
@colintomjenkins 10 months ago
Yeah, but can it change a duvet cover? :) Great conversation as always, thanks both.
@MrBillythefisherman 10 months ago
Agency: if you put an LLM into a robot that needs to charge itself every so often and has a sensor saying it's low, then what is different between our basic agency of solving hunger and its agency of solving power? Surely, if in a team of robots, it would figure out that it needs to go off and charge soon but recognize another robot is using the charging point. We look to have this already in the latest OpenAI 1X robot video - no? These robots look to have agency. Sure, there aren't higher-level objectives yet, but this all just seems like a scaling problem - the basics are solved and done.
@udoyxyz 2 months ago
So he is a mechanical engineer doing AI? Wow.
@dr.mikeybee 10 months ago
What's the point of discovering something that isn't new? So if you are trying hard, you are working on the edge. Yes, LLMs can't write code that's never existed before, but they can plan. And if they plan something new by cobbling together code that isn't new, that's still pretty good.
@minimal3734 10 months ago
The majority of the views expressed here will prove to be irrelevant very soon, I think.
@g0d182 10 months ago
They have arguably already proven irrelevant.
@Rasenschneider 10 months ago
Everything is special!
@ryoung1111 10 months ago
GPTn, for better or worse, is the ultimate groupthink.
@rrr00bb1 10 months ago
I keep looking at the Assembly Hypothesis. It looks kind of like Forward-Forward, and seems so biologically plausible. Apparently, random networks of neurons emergently create what GPTs do - i.e., embeddings and next-token prediction.
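For readers unfamiliar with Forward-Forward: each layer is trained locally to push a "goodness" score (e.g. the sum of squared activations) above a threshold for real data and below it for negative data, with no backward pass through the stack. A bare-bones sketch of one layer's local update, assuming a logistic loss around the threshold:
```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))  # one layer's weights
theta = 2.0                             # goodness threshold

def goodness(x):
    h = np.maximum(W @ x, 0.0)          # ReLU layer activity
    return np.sum(h ** 2), h

def local_update(x, positive, lr=0.01):
    g, h = goodness(x)
    # Logistic loss pushes goodness above theta for real ("positive")
    # data and below theta for negative data - no backprop through layers.
    sign = 1.0 if positive else -1.0
    p = 1.0 / (1.0 + np.exp(-sign * (g - theta)))
    grad = -sign * (1.0 - p) * 2.0 * np.outer(h, x) * (h > 0)[:, None]
    W[:] -= lr * grad

local_update(rng.random(4), positive=True)
```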
@wbiro 10 months ago
What is missing? Original thinking. They cut and paste what already exists.
@daxtonbrown 10 months ago
As a mechanical engineer and also a programmer, I see the same thing. You always have to do real-world tests because your best designs always meet unexpected hiccups. Also, my AI results on chemical formulations for geopolymers always need to be cross-checked - many errors.
@DjWellDressedMan 10 months ago
A= Humanity
@JamilaJibril-e8h 10 months ago
Nothing is missing; it's a standalone model, if we follow the models of AI.
@irasthewarrior 9 months ago
You know what's missing? The I. It is A, but not I.
@g0d182 10 months ago
Summary of this video: the moving-the-goalposts fallacy.
@ricardodsavant2965 10 months ago
I would like one to cook meals, wash clothes, and tidy up my flat.
@DJWESG1 10 months ago
Therefore it needs to lack intelligence 😅
@ricardodsavant2965 10 months ago
@@DJWESG1 - 🤣 ☕
@nycgweed 10 months ago
Hey, that's Bart Simpson, not Sam Harris.
@JoseGarcia-w5d7u 10 months ago
Did you know that we humans have a physical machine motor? Maybe we are physical robots.
@aiamfree 10 months ago
it's not just what it has seen...
@joekennedy2599 10 months ago
You are insane😊
@Alphadeias08 10 months ago
Sam Harris is an LLM :)) I don't like Sam; I think he lives in a bubble.
@odiseezall 10 months ago
Incredible amounts of hubris.
@JoseGarcia-w5d7u 10 months ago
Just like the Anunnaki did to us.
@xegodeadx 10 months ago
I wonder if anyone will ever read this, but instead of humans creating an AI, why don't we make a computer design an AI, like its own baby? Similar to the science behind deep neural networks. Couldn't we give a computer - like a quantum computer - code for deep neural networks and maybe some biology stuff (or something along those lines)? Give it a blank slate and let it run wild. Essentially, my idea is to let robots create robots with no human interference.
@ea_naseer 10 months ago
Genetic algorithms... no, too slow.
@xegodeadx 10 months ago
@@ea_naseer I am computer illiterate and know jack shit about anything AI. Can you explain why it's slow?
@ea_naseer 10 months ago
@@xegodeadx What you've hypothesized already exists; they're called evolutionary algorithms. Look them up - we use them to design antennas, and the best they've been used for is scheduling. They are usually slow to converge to the correct answer; when they aren't, it's because of the way they are designed, which is difficult to get right. And if you give them millions of neurons... combinatorial explosion, i.e., solving the problem with a computer is only slightly better than solving it by hand - both are painfully slow.
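A minimal evolutionary algorithm makes the slowness easy to see: every generation evaluates the whole population, and progress depends on lucky mutations. A toy sketch on a bitstring problem:
```python
import random

TARGET = [1] * 20  # toy problem: evolve a bitstring of all ones

def fitness(bits):
    return sum(b == t for b, t in zip(bits, TARGET))

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 20:
        break
    # Keep the best half, refill with mutated copies of survivors.
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]
print(generation, fitness(population[0]))
```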
@xegodeadx 10 months ago
@@ea_naseer If humans can't comprehend consciousness and a computer is too slow at recreating it, where do we go from there? Could you give a quantum computer consciousness equations or questions and see what its answers are? If a computer itself can't create/birth its own computer, I don't think there will ever be AI. Unless ether or dark matter or some weird science thing is discovered and mastered.
@ea_naseer 10 months ago
@@xegodeadx Well, you'd have to define consciousness. And a computer can birth another computer - it's called recursion - but it takes up too much memory, so no one does it. I think there is a disconnect between the public view of AI, which (forgive me for saying, but like yours) is a cult-like expression of wanting a brain in a vat, and what goes on in academia, which is just building more capable machines than the ones we have now.
@renanmonteirobarbosa8129 10 months ago
AI needs better memes.
@lebesguegilmar1 10 months ago
Amazing episode! Brandon is very intelligent and his explanations are didactic. Congratulations from Brazil.
@joekennedy2599 10 months ago
Do you have a dishwasher?
@synthaxterrornotmr.z569 10 months ago
Nice!.....
@KilgoreTroutAsf 10 months ago
"Sam Harris" being a "public intellectual" and "sounding so clever". A promising start for an April Fools' episode.
@joshuasmiley2833 10 months ago
I am a huge fan of this podcast! It's great to catch them just a little behind current events. It's crazy that, going from pre-simulation to the real world, there is already at least one company that has created a robot that is doing dishes, folding laundry, sweeping floors, and cooking in a new environment it has never been in, by teaching itself. This podcast is two days old 🤔; it's amazing how fast technology is moving. 😉 It seems to me the problem of agency is as good as solved; it's just a matter of time, and technologically, time is moving very fast. It's taken billions of years for us to acquire agency. It's also good to remember that when humans are babies, they take quite some time - usually a year and a half - just to learn how to walk. I guess I don't think agency is a magic only biology inhabits.
@gustavoalexandresouzamello715 10 months ago
What company is this?
@joshuasmiley2833 10 months ago
@@gustavoalexandresouzamello715 I can't remember which university, but it's a start-up company out of a university. I was blown away when I saw it. My guess is that another company, such as Tesla with Optimus, will buy them out, and we will see this in mass production, which many large industrial companies already have contracts for in 2024.
@palfers1 10 months ago
The current state-of-the-art chatbots suck tremendously at simple arithmetic. Building logic and reasoning ability atop such a shaky foundation seems like folly. They lack the ability to inspect and check their own statements.
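The missing "inspect and check" step is exactly what tool use patches over in practice: rather than trusting the model's arithmetic, have it emit an expression and evaluate that deterministically. A rough sketch of the pattern (the model's claim is a hard-coded stand-in):
```python
import ast, operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr):
    # Evaluate only +, -, *, / over numbers - a deterministic "calculator
    # tool" that checks arithmetic the model would otherwise guess at.
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

model_claim = "137 * 48 = 6578"          # a stand-in for a chatbot's answer
expr, claimed = model_claim.split(" = ")
print(safe_eval(expr), "vs claimed", claimed)  # 6576 vs claimed 6578
```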
@NullHand 10 months ago
From the perspective of a hard-sciences university student, most English and literature majors suck tremendously at geometric proofs and differential equations. They seem to lack the focus and rigorous deductive reasoning skills. Is this because they are an inferior architecture? Or is it maybe because, during the "Reinforcement Learning with Human Feedback" phase of the extensive train-up of their neural nets, those qualia and responses were not heavily weighted?
@makhalid1999 10 months ago
Wait, the Saudis pay you more than 17TB???
@diophantine1598 10 months ago
I have those solutions. I developed a model architecture which seemingly proves P=NP
@aslkdjfzxcv9779 10 months ago
The only thing missing from AI is artificial intelligence.
@grobolomo2055 10 months ago
What the fuck was that Sam Harris segment in the beginning? Not funny, just bizarre and off-putting.
@ajohny8954 10 months ago
Wrong! Very funny
@jmanakajosh9354 10 months ago
Imagine thinking Sam Harris reads books all day. Why does he always have think tanks on his podcast, then? He's incredibly insulated from the world.
@_tnk_ 10 months ago
calm down yo
@agenticmark 10 months ago
triggered af. call a manager, quick!
@alexandermoody1946 10 months ago
Training on the whole internet was clearly not a good teacher.
@zerge69 10 months ago
Smug.
@ilyosjonnishanov4533 5 months ago
Wow! The "if you find GPT useful then you're not" line was spot on 😅
@billyf3346 10 months ago
sell sundar pichai. buy antithesis. ☮