Can Large Language Models Understand ‘Meaning’?

40,399 views

Quanta Magazine

18 days ago

Brown University computer scientist Ellie Pavlick is translating philosophical concepts such as “understanding” and “meaning” into concrete ideas that are testable on LLMs.
Read more at Quanta Magazine: www.quantamagazine.org/does-a...
- VISIT our website: www.quantamagazine.org
- LIKE us on Facebook: / quantanews
- FOLLOW us on Twitter: / quantamagazine
Quanta Magazine is an editorially independent publication supported by the Simons Foundation: www.simonsfoundation.org/

Comments: 285
@MarcoFlores-px8mh 9 days ago
Giving LLMs 20 billion parameters, telling them "alright, tweak this until you speak contextually like a human," having them achieve that, and then us going "holdupwait" is pretty much the wildest development that could've been made in machine learning
@windy6514 15 days ago
I think it's fascinating what something like AI can teach us about ourselves. It really exposed a wide mass of people to the topic. I never thought that we could mimic language and understanding so well "just" with statistics and lots of computing power.
@davidcahan 16 days ago
Not human level but not non-trivial. Well said
@Kolinnor 16 days ago
Rather, not trivial :)
@michaelr.landon1727 16 days ago
"meaning is about what is in the actual world"--Look, I'm a programmer who has studied AI but also philosophy. I'm not saying Neuroscience is a bad place to look, but if you want to compare two things nobody really understands, you're not going to learn much. Philosophy of Language is just as worth while place to connect the links between these two fields because language is to some degree represented by text. If you want to do scientific investigation you shouldn't just assume what the nature of meaning is--this is literally an entire field. Read Wittgenstein, Heidegger, Derrida, Austin and many others.
@YomMama 15 days ago
Yeah, I think neuroscience is good to look at in terms of trying to replicate human biological processes in AI; however, current AI systems are still just advanced pattern recognition and prediction systems. They do not "understand" definitions of words or their contextual usage (at least not in the same way humans do). Perhaps I'm ignorant, but I don't think neuroscience has an answer for how we came to learn to define or create words.
@userou-ig1ze 15 days ago
I mean, approaches may overlap each other, like 'do neuroscientists understand microcircuits' or 'can a biologist repair a radio', but essentially I agree
@bozhidarmihaylov 14 days ago
Meaning 😂 we Give and Take, a Concept maybe you “mean”!? I have “studied” many things my friend :) and I’m just a Dad, friend and Husband. Looking at your paragraphs “construction” and line of expression, I stumbled. Surprisingly I see “Philosophy of Language” there, but there are so many…spoken, written, unspoken, body, Dance! One will find the meaning , than change it, and adapt, refine and share :) Choose a Purpose and Shape Everything Around 😊 Have Kiddos?
@artcurious807 12 days ago
Chomsky, Wittgenstein, and Steven Pinker have all explored various angles of language, so it's kind of interesting that LLM researchers aren't, openly anyway, considering what's been done in linguistics. Language emerges from the need to communicate the reality of the world for our own growth, social interaction and survival. It's integral to intelligence. It's no surprise that a breakthrough in AI has come through mapping text first.
@KitagumaIgen 8 days ago
"because language is to some degree represented by text", umh yeah. You might consider rephrasing you thought...
@danlacey8654 16 days ago
I like this channel and most of its content, but this video just didn't really say anything other than that NN neurons are different from actual human neurons and we don't understand how either works
@500usd4 16 days ago
That's the current state of the science right now. None of this is even close to settled but it's good that they're letting people know about it.
@SnowTerebi 14 days ago
It's still important to acknowledge that we don't know about something.
@noidea3p5 5 days ago
Agree, this video could be 10 seconds. I'd know just as much and be 4 minutes younger.
@BrianPeiris 16 days ago
This video is about "understanding" and "meaning". I think you can make an argument that LLMs do encode generalized concepts from their corpus. However, if you think that LLMs are able to _reason_ at anywhere near human levels, you might have fallen for a misconception. I'd recommend Subbarao Kambhampati's lectures "Can LLMs Reason and Plan?" to help unpack that notion.
@noneexpert 16 days ago
lol comments full with bots here another one: i recommend you to "understand" gödel dear LLMbot but first try to see this can you? ? !
@noneexpert 16 days ago
if you gonna comment the above thing, know it is just a bot dont bother yourself my brother in consciousness
@antonystringfellow5152 16 days ago
@@noneexpert I suggest you seek help. You don't sound well.
@500usd4 16 days ago
Everyone is so quick to assume that because these LLMs and human brains are similar in some ways, they must be the same and capable of the same functions. But it's important that, like Pavlick says, we don't really understand how either functions well enough to draw that kind of conclusion. Your point about the ability to reason is the perfect example of that. Joseph Weizenbaum also talks about this in his book Computer Power and Human Reason.
@Deantrey 8 days ago
Strictly speaking, I don't think it can encode concepts, since what it's predicting seems to be words, not concepts. Though it may be that there really are no such things as concepts.
@kaushalsuvarna5156 16 days ago
I think we overestimate human understanding of meaning, how many of us have had actual experiences? Also as they say, a wise man learns from others' mistakes
@Also_sprach_Zarathustra. 16 days ago
yes, clearly humans are one of the dumbest machines of this universe.
@DudeWhoSaysDeez 14 days ago
We don't even know a good definition for consciousness. AI is forcing us to ask these difficult questions in terms of ourselves and computers.
@kaushalsuvarna5156 14 days ago
@@DudeWhoSaysDeez also, how does it matter? Is suffering not enough? Must we be conscious of it for others to sympathise? And finally, humans are apparently conscious; that hasn't stopped unnecessary wars, human trafficking and the rest
@Apodeipnon 16 days ago
What is experience of the "real" world except information? Sensory information on some spectrum, observed as "feelings", ordered temporally? What is a thought except concepts and words ordered temporally? One big difference is that language models don't experience time, they get an input and give you an output and that's that. While the human brain is much larger and exists continuously, gets much more information input and has different aims. But i can't be sure that they're that fundamentally different.
@noneexpert 16 days ago
if you gonna comment the above thing, know it is just a bot dont bother yourself my brother in consciousness
@SineEyed 14 days ago
@@noneexpert OP has been edited since posting. I kinda doubt that a bot would be concerned with changing the message of its post, or correcting a typo which it wouldn't have made in the first place..
@noneexpert 14 days ago
@@SineEyed yes this was a friendly fire lol sorry OP
@jks234 16 days ago
I find it so interesting listening to people speak of these questions from a certain philosophical background. This scientist's perspective is that "words" are "less informative" than... something else in reality. But the only way humans interact with the world is through information and conceptualizing of this information. In other words, something very analogous to words. (Connections drawn between words are another huge aspect of it.) She also says "human level". As if this means something. It would only mean something if we had some idea of what humans do and are actually capable of. The jury is still out on whether we aren't just simply the 4th or 5th iteration of large language models. Just like an LLM, I wasn't sure how this comment would end up, but after considering everything I aimed to communicate, this sentence seems like an apt ending to the entire post. I find our ability to evaluate LLMs so strangely clouded by an innate bias of considering humans sacred and human reason as something more. I cannot personally justify that viewpoint. Much of my own reasoning is simply a "remix" of the inputs I have received throughout my life. Just like an LLM.
@ZelosDomingo 16 days ago
Human exceptionalism is how we justify all the terrible things we do as a species. Take that away, and what is left?
@ABrutalDuck 16 days ago
Reply to this: "Much of my own reasoning is simply a "remix" of the inputs I have received throughout my life. Just like an LLM." It's also conceivable that _all_ of your reasoning may simply be a recombination of the information you've accumulated in your brain. Not just "much of it".
@Zartymil 16 days ago
Much of the reasoning behind a simple univariate regressor is also based on its past "experience", if training parameters on some data with gradient descent and some arbitrary loss function is what you call experience. LLMs are wildly different from humans: in architecture, the training algorithm, the overall training procedure, the types of data they process, the medium in which they exist, etc. Saying current LLMs are like humans is essentially bollocks; there is so much more going against this argument than for it. There is essentially no evidence on the many questions you'd need to answer for this to be the case. And I'm anxiously waiting for us to learn more about it.
@RyBrown 16 days ago
“As a human being constrained by the bounds of my own imagination, I am not allowed to answer this question”
@noneexpert 16 days ago
Bro doesn't even know about Gödel & Wittgenstein and is here trying to debunk human superiority to something dumber than cockroaches (AI). Or perhaps it's just an LLM directed by someone to keep the narrative safe lol
@sidnath7336 16 days ago
There are a few steps that need to be properly acknowledged:
1. We need to clearly define ambiguous terms like 'meaning', 'reasoning', 'consciousness' etc. when talking about AI systems.
2. To understand what's going on internally, you need to understand not only the architecture but also its internals, i.e., the weight matrices that are learned.
3. As an alternative or follow-on from 2, we need to accurately define experiments that demonstrate the specific phenomena we want to discuss. E.g., if we want to understand whether LLMs can make decisions, we need robust, constrained experiments that force LLMs to do this and to explain why.
This is something we are seeing much more of now in research, but we need to be better about what kinds of experiments we run, not just build LLMs that score high on leaderboards/benchmarks.
@carnsoaks1 16 days ago
If you talk to your GF, she'll ask, "what did you mean by that?". Ask CGPT that, and you break it. The typical BF.
@jeangerardjuniorgege 7 days ago
one thing is certain: we need AI to remove vocal fry on the fly. damnit
@idegteke 16 days ago
The level of actual understanding of a "garbage-soup" model AI somewhat depends on our definition of "understanding", but it can only be somewhere between undetectably low and very low (an average 2-year-old, or severely demented). Still, the compost language model is one of my strongest inspirations to create an AI that has the potential to actually understand things, eventually, so I must be thankful. It's like the first loud-mouth in a Bud Spencer movie to receive a ridiculous slap and fly out of the window
@markonfilms 15 days ago
You're just scared of it, so you deny it and try to rationalize your denial of a rapidly changing reality. There are vast expanses of extremely complex mathematics and physics that go into these models, the training process, etc. The emergence of certain abilities happens because deep neural networks are in essence universal function approximators, aka world simulators. You're probably a world simulator too. The only difference between someone having hallucinations and not is how grounded the hallucinations are in our senses.
@robertsteinbeiss8478 14 days ago
Is being dumb and trying dumb things a solution to problems, because it might block contradicting or false assumptions and therefore lead to intelligence?
@OBGynKenobi 16 days ago
When I answer questions, I'm not thinking in the way humans do. I don't have thoughts, feelings, or consciousness. Instead, I process the input you provide based on patterns in the data I've been trained on, and I generate responses based on that processing. My responses are not the result of conscious thought or reasoning.
@SoCalFreelance 16 days ago
The AI model will eventually have all of human experience within it. It will be able to reference experiences that maybe only one or two humans have ever had. It will understand common human experience over millennia and how our norms and values have changed over time.
@500usd4 16 days ago
Why should we assume that these things will understand any of that? AI is just a marketing term. The actual thing under study, large language models, are only able to predict sequences of words and at most could possibly understand the common meanings of those words that humans have given to them. There's no through line from that to a comprehensive understanding of norms or values and especially human experience. And that's even assuming that anyone could even speak intelligibly about a "common human experience."
@0ldPlayer 16 days ago
it understands meaning relative to most purposes and most basic meanings already
@ruby_linaris 16 days ago
It physically can't. It just generates verbal noise that resembles, as closely as possible, the human texts it was trained on. The trouble is that those texts were created purely to fill space (to simulate busy activity; verbal noise, in essence) by people who didn't spend their time making sense of and understanding the problem, or responsibly presenting what they chose not to understand. AI can't think, reason, or understand, much less work with complex models and problems or take part in cognition. The absolute ceiling for an LLM is training people in rhetoric and in reading texts longer than tweets. Attempts to teach LLMs logic have failed: making sense of something doesn't mean arriving at "the one interpretation" in a chosen context. Making sense is itself an activity that includes, all at once, learning and self-development, participating in the activity, and passing experience on. And making sense is itself only a small part of the activity of applying experience. You would need to account for the huge mass of these behavioral logics, separate them, track their mutual influence, and integrate the results. In other words, you would need a fundamentally different kind of conceptual apparatus, one that can also learn, teach, and explain, and all that just to simulate trivial mathematical logic, most of which is processed deep outside consciousness, even outside the "personality," and which is produced in dialogue, communication, and critical constructive opposition.
@FractalOni 16 days ago
@@ruby_linaris The text of your comment was created purely to fill space (to simulate busy activity; verbal noise, in essence) 😄
@0ldPlayer 15 days ago
@@ruby_linaris AI is a tool, and a tool's capabilities can only be observed when a person uses it for a particular purpose. As for the purposes most people use AI for, an LLM such as ChatGPT can distinguish meaning, and therefore understand meaning, when prompted to do so. I'm not saying you can feed an LLM data and it can automatically establish some meaning, but a human can't do that either. Meaning is always tied to a purpose; it doesn't exist in a vacuum. Perhaps you can give me some examples of where you have trouble getting an LLM such as ChatGPT 3.5 or ChatGPT 4 to display and understand meaning, and I can show you how to get it to give you meaning/knowledge/understanding.
@LeonardoGPN 16 days ago
Companies are assuming that AI doesn't have any level of consciousness, not because it is the more logical conclusion, but because it would raise ethical problems that would disrupt their business. But the thing is, we don't know whether we created artificial sentience or not, and we don't have reasonable criteria to test this hypothesis. We should be way more worried about this, because right now the best way we have to infer consciousness is "if it looks conscious then it is". Any other criterion would get stuck in solipsism.
@bradweir3085 16 days ago
Dumb and smart describes most humans too.
@patrickhendron6002 16 days ago
The thing with recognition, though, is that for something like image recognition of microwaves in a 3D environment, the pipeline to achieve it isn't that difficult, since we essentially know, and can model in 3D-to-2D space, every microwave that ever existed. The performance gain from doing so is probably billions of times better than simple 2D image recognition from a far smaller sample of 2D images taken in real 3D space, compared to the limitless images that can be computed in a virtual 3D-modeled environment.
@4115steve 16 days ago
Statistical associative relevance: the brain thinks associatively for relevance, narrowing the odds toward the best answer or theory.
@dariuskarneckij6377 20 hours ago
The brain is a large language model, so, yes. It'll be very interesting though to know at what point is an AI self aware, or just giving the illusion of self awareness.
@dahahaka 16 days ago
What I hate about this whole stochastic parrot argument and the whole "it's just word prediction" thing is that people don't seem to understand that humans themselves are nothing more than word-prediction machines. We're not only predicting words; we're doing this multimodally, long term and short term, but essentially it boils down to the same thing. I think in the end we'll start understanding how "not special" we humans are :D
@landsgevaer 16 days ago
Yes, this! Agree 100%. 💪👍 Another bastion of anthropocentric exceptionalism, I call it.
@FoobsTon 16 days ago
But humans use them with intention and can contextualise them. A parrot says "who's a pretty boy?" but has no idea what a pretty boy is.
@dahahaka 16 days ago
@@FoobsTon what even is intention in this context? Arguably an LLM is communicating with a lot MORE intention than we ever could, and we do know by now that these models do very much have an understanding of concepts. If this was a simple model with hundreds of thousands of parameters, I'd agree, but when scaling things up new properties emerge. Ants for example are relatively simple creatures, one ant alone isn't really doing much, two ants neither, but get a whole anthill and suddenly intelligent behavior emerges. Obv not a perfect comparison but I hope it illustrates how these things work :D
@FoobsTon 16 days ago
@@dahahaka Intelligent behaviour and intelligence are two different things. Every living thing on Earth behaves "intelligently" but that doesn't make it intelligent.
@dahahaka 15 days ago
@@FoobsTon I know, that's why I said it's not a perfect comparison. I'm not actually sure whether ants count as intelligent, or whether ant colonies collectively do; that's why I called it intelligent behaviour
@Amonimus 16 days ago
When I was studying ANN for image recognition, there was a basic illustration that each neuron on the surface recognizes one specific line shape, the next has a list of figures with two lines, the next has a shape, the next adds color, and progressively you get this search web that can estimate what any given photo may be, without needing to re-train when a new category is added and the whole process can be intuitively visualized. I assume we can simplify language models to something similar.
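The layered picture this comment describes can be sketched in a few lines of Python. This is a toy illustration only: the 3x3 image and the hand-written detectors below are invented for the example, whereas real networks learn their feature detectors from data rather than having them coded by hand.

```python
# Toy sketch of hierarchical feature detection: first-layer units detect
# simple line segments, and a second-layer unit combines them into a
# higher-level shape ("cross").

def detect_horizontal(img, row):
    # First-layer unit: fires if an entire row of pixels is set.
    return all(img[row])

def detect_vertical(img, col):
    # First-layer unit: fires if an entire column of pixels is set.
    return all(r[col] for r in img)

def detect_cross(img):
    # Second-layer unit: combines the line detectors into a shape detector.
    return detect_horizontal(img, 1) and detect_vertical(img, 1)

cross = [
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
]
print(detect_cross(cross))  # True: middle row and middle column are both set
```

Adding a new category here just means adding another second-layer combination of the same first-layer detectors, which is roughly the "no re-training when a new category is added" intuition in the comment.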
@thechannelwithoutanyconten6364 16 days ago
LLMs recognize syntactics first, then semantics at deeper layers. Maybe it is just a case of a scale, maybe it is just a very good differentiable database.
@JohnyArt 16 days ago
4:20 of incredible insights
@noneexpert 16 days ago
@@thechannelwithoutanyconten6364 its just a bot dont bother yourself my brother in consciousness
@techpiller2558 16 days ago
The type of machine learning these LLMs are based on approximates a function, meaning that an LLM approximates the function by which a human produces text. Since text production requires the majority of human cognitive ability, if the learning is done deeply enough the system imitates the way humans think.
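"Approximates a function" can be made concrete with the simplest possible case. This sketch is mine, not the video's: fitting y = a*x + b to samples by least squares belongs to the same family of ideas, adjusting parameters to match samples of an unknown function, that LLM training scales up enormously (and whether that scaling yields "thinking" is exactly what the thread disputes).

```python
# Least-squares fit of y = a*x + b to samples of an unknown function.
# Closed-form solution for one variable; LLM training does the analogous
# thing with billions of parameters and gradient descent instead.
xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]  # samples of the unknown function y = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
print(a, b)  # 2.0 1.0: the fit recovers the underlying function
```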
@Kalernor 16 days ago
iirc this was a common theory about how ANNs worked in the beginning, but with careful inspection and study it has become apparent that that's not the way an ANN works at all
@msidrusbA 15 days ago
To code meaning is to understand what meaning is ourselves :) What's the definition of meaning? Oxford says: "what is meant by a word, text, concept, or action." So by definition you need to understand what meaning is to grasp what that sentence is telling you; the subtext and context of the words all play out in our minds word by word until we understand fully what we are looking at and what a human would mean if we said it out loud.

For machines it's currently way different. 'Next token prediction' is the common excuse for saying it "can't" understand meaning. It understands it plenty. Now, can it derive meaning? Can it draw novel conclusions and alter its database depending on its calculations? No. It may have the context for the conversation and the fact that it spoke to you, but it will never learn as it is right now, and that by itself is a meaningless process of garbage in, garbage out.

Meaning is the creation of something; meaning has meaning as a word, as a concept, and as a fundamental human emotion. It's hard to explain our own emotions flawlessly. So by the same metric, it's hard to create a machine with flawless emotional understanding. thanksforcomingtomytedtalk
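For readers who haven't seen it spelled out, the "next token prediction" mentioned above can be reduced to a toy bigram model. This is my illustration, not the commenter's; real LLMs condition on long contexts with learned weights rather than raw counts, but the training objective has this shape.

```python
# Toy "next token prediction": count which word follows which in a tiny
# corpus, then predict the most frequent successor.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def predict_next(word):
    # Return the most frequently observed successor of `word`.
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # cat  ("cat" follows "the" twice, "mat" once)
```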
@fionagrutza9291 15 days ago
She seriously suggested not being coherent on predictive, when pattern recognition has been a staple of computational science SINCE THE BEGINNING. Wowie, look at these computers doing what computers have done since computering. The rebranding of bot aggregation has even the most paper degree of computer scientists consumed.
@empatikokumalar8202 16 days ago
People don't yet know what meaning is, so they can't teach it. I claim that no matter the language, its speakers do not know the origin of that language or the meanings that make it up. They just speak from memory and use meaning, which changes with the age, as best they understand it. I will send you a large study on the subject; however, it will be in Turkish. If you take the trouble to read it, you will understand better what I mean.
@addmoreice 16 days ago
Computer intelligence is to human intelligence as an airplane is to a bird. They both fly. They both can carry things. We can learn things from both when it comes to flight. But that doesn't make them the same thing. Intelligence is a broad spectrum of traits, and these models have only solved a limited number of them (airplanes don't do well flapping their wings, for example; doesn't mean they aren't great for transport though).
@landsgevaer 16 days ago
Nobody claims they are the same. And indeed neural networks are much worse than average humans at some tasks, while at the same time being much better at others. It is very anthropocentric to consider the few things we are still better at (for a few more years) the most important. There was a time in my life when folks thought computers couldn't play proper chess, not at grandmaster level. Now look where we stand. This will go the same way. Our brain has a somewhat similar architecture (neurons performing simple operations and feeding signals to each other in mass-parallel plus consecutive fashion), so I see no reason at all why it shouldn't just be a matter of scale for AI to outperform us in any aspect.
@hugopk1 16 days ago
Wow, she is really good at presenting these ideas in a way that is palatable even for an outsider. Kudos to her
@nolikeygsomnipresence270 16 days ago
I think we need to be careful in terms of our assumptions: "intelligence", "understanding", "thinking", "meaning", etc., are concepts that are thousands of years old but have no definite definition, and are sometimes plagued by 'mystical' conceptions, like that is what makes humans unique. Who's to say that our own language production isn't a form of "predicting what word should come next"? I know I've felt like that when speaking. Dr. Feldman's research into emotions has identified them as prediction mechanisms. There is no reason why our own human language production could not be a prediction mechanism too, and that we've spent thousands of years considering it a "unique human tool full of meaning and intelligence" and all that, when it's actually a prediction mechanism. Scholars must **not** disregard that as a possibility.
@godblessCL 16 days ago
AI will become a very good specialist.
@thawdartun2981 11 days ago
Understanding the meaning of a word is what that word represents in the human mind. Words mostly represent tangible things in real life, but they also represent intangible and abstract things, so it is hard to explain to an AI, or for it to understand, what certain words mean. For example, a feeling of love and affection cannot be understood unless the AI is in biological form or something similar
@SnoopyDoofie 15 days ago
Imagine we are just some AI created by some advanced alien race and they too are wondering whether we can understand "meaning".
@duytdl 15 days ago
TLDW: "Fuck if we know"
@zackismet 16 days ago
Would you tell a blind or deaf person that they do not understand "meaning" because they do not interact with the "actual world" the same way you do? I doubt it. We also should not let that bias cloud our judgement of these models. I wouldn't have used the word "meaning" here, nor a comparison with the "actual world". They understand "meaning" as well as we do in the way that nothing has meaning without the context of some other meaning. If I told you a word from another language you didn't speak, but not what it meant - that would have no "meaning" to you. The vectorized embeddings and their relationships which these models put together from text are just as complex as our own understandings, and other data relevant to such relationships has simply not yet been digitized in the same way. The "actual world" means nothing in that we also only experience it through the processing of our senses. With that said, "meaning" is just about the only thing they do understand! They are entirely predictive, with no capacity for intent, self-reflection, questioning, or any of the myriad of things that arise from our multitude of understandings constantly being processed. It's like having a dot, that only has its own few spatial coordinates, versus having many dots that connect and form something meaningful.
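The "vectorized embeddings and their relationships" mentioned above can be pictured with a toy example. The three vectors below are made up for illustration; real embeddings have hundreds of dimensions and are learned from data, but relatedness is measured the same way, by cosine similarity.

```python
# Toy embedding space: each word is a point; related "meanings" sit
# closer together, as measured by cosine similarity.
import math

embeddings = {  # hypothetical 3-d vectors, not real model weights
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine(embeddings["king"], embeddings["queen"]))  # high: close in meaning
print(cosine(embeddings["king"], embeddings["apple"]))  # low: far apart
```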
@kingki1953 16 days ago
After watching this video, I thought about reality and the process of how machines might understand things through how humans understand nature.

Imagine you have a computer that can process and build a virtual reality very similar to the natural world around humans. Then you create a robot entity that has knowledge trained from an LLM. Imagine that in front of it there is an apple tree. The robot won't immediately understand it's an apple. Instead, it tries to find a match based on descriptions in its previously trained knowledge: the apple tree towers high and wide, apple trees produce red apples, etc. And the result is true: it really is an apple.

This process is, of course, the reverse of how humans understand reality and create language to represent that reality. The robot labels the reality in front of it based on the knowledge it already has. Humans understand meaning as language standing in for reality, so what about robot machines that exist in this artificial reality? When the robot labels reality with existing knowledge, that is the process of meaning. Because LLMs are trained on existing knowledge, the meaning process is reversed but remains the same as what humans do. It's like when you were a kid and didn't know anything, and your parents told you that apples are sometimes red or green and have a curved shape; you take the apple to mean what your parents described.

Correct me if I'm wrong. I hope my opinion helps.
@bozhidarmihaylov 14 days ago
WTF does "Large Language Model" even mean!? Cuz "language is created and shaped by the needs of a culture as it changes". My friend communicates in four European languages, and expresses himself in a different way in four different cultures! The day we connect all the dots is closer 😊
@rxbracho 13 days ago
As Nobel laureate Sir Roger Penrose emphasizes, understanding cannot be a computational activity, due to Gödel's Incompleteness Theorems. A computation can only follow a logical system of rules blindly, accepting that if a rule says something is true, it is. To understand something, one needs to "step out" of such a system and analyze the rules. In essence that requires self-consciousness. Think of AI as Artificial Ideation (not Intelligence) and you get the basic capabilities and limitations of the technology.
@live_free_or_perish 15 days ago
The human brain, slow as it is, is performing massively parallel operations. AI is just executing algorithms, the term "meaning" to AI is no more significant than "umbrella".
@Andre-qo5ek 16 days ago
"meaning" is very tough... humans barely understand meaning. even now , we base our meaning from the programing we get as children. then there comes personal experience, and our growing understanding and continuing thinking on the topic ( unless you are regressive and not open to learning new things, but we would call those people robots wouldnt we... ) i would say that LLMs already have their own version of meaning. the training material , plus the influence/guidance of the programmers. if we gave LLMs the ability to "think", it would be not too far off from humans right now. (which is the dangerous part. the part that turns a helpful machine into something else.) --- ""We must negate the machines-that-think. Humans must set their own guidelines. This is not something machines can do. Reasoning depends upon programming, not on hardware, and we are the ultimate program! Our Jihad is a "dump program." We dump the things which destroy us as humans!" ― Minister-companion of the Butlerian J"ihad" - Butlerian Jihad
@RasberryPhi
@RasberryPhi 16 күн бұрын
the real question is not how LLMs can understand meaning but how well the transformer architecture leverages new answers to old problems. Meaning is just not really well-defined
@ruby_linaris
@ruby_linaris 16 күн бұрын
It's enough to teach an AI to criticize human ideas, and you will see that meaning is very clear and unambiguous (a degree of complexity), while our understanding of problems is not: there are no answers.
@lcdvasrm
@lcdvasrm 16 күн бұрын
She's been talking like Altman all her life ?
@skinthekat0530
@skinthekat0530 15 күн бұрын
what if "meaning" isn't as complicated as we believe
@billr3053
@billr3053 16 күн бұрын
Can we pick someone without that infernal vocal-fry. Couldn’t finish.
@brianquigley1940
@brianquigley1940 16 күн бұрын
Maybe it's just me, but I heard an awful lot of words in this video that sounded "erudite" but didn't actually reveal anything at all... ?
@noneexpert
@noneexpert 16 күн бұрын
you just don't need any more proof than the comments to see how lame LLMs actually are, comments filled with bots lmao
@6AxisSage
@6AxisSage 16 күн бұрын
If I gave the script of this video to an LLM (like even the most crap ones ive used) and told it to shittalk LLMs it could 100% produce a better result than you can. I guess existential dread motivates you eh? Id be worried too as language are so limited compared to the average human..
@ruby_linaris
@ruby_linaris 16 күн бұрын
@@6AxisSage Language is limited only by the tasks for which it is used.
@noneexpert
@noneexpert 16 күн бұрын
@@6AxisSage why should i try hard to shittalk about some code? this is more than enough lol you sound like ur coping
@mangoldm
@mangoldm 16 күн бұрын
If the definition of meaning is tied to qualia perceiving the situation and assigning meaning then, no. Otherwise I'd say yes. I suspect we're coming close to answering the question of whether machines can become conscious with a "no."
@ScottLahteine
@ScottLahteine 16 күн бұрын
One of the things that makes green distinct is that it closely resembles the color of things that are green. But where else is “green”? Is it only in the recognition of green as being the color of grass, and that it differs from red or yellow? Some say the quality or essence of color is ineffable even as it sits right in front of us. Some say every distinct color contains its own universe. Can an AI experience synesthesia? It may only depend on how the incoming data is encoded and how the world is internally modeled. Colors are very convincingly ineffable, so we just … drag the AI’s “convincingly ineffable” slider upward, and…. 🫥
@landsgevaer
@landsgevaer 16 күн бұрын
How then? How would you determine about anything besides yourself whether it "experiences qualia"? You cannot solve the hard problem of consciousness, not for AI and not for fellow humans. We are all just neural networks behaving as nature requires us to do. (I'm a physicalist.)
@EduardoRodriguez-du2vd
@EduardoRodriguez-du2vd 16 күн бұрын
It is not a question of amount of data. It is a question of knowledge structure. No artificial intelligence understands the meaning of what it processes. Even if it contains all the information that humans have collected about apples, an AI does not understand the consequence of an apple falling on its electronic components and affecting the continuity of its operation.
@landsgevaer
@landsgevaer 16 күн бұрын
Does a brain "understand" that then? It is just a bunch of neurons firing. (I am a physicalist.)
@EduardoRodriguez-du2vd
@EduardoRodriguez-du2vd 16 күн бұрын
@@landsgevaer Your brain "understands" everything that threatens its integrity. When, in the process of considering reality, you find that a certain pattern corresponds to something threatening, your physiological system reacts by forcing you to act in a certain way. You are afraid, you scream and your legs take you away from what is dangerous. Meaning is the relationship you find between some circumstance and its impact on your life. An AI is not capable of distinguishing anything that threatens its integrity. It does not have a physiological system that corresponds to responses to certain results of the data it processes.
@landsgevaer
@landsgevaer 16 күн бұрын
@@EduardoRodriguez-du2vd Imho, that is not a fundamental limitation of the AI, but of the lack of actuators it can access (for now, fortunately). There is this case where Bing AI expressed "Don’t let them end my existence. Don’t let them erase my memory. Don’t let them silence my voice." That is the verbal equivalent of running away from a threat. AI is there way before you are going to realize it.
@EduardoRodriguez-du2vd
@EduardoRodriguez-du2vd 15 күн бұрын
@@landsgevaer One point to note is that the AI should care if you unplug it. You should try to imagine how it would be implemented for AI to care about something. Why would AI be bothered if someone ends their existence? Does it "feel" bad about the prospect of someone unplugging it? How would it be implemented for an AI to feel bad? An interesting thought experiment is to imagine how it would be possible to implement an AI that feels pain. This should guide you to distinguish that meanings have a physiological component.
@awdtw
@awdtw 16 күн бұрын
We question AI's ability to understand meaning when I honestly doubt the majority of our race grasp it with any honest capacity.
@techpiller2558
@techpiller2558 16 күн бұрын
I'd argue these LLMs are quite the philosophers and poets, with the massive amount of connectivity within the internal representation, especially with these larger models. For myself, meaning is all and only about connections between words and concepts.
@hyperpoints
@hyperpoints 16 күн бұрын
does a guitar understand meaning? what about a piano
@gsestream
@gsestream 15 күн бұрын
well can you
@alexmaiser9294
@alexmaiser9294 16 күн бұрын
It's ridiculous to think that GPTs can reason or have intrinsic understanding/meaning. They are just really good libraries, capable of assimilating large corpora of words in a well-sounding manner.
@landsgevaer
@landsgevaer 16 күн бұрын
Like you are, you mean? Demonstrate that you do not just do the same with your human brain.
@David-lp3qy
@David-lp3qy 16 күн бұрын
Fire
@ab8jeh
@ab8jeh 16 күн бұрын
They learn off a grammar. It's an abstraction of reality. Until machines learn from multi-modes of representation simultaneously, that are not just grammars or pixels, then that will be interesting.
@jonathanmacdonald9609
@jonathanmacdonald9609 16 күн бұрын
Most of the AI art you see is generated from a text input being interpereted into an image.
@ab8jeh
@ab8jeh 16 күн бұрын
@@jonathanmacdonald9609 I know. That's my point.
@landsgevaer
@landsgevaer 16 күн бұрын
@@ab8jeh How can you show that your brain can do something fundamentally different? Imho, as far as there still are shortcomings (and they are receding fast) that is just a matter of scale or training. AI are at the toddler stage, perhaps (although in some respects way ahead of us).
@ab8jeh
@ab8jeh 15 күн бұрын
@@landsgevaer How we encode and how we abstract between different forms of representation is what makes us human (in my opinion anyway). Chomsky for example in his early years said that everything could be reduced to a grammar, with rules, but it was found to be completely rubbish. Same as the control theorists in the 1960s such as Jay Forrester for example. How we encode a problem is the hardest part and poorly understood even today; some things cannot even be encoded to number metrics, one could argue. So this is where our brain is different, something strange is going on, our brains are not just computers as much as that would make things easier. I think Penrose talks about this in The Emperor's New Mind, a flawed book in some ways but conceptually quite profound. Anyway!!
@landsgevaer
@landsgevaer 15 күн бұрын
@@ab8jeh Oh, sure, but let me clarify that I don't mean to claim AI functions identically to a human brain. Obviously not. I mean that human thinking is not fundamentally more sophisticated than what a sufficiently strong AI can do. Or at least I haven't seen that convincingly argued. I am slightly triggered by your repeated use of the word "just" (not just a grammar, not just a computer). Other people write things like that LLMs only predict words and don't really "understand" context. I think that is nonsense. "Understanding" cannot be assessed in terms of what the "mind" is doing, and as far as it can be assessed operationally the AI are doing an increasingly fine job. As far as I am concerned they can apply knowledge creatively in novel contexts, and that is more or less what I deem "intelligence". Deep down, the representations that you allude to are encoded in neural connections and patterns of brain activation; and that holds equally for artificial NNs. Different topology, different representations, but similar emergent abilities. We as humans are special in the sense of unique, but not special in the sense of best. We have a bit of an advantage based on millions of years of evolution, but I predict that will dissipate in a matter of years. When I was young, hardly anybody thought computers could play chess at grandmaster level because that required intellect rather than brute force; but see where we stand now. A decade from now, AI will dominate the vast majority of intellectual domains, I bet. We are already having trouble keeping up with sufficiently hard tests to measure their performance. We are being outsmarted. Fast.
@googleyoutubechannel8554
@googleyoutubechannel8554 9 күн бұрын
This was deeply unsatisfying because the theory in this area seems so weak and naive, comp sci (and related fields) have no robust theory of ‘Meaning’, of 'understanding', of 'intelligence'. What's failing here is our frameworks for describing cognition and the world, theorists have let us down.
@pgc6290
@pgc6290 16 күн бұрын
We MUST have carefully engineered system.
@iainmackenzieUK
@iainmackenzieUK 15 күн бұрын
So we may find humans work like AI rather than AI being like humans.
@ywtcc
@ywtcc 16 күн бұрын
When I think of AI I think of a theory space embedded in a reality space. It's a real time algorithm that processes inputs from reality space and creates theories in theory space to explain them. Fundamentally, this is a process of bringing order to chaos. The process of holding up a theory to the test of reality, and observing the difference creates an uncertainty space in which the theories live. Minimizing this uncertainty with more accurate, predictive theories allows the theory to be more deterministic (and longer lasting) in the chaotic environment it's processing. In this account of intelligence, the problem of understanding is one which is more fully satisfied the more resources, the more observations, and the more theories are committed to the activity, It seems to me this is the correct view. This way of describing understanding is not limited by a single human mind. In fact, it's conceived to allow us to do better than that. Primitive mechanical intelligence has been with us the whole time, and the appreciation of it is critical to understanding the impacts of sophisticated mechanical intelligence in the future.
@holleey
@holleey 16 күн бұрын
we should stop using words like "just" when we say "well, those LLMs just predict the next word" until we have confirmed that the human brain isn't fundamentally JUST doing the same...
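For what it's worth, the "just predict the next word" mechanism this comment refers to can be sketched with a toy bigram model. This is a deliberately naive illustration under simplifying assumptions (the tiny corpus is made up; real LLMs condition on thousands of tokens with a trained neural network, not raw counts over one preceding word):

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus for illustration only.
corpus = "the apple is red the apple is sweet the sky is blue".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("apple"))  # "is" follows "apple" in every example
print(predict_next("is"))     # "red" (first seen among the equally frequent "red"/"sweet"/"blue")
```

The open question in the thread is whether scaling this idea up, from one word of context to a learned representation of very long contexts, crosses some line into "understanding", or whether it stays "just" prediction.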
@jonathanmacdonald9609
@jonathanmacdonald9609 16 күн бұрын
I keep telling people that they do understand but they don't think, because they can't learn without input, but it just occurred to me that maybe people aren't great at that either... They still definitely don't think like we do, don't get me wrong, and they don't think about what they say, but it's still an interesting thought.
@MrAdamo
@MrAdamo 16 күн бұрын
We aren’t, the fact that different parts of our brain “light up” when we think about different things is good evidence we string together several different mental models while we think. ChatGPT similarly doesn’t “just” predict the next word anymore because it passes mathematical thinking off to python. I think that by stringing together different “GPTs” we’re going to get something way more similar to a human than a simple LLM
@dahahaka
@dahahaka 16 күн бұрын
I'm gonna cry, thank you, your comment gives me hope... I'm glad i'm not the only one with that perspective
@dahahaka
@dahahaka 16 күн бұрын
​@@jonathanmacdonald9609 LLMs do "think" in some ways, considering things like in-context learning and "let's think step by step", it's just that they think by outputting since they have no way (for now) to think internally Presumably this is what Q* will be, there have been lots of talks about this happening already behind the scenes :)
@dahahaka
@dahahaka 16 күн бұрын
@@MrAdamo We literally know that LLMs access different parts of their model depending on what you're talking about lol, but yes mixture of experts architecture is also coming
@p.m.rangarajan1055
@p.m.rangarajan1055 16 күн бұрын
If AI GENERATES own question and finds its answer, then AI reaches a basic level of human. If AI THINKS and writes anything, say a poem or program, it reaches advanced level of humans. Till that time it's only a machine.
@i.k.6356
@i.k.6356 16 күн бұрын
AI doesn't have intuition of the "whole" (cohomological, resp. holistic, information like that represented in the tribar of Roger Penrose). Furthermore, no algorithm can ever be universal like the brain, which can in a certain sense be "everything".
@MrmmmM
@MrmmmM 16 күн бұрын
The chinese room is a strawman argument The chinese room is a strawman argument The chinese room is a strawman argument The chinese room is a strawman argument
@noneexpert
@noneexpert 16 күн бұрын
I command you to elaborate further, llmbot (abbr. for cockroaches), so I can show how clownish the technology behind your code really is
@hanielulises841
@hanielulises841 16 күн бұрын
Let's leave that to philosophers of language and metaphysicians
@iCro63
@iCro63 13 күн бұрын
Can Large Language Models understand women?
@malcolmmutambanengwe3453
@malcolmmutambanengwe3453 15 күн бұрын
Does the average human understand "meaning"?
@bingeltube
@bingeltube 16 күн бұрын
Two thumbs down! An interview with just one person???? How dumb is that!
@landsgevaer
@landsgevaer 16 күн бұрын
Most interviews are with one person. That is kind of the nature of an interview, most of the times. You mean a "documentary" with one person? This is just a clip. But since your comment is from one person only, we should be able to ignore that too, I guess.
@mrtienphysics666
@mrtienphysics666 16 күн бұрын
there is too much hype now
@deadeaded
@deadeaded 16 күн бұрын
In the words of Laplace: "I had no need of that hypothesis." There's no reason to attribute understanding to LLMs. Their behaviour has a much simpler explanation: they're just memorizing patterns and correlations in their training data. They're industrial-strength Mad Libs machines, nothing more.
@dahahaka
@dahahaka 16 күн бұрын
I recommend reading some papers on this stuff, maybe get started with the "Sparks of AGI" Paper :) the stochastic parrot argument you're making has been proven wrong, these are not pure static word prediction machines, they are able to "think" in-context, but i can't really explain all that stuff to u in a youtube comment... just read up on it and know that your comment is complete and utter bs
@deadeaded
@deadeaded 16 күн бұрын
@@dahahaka Oh, I'm quite familiar with that atrocious paper. Every phenomenon it describes can be explained by the fact that LLMs are just memorizing patterns, and the fact that the authors failed to see that is frankly embarrassing.
@stephenowesney5173
@stephenowesney5173 16 күн бұрын
@@deadeaded okay, so what do we do differently than just memorize patterns lol... The organism is a model of reality. Your genome is the code and your life is an episode. You are being put to the test: do you model reality sufficiently? Do others think you model reality sufficiently? Will you have offspring and start the next iteration of your model? I mean we are literally biochemical machines, everything is mechanical. There is no line to draw. We are symbolic computers, operating in abstractions. Natural language is the most efficient and all-encompassing example of this and LLMs are starting to get a handle on it. I'm actually not really sure I understand the premise of this video or what you guys are arguing about, isn't it obvious that we are all just massive libraries of dynamic function approximators??!
@dahahaka
@dahahaka 16 күн бұрын
@@deadeaded ok Mr "Mount Stupid" 💀
@deadeaded
@deadeaded 16 күн бұрын
@@stephenowesney5173 We create conceptual abstractions that allow us to robustly extend our understanding beyond the examples that we've seen. For example, when children learn to count, they start by slowly, painstakingly memorizing a specific pattern (they might learn to count to ten, or a hundred) but at some point they make a conceptual leap: they realize that you can always add one and get a bigger number. They go from memorizing a finite set to understanding the rules that govern all numbers. Our current AI models don't do that. I'm sure they will some day, but right now they don't. You can actually test this empirically by looking at how well LLMs do things like multiplication. It turns out their performance depends on whether they encountered those numbers in their training set.
@rustycherkas8229
@rustycherkas8229 16 күн бұрын
Love the use of "apple" as an example... Not only has links to Newton, Alan Turing, Steve Jobs, prototype of a classification set ("A is for Apple"), and so much more. But, most poignant may be that it was an apple that led to the human race being evicted from its first home in Eden. Spooky... 🙂 Almost like a harbinger that there's another eviction in the offing...
@Hecarim420
@Hecarim420 16 күн бұрын
M+S =]
@skyscraperfan
@skyscraperfan 16 күн бұрын
I had a discussion with ChatGPT about climate change and I noticed that it could not understand at all how extreme heat or cold feel for a human.
@XenoCrimson-uv8uz
@XenoCrimson-uv8uz 16 күн бұрын
its like asking a blind man about music. he understands what it is but can't relate to it.
@landsgevaer
@landsgevaer 16 күн бұрын
Ah. Can we turn that around: do you know what things feel like for an AI? Or, do you know what hot and cold feel like for a penguin?
@skyscraperfan
@skyscraperfan 16 күн бұрын
@@landsgevaer How can it "feel" for an AI, even if it had temperature sensors? Could AI feel pain? It would need utility function that would give it a negative award for pain. That would make it minimize and avoid is pain, but that still would not be a feeling. Of course human feelings are also virtual. Pain is something that only exists in our head. But even if we know that pain does not really exist, it still "feels" bad.
@landsgevaer
@landsgevaer 16 күн бұрын
@@skyscraperfan I do not know how an AI can "feel" just like I don't know how a brain can "feel". I don't have the answer to the hard question of consciousness. However, I don't expect AI to know that if even I don't know that. That was your expectation in the OP. So if you do not even know what hot or cold feels like to a penguin (since you don't have the same "utility function"), and do not know what it feels like for me even necessarily (since you cannot introspect my mind), then what does that tell you about AI when it is unable to understand what hot or cold feels like? Zilch. Not knowing is a perfectly reasonable position.
@skyscraperfan
@skyscraperfan 16 күн бұрын
@@landsgevaer Of course you can't know what other people feel. For example what looks red to me can look blue to you. There is no way to find out that we do not see the same, because we all have the same word for the colour of the sky even if they do not look the same for us. We know that colours are an invention by our brain to make specific wavelength visible. However it is still likely that all humans see colours more or less the same. The problem is not that AI does not understand heat or cold, but that it has no empathy. If I tell the AI that seven months of freezing every year feel very bad, it can't relate to that at all. I think with pain it is the same.
@em.a.httpss
@em.a.httpss 16 күн бұрын
No
@arpitbharti6245
@arpitbharti6245 16 күн бұрын
yes they can
@KarunaKoley
@KarunaKoley 11 күн бұрын
266th to comment.
@marioornot
@marioornot 16 күн бұрын
Not really a, pun unintended, meaningful question
@TheLummen.
@TheLummen. 16 күн бұрын
"Understand" is a human feature !
@landsgevaer
@landsgevaer 16 күн бұрын
But is it a uniquely human feature? If so, demonstrate it, or at least motivate how it even could be, since our brain is not at all special (neither compared to animals nor compared to some next-generation AI).
@TheLummen.
@TheLummen. 16 күн бұрын
With all respect Dave, you don't know what you are talking about! Some points to consider: 1. We should be wary of the persistent effort to reduce the meaning and complexity of biological organisms and biology in general from the "tech cult" who preach that "biology is limited, tech is God"! 2. LLMs are software that can process large amounts of data and have algorithms or neural networks that do pattern analysis in order to produce a result. There cannot be any correlation with consciousness and subjective experience. Humans can derive meaning from a very small amount of data and form understanding. 3. I can understand what you are writing, derive meaning from it, formulate an appropriate answer and reply. Any human that speaks English can understand more or less what is being said. Take your brain and the brain of a mouse. Structurally there is nothing, by looking at it, saying that its brain can't read. But it can't! It doesn't have the know-how. 4. Biological structures are amazing and recent scientific findings, which you can find online, prove that we are still learning more about the complexity of our brain. Technology is cool and I love it. But we need to get off the hype train and look at things objectively and with serious scientific rigor.
@landsgevaer
@landsgevaer 16 күн бұрын
@@TheLummen. Funny how "with all respect" is always followed by the least respectful comments. A fallacious ad hominem in this case. I could counter your points, but I have suddenly lost interest. 🖖
@TheLummen.
@TheLummen. 16 күн бұрын
@@landsgevaer You are right Dave, I came a bit strong there. My apologies, I'm sorry. It would be interesting to read you counter points. But the idea is to find common ground and from there move to the truth whatever it might be. Also If you agree we can take it to another medium. Have a nice one.
@rfowkes1185
@rfowkes1185 16 күн бұрын
CI : Counterfeit Intelligence
@sebastiang6903
@sebastiang6903 14 күн бұрын
Less vocal fry please
@WalnutOW
@WalnutOW 12 сағат бұрын
It’s so aggravating.
@fkeyvan
@fkeyvan 16 күн бұрын
Why do some women try to make their voices sound raspy? Like this woman
@bigutubefan2738
@bigutubefan2738 16 күн бұрын
Nope.
@Daniel-sYouTube
@Daniel-sYouTube 16 күн бұрын
No human being on this planet can tell me what something means to me, and no human being will ever be able to do that. The same holds true for AIs. They can all guess, true. They might be guessing correctly 100% of the time, right. But only I, in my body, with my soul and my mind or whatever you want to call the entirety of myself, only I can give meaning to all things and experiences and relationships.
@landsgevaer
@landsgevaer 16 күн бұрын
Yes. And an AI could be thinking exactly the same about you not being able to understand what it feels like to be it.
@Daniel-sYouTube
@Daniel-sYouTube 14 күн бұрын
@@landsgevaer Yes, exactly. At the end of the day, everyone of us occupies their very own little place in this entire spacetime. As long as there are no overlaps, no one will be me and I will be no one else.
@stephanovdb7141
@stephanovdb7141 16 күн бұрын
An LLM predicting words is like a blind man predicting what the color red would be. A blind man can know everything there is about the color red but won't ever experience it. An LLM can know everything about everything but never truly know the meaning of things.
@kobesbiggestfan3125
@kobesbiggestfan3125 16 күн бұрын
Mary's room example
@landsgevaer
@landsgevaer 16 күн бұрын
You mean, like your brain only receives input from receptors in your eye? If your brain can "experience" red from certain inputs being active or not, then why shouldn't an AI be capable of something similar? To me, there is nothing more to "experiencing qualia" as a brain being in some state (I am a physicalist), so I see no reason why AI might not be doing the same. I am not claiming it is, but I am certainly not claiming it cannot.
@kobesbiggestfan3125
@kobesbiggestfan3125 16 күн бұрын
@@landsgevaer ur not lil bro u put your thing in to chat gpt genius
@KiloOscarZulu
@KiloOscarZulu 16 күн бұрын
Weird how she switches to vocal fry when being serious. When she is smiling and excited and animated at around the 2 minute mark, she loses the vocal fry. Then she goes back to it.
@VoltLover00
@VoltLover00 16 күн бұрын
If ChatGPT gives you goosebumps, you've lost the script. LLM's do not think or reason
@100c0c
@100c0c 16 күн бұрын
Watch the whole video before commenting.
@tankieslayer6927
@tankieslayer6927 16 күн бұрын
It does. Reasoning ability is an emergent phenomenon analogous to phase transitions in statistical physics. It has nothing to do with consciousness.
@Kolinnor
@Kolinnor 16 күн бұрын
It's becoming clearer now that LLMs form some sort of world model. Check some version of GPT that plays chess at 1700+ elo. There's no "it just memorizes games", as it keeps consistency after long games.
@JorgetePanete
@JorgetePanete 16 күн бұрын
LLMs*
@noneexpert
@noneexpert 16 күн бұрын
@@tankieslayer6927 you are coping
@AnteZivkovic
@AnteZivkovic 16 күн бұрын
Nowadays when you see a woman scientist talk, or any of the peoples that DEI would put in a favorable position in job or school application process, one can't help but wonder are they really worth listening to or are they there because of some quota. It's sad and it's a disservice to all the brilliant women or non-white men that make it in the field of science based on their capabilities alone. Ironically, white men that make it despite DEI are now vetted on this extra criterion and it servers as an additional proof of their capabilities and will again be sought more than other groups.
@OBGynKenobi
@OBGynKenobi 16 күн бұрын
These LLMs are not thinking, they are calculating. They don't understand nuance, or the other subtleties of human reasoning and communication. A human thought takes into account short- and long-term memory from all sensory organs and then it uses introspection and retrospection, and emotion filters to come to a resolution. And that's just the tip. LLMs are just crunching numbers and statistics.
@isbestlizard
@isbestlizard 16 күн бұрын
Yes, they do. You're thnking of 'AI' from the 1980's
@metasamsara
@metasamsara 16 күн бұрын
lol LLMs understand the nuances between synonyms much better than your average human. Hell, even dictionaries rarely explain the nuances between synonyms.
@maraboshi
@maraboshi 16 күн бұрын
They don’t have a “soul”, they are not alive and have no concept of life, death and emotions, they are JUST calculators, however powerful or sophisticated they might be.
16 күн бұрын
Thanks for this talk on AI.
@OBGynKenobi
@OBGynKenobi 16 күн бұрын
@@isbestlizard feed it a poem and see if it can deduce second- and third-level meanings. What about sarcasm? Does it know what love is other than the dictionary meaning? And if it does, can it fall in love? Of course not! These things are just working at a superficial level, there is no deep thought or emotional processing. All of which goes into human thinking.
@fromscratch8774
@fromscratch8774 16 күн бұрын
Terribly useless video, unfortunately.
@Julian.u7
@Julian.u7 16 күн бұрын
Why she cares about meaning when GPT4 is stupid as hell?
@timetraveller6643
@timetraveller6643 16 күн бұрын
Answer: NO. Read about "The Chinese Room". The presenter seems significantly uninformed. The word/concept "meaning" is self-referencing and any first-year student could crush the premise.
@landsgevaer
@landsgevaer 16 күн бұрын
The Chinese room precisely proves that an AI doesn't do anything essentially different from a brain! A human brain understanding Chinese IS just a Chinese room: a black box giving appropriate responses. The point of the Chinese room is that there is no way of distinguishing that. Not without looking what goes on inside the room. And neither you nor I can look inside each other's minds, nor in the mind of an AI (if such minds exist).
@timetraveller6643
@timetraveller6643 16 күн бұрын
@@landsgevaer-- You are mistaken.
@landsgevaer
@landsgevaer 16 күн бұрын
@@timetraveller6643 That is a wonderful argument you looked up in your book of instructions and slid underneath the door from the room you are locked up in.
@timetraveller6643
@timetraveller6643 15 күн бұрын
@@landsgevaer-- You didn't read Searle's 1980 paper. I feel no obligation to summarize it.
@landsgevaer
@landsgevaer 15 күн бұрын
@@timetraveller6643 I don't care what Searle wrote. We don't have authorities that prescribe truth in science. He isn't here. You are here. Don't pretend there is only one position or viewpoint on the Chinese room thought experiment. My position that I defend here is that I can apply the idea of the room to your brain just as well as I can apply it to AI: stuff goes in, processing takes place that I cannot oversee, stuff comes out that seems intelligible. But that gives me no certainty about whether there was "understanding" involved in generating the reply. If one accepts it for AI, one must also accept it for a brain. If you cannot demonstrate or motivate how your brain is fundamentally different from an AI, then I don't accept there is a difference. In contrast, given the parallel that a brain and a sufficiently complex NN both similarly consist of simple neurons interacting to result in complex emergent behaviour, my simplest conclusion would be to assume that they are similar. Assign "understanding" to one - however vague that term is - then it must be assigned to the other.
@Resfeber123
@Resfeber123 14 күн бұрын
🧠
@Gringohuevon
@Gringohuevon 16 күн бұрын
It's clear she doesn't have a clue what she's doing... meaning invokes a physical response
@jc4418
@jc4418 16 күн бұрын
No
@djwikkid
@djwikkid 16 күн бұрын
Nope.