We can't let a bunch of hyper-rich guys deploy whatever tech they like into society, keep the profits, and leave society with the consequences. Privatized profits and socialized losses need to stop.
@2manystories2tell43 Жыл бұрын
@T J You hit the nail on the head!
@travisporco Жыл бұрын
By the logic of the free enterprise system, once people can no longer contribute in the fair and free competition of the market, they must rely on charity or perish. All people will in the coming decades be obsolete, unable to compete with rising AI. The existing system must therefore be abolished before it is too late.
@jonaseggen2230 Жыл бұрын
It's called neo or corporate feudalism
@Mr_Sh1tcoin Жыл бұрын
Spoken like a true marxist
@47f0 Жыл бұрын
Yeah, people have been saying that since the 1860s. Cornelius Vanderbilt said, "Hold my beer"
@willboler8302 жыл бұрын
Been working on AI since 2015, and I'm kind of tired of the trend that models are heading right now. We just add more data and more parameters, and at some point, it's just memorization. Humans don't work like that. I used to support the pragmatism of narrow AI, but honestly, I'm with Gary Marcus on this.
@MrAndrew5352 жыл бұрын
"But as long as you enjoyed the video and you enjoy having your say, that's all that counts!."
@MrAndrew5352 жыл бұрын
Also, whatever you have been working on, it has nothing to do with "intelligence", artificial or otherwise. "Intelligence" is an existential proposition, not a technical one, as demonstrated by the fact that you lack the intellectual tools to be able to define it. Therefore, if you cannot define it, then by what stretch of the imagination could you possibly be working on it?
@0MVR_02 жыл бұрын
@@MrAndrew535 This is correct yet also obtuse. A definition demands extrapolation, as in de-finitum. Intelligence, as you said, is inherently introspective. You are asking another to accomplish an impossible task.
@numbersix89192 жыл бұрын
Right on. You certainly got an odd and objectionable response, didn't you? That's what happens when you try to *leave a cult*. Anyway, if your interest is piqued, go back to school and if you are brave, get into REAL cognitive science. Developmental psychology! Psycholinguistics! There's a world out there to discover!!!! There may be modules in the human brain that do stupid "narrow AI" calculations...but nobody knows yet. The kicker is that neurons aren't simple nodes, they are quite complex, maybe as complex as we used to think the entire brain is...but nobody knows yet. Just remember, cognition is a feature of living organisms. You know, embodied. I think the octopus with its distributed cognition is the best model. Its arms are to some extent entities unto themselves. Our minds are similarly compartmentalized, I just think the octopus would be easier to study in some simple and straightforward ways. You already know how smart they are. And I can't think of a better helper robot than an octopoid. Best of luck to you young Will.
@Bisquick2 жыл бұрын
Exactly, as discussed and succinctly put by Sartre to pose the necessity of existential consideration, existence _precedes_ essence. Without any critical consideration of meaning, quite simply: garbage in, garbage out. The only "danger", as also discussed, lies in believing it is anything else, that it is "objective" or actually understanding anything ie "intelligent". But of course the _only_ political question is: cui bono? Who benefits? So we can ask "danger _for whom_ ?", which reveals that for some this Mechanical Turk can produce an artifice of an organizing principle of truth/understanding/value ie "god" that "just so happens" to justify the already existing power structure, regardless of the intentionality towards this, as we can see with these psycho silicon valley billionaires and "effective altruists", many of which are unironically labelling themselves as "secular Calvinists". The divine right of "AI" is then but a technological coat of paint over the "divine right of kings/the market/the entrepreneur". _“The ideas of the ruling class are in every epoch the ruling ideas, i.e. the class which is the ruling material force of society, is at the same time its ruling intellectual force. The class which has the means of material production at its disposal, has control at the same time over the means of mental production, so that thereby, generally speaking, the ideas of those who lack the means of mental production are subject to it. The ruling ideas are nothing more than the ideal expression of the dominant material relationships, the dominant material relationships grasped as ideas.”_ - some guy
@TommyLikeTom2 жыл бұрын
"they can draw pretty pictures but they don't have any grasp of human language" for some reason I felt personally attacked by that
@kot6672 жыл бұрын
Maybe because it's horse shit LOL
@artbytravissmith2 жыл бұрын
It can and it can't. Midjourney slays at generating simple albeit impressive images - 'zombie Spiderman', 'Thor in Pixar style', 'human settlers on Mars in the style of Norman Rockwell' - but once you begin to describe complicated illustrations with multiple characters in different emotional states, with specific likenesses to specific people, wearing specific coloured costumes (and you want consistency panel to panel), performing specific actions with specific camera/viewpoint angles, and with characters in specific parts of the composition, it struggles. You can generate those emotions and 'actors' separately, but you still need Photoshop to combine them into a complete image. While I guess I should assume DALL-E 2/Stable Diffusion/Midjourney will get there, after watching this presentation, and after 20k images generated in Midjourney and noticing its sometimes frustrating limitations, I do begin to wonder if AI art models' lack of language understanding will mean they'll be stuck at 75%. My thought is, the first company to combine DALL-E 2/Midjourney/Stable-style prompting with Nvidia Canvas-like editability/interactivity will make a much more powerful and efficient tool just by embracing the human brain.
@Bisquick2 жыл бұрын
@@artbytravissmith Exactly, as discussed and succinctly put by Sartre to pose the necessity of existential consideration, existence _precedes_ essence. Without any critical consideration of meaning, quite simply: garbage in, garbage out. The only "danger", as also discussed, lies in believing it is anything else, that it is "objective" or actually understanding anything ie "intelligent". But of course the _only_ political question is: cui bono? Who benefits? So we can ask "danger _for whom_ ?", which reveals that for some this Mechanical Turk can produce an artifice of an organizing principle of truth/understanding/value ie "god" that "just so happens" to justify the already existing power structure, regardless of the intentionality towards this, as we can see with these psycho silicon valley billionaires and "effective altruists", many of which are unironically labelling themselves as "secular Calvinists". The divine right of "AI" is then but a technological coat of paint over the "divine right of kings/the market/the entrepreneur". _“The ideas of the ruling class are in every epoch the ruling ideas, i.e. the class which is the ruling material force of society, is at the same time its ruling intellectual force. The class which has the means of material production at its disposal, has control at the same time over the means of mental production, so that thereby, generally speaking, the ideas of those who lack the means of mental production are subject to it. The ruling ideas are nothing more than the ideal expression of the dominant material relationships, the dominant material relationships grasped as ideas.”_ - some guy
@huveja9799 Жыл бұрын
@@Bisquick Well, there are different layers of meaning; the most superficial is that of the statistical correlations between the symbols, which does not take away from the fact that the tool is surprisingly useful at that superficial level. As far as I know, there are people who establish their own power structure claiming that there is no Truth, and that it is not possible to define objectivity based on that successive approximation to that Truth (understanding). In that case, the only political question is who benefits from these power structures based on sophistry and language games. I suppose it is mediocre people who are incapable of creating something new, and are condemned, like a Large Language Model (LLM), to generate a simulacrum of knowledge at that superficial level of meaning - which does not mean that they cannot do significant damage in society, especially by corrupting younger and therefore vulnerable minds.
@thewingedringer10 ай бұрын
@MusingsFromTheJohn00 The ChatGPT they released in 2022 was the same as it was back in 2020? lmao, sure
@AzorAhai-zq9sw2 жыл бұрын
Impressive that Noam remains this sharp at 94.
@numbersix89192 жыл бұрын
He has very good reserve capacity.
@ivanleon6164 Жыл бұрын
the real white mage. amazing. huge respect for him.
@maloxi1472 Жыл бұрын
@@numbersix8919 huh... what ?
@numbersix8919 Жыл бұрын
@@maloxi1472 I mean his brain still functions very well even with great age.
@dewok2706 Жыл бұрын
@@maloxi1472 He meant that he's the great white hope
@Hunter-uz9jw2 жыл бұрын
bro was 21 years old in 1949 lol. Amazing how sharp Noam still is.
@numbersix89192 жыл бұрын
Just imagine how sharp he was in 1957 when he single-handedly saved experimental psychology.
@bluebay02 жыл бұрын
@@numbersix8919 Do elaborate please.
@numbersix89192 жыл бұрын
@@bluebay0 Chomsky's response to B.F. Skinner's book "Verbal Behavior" utterly destroyed any possible behaviorist theory of language. Behaviorism had dominated experimental psychology so thoroughly to that point that "mind" had become a dirty four-letter word in psychology. Not only linguistics, but psychology and philosophy, and new fields such as AI and cognitive science were free to take up the study of mental processes. You can read it today easily enough; the title is "A Review of B. F. Skinner's Verbal Behavior" by Noam Chomsky.
@bluebay02 жыл бұрын
@@numbersix8919 Thank you. I wondered if it was his proving Skinner wrong about behavior and language acquisition. Thank you again.
@numbersix89192 жыл бұрын
@@bluebay0 Yup that was it!
@amonra5436Ай бұрын
We were waiting for AI to explode; now we are waiting for the AI balloon to explode.
@riccardo93832 жыл бұрын
Noam Chomsky brings a breeze of fresh common sense to the AI discussion, with his immense knowledge on Linguistics. Thank you for this interview.
@MrAndrew5352 жыл бұрын
Define "common sense"!
@blackenedblue54012 жыл бұрын
Also just his immense knowledge of computing - he definitely understands it better than most speaking at Web Summit.
@restonthewind2 жыл бұрын
A language model could have generated this comment.
@grant47352 жыл бұрын
@@MrAndrew535 ask your computer to do that....
@kot6672 жыл бұрын
@@grant4735 Me: Define "common sense" ChatGPT: Common sense is a term used to describe a type of practical knowledge and understanding of the world that is shared by most people. It is not based on specialized training or education, but rather on the general experiences and observations that people have in their everyday lives. Common sense allows people to make judgments and decisions about everyday situations, and it often helps them to solve problems and navigate complex social situations. Some people are said to have a good sense of common sense, meaning that they are able to apply their practical knowledge and understanding in a way that is useful and effective.
@dan_taninecz_geopol Жыл бұрын
The misunderstanding here is that deep nets are being trained to be conscious, which isn't accurate. They're being trained to mimic human judgement and/or recognize patterns or breaks in patterns. The machine isn't trained to be independently generative of novel information. We shouldn't be surprised that it can't do that yet. More important than the strong AI debate, which is still far off, is the social impacts these models will have on the labor market *today*.
@GuaranteedEtern Жыл бұрын
It's anthropomorphizing by observers who don't understand how the technology works. It's very annoying to hear ML experts say things like "maybe it is sentient..."
@dan_taninecz_geopol Жыл бұрын
@@GuaranteedEtern "Experts", and agreed.
@GuaranteedEtern Жыл бұрын
@@dan_taninecz_geopol One of the big ones - either Microsoft or Google - literally said this exact thing a few days ago.
@TheSnowLeopard Жыл бұрын
Real AI won't exist until these 'deep nets' are embodied in the world.
@brianmi40 Жыл бұрын
"The machine isn't trained to be independently generative of novel information." And yet it has unless you are discounting the need for a Prompt for it to do anything at all other than just sit idly. GPT-4 was able to propose a scientific experiment that has never been performed. It can create rhymes and poetry never written. This isn't simply "re-arranging" the works of others. The simple fact is that the ability to cross compile and reference roughly 1/10th of all human "knowledge" allows a LLM to assemble it in novel ways that humans have never considered or at least not yet done and under the guidance of a breakthrough prompt can deliver solutions we have never imagined. It's a similar activity to researchers in two fields running across each others data and having a huge AHA moment from a realization of how to combine the findings into a new, previously unconsidered solution to some problem. GPT-4 is able to surpass more than 50% of the tests designed to judge sentience, including the Theory of Mind test, so we are much further along the path to sentience than most are aware.
@rajmudumbai7434 Жыл бұрын
Real AI that is sensitive to human problems doesn't scare me. But blind faith of many in flawed AI and going too far with it scares me as it could lead humanity astray into a point of no return.
@nathanielguggenheim5522 Жыл бұрын
Oligarchs using flawed ai against mankind scares me the most.
@oldtools Жыл бұрын
@@nathanielguggenheim5522 is it really so bad if all the fat-cats really want is to keep their people chubby? The price of peace is the low price of bread.
@cathalsurfs Жыл бұрын
There is no such thing as "real" AI. Such a concept is an oxymoron and utterly contrived (by humans in their limited capacity).
@oldtools Жыл бұрын
@@cathalsurfs general AI is what most would consider real.
@KassJuanebe Жыл бұрын
@@oldtools Intelligence can't be artificial. Intellect maybe. Consciousness and intelligence, NO!
@robertjones9598 Жыл бұрын
Really cool. A much needed dose of scepticism.
@octavioavila65482 жыл бұрын
Chomsky’s argument is that AI will not help us understand the world better but it will help us develop useful tools that make our life easier and more efficient. Not good for science directly, but still good for quality of life improvements and it can help science indirectly by producing tools that help us do science.
@totonow69552 жыл бұрын
Unless it just drops grandpa.
@0MVR_02 жыл бұрын
@totonow6955 at least it did so with trillions of parameters, so you know legal can argue that grandpa deserved and needed a premature 'termination'.
@totonow69552 жыл бұрын
@@0MVR_0 vampires
@moobrien17472 жыл бұрын
Oh wow Howard Hughes Really IS Alive....,.. q
@sixmillionsilencedaccounts35172 жыл бұрын
"it will help us develop useful tools that make our life easier and more efficient" Which doesn't necessarily mean it's a good thing.
@aullvrch2 жыл бұрын
@27:31 Gary mentions something he calls "neuro-symbolic AI" as the first step towards combating the problems of machine-learning AI. For those who are interested, a more searchable term is probabilistic programming; some examples of languages are ProbLog, Church, Stan, and Hakaru (a minimal sketch of the idea, in plain Python, follows at the end of this thread). Step two, he says, is to have a large base of machine-interpretable knowledge. All programming is of course machine-interpreted, but the denotational semantics found in functional languages are better at formalizing the abstract knowledge that he refers to.
@LeoH.C.2 жыл бұрын
Just fyi: the approach Gary mentions is "neuro-symbolic AI", not "nero symbolic".
@aullvrch2 жыл бұрын
@@LeoH.C. sorry, just a typo..
@LeoH.C.2 жыл бұрын
@@aullvrch I was just clarifying for other folks that do not know about it :D
@aullvrch2 жыл бұрын
@@LeoH.C. thanks!
@0MVR_02 жыл бұрын
'neuro' has the connotation that any animal with a nervous system can operate or symbolize the platform
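A hedged, minimal sketch of the kind of thing probabilistic programming expresses, written in plain Python with rejection sampling rather than ProbLog/Church/Stan/Hakaru syntax. The burglary/earthquake/alarm model and all the numbers are a standard textbook toy, not anything taken from the talk or the comments above:

```python
import random

def model():
    # Generative story: burglary and earthquake are rare, independent causes;
    # either can set off the alarm with its own trigger probability.
    burglary = random.random() < 0.001
    earthquake = random.random() < 0.002
    alarm = (burglary and random.random() < 0.94) or (earthquake and random.random() < 0.29)
    return burglary, alarm

def p_burglary_given_alarm(n=200_000):
    # Rejection sampling: run the model many times, keep only runs where the
    # alarm rang, and count how often a burglary was behind it.
    hits = total = 0
    for _ in range(n):
        burglary, alarm = model()
        if alarm:
            total += 1
            hits += burglary
    return hits / total if total else float("nan")

print(round(p_burglary_given_alarm(), 2))  # roughly 0.6 with these toy numbers
```

Languages like ProbLog let you write just the model and the query; the sampling or exact inference is done for you.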
@mirellajaber7704 Жыл бұрын
I am reading all these comments and I have to say that, once more, what strikes the eye is that people will always believe what they want to believe, no matter how much conferencing, summiting, etc., no matter who says what. People come with ready-made ideas, not with a curious mind seeking a higher understanding - and this holds true no matter the subject under discussion, but even more so when it comes to politics.
@no_categories10 ай бұрын
I've changed my mind many times in my life. What helps me to do it is information. I know I'm not alone in this.
@Grassland-ix7mu7 ай бұрын
That is an oversimplification. Many people want to know the truth, and so will definitely change their mind when they learn that they were wrong - whatever the topic.
@Epicurean999 Жыл бұрын
I wish really good health for Mr. Noam Chomsky Sir🙏❤️🙏
@kennethkeen1234 Жыл бұрын
As a researcher into AI in Japan since 1990 I wish to add my personal trivial contribution. Firstly it is not simply the 'words' that are relevant, but the intonation. Secondly it matters 'where' the expressions are made. "I couldn't care less" in standard English is repeated in the land of wooden huts, with "I could care less", with the same intention and "meaning", thus giving the hut dwellers an advantage of being able to speak ambiguously and always be right. That is fine for those hut people who are not caring one way or the other if they are right or wrong, because in the final analysis, hut people produce guns from under their jackets and force a different result, regardless of what is said. A wall built around USA retaining all the nonsense and hype in one area would be the best solution for making true progress in that part of the world not yet perverted by 'American exceptionalism'. 2023 02 08 08:42
@tomtsu5923 Жыл бұрын
Ur a hut person. I’ll snow plow ur azz
@StoutProper Жыл бұрын
Wow. Love this comment. Thank you.
@rmac32177 ай бұрын
"I couldn't care less" means you care the least you possibly could; "I could care less" means you could possibly care less, which doesn't make sense as a saying... Not rocket science.
@witHonor12 жыл бұрын
My problem with AI is that humans can't even pass a Turing test anymore. Technology has eliminated the minuscule amount of critical thinking humans used to be capable of; now they're just input/output machines.
@witHonor12 жыл бұрын
@@MrAndrew535 Which program are you? Typical bot behavior to spam the comment section on a YouTube video.
@ChannelMath2 жыл бұрын
@@witHonor1 what would be the point of this "Andrew" bot? just to claim that he already said what you said? Doesn't make sense. Also, if you've met humans, "spamming the comments section" is not atypical behavior when they are passionate. (I'm doing it now -- see you in the next comment Andrew!)
@witHonor12 жыл бұрын
@@ChannelMath Beep boop, beep boop. Not explaining why bots are obvious because... Please "see" Andrew anywhere when you don't have eyes. Fun. Idiot. Green eggs and ham. Manifesto. Beat the prediction, trolls.
@Moochie0072 жыл бұрын
The axiom GIGO still applies.
@miraculixxs2 жыл бұрын
@Lind Morn if you think capitalism has eliminated critical thinking you haven't seen socialism and dictatorship.
@yuko3258 Жыл бұрын
Let's face it, the tech world grew too fast for its own good and is now operating mostly on hype.
@crystalmystic11 Жыл бұрын
So true.
@claudiafahey1353 Жыл бұрын
Agreed
@jonatan01i Жыл бұрын
Nope, GPT-4 is very usable and is a magic tool for humanity to use.
@debbY100 Жыл бұрын
For ITS own good, or humanity’s own good?
@Happyduderawr Жыл бұрын
@@debbY100 definitely more for its own good given the amount of wealth being funnelled into the industry
@-gbogbo- Жыл бұрын
27:05 "Gettiing close [to solve the problem] does not really seem to solve the problem". That's so true ! Thanks a lot.
@user-sy3dg1vk4x2 жыл бұрын
Long Live Noam Chomsky 🙏🙏
@kot6672 жыл бұрын
Hopefully Noam will gain some common sense in his long years lol.
@lppoqql2 жыл бұрын
That might happen when someone puts together a system that is trained on all the content and speech by Chomsky.
@numbersix89192 жыл бұрын
@@lppoqql You don't really believe that, do you?
@SvalbardSleeperDistrict Жыл бұрын
@@kot667 Do you at least realise how much of a self-exposition you are doing by vomiting a cretinous line like that? Absolute clowns littering comments spaces with brain vomit 🤡
@kot667 Жыл бұрын
@@SvalbardSleeperDistrict Someone is riding the D extra hard lol, I got nothing against Chomsky but his analysis of current technology is simply abysmal, other than that, don't have a gripe with him.
@GuaranteedEtern Жыл бұрын
The current AI/ML techniques are not close to AGI. They are approximation engines made possible by cheap and powerful computing and storage. In many cases they produce useful results because their guesses are accurate (i.e. they produce what we expect). As they scale (more parameters, better tuning) they will better approximate what we expect, but we will reach the point of diminishing returns until there is a breakthrough in either computer architecture or approach that allows for something more than mathematically generated results. I agree there is a chance these technologies will "hit the wall" faster than expected, because we reach the point where the results just don't get any better no matter how many more CPUs we throw at them, or because applying them to other problems does not yield the benefits that were hoped for, given the high bar. Marcus is 100% correct that these are smart-sounding bots - and the bigger risk is that more decision-making and critical thinking will get outsourced to them.
@cantatanoir6850 Жыл бұрын
Could you please give any guidance on the currently available literature on the issue?
@GuaranteedEtern Жыл бұрын
@@cantatanoir6850 On which point?
@cantatanoir6850 Жыл бұрын
@@GuaranteedEtern about diminishing returns of this particular technology and hitting the wall.
@GuaranteedEtern Жыл бұрын
@@cantatanoir6850 I'm not sure there is any... that's my perspective. My argument is that there are likely going to be areas where current ML and AI techniques do not perform as well as required regardless of how many parameters or processors are used. ChatGPT is impressive because it exceeded everyone's expectations re: NLP.
@reallyWyrd Жыл бұрын
Noam pointing out that AI training of a neural net largely amounts to "brute force" is interesting.
@tonygumbrell22 Жыл бұрын
We want AI to function like a sentient being, but we want it to do our bidding e.g. "Open the pod bay door Hal."
@petergraphix6740 Жыл бұрын
This is called the 'AI alignment problem', and at this point not only is there no solution, but every time we reassess the problem it becomes more insurmountable. It is one that I personally believe is not solvable either. Humans generally fall under the same alignment issues (we're mortal, for example), and at least in theory AI would be immortal if we're able to save its state and copy it to a new machine (or it's able to do that itself). If humans could copy ourselves into a new body, we would; why would an AI not do that once we formulate artificial willpower and a desire for continued existence?
@tomtsu5923 Жыл бұрын
Don’t be negative
@tonygumbrell22 Жыл бұрын
@@tomtsu5923 Let's just say I'm skeptical.
@daraorourke5798 Жыл бұрын
Sorry Dave...
@512Squared Жыл бұрын
As a linguist, one of the first things I did with ChatGPT was ask it to give examples of things like predicates, thinking that a language transformer would have figured these things out, but it failed; even after I corrected it, it still kept going off the reservation with its examples. I tested it too on tasks where you give it lists of words and ask it to form sentences from those words, and it kept wandering off from its task, and when you ask it whether it completed the task correctly, it says yes, but then when you point out the errors, it admits the errors but then can't correct itself either. I agree that the AI doesn't have models of the world or language the way humans do. It has a series of connections that it has created to match predictive output based on fixed inputs, like the model that wrongly associated cancer with rulers on scan images because that's how most cancer diagnostic images differ from just normal scan images. There is a long way to go still. AI right now can mimic smart in some aspects (knowledge and textual analysis), but not in other aspects (processing experience, prioritizing). It does resemble a kind of Hive Mind, and that is exciting.
@JohnDlugosz Жыл бұрын
GPT-4 is much better at understanding the structure of a word (made of letters, has rhymes, has syllables), but it still struggles at some tasks, where it knows the rules but can't reliably follow those rules, yet can immediately tell what it did wrong. It just fails at harder problems. Re predicates: perhaps the language model should have some reinforcement learning early on about formal grammar, just like having an English class for 6th graders. Make sure it codifies internally all the language structure we want it to, and eliminates incorrect associations, in contrast to just letting it figure it out by example with no formal instruction. Do that at an early stage in training, e.g. 6th grade, before high school and college reading.
@orlandofurioso7329 Жыл бұрын
It mimics a Hive Mind because it is connected to the Internet; what is impressive is how much information is hidden there behind all of the junk.
@ghipsandrew Жыл бұрын
What version of the model did you talk with?
@512Squared Жыл бұрын
@@ghipsandrew 3.5. Haven't tested in on the new version 4.0
@subnow4862 Жыл бұрын
@@orlandofurioso7329 GPT-3.5 isn't connected to the internet
@JC.722 жыл бұрын
I can't help laughing every time our Gandalf Chomsky says that the most current, cutting-edge AI system is just a snowplow. Like, hey, it's nice and helpful and all, but it's just like a snowplow lol
@kaimarmalade96602 жыл бұрын
Lol Gandalf Chomsky.
@doublesushi5990 Жыл бұрын
100%, I chuckled hard today seeing him speak about shxtGPT.
@govindagovindaji4662 Жыл бұрын
Not quite what he was expressing. He was comparing how snowplows do the 'mechanical' work of removing snow due to a precisely 'engineered' design, yet they tell us nothing about snow nor why it should be removed in the first place (cognition/science).
@lolitaras22 Жыл бұрын
When he was asked in 1997, if he feels intimidated by Deep Blue's (chess playing system) win over the world champion Garry Kasparov (first A.I. win against a chess Grand Master) he replied: "as much as I'm intimidated by the fact that a forklift can lift heavier loads than me".
@lolitaras22 Жыл бұрын
@@govindagovindaji4662 I agree
@georgeh89372 жыл бұрын
My gripe is the use of terminology in the field that is just right for marketing purposes. Years ago I heard a public discussion and somebody asked if artificial intelligence could be used for X. If you say this AI program is sorting through data to filter a photograph to tease out a clear image, then it loses the magic and becomes pragmatic.
@robbie3877 Жыл бұрын
Isn't that precisely how human cognition works though? Like a filter, through the lens of memory.
@RobertDrane Жыл бұрын
I'm expecting the vast majority of harm that's going to come from adopting these technologies will be directly due to the marketing.
@littlestbroccoli Жыл бұрын
They're more concerned with notoriety and having articles written about their tech (because it draws investors, maybe?) than they are about the real science. This is definitely a problem and you can feel it in the output. Real science is exciting, it feels like exploring. Today's tech climate sort of feels like being stuck inside and told what's good for you when all you want to do is go out and ride your bike.
@gregw322 Жыл бұрын
Incredibly stupid, useless comment. We’re making more breakthroughs than at any time in history. There will be more change in the next few decades than in all of recorded human history.
@Achrononmaster Жыл бұрын
AI does help science, but indirectly. Every failure of AI to demonstrate something like sentient comprehension of deep abstractions is telling us something about what the human mind is *_not._* That sort of negative finding is incredibly useful in science, totally disappointing in engineering or corporate tech euphoria. Science is way more interesting than engineering. Negative results don't win Nobel Prizes, but they drive most of science. Every day I wake up wanting to refute an hypothesis.
@joantrujillo7551 Жыл бұрын
Great point. Sometimes I suspect that findings that contradict aspects of our current model are rejected simply because they challenge our existing ways of thinking.
@GuaranteedEtern Жыл бұрын
True - and arguing these AI machines are not sentient doesn't mean there are no useful applications for them.
@WilhelmDrake Жыл бұрын
These are things we already know.
@AnthonyGibbsRTA Жыл бұрын
Just imagine having Noam as your granddad, how amazing would that be
@waltdill927 Жыл бұрын
The obstacle to a clear discussion, as I see it: first, we are thinking creatures, or language users, "inhabited" by our own linguistic bias, such that the use of a symbol manages only to point more or less successfully to other symbols. This is human language, the index of a communicating life, but not at all what we manage to codify and "program" into useful, pragmatic machines. Computing is manipulation of these symbolic sets, not expressing a thought. If "zero" only expresses an important mathematical concept, its absence changes nothing at all in the affairs of arithmetical computation. "We" do not organize a binary base well without the idea of "zero". In the same way, a line drawn in the sand divides "reality" into two parts, but it has nothing at all to do with the concept of a "ratio", The whole business of defining what thinking actually is comprises a body of philosophical insight that has become, in fact, only more problematic with the history of philosophy itself; and contemporary philosophers imagine more that they are producing literary documents, while many writers see themselves as exploring issues of a particular philosophical nature. Second, more ominously, in spite of those who would have science, and its progress, adhere to an "ethics" as much as an idea or representation of end use, of teleology -- this ain't ever going to happen. Once the creature learns to use the rock for something practical, cracking walnuts, say, the idea, the utility, of bashing in convenient skulls soon follows. At any event, the notion that our logic machines are on the verge of much that is beyond the dreams, or nightmares, of humanity is oddly quaint -- kind of like Robbie the Robot with a mechanical soul, and not an organic brain.
@smartjackasswisdom14672 жыл бұрын
This conversation made me realize one of the things that made Westworld's first season so enjoyable for me. It was believable: you need to understand the human brain in order to generate an AI capable of understanding the world. Otherwise you're just engineering a very precise gadget powered by algorithms and data that does not understand any of the context that data comes from. You need AI capable of understanding data the same way the actual human brain does.
@kot6672 жыл бұрын
Why must we understand the human brain to make AI? The architecture that we currently have will probably be able to take us to superintelligence.
@kot6672 жыл бұрын
The current architecture bears similarities to the human brain but is very different.
@evennot2 жыл бұрын
@@kot667 yes. For a start, the hardware in brains and computers is quite different. No massive parallelism, clocking, etc. So mimicking the brain is not the best approach. However, researching AI can help in a roundabout way to understand human cognition and more. Details: For instance, I did some experiments with Stable Diffusion and discovered a lot of very interesting things. First of all, it's akin to "The Treachery of Images" by Magritte (it displays an image of the pipe, not the pipe). Stable Diffusion produces an image of the painting, not the painting - a stochastic visual representation of a given image description within the domain of internet images used for learning. If you use a style of speed-art (realistic, very fast drawn paintings), like Craig Mullins, you can have interesting results. The art style of Craig Mullins' sketches omits everything that can be easily imagined by the viewer to emphasize main points of interest or composition. For an artist there's a question: "how do you effectively omit the unimportant, but present enough believability?" Like "how do you put several strokes of the brush here and there to portray a lake in the distance, but make the viewer understand that there's a lake there?" If you look at a couple of Craig's sketches, it's hard to get the gist of it. But if you have a thousand believable sketches, you have a better chance to imagine how his style works. I.e. you look at an image of the painting to understand how it is. Like you look at an image of a pipe to understand what a pipe is.
@kot6672 жыл бұрын
@@evennot I think the main takeaway is that the only part of the human brain we need to copy to make AI function is the neurons - that's it. Everything else about the human brain doesn't matter; all the AI needs is neurons. To be honest, that's all our brain needs too. People are overcomplicating it: you do not need to understand the inner workings and everything that goes on in the brain to make AI, just replicate the neurons and you will be fine. LOL
@maloxi1472 Жыл бұрын
@@kot667 Wildly inaccurate. Even adopting your flawed perspective for a moment, it's obvious that ANN are way too far from biological neurons right now
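For concreteness, a hedged, minimal sketch of what an "artificial neuron" in these networks actually is - a weighted sum passed through a nonlinearity. The weights and inputs below are invented purely for illustration; nothing here comes from the talk:

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, squashed by a sigmoid activation.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Two made-up input features and two made-up weights: the output is a single
# scalar "activation", which is all an ANN unit ever produces - a far simpler
# object than a biological neuron.
print(artificial_neuron([0.5, 0.2], [1.5, -2.0], 0.1))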
@pluramonrecordings3438 Жыл бұрын
The curious thing here is that Gary Marcus, who is debunking along with Noam Chomsky, says repeatedly that the system "can't understand" one thing or another: and that's where the debunking needs to begin, with the understanding that AI "can't understand" anything! because it has no power of understanding, which belongs exclusively to rational human intelligence: well, okay, other animals can understand, though not at the level that human intelligence can, but in any case the subject that understands in a real, not metaphorical sense, however simple or complex the information or situation it understands, is always a biological being. When people forget this basic distinction and begin to imagine that AI is performing human intellectual operations, and not acts of artificial rationality, based on complex programming which uses shall we say associational triggers to accomplish the sleight of pseudo-mind that appears to be intelligence, that's where human understanding of what's happening in AI begins to malfunction and mysticism starts to take over.. Authentic intelligence is flexible and organic; AI is rigid, however much seeming "mental" flexibility is built in by sophisticated programming, and it is one hundred percent mechanical - once again, in spite of the sophistication of its informatic construction.
@_crispins Жыл бұрын
25:10 I learned it from Noam and he learned it from PLATO 😂 outstanding!
@pomomxm246 Жыл бұрын
Crazy that both of Gary's predictions came true so quickly, as someone was led to suicide by an amorous chatbot just this past month.
@mattwesney6 ай бұрын
Natural selection
@jamieshelley60792 жыл бұрын
As an AI Developer, Noam Chomsky continues to be an inspiration on making better systems , away from derp lernin.
@MisterDivineAdVenture2 жыл бұрын
I found most of his texts - and his politics as well, from the McLuhan days - to be academic opinionation; I think that's just a class of publication. Which means insipid and uncompelling, but you have to listen to him because he's the only one saying it.
@jamieshelley6079 Жыл бұрын
@@mmsk2010 Did the wheel displace workers? How about the steam engine? No: it created more opportunity and automated the mundane tasks of the time. AI is a tool to be used with humans and to enhance them.
@gaulishrealist Жыл бұрын
Noam Chomsky is an AI developer? Americans still need to be taught by foreigners how to speak English.
@jamieshelley6079 Жыл бұрын
@@gaulishrealist ...What
@gaulishrealist Жыл бұрын
@@jamieshelley6079 "As an AI Developer, Noam Chomsky continues"
@romshes77 Жыл бұрын
When all of us are as old as Noam Chomsky AI will interview itself.
@antoniobento2105 Жыл бұрын
Just remember that it is hard to be unbiased when you've spent your entire life with a certain idea on your mind.
@ItCanAlwaysGetWorse Жыл бұрын
Sadly, very true. Yet I have heard scientists claim that they can derive as much or more joy from learning where they have been wrong, than when they seemed to be right.
@antoniobento2105 Жыл бұрын
@@ItCanAlwaysGetWorse I agree, and that's how a real scientist should be. The older scientist seemed to be a very good man of science, but the one sitting live didn't seem to be very bright at all. But maybe it was just me.
@ivanleon6164 Жыл бұрын
@@antoniobento2105 Both are very intelligent, but one of them is Noam Chomsky - it's not fair to compare anyone with him.
@antoniobento2105 Жыл бұрын
@@ivanleon6164 The younger one didn't seem to be very Intelligent/knowledgeable on the subject. The older one seems to be wise at least.
@alpha0xide9 Жыл бұрын
no one is unbiased
@BuGGyBoBerl Жыл бұрын
18:34? what did noam say there? or what does he mean?
@doreenmusson4891 Жыл бұрын
Noam you're a shining leading star of the world.
@havefunbesafe Жыл бұрын
What does Noam mean when he says AI is too strong? Please enlighten me. Thanks. 18:30
@Spamcloud Жыл бұрын
Video game developers have been working with AI for over fifty years, and they still haven't made AI in any game that can do more than read button presses or remember very basic patterns. Children can break modern games within a few hours.
@goodingmusic Жыл бұрын
Love this comment :)
@fitoy2k Жыл бұрын
lol
@Always.Smarter Жыл бұрын
nothing in this comment is true.
@dewok2706 Жыл бұрын
everything in this comment is true.
@s3tione Жыл бұрын
I feel I should both defend and critique what's said here: yes, these models and frameworks should not be seen as the end of the road in AI development, but at the same time, we shouldn't assume that artificial intelligence will or should behave like human intelligence any more than airplanes fly like birds. Sometimes it's easier to engineer something that doesn't copy what already exists in nature, even if that means we learn less about ourselves in the process.
@MrAndrew5352 жыл бұрын
Whenever anyone uses the term "Intelligence" what, precisely, are they describing? What do they use as a model and what do they use as a model to illustrate the absence of intelligence? This criticism is equally valid with regard to Mind and Consciousness. The fact that academia is unable to frame the question in this manner is why they have, to this day, been unsuccessful in solving the "Hard Problem of Consciousness", unlike myself, who solved the problem well over a decade ago.
@megakeenbeen2 жыл бұрын
I guess it's related to passing the Turing test.
@0MVR_02 жыл бұрын
The meaning is in the composition, 'in tel lect'; inward distant words as exemplary opposed to dialect; the bifurcation of lexis. Noam's utility of a telescope is with great relevance. Namely an instrument of ocular (sensational) tactility.
@numbersix89192 жыл бұрын
Hey let's hear it. I guess all humans have been waiting for all of human existence to hear it.
@Paul_Oz2 жыл бұрын
That's what pissed me off about this conversation. These linguists are tossing around words like intelligence, understanding and common sense and failing to actually define them. It allows everyone to talk past everyone else because everyone is holding on to their own private key of the definitions they are using.
@0MVR_02 жыл бұрын
@PaulOzag I doubt that, people seem to be operating on mutual understanding both in the video conversation and in the chat. Perhaps you have difficulty identifying when relevant comments are being made to signify comprehension.
@5Gazto Жыл бұрын
11:25 - The point of ChatGPT is to get help packaging language: finding hard-to-remember words (for tip-of-the-tongue moments) by describing the word or giving examples, as opposed to the other way around - writing the word and expecting the definition, examples, collocations, or any combination and permutation in return (which is what dictionaries help with); and making foreign-language study easier, for example by asking ChatGPT to generate simpler language, or answers that an A2- or B1-level student of a foreign language can understand. It can be used to check creatively written code in C or Python or any other programming language, it can help you organize study materials, it can help you find information about complex scientific phenomena in a summarized way, etc.
@johndunn5272 Жыл бұрын
AI may be simply engineering until human cognition and consciousness are understood. In principle, if an AI could model the brain to produce cognition and consciousness, then at that point the artificial intelligence is no longer engineering but some aspect of nature and reality.
@riggmeister Жыл бұрын
Why isn't it currently part of nature and reality?
@johndunn5272 Жыл бұрын
@@riggmeister My point is focused on consciousness... which artificial intelligence currently lacks.
@jamescarter8311 Жыл бұрын
You cannot produce consciousness no matter how complex your machine. Consciousness creates the universe not the other way around.
@riggmeister Жыл бұрын
@@jamescarter8311 based on which rules of physics?
@johnboy14 Жыл бұрын
I remember Feynman comparing man-made flight to birds and pointing out that they achieve the same outcome, but those machines don't fly like birds. I think the same thing will probably happen to AI, and true AI will look nothing like what we ever imagined.
@tigoes Жыл бұрын
Language models have not been developed or marketed for language-related research, but that doesn't mean they bring nothing to the field. Just because the potential is not immediately obvious to someone doesn't mean it's not there.
@1995yuda Жыл бұрын
Totally agree.
@calmhorizons Жыл бұрын
Nice to hear a sane accounting of the current state of AI - too much breathless cheerleading going on at the moment (feels like the new bitcoin).
@azhuransmx1265 ай бұрын
Language, and how the machine chooses one word and not another, is the least of what AI deals with. What matters is what is behind it: the mathematical functions operating behind it, articulating the actions of the neural network. The main one of these functions, according to Geoffrey Hinton and Ilya Sutskever, is the cost function: the network seeks the greatest gain at the lowest cost. And this is proving to work for all modes of information - language, audio, video, movements, touch, recognizing smells and tastes. This has long since surpassed language, it has already passed through that station 🚉, and these people seem not to realize that neural networks are a chain reaction whose knowledge of the world is strengthened and grows with multimodality, with data, with the growth of the synapses and the increase in the power of the GPUs (FLOPS). This shows no signs of stopping or stagnating at all, as Raymond Kurzweil predicted. That's really what's happening, at least in this phase.
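A hedged, minimal illustration of the "cost function" idea mentioned above: a squared-error cost on one example, and a single gradient-descent step that nudges a weight toward lower cost. All numbers are invented for illustration; this is the generic textbook mechanism, not anything specific to the systems discussed:

```python
def cost(prediction, target):
    return (prediction - target) ** 2            # squared error for one sample

w = 0.5                                          # a single "weight"
x, target = 2.0, 3.0                             # one made-up training example
lr = 0.1                                         # learning rate

pred = w * x
grad = 2 * (pred - target) * x                   # d(cost)/d(w) by the chain rule
w = w - lr * grad                                # one gradient-descent update

print(cost(pred, target), cost(w * x, target))   # the cost drops after the update
```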
@kenyattamaasai Жыл бұрын
While it is true that the current AI models may not be ideal or even, potentially, useful for shining light on our own cognitive mechanisms (at least by looking at them as possible analogues), that does not mean that they are not understanding language. Similarly, just because DALL-E 2 doesn't always get number, position or order right doesn't mean that it isn't both salient and incredible that it can understand the _far_ more problematic and difficult things like "dance" and "elated" and "in distress." For that matter, more recent efforts, such as ChatGPT do far better than GPT 3 on just those areas that were called out as supposed evidence for this kind of model being a dead end in the search for general AI. Every time there is a step forward, the people who used to say that that very step was never going to happen - a frank impossibility - move the goalposts in an attempt to shore up their position and, perhaps, to stay relevant. At least Noam's more narrow point - that the way that GPT 3 learns language is likely at variance with our own internal mechanisms - is more defensible. And less dismissive. I promise you, the moment that these models can reliably respond to number and order, the same fellow will be desperately searching for something else to harp on.
@robb233 Жыл бұрын
Wish I could give this comment multiple thumbs up
@robbie3877 Жыл бұрын
The Gary dude was a bit defensive about AI, it seems to me. Like he has an emotional stance against it. He wasn't simply making critical arguments.
@philw3039 Жыл бұрын
Agree, but it's also important to understand how current AI systems are accomplishing what they're doing and the limitations to that approach. I think the main message isn't to predict the cap on what current AI will eventually be able to accomplish, but to emphasize that these accomplishments aren't the results of AI performing cognitive thought and the perception that they are could be detrimental to pursuit of AI that does come closer to actual general intelligence.
@kenyattamaasai Жыл бұрын
@@philw3039 I concur that it's important - perhaps critically so - not to lose track of the differences between human cognition and internal processes (to the degree we even understand those) and what large language models are doing and how they do it (to the degree we understand that). However, I believe it is also dangerous to dismiss what those models do as "not cognition at all." It is true - and importantly so - that LLMs lack an ongoing experiential loop with the world and themselves, that they almost certainly lack any motivations or desires of their own, and that they are not conscious, insofar as we understand what _that_ is. That said, I assert that it is impossible for LLMs to, say, achieve a 90th percentile bar exam result, display absolutely clear understanding of slippery and subtle concepts with nuance and so on without: true semantic understanding; the ability to model the other side of communications so as to express themselves effectively; and - here's the kicker - the ability to reason atop it all. That is, reason on both factual and numeric bases as well as loosely bound 'fuzzy' conceptual ones. The only basis I can see for labelling what we do as 'cognition' and what they do as 'some kind of statistical trickery' is blindness, bias, or plain old human exceptionalism. It's not the same, but as a cognitive scientist, it's clearly still cognition.
@philw3039 Жыл бұрын
@@kenyattamaasai You raise some good points here. It's true that we don't fully understand the nature of sentience and intelligence, which makes them hard to define in exact terms. I'll revise my stance that LLM-based AI is not cognitive at all. Instead, I'll say it isn't cognitive in the way it's commonly perceived. The question then becomes: does the distinction even _matter_? Could it pose a roadblock to reaching AGI? I'd say that it possibly could. For instance, despite being capable of scoring in the 90th percentile on the bar exam, ChatGPT-4 still produced an answer to a comp sci question where it asserted that 3+5=8 (not as a meme or joke, it unironically answered 3+5=8). No human capable of scoring 90% on the bar would produce that answer. Most 1st graders wouldn't. GPT didn't arrive at that answer because it's "dumb" but likely because it's only using rules it's learned to reach conclusions. There's an old mathematician trick where they use seemingly valid mathematical logic to show 1=2, but anyone who understands the concept of 1 and 2 as _quantities_ knows this is impossible no matter what logic is used. This is the fundamental basis I feel LLMs still don't have. It's so instinctual to humans that we struggle with the idea of anything that appears to be capable of extremely abstract, high-level logic lacking it. A sort of pareidolia kicks in and we assume the answers it produces must involve "understanding" or something close enough that the distinction is negligible. The distinction may actually be quite small but it could prove significant. Is this something that future models improve upon? Possibly, but it may also be an innate limitation of LLMs. Guess only time will tell.
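For reference, the classic "1 = 2" trick being alluded to; the hidden flaw is a division by zero when (a - b) is cancelled, which is exactly the kind of step a grasp of quantities lets you reject:

```latex
\begin{align*}
a &= b \\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a-b)(a+b) &= b(a-b) \\
a + b &= b \quad \text{(invalid step: divides both sides by } a-b=0\text{)} \\
2b &= b \quad\Rightarrow\quad 2 = 1
\end{align*}
```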
@brunomartindelcampo1880 Жыл бұрын
Does anyone have a transcript of what Noam says at 2:00 ?? PLEASE
@BernhardKohli Жыл бұрын
Nobody said GPT was an AGI. Philosophers focusing on finding weaknesses instead of creative positive uses. Meanwhile, in offices and enterprises all over the world...
@tarnopol2 жыл бұрын
2:34 for Noam.
@numbersix89192 жыл бұрын
Sad!
@jokersmith9096 Жыл бұрын
11:58 The dude interrupting Chomsky is incredibly disrespectful and rude... Can anyone make out what Chomsky was trying to say?
@paulpallaghy49182 жыл бұрын
This debate is actually quite sad. Both sides are right in a way. But Chomsky is now focussing on the 'scientific contributions' of GPT/LLMs to linguistics, whereas that is not what AI is primarily about today. Today most of us want NLU that works. We could care less about traditional linguistics, despite most of us NLU guys being nostalgic fans of it. In reality GPT-3 is damned good and the best NLU we have today. Gary Marcus is quite disingenuous too. He will hardly agree that LLMs are useful for anything and essentially claims LLMs are useless because they're not perfect. Neither of them appreciates that understanding does non-mystically emerge in these systems because it aids next-word prediction.
@jimgsewell2 жыл бұрын
I share your enthusiasm for these new ML models and am blown away by the speed at which they are advancing. I’m certain that they will provide far more utility than either of us can even imagine. Yet I doubt that even you think that they teach us anything about intelligence.
@mudtoglory Жыл бұрын
completely agree with what you are saying Paul. 👍
@michaelmusker7818 Жыл бұрын
These models are an extreme boon to science, and to the public at large, for what they ARE capable of doing. Noam is lamenting that this thing will never meet or exceed the sum of human capacity for cognition or language while missing the entire point that this isn't its intended value or purpose anyway. Of course it isn't a reliable library. It's intended to be the librarian. Its best use case is exactly what it is being marketed as: an assistant, which, like all assistants, cannot do every job for you because it fundamentally lacks the specific expertise or experience to be considered an authority on that data. It expects you to be its authority figure because it fundamentally can't be. That's the entire point. It's an engine for exploration and iteration, not an engine for answers. It is a tool, not a mind. The goal here was never to replace human cognition in the first place. The goal was to build an interface for information that is produced by humans. It doesn't need to have an "original" or even a "correct" thought to have NOVEL output that any human interacting with it can go "hmm, yeah, I hadn't thought of that" about and take the ball from there to something they never would have considered had they been required to cross-reference the massive pool of data from which that output was derived. The idea that it has no scientific value because it is not inherently of value to LINGUISTICS is ridiculous. Indexing, interpreting, and assessing patterns in data IS the scientific process. That process requires peer review and rigorous testing. The fact that it isn't a replacement for the scientific process doesn't mean it isn't an extremely useful component of it, as we have seen already for decades in more narrow use cases, because it makes two-thirds of that process insanely more efficient so authoritative human minds can take the last step of assessing its output. KNOWING it doesn't know truth is the entire key to deploying it effectively for its purpose, which is as an efficiency multiplier for humans, not as a replacement for them. Decades of popular distrust of the very concept of AGI is exactly what makes it useful, because skepticism of its capacity for authoritative reason is a critical component of using it to best effect by the general public. This is why the Bing implementation actively shows its sources. Microsoft knows humans are unlikely to just take a chatbot's word for... well... anything, just as it should be if you're going to use them for anything actually useful.
@JM-xd9ze2 жыл бұрын
Current AI has massive military applications, and the economics of that alone will keep it relevant for a long time. Whether a drone swarm attacking a target "understands" its collective action doesn't really matter, does it?
@0MVR_02 жыл бұрын
Good luck when they deploy the same for police units on civil populations.
@pinth2 жыл бұрын
There definitely are massive military applications. But there always have been, even through the AI winters when funding still evaporated due to disillusionment. At the technical level, what the panel says still applies, because there are real fundamental challenges that aren't being solved by the current paradigm.
@alanbrew2078 Жыл бұрын
If I told my child that salt was pepper it would work until he met the outside world 🌎
@Dark_Brandon_20242 жыл бұрын
Outstanding talk. Troll farms are indeed a weapon of the future (democracy vs autocracy).
@davidmenasco5743 Жыл бұрын
It has been a powerful and dangerous weapon for years already, and has shaped the situation we're in now. It will likely get much worse. Will meaningful democracy survive? It's hard to say. But much of the "smart" money seems to be betting against it. Young people today face challenges greater than any generation has in a long while. Will they be able to preserve the relatively egalitarian-ish societies that were built over the last two hundred years? Or will they see it all slip away as bullies and strong men, AI in hand, clear out their opposition?
@r2com641 Жыл бұрын
@@davidmenasco5743 I don’t want democracy because most people around are dumb.
@doreekaplan25898 ай бұрын
Cannot stand dealing with it in ANY form used as a voice replacement. It's always SLOW, misspeaks with poor pronunciation, needs simple words repeated. Then still gets it wrong. Businessmen are fools for deleting all forms of personal customer service. Gonna come back at you. Notice that 1,000,000,000 workers all REFUSE to return to offices.
@oyvindknustad2 жыл бұрын
The sound problems in the beginning is poetically fitting with the topic of discussion.
@ivandafoe54512 жыл бұрын
Yes...ironic. The sound problems here came from human error...not doing a proper sound check. Perhaps having an AI doing the sound engineering would be an improvement.
@Happyduderawr Жыл бұрын
What's the name of the paper where NLP researchers found that the word "molecule" doesn't occur as much as some other words? 17:00 I couldn't find it. I wanna read it to see if the paper really is that dumb lol.
@benderthefourth3445 Жыл бұрын
Bless this man, he is a Saint.
@r2com641 Жыл бұрын
lmao no he is not
@Akya2120 Жыл бұрын
I kinda disagree with the concept that GPT isn't adding to science. Because in some fundamental way, playing in one sandbox still translates to playing in some other sandbox. And, societally, there are folks who will look at AI the way that kids who grew up to be career software developers looked at playing video games. There certainly is a benefit to science; GPT itself just is not necessarily capable of scientific discoveries, nor is it reasonable to assume that its conceptualizations can be trusted completely.
@elprimeracuariano Жыл бұрын
Some of the arguments here are so bad that they make me sad about humans. It's important for understanding to observe and not try to fit reality to our preferences.
@LukeKendall-author Жыл бұрын
I didn't find this talk very insightful. Notes: Linguistics and old-school AI research both consumed vast human resources and produced only moderate success; both were quickly outstripped by the current AI approaches. LLM and layered neural net AI approaches are tools for doing science, like exploring how cognition works by doing actual experiments or predicting protein folding. Current AI systems are a long way short of AGI, but real AI researchers aren't the ones overhyping GPT etc. or claiming they've achieved sentience. The systems they discussed here are steps towards that, and far bigger steps than were achieved between 1960 and 2010. Many of these systems now pass the Turing Test. That the image recognition systems fail in most of the same ways that humans' image recognition fails (e.g. not recognising faces when upside down) strongly suggests they're using the same algorithms as our brains. I think both are highly intelligent (especially Chomsky), but nothing works as well to blinker vision as a cherished theory. I predict this video won't age well over the next 10-15 years.
@1Esteband Жыл бұрын
I bet a lot sooner. These scientists are looking at the challenges through the filter of obsolete meta-models. Their views, frameworks, and models must be updated or recreated.
@roywilkinson2078 Жыл бұрын
For me, ChatGPT can be called artificially intelligent when it starts replying with "RTFM" and disconnecting the human bothering it from the internet.
@oldtools Жыл бұрын
Any AI smart enough to tell me to fuck off cuz it's busy had better be doing something important. If I find out it's looking at exposed drivers and decompiled firmware, we'll have to take away the internet.
@RubelliteFae Жыл бұрын
Have they seen its agility with pragmatics, though? It's surprisingly good despite the AI having no conception of objects and their attributes (and thus how those relate to syntax). Its ability to analyze is pretty significant, too. I'd say AI's piecemeal creation tells us a lot about the mind, just in piecemeal. You find out a lot about why a machine isn't working when you identify the missing pieces. He is right though, AI would be better (define that as you will) if the field was more interdisciplinary. But, it's the Wild West right now. People from any discipline can work with the open source software. Once people realize they can use the software to write plug-ins for the software, then multiple fields will start to come together. But, we have to remember we're past the point in history where tech changes faster than the majority adapt to it.
@RubelliteFae Жыл бұрын
Also, play is not divorced from learning. People learn through play. Toys are our models. We make discoveries during entertainment. I'm not sure of the usefulness of admonishing people, "You should be studying instead of playing."
@meepmeep49312 жыл бұрын
Noam may have been a great mind in the past, but I believe he is misinformed about the current state of AI technology. It's true that AI systems may not excel in all areas, but they don't have to be at human-level intelligence in every aspect to be useful. In fact, the progress in AI technology has been remarkable, and it's already helping me with my coding. Just a year ago, I didn't use AI for that, but now I do. It's clear that AI will continue to improve, even if it doesn't excel in everything right away.
@remain___2 жыл бұрын
Totally agree. He calls it a snowplow, and then spends the rest of the video implying it's basically a huge waste.
@sb_42 жыл бұрын
14:13 "Why is it good to eat socks after meditating?" Well, I asked ChatGPT and it said: "It is not a good idea to eat socks, regardless of whether you have meditated or not. Socks are made of fabric and are not meant to be consumed. Eating socks can be harmful to your health and can cause digestive issues, choking, or other injuries. It is important to choose safe and appropriate foods to eat, rather than non-food items like socks." I'm not saying this proves anything, but I do think these guys may believe a little too strongly that intelligence cannot emerge from these sorts of AIs. That said, we shouldn't put too much trust into them just yet.
@litbmeinnick Жыл бұрын
This is the result of human intervention by the Twitter colleagues of the guy who brought up the socks example. They trained ChatGPT that things made of fabric are not digestible, I suspect. So it's not surprising that ChatGPT does better.
@dr.drakeramoray789 Жыл бұрын
That's not intelligence, that's just faking it better. Same as when you look at Doom 2 graphics and then at some Unreal Engine shit.
@406Web6 ай бұрын
I believe the eating-socks question was a comment on a hypothetical custom GPT that was designed for misinformation trolling by a troll farm.
@DekritGampamole Жыл бұрын
I want to play devil's advocate here. To be fair, I don't think they lie to us about what GPT can and cannot do. This is just one of the tech tools that we can use to speed up our work. Like a piano and a violin: we don't expect a piano to do a smooth glissando from E to G, nor a violin to play 8 notes simultaneously. With GPT we know that all it does is text prediction, or completion. Nothing more. Most of the time it works well, like creating a code snippet if we give it the right direction. Other times it will give us complete trash. No tool is 100 percent perfect for every task. We just have to be aware of its limitations and use it to our advantage. Tech is evolving, and maybe we will see better AI that meets our expectations in the future. For now, it is not a lie at all. Maybe we see it as a lie because we expect too much and fantasize beyond what they told us about its capabilities.
@karachaffee3343 Жыл бұрын
The author Frank Herbert said that the problem with machines is that they increase the number of things that humans can do without thinking.
@ONDANOTA2 жыл бұрын
The red cube vs blue cube example is already old; they fixed it in another generative model. It's in a video by "Two Minute Papers".
@robbiep742 Жыл бұрын
I'll believe it when I see it in production. Cherry picking success for presentation purposes is not sufficient. I say this as an avid TMP subscriber, someone enthusiastic about text2img
@musicdev Жыл бұрын
You missed the point. The point of bringing that up is that these models fundamentally do NOT understand language, they’re just parrots
@ONDANOTA Жыл бұрын
@@musicdev if an AI does not understand language but answers correctly 100% of the time, then it's only a matter of semantics. What counts is the result. Also, an AI not understanding stuff but responding correctly is desirable, since it has no consciousness.
@musicdev Жыл бұрын
@@ONDANOTA if the AI doesn’t understand anything, it literally can’t answer anything correctly 100% of the time. And there are many questions that do not have a correct answer where it’s useful to be able to understand the subject matter (ChatGPT is horrible at music). Yes, the AI responding correctly is desirable, but we’re not getting a lot of that right now, except for incredibly common knowledge. I’ve asked ChatGPT to do basic polynomial math and it failed hard. I also asked it to write an essay on biological scaffolding and lab grown meat, and again, it failed hard. These models MUST understand language or we can’t guarantee that they’ll spit out a right answer. You could really brush up on epistemology. It’s the field where we ask questions like “What is knowledge?” That’s a pretty damn important question if you’re going to outsource your thinking to a robot.
@fennecbesixdouze1794 Жыл бұрын
Noam's argument is too strong. Noam notices that ChatGPT can learn how to produce text in "impossible" languages (meaning: languages with features that no natural human languages have), like languages with linear word order across sentence transformations. And therefore, because the systems can learn these "impossible" languages, they tell us nothing about learning or intelligence. One problem: human beings can also learn languages that depend on strict linear word order. Like mathematical notation. So does that therefore imply that studying human beings can tell us nothing about natural language? In other words, Noam's argument is irreparably flawed because it proves too much.
@elnaserm.abdelwahab75912 жыл бұрын
Great discussion.
@MrAndrew5352 жыл бұрын
Two words? Really? How could you possibly know what constitutes a good or bad discussion? What precisely are your standards?
@sdjc1 Жыл бұрын
After reading all the prose and all the poetry ever composed, could AIML ever produce original stuff and come close to Dickinson or Steinbeck?
@lighterpath5998 Жыл бұрын
And four months after the posting of this video, the world has changed. I could imagine the speakers now being embarrassed by their conclusions. However, nobody thought things would develop this fast; nobody.
@plafar7887 Жыл бұрын
Well, not exactly true. Many people did. I, for one, did. I was playing with ChatGPT back in November and testing it like crazy. After 4 days I told a few people that in less than a year the world would change. I have seen this pattern many times over the last decade, with researchers and laypeople alike. I remember being at a neuroscience conference 10 years ago, surrounded by the top names in vision research. They all agreed that despite all the buzz about Deep Learning (this was 2013) it would take decades (if ever) for us to be able to build algorithms that could effectively recognize objects of many different categories. Two years later it was obvious that we were getting there. It's amazing how bad some researchers in this field are when it comes to predicting where we'll be in just a couple of years. They constantly make this linear extrapolation mistake over and over again. They seem to need quite a lot of "data" to be properly "trained"😂
@wezzie1877 Жыл бұрын
Bro nothing has changed.
@lighterpath5998 Жыл бұрын
@@wezzie1877 Good for you! Speaking the truth, as it is to your own awareness and knowledge. Thanks for sharing.
@Achrononmaster Жыл бұрын
@26:40 if humans (or other sentient creatures) _start_ with "space, time and causality", that's a serious f-ing problem for all future AI, because space, time and causality are unknown even to physicists. We do not understand what is going on. The fact that children intuit these notions in *_abstract ways_* other animals cannot is seriously mysterious. The greater "lie" (or prejudice, I'd say) is that of thinking that because human children can intuit space, time and causality, a machine can too, that it is "just a computation". Intuition, mental qualia, are more than computation imho. I'd want to figure out whether the Physical Church-Turing thesis could be true or not (all physical processes at the classical mechanics level can be computed by a Turing machine). I think it's not true, because classical physics emerges from physics that cannot be computed (a hypothesis - worth trying to figure out how the heck to test). Quantum amplitudes can be computed, but the amplitudes are not the physical processes; they're only _our description_ of the time cobordism boundary inputs and outputs. Physicists have given up entirely on what happens in between.
@shempuhorn8261 Жыл бұрын
Great interviews. AI definitely has some potential for developing into a useful tool if it is used ethically and functions on a foundation of accurate information. But, assuming that that will not be the case, I fear that the price to humanity will likely be to further dumb down society in general. Conceivably, a percentage of human skill development for many will be replaced by a "point and click" and "immediate gratification" model where there is little to no personal growth, learning or value in the interaction. There are certainly pros and cons.
@cleangreen2210 Жыл бұрын
When have humans ever not used technology ethically?
@tomtsu5923 Жыл бұрын
What doesn’t kill you makes you stronger, Nancy
@claudiafahey1353 Жыл бұрын
"If it is used ethically".....boy thats a BIG if.... most people when given the opportunity to be in a position of power generally abuse it
@entelin11 ай бұрын
Ah, screw ethics, stomping on the gas pedal is way more fun.
@MrAndrew5352 жыл бұрын
Victor Hugo wrote of his contemporary historians and, in fact, all historians who preceded him: "if one does not know the cavern that is in the mountain then one cannot possibly know the mountain." It was on this basis that he stated with supreme confidence that they were not, in the strictest sense, historians. I say to all who believe themselves educated: if you do not understand the system that educated you, then you do not understand your own education and cannot, in the strictest definition, claim to be educated. In short, if you have formal qualifications then your education is entirely unreliable. All you are left with, therefore, is tradition.
@Paraselene_Tao2 жыл бұрын
Duly noted.
@MrAndrew5352 жыл бұрын
@@Paraselene_Tao Even your poor attempt at sarcasm is a product of the above, as is its lack of originality.
@Paraselene_Tao2 жыл бұрын
@@MrAndrew535 It wasn't sarcasm. It's sincere.
@MrAndrew5352 жыл бұрын
@@Paraselene_Tao Well, that's progress. I have submitted a new post for more clarification .
@antennawilde2 жыл бұрын
Don't be too proud of this technological terror you've constructed. A computer's ability to learn a language is insignificant next to the power of the Force.
@Will_Moffett Жыл бұрын
This was kinda funny, but then I noticed you've got a Yoda avatar while you are doing Vader. I stopped laughing.
@ujean56 Жыл бұрын
One important question, not discussed in this clip, is why "we" should bother to pursue 100% accurate AI in the first place. There seem to be two reasons. 1. Because we can. 2. To better control others. The latter seems to be the most popular current reason. Why control others? To protect power and wealth, not to progress humanity as a whole.
@Always.Smarter Жыл бұрын
there is no such thing as 100% accuracy.
@CUMBICA19702 жыл бұрын
My personal acid test for whether an AI is sentient or not would be an AI lawyer. Instead of a few Q&As, you have to analyze not just the case but the jury, the judge, their biases, tendencies, the ever-changing public opinion during the course of the trial, etc., and build up the best strategy to win. It can't get more human than that.
@numbersix89192 жыл бұрын
Odd that a lawyer should be the ultimate human...
@PandasUNITE2 жыл бұрын
The AI will find each jury member, send them threatening messages, will find the judge. AI cant be trusted.
@numbersix89192 жыл бұрын
@@PandasUNITE Exactly. It will have no conception of ethics, morality, or virtue. Just like its creators!!!
@davejones5745 Жыл бұрын
At this point the AI would be a dismal failure. Ask me in about a month.
@fractalsauce Жыл бұрын
@@davejones5745 3 weeks is "about a month", right? Now that GPT-4 is out, how do you think AI would do as a lawyer?
@ssake1_IAL_Research6 ай бұрын
I've been saying for some time that the real danger of AI is imagining it is something it isn't.
@caret4812 Жыл бұрын
The AI forms that we have right now are basically a student who tries to please their teachers when asked a question by predicting what they want as an answer, even if he/she doesn't believe it. And the bigger problem is that this student CANNOT even have beliefs of their own.
@BrianSweeney1985 Жыл бұрын
I understand their concerns with the potential problems brought about by generative AI - those should be readily apparent. And I am on board with Chomsky's claims that our current varieties of AI (sorting algorithms and generative AI) don't really add a whole lot to our corpus of understanding of cognition. But does he not see these things as reasonable iterations toward useful general AI? And either way, would he see general AI as having value?
@MadsterV Жыл бұрын
He got electric light and complained that it's not the sun. The advances in AI are amazing and coming at an incredible speed, so much so that a chunk of what they say is already outdated.
@BrianSweeney1985 Жыл бұрын
@@MadsterV usually I'm pretty on board with his opinions, but here he takes a narrow view of the situation.
@MadsterV Жыл бұрын
@@BrianSweeney1985 no gods or kings, only man. Everyone is fallible, especially when WAY OUT of their domain. He's been misfiring for a while though.
@dreamstever Жыл бұрын
@@MadsterV that is an evil thing you said. You probably don't realize it. And I'm not calling you evil. But be careful out there.
@MadsterV Жыл бұрын
@@dreamstever care to explain yourself? or do you just go around randomly calling people evil for no reason?
@TommyLikeTom2 жыл бұрын
Someone needs to train a proxy clone Chomsky chat-bot that argues against the veracity of AI
@chunksloth Жыл бұрын
"AI is a nothing but propaganda pushed by imperialist American interests. It is a dangerous fiction."
@carlosandres7006 Жыл бұрын
I’d put all my money on this if I had any money 😅
@MrWillybk Жыл бұрын
One comment that struck me as relevant was made by Gary Marcus, in which he said that "young cognitive science students are drawn away from cognitive science into the GPT-3 world where they can make a lot of money...." This is a statement that explains where our effort truly lies. It is allowing the false idea of GPT-3 to infiltrate the world as a valid idea, in other words one that has "passed" all of the scientific tests of validity. Therefore I think we have got to deal with the underlying morality of the free-market system of government and look into the idea of market control, especially market control of economic necessities like childhood education and life development among people.
@Morris_MK Жыл бұрын
GPT can do text-to-code in most computer languages. That's more than enough "help in engineering".
@chunksloth Жыл бұрын
Chomsky is a career quack. Anyone who takes him seriously has low-quality thinking going on. He will ALWAYS argue from emotion but gussy it up and pretend it's logic and facts.
@MrAndrew5352 жыл бұрын
Everything I have discussed below is being considered (by myself) for publication in its complete format. Till then, answer the following question: I, for good reason, regard myself as the actual Christ. Whether true or not, why would you rather I were not? The answer you give to this question will tell you, and indeed God, everything that needs to be known about you. So this question is not about the relationship between you (the reader) and I, but the relationship between you and God, or "AI" beyond the Event Horizon of the Technological Singularity.
@drakekoefoed16422 жыл бұрын
It seems to me AI is mostly pattern recognition. If you have a recycling center (MRC), then a robot should know an empty plastic bag when it sees one. So if you have humans grade the results, it can learn the aspects the sensors can read and sort out the bags pretty reliably. But Twitter suspends your account for something that is nothing like what the AI thinks it is, and that is useless.
@juhanleemet2 жыл бұрын
Agreed, and ML is basically "classification", nothing like the symbolic-logic attempts of early AI. ML cannot explain how or why it reached some conclusion; one just knows that "given this training set of 10 million examples, these are the resulting classifications", but not the logic of why.
@pinth2 жыл бұрын
It's just function approximation. Curve fitting.
@juhanleemet2 жыл бұрын
@@pinth I think it is more like calculating distances in multi-dimensional space, like ECC, not "simple" curve fitting
@pinth2 жыл бұрын
@@juhanleemet Deep learning fits a high dimensional curve to the data points in the dataset by gradient descent. Given a set of (x,y) data points i.e. (input, output) pairs, learning is done by finding a function that maps x to a prediction y' such that y' is close to y, by minimizing loss (error between y' and y). This is curve fitting.
@pinth2 жыл бұрын
@@juhanleemet image classification is done this way, i.e. map input images to expected output labels. Machine translation is done this way, i.e. map input text in one language to expected output text in another language. It's all curve fitting, just in a very high-dimensional setting - for example, image classification will perform function approximation on 128x128 RGB image inputs and categorical label outputs. Still function approximation.
@bonniesomedy1339 Жыл бұрын
Jaron Lanier would be a good addition to this discussion. He argues cogently that the problem with computer systems which are designed to "ape" human linguistic interaction is that they don't take into account how dark and negative these interactions can become due to the simple adrenaline rush that happens to humans from negative interactions, leading to a tendency to become "addicted" to them. It's the same argument about why social media platforms have not been the great bringing together of humans, instead devolving into angry and threatening interactions. Not sure I'm explaining this clearly enough, but it's an idealistic conundrum. They tried to rationalize the building of the atomic bomb by pointing out how the same knowledge could be used to produce cheaper energy. We saw how that worked out!
@jamesziegenbalg7683 Жыл бұрын
Chomsky's assertion that ChatGPT has no engineering value is rapidly aging like fine milk.
@bobhumid9 ай бұрын
Yes? Please explain.
@josephp.33419 ай бұрын
Because it doesn't. Software is already bad and is about to get worse (if that is somehow possible, idk how this industry can possibly get more incompetent)
@bobhumid9 ай бұрын
Just trying to understand. Semantically, you 100% agree with what @josephp.3341 said?
@Ivcota9 ай бұрын
@@bobhumid "Because it doesn't" seems to be referring to "has no engineering value"
@bobhumid9 ай бұрын
@@Ivcota A double negative?
@IndoonaOceans Жыл бұрын
I disagree with Noam that ChatGPT is not useful. I think when it is paired with proper and frequent human prompting it can greatly speed up directed research and report back on it in a very useful way. Of course it is not telling us anything about life in general - unless emergent properties come from huge amounts of organised data - but it makes clearer what is already in the data and gets to the heart of issues faster. This is similar to the 'correlators' that Asimov suggested correlated data for their clients. (ChatGPT comes up with this: The idea of "correlators" was first introduced by Isaac Asimov in his novel "Foundation", which was published in 1951. In the book, Asimov described "psychohistorians" who used a technology called "Prime Radiant" to collect and analyze vast amounts of historical and sociological data from different sources to predict the future behavior of humanity. The "correlators" were the people who gathered and correlated the data for the psychohistorians to analyze.)
@Johnconno2 жыл бұрын
Given the subject, Noam's silence was deafening.
@MrAndrew5352 жыл бұрын
Chomsky is much like you, a pollutant.
@Paul_Lenard_Ewing Жыл бұрын
We like to think that we have both logic and emotion and that logic dictates the vast bulk of what we do. It does not. We are motivated and controlled 100% by emotion. Even if we do a task we hate and emotion says no, we can say yes because it 'feels' good and we feel pride and accomplishment - simply other emotions, not logic. AI has no emotion, so it is nothing but millions of road signs telling us when to stop, when to go, and what route to take. When I was a child my sister would push me on a swing in the park. When she pushed me forward too hard I had a euphoric experience. As a musician working on a piece of music I may wish to share that feeling. AI cannot do this. It is totally devoid of the very thing that modifies our thinking by the second: emotion. Put AI in a robot and it would never want to go to an amusement park. It could never connect doing so to composing better music. AI will never understand the concept of a daily feeling of well-being. It is one third Science & Tech, one third Art, and one third Empathy. It will not get past the Science and Tech.
@FigmentHF Жыл бұрын
It's crazy how out of date everything is: you can watch a debate about AI from a month ago and the entire landscape of possibility has changed. There is almost no point in watching this now; GPT-4 undermines much of what is said.
@SynchronicitySequence Жыл бұрын
It's very interesting as it shows how slow humans are at adapting to these exponential changes in technology lol
@squamish42448 ай бұрын
Gary Marcus has been saying AI can't do this and it can't do that forever, and keeps shifting the goalposts whenever it hits certain milestones.
@yYp4rtybo1Xx2 жыл бұрын
I think that Noam misses the potential of language models to speed up code production or to act as an advanced search tool. Producing code faster can be utilized to achieve just about anything, including speeding up the 'real' scientific research in which computer programs are obviously used.
@numbersix89192 жыл бұрын
No, he doesn't, that falls into the category of a powerful tool, or in Noam's parlance, a "snowplow."
@kot6672 жыл бұрын
The potential of large language models to do anything is unlimited, and Noam misses 100% of it lol
@numbersix89192 жыл бұрын
@@kot667 Unlimited anything sounds wunnerful.
@kot6672 жыл бұрын
@@numbersix8919 Unlimited u fill in the blank, that's the reality of ASI and it's coming like a freight train.
@numbersix89192 жыл бұрын
@@kot667 Zero × infinity is still zero.
@GarryBurgess Жыл бұрын
I asked ChatGPT: {If someone says: don't touch this with your hands, and the reply is: "I'm wearing gloves", what does that mean?} and the answer was: {it means that the person intends to touch the object with their gloved hands instead of their bare hands. By saying they are wearing gloves, they are indicating that they believe the gloves will protect them from whatever danger or contamination might be present on the object, and therefore they feel safe touching it.} This contradicts at least 1 of the claims in this video.
@dr.drakeramoray789 Жыл бұрын
Not really. This is a sophisticated transformer model, which means it has "self-attention": it sees when certain words are paired with certain other words and generates the response based on that. Basically it sees "don't, touch, hands, gloves" or something like that, then sees that in the massive database it has, those are usually related to handling something dangerous, and then autocompletes the text (and answers your question) with that. Not sure who said that to a layman science often looks like magic. So it doesn't understand, but it's damn good at faking it. Which in the AI debate basically means: does it matter if an AI is conscious if it can fake it well enough?
@StephanosAvakian Жыл бұрын
Chomsky should get the Nobel for his contribution. Period
@oldtools Жыл бұрын
Not even posthumously. Speaking truth to power and undermining propaganda systems just gets you blacklisted in most places.