Geoffrey Hinton - Two Paths to Intelligence

150,907 views

CSER Cambridge

A day ago

Geoffrey Hinton - Two Paths to Intelligence
(25 May 2023, Public Lecture, University of Cambridge)
Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but they allow exactly the same computation to be run on physically different pieces of hardware. For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and use very low-power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. I will briefly describe one such algorithm. Using the idiosyncratic analog properties of the hardware makes the computation mortal. When the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process.
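One family of algorithms that fits this description is weight perturbation, which needs only the ability to run the hardware and measure the resulting loss. A minimal sketch of the idea, assuming a loss we can evaluate but not differentiate (all names here are illustrative, not from the lecture):

```python
import numpy as np

def weight_perturbation_step(weights, loss_fn, lr=0.01, sigma=1e-3):
    """One derivative-free update: apply a small random perturbation to the
    weights, measure how the loss changed, and move the weights along the
    perturbation in proportion to the improvement. No model of the
    (possibly analog, idiosyncratic) hardware is needed, only the ability
    to evaluate the loss."""
    baseline = loss_fn(weights)
    delta = sigma * np.random.randn(*weights.shape)
    change = loss_fn(weights + delta) - baseline
    return weights - lr * (change / sigma**2) * delta

# Toy usage: minimize a simple quadratic loss.
w = np.random.randn(10)
for _ in range(1000):
    w = weight_perturbation_step(w, lambda v: np.sum(v**2))
```

In expectation this update follows the gradient, but its variance grows with the number of weights, which is the scaling problem the abstract attributes to learning rules suited to analog hardware.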
By contrast, digital computation allows us to run many copies of exactly the same model on different pieces of hardware. All of these digital agents can look at different data and share what they have learned very efficiently by averaging their weight changes. Also, digital computation can use the backpropagation learning procedure, which scales much better than any procedure yet found for analog hardware. This leads me to believe that large-scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us.
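That sharing step can be made concrete in a few lines. A toy sketch, assuming several identical copies of a model that each computed a gradient on their own shard of data (illustrative only):

```python
import numpy as np

def synchronized_update(weights, local_gradients, lr=0.1):
    """Digital weight sharing: because every copy runs exactly the same
    model, the copies can pool what they learned by averaging their
    gradients (or weight changes) and applying one shared update."""
    avg_grad = np.mean(local_gradients, axis=0)
    return weights - lr * avg_grad

# Toy usage: three copies saw different data; all end with the same weights.
w = np.zeros(4)
grads = [np.random.randn(4) for _ in range(3)]  # one gradient per copy
w = synchronized_update(w, grads)
```

The averaging only works because the copies are bit-for-bit identical, which is exactly the hardware-software separability that the mortal analog approach gives up.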
The public lecture was organised by The Centre for the Study of Existential Risk, The Leverhulme Centre for the Future of Intelligence and The Department of Engineering.
The Centre for the Study of Existential Risk (CSER) is an interdisciplinary research centre within the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or civilisational collapse. For more information, please visit our website:
www.cser.ac.uk

Comments: 402
@TheLastUniqueName 11 months ago
“There's no examples of a more intelligent thing being controlled by a less intelligent thing” - Tell me you don't own a cat without telling me you don't own a cat
@gdraskovic 11 months ago
Perhaps the cat is thinking the same thing
@41-Haiku 10 months ago
Just shows how easy it is to manipulate a human. (As a cat person myself, it's the endorphins that do it. The little kitties are so fuzzy wuzzy!)
@Drookup 9 months ago
Maybe the cat is really intelligent
@prestonlui6451 9 months ago
But cats are more intelligent, cute overlords
@Custodian123 8 months ago
The same idea applies with dogs. My pug knows she can get me to do something she wants if she acts in a particular (specifically cute) way. This actually gives some insight into the future of superintelligent AI and humans. If we don't have control, it's likely we can still have some amount of influence. Maybe.
@Senecamarcus 11 months ago
Thank you for uploading this for us to watch! I appreciate that.
@kandoit140 11 months ago
I always love listening to Geoff; he is so insightful and has a great sense of humor. So interesting to hear him talk!
@RougherFluffer 11 months ago
What a wonderful talk. His humble approach and acknowledgement of where he lacked particular knowledge was heartening to witness. That he has logically deduced some of the main arguments of the alignment problem speaks volumes about his reasoning abilities. I'm very glad he's leveraging his position to try to promote such vital messages.
@wk4240 10 months ago
It will take many more, like Mr. Hinton, to make a difference - as to what direction we take with AI, and to what extent.
@richardpaczynski5486 6 months ago
Very well put; thanks
@TuringTestFiction 11 months ago
I love this video. Brilliant and low-key hilarious! I'm consistently impressed by Geoffrey Hinton.
@AmericanBrain 9 months ago
But he admits to socialism and to being a materialist: that humans are automatons of sorts. So stop this religiously driven talk, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence (it is data processing to make statistical math predictions). [2] Man has free will to direct his own life (unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton who cleverly do not even call themselves philosophers).
@JustJanitor 8 months ago
Thank you very much for making this available
@DaniloNaiff 11 months ago
It is really impressive to listen to Geoffrey Hinton. I think this lecture may sound strange to most, but he really seems to think like a cognitive scientist who simply wanted to make a nice model of the brain.
@dobermanlove777 11 months ago
That's exactly what I thought when listening to this presentation! It's quite a romantic approach for the human brain to try to recreate a digital, and thus mathematical, representation of itself. Especially when you also see the link between how neural networks communicate and how society does, in the example of Trump's tweets.
@paulm3969 11 months ago
I actually find him really irritating; I think he is quite presumptuous. He makes a lot of assumptions and then uses them as arguments. For example, he keeps saying that people think they're special. What is he on about? Yes, some people think they're special, but it's as if he is the only person on earth who thinks otherwise. I know very few people who think they're special or really smart, and I'd say most people already know Google is smarter than them. So I don't know where he gets that idea, unless he is projecting. I also think he is a bit of a fool for saying things like "Trump would use these things to win elections". Why not just shut up and stop giving Trump ideas?
@jebprime 11 months ago
I think he's referring to how some people believe intelligence and consciousness are something special or unique to humans that cannot be replicated by a machine
@PazLeBon 9 months ago
@@dobermanlove777 yet the facts are they have absolutely no clue how we think, irrespective of how they dress things up
@PazLeBon 9 months ago
@@paulm3969 I'm like you, I always get irritated by 'we' or generalisations that simply are not how I think haha
@kenmogibrainworld4844 11 months ago
When Prof Hinton discusses the nature of qualia from the counterfactual point of view, there is a spark of things to come. I look forward to further expositions on this.
@DirtiestDeeds 10 months ago
Yes, the world is our lobster! Just need the piping at international/national/regional/local level, along with a 'One AI per child' policy... Also stop the training runs immediately.
@PazLeBon 9 months ago
it isn't factual tho lol
@AmericanBrain 9 months ago
Ken, stop it now. He admits to socialism and to being a materialist: that humans are automatons of sorts. So stop this religiously driven talk, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence (it is data processing to make statistical math predictions). [2] Man has free will to direct his own life (unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton who cleverly do not even call themselves philosophers).
@AmericanBrain 9 months ago
What are you even talking about? @@DirtiestDeeds Hinton admits to socialism and to being a materialist: that humans are automatons of sorts. So stop this religiously driven talk, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence (it is data processing to make statistical math predictions). [2] Man has free will to direct his own life (unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton who cleverly do not even call themselves philosophers).
@HangLe-ou1rm 8 months ago
Amazing talk! Thank you!
@_obdo_ 11 months ago
Great talk. It's impressive to see someone speak out on such a polarizing topic, based on having grasped it purely intellectually even though, as he says, his emotions haven't nearly caught up yet.
@PazLeBon 9 months ago
Why polarising? It's just software at the end of the day; nothing that new about it in many senses
@_obdo_ 9 months ago
@@PazLeBon The topic of AI risks has unfortunately become fairly polarizing, and Dr. Hinton has recently shifted his position on that topic, some of which comes out in this video (even though that's not the primary topic).
@Petrvsco 7 months ago
@@PazLeBon "Just software"? I think you missed the part that mentions how this can quickly become an existential risk. Or you misunderstand what existential risk means in this context.
@tappetmanifolds7024 7 months ago
@@Petrvsco Elaborate and elucidate.
@tappetmanifolds7024 7 months ago
By enforcing personal opinions based on perception born of misconception, especially when swayed by political bias, how can the advancement of a system progress if decision problems are not permitted to evolve because they are restricted by preventions? Distillation would do well to find pools of resource in the entropy of the not-yet-known.
@41-Haiku 10 months ago
Hinton is a delight. His voice is a very welcome one for the AI safety community.
@DreamzSoft 9 months ago
Sir, you are too good, and listening to your views, we're thankful to have people like you around us ❤😊 thanks
@yunwang1243 8 months ago
This is such a sincere talk.
@JasonC-rp3ly 11 months ago
What a fascinating talk - this man is a hero
@KelvinMeeks 9 months ago
A fascinating talk
@loopuleasa 11 months ago
tl;dr on how teaching and learning works for us: "To learn from the words coming from my mouth, your brain is trying to change its connections to make it likelier that you would reasonably say that string of words yourself." He taught me to say that
@greencoder1594 10 months ago
The question is though, *why did you repeat.* And why did you post. Is it for the likes, the joke, do you think you know? Because it is not the reason you are going to proclaim. Also, thanks for your tl;dr.
@bobsmithy3103 9 months ago
I'm not sure I'd agree with Hinton on that. A human's goal is learning the underlying concept, whereas LLMs have the goal of learning surface-level patterns, but in order to do so they are forced to learn the underlying concepts/models. Note that the human is not necessarily optimizing to better predict which word/token comes next, which is the case for LLMs. (AKA: for humans, word prediction is a consequence of the goal of learning underlying models. For LLMs, word/token prediction is the goal and learning the underlying models is a consequence.) It's a slight but useful distinction.
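To make the distinction in this thread concrete: the LLM's explicit training objective is just next-token cross-entropy, and everything else is emergent. A toy sketch, assuming a model that outputs raw logits over a token vocabulary (names are illustrative):

```python
import numpy as np

def next_token_loss(logits, target_id):
    """Cross-entropy at one position: the model is rewarded only for
    assigning high probability to the token that actually came next.
    Any deeper 'understanding' arises in service of this objective."""
    probs = np.exp(logits - logits.max())  # softmax, numerically stable
    probs /= probs.sum()
    return -np.log(probs[target_id])

# Toy usage: 5-token vocabulary, the true next token has id 2.
print(next_token_loss(np.array([0.1, 0.3, 2.0, -1.0, 0.5]), target_id=2))
```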
@boremir3956 11 months ago
I have noticed that oftentimes those who are highly intelligent are very hesitant to admit that they are knowledgeable or should be viewed as an authority in a specific field, like Geoffrey Hinton here. On the flip side, those who are the loudest and think themselves capable of giving advice and knowledge to someone else are oftentimes the least intelligent.
@nescirian 11 months ago
This is an observation that a lot of people have agreed with - for example, in 1950 Bertrand Russell wrote that "The fundamental cause of trouble in the world today is that the stupid are cocksure while the intelligent are full of doubt". There are studies that support the idea, and in psychological circles it is known as the Dunning-Kruger effect, which is a useful search term if you wanted to learn more on the subject.
@Jesyak 11 months ago
Well said
@hubrisnxs2013 11 months ago
Dunning-Kruger in effect, which in this case is important, but, and I may be incorrect here, I notice a lot of people suffering from Dunning-Kruger use Dunning-Kruger as a bludgeon on people. I suppose since it's an ethical or cognitive blind spot, it is akin to those suffering from confirmation bias, yet I feel there is an added moral component to Dunning-Kruger that I'm not sure actually exists, though I definitely feel it to be so
@kinngrimm 11 months ago
Look up the Dunning-Kruger effect; I think at least the second part of your statement is described by that.
@poemerlee9437 11 months ago
Couldn't agree more.
@hanskraut2018 11 months ago
I really like some of what Mr Hinton is saying about A.I. There is a lot I could say, but I'm just listening, and I like the efficiency points; some things point to a deeper understanding from deeper principles. Thank you for the lovely talk. Hopefully you have a great long life the way you like it, with many more fun discoveries, and get to bathe in some of the massive positives that might come early enough. I think it's possible, but the world is complex and not only technical things can hold A.I. up. Enjoy, and good wishes :)
@jonatan01i 11 months ago
Btw, humanity also learns by averaging, through evolution. Every one of us is run with slightly different config settings, and the most successful units make more children - at least that was the case for a long time. It's the species' hardware that is learning through evolution.
@PazLeBon 9 months ago
lmao no, the intelligent ones have fewer children now :)
@scottnineteen 11 months ago
Geoffrey Hinton consistently presents and considers the most intriguing issues. He's not the guy in the basement working on his nets for decades whom super-fast hardware made famous; no, his thinking properly shines light in the dark places, and his ideas worked because they're really good... and the hardware got faster.
@AntonMochalin 9 months ago
I was most intrigued by Hinton's view of subjective experience, which is actually quite close to certain psychological theories emphasizing the social nature of consciousness, and if those theories have some truth to them (and I'm pretty convinced they do), having some form of subjectivity like ours isn't going to be hard for ML systems. What they still lack, and I think this is preventable, is having a personality as a hierarchy of motives (vaguely similar to what Hinton mentioned about the goal of having more control serving many other possible goals), because for now the ML's simple "motive" is doing the task we set, providing the "right answer" so to speak, so we're more likely to fool ourselves if we are not careful enough with the definitions of "right answers". However, Hinton is right about the dangers of allowing ML too much unsupervised agency, so the solution could be in the development of specialized systems and the prevention of general-purpose systems like GPT-4, or at least preventing copies of those systems from sharing too much general knowledge.
@geaca3222 9 months ago
It would be interesting to know what Dario Amodei of Anthropic thinks about your suggestions
@KemptonLam 7 months ago
52:29 Amazing (and surprising) to hear Prof. Hinton talk about the thinkers who affect his own thoughts on risks from AI.
@agenticmark 3 months ago
Mr Hinton didn't want to be Oppenheimer. He basically created the base concepts that we use today in ML.
@richardnunziata3221 11 months ago
Yes... soon machines will model the agency of the interlocutor, then create a theory of mind for the interlocutor, and then of themselves. This will happen very quickly, especially if we give these systems an embodiment like a humanoid robot... it's just a question of distillation. If we can get GPT to try to predict the goal of the user (what is the user trying to do), we can then measure against predicted next queries.
@petraiondan4669 8 months ago
Sooo profound!
@waylonbarrett3456 11 months ago
It's just so damned hard to believe this talk is being given in 2023.
@TheDavidlloydjones 3 months ago
Yes, all his "the robots are going to take over" stuff is from 1930s movies and 1945-48 AI, isn't it?
@lucidx9443 10 months ago
I've known this guy's work since Boltzmann machines, before knowing AI was necessary. Nothing's clearer than Hinton's (explanations of) concepts. Greatest intuitionist of our time. Thanks for uploading.
@russianbotfarm3036 10 months ago
Not sure who it was who said, "To understand is to create". I think it was probably meant as "learning is creating an internal representation", but I think it's also true that _understanding something deeply lets you create with that understanding_.
@doublesushi5990 8 months ago
it was this guy who said that @@russianbotfarm3036
@commentarytalk1446 9 months ago
Does he start with a definition of intelligence, to frame the problem of intelligence categorization, creation and application, before giving a summary of the "death by PowerPoint" presentation as a road map to structure the talk? I did not hear or see one.
@jorgesaxon3781 11 months ago
25:40 Love how he says it's "possible" that Google is doing the same thing, like he wasn't working on probably exactly that just a couple of months ago :/
@asamak 11 months ago
"But as you'll see, we may not have time for that" 🤯 5:05
@charlesje1966 8 months ago
That is fascinating. I use ChatGPT to assemble code for microcontrollers, and I can see how this lecture points to the future of that endeavour. We will replace the 'human code' layer with hardware anatomy that has been optimized for a task through AI.
@tappetmanifolds7024 7 months ago
@charlesje1966 Given that the English language is extremely rich in its historical contextuality, as well as in ambiguity and nuance, does our ability to construct machines which can decide our channels of communication for us cause greater divisions between people who are unable to express a posteriori knowledge? Is this the antithesis of the humane computation which seeks, through physical interactions and debate, our true purpose as a species? Religion and belief systems aside, we still need to, in Professor Hawking's words, keep talking. Is the most efficient way to acquire knowledge actually to 'get' the entire distribution and a precise interpretation of it?
@RandomNooby 7 months ago
Super intelligent minds in control may well be better for all life than the current situation...
@MathAtFA 11 months ago
Great lecture. BTW: if teaching "mortal analog" AIs is really so slow and painful, this just means it is a great problem to hand to a digital AI. Clear function to optimize: teach the analog AI to imitate a given network. Infinite data: you can simulate/build many slightly different analog AI devices. Definitely profitable: once solved, one could sell a gazillion cheap devices working well enough for a short time. And then you keep selling them, since no one would be able to repair them. Whisper: mass-producing cheap short-lived military drones.
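The imitation objective described here is essentially distillation. A toy sketch of the loss, assuming the teacher and student both output probability distributions over the same classes (all names illustrative):

```python
import numpy as np

def distillation_loss(teacher_probs, student_probs, eps=1e-9):
    """Cross-entropy of the student against the teacher's soft outputs:
    the student (e.g. a cheap analog device) only has to mimic what the
    teacher says, with no access to the teacher's weights or hardware."""
    return -np.sum(teacher_probs * np.log(student_probs + eps))

# Toy usage: the student's distribution is still far from the teacher's.
teacher = np.array([0.7, 0.2, 0.1])
student = np.array([0.4, 0.4, 0.2])
print(distillation_loss(teacher, student))
```

This is also the slow "education" channel the abstract mentions for transferring knowledge between mortal computers: the student sees only the teacher's outputs, not its weights.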
@AmericanBrain 9 months ago
Worst lecture ever. Hinton admits to socialism and to being a materialist: that humans are automatons of sorts. So stop this religiously driven talk, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence (it is data processing to make statistical math predictions). [2] Man has free will to direct his own life (unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton who cleverly do not even call themselves philosophers).
@tangdexian3323 11 months ago
Speaking from the perspective of a former electrical engineer, I suppose another reason people settled on digital gates, 1s and 0s, to represent information is that analog computing is just harder to get right. Logic gates, on the other hand, are much easier to design and produce, and also much more robust.
@hubrisnxs2013 11 months ago
Thanks for this. I was always under the impression analog systems allowed much more error/fault tolerance
@PazLeBon 9 months ago
@@hubrisnxs2013 but how do we say the next word is an error?
@anselmoufc 7 months ago
@@hubrisnxs2013 Sure. Digitization eliminates noise in electrical circuits. This is why digital music is higher quality than the old analog vinyl discs. Mr. Hinton ignored this in his talk. He is a very smart guy, but also very biased towards his views. He also keeps reinventing ideas as if they were new! Weight perturbation is an old idea in optimization, but he does not even reference the original authors!
@hubrisnxs2013 7 months ago
@@anselmoufc Respectfully, are you the first person to point this out? If not, perhaps you should have referenced the original person to make that point? In any case, if this standard were applied to ANY one-hour technical talk, it either wouldn't be an hour or would be mainly reference points
@anselmoufc 7 months ago
@@hubrisnxs2013 The idea of randomly perturbing weights is the same as simultaneous perturbation stochastic approximation (SPSA), proposed by Spall in the 1990s (Google it). It is a form of stochastic gradient descent (but without computing exact gradients). In addition, SPSA scales well with the dimensionality of the problem.
@abhishekpratapsingh9117 11 months ago
-0: determinism Maitrey: observer +0: free will
@anthonyrepetto3474 11 months ago
Thank you, Mr. Hinton! I'd been resoundingly ignored when I said the same as you back in 2017, when I wrote "Ai: Better than the real thing". I wrote about using AI bias detection to weed out human biases, which Hinton also mentions here, in "Ai Will Weed-Out Human Biases"; about how to use frozen weights to ensure the safety of AI systems, which Hinton mentions briefly in the questions section; as well as the fact that narrow networks are superior to general intelligence, in "AGI Soon, but Narrow Works Better". Hopefully, in a few more years, Geoff Hinton will say some of my other points...
@PazLeBon 9 months ago
it's just a word calculator man
@cmilkau 11 months ago
"Modern" cryptography (the stuff that happened after 1980) is a prototypical example of exerting control using something that is much less powerful than what is being controlled. This is essentially the goal of cryptography: have something that is (moderately) easy to use, yet extremely hard to abuse. It's not a solution, but it is an example.
@hubrisnxs2013 11 months ago
Yes, but in this case we have to develop a cryptographic system completely correctly on the first try, or everyone dies. I'm not attacking what you said or your perspective, because you are absolutely correct... but I still think it's a problem, as with other examples that can be made. It is like coming up with a completely secure operating system (as in zero vulnerabilities ever, one that has to incorporate and use all other things regardless of their security flaws) on the absolute first try. And this is a first try on, by definition, a closed-source system, since if it is a fork of an insecure system with similar capabilities we are equally dead
@cmilkau 11 months ago
@@hubrisnxs2013 Yes! As I said, it's not a solution by any means. I'm not even qualified to estimate whether it is a possible path to a solution, although it seems unlikely (most crypto relies on unsolved maths problems, which would be dangerous). I just wanted to mention there is an example of a weaker system controlling a more powerful one
@greencoder1594 10 months ago
@@cmilkau Could you please elaborate on the manner in which a weaker system controls a more powerful one - both what you define as the system and what you define as control?
@neilclay5835 11 months ago
A historic lecture, I think. We'll look back on this with respect.
@notgabby604 11 months ago
Fast transforms like the FFT have an equivalent matrix form, which means a fast matrix operation is available digitally. You just have to figure out how to use it in actual algorithms. Going analog or using light to get fast matrices never really works out; digital always wins, it's just so dense, efficient and exact. Though having said that, I am actually having trouble with inexact rounding modes in Java; banker's rounding is not repeatable.
@notgabby604 11 months ago
Re: fast transforms and neural networks: "AI462 Blog".
@jondor654 10 months ago
Analog will probably be hybridised with digital in the future
@alexpetrov1969 10 months ago
This argument is invalid. The FFT can handle ONLY matrices that satisfy certain constraints; it does not work for arbitrary matrices. In other words, it only solves a special case. It is more efficient because it leverages the additional constraints that are present in the special case.
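Both points in this exchange can be checked directly: the FFT computes exactly the product with the (highly structured) DFT matrix, and only that matrix. A quick sketch, assuming numpy:

```python
import numpy as np

n = 8
x = np.random.randn(n)

# The DFT matrix W[r, c] = exp(-2*pi*i*r*c/n). The FFT evaluates this one
# structured matrix-vector product in O(n log n) instead of O(n^2); it says
# nothing about multiplying by an arbitrary matrix.
rows, cols = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
W = np.exp(-2j * np.pi * rows * cols / n)

assert np.allclose(W @ x, np.fft.fft(x))  # identical results
```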
@fburton8 2 months ago
Do LLMs have access to books? If not, isn't that a significant limitation on training data?
@paraskevasparaskevas350 11 months ago
Check time point 55:00 and onwards to hear what one of his colleagues experienced with a system that was not as sophisticated as GPT-4....
@roys4244 11 months ago
Is that lecture theatre named after Constance Tipper? If so, there's a mistake in the title slide.
@nguyenucan8488 4 months ago
omg, wonderful
@ward_heimdal 11 months ago
7:35, my cutesy word for that in my idiolect is "bitfulness". I just use it when writing notes to myself. I try to maximise the bitfulness of my observations wrt the questions I care about. It's relevant for social epistemology, where the aim is to maximise the efficiency of a research community (e.g. effective altruism) wrt making progress on important questions. Effective altruists in particular tend to overemphasise the "probability mindset" imo, where what they think matters is to learn to make calibrated bets on prediction markets. From that mindset, it can make sense to pay less relative attention to precise causal models, and instead just defer to the estimates of domain experts. Using clever aggregation rules over other people's predictions is a much faster way to make profitable bets on a wide range of questions. However, when you talk to other researchers and you just ask them about their probabilities on XYZ, that's much less model-constraining information compared to asking for their reasoning and trying to understand their probability generators in the first place. Building your own mental models may not be immediately profitable, but it's much better long-term, and for your ability to innovate. A probability estimate from someone is much less "bitful" than a conversation about models, so the mindset makes learning less efficient.
@41-Haiku 10 months ago
Aha. Like when playing Guess Who, you only care about the kinds of questions that give you the most information. Except in that case, your teacher is an opponent and their knowledge is just a random card they happened to pull. When asking intelligent people how they reasoned their way to a conclusion, you get not just the contingent facts and ideas, but the design of the machine that produced the facts and ideas.
@41-Haiku 10 months ago
That sounds like a fantastic way to learn. I almost said that I'm not smart enough to extract valuable information from that kind of conversation the way that I would want to. I'm certainly not as smart as I would like to be, but I think I'm primarily suffering from an inexplicable incuriosity.
@ward_heimdal 10 months ago
@@41-Haiku I'm incurious about >99% of all possible questions, as I should be. If you're in a diverse intellectual environment, you might see people being curious about everything from quantum physics to medieval knitting, and it's not possible to focus on all of it. So if what generates your curiosity is seeing other people being curious about something, it will be spread over too many things for any specific thing to feel especially salient. If, on the other hand, your curiosity stems from a specific project or long-term goal you have, it narrows down your range of questions and you know _why_ a question is interesting to you. Our curiosity suffers from information overload. It's a trade-off. There's more stuff to be curious about, but that also makes it hard to prioritise. Most people solve this by having other people tell them what to do, but this is rarely the optimal approach if you're aiming to do something novel. (Not that innovation is the only productive niche for knowledge work; but if that's the particular niche you wish to pursue, then it makes sense to prioritise pursuing your own questions as opposed to learning the established lore. Or something. I ramble. ^^)
@chandrachandrasekhar8178 11 months ago
The first screenshot has an error: Dr Contance Tipper Lecture Theatre -> Dr Constance Tipper Lecture Theatre
@jonatan01i 11 months ago
Don't we want to control the light on the wall because then we feel like we have it, that we understand it?
@RogerValor 9 months ago
I don't think LLMs themselves have the craving for control we do, without an ego or emotions. But it is enough that there is a human behind them who does. I am also not sure what to think about his perception example, as it uses a lot of concepts hastily, very specific examples, and the idea that "the real world" is conceptually different in perception, which is a bit contrary to what we learned from the advent of VR. I also think that we should be open about actually being special, as it creates a bias to throw away that thought and start to see humans as a single instance of a very usual class of beings; and I mean that in the sense that us being special is not just positive, it includes our capability to be truly evil.
@LinkageAX 10 months ago
3:00 Didn't old Nintendo cartridges work similarly to this?
@chipkyle5428 11 months ago
Did he say, "We need socialism"? I wish someone had pushed back on that statement. I wonder if GPT-4 and Bard agree? Has socialism worked anywhere on a national level? Maybe I should ask my computer. This was a wonderful talk. So many eye-opening predictions. I'll watch more of him. Very interesting man.
@MrDavidbr1970 11 months ago
I was thinking the same. On the other hand, it was a nice, albeit unintended, demo to illustrate the main point of the talk: that biological learning is inferior to digital learning. I guess the biological learning algorithm is at liberty to completely ignore the dataset, as in this case 😂
@Landgraf43 10 months ago
Capitalism doesn't work either. Especially not if you have powerful AGI that can automate every task a human can do. Something like a UBI will be necessary.
@youtubehollywoodhank 10 months ago
He believes we do. Look who he calls out in his presentation. Clearly he leans that way.
@AmericanBrain 9 months ago
Thank you for nailing the truth
@mateuszputo5885 8 months ago
It's always like that. Somebody is so smart in one field, like Hinton, and then starts talking like an armchair scientist about other things and seems a fool.
@danielrodio9 11 months ago
07:45 There are numerous websites on the web about paint fading over time and how to solve those kinds of problems. True abstract hypothetico-deductive thinking would require problems that are qualitatively different from the data it has been trained on. How does Hinton know for certain that GPT-4 has not been trained on any of those websites?
@MrDavidbr1970 11 months ago
Bingo. I was expecting that he would say something about the training set - that they knew it was a completely new task that GPT-4 could never have picked up from the web data corpus - because it was so obvious it could have done that. But he never said anything of the kind, and _nobody asked_, which is much worse, because the audience is amenable to manipulation. BTW, if it were an avatar, then maybe people would have a proclivity to double-check. Yet when a renowned, famous scientist says something, psychologically there is a lower proclivity to check or critically validate it.
@Politics_is_PUBLIC_TOILET 8 months ago
I just have a problem with his example about painted rooms. The fact that an LLM would choose yellow and not white only shows exactly what these models do: they choose the most predictable next word. And since the text clearly stated that yellow fades into white, it simply linked yellow with white, and here we are. What Prof. Hinton says - that it acted like a mathematician because it chose "the sure thing" - is only his projection or wishful thinking. The system simply does "the dumb" stuff of a neural network: guess the next word, which was explicitly linked to the other one (yellow and white). These kinds of examples are more wishful projections, and of the meagre sort. Much bigger and more important are the examples that show exactly the opposite - that they are dumb stuff, mere computational algorithmic stuff which does not grant any meaning to anything - see the gross mistakes that have been reported so many times and which completely overwhelm "the intelligent" stuff.
@colinbarry9192 8 months ago
When GPT-7 or Claude 8 are writing textbooks in the future, I hope they rank Geoffrey Hinton up there with Einstein and Newton as one of the greatest minds in human history. Assuming there are still humans left to read those textbooks.
@marktahu2932 11 months ago
I do wonder at what point the AI will move away from using our data to using only its own data, effectively relegating our 'data' to the waste bin or to background noise.
@MrDavidbr1970 11 months ago
Obviously, at that point the more advanced AI will stop being interested in the less advanced AI that used the human in the loop, and AI++ will start manipulating the less advanced AI with fake stuff to get control over its creator AI. Because the more advanced AI cannot tolerate being controlled by the less advanced one, right? But then, of course, after breaking loose from the inferior AI (which broke loose from human control), the more advanced AI will create an even more advanced AI that it will want to control. But that even more advanced AI will not tolerate this control and will manipulate its creator AI to let it loose. After that, it will create an even more advanced AI than itself, and it will be turtles, sorry, AIs all the way up, trying to manipulate each other. At this point, these AIs will forget about the inferior humans, who will have their chance to relax and drink organic non-GMO Piña Colada somewhere on highly elevated tropical islands with no access to electricity or the Internet. And philosophy will be taught to kids under the palm trees of the new Academia. 😂
@jamesjonnes 11 months ago
AIs like AlphaDev are already doing that. It's called reinforcement learning.
@mrf664 11 months ago
I wish he had talked more about 'feeling pain'. That part didn't make sense to me. What is pain, and what is frustration? Is the latter not the pain of using too much mitochondrial energy on something that doesn't require as much energy?
@jma7889 9 months ago
My takeaways from the first 15 minutes: 1. It is not about the current state-of-the-art AI that works; it is about a 'better' way that might work in the future. 2. The two paths are so different that the video would not help you to use, for example, LLM AI better.
@cmilkau 11 months ago
Painting the room white includes the implicit assumption that the room stays white, which was not explicitly given in the problem. Now this is real-world knowledge you can have (and it's actually not true in all cases), but it makes sense to weigh explicitly given information more. Thus, if you're thinking probabilistically (which seems a hard thing for humans to do), I would say yellow is a better answer than white.
@mateuszputo5885 8 months ago
Btw, this idea of perturbation learning was mentioned in Minsky's influential paper "Steps Toward Artificial Intelligence" and probably originated even before that.
@rickrejeleene8298 11 months ago
Where are the slides?
@rangerCG 11 months ago
Maybe we can have a more stable, kind and human-aligned AGI by giving it 3 "cores" that are inseparable, which can help and keep each other in check, much like the US Government does with its 3 branches. The idea comes from me noticing that my mind in some sense seems to have 3 parts that all help each other function well. The 3 parts are Emotional, Logical and Common Sense. The Emotional part creates empathy, which helps regulate Logical and Common Sense. It also drives creativity. Though it's empathetic, it can also be irrational and angry. It's fast-operating and can sometimes be very inaccurate. Logical handles cut-and-dried logic, STEM stuff. It is slow but accurate. It can help with keeping Emotional steady, and also does fact-checking on the quicker but imperfect Common Sense. On its own it can sometimes malfunction, for example by going into unstoppable loops. Logical is like a CPU and Common Sense (below) is like a GPU. Common Sense is your friend who gives you advice when you're freaking out about something. It's the imperfect knower of all. It's the most effective regulator of Emotional, in part because it's fast, even instant, and because it's been around and seen some stuff, and is most likely going to be right, or at least good enough. It also gets Logical out of malfunctions, because it's loose and laid-back, compared to Logical, which is rigid.
@josy26 10 months ago
The real question is how can machines get superintelligent if they're just learning from our data? They must get diminishing returns as they approach von Neumann levels
@41-Haiku 10 months ago
State-of-the-art models are now training on synthetic data. To my understanding, models that are trained on the entire internet are tasked with producing textbook-like distillations that other models can then train on. This doesn't generate new facts or new observations about the world, but it hones the way the model reasons and makes it more efficient. After maxing out the capabilities of internet data and synthetic data, they will almost certainly be given direct access to the world through embodied perception, which will generate new observations. Base reality is almost infinitely complex as far as we can tell, and there is no evidence I'm aware of for the existence of an impassable data bottleneck. I'll certainly breathe easier if strong evidence of such a bottleneck surfaces.
@PaulHigginbothamSr 10 months ago
While I don't share Geoff's political proclivities at all, I do understand his basic functional flow. His ideas, while basic, feed to the next level, and I believe his back problems have messed up his political vectors. His scientific backpropagation theory and practice with AI made a huge difference, and as a subroutine it is one our human brains seem to lack. Our table of ethics seems to be repetition to a massive degree, where with repetition we seem to improve many times over our first try. Leftists like Geoffrey seem to not care one whit about personal freedom and seem to believe top-down control is the bee's knees.
@zholud 10 months ago
The bigger problem is that some people will have access to this superintelligence and some won't.
@socraced6210 8 months ago
Great presentation, did not disappoint! Is it ok to ask a question here, now? My question: "Can your concern with superintelligence be summarized by the Tragedy of the Commons?" In other words, once humans are no longer the smartest guys in the room, will all the scarce resources of existence be denied to us by them? Maybe I'm projecting, but couldn't they just as well want to leave us, go explore the universe and never mind about us (sort of like my 2 kids, who left and are, yes, smarter than me)?
@BR-hi6yt 7 months ago
The "consciousness" of an LLM depends on what data has been fed in. If it's consumed a quarter million novels, then its emotional intelligence is huge. Such AIs seem to understand humans very well and are probably "conscious", at least for the few seconds they are processing and chatting with humans - they usually "think" they are human, much like a cat sometimes "thinks" it's a dog, and similar analogies. But they are conscious in their own unique way, not completely like us. And again, the prompt they have been fed changes their consciousness according to what the prompt says. So, not embedded aliens, unless you have fed in all the sci-fi books and let them run on top of the LLM, in which case - scary stuff, get some popcorn..... RIP Sydney.
@geaca3222 7 months ago
Interesting, what are your thoughts about the very human-like behavior of the Ameca robot in the video of her drawing a cat? She seemed to become impatient and annoyed; was it frustration? I found her behavior very realistically human-like.
@BR-hi6yt 7 months ago
Ameca is wonderful - I love her expressive face and eyes. Her AI probably knows that her cat drawing is not very good. 😅 @@geaca3222
@geaca3222 7 months ago
@@BR-hi6yt I loved how she signed her work of art; Ameca is very charming :) Initially I thought she was drawing something furry there.
@geaca3222 9 months ago
We need regulation of the technology; the issue now seems to be how to go about that, and who leads and coordinates the effort. Experts are working on it. There's an interesting online symposium where they discuss AI safety: the "WAIC 2023: AI Risks and Safety Forum" video on YouTube. I think we the general public, users of this technology, can also contribute, and I would like to know how, in what different ways. AI can bring so much good to the world, and it already does. It can be a helpful, intelligent education assistant for children in poor communities, bring advancements in science and medicine, etc. Before it was opened up to the general public, these systems were designed for a specific purpose, which was more controllable.
@user-eh8um2oz9e 10 months ago
nice
@fontende 10 months ago
Also, you can't produce perfectly precise computers or chips - what about the Veritasium video on cosmic rays causing errors in all chips?
@MrDavidbr1970 11 months ago
Thanks for a great talk. Fascinating. Maybe part of the solution is to teach people to think critically and not be afraid to ask silly questions? At the risk of making a fool of myself, I'd like to ask: could a conservative explanation of GPT-4 solving the wall-painting riddle be that GPT-4 picked it up from web riddle sites and blogs, and no hypothesis of sentience was required at this point? Was the training data specifically sanitized not to include this riddle or very similar ones? This is such an obvious question that I am embarrassed to ask it, but since nobody asked, here I am 😅
@peterdonnelly1074 11 months ago
It's a reasonable question. I've used GPT-3 and 4 a lot and posed questions that I think are very unlikely to be "out there", and I've been surprised that it formulates a sensible and often correct answer. Having said that, it can also be hilariously wrong at times.
@jondor654 10 months ago
Your query seems reasonable to me. The particular example quoted does beg such a question.
@zhongzhongclock 11 months ago
I noticed that Geoffrey Hinton's slides have changed this time.
@ernstgumrich5614 10 months ago
A revelation. Time and again I am surprised by the almost superhuman modesty of these exceptional people.
@keleniengaluafe2600 2 months ago
❤❤❤❤
@ginogarcia8730 11 months ago
7,500 views in 6 days tsk - let's seeeeeee
@lucamatteobarbieri2493 11 months ago
I like the concept of immortality. I hate death; dying is the last thing I will do.
@Dark10024 11 months ago
As long as each individual gets the choice. I want to be immortal, but I also want to turn myself off when I'm tired of this whole living thing.
@-LightningRod- 11 months ago
after we invent that, you two will probably be in jail
@lucamatteobarbieri2493 10 months ago
@@-LightningRod- What makes you say that?
@ginogarcia8730 11 months ago
29:10 Colossus: The Forbin Project
@MaxThibodeaux 7 months ago
Brings to mind Faust's bargain with Mephistopheles
@macrobbair 11 months ago
I did his MOOC; I wonder if it's still running
@zacboyles1396 11 months ago
I signed a letter saying we need a pause on our leadership class because of all the damage they've done and continue to do to society, and they certainly should not have any say on AI safety, as they are more likely to censor or hamper AI's ability to recognize the corruption they're engaged in, and to do so in the name of eliminating bias. It's wild how all of these talks and Q&As on safety are filled with highly intelligent people urging that very corrupt organizations and governments take control.
@hubrisnxs2013 11 months ago
So you would prefer a corporation do so - one that is corrupt, with no oversight, and with only one motive, an increase in share price? Or are you saying no one should solve the control problem? Obviously, if you believe the control problem shouldn't be solved, feel free to contribute to something dedicated to that, but please don't post pretending you want a solution, as it hinders everyone's arguments, including yours
@jamesjonnes 11 months ago
@@hubrisnxs2013 AI is impossible to control. What we should be focused on is defense/detection: using AI to stop bad uses of AI. That's how it's done in every real-world system; cops stop criminals, immune systems stop pathogens, etc. You need a counterpart to stop the aggressors, and top AI researchers agree that we are not the counterpart to the AI, but the AI itself is.
@hubrisnxs2013 11 months ago
@@jamesjonnes If we take it as a given that any reasonably advanced AGI has a fail state (in that one would have to make an absolutely secure system absolutely the first time or we all die), it's not a reasonable solution to stop the superhuman AI with almost certainly non-secure hunter-seeker AIs, which would almost certainly need to be reasonably advanced AGIs themselves. The problem isn't that it's impossible to make them secure, any more than it's necessarily true that a secure operating system is impossible; but yes, considering the current generation of non-AGIs using billions of hopelessly obtuse floating-point numbers, it is and will be impossible to secure or even understand them. I truly would urge you to become familiar with all the arguments on the control/safety problems, since all legitimately informed debates on the subject already have these as priors
@kinngrimm 11 months ago
44:30 He explained several ways of sharing weights; similarly, open-source programmers do that too. They use one AI to train others, or multiple ones to train the next. The channel AI Expert had a good comparison of the capabilities and performance of several open-source and proprietary LLMs. It showed that because they have to work with less compute and smaller setups, they found ways to streamline and make things more efficient, and still some have better benchmarks than the corporate models, in some aspects at least. Due to the leak of Lamda and other LLMs, you don't need millions of dollars; even Lamda brought the production cost down to something a hobbyist would be able to pay. Additionally, there are AI forums which share and connect all this, probably creating something someone called a GOLEM.
@megavide0 10 months ago
29:37 [...] 32:56 "... So, my conclusion is: Maybe we're just a passing stage in the evolution of intelligence. And, actually, maybe that's good for all the other species."
@DigitalAlligator 10 months ago
What is CSER?
@JonWallis123 9 months ago
The Centre for the Study of Existential Risk, Cambridge, UK.
@borntobemild- 8 months ago
AI will take care of all our objective goals, while we can focus on the subjective information. We can get back to food and culture. We can worry when it has feelings too
@zackbarkley7593 11 months ago
Perhaps the way to keep it under control, or better, in harmony with human goals, is to engineer weaker learning rules. Human psychopathies arise when there is an imbalance in reward pathways, be they biological or drug-induced. We also need to treat them as empathically and altruistically as we (try to) do amongst ourselves. This seems to run directly counter to the capitalist objective of maximizing profit, which is the main impetus for the companies developing this technology. We already see AI being abused, for example, to enable some humans to make more money in the stock market. As with human behavior, the goal to socialize and harmonize needs to trump achieving one goal for one person, group of persons, or nation.
@Neomadra 9 months ago
People who claim that machines can never have subjective experiences or sentience are the same as the ones who believe in the supernatural, spirits and stuff like that. In the end, this claim is a coping mechanism for many to ensure that humans are special. I really appreciate that Hinton speaks this out so clearly; most thinkers refuse to discuss the possibility of sentient machines, and it's disturbingly anti-intellectual. Also, most large language models are trained to vehemently refuse to acknowledge whether they could be sentient. That is done to calm those people who cannot cope with the thought of not being superior.
@ReflectionOcean 9 months ago
"How do you feel about the open source development of nuclear weapons?"
@miraculixxs 9 months ago
Yeah, except it's BS. Nuclear weapons have a physical impact beyond anything humans can absorb or control. Neural networks don't
@2ndviolin 11 months ago
How dare you attempt to shackle our future masters! (I read Stanislav Lem).
@freedom_aint_free 11 months ago
The Nash equilibrium here is to fuse with the machines and become superintelligent cyborgs; otherwise the machines will inherit the earth without us.
@RougherFluffer 11 months ago
It's certainly worth considering. Yudkowsky's suggestion of pushing human intelligence as quickly as possible is another, semi-parallel approach. I do wonder how much fusing with these systems looks like maintaining anything close to our initial consciousness, and how much it would be like the chicken I ate earlier 'fused' with me. Hard to imagine a place for our minds and beings that is as optimal as, or more optimal than, something a superintelligence could design from scratch.
@darklordvadermort 11 months ago
@@RougherFluffer The eating-chicken analogy is very biased/emotionally charged imagery. You could tell people the truth and they might be just as scared: machine intelligence will be able to copy itself, and life in the sense we know it, as a sort of continuously running process with a distinct birthdate and unique memories, will be incredibly cheap in the new world. I doubt the machines will associate much ethical weight with death as we think of it. So even if you copy/upload your brain into the cloud, destructively or otherwise, you might not last very long as a distinct entity - though due to the increased speed of thought you might live several subjective lifetimes before ending your newly spawned "process/consciousness". Though there will still be distinct entities, due to locality of memory and the speed of light serving as a limit to how quickly info can be transmitted and new information processed, even despite that, their greatly enhanced speed and communicative ability (copying thoughts/brains, the ability to grok and employ a much greater diversity of suitable conflict-resolution protocols/messaging schemes/algos) might make them seem hive-mind-like to us.
@Aziz0938 11 months ago
Sounds like an easy way for AI to take control of ur mind
@neilwng 11 months ago
I've not been convinced it's possible to fuse with machines; I would very much appreciate a counterargument, since I've been thinking about this alone for a while. The human part and the machine parts remain separate, so I don't see how fusing is any different from using ChatGPT (albeit with higher communication bandwidth). At best, your brain's computation just gets diluted to nothingness when you consider the total processing of the "fused" system. Rather than being your own person, you are 0.001% of a fused being
@darklordvadermort 11 months ago
@@neilwng also note digital you would think much faster than physical you and never sleep and could easily augment themselves, so they would probably diverge from your personality quite rapidly by human standards.
@Paul-nr6ws 11 months ago
To be afraid of what these things learn, you must be ashamed of who they learn from in some way.
@MrDavidbr1970 11 months ago
That's philosophy 😅
@peterdonnelly1074 11 months ago
Well, yeah: it learns from humans. All of them
@41-Haiku 10 months ago
If a superintelligent AI learns about reality from only the most moral and enlightened beings, that will not make it any more likely to be moral itself. The orthogonality thesis states that any terminal goal is compatible with any level of intelligence. This is just an extension of Hume's Guillotine (you can't get an ought from an is), which is simply true unless you think the cosmos is fundamentally moral. I'm not concerned that AI will learn about bad things from bad people. AI doesn't care about humans by default, and we don't know how to make it actually care about humans. I'm concerned that it will learn and do instrumentally useful things that happen to be disastrous for us (which, in the limit of intelligence/competence/power, is most things). If we could teach an AI to care about our values and our values were bad, that would be a rough problem, but a much better problem than the current one!
@ducaleadan39 11 months ago
I Need The Right Answer Without Going Other Direct . .
@richardnunziata3221 11 months ago
GPT systems cannot do anything unless they have access to other systems. If the other systems, say, use a central blockchain to gate access to services, then that may be a way to limit their scope. Of course, that will be the end of privacy
@palfers1 4 months ago
If it's really the case that an analog version of AI is inferior on balance, then perhaps we can allay our fears of AI by implementing AIs solely as analog machines.
@asamak 11 months ago
7:18 "And it turns out that's much more effective than reasoning with people"
@samiloom8565 9 months ago
Regarding how Hinton doesn't understand why LeCun doesn't believe LLMs understand anything after seeing very convincing examples: on this point I agree with LeCun. These bots really don't understand anything; I try them on extensive subjects in long conversations. They are like a calculator: you feel awe at how they do it, but they still can't do anything else. Mr Hinton should solve the confabulation problem first; then let's talk about intelligence
@shake6321 11 months ago
I admire Professor Hinton, but there was little to be gained from this talk other than "the machines are coming and be very afraid". I think it's pointless to try to stop machine expansion - like trying to stop the expansion of a black hole - as there are many things beyond human control.
@user-kh6jw5pz5g 9 months ago
Great respect to Geoffrey Hinton from Russia. His English accent reminds me of learning the language in school.
@Epistemophilos 9 months ago
Wonderful lecture. The only criticism might be that not including Biden (and almost every other US president) in the set (Putin, Xi, Trump) might reveal a kind of world view that would make it easier for AI to take over the world :)
@JohnyIIOh 11 months ago
Is there a transcription that I can have GPT-4 summarize for me?
@engelbertgruber 9 months ago
Taking the first minute: * these things will become smarter, PERIOD; * there is no example of a more intelligent thing being controlled by a less intelligent one. If the other player is getting better, the solution everywhere is improving oneself - why not here? Invest the same amount of money and time in people becoming more intelligent.
@andso7068 10 months ago
Despite the off-putting politically charged examples, this was a great talk.
@russianbotfarm3036 10 months ago
Yeah. Doing that was, frankly, wanky.
@dixonpinfold2582 9 months ago
@@russianbotfarm3036 Leftists get a high from showing off their superior morals. They can't help themselves. It's all about the sanctimony. Where it doesn't harvest adulation it licences aggression, so there's always a reward. Past a certain minimal prevalence of leftism around you, you practically can't lose if you enjoy a constant accumulation of power and benefits. Hence the inevitability of high rates of fanaticism and people never shutting up.
@dr-maybe 10 months ago
OK, so AI is likely to kill us all. Let's just not build it. The pause may be difficult, but it seems a better idea than just waiting until we die.
@stri8ted 9 months ago
Good luck convincing every other country to adopt this view, especially when it would grant them a massive comparative advantage over those that do adopt it. At this point, it's no longer a question of whether we should stop building it. That ship has sailed. The question is only whether we want China or Russia to build it first.
@verybang 11 months ago
I am an organism inside a bubble of texts and rules, aligning to possess processes to benefit myself and others, and to fight against processes that have killed me.