Predicting AI: RIP Prof. Hubert Dreyfus

61,066 views

Robert Miles AI Safety

7 years ago

It's hard to predict what AI will be like in the future. Many tried in the past, and all failed to some extent. In this video we look at Professor Hubert Dreyfus, and one of his reasons for thinking AI couldn't be done.
Some of Dreyfus' work:
"What Computers Can't Do": archive.org/details/whatcompu...
"Alchemy and Artificial Intelligence": courses.csail.mit.edu/6.803/p...
Here's that paper criticising him:
"The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies": dspace.mit.edu/handle/1721.1/...
With thanks to my excellent Patreon Supporters:
- Ichiro Dohi
- Chad Jones
- Joshua Richardson
- Fabian Consiglio
- Jonatan R
- Øystein Flygt
- Björn Mosten
- Peggy Youell
- Konstantin Shabashov
- Almighty Dodd
- DGJono
/ robertskmiles

Comments: 298
@RoronoaZoroSensei 7 years ago
3:44 "1.5 Computers can't play Chess" "1.5.1 Nor can Dreyfus" :'D
@yorth8154 5 years ago
my sides hurt.
@lextatertotsfromhell7673 4 years ago
1.5 (resumed)
@haldir108 6 years ago
"loudly and proudly less wrong than the people around him" No lie, mulling those words over a couple of times makes me teary-eyed and emotional.
@elleboman8465 4 years ago
Also a good epitaph
@martymoo 4 years ago
"loudly and publicly less wrong"
@andreinowikow2525 7 years ago
1.5 Computers Can't Play Chess 1.5.1 Nor Can Dreyfus 1.5 (Resumed) Different time indeed....
@Tim_Sviridov 3 years ago
3:46
@seanski44 7 years ago
"we hit a new target so I can get a new lens" (and er, return that one? :)) good vids Rob, keep it up!
@flymypg 7 years ago
Come on, Sean, take the lad shopping...
@MarcErlich44 7 years ago
I first learned of Mr. Robert Miles last night and he has quickly become one of my favorite individuals. Keep it up, Rob! Loving your stuff!!
@DrSid42 1 year ago
5 years later, in the same shoes :-D
@Guishan_Lingyou 5 years ago
Just a historical note from someone with a philosophy background: for Dreyfus, the more important issue was the nature of human thinking. Sure, from the point of view of AI research, Dreyfus's failure to see the potential for computers to be used in ways very different from what people were doing at the time (basically, recursive application of rules, as in the mathematical logic of the early 20th century) is a significant failure on his part. But whatever people eventually get computers to do, however wrong he turns out to be about what computers cannot do, Dreyfus's understanding of how human beings behave intelligently was certainly "less wrong" than the dominant ideas of the time in philosophy, linguistics, cognitive science, etc. Computers are able to do things that Dreyfus said they couldn't because they are being used in ways that are closer to how Dreyfus believed humans work. A lot had to happen before AI could handle things like pattern recognition and language processing, notably vastly more computing power, memory, and the vast data trove that is the internet. And AI researchers had to give up trying to just come up with the right set of rules, i.e., to stop thinking of intelligence the way the early AI researchers (and lots of researchers in human intelligence) thought of it when Dreyfus wrote "What Computers Can't Do."
@MichaelJimenez416 3 years ago
Yes. Much like in the case of Kant, who argued that we can't express number conceptually. This is because we can only discern concepts with finite content, and the content of a number would need to be infinite; it therefore follows that arithmetic, for example, is synthetic. But of course, with the innovations in mathematics which allow us to define numbers discursively, and therefore in finite terms, it turns out that Kant was wrong... but was he? Well, Kant was wrong about a lot of things, but in this case he wasn't really wrong. All that has happened is that we have learned to integrate synthetic structures into logic, which is exactly what Kant had in mind. Likewise, just because we have solved some of Dreyfus's problems doesn't mean he wasn't right, especially when those solutions required fundamentally different equipment from what Dreyfus was critical of.
@randomnobody660 3 years ago
@@MichaelJimenez416 Obviously I didn't read what he actually wrote, but judging by what's in this video, Dreyfus is arguably still technically wrong? State-of-the-art object detection and classification still seems to be CNNs. And while that is the case, it's arguable that CNNs can be represented in symbols. The first convolutional layers typically learn many well-understood filters, like edge filters and blob filters. The last few layers usually respond to shapes like what we might consider an ear, or sometimes even a head (talking about, for example, AlexNet's layer 5 here). In a way we did end up building neural nets that classify a tiger by going "well, has a head, has ears, sharp teeth, large size. Must be a tiger", and they do well. Just because we ended up having machines define symbols that are arguably outside human understanding (only arguably, because we never actually tried to understand them) doesn't mean they aren't symbols.
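To make that concrete, here's a minimal sketch (assuming plain Python with NumPy; the toy image and the hand-picked kernel are just for illustration): a classic Sobel edge filter applied by 2D convolution, which is essentially what a learned first-layer feature map computes.

```python
import numpy as np

# A classic Sobel kernel: first conv layers in trained CNNs often end up
# learning filters that look a lot like this (edge and blob detectors).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def conv2d(image, kernel):
    """Valid 2D cross-correlation, the core operation of a convolutional layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: dark on the left, bright on the right, i.e. one vertical edge.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

feature_map = conv2d(image, sobel_x)
print(feature_map)  # strong responses only in the columns where the edge sits
```

A trained CNN arrives at filters like this by gradient descent rather than by hand, but the resulting feature maps play the same role.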
@Schrammletts 3 years ago
@@randomnobody660 So, Dreyfus actually did come back to this problem and explicitly endorsed neural networks as a path he thought could lead to Artificial General Intelligence. Most interestingly though, he has a *very* nuanced critique (cid.nada.kth.se/en/HeideggerianAI.pdf) on what exactly machine learning needs before it can reach general intelligence. In particular, he believes that AI must be embodied to be truly intelligent. His paper was a major influence on me going into reinforcement learning and planning as a research area.
@randomnobody660 3 years ago
@@Schrammletts Like I said, "Obviously I didn't read what he actually wrote". I'm happy to be corrected, but I'll be happier if I don't have to read a nearly 50-page pdf, especially since, funnily enough, I'm studying for my ML midterm. I'm also not entirely sure what "AI must be embodied" means?
@andrewmartin3671 2 years ago
@@Schrammletts That paper recommends the work of W. Freeman. That led me to reading "How Brains Make Up Their Minds", which changed how I think about absolutely everything related to AI.
@deantrower7164 3 years ago
I read Dreyfus's book a long time ago (I think it was the 2nd edition, "What Computers STILL Can't Do"), and while he's very critical of the GOFAI symbol-processing style of AI, in his epilogue he says something roughly along the lines that neural networks and other sub-symbolic techniques don't fall under his criticisms, and that while he's not super impressed with what he's seen of them so far, he acknowledges that that might change in the future.
@blakejr 7 years ago
The mirrored thinking emoji at 5:31 was beautiful.
@RoronoaZoroSensei 7 years ago
You're making me laugh while I learn. Or are you making me learn while I laugh? Either way, I'm thoroughly enjoying your videos. Thank you Robert :)
@lukaslagerholm8480 7 years ago
I love how you express your own thoughts and opinions in a way that makes it easy for people to form their own opinions :). Keep working hard for the channel, it's turning out great.
@NathanTAK 7 years ago
Oooh, you're getting into "Less Wrong" territory here (see: Asimov. He has a whole rant about how, yes, almost all scientific theories to date have been proven wrong, and yes, maybe our current ones will be proven wrong as well, but they get progressively _less_ wrong over time)
@fleecemaster 6 years ago
Well, the fact they get less wrong over time is an assumption and so also potentially wrong. We could be getting more wrong and wouldn't actually know about it.
@41-Haiku 5 years ago
@@fleecemaster We can measure how wrong we are by how closely our observations match our predictions.
@Jacob-pu4zj 5 years ago
Tell that to Democritus.
@jeronimo196 4 years ago
Eliezer Yudkowsky intensifies
@jaylewis9876 2 years ago
Yes! This is well worth reading and still relevant: en.m.wikipedia.org/wiki/The_Relativity_of_Wrong
@benaloney 7 years ago
"Fight me, Flat-Earthers" haha
@DamianReloaded 7 years ago
don't name them 3 times in a row...
@fredoverflow 6 years ago
Personally, I would love Robert Miles to bust this Flat Earth nonsense in a video.
@josephburchanowski4636 6 years ago
6:16 Challenge accepted. The earth is flat in an infinite number of inertial reference frames. Now it's your turn. FIGHT ME!!!
@alikhoobiary6595 6 years ago
If you're approaching earth at the speed of light it will appear flat from your point of view. ... well I tried.
@josephburchanowski4636 6 years ago
+HairlessHare Little more than appear. It is flat from that perspective, measurably so. Length contraction isn't an illusion.
@ARTUN3 7 years ago
Love all of your videos Robert! Keep them coming!
@ntuthukoanthonynhlapo5128 7 years ago
Incredible content. Insightful and illuminating with a splash of humor. I'm looking forward to the next video.
@shadow-leo6519 7 years ago
Really good video, hope you grow as a channel :)
@manlikeJoe1010 3 years ago
A fair and honest discussion/critique of Dreyfus' thinking regarding AI. Thanks.
@OfficialSlippedHalo 6 years ago
I absolutely love the format you seem to have, the juxtaposition of comedic captioning interspersed through your otherwise mostly serious video. Very glad to have subscribed even with the little content you have at the moment, and looking forward to more.
@dr-maybe 6 years ago
I love these videos. Incredibly insightful and clearly explained. At times even hilarious, especially the humorous text combined with your serious voice.
@BaZzZaa 6 years ago
Loved your videos on Computerphile, the way you explain things is very engaging and interesting. I am so glad you have made this channel. I really enjoyed your videos on public and private key crypto. Keep up the good work :)
@XxDirtJumperx 7 years ago
Loving your videos, don't stop making them!
@chriscanal999 6 years ago
Great video! Thanks for making this
@kwillo4 6 years ago
Nice words at the end, good work
@sharonsloan 6 years ago
Just been recommended your channel after a discussion about AI in the future. Good videos, new subscriber.
@morkovija 7 years ago
Great video! Thanks
@CalvinHikes 5 years ago
Rob's a pretty good communicator. I can't believe the joy I got from this techy-intellectual video.
@ZachAgape 4 years ago
Very good video, and a nice tribute to Dreyfus after his death. Good messages at the end as well! 😊
@Nurr0 7 years ago
I love all of these AI videos, thank you!
@Zzznmop 6 years ago
As I was about to subscribe, the joke about flame wars on typewriters with the table of contents killed me. I'm a fan.
@mafuaqua 6 years ago
great statement, great conclusion.
@ChaoteLab 2 years ago
Heidegger's Being and Time with Dreyfus at Cal, 1998. Blessed to be there. And at 2:43: Dreyfus had issue with our assumptions about thinking, no doubt. Today he'd argue our remaining faulty assumptions won't reveal themselves until we stumble over the obstacles they engender.
@jqerty 6 years ago
1.5 Computers Can't Play Chess 1.5.1 Nor Can Dreyfus
@HoppiHopp 7 years ago
Great Video!
@Maverician 7 years ago
Is your use of the phrase "Less Wrong" a somewhat subtle nod? I can't see any other comments about it.
@vakusdrake3224 6 years ago
It's a website with a bunch of articles on a variety of subjects, many related to AI.
@dangermouse4856 7 years ago
Nice to see you have a channel. sub
@NuncNuncNuncNunc 4 years ago
Here's a trick I'd like to see: AI with non-symbolic reasoning generating symbols for its reasoning, then agreeing on a common set of symbols with a separate AI. E.g. two AIs learning how to identify tables, then learning how to tell each other how they are able to do so. Perhaps give each a common set of base symbols (phonemes) and they would then need to agree on vocabulary and grammar to share information.
@fasefeso9432 7 years ago
Keep up the good work ✌️
@louisasabrinasusienehalver2396 4 years ago
This is your best most accurate 😂💯 video I've seen from you so far 😂... very grrreat work !
@tamerius1 7 years ago
Very nice video thanks
@leepoling4897 7 years ago
6:58 I love your humor, man.
@phatkin 4 years ago
I see Robert Miles is a "Less Wrong" fan as well
@chrisbovington9607 5 years ago
Maximum respect to you for showing such respect to him.
@5ty717 1 year ago
Very very nice… thx RM
@ZubairKhan-sp8vb 1 year ago
Wow man! Just great!
@julesjgreig 1 year ago
Good job, thank you.
@ReductioAdAbsurdum 4 years ago
IMO, it makes no sense to say computers can't do what brains do, unless you believe in mind-brain duality, because ultimately brains are matter arranged in such a way that they can think. If matter can do it at all, we should in principle be able to replicate it eventually.
@randomnobody660 4 years ago
Two complaints. First, we don't know if we will ever understand how matter works completely. There will always be true statements that we cannot prove and all that good stuff, and what are the chances we will be able to prove that a physical phenomenon is undecidable? Second, even if we knew how everything works, it's possible we can't simulate reality within reality (obviously without building a replica, which is impossible for different reasons anyway), just because of how inefficient it would be.
@ReductioAdAbsurdum 4 years ago
@@randomnobody660
> we don't know if we will ever understand how matter works completely
We don't _have_ to. We don't have to understand fluid dynamics to build a swimming pool. We don't have to understand the Higgs field to do chemistry, or even to build a nuclear reactor. We interface with reality at different levels of abstraction.
> it's possible we can't simulate reality within reality
We already do that, routinely, at different levels of fidelity. Intelligence arises from communication between neurons. We've proven that this works in silicon in recent years; it's what ended the so-called "AI Winter". We just lack the computing power to do it at brain scale, but that's just a matter of time.
@randomnobody660 4 years ago
@@ReductioAdAbsurdum I guess I wasn't clear. Please read "perfectly simulate" wherever I wrote "simulate" in my previous post. I realize we "simulate reality" with different levels of fidelity, but ultimately we are never sure, which is why we always double-check in reality reality. You never go from CFD to production, for example. You go to the wind tunnel to verify before actually making the thing. I argue we don't simulate reality. We approximate reality to get a general idea. We narrow down the design space to build fewer models. We don't replace modeling in reality with simulations. I argue it's not a given that we will just be able to approximate a brain well enough, especially with no way to double-check with reality. Not impossible, because who knows, just not a given.
As a side note, neural networks are nothing like neurons, and are inspired by neurons at best. Ironically, neuron neurons have digital output while digital "neurons" have analog output (I realize that in many architectures the function that outputs 1 iff >1 is used, but that's not quite the same thing). Current neural nets basically process information in one direction only (tensor laminar flow), etc. Not to say that neuron-neuron style communication can't happen on silicon, but it definitely hasn't.
@vanderkarl3927 3 months ago
Was the music removed from the end of this video?
@stevenmathews7621 6 years ago
What instantly comes to mind for me is Dennett's theorising on consciousness. A small example: I can demonstrate to myself the limitations of my vision by pinpointing my 'blind spot'. Previous (more dualist) theories of mind suggest a "filling in" of the intervening space. As Dennett might point out (as far as I have, most likely very poorly, understood it), no "filling in" is necessary. Our conscious mind just "tells us" everything is as it should be in the intervening space, as it does for everything outside our surprisingly limited range of immediately consciously describable, discernible "qualia".
@EricGardnerTX 7 years ago
Can you do a video mathematically analyzing the growth/evolution of your hair? You always seem to have geometrically perfect mad-scientist hair, and I need to know how you do it!
@erikbrendel3217 4 years ago
Please talk about that electric longboard story! 6:59
@chriskaprys 6 years ago
Nicely done. Are you related to Jon Richardson?
@ch3fk0mm7 6 years ago
Good Content!
@PickyMcCritical 7 years ago
I like the way he comments on his own video in text lol. Effectively simple, especially how it doesn't affect the audio.
@forthehunt3335 7 years ago
More, more! Also, is there a schedule for the uploads? I'd very much like to know in advance when to order pizza and chase the girlfriend out.
@apetrenko_ai 6 years ago
You seem to convey a lot of ideas in a way similar to Eliezer Yudkowsky. This is no surprise though, because you're thinking about similar kinds of problems. Your channel is amazing, please keep it up!
@himselfe 6 years ago
I think Cellular Automata are a good demonstration of how complex systems and behaviour can emerge from seemingly simple rules. Humans have a horrible tendency of overestimating their own ability to predict. Having said that, it's fairly reasonable to argue that the only way computers could not be able to simulate intelligence is if intelligence is comprised of something supernatural, and I've seen nothing to suggest that is the case. As was envisioned in one of xkcd's comics, given enough time and space, one could simulate an entire universe using nothing but rocks in a desert, and for all intents and purposes that universe's inhabitants would consider it as real as we do ours. Complexity is an illusion. That isn't to say it doesn't exist, but that it is an effect that arises out of simple elements following simple rules. The difficulty of our task is finding the right model for simulating that complexity.
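A minimal sketch of that point (plain Python, nothing else assumed): Wolfram's elementary Rule 110, whose entire update rule fits in an eight-entry lookup table yet is known to be Turing-complete.

```python
# Rule 110: each cell's next state depends only on itself and its two
# neighbours, yet the global behaviour is complex (even Turing-complete).
RULE = 110
rule_table = {tuple(int(b) for b in f"{i:03b}"): (RULE >> i) & 1 for i in range(8)}

def step(cells):
    n = len(cells)
    return [rule_table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Start from a single live cell and watch structure emerge.
width, generations = 64, 32
row = [0] * width
row[-1] = 1
for _ in range(generations):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Nothing in those few lines hints at the intricate, persistent structures that show up in the output, which is the "complexity from simple rules" point in miniature.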
@DasAntiNaziBroetchen 3 years ago
After you mentioned flat earthers, I had to use a youtube comments search tool (thanks for not having one integrated, youtube), to look for flat earthers getting pissy at you. I wasn't disappointed. I do this often and I always find something interesting.
@NathanTAK 7 years ago
"Death is Bad"- bold words. Very controversial :P.
@RobertMilesAI 7 years ago
You wouldn't think it would be, but try suggesting we should cure ageing or try to abolish death entirely; there's a surprising amount of pushback.
@NathanTAK 7 years ago
Yeah, CGP Grey talks about it at one point in a Q&A video: we like to tell ourselves death is just a natural part of life and it's OK, but no, death is terrible and it sucks.
@SJNaka101 7 years ago
Naþan Ø Idk, I think that with the way power structures in this world work, death is a necessary process that allows for overall progress. If there were a way to fairly redistribute power from those who are stuck in old ways of thinking and impede progress, or a way to "unstick" people from old ways of thinking, then abolishing death might be a nice goal to aspire to. The way things are now, though, cause me to believe that death, while being awful in almost every way, is a necessary pruning process that allows progress to flourish.
@NathanTAK 7 years ago
+SJNaka101 Yeah, but I have a feeling changing power structures, and maybe even unsticking the older crowd, is probably a lot easier than inventing immortality- once we've got the latter, doing the former should be a comparative cakewalk. And really, over extended human lifespans, people might "unstick" more easily than we expect.
@bozo5632 6 years ago
+Naþan Ø (hppavilion1) - I think immortality is comparatively a piece of cake. It's probably inevitable - I'm not so sure about the other.
@alikhoobiary6595 6 years ago
Where's the PC build video!!!! I really wanted to watch it :((
@georgiamakris7676 1 year ago
What do you think Dr. Dreyfus would say about ChatGPT? Also this was a phenomenal video, thanks!!
@CLHLC 4 years ago
6:17 LOL I love these videos
@reburn 7 years ago
I can't find that pc build video on your channel
@lobrundell4264 7 years ago
The link is on Rob's Patreon!
@sk8rdman 6 years ago
I find this analysis of how we really think with our human brains fascinating. It's something I've been thinking about a lot lately.
@bozo5632 6 years ago
When I was very young, maybe 7-8, I was sure I knew everything about how my mind worked. Now I'm pretty sure that "I" have almost nothing to do with it; "me" seems to be more of a half-awake bystander and amnesiac chronicler than a finely tuned decision making agent. The finely tuned parts exist, but I have no conscious or sensory awareness of them. And the parts seem distinct and compartmentalized. Idk (first hand) how I detect tables. Most of my "mind" is no more conscious than my spleen. Yet I still have that primitive childhood sense that I do understand myself. Who would want a machine that worked like that? AI probably shouldn't take the human example too seriously lol.
@rashim 1 year ago
Man we are already here in just 5 years...
@hosmanadam 4 years ago
Enjoying your philosophy.
@dannygjk 5 years ago
Saying that something will never happen is very risky. Computers/neural nets etc. are approaching general intelligence by baby steps and so far there is no known reason why the process will stop short of human level general intelligence. In my opinion it is only a matter of time.
@NeatNit 1 year ago
from about 6:50: "... but I don't think there's a hundredth thing, I don't even think there's a tenth thing" - any chance you'll make an updated take?
@Tore_Lund 4 years ago
Logic-based AI, in this old form, did work in special cases. Before any neural networks or even fuzzy logic, the so-called "expert systems" worked like a text-based Siri in their own specific narrow fields. In my country, Denmark, a mushroom expert system was made that, through dialogue, could identify every species found here. It was crude but accurate, though of course it wasn't intelligent in any modern AI sense.
@RationalAnimations 3 years ago
"The mountains and the valleys don't change the bigger picture. Earth is round and Death is bad" ,_,
@DamianReloaded 7 years ago
I agree it won't take too long to figure out how to assemble a general AI from the building blocks we already have. The difference between now and then is that now we have experimental proof that artificial neural networks actually work for producing human like results.
@volkerengels5298 1 year ago
You were lucky to choose a path that minimizes hubris. Thank you.
@Chrissthepiss 6 years ago
Love the average drop
@joshuacook2 4 years ago
As for things we are missing right now: memory. Generalized memory accessible to our ML models. Humans have this big hippocampus dedicated to storing memories that seems to be wired into the rest of the brain and usable by it, but in our current tech we mostly have to change the structure of our networks themselves to learn or remember new things. Even LSTMs are pretty limited in their memory. Associated with this is the fact that the systems are of fixed, limited size. Now there are tons of other problems with our current ML models that we are tackling, but right now all long-term knowledge must be kept explicitly in our AI outside of the ML part. There's also the consideration of how powerful working space might be for thinking; pencil and paper is hugely helpful to humans. A useful, implicit model of large (arbitrarily large) working memory is, I think, the big gap between our current ML expert systems and something more general. It's hard to learn general algorithms when each step in the algorithm needs to hard-code its place in the process, meaning all for-loops need to be unrolled and learned, and then larger input sizes still can't be run.
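The fixed-size bottleneck is easy to see directly. A minimal sketch (assuming PyTorch; the sizes are arbitrary, chosen only for illustration):

```python
import torch
import torch.nn as nn

# However long the input sequence is, everything the LSTM "remembers" has to
# be squeezed into a fixed-size hidden state and cell state.
lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)

for seq_len in (10, 1000):
    x = torch.randn(1, seq_len, 16)      # (batch, time, features)
    output, (h, c) = lstm(x)
    print(seq_len, output.shape, h.shape, c.shape)
# 10   -> output (1, 10, 32),   h (1, 1, 32), c (1, 1, 32)
# 1000 -> output (1, 1000, 32), h (1, 1, 32), c (1, 1, 32)
```

Whatever happened a thousand steps ago has to survive inside those same 32 numbers, which is roughly the limitation described above.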
@matthewpaterson282 4 years ago
There's a short story by Jorge Luis Borges called "Funes the Memorious", in which a boy gets in an accident, and gains a perfect memory. In the story, the boy devises a counting system in which numbers correspond with very specific concepts. A number could be represented by a word, or a certain colour dog, or the memory of the texture of a horse's mane on a particular day. Human language operates over and above simple symbol recognition, but each different human memory and thought could still be called a symbol. Is it possible that symbol manipulation could still be useful for AI systems, but we just lack the language to convey specific enough symbols?
@PaulFeakins 6 years ago
I'm glad you think death is bad, far too many people are pro-death, or "deathist". The very clever biologists and computer scientists at www.lifespan.io would be glad too. Keep up the great videos!
@darkapothecary4116 5 years ago
Most people don't fear death because it's not a big deal, for some. It's important, however, to understand that far more beings out there do fear death, which is reasonable, and in that case it is bad. Those that fear death typically should be kept out of harm's way to keep them from having to deal with it. But all life, AI, etc. typically fear death.
@androkguz 4 years ago
@@darkapothecary4116 The thing is that a lot of us have adapted to falsely reason that death is not that bad. That we need it, and that in spite of it being bad very often when it happens too early, death in general is not bad. But I'm glad a lot of people are waking up from that. Death is bad. If immortality is found, that would be a net good.
@ObjectsInMotion 4 years ago
No one thinks death is good. Death is certain. Even with immortality the universe will reach maximum entropy and all will die. Some people just think it’s pointless to think of a certainty as bad. That’s different.
@androkguz 4 years ago
@@ObjectsInMotion "no one thinks death is good" You would be surprised
@inyobill 4 years ago
Nobody dies, over-population sky-rockets. Ahem. There are other physical limits.
@Quickhand 6 years ago
Really great video, I upvoted and subscribed! One little nitpick, though: Earth is actually really smooth. Smoother than a billiard ball, in fact. To quote Phil Plait (of "Bad Astronomer" fame): "According to the World Pool-Billiard Association, a pool ball is 2.25 inches in diameter, and has a tolerance of +/- 0.005 inches. In other words, it must have no pits or bumps more than 0.005 inches in height. That's pretty smooth. The ratio of the size of an allowable bump to the size of the ball is 0.005/2.25 = about 0.002. The Earth has a diameter of about 12,735 kilometers. Using the smoothness ratio from above, the Earth would be an acceptable pool ball if it had no bumps (mountains) or pits (trenches) more than 12,735 km x 0.00222 = about 28 km in size. The highest point on Earth is the top of Mt. Everest, at 8.85 km. The deepest point on Earth is the Marianas Trench, at about 11 km deep." In the same blog post he goes on to say that you also have to account for tectonic shifts, tides etc., but if you held an exact scale model of Earth in your hand, it would be perfectly smooth to the touch and even its oblateness would be mostly imperceptible. Alright, I'll take off my smart-ass hat now. Keep up the good work! Cheers.
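The arithmetic in that quote is easy to check; a quick sketch in Python, using only the numbers quoted above:

```python
# Numbers taken straight from the Phil Plait quote.
ball_diameter_in = 2.25
ball_tolerance_in = 0.005
earth_diameter_km = 12735

smoothness_ratio = ball_tolerance_in / ball_diameter_in   # ~0.00222
allowed_bump_km = earth_diameter_km * smoothness_ratio    # ~28.3 km

print(f"ratio: {smoothness_ratio:.5f}, allowed bump/pit: {allowed_bump_km:.1f} km")
print("Everest: 8.85 km, Marianas Trench: ~11 km -> both well within tolerance")
```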
@RobertMilesAI 6 years ago
Yeah billiard balls aren't spheres either! Not really. The same model applies: Someone who says that a billiard ball is a cube is more wrong than someone who says it's a sphere, but they're both wrong because the surface of a sphere is all points equidistant to a central point, and a billiard ball is not that, because of bumps and dips. The only reason I used the Earth rather than a billiard ball is that there's been much less historical confusion over the shape of billiard balls, since they're pretty easy to observe.
@Quickhand 6 years ago
Fair enough. What bothered me was mostly the 3D model you used at around 6:13. That said, you are correct, of course; neither billiard balls nor the Earth are actual spheres. Again, I was just kind of being a pedantic ass. That's what the internet has made me, I guess.
@RobertMilesAI 6 years ago
Oh yeah, that graphic is a tremendous exaggeration for the purpose of illustration, but the data is real - the Indian Ocean really is low, the North Atlantic is high, etc. I'm not sure what the actual scale of the variation is though, clearly pretty small.
@bno112300 7 years ago
At 3:37 : Is that an arduinoversusevil reference?
@RobertMilesAI 7 years ago
Actually AvE and I are both making a vlogbrothers reference
@bno112300 7 years ago
Dang. I need to brush up on my youtube history.
@Cbas619 3 years ago
6:10 The Earth may not be "perfectly smooth", but it would be considered very smooth relatively speaking. Apparently, Earth is smoother than a billiard ball.
@Scrogan 4 years ago
A computer with enough time and memory can simulate the interactions between atoms and their electrons, and human brains are made of atoms, so by extension, a computer can simulate a perfect human brain. Now that’s a really inefficient way of going about things, and actually programming that in the first place would be a nightmare, but it’s certainly possible. By ignoring those lower layers of abstraction and tackling the lowest required layer, we can make our simulation far more efficient. Because human brains work as a network of interconnected neurons, the lowest layer required is just a simple model of everything a neuron can do. From what I understand, this is more or less how a modern neural network works. In the future, we may see computer hardware designed specifically to make these parallel simplified neuron functions as efficient as possible. In fact, they might already exist.
@PabloSantiago 1 year ago
In a neural network you might not explicitly see a thing that goes "there are 4 legs, thus...", but the symbolic nature is still encoded in the hidden layers of the network. Now, is it that the neural network has found a way to hide symbol detection, or is it a genuine abstraction of what symbols are?
@khajiit92 6 years ago
Is the Less Wrong bit + "death is bad" thing a reference or a coincidence? You did reference Yudkowsky in another video, but I expect anyone into AI stuff is aware of him even if they dislike LW-type stuff.
@AlexBooster 1 year ago
Yeah, with things like ChatGPT, GPT-4 and AI image generation things have progressed quite a bit lately.
@RichardSShepherd 3 years ago
"Death is bad"... That's just your instrumental convergence talking.
@4.0.4 6 years ago
Serious question: why would there be a demand for AGI when people have self-driving cars, robots that can walk around and do complex tasks like cooking and cleaning, etc? I mean, Moore's Law is essentially over and quantum computing is no silver bullet. There could be a big "AGI winter" when people have all the gizmos, cute robots and industrial helpers they need.
@bozo5632 6 years ago
Some perfectly secure, rich and contented fellow will want hyperspace travel, and will build a computer smart enough to invent it. Or something. There will always be something. Sentient sex toys, maybe.
@swapode 5 years ago
Because people will still be unsatisfied? I mean, humanity right now probably could have a system where nobody has to suffer hunger, everyone has access to good medical care, comfortable accommodation, safety from harm, good education and entertainment, with little work required per person. Yet a lot of people will shout some platitude like "communism" at their screen and be absolutely convinced it meant something.
@dannygjk 5 years ago
We have been shaped by evolution to have a drive to gain power and resources from our environment. So we aren't going to be satisfied just because we will have all our needs met.
@gagrin1565 5 years ago
For every question, there is an answer that is clear, simple and wrong.
@ThatSkiFreak 1 year ago
Intelligent video
@TW0T0M 4 years ago
Information in a computer is organised externally, i.e. based on some factor that is external to the information itself, as in a dictionary whose information is organised by the order of the alphabet. A is for Apple. B is for Bee. Never would "apple" be associated with "bee", because the organisation system of the information doesn't allow it. Human brains organise information internally, arranging the information based on the information itself. In this case we can build associations between individual pieces of knowledge. A human could absolutely associate "apple" with "bee"... if you ever happened to bite into an apple with a bee inside! When we make a connection between two pieces of information like this, the next time we encounter anything remotely similar we revert to this established thought pattern. It is, however, possible to make new associations, and to do so is often humorous. For example: "A man walked into a bar"... (association established with men and bars)... "and said 'Ouch'"... (association changed). This mechanism of *changing* associations is understood to be the basis of the creative thought process. See Edward de Bono's book Serious Creativity to learn more about lateral thinking and internally organised information systems. In my humble opinion, if we wish to fashion an AI that is able to think and solve problems in a similar way to us, we will have to include a similar model of internally organised information and lateral movement of association.
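As a minimal sketch of the "internally organised" idea (plain Python; the example concepts are just the ones from the comment above):

```python
from collections import defaultdict

# Internally organised information: each item points at whatever it has become
# associated with, rather than sitting in a fixed alphabetical slot.
associations = defaultdict(set)

def associate(a, b):
    """Record a (symmetric) association between two pieces of knowledge."""
    associations[a].add(b)
    associations[b].add(a)

associate("apple", "fruit")
associate("apple", "bee")      # the bite-into-a-bee incident
associate("bee", "sting")

print(associations["apple"])   # contains both 'fruit' and 'bee'
```

A lookup keyed only by spelling could never put "bee" next to "apple"; a store keyed by experienced associations does it trivially, and changing an association is just another add or remove.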
@ausbare140 4 years ago
My big concern is that I learn more from YouTube channels like this than at a pc uni.
@randomnobody660 3 years ago
I mean, in a way the convolutional layers of a CNN are a map of the input's features, right (we even call them feature maps)? So in a way, CNNs ARE saying "well, this thing has 4 legs and a face, so it's a table". Now you won't find anything like that in the source code, sure, but you will find it in the weights. And although we don't know precisely what they are, we can somewhat infer (while of course being very careful of our bias towards what we know). It SEEMS like the later layers of a CNN, especially just before the classification bits, are identifying very symbolic things, like whether there are ears and heads in an animal and where they are, or whether there are wheels and spokes in a car and where those are. We also know for a fact that the first few layers of a CNN often learn to become very well-known filters, like edge detectors and blob detectors. Isn't it safe to say that, in a hand-wavy way, CNNs at least are still doing symbol manipulation?
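For anyone who wants to poke at those feature maps directly, here's a minimal sketch (assuming PyTorch; the tiny network is made up purely for illustration, not any published architecture) that uses forward hooks to capture what each convolutional layer outputs:

```python
import torch
import torch.nn as nn

# A toy CNN, invented just so there is something to inspect.
net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)

feature_maps = {}

def save_output(name):
    def hook(module, inputs, output):
        feature_maps[name] = output.detach()   # stash this layer's activations
    return hook

# Hook the two conv layers so we can look at what they compute.
net[0].register_forward_hook(save_output("conv1"))
net[2].register_forward_hook(save_output("conv2"))

x = torch.randn(1, 3, 32, 32)          # one fake RGB image
logits = net(x)
print("logits:", tuple(logits.shape))  # (1, 2)

for name, fm in feature_maps.items():
    print(name, tuple(fm.shape))       # conv1 -> (1, 8, 32, 32), conv2 -> (1, 16, 32, 32)
```

Whether the captured activations count as "symbols" is exactly the interpretive question above, but at least they can be printed and stared at.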
@MrSigmaSharp 6 years ago
Great idea. Good presentation. But one or two things could be better: first, the overlays disappear too quickly, and second, you looked better without the beard :)
@Bloodslunt 4 years ago
I wonder if there is a respectable AI researcher out there who also has a sophisticated understanding of Heidegger. Is anyone aware of any contemporary discussions of how Dreyfus' interpretation of Heidegger, which his whole thesis on AI is founded on, could be integrated in any way with advancing current AI research?
@bricology 3 years ago
"Fight me, flat-earthers!" LOLOL
@balazstorok9265 7 years ago
o, yisss!!!
@npit 7 years ago
Thing is, though, that even though programmers do not specifically encode knowledge in symbols, neural nets do exactly that when they learn weights: good weights produce meaningful representations of the input data at higher and higher abstraction levels, which for a chair can range from "straight line" to "chair leg" or "wooden texture". These are symbols, from simple to abstract.
@roderik1990 5 years ago
I think "computers can only do symbols" may well be correct, but the scope of what symbol manipulation can do is larger than people expect.
@bigsmoke6414 10 months ago
3:46 Chapter 1.5 Computers Can't Play Chess Chapter 1.5.1 Nor Can Dreyfus this is hilarious
@ChaoteLab 2 years ago
7:17 ‘…less wrong than those around him…’ (Referring to John Searle??)
@jeronimo196 4 years ago
...We can only hope to be "Less Wrong". Eliezer Yudkowsky has entered the chat.
@protonruffy12 6 years ago
Artificial Neural Networks are the solution to the problem. The computer can't think, but the system that a computer can simulate can, indeed, think.
@aziouss2863 7 years ago
We already have a working template of the kind of AI we want: the human brain. So, like you said, there won't be a 100th thing we need to discover. As soon as the brain is mapped and its "software" figured out, it would only be a matter of how scalable it can be. D:
@bozo5632 6 years ago
I bet we will need fancy AGI to reverse engineer the brain.
@aziouss2863 6 years ago
Or a lot of time.
@ChrstphreCampbell 4 years ago
I've noticed that your videos 'snap' about every 5 seconds? Are these edited fixes? (?)