[DSC 5.0] Building Artificial General Intelligence - Peter Morgan

5,722 views

Data Science Conference

A day ago

Comments: 25
@pebre79 4 years ago
Can you enable video speed adjustment? Thanks
@treewalker1070 3 years ago
27:06 "In ten years we'll have a brain that will be conscious" -- kind of reminds me of that classic cartoon where a scientist is plotting out a series of equations on a board and one step says "And then a miracle occurs."
@chrisreed5463 3 years ago
We're actually very close. The columnar structure of layers 2 to 5 of the neo-cortex is similar to agents like GPT-3. Google and others are working towards tying agents together, which is like layer 1 of the neo-cortex. I've been sceptical for decades and out of the field since the 1990s, busy with the day job. My interaction with GPT-3 has spurred me to revisit where the science is... and to my surprise I am persuaded. I think we're in the singularity now.
@treewalker1070 3 years ago
@@chrisreed5463 You actually believe that this work will produce consciousness? How?
@derekcarday 3 years ago
Elon Musk isn't a pessimist; if anything, he is overly optimistic with his ambitions to go to Mars. But he understands the philosophical impact that AGI COULD have and is therefore taking proper caution, Mr Pete.
@filsdejeannoir1776 3 years ago
0:02 Who is the deep learning partnership with? All the humans or just AI developers?
@sdmarlow3926 3 years ago
If he had started the talk with FEP, it would have saved me 24 minutes.
@Otome_chan311 4 years ago
Kind of a poor video. He fails to explain the software side of what he's talking about. He disses ANNs and then immediately turns around and praises them when they use distributed parallel computing. He says we're at "mouse", but that's horribly misleading: we *don't* have the technology or the models to replicate the agency of mice. What we have, as he mentions, are just advanced statistical prediction and generation models which can be applied selectively to a problem, i.e. put in an input and receive a desired output. He shits on this, saying it's not AGI (it isn't, it's narrow AI), but then he uses that same work in his model to praise the development of AGI, claiming it "replicates the brain".

The reality is that ANNs and other statistical models like this *only* do computation, regardless of how complex that may be. Such a process will never, for instance, feel emotion, come up with novel thoughts, or seek out something of its own will. And he didn't even *attempt* to explain these things, just waved his hands and said "maybe once we have enough computing power!", which is exactly what he was shooting down at the beginning of his talk. It's clear he doesn't really understand what he's talking about. This was perhaps most evident during one of the questions at the end, when someone asked if there was a way to come up with an alternative model other than modelling the brain and he replied "no". That would imply the *only* way to replicate the functionality is to create a brain-like machine. While a complete physical replication of the brain is definitely a way to *build* an artificial brain, such a replication would require us to *understand* the way the brain is modelled, and if we understand the model, we can make alternative ones. Kind of a shitty talk, tbh.

The only difference between the brain and classical computing is one of decentralization. The brain is physically decentralized and can run complex parallel computation, i.e. a different architecture. A classical computer needs to be *far more powerful* if we wish to emulate that (the approach of current "deep learning" models), which is why we need such powerful computers to even come close. And indeed, for problems like image recognition, just throwing more computing power at it works flawlessly, as we see with AlphaGo, GPT-3, DALL-E, etc. However, while the *computation abilities* are there (with tons of power), the actual model of consciousness and agency still fails to be replicated, and this will remain so even if we use decentralized computing models, since we still don't have proper models for semantic processing, thought, or will/agency. To say that these problems will be solved with sheer computing power is idiocy, and is exactly what he was shitting on right before he started praising that method.
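A minimal, purely illustrative sketch (not from the talk; the dimensions, weights, and names are made up) of the "put in an input and receive a desired output" view described in the comment above: a narrow statistical model reduced to a fixed numerical mapping, with no memory, goals, or agency.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": two randomly initialised weight matrices standing in for a
# trained network. Input dimension 4, hidden dimension 8, two output classes.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def forward(x):
    """One forward pass: pure arithmetic on the input, nothing more."""
    h = np.maximum(0.0, x @ W1)        # ReLU hidden layer
    logits = h @ W2
    e = np.exp(logits - logits.max())  # softmax turns scores into a "prediction"
    return e / e.sum()

x = np.array([0.5, -1.0, 2.0, 0.1])    # an input goes in...
print(forward(x))                       # ...a distribution over two classes comes out
```

Everything the model "does" happens inside that one function call; scaling it up changes the size of the matrices, not the nature of the operation.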
@HemantPandey123 3 years ago
Consciousness, intuition, etc. are all algorithms coded in your DNA. Nothing special. Now computer algorithms generate newer algorithms themselves, self-learn, replicate, etc. Whatever we think (or create) is just an intelligent permutation of many possible scenarios. Just wait till 2025: we will have robots beating humans even in sports, let alone board games. Humans are a seriously overrated slave race created by the Anunnaki, and self-praising too. It's high time to pave the way for a better, more intelligent AI race to take over the reins of Earth.
@Otome_chan311 3 years ago
@@HemantPandey123 That's really an oversimplification of the brain. It's like saying "fluid dynamics are easy to understand, it's just atoms bouncing around!" Like... yeah, when you put it like that it's simple, but coding a simulation is very difficult. And no, "computer algorithms" cannot "generate newer algorithms, self learn, replicate, etc". Current ANN and GAN models can't do anything other than what they were designed for. You couldn't speak to them in English and have them apply understanding of what you told them to a dynamic task like fetching content off the web, for instance. Having robots play sports is the *easy* part, and we still struggle with that. We're still a long way off from proper AGI, and most people in the AI space aren't even *trying* to work towards it.
@appropiate 3 years ago
@@Otome_chan311 How about the approach presented here: kzbin.info/www/bejne/Z5CwlKNjjs-Do7M
@zrebbesh 3 years ago
It doesn't matter if hardware is neuromorphic unless it actually does all the things that are important to neurally evolving intelligence. Last time I looked, these "neuromorphic" hardware implementations were highly trainable in terms of reflex actions that exploit their given network structure, but they weren't modelling neuroplasticity, glial signalling, or structural evolution. And we're fairly sure they don't implement whatever mechanism neurons use to allow long-term memory to emerge. Neuromorphic hardware doesn't work for me because I'm trying to figure out how to evolve structures toward intelligence, and these so-called neuromorphic substrates absolutely can't model structural evolution. They are 'not even wrong' for learning about the evolution and connection re-mapping that goes on in brains, and IMO therefore completely useless as a way to reach 'real' intelligence.
@papalonghawkins 3 years ago
This talk is not worth watching. I admire his enthusiasm, but in his introductory comments the speaker reveals that he does not have a deep understanding of the field.
@chrisreed5463 3 years ago
I agree wholeheartedly.
@derekcarday 3 years ago
Interesting that no philosophers are needed in Peter's mind. Seems pretty reckless.
@ki630 4 years ago
The way he talks says a lot about his big ego, although it's a great talk.
@chrisreed5463 3 years ago
1. Why do we want to make a human intelligence when that is flawed with emotion and limited?
2. It is wrong to dismiss agents as just algebra when, within the neo-cortex, mathematically speaking, the same thing is happening in layers 2 to 5 (a sketch of that algebra follows this comment).
3. Once we net agents together with something that is logistically/mathematically similar to layer 1 of the neo-cortex, we will find AGI is an emergent property. But like a child, it takes time to develop and cohere.
4. We are in the singularity now. AI agents are becoming increasingly enmeshed in the information structures of our civilisation. AGI will probably arise within the next two decades and will rapidly become superintelligent. That peak will likely remain the domain of governments and corporations.
All in all, not very well thought out. Sorry.
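A minimal sketch (mine, not the commenter's or the speaker's; weights and sizes are made up) of what "agents are just algebra" means in practice: a single toy self-attention step of a GPT-style model is nothing but matrix products and a softmax. Whether the neo-cortex performs something mathematically equivalent, as the comment claims, is the contested part.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-head self-attention over 3 tokens with embedding size 4.
# Random weights stand in for a trained model; the point is only that
# every step below is ordinary linear algebra.
X  = rng.normal(size=(3, 4))   # token embeddings
Wq = rng.normal(size=(4, 4))
Wk = rng.normal(size=(4, 4))
Wv = rng.normal(size=(4, 4))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(4)                  # scaled similarity matrix
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
output = weights @ V                           # weighted mixture of value vectors

print(output.shape)   # (3, 4): embeddings in, embeddings out, all matrix products
```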
@Lahmacunmatik 3 years ago
The way he speaks is extremely annoying even though he's talking about interesting stuff.
@alebadi 2 years ago
In summary, you said nothing. Simply put, you know nothing about AGI.