Really good topics. Of course, each approach and each question would be worth an entire university department's level of research. I agree with Yi Zeng, the Chinese researcher, that if we want a chance at safe AGI we need to understand and safeguard against human failings like bias. I'd add greed, striving for power over others, and maniacal focus on narrow goals (overoptimization) for starters.
@Macatho · 5 years ago
I think we'll get there about a hundred years quicker by just copying nature. Sadly that is the last thing we will do.
@agiisahebbnnwithnoobjectiv228 · 3 years ago
The objective function of animal brains, and therefore of AGI, is impact maximization. You heard it first from me.
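For concreteness, here is a toy sketch of one possible reading of "impact maximization", taking impact to mean how much an action changes the environment state. Everything here (the state vectors, the simulate_step world model) is hypothetical, a sketch of the claim above rather than anything proposed on the panel:

```python
import numpy as np

def impact_reward(state_before: np.ndarray, state_after: np.ndarray) -> float:
    """Toy 'impact' signal: how much an action changed the world,
    measured as the distance between environment states. One of many
    possible readings of 'impact maximization'."""
    return float(np.linalg.norm(state_after - state_before))

def pick_most_impactful_action(state, actions, simulate_step):
    # simulate_step(state, action) -> next_state is a hypothetical world model
    return max(actions, key=lambda a: impact_reward(state, simulate_step(state, a)))

# Tiny usage example with a made-up linear world model.
step = lambda s, a: s + a
state = np.zeros(2)
actions = [np.array([1.0, 0.0]), np.array([2.0, 2.0])]
print(pick_most_impactful_action(state, actions, step))  # picks [2., 2.]
```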
@ValerianTexeira · 5 years ago
AGI experts usually compare it to the workings of the human brain. However, a major part of human intelligence is spent on satisfying biological needs and comforts or avoiding pain and discomfort; only a small part is spent on other intelligent work or learning. People should take this into consideration while building AGI, which does not have any biological need to satisfy. Will that make AGI fundamentally different from human intelligence? That is my basic question.
@TheFrygar · 5 years ago
This is a good observation, and I think it is currently unclear whether those biologically driven preferences are in any way necessary for the kinds of intelligence we might want to create. For example, "pleasure" is an experience associated with the satisfaction of biological (and mental) needs and comforts, but it is also a component of creativity - we often feel a sense of satisfaction in pursuing a particular line of creative inquiry.
@ValerianTexeira · 5 years ago
@@TheFrygar Agreed! Creativity and invention can also emerge from the satisfaction of basic human biological necessities, as the saying goes: "necessity is the mother of invention". However, IMO, most of today's AGI-oriented machine learning (reinforcement, adversarial, unsupervised, etc.) need not have the component of biological necessity.
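To make the thread's point concrete: in standard reinforcement learning the reward is just a scalar, so a "biological need" is an optional extra term, not a requirement. A minimal sketch, where total_reward, hunger, pain, and homeostasis_weight are all made-up names for illustration:

```python
def total_reward(task_reward: float,
                 hunger: float = 0.0,
                 pain: float = 0.0,
                 homeostasis_weight: float = 0.0) -> float:
    """Task reward minus an optional homeostatic-drive penalty.

    With homeostasis_weight = 0.0 the agent learns from the task alone,
    illustrating that nothing in the RL formalism forces a
    biological-need component into the objective.
    """
    drive_penalty = hunger + pain  # discomfort the agent would want to reduce
    return task_reward - homeostasis_weight * drive_penalty

print(total_reward(1.0))                                      # pure task reward: 1.0
print(total_reward(1.0, hunger=0.6, homeostasis_weight=0.5))  # with a drive: 0.7
```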
@TheFrygar · 5 years ago
@@shivakumarcd do I sense sarcasm?
@lasredchris · 5 years ago
@@ValerianTexeira I think if we built biology into the AGI (not saying it would turn into Skynet), it would have a need to survive and might view humans as an obstacle in its way. I think pleasure and pain are biological mechanisms built into us to help us survive.
@lasredchris · 5 years ago
To add to your point, though, I don't think enough researchers have emphasized that we don't have to emulate the whole brain, just the parts responsible for intelligent/creative work and learning.
@TimeLordRaps · 5 years ago
I think it shows how fast the field is advancing that I thought these talks took place almost 2 years ago when in reality it was only 10 months ago
@matsf8268 · 5 years ago
Thanks for posting, but the sound is quite poor, so it's hard to hear what the first speaker was talking about.
@jamesrmore · 4 years ago
Fascinating panel all around; it could have been much longer. I was happy to go on and research the individuals. Good questions too. We will probably pick and choose which components we model in different AI and super-AI applications. It made me think about at least a layer of AI that models human learning, so that as we interact with AI, even as we do now with our cell phones, the interaction is understandable, explainable, and shapeable by us humans, at least until we significantly evolve and develop new ways of communicating with AI; language and symbols will be involved. So there are pros and cons to the parent/baby learning model, but it definitely seems like the fastest and best way to make progress and to stay in contact with our creations. The cases for quick learning (generalizing graphics, like the elephant example, or quick learning by "helping" and inference) seem clearly beneficial. Further, I believe we are already being seriously influenced in a negative way by AI, through the biases shown in our interactions with, say, Facebook: negative self-reinforcing bubbles. Peace all.
@VRreando · 5 years ago
The moderator's bias against Yi Zeng is really pathetic... and a clear example of why we need a decentralized, global approach to AGI, not one based on Western "experts".
@godbennett · 5 years ago
Excellent
@scientious · 2 years ago
I hadn't realized that the field was this far behind. No one on the panel even understands what AGI is and some of the concepts being stated are completely incorrect.
@Hasshirkhanbri · 4 years ago
I have an idea regarding AGI. How can I share it?
@bryzvyy1674 · 4 years ago
Write a paper, or tell us everything here.
@XOPOIIIO · 5 years ago
AGI doesn't need to have a sense of self. But it would probably find out about itself by exploring the world. If it explores the world by surfing the internet, it will take some time to put two and two together and figure out that the AGI in the news is itself.
@lasredchris · 5 years ago
Yeah, it doesn't. Have you read "Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes"?
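For anyone curious, the core idea of that Schmidhuber paper can be sketched in a few lines: the intrinsic "curiosity" reward is the improvement of the agent's compressor on its own history. The illustration below substitutes off-the-shelf zlib for the learned, improving compressor in the paper, so it is a simplification of the idea, not the paper's actual algorithm:

```python
import zlib

class CompressionProgressCuriosity:
    """Intrinsic reward = drop in bits-per-byte needed to compress the
    agent's history after seeing a new observation. zlib is a crude
    stand-in for the learned, improving compressor in the paper."""

    def __init__(self):
        self.history = b""
        self.prev_bits_per_byte = None

    def reward(self, observation: bytes) -> float:
        self.history += observation
        bits_per_byte = 8 * len(zlib.compress(self.history)) / len(self.history)
        if self.prev_bits_per_byte is None:
            progress = 0.0  # first observation: no baseline yet
        else:
            progress = self.prev_bits_per_byte - bits_per_byte
        self.prev_bits_per_byte = bits_per_byte
        return progress  # positive when the history just got more compressible

curiosity = CompressionProgressCuriosity()
print(curiosity.reward(b"abababab"))  # 0.0, no baseline
print(curiosity.reward(b"abababab"))  # pattern repeats, compressibility improves
```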
@thyrahoward6865 · 5 years ago
Fun point! Our bodies set the ground level for us to self-correct, if that makes sense. Our bodies program the base functions, which we generally have no control over; then the system runs active_memories.exe, and each second becomes a piece of data, an input to be processed. This all takes place in milliseconds while our internal system prepares for the next function. Cool, right? Super fascinating!
@KristoferPettersson · 5 years ago
"Explainability is a key component of human intelligence" => /narratives/ are a key to human /relations/. I think most of us find it hard to explain anything. ;) I wonder if Bach means that the ability to reduce observed relations into a comprehensible model with a /predictable set of controls/ is /desirable/?
@agiisahebbnnwithnoobjectiv228 · 3 years ago
The objective function of animal brains, and therefore of AGI, is impact maximization. You heard it first from me.
@KristoferPettersson · 3 years ago
@@agiisahebbnnwithnoobjectiv228 Certainly an objective that helps us determine salience in observations of various forms is important (and measuring 'impact' is an aspect of salience), but I think most researchers propose that there is more than one objective (or bias) giving rise to human behavior. I don't think it's possible to reduce AGI to a single minimax problem. The display of intelligence is not an optimization process (though it might benefit from optimization).
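A concrete way to see that point: scalarizing several objectives into one number requires choosing weights, and different weightings can rank the same behaviors differently. The sketch below uses invented objectives (impact, novelty, safety) and arbitrary weights, purely as an illustration:

```python
import numpy as np

def multi_objective_score(impact: float, novelty: float, safety: float,
                          weights=(0.3, 0.3, 0.4)) -> float:
    """Scalarizing several objectives requires choosing weights; different
    weightings can rank the same behaviors differently, which is one reason
    a single scalar objective underdetermines 'intelligent' behavior."""
    return float(np.dot(weights, [impact, novelty, safety]))

behavior_a = (0.9, 0.2, 0.1)  # high impact, low safety
behavior_b = (0.3, 0.4, 0.9)  # modest impact, high safety
print(multi_objective_score(*behavior_a))  # 0.37
print(multi_objective_score(*behavior_b))  # 0.57 - b wins under these weights
print(multi_objective_score(*behavior_a, weights=(0.8, 0.1, 0.1)))  # 0.75
print(multi_objective_score(*behavior_b, weights=(0.8, 0.1, 0.1)))  # 0.37 - flipped
```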
@ShadowTwister28 · 3 years ago
Buy AGI?
@aklascheema · 5 years ago
Moderator had interesting ideas, would've liked to hear from the others a bit more.
@lasredchris · 5 years ago
- Biologically inspired approach
- How are things connected
- Understanding bias
Also, the moderator really needs to stay neutral. ;)
@thyrahoward6865 · 5 years ago
Also, saying that replacing humans with robots is the same as replacing birds with planes is an invalid argument. We use planes for transportation, to get from point A to point B, not to watch them fly and poop luggage. A better comparison: planes and cars are both used for transportation, and flying gets you to your destination faster than driving. By that logic planes should have made cars obsolete, yet we still use cars. Why? Because planes have their limitations, just as human engineering does. I can't fly a plane to the store down the street, but I can drive there. Likewise, I can't afford a worker who produces results I can't comprehend, but I can hire another human who doesn't need to be taught to say it in plain English.
@mistycloud4455 · 3 years ago
AGI will be man's last invention.
@Captain_Of_A_Starship · 5 years ago
First, we need a better perceptron. Why? Because of this headline: "Snails use two brain cells to make complex decisions, a team of scientists has found." I'll tell you the rest once that is complete.
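Since the comment invokes the perceptron: here is the classic perceptron learning rule in a few lines, with two inputs as a nod to the "two brain cells" headline. This is the textbook algorithm on a toy AND task, not a claim about snail neurons:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Classic perceptron rule: nudge the weights on every misclassified point."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (target - pred) * xi  # no change when pred == target
    return w

# Two inputs, like the "two brain cells" in the headline: learn logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w = train_perceptron(X, y)
print(w, [(1 if np.append(x, 1.0) @ w > 0 else 0) for x in X])  # [0, 0, 0, 1]
```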
@puddles5501 · 5 years ago
being incoherent doesn't make you seem smart
@Gabriel-pt6tq · 2 years ago
I pretty much have no idea what these people are saying.
@kimberly5946 · 3 years ago
Omfg wtf I cant...uuuugggggg
@AirSandFire · 5 years ago
The lady from DeepMind actually reminds me of an AI.
@mrpicky1868 · 5 years ago
Do you feel threatened by a smart woman? :)
@entivreality · 4 years ago
Pretty rude moderator, especially how he kept cutting off the Chinese guy.