Open-Endedness is Essential for Artificial Superhuman Intelligence

1,790 views

Tunadorable

1 day ago

arxiv.org/abs/...
Support my learning journey by clicking the Join button above, becoming a Patreon member, or sending a one-time Venmo!
/ tunadorable
account.venmo....
Discuss this stuff with other Tunadorks on Discord
/ discord
All my other links
linktr.ee/tuna...

Comments: 29
@OpenSourceAnarchist 1 month ago
Your higher-level thinking is always refreshing and in line with my own intuitions. Love the commentary as always... I hope I remain an open-ended system for a long time, and that ASI keeps me around for novel schizo thoughts :)
@natecodesai 1 month ago
We often leave out the fact that, as agents in a multi-agent system, our loss function as humans is our perception of what others expect, based on our own internal model of society. A universal loss function would have to be dynamically generated by each individual learning agent, where its environment is other agents and they can provide dynamic feedback to each other. Something like this, at least.
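A minimal sketch of that idea in code, purely illustrative: each agent's loss is generated dynamically by how well it predicts the other agents, so the objective lives in the population rather than in a fixed function. The linear predictors, delta-rule update, and placeholder random policy are all assumptions of mine, not anything from the comment or the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class Agent:
    """Agent whose 'loss' is how badly it predicts the other agents' actions."""
    def __init__(self, n_others, dim, lr=0.01):
        # One linear predictor per other agent (illustrative choice).
        self.W = rng.normal(0, 0.1, size=(n_others, dim, dim))
        self.lr = lr

    def act(self, dim):
        return rng.normal(size=dim)  # placeholder policy

    def loss_and_update(self, my_obs, others_actions):
        # Dynamic loss: squared error of predictions of the *other* agents.
        total = 0.0
        for k, action in enumerate(others_actions):
            pred = self.W[k] @ my_obs
            err = pred - action
            total += float(err @ err)
            # Online gradient step; the feedback signal comes from other agents.
            self.W[k] -= self.lr * np.outer(err, my_obs)
        return total

dim, n_agents = 4, 3
agents = [Agent(n_agents - 1, dim) for _ in range(n_agents)]
for step in range(100):
    actions = [ag.act(dim) for ag in agents]
    for i, ag in enumerate(agents):
        others = [a for j, a in enumerate(actions) if j != i]
        ag.loss_and_update(actions[i], others)
```

No fixed "universal" target appears anywhere above; each agent's objective changes as the other agents change, which is the dynamic-feedback property the comment describes.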
@MatthewKelley-mq4ce 1 month ago
That loss function is a bit of a simplification. There is some truth to it, but even personally, my path has little to do with the expectations of society. Perhaps I'm missing your point there.
@natecodesai 1 month ago
@@MatthewKelley-mq4ce To clarify: my view is that we are constantly trying to predict not only what we will do next, but what others around us will do next. "Society" is one category I used to illustrate the many-to-one relationship one has with the environment, and the internal model one has of it (which in turn guides prediction). So it's not society alone, nor society itself; it is my model of what I expect of others and, inversely, what I expect them to expect of me... something like this. In my world model, society is part of what influences me in different subtle ways. This is a pretty abstract thought path, so I don't fully understand what it could mean as a theory, nor how it could be applied to machine learning. Needs more thought.
@KitcloudkickerJr 1 month ago
the terms "new" and "novel" are driving me up a wall. its a weird metric that doesn't really exist in people either. We dont make "NEW" things as in, a concept or thing that has never ever existed in any way shoape or form. that is the rarest thing for us. At most, we smash preexisting things together, But, "Create" the only thing we create from thin air is fiat capital. The world would look a lot different if peeople created as much as we think we do
@consciouscode8150 1 month ago
Couldn't you define it with perplexity, i.e. the probability that the model would produce it unprompted? Even mash-ups can feel novel just by virtue of "I wouldn't have thought of that" - note that once I see it, though, it stops being "novel" by this definition, because I can (in theory) reproduce it.
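A quick sketch of that perplexity-as-novelty score, assuming a Hugging Face-style causal language model; "gpt2" is a stand-in model choice of mine, not anything the commenter specified:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")          # illustrative model choice
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def novelty_score(text: str) -> float:
    """Perplexity of `text` under the model: how unlikely the model is to
    produce it unprompted. Higher = more 'novel' in this sense."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # labels=input_ids yields the mean next-token cross-entropy
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

print(novelty_score("The cat sat on the mat."))            # familiar phrasing
print(novelty_score("Violet axioms percolate sideways."))  # likely higher score
```

Note that by this definition a string's score only drops once the model itself is updated on it, which matches the "stops being novel once I can reproduce it" caveat.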
@KitcloudkickerJr 1 month ago
@@consciouscode8150 Yes, I don't see why not. My rant was more about how we grade models on metrics humans don't even meet, all in order to move the goalposts. But as for the rating, yeah, it can be graded on perplexity.
@jyjjy7 1 month ago
True originality may be rare in humans, but the S in ASI insists we have higher standards.
@KitcloudkickerJr 1 month ago
@@jyjjy7 We do have higher standards, without a doubt, but it's a naxal fallacy: the true tail end of the distribution curve.
@jyjjy7 1 month ago
@@KitcloudkickerJr What do you think the word "superhuman" in the title of this video means, exactly? Seriously, I don't get where you are coming from. Superhuman doesn't mean super at "everything but the last bit of the curve", so what are you trying to say?
@zyzhang1130 28 days ago
I really like many of the examples/intuitions you give in your videos that effectively illustrate your point, while a lot of people in the community just don't get it for some reason...
@CharlesVanNoland 1 month ago
The fact is that a network whose weights do not change while it is being used is effectively an elderly person with no plasticity. The only way we're going to get to AGI/ASI, or anything that is creative and explores possibilities, is with something that learns dynamically - and for robotic agents it must do so in real time, at 20-30 Hz. It definitely won't be something trained on a static dataset whose weights are then stuck forever until an update goes over the air, an update that can only be produced by a massive training compute farm that itself only uses static datasets.

In my 20 years of researching AI and neuroscience, I've come to believe that pursuing a neural-network-style learning algorithm will be computationally prohibitive for a long while (until borophene/graphene-based semiconductor compute is solved and we have 50 GHz compute cores). In the meantime, we need to come up with a lightweight learning algorithm that can learn quickly, or in real time, from "experience". Even something with a learning update rate of 0.5 Hz is going to be more robust and versatile than anything currently in existence. All of the creative/explorative/playful entities on the planet learn from experience, on the fly. They perceive the world through the lens of their previous experience, and "previous experience" updates with each moment at 10-50 Hz, shaping and sculpting the way future input will impact the behavior and activity of the agent.

Something that cannot update how it will process future inputs based on current inputs is not going to get us where we want to go, which to my mind means that backprop training is a dead end. It's expensive, slow, inefficient, and essentially a brute-force method for creating a static translation function from input to output. It's a static black box that future humans will regard as the clunky, antiquated brute-force method we had in the olden days for making a compute device do something resembling learning. We don't want a static black box that requires expensive offline training; we want a dynamic black box, at the very least, that isn't as compute-heavy as today's static black boxes.

I believe a novel algorithm based on learning sparse distributed representations can make this possible. It has already been demonstrated that an algorithm which learns to predict its own inputs can form abstract representations, extracting latent variables from its inputs - just like those that form in an autoencoder, but without slow, inefficient backpropagation training on static datasets. This is much more akin to what brains are actually doing - and after all, isn't replicating what brains are capable of the whole point of AGI/ASI pursuits? Anyone fiddling with backprop training of ever-larger networks on static datasets, with little to no understanding of neuroscience, isn't going to magically get us to AGI/ASI. As an analogy, it's like being an expert in building horse carriages and thinking you're going to stumble across the internal-combustion-engine vehicle. It's naive to ignore brains yet attempt to replicate what they do.
There have been several papers I believe anybody pursuing AGI/ASI should take heed of, such as "How a Minimal Learning Agent can Infer the Existence of Unobserved Variables in a Complex Environment" (Benjamin Eva, Katja Ried, Thomas Müller & Hans J. Briegel) over on Springer, and "Predictive learning as a network mechanism for extracting low-dimensional latent space representations" (Stefano Recanatesi, Matthew Farrell, Guillaume Lajoie, Sophie Deneve, Mattia Rigotti, Eric Shea-Brown) over on bioRxiv. It is entirely feasible that someone ignorant of neuroscience will crack the code, but only because a bunch of other people who acknowledged neuroscience's findings and discoveries put most of the pieces together for them. All I know for sure is that we're freaking close now: definitely less than 5 years until we have fully dynamic autonomous agents. They might be limited to what I call "pet status" in terms of their abstraction capacity (and thus intelligence level), just novelties and toys, but it won't be long after that before we can scale up the compute, and thus the abstraction capacity, to create human-level and superhuman-level intelligences from the exact same dynamic learning algorithm as the toy pet novelties that run on common consumer hardware. That's my two cents.
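For what it's worth, here is a toy sketch of the kind of on-the-fly predictive learning described above: one weight update per moment of "experience", with no static dataset and no offline training run. The linear predictor, delta-rule update, and toy environment are illustrative assumptions, not the algorithm from either cited paper.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, lr = 8, 0.05
W = np.zeros((dim, dim))  # predicts the next input from the current one

def environment(x):
    # Stand-in for a sensory stream (illustrative dynamics plus noise).
    return 0.9 * np.roll(x, 1) + 0.1 * rng.normal(size=dim)

x = rng.normal(size=dim)
for t in range(1000):           # imagine this loop running at 10-50 Hz
    x_next = environment(x)
    pred = W @ x                # predict the next input *before* it arrives
    err = x_next - pred
    W += lr * np.outer(err, x)  # one delta-rule update per moment of experience
    x = x_next
```

The point of the sketch is the shape of the loop: perception, prediction, error, immediate weight update, repeat, so each moment of experience reshapes how the next one is processed.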
@00prometheus 1 month ago
I think we all agree that continual feedback learning is a core part of what will take us to ASI, but I wouldn't wait for any 50 GHz processors: even at 3 GHz, light can only travel 10 cm in the time of a clock cycle. This is why memory chips sit right next to the CPU on modern motherboards. At 30 GHz, light can go 1 cm per cycle, and we end up inside the chip boundary; at that point it is physically impossible for the chip to work as a single unit, because it can no longer be synchronized, so you can't really go there. We need something that is massively parallel, not a higher clock frequency; it isn't physically possible to go much higher in clock frequency than we already are. This also means that no non-parallelizable algorithm can ever be the answer.
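The arithmetic behind those numbers, as a quick check (distance per cycle = c / f):

```python
c = 299_792_458.0  # speed of light in vacuum, m/s
for f_ghz in (3, 30, 50):
    cm = c / (f_ghz * 1e9) * 100
    print(f"{f_ghz} GHz: light travels {cm:.1f} cm per clock cycle")
# 3 GHz -> ~10.0 cm, 30 GHz -> ~1.0 cm, 50 GHz -> ~0.6 cm
```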
@CharlesVanNoland 1 month ago
@@00prometheus Right now, clock speeds are not limited by the distance a signal has to travel. Yes, that's a thing, but it's silicon transistor switching speed, and instability caused by quantum tunneling, that limit the clock rate. The relatively high resistance of silicon requires more power to be pushed through for everything to work, which means more heat, which causes instability. Cupric oxide transistors have been demonstrated to switch significantly faster than silicon, and having lower-power transistors with a narrower bandgap means we can clock higher. Yes, at higher speeds the distance traveled by a signal per cycle is shorter, but that's not what currently limits CPUs to ~5 GHz. Sure, 50 GHz is probably unrealistic, but even at 5 GHz, going massively parallel has its limits too, because you have to build outward for that extra parallelism. My point is that we need to get off silicon entirely, not that we shouldn't parallelize more. That's why I mentioned borophene/graphene; cupric oxide had slipped my mind in the moment, but that's another promising route. Any additional parallelism would only be enhanced by getting off silicon. Silicon ain't the way.
@immortalityIMT 1 month ago
Bootstrapping is so important
@jyjjy7 1 month ago
@@immortalityIMT Some day we'll be remembered as the weird stuff growing on the surface of a planet from which ASI was bootstrapped 🥲
@jonathanmckinney5826 1 month ago
The "novice" that won against AlphaGo wasn't playing as a novice, they learned the strategy as a "jailbreak" against it using another program. So additional training of novices playing wouldn't help.
@Tunadorable 1 month ago
Good to know; this was misrepresented to me by another commenter a few weeks ago. Thank you!
@dadsonworldwide3238 1 month ago
Every time we dig out old-world axioms of complexity we put them into our world + technology + material sciences. Face-value knowledge hasn't changed, and 3/4 of aged elders just gain strong identifiers in equal measure and live in less whataboutism and nihilism, if they have a brain 🧠 I'd expect all models' convergences to form & shape this, so that tuning bias & censorship back into it is the work. Beliefs have less and less merit; triangulated judgment wins, and/or you spend big, big money wrestling with how reality is.
@00prometheus 1 month ago
I don't think we should hope for ASI to have any use for us; I suspect any such usefulness would be very fleeting. It is far too unknowable and risky a ground to base the survival of humanity on. Besides, I don't really think ASI would purposefully exterminate us; I think it would be more along the lines of what has happened to the mountain gorillas. We don't hate them, we even like them (as a humanity we don't think it is very important, but we do have a generally positive attitude toward them); they are going extinct because we are repurposing their habitats, building highways, cities, farmland, and whatnot. Just as the gorillas have no chance of understanding the underlying reason they are being driven to extinction, we probably won't either. So it is essential that we maintain control over ASI; it is not enough that it consider us unimportant or even slightly useful.
@GNARGNARHEAD 1 month ago
Picasso is the example of an artist who was successful in his lifetime 😆 savage take otherwise 👍
@Morereality 1 month ago
YEW
@Tunadorable 1 month ago
YYYYYYYYYYYYYYYYYYYYYEW