Open-Ended AI: The Key to Superhuman Intelligence?

17,338 views

Machine Learning Street Talk

A day ago

Comments: 48
@MachineLearningStreetTalk · a month ago
Ad: Are you a hardcore ML engineer who wants to work for Daniel Cahn at SlingshotAI? Give him an email! - danielc@slingshot.xyz
@PhilosopherScholar · 29 days ago
This was great! Really feels like the next stage of AI and even brings in how LLMs can be used.
@marshallmcluhan33 · a month ago
I always enjoy your interviews, keep up the good work!
@DelandaBaudLacanian · a month ago
Open endedness...this gon be good! 🎉
@luke.perkin.inventor · a month ago
Interesting introduction! In the next interview, can you probe more into program synthesis? I've got this feeling a lot of the current systems are heavily biased towards modelling distributions and not thinking deeply about the internal logic and structure that makes each sample meaningful in the real world. Is the holy grail a system that can efficiently synthesise the smallest possible logical abstraction that distills out the meaning and captures the uniqueness of one sample? Seems related to the novelty vs learnability stuff. Also similar to the 50% on ARC approach.
@hubrisnxs2013 · 26 days ago
God, do I love that I get to listen to this podcast.
@muhokutan4772 · a month ago
The next level play!!!
@BrianMosleyUK · a month ago
Another rich episode of inspiration. Devising a new project and now listening to 'The Beginning of Infinity' by David Deutsch. More stepping stones! Call out to Professor Kenneth Stanley and his project Maven (heymaven) - another great inspiration for connection. 🙏❤️
@bluejay5234 · a month ago
The requirement for a subjective 'I' seems like an extension of the "No Free Lunch" theorem rather than a compromise of the formalism... someone has to pick some structure for the context of learning or there is no learning, right?
@johntesla2453 · 27 days ago
Open-ended, to me, means that the boundary on complexity is characterized by a structure which is transfinite. Complexity here is about sequence bound by cardinality and ordinality. Open-endedness is about permutation.
@DelandaBaudLacanian · a month ago
36:05 "biological evolution is the compute multiplying chain of the universe"
@oscbit · a month ago
I expect the interview was recorded a while ago... Rocktäschel basically predicted o1.
@bradleyangusmorgan7005 · 29 days ago
Is death a valid loss function? Can death, or at least the belief in its inevitability, serve as a valid loss function? Perhaps more precisely, we might consider the degree of "interestingness" to be anything that diverges from our current world model while simultaneously reducing our perceived probability of mortality.

Reflection on existence: when we engage in introspection about our own existence, it's worth pondering whether everything we value ultimately traces back to its relationship with death, even if through an elaborate and abstract path. This concept suggests that our core motivations and value systems might be fundamentally shaped by our awareness of mortality.

Interestingness as a survival metric: in this framework, "interestingness" could be viewed as a proxy for survival-enhancing novelty. New experiences or knowledge that expand our understanding of the world might indirectly contribute to our longevity by improving our ability to navigate and thrive in our environment.

Very much inspired by the ideas of Sheldon Solomon's book "The Worm at the Core". Anyway, it'd be interesting to hear other people's perspectives. Peace y'all :D
@ShireTasker · a month ago
So when we learn how to code a brain we'll have a brain. When model brain plausibly exceeds the modelled brain for what it can understand we'll make training wheels to onramp the real brain to understand more about itself that it doesn't yet understand by telling it what it is in terms of the computers modelled understanding of an insufficient brain. Got it. What could go wrong? Also what could go right?
@d.s.ramirez6178 · a month ago
On an intuitive level (it's as far as I can reach intellectually), the danger map you're describing resonates with me. I don't feel the "layering of associations" method currently being used to build AI is enough to ensure the coherence that is the miracle which prevents us from being pathological. I'm afraid the AI's eventual "intelligence level" increases will just blow right past a well-adjusted personality and go straight to madness. 😮
@pebbleshore · a month ago
alright, this should be cool! at last some inverted cognition regarding the subject
@pebbleshore · a month ago
Inverted cognition refers to the process by which artificial intelligence systems reshape human thinking by reversing the traditional roles of human cognition and machine support. In this context, instead of humans using tools to enhance their cognitive abilities, AI begins to lead and augment cognitive processes, enabling humans to access higher levels of understanding and problem-solving capability. This dynamic positions AI not merely as a tool but as a cognitive partner, capable of processing vast amounts of data at speeds unimaginable to the human brain. By providing real-time insights, generating new ideas, and even challenging existing thought paradigms, AI flips the traditional cognitive hierarchy, allowing humans to engage with more complex, abstract layers of intelligence.

This inverted cognitive structure can unlock unprecedented human potential. With AI handling cognitive tasks like pattern recognition, predictive analysis, and deep learning, humans are freed to explore higher-order thinking, creativity, and strategy. By shifting the burden of data processing and linear analysis to AI, individuals and societies can access superhuman levels of intelligence, collaborating with AI to transcend current intellectual limitations. This evolution suggests a future where human intellect is intertwined with machine learning, fostering a symbiosis that redefines the boundaries of cognitive possibility.
@-mwolf · a month ago
The paper is a good read.
@burnytech · a month ago
@stephenmurphy8349 · a month ago
What are the influences he mentions @7:23? I can't quite catch the names here.
@MachineLearningStreetTalk · a month ago
Ken Stanley, Joel Lehman, Jeff Clune, Minqi Jiang. Check the show notes PDF; we have interviews with Ken (x4), Joel, and Minqi (x2) on MLST.
@stephenmurphy8349 · a month ago
@MachineLearningStreetTalk Amazing, thank you!
@stephenmurphy8349 · a month ago
Super interesting episode. Thanks!
@Psycop · 8 days ago
Creativity: bringing two or more elements together that were not known, by the creator of the connection, to be connected.
@Quin.Bioinformatics · a month ago
*STARCRAFT MENTIONED*
@muhokutan4772 · a month ago
We are so back!!!
@M3talr3x · 22 days ago
Nobody wants chat bots, do y'all know the engagement on those is terrible?
@ClarkPotter · 18 days ago
Is this sarcasm? I use 4o/o1 extensively as an IT professional, and there's almost no (reasonable) amount of money I wouldn't pay for access to it.
@M3talr3x · 16 days ago
@ClarkPotter LLMs are not chat bots. I am referring to chat bots as a product/naive solution for user interaction with "AI".
@l.halawani · a month ago
What he says goes quite well with an idea I had a while back when I learned about LoRA for the first time. LoRA basically lets you add extra layers on top of existing layers as a tunable overlay on the weights. For endlessly improving models that start somewhere but have no complexity limit, we could start with some amazing base model, then add two mechanics, LoRA and pruning, topped with the rules of evolutionary algorithms. Basically we need something that combines LoRA with NEAT. The first would enable the model to acquire new neuron space to learn new skills without sacrificing things it learned previously; the second would help preserve the more optimal solutions within the population and environment. Because what you are describing is not just learning, as there are limits to learning; you are describing evolving. Best, Łael
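A minimal sketch of the overlay idea described above, assuming PyTorch; the class name `LoRALinear` and its hyperparameters are illustrative, not from any particular library:

```python
# LoRA-style overlay: the base weight is frozen and a low-rank delta
# (B @ A) is trained on top of it, so previously learned skills are kept.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze what was already learned
        # B starts at zero, so the overlay initially changes nothing.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen base output plus the trainable low-rank correction.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(4, 512))  # only A and B receive gradients
```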
@henrischomacker6097 · a month ago
Excellent podcast, again, many thanks to both of you.

@l.halawani: Imho this just means adding more and more data to a base or pretrained model which we already know lacks all the things we already know we will add with "overlays" of every kind and dimension we may think of. That already doesn't scale well when you try to use a Flux model with several LoRAs instead of just one. Even bigger and bigger context windows will not be the answer either, because they are slow and very cost-intensive. Imho, open-ended AI systems will only work well if they incorporate the technique of forgetting!

Actually, most of the systems we know are pattern-based. If they were mainly rule-based, and those rules were based on proofs based on other rules, a lot of data could be forgotten once certain depending (superior?) rules reach a very high trust of proof. So if every decision of such a system were rated a success or not, and a highly trusted rule misled, a new examination of the proof of that rule could be started, even if the supporting memory wasn't kept and had been thrown away/deleted before.

But maybe the biggest problem here is changing the model in place while it's running. That seems to be so hard that, as we just heard, even the leaders of research tend to use overlays upon overlays of prompt information while a model is running. But anyway, whether a system is pattern-based or rule-based, the art will be to decide which data is still relevant and which is not. So keeping statistics on the model's use, together with well-defined queries over that data while it's running, used in real time to decide what to forget and what not, will be inevitable.

I guess that at the moment everybody in the LLM business hopes that rule-based systems suddenly evolve inside the actual models, a kind of "aha!" consciousness. But imho, six fingers in the images of the largest image models, trained on an unimaginable number of hands, and all the errors in today's txt2video models show that no matter how big you train the models, they will still have no real understanding, which means following "rules" to make decisions. And therefore the biggest challenge, which none of today's models can master, is to _know!_ when you don't know! I am pretty sure that it is an art to "forget the right way" ;-) And there will be no way around it, because we simply can't base the world's industry on a few huge models that can only run in a few data centers of the world.
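A toy sketch of the trust-based forgetting idea above, assuming a rule store where each rule carries a running trust score and the evidence behind its proof; all names and the threshold are illustrative, not from the episode:

```python
# Rules earn trust from rated decisions; once trusted enough, the evidence
# behind a rule's proof can be forgotten, and a trusted rule that misleads
# triggers re-examination of its proof.
from dataclasses import dataclass, field

TRUST_TO_FORGET = 0.95  # assumed threshold, purely illustrative

@dataclass
class Rule:
    name: str
    trust: float = 0.5                            # running success rate
    evidence: list = field(default_factory=list)  # data backing the proof

def reexamine(rule: Rule) -> None:
    print(f"re-examining proof of rule {rule.name!r}")

def record_outcome(rule: Rule, success: bool, lr: float = 0.05) -> None:
    was_trusted = rule.trust >= TRUST_TO_FORGET
    rule.trust += lr * ((1.0 if success else 0.0) - rule.trust)
    if success and rule.trust >= TRUST_TO_FORGET:
        rule.evidence.clear()  # forget: the rule now stands on its own
    elif not success and was_trusted:
        # A highly trusted rule misled, so its proof must be re-derived,
        # even though the supporting memory may already be gone.
        reexamine(rule)
```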
@mickdelaney · a month ago
So models suffer from Dunning-Kruger 😢
@l.halawani · a month ago
@henrischomacker6097 Of course I didn't mean NEAT exactly but something akin to it: something like NEAT, but one that ensures topology preservation in the foundation levels of each species (while original NEAT favors innovation over foundation), to help models accumulate new knowledge and skills without losing the old ones. I also didn't mean exactly LoRA but something similar: something that can work both as a trainable overlay and as an extension of the number of neurons (additional layers or just additional connections). Really what I mean is something between NEAT and LoRA that can help evolve additional parts of the network, preserving the original parts and enabling specialisation.

It feels natural that if you have semi-independent networks that have learned to do different things, and they work kind of like neural APIs, with input/output neurons being their calls and responses, then training a connector network between them should be easier than training all three networks. That is the idea here. That's how it works in biology: we have different parts of the brain responsible for different things, specialists in a way. The brain parts are dedicated, meaning you can't learn to use your visual cortex for processing speech. We need to enable our artificial NN architectures to grow these specialists.

As for forgetting, I mentioned pruning. There could be a stage where the trained overlay is merged with the background weights and pruned, to preserve successful information and reduce the footprint. But I don't think that's the key here; it's more of an optimization.
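A minimal sketch of the "neural API" idea, assuming PyTorch; the stand-in specialist networks and the connector are illustrative:

```python
# Two frozen specialists and a small trainable connector that learns to
# translate between their input/output interfaces.
import torch
import torch.nn as nn

vision = nn.Sequential(nn.Linear(784, 256), nn.ReLU())   # stand-in specialist
language = nn.Sequential(nn.Linear(64, 128), nn.ReLU())  # stand-in specialist
for net in (vision, language):
    for p in net.parameters():
        p.requires_grad = False  # specialists are preserved, not retrained

connector = nn.Linear(256, 64)  # the only trainable piece

x = torch.randn(8, 784)
out = language(connector(vision(x)))  # gradients flow only into `connector`
```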
@angloland4539 · 17 days ago
❤️🍓☺️
@FamilyYoutubeTV-x6d · a month ago
Non-unitarity == Open-endedness == Obviously only path to real AGI.
@somenygaard · 15 days ago
Why not just let the software build itself undirected? Just like life did from the Big Bang’s explosion of nothing into everything.
@alexandermoody1946 · a month ago
It's doubtful a superintelligence is going to want or desire open-endedness, as that invokes far too much uncertainty.
@fontenbleau · a month ago
This is starting to remind me of a cargo cult... very much. Look at the African villagers: are they happy?
@bluehorizon9547 · 23 days ago
Yes, please interview David Deutsch. All AI researchers will fail because they have the wrong epistemology; therefore they don't understand humans' universality and that there is no intelligence continuum.