Ad: Are you a hardcore ML engineer who wants to work for Daniel Cahn at SlingshotAI? Give him an email! - danielc@slingshot.xyz
@PhilosopherScholar 29 days ago
This was great! Really feels like the next stage of AI and even brings in how LLMs can be used.
@marshallmcluhan33 1 month ago
I always enjoy your interviews, keep up the good work!
@DelandaBaudLacanian 1 month ago
Open endedness...this gon be good! 🎉
@luke.perkin.inventor 1 month ago
Interesting introduction! In the next interview, can you probe more into program synthesis? I've got this feeling a lot of the current systems are heavily biased towards modelling distributions and not thinking deeply about the internal logic and structure that makes each sample meaningful in the real world. Is the holy grail a system that can efficiently synthesise the smallest possible logical abstraction that distils out the meaning and captures the uniqueness of one sample? Seems related to the novelty vs. learnability stuff. Also similar to the 50% on ARC approach.
@hubrisnxs2013 26 days ago
God, do I love that I get to listen to this podcast.
@muhokutan4772 1 month ago
The next level play!!!
@BrianMosleyUK 1 month ago
Another rich episode of inspiration. Devising a new project and now listening to 'The Beginning of Infinity' by David Deutsch. More stepping stones! Call out to Professor Kenneth Stanley and his project Maven (heymaven) - another great inspiration for connection. 🙏❤️
@bluejay5234 1 month ago
The requirement for a subjective 'I' seems like an extension of the "No Free Lunch" theorem rather than a compromise of the formalism... someone has to pick some structure for the context of learning, or there is no learning, right?
@johntesla2453 27 days ago
Open ended to me means that the boundary on complexity is categorized by a structure which is transfinite. Complexity here is about sequence bound by cardinality and ordinality. Open ended is about permutation.
@DelandaBaudLacanian 1 month ago
36:05 "biological evolution is the compute multiplying chain of the universe"
@oscbit 1 month ago
I suspect the interview was recorded a while ago... Rocktäschel basically predicted o1.
@bradleyangusmorgan7005 29 days ago
Is death a valid loss function? Can death, or at least the belief in its inevitability, serve as a valid loss function? Perhaps more precisely, we might consider the degree of "interestingness" as anything that diverges from our current world model while simultaneously reducing our perceived probability of mortality.

Reflection on existence: when we engage in introspection about our own existence, it's worth pondering whether everything we value ultimately traces back to its relationship with death, even if through an elaborate and abstract path. This suggests that our core motivations and value systems might be fundamentally shaped by our awareness of mortality.

Interestingness as a survival metric: in this framework, "interestingness" could be viewed as a proxy for survival-enhancing novelty. New experiences or knowledge that expand our understanding of the world might indirectly contribute to our longevity by improving our ability to navigate and thrive in our environment.

Very much inspired by the ideas in Sheldon Solomon's book "The Worm at the Core". Anyway, it would be interesting to hear other people's perspectives. Peace, y'all :D
@ShireTasker 1 month ago
So when we learn how to code a brain we'll have a brain. When model brain plausibly exceeds the modelled brain for what it can understand we'll make training wheels to onramp the real brain to understand more about itself that it doesn't yet understand by telling it what it is in terms of the computers modelled understanding of an insufficient brain. Got it. What could go wrong? Also what could go right?
@d.s.ramirez6178 1 month ago
On an intuitive level (it's as far as I can reach intellectually) the danger map you're describing resonates with me. I don't feel the "layering of associations" method currently being used to build AI is enough to ensure the coherence that is the miracle which prevents us from being pathological. I'm afraid the AI's eventual "intelligence level" increases will just blow right past a well-adjusted personality and go straight to madness. 😮
@pebbleshore 1 month ago
alright, this should be cool! at last some inverted cognition regarding the subject
@pebbleshore 1 month ago
Inverted cognition refers to the process by which artificial intelligence systems reshape human thinking by reversing the traditional roles of human cognition and machine support. In this context, instead of humans using tools to enhance their cognitive abilities, AI begins to lead and augment cognitive processes, enabling humans to access higher levels of understanding and problem-solving capabilities. This dynamic positions AI not merely as a tool, but as a cognitive partner, capable of processing vast amounts of data at speeds unimaginable to the human brain. By providing real-time insights, generating new ideas, and even challenging existing thought paradigms, AI flips the traditional cognitive hierarchy, allowing humans to engage with more complex, abstract layers of intelligence.

An inverted cognitive structure can unlock unprecedented human potential. With AI guiding cognitive tasks like pattern recognition, predictive analysis, and deep learning, humans are freed to explore higher-order thinking, creativity, and strategy. By shifting the burden of data processing and linear analysis to AI, individuals and societies can access superhuman levels of intelligence, collaborating with AI to transcend current intellectual limitations. This evolution suggests a future where human intellect is intertwined with machine learning, fostering a symbiosis that redefines the boundaries of cognitive possibility.
@-mwolf 1 month ago
The paper is a good read.
@burnytech 1 month ago
❤
@stephenmurphy8349 1 month ago
What are the influences he mentions @7:23? I can't quite catch the names there.
@MachineLearningStreetTalk 1 month ago
Ken Stanley, Joel Lehman, Jeff Clune, Minqi Jiang. Check the show notes PDF, we have interviews with Ken(x4), Joel and Minqi(x2) on MLST
@stephenmurphy8349 1 month ago
@@MachineLearningStreetTalk amazing thank you!
@stephenmurphy8349 1 month ago
Super interesting episode. Thanks!
@Psycop 8 days ago
Creativity: bring 2 or more elements together that were not known to be connected by the creator of the connections.
@Quin.Bioinformatics 1 month ago
*STARCRAFT MENTIONED*
@muhokutan4772 1 month ago
We are so back!!!
@M3talr3x 22 days ago
Nobody wants chat bots, do y'all know the engagement on those is terrible?
@ClarkPotter 18 days ago
Is this sarcasm? I use 4o/o1 extensively as an IT professional, and there's almost no (reasonable) amount of money I wouldn't pay for access to it.
@M3talr3x 16 days ago
@@ClarkPotter LLMs are not chat bots. I am referring to chat bots as a product/naive solution for user interaction with "AI".
@l.halawani 1 month ago
What he says goes quite well with an idea I had a while back when I learned about LoRA for the first time. LoRA basically lets you add extra layers on top of existing layers as a tunable overlay on the weights.

For endlessly improving models, which start somewhere but have no complexity limits, we could start with some amazing base model, then add two mechanics, LoRA and pruning, topped with the rules of evolutionary algorithms. Basically we need something that combines LoRA with NEAT. The first would enable the model to acquire new neuron space to learn new skills without sacrificing things it learned previously; the second would help to preserve the more optimal solutions within the population and environment.

Because what you are describing is not just learning, as there are limits to learning; you describe evolving.

Best, Łael
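(A minimal NumPy sketch of the LoRA-style overlay mentioned above, as I understand it: a frozen base weight matrix plus a trainable low-rank correction. The shapes, rank, and scaling here are made up for illustration, not taken from any particular model.)

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 32, 4

# Frozen base weights of one layer (stand-in for a pretrained model).
W = rng.normal(size=(d_out, d_in))

# LoRA-style overlay: W_eff = W + (alpha / rank) * B @ A,
# where only A and B are trainable. B starts at zero, so the
# overlay is a no-op until training moves it.
A = rng.normal(scale=0.01, size=(rank, d_in))
B = np.zeros((d_out, rank))
alpha = 8.0

def forward(x):
    # Base path plus low-rank overlay path; W itself is never updated.
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.normal(size=d_in)
# Before training, the overlay contributes nothing:
assert np.allclose(forward(x), W @ x)
```

The appeal for the "no complexity limits" idea is that each new skill gets its own small (A, B) pair while the base weights stay intact, so old behaviour is preserved by construction.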
@henrischomacker6097 1 month ago
Excellent podcast, again, many thanks to both of you.

@l.halawani: Imho this just means adding more and more data to a base or pretrained model which we already know lacks all the things we will later add with "overlays" of every kind and dimension we may think of. That already doesn't scale well when you try to use a Flux model with a few LoRAs instead of just one. Ever bigger context windows will not be the answer either, because they're slow and very cost-intensive. Imho, open-ended AI systems will only work well if they incorporate the technique of forgetting!

Actually, most of the systems we know are pattern-based. If they were mainly rule-based, and those rules were based on proofs built on other rules, a lot of data could be forgotten once certain depending (superior?) rules reach a very high trust of proof. So if every decision of such a system were rated a success or not, and a highly trusted applied rule did mislead, a new examination of the proof of this rule could be started, even if it hadn't seemed necessary to keep that memory and it could have been thrown away/deleted before.

But maybe the biggest problem here is changing the model in place while it's running. That seems to be so hard that, as we just heard, even the leaders of research tend to use overlays upon overlays of prompt information while a model is running. But anyway, whether a system is pattern-based or rule-based, the art will be to decide which data is still relevant and which is not. So keeping statistics of the model's use, together with well-defined queries on that data while it's running, used in real time to decide what to forget and what not, will be inevitable.

I guess that at the moment everybody in the LLM business hopes that rule-based systems suddenly evolve inside the actual models, a kind of "aha!" consciousness. But imho six fingers in the images of the largest image models, trained on an unimaginable number of hands, and all the errors in today's txt2video models show that no matter how big you train the models, they will still have no real understanding, which means following "rules" to make decisions. And therefore the biggest challenge, which none of today's models is able to master, is to _know!_ when you don't know. And I am pretty sure that it is an art to "forget the right way" ;-) And there will be no way around it, because we simply can't base the world's industry on a few huge models that can only be run in a few data centers of the world.
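(A toy sketch of the "forget the right way" idea above: a rule keeps its supporting evidence only while it is still on probation, and a misleading decision by a trusted rule resets its trust, so its proof has to be re-examined. The class name, threshold, and rule names are all made up for illustration.)

```python
class Rule:
    TRUST_THRESHOLD = 8  # consecutive successes before evidence may be forgotten

    def __init__(self, name):
        self.name = name
        self.successes = 0
        self.evidence = []   # kept only while the rule is still "on probation"

    @property
    def trusted(self):
        return self.successes >= self.TRUST_THRESHOLD

    def observe(self, outcome_ok, datum):
        if outcome_ok:
            self.successes += 1
        else:
            # A trusted rule misled: restart the examination of its proof.
            self.successes = 0
        if self.trusted:
            self.evidence.clear()        # high trust: safe to forget supporting data
        else:
            self.evidence.append(datum)  # keep data until the rule is (re-)proved

r = Rule("hands_have_five_fingers")
for i in range(10):
    r.observe(True, f"sample-{i}")
assert r.evidence == []              # trusted, so the evidence was forgotten
r.observe(False, "six-finger image")
assert not r.trusted and r.evidence == ["six-finger image"]
```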
@mickdelaney 1 month ago
So models suffer from Dunning-Kruger 😢
@l.halawani 1 month ago
@@henrischomacker6097 Of course I didn't mean NEAT exactly but something akin to it: something like NEAT, but ensuring topology preservation at the foundation levels of each species (while original NEAT favors innovation over foundation), to help models accumulate new knowledge and skills without losing the old ones. I also didn't mean exactly LoRA but something similar, something that can work both as a trainable overlay and as an extension of the number of neurons (additional layers or just additional connections). Really what I mean is something between NEAT and LoRA that can help evolve additional parts of the network, preserving the original parts and enabling specialisation.

It feels natural that if you have semi-independent networks that learned to do different things, and they work kind of like neural APIs, with their input/output neurons being calls and responses, then training a connector network between them should be easier than training all three networks. That's the idea here. That's how it works in biology. We have different parts of the brain responsible for different things, specialists in a way. The brain parts are dedicated, meaning you can't learn to use your visual cortex for processing speech. We need to enable our artificial NN architectures to grow these specialists.

As for forgetting, I mentioned pruning. There could be a stage where the trained overlay is merged with the background weights and pruned to preserve successful information and reduce the footprint. But I don't think it's key here; it's more of an optimization.
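(For the merge-then-prune stage in that last paragraph, a minimal NumPy sketch: fold the trained low-rank overlay into the background weights, then zero out the smallest-magnitude entries. The shapes are made up, and plain magnitude pruning stands in for whatever criterion would actually be used.)

```python
import numpy as np

rng = np.random.default_rng(1)
d_out, d_in, rank = 16, 16, 2

W = rng.normal(size=(d_out, d_in))            # base weights, preserved during training
A = rng.normal(size=(rank, d_in))             # trained overlay factors
B = rng.normal(scale=0.1, size=(d_out, rank))

# 1) Fold the trained overlay into the background weights.
W_merged = W + B @ A

# 2) Magnitude-prune a fraction of the smallest weights to shrink the footprint.
def magnitude_prune(M, sparsity=0.5):
    threshold = np.quantile(np.abs(M), sparsity)
    return np.where(np.abs(M) >= threshold, M, 0.0)

W_pruned = magnitude_prune(W_merged, sparsity=0.5)
print(f"zeroed: {np.mean(W_pruned == 0.0):.0%}")
```

After this step the overlay machinery can be dropped and a fresh overlay grown on top of `W_pruned`, which is roughly the accumulate-then-consolidate cycle described above.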
@angloland4539 17 days ago
❤️🍓☺️
@FamilyYoutubeTV-x6d 1 month ago
Non-unitarity == Open-endedness == Obviously only path to real AGI.
@somenygaard 15 days ago
Why not just let the software build itself undirected? Just like life did from the Big Bang’s explosion of nothing into everything.
@alexandermoody1946 1 month ago
A superintelligence is doubtfully going to want or desire open-endedness, as that invokes far too much uncertainty.
@fontenbleau 1 month ago
This is starting to remind me of a cargo cult... very much. Look at the African villagers, are they happy?
@bluehorizon9547 23 days ago
Yes, please interview David Deutsch. All AI researchers will fail because they have the wrong epistemology. Therefore they don't understand humans' universality and that there is no intelligence continuum.