Miles Cranmer - The Next Great Scientific Theory is Hiding Inside a Neural Network (April 3, 2024)

194,150 views

Simons Foundation

A day ago

Machine learning methods such as neural networks are quickly finding uses in everything from text generation to construction cranes. Excitingly, those same tools also promise a new paradigm for scientific discovery.
In this Presidential Lecture, Miles Cranmer will outline an innovative approach that leverages neural networks in the scientific process. Rather than directly modeling data, the approach interprets neural networks trained using the data. Through training, the neural networks can capture the physics underlying the system being studied. By extracting what the neural networks have learned, scientists can improve their theories. He will also discuss the Polymathic AI initiative, a collaboration between researchers at the Flatiron Institute and scientists around the world. Polymathic AI is designed to spur scientific discovery using similar technology to that powering ChatGPT. Using Polymathic AI, scientists will be able to model a broad range of physical systems across different scales. More details: www.simonsfoun...
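As a rough illustration of the "interpret the trained network" step described above, here is a minimal, hedged sketch using the PySR symbolic regression library; the toy data, operator choices, and hyperparameters are illustrative assumptions, not the settings used in the talk.

```python
import numpy as np
from pysr import PySRRegressor

# Toy stand-in for "data from a physical system": y = 2*(x0 + sin(x1 + 1.3)) + noise.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
y = 2 * (X[:, 0] + np.sin(X[:, 1] + 1.3)) + 0.01 * rng.normal(size=500)

# In the approach outlined above, a neural net would first be trained on the data
# (or on a component of a larger model), and symbolic regression would then be run
# on the net's inputs and outputs; here we regress on the raw data for brevity.
model = PySRRegressor(
    niterations=40,                 # illustrative search budget
    binary_operators=["+", "*"],
    unary_operators=["sin"],
)
model.fit(X, y)
print(model)                        # Pareto front of candidate equations
```

PySR returns a Pareto front of candidate expressions trading accuracy against complexity, which corresponds to the "Neural Net to Theory" step discussed at 13:44.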

Comments: 335
@mightytitan1719
@mightytitan1719 6 ай бұрын
Another banger from youtube algorithm
@JetJockey87
@JetJockey87 6 ай бұрын
Yes but not for everyone, only those with the capability to appreciate this for what it is
@DreadedEgg
@DreadedEgg 6 ай бұрын
@@JetJockey87 Edgy teenager says what?
@comosaycomosah
@comosaycomosah 5 ай бұрын
facts
@tonyb8660
@tonyb8660 5 ай бұрын
1 good vs 1e9 bad suggestions
@zimzob
@zimzob 4 ай бұрын
@@DreadedEgg the implication here is he was less than two years of age when he signed up for a KZbin account… I suppose anything is possible these days
@antonkot6250
@antonkot6250 6 ай бұрын
It seems like a very powerful idea: the AI observes the system, then learns to predict its behaviour, and then the rules behind those predictions are used to derive a mathematical statement. I wish the authors the best of luck.
@chrisholder3428
@chrisholder3428 3 ай бұрын
For anyone who does not work with ML, the takeaway of symbolic regression as a means of model simplification may seem quite powerful at first, but often our rationale for using neural nets in the first place is precisely the difficulty of deriving explainable analytical expressions for phenomena. People like Stephen Wolfram suggest that the assumption that complex phenomena can be modeled analytically is precisely why we are having problems advancing. To seasoned ML researchers, the title of the video sounds like the speaker will be explaining techniques for analyzing neural net weights, rather than talking about this.
@heliocarbex
@heliocarbex 6 ай бұрын
00:00 - Introduction
01:00 - Part I
03:06 - Traditional approach to science
04:16 - Era of AI (new approach)
05:46 - Data to Neural Net
13:44 - Neural Net to Theory
15:45 - Symbolic Regression
21:45 - Rediscovering Newton's Law of Gravity
23:40 - Part II
25:23 - Rise of the foundation model paradigm
27:28 - Why does this help?
31:06 - Polymathic AI
37:52 - Simplicity
42:09 - Takeaways
42:42 - Questions
@GEMSofGOD_com
@GEMSofGOD_com 5 ай бұрын
So is this headline clickbait, as usual? Or could you provide timestamps with the main conclusions drawn shortly and clearly?
@orbatos
@orbatos 4 ай бұрын
@@GEMSofGOD_com Yep, it's just bog-standard stuff that RNNs have been used for since they were first developed, just with more horsepower now.
@GEMSofGOD_com
@GEMSofGOD_com 4 ай бұрын
@@orbatos I've now noticed an interesting part of this talk on the Newton metric. Such searches for patterns are interesting.
@cziffras9114
@cziffras9114 6 ай бұрын
It is precisely what I've been working on for some time now, and it's very well explained in this presentation, nice work! (The idea of pySR is outrageously elegant, I absolutely love it!)
@gumbo64
@gumbo64 6 ай бұрын
John Koza had Genetic Programming in the 90s, which is basically the same thing. He made documentaries, talking about reusing learnt functions and everything, very interesting. It didn't really take off though; it suffers from being slow like most evolutionary methods (unless you parallelise massively, like OpenAI's Evolution Strategies) and can't learn the more complex tasks that deep learning can. In another timeline it could've got more attention and maybe become better than neural nets.
@Fx_-
@Fx_- 6 ай бұрын
@@gumbo64 Maybe its application will be better suited to some other situations, environments, or scales in the future if NNs hit some type of thing they cannot overcome quickly enough.
@caseymurray7722
@caseymurray7722 5 ай бұрын
@@Fx_- We're currently hard-capped on current AI models by hardware, but I am building a full-stack system that takes advantage of existing hardware by adding a daughterboard to speed up the analogous computational requirements for large-scale implementation. You'd be surprised how little you need extremely large supercomputers when you scale more efficiently. Well, and also leverage quantum computers for their relationship with randomness.
@Bartskol
@Bartskol 5 ай бұрын
So here we are; you guys seem to have been chosen by the algorithm for us to meet here. Welcome, for some reason.
@andrewferguson6901
@andrewferguson6901 6 ай бұрын
It makes intuitive sense that a cat video is better initialization than noise. It's a real measurement of the physical world
@lbgstzockt8493
@lbgstzockt8493 6 ай бұрын
I think it is mostly the fact that, as he said, cats don't teleport or disappear, so you have some sense of structure and continuity that aligns with the PDEs you want to solve.
@allenklingsporn6993
@allenklingsporn6993 6 ай бұрын
​@@lbgstzockt8493 You're saying the same thing. "Structure and continuity" come from this measurement of the real world (it's a video of a real cat, experiencing real physics).
@fkknsikk
@fkknsikk 6 ай бұрын
@@lbgstzockt8493 Sounds like you've never had a cat. Structure and continuity is not a guarantee. XD
@erickweil4580
@erickweil4580 6 ай бұрын
I think this is the ultimate proof that cats are fluids, so it helped the fluid simulation.
@fernandofuentes7617
@fernandofuentes7617 6 ай бұрын
@@fkknsikk lol
@giovannimazzocco499
@giovannimazzocco499 6 ай бұрын
Amazing talk, and great Research!
@jim37569
@jim37569 6 ай бұрын
Love the definition of simplicity, I found that to be pretty insightful.
@nanotech_republika
@nanotech_republika 6 ай бұрын
There are multiple awesome ideas in this presentation. For example, the idea of having a neural net discover new physics, or simply be a better scientist than a human scientist. Such neural nets are on the verge of discovery, or maybe in use right now. But I think the symbolic distillation in multidimensional space is the most intriguing to me, and a subject that has been worked on for as long as neural networks have been around. A genetic algorithm, but maybe also another (maybe bigger?) neural network, is needed for such symbolic distillation. In a way, yes, the distillation is needed to speed up the inference process, but I can also imagine that future AI (past the singularity) will not use symbolic distillation. It will simply create a better single model of reality in its network, and such a model will be enough to understand the surrounding reality and to make (future) predictions of its behavior.
@Mindsi
@Mindsi 6 ай бұрын
We call it abstraction🎉🎉🎉🎉
@shazzz_land
@shazzz_land 6 ай бұрын
And with all this advancement we don't have good fresh water, we don't have long-term stable electricity, and we don't have enough minerals for development.
@denzelcanvasYT
@denzelcanvasYT 6 ай бұрын
@@shazzz_land That's because of the higher-ups/elites, not AI or technology.
@AB-wf8ek
@AB-wf8ek 5 ай бұрын
​@@denzelcanvasYT People don't fear AI, they fear capitalism
@ambatuBUHSURK
@ambatuBUHSURK 3 ай бұрын
none of that will ever happen lol. Neural nets cannot reason. Theory is important. Science & physics alone aren't just data & statistics. Those two are actually pretty new to science.
@Electronics4Guitar
@Electronics4Guitar 6 ай бұрын
The folding analogy looks a lot like convolution. Also, the piecewise continuous construction of functions is used extensively in waveform composition in circuit analysis applications, though the notation is different, using multiplication by the unit step function u(t).
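For readers unfamiliar with the unit-step construction mentioned in the comment above, here is a minimal sketch; the segment boundaries and values are arbitrary illustrations, with np.heaviside playing the role of u(t).

```python
import numpy as np

# Piecewise waveform built by gating segments with the unit step u(t - t0),
# as in circuit-analysis waveform composition.
t = np.linspace(0, 4, 1000)
u = lambda x: np.heaviside(x, 1.0)

# Ramp up on [0, 1), hold at 1 on [1, 2), zero afterwards.
f = t * (u(t) - u(t - 1)) + 1.0 * (u(t - 1) - u(t - 2))
```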
@Mindsi
@Mindsi 6 ай бұрын
Origami manifold🎉🎉🎉🎉🎉🎉🎉of course🎉🎉🎉🎉🎉🎉🎉🎉
@nigelrhodes4330
@nigelrhodes4330 6 ай бұрын
Folding goes into compression and data theory and is the basis for the holographic universe theory.
@myuse3
@myuse3 6 ай бұрын
Thought the same thing. You can do the evaluation as a convolution of the two activation functions. Nevertheless, I guess the representation is somewhat more intuitive this way, as the middle part can be extracted as well if needed.
@rpbmpn
@rpbmpn 6 ай бұрын
Thought the same! (This vid appeared in my recs after watching the 3B1B convolutions video!) On what he's actually describing with the folding (11:10), I think it's actually pretty easy to miss, since he assumes you kind of anticipate or half-understand what he's about to say, so he goes over it pretty quickly. So for anyone coming to this completely naive, or who might have missed it the first time like I did: chart (d) essentially traces out chart (c) while (b) is increasing, then traces it in reverse while (b) is decreasing, and then traces it forwards again as (b) increases again. Some people might get slightly mad at me for pointing out the obvious. Well, it IS simple, and it's easy enough to intuit why it happens once you see it, BUT it is only obvious once you see it, and it's easy to miss in real time (at least I think!)
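A minimal numeric sketch of the retracing effect described in the comment above; b and c are made-up piecewise-linear functions standing in for charts (b) and (c), not the ones from the talk.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 500)

def b(x):
    # Non-monotonic first chart: rises 0->1 on [0, 1/3], falls 1->0 on [1/3, 2/3],
    # rises 0->1 on [2/3, 1].
    return np.minimum(3 * x, np.maximum(2 - 3 * x, 3 * x - 2))

def c(u):
    # Any fixed second chart; its exact shape is irrelevant to the argument.
    return np.abs(u - 0.5)

# Chart (d): wherever b reverses direction, c's graph is traced out in reverse,
# which is the "folding" of the input axis described above.
d = c(b(x))
```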
@laalbujhakkar
@laalbujhakkar 6 ай бұрын
I came here to read all the insane comments, and I’m not disappointed.
@Michael-kp4bd
@Michael-kp4bd 6 ай бұрын
We love our crackpots don’t we folks
@primenumberbuster404
@primenumberbuster404 6 ай бұрын
;) The typical crackpots are here to submit their opinions, and here I can't even get past half of it because of how insanely hard this topic is.
@jondor654
@jondor654 6 ай бұрын
Great minds are .,..,...
@maynardtrendle820
@maynardtrendle820 6 ай бұрын
It's so cool when people are simply arrogant, and offer nothing to counter those ideas with which they take issue! Keep it up!
@Bloodywasher
@Bloodywasher 6 ай бұрын
Well then, allow me. EUUUUUAAAHHHHH EUAHHHHH AAAAA SKYNET GRAY GOO!!! Omg I DON'T UNDERSTAND MATH HOW CAN YOU DO IT BY YOURSELF? Ancient aliens!!!! David Ike, D.u.m.b.s, Robert Bigelow taco bell space station!!! REEEE SCREEEEEEE. You're welcome. Also I looove math and science and astronomy. Happy learning!
@FrankKusel
@FrankKusel 6 ай бұрын
The 'Avada Kedavra' potential of that pointy stick is immense. Brilliant presentation.
@sadface43
@sadface43 6 ай бұрын
Read another book
@ryam4632
@ryam4632 6 ай бұрын
This is a very nice idea. I hope it will work! It will be very interesting to see new analytical expressions coming out of complicated phenomena.
@hyperduality2838
@hyperduality2838 6 ай бұрын
Solving problems is the essence of the Hegelian dialectic. Problem, reaction, solution -- The Hegelian dialectic! Neural networks create solutions to input vectors or problems, your mind is therefore a reaction to the external world of problems! Thesis (action) is dual to anti-thesis (reaction) creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic. Concepts are dual to percepts -- the mind duality of Immanuel Kant. Vectors (contravariant) are dual to co-vectors (covariant) -- Riemann geometry is dual. Converting measurements or perceptions (vectors) into ideas or conceptions is a syntropic process -- teleological. Your mind is building a "reaction space" from the input or "problem (vector) space" to create a "solution space" and this process is called problem solving or thinking (concepts) -- Hegel. Targets, goals, or objectives are inherently teleological and problem solving is a syntropic process -- duality! "Always two there are" -- Yoda. Syntropy is dual to increasing entropy -- the 4th law of thermodynamics!
@donald-parker
@donald-parker 6 ай бұрын
Being able to derive gravity laws from raw data is a cool example. How sensitive is this process to bad data? For example, non-unique samples, imprecise measurements, missing data (poor choice of sample space), irrelevant data, biased data, etc. I would expect any attempt to derive new theories from raw data to have this sort of problem in spades.
@Vinzmannn
@Vinzmannn 5 ай бұрын
That is a really good question.
@benjamindeworsop8348
@benjamindeworsop8348 6 ай бұрын
This is SO cool! My first thought was just having incredible speed once the neural net is simplified down. For systems that are heavily used, this is so important
@comosaycomosah
@comosaycomosah 5 ай бұрын
been in the rabbit hole lately so glad this popped up you rock miles!
@AVCD44
@AVCD44 6 ай бұрын
What an amazing fck of presentation. I mean, of course the subject and research is absolutely mind-blowing, but the presentation in itself is soooo crystal clear, I will surely aim for this kind of distilled communication, thank you!!
@GeneralKenobi69420
@GeneralKenobi69420 6 ай бұрын
Jesus christ, okay YouTube, I will watch this video now, stop putting it in my recommendations every damn time.
@jumpinjohnnyruss
@jumpinjohnnyruss 6 ай бұрын
You can press 'Not Interested' and it should stop suggesting it.
@Am33304
@Am33304 5 ай бұрын
@@jumpinjohnnyruss I don’t think that’s what he’s talking about.
@seanmcdonough8815
@seanmcdonough8815 4 ай бұрын
Ha, I have surrendered to it just to get it off too, but unless you hit 'not interested', it will come back even if you watched it. But I don't want to say 'not interested' and have YouTube think that I don't like AI, because I do. (BTW, this video was damn interesting, thumbs up from me.)
@tehdii
@tehdii 5 ай бұрын
I am re-reading David Foster Wallace's book on the history of infinity. There he describes Bacon's Novum Organum. In Book One there is an apt statement that I would like to paste: "8. Even the effects already discovered are due to chance and experiment, rather than to the sciences. For our present sciences are nothing more than peculiar arrangements of matters already discovered, and not methods for discovery, or plans for new operations."
@kalla103
@kalla103 4 ай бұрын
yooo, i'm so glad i came across this! i've been thinking about how neural networks can teach us about our own thinking and pattern finding; i'm glad there is discussion about it
@kalla103
@kalla103 4 ай бұрын
okay it's not about what i initially thought, but whoa. this polymath approach sounds excellent. i feel it's similar to how people who study many different fields can be quicker to grasp a novel problem
@randomsocialalias
@randomsocialalias 5 ай бұрын
I was wondering about (or missing) the concept of meta-learning with transformers, especially because most of these physics simulations shown are quite low-dimensional. Put a ton of physics equations into a unifying language format, treat each problem as a gradient step of a transformer, and predict on new problems. In this way, your transformer has learned from other physics problems, and may infer the equation/solution to your problem right away. The difference from pre-training is that these tasks or problems are shown one at a time, rather than as the entire distribution without specification. There has been work on this for causal graphs, and for low-dimensional image data like MNIST, where the token size is the limiting factor of this approach, I believe.
@andrewferguson6901
@andrewferguson6901 6 ай бұрын
This is a brilliant idea. I hope this goes places
@devrim-oguz
@devrim-oguz 6 ай бұрын
This is actually really important
@toddai2721
@toddai2721 5 ай бұрын
I would say this is not as important as the book... called "where's my cheese". Have you seen it?
@zackbarkley7593
@zackbarkley7593 6 ай бұрын
Well, I'm not sure this will go anywhere except maybe modifying some of our archaic equations with nonlinear terms. The problem is probably related to NP-hardness and using more expansive nonlinearity methods to crack certain problems that are more specified. We will always not know what we don't know. Using more general nonlinear models was bound to greatly improve our simulations. The real question for NNs is: is this the MOST ACCURATE or most INSIGHTFUL and BEST of nonlinear methods to do so? Somehow I doubt this, but it's certainly a nice proof of principle and a place to venture off further. To put all our faith in it might be a mistake though. We might be looking at the long-predicted (by mathematicians) limits to reductionism, and our first method that does not overfit billions of parameters will give us the illusion that this is the only way; we could be looking at a modern version of epicycles. If we want to really go further, we need to use such models not just to get better at copying reality, but to find general rules that allow its consistent creation and persistence through time. Perhaps one way to do this would be to consider physical-type symmetries on weights.
@slurmworm666
@slurmworm666 6 ай бұрын
RE: what you said at the end there - You're thinking of PINNs, check out Steve Brunton and Nathan Kutz
@isaacaraya3848
@isaacaraya3848 5 ай бұрын
Hmm do you think resonance and harmonics might fit in here. I imagine that patterns of connections within NN/neural networks that are self-stabilizing in some way would tend to persist throughout iterations (a kind of memory). Physics gives us resonance and harmonics that describe periodic behavior in everything from atoms to predator-prey relationships to solar systems. The fourier transform essentially gives us a logic chain to describe any signal, but as some combination of periodic frequencies instead of linear lengths. It is a concept that arises again and again. Both quantum and relativistic perspectives of spacetime are highly influenced by periodic or near-periodic behavior. Maybe this is fundamental to NN as well and the cat videos taught the AI how to recognize low-dimensional periodic relationships in data. Which could explain why it helped as a preset for totally unrelated data. I'm not exactly sure if that was at all similar to what you were suggesting but it seemed related in my mind. Half-baked thought sources: www.quantamagazine.org/how-the-physics-of-resonance-shapes-reality-20220126/ www.sciencedirect.com/science/article/abs/pii/S0893608012002584 (machine learning with adaptive resonance)
@dougclendening5896
@dougclendening5896 4 ай бұрын
Of course we only know what we know. Won't modeling the known better lead to discovering what sticks out abnormally? This will probably lead us to newer discoveries, quicker.
@zef3k
@zef3k 4 ай бұрын
Great presentation! My main takeaway is that we need a more unified approach to neural network models. Interoperability is important and can substitute for or even supersede the quality increase of pre-training.
@sinankupeli628
@sinankupeli628 4 ай бұрын
One of the best suggestions of the algorithm. There is a phrase widely used in education circles nowadays, 'learning how to learn', and it is often criticised because human babies are already born with the ability to learn. But in the case of machines I suspect that is the way to go. They lack the genetic encoding we have embedded in our biological systems for so long. Maybe we should treat these early machine learning models as their DNA?
@novantha1
@novantha1 6 ай бұрын
I can't shake the feeling that someone is going to train an AI model on a range of differently scaled phenomena (quantum mechanics, atomic physics, fluid dynamics, macro gravity / chemical / physical dynamics) and accidentally find an aligned theory of everything, and they'll only end up finding it because they noticed some weird behavior in the network while looking for something else. Truly, "the greatest discoveries are typically denoted not by 'Eureka' but by 'Hm, that's funny...' "
@rugbybeef
@rugbybeef 6 ай бұрын
The problem is thinking about these things as if the universe is distinguishing between scales. Any true "theory of everything" will by definition be scale invariant and the structures we see at different scales will be a natural result of the fundamental phenomenon at that level. We don't discuss that human beings very rarely exist entirely independently. If there is a human being in a place, there is an assumption that they had parents, were raised to maturity/independence, and that must have occurred in a finite time period. These are such basic assumptions that no one would believe someone who claims they came into being fully formed and were an independent creation by a God or randomness. We cannot know what the original person or primordial ooze came to be simply by looking at our current local environment.
@zookaroo2132
@zookaroo2132 6 ай бұрын
Just like the guy who finds a severe vulnerability in linux ecosystems, accidentally by just benchmarking a database. And shits, that happened recently lol
@IwinMahWay
@IwinMahWay 6 ай бұрын
Someone watched pi..
@lemurpotatoes7988
@lemurpotatoes7988 6 ай бұрын
There's a paper on Feature Imitating Networks that's gotten a few good applications in medical classification, and subtask induction is a similar line of thought. FINs are usually used to produce low dimensional outputs, but I was thinking about using them for generative surrogate modeling. FINs can help answer the question of how to use neural networks to discover new physics. An idealized approach would turn every step of a coded simulator into something differentiable. It occurs to me that the approach of this talk, and interpretability research generally, is essentially the inverse problem of trying to get neural networks to mimic arbitrary potentially nondifferentiable data workflows.
@lemurpotatoes7988
@lemurpotatoes7988 6 ай бұрын
This is a great talk, laughed a lot at "literally".
@lemurpotatoes7988
@lemurpotatoes7988 6 ай бұрын
Surely genetic algorithms struggle heavily with local minima. Does PySR avoid this with whatever method it uses?
@lemurpotatoes7988
@lemurpotatoes7988 6 ай бұрын
I love the idea of using a foundation models approach for PDEs of different families to deal with small sample problems.
@lemurpotatoes7988
@lemurpotatoes7988 6 ай бұрын
Never heard of either SR or program synthesis until this talk but both seem related to my interests, glad I watched this!
@lemurpotatoes7988
@lemurpotatoes7988 6 ай бұрын
Adversarial examples for science is fucking insane and I love that guy's question.
@workingTchr
@workingTchr 6 ай бұрын
Reminds me of a sociology paper with tons of seemingly complex math that, in the end, says something like, "school bullying is exacerbated when it goes unaddressed." So what was all the math for? Credibility.
@kpaulwell
@kpaulwell 6 ай бұрын
one might reason out the implications of what he said here without him having to also provide the vision for how his work might be applied. or give it to a gpt and let it do it for you
@kpaulwell
@kpaulwell 6 ай бұрын
My point being, he's no philosopher, but he's demonstrating something profound beyond his ability to express it
@karmavil4034
@karmavil4034 4 ай бұрын
Best 25 minutes of my life.. a physicist telling me a language model is just a chatbot refined.. I can do that. Let's go!
@briancase9527
@briancase9527 6 ай бұрын
Training LLMs on code doesn't teach them to reason a bit better, it teaches them to reason a LOT better. It makes sense if you think about it: what do you learn when you (a human being) learn to write software? You learn a new way of thinking.
@coda-n6u
@coda-n6u 6 ай бұрын
Cool idea! Essentially, we can deduce symbolic, testable scientific theories from deep learning models using things like PySR. Making foundation models (which are trained on a wide variety of phenomena, not necessarily related to the area of application) for specific scientific application gives ANNs an advantage. Simplicity (explainability, legibility) comes from familiarity with a problem area, so we should be training models on lots of diverse examples to help them “get used” to solving these types of problems, even if the examples may seem irrelevant (cat videos & differential equations 🐈) Interesting application of explainable AI 🎉 Congratulations on your research
@tom-et-jerry
@tom-et-jerry 6 ай бұрын
All I always wanted to hear is in this video! Thanks!
@ralobottle7666
@ralobottle7666 6 ай бұрын
This is the reason why I like YouTube
@colmaniot6331
@colmaniot6331 4 ай бұрын
Great path to walk on.. wishing luck to the lecturer and his fellow researchers
@startcomplaining9781
@startcomplaining9781 6 ай бұрын
Great presentation. Its marvelous to see a take on AI from a broad, scientific/mathematical perspective without too much focus on technicalities. Really exited to see how this might improve or add to our understanding of the/(this?:) ) universe.
@JorgetePanete
@JorgetePanete 6 ай бұрын
It's*
@startcomplaining9781
@startcomplaining9781 5 ай бұрын
@@JorgetePanete Thank you for pointing this out. It shows that LLMs are already surpassing humans (like myself) in many respects - ChatGPT makes no spelling mistakes.
@ainbrisk545
@ainbrisk545 6 ай бұрын
interesting! was just learning about neural networks, so this is a pretty cool application :)
@hyperduality2838
@hyperduality2838 6 ай бұрын
Solving problems is the essence of the Hegelian dialectic. Problem, reaction, solution -- The Hegelian dialectic! Neural networks create solutions to input vectors or problems, your mind is therefore a reaction to the external world of problems! Thesis (action) is dual to anti-thesis (reaction) creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic. Concepts are dual to percepts -- the mind duality of Immanuel Kant. Vectors (contravariant) are dual to co-vectors (covariant) -- Riemann geometry is dual. Converting measurements or perceptions (vectors) into ideas or conceptions is a syntropic process -- teleological. Your mind is building a "reaction space" from the input or "problem (vector) space" to create a "solution space" and this process is called problem solving or thinking (concepts) -- Hegel. Targets, goals, or objectives are inherently teleological and problem solving is a syntropic process -- duality! "Always two there are" -- Yoda. Syntropy is dual to increasing entropy -- the 4th law of thermodynamics!
@staticinteger
@staticinteger 4 ай бұрын
Wow this is incredible and sort of confirms some thoughts I’ve had about neural networks and the compression of knowledge.
@MDNQ-ud1ty
@MDNQ-ud1ty 6 ай бұрын
The "folding analogy" is incorrect. That is not how composition works. It works only in this case because of the very specific nature of the "first layer"(in his example).
@Gunth0r
@Gunth0r 6 ай бұрын
Indeed.
@bub19992
@bub19992 5 ай бұрын
Can you tell me more about what is incorrect?
@Jandodev
@Jandodev 6 ай бұрын
So am I the only one that's going to point out that SORA from OAI is basically a generalization of a 3D engine that might let us perform experiments?
@clownhands
@clownhands 6 ай бұрын
This is the first exciting concept I’ve heard in the current AI revolution
@macmcleod1188
@macmcleod1188 6 ай бұрын
I don't know about all the fancy stuff, but as a programmer this makes me 30 to 50% more productive, and it makes my daughter, who is a manager, about 10 to 15% more productive.
@Myblogband
@Myblogband 6 ай бұрын
Nice! I interpret this as: "these are the standard models - we can use them to kind of explain why AI is growing so exponentially in languages we can't even understand, but really, we have no idea what's going on, and this is why it's too complex for our linear models."
@isaacaraya3848
@isaacaraya3848 5 ай бұрын
Very cool visual at 28:12 - where would harmonic analysis fit?
@imakeoscillations7026
@imakeoscillations7026 6 ай бұрын
That notion of pre-trained NNs discovering new mathematical operations and generalizations is so fascinating! It's so difficult to imagine there would be huge conceptual holes in our version of mathematics, but there's no reason why they couldn't exist! They're probably already there in our foundation models, just waiting to be discovered!
@madmartigan8119
@madmartigan8119 6 ай бұрын
Slime mold is my favorite way of imagining it
@Gunth0r
@Gunth0r 6 ай бұрын
My ass smells like fish and I haven't eaten fish in a good while.
@yannickzelle5796
@yannickzelle5796 4 ай бұрын
Thank you for your talk. I found it extremely interesting. I have a comment on your statement that simplicity is implied by utility: differential equations are very useful in describing our world, yet they are, at least in my mind, not simple, and to most people also not familiar. I would love to discuss it!
@MurrayWebb
@MurrayWebb 6 ай бұрын
Incredible lecture
@samfrancis1873
@samfrancis1873 6 ай бұрын
This is some ingenious work
@orbatos
@orbatos 4 ай бұрын
I was pretty surprised to see this not actually propose much of anything other than using tools to analyse patterns, the same tools that have been in use for decades. Is this a venture pitch? Throwing more processing at it helps, but doesn't "solve" anything on its own.
@jsdutky
@jsdutky 6 ай бұрын
Regarding simplicity: I think that you are missing something important about the addition operation that makes it "simple". We are also familiar with division (the arithmetic operation) and it is also useful, but we would not say that division is "simple" in the same way that addition is simple (or we would say that addition is simpler than division, even though both are "familiar" and "useful").
@samuelwaller4924
@samuelwaller4924 6 ай бұрын
That is because addition is infinitely more "useful" than division. Literally any group of things, whether physical or not, coming together in some sense is addition. There are a lot of things next to each other in the universe lol. It is because it is so fundamental that it seems so "simple", because it is and they are just two different ways of saying the same thing.
@jsdutky
@jsdutky 6 ай бұрын
@@samuelwaller4924 I was thinking of simplicity in an algorithmic sense: addition can be performed by a simple and fast parallel circuit, while division must be performed in a stepwise, linear way, where each step depends on the result of the previous one. Multiplication is similarly simpler than division, whereas subtraction is exactly as simple as addition. My point is that these arithmetic operations are not "simple" or "complex" just because of our subjective experience with them, but because different operations actually have different innate properties, and it is a glaring flaw of analysis to think otherwise.
@JordanService
@JordanService 5 ай бұрын
This was amazing-- confirms my suspicions.
@braveecologic2030
@braveecologic2030 6 ай бұрын
I'm going to state the obvious. That is smart. Yes, it raises questions about AI explainability for deep learning NNs, but what this chap is saying is quite brilliant. For me, as long as the conventional approach is combined with the model he is propounding, there should be some excellent science coming out of it. Then there can be even more science when we start to understand the reasons and mechanisms by which the deep learning neural networks humans build do what they do and are capable of what they are. Let's not miss the point of what he is saying, at least what I interpret him to be saying... The NN is finding some order through patterns; it really is those patterns that are probably most related to something interesting, i.e. of scientific interest. Then we can sift through the rest of the noise to see if something was missed, say, if questions are presented that don't have an answer. So all in all, it is a very powerful way of cutting through the fluff. If we then want to scientifically describe the fluff itself, it is now more distinct. I think what this guy is saying is brilliant. Incidentally, I think we will ultimately find that deep learning neural networks come to sensible decisions because they have the fidelity to tap into the innate intelligence structure of reality itself, but that is a next topic, although entirely pertinent.
@aatkarelse8218
@aatkarelse8218 5 ай бұрын
Model mining? Brain digging? Fascinating. I guess we're going to need some tools to uncover these gems from the neural nets, or do we need to build the nets/models in a more comprehensive way?
@abhishekshakya6072
@abhishekshakya6072 4 ай бұрын
This is probably dumb, but does anyone else have trouble understanding the folding analogy at 12:15? Is he suggesting that the planes are superimposed, one on top of the other? Or is he suggesting that the sum of figure (a) and figure (b) leads to figure (c)? Can anyone help me understand it?
@memory199726
@memory199726 5 ай бұрын
Serious question here: isn't his "folding analogy" just superposition of waves? Or am I missing something?
@darmawanutomo3998
@darmawanutomo3998 6 ай бұрын
35:21 A model pretrained for some epochs using Polymathic's results being good does not mean training from scratch has a worse error. It is just a matter of time before the from-scratch model reaches the same quality.
@MrLuftkurort
@MrLuftkurort 5 ай бұрын
Right, the point is energy efficiency and optimized speed/quality for multiple applications. The pretraining is done once for the foundation model, which saves effort for the various later applications.
@caxsfSpeedster
@caxsfSpeedster 6 ай бұрын
Amazing lecture!!
@__-de6he
@__-de6he 5 ай бұрын
I didn't get the reason to use symbolic regression. Analytical relationships/models are not the same as symbolically representable ones. "Derivability" is required.
@markseagraves5486
@markseagraves5486 5 ай бұрын
Fantastic. At 55 minutes, though, it is suggested that we don't have a simple concept like + built into us. Perhaps not in a blank neural net, but we, for example, are not born with a blank slate. It is clear that any toddler understands, in some way, the concept of 'more' and 'less', even though they lack empirical understanding. With sufficiently robust generalized data sets based on physical principles, information theory as language, and perhaps even the nature of emotions, given enough GPUs to sustain large inter-operational neural nets, would this not give rise to something more than the sum of its parts?
@AB-wf8ek
@AB-wf8ek 5 ай бұрын
As an artist using image generation models, it's become obvious that foundational models trained on very wide content perform much better in general. It's similar to an artist drawing nudes and studying skeletons in order to draw fantasy characters better. It's also been shown that newer foundational models that have their dataset neutered do not perform as well, even though they might be higher resolution, or generate more detail. This is why I think it could be argued that training is transformative and falls under fair use. Unfortunately the marketing has been centered around making images that looked like other people's work (copying) which is a mistake. This has attracted people to file lawsuits against AI companies. This could be mitigated if AI companies worked closer with actual artists in order to better understand the creative process and how that relates to presenting the technology as a tool for artists, similar to how this presentation is illustrating how to use these tools for scientists.
@Xsiondu
@Xsiondu 4 ай бұрын
Here's an April Fools' episode idea: a Tales from the Darkside-style episode where you discuss the extremely creative solutions you had to come up with while trying to do chase films on some UFO test object. Deliver it with a straight face, with digressions on how it would be cool if, or how you are waiting to see, the technology makes its way into the civilian market, etc. For the photos, have blurry images of birds, or maybe say that the reason the pictures are blurry is because UFOs are just blurry. Even when in the hangar.
@mrtommy8875
@mrtommy8875 5 ай бұрын
Polymathic AI 🤖 is a wonderful idea 💡
@skyacaniadev2229
@skyacaniadev2229 6 ай бұрын
For "+," I do think it is simple because I hypothesize that the human brain does have built-in neurons specifically for counting small numbers (usually 5-9 varying between persons), so when you are an infant, you don't actually need to learn to count objects under this number (I suspect that in certain area of the brain, likely hippocampus, there are this amount of special neurons that are served as synaptic placeholders for the visual cortex in object identification. Then, it serves as the starting point to further learn the abstract concept of "+." That is also why "+" is the first mathematical operation that most humans (if not all) learned. If nothing is built-in, I wonder if someone can teach a human multiplication without them knowing addition. This experiment would be highly unethical, tho.
@DensityMatrix1
@DensityMatrix1 6 ай бұрын
This is already well known. It's called 'subitizing'. I believe the research showed that subitizing is not implemented in separable neural substructures.
@bub19992
@bub19992 5 ай бұрын
Thoughts from my deep ignorance Regarding the idea that " +" might be assumed ( in replies) to be the first mathematical operation of human behavior. I wonder what would be different if looking at this from my perspective "What if “ - “ is actually the first mathematical operation? What if the second operation, the “+ “ is the process of filling in the vacuum caused by the first “ - “ ? The first loss of coherence.. as an identifiable cellular membrane (ovum) being fully formed and then losing that coherence by the separation of the membrane experiences as a gap forced by penetration of new foreign material (sperm) that then becomes assimilated, exchanged. Not either or, + or - but shared - is part of + . And always was.
@azertyQ
@azertyQ 5 ай бұрын
Could you pre-train some layers (i.e. turn the standard activation functions for a few layers into PySR-estimated functions) as a way to increase/change the dimensionality of the input data? That could possibly decrease the number of layers needed or the time taken to train the network. If not, run early training with parallel, genetically pruned custom activation layers to approach the space from different paths while trying to find the minimum loss.
@ankitkumarpandey7262
@ankitkumarpandey7262 6 ай бұрын
Awesome explanation
@mike-q2f4f
@mike-q2f4f 6 ай бұрын
Fine-tune an LLM to interpret neural nets. Iterate and maybe symbolic regression (i.e. language) will help us supercharge LLM training. But hallucinations could be a major issue...
@michaelcharlesthearchangel
@michaelcharlesthearchangel 6 ай бұрын
I already did that in February when I trained ChatGPT on quantum punctuation markers and de-markers.
@lemurpotatoes7988
@lemurpotatoes7988 6 ай бұрын
Anthropic did this for GPT2
@Acheiropoietos
@Acheiropoietos 6 ай бұрын
I tried this with my gynoid, but she kicked me in the nuts.
@mathematik1865
@mathematik1865 4 ай бұрын
Quote (16:40): "State of the art for symbolic regression..." 25 days later a paper was released where so-called KANs were used to do symbolic regression, and I am pretty sure that this will be the state of the art. I know it was used only on small datasets and has some other flaws, but this is not worth talking about since we will make it work. They also reference Miles Cranmer.
@piotr780
@piotr780 4 ай бұрын
KANs do not scale well
@wissenschaftamsonntagwas4772
@wissenschaftamsonntagwas4772 5 ай бұрын
Yes AI is definitely faster generating random ideas, and is also quicker fitting these random ideas to a data set. It’s a very powerful tool.
@TrailersReheard
@TrailersReheard 6 ай бұрын
"Science today will be this one: the experimentalist arrives with a data collection unit, the theorist arrives with a Neural network and symbolic regression algorithm, we sit down, we plug in both machines, observe the two machines performing the scientific inquiry for us and then real Understanding comes. We did our super ego duty. The science is done out there for us. And maybe while the scientists sit there they come up with a truly novel idea together but it is pure curiosity, surplus, since the science is already done."
@49819d
@49819d 6 ай бұрын
At 17:53, he has a plot on the right side, but he seems to attain only an expression in the variables x and y. There is no equation, so how is he even able to make a plot against those 2 variables? If you try plotting some of the given expressions by equating them to a constant (e.g. 2(x+sin(y+1.3))=3 ), you don't get anything that looks like his plot. If there is a 3rd variable (e.g. z, or something like f(x, y)), then the plot should be a 3D plot. Instead, the plot is 2D.
@thatonekevin3919
@thatonekevin3919 6 ай бұрын
it's a mistake, they're implicitly equated to 0
@neekonsaadat2532
@neekonsaadat2532 6 ай бұрын
Fantastic work, I thought we would take AI in this direction and here we have that reality.
@hyperduality2838
@hyperduality2838 6 ай бұрын
Problem, reaction, solution -- The Hegelian dialectic! Neural networks create solutions to input vectors or problems, your mind is therefore a reaction to the external world of problems! Thesis (action) is dual to anti-thesis (reaction) creates the converging or syntropic thesis, synthesis -- the time independent Hegelian dialectic. Concepts are dual to percepts -- the mind duality of Immanuel Kant. Vectors (contravariant) are dual to co-vectors (covariant) -- Riemann geometry is dual. Converting measurements or perceptions (vectors) into ideas or conceptions is a syntropic process -- teleological. Your mind is building a "reaction space" from the input or "problem (vector) space" to create a "solution space" and this process is called problem solving or thinking (concepts) -- Hegel. Targets, goals, or objectives are inherently teleological and problem solving is a syntropic process -- duality! "Always two there are" -- Yoda. Syntropy is dual to increasing entropy -- the 4th law of thermodynamics!
@ThePyrosirys
@ThePyrosirys 6 ай бұрын
Are you aware of the fact that you didn't understand what this video is about at all?
@hyperduality2838
@hyperduality2838 6 ай бұрын
@@ThePyrosirys You can treat input vectors as problems, watch the following:- kzbin.info/aero/PLMrJAkhIeNNQ0BaKuBKY43k4xMo6NSbBa Problems are becoming solutions (targets) via optimization -- a syntropic process, teleological. Neural networks are therefore syntropic as they learn as they converge towards goals and solutions. The learning process is teleological as your goal is to achieve a deeper understanding of reality. Perceptions are dual to conceptions -- the mind duality of Immanuel Kant. Machine learning is based upon the Hegelian dialectic if you treat your input vectors as problems!
@hyperduality2838
@hyperduality2838 6 ай бұрын
@@ThePyrosirys Minimizing prediction errors is a syntropic process -- teleological. "The brain is a prediction machine" -- Karl Friston, neuroscientist. Syntropy is the correct word to use here and means that there is a 4th law of thermodynamics -- duality. Average information (entropy) is dual to mutual or co-information (syntropy) -- information is dual! Your brain processes information to optimize your predictions -- natural selection.
@Yes-ux1ec
@Yes-ux1ec 6 ай бұрын
This is very interesting, can you please expand more
@Cloudbutfloating
@Cloudbutfloating 6 ай бұрын
So what you are saying is: our mind creates models based on patterns we observe to predict reality? How does that imply that information is dual? What do you even mean by "information is dual", and how do you apply Hegelian dialectics here? Thesis/antithesis refer to concepts that are contradictory to each other.
@BlakeEdwards333
@BlakeEdwards333 6 ай бұрын
Beautiful lecture. Been saying high dim data -> NN -> theory would be a good approach for many years now! Glad to see people working on this. 😊
@joeunderwood8973
@joeunderwood8973 6 ай бұрын
35:16 Yes, doing the model from scratch with traditional machine learning is worse compared to the pre-trained generative network, but only for the *same time frame*. If you give the traditional machine learning approach more *time*, then it can out-perform the pre-trained generative network, while the pre-trained network will just keep on spitting out the same type of results.
@joeunderwood8973
@joeunderwood8973 6 ай бұрын
a proper comparison would require a 3 dimensional chart comparing model error vs #samples AND training time+network evaluation time.
@joeunderwood8973
@joeunderwood8973 6 ай бұрын
The better approach is to use the pre-trained generative network to bootstrap samples for the genetic programming("Scratch-AViT-B") model thus getting the best of both.
@JTan-fq6vy
@JTan-fq6vy 2 ай бұрын
17:20 If the genetic algorithm is a brute-force algorithm, why use it? Is its time complexity less than that of the brute-force algorithm, similar to dynamic programming used in RL?
@mollynaquafina
@mollynaquafina 6 ай бұрын
my man just reinvented the wheel with already existing meta and unsupervised learning. good luck ig
@DensityMatrix1
@DensityMatrix1 6 ай бұрын
You might want to think about simplicity in terms of Kolmogorov complexity, e.g. your NN should try to emit the least complex, in the Kolmogorov sense, syntax tree. Also, I think "+" is simple because it is closed over the integers. I think that if your operation takes you from one domain to another, it's more complicated. In that way you might consider using Category Theory. You could think about penalizing models that "move" further away into other mathematical spaces from a "base" space.
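A rough sketch of the complexity-penalized selection this comment suggests, using expression-tree size as a crude, computable stand-in for Kolmogorov complexity; the candidate expressions, fit errors, and penalty weight are made up for illustration.

```python
import sympy as sp

# Candidate symbolic expressions (illustrative only).
x, y = sp.symbols("x y")
candidates = [x + y, x * y + sp.sin(y), 2 * (x + sp.sin(y + 1.3))]

def complexity(expr):
    # Operation count as a proxy for description length; true Kolmogorov
    # complexity is uncomputable.
    return sp.count_ops(expr)

def score(expr, fit_error, lam=0.1):
    # Lower is better: trade residual error against expression complexity.
    return fit_error + lam * complexity(expr)

# Example usage with made-up fit errors for each candidate.
errors = [0.9, 0.2, 0.01]
best_expr, best_err = min(zip(candidates, errors), key=lambda p: score(p[0], p[1]))
print(best_expr)
```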
@coda-n6u
@coda-n6u 6 ай бұрын
Kolmogorov complexity can be thought of as the ideal "lower bound" for a compressor/predictor in unsupervised learning. But it's also uncomputable, which would make it hard to implement in practice 😅
@DensityMatrix1
@DensityMatrix1 6 ай бұрын
@@coda-n6u True, I think I was trying to get at a weighting of the symbols used. I'm not sure if that could be learned or would have to be assumed. I think 1+1 is simple because it is in some ways assumed (forgetting Russell), whereas something difficult, like say the Kullback-Leibler divergence, is defined in terms of simpler primitives. Edit: the big picture is that you need some sort of error term to trade off against accuracy, otherwise your tree grows without bound, either in depth or in the complexity of the operators. Consider it something like dropout or pruning.
@coda-n6u
@coda-n6u 6 ай бұрын
@@DensityMatrix1 Yeah that's interesting! I feel like any theory with a sufficiently complex symbolic representation could be factored into smaller bits that could themselves be learned as features. It's a big search problem, so I guess it's about allowing the algorithm to search deeply + generate complicated symbolic representations, but having it bias towards shorter ones (since they're more likely to be true). Honestly a big problem I have no idea how to solve.
@lemurpotatoes7988
@lemurpotatoes7988 6 ай бұрын
Solomonoff induction isn't tractable for beings with finite compute and AFAIK there's no standout best approximation to it. Myopic piecemeal modeling is probably better in many cases than trying for a theory of everything.
@goranlazarevski7241
@goranlazarevski7241 5 ай бұрын
30 mins to say that you can fit simpler models to a neural network data-generating process, and another 30 to say that more training data (even if relegated to what we call “pretraining”) improves performance. ps: things are simple because they are ubiquitous and they are ubiquitous because it’s how the world works (law of conservation of mass and energy, i.e. addition), not because it’s “useful”
@Daniel-qr6sx
@Daniel-qr6sx 4 ай бұрын
This was very interesting
@frederickbrown8212
@frederickbrown8212 6 ай бұрын
Simplicity is the absence of relative complexity.
@99bits46
@99bits46 6 ай бұрын
I would love to see some breakthrough in the Dark Matter regime. There is so much data regarding Dark Matter, yet no theory to back it up.
@DougMayhew-ds3ug
@DougMayhew-ds3ug 6 ай бұрын
The issue is discovering the higher-ordering principle which subsumes a continuum of self singularities and discontinuities. Linear math works well in-between the singularities, but cannot extrapolate through them, in a sense they are like mathematical worm-holes. Attempts to linearize across the discontinuities will fail. A whole harmonically-related series will only be properly understood from the perspective of a higher-ordering principle, similar to the idea of projection from a higher magnitude to a lower dimensional space, or from the idea of negative curvature. The point is the epistemological assumption of a static model is problematic, the real world has static islands which are bounded within areas of great change, and so the basic function changes completely there, that is to say, the dynamics of change themselves change. So to bridge that gap you can’t just ignore it, or flatten it, you have to seek how to remap it in such a manner that it is no longer infinite, but cyclical, as Gauss did with the complex number domain.
@Gideonrex1
@Gideonrex1 5 ай бұрын
Yeah, I read that like 5 times and have no idea what you’re trying to say.
@Alpha_GameDev-wq5cc
@Alpha_GameDev-wq5cc 4 ай бұрын
@@Gideonrex1 pretty sure neither does he
@scaomath
@scaomath 4 ай бұрын
33:16 Mark my words: there won't be any foundation-level model that can achieve 5 digits of accuracy like finite differences do for PDEs, which were popularized three hundred years ago by Euler. Using the model alone (without the help of non-blackbox outer algorithms or a second-order optimizer), no matter whether you have 1000 billion params or whatever, never. 1000 years later our AI overlords will still use finite differences (maybe the BDF table will be learned by a blackbox).
@nicholastaylor9398
@nicholastaylor9398 5 ай бұрын
Did you see the Lifestyle Trader ad? Proof that money is not just a commodity but logarithmic.
@rugbybeef
@rugbybeef 6 ай бұрын
Am I confused? It feels like he is explaining the calculus of variations and linear algebra. The elemental functional priors he seems to be talking about are literally the concepts of functions and groups of related functions existing in hierarchical topics, like trigonometry grouping sine, cosine, and tangent together because they are mutually dependent and reduce the parameter space. Students may ask why we learn both sine and cosine when we could just learn one and use a parameterized offset for the other. The synthesis is in seeing how, together, they can convert two positions into a single time parameter given a fixed length and a pivot point. Similarly, an ellipse can be described by these same two equations with a single parameter t for position along the curve, plus the axis lengths. These are all model-building concepts from statistics, though. Am I missing something? It feels like he is explaining statistical model building. Yes, parsimony is great and admirable in a model. The push for larger and larger models is simply brute-forcing and filling out the solution space with so many variables that it would be difficult for an answer not to exist if the idea previously existed in the world. However, they suck at low-context situations where they need to make deductive leaps. If I'm talking about fear of a need to "abort", whether the conversation is happening at Kennedy Space Center or in a medical examination room completely changes what we are talking about. If I don't tell ChatGPT the context, it may suggest language about "T-minus" in one context or "weeks" in another. At some level we are simply talking about different methods of representing temporal, spatial, social, economic, etc. relationships, and how abstracted they are from the ideas of initiating, terminating, increasing, decreasing, linear, exponential, repetition, regular, irregular, stochastic, or predictable. Whether one uses the term "sine" or "wave-like" or "repeating" is all just representation of the same linguistic concept.
@vethum
@vethum 6 ай бұрын
Brilliant ideas
@ardaozkut1089
@ardaozkut1089 4 ай бұрын
Is the part at 12:40 just convolution, or am I just dreaming?
@Sumpydumpert
@Sumpydumpert 4 ай бұрын
Yea it’s great if ur trying to use a system In a system to garner traction
@mikl2345
@mikl2345 6 ай бұрын
So if you sought to get what an LLM knows out into some equations we could understand, what could they be like?
@jfverboom7973
@jfverboom7973 6 ай бұрын
With enough inputs you can make any curve or field match the current data. So is this even science? I am very skeptical. It will provide very little real insight when you have an inscrutable AI model able to predict something. It might as well be the Oracle of Delphi.
@echoeversky
@echoeversky 4 ай бұрын
Well, more fun than AI churning out what are the most lethal substances. Modern nerve agent analogues were in the outputs.
@STEM671
@STEM671 5 ай бұрын
Specific density squared ; Volume Quebed ; Vice Versa und AUgmentation Cycle
@STEM671
@STEM671 5 ай бұрын
Flux TEMP composite material Augmentation Cycle und NEURO CELLULAR GEN_REGEN CYCLE @ NEUROPLASTICITY U V STABILIZER 7:50
@jabowery
@jabowery 5 ай бұрын
Symbolic Regression is starting to catch on but, as usual, people aren't using the Algorithmic Information Criterion so they end up with unprincipled choices on the Pareto frontier between residuals and model complexity if not unprincipled choices about how to weight the complexity of various "nodes" in the model's "expression". A node's complexity is how much machine language code it takes to implement it on a CPU-only implementation. Error residuals are program literals aka "constants". I don't know how many times I'm going to have to point this out to people before it gets through to them (probably well beyond the time maggots have forgotten what I tasted like). This whole notion that "+" is just what we're used to is intellectual poison.
@zestyindigo
@zestyindigo 6 ай бұрын
someone so smart, only listenable at 4x
@emreon3160
@emreon3160 6 ай бұрын
This is very trivial knowledge if one has an open mind, but it's great that it has now been formally and empirically proven for those out there who need proofs.
@Kadag
@Kadag 6 ай бұрын
36:36 Becoming more basically intelligent because of understanding spatio-temporal connectivity. The flashing-faces-in-peripheral-vision illusion shows us the monsters we create when we lack that.
@Sairfecht
@Sairfecht 4 ай бұрын
What happens if it’s trained on Schrödinger’s cat videos?
@PrivateSi
@PrivateSi 6 ай бұрын
Surely the problem with AI is Fudge In = Fudge Out, so if the Standard Model (and especially attempts to fix it) is full of fudge then fudge will result. I'm not saying the model outlined below is correct, but if it is, or something pretty similar, no physics AI would come up with it, even if fed all the accepted (potentially) useful papers and (filtered, biased, artefact-ridden) data..
POLECTRON FIELD: cell: a + & a - particle split by Full Split Energy as a positron+ & electron-. Bonds to 12 neighbours
MATTER: p+ / e- = half cell (& a cell as +-+ / -+-)? Polarises field as + & - shells. SPIN: centre polarisation axis
LECKY: total absolute charge. MASS: cells/lecky inside particles. INERTIA: field rebalances behind mass with a kick
STRONG GRAVITY: field repels mass. DARK ENERGY: voids grow as lecky shrinks cells and is lost to gravity gradients
DARK 'MATTER': galactic lecky gradient. Denser field slows acceleration and TIME, thinner field aids acceleration
BIG BANG: more proton-antiproton pairs malformed as proton-muon than antiproton-antimuon so hydrogen beat antihydrogen
POSITRONIUM: e+p. Muon: ep_e. Proton: pep. Neutron: pep_e. Tau: epep_e. Neutron mass is halfway between muon and tau
ANTIMATTER: 1,2 e_p pairs annihilate. 3: proton+anti proton or muon+anti muon. 4: neutron+anti neutron. 5: tau+anti tau
WEAK FORCE: unstable atoms form and annihilate e_p pairs. BETA- DECAY: pep_e => pep e. BETA+: pep + new e_p => pep_e p
NUCLEAR FORCE: neutron electrons bond to protons. ENTANGLEMENT: correlation broken by interaction? Physical link?
BLACK HOLE: atoms cut into neutrons fused as higher mass tau cores (epep). Field rotates. Core annihilates: ep => cell?
PHOTON: cell polarisation/lateral shift wave. LONGITUDINAL WAVE: gravitational wave, neutrino: 1 to 3 cell wave
DOUBLE SLIT: photon/particle field warps diffract and interfere, guiding the core. Detectors interfere with guides
ENTROPY: simplicity. Closed system complexity reduces over time. Uniformly (dis)ordered (hot)/cold field is simplest
@rugbybeef
@rugbybeef 6 ай бұрын
This is not an endorsement of your alternative model, but of the skepticism of models and of digging deeper into the conceptual ruts that we dig ourselves into. In flat world, we are all just lengths...
@PrivateSi
@PrivateSi 6 ай бұрын
@@rugbybeef .. and widths unless it's a 1D flat world... I'm not into the Holographic Universe even though 2D is technically simpler than 3D - just not when we live in a 3D reality. Gravity, Dark Energy and Dark Matter need to be linked to one field, might as well make it an EM particle field. Neutron Mass is halfway between Muon and Tau bar a tiny bit of binding energy. I don't know why this relationship is not mentioned by anyone but me.
@rugbybeef
@rugbybeef 6 ай бұрын
@@PrivateSi So I've always wondered about this: as vision is a 2D diminishment of our 3D world, I always believed that flatlanders would only see the lengths of their colleagues in a 1D analogue. Like, if their square friend had distinct colors on each face, they would see and could infer their colleague's vertices. However, differentiating a circle of radius 1 and a square of width 1 that rotated synchronously each time you tried to move around it would be impossible.