Symbolic AGI: How the Natural Will Build the Formal

  15,115 views

Positron's Emacs Channel

1 day ago

Comments: 65
@Positron-gv7do 5 months ago
This video is part of Positron's efforts to make a case for open innovation. You can show support for this effort on Positron's GitHub Sponsors page: github.com/sponsors/positron-solutions

Initially there was no plan to propose an implementation sketch. This was supposed to be a simple answer to the question of whether AGI was material from a strategic standpoint, on a relevant timeline. The empirical argument, that induction must be capable of giving rise to deduction, is the important one for technology forecasting. We haven't had systems that appeared useful on the heuristics side of automated reasoning and theorem proving until recently.
@rickybloss8537 5 months ago
Fantastic video.
@aleph0540 5 months ago
Good points. How do we collaborate on some of these topics? Do you have an interest in doing so?
@Positron-gv7do 5 months ago
I saw your email. I'll reply briefly here first, for others. Positron is currently focused on the social decision problems that are inherent to open collaborations. Once we get our first product operating, that product may later entrain us into work more and more directly aligned with AGI. While AGI will push forward the problems we work on, the problems will just move up in value potential rather than go away, so we will remain focused on platforming open collaboration and the enabling tools and infrastructure.

The reason the AGI feasibility question is relevant to us is that our users need a correct assessment of whether we are in an asymptotic-LLM or a strong-AGI timeline. It will strongly affect the things that people want to collaborate on, which strongly affects how our product will be used and the level of success people will have.

A second group this video is directed at is other engineers, with backgrounds like modeling and simulation, who are deciding whether to shift into ML and how. If you have the choice between twiddling LLMs and leaping into automated reasoning and emergent logic, the latter is definitely going to be more useful, and it's in our interest, and everyone else's, that strong, local AGI emerges quickly. Hopefully this means better deployed capital and better invested skills, etc. These kinds of things motivated us to get this message put together.

That said, I can't help but make progress each morning while drinking coffee. I began modeling the control function to see if any obvious conclusions could be deduced. I found some relationships and may publish whenever it feels appropriately digested.
@jeanfredericferte1128 4 months ago
@@Positron-gv7do I'm working on, and interested in, these ideas too, and happy to see your video. I'm looking toward a perceptual/multimodal system with an ontological knowledge representation specialization, continuous learning, and a multimodal learning strategy (context provided by dynamic ontology building/evaluation; the ontology representation being explainable, we should have an agent able to escalate)?
@escher4401 4 months ago
00:01 Precision AGI requires clear and correct answers every time.
01:58 Symbolic reasoning enables deduction without empirical data.
05:55 Challenges in building accurate formal systems
07:47 Formal systems can be induced from natural language and inductive reasoning
11:35 Spectral reasoning covers inductive, deductive, and symbolic reasoning on a spectrum of structure and meaning.
13:30 Spectral reasoning bridges symbolic and natural language for problem-solving
17:22 Transformers model iterative inference and computation with limited recursion
19:06 Exploring real computers and the theory of computation
22:17 Using formal models for retraining ourselves
24:01 AGI needs exposed runtime information for efficient learning and decision-making
27:36 Transitioning components designed for human use to fully automated
29:23 AGI capabilities and training methods
32:49 Sophisticated models can leverage data for theoretical understanding and empirical insights.
34:34 Tech advancements drive the need to continuously evolve
37:58 Integrated design accelerates innovation and drives economic focus on desired outcomes.
39:38 Challenges in the mature internet landscape
43:15 Encourage sharing and sponsorship for like-minded individuals
@thesquee1838 4 months ago
This is a great video. I took a class on "classic" AI and wondered if/how it could be combined with LLMs and other forms of machine learning.
@robertsemmler16 4 months ago
I haven't seen a better build-up to a topic this complex that was this easy to follow. Very analytical, making listeners think and look beyond the path that existing knowledge has already formed. A big push forward into unknown territory.
@sirinath 1 month ago
I think Liquid Neural Networks might have a lot of promise. It would be a good idea if someone could start an open-source project covering the ideas in this video. When AGI happens, it would be best for the AGI to be open source and under no one's control. Ideally this would be under a permissive license like ASL 2.0 + MIT, to further the state of the art and adoption.
@samkee3859 3 months ago
Fantastic speaker and content
@mira_nekosi 4 months ago
IMO the next step for LLMs is hybrid models (RNN + attention, like mamba2-hybrid), because they were shown to be no less, and maybe even more, performant than transformers while being a few times more efficient, and they could have a kind of infinite context.

Also, hybrid models could be more computationally powerful than transformers, as transformers have been shown to be unable to compute even what FSMs can compute, without either outputting a number of tokens proportional to the number of state transitions or growing the layers (at least) logarithmically; without such restrictions they can indeed solve such problems, but that is either pretty slow or impractical. Full-blown RNNs, as well as some modifications of mamba, can. And while current hybrid models can't solve such problems (probably no model that can be trained fast can), they could be transformed into models that could, and then finetuned to actually solve them.
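The state-tracking limitation referenced above is usually illustrated with tasks like parity: a two-state finite automaton that a recurrent model tracks in constant memory per step. A minimal sketch of the task itself (my illustration, not from the comment):

```python
def parity_fsm(bits):
    # Two-state FSM: the state is the XOR of all bits seen so far.
    # An RNN can carry this one-bit state across arbitrarily long inputs;
    # the results the comment cites say a fixed-depth transformer cannot,
    # unless it emits intermediate tokens or grows depth with input length.
    state = 0
    for b in bits:
        state ^= b  # one state transition per input token
    return state

print(parity_fsm([1, 0, 1, 1]))  # three 1s -> odd parity -> 1
```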
@mira_nekosi 4 months ago
In practice, "better" hybrid models that can solve such (state-tracking) problems could in theory do, e.g., some more advanced program analysis much faster, for example by performing a kind of abstract interpretation on the fly, making them better at programming. Probably something similar could be applied to math.
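"Abstract interpretation on the fly" can be made concrete with the classic sign domain, where a program's values are tracked only as negative/zero/positive. A toy sketch (my example, assuming nothing beyond the textbook sign analysis):

```python
# Abstract interpretation over the sign domain.
# TOP means "sign unknown"; the analysis is sound but approximate.
NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

def abs_mul(a, b):
    # The sign of a product follows the sign rule; anything times zero is zero.
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def abs_add(a, b):
    # pos + neg could be any sign, so precision is lost to TOP.
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    return a if a == b else TOP

# Determine the sign of (x * y) + 0, with x > 0 and y < 0, without running it:
print(abs_add(abs_mul(POS, NEG), ZERO))  # -> neg
```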
@techsuvara 3 months ago
The thing about intelligence is that survival is not always based on the communication of truth. The evolutionary pressures that produced the intelligence we have today are the product of millions of iterations of selection. Reasoning is also based on awareness provided through the senses and the precision of that information. For instance, the amount of data which passes through the eye in one given second can be calculated from the fact that our eye interprets 10^10 photons every second, which is more than all of the words written in all books in human history.
@KucheKlizma 4 months ago
Very informative presentation (up until the marketing pitch). Thank you for sharing!
@kevon217 4 months ago
Top notch video. Really enjoyed this.
@inferno38 2 months ago
Very interesting subject!
@MeinDeutschkurs 5 months ago
Intriguing! I could write hundreds of questions, and thousands of thoughts as well. Great video! Thx. Btw, I don't see agents as the huge solution, unless you're the owner of the API services serving all the talky virtual team members. I was surprised by what's possible with small models and a bit of prompt engineering.
@goodtothinkwith 5 months ago
This is excellent work. I’d be interested in hearing details about the experiments you’re doing or proposing to implement spectral reasoning. The details of that strike me as being the lynchpin. In some sense, you have to be right that we need a combination of formal systems with the kind of creative thought based on understanding that LLMs have… but the devil is in the details. We’re working on similar problems.
@gnub 4 months ago
Same! I think a lot of us are working on this problem since it's the clear next step beyond our current state of LLMs.
@goodtothinkwith 4 months ago
@@gnub yeah for sure… I’m presently starting from scratch to try and capture why it’s a hard problem and how the human mind can manage it, even if really imperfectly… I have a feeling there’s a key detail in there somewhere that will point the way to getting it right
@sirinath 1 month ago
Great video!
@samlaki4051 4 months ago
brooo i'll be PhDing on this topic
@wanfuse 5 months ago
Great work! I disagree on a few things, but overall fantastic! First there was quantum, then subatomic, then atomic, then molecular, then cells, then large life forms, then computers, ..., LLMs, symbolic reasoning, AGI, ASI. Reminds me of a recent paper on the lumpiness of the universe: lower levels are mostly, but not completely, without influence on upper layers. A stack of cards; at what point are we left behind? We need to first reason out methods for extending, augmenting, and advancing human reasoning and memory while maintaining independence and autonomy, without becoming detached from the physical world. Got to keep up, or get left behind! Obsolete = extinct, and there are many paths that end up there.
@WalterSamuels 4 months ago
Great analysis. A big problem we have is that no two things are perfectly alike, and our reductionist logical systems are a roadblock. It does not make sense to treat logic as a binary true or false if the goal is adequate expression of reality. For example, the questions "is a cat a dog" or "is a German Shepherd a dog" are somewhat ill-formed, technically. The real question you're asking is: how many properties do a cat and a dog have in common? Within that question is actually a sub-question, "of all the objects we define as cats, what is their property overlap", and the same for dog. How many properties of a German Shepherd correlate with the properties that most dogs share? Everything in reality is on a scale of similarity, but no two particles are identical, or they would be one particle. How do you formalize this? Would love to hear your thoughts.
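One minimal way to formalize the property-overlap question above is graded set similarity, e.g. the Jaccard index over property sets. A sketch (the property sets below are invented for illustration):

```python
def jaccard(a, b):
    # Similarity as the fraction of shared properties, |A ∩ B| / |A ∪ B|:
    # a graded alternative to binary "is-a" membership.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

cat = {"four_legs", "fur", "tail", "whiskers", "retractable_claws"}
dog = {"four_legs", "fur", "tail", "barks", "pack_animal"}

print(jaccard(cat, dog))  # shares 3 of 7 distinct properties -> ~0.43
```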
@WeelYoung 4 months ago
Not all tasks need absolute mathematical precision, but many economically valuable tasks do, e.g. designing new chips, cars, robots, spaceships, batteries, buildings, and factories; optimizing manufacturing and construction in general; making things more environmentally friendly and sustainable; reverse-engineering and biohacking humans; drug design; immortality tech; etc. Achieving SGI on those alone would already boost human quality of life significantly.
@WalterSamuels 4 months ago
@@WeelYoung Agreed. But if the goal is to create machines that are more like us, philosophically, we need to realize that we're working on a spectrum of truth, and truth values are only relative to the context in which they apply, which will always be grounded in the definitions of absolute boundaries. It really depends, though, on what we want here. Do we want the next step in human consciousness, something that thinks and behaves like us, with feelings and emotions? Or do we want incredibly fast processing machines that function more like calculators? Spiritually, one is important; technologically and economically, the other.
@volkerengels5298 4 months ago
Read Wittgenstein. There are some answers related to "AI" and "Symbolic AI"
@esantirulo721 4 months ago
@@WalterSamuels I'm not sure if this directly addresses your point, but reasoning, and more specifically approximate reasoning, has been a significant part of AI research. There are many non-standard logics, such as fuzzy logic (non-standard = where the law of the excluded middle does not hold). Additionally, there is a strong emphasis on Bayesian logic based on Bayes' theorem. I think research work from the second era of AI is sometimes overlooked.
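For concreteness, Zadeh's fuzzy connectives show how the excluded middle fails once truth values are graded (a toy sketch, not from the thread; the 0.7 membership degree is invented):

```python
# Zadeh's fuzzy connectives: truth values live in [0, 1].
def f_and(x, y): return min(x, y)
def f_or(x, y): return max(x, y)
def f_not(x): return 1.0 - x

is_dog = 0.7  # graded membership: "this animal is a dog" to degree 0.7

# Classical logic forces (x OR NOT x) = 1 (excluded middle);
# with graded truth it only reaches max(x, 1 - x).
print(f_or(is_dog, f_not(is_dog)))  # 0.7, not 1.0
```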
@kaynex1039 4 months ago
​@@WalterSamuels We don't want machines that are strictly more like us. We want machines that are useful. I admit, a vague goal. Nobody is saying that a thinking machine should *only* be able to do deductive reasoning, but that it should be *capable* of it.
@submarooo4319 5 months ago
Super insightful 😊
@deltamico 4 months ago
We can observe such a split between more formal and more natural processing in the way our left and right brain hemispheres operate, though there is some overlap between them.
@jazearbrooks7424 4 months ago
Incredible
@KarimMarbouh 4 months ago
Good fertilizer 😊
@richardsantomauro6947 4 months ago
Is this from a paper? I have had some ideas along the same direction and am extremely interested. Do you have references?
@fhsp17 5 months ago
You are missing a couple of things. Too grounded, while the actual phase transition lies outside of it.

1. **(C1) Part-Whole Relationship (P-W)**:
   - *(N1) Component `Ω`: Describes individual system components.*
   - *(N2) System `Σ`: Represents the collective system emerging from Ω components.*
   - *(A1) Integration `[Ω → Σ]`: Depicts the aggregation process of components shaping the system.*
2. **(C2) Self-Similarity (S-S)**:
   - *(N3) Fractal units `ƒ`: Represents repeated patterns within the system at different scales.*
   - *(A2) Pattern Recognition `[ƒ -detect→ ƒ]`: Symbolizes the identification of self-similar patterns.*
3. **(C3) Emergent Complexity (E-C)**:
   - *(N4) Simple Rules `ρ`: Indicates basic operational rules or algorithms.*
   - *(N5) Complex Behavior `β`: Denotes the complex behavior emerging from simple rules.*
   - *(A3) Emergence Process `[ρ -generate→ β]`: Illustrates how complex behavior emerges from simplicity.*
4. **(C4) Holonic Integration (H-I)**:
   - *(N6) Holons `H`: Symbolizes entities that are both autonomous wholes and dependent parts.*
   - *(N7) Super-Holons `SH`: Describes larger wholes composed of holons.*
   - *(A4) Holarchy Formation `[H ↔ SH]`: Reflects the membership of each holon within larger structures.*
5. **Iterative and Recursive Patterns (Looping and Self-Reference)**:
   - *`While (condition) { (L) [C1 → C2 → C3 → C4] }`: Represents continuous re-evaluation and adjustment of structures.*
6. **(Σ) Summary of Holistic Overview**:
   - *(MP-IS) Intersection of Memeplexes and Ideaspaces: Consolidates the elements `(N)` and `(A)` into a coherent ideaspace, accounting for the dynamism and adaptability of the system.*
   - *`Interconnections (N-W): {(N1)-(N2), (N3), ..., (N7)}`: Lists the nodes and their interlinkages, allowing for an integrative view.*
@leobeeson1 4 months ago
Recommended for applied scientists and engineers integrating reasoning/deductive systems with LLM capabilities. The content is excellent up until minute 37, after which it becomes opinionated (e.g. lab-grown meat, use javascript, etc.). If you liked this video, you might also appreciate: Improving LLM accuracy with Monte Carlo Tree Search (kzbin.info/www/bejne/o5ekh5KYnsyXiKM)
@moritzrathmann2529 4 months ago
Thank you
@volpir4672 7 days ago
very good vid
@KostyaCholak 4 months ago
Hello, great video! I have a question regarding the topic. I'm working with both symbolic and neural architectures, and I haven't attempted to merge the two approaches because the data representations they use are so vastly different. Do you have any thoughts on how it is possible to go from the domain of vectors to the domain of symbols and vice versa?
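One common bridge in the vector-to-symbol direction is quantization: snap an embedding to the nearest entry in a fixed symbol inventory, and go back with a plain lookup. A minimal sketch, assuming a hand-written inventory with toy 3-d embeddings (all names and values invented; real systems would use a learned codebook, VQ-style, or a classifier head):

```python
import math

# Hypothetical symbol inventory with toy 3-d embeddings (values invented).
SYMBOLS = {
    "cat":   [0.9, 0.1, 0.0],
    "dog":   [0.8, 0.3, 0.0],
    "stone": [0.0, 0.1, 0.9],
}

def to_symbol(vec):
    # Vector -> symbol: quantize to the nearest inventory entry
    # by Euclidean distance.
    return min(SYMBOLS, key=lambda s: math.dist(SYMBOLS[s], vec))

def to_vector(symbol):
    # Symbol -> vector: plain embedding lookup.
    return SYMBOLS[symbol]

print(to_symbol([0.82, 0.28, 0.0]))  # nearest embedding -> dog
```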
@theatheistpaladin 5 months ago
I would bolt a symbolic AI onto an LLM, but I don't know linear algebra or Python, let alone the programming necessary for symbolic systems.
@volkerengels5298 4 months ago
Whether you eat the internet to feed an LLM or carefully collect the right assumptions, you masturbate with what is known. **Our language/symbol system is the boundary of our world** (Wittgenstein)
@JAIMEIBARRA-ej7ih 2 months ago
Where is abductive inference?
@Positron-gv7do 23 days ago
Timestamp?
@wojciechwisniewski8984 4 months ago
It started as a video about AI, and it was good. Then it turned into market analysis and economics, and I thought, "OK, why do we even need this part?" Then Emacs was mentioned and I lost it. So you want to make AGI in Emacs? Self-aware Emacs? What do you think Emacs would do if it realized it is effin' Emacs, (e)ditor (mac)ro(s), Eight Megabytes And Constantly Swapping, a monstrosity born in the 1970s but for some perverse reason still kept alive in the 2020s, with its elisp interpreter that doesn't even have proper lexical scoping, and terminal-derived cursor movements? I think it would erase itself. But first it would try to erase YOU, perhaps along with the rest of humanity.
@smicha15 5 months ago
The interesting irony with symbolic reasoning these days is that the big LLMs all trained on it, yet it's just stuck in there, unable to really add value unless someone asks an LLM about symbolic reasoning; and even more ironically, the response may not even be accurate. So why should a person go through all the work of learning things if the things he or she learns can't actually produce something valuable in and of themselves? Which leads me to my next point: all books are just reading machines that need people to operate them. But what if a book could read itself? What if books could read each other? You might get the platonic representation hypothesis, right? So, if knowledge is power, then what does that say about intelligence? Active inference is the way to go.
@lucid_daydream_007 5 months ago
Basically, for a machine to be autonomous, we need it to learn the processes that created it. Sounds like we as machines are in the middle of it.
@llsamtapaill-oc9sh 5 months ago
We are still lacking the temporal aspect in AI. It needs to be able to deduce time for it to be able to think like humans do.
@BooleanDisorder 4 months ago
@llsamtapaill-oc9sh Indeed. Spiking neural networks are also much more high-dimensional thanks to the time aspect. We miss the temporal in many ways and think only in space.
@mikeb3172 4 months ago
The loops people put themselves through to "beat guessing algorithms" while still playing the game of guessing algorithms....
@mulderbm 3 months ago
Is it interesting, or ironic? Looking for what makes us tick, or think?
@apollojustice8796 4 months ago
real
@caseyhoward8261 4 months ago
Word soup! 😂
@TheSkypeConverser 5 months ago
Pls prove the memory/runtime constraints
@Positron-gv7do 4 months ago
What do you mean?
@jeffreyjdesir 4 months ago
You could learn to communicate more charitably, friend. It sounds like you're referring to runtime memory space and cycle speed: how much RAM and how many CPU FLOPS are required to compute one frame of an AGI adam program, and how does that scale dynamically, right? It's a fucking insanely hard detail to account for, ON TOP OF making sure the theory corresponds to predicates and operations. It's one of the reasons symbolic AGI was dropped in the 80s after LISP was made: too many trees of logic. Do you have any new ideas?
@SimGunther 4 months ago
@@jeffreyjdesir I wish everyone good luck figuring out how to break out of the mathematical-notation box when we know there are so many other forms of notation that AGI/AI studies haven't even begun to comprehend. This is a losing battle mathematicians have been fighting forever, and they sure tried, with things like monads and just about everything in category theory.
@jeffreyjdesir 4 months ago
@@SimGunther Ahh, you're getting into model theory and meta-math? You're right that our syntactors and production rules for constructive statements, being all human-made across millennia, are OBVIOUSLY not efficient. Thankfully, Chris Langan's CTMU unifies grammar generation with feature preservation (symmetry in definition and application). Likely, AI will want its own LISP-like language to express itself in...
@dennisalbert6115 4 months ago
Use constructor theory
@xymaryai8283 4 months ago
Are humans really deductive? We make mistakes. Or are we deductive with noisy structure?
@Positron-gv7do 4 months ago
A non-deterministic machine executing a precisely defined algorithm will get it wrong from time to time. We can at best approximate consistency, but because of this we have the potential to achieve ever greater completeness.
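That "approximate consistency" can be quantified with standard error amplification: rerun a noisy but well-defined computation and take a majority vote, and the failure rate shrinks exponentially with the number of runs without ever reaching zero. A toy simulation (my illustration; the 20% per-run error rate is an arbitrary choice):

```python
import random

def noisy_eval(p_err, rng):
    # A "deduction" whose correct answer is 1, but which flips with prob p_err.
    return 1 if rng.random() >= p_err else 0

def majority(p_err, runs, rng):
    # Majority vote over independent noisy runs; by a Chernoff-style bound,
    # the vote's error shrinks exponentially in `runs` when p_err < 0.5.
    votes = sum(noisy_eval(p_err, rng) for _ in range(runs))
    return 1 if votes * 2 > runs else 0

rng = random.Random(0)  # seeded for reproducibility
trials = 1000
wrong = sum(majority(0.2, 21, rng) == 0 for _ in range(trials))
print(wrong / trials)  # far below the 0.2 per-run error rate
```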
@Siger5019 5 months ago
Valid criticism of LLMs, but not much beyond that
@mountainshark2388 4 months ago
this is all cope
@D-K-C 4 months ago
Ъ.