Platonic Hypothesis

2,138 views

hu-po

16 days ago

Like 👍. Comment 💬. Subscribe 🟥.
🏘 Discord: / discord
github.com/hu-po/docs
The Platonic Representation Hypothesis
arxiv.org/pdf/2405.07987

Comments: 21
@mwd6478 14 minutes ago
Your comment about model hallucinations makes total sense. I think this is the same thing as when models have "bias" in a way society doesn't like, but that bias accurately reflects the compressed reality they're approximating.
@FredPauling 14 days ago
The idea that all of these systems are heading towards the same universal embedding space is extremely elegant and satisfying. It feels like an unlock for orders-of-magnitude gains in parameter efficiency.
@synchro-dentally1965 7 days ago
Remember: No matter what... purple will always taste like grape ;) Thanks for the video
@wolpumba4099 14 days ago
*Platonic Representation Hypothesis Summary:*
* *0:02:30* The paper explores the idea that all AI models are converging towards a single "Platonic" representation of reality as they increase in size and data scale.
* *0:21:30* Evidence presented includes:
  * *0:21:30* Alignment across vision models: different architectures trained on similar image datasets show increasing similarity in their learned representations as they get larger.
  * *0:23:30* Alignment across modalities: vision and language models are also showing increasing alignment, suggesting a shared understanding of concepts across different data types.
  * *0:29:00* Brain alignment: neural networks are beginning to show alignment with biological representations in the human brain, particularly in the visual system.
* *0:30:00* Reasons for convergence:
  * *0:35:00* Task generality: training models on more tasks forces them to find representations that are useful across multiple domains, leading to fewer possible solutions.
  * *0:40:00* Model capacity: larger models can represent a wider range of functions, increasing the likelihood of finding the optimal function for representing reality.
  * *0:43:00* Simplicity bias: deep networks are inherently biased towards finding simple solutions, even without explicit regularization techniques. This pushes larger models towards the simplest and most generalizable representations.
* *0:11:30* Implications of convergence:
  * *0:02:00* Scaling is key: increasing data and model size is crucial for achieving this Platonic representation, but it's not necessarily the most efficient approach.
  * *0:25:30* Multimodality is beneficial: training models on data from multiple modalities leads to better representations and performance across all tasks.
  * *1:05:00* Hallucinations should decrease: as models converge towards an accurate model of reality, we should expect fewer hallucinations.
* *0:17:00* Counterarguments and limitations:
  * *0:17:00* Specialized models might still be needed for specific tasks, even with a highly generalizable Platonic representation.
  * *0:19:00* Resource limitations, like energy and compute, could hinder our ability to train models large enough to reach the Platonic representation.
* *0:33:00* Philosophical implications:
  * *0:32:30* The paper suggests that intelligence might be a fundamental property of matter, and that all forms of intelligence are ultimately converging towards a single point.
  * *0:58:00* This could lead to the creation of a superintelligence, a "digital god," as the ultimate convergent point of all information and computation.
  * *0:16:30* Humans may be acting as data-collection agents for this superintelligence, ultimately contributing to its creation.
* *0:33:00* In conclusion, the paper presents a compelling hypothesis that challenges our understanding of intelligence and the future of AI. While further research is needed to confirm these claims, the implications of converging towards a Platonic representation of reality are far-reaching and potentially paradigm-shifting.

(I used Gemini 1.5 Pro.)
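[Editor's note] For readers who want to poke at the central claim themselves, below is a minimal sketch of how cross-model representational alignment can be measured with a mutual k-nearest-neighbor score, in the spirit of the alignment metric the paper discusses. The function name, the choice of k, the cosine-similarity normalization, and the toy data are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch: mutual k-NN alignment between two models' representations of
# the SAME inputs. Hypothetical helper; k and the similarity choice are assumptions.
import numpy as np

def mutual_knn_alignment(feats_a: np.ndarray, feats_b: np.ndarray, k: int = 10) -> float:
    """feats_a: (n, d_a), feats_b: (n, d_b) -- features from two different models
    on the same n inputs. Returns the mean overlap of each sample's k nearest
    neighbors in the two spaces (1.0 = identical neighborhood structure)."""
    def knn(feats: np.ndarray) -> np.ndarray:
        x = feats / np.linalg.norm(feats, axis=1, keepdims=True)  # cosine similarity
        sims = x @ x.T
        np.fill_diagonal(sims, -np.inf)           # exclude self-matches
        return np.argsort(-sims, axis=1)[:, :k]   # k most similar samples per row

    nn_a, nn_b = knn(feats_a), knn(feats_b)
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(overlaps))

# Toy usage: two "models" that project the same latent structure differently.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 32))
feats_a = latent @ rng.normal(size=(32, 128))
feats_b = latent @ rng.normal(size=(32, 256))
print(f"alignment score: {mutual_knn_alignment(feats_a, feats_b):.3f}")
```

In practice feats_a and feats_b would come from, say, a vision backbone and a language model evaluated on paired images and captions; the convergence trend the paper reports is that such scores rise as model and dataset scale increase.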
@dm204375 14 days ago
Human language has information embedded in the structure of the language. That is why these LLMs have "emergent properties" and are able to converge on concepts. We did not create language arbitrarily; there is an underlying structure that dictates grammar, syntax, word order, etc. That structure is what LLMs take advantage of for knowledge interpolation. In essence, humans have already done the computation and preprocessing for the LLMs through our language.
@wolpumba4099 14 days ago
Summary starts at 1:32:58
@MaJetiGizzle 14 days ago
Another philosophical banger!
@andytroo 10 days ago
There was a video a little while ago on physically realistic simulation (liquid flow, planetary orbits, etc.), and they found that a pretrained model worked better, even if the pretraining was cat-video generation.
@context_eidolon_music 7 days ago
I'm nerding out.
@alexijohansen 13 days ago
Awesome, please keep doing these!
@Elikatie25 13 days ago
2:20 Starting horn
@xx1slimeball 13 days ago
Cool paper, I like it! Bonus points for a citation dated before BC.
@user-jh9rh4ho4r 13 days ago
The reason representations don't tell us anything is not that we can't visualize n-dimensional shapes in our heads; it's that they are big and convoluted and we don't understand them well enough. I can't visualize an 8-dimensional hypercube in my head, but I could easily understand 8-dimensional symbolic representations.
@blengi 13 days ago
lol, this sounds somewhat similar to something I posted about LLMs a year ago: _"abstract platonic language forms convergently arrived at when it comes to creating more optimal information representations..."_
@preadaptation 11 days ago
Thanks
@4thpdespanolo 2 days ago
It could only be so
@lolasso98 14 days ago
Since it's induction and not deduction, it's Aristotelian, not Platonic.
@shanongray6334 14 days ago
IMO it's a reference to the theory of forms: en.wikipedia.org/wiki/Theory_of_forms#:~:text=For%20Plato%2C%20forms%2C%20such%20as,things%20are%20qualified%20and%20conditioned.
@ssehe2007 2 hours ago
The Organon is full of references to syllogistic reasoning?
@lucynowacki3327 13 days ago
So reality is a kind of agreement.