Super Basic Intro To Hyperdimensional Computing

136 views

Richard Aragon

1 day ago

Comments
@Ale_Kitsune · 27 days ago
Thank you so much, Richard. There is so little material out there on this subject; your videos are very valuable.
@skeletonmasterkiller · 27 days ago
Thank you so much for this
@skeletonmasterkiller · 27 days ago
Is there any advantage to using alternate representations, like Gray-coded vectors or binary spatter codes?
@richardaragon8471 · 27 days ago
The #1 thing I have learned from my research into this is that EVERYTHING is a variable. Whether you encode the data as the shape of a Poincaré curve vs. a torus is a variable. Why? I couldn't offer the first guess as to why, but I know it is a variable. So the way you mathematically represent the data is 100% a variable.
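For readers unfamiliar with the binary spatter codes mentioned in the question, here is a minimal sketch of the standard operations: binding by XOR, bundling by majority vote, and similarity measured via Hamming distance. The dimensionality and the role/filler names are illustrative assumptions, not anything from the video.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (hypothetical choice; high D is the point)

def random_hv():
    """Random dense binary hypervector (a binary spatter code symbol)."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    """Binding via elementwise XOR: dissimilar to both inputs, and invertible."""
    return np.bitwise_xor(a, b)

def bundle(*hvs):
    """Bundling via elementwise majority vote (use an odd count to avoid ties)."""
    return (np.sum(hvs, axis=0) > len(hvs) // 2).astype(np.uint8)

def sim(a, b):
    """Similarity from Hamming distance: ~1.0 identical, ~0.5 unrelated."""
    return 1.0 - np.mean(a != b)

# Encode a record of role-filler pairs, then query one role back.
color, shape, size = random_hv(), random_hv(), random_hv()
red, circle, big = random_hv(), random_hv(), random_hv()
record = bundle(bind(color, red), bind(shape, circle), bind(size, big))

probe = bind(record, color)   # unbinding: XOR is its own inverse
print(sim(probe, red))        # clearly above 0.5 -> 'red' is recoverable
print(sim(probe, circle))     # ~0.5 -> chance level
```

The same bind/bundle/query pattern carries over to other representations; which one works best is, as the reply above says, a variable.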
@skeletonmasterkiller · 27 days ago
@@richardaragon8471 One interesting idea I have is to encode the state of the network itself as a hyperdimensional state, and whenever a new query comes in, traverse over the good states to find the answer.
@richardaragon8471 · 27 days ago
@@skeletonmasterkiller I can't believe you are mentioning this lmfao. I just haven't made a video on it yet. I figured out exactly how to do it yesterday. colab.research.google.com/drive/1hSuy5n_jyBplQlTGx4rMKQ3IbonBdgEr?usp=sharing
@richardaragon8471 · 27 days ago
@@skeletonmasterkiller I made a podcast video about it on my second channel lol: kzbin.info/www/bejne/pXvQi5SXeLOSnJo
@skeletonmasterkiller · 27 days ago
@@richardaragon8471 Wow, nice. This is very similar to neural gas models or Kohonen self-organizing maps, but being so data-dependent makes the network brittle. It comes down to where the network gets its supervisory learning signal from: it can get it from the data itself via self-supervision, but it can also get it from previous 'experiences' it had while solving problems on other datasets it has encountered.

So my idea is a little different. During each step of training or fine-tuning an LLM, I want to create a high-dimensional state vector and store it. The sequence of state vectors that leads to a "positive" result is bound together (binding/aggregation); this is the experience vector. We use the best experience vector to formulate a state plan for the network, which would modify the weight update for the task it is currently encountering.

So training and inference are both conducted during the learning phase. At inference time we form a state plan that traverses the state-vector space and, out of all the sequences observed thus far, chooses the best sequence that predicts or solves the task put before it. If no plan is found, the network is retrained on the samples it got wrong, using intermediate states, and expanded (check out zig-zag products and expander graphs, they are really cool) until it solves the task.
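Since the thread doesn't pin down the mechanics, here is a minimal sketch of the experience-vector idea under my own assumptions: state vectors stand in as random bipolar hypervectors, sequence order is encoded by cyclic permutation before bundling, and retrieval is nearest-neighbor by cosine similarity. Every name and parameter here (D, encode_sequence, the plan labels) is hypothetical, not taken from the video or the linked notebook.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # hypothetical dimensionality

def random_state_hv():
    """Stand-in for a real state vector (e.g., a random projection of activations)."""
    return rng.choice([-1, 1], size=D)

def permute(v, k):
    """Cyclic shift by k; encodes a step's position so order survives bundling."""
    return np.roll(v, k)

def encode_sequence(states):
    """Bind a trajectory into one vector: shift step t by t, sum, take the sign.
    (np.sign can yield zeros on ties; harmless for this sketch.)"""
    return np.sign(np.sum([permute(s, t) for t, s in enumerate(states)], axis=0))

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# "Training": record trajectories that ended in a positive result as experiences.
experiences = []  # (experience_vector, label for the plan)
good_run = [random_state_hv() for _ in range(4)]
experiences.append((encode_sequence(good_run), "plan-A"))
other_run = [random_state_hv() for _ in range(4)]
experiences.append((encode_sequence(other_run), "plan-B"))

# "Inference": encode the current partial trajectory and retrieve the most
# similar stored experience to use as the state plan.
query = encode_sequence(good_run[:2])   # shares a prefix with plan-A
vec, plan = max(experiences, key=lambda e: cosine(query, e[0]))
print(plan, cosine(query, vec))         # plan-A wins; plan-B sits near 0
```

The fallback in the comment (retrain on failed samples and expand the state graph, in the spirit of expander constructions) would hook in where no stored experience clears a similarity threshold; that part is beyond this sketch.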