Excellent talk, very interesting developments with the energy transformer
@Lippman-y9z a year ago
thanks for sharing
@maxkho00 a year ago
Ngl, this was pretty confusing. For one, the two energy formulae at 12:32 are only equivalent if i = j, i.e. if the contribution of each feature neuron is evaluated independently. The second formula can be intuitively understood as measuring how well the state vector's shape in the latent space matches the shape of each of the memories, but the first formula is harder to conceptualise, and it's never explained how the first formula can be practically reduced to the second (i.e. why ignoring the interdependencies between the feature neurons in the energy formula makes no practical difference).

Secondly, without an update rule, or at least a labelled high-level architecture diagram, it was really hard to visualise the mechanics of the network; I had to pause the video and google the update rule just to understand how dense Hopfield networks are even supposed to work. Dmitry did make the very vague statement that "the evolution of the state vector" is described, in some way, by the attention function, but he didn't explain in what way (is it the update rule? Is it a change vector? Is it something else? What does "V" correspond to?), which was pretty frustrating. For anyone watching: the attention function is the update rule, where V is a linear transform of K; the output of the attention function is substituted back in for Q, and the formula can be applied recursively.

In general, I think more high-level explanations, especially within a consistent framework, would have been very helpful.
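To make the commenter's last point concrete, here is a minimal sketch (my own toy code, not from the talk) of the retrieval dynamics described above: the attention function softmax(beta * K q) V acts as the update rule, with V taken as the simplest linear transform of K (the identity), and the output fed back in as the next query.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def hopfield_update(q, K, beta=1.0, steps=3):
    """Iterated retrieval: q <- softmax(beta * K @ q) @ K.
    Rows of K are the stored memories; here V = K (identity transform)."""
    for _ in range(steps):
        q = softmax(beta * K @ q) @ K
    return q

# two stored memories (rows of K) and a noisy query near the first one
K = np.array([[1.0, 0.0],
              [0.0, 1.0]])
q = np.array([0.9, 0.2])
out = hopfield_update(q, K, beta=5.0)  # converges toward the first memory
```

With a sufficiently large beta, a few iterations pull the query onto the nearest stored pattern, which is the associative-memory behaviour the talk attributes to the attention mechanism.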
@joeysmoey3004 a year ago
For your first point, this is not true because the square of the sum is not the sum of the squares. There are cross terms which give you the non-independence.
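A tiny numeric check of this reply's point (toy numbers, my own illustration): squaring the sum of per-neuron overlaps produces i != j cross terms that couple the feature neurons, so it is not the same as summing the squared per-neuron contributions.

```python
import numpy as np

xi = np.array([1.0, -1.0, 1.0])   # one stored memory (toy values)
x  = np.array([0.5,  0.3, -0.2])  # current state (toy values)

overlap = xi @ x
square_of_sum = overlap ** 2                  # (sum_i xi_i * x_i)^2
sum_of_squares = np.sum((xi * x) ** 2)        # sum_i (xi_i * x_i)^2, the i = j terms only
cross_terms = square_of_sum - sum_of_squares  # the i != j contributions

print(square_of_sum, sum_of_squares, cross_terms)  # the cross terms are nonzero
```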
@revimfadli4666 4 months ago
I wonder if this can somehow link with state-space models like Mamba, or with liquid networks.
@xynonners 4 months ago
There's a paper proving that diffusion models and modern Hopfield networks are identical.
@michaelcharlesthearchangel 9 months ago
Only geniuses realize the interconnection between Hopfield networks and neural-network transformer models, and, later, neural-network cognitive transmission models.