Finally! It’s happening. The combination of all your beautiful findings.
@BradleyKieser (8 months ago)
Absolutely brilliant, thank you. Exciting. Well explained.
@po-yupaulchen166 (8 months ago)
Thank you. In RG-LRU, h_{t-1} should not be inside the gates (inside the sigmoid function) in the original paper, right? It would slow down the training process. I am so surprised that finite memory can match the performance of transformers with their crazy infinite memory. Also, it seems traditional RNNs like the LSTM will soon be replaced by RG-LRU. I'm curious whether someone could compare those RNNs and show what is wrong in the old designs.
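[Editor's note] The commenter's point can be made concrete with a small sketch. In the Griffin paper's RG-LRU, both gates are computed from the current input x_t only, never from h_{t-1} (unlike an LSTM, whose gates see the previous hidden state); this is part of what makes the recurrence cheap to train. The sketch below is a minimal single-step illustration, not the official implementation; the weight names (`W_r`, `W_i`, `Lam`) and the constant `c = 8` follow the paper's description, but biases and the real parameterisation details are omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rg_lru_step(x_t, h_prev, W_r, W_i, Lam, c=8.0):
    """One step of a simplified RG-LRU recurrence.

    Note: the gates r_t and i_t depend ONLY on x_t, not on h_prev,
    which is the property the comment above is asking about.
    """
    r_t = sigmoid(W_r @ x_t)              # recurrence gate (input-dependent)
    i_t = sigmoid(W_i @ x_t)              # input gate (input-dependent)
    a = sigmoid(Lam)                      # learnable base decay in (0, 1)
    a_t = a ** (c * r_t)                  # gated effective decay, still in (0, 1)
    # Decayed previous state plus normalised gated input
    h_t = a_t * h_prev + np.sqrt(1.0 - a_t**2) * (i_t * x_t)
    return h_t

# Tiny usage example with random weights (hypothetical sizes)
d = 4
rng = np.random.default_rng(0)
x = rng.normal(size=d)
h0 = np.zeros(d)
W_r = rng.normal(size=(d, d))
W_i = rng.normal(size=(d, d))
Lam = rng.normal(size=d)
h1 = rg_lru_step(x, h0, W_r, W_i, Lam)
```

Because a_t stays strictly inside (0, 1), h_{t-1} is exponentially forgotten; this is the "finite memory" the comment contrasts with a transformer's attention over the whole context.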
@codylane2104 (8 months ago)
How can we use it locally? Can we at all? LM Studio can't download it. 😞
@MattJonesYT (8 months ago)
It's made by Google, which means it will have all the comical corporate biases that the rest of their models have. It will produce useless output really fast. When someone makes a de-biased version, this tech will be much more interesting.
@Charles-Darwin (8 months ago)
Surely this provides, or could provide, massive efficiency gains. If I touch a hot plate and feel the heat, the state is sent to my relevant limbs to retract... but shortly thereafter this state fades and I can proceed to focus on other states. What are neurons if not a response network to environmental factors? Google will probably be the first to an organic/chemical computer.