Thank you so much for this. You don't know how badly I needed this right now. Please extend this series to transformers, and possibly to LLMs as well.
@jairjuliocc · 5 months ago
Thank you. Can you explain the entire self-attention flow (from positional encoding to final next-word prediction)? I think it would be an entire series 😅
@TheMLTechLead · 5 months ago
It is coming! It will take time
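
For readers curious about the flow the question above refers to, here is a minimal NumPy sketch of the pipeline end to end: token embedding → positional encoding → single-head causal self-attention → projection to vocabulary logits → softmax for next-word prediction. All names, sizes, and random weights here are illustrative assumptions, not the video's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model, seq_len = 50, 16, 5

# Token embedding table and a toy input sequence of token ids (made up).
E = rng.normal(size=(vocab_size, d_model))
tokens = np.array([3, 14, 15, 9, 2])
x = E[tokens]                                  # (seq_len, d_model)

# Sinusoidal positional encoding, as in "Attention Is All You Need".
pos = np.arange(seq_len)[:, None]
i = np.arange(d_model)[None, :]
angles = pos / np.power(10000, (2 * (i // 2)) / d_model)
pe = np.where(i % 2 == 0, np.sin(angles), np.cos(angles))
x = x + pe

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)    # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Single-head self-attention: project to queries, keys, values.
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

# Causal mask so each position attends only to itself and earlier tokens.
scores = Q @ K.T / np.sqrt(d_model)            # (seq_len, seq_len)
mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
scores[mask] = -1e9
attn = softmax(scores) @ V                     # (seq_len, d_model)

# Project the last position to vocabulary logits -> next-word distribution.
W_out = rng.normal(size=(d_model, vocab_size))
probs = softmax(attn[-1] @ W_out)
print("predicted next token id:", probs.argmax())
```

A real transformer stacks many such attention layers with residual connections, layer norm, and feed-forward blocks, and trains all the weight matrices rather than sampling them randomly, but the data flow from position encoding to next-token softmax is the same shape as above.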
@Gowtham25 · 5 months ago
It's really good and useful... Looking forward to training an LLM from scratch next, and I'm also interested in the KAN-former...