My favorite data science YouTuber these days! Thank you. So many channels now are pure hype, delivering AI news with no substance... but you are an inspiration. Damn, now I want to read at least a paper a day!
@code4AI 9 days ago
Do it! Smile ....
@deter3 11 days ago
How do you measure a model's generalization capability? It's a really fuzzy and vague concept, and we keep using it without having a clear way to measure it.
@monologtr_ 13 days ago
How does fine-tuning a vision-language model with custom OCR and VQA datasets work?
@rikhendrix261 13 days ago
What determines whether the task is the same? Is it the instruction prompt? And what defines a task size that is appropriate for LoRA?
@novantha1 12 days ago
Your intuition, basically. It's tricky because some tasks will be in distribution even when dealing with unique data, while some tasks will explicitly not be in distribution. Here are a couple of things to consider:
For simple math, say addition, subtraction, multiplication, and division, do you think a new equation outside of the example equations is in distribution or out of distribution?
For logical reasoning problems, do you think a problem with a similar structure to a problem in the training set is in distribution or out of distribution?
For creative writing, do you think a model being asked to write stories in the same genres as the training examples is in distribution or out of distribution?
It gets really nuanced, and I think the only way to really understand this is to approach it on a model-by-model and dataset-by-dataset basis.
@vladimirnadvornik8254 13 days ago
If I understand it correctly, then doing full fine-tuning and running SVD on the difference between the fine-tuned and the original weights would create a LoRA that does not suffer from this problem. Is that correct?
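That idea can be written down directly. A minimal sketch for a single weight matrix, assuming plain PyTorch tensors (`extract_lora_from_diff` and its parameter names are illustrative, not from any particular library):

```python
import torch

def extract_lora_from_diff(w_finetuned: torch.Tensor,
                           w_original: torch.Tensor,
                           rank: int = 16):
    """Approximate a full fine-tuning delta with a rank-r LoRA pair via truncated SVD."""
    delta = w_finetuned - w_original                  # the full fine-tuning update
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    lora_a = vh[:rank, :]                             # "A" matrix: (rank, in_features)
    lora_b = u[:, :rank] * s[:rank]                   # "B" matrix: (out_features, rank), absorbs singular values
    return lora_b, lora_a                             # delta ≈ lora_b @ lora_a
```

Note that truncated SVD gives the best rank-r approximation of the delta, but everything below rank r is still discarded, so this only matches full fine-tuning to the extent that the delta really is low-rank.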
@EvanGoodwin-bl7zq 12 days ago
Could you train LoRAs at different ranks, scaling up and measuring performance? Then you stop the process when you reach an acceptable level of performance, or when the improvement falls below a certain threshold. It might involve some upfront cost, but I assume you would save on inference down the line, because the 'acceptable' LoRA would be computationally more efficient than the fully trained model. It would depend on the use case: if you are doing lots of inference, it would definitely pay off. It would be interesting to compare the cost of training multiple LoRAs in this way vs. full training.
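A minimal sketch of that rank sweep, assuming a Hugging Face PEFT setup; `train_and_evaluate` is a hypothetical helper standing in for your training loop and eval metric, and the target modules are just typical choices:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

def rank_sweep(model_name: str, ranks=(4, 8, 16, 32, 64),
               target=0.90, min_gain=0.01):
    """Train LoRAs at increasing rank; stop at acceptable accuracy
    or when the marginal improvement falls below min_gain."""
    best = 0.0
    for r in ranks:
        base = AutoModelForCausalLM.from_pretrained(model_name)
        cfg = LoraConfig(r=r, lora_alpha=2 * r,
                         target_modules=["q_proj", "v_proj"])
        model = get_peft_model(base, cfg)
        score = train_and_evaluate(model)  # hypothetical: your training loop + eval
        if score >= target or score - best < min_gain:
            return r, score                # good enough, or diminishing returns
        best = score
    return ranks[-1], best
```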
@vladimirnadvornik8254 11 days ago
LoRA is not more efficient for inference. Either you merge the LoRA into the model, in which case it is exactly the same as the base model, or you compute the LoRA path separately, in which case it is less efficient.
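For illustration, assuming a Hugging Face PEFT adapter (the model ID and adapter path below are placeholders), the two inference options look like this:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("base-model-id")      # placeholder model ID
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")   # placeholder adapter path

# Option 1: fold the update W + B @ A into the base weights once;
# inference cost is then identical to the original model.
merged = model.merge_and_unload()

# Option 2: keep the adapter separate; every forward pass computes the
# extra x @ A^T @ B^T path on top of x @ W^T, i.e. strictly more work.
```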
@EvanGoodwin-bl7zq 8 days ago
@vladimirnadvornik8254 OK, then perhaps a better approach would be to train a LoRA on different model sizes (1B, 3B, 8B, which are computationally more efficient) and stop when acceptable accuracy is reached or the improvement falls below a certain level.
@jonmichaelgalindo 12 days ago
What about undertraining LoRAs on each block and merging as you go? You update all the parameters, and no single LoRA "overpowers" the original data vectors.
@code4AI 9 days ago
??? If you "undertrain" a fine-tuning mechanism, then you have a broken fine-tuned weight tensor structure. Why merge something that is not working into the pre-trained model?
@NLPprompter 13 days ago
I'm guessing the Lamini AI company is doing something like this to achieve what they claim is better than RAG...