You can do it as follows:

print("saving the GGUF model...")
quantization_method = "q8_0"  # options: "f16", "q8_0", "q4_k_m", "q5_k_m"
peft_model.save_pretrained_gguf(
    "gguf",
    tokenizer=tokenizer,
    quantization_method=quantization_method,
)
print("Try with ollama!")
'''
1) copy a reference Modelfile and modify the gguf file path in its first line
2) ollama create <model-name> -f Modelfile
3) ollama run <model-name>
'''
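For step 1, a minimal Modelfile might look like the sketch below. The gguf filename (unsloth.Q8_0.gguf) and the model name (my-llama3) are assumptions for illustration, not values confirmed in this thread -- adjust them to whatever save_pretrained_gguf actually wrote out on your machine.

```
# Modelfile -- hypothetical paths/names, adjust to your output
FROM ./gguf/unsloth.Q8_0.gguf

# then, in a shell:
#   ollama create my-llama3 -f Modelfile
#   ollama run my-llama3
```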
A few comments on the code.
1. There is an error from a version mismatch between peft and unsloth; please tell us the versions of the packages. (I solved the issue manually. The error was: No module named 'peft.tuners.lora.layer'; 'peft.tuners.lora' is not a package.) (My system uses unsloth 2.0 and peft 0.5.0.)
2. The model name should be the full path (model_name = "unsloth/llama-3-8b-bnb-4bit", not model_name = "llama-3-8b-bnb-4bit").
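When chasing a version mismatch like the peft/unsloth one above, a first step is to print what is actually installed. A stdlib-only sketch (the package names in the loop are just examples):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg: str) -> str:
    """Return the installed version of pkg, or 'not installed' if absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return "not installed"

# Example: check the packages involved in the mismatch
for pkg in ("peft", "unsloth", "transformers", "torch"):
    print(pkg, "->", installed_version(pkg))
```

Including this output with a bug report makes version-conflict questions much easier to answer.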
@teddynote7
1. You can find easy install commands at the link below: github.com/unslothai/unsloth/tree/main#installation-instructions
It's very hard to specify the exact version of each library due to dependencies (CUDA, Python, Torch, etc.). Try using mamba if you haven't tried it yet.
2. Thanks for letting me know. I have updated the .ipynb file.
@heejuneAhn7
@teddynote The dependent packages are quite system-dependent. colab-new = [ "tyro", "transformers>=4.38.2", "datasets>=2.16.0", "sentencepiece", "tqdm", "psutil", "wheel>=0.42.0", "numpy", "protobuf
@teddynote When I run this notebook on a standard Colab runtime, the warning below appears after installing the packages, and errors then seem to occur afterwards. Could you please take a look?
'''
WARNING: The following packages were previously imported in this runtime: [torch,torchgen]
You must restart the runtime in order to use newly installed versions.
Restarting will lose all runtime state, including local variables.
'''
Suddenly I'm getting an error in Colab ㅠㅠ It started failing at the model-loading step, even though it used to work fine. Why would that be? ㅠㅠ
ImportError: Unsloth: If you are in Colab, we updated the top cell install instructions - please change it to below then press Disconnect Runtime and then Restart it.
While fine-tuning in Colab, the error below occurs in the "Training the model > Using SFTTrainer" section. Could you tell me the cause and how to fix it? ㅠㅠ
-> NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.