Welcome to my channel and thank you for watching my first video! This marks the beginning of our journey together through the fascinating world of AI. I'm eager to hear your feedback and would greatly appreciate any tips on how to make our explorations more informative and engaging. Your opinions and suggestions are incredibly important to me, as they help me create content that not only informs but also provides deep insights into the topic of AI. I also warmly invite you to join our Discord community, where we can continue our discussions and learn together. A huge thank you to everyone supporting me! Every bit of support motivates me greatly, and I look forward to learning and discovering more about AI with all of you. See you soon on Discord! ❤
@sashasasha7491 · 9 months ago
Thank you! Very informative for beginners. I successfully launched an LLM by following your guide! 🙏🙏
@Aris_28 · 9 months ago
Thank you so much for your kind words about my guide! I'm really just dipping my toes into the world of YouTube video creation, so it means a lot to hear you found it helpful.
@DilukshanN7 · 8 months ago
wow! so beginner friendly! thank you so much!!!
@Aris_28 · 8 months ago
Glad it was helpful!
@archi_designer · 8 months ago
2:41 Ctrl+click with the mouse instead of copy-pasting.
@justmerain4053 · 7 months ago
Hey, thanks for the video. I scoured the entire Internet searching but found nothing. When you role-play with a character over time, the answers get slower and slower and the tokens per second drop, most likely due to the growing amount of context. Is there any solution to this problem?
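Editor's note on the question above: in llama.cpp-style backends, prompt processing time grows with the amount of accumulated context, so long role-play chats slow down as history piles up. One common workaround is to trim the oldest turns before each request. A minimal sketch in Python; the character budget and the chat-message format are illustrative assumptions, not something from the video:

```python
def trim_history(messages, max_chars=4000, keep_system=True):
    """Drop the oldest turns until the prompt fits a rough size budget.

    Token counts are roughly proportional to character counts, so a
    character budget is a cheap stand-in when no tokenizer is handy.
    """
    system = [m for m in messages if m["role"] == "system"] if keep_system else []
    turns = [m for m in messages if m["role"] != "system"]
    while turns and sum(len(m["content"]) for m in system + turns) > max_chars:
        turns.pop(0)  # discard the oldest user/assistant turn first
    return system + turns

history = [{"role": "system", "content": "You are a helpful character."},
           {"role": "user", "content": "x" * 3000},
           {"role": "assistant", "content": "y" * 3000},
           {"role": "user", "content": "latest question"}]
trimmed = trim_history(history)
print(len(trimmed))  # → 3: the oldest oversized turn was dropped
```

The trade-off is that the character "forgets" the dropped turns; keeping the system prompt pinned (as above) preserves the persona while capping the cost of each request.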
@Brandon-x9v8k · 7 days ago
I notice my generations stop short. Is there something that I need to tweak? I'm running a 3090.
@Aris_28 · 3 days ago
Can you share the error with me on Discord?
@Brandon-x9v8k · 2 days ago
@ I don't see any error in cmd, but the generation stops mid-text. For example, if it's making a list of 12 it will stop short, and I see the sentence is not completed, so I write a prompt to finish and it writes the rest. It's odd.
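Editor's note: replies that stop mid-sentence with no error in the console are often just hitting the generation-length cap (the max_new_tokens slider in text-generation-webui's Parameters tab is the usual culprit). The commenter's manual workaround, prompting "finish", can also be sketched in code: detect a reply that looks cut off and queue a continuation request. The OpenAI-style payload shape matches what text-generation-webui's API mode can serve, but `looks_truncated` is a hypothetical helper with a crude heuristic:

```python
def looks_truncated(text):
    """Heuristic: a reply that ends mid-sentence probably hit the token limit."""
    return not text.rstrip().endswith((".", "!", "?", ":", "```"))

def continuation_request(history, reply, max_tokens=512):
    """Build a follow-up request asking the model to finish its answer.

    `history` is the prior chat, `reply` the cut-off assistant message;
    `max_tokens` caps the length of the continuation.
    """
    return {
        "messages": history + [
            {"role": "assistant", "content": reply},
            {"role": "user", "content": "Please continue from where you stopped."},
        ],
        "max_tokens": max_tokens,
    }

print(looks_truncated("1. apples\n2. oran"))  # → True, list was cut off
print(looks_truncated("All done."))           # → False
```

Raising max_new_tokens in the UI is the simpler fix; the loop above only helps when the limit cannot be raised (e.g. low VRAM).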
@CristianAguilarnavarro · 6 months ago
Thanks!
@kumarmadaparthi · 1 month ago
How can I restart the server once closed?
@Aris_28 · 26 days ago
Which server?
@ustawnickustawnick · 7 months ago
When I close the start.bat file and try to run it again after closing it, nothing happens. Is that normal?
@Aris_28 · 7 months ago
No, this is not normal. Try reinstalling.
@ustawnickustawnick · 7 months ago
@@Aris_28 I have reinstalled many times and nothing changed.
@Aris_28 · 7 months ago
Okay, would it be possible for you to share the error code on my Discord so I can help you better?
@ustawnickustawnick · 7 months ago
@@Aris_28 yeah
@got2go4word · 20 days ago
The .bat file did not run.
@Aris_28 · 19 days ago
Did you send the error code on Discord?
@Lakosta826 · 8 months ago
How can I uninstall this after installation?
@Aris_28 · 8 months ago
Delete the folder.
@auroracronenwerth6535 · 2 months ago
Please slow down your videos. You talk fast and skip between topics, so I constantly have to stop and replay them countless times, going to other streams to understand what you were saying, before I can finish and move on to your next topic, followed by the same. If you're making a tutorial, please assume people have no clue what you are talking about. That's what I've had to learn the hard way. So keep your videos to the point, but give more information on where and how the files can be obtained, along with a guide on how to do that. Just FYI. Thank you for your videos!
@kedarkhamkar1905 · 5 months ago
I did it the same way you did but I got an error, bro. Can you help please?

llama_model_load: error loading model: unable to allocate backend buffer
llama_load_model_from_file: failed to load model
10:57:28-925412 ERROR Failed to load the model.
Traceback (most recent call last):
  File "C:\Users\Kedar\Desktop\text-generation-webui-main\text-generation-webui-main\modules\ui_model_menu.py", line 244, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
  File "C:\Users\Kedar\Desktop\text-generation-webui-main\text-generation-webui-main\modules\models.py", line 93, in load_model
    output = load_func_map[loader](model_name)
  File "C:\Users\Kedar\Desktop\text-generation-webui-main\text-generation-webui-main\modules\models.py", line 271, in llamacpp_loader
    model, tokenizer = LlamaCppModel.from_pretrained(model_file)
  File "C:\Users\Kedar\Desktop\text-generation-webui-main\text-generation-webui-main\modules\llamacpp_model.py", line 103, in from_pretrained
    result.model = Llama(**params)
  File "C:\Users\Kedar\miniconda3\envs\textgen\lib\site-packages\llama_cpp_cuda\llama.py", line 338, in __init__
    self._model = _LlamaModel(
  File "C:\Users\Kedar\miniconda3\envs\textgen\lib\site-packages\llama_cpp_cuda\_internals.py", line 57, in __init__
    raise ValueError(f"Failed to load model from file: {path_model}")
ValueError: Failed to load model from file: models\wizardLM-7B.Q4_0.gguf
@Aris_28 · 3 months ago
Can you share it with me on my Discord? Did you fix it?