Ollama - Loading Custom Models

37,330 views

Sam Witteveen

1 day ago

Jackalope 7B - huggingface.co/openaccess-ai-...
GGUF versions - huggingface.co/TheBloke/jacka...
My Links:
Twitter - / sam_witteveen
Linkedin - / samwitteveen
Github:
github.com/samwit/langchain-t... (updated)
github.com/samwit/llm-tutorials
00:00 Intro
00:31 Jackalope 7B
01:10 Jackalope 7B GGUF
02:21 Make a model file
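The "Make a model file" step at 02:21 comes down to one small text file plus two commands. A minimal sketch, assuming a GGUF from TheBloke's repo has already been downloaded locally - the filename, template, and stop tokens below are illustrative (ChatML-style, as used by OpenOrca-family models), not necessarily the exact ones from the video:

```shell
# Write a Modelfile pointing FROM at the downloaded GGUF (hypothetical local path)
cat > Modelfile <<'EOF'
FROM ./jackalope-7b.Q4_K_M.gguf
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
PARAMETER stop "<|im_end|>"
PARAMETER stop "<|im_start|>"
EOF

# Register the model with Ollama under a local name, then chat with it
ollama create jackalope -f Modelfile
ollama run jackalope "Why is the sky blue?"
```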

Comments: 44
@the_real_cookiez 4 months ago
This is so awesome. With the new gemma LLM, I wanted to load that model in. Thank you!
@jasonp3484 2 months ago
Outstanding my friend! I learned a new skill today! Thank you very much for the lesson
@5Komma5 5 months ago
That worked, thanks! If the model page lacks information and a similar model is available, you can get the Modelfile by loading that model and running ollama show --modelfile
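The command mentioned above prints the full Modelfile of any model you have already pulled, so you can save it and adapt it for your own GGUF. A quick sketch - the model name here is just an example:

```shell
# Pull a similar model and dump its Modelfile as a starting point
ollama pull mistral-openorca
ollama show --modelfile mistral-openorca > Modelfile

# Edit the FROM line to point at your own GGUF, then register it:
# ollama create my-model -f Modelfile
```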
@samwitteveenai 5 months ago
Yes, they have added some nice commands since I made this video. You can also make changes while the model is running and export those to the setup/Modelfile. I will try to make a new updated video.
@MasonJames 5 months ago
Would also love to see a new version - I've referenced this one several times. @samwitteveenai thank you!
@AKSTEVE1111 3 months ago
It worked like a charm, thank you! Just need to look at the model with my web browser.
@Canna_Science_and_Technology 9 months ago
Do you have a video about setting up a PC for running LLMs? What GPU, how much memory, what software is needed, and so on?
@yunomi26 6 months ago
Hey, I want to build a RAG architecture. Can I take one of the embedding models from the MTEB leaderboard, create it as an Ollama model, and then use Ollama's generate-embedding API to produce embeddings? The problem is that the API can only serve the model that is running, and I want Mistral to generate completions and GTE for embeddings. How do you think I can solve this?
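For what it's worth, Ollama's HTTP API takes the model name per request, so a single server can answer embedding requests with one model and completion requests with another, loading and swapping models as requests arrive. A hedged sketch with curl against Ollama's documented endpoints - the model names are just examples:

```shell
# Embeddings from one model...
curl http://localhost:11434/api/embeddings -d '{
  "model": "nomic-embed-text",
  "prompt": "What is a jackalope?"
}'

# ...and completions from another model, on the same server
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "What is a jackalope?",
  "stream": false
}'
```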
@Annachrome 8 months ago
Thanks for introducing me to Ollama! I am running open-source models with LangChain, but having trouble with models calling (or not using) custom tools appropriately. Would you mind making a tutorial on initializing agents without OpenAI models? Perhaps with prefix/format_instructions/suffix kwargs. 🙏 All the docs, tutorials, and deeplearning courses use OpenAI models.... 😢
@BikinManga 2 months ago
Thank you - your example Modelfile template saved me from the headache of loading a custom Yi model. It's perfect!
@guanjwcn 9 months ago
Can't wait for the Windows version to try it out.
@dib9900 2 months ago
Where can I get the expected Parameters and Template values for a given model if a Modelfile is not included with the model to convert to the Ollama format? I'm specifically interested in embedding models, not LLMs - for example, SFR-Embedding-Mistral-GGUF.
@jasperyou645 1 month ago
Thank you for sharing! I just want to know: could I run Jackalope unquantized with Ollama? It seems the GGUF file is used to store the quantized model.
@carlosparica8131 7 months ago
Hello Mr. Witteveen, thanks for the informative video! May I request a more in-depth explanation of what a Modelfile is and how it works, specifically the TEMPLATE section?
@bimbotsolutions6665 8 months ago
AWESOME WORK, THANKS A LOT...
@nicolashuve3558 4 months ago
Hey, thanks for that. Where are models located on a mac? I can't seem to find them anywhere.
@t-dsai 8 months ago
Thank you Mr. Witteveen for this helpful video. One question: is it possible to point the Ollama settings directory at a custom location instead of the default "~/.ollama"?
@StevenSeiller 8 months ago
+1 🤏 My system drive is small compared to my TBs data drives.
@HunterZolomon 5 months ago
Appreciate this a lot, thanks! The stop parameters in your example don't seem necessary as defaults, though (they're even detrimental for some models, which stop halfway through the response), and could be explained a bit more thoroughly. You could do a clip going through the parameters, starting with PARAMETER num_ctx ;)
@noob-ep4lx 4 months ago
Hello! Thank you so much for this video, but I ran into a problem: my storage filled up halfway through the installation and the progress bar paused (stuck at 53.48%). Whenever I restart the command, it checks 5 files, skips 5 files, and pauses there. Is there any way to fix this?
@samwitteveenai 4 months ago
I suggest just going in, deleting the model files, and starting again.
@nitingoswami1959 9 months ago
It doesn't support multi-threading 😭😭
@julian-fricker 9 months ago
Thanks for the great video. You should give LM Studio a try - it makes finding and downloading models easier, can make use of the GPU, and lets you run these models behind a ChatGPT-compatible API.
@user-xs8lo3yh6o 7 months ago
I can't find any folder with models inside on Ubuntu, only temp files.
@KratomSyndicate 5 months ago
Models are located in /usr/share/ollama/.ollama/models, or under WSL2 in \\wsl.localhost\Ubuntu\usr\share\ollama\.ollama\models
@thaithuyw4f 8 months ago
Which folder do you put the Modelfile in? I can't even find Ollama's primary model folder, even using realpath, which, etc.
@thaithuyw4f 8 months ago
Oh sorry, I've now found Ollama's model folder - it only contains files starting with sha256..., so I think the download folder can be anywhere. But when I run create I get this error, even when using sudo: ⠦ transferring context Error: rename /tmp/sha256:08c6abdff588bf35db696057c1cd7861caf722e7e2c25b2ab7c18c16463723071254256853 /usr/share/ollama/.ollama/models/blobs/sha256:08c6abdff588bf35db696057c1cd7861caf722e7e2c25b2ab7c18c1646372307: invalid cross-device link. Do you know why?
@samwitteveenai 8 months ago
Not sure why you have a blob like that - normally it will be a named file, and then Ollama will make the blob and copy it to the right location. Your Modelfile (a text file) should specify the path to the llama.cpp GGUF file.
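As an aside on that "invalid cross-device link" error: a rename cannot cross filesystems, so it usually means the temp directory and Ollama's model store live on different devices. One hedged workaround - an assumption, not something confirmed in the thread - is to point the temp directory at the same filesystem before running create; if Ollama runs as a systemd service, the variable would need to be set in the service unit rather than your shell, and the scratch path below is hypothetical:

```shell
# Put temp files on the same filesystem as the model store before creating
export TMPDIR=/usr/share/ollama/tmp   # hypothetical scratch dir on the same device
mkdir -p "$TMPDIR"
ollama create my-model -f Modelfile
```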
@RobotechII 9 months ago
Really cool, very Docker-esque!
@MichealAngeloArts 9 months ago
Thanks for sharing this. Is there a link to the model file you show in the video (on GitHub etc.)?
@samwitteveenai 9 months ago
I updated it in the description but here it is - huggingface.co/TheBloke/jackalope-7B-GGUF/tree/main
@MichealAngeloArts 9 months ago
@samwitteveenai Sorry, I didn't mean the HF model files (the GGUF) but the model 'configuration' file used by Ollama to load the model. Obviously there's plenty of 'model file' terminology in the loop 😀
@responsible-adult 9 months ago
Jackalope running wild (template problem?). Really liking the Ollama series, but I'm having a Jackalope problem. Using the Jackalope configuration text file I tried to copy from the video, when I run the resulting model the "creature" goes into a loop and starts generating questions for itself and answering them. I think it's related to the template. Please post the exact known-to-work configuration file for Jackalope. Thanks!
@samwitteveenai 9 months ago
@responsible-adult You can take the template from something like Mistral-OpenOrca by loading that model and using "/show template". It sounds like you have an error in there, or possibly you are using a lower quantized version?
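The "/show template" tip above works from inside an interactive session, which is a quick way to borrow a known-good template for a Modelfile. A short sketch (the model name is an example):

```shell
# Start an interactive session with a model whose template you trust
ollama run mistral-openorca
# then, at the >>> prompt, inspect its setup:
#   /show template
#   /show parameters
```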
@MasonJames 9 months ago
I'm also stuck on this step. My Modelfile works, but the resulting model doesn't seem to "converse" well. Not sure how to troubleshoot the Modelfile specifically. @MichealAngeloArts
@savelist1 9 months ago
Hi Sam, I was wondering why you have not done any LlamaIndex videos?
@samwitteveenai 8 months ago
I have done a few but didn't release them for a variety of reasons (the changing API, etc.). I will make some new ones. I do use LlamaIndex for certain work projects, and it has some really nice features.
@nitingoswami1959 9 months ago
I have 16 GB of RAM and a Tesla graphics card, but Ollama still takes a long time to generate answers. It seems like it only uses the CPU. How can I utilize both the CPU and GPU simultaneously? 🤔🤔
@LeftThumbBreak 9 months ago
If you're running a Tesla graphics card, I'm assuming you're on a Linux machine and not a Mac. If so, are you sure you're running the Linux distro? I run Ollama all the time on GPU-equipped servers and it runs on the GPU.
@nitingoswami1959 9 months ago
@LeftThumbBreak Running on Ubuntu, but when I send a first request using curl and a second request at the same time, the second waits for the first to finish before being processed. Why is that happening - is it the CPU, or a lack of multi-threading?
@wiltedblackrose 9 months ago
I've had a lot of issues running ANY model with Ollama - it keeps crashing on me. Did you have that too? (BTW, there is an issue open right now...)
@samwitteveenai 9 months ago
So far it has been pretty rock solid for me on the 2 Macs I have been running it on.
@wiltedblackrose 9 months ago
@samwitteveenai So I assume in CPU-only mode... that explains it. The issue I was facing was with CUDA.