Unlock Ollama's Modelfile | How to Upgrade your Model's Brain using the Modelfile

15,105 views

Prompt Engineer

In this video, we analyse Ollama's Modelfile and see how we can change the "brain" of the models in Ollama.
A Modelfile is the blueprint for creating and sharing models with Ollama.
A Modelfile supports the following instructions: FROM, PARAMETER, TEMPLATE, SYSTEM, ADAPTER, LICENSE and MESSAGE. A minimal example is sketched below the link.
Link: github.com/ollama/ollama/blob...
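For reference, a minimal sketch of what such a Modelfile can look like (the base model, parameter values and messages below are illustrative, not taken from the video):

FROM mistral
PARAMETER temperature 0.8
PARAMETER num_ctx 4096
SYSTEM You are a concise assistant that answers in plain language.
MESSAGE user Hello
MESSAGE assistant Hi! How can I help you today?

Build and run the custom model with:
ollama create my-mistral -f ./Modelfile
ollama run my-mistral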
Let’s do this!
Join the AI Revolution!
#ollama #modelfile #milestone #AGI #openai #autogen #windows #ai #llm_selector #auto_llm_selector #localllms #github #streamlit #langchain #qstar #webui #python #llm #largelanguagemodels
CHANNEL LINKS:
🕵️‍♀️ Join my Patreon: / promptengineer975
☕ Buy me a coffee: ko-fi.com/promptengineer
📞 Get on a Call with me - at $125 Calendly: calendly.com/prompt-engineer4...
❤️ Subscribe: / @promptengineer48
💀 GitHub Profile: github.com/PromptEngineer48
🔖 Twitter Profile: / prompt48
TIME STAMPS:
0:00 Intro
0:30 Download Ollama
1:15 Startup Ollama
4:10 Introducing the Modelfile
5:15 Modelfile in Depth
7:46 System in Modelfile
8:20 Construct Custom Model from Modelfile
9:18 Test the new Custom Model
10:43 Messages in Modelfile
12:57 Conclusion & Next Video
🎁Subscribe to my channel: / @promptengineer48
If you have any questions, comments or suggestions, feel free to comment below.
🔔 Don't forget to hit the bell icon to stay updated on our latest innovations and exciting developments in the world of AI!

Comments: 39
@drmetroyt 3 months ago
Thanks for taking up the request ... 😊
@PromptEngineer48 3 months ago
🤗 Welcome
@renierdelacruz4652 3 months ago
Great Video, Thanks very much.
@PromptEngineer48 3 months ago
You are welcome!
@TokyoNeko8 3 months ago
I use the web UI and I feel it's much easier to manage the Modelfiles, plus the obvious history tracking of the chat, etc.
@fkxfkx 3 months ago
Great 👍
@PromptEngineer48 3 months ago
Thank you! Cheers!
@user-ms2ss4kg3m a month ago
Great, thanks!
@PromptEngineer48 a month ago
You are welcome!
@enesnesnese 16 days ago
Thanks for the clear explanation. But can we also do this for the llama3 model built on the ollama image in Docker? I assume that containers do not have access to our local files
@PromptEngineer48 16 days ago
Yes, you can
@enesnesnese 16 days ago
@PromptEngineer48 How? Should I create a file named Modelfile in the container, or on my local machine? I am confused.
@PromptEngineer48 16 days ago
@enesnesnese You should create the Modelfile locally, then run the model created from it inside the container (see the sketch after this thread).
@enesnesnese 16 days ago
@PromptEngineer48 Got it. Thanks.
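For anyone following along, a rough sketch of one way to do this with the official ollama/ollama Docker image; the container name, mount path and model name are illustrative:

# Start Ollama in Docker with a local folder of Modelfiles mounted into the container
docker run -d --name ollama -p 11434:11434 -v ollama:/root/.ollama -v "$(pwd)/modelfiles:/modelfiles" ollama/ollama
# Create the custom model inside the container from the mounted Modelfile
docker exec -it ollama ollama create my-llama3 -f /modelfiles/Modelfile
# Chat with it
docker exec -it ollama ollama run my-llama3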
@khalidkifayat 3 months ago
Nice one. My question was: how do you use mistral_prompt for production purposes, or send it to a client?
@PromptEngineer48 3 months ago
Yes. You can push this to your Ollama account under your models. Then anyone will be able to pull the model with something like ollama pull promptengineer48/mistral_prompt (see the sketch after this thread). I will show the process in the next video on Ollama for sure.
@khalidkifayat 3 months ago
@PromptEngineer48 Appreciated, mate.
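A hedged sketch of that flow, assuming you have an ollama.com account with your Ollama public key added to it (the namespace below mirrors the example in the reply above):

# Copy the local model under your namespace, then push it to ollama.com
ollama cp mistral_prompt promptengineer48/mistral_prompt
ollama push promptengineer48/mistral_prompt
# A client can then pull and run it
ollama pull promptengineer48/mistral_prompt
ollama run promptengineer48/mistral_prompt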
@user-wr4yl7tx3w 2 months ago
Do you have a video showing how to use CrewAI and Ollama together?
@PromptEngineer48 2 months ago
kzbin.info/www/bejne/fXzVZoiIf9uBerM
@JavierCamacho a month ago
Stupid question: does this create a new model file, or does it just create an instruction file for the base model to follow?
@PromptEngineer48 a month ago
New Model File
@JavierCamacho a month ago
@PromptEngineer48 So the size on the drive gets duplicated...? I mean 4 GB for llama3 plus an extra 4 GB for whatever copy we make?
@PromptEngineer48 a month ago
@JavierCamacho No, the 4 GB is not duplicated; the new model reuses the base model's weights, so only the new model's small Modelfile layer is added (see the sketch after this thread).
@JavierCamacho a month ago
@PromptEngineer48 Thanks.
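To the best of my understanding, Ollama stores weights as content-addressed blobs that custom models share, so building on an existing base adds only a small metadata layer rather than another 4 GB. A quick way to check on Linux/macOS (model name illustrative):

ollama create my-llama3 -f ./Modelfile
ollama list                     # both llama3 and my-llama3 report a similar size...
du -sh ~/.ollama/models/blobs   # ...but the shared blob store barely grows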
@michaelroberts1120 2 months ago
What exactly does this do that koboldcpp or sillytavern does not already do in a much simpler way?
@PromptEngineer48 2 months ago
Basically, if I can get the models running on Ollama, we open another door for integration (see the sketch after this thread).
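For context, that "door" is mainly Ollama's local REST API on port 11434, which other tools can call; the model and prompt below are illustrative:

curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'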
@autoboto 3 months ago
This is great info. One thing I have wanted to do is migrate all my local models to another drive. On Win11 I was using WSL2 with Linux Ollama, then I installed Windows Ollama and lost the reference to the local models. I'd rather not download the models again. It would also be nice to be able to migrate models to another SSD and have Ollama reference the alternate model path. OLLAMA_MODELS on Windows works, but only for downloading new models. When I copied models from the original WSL2 location to the new location, Ollama would not recognize them in the list command. Curious whether anyone has needed to relocate a large number of models to a new location and had Ollama reference the new model path.
@PromptEngineer48 3 months ago
Got it
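A hedged sketch of one way to relocate the models to another drive on Windows, in case it helps with the question above; the target path is illustrative, and the key assumption is that the blobs and manifests folders must move together:

# Stop Ollama first, then move the whole models folder (blobs + manifests) in PowerShell
Move-Item "$env:USERPROFILE\.ollama\models" "D:\ollama\models"
# Point Ollama at the new location (takes effect after restarting Ollama)
setx OLLAMA_MODELS "D:\ollama\models"
# Restart Ollama and verify the models are listed again
ollama list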
@UTubeGuyJK 2 months ago
How does the Modelfile not have a file extension? This keeps me up at night not understanding how that works :)
@PromptEngineer48 2 months ago
I will find the reason and give you a good night's sleep.
@robertranjan 2 months ago
❯ ollama run mistral
>>> does a computer filename must have an extension?
A computer file name does not strictly have to have an extension, but it is a common convention in many computing systems, including popular operating systems like Windows and macOS. An extension provides additional information about the type or format of the data contained within the file. For instance, a file named "example.txt" with no extension would still be considered a valid file, but the system might not recognize it as a text file and may not open it with the default text editor. In contrast, if the same file is saved with the ".txt" extension, the system is more likely to open it using the appropriate text editor.
One popular file like `Modelfile` without an extension is `Dockerfile`. I think the developers named it after that one...
@EngineerAAJ 3 months ago
Is it possible to prepare a model with RAG and then save it as a new model?
@PromptEngineer48 3 months ago
To prepare a model for RAG, we would need to fine-tune the model separately using other tools, get the .bin or GGUF file, and then convert it for Ollama integration (see the sketch after this thread).
@EngineerAAJ 3 months ago
@PromptEngineer48 Thanks, I will try to take a deeper look into that, but something tells me I won't have enough memory for that :(
@PromptEngineer48 3 months ago
Try it on RunPod.
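A minimal sketch of importing a separately fine-tuned GGUF into Ollama, assuming a hypothetical local file name:

# Modelfile
FROM ./my-finetuned-model.Q4_K_M.gguf
SYSTEM You answer using the fine-tuned domain knowledge.

# Then build and run it:
ollama create my-finetune -f ./Modelfile
ollama run my-finetune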
@romanmed9035 a month ago
How do I find out when the model was actually updated? When was it filled with data, and how outdated is it?
@PromptEngineer48 a month ago
You will have to give the model a different name...
@romanmed9035 a month ago
@PromptEngineer48 Thank you, but I asked how to find out how current the data is when I download someone else's model, not when I make my own.
@PromptEngineer48 a month ago
If you run the ollama list command in cmd, you will see the list of models on your own system (see the sketch after this thread).
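Two commands that help here, with the caveat that the MODIFIED column in ollama list reflects when a model was pulled or created locally, not the publisher's training-data cutoff; for that you usually have to check the model's page on ollama.com or its upstream model card (model name illustrative):

ollama list                     # local models, their sizes, and when they were last modified locally
ollama show llama3 --modelfile  # the model's Modelfile, including its template and parameters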