Fascinating! I have been calling Ollama with llama3.2:3b through Python, which lets me manage context with my own memory structure that recalls only what is necessary to complete the current query. I have found this to be extremely useful, since supplying the whole context simply reduces the response to something less than useful.
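For anyone curious what that kind of selective recall can look like, here is a minimal sketch; the `recall` helper, the word-overlap scoring, and the model tag are illustrative assumptions, not the commenter's actual setup (a real version would likely use embeddings):

```python
# Rough sketch: keep the full history in a plain list, but only send the few
# past messages that look relevant to the new query.
# Assumes `pip install ollama` and a local model pulled with `ollama pull llama3.2:3b`.
import ollama

memory = []  # every past message: {"role": "...", "content": "..."}

def recall(query, k=3):
    """Return the k past messages sharing the most words with the query (naive placeholder)."""
    q = set(query.lower().split())
    overlap = lambda m: len(q & set(m["content"].lower().split()))
    return sorted(memory, key=overlap, reverse=True)[:k]

def ask(query):
    messages = recall(query) + [{"role": "user", "content": query}]
    reply = ollama.chat(model="llama3.2:3b", messages=messages)
    answer = reply["message"]["content"]
    memory.append({"role": "user", "content": query})
    memory.append({"role": "assistant", "content": answer})
    return answer

print(ask("What does num_ctx control in Ollama?"))
```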
@Studio.burnside • 4 days ago
Hey Matt, I want to start off by saying I *never* leave comments on YouTube videos. I just don't, really ever. That being said, I really wanted to share positive recognition with you for the work you're doing to share knowledge around these bleeding-edge tools with the world. It can be very confusing for people to enter technical spaces like these. The way you facilitate and share information, the way you organize your speech, and the timeliness of your videos all make these technologies more accessible to people -- which is amazing! So kudos to you and keep killing it!!
@hasbuncristian • 12 hours ago
@Studio.burnside Mentions aside, those amazing guayaberas are COOL AF
@prabhic • a day ago
Thank you! A very simple and elegant explanation of complex topics.
@nexuslux • 6 days ago
This is the most important channel about Ollama on YouTube.
@DranKof • 2 days ago
THANK YOU. I hadn't realized that my models were "forgetting context" at the 2k mark because of this default value. I always thought it was just because they were overriding themselves with their own new information and that was "just AI being AI" -- my use cases have me floating around 1.5k to 2.4k tokens, so it was only barely noticeable some of the time and never really worth a deep dive. Thanks again!
@piero957 • 6 days ago
Great video, as usual. With every "obvious" topic you talk about, I learn a lot of "not so obvious" fundamental concepts. Thank you!
@chizzlemo3094 • 5 days ago
Really great technical info explained to us dummies, thank you.
@danielgillett370 • 6 days ago
Thanks Matt! I really appreciate your content. It sounds like, if I create a trained agent, then I should start trying to make the context smaller, not bigger, so as not to use overly large models and context sizes. I'm still learning. Very green. 😐 I'm learning (slowly) how to build agents that I can sell. All of your help is much appreciated. I will buy you some cups of coffee when I can earn some money.
@volt5 • 6 days ago
Thanks for explaining this. I was thinking of context as memory of previous completions - didn’t realize that it is also used for output. I’ve been playing with larger contexts, following one of your videos on adding num_ctx to the model file, and noticed that my chat responses were getting bigger. I’m going to try passing num_predict in the api request to limit this. The notes on your website are very helpful.
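For reference, here is roughly what passing `num_predict` (and `num_ctx`) in a request to Ollama's native `/api/generate` endpoint can look like; the values are only illustrative, and it assumes a local Ollama server with llama3.2 pulled:

```python
# Hedged sketch: cap the response length with num_predict while also setting num_ctx.
# Both go in the "options" object of Ollama's native API; the numbers here are examples.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",
        "prompt": "Summarize what a context window is in two sentences.",
        "stream": False,
        "options": {
            "num_ctx": 8192,     # context window shared by prompt and output
            "num_predict": 128,  # upper bound on generated tokens
        },
    },
)
print(resp.json()["response"])
```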
@dukeheart556 • 6 days ago
Thanks, I enjoy your information and appreciate it. I adjust the num_ctx based on the number of tokens I want to send. It seems to work well in managing memory usage. I have 3 RTX 6000s, so I have a lot of wiggle room. But I do agree that the "hole in the middle" is a problem if you don't use RAG. Thanks again.
@КравчукІгор-т2э • 6 days ago
Thanks Matt! It's as concise, clear and interesting as always!
@MrNuganteng • 5 days ago
Is it possible to do reranking in our RAG application using Ollama? Your insight is always interesting.
@scottrobinson21 • 6 days ago
Thanks, Matt. Great video. Where is context normally stored? VRAM or RAM, or both?
@Zellpampe • 3 days ago
Thanks for the video. Two questions: 1. Are tokens the same across foundation models? E.g. is the word "token" tokenized into "to" + "ken" by both OpenAI and Anthropic? Or does one tokenize it to "tok" + "en"? Or even "to" + "k" + "en"? 2. If yes, what is the common origin of tokenization?
@technovangelist • 3 days ago
The token visualizer I showed had different options for llama vs OpenAI. Not sure how they differ. For the most part it’s an implementation detail most don’t need to know about.
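As a small illustration of how tokenization differs between tokenizers, here is a sketch using the `tiktoken` library; it only covers OpenAI-style encodings (llama-family models ship their own tokenizer), so it simply prints how each encoding splits the same word:

```python
# Hedged sketch: print how two different encodings split the same word.
# Assumes `pip install tiktoken`; llama models use a different tokenizer again,
# so splits across model families are generally not guaranteed to match.
import tiktoken

word = "tokenization"
for name in ("gpt2", "cl100k_base"):
    enc = tiktoken.get_encoding(name)
    pieces = [enc.decode([t]) for t in enc.encode(word)]
    print(f"{name}: {pieces}")
```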
@jparkerweb • 5 days ago
Such great info @Matt. I couldn't find the `num_ctx` param for options via the OpenAI API anywhere in the official Ollama docs. Thanks for sharing!
@technovangelist • 5 days ago
I don't know if the OpenAI API supports it. There is a lot of stuff that the OpenAI API can't do, which is why Ollama puts the native API first.
@jparkerweb • 5 days ago
@technovangelist You are right -- after looking some more and rewatching your video, I was confusing an OpenAI API call with your curl example. It would be cool if Ollama could take advantage of an optional "options" parameter to do stuff like this, though. Either way, thanks for the great content 👍
@technovangelist • 5 days ago
If building anything new, you should never use the OpenAI API. There is no benefit and only downsides. It's mostly there for the lazy dev who has already built something and doesn't want to do the right thing and build a good interface. It saves maybe 30 minutes vs getting it right.
@MeinDeutschkurs • 5 days ago
The largest num_ctx I used on my M2 Ultra was 128k, during experiments with keeping "everything" in the window. I came to the same results, especially above 60,000 tokens. RAG is fine, especially for finding similar things, but I cannot imagine keeping a true history in a vector store. I tried summarizing past messages, but honestly, simple summaries are not enough. I have no clue how to handle really huge chats.
@GrandpasPlace • 6 days ago
I run a context of 8192 for most models (after making sure that is an acceptable size). I tried bigger context sizes, but they seem to cause strange results, as you mentioned. It is good to know it is memory related. Now, is that system memory (32 GB) or video memory (12 GB)?
@bobdowling6932 • 6 days ago
If I set num_ctx=4096 in the options parameter to generate() and then set num_ctx=8192 in the next call but use the same model name, does Ollama reload the model to get a version with the larger context, or does it just use the model already in memory with the larger context size?
@HassanAllaham • 5 days ago
It just uses the model already in memory with the larger context size.
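For context, the question is about calls like the following minimal sketch with the `ollama` Python client; whether the second call reuses the already-loaded model or triggers a reload can depend on the Ollama version, so checking `ollama ps` or the server logs is a reasonable way to confirm:

```python
# Hedged sketch of the calls in question: same model name, different num_ctx
# passed per request through the options dict. Assumes `pip install ollama`.
import ollama

first = ollama.generate(
    model="llama3.2",
    prompt="Give a one-line definition of a context window.",
    options={"num_ctx": 4096},
)
second = ollama.generate(
    model="llama3.2",
    prompt="Now explain it in a short paragraph.",
    options={"num_ctx": 8192},  # larger window on the next call
)
print(first["response"])
print(second["response"])
```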
@JohnBoen • 5 days ago
This is what I have been looking for :)
@eziola • 6 days ago
Thanks Matt! The fact that the only visible parameter about context size doesn't tell you the actual context size is baffling 😮.
@alx8439 • 5 days ago
Please cover the topic of flash attention as a way to reduce the memory footprint of a large context. I think it is closely related to this topic.
@renobodyrenobody • 5 days ago
Thanks, in-ter-est-ing and btw nice shirt.
@AvacadoJuice-q9b • 6 days ago
Thanks! Learned a lot.
@ArnLiveHappy5678 • 6 days ago
This is the exact same problem I am running into at the moment. I have a chatbot where a single input can sometimes be 8000 tokens (basically a system log), so I have set the context window to 12k. The problem with summarizing is that it drops a lot of info from the input. Do you think RAG would help?
@jparkerweb • 5 days ago
For large summarization tasks, sometimes a "map-reduce" approach works well. Basically, try to semantically chunk your large text into smaller chunks. Summarize each smaller semantic chunk and glue them all together. If the new combined chunk is small enough to summarize by the LLM, then go for it, otherwise repeat the process. I have been able to distil down entire books into a few hundred tokens with this approach (for fun).
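A rough sketch of that map-reduce idea; the paragraph-based chunking and size threshold are placeholders for real semantic chunking, and it assumes the `ollama` Python package with a local llama3.2 model:

```python
# Hedged sketch of map-reduce summarization with a local Ollama model.
# Chunking here is naive (split on blank lines up to a size limit);
# real semantic chunking would be smarter.
import ollama

def chunk(text, max_chars=6000):
    parts, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            parts.append(current)
            current = ""
        current += para + "\n\n"
    if current:
        parts.append(current)
    return parts

def summarize(text, model="llama3.2"):
    prompt = "Summarize the following text in a short paragraph:\n\n" + text
    return ollama.generate(model=model, prompt=prompt)["response"]

def map_reduce_summary(text, max_chars=6000):
    # Map: summarize each chunk. Reduce: join the summaries and repeat until small enough.
    while len(text) > max_chars:
        text = "\n\n".join(summarize(c) for c in chunk(text, max_chars))
    return summarize(text)
```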
@acbp9699 • 6 days ago
Great video 🎉❤
@F336 • 6 days ago
If your PC reboots while doing things like recalculating the Python math lib and fonts and/or using a GPU, check your BIOS settings to make sure PCI reset on crash is disabled... -.-
@technovangelist • 6 days ago
Not really relevant to anything in this video, but interesting for the day I might use a PC or the Python math lib.
@thiagogpinto • 6 days ago
Awesome content.
@jimlynch9390 • 6 days ago
So what memory are we talking about, RAM or VRAM or something else?
@technovangelist • 4 days ago
In general, in this video I am talking about the memory of the model, i.e. the context.
@Tommy31416 • 6 days ago
I recently set Mistral Small to 120k and forgot to reduce the number of parallel requests back to 1 before executing a query. That was a white-knuckle 10 minutes, I can tell you. Thought the laptop would catch fire 🔥
@AntonioCorrenti-b4e • 6 days ago
The problem arises mostly with code. Code fills up a lot of that context, and every revision or answer from the LLM does the same. The solution is to summarize often but keep the original messages for the user or for later retrieval.
@BlenderInGame • 6 days ago
My max context is 5000 for llama3.2 on a laptop with 12 GB of CPU RAM (9.94 GB usable).
@IIWII9 • 6 days ago
Another way to determine the context length is to ask the model. I asked "Estimate your context length". The model responded: "The optimal context length for me is around 2048 tokens, which translates to approximately 16384 characters including spaces. This allows for a detailed conversation with enough historical context to provide relevant and accurate responses."
@technovangelist • 6 days ago
The model in most cases doesn't really know. If you get an answer that makes sense, it's luck.
@alx8439 • 5 days ago
It's just hallucinating based on some general nonsense it was trained on. It has nothing to do with the real capability of the specific model you're asking.
@thenextension9160 • 4 days ago
Gunna hallucinate bro
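If you want the real number rather than the model's guess, Ollama itself can report it. Here is a minimal sketch against the `/api/show` endpoint; the exact metadata key is architecture-specific (e.g. something like "llama.context_length"), so this just filters for it. Running `ollama show llama3.2` from the CLI shows similar information.

```python
# Hedged sketch: ask the Ollama server, not the model, for the context length.
# Assumes a local Ollama server with llama3.2 pulled; key names vary by architecture.
import requests

info = requests.post(
    "http://localhost:11434/api/show",
    json={"model": "llama3.2"},
).json()
for key, value in info.get("model_info", {}).items():
    if "context_length" in key:
        print(key, value)
```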
@NLPprompter • 6 days ago
That's why, ladies and gentlemen, you should stop asking an LLM to count the letters in "strawberry": we see letters, they (the LLMs) see tokens.