AI Model Context Decoded

7,273 views

Matt Williams

A day ago

Comments: 46
@aim2helpU
@aim2helpU 6 days ago
Fascinating! I have been calling Ollama with llama3.2:3b through Python, which lets me manage context with my own memory structure that only recalls what is necessary to complete the current query. I have found this to be extremely useful, since supplying the whole context simply reduces the response to something less than useful.
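A minimal sketch of that pattern, assuming a local Ollama server on the default port; the retrieve_relevant helper and the HISTORY store are hypothetical stand-ins for a self-managed memory structure:

    # Selective-context chat against a local Ollama server (sketch).
    # Only the messages judged relevant are sent, not the whole history.
    import requests

    HISTORY = []  # full transcript kept outside the model

    def retrieve_relevant(query, history, k=4):
        # Hypothetical selector: real code might use embeddings;
        # here we simply take the last k turns.
        return history[-k:]

    def ask(query, model="llama3.2:3b"):
        messages = retrieve_relevant(query, HISTORY) + [
            {"role": "user", "content": query}
        ]
        resp = requests.post(
            "http://localhost:11434/api/chat",
            json={"model": model, "messages": messages, "stream": False,
                  "options": {"num_ctx": 4096}},
        )
        reply = resp.json()["message"]["content"]
        HISTORY.append({"role": "user", "content": query})
        HISTORY.append({"role": "assistant", "content": reply})
        return reply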
@Studio.burnside
@Studio.burnside 4 days ago
Hey Matt, I want to start off by saying I *never* leave comments on YouTube videos. I just don't, ever. That being said, I really wanted to share some positive recognition for the work you're doing to spread knowledge about these bleeding-edge tools. It can be very confusing for people entering technical spaces like these. The way you facilitate and share information, organize your speech, and time your videos all make these technologies more accessible to people -- which is amazing! So kudos to you and keep killing it!!
@hasbuncristian
@hasbuncristian 12 hours ago
@@Studio.burnside mentions aside, those amazing guayaberas, COOL AF
@prabhic
@prabhic A day ago
Thank you! A very simple and elegant explanation of complex topics.
@nexuslux
@nexuslux 6 days ago
This is the most important channel about Ollama on YouTube.
@DranKof
@DranKof 2 days ago
THANK YOU. I hadn't realized that my models were "forgetting context" at the 2k mark because of this default value. I always thought they were overwriting themselves with their own new information and that it was "just AI being AI" -- my use cases have me floating around 1.5k to 2.4k tokens, so it was only barely noticeable some of the time and never really worth a deep dive. Thanks again!
@piero957
@piero957 6 days ago
Great video, as usual. With every "obvious" topic you talk about, I learn a lot of "not so obvious" fundamental concepts. Thank you!
@chizzlemo3094
@chizzlemo3094 5 days ago
really great technical info explained to us dummies, thank you
@danielgillett370
@danielgillett370 6 days ago
Thanks Matt! I really appreciate your content. It sounds like, if I create a trained agent, then I would start by trying to make the context smaller, not bigger, so as not to use overly large models and context sizes. I'm still learning. Very green. 😐 I'm learning (slowly) how to build agents that I can sell. All of your help is much appreciated. I will buy you some cups of coffee when I can earn some money.
@volt5
@volt5 6 days ago
Thanks for explaining this. I was thinking of context as memory of previous completions - didn’t realize that it is also used for output. I’ve been playing with larger contexts, following one of your videos on adding num_ctx to the model file, and noticed that my chat responses were getting bigger. I’m going to try passing num_predict in the api request to limit this. The notes on your website are very helpful.
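For anyone wanting to try the same thing, a small sketch assuming a local Ollama server; num_ctx and num_predict are standard Ollama options passed per request, and the values shown are only examples:

    # Cap the response length per request with num_predict instead of
    # (or in addition to) baking num_ctx into a Modelfile.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.2",
            "prompt": "Summarize the plot of Hamlet in three sentences.",
            "stream": False,
            "options": {
                "num_ctx": 8192,     # context window; prompt and output share this budget
                "num_predict": 256,  # hard cap on generated tokens
            },
        },
    )
    print(resp.json()["response"])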
@dukeheart556
@dukeheart556 6 days ago
Thanks, I enjoy your information and appreciate it. I adjust num_ctx based on the number of tokens I want to send. It seems to work well for managing memory usage. I have 3 RTX 6000s, so I have a lot of wiggle room. But I do agree that the "hole in the middle" is a problem if you don't use RAG. Thanks again.
@КравчукІгор-т2э
@КравчукІгор-т2э 6 days ago
Thanks Matt! It's as concise, clear and interesting as always!
@MrNuganteng
@MrNuganteng 5 days ago
Is it possible to do reranking in our RAG application using Ollama? Your insight is always interesting.
@scottrobinson21
@scottrobinson21 6 days ago
Thanks, Matt. Great video. Where is context normally stored? VRAM or RAM, or both?
@Zellpampe
@Zellpampe 3 days ago
Thanks for the video. Two questions: 1. Are tokens the same across foundation models? E.g. is the word "token" tokenized into "to ken" by both OpenAI and Anthropic? Or does one tokenize it to "tok en"? Or even to "k en"? 2. If yes, what is the common origin of tokenization?
@technovangelist
@technovangelist 3 days ago
The token visualizer I showed had different options for llama vs OpenAI. Not sure how they differ. For the most part it’s an implementation detail most don’t need to know about.
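One quick way to see that splits are tokenizer-specific, sketched with OpenAI's tiktoken (assuming it is installed); Llama-family models ship their own vocabulary, so their splits differ again:

    # The same string splits differently under different encodings.
    import tiktoken

    text = "tokenization"
    for name in ("cl100k_base", "o200k_base"):  # GPT-4 vs. GPT-4o encodings
        enc = tiktoken.get_encoding(name)
        ids = enc.encode(text)
        pieces = [enc.decode([i]) for i in ids]
        print(f"{name}: {len(ids)} tokens -> {pieces}")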
@jparkerweb
@jparkerweb 5 days ago
Such great info @Matt. I couldn't find the `num_ctx` param for options via the OpenAI API anywhere in the official Ollama docs. Thanks for sharing!
@technovangelist
@technovangelist 5 days ago
I don't know if the OpenAI API supports it. There is a lot of stuff the OpenAI API can't do, which is why Ollama uses the native API first.
@jparkerweb
@jparkerweb 5 days ago
@@technovangelist you are right, after looking some more and rewatching your video, I was confusing an OpenAI API call with your curl example. It would be cool if Ollama could take advantage of an optional "options" parameter to do stuff like this though. Either way, thanks for the great content 👍
@technovangelist
@technovangelist 5 days ago
if building anything new you should never use the openai api. there is no benefit and only downsides. that’s mostly there for the lazy dev who has already built something and doesn't want to do the right thing and build a good interface. it saves maybe 30 minutes vs getting it right.
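To make the difference concrete, a sketch contrasting the two interfaces against a local Ollama server; the OpenAI-compatible /v1 endpoint has no options field, so a per-request setting like num_ctx only goes through the native API:

    # Same question through the OpenAI-compatible endpoint and the native API.
    import requests
    from openai import OpenAI

    # OpenAI-compatible endpoint: convenient for existing code, but no "options".
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
    compat = client.chat.completions.create(
        model="llama3.2",
        messages=[{"role": "user", "content": "What is a context window?"}],
    )
    print(compat.choices[0].message.content)

    # Native endpoint: options such as num_ctx are honored per request.
    native = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": "llama3.2",
              "messages": [{"role": "user", "content": "What is a context window?"}],
              "stream": False,
              "options": {"num_ctx": 8192}},
    )
    print(native.json()["message"]["content"])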
@MeinDeutschkurs
@MeinDeutschkurs 5 days ago
The largest num_ctx I used on my M2 Ultra was 128k, during experiments of keeping „everything“ in the window. I came to the same results, especially above 60000 tokens. RAG is fine, especially to find similar things, but I cannot imagine a true history in a vector store. I tried to summarize past messages, but honestly, simple summaries are not enough. I have no clue how to handle really huge chats.
@GrandpasPlace
@GrandpasPlace 6 days ago
I run a context of 8192 for most models (after making sure that is an acceptable size). I tried bigger context sizes, but they seemed to cause strange results, as you mentioned. It is good to know it is memory related. Now, is that system memory (32 GB) or video memory (12 GB)?
@bobdowling6932
@bobdowling6932 6 days ago
If I set num_ctx=4096 in the options parameter to generate() and then set num_ctx=8192 in the next call but use the same model name, does Ollama reload the model to get a version with the larger context, or does it just use the model already in memory with the larger context size?
@HassanAllaham
@HassanAllaham 5 days ago
It just uses the model already in memory with the larger context size.
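One way to settle that empirically, assuming the /api/ps endpoint is available in your Ollama version: time two calls with different num_ctx values and look at what the server reports as loaded; a jump in latency or in reported VRAM use suggests a reload:

    # Time back-to-back calls with different num_ctx and inspect loaded models.
    import time
    import requests

    BASE = "http://localhost:11434"

    def generate(num_ctx):
        t0 = time.time()
        requests.post(f"{BASE}/api/generate",
                      json={"model": "llama3.2", "prompt": "Hi", "stream": False,
                            "options": {"num_ctx": num_ctx}})
        return time.time() - t0

    for ctx in (4096, 8192):
        took = generate(ctx)
        loaded = requests.get(f"{BASE}/api/ps").json()  # currently loaded models
        names = [(m["name"], m.get("size_vram")) for m in loaded.get("models", [])]
        print(f"num_ctx={ctx}: {took:.1f}s, loaded: {names}")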
@JohnBoen
@JohnBoen 5 days ago
This is what I have been looking for :)
@eziola
@eziola 6 days ago
Thanks Matt! The fact that the only visible parameter about context size doesn't tell you the context size is baffling 😮.
@alx8439
@alx8439 5 days ago
Please cover the topic of flash attention as a measure to reduce the memory footprint of a large context. I think it would be closely related to this topic.
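For anyone wanting to experiment in the meantime, a rough sketch assuming your Ollama build honors the OLLAMA_FLASH_ATTENTION environment variable; it has to be set on the server process (stop any already-running server first), not on the client:

    # Launch the Ollama server with flash attention enabled, then query it.
    import os
    import subprocess
    import time
    import requests

    env = dict(os.environ, OLLAMA_FLASH_ATTENTION="1")
    server = subprocess.Popen(["ollama", "serve"], env=env)
    time.sleep(3)  # crude wait for the server to come up

    resp = requests.post("http://localhost:11434/api/generate",
                         json={"model": "llama3.2", "prompt": "Hello",
                               "stream": False, "options": {"num_ctx": 16384}})
    print(resp.json()["response"])
    server.terminate()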
@renobodyrenobody
@renobodyrenobody 5 days ago
Thanks, in-ter-est-ing and btw nice shirt.
@AvacadoJuice-q9b
@AvacadoJuice-q9b 6 days ago
Thanks! Learned a lot.
@ArnLiveHappy5678
@ArnLiveHappy5678 6 days ago
This is the exact problem I am facing at the moment. I have a chatbot where a single input can sometimes be 8,000 tokens (basically a system log), so I have set the context window to 12k. The problem with summarizing is that it drops a lot of info from the input. Do you think RAG would help?
@jparkerweb
@jparkerweb 5 days ago
For large summarization tasks, a "map-reduce" approach sometimes works well. Basically, try to semantically chunk your large text into smaller pieces, summarize each chunk, and glue the summaries together. If the new combined text is small enough for the LLM to summarize, go for it; otherwise repeat the process. I have been able to distill entire books down to a few hundred tokens with this approach (for fun).
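A bare-bones sketch of that map-reduce loop against a local Ollama server; the chunking here is naive fixed-size splitting rather than true semantic chunking:

    # Recursive map-reduce summarization: split, summarize each piece,
    # join the summaries, and repeat until the text fits in one pass.
    import requests

    def llm_summarize(text, model="llama3.2"):
        resp = requests.post("http://localhost:11434/api/generate",
                             json={"model": model, "stream": False,
                                   "prompt": f"Summarize concisely:\n\n{text}",
                                   "options": {"num_ctx": 8192, "num_predict": 300}})
        return resp.json()["response"]

    def map_reduce_summary(text, chunk_chars=12000):
        if len(text) <= chunk_chars:
            return llm_summarize(text)
        chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
        combined = "\n".join(llm_summarize(c) for c in chunks)  # map step
        return map_reduce_summary(combined, chunk_chars)        # reduce / recurse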
@acbp9699
@acbp9699 6 days ago
great video🎉❤
@F336
@F336 6 days ago
If your PC reboots when it's doing things like recalculating the Python mathlib and fonts and/or using a GPU, check your BIOS settings to make sure PCI reset on crash is disabled... -.-
@technovangelist
@technovangelist 6 days ago
Not really relevant to anything in this video, but interesting for the day I might use a PC or the Python mathlib.
@thiagogpinto
@thiagogpinto 6 days ago
awesome content
@jimlynch9390
@jimlynch9390 6 days ago
So what memory are we talking about, RAM or VRAM or something else?
@technovangelist
@technovangelist 4 days ago
In general, in this video I am talking about the memory of the model, i.e. the context.
@Tommy31416
@Tommy31416 6 days ago
I recently set mistral small to 120k and forgot to reduce the number of parallels back to 1 before executing a query. That was a white knuckle 10 minutes I can tell you. Thought the laptop would catch fire 🔥
@AntonioCorrenti-b4e
@AntonioCorrenti-b4e 6 days ago
The problem arises mostly with code. Code fills up a lot of that context, and every revision or answer from the LLM does the same. The solution is to summarize often but keep the original messages for the user or for later retrieval.
@BlenderInGame
@BlenderInGame 6 days ago
My max context is 5000 for llama3.2 on a laptop with 12 GB of CPU RAM (9.94 GB usable).
@IIWII9
@IIWII9 6 days ago
Another way to determine the context length is to ask the model. I asked “Estimate your context length “. The model responded: “The optimal context length for me is around 2048 tokens, which translates to approximately 16384 characters including spaces. This allows for a detailed conversation with enough historical context to provide relevant and accurate responses.”
@technovangelist
@technovangelist 6 days ago
The model in most cases doesn't really know. If you get an answer that makes sense, it's luck.
@alx8439
@alx8439 5 days ago
It's just hallucinating based on some general nonsense it was trained on. It has nothing to do with the real capability of this specific model you're asking
@thenextension9160
@thenextension9160 4 days ago
Gunna hallucinate bro
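The dependable way to get the number is to ask the server about the model rather than asking the model about itself. A sketch assuming the /api/show endpoint; the exact model_info key depends on the model's architecture:

    # Read the model's trained context length from Ollama's metadata
    # instead of asking the model (which will just guess).
    import requests

    info = requests.post("http://localhost:11434/api/show",
                         json={"model": "llama3.2"}).json()

    for key, value in info.get("model_info", {}).items():
        if key.endswith(".context_length"):
            print(f"{key} = {value}")  # e.g. llama.context_length = 131072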
@NLPprompter
@NLPprompter 6 days ago
That's why, ladies and gentlemen, you should stop asking an LLM to count the "r"s in "strawberry": we see letters, they (LLMs) see tokens.