This channel is straight up priceless. No fluff. Real deal development how-to. Thank you
@samwitteveenai · 10 days ago
Thanks, it's appreciated.
@davidmccauley7822 · 18 days ago
I would love to see a simple example of how to fine-tune a vision model with Ollama.
@suiteyousir · 16 days ago
Thanks for these updates, quite difficult to keep up with all the new releases nowadays
@chizzlemo3094 · 18 days ago
OMG, this is exactly what I need. Thanks so much.
@bigfootpegrande · 17 days ago
Miles and AI? I'm all for it!
@gr8tbigtreehugger · 17 days ago
Cool to see how you approached NER using an LLM. I've been using SpaCy.
@samwitteveenai · 17 days ago
I normally use spaCy for anything at scale. You can use LLMs to build good datasets for custom entities and then use those to train the spaCy model.
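A rough sketch of that workflow, for anyone curious: take entity spans produced by an LLM (the texts, character offsets, and labels below are invented for illustration) and pack them into a spaCy DocBin that `spacy train` can consume.

```python
# Minimal sketch: turn LLM-extracted entities into spaCy training data.
# Assumes you already have (text, [(start, end, label), ...]) pairs,
# e.g. produced by prompting an LLM for character offsets of custom entities.
import spacy
from spacy.tokens import DocBin

llm_annotations = [
    ("Miles Davis recorded Kind of Blue in 1959.",
     [(0, 11, "MUSICIAN"), (21, 33, "ALBUM")]),
]

nlp = spacy.blank("en")          # blank pipeline, used only for tokenisation
doc_bin = DocBin()

for text, spans in llm_annotations:
    doc = nlp.make_doc(text)
    ents = []
    for start, end, label in spans:
        span = doc.char_span(start, end, label=label, alignment_mode="contract")
        if span is not None:     # skip spans that don't align to token boundaries
            ents.append(span)
    doc.ents = ents
    doc_bin.add(doc)

doc_bin.to_disk("train.spacy")   # then: python -m spacy train config.cfg --paths.train train.spacy
```

From there you train a normal spaCy NER pipeline, so inference at scale doesn't need the LLM at all.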
@NeuralDev · 11 days ago
Could you make a more in-depth tutorial about fine-tuning models to improve their accuracy for this type of task?
@loudmanCA · 18 days ago
Really appreciate your channel! Could you make a video to help us better understand what specs are required for using LLMs locally?
@sridharangopal · 15 days ago
Great videos, Sam. I've learnt so much from them. Re: the Llama vision model on Ollama, I have been trying to get it to work with both pictures and tools, but it looks like it can only do pictures and structured output, with no tool-calling support yet. Any idea how to get around this limitation?
@marouahamdi4293 · 8 days ago
I love this video as always! I have several invoices from which I want to extract the information and save it into an Excel file. I imagine this is doable with this structured output technique. If you have any advice on how to do it, I’m all ears!
@samwitteveenai · 3 days ago
Extract it as JSON and then use something like Pandas or openpyxl to save it to Excel.
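A minimal sketch of the Pandas route (the invoice fields here are placeholders; openpyxl is assumed to be installed as the Excel writer engine):

```python
# Minimal sketch: flatten extracted invoice JSON into an Excel sheet.
# Field names are hypothetical; use whatever your structured-output schema returns.
import pandas as pd

invoices = [
    {"invoice_no": "INV-001", "vendor": "Acme", "total": 120.50},
    {"invoice_no": "INV-002", "vendor": "Globex", "total": 89.99},
]

df = pd.json_normalize(invoices)           # also flattens nested fields
df.to_excel("invoices.xlsx", index=False)  # requires openpyxl for .xlsx output
```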
@Stewz66 · 5 days ago
So, are intelligent document processing and document classification possible with open-source vision models? Wheels turning...
@samwitteveenai · 3 days ago
Yes, but it definitely helps if you fine-tune the model for your particular use case.
@justine_chang39 · 16 days ago
Do you know if this model would be good for getting the coordinates of objects in images? For example, I would like to get the coordinates of a dog in an image, and the model might return a bounding box [[x1, y1], [x2, y2]].
@samwitteveenai · 15 days ago
These models are probably not good enough for that at the moment, but certainly things like the new Gemini model can do that kind of task.
@parnapratimmitra6533 · 18 days ago
Very informative video on vision-based models with structured outputs. If possible, could you also make a video on a simple LangChain or LangGraph app that uses Ollama's vision-based models to read all the images in a document (say, a PDF) and describe them as structured outputs? Thanks in advance.
@protovici1476 · 17 days ago
Check out ColPali.
@austinlinco · 16 days ago
I literally thought of this yesterday and was using a system prompt to force it to respond as a dictionary. WTF is up with 2025 being perfect, and what's the catch?
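For comparison with the system-prompt trick, the structured-outputs route with the Ollama Python client (>= 0.4) looks roughly like this; the model name, image path, and schema fields are only examples:

```python
# Minimal sketch: constrain a local vision model's output to a JSON schema
# instead of begging for a dictionary in the system prompt.
from ollama import chat
from pydantic import BaseModel

class ImageInfo(BaseModel):
    summary: str
    objects: list[str]

response = chat(
    model="llama3.2-vision",            # any vision model you have pulled locally
    messages=[{
        "role": "user",
        "content": "Describe this image.",
        "images": ["photo.jpg"],        # hypothetical local image path
    }],
    format=ImageInfo.model_json_schema(),  # constrains output to this JSON schema
)

info = ImageInfo.model_validate_json(response.message.content)
print(info)
```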
@nufh · 18 days ago
I have tried it; it depends on the model itself.
@PandoraBox1943 · 15 days ago
very useful
@pensiveintrovert4318 · 18 days ago
The amount of hacking you have to do just to get "OK" results says it all. Not production quality, and it won't be any time soon.
@brando2818 · 18 days ago
Have you tried it with better models than were used here?
@adriangabriel3219 · 18 days ago
That's not to be expected from models of that size.
@pensiveintrovert4318 · 18 days ago
@brando2818 The whole point of using Ollama is to run open-source models on your own hardware. OpenAI, Anthropic, and Google already offer structured output.