Awesome, man! I wasn't aware you could customize Ollama with this kind of Python script! Thanks :)
@ammadkhan4687 9 days ago
Hi, I love all your videos. Could you please make a video on getting structured output using Ollama? I have a use case where I need to extract specific information from an image and get the output in a form that can be added to a database automatically. Thanks in advance.
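For anyone who wants to try this before a dedicated video: the Ollama Python client accepts a format="json" argument that nudges the model to emit valid JSON you can parse and write to a database. A minimal sketch, assuming the 13B LLaVA tag and made-up field names and file path:

```python
import json
import ollama  # pip install ollama

# Ask LLaVA to return the extracted fields as JSON (field names are hypothetical).
response = ollama.chat(
    model="llava:13b",
    messages=[{
        "role": "user",
        "content": 'Extract the name and date of birth from this card as JSON '
                   'with keys "name" and "dob".',
        "images": ["./card.jpg"],  # placeholder path
    }],
    format="json",  # request well-formed JSON output
)

record = json.loads(response["message"]["content"])
print(record)  # this dict could then be inserted into a database
```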
@NiceTechViews5403 12 days ago
This LLaVA is impressive! My original plan was to detect objects via YOLOv7, give the detected objects to Ollama to get some text, and then play that text through a loudspeaker. I guess LLaVA detects many more objects!? Thanks for your video 🙂
@blackstonesoftware7074 5 months ago
This is quite useful! It gives me some great ideas for my own local apps!
@joebywan 2 months ago
Rad video, thanks dude. Why does the images parameter take a list if supplying multiple images to it doesn't work?
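On the multiple-images question: the client's images field is a list (of paths, bytes, or base64 strings), but LLaVA 1.5 was trained on one image per prompt, so extra entries are often ignored in practice. A minimal sketch of the call shape, with placeholder file names:

```python
import ollama

response = ollama.chat(
    model="llava",
    messages=[{
        "role": "user",
        "content": "Describe what you see.",
        # The API accepts a list, but single-image models may only use the first entry.
        "images": ["photo1.jpg", "photo2.jpg"],  # placeholder file names
    }],
)
print(response["message"]["content"])
```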
@wasgeht2409 5 months ago
Thanks :) Is it possible to use this model as an OCR alternative, for example to get information from a JPEG image of an ID card?
@sumukhas5418 5 months ago
This will be too heavy for just that. Considering YOLO instead would be a better option.
@wasgeht2409 5 months ago
@@sumukhas5418 Thanks for the answer :) Actually I am trying pytesseract to read ID-card information from photos taken with a phone, and the results are not very good :/ Do you have any ideas how I could get better results?
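One thing that often helps pytesseract on phone photos is preprocessing before OCR: grayscale, upscale, and binarize so the text is clean black-on-white. A rough sketch, assuming OpenCV is installed and using a placeholder file name:

```python
import cv2
import pytesseract

img = cv2.imread("id_card.jpg")  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Upscale small crops and binarize with Otsu's threshold;
# Tesseract usually does better on high-contrast, upscaled text.
gray = cv2.resize(gray, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

print(pytesseract.image_to_string(binary, config="--psm 6"))
```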
@derekchance8197 4 months ago
Are there models that recognize a photo and then vectorize it?
@AlissonSantos-qw6db 5 months ago
Nice, very helpful! Is it possible to create embeddings of pictures with the model?
@declan6052 2 months ago
How can I modify this code to use my local GPU? It seems to default to my CPU, and I can't find any easy way to change this.
@NiceTechViews5403 12 days ago
It is using my GPU. I have Python 3.9, CUDA 11.2 and cuDNN 8, Visual Studio 2019, and a GTX 1660 Ti ("Tuning sm_75").
@R8R809 5 months ago
Thanks for the video. How do I make sure that Ollama runs on the GPU and not on the CPU?
@GuillermoGarcia755 ай бұрын
Riding the awesomeness wave again!
@rajm5349 3 months ago
Can we get the answer in different languages as per the client requirement, for example in Hindi, Tamil, or Japanese, if possible?
@yuvrajkukreja9727 4 months ago
How do you add long-term memory to this local LLM?
@jaykrown 3 months ago
This was very helpful, my first time getting results from a multimodal LLM directly using Python.
@brpatil_007 3 months ago
Are Ollama and LLaVA free to use? My specs are 16GB RAM / 1TB storage with an RTX 3050 Ti; which model size is suitable for my device, the 13B one or something else? I'm already running Ollama's basic ~4GB model on this machine; is it OK to run the 13B model, and can I also use other models like the OpenAI or Gemini APIs?
@giovannicordova4803 5 months ago
If my local RAM is 8 GB, which Ollama model would you recommend using?
@WebWizard977 5 months ago
deepseek-coder ❤
@aaronbornmann9835 2 months ago
Thanks for your help, you legend!
@timstevens3361 1 month ago
What GPU? How much VRAM?
@fastmamajama 4 months ago
Wow, this is almost too easy to be real. I am using OpenCV to record videos of flying saucers. I could record images and use LLaVA to verify whether there is a flying saucer in them. Can I also search videos by passing videos: instead of images:?
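Ollama's API only takes still images, so the usual workaround is to sample frames from the video with OpenCV and ask the model about each frame. A rough sketch under that assumption (file name, sampling rate, and prompt are placeholders):

```python
import cv2
import ollama

cap = cv2.VideoCapture("saucer_clip.mp4")  # placeholder file name
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:  # roughly one frame per second at 30 fps
        ok_enc, jpg = cv2.imencode(".jpg", frame)
        if ok_enc:
            response = ollama.chat(
                model="llava",
                messages=[{
                    "role": "user",
                    "content": "Is there a flying saucer in this image? Answer yes or no.",
                    "images": [jpg.tobytes()],  # raw JPEG bytes are accepted
                }],
            )
            print(frame_idx, response["message"]["content"])
    frame_idx += 1
cap.release()
```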
@potatoes1000 5 months ago
Is this fully offline? I'm not sure whether you downloaded the 13B 7.4 GB package.
@naturexmusic2567 3 months ago
Help me out: for you it took less than 10 seconds to get the output, but for me it takes about 3 minutes to run. Of course it runs, and I am happy, but it is too slow.
@santhosh-j7e 2 months ago
My computer takes more than an hour; the system has a 4GB 3060 GPU. What can I do?
@naturexmusic2567 2 months ago
@@santhosh-j7e I don't know, man. I was working on it for my hackathon; I tried all kinds of PCs (Pentium, i3, i5, i7) but no difference.
@Isusgsue 5 months ago
What a nice vid. Can I build an AI without using OpenAI?
With 4-bit quantization, LLaVA-1.5-7B uses less than 8 GB of VRAM on a single GPU. Unquantized, the 7B model can typically run on a GPU with less than 24 GB of memory, while the 13B model requires about 32 GB; you can also use multiple 24 GB GPUs to run the 13B model.
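Those numbers match simple weight arithmetic: at 4 bits per parameter, 7 billion parameters is roughly 3.5 GB of weights, plus the vision encoder and activations, which is why it fits under 8 GB. Ollama serves LLaVA 4-bit quantized by default, so a card like the ones mentioned above can run the 7B tag; a minimal sketch (model tag and image path are assumptions):

```python
import ollama

# Pull the default 4-bit quantized 7B build (a download of a few GB).
ollama.pull("llava:7b")

response = ollama.chat(
    model="llava:7b",
    messages=[{
        "role": "user",
        "content": "Describe this image in one sentence.",
        "images": ["./example.jpg"],  # placeholder path
    }],
)
print(response["message"]["content"])
```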