Excellent video, can't wait for more visual model examples, especially with ScreenAI for agents that browse the web
@OumarDicko-c5i • 4 months ago
Thank you for your video
@paulmiller591 • 4 months ago
This is an exciting sub-field. We have a lot of clients making observations, so we're keen to try this. Happy travels, Sam.
@FirstArtChannel • 4 months ago
The inference speed and size of the model still seem reasonable, if longer/larger than a multimodal LLM such as LLaVA, or am I wrong?
@samwitteveenai • 4 months ago
Honestly, it's been a while since I played with LLaVA, and I've mostly just used it on Ollama, so I'm not sure how it compares. Phi3-Vision is also worth checking out; I may make a video on that as well.
@sundarrajendiran2722 • 1 month ago
Can we upload multiple images in the demo and ask questions whose answer is in any one of the images?
@SenderyLutson • 4 months ago
I think the Aria dataset from Meta is also open
@samwitteveenai • 4 months ago
Interesting dataset. I didn't know about this. Thanks!
@miguelalba2106 • 3 months ago
Do you know how complete the dataset should be for fine-tuning? I have lots of image-text pairs of clothes, but some have more details than others, so I guess the model will get confused during training. E.g., there are thousands of images of dresses labeled with only the color, and thousands labeled with color + other details.
@SonGoku-pc7jl • 4 months ago
Thanks, we'll see Phi-3 with vision for comparison :)
@AngusLou • 4 months ago
Is it possible to make the whole thing local?
@ricardocosta9336 • 4 months ago
Ty my dude
@willjohnston8216 • 4 months ago
Do you know if they are going to release a model for real-time video sentiment analysis? I thought there was a demo of that by either Google or OpenAI?
@samwitteveenai • 4 months ago
Not sure, but you can do some of this already with Gemini, just not in real time (publicly, at least).
@SenderyLutson • 4 months ago
How much VRAM does this model consume while running? And the Q4 version?
@samwitteveenai • 4 months ago
The inference was running on a T4, so it's manageable. The fine-tuning was on an A100.
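For anyone curious about the Q4 side, here's a minimal sketch of loading a vision-language model in 4-bit with bitsandbytes, using LLaVA (mentioned above) as a stand-in, since this may not be exactly what the video uses. A ~7B model drops from roughly 14 GB in fp16 to around 4-5 GB, so it fits on a 16 GB T4:

```python
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq, BitsAndBytesConfig

model_id = "llava-hf/llava-1.5-7b-hf"  # stand-in VLM; swap in the model you're running

# 4-bit NF4 quantization: weights take ~0.5 bytes each instead of 2 (fp16)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # fp16 compute is T4-friendly
)

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # places layers on the GPU automatically
)
# if the auto class doesn't cover your model, use its concrete class
# (e.g. LlavaForConditionalGeneration) instead
```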
@unclecode • 4 months ago
Fascinating. I wonder if there is an example of fine-tuning for segmentation. If so, the way we collate the data would need to be different. I have one question about the timeline at 15:30: I noticed the code splits the dataset into train and test, but after the split it says `train_ds = split_ds["test"]`. Shouldn't it be "train"? I think that might be a mistake. What do you think? Very interesting content, especially if the model has the general knowledge to handle something like your McDonald's example. This definitely has great applications in the medical and education fields as well. Thank you for the content.
@samwitteveenai • 4 months ago
Just look at the output from the model when you do segmentation and copy that. Yes, you will need to update the collate function. The "test" part is correct because it is just setting it to train on a very small number of examples; in a real training run, yes, use the 'train' split, which is 95% of the data, as opposed to the 5% in the test split.
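To make the split logic concrete, here's a minimal sketch with the Hugging Face `datasets` library (the dataset id is a placeholder, not necessarily the one from the notebook):

```python
from datasets import load_dataset

# placeholder dataset id; the notebook's actual dataset may differ
ds = load_dataset("username/my-image-text-dataset", split="train")

# 95/5 split, as described above
split_ds = ds.train_test_split(test_size=0.05)

# the notebook trains on the small 5% "test" slice purely to keep the demo fast
train_ds = split_ds["test"]

# for a real fine-tune, use the 95% slice instead:
# train_ds = split_ds["train"]
```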
@unclecode • 4 months ago
@samwitteveenai Oh OK, that was just for the video demo, thanks for the clarification 👍
@unclecode • 4 months ago
@samwitteveenai Thanks, I get it now: the "test" split is just for the demo in this Colab. Although it would've been clearer if they had used a subset of, say, 100 rows from the train split. I experimented a bit, and the model is super friendly to fine-tuning. Whatever they did, it made this model really easy to tune. We're in a time where "tune-friendly" actually makes sense.