Amazing as always. Can you make another video on how to use this on Google Colab?
@romroc627 · 9 months ago
As always, your videos are very helpful and clear. I use a VM too, for object detection inference in the cloud. I still haven't found a good serverless solution to run inference. Maybe one of the next videos could be about running inference on a serverless architecture, with or without a GPU. Thanks
@SkalskiP · 9 months ago
Could you be a bit more specific? What are you looking for? A server you could use for deployment? Containerization? Terraform?
@romroc627 · 9 months ago
@@SkalskiP I need to deploy my trained object detection model for inference. Currently I have a VM in the cloud to do that. Using a VM has some disadvantages: I pay even when it is idle, I need to maintain it, and so on. I tried serverless solutions for running inference, like AWS Lambda or Azure Functions, but they have limitations and they don't run on a GPU.
@유영재-c9c · 6 months ago
It would be nice to also show running CogVLM in live-cam mode.
@Roboflow · 6 months ago
We will probably make a video like this with the next big multimodal LLM.
@jimshtepa5423 · 9 months ago
Why did you use Roboflow? What function does it serve? What would you have done if Roboflow was not available?
@Roboflow · 9 months ago
CogVLM is one of the models available in the Inference Server. I used it because it is free and required only 2 commands to deploy. All you need is a Roboflow API key, and you can generate one with a free-tier account.
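For reference, the deployment described above boils down to something like the sketch below. This assumes the current `inference-cli` interface; the exact flags and the way the API key is passed may differ between versions, so check the Roboflow Inference docs for your release.

```shell
# Install Roboflow's inference CLI, then start the local inference server.
# `inference server start` pulls and runs the server's Docker image,
# so Docker must be installed and running on the machine (e.g. the AWS VM).
pip install inference-cli
export ROBOFLOW_API_KEY="<your-free-tier-key>"  # generated at app.roboflow.com
inference server start
```

Once the container is up, the server listens on a local port (9001 by default) and serves the models, including CogVLM, over HTTP.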
@Roboflow · 9 months ago
As for what other options you have: you would probably need to wrap the model in TorchServe.
@jimshtepa5423 · 9 months ago
@@Roboflow Thank you. Don't get me wrong, I was not criticizing; I just didn't understand the role of Roboflow. Just to clarify: what is the purpose of the API key when an ML model is deployed? Compute resources are provided by AWS and the model's source code is available on Hugging Face, so what does Roboflow actually do here?
@varunnegi-v7z · 9 months ago
Make a video on fine-tuning CogVLM and LLaVA as well.
@Roboflow · 9 months ago
Cool idea. I'm scared to even think how much compute you'd need to fine-tune this model.
@varunnegi-v7z · 9 months ago
@@Roboflow Yes, I understand the required compute will be very high, but we could still get some insight into fine-tuning vision LLMs; currently there are very few to no articles or videos available on this. Hoping you will come up with a video or article on it 👍👍👍
@mohamednayeem2602 · 7 months ago
Is there any update on fine-tuning it? I fine-tuned LLaVA, but I'm not sure how to do it for CogVLM. Can you help me if you have any resources?
@tomaszbazelczuk4987 · 9 months ago
Awesome!!!
@SkalskiP · 9 months ago
Thank you!
@cyberhard · 9 months ago
Excellent as usual! BTW, nice hat.
@Roboflow · 9 months ago
Thanks! It’s been a while since my last video. I’m a bit rusty.
@cyberhard · 9 months ago
@@Roboflow seems like you edited the rust out. 😉
@Roboflow · 9 months ago
@@cyberhard hah, what do you mean?
@filipemartins1721 · 6 months ago
Is there any way to use FastAPI with this solution? Instead of using the provided UI, I would like to send an API call. Any ideas?
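You may not even need FastAPI: the inference server itself already exposes an HTTP API, so you can call it directly (from `curl`, your own backend, or a FastAPI wrapper that just forwards requests). A rough sketch of such a call is below; the `/llm/cogvlm` route name and the payload shape are assumptions here, so verify the exact contract against the Roboflow Inference docs for your server version.

```shell
# Call the local inference server directly instead of using the UI.
# Assumes the server runs on the default port 9001 and that
# $ROBOFLOW_API_KEY is set; photo.jpg is a hypothetical input image.
IMAGE_B64=$(base64 -w 0 photo.jpg)
curl -s http://localhost:9001/llm/cogvlm \
  -H "Content-Type: application/json" \
  -d "{
        \"api_key\": \"$ROBOFLOW_API_KEY\",
        \"prompt\": \"Describe this image.\",
        \"image\": {\"type\": \"base64\", \"value\": \"$IMAGE_B64\"}
      }"
```

A FastAPI service would then be a thin layer that accepts your own request format and issues this same HTTP call to the server.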
@abdellatifBELMADY · 9 months ago
Great job, thank you 😉
@Roboflow · 9 months ago
Thanks a lot!
@eliaweiss1 · 9 months ago
The `inference server start` command always starts a new container while the old one stays on disk; this clogs the disk and makes startup take a long time. How can I make inference reuse the previous container?
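As a stopgap, the stale containers can be cleaned up with plain Docker commands. A sketch is below; the image name used in the filter is an assumption (it depends on whether you run the CPU or GPU server image), and `docker container prune` removes all stopped containers on the machine, not just inference ones, so use it with care.

```shell
# Stop the currently running inference server, if any.
inference server stop

# List leftover stopped containers created from the inference server image
# (image name assumed; adjust to the image you actually run).
docker ps -a --filter "ancestor=roboflow/roboflow-inference-server-gpu"

# Remove ALL stopped containers to reclaim disk space.
docker container prune -f
```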
@william-faria · 9 months ago
Hello from São Paulo, Brazil! Thank you for your help and time. I have a question: is it possible to train this model on another language, like Brazilian Portuguese? If yes, how can I do that?
@유영재-c9c · 6 months ago
Do I have to use AWS, or can I do it on my own server?
@Roboflow · 6 months ago
You can run it on your own server!
@mohamednayeem2602 · 7 months ago
Can you make a video on how to fine-tune CogVLM?
@akhileshsharma5067 · 9 months ago
@Roboflow I made a project in Roboflow and annotated 300 images, but I only want to use 100 images for dataset generation. How do I do that? There is no option to select the number of images for dataset generation.
@slider0507 · 9 months ago
How much did this cost on aws? 🤔
@Roboflow · 9 months ago
It is around $0.50 per hour.
@gexahedrop8923 · 9 months ago
Is it possible to run it on a T4 with the transformers library?
@Designer598 · 9 months ago
I am a professional thumbnail designer.
@Roboflow · 9 months ago
Please reach out to me on Twitter: twitter.com/skalskip92
@Designer598 · 9 months ago
@@Roboflow Please send your email address.
@eliaweiss1 · 9 months ago
Amazon Machine Image (AMI) Deep Learning OSS Nvidia Driver AMI GPU PyTorch 2.0.1 (Amazon Linux 2) 20231219