How do you use on-device Generative AI for your Android app? Let us know below!
@VICTORdiula · 1 day ago
Does it work on the Samsung Galaxy A15? I commented before you did
@byunghwara · 22 hours ago
Will older devices with much less capable hardware be supported in the future?
@LivingLinux · 19 hours ago
@byunghwara With older hardware, you're probably better off running it remotely. LLMs need a lot of memory and processing power. I've tested text-to-image generation with OnnxStream and Stable Diffusion, and that can run with only 512 MB, but I haven't seen anything similar for LLMs. Unless you can find a very small LLM, but then it will be limited in functionality.
@byunghwara · 18 hours ago
@LivingLinux Thanks for the explanation! Very helpful!
@ragingFlameCreations · 12 hours ago
Google is always doing the Lord's work. Thank you!
@liujuncn · 13 hours ago
How does on-device inference affect mobile phone battery consumption and usage time?
@ptruiz_google · 11 hours ago
Mobile is one giant game of functionality vs. battery for everything, so this isn't any different :) I haven't really played with Gemini Nano yet, but I can give you an answer for MediaPipe: depending on the model and its processing needs, you'll see more or less battery drain. On my Pixel 8 Pro I see less than 1% battery drop per query with Gemma 2B.

Right now I think we're still in the exciting, very early "we've made this work!" stage, and next come multiple rounds of improvement where things work better and more efficiently (especially on battery usage). By the time everything works really well, it'll be less exciting, because people will have had LLMs in their apps for a while and it'll be old news. But that's just the cycle of introducing new tech that we (developers) have always seen.
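Taking the roughly-1%-per-query Pixel 8 Pro figure above as a flat, illustrative constant (it's an anecdotal measurement, not a spec), a quick back-of-envelope sketch of how many on-device queries a given battery budget buys:

```python
# Back-of-envelope sketch: how many on-device LLM queries fit in a battery budget,
# assuming a flat per-query drain. The ~1%/query figure is anecdotal, not guaranteed.
def queries_per_budget(budget_pct: float, drain_per_query_pct: float = 1.0) -> int:
    """Whole queries affordable within budget_pct of a full charge."""
    if drain_per_query_pct <= 0:
        raise ValueError("drain per query must be positive")
    return int(budget_pct // drain_per_query_pct)

# Spending a quarter of the battery at ~1% per query:
print(queries_per_budget(25.0))         # → 25
# A hypothetical more efficient model at 0.25% per query:
print(queries_per_budget(25.0, 0.25))   # → 100
```

Nothing deep here, but it makes the tradeoff concrete: at ~1% per query, a user could run dozens of queries before the drain becomes the dominant battery cost.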