Sir, which do you think would be the best AI prompt model for test automation, comparing now and the future? Which one should I focus on? Or should I just wait?
@musibulkhan2015 · 1 day ago
Thank you so much, you have fulfilled my request.
@ExecuteAutomation · 1 day ago
@@musibulkhan2015 I remember this was one of the pending requests you told me about, and I was working to complete it as fast as possible.
@bhaibhai-qe8tt · 9 hours ago
How do you give the username and password? Is there a safe way to do it in browser-use?
@ExecuteAutomation · 5 hours ago
@@bhaibhai-qe8tt You can pass it to the prompt, but the password should be stored in a .env file or an environment variable; you can read it from there and pass it along with your prompt.
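The approach described in that reply can be sketched like this. This is a minimal illustration, not the browser-use API itself: the variable names `APP_USERNAME`/`APP_PASSWORD` and the helper `build_task_prompt` are hypothetical, chosen just to show reading credentials from the environment instead of hard-coding them.

```python
import os

def build_task_prompt(base_task: str) -> str:
    # Credentials come from environment variables (or a .env file
    # loaded beforehand), so they are never hard-coded in the script.
    username = os.environ.get("APP_USERNAME", "")
    password = os.environ.get("APP_PASSWORD", "")
    return (
        f"{base_task}\n"
        f"Log in with username '{username}' and password '{password}'."
    )

# Example usage with dummy values set in-process for the demo:
os.environ["APP_USERNAME"] = "demo_user"
os.environ["APP_PASSWORD"] = "s3cret"
prompt = build_task_prompt("Open the login page and sign in.")
```

In a real setup you would export the variables in your shell (or keep them in a `.env` file excluded from version control) rather than setting them in the script.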
@KnownForBadDeeds · 1 day ago
DeepSeek-R1 is significantly slow. 😢 I am using the 14B model on a laptop. *_I can't understand why there is so much hype. 😢_* *_The world is chasing money through hype, not building AI._* *_Create hype, get investors. Done!_*
@shreyshah_ · 1 day ago
Inference speed depends on your hardware. I get 200 tokens/second locally with a 32B parameter model. It's blazing fast if you have a powerful enough GPU. Try an Alienware gaming laptop with at least a 4090 with 32 GB VRAM for a 14B model, 4-bit quantized. Running even a 7B model unquantized locally is a challenge.
@KnownForBadDeeds · 1 day ago
@shreyshah_ I am using an Apple M3 Max with 48GB memory (it's my office laptop). I know my system is not the best, but I am only running the 14B parameter version of deepseek-r1. 😞
@ExecuteAutomation · 1 day ago
@@KnownForBadDeeds Our local machines are not built to run these models; they are far too complex and compute-intensive. The hype is real, my friend.