Really great insights. The economics are well explained.
@tensorops · 8 months ago
Thank you!
@mohamedfouad1309 · 9 months ago
😊
@loopaal · 7 months ago
fantastic
@tensorops · 7 months ago
Thank you so much 😀
@billykotsos4642 · 8 months ago
Being handed a bill based on tokens generated by a model is preposterous... These LLM apps cost so much right now that you need a solid use case in mind... otherwise, just wait a couple more years until running inference on these LLMs won't be as expensive... the only reason these LLMs are so expensive to run is that they are SOTA and Nvidia is the only player right now.
@billykotsos4642 · 8 months ago
The economics are broken because the hardware setup just isn't there... instead of paying by the hour, you pay by the token/call, which is insane... The cloud was built on the idea that you fire up an instance and you know what you pay... but these days you need huge cloud instances to run these huge models... The cost of running them will drop significantly in about three years... then you won't have to think about these things...
@lionhuang9209 · 9 months ago
Where can we download the slides?
@balainblue · 7 months ago
Can you explain the math of how 5 requests per minute translates to $9,000 per month?
@tensorops · 7 months ago
We recommend the calculator at gptforwork.com/tools/openai-chatgpt-api-pricing-calculator. Five requests per minute works out to roughly 220K requests per month (5 × 60 minutes × 24 hours × 30 days ≈ 216K), and with typical prompts of 1,000-2,000 tokens you can reach costs in that range. We also want to point out that a single request to an LLM application often triggers more than one API call to an LLM.
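To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The token counts and per-token prices are illustrative assumptions (roughly GPT-4-era list prices), not figures quoted in the webinar:

```python
# Back-of-the-envelope monthly cost estimate for an LLM API.
# All token counts and prices below are illustrative assumptions.

REQUESTS_PER_MINUTE = 5
MINUTES_PER_MONTH = 60 * 24 * 30              # 43,200

PROMPT_TOKENS_PER_REQUEST = 1_000             # assumed prompt size
COMPLETION_TOKENS_PER_REQUEST = 200           # assumed response size

PRICE_PER_1K_PROMPT_TOKENS = 0.03             # assumed $ per 1K prompt tokens
PRICE_PER_1K_COMPLETION_TOKENS = 0.06         # assumed $ per 1K completion tokens

requests_per_month = REQUESTS_PER_MINUTE * MINUTES_PER_MONTH   # 216,000

prompt_cost = (requests_per_month * PROMPT_TOKENS_PER_REQUEST
               / 1_000 * PRICE_PER_1K_PROMPT_TOKENS)           # $6,480
completion_cost = (requests_per_month * COMPLETION_TOKENS_PER_REQUEST
                   / 1_000 * PRICE_PER_1K_COMPLETION_TOKENS)   # $2,592

print(f"Requests/month:  {requests_per_month:,}")
print(f"Prompt cost:     ${prompt_cost:,.0f}")
print(f"Completion cost: ${completion_cost:,.0f}")
print(f"Total:           ${prompt_cost + completion_cost:,.0f}")  # ~$9,072
```

With these assumptions the total lands right around $9,000/month; halving the prompt size or switching to a cheaper model scales the result proportionally.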
@balainblue · 7 months ago
@tensorops Thank you so much.
@balainblue · 7 months ago
@tensorops Can you please elaborate on that? "A single request to an LLM application often triggers more than one API call to an LLM."
@tensorops · 7 months ago
@balainblue We give an example in the next webinar where a single query triggers many LLM calls. Even simple chains like Map-Reduce or Refine can fire off many LLM calls to OpenAI for an action as simple as "summarization".
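To illustrate the pattern, here is a minimal Map-Reduce summarization sketch; call_llm is a hypothetical stand-in for a real API client, and every invocation of it would be one billable request:

```python
# Minimal Map-Reduce summarization sketch.
# call_llm is a hypothetical stand-in for a real chat-completions client.

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call your LLM provider here.
    return f"<summary of {len(prompt)}-char prompt>"

def map_reduce_summarize(document: str, chunk_size: int = 2_000) -> str:
    # Split the document into chunks that fit the model's context window.
    chunks = [document[i:i + chunk_size]
              for i in range(0, len(document), chunk_size)]

    # "Map" step: one API call per chunk.
    partials = [call_llm(f"Summarize this passage:\n{c}") for c in chunks]

    # "Reduce" step: one final API call to merge the partial summaries.
    return call_llm("Combine these summaries into one:\n" + "\n".join(partials))

# A 20,000-character document => 10 map calls + 1 reduce call,
# i.e. 11 LLM calls for a single "summarize" request to the app.
```

So one user-facing "summarize this document" request can fan out into a dozen billable LLM calls, which is why cost estimates that assume one API call per request tend to undercount.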