So the prompt needs to be sent to Claude, which means the result of your query will for sure be sent to the cloud to Claude so it can give you a reliable answer. The good news is that this new protocol is open, so in the near future you will definitely be able to use this feature with local models.
@JoeGlines-Automator · 1 month ago
I disagree. The queries are run locally (not in the cloud)
@edzynda · 1 month ago
@JoeGlines-Automator The queries, yes, but the result is then sent as part of the prompt for the LLM to evaluate, so by definition it gets sent to Claude.
@JoeGlines-Automator · 1 month ago
@edzynda I still disagree.
@edzynda · 1 month ago
@JoeGlines-Automator How else is Claude supposed to know what the server on your machine responded with?
@JoeGlines-Automator · 1 month ago
@edzynda What makes you think Claude needs to know what is on the server? In each case we had Claude create a query, and Claude expects a list returned in a specific format. At that point there is no need for AI; the client just needs to display the results, which is easily done.
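For context, both readings are possible depending on the client. In the standard tool-use loop the query runs locally, but the client then makes a second API call that hands the rows back to the model so it can phrase an answer; a client could equally stop after the local query and render the rows itself, which is where the two comments above diverge. A minimal sketch of that standard loop with the Anthropic Python SDK (the run_query stub, model name, and tool schema here are illustrative stand-ins, not the actual client's code):

```python
# Sketch of the standard tool-use round trip, not the actual MCP client.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

TOOLS = [{
    "name": "run_query",
    "description": "Run a SQL query against the user's local database.",
    "input_schema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
}]

def run_query(sql: str) -> str:
    # Stand-in for the local server: the query itself executes on this
    # machine and never goes to the cloud.
    return "id | name\n1  | widget\n2  | gadget"

messages = [{"role": "user", "content": "List the items in my table."}]

# Round trip 1: Claude writes the query and asks the client to run the tool.
first = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model name
    max_tokens=1024,
    tools=TOOLS,
    messages=messages,
)
tool_use = next(b for b in first.content if b.type == "tool_use")
rows = run_query(tool_use.input["sql"])  # executed locally

# Round trip 2: the client sends the rows back as a tool_result block so
# Claude can phrase a natural-language answer. A client could stop before
# this call and simply display `rows` itself.
messages.append({"role": "assistant", "content": first.content})
messages.append({"role": "user", "content": [{
    "type": "tool_result",
    "tool_use_id": tool_use.id,
    "content": rows,
}]})
final = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    tools=TOOLS,
    messages=messages,
)
print(final.content[0].text)
```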