If you found this content helpful, please consider sharing it with others who might benefit. Your support is greatly appreciated :)
@kamranalisyed5553 (4 months ago)
Interesting. Thanks for the share.
@Username56291 (4 months ago)
Thanks
@Jeganbaskaran (4 months ago)
One small clarification: can the invisible prompt injection check be done in one shot before we load documents into the vector DB, or does it have to run on every response? How should the prompt check/evaluation be introduced into every Q&A turn, and will it impact the latency of the response to the user?
@SridharKumarKannam (4 months ago)
1. We do the check before ingesting the docs into the vector store. 2. We also need to do the same during QA, since user queries may contain prompt injection too. 3. Yes, it will impact latency. 4. If we trust that users won't misuse the system, we do only (1), and then latency is not an issue.
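Step (1) above, the ingestion-time check, can be sketched as a simple scan for invisible Unicode characters, one common carrier for hidden prompt injection. This is a minimal illustration under my own assumptions, not a complete defense: the character ranges and function names are invented for the example, and a production pipeline would combine this with semantic injection detection.

```python
# Minimal sketch: flag and strip invisible Unicode characters that are
# often used to smuggle hidden instructions into documents, run once
# before vector-store ingestion and again on each user query.

INVISIBLE_RANGES = [
    (0xE0000, 0xE007F),  # Unicode "Tags" block, used for hidden-text smuggling
    (0x200B, 0x200F),    # zero-width spaces and bidi marks
]

def contains_invisible_text(text: str) -> bool:
    """Return True if the text contains characters from suspicious ranges."""
    return any(
        lo <= ord(ch) <= hi
        for ch in text
        for lo, hi in INVISIBLE_RANGES
    )

def sanitize(text: str) -> str:
    """Drop suspicious invisible characters before ingestion or QA."""
    return "".join(
        ch for ch in text
        if not any(lo <= ord(ch) <= hi for lo, hi in INVISIBLE_RANGES)
    )

# Hidden 'AB' encoded in the Tags block, invisible when rendered:
doc = "Normal text" + chr(0xE0041) + chr(0xE0042)
assert contains_invisible_text(doc)
assert sanitize(doc) == "Normal text"
```

Running `sanitize` at ingestion keeps the per-query cost down to scanning only the (short) user query, which addresses the latency concern in point (3).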
@seththunder2077 (4 months ago)
Can you make a video on using the Presidio and Faker libraries with LangChain in a RAG application using LCEL? LangChain has an integration for this, but when I looked at the Faker library, I couldn't find a way to create my own custom faker for my use case, such as patient records.
@SridharKumarKannam (4 months ago)
Check this: kzbin.info/www/bejne/b5SunIaIhquWe5Y