Hi - We've built our system to mitigate hallucination, but as with all LLM-based tools, there is no effective way to completely prevent the model's general knowledge from bleeding into response generation. However, every response includes the source documentation/content the information was drawn from, so the answer can be validated against it, which provides additional governance and trust.
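
In case it helps to picture the pattern, here is a minimal, hypothetical sketch (not our actual API; the names `SourceRef`, `ResponseWithSources`, and `answer_with_sources` are made up for illustration) of returning an answer together with the passages it was grounded on, so the claim can be checked against its source:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SourceRef:
    document: str   # e.g. a document title or path
    excerpt: str    # the passage the answer was grounded on

@dataclass
class ResponseWithSources:
    answer: str
    sources: List[SourceRef]

def answer_with_sources(
    question: str,
    retrieved: List[SourceRef],
    llm: Callable[[str], str],
) -> ResponseWithSources:
    """Build a prompt constrained to the retrieved passages and return
    the model's answer alongside the passages it was shown."""
    context = "\n\n".join(
        f"[{i + 1}] {s.document}: {s.excerpt}" for i, s in enumerate(retrieved)
    )
    prompt = (
        "Answer using only the numbered sources below. "
        "If the sources do not contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return ResponseWithSources(answer=llm(prompt), sources=retrieved)
```

The point of the structure is simply that the sources travel with the answer, so whoever consumes the response can verify it rather than trusting the model's output on its own.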