At 39:30 the dangers of ChatGPT's delusions and hallucinations are discussed. ChatGPT and other chatbots built on language models are not the best tools for information extraction unless their response-generation parameters can be changed (and often there is no such option). For a chat tool the model needs to be fairly creative, whereas for a task like information extraction its creativity, typically controlled by the temperature parameter, should be zero. It is therefore better to work with the model directly (via the API) rather than through the chat interface, so that the LLM's parameters can be set. This will not prevent hallucinations, but it may reduce them. Providing examples in the prompt (few-shot learning) also helps. Of course, the LLM's output always has to be treated with caution and checked anyway.

When using LLMs to extract information, I have noticed their tendency to over-interpret the text they are analysing. Sometimes this is helpful: when the text suggests that someone attended an event, such as the coronation of a king, the model infers that they were present in the place where the coronation took place. But sometimes this leads the model to false conclusions.
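As a rough sketch of what working "directly with the model" can look like, here is a minimal few-shot extraction call with the temperature set to zero, assuming the OpenAI Python SDK; the model name, the JSON schema and the example sentences are illustrative placeholders, not part of the original discussion.

```python
# Minimal sketch: low-temperature, few-shot information extraction
# via the OpenAI Python SDK. Model name and schema are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Few-shot examples showing the exact output format we expect.
FEW_SHOT = [
    {"role": "user",
     "content": "Text: 'Jan Kowalski attended the coronation in Kraków in 1320.'"},
    {"role": "assistant",
     "content": '{"person": "Jan Kowalski", "event": "coronation", "place": "Kraków", "year": 1320}'},
    {"role": "user",
     "content": "Text: 'The letter was written in Gdańsk.'"},
    {"role": "assistant",
     "content": '{"person": null, "event": null, "place": "Gdańsk", "year": null}'},
]

def extract(text: str) -> str:
    """Extract person/event/place/year as JSON, with creativity minimised."""
    messages = [
        {"role": "system", "content": (
            "Extract person, event, place and year from the text as JSON. "
            "Use null for anything the text does not state explicitly; do not guess."
        )},
        *FEW_SHOT,
        {"role": "user", "content": f"Text: '{text}'"},
    ]
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=messages,
        temperature=0,         # zero "creativity" for extraction tasks
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(extract("Anna Nowak witnessed the coronation of the king in Gniezno."))
```

Even with temperature at zero and explicit instructions not to guess, the output still needs to be checked: the model may, as described below, infer a person's location from their attendance at an event rather than from an explicit statement in the text.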