Neural-network-driven applications like ChatGPT suffer from hallucinations, in which they confidently provide inaccurate information. A fundamental reason for this inaccuracy is the lack of robust measures applied to the underlying neural network predictions. In this tutorial, we identify and expound on three human-centric robustness measures, namely explainability, uncertainty, and intervenability, with which every decision made by a neural network must be equipped and evaluated. Explainability and uncertainty are accompanied by large bodies of literature analyzing such decisions. Intervenability, on the other hand, has gained recent prominence due to its inclusion in the GDPR regulations and a surge in prompting-based neural network architectures. In this tutorial, we connect all three fields using inference-based reliability assessment techniques to motivate robust image interpretation.
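As one concrete illustration of how a prediction can be equipped with one of these measures, the sketch below pairs a classifier's output with an uncertainty estimate via Monte Carlo dropout in PyTorch. This is a minimal sketch under stated assumptions, not the tutorial's own method: the SmallClassifier architecture, the dropout rate, the number of stochastic passes, and the predict_with_uncertainty helper are all hypothetical choices made only for illustration.

```python
# Illustrative sketch (not from the tutorial): equip a prediction with
# an uncertainty estimate using Monte Carlo dropout. All names and
# hyperparameters here are assumptions chosen for demonstration.
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    def __init__(self, in_dim=784, hidden=128, classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p=0.5),  # kept active at inference for MC sampling
            nn.Linear(hidden, classes),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, n_samples=20):
    """Return the mean softmax prediction and its per-class standard
    deviation over n_samples stochastic forward passes."""
    model.train()  # keep dropout active so each pass differs
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

model = SmallClassifier()
x = torch.randn(1, 784)
mean_probs, std_probs = predict_with_uncertainty(model, x)
top_class = mean_probs.argmax(dim=-1)
print("prediction:", top_class.item())
print("uncertainty (std of top class):", std_probs[0, top_class].item())
```

A high standard deviation on the predicted class is one simple signal that the decision should be flagged for explanation or human intervention rather than trusted outright; the tutorial's own assessment techniques are presented in the slides.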
For the slides, please visit the following website:
alregib.ece.ga...