ICME'24 Tutorial on Robust Image Understanding: Explainability, Uncertainty, and Intervenability

OLIVES at GATECH

Neural-network-driven applications like ChatGPT suffer from hallucinations, confidently providing inaccurate information. A fundamental reason for this inaccuracy is the lack of robustness measures applied to the underlying neural network predictions. In this tutorial, we identify and expound on three human-centric robustness measures, namely explainability, uncertainty, and intervenability, with which every decision made by a neural network should be equipped and evaluated. Explainability and uncertainty are accompanied by large bodies of literature that analyze decisions. Intervenability, on the other hand, has gained recent prominence due to its inclusion in the GDPR regulations and a surge in prompting-based neural network architectures. In this tutorial, we connect all three fields using inference-based reliability assessment techniques to motivate robust image interpretation.
For the slides, please visit the following website:
alregib.ece.ga...
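As a concrete illustration of the uncertainty measure named above, one widely used proxy (a minimal sketch, not taken from the tutorial slides) is the entropy of a classifier's softmax output: peaked predictions yield low entropy, flat predictions yield high entropy. The function names below are hypothetical.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(logits):
    """Entropy (in nats) of the predicted class distribution.
    Higher values flag less confident predictions."""
    p = softmax(logits)
    return -np.sum(p * np.log(p + 1e-12), axis=-1)

confident = np.array([8.0, 0.1, 0.1])  # peaked logits -> low entropy
uncertain = np.array([1.0, 1.0, 1.0])  # flat logits -> high entropy (~ln 3)
print(predictive_entropy(confident) < predictive_entropy(uncertain))  # True
```

Entropy is only one of several uncertainty estimates (others include MC-dropout variance and deep ensembles); it captures confidence in the predicted label but cannot distinguish data noise from model ignorance.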
