Talk abstract:
Intelligence augmentation through mixed-initiative systems promises to combine AI's efficiency with humans' effectiveness. This can be facilitated through co-adaptive visual interfaces. This talk will outline the need for human-AI collaborative decision-making and problem-solving. I will illustrate how customized visual interfaces can enable interaction with machine learning models to promote their understanding, diagnosis, and refinement. In particular, I will showcase various workflow designs tailored for computational linguistics analysis. The talk will conclude with reflections on current challenges and future research opportunities.
Bio:
Menna El-Assady is an Assistant Professor in the Department of Computer Science at ETH Zurich, where she heads the Interactive Visualization and Intelligence Augmentation (IVIA) Lab. Prior to that, she was a research fellow at the ETH AI Center, and before that a research associate in the Data Analysis and Visualization group at the University of Konstanz (Germany) and in the Visualization for Information Analysis lab at Ontario Tech University (Canada). She works at the intersection of data analysis, visualization, computational linguistics, and explainable artificial intelligence. Her main research interest is studying interactive human-AI collaboration interfaces for effective problem-solving and decision-making. In particular, she is interested in empowering humans by teaming them up with AI agents in co-adaptive processes. Her work over several years in close collaboration with political science and linguistics scholars led to the development of the LingVis.io platform. El-Assady has co-founded and co-organized several workshop series, notably Vis4DH and VISxAI.
Website: el-assady.com/