With the rapid pace of advancement in AI, organizations are often playing catch-up when determining how to ensure their products do not cause negative impacts or harm. The rise of generative AI typifies both these harms and the potential benefits of AI technology. Instead of reacting to increasingly frequent technology launches, organizations can strengthen their processes through risk management.
The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) provides organizations with a guiding structure to operate within, and outcomes to aspire toward, based on their specific contexts, use cases, and skill sets. The rights-affirming framework operationalizes AI system trustworthiness within a culture of responsible AI practice and use.
Our speaker this month is Reva Schwartz. Reva is a research scientist in the Information Technology Laboratory (ITL) at NIST, a member of the NIST AI RMF team, and Principal Investigator on Bias in Artificial Intelligence for NIST's Trustworthy and Responsible AI program.
Her research focuses on evaluating AI system trustworthiness, studying AI system impacts, and driving an understanding of socio-technical systems within computational environments. She has advised federal agencies about how experts interact with automation to make sense of information in high-stakes settings.
Reva's background is in linguistics and experimental phonetics. Having been a forensic scientist for more than a decade, she has seen the risks of automated systems up close. She advocates for interdisciplinary perspectives and brings contextual awareness into AI system design protocols.
Thanks!
Justin Grammens