A UCL Laws lecture recording from 25 April 2024.
Speakers: Prof. Margot Kaminski (University of Colorado Law School), Associate Prof. Michael Veale (UCL Laws) and Assistant Prof. Jennifer Cobbe (University of Cambridge).
Chair: Andrew Strait (Ada Lovelace Institute).
Recent years have seen a surge in regulation targeting algorithmic systems, including online platforms (Online Safety Act [UK], Digital Services Act [EU]), artificial intelligence (AI Act [EU], AI Executive Order [US]), and the application and extension of existing frameworks, such as data protection, to algorithmic challenges (UK and EU GDPR, California Consumer Privacy Act and Draft Automated Decisionmaking Technology Regulations [US]). Much of the time, these instruments require regulated actors to undertake or outsource some form of assessment, such as a risk assessment, impact assessment or conformity assessment, to ensure that the systems being deployed have desired characteristics. At first glance, all of these assessments look like the same regulatory mode, but are they? What are policymakers and regulators actually doing when they outsource the analysis of such systems to other actors or to audit ecosystems, and under what conditions might this produce good regulatory results? Is the AI Act's conformity assessment really the same kind of beast as the Digital Services Act's or Online Safety Act's risk assessment, or the GDPR's data protection impact assessment? Is this simply kicking value-laden issues, such as fairness, transparency, representativeness or speech norms, down to other actors because legislators do not want to decide them?
In this discussion, three scholars of these regimes will compare and contrast different regulatory approaches to AI, with a focus on how actors within them can understand the systems around them. Does outsourcing the analysis of how AI systems work make sense? Is that analysis entrusted to actors with the position and analytic capacity to carry it out, or might it lead to regulatory arbitrage or even regulatory failure?