Large language models like ChatGPT have popularised and revolutionised AI in the public consciousness and opened new opportunities for innovation, but they also raise serious issues: embedded bias and stereotyping, indiscriminate scraping of text and images to build training sets, a tendency to hallucinate, and the threat of "fake news on steroids". In this fireside chat, Lilian Edwards (Turing Fellow) and Adrian Weller (Director of Research in Machine Learning, University of Cambridge, and Programme Director for Safe and Ethical AI, The Alan Turing Institute) discuss the regulatory questions surrounding these technologies, including how effective the draft EU AI Act might be, as well as how existing laws such as copyright and data protection apply. One little-examined issue is whether (and how) these models are self-regulated through their own terms of service and privacy policies. There is an urgent need to work out how to promote safe, ethical and responsible use of these technologies.
(ps. ChatGPT co-wrote this text)
Find out all about AI UK here: ai-uk.turing.ac.uk/
To keep up with the latest AI UK releases and stay in the loop on the next AI UK, follow The Alan Turing Institute on:
Twitter: / turinginst
LinkedIn: / the-alan-turing-institute
Instagram: / theturinginst