Cambridge awarded €1.9m to stop AI undermining ‘core human values’

Work at the Leverhulme Centre for the Future of Intelligence will aim to prevent the embedding of existing inequalities – from gender to class and race – in emerging technologies.

Artificial intelligence is transforming society as algorithms increasingly dictate access to jobs, insurance, justice and medical treatment, and shape our daily interactions with friends and family.

As these technologies race ahead, we are starting to see unintended social consequences: algorithms that promote everything from racial bias in healthcare to misinformation that erodes faith in democracies.

Researchers at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) have now been awarded nearly €2 million to build a better understanding of how AI can undermine “core human values”.

The grant will allow LCFI and its partners to work with the AI industry to develop anti-discriminatory design principles that put ethics at the heart of technological progress.

The LCFI team will create toolkits and training for AI developers to prevent existing structural inequalities – from gender to class and race – from becoming embedded in emerging technologies and sending such social injustices into hyperdrive.

The donation, from German philanthropic foundation Stiftung Mercator, is part of a package of close to €4 million that will see the Cambridge team – including social scientists and philosophers as well as technology designers – working with the University of Bonn.

The new research project, “Desirable Digitalisation: Rethinking AI for Just and Sustainable Futures”, comes as the European Commission negotiates its Artificial Intelligence Act, which has ambitions to ensure AI becomes more “trustworthy” and “human-centric”. The Act will require AI systems to be assessed for their impact on fundamental rights and values.

“There is a huge knowledge gap,” said Dr Stephen Cave, Director of LCFI. “No one currently knows what the impact of these new systems will be on core values, from democratic rights to the rights of minorities, or what measures will help address such threats.”

“Understanding the potential impact of algorithms on human dignity will mean going beyond the code and drawing on lessons from history and political science,” Cave said.

LCFI made the headlines last year when it launched the world’s only Master’s programme dedicated to teaching AI ethics to industry professionals. This grant will allow it to develop new research strands, such as investigations of human dignity in the “digital age”. “AI technologies are leaving the door open for dangerous and long-discredited pseudoscience,” said Cave.

He points to facial recognition software that claims to identify “criminal faces”, arguing such assertions are akin to Victorian ideas of phrenology – that a person’s character could be detected by skull shape – and associated scientific racism.

Dr Kanta Dihal, who will co-lead the project, is to investigate whose voices actually shape society’s visions of a future with AI. “Currently our ideas of AI around the world are conjured by Hollywood and a small rich elite,” she said.

The LCFI team will include Cambridge researchers Dr Kerry Mackereth and Dr Eleanor Drage, co-hosts of the podcast “The Good Robot”, which explores whether we can have ‘good’ technology and why feminism matters in the tech space.

Mackereth will be working on a project that explores the relationship between anti-Asian racism and AI, while Drage will be looking at the use of AI for recruitment and workforce management.

“AI tools are going to revolutionise hiring and shape the future of work in the 21st century. Now that millions of workers are exposed to these tools, we need to make sure that they do justice to each candidate, and don’t perpetuate the racist pseudoscience of 19th-century hiring practices,” said Drage.

“It’s great that governments are now taking action to ensure AI is developed responsibly,” said Cave. “But legislation won’t mean much unless we really understand how these technologies are impacting on fundamental human rights and values.”
