Transparency and accountability vital for just AI development

AI is developing at breakneck speed, and the potential benefits to society are huge. But there are also negative aspects that must be addressed to ensure ethical and socially just use of AI. Transparency and accountability are absolutely vital, according to the researchers involved in the project, which is headed by Panagiotis Papapetrou.

Artificial intelligence – AI – is not a new concept, but in recent years the technology has developed at breakneck speed. Once a strange phenomenon from the realms of science fiction, AI tools are now part of daily life. And all the indications are that this trend will continue.

AI is expected to revolutionize some sectors of society, and has the potential to achieve many positive changes. In health care, AI assistants can provide doctors and nurses with data for more reliable diagnoses and better use of limited medical resources. In schools, there are AI tools that can tailor exercises to meet the needs and learning goals of each student. Teachers can also receive help with routine tasks such as course administration, student assessment, syllabus development and supervision. In business, AI can optimize investment strategies, reduce financial risks and automate processes such as recruitment and resource allocation. In police work, AI can help to identify criminal or fraudulent activities and support automated surveillance.

From a broader perspective, AI is expected to facilitate scientific discoveries and help solve global challenges. It seems highly likely that artificial intelligence has a vital role to play in shaping our future, and that we will see positive transformative impacts over the next few years.

But dark clouds also loom. As AI tools are put to use, they are meeting a growing chorus of warnings about their impact. Fake images, discriminatory recruitment and inaccurate patient assessments are just some of the negative experiences that have been reported. They often occur because the data used to train the AI models are distorted and misleading.

Recent experience has shown how important and urgent it is to examine the major social issues involved in AI development. Steps must be taken to ensure that systems are developed and distributed in a socially just manner. Transparency and accountability for algorithms are vital for ethical and just use of AI technology.

This constitutes a critical and multifaceted research challenge. The work requires a multidisciplinary approach combining technical expertise, ethical considerations and a nuanced understanding of the social and cultural contexts in which AI systems are used.

To achieve AI-assisted decision making based on justice and accountability, the researchers will be examining:

  • how to ensure just AI by design
  • how to ensure transparent and responsible AI-based decision making
  • how to assess the impact of the project’s solutions in areas such as health care and education.

Project:
“AI for society: Towards socially just algorithmic decision making.”

Principal investigator:
Professor Panagiotis Papapetrou

Co-investigators:
Sindri Magnússon
Teresa Cerratto-Pargman
Stanley Greenstein

Institution:
Stockholm University

Grant:
SEK 6.4 million