

WASP-HS: research into AI, technology, humanity and society
Artificial intelligence (AI) and autonomous systems are affecting society and people at all levels. The WASP-HS research program explores the development of these new technologies and their consequences for our behaviors and for society in general.
“AI is not the good, the bad, the ugly – it’s all three,” says Kerstin Sahlin, who has chaired the program since its inception in 2019.
Christofer Edling, sociology professor and WASP-HS Program Director, adds: “These new technologies are very much what we make of them.”
Progress in the field has been rapid since the Marianne and Marcus Wallenberg Foundation and the Marcus and Amalia Wallenberg Foundation decided to make a joint investment in the nationwide WASP-HS program. Since then, AI has jumped ahead a generation: ChatGPT heralded the advent of the fourth generation of AI, often referred to as generative AI or the GPT-4 era.
“Today it’s obvious why the research conducted by the program is necessary, the interesting thing is that the Foundation identified this need at such an early stage,” says Edling.
When the Knut and Alice Wallenberg Foundation decided to launch the WASP program, which funds basic research in autonomous systems and AI, there were also discussions about the importance of investigating the human and social perspectives of the approaching technological shift.
Edling explains that WASP-HS addresses the big questions the technology raises, such as how it impacts society and our capacity to build a good society, and indeed what it is to be human. What is and will be our role and purpose? Essentially, what becomes of us when technology has the potential for autonomy and intelligence?
“AI is a fantastic tool, but there are also risks associated with it. The really big questions relate to humans and society, as is so often the case with major technological changes in history,” notes Edling.

Research themes
The research program covers issues such as ethics and responsibility, democracy, communication, human-machine interaction, economics, and knowledge and learning.
“Relatively early in technological shifts like these, which we can still say we’re in, it’s important to have a grasp of what’s happening. It’s important to understand people, society and technology,” says Kerstin Sahlin, professor of business administration and an organizational researcher. She says that these issues were among the core aims of the program from the outset.
“It’s been important to build up our knowledge to understand both the technology and society. And because we have humanists, social scientists and data scientists on the program, we have a basis from which we can ask fundamental questions, for example about how society develops, how growth takes place and how decisions are made.”
Sahlin says that it is important to look at what has happened and to predict what is to come. Where can you use AI and where can you not use it?
“To understand AI, it’s important to know how decisions are made, how the labor market functions, how different laws and regulations work, and how previous technological shifts have affected the world. This requires serious research, not just a general discussion – although that’s also important.”
Not human or AI – both
Society is becoming increasingly data-driven. AI systems underpin language models such as ChatGPT and shape the responses we get from them. They determine whether we get a loan, a job or a place on a training course – and we can expect a growing number of decisions to be made by AI in the future. Furthermore, we interact with AI systems because the technology is built into many of the products and services we use on a daily basis.
“The interaction between society’s institutions and technology is a key issue,” notes Edling.
“In many settings in which the use of AI has progressed quite quickly and comprehensively, such as image recognition, translation and various decision-making support tools, you still can’t just hand everything over to AI. It’s not a question of humans or AI, but both. So, it’s especially important that people’s competence, attitudes and ways of working stay up to speed,” says Sahlin.
Building dialogues with multiple actors
Several of the roughly 250 researchers and doctoral students on the program collaborate with researchers on the WASP program and the Knut and Alice Wallenberg Foundation’s Data-Driven Life Science (DDLS) program. There is also a collaboration with the Massachusetts Institute of Technology (MIT) in the US.
A relatively large part of the program focuses on investigating how humans and machines interact.
“From a technical perspective, a problem may appear to have been solved, but then you give it to us humans and then often something else happens. This gets even more interesting with AI. What kind of relationship should we have with AI? Is it a co-worker or an appliance?” asks Edling.
Sahlin concludes by emphasizing the importance of humility when it comes to predicting the future.
“No one really knows where we’re going. This puts special demands on the research program; we’re often used to researching things that have happened, but here we’re doing a lot of research into what’s happening now. This requires more dialogue between different research areas and with society as a whole. We’re trying to develop this further through deeper dialogue and co-operation with business, the public sector and the general public.”
Text: Carina Dahlberg
Photo: Magnus Bergström
Translation: Nick Chipperfield
Illustration: AI-generated