The Challenge: The Risks of AI

The list of risks associated with AI technology is long and varied, ranging from discrimination and lack of transparency to the question of who is responsible. Yet innovation requires taking risks. How should we deal with this paradox? A discussion between TU Delft professor Inald Lagendijk and Netherlands Scientific Council for Government Policy (WRR) researcher and lawyer Monique Steijns.

Text: Bennie Mols

What AI applications do you use yourself and are you aware of the risks involved?

Steijns: “I use the recommender system of streaming services. Also, I obtained my driving licence not that long ago, and now I often have Google Maps on in the car. Last summer, Google Maps sent me and a whole load of other cars down a small road in order to avoid a traffic jam. I wondered what the people who lived on that road would have thought about that. I also use Google Translate a lot, mainly to hide the fact that my French is not that good from my family. This does make me wonder to what extent Google does or does not use words with a certain connotation and how that affects the translation. In the category of unintended side effects, I had an experience with the Photos app on my iPhone two years ago. While I was sitting having a meal with friends, the automatic Memories application suddenly decided to show me photos from the recent funeral of my father. I found that painful.”
Lagendijk: “I use pretty much the same collection of AI applications. With recommender systems, I am aware of the filter bubble effect and every now and then I consciously try to look outside that bubble. I also regularly use automatic speech-to-text conversion. What I find downright annoying are tailored ads for things that I have just ordered and which you then keep being shown for weeks on end. But now we are talking about personal use of AI. What a lot of people don’t realise is that 80 to 90 per cent of AI systems work in the background of important services that we all use. Think about the optimisation of the energy network or of logistics chains.”

‘In my opinion, there is a lot of focus on the negatives at the moment, when AI has so many benefits to offer’

Monique Steijns is a lawyer and is involved in the Artificial Intelligence and public values project as a scientific advisor to the Netherlands Scientific Council for Government Policy (WRR). In 2021, she published a working paper about AI. Steijns is on the supervisory committee Human-Centred AI/ELSA LABS and also chairs the Netherlands Committee of Jurists for Human Rights.

Professor Inald Lagendijk is professor of computing-based society at TU Delft. He founded the Netherlands AI Coalition NL AIC and is, among other things, a member of the board of the national AiNed programme and CTO of the top national innovation team Dutch Digital Delta.

As far as you are concerned, are there any domains where AI should not be used a priori because the risks are too high?

Lagendijk: “I don’t think that you should rule out the use of AI in domains a priori but rather that you should look at each individual case. Is it necessary? What I struggle with is the fact that nowadays, it seems that everything has to contain AI. Why? Does it offer something extra? AI brings with it new problems such as reliability and reproducibility. These are often not discussed. I regard improving the reliability of AI as one of the main challenges over the next few years.”

Steijns: “In my opinion, the most important thing is for us to establish proper framework conditions. I worry about vulnerable groups in society. These are the people from whom a lot of data is often collected because they use social services, for example. We have now seen that they can be adversely affected by the application of AI systems. That is worrying. So I think the question is indeed whether we should want to use AI in certain domains.”

Lagendijk: “Over the past decade, AI development has been heavily technology-driven because that has been possible. There are other ways but we haven’t got them right yet. I am a strong advocate of bringing the social side and the technological side together. At TU Delft, we try to train technical AI specialists to think about social and ethical aspects too. Conversely, social and humanities scholars should be taught about AI technology too.”

Steijns: “We are having these conversations around the table now but in my opinion, this type of conversation is still not taking place frequently enough across society. In fact, it is very important to get people from different backgrounds around the table: not just AI technicians but also lawyers, ethicists and civil and social organisations. AI development should not be carried out in a closed environment.”

What is AI?

AI stands for artificial intelligence. As a science, AI is the study of machines that simulate human intelligence processes. This is a useful definition that deliberately sidesteps the difficult philosophical question as to what intelligence is. The scientific field of AI includes skills such as planning, reasoning, information retrieval, knowledge representation, language processing, image recognition and learning. In the latter aspect – learning – in particular, AI has constantly improved over the past decade.

‘Human rights must come first. You can innovate within that framework’

Lagendijk: “But that is not an easy conversation. For instance, the European AI Act states that biometric AI applications should not be used if they have a disproportionate effect. So what do we mean by ‘disproportionate effect’? Am I allowed to put a camera up in my little shop because I get robbed 10 times a year? Are we allowed to put cameras up at stations? What is stated in that AI Act is just the start of a conversation, not the end, even though it is sometimes presented that way.”

Monique, you carried out research for the WRR on how social organisations in the Netherlands can influence AI developments via what you call ‘countervailing power’. What is that?

Steijns: “I took my inspiration from recent developments in the US. In certain states, facial recognition has been banned in recent years. That was done under pressure from social justice organisations. On a smaller scale, residents have opposed the use of automatic camera systems in their apartment buildings, for example. I wondered what had been done in the Netherlands. I chose to look at three clusters: opposition and protests, like in the examples in the US; acting and thinking together, as we saw in the early days of the Internet, for example; and finally, confrontation and monitoring. All three clusters are forms of countervailing power.”

And what conclusions have you drawn from your research?

Steijns: “There have been very few instances of countervailing power in the Netherlands. It is in its early stage. There is also a big difference between the various social organisations. For the more traditional organisations, such as trade unions, AI hardly features at all whereas it gets much more attention from human rights organisations. Organisations that specialise in the digital domain certainly focus on it but tend to operate in isolation. In order to organise countervailing power better, organisations should collaborate more. For example, the knowledge that Bits of Freedom has would be extremely useful for the traditional organisations which have neither the capability nor funds to explore AI.”

Inald, how do you view that countervailing power?

Lagendijk: “I struggle with the term ‘countervailing power’. For me, it conjures up the image of saying ‘no’ instead of looking for ways to collaborate. I would like to bring in social parties to help come up with ideas. How can AI be useful for them?”

Steijns: “What I mean by the term ‘countervailing power’ is not actually ‘being against something’ but rather the system that provides checks and balances in society. That can include monitoring and confronting the party with the power over something but also providing guidance and contributing ideas, for example.”

Monique, in your WRR research you also drew a comparison between the development of the Internet and that of AI. What can we learn from that?

Steijns: “When the public Internet first emerged in the 90s, you saw a lot of ideas being contributed and adjustments from social organisations. I miss that appropriation of technology with AI. I think that’s because of the tech industry. Their AI applications are not very accessible.”

Lagendijk: “Indeed, with the Internet, De Digitale Stad by Marleen Stikker and others was set up quickly. Now the tech giants move so fast…” [De Digitale Stad (1994), an initiative of ‘Internet pioneer’ Marleen Stikker, aimed to create an Internet that was accessible to everyone. Public terminals were installed in Amsterdam so that anyone could surf the Internet, ed.]

Steijns: “The first participants in the Internet were both users and producers of the technology. Now, AI is mainly developed by big companies. Data is also mainly in their hands. That makes it difficult for social organisations to help shape AI. However, that is one of my recommendations: make sure that you involve relevant organisations in the early stages of development. Perhaps then we might have been able to avoid a court ruling like the one against introducing System Risk Indication (SyRI).”

[Infographic] TU Delft AI Initiative (TUDELFT.NL/AI): 8 faculties · 7 themes · 24 AI Labs · more than 1,400 researchers · 32 departments · all 28,000 students · 96 PhD candidates · 10 online courses for professionals · 25+ partners (companies, municipalities, universities, international)

Mondai, House of AI

This summer, Mondai, an accessible on-campus meeting place for scientists, professionals, alumni, students and interested members of the public, opened its doors. Science, education and innovation come together here through a programme of lectures, workshops, hackathons and other events. Collaboration between TU Delft, Erasmus University Rotterdam, Erasmus MC, Leiden University and LUMC also takes place here. For more information on this ‘House of AI’ and its agenda of activities, go to mondai.nl

Photo: The Delft AI Energy Lab is investigating how new, AI-based methods can enable the management of energy systems based on renewable energy sources.

How should we handle the dilemma between innovating on the one hand – and therefore taking risks – and limiting possible negative consequences of AI – and therefore limiting risks – on the other hand?

Lagendijk: “The medical world uses reliable procedures for developing drugs and treatments. We could do that for AI too. There are so many sectors where experimentation is called for: the automotive industry, the food industry, education… Of course, you shouldn’t just flood society with those experiments. Let’s carry out controlled experiments. But we must experiment and innovate!”

‘The Netherlands was almost the last country to develop a national AI strategy’

Steijns: “As far as I am concerned, proper control means putting fundamental rights first. However, people often want to see how far they can go, which brings the risk of going too far. So don’t. Human rights must come first. You can innovate within that framework.

Sometimes companies and authorities start using systems before they even know how they work or what the consequences are. Do we consider it OK for insurers to use AI to increase premiums or exclude people? We have not talked about this enough during the development of social media over the past 15 years, and look where we have ended up: with misinformation and filter bubbles.”

Lagendijk: “Traditionally, the Netherlands is pretty good at bringing the engineering and social domains together. You see that reflected in the national AI research agenda, for example. But we are very slow in following through. The Netherlands was almost the last country to develop a national AI strategy. The matters that we are currently discussing are hardly being touched on in politics.”

Steijns: “The problem is that there is very little knowledge of AI within our parliament. Within the WRR, we have published various reports on digitalisation and AI, including policy recommendations. My feeling is that the politicians’ response to these recommendations is to plan yet another new study. Decisions are being put off.”

Lagendijk: “We can use Germany as a model – AI is a top priority there.”

Finally, what do you think about the public perception of AI at the moment?

Lagendijk: “AI is a systems technology that affects a lot of economic and social processes, in both a positive and a negative sense. In my opinion, there is a lot of focus on the negatives at the moment, when AI has so many benefits to offer. Yes, there are problems, but let’s put them into perspective.”

Steijns: “I think it’s good that we don’t have a laissez-faire attitude towards AI and that we now talk about the risks a lot. It is this magnification of the risks that brings about the necessary discussion.”
