B. Technology in service of democracy and fundamental rights

"Because the computer says so" can never be an acceptable explanation for a govern­ment decision that affects citizens. The application of automated decision-making calls for checks and balances in order to protect human dignity and ensure good governance. The GDPR sets legal limits for the use of algorithms in decision-making. The general rule[1] is that governments or companies cannot assign decisions to compu­ters if such decisions could bring about significant disadvantages for citizens or consu­mers. In exceptional cases in which automated decision-making is allowed, the citi­zen or consumer has the right to obtain an explanation, to object, and to request that a new decision is taken by a person instead of a computer.

ICT systems must therefore make it possible for government professionals to overrule the algorithms on the basis of their own weighing of the data and interests involved.[2] An official must be able to say 'no' even if the algorithm says 'yes'.
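As a minimal sketch of what such an override could look like inside an ICT system, the record below keeps both the algorithm's recommendation and the official's decision, and always lets the official's decision prevail; the field names and case details are invented for illustration and are not drawn from any actual system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    algorithm_outcome: str                    # e.g. "reject benefit claim"
    algorithm_rationale: str                  # machine-generated reasoning
    official_outcome: Optional[str] = None    # set only when an official overrules
    official_rationale: Optional[str] = None

    def final_outcome(self) -> str:
        # The official's judgement always takes precedence over the algorithm;
        # both outcomes are stored, so the override remains auditable.
        return self.official_outcome or self.algorithm_outcome

# The algorithm says "yes" to rejection; the official says "no".
decision = Decision("reject claim", "income above threshold in 2 of 12 months")
decision.official_outcome = "approve claim"
decision.official_rationale = "income spikes were one-off back-payments"
print(decision.final_outcome())  # -> approve claim
```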

Governments need to demonstrate that their algorithms are fair. Automated decisions must be well reasoned so that the citizens concerned can verify them, all the more so because the rules for automated decisions are not always a seamless translation of the underlying laws and regulations. Governments should make the algorithms they use public, explain their decision-making rules, assumptions, and legal and data sources, and have the algorithms tested by independent experts, including ethicists. These tests must be repeated regularly, in particular for self-learning algorithms.[3] Among other things, this means ensuring that the algorithm does not develop a discriminatory bias against certain social groups.[4]
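One simple, widely used check of this kind compares outcome rates between social groups. The sketch below, with purely illustrative data and group labels, computes such a ratio; a real audit by independent experts would of course go much further than this single measure.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs, where approved is a bool."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, reference_group):
    """Ratio of each group's approval rate to that of the reference group.
    Ratios well below 1.0 flag a possible bias for reviewers to investigate;
    they are a starting point for scrutiny, not proof of discrimination."""
    rates = approval_rates(decisions)
    reference = rates[reference_group]
    return {g: rate / reference for g, rate in rates.items()}

# Illustrative data only: group A is approved 80% of the time, group B 50%.
sample = ([("A", True)] * 80 + [("A", False)] * 20 +
          [("B", True)] * 50 + [("B", False)] * 50)
print(disparate_impact(sample, reference_group="A"))  # {'A': 1.0, 'B': 0.625}
```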

The cities of Helsinki and Amsterdam have jointly developed a public register of algorithms. Such a register lists the algorithms that the municipality uses and explains their workings. Citizens are invited to give feedback, with the aim of building human-centred artificial intelligence.[5]
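By way of illustration only, an entry in such a register might hold metadata along the following lines; the field names and example algorithm are assumptions, not the actual schema or content of the Helsinki or Amsterdam registers.

```python
# Hypothetical register entry for a municipal algorithm.
register_entry = {
    "name": "Parking permit allocation",
    "purpose": "Rank applications for residential parking permits",
    "decision_rules": "First come, first served within per-zone quotas",
    "data_sources": ["population register", "vehicle registry"],
    "legal_basis": "Municipal parking ordinance",
    "human_oversight": "Refusals are reviewed by a permit officer",
    "feedback_contact": "algorithms@city.example",
    "last_independent_audit": "2024-03",
}
```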

Amsterdam is also developing a method to assess the algorithms used in the city, both by the municipality and by companies, for detrimental effects such as discrimination. One of the reasons for the assessment was an experiment with a self-learning algorithm that automatically handled complaints about neighbourhoods. Had the algorithm been put into service, neighbourhoods with well-educated citizens who know how to complain would have been cleaned better by the city's sanitation department than other neighbourhoods.[6]

Governments can better comply with their duty to state reasons if they include the right to explanation as a design requirement when the algorithm's code is written. Truly smart algorithms must be able to explain in understandable language how they arrived at an outcome. This facilitates human intervention in the decision-making process.[7]
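As a minimal sketch of explanation as a design requirement, the decision rule below records a human-readable reason for every condition it applies, so that the outcome can be explained to the citizen and, where necessary, overruled by an official; the benefit scheme, thresholds and rule names are invented for illustration.

```python
def assess_housing_benefit(income, rent, household_size):
    """Sketch of an 'explainable by design' decision rule: every condition
    that is applied records a reason a citizen and an official can read."""
    reasons = []

    income_limit = 30_000 + 5_000 * (household_size - 1)
    if income > income_limit:
        reasons.append(f"Income {income} exceeds the limit of {income_limit} "
                       f"for a household of {household_size} (rule: income ceiling).")
    if rent < 300:
        reasons.append(f"Rent {rent} is below the minimum of 300 to which "
                       "the benefit applies (rule: minimum rent).")

    eligible = not reasons
    if eligible:
        reasons.append("All conditions of the scheme are met.")
    return eligible, reasons

eligible, reasons = assess_housing_benefit(income=34_000, rent=650, household_size=1)
print("Eligible:", eligible)
for reason in reasons:
    print("-", reason)
```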

Further viewing

Podcast: The digital welfare state: secret algorithms assessing social benefit claims and detecting child abuse

Dossier: The Guardian, Automating Poverty - How algorithms punish the poor

This project is organised by the Green European Foundation with the support of Wetenschappelijk Bureau GroenLinks (NL), Green Economics Institute (UK), Institute for Active Citizenship (CZ), Etopia (BE), Cooperation and Development Network Eastern Europe and with the financial support of the European Parliament to the Green European Foundation.
