B. Technology in service of democracy and fundamental rights
11. Set limits to decision-making by algorithms and ensure human control. Have algorithms checked for discriminatory bias, and comply with duty to state reasons.
"Because the computer says so" can never be an acceptable explanation for a government decision that affects citizens. The application of automated decision-making calls for checks and balances in order to protect human dignity and ensure good governance. The GDPR sets legal limits on the use of algorithms in decision-making. The general rule is that governments and companies may not assign decisions to computers if those decisions could significantly disadvantage citizens or consumers. In the exceptional cases in which automated decision-making is allowed, the citizen or consumer has the right to obtain an explanation, to object, and to request that a new decision be taken by a person instead of a computer.
ICT systems must therefore make it possible for government professionals to overrule the algorithms based on their own considerations of data and interests. An official must be able to say 'no' even if the algorithm says 'yes'.
Governments need to demonstrate that their algorithms are fair. Automated decisions need to be well reasoned so that the citizens concerned can verify them, the more so because the rules for automated decisions are not always a seamless translation of the underlying laws and regulations. Governments should make the algorithms they use public; explain their decision-making rules, assumptions, and legal and data sources; and have the algorithms tested by independent experts, including ethicists. These tests must be repeated regularly, in particular for self-learning algorithms. This involves, among other things, ensuring that the algorithm does not develop a discriminatory bias against certain social groups.
Amsterdam is developing a method to assess the algorithms that are used in the city – both by the municipality and by companies – for detrimental effects such as discrimination. One of the reasons for the assessment was an experiment with a self-learning algorithm that automatically handled complaints about a neighbourhood. If the algorithm had been put into service, it would have led to a situation where neighbourhoods with well-educated citizens who know how to complain would have been better cleaned by the city’s sanitation department than other neighbourhoods.
Governments can better comply with their duty to state reasons if they include the right to explanation as a design requirement in the writing of the algorithm code. Truly smart algorithms must be able to explain in understandable language how they have arrived at an outcome. This facilitates human intervention in the decision-making process.
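What "the right to explanation as a design requirement" can mean in code is sketched below, with invented rules and thresholds for illustration only: every decision rule carries a plain-language explanation, and the engine returns the applied explanations together with the outcome, so the duty to state reasons is satisfied by construction.

```python
# Illustrative rule table for a fictitious benefit scheme.
# Each rule pairs a machine-checkable condition with the human-readable
# reason that will be given to the citizen if the rule applies.
RULES = [
    (lambda a: a["income"] > 30000,
     "your declared income exceeds the EUR 30,000 threshold"),
    (lambda a: not a["resident"],
     "you are not registered as a resident of the municipality"),
]

def decide(applicant: dict) -> tuple[str, list[str]]:
    """Return the outcome plus the reasons, stated in understandable language."""
    reasons = [explanation for condition, explanation in RULES
               if condition(applicant)]
    return ("deny" if reasons else "grant", reasons)

outcome, reasons = decide({"income": 45000, "resident": True})
# The decision letter can now quote `reasons` verbatim, and an official
# reviewing an objection sees exactly which rules produced the outcome.
```

Because the explanation is part of the rule itself rather than reconstructed afterwards, the decision cannot be issued without its reasons, which is precisely what facilitates human intervention.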
Further listening & reading
Podcast: The digital welfare state: secret algorithms assessing social benefit claims and detecting child abuse
Dossier: The Guardian, Automating Poverty - How algorithms punish the poor
Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, WP251, 2017
Arjan Widlak, 'Een echte smart city begint met fatsoenlijke ICT', de Helling 31/4, 2018, pp. 14-17 (in Dutch)
Amie Stepanovic, 'Hardwiring the future: the threat of discrimination by design', Green European Journal, 2018
Kristian Lum, Predictive Policing Reinforces Police Bias, 2016. See also Amnesty International and Access Now, The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems, 2018, and Declaration of Cities Coalition for Digital Rights, 2018
See Municipality of Amsterdam, Agenda Digital City, 2019, p. 24, and Jan Fred van Wijnen, 'Amsterdam wil 'eerlijke' computers in de stad', Het Financieele Dagblad, 1 March 2019 (in Dutch)
Note that human intervention must be more than a formality: "To qualify as human intervention, the controller must ensure that any oversight of the decision is meaningful, rather than just a token gesture. It should be carried out by someone who has the authority and competence to change the decision. As part of the analysis, they should consider all the available input and output data." Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, WP251, 2017, p. 10