B. Technology in service of democracy and fundamental rights

11. Set limits to decision-making by algorithms and ensure human control. Have algorithms checked for discriminatory bias, and comply with the duty to state reasons.

"Because the computer says so" can never be an acceptable explanation for a govern­ment decision that affects citizens. The application of automated decision-making calls for checks and balances in order to protect human dignity and ensure good governance. The GDPR sets legal limits for the use of algorithms in decision-making. The general rule[1] is that governments or companies cannot assign decisions to compu­ters if such decisions could bring about significant disadvantages for citizens or consu­mers. In exceptional cases in which automated decision-making is allowed, the citi­zen or consumer has the right to obtain an explanation, to object, and to request that a new decision is taken by a person instead of a computer.

ICT systems must therefore make it possible for government professionals to overrule the algorithms on the basis of their own weighing of the data and interests involved.[2] An official must be able to say 'no' even if the algorithm says 'yes'.
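A minimal sketch of such an override hook, assuming a hypothetical model with a predict() method and invented outcome labels:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    outcome: str       # e.g. "grant" or "deny"
    decided_by: str    # "algorithm" or the official's identifier
    rationale: str

def decide(application, model, official: Optional[str] = None,
           override: Optional[str] = None, reason: str = "") -> Decision:
    """The algorithm only proposes; a named official can always overrule it."""
    if override is not None:
        # The official says 'no' even though the algorithm may say 'yes';
        # the overriding reason is recorded for the duty to state reasons.
        return Decision(outcome=override, decided_by=official, rationale=reason)
    return Decision(outcome=model.predict(application),
                    decided_by="algorithm",
                    rationale="model recommendation")
```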

Governments need to demonstrate that their algorithms are fair. Automated decisions need to be well reasoned so that they can be verified by the citizens concerned, all the more so because the rules for automated decision-making are not always a seamless translation of the underlying laws and regulations. Governments should make the algorithms they use public, explain their decision-making rules, assumptions, and legal and data sources, and have the algorithms tested by independent experts, including ethicists. These tests must be repeated regularly, in particular for self-learning algorithms.[3] This involves, among other things, ensuring that the algorithm does not develop a discriminatory bias against certain social groups.[4]

Amsterdam is developing a method to assess the algorithms that are used in the city – both by the municipality and by companies – for detrimental effects such as discrimination. One of the reasons for the assessment was an experiment with a self-learning algorithm that automatically handled complaints about neighbourhoods. If the algorithm had been put into service, neighbourhoods with well-educated citizens who know how to complain would have been cleaned better by the city’s sanitation department than other neighbourhoods.[5]
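A minimal sketch of the kind of disparity check such a regular test could include: a comparison of outcome rates per social group, with group labels, data and threshold invented for illustration:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, outcome) pairs, outcome 'grant' or 'deny'."""
    totals, grants = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        grants[group] += outcome == "grant"
    return {group: grants[group] / totals[group] for group in totals}

decisions = [("A", "grant"), ("A", "grant"), ("A", "deny"),
             ("B", "grant"), ("B", "deny"), ("B", "deny")]
rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # acceptable-gap threshold set by the independent reviewers
    print(f"possible bias: approval rates differ by {gap:.0%}", rates)
```

A real audit would of course use statistically sound measures and domain knowledge; the point is that such checks can be automated and repeated, also for self-learning systems.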

Governments can better comply with their duty to state reasons if they include the right to explanation as a design requirement when writing the algorithm's code. Truly smart algorithms must be able to explain in understandable language how they arrived at an outcome. This facilitates human intervention in the decision-making process.[6]
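A minimal sketch of explanation as a design requirement: the decision function returns its reasons in plain language together with the outcome (the benefit rule and amounts are invented for illustration):

```python
def assess_benefit(income: int, household_size: int,
                   limit_per_person: int = 1000):
    """Return the outcome together with reasons a citizen can verify."""
    limit = limit_per_person * household_size
    reasons = [f"The income limit for a household of {household_size} "
               f"is {limit} euro per month."]
    if income <= limit:
        reasons.append(f"The reported income of {income} euro is within the limit.")
        return "grant", reasons
    reasons.append(f"The reported income of {income} euro exceeds the limit.")
    return "deny", reasons

outcome, reasons = assess_benefit(income=2500, household_size=2)
print(outcome)              # deny
for reason in reasons:
    print("-", reason)
```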


Further listening & reading

Podcast: The digital welfare state: secret algorithms assessing social benefit claims and detecting child abuse

Dossier: The Guardian, Automating Poverty - How algorithms punish the poor

Footnotes

[1] Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, WP251, 2017
[2] Arjan Widlak, ‘Een echte smart city begint met fatsoenlijke ICT’, de Helling 31/4, 2018, pp. 14-17 (in Dutch)
[3] Amie Stepanovich, ‘Hardwiring the future: the threat of discrimination by design’, Green European Journal, 2018
[4] Kristian Lum, Predictive Policing Reinforces Police Bias, 2016. See also Amnesty International and Access Now, The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems, 2018 and Declaration of Cities Coalition for Digital Rights, 2018
[5] See Municipality of Amsterdam, Agenda Digital City, 2019, p. 24 and Jan Fred van Wijnen, 'Amsterdam wil 'eerlijke' computers in de stad', Het Financieele Dagblad, 1 March 2019 (in Dutch)
[6] That human intervention must be more than a formality: “To qualify as human intervention, the controller must ensure that any oversight of the decision is meaningful, rather than just a token gesture. It should be carried out by someone who has the authority and competence to change the decision. As part of the analysis, they should consider all the available input and output data.” Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, WP251, 2017, p. 10
