
A landmark court ruling could transform how governments use A.I.

A court in the Netherlands has ended the use of a controversial fraud-scoring system.

A giant camera checks a group of people, a metaphor for AI-driven (artificial intelligence) surveillance. (Shutterstock)

A court in the Netherlands has banned the use of a system that scored citizens on how likely they were to commit certain kinds of fraud. Human rights campaigners have hailed the decision as setting "an important precedent for protecting the rights of the poor in the age of automation."

The System Risk Indication system, better known as SyRI, was used by the Dutch government to profile citizens: it analyzed their personal data and applied an algorithm to score how likely each person was to commit tax or benefits fraud. Citizens were never told how the system calculated its scores.
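
SyRI's actual model was never made public, which became central to the case. Purely for illustration, an opaque risk score of this general kind might look like the sketch below, in which every feature name and weight is invented:

```python
# Purely illustrative: SyRI's real inputs, weights, and scoring method
# were never disclosed; every feature and weight here is hypothetical.

RISK_WEIGHTS = {            # undisclosed in reality
    "benefits_history": 0.4,
    "housing_records": 0.3,
    "employment_data": 0.3,
}

def risk_score(citizen: dict[str, float]) -> float:
    """Weighted sum over personal-data features, scaled to 0..1."""
    return sum(RISK_WEIGHTS[k] * citizen.get(k, 0.0)
               for k in RISK_WEIGHTS)

# The opacity the court objected to: a citizen sees only the output,
# never the features or weights that produced it.
print(risk_score({"benefits_history": 0.9, "housing_records": 0.2}))
```

The point is not the arithmetic but the asymmetry: the scored citizen sees only the output, never the inputs or weights behind it.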

However, a Dutch court ruled that the government must stop using the system immediately because it infringed on human rights, specifically Article 8 of the European Convention on Human Rights, which guarantees the right to respect for private life.

The ruling has the potential to influence how automated systems and artificial intelligence are employed in government decision-making in the future. Amos Toh, a senior researcher in artificial intelligence and human rights at Human Rights Watch, hailed the decision:

"By stopping SyRI, the Court has set an important precedent for protecting the rights of the poor in the age of automation. Governments that have relied on data analytics to police access to social security – such as those in the US, the U.K., and Australia – should heed the Court’s warning about the human rights risks involved in treating social security beneficiaries as perpetual suspects."

Toh noted that one of the key issues with the system was its opaque operation. Even during the court case, the government did not clearly explain how the system used data to arrive at its conclusions. This meant people essentially could not challenge their scores, even though the government stored the results for two years. SyRI was also deployed exclusively in what were termed "problem" neighborhoods.

Notably, the court did not rely on Article 22 of the General Data Protection Regulation, which protects people against decisions based solely on automated processing that produce legal effects. TechCrunch notes that it's unclear whether Article 22 applies if a human is involved in the process, for example in a review step.

The government noted during the case that SyRI did not automatically trigger legal action or open an investigation. This, however, was not enough to satisfy the court's concerns.

The Hague District Court ruled against SyRI. (Shutterstock)

Philip Alston, United Nations special rapporteur on extreme poverty and human rights, declared in his brief to the court that SyRI and systems like it "pose significant threats to human rights, in particular for the poorest in society."

The ruling arrives in the same week that the Australian government came under fire over its role in the "robodebt" scandal. That system, launched in 2016, compared a citizen's annual income as reported to the tax office against the fortnightly earnings they reported to the welfare office, and automatically issued a debt notice when it spotted a discrepancy. Leaked emails showed the government had been told the system might be unlawful before it was suspended in November 2019.
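
As a rough sketch of that comparison, which reportedly rested on averaging annual income across fortnights, consider the following; the function name, tolerance, and figures are all hypothetical:

```python
# Illustrative sketch only: names, values, and the tolerance are
# hypothetical. Robodebt reportedly averaged annual tax-office income
# across fortnights and compared it with fortnightly welfare reports.

FORTNIGHTS_PER_YEAR = 26

def flag_discrepancy(annual_income: float,
                     reported_fortnightly: list[float],
                     tolerance: float = 0.0) -> bool:
    """Return True if averaged income exceeds any fortnightly report."""
    averaged = annual_income / FORTNIGHTS_PER_YEAR
    # The averaging step is the known weakness: a worker with uneven
    # earnings can be flagged even though every report was accurate.
    return any(averaged > reported + tolerance
               for reported in reported_fortnightly)

# Example: $13,000 earned entirely in the first half of the year.
reports = [1000.0] * 13 + [0.0] * 13
print(flag_discrepancy(13000.0, reports))  # True: falsely flagged
```

Averaging is the weak point: someone with uneven earnings, such as casual work for only part of the year, gets flagged even though every fortnightly report was accurate.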

Virginia Eubanks, associate professor of political science at the University at Albany, hailed both pieces of news as "a bad day for the algorithmic overlords [and] a good day for people on social assistance."

Whether that is enough to give governments pause for thought is perhaps another question.
