Facial recognition technology (FRT) – which identifies people from a digital or video image – is becoming controversial. Recent progress has led to law enforcement agencies around the world adopting FRT, but resistance is mounting from civil liberties campaigners.
This page sets out the basics of the debate surrounding government use of facial recognition software.
The first major use of the technology by UK police, at a football championship in Wales in 2017, produced a 92 percent error rate.
In the U.S., the American Civil Liberties Union (ACLU) has a long-running campaign against the systematic use of facial recognition software by law enforcement, and has particularly targeted Amazon’s sales of its Rekognition software.
On June 18, the ACLU delivered a letter to Amazon CEO Jeff Bezos, supported by a petition with 150,000 signatures, arguing that the software is “primed for abuse in the hands of governments.”
Following this, Amazon employees wrote to Bezos on June 21 calling on him to stop selling the technology. Amazon shareholders had already sent Bezos a letter, on June 15, citing concerns over the potential use of the software and knock-on effects for the company’s share price.
Amazon issued a response to the ACLU’s earlier concerns on June 1, stating that there had been no reported abuse of the technology by law enforcement.
“[W]e believe it is the wrong approach to impose a ban on promising new technologies because they might be used by bad actors for nefarious purposes in the future,” Amazon argued. “The world would be a very different place if we had restricted people from buying computers because it was possible to use that computer to do harm.”
Meanwhile in the UK, the human rights group Liberty is supporting a legal challenge to the widespread use of the technology by police forces.
Issues raised by these groups include potential infringements to privacy and freedom of expression, the technology’s inaccuracy and potential for discrimination, and the danger it could be sold to repressive governments to facilitate human rights abuses.
Which parts of this issue have been under-reported or need to be further analysed and explored?