IBM Quits Facial Recognition Market Over Police Racial Profiling Concerns
IBM is pulling out of the facial recognition market and is calling for “a national dialogue” on the technology’s use in law enforcement.
The abrupt about-face comes as technology companies are facing increased scrutiny over their contracts with police amid violent crackdowns on peaceful protest across America.
In a public letter to Congress, IBM's chief executive, Arvind Krishna, explained the company's decision to back out of the business, and declared an intention “to work with Congress in pursuit of justice and racial equity, focused initially in three key policy areas: police reform, responsible use of technology, and broadening skills and educational opportunities.”
The company, Krishna said, “no longer offers general purpose IBM facial recognition or analysis software.”
“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles of trust and transparency,” he added. “We believe now is the time to begin a national dialogue on whether and how facial-recognition technology should be employed by domestic law enforcement agencies.”
But some are sceptical of IBM's move, noting that the company was already a distant third in the race to sell facial recognition technology, and that its statement leaves loopholes: it reserves the right to sell facial recognition technology for specific purposes, for example, and to resell the same technology from other vendors as part of its large consulting business.
The statement is still the strongest yet from a major technology company against misuse of facial recognition services, which have provoked alarm among civil rights groups for their ability to silently track entire populations.
In the UK, facial recognition technology has steadily gained ground as a policing tool, despite the objections of groups such as Liberty and Amnesty International, who argue that it’s a violation of privacy without the accuracy required to be useful for preventing crime.
In February, the Met police launched its largest trial yet, scanning shoppers in the Stratford Centre mall in east London to try to match them against a watchlist of more than 5,000 people “wanted for serious criminality, such as grievous bodily harm.” Critics said the trial went ahead despite warnings from watchdogs including the information commissioner, the surveillance camera commissioner and the biometrics commissioner.
In September, Microsoft's president, Brad Smith, told the Guardian that the company was voluntarily withholding its own facial recognition technology from governments that would use it for mass surveillance, but stopped short of committing to an all-out ban.
“It is a technology that can be deployed in, literally, an Orwellian fashion,” Smith said. “But I think whenever you want to ban a technology, you also have to ask, well, what are the potentials for it to do good as well? And so then the question is how do you strike the balance? I don’t think that you strike that balance by banning all use. You strike that balance by banning the harmful use.”
Amazon, whose multibillionaire founder Jeff Bezos came out in support of the Black Lives Matter movement last week, has repeatedly refused to answer questions on the use of its own facial-recognition technology in policing protest.
The company also owns Ring, a smart home subsidiary that has worked closely with police in the past. It has partnered with more than 400 forces and helped law enforcement gain access to surveillance footage without a warrant, advising officers to be more active on social media in order to encourage owners to volunteer their recordings. In 2018, Amazon patented a proposal for pairing facial-recognition technology with its doorbells, describing a system that the police could use to match the faces of people walking by a doorbell with a photo database of “suspicious” people.
The American Civil Liberties Union slammed the plan, saying that Amazon was “dreaming of a dangerous future, with its technology at the centre of a massive decentralised surveillance network, running real-time facial recognition on members of the public using cameras installed in people’s doorbells.”