New technology that is called “unethical” because of how it might be used unethically in the future, or by some malign agent, illustrates an abuse of ethics or, more likely, a basic misunderstanding of what ethics is. Technology, with rare exceptions, is neither ethical nor unethical. Trying to abort a newly gestated idea in its metaphorical womb because of worst-case scenarios is a trend that would have murdered many important discoveries and inventions.
The latest example of this tendency is facial recognition technology. In a report by Kashmir Hill, we learn that Clearview AI, an ambitious company in the field, scraped social media, employment sites, YouTube, Venmo—all public—to create a database with three billion images of people, along with links to the webpages from which the photos had come. This dwarfed the databases of other facial recognition products, creating a boon for law enforcement. The report begins with the story of how a child sexual abuser was caught because he had inadvertently photo-bombed an innocent shot that had been posted on Instagram.
This episode resulted in wider publicity for Clearview, which had attempted to soft-pedal its database and methods because it was afraid of the typical “unethical” uproar.
“The company’s method—hoovering up the personal photos of millions of Americans — was unprecedented and shocking. Indeed, when the public found out about Clearview last year, in a New York Times article I wrote, an immense backlash ensued,” writes Hill. “Facebook, LinkedIn, Venmo and Google sent cease-and-desist letters to the company, accusing it of violating their terms of service and demanding, to no avail, that it stop using their photos. BuzzFeed published a leaked list of Clearview users, which included not just law enforcement but major private organizations including Bank of America and the N.B.A.”
All futile and foolish. There was nothing unethical or illegal about a company using publicly available faces in its database, and Facebook, LinkedIn and the rest are ethically estopped from throwing a tantrum about it. If the database is unethical, then the internet and social media, which created the conditions, habits and narcissistic obsession with circulating personal photographs to the world, are more unethical. Like so many such “shocking developments,” this one should have been anticipated. From an Ethics Alarms perspective, it is akin to The Naked Teacher Principle, first launched to pronounce a “no sympathy” verdict when grade school teachers lose their jobs after deliberately placing naked or sexually provocative images of themselves online. It’s not unethical for people to see them, and it’s not unethical for people to form opinions based on them. It’s not unethical for employers to base personnel decisions on such photos, and on what they convey about the individual who permitted them to be posted. Similarly, it is not unethical for a company to use what someone has posted, or allowed to be photographed and posted on the web by others, for a legitimate purpose, including building a business and making money. It’s unethical to scrape a photo online and use that face to represent an endorsement that the individual never made, but there are laws prohibiting that. What Clearview did is called “enterprise.” Bravo.
Naturally, because they can’t tell “ick” from ethics, legislators and others grandstanded their opposition to the database, calling it “an attack on privacy”:
“Senator Ed Markey of Massachusetts wrote to the company asking that it reveal its law-enforcement customers and give Americans a way to delete themselves from Clearview’s database. Officials in Canada, Britain, Australia and the European Union investigated the company. There were bans on police use of facial recognition in parts of the United States, including Boston and Minneapolis, and state legislatures imposed restrictions on it, with Washington and Massachusetts declaring that a judge must sign off before the police run a search. In Illinois and Texas, companies already had to obtain consent from residents to use their “faceprint,” the unique pattern of their face, and after the Clearview revelations, Senators Bernie Sanders and Jeff Merkley proposed a version of Illinois’s law for the whole country. California has a privacy law giving citizens control over how their data is used, and some of the state’s residents invoked that provision to get Clearview to stop using their photos. (In March, California activists filed a lawsuit in state court.) Perhaps most significant, 10 class-action complaints were filed against Clearview around the United States for invasion of privacy, along with lawsuits from the A.C.L.U. and Vermont’s attorney general.”
But nothing has come of any of this so far, because the critics didn’t have a legal leg to stand on, nor, in my assessment, an ethical one. Clearview is booming, having raised $17 million, and is valued at nearly $109 million. As of January 2020, it had been used by at least 600 law-enforcement agencies; in 2021, the company says the number is about 3,100. The Army and the Air Force, ICE and the Child Exploitation Investigations Unit at Homeland Security all use Clearview AI for a variety of criminal investigations.
Hill writes that many, mostly on the political Left, are terrified that Clearview will win the various court challenges. Of course it will win them. “One major concern is that facial-recognition technology might be too flawed for law enforcement to rely on,” she says. Well, if it doesn’t work, it won’t be around long. Declaring a technology unethical because it hasn’t been proven perfect is unreasonable. Then we have the “systemic racism” argument: in three cases where police officers arrested and briefly jailed the wrong person based on a bad facial-recognition match, all three of the wrongfully arrested were black. None of the cases involved Clearview, but that proves it: facial recognition software is racist. Objections like this make me wonder if the real fear is that Clearview’s database will lead to the arrest of guilty blacks.
Finally, critics are citing the dystopian “Minority Report” scenario, in which companies of the future could use our faces (in the movie, it’s our eyes) to track our every move. Talking billboards would call us by name. Yes, that world looked pretty ugly, but the fact that the technology could be used that way doesn’t make the technology itself unethical. Nor do other potential uses, some of them icky and maybe unethical. “Deploying facial recognition to identify strangers had generally been seen as taboo, a dangerous technological superpower that the world wasn’t ready for,” Hill writes. What determines what the “world is ready for”? Was the world ready for the internet? “It could help a creep ID you at a bar,” Hill says. So can Facebook, Google, and any number of other tools, just not as quickly. “Or let a stranger eavesdrop on a sensitive conversation and know the identities of those talking.” Eavesdropping is unethical, but having “sensitive conversations” in public places is reckless and stupid. Don’t blame Clearview. “It could galvanize countless name-and-shame campaigns”…those are already unethical, but nobody’s suing Twitter… “allow the police to identify protesters”…and rioters? Targeting peaceful protesters is unconstitutional…“and generally eliminate the comfort that comes from being anonymous as you move through the world.” Well, it’s far too late for that.
The so-called ethical attacks on Clearview remind me of a memorable speech from “Inherit the Wind,” delivered at trial by “Henry Drummond,” the Clarence Darrow clone (cloning Darrow would be very ethical, and he would approve):
“Progress has never been a bargain. You have to pay for it. Sometimes I think there’s a man who sits behind a counter and says, ‘All right, you can have a telephone but you lose privacy and the charm of distance. Madam, you may vote but at a price. You lose the right to retreat behind the powder puff or your petticoat. Mister, you may conquer the air but the birds will lose their wonder and the clouds will smell of gasoline.’”
When and if abuse of a technology becomes clear and widespread, that is the time to deal with those abuses. Stopping progress because the possibility of abuse exists is itself unethical.