Facial Recognition Software Isn’t Unethical, And Neither Is Clearview

New technology that is called “unethical” because of how it might be used unethically in the future, or by some malign agent, illustrates an abuse of ethics or, more likely, a basic misunderstanding of what ethics is. Technology, with rare exceptions, is neither ethical nor unethical. Trying to abort a newly gestated idea in its metaphorical womb because of worst-case scenarios is a trend that would have murdered many important discoveries and inventions.

The latest example of this tendency is facial recognition technology. In a report by Kashmir Hill, we learn that Clearview AI, an ambitious company in the field, scraped social media, employment sites, YouTube, Venmo—all public—to create a database with three billion images of people, along with links to the webpages from which the photos had come. This dwarfed the databases of other facial recognition products, creating a boon for law enforcement. The report begins with the story of how a child sexual abuser was caught because he had inadvertently photo-bombed an innocent shot that had been posted on Instagram.

This episode resulted in wider publicity for Clearview, which had attempted to soft-pedal its database and methods because it was afraid of the typical “unethical” uproar.

“The company’s method—hoovering up the personal photos of millions of Americans — was unprecedented and shocking. Indeed, when the public found out about Clearview last year, in a New York Times article I wrote, an immense backlash ensued,” writes Hill. “Facebook, LinkedIn, Venmo and Google sent cease-and-desist letters to the company, accusing it of violating their terms of service and demanding, to no avail, that it stop using their photos. BuzzFeed published a leaked list of Clearview users, which included not just law enforcement but major private organizations including Bank of America and the N.B.A.”

All futile and foolish. There was nothing unethical or illegal about a company using publicly available faces in its database, and Facebook, LinkedIn and the rest are ethically estopped from throwing a tantrum about it. If the database is unethical, then the internet and social media, which created the conditions, habits and narcissistic obsession with circulating personal photographs to the world, are more unethical still. Like so many such “shocking developments,” this one should have been anticipated. From an Ethics Alarms perspective, it is akin to The Naked Teacher Principle, first launched to pronounce a “no sympathy” verdict when school teachers lose their jobs after deliberately placing naked or sexually provocative images of themselves online. It’s not unethical for people to see them, and it’s not unethical for people to form opinions based on them. It’s not unethical for employers to base personnel decisions on such photos and what they convey about the individual who permitted them to be posted. Similarly, it is not unethical for a company to use what someone has posted, or allowed to be photographed and posted on the web by others, for a legitimate purpose, including building a business and making money. It’s unethical to scrape a photo online and use that face to represent an endorsement that the individual never made, but there are laws prohibiting that. What Clearview did is called “enterprise.” Bravo.

Naturally, because they can’t tell “ick” from ethics, legislators and others grandstanded their opposition to the database, calling it “an attack on privacy”:

“Senator Ed Markey of Massachusetts wrote to the company asking that it reveal its law-enforcement customers and give Americans a way to delete themselves from Clearview’s database. Officials in Canada, Britain, Australia and the European Union investigated the company. There were bans on police use of facial recognition in parts of the United States, including Boston and Minneapolis, and state legislatures imposed restrictions on it, with Washington and Massachusetts declaring that a judge must sign off before the police run a search. In Illinois and Texas, companies already had to obtain consent from residents to use their “faceprint,” the unique pattern of their face, and after the Clearview revelations, Senators Bernie Sanders and Jeff Merkley proposed a version of Illinois’s law for the whole country. California has a privacy law giving citizens control over how their data is used, and some of the state’s residents invoked that provision to get Clearview to stop using their photos. (In March, California activists filed a lawsuit in state court.) Perhaps most significant, 10 class-action complaints were filed against Clearview around the United States for invasion of privacy, along with lawsuits from the A.C.L.U. and Vermont’s attorney general.”

But nothing has come of any of this so far, because the critics didn’t have a legal leg to stand on, nor, in my assessment, an ethical one. Clearview is booming, having raised $17 million, and is valued at nearly $109 million. As of January 2020, it had been used by at least 600 law-enforcement agencies; in 2021, the company says the number is about 3,100. The Army and the Air Force, ICE and the Child Exploitation Investigations Unit at Homeland Security all use Clearview AI for a variety of criminal investigations.

Hill writes that many, mostly on the political Left, are terrified that Clearview will win the various court challenges. Of course it will win them. “One major concern is that facial-recognition technology might be too flawed for law enforcement to rely on,” she says. Well, if it doesn’t work, it won’t be around long. Declaring a technology unethical because it hasn’t been proven perfect is unreasonable. Then we have the “systemic racism” argument: in three cases where police officers arrested and briefly jailed the wrong person based on a bad facial-recognition match, all three of the wrongfully arrested were black. None of the cases involved Clearview, but that proves it: facial recognition software is racist. Objections like this make me wonder if the real fear is that Clearview’s database will lead to the arrest of guilty blacks.

Finally, critics are citing the dystopian “Minority Report” scenario, in which companies of the future could use our faces (in the movie, it’s our eyes) to track our every move. Talking billboards would call us by name. Yes, that world looked pretty ugly, but the fact that the technology could be used that way doesn’t make the technology itself unethical. Nor do other potential uses, some of them icky and maybe unethical. “Deploying facial recognition to identify strangers had generally been seen as taboo, a dangerous technological superpower that the world wasn’t ready for,” Hill writes. What determines what the “world is ready for”? Was the world ready for the internet? “It could help a creep ID you at a bar,” Hill says. So can Facebook, Google, and any number of other tools, just not as quickly. “Or let a stranger eavesdrop on a sensitive conversation and know the identities of those talking.” Eavesdropping is unethical, but having “sensitive conversations” in public places is reckless and stupid. Don’t blame Clearview. “It could galvanize countless name-and-shame campaigns”…those are already unethical, but nobody’s suing Twitter… “allow the police to identify protesters”…and rioters? Targeting peaceful protesters is unconstitutional… “and generally eliminate the comfort that comes from being anonymous as you move through the world.” Well, it’s far too late for that.

The so-called ethical attacks on Clearview remind me of a memorable speech from “Inherit the Wind,” in which the Clarence Darrow clone—cloning Darrow would be very ethical, and he would approve—“Henry Drummond,” says during the trial:

“Progress has never been a bargain. You have to pay for it. Sometimes I think there’s a man who sits behind a counter and says, ‘All right, you can have a telephone but you lose privacy and the charm of distance. Madam, you may vote but at a price. You lose the right to retreat behind the powder puff or your petticoat. Mister, you may conquer the air but the birds will lose their wonder and the clouds will smell of gasoline.’”

When and if abuse of a technology becomes clear and widespread, that is the time to deal with those abuses. Stopping progress because the possibility of abuse exists is itself unethical.

10 thoughts on “Facial Recognition Software Isn’t Unethical, And Neither Is Clearview”

  1. I think this is right. Yes, Clearview makes me worry about how it may be abused, but the technology itself is ethically inert. There is great potential for unethical use of the technology, such as spying on people for profit or even nefarious reasons, but that has nothing at all to do with the existence of the technology. It can also be used ethically and to beneficial effect.

    I think your quote from Inherit The Wind is incredibly apt, and shows, in relief, the sad truth about people, particularly Americans — they want everything and want it to cost them nothing, either in filthy lucre or diminution of rights, privileges or pleasures. Fear-mongering about privacy by politicians is simply what they do, but the reality is, Clearview is, pardon the pun, in the clear so far.

    The police and agencies using the technology? Maybe not so much.

    • Isn’t it like DNA technology? If the defense can prove the match is wrong, the case gets thrown out? I assume defense lawyers will soon be fully up to speed on the technology’s weak points. Somebody’s going to make a lot of money conducting seminars…

  2. I agree that technology is ethically inert, but how do they get around copyright law? If I take photographs of people or of myself and use them in my advertisements, which could include LinkedIn or Facebook, the rights to use the photos remain mine. I suppose that simply having the database does not violate copyright, but the moment the database is sold to a third party, whether it is a singular image or a collection, my work is being used without license. When I had a Facebook page it was private, so were those images “hoovered up”? I would also be interested in knowing how accurate the data points can be using a 1-inch, 72-dpi image from LinkedIn.

    My understanding of facial recognition technology is that its accuracy is a function of the resolution of the image. If the tip of my nose is 5.5cm from the point where my upper lip intersects my lower lip, and is also 8cm from the center of my left pupil and 8.1cm from the center of my right pupil, how can such precision be obtained from low-res internet images? I took an art class, and there is a general symmetry to all faces: the eyes are x inches apart, the nose is so many inches from the eyes and mouth, etc. There is even a specific set of proportions for what are termed beautiful faces. To get a definitive face you would need to be able to pinpoint all features within hundred-thousandths of an inch, with all points matching exactly in relation to one another. I have doubts that such low-res images can accomplish that.

    I can understand the need for biometric security technology for personal protection or access to sensitive places, but, while the technology may be neither ethical nor unethical, the widespread use of facial recognition technology does seem a bit dystopian. Why not just allow law enforcement to do periodic spot checks on people by pulling them over in their autos to ferret out criminals? Facial recognition is simply a non-invasive search that avoids obtaining a warrant. Why don’t we let law enforcement demand that employers provide a head-and-shoulders photo of all personnel, or simply require all residents to have a national photo ID? My point is that we would be shocked if the government used the images it has of us to ferret out crime, so it uses a private enterprise to alleviate that problem. It’s okay because they are now doing it all the time.
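    (The landmark-distance idea this comment describes can be sketched in a few lines. The code below is a toy illustration, not any vendor’s actual algorithm: the landmark coordinates are invented, and the pixel size is a rough estimate for a 1-inch, 72-dpi thumbnail. It shows how snapping real-world feature positions to a coarse pixel grid distorts the measured distances.)

    ```python
    import math

    # Hypothetical facial landmarks (x, y) in centimeters, loosely matching
    # the measurements in the comment. All values are invented.
    landmarks_cm = {
        "nose_tip": (0.0, 0.0),
        "lip_mid": (0.0, -5.5),
        "left_pupil": (-3.2, 7.3),
        "right_pupil": (3.3, 7.4),
    }

    def distances(points):
        """Pairwise Euclidean distances between named landmark points."""
        names = sorted(points)
        return {
            (a, b): math.dist(points[a], points[b])
            for i, a in enumerate(names)
            for b in names[i + 1:]
        }

    def quantize(points, cm_per_pixel):
        """Snap coordinates to a pixel grid, mimicking a low-resolution image."""
        return {
            name: (round(x / cm_per_pixel) * cm_per_pixel,
                   round(y / cm_per_pixel) * cm_per_pixel)
            for name, (x, y) in points.items()
        }

    true_d = distances(landmarks_cm)
    # A 1-inch, 72-pixel image spanning a ~20 cm face works out to
    # roughly 0.28 cm per pixel -- each coordinate is only known that coarsely.
    low_res_d = distances(quantize(landmarks_cm, cm_per_pixel=0.28))

    for pair in sorted(true_d):
        err = abs(low_res_d[pair] - true_d[pair]) / true_d[pair]
        print(f"{pair}: true {true_d[pair]:.2f} cm, low-res error {err:.1%}")
    ```

    With quantization error on the order of a quarter centimeter per coordinate, millimeter-scale distinctions between faces are plainly unrecoverable from such a thumbnail, which supports the commenter’s doubt. (In practice, systems compare learned feature vectors rather than raw caliper-style distances, but the resolution limit applies either way.)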

  3. So that wingnut Ed Markey (where does Massachusetts come up with these people?) is going to stop the observation and compilation of faces in plain view, but the nefariously misnamed “reconciliation bill” is going to require banks and other financial institutions to report to the IRS any and every deposit or disbursal over six hundred dollars from every person or entity’s bank account? That’s okay, but Clearview is not? Surely you jest, Ed. Let’s see: catching criminals is bad, but catching tax cheats is good. Hmmm.

  4. “… facial recognition software is racist.”
    This reminded me of a phrase I often heard from victims and witnesses struggling to describe a suspect of a different race: “They all look alike to me.” Interestingly, facial recognition software will (allegedly) help to counter misidentifications related to the well-researched cognitive phenomenon of inaccurate cross-race identification.
    https://www.nytimes.com/2015/09/20/nyregion/the-science-behind-they-all-look-alike-to-me.html

  5. There is nothing inherently wrong with this technology, at least not merely because it is new. I can conceive of a few new technologies which would be unethical, because they would exist to achieve unethical goals. However, I feel that does not mean change must be accepted merely for change’s sake. There should be room for a society to ask, “Is this a direction we, as a community, want to head?” Perhaps that is simply answered by the invisible hand, but what about when society clamors “No!” and the police, the government, the educators, the tech oligarchs say, “Yes, please! Have some billions of dollars”? Where would that leave us as to the direction we should all go?

  6. “When and if abuse of a technology becomes clear and widespread…”

    When that happens, it will be too late.

    The algorithms that Facebook/Twitter are already using to identify ‘misinformation’ are virtually identical to the algorithms used for facial recognition. The difference is merely one of computing power (images versus text). Computers simply make more calculations to process an image rather than text.

    This technology is already being actively abused for censorship and other partisan ends. A regulatory framework is needed now (er, yesterday) to mitigate the worst of abusive potential.

  7. I don’t know. It seems that reasonable people could list all the useful ways facial recognition tech can be used, all the sinister ways it can be used, and all the ways it can be Trojan-horsed into society under the guise of usefulness but will more often than not lead to greater intrusions into privacy.

    But in all things, as technology makes the massive world more immediately like a medieval village, one would have to ask: what risks did the ordinary serf live with, knowing everyone in the village knew what he looked like and what his behavior patterns were?

    And frankly, I don’t care much for the ethics of medieval villages anyway, so I’m not so sure we can easily say “facial recognition” tech is ethically neutral.

  8. 1) I don’t know about Google, but Facebook could indeed have a legal case based on copyright of the photos. It’s a part of the terms of service for Facebook that anything you upload/post to their servers becomes their property (and no longer the property of the poster).

    2) That said, I think it IS useful for lawmakers to — for once — get out in front of a new/emerging technology and deal with the body of law that would apply to it BEFORE some big ugly problem happens and we’re left closing the barn door after-the-fact.

    3) In general, I think electronic facial profiles should be treated, under the law, exactly the same way we treat another major biometric identifier: fingerprints. That’s a long-term tried-and-true balance between individual rights and privacy vs. its utility to law enforcement and national security.

    4) On the subject “Facial Recognition is Racist” . . . I recall a punchline from Get Smart where Control’s search for a Chinese spy from his picture went horribly wrong, and ultimately the Chief figured out that the problem was “All Chinese look alike to the computer.” Apparently you CAN make this stuff up.

    –Dwayne
