Facial Recognition Software Isn’t Unethical, And Neither Is Clearview

Calling a new technology “unethical” because of how it might be used unethically in the future, or by some malign agent, is an abuse of ethics or, more likely, a basic misunderstanding of what ethics is. Technology, with rare exceptions, is neither ethical nor unethical. Trying to abort a newly gestated idea in its metaphorical womb because of worst-case scenarios is a tendency that would have killed many important discoveries and inventions.

The latest target of this tendency is facial recognition technology. In a report by Kashmir Hill, we learn that Clearview AI, an ambitious company in the field, scraped social media, employment sites, YouTube, Venmo—all public sources—to create a database of three billion images of people, along with links to the webpages from which the photos came. This dwarfed the databases of other facial recognition products, creating a boon for law enforcement. The report opens with the story of a child sexual abuser who was caught because he had inadvertently photo-bombed an innocent shot posted on Instagram.

This episode brought wider publicity to Clearview, which had kept its database and methods quiet for fear of the predictable “unethical” uproar.