WA05

The Ethics of Clearview AI’s Facial Recognition Software

In July 2025, Angella Lipps was tracked down at her home by Tennessee police and arrested for a bank fraud she had supposedly committed in Fargo, hundreds of miles away. She then spent over five months in prison for a crime she did not commit. The charges were officially dismissed on December 23, 2025, after her bank records showed that she could not possibly have been in Fargo at the time of the fraud. The basis for the false arrest was a match from Clearview AI, a facial recognition system with a database of over 70 billion faces scraped from the internet without anyone’s permission.

Lipps’s is just the most recent and most publicized case, but she is not alone. So far, at least eight other individuals have been wrongfully arrested because of errors in the Clearview AI facial recognition software. Much of the responsibility lies with Hoan Ton-That, Clearview AI’s founder, and his decision to scrape the entire internet rather than build something smaller, opt-in, and more accountable. This is the decision I will focus on, because it is one that engineers will face more and more often in the coming years.
The Decision

In 2017, when Ton-That founded Clearview alongside Richard Schwartz, the question of where the system’s training images would come from must have arisen. Scraping the internet would, of course, yield the largest number of images to train on. But he had alternatives: he could have partnered with law enforcement and used an existing mugshot database, offered a way to opt out, or asked for permission before scraping. Instead, he chose to scrape without permission, and the result is a database of over 70 billion faces, all collected without the subjects’ permission or knowledge.

If I were working for Clearview, I would argue that scraping is not an ethical way to collect these images. The better solution is a smaller database built from images sourced from police bookings, mugshots, and similar records, ideally paired with an opt-out system. Furthermore, there should be a hard limit on how far police can act on the matches the system produces. This would, unfortunately, make for a far less profitable and less impressive business, but it would benefit millions of individuals by preventing them from becoming potential victims of the system.

Justifying My Decision

The best argument against the scrape comes, I believe, from the Rights Lens, which ties closely to Principle 1.6, “Respect Privacy,” in the ACM Code of Ethics. This lens asks whether an action respects an individual’s rights, including the right to privacy, which involves controlling what information about yourself is made public. The decision to scrape faces without permission and place them in a database for future use blatantly violates this lens and this principle. The violation is compounded by the fact that individuals have no way to opt out.

The Care Ethics lens also applies well to this scenario. It emphasizes caring about how those affected, the people whose images were scraped, feel, and above all taking their concerns seriously. For example, Porcha Woodruff, eight months pregnant at the time, was arrested for a carjacking because of a faulty facial recognition match. While she was quickly released once her innocence was recognized, this lens still weighs the stress she endured and the fact that anyone could fall victim to the same error.

I believe the best argument for Clearview comes from a utilitarian lens. Since most face matches are correct and criminals get caught, most people benefit from the system, and the occasional mishap is the price paid for it. The counterargument is that this framing ignores the far-reaching effect of knowing that any photo of you is searchable, which touches billions of people. In other words, the utilitarian approach claims to help the majority because the failure rate is low, but it can equally be argued to harm the majority because it violates everyone’s privacy. Again, this relates back to Principle 1.6 of the ACM Code of Ethics.

Why This Will Likely Continue

If this is such a clear disregard for privacy, why does the system continue to grow and retain support rather than being replaced by an alternative? The main reason is the money it makes. In 2025, Clearview signed a $9.2 million contract with ICE, and it is renewing older contracts with the Army and with Customs and Border Protection, which shows the value these larger entities place on it. While these contracts are made with the good intention of stopping crime, the issues facing the broader public remain. A system with such broad scope will easily dominate a smaller one like the system I described, built on consensual photos and opt-outs.

Clearview also claims a legal basis for its actions, arguing that they are protected under the First Amendment. Even if true, legality does not make the practice ethical. This is precisely why the ACM Code of Ethics exists: it gives engineers ethical rules that may not be explicitly stated in the law.

Final Decision

I don’t believe that engineers at Clearview are making blatantly unethical decisions, and it is perfectly reasonable to support a system you believe genuinely serves the broader societal interest. However, I believe they are disregarding several rights of the public: the right to privacy, the right to opt out, and the right not to be a potential victim of a system one has no control over. An opt-out system trained not on scraped images but on police bookings, mugshot photos, or even consensually shared social media photos would be a far more ethical solution, though it could not realistically compete with something as far-reaching as Clearview.
