
Why Regulators Can’t Stop Clearview AI

More and more privacy watchdogs around the world are standing up to Clearview AI, a U.S. company that has collected billions of photos from the internet without people’s permission.

The company, which uses those photos for its facial recognition software, was fined £7.5 million ($9.4 million) by a U.K. regulator on May 26. The U.K. Information Commissioner’s Office (ICO) said Clearview had broken data protection law; the company denies violating it.

The case, however, highlights how governments struggle to regulate artificial intelligence across national borders.

The data required for facial recognition is huge. In the race to build profitable new AI tools that can be sold to state agencies or attract new investors, companies have turned to downloading—or “scraping”—trillions of data points from the open web.

In the case of Clearview, these are pictures of people’s faces from all over the internet, including social media, news sites and anywhere else a face might appear. The company has reportedly collected 20 billion photographs—the equivalent of nearly three per human on the planet.

Those photos underpin the company’s facial recognition algorithm. They are used as training data, or a way of teaching Clearview’s systems what human faces look like and how to detect similarities or distinguish between them. The tool can recognize a person from a photograph with high accuracy, according to Clearview, and U.S. government tests have found it to be among the most accurate facial recognition systems on the market. It has been used by U.S. Immigration and Customs Enforcement, thousands of police departments and businesses like Walmart.

The vast majority of people have no idea their photographs are likely included in the dataset that Clearview’s tool relies on. “They don’t ask for permission. They don’t ask for consent,” says Abeba Birhane, a senior fellow for trustworthy AI at Mozilla. “And when it comes to the people whose images are in their data sets, they are not aware that their images are being used to train machine learning models. This is outrageous.”

Clearview claims its tools keep people safe. “Clearview AI’s investigative platform allows law enforcement to rapidly generate leads to help identify suspects, witnesses and victims to close cases faster and keep communities safe,” the company says on its website.

Clearview, however, has also been subject to intense criticism from advocates for responsible AI, who point out that facial recognition technology often misidentifies people of color, making it more likely that law enforcement agencies arrest the wrong person. Privacy advocates also argue that even if those biases were eliminated, the data could still be stolen by hackers or used by governments to monitor individuals.


Will the U.K.’s fine have any impact?

Alongside the $9.4 million penalty, the U.K. regulator ordered Clearview to erase all personal data it had obtained from U.K. residents and to ensure that its system can no longer identify photos of U.K. residents.

Whether Clearview will pay the penalty or comply with that order, however, is unclear.

“As long as there are no international agreements, there is no way of enforcing things like what the ICO is trying to do,” Birhane says. “This is a clear case where you need a transnational agreement.”

It wasn’t the first time Clearview had been reprimanded by regulators. In February, Italy’s data protection agency fined the company 20 million euros ($21 million) and ordered it to delete data on Italian residents. Other E.U. data protection authorities, including France’s, have issued similar orders. The French and Italian agencies did not respond to inquiries about whether Clearview has complied.

In an interview with TIME, the U.K. privacy regulator John Edwards said Clearview had informed his office that it cannot comply with his order to delete U.K. residents’ data. In an emailed statement, Clearview’s CEO Hoan Ton-That indicated that this was because the company has no way of knowing where people in the photos live. “It is impossible to determine the residency of a citizen from just a public photo from the open internet,” he said. “For example, a group photo posted publicly on social media or in a newspaper might not even include the names of the people in the photo, let alone any information that can determine with any level of certainty if that person is a resident of a particular country.” In response to TIME’s questions about whether the same applied to the rulings by the French and Italian agencies, Clearview’s spokesperson pointed back to Ton-That’s statement.

Ton-That added: “My company and I have acted in the best interests of the U.K. and their people by assisting law enforcement in solving heinous crimes against children, seniors, and other victims of unscrupulous acts … We collect only public data from the open internet and comply with all standards of privacy and law. I am disheartened by the misinterpretation of Clearview AI’s technology to society.”

Clearview didn’t respond to queries about whether it intends to pay or contest the $9.4 million fine from the U.K. privacy watchdog. But its lawyers have said they do not believe the U.K.’s rules apply to the company. “The decision to impose any fine is incorrect as a matter of law,” Clearview’s lawyer, Lee Wolosky, said in a statement provided to TIME by the company. “Clearview AI is not subject to the ICO’s jurisdiction, and Clearview AI does no business in the U.K. at this time.”

AI regulation: Unfit for purpose

Legal and regulatory action has been more successful in the U.S. Clearview earlier agreed to give users in Illinois the ability to opt out of its search results. The agreement came as part of a settlement of a lawsuit filed by the ACLU in Illinois, where privacy law prohibits using the state’s residents’ biometric information (including “faceprints”) without permission.

But the U.S. has no federal privacy law, leaving enforcement up to individual states. Under the Illinois settlement, Clearview must stop selling its services to private businesses in the U.S. With no federal rules to answer to, however, companies like Clearview remain effectively unregulated at both the national and international level.

“Companies are able to exploit that ambiguity to engage in massive wholesale extractions of personal information capable of inflicting great harm on people, and giving significant power to industry and law enforcement agencies,” says Woodrow Hartzog, a professor of law and computer science at Northeastern University.

Hartzog says that facial-recognition tools add new layers of surveillance to people’s lives without their consent. It is possible to imagine the technology enabling a future where a stalker could instantly find the name or address of a person on the street, or where the state could surveil people’s movements in real time.

The E.U. is considering new legislation on AI that could see facial recognition systems built on scraped data all but banned in the bloc. But Edwards—the U.K. privacy tsar whose role includes helping to shape incoming post-Brexit privacy legislation—doesn’t want to go that far. “There are legitimate uses of facial-recognition technology,” he says. “This is not a fine against facial-recognition technology… It is simply a decision which finds one company’s deployment of technology in breach of the legal requirements in a way which puts the U.K. citizens at risk.”

If Clearview were to delete U.K. residents’ data, as Edwards has demanded, it would be a significant win, because doing so would prevent those people from being identified by the company’s tools, according to Daniel Leufer, a senior policy analyst with Access Now in Brussels. But it wouldn’t go far enough, he adds. “The whole product that Clearview has built is as if someone built a hotel out of stolen building materials. It is imperative that the hotel be closed down. But it also needs to be demolished and the materials given back to the people who own them,” he says. “If your training data is illegitimately collected, not only should you have to delete it, you should delete models that were built on it.”

Edwards says his office has not ordered Clearview to go that far. “The U.K. data will have contributed to that machine learning, but I don’t think that there’s any way of us calculating the materiality of the U.K. contribution,” he says. “It’s all one big soup, and frankly, we didn’t pursue that angle.”

Write to Billy Perrigo at billy.perrigo@time.com.
