How liberty-infringing facial recognition threatens you every day — Analysis
Orwellian technology that allows anyone to be tracked and identified almost anywhere is flourishing despite resistance from human rights advocates
On February 15, Amnesty International published a report detailing how the New York Police Department has constructed an extensive surveillance network spanning the entire metropolis. The network relies heavily on controversial facial recognition technology (FRT), which Amnesty says serves to “reinforce discriminatory policing against minority communities.”
Once the stuff of science fiction, FRT is now in use across the country. It allows police to compare CCTV imagery and other sources against traditional photographic records, as well as databases of billions of headshots, some of which are crudely pulled from individuals’ social media profiles without their knowledge or consent. The NYPD is a particularly enthusiastic user – or, perhaps, abuser – of FRT, with 25,500 cameras spanning the city today.
There is also a clear racial component to FRT deployment in New York – Amnesty found that in areas where the proportion of non-white residents is higher, so too is the concentration of FRT-equipped CCTV cameras. The organization argues the technology has effectively supplanted law enforcement’s traditional ‘stop-and-frisk’ operations.
Between 2002 and 2019, innocent New Yorkers were searched and questioned in public by law enforcement over five million times; at the 2011 peak, nearly 700,000 people were stopped and interrogated in a single year. While the tactic’s usage has fallen significantly in the years since, the number of FRT-equipped surveillance devices in the city has multiplied over the same period.
All too often, the technology is used to “identify, track and harass” protesters. Analysis of the routes taken by protesters to Black Lives Matter demonstrations in mid-2020, for example, revealed “nearly total surveillance coverage” by FRT-equipped cameras. In August that year, FRT was used to track down a protester whose egregious crime was yelling near a law enforcement official; over 50 officers surrounded his apartment and shut down the surrounding streets, while NYPD helicopters hovered menacingly overhead.
In sum, Amnesty judges New York’s FRT network to “violate the right to privacy, and threaten the rights to freedom of assembly, equality and non-discrimination.” While unambiguously shocking, its report represents only the tip of the iceberg with respect to FRT usage by the NYPD – not least because the department refused to disclose public records regarding its acquisition of FRT and other surveillance tools in response to Amnesty’s requests, prompting the rights group to take legal action, which remains ongoing as of February 2022.
Much of New York City’s modern surveillance infrastructure falls under the Domain Awareness System (DAS), conceived following the 9/11 attacks – the NYPD is supported in this effort by partnerships with surveillance camera company Pelco, tech giant Microsoft, and the FBI’s counterterrorism wing.
Events such as the attempted 2010 Times Square police car bombing provided justification for an even more expansive FRT rollout, meaning that today, local forces in all of New York’s five boroughs have carte blanche access to databases containing extensive data on citizens and vehicle registration plates, among other things.
Even more troublingly, emails obtained by MuckRock testify to an intimate relationship between the NYPD and Clearview AI. The firm’s artificial intelligence applications allow police to upload images of suspects and compare them to a 10 billion-strong database of facial images scraped from the web, including public websites and social media accounts without adequate privacy settings.
Clearview’s website boasts of its ability to provide police with customized mugshot and watchlist galleries, and to facilitate collaboration with other agencies at regional and national levels. The company’s ambition to become the centralized global source of facial recognition imagery has been furthered by over 1,800 public agencies testing or using its product. Beyond police forces, the technology’s tentacles reach schools, hospitals, immigration authorities, and the Air Force, among a great many others.
The company features prominently in a 2021 Government Accountability Office review of FRT use, which found that 18 out of 24 US federal agencies – a total which didn’t even include intelligence services, such as the CIA – employed FRT systems in 2020 for purposes including cyber security, domestic law enforcement, and surveillance.
Six departments – Homeland Security, Justice, Defense, Health and Human Services, Interior, and Treasury – reported using the technology “to generate leads in criminal investigations, such as identifying a person of interest by comparing images of the person against databases of mugshots or from other law enforcement encounters.”
Numerous well-promoted studies have vouched for the immense precision and efficacy of FRT – research published in April 2020 by the Center for Strategic and International Studies stated that the best FRT systems achieve “near-perfect accuracy” in “ideal conditions.”
Of course, as the think tank also acknowledged, real-world conditions are almost never so ideal – a cited experiment showed that a “leading” FRT algorithm’s error rate leaped from an alleged 0.1% to 9.3% when matching pictures in which the subject was not looking directly at the camera, or was obscured by objects or shadows.
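To put those error rates in perspective, here is a back-of-the-envelope sketch of how per-search error compounds into absolute misidentifications at scale. The annual search volume used below is a hypothetical figure for illustration; only the 0.1% and 9.3% rates come from the cited experiment.

```python
# Illustrative only: how a per-search error rate scales into absolute
# numbers of misidentifications. The 100,000 annual searches figure is
# hypothetical; the error rates are those cited above.

def expected_false_matches(searches: int, error_rate: float) -> float:
    """Expected number of erroneous identifications for a given search volume."""
    return searches * error_rate

ideal = expected_false_matches(100_000, 0.001)     # lab conditions: 0.1%
degraded = expected_false_matches(100_000, 0.093)  # angled/obscured faces: 9.3%

print(f"Ideal conditions: ~{ideal:,.0f} false matches per year")
print(f"Degraded imagery: ~{degraded:,.0f} false matches per year")
```

Under these assumptions, the same system goes from roughly 100 false matches a year to over 9,000 – each one a potential wrongful stop or arrest.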
This is especially concerning for people of color, who are both more heavily surveilled and more likely to be misidentified. Investigations by Harvard and NIST have found this isn’t simply a case of more surveillance producing more false positives: FRT performs markedly worse on darker-skinned faces in particular. There are numerous cases of people of color being wrongfully arrested – and serving jail time – due to FRT errors in multiple states.
Clearly, the NYPD is unmoved by such findings, and the force isn’t unique in this regard. The LAPD, which has been notoriously embroiled in countless high profile racial scandals over the years, and dubbed “the single most murderous police force in the country,” continues to budget for mass use of FRT despite hundreds of emails and a signed letter from 70 organizations in opposition.
Driven by this demand, FRT is today a rapidly expanding industry. Valued at $3.72 billion in 2020, the market is forecast by research firm Mordor Intelligence to balloon to almost $12 billion by 2026, a compound annual growth rate of 21.71%. What’s more, the Covid-19 pandemic has compelled vendors to greatly enhance the technology’s capabilities, with Chinese providers creating applications that identify infected citizens, and even those wearing masks – a frequent means by which protesters choose to obscure themselves.
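The quoted growth figures are internally consistent – a quick sanity check (the market sizes and rate are Mordor Intelligence’s; the arithmetic is ours):

```python
# Verify that a 21.71% compound annual growth rate applied to the 2020
# valuation of $3.72 billion reproduces the ~$12 billion 2026 forecast.
base_2020 = 3.72   # market size in USD billions
cagr = 0.2171      # 21.71% per year
years = 6          # 2020 -> 2026

projected_2026 = base_2020 * (1 + cagr) ** years
print(f"Projected 2026 market size: ${projected_2026:.2f} billion")
# -> roughly $12.09 billion, matching the "almost $12 billion" forecast
```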
This largesse gives the FRT industry an enormous lobbying budget, and corporate interest in facial recognition legislation has jumped accordingly: an analysis of legislative lobbying filings from November 2020 shows that mentions of facial recognition technology more than quadrupled between 2018 and 2019. It’s surely no coincidence that IBM and Microsoft – two major FRT contractors – issued public statements upon Joe Biden’s election as president, urging the incoming administration to craft facilitative regulations governing facial recognition.
Not everyone has embraced FRT, however. The cities of Oakland, Portland, San Francisco, and Seattle have imposed blanket bans on the technology, prohibiting its use by any government agency within their jurisdictions. Many state legislatures are conducting their own investigations into FRT and potentially unlawful uses of the data it collects. At the start of 2021, New Jersey’s attorney general placed a moratorium on police use of Clearview and announced an investigation into “this product or products like it.”
Similarly, at the start of February, the Internal Revenue Service caved to enormous public pressure and cancelled plans to use facial recognition to confirm the identities of Americans using its website to pay taxes or access documents.
By contrast, New York Mayor Eric Adams has repeatedly made clear his commitment to expanding the city’s FRT capabilities even further since winning office in November 2021. In late January, at a press conference announcing the release of a “blueprint to end gun violence,” he promised to “move forward on using the latest in technology to identify problems, follow up on leads and collect evidence.”
“From facial recognition technology to new tools that can spot those carrying weapons, we will use every available method to keep our people safe,” said Adams, a retired police captain turned politician. “If you’re on Facebook, Instagram, Twitter – no matter what, they can see and identify who you are without violating the rights of people… It’s going to be used for investigatory purposes.”
The blueprint states that “new technology” will be used in a “responsible” way, solely to “identify dangerous individuals and those carrying weapons,” and never as “the sole means” on which arrests are based, “but as another tool as part of larger case-building efforts.” It’s difficult to see how a blanket, indiscriminate bulk information harvesting mechanism could ever be used responsibly or quite so narrowly – and given the acknowledged wider role in “case-building” and “investigatory” efforts, taking such assurances at face value requires a remarkable feat of willpower.
One key method by which spying agencies build dossiers on targets is through extensive surveillance and cultivation of their friends, family, and associates – infiltration, covert and overt, of wider networks surrounding them. It’s no doubt for this reason that since the early 1960s, the CIA has played a central role in developing FRT. Langley’s never-ending quest to master this disquieting art may account for its ever-increasing public presence.
What better way to legitimize an invasive, civil liberty-infringing technological tool than by making it commonplace? In the next part of this investigation series, The Detail will address this question and many other issues. Stay tuned – there’s more to come.