(by Gregory Barber, Wired) – The San Francisco Board of Supervisors voted Tuesday to ban the use of facial recognition by city agencies, a first-of-its-kind measure that has inspired similar efforts elsewhere.
San Francisco’s ban covers government agencies, including the city police and county sheriff’s department, but doesn’t affect the technology that unlocks your iPhone or cameras installed by businesses or individuals.
It’s part of a broader package of rules, introduced in January by supervisor Aaron Peskin, that will require agencies to gain approval from the board before purchasing surveillance tech and will require that they publicly disclose its intended use. In coming weeks, Oakland and Somerville, Massachusetts, are expected to consider facial-recognition bans of their own.
Facial-recognition technology has been used by law enforcement to spot fraud and identify suspects, but critics say that recent advances in AI have transformed the technology into a dangerous tool that enables real-time surveillance. … Privacy advocates see banning facial recognition as a unique opportunity to prevent the technology from getting too entrenched. “We’re doing it now before the genie gets out of the bottle,” says Brian Hofer, an attorney who heads Oakland’s Privacy Advisory Commission, which spearheaded the legislation in that city.
In San Francisco, the police department says it doesn’t currently use facial recognition, although it tested the technology on booking photos between 2013 and 2017. The Sheriff’s department, which is included under the board’s unique city-and-county authority, says it doesn’t either. “We will comply with whatever the requirements are,” says spokeswoman Nancy Crowley, adding that officers are equipped with Axon body cameras that don’t use facial-recognition technology. (Last week, the California State Assembly passed a ban on biometric surveillance in police body cameras.) San Francisco’s ban will not affect federal agencies, including agents at the airport and ports.
There was little organized opposition to the proposal, but one local group, Stop Crime SF, argued a ban would remove a potential deterrent to property crime and hamper the collection of evidence. The legislation was amended to clarify that private individuals can still share tips with law enforcement, although agencies can’t actively solicit information that they know comes from a facial-recognition system. Agencies are also required to ask how the information was obtained, in order to track how often facial recognition was involved. “If there’s a huge uptick, then that might mean we’re shoving facial recognition into a less-regulated private sector,” says Lee Hepner, a legislative aide to Peskin.
Joel Engardio, vice president of Stop Crime SF, says he’s largely satisfied with the amended bill. “We agree with the concerns that people have about facial ID technology. The technology is bad and needs a lot of improvement,” he says. While the group would have preferred a moratorium while the city worked out regulations, rather than a ban, he says he supports the broader set of surveillance rules.
Makers of facial-recognition systems have been notably silent in the local debates thus far. But Benji Hutchinson, vice president of federal operations for NEC, a major supplier of facial-recognition technology, says the industry is watching closely. “I think there’s a little bit too much fear and loathing in the land around facial-recognition technology,” he says. He’s concerned about the potential for “copycat bills” in other cities that could result in a patchwork of local laws. NEC is pushing for a federal law that would preempt local and state laws, require systems to be tested for accuracy by outsiders, and include new rules protecting against bias and civil rights abuses.
In a statement, Daniel Castro, vice president of the Information Technology and Innovation Foundation, a think tank backed by tech companies including Amazon, which makes Rekognition facial-recognition software, called for “safeguards on the use of the technology rather than prohibitions.” Castro called the ban a “step backward for privacy,” because it will leave more people reviewing surveillance video.
At the state level, efforts to regulate facial recognition in Washington crumbled after Microsoft and Amazon, among others, opposed a proposed moratorium in favor of a bill with a lighter regulatory touch. In Massachusetts, which is considering an ACLU-backed moratorium on facial recognition until the state can develop regulations including things like minimum accuracy and bias protections, local police departments frequently partner with the state’s Registry of Motor Vehicles to identify suspects.
Kade Crockford of the ACLU of Massachusetts, which is working with Somerville officials on a proposal that would forbid such data sharing, is optimistic about the potential for cities to lead the way. “I’m not aware of any other example of people really successfully intervening in this very fast moving train of tech determinism and throwing a democratic wrench in the gears,” Crockford says. …
From wired.com. Reprinted here for educational purposes only. May not be reproduced on other websites without permission from Wired.
1. The first paragraph of a news article should answer the questions who, what, where and when. List the who, what, where and when of this news item. (NOTE: The remainder of a news article provides details on the why and/or how.)
2. a) Which city agencies are included in the ban?
b) Who will not be affected by the ban?
3. For what reasons has the Board of Supervisors banned law enforcement from using facial recognition technology?
4. What stipulations are included in the city ordinance?
5. a) How did the police department respond?
b) How did the Sheriff’s department respond?
6. What do you think? –
a) Should every city ban law enforcement’s use of facial recognition technology? Explain your answer.
b) Would it make a difference if the technology were 100% accurate? Explain your answer.
c) Every store (Walmart, Walgreens, CVS, grocery stores, retail outlets…) and every tech company (Google, Apple, Amazon, social media) has cameras filming you. It is unknown which companies also use facial-recognition software. (Apple does for the iPhone; Amazon does in its Ring doorbell and its Rekognition software.)
Should private companies and marketers be banned from using facial recognition software also? Is this a privacy issue? Does only the government have potential to abuse the technology?
Read about Amazon Rekognition.
Is it okay for Amazon, Google and other tech companies to use facial-recognition software in their products (cellphones, Ring doorbell, …) but not law enforcement?
7. Should we trust private companies’ use of facial recognition and real-time surveillance? What should, or could, be done to stop large corporations such as Facebook, Google and Amazon from sharing facial-recognition data and conducting their own real-time surveillance?
The San Francisco Board of Supervisors approved the Stop Secret Surveillance ordinance Tuesday, culminating a reexamination of city policy that began with the false arrest of Denise Green in 2014. Green’s Lexus was misidentified as a stolen vehicle by an automated license-plate reader. She was pulled over by police, forced out of the car and onto her knees at gunpoint by six officers. The city spent $500,000 to settle lawsuits linked to her detention.
Since then, San Francisco officials determined flaws in the license-plate reader were just part of a wider potential for abuse with Big Brother-style surveillance capabilities. With new technologies increasingly making it possible to identify people, places and objects, the city decided to impose [a ban on the use of these surveillance technologies].
The U.S. Department of Justice said the technology is not always accurate and that implementation poses significant challenges to civil rights.
“The potential for misuse of face recognition information may expose agencies participating in such systems to civil liability and negative public perceptions,” according to a December 2017 report on facial recognition by the Bureau of Justice Assistance. “The lack of rules and protocols also raises concerns that law enforcement agencies will use face recognition systems to systematically, and without human intervention, identify members of the public and monitor individuals’ actions and movements.”
One of every two Americans already is captured in a face-recognition database accessible to law enforcement, according to a 2016 study at Georgetown Law. The photos are mostly stored in the Federal Bureau of Investigation’s Next Generation Identification–Interstate Photo System, which holds about 411 million individual photos. In a May 2016 report, the U.S. Government Accountability Office admonished the FBI for failing to disclose the extent to which it uses the technology, and to ensure privacy and accuracy. (from a May 14 Los Angeles Times report) [No word on the number of people around the world that Google, Apple, Amazon (through its Ring doorbell and Amazon Rekognition technology…) and other tech companies have in face-recognition databases.]