#BanFacialRecognition: Facial recognition is anti-harm reduction and inconsistent with consent culture

DanceSafe is a public health nonprofit that provides harm reduction services, such as fact-based drug education and drug checking, at live music and other events. However, we believe that our mission extends into cultural harm reduction, including providing consent education and educating our community about highly problematic developments in the live event industry so that we can collectively respond and protect our community.

In 2018, Ticketmaster and Live Nation (a single entity since their 2010 merger, which at the time gave them roughly 80% of the ticketing market) released an investor report stating that they had invested in Blink Identity, a facial recognition firm, with the intention of replacing physical and digital tickets with biometric identification. This alarming report suggests that an attempt to mainstream facial recognition looms in the not-so-distant future, and we are compelled to contextualize the technology within our harm reduction and consent education mission.

Facial recognition represents a dystopic advancement of the police state. In a country where nearly one in three people has been arrested by the age of 23 and a half-century War on Drugs has disproportionately targeted communities of color, there is no justice in mainstreaming facial recognition technology that will likely share and compare biometric databases with those of law enforcement.

When you think of outstanding warrants, your mind may go to robbery, identity theft, or violent crime. In reality, however, most law enforcement resources go toward enforcing lower-level offenses, especially drug possession and related convictions. In the United States, people of color and low-income communities have suffered the most from the War on Drugs, but it is also fought internationally with catastrophic consequences. We believe the War on Drugs is inherently unjust, and by Nixon adviser John Ehrlichman’s own admission, it was begun to break up communities of color and the anti-war movement:

“You want to know what this was really all about,” Ehrlichman, who died in 1999, said, referring to Nixon’s declaration of war on drugs. “The Nixon campaign in 1968, and the Nixon White House after that, had two enemies: the antiwar left and black people. You understand what I’m saying. We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities. We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did.”

Beyond the injustice of the War on Drugs, facial recognition technology is highly unreliable – research has demonstrated that facial recognition incorrectly identifies targets up to 98% of the time, especially if the target is not a white male. Mainstreaming this technology is particularly dangerous for people of color and gender-nonconforming people because the artificial intelligence powering facial recognition was trained on databases of overwhelmingly white, male faces. It often cannot recognize Black faces as human, cannot distinguish between non-white faces, and regularly misidentifies non-male faces. This tendency mirrors the human cognitive biases of outgroup homogeneity and, more specifically, cross-race bias, which already contribute to the over-criminalization of communities of color – and automating policing with facial recognition will only increase harassment and false identification of people of color.

The crimes most people are accused of are related to the War on Drugs and to poverty. If an AI uses these government databases to identify “criminals” at live music events, there is a high probability that fans of color will be harassed and targeted by law enforcement despite their innocence. Considering that police violence is a leading cause of death for young Black men in the United States and that ICE regularly deports US citizens who were misidentified as undocumented immigrants, this is literally a matter of life and death for people of color.

We have already given up so much liberty for the sake of “security,” but the mainstreaming of facial recognition must be where this stops. Lack of regulation has already blurred the line between the public and private sectors, leading us into dangerous and uncharted waters with Amazon’s integration of Ring into police surveillance networks – a terrifying leap toward dystopia. No regulation is strict enough for such an inherently toxic and problematic technology. We must be very wary of the age-old police-state arguments of “security and safety” and ban facial recognition.

Beyond the inherent harmfulness of automated policing, facial recognition is also entirely inconsistent with consent culture. People can only truly consent when they fully understand what is being asked of them, and burying biometric harvesting permissions inside the terms and conditions of an event ticket purchase is deceptive and dangerous. We already know that commercial entities share (and sell) facial recognition data without individual consent, and currently there are no regulations that address this issue. Even if “consent” is granted to these companies and regulations are implemented, individuals will likely never know how their data is being used until there is a horrifying data breach or a highly visible scandal (such as Facebook’s Cambridge Analytica incident).

Facial recognition technology exists in a regulatory system that has not evolved with the digital world. The technology is invasive, and databases are often insecure with no access auditing, so there is no way to monitor who is accessing the information or how it is being used. Critics of facial recognition often conclude that the solution is increased regulation, but the digital rights advocacy nonprofit Fight for the Future (the organization that rose to prominence through the SOPA protests and formed the backbone of the net neutrality protest movement during the Obama administration) argues that regulations cannot fix a fundamentally flawed and unjust technology.

In response to the rise of facial recognition technology in the US and concerning incidents in China, including automated policing that led to arrests, Fight for the Future launched their #BanFacialRecognition campaign. In the spirit of the nonprofit’s grassroots organizing strategy, live music fans were encouraged to reach out to their favorite festivals via email and social media and ask them directly: “Do you intend on allowing facial recognition at your event?” The response from most events has been a bewildered but resounding “No.” Interactions with large promoters such as Insomniac and AEG have also resulted in statements that they have no plans to use the technology. Despite these successes, some events and promoters have hesitated to respond. Evolving tactics, including urging festival sponsors to withhold contract renewals unless the event agrees to ban facial recognition, have proven effective.

The introduction of facial recognition into live events is early enough in its infancy that we, the fans, have the power to demand non-cooperation from our favorite festivals, venues, and promoters. The simplest thing you can do to help is to contact the festivals that are listed in red on the scorecard, or not listed at all, both by email and on social media. If you get a response, you can alert Fight for the Future via Twitter or by emailing festivals@banfacialrecognition.com. You can also sign #BanFacialRecognition’s petition.

Facial recognition is a technology that, once mainstreamed, can never be taken back. Creation of population-wide databases can never be undone, and even in a best-case political environment, there will always be toxic actors intent on weaponizing that information.

On October 23, Fight for the Future declared victory (for now) on the live music element of the #BanFacialRecognition campaign. Live Nation released the following statement:

“Ticketmaster is always exploring new ways to enhance the fan experience, and while we do not currently have plans to deploy facial recognition technology at our clients’ venues, rest assured, any future consideration would be strictly opt-in, always giving fans the right to choose.”

In just a few short weeks, this campaign organized fans and artists to prevent Ticketmaster’s facial recognition implementation by demanding non-cooperation from promoters, festivals, and venues. Fight for the Future continues to fight for a full ban on facial recognition. Follow them on Twitter to stay connected.

This is our community, and as the frontline generation, we have an obligation to keep it safe for each other and the next generation of music lovers.
