By Jen Maravegias | Social Media | December 1, 2021 |
On the heels of Jack Dorsey’s exit as Twitter CEO, yesterday the Twitter Safety Team published an update to the platform’s rules around sharing images. The change went into effect immediately, and raised alarms just as fast.
Sharing images is an important part of folks' experience on Twitter. People should have a choice in determining whether or not a photo is shared publicly. To that end we are expanding the scope of our Private Information Policy. 🧵
— Twitter Safety (@TwitterSafety) November 30, 2021
In theory, not being able to share pictures or videos of people without their consent is a great idea. But it’s unrealistic unless they strap down the Safety Team and force them to review the feed Clockwork Orange style. As soon as that Tweet went out, activists and general users saw red flags and started asking (unanswered) questions about how this rule would affect social justice efforts that use the platform to identify white nationalists at rallies, record police activities, and help victims of racially motivated crimes.
we are about 30 days out from this being used to take down a video of a cop or a random karen harassing someone black https://t.co/Ff0qLhrGz4
— Check out Wild Hunt: Waterdeep on DMsGuild (@KetracelBlack) November 30, 2021
Twitter said they’ll require a first-person report of the photo or video in question, and that the media will be reviewed before any enforcement action is taken. They also claim they’ll consider whether the image is publicly available and/or being covered by journalists, and whether the image and its accompanying Tweet text “adds value to the public discourse,” are being shared in the public interest, or are relevant to the community. They mention possible exceptions “in order to enable robust reporting on newsworthy events and conversations that are in the public interest.”
But before the day even ended, Twitter user @SkySpider_, a photojournalist who uses the platform to document the activities and crimes of far-right extremists, had a tweet from September — posted months before the policy existed — taken down under the new rule.
URGENT: As we feared, @TwitterSafety is already locking and suspending the accounts of extremism researchers under its new "Private Media" policy.
The video is from September (predating the policy) and shows two right-wing extremists IN PUBLIC, planning violent assaults. pic.twitter.com/dp7zlt1u4r
— Chad Loder (they/them) (@chadloder) November 30, 2021
The tweet allegedly included a video, recorded in public, of two right-wing extremists planning an assault. Folks like Andy Ngo have already weaponized the Digital Millennium Copyright Act to mass-report pictures and videos of themselves on social media platforms, making it more difficult, among other things, to expose their ties to dangerous groups. Now they have a new tool in their arsenal to keep their names and faces from being distributed. Good job, Twitter.
As I said, in theory, this could be great. It could help victims of revenge porn, and might be applicable to, oh, say, spoof-anime videos of members of Congress being murdered by … other members of Congress. But it’s too vague, and it leaves the door open to abuse by the very bad actors it’s meant to protect us from. Anyone who has spent any time reporting Tweets for violations of existing rules already knows how frustrating it is to watch antisemitic, misogynistic, racist, and homophobic content sail through without violating “Twitter’s community standards.” I can’t imagine this is going to work out any better for anyone except the bad guys.
Sigh. All we really want is an edit button, Twitter.