Twitter's New Vague, Unenforceable Rule Already Failed The Test

By Jen Maravegias | Social Media | December 1, 2021 |


On the heels of Jack Dorsey’s exit as Twitter CEO, the Twitter Safety Team published an update yesterday to the platform’s rules around sharing images. The new policy went into effect immediately, and raised alarms just as fast.

In theory, not being able to share pictures or videos of people without their consent is a great idea. But it’s unrealistic unless they strap down the Safety Team and force them to review the feed, Clockwork Orange-style. As soon as that Tweet went out, activists and general users saw red flags and started asking (unanswered) questions about how this rule would affect social justice efforts that use the platform to identify white nationalists at rallies, record police activity, and help victims of racially motivated crimes.

Twitter said they’ll require a first-person report of the photo or video in question, and that the media will be reviewed before any enforcement action is taken. They also claimed they’ll take into consideration whether the image is publicly available and/or being covered by journalists, whether the image and the accompanying Tweet text “adds value to the public discourse,” and whether it’s being shared in the public interest or is relevant to the community. They also mention possible exceptions “in order to enable robust reporting on newsworthy events and conversations that are in the public interest.”

But even before the day ended, Twitter user @SkySpider_, a photojournalist who uses the platform to document the activities and crimes of far-right extremists, had a Tweet they posted in September taken down because it violated this new rule.

The Tweet allegedly included a video, recorded in public, of two right-wing extremists planning an assault. Folks like Andy Ngo have already weaponized the Digital Millennium Copyright Act to mass-report pictures and videos of themselves on social media platforms, making it more difficult to expose, among other things, their ties to dangerous groups. Now they have a new tool in their arsenal to prevent their names and faces from being distributed. Good job, Twitter.

As I said, in theory, this could be great. It could help victims of revenge porn, and might be applicable to, oh, say, spoof-anime videos of members of Congress being murdered by … other members of Congress. But it’s too vague, and it leaves the door open to abuse by the very bad actors it’s meant to protect us from. Anyone who has spent any time reporting Tweets for violations of existing rules already knows how frustrating it is to watch antisemitic, misogynistic, racist, homophobic content pass without being in violation of “Twitter’s community standards.” I can’t imagine this is going to work out any better for anyone except the bad guys.

Sigh. All we really want is an edit button, Twitter.