There’s been something really unsettling about Facebook for a long time now: its response to images portraying violence against women. That such posts end up on the site isn’t the problem—that’s always going to happen on a network of Facebook’s size—nor is semi-randomly assigned adverts being placed on those pages. The problem is Facebook’s reaction when images and posts are flagged by the community. To date, the company has generally said it has to balance the right to free speech against the potential to cause offence; by contrast, pictures of breastfeeding are removed as a matter of routine.

Women, Action & the Media decided to take action, and started hitting Facebook where it hurts: in the bank account. Its campaign targeted advertisers rather than Facebook, and although many companies weaselled out of doing anything, some big names pulled ads, including Nissan UK and Nationwide UK. As WAM! has reported, Facebook has finally listened and posted a statement, Controversial, Harmful and Hateful Speech on Facebook.

It’s great to see WAM!’s success, and that Facebook is now finally responding, albeit only after a number of companies pulled ads rather than beforehand. However, the statement was quite telling about how the network views discrimination:

Many different groups which have historically faced discrimination in society, including representatives from the Jewish, Muslim, and LGBT communities, have reached out to us in the past to help us understand the threatening nature of content, and we are grateful for the thoughtful and constructive feedback we have received. […]

Facebook’s mission has always been to make the world more open and connected. […] To facilitate this goal, we also work hard to make our platform a safe and respectful place for sharing and connection. This requires us to make difficult decisions and balance concerns about free expression and community respect. We prohibit content deemed to be directly harmful, but allow content that is offensive or controversial.

That conclusion has generally held true on Facebook, but, for some reason, not where women are concerned. There are countless examples of rape ‘jokes’ coupled with horrific imagery that have been deemed acceptable by Facebook moderators and admin staff, even when flagged as unacceptable by hundreds of people. In some cases, these images have been direct threats against individuals, including photographs altered to show someone with serious injuries. Again, by contrast, a picture of a breastfeeding woman is typically removed immediately, presumably because that is somehow “directly harmful” rather than “offensive or controversial”.

And yet the statement then directly contradicts Facebook’s own actions:

We define harmful content as anything organizing real world violence, theft, or property destruction, or that directly inflicts emotional distress on a specific private individual (e.g. bullying).

If that’s the case, why have so many images—including those targeting individuals—been allowed to stand, or at least been left online until hundreds of people complained about them? It shouldn’t take a social networking campaign to get a social network to remove a disgusting, bullying, hugely offensive, threatening image.

Facebook’s statement at least admits that its

systems to identify and remove hate speech have failed to work as effectively as we would like, particularly around issues of gender-based hate.

The network says it needs to do better. It will therefore review and update the guidelines its team uses to evaluate reports of violations of its community standards around hate speech, integrating advice from representatives of the women’s coalition and other groups that have historically faced discrimination. Training will be updated, and those creating content will be held accountable, although I’m not sure some people will care, even if they’re forced to use their real identities to post hate.

Still, it’s a start, and perhaps the beginning of the network finally tackling problems that should have been dealt with long ago.