All social media platforms have some form of reporting system that allows you to flag or report posts that could potentially be harmful. This can include harassment, revenge porn, the posting of personally identifying information, and a long list of other offenses. Facebook, however, seems to have a fatal flaw within their reporting system, one that allows seriously damaging posts to slip past the review team and remain public despite complaints. On October 26th, a post was shared by a page called Justice for Cecil the Lion, an anti-hunting page devoted to ending big game hunting and poaching. The majority of the page’s posts revolve around the Cecil fiasco or hunting and poaching laws and regulations around the world. This particular post, however, was something different.
The post was a link shared from the website themonsteramongus.com, a site also dedicated to eradicating hunting and poaching around the world and claiming to be a source for “exposing” the evils of animal cruelty and abuse. The link that appeared on the Justice for Cecil the Lion page wasn’t information about poaching, a story about a hunting trip, or an article condemning hunting. Instead, it was a list of personal information from individuals who had legally obtained hunting licenses in order to participate in Florida’s recent bear hunting season. The list included these individuals’ names, phone numbers, and email addresses, and both themonsteramongus.com and Justice for Cecil the Lion shared it for the purpose of urging others to use the hunters’ personal information to harass and bully them.
Reporting the Problem to Facebook
Whether or not hunting is morally acceptable isn’t the issue in this particular story; there are millions of varying opinions when it comes to legal hunting, both big game and otherwise. The issue here is that thousands of individuals who legally obtained hunting licenses had their private information spread around the internet, with people urging everyone to email them, call them, harass them, and keep sharing their information so others could do the same. A number of people attempted to report the post to Facebook, assuming that publishing identifying information without permission must be against Facebook’s guidelines. While it’s true that publishing personal or identifying information without an individual’s permission is against Facebook’s guidelines, those reporting the post still ran into a major problem when attempting to flag it for review: there was simply no option within the reporting system that pertained to the post in question. When a user initially clicks the “Report Post” option, they are met with a pop-up window asking what the problem is, with only three options to choose from:
- It’s annoying or not interesting
- I don’t think it should be on Facebook
- It’s spam
Out of these three options, one would assume that the aforementioned post belongs in the middle category, along with other forms of harassment or bullying. However, once you choose this option you are met with a second, more specific set of options. The problem is that these options only cover a limited range of situations, and they come nowhere near covering all of the behavior that goes against Facebook’s guidelines. What users found, when trying to report the post containing the hunters’ names, was that there wasn’t really an option that applied. The second set of options is:
- It’s annoying or distasteful
- It’s pornography
- It goes against my views
- It advocates violence or harm to a person or animal
- It’s a false news story
While these options may cover a lot of situations, they certainly don’t cover general harassment or the posting of a person’s personal information, two things that happen on the internet and social media quite often. Some of these options lead to Facebook simply telling you to block the person, page, or group the post came from, or giving you the option to message them and ask them to remove the offending post. Others give you the opportunity to ask Facebook to review the content if you feel it violates their guidelines. In the case of the bear hunter post, many users ended up selecting “It advocates violence or harm to a person or animal,” mostly because it was the closest option available.
Facebook’s Responses Reveal a Fatal Flaw in Their Reporting
Now, Facebook’s guidelines do, in fact, state that posting personal information without permission is against their rules. Due to their poor reporting system, however, this particular post was reviewed as something “advocating violence or harm to a person or animal” rather than as what it actually was. In every instance of this post being reported, the user received the same response from Facebook: “Thank you for taking the time to report something that you feel may violate our Community Standards. Reports like yours are an important part of making Facebook a safe and welcoming environment. We reviewed the share you reported for containing graphic violence and found it doesn’t violate our Community Standards.” This is despite the fact that the very Community Standards Facebook mentions and links to in that response clearly state:
“We don’t tolerate bullying or harassment. We allow you to speak freely on matters and people of public interest, but remove content that appears to purposefully target private individuals with the intention of degrading or shaming them. This content includes, but is not limited to:
- Pages that identify and shame private individuals,
- Images altered to degrade private individuals,
- Photos or videos of physical bullying posted to shame the victim,
- Sharing personal information to blackmail or harass people, and
- Repeatedly targeting other people with unwanted friend requests or messages.”
Even after some users replied to this response, explaining why they had selected the option they did, noting the lack of a qualifying option in the reporting system, and pointing Facebook to its own standard against sharing personal information, there was no further response from Facebook and no action was taken. Today, the post still remains on the Justice for Cecil the Lion page, where users’ comments range from urging others to bully those on the list to links to the hunters’ personal Facebook pages, yet another way for members of the group to harass the bear hunters.
What Does This Mean for Facebook Users?
Luckily, the following day the content was removed from the website where it was originally posted. Information on themonsteramongus.com shows that WordPress, the platform the site runs on, removed the content after receiving a complaint about the posting of personal information. While this is a good thing, especially for those whose names and information ended up on that list, it says something troubling about Facebook and their reporting system. It took just one complaint for WordPress to properly look into the matter and take action. Meanwhile, Facebook doesn’t even supply users with the options needed to report something of this nature, which leads to posts like this one slipping through the cracks and remaining live. The content on themonsteramongus.com was still present when Facebook reviewed the complaints regarding the post, and even now a Facebook user has no way of knowing that the information has been removed until they follow the link to its original source.
Why are Facebook’s reporting options so sparse? How do they expect their users to feel protected from harassment when there isn’t even an option to report a harassing post or one that contains your personal information? If the problem stopped at the reporting options themselves, it would be bad enough, but the fact is that Facebook is allowing harmful posts to remain live because those options don’t accurately reflect what’s happening on their site. It seems the post wasn’t removed not because it didn’t violate Facebook’s community standards (it obviously did), but because of how it was reported. With only a limited set of options available, those who reported the post had to choose the one closest to the situation: the option stating that the post advocated violence or harm to a person or animal. The review process, then, apparently consisted of Facebook scanning the reported post for depictions of violence, and when none were found, deeming it fit for the site and calling it a day. When Facebook reviewed this post, they were only looking for what the user had selected when the report was made; they didn’t look for identifying information or harassment, because that’s not what they were told to look for.

If we, as Facebook users, aren’t able to report instances of bullying, harassment, and the posting of personal information, how are we supposed to feel comfortable using the site? On top of that, how many other scenarios are simply missing from Facebook’s reporting system? Their community guidelines span multiple pages and sections, while their reporting system offers a mere handful of options, some of which don’t even give you the chance to request a review.
While it isn’t Facebook’s job to protect us from all of the nasty things on the internet, we should at least be able to trust that they’ll remove our personal information if it ends up being posted, especially when it’s posted with the express intention of enabling harassment. This situation has revealed a major flaw in Facebook’s reporting system, their review process, and the way they handle complaints from users. There needs to be a broader selection of options to choose from when an individual reports a post, or at least a way to type out what the problem is when the pre-made options don’t fit. It seems that in an attempt at automation and efficiency, Facebook has abandoned any real enforcement of their own guidelines and rules, leaving their users open to a number of bad situations.