Xenia Gonikberg is a sophomore journalism and sociology double major.
The rise of misinformation on social media platforms has inspired an ongoing debate surrounding free speech protections, especially since many users turn to social media as an informational tool.
On Feb. 17, Facebook temporarily discontinued service in Australia after the country passed a law that would require the company to pay publishers for their content, at rates regulated by the government.
According to the Australian government, the law makes negotiations between news sites and platforms fairer by giving news organizations more leverage and bargaining power. Facebook was unwilling to comply with these terms and blocked news articles on its platform in response. In the information landscape, this means that Facebook would have to do more fact-checking to ensure it is paying for reliable content.
This left millions of Australians without links to news articles on Facebook, making valuable information regarding vaccines and other emergency services inaccessible on the platform. Instead, posts containing potentially deceptive details or links to potentially misleading stories gained popularity.
It wasn’t until Feb. 22 that Facebook and Australia struck a deal that restored access in exchange for giving the company more say in how the law would be applied. Facebook essentially won that deal, keeping both its negotiating power and its control in the tech market.
This behavior isn’t new. Facebook has also been criticized for its delayed response to misinformation, especially about vaccine distribution and rollout. The tech giant has been notoriously hesitant to adopt policies that would curb misinformation for fear of infringing on individuals’ rights.
Facebook’s policies also include loopholes that allow groups to exploit the openness of the platform in the form of hate speech.
Their policy regarding hate speech reads, “speech that might otherwise violate our standards can be used self-referentially or in an empowering way. Our policies are designed to allow room for these types of speech, but we require people to clearly indicate their intent.”
The vagueness of this statement implies that users are free to share hateful and misleading claims without facing consequences. This becomes an obvious issue when Facebook users share conspiracy theories that target and discriminate against minority groups.
While Facebook states that it protects people attacked on the basis of their ethnicity, sexual orientation, religious background and other characteristics by taking down harassing posts, the truth of the matter is that it does not meet these standards. This is evident when Facebook groups that claim to support political agendas post racist and xenophobic content to gain a larger following.
While Facebook, as a private company, isn’t subject to the same government regulations as traditional news organizations, it is still a big player in the news industry. It has a responsibility to its consumers to ensure that they’re consuming factual content.
It all begins with the platform recognizing its position as an information hub and the power it wields over its users. Although Facebook does not have the power to stop the spread of misinformation completely, it can significantly reduce the extent to which harmful lies spread.
If Facebook wants to act like a big player, then it should meet the ethical responsibilities that come with that status.
Facebook needs to take responsibility for curbing misinformation on its platform, because its large audience can easily spread whatever appears in its news feeds.