Tech giants such as Facebook and Twitter have consistently given the same response when questioned about their role in disseminating news and regulating content posted on their services: we are a platform, not a publisher.
In doing so, they are often able to rely on various legislative protections relating to content posted on their platforms. However, in a California court case in 2018, Facebook’s legal defence team argued that the company does make some editorial decisions but is protected by the First Amendment (which guarantees freedom of speech). Facebook went a step further in that direction this month by announcing the creation of an independent “Oversight Board”.
The board was created in response to criticism of Facebook’s failure to deal with mass disinformation and fake news. Facebook has revealed 20 of the proposed 40 members of the Oversight Board, which it says is intended to be “almost like a Supreme Court”. The board will hear appeals against content-removal decisions made by Facebook’s in-house moderation teams.
In an opinion piece published in the New York Times this month, the board’s co-chairs said it will be responsible for reviewing content “in areas such as hate speech, harassment, and protecting people’s safety and privacy. It will make final and binding decisions on whether specific content should be allowed or removed from Facebook and Instagram.”
The board is supposed to be independent of Facebook, and its members cannot be removed by the company. They are “committed to freedom of expression within the framework of international norms of human rights”, and will “make decisions based on those principles and on the effects on Facebook users and society, without regard to the economic, political or reputational interests of the company”.
Facebook has consistently relied on the protection afforded to it by various items of legislation, both here and abroad, as a defence in media and communications cases concerning its content. This has often proved frustrating for individuals whose reputations are attacked on the platform, or whose privacy is breached there. While Facebook’s website offers a series of online forms to facilitate the removal of content constituting harassment or hate speech, the company has so far been notoriously reluctant to respond to calls for it to regulate what is posted on the platform.
It remains to be seen how the board will function in practice. Given that it is (so far) made up of constitutional law experts, human rights experts and journalists, its members may struggle to make decisions at the speed a fast-paced social media company requires. It also remains to be seen how independent the board will be from Facebook in practice, and whether it will actively make decisions that could undermine the company’s interests.
Nevertheless, given the likely delay in implementing the Online Harms Bill, the new board offers reputation management lawyers another tool for removing private or reputationally damaging content from Facebook and Instagram.
This article was written by our paralegal Palomi Kotecha.