Regulating social media threatens free speech and competition
The PM’s ethics watchdog is preparing to call for a major change in the way social media is regulated. It’s thought that Lord Bew, who chairs the Committee on Standards in Public Life, will recommend that social media firms like Facebook and Twitter be made liable for the content posted on their platforms. In other words, he wants to reclassify social media platforms as publishers, holding them to the same standards as newspapers.
If you care about free speech and innovation, this should worry you. Currently, if someone posts a defamatory comment or shares extremist material on Twitter, they alone (and not Twitter) are liable for legal damages or criminal prosecution. Twitter may of course choose to ban extremist accounts for violating its Terms of Service, but that decision lies with Twitter alone. If the law changes, Twitter could be held liable if it fails to take the offending content down quickly.
This may seem like a small change, but it would have massive implications for the internet ecosystem.
David Post, a professor specialising in internet law, makes the case that an obscure provision of the 1996 Telecommunications Reform Act (Section 230) has been essential to the growth of platforms like Facebook, Twitter, YouTube and Tumblr.
The provision “immunizes all online ‘content intermediaries’ from a vast range of legal liability that could have been imposed upon them, under pre-1996 law, for unlawful or tortious content provided by their users — liability for libel, defamation, infliction of emotional distress, commercial disparagement, distribution of sexually explicit material, threats or any other causes of action that impose liability on those who, though not the source themselves of the offending content, act to ‘publish’ or ‘distribute’ it.”
He argues that treating web firms as platforms and not publishers “created a trillion or so dollars of value”. Imagine if Facebook, Tumblr, Twitter and YouTube were liable to be sued or fined whenever a user posted extremist, racist, or defamatory material. As Post puts it, “The potential liability that would arise from allowing users to freely exchange information with one another, at this scale, would have been astronomical”. With those risks, it’s easy to imagine venture capitalists passing up the opportunity to invest in an early-stage Facebook, Twitter or YouTube.
Eric Goldman, another professor of internet law, argues that treating online platforms as publishers would reduce competition and entrench the major players. Under the current law, “new entrants can challenge the marketplace leaders without having to match the incumbents’ editorial investments or incurring fatal liability risks.”
Beyond the effect on new entrants, there’s a real risk that platforms will restrict the free flow of ideas by over-enforcing rules against extremist and defamatory content. We have already seen multiple cases of platforms overreacting and banning users for seemingly mild violations. For instance, the comedian Marcia Belsky was banned from Facebook for 30 days for saying “men are scum” in response to death and rape threats. Unlike pornographic content, which can be identified algorithmically, hate speech, threats and defamation can only be identified in context. If the potential liability is high and policing abuse is labour intensive, then firms may be incentivised to shoot first and ask questions later. That could have a chilling effect on free speech.
Lord Bew may be frustrated by what he sees as inaction by social media companies (despite the fact that over 100,000 people worldwide are employed in content moderation). But he shouldn’t throw the baby out with the bathwater. Treating online firms as publishers would reduce competition, deter innovation, and threaten the free flow of ideas online.