Wednesday, December 9, 2020

The Deep Wrong of Kyle on Platform Speech Governance

Kyle Langvardt (@kylelangvardt) recently wrote a piece for Lawfare on platform speech governance - in essence, how and when the Government can make censorship decisions for social media companies. He builds the argument on theories of how the First Amendment is interpreted and applied (he is, in fact, a legal specialist in First Amendment law). His core claims:

  • Editing (by social media companies) is not speech (because if it were, any regulation would have to pass strict scrutiny, which it probably would not)
  • Code is not speech (because not all language is speech, and therefore government regulation of social media companies' code is fine)
  • The sheer scale of social media companies means that their customers' speech overrides the companies' own First Amendment rights (an argument he leaves mostly vague)

Each of these arguments is nonsense, but he makes them because, as he states quite clearly, the ends justify the means.

He states directly on his podcast that he does not believe there is any particular ideological intent behind content moderation at modern social media companies, but that he would be worried if the Mercer family owned them. Yet we live in a world where the top media and news companies have long been owned and controlled by just a few powerful families. He is skeptical that market pressure from the public does anything, because the gravity of network effects is too strong - but that is a feeling, not a data-based analysis. Social media networks go in and out of style all the time, and they add and remove content moderation features under pressure from their customers.

But let's start at the top: editing is speech, and so is code. Writing a neural network that scans all of Trump's tweets and downgrades any tweet matching a particular political viewpoint is an act of expression. It's highly ironic that a law professor would reach for arguments with such a keyhole-sized view of human expression.

A banana taped to a wall can be art in the same way. It's not just the code itself that is expression, but also my choice to write that particular code.
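To make that concrete, here is a minimal hypothetical sketch - the flagged phrases, demotion factor, and visibility floor are all invented for illustration, not drawn from any real platform's system. The point is that every constant in even a toy downranking rule is an editorial judgment:

```python
# Hypothetical sketch, not any real platform's moderation system.

FLAGGED_PHRASES = ["rigged election", "total witch hunt"]  # which phrases to flag: a choice
DEMOTION_FACTOR = 0.5   # how hard to demote per hit: a choice
VISIBILITY_FLOOR = 0.1  # the score at which a tweet is effectively buried: a choice

def rank_score(tweet_text: str, base_score: float) -> float:
    """Return a tweet's feed-ranking score after moderation is applied."""
    text = tweet_text.lower()
    hits = sum(phrase in text for phrase in FLAGGED_PHRASES)
    # Each hit halves the score; the shape of this curve expresses a view
    # about how strongly flagged content should be suppressed.
    return max(base_score * DEMOTION_FACTOR ** hits, VISIBILITY_FLOOR)

print(rank_score("Another RIGGED ELECTION, sad!", 1.0))  # -> 0.5 (demoted)
print(rank_score("Lovely weather today", 1.0))           # -> 1.0 (untouched)
```

Deciding what goes in FLAGGED_PHRASES, how steep the demotion curve is, and where the floor sits is editorial through and through - the code is just the medium those judgments are written in.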

It's hard to convey how tortured the paper's arguments are - he throws in a straw man that Google could claim buying office space in a particular city is an editorial choice. A better analogy might be a restaurant owner picking the decor and requiring that loud patrons keep their conversations down: a business policy that is itself expressive.

Apple made a First Amendment argument in the San Bernardino case - essentially, that the Government forcing it to write a backdoor was a violation of its First Amendment rights. A similar argument applies here, perhaps even more clearly.

I also don't think there's any serious reason why scale matters - even Parler has 10M users. I'm not sure there is a scale threshold anyone could agree on, and I don't think we want courts calibrating First Amendment rights to a company's market share or stock valuation.

What is most worrying about Kyle's paper, however, is not the speciousness of his arguments but the collateral damage of his recommendations. Gutting the doctrine against prior restraint because you are scared of "viral content" opens a door to unknown horrors.

The ends, in this case, not only fail to justify the means, but lead to unexplored dangers in government regulation of public content and of the platforms we are allowed to build. For that reason, I highly recommend applying strict scrutiny not just to this paper's recommendations, but to the rest of the Lawfare content moderation project.

-----

Listening to the podcast while you run down the beach is the best way to analyze this piece.
