I’ve been arguing something pretty similar to this for a decade, although not on free speech / moderation grounds. It’s nice to see someone make the case in a thorough and detailed way, and I’ll probably point people to this in future. I’m posting a brief summary excerpt here, but I recommend the whole thing.
Some have argued for much greater policing of content online, and companies like Facebook, YouTube, and Twitter have talked about hiring thousands to staff up their moderation teams. At the same time, companies are investing in increasingly sophisticated technology, such as artificial intelligence, to try to spot contentious content earlier in the process. Others have argued that we should change Section 230 of the CDA, which gives platforms a free hand in determining how they moderate (or how they don’t moderate). Still others have suggested that there should be no moderation allowed at all, at least for platforms of a certain size, such that they are deemed part of the public square.
As this article will attempt to highlight, most of these solutions are not just unworkable; many of them will make the initial problems worse or will have other effects that are equally pernicious.
This article proposes an entirely different approach, one that might seem counterintuitive but could actually provide a workable plan that enables more free speech while minimizing the impact of trolling, hateful speech, and large-scale disinformation efforts. As a bonus, it also might help the users of these platforms regain control of their privacy. And to top it all off, it could even provide an entirely new revenue stream for these platforms.
That approach: build protocols, not platforms.
Protocols, Not Platforms: A Technological Approach to Free Speech | Knight First Amendment Institute