The Solution to Free Speech is a Functional Marketplace of Varied Venues

I believe in free speech. I believe in a free society where I have the right to say something that deeply offends you, and you may say something that deeply offends me. Censorship of the internet in countries like China is disturbing, and other jurisdictions (including the United States and the European Union) are slipping toward censorship one tiny step at a time.

At the same time, I believe in the right to be free from harassment. I believe in the right to be free from crazy, false nonsense showing up on my computer screen (if I don’t want it to). I also believe there is such a thing as “false” and “true,” as I explain in a chapter from my forthcoming book Should You Believe Wikipedia? Truth is socially constructed, and we can sometimes make wrong decisions about what we believe to be “true.” But truth exists, and all we can do is keep working hard to find it.

So how do we balance these competing desires? The answer is zoning. There need to be places on the internet with different rules for what it’s OK to say, and different standards for verifying claims and for politeness. Some of those places should be totally open, modulo respecting the most basic laws, like the right to honest dealing in business and freedom from libel. Other places should have standards.

For example, the subreddit r/science has over 21 million subscribers. Posts on r/science must link to a peer-reviewed scientific article published in the last six months in a journal with an impact factor of at least 1.5.  Comments must be about the science, and anecdotes and jokes are not allowed. The volunteer mods delete tons of great, interesting content. But that’s OK, because you can post that content on other subreddits like r/everythingscience or r/sciences, where the rules are laxer.  Reddit is one site on the internet that gets this right.  Different subs have different standards, and you can choose one that suits you or go ahead and create your own (as I suggest in my 1996 paper Finding One’s Own Space in Cyberspace).
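
To make the idea of zone-specific rules concrete, here is a minimal sketch of how standards like r/science’s might be encoded as an automated pre-filter. The function name, field names, and exact six-month cutoff are my own illustrative assumptions; in practice, enforcement is done by human moderators (with help from tools like Reddit’s AutoModerator), not by a simple function like this.

    from datetime import date, timedelta

    # Illustrative encoding of r/science-style submission rules.
    # All names and the day-count cutoff are assumptions for exposition.
    MIN_IMPACT_FACTOR = 1.5
    MAX_ARTICLE_AGE = timedelta(days=183)  # roughly six months

    def meets_submission_rules(links_to_peer_reviewed_article: bool,
                               journal_impact_factor: float,
                               publication_date: date) -> bool:
        """Return True if a post satisfies this zone's posting standards."""
        if not links_to_peer_reviewed_article:
            return False
        if journal_impact_factor < MIN_IMPACT_FACTOR:
            return False
        if date.today() - publication_date > MAX_ARTICLE_AGE:
            return False
        return True

    # A three-month-old article in a journal with impact factor 2.3 passes:
    print(meets_submission_rules(True, 2.3, date.today() - timedelta(days=90)))

The point isn’t the code; it’s that each zone can publish rules precise enough that everyone knows what’s in and what’s out.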

When there are multiple spaces with different social norms, we can have a marketplace of ideas. Parents who are upset about inappropriate content on YouTube should send their kids to watch videos on a site with higher standards. YouTube will never improve its practices if we don’t all vote with our feet. A marketplace doesn’t work unless people have alternatives and make smart choices.

Unfortunately, some sites have become so big that it’s hard to find meaningful alternatives. A dozen of my friends have proclaimed recently that they are quitting Facebook because they object to Facebook’s practices. That’s great—that’s what you should do if you don’t like the site’s policies. But what’s the alternative? In our research on grassroots groups, Sucheta Ghoshal and I have found that groups that do not agree with Facebook’s policies and find its privacy features insufficient often still use it to publicize their cause, because that’s where the people are. They’re stuck.

One of the imperatives in the revised ACM Code of Ethics and Professional Conduct (the first update in 25 years) says:

3.7 Recognize and take special care of systems that become integrated into the infrastructure of society.

Even the simplest computer systems have the potential to impact all aspects of society when integrated with everyday activities such as commerce, travel, government, healthcare, and education. When organizations and groups develop systems that become an important part of the infrastructure of society, their leaders have an added responsibility to be good stewards of these systems. Part of that stewardship requires establishing policies for fair system access, including for those who may have been excluded. That stewardship also requires that computing professionals monitor the level of integration of their systems into the infrastructure of society. As the level of adoption changes, the ethical responsibilities of the organization or group are likely to change as well. Continual monitoring of how society is using a system will allow the organization or group to remain consistent with their ethical obligations outlined in the Code. When appropriate standards of care do not exist, computing professionals have a duty to ensure they are developed.

The impossibly hard problem that follows is: what should we do in response to very large platforms that are integrated into the infrastructure of society and fail to be good stewards? Democratic presidential candidate Elizabeth Warren wants to break up big tech, and she may have a point. The implications are headache-inducing.

In the meantime, one thing we all must do is vote with our feet. To tell platforms that don’t meet our personal standards (Too restrictive of speech? Too unrestrictive? Or just a lousy user interface?) that we won’t use them until they clean up their act. And to support alternative platforms that emerge, as they struggle to get started. The marketplace of ideas can’t work unless there’s an actual, working, competitive marketplace.

  1. June 13, 2019 at 11:42 am

    I like the idea, but one thing we’ve found with zoning is that the fascist zones become incubators for dangerous speech and violence. I’m thinking specifically of various subreddits before Reddit started cracking down, and also of 4chan and Gab. People are meeting there, getting radicalized, then working themselves up to violence. That’s precisely how the murder in Charlottesville was planned, via a private Discord channel that was a hate-speech zone.

    (There are a lot of academic and popular references on this topic; I imagine you know them better than I do. But if you want references, let me know.)

  2. June 16, 2019 at 12:51 am

    Reddit-style zoning and moderation makes sense, but we lack tools and enforcement mechanisms to keep that honest and fair. What keeps a “Panda’s Happy Preschool Fun Time” group from being mislabeled as “safe” and becoming a hangout for pedophiles moderated by pedophiles? What keeps a moderator of a “Young Christian Singles” group from blocking posts from people they deem not Christian enough, young enough, devout enough, or (worse) white enough?

    I believe the only mechanism that has a chance of working is based on a nuanced system of “karma” like Slashdot’s. Each person, moderator, and sub-group should have a community-driven score that summarizes their reputation for civility, honesty, and quality, which they risk jeopardizing by acting against their expressed standards.

    It’s hard for a troll to remain a troll when they have a bright neon sign above their head saying “Troll here” and a bright neon sign on their troll bridge saying “Troll bridge ahead.”

    So, in the above examples, the “Panda” group and its dishonest moderators would get flagged and downvoted into oblivion. Likewise, an extreme, exclusionary moderator on the “Christian” group would get downvoted by members of the group below a threshold at which they’d draw the attention of other moderators and possibly lose moderator status.

    Right now the problem with Twitter’s and Facebook’s enforcement of community standards is that it is centralized and therefore has to be automated, in ways that inevitably fail to discriminate between content arguing against “bad things” and content arguing for them. Likewise, it fails to recognize contexts (i.e., zones) where explicitly talking about sex, say, is advocacy and education rather than exploitation and porn.

    A karma system would decentralize and distribute enforcement to humans better able to contextualize those judgments, while also giving individuals a real incentive to remain civil.
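
    As a rough sketch of the threshold mechanism I have in mind (every name and number below is an illustrative assumption, not how Slashdot or Reddit actually computes karma):

        # Illustrative sketch of a reputation threshold for moderators.
        # All names and numbers are assumptions for exposition.
        from dataclasses import dataclass

        MODERATOR_REVIEW_THRESHOLD = -10  # below this, other moderators take notice

        @dataclass
        class Account:
            name: str
            karma: int = 0
            is_moderator: bool = False
            flagged_for_review: bool = False

        def record_vote(account: Account, delta: int) -> None:
            """Apply a community up/down vote and check the moderator threshold."""
            account.karma += delta
            if account.is_moderator and account.karma < MODERATOR_REVIEW_THRESHOLD:
                # Falling below the threshold draws the attention of other
                # moderators, who could then revoke moderator status.
                account.flagged_for_review = True

        mod = Account("exclusionary_mod", is_moderator=True)
        for _ in range(12):  # twelve group members downvote the moderator
            record_vote(mod, -1)
        print(mod.flagged_for_review)  # True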
