Doesn’t Web3 free us from censorship? Isn’t moderation censorship?

In approaching the topic of moderation, I have often encountered the criticism that in the “brave new blockchain world” everything is “censorship resistant”: we can’t have moderation because it would be “censorship”. How do we respond to this criticism?

This is a salient question everyone should ask, imho. I have written a blog post that speaks to the question, and I will see if I can give a TL;DR here …

  • One person’s moderating action is another person’s censorship because the matter is unavoidably contextual and the context may not be shared
  • Centralized social networks cannot contextualize their distant and detached moderating, but decentralized social networking should be able to
  • Moderating is a subset of governance and entails reinforcing behaviours we’d like to see more of and checking behaviours we’d like to see less of
  • Every single human being since the dawn of our species has moderated community; it’s the essential nature of social animals; it’s what we do
  • I suspect the key to moderating decentralized social networking lies in understanding how healthy, equitable, and enjoyable pre-digital communities operated, and how the governance and overall structures of centralized social networks got us all into the mess we’re in today.
1 Like
‘Freedom to’ must be accompanied by ‘freedom from’. This does make me laugh …

[image]

3 Likes

Noob here: if the rules of the social network are coded into bots that auto-censor or moderate accordingly, then presumably that would be acceptable to all users? They willingly sign up to the social network’s rules, so they are just being held accountable to them.
Nice in theory, but in practice it’s much more difficult. Even so, that seems to me like the best place to start with any Web3 approach to this issue…
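
To make the idea concrete, here is a minimal sketch of what “rules coded into bots” could look like. Every rule name, threshold, and action below is an invented example, not any real network’s policy:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    name: str                        # the rule text users agreed to at sign-up
    violates: Callable[[str], bool]  # predicate over post content
    action: str                      # e.g. "hide", "flag", "warn"

# Hypothetical rule set: the names and thresholds are invented illustrations.
RULES = [
    Rule("no-spam-links", lambda text: text.count("http") > 5, "hide"),
    Rule("no-shouting", lambda text: text.isupper() and len(text) > 20, "warn"),
]

def moderate(post_text: str) -> Optional[str]:
    """Apply the published rules mechanically; return the mandated action or None."""
    for rule in RULES:
        if rule.violates(post_text):
            return f"{rule.action}: violates '{rule.name}'"
    return None  # the post passes every rule the user signed up to

print(moderate("BUY NOW " * 5))  # -> warn: violates 'no-shouting'
```

Even in this toy form the difficulty shows: someone still wrote the predicates, so every judgment call is a human judgment frozen into code, which is exactly where the question of evolving the rules comes in.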

1 Like

In this case it would mean ‘code is law’, right? Hello @bongoplayer! That is an interesting take. How would you allow such an automated approach to evolve? Who would be in charge of changing the algorithms if, say, a “mistake” was discovered? You already allude to the point: in reality it is much more complex.

I assume you bring automation into the game because it would add a sufficient “neutrality” to the problem. Yet again, can algorithms be provably neutral? Wouldn’t we rather want a “credibly neutral” approach? So the question we are discussing here is, quoting from Vitalik’s essay: “when building mechanisms that decide high-stakes outcomes, it’s very important for those mechanisms to be credibly neutral.” I believe the mechanisms (which can be algorithms and processes) need to be designed such that they fulfil this criterion. This is what we try to do here. 🙂
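
To make the evolution question concrete, here is a hedged sketch of one shape an answer could take: a versioned rule book that only changes through a recorded, publicly auditable governance step. The propose/ratify flow and the two-thirds threshold are invented placeholders, not this project’s actual mechanism:

```python
from dataclasses import dataclass, field

@dataclass
class RuleBook:
    version: int = 1
    rules: dict[str, str] = field(default_factory=dict)  # rule name -> rule text
    log: list[str] = field(default_factory=list)         # public audit trail

    def propose_change(self, name: str, text: str) -> dict:
        """A proposal is just data; nothing changes until it is ratified."""
        return {"name": name, "text": text}

    def ratify(self, proposal: dict, approvals: int, electorate: int) -> bool:
        """Apply the change only if at least 2/3 of the electorate approved it."""
        if approvals * 3 >= electorate * 2:
            self.rules[proposal["name"]] = proposal["text"]
            self.version += 1
            self.log.append(
                f"v{self.version}: '{proposal['name']}' ratified {approvals}/{electorate}"
            )
            return True
        return False

book = RuleBook()
p = book.propose_change("no-spam-links", "No more than 5 links per post")
print(book.ratify(p, approvals=70, electorate=100))  # True: rules updated and logged
print(book.ratify(p, approvals=40, electorate=100))  # False: nothing changes
```

The point of the sketch is that “credible neutrality” lives less in the rules themselves than in how visibly and predictably they change.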

2 Likes

Thanks for the reply. As you have highlighted, mistakes and evolution can jeopardize the decentralised world’s ‘code is law’ goal. Even overarching principles (such as America’s Bill of Rights) may need amending over time. The modern solution is democracy, but that could censor the minority in any vote.
Apologies, I have not offered any solutions, but it’s certainly got me thinking! I’m glad the project is considering these tough topics early 🙂
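
To put a number on the minority worry, here is a toy sketch (the thresholds are arbitrary illustrations, not proposals) of how the approval threshold trades minority protection against the ability to change anything at all:

```python
# With a simple majority rule, 51 of 100 voters can remove content the other
# 49 value. Raising the threshold protects minorities, but makes any change
# (including fixing mistakes) harder to pass.
def passes(yes_votes: int, electorate: int, threshold: float) -> bool:
    return yes_votes / electorate > threshold

for threshold in (0.50, 0.66, 0.90):
    print(threshold, passes(51, 100, threshold))
# 0.50 True   -> a bare majority can censor the 49%
# 0.66 False  -> a supermajority shields the minority
# 0.90 False  -> near-unanimity, but now almost nothing can ever change
```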

4 Likes