Hasinoff & Schneider: From Scalability to Subsidiarity in Addressing Online Harm - a summary and critical review

The recent publication “From Scalability to Subsidiarity in Addressing Online Harm” by Amy Hasinoff and Nathan Schneider is highly relevant to our efforts to moderate the online social spaces we are creating with AKASHA. In this post, I summarize key parts of their paper and add my own perspective on how their proposals could be implemented.

Key points of the manuscript:

  • Today’s large social media platforms are generally designed to prioritize economic efficiency while capturing and holding the attention of as many users as possible.
  • Responding to harm takes the form of simple, semi-automated sanctions, such as putting warning labels on posts or demoting or removing content.
  • Many users perceive moderation regimes as arbitrary and unfair.
  • Social media governance “has largely been informed by Western models of criminal justice, which rely on sanctions (e.g., punishment) to encourage compliance with formal rules and laws.”
  • Important context-sensitivity features of Western justice systems are absent: due process rights and systems of appeal are typically limited, and there are no democratic processes for creating and amending rules and no juries of one’s peers.
  • Platform software provides little support for community involvement in governance or for holding moderators and administrators accountable.
  • Business models designed for scalability are considered highly desirable and are often required to secure financing.

Proposals: Restorative & Transformative Justice

  • Restorative justice: Instead of shunning or removing rule-violating content or users, a restorative justice approach to online conflict and abuse relies on community participation, centers the needs of people who have been harmed, and pursues the repair of that harm when possible.
  • Transformative justice additionally stresses transforming harmful incidents into opportunities for clarifying norms and enacting social change. How this could be put into practice is still an open question; one proposal is to use the “subsidiarity principle.”
  • While scalability demands quick resolution to harmful incidents, both restorative and transformative justice call for slower, more individualized care and negotiation.

@Martin Etzrodt’s takeaway: Restorative and transformative justice do not seem scalable. Harm that proliferates rapidly can’t be halted by debating an incident for days or weeks to “understand both sides” of the conflict.

The authors propose the “subsidiarity” principle to address the scalability issue. Unfortunately, they do not offer immediate examples of how this could translate into an effective UX / UI. However, they point to the fediverse as a potential technology that could adopt the concept.

Proposal: Subsidiarity

Subsidiarity is the principle that local social units should have meaningful autonomy within larger systems and that such arrangements contribute to the health and accountability of the system as a whole.

  • Restorative justice involves facilitators and a mediation and “restoration” process that includes both victims and offenders.
  • Context-dependent knowledge of the participants and incidents is important.

@Martin Etzrodt: The bottom line is that we should break down the problem into smaller subunits. The authors list examples where subsidiarity is already in place:

Subsidiarity in Practice (limited):

  • Reddit has user-managed Subreddits.
  • YouTube has Channels.
  • Facebook has Groups.
  • Wikipedia has distinct language-specific communities.

Under the authority of user moderators, these units operate with some independence from the corporate authorities that own and maintain platform infrastructure and provide moderation tools. The moderators of a Subreddit or a Facebook Group can set and enforce their own rules, and most moderators have no direct interaction with, or training from, the platform companies (Seering et al., 2019).

@Martin Etzrodt’s takeaway: The authors propose that federalism may be a way to allow subsidiarity and more community-based decision-making. They also have a related popular article on this topic.

  • Federalism involves hierarchy (Bednar, 2009).
  • Smaller units aggregate into larger units, which may in turn form even larger units. Federalist subsidiarity prioritizes the vibrancy and autonomy of those local levels, deferring to higher levels only when power from further up the hierarchy is necessary.

This is in line with a “POD”-style DAO structure, as highlighted by Vitalik and employed by Ukraine DAO.

Takeaway: In this case, and in the context of moderation (not resource allocation), “PODs” could represent individual moderator teams. The Core governance level could purge selected PODs if they fail to fulfill their assigned duties (technical review, inhibiting violence, etc.), and it could selectively activate, deactivate, or reactivate the moderation actions of PODs.
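As a minimal sketch of how this could look in code (assuming a TypeScript implementation; `ModerationPod`, `PodRegistry`, and all field names are hypothetical and not part of AKASHA or any existing DAO framework):

```typescript
// Hypothetical sketch: a core-governance registry of moderation "PODs".
// It only illustrates the activate / deactivate / purge relationship
// described above; none of these types exist in AKASHA or a DAO toolkit.

type PodStatus = "active" | "inactive";

interface ModerationPod {
  id: string;
  duties: string[];        // e.g. ["technical review", "inhibiting violence"]
  status: PodStatus;
}

class PodRegistry {
  private pods = new Map<string, ModerationPod>();

  register(pod: ModerationPod): void {
    this.pods.set(pod.id, pod);
  }

  // Core governance can switch a POD's moderation actions on or off.
  setStatus(podId: string, status: PodStatus): void {
    const pod = this.pods.get(podId);
    if (pod) pod.status = status;
  }

  // Core governance can purge a POD that does not fulfill its duties.
  purge(podId: string): void {
    this.pods.delete(podId);
  }

  // Only active PODs are consulted for moderation decisions.
  activePods(): ModerationPod[] {
    return [...this.pods.values()].filter((p) => p.status === "active");
  }
}
```

The registry is where subsidiarity meets the central layer: the PODs act locally, while Core governance only holds the coarse levers of activation and removal.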

An example of a federalist structure given by the authors:

In the Chinese political tradition, the concept of harmony explains how local autonomy can function beneath a strong central government through a federalist hierarchy, whether headed by an emperor or the Communist Party (Wang et al., 2016). This framework grants regional officials the freedom to diverge and experiment in contextually sensitive ways, while a commitment to harmony rules out direct challenges to the central government’s authority. Subsidiarity likewise does not constitute a rejection of centralized power over large domains per se.

Examples that use a subsidiarity-based form of moderation:

  • Mastodon (a microblogging platform)
  • Matrix (a chat protocol)
  • PeerTube (a video-sharing platform)

Individuals or groups who run servers on these networks can set tailored rules for content moderation, dispute resolution, and how they will “federate” with other servers.

Federated networks manifest subsidiarity in that they facilitate local governance; the software is designed to maximize communities’ contextual control while also connecting communities into much larger systems.
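To make the idea of tailored, server-level rules more tangible, here is a hedged TypeScript sketch of what a per-instance policy could look like. This is not the actual configuration model of Mastodon, Matrix, or PeerTube; every name below is an assumption for illustration only.

```typescript
// Hypothetical sketch of the kind of policy a federated instance could hold:
// local rules, a locally chosen dispute process, and federation choices.

interface InstancePolicy {
  instance: string;                  // e.g. "social.example.org"
  localRules: string[];              // community-specific content rules
  disputeProcess: "moderator-panel" | "community-vote" | "external-mediator";
  federation: {
    blockedInstances: string[];      // refuse all content from these servers
    limitedInstances: string[];      // accept but do not amplify
  };
}

// Subsidiarity in one function: the local policy, not a central authority,
// decides whether content from a remote instance is accepted.
function acceptsContentFrom(policy: InstancePolicy, remote: string): boolean {
  return !policy.federation.blockedInstances.includes(remote);
}
```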

@Martin Etzrodt’s take-home:

We need to allow for increasingly fine-grained moderator specialization. While we may initially start with moderators who handle all violations, the recruitment of experts for specific topics needs to grow as the number of violations grows. All moderators and their fields of expertise should be transparently visible to their communities.
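A minimal sketch of what transparent specialization could look like, assuming a TypeScript setting; `ModeratorProfile` and the routing helper are hypothetical names, not existing AKASHA APIs:

```typescript
// Hypothetical sketch: moderators publish their fields of expertise, and
// reports are routed to the moderators whose expertise matches the topic.

interface ModeratorProfile {
  id: string;                 // e.g. a DID or profile handle
  expertise: string[];        // publicly visible, e.g. ["spam", "harassment"]
}

function moderatorsForTopic(
  moderators: ModeratorProfile[],
  topic: string,
): ModeratorProfile[] {
  const experts = moderators.filter((m) => m.expertise.includes(topic));
  // Fall back to the full pool while no specialist has been recruited yet.
  return experts.length > 0 ? experts : moderators;
}
```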

For non-legal violations of the Code of Conduct (CoC), the community might be given a way to switch moderator decisions “on and off.”
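One way such a switch could be modeled, again as a hypothetical TypeScript sketch rather than an existing mechanism, with legal violations kept outside the community’s reach:

```typescript
// Hypothetical sketch: for non-legal CoC violations, a moderator decision is
// only enforced while the community keeps it switched "on"; decisions about
// legal violations are never subject to the switch.

interface ModerationDecision {
  id: string;
  kind: "legal" | "coc";       // legal violation vs. Code of Conduct issue
  action: "delist" | "label";
  communityEnabled: boolean;   // toggled by a community vote or signal
}

function isEnforced(decision: ModerationDecision): boolean {
  if (decision.kind === "legal") return true;   // not overridable
  return decision.communityEnabled;
}
```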

To achieve genuine plurality, we need to move away from a single top-down moderation app and first “federate” within AKASHA World, allowing different teams or apps to moderate, perhaps with an oversight board that checks the performance of these apps and teams against each other.

  • Concretely, we could break the moderators down into different teams and run a kind of “A/B” test on them (do the various teams agree in their votes for removal or not?); see the sketch after this list.
  • We could also help community members grow into moderation helpers. These helpers could express their perspectives in a “training mode”: every user who chooses to see reported content in the moderation app could send the first signal about a potentially harmful incident. A user delisting would be the incident type an official moderator prioritizes most highly. If community-member and moderator choices deviate strongly, we could see whether this is a spam issue or a real disagreement between top-down governance and the community.
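As a rough illustration of the “A/B” comparison and the moderator-versus-community deviation check, here is a TypeScript sketch that measures how often two groups agree in their removal votes. The agreement metric and the 0.7 threshold are my own assumptions, not something taken from the paper or from AKASHA’s codebase.

```typescript
// Hypothetical sketch: compare removal votes between two groups, e.g. two
// moderator teams in an "A/B" setup, or moderation helpers vs. official
// moderators. A low agreement rate flags either a spam wave or a genuine
// norm conflict that needs human review; the 0.7 threshold is arbitrary.

type Vote = "remove" | "keep";

// Votes keyed by the id of the reported content.
type VoteSheet = Map<string, Vote>;

function agreementRate(a: VoteSheet, b: VoteSheet): number {
  let shared = 0;
  let agreed = 0;
  for (const [contentId, voteA] of a) {
    const voteB = b.get(contentId);
    if (voteB === undefined) continue; // only compare items both groups saw
    shared += 1;
    if (voteA === voteB) agreed += 1;
  }
  return shared === 0 ? 1 : agreed / shared;
}

function needsReview(teamA: VoteSheet, teamB: VoteSheet): boolean {
  return agreementRate(teamA, teamB) < 0.7;
}
```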