Hey there! I’d like to share the Moderating Design Principles for a moderating app built for AKASHA. This is based on an article by @sheldrake.
We propose the following design principles based on this article from our blog.
Goal 1: Freedom
We celebrate freedom of speech and freedom of attention, equally.
Goal 2: Inclusivity
Moderating actions must be available to all. Period.
E.g.: everyone is welcome to edit Wikipedia.
Goal 3: Robustness
Moderating actions by different members may accrue different weights in different contexts solely to negate manipulation / gaming and help sustain network health. In simple terms, ‘old hands’ may be more fluent in moderating actions than newbies, and we also want to amplify humans and diminish nefarious bots in this regard.
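To make Goal 3 a bit more concrete, here is a rough, purely illustrative TypeScript sketch of how context-dependent weighting of moderating actions might be computed. Everything in it (the profile fields, the formula, the numbers) is a hypothetical assumption of mine, not part of the AKASHA design itself.

```typescript
// Illustrative only: one way to weight a member's moderating action so that
// track record and tenure count, bots are discounted, and no single account
// can dominate. All fields and constants below are hypothetical.

interface MemberProfile {
  accountAgeDays: number;      // how long the member has been around
  priorReportsUpheld: number;  // past moderating actions later confirmed
  priorReportsTotal: number;   // all past moderating actions
  humanityScore: number;       // 0..1 signal that this is a human, not a bot
}

/** Weight a single moderating action (e.g. a flag) in a given context. */
function actionWeight(member: MemberProfile): number {
  // Tenure matters, but with diminishing returns so 'old hands' can't dominate.
  const tenure = Math.min(1, Math.log10(1 + member.accountAgeDays) / 3);

  // Track record: how often this member's past actions were upheld.
  const accuracy =
    member.priorReportsTotal === 0
      ? 0.5 // no history yet: neutral prior
      : member.priorReportsUpheld / member.priorReportsTotal;

  // Suspected bots are heavily discounted; the weight is capped at 1 so that
  // no single account can outweigh many others.
  return Math.min(1, tenure * 0.4 + accuracy * 0.6) * member.humanityScore;
}

// Example: a newcomer's flag counts for less than a seasoned member's,
// but neither can exceed a weight of 1.
const newbie: MemberProfile = {
  accountAgeDays: 3,
  priorReportsUpheld: 0,
  priorReportsTotal: 0,
  humanityScore: 0.9,
};
const oldHand: MemberProfile = {
  accountAgeDays: 900,
  priorReportsUpheld: 40,
  priorReportsTotal: 50,
  humanityScore: 1,
};
console.log(actionWeight(newbie), actionWeight(oldHand)); // ≈ 0.34 vs ≈ 0.87
```

The point of the sketch is only the shape of the idea: weights vary with context and history solely to resist gaming, never to create a privileged class of moderators.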
Goal 4: Simplicity
Moderating processes should be simple, non-universal (excepting actions required for legal compliance), and distributed.
Goal 5: Complexity
The members and moderating processes involved should produce requisite complexity.
Goal 6: Levelling up
We want to encourage productive levelling up and work against toxic levelling down, for network health in the pursuit of collective intelligence.
Goal 7: Responsibility
Moderating processes should help convey that with rights (e.g. freedom from the crèches of centralized social networks) come responsibilities.
Goal 8: Decentralized
Moderating processes should be straightforward to architect in web 2 initially, and not obviously impossible in the web 3 stack in the longer term. If we get it right, a visualisation of appropriate network analysis should produce something like the image in the centre here:
Moderating consequences — gotta get us some dynamic polycentricity
You can also find it as a Notion page here: https://www.notion.so/akasha-foundation/The-AKASHA-Moderating-Open-Design-Challenge-15cb49cf57e740be92534958828ca210?p=b64f2c4ad1f1418987d2f0ebe68205d6