We are in the midst of a design sprint to build the next version of moderation for AKASHA. Yet last week our current moderation system was challenged by one participant in our decentralized social networking experiment. The following post appeared on the timeline:
Ethereans swiftly reported the post using the flagging tool that is already implemented.
At this stage the platform is invite-only, and only members of the AKASHA team have access to the moderating dashboard where final decisions can be made to remove a post from the view of other visitors on akasha.ethereum.world.
How did our team react?
As several moderators were notified immediately, a discussion evolved on our group messaging platform. This highlights the value of backchannels that allow moderators to act in a coordinated fashion.
The post was swiftly removed. Furthermore, our team is now more vigilant, checking for new content as often as possible.
In conclusion, this was a very clear breach of our Code of Conduct and the decision process was simple. Before automation, we will need to add more moderators across different time zones to handle issues in a swift and effective manner. Involving our community will be essential for that.
What are the detailed criteria for content moderation? (Where can we view them?)
What content is absolutely unacceptable?
Why?
Child pornography, terrorism, and drug dealing are absolutely prohibited on the basis of universal principles.
Hello @Maxlion! Those are great questions! For akasha.ethereum.world you can find the Code of Conduct in the section indicated below. When you flag a post (feel free to test it) you can choose from those points, and you can also leave a free-text comment. We realized that boxing in the possible reasons for flagging is not ideal. In a future version we want to encourage everyone to be more vocal about their complaints and flagging reasons.
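For illustration only, here is a rough sketch of what such a flag report could look like as a data structure, with a predefined reason plus an optional free-text comment. All type and field names below are assumptions, not the actual AKASHA flagging API.

```typescript
// Illustrative sketch only: these names are assumptions, not AKASHA's real API.
enum FlagReason {
  Violence = 'violence',
  Abuse = 'abuse',
  IllegalContent = 'illegal-content',
  Spam = 'spam',
  Other = 'other',
}

interface FlagReport {
  postId: string;     // identifier of the flagged post
  reporter: string;   // address of the reporting member
  reason: FlagReason; // one of the predefined Code of Conduct points
  comment?: string;   // optional free-text explanation
  createdAt: number;  // unix timestamp
}

// A flag as a reporter might submit it from the UI (placeholder values)
const report: FlagReport = {
  postId: 'example-post-id',
  reporter: '0xExampleAddress',
  reason: FlagReason.Abuse,
  comment: 'Degrades the dignity of another member.',
  createdAt: Date.now(),
};
```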
There are some things that are unacceptable based, for example, on human rights considerations. I agree with your view that there are "universal principles" that apply here. Those would include any form of violence, abuse, or degradation of another person's dignity. These would also be the most common reasons law enforcement might demand the takedown of a post (we can't delete content from a user's personal storage, but we can delist it so it no longer populates the feed app). Notably, since the "world operator" can comply by "delisting", the responsibility for the content is handed to its creator. It is not on the "operator's servers" but in storage controlled by the public-private key pair of the creator. That is a very interesting aspect of a truly decentralised social network: more rights, but also responsibilities for everyone.
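As a minimal sketch of this delist-versus-delete distinction, assume a hypothetical feed index kept by the world operator; the names below are illustrative and do not come from the AKASHA codebase.

```typescript
// Hypothetical feed index: the operator lists posts but does not store them.
interface FeedEntry {
  postId: string;    // content identifier in the creator's own storage
  author: string;    // creator's address; only their keys control the content
  delisted: boolean;
}

class FeedIndex {
  private entries = new Map<string, FeedEntry>();

  add(entry: FeedEntry): void {
    this.entries.set(entry.postId, entry);
  }

  // The operator cannot delete the underlying content (it lives in storage
  // controlled by the creator's key pair); it can only stop listing it.
  delist(postId: string): void {
    const entry = this.entries.get(postId);
    if (entry) {
      entry.delisted = true;
    }
  }

  // Only non-delisted entries populate the feed app.
  visibleFeed(): FeedEntry[] {
    return Array.from(this.entries.values()).filter((e) => !e.delisted);
  }
}

// Usage: the operator delists a flagged post; it remains in the creator's storage.
const feed = new FeedIndex();
feed.add({ postId: 'example-post-id', author: '0xExampleAddress', delisted: false });
feed.delist('example-post-id');
console.log(feed.visibleFeed()); // []
```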
As soon as you operate a social network and it reaches a certain size, it needs to comply with the legal authorities of the jurisdiction in which it operates. The legal entity chosen and its location can then be used as a way to mitigate state censorship (… and you can think of other means, not mentioned here, that could make it possible to share content in a decentralized social network while STILL keeping it a safe place, one where you also have a choice for "freedom of attention", i.e. you are not exposed unsolicited to disturbing content).
We will also soon publish the Product Requirements Document for the next iteration of the moderating app, which will include the changes we feel will be valuable to achieve this.
Martin, thank you so much for your responsible, honest, and detailed explanations.
I now understand some of the concepts of decentralized social networking.
The way content is moderated and curated in ethworld is decentralized.
With decentralized power comes more, and more decentralized, responsibility. Responsibility shifts from the giants to communities and individuals.
The community has the right to conduct “self-censorship” to maintain community order.
I want to be able to explain the problem, the way you do, to the next maxpan.
“More rights, but also responsibilities for everyone.” Managing content in this decentralized manner is, in practice, how the real-world responsibilities of an operator are fulfilled.