Further refine moderation guidelines

Mikael 2024-11-21 18:46:56 +01:00
parent 5ea3b2406a
commit 81f0a4377c
Signed by: mikael
SSH key fingerprint: SHA256:21QyD2Meiot7jOUVitIR5YkGB/XuXdCvLW1hE6dsri0

README.md

@@ -4,6 +4,89 @@
This document is a working draft and incomplete.
Missing aspects:
- spam
- automated posts
- high-frequency posts
- …
## Nonnormative summary
We wish to provide an environment in which queer and neurodivergent people can
be their authentic selves. In addition to communication itself, we have various
tools for restricting communication to help us achieve that goal.
We need, however, to be mindful that every action we take may not only protect
from harm but can also be harmful in itself. We should therefore generally
respect people's autonomy in deciding with whom they wish to interact and what
kind of content they would like to see. We should only limit that autonomy to
the degree that is necessary and effective to achieve our goals.
Distribution of content that is strictly illegal under the laws of the
Federal Republic of Germany, whether we agree with those laws or not, obviously
undermines our goals because it may result in the termination of this instance.
Communication may be harmful to others, both to users of this instance and to
users on other instances. Harmfulness can be difficult to assess and quantify
objectively and always requires consideration of the larger context.
Communication content can be annoying or distressing to people, or it might
cause them to take harmful action.
Our moderation actions should be proportional to the risk of harm resulting
from the communication we moderate. We should attempt cooperative measures
(discussing an issue with the originators of a communication or the moderators
of another instance) before we forcefully restrict communication.
We should also note that we can apply restrictions selectively, both in terms
of the communicating parties or content and in the degree of restriction.
Restrictions can be applied based on local user identity, remote instance name,
hashtag or content keywords. They may be applied to the messages themselves or
to their media attachments, and the degree of restriction can vary from a
reduction in visibility to complete termination.
### Local users
Local users are generally moderated through the administrative web interface
of this instance and available moderation actions include
- unlisting (removal from federated timeline), `mrf_tag:force-unlisted`,
- sandboxing (removal from public timelines), `mrf_tag:sandbox`,
- marking media attachments as sensitive, `mrf_tag:media-force-nsfw`,
- stripping media attachments, `mrf_tag:media-strip`,
- account deactivation, and
- account deletion.
### Instances
Instances are moderated through this Git repository using the following
settings:
- `activities`:
  - `unlist`: Remove activities from federated timeline,
  - `restrict`: Force activities to be visible to followers only, or
  - `reject`: Reject all activities except deletes.
- `media`:
  - `mark`: Mark media attachments as sensitive, or
  - `strip`: Strip all media attachments.
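The exact file layout used in this repository is not specified here. Purely as an illustration (the class, field and domain names below are assumptions, not this repository's actual schema), an instance entry could be modelled like this:

```python
# Illustrative sketch only -- not the actual schema or code of this
# repository. It models the per-instance settings listed above.
from dataclasses import dataclass
from typing import Optional


@dataclass
class InstancePolicy:
    domain: str
    # "unlist", "restrict" or "reject" (the `activities` options above)
    activities: Optional[str] = None
    # "mark" or "strip" (the `media` options above)
    media: Optional[str] = None


# Hypothetical entries showing a milder and a stricter restriction:
examples = [
    InstancePolicy("mild.example", activities="unlist", media="mark"),
    InstancePolicy("severe.example", activities="reject", media="strip"),
]
```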
### Hashtags
Hashtags can be matched by case-insensitive exact match with the following
flags:
- `sensitive`: Mark tagged activities as sensitive, and
- `unlisted`: Remove tagged activities from federated timeline.
### Key words
Key words are matched by regular expression with the following options for
moderation:
- `unlist`: Remove matching activities from federated timelines, or
- `reject`: Reject matching activities altogether.
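As a rough sketch of the matching semantics described for hashtags and key words (illustrative Python, not the code this instance actually runs; the function names are made up for this example):

```python
# Illustrative sketch only: hashtags use a case-insensitive exact match,
# key words use a regular-expression search.
import re


def hashtag_matches(configured_tag: str, post_tag: str) -> bool:
    # Case-insensitive exact match: "Politics" matches "politics",
    # but "politicians" does not.
    return configured_tag.casefold() == post_tag.casefold()


def keyword_matches(pattern: str, text: str) -> bool:
    # Regular expression matched anywhere in the activity text.
    return re.search(pattern, text) is not None


assert hashtag_matches("politics", "Politics")
assert not hashtag_matches("politics", "politicians")
assert keyword_matches(r"(?i)\bcrypto ?spam\b", "Yet more Crypto Spam...")
```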
## Definitions
The key words _shall_, _shall not_, _should_, _should not_, and _may_ are to be interpreted as described in
@@ -19,7 +102,7 @@ The term _representation_ applies to both textual (something being described in
(something shown in a still image or video) representation. It applies whether or not the represented
idea is real or imaginary.
## Visibility guidelines
- _unrestricted_
  - Activities _may_ be visible in public timelines.
@@ -133,58 +216,93 @@ Eye contact may be experienced as uncomfortable by some people. May need some re
**Practical application:**
- Selfies with eyes focused on the camera _may_ be marked as sensitive and labelled with `ec`.
## Escalation strategy
Instances _should_ only be moderated if an issue cannot be expected to be resolved in a less invasive manner,
for example by addressing it directly with the users causing it or with the instance administrators.
Reducing the visibility of an instance's content _should_ generally be preferred over complete defederation.
The moderation action should be proportional to the harm potential of the
communication. Occasional mild infractions should be resolved through cooperation,
while persistent or serious violations may require forceful action.
_(incomplete)_
Failure to remedy an issue, or repeated violations, should be met with a
gradual escalation of measures.
_(incomplete)_
### Prohibited content
_(incomplete)_
Content that is obviously illegal or very harmful is _prohibited_.
#### Politics
- _Any_ such content _shall_ be deleted locally immediately.
- Local user accounts distributing _any_ such content _shall_ be restricted
immediately by sandboxing or deactivation; they _should_, however, not be
deleted without further investigation.
- Local user accounts _primarily_ distributing such content _shall_ be deleted
immediately.
- Remote instances distributing _any_ such content _may_ be restricted
immediately by `restrict`ing their activities and `mark`ing or `strip`ping
their attachments.
- Remote instances failing to contain the distribution of _any_ such content
within an adequate time period _should_ at least be restricted by
`restrict`ing their activities and `strip`ping their attachments. They _may_
also be defederated by `reject`ing their activities.
- Remote instances _primarily_ distributing such content _shall_ be defederated
immediately by `reject`ing their activities and `strip`ping their
attachments.
- Instances focused on right-wing extremism _should_ be defederated.
- Instances with strong free speech policies _may_ have their activities withheld from public timelines.
- Instances with a high prevalence of unlabelled political content _may_ have their content withheld from public timelines.
### Restricted content
#### Personal integrity
- _Any_ such content _may_ be removed from the public and federated timelines
if not adequately labelled.
- _Any_ such content _may_ be forcibly labelled with appropriate content
warnings.
- Local user accounts distributing _any_ such content without adequate labels
_should_ be contacted for cooperative resolution.
- Remote user accounts distributing _any_ such content without adequate labels
_may_ be contacted for cooperative resolution.
- Local user accounts _primarily_ distributing such content without adequate
labels _should_ be contacted for discussion.
- Instances with a high prevalence of media depicting violence _should_ have their media marked as sensitive.
_(incomplete)_
#### Nudity & sexuality
### Cooperative communication
- Instances focused on CSAM _shall_ be defederated.
- Instances with a high prevalence of legally questionable sexual content _should_ have their media stripped.
- Instances with a high prevalence of unmarked sexual content _should_ have their media marked as sensitive.
- Try to be friendly and respectful in your communication.
- Describe your role.
- Clearly describe the offending behaviour and explain why it is considered
offensive.
- Note whether improving the behaviour is a suggestion, a recommendation or a
requirement.
- Mention the possible consequences if the behaviour is not improved.
## Local user moderation
#### Examples
Issues with local users _should_ preferably be addressed in a cooperative and constructive manner.
> “Hey, I am a moderator of this instance. My colleagues and I noticed that you
> have been posting a lot on current political events. While we have no strict
> rules about it, we feel that an excess of such content is very exhausting to
> our users. We therefore suggest that you consider labelling such posts as
> `??pol`, so that users can skip over them if they are not interested, or
> publishing them as _unlisted_, so that they are not visible on the public
> timelines.
>
> We trust in your ability to be considerate to others and don't believe that
> any further action is required on our part.”
_(incomplete)_
> “Hey, I am part of the moderation team of this instance. We have received
> complaints from other users about sexually suggestive posts from your
> account. While not considered inappropriate per se, we recommend that such
> posts be labelled as `suggestive` or `lewd` and media marked as _sensitive_.
>
> Please be reminded that if you continue to post such content without
> marking it appropriately, we may decide to remove your posts from the public
> timelines and mark all your media attachments as _sensitive_.”
> “Hey, I am contacting you on behalf of this instance's moderation team. You
> have been posting sexually explicit images without marking them as sensitive.
> This is, however, mandatory, and if you violate this requirement again, we will
> restrict your posts to followers only and forcibly mark all your media
> attachments as _sensitive_.”
### Practical guidelines
_(incomplete)_
- Users publishing CSAM _shall_ be permanently suspended.
- Users failing to mark sensitive media _may_ have them forcibly marked as sensitive.
> “Hi, I am writing to you on behalf of this instance's moderation team. We
> have noticed that you have been mentioning … despite their clearly and
> repeatedly stated request not to be involved in the discussion any longer.
> We ask that you disengage and honour their wish. If you fail to do so, we
> will have to suspend your account.”