The other day we talked about an article that appeared in Wired, covering the ways that Riot Games is attempting to change their moderation policies in order to maintain a safe and pleasant environment while retaining players and rehabilitating offenders. They are trying to do it by having the community police itself.
The concept was met with much derision and skepticism, both here and on the forums. But I think we are selling the idea short; I think it really could work. I had an idle hour or two (okay, eight) yesterday turning this idea over in my head, when suddenly this all fell out (okay, not suddenly; it took most of the eight hours).
This is going to get pretty dry, especially if you click through to the Google Doc that contains the descriptions of each of these diagram components.
Software system design guy designs software systems. This is how I am supposed to work.
Don’t say I didn’t warn you.
In this system, Turbine turns most of the effort of moderating player behavior over to the players, while maintaining oversight (especially on Appeals, and for incident reports of type “Other” that cannot be easily described using pre-defined incident types and categories).
When an incident is Reported, players are selected at random to serve as Tribunal members. Each member who accepts examines the Report and determines who was at fault, if anyone; “No Fault” is a valid finding. Tribunal members also assign a severity to the fault, and they receive Turbine Points as a reward for participating.
Once enough Tribunal judgments have been received to render a verdict, and if fault was found, the system applies a punishment based on the average of the Tribunal members’ reported severities, which also factors in the historical violations of the accused. No individuals, neither players nor Turbine employees, need be involved in meting out a punishment.
People who file successful reports are rewarded with Turbine Points. People who file unsuccessful reports are not, and are limited in how many they may file. People who misuse the reporting system may end up being punished themselves.
This system is not designed to minimize bans. In fact, I think there would be a lot more bans, especially of the lesser duration variety, and especially at first.
People need to be educated as to what the community is willing to tolerate. I think many members would be surprised by the DDO community in particular; in my opinion, trollishness is more likely to be seen as a negative by the DDO community than by the internet community in general.
People will have to learn. And until then, bans aplenty.
Not that I think Turbine would actually implement this. It requires coding, mostly new code but not entirely, and that seems to be a very precious commodity indeed.
Imagine that every player account has two new attributes: A Reporting Level, and a Violation Level.
Every time the player files a report the Reporting Level goes up, quickly reaching a value that prevents the player from filing more reports.
Reports that the system adjudicates to be valid are deducted from the accusing player’s Reporting Level: just like the NFL coach’s challenge, reporting players are not penalized for successful reports.
In addition, the Reporting Level automatically decays for everyone by one per month. Even someone who reports a lot will be able to file a report again if he or she waits a month.
The Violation Level increases whenever the player is found to be at fault in an incident report, and it is ADDED to the severity of the fault when meting out punishment. For example, a player with Violation Level 3 who receives a 2-point fault will:
- Get a 5-point punishment
- Have their violation level increase to 5
Like Reporting Level, this also decays by one point per month, encouraging anyone to rehabilitate themselves with good behavior over time.
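As a minimal sketch of how the two attributes might interact (assuming illustrative names and a hypothetical cap of five open reports, neither of which is specified above), the arithmetic looks like this:

```python
REPORT_CAP = 5  # hypothetical: open Reporting Level at which filing is blocked


class PlayerAccount:
    def __init__(self):
        self.reporting_level = 0
        self.violation_level = 0

    def can_file_report(self):
        return self.reporting_level < REPORT_CAP

    def file_report(self):
        # Every filed report raises the Reporting Level.
        self.reporting_level += 1

    def report_upheld(self):
        # Successful reports are refunded, NFL-coach's-challenge style.
        self.reporting_level = max(0, self.reporting_level - 1)

    def apply_fault(self, severity):
        # Punishment = fault severity plus accumulated Violation Level;
        # the Violation Level then rises to match the punishment.
        punishment = severity + self.violation_level
        self.violation_level = punishment
        return punishment

    def monthly_decay(self):
        # Both levels decay by one point per month.
        self.reporting_level = max(0, self.reporting_level - 1)
        self.violation_level = max(0, self.violation_level - 1)
```

Running the worked example above through the sketch: an account at Violation Level 3 that takes a 2-point fault gets `apply_fault(2) == 5` and ends at Violation Level 5; a month of good behavior then decays it to 4.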
Workflow and Activity
Now we are going to get seriously dry.
The individual components referenced in the following diagrams are described in detail in a document titled “Self-Policing Gaming Community System Overview” which can be found here.
Reporting an incident
This is intended to work from forum thread posts, in-game chat boxes, or from the in-game Help menu. “Incident Evidence” depends on where the report is made; evidence for a forum report would be a text-capture of the thread as of the time the report was filed. Evidence for an in-game incident would include the chat logs of the accuser and violator(s), /LOC reports of everyone involved, and a screenshot.
There are a couple of important points here. For one, the Tribunal never meets; players who are selected (and agree to serve) provide their Judgments in isolation, one at a time. It may take some time for enough Judgments to be rendered to resolve into a final verdict. Justice is surer than it is swift in this case.
Second, Tribunal members get paid. In Turbine Points. Not many, but still, this is an annoying thing that is not game play. So … here, have some points.
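To make the asynchronous part concrete, here is a sketch of how Judgments might accumulate until a verdict can be rendered. The quorum size, the simple-majority rule, and the severity averaging are all my assumptions for illustration; the post does not pin them down:

```python
from collections import Counter
from statistics import mean

QUORUM = 5  # hypothetical: Judgments required before a verdict is rendered


def resolve_verdict(judgments):
    """judgments: list of (finding, severity) tuples, where finding is an
    accused player's name or "No Fault", submitted one at a time by
    Tribunal members working in isolation.

    Returns None until a quorum is reached; justice is sure, not swift.
    """
    if len(judgments) < QUORUM:
        return None
    findings = Counter(finding for finding, _ in judgments)
    verdict, _votes = findings.most_common(1)[0]
    if verdict == "No Fault":
        return ("No Fault", 0)
    # Severity is the average assigned by the members who agreed on
    # this verdict, rounded to a whole point.
    severity = round(mean(sev for finding, sev in judgments if finding == verdict))
    return (verdict, severity)
```

So a Report with four Judgments in simply stays open; once the fifth arrives, the majority finding wins and the averaged severity feeds into the punishment algorithm described below.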
Many reports would never enter this flow; only those that did not fit the pre-defined incident types and categories, and those that are appealed once a Tribunal verdict is reached.
Note that even here, one of the options is for the GM to kick it back to the community for Tribunal review. In fact, that would be the preferred option.
Disposition and Record-Keeping
Punishment is by algorithm; it should always work the same way. People may complain about their punishments but they cannot validly complain about arbitrariness or special treatment. Algorithms don’t have friends.
Another important point: punishments are always explained, in email, and in reports that can be viewed by anyone. Nothing secret is going on here. No one may claim they were permanently banned for a minor violation … unless they really were, which should not actually be possible.
Appeals and Reports
A lot is made of “privacy” as a reason to withhold information about who is punished and why. I find that to be vacuous. No one is asking for street addresses or social security numbers. Who exactly is harmed if I am able to tell that player xxxThroatCutterxxx is on a one-month “vacation” for harassment and has also been punished in the past? Why is it bad that I know this?
I argue that it is not bad. It is good. It may discourage others from behaving similarly, and at the least it will provide clear record of what types of behaviors are okay, and what types are not.
A good thing to know. Truly. No one is well-served by keeping this all secret.
On The Other Hand
There are some weak points, but that is not surprising, since this is a complex system and I’ve spent at most eight hours designing it. But to save you some trouble, I will point some of them out.
This system handles forum reports and in-game reports together, for simplicity. But it may be better to keep them separate; for instance, you wouldn’t want someone in the game to be asked to serve on a Tribunal for a forum posting incident.
The component that selects reports for the Tribunal will require a complex, not-yet-devised algorithm: one smart enough to combine multiple reports against the same individuals for the same incident, while still recording that multiple people reported it and are eligible for rewards.
No real thought has been put into how selected Tribunal members will be notified that they have been chosen to serve. This may be extra difficult on the forums, where it is harder to tell who is actually logged in and attentive at any given moment.
So there you have it. Design and opinion, all mashed together into yet another many-word blog post. If somehow you just can’t get enough of this, may I suggest the detailed description document, which brings another six pages to the table?
Good thing there will never be a word shortage. I’d starve.
🙂 😀 🙂