Main article: Ratings system
Designing a Simple System First
A ratings-system-based community will have to start by voting on a set of basic principles and then rating prospective members against them. The basic principles might be personal freedom (as long as you are not harming anyone else) and productivity (you must be contributing to the community). The basic questions you’d want answered about someone, therefore, are:
1. Is X supportive of everyone else’s freedom to live as they please, think, and express themselves?
2. Is X a productive member of our society? Does X reasonably contribute what he can to society?
Given that we are dealing with a ratings system, we might include the honesty question as a foundational one as well:
3. Is X a basically honest person? When X communicates, is he trying to give you the truth, or is he attempting to deceive you?
We might call these the three fundamental questions. Question 2 is a basic input to issues of economic “deservingness”. Questions 1 and 3 are essential elements of one’s ability to participate in a free society at all. A person failing on either is open to reprimand, a requirement to obtain more education, or some other sanction. We might also discount the weight their ratings carry in the system until improvement is made.
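The discounting mechanism could be sketched as a weighted aggregate, where a sanctioned member's ratings of others carry reduced weight until improvement is made. This is a hypothetical illustration; the function name, the default weight of 1.0, and the 0.25 discount are all assumptions, not a specification:

```python
# Hypothetical sketch: aggregate a member's rating as a weighted mean,
# where raters under sanction have their weight discounted.

def aggregate_rating(ratings, weights):
    """Weighted mean of ratings; raters absent from `weights` default to 1.0."""
    total_weight = sum(weights.get(rater, 1.0) for rater in ratings)
    if total_weight == 0:
        return None
    return sum(score * weights.get(rater, 1.0)
               for rater, score in ratings.items()) / total_weight

# Example: "bob" is under sanction, so his rating counts at a quarter weight.
ratings = {"alice": 4.0, "bob": 1.0, "carol": 5.0}
weights = {"bob": 0.25}
print(round(aggregate_rating(ratings, weights), 2))  # 4.11
```

Restoring a rater's weight to 1.0 once improvement is demonstrated requires no change to the aggregate itself, which is one attraction of handling sanctions this way.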
Ratings and Fairness
A serious ratings system is susceptible to charges of unfairness. In our case, since ratings are expected to be a foundational principle of our voluntary communities, it is not acceptable for people to dismiss them as unfair.
Let’s look at two people who participate in a ratings system, Alice and Susan. They’re sisters so they know each other well and have established opinions of each other. Alice sees Susan’s rating and thinks the following:
- Susan did better than me but I think she’s a worse person.
- Therefore, the ratings system is unfair.
or
- Susan did worse than me but I know her to be the better person.
- Therefore, the ratings system is not accurate.
Both of these situations badly undermine the system and should be avoided. Note that this example was chosen because it assesses the ratings system against a very high bar: our personal knowledge of someone we know well. The better we know the person, the more stringently we will judge the ratings system’s accuracy.
Let’s further suppose that Alice knows exactly why the system failed: Susan has better social skills, so she gets higher ratings across the board. Alice may know, for instance, that Susan is less honest than she is, yet Susan still gets a better rating, even for honesty. Clearly Susan’s social skills have deceived her raters into thinking she is honest.
This is a tough nut to crack, but it’s a problem that happens all the time. Social skill often “enhances” other abilities and masks deficiencies. People are highly susceptible to manipulation by the socially skillful.
The answer to this problem, from a high-level perspective, is education and proper categories. We can add understanding people and recognizing psychological manipulation to our list of “required courses” for prospective members of ratings-based societies. Merely being aware of the difference between honesty and charm should help alleviate confusion between the two. The ratings system itself should then, of course, be broken down into properly orthogonal categories, each with a clear definition. Checks in the system can also alert raters that they may be confusing things if, say, someone always rates charming individuals as honest.
Individuals will always disagree to some extent with the aggregate in a ratings system. The purpose of the aggregate, after all, is to put disparate opinions together to tell a simple story. But we should strive to keep outlier opinions (those far outside the standard deviation) to a minimum, since they are the ones that later generate controversy and dissent. An outlier either reflects some new insight the rater has about the rated, is a mistake, or is simply biased. An outlier opinion should therefore be followed up to find out what is going on, especially if there is some special insight that would be useful to the group. Obvious mistakes should be flagged so the rater can correct them, and rater biases should be reflected in the community’s ratings of the raters themselves. Whenever an outlier rating is made, the software should alert the rater and confirm their choices. This will help raters correct errors and give them a chance to reflect, perhaps more objectively, on what they did.
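The outlier alert could be as simple as a standard-deviation test run before a rating is recorded. A minimal sketch, assuming a two-sigma cutoff (the cutoff, the function name, and the confirmation flow are illustrative choices, not part of any specified design):

```python
# Hypothetical sketch: detect whether a new rating falls far outside the
# existing spread, so the software can ask the rater to confirm it.
from statistics import mean, stdev

def is_outlier(existing_ratings, new_rating, sigmas=2.0):
    """True if new_rating lies more than `sigmas` standard deviations from the mean."""
    if len(existing_ratings) < 2:
        return False  # not enough data to define a spread
    spread = stdev(existing_ratings)
    if spread == 0:
        return new_rating != existing_ratings[0]
    return abs(new_rating - mean(existing_ratings)) > sigmas * spread

community = [4.0, 4.5, 4.0, 3.5, 4.5, 4.0]
print(is_outlier(community, 1.0))  # True: alert the rater and confirm
print(is_outlier(community, 4.0))  # False: record silently
```

Confirmed outliers would then feed the follow-up process described above rather than being silently averaged away.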
For the case of Alice and Susan, this would imply that Alice should come forward with her insight that Susan is not that honest. She would be doing the community a service. But despite her annoyance with Susan’s rating, we can easily see how Alice might refrain from doing her duty simply because she doesn’t really want to hurt her sister.
This implies a profound change in culture. We can surmise that Alice has no trouble divulging her insights about Susan’s honesty within her own family. It is likely that the entire family has a fairly accurate understanding of Susan, built from their own observations and reinforced by continuous testimony from members like Alice. So why is Alice comfortable being open within her family and not outside of it? We theorize a fairly simple dynamic at work here: the family largely takes care of its members regardless of their flaws. This removes the normal public constraints and allows people to be honest. We might further hypothesize an implicit bargain: in exchange for security, families require openness from each other. Our ratings-based society, then, may require a similar family dynamic. Its members will view themselves as an extended family where some level of security is guaranteed (e.g. an income floor, social insurance, etc.) but transparency is required.
We might further speculate, along these lines, that the ratings system will protect its members by providing a large backing for any aggregate rating. A single individual will no longer be able to ruin someone’s reputation by speaking ill of them; indeed, if the system works as intended, that opinion will already have been registered in the system. In our society, where we typically don’t rate each other (or give each other perfunctory good ratings), a single bad opinion carries a lot of weight. This is one reason why we are careful with people outside our close circle of family and friends and avoid full transparency.