
The ratings system, human psychology and social dynamics

From Information Rating System Wiki

Main article: Ratings system

Designing a simple system first

A community based on a ratings system will have to start by voting on a set of basic principles and rating prospective members against them. The basic principles might be personal freedom (as long as you are not harming anyone else) and productivity (you must be contributing to the community). Therefore, the basic questions you’d want to know about someone are:

  1. Is X supportive of everyone else’s freedom to live as they please, think, and express themselves?
  2. Is X a productive member of our society? Does X reasonably contribute what he can to society?

Given that we are dealing with a ratings system, we might include the honesty question as a foundational one as well:

  3. Is X a basically honest person? When X communicates, is he trying to give you the truth or is he attempting to deceive?

We might call these the three fundamental questions. Question 2 is a basic input to issues of economic “deservingness”. Questions 1 and 3 are essential elements of one’s ability to participate in a free society at all. A person failing on either of them is open to reprimand, a requirement to obtain more education, or some other sanction. We might also discount their weight in the ratings system until improvement is made.
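To make the sanction mechanism concrete, here is a minimal sketch of how a member’s rater weight might be discounted for falling short on questions 1 or 3. The 1-5 scale, question names, floor, and 50% discount are illustrative assumptions, not a specification.

```python
# Minimal sketch: scoring a member on the three fundamental questions and
# discounting their weight as a rater until improvement is made.
# The 1-5 scale, question names, floor, and 50% discount are assumptions.

FUNDAMENTAL_QUESTIONS = ("freedom", "productivity", "honesty")

def rater_weight(scores: dict[str, float], floor: float = 3.0) -> float:
    """Return the weight this member's own ratings carry (0.0 to 1.0).

    `scores` maps each fundamental question to a 1-5 rating. Falling below
    `floor` on freedom or honesty (questions 1 and 3) discounts the member's
    weight in the ratings system until improvement is made.
    """
    weight = 1.0
    for question in ("freedom", "honesty"):
        if scores.get(question, 0.0) < floor:
            weight *= 0.5  # a discount, not an exclusion
    return weight

print(rater_weight({"freedom": 4.5, "productivity": 3.0, "honesty": 2.0}))  # 0.5
```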

Individual belief vs. group behavior

Everyone has individual beliefs. But not all individual beliefs are significantly represented in society. For instance, most Americans profess a belief in angels. But they don’t actively invoke “the angels” in the context of a work environment where the group is trying to achieve some practical objective. There is also no significant public policy which seriously assumes an angelic presence in our lives. We apparently have the capacity to hold a “private” view and a “public” view.

Needless to say, human psychology is complex and often contradictory. It hinges on what people really mean when they “believe” something. Sometimes the belief is weak or limited to certain contexts. Sometimes a belief is unclear in the believer’s mind and yields no practical manifestation. Sometimes it is merely a vague concept, an aspiration, or a hope.

Significantly, individuals change their persona in group environments. A personally held belief dissolves to some extent under the weight of group norms. More generally, individuals change their persona with other people. We don’t act the same way with our children as we do with our professional colleagues. We put our gameface on with customers, first dates, and anyone we need to impress. This behavior might be dismissed as faking it, but when is a persona fake and when is it real? It is often hard to know. It is simpler, and probably more accurate, to observe that they are all aspects of the same basic personality. In any event, we are malleable and our “beliefs” are, to some extent, molded by the social environment we belong to.

This is not to say that private beliefs never enter the public realm. An interesting example that straddles the line is racism. Many Americans harbor racist beliefs but these are normally kept under wraps. Racists work and interact with members of the race they supposedly hate all the time. However, they may also vote for racist candidates and policies. In recent years we’ve seen more manifestations of racism in the public political arena.

But mostly, since the Civil Rights era, we’ve made great progress on racial equality. We somehow managed to convince people not to express their beliefs in public and to act in a tolerant way. I’ve seen people do this in smaller settings and it largely works. People who are privately racist can manage to put their tolerant face on. It works so well that many of them will try to convince you (and themselves) they are not “racist” at all.

This phenomenon is, to a large extent, the result of an implicit societal-level ratings system. As culture changes, people realize their old personal views are no longer tolerated in larger settings and they modify them accordingly. The modification might be self-serving and “fake” at first but over time it may become more genuine. Indeed, psychology seems to confirm that a great way to change our beliefs is to start by simply acting out the correct behavior. Humans learn to do this as children. Focusing our ratings system on public behavior seems like a good place to start and is probably the best we can hope for short term.

But how do we design a ratings system specifically for group behavior? One would think that the only way to change group behavior is to change individual behavior. That is, to change the individual characteristics that, in the aggregate, influence group behavior. A number of ideas come to mind.

One is to simply have smaller communities where broken group dynamics can be repaired more easily. Ratings criteria that demarcate characteristics affecting group harmony can be flagged for special attention. Racism and other -isms would certainly fall into this category. These can become, in a sense, higher priority characteristics. Obviously these criteria are themselves subject to being rated and, through this process, the community will learn which ones are most important. Since we are envisioning small communities anyway, we might be justified in being optimistic about this.

Part of identifying high priority characteristics is to allow people to recommit to them personally. In a sense, the ratings system reminds us of what our group values are, so this is partially accomplished just by having and maintaining a ratings system in the first place. But, in the process, why not rate ourselves against our group values? That would allow people to introspect and recommit themselves to group norms. A rule could be implemented to always include a self-rating along with any rating of someone else. Communities could also institute activities by which members think about and express their allegiance to the values they signed up for, sort of like a pledge of allegiance but one involving reasoned thought.
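As a rough illustration, the self-rating rule could be enforced at submission time. The Rating structure and submit_rating helper below are hypothetical, not part of any existing system.

```python
# Sketch of the proposed rule that every rating of another member must be
# accompanied by a self-rating on the same criterion. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Rating:
    rater: str
    subject: str
    criterion: str
    score: float

def submit_rating(rating: Rating, self_rating: Rating | None = None) -> None:
    """Accept a rating of someone else only if a matching self-rating comes with it."""
    if rating.rater != rating.subject:
        if (self_rating is None
                or self_rating.rater != rating.rater
                or self_rating.subject != rating.rater
                or self_rating.criterion != rating.criterion):
            raise ValueError("A rating of someone else must be accompanied "
                             "by a self-rating on the same criterion.")
    # ... store both ratings in the community's records (not shown) ...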

Another idea is to educate people on their own beliefs and how these influence group behavior. Education is a key component that we have assumed will exist in a ratings-based society. Ratings criteria will need to be explained to people and, by themselves, will force a great deal of thought on what we value. This type of introspection is an important part of self and group improvement.

And yet another idea is to focus on policies. In a community, all policies are transparent and the focus would be on optimizing results. Policies that are solely antagonistic can be weeded out, even while recognizing the group bias that led to them. Other mitigation steps include crafting policy with third parties who have no stake in the outcome and no history from which bad policy springs.

Let’s look at an example of an individual belief that might disrupt group harmony. Many Americans express the following sentiment from time to time: this country needs a dictator to straighten us out. My favorite is “this is not a democracy”, a statement frequently made by managers, dads, and other authorities when faced with recalcitrant underlings. If someone says this it probably means they believe it to some extent. It is an easy sentiment to fall into when faced with group conflict and chaos. One might argue that this belief is only natural and any one person’s culpability for holding it is limited. This is especially true if they are not really acting on it.

The trouble is that this seemingly innocuous personal sentiment can transform itself into a very harmful group view if expressed systematically. This can be brought out by demagogic leaders. Suddenly, a personal view is “liberated” into the public arena. A ratings system can home in on and track issues like this, call them what they are (ie authoritarian sentiment), and bring to light the danger of holding any personal attachment to them.

Sentiments like these are obvious but sometimes dysfunctional societies don’t really know what their problem is. Argentina is a country that has performed sub-optimally for decades, both economically and politically. The Argentines themselves are aware of this and know better than anyone how difficult their society is to fix. In a sense, they rate themselves constantly through articles, research, books, etc. They are a very capable people individually but neither they nor outside observers seem able to pin down the exact behaviors they would need to change.

My personal view, for what it’s worth, is that the core of the problem lies in a bad relationship between the working class and the professional/upper classes. Argentina’s dominant political party is a nationalist workers’ party, which is generally hated by the professional and upper classes, who favor classical Western liberalism. The country thus oscillates between these two poles, never producing a stable direction.

This is well entrenched in Argentine society but, even if this diagnosis is correct, how would further introspection, further ratings if you will, break them out of it? This is not a question just for a far-off country. Our own politics is in danger of transforming us into a type of Argentina. Politics in the US, and in many Western nations, has become a struggle between inward-looking nationalists and traditional liberals (or neo-liberals). Both sides, however, are failures from a policy perspective and oscillating between them is not a recipe for true progress. But it is difficult to see how a ratings system can stop this.

Once norms have gelled, nation states are hard to change. Argentina goes through regular bouts of crisis and US politics is caught in a type of stasis, in which needed policy changes are impossible to achieve. But our hope is that nation states will fade away and be replaced by voluntary communities.

Motivation to cooperate

Let’s look at a very small society of three people who rate each other and deliver economic benefits to each other. We will call them Jack, Jill, and Mike. Jack and Jill both score Mike low but score each other high. Mike has good reason to be scored low: he’s dishonest, doesn’t work very hard, and tries to take advantage of others. He resents his low score and scores Jack and Jill low too. The ratings system is completely open and transparent. The question is: how do Jack and Jill rehabilitate Mike? This small society is economically self-sufficient and reliant on the output of all three people. Suppose Jack provides the food, Jill the clothes, and Mike the homes. Food, clothing, and shelter are all anyone needs in this society.

So in a completely open system, Mike would see all the ratings, but he could also rate the ratings he receives. Thus the group can debate the ratings and specify exactly what Mike is doing that is dragging his scores down. Specificity is important. If a ratings system is to function, it must do so in a spirit of objectivity and improvement. Fighting and insults aren’t going to work.

Another idea, which we’ve discussed, is to depersonalize the ratings system by agreeing to see aggregate ratings only. Thus Mike doesn’t know exactly what Jack and Jill scored him on particular items. He will still know he’s been scored low on average but this takes a little bit of the bite out because he can’t identify a single individual behind it.
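A minimal sketch of this depersonalized view, assuming ratings are stored as (rater, subject, criterion, score) records, might expose only per-criterion averages. The names and layout below are illustrative.

```python
# Sketch of the depersonalized view: the subject sees only per-criterion
# averages, never who gave which score. Assumes ratings are stored as
# (rater, subject, criterion, score) records; all names are illustrative.

from collections import defaultdict
from statistics import mean

def aggregate_view(ratings, subject):
    """Return {criterion: average score} for `subject`, hiding individual raters."""
    by_criterion = defaultdict(list)
    for rater, subj, criterion, score in ratings:
        if subj == subject:
            by_criterion[criterion].append(score)
    return {criterion: round(mean(scores), 2)
            for criterion, scores in by_criterion.items()}

ratings = [("Jack", "Mike", "honesty", 2), ("Jill", "Mike", "honesty", 3)]
print(aggregate_view(ratings, "Mike"))  # {'honesty': 2.5}
```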

Mike’s motivation to improve could further come at the expense of Jack and Jill. Any ratings system will produce a hierarchy based on the ratio of each individual’s rating to the group’s average rating. Jack and Jill’s ratios will be high, certainly higher than Mike’s. But as Mike improves, he not only improves his own ratio, he brings the others’ down as well. This notion, selfish and vain though it may be, could induce Mike to play the game. A ratings system attuned to presenting statistics in ego-enhancing ways might help us win cooperation from the recalcitrant.
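A small worked example (the scores are assumed purely for illustration) shows the dynamic: when Mike’s rating rises, his ratio improves while Jack’s and Jill’s fall, because the group average rises with him.

```python
# Worked example with assumed scores: each member's "ratio" is their own
# average rating divided by the group average. When Mike improves, his ratio
# rises and, because the group average rises too, Jack's and Jill's fall.

def ratios(scores: dict[str, float]) -> dict[str, float]:
    group_avg = sum(scores.values()) / len(scores)
    return {name: round(score / group_avg, 2) for name, score in scores.items()}

print(ratios({"Jack": 4.5, "Jill": 4.5, "Mike": 2.0}))
# {'Jack': 1.23, 'Jill': 1.23, 'Mike': 0.55}
print(ratios({"Jack": 4.5, "Jill": 4.5, "Mike": 4.0}))
# {'Jack': 1.04, 'Jill': 1.04, 'Mike': 0.92}
```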

Many aspects of the ratings system conspire to create people who will behave cooperatively. Many “bad” people are that way because they get away with it with minimal damage to their reputations, at least among the people who count. Our ratings system will be built to prevent that. The economic system will further discourage bad behavior by giving people a reasonable standard of living in the first place and putting a ceiling on how much wealth can be accumulated. Sociopaths will always exist, however, and a society can only treat this with proper attention to mental health and, for those actively dangerous to others, isolation.

Clearly every community will need some provision for justice when things or people go wrong. We will turn to that in a future edition.


Ratings and fairness

A serious ratings system is susceptible to charges of unfairness. In our case, since ratings are expected to be a foundational principle in our voluntary communities, the notion that people will dismiss them as unfair is not acceptable.

Let’s look at two people who participate in a ratings system, Alice and Susan. They’re sisters so they know each other well and have established opinions of each other. Alice sees Susan’s rating and thinks the following:

  • Susan did better than me but I think she’s a worse person.
  • Therefore, the ratings system is unfair.

or

  • Susan did worse than me but I know her to be the better person.
  • Therefore, the ratings system is not accurate.

Both of these situations badly undermine the system and should be avoided. We should note here that this example was chosen because the ratings system will be assessed against a very high bar: our personal knowledge of someone we know well. The better we know the person, the more strictly the ratings system will be judged for accuracy.

Let’s further suppose that Alice knows exactly why the system failed. Susan has better social skills so she gets higher ratings across the board. Alice may know, for instance, that Susan is less honest than she is but Susan still gets a better rating, even for honesty. Clearly Susan’s social skills have deceived her raters into thinking she is honest.

This is a tough nut to crack but it’s a problem that happens all the time. Social skill often “enhances” other abilities and masks deficiencies. People are highly susceptible to manipulation by the socially skillful.

The answer to this problem, from a high-level perspective, is education and proper categories. We can add to our list of “required courses” for prospective members of ratings-based societies: understanding people and psychological manipulation. Just being aware of the difference between honesty and charm should help alleviate any confusion between the two. The ratings system itself should then, of course, be properly broken down into orthogonal categories with clear definitions for each. Checks in the system can also alert raters that they may be confusing things if, say, someone always rates charming individuals as honest.
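One such check might look for raters whose honesty scores track the community’s charm scores too closely. The 0.9 correlation threshold, minimum sample size, and data layout below are assumptions; a real system would need considerably more care.

```python
# Sketch of a rater-bias check: flag a rater whose honesty scores closely track
# the community's charm scores, suggesting charm is being mistaken for honesty.
# The 0.9 threshold, minimum sample size, and data layout are assumptions.

from statistics import correlation, StatisticsError  # correlation: Python 3.10+

def possibly_conflating(rater_honesty: dict[str, float],
                        community_charm: dict[str, float],
                        threshold: float = 0.9) -> bool:
    subjects = sorted(set(rater_honesty) & set(community_charm))
    if len(subjects) < 3:
        return False  # too little overlap to say anything
    honesty = [rater_honesty[s] for s in subjects]
    charm = [community_charm[s] for s in subjects]
    try:
        return correlation(honesty, charm) > threshold
    except StatisticsError:
        return False  # constant scores: correlation is undefined
```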

Individuals will always disagree to some extent with the aggregate in a ratings system. The purpose of the aggregate, after all, is to put disparate opinions together to tell a simple story. But we should strive to keep outlier opinions (those far outside the standard deviation) to a minimum since they are the ones that later generate controversy and dissent. An outlier either reflects some new insight the rater has about the rated person, is a mistake, or is simply biased. An outlier opinion should be followed up to find out what is going on, especially if there is some special insight that would be useful to the group. Mistakes should also be flagged, when they are obvious, so the rater can correct them. And rater biases should be reflected in the community’s ratings of the raters themselves. Whenever an outlier rating is made, the software should alert the rater and ask them to confirm their choices. This will help raters correct errors and give them a chance to reflect, perhaps more objectively, on what they did.
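A minimal sketch of the outlier alert, assuming a two-standard-deviation cutoff (an arbitrary choice), could look like this:

```python
# Sketch of the outlier alert: before accepting a new rating, check whether it
# falls far outside the existing distribution and, if so, ask the rater to
# confirm. The two-standard-deviation cutoff is an assumption.

from statistics import mean, stdev

def is_outlier(new_score: float, existing_scores: list[float], k: float = 2.0) -> bool:
    if len(existing_scores) < 3:
        return False  # too few ratings to define an outlier
    mu, sigma = mean(existing_scores), stdev(existing_scores)
    if sigma == 0:
        return new_score != mu
    return abs(new_score - mu) > k * sigma

if is_outlier(1.0, [4.0, 4.5, 3.5, 4.0]):
    print("This rating is unusually far from the current average. Confirm or revise?")
```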

For the case of Alice and Susan, this would imply that Alice should come forward with her insight that Susan is not that honest. She would be doing the community a service. But despite her annoyance with Susan’s rating, we can easily see how Alice might refrain from doing her duty simply because she doesn’t really want to hurt her sister.

A profound change in culture is implied by this. We can surmise that Alice has no trouble divulging her insights about Susan’s honesty within her own family. It is likely that the entire family has a fairly accurate understanding of Susan, through their own observations and reinforcement by continuous testimony from members like Alice. So why is Alice comfortable being open within her family and not outside of it? We theorize a fairly simple dynamic at work here: the family largely takes care of its members regardless of their flaws. This removes the normal public constraints and allows people to be honest. We might further hypothesize an implicit bargain: that in exchange for security, families require openness from each other. Our ratings-based society, then, may require a family dynamic similar to this. Its members will view themselves as an extended family where some level of security is guaranteed (eg an income floor, social insurance, etc) but transparency will be required.

We might further speculate, along these lines, that the ratings system will protect its members by providing them with a large backing for any aggregate rating. A single individual will no longer be able to ruin someone’s reputation by speaking ill of them. Indeed, if the system works as intended, that opinion will already have been registered in the system. In our society, where we typically don’t rate each other (or give each other perfunctory good ratings), a single bad opinion carries a lot of weight. This is one reason why we are careful with people, like those outside our close family/friend circle, and avoid full transparency.

Importance of written ratings

When we look at product ratings we often want to know the numerical average, hopefully of a large number of people, along with meaningful written reviews. For a situation with thousands of ratings, one strategy is to look at the middle-scoring ones and then, of those, read the best-written reviews. The best and worst ratings usually have uninformative reviews – eg “It’s great!!!!!” or “It arrived broken :(”.

For a situation where the number of ratings is small, we should ignore the overall numerical average and look at the written reviews. Since reviews of people may not come in statistically significant numbers, we will rely heavily on the written reviews to judge the person in question. The written reviews, taken together, can then provide an overall impression of what we’re dealing with. In other words, we can synthesize our own numerical score for the person from the written reviews. This also solves problems of subjectivity in numerical ratings (different raters with the same opinion giving different numbers), ratings inflation, etc.

Our ratings system should provide for written reviews and especially encourage them when the number of ratings is low. It should also provide tools, perhaps with the help of AI, to synthesize the written reviews into a coherent overall score and break them into reasonable sub-categories (eg honesty, work ethic, etc).
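As a toy illustration only, a keyword-based synthesis might look like the following. A real system would presumably use a language model; the categories, keyword lists, and scoring scale here are purely assumptions.

```python
# Toy sketch of synthesizing written reviews into per-category scores. A real
# system would likely use a language model; the keyword lists, categories, and
# -1..+1 scale here are purely illustrative assumptions.

CATEGORY_KEYWORDS = {
    "honesty": {"honest": 1, "truthful": 1, "deceptive": -1, "lied": -1},
    "work-ethic": {"hardworking": 1, "diligent": 1, "lazy": -1, "unreliable": -1},
}

def synthesize_scores(reviews: list[str]) -> dict[str, float]:
    """Return a rough -1..+1 score per category from free-text reviews."""
    totals = {category: 0 for category in CATEGORY_KEYWORDS}
    counts = {category: 0 for category in CATEGORY_KEYWORDS}
    for review in reviews:
        words = review.lower().split()
        for category, keywords in CATEGORY_KEYWORDS.items():
            for word, value in keywords.items():
                if word in words:
                    totals[category] += value
                    counts[category] += 1
    return {c: (totals[c] / counts[c] if counts[c] else 0.0) for c in totals}

print(synthesize_scores(["He is honest and diligent", "A bit lazy but truthful"]))
# {'honesty': 1.0, 'work-ethic': 0.0}
```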

Ratings and human limitations

No matter how motivational a ratings system might be, it will reach a natural limit with individuals. Many people know they must improve some aspect of their lives and simply can’t. They’ve been rated by others and have rated themselves accurately. Perhaps they’re trying to quit a bad habit, get in shape, stop getting angry for no reason, etc. The list of human shortcomings is endless. In most cases we settle into a comfortable acceptance of our own and others’ limitations. If we are urging someone else to change, we recognize that the conflict isn’t worth it. If we’re rating ourselves, we calculate that the protracted struggle required isn’t worth it. It is possible that an improved ratings system will lead to better results in this regard but it is reasonable to expect that, eventually, we will run into some hard limits.

This is not to say that our ratings system should eschew individual improvement. Indeed, it should be engineered to motivate and reward those who take its advice seriously. And to avoid personal resentments, we’ve identified anonymity and specificity as mitigators. And, whether folks choose to act or not, awareness of their own flaws is still important information to have.

Nevertheless our main goal is to focus on traits that affect our larger society. It might be hard to know what to attack in terms of individual thought, or know whether we are making any progress. But a few ideas come to mind. Dishonesty is a key one, obviously. So is belief in the false or fanciful. Other specific harmful ideas like racism or the notion that a dictatorship would be good should also be targeted. We will get into those below in more detail. In terms of economics, productivity and quality are important.

Beyond this we are trying to foster a culture of creative, innovative, and perhaps unconventional thought. People need to learn to think for themselves and not follow the flawed thinking of the group. However, the system would encourage the scientific method where statements of belief are treated as hypotheses until backed by evidence. Policy ideas, in particular, would be rated against objective standards by which they can be optimized. Unconventional thought would be rewarded and encouraged but not accepted until proven. A culture where this is taken for granted will dampen all manner of harmful partisan grandstanding.

Drawing in the ignorant

We’ve talked a lot about improving our information through ratings, using civil debate, and filtering out bad actors. But how do we engage the unengaged? We are talking here about the so-called “low information voter” who makes decisions largely reflexively and is not particularly curious about the quality of his information. We’ve discussed having educational material and “deep canvassing”. But deep canvassing, by its own admission, only persuades a small segment of the electorate (4-6%).

One way to draw this low information person into the system might be to allow him to tune it to his own biased, low information views. We can initially provide ready-made “bubbles” for people to join. Once comfortable in a bubble, this person will have the ability to explore other points of view (other bubbles). That could help the curious, but we are dealing with an incurious person here. However, there will be an incentive to explore because it increases the user’s “open-mindedness” score. Once a user steps outside a bubble, such a score could go up automatically. Further increases would come from engaging with the other side in constructive dialog or debate, contributing information, etc. Just the act of changing filters or weighting constants will help. Basically, exploration and engagement outside one’s normal bubble will be encouraged and mutually beneficial.
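A sketch of how such a score might be updated is below. The event names and point values are illustrative assumptions rather than a worked-out scoring scheme.

```python
# Sketch of an "open-mindedness" score that rises as a user explores and
# engages outside their bubble. Event names and point values are assumptions.

OPEN_MINDEDNESS_POINTS = {
    "viewed_other_bubble": 1,      # simply stepping outside the home bubble
    "changed_filter_weights": 1,   # adjusting filters or weighting constants
    "constructive_dialog": 5,      # civil exchange with the other side
    "contributed_information": 3,
}

def update_score(score: int, events: list[str]) -> int:
    return score + sum(OPEN_MINDEDNESS_POINTS.get(e, 0) for e in events)

print(update_score(0, ["viewed_other_bubble", "constructive_dialog"]))  # 6
```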

Clearly gamification will play a large role in this undertaking. Users who see their rating points go up in a responsive and attractive way will be encouraged to keep exploring. So will having a clear path to improvement, such as hints about what to do next and how many “points” each task might be worth.

The low information voter will now, at least, be exposed to the other side. Furthermore, he will realize that anyone who “hate rates” him can be filtered out, leaving him with only respectful opponents. For new users, the default civility threshold should be set fairly high so that they witness, from the outset, a tone of civility and hence a non-threatening atmosphere.
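A minimal sketch of that default filter, assuming a 1-5 civility scale and an arbitrary threshold of 4.0, might look like this:

```python
# Sketch of the default civility filter: ratings from raters whose own civility
# score falls below a threshold are hidden from new users. The 4.0 default on a
# 1-5 scale is an assumed value.

def visible_ratings(ratings, civility_scores, min_civility: float = 4.0):
    """Keep only ratings whose rater meets the civility threshold."""
    return [r for r in ratings
            if civility_scores.get(r["rater"], 0.0) >= min_civility]

ratings = [{"rater": "Ann", "score": 2}, {"rater": "Troll42", "score": 1}]
civility_scores = {"Ann": 4.6, "Troll42": 1.2}
print(visible_ratings(ratings, civility_scores))  # only Ann's rating remains
```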