
User education and self-improvement

From Information Rating System Wiki
Revision as of 16:13, 26 September 2024 by Pete (talk | contribs)

Main article: Ratings system

Education

The ratings system we are designing could help people become better thinkers: question their assumptions, break bad cognitive habits, and build new ones. Courses on logic, debate, and identifying misinformation could be offered. Soft-skills courses on empathy, constructive communication, and emotion management could also be offered to promote civility. Helping people think more broadly and open-mindedly would complement a curriculum of this nature. People who complete such a course could receive a certification that confers ratings benefits, and for those who have not yet done so, the course would be an easy way to demonstrate improvement and work toward improved ratings.

Educational material

Our rating system should provide an array of curated sources to help users analyze information.

This material could be used to create courses on information literacy, spotting misinformation, persuasive debate, etc. Those passing the course could be provided with a certificate or a plus-up in their ratings. In any event, the information should be presented in an obvious, easily accessible way, and efforts should be made to encourage users to engage with it.

An interesting book written in the wake of the 2016 election is Timothy Snyder's "On Tyranny", essentially a guide to what to watch for and how to fight incipient tyranny. A large part of this is becoming familiar with how information is distorted and what we can do to inoculate ourselves against bias. We are building an information system that must educate its users in how to consume information. In the book, Snyder provides 20 rules for combating incipient tyranny, a few of which are listed here:

  • Believe in Truth
  • Investigate
  • Remember professional ethics
  • Contribute to good causes
  • Learn from peers in other countries
  • etc

These are among the rules directly amenable to our rating system. A few of the others ask that we take part in demonstrations and meet in person, activities our system might help organize but does not directly enable. Snyder, a student of European history (particularly Eastern Europe), wrote the book as a guide for Americans to prevent authoritarianism from taking hold here. Many of its recommendations are things we value (e.g., believe in truth, investigate) and can be part of our ratings system.

Needless to say, we also value Snyder’s overall objective of keeping tyranny at bay and I submit that it is one of the fundamental justifications for undertaking this project. Misinformation may have been a curiosity in the past. But no longer.

“On Tyranny” highlights the notion that we can create ratings to suit particular purposes. In this case the purpose is preventing authoritarian rule, particularly in the US, but it could be anything: medical advice, humor, collaborative engineering design, etc. With a diverse enough user base, communities of interest should arise and create ratings suited to their purposes.

Self-improvement

We expect the ratings system to help people improve, not just critique them. Bob might acknowledge that he is biased or not particularly competent in a field, but then he would want feedback on how to improve. "Be less biased" or "be more competent" is not an answer. Self-improvement is hard, but if we build a ratings system we should build it with the aim of helping people improve themselves. After all, our overall goal is societal self-improvement, which starts with the individual.

How would our system achieve this? Numerical ratings should come with comments by default, particularly for people but also for opinions and arguments. If a person or opinion is rated negatively (e.g., R < 0.5), there should be a place to give a reason and a comment on how the person can improve. In some contexts (decided upon by the community) the reason and comment could be required; if not provided, the rating would not be counted. Another way to handle this would be to rate the numerical rating together with its reason and improvement comment: a number provided on its own would be rated lower than one that came with a reason and an improvement comment.

Users would heed the comments to improve their own ratings. But the trouble with self-improvement is that it tends to be a long-term proposition. It takes a while to learn the skills, the mental habits, the improved judgment, etc., and it takes even longer for people to notice that someone has really changed. Our system could perhaps short-circuit this process by having users reveal what they are doing to improve in light of a negative rating and demonstrate their skills on new arguments, or by revising old arguments they have made in the past, perhaps the very ones they received bad ratings for. Once they have revised their work, the raters would revise their ratings upward to reflect the improvement.

This process demands participation by contributors and raters alike and binds them in an ongoing relationship. Raters would know that by not participating beyond an initial negative rating, they stand to be rated negatively themselves for their lack of continuity. This would hopefully keep raters engaged enough to participate constructively for the life of the issue in question.
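The contributor–rater loop described above can be sketched as follows. Every name and field here is hypothetical; a real system would tie this bookkeeping into the rating computation itself:

```python
from dataclasses import dataclass, field


@dataclass
class Issue:
    """One rated argument and the follow-up state around it."""
    text: str
    negative_raters: set = field(default_factory=set)  # raters who gave R < 0.5
    followed_up: set = field(default_factory=set)      # raters who re-rated after revision
    revised: bool = False


def revise(issue: Issue, new_text: str) -> None:
    """Contributor revises the argument in light of negative ratings."""
    issue.text = new_text
    issue.revised = True


def re_rate(issue: Issue, rater: str) -> None:
    """Rater revisits the revised argument and updates their rating."""
    issue.followed_up.add(rater)


def negligent_raters(issue: Issue) -> set:
    """Raters who left a negative rating but never followed up after the
    revision; they risk being rated down for lack of continuity."""
    if not issue.revised:
        return set()
    return issue.negative_raters - issue.followed_up
```

Until the contributor revises, no rater is considered negligent; after a revision, any negative rater who never re-rates shows up in `negligent_raters`, giving the system a concrete hook for penalizing disengagement.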