
Establishing Trust

One of the big problems users face is determining what trust value to use, especially at first. People rarely assign explicit numerical values to how much they trust their peers, much less calculate those values from controlled studies or their own experience. More likely, they just have an intuitive sense of when someone is trustworthy on a particular subject.

In the trust network we have no choice but to assign numbers to our trust levels. Does my daughter think I'm 70 or 80% trustworthy on a physics homework question? She probably doesn't know herself, but she clearly has some level of trust or she wouldn't be asking. This leads to a UI question: what is the best way to get users to input their level of trust? Instead of raw numbers we might use a cold-to-hot color bar, various gradations of a smiley face, etc.
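
As a concrete illustration, here is a minimal sketch (in Python) of translating such UI inputs into numeric trust. The five-point smiley scale and the anchor values are assumptions for illustration, not part of any settled design:

  # Hypothetical mapping from qualitative UI ratings to a trust value in [0, 1].
  # Scale points and anchor values are illustrative assumptions only.
  SMILEY_TO_TRUST = {
      "very_unhappy": 0.1,
      "unhappy": 0.3,
      "neutral": 0.5,
      "happy": 0.7,
      "very_happy": 0.9,
  }

  def trust_from_smiley(rating: str) -> float:
      """Translate a smiley-face selection into a default trust level."""
      return SMILEY_TO_TRUST[rating]

  def trust_from_colorbar(position: float) -> float:
      """Translate a cold-to-hot color-bar position (0.0 = cold, 1.0 = hot)
      into a trust level; here simply the position itself, clamped."""
      return min(max(position, 0.0), 1.0)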

The trust network itself provides us some benefits in dealing with this:

  • It gives us a way to see what others think of a recently added node. If we're asked to trust a new node, what do that node's immediate peers think of it? We could default to that trust level for starters (see the sketch after this list).
  • It gives us a way to collect information and calculate trust levels over time. It can store the queries we've made and who answered them, and provide a way to record our own satisfaction with the answers. It could then score our interactions with each node that answered us.
  • It lets us record how servers answer questions, to the extent privacy concerns allow. Do they take their time to answer or do they answer immediately? Are the answers long or short? Are they cogent? Do they engage in petty attacks on their peers? These metrics might give us some proxies for trust.
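
As a rough sketch of the first two points (in Python, with assumed data structures and an illustrative smoothing constant), we might seed trust in a new node from its peers' opinions, then fold our satisfaction with its answers into the stored value over time:

  # trust_in maps each peer to our current trust in it; peer_ratings maps
  # each immediate peer of a new node to the trust that peer reports in it.
  # Both structures and the constant alpha are assumptions for illustration.

  def default_trust(peer_ratings: dict[str, float],
                    trust_in: dict[str, float]) -> float:
      """Seed trust in a new node with its peers' opinions, weighted by how
      much we trust each of those peers."""
      total_weight = sum(trust_in.get(p, 0.0) for p in peer_ratings)
      if total_weight == 0:
          return 0.5  # neutral prior when no trusted peer has an opinion
      weighted = sum(trust_in.get(p, 0.0) * r for p, r in peer_ratings.items())
      return weighted / total_weight

  def update_trust(current: float, satisfaction: float,
                   alpha: float = 0.2) -> float:
      """Fold our satisfaction with an answer (0..1) into the stored trust
      using an exponential moving average, so trust is earned over time."""
      return (1 - alpha) * current + alpha * satisfaction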

On this last point, it might help to view trust as an earned accomplishment rather than something bestowed on you by others. That is, good behavior -- i.e., thoughtful answers -- leads to higher trust. This encourages the right mindset on both sides: clients will approach trust as something to withhold until proven, and servers will view it as something that requires work -- real thought rather than off-the-cuff answers.

Trust in Information

Trust is normally thought of as trust in a person, but the type of information being presented is often just as important. A salesperson's pitch for a product is obviously biased and is trusted less than the same salesperson's remarks on neutral subjects. Indeed, salespeople often start by talking about other things in the hope of building a higher default trust that then carries over into the product pitch.

To help, we might try classifying information into baseline levels of trustworthiness. The following list is incomplete and based on a few arbitrary examples; it forms a very rough continuum from reliable to unreliable.

  1. Established fact -- Generally uncontroversial, many verification methods, understood for a long time ==> Population of NYC. Conservation of energy.
  2. Expert consensus -- The result of many studies and review. Almost an established fact ==> Raising interest rates decreases inflation.
  3. Statistical data and studies -- Rigorous experimentation or field studies done by experts, or reviews of such work ==> Are eggs good for you?
  4. Theory -- A principle supported by scientific study but not completely proven or established (often difficult to prove) ==> Big Bang theory, string theory, etc.
  5. Philosophy -- An abstract general idea that is not explicitly testable but pursued through rigorous inquiry ==> How should we conduct ourselves? What is knowledge?
  6. Conjecture and hypothesis -- A testable proposition intended for scientific scrutiny ==> Efficacy of the latest Covid-19 vaccine?
  7. Speculation -- Semi-informed prediction, often by knowledgeable people, but may never be tested rigorously ==> Best investment for the remainder of this year?
  8. Persuasion -- Subjective viewpoint based on evidence but usually tied to an agenda ==> Newspaper opinion columns.
  9. Marketing -- An attempt to sell a product/service. Usually easy to identify. In many cases fact-based but can use propaganda, ideology, etc. ==> What's the best scooter under $1000?
  10. Anecdotal evidence -- Personal story, perhaps true but unverified or unverifiable ==> The Wim Hof method makes me feel great!
  11. Personal opinion -- Subjective view with no expectation of verification ==> Are Vermeer paintings beautiful?
  12. Ideology / Religion -- A systematized set of beliefs, usually held by many but unverifiable. Often borrows from philosophy, fact, opinion, anecdotal evidence, propaganda, etc. ==> Do you believe in socialism? An afterlife?
  13. Propaganda -- Explicitly biased information intended to win people over to a political agenda. Like misinformation but more visceral and less factual ==> Immigrants are bad people out to get us.
  14. Misinformation -- A close cousin of propaganda. False statements disguised as fact or statistical studies in order to deceive. ==> Immigrants commit more crimes than we do.
  15. Satire, parody, sarcasm -- Not intended as informational but a deliberate distortion for humor or to make a point.

We could likely use AI to classify questions and answers along a continuum of this type to establish baseline levels of trust, or at least to present the classification to users: "Hi, you are likely being presented with someone's personal opinion. We recommend a trust level of 0.5 for that." Alternatively, we could ask users to take a survey at the beginning to help them establish their default trust levels for various types of information.
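
Here is a sketch of what that could look like (in Python). classify_information is a placeholder for whatever AI model ends up being used, and the per-category trust values are illustrative guesses, not recommendations from the design:

  # Hypothetical baseline trust per category from the continuum above.
  # Values are illustrative assumptions only.
  BASELINE_TRUST = {
      "established_fact": 0.95,
      "expert_consensus": 0.9,
      "statistical_study": 0.8,
      "theory": 0.7,
      "philosophy": 0.6,
      "conjecture": 0.55,
      "speculation": 0.45,
      "persuasion": 0.4,
      "marketing": 0.35,
      "anecdote": 0.3,
      "personal_opinion": 0.5,
      "ideology_religion": 0.3,
      "propaganda": 0.1,
      "misinformation": 0.05,
      "satire": 0.2,
  }

  def classify_information(text: str) -> str:
      """Placeholder: in practice this would call an AI classifier that
      labels the text with one of the categories above."""
      raise NotImplementedError

  def advise_user(text: str) -> str:
      """Build the kind of advisory message described above."""
      category = classify_information(text)
      trust = BASELINE_TRUST[category]
      return (f"Hi, you are likely being presented with "
              f"{category.replace('_', ' ')}. "
              f"We recommend a trust level of {trust} for that.")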

As we identified last time, many of these categories can be sub-classified as controversial or not. That Biden won the 2020 election is an established fact but is controversial because it has been targeted by misinformation and propaganda. Other questions, such as the legality of abortion, are matters of personal opinion but are still controversial because they touch on public policy and have likewise been affected by propaganda, religion, etc. Identifying controversial views might help adjust baseline levels of trust, as long as the controversy is grounded in something other than outright misinformation.

Many of these categories, especially 1-7, can be divided into knowledge that's widely known versus knowledge known only to a few. Many questions will fall in the latter category, because the former can basically "be googled". This is what gives rise to concerns about trust attenuation and its attendant solutions (e.g., signed registries).
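
For a sense of why attenuation matters, here is a minimal illustration assuming the common multiplicative model of chained trust (an assumption; the design has not committed to this formula):

  from math import prod

  def path_trust(hop_trusts: list[float]) -> float:
      """Trust in information relayed through a chain of peers, assuming
      trust multiplies (and therefore decays) at each hop."""
      return prod(hop_trusts)

  # Three hops at 0.9 each already attenuate to about 0.73:
  assert abs(path_trust([0.9, 0.9, 0.9]) - 0.729) < 1e-9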

Information and Bad Behavior

Here are a few classes of problematic information that come to mind but don't fit neatly into the above continuum:

  1. Protected information -- Security or privacy information that shouldn't be shared ==> What's Joe's private key? If we find people exchanging information like this, what should be done?
  2. Unethical or illegal information -- Information that is illegal for us to have circulating on the system and should be taken down ==> Can you help me assassinate my business partner?
  3. Information that might have to be moderated out ==> Support for ethnic cleansing, genocide, etc.

I'm not sure how content moderation usually works, but one option is to set trust to zero for the offending nodes.
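
A minimal sketch of that idea (in Python, with assumed data structures): once a node is flagged, its trust is zeroed, and ordinary updates can no longer resurrect it:

  # trust_in maps each node to our current trust in it; flagged_nodes records
  # nodes caught circulating disallowed information. Both are assumptions
  # for illustration.

  def moderate(node: str,
               trust_in: dict[str, float],
               flagged_nodes: set[str]) -> None:
      """Zero out trust in an offending node and remember the flag so trust
      cannot silently recover through later updates."""
      flagged_nodes.add(node)
      trust_in[node] = 0.0

  def update_trust_guarded(node: str, current: float, satisfaction: float,
                           flagged_nodes: set[str],
                           alpha: float = 0.2) -> float:
      """Like an ordinary trust update, but flagged nodes stay at zero."""
      if node in flagged_nodes:
          return 0.0
      return (1 - alpha) * current + alpha * satisfaction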