
Trust

From Information Rating System Wiki
Revision as of 20:00, 23 July 2024 by Pete (talk | contribs)

The dictionary defines trust as the assured reliance on the character, ability, strength, or truth of someone or something. For our purposes we might construct a slightly simpler and more general definition: trust is the extent to which something or someone matches our expectations. For the ratings system, this takes the form of a numerical value. For example, Alice trusts Bob to tell her the truth 80% of the time. Thus if Bob makes 100 statements, Alice expects that, on average, 20 of them are false. This lowers the confidence she has in any given statement Bob makes to her. For computational purposes, she can "derate" a statement made by Bob by a factor of 0.8.
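As a minimal sketch of the derating step above, assuming trust acts as a simple multiplicative factor on the probability Alice assigns to a claim (the function name `derate` is illustrative, not part of the system):

```python
def derate(claim_probability, trust):
    """Scale the probability assigned to a claim by the trust in its source."""
    return claim_probability * trust

# Bob asserts a claim outright (probability 1.0); Alice trusts him 80%,
# so she treats the claim as true with probability 0.8.
print(derate(1.0, 0.8))  # -> 0.8
```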

The ratings system can be viewed as a tree of nodes descending from the node asking the question. Alice might ask her network if Jones is a good mechanic. Her direct contacts, Bob and Carol, answer the question and Alice aggregates those opinions to get an answer. Alice can choose from various aggregation algorithms.
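The structure above can be sketched in code. This is a hypothetical illustration, not the system's actual data model: each direct contact is represented as a (trust, answer) pair under the asking node, and the aggregation algorithm is pluggable. The trust-weighted average shown is just one stand-in algorithm:

```python
def aggregate(answers, algorithm):
    """Combine the answers from a node's direct contacts using a chosen algorithm."""
    return algorithm(answers)

def trust_weighted_average(answers):
    # One possible aggregation algorithm: average the answers,
    # weighting each contact's answer by the asker's trust in them.
    total_trust = sum(trust for trust, _ in answers)
    return sum(trust * answer for trust, answer in answers) / total_trust

# Alice's direct contacts, Bob and Carol, answering "is Jones a good mechanic?"
contacts = [(1.0, 0.7), (1.0, 0.7)]  # (trust in contact, contact's answer)
print(aggregate(contacts, trust_weighted_average))  # -> 0.7
```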

Let's do a simple example. Suppose Alice trusts Bob and Carol 100%. They both answer that yes, Jones is a good mechanic 70% of the time. Using a Bayesian algorithm, Alice can combine these two results to calculate that Jones is a good mechanic 84.5% of the time. However, if Alice's trust in her contacts falls to 90%, her aggregated result drops to 80.5%. A sandbox app was developed to do calculations like this one and to play with Bayesian updating. The app is based on a paper by Sapienza and Falcone, who studied trust in Bayesian systems.
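The 84.5% figure in the full-trust case can be reproduced with standard Bayesian updating, assuming a uniform 50/50 prior and independent reports (the reduced-trust case additionally depends on the derating scheme from the Sapienza and Falcone paper, so it is not shown here):

```python
def combine(p1, p2):
    # Posterior probability that Jones is a good mechanic given two
    # independent reports of p1 and p2, starting from a 50/50 prior:
    # the prior cancels, leaving p1*p2 / (p1*p2 + (1-p1)*(1-p2)).
    agree = p1 * p2
    disagree = (1 - p1) * (1 - p2)
    return agree / (agree + disagree)

# Bob and Carol both report 70%; combined, Alice gets about 84.5%.
print(round(combine(0.7, 0.7), 3))  # -> 0.845
```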

From a probabilistic modeling perspective, 90% trust means that, given 100 answers from a source, Alice thinks 90 are true/accurate and 10 are not. But what does that 10% really mean? It turns out we can show that it is equivalent to answering randomly (see Sapienza Trust Model Derivation Showing Equivalence with Random Answers). In other words, the 10 bad answers are the same as the source answering the question randomly.
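The equivalence can be sketched as follows, assuming binary questions and a fair coin for the random answers (the function name is illustrative): a source that is correct with probability T behaves like one that answers truthfully a fraction x of the time and randomly the rest, where T = x + (1 - x)/2, giving x = 2T - 1:

```python
def truthful_fraction(trust):
    # Solve trust = x + (1 - x) * 0.5 for x, the fraction of
    # questions the source answers truthfully rather than randomly.
    return 2 * trust - 1

# A 90%-reliable source behaves like one that answers truthfully 80%
# of the time and flips a fair coin for the remaining 20% -- and half
# of those coin flips produce the 10 bad answers.
print(round(truthful_fraction(0.9), 3))  # -> 0.8
```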