Main article: Ratings system
The dictionary (https://www.merriam-webster.com/dictionary/trust) defines trust as the assured reliance on the character, ability, strength, or truth of someone or something. For our purposes we might construct a slightly simpler and more general definition: trust is the extent to which something/someone matches our expectations. For the ratings system, this takes the form of a numerical value. For example, Alice trusts Bob to tell her the truth 80% of the time. Thus if Bob makes 100 statements, Alice expects about 20 of them to be false. This lowers the confidence she has in any given statement Bob makes to her. For computational purposes, she can "derate" a statement made by Bob by a factor of 0.8.
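To make the derating concrete, here is a minimal sketch in Python (the function name and the simple multiplicative model are illustrative assumptions, not the ratings system's specified API):

```python
def derate(confidence: float, trust: float) -> float:
    """Scale the confidence in a statement by the trust in its source.

    A simple multiplicative model: a statement Bob asserts outright
    is worth only `trust` to Alice.
    """
    return confidence * trust

# Alice trusts Bob 80%, so a flat assertion from Bob is derated to 0.8.
print(derate(1.0, 0.8))  # 0.8
```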
The ratings system can be viewed as a tree of nodes descending from the node asking the question. Alice might ask her network if Jones is a good mechanic. Her direct contacts, Bob and Carol, each give their opinion, and Alice aggregates those opinions into an answer. Alice can choose from various aggregation algorithms.
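The tree structure can be sketched as follows (the node layout and the trust-weighted average are illustrative assumptions; a weighted average is only one possible aggregation algorithm, and the Bayesian combination discussed next gives a different result):

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    trust: float = 1.0            # how much the parent trusts this node
    opinion: float | None = None  # this node's own answer, if it has one
    contacts: list[Node] = field(default_factory=list)

def aggregate(node: Node) -> float:
    """One possible aggregator: trust-weighted average over contacts.

    A contact with no direct opinion recursively aggregates its own
    subtree, so opinions flow up the tree toward the asking node.
    """
    weights, values = [], []
    for c in node.contacts:
        weights.append(c.trust)
        values.append(c.opinion if c.opinion is not None else aggregate(c))
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

alice = Node("Alice", contacts=[
    Node("Bob", trust=1.0, opinion=0.7),
    Node("Carol", trust=1.0, opinion=0.7),
])
print(aggregate(alice))  # 0.7 under a simple weighted average
```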
Let's work through a simple example. Suppose Alice trusts Bob and Carol 100%. They both answer that Jones is a good mechanic with 70% confidence. Using a Bayesian algorithm, Alice can combine these two results to calculate that Jones is a good mechanic 84.5% of the time. However, if Alice's trust in her contacts falls to 90%, her aggregated result drops to 80.5%. A sandbox app (https://peerverity.pages.syncad.com/trust-model-playground/) was developed to do calculations like this one and to play with Bayesian updating. The app is based on a paper by Sapienza and Falcone (https://ceur-ws.org/Vol-1664/w9.pdf), who studied trust in Bayesian systems.
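A minimal sketch of the calculation, assuming independent reports and a 50/50 prior (the derating step shown for the 90% case is one plausible model based on the random-answer interpretation below; the sandbox app's exact update differs in detail, so its 80.5% figure will not match this sketch exactly):

```python
def bayes_combine(reports: list[float], prior: float = 0.5) -> float:
    """Combine independent probability reports by Bayesian updating.

    Each report r multiplies the prior odds by the likelihood
    ratio r / (1 - r) in favor of "Jones is a good mechanic".
    """
    odds = prior / (1 - prior)
    for r in reports:
        odds *= r / (1 - r)
    return odds / (1 + odds)

# Full trust: two independent 70% reports combine to ~84.5%.
print(round(bayes_combine([0.7, 0.7]), 3))  # 0.845

# One plausible derating model (an assumption): with trust t, a report
# is mixed with a 50/50 random answer before combining.
t = 0.9
derated = [t * r + (1 - t) * 0.5 for r in (0.7, 0.7)]
print(round(bayes_combine(derated), 3))  # ~0.819 under this model
```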
From a probabilistic modeling perspective, 90% trust means that, given 100 answers from a source, Alice thinks 90 are true/accurate and 10 are not. But what do those 10 really mean? It turns out we can show that the "untrustworthy" part of the trust is equivalent to answering randomly: with probability 0.9 the source reports the truth, and with probability 0.1 it picks an answer at random. (Note that some of the random answers will be correct by chance, so a 90%-trusted source is actually right more than 90% of the time.)
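A small simulation of that mixture model (the simulation itself is just an illustration; the 0.9/0.1 split is the example's):

```python
import random

def answer(truth: bool, trust: float) -> bool:
    """Source model: report the truth with probability `trust`,
    otherwise pick an answer uniformly at random."""
    if random.random() < trust:
        return truth
    return random.random() < 0.5

# With trust 0.9 and a true fact, the source answers correctly about
# 0.9 * 1.0 + 0.1 * 0.5 = 95% of the time: half of the random answers
# land on the truth by accident.
trials = 100_000
hits = sum(answer(True, 0.9) for _ in range(trials))
print(hits / trials)  # ~0.95
```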
There are other options. The source may be lying randomly, or it may have some propensity toward bias. Random lying means the source answers randomly among all answers except the truth. A biased source skews toward a particular answer regardless of the truth, and the two can combine: a source may lie in a biased way, lying but skewed toward a particular outcome.
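These source models can be written as conditional answer distributions. The sketch below is a hypothetical encoding for a yes/no question (the model names and the bias parameter are illustrative assumptions):

```python
def p_yes(truth: bool, model: str, trust: float = 0.9, bias: float = 0.8) -> float:
    """P(source answers "yes" | the truth) under several source models.

    With probability `trust` the source reports the truth; with
    probability 1 - trust it misbehaves according to `model`. For a
    yes/no question a "biased liar" collapses into the plain liar or
    biased cases; the distinction matters with more than two answers.
    """
    honest = 1.0 if truth else 0.0
    misbehave = {
        "random": 0.5,                # uniform over both answers
        "random_liar": 1.0 - honest,  # anything except the truth
        "biased": bias,               # favors "yes" regardless of truth
    }[model]
    return trust * honest + (1 - trust) * misbehave

print(p_yes(True, "random"))       # ~0.95
print(p_yes(True, "random_liar"))  # ~0.9
print(p_yes(False, "biased"))      # ~0.08
```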
Users of the ratings system software will need ways to establish trust levels for the people and information they interact with.