Main article: Ratings system
Let's take a very simple situation. You want to know whether it is going to rain tomorrow. You don't know so you ask two knowledgeable sources this question. One of them believes it will rain with a probability of 60%. The other believes it will rain with a probability of 80%. We can sketch this situation as follows:
You combine these probabilities using some aggregation technique. One such technique is Bayes' equation which, when applied to this case, gives us:

P = (0.6 × 0.8) / (0.6 × 0.8 + 0.4 × 0.2) = 0.48 / 0.56 ≈ 0.857
Another technique is a straight average of the two answers:

(0.6 + 0.8) / 2 = 0.7
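The two aggregation techniques can be sketched in a few lines of code. This is only an illustration; the function names are mine, and the Bayes form shown is the two-source case used in this article (equivalent to multiplying the odds of each source):

```python
def bayes(p, q):
    # Bayesian combination of two independent probability estimates;
    # equivalent to multiplying the odds p/(1-p) and q/(1-q).
    return p * q / (p * q + (1 - p) * (1 - q))

def average(p, q):
    # Straight average of the two answers.
    return (p + q) / 2

print(bayes(0.6, 0.8))    # ≈ 0.857
print(average(0.6, 0.8))  # 0.7
```

Note how Bayes pushes the combined value above either input, while the average stays between them.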
There are other aggregation techniques, and which one we pick depends on the nature of the data. For Bayes, we assume that our two weather predictors are independent and that each has some test by which they arrive at their probabilities. Since the tests are independent, they tend to reinforce each other. Bayes is very rigorous about data and is most useful in situations where we have scientifically controlled tests (eg the efficacy of a new medicine for curing a disease). In situations where we don't, serious problems can arise, so a better intuitive understanding of Bayes will help users decide when it applies. The Bayes' equation also restricts our understanding of trust to some degree, and there are alternative approaches to deal with that.
The averaging technique, an obvious alternative in light of these problems, suits less rigorous data. Here we might have two friends who are providing off-the-cuff opinions on the chance of rain tomorrow. For now, however, to keep things simple, we will continue with Bayes.
This example assumes that we trust our two sources. But what if we only partially trust them? In that case we would expect to reduce the weight of the answers we get from them. Let's suppose we have a 70% trust in our first source and a 90% trust in our second source:
We can now modify our probabilities using the following equation:

P′ = P0 + T × (P − P0)

where

P0 is the nominal probability, ie 50%
P′ is the modified probability
T is the Trust
P is the Probability assuming complete Trust
Note that for zero trust this equation reduces the probability to 50%, which is the same as a random answer and provides no meaningful information.
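The trust modification can be sketched as a one-line function (the name `modify` is mine, not from the paper); note how zero trust collapses any answer to 50%:

```python
def modify(p, trust, nominal=0.5):
    # Pull a source's probability toward the nominal value (50%)
    # in proportion to how little we trust the source.
    return nominal + trust * (p - nominal)

print(modify(0.6, 0.7))  # 0.57
print(modify(0.8, 0.9))  # 0.77
print(modify(0.8, 0.0))  # 0.5 -- zero trust gives a random answer
```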
With this equation in mind we calculate our new modified probabilities:

0.5 + 0.7 × (0.6 − 0.5) = 0.57 for the first source
0.5 + 0.9 × (0.8 − 0.5) = 0.77 for the second source
Notice how the Trust values pushed both probabilities closer to 0.5.
We can now aggregate these values using the Bayes' equation:

(0.57 × 0.77) / (0.57 × 0.77 + 0.43 × 0.23) ≈ 0.816
Note how the presence of less than 100% trust reduces our certainty in the final answer (from 0.857 to 0.816).
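The whole two-step procedure, trust modification followed by Bayesian aggregation, can be sketched as follows (the helper names are mine):

```python
def bayes(p, q):
    # Bayesian combination of two independent probabilities.
    return p * q / (p * q + (1 - p) * (1 - q))

def modify(p, trust, nominal=0.5):
    # Discount a probability toward 50% by the trust factor.
    return nominal + trust * (p - nominal)

# 60% chance of rain from a source trusted at 70%,
# 80% chance of rain from a source trusted at 90%:
p1 = modify(0.6, 0.7)    # 0.57
p2 = modify(0.8, 0.9)    # 0.77
answer = bayes(p1, p2)
print(round(answer, 3))  # 0.816
```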
This equation and the Bayes equation as used here are taken from a paper by Sapienza and Falcone. This handy web application, based on the paper, allows you to do calculations with your own trust levels and probabilities.
An early sandbox demo version of the ratings system is also available which simulates the network and has several of the aggregation algorithms. Included in this are some notes on setting up and using the sandbox, notes on using the algorithm interface which permits the user to add their own aggregation algorithm, exercising the algorithm interface with more complex data types, and a discussion of user input for continuous distributions and complex trust.
The network in this example consists of one level, but we can have more. If our two sources in turn rely on their sources we might have a situation like this:
Here the person asking the question (0) asks source 1 for their opinion on whether it will rain tomorrow. 1 has a personal opinion on this subject but asks two other contacts for their opinion and rolls all the numbers up into an aggregate which is transmitted to 0. If we use Bayes, the aggregate is calculated as follows:
First we modify the probabilities by the trust 1 has in 3 and 4:
These two values are combined via the Bayes equation:
We can then combine this with 1's own opinion. Since 1 trusts himself, presumably, we don't have to modify his 60% probability with a trust factor. We simply apply Bayes' equation again:
We do the same thing for source 2 with respect to its sources (5 and 6):
Now we have two aggregated opinions for 1 and 2, 0.855 and 0.983. These are now modified by the trust 0 has for 1 and 2:
These two values are then aggregated via Bayes to obtain the overall probability, ie the final answer:
This example shows how a network of sources can be combined to give an aggregate answer to someone asking a question. This is, in essence, how the ratings system would work.
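The multi-level roll-up described above can be sketched recursively: each node combines its own opinion with the trust-discounted aggregates of its sources. The class design and the numbers below are hypothetical (the figure's values for sources 3–6 are not reproduced here), so this is an illustration of the scheme rather than the ratings system itself:

```python
def bayes(p, q):
    # Bayesian combination of two independent probabilities.
    return p * q / (p * q + (1 - p) * (1 - q))

def modify(p, trust, nominal=0.5):
    # Discount a probability toward 50% by the trust factor.
    return nominal + trust * (p - nominal)

class Source:
    """A node holding its own opinion plus trusted sub-sources."""
    def __init__(self, opinion, sources=()):
        self.opinion = opinion    # this node's own probability
        self.sources = sources    # list of (trust, Source) pairs

    def aggregate(self):
        # Roll sub-source aggregates, discounted by trust,
        # into this node's own opinion via Bayes.
        result = self.opinion
        for trust, child in self.sources:
            result = bayes(result, modify(child.aggregate(), trust))
        return result

# Hypothetical stand-in for source 1 and its sources 3 and 4:
source1 = Source(0.6, [(0.8, Source(0.7)), (0.5, Source(0.9))])
print(round(source1.aggregate(), 3))  # 0.872
```

A node with no sub-sources simply reports its own opinion, so the same code handles every level of the network.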
Notice how the final answer gives a very high probability of rain tomorrow. This is because we now have several sources who reinforce each other using a Bayesian aggregation technique. As noted above, however, there are other aggregation techniques, such as simple averaging. Bayes works for strictly independent sources who have some rigorous probabilistic test to base their predictions on. Most predictions are not that rigorous. If we ask a population who the next president will be, an average of the answers is more accurate than a Bayesian combination. For this reason, among others, there are several aggregation techniques and users will have the means to add their own.
There are a few problems with the model as described. One is cycling, in which a node in the network is used more than once and thus incorrectly reinforces an opinion. If Alice trusts Bob, who trusts Carol, who trusts Dave, who trusts Bob, we are getting Bob's opinion twice and incorrectly reinforcing it. This can also lead to an infinite feedback loop unless we take steps to actively prevent it. Another problem arises even when no cycling is involved: as the example above shows, when favorable opinions are combined using Bayes they strongly reinforce each other until near certainty is achieved. In most cases, this near certainty is incorrect. The Bayes' equation itself may be to blame for this, but it brings to light another issue: the inadequacy of single-value trust factors.
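One way to actively prevent the infinite loop is to track which nodes have already been consulted and skip them on a second encounter, so each opinion counts only once. A sketch, with a made-up graph shape and made-up numbers (this is not the ratings system's actual mechanism):

```python
def bayes(p, q):
    # Bayesian combination of two independent probabilities.
    return p * q / (p * q + (1 - p) * (1 - q))

def modify(p, trust, nominal=0.5):
    # Discount a probability toward 50% by the trust factor.
    return nominal + trust * (p - nominal)

def aggregate(node, graph, visited=None):
    """Roll up opinions over the trust graph, consulting each node
    at most once so a cycle cannot double-count an opinion or
    recurse forever."""
    if visited is None:
        visited = set()
    visited.add(node)
    opinion, sources = graph[node]
    result = opinion
    for trust, child in sources:
        if child in visited:
            continue  # already consulted -- break the cycle here
        result = bayes(result, modify(aggregate(child, graph, visited), trust))
    return result

# Alice trusts Bob, who trusts Carol, who trusts Dave,
# who trusts Bob again (a cycle); all opinions are hypothetical.
graph = {
    "alice": (0.6, [(1.0, "bob")]),
    "bob":   (0.7, [(1.0, "carol")]),
    "carol": (0.6, [(1.0, "dave")]),
    "dave":  (0.6, [(1.0, "bob")]),  # cycle back to Bob
}
print(aggregate("alice", graph))
```

Without the visited set, the call would recurse from Dave back into Bob forever; with it, Bob's opinion enters the roll-up exactly once.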
One way to remedy this problem is to use a trust model with multiple factors, that is, one factor for trust in the communication of answers and another for trust in the answers themselves.