Technical overview of the ratings system
Let's take a very simple situation. You want to know whether it is going to rain tomorrow. You don't know, so you ask two knowledgeable sources. One of them believes it will rain with a probability of 60%. The other believes it will rain with a probability of 80%. We can sketch this situation as follows:
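<kroki lang="graphviz">
digraph G {
// Reconstructed sketch (assumed): you (node 0) and your two sources,
// following the labeling conventions of the network figure further below
fontname="Helvetica,Arial,sans-serif"
node [fontname="Helvetica,Arial,sans-serif"]
edge [fontname="Helvetica,Arial,sans-serif"]
layout=dot
0 [label="0, P=50%"]
1 [label="1, P=60%"]
2 [label="2, P=80%"]
0 -> 1 [dir="both"];
0 -> 2 [dir="both"];
}
</kroki>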
You combine these probabilities using some aggregation technique. One such technique is Bayes' equation which, when applied to this case, gives us:

<math display="block">P_{comb} = {{0.6(0.8)}\over {0.6(0.8)+0.4(0.2)}} = 0.857</math>
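In general, for two independent probability estimates <math>p_1</math> and <math>p_2</math>, Bayes' equation takes the form:

<math display="block">P_{comb} = {{p_1 p_2}\over {p_1 p_2 + (1-p_1)(1-p_2)}}</math>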
Another technique is a straight average of the two answers:

<math display="block">P_{comb} = {{0.6+0.8}\over {2}} = 0.7</math>
There are other [[aggregation techniques]], and which one we pick depends on the nature of the data. For Bayes, we assume that our two weather predictors are independent and that each has a test by which it arrives at its probability. Because the tests are independent, they tend to reinforce each other. Bayes is very rigorous about data and is most useful in situations where we have scientifically controlled tests (e.g. the efficacy of a new medicine for curing a disease). For the averaging technique the data is not as rigorous: maybe the two numbers represent two friends giving an off-the-cuff opinion on the chance of rain tomorrow.
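To make the two techniques concrete, here is a minimal Python sketch (the function names bayes_combine and average_combine are illustrative, not part of the ratings system):

<syntaxhighlight lang="python">
def bayes_combine(p1: float, p2: float) -> float:
    """Combine two independent probability estimates using Bayes' equation."""
    return p1 * p2 / (p1 * p2 + (1 - p1) * (1 - p2))

def average_combine(p1: float, p2: float) -> float:
    """Combine two estimates with a straight average."""
    return (p1 + p2) / 2

print(bayes_combine(0.6, 0.8))    # 0.857...
print(average_combine(0.6, 0.8))  # 0.7
</syntaxhighlight>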
This example assumes that we trust our two sources. But what if we only partially trust them? In that case we would expect to reduce the weight of the answers we get from them. Let's suppose we have a 70% trust in our first source and a 90% trust in our second source:
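<kroki lang="graphviz">
digraph G {
// Reconstructed sketch (assumed): the same two sources with Trust
// values on the edges, matching the network figure further below
fontname="Helvetica,Arial,sans-serif"
node [fontname="Helvetica,Arial,sans-serif"]
edge [fontname="Helvetica,Arial,sans-serif"]
layout=dot
0 [label="0, P=50%"]
1 [label="1, P=60%"]
2 [label="2, P=80%"]
0 -> 1 [label="T=0.7",dir="both"];
0 -> 2 [label="T=0.9",dir="both"];
}
</kroki>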
We can now modify our probabilities using the following equation:

<math display="block">P_m = P_{nom} + T(P - P_{nom})</math>

where
* <math>P_{nom}</math> is the nominal probability, i.e. 50%
* <math>P_m</math> is the modified probability
* <math>T</math> is Trust
* <math>P</math> is the Probability assuming complete Trust
Note that for zero trust this equation reduces the probability to 50%, which is the same as a random answer and provides no meaningful information.
With this equation in mind we calculate our new probabilities:

<math display="block">P_{m1} = 0.5 + 0.7(0.6 - 0.5) = 0.57</math>

<math display="block">P_{m2} = 0.5 + 0.9(0.8 - 0.5) = 0.77</math>
Notice how the Trust values pushed both probabilities closer to 0.5.
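The same adjustment as a minimal Python sketch (the function name apply_trust is illustrative, not part of the ratings system):

<syntaxhighlight lang="python">
def apply_trust(p: float, trust: float, p_nom: float = 0.5) -> float:
    """Pull a probability toward the nominal 50% as trust decreases."""
    return p_nom + trust * (p - p_nom)

print(apply_trust(0.6, 0.7))  # 0.57
print(apply_trust(0.8, 0.9))  # 0.77
</syntaxhighlight>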
The network in this example consists of one level, but we can have more. If our two sources in turn rely on their own sources, we might have a situation like this:
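<kroki lang="graphviz">
digraph G {
fontname="Helvetica,Arial,sans-serif"
node [fontname="Helvetica,Arial,sans-serif"]
edge [fontname="Helvetica,Arial,sans-serif"]
layout=dot
0 [label="0, P=50%"]
1 [label="1, P=60%"]
2 [label="2, P=80%"]
3 [label="3, P=55%"]
4 [label="4, P=95%"]
5 [label="5, P=65%"]
6 [label="6, P=90%"]
0 -> 1 [label="T=0.7",dir="both"];
0 -> 2 [label="T=0.9",dir="both"];
1 -> 3 [label="T=0.55",dir="both"];
1 -> 4 [label="T=0.95",dir="both"];
2 -> 5 [label="T=0.65",dir="both"];
2 -> 6 [label="T=0.90",dir="both"];
}
</kroki>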
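One plausible way to evaluate such a network, offered only as a sketch since the propagation rule has not been specified above, is to work from the leaves up: evaluate each child, adjust its result by the Trust on the connecting edge, and fold it into the parent's own probability with Bayes' equation. In Python, reusing the illustrative helpers from the earlier sketches:

<syntaxhighlight lang="python">
def bayes_combine(p1: float, p2: float) -> float:
    return p1 * p2 / (p1 * p2 + (1 - p1) * (1 - p2))

def apply_trust(p: float, trust: float, p_nom: float = 0.5) -> float:
    return p_nom + trust * (p - p_nom)

def evaluate(p: float, children: list) -> float:
    """Evaluate a node given its own probability and a list of
    (trust, child) pairs, where each child is a (p, children) tuple."""
    for trust, child in children:
        p = bayes_combine(p, apply_trust(evaluate(*child), trust))
    return p

# The two-level network from the figure above.
network = (0.50, [
    (0.70, (0.60, [(0.55, (0.55, [])), (0.95, (0.95, []))])),
    (0.90, (0.80, [(0.65, (0.65, [])), (0.90, (0.90, []))])),
])
print(evaluate(*network))
</syntaxhighlight>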