Identity trust in the subjective ratings system



Main article: Privacy, identity, and fraud in the ratings system

Incidentally, the subjective ratings system mitigates the problem of multiple manufactured identities coming from the same person. People could certainly manufacture public/private key pairs and present them as new identities over the network. But each public key “identity” would only be trusted to the extent that it meaningfully interacted with someone. In the subjective system, most such manufactured keys wouldn’t even be known to the network, at least initially, and would be ignored.
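To make this concrete, here is a minimal sketch of how a subjective client might treat manufactured keys. The class and method names are ours, purely illustrative, and a random hex string stands in for a real public key: an identity earns a trust entry only through recorded interaction, and unknown keys default to zero.

```python
# A minimal sketch (all names hypothetical) of a subjective trust store:
# keys we have never interacted with have no trust entry and are ignored.
import secrets

def new_identity() -> str:
    """Stand-in for generating a public/private key pair; we keep only
    a random hex string playing the role of the public key."""
    return secrets.token_hex(16)

class SubjectiveTrustStore:
    def __init__(self):
        self._trust = {}  # public key -> trust accrued from interaction

    def record_interaction(self, pubkey: str, weight: float) -> None:
        # Trust accrues only through meaningful interaction.
        self._trust[pubkey] = self._trust.get(pubkey, 0.0) + weight

    def trust(self, pubkey: str) -> float:
        # Unknown keys get zero trust: manufactured identities that have
        # never interacted with us are effectively invisible.
        return self._trust.get(pubkey, 0.0)

store = SubjectiveTrustStore()
known, sybil = new_identity(), new_identity()
store.record_interaction(known, 0.4)
print(store.trust(known))  # 0.4
print(store.trust(sybil))  # 0.0 -- ignored until it interacts
```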

Someone could, of course, manufacture a new identity and, over time, become trusted on the network as such. But this would require maintaining two identities, or two fractional identities, to use Lem’s idea. Fractional personhood seems like the correct default mindset for anyone on a network like this: the fraction rises with meaningful interaction. If all Y ever saw from X were numerical ratings of various predicates, then Y could discount X as a real person, since numerical scores are easy to fake. But if X produced well-reasoned opinions, then Y might regard X as a larger fraction of a person, depending on how many such opinions were produced, how genuine they feel, and so on. X might maintain another identity, but then X would have to create new content under it before Y would see that identity as a real person. A natural limit would thus emerge.
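One toy way to picture the rising fraction is to let cheap numerical ratings move it only slightly while substantive opinions move it more. The interaction weights and the saturating form below are invented for illustration, not part of any specification:

```python
# A toy illustration (weights are assumptions) of fractional personhood:
# numerical ratings are cheap to fake and count little, while substantive
# written opinions raise the fraction more. The fraction saturates below 1.
INTERACTION_WEIGHTS = {"numeric_rating": 0.01, "written_opinion": 0.15}

def personhood_fraction(interactions: list[str]) -> float:
    score = sum(INTERACTION_WEIGHTS.get(kind, 0.0) for kind in interactions)
    return score / (1.0 + score)  # saturating: never reaches a full 1.0

print(personhood_fraction(["numeric_rating"] * 50))  # ~0.33
print(personhood_fraction(["written_opinion"] * 5))  # ~0.43
```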

Of course, as has been noted, real content could also be manufactured by AI. X could instruct an AI program to reword his original opinion so that it expresses the same viewpoint but appears to come from his alternate identity. But this possibility simply changes our calculus about personhood. If cogently expressed written opinions once served as a marker of at least fractional personhood, they may no longer be adequate. Given this possibility, Y would demand more, and more varied, evidence of X’s personhood.

Technology has a way of forcing this recalibration. It enhances our ability to produce and makes it seem like we are greater than we really are. If the people of, say, 1947 saw the productive output of a single worker today, they would conclude that five workers had been responsible for it (https://tradingeconomics.com/united-states/productivity). Our standards for personhood change as we progress, particularly when we are judging people indirectly (i.e., through their production).

We can see a variant of this idea without resorting to time travel. If we read an opinion from someone we trust who endorses a product, we regard it one way if the person is expressing their view freely and quite another if we find that the person is being paid by the manufacturer to promote the product. Essentially, the paid endorsement becomes an extension of the manufacturer, and the person being paid is reduced in weight: we trust the endorsement only to the extent that we trust the manufacturer itself. Of course, the manufacturer is trying to trick us into believing otherwise. But regardless, our notion of how people are counted changes constantly with circumstances.
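The adjustment can be stated as a one-line rule. In this sketch (the function and its inputs are hypothetical), learning that an endorsement is paid collapses its weight from our trust in the endorser to our trust in the manufacturer:

```python
# A hedged sketch of the paid-endorsement adjustment described above.
def endorsement_weight(trust_in_endorser: float,
                       trust_in_manufacturer: float,
                       is_paid: bool) -> float:
    # A paid endorser is treated as an extension of the manufacturer.
    return trust_in_manufacturer if is_paid else trust_in_endorser

print(endorsement_weight(0.9, 0.2, is_paid=False))  # 0.9: a freely given view
print(endorsement_weight(0.9, 0.2, is_paid=True))   # 0.2: extension of the maker
```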

While we’re at it, let’s emphasize that repetition works. Opinions and facts have an additive property when repeated by different people: they become more persuasive even if the repeated opinion offers no new substantive information and ultimately comes from the same source. Nazi propaganda relied on a mantra of repeat-and-simplify. We can fight this by recognizing its source, identifying it correctly as propaganda, fact-checking, and so on. But it still has an effect and needs to be counted. And the additive effect is not always negative: if the message being repeated is true and beneficial, then its repetition works for the good.
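If repetition must be counted, one possible accounting looks like this. The logarithmic damping is an assumption, chosen only so that later repetitions add less than earlier ones:

```python
# A toy model (the functional form is assumed) of the additive effect of
# repetition: each restatement adds persuasive weight with diminishing
# returns, even when no new substantive information is introduced.
import math

def persuasiveness(base: float, repetitions: int) -> float:
    # Logarithmic damping: the tenth repetition adds less than the second.
    return base * (1.0 + math.log1p(repetitions))

print(persuasiveness(0.5, 0))  # 0.5: heard once
print(persuasiveness(0.5, 9))  # ~1.65: repeated, though nothing new was said
```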

We might mention here that trust is a function of knowledge: the more we know someone, the more we can trust them. We could even modify the definition of trust slightly to mean the confidence with which someone can predict another person’s actions. If we can predict what X is going to do, we trust X to do the thing we predict. This makes trust virtually synonymous with knowledge.
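Under that predictive reading, trust could be estimated as nothing more than historical prediction accuracy. The sketch below (names hypothetical) makes the equivalence explicit:

```python
# A minimal sketch of "trust as predictability": trust estimated as our
# accuracy in predicting a person's past actions.
def predictive_trust(predictions: list[str], actions: list[str]) -> float:
    if not predictions:
        return 0.0  # no knowledge of the person, no basis for trust
    hits = sum(p == a for p, a in zip(predictions, actions))
    return hits / len(predictions)

print(predictive_trust(["honor", "honor", "renege"],
                       ["honor", "honor", "honor"]))  # ~0.67
```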

That stretches the definition of trust a little. Nevertheless, all else being equal, the person who shares more about themselves will be seen as more trustworthy than one who does not. In terms of fractional personhood, the person who shares merits a larger fraction. This is simply how human beings operate, and it presents a choice for both the rater and the rated.

We asserted that the fractional personhood concept mitigates the manufactured-identities problem in the subjective ratings system (SRS). But it does exactly the opposite in a community-based ratings system (CBRS), since the CBRS relies more on standardized ratings and centralized approaches to identity. In a CBRS, if a fake identity can fool the central system, it has presumably fooled the entire community, which will, given the nature of the system, tend to think of identities as whole persons. We could stipulate a fractional-person approach to identity for the CBRS too, but the point of centralization is simplicity and greater trust: its purpose is to avoid complexities like fractional personhood and to establish a baseline level of trust at low cognitive cost. It asks us to give up some privacy, but, like everything, this is a tradeoff that communities and their members will have to make. In short, while fractional personhood is a natural concept in the subjective ratings system, it is a very awkward one in the community-based system.

The fractional personhood concept is conceptually appealing but difficult to reduce to mathematics. So far, we have only discussed it as a change that people should adopt as part of their thinking. Do we add up the time spent with each “person” and divide by 24 hours to estimate what fraction of a day they represent? Do we then multiply this factor by our trust in them? What about subject matter? People we spend very little time with (e.g., doctors) merit much higher trust on medical issues than family members we see every day who have no medical knowledge. And what happens when we read two similar opinions on a subject? We would weight them one way if they were truly independent, another way if both were derived from some other (perhaps authoritative) source, and yet another way if they were written by the same person, with one signed by a fake identity. I would argue that all these scenarios are additive, but by how much is the question. It gets complicated.
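To illustrate just how much the answer depends on provenance, here is a rough sketch of the three cases. The discount factors are pulled out of thin air, which is precisely the problem:

```python
# A rough sketch (discount factors are assumptions) of aggregating two
# similar opinions with weights w1 and w2 under the three cases above.
def combined_weight(w1: float, w2: float, relation: str) -> float:
    if relation == "independent":
        return w1 + w2                            # fully additive
    if relation == "common_source":
        return max(w1, w2) + 0.25 * min(w1, w2)   # mostly one opinion
    if relation == "same_author_fake_id":
        return max(w1, w2) + 0.05 * min(w1, w2)   # barely more than one
    raise ValueError(relation)

for rel in ("independent", "common_source", "same_author_fake_id"):
    print(rel, combined_weight(0.6, 0.5, rel))
# All three exceed either opinion alone (additive), but by very
# different amounts: 1.1, 0.725, and 0.625 respectively.
```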