Fake identities and ratings

Main article: Privacy, identity, and fraud in the ratings system

Fake identities raise the basic question of identity verification: how do we tell whether someone is a real person? Consider a particular case: someone introduces himself to multiple independent people as a different person each time. Say he does this with 100 fake identities. He then uses those 100 identities to rate someone, perhaps to inflate a friend's average rating with a flood of positive ratings. If we suspect that a rating is fake, we can ask around to see whether anyone knows the person who made it. But since the rater did the legwork of introducing himself to many people, we will always find someone who can vouch for him. How do we prevent this?

One countermeasure would be for the people the rater introduced himself to to get together and compare notes. They could conclude that the same man gave each of them a different identity and report that fact. Similarly, if someone noticed that the ratings were oddly similar in some way, we could investigate by interviewing the people who have met each of the suspected identities. But neither of these measures prevents the scam from happening in the first place. They may act as a deterrent, the way a police force deters crime, but they do not preclude the scam.

It seems we would prefer a decentralized solution to the problem. This would presumably involve a public/private key pair: the rater signs his rating with his private key, and the community uses his public key to verify that it was really him. But what if our scam rater generates 100 public/private key pairs and gives a different public key to each of the 100 people he introduces himself to? Now we are back in the same situation: 100 seemingly different ratings signed with 100 different private keys, each verifiable by someone in the community who holds one of the public keys he introduced himself with. We haven't really solved anything.
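
As a minimal sketch of why signatures alone don't help here (using Python's `cryptography` package purely for illustration; the rating text and the count of 100 are from the scenario above, everything else is assumed), note that each fake identity is just another keypair, and every forged rating still verifies against the key the scammer handed out:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical sketch: one scammer generates 100 keypairs, one per fake identity.
forged_ratings = []
for _ in range(100):
    key = Ed25519PrivateKey.generate()            # a fresh "identity"
    rating = b"+5 for my friend"
    signature = key.sign(rating)
    forged_ratings.append((key.public_key(), rating, signature))

# Every forged rating verifies cleanly against the public key its "author"
# introduced himself with -- a signature proves possession of a key,
# not the existence of a unique person behind it.
for public_key, rating, signature in forged_ratings:
    public_key.verify(signature, rating)          # raises InvalidSignature only if tampered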

Presumably, though, cryptographic keys would enable easier identification of scams. We could have a central database that stores everyone’s public key and if the population of public keys exceeds the known population, we could investigate and eventually ferret out the duplicates. A laborious process, to be sure, but doable. And, needless to say, as a preventive measure it may give pause to anyone contemplating a scam like this.
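
A rough sketch of that check, assuming the community keeps a set of registered public keys and independently knows its head count (the key names and numbers below are invented for illustration):

```python
def more_keys_than_people(registered_keys: set[str], known_population: int) -> bool:
    # Hypothetical check: more distinct public keys than real residents
    # means at least one person holds more than one identity.
    return len(registered_keys) > known_population

keys = {"pk_ann", "pk_bob", "pk_carl", "pk_carl_fake"}   # made-up identifiers
if more_keys_than_people(keys, known_population=3):
    print("population of keys exceeds census -- investigate for duplicates")
```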

But now we are taking a step away from decentralization. In particular, we need a database that stores the known population. This would involve a census, or a requirement that people register themselves with the community. The communal database would then hold, say, your name, picture, physical characteristics, contact information, and so on, along with your public key. If a public key shows up in a rating and is not in the database, we could flag it for investigation. A procedure like this could be automated.
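
A minimal sketch of that automated flagging step, assuming a hypothetical communal registry keyed by public key (the records and key names below are invented for illustration):

```python
# Hypothetical communal registry: public key -> registered citizen record.
registry = {
    "pk_ann": {"name": "Ann", "contact": "ann@example.org"},
    "pk_bob": {"name": "Bob", "contact": "bob@example.org"},
}

def screen_rating(rater_public_key: str) -> str:
    # Ratings signed by unregistered keys are not rejected outright,
    # only queued for human investigation, as described above.
    if rater_public_key in registry:
        return "accept"
    return "flag for investigation"

print(screen_rating("pk_ann"))        # accept
print(screen_rating("pk_mystery"))    # flag for investigation
```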

This might make things more difficult, but it still doesn't prevent someone from creating multiple fake identities. Depending on the community's rules, our scammer could simply invent multiple fake names, pictures, addresses, and so on, and report them as real people. If the community doesn't check each entry, by requiring an interview for instance, it would itself fall victim to our multiple-identity scammer.

We are now far from a decentralized solution and can see an emerging tradeoff. As our scammer becomes more determined, our community must become more intrusive about collecting its citizens’ information and storing it. Communities will thus have to decide how much they feel threatened by scams like this and adjust their level of intrusiveness accordingly. We can envision a community that has a very honest population and simply trusts that each public key represents a unique real person. And we can envision the high-scam community that requires each rating to be accompanied by a full biometric scan to be matched against a central database containing everyone’s unique biomarkers. We might include in that, for good measure, their name, address, family relations, where they work, a personal secret code, etc. Suffice it to say we can make the process so onerous that no one would try to cheat it.

In thinking about this tradeoff, we might conclude that most communities will end up somewhere in the middle. We could envision a community ID card, difficult to counterfeit, with a readable chip containing each citizen's information. Every time a rating is made, the rater must swipe their ID to prove their identity. Perhaps the swipe could even be required only occasionally, as a spot check. This is not foolproof, of course, but many countries already have well-accepted technology like this.
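
The spot-check variant could be as simple as requiring the swipe with some fixed probability per rating. A minimal sketch, with an assumed (made-up) 10% check rate:

```python
import random

SPOT_CHECK_RATE = 0.1   # assumption for illustration: 1 in 10 ratings triggers a check

def rating_requires_swipe() -> bool:
    # Each submitted rating independently has a small chance of demanding
    # a physical ID swipe before the rating is accepted.
    return random.random() < SPOT_CHECK_RATE
```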

Let's stress, too, that participation in community ratings is itself a tradeoff for each individual. We envision free communities (let's hope they're all free) where members choose whether to participate. Those who don't simply won't receive the benefits that participation brings. Someone who opts out entirely may be eligible only for minimum default benefits, or no benefits at all. We could even have levels of participation, where members choose how much to reveal about themselves and are eligible for different benefit tiers depending on that choice.
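
A toy sketch of such tiers, with entirely made-up disclosure levels and benefit labels:

```python
# Hypothetical mapping from how much a member chooses to disclose
# to the benefit tier they are eligible for.
BENEFIT_TIERS = {
    "opted_out":  "minimum default benefits",
    "key_only":   "basic participation benefits",
    "key_and_id": "standard benefits",
    "biometric":  "full benefits",
}

def benefits_for(disclosure_level: str) -> str:
    return BENEFIT_TIERS.get(disclosure_level, "minimum default benefits")
```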

Another idea for maintaining identity integrity is to tie it to economic activity. In a money-based economy this could mean, for example, requiring people to pay to have their identities recognized. A fake identity could still exist, but only if someone pays for it, which disincentivizes its creation. Clearly the rich could afford fake identities, but in the egalitarian society we are contemplating there wouldn't really be a concept of "rich". And if we imagine a society that allocates resources correctly, there would be no excess accumulation of goods: you would get what you need, perhaps a little more for fun, and that's it. The cost to cheat would be quite high.

This becomes even easier if we contemplate a moneyless economy. Now we tie economic production to identity and require that ratings be made by identifiable contributors to the central pool of resources. Someone could still create a fake identity, but that identity would also have to produce, so maintaining it would be like holding a second job. If a cheater created an identity that produced nothing, this would be noted in the new identity's record, and the community could have a rule that less productive members' ratings are weighted lower. So, again, the cost to cheat could be very high.
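
A minimal sketch of that weighting rule, assuming (purely for illustration) that each rater's recorded contribution to the common pool is used directly as the weight of their rating:

```python
def weighted_average_rating(ratings: list[tuple[float, float]]) -> float:
    """Each entry is (rating, rater_contribution); contribution acts as the weight.

    An unproductive fake identity (contribution near zero) barely moves the result.
    """
    total_weight = sum(contribution for _, contribution in ratings)
    if total_weight == 0:
        return 0.0
    return sum(r * c for r, c in ratings) / total_weight

# A friend propped up by ten zero-contribution fakes gains almost nothing:
real_ratings = [(3.0, 1.0), (2.5, 1.2)]
fake_ratings = [(5.0, 0.0)] * 10
print(weighted_average_rating(real_ratings + fake_ratings))   # still roughly 2.7
```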