<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.peerverity.info/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Lembot</id>
	<title>Information Rating System Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.peerverity.info/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Lembot"/>
	<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/wiki/Special:Contributions/Lembot"/>
	<updated>2026-04-14T19:55:18Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Heuristics_and_policy-making&amp;diff=2461</id>
		<title>Heuristics and policy-making</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Heuristics_and_policy-making&amp;diff=2461"/>
		<updated>2024-10-21T21:27:28Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|System modeling}}&lt;br /&gt;
&lt;br /&gt;
[[wikipedia:Heuristic|Heuristics]] are an inevitable part of the decision-making aspects of the [[community]] [[ratings system]]. [[wikipedia:Information overload|Information overload]] and [[Steps in policy-making|policy]] detail will be too much for most people to handle in a [[Direct democracy|direct democracy]], even the well-informed. We&#039;ve discussed [[Weight assignment -- The equal weight method|heuristics in the context of weight assignment]] as one solution. This, in turn, leads to the rise of expert [[Organizations as raters|organizations]] that can help. But going down this path too far leads us closer to the top-heavy and ineffective systems we have today. We would generally rather give individuals as many technological tools as possible so that &#039;&#039;they&#039;&#039; can contribute meaningfully. The [[Ratings system|ratings system]], as an information system, will help with this by providing [[Aggregation techniques|aggregate]] opinions, [[polling]] information, and so on. But these only go so far. We propose here that the cognitive burden of direct democracy can be broken with sophisticated [[System modeling|simulations]] that meet decision makers where they are.&lt;br /&gt;
&lt;br /&gt;
Consider how our political process works today. Most people are ignorant of the policy choices facing them as citizens. They are not sure which politician’s agenda actually aligns with their interests. A politician, sensing this, will offer up simple fixes, strong stances on meaningless but emotional issues, or just a nicely packaged campaign of soundbites and entertainment. The voter is supposed to assess their actual desires and needs against the message of a political campaign run by marketing experts. Most Americans, as a result, vote randomly, as we’ve [[Voting methods#Vote cancellation and bias|discussed]]. We can excuse this by saying it is everyone’s choice to be informed (or not), but this seems unsatisfactory for a society aiming at inclusive democracy.&lt;br /&gt;
&lt;br /&gt;
But what if there were a better way? We suggest a [[direct democracy]] where the voter would have, presumably, a one-on-one relationship with policy, with no more self-interested politicians marketing to the public. This would help but is also likely to overwhelm the cognitive capabilities of even well-informed voters. One answer to this is using simulation as a heuristic. If a voter says [[Economic predicates|they want good jobs]] then simulation can measure that desire against simulated policies. The policy vote itself doesn’t actually have to be made by the voter. They can, in effect, delegate their vote to the outcome of a simulation which has placed the availability of good jobs as the highest priority among a variety of policy options.&lt;br /&gt;
&lt;br /&gt;
[https://www.npr.org/2024/07/11/g-s1-9460/understanding-the-resurgence-of-jobs-in-americas-left-behind-counties NPR recently interviewed David Madland] from the American Worker Project at the [https://www.americanprogress.org/ Center for American Progress]. He in turn described talking to workers at a new EV battery plant in TN who mentioned that the stable, well-paying job they just got at the plant was the best one they’ve ever had. They mentioned being able to take their families on vacation. They were asked who they have to thank for these new, good jobs. They mentioned Ford (the company that owns the plant) and their union. These certainly deserve some credit but, from a policy perspective, the reason is the [[wikipedia:Infrastructure Investment and Jobs Act|Infrastructure Investment and Jobs Act]] and the [[wikipedia:Inflation Reduction Act|Inflation Reduction Act]], passed by Congress in 2021 and 2022, the first with [[wikipedia:Bipartisanship|bipartisan]] support. But the workers had no idea, and it’s not really their fault. They get little to no information on this crucial policy link. There’s no sign in front of the plant that mentions these two acts. There is no marketing campaign linking every infrastructure effort (there are thousands) back to these acts or the politicians who were responsible for them. Perhaps there should be, but there isn’t, and even if there were it is likely that the workers in TN would still not know about it.&lt;br /&gt;
&lt;br /&gt;
It is fairly easy to surmise that ordinary people will never have the policy knowledge needed for informed voting. Politicians from all sides will probably tell them their policies will result in what they want but how is the ordinary person to know the difference? The best they can do is state their desires and turn their vote over to an objective analysis of policy. In this case, it takes the form of careful [[System modeling|modeling]] and simulation.&lt;br /&gt;
&lt;br /&gt;
Note that there is no partisanship in any of this. Associating a simulated policy to a party or [[wikipedia:Ideology|ideology]] is a separate cognitive step, perhaps interesting to the ideologically minded, but one that can be safely skipped for most.&lt;br /&gt;
&lt;br /&gt;
By relating policy choices directly to people’s needs through some heuristic mechanism we can incorporate the wishes of the generally uninterested voter. We might find that doing this leads to some curiosity about policy in general. Still, as a matter of course, any true democracy should try to meet people where they are.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Government_and_physical_space&amp;diff=2460</id>
		<title>Government and physical space</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Government_and_physical_space&amp;diff=2460"/>
		<updated>2024-10-21T21:06:43Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Community}}&lt;br /&gt;
&lt;br /&gt;
Government jurisdiction is traditionally coincident with land within specified boundaries. In some sense, it is hard to separate government from this constraint. Much of what we need from government is a result of where we live. Road construction, water &amp;amp; sewer [[wikipedia:Infrastructure|infrastructure]], protection from invasion, etc. depend on physical presence. But much of what we need is also independent of location: information, education (at least when delivered virtually), many jobs, monetary transactions, some social transactions, a large amount of consumer activity, etc.&lt;br /&gt;
&lt;br /&gt;
It seems there should be a way to divorce our physical location from official jurisdictional boundaries. Governments are generally only as powerful as people allow them to be. In principle it would seem that for many issues it shouldn’t matter if Person A chooses to be part of [[Community]] A and Person B, his physical neighbor, chooses Community B. Both communities can have very different laws. In some sense we already do this. My neighbor lives in a different family which has different “laws” than mine. He is a member of different organizations which have their own rules, and so forth. Virtual communities are no different and, indeed, they are widespread.&lt;br /&gt;
&lt;br /&gt;
It is when two communities need to interact that things become interesting. Interactions are usually governed by agreement, convention, [[Contract|contract]], [[Applications#Foreign Policy|treaty]], etc. Clearly A and B who live next to each other will need such devices. Can A operate a steel mill on his land which will be noisy and pollute the neighborhood? In A’s community this is legal. An alternative approach would be to join a community of physical neighbors which handles interactive matters of this nature, maintains the sewers, etc.&lt;br /&gt;
&lt;br /&gt;
We may dismiss this notion as too complicated but we already have multiple interacting jurisdictions in our lives. We belong to a [[wikipedia:Federal republic|federal nation]] with its own laws, a state with other laws, a county which is part of the state but has separate functions, a town within a county with yet other functions and laws, a school district, a regional water authority, etc. Although there is a unifying [[Basic law|constitution]], it doesn’t do much on a practical level. Agreements, contracts, and conventions rule these interactions for daily activities.&lt;br /&gt;
&lt;br /&gt;
And so it would work for a system of government where we voluntarily join virtual communities. If these displace our current government structures it will be because people have chosen them since they are objectively better. This new infrastructure would presumably be more [[Direct democracy|directly democratic]] than what we have today. It might require more of its participants in terms of [[Steps in policy-making|policy-making]] but nothing would be coerced. “Citizens” can participate or not (just like now) and, importantly, they can find the community that suits their needs, and leave when they want.&lt;br /&gt;
&lt;br /&gt;
The [[ratings system]] will extend to communities in the form of a [[The subjective and community ratings system|community ratings system]] (CRS), as a complement to the [[The subjective and community ratings system|subjective ratings system]] (SRS). If a community is judged “bad” by its observers because it allows, say, child rape, members of other communities will have the ability to call this out, rate the community’s morality, etc. It is clear that low-rated communities will have difficulty engaging in the types of agreements mentioned above. Trade, in particular, could be restricted.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Freedom_of_speech&amp;diff=2459</id>
		<title>Freedom of speech</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Freedom_of_speech&amp;diff=2459"/>
		<updated>2024-10-21T20:42:01Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Political systems}}&lt;br /&gt;
&lt;br /&gt;
The question has come up whether the [[ratings system]] requires freedom of speech. The answer is yes, but perhaps we’ve been too quick to assume that. Freedom of speech is a complex subject because, even in the most free societies, there are restrictions. One of these, perhaps the most important, is [[wikipedia:Defamation|defamation]]. Fortunately the [[Ratings system|ratings system]] has a built-in mechanism to oppose those who slander others. We have, incidentally, discussed having a [[Organizations as raters#Rating the Rater and Unfair Ratings|separate legal framework]] to handle this but we’ve generally settled on the idea that we should let the ratings system do its work first and only then consider more heavy-handed approaches.&lt;br /&gt;
&lt;br /&gt;
There are other restrictions on free speech having to do with [[Justice and defense in communities#Thoughts on defense|national security]], incitement to violence, obscenity, non-disclosure of sensitive information, etc. Some of these are agreed to by the involved parties and are thus governed by contract law. But still, taking a [[Philosophy of John Rawls|Rawlsian]] view on the inalienability of free speech, they represent restrictions. &lt;br /&gt;
&lt;br /&gt;
In addition, and more importantly, there are a whole host of restrictions that are culturally imposed. [[wikipedia:Hate speech|Hate speech]] is one of them and, while protected by the US Constitution, is highly restricted as a practical matter (eg in academia). Here we enter a dangerous zone. Communities under a ratings system will no doubt impose their own form of practical restrictions (again, the ratings system doing its job) based on cultural conditions. The question then becomes whether these restrictions would go too far and would need to be pre-empted through law. Under Rawls, freedom of speech is an inalienable right, one not only protected but one that cannot be voluntarily surrendered even with unanimous [[community]] support.&lt;br /&gt;
&lt;br /&gt;
The ratings system, as software, [[Neutrality of the ratings system|does not have a political philosophy]] built into it. People and communities can do whatever they want. But we might want to encourage the foundation of communities with “good” principles by having appropriate defaults and weightings. We might consider, for example, a basic set of rated characteristics we could start with, one of which is adherence to [[Philosophy of John Rawls#The Basic Liberties Principle|principles of fundamental rights]]. We could also weight these characteristics higher in overall ratings for members. It is more important that people support a basic freedom, like speech, than that they be skilled in a particular profession. Obviously, these settings could be changed but the idea is that most members would keep them in some form and they would effectively become foundational ideas for everyone in all communities.&lt;br /&gt;
&lt;br /&gt;
We will also, along with deploying the ratings systems software, start communities of our own. It is here that we would primarily favor certain characteristics such as adherence to a Rawlsian scheme of [[Philosophy of John Rawls#The Basic Liberties Principle|basic liberties]] (eg the &amp;quot;Rawlsian community&amp;quot;). Then, as communities develop, competition between them would determine which ones were designed optimally and would likely influence the rest. This is a Darwinian, survival of the fittest, vision of how communities evolve.&lt;br /&gt;
&lt;br /&gt;
No doubt this will happen. Darwinism can be viewed as a basic law, much like entropy and gravity. The best systems will not only amass power and influence naturally but will also be emulated by others. It is inevitable that they, and their ideas, will spread. &lt;br /&gt;
&lt;br /&gt;
Nevertheless, for an issue as fundamental to the ratings system as freedom of speech, we probably want to leave as little to Darwinian chance as possible. What if it turns out that the optimal level of free speech is quite restrictive because it leads to greater social harmony? This is, in fact, what the US is struggling with right now. But most of us are probably uncomfortable with the notion of restrictions on our freedom, no matter how objectively optimal they may be for the community. &lt;br /&gt;
&lt;br /&gt;
In fact, we would say that such communities have an [[Societal optimization|objective function]] that is weighted incorrectly or is simply wrong. The correct objective function, a multi-objective function to be sure, should include [[Philosophy of John Rawls#The Basic Liberties Principle|basic freedoms]] as fundamental to our quality of life, just like our health, material well-being, etc. &lt;br /&gt;
&lt;br /&gt;
This means we should develop our system to be open and flexible but also with defaults that ensure, or perhaps strongly encourage, basic freedoms from the beginning. And we don’t have to adopt a full-scale Rawlsian political philosophy. Some of what Rawls believed, especially about income distribution, might be viewed as controversial. But his notion of basic political freedoms, especially freedom of speech, should be taken seriously. It is doubtful that these ideas will elicit any major objection from potential users.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s return to freedom of speech by noting that it is usually too general a term and needs to be broken down into separate categories to be meaningful. So let’s do that and consider how the current legal mechanisms handle each case vs. the ratings system.&lt;br /&gt;
&lt;br /&gt;
====Defamation/slander/libel====&lt;br /&gt;
&lt;br /&gt;
The ratings system is a natural way to handle this. It can first distinguish the truth of the claim and the intent behind the alleged defamation: is it true and does it expose something important, is it true but merely trivially embarrassing, or is it false? If false, the ratings system is expected to quickly bring this out, much faster than what we’re traditionally used to. If it is true and unimportant, its purpose was merely to hurt the other person and infringe on their privacy. In this case the ratings system is ideal for enforcing the cultural norm where one remains politely silent about known embarrassing but benign information about someone else. The right to privacy, a basic liberty in the Rawlsian sense, is thus maintained. And it is, of course, normally how courteous people are expected to behave anyway. If it is true and important, then its importance presumably outweighs the privacy rights of those it infringes on (eg a politician taking a bribe). The arbiter of importance is the community, operating through the ratings system.&lt;br /&gt;
&lt;br /&gt;
In general, the ratings system should be good at discovering intent. It will be able to distinguish between [[The subjective and community ratings system#Preventing Misinformation Bubbles in the CRS and SRS|misinformation]], where someone honestly spreads wrong information, and disinformation, where someone knowingly spreads false information. Furthermore, the ratings system will likely do a good job establishing what the community thinks is important. Spreading the idea that someone is, say, a closet fan of the ballet will generate one level of reprobation (ie none for the fan and some for the tattler) but spreading the fact that they are a pedophile quite another.&lt;br /&gt;
&lt;br /&gt;
====Contract (trade secrets)==== &lt;br /&gt;
&lt;br /&gt;
Usually in contracts people agree to keep secret specific trade information, such as the workings of a business, an invention, the mechanics of a financial arrangement, etc. There is usually some reciprocal arrangement: in return for keeping the secret, we are rewarded with something (money, a job, etc.). The secret is usually so specific that we don’t think agreeing to keep it is an infringement on the inalienability of our free speech rights. If Bob works as a chemist for Coca Cola and agrees to keep the formula a secret as part of his job, we don’t see that restriction on his rights as serious enough to worry about. After all, there are other recipes for cola drinks and, while they’re not quite as good, humanity will survive without the real thing. Furthermore, Bob is being compensated for his silence, he happily agreed to it, the agreement isn’t otherwise onerous, and his doing so doesn’t really affect the overall scheme of basic liberties enjoyed by society at large.&lt;br /&gt;
&lt;br /&gt;
But there are cases when it might. Suppose Coca Cola develops a new product that tastes far better than Coke but is dangerously addictive, something like heroin or fentanyl. The company has Bob agree not only to keep the formula secret but also to not reveal the project for the new drink itself. The substance is legal so the company could go ahead and make a fortune selling it. Bob knows the danger, however, so after failing to convince his managers that the project should be cancelled, he decides to go to the press with his knowledge. He is promptly fired for contract violation and sued.&lt;br /&gt;
&lt;br /&gt;
Here we have a different situation because Bob’s [[Applications#Whistleblowing &amp;amp; Call-outs|whistleblowing]] was necessary for society to protect the system of basic liberties for everyone, or at least the millions who would have succumbed to the dangerous new Coke product. He may have violated his contract but society should see his act as a necessary one. A court, in our society, should exonerate his action.&lt;br /&gt;
&lt;br /&gt;
What the court might not do, however, is find him another job or compensate him for his trouble. But in a ratings based society, especially the [[Moneyless economy based on reputation and need|moneyless variant]], Bob’s contribution to society would be recognized for what it is and he would be rewarded naturally in the currency that matters: ratings. His ability to claim goods and services in the economy would be enhanced as a result of his action.&lt;br /&gt;
&lt;br /&gt;
====National security secrets====&lt;br /&gt;
This is similar to contract/trade secrets although, presumably, all national security secrets are ones that, if revealed, would damage the overall system of liberties for everyone. But there are exceptions. In the case of Edward Snowden, for instance, who revealed the existence of US government surveillance programs created in the wake of 9/11, his actions can easily be seen as protecting the basic liberties of everyone. National security information, like any contractually obligated secret, may need to be revealed to further the overall scheme of basic liberties.&lt;br /&gt;
&lt;br /&gt;
But Snowden is controversial because his actions can also be seen as violating the [[wikipedia:Espionage Act of 1917|Espionage Act]], for which he is wanted in the US. However, it is hard to see how a properly nuanced understanding of his whistleblowing would come from a US courtroom. Prosecuting him has been a desire at the highest levels of the US government for many years now.      &lt;br /&gt;
&lt;br /&gt;
A better picture would certainly come from a community appraisal, something the ratings system would make possible. Did his actions to benefit the many outweigh the compromising intelligence he allegedly revealed? A large number of community members would be in the best position to decide this. If we are afraid of revealing more sensitive intelligence through this process, communities will have the ability to form committees with privileged access to information. The important thing in this process is that these sub-groups are community-controlled and subject to an ongoing endorsement through the ratings system. Before we move on, we might note that in a US court, the community, “the people” as it were, is in fact represented by highly motivated government prosecutors.&lt;br /&gt;
&lt;br /&gt;
Let’s turn to other espionage-adjacent cases, those related to the [[wikipedia:China Initiative|China Initiative]], which targeted Chinese people in the US who were suspected of spying for China. It started in 2018 and was terminated in 2022. The program has been criticized for its unfair treatment of academics (mainly) who were guilty of small infractions such as failing to disclose Chinese funding sources on grant [[applications]]. According to a recent [https://www.msn.com/en-xl/health/other/china-born-neuroscientist-jane-wu-lost-her-us-lab-then-she-lost-her-life/ar-AA1pJFLA article],&lt;br /&gt;
“In the past six years, more than 250 scientists - most of them of Asian descent - have been identified as having failed to disclose overlapping funding or research in China, or having broken other rules. There were only two indictments and three convictions as legal outcomes of those investigations, yet 112 scientists lost their jobs as a result”&lt;br /&gt;
 &lt;br /&gt;
This type of oversight is normally viewed in our society as minor. We fill out a great many forms and sometimes forget details when answering the questions. To the government, however, the form is their gotcha. It is an easy way to establish a list of suspects without doing the real work intended by the policy: finding out who is really passing secrets to an adversarial government.&lt;br /&gt;
&lt;br /&gt;
The government could have approached the issue carefully, by discreetly realizing that most of its “suspects” were guilty of a trivial violation and dismissing them. Instead it pursued them loudly and, one suspects, with a political intent in mind. This is a good example of how government is often incapable of taking a larger societal view on issues and focuses instead on its narrow self-interest.&lt;br /&gt;
&lt;br /&gt;
A ratings system, run by a diverse community, would quickly surface the fact that the proposed methods of espionage discovery in this case were unfair and ineffective. Political intent, if any, would also be quickly exposed. But it is also likely that the community as a whole would not know the correct method to investigate suspected espionage. In cases where expertise is required, however, it will have the ability to find the correct people and provide them the weight to make the necessary recommendations.&lt;br /&gt;
&lt;br /&gt;
====Hate speech====&lt;br /&gt;
&lt;br /&gt;
Hate speech is normally defined on the basis of its repercussions. Certain words, usually referring to racial or identity groups, are so highly offensive in American English that they are almost never said aloud. Furthermore, using these words often presages a call to violence, or some other form of oppression, so we can reasonably see that using them causes a disruption to the larger system of liberties. But let’s leave this concern out of our consideration for the moment and go with the first problem: they are offensive. In contemporary society, there is only one reasonable response when someone says they are offended by something you said, and that is to simply stop saying it. If a particular word is offensive there are usually plenty of acceptable substitutes, so the self-censorship that results is not onerous (after an acceptable period of time is given to adjust). So in this case, it is a matter of politeness to stop using a word deemed offensive. But if someone is offended by the expression of legitimate ideas, we cannot be so accommodating. Discussions of sensitive subjects such as slavery or the Holocaust may be difficult for certain people but we cannot just stop talking about them. In order to continue making progress on our liberties, we must be able to recall what happens when we lose them.&lt;br /&gt;
&lt;br /&gt;
There is obviously a fine line between censorship as an act of courtesy and one where we can’t express legitimate ideas. Most people seem to understand intuitively where the line is. But the law or a rule at, say, a university can easily make mistakes and go too far one way or another. &lt;br /&gt;
&lt;br /&gt;
It would seem that the ratings system will reflect the views of the community better than the law would in matters such as these. Furthermore, the ratings system will impose the correct sanction on violations of cultural norms via speech. Instead of using heavy-handed measures such as fines, imprisonment, being expelled from college, etc. the ratings system will simply register its displeasure by expressing what others think, encouraging [[debate]], etc. This should be enough to correct any “misbehavior” without lasting consequences. A final note on overly tough consequences is that they frequently have the effect of hardening people’s positions rather than giving them the space to think and come to better conclusions.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Exercising_the_algorithm_interface_with_more_complex_data_types&amp;diff=2458</id>
		<title>Exercising the algorithm interface with more complex data types</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Exercising_the_algorithm_interface_with_more_complex_data_types&amp;diff=2458"/>
		<updated>2024-10-21T20:04:28Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Technical overview of the ratings system}}&lt;br /&gt;
&lt;br /&gt;
The [https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/149 custom_algo.py snippet] implements all the [[Aggregation techniques|algorithms]] created so far for the &amp;lt;code&amp;gt;algorithms.py&amp;lt;/code&amp;gt; interface. Scroll past where it says &amp;lt;code&amp;gt;POST 8/21/23 MEETING&amp;lt;/code&amp;gt; to see the implementation of what is discussed here.&lt;br /&gt;
&lt;br /&gt;
==A more complex trust factor==&lt;br /&gt;
&lt;br /&gt;
So far &amp;lt;code&amp;gt;custom_algo.py&amp;lt;/code&amp;gt; has used single-valued [[trust]] factors which are specified as floats in the &amp;lt;code&amp;gt;ComponentData&amp;lt;/code&amp;gt; class in &amp;lt;code&amp;gt;algorithms.py&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;class ComponentData:&lt;br /&gt;
    opinion: Optional[OpinionData]&lt;br /&gt;
    trust_factor: float&lt;br /&gt;
    intermediate_results: list&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
However, from earlier work we anticipate a &amp;lt;code&amp;gt;trust_factor&amp;lt;/code&amp;gt; with several components. [[Trust attenuation and the inadequacy of single-value trust factors|First, we introduced the idea that there may be a difference between judgement trust and communication trust]]. In a [[Modification to the Sapienza probability adjustment for trust to include random lying, bias, and biased lying|later post]] we derived an extension to the [[Modification to the Sapienza probability adjustment for trust to include random lying, bias, and biased lying|Sapienza trust-modification]] for probability which included the notion of lying (random lying and biased lying) and bias.&lt;br /&gt;
&lt;br /&gt;
In an effort to exercise our interface let’s include these concepts and do an example. To be clear, the interface itself permitted the new data type so, for the moment, nothing in &amp;lt;code&amp;gt;algorithms.py&amp;lt;/code&amp;gt; was changed.&lt;br /&gt;
&lt;br /&gt;
To incorporate these ideas, the &amp;lt;code&amp;gt;trust_factor&amp;lt;/code&amp;gt; is now defined as a list of lists of floats, eg:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;trust_factor = [ [0.8, 0.1, 0.02, 0.02, 0.02, 0.02, 0.02],    #Tj, judgement trust&lt;br /&gt;
                 [0.9, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0] ]        #Tc, communication trust&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To review, the meaning of each of these entries is explained below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;trust_factor = [ [Ttruth_j, Trandom_j, Tlierandom_j, Tliebias1_j, Tliebias2_j, Tbias1_j, Tbias2_j],&lt;br /&gt;
                 [Ttruth_c, Trandom_c, Tlierandom_c, Tliebias1_c, Tliebias2_c, Tbias1_c, Tbias2_c] ]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
where &amp;lt;code&amp;gt;_j&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;_c&amp;lt;/code&amp;gt; represent the judgement trust and communication trust respectively. The 1 and 2 represent a bias or biased lie &amp;lt;i&amp;gt;toward&amp;lt;/i&amp;gt; that outcome, ie &amp;lt;code&amp;gt;Tbias1&amp;lt;/code&amp;gt; represents a bias toward the &amp;lt;code&amp;gt;1&amp;lt;/code&amp;gt; outcome. This example is for a [[predicate]] question in which the two choices are 1 and 2. For more choices we would simply extend the list appropriately with entries for 3, 4, etc.&lt;br /&gt;
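Since the seven entries of each row partition the source's possible behaviors, each row should sum to 1.0 (as both example rows above do). A minimal validator can check that shape. This is a hedged sketch, not code from the snippet; the function name and tolerance are assumptions for illustration.

```python
import math

# Sketch of a structural check for the two-row trust_factor described
# above. Each row is
#   [Ttruth, Trandom, Tlierandom, Tliebias1, Tliebias2, Tbias1, Tbias2]
# and for num_choices outcomes has 3 + 2*num_choices entries. Because
# the entries partition the source's behavior, each row should sum to 1.
def validate_trust_factor(trust_factor, num_choices=2):
    expected_len = 3 + 2 * num_choices
    assert len(trust_factor) == 2, "expect two rows: Tj (judgement) and Tc (communication)"
    for row in trust_factor:
        assert len(row) == expected_len, "wrong number of trust components"
        assert math.isclose(sum(row), 1.0, abs_tol=1e-9), "components must sum to 1"
    return True
```

Checking the example above, `validate_trust_factor([[0.8, 0.1, 0.02, 0.02, 0.02, 0.02, 0.02], [0.9, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0]])` passes, since both rows have seven entries summing to 1.0.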
&lt;br /&gt;
&amp;lt;code&amp;gt;Ttruth&amp;lt;/code&amp;gt; is the trust that the source is telling the truth (this is our original conception of trust).&lt;br /&gt;
&lt;br /&gt;
When the source is not telling the truth, ie for the &amp;lt;code&amp;gt;1.0-Ttruth&amp;lt;/code&amp;gt; portion, we have:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Trandom&amp;lt;/code&amp;gt; is the chance that the answer is random. In earlier work, with only the [https://ceur-ws.org/Vol-1664/w9.pdf Sapienza trust-modification], this is the only portion of &amp;lt;code&amp;gt;1-Ttruth&amp;lt;/code&amp;gt; we were considering. &amp;lt;code&amp;gt;Trandom&amp;lt;/code&amp;gt; was thus implicit when stating &amp;lt;code&amp;gt;Ttruth&amp;lt;/code&amp;gt; and turns out to not even be necessary in the calculation. Now, however, since we are modeling other &amp;lt;code&amp;gt;1-Ttruth&amp;lt;/code&amp;gt; scenarios it becomes necessary to explicitly state what &amp;lt;code&amp;gt;Trandom&amp;lt;/code&amp;gt; is.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Tlierandom&amp;lt;/code&amp;gt; is the chance that the answer is a random lie (ie a random answer that is not the truth).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Tliebias&amp;lt;/code&amp;gt; is the chance that the source is lying with a bias toward a particular answer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Tbias&amp;lt;/code&amp;gt; is the chance that the source is biased toward a particular answer.&lt;br /&gt;
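As an aside on &amp;lt;code&amp;gt;Trandom&amp;lt;/code&amp;gt;: in the original Sapienza-style adjustment the entire non-truth portion was random, which amounts to blending the stated probabilities with a uniform distribution. A minimal sketch (the function name here is illustrative, not from the codebase):&lt;br /&gt;

```python
# Sketch of the original Sapienza-style adjustment: blend a source's
# stated probabilities with a uniform distribution.  The random portion
# is implicitly 1 - Ttruth, which is why it never had to be stated.
def calcpmod_sapienza_sketch(P, Ttruth):
    n = len(P)
    return [Ttruth * p + (1.0 - Ttruth) / n for p in P]

# An opinion of [0.6, 0.4] seen through a trust of 0.8 becomes
# approximately [0.58, 0.42], matching the worked example below.
pmod = calcpmod_sapienza_sketch([0.6, 0.4], 0.8)
```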
&lt;br /&gt;
The function to perform the calculation in &amp;lt;code&amp;gt;custom_algo.py&amp;lt;/code&amp;gt; is called &amp;lt;code&amp;gt;pmod_random_lying_bias&amp;lt;/code&amp;gt; in the snippet (https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/149) and follows the equation laid out in [[Modification to the Sapienza probability adjustment for trust to include random lying, bias, and biased lying|the post mentioned above]]. The input to this function is a list of probabilities P and a list of trusts T representing either the judgement trust or communication trust, eg:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;P=[0.8,0.2]&lt;br /&gt;
T=[0.7, 0.1, 0.04, 0.02, 0.03, 0.05, 0.06]&amp;lt;/pre&amp;gt;&lt;br /&gt;
A check is made within the function to ensure that the P entries sum to 1, and likewise for the T entries. Other checks ensure that the lengths of the P and T lists are consistent with each other.&lt;br /&gt;
&lt;br /&gt;
In anticipation of the user desiring a simpler model, like the ones we’ve traditionally used, the &amp;lt;code&amp;gt;T&amp;lt;/code&amp;gt; entries can be truncated and the algorithm will simply fill in the rest with 0’s. If the user enters only one value of &amp;lt;code&amp;gt;T&amp;lt;/code&amp;gt; then the second value, &amp;lt;code&amp;gt;Trandom&amp;lt;/code&amp;gt;, will be taken as &amp;lt;code&amp;gt;1-T&amp;lt;/code&amp;gt; and the rest of the entries will be filled in with 0’s.&lt;br /&gt;
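That fill-in rule can be sketched as follows (a hypothetical helper, not the actual code in &amp;lt;code&amp;gt;custom_algo.py&amp;lt;/code&amp;gt;):&lt;br /&gt;

```python
def expand_trust_sketch(T, full_len=7):
    # Pad a truncated trust list out to the full 7-entry form for a
    # two-outcome question.  A single entry [Ttruth] is completed with
    # Trandom = 1 - Ttruth; everything else is filled with 0's.
    T = list(T)
    if len(T) == 1:
        T.append(1.0 - T[0])
    return T + [0.0] * (full_len - len(T))

short = expand_trust_sketch([0.8])             # approximately [0.8, 0.2, 0, 0, 0, 0, 0]
partial = expand_trust_sketch([0.9, 0.1, 0.0])
```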
&lt;br /&gt;
The function &amp;lt;code&amp;gt;pmod_random_lying_bias&amp;lt;/code&amp;gt; can thus replace all the other functions we’ve been using for the [[Technical overview of the ratings system|Sapienza trust equation]] since it gives the same results in cases where only one value of &amp;lt;code&amp;gt;T&amp;lt;/code&amp;gt; is entered.&lt;br /&gt;
&lt;br /&gt;
The function &amp;lt;code&amp;gt;straight_average_intermediate_privacy_multitrust_random_lying_bias&amp;lt;/code&amp;gt; was created to exercise the more complex &amp;lt;code&amp;gt;trust_factor&amp;lt;/code&amp;gt; discussed here. This is an extension of the [[A straight average algorithm with continuous input distributions, complex trust, and intermediate results|straight averaging]] algorithm we’ve discussed previously.&lt;br /&gt;
&lt;br /&gt;
To modify the probability for the random lying, biased lying, and bias parts of the &amp;lt;code&amp;gt;trust_factor&amp;lt;/code&amp;gt;, we only needed to call the new &amp;lt;code&amp;gt;pmod_random_lying_bias&amp;lt;/code&amp;gt; function in place of the original &amp;lt;code&amp;gt;calcpmod_sapienzatrust&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;calcpmod_sapienzatrust_intermediate&amp;lt;/code&amp;gt; functions. Handling this is therefore relatively simple and does not cause any real structural changes to the algorithm.&lt;br /&gt;
&lt;br /&gt;
However, handling the difference between judgement trust &amp;lt;code&amp;gt;Tj&amp;lt;/code&amp;gt; and communication trust &amp;lt;code&amp;gt;Tc&amp;lt;/code&amp;gt; is more involved. The &amp;lt;code&amp;gt;Tj&amp;lt;/code&amp;gt; is applied by a node’s immediate parent to that node’s personal [[opinion]]. The &amp;lt;code&amp;gt;Tc&amp;lt;/code&amp;gt;, by contrast, is applied to the computed [[Opinion|opinion]] resulting from that child’s children. Since the only thing the child is doing with those results is communicating them up the tree, the communication trust is the appropriate one to use. By distinguishing between &amp;lt;code&amp;gt;Tj&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Tc&amp;lt;/code&amp;gt; we can avoid needless [[Trust attenuation and the inadequacy of single-value trust factors|trust attenuation]] and improve the accuracy of our answers.&lt;br /&gt;
&lt;br /&gt;
The algorithm works by distinguishing between the head node and the child nodes in the &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt; that are communicated up the tree. Thus the &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt;, instead of only containing the numerator sum and population information, now contain this information for both the parent and its children. When this information is transferred up a level, the parent is modified with &amp;lt;code&amp;gt;Tj&amp;lt;/code&amp;gt; and the children are modified with &amp;lt;code&amp;gt;Tc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Let’s take a look at a specific example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, P=50%&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, P=50%&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, P=50%&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, P=60%&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4, P=70%&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, P=80%&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, P=90%&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [label=&amp;quot;Tj=0.8, Tc=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [label=&amp;quot;Tj=0.8, Tc=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 3 [label=&amp;quot;Tj=0.8, Tc=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 4 [label=&amp;quot;Tj=0.8, Tc=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 5 [label=&amp;quot;Tj=0.8, Tc=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 6 [label=&amp;quot;Tj=0.8, Tc=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
This is the same tree diagram used before for simple averaging but now we are introducing the two different types of trust.&lt;br /&gt;
&lt;br /&gt;
We start with the group of Nodes 1,3,4. Using the new &amp;lt;code&amp;gt;trust_factor&amp;lt;/code&amp;gt; our standard setup is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;opinion1 = OpinionData([0.5,0.5], 1)&lt;br /&gt;
opinion3 = OpinionData([0.6,0.4], 1)&lt;br /&gt;
opinion4 = OpinionData([0.7,0.3], 1)&lt;br /&gt;
intermediate_results = []&lt;br /&gt;
trust_factor = [ [0.8, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0],    #Tj, judgement trust&lt;br /&gt;
                 [0.9, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0] ]  #Tc, communication trust&lt;br /&gt;
trust_factor_self = [ [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],    #Tj, judgement trust&lt;br /&gt;
                      [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] ]  #Tc, communication trust&lt;br /&gt;
comp1 = ComponentData(opinion1, trust_factor_self, intermediate_results)&lt;br /&gt;
comp3 = ComponentData(opinion3, trust_factor, intermediate_results)&lt;br /&gt;
comp4 = ComponentData(opinion4, trust_factor, intermediate_results)&lt;br /&gt;
alginp134 = AlgorithmInput([comp1, comp3, comp4],{})&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Nodes 3 and 4 are leaf nodes and have no intermediate results of their own. Node 1 will pass &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt; up to Node 0 later but, for the moment, has none either. The calculation, also called in the standard manner,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;output134 = straight_average_intermediate_privacy_multitrust_random_lying_bias(alginp134)&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
first modifies the trusts of each node using &amp;lt;code&amp;gt;Tj = 0.8&amp;lt;/code&amp;gt;, the trust Node 1 has for 3 and 4. It assumes it trusts itself with &amp;lt;code&amp;gt;Tj = 1.0&amp;lt;/code&amp;gt; as shown above. The result is the same as one would obtain using the Sapienza trust formula since we are ignoring for now the additional lying, biased lying and bias discussed earlier. In the algorithm &amp;lt;code&amp;gt;pmodlist&amp;lt;/code&amp;gt; holds this value:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;pmodlist = [[0.5, 0.5], [0.58, 0.42], [0.66, 0.34]]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This simply means:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;Pmod1 = [0.5,0.5]&lt;br /&gt;
Pmod3 = [0.58, 0.42]&lt;br /&gt;
Pmod4 = [0.66, 0.34]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The average of this list is now taken:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;pavelist = [0.58, 0.42]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;new_intermediate_results&amp;lt;/code&amp;gt; are then established by the function &amp;lt;code&amp;gt;create_intermediate_results_when_no_intermediate_results_privacy_multitrust_random_lying_bias&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;new_intermediate_results = [[[0.5, 0.5], 1], [[1.24, 0.76], 2]]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This is simply the probability of Node 1, followed by the numerator for Nodes 3 and 4, [(0.58+0.66), (0.42+0.34)], and the population of Nodes 3 and 4, which is 2.&lt;br /&gt;
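These numbers are easy to verify with a short sketch of the bookkeeping (illustrative code, assuming the Sapienza-style blend toward uniform; not the actual function):&lt;br /&gt;

```python
def pmod_sketch(P, Ttruth):
    # Trust-adjust an opinion toward the uniform distribution.
    n = len(P)
    return [Ttruth * p + (1.0 - Ttruth) / n for p in P]

Tj = 0.8
pmod1 = [0.5, 0.5]                    # Node 1 trusts its own opinion fully
pmod3 = pmod_sketch([0.6, 0.4], Tj)   # approximately [0.58, 0.42]
pmod4 = pmod_sketch([0.7, 0.3], Tj)   # approximately [0.66, 0.34]

# Straight average over the three modified opinions:
pavelist = [(a + b + c) / 3.0 for a, b, c in zip(pmod1, pmod3, pmod4)]

# Node 1's own opinion with population 1, then the numerator sum and
# population of its children, Nodes 3 and 4:
child_num = [a + b for a, b in zip(pmod3, pmod4)]
new_intermediate_results = [[pmod1, 1], [child_num, 2]]
```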
&lt;br /&gt;
To be clear, these calculations occur in the &amp;lt;code&amp;gt;else&amp;lt;/code&amp;gt; branch of the check for existing &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;    if(len(intermediate_results) &amp;gt; 0):&lt;br /&gt;
        ...&lt;br /&gt;
    else:&lt;br /&gt;
        ...&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
After the corresponding calculation for Nodes 2,5,6 (shown next) is finished, the intermediate results for Node 2 are updated, since they will be used at the next level:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;comp2.intermediate_results = output256.intermediate_results&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The same calculation is now performed for Nodes 2,5,6 and results in:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;pavelist = [0.686667, 0.313333]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;new_intermediate_results = [[[0.5, 0.5], 1], [[1.56, 0.44], 2]]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Now we level up to the 0,1,2 group.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;opinion0 = OpinionData([0.5,0.5], 1)&lt;br /&gt;
intermediate_results = []&lt;br /&gt;
trust_factor_self = [ [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],    #Tj, judgement trust&lt;br /&gt;
                      [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] ]  #Tc, communication trust&lt;br /&gt;
comp0 = ComponentData(opinion0, trust_factor_self, intermediate_results)&lt;br /&gt;
#reset the trust for 1,2 to reflect their trust connection to 0&lt;br /&gt;
comp1.trust_factor = [ [0.8, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0],    #Tj, judgement trust&lt;br /&gt;
                       [0.9, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0] ]  #Tc, communication trust&lt;br /&gt;
comp2.trust_factor = [ [0.8, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0],    #Tj, judgement trust&lt;br /&gt;
                       [0.9, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0] ]  #Tc, communication trust&lt;br /&gt;
alginp012 = AlgorithmInput([comp0, comp1, comp2],{})&lt;br /&gt;
output012 = straight_average_intermediate_privacy_multitrust_random_lying_bias(alginp012)&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Now, the &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt; are just Node 0’s values along with the intermediate results calculated before:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;intermediate_results = [[[[0.5, 0.5], 1]], [[[0.5, 0.5], 1], [[1.24, 0.76], 2]], [[[0.5, 0.5], 1], [[1.56, 0.44], 2]]]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
We proceed by modifying the probabilities of these results, distinguishing whether to apply judgement trust (self and immediate children: the 1st, 2nd, and 4th entries) or communication trust (grandchildren: the 3rd and 5th entries). This is done in the function:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;calcpmod_sapienzatrust_intermediate_privacy_multitrust_random_lying_bias&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The result is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;pmod = [[0.5, 0.5], [0.5, 0.5], [1.216, 0.784], [0.5, 0.5], [1.504, 0.496]]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
which, again, can be verified with the Sapienza trust equation. The average of these values is taken and &amp;lt;code&amp;gt;new_intermediate_results&amp;lt;/code&amp;gt; are calculated:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;pave = [0.602857142857143, 0.3971428571428572]&lt;br /&gt;
new_intermediate_results = ([[0.5, 0.5], 1.0], [[3.72, 2.28], 6.0])&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
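The arithmetic of this final level can be reproduced with a short sketch (again illustrative, assuming the Sapienza-style blend, with the uniform part of an aggregated entry scaled by its population):&lt;br /&gt;

```python
def pmod_agg_sketch(num, pop, T):
    # Trust-adjust an aggregated numerator: each of the pop underlying
    # opinions is blended toward uniform, so the uniform part scales by pop.
    n = len(num)
    return [T * v + (1.0 - T) * pop / n for v in num]

Tj, Tc = 0.8, 0.9
# (numerator, population, trust to apply) for the five flattened entries:
entries = [([0.5, 0.5], 1, 1.0),    # Node 0 itself (full self-trust)
           ([0.5, 0.5], 1, Tj),     # Node 1 (judgement trust)
           ([1.24, 0.76], 2, Tc),   # Nodes 3,4, communicated by Node 1
           ([0.5, 0.5], 1, Tj),     # Node 2 (judgement trust)
           ([1.56, 0.44], 2, Tc)]   # Nodes 5,6, communicated by Node 2

pmod = [pmod_agg_sketch(num, pop, T) for num, pop, T in entries]
total_pop = sum(pop for _, pop, _ in entries)    # 7 nodes in all
pave = [sum(p[i] for p in pmod) / total_pop for i in range(2)]
# pave comes out to approximately [0.602857, 0.397143], as above
```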
The calculation concludes by placing the output and intermediate results into the proper data structure and returning:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;pave_output = OpinionData(pave, 1)&lt;br /&gt;
return AlgorithmOutput(pave_output, new_intermediate_results)&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
== Input and output data for continuous distributions ==&lt;br /&gt;
&lt;br /&gt;
Two algorithms were created for continuous distributions, &amp;lt;code&amp;gt;bayes_ave_tave_continuous&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;bayes_ave_tave_points_continuous&amp;lt;/code&amp;gt;. Both follow the math laid out in the continuous section in [[Binned and continuous distributions]] and produce the same results. The only difference between them is explained as follows:&lt;br /&gt;
&lt;br /&gt;
* The first (&amp;lt;code&amp;gt;_continuous&amp;lt;/code&amp;gt;) uses functions as input and throughout the calculation,&lt;br /&gt;
* while the second (&amp;lt;code&amp;gt;_points_continuous&amp;lt;/code&amp;gt;) can use either functions or [[wikipedia:Discrete mathematics|discrete points]] as input. It then converts any input functions to discrete points at the beginning of the calculation and uses the points throughout.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;points_continuous&amp;lt;/code&amp;gt; variant is more flexible and will probably be the one chosen for further implementation. It should be noted that since these calculations are, by their nature, numerical [[wikipedia:Integration (mathematics)|integrations]], the output will always be a set of discrete points. If we wish to send such output up to a higher level in the tree, it will be sent as points since it would be very awkward to convert the points back to a continuous function just to satisfy an input requirement at the next level.&lt;br /&gt;
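The conversion step that the &amp;lt;code&amp;gt;points_continuous&amp;lt;/code&amp;gt; variant performs at the start can be sketched as (names hypothetical):&lt;br /&gt;

```python
def function_to_points_sketch(f, xmin, xmax, n):
    # Sample the input density at n+1 evenly spaced points on
    # [xmin, xmax]; the rest of the calculation uses only these points.
    step = (xmax - xmin) / n
    x_list = [xmin + i * step for i in range(n + 1)]
    p_list = [f(x) for x in x_list]
    return x_list, p_list

# A simple density on [0, 1], sampled on the same grid as the example below:
x_list, p_list = function_to_points_sketch(lambda x: 2.0 * x, 0.0, 1.0, 10)
```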
&lt;br /&gt;
So far, [[Multilevel calculations|multilevel calculations]] with &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt; have not been implemented in either of these continuous algorithms. They are valid only for a parent and its immediate children. This, of course, will be rectified in later work.&lt;br /&gt;
&lt;br /&gt;
The algorithms combine three calculations in one: [[Technical overview of the ratings system|Bayesian combination]], [[Aggregation techniques|straight average]], and [[Aggregation techniques|trust-weighted average]]. This is done for the sake of efficiency since they are potentially long-running calculations. If each calculation were called separately, three separate integration loops would be invoked, whereas this way only one is needed. Usage, of course, will dictate the prudence of this decision but for now it has the additional benefit of expanding our notion of the output data.&lt;br /&gt;
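As a sketch of the single-loop idea (the Bayesian line here is a plain normalized product of the inputs, numerically normalized with a trapezoid rule; the real algorithm applies the trust-modified combination from the posts linked above):&lt;br /&gt;

```python
import math

def weibull(k, lamb, x):
    return (k / lamb) * (x / lamb) ** (k - 1.0) * math.exp(-(x / lamb) ** k)

trusts = [1.0, 0.7]
funcs = [lambda x: weibull(7.0, 0.7, x),   # Pd1 from the example below
         lambda x: weibull(3.0, 0.3, x)]   # Pd2 from the example below
xs = [i / 10.0 for i in range(11)]         # xmin=0, xmax=1, n=10

pdave, pdtave, raw = [], [], []
for x in xs:                               # one loop accumulates all three
    vals = [f(x) for f in funcs]
    pdave.append(sum(vals) / len(vals))                                    # straight average
    pdtave.append(sum(t * v for t, v in zip(trusts, vals)) / sum(trusts))  # trust-weighted
    raw.append(math.prod(vals))                                            # unnormalized product

# Normalize the product numerically so it is a density (trapezoid rule):
area = sum(0.5 * (raw[i] + raw[i + 1]) * 0.1 for i in range(10))
pdcombbayes = [v / area for v in raw]
```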
&lt;br /&gt;
For inputs that are functions the user would write, for instance,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;def weibull(k, lamb, x):&lt;br /&gt;
    w = (k/lamb)*(x/lamb)**(k-1.0)*math.exp(-(x/lamb)**k)&lt;br /&gt;
    return w&lt;br /&gt;
&lt;br /&gt;
#pick some variant of Weibull for each probability density function&lt;br /&gt;
def Pd1(x):&lt;br /&gt;
    k = 7.0&lt;br /&gt;
    lamb = 0.7&lt;br /&gt;
    return weibull(k, lamb, x)&lt;br /&gt;
&lt;br /&gt;
def Pd2(x):&lt;br /&gt;
    k = 3.0&lt;br /&gt;
    lamb = 0.3&lt;br /&gt;
    return weibull(k, lamb, x)&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
These are the same as the functions used in the example in [[Binned and continuous distributions]].&lt;br /&gt;
&lt;br /&gt;
Algorithm setup then proceeds by passing the desired function into the &amp;lt;code&amp;gt;OpinionData&amp;lt;/code&amp;gt; and setting up the components as usual:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;opinion0 = OpinionData(Pd1, 1)&lt;br /&gt;
print(opinion0)&lt;br /&gt;
trust_factor = 1.0&lt;br /&gt;
intermediate_results = []&lt;br /&gt;
comp0 = ComponentData(opinion0, trust_factor, intermediate_results)&lt;br /&gt;
print(comp0)&lt;br /&gt;
&lt;br /&gt;
opinion1 = OpinionData(Pd2, 1)&lt;br /&gt;
trust_factor = 0.7&lt;br /&gt;
intermediate_results = []&lt;br /&gt;
comp1 = ComponentData(opinion1, trust_factor, intermediate_results)&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
A &amp;lt;code&amp;gt;misc_input&amp;lt;/code&amp;gt; variable needs to be defined to establish the limits of integration and number of steps:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;misc_input = {&#039;xmin&#039;: 0.0, &#039;xmax&#039;: 1.0, &#039;n&#039;: 10}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The &amp;lt;code&amp;gt;AlgorithmInput&amp;lt;/code&amp;gt; is now set up and the calculation is called:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;alginp01 = AlgorithmInput([comp0, comp1], misc_input)&lt;br /&gt;
output01 = bayes_ave_tave_points_continuous(alginp01)&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The output data type is a list of lists of floats:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;opinion_output = OpinionData([x_list, Pdcombbayes_list, Pdave_list, Pdtave_list], 1)&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For the example in the snippet (https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/149), the output can be rendered as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;x    Pdcombbayes            Pdave               Pdtave&lt;br /&gt;
0.0  0.0000000000000000000  0.0000000000000000  0.000000000000000&lt;br /&gt;
0.1  0.0007723693378394874  0.5353983016367595  0.4409312481410334&lt;br /&gt;
0.2  0.1525468172069069     1.6550908195260654  1.3639758039868168&lt;br /&gt;
0.3  1.9294466580934446     1.8702970280637445  1.5511504309489188&lt;br /&gt;
0.4  4.811844755093861      1.0012802578006554  0.8848125065964566&lt;br /&gt;
0.5  2.7787458964433065     0.7394852801116265  0.822147522458689&lt;br /&gt;
0.6  0.3214702169420221     1.418158704359427   1.6660540336376612&lt;br /&gt;
0.7  0.005167160511567982   1.839479957220381   2.164064860954624&lt;br /&gt;
0.8  6.126176241875283e-06  0.8729533228951275  1.0270038363258995&lt;br /&gt;
0.9  1.9480821926456603e-10 0.0678490009917635  0.07982235407810565&lt;br /&gt;
1.0  3.4586507710773026e-17 0.0002264086030695  0.00026636306243311763&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For the case when discrete points are used as input (instead of the functions themselves), the input to the &amp;lt;code&amp;gt;OpinionData&amp;lt;/code&amp;gt; is a list of lists of floats:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;x_data_0 = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]&lt;br /&gt;
p_data_0 = [0.0, 8.4998494312e-05, 0.005439064803658, 0.061799644413066, 0.341296334310195, 1.207904653411647, 2.822898903602754, 3.678794411714423, 1.745906232336168, 0.135698001814369, 0.00045281720613]&lt;br /&gt;
xp_data = [x_data_0, p_data_0]&lt;br /&gt;
opinion0 = OpinionData(xp_data, 1)&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Establish_trust&amp;diff=2457</id>
		<title>Establish trust</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Establish_trust&amp;diff=2457"/>
		<updated>2024-10-21T19:16:10Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Trust}}&lt;br /&gt;
&lt;br /&gt;
{{Main|Opinion}}&lt;br /&gt;
&lt;br /&gt;
== Establishing [[Trust]] ==&lt;br /&gt;
&lt;br /&gt;
One of the big problems users will have is determining the trust value to use, especially at first. Rarely do people assign proper numerical values to how much they trust their peers, much less calculate those values from controlled studies or their own experience. More likely, they just have an intuitive sense for when someone is trustworthy on a particular subject. &lt;br /&gt;
&lt;br /&gt;
In the trust network we have no choice but to assign numbers to our [[Trust|trust levels]]. Does my daughter think I&#039;m 70 or 80% trustworthy on a physics homework question? She probably doesn&#039;t know herself but she clearly has some level of trust or she wouldn&#039;t be asking. This leads us to a [[wikipedia:User interface|UI]] type question: what is the best way to get users to input their level of trust? Maybe instead of numbers we could use a cold-to-hot colorbar, various gradations of a smiley face, etc.&lt;br /&gt;
&lt;br /&gt;
The trust network itself provides us some benefits in dealing with this: &lt;br /&gt;
* It gives us a way to see what others think of a recently added node. If we&#039;re asked to trust a new node, what do that node&#039;s immediate peers think of it? Maybe we can default to that trust for starters.&lt;br /&gt;
* It gives us a way to collect information and calculate trust levels over time. It can store the queries we&#039;ve made and who answered them, and provide a way to input our own satisfaction with the answers. It could then score the interactions between ourselves and each node that answered us.&lt;br /&gt;
* It provides an ability to record how servers answer questions, to the extent [[Privacy, identity, and fraud in the ratings system|privacy]] concerns allow. Are they taking their time to answer or do they answer immediately? Are the answers long or short? Are they cogent? Do they engage in petty attacks on their peers? These metrics might give us some proxies for trust.&lt;br /&gt;
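On the first point, the defaulting heuristic could be as simple as a trust-weighted mean of what the new node&#039;s peers report (one plausible rule among many; the names here are illustrative):&lt;br /&gt;

```python
def default_trust_sketch(peer_reports):
    # peer_reports: list of (our_trust_in_peer, peers_trust_in_new_node).
    # Each peer's report is weighted by how much we trust that peer.
    total = sum(w for w, _ in peer_reports)
    if total == 0.0:
        return 0.5   # no usable reports; fall back to a neutral default
    return sum(w * t for w, t in peer_reports) / total

# Two peers we trust at 0.9 and 0.6 report trusts of 0.8 and 0.5 in the
# new node, giving a starting default of approximately 0.68:
starting_trust = default_trust_sketch([(0.9, 0.8), (0.6, 0.5)])
```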
&lt;br /&gt;
On this last point it might help to view trust more as an earned accomplishment than as something bestowed by others. That is, good behavior -- ie thoughtful answers -- leads to higher trust. This will encourage the correct mindset for trust on the part of both clients and servers. Clients will approach trust as something to withhold until proven, and servers will view it as something that requires work: real thought rather than off-the-cuff answers. &lt;br /&gt;
&lt;br /&gt;
== Trust in Information ==&lt;br /&gt;
&lt;br /&gt;
Trust is normally thought of as trust in a person, but the type of information being presented is often just as important. A salesperson&#039;s pitch for a product is obviously biased and is trusted less than when the same salesperson talks about neutral subjects. Indeed, salespeople often start by talking about other things in the hope of building a higher default trust that then extends into their product pitch.  &lt;br /&gt;
&lt;br /&gt;
To help, we might try classifying information into baseline levels of trustworthiness. The following is incomplete and based on some random examples. It forms a very rough continuum from reliable to unreliable. &lt;br /&gt;
&lt;br /&gt;
# Established fact -- Generally uncontroversial, many verification methods, understood for a long time ==&amp;gt; Population of NYC. Conservation of energy.&lt;br /&gt;
# Expert consensus -- The result of many studies and review. Almost an established fact ==&amp;gt; Raising interest rates decreases inflation.&lt;br /&gt;
# Statistical data and studies -- Rigorous experimentation or field studies done by experts, or reviews of such work ==&amp;gt; Are eggs good for you?&lt;br /&gt;
# Theory -- A principle supported by scientific study but not completely proven or established (often difficult to prove) ==&amp;gt; big bang theory, string theory, etc.&lt;br /&gt;
# Philosophy -- An abstract general idea that is not explicitly testable but pursued through rigorous inquiry ==&amp;gt; How should we conduct ourselves? What is knowledge? &lt;br /&gt;
# Conjecture and hypothesis -- A testable proposition intended for scientific scrutiny ==&amp;gt; Efficacy of the latest Covid-19 vaccine?&lt;br /&gt;
# Speculation -- Semi-informed prediction, often by knowledgeable people, but may never be tested rigorously ==&amp;gt; Best investment for the remainder of this year?&lt;br /&gt;
# Persuasion -- Subjective viewpoint based on evidence but usually tied to an agenda ==&amp;gt; Newspaper [[opinion]] columns.&lt;br /&gt;
# Marketing -- An attempt to sell a product/service. Usually easy to identify. In many cases fact-based but can use propaganda, ideology, etc. ==&amp;gt; What&#039;s the best scooter under $1000?  &lt;br /&gt;
# Anecdotal evidence -- Personal story, perhaps true but unverified or unverifiable ==&amp;gt; The Wim Hof method makes me feel great!&lt;br /&gt;
# Personal opinion -- Subjective view with no expectation of verification ==&amp;gt; Are Vermeer paintings beautiful?&lt;br /&gt;
# Ideology / Religion -- A systematized set of beliefs held by many (usually) but unverifiable. Often borrows from philosophy, fact, opinion, anecdotal evidence, propaganda, etc. ==&amp;gt; Do you believe in socialism? An afterlife?&lt;br /&gt;
# Propaganda -- Explicitly biased information intended to win people over to a political agenda. Like misinformation but more visceral and less factual ==&amp;gt; The Tutsis are bad people out to get us.&lt;br /&gt;
# Misinformation -- A close cousin. False statements disguised as fact or statistical studies in order to deceive. ==&amp;gt; The Tutsis commit more crimes than we do.&lt;br /&gt;
# Satire, parody, sarcasm -- Not intended as informational but a deliberate distortion for humor or to make a point.  &lt;br /&gt;
&lt;br /&gt;
It seems we can probably use [[wikipedia:Artificial Intelligence|AI]] to perform some kind of classification of questions and answers into a continuum of this type to establish baseline levels of trust, or at least to present this to users: &amp;quot;Hi, you are likely being presented with someone&#039;s personal opinion. We recommend a trust level of 0.5 for that.&amp;quot; Or we could ask users to take a survey at the beginning to help them establish their default levels of trust for various types of information. &lt;br /&gt;
&lt;br /&gt;
Many of these categories can be sub-classified into controversial or not. The result of the [[wikipedia:2020 United States presidential election|2020 election]] is an established fact but is controversial because it&#039;s been affected by misinformation and propaganda. Others are personal opinions such as the legality of abortion but are still controversial because they touch on public policy and have also been affected by propaganda, religion, etc. Identifying controversial views might help adjust baseline levels of trust as long as the controversy is grounded in something other than outright misinformation.&lt;br /&gt;
&lt;br /&gt;
Many of these categories, but especially 1-7, can be divided into knowledge that&#039;s widely known vs. that which is known only to a few. Many questions will be in the latter category because the former can basically &amp;quot;be googled&amp;quot;. This is what gives rise to concerns about [[Trust attenuation and the inadequacy of single-value trust factors|trust attenuation]] and attendant solutions (eg [[Public node|signed registries]]).&lt;br /&gt;
&lt;br /&gt;
== Information and Bad Behavior ==&lt;br /&gt;
&lt;br /&gt;
Here are a few classes of problematic information that don&#039;t fit neatly into the above continuum but come to mind:&lt;br /&gt;
&lt;br /&gt;
# Protected information -- Security/privacy information that shouldn&#039;t be shared ==&amp;gt; What&#039;s Joe&#039;s private key? If we find people exchanging information like this, what should be done?&lt;br /&gt;
# Unethical or illegal info -- Information that is illegal for us to have circulating on the system and should be taken down ==&amp;gt; Can you help me assassinate my business partner?&lt;br /&gt;
# Info that might have to be moderated out ==&amp;gt; Support for ethnic cleansing, genocide, etc.&lt;br /&gt;
&lt;br /&gt;
I&#039;m not sure how [[Content moderation|content moderation]] usually works, but we could set the trust of the offending nodes to zero.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Error_bars_and_a_problem_with_Bayesian_modeling&amp;diff=2432</id>
		<title>Error bars and a problem with Bayesian modeling</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Error_bars_and_a_problem_with_Bayesian_modeling&amp;diff=2432"/>
		<updated>2024-10-16T19:22:48Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Technical overview of the ratings system}}&lt;br /&gt;
&lt;br /&gt;
== Error Bars for Incomplete Analyses ==&lt;br /&gt;
&lt;br /&gt;
Error bars can be placed around incomplete results as they stream in. To do this we can compute the Pcomb for all the nodes in the network, whether they’ve answered or not, by putting P=50% on the nodes that haven’t answered. This way they don’t affect the calculation and the calculation can remain as simple as possible: just compute as if you have all the information. As nodes update their probabilities, better values will result. Users can set their update time intervals to whatever they want.&lt;br /&gt;
&lt;br /&gt;
To calculate the [[wikipedia:Error bar|error bars]] we would perform three calculations: 1) As above, with P=50% on the nodes that haven’t answered and P for the nodes that have. 2) With P for the nodes that have answered and the remaining nodes at P=100% adjusted for the [[trust]] (which we presumably have). 3) With P for the nodes that have answered and the remaining nodes at P=0% adjusted for trust. Calculations 2 and 3 will then give us the max and min error around Calculation 1. At every time interval the user will see a graph with all their points and error bars around each.&lt;br /&gt;
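A sketch of the three calculations, using a straight average of trust-adjusted binary probabilities as a stand-in for whatever aggregation is actually in use (illustrative names):&lt;br /&gt;

```python
def pmod(p, T):
    # Sapienza-style blend of a binary probability toward 50%.
    return T * p + (1.0 - T) * 0.5

def error_band_sketch(answered, unanswered_trusts):
    # answered: (P, trust) pairs for nodes that have replied;
    # unanswered_trusts: trusts for nodes that haven't replied yet.
    def average(fill):
        vals = [pmod(p, T) for p, T in answered]
        vals += [pmod(fill, T) for T in unanswered_trusts]
        return sum(vals) / len(vals)
    center = average(0.5)   # Calculation 1: missing nodes held at P=50%
    upper = average(1.0)    # Calculation 2: missing nodes at P=100%, trust-adjusted
    lower = average(0.0)    # Calculation 3: missing nodes at P=0%, trust-adjusted
    return lower, center, upper

# Two answers in, two outstanding, all with trust 0.8; the band
# (lower, upper) brackets the streaming result (center):
lower, center, upper = error_band_sketch([(0.9, 0.8), (0.7, 0.8)], [0.8, 0.8])
```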
&lt;br /&gt;
== A Problem with Bayesian Modeling ==&lt;br /&gt;
&lt;br /&gt;
One reason to provide error bars is that outstanding nodes, even just one, can have a huge influence on the answer. Given that, it would be good to show the potential error to prevent people from stopping their calculation prematurely.&lt;br /&gt;
&lt;br /&gt;
This can certainly happen in cases where there is a close split between two views. To see this we can run a case (https://peerverity.pages.syncad.com/trust-model-playground/ or use https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/138) where the choices are Sunny or Cloudy, N=10, and 5 nodes think P=90% Sunny and 5 nodes think P=90% Cloudy. The result will be an even split between Sunny (50%) and Cloudy (50%). Now, if 1 extra node thinks it will be Sunny then our combined probability will be Sunny (90%) and Cloudy (10%). In other words the single extra node determines the outcome.&lt;br /&gt;
&lt;br /&gt;
It doesn’t matter how many nodes we have. N=100 results in the same thing. See what’s wrong with this? How can a single extra [[opinion]] in an otherwise split contest among many make you almost certain that the one node is right?&lt;br /&gt;
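&lt;br /&gt;
The split example is easy to reproduce. A minimal sketch, assuming the naive two-outcome Bayesian product rule (Pcomb = prod(p) / (prod(p) + prod(1-p))):&lt;br /&gt;
&lt;br /&gt;
```python
from math import prod

def combine(ps):
    # Naive two-outcome Bayesian combination: Pcomb = prod(p) / (prod(p) + prod(1-p))
    num = prod(ps)
    return num / (num + prod(1 - p for p in ps))

# 5 nodes at 90% Sunny plus 5 nodes at 90% Cloudy (i.e. 10% Sunny): a perfect split
split = [0.9] * 5 + [0.1] * 5
print(round(combine(split), 4))              # 0.5 -- the opposing opinions cancel

# One extra 90%-Sunny node now decides the whole outcome
print(round(combine(split + [0.9]), 4))      # 0.9

# The network size is irrelevant: a 50/50 split of 100 nodes behaves identically
print(round(combine([0.9] * 50 + [0.1] * 50 + [0.9]), 4))   # 0.9
```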
&lt;br /&gt;
The [[Bayes&#039; theorem|Bayes]] equation works by modifying a [[wikipedia:Prior probability|prior probability]] with new evidence to generate a [[wikipedia:Posterior probability|posterior probability]]. It doesn’t know how the prior was generated. The prior can be one experiment or the result of several. Therefore if the results of several experiments are uncertain (a 50/50 split) they will cancel and the new experiment will carry all the weight. It’s as if you just had the one experiment.&lt;br /&gt;
&lt;br /&gt;
However, Bayes works when it is based on properly sampled experiments, each of which is [[wikipedia:Independence (probability theory)|independent]]. If 100 nodes are 100% certain of an outcome and another 100 are 100% certain of the opposite outcome, it means they are incorrectly [[wikipedia:Sampling (statistics)|sampling]] their own space or introducing some other error. The macro result can’t contradict the micro results without someone being wrong.&lt;br /&gt;
&lt;br /&gt;
In cases where there is a clear majority [[Opinion|opinion]] then that opinion will combine, via Bayes, to produce an almost certain result. If we take the above example and add 5 more nodes in the Sunny direction, we will be almost 100% certain of Sunny weather tomorrow. One more node in either direction won’t change that so, in a sense, we’ve converged and don’t have the problem mentioned above. But we still have a serious problem. If 105 nodes think Sunny and 100 think Cloudy, are you really 100% certain of Sunny weather?&lt;br /&gt;
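&lt;br /&gt;
The 105-versus-100 case can be checked numerically with the same naive product rule:&lt;br /&gt;
&lt;br /&gt;
```python
from math import prod

def combine(ps):
    # Naive two-outcome Bayesian combination: Pcomb = prod(p) / (prod(p) + prod(1-p))
    num = prod(ps)
    return num / (num + prod(1 - p for p in ps))

# 105 nodes at 90% Sunny vs 100 nodes at 90% Cloudy (10% Sunny):
# the 5-node surplus alone sets the odds at 9^5 to 1 in favor of Sunny
p = combine([0.9] * 105 + [0.1] * 100)
print(round(p, 5))    # 0.99998 -- near-certainty from a 105-to-100 split
```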
&lt;br /&gt;
Again, Bayes is only useful with rigorously derived [[Technical overview of the ratings system|probabilities]] based on independent and correctly sampled experimental results. Most opinions are not that. The probabilities people generally come up with are just made up or sampled from invalid groups (eg friends who have the same opinion) or are repetitions of the same study (eg the same weather report on TV).&lt;br /&gt;
&lt;br /&gt;
It seems we will need to provide a different model than Bayes to account for this. One idea would be to simply average the probabilities and weight them using trust. Stay tuned.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Effect_of_cycling_in_trust_networks&amp;diff=2431</id>
		<title>Effect of cycling in trust networks</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Effect_of_cycling_in_trust_networks&amp;diff=2431"/>
		<updated>2024-10-16T19:14:53Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Technical overview of the ratings system}}&lt;br /&gt;
&lt;br /&gt;
This discussion will show how [[wikipedia:Cycle (graph theory)|cycling]] in [[trust]] [[Computer network|network]]s rapidly leads to an incorrect answer (Pcomb) which is often much higher than the answer you would get in a non-cycling network. The divergence between answers, however, depends on trust. If trust is low the answers will tend to converge and will be equal when trust is zero. If trust is high the answers will be very far apart.&lt;br /&gt;
&lt;br /&gt;
Suppose we have a trust network composed of four [[wikipedia:Node (graph theory)|Nodes]] 0, 1, 2, 3 with the following connectivity: {0:[1,2], 1:[0,3], 2:[0,3], 3:[0,1]}. That is, Node 0 is connected to Nodes 1 and 2, Node 1 is connected to Nodes 0 and 3, etc.&lt;br /&gt;
&lt;br /&gt;
Each node has the following [[Technical overview of the ratings system|probabilities]] which we will say are all the same for the sake of simplicity: [0.6, 0.4], [0.6, 0.4], [0.6, 0.4], [0.6, 0.4]. That is, Node 0 is 60% confident in its prediction, Node 1 is 60% confident in its prediction, etc.&lt;br /&gt;
&lt;br /&gt;
We will restrict ourselves for now to the case where Trust is 1.0. We will do a case later where the trust is below 1.&lt;br /&gt;
&lt;br /&gt;
This situation looks like the following if we have three “levels” (0-2) and is clearly cycling since Nodes 0, 1, and 3 are used more than once.&lt;br /&gt;
&lt;br /&gt;
{{Transparent|[[File:08db2f633968e0ada04eb4651bfd5291_diagram.drawio.svg|diagram.drawio.svg]]}}&lt;br /&gt;
&lt;br /&gt;
Node 0 will try to answer a [[predicate]] type question, such as “will it rain tomorrow, yes or no?”. It has its own [[wikipedia:Confidence interval|confidence]] of 60% and has friends 1 and 2 who also have 60% confidence in their prediction. They in turn have their friends on Level 2 who are 60% confident in their prediction.&lt;br /&gt;
&lt;br /&gt;
We can calculate this by starting at the bottom-most level (Level 2) which requires no calculation and just defaults to each node’s own confidence (ie 60%). Then we can combine that with the next higher level using Eric’s app (https://peerverity.pages.syncad.com/trust-model-playground/) or the script [[Media:63ac84aba11e887e036f2ea79e543b26_sapienza_bayes2.py|sapienza_bayes2]]. So, the nodes 1, 0, and 3 all have probabilities of 60% and will [[Aggregation techniques|combine to create a probability]] of 0.7714. Similarly the nodes 2, 0, and 3 will combine to create a probability of 0.7714. We now have the following situation:&lt;br /&gt;
&lt;br /&gt;
{{Transparent|[[File:8514b7b2ca9a35a06acd23869761e4be_diagram2.drawio.svg|diagram2.drawio.svg]]}}&lt;br /&gt;
&lt;br /&gt;
We can continue with Eric’s app or sapienza_bayes2.py to roll up the combined probability of Node 0, 1, and 2 which turns out to be 0.9447.&lt;br /&gt;
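&lt;br /&gt;
These two roll-up steps can be verified directly, assuming the two-outcome Bayesian product rule with Trust = 1:&lt;br /&gt;
&lt;br /&gt;
```python
from math import prod

def combine(ps):
    # Two-outcome Bayesian product rule used to roll up confidences (Trust = 1)
    num = prod(ps)
    return num / (num + prod(1 - p for p in ps))

# Level 2 -> Level 1: each Level-1 node combines with its two friends, all at 60%
p_level1 = combine([0.6, 0.6, 0.6])
print(round(p_level1, 4))               # 0.7714

# Level 1 -> Level 0: Node 0 (0.6) combines with Nodes 1 and 2 (each 0.7714)
p_root = combine([0.6, p_level1, p_level1])
print(round(p_root, 4))                 # 0.9447
```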
&lt;br /&gt;
The same non-cycling network looks like this and has a combined probability of 0.8350:&lt;br /&gt;
&lt;br /&gt;
{{Transparent|[[File:2c6678d41364d4623387166207d1c87a_diagram3.drawio.svg]]}}&lt;br /&gt;
&lt;br /&gt;
The non-cycling network uses each node only once and yields a significantly lower Pcomb than the cycling network. And this is only for a 3-level network. We suspect that continuing to add levels which cycle will quickly lead to a confidence of 100% (or 0% if the probabilities were under 50%). Indeed if we added just one additional level to this example (Level 3), Pcomb = 0.9977.&lt;br /&gt;
&lt;br /&gt;
Adding in [[Trust|trust factors]] below 1 will mitigate this effect to some extent but the overall divergence between cycling and non-cycling will still be obvious unless the trust is very low. For instance, repeating the above calculations with Trust=0.7 between all parties leads to Pcomb = 0.97 for the cycling case after 6 levels and Pcomb = 0.76 after 3 levels. Interestingly, the Pcomb does not approach 1.0 in the limit of many levels like it does for Trust=1. Instead it approaches an asymptotic limit which depends on the trust and becomes lower as the trust becomes lower.&lt;br /&gt;
&lt;br /&gt;
The following table illustrates this situation for the case of Trust, ie tfa (trust for all) = 1.0, 0.7, 0.4, and 0.1. For Trust = 0, the Pcomb values will remain at 0.6 since all nodes will be “neutral” except for the top-most node (Node 0) which will use its own default confidence of 0.6 (and implicitly trusts itself, ie Trust=1.0). The trust calculations are taken directly from the Sapienza paper, ( https://ceur-ws.org/Vol-1664/w9.pdf ). Here’s a Python script to do the calculations, based on the paper, which allows us to create a network and roll up the probabilities into a single Pcomb, with cycling and without: [[Media:6484e59605fd81085eaa127bd98bb9ed_sapienza_trusttree.py|sapienza_trusttree]]&lt;br /&gt;
&lt;br /&gt;
[[File:beb8766e6a253d847fc5ff1289be08ef_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
Here are some plots of the above:&lt;br /&gt;
&lt;br /&gt;
[[File:31a5b9b3879010d6ea6638921f73c4ee_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
[[File:1eda7c1df0d63a3c9c9a31def2cae641_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
[[File:bacd4758983568a998a8ee79488869fe_image.png|image]][[File:d0d5c2d486b64329778e635368e1cdb6_image.png|image]]&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Economic_systems&amp;diff=2430</id>
		<title>Economic systems</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Economic_systems&amp;diff=2430"/>
		<updated>2024-10-16T18:59:06Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Ratings system}}&lt;br /&gt;
&lt;br /&gt;
Economic systems refer to methods by which [[Resource allocation|resource allocation]] occurs as a result of the [[ratings system]]. In particular, the ratings system should make possible the [[Democratizing resource allocation|democratization of resource allocation]]. Using the system, people will rate each other&#039;s claims to economic goods based on multiple factors such as their [[productivity]], needs, overall rating, etc. Since the ratings system has both a [[The subjective and community ratings system|subjective and community version]], we distinguish two types of economic system, the [[SRBE and CRBE]] (subjective and [[community]] ratings based economy). The [[Thoughts on a subjective ratings based economy|SRBE]] would be the default since it is based on a subjective ratings system, although its [[Thoughts on an intentional community with a subjective ratings based economy|integration into an intentional community]] is also considered. The CRBE forms a large part of the discussion here since the presence of a community is assumed. However, both systems lead to [[Hippy communes, socialism, work, and egalitarianism|egalitarian economic outcomes]] and the [[Thoughts on a subjective ratings based economy|SRBE]] has the benefit of growing organically out of the completely decentralized subjective ratings system.      &lt;br /&gt;
&lt;br /&gt;
Another related and fundamental distinction is between a [[Money system based on ratings|moneyed]] and a [[Moneyless economy based on reputation and need|moneyless]] economic system. Either approach is possible or some combination of the two. We note, however, that the ratings system and attendant software should provide a feasible alternative to money and eliminate its corrupting influence. The SRBE, in particular, would probably work better in a moneyless system.&lt;br /&gt;
&lt;br /&gt;
We begin by outlining some ideas by which [[economic predicates]] in a ratings system could work. These are the questions that people would be rated on which would influence their ability to fulfill [[Money defined|economic claims]]. One of these, [[Productivity|productivity]], is highly relevant to judging not only &amp;quot;deservingness&amp;quot; but also [[Societal optimization|economic performance]] writ large. We might also, alongside these concepts, try to understand [[Personal choice and sacrificial contributions|personal economic choices and sacrificial contributions]], ones that are difficult to place in a traditional self-interested economic context.&lt;br /&gt;
&lt;br /&gt;
Since we highlight egalitarian economic systems, we might take a look at the [[Hippy communes, socialism, work, and egalitarianism|history of such systems]] in [[Contemporary society|contemporary society]]. Of particular interest is their ability to incentivize productive work. Speaking of productivity, we note that participation in the ratings system will be part of everyone&#039;s &amp;quot;job&amp;quot;, at least in the [[The subjective and community ratings system|community-based system (as opposed to the subjective system)]]. As such members may be [[Compensation for participants|compensated]] depending on the distribution of [[Optimal income distribution|income rules]], their rating, productivity, and so on.&lt;br /&gt;
&lt;br /&gt;
Making good economic decisions is closely tied to [[System modeling|system modeling]]. Our goal is to help communities clearly understand economic policy choices and avoid [[Economic losses and counterfactuals|economic loss]].&lt;br /&gt;
&lt;br /&gt;
The backdrop for all these ideas is the [[Influence of wealth in democracy|influence of wealth]] in contemporary society, particularly on how it erodes democratic principles. We envision communities based on the ratings system to have transparent economic systems with democratically defined rules and a built-in [[Enforcement mechanisms|enforcement mechanism]].&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Economic_predicates&amp;diff=2429</id>
		<title>Economic predicates</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Economic_predicates&amp;diff=2429"/>
		<updated>2024-10-16T18:21:40Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Economic systems}}&lt;br /&gt;
&lt;br /&gt;
Our [[ratings system]] is presumed to start with questions that can be answered through a numerical rating. “Is X a real person” might be answered with a 0.7, indicating the rater’s belief that there is a 70% probability that the person in question is real. We have constructed a [[Technical overview of the ratings system|Bayesian math framework]] to encapsulate this numerically.&lt;br /&gt;
&lt;br /&gt;
Let’s suppose we are trying to ascertain X’s need for a claimed item (in the SRBE/CRBE economy). For the question “Does X need the claimed item?” we could also answer it probabilistically. But there is another interpretation which is clear when we reframe the question as “How much does X need the claimed item?”. The two questions ask for different things.&lt;br /&gt;
&lt;br /&gt;
The first asks us, in essence, to answer Yes or No and then assign a level of confidence to the answer. Answering 50% is to say I don’t know (0% confidence). Answering 70% is to say Yes with a confidence of 40%. Answering 20% is to say No with a confidence of 60%.&lt;br /&gt;
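&lt;br /&gt;
This mapping between a probability answer and a Yes/No verdict with confidence can be sketched as follows (a hypothetical helper, not part of the system):&lt;br /&gt;
&lt;br /&gt;
```python
def answer_and_confidence(p):
    # Hypothetical helper: split a probability answer into a verdict plus confidence.
    # Confidence is the distance from the "I don't know" point (0.5), rescaled to 0-1.
    if p == 0.5:
        return "Unsure", 0.0
    verdict = "Yes" if p > 0.5 else "No"
    return verdict, round(abs(2 * p - 1), 4)

print(answer_and_confidence(0.5))   # ('Unsure', 0.0)
print(answer_and_confidence(0.7))   # ('Yes', 0.4)
print(answer_and_confidence(0.2))   # ('No', 0.6)
```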
&lt;br /&gt;
For the second question, let’s assume the scale is 0-1. This would mean 0.5 (50%) would represent someone who has an “average” need for the item. Clearly, we would need to define what the scale really means but let’s say it corresponds to a normal distribution with a mean at 0.5. A 0 would represent someone with no need for the item. A 1 would represent someone who has a maximum need for the item.&lt;br /&gt;
&lt;br /&gt;
Although the first question is a [[predicate]], the second question seems to be more useful in a ratings context. The first question provides a confidence which is really just an error estimation. Error will occur, of course, but it seems we should be placing the error around the second question. That is, people could answer the “how much” question with an interval representing where they think the error lies (eg 0.7 +- 0.1, so an interval of 0.6-0.8).&lt;br /&gt;
&lt;br /&gt;
The need rating should probably start with the claimant. A statement of need and corresponding numerical score could then set the stage for further ratings by [[community]] members. These raters, in turn, should probably start by providing a brief statement of how they are qualified to know someone else’s need (eg He’s my neighbor and I’ve known him for 20 years). A member rating of someone’s need without such a statement should probably be rejected unless it is clear from prior ratings that they have an ongoing relationship.&lt;br /&gt;
&lt;br /&gt;
We presume here that items are “needed”. But need is a sliding scale as well. We absolutely need food, clothing, and shelter. But do we really need a car? Well, probably in the US most people can make a reasonable argument for a car due to how we’ve designed our urban environment. But our need for cars is certainly less than that for food. The same is true for all manner of goods/services we think we need. Even goods classified as needs, such as food, often have a large hedonic component built into them (eg chocolate cake) and create a claim which is essentially outside the bounds of basic needs. In the US, “necessary” foods are sometimes labeled for compliance with government food aid programs (eg WIC).&lt;br /&gt;
&lt;br /&gt;
It will be important for our system to rate the necessity of items in relation to basic needs (eg food). Then, when claims are made, [[Optimal income distribution|equitable distribution]] can be prioritized for those items classified closer to the basic need level. The concept, loosely stated, is that everyone gets to eat before anyone gets more.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Economic_losses_and_counterfactuals&amp;diff=2428</id>
		<title>Economic losses and counterfactuals</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Economic_losses_and_counterfactuals&amp;diff=2428"/>
		<updated>2024-10-14T21:20:21Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Economic systems}}&lt;br /&gt;
&lt;br /&gt;
{{Main|System modeling}}&lt;br /&gt;
&lt;br /&gt;
During Covid, a [https://www.cnbc.com/2024/05/29/doj-charges-chinese-national-in-5point9-billion-covid-botnet-fraud.html botnet with malware] that infected people’s computers enabled its owner to claim $5.9 billion in Covid benefits. $5.9 billion is a lot of money but probably no one actually “felt” the loss. This is usually the case with [[Financial fraud|financial fraud]]. If someone robs our bank, we do not personally feel the effect of that. For one thing, the [[wikipedia:Federal Deposit Insurance Corporation|FDIC]] insures our accounts so the loss is only felt collectively. But another problem, obviously, is that much of what counts as a loss is only measurable in the future. In other words, the loss is the difference between what we would have received in the future (if the bad thing hadn’t happened) and what we actually received. Since we never lived in the future where the loss hadn’t happened, we don’t see it as a “loss”. Imagining the future where the loss hadn’t happened is a counterfactual scenario.&lt;br /&gt;
&lt;br /&gt;
Counterfactuals are not just the bane of collective financial losses like those above but also of [[Economic systems|economic policy]]. If the government chooses policy A over policy B, we only get to live in the future where Policy A was selected. There is no “control group” future that allows us to make a real comparison. An understanding of policy B is the counterfactual scenario and it is essential to be able to do that, especially in cases where substantial losses occur.&lt;br /&gt;
&lt;br /&gt;
The best way to run complex counterfactual scenarios is to model them with one of the available [[System modeling|simulation methodologies]]. Our [[community]] should have the ability to rate policies not just on their actual effect but also against a carefully constructed counterfactual that serves as a control group. In fact, since policy effects are frequently confounded by other variables, the &amp;lt;i&amp;gt;only way&amp;lt;/i&amp;gt; to know the effect of a policy is to compare its real effect against a counterfactual simulation where everything was the same except that the policy in question was not enacted. All of this can be approximated through simulation.&lt;br /&gt;
&lt;br /&gt;
Counterfactual economic policy is difficult but losses due to crime and corruption shouldn’t be. [https://www.forensicscolleges.com/blog/follow-the-money/unpunished-financial-crimes It is estimated] that [[Ratings and white-collar crime|white-collar crime]], almost all of which is financial in nature, costs the US economy $1 trillion per year, whereas [[wikipedia:Street crime|street crime]] costs only $15 billion per year. The $1 trillion is 4% of the entire economy. That is, if we run the counterfactual where financial crime didn’t occur, we would all be 4% richer per year. That’s a lot, especially compounded over time.&lt;br /&gt;
&lt;br /&gt;
A community performing simulations on financial crime might trade off the cost of regulation or privacy intrusiveness vs. not losing money to some potential fraudulent scheme. It wouldn’t be easy to estimate, but after a few years of experience reasonable probabilities and correlations should become evident. One of the great strengths of our communal ratings-based system is the ability to experiment and optimize.&lt;br /&gt;
&lt;br /&gt;
Another thing a community can do is make sure that counterfactuals are presented to members as the correct basis for evaluating any policy choice, or the effects of failure. If we hadn’t had the [[wikipedia:2007–2008 financial crisis|2008 financial crisis]], or had responded more effectively to Covid, what would our current status be today? This is the correct way to think about policy but we usually don’t do it. We normally just compare ourselves with our past. Simulations should be able to easily break down how each individual would have probably fared in the counterfactual scenario. Having this information would keep people more interested in policy and ensure the watchfulness over government that is frequently missing in our current political system. And, needless to say, if we could devise a system where financial losses are “felt” by real people, it might wake us up to the necessity of building integrity into economic distribution schemes in the first place.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Direct_democracy&amp;diff=2427</id>
		<title>Direct democracy</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Direct_democracy&amp;diff=2427"/>
		<updated>2024-10-14T20:56:15Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Political systems}}&lt;br /&gt;
&lt;br /&gt;
{{Main|Community}}&lt;br /&gt;
&lt;br /&gt;
{{Main|Philosophy of John Rawls}}&lt;br /&gt;
&lt;br /&gt;
Our system is uniquely positioned as an experiment in direct democracy. The debating, voting, and, by extension, [[Steps in policy-making|policy making infrastructure]] is already there in the concept we have so far. It would seem that we could easily transform these elements into a government in which “the people” make and decide on policy details.&lt;br /&gt;
&lt;br /&gt;
The usual critique of direct democracy is that normal people don’t have the time or inclination to pass laws and make regulatory policy to enforce those laws themselves. Furthermore, in places where direct democracy elements exist (like referenda), “the people” choose incompatible goals such as balanced budgets, lower taxes, and more spending. This is, presumably, because the people don’t pay any price for holding contradictory positions and can vote in the [[wikipedia:Referendum|referendum]] any way they want. These are valid concerns and the [[Ratings system|rating system]] we are contemplating may be able to mitigate some of them.&lt;br /&gt;
&lt;br /&gt;
First, just the fact of a referendum (or a state that has them) does not imply a direct democracy with all the attendant complications that such would entail. After a referendum, voters do not have to work out the details of resolving mutually incompatible goals. That is left to the rest of government which usually means representatives in a legislature, the state governor, etc.&lt;br /&gt;
&lt;br /&gt;
Second, not everyone has to participate for a direct democracy to be effective. [[Voting methods#Vote cancellation and bias|We&#039;ve noted]] that the [https://www.economist.com/media/globalexecutive/myth_of_the_rational_voter_caplan_e.pdf “miracle of aggregation” put forth by Caplan] allows us to have a [[wikipedia:Representative democracy|representative democracy]] even though most voters are too uninformed to participate meaningfully and their votes effectively cancel out. Enough people vote knowledgeably that our representative system “works”.&lt;br /&gt;
&lt;br /&gt;
Third, our [[ratings system]], as we’ve discussed, can use technology to find nuanced positions that would elude a traditional direct democracy effort. A traditional system using pre-internet technology (paper, the post office, physical conventions, etc) would be cumbersome indeed. Even one that utilizes modern methods of communication would be hampered, just as our legislative branch is, by outdated practices and the underutilization of computational potential. Our system’s algorithmic power and its ability to process complex information and reach consensus in a customized way make direct democracy far more viable. Keep in mind also that what we are proposing can evolve at a much faster rate than our traditional three branches of government.&lt;br /&gt;
&lt;br /&gt;
Finally, we should mention the primary benefit of direct democracy in light of our polarized politics today. [[Political polarization|Polarization]] is part and parcel of a bipolar political alignment, one that treats the two major parties as holding all the cards of policy. We [[Voting methods#Traditional voting|have seen]] why our system leads to two parties, but parties are, fundamentally, organized around the notion of representative democracy. Their job is to advance candidates and get them elected. It is not to advance policy and get it enacted.&lt;br /&gt;
&lt;br /&gt;
Direct democracy, by contrast, will focus people on policy rather than candidates identified with a simple left/right distinction. This alone will help those who want to enact policy find solutions that make sense rather than lose themselves in ideological debates. Of course, not everyone will want to engage in the minutiae of policy making. Those who don’t but still have an [[opinion]] will be able to express their general view, have it rated, and rate other such views. The final policy will then need to include a mechanism to reflect general views in the final technical language. Everyone will be able to vote, presumably, on the final policy that is enacted.&lt;br /&gt;
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Differential_privacy,_secure_multiparty_computation,_and_homomorphic_encryption&amp;diff=2426</id>
		<title>Differential privacy, secure multiparty computation, and homomorphic encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Differential_privacy,_secure_multiparty_computation,_and_homomorphic_encryption&amp;diff=2426"/>
		<updated>2024-10-14T20:46:31Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Privacy, identity, and fraud in the ratings system}}&lt;br /&gt;
&lt;br /&gt;
==Differential privacy==&lt;br /&gt;
&lt;br /&gt;
[[wikipedia:Differential privacy|Differential privacy (DP)]] is a technique used to anonymize datasets with personal or sensitive information. In DP we introduce some statistical noise into the answer each person gives in a way that gives that person plausible deniability while still providing a reliable aggregate statistic (eg an average).&lt;br /&gt;
&lt;br /&gt;
The simple example given to beginners is to imagine a college giving a survey to its students asking them if they’ve ever cheated on a test. The college assures the students that they don’t have to provide their names so their answers will be completely anonymous. But this is not secure because the college also asks other identifying questions, like year, major, gender, etc., which could be used to identify individuals. So the students, fearing they may be summoned to the honor court if they report having cheated, are not happy with the survey.&lt;br /&gt;
&lt;br /&gt;
The better way for the college to ensure privacy is to suggest to the students that they answer the question as follows: flip a coin and if it’s heads answer the question honestly. If tails, flip the coin again and answer honestly if it’s heads and dishonestly if it’s tails. So the cheater who flips two tails in a row answers that they never cheated and the non-cheater doing the same answers that they have cheated. So, on average, 25% of the respondents answer the question dishonestly. Now if the college tries to identify and prosecute a cheater, the student can claim plausibly that they were simply following the coin toss instructions and are part of the 25%.&lt;br /&gt;
&lt;br /&gt;
But how is the college to get an accurate [[Aggregation techniques|aggregate statistic]] when 25% of the students effectively lied? From the resulting dataset they can back-calculate the correct number using the following equation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_c = 0.75c + 0.25(1 - c)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_c&amp;lt;/math&amp;gt; is the fraction of reported cheaters in the dataset&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;c&amp;lt;/math&amp;gt; is the fraction of true cheaters&lt;br /&gt;
&lt;br /&gt;
To put this in words, the fraction of cheaters in the dataset (&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_c&amp;lt;/math&amp;gt;) is the fraction of true cheaters, those cheaters who answer honestly (&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;0.75c&amp;lt;/math&amp;gt;) plus the fraction of non-cheaters who answer dishonestly (&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;0.25(1 - c)&amp;lt;/math&amp;gt;). Now we can solve for &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;c&amp;lt;/math&amp;gt;, providing us with an estimate of the fraction of cheaters from &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_c&amp;lt;/math&amp;gt;, the fraction who answered that they cheated:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;c = 2F_c - 0.5&amp;lt;/math&amp;gt;&lt;br /&gt;
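&lt;br /&gt;
The coin-flip mechanism and the back-calculation can be simulated to check this equation. A sketch with a made-up 30% true cheater rate:&lt;br /&gt;
&lt;br /&gt;
```python
import random

def dp_answer(truth, rng):
    # First flip heads: answer honestly. Tails: flip again, answering
    # honestly on heads and dishonestly on tails (25% of answers are lies).
    if rng.random() < 0.5:
        return truth
    if rng.random() < 0.5:
        return truth
    return not truth

rng = random.Random(42)
true_rate = 0.30               # made-up true fraction of cheaters
n = 100_000
responses = [dp_answer(rng.random() < true_rate, rng) for _ in range(n)]
f_c = sum(responses) / n       # fraction of reported cheaters in the dataset
estimate = 2 * f_c - 0.5       # back-calculated fraction of true cheaters
print(round(estimate, 2))      # close to the true 0.30
```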
&lt;br /&gt;
It’s important to keep in mind that these are statistical estimates which become more accurate as we increase the size of the dataset.&lt;br /&gt;
&lt;br /&gt;
This example is explained at length [https://towardsdatascience.com/a-differential-privacy-example-for-beginners-ef3c23f69401 here].&lt;br /&gt;
&lt;br /&gt;
Now let’s do something similar with our example above where several people are asked to rate someone on, say, financial accounting on a scale of 0-10. We will randomly ask 25% of the raters to provide a random answer (0-10) instead of their real answer. Now, any rater who is identified can plausibly claim that their answer was random.&lt;br /&gt;
&lt;br /&gt;
This situation is not as clean as the one above where there were only two answers. In the cheater/no-cheater case anyone who changes their answer only has one other choice. This narrows down the possibilities considerably and makes for a very simple, and accurate, equation for back-calculation.&lt;br /&gt;
&lt;br /&gt;
In the ratings case the “dishonest” rater can randomly pick any integer between 0-10. For a statistically valid sample size, the average of the dishonest ratings will be 5. Meanwhile the average of the honest ratings can be anything in the 0-10 range. If the average of the honest ratings is higher than 5 we can surmise that the dishonest ratings are too low. This is based on the assumption that the dishonest ratings should have been about the same average as the honest ones. Similarly, if the average of the honest ratings is lower than 5 we surmise that the dishonest ratings are too high.&lt;br /&gt;
&lt;br /&gt;
With this in mind, we can derive an equation to correct a dataset with dishonest answers. Suppose the total population is &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;N = 100&amp;lt;/math&amp;gt; and 25% of them are asked to provide a random answer instead of the truth. Thus 25 people rated randomly and we can expect the average of their answers to be around 5. Let’s suppose the average of the honest raters is 8. So we assume that our dishonest average is low and skews the overall average toward the low side. Let’s call this overall raw average the “privacy average”, &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;PA&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;PA = {75(8) + 25(5)\over 100} = 7.25&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since we want an average of 8 (assuming the dishonest raters would, on average, have rated the same as the honest ones), we correct it as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;CA = {75(8) + 25(5) + 25(8-5)\over 100} = {PA + {25(8-5)\over 100}} = 8&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;CA&amp;lt;/math&amp;gt; is the corrected average&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;CA = 8&amp;lt;/math&amp;gt;, so&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;PA + {25(CA-5)\over 100} = CA&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We can now solve for &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;CA&amp;lt;/math&amp;gt; in terms of &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;PA&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;PA + {25CA\over 100} - {25(5)\over 100} = CA&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;CA(1 - {25\over 100}) = PA - {25(5)\over 100}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;CA = {PA - {25(5)\over 100}\over {1 - {25\over 100}}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that 25/100 is simply the fraction of dishonest raters which we set at 25%. We can call this &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_d&amp;lt;/math&amp;gt; to generalize it. And 5 is the average of the dishonest ratings which we can call &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;A_d&amp;lt;/math&amp;gt; to generalize. Our formula is now&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;CA = {{PA - {F_d}{A_d}}\over {1-F_d}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_d&amp;lt;/math&amp;gt; is the fraction of dishonest raters (eg 0.25 in example above).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;A_d&amp;lt;/math&amp;gt; is the average of the dishonest ratings (eg 5 in the example above).&lt;br /&gt;
&lt;br /&gt;
Thus, knowing any privacy average, the fraction of dishonest raters (which we set), and the expected average of the dishonest ratings (eg 5, the midpoint of the rating scale), we can find a corrected average.&lt;br /&gt;
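&lt;br /&gt;
This correction can be captured in a small helper function (a sketch of ours, using the worked example above):&lt;br /&gt;
&lt;br /&gt;
```python
def corrected_average(pa, f_d, a_d):
    """Recover the honest average from the raw privacy average.

    pa  -- privacy average (raw average including the random answers)
    f_d -- fraction of raters instructed to answer randomly
    a_d -- expected average of a random answer (midpoint of the scale)
    """
    return (pa - f_d * a_d) / (1 - f_d)

# The worked example above: PA = 7.25, F_d = 0.25, A_d = 5.
print(corrected_average(7.25, 0.25, 5))  # 8.0
```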
&lt;br /&gt;
Keep in mind that these corrections are only statistically accurate, not rigorously so. In a small dataset they may include considerable error. This is why we accept, as the price of privacy, that the aggregate will have some error in it. However, since aggregate numbers often don’t have to be that precise, this is an acceptable tradeoff.&lt;br /&gt;
&lt;br /&gt;
One important property of DP is that it ensures that any new rating that appears in the dataset reveals no information about the rater. Suppose, with no DP, we have the average of a dataset composed of 100 values. A new person then decides to send in their rating to make the number of values 101. A new average is calculated (again, no DP). However, by knowing the new average and the old average we can easily calculate what that new person gave as their rating. But with DP this calculation does not reveal any reliable information about the rater.&lt;br /&gt;
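&lt;br /&gt;
The back-calculation available to an observer without DP is simple arithmetic, as this sketch with made-up numbers shows:&lt;br /&gt;
&lt;br /&gt;
```python
def infer_latest_rating(old_avg, old_n, new_avg):
    """Without DP, the averages before and after expose the newcomer's rating."""
    return new_avg * (old_n + 1) - old_avg * old_n

# Hypothetical numbers: 100 ratings averaged 7.0; one more moved the average to 7.02.
print(infer_latest_rating(7.0, 100, 7.02))  # roughly 9, the newcomer's rating
```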
&lt;br /&gt;
DP is an active field of research and the examples shown here only scratch the surface. A few references to work in this area:&lt;br /&gt;
&lt;br /&gt;
https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf&lt;br /&gt;
&lt;br /&gt;
https://www.cerias.purdue.edu/news_and_events/events/security_seminar/details/index/j9cvs3as2h1qds1jrdqfdc3hu8&lt;br /&gt;
&lt;br /&gt;
https://eti.mit.edu/what-is-differential-privacy/&lt;br /&gt;
&lt;br /&gt;
https://people.seas.harvard.edu/~salil/cs208/spring19/&lt;br /&gt;
&lt;br /&gt;
Other privacy techniques (not necessarily DP):&lt;br /&gt;
&lt;br /&gt;
https://en.wikipedia.org/wiki/K-anonymity&lt;br /&gt;
&lt;br /&gt;
https://openmined.org/&lt;br /&gt;
&lt;br /&gt;
== Error in Differential Privacy ==&lt;br /&gt;
&lt;br /&gt;
Let’s return to the problem where we’ve asked some people to rate Pete, say, on his financial accounting skills. The raters don’t want to reveal their individual scores. Pete, however, will see the aggregate and that’s ok with the raters because we will use a differential privacy (DP) scheme.&lt;br /&gt;
&lt;br /&gt;
Above we produced a general equation for correcting a differential privacy method:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;CA = {{PA - {F_d}{A_d}}\over {1-F_d}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_d&amp;lt;/math&amp;gt; is the fraction of dishonest raters.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;A_d&amp;lt;/math&amp;gt; is the average of the dishonest ratings.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;PA&amp;lt;/math&amp;gt; is the privacy average, or raw average including the dishonest answers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;CA&amp;lt;/math&amp;gt; is the corrected average.&lt;br /&gt;
&lt;br /&gt;
Let’s do a concrete example of this scheme to understand it better and then analyze the error associated with it. Unlike above where we had &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;N=100&amp;lt;/math&amp;gt;, let’s try a case with fewer people, &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;N=10&amp;lt;/math&amp;gt;, since that is probably more representative of Pete’s raters. Let’s say the fraction of raters asked to provide a random number (&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_d&amp;lt;/math&amp;gt;), ie the fraction of “dishonest” raters, is 40%. So the first 4 raters are asked to rate at random, anywhere between 0-10 including the end points, while everyone else rates honestly. Let’s assume further that the honest ratings have an average of 8, so the presence of the random raters will probably skew the average low (since on average their rating will be 5).&lt;br /&gt;
&lt;br /&gt;
The following spreadsheet shows one possible result:&lt;br /&gt;
&lt;br /&gt;
[[File:e15b358629a315ddd7b0cb21830d8483_image.png|468x276px|image]]&lt;br /&gt;
&lt;br /&gt;
The error between the true average (&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;CA_{true}&amp;lt;/math&amp;gt;) and &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;CA&amp;lt;/math&amp;gt; is 1.1%. This seems quite good given the low number of raters and the fairly hefty number who answered randomly.&lt;br /&gt;
&lt;br /&gt;
But this is only one result and maybe we just got lucky. Let’s try to calculate the error by running this case 1000 times (say) and calculating the average error. The [https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/178 attached Python snippet] does this and, sure enough, reveals an error that is higher, at 3.7%. Still, this also seems fairly encouraging given the low number of samples and high fraction of random raters.&lt;br /&gt;
&lt;br /&gt;
We can do this for several combinations of &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_d&amp;lt;/math&amp;gt; (fraction of sample that answers dishonestly or randomly) and &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;N&amp;lt;/math&amp;gt; (population size):&lt;br /&gt;
&lt;br /&gt;
[[File:080a2c8343a03800f3444e57c9594531_image.png|231x90px|image]]&lt;br /&gt;
&lt;br /&gt;
Here we find that for &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;N=5&amp;lt;/math&amp;gt; and &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_d = 0.8&amp;lt;/math&amp;gt; we obtain an error of 13.1%. We might ask how this is possible since a solid majority of the raters were dishonest. To answer this let’s assume that &amp;lt;i&amp;gt;all&amp;lt;/i&amp;gt; the raters had been dishonest (&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_d = 1&amp;lt;/math&amp;gt;). This case isn’t shown here because there is no way to calculate &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;CA&amp;lt;/math&amp;gt; since the denominator in the equation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;CA = {{PA - {F_d}{A_d}}\over {1-F_d}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
is zero.&lt;br /&gt;
&lt;br /&gt;
Instead we can multiply both sides of the equation by &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;1-F_d&amp;lt;/math&amp;gt; and reach the obvious conclusion that &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;PA = A_d&amp;lt;/math&amp;gt;. So the only two useful numbers we have are the average of the dishonest ratings, which should be around 5, and the average of what they would have been had the raters rated honestly, which should come to around 8. So the highest possible error in this scheme is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;(8-5)/8 = 0.375&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Or 37.5%. No error, therefore, can be higher than this. So, in a sense, we are biasing our errors toward the low side. We could recreate the table above again by calculating the fraction of this maximum error that each error represents, and express this as a percentage:&lt;br /&gt;
&lt;br /&gt;
[[File:5f4e70fde5dbf13ea59ee49106fdd2be_image.png|245x96px|image]]&lt;br /&gt;
&lt;br /&gt;
The errors don’t look so great anymore. We can go further and realize that in any scenario involving a 0-10 rating, the maximum possible error is 50%. This is because the random ratings will always average around 5, while the true average can be as high as 10 or as low as 0; in either extreme the discrepancy of 5 amounts to 50% of the scale.&lt;br /&gt;
&lt;br /&gt;
Curiously, the error depends strongly on the average rating itself. If someone rates at either end of the 0-10 spectrum, the error is high. But if someone has an average honest rating of 5, then the error is 0 even if all raters rate randomly. A rating of 5 means that it makes no difference if the raters rated honestly or randomly.&lt;br /&gt;
&lt;br /&gt;
Despite all this, it is clear that differential privacy works even on smaller datasets with substantial fractions of random ratings. It will be important to clarify to users exactly what the errors are, and what they mean, when using this technique.&lt;br /&gt;
&lt;br /&gt;
== Differential Privacy (DP) and Normal Distributions ==&lt;br /&gt;
&lt;br /&gt;
One of the downsides of the DP scheme we discussed above was that some of the raters, most in fact, had to provide their honest ratings. Although everyone could &#039;&#039;claim&#039;&#039; deniability, if it became known somehow who the honest raters were, this advantage would disappear. Here we discuss a scheme where all the raters provide random ratings so they all have true deniability.&lt;br /&gt;
&lt;br /&gt;
The scheme works by telling the raters to provide a random number within a normal distribution centered about their true rating. We suspect that doing this will lead to some ratings higher and some lower than the true rating but, on average, will match the true rating fairly closely. Therefore there wouldn’t be any need for correcting the average result. Indeed, there would be no basis for correcting the result because the [[wikipedia:Statistical noise|statistical noise]] is always centered about the average.&lt;br /&gt;
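&lt;br /&gt;
A minimal sketch of this scheme in Python (the true ratings, seed, and standard deviation are made-up values chosen to mirror the example below):&lt;br /&gt;
&lt;br /&gt;
```python
import random

def privacy_rating(true_rating, std_dev, rng):
    """Draw a 'Privacy Rating' from a normal distribution centred on
    the true rating, clamped to the 0-100 scale."""
    return min(100.0, max(0.0, rng.gauss(true_rating, std_dev)))

# Made-up true ratings for 10 raters on a 0-100 scale (true average: 80).
true_ratings = [78, 85, 80, 74, 90, 82, 77, 81, 79, 74]

rng = random.Random(7)
noisy = [privacy_rating(r, 10, rng) for r in true_ratings]
print(round(sum(noisy) / len(noisy), 1))  # near 80, with no correction step needed
```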
&lt;br /&gt;
As an example, let’s take a single case with a population of 10 raters and a 0-100 ratings scale. Let’s suppose the nominal average rating is 80. Each rater has a True Rating but then runs a random number generator to alter that with a normal distribution (say) with a mean set to the true rating and a standard deviation of 10 (say). We will call the altered rating the Privacy Rating. The following table gives one possible result:&lt;br /&gt;
&lt;br /&gt;
[[File:e7b1ca4f32203a7327c40852151b70f8_image.png|180x212px|image]]&lt;br /&gt;
&lt;br /&gt;
This is reasonable. The averages of both the true ratings and the privacy ratings are close to 80, even for this statistically small sample. And the raters are given a healthy margin by which to alter their results. For this case the error is about 3%, which seems acceptable for statistical purposes.&lt;br /&gt;
&lt;br /&gt;
But this is only a single case and a more rigorous error analysis is necessary to prove that the method works and to understand the tradeoffs associated with it. The [https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/180 attached Python snippet] performs this analysis, in a similar way as above. It runs each set of ratings, as seen in the table above, 1000 times to ensure that we are getting a proper average of the errors. The following table provides these errors for various population sizes and standard deviations:&lt;br /&gt;
&lt;br /&gt;
[[File:ccb04cca44038daebd0507bc01b231a1_image.png|254x96px|image]]&lt;br /&gt;
&lt;br /&gt;
Again, along with the results above, we can be pleasantly surprised by this. Unless the population size becomes very small with a large standard deviation, the errors are reasonable. For large populations they become negligible.&lt;br /&gt;
&lt;br /&gt;
As a matter of visualization and understanding, we might draw a graph of what the standard deviation looks like for these cases. One standard deviation means that about 68% of the results lie within it, centered about the mean; two standard deviations cover about 95% of results, and three cover about 99.7%. So here, when we speak of a standard deviation of 10, we mean that 68% of the results lie within +-10 of the mean, so between 70 and 90.&lt;br /&gt;
&lt;br /&gt;
[[File:2f6b55aa075b717f65630766cac414fb_image.png|468x217px|image]]&lt;br /&gt;
&lt;br /&gt;
In this case, as a practical matter, StdDev = 5 is somewhat constrained in terms of its plausible deniability but above that we have quite a wide practical range.&lt;br /&gt;
&lt;br /&gt;
We note that the rating on the x-axis goes up to 150, far beyond the 100 we stipulated as the maximum for this scale. This is only for illustration. In practice, anyone whose random rating is calculated to be above 100 would just pick 100. Similarly, if the mean rating had been below 50, the larger standard deviations would produce ranges extending well below 0. In this case the rater would just pick 0.&lt;br /&gt;
&lt;br /&gt;
The Python snippet mentioned above has the ability to either cut off the scale at 0 or 100 or to let the scale float above/below these values as shown on the graph. The table above calculates the errors with the floating scale since this gives us mathematically precise error values. Cutting the scale off artificially obfuscates the error needlessly. Our purpose here is to demonstrate the efficacy of the method and this demands an accurate representation of the error.&lt;br /&gt;
&lt;br /&gt;
Attached is the Excel spreadsheet the tables and graphs came from: [[Media:b83f69166336015f48780b191042e137_Brainstorming_21.xlsx|Brainstorming_21.xlsx]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Secure Multiparty Computation (SMC) ==&lt;br /&gt;
&lt;br /&gt;
SMC is a method by which a group can perform some mathematical operation on shared data without anyone knowing the data of anyone else. The classical example of this is the Millionaires’ Problem in which Alice who has $8 million and Bob who has $5 million wish to know who is richer without revealing how much each one has. This problem was solved by Yao, among others, and this [https://en.wikipedia.org/wiki/Yao%27s_Millionaires%27_problem reference] provides an explanation for how this is done.&lt;br /&gt;
&lt;br /&gt;
Let’s apply a version of SMC more suited to our case using a very simple example. Three people, Alice, Bob, and Carol wish to know their average rating without revealing to the others what their own rating is. To make the numbers easier, let’s assume the ratings scale is 0-100. Each person’s real rating is (20, 10, 60) for an average of 30. They start by randomly selecting 3 numbers that add up to their rating:&lt;br /&gt;
&lt;br /&gt;
[[File:485534ab8975877931f5e107f9f27f89_image.png|244x79px|image]]&lt;br /&gt;
&lt;br /&gt;
Then they jumble the numbers: each of the three raters keeps one of their random numbers and gives one to each of the other two raters:&lt;br /&gt;
&lt;br /&gt;
[[File:02f5cf4a38d025be7a8a28ed02595a15_image.png|243x80px|image]]&lt;br /&gt;
&lt;br /&gt;
Notice that this exchange gives no rater any useful information for figuring out another rater’s rating.&lt;br /&gt;
&lt;br /&gt;
Now each rater has 3 random numbers and proceeds to add them up:&lt;br /&gt;
&lt;br /&gt;
[[File:ccea69a6e776469ccd8c233bc15aec98_image.png|336x79px|image]]&lt;br /&gt;
&lt;br /&gt;
If we then take the sum of these sums and calculate the average, we obtain the true average:&lt;br /&gt;
&lt;br /&gt;
[[File:7415e7e99e90ea3c3ca76bceb712e396_image.png|381x79px|image]]&lt;br /&gt;
&lt;br /&gt;
Now the group knows the average, but no one in the group knows any member’s rating other than their own.&lt;br /&gt;
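&lt;br /&gt;
This additive secret-sharing procedure can be sketched in Python (our own illustration, not a production SMC protocol; the share range is an arbitrary choice):&lt;br /&gt;
&lt;br /&gt;
```python
import random

def make_shares(rating, n_shares, rng):
    """Split a rating into n_shares random integers that sum to it."""
    shares = [rng.randint(-100, 100) for _ in range(n_shares - 1)]
    shares.append(rating - sum(shares))
    return shares

def smc_average(ratings, seed=0):
    """Each rater splits their rating into shares, keeps one share and
    hands one to each other rater; only share-sums are ever revealed."""
    rng = random.Random(seed)
    n = len(ratings)
    all_shares = [make_shares(r, n, rng) for r in ratings]
    # Rater j publishes the sum of the j-th share from every rater.
    published = [sum(all_shares[i][j] for i in range(n)) for j in range(n)]
    return sum(published) / n

print(smc_average([20, 10, 60]))  # 30.0, the true average
```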
&lt;br /&gt;
This variant of SMC does not work, of course, if there are only two raters. If two people compute the average of their two ratings, each can trivially recover the other’s rating from the average and their own. The scenario is equivalent to simply asking the other person for their rating.&lt;br /&gt;
&lt;br /&gt;
The example shown here was taken from [https://inpher.io/technology/what-is-secure-multiparty-computation/ Inpher], which provides commercial tools for both SMC and homomorphic encryption (HE) and also provides an [https://inpher.io/tfhe-library/ open source library] for HE, making it [https://github.com/tfhe/tfhe available on github].&lt;br /&gt;
&lt;br /&gt;
Other SMC references:&lt;br /&gt;
&lt;br /&gt;
https://www.sciencedirect.com/science/article/abs/pii/S0167404823000524&lt;br /&gt;
&lt;br /&gt;
https://link.springer.com/article/10.1186/s13059-022-02841-5&lt;br /&gt;
&lt;br /&gt;
https://www.researchgate.net/profile/Joseph-Oluwaseyi-2/publication/379409115_Secure_Multi-Party_Computation_MPC_Privacy-preserving_protocols_enabling_collaborative_computation_without_revealing_individual_inputs_ensuring_AI_privacy/links/66076488390c214cfd24b406/Secure-Multi-Party-Computation-MPC-Privacy-preserving-protocols-enabling-collaborative-computation-without-revealing-individual-inputs-ensuring-AI-privacy.pdf&lt;br /&gt;
&lt;br /&gt;
== Homomorphic Encryption (HE) ==&lt;br /&gt;
&lt;br /&gt;
Let’s suppose this same group of 3 wants to send their ratings to an aggregator in an encrypted form, have the aggregator perform the average over the &amp;lt;i&amp;gt;encrypted&amp;lt;/i&amp;gt; values, obtain an encrypted average, and send back the result to the group. The aggregator does nothing but perform as a calculator in this case and knows nothing about the data. Let’s say the true ratings are again (20, 10, 60) which averages out to 30. They each encrypt their rating with the following simple key: multiply the rating by 0.6258.&lt;br /&gt;
&lt;br /&gt;
[[File:40a5705408ba31e2516976ac7fbc780d_image.png|207x80px|image]]&lt;br /&gt;
&lt;br /&gt;
Applying this simplistic formula to each rating, we obtain the values (12.516, 6.258, 37.548) and send these to the aggregator, who computes the average, 18.774:&lt;br /&gt;
&lt;br /&gt;
[[File:eb13beedd9bfec54fc9ccace86022f7c_image.png|308x81px|image]]&lt;br /&gt;
&lt;br /&gt;
This value is sent back to each member of the group who decrypt it by dividing it by the key, 0.6258, to obtain the average, 30, which matches the known true average.&lt;br /&gt;
&lt;br /&gt;
[[File:ec4042a0142e452eef316b984a6c8bdf_image.png|468x95px|image]]&lt;br /&gt;
&lt;br /&gt;
The aggregator, in principle, knows nothing about the real numbers since he doesn’t know the encryption key. All he can do is perform the average on the numbers he is given, which are encrypted. The individuals in the group, meanwhile, only know the average, not the ratings given by the other members.&lt;br /&gt;
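&lt;br /&gt;
The whole toy exchange can be sketched in Python (an illustration only; real HE schemes are far more complex than a shared multiplicative key):&lt;br /&gt;
&lt;br /&gt;
```python
KEY = 0.6258  # the toy multiplicative key shared by the group (not the aggregator)

def encrypt(rating):
    return rating * KEY

def aggregate(ciphertexts):
    # The aggregator averages values it cannot decrypt.
    return sum(ciphertexts) / len(ciphertexts)

def decrypt(value):
    return value / KEY

ciphertexts = [encrypt(r) for r in (20, 10, 60)]
print(round(decrypt(aggregate(ciphertexts)), 6))  # 30.0
```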
&lt;br /&gt;
Homomorphism is the property of structure preservation in a mapping. If we map our original ratings to encrypted ones, the encrypted ratings preserve the same structure, and the same basic relationships between members, that the original ones had. This means, among other things, that if &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;a &amp;gt; b&amp;lt;/math&amp;gt; holds in the original set, it continues to hold in the homomorphically encrypted set, ie &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;a_e &amp;gt; b_e&amp;lt;/math&amp;gt;. Obviously a simplistic mapping like multiplying by a positive constant is homomorphic. But much more complex mappings are homomorphic as well.&lt;br /&gt;
&lt;br /&gt;
We might be worried here about the aggregator being dishonest and reporting back a junk number. A simple way to handle this is to use more than one aggregator and see if they agree.&lt;br /&gt;
&lt;br /&gt;
We may also be worried about a hacker who intercepts the encrypted numbers from the group to the aggregator and replaces them with fake numbers. Again, we could use more than one aggregator to lessen the likelihood that the hacker would intercept all the numbers. But let’s suppose we aren’t satisfied with this and want more certainty.&lt;br /&gt;
&lt;br /&gt;
The group could, perhaps, embed a verification sequence within the encrypted numbers. For example, they could agree to append to each number sent to the aggregator four random digits that sum to 10, eg (12.5164303, 6.2582314, 37.5486121). They could then ask the aggregator to report the last 4 digits of each number back to them. If the aggregator received a hacked number, it is highly unlikely that its last four digits would sum to 10. There is some possibility, of course, that the aggregator himself is dishonest and, knowing that the last 4 digits are important, proceeds to figure out the verification sequence. But even so, if the aggregator is to do serious harm, he would have to change the numbers substantially, a problem which, as noted above, can be solved by having more than one aggregator.&lt;br /&gt;
&lt;br /&gt;
[[wikipedia:Homomorphic encryption|Homomorphic encryption]] is obviously more complex than this naïve example suggests. Clearly every mechanism shown here is so simple that a decent cryptologist could figure it out. Indeed, real HE is so complex that one of its limiting problems is computational cost. We provide the example only to get a feel for what HE is and examine some possibilities in its application. More references on HE can be found here:&lt;br /&gt;
&lt;br /&gt;
https://link.springer.com/article/10.1007/s11227-023-05233-z&lt;br /&gt;
&lt;br /&gt;
https://link.springer.com/chapter/10.1007/978-3-030-77287-1_2&lt;br /&gt;
&lt;br /&gt;
https://link.springer.com/chapter/10.1007/978-3-319-12229-8_2&lt;br /&gt;
&lt;br /&gt;
https://dl.acm.org/doi/abs/10.1145/2046660.2046682&lt;br /&gt;
&lt;br /&gt;
https://digitalprivacy.ieee.org/publications/topics/what-is-homomorphic-encryption&lt;br /&gt;
&lt;br /&gt;
https://www.keyfactor.com/blog/what-is-homomorphic-encryption/&lt;br /&gt;
&lt;br /&gt;
https://www.internetsociety.org/resources/doc/2023/homomorphic-encryption/&lt;br /&gt;
&lt;br /&gt;
== The Problem of Collusion ==&lt;br /&gt;
&lt;br /&gt;
Regardless of the method, it is difficult to prevent collusion from exposing a single person’s rating. Suppose that, in the above scenarios, Alice and Carol wish to learn Bob’s rating. They can collude to tell each other their ratings, or collude to provide fake ratings and tell each other those. After the average is computed, Alice and Carol can easily back-calculate Bob’s rating from the average and their own two ratings. Or, using a more direct approach, perhaps they collude with the aggregator, who simply gives them all his encrypted ratings. There doesn’t appear to be any way to prevent this. Certainly neither SMC nor HE, on its own, will solve this problem.&lt;br /&gt;
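&lt;br /&gt;
The colluders’ back-calculation is trivial arithmetic, as this sketch using the ratings from the SMC example shows:&lt;br /&gt;
&lt;br /&gt;
```python
def isolate_rating(average, n, known_ratings):
    """Colluders holding n-1 of the n ratings recover the remaining one."""
    return average * n - sum(known_ratings)

# Alice (20) and Carol (60) see the group average of 30 and isolate Bob:
print(isolate_rating(30, 3, [20, 60]))  # 10
```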
&lt;br /&gt;
Differential privacy, however, offers some help because now Bob may have rated randomly, so Alice and Carol would learn nothing reliable about Bob’s real rating. But even here, if the random raters are assigned and Alice was the assigned random rater (eg &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_d = 0.333&amp;lt;/math&amp;gt;), then she knows the other raters, particularly Bob, are likely to be honest.&lt;br /&gt;
&lt;br /&gt;
This problem could be solved by randomly assigning the random rater. In the scenario above, if all members of the group “roll a 3-sided die” to see if they are to rate randomly, Alice and Carol wouldn’t know whether Bob’s answer was random or not. In such a small sample there’s a good chance this wouldn’t work, but for a large, statistically valid group it would. This is essentially the same as the cheating survey discussed above, where students flip two coins to decide if they answer honestly. Of course, in a large group it becomes ever more unlikely that collusion is taking place anyway.&lt;br /&gt;
&lt;br /&gt;
It seems the best we can do is to mitigate the collusion problem but not eliminate it altogether. After all, if everyone except one person is willing to share all information, it seems difficult to prevent them from isolating that one person’s information. The only remedy is for that person to simply not provide the correct answer.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Democratizing_resource_allocation&amp;diff=2425</id>
		<title>Democratizing resource allocation</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Democratizing_resource_allocation&amp;diff=2425"/>
		<updated>2024-10-11T18:32:26Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Societal optimization}}&lt;br /&gt;
&lt;br /&gt;
{{Main|Economic systems}}&lt;br /&gt;
&lt;br /&gt;
One of the primary benefits of a system like ours will be in improved resource allocation. Current resource allocation systems (eg [[wikipedia:Mixed economy|regulated market capitalism]]) waste a tremendous amount of wealth on unnecessary labor and goods. Furthermore, we distribute wealth in counterproductive and undemocratic ways. We underpay manual workers, overpay managers, maintain positions in which no actual work is accomplished, create make-work projects, spend on [[Luxury|luxury goods]] (an example of the [[Multi-criteria decision-making methods|flat part of the pareto curve]]), and underfund investments in [[wikipedia:Infrastructure|infrastructure]], [[Education in a ratings-based society|education]], and [[Science in a ratings-based society|research]]. Value in [[wikipedia:Capitalism|capitalism]] is whatever someone is willing to pay. Value in [[wikipedia:Socialism|socialism]] is decided by a government. Although most [[Economic systems|economic systems]] are an amalgam of both, probably for the better, the result is not an optimal allocation of resources but rather one decided by interested parties, usually the powerful.&lt;br /&gt;
&lt;br /&gt;
We can do better by choosing an economic system more objectively focused on optimized results. This de-emphasizes the role of self-dealing, including ideology, and re-emphasizes the role of practical outcomes. One ideological conflict, for instance, is how best to provide [[Health care in a ratings-based society|health care]]. Ideological debates will focus on whether it is a function of government to provide such services or whether it should be left to the individual (and [[wikipedia:Market economy|market]]). But if we agree that our societal goal is to maximize health services at the lowest cost, we already have at hand the [[Optimization problem|optimization problem]] that should govern the question. Some won’t like the answer because it may require government intervention but if this is the case, the optimization problem was not stated correctly, at least for them. Others won’t like the answer because it would be contrary to their own interests but, again, if this is the case they signed up dishonestly for the optimization problem. Their optimization problem was to produce a health care system that minimizes &amp;lt;i&amp;gt;their&amp;lt;/i&amp;gt; cost. At any rate, once we have an optimization problem everyone agrees on, it is a matter of constructing a [[Steps in policy-making|policy]] likely to achieve this outcome.&lt;br /&gt;
&lt;br /&gt;
How might this work? Ideas and perhaps full policy proposals will be [[Steps in policy-making|put forward by interested participants]]. But remember, since we are dealing with a clear economic problem, everything we do from this point forward is really [[Societal optimization|optimization of an objective function]], or at least the generation of the [[Societal optimization|Pareto optimal curve]]. Many ideas will come from participants. These ideas will have to be assembled into full policy proposals, perhaps by a few people who have been designated for the task. A “comment and review” period will ensue where the public provides feedback on the proposals and they are modified accordingly. When the proposals are finished they will be rated for cost and benefit. It is presumed that enough ideas will be generated that a Pareto optimal front can be established. This will depend on how well the [[Design space|design space]] has been explored. Regardless, a Pareto curve will be established and the proposals on it will then be voted on by the public, perhaps through a [[Cardinal voting systems|cardinal voting]] procedure. Once the winning policy is selected a further optimization round might be contemplated by, for instance, combining with the winning policy elements of the next best policy. The rating of the optimized policy can then be ascertained and, if truly better, adopted.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Defining_utopia&amp;diff=2424</id>
		<title>Defining utopia</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Defining_utopia&amp;diff=2424"/>
		<updated>2024-10-11T17:51:04Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Political systems}}&lt;br /&gt;
&lt;br /&gt;
{{Main|Ratings system}}&lt;br /&gt;
&lt;br /&gt;
The [[ratings system]] is an attempt at transforming society for the better. We might say, somewhat tongue in cheek, that we are designing [[wikipedia:Utopia|utopia]]. But what is our definition of utopia? Among many ideas, here are some:&lt;br /&gt;
&lt;br /&gt;
Utopia permits [[Balance between individual liberties and the community|personal freedom]] of choice, expression, and action. The only constraints on these involve interactions and effects on others.&lt;br /&gt;
&lt;br /&gt;
Utopia, mostly through personal choice and [[Consensus|consensus]], rather than government diktat, [[Optimal income distribution|distributes its wealth equitably]]. It minimizes the distinction between haves and have-nots and maintains inequality only to the extent that it motivates people to perform at their best. Utopia places a high value on providing “essential” goods and services to everyone: food, shelter, education, medical care, etc.&lt;br /&gt;
&lt;br /&gt;
Utopia dissolves sub-optimal systems naturally through a [[Ratings-based consensus|ratings-based consensus]] process. The private health insurance and medical care system of the US would be replaced. The currently legal but corrupt system of [[Influence of wealth in democracy|money in politics]] would be eliminated. The [[Justice and defense in communities#Thoughts on justice|justice system]], one yielding a regular stream of arbitrary, ideological, and monetarily influenced results, would be done away with.&lt;br /&gt;
&lt;br /&gt;
Utopia dissolves the personal aggrandizement motivation in our current society and minimizes [[wikipedia:Rent-seeking|rent seeking]] economic behavior.&lt;br /&gt;
&lt;br /&gt;
Utopia strives toward a [[Post-scarcity defined|post-scarcity society]] and concentrates its attention on future betterment. It is an investment-focused society.&lt;br /&gt;
&lt;br /&gt;
Utopia strives to perform good work to maintain and advance itself. It is not an idle society.&lt;br /&gt;
&lt;br /&gt;
Utopia strives for [[Philosophy of John Rawls#The Equality and Difference Principle|social equality]] and rates its people by their contributions, accomplishments, and character. It does not judge on the basis of race, origin, gender, or sexual orientation. It minimizes [[wikipedia:Ideology|ideology]] as a basis for judgement since it is open to new ideas and hews to the idea of [[Societal optimization|optimization]]. It seeks generally to take ideas from all ideologies to produce an optimum result.&lt;br /&gt;
&lt;br /&gt;
Utopia values [[Objective truth|objective truth]] and advances it. It has a cultural intolerance for lies and falsehood. It carefully weighs controversial ideas before dismissing them, however.&lt;br /&gt;
&lt;br /&gt;
Utopia is peaceful and strives for peaceful solutions to conflict. It understands that it shares the world with other communities that are ideologically diverse. It avoids conquest, military adventurism, and covert campaigns of intelligence or propaganda directed at other communities.&lt;br /&gt;
&lt;br /&gt;
Needless to say, many societies will agree and even have these ideas written into their foundational documents. The difference between our Utopia and these conventional visions is the emphasis on [[Community|communities of choice]], the centrality of personal choice, a formal and transparent system of ratings, a focus on rigorous optimization as a standard for success, and a wholly idea-based culture that seeks objective truth.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Current_ratings_systems&amp;diff=2420</id>
		<title>Current ratings systems</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Current_ratings_systems&amp;diff=2420"/>
		<updated>2024-10-10T19:57:17Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Ratings system}}&lt;br /&gt;
&lt;br /&gt;
Our current society is filled with various types of rating systems. Some are ratings by experts or users of products and services. [https://www.consumerreports.org/homepage/ Consumer Reports] is a well-known example of this. Online retailers like [https://www.amazon.com/ Amazon] and [https://www.walmart.com/ Walmart] have user ratings to accompany almost every product they sell. Online forums where individuals post advice, e.g. [https://stackoverflow.com/ Stack Overflow], also rate posts so that the highest-rated ones appear at the top. &lt;br /&gt;
&lt;br /&gt;
Current methods for rating people and information are also instructive. A [[wikipedia:Intelligence source and information reliability|method used by intelligence services]], for instance, assigns one score to a source based on its reliability and another to the credibility of the information itself, the latter largely derived by cross-checking against other sources. Our system can improve on this by including more ratings criteria and allowing users to select them. But the basic idea, that of separately rating authors and information, will be preserved.&lt;br /&gt;
  &lt;br /&gt;
A number of ratings systems rate the press, such as for [https://adfontesmedia.com/interactive-media-bias-chart/ political bias and reliability]. We might note that [[The media, overcoming bias, and making it fun|political bias is part of attracting interested readers]] (or viewers) and is often at odds with complete objectivity. &lt;br /&gt;
&lt;br /&gt;
Ratings for important larger services also exist, such as for [https://www.hospitalsafetygrade.org/ hospitals] or [https://www.princetonreview.com/college-rankings/best-colleges universities]. Usually these are provided by experts and their [[&amp;quot;Open source&amp;quot; decision making|methods are somewhat obscure]], a weakness which an open ratings system will fix. Nevertheless, there are technology-enhanced variants on this idea which we might draw some inspiration from. Other expert-driven ratings services exist for entire countries, such as [https://freedomhouse.org/ Freedom House], which we will be looking at more below. Freedom House is instructive not only as a ratings system but also for what it reveals about various countries. &lt;br /&gt;
&lt;br /&gt;
We will also be looking at the [[Current ratings systems#The Chinese Social Credit System (SCS)|Chinese social credit system]], an attempt to improve individual behavior and societal harmony through a centrally administered ratings system. This system might be compared to what we are designing but, in fact, is fundamentally different. &lt;br /&gt;
&lt;br /&gt;
==Freedom House==&lt;br /&gt;
&lt;br /&gt;
As we&#039;ve noted [[Philosophy of John Rawls#Libertarianism|elsewhere]], we have seen the not-great performance of the US on the [https://freedomhouse.org/explore-the-map?type=fiw&amp;amp;year=2024 Freedom House] scores. One question was how the US did over time. The following shows a sampling of “free” countries and their performance since 2013:&lt;br /&gt;
&lt;br /&gt;
[[File:d936faac53a60c88483fb90396397da2_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
As we suspected, the US has declined significantly since the advent of political polarization and extremism. It looks like Poland has had a similar trajectory with the rise of its own form of the same. Meanwhile Latvia, after several years of improvement, declined modestly from 2021 to 2022 and has been steady since then. For reference, we include the top country, Finland, which has achieved the highest possible score throughout this period. No other country has quite matched Finland, but the other Nordic countries are very close, as are Canada, the Netherlands, New Zealand, Australia, and Japan.&lt;br /&gt;
&lt;br /&gt;
Here’s a chart showing, in addition, a few countries that are not so free:&lt;br /&gt;
&lt;br /&gt;
[[File:ad688b85e665b225d34671b3db492649_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
We might conclude that North Korea is the least free country, but there are a few lesser-known countries (Turkmenistan, South Sudan) that score even lower. At this point, however, the differences are small: in 2024, North Korea received a 3, Turkmenistan a 2, and South Sudan a 1.&lt;br /&gt;
&lt;br /&gt;
We should note here that freedom is a function of both government and private repression. In North Korea the government is very strong and repressive but private repression is low (there is little to no crime in North Korea). In South Sudan, however, the government is weak but private actors (criminals, warlords, private armies, terrorists) conspire to create a repressive and unsafe environment. This situation creates a vicious cycle: the lawlessness itself reduces freedom, and the government’s response to it, often ineffective, further limits the freedom of ordinary citizens.&lt;br /&gt;
&lt;br /&gt;
The other two countries, Iraq and Afghanistan, serve as a reminder to Americans of how badly those two wars went. If their purpose was to create stable democracies in the Middle East, they were utter failures. Afghanistan ended with a victory for the Taliban, at which point a sharp decline from bad to worse ensued. Meanwhile Iraq has hovered in democratic mediocrity for years. Perhaps the only positive point here is that Iraq was even less free under Saddam Hussein. This is not shown here, but a different dataset, going further back, confirms it: [[Media:e7e6a17f39392afdadc7ef92b3650e1a_Country_and_Territory_Ratings_and_Statuses_FIW_1973-2024.xlsx|Country_and_Territory_Ratings_and_Statuses_FIW_1973-2024.xlsx]]. We might deem this a “worse to bad” progression.&lt;br /&gt;
&lt;br /&gt;
We might also notice the highest rating, Finland’s, and ask whether the methodology grades on a curve. Freedom House claims to be objective and uses a wide variety of categories to arrive at the aggregate score. Nevertheless, it is fairly clear that an implicit curve is built in. Not to take anything away from Finland, which is by all accounts a stellar example of democracy, but it is not a perfect Rawlsian society. The government, while clean by world standards, is not entirely free of corruption. There is some crime, as well as significant racial/ethnic bias in hiring, even though the country scores comparatively well on both. Taxation is quite high although, given the excellent level of social services, Finns are arguably getting what they pay for and choosing to pay. Nevertheless, taxation is always a form of government imposition. So, a 100? The country isn’t perfect, so probably not. It might be hard to do better than Finland, but that just means Freedom House is, obviously, grading on a curve.&lt;br /&gt;
&lt;br /&gt;
Our [[ratings system]] should preclude curve-style grading because it leads to a sense of complacency and obscures problems that could be worked on. This is true for both societies and individuals. In the Finnish case we might want to accurately quantify any crime, corruption, lack of transparency, bias, etc., low though they may be, and use them as the basis for a truly objective aggregate.&lt;br /&gt;
&lt;br /&gt;
We might notice, quite obviously, that Freedom House is a ratings system for countries. We’ve discussed how different communities will no doubt choose to interact with each other based on a similar system, that is, their assessment of how well their peer communities hold up values of importance to them. Our ratings system is one not just for individuals but for the entirety of a [[community]]. A community, in this instance, may rate itself and be rated by other communities.&lt;br /&gt;
&lt;br /&gt;
Will any of this motivate communities to improve? Does the fact that the US freedom rating has declined steadily for several years motivate us to improve? Not many Americans know about Freedom House but we have a general sense that democracy is backsliding in the US. This may not matter much to Americans but let’s keep in mind that the rating system in our new society will be front and center. We are proposing a society in which ratings are a continuous presence in everyone’s life and the presumption is that we will take them seriously. After all, they will determine our standing in society and the fruits of economic distribution. It would seem that a society tuned in to ratings on an individual basis would also be concerned about how their community fares vis-à-vis other communities.&lt;br /&gt;
&lt;br /&gt;
This is not just wishful thinking on my part. The US is a bad example of the effect of ratings since many Americans are simply indifferent to how the world views us. But if we travel to smaller nations, ones more dependent on trade and productive relationships with the rest of the world, we find their citizens far more aware of how their society is viewed by others. Our communities will also presumably be small, smaller than nations (we think), and should have a healthy regard for how they are perceived by other communities.&lt;br /&gt;
&lt;br /&gt;
Still, a plethora of ratings information can amount to information overload, and many people will probably not be interested in how their community ranks on democracy. They will be even less interested in rating other communities. But organizations like Freedom House can exist both to provide ratings and to distill them into politically actionable recommendations. Individuals can delegate their ratings power (weight) to organizations such as these, which will provide this service for them. It is easy to imagine how a large number of such organizations could exist for various issues.&lt;br /&gt;
&lt;br /&gt;
In this sense, our [[direct democracy]] would evolve into a more representative one. This is probably inevitable, but it would be quite a bit different from what we have today. The delegation of ratings power to organizations would be done by individuals when and if they want to, and could be taken back at any time. Also note that such a delegation would be made not to a single representative (or political party) but to any person or organization who, presumably, demonstrates expertise in a certain area. This would lead to “representation” by knowledgeable people rather than politicians who have little expertise in complex policy issues.&lt;br /&gt;
&lt;br /&gt;
We have done this to some extent with Freedom House itself. We might have some notional understanding of how free different countries are around the world, but we lack detailed knowledge on every country. And we generally lack the specifics that are important to numerical ratings. Just how much does Viktor Orban’s Fidesz party control the media in Hungary? How much is personal freedom impacted by criminal organizations in Mexico? How much has our judiciary degraded fundamental rights in the US? It is hard to answer these questions precisely except by doing a comparative analysis and getting some training in assigning numerical scores to issues that have a heavy subjective bent. Not to mention having the time to go through a long list of countries and the multiplicity of factors that influence freedom. Organizations like Freedom House have the staff and support structure to carry out these types of analyses, whereas individuals usually do not.&lt;br /&gt;
&lt;br /&gt;
Why do we think Freedom House, in particular, is trustworthy? Well, it appears to be non-partisan and draws its leadership from a diverse group of ex-government, business, and media officials. It posts the bios of its analysts on its website, and they seem to be qualified. It has a long history (founded in 1941) and seems to be an established, well-respected organization. It has been used widely in the media and by the US government for information about democracy in other countries. It has, however, been criticized, [https://www.washingtonpost.com/news/monkey-cage/wp/2017/11/07/why-do-we-trust-certain-democracy-ratings-new-research-explains-hidden-biases/ in this Washington Post article], for being US-centric, and its ratings tend to be in line with US foreign policy.&lt;br /&gt;
&lt;br /&gt;
Notably, the article also mentions that in its early years, its ratings were the product of one person and his wife, who acted as his assistant. This person admits that some “guesswork” went into them. Nevertheless, Freedom House has improved its methodology over the years and is a respected organization. Its ratings, incidentally, can be compared to those of similar organizations, such as the [https://infographics.economist.com/2017/DemocracyIndex/ Economist Intelligence Unit Democracy Index].&lt;br /&gt;
&lt;br /&gt;
In general, these ratings are roughly in line with Freedom House’s. In any case, any system of ratings that we “delegate” to will itself be subject to the public’s ratings. If Freedom House is considered biased, as that Washington Post article alluded to, we would presumably have sources to tell us that (i.e. the media). A virtuous circle of ratings presumably keeps everyone honest.&lt;br /&gt;
&lt;br /&gt;
In any event, our communities will no doubt have organizations that act as specialized ratings services. Members will delegate their rating power to them, if they choose to, creating de facto representative bodies. We might look upon this “erosion” of [[Direct democracy|direct democracy]] with dismay, but we should note that it is probably natural and inevitable. According to Chandler, historians estimate that of 30,000 Athenian citizens, only about 1,000 were actively engaged in politics and only about 20 initiated most policy. In other words, a political class of informal representatives arose naturally.&lt;br /&gt;
&lt;br /&gt;
==The Chinese Social Credit System (SCS)==&lt;br /&gt;
&lt;br /&gt;
This brings to mind the notion of government-controlled ratings systems, such as the social credit system (SCS) in China (see https://velocityglobal.com/resources/blog/chinese-social-credit-system/ and https://joinhorizons.com/china-social-credit-system-explained/). The SCS has its roots in Chinese history and culture dating back to Confucianism, a philosophy that emphasized the relationship between individual character and the functioning of society. But China’s current system was inspired, in the 1990s, by the financial credit rating system in the US, and for many years its purpose was essentially just that – financial. China wanted to develop Western-style financing and needed a way to establish [[trust]] in a country where fraud and cheating were common. It extended the system to businesses with the purpose of inducing greater trust in Chinese companies that Western firms were considering collaborating with. Today, although its emphasis is still financial, it has been extended and has grown into a complex system of behavior modification.&lt;br /&gt;
&lt;br /&gt;
It measures such activities as paying taxes on time, reporting financial data accurately, staying out of legal trouble, donating to charity, taking care of family members, helping with community activities, etc. Individuals who perform well get put on “redlists”, which give them easier access to credit and other benefits. Those who don’t get put on “blacklists”, which limit their ability to find jobs, places to live, travel, etc. The system is applied to individuals, businesses, and government officials (ostensibly to reduce corruption). For businesses, complying with government regulations is an important criterion in the aggregate score they receive. The system has grown more centralized, although the data it collects comes from many sources throughout the country.&lt;br /&gt;
&lt;br /&gt;
Needless to say, this system isn’t popular in the West because it is associated with big-brother style surveillance. Nevertheless, we might want to stop and think about it for a moment. We might ask why a country would expend large amounts of resources to develop such a system. It must have some positive benefit, right? In addition to compelling good behavior of its citizens, the presumed “benefit” is ensuring political harmony, that is harmony with the CCP (Chinese Communist Party). Westerners may turn away from motivations like that but isn’t political harmony generally a good thing? Isn’t that what we lack right now and need more of in the US? Didn’t we have that in years past when we were defined by two political parties (not one as in China) that were much closer together, one center-left and the other center-right? Didn’t our media and social environment conspire to create a system of relatively harmonized politics? We may not have had Chinese style surveillance but we did have a [https://en.wikipedia.org/wiki/Manufacturing_Consent propaganda mechanism] that defined, quite sharply, acceptable political boundaries and behavior. And, in terms of compelling good behavior, many Western analysts have noted the parallel between Chinese surveillance and the rising surveillance in our own societies brought on by technology.&lt;br /&gt;
&lt;br /&gt;
Along these lines we may want to compare and contrast our own ratings system with the Chinese SCS. We should start by first noting that many of our aims are the same: we want better societal behavior in general and, more specifically, an avoidance of destructive ideological/political polarization. We envision, despite political differences, a relatively harmonious society that hews to facts and the truth. This is perhaps the first important contrast: The SCS is designed to further the political agenda of the CCP and ours is to find an optimized political agenda based on objective reasoning. There is only one party in China (and only two in the US) but there can be several in our system. Indeed we envision many diverse political communities and diversity of thought within each individual community.&lt;br /&gt;
&lt;br /&gt;
Another big difference is privacy. The SCS appears to lack privacy controls to the extent that it is probably a detriment to basic liberties. The SCS vacuums up everyone’s data and sends it to government agencies which work out aggregate scores. There doesn’t seem to be much anyone can do about this. Mistakes are difficult to correct and punishment can be quite arbitrary. There are stories of Chinese citizens suddenly finding themselves on blacklists in the middle of a trip and then [https://www.wired.com/story/china-social-credit-system-explained/ having no way to get home] (one of the punishments is to lose access to rail and air tickets).&lt;br /&gt;
&lt;br /&gt;
The other glaring difference is the top-down nature of the SCS vs the bottom-up nature of our ratings system. One of the primary reasons for our system is to enable democratic participation. Its central function is to allow people to regain power from established institutions and impersonal forces, such as the “market”, which conspire to advance the agenda of the few to the detriment of the many. The SCS has no such purpose.&lt;br /&gt;
&lt;br /&gt;
Let’s pause for a moment and highlight some of the important differences between the SCS and our ratings system:&lt;br /&gt;
&lt;br /&gt;
* Unlike the SCS, our system is ultimately controlled by members of the community, not a faceless central government. Even though organizations may perform ratings, they will be subject to oversight by other organizations and by the people.&lt;br /&gt;
* In particular, the privacy of the system will be controlled by individuals and the community at large (more on this later).&lt;br /&gt;
* Sanctions due to low ratings, or rewards due to high ratings, will be subject to community approval and closely supervised for fairness.&lt;br /&gt;
* An unfair rating due to, for instance, a personal vendetta can be challenged in “court”. The rating will be removed if the aggrieved party is found to be right. If the rating is the product of malice or some other self-interested reason, the rater can be sanctioned.&lt;br /&gt;
* The ratings methodology and algorithms are subject to continuous community review and modification.&lt;br /&gt;
* The ratings system itself is a voting system which inherently allows community participation in policy-making activities.&lt;br /&gt;
&lt;br /&gt;
One interesting aspect of the SCS is the blacklist/redlist system. This might seem dystopian, as many aspects of the SCS are, but there may be a way to apply it constructively and in a way consistent with basic liberties. Our ratings system already has the notion that people will receive a higher income (or more goods/services) as a function of their rating, so this is similar to the Chinese redlist. We would emphasize that such a system should allow only modest income graduations, consistent with Rawls’ difference principle, so as to avoid the development of privileged classes. The blacklist is more problematic, but we might set some lower ratings limit and blacklist people who fall under it by removing, for instance, their right to privacy. The limit would be very low, for example conviction of a serious criminal offense. In a sense we already do that with our ability to publicly check anyone’s court records, as noted above.&lt;br /&gt;
&lt;br /&gt;
This brings to mind the notion of flexible privacy rights. We might envision, for one, a privacy difference between organizations and individuals. Organizations would not have any right to privacy given their greater capacity for malfeasance and the fact that no individual basic liberty seems to be at stake in holding them to complete transparency. Obviously for individuals, this is different, but we might also consider the idea that increased trustworthiness allows for increased privacy rights. Instead of the blacklist mentioned above, we could have a system where folks give up privacy rights along some sliding scale as their trustworthiness score decreases. Needless to say, any move to take away fundamental rights according to some algorithm brings to mind dystopian visions, so this would need to have a sizeable benefit compared to its obvious risk.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Culture_and_privacy_in_a_ratings-based_society&amp;diff=2419</id>
		<title>Culture and privacy in a ratings-based society</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Culture_and_privacy_in_a_ratings-based_society&amp;diff=2419"/>
		<updated>2024-10-10T19:45:11Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Privacy, identity, and fraud in the ratings system}}&lt;br /&gt;
&lt;br /&gt;
Judging from our own society, we can infer that most members of a ratings-based [[community]] will not understand the [[Privacy mechanisms in the ratings system|encryption scheme]] well enough to assure themselves that it is, in fact, private. But they may take the word of [[wikipedia:Cryptography|cryptologists]] who do it for them. This is another reason why organizations will be important. In this case, crypto-organizations will be a [[Organizations as raters|trusted party]] for advanced cryptographic methods.&lt;br /&gt;
&lt;br /&gt;
How much [[Privacy, identity, and fraud in the ratings system|privacy]], and [[trust]] in privacy, will be necessary to accomplish the goals of our society? Again we can offer a variety of algorithms, but it would be good to know what types of information go with what level of privacy. We’d expect data on personal health, for instance, to be subject to rigorous methods such as [[wikipedia:Homomorphic encryption|homomorphic encryption]] (HE). Political opinions, however, may not require such rigor. In fact, we may not want them to, since presumably everyone benefits from an open marketplace of ideas and [[debate]]. [[System modeling|Simulation]] and experimentation should lead to the optimum settings for privacy on a society-wide scale for a variety of [[applications]]. Of course, not everyone will adopt these recommendations, but having them will serve as an important educational tool.&lt;br /&gt;
&lt;br /&gt;
It is hard to know in advance what a culture will produce as its standard of openness/privacy. We have assumed that the society we are aiming for will be more open and less private than our current society. After all, we will be sharing much more information with our [[Peer|peer]]s, whether aggregated privately or not. And we are depending on our peers to provide us with constant feedback on how we’re doing. The idea is that when we go astray our peer ratings will help guide us back. This process would appear to suggest a level of comfort with honest exchange quite a bit greater than we have today.&lt;br /&gt;
&lt;br /&gt;
Simulation and optimization notwithstanding, we might ask right now whether greater or less privacy leads to a better society. On the one hand, if our views are open to all, they can be checked much more quickly by others. We could prevent, hopefully, [[The subjective and community ratings system#Preventing Misinformation Bubbles in the CRS and SRS|bubbles of misinformation]] from developing by ensuring that people who disseminate them are rated accordingly. Open information is, furthermore, a foundation for collaboration on all manner of projects. On the other hand, we consider privacy a necessary part of thought. We want to be able to think things through privately, have opinions that may not fit with the mainstream, try them out, and discard them ourselves before being subject to the [[Community|community]]’s disapproval. Thinking freely often means thinking privately. Our system will need clearly delineated privacy settings and reasonable defaults depending on the situation. All privacy settings will, of course, be ultimately modifiable by the user.&lt;br /&gt;
&lt;br /&gt;
We go further and speculate that a ratings-based society will have less need for privacy because trust, overall, will be greater. Thus a virtuous cycle would develop where greater trust allows people to feel they can more freely share information (less privacy), leading to even greater trust. We might hope for this outcome but there is probably some limit. At some point, we don’t want to know everything about everyone and folks don’t want to tell us everything. We suspect there is always a privacy barrier that is better not to cross. Our interest, in any event, is mostly public information and behavior. Optimizing the privacy settings in that domain should be enough.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Contract_as_a_method_to_mitigate_the_basic_liberties_imposition&amp;diff=2402</id>
		<title>Contract as a method to mitigate the basic liberties imposition</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Contract_as_a_method_to_mitigate_the_basic_liberties_imposition&amp;diff=2402"/>
		<updated>2024-10-09T20:20:05Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Political systems}}&lt;br /&gt;
&lt;br /&gt;
{{Main|Philosophy of John Rawls}}&lt;br /&gt;
&lt;br /&gt;
{{Main|Community}}&lt;br /&gt;
&lt;br /&gt;
There is a natural conflict between [[community]] needs and individual [[Philosophy of John Rawls#The Basic Liberties Principle|basic liberties]]. Sometimes, under exigent circumstances, the [[Community|community]] will have to impose on its members to the point of taking away their basic liberties. In extreme cases this might include their life. &lt;br /&gt;
&lt;br /&gt;
One way to ameliorate this problem is to have a [[Contract as a method to mitigate the basic liberties imposition|contract]] upon admission to the community in which the prospective member agrees in advance to these impositions. If the member was “born into” the community, they would do so at the [[Impact of user age on the ratings system|age of majority]], e.g., 18. In the US, and most other countries, natural-born citizens are generally not asked to sign any such contract. [[Citizenship|Naturalized citizens]] are asked if they would be willing to fight for the US, but it is not clear what happens if they say no. The interviewing officer has quite a bit of latitude to ask the question in the mildest possible terms (“If aliens were invading and threatening your children, would you defend them?”). In any event, the exchange is not contractual. A [[Community|ratings-based society]], however, can create a proper contract upon membership and go over a comprehensive list of scenarios where the member might be asked to surrender their liberties.&lt;br /&gt;
&lt;br /&gt;
This certainly helps and should be part of any member-community arrangement. But it does not completely resolve the moral problem of community imposition on individual liberties. One issue is that, according to [[Philosophy of John Rawls|Rawls]], basic liberties are inalienable – in other words you can’t make a deal with someone in which you voluntarily forfeit your liberties. In a battle between an admittedly free contractual agreement and basic liberties, the liberties will prevail. This principle seems correct, at least when looked at broadly. If we think of basic liberties as the central foundation of our society, then it seems we shouldn’t allow individuals to chip away at them in deals that, while perhaps beneficial to them in the moment, end up corroding the benefit of liberty throughout all of society. We should also be suspicious of deals that may be the result of [[Coercion|coercion]] (or coercive circumstances) or uninformed decision-making.&lt;br /&gt;
&lt;br /&gt;
Of course, in some sense this principle is itself an imposition on individual liberty. But according to Rawls, liberty can be restricted under the condition that doing so protects or enhances the overall system of liberties. In this case the freedom to contract is subsumed by the very same basic liberties the contract would deny. Furthermore, the freedom to contract is seen (generally) as less fundamental than the right, for example, to [[Freedom of speech|free speech]], political liberty, etc.&lt;br /&gt;
&lt;br /&gt;
This brings up the conditions under which liberty can, in fact, be restricted. We’ve mentioned the first one, that doing so must be in service to overall liberty. Another is that the restrictions be acceptable to those likely to make the sacrifice. This is an obvious case where contracts become important. We clearly need to ask and ensure that the people being affected (potentially the entire community) are ok with an imposition on their liberties. Alongside this condition is the proviso that the restriction be the mildest it can be to accomplish whatever community goal we seek the restriction for.&lt;br /&gt;
&lt;br /&gt;
So contracts are necessary but still seem insufficient by themselves. Even after gaining contractual agreement, the community still has to tread very carefully in imposing on the basic liberties of its members. That care includes using the least restrictive means to accomplish its goals, performing the imposition fairly, taking care of members who are being imposed upon, and carefully optimizing whatever [[Steps in policy-making|policy]] is being pursued to gain maximum benefit for minimal cost. Contracts are a necessary part of this arrangement, but the community has a duty of proper performance in many other related areas as well.&lt;br /&gt;
&lt;br /&gt;
Intuitively, this seems right. Even if Bob has agreed in principle to serve in his community’s [[Justice and defense in communities#Thoughts on defense|army]] in time of war, he will still feel imposed upon when he is drafted. It is better that he agreed, of course, but to more fully ameliorate the imposition he should see his community trying hard to lessen his sacrifice and optimally planning the policy he was asked to sacrifice for. Indeed, in his future with the community, the sacrifice should be remembered and compensated. Societies that do a reasonable job of this (the US in WWII) versus those that do not (the US in Vietnam) see a large difference in how their veterans view the imposition.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Consensual_reality&amp;diff=2401</id>
		<title>Consensual reality</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Consensual_reality&amp;diff=2401"/>
		<updated>2024-10-09T19:40:10Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Community}}&lt;br /&gt;
&lt;br /&gt;
Consensual reality is a perspective shared by a group of people. That perspective can be an agreement about facts (the population of the world is about 8 billion), morality (stealing is wrong), religion (there is an afterlife), justice (murderers deserve the death penalty), art (Picasso is disturbing), etc. It is, in particular, a shared view of people, such as public figures. Is the President too old? Is his opponent a bad apple or a savior? In our case, and more specifically, users assign [[trust]] values to those who provide information. It is easy to see how [[Community|communities]] that regard certain individuals highly (high in [[Trust|trust]]) could form. These high-trust individuals would then have a ready audience for future material in much the same way that influential newspaper columnists have their followers. Obviously, consensual reality will also define those whose trust is low, so we can filter them out. &lt;br /&gt;
&lt;br /&gt;
We believe this system will be a force for good even though any consensual reality will be possible. It will allow the formation of &amp;quot;bad&amp;quot; [[The subjective and community ratings system#Preventing Misinformation Bubbles in the CRS and SRS|opinion bubbles]] in the same way they exist on other platforms. However, it will also have methods to escape from them. The [[Technical overview of the ratings system|algorithmic flexibility]] alone is one way to do that. Just by adjusting certain parameters the user will be able to &amp;quot;tune in&amp;quot; to a different reality. Our system will have some preset algorithms, tuned a few different ways, to serve as guidance but users will ultimately be able to adjust things for themselves.&lt;br /&gt;
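To make the &amp;quot;tuning&amp;quot; idea concrete, here is a minimal sketch, in Python, of how adjustable parameters might let a user shift between consensual realities. The function name, parameters, and 0-to-1 trust scale are illustrative assumptions, not part of the actual system design:&lt;br /&gt;

```python
# Hypothetical sketch: a feed filter whose knobs (trust cutoff, share of
# dissenting material) the user can adjust to "tune in" to a different
# slice of opinion. All names and scales here are assumptions.

def tune_feed(items, trust, min_trust=0.5, include_dissent=0.0):
    """Return items whose authors meet the trust threshold.

    items           -- list of (author, content) pairs
    trust           -- dict mapping author to a 0..1 trust score
    min_trust       -- cutoff below which authors are filtered out
    include_dissent -- fraction of low-trust items to keep anyway,
                       so the user can peek outside their bubble
    """
    trusted, rest = [], []
    for author, content in items:
        if trust.get(author, 0.0) >= min_trust:
            trusted.append((author, content))
        else:
            rest.append((author, content))
    # Mix in a tunable share of out-of-bubble items.
    keep = int(len(rest) * include_dissent)
    return trusted + rest[:keep]
```

Raising include_dissent deliberately mixes low-trust material back in, which is one possible mechanism for escaping an opinion bubble by choice.&lt;br /&gt;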
&lt;br /&gt;
Flexibility of this kind lets users take control of their information ecosystem, with the understanding that it is conscious choice that enables this. It has the additional benefit of giving users the tools to think for themselves, knowing that the ecosystem they choose is merely a tool to help them do that. &lt;br /&gt;
&lt;br /&gt;
A personal anecdote might help illustrate this. A long time ago I had a Sirius satellite radio, which had several political talk stations. Two of them were clearly labeled: &amp;quot;Sirius Left&amp;quot; and &amp;quot;Sirius Right&amp;quot;. It was fun switching between the two on long car trips. I could tune in to one political universe or another. Sometimes they would even talk about the same news event at the same time from completely different angles. At some point, listening alone would get boring and the game would switch to critiquing both sides, supporting them with additional commentary they failed to mention, or just laughing at the strategies the hosts would use to hook their listeners. So now I&#039;m no longer just controlling what I hear; I&#039;m enabled to think for myself. I might sketch out positions in the middle or farther to the left or right, or sometimes none of the above. I suppose the opposite could have happened, i.e., I could have been sucked into the bubble created by one station, but it didn&#039;t. The tool, limited though it was, enabled control and thought.  &lt;br /&gt;
&lt;br /&gt;
Consensual reality is often referred to by a related term, [[wikipedia:Consensus reality|consensus reality]] which is said to have largely broken down in our society. Pre-internet, our information came from fewer [[wikipedia:Mass media|mass-media]] sources which shaped a more homogeneous view of ideology, politics, etc. [[wikipedia:Institutional trust (social sciences)|Institutional trust]] was significantly higher as well. Returning to this may seem desirable but it also seems antithetical to the path of a supposedly free society. We&#039;d have to impose such a heavy hand on media, traditional and social, that it would require a fundamentally anti-democratic shift in our society. We&#039;ve already become accustomed to [[Decentralization|decentralized information]]. The only issue is how to move forward. The creation of tools to easily control information can lead to a future where fact-based consensus and reasonableness emerge on their own, by choice. Without additional tools, it is hard to see how our present chaotic informational state can be advanced.&lt;br /&gt;
&lt;br /&gt;
One idea in this vein is to help people develop their own philosophy and apply it to the information system they are interacting with. Most people, in my view, will try to think rationally if given a supportive environment. The Sirius radio only had two options, Right or Left, but it would have been nice if more positions in between and further out existed. Indeed, a system that can be dialed to one&#039;s own philosophy and evolve with it is exactly what we can achieve here. Furthermore, the positions that are staked out would be easily testable, either in formal [[debate]] with another side or in simply allowing the opposite point of view to be easily accessed. The system could be designed to ask challenging questions, ones you might even ask yourself, such as &amp;quot;what is the best argument for the exact opposite point of view?&amp;quot;. It could then allow you to find what people on the other side are saying about that very question.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Compensation_for_participants&amp;diff=2399</id>
		<title>Compensation for participants</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Compensation_for_participants&amp;diff=2399"/>
		<updated>2024-10-09T16:22:40Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Economic systems}}&lt;br /&gt;
&lt;br /&gt;
Since users of the system will be doing a lot of work, work that would typically be paid in the “real” world, a [[community]] might contemplate paying them for their services. A [[community]] will need to aggregate [[opinion]] for policy-making (beyond the algorithmic), hold votes on issues, enforce bylaws, report on news, etc. Plenty of administrative effort will likely be needed over time to handle these tasks, even though the [[community]] will probably start as an exclusively voluntary effort. It is not unreasonable to expect that full-time staff would eventually be required as the [[community]] grows in numbers and complexity. The wages of such employees would, of course, be determined in the same way that all policies are. The effectiveness of employees could be judged by the people in the [[community]] using the tools provided by the [[ratings system]] itself. And pay could be made transparent so that those attaining higher management positions would face strong resistance to paying themselves disproportionately. We might contemplate the creation of a [[Cryptocurrency|cryptocurrency]] for this purpose, one that can be used interchangeably across the whole [[ratings system]]. As a side note, doing so might increase the mainstream use of crypto as a currency for daily trade.&lt;br /&gt;
&lt;br /&gt;
It is inevitable that a [[ratings system]] with crypto will seek to pay its members on the basis of ratings in general, not just those pertaining to work. After all, people who are highly rated are probably also the ones most valuable to society. People who lie to us, answer in bad faith, are manipulative, etc. are a net drain on us regardless of their expertise in their work area. So it makes sense to do this.&lt;br /&gt;
&lt;br /&gt;
We can think of this as a type of [[wikipedia:Universal basic income|UBI]] that you simply receive if you score high enough in the ratings. Traditional economists may critique this as creating money where no transaction took place, but by incentivizing good behavior we improve everyone’s economic outlook. We might note, further, that a lot of non-monetary labor is performed all the time, including volunteer work, helping a friend, parenting, etc. We might argue that this is done within some nuanced system of informal ratings. It doesn’t seem like a stretch to formalize the ratings and attach some form of monetary reward to them.&lt;br /&gt;
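As a rough illustration of the ratings-conditioned UBI idea, here is a sketch in Python. The threshold, payment amount, and 0-to-1 rating scale are assumed placeholders, not settled policy:&lt;br /&gt;

```python
# Illustrative sketch (not a specification): members whose aggregate
# rating clears a community-set threshold receive a flat periodic
# payment in the community's currency.

def ubi_payouts(ratings, threshold=0.7, payment=100):
    """Map each qualifying member to their UBI payment.

    ratings   -- dict of member name to aggregate rating in 0..1
    threshold -- minimum rating that qualifies for the payment
    payment   -- flat amount paid per period
    """
    return {member: payment
            for member, score in ratings.items()
            if score >= threshold}
```

A real community would, of course, set the threshold and payment through its own policy-making process rather than hard-coding them.&lt;br /&gt;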
&lt;br /&gt;
This system could also be used to supplement the pay of “real world” laborers who are generally viewed as underpaid. Rather than rectifying the power structures needed to pay them reasonably (eg laws, [[wikipedia:Labor union|labor unions]]), a [[ratings system]] could offer them a supplement based on the rated value of their labor and each individual’s rating. Our system, in this capacity, begins to displace traditional [[Economic systems|economic systems]].&lt;br /&gt;
&lt;br /&gt;
Once we introduce a cryptocurrency (or even without it) we must raise the problem of [[Astroturfing|paid fake ratings]]. One supposes that a crypto-enabled payment system could be built with a high level of transparency in order to better detect malfeasance of this type. This wouldn’t stop payments from circumventing the built-in one, however. Given this obvious danger, and the presumed investigative abilities of the crowd, we might suppose [[Ratings and white-collar crime|corruption]] of this sort to be rare. Indeed, our algorithms might be able to detect patterns of behavior that suggest corruption, such as unusually high ratings given to someone across a very wide spectrum of views. We might also suppose that communities would enact laws against corruption with appropriate punishments, etc.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Community_and_libertarianism&amp;diff=2398</id>
		<title>Community and libertarianism</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Community_and_libertarianism&amp;diff=2398"/>
		<updated>2024-10-09T16:03:58Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Community}}&lt;br /&gt;
&lt;br /&gt;
One aspect of our culture is a strong emphasis on individual agency and rights. We may not all be formally libertarians but a [[Philosophy of John Rawls#Libertarianism|libertarian ethos]] pervades our views and, as noted above, is strongly connected to the [[Cryptocurrency|crypto]] [[community]] where the idea of a [[ratings system]] might find its first adopters.&lt;br /&gt;
&lt;br /&gt;
There is inevitably a conflict between a [[Ratings system|ratings system]], [[Community|communities]], and libertarianism. There is further conflict when we add concerns about [[Privacy, identity, and fraud in the ratings system|privacy]], which I would argue is an important component of libertarianism. After all, keeping the state out of our affairs is much easier when it doesn’t &amp;lt;i&amp;gt;know&amp;lt;/i&amp;gt; about our affairs.&lt;br /&gt;
&lt;br /&gt;
But we have defined our communities, and any ratings system they might have, as voluntary in nature. Thus the individual, the libertarian if you will, can choose the community and ratings system they wish to join. Once in a community, they have continuing influence over its ratings system. In some ways we can view the ratings system as the glue that makes a libertarian society work since it can be tuned to be as obtrusive or unobtrusive as its members want. The ratings system can also be as individualistic or as communitarian as members choose. And members can, of course, leave the community and join another one whenever they want.&lt;br /&gt;
&lt;br /&gt;
Is this enough? Have we reconciled the libertarian with the community? Maybe. We will explore this a little further as we go along. Libertarians will certainly be attracted to joining communities of choice rather than the one they were born into, which they did not choose and over which they have almost no influence. [[wikipedia:Nation state|Nation-states]] are not, to say the least, libertarian societies. But communities with ratings systems where everyone is judged must seem like quite an imposition on the libertarian mind.&lt;br /&gt;
&lt;br /&gt;
Instead of answering this directly let’s point to some of the advantages of libertarian philosophy in any community. The greatest one, it would appear, is that in a conflict between flawed social obligation vs. personal morality, the libertarian would favor the latter. Libertarians don’t commit [[wikipedia:Genocide|genocide]] or knowingly design faulty airplanes. We can expect their sense of personal morality and agency to assert itself in cultures that try to promote questionable behavior.&lt;br /&gt;
&lt;br /&gt;
Healthy communities require dissent and make progress as a result of it. [[wikipedia:Slavery|Slavery]] existed for thousands of years before dissenters realized that it was wrong and started campaigning to get rid of it. But it took a long time to accomplish and many vested interests had to be defeated in order to do it. Today we are experiencing a large-scale evolution to a non-majority white Christian society and many people are against it. We are also changing our views on [[wikipedia:Greenhouse gas emissions|carbon emissions]] for [[Environmental policy in communities|environmental]] reasons but, again, there will be plenty of opposition and it will take a long time.&lt;br /&gt;
&lt;br /&gt;
It would help if we had a ratings system that could produce required cultural changes faster. By [[Debate|giving voice to dissenting views]] and simply having a quicker feedback loop, we can reduce the time needed for social change. It would also help to have small communities that can pioneer ideas and see if they work before they are adopted more generally. In this way, both the community and the dissenter are an active part of change.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Community&amp;diff=2397</id>
		<title>Community</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Community&amp;diff=2397"/>
		<updated>2024-10-09T15:35:21Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Ratings system}}&lt;br /&gt;
&lt;br /&gt;
=== Community Defined ===&lt;br /&gt;
In the context of the [[ratings system]], a &#039;&#039;&#039;community&#039;&#039;&#039; is a group of users who collaborate with each other. They generally have a broad agreement to follow certain rules, conventions, and norms. In some ways it is nothing more than a contract, but a contract implies a straightforward give-and-take economic transaction with hard economic obligations on both sides. A community is more than that and less formal, although it implies a type of governance, values, perhaps even a culture. It is still a [[Contract as a method to mitigate the basic liberties imposition|contract]], in the sense that members &#039;&#039;agree&#039;&#039; to abide by the community&#039;s standards and the community agrees to give the member certain benefits. But it is more than what a contract normally implies. &lt;br /&gt;
&lt;br /&gt;
Contemporary communities include organizations in our everyday life: clubs, companies, non-profits, etc. Clubs, informal organizations generally organized around some interest (e.g., books, board games, sports), could easily adopt a ratings system for their subject of interest. Book clubs could rate books, gamers could rate games, and so on. Sports clubs could rate their players and the competition using their own statistical metrics. Needless to say, matters of internal governance, such as voting on bylaws, could easily be handled by the ratings system.  &lt;br /&gt;
&lt;br /&gt;
Companies and non-profits represent a larger and perhaps more organized type of [[Community|community]]. They could use the system to rate their employees, or for taking surveys of their customers and donors. Ratings already exist to compare charitable organizations, particularly in how well they spend their money. This type of organization might benefit from an internal ratings system to discover problems before they become public. &lt;br /&gt;
&lt;br /&gt;
Other examples of contemporary communities are so-called [[Hippy communes, socialism, work, and egalitarianism|hippy communes]], also known as [[wikipedia:Intentional community|intentional communities]]. Another, better-established version of this is the [[wikipedia:Kibbutz|Kibbutz movement]] in Israel. Communities like the kibbutzim and the hippy communes often exist within a loosely [[Libertarian socialism|libertarian socialist]] context. Given their emphasis on individual liberty and egalitarianism, they could serve as a model for communities based on the ratings system.   &lt;br /&gt;
&lt;br /&gt;
We use the word community a lot because it is necessary for what we are trying to do. A [[ratings system]] can only work among a group of people who are trying to accomplish something together. This would seem to become more true the more significant the goals of the group are. Indeed, many communities will arise out of a [[Capital vs. Labor: a theory of society|common theory of how society works, such as the tension between capital and labor]]. A truth-seeking ratings system, one that can really improve society, lead to better governance, etc., needs to be a significant community. Our ratings system is doing much more than helping people decide which product to buy or restaurant to eat at.&lt;br /&gt;
&lt;br /&gt;
The loosely libertarian crypto community that our ratings system is, at least initially, directed at probably sees the individual as the basic unit of government. There&#039;s plenty of good reason for this, since it is individuals who make up the groups and individuals who do the thinking that makes governance possible. But here we take the presence of individuals as a given and see communities as the basic building block for any worthwhile experiment in social improvement. This is not a value judgement, not a right or wrong thing. It is just why we refer to community so often and why we take the relationship between [[Community and libertarianism|community and libertarianism]] seriously.&lt;br /&gt;
&lt;br /&gt;
Communities are a natural expression of people&#039;s social instincts but they also bring complexity, especially if we regard individual autonomy and freedom as the highest ideal. Do [[Government and physical space|communities occupy physical territory]] and have borders? What is the [[Relationship between community members and its policies|relationship between community members and its policies]]? What is the [[Balance between individual liberties and the community|balance between individual liberties and the community]]? Do sub-groups within a community have the [[Communities and the right to secession|right to secede]]? &lt;br /&gt;
&lt;br /&gt;
A community might not have its agreements written down. It could be just a set of norms that are implicitly followed. Over the past several years, as bipartisan cooperation has broken down, we&#039;ve started using the word &amp;quot;norms&amp;quot; much more; we used to simply work together on matters of shared interest. Similarly, most people in a community agree to be reasonable and respectful toward each other. It is only when this doesn&#039;t happen regularly that we might decide to write it down as a rule. &lt;br /&gt;
&lt;br /&gt;
In many cases community norms come from the surrounding culture. In a sense, this just means that communities influence each other and that there may be larger, more encompassing communities that envelop smaller ones. We understand that in our ratings-based community system, members may be in many communities at once and that a lot of overlap between communities will exist.&lt;br /&gt;
&lt;br /&gt;
=== Steps in Community Building ===&lt;br /&gt;
We anticipate that like-minded users of the [[ratings system]] will come together to form communities. An important ingredient of this is the idea of [[consensual reality]]. At a practical level, [[#Community Building Tools|Community-building tools]] will be available through the software. Some communities will be shared interest forums for enthusiasts of specific activities (sports, stamp collecting, etc.) But others will seek to develop [[Political systems|political communities]], complete with forms of [[Justice and defense in communities|justice, defense]], and [[Economic systems|economic systems]]. Although the [[ratings system]] will encompass both types of community, we will focus here on the latter.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s start by discussing [[wikipedia:Political philosophy|political philosophy]] and its importance in community-building. We might ask why a [[ratings system]] community would need this. What is the purpose, say, of [[Philosophy of John Rawls|Rawlsian liberalism]] in a ratings system where individuals will subjectively decide what they want and communities will collectively do the same? The answer, of course, is that any type of society, except perhaps the simplest, needs a political philosophy to guide it. Most societies (and smaller groups) have one even if they are not aware of it. Certainly if we presume to manufacture a new society, or some new social paradigm, we would want a political philosophy to aim for. Otherwise, why would we care? We are not doing this because a ratings system gives us a better way to judge the next car to buy, or who the best dentist in town is, although it might be useful for that too. We are doing it because a high-quality system of ratings can improve everyone and, by extension, society itself.&lt;br /&gt;
&lt;br /&gt;
It would seem then that the first act of community building would be to identify, consciously, a political philosophy to follow. Communities would then proceed to define a constitution and a basic system of derivative law. But how does a community even agree on a political philosophy if it doesn’t have a system of law already in place? Well, it has the ratings system. And, presumably, it has some mechanism for communication which enables [[Debate|debate]]. We will, as the designers of the system, provide software with built-in defaults to enable debate, provide categories for ratings, a selection of algorithms, weights, etc.&lt;br /&gt;
&lt;br /&gt;
Informal communities should then form spontaneously on the basis of shared interests (and philosophies). Like-minded people have a way of finding each other. Debate would then take place to establish a constitution or, using Rawls’ wording, a set of basic liberties. A system of derivative law might follow if the members could identify, at this point, specific such laws that would be necessary. It is usually the case that law follows experience and an understanding of what is necessary as the community begins to function as such. Along these lines, a government structure could be put in place, through further debate and following the agreed-upon set of basic liberties, if desired. Note that we would have the basic liberties established first, instead of as an afterthought as was done in the US constitution with the Bill of Rights.   &lt;br /&gt;
&lt;br /&gt;
We might also consider that the government structure might be very lightweight at first since we have the [[ratings system]] in place already, which acts as a default mechanism of [[direct democracy]]. A [[Voting methods|voting mechanism]], a natural extension of the [[ratings system]], will also be in place to allow formal decisions to be made. And since the ratings system &#039;&#039;is&#039;&#039; an information system, another necessary ingredient in community formation is already in place. &lt;br /&gt;
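One hedged sketch of how a voting mechanism might extend the ratings system is to weight each member&#039;s vote by the trust the community assigns them. This is just one possible design among many; the names, trust scale, and weighting rule below are assumptions, not the system&#039;s prescribed method:&lt;br /&gt;

```python
# Assumed design sketch: a yes/no vote tallied with trust-weighted
# ballots, illustrating how voting could fall out of the ratings data.

def weighted_vote(votes, trust):
    """Tally yes/no votes weighted by each voter's trust score.

    votes -- dict of voter name to "yes" or "no"
    trust -- dict of voter name to a nonnegative trust weight
    Returns the winning option, or "tie".
    """
    tally = {"yes": 0.0, "no": 0.0}
    for voter, choice in votes.items():
        # Unknown voters get zero weight rather than raising an error.
        tally[choice] += trust.get(voter, 0.0)
    if tally["yes"] > tally["no"]:
        return "yes"
    if tally["no"] > tally["yes"]:
        return "no"
    return "tie"
```

Whether votes should be trust-weighted at all, rather than one-member-one-vote, is itself a policy question each community would decide.&lt;br /&gt;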
&lt;br /&gt;
With a political philosophy, constitution, and some governmental framework (and derivative law), we have a formal community. Probably the first thing it will want to do is establish some criteria for membership, another issue to be debated, rated, and voted on. In any event, a community is born.&lt;br /&gt;
&lt;br /&gt;
=== The Power to Effect Change ===&lt;br /&gt;
&lt;br /&gt;
One of the benefits of large nation states is their potential ability to enact sweeping public policy changes very quickly. If the US and China alone were to agree to reduce their carbon footprint, all they would need to do is pass legislation in their respective legislatures and create a treaty. It is a relatively simple, and well understood process. It would take many small voluntary communities, each agreeing to do the same, to achieve this.&lt;br /&gt;
&lt;br /&gt;
But this is only in theory. The US and China are egregious in their lack of concern for environmental issues and no hopeful treaty along these lines is on the horizon. The US is politically incapable of effecting great change and China is determined to become the next superpower, a plan largely built around expanding its industrial and military prowess, not saving the planet. Neither country is particularly introspective and both lack the humility required for sustained change.&lt;br /&gt;
&lt;br /&gt;
But would a collection of small voluntary communities do any better? The same hubris might certainly affect them as well. But one advantage they would have is a ratings criterion that measures their own participation in world affairs. Communities will be able to rate themselves and each other on common issues. As we’ve discussed, treaties between communities will be possible and even confederations of communities. Alongside individual introspection, community introspection will be an important value and lead to continuous improvement. The ratings system would act as an important virtuous feedback loop on the general community system. Our goal would be to prevent communities from becoming rigid and ossified in the face of obvious reform (like the US).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Community Building Tools ===&lt;br /&gt;
&lt;br /&gt;
As part of the [[ratings system]] we plan to have a &amp;quot;community building&amp;quot; feature to get people together for group projects, discussion, activism, etc. Someone, perhaps on a public node, would kick things off by proposing a community to work on X. Other users would join and begin collaborating. Our [[trust]] features can be used to filter out those who don&#039;t meet certain criteria or only allow those who do. An important part of community building will be [[debate]], especially as it relates to the creation of bylaws and policymaking. &lt;br /&gt;
&lt;br /&gt;
There are a number of [https://startupstash.com/community-building-tools/ existing tools] in the community-building space. A few well known ones include:&lt;br /&gt;
&lt;br /&gt;
* [https://www.mightynetworks.com/ Mighty Networks] &lt;br /&gt;
* [https://hivebrite.io/ Hivebrite]&lt;br /&gt;
* [https://circle.so/ Circle]&lt;br /&gt;
&lt;br /&gt;
Some key features of community building tools include:&lt;br /&gt;
&lt;br /&gt;
# User Profiles&lt;br /&gt;
# Discussion forums&lt;br /&gt;
# Messaging &amp;amp; notifications&lt;br /&gt;
# Content sharing&lt;br /&gt;
# Groups and sub-communities&lt;br /&gt;
# Moderation tools&lt;br /&gt;
# User engagement metrics&lt;br /&gt;
# Event management&lt;br /&gt;
# Integration with 3rd party tools (social media, CRM, analytics, etc)&lt;br /&gt;
# Customization of look and feel, layout, etc.&lt;br /&gt;
# Mobile access&lt;br /&gt;
# Gamification -- badges, points, rewards, etc.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Communities_and_the_right_to_secession&amp;diff=2396</id>
		<title>Communities and the right to secession</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Communities_and_the_right_to_secession&amp;diff=2396"/>
		<updated>2024-10-03T16:36:58Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Community}}&lt;br /&gt;
&lt;br /&gt;
We envision ratings-based communities as voluntary organizations: people choose to join them and can leave them at will. The US, like many other [[wikipedia:Nation state|nation-states]], has a joining process (eg green cards, naturalization) and a process of citizenship renunciation. Analogous [[community]]-based processes will exist for individuals, although we anticipate they will be less bureaucratic, faster, and more informal. The [[ratings system]] itself will be able to do much of the vetting for joining a [[Community|community]]. We also anticipate individuals holding membership in multiple communities.&lt;br /&gt;
&lt;br /&gt;
So the right to renounce a group membership seems relatively undisputed. We might invoke here the caveat that under extreme circumstances the right to leave might be restricted: a group member may have strategic knowledge he wishes to provide to an enemy, or may be so valuable to the community that his departure would significantly harm the liberties of everyone else. But under normal circumstances the right to leave seems uncontroversial.&lt;br /&gt;
&lt;br /&gt;
It would then seem that the right of an entire group to leave would be the same, and again, there doesn’t seem to be much dispute over this. Marginalized ethnic groups have, throughout history, migrated to other, more hospitable countries. We might again invoke the caveat of extreme circumstances and note further that the capacity for harm rises as the number of people choosing to leave rises. Thus it might be easier to reach the limit at which a community begins restricting the ability of a group to leave. Nevertheless, here too, it would appear that a group right to leave is mostly intact.&lt;br /&gt;
&lt;br /&gt;
Why then would [[wikipedia:Secession|secession]], which seems somewhat like a departing group, cause problems? Throughout history, seceding regions have often been the cause of [[wikipedia:Civil war|civil war]]. One clear difference is that the seceding region takes with it valuable resources, such as land. It also takes with it citizens who pay taxes, serve in the government/military, etc. More fundamentally, it renounces the laws of the parent nation and so takes with it all of the parent’s legal power over an entire sector of its territory. Citizens who renounce their citizenship but remain in their original country cause, by these measures, only a fraction of the loss that secession does. If they leave their original country, they may cause more loss but also do so at considerable expense to themselves. It is unusual for large blocs of people to agree to renounce their citizenship and move to a different country. Indeed, it is even unusual for them to renounce their citizenship en masse and stay. But even if they manage to do it (eg the Jews who formed Israel), they mostly leave behind the land and capital assets of the parent nation.&lt;br /&gt;
&lt;br /&gt;
So the crux of the difference between secession and group renunciation of citizenship seems to lie in the economic and legal ramifications. A nation-state is nothing if not the ability to impose a legal framework over territory. And territory can be viewed primarily as an economic asset.&lt;br /&gt;
&lt;br /&gt;
We’ve discussed that our ratings-based society will not necessarily have [[Territorial boundaries|territorial boundaries]]. This may be the case, but it will certainly have control over economic assets in some fashion, whether this is land, productive assets like machinery, people (members), etc. It will no doubt have some legal binding of its members despite the fact that everyone agreed to it voluntarily and is free to leave. Secession therefore seems eminently possible in the community context. How would communities solve this problem? &lt;br /&gt;
&lt;br /&gt;
One way would be to not worry about it and simply allow it. Or not mention it at all in the [[Constitution|body of law]] that they create. The latter is similar to the US, which never mentioned secession in its founding documents. The US example of ignoring the matter entirely led to the Civil War, which settled the issue on the battlefield and, only post-war, [[wikipedia:Texas v. White|adjudicated it legally]] in favor of the view that the Union is perpetual and unbreakable. That is, states could not secede, but this seems to have truly been decided by force.&lt;br /&gt;
&lt;br /&gt;
But either of these situations (allowing it or ignoring it) sends us down a tricky path. Clearly ignoring it in plain sight (in the US case) is a bad idea. But simply allowing it causes problems too. If communities create a binding body of law for their members, as we presume they will, what does it then mean if secession is allowed? At any point of inconvenience, a member or a group thereof can simply throw off the yoke of oppressive law and start anew. And what does secession entail under this principle? A group of members could presumably “secede” by whatever process they see fit and start ignoring (ie violating) the laws of their parent community with the justification that they are now a new community. It would appear that this wouldn’t work either.&lt;br /&gt;
&lt;br /&gt;
The only solution then is to codify in the body of law, along with the process for joining a community, the steps by which members can leave the community and, more importantly, secede from the community. This is what the [[wikipedia:European union|EU]] does: it codifies the process of separation in its treaties ([[wikipedia:Article 50 of the Treaty on European Union|Article 50]]). And we saw recently with [[wikipedia:Brexit|Brexit]] how this works and leads to a peaceful conclusion. Britain today lives side by side with the EU, is still friends with it, and has many other treaties which make it part of a broader European community. We might argue that Brexit was a bad idea, but it did reveal how a relatively harmonious separation from a larger state can occur.&lt;br /&gt;
&lt;br /&gt;
This brings to light a basic property of law: it must contain its own exceptions for those exceptions to be valid. The exception doesn’t exist by default if it goes unmentioned, because if it did, anyone could break the law by “seceding” first and declaring the law void in their newly seceded “country”. Let&#039;s suppose John wants to kill his wife Linda but is afraid of the law. He &amp;quot;secedes&amp;quot; from his country and declares his house and property a new country, Johnland. Since there&#039;s no law about murder in Johnland, he proceeds to kill Linda. No doubt, the police would arrest John anyway and prosecute him. John would argue that the law on murder is not valid because he seceded and committed it in another country. Clearly, John&#039;s argument is ridiculous unless some codified process for secession actually exists.&lt;br /&gt;
&lt;br /&gt;
This is apparently what the Confederate States (CS) ran up against when they tried to secede from the US. The US constitution ignores secession but the [[wikipedia:Tenth Amendment to the United States Constitution|10th Amendment]] provides that powers not given to the federal government are reserved by the states (or the people). Therefore, the argument goes, the CS could legally secede. So when they did, Lincoln, who did not believe that secession was legal, nevertheless allowed the seceded states to maintain slavery and was open to some type of negotiated settlement. So the CS’s procedural act of secession (declaring the fact, signing a new constitution, electing new leaders, etc) didn’t seem particularly harmful in and of itself. To the US, nothing much had changed, and constitutional law (and federal law) continued to be applicable in the CS. But then South Carolina [[wikipedia:Battle of Fort Sumter|decided to fire on Fort Sumter]] (a US fort), as it was being resupplied, in an attempt to drive US troops from CS territory. Now a law had been dramatically broken and the US had little choice but to counter it with force.  &lt;br /&gt;
&lt;br /&gt;
The Civil War truncated an interesting [[debate]] on whether states really had the right to secede. The answer, by the victor, was clearly no but this was only a conclusion drawn after the war. It seems fairly clear that if the US constitution had established a procedure for secession, the armed conflict could largely have been avoided. It also seems fairly clear that, by ignoring the subject entirely, the US made secession intolerable for itself (because we cannot have exceptions to the law that go unmentioned in the law) while permitting the other side just enough legal space to realistically entertain the notion.   &lt;br /&gt;
&lt;br /&gt;
This history is troubling, and should be troubling to any community of voluntary members. We would normally think, as the CS did, that we have a right by default if nothing in our law says we don’t. At the same time we can see the folly of unilateral separation as a means by which to break the law. This is why it is imperative that secession (and separation by members) be spelled out by the community.&lt;br /&gt;
&lt;br /&gt;
But what if a community spells out terms that are so onerous that they can’t practically be fulfilled? Clearly, this is something a community must be aware of when it drafts its basic laws. But of course it could still happen, and we might find ourselves with groups that secede from their communities without attempting any legal justification of their actions. At the end of the day, if conditions are bad enough, force will prevail over any legal debate. However, given the potentially tragic outcomes of failing to address it adequately, it behooves any community to build in a reasonable system of secession from the beginning.&lt;br /&gt;
&lt;br /&gt;
In general, communities should be aware of any potential for armed revolution, not just secession. The essence of preventing revolution is to allow a peaceful alternative that gives the revolution what it wants at less cost. Why has the US, for instance, been a relatively stable country for so long? One reason is that our [[Democracy|democracy]], and particularly our [[wikipedia:Federal republic|federal system]], gives dissenters a path to get what they want at lower cost than armed struggle would. In other words, to anyone contemplating armed revolution against the US, it is likely that standard political action is more effective and a lot cheaper. It is also likely that if standard political action fails, that armed revolution would fail too. We may doubt if these conditions hold anymore in the US, but they did at one time, and if a conventional nation-state is capable of it, a direct ratings-based democracy should be too.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Commentary_on_technology_and_feasibility_of_the_ratings_system&amp;diff=2395</id>
		<title>Commentary on technology and feasibility of the ratings system</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Commentary_on_technology_and_feasibility_of_the_ratings_system&amp;diff=2395"/>
		<updated>2024-10-03T16:16:21Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Ratings system}}&lt;br /&gt;
&lt;br /&gt;
==Technology==&lt;br /&gt;
The system we are proposing is far-reaching and may strike many as unrealistically grandiose. But it is also unrealistic to doubt the impact of technology. Consider that today our [[Mobile technology|mobile technology]], in particular, has wrought a type of dystopian present where everyone is addicted to their phones. I was looking at students at a bus stop not long ago and every single one of them was immersed in their phone. It reminded me of a [[wikipedia:Star Trek: The Next Generation|Star Trek TNG]] episode from decades ago (“The Game”) where everyone became addicted to a video game and couldn’t stop playing it. The game somehow acted upon the pleasure center of the human brain. We know that mobile software, particularly [[Social media|social media]], is doing largely the same thing. It is furthermore known to be having profoundly negative effects on politics and social interaction. So we have an example of a technology that has changed the habits of nearly everyone, along with a profoundly negative aggregate effect. If someone had predicted this 20 years ago, the prediction most likely would have been dismissed as unrealistically far-reaching.&lt;br /&gt;
&lt;br /&gt;
Furthermore, technology has a tendency to [https://digitaltonto.com/2017/the-30-years-rule-innovation-takes-a-lot-longer-than-you-think/ lag in its economic and societal effect]. [[wikipedia:Thomas Edison|Thomas Edison]]’s first electric generating plant was established in 1882, about 40 years before electricity started having a major impact on the economy. The [https://en.wikipedia.org/wiki/The_Mother_of_All_Demos “Mother of all Demos”], which showed the first computer with windows, a mouse, video conferencing, [[wikipedia:Revision control|revision control]], etc., took place in 1968, approximately 30 years before these technologies would be widely used by the public. Usually a technology must wait for further refinement, social conditioning, and mass adoption before it finally produces societal change, presumably for the better.&lt;br /&gt;
&lt;br /&gt;
In our case, the technology exists in parts and plenty of social conditioning has taken place. Acceptance of a tool like the one we are proposing might be a challenge, but not an impossible one given people’s facility with the internet, mobile devices, and social media. Furthermore, our ideas are coupled with widespread dissatisfaction with government, media, educational institutions, etc. It seems the time would be ripe for a defining change.&lt;br /&gt;
&lt;br /&gt;
==Prospects for reform==&lt;br /&gt;
Alongside the impact technology can have, we might ask how likely we are to reform &amp;quot;the system&amp;quot; when so many other attempts have failed (or fallen far short). An interesting website, [https://www.govtrack.us/ govtrack.us], exists to track bills as they make their way through [[wikipedia:United States Congress|Congress]]. We are contemplating an improved version of this ourselves through the [[ratings system]]. Its author, [[Joshua tauberer|Joshua Tauberer]], has quite a bit of [https://medium.com/civic-tech-thoughts-from-joshdata/so-you-want-to-reform-democracy-7f3b1ef10597 criticism for those attempting to reform democracy].&lt;br /&gt;
&lt;br /&gt;
We don’t fundamentally agree with Josh because he dismisses reformers the way so many do, by saying “the world is too complicated for your simple mind to fix”. Even if this turns out to be true for most people, if we all took his advice nothing would change. It’s kind of like criticizing people who start businesses because most businesses fail.&lt;br /&gt;
&lt;br /&gt;
Nevertheless he offers some insights worth reminding ourselves of: focus on what you can really do, test it with real people, iterate, and don’t expect success right away. With this in mind, our goal should be to create a durable framework that can evolve and gain users over time. If we can do this, and the virtuous cycle of ratings works, we will have something that can catalyze real change.&lt;br /&gt;
&lt;br /&gt;
On this we can take a page from the [[wikipedia:Cryptocurrency|crypto community]]. They set out idealistically to replace central banks (and banks in general) with a decentralized virtual monetary system. They didn’t succeed but did manage to create an organically durable movement. It is unlikely in the extreme that crypto will simply die as it may have in [[Cryptocurrency|Bitcoin]]’s earliest days, or be killed through government intervention. Crypto and its supporting platforms are still being developed at a healthy pace. It isn’t clear where the movement will end up but it may yet lead to a future that changes fundamentally the role of money in our society. It has certainly attained the staying power that makes this prospect possible.&lt;br /&gt;
&lt;br /&gt;
In some sense crypto is itself a system of communities with a built-in rating system. Each currency, if traded openly, is [https://coinmarketcap.com/ &amp;quot;rated&amp;quot; against its peers] and traditional currencies through an exchange rate. This rate is produced by normal market forces but is also the result of participants rating the merits of the specific currency in question, its mission, values, the people involved, etc.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Civility_and_battling_entrenched_bias&amp;diff=2394</id>
		<title>Civility and battling entrenched bias</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Civility_and_battling_entrenched_bias&amp;diff=2394"/>
		<updated>2024-10-03T16:00:35Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Ratings system}}&lt;br /&gt;
&lt;br /&gt;
{{Main|The ratings system, human psychology and social dynamics}}&lt;br /&gt;
&lt;br /&gt;
Entrenched [[Bias|bias]] is bias held by individuals that has persisted for a long time, perhaps from childhood. It is often unconscious and expressed in an automatic way. There doesn&#039;t appear to be a straightforward technical feature to combat it, other than what we are already contemplating (ie the [[ratings system]]). We clearly need to get people to think critically about their own beliefs. More precisely, how do we get people to believe the truth when they believe a falsehood and have done so for a long time?&lt;br /&gt;
&lt;br /&gt;
Arguing usually does not work but empathetic listening does. Using evidence and facts helps but only if done in a non-confrontational way. The NYTimes columnist [https://www.nytimes.com/2024/02/22/opinion/bidenomics-working-class.html David Brooks] argues, to give a recent example, that a growing disrespect for the [[wikipedia:Working class|working class]] explains much of their shift toward Trump. In his view, the Democrats are increasingly the party of the educated and have, as a result, started looking and feeling quite different than the working class. They try to use [[Logic|logic]] and facts to make their case. Biden’s attempt to reach them through economic policies is not having the desired effect because respect is more about feelings and attitudes than about objective benefits. It is notable here that the Democratic party is the one that believes in inclusiveness but seems to have no problem excluding those who don’t subscribe fully to its own notions of inclusiveness, even at the risk of electoral defeat.&lt;br /&gt;
&lt;br /&gt;
It is hard to change minds this way in person, and harder still to create an environment within an information system that achieves it. One way we could try is to provide a rating for civility and filter out uncivil discourse on the platform. Most people will probably aim for civility, and when they don’t, they are choosing not to; their rating will reflect that choice. The system should probably be built by default with a fairly robust civility setting.&lt;br /&gt;
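A minimal sketch of this civility-filter idea follows. All names, the score scale, and the default threshold are hypothetical illustrations, not part of any existing design:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    civility: float  # 0.0 (hostile) .. 1.0 (civil); assumed to come from the ratings system

# A "fairly robust" default setting, per the text; the exact value is an assumption.
DEFAULT_CIVILITY_THRESHOLD = 0.6

def visible_posts(posts, threshold=DEFAULT_CIVILITY_THRESHOLD):
    """Return only the posts that meet the viewer's civility threshold."""
    return [p for p in posts if p.civility >= threshold]
```

A viewer could lower the threshold to see everything, or raise it for a stricter feed; the point is that the filter is a per-user setting layered on top of community-supplied ratings.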
&lt;br /&gt;
The [[Ratings system|ratings system]], in addition, will rate highly those who are open minded and capable of interacting with a diverse set of people, whether they agree with them or not. Often, just keeping a dialogue open is key to keeping ideological discourse from spiraling out of control. The participants know there are real human beings on the other side of the [[Debate|debate]] and, most of the time, recognize them as good people (even if they are wrong about something).&lt;br /&gt;
&lt;br /&gt;
One specific technique, called [https://www.theatlantic.com/ideas/archive/2020/10/how-we-got-voters-to-change-their-mind/616851/ deep canvassing], has been used to change voters’ minds and relies primarily on [[wikipedia:Empathy|empathy]] as a tool. [https://www.vice.com/en/article/4ay4wn/how-to-change-a-voters-mind-is-deep-canvassing This article] notes that &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The political scientists Joshua Kalla and David Broockman have studied what campaigns can do to change voters’ minds during general elections, and the answer is basically nothing works.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The article goes on to describe [[wikipedia:Deep canvassing|deep canvassing]] as the method of choice in situations like these. However, the problem with deep canvassing is that it’s very expensive. Canvassers need to establish a one-on-one relationship with someone for perhaps 15 minutes at a time, possibly returning a few more times for follow-up. This is clearly not something that can be done on a large scale, so it is done with carefully targeted persuadable voters.&lt;br /&gt;
&lt;br /&gt;
But, within these constraints, the method appears to be effective:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The results: deep canvassing changed attitudes by an average of around 4 to 6 percent, while traditional canvassing had little effect.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Deep canvassing might also help with a problem [https://www.nytimes.com/2024/02/26/opinion/white-rural-voters.html discussed by Krugman recently], that of “rural rage”, where rural voters prefer Trump to Biden even though Biden’s policies, according to Krugman, are more likely to help them economically.&lt;br /&gt;
&lt;br /&gt;
If deep canvassing is indeed effective, how would we get our ratings system to “deep canvas” as part of an argument strategy? Well, for one thing, people can choose to [[debate]] with those rated highly in empathy. Just having an empathy rating will make people more likely to want to score highly in this area. But another idea is to use AI to deep canvas. This is a creative endeavor which AI, surprisingly, seems to be fairly good at: just tell it to pretend to be an empathetic debater on some topic and have a debate. The results are quite good. So for the “debate” feature of our system, we could start with AI setting a tone of civility which will help others follow along.&lt;br /&gt;
&lt;br /&gt;
Another idea in boosting civility is to have a lag on posted material that gives authors a chance to think about what they have posted and make modifications. It especially gives them a chance to retract posts made in anger which they would later regret. This has been floated for social media and some forums, such as [https://www.kialo.com Kialo], effectively have it because posts must be approved by a human before publication. However, it would appear that major social media sites have not implemented this idea.&lt;br /&gt;
&lt;br /&gt;
Whatever our strategy for promoting civility we should keep in mind that one of the central aims of our system is to get a controversial but correct [[opinion]] to the fore. Our system should enable this, not run counter to it as seems so often the case on social media. We have to create an environment where people can answer in good faith (not trampled on by negative ratings) and then be accepted by the [[community]] based on [[logic]], veracity, etc. We’ve discussed [[Organizations as raters#Rating the Rater and Unfair Ratings|rating the rater]]. Perhaps a simple rater objectivity index could be formulated so that, over time, we can identify those raters worth filtering for.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Cardinal_voting_systems&amp;diff=2360</id>
		<title>Cardinal voting systems</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Cardinal_voting_systems&amp;diff=2360"/>
		<updated>2024-10-01T20:42:44Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Voting methods}}&lt;br /&gt;
&lt;br /&gt;
== What it is ==&lt;br /&gt;
&lt;br /&gt;
A cardinal voting system is one where voters give an independent rating to each preference, in contrast to merely ranking their preferences. This requires more information but avoids Arrow’s paradox and provides higher quality results, especially for policy-level decision making. It is frequently the case that voters care a great deal about a few subjects and are mostly indifferent to the rest. An ordinal voting system forces a ranking of all choices, even ones the public is mostly indifferent to. A cardinal voting system captures the important issues with less effort and allows policy to focus on those.&lt;br /&gt;
&lt;br /&gt;
It should be noted that Arrow himself dismissed [[Cardinal voting systems|cardinal voting systems]] because the strength of anyone’s preference is not objectively comparable with that of anyone else:&lt;br /&gt;
&lt;br /&gt;
“It seems to make no sense to add the utility of one individual, a psychic magnitude in his mind, with the utility of another individual.”&lt;br /&gt;
&lt;br /&gt;
He believed, therefore, that only ordinal systems made any sense.&lt;br /&gt;
&lt;br /&gt;
Nevertheless, let’s take a look at how we might achieve a cardinal voting system using one of our [[applications]], eg Foreign Policy. The goal is to develop a top-level foreign policy. We could start by asking our network the question, “what should our foreign policy be?”. It is very open ended, but let’s start here and then explore how we might ask such a question differently. Some members of the network will come forward with answers. We can deal with these answers in various ways:&lt;br /&gt;
&lt;br /&gt;
# Just pick the best one, say, by a judge or panel of judges.&lt;br /&gt;
# Blend them in some way that seems best, again according to some judging committee.&lt;br /&gt;
&lt;br /&gt;
These two are not likely to win approval of a majority if the majority has no input. So ways to gain input might include:&lt;br /&gt;
&lt;br /&gt;
# Ask the network to rate each policy on various characteristics which are then weighted using an equation. The highest rated policy wins.&lt;br /&gt;
# Try to improve on the highest rated policy by blending it with some of the other policies or new ideas. Go through the network rating again and try to “optimize”.&lt;br /&gt;
&lt;br /&gt;
After network-based ratings come in, we would normally apply our rating of the network participants. But, since this is a public decision, we could impose the rule that everyone is treated equally, unless we have a previous vote that allows a difference in weight for those who, for instance, are very knowledgeable about the subject.&lt;br /&gt;
&lt;br /&gt;
Let’s do an example with some simple math to show how this might work. Let’s suppose we get three answers:&lt;br /&gt;
&lt;br /&gt;
# US foreign policy should be to promote democracy around the world by giving aid to emerging democratic countries and democracy movements in undemocratic countries.&lt;br /&gt;
# US foreign policy should be to promote human rights around the world.&lt;br /&gt;
# US foreign policy should be to use its intelligence and military capabilities to actively overthrow dictatorships around the world.&lt;br /&gt;
&lt;br /&gt;
Each policy is rated by its beneficial effect on the following criteria:&lt;br /&gt;
&lt;br /&gt;
# US economy, e&lt;br /&gt;
# US hegemony/power, h&lt;br /&gt;
# People outside the country we are affecting, p&lt;br /&gt;
# Adherence to moral principles, m&lt;br /&gt;
&lt;br /&gt;
Each criterion can be 0-100, so a perfect score is 400.&lt;br /&gt;
&lt;br /&gt;
Suppose the following ratings were obtained for each policy:&lt;br /&gt;
&lt;br /&gt;
# Promote democracy ==&amp;amp;gt; e = 20, h = 40, p = 30, m = 60 ==&amp;amp;gt; total = 150&lt;br /&gt;
# Promote human rights ==&amp;amp;gt; e = 10, h = 20, p = 70, m = 80 ==&amp;amp;gt; total = 180&lt;br /&gt;
# Overthrow dictatorships ==&amp;amp;gt; e = 30, h = 60, p = 40, m = 70 ==&amp;amp;gt; total = 200&lt;br /&gt;
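The totals above can be reproduced with a short sketch. The policy names and criterion keys follow the example; the equal, unweighted sum of criteria is an assumption of this illustration (a real deployment might weight each criterion differently):

```python
# Each policy's criterion ratings: e = economy, h = hegemony/power,
# p = people outside the affected country, m = moral principles.
policies = {
    "Promote democracy":       {"e": 20, "h": 40, "p": 30, "m": 60},
    "Promote human rights":    {"e": 10, "h": 20, "p": 70, "m": 80},
    "Overthrow dictatorships": {"e": 30, "h": 60, "p": 40, "m": 70},
}

# Unweighted sum of the four criteria; each is 0-100, so a perfect score is 400.
totals = {name: sum(scores.values()) for name, scores in policies.items()}
winner = max(totals, key=totals.get)  # highest total wins
```

With the ratings above, the totals come out to 150, 180, and 200, making "Overthrow dictatorships" the winner, exactly as in the text.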
&lt;br /&gt;
Our policy winner is 3, overthrowing dictatorships. Notice that although everyone participated in “voting” for this outcome, this choice may not represent the majority view. That is, if the three options had been put up for a straight vote, it is not necessarily the case that 3 would win. Furthermore, the more choices exist, the less likely it is that any one option would receive greater than 50% of the vote. So 3 could only win by plurality, not by majority. In situations like these, a runoff election could occur where we take the highest plurality winners and have a 2nd election and continue this process until clear majorities emerge.&lt;br /&gt;
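The runoff process described above might be sketched as follows. The voter rankings and helper names are hypothetical; in practice ballots would come from the ratings system, and the tie-breaking rule would need to be specified by the community:

```python
from collections import Counter

def runoff(cast_ballots, options):
    """Repeatedly narrow the field until one option has an outright majority.

    cast_ballots(options) returns each voter's single choice among the
    remaining options. With two options left, the plurality leader wins
    (ties here fall to whichever option was tallied first -- a sketch-level
    simplification).
    """
    while True:
        tally = Counter(cast_ballots(options))
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()) or len(options) <= 2:
            return leader  # outright majority, or final head-to-head round
        options = [opt for opt, _ in tally.most_common(2)]

# Hypothetical electorate: each voter ranks all options and, in each round,
# votes for their highest-ranked option still in the running.
voters = (
    4 * [["Overthrow", "Democracy", "Human rights"]]
    + 3 * [["Democracy", "Human rights", "Overthrow"]]
    + 3 * [["Human rights", "Democracy", "Overthrow"]]
)

def cast_ballots(options):
    return [next(pref for pref in ranking if pref in options) for ranking in voters]
```

In this electorate "Overthrow" leads the first round with only 4 of 10 votes, so the top two advance; the "Human rights" voters then fall back to "Democracy", which wins the head-to-head with a clear majority.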
&lt;br /&gt;
For this reason it is important that the [[community]] choose its voting mechanism. Let’s suppose the voting mechanism is the weighted system described here followed by an “optimization round”, which is analogous to a post-plurality vote that seeks an outright majority. Here, we could take the two highest ranking options, 2 and 3, and combine them in some way, eg:&lt;br /&gt;
&lt;br /&gt;
“US foreign policy should be to use its intelligence and military capabilities to actively overthrow dictatorships around the world while also promoting human rights. This means that we must ensure a new government (after the overthrow) is a respecter of human rights, we prioritize the dictatorship to be overthrown by how badly it violates human rights, and where overthrow is not possible we work to promote human rights in the country by direct help and undermining the violating government.”&lt;br /&gt;
&lt;br /&gt;
It is easy to see how this policy, which combines the two priorities of the voting community, might be favored more than each one individually. A new vote would confirm that. A couple of differently worded “combination” policies could be formulated and put up for a vote, with the highest one being the winner and the one adopted as the “official” policy.&lt;br /&gt;
&lt;br /&gt;
It is likely that a large, open ended question would result in few entries. Smaller, more focused questions might attract more contributors, eg “What should US policy toward Russia be in light of the Ukraine invasion?”. The sum of a series of such questions (and their answers) would then become the policy. The community, of course, would have to decide on such matters as how granular to make their participation. Perhaps they only want to participate at a high level or perhaps they want input into even the smallest decisions.&lt;br /&gt;
&lt;br /&gt;
== Issues with cardinal systems ==&lt;br /&gt;
&lt;br /&gt;
Our discussion of ordinal vs cardinal voting systems generally concludes that cardinal systems are better. This is basically because they take into account the strength of preference and do not run into [https://plato.stanford.edu/entries/arrows-theorem/ Arrow’s impossibility theorem]. However, it should be noted that they are not a panacea.&lt;br /&gt;
&lt;br /&gt;
Let’s suppose voters are asked what the most important things are to focus on:&lt;br /&gt;
&lt;br /&gt;
# Outlaw abortion&lt;br /&gt;
# Tough on crime&lt;br /&gt;
# Improve economy&lt;br /&gt;
&lt;br /&gt;
Each voter has 100 points to allocate. Two voters feel so strongly about abortion that they award all their points to that single issue. The other voters are united in viewing crime as the most important issue with abortion in last place. They happen to agree on their strength of preference but this is not a requirement to illustrate the point:&lt;br /&gt;
&lt;br /&gt;
[[File:a15e91268a77b2e776948e817f9e6ec0_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
Here the cardinal voting system produces a winner that overrides a clear majority preferring something else. This happens because of the presence of two extremist voters, but it can occur in more moderate cases as well. Our system, if it is to prefer cardinal voting systems (and I think it should, in general), needs to guard against these types of outcomes.&lt;br /&gt;
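&lt;br /&gt;
The failure mode can be made concrete with a short sketch. The ballot values below are purely illustrative (the actual figures are in the table image above): two extremists put everything on one issue, while a three-voter majority ranks it last.&lt;br /&gt;
&lt;br /&gt;
```python
# Hypothetical point-allocation ballots (illustrative values only; the
# article's actual table is in an image). Each voter splits 100 points
# across (abortion, crime, economy).
ballots = [
    (100, 0, 0),   # extremist voter 1: everything on abortion
    (100, 0, 0),   # extremist voter 2
    (10, 60, 30),  # voters 3-5: crime first, abortion last
    (10, 60, 30),
    (10, 60, 30),
]

# Cardinal tally: sum the allocated points for each issue.
totals = [sum(b[i] for b in ballots) for i in range(3)]
print(totals)  # [230, 180, 90] -- abortion wins on points

# Ordinal view: count first-place rankings instead.
first_choices = [max(range(3), key=lambda i: b[i]) for b in ballots]
print(first_choices.count(1))  # 3 -- a clear majority ranks crime first
```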
&lt;br /&gt;
One idea would be to identify extremism and somehow derate it. If a voter puts all their points in one basket we can weight their vote less. Another way to do this would be to spread out their vote and give an equal number of points to the other priorities. Note that doing this begins to turn the cardinal system into more of an ordinal one. We shouldn’t go all the way, of course, because ordinal systems have their own problems (as we’ve discussed) but we can come up with a set of adjustable rules that can be chosen by users.&lt;br /&gt;
&lt;br /&gt;
Clearly this gives rise to an optimization exercise where some social metric of goodness is the objective function. Cardinal systems are already required to have weights for each criterion which can then be rolled up into an overall figure of merit and displayed on the y-axis of a Pareto chart. Obviously an approach like this appeals to engineers, especially systems engineers, but there is no reason why it can’t be used for [[System modeling|social policy]], which is nothing more than a system of people. We should note that [https://jplteamx.jpl.nasa.gov/ JPL’s Team X], an advanced conceptual design center for spacecraft, uses both [[wikipedia:Systems engineering|systems engineering]] tools and a surprisingly manual social approach to finalizing system designs. They literally have a designated specialist go around the room and synthesize every engineer’s viewpoint until a consensus emerges.&lt;br /&gt;
&lt;br /&gt;
Donations to political causes and candidates are a form of cardinal voting, and a negative one at that. This is an example of a few actors with vastly greater resources than most tilting the playing field. In the US, at least, this practice has reached a level where it is tantamount to [[Political corruption|corruption]].&lt;br /&gt;
&lt;br /&gt;
Formal cardinal voting systems do not permit some people to have vastly more “points” than others. Everyone is presumably allocated the same number by default with adjustments made, perhaps, on the basis of [[Reputation|reputation]]. This would presumably hold for each individual vote, but we could extend the idea across multiple votes. We could, for instance, allocate a certain number of points to each user for all the votes currently being held and thereby force them to use those points judiciously. This would give policy makers a quick idea of what is really at issue and what can safely be ignored. It is common that elections revolve around a few key concerns even though the policy making arena is huge at any given time.&lt;br /&gt;
&lt;br /&gt;
== Modifying cardinal systems ==&lt;br /&gt;
&lt;br /&gt;
Above we mentioned that in cardinal voting systems people can skew the results by putting all their voting points on a single issue. We showed the case of an ordinal system with a clear majority preference vs a cardinal system which won on points because of single issue voters:&lt;br /&gt;
&lt;br /&gt;
[[File:fba2c4732ca6278f492f86051857a9d4_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
Needless to say, this result is not desirable. The winner should be the preference of Voters 3, 4, and 5, which is the result that an ordinal system would produce (this result is not affected by Arrow’s theorem because a clear majority favors it). One way to handle this is to distribute the votes of the single issue voters in some way. Here we propose two ways to do this.&lt;br /&gt;
&lt;br /&gt;
=== A cardinal to ordinal model ===&lt;br /&gt;
&lt;br /&gt;
We can view an ordinal voting system as a special case of a cardinal system with equally spaced preferences. For example, let’s suppose we have 4 choices in a cardinal system:&lt;br /&gt;
&lt;br /&gt;
[[File:9acbbb7deb9bf39016994594c46cecec_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
The only reason for 4 choices instead of the 3 we used previously is that it is easier to generalize 4 to any number of choices. So, we’d like to take this distribution and create one where the spacing between the rankings is the same. We will call this an “ordinal” result whereas the original vote was a cardinal result:&lt;br /&gt;
&lt;br /&gt;
[[File:b368916ba1c923d12ac1b1c853a4dc66_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
Note that the ratio of high to low values of the resulting “ordinal” vote matches that of the original cardinal vote. This is an arbitrary, although reasonable, constraint used to close the system of equations which will be discussed below.&lt;br /&gt;
&lt;br /&gt;
We further note that this cardinal vote is not exactly the single issue vote we saw above. There is a distribution of preference for all the choices but the last three choices tend to cluster near the bottom while the first choice is weighted very highly. Let’s do this case first and then discuss how we can handle the situation where a voter assigns all their points to their first choice and zero to the remaining choices.&lt;br /&gt;
&lt;br /&gt;
The ordinal result is achieved with the following set of equations:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;f_1 = {{A_b-C_r}\over {C_r-E_c}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;f_2 = {{C_r-E_c}\over {E_c-F_p}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;f_{1new} = f_1-F_o(f_1-1)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;f_{2new} = f_2-F_o(f_2-1)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;A_{bnew} - C_{rnew} = f_{1new}(C_{rnew}-E_{cnew})&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;C_{rnew} - E_{cnew} = f_{2new}(E_{cnew}-F_{pnew})&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;A_{bnew}+C_{rnew}+E_{cnew}+F_{pnew}=N&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;F_d = {A_{bnew}\over F_{pnew}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;A_b = 80&amp;lt;/math&amp;gt; Preference for Abortion issue&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;C_r = 10&amp;lt;/math&amp;gt; Preference for Crime issue&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;E_c = 7&amp;lt;/math&amp;gt; Preference for Economy issue&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_p = 3&amp;lt;/math&amp;gt; Preference for Foreign policy issue&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;N = 100&amp;lt;/math&amp;gt; Number of preference points available per voter to be distributed among the issues&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_o = 1&amp;lt;/math&amp;gt; Ordinality factor, 0 = original cardinal vote, 1 = fully ordinal, ie equally spaced&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_d = 26.66666667&amp;lt;/math&amp;gt; Distribution factor – to change the hi/lo distribution. &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_d &amp;lt; F_{dorig} = {A_b \over F_p}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This system of equations, when solved, yields the result under the Ordinal vote in the table above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;A_{bnew} = 48.193&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;C_{rnew} = 32.731&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;E_{cnew} = 17.269&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_{pnew} = 1.807&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The model has two user adjustable factors, &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_o&amp;lt;/math&amp;gt; and &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_d&amp;lt;/math&amp;gt;. &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_o&amp;lt;/math&amp;gt; is an “ordinality” factor which governs the evenness of the spacing between the choices. &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_o=0&amp;lt;/math&amp;gt; means spacing the same as the original and &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_o=1&amp;lt;/math&amp;gt; means spacing that is even. We presume that even spacing would equate to an ordinal system but this is just supposition. An ordinal system is just a rank ordering and doesn’t really have spacing between the choices. But we implicitly treat ordinal systems as if they had even spacing. One would think that the closest a cardinal system could get to an ordinal system, and still be a cardinal system, is when the choices are evenly spaced.&lt;br /&gt;
&lt;br /&gt;
According to the model then, &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_o=1&amp;lt;/math&amp;gt; is evenly spaced, a fact which can be confirmed by noting that the difference between successive preferences is the same, ie:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;A_{bnew} - C_{rnew} = C_{rnew} - E_{cnew} = 48.193 - 32.731 = 32.731 - 17.269 = 15.462&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;C_{rnew} - E_{cnew} = E_{cnew} - F_{pnew} = 32.731 - 17.269 = 17.269 - 1.807 = 15.462&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If, say, &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_o = 0.5&amp;lt;/math&amp;gt; then the spacing will be somewhere between that of the cardinal system and the fully “ordinal” one. We can create a table which gives us results for a few &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_o&amp;lt;/math&amp;gt; values to see how this works more clearly:&lt;br /&gt;
&lt;br /&gt;
[[File:8a232a5e522c796483faef89ff773eff_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
The transition to even spacing with &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_o&amp;lt;/math&amp;gt; is clearly not linear but the factor gives the user the ability to choose how to model single issue extremism and other skewed preference votes.&lt;br /&gt;
&lt;br /&gt;
Another user-adjustable factor, &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_d&amp;lt;/math&amp;gt; governs the ratio between the highest and lowest scoring choice. Here we have set the ratio given the original cardinal vote,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;F_d = {80\over 3} = 26.6666667&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and simply maintained it throughout. Fixing this ratio is required to close the system of equations, and a different constraint could certainly have been chosen. Note that choosing &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_d = 1.0&amp;lt;/math&amp;gt; here would force all the preferences to be equal. Choosing &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_d &amp;lt; 1.0&amp;lt;/math&amp;gt; results in a reversal of the preference ordering.&lt;br /&gt;
&lt;br /&gt;
We noted above that this example was not exactly the single issue vote shown earlier. Indeed, if we had tried a single issue case here, it would not have found a solution. For example, the following case would not work:&lt;br /&gt;
&lt;br /&gt;
[[File:30c8b578c7625f6fb4d0704a820163be_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
The problem is that the denominators in the equations &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;f_1 = {{A_b-C_r}\over {C_r-E_c}}&amp;lt;/math&amp;gt; and &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;f_2 = {{C_r-E_c}\over {E_c-F_p}}&amp;lt;/math&amp;gt; are zero, which causes the system to fail. This system cannot even deal with a tie vote for the same reason. Nevertheless this problem is easy to fix by simply adjusting the votes to be close but not equal to a single issue vote, eg&lt;br /&gt;
&lt;br /&gt;
[[File:13ae06735905cf197394d712ec5caedb_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
We note here that the answer converges as the adjusted cardinal vote more closely approximates the original single issue vote. We further note that the tie votes have been given some small differences to prevent the divide by zero error discussed above and are, hence, artificially ranked. This, of course, leads to an artificial ranking in the final result which could be kept but might be best handled by simply making them all the same:&lt;br /&gt;
&lt;br /&gt;
[[File:07ae1324ff849a893ad4e7bab588626c_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
Doing this treats them all equally in any final aggregation across multiple voters.&lt;br /&gt;
&lt;br /&gt;
This system of equations can be solved with any simultaneous equation solver. We used our own [https://www.solvercad.com SolverCAD] for this case. An input file for the above set is attached here: [[Media:8ee08d1d08e0ed1a51cb070bd8c42596_card_to_ord_4.scin|card_to_ord_4.scin]]. The system as shown here only works for four preferences. It is straightforward, however, to generalize it to any number of choices. A dedicated program could be written to take in the input preferences, generate the requisite equations, and call SolverCAD to solve them. Alternatively, an algebraic solution is probably obtainable that could be generalized to any number of equations and written directly into such a dedicated program.&lt;br /&gt;
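&lt;br /&gt;
As a sketch of what such a dedicated program might look like: once the spacing ratios are fixed, the remaining equations are linear in the new preferences and can be solved by direct substitution, with no iterative solver needed. The Python below is a hypothetical helper (not the SolverCAD input) that reproduces the four-preference solution given above.&lt;br /&gt;
&lt;br /&gt;
```python
def card_to_ord(Ab, Cr, Ec, Fp, N=100.0, Fo=1.0, Fd=None):
    """Redistribute a 4-choice cardinal vote toward even ("ordinal") spacing.

    Fo: ordinality factor (0 = original cardinal vote, 1 = equally spaced).
    Fd: ratio of highest to lowest result; defaults to the original Ab/Fp.
    """
    if Fd is None:
        Fd = Ab / Fp
    f1 = (Ab - Cr) / (Cr - Ec)      # original spacing ratios
    f2 = (Cr - Ec) / (Ec - Fp)
    f1n = f1 - Fo * (f1 - 1.0)      # blended toward 1 as Fo -> 1
    f2n = f2 - Fo * (f2 - 1.0)
    # With f1n, f2n fixed the system is linear; eliminate Ab_new and
    # Cr_new by substitution, leaving everything in terms of Fp_new.
    ke = (Fd + (1.0 + f1n) * f2n) / ((1.0 + f1n) * (1.0 + f2n) - f1n)
    kc = (1.0 + f2n) * ke - f2n
    Fp_new = N / (Fd + kc + ke + 1.0)
    return (Fd * Fp_new, kc * Fp_new, ke * Fp_new, Fp_new)

print(card_to_ord(80, 10, 7, 3, Fo=1.0))
# approximately (48.193, 32.731, 17.269, 1.807) -- the solution above
print(card_to_ord(80, 10, 7, 3, Fo=0.0))
# Fo = 0 recovers the original vote (80, 10, 7, 3)
```
&lt;br /&gt;
Like the equation set itself, this sketch divides by the original spacings and so inherits the same divide-by-zero limitation for tie and pure single issue votes.&lt;br /&gt;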
&lt;br /&gt;
=== A simpler model ===&lt;br /&gt;
&lt;br /&gt;
A simpler model can be obtained with the following set of equations:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;A_{bnew} - C_{rnew} = f(A_b - C_r)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;C_{rnew} - E_{cnew} = f(C_r - E_c)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;E_{cnew} - F_{pnew} = f(E_c - F_p)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;A_{bnew} + C_{rnew} + E_{cnew} + F_{pnew} = N&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;A_b = 80&amp;lt;/math&amp;gt; Abortion preference&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;C_r = 10&amp;lt;/math&amp;gt; Crime preference&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;E_c = 7&amp;lt;/math&amp;gt; Economy preference&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;F_p = 3&amp;lt;/math&amp;gt; Foreign policy preference&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;N = 100&amp;lt;/math&amp;gt; Total preference points available to be distributed among all the choices&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;0.0 \leq f \leq 1.0&amp;lt;/math&amp;gt; user factor where 1.0 represents no change and 0.0 represents all values equal.&lt;br /&gt;
&lt;br /&gt;
This model is quite a bit simpler but doesn’t have a mechanism to enforce equal spacing (unless we count &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;f=0.0&amp;lt;/math&amp;gt;) or to control the ratio of largest to smallest preference. It does not, in short, create an “ordinal” system as described above. But it is a way of redistributing extreme votes that preserves, in some sense, the skewness of the original distribution. It also has the advantage of handling tie votes out of the box. We can generate the following table of results for different values of &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;f&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
[[File:2f58bf459daa3784225b6d787e7e34d0_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
As &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;f&amp;lt;/math&amp;gt; decreases from 1.0 to 0.0 the original distribution contracts until all the values are exactly the same. Any value above 0.0 produces a distribution that maintains, to one extent or another, the fact that the top-ranked choice (Abortion) is quite a bit higher than the other choices.&lt;br /&gt;
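&lt;br /&gt;
This simpler model also admits a closed-form solution, so no equation solver is required: scaling every pairwise gap by f while preserving the total N is equivalent to blending each preference with the mean N divided by the number of choices. A minimal sketch (function name hypothetical):&lt;br /&gt;
&lt;br /&gt;
```python
def redistribute(prefs, f, N=100.0):
    """Shrink the spacing of a cardinal vote by factor f, keeping the sum at N.

    Scaling every pairwise difference by f while preserving the total is
    equivalent to blending each value with the mean N/len(prefs).
    """
    mean = N / len(prefs)
    return [f * x + (1.0 - f) * mean for x in prefs]

print(redistribute([80, 10, 7, 3], 1.0))  # [80.0, 10.0, 7.0, 3.0] -- unchanged
print(redistribute([80, 10, 7, 3], 0.5))  # [52.5, 17.5, 16.0, 14.0]
print(redistribute([80, 10, 7, 3], 0.0))  # [25.0, 25.0, 25.0, 25.0] -- all equal
```
&lt;br /&gt;
Because nothing here divides by a spacing, tie votes and pure single issue votes pass through without any special handling.&lt;br /&gt;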
&lt;br /&gt;
This system was also solved in SolverCAD with the input file attached here: [[Media:1ed6574da7da27b77faed95705a75dca_card_to_ord_5.scin|card_to_ord_5.scin]].&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Capital_vs._Labor:_a_theory_of_society&amp;diff=2359</id>
		<title>Capital vs. Labor: a theory of society</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Capital_vs._Labor:_a_theory_of_society&amp;diff=2359"/>
		<updated>2024-10-01T19:35:34Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Community}}&lt;br /&gt;
&lt;br /&gt;
The ratings based society should start with a theory for how society works in the first place. One view is that the key to societal function, particularly in a democracy, lies in the relationship between the owning class and the working class. This view is broadly Marxian although it is not necessarily Marxist. It is the reason why we have gravitated toward a moneyless society, rules for income (or resource) distribution, and an emphasis on need as well as rated merit in distributing economic resources.&lt;br /&gt;
&lt;br /&gt;
Although modern societies have come a long way since Marx wrote, they still bear out his essential insight: capital and labor are inherently in conflict. Society has little choice in this. Labor and capital are &amp;lt;i&amp;gt;both&amp;lt;/i&amp;gt; required for society to function. And generally those who control the capital are not the ones who do the work, although as we will see, it is precisely on this point where change may be possible.&lt;br /&gt;
&lt;br /&gt;
Furthermore, modern societies bear out the consequences of this relationship. The ones that treat their working classes the worst score lower in overall freedom and in economic metrics such as [https://hdr.undp.org/data-center/human-development-index#/indicies/HDI HDI]. The ones that treat them the best score the highest. It turns out that it pays to treat workers well.&lt;br /&gt;
&lt;br /&gt;
The advanced western democracies are witnessing what happens when we turn away from treating the working class fairly. As might be expected, they tend to get angry. The [[wikipedia:Make America Great Again|MAGA movement]], whose singular political accomplishment has been to galvanize the working class, is a direct consequence of that anger. So much so that it has broken with Republican orthodoxy on immigration and free trade, while elevating a whole host of cultural issues far beyond where mainstream conservatives were comfortable. The populist takeover of the [[wikipedia:Republican Party (United States)|Republican party]] can be seen as a reaction to [[wikipedia:Neoliberalism|neoliberal]] views on markets and labor.   &lt;br /&gt;
&lt;br /&gt;
Our [[ratings system]] anticipates and encourages a correct relationship between capital and labor by subjecting these facets of society, like everything else, to ratings and votes. We reject the notion of an invisible “market”, one that limits our actions, as the product of a disinformation campaign waged by interested parties. People create the rules by which society functions, period. Therefore the [[Ratings system|ratings system]], which is the mechanism of rule creation (and evaluation), is the ultimate source of authority in a ratings-based society.&lt;br /&gt;
&lt;br /&gt;
This is not to say that mechanisms to limit popular will shouldn’t be put in place. The people of a [[community]] should adhere to [[Philosophy of John Rawls|Rawlsian]] concepts of basic liberties and hold these as inviolate, no matter what their momentary passions may dictate. They should also build [[hysteresis]] (XXX) into their governance to avoid sudden drastic changes. The need for a proper balance between the [[The subjective and community ratings system|CRS]] and an effective governance structure is obvious.&lt;br /&gt;
&lt;br /&gt;
Even if these ideas work, the working class also wants respect. In fact, they may want that more than anything else. Decades of neoliberalism (XXX) have not only hollowed out their economic prospects but also been accompanied by a distinct contempt for them held by professionals and elites. They are different: they don’t act or talk like we do (XXX). We look down on them. &lt;br /&gt;
&lt;br /&gt;
Will the ratings system, one that seems to heavily favor those adept at processing information, treat the working class with the respect they deserve? We would hope first that the economic system created (XXX) by the ratings system would start by compensating workers fairly. This is step 1 and it will probably come naturally to any [[Community|community]]-building effort as soon as any real work is called for. Almost nothing can be accomplished without the working class. Step 2, however, is to use [[Heuristics and policy-making|heuristics]] and [[Societal optimization|optimization techniques]] (XXX) to minimize the cognitive load required for [[Direct democracy|direct democratic governance]]. This will start by modifying the language of governance to what normal people understand (as opposed to, say, those with law degrees). This will benefit everyone, not just the working class. Having meaningful input without requiring expertise will be an important step in making governance an acceptable duty for all to bear. &lt;br /&gt;
&lt;br /&gt;
Ultimately respect evolves from cultural conditions. We can design an inclusive economic and governance framework but we cannot design cultural attitudes. We can only hope that the correct attitudes emerge from the designed frameworks and the realization that [[wikipedia:Egalitarianism|egalitarianism]] requires a certain mindset to accompany it.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Binned_and_continuous_distributions&amp;diff=2358</id>
		<title>Binned and continuous distributions</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Binned_and_continuous_distributions&amp;diff=2358"/>
		<updated>2024-10-01T19:05:11Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Aggregation techniques}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
The simple [[predicate]] question we’ve been considering can be turned into one requiring a distribution, aka a probability density function. For example, if we ask whether it will be Sunny or Cloudy tomorrow the respondent can answer by saying:&lt;br /&gt;
&lt;br /&gt;
# Sunny or Cloudy.&lt;br /&gt;
# % chance of each. This is, in a sense, the coarsest [[wikipedia:Probability distribution|distribution]] possible and is the type of answer we have been considering so far.&lt;br /&gt;
# An actual distribution where we create a continuous variable, 0-1, to represent the degree of Cloudiness. For example 0 is no clouds and 1 is a densely overcast sky.&lt;br /&gt;
&lt;br /&gt;
Cases 1 and 2 are the ones we’ve been assuming so far and already have the math for. Case 1, as we’ve noted previously, is a specific variant on 2 where the probabilities are assumed to be 100% or 0% (eg 100% Sunny / 0% Cloudy or vice versa). And although both 1 &amp;amp;amp; 2 are specific cases of 3 in the sense that they are “distributions”, we handle them mathematically in a slightly different way than an actual distribution. Specifically, case 3 has a meaningful x-axis which represents a scale against which a probability density can be plotted. Cases 1-2 are fully discrete categories and we simply use the probabilities themselves.&lt;br /&gt;
&lt;br /&gt;
This post is about Case 3. First we will discuss “binned” distributions and then move on to the fully continuous case.&lt;br /&gt;
&lt;br /&gt;
== Binned Distributions ==&lt;br /&gt;
&lt;br /&gt;
This situation is very similar to the one Sapienza wrote about in his paper: https://ceur-ws.org/Vol-1664/w9.pdf.&lt;br /&gt;
&lt;br /&gt;
Let’s represent the Cloudiness example with a number of discrete bins along the x-axis: 0-0.2 is sunny, 0.2-0.4 is mostly sunny, 0.4-0.6 is partly cloudy, 0.6-0.8 is overcast, 0.8-1.0 is thick overcast.&lt;br /&gt;
&lt;br /&gt;
Our first source provides a distribution as follows:&lt;br /&gt;
&lt;br /&gt;
[[File:2ddb53976c56e7099f4f0d8fa225adb2_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
This is a [[wikipedia:Probability density function|probability density function]] (PDF) and the total area under the curve is 1:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\int_{x=0}^{x=1} P_{d}dx = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The total of 1 represents certainty, ie that one of the outcomes in the distribution will happen. If we want to know the probability of an event lying between any two points on the x-axis, we write:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\int_{x=x_0}^{x=x_1} P_{d}dx = P&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, the probability of overcast skies (0.6-0.8) is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;P_{overcast} = \int_{x=0.6}^{x=0.8} P_{d}dx = 2(0.8-0.6) = 0.4&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For simple cases like this one where &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_d&amp;lt;/math&amp;gt; is constant over some interval &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;x&amp;lt;/math&amp;gt;, &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P =P_d \Delta x&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A second source produces the following distribution:&lt;br /&gt;
&lt;br /&gt;
[[File:30415947a5bb34d0f7820fb82eac1944_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
Our goal is to combine these sources via [[Bayes&#039; theorem|Bayes]] and the [[Aggregation techniques|averaging techniques]] we have seen.&lt;br /&gt;
&lt;br /&gt;
For the Bayesian combination we can write the combined probability density for an interval &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;x_1&amp;lt;/math&amp;gt; to &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;x_2&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;P_{comb,x_1tox_2} = {{P_{d1,x_1tox_2}P_{d2,x_1tox_2}}\over {\int_{x=0}^{x=1} P_{d1}P_{d2}dx}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, for the interval 0.6 to 0.8, the combined &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_d&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;P_{comb,0.6to0.8} = {2(1)\over {0.2(0.15)(0.35)+0.2(0.35)(2)+0.2(1)(1.5)+0.2(2)(1)+0.2(1.5)(0.15)}} = 2.23&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Doing this for each interval and plotting the results leads to the following graph of combined &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_d&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
[[File:403b1549bbf50de3d08015154005457d_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
This graph can then be used as the updated distribution to which new source distributions can be combined in exactly the same manner. In general we see that combining distributions is no different than using the Bayes equation as we have, except that we have more sub-intervals to consider and the denominator is an integral rather than a summation of probability products.&lt;br /&gt;
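&lt;br /&gt;
The per-bin combination can be sketched in a few lines of Python. The bin heights below are read from the worked example above (the denominator of the 0.6–0.8 calculation), and the function name is hypothetical:&lt;br /&gt;
&lt;br /&gt;
```python
def bayes_combine(pd1, pd2, dx):
    """Pointwise Bayesian combination of two binned PDFs over the same bins.

    The normalizer is the integral of the product of the two densities,
    which brings the result back to unit area.
    """
    norm = sum(a * b for a, b in zip(pd1, pd2)) * dx
    return [a * b / norm for a, b in zip(pd1, pd2)]

# Heights of the five 0.2-wide bins, taken from the worked example.
pd1 = [0.15, 0.35, 1.0, 2.0, 1.5]   # source 1 (area = 1)
pd2 = [0.35, 2.0, 1.5, 1.0, 0.15]   # source 2 (area = 1)

combined = bayes_combine(pd1, pd2, dx=0.2)
print(round(combined[3], 2))   # overcast bin (0.6-0.8): 2.23, as above
print(sum(combined) * 0.2)     # ~1.0, so the result is still a valid PDF
```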
&lt;br /&gt;
Averaging or [[trust]]-weighted averaging works exactly the same way: each sub-interval is taken and averaged to produce a new distribution. A straight average of the overcast case above leads to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;P_{d,ave,overcast} = {P_{d1,0.6to0.8} + P_{d2,0.6to0.8} \over 2} = {{2+1} \over 2} = 1.5&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Doing this for each subinterval leads to the following distribution:&lt;br /&gt;
&lt;br /&gt;
[[File:9a385b80143861f411ed3fb6d7197434_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
[[Trust|Trust]] weighted averaging also works the way you’d expect, with the trust-weighting and averaging applied within each sub-interval. Again, for the 0.6 to 0.8 sub-interval, assuming a Trust=1.0 for the 1st source and a Trust=0.7 for the 2nd Source:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;P_{d,tave,overcast} = {T_1 P_{d1,0.6to0.8} + T_2 P_{d2,0.6to0.8} \over {T_1 + T_2}} = { {1(2)+0.7(1)} \over {1+0.7} } = 1.588&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Doing this over each sub-interval and plotting leads to:&lt;br /&gt;
&lt;br /&gt;
[[File:f9d0e2fd462652a2af8f477771ba7572_image.png|image]]&lt;br /&gt;
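&lt;br /&gt;
The same trust-weighted calculation can be sketched in Python (function name hypothetical; bin heights as in the example above). Note that a straight average is just the special case where every trust is equal:&lt;br /&gt;
&lt;br /&gt;
```python
def trust_weighted_average(pds, trusts):
    """Trust-weighted average of binned PDFs, taken bin by bin."""
    total_trust = sum(trusts)
    return [sum(t * pd[i] for t, pd in zip(trusts, pds)) / total_trust
            for i in range(len(pds[0]))]

pd1 = [0.15, 0.35, 1.0, 2.0, 1.5]   # source 1, trust 1.0
pd2 = [0.35, 2.0, 1.5, 1.0, 0.15]   # source 2, trust 0.7

avg = trust_weighted_average([pd1, pd2], [1.0, 0.7])
print(round(avg[3], 3))  # overcast bin (0.6-0.8): 1.588, as above

# Equal trusts reduce to the straight average (1.5 for the overcast bin).
plain = trust_weighted_average([pd1, pd2], [1.0, 1.0])
print(plain[3])  # 1.5
```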
&lt;br /&gt;
This Excel spreadsheet (Sheet 1), [[Media:5060625a0dbac7a801981bc38d9e92ae_binning.xlsx|binning.xlsx]], and this [https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/144 snippet] reproduce the calculations shown here.&lt;br /&gt;
&lt;br /&gt;
== Continuous Distributions ==&lt;br /&gt;
&lt;br /&gt;
The above situation was a semi-continuous distribution, piecewise if you will. For the case of a fully continuous distribution, where the source provides a function, we can largely approach it the same way, by breaking up the function into discrete sub-intervals and performing numerical integration.&lt;br /&gt;
&lt;br /&gt;
Let’s suppose Source 1 answers the question with a [[wikipedia:Weibull distribution|Weibull distribution]]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;P_{d1} = {k \over \lambda} { \left({x \over \lambda}\right) }^{k-1} e^{-\left({x \over \lambda}\right)^k}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;k = 7&amp;lt;/math&amp;gt; (known as a “shape” parameter)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;\lambda = 0.7&amp;lt;/math&amp;gt; (known as a “scale” parameter)&lt;br /&gt;
&lt;br /&gt;
However, it is not important what the functional form of the distribution is (this is just an example) as long as the area under the curve is 1. If we plot this distribution we obtain:&lt;br /&gt;
&lt;br /&gt;
[[File:804b25df27c20fcb5dff1073ee7bb748_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
As we saw above, the probability of the cloudiness extent being in any given sub-interval is the area under the curve of that sub-interval. We obtain the area by integrating numerically. If we break up the function into 10 equally spaced sections we can find the area for the sub-interval x=0.6 to 0.8 by adding the areas for the sections 0.6 to 0.7 (yellow region) and 0.7 to 0.8 (green region):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;P_{0.6to0.8} = \int_{x=0.6}^{x=0.8} P_{d1}(x)dx \approx 0.5(P_{d1.6}+P_{d1.7})(\Delta x) + 0.5(P_{d1.7}+P_{d1.8})(\Delta x)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;= 0.5(2.8229+3.6788)(0.1) + 0.5(3.6788 + 1.7459)(0.1) = 0.5963 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This area is not very accurate because we only used 10 sections. But if we break the function up into more sections (i.e., make &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;\Delta x&amp;lt;/math&amp;gt; small enough) we can obtain an accurate approximation of the area. We can then sum these small areas to find the area of any reasonably sized sub-interval.&lt;br /&gt;
&lt;br /&gt;
Note that we are using trapezoidal areas here rather than the simple rectangular areas used in the binned example. This is only for the purpose of numerical accuracy. We could have used rectangular areas but then we would require more sections (a finer grid), and hence more computation, to achieve good accuracy.&lt;br /&gt;
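As an illustration, here is a small Python sketch (our own, hypothetical; the function names are not from the linked spreadsheet or snippet) of the trapezoidal integration used above:&lt;br /&gt;

```python
import math

# Weibull density for Source 1: P_d1 = (k/λ)(x/λ)^(k-1) e^{-(x/λ)^k}
def weibull_pdf(x, k=7.0, lam=0.7):
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

# Composite trapezoidal rule over [a, b] with n equal sections
def trapezoid(f, a, b, n):
    dx = (b - a) / n
    return sum(0.5 * (f(a + i * dx) + f(a + (i + 1) * dx)) * dx
               for i in range(n))

# Two sections of width 0.1 covering x = 0.6 to 0.8
area = trapezoid(weibull_pdf, 0.6, 0.8, 2)
print(round(area, 4))  # 0.5963
```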
&lt;br /&gt;
Let’s introduce a 2nd source who answers the question with a 2nd Weibull distribution but shaped and scaled a little differently:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;k = 3&amp;lt;/math&amp;gt; (“shape” parameter)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;\lambda = 0.3&amp;lt;/math&amp;gt; (“scale” parameter)&lt;br /&gt;
&lt;br /&gt;
[[File:d6297d346b2e9582a2d614d861e9292c_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
Source 2 believes the weather tomorrow will be sunnier than Source 1 does. To combine these distributions via Bayes, we write:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;P_{d,comb,bay,x_1tox_2} = {{P_{d1,x_1tox_2}P_{d2,x_1tox_2}}\over {\int_{x=0}^{x=1} P_{d1}P_{d2}dx}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The numerator is just the product of &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_d&amp;lt;/math&amp;gt; for each source at the desired &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;x&amp;lt;/math&amp;gt;. The denominator is the area under the curve of a function comprising the product of &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{d1}&amp;lt;/math&amp;gt; and &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{d2}&amp;lt;/math&amp;gt;. This area is found by integrating numerically using the same technique as above (with trapezoidal sections):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;{\int_{x=0}^{x=1} P_{d1}P_{d2}dx} \approx \sum_{i=1}^n 0.5 \lbrack P_{d1}P_{d2}(x_{min}+(i-1)\Delta x) + P_{d1}P_{d2}(x_{min}+i\Delta x) \rbrack \Delta x&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;n&amp;lt;/math&amp;gt; is the number of sections (=10 for example)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;x_{max}&amp;lt;/math&amp;gt; is the maximum value of &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;x&amp;lt;/math&amp;gt; (=1 in this case)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;x_{min}&amp;lt;/math&amp;gt; is the minimum value of &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;x&amp;lt;/math&amp;gt; (=0 in this case)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;\Delta x = {x_{max} - x_{min} \over n}&amp;lt;/math&amp;gt;&lt;br /&gt;
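This normalization can be sketched in a few lines of Python (a hypothetical sketch of ours, not the spreadsheet), here computing the combined probability of the x = 0.6 to 0.8 sub-interval at n = 10:&lt;br /&gt;

```python
import math

def weibull_pdf(x, k, lam):
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

def pd1(x):  # Source 1: k = 7, λ = 0.7
    return weibull_pdf(x, 7.0, 0.7)

def pd2(x):  # Source 2: k = 3, λ = 0.3
    return weibull_pdf(x, 3.0, 0.3)

n, x_min, x_max = 10, 0.0, 1.0
dx = (x_max - x_min) / n

# Denominator: trapezoidal integral of P_d1 * P_d2 over [0, 1]
denom = sum(
    0.5 * (pd1(x_min + i * dx) * pd2(x_min + i * dx)
           + pd1(x_min + (i + 1) * dx) * pd2(x_min + (i + 1) * dx)) * dx
    for i in range(n))

def p_bay(x):  # Bayesian-combined probability density
    return pd1(x) * pd2(x) / denom

# Probability of the 0.6-0.8 sub-interval: sum of two trapezoidal areas
p = sum(0.5 * (p_bay(0.6 + j * dx) + p_bay(0.6 + (j + 1) * dx)) * dx
        for j in range(2))
print(round(p, 4))  # ≈ 0.0166 at n = 10
```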
&lt;br /&gt;
If we do this for all &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;n&amp;lt;/math&amp;gt; sections we can plot the results along with the &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_d&amp;lt;/math&amp;gt; for the first two sources for comparison.&lt;br /&gt;
&lt;br /&gt;
[[File:2fa42886b949e56359636e5d5a054cab_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
We can calculate the straight average at &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;x=0.6&amp;lt;/math&amp;gt; (for example) and obtain:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;P_{d,ave} = {P_{d1,x=.6} + P_{d2,x=.6} \over 2} = {{2.823+.0134} \over 2} = 1.418&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the trust-weighted average with &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_1 = 1.0&amp;lt;/math&amp;gt; and &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_2 = 0.7&amp;lt;/math&amp;gt; also at &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;x=0.6&amp;lt;/math&amp;gt; we have:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;P_{d,tave} = {1.0(P_{d1,x=.6}) + 0.7(P_{d2,x=.6}) \over 1 + 0.7} = {{2.823+0.7(.0134)} \over 1 + 0.7} = 1.666&amp;lt;/math&amp;gt;&lt;br /&gt;
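These two averages can be checked with a few lines of Python (a hypothetical sketch):&lt;br /&gt;

```python
import math

def weibull_pdf(x, k, lam):
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

p1 = weibull_pdf(0.6, 7.0, 0.7)  # Source 1 density at x = 0.6 (≈ 2.823)
p2 = weibull_pdf(0.6, 3.0, 0.3)  # Source 2 density at x = 0.6 (≈ 0.0134)

p_ave = (p1 + p2) / 2            # straight average

t1, t2 = 1.0, 0.7                # trust in Source 1 and Source 2
p_tave = (t1 * p1 + t2 * p2) / (t1 + t2)  # trust-weighted average

print(round(p_ave, 3), round(p_tave, 3))  # 1.418 1.666
```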
&lt;br /&gt;
If we do these averages for several points, we can add the resulting plot to the plot above:&lt;br /&gt;
&lt;br /&gt;
[[File:eeb7dbf6fd7465ce5b16a051ab110000_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
A table of the values used for the case &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;n=10&amp;lt;/math&amp;gt; is shown here:&lt;br /&gt;
&lt;br /&gt;
[[File:ed3e6928a0d2250896403178cb93915f_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
The curves above were actually plotted for &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;n=100&amp;lt;/math&amp;gt; but the values at the points shown in this table correspond quite closely. An Excel spreadsheet to do the calculations is here (Sheet 2): [[Media:10dcbfa23b1a6f4c0242750acf94fe5d_binning.xlsx|binning.xlsx]]&lt;br /&gt;
&lt;br /&gt;
Now, to find the probability of any sub-interval within any of these combined curves, we numerically integrate as shown above for Source 1. This is equivalent to adding up each section’s area, the dA values shown in the table and spreadsheet above.&lt;br /&gt;
&lt;br /&gt;
For the case of Source 1 between &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;x=0.6&amp;lt;/math&amp;gt; and &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;x=0.8&amp;lt;/math&amp;gt; we have:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;P_{1,0.6to0.8} = 0.325085 + 0.271235 = 0.5963&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This, you will notice, is the same value as that produced above when we performed the numerical integration “from scratch”.&lt;br /&gt;
&lt;br /&gt;
For Source 2, we have:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;P_{2,0.6to0.8} = 0.0006792 + 8.29581(10)^{-6} = 0.000687496&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Bayesian combination we have:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;P_{bay,0.6to0.8} = 0.01633 + 0.000258 = 0.01659&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and so on. The numbers above refer to the &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;n=10&amp;lt;/math&amp;gt; case. We can perform these calculations for all the curves for &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;n=10&amp;lt;/math&amp;gt; and &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;n=100&amp;lt;/math&amp;gt; (values in the Excel spreadsheet) and produce the following table:&lt;br /&gt;
&lt;br /&gt;
[[File:13d00d94459adfe74da8ec4db1a0771e_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
We can see that Source 1 believes, to a large extent, in overcast skies tomorrow. Source 2, however, doesn’t believe that at all. The Bayesian combination, as it tends to do, pulls toward certainty and so produces a low probability of overcast skies. The average produces something in between, and the trust-weighted average favors Source 1 a little because it discounts Source 2 somewhat (Trust = 0.7).&lt;br /&gt;
&lt;br /&gt;
Also notable is the fact that we obtain a small but significant change in the values as we move from &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;n=10&amp;lt;/math&amp;gt; to &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;n=100&amp;lt;/math&amp;gt;. It isn’t clear from this alone that we have achieved “grid independence”, but we are very close. This [https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/145 snippet], which reproduces the calculations above, can be run at &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;n=1000&amp;lt;/math&amp;gt;; you will see that the answers don’t change much. We should refine the grid only when necessary, because refinement increases the cost of calculation enormously.&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
The key point in all this analysis is that the math we’ve developed so far applies to probability densities in exactly the same way as it does for simple probabilities. There’s just more of it, because you have to break up the distribution into small increments along the x-axis and do the math on each increment. But the math is the same.&lt;br /&gt;
&lt;br /&gt;
We’ve covered three possibilities so far for how respondents can answer questions: 1) They can assign a single probability to their answer, 2) They can provide a binned distribution, and 3) They can provide a truly continuous distribution. They can also just answer the question without giving a probability, but this is a special case of 1) in which we assume a probability of 100%.&lt;br /&gt;
&lt;br /&gt;
It is likely that 1 and 2 will be the preferred ways to answer. The continuous distribution, although important, seems like the province of experts who perform a study or simulation to come up with their function.&lt;br /&gt;
&lt;br /&gt;
Also, keep in mind that continuous distributions are only useful in cases where there is a continuous variable. The binned distribution can be used even when there isn’t one because, for truly discrete categories, you can simply break up the x-axis into equal-length sections and use the same math. This is what Sapienza did, for instance. You can also dispense with the x-axis altogether and use probabilities (instead of probability densities), as we’ve been doing in the work before this post.&lt;br /&gt;
&lt;br /&gt;
This brings up the question of why Sapienza chose to use a binned distribution for what was, at first glance, a discrete variable problem. One answer is that maybe he meant for his example to represent a continuous variation from good to bad weather. For simplicity, he then broke up the x-axis into equal length bins. It is certainly easier to think about continuous variations as a collection of discrete groupings. But the more likely explanation is that he did it because it is a more generalized method that works for all cases, discrete and continuous.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Bayesian_%26_non_Bayesian_approaches_to_trust_and_Wang_%26_Vassileva%27s_equation&amp;diff=2357</id>
		<title>Bayesian &amp; non Bayesian approaches to trust and Wang &amp; Vassileva&#039;s equation</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Bayesian_%26_non_Bayesian_approaches_to_trust_and_Wang_%26_Vassileva%27s_equation&amp;diff=2357"/>
		<updated>2024-10-01T18:49:48Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Technical overview of the ratings system}}&lt;br /&gt;
&lt;br /&gt;
== How Bayesian Approaches Restrict our Thinking, particularly on [[Trust]] ==&lt;br /&gt;
&lt;br /&gt;
If we adopt a non-Bayesian approach, it opens the door to many possible ways that Probability and Trust can be assigned. If you’d prefer to skip to that, just go to the next section. This section is something of an essay on how and why, in a Bayesian approach, our thinking becomes more limited.&lt;br /&gt;
&lt;br /&gt;
Bayes restricts us in both a hard technical sense and, more generally, in how we think about the information network. The Bayes equation requires prior probabilities, and conditional probabilities to update them with. These probabilities are, presumably, the result of rigorous experimental evidence produced by others (i.e., experts). This isn’t, of course, strictly necessary, but it is the way we usually think about it.&lt;br /&gt;
&lt;br /&gt;
The probabilities can then be modified with trust but, in this context, trust is generally thought of as the competence and honesty of the person performing the test and both are assumed to be relatively high. Trust has less to do with competence in [[Bayes&#039; theorem|Bayesian approach]]es (vs [[Aggregation techniques|averaging approaches]]) because they only require that the source run a test and report the result correctly. There’s competence in that, to be sure, but it involves little judgement or in-depth knowledge.&lt;br /&gt;
&lt;br /&gt;
Incidentally, in [[Modification_to_the_Sapienza_probability_adjustment_for_trust_to_include_random_lying,_bias,_and_biased_lying|previous posts]], the competence was modeled as the random part of Trust (&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_r&amp;lt;/math&amp;gt;) and the honesty modeled as Lies and Bias (&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_l&amp;lt;/math&amp;gt; and &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_b&amp;lt;/math&amp;gt;). Implicitly (but not necessarily) left out of the competence is the more important issue of the quality of the underlying experimental evidence. This is done because we assume, correctly or not, that such evidence has the backing of experts, has been peer-reviewed, etc. It is, furthermore, difficult for the layperson to independently verify this type of information. We assume, in effect, that the probabilities are right, in and of themselves, and that only the respondent can mess them up by reporting them incorrectly (or falsely).&lt;br /&gt;
&lt;br /&gt;
Here’s a simple example. If we have a family gathering and ask everyone to take Covid tests before the event, we usually trust that everyone will competently follow the simple instructions and report correctly whether they saw one line or two. We also trust that they will do this honestly. We don’t question the underlying testing that was done to validate the test. However, if during our get-together we ask everyone what the best economic system or form of government is, we will likely get plenty of [[debate]] questioning the reasoning, sources, and methods folks used to arrive at their answers. Questions will be raised, in effect, about the tests people are using to determine their answer. Our Trust methodology changes, as it should, as does our method for combining everyone’s [[opinion]] (averaging instead of Bayes).&lt;br /&gt;
&lt;br /&gt;
Furthermore, in a Bayesian approach, because Trust is limited, we tend to limit our view of who assigns it. Indeed, so far, the requestor of information is 100% responsible for assigning Trust to the respondent it is directly querying. If you have had interactions with someone, you feel pretty confident that you can assign a trust to them on questions of basic honesty and competence. This might work if all the respondent has to do is report the result of a test. However, if you then engage in a discussion with that person about weighty topics, you’ll often have more doubts about how far they can be trusted, not because they are suddenly dishonest but because they may not know a lot about the subject.&lt;br /&gt;
&lt;br /&gt;
You might then find it useful to engage others in determining to what extent the source can be trusted on a particular question.&lt;br /&gt;
&lt;br /&gt;
== How Non-Bayesian Trust Approaches Might Work ==&lt;br /&gt;
&lt;br /&gt;
The weighted average discussion last time brought out the notion that if we are not using Bayes we have a great deal of flexibility in what types of trust information could be provided, and who provides it.&lt;br /&gt;
&lt;br /&gt;
Answers do not necessarily have to be given in terms of probabilities. The source would simply report an answer (True / False) and leave it to the requestor (or others) to evaluate its probability (based on Trust or some component thereof). In a system like this we could simply count up all the True answers and all the False answers and display that. A percentage True/False could be calculated and the higher number would be the “winner”.&lt;br /&gt;
&lt;br /&gt;
An equation to perform this could simply use our current equations (Bayes or averaging) with 100% as the respondent’s probability. In an averaging approach this leads to the same result as simply averaging the answers. We will skip a demonstration of this because it is obvious.&lt;br /&gt;
&lt;br /&gt;
A remaining issue is that of assigning Trust. We would still want trust to modify probabilities, even if they start at 100%. For instance, sources could assign a confidence to their own answers. This could be treated mathematically like Sapienza’s trust-modified probability but it differs qualitatively in the sense that now we recognize that the probability is more nuanced than our Bayesian thinking suggested. In particular it recognizes that confidence in an answer is itself a judgement that might benefit from multiple views.&lt;br /&gt;
&lt;br /&gt;
More generally then, Trust, or let’s say Confidence in someone’s knowledge, can be assigned by multiple parties. The source can provide a confidence and we can evaluate that confidence on the basis of our trust. If we think they have an inflated view of their confidence (a common problem) we can adjust it downward with our own view using an averaging technique. We could weight each confidence to favor either party in performing the average.&lt;br /&gt;
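As a minimal sketch of these two ideas (all numbers and names below are made up for illustration, not part of any proposed implementation):&lt;br /&gt;

```python
# Tally plain True/False answers and report the percentage True
answers = [True, True, False, True, False, True, True]
pct_true = 100 * sum(answers) / len(answers)
print(round(pct_true, 1))  # 71.4

# Temper a source's (possibly inflated) self-confidence with our own view,
# weighting each party's estimate in the average
def blended_confidence(source_conf, our_conf, w_source, w_ours):
    return (w_source * source_conf + w_ours * our_conf) / (w_source + w_ours)

print(round(blended_confidence(0.95, 0.6, 0.3, 0.7), 3))  # 0.705
```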
&lt;br /&gt;
== Wang and Vassileva&#039;s Trust Equation ==&lt;br /&gt;
&lt;br /&gt;
Indeed, we could envision the entire [[community]] weighing in on the Trust of the source (i.e., the respondent). A paper by [[Media:774c8855f108538162193799fb483ffa_Bayesian_network-based_trust_model.pdf|Wang and Vassileva]] does just this and provides an equation (their Equation 3) for dealing with it:&lt;br /&gt;
&lt;br /&gt;
[[File:ee8a0014dffb1539b84c015b9e2870ad_Eqn3_a.png|Eqn3_a]]&lt;br /&gt;
&lt;br /&gt;
[[File:04b9355915adfb6053a2c79edbd3e059_Eqn3_b.png|Eqn3_b]]&lt;br /&gt;
&lt;br /&gt;
Here we take Equation 3 and add two terms to it to represent our situation: our source’s trust (confidence) in its own answer, and the requesting node’s trust in the source. The paper’s apparent presumption was that a) the requesting node did not know the source at all a priori and thus requires help to form its Trust, and b) the source would not provide a valuable assessment of itself because it is biased. Here we relax these two presumptions by noting that we may have an a priori [[Opinion|opinion]] of a source but still want help to refine it, and that an honest source might indeed provide some insight into its own confidence level. We also refactor the equation slightly by using easier-to-understand variable names.&lt;br /&gt;
&lt;br /&gt;
To understand the equation we first draw a picture of the nodes represented by it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;r, requesting node&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;k, known node&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;u, unknown node&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;s, source node&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [label=&amp;quot;Trk&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 4 [label=&amp;quot;Tks&amp;quot;];&lt;br /&gt;
    3 -&amp;gt; 4 [label=&amp;quot;Tus&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 4 [label=&amp;quot;Trs&amp;quot;];&lt;br /&gt;
    4 -&amp;gt; 4 [label=&amp;quot;Tss&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
If we imagine many k and u nodes, the equation is,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;T_{rs,mod} = w_k {\sum_{k=1}^{N_k} T_{rk}T_{ks} \over {\sum_{k=1}^{N_k} T_{rk}}} + w_u {\sum_{u=1}^{N_u} T_{us} \over N_u} + w_r T_{rs} + w_s T_{ss}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;w_k + w_u + w_r + w_s = 1&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_k&amp;lt;/math&amp;gt; is the weight placed on the known trustworthy references&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_u&amp;lt;/math&amp;gt; is the weight placed on the unknown references&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_r&amp;lt;/math&amp;gt; is the weight placed on r (our view of ourself)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_s&amp;lt;/math&amp;gt; is the weight placed on s (the source’s view of itself)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{rs,mod}&amp;lt;/math&amp;gt; is the modified Trust that &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;r&amp;lt;/math&amp;gt; has in &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;s&amp;lt;/math&amp;gt;, after taking into account the other nodes’ opinions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{rs}&amp;lt;/math&amp;gt; is the a priori Trust that &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;r&amp;lt;/math&amp;gt; has in &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{rk}&amp;lt;/math&amp;gt; is the Trust that &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;r&amp;lt;/math&amp;gt; has in &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;k&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{ks}&amp;lt;/math&amp;gt; is the Trust that the &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;k&amp;lt;/math&amp;gt; known reference has in &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{us}&amp;lt;/math&amp;gt; is the Trust the &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;u&amp;lt;/math&amp;gt; unknown reference has in &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{ss}&amp;lt;/math&amp;gt; is the Trust that &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;s&amp;lt;/math&amp;gt; has in itself&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;r&amp;lt;/math&amp;gt; is the node requesting information&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;s&amp;lt;/math&amp;gt; is the source of answer (the respondent)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;N_k&amp;lt;/math&amp;gt; is the number of known references&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;N_u&amp;lt;/math&amp;gt; is the number of unknown references&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;k&amp;lt;/math&amp;gt; is a known reference (known to &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;r&amp;lt;/math&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;u&amp;lt;/math&amp;gt; is an unknown reference&lt;br /&gt;
&lt;br /&gt;
We first note that this slightly modified equation becomes equivalent to Wang and Vassileva’s Eqn. 3 when &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_r=0&amp;lt;/math&amp;gt; and &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_s=0&amp;lt;/math&amp;gt;. We also note that the first term, in particular, has the property that when &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{rk}=0&amp;lt;/math&amp;gt;, the corresponding &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{ks}&amp;lt;/math&amp;gt; simply ceases to contribute.&lt;br /&gt;
&lt;br /&gt;
Indeed this equation is similar in this respect to the [[A trust weighted averaging technique to supplement straight averaging and Bayes|trust-weighted averaging scheme]] we saw previously. Although the paper’s approach is Bayesian (presumably because they have hard performance data on file providers), we can adopt this equation for non-Bayesian purposes.&lt;br /&gt;
&lt;br /&gt;
In this scheme we are effectively creating another [[Technical overview of the ratings system|trust network]] to ask the question: how much trust do we have in Node s? This information will then be used to modify the probability that Node s responds with when asked a real question by Node r.&lt;br /&gt;
&lt;br /&gt;
We are, furthermore, using nodes that are unknown to us, ones that we generally don’t ask questions of. This is a reasonable approach, especially for distant sources for which we may not have any known references.&lt;br /&gt;
&lt;br /&gt;
The authors use a weighting to distinguish known and unknown references. Presumably the weighting is higher for known nodes. But it would seem that we can arbitrarily make up groups of nodes in either category to further subdivide them. We could have known nodes that have particularly good skills in rating others and group them together, or have a group of “semi-known” nodes that rank in the middle. The extension of the equation to either of these cases is trivial. To some extent we have already done this by adding nodes &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;r&amp;lt;/math&amp;gt; and &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;s&amp;lt;/math&amp;gt; as contributors to the overall Trust level.&lt;br /&gt;
&lt;br /&gt;
Such a scheme implies a great deal of information from various nodes. This information could be introduced via Lem’s [[Binned and continuous distributions|binning technique]] to present [[Trust|trust distributions]]. This would clearly be valuable in and of itself but might also provide insight into groupings that could be weighted differently. A grouping of overly biased supporters, for instance, might easily be identified given a distribution of this kind.&lt;br /&gt;
&lt;br /&gt;
== Numerical Example ==&lt;br /&gt;
&lt;br /&gt;
Let’s suppose we have 5 known nodes and 10 unknown nodes as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;N_k = 5&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;N_u = 10&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The requestor node’s Trust for its Known nodes is high:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{rk} = [0.9, 0.9, 0.9, 0.9, 0.9]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
But the Trust of the Known nodes for the Source node is mixed, and lower:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{ks} = [0.5, 0.6, 0.7, 0.8, 0.9]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Trust of the unknown nodes for the Source node, however, is higher and more consistent:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{us} = [0.8, 0.82, 0.84, 0.86, 0.88, 0.9, 0.92, 0.94, 0.96, 0.98]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The requestor node’s own trust in its source node is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{rs} = 0.70&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And the source’s trust in itself is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{ss} = 1.0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With a weighting distribution which emphasizes the contribution of the known nodes,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_k = 0.6&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_u = 0.2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_r = 0.1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_s = 0.1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
we obtain, after plugging into the equation above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;term1 = {w_k {\sum_{k=1}^{N_k} T_{rk}T_{ks} \over {\sum_{k=1}^{N_k} T_{rk}}}} = 0.6{{0.9(0.5)+0.9(0.6)+0.9(0.7)+0.9(0.8)+0.9(0.9)} \over {0.9+0.9+0.9+0.9+0.9}} = 0.42&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;term2 = {w_u {\sum_{u=1}^{N_u} T_{us} \over N_u}} = 0.2{0.8+0.82+0.84+0.86+0.88+0.90+0.92+0.94+0.96+0.98 \over 10} = 0.178&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;term3 = w_r T_{rs} = 0.1(0.70) = 0.07&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;term4 = w_s T_{ss} = 0.1(1.0) = 0.1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{rs,mod} = term1 + term2 + term3 + term4 = 0.768&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We might suspect, however, that the known nodes are biased against the source, and that the unknown nodes have a more objective opinion. In that case we might change the weighting to reflect this, with a consequent increase in our overall Trust:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_k = 0.2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_u = 0.6&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_r = 0.1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_s = 0.1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{rs,mod} = 0.844&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This [https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/143 snippet] reproduces these calculations.&lt;br /&gt;
&lt;br /&gt;
== Addendum: Mediating Tss ==&lt;br /&gt;
&lt;br /&gt;
The analysis above does not mediate the &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{ss}&amp;lt;/math&amp;gt; opinion except through the weighting factor &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_s&amp;lt;/math&amp;gt; applied to it. To temper this value, one likely to be larger than realistic (people often have an inflated opinion of themselves), we might try using a network similar to the one we just described to find &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{rs,mod}&amp;lt;/math&amp;gt;. In effect we are asking the question: how much should we trust s’s confidence in itself, &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{ss}&amp;lt;/math&amp;gt;? We can use the same network for this without including &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{ss}&amp;lt;/math&amp;gt;, or simply by setting &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_s=0&amp;lt;/math&amp;gt;. The same equation as above then applies.&lt;br /&gt;
&lt;br /&gt;
Once we have computed the answer we can multiply it by &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{ss}&amp;lt;/math&amp;gt; to produce an improved value of &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{ss}&amp;lt;/math&amp;gt;, that is &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{ss,mod}&amp;lt;/math&amp;gt;. The calculation above then proceeds with &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{ss,mod}&amp;lt;/math&amp;gt; to produce the desired &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{rs,mod}&amp;lt;/math&amp;gt;, a trust value which will be used to find the probability of the real question at hand.&lt;br /&gt;
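Reusing the example numbers from above, this mediation could be sketched as follows (a hypothetical sketch; in the self-trust pass we redistribute the source’s weight to the requestor, which is our own assumption):&lt;br /&gt;

```python
def t_rs_mod(t_rk, t_ks, t_us, t_rs, t_ss, w_k, w_u, w_r, w_s):
    # Modified Wang & Vassileva trust equation from above
    term1 = w_k * sum(a * b for a, b in zip(t_rk, t_ks)) / sum(t_rk)
    term2 = w_u * sum(t_us) / len(t_us)
    return term1 + term2 + w_r * t_rs + w_s * t_ss

t_rk = [0.9] * 5
t_ks = [0.5, 0.6, 0.7, 0.8, 0.9]
t_us = [0.8, 0.82, 0.84, 0.86, 0.88, 0.9, 0.92, 0.94, 0.96, 0.98]
t_ss = 1.0

# Step 1: how much should we trust s's confidence in itself?  (w_s = 0;
# its weight is folded into w_r here as an assumption)
t_rs_mod_ss = t_rs_mod(t_rk, t_ks, t_us, 0.70, t_ss, 0.6, 0.2, 0.2, 0.0)

# Step 2: temper the source's self-trust
t_ss_mod = t_rs_mod_ss * t_ss

# Step 3: the "real question", using the mediated self-trust
result = t_rs_mod(t_rk, t_ks, t_us, 0.70, t_ss_mod, 0.6, 0.2, 0.1, 0.1)
print(round(result, 4))  # 0.7418
```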
&lt;br /&gt;
To break this down, the calculation would proceed as follows:&lt;br /&gt;
&lt;br /&gt;
# Ask the network “how much trust should we have in &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{ss}&amp;lt;/math&amp;gt;, s’s confidence in itself?” Set &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_s=0&amp;lt;/math&amp;gt; and calculate &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{rs,mod,ss}&amp;lt;/math&amp;gt; as described by the equation above.&lt;br /&gt;
# Calculate &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{ss,mod} = T_{rs,mod,ss}(T_{ss})&amp;lt;/math&amp;gt;&lt;br /&gt;
# Ask the network the “real question” and calculate &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{rs,mod}&amp;lt;/math&amp;gt; as described by the equation above using &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{ss,mod}&amp;lt;/math&amp;gt; instead of &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{ss}&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Use &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_{rs,mod}&amp;lt;/math&amp;gt; as you normally would, to modify the probability that the source is giving you (using Bayes or an averaging technique).&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Bayes_and_certainty&amp;diff=2356</id>
		<title>Bayes and certainty</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Bayes_and_certainty&amp;diff=2356"/>
		<updated>2024-10-01T18:17:00Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Technical overview of the ratings system}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;introduction--bayes-pulls-toward-certainty&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Introduction – Bayes pulls toward certainty ==&lt;br /&gt;
&lt;br /&gt;
We’ve already discussed the fact that Bayes pulls in favor of certainty. If we combine a 99% opinion with a 10% opinion we get 91.7%. But if we increase the 99% to 99.9% the combined opinion rises to 99.1%. If we increase yet again to 99.99% the combined opinion rises to 99.91%. We summarize this below to make it easy to see:&lt;br /&gt;
&lt;br /&gt;
99% combined with 10% ==&amp;amp;gt; 91.7%&lt;br /&gt;
&lt;br /&gt;
99.9% combined with 10% ==&amp;amp;gt; 99.1%&lt;br /&gt;
&lt;br /&gt;
99.99% combined with 10% ==&amp;amp;gt; 99.91%&lt;br /&gt;
&lt;br /&gt;
See what’s happening here? The combined opinion is pulled ever more strongly toward the first opinion as that opinion becomes more certain. It seems a little strange that such apparently small differences in the first opinion should have such a pronounced effect.&lt;br /&gt;
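&lt;br /&gt;
A minimal Python sketch makes this combination concrete (the function name is ours, and a uniform prior over the two hypotheses is assumed, matching the numbers above):&lt;br /&gt;

```python
# Pairwise Bayesian combination of two probability opinions about the
# same binary event, assuming a uniform prior. The function name is
# illustrative, not from the ratings-system codebase.
def bayes_combine(p1: float, p2: float) -> float:
    numerator = p1 * p2
    return numerator / (numerator + (1 - p1) * (1 - p2))

# Reproduces the three combinations above:
print(round(bayes_combine(0.99, 0.1), 3))    # 0.917
print(round(bayes_combine(0.999, 0.1), 3))   # 0.991
print(round(bayes_combine(0.9999, 0.1), 4))  # 0.9991
```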
&lt;br /&gt;
&amp;lt;span id=&amp;quot;a-more-intuitive-explanation&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== A more intuitive explanation ==&lt;br /&gt;
&lt;br /&gt;
The Bayes equation confirms this, but the situation calls for a more intuitive explanation. Although 99% and 99.9% don’t seem like much of a difference, they represent a huge difference in sample sizes. To be able to say 99%, one must perform at least 100 experiments, 99 of which succeeded and 1 of which failed. To be able to say 99.9%, one must perform at least 1000 experiments, where 999 succeeded and 1 failed. And so on.&lt;br /&gt;
&lt;br /&gt;
The evidential difference between 99% and 99.9% is now clearly huge. By performing 1000 experiments, a scientist has built an almost unassailable advantage over someone who would challenge his results with just 100 experiments. The challenger would have to perform 1000 experiments just to compete, and many more to show that the first scientist’s results were conclusively wrong.&lt;br /&gt;
&lt;br /&gt;
This also explains why any opinion, except 0%, combined with 100% yields 100% in [[Bayes&#039; theorem|Bayes]]. The one exception, the 0% opinion, combined with 100% is not computable and yields no result. In the sense outlined here, a 100% opinion doesn’t really exist because it implies that the scientist must perform an infinite number of experiments to attain it. The same is true for 0%. Anyone who performs, say 1000 experiments, and claims that all of them succeeded (thus yielding a “100%” success rate) can be challenged as simply not having performed enough experiments.&lt;br /&gt;
&lt;br /&gt;
Keep in mind that a [[wikipedia:Probability|probability]] is something that &#039;&#039;can&#039;&#039; happen, or not, through random chance. We are not talking about logical or mathematical statements, for example. You cannot say that 2+2=4 100% of the time and call this a probabilistic statement. Such a statement is always true and there is no random event that can alter it. And, needless to say, statements of judgement are not probabilities either. If someone says they are 100% sure it will rain tomorrow, they are either rounding, estimating and exaggerating, or using 100% as an English synonym for “very certain”. Whenever a probability is used, it must include the random chance that the opposite event will take place, however small it might be.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;example&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Example ==&lt;br /&gt;
&lt;br /&gt;
Let’s relate this to Bayes using a concrete example. A test for a disease is 99% accurate. As we’ve just seen, establishing that 99% requires at least 100 tests. Let’s suppose that a second test is 10% accurate. So we have the following situation:&lt;br /&gt;
&lt;br /&gt;
100 healthy subjects ==&amp;amp;gt; Test 1 is 99% accurate ==&amp;amp;gt; 1 tests positive (false positive) ==&amp;amp;gt; Test 2 is 10% accurate ==&amp;amp;gt; .9 test positive again.&lt;br /&gt;
&lt;br /&gt;
100 sick subjects ==&amp;amp;gt; Test 1 is 99% accurate ==&amp;amp;gt; 99 test positive (true positive) ==&amp;amp;gt; Test 2 is 10% accurate ==&amp;amp;gt; 9.9 test positive again&lt;br /&gt;
&lt;br /&gt;
The number of people who tested positive twice = 0.9 + 9.9 = 10.8&lt;br /&gt;
&lt;br /&gt;
Of these, the number of people who are actually sick: 9.9&lt;br /&gt;
&lt;br /&gt;
P = Number of sick people who tested positive twice / Number of people who tested positive twice = 9.9/(9.9 + 0.9) = 0.917&lt;br /&gt;
&lt;br /&gt;
We can confirm that this is equivalent to the Bayes equation as follows:&lt;br /&gt;
&lt;br /&gt;
9.9/10.8 = 9.9/(9.9 + 0.9) = 99(0.1)/(99(0.1) + 1(1 - 0.1)) = 100(0.99)(0.1)/(100(0.99)(0.1) + (100 - 100(0.99))(1 - 0.1)) =&lt;br /&gt;
&lt;br /&gt;
0.99(0.1)/(0.99(0.1) + (1 - 0.99)(1 - 0.1))&lt;br /&gt;
&lt;br /&gt;
We recognize this result as the Bayes equation for this case.&lt;br /&gt;
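&lt;br /&gt;
The counting argument can be checked numerically against the Bayes equation; a short sketch (the function names are ours):&lt;br /&gt;

```python
# Count the subjects who test positive twice (as in the tables above)
# and compare with the Bayes equation. Names are illustrative.
def count_positive_twice(n, acc1, acc2):
    false_pos = n * (1 - acc1) * (1 - acc2)  # healthy, positive on both tests
    true_pos = n * acc1 * acc2               # sick, positive on both tests
    return true_pos / (true_pos + false_pos)

def bayes(p1, p2):
    return p1 * p2 / (p1 * p2 + (1 - p1) * (1 - p2))

print(round(count_positive_twice(100, 0.99, 0.1), 3))  # 0.917
print(round(bayes(0.99, 0.1), 3))                      # 0.917
```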
&lt;br /&gt;
Now let’s do the same with a test that is 99.9% accurate. We need 1000 subjects in this case:&lt;br /&gt;
&lt;br /&gt;
1000 healthy subjects ==&amp;amp;gt; Test 1 is 99.9% accurate ==&amp;amp;gt; 1 tests positive (false positive) ==&amp;amp;gt; Test 2 is 10% accurate ==&amp;amp;gt; .9 test positive again.&lt;br /&gt;
&lt;br /&gt;
1000 sick subjects ==&amp;amp;gt; Test 1 is 99.9% accurate ==&amp;amp;gt; 999 test positive (true positive) ==&amp;amp;gt; Test 2 is 10% accurate ==&amp;amp;gt; 99.9 test positive again&lt;br /&gt;
&lt;br /&gt;
The number of people who tested positive twice = 0.9 + 99.9 = 100.8&lt;br /&gt;
&lt;br /&gt;
Of these, the number of people who are actually sick: 99.9&lt;br /&gt;
&lt;br /&gt;
P = Number of sick who tested positive twice / Number of people who tested positive twice = 99.9/(99.9 + 0.9) = 0.991&lt;br /&gt;
&lt;br /&gt;
Let’s juxtapose the two relevant calculations:&lt;br /&gt;
&lt;br /&gt;
9.9/(9.9 + 0.9) = 0.917&lt;br /&gt;
&lt;br /&gt;
99.9/(99.9 + 0.9) = 0.991&lt;br /&gt;
&lt;br /&gt;
We see here that the numerator increases by a factor of 10, corresponding to the 10 times increase in [[wikipedia:Sample size|sample size]], and representing the number of people who test positive twice who are actually sick. The denominator is composed of this same number plus a constant, the number of people who falsely test positive twice. This number remains constant because, as the test becomes more accurate, ever fewer results are bad (in percentage terms): each additional 9 in the decimal expansion makes a bad result about 10 times less likely. This is the basic reason why certainty pulls so hard in its favor.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;counteracting-certainty-and-oom&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Counteracting certainty and OOM ==&lt;br /&gt;
&lt;br /&gt;
Indeed, to counteract the effect of certainty we would need an equal and opposite level of certainty for the opposite opinion. If 99% represents our probability of a True result, then a 1% opinion for True is its opposite (99% for False). When combined, these two yield, via Bayes,&lt;br /&gt;
&lt;br /&gt;
0.99(0.01) / (0.99(0.01) + 0.01(0.99)) = 0.5&lt;br /&gt;
&lt;br /&gt;
and thus cancel themselves out. 99.9% would require a 0.1% counterbalance to cancel itself, and so on. In terms of the evidentiary weight argument we’ve been using, the 99.9% requires 1000 experiments to establish but the 0.1% also requires 1000 experiments. They are “equal” in that sense.&lt;br /&gt;
&lt;br /&gt;
We can understand this numerically in terms of [[wikipedia:Order of magnitude|order of magnitude]] (OOM). The counterbalance against certainty (eg 99.9%) is a small number (eg 0.1%) which causes the failures to be ever smaller compared to the successes, as we’ve seen. For this case, 0.1% equates to 0.001 which has order of magnitude of -3. 99.99% (0.01%) would have order of magnitude of -4 and have 10 times the evidence in favor of it. Any time we see an order of magnitude difference, we know we are dealing with experiments which have large discrepancies in their evidence and their Bayesian combination will favor greatly the experiment with the more negative OOM.&lt;br /&gt;
&lt;br /&gt;
To an extent we can also understand this in terms of decimal places. The more decimal places an opinion has, the more tests were done to confirm it, with each decimal place adding an order of magnitude to the number of tests.&lt;br /&gt;
&lt;br /&gt;
This only works at the certainty end (99%, 99.9%, 99.99%, etc.) and the uncertainty end (1%, 0.1%, 0.01%, etc.) of the probability spectrum, which is why the OOM view is the more accurate way to look at it. If someone says they are 10.001% certain of something, they must have done at least 100,000 experiments to confirm that (10001 succeeded, 89999 failed). But in Bayes, this is not very different from someone saying 10% where only 10 experiments were conducted for confirmation (1 succeeded, 9 failed). When combined with a 99.999%, which has the same number of decimal places, the result will be 99.991%, very close to 99.999%. The “decimal place heuristic” clearly only works at the ends of the spectrum (where the smaller opinions exhibit a difference in OOM).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;using-decimal-places-to-combine-and-evaluate-experimental-evidence&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Using decimal places to combine and evaluate experimental evidence ==&lt;br /&gt;
&lt;br /&gt;
Nevertheless, this idea asks us to consider what happens when two opinions differ in the evidentiary weight behind them. Clearly an experiment with 1000 tests is not equivalent to one with 10. The Bayes equation, however, has no mechanism to judge the quality of the probabilities inserted into it except when those probabilities clearly differ in terms of their OOM.&lt;br /&gt;
&lt;br /&gt;
So one idea to handle this is to weigh the opinions as follows:&lt;br /&gt;
&lt;br /&gt;
# Use the regular [[Bayes&#039; theorem|Bayes equation]] to combine the two probabilities. This is the one limit.&lt;br /&gt;
# Use the experiment with the most evidence for it (most decimal places) to establish another limit, &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_1&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Decide on a factor for the difference in evidence between the two experiments using the decimal place of the reported probabilities. An 80.3% probability vs. 73% has one more decimal place and so gives the 80.3% opinion 10 times the weight of the 73% opinion. This leads to the equation &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;f = 10^{Nd_1-Nd_2}&amp;lt;/math&amp;gt; where &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;Nd_1&amp;lt;/math&amp;gt; is the number of decimal places of the most accurate probability. Of course, if the probability comes with the sample size used to calculate it we can just divide the sample sizes to obtain the factor.&lt;br /&gt;
# Use the factor just calculated as a weighting factor in establishing the combined opinion between the lower and upper limit.&lt;br /&gt;
&lt;br /&gt;
For example, using &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_1 = 0.803&amp;lt;/math&amp;gt; and &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_2 = 0.73&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{bay} = {0.803(0.73)\over {0.803(0.73) + 0.197(0.27)}} = 0.917&amp;lt;/math&amp;gt;&lt;br /&gt;
# &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{1} = 0.803&amp;lt;/math&amp;gt;&lt;br /&gt;
# &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;f=10^{(Nd_1 - Nd_2)} = 10^{3-2} = 10&amp;lt;/math&amp;gt;&lt;br /&gt;
# &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{comb,w} = 0.803 + {(0.917 - 0.803)\over 10} = 0.8144&amp;lt;/math&amp;gt;&lt;br /&gt;
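&lt;br /&gt;
The four steps might be sketched in Python as follows (the names are ours, and the probability with more decimal places is assumed to be the first argument):&lt;br /&gt;

```python
# Steps 1-4 above: Bayes limit, P1 limit, decimal-place evidence factor,
# weighted blend. Assumes p1_str carries at least as many decimal places
# (i.e. at least as much evidence) as p2_str.
def decimal_places(p_str: str) -> int:
    return len(p_str.split(".")[1]) if "." in p_str else 0

def weighted_combine(p1_str: str, p2_str: str) -> float:
    p1, p2 = float(p1_str), float(p2_str)
    p_bay = p1 * p2 / (p1 * p2 + (1 - p1) * (1 - p2))            # step 1
    f = 10 ** (decimal_places(p1_str) - decimal_places(p2_str))  # step 3
    return p1 + (p_bay - p1) / f                                 # step 4

print(round(weighted_combine("0.803", "0.73"), 4))  # 0.8144
```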
&lt;br /&gt;
Given how close the result is to &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_1&amp;lt;/math&amp;gt; we might just consider skipping the calculation and using &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_1&amp;lt;/math&amp;gt; in situations where an order of magnitude or more separates the two experiments. The calculation might be more useful in cases where we can’t see any difference in decimal place accuracy but know that significantly different sample sizes were used.&lt;br /&gt;
&lt;br /&gt;
This idea is also useful for evaluating the relative merits of two experiments without necessarily combining them via Bayes. If two scientists disagree and one has significantly more experimental evidence, then we could use the above idea to perform a [[A trust weighted averaging technique to supplement straight averaging and Bayes|weighted average]] of their opinions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;reporting-results-correctly-using-decimal-places&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Reporting results correctly using decimal places ==&lt;br /&gt;
&lt;br /&gt;
This discussion assumes that probabilities will be reported to the correct number of decimal places. If we do 11 experiments and 9 succeed we could claim&lt;br /&gt;
&lt;br /&gt;
P = 9/11 = 0.818181818181&lt;br /&gt;
&lt;br /&gt;
but this would represent the result with far too many decimal places, implying that many more experiments had been done to confirm it. The correct decimal representation is 0.8, implying that about 10 experiments have taken place, leading to one decimal place.&lt;br /&gt;
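&lt;br /&gt;
One way to enforce this convention is to round a reported fraction to the number of decimal places the sample size supports; a hedged sketch (one decimal place per order of magnitude of sample size, function name ours):&lt;br /&gt;

```python
import math

# Report a success fraction with only as many decimal places as the
# sample size justifies: about one place per order of magnitude.
def report(successes: int, trials: int) -> str:
    places = max(1, math.floor(math.log10(trials)))
    return f"{successes / trials:.{places}f}"

print(report(9, 11))      # '0.8', not 0.818181...
print(report(999, 1000))  # '0.999'
```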
&lt;br /&gt;
Therefore we need to watch when reports are made with a suspiciously large number of decimal places. Sources should be encouraged to report their experimental results as fractions where we can see openly in the denominator how many experiments were conducted.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Balance_between_individual_liberties_and_the_community&amp;diff=2184</id>
		<title>Balance between individual liberties and the community</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Balance_between_individual_liberties_and_the_community&amp;diff=2184"/>
		<updated>2024-09-25T16:10:22Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Community}}&lt;br /&gt;
&lt;br /&gt;
[[Community|Communities]] have a tendency to impose on individual [[Liberty|liberties]] to some extent. If we look, for example, at the [[Freedom House|Freedom House]] score for Ukraine in recent years we will see a decline [XXX]. This obviously coincided with the beginning of the [[wikipedia:Russian invasion of Ukraine|Russian invasion]] and it is assumed that it has something to do with forced [[wikipedia:Conscription|conscription]] of civilians into military service. We might pose that forced conscription is wrong because it violates the basic liberties of otherwise free citizens.&lt;br /&gt;
&lt;br /&gt;
This is a difficult question, one pitting clear individual basic liberties against stark [[community]] needs. How should we view this if we take a [[Philosophy of John Rawls|Rawlsian]] approach to basic liberties (and [[Privacy|privacy]]) as cornerstones of our political philosophy? Rawls himself thought forced conscription, in general, was a violation of individual basic liberties. He preferred economic incentives to encourage more people to sign up for military duty. But he said there were exceptions to this in cases where it was necessary to uphold the system of basic liberties for everyone. Therefore, for an invader who threatens to impose tyranny over its conquered territory, it may be necessary to have conscription to counter it.&lt;br /&gt;
&lt;br /&gt;
So Ukraine requires its able-bodied male citizens to serve in its defense. The question at hand is whether this violation of liberty is justified by the threat it faces.&lt;br /&gt;
&lt;br /&gt;
Rawls’ own historical experience is informative here. He was a veteran of [[wikipedia:World War II|WWII]] (an experience that, incidentally, persuaded him to reject religion), and produced his greatest work, [https://giuseppecapograssi.wordpress.com/wp-content/uploads/2014/08/rawls99.pdf A Theory of Justice], during the [[wikipedia:Vietnam War|Vietnam War]]. He opposed the Vietnam-era draft and the war itself, using it as a motivator to analyze the considerable defects of American government. Vietnam was clearly a war of choice and was prosecuted ineptly. The government covered up the military difficulty it was facing (they knew early on it could not be won) and it prolonged the war for many years more than was necessary. Forcing young Americans to fight in it was a clear violation of basic liberties, according to Rawls.&lt;br /&gt;
&lt;br /&gt;
WWII was different. Although we could argue that for the US it was a war of choice, that’s only because we can reasonably imagine that the Axis powers would not have invaded the US once they finished with the rest of the world. But this is a tenuous proposition. And, without question, millions of previously free people would have lived under totalitarian conditions had the US not acted, and many would have succumbed to the genocide of those regimes. Even if the US had not been invaded, it would have been a country on a permanent war footing, with the curtailed view of individual liberties that such conditions always produce. Few thinkers, even pacifists such as [[wikipedia:Howard Zinn|Howard Zinn]], believed that the US should have stayed out of WWII; Rawls did not believe so either.&lt;br /&gt;
&lt;br /&gt;
The basic principle being advanced here is that sometimes it is necessary to deny someone their basic liberties in order to protect them for many more people. But here there is, like there is for denying privacy, a careful method by which we perform the denial: first, gauge the level of the threat properly. Then ask for volunteers. Increase the incentives for such volunteers. If the army raised is still insufficient, institute a draft. Ensure that the draft is fair by making everyone eligible for it and randomly selecting from among that pool. It is especially important to not allow the privileged to escape from this because doing so has too much of a socially corrosive effect. Conduct the war with an adequate regard for the lives and health of the soldiers. Post-war, ensure that veterans are treated well: medical care should be ensured, education benefits, opportunities to work, etc. Historically, very few WWII veterans saw their forced service as a terrible imposition on their liberty but many Vietnam veterans did. WWII had many of the characteristics listed here of a stepwise and relatively careful process justified by a high-magnitude threat. Vietnam had only some of these characteristics but without the high-magnitude threat. Vietnam was notorious for exempting young people of privilege and for its generally incompetent management.&lt;br /&gt;
&lt;br /&gt;
So now we have two reasons why individual liberties might be subsumed: 1) to protect the entire community from losing its liberties, and 2) to prevent a momentary act of emotion from denying us a more important benefit (the [[hysteresis]] argument). The troublesome aspect of both of these is that ultimately they must be imposed. War tends to require far more bodies than people willing to volunteer. Stopping a suicidal teenager means, in the moment, forcing them to stop. And decision-making frameworks have rules in place to stop their members from acting unwisely. This is, unfortunately, the basic crux of the matter. The imposition by someone else on our liberties &amp;lt;i&amp;gt;always&amp;lt;/i&amp;gt; feels like a denial of our basic liberties.&lt;br /&gt;
&lt;br /&gt;
But this seems like an immature view of the nature of basic liberties. They are not, like so many other aspects of life, absolute. They exist alongside the rights of others. They exist, frequently, under the condition of “normal” or “reasonable” circumstances. Under normal circumstances we have the right to speak freely. Under circumstances of grave threats to our national security, however, we do not. Under reasonable circumstances (eg terminal illness) we have the right to commit suicide. But if we’re just having a bad day, we do not.&lt;br /&gt;
&lt;br /&gt;
And, perhaps, most importantly they don’t exist just for us. They exist for the benefit of society at large. We emphasize them because we not only hold them dear personally but also because we believe deeply that they are the basis for a better society.&lt;br /&gt;
&lt;br /&gt;
Here is a [[debate]] we can have about the [[wikipedia:Second Amendment to the United States Constitution|2nd Amendment]], for example. Gun-rights advocates will point to the 2nd Amendment to justify their right to own a gun. Gun-control advocates will argue that they may indeed have such a right by default but almost certainly it is not because of the 2nd Amendment, which concerns itself with a collective right for the purpose of defending against tyranny. Almost no gun owner in the US is defending the nation against tyranny. They own a gun because they, for personal reasons, simply want one. This is perfectly legitimate, of course, but the [[wikipedia:Founding Fathers of the United States|Founders]] were not interested in anyone’s personal reasons for gun ownership any more than their reasons for horse ownership. The 2nd Amendment doesn’t cover the use case that most Americans fall under. Therefore, the gun-control side will argue that no [[wikipedia:Constitution of the United States|constitution]]al reason exists for whatever restriction they have in mind. The gun-rights advocates will also point to the 2nd Amendment and argue that the text does not explicitly condition gun ownership on protection against tyranny. Therefore it covers the personal use-case. Either way, the 2nd Amendment is a good example of basic individual rights existing in tension with their collective purpose.&lt;br /&gt;
&lt;br /&gt;
This starts to veer strongly into [[Philosophy of John Rawls#Utilitarianism|utilitarianism]]. At the end of the day, for exigent enough circumstances, we have no choice but to invoke a utilitarian argument. If millions of innocent Americans die every year (let’s say) because of guns then we might have a grounded utilitarian argument for invoking some gun-control measure or another. However if only a handful are dying then we don’t. The real number is around [https://www.pewresearch.org/short-reads/2023/04/26/what-the-data-says-about-gun-deaths-in-the-u-s/ 26,000 suicides and 21,000 murders], with a much smaller number attributable to accidents and law enforcement actions. Assuming gun ownership is a basic liberty enshrined in the Constitution, is this enough to invoke a utilitarian argument against it? Politically, in the US, the answer has been a firm no. But we can see how that might change given higher numbers.&lt;br /&gt;
&lt;br /&gt;
A starker way of looking at this is to say that if the life of one person must be sacrificed to save everyone, it is hard to argue against it on basic liberties principles since the basic liberties of all are at stake. Utilitarianism never goes away and it is a matter of when it should be invoked. The great philosophies tend to have this property. They are clearly applicable under some circumstances and not under others.&lt;br /&gt;
&lt;br /&gt;
Let’s take a look at another personal right and how it might conflict with collective well-being. We have envisioned a moneyless society where “income” distribution is controlled by the community to some reasonable level. However, don’t we have the right to ask other people for money and, in so doing, amass wealth? Our right to ask is certainly protected as free speech and let’s say the person making the donation is freely choosing to do so as well. The transaction may be more involved than that, having some exchange or quid pro quo. But even so, the freedom to transact in this way seems like it falls within our basic liberties.&lt;br /&gt;
&lt;br /&gt;
The problem here isn’t really the request and subsequent donation, in and of itself. The problem is that, done enough times, it results in a skewed income distribution that violates what the community is comfortable with. Communities have the right to set their economic terms: to allocate resources according to whatever scheme they think is best. Someone who derives an inflated income through charitable contributions is not only violating the distribution rule but is probably also not a productive member of society. Again, we have a straightforward conflict between individual and collective rights.&lt;br /&gt;
&lt;br /&gt;
Keep in mind here that we are not preventing people from taking the free food offered at a potluck or saying they can’t give their friends a ride home. In the ordinary course of events, people ask for things and others give them. We don’t expect to stop doing this or pass laws to prevent it. Small acts of charity, that are often reciprocated anyway, pose little problem. It is the acts of charity that skew the economic order that would concern us here. The collective right is imposed mainly when it is of collective significance. In this way we balance the rights of individuals to behave freely (and feel like they are free) with collective rights.&lt;br /&gt;
&lt;br /&gt;
We can also look at the money, or resources, request issue by breaking it into a few different categories. First, why is someone asking for money? Is there a legitimate reason, like need? In a [[Post-scarcity economy|post-scarcity]] fair economy, this should never be the case. Is the reason for investment? If so, this is governable by community law and the basic ask/donate right would need to conform to investment laws and community perceptions of good investments. Perhaps the person is especially charismatic and people fall under his “spell”, like a cult leader. If the leader is then using the money for self-serving reasons, we would expect the [[ratings system]] to catch this and derate the “leader”. Or, depending on the practices of the cult, shutting it down because it violates the basic liberties of its members. But perhaps the spiritual leader is legitimate and simply wants to build a meditation retreat for his members and needs resources to do it. This seems like the kind of resource allocation the community should be able to handle as part of its normal economic affairs. So perhaps the remedy is just to get the folks involved to follow correct procedures instead of doing it “off the books”. In any event, the right to request resources is also constrained by the intent behind the request.&lt;br /&gt;
&lt;br /&gt;
None of this should be surprising to citizens of “free” societies. Our rights constantly exist in a balance with community needs, not to mention the exact same rights of others. Frequently, however, the balance skews too far towards the community (state power) and sometimes it skews too far in favor of the individual. In our current society we have a cumbersome, corrupt, and ineffective process for changing this balance should we desire to do so. Under a [[Rating system|ratings system]], however, this ability will be explicit and controllable.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Avoiding_feedback&amp;diff=2182</id>
		<title>Avoiding feedback</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Avoiding_feedback&amp;diff=2182"/>
		<updated>2024-09-25T15:32:07Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Technical overview of the ratings system}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;methods-of-preventing-feedback&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
= Methods of preventing feedback =&lt;br /&gt;
&lt;br /&gt;
Say we have a fully-connected [[trust]] network that looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    alice -&amp;gt; bob [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    alice -&amp;gt; carol [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    carol -&amp;gt; bob [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
And &#039;&#039;alice&#039;&#039; wants &#039;&#039;bob&#039;&#039;’s [[Computed opinion|computed opinion]]. A careless implementation could run into an infinite loop, where &#039;&#039;alice&#039;&#039; asks &#039;&#039;bob&#039;&#039; for his computed opinion, who asks &#039;&#039;carol&#039;&#039; for her computed opinion, who asks &#039;&#039;alice&#039;&#039; for her computed opinion, who asks &#039;&#039;bob&#039;&#039;…&lt;br /&gt;
&lt;br /&gt;
This structure also means that &#039;&#039;alice&#039;&#039; can get &#039;&#039;bob&#039;&#039;’s and &#039;&#039;carol&#039;&#039;’s opinions twice – once directly, once indirectly through the other. This is acceptable, and probably desirable. If we add some [[Trust factor|trust factors]] to this network like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    alice -&amp;gt; bob [label=&amp;quot;trust=0.8&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    alice -&amp;gt; carol [label=&amp;quot;trust=0.2&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    carol -&amp;gt; bob [label=&amp;quot;trust=0.8&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
We can see that &#039;&#039;alice&#039;&#039; doesn’t [[Trust|trust]] &#039;&#039;carol&#039;&#039; much, so she will assign a low weight to &#039;&#039;carol&#039;&#039;’s direct opinions. But when she asks &#039;&#039;bob&#039;&#039; for his opinion, &#039;&#039;bob&#039;&#039; rates &#039;&#039;carol&#039;&#039; highly, so his computed opinion will include &#039;&#039;carol&#039;&#039;’s opinion, and &#039;&#039;carol&#039;&#039;’s influence on &#039;&#039;alice&#039;&#039;’s computed opinion will probably be significantly higher through &#039;&#039;bob&#039;&#039; than when presented directly to &#039;&#039;alice&#039;&#039;. But we can say that &#039;&#039;alice&#039;&#039; has assigned &#039;&#039;bob&#039;&#039; high trust because &#039;&#039;bob&#039;&#039;’s computed opinions are usually right, regardless of how he comes up with them. She doesn’t know that he trusts &#039;&#039;carol&#039;&#039;, and doesn’t need to. So getting &#039;&#039;carol&#039;&#039;’s opinion through &#039;&#039;bob&#039;&#039; is probably a good thing, as her low direct trust in &#039;&#039;carol&#039;&#039; may be a mistake. Or maybe this is a situation where we have different trust factors for different domains: &#039;&#039;alice&#039;&#039; is asking a car question, and &#039;&#039;bob&#039;&#039; knows that &#039;&#039;carol&#039;&#039; is a mechanic and so assigns her a high trust factor on automotive [[Predicate|predicate]]s, while &#039;&#039;alice&#039;&#039; just knows her as someone with a reputation for being prone to [[wikipedia:Groupthink|groupthink]] on culture-war issues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;depth-limit&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Depth Limit ==&lt;br /&gt;
&lt;br /&gt;
A simple way to avoid [[wikipedia:Recursion|infinite recursion]] is to put in a depth limit. When &#039;&#039;alice&#039;&#039; asks &#039;&#039;bob&#039;&#039; and &#039;&#039;carol&#039;&#039; for their computed opinions, she will set a depth limit of, say, &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;depth = 4&amp;lt;/math&amp;gt;. Then, when &#039;&#039;bob&#039;&#039; gets the request, he would ask the nodes he trusts for their computed opinions, but pass &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;depth - 1&amp;lt;/math&amp;gt; to &#039;&#039;alice&#039;&#039; and &#039;&#039;carol&#039;&#039;, and so on. Eventually, if you are asked for a computation with &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;depth = 0&amp;lt;/math&amp;gt;, you just return your own personal opinion, or &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt; if you don’t have one. The initial depth limit would need to be set to be roughly the depth of an acyclic version of your network so there aren’t any nodes you can’t reach because they’re too far away.&lt;br /&gt;
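&lt;br /&gt;
The scheme can be sketched in a few lines. This is a minimal illustration, not the real protocol: the network, the personal opinion values, and the plain-average aggregation are all made up for the example (actual aggregation would be trust-weighted).&lt;br /&gt;

```python
# Depth-limited computed opinions on the fully-connected three-node
# network above. Opinions are confidences in [0, 1]; None plays the
# role of null ("no opinion"). The plain average is a stand-in for
# real trust-weighted aggregation.
personal = {"alice": 0.9, "bob": 0.6, "carol": None}
trusts = {"alice": ["bob", "carol"],
          "bob": ["alice", "carol"],
          "carol": ["alice", "bob"]}

def computed_opinion(node, depth):
    if depth == 0:
        return personal[node]  # recursion bottoms out: personal opinion or null
    answers = [computed_opinion(peer, depth - 1) for peer in trusts[node]]
    answers = [a for a in answers + [personal[node]] if a is not None]
    return sum(answers) / len(answers) if answers else None

print(computed_opinion("alice", 2))  # terminates despite the cycle
```

Note that the recursion terminates even though the trust graph is cyclic; the cost is that nearby opinions get folded in repeatedly, which is exactly the over-weighting concern raised in the cons.&lt;br /&gt;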
&lt;br /&gt;
Pros:&lt;br /&gt;
&lt;br /&gt;
* simple to implement&lt;br /&gt;
* doesn’t require any node identifiers&lt;br /&gt;
* low network overhead (1 byte?)/low computing overhead&lt;br /&gt;
* results are stable – if you ask the same question on the same network multiple times, you’ll get the same answer.&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
&lt;br /&gt;
* people&#039;s opinions are counted multiple times. My gut feeling is that people near you tend to get relatively more weight than those farther away from you.&lt;br /&gt;
* you have to balance your initial value of &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;depth&amp;lt;/math&amp;gt; somewhere between cutting off distant sources of information, and over-valuing nearby information&lt;br /&gt;
&lt;br /&gt;
Notes:&lt;br /&gt;
&lt;br /&gt;
* The ideal value for the three-person fully-connected network would be &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;depth = 1&amp;lt;/math&amp;gt;, and even that may have &#039;&#039;alice&#039;&#039; getting her own personal opinion fed back to her unless the algorithm has a way for &#039;&#039;alice&#039;&#039; to tell &#039;&#039;bob&#039;&#039; not to turn around and ask her right back, which would be a step towards a node identifier.&lt;br /&gt;
* For [[Privacy|privacy]], you may want to stop at &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;depth = 1&amp;lt;/math&amp;gt; in the algorithm as described above, because allowing &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;depth = 0&amp;lt;/math&amp;gt; requests directly exposes your personal opinion&lt;br /&gt;
* My ten-seconds-of-effort google search turned up these statistics on [[Social network|social networks]]: https://miro.medium.com/v2/resize:fit:4800/format:webp/1*rkITSNe7ngMh8vmQexayQw.png The highlight being that for twitter, the average path length was 2.6 and the diameter was 18. Other services had somewhat higher average path lengths and somewhat lower diameters, but the difference was still at least a factor of 3.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;unique-identifier-for-each-inquiry&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== [[Unique identifier|Unique identifier]] for each inquiry ==&lt;br /&gt;
&lt;br /&gt;
This has a few variants. The property they all have in common is that when &#039;&#039;alice&#039;&#039; starts trying to figure out her computed opinion, she assigns a unique identifier to the computation, and passes that identifier to &#039;&#039;bob&#039;&#039; and &#039;&#039;carol&#039;&#039; with her requests. If a node gets a request with an identifier it has never seen before, it does a full computation. If it gets a request with an identifier it has seen before, it does something less. That could be:&lt;br /&gt;
&lt;br /&gt;
* returning &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt;, effectively saying “I have no opinion on this, because my opinion has already been accounted for in this calculation”&lt;br /&gt;
* returning the same (cached) result they returned the first time they did the full computation&lt;br /&gt;
* returning their own personal opinion&lt;br /&gt;
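&lt;br /&gt;
The first variant (answering null on a repeated identifier) might look like the following. Everything here – node layout, opinion values, and the plain-average aggregation – is illustrative only.&lt;br /&gt;

```python
import uuid

class Node:
    def __init__(self, name, personal=None):
        self.name = name
        self.personal = personal   # confidence in [0, 1], or None for null
        self.peers = []
        self.seen = set()          # short-term cache of inquiry identifiers

    def computed_opinion(self, inquiry_id):
        if inquiry_id in self.seen:
            return None            # "my opinion is already accounted for"
        self.seen.add(inquiry_id)
        answers = [p.computed_opinion(inquiry_id) for p in self.peers]
        answers = [a for a in answers + [self.personal] if a is not None]
        return sum(answers) / len(answers) if answers else None

alice, bob, carol = Node("alice", 0.9), Node("bob", 0.6), Node("carol")
alice.peers, bob.peers, carol.peers = [bob, carol], [alice, carol], [alice, bob]
result = alice.computed_opinion(uuid.uuid4())
```

Because &#039;&#039;carol&#039;&#039; happens to be reached through &#039;&#039;bob&#039;&#039; first, her later direct answer to &#039;&#039;alice&#039;&#039; is a null; with a different arrival order the numbers change, which is the instability noted in the cons.&lt;br /&gt;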
&lt;br /&gt;
Pros:&lt;br /&gt;
&lt;br /&gt;
* low network overhead (16 bytes). low computational overhead&lt;br /&gt;
* doesn’t require node identifiers&lt;br /&gt;
* doesn’t loop back and count someone’s opinions multiple times&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
&lt;br /&gt;
* unstable. results change depending on which order nodes receive requests&lt;br /&gt;
* some variants wouldn’t allow someone’s opinion to count multiple times, which we convinced ourselves was desirable&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;passing-the-path&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Passing the path ==&lt;br /&gt;
&lt;br /&gt;
In this method, each time you send a request to a new node, you include the list of all upstream nodes, so it knows not to loop back. When a node receives a request and all of the nodes it trusts are listed in the request’s upstream nodes, it just returns its personal opinion, or &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt; if it has none.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;seqdiag&amp;quot;&amp;gt;&lt;br /&gt;
seqdiag {&lt;br /&gt;
  alice -&amp;gt; bob [label = &amp;quot;request(ignore=alice)&amp;quot;];&lt;br /&gt;
  bob -&amp;gt; carol [label = &amp;quot;request(ignore=alice,bob)&amp;quot;];&lt;br /&gt;
  bob &amp;lt;-- carol [label = &amp;quot;carol&#039;s personal opinion&amp;quot;];&lt;br /&gt;
  alice &amp;lt;-- bob [label = &amp;quot;bob&#039;s computed opinion&amp;quot;];&lt;br /&gt;
  alice -&amp;gt; carol [label = &amp;quot;request(ignore=alice)&amp;quot;];&lt;br /&gt;
  carol -&amp;gt; bob [label = &amp;quot;request(ignore=alice,carol)&amp;quot;];&lt;br /&gt;
  carol &amp;lt;-- bob [label = &amp;quot;bob&#039;s personal opinion&amp;quot;];&lt;br /&gt;
  alice &amp;lt;-- carol [label = &amp;quot;carol&#039;s computed opinion&amp;quot;];&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
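&lt;br /&gt;
The sequence above maps onto a short sketch (same caveats as before: hypothetical network and opinion values, with plain averaging standing in for trust-weighted aggregation):&lt;br /&gt;

```python
# Passing-the-path: each request carries the set of upstream nodes so
# the recursion never loops back. Opinions are confidences; None = null.
personal = {"alice": 0.9, "bob": 0.6, "carol": None}
trusts = {"alice": ["bob", "carol"],
          "bob": ["alice", "carol"],
          "carol": ["alice", "bob"]}

def computed_opinion(node, ignore=frozenset()):
    ignore = set(ignore) | {node}
    reachable = [p for p in trusts[node] if p not in ignore]
    if not reachable:
        return personal[node]  # everyone I trust is upstream: personal opinion or null
    answers = [computed_opinion(p, ignore) for p in reachable]
    answers = [a for a in answers + [personal[node]] if a is not None]
    return sum(answers) / len(answers) if answers else None

print(computed_opinion("alice"))
```

As in the diagram, &#039;&#039;bob&#039;&#039; and &#039;&#039;carol&#039;&#039; are each consulted twice – once directly, once through the other – but no request ever loops back to a node already on the path.&lt;br /&gt;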
Pros:&lt;br /&gt;
&lt;br /&gt;
* stable&lt;br /&gt;
* only counts opinions twice when we want it to&lt;br /&gt;
* doesn’t double-count anyone or loop back&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
&lt;br /&gt;
* requires node identifiers&lt;br /&gt;
* higher network overhead for the list of node identifiers&lt;br /&gt;
* list of upstream nodes leaks a lot of information&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;privacy-preserving-variant&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Privacy-preserving variant ===&lt;br /&gt;
&lt;br /&gt;
To make the above more palatable, we can change it as follows:&lt;br /&gt;
&lt;br /&gt;
* Every time a node makes a request&lt;br /&gt;
** they create a random [[wikipedia:Universally unique identifier|UUID]] to identify themselves for this computation. They store it in a short-term cache so they’ll know it if they see it again.&lt;br /&gt;
** they also generate a number of other decoy UUIDs that help obscure the trust paths. Maybe between 3 and 10 decoys per request.&lt;br /&gt;
** then they send their request with both the real and decoy UUIDs (sorted lexicographically, say)&lt;br /&gt;
* When a node receives a request, it checks to see if any of the UUIDs in the request are in its short-term cache of ids that represent itself.&lt;br /&gt;
** if so, it returns &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt;&lt;br /&gt;
** if not, it computes its opinion normally. When it has to send requests to other nodes, it does the same thing the originating node did: create and cache a real UUID, create several decoys, and append them to the list of UUIDs in the request it’s processing&lt;br /&gt;
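&lt;br /&gt;
The decoy-UUID exchange can be sketched like this (all values illustrative; plain averaging again stands in for the real aggregation):&lt;br /&gt;

```python
import random
import uuid

class Node:
    def __init__(self, name, personal=None):
        self.name = name
        self.personal = personal    # confidence in [0, 1], or None for null
        self.peers = []
        self.my_ids = set()         # short-term cache of my own real UUIDs

    def computed_opinion(self, seen_ids=frozenset()):
        if self.my_ids & set(seen_ids):
            return None             # one of those UUIDs is me: I'm upstream
        real = uuid.uuid4()
        self.my_ids.add(real)
        decoys = [uuid.uuid4() for _ in range(random.randint(3, 10))]
        # real and decoy ids are indistinguishable once sorted lexicographically
        outgoing = sorted({*seen_ids, real, *decoys}, key=str)
        answers = [p.computed_opinion(outgoing) for p in self.peers]
        answers = [a for a in answers + [self.personal] if a is not None]
        return sum(answers) / len(answers) if answers else None

alice, bob, carol = Node("alice", 0.9), Node("bob", 0.6), Node("carol")
alice.peers, bob.peers, carol.peers = [bob, carol], [alice, carol], [alice, bob]
result = alice.computed_opinion()
```

Note that &#039;&#039;carol&#039;&#039; ends up re-querying &#039;&#039;bob&#039;&#039; and pinging &#039;&#039;alice&#039;&#039; uselessly, which is exactly the extra traffic called out in the cons.&lt;br /&gt;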
&lt;br /&gt;
Pros:&lt;br /&gt;
&lt;br /&gt;
* makes it much harder to figure out who is whom&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
&lt;br /&gt;
* makes requests much larger on the network (figure what, maybe another 512 bytes on average?)&lt;br /&gt;
* also increases the number of requests (a full new set of leaf nodes). In the non-privacy version, if &#039;&#039;carol&#039;&#039; was processing &amp;lt;code&amp;gt;request(ignore=alice,bob)&amp;lt;/code&amp;gt;, she knew not to contact &#039;&#039;alice&#039;&#039;. In the privacy variant, she needs to contact &#039;&#039;alice&#039;&#039; because she doesn’t know whether any of the UUIDs represent &#039;&#039;alice&#039;&#039;.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Attenuation_in_trust_networks&amp;diff=2180</id>
		<title>Attenuation in trust networks</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Attenuation_in_trust_networks&amp;diff=2180"/>
		<updated>2024-09-25T15:24:16Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Technical overview of the ratings system}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Review of Cycling&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To summarize [[Effect of cycling in trust networks|cycling]] briefly, if [[Trust]]=1.0 for all nodes, then cycling will rapidly lead to a confidence of 1 (100%) for a given question (if probabilities for most of the nodes are above 50%). However, if [[Trust|Trust]] &amp;amp;lt; 1.0 then the confidence will asymptotically approach some limit below 1. The question was asked, why? Why the [[wikipedia:Asymptote|asymptote]]? I hadn’t, and still really haven’t, done the math to answer this (See APPENDIX below for an attempt) so I offered up the following qualitative explanation:&lt;br /&gt;
&lt;br /&gt;
# The confidence goes to 100% in a cycling network with Trust=1 (as long as our probabilities are above 50%). If Trust=0, the confidence is simply the confidence of the head node, which trusts itself. We would think that if Trust is between 0 and 1 that the confidence would also go to 100% if enough cycling were to take place, just like the case when Trust=1.0. But this would imply a sharp discontinuity between Trust=0 and everything else. Usually, when given a choice between continuous and discontinuous, you should pick continuous. Nature just seems to work that way, at least from a human-scale point of view. It is more reasonable that a continuous variation exists between Trust=0 and Trust=1.&lt;br /&gt;
# The asymptote is the result of two forces fighting each other: one being the multiple counting of the same nodes over and over (cycling) and the other being the attenuation of the trust as the nodes get further away from the top-most node. When trust=1, no attenuation occurs so the full effect of multiple counting takes over. When Trust &amp;amp;lt; 1, nodes further out exert less and less influence on the final answer because they have to go through multiple trust layers and, hence, become attenuated. A node with Trust=0 has no influence on the final answer which is almost the same as the influence a distant node has. Since its influence is almost nothing we could view that as being effectively the same as a trust of zero. Hence the term “trust attenuation”. As nodes get farther away from the top node their “effective” trust declines more and more.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Effect of Trust Attenuation&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let’s note first that this has nothing to do with cycling. Even in networks where no cycling takes place, if the trust of multiple nodes is below 1, attenuation will cause the last node to have very little influence on the final answer.&lt;br /&gt;
&lt;br /&gt;
Let’s look at the implications of this. If a node that’s many levels down has low trust (due to attenuation) but is the only node that has any informed knowledge of a subject, how do we factor that in without completely washing out the information it provides? It seems a shame to discount the only node that knows anything because of attenuation.&lt;br /&gt;
&lt;br /&gt;
Cases of this would be any question that is difficult to answer and will require an extensive dive into the network to find an authoritative source. In this situation most of the network doesn’t know anything but there’s one guy who does, somewhere deep down. Examples would be serious questions we might really pose: Is Bibi, from Israel, who just contacted me for a business deal, a good guy? Is it better to get a liver transplant in India or the US? No one in my immediate network knows the answer to these questions so it’s going to take a few levels to get there. The problem is that the network path to anyone who knows is long and has trust factors built in to every node which will attenuate the result as we go along. Take a look at the following example:&lt;br /&gt;
&lt;br /&gt;
[[File:354dda5b3fb3b7a7d8220df8a448cb44_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
Here all nodes have a trust of 70% for their succeeding node and Nodes 0-7 have P=50%, meaning they don’t know anything and their contribution to the overall answer is nil. The only node with any real knowledge is 8, which we represent as P=100%. If we use [https://gitlab.syncad.com/peerverity/trust-model-playground/-/wikis/uploads/6484e59605fd81085eaa127bd98bb9ed/sapienza_trusttree.py sapienza_trusttree.py] to compute this case we arrive at Ptot = 53% as shown. This is not a very good result, just slightly better than random.&lt;br /&gt;
&lt;br /&gt;
Let’s contrast this with a network composed of two nodes, where there is a direct link between 0 and 8:&lt;br /&gt;
&lt;br /&gt;
[[File:6945896d20cb2f044417fa6bb1420df0_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
Here, by eliminating all the nodes that contribute nothing to the answer except attenuation, we obtain the far more reasonable result of 85%.&lt;br /&gt;
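&lt;br /&gt;
Both numbers are easy to reproduce without the linked script. The update below is our reading of the trust-modified Bayes equation used throughout; treat it as a hand-rolled sketch, not a re-implementation of sapienza_trusttree.py.&lt;br /&gt;

```python
def trust_update(p_parent, p_child, trust):
    # Bayes update of the parent's confidence by one child's report, with
    # the child's evidence pulled toward 0.5 (ignorance) by the trust factor.
    pro = p_parent * (0.5 + (p_child - 0.5) * trust)
    con = (1 - p_parent) * (0.5 + ((1 - p_child) - 0.5) * trust)
    return pro / (pro + con)

# Chain 0-1-...-8: Nodes 0-7 have P=0.5 (know nothing), Node 8 has P=1.0,
# and every link carries trust 0.7. Roll Node 8's confidence up the chain:
p = 1.0
for _ in range(8):
    p = trust_update(0.5, p, 0.7)
print(round(p, 2))                            # 0.53 -- barely better than random

# Direct link 0-8 with the same 70% trust:
print(round(trust_update(0.5, 1.0, 0.7), 2))  # 0.85
```

Since each know-nothing node simply shrinks the child&#039;s deviation from 0.5 by the trust factor, eight hops multiply Node 8&#039;s signal by 0.7^8 ≈ 0.058, which is the attenuation in action.&lt;br /&gt;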
&lt;br /&gt;
&amp;lt;h2&amp;gt;Ideas for Dealing with Trust Attenuation&amp;lt;/h2&amp;gt; &lt;br /&gt;
&lt;br /&gt;
To handle this we could allow the user to assign trust factors of 1 to nodes that don’t know the answer (ie ones that have 50% confidence). Trust seems like a less likely issue in cases where people simply don’t know but are willing to pass along information they’ve received. We could even automate this – the node could just pass the answer from their child up without doing anything if they, in fact, have no knowledge and, presumably, no stake in the outcome:&lt;br /&gt;
&lt;br /&gt;
[[File:16f4acbb814cbb110a2afa9eff0f51e0_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
Here we’ve cleanly passed up Node 8’s confidence to the top node without that pesky trust attenuation problem getting in the way. But we’ve also introduced another obvious problem in that we are simply taking Node 8’s word for it on its 100% confidence level. Clearly we need to reality-check this with a trust factor but we’ve just assigned them all to 1 to avoid attenuation.&lt;br /&gt;
&lt;br /&gt;
We might call this “the biased source problem”. Frequently the people who know anything about an obscure subject are biased because they make their business that subject and have a stake in the outcome. Our best source on Bibi is his golfing buddy at the chamber of commerce. The best source on liver transplants in India is an Indian hepatologist who’d like to do the transplant himself. The results are unrealistically confident answers.&lt;br /&gt;
&lt;br /&gt;
So a variant on this idea is to allow the trust factor of the next-to-last node to stand but ignore the rest by assigning them to 1. This would preserve the trust information of the guy who knows the guy who knows. Here, Node 7’s trust for Node 8 stands (at 70%) but the rest of the nodes are set to 100%:&lt;br /&gt;
&lt;br /&gt;
[[File:6a34ceceeac7399d5f077c25fabefc7b_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
Another option would be to allow Node 0 to assign a single trust factor to the whole network, perhaps based on how far away the source node (Node 8) is.&lt;br /&gt;
&lt;br /&gt;
In any event, we are short-circuiting our way to the nodes that really matter, the one that knows the guy who knows, and the guy who knows. I think our system should allow this in some form or other. The user would then get an unfiltered [[Opinion|opinion]] on the question at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;APPENDIX -- Some math on the asymptote&amp;lt;/h2&amp;gt;&lt;br /&gt;
This is a continuation of the first paragraph of this post. Please read that before trying to understand what&#039;s going on here. Also, this is strictly a nerd&#039;s eye view of a (very) specific problem. Read it if you&#039;re having insomnia.&lt;br /&gt;
&lt;br /&gt;
I don’t have an elegant proof because it’s a crazy amount of algebra, but we can go over some thoughts:&lt;br /&gt;
&lt;br /&gt;
Let’s take a look at the following network:&lt;br /&gt;
&lt;br /&gt;
0 – 1 – 2 – 3&lt;br /&gt;
&lt;br /&gt;
where P=0.6 for all nodes and T=0.7 for all nodes. To roll up the confidence of Node 0 for any given [[predicate]] question we can start by computing the confidence of Node 2 after taking into account Node 3. This is just the [https://ceur-ws.org/Vol-1664/w9.pdf Bayes eqn. as modified by Sapienza’s trust factor]:&lt;br /&gt;
&lt;br /&gt;
P2 = 0.6*(0.5 + (0.6-0.5)*T) / (0.6*(0.5 + (0.6-0.5)*T) + 0.4*(0.5+(0.4-0.5)*T)) = 0.6654&lt;br /&gt;
&lt;br /&gt;
We continue by calculating the confidence of Node 1 using the above, just calculated, confidence of Node 2:&lt;br /&gt;
&lt;br /&gt;
P1 = 0.6*(0.5 + (0.6654-0.5)*T) / (0.6*(0.5 + (0.6654-0.5)*T) + 0.4*(0.5+(1-0.6654-0.5)*T)) = 0.7062&lt;br /&gt;
&lt;br /&gt;
and so on until we’ve calculated Node 0. If we just substitute T=0.7 into this, we can derive a [[wikipedia:Recurrence relation|recurrence relation]] of the form:&lt;br /&gt;
&lt;br /&gt;
Pnew = (.09 + .42*P) / (.43 + .14*P)&lt;br /&gt;
&lt;br /&gt;
That is, the Probability of the next level (new) is a function of the probability of the previous level. The other numbers are just constants associated with the trust (0.7 to keep things simple) and the Pnom (0.5 for a [[Predicate|predicate]] question). P will vary from 0-1, so we can construct a table of Pnew as a function of P:&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
! When T=0.7&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
| P&lt;br /&gt;
| Pnew&lt;br /&gt;
|-&lt;br /&gt;
| 0&lt;br /&gt;
| 0.209302326&lt;br /&gt;
|-&lt;br /&gt;
| 0.1&lt;br /&gt;
| 0.297297297&lt;br /&gt;
|-&lt;br /&gt;
| 0.2&lt;br /&gt;
| 0.379912664&lt;br /&gt;
|-&lt;br /&gt;
| 0.3&lt;br /&gt;
| 0.457627119&lt;br /&gt;
|-&lt;br /&gt;
| 0.4&lt;br /&gt;
| 0.530864198&lt;br /&gt;
|-&lt;br /&gt;
| 0.5&lt;br /&gt;
| 0.6&lt;br /&gt;
|-&lt;br /&gt;
| 0.6&lt;br /&gt;
| 0.66536965&lt;br /&gt;
|-&lt;br /&gt;
| 0.7&lt;br /&gt;
| 0.727272727&lt;br /&gt;
|-&lt;br /&gt;
| 0.8&lt;br /&gt;
| 0.78597786&lt;br /&gt;
|-&lt;br /&gt;
| 0.9&lt;br /&gt;
| 0.841726619&lt;br /&gt;
|-&lt;br /&gt;
| 1&lt;br /&gt;
| 0.894736842&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
We see here that values of P at or below 0.7 result in Pnew &amp;amp;gt; P, while values at or above 0.8 result in Pnew &amp;amp;lt; P. Therefore, no matter where we start, the recurrence relation will converge somewhere between 0.7 and 0.8. If we run sapienza_trusttree.py for many nodes (Level = 15) we will obtain P=0.7668. We can also just set Pnew = P in the equation above and solve to get the same result.&lt;br /&gt;
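&lt;br /&gt;
The fixed point is easy to check numerically by iterating the T=0.7 recurrence (a quick sketch, not a substitute for the algebra):&lt;br /&gt;

```python
def pnew(p, t=0.7):
    # one level of the rollup: same recurrence as above, with Pnom = 0.5
    num = 0.3 + 0.6 * t * (p - 0.5)
    return num / (num + 0.2 + 0.4 * (0.5 - p) * t)

p = 0.0                 # any starting point works
for _ in range(100):
    p = pnew(p)
print(round(p, 3))      # 0.767, matching the P=0.7668 figure above
```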
&lt;br /&gt;
When Trust=1, we get the following recurrence relation:&lt;br /&gt;
&lt;br /&gt;
Pnew = 0.6*P / (0.2*P + 0.4)&lt;br /&gt;
&lt;br /&gt;
Which leads to a table like this:&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
! When T=1&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
| P&lt;br /&gt;
| Pnew&lt;br /&gt;
|-&lt;br /&gt;
| 0&lt;br /&gt;
| 0&lt;br /&gt;
|-&lt;br /&gt;
| 0.1&lt;br /&gt;
| 0.142857143&lt;br /&gt;
|-&lt;br /&gt;
| 0.2&lt;br /&gt;
| 0.272727273&lt;br /&gt;
|-&lt;br /&gt;
| 0.3&lt;br /&gt;
| 0.391304348&lt;br /&gt;
|-&lt;br /&gt;
| 0.4&lt;br /&gt;
| 0.5&lt;br /&gt;
|-&lt;br /&gt;
| 0.5&lt;br /&gt;
| 0.6&lt;br /&gt;
|-&lt;br /&gt;
| 0.6&lt;br /&gt;
| 0.692307692&lt;br /&gt;
|-&lt;br /&gt;
| 0.7&lt;br /&gt;
| 0.777777778&lt;br /&gt;
|-&lt;br /&gt;
| 0.8&lt;br /&gt;
| 0.857142857&lt;br /&gt;
|-&lt;br /&gt;
| 0.9&lt;br /&gt;
| 0.931034483&lt;br /&gt;
|-&lt;br /&gt;
| 1&lt;br /&gt;
| 1&lt;br /&gt;
|-&lt;br /&gt;
| 1.1&lt;br /&gt;
| 1.064516129&lt;br /&gt;
|-&lt;br /&gt;
| 1.2&lt;br /&gt;
| 1.125&lt;br /&gt;
|-&lt;br /&gt;
| 1.3&lt;br /&gt;
| 1.181818182&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
This shows that all values of P above 0 but below 1 increase the resulting Pnew. The asymptote here, of course, is 1 as we already know.&lt;br /&gt;
&lt;br /&gt;
The only difference between these recurrence relations is Trust. Therefore it is trust below 1 which generates a recurrence relation that leads to an asymptote below 1.&lt;br /&gt;
&lt;br /&gt;
Based on this, we can write a general equation as a function of T and P:&lt;br /&gt;
&lt;br /&gt;
Pnew = (0.3 + 0.6*T*(P-0.5)) / ( (0.3 + 0.6*T*(P-0.5)) + 0.2 + 0.4*(0.5-P)*T )&lt;br /&gt;
&lt;br /&gt;
If we set Pnew = P, we obtain, after some algebra,&lt;br /&gt;
&lt;br /&gt;
P**2 + P*(2.5/T - 3.5) + (1.5 - 1.5/T) = 0&lt;br /&gt;
&lt;br /&gt;
The solution of this equation defines the asymptote (Pasymp), ie the highest probability we can achieve given the trust. It can be solved numerically or using the quadratic formula:&lt;br /&gt;
&lt;br /&gt;
P = ( (3.5-2.5/T) +- SQRT((2.5/T - 3.5)**2 - 4*(1.5-1.5/T)) ) / 2&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
! T&lt;br /&gt;
! Pasymp&lt;br /&gt;
|-&lt;br /&gt;
| 0.01&lt;br /&gt;
| 0.60096891&lt;br /&gt;
|-&lt;br /&gt;
| 0.1&lt;br /&gt;
| 0.610567768&lt;br /&gt;
|-&lt;br /&gt;
| 0.2&lt;br /&gt;
| 0.623475383&lt;br /&gt;
|-&lt;br /&gt;
| 0.3&lt;br /&gt;
| 0.639520137&lt;br /&gt;
|-&lt;br /&gt;
| 0.4&lt;br /&gt;
| 0.659852575&lt;br /&gt;
|-&lt;br /&gt;
| 0.5&lt;br /&gt;
| 0.686140662&lt;br /&gt;
|-&lt;br /&gt;
| 0.6&lt;br /&gt;
| 0.72075922&lt;br /&gt;
|-&lt;br /&gt;
| 0.7&lt;br /&gt;
| 0.766864466&lt;br /&gt;
|-&lt;br /&gt;
| 0.8&lt;br /&gt;
| 0.827934423&lt;br /&gt;
|-&lt;br /&gt;
| 0.9&lt;br /&gt;
| 0.906150469&lt;br /&gt;
|-&lt;br /&gt;
| 1&lt;br /&gt;
| 1&lt;br /&gt;
|}&lt;br /&gt;
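&lt;br /&gt;
The table can be regenerated from the quadratic formula; for 0 &amp;amp;lt; T ≤ 1 only the larger root lands in [0, 1], so that is the one we take (a sketch):&lt;br /&gt;

```python
from math import sqrt

def pasymp(t):
    # larger root of P**2 + P*(2.5/T - 3.5) + (1.5 - 1.5/T) = 0
    b = 2.5 / t - 3.5
    c = 1.5 - 1.5 / t
    return (-b + sqrt(b * b - 4 * c)) / 2

for t in (0.1, 0.5, 0.7, 1.0):
    print(t, round(pasymp(t), 4))   # 0.6106, 0.6861, 0.7669, 1.0
```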
&lt;br /&gt;
[[File:96bd3aa31a128c2ea22634e90459d987_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
This is not quite a proof but it gives us a more rigorous picture of what’s going on. When trust falls to 0, the confidence of the head node, 0.6, is as good as it gets. If Trust is 1, we will eventually reach P=1 after enough nodes have been factored in. For all T in between, we have a continuously varying degree of P. This makes sense and confirms our initial intuition that a continuous variation in P will result from varying T.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Arrow%27s_theorem&amp;diff=2179</id>
		<title>Arrow&#039;s theorem</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Arrow%27s_theorem&amp;diff=2179"/>
		<updated>2024-09-25T15:13:46Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Voting Methods}}&lt;br /&gt;
&lt;br /&gt;
[[Multi-criteria decision-making methods|MCDM]] can be used for voting but runs into [https://plato.stanford.edu/entries/arrows-theorem/ Arrow’s impossibility theorem]. Kenneth Arrow, an economist, won a [[wikipedia:Nobel prize|Nobel prize]] in 1972 for his contention that it is impossible to extract a social order of preferences from individual preferences while also adhering to fair voting principles. To get some understanding of this, it is useful to imagine a scenario where we are trying to choose between Policy A, B, or C. Suppose 3 people come up with their order of preferences:&lt;br /&gt;
&lt;br /&gt;
# ABC&lt;br /&gt;
# BCA&lt;br /&gt;
# CAB&lt;br /&gt;
&lt;br /&gt;
Now our task is to devise a single order of preferences which best reflects the views of all three. Voter 1 prefers A to B and this preference is shared by Voter 3. We could therefore say that A &amp;amp;gt; B. We can further see that B &amp;amp;gt; C because both Voters 1 and 2 prefer it. We would therefore expect that A &amp;amp;gt; C by a presumed [[Transitive property|transitive property]] but in fact quite the opposite is true. Two voters prefer C to A. If we continue like this we get, rather than a social ordering, a cycle. It turns out to be impossible to construct a social ordering that makes any sense.&lt;br /&gt;
&lt;br /&gt;
We obtain the same result if we do this a little differently. Suppose we look at the ordering BC which means that B is preferred to C. We can see that this is the case for 1 and 2 and since this preference is satisfied by two of the options on the list, we will give 2 points to each option that has it. Thus option 1 receives 2 points and option 2 receives 2 points for its BC preference.&lt;br /&gt;
&lt;br /&gt;
Let’s do the same for ordering AB, and again we can give 2 points to Option 1 and 2 points to Option 3 for having it.&lt;br /&gt;
&lt;br /&gt;
If we continue along these lines, we will come up with the following points for each option:&lt;br /&gt;
&lt;br /&gt;
# ABC – +2 pt for BC, +2 pt for AB, +1 pt for AC&lt;br /&gt;
# BCA – +2 pt for BC, +2 pt for CA, +1 pt for BA&lt;br /&gt;
# CAB – +2 pt for CA, +2 pt for AB, +1 pt for CB&lt;br /&gt;
&lt;br /&gt;
Each option wins 5 points, meaning there is no clearly preferred social choice. It doesn’t help to invoke preferences which no voter listed in the hope that it will somehow result in a better compromise:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot; style=&amp;quot;list-style-type: decimal;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;CBA – +1 pt for CB, +1 pt for BA, +2 pt for CA&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;ACB – +1 pt for CB, +1 pt for AC, +2 pt for AB&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;BAC – +1 pt for BA, +2 pt for BC, +1 pt for AC&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each of these options is tied as well, although they all score worse than the first three.&lt;br /&gt;
&lt;br /&gt;
Let’s make this a little more realistic by assuming that 100 people voted for preferences 1, 2, and 3. Let’s say:&lt;br /&gt;
&lt;br /&gt;
# 30 votes for ABC&lt;br /&gt;
# 30 votes for BCA&lt;br /&gt;
# 40 votes for CAB&lt;br /&gt;
&lt;br /&gt;
We obtain the following vote tallies:&lt;br /&gt;
&lt;br /&gt;
* 60 votes for B &amp;amp;gt; C&lt;br /&gt;
* 70 votes for C &amp;amp;gt; A ==&amp;amp;gt; here we would expect that B &amp;amp;gt; A, but…&lt;br /&gt;
* 70 votes for A &amp;amp;gt; B&lt;br /&gt;
&lt;br /&gt;
The last vote tally contradicts the first two and leads to the same cycle we mentioned above.&lt;br /&gt;
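&lt;br /&gt;
The tallies are mechanical to verify. A quick sketch of standard pairwise counting for the 30/30/40 profile:&lt;br /&gt;

```python
# Pairwise majority tallies for the 30/30/40 ballot profile above.
ballots = {"ABC": 30, "BCA": 30, "CAB": 40}

def prefer(x, y):
    # total votes whose ordering ranks x above y
    return sum(n for order, n in ballots.items()
               if order.index(x) < order.index(y))

# B beats C, C beats A... and yet A beats B: a cycle, not an ordering.
print(prefer("B", "C"), prefer("C", "A"), prefer("A", "B"))  # 60 70 70
```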
&lt;br /&gt;
Arrow’s Theorem (aka Arrow’s Impossibility Paradox) generalizes this result by saying that in any [[Ranked-choice voting systems|ranked choice voting]] system (except trivial ones) it is impossible to determine a social choice preference given the following fair voting principles:&lt;br /&gt;
&lt;br /&gt;
# Social ordering – The results must be an ordering of alternatives, not a cycle.&lt;br /&gt;
# Unrestricted domain – The [[Aggregation techniques|aggregation procedure]] must handle any individual preferences.&lt;br /&gt;
# Pareto efficiency – Unanimous individual preferences must be respected. If every voter prefers A to B then A should always win, regardless of any change in the preferences of other alternatives.&lt;br /&gt;
# Non-dictatorship – The wishes of multiple voters need to be considered. The decision cannot mimic the choice of a single voter.&lt;br /&gt;
# Independence of irrelevant alternatives – If a choice is removed, the ordering of the rest of the choices should be preserved.&lt;br /&gt;
&lt;br /&gt;
With this in mind, let’s suppose instead that we tallied the votes by awarding every ordering (1,2,3) with the total for every pairwise ordering:&lt;br /&gt;
&lt;br /&gt;
# ABC would get 70 for AB, 30 for AC, and 60 for BC ==&amp;amp;gt; Total = 160&lt;br /&gt;
# BCA would get 60 for BC, 70 for CA, and 30 for BA ==&amp;amp;gt; Total = 160&lt;br /&gt;
# CAB would get 70 for CA, 40 for CB, and 70 for AB ==&amp;amp;gt; Total = 180&lt;br /&gt;
&lt;br /&gt;
It would appear that CAB is the winner, even though we suspect that it won just because it has a [[wikipedia:Plurality (voting)|plurality]] of the votes. In fact, though, it violates Condition 5 of Arrow’s theorem. If we remove, for instance, choice A we would get the following:&lt;br /&gt;
&lt;br /&gt;
# BC would get 30 votes&lt;br /&gt;
# BC would get 30 votes&lt;br /&gt;
# CB would get 40 votes&lt;br /&gt;
&lt;br /&gt;
Now, clearly BC is the winner even though in our previous analysis we scored CB the winner because CAB won.&lt;br /&gt;
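&lt;br /&gt;
Both the scoring and the IIA failure can be reproduced directly (a sketch of the pairwise-support scoring rule as defined above):&lt;br /&gt;

```python
# Score each ordering by the pairwise support for every pair it implies,
# then remove alternative A to expose the IIA violation.
ballots = {"ABC": 30, "BCA": 30, "CAB": 40}

def support(x, y):
    # total votes whose ordering ranks x above y
    return sum(n for order, n in ballots.items()
               if order.index(x) < order.index(y))

def score(order):
    return sum(support(order[i], order[j])
               for i in range(len(order)) for j in range(i + 1, len(order)))

scores = {o: score(o) for o in ballots}
print(scores)        # {'ABC': 160, 'BCA': 160, 'CAB': 180} -- CAB "wins"

# Remove choice A: each ballot collapses to its B/C ordering
reduced = {}
for order, n in ballots.items():
    key = order.replace("A", "")
    reduced[key] = reduced.get(key, 0) + n
print(reduced)       # {'BC': 60, 'CB': 40} -- now BC wins, contradicting CAB
```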
&lt;br /&gt;
Arrow’s paradox does not always lead to failure. For example, in cases where a clear majority favors one ordering of choices, the winner is obvious. It is also not a problem when a simple two-choice decision is made (as in many elections) because then, indeed, a clear majority will favor one or the other. That case can only fail when the electorate is exactly evenly split, a generally unlikely event when a large number of people vote. In smaller voting scenarios (e.g. panels of judges) a majority is usually guaranteed by restricting the membership to an odd number.&lt;br /&gt;
&lt;br /&gt;
Nevertheless, Arrow’s insight about [[Voting Methods|voting methods]] presents real difficulties for consensus-based systems. We should note at this point that it applies to ordinal voting systems only. We can get around its difficulties, perhaps, by looking at [[cardinal voting systems]].&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Allowing_for_more_than_predicate_questions_in_the_trust-weighted_histogram_(TWH)_algorithm&amp;diff=2177</id>
		<title>Allowing for more than predicate questions in the trust-weighted histogram (TWH) algorithm</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Allowing_for_more_than_predicate_questions_in_the_trust-weighted_histogram_(TWH)_algorithm&amp;diff=2177"/>
		<updated>2024-09-25T14:50:28Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Trust-weighted histogram}}&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;trust_weighted_histogram&amp;lt;/code&amp;gt; (TWH) algorithm was discussed last time but only handles [[predicate]] questions. Indeed, it only handles single-valued probabilities under the assumption that the other [[Probability|probability]] in a [[Predicate|predicate]] [[Probability distribution|distribution]] is (1-P). Although the examples that exercise it are actually provided as a two-valued distribution [P, 1-P], the algorithm ignores the second value.&lt;br /&gt;
&lt;br /&gt;
This was corrected using the &amp;lt;code&amp;gt;trust_weighted_histogram_sets&amp;lt;/code&amp;gt; algorithm where each “set” corresponds to a different probability and results in a different [[Histogram|histogram]]. In essence the algorithm does the same calculation as before for multiple probabilities and presents its results as a set of histograms, one for each probability.&lt;br /&gt;
&lt;br /&gt;
The setup and example are exactly the same as in the [[Internal:FromGitlab/Notes_on_using_Lem&#039;s_algorithm_interface|previous discussion]] and [[Internal:FromGitlab/Dan&#039;s_proposal_for_trust_weighted_histograms|Eric’s description of the TWH algorithm]]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    1 [label=&amp;quot;1, P=20%&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, P=30%&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, P=40%&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4, P=45%&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, P=60%&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, P=90%&amp;quot;]&lt;br /&gt;
    7 [label=&amp;quot;7, P=55%&amp;quot;]&lt;br /&gt;
    8 [label=&amp;quot;8, P=65%&amp;quot;]&lt;br /&gt;
    9 [label=&amp;quot;9, P=70%&amp;quot;]&lt;br /&gt;
    10 [label=&amp;quot;10, P=80%&amp;quot;]&lt;br /&gt;
    1 -&amp;gt; 2 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 3 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 4 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 5 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 6 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    3 -&amp;gt; 7 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    3 -&amp;gt; 8 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    4 -&amp;gt; 9 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    4 -&amp;gt; 10 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
The output for both the actual and intermediate results has a second set of values representing the second probability in the predicate distribution:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[[Trust|trust]]_weighted_histogram_sets test -- TEST1:&lt;br /&gt;
Trust Weighted Histogram output256 =  [[0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.9, 0.0, 0.0, 0.9], [0.0, 0.9, 0.0, 0.0, 0.9, 0.0, 0.0, 1.0, 0.0, 0.0]]&lt;br /&gt;
Trust Weighted Histogram output378 =  [[0.0, 0.0, 0.0, 0.0, 1.0, 0.9, 0.9, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.9, 0.9, 0.0, 1.0, 0.0, 0.0, 0.0]]&lt;br /&gt;
Trust Weighted Histogram output4910 =  [[0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.9, 0.9, 0.0], [0.0, 0.0, 0.9, 0.9, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]]&lt;br /&gt;
Trust Weighted Histogram Overall output1234 =  [[0.0, 0.0, 0.5555555555555556, 0.5, 1.0, 0.45, 0.9, 0.45, 0.45, 0.45], [0.0, 0.5, 0.5, 1.0, 1.0, 0.5555555555555556, 0.5555555555555556, 0.5555555555555556, 0.6172839506172839, 0.0]]&amp;lt;/pre&amp;gt;&lt;br /&gt;
This is, of course, still a predicate question but any number of probabilities can now be handled. Another test (TEST2) of this algorithm is provided in the [https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/148 snippet], which adds a third probability:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;trust_weighted_histogram_sets test -- TEST2:&lt;br /&gt;
Add a 3rd probability&lt;br /&gt;
Trust Weighted Histogram output256 =  [[0.0, 0.0, 1.0, 0.0, 0.0, 0.9, 0.0, 0.0, 0.9, 0.0], [0.0, 0.9, 0.0, 0.0, 0.9, 0.0, 0.0, 1.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]&lt;br /&gt;
Trust Weighted Histogram output378 =  [[0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.4736842105263158, 0.0, 0.0], [0.0, 0.0, 0.0, 0.9, 0.0, 0.9, 1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]&lt;br /&gt;
Trust Weighted Histogram output4910 =  [[0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.9, 0.9, 0.0, 0.0], [0.0, 0.0, 0.9, 0.9, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]&lt;br /&gt;
Trust Weighted Histogram Overall output1234 =  [[0.0, 0.5555555555555556, 0.5, 1.0, 0.0, 0.45, 0.45, 0.6868421052631579, 0.45, 0.0], [0.0, 0.4736842105263158, 0.4736842105263158, 0.9473684210526316, 0.4736842105263158, 1.0, 0.5263157894736842, 0.5263157894736842, 0.5847953216374269, 0.0], [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]&amp;lt;/pre&amp;gt;&lt;br /&gt;
The functions for this algorithm are denoted by the suffix &amp;lt;code&amp;gt;_sets&amp;lt;/code&amp;gt;. They are largely the same as the functions for the [[Internal:FromGitlab/Dan&#039;s_proposal_for_trust_weighted_histograms|TWH]] algorithm discussed previously, except modified to handle one extra dimension. This is mainly seen in the list/array operations, each of which had to be modified to be a “list of lists” rather than a simple list.&lt;br /&gt;
&lt;br /&gt;
For example, the functions &amp;lt;code&amp;gt;GetHForLeafNode&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;GetHForLeafNode_sets&amp;lt;/code&amp;gt; differ in exactly this manner:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;def GetHForLeafNode(opinion, bins):&lt;br /&gt;
    P = opinion.pdf_points&lt;br /&gt;
    H = []&lt;br /&gt;
    for binx in bins:&lt;br /&gt;
        if(P[0] &amp;gt;= binx[0] and P[0] &amp;lt; binx[1]):&lt;br /&gt;
            H.append(1.0)&lt;br /&gt;
        else:&lt;br /&gt;
            H.append(0.0)&lt;br /&gt;
    return H&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;def GetHForLeafNode_sets(opinion, bins):&lt;br /&gt;
    P = opinion.pdf_points&lt;br /&gt;
    H = []&lt;br /&gt;
    Hlist = []&lt;br /&gt;
    for p in P:&lt;br /&gt;
        for binx in bins:&lt;br /&gt;
            if(p &amp;gt;= binx[0] and p &amp;lt; binx[1]):&lt;br /&gt;
                H.append(1.0)&lt;br /&gt;
            else:&lt;br /&gt;
                H.append(0.0)&lt;br /&gt;
        Hlist.append(H)&lt;br /&gt;
        H = []&lt;br /&gt;
    return Hlist&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This function takes the [[Opinion|opinion]] of a leaf node and determines which probability bin it corresponds to. It places a 1 in the proper bin and 0 in the other bins, creating a histogram. The first function ignores the fact that P is multi-valued and simply uses the first value, P[0]. The second iterates over all the P values in an outer loop. An &amp;lt;code&amp;gt;Hlist&amp;lt;/code&amp;gt; is used to store each &amp;lt;code&amp;gt;H&amp;lt;/code&amp;gt; so created. Many of the other functions in the algorithm were modified to create an outer loop along these lines.&lt;br /&gt;
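A minimal, self-contained run makes the behavior concrete. The &amp;lt;code&amp;gt;SimpleNamespace&amp;lt;/code&amp;gt; stand-in for the opinion object and the ten 0.1-wide bins are assumptions for illustration:

```python
from types import SimpleNamespace

def GetHForLeafNode_sets(opinion, bins):
    # For each probability in the opinion, build a histogram with a 1.0
    # in the bin containing that probability and 0.0 elsewhere.
    H, Hlist = [], []
    for p in opinion.pdf_points:
        for binx in bins:
            H.append(1.0 if binx[0] <= p < binx[1] else 0.0)
        Hlist.append(H)
        H = []
    return Hlist

# Ten bins of width 0.1 covering [0.0, 1.0).
bins = [(i / 10, (i + 1) / 10) for i in range(10)]
# Stand-in for node 1's opinion: the predicate distribution [P, 1-P] with P = 20%.
opinion = SimpleNamespace(pdf_points=[0.2, 0.8])
histograms = GetHForLeafNode_sets(opinion, bins)
# One histogram per probability: the first has its 1.0 in bin [0.2, 0.3),
# the second in bin [0.8, 0.9).
```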
&lt;br /&gt;
No modifications to the &amp;lt;code&amp;gt;algorithms.py&amp;lt;/code&amp;gt; interface were required, although it should be noted that the output opinion is now a list of lists (as you can see above) rather than a simple list. The &amp;lt;code&amp;gt;pdf_points&amp;lt;/code&amp;gt; field in &amp;lt;code&amp;gt;OpinionData&amp;lt;/code&amp;gt; is specified as a &amp;lt;code&amp;gt;list[float]&amp;lt;/code&amp;gt;, but the interface apparently does not enforce this, so a list of lists of floats passes through unchanged.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;trustprobabilitypopulation-graphs-algorithm&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Straight_average_algorithm_with_continuous_input_distributions,_complex_trust,_and_intermediate_results&amp;diff=2176</id>
		<title>Straight average algorithm with continuous input distributions, complex trust, and intermediate results</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Straight_average_algorithm_with_continuous_input_distributions,_complex_trust,_and_intermediate_results&amp;diff=2176"/>
		<updated>2024-09-25T14:32:33Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Aggregation techniques}}&lt;br /&gt;
&lt;br /&gt;
In [[Internal:FromGitlab/Exercising_the_algorithms_py_interface_with_more_complex_data_types|this previous post]] we discussed an algorithm for modifying probabilities using a more complex [[trust]] factor, one involving random [[Deception|lying]], [[Bias|bias]], and biased lying. We also discussed a simple case of combining [[Continuous distribution|continuous distribution]]s using [[Bayes&#039; theorem|Bayes]], straight averaging and [[Trust-weighted averaging|trust-weighted averaging]]. However, the continuous distributions in this case did not use the complex [[Trust factor|trust factor]].&lt;br /&gt;
&lt;br /&gt;
In [[Internal:FromGitlab/Notes_on_setting_up_&amp;amp;_using_the_sandbox_and_algorithmic_improvements|this post from last week]] we discussed a way to modify probabilities using the complex trust factor with continuous distributions. Doing so requires that we break up the continuous distribution into the set of choices being offered in the question (e.g., rainy, cloudy, sunny would be three choices) and apply the lying and biased lying portions of the [[Trust|trust]] modification to each choice accordingly.&lt;br /&gt;
&lt;br /&gt;
None of this previous work, however, allowed us to calculate a full multi-level tree of nodes where we use &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt; to transfer calculations up to the parent node so the calculation can continue. This post is about an algorithm that rectifies this shortcoming and yields the ability to do straight averaging with complex trust, continuous input distributions, and multi-level calculations.&lt;br /&gt;
&lt;br /&gt;
The algorithm, at its essence, keeps track of each node’s a) [[Personal opinion|personal opinion]] and b) the sum of the [[Opinion|opinion]]s of all its children (after they have been modified by trust). These two pieces of information become the &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt; which are used to calculate the average for that node and are passed up to the parent node. Each parent, in turn, sums all its children’s &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt; together to form, along with its personal opinion, its own &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
First, let’s start by reminding ourselves of how &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt; is constructed in this context:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[personal opinion, sum of child opinions, population of children]&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since we are dealing with continuous distributions, we include the x-values of the points in this list:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[[x points of personal opinion, pdf points of personal opinion], [x points of child opinions, sum of pdf points of child opinions], population of children]&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following is an example of this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;inter_results012 =  [[[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], [2.00051038563241, 2.00051038563241, 2.00051038563241, 2.00051038563241, 2.00051038563241, 2.00051038563241, 1.5350918071000005e-06, 1.5350918071000005e-06, 1.5350918071000005e-06, 1.5350918071000005e-06, 1.5350918071000005e-06]], [[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], [5.821628442333012, 5.821628442333012, 5.821628442333012, 5.821628442333012, 5.821628442333012, 5.821628442333012, 6.144719133017774, 6.144719133017774, 6.144719133017774, 6.144719133017774, 6.144719133017774]], 6]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Let’s also remind ourselves of the complex &amp;lt;code&amp;gt;trust_factor&amp;lt;/code&amp;gt; which is broken up into a judgement trust and a communication trust:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;trust_factor = [ [0.8, 0.1, 0.0, 0.02, 0.03, 0.02, 0.03],    #Tj, judgement trust&lt;br /&gt;
                 [0.9, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0] ]  #Tc, communication trust&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here there are two choices (say rainy and sunny) so each &amp;lt;code&amp;gt;Tj&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Tc&amp;lt;/code&amp;gt; could be described as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;[Ttruth, Trandom, Tlierandom, Tliebias_rainy, Tliebias_sunny, Tbias_rainy, Tbias_sunny]&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is discussed at some length in [[Internal:FromGitlab/Exercising_the_algorithms_py_interface_with_more_complex_data_types|Exercising the algorithms.py interface with more complex data types]]&lt;br /&gt;
&lt;br /&gt;
The new algorithm is encapsulated in the function &amp;lt;code&amp;gt;straight_ave_points_continuous_complex_trust_intermediate&amp;lt;/code&amp;gt; which looks like the following and is available in the following [https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/151 snippet]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;@register_algorithm()&lt;br /&gt;
def straight_ave_points_continuous_complex_trust_intermediate(data: AlgorithmInput):&lt;br /&gt;
    convert_functions_to_points_continuous(data)&lt;br /&gt;
    set_initial_intermediate_results_sapccti(data)&lt;br /&gt;
    modify_prob_data_for_continuous_complex_trust_intermediate(data)&lt;br /&gt;
    sum_kids_intermediate_results_and_putinto_parent(data)&lt;br /&gt;
    x_list, Pdave_list = ave_points_continuous_complex_trust_intermediate4(data)&lt;br /&gt;
    opinion_output = OpinionData([x_list, Pdave_list], 1)&lt;br /&gt;
    intermediate_results = data.components[0].intermediate_results #not really necessary&lt;br /&gt;
    algout = AlgorithmOutput(opinion_output, intermediate_results)&lt;br /&gt;
    return algout&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
At this outermost level the algorithm is composed of 5 basic steps, the first five function calls in the script above:&lt;br /&gt;
&lt;br /&gt;
# Converting input continuous distributions (provided by a [[wikipedia:Weibull distribution|Weibull distribution]]) to discrete points. This is governed by the &amp;lt;code&amp;gt;misc_input&amp;lt;/code&amp;gt; parameter, particularly the number of points, &amp;lt;code&amp;gt;n&amp;lt;/code&amp;gt;, desired. The function &amp;lt;code&amp;gt;convert_functions_to_points_continuous&amp;lt;/code&amp;gt; handles this.&lt;br /&gt;
# Setting initial &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt; to zero for leaf nodes. These nodes only have personal opinions. However, by forcing them to have &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt; of the same form as all other nodes, the calculations can be applied consistently to all nodes (with no need for special casing the leaf nodes). The function &amp;lt;code&amp;gt;set_initial_intermediate_results_sapccti&amp;lt;/code&amp;gt; handles this.&lt;br /&gt;
# Modifying the probability data using complex trust as discussed in [[Internal:FromGitlab/Exercising_the_algorithms_py_interface_with_more_complex_data_types|this previous post]]. &amp;lt;code&amp;gt;modify_prob_data_for_continuous_complex_trust_intermediate&amp;lt;/code&amp;gt; handles this.&lt;br /&gt;
# Taking the sum of all the children &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt; and placing this result as part of the parent &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt; (along with the parent personal opinion). This is done in &amp;lt;code&amp;gt;sum_kids_intermediate_results_and_putinto_parent&amp;lt;/code&amp;gt;.&lt;br /&gt;
# Using the result of step 4 to take the straight average for the node, which consists of its personal opinion summed with the &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt; of all the children. The function &amp;lt;code&amp;gt;ave_points_continuous_complex_trust_intermediate4&amp;lt;/code&amp;gt; handles this. Incidentally, the 1-3 variants of this function are earlier versions which are not important to this discussion. They are kept for reference since they have properties of interest for future calculations.&lt;br /&gt;
&lt;br /&gt;
The function &amp;lt;code&amp;gt;convert_functions_to_points_continuous&amp;lt;/code&amp;gt; takes continuous functions given as Python functions, e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;def weibull(k, lamb, x):&lt;br /&gt;
    w = (k/lamb)*(x/lamb)**(k-1.0)*math.exp(-(x/lamb)**k)&lt;br /&gt;
    return w&lt;br /&gt;
&lt;br /&gt;
def Pd0(x):&lt;br /&gt;
    k = 3.0&lt;br /&gt;
    lamb = 0.2&lt;br /&gt;
    return weibull(k, lamb, x)&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
and converts them to a discrete set of (x, p) points depending on the &amp;lt;code&amp;gt;misc_input&amp;lt;/code&amp;gt;, e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;misc_input = {&#039;xmin&#039;: 0.0, &#039;xmax&#039;: 1.0, &#039;n&#039;: 10}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The continuous function is sampled at each of the &amp;lt;code&amp;gt;n&amp;lt;/code&amp;gt; intervals between &amp;lt;code&amp;gt;xmin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;xmax&amp;lt;/code&amp;gt; to create the corresponding set of discrete points. If the input is already a set of discrete points, &amp;lt;code&amp;gt;convert_functions_to_points_continuous&amp;lt;/code&amp;gt; does nothing.&lt;br /&gt;
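The sampling step can be sketched as follows, using the Weibull density above. The exact sampling positions used by &amp;lt;code&amp;gt;convert_functions_to_points_continuous&amp;lt;/code&amp;gt; are an assumption here; this version samples n+1 evenly spaced points including both endpoints, matching the 11 x-values in the example output above:

```python
import math

def weibull(k, lamb, x):
    # Weibull probability density function.
    return (k / lamb) * (x / lamb) ** (k - 1.0) * math.exp(-(x / lamb) ** k)

def Pd0(x):
    return weibull(3.0, 0.2, x)

def sample_continuous(f, misc_input):
    # Sample f at n evenly spaced intervals between xmin and xmax,
    # yielding n + 1 discrete (x, p) points.
    xmin, xmax, n = misc_input["xmin"], misc_input["xmax"], misc_input["n"]
    xs = [xmin + i * (xmax - xmin) / n for i in range(n + 1)]
    return xs, [f(x) for x in xs]

misc_input = {"xmin": 0.0, "xmax": 1.0, "n": 10}
x_points, pdf_points = sample_continuous(Pd0, misc_input)
# x_points runs 0.0, 0.1, ..., 1.0 (11 points, as in the example output above)
```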
&lt;br /&gt;
The function &amp;lt;code&amp;gt;set_initial_intermediate_results_sapccti&amp;lt;/code&amp;gt; provides an empty set of &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt; to nodes (leaf nodes) that do not already have them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;def set_initial_intermediate_results_sapccti(data):&lt;br /&gt;
    for comp in data.components:&lt;br /&gt;
        if(comp.intermediate_results == []):&lt;br /&gt;
            x_points_zero = comp.opinion.pdf_points[0]&lt;br /&gt;
            pdf_points_zero = [0.0]*len(x_points_zero)&lt;br /&gt;
            comp.intermediate_results = [[comp.opinion.pdf_points[0], comp.opinion.pdf_points[1]], [x_points_zero, pdf_points_zero], 0]&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The last line in the above provides the &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt; for a particular node: the (x, p) points for its personal opinion; the (x, p) points of the nonexistent children, where p is a list of zeros; and the population of the children, which in this case is zero.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;modify_prob_data_for_continuous_complex_trust_intermediate&amp;lt;/code&amp;gt; is an outer function for doing the probability modification using the complex trust. It breaks up each node’s &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt; into its corresponding personal opinion and the summed opinion of its children. The personal opinion is then modified using the judgement trust portion of the &amp;lt;code&amp;gt;trust_factor&amp;lt;/code&amp;gt; and the summed children opinions are modified using the communication portion of the &amp;lt;code&amp;gt;trust_factor&amp;lt;/code&amp;gt;. Since in most cases the population of the children will be greater than 1, the summed probability of the children is divided by this population to create an average probability for the trust-modification. After modification, the result is multiplied by the population to recreate the sum for continued calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;def modify_prob_data_for_continuous_complex_trust_intermediate(data):&lt;br /&gt;
    for comp in data.components:&lt;br /&gt;
        # personal opinion&lt;br /&gt;
        xp_points_personal = comp.intermediate_results[0]&lt;br /&gt;
        x_points_personal = xp_points_personal[0]&lt;br /&gt;
        pdf_points_personal = xp_points_personal[1]&lt;br /&gt;
        T_personal = comp.trust_factor[0] #use Tj&lt;br /&gt;
        comp.intermediate_results[0][1] = calc_pdfmod_from_complex_trust_intermediate(x_points_personal, pdf_points_personal, T_personal)&lt;br /&gt;
        # computed opinion of children&lt;br /&gt;
        xp_points_children = comp.intermediate_results[1]&lt;br /&gt;
        x_points_children = xp_points_children[0]&lt;br /&gt;
        pdf_points_children = xp_points_children[1] #this is really a sum&lt;br /&gt;
        T_children = comp.trust_factor[1] #use Tc&lt;br /&gt;
        Npop = comp.intermediate_results[2]&lt;br /&gt;
        #Need to modify the function to handle Npop &amp;gt; 1 PM_090623&lt;br /&gt;
        if(Npop == 1):&lt;br /&gt;
            comp.intermediate_results[1][1] = calc_pdfmod_from_complex_trust_intermediate(x_points_children, pdf_points_children, T_children, Npop)&lt;br /&gt;
        elif(Npop &amp;gt; 1):&lt;br /&gt;
            #account for the fact that we have a sum here, not an individual probability&lt;br /&gt;
            pdf_points_children = (np.array(pdf_points_children) / Npop).tolist()&lt;br /&gt;
            inter_results = calc_pdfmod_from_complex_trust_intermediate(x_points_children, pdf_points_children, T_children, Npop)&lt;br /&gt;
            inter_results = (np.array(inter_results) * Npop).tolist()&lt;br /&gt;
            comp.intermediate_results[1][1] = inter_results&lt;br /&gt;
        else: #(Npop == 0)&lt;br /&gt;
            continue&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Within this function another function, &amp;lt;code&amp;gt;calc_pdfmod_from_complex_trust_intermediate&amp;lt;/code&amp;gt; is called to perform the actual modification calculation. This follows exactly the description set out in the [[Internal:FromGitlab/Notes_on_setting_up_&amp;amp;_using_the_sandbox_and_algorithmic_improvements|last post]] starting at the section titled “Trust modification for continuous algorithms using complex trust factors”.&lt;br /&gt;
&lt;br /&gt;
The function &amp;lt;code&amp;gt;sum_kids_intermediate_results_and_putinto_parent&amp;lt;/code&amp;gt; goes through each child node and adds up all their &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt;. This sum then becomes this parent’s child portion of its &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt;, alongside its personal opinion:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;def sum_kids_intermediate_results_and_putinto_parent(data):&lt;br /&gt;
    #parent is the first one (index 0) and kids are the rest&lt;br /&gt;
    x_points = data.components[0].opinion.pdf_points[0]&lt;br /&gt;
    kids = data.components[1:]&lt;br /&gt;
    tot_kids = [0.0]*len(x_points)&lt;br /&gt;
    tot_npop_kids = 0&lt;br /&gt;
    for kid in kids:&lt;br /&gt;
        sum_kid = addlists(kid.intermediate_results[0][1], kid.intermediate_results[1][1])&lt;br /&gt;
        npop_kid = 1 + kid.intermediate_results[2]&lt;br /&gt;
        tot_kids = addlists(tot_kids, sum_kid)&lt;br /&gt;
        tot_npop_kids = tot_npop_kids + npop_kid&lt;br /&gt;
    #put this into the parent intermediate result&lt;br /&gt;
    data.components[0].intermediate_results[1][1] = tot_kids&lt;br /&gt;
    data.components[0].intermediate_results[2] = tot_npop_kids&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
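The helper &amp;lt;code&amp;gt;addlists&amp;lt;/code&amp;gt; used above is not shown in these excerpts; presumably it is just the elementwise sum of two equal-length lists, along these lines:

```python
def addlists(a, b):
    # Assumed behavior of the snippet's addlists helper:
    # elementwise sum of two equal-length lists.
    return [x + y for x, y in zip(a, b)]

addlists([1.0, 2.0], [0.5, 0.5])  # [1.5, 2.5]
```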
Given a finalized set of &amp;lt;code&amp;gt;intermediate_results&amp;lt;/code&amp;gt;, the only remaining task is to take the average for the node which is done in &amp;lt;code&amp;gt;ave_points_continuous_complex_trust_intermediate4&amp;lt;/code&amp;gt;. This is fairly straightforward since it only involves adding up the node’s personal opinion with the already calculated sum of its children and dividing by the total population:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;def ave_points_continuous_complex_trust_intermediate4(data):&lt;br /&gt;
    inter_results_parent = data.components[0].intermediate_results&lt;br /&gt;
    x_data = inter_results_parent[0][0]&lt;br /&gt;
    p_parent = inter_results_parent[0][1]&lt;br /&gt;
    p_sum_kids = inter_results_parent[1][1]&lt;br /&gt;
    npop_kids = inter_results_parent[2]&lt;br /&gt;
    p_sum = addlists(p_parent, p_sum_kids)&lt;br /&gt;
    npop = npop_kids + 1&lt;br /&gt;
    p_ave = (np.array(p_sum) / npop).tolist()&lt;br /&gt;
    return [x_data, p_ave] &amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Argument_evaluation_and_scoring&amp;diff=2134</id>
		<title>Argument evaluation and scoring</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Argument_evaluation_and_scoring&amp;diff=2134"/>
		<updated>2024-09-23T19:28:06Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;Some thoughts on evaluating [[Argument|argument]]s&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In our last meeting Dan asked how we might evaluate an argument made by a respondent instead of simply relying on the given [[Probability|probability]] (as we&#039;ve been doing so far). An argument, assuming it is made public, could then be evaluated by the questioner and others independently to find a more accurate probability. This opens up a new idea in our work, that of assessing the truth by evaluating the reasoning put forth in an [[Opinion|opinion]].&lt;br /&gt;
&lt;br /&gt;
One idea for doing this starts with a simple model for argument construction. The argument consists of [[Supporting statement|supporting statement]]s which are tied together with [[Logic|logic]] to form a conclusion. The conclusion is the answer to the overall question being asked of the network. Each supporting statement and the logic can be evaluated independently to determine the extent to which the conclusion is true. The following diagram illustrates this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;Answer/Conclusion, Pc&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;Logic, Pl&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;Support. Stmt. 1, Ps1&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;Support. Stmt. 2, Ps2&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;Support. Stmt. 3, Ps3&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;back&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 2 [dir=&amp;quot;back&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 3 [dir=&amp;quot;back&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 4 [dir=&amp;quot;back&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The probability of the supporting statements can be combined in a [[Bayes&#039; theorem|Bayes]]ian manner. This is in keeping with the Bayesian idea of modifying prior probabilities given new evidence (ie supporting statements). These probabilities can be [[Trust|trust]]-modified as Sapienza proposed (https://ceur-ws.org/Vol-1664/w9.pdf) but since they are likely being assigned by the questioner, we will assume that trust is already built into them. Of more importance is the [[Relevance|relevance]] of the supporting statements. They can range from completely irrelevant to completely relevant. A completely relevant statement will take the full value of the probability it was originally assigned. A completely irrelevant statement would reduce the probability to 50%, where it will have no influence on the outcome. In that sense relevance functions in the same way trust does to modify the probability:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
P_{mod} = P_{nom} + R(P - P_{nom})&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;R&amp;lt;/math&amp;gt; is relevance (0.0 - 1.0) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{mod}&amp;lt;/math&amp;gt; is relevance-modified probability&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{nom}&amp;lt;/math&amp;gt; is the nominal probability (=0.5 for a [[Predicate|predicate]] question)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; is the unmodified probability&lt;br /&gt;
&lt;br /&gt;
After the relevance-modification, each supporting statement is combined in the usual manner via Bayes. For the first two statements,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
P_{comb1,2} = {P_{s1}P_{s2}\over {P_{s1}P_{s2} + (1-P_{s1})(1-P_{s2})}}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and so on for each additional statement. Here it is to be understood that &amp;lt;math&amp;gt;P_{s1}&amp;lt;/math&amp;gt;, etc., are the values &amp;lt;i&amp;gt;after&amp;lt;/i&amp;gt; the relevance modification.&lt;br /&gt;
&lt;br /&gt;
Logic will also have a probability assigned to it to represent its quality. A fully illogical argument would receive a 0, which when combined via Bayes with the supporting statements would render the probability of the entire argument 0. This makes sense because a completely illogical argument, regardless of the strength of its supporting statements, destroys itself. A fully logical argument, however, will not receive a 1 but rather a 0.5. When combined with the supporting statements, a 1 would render the final probability a 1, which is not reasonable. A 0.5, however, would do nothing, and the final probability would be the combined probability of the supporting statements. Thus we assume that perfect logic is neutral and less-than-perfect logic reduces the combined probability of the statements. Again, this seems reasonable. We expect, by default, logical arguments, which then rest on the strength of their supporting statements. If we notice flaws in the logic we discount the strength of the argument accordingly.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s try an example with these ideas:&lt;br /&gt;
&lt;br /&gt;
Question: Are humans causing frog populations to decline?&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Answer / Conclusion: Yes, mankind is causing a fall in frog populations.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Logic: Mankind is causing the fall of frog populations if we can show that frog populations are decreasing over time and can show a human behavior that causes the decline.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Supporting Statements:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# [https://en.wikipedia.org/wiki/Frog#:~:text=Frog%20populations%20have%20declined%20significantly%20since%20the%201950s. Frog populations have declined since the 1950s.]&lt;br /&gt;
# My wife complained that she doesn&#039;t see frogs anymore.&lt;br /&gt;
# [https://wwf.panda.org/discover/our_focus/freshwater_practice/freshwater_biodiversity_222/ Scientists say] that the loss of freshwater habitats has affected frog populations.&lt;br /&gt;
&lt;br /&gt;
We start by judging the quality of the supporting statements. Statement 1 seems well substantiated (a high P) but is not completely relevant because it only hints at human involvement. Statement 2 is completely true but mostly irrelevant. Statement 3 contributes but seems less substantiated than 1, and it does not explicitly tie the decline to a human cause. We proceed by assigning probability and relevance values:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s1}=0.9&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;R_{s1}=0.7&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s1mod} = 0.5 + 0.7(0.9 - 0.5) = 0.78&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s2}=1.0&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;R_{s2}=0.0&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s2mod} = 0.5 + 0.0(1.0 - 0.5) = 0.5&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s3}=0.75&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;R_{s3}=0.5&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s3mod} = 0.5 + 0.5(0.75 - 0.5) = 0.625&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since s2 enters at the neutral value 0.5, it won&#039;t affect the Bayesian calculation and can be ignored, so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{comb,s} = {(0.78)(0.625) \over {0.78(0.625)+0.22(0.375)}} = 0.855&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logic/conclusion in this case is reasonably strong, so we will assign it a high value, say &amp;lt;math&amp;gt;P_l = 0.45&amp;lt;/math&amp;gt; (remember, out of 0.5). It could be improved by observing that the word &amp;quot;behavior&amp;quot; is too general and should be replaced by, say, &amp;quot;policy choice&amp;quot; (e.g. urban growth into ecologically important wetlands). We note here that logic is more than just the mathematical construction of an argument. Since we are speaking a human language, logic might also be flawed because it uses imprecise wording.&lt;br /&gt;
&lt;br /&gt;
Putting &amp;lt;math&amp;gt;P_{comb,s}&amp;lt;/math&amp;gt; together with &amp;lt;math&amp;gt;P_l&amp;lt;/math&amp;gt; using Bayes we obtain a concluding probability:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_c = {0.855(0.45) \over {0.855(0.45) + 0.145(0.55)}} = 0.83&amp;lt;/math&amp;gt;&lt;br /&gt;
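The whole example can be checked with a throwaway script (the values above are hard-coded; tiny differences from the hand calculation are rounding):&lt;br /&gt;

```python
def bayes(p1, p2):
    # Bayes (odds-product) combination of two independent probabilities
    return p1 * p2 / (p1 * p2 + (1 - p1) * (1 - p2))

def rel_mod(p, r):
    # Relevance modification: pull p toward the neutral 0.5
    return 0.5 + r * (p - 0.5)

p1 = rel_mod(0.9, 0.7)    # 0.78
p2 = rel_mod(1.0, 0.0)    # 0.5: neutral, drops out of the combination
p3 = rel_mod(0.75, 0.5)   # 0.625

p_comb = bayes(bayes(p1, p2), p3)  # combined supporting statements
p_c = bayes(p_comb, 0.45)          # fold in the logic score P_l = 0.45
print(round(p_comb, 3), round(p_c, 2))  # 0.855 0.83
```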
&lt;br /&gt;
One potential pitfall of this model is that repetitive supporting statements of high probability will quickly push the combined probability toward 1.0. As we&#039;ve seen before, this is simply a consequence of the Bayes equation. Users would need to watch for such attempts to distort the answer and counter them by removing repetitive statements or marking them irrelevant.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Scoring of individual arguments&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Arguments can be scored on [[Veracity|veracity]], impact, relevance, clarity, and informal quality (lack of fallacies):&lt;br /&gt;
&lt;br /&gt;
- Veracity is how true the argument is based on source information. Source information itself will be scored: &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- Impact &amp;amp; Relevance measure how deeply the argument affects the main contention of the debate (or the argument immediately above it): &amp;lt;math&amp;gt;R&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- Clarity is how understandable the argument is: &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- Informal quality (lack of fallacies) is whether the argument commits any logical fallacies of its own. A list of informal fallacies (and formal ones) will be provided to help users select appropriately: &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since Impact and Relevance are closely related concepts, we will merge them into one: Relevance. The simplest method for combining the categories is an average, possibly weighted:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S = w_vV + w_rR + w_cC + w_fF&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_x&amp;lt;/math&amp;gt; is a weighting for category X (eg Veracity, Relevance, etc)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_v + w_r + w_c + w_f = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This seems reasonable, and if we believe that certain criteria should weigh more (such as Veracity) we can easily make the weighting factors reflect this. Intuitively, however, a category such as Veracity should not only weigh more but have the power to take down the whole argument. After all, if the argument is a straightforward lie, it should receive a score of zero regardless of its other attributes (relevance, clarity, etc.):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;Who is the best choice for President? X is the best choice because he will land a person on Mars in his first year.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This argument is a lie; although it is clear, has no evident fallacies, and is relevant to the question at hand, it should be thrown out.&lt;br /&gt;
&lt;br /&gt;
The same can be said of Relevance: complete irrelevance should likewise render the whole argument moot:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;Who is the best choice for President? X is the best choice because he likes pizza and so do I.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this in mind, we can propose the following equation, which we will dub the &amp;quot;VRFC equation&amp;quot;: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S = VR(w_fF + w_cC)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Where &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; = Score for the argument which varies from 0-1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f&amp;lt;/math&amp;gt; = weighting factor for Fallacies.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_c&amp;lt;/math&amp;gt; = weighting factor for Clarity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f + w_c = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and each of the constituent variables (&amp;lt;math&amp;gt;V, R, F, C&amp;lt;/math&amp;gt;) has a range 0-1.&lt;br /&gt;
&lt;br /&gt;
In this equation either Veracity or Relevance has the power to nullify the entire argument, and a combination of fallacies and lack of clarity can do the same. However, a fallacious argument alone can still have merit, as can an argument whose only flaw is lack of clarity:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;We should support Czechoslovakia because if the Nazis prevail they will conquer the world.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This argument commits the slippery slope [[Fallacy|fallacy]] but is not entirely invalid. Similarly an unclear argument can still manage to make a point:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;We should support Europe because first the Sudetenland, then the Czechs, and soon enough it&#039;s over when all the Brits had to do was get rid of that weakling sooner.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It would seem that fallacies should weigh more heavily than lack of clarity. Proposed weights might be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f = 0.7, w_c = 0.3&amp;lt;/math&amp;gt;&lt;br /&gt;
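A quick numerical illustration of the VRFC equation with these weights (the input scores are invented for the sake of the example):&lt;br /&gt;

```python
def vrfc_score(V, R, F, C, w_f=0.7, w_c=0.3):
    """S = V * R * (w_f * F + w_c * C); V or R at zero nullifies the argument."""
    return V * R * (w_f * F + w_c * C)

# A clear, relevant, fallacy-free lie still scores zero
print(vrfc_score(V=0.0, R=1.0, F=1.0, C=1.0))            # 0.0
# A truthful, mostly relevant argument with a clarity flaw keeps most of its score
print(round(vrfc_score(V=0.9, R=0.8, F=1.0, C=0.6), 3))  # 0.634
```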
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Rolling up the score of argument trees&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation above applies to a single argument but, as we&#039;ve seen, most arguments have sub-arguments below, sub-sub-arguments, and so forth. They are really trees in which each individual argument can be scored separately. &lt;br /&gt;
&lt;br /&gt;
Here we develop a proposed equation for rolling up the score for an argument based on its own score and that of its sub-arguments. In doing so we emphasize that any argument can stand on its own and be scored in the absence of sub-arguments. This creates an interesting dynamic. The sub-argument may bolster or detract from the parent argument but the extent to which it does should be limited.&lt;br /&gt;
&lt;br /&gt;
Furthermore, once the sub-argument becomes weaker than a certain threshold, it should stop influencing the parent argument altogether. Here, we will set this threshold at 0.5. Thus only Pro sub-arguments that score 0.5 or better will have any influence on the parent argument. For Con sub-arguments we will use the same threshold but first modify the sub-argument score by &amp;lt;math&amp;gt;1-S&amp;lt;/math&amp;gt;. Thus a strong Con sub-argument, scoring say 0.9, would enter the calculation with a score of 0.1. The result is a range of scores 0-1 of which 0-0.5 is Con and 0.5-1 is Pro. Scores of exactly 0.5 are neutral.   &lt;br /&gt;
&lt;br /&gt;
Let&#039;s consider the case with one argument and one pro sub-argument and one Con sub-argument.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;Argument, s = 0.9&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;Pro sub-argument, xp = 0.7&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;Con sub-argument, xc = 0.7&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case the argument&#039;s score is 0.9, and both the Pro and Con sub-arguments score 0.7. These numbers would normally be arrived at using the VRFC equation above, but we will simply assume them for now. The Pro sub-argument bolsters the argument because its score (0.7) is greater than 0.5; the Con sub-argument, with the same score, detracts from the argument because it is on the Con side. We emphasize that if these scores were at or below 0.5 they would have no effect on the argument.&lt;br /&gt;
&lt;br /&gt;
The general equation governing this situation is as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;s &amp;gt; 0.5&amp;lt;/math&amp;gt;, when the sub-argument&#039;s raw score (&amp;lt;math&amp;gt;x_p&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;x_c&amp;lt;/math&amp;gt;) is greater than 0.5,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
s_{mod} = 2(1-s)fx + s - (1-s)f&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;s &amp;gt; 0.5&amp;lt;/math&amp;gt;, when the raw score is at or below 0.5 (the sub-argument has no effect),&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
s_{mod} = s&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;s &amp;lt; 0.5&amp;lt;/math&amp;gt;, when the raw score is greater than 0.5,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
s_{mod} = 2sfx + s - sf&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;s &amp;lt; 0.5&amp;lt;/math&amp;gt;, when the raw score is at or below 0.5,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
s_{mod} = s&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; = score for parent argument&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = x_p&amp;lt;/math&amp;gt; = score for Pro arguments&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = 1-x_c&amp;lt;/math&amp;gt; = score for Con arguments&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; = maximum fraction of the possible movement, 0-1&lt;br /&gt;
&lt;br /&gt;
The variable &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is a user-selected number between 0 and 1 and represents the extent to which a sub-argument can move the parent&#039;s score. For example, an argument with &amp;lt;math&amp;gt;s = 0.9&amp;lt;/math&amp;gt;, as in the diagram above, can be improved by at most 0.1, to a maximum of 1. Then &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is the fraction of that 0.1 we will allow. If &amp;lt;math&amp;gt;f = 0.25&amp;lt;/math&amp;gt;, for instance, the maximum movement around 0.9 that a sub-argument can cause is &amp;lt;math&amp;gt;(0.25)(0.1) = 0.025&amp;lt;/math&amp;gt;: the highest score the argument can reach is 0.925 and the lowest is 0.875.&lt;br /&gt;
&lt;br /&gt;
For the argument above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s = 0.9&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;f = 0.25&amp;lt;/math&amp;gt; User input&lt;br /&gt;
&lt;br /&gt;
For the Pro sub-argument:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = x_p = 0.7&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.9)(0.25)(0.7) + 0.9 - (1-0.9)(0.25) = 0.91&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Con sub-argument:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = (1-x_c) = (1-0.7) = 0.3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.9)(0.25)(0.3) + 0.9 - (1-0.9)(0.25) = 0.89&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We can see here that the Pro and Con sub-arguments exactly balance each other since they both have the same score. &lt;br /&gt;
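The modification rule can be sketched as a small function (a sketch that thresholds on the raw sub-argument score, consistent with the worked numbers above; the function name is illustrative):&lt;br /&gt;

```python
def s_mod(s, f, raw, side):
    """Modify parent score s by one sub-argument.

    side is 'pro' or 'con'. Only sub-arguments whose raw score exceeds
    0.5 have any effect; Con scores enter the formula as 1 - raw.
    """
    if raw <= 0.5:
        return s
    x = raw if side == 'pro' else 1.0 - raw
    m = (1.0 - s) if s > 0.5 else s  # maximum allowed movement
    return 2.0 * m * f * x + s - m * f

print(round(s_mod(0.9, 0.25, 0.7, 'pro'), 2))  # 0.91
print(round(s_mod(0.9, 0.25, 0.7, 'con'), 2))  # 0.89
```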
&lt;br /&gt;
The equation above is piecewise linear and can be visualized as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;(Figure: plot of the piecewise linear relationship between &amp;lt;math&amp;gt;s_{mod}&amp;lt;/math&amp;gt; and the sub-argument score.)&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One important property of this equation is that the stronger (or weaker) an argument becomes, the harder it is for a sub-argument to change it. This is because the maximum allowed movement is &amp;lt;math&amp;gt;1-s&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;s &amp;gt; 0.5&amp;lt;/math&amp;gt; or simply &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;s &amp;lt;= 0.5&amp;lt;/math&amp;gt;. The idea behind this property is that very strong arguments should be harder to dislodge precisely because they have covered themselves well. A weaker argument, for instance one that fails to mention an obvious supporting fact, is in a position to be bolstered more by a sub-argument that mentions the fact. Similarly, a very weak argument should be difficult to bolster: if the argument is a lie or irrelevant, there isn&#039;t much that can be done to rescue it.&lt;br /&gt;
&lt;br /&gt;
This property has the further consequence that &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; cannot be changed by the sub-arguments if it is 1 or 0. A truly perfect argument, &amp;lt;math&amp;gt;s = 1&amp;lt;/math&amp;gt;, cannot be weakened no matter how strong its Con sub-argument. Similarly a perfectly flawed argument, &amp;lt;math&amp;gt;s = 0&amp;lt;/math&amp;gt; cannot be bolstered with any Pro sub-argument. We will discuss below a method to deal with the fact that, regardless of the quality of the argument, users may still vote to score arguments 1 or 0.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h4&amp;gt;Population adjustments&amp;lt;/h4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm described above assumes a single vote for the argument and its sub-arguments. In practice this will rarely be the case, because multiple users will vote on each. The effect of a sub-argument on its parent should therefore be weighted by the populations of users who voted on the sub-argument and on the parent argument.&lt;br /&gt;
&lt;br /&gt;
Here we propose a simple modification factor, based on the ratio of users voting for each argument/sub-argument:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop} = (s_{mod} - s){p_s\over p} + s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop}&amp;lt;/math&amp;gt; is the population modified score&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod}&amp;lt;/math&amp;gt; is the modified score without population modifications (see above)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p_s&amp;lt;/math&amp;gt; is the population voting for the sub-argument&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is the population voting for the parent argument&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; is the original score of the parent argument&lt;br /&gt;
&lt;br /&gt;
Usually we expect that sub-arguments will receive fewer votes than parent arguments, so &amp;lt;math&amp;gt;{{p_s\over p} &amp;lt;= 1}&amp;lt;/math&amp;gt; in general. For the case when &amp;lt;math&amp;gt;p_s &amp;gt; p&amp;lt;/math&amp;gt; we will force &amp;lt;math&amp;gt;{p_s\over p} = 1&amp;lt;/math&amp;gt;. Therefore there is no danger that a sub-argument can overwhelm a parent argument by voting power alone. This is in keeping with our philosophy that sub-arguments can have at best a limited effect on parent arguments.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h4&amp;gt;Example calculation&amp;lt;/h4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s do a problem with the following argument tree and &amp;lt;math&amp;gt;f=0.25&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis Statement&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Pro argument, s = 0.9, p = 96&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, Pro sub-argument, xp = 0.7, p = 55&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, Pro sub-argument, xp = 0.8, p = 26&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4, Con sub-argument, xc = 0.6, p = 30&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, Con sub-argument, xc = 0.7, p = 43&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, Pro sub-argument, xp = 0.85, p = 19&amp;quot;]&lt;br /&gt;
    7 [label=&amp;quot;7, Con sub-argument, xc = 0.95, p = 28&amp;quot;] &lt;br /&gt;
    8 [label=&amp;quot;8, Con argument, ....&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 2 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 5 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 3 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 4 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    5 -&amp;gt; 6 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    5 -&amp;gt; 7 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 8 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Our objective here is to roll up the score for the Pro side of this tree. The Con side would be calculated similarly and we will skip this for the sake of brevity. Note that &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; stands for the score and &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is the population voting to produce that score. We start at the bottom, with the 2-3-4 portion of the tree, and for the sake of consistency with the above calculation we will recast &amp;lt;math&amp;gt;x_p&amp;lt;/math&amp;gt; for 2 as &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; and label the population of the sub-arguments as &amp;lt;math&amp;gt;p_s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    2 [label=&amp;quot;2, Pro argument, s = 0.7, p = 55&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, Pro sub-argument, xp = 0.8, ps = 26&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4, Con sub-argument, xc = 0.6, ps = 30&amp;quot;]&lt;br /&gt;
    2 -&amp;gt; 3 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 4 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Pro sub-argument, we write:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.7)(0.25)(0.8) + 0.7 - (1-0.7)(0.25) = 0.745&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We modify this by the respective populations:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop23} = (s_{mod} - s){p_s\over p} + s = (0.745 - 0.7){26\over 55} + 0.7 = 0.721&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Con sub-argument we first modify its score,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = 1 - x_c = 0.4&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and write&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.7)(0.25)(0.4) + 0.7 - (1-0.7)(0.25) = 0.685&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and modify by the respective population,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop24} = (s_{mod} - s){p_s\over p} + s = (0.685 - 0.7){30\over 55} + 0.7 = 0.692&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These two values of &amp;lt;math&amp;gt;s_{mod,pop}&amp;lt;/math&amp;gt; can now be combined to create a new &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; for the Pro argument:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,tot} = (s_{mod,pop23} - s) + (s_{mod,pop24} - s) + s = (0.721 - 0.7) + (0.692 - 0.7) + 0.7 = 0.713&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We note here that the Pro argument got a little stronger as a result of its sub-arguments. The Pro sub-argument was substantially stronger than the Con sub-argument and, although fewer people voted for it, the population difference was not large.&lt;br /&gt;
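The 2-3-4 subtree calculation can be reproduced with a short, self-contained script (the helper names are illustrative):&lt;br /&gt;

```python
def s_mod(s, f, raw, side):
    # Parent-score modification by one sub-argument; Con scores enter as 1 - raw
    if raw <= 0.5:
        return s
    x = raw if side == 'pro' else 1.0 - raw
    m = (1.0 - s) if s > 0.5 else s
    return 2.0 * m * f * x + s - m * f

def pop_adjust(sm, s, p_sub, p_parent):
    # Scale the movement by the voting-population ratio, capped at 1
    return (sm - s) * min(p_sub / p_parent, 1.0) + s

s, f, p = 0.7, 0.25, 55.0
children = [(0.8, 26.0, 'pro'), (0.6, 30.0, 'con')]

# Sum the population-adjusted movements contributed by each child
s_tot = s + sum(pop_adjust(s_mod(s, f, raw, side), s, ps, p) - s
                for raw, ps, side in children)
print(round(s_tot, 3))  # 0.713
```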
&lt;br /&gt;
For the Con sub-argument 5-6-7 we have the following situation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    5 [label=&amp;quot;5, Con argument, s = 0.7, p = 43&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, Pro sub-argument, xp = 0.85, ps = 19&amp;quot;]&lt;br /&gt;
    7 [label=&amp;quot;7, Con sub-argument, xc = 0.95, ps = 28&amp;quot;] &lt;br /&gt;
    5 -&amp;gt; 6 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    5 -&amp;gt; 7 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here, for the Pro sub-argument, we first modify its score since it is the opposite of its parent. It is as if the parent were a Pro argument and the child were a Con argument.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = 1 - x_p = 1 - 0.85 = 0.15&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then proceed as usual with the calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.7)(0.25)(0.15) + 0.7 - (1-0.7)(0.25) = 0.6475&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop56} = (s_{mod} - s){p_s\over p} + s = (0.6475 - 0.7){19\over 43} + 0.7 = 0.677&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Con sub-argument &amp;lt;math&amp;gt;x = x_c = 0.95&amp;lt;/math&amp;gt; since the parent argument is also Con:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.7)(0.25)(0.95) + 0.7 - (1-0.7)(0.25) = 0.7675&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop57} = (s_{mod} - s){p_s\over p} + s = (0.7675 - 0.7){28\over 43} + 0.7 = 0.744&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We combine these two results in the same manner as above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,tot} = (s_{mod,pop56} - s) + (s_{mod,pop57} - s) + s = (0.677 - 0.7) + (0.744 - 0.7) + 0.7 = 0.721&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the bottom layer of the tree calculated, we have the following situation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis Statement&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Pro argument, s = 0.9, p = 96&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, Pro sub-argument, xp = 0.713, ps = 55&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, Con sub-argument, xc = 0.721, ps = 43&amp;quot;]&lt;br /&gt;
    8 [label=&amp;quot;8, Con argument, ....&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 2 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 5 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 8 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All that remains is to calculate the 1-2-5 portion, which is very similar to the 2-3-4 calculation performed above. We will therefore skip the details and simply report the results:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop12} = 0.906&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop15} = 0.895&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod, tot} = 0.901&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We see here that the final result is not much different from the original &amp;lt;math&amp;gt;s = 0.9&amp;lt;/math&amp;gt;. This is because the Pro sub-arguments were essentially cancelled by the Con sub-arguments, a result to be expected in many cases.&lt;br /&gt;
&lt;br /&gt;
In this example, we are skipping the Con side of the overall argument (node 8 in the tree above) because it would be exactly the same as what we have shown. If it had been calculated we would then combine the result for 1 and 8 to produce an overall score for the argument.&lt;br /&gt;
&lt;br /&gt;
The calculations above can be performed with the [https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/164 attached snippet]. The user input portion of the snippet is set up for the calculation we did immediately above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
#User input&lt;br /&gt;
side_parent = &#039;pro&#039; #side, pro or con, that the parent argument is on    &lt;br /&gt;
s = 0.9 #score for the parent argument&lt;br /&gt;
mf = 0.25 #max fraction that parent argument can be changed in terms of (1-s) or (s-0)&lt;br /&gt;
p = 96.0 #population voting for the parent argument&lt;br /&gt;
x_pro_arr = [0.713] #score for the pro children&lt;br /&gt;
x_con_arr = [0.721] #score for the con children&lt;br /&gt;
ps_pro_arr = [55.0] #population voting for each pro child sub-argument&lt;br /&gt;
ps_con_arr = [43.0] #pop voting for each con child sub-argument&lt;br /&gt;
mods_if1or0 = True #True if we want scores of 1 or 0 to be modified to near 1 or 0 (otherwise they can&#039;t be adjusted by this calculation)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the snippet contains arrays to handle any number of child arguments. These are combined in the same way we combined the single Pro and Con sub-arguments above.&lt;br /&gt;
 &lt;br /&gt;
Another variable, &amp;lt;code&amp;gt;mods_if1or0&amp;lt;/code&amp;gt;, controls whether we allow &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; to be modified when it is set to 1 or 0. As discussed above, arguments where &amp;lt;math&amp;gt;s = 1&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;s = 0&amp;lt;/math&amp;gt; are perfect, or perfectly flawed, and thus cannot be changed by sub-arguments. This may be theoretically plausible, but it would not stop users from voting 1 or 0 for arguments. In such cases the &amp;lt;code&amp;gt;mods_if1or0&amp;lt;/code&amp;gt; switch, when True, changes &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; to 0.99 or 0.01 respectively.&lt;br /&gt;
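A minimal sketch of what such a switch might do (the function name and epsilon are illustrative; the actual snippet may differ):&lt;br /&gt;

```python
def clamp_if_extreme(s, mods_if1or0=True, eps=0.01):
    # Scores of exactly 1 or 0 could never be moved by sub-arguments,
    # so optionally nudge them just inside the open interval (0, 1).
    if mods_if1or0 and s >= 1.0:
        return 1.0 - eps
    if mods_if1or0 and s <= 0.0:
        return eps
    return s
```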
&lt;br /&gt;
As a side note, this property parallels Bayesian probabilities of 1 or 0, which also cannot be changed. We have discussed this problem in earlier posts, on the grounds that probabilities of 1 or 0 don&#039;t really exist because they would require an infinite sample size. In the same way, a perfect (or perfectly flawed) argument cannot exist because it would, at some point, run into the same issues that Bayesian probabilities do.&lt;br /&gt;
&lt;br /&gt;
For example, suppose we&#039;ve invented a pill that cures cancer. It is one dose, costs 10 cents to make, has no side effects, has no environmental impact due to manufacture, and is certain to cure someone&#039;s cancer. The argument for a cancer patient taking the pill is, for all practical purposes, perfect. There is simply no plausible argument against it. We could score such an argument a 1 until we remember our probabilities. We only know the pill works and has no side effects on a limited population, say 100,000 patients. We don&#039;t know what effect it will have on the 100,001st patient. So the best we can say is that the drug is 0.99999 effective. Given that the argument is really predicated on the effectiveness of the drug we could say its score is also 0.99999.    &lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;[https://slashdot.org/ Slashdot]&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Slashdot offers a system for content moderation, summarized by the following from [[wikipedia:Slashdot|Wikipedia]]:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;i&amp;gt;Slashdot&#039;s editors are primarily responsible for selecting and editing the primary stories that are posted daily by submitters. The editors provide a one-paragraph summary for each story and a link to an external website where the story originated. Each story becomes the topic for a threaded discussion among the site&#039;s users. A user-based moderation system is employed to filter out abusive or offensive comments.[63] Every comment is initially given a score of −1 to +2, with a default score of +1 for registered users, 0 for anonymous users (Anonymous Coward), +2 for users with high &amp;quot;karma&amp;quot;, or −1 for users with low &amp;quot;karma&amp;quot;. As [[Moderator|moderator]]s read comments attached to articles, they click to moderate the comment, either up (+1) or down (−1). Moderators may choose to attach a particular descriptor to the comments as well, such as &amp;quot;normal&amp;quot;, &amp;quot;offtopic&amp;quot;, &amp;quot;flamebait&amp;quot;, &amp;quot;troll&amp;quot;, &amp;quot;redundant&amp;quot;, &amp;quot;insightful&amp;quot;, &amp;quot;interesting&amp;quot;, &amp;quot;informative&amp;quot;, &amp;quot;funny&amp;quot;, &amp;quot;overrated&amp;quot;, or &amp;quot;underrated&amp;quot;, with each corresponding to a −1 or +1 rating. So a comment may be seen to have a rating of &amp;quot;+1 insightful&amp;quot; or &amp;quot;−1 troll&amp;quot;.[57] Comments are very rarely deleted, even if they contain hateful remarks.&lt;br /&gt;
&lt;br /&gt;
::Starting in August 2019 anonymous comments and postings have been disabled.&lt;br /&gt;
&lt;br /&gt;
::Moderation points add to a user&#039;s rating, which is known as &amp;quot;karma&amp;quot; on Slashdot. Users with high &amp;quot;karma&amp;quot; are eligible to become moderators themselves. The system does not promote regular users as &amp;quot;moderators&amp;quot; and instead assigns five moderation points at a time to users based on the number of comments they have entered in the system – once a user&#039;s moderation points are used up, they can no longer moderate articles (though they can be assigned more moderation points at a later date). Paid staff editors have an unlimited number of moderation points. A given comment can have any integer score from −1 to +5, and registered users of Slashdot can set a personal threshold so that no comments with a lesser score are displayed. For instance, a user reading Slashdot at level +5 will only see the highest rated comments, while a user reading at level −1 will see a more &amp;quot;unfiltered, anarchic version&amp;quot;. A meta-moderation system was implemented on September 7, 1999, to moderate the moderators and help contain abuses in the moderation system. Meta-moderators are presented with a set of moderations that they may rate as either fair or unfair. For each moderation, the meta-moderator sees the original comment and the reason assigned by the moderator (e.g. troll, funny), and the meta-moderator can click to see the context of comments surrounding the one that was moderated.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Slashdot&#039;s purpose is to promote high-quality discussion, which is somewhat similar to our purpose of promoting high-quality arguments. In particular, the [[Reputation|reputation]] (karma) of the moderators is an interesting concept. We could use a similar system to give voters with a good reputation more weight in argument scoring. Another interesting idea is the use of word descriptors to match scores. In our system, descriptors such as &amp;quot;completely irrelevant&amp;quot;, &amp;quot;somewhat irrelevant&amp;quot;, etc. could be a useful way to break the 0-1 scoring scale into corresponding numerical ranges.&lt;br /&gt;
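A sketch of how such descriptors might map onto the 0-1 scale. The particular labels and range boundaries below are illustrative assumptions, not settled choices:&lt;br /&gt;

```python
# Illustrative mapping from verbal relevance descriptors to sub-ranges of
# the 0-1 scoring scale. A vote cast as a descriptor is recorded as the
# midpoint of its range.
DESCRIPTOR_RANGES = {
    "completely irrelevant": (0.0, 0.2),
    "somewhat irrelevant":   (0.2, 0.4),
    "neutral":               (0.4, 0.6),
    "somewhat relevant":     (0.6, 0.8),
    "highly relevant":       (0.8, 1.0),
}

def descriptor_to_score(descriptor: str) -> float:
    lo, hi = DESCRIPTOR_RANGES[descriptor]
    return (lo + hi) / 2

print(round(descriptor_to_score("somewhat irrelevant"), 2))  # 0.3
```

Midpoints are just one choice; a deployed system might instead record the full range and let the aggregation step decide how to collapse it.&lt;br /&gt;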
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Refining our Argument Score with Reputation/Trust&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[#Scoring of individual arguments|Above]] we discussed an equation to score arguments on the basis of Veracity, Relevance, Freedom from Fallacies, and Clarity:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S = VR(w_fF + w_cC)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is overall score&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; is Veracity &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;R&amp;lt;/math&amp;gt; is Relevance&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; is Fallacies (ie freedom from fallacies)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is Clarity&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f&amp;lt;/math&amp;gt; is weighting for Fallacies, eg 0.7&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_c&amp;lt;/math&amp;gt; is weighting for Clarity, eg 0.3&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f + w_c = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
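As a minimal sketch, one user&#039;s score under this equation might be computed as follows; the category votes are invented for illustration, and the 0.7/0.3 weights are the example values given above:&lt;br /&gt;

```python
# Per-user argument score S = V * R * (w_f * F + w_c * C),
# with all inputs on the 0-1 scale and w_f + w_c = 1.
def argument_score(V, R, F, C, w_f=0.7, w_c=0.3):
    assert abs(w_f + w_c - 1) < 1e-9  # weights must sum to 1
    return V * R * (w_f * F + w_c * C)

# One user's votes on the four categories (illustrative values):
S = argument_score(V=0.9, R=0.8, F=1.0, C=0.6)
print(round(S, 4))  # 0.6336
```

Note that Veracity and Relevance multiply the whole score, so either one going to zero nullifies the argument, while Fallacies and Clarity only blend.&lt;br /&gt;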
&lt;br /&gt;
&lt;br /&gt;
Each user would vote on each category and the resulting &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; would be calculated. However, each user may have a different reputation/trust for their ability to judge these four criteria. We can take this into account by simply adding a weighting factor for Trust:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S = (T_vV)(T_rR)(w_fT_fF + w_cT_cC)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;T_x&amp;lt;/math&amp;gt; = Trust in user&#039;s ability to evaluate each category &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; (Veracity, Relevance, Fallacies, and Clarity)&lt;br /&gt;
&lt;br /&gt;
Since the user evaluates trust in multiple categories, it would be useful to generate a composite trust for all the categories:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
T_{comp} = {(T_vV)(T_rR)(w_fT_fF + w_cT_cC) \over{VR(w_fF + w_cC)}}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;T_{comp}&amp;lt;/math&amp;gt; is the composite trust for all categories.&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;math&amp;gt;T_{comp}&amp;lt;/math&amp;gt; can also be seen as an &amp;quot;average&amp;quot; trust, ie the single factor that produces the same argument score as that resulting from the multiple trust factors.&lt;br /&gt;
&lt;br /&gt;
Once we have &amp;lt;math&amp;gt;T_{comp}&amp;lt;/math&amp;gt; we can use it to generate an average &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; for all users:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S_{ave} = {\sum S \over{\sum T_{comp}}}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
That is, instead of dividing by the number of people voting, we divide by the total of how much they &amp;quot;count&amp;quot;. This is similar to the [[A trust weighted averaging technique to supplement straight averaging and Bayes|trust-weighted average scheme]] we have proposed before. It is this average &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; that we will take to be the score for the argument. &lt;br /&gt;
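Putting the pieces together, the trust-weighted average over all voters might be sketched as follows; the votes and trust factors are illustrative:&lt;br /&gt;

```python
# Trust-weighted aggregation of argument scores across users (a sketch).
# Per user: S = (T_v*V)(T_r*R)(w_f*T_f*F + w_c*T_c*C),
# T_comp = S / (V*R*(w_f*F + w_c*C)), and S_ave = sum(S) / sum(T_comp).
W_F, W_C = 0.7, 0.3

def trusted_score(V, R, F, C, T_v, T_r, T_f, T_c):
    return (T_v * V) * (T_r * R) * (W_F * T_f * F + W_C * T_c * C)

def plain_score(V, R, F, C):
    return V * R * (W_F * F + W_C * C)

def average_score(votes):
    total_s = total_t = 0.0
    for v in votes:
        s = trusted_score(**v)
        total_s += s
        total_t += s / plain_score(v["V"], v["R"], v["F"], v["C"])  # T_comp
    return total_s / total_t

votes = [
    dict(V=0.9, R=0.8, F=1.0, C=0.6, T_v=1.0, T_r=1.0, T_f=1.0, T_c=1.0),
    dict(V=0.5, R=1.0, F=0.8, C=0.8, T_v=1.0, T_r=1.0, T_f=1.0, T_c=1.0),
]
# With every trust factor equal to 1 this reduces to a plain average:
print(round(average_score(votes), 4))  # 0.5168
```

When all trust factors are 1, each &lt;math&gt;T_{comp}&lt;/math&gt; is 1 and &lt;math&gt;S_{ave}&lt;/math&gt; collapses to the ordinary mean, which is a useful sanity check on the scheme.&lt;br /&gt;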
&lt;br /&gt;
&amp;lt;h3&amp;gt;Rhetorical vs. [[Practical argument|Practical Argument]]s&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So far our scoring methodology has focused mainly on rhetorical aspects of arguments: is the argument true, relevant to its parent contention, clear, and logical? These criteria certainly touch on the practical impact an argument may have, and so far we have merged Impact into Relevance since the two are hard to distinguish. Consider the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Argument: Biden is a good President because he got an infrastructure \n bill passed that will do good things for the whole country.&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Sub-argument: The bill provides $2.9 million \n to Roanoke airport for improvements.&amp;quot;]&lt;br /&gt;
    2 [label = &amp;quot;2, Sub-argument: The bill provides $150 billion \n to combat climate change.&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here it would seem appropriate to roll Impact into Relevance. Presumably most voters will recognize Sub-argument 2 as being the more relevant one simply because it is, on a practical level, the more impactful one.&lt;br /&gt;
&lt;br /&gt;
However, Relevance is usually a rhetorical quality, not a practical one. In this case, as a matter of rhetoric, both supporting arguments are relevant to the topic at hand. Both are, without question, part of Biden&#039;s infrastructure bill, and both are true, clear, and free of fallacies. Even so, it would be better to separate the issue of whether Biden is a good president from the issue of which infrastructure allocations add the most value. &lt;br /&gt;
&lt;br /&gt;
One reason for this separation, in terms of the [[Argument scoring|math laid out previously]], is that a weak sub-argument stops having any influence once its score goes below 0.5. However, if we are scoring Impact, either implicitly or explicitly, this rule seems inappropriate: it makes many small sub-arguments, like the one about Roanoke airport, stop having any value. Perhaps the Roanoke airport project alone is negligible, but it still has a positive impact and, when added to all the similar projects around the country, would amount to a sizeable contribution. It therefore wouldn&#039;t be correct to nullify it altogether.&lt;br /&gt;
&lt;br /&gt;
We also wouldn&#039;t want arguments from Impact to prematurely influence necessary [[Rhetorical argument|rhetorical argument]]s. Let&#039;s take a favorite from moral philosophy: a healthy young person, John, comes in for a routine checkup at a clinic that has 5 critical patients in need of organ transplants. John has the organs they need and is a match for all of them. The doctors, using a purely utilitarian argument, decide to kill John and harvest his organs. It makes sense: 1 person dies and 5 live, so we&#039;re ahead. (Let&#039;s for the moment disregard legal and other social artifacts that might persuade the doctors otherwise.) This approach contrasts with a deontological perspective, which argues that the ends do not justify the means and that, indeed, the means in this case are all-important. But we can only ferret out the deontological argument by actually having it, within the context of a rhetorical debate. We would hope such a debate would successfully preclude any utilitarian considerations whatsoever.&lt;br /&gt;
&lt;br /&gt;
This leads us to the more general reason why separating the scoring techniques is appropriate. A score for Impact is essentially a [[Cost-benefit-risk analysis|cost-benefit-risk analysis]] for which established techniques exist and which would be confusing if scored together with the rhetorical argument. Indeed, by the time we reach an argument where impact is of interest we have usually dispensed with the rhetorical nature of the argument:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis: We have $150 billion to spend and \n should spend it on climate change&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Electric cars and charging stations.\n I = 0.5&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, Home and building insulation, solar, heat pumps, etc.\n I = 0.3&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, Carbon capture.\n I = 0.2&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 3 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We can see that this argument is likely the outcome of previous arguments where basic points have been agreed to (or at least settled) such as whether climate change is real, Biden is a good president, etc. Here we are at the final stages of the argument, which consists of a resolution to move forward with some practical course of action. &lt;br /&gt;
&lt;br /&gt;
In this case each sub-argument is, in effect, a lobbying effort for the money. All the sub-arguments are equal in terms of their Veracity, rhetorical Relevance, freedom from Fallacies, and Clarity. The only point of dispute is whether the money is better spent on one option or another. In a situation like this, &amp;lt;math&amp;gt;\sum I = 1&amp;lt;/math&amp;gt; would be defined as the effect of all plausible infrastructural investments we could make to impact climate change. In this respect let&#039;s assume we are limited to the three options above. The voters, presumably armed with engineering studies, would then weigh in to assess the impact of each proposal.&lt;br /&gt;
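A sketch of how impact votes might be aggregated and normalized so that &lt;math&gt;\sum I = 1&lt;/math&gt; across the competing options; the voter numbers are invented for illustration:&lt;br /&gt;

```python
# Each voter distributes impact estimates across the competing proposals.
# Votes are averaged per proposal, then normalized so impacts sum to 1.
def aggregate_impacts(votes):
    # votes: list of dicts mapping proposal -> impact estimate
    proposals = votes[0].keys()
    means = {p: sum(v[p] for v in votes) / len(votes) for p in proposals}
    total = sum(means.values())
    return {p: m / total for p, m in means.items()}

votes = [
    {"electric cars": 0.5, "insulation": 0.3, "carbon capture": 0.2},
    {"electric cars": 0.6, "insulation": 0.3, "carbon capture": 0.1},
]
impacts = aggregate_impacts(votes)
print(impacts)
```

The normalization step enforces the definition that the three options together exhaust the plausible investments; trust weighting, as above, could be layered on top of the per-voter averaging.&lt;br /&gt;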
&lt;br /&gt;
It is important to emphasize that the argument at this point is a technical one. It is numerical in nature and hinges on scientific rigor. Our system for assessing trust in the people who can reasonably provide input at this level will be important. One can easily imagine how interested parties (and the merely ignorant) could skew the results with their vote. At the same time we want to encourage participants to move toward this type of debate since it leads to practical benefits and, by its nature, tends to reduce partisan rancor.&lt;br /&gt;
&lt;br /&gt;
The idea outlined here stands apart, by design, from the method that scores the rhetorical quality (eg the VRFC equation) of the argument. Rhetoric is designed to convince you of the argument as a whole, but arguments using Impact are designed to forward a specific recommendation. Clearly, bundling the Impact with the VRFC equation is inappropriate.&lt;br /&gt;
&lt;br /&gt;
In many cases arguments will have a hard time getting to this level of practicality. They tend to remain mired in the basic rhetoric that governs them:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis: Does God exist?&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Yes, because I can speak to Him and \n He responds by doing good things for me.&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, No, because there is no objective \n evidence that there is anyone listening.&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This argument is clearly truncated but we can see where it is going. It is, in essence, one about the Veracity of personal experience vs. demonstrable evidence, which is a philosophical debate. It is hard to ascribe Impact to it because it doesn&#039;t ever get to the point of enumerating proposals.&lt;br /&gt;
&lt;br /&gt;
But it could eventually transform itself into one that did. Let&#039;s assume the participants agree to settle, or at least table, the philosophical debate and concentrate instead on a test of how to improve your life. Both the religious and secular sides propose certain practices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis: Does God exist?&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Yes, because I can speak to Him and \n He responds by doing good things for me.&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, No, because there is no objective \n evidence that there is anyone listening&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3 ...more debate...&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4 ...more debate...&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, Modified Thesis: Is your life best improved by religious or secular practices?&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, Religious&amp;quot;]&lt;br /&gt;
    7 [label=&amp;quot;7, Secular&amp;quot;] &lt;br /&gt;
    8 [label=&amp;quot;8, Pray for 20 minutes \n every night and ask \n for what you need.&amp;quot;]&lt;br /&gt;
    9 [label=&amp;quot;9, Go to your religious \n service every week and \n perform the rituals.&amp;quot;]&lt;br /&gt;
    10 [label=&amp;quot;10, Study for 20 minutes \n every night in an area \n where your problems are.&amp;quot;]&lt;br /&gt;
    11 [label=&amp;quot;11, Find a support group \n and meet with them \n every week.&amp;quot;]   &lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 3 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 4 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    3 -&amp;gt; 5 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    4 -&amp;gt; 5 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    5 -&amp;gt; 6 [dir=&amp;quot;forward&amp;quot;]; &lt;br /&gt;
    5 -&amp;gt; 7 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    6 -&amp;gt; 8 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    6 -&amp;gt; 9 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    7 -&amp;gt; 10 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    7 -&amp;gt; 11 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, the reader is being invited to try the techniques on offer and an evaluation of Impact might thus be made. Note that in this case we have suspended our evaluation of the rhetorical qualities of each argument and begun a new one which seeks a utilitarian appraisal of which side might offer a better personal outcome. This is in keeping with the notion of Impact as separate from the rhetorical qualities of the argument.&lt;br /&gt;
&lt;br /&gt;
We could, in theory, envision someone who is not religious adopting religious practices because he concludes that it does him good. Perhaps he tried both sides and decided the religious side was the one yielding the greatest benefit. This possibility is no doubt one motivation for religious debaters to drop their metaphysical convictions and adopt a practical way to approach their disagreement with the secular side. In any event, this progression seems like a healthy outcome since metaphysical debates usually have little hope of resolution.&lt;br /&gt;
&lt;br /&gt;
Impact is necessarily focused on some particular goal. If the goal is material benefit and you pray and get rich, you might think prayer works. But if the goal is broader than that, you might be disturbed by the fact that you are doing something you don&#039;t really believe is true. Your goal might be to get rich without compromising philosophical integrity. In this case the object of our Impact changes to become more than simply material wealth. A clear statement of the argument thesis in terms of goals is obviously important here.&lt;br /&gt;
&lt;br /&gt;
In spite of our attempts at separating impact from rhetoric we will often find ourselves with a mix of the two:&lt;br /&gt;
[[File:Procon.png|center|frame]]&lt;br /&gt;
Here we&#039;ve scored the Pro argument weaker than the Con argument using our standard rhetorical measures (VRFC). Arguably the Pro argument speculates more (a type of fallacy) about what would happen if we stopped supporting Ukraine. The Con argument isn&#039;t perfect either since it seems to assume that the money saved would actually be used in some constructive way. Still, money not spent is certainly money saved so we&#039;ll mark it down only slightly. That said, it would be ridiculous to stop the argument after concluding the Con side &amp;quot;won&amp;quot;. The argument is not really a rhetorical argument at all but rather a statement of Impact. One side argues for the impact of saving the money. The other argues for the impact of failing to spend the money. We may not know how events would play out in this situation but we acknowledge the risk of catastrophic consequences for failure to act. The impact score for the Pro side is thus much higher. &lt;br /&gt;
&lt;br /&gt;
This is a particular case where the argument should be separated out into one that is explicitly about impact, but it is not clear how best to achieve that. One way would be to allow participants to intervene by asking questions or proposing to move the debate in a more fruitful direction, perhaps by suggesting a new main contention (ie thesis). &lt;br /&gt;
[[File:Procon2.png|center|frame]]&lt;br /&gt;
A basically new debate ensues. Incentives to move the debate might include reputational points for agreeing on a more productive direction.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s look at an example of what this &amp;quot;productive&amp;quot; argument might look like in terms of cost-benefit-risk analysis:&lt;br /&gt;
&lt;br /&gt;
[[File:Productiveargument.png|center|frame]]&lt;br /&gt;
The red boxes are Con arguments and the green box is the Pro argument. We can see right away why the Pro argument might have stiff opposition since it leads to an infinite cost-benefit ratio and is virtually certain to occur (it is our current policy). It is possible that other scenarios could play out within the context of supporting Ukraine but let&#039;s leave these aside for the moment.&lt;br /&gt;
&lt;br /&gt;
So the Pro side looks bad until we start looking at the Con scenarios. In Scenario 1 we envision taking the $75 billion spent on Ukraine aid and providing free community college instead. Doing so provides an economic benefit in the long run, so we provide an estimate for that. However, military and policy experts have said that ignoring Ukraine would result in having to contain a newly resurgent Russia, and this could double our defense costs in the near term ($800 billion). The resulting cost-benefit ratio is 2.9, i.e. greater than 1, which is undesirable. It is also the highest-probability scenario, at 70%. Other scenarios involve some type of war with Russia and would involve an even greater outlay of funds, not to mention the sheer human toll of war. Only Scenario 4 envisions a minor outlay to contain a victorious Russia, which would be offset by the benefit of free community college. This scenario is desirable but unlikely. &lt;br /&gt;
&lt;br /&gt;
These scenarios are much like sub-arguments but stripped of any need to assess their rhetorical quality. By looking at CB ratios and probabilities we can determine which policy direction to take.   &lt;br /&gt;
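One way to reduce such a scenario table to a single decision quantity is probability-weighted net benefit (expected value) rather than CB ratios alone; the scenario figures below are illustrative placeholders, not the numbers from the image above:&lt;br /&gt;

```python
# Probability-weighted comparison of policy scenarios. Each scenario carries
# a probability, a cost, and a benefit (same currency units, e.g. $billions);
# expected net benefit = sum over scenarios of p * (benefit - cost).
def expected_net_benefit(scenarios):
    return sum(s["p"] * (s["benefit"] - s["cost"]) for s in scenarios)

# Placeholder scenarios for one policy option:
option = [
    {"p": 0.7, "cost": 800, "benefit": 275},  # e.g. costly containment
    {"p": 0.3, "cost": 100, "benefit": 275},  # e.g. mild containment
]
print(round(expected_net_benefit(option), 1))  # -315.0
```

Competing options can then be ranked by expected net benefit, with the probabilities themselves supplied (and trust-weighted) by the voters.&lt;br /&gt;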
&lt;br /&gt;
&amp;lt;h3&amp;gt;Interaction effects between arguments and sub-arguments&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Although arguments have been presented as standalone entities, users may often score them after reading, and accounting for, their sub-arguments. In that case, the sub-argument&#039;s influence on the parent argument would be counted twice -- once due to the mathematical effect discussed [[#Scoring of individual arguments|above]], and again due to the influence the sub-argument has on the user&#039;s scoring of the parent argument. &lt;br /&gt;
&lt;br /&gt;
This effect is clearly undesirable and efforts should be made to control it. The software could, for instance, be equipped with the following checks:&lt;br /&gt;
&lt;br /&gt;
* If it detects that a user voted for a sub-argument and subsequently voted for an argument, it can flag the sub-argument score so it does not participate in the mathematical effect it would otherwise have on the argument. We are assuming here, of course, that a user who has voted for a sub-argument will be unable to avoid having it influence his vote for the parent argument.&lt;br /&gt;
* If it detects a vote for an argument but not a vote for its sub-argument, it doesn&#039;t know if the user has read the sub-argument in a way that would influence their vote for the parent argument. In such a case, the user can simply be asked if the sub-argument was read and, if so, to flag any subsequent vote by the user for the sub-argument as a non-participant in its mathematical influence on the parent argument.&lt;br /&gt;
* If the sub-argument does not yet exist when the vote for the parent argument is cast, the software will flag a subsequent vote for any newly developed sub-argument as a legitimate participant in the mathematical influence it has on the parent argument.&lt;br /&gt;
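The three checks above might be sketched as a single decision function; the vote-record fields used here are hypothetical, not part of any existing schema:&lt;br /&gt;

```python
# Decide whether a user's sub-argument vote should participate in the
# mathematical roll-up into the parent argument's score. Hypothetical
# record fields: timestamps of the two votes (None if no vote) and the
# user's self-report of having read the sub-argument before voting on
# the parent.
def sub_vote_counts_mathematically(sub_vote_time, parent_vote_time,
                                   user_read_sub_before_parent_vote):
    if parent_vote_time is None:
        return True  # no parent vote yet; nothing to double-count
    if sub_vote_time is not None and sub_vote_time < parent_vote_time:
        # Check 1: user voted on the sub-argument first, so it already
        # influenced their parent vote; exclude it from the roll-up.
        return False
    if user_read_sub_before_parent_vote:
        # Check 2: user read (but didn't vote on) the sub-argument before
        # the parent vote; a later sub-vote is likewise excluded.
        return False
    # Check 3: the sub-argument appeared, or was first read, only after
    # the parent vote, so its vote legitimately participates.
    return True
```

The function captures the flagging logic only; detecting whether a sub-argument was actually read is the hard part discussed below.&lt;br /&gt;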
&lt;br /&gt;
It is probably difficult to make a system like this foolproof. A user might report not having read a sub-argument that they have, in fact, read. Tracking features could, in theory, be developed to check whether this is the case and react accordingly. However, it would still be difficult to know for sure how deeply the user understands the sub-argument just based on a record that they clicked on it or had it &amp;quot;open&amp;quot;. It also seems like it would be easy to overdo tracking of this kind to the point where it simply turns off an otherwise enthusiastic user.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Applications&amp;diff=1676</id>
		<title>Applications</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Applications&amp;diff=1676"/>
		<updated>2024-09-08T18:19:03Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Ratings system}}&lt;br /&gt;
&lt;br /&gt;
Writ large we are creating a [[self-improvement]] tool with profound social ramifications. We are giving people the ability to use the internet&#039;s vast resources and each other in a constructive way, free of the negative influences that are normally associated with online information. It would seem that a software system of such ambitious magnitude would have a wide variety of applications. So far, however, we have only explored the system at a high level in terms of factual/probabilistic information retrieval and [[debate]]/logic. Superimposed on both is a [[trust]] network where users can rate other users and the information they are receiving in a personalized way. We have further noted that media/social media is a clear application for this system, for example a [[Use Case for Predicate Rating System: Discussion Board Multidimensional Sort|multi-dimensional sorting system]] for discussion boards, such as Reddit. Other applications might be:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Academia|Academia]]&#039;&#039;&#039;: Reliable information that can come from anywhere will free up educational programs to tap these resources and not have to rely on repetitive (and expensive) in-house content creation. Education can finally achieve the decentralization that we have always believed would come but has only made the slowest of progress. One problem we will need to solve is credentialing, that is, how do you obtain a &amp;quot;degree&amp;quot; from information like this? One answer, admittedly vague, is to have online communities function as &amp;quot;schools&amp;quot; that agree to ensure that students have completed some level of mastery over the subject matter.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Education|Educational Material]]&#039;&#039;&#039;: Have you ever read a Wikipedia post that assumed too much knowledge on your part, ie that was pitched to the expert? Wouldn&#039;t it be cool to be able to ask a question and get a response that anticipates your level of knowledge of a subject and pitches its content accordingly? Our system could do that because entries could be rated based on level of expertise required (simple to advanced). Since a demand will probably exist for simpler explanations, contributors could identify and meet them. Users could then adjust parameters having to do with their level and obtain a customized explanation. The system could obviously store articles based on rated expertise and call them up for future requests.&lt;br /&gt;
&lt;br /&gt;
An extension of this idea would be to have questions linked off the main article that readers would create, rate, and upvote. The highest-ranked questions could be answered in their own link or could alert the author that this is an area of confusion that needs to be fixed.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Government|Government]]&#039;&#039;&#039;: Information sites run by local, state, and national officials might be better served by allowing people with first hand experience to provide this information, either alone or as a supplement to the official site. The government site can offer a tailored version of reliable views held by those it considers [[Trust|trust]]worthy. Needless to say, such a tool could also help policymakers design laws and regulations which have been properly vetted and are politically realistic.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Social services|Social Services]]&#039;&#039;&#039;: A great many services exist for the poor and disadvantaged which are underutilized because the paperwork needed to sign up and maintain them is onerous. Furthermore the government agencies responsible for them are not good at providing the needed customer service. The consumers, for their part, tend not to be savvy about navigating the system because they may be old, handicapped, or otherwise incapacitated. Private companies have little incentive to rectify the problem since the profit is not that great. Clearly this is an area which would benefit from a reliable and easy to use information system. Offering preset algorithms and mathematical weights will be important for the consumers of this service since it is unlikely they will be able to manipulate these on their own. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Workplace / Business&#039;&#039;&#039;: The system can be deployed as an intranet for the purpose of information dissemination and project collaboration. It could serve as a way to mitigate the impact of cognitive biases that especially affect large workplaces (eg groupthink, authority bias). Large projects are often undertaken with insufficient scrutiny and without a proper ROI analysis. Even engineering designs are often committed to without a proper high level analysis of alternatives (AoA). Other workplace applications might include a more objective rating system for employees.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Medicine&#039;&#039;&#039;: A large problem in medicine is correct diagnosis, for which a wide network of experts would be ideal. Indeed, in hospitals today doctors rely on the independent assessment of other doctors, NPs, and PAs to come to conclusions about the patients in their care. The usual MO is that many different medical providers come around, ask questions of the patient (many of them redundant), examine the patient, and then get together as a whole team to come up with a final assessment. It would seem that larger teams of specialists could be employed through the tool to achieve better outcomes.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Foreign policy|Foreign Policy]]&#039;&#039;&#039;: The US frequently makes mistakes on the world stage simply because it doesn&#039;t understand the country or region it is interacting with. The Iraq war is a famous example of this. High level &amp;quot;experts&amp;quot; provide information and intelligence that people on the ground know is wrong. The information system we are contemplating can be used to break through this type of bias by including more voices, filtering in or out the correct people based on their expertise, and being able to properly [[Debate|debate]] policy positions before committing to them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Consumer services|Consumer Services / Shopping]]&#039;&#039;&#039;: Most retail websites feature a rating system for the products they sell but it is unclear how honest these are. They are, supposedly, fairly easy to game with fake reviews, bot votes, etc. Our system will obviously be able to filter this out and be highly customizable based on product/service and who is doing the rating. It could replace product information services such as [https://www.nytimes.com/wirecutter/ WireCutter] or [https://www.consumerreports.org/ Consumer Reports] which are presumably unbiased but not exactly clear about how their lack of bias is achieved.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Polling|Polling &amp;amp; Surveys]]&#039;&#039;&#039;: Polling is said to have become less reliable in part because the individuals being polled are harder to reach or skew their answers out of distaste for the media (which they consider polling to be part of). For example, 2016 election pollsters failed to correct properly for the number of older white voters in key Midwestern states. Pollsters normally adjust their [[Prediction|predictions]] from the raw sample data to reflect demographic reality (eg if African American voters are under-represented in the sample, the pollster will weight their responses upward in calculating the final result). Polls run through our system can avoid such mistakes by getting feedback from more users: if a key demographic group isn&#039;t being counted correctly, users can come forward to make that clear. The algorithms that do the final calculations can themselves be adjusted, and their workings will be open to all.&lt;br /&gt;
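The demographic adjustment described above can be sketched as a small post-stratification calculation. All group names, sample shares, and support rates below are invented for illustration:&lt;br /&gt;

```python
# Post-stratification sketch: re-weight raw poll responses so each
# demographic group counts in proportion to its share of the electorate.
# Group names, sample shares, and support rates are all hypothetical.

sample = {            # group -> (share of the sample, support for candidate A)
    "group_x": (0.30, 0.40),
    "group_y": (0.70, 0.60),
}
population = {        # group -> actual share of the electorate
    "group_x": 0.50,
    "group_y": 0.50,
}

# Unweighted estimate: group_x's opinion is under-counted.
raw = sum(share * support for share, support in sample.values())

# Weighted estimate: each group's support is scaled to its true share.
weighted = sum(population[g] * support for g, (_, support) in sample.items())

print(round(raw, 3))       # 0.54
print(round(weighted, 3))  # 0.5
```

Here the under-represented group is weighted back up to its population share before the final percentage is computed, which is exactly the kind of adjustment an open system would expose to scrutiny.&lt;br /&gt;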
&lt;br /&gt;
&#039;&#039;&#039;Predictions&#039;&#039;&#039;: It is well known that if a large number of people guess the number of coins in a jar and we average their results, the final answer tends to be very accurate. This &amp;quot;wisdom of the crowd&amp;quot; effect has been applied to many problems and shown to frequently outperform experts. It relies heavily on independent thinking by the participants (they can&#039;t influence each other or be influenced by some authority), a diversity of perspectives (not all the same kind of people), and proper aggregation techniques. Our system either has these characteristics or can easily be adapted to achieve them. Our network will be able to judge the independence and diversity of its sources, individually and as a whole, and the aggregation techniques fit in with the algorithms we are already developing for the network itself (Bayes, averaging, etc.).&lt;br /&gt;
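As a rough sketch of the effect (the true count and the noise level are invented), averaging many independent, noisy guesses lands close to the true value:&lt;br /&gt;

```python
import random

# Wisdom-of-the-crowd sketch: many independent, noisy guesses at the
# number of coins in a jar, combined by simple averaging.
# The true count and the noise level are invented for illustration.
random.seed(0)
true_count = 500
guesses = [true_count + random.gauss(0, 100) for _ in range(1000)]

crowd_estimate = sum(guesses) / len(guesses)
print(round(crowd_estimate))  # close to 500, much closer than a typical guess
```

The standard error of the mean shrinks with the square root of the crowd size, which is why independence among the guessers matters so much.&lt;br /&gt;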
&lt;br /&gt;
&#039;&#039;&#039;[[Whistleblowing|Whistleblowing and Call-outs]]&#039;&#039;&#039;: Our system can be used to allow people to anonymously call out bad actors or blow the whistle on organizations they are part of. The system would need an anonymity feature so individuals couldn&#039;t be identified but could still be verified as real people (instead of bots). A recent example is [https://www.wbur.org/onpoint/2024/02/13/the-fight-for-transparent-health-care-prices-in-america hospitals not revealing price information] for procedures even though the law requires them to do so. Either they have flouted the law completely or have complied in bad faith, releasing thousands of pages of cryptic procedure codes (along with prices), with no search functionality, that ordinary people will have difficulty interpreting. Those hospitals that have been targeted by complaints, and subsequently by legal authorities, have quickly complied. Running a campaign to force this kind of change is usually difficult. However, since much of the infrastructure for information gathering, [[community]] activism, and consensus will exist in our system, it can be molded fairly easily to handle this situation.&lt;br /&gt;
&lt;br /&gt;
Given the number and nature of these applications, it seems that an eventual extrapolation of this system would be to replace government (at least in its decision-making capacity -- you still need firefighters) and [[Governance|governance]] in general. What is governance? Usually it is an elite group of decision makers, even in its democratic forms. But their elite nature, comparatively small numbers, and the political system they are part of practically ensure bias, to say nothing of capture by interested parties and outright corruption. Polls (wisdom of the crowd) reveal a low level of trust in politicians generally. Most countries have a few political parties, each dedicated to advancing a biased agenda (almost by definition). The US, one of the world&#039;s largest [[Democracy|democracies]] and supposedly the vanguard democratic nation, has exactly two viable political parties. Even at their best, these two parties cannot capture the widely divergent thought of millions of people. Generally speaking, US democracy is widely regarded as highly flawed.&lt;br /&gt;
&lt;br /&gt;
The system we are building will enhance democracy by giving people more direct input, aggregating their contributions more accurately, and providing open debate in which anyone can take part. The [[Community|community]]-building features could be used to include anyone who wants to take part in government, presumably from among the population living in a certain area. They can then &amp;quot;vote&amp;quot; through our system, which will be designed (if they desire) to filter out bad actors, factual mistakes, and bias. Open and full debates on policy decisions can be held, and the community can agree on how to tally up the [[Opinion|opinion]]s that get put in. If the decision is appropriate for the &amp;quot;wisdom of the crowds&amp;quot;, they can tune their algorithms accordingly. If it requires an expert, that can also be arranged. And, since governance will largely be treated as a voluntary effort by all, the system will be difficult for vested interests to manipulate.&lt;br /&gt;
&lt;br /&gt;
This may seem like quite the vision, but let&#039;s remember that the internet&#039;s original promise, that of a democratized information and collaboration system, has not really met its utopian expectations. We can certainly get information faster now, but our social structures have not changed much. It would seem that the problem is not the internet itself, which is nothing more than a protocol and the technology to implement it, but the software built on top. If the internet is simply an online newspaper rather than a paper one, that&#039;s progress, but it&#039;s not revolutionary. If it is online shopping, that too is progress, but we could order from catalogs before the internet. If the internet is social media, well, it&#039;s certainly nice to be able to connect with friends so easily, but it has had disastrous side effects. The internet&#039;s original vision became dominated by narrow objectives and profit seeking. No one actually set out to build a software system which could organically change the way society functions, for the better. And these things don&#039;t build themselves.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Aggregation_techniques&amp;diff=1674</id>
		<title>Aggregation techniques</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Aggregation_techniques&amp;diff=1674"/>
		<updated>2024-09-08T17:53:15Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Technical overview of the ratings system}}&lt;br /&gt;
&lt;br /&gt;
[[Opinion aggregation|Aggregation]] refers to the way we combine the [[Opinion|opinions]] of others to obtain a single final value. A [[Polling|poll]] which takes the sum of each person&#039;s candidate preference in an [[Election|election]] and then calculates the percentage for each candidate is an aggregation technique.&lt;br /&gt;
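As a minimal illustration of that poll-style aggregation (the candidate names are invented):&lt;br /&gt;

```python
from collections import Counter

# A poll as aggregation: sum each person's candidate preference and
# report the percentage for each candidate.
preferences = ["alice", "bob", "alice", "alice", "bob"]

counts = Counter(preferences)
percentages = {name: 100 * n / len(preferences) for name, n in counts.items()}
print(percentages)  # {'alice': 60.0, 'bob': 40.0}
```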
&lt;br /&gt;
A number of aggregation techniques are possible:&lt;br /&gt;
&lt;br /&gt;
# [[wikipedia:Bayes&#039; theorem|Bayes&#039; equation]] with a [[Technical overview of the ratings system|simple example of its use]].&lt;br /&gt;
# [[A simple averaging technique to supplement the Bayes equation|Simple averaging]] and a [[Privacy enhancing straight average algorithm|privacy enhancing variant]] thereof.&lt;br /&gt;
# [[A trust weighted averaging technique to supplement straight averaging and Bayes|Trust weighted averaging]]&lt;br /&gt;
# [[Other possible algorithms for calculating binary predicates|An analysis of these methods and simple weighted averaging]]&lt;br /&gt;
# [[Trust-weighted histograms|Trust-weighted histograms]]&lt;br /&gt;
# [[Trust/Probability/Population graphs algorithm|Trust/Probability/Population graphs algorithm]]&lt;br /&gt;
# [[Binned and continuous distributions|Binned and continuous distributions]]&lt;br /&gt;
# [[Population distributions and graphical output with privacy|Population distributions and graphical output with privacy]]&lt;br /&gt;
&lt;br /&gt;
More algorithms will be developed over time. Furthermore, the software will be built with an API to allow users to add their own algorithms.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=A_voluntary_peer-to-peer_gift_giving_economic_system&amp;diff=1673</id>
		<title>A voluntary peer-to-peer gift giving economic system</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=A_voluntary_peer-to-peer_gift_giving_economic_system&amp;diff=1673"/>
		<updated>2024-09-08T16:37:32Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|A moneyless economy based on reputation and need}}&lt;br /&gt;
&lt;br /&gt;
We have discussed both a subjective and a community-based economic system. The subjective one might be called a &amp;quot;gift-giving&amp;quot; economy, while the community-based one is based on a [[Resource pooling|central resource pool]]. These two systems can converge in the following way: we would have a central pool, an online Amazon-like system, but transactions would be peer-to-peer. Individuals would make claims and producers would fulfill those claims based on [[Ratings system|ratings]] and their production. The system is thus entirely peer-to-peer; the only reason for the central pool is convenience.&lt;br /&gt;
&lt;br /&gt;
It would also include an [[Accounting|accounting system]] where we can track transactions and check if people are [[Wealth hoarding|hoarding]]. Individuals could do this themselves for the things they produce but it would be hard for them to know if the claimant already has 40 chairs, say, from other manufacturers. Judging need is still going to be a big part of knowing whether someone really deserves what you offer.&lt;br /&gt;
&lt;br /&gt;
This idea requires a change in culture because each individual is judging the “[[Deservingness|deservingness]]” of the claimant rather than the system doing so on the basis of a centralized [[community]] rating (which is what my system implies). This is a big difference and requires that we create a “culture of giving”. Not only that, but a culture of giving personally.&lt;br /&gt;
&lt;br /&gt;
It is doable if individuals want to take on the responsibility of judging others. Admittedly, this is our natural state: humans originally lived in close-knit communities where give-and-take economic interactions were personal and commonplace. Although communities certainly had a shared pool of resources that everyone contributed to (and received from), most interactions were personal, and interacting face to face with many different individuals in one’s community was normal.&lt;br /&gt;
&lt;br /&gt;
Nevertheless, it seems it would be hard to transform modern western society into something resembling this. We tend to value privacy more highly and are used to being alone more. Furthermore, any notion we may once have had regarding [[Duty|civic duties]] has gone by the wayside. My guess is that most people will delegate the task of deciding the merits of an economic claim to the central pool’s rating system (kind of like how it is in mine). Personally, I would just want someone else, ie the community, doing the ranking of who gets the stuff I produce. I would also want a community norm for [[Wealth distribution|wealth distribution]], which might be difficult in a peer-to-peer system.&lt;br /&gt;
&lt;br /&gt;
The gift-giving system can also work for [[Investment|investments]], where the investment request goes to the community in terms of goods/services and the providers then choose to “fund” the investment based on ratings. The “funders” can then be compensated in ratings or in terms of an “[[Ownership|ownership stake]]” in the new venture. What would an ownership stake mean? It would mean that part of the production of the new enterprise is counted as their own production and thus they are rated for being more “[[Productivity|productive]]”. When someone who produces in this way asks for goods/services, their rating will reflect their production and they will be seen as more deserving. Our accounting system would no doubt have to take into account this system of shared production.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=A_trust_weighted_averaging_technique_to_supplement_straight_averaging_and_Bayes&amp;diff=1659</id>
		<title>A trust weighted averaging technique to supplement straight averaging and Bayes</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=A_trust_weighted_averaging_technique_to_supplement_straight_averaging_and_Bayes&amp;diff=1659"/>
		<updated>2024-09-08T16:24:09Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Aggregation techniques}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Brief Recap&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Last time we discussed a straightforward [[Internal:FromGitlab/A_simple_averaging_technique_to_supplement_the_Bayes_equation|averaging technique]] for situations where Bayes was not appropriate. Probabilities were [[trust]]-modified (per [https://ceur-ws.org/Vol-1664/w9.pdf Sapienza ] or an [[Internal:FromGitlab/Modification_to_the_Sapienza_probability_adjustment_for_trust_to_include_random_lying,_bias,_and_biased_lying|augmented method]]) and passed up the tree until the top-most node was reached where they would be averaged. The [[Bayes&#039; theorem|Bayesian]] combined probability was also found and a “[[Weight function|weighting]] toward Bayes” could be used to achieve a result in between the average and the Bayesian combined probability.&lt;br /&gt;
&lt;br /&gt;
The [[Trust|trust]]-modification for both the average and the Bayesian combined probability was the same (ie [https://ceur-ws.org/Vol-1664/w9.pdf Sapienza ] or [[Internal:FromGitlab/Modification_to_the_Sapienza_probability_adjustment_for_trust_to_include_random_lying,_bias,_and_biased_lying|other]]). This means that Trust = 0 reduces any probability to 50%, at which point, in Bayes, it stops contributing to the calculation. For averaging, however, it does contribute: one hundred 50% [[Opinion|opinion]]s and one 100% opinion will average to a result pretty close to 50%, whereas in Bayes the combined probability in such a situation would be 100%.&lt;br /&gt;
&lt;br /&gt;
Indeed, this phenomenon is one of the reasons we advocated the averaging approach in the first place. If 100 people are uncertain about something and a 101st person expresses total confidence, you would probably be a little skeptical of the 101st person’s opinion.&lt;br /&gt;
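This dichotomy can be made concrete with a short sketch. Here the Bayesian combination is the standard product rule for independent probability estimates, and the numbers match the scenario above:&lt;br /&gt;

```python
from math import prod

# One hundred 50% opinions plus a single 100% opinion, combined two ways.
opinions = [0.5] * 100 + [1.0]

# Simple averaging: the lone certain voice barely moves the result.
average = sum(opinions) / len(opinions)

# Bayes product rule for combining independent probability estimates:
# the 50% opinions contribute nothing, so the 100% opinion dominates.
num = prod(opinions)
den = num + prod(1 - p for p in opinions)
bayes = num / den

print(round(average, 3))  # 0.505
print(bayes)              # 1.0
```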
&lt;br /&gt;
&amp;lt;h2&amp;gt;Why does a 50% answer lead to such different results?&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Still, these results are so starkly different that they demand some fundamental explanation. What’s going on here? Why does this dichotomy exist? And is there anything we can do about it in terms of trust? Can we employ trust consistently to get comparable results for the averaging and Bayesian techniques?&lt;br /&gt;
&lt;br /&gt;
So, to answer this question, let’s note that there is a difference between someone who replies “I don’t know” because they don’t know anything about the subject (simple ignorance) and someone who says the same because the answer really is unknown. The first person should be removed from the calculation. They simply shouldn’t count. The second person should count because they are contributing a valuable piece of information, that uncertainty is our current state of knowledge on some particular question.&lt;br /&gt;
&lt;br /&gt;
If 100 people in the first category answer the question and then a 101st person, an expert, answers we should ignore the first 100 and go with the expert. If 100 people in the second category answer and a 101st person contradicts their opinion, we should take that contradiction with a grain of salt because the uncertainty of the 100 is probably closer to the truth.&lt;br /&gt;
&lt;br /&gt;
In Bayes, the 50% answer doesn’t count, leading us to believe that Bayesian answers are in the first category. Does this make sense? A Bayesian answer is really just a test that has a probability associated with it. If the test gives you a 50-50 answer it’s as good as a coin toss. The test doesn’t really &amp;lt;i&amp;gt;know&amp;lt;/i&amp;gt; anything about the limits of knowledge in a particular area. So it does seem like the Bayesian 50% answer is just like a dumb human answer, one that should be removed from the calculation.&lt;br /&gt;
&lt;br /&gt;
A human being answering using their own judgement is in the second category, provided their judgement is based on real knowledge. This is something we can presumably assign a trust value to.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;The Trust-weighted Method&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the averaging scheme we proposed last time, we [[Trust modification|trust-modify]] the probabilities using the Sapienza eqn. Therefore Trust = 0% leads to a probability of 50%, which counts in the straight averaging approach (although it doesn’t count in Bayes). Here we propose a scheme in which the Trust is used as a weighting in the averaging scheme itself. Trust = 0% causes a source to not count at all, same as in Bayes. Trust = 100% causes the source to be counted as a full source. One advantage of this approach is that users could assign Trust &amp;amp;gt; 100% if they have an especially high regard for their source.&lt;br /&gt;
&lt;br /&gt;
For a single parent at Node 0 and n children nodes all on the same level, the equation looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;P_{ave,w} = {\sum_{i=0}^n P_iT_i\over {\sum_{i=0}^n T_i}}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_i&amp;lt;/math&amp;gt; is the probability for node &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;i&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;T_i&amp;lt;/math&amp;gt; is the trust between the parent node and node &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;i&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This allows Trust to have a similar effect on averaging that it has for Bayes: it removes a source from the calculation when that source’s Trust is zero.&lt;br /&gt;
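A minimal sketch of the weighted average itself, using illustrative numbers:&lt;br /&gt;

```python
def trust_weighted_average(probs, trusts):
    """Trust-weighted average: P = sum(P_i * T_i) / sum(T_i).
    A source with T = 0 drops out of the calculation entirely."""
    total_trust = sum(trusts)
    if total_trust == 0:
        raise ValueError("total trust is zero; no sources to count")
    return sum(p * t for p, t in zip(probs, trusts)) / total_trust

# Three fully distrusted 90% opinions are ignored; only the two
# trusted opinions (50% and 90%) count, giving their plain average.
print(round(trust_weighted_average([0.5, 0.9, 0.9, 0.9, 0.9],
                                   [1.0, 0.0, 0.0, 0.0, 1.0]), 3))  # 0.7
```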
&lt;br /&gt;
&amp;lt;h2&amp;gt;Example 1&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
First we will reproduce the [[Internal:FromGitlab/Modification_to_the_Sapienza_probability_adjustment_for_trust_to_include_random_lying,_bias,_and_biased_lying|example from last time]] and outline the calculation procedure for the weighted average case.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, P=50%&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, P=50%&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, P=50%&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, P=60%&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4, P=70%&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, P=80%&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, P=90%&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [label=&amp;quot;T=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [label=&amp;quot;T=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 3 [label=&amp;quot;T=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 4 [label=&amp;quot;T=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 5 [label=&amp;quot;T=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 6 [label=&amp;quot;T=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
This calculation works by starting at the bottom and collecting all the probability and trust numbers into a list that we can then apply the averaging calculations to. The leaf nodes, at the very bottom, require no calculation and are ignored.&lt;br /&gt;
&lt;br /&gt;
The first significant node is Node 1, which consists of a list of its own information and that of its children. Each sublist has the probability (P, 1-P) followed by the trust.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;PT_1 = [[0.5, 0.5, 1], [0.6, 0.4, 0.9], [0.7, 0.3, 0.9]]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The weighted average for this case is (just doing P, not 1-P):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{ave,w1} = {.5(1) + 0.6(0.9) + (0.7)(0.9)\over {1 + 0.9 + 0.9}} = 0.596&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node 2 is similar:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;PT_2 = [[0.5, 0.5, 1], [0.8, 0.2, 0.9], [0.9, 0.1, 0.9]]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{ave,w2} = {.5(1) + 0.8(0.9) + (0.9)(0.9)\over {1 + 0.9 + 0.9}} = 0.725&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node 0 now consists of its own node and the two previous results for Nodes 1 and 2, with an additional trust of 0.9 to represent the links 0-1 and 0-2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;PT_0 = [[0.5, 0.5, 1.0], [0.5, 0.5, 1.0, 0.9], [0.6, 0.4, 0.9, 0.9], [0.7, 0.3, 0.9, 0.9], [0.5, 0.5, 1.0, 0.9], [0.8, 0.2, 0.9, 0.9], [0.9, 0.1, 0.9, 0.9]]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this information the calculation can proceed for the weighted average of the top-node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{ave,w0} = {0.5(1) + 0.5(1)(0.9) + 0.6(0.9)(0.9) + 0.7(0.9)(0.9) + 0.5(1)(0.9) + (0.8)(0.9)(0.9) + 0.9(0.9)(0.9)\over {1 + 1(0.9) + (0.9)(0.9) + 0.9(0.9) + 1(0.9) + 0.9(0.9) + 0.9(0.9)}} = 0.634&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will recall that this example yielded the following results for the straight averaging technique and Bayes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{ave} = 0.616&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{bay} = 0.964&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The weighted average and straight average yield similar results because Trust = 0.9 at worst so all nodes have a relatively high weight. Indeed, if the Trust were set equal to 1, both the straight and weighted average results would be exactly the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;P_{ave0} = P_{ave,w0} = 0.643&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following example will show how different the results of these two calculations can be and will illustrate some of the properties mentioned above.&lt;br /&gt;
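Example 1's top-node result can also be reproduced with a short calculation in which each node's weight is the product of the trusts along its path to Node 0 (a sketch of the procedure described above, not the full tree-walking code):&lt;br /&gt;

```python
# Reproducing Example 1's top-node result. Each node's weight is the
# product of the trusts along its path to Node 0 (e.g. node 3 sits two
# T=0.9 links below Node 0, so its weight is 0.9 * 0.9 = 0.81).
nodes = {   # node -> (probability, path-trust weight)
    0: (0.5, 1.0),
    1: (0.5, 0.9),  2: (0.5, 0.9),
    3: (0.6, 0.81), 4: (0.7, 0.81),
    5: (0.8, 0.81), 6: (0.9, 0.81),
}

num = sum(p * t for p, t in nodes.values())
den = sum(t for _, t in nodes.values())
print(round(num / den, 3))  # 0.634, matching the hand calculation above
```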
&lt;br /&gt;
&amp;lt;h2&amp;gt;Example 2&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, P=50%&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, P=90%&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, P=90%&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, P=90%&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4, P=90%&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [label=&amp;quot;T=0.0&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [label=&amp;quot;T=0.0&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 3 [label=&amp;quot;T=0.0&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 4 [label=&amp;quot;T=1.0&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{ave,w0} = 0.7&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{ave,0} = 0.58&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{bay} = 0.9&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example, the only trustworthy nodes are 0 and 4. So the weighted average technique correctly drops nodes 1-3 from the calculation, yielding an average of 0.7, which can be confirmed by inspection. The straight average is 0.58, however, because nodes 1-3 are counted at 50% in this case (after adjusting them for Trust per Sapienza). The Bayesian combined probability is 0.9 because 50% probabilities simply don’t count in the Bayes equation.&lt;br /&gt;
&lt;br /&gt;
The 0.7 is a reasonable answer given that we trust ourselves (Node 0) and an expert (Node 4) who has a more certain answer. Note that the 0.58 unduly influences us toward Nodes that we do not trust because we believe them to be ignorant of the subject matter. The Bayesian result is, for reasons discussed above, also a bit too optimistic since we have another knowledgeable node with a valid, if uncertain, opinion.&lt;br /&gt;
&lt;br /&gt;
This [https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/141 snippet] will reproduce the above calculations and similar examples of your choosing.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=%22Open_source%22_decision_making&amp;diff=1651</id>
		<title>&quot;Open source&quot; decision making</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=%22Open_source%22_decision_making&amp;diff=1651"/>
		<updated>2024-09-08T15:21:51Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Political systems}}&lt;br /&gt;
&lt;br /&gt;
One of the advantages and use cases our system offers is replacing closed decision processes with ones that are open to all. An example of this is college admissions, one that has a direct impact on the public, especially young people. Many high school students aim for the most competitive schools for which the admission rate might be less than 10%. Needless to say, the applicant pool, aside from those who are admitted for non-academic reasons (eg corruption, legacy, athletics), is outstanding. But the number of slots is extremely limited. So the admissions committees are forced to choose a class from candidates that are all uniformly excellent. It is easy to see how their decisions will hinge on subjective nuances such as whether the interview went well or if they liked the essay. It is also easy to see how the committee simply has too much information to deal with effectively. This is a case where opening up the process can provide a more democratic and effective means to select among very similar candidates.&lt;br /&gt;
&lt;br /&gt;
Students could submit their [[applications]] to our rating system and have the network judge them in general or for admission to any particular school (if raters are available with knowledge of specific schools). Obviously the quality of the raters would be important here so their own [[Rating|ratings]] in terms of objectivity would be important. The system could be streamlined by having [[Algorithm|algorithms]] to partially perform ratings of applications automatically. The algorithms, of course, would be open to scrutiny, and modification, by all.&lt;br /&gt;
&lt;br /&gt;
A system of this kind would gain credibility over time and eventually challenge the current closed admissions review committees. Colleges, who should be interested in cutting costs, might choose to hand over the admissions decision-making to our [[ratings system]] after tuning it to their own requirements. This is an example of an institution being dissolved out of existence by the constant presence of a better alternative.&lt;br /&gt;
&lt;br /&gt;
== Replacing closed/centralized traditional rating systems ==&lt;br /&gt;
College admissions brings to mind a related institution, the [https://www.usnews.com/best-colleges US News and World Report college rankings]. US News has had a profound influence on colleges, to the point where some schools have pulled out of its ranking system altogether (eg Harvard Medical School). But most simply try to rank highly by the standards set by US News. The system mostly relies on colleges self-reporting statistics and other facts about themselves. As a result, there is an unspecified amount of fraudulent data being submitted as well as shady data reporting practices for the purpose of obtaining a higher US News rank.&lt;br /&gt;
&lt;br /&gt;
From https://www.usatoday.com/story/news/education/2023/02/22/colleges-quitting-us-news-rankings/11274010002/&lt;br /&gt;
&lt;br /&gt;
“A General Accounting Office investigation in November found that 91 percent of colleges and universities misrepresented their expected cost of attendance”&lt;br /&gt;
&lt;br /&gt;
US News makes money from its [[Ratings system|ratings system]] by selling the right to advertise its rankings:&lt;br /&gt;
&lt;br /&gt;
https://www.nytimes.com/2024/01/06/us/college-rankings-us-news.html&lt;br /&gt;
&lt;br /&gt;
Clearly this, like college admissions, is a closed-door rating system with a lot of problems. Converting it into an open-source rating system, where the quality of the raw data can also be assessed, would be an important reform. Another reform would be to use open algorithms to aggregate the data instead of relying on US News. [[Weighting factor|Weighting factors]], in particular, are largely subjective and would benefit from public scrutiny.&lt;br /&gt;
&lt;br /&gt;
US News also produces rankings of high schools, hospitals, hotels, etc. Since using the US News rankings is quite well established in the public mind, we will have to change that over time. One effective technique, it would seem, is to give users the ability to build their own ratings system for a particular application. This, we should emphasize, is exactly what we are already doing.&lt;br /&gt;
&lt;br /&gt;
An example of this is a NY Times college ranking system that you build yourself:&lt;br /&gt;
&lt;br /&gt;
https://www.nytimes.com/interactive/2023/opinion/build-your-own-college-rankings.html?searchResultPosition=1&lt;br /&gt;
&lt;br /&gt;
Here the user selects the criteria of importance to them (high earnings post grad, low sticker price, campus safety, etc) and the system rates the colleges, around 900 in total, accordingly. It is fun to use and reasonably informative:&lt;br /&gt;
&lt;br /&gt;
[[File:A8ad56450816cdba6989dc5677197da3_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
The type of slider-bar tuning and filters shown here is something we can provide in our [[User interface|user interface]] as well. Indeed we could supply a widget library so users can put together their own customized UIs.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=A_simple_averaging_technique_to_supplement_the_Bayes_equation&amp;diff=1649</id>
		<title>A simple averaging technique to supplement the Bayes equation</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=A_simple_averaging_technique_to_supplement_the_Bayes_equation&amp;diff=1649"/>
		<updated>2024-09-08T15:16:52Z</updated>

		<summary type="html">&lt;p&gt;Lembot: Pywikibot 9.3.1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Aggregation techniques}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Background&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we saw [[Internal:FromGitlab/Error_Bars_and_a_Problem_with_Bayesian_Modeling|previously]], the [[Bayes&#039; theorem|Bayes equation]] can easily be misapplied to situations that are not based on rigorous [[Probability theory|probabilistic]] studies. The example given was along the lines of 100 people who are not sure whether it will be sunny or cloudy tomorrow because either they individually don’t know (P=50%) or their probabilities cancel out to become 50%. But the 101st person is certain that it will be sunny. According to Bayes, this will give you 100% certainty that tomorrow will be sunny because the 50% [[Opinion|opinion]], no matter how it is arrived at, simply has no influence.&lt;br /&gt;
&lt;br /&gt;
Another, simpler example: 10 people each give you a 60% chance of rain tomorrow and thereby, via the Bayes equation, cause you to believe with near certainty that it will rain. The problem here is that the 10 people are not conducting independent tests (or simulations) to judge the probability of rain. More likely, they are simply reflecting the single weather report they all watched on TV.&lt;br /&gt;
&lt;br /&gt;
It is clear that Bayes cannot be used in cases where the source offers no better than a hand-waving probability estimate derived from a casual opinion. Since this will happen in a large number of cases, we need a more realistic way to combine such opinions.&lt;br /&gt;
&lt;br /&gt;
One way is to simply average the probabilities being given by each source. For the 10 people who watched the same weather report, the average will then be 60%, reflecting only the single source they obtained their information from.&lt;br /&gt;
&lt;br /&gt;
So now we would have two methods for combining probabilities: a [[Simple averaging|simple averaging]] technique and Bayes. It would be left to the user to choose which of these to use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;A [[Weighting factor|weighting factor]] between simple averaging and Bayes&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
But these two choices seem like opposite endpoints on a continuum: on one end we have Bayes for rigorous probabilistic tests, and on the other we have simple averaging for the least rigorous opinions. Unlike Bayes, the averaging technique provides no reinforcement (i.e., &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{comb}&amp;lt;/math&amp;gt; cannot be higher than the highest &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P&amp;lt;/math&amp;gt;). But it seems like a large crowd should provide some reinforcement. If 10 people say 60%, isn’t that sometimes better than a single person saying 60%? What if there were two independent weather predictions that give the chance of rain at 60%, and the 10 people are divided between these two sources? Then you could apply Bayes for two sources at 60% and get a reinforced result of 69%.&lt;br /&gt;
&lt;br /&gt;
A straightforward way to do this is to simply have a user-chosen weighting factor between simple averaging and Bayes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;P_{comb} = (1-w_{bay})P_{ave} + w_{bay}P_{bay}   &lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{comb}&amp;lt;/math&amp;gt; is combined probability&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{ave}&amp;lt;/math&amp;gt; is the simple-averaged probability&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{bay}&amp;lt;/math&amp;gt; is the [[Bayes&#039; theorem|Bayesian]] combined probability&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_{bay}&amp;lt;/math&amp;gt; is the weighting toward Bayes. If &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_{bay}&amp;lt;/math&amp;gt; = 0 then only simple averaging is used. If &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_{bay}&amp;lt;/math&amp;gt; = 1 then the algorithm uses Bayes only.&lt;br /&gt;
&lt;br /&gt;
The input probabilities for &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{ave}&amp;lt;/math&amp;gt; and &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{bay}&amp;lt;/math&amp;gt; are the same. That is, they are modified using [[trust]] in exactly the same way (using [https://ceur-ws.org/Vol-1664/w9.pdf Sapienza’s equation], or using the modified forms of this equation described [[Internal:FromGitlab/Modification_to_the_Sapienza_probability_adjustment_for_trust_to_include_random_lying,_bias,_and_biased_lying|here]]).&lt;br /&gt;
&lt;br /&gt;
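To make the continuum concrete, here is a minimal sketch (Python; the function names are illustrative, and the Bayes side assumes a uniform 50% prior and independent sources) of the two combination rules and the weighted blend between them:&lt;br /&gt;
&lt;br /&gt;
```python
def bayes_combine(probs, prior=0.5):
    """Naive-Bayes combination of probability estimates, assuming the
    sources are independent. Works in odds space: prior odds times the
    likelihood ratio p/(1-p) contributed by each source."""
    odds = prior / (1.0 - prior)
    for p in probs:
        odds *= p / (1.0 - p)
    return odds / (1.0 + odds)

def simple_average(probs):
    """Simple average: no reinforcement, result never exceeds max(probs)."""
    return sum(probs) / len(probs)

def blended(probs, w_bay):
    """P_comb = (1 - w_bay) * P_ave + w_bay * P_bay."""
    return (1.0 - w_bay) * simple_average(probs) + w_bay * bayes_combine(probs)

# Two independent sources at 60%: Bayes reinforces them to about 69%
# (odds 2.25, i.e. 0.36 / 0.52), while the simple average stays at 60%.
p_bay = bayes_combine([0.6, 0.6])
p_ave = simple_average([0.6, 0.6])
```
&lt;br /&gt;
Note how averaging two 60% opinions stays at 60%, while Bayes reinforces them to roughly 69%; the weighting factor &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_{bay}&amp;lt;/math&amp;gt; slides the result between these two behaviors.&lt;br /&gt;
&lt;br /&gt;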
&amp;lt;h2&amp;gt;Combining input probabilities in simple averaging&amp;lt;/h2&amp;gt; &lt;br /&gt;
&lt;br /&gt;
However, the input probabilities are not rolled up in exactly the same way to create &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{comb}&amp;lt;/math&amp;gt;. For Bayes, [[Node|nodes]] are combined with their parent to create &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{comb}&amp;lt;/math&amp;gt;; the &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{comb}&amp;lt;/math&amp;gt; of the parents are then combined with their parent to create a new &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{comb}&amp;lt;/math&amp;gt;, and so on all the way to the topmost node. For simple averaging, doing this would double-count nodes, so the technique is instead to append probabilities as we work up the tree and then, once the topmost node is reached, take the average by dividing the sum by the number of nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Example&amp;lt;/h2&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The following example shows how this works:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, P=50%&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, P=50%&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, P=50%&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, P=60%&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4, P=70%&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, P=80%&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, P=90%&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [label=&amp;quot;T=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [label=&amp;quot;T=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 3 [label=&amp;quot;T=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 4 [label=&amp;quot;T=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 5 [label=&amp;quot;T=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 6 [label=&amp;quot;T=0.9&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
We start at the bottom and find the modified probabilities based on [[Trust|Trust]]. As noted above, this is no different than what we’ve always done:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{mod3} = P_{nom} + T(P-P_{nom}) = 0.5 + 0.9(0.6 - 0.5) = 0.59&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{mod4} = 0.5 + 0.9(0.7 - 0.5) = 0.68&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{mod5} = 0.5 + 0.9(0.8 - 0.5) = 0.77&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{mod6} = 0.5 + 0.9(0.9 - 0.5) = 0.86&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The modified probabilities for Nodes 3 and 4 are then appended to Node 1:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{mod1} = [0.50, 0.59, 0.68]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The modified probabilities for Nodes 5 and 6 are appended similarly to Node 2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{mod2} = [0.50, 0.77, 0.86]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These lists of probabilities are modified again by trust (for the 0-1 and 0-2 connection) and appended to the Node 0 probability:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{mod01} = [0.5 + 0.9(0.5 - 0.5), 0.5 + 0.9(0.59 - 0.5), 0.5 + 0.9(0.68 - 0.5)] = [0.5, 0.581, 0.662]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{mod02} = [0.5 + 0.9(0.5 - 0.5), 0.5 + 0.9(0.77 - 0.5), 0.5 + 0.9(0.86 - 0.5)] = [0.5, 0.743, 0.824]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{mod0} = [0.5, 0.5, 0.581, 0.662, 0.5, 0.743, 0.824]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The average for node 0 can now be found:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{ave} = (0.5+0.5+0.581+0.662+0.5+0.743+0.824) / 7 = 4.31 / 7 = 0.616&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Bayesian combined probability for this case is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{bay} = 0.964&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we apply a weighting factor of, say, 20% as discussed above, we obtain:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;w_{bay} = 0.2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{comb} = (1-w_{bay})P_{ave} + w_{bay}P_{bay}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;P_{comb} = (1-0.2)(0.616) + 0.2(0.964) = 0.686&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
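A minimal recursive sketch of this append-and-average rollup (Python; the tree encoding and function name are illustrative assumptions, not the project’s actual code), reproducing the worked example:&lt;br /&gt;
&lt;br /&gt;
```python
def rollup(tree, node, p_nom=0.5):
    """Collect trust-modified probabilities up the tree without
    double-counting: each node contributes its own stated P exactly
    once, and every probability passed up an edge is adjusted by that
    edge's trust T via P_mod = P_nom + T * (P - P_nom)."""
    probs = [tree[node]["P"]]
    for child, trust in tree[node].get("children", []):
        for p in rollup(tree, child, p_nom):
            probs.append(p_nom + trust * (p - p_nom))
    return probs

# The example tree: node id -> stated P and (child, trust) edges.
tree = {
    0: {"P": 0.5, "children": [(1, 0.9), (2, 0.9)]},
    1: {"P": 0.5, "children": [(3, 0.9), (4, 0.9)]},
    2: {"P": 0.5, "children": [(5, 0.9), (6, 0.9)]},
    3: {"P": 0.6},
    4: {"P": 0.7},
    5: {"P": 0.8},
    6: {"P": 0.9},
}

probs = rollup(tree, 0)             # 7 values, as in the worked example
p_ave = sum(probs) / len(probs)     # about 0.616
p_comb = 0.8 * p_ave + 0.2 * 0.964  # blend with the Bayesian value above
```
&lt;br /&gt;
This yields about 0.616 for the straight average and, with the 20% weighting, about 0.685 (0.686 above, which rounds the average first).&lt;br /&gt;
&lt;br /&gt;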
This [https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/140 snippet] performs this calculation and allows you to change values and tree configuration. &lt;br /&gt;
&lt;br /&gt;
This algorithm can be [[Privacy enhancing straight average algorithm|modified to enhance the privacy]] of information transmitted up the nodes.&lt;/div&gt;</summary>
		<author><name>Lembot</name></author>
	</entry>
</feed>