<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.peerverity.info/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Lem</id>
	<title>Information Rating System Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.peerverity.info/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Lem"/>
	<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/wiki/Special:Contributions/Lem"/>
	<updated>2026-04-13T16:28:36Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Main_Page&amp;diff=15003</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Main_Page&amp;diff=15003"/>
		<updated>2026-03-26T13:54:01Z</updated>

		<summary type="html">&lt;p&gt;Lem: Reverted edit by RickieGordon8 (talk) to last revision by Pete&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Welcome to the Information Rating System Wiki.&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://peerverity.info/ PeerVerity] is on a mission to combat disinformation by empowering individuals with tools to share and rate information. The project described in these pages is a peer-to-peer [[ratings system]]. Given the overwhelming quantity of online information, and the ease with which it can manipulate its audience, we seek to develop a better means to qualify, filter, and analyze it. The basic idea is that by using a network of users, and [[Aggregation techniques|aggregating their opinion]], we can more accurately (and quickly) assess information and make better decisions from it. &lt;br /&gt;
&lt;br /&gt;
The system will allow users to create networks of other users who, in turn, have their own networks. Therefore the system can be several levels deep. A question can be asked (eg should the Fed reduce interest rates?) and folks on the network would answer it in a privacy preserving manner. An aggregator would collect the answers, compute an average (or some other aggregate calculation), and return the result to the person asking the question. The questioner can assign [[trust]] levels to individuals in their network and use these in the aggregate calculation. This system, as described, is personal in nature and can be viewed as a [[The subjective and community ratings system|Subjective Ratings System (SRS)]].&lt;br /&gt;
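As a concrete illustration of the aggregation step, a trust-weighted average over the network's answers might look like the sketch below. The function name, data shapes, and numbers are illustrative assumptions, not the project's actual API:

```python
# Illustrative sketch: an aggregator collects answers (probabilities) from
# the questioner's network and returns a trust-weighted average.

def aggregate(answers, trust):
    """answers: user id to probability in [0, 1].
    trust: user id to the questioner's trust level for that user."""
    total = sum(trust[user] for user in answers)
    if total == 0:
        return None  # nobody trusted answered
    weighted = sum(answers[user] * trust[user] for user in answers)
    return weighted / total

# eg "should the Fed reduce interest rates?"
answers = {"bob": 0.9, "carol": 0.4}
trust = {"bob": 0.8, "carol": 0.2}
print(round(aggregate(answers, trust), 2))  # 0.8: bob's answer dominates
```

A real deployment would collect these answers in a privacy-preserving way, as described above; this only shows the arithmetic.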
&lt;br /&gt;
Some people on the network may choose to be public and allow anyone to link to them for their [[opinion]]. Therefore, there will be private nodes, who remain unknown to all but the nodes they choose to link with, and public nodes who are known to everyone.    &lt;br /&gt;
&lt;br /&gt;
We anticipate that like-minded people will join together to form communities and create a variant of the public SRS, known as a [[The subjective and community ratings system|Community Ratings System (CRS)]]. This ratings system will enable a voting system which will do a better job of understanding people&#039;s needs and allow them to participate in policy-making. In this manner, we anticipate a [[direct democracy]] where members themselves would vote on policy issues, pass laws, etc. The technology behind the ratings system would make direct democracy a practical and efficient mechanism for public participation. The CRS might also establish organizations and recognize experts who can help provide policy information in a public manner. Such organizations might then form the implementation arm for substantive policy decisions.&lt;br /&gt;
&lt;br /&gt;
There are other important aspects of the ratings system. The software and its algorithms will be open to all. Users and communities will be able to tweak the settings of current algorithms or come up with new ones of their own (eg for aggregation). The software will include features for [[debate]], an important part of investigating the truth of contentious issues and making policy in a direct democracy. It will also include a robust package of socio-economic simulation tools, based on engineering simulation. These would allow communities to study policy options in detail from a rational, unbiased perspective. Through the software, ordinary people will have the tools to understand the truth and select an optimized choice from many options. &lt;br /&gt;
&lt;br /&gt;
We thus see the formation of a “ratings-based society” (RBS) which can transform our current atmosphere of misinformation and dysfunctional governance. It is hoped that doing so will allow all members to benefit from a new [[consensual reality]] where high quality information is the norm and which complements rational decision-making.&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Community_and_libertarianism&amp;diff=2400</id>
		<title>Community and libertarianism</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Community_and_libertarianism&amp;diff=2400"/>
		<updated>2024-10-09T19:26:54Z</updated>

		<summary type="html">&lt;p&gt;Lem: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Community}}&lt;br /&gt;
&lt;br /&gt;
One aspect of our culture is a strong emphasis on individual agency and rights. We may not all be formally libertarians but a [[Philosophy of John Rawls#Libertarianism|libertarian ethos]] pervades our views and, as noted above, is strongly connected to the [[Cryptocurrency|crypto community]] where the idea of a [[ratings system]] might find its first adopters.&lt;br /&gt;
&lt;br /&gt;
There is inevitably a conflict between a [[Ratings system|ratings system]], [[Community|communities]], and libertarianism. There is further conflict when we add concerns about [[Privacy, identity, and fraud in the ratings system|privacy]], which I would argue is an important component of libertarianism. After all, keeping the state out of our affairs is much easier when it doesn’t &amp;lt;i&amp;gt;know&amp;lt;/i&amp;gt; about our affairs.&lt;br /&gt;
&lt;br /&gt;
But we have defined our communities, and any ratings system they might have, as voluntary in nature. Thus the individual, the libertarian if you will, can choose the [[community]] and ratings system they wish to join. Once in a community, they have continuing influence over its ratings system. In some ways we can view the ratings system as the glue that makes a libertarian society work since it can be tuned to be as obtrusive or unobtrusive as its members want. The ratings system can also be as individualistic or as communitarian as members choose. And members can, of course, leave the community and join another one whenever they want.&lt;br /&gt;
&lt;br /&gt;
Is this enough? Have we reconciled the libertarian with the community? Maybe. We will explore this a little further as we go along. Libertarians will certainly be attracted to joining communities of choice rather than the one they were born into, which they did not choose and over which they have almost no influence. [[wikipedia:Nation state|Nation-states]] are not, to say the least, libertarian societies. But communities with ratings systems where everyone is judged must seem like quite an imposition on the libertarian mind.&lt;br /&gt;
&lt;br /&gt;
Instead of answering this directly let’s point to some of the advantages of libertarian philosophy in any community. The greatest one, it would appear, is that in a conflict between flawed social obligation vs. personal morality, the libertarian would favor the latter. Libertarians don’t commit [[wikipedia:Genocide|genocide]] or knowingly design faulty airplanes. We can expect their sense of personal morality and agency to assert itself in cultures that try to promote questionable behavior.&lt;br /&gt;
&lt;br /&gt;
Healthy communities require dissent and make progress as a result of it. [[wikipedia:Slavery|Slavery]] existed for thousands of years before dissenters realized that it was wrong and started campaigning to get rid of it. But it took a long time to accomplish and many vested interests had to be defeated in order to do it. Today we are experiencing a large scale evolution to a non-majority white Christian society and many people are against it. We are also changing our views on [[wikipedia:Greenhouse gas emissions|carbon emissions]] for [[Environmental policy in communities|environmental]] reasons but, again, there will be plenty of opposition and it will take a long time.&lt;br /&gt;
&lt;br /&gt;
It would help if we had a ratings system that could produce required cultural changes faster. By [[Debate|giving voice to dissenting views]] and simply having a quicker feedback loop, we can reduce the time needed for social change. It would also help to have small communities that can pioneer ideas and see if they work before they are adopted more generally. In this way, both the community and the dissenter are an active part of change.&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Avoiding_feedback&amp;diff=2183</id>
		<title>Avoiding feedback</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Avoiding_feedback&amp;diff=2183"/>
		<updated>2024-09-25T15:43:44Z</updated>

		<summary type="html">&lt;p&gt;Lem: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Technical overview of the ratings system}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;methods-of-preventing-feedback&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
= Methods of preventing feedback =&lt;br /&gt;
&lt;br /&gt;
Say we have a fully-connected [[trust]] network that looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    alice -&amp;gt; bob [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    alice -&amp;gt; carol [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    carol -&amp;gt; bob [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
And &#039;&#039;alice&#039;&#039; wants &#039;&#039;bob&#039;&#039;’s [[Computed opinion|computed opinion]]. A careless implementation could run into an infinite loop, where &#039;&#039;alice&#039;&#039; asks &#039;&#039;bob&#039;&#039; for his computed [[opinion]], who asks &#039;&#039;carol&#039;&#039; for her computed opinion, who asks &#039;&#039;alice&#039;&#039; for her computed opinion, who asks &#039;&#039;bob&#039;&#039;…&lt;br /&gt;
&lt;br /&gt;
This also means that &#039;&#039;alice&#039;&#039; can get &#039;&#039;bob&#039;&#039;’s and &#039;&#039;carol&#039;&#039;’s opinions twice – once directly, once indirectly through the other. This is acceptable, and probably desirable. If we add some [[Trust factor|trust factors]] to this network like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    alice -&amp;gt; bob [label=&amp;quot;trust=0.8&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    alice -&amp;gt; carol [label=&amp;quot;trust=0.2&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    carol -&amp;gt; bob [label=&amp;quot;trust=0.8&amp;quot;,dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
We can see that &#039;&#039;alice&#039;&#039; doesn’t [[Trust|trust]] &#039;&#039;carol&#039;&#039; much, so she will assign a low weight to &#039;&#039;carol&#039;&#039;’s direct opinions. But when she asks &#039;&#039;bob&#039;&#039; for his opinion, &#039;&#039;bob&#039;&#039; rates &#039;&#039;carol&#039;&#039; highly, so his computed opinion will include &#039;&#039;carol&#039;&#039;’s opinion, and &#039;&#039;carol&#039;&#039;’s influence on &#039;&#039;alice&#039;&#039;’s computed opinion will probably be significantly higher through &#039;&#039;bob&#039;&#039; than when presented directly to &#039;&#039;alice&#039;&#039;. But we can say that &#039;&#039;alice&#039;&#039; has assigned high trust to &#039;&#039;bob&#039;&#039; because &#039;&#039;bob&#039;&#039;’s computed opinions are usually right, regardless of how he comes up with them. She doesn’t know that he trusts &#039;&#039;carol&#039;&#039;, and doesn’t need to. So getting &#039;&#039;carol&#039;&#039;’s opinion through &#039;&#039;bob&#039;&#039; is probably a good thing, as her low direct trust in &#039;&#039;carol&#039;&#039; may be a mistake. Or maybe this is a situation where we have different trust factors for different domains: &#039;&#039;alice&#039;&#039; is asking a car question, and &#039;&#039;bob&#039;&#039; knows that &#039;&#039;carol&#039;&#039; is a mechanic and so gives her a high trust factor on automotive [[Predicate|predicate]]s, but &#039;&#039;alice&#039;&#039; just knows her as someone with a reputation for being prone to [[wikipedia:Groupthink|groupthink]] on culture-war issues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;depth-limit&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Depth Limit ==&lt;br /&gt;
&lt;br /&gt;
A simple way to avoid [[wikipedia:Recursion|infinite recursion]] is to put in a depth limit. When &#039;&#039;alice&#039;&#039; asks &#039;&#039;bob&#039;&#039; and &#039;&#039;carol&#039;&#039; for their computed opinions, she will set a depth limit of, say, &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;depth = 4&amp;lt;/math&amp;gt;. Then, when &#039;&#039;bob&#039;&#039; gets the request, he would ask the nodes he trusts for their computed opinions, but pass &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;depth - 1&amp;lt;/math&amp;gt; to &#039;&#039;alice&#039;&#039; and &#039;&#039;carol&#039;&#039;, and so on. Eventually, if you are asked for a computation with &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;depth = 0&amp;lt;/math&amp;gt;, you just return your own personal opinion, or &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt; if you don’t have one. The initial depth limit would need to be set to be roughly the depth of an acyclic version of your network so there aren’t any nodes you can’t reach because they’re too far away.&lt;br /&gt;
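The depth-limit scheme can be sketched as follows. The node layout and the plain average standing in for the real aggregate calculation are assumptions for illustration:

```python
# Minimal sketch of the depth-limit scheme. Node layout and the plain
# average (standing in for the real aggregate calculation) are assumptions.

class Node:
    def __init__(self, opinion, peers=None):
        self.personal_opinion = opinion  # may be None, ie "no opinion"
        self.trusted_peers = peers or []

def computed_opinion(node, depth):
    if depth == 0:
        return node.personal_opinion
    # ask every trusted peer, passing depth - 1 so the recursion must stop
    replies = [computed_opinion(p, depth - 1) for p in node.trusted_peers]
    replies = [r for r in replies if r is not None]
    if node.personal_opinion is not None:
        replies.append(node.personal_opinion)
    if not replies:
        return None
    return sum(replies) / len(replies)  # placeholder aggregation

# the fully-connected alice/bob/carol network from the graph above
alice, bob, carol = Node(0.9), Node(0.6), Node(0.3)
alice.trusted_peers, bob.trusted_peers = [bob, carol], [alice, carol]
carol.trusted_peers = [alice, bob]
print(round(computed_opinion(alice, 1), 3))  # 0.6: mean of 0.6, 0.3, 0.9
```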
&lt;br /&gt;
Pros:&lt;br /&gt;
&lt;br /&gt;
* simple to implement&lt;br /&gt;
* doesn’t require any node identifiers&lt;br /&gt;
* low network overhead (1 byte?)/low computing overhead&lt;br /&gt;
* results are stable – if you ask the same question on the same network multiple times, you’ll get the same answer.&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
&lt;br /&gt;
* people&#039;s opinions are counted multiple times. My gut feeling is that people near you tend to get relatively more weight than those farther away from you.&lt;br /&gt;
* you have to balance your initial value of &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;depth&amp;lt;/math&amp;gt; somewhere between cutting off distant sources of information, and over-valuing nearby information&lt;br /&gt;
&lt;br /&gt;
Notes:&lt;br /&gt;
&lt;br /&gt;
* The ideal value for the three-person fully-connected network would be &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;depth = 1&amp;lt;/math&amp;gt;, and even that may have &#039;&#039;alice&#039;&#039; getting her own personal opinion fed back to her unless the algorithm has a way for alice to tell bob not to turn around and ask her right back, which would be a step towards a node identifier.&lt;br /&gt;
* For [[Privacy|privacy]], you may want to stop at &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;depth = 1&amp;lt;/math&amp;gt; in the algorithm as described above, because allowing &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;depth = 0&amp;lt;/math&amp;gt; requests directly exposes your personal opinion&lt;br /&gt;
* My ten-seconds-of-effort google search turned up these statistics on [[Social network|social networks]]: https://miro.medium.com/v2/resize:fit:4800/format:webp/1*rkITSNe7ngMh8vmQexayQw.png The highlight is that for Twitter, the average path length was 2.6 and the diameter was 18. Other services had somewhat higher average path lengths and somewhat lower diameters, but the difference was still at least a factor of 3.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;unique-identifier-for-each-inquiry&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Unique identifier for each inquiry ==&lt;br /&gt;
&lt;br /&gt;
This has a few variants. The property they all have in common is that when &#039;&#039;alice&#039;&#039; starts trying to figure out her computed opinion, she assigns a [[wikipedia:Universally unique identifier|unique identifier]] to the computation, and passes that identifier to &#039;&#039;bob&#039;&#039; and &#039;&#039;carol&#039;&#039; with her requests. If a node gets a request with an identifier it has never seen before, it does a full computation. If it gets a request with an identifier it has seen before, it does something less. That could be:&lt;br /&gt;
&lt;br /&gt;
* returning &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt;, effectively saying “I have no opinion on this, because my opinion has already been accounted for in this calculation”&lt;br /&gt;
* returning the same (cached) result they returned the first time they did the full computation&lt;br /&gt;
* returning their own personal opinion&lt;br /&gt;
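The "return null if the identifier has been seen" variant might look like this sketch (in-memory nodes and a plain average are stand-ins for the real system):

```python
# Sketch of the unique-identifier scheme, "return null if seen" variant.
import uuid

class Node:
    def __init__(self, opinion):
        self.opinion = opinion
        self.peers = []
        self.seen = set()  # inquiry ids this node has already answered

    def computed_opinion(self, inquiry_id):
        if inquiry_id in self.seen:
            return None  # "my opinion is already accounted for"
        self.seen.add(inquiry_id)
        replies = [p.computed_opinion(inquiry_id) for p in self.peers]
        replies = [r for r in replies if r is not None]
        replies.append(self.opinion)
        return sum(replies) / len(replies)  # placeholder aggregation

alice, bob, carol = Node(0.9), Node(0.6), Node(0.3)
alice.peers, bob.peers, carol.peers = [bob, carol], [alice, carol], [alice, bob]
print(round(alice.computed_opinion(uuid.uuid4()), 3))  # 0.675
```

Because each node answers only the first request it sees for a given identifier, the result depends on which peer happens to be asked first.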
&lt;br /&gt;
Pros:&lt;br /&gt;
&lt;br /&gt;
* low network overhead (16 bytes). low computational overhead&lt;br /&gt;
* doesn’t require node identifiers&lt;br /&gt;
* doesn’t loop back and count someone’s opinions multiple times&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
&lt;br /&gt;
* unstable. results change depending on which order nodes receive requests&lt;br /&gt;
* some variants wouldn’t allow someone’s opinion to count multiple times, which we convinced ourselves was desirable&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;passing-the-path&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Passing the path ==&lt;br /&gt;
&lt;br /&gt;
In this method, each time you send a request to a new node, you include the list of all upstream nodes, so it knows not to loop back. When a node receives a request and all of the nodes it trusts are listed in the request’s upstream nodes, it will just return its personal opinion, or &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt; if it has none.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;seqdiag&amp;quot;&amp;gt;&lt;br /&gt;
seqdiag {&lt;br /&gt;
  alice -&amp;gt; bob [label = &amp;quot;request(ignore=alice)&amp;quot;];&lt;br /&gt;
  bob -&amp;gt; carol [label = &amp;quot;request(ignore=alice,bob)&amp;quot;];&lt;br /&gt;
  bob &amp;lt;-- carol [label = &amp;quot;carol&#039;s personal opinion&amp;quot;];&lt;br /&gt;
  alice &amp;lt;-- bob [label = &amp;quot;bob&#039;s computed opinion&amp;quot;];&lt;br /&gt;
  alice -&amp;gt; carol [label = &amp;quot;request(ignore=alice)&amp;quot;];&lt;br /&gt;
  carol -&amp;gt; bob [label = &amp;quot;request(ignore=alice,carol)&amp;quot;];&lt;br /&gt;
  carol &amp;lt;-- bob [label = &amp;quot;bob&#039;s personal opinion&amp;quot;];&lt;br /&gt;
  alice &amp;lt;-- carol [label = &amp;quot;carol&#039;s computed opinion&amp;quot;];&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
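The sequence above can be sketched in code, again with illustrative node objects and a plain average standing in for the real aggregation:

```python
# Sketch of the path-passing scheme: each request carries the names of all
# upstream nodes. Node layout and plain averaging are illustrative.

class Node:
    def __init__(self, name, opinion):
        self.name, self.opinion, self.peers = name, opinion, []

    def computed_opinion(self, upstream=frozenset()):
        fresh = [p for p in self.peers if p.name not in upstream]
        if not fresh:
            return self.opinion  # everyone I trust is already upstream
        path = upstream | {self.name}
        replies = [p.computed_opinion(path) for p in fresh]
        replies.append(self.opinion)
        return sum(replies) / len(replies)  # placeholder aggregation

alice, bob, carol = Node("alice", 0.9), Node("bob", 0.6), Node("carol", 0.3)
alice.peers, bob.peers, carol.peers = [bob, carol], [alice, carol], [alice, bob]
print(round(alice.computed_opinion(), 3))  # 0.6
```

This reproduces the sequence in the diagram: bob folds in carol's personal opinion, carol folds in bob's, and each is counted twice, as intended.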
Pros:&lt;br /&gt;
&lt;br /&gt;
* stable&lt;br /&gt;
* only counts opinions twice when we want it to&lt;br /&gt;
* doesn’t double-count anyone or loop back&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
&lt;br /&gt;
* requires node identifiers&lt;br /&gt;
* higher network overhead for the list of node identifiers&lt;br /&gt;
* list of upstream nodes leaks a lot of information&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;privacy-preserving-variant&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Privacy-preserving variant ===&lt;br /&gt;
&lt;br /&gt;
To make the above more palatable, we can change it as follows:&lt;br /&gt;
&lt;br /&gt;
* Every time a node makes a request&lt;br /&gt;
** they create a random [[wikipedia:Universally unique identifier|UUID]] to identify themselves for this computation. They store it in a short-term cache so they’ll know it if they see it again.&lt;br /&gt;
** they also generate a number of other decoy UUIDs that help obscure the trust paths. Maybe between 3 and 10 decoys per request.&lt;br /&gt;
** then they send their request with both the real and decoy UUIDs (sorted lexicographically, say)&lt;br /&gt;
* When a node receives a request, it checks to see if any of the UUIDs in the request are in its short-term cache of ids that represent itself.&lt;br /&gt;
** if so, it returns &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt;&lt;br /&gt;
** if not, it computes its opinion normally. When it has to send requests to other nodes, it does the same thing the originating node did: create and cache a real UUID, create several decoys, and append them to the list of UUIDs in the request it’s processing&lt;br /&gt;
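These steps can be sketched as follows; the node layout and plain averaging are illustrative assumptions:

```python
# Sketch of the privacy-preserving variant: every hop adds one real UUID
# (cached so the node can recognize itself later) plus 3-10 decoys.
import random
import uuid

class Node:
    def __init__(self, opinion):
        self.opinion = opinion
        self.peers = []
        self.my_ids = set()  # short-term cache of UUIDs that mean "me"

    def computed_opinion(self, ids=frozenset()):
        if not self.my_ids.isdisjoint(ids):
            return None  # one of those UUIDs is me: I am upstream
        real = uuid.uuid4()
        self.my_ids.add(real)
        decoys = {uuid.uuid4() for _ in range(random.randint(3, 10))}
        outgoing = ids | {real} | decoys
        replies = [p.computed_opinion(outgoing) for p in self.peers]
        replies = [r for r in replies if r is not None]
        replies.append(self.opinion)
        return sum(replies) / len(replies)  # placeholder aggregation

alice, bob, carol = Node(0.9), Node(0.6), Node(0.3)
alice.peers, bob.peers, carol.peers = [bob, carol], [alice, carol], [alice, bob]
print(round(alice.computed_opinion(), 3))  # 0.6: same as passing the path
```

Note that carol still contacts alice on the second branch and only learns from the null reply that alice was upstream, which is exactly where this variant's extra requests come from.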
&lt;br /&gt;
Pros:&lt;br /&gt;
&lt;br /&gt;
* makes it much harder to figure out who is whom&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
&lt;br /&gt;
* makes requests much larger on the network (figure what, maybe another 512 bytes on average?)&lt;br /&gt;
* also increases the number of requests (a full new set of leaf nodes). In the non-privacy version, if &#039;&#039;carol&#039;&#039; was processing &amp;lt;code&amp;gt;request(ignore=alice,bob)&amp;lt;/code&amp;gt;, she knew not to contact &#039;&#039;alice&#039;&#039;. In the privacy variant, she needs to contact &#039;&#039;alice&#039;&#039; because she doesn’t know if any of the UUIDs represent &#039;&#039;alice&#039;&#039;.&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Attenuation_in_trust_networks&amp;diff=2181</id>
		<title>Attenuation in trust networks</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Attenuation_in_trust_networks&amp;diff=2181"/>
		<updated>2024-09-25T15:25:04Z</updated>

		<summary type="html">&lt;p&gt;Lem: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|Technical overview of the ratings system}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Review of Cycling&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To summarize [[Effect of cycling in trust networks|cycling]] briefly, if [[Trust]]=1.0 for all nodes, then cycling will rapidly lead to a confidence of 1 (100%) for a given question (if probabilities for most of the nodes are above 50%). However, if [[Trust|Trust]] &amp;amp;lt; 1.0 then the confidence will [[wikipedia:Asymptote|asymptotically]] approach some limit below 1. The question was asked, why? Why the asymptote? I hadn’t, and still really haven’t, done the math to answer this (See APPENDIX below for an attempt) so I offered up the following qualitative explanation:&lt;br /&gt;
&lt;br /&gt;
# The confidence goes to 100% in a cycling network with Trust=1 (as long as our probabilities are above 50%). If Trust=0, the confidence is simply the confidence of the head node, which trusts itself. We would think that if Trust is between 0 and 1 that the confidence would also go to 100% if enough cycling were to take place, just like the case when Trust=1.0. But this would imply a sharp discontinuity between Trust=0 and everything else. Usually, when given a choice between continuous and discontinuous, you should pick continuous. Nature just seems to work that way, at least from a human-scale point of view. It is more reasonable that a continuous variation exists between Trust=0 and Trust=1.&lt;br /&gt;
# The asymptote is the result of two forces fighting each other: one being the multiple counting of the same nodes over and over (cycling) and the other being the attenuation of the trust as the nodes get further away from the top-most node. When trust=1, no attenuation occurs so the full effect of multiple counting takes over. When Trust &amp;amp;lt; 1, nodes further out exert less and less influence on the final answer because they have to go through multiple trust layers and, hence, become attenuated. A node with Trust=0 has no influence on the final answer which is almost the same as the influence a distant node has. Since its influence is almost nothing we could view that as being effectively the same as a trust of zero. Hence the term “trust attenuation”. As nodes get farther away from the top node their “effective” trust declines more and more.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Effect of Trust Attenuation&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let’s note first that this has nothing to do with cycling. Even in networks where no cycling takes place, if the trust of multiple nodes is below 1, attenuation will cause the last node to have very little influence on the final answer.&lt;br /&gt;
&lt;br /&gt;
Let’s look at the implications of this. If a node that’s many levels down has low trust (due to attenuation) but is the only node that has any informed knowledge of a subject, how do we factor that in without completely washing out the information it provides? It seems a shame to discount the only node that knows anything because of attenuation.&lt;br /&gt;
&lt;br /&gt;
Cases of this would be any question that is difficult to answer and requires an extensive dive into the network to find an authoritative source. In this situation most of the network doesn’t know anything but there’s one guy who does, somewhere deep down. Examples would be serious questions we might really pose: Is Bibi, from Israel, who just contacted me for a business deal, a good guy? Is it better to get a liver transplant in India or the US? No one in my immediate network knows the answer to these questions so it’s going to take a few levels to get there. The problem is that the network path to anyone who knows is long and has trust factors built into every node, which will attenuate the result as we go along. Take a look at the following example:&lt;br /&gt;
&lt;br /&gt;
[[File:354dda5b3fb3b7a7d8220df8a448cb44_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
Here all nodes have a trust of 70% for their succeeding node and Nodes 0-7 have P=50%, meaning they don’t know anything and their contribution to the overall answer is nil. The only node with any real knowledge is 8, which we represent as P=100%. If we use [https://gitlab.syncad.com/peerverity/trust-model-playground/-/wikis/uploads/6484e59605fd81085eaa127bd98bb9ed/sapienza_trusttree.py sapienza_trusttree.py] to compute this case we arrive at Ptot = 53% as shown. This is not a very good result, just slightly better than random.&lt;br /&gt;
&lt;br /&gt;
Let’s contrast this with a network composed of two nodes, where there is a direct link between 0 and 8:&lt;br /&gt;
&lt;br /&gt;
[[File:6945896d20cb2f044417fa6bb1420df0_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
Here, by eliminating all the nodes that contribute nothing to the answer except attenuation, we obtain the far more reasonable result of 85%.&lt;br /&gt;
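Both numbers can be reproduced with the trust-modified Bayes update used by sapienza_trusttree.py (the formula is written out in the APPENDIX below); a quick sketch:

```python
# Reproduces both examples with the trust-modified Bayes update.

def update(p, q, t):
    """Fold a child's confidence q into a node's own confidence p, trust t."""
    yes = p * (0.5 + (q - 0.5) * t)
    no = (1 - p) * (0.5 + ((1 - q) - 0.5) * t)
    return yes / (yes + no)

# chain 0-1-2-...-8: nodes 0-7 know nothing (P=0.5), node 8 knows (P=1.0),
# and every link has trust 0.7; roll up from the deep end
p = 1.0
for _ in range(8):
    p = update(0.5, p, 0.7)
print(round(p, 2))  # 0.53: barely better than random

# direct link 0-8 with the same 0.7 trust
print(round(update(0.5, 1.0, 0.7), 2))  # 0.85
```

With a prior of 0.5 at every intermediate node, each hop simply shrinks the deviation from 0.5 by the trust factor, which is the attenuation at work.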
&lt;br /&gt;
&amp;lt;h2&amp;gt;Ideas for Dealing with Trust Attenuation&amp;lt;/h2&amp;gt; &lt;br /&gt;
&lt;br /&gt;
To handle this we could allow the user to assign trust factors of 1 to nodes that don’t know the answer (ie ones that have 50% confidence). Trust seems like a less likely issue in cases where people simply don’t know but are willing to pass along information they’ve received. We could even automate this – the node could just pass the answer from its child up without doing anything if it, in fact, has no knowledge and, presumably, no stake in the outcome:&lt;br /&gt;
&lt;br /&gt;
[[File:16f4acbb814cbb110a2afa9eff0f51e0_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
Here we’ve cleanly passed up Node 8’s confidence to the top node without that pesky trust attenuation problem getting in the way. But we’ve also introduced another obvious problem in that we are simply taking Node 8’s word for it on its 100% confidence level. Clearly we need to reality-check this with a trust factor but we’ve just assigned them all to 1 to avoid attenuation.&lt;br /&gt;
&lt;br /&gt;
We might call this “the biased source problem”. Frequently the people who know anything about an obscure subject are biased because that subject is their business and they have a stake in the outcome. Our best source on Bibi is his golfing buddy at the chamber of commerce. The best source on liver transplants in India is an Indian hepatologist who’d like to do the transplant himself. The result is unrealistically confident answers.&lt;br /&gt;
&lt;br /&gt;
So a variant on this idea is to allow the trust factor of the next-to-last node to stand but ignore the rest by assigning them to 1. This would preserve the trust information of the guy who knows the guy who knows. Here, Node 7’s trust for Node 8 stands (at 70%) but the rest of the nodes are set to 100%:&lt;br /&gt;
&lt;br /&gt;
[[File:6a34ceceeac7399d5f077c25fabefc7b_image.png|image]]&lt;br /&gt;
&lt;br /&gt;
Another option would be to allow Node 0 to assign a single trust factor to the whole network, perhaps based on how far away the source node (Node 8) is.&lt;br /&gt;
&lt;br /&gt;
In any event, we are short-circuiting our way to the nodes that really matter, the one that knows the guy who knows, and the guy who knows. I think our system should allow this in some form or other. The user would then get an unfiltered [[Opinion|opinion]] on the question at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;APPENDIX -- Some math on the asymptote&amp;lt;/h2&amp;gt;&lt;br /&gt;
This is a continuation of the first paragraph of this post. Please read that before trying to understand what&#039;s going on here. Also, this is strictly a nerd&#039;s eye view of a (very) specific problem. Read it if you&#039;re having insomnia.&lt;br /&gt;
&lt;br /&gt;
I don’t have an elegant proof because it’s a crazy amount of algebra, but we can go over some thoughts:&lt;br /&gt;
&lt;br /&gt;
Let’s take a look at the following network:&lt;br /&gt;
&lt;br /&gt;
0 – 1 – 2 – 3&lt;br /&gt;
&lt;br /&gt;
where P=0.6 for all nodes and T=0.7 for all nodes. To roll up the confidence of Node 0 for any given [[predicate]] question we can start by computing the confidence of Node 2 after taking into account Node 3. This is just the [https://ceur-ws.org/Vol-1664/w9.pdf Bayes eqn. as modified by Sapienza’s trust factor]:&lt;br /&gt;
&lt;br /&gt;
P2 = 0.6*(0.5 + (0.6-0.5)*T) / (0.6*(0.5 + (0.6-0.5)*T) + 0.4*(0.5+(0.4-0.5)*T)) = 0.6654&lt;br /&gt;
&lt;br /&gt;
We continue by calculating the confidence of Node 1 using the above, just calculated, confidence of Node 2:&lt;br /&gt;
&lt;br /&gt;
P1 = 0.6*(0.5 + (0.6654-0.5)*T) / (0.6*(0.5 + (0.6654-0.5)*T) + 0.4*(0.5+(1-0.6654-0.5)*T)) = 0.7062&lt;br /&gt;
&lt;br /&gt;
and so on until we’ve calculated Node 0. If we just substitute T=0.7 into this, we can derive a [[wikipedia:Recurrence relation|recurrence relation]] of the form:&lt;br /&gt;
&lt;br /&gt;
Pnew = (.09 + .42*P) / (.43 + .14*P)&lt;br /&gt;
&lt;br /&gt;
That is, the Probability of the next level (new) is a function of the probability of the previous level. The other numbers are just constants associated with the trust (0.7 to keep things simple) and the Pnom (0.5 for a [[Predicate|predicate]] question). P will vary from 0-1, so we can construct a table of Pnew as a function of P:&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
! When T=0.7&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
| P&lt;br /&gt;
| Pnew&lt;br /&gt;
|-&lt;br /&gt;
| 0&lt;br /&gt;
| 0.209302326&lt;br /&gt;
|-&lt;br /&gt;
| 0.1&lt;br /&gt;
| 0.297297297&lt;br /&gt;
|-&lt;br /&gt;
| 0.2&lt;br /&gt;
| 0.379912664&lt;br /&gt;
|-&lt;br /&gt;
| 0.3&lt;br /&gt;
| 0.457627119&lt;br /&gt;
|-&lt;br /&gt;
| 0.4&lt;br /&gt;
| 0.530864198&lt;br /&gt;
|-&lt;br /&gt;
| 0.5&lt;br /&gt;
| 0.6&lt;br /&gt;
|-&lt;br /&gt;
| 0.6&lt;br /&gt;
| 0.66536965&lt;br /&gt;
|-&lt;br /&gt;
| 0.7&lt;br /&gt;
| 0.727272727&lt;br /&gt;
|-&lt;br /&gt;
| 0.8&lt;br /&gt;
| 0.78597786&lt;br /&gt;
|-&lt;br /&gt;
| 0.9&lt;br /&gt;
| 0.841726619&lt;br /&gt;
|-&lt;br /&gt;
| 1&lt;br /&gt;
| 0.894736842&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
We see here that any P at or below 0.7 yields Pnew &amp;gt; P, while any P at or above 0.8 yields Pnew &amp;lt; P. Therefore, no matter where we start, the recurrence relation will converge somewhere between 0.7 and 0.8. If we run sapienza_trusttree.py for many nodes (Level = 15) we obtain P=0.7668. We can also simply set Pnew = P in the equation above and solve to get the same result.&lt;br /&gt;
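The convergence claim is easy to check numerically. Below is a minimal stand-alone sketch (not the project's sapienza_trusttree.py) of the trust-modified update with Pnom = 0.5 and a node confidence of 0.6, iterated from P = 0.6; for T=0.7 it reduces to the recurrence above.

```python
def pnew(p, t=0.7):
    # Trust-modified Bayes update with Pnom = 0.5 and node confidence 0.6;
    # for t = 0.7 this reduces to Pnew = (0.09 + 0.42*P) / (0.43 + 0.14*P).
    num = 0.3 + 0.6 * t * (p - 0.5)
    den = num + 0.2 + 0.4 * (0.5 - p) * t
    return num / den

p = 0.6
for _ in range(50):
    p = pnew(p)
print(round(p, 4))  # converges to about 0.767, matching the value above
```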
&lt;br /&gt;
When Trust=1, we get the following recurrence relation:&lt;br /&gt;
&lt;br /&gt;
Pnew = 0.6*P / (0.2*P + 0.4)&lt;br /&gt;
&lt;br /&gt;
Which leads to a table like this:&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
! When T=1&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
| P&lt;br /&gt;
| Pnew&lt;br /&gt;
|-&lt;br /&gt;
| 0&lt;br /&gt;
| 0&lt;br /&gt;
|-&lt;br /&gt;
| 0.1&lt;br /&gt;
| 0.142857143&lt;br /&gt;
|-&lt;br /&gt;
| 0.2&lt;br /&gt;
| 0.272727273&lt;br /&gt;
|-&lt;br /&gt;
| 0.3&lt;br /&gt;
| 0.391304348&lt;br /&gt;
|-&lt;br /&gt;
| 0.4&lt;br /&gt;
| 0.5&lt;br /&gt;
|-&lt;br /&gt;
| 0.5&lt;br /&gt;
| 0.6&lt;br /&gt;
|-&lt;br /&gt;
| 0.6&lt;br /&gt;
| 0.692307692&lt;br /&gt;
|-&lt;br /&gt;
| 0.7&lt;br /&gt;
| 0.777777778&lt;br /&gt;
|-&lt;br /&gt;
| 0.8&lt;br /&gt;
| 0.857142857&lt;br /&gt;
|-&lt;br /&gt;
| 0.9&lt;br /&gt;
| 0.931034483&lt;br /&gt;
|-&lt;br /&gt;
| 1&lt;br /&gt;
| 1&lt;br /&gt;
|-&lt;br /&gt;
| 1.1&lt;br /&gt;
| 1.064516129&lt;br /&gt;
|-&lt;br /&gt;
| 1.2&lt;br /&gt;
| 1.125&lt;br /&gt;
|-&lt;br /&gt;
| 1.3&lt;br /&gt;
| 1.181818182&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
This shows that every value of P above 0 and below 1 yields Pnew &amp;gt; P. The asymptote here, of course, is 1, as we already know.&lt;br /&gt;
&lt;br /&gt;
The only difference between these recurrence relations is the trust. Therefore it is trust below 1 that generates a recurrence relation whose asymptote lies below 1.&lt;br /&gt;
&lt;br /&gt;
Based on this, we can write a general equation as a function of T and P:&lt;br /&gt;
&lt;br /&gt;
Pnew = (0.3 + 0.6*T*(P-0.5)) / ( (0.3 + 0.6*T*(P-0.5)) + 0.2 + 0.4*(0.5-P)*T )&lt;br /&gt;
&lt;br /&gt;
If we set Pnew = P, we obtain, after some algebra,&lt;br /&gt;
&lt;br /&gt;
P**2 + P*(2.5/T - 3.5) + (1.5 - 1.5/T) = 0&lt;br /&gt;
&lt;br /&gt;
The solution of this equation defines the asymptote (Pasymp), ie the highest probability we can achieve given the trust. It can be solved numerically or using the quadratic formula:&lt;br /&gt;
&lt;br /&gt;
P = ( (3.5-2.5/T) +- SQRT((2.5/T - 3.5)**2 - 4*(1.5-1.5/T)) ) / 2&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
! T&lt;br /&gt;
! Pasymp&lt;br /&gt;
|-&lt;br /&gt;
| 0.01&lt;br /&gt;
| 0.60096891&lt;br /&gt;
|-&lt;br /&gt;
| 0.1&lt;br /&gt;
| 0.610567768&lt;br /&gt;
|-&lt;br /&gt;
| 0.2&lt;br /&gt;
| 0.623475383&lt;br /&gt;
|-&lt;br /&gt;
| 0.3&lt;br /&gt;
| 0.639520137&lt;br /&gt;
|-&lt;br /&gt;
| 0.4&lt;br /&gt;
| 0.659852575&lt;br /&gt;
|-&lt;br /&gt;
| 0.5&lt;br /&gt;
| 0.686140662&lt;br /&gt;
|-&lt;br /&gt;
| 0.6&lt;br /&gt;
| 0.72075922&lt;br /&gt;
|-&lt;br /&gt;
| 0.7&lt;br /&gt;
| 0.766864466&lt;br /&gt;
|-&lt;br /&gt;
| 0.8&lt;br /&gt;
| 0.827934423&lt;br /&gt;
|-&lt;br /&gt;
| 0.9&lt;br /&gt;
| 0.906150469&lt;br /&gt;
|-&lt;br /&gt;
| 1&lt;br /&gt;
| 1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[[File:96bd3aa31a128c2ea22634e90459d987_image.png|image]]&lt;br /&gt;
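The Pasymp column above can be reproduced directly from the quadratic formula. A quick sketch (the function name pasymp is just for illustration), taking the positive root:

```python
import math

def pasymp(t):
    # Positive root of P**2 + P*(2.5/T - 3.5) + (1.5 - 1.5/T) = 0,
    # i.e. P = ((3.5 - 2.5/T) + sqrt((2.5/T - 3.5)**2 - 4*(1.5 - 1.5/T))) / 2
    b = 2.5 / t - 3.5
    c = 1.5 - 1.5 / t
    return (-b + math.sqrt(b * b - 4 * c)) / 2

for t in (0.1, 0.5, 0.7, 1.0):
    print(t, round(pasymp(t), 6))
```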
&lt;br /&gt;
This is not quite a proof but it gives us a more rigorous picture of what’s going on. When trust falls to 0, the confidence of the head node, 0.6, is as good as it gets. If Trust is 1, we will eventually reach P=1 after enough nodes have been factored in. For all T in between, we have a continuously varying degree of P. This makes sense and confirms our initial intuition that a continuous variation in P will result from varying T.&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Allowing_for_more_than_predicate_questions_in_the_trust-weighted_histogram_(TWH)_algorithm&amp;diff=2178</id>
		<title>Allowing for more than predicate questions in the trust-weighted histogram (TWH) algorithm</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Allowing_for_more_than_predicate_questions_in_the_trust-weighted_histogram_(TWH)_algorithm&amp;diff=2178"/>
		<updated>2024-09-25T14:55:43Z</updated>

		<summary type="html">&lt;p&gt;Lem: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Main|[[Trust-weighted histogram|Trust-weighted histograms]]}}&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;trust_weighted_histogram&amp;lt;/code&amp;gt; (TWH) algorithm was discussed last time but only handles [[predicate]] questions. Indeed, it only handles single-valued probabilities under the assumption that the other [[Probability|probability]] in a [[Predicate|predicate]] [[wikipedia:Probability distribution|distribution]] is (1-P). Although the examples that exercise it are actually provided as a two-valued distribution [P, 1-P], the algorithm ignores the 2nd value.&lt;br /&gt;
&lt;br /&gt;
This was corrected using the &amp;lt;code&amp;gt;trust_weighted_histogram_sets&amp;lt;/code&amp;gt; algorithm where each “set” corresponds to a different probability and results in a different [[Histogram|histogram]]. In essence the algorithm does the same calculation as before for multiple probabilities and presents its results as a set of histograms, one for each probability.&lt;br /&gt;
&lt;br /&gt;
The setup and example are exactly the same as in the [[Internal:FromGitlab/Notes_on_using_Lem&#039;s_algorithm_interface|previous discussion]] and [[Internal:FromGitlab/Dan&#039;s_proposal_for_trust_weighted_histograms|Eric’s description of the TWH algorithm]]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    1 [label=&amp;quot;1, P=20%&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, P=30%&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, P=40%&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4, P=45%&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, P=60%&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, P=90%&amp;quot;]&lt;br /&gt;
    7 [label=&amp;quot;7, P=55%&amp;quot;]&lt;br /&gt;
    8 [label=&amp;quot;8, P=65%&amp;quot;]&lt;br /&gt;
    9 [label=&amp;quot;9, P=70%&amp;quot;]&lt;br /&gt;
    10 [label=&amp;quot;10, P=80%&amp;quot;]&lt;br /&gt;
    1 -&amp;gt; 2 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 3 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 4 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 5 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 6 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    3 -&amp;gt; 7 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    3 -&amp;gt; 8 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    4 -&amp;gt; 9 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    4 -&amp;gt; 10 [label=&amp;quot;T=0.9&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
The output for both the actual and intermediate results has a 2nd set of values representing the 2nd probability in the predicate distribution:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[[Trust|trust]]_weighted_histogram_sets test -- TEST1:&lt;br /&gt;
Trust Weighted Histogram output256 =  [[0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.9, 0.0, 0.0, 0.9], [0.0, 0.9, 0.0, 0.0, 0.9, 0.0, 0.0, 1.0, 0.0, 0.0]]&lt;br /&gt;
Trust Weighted Histogram output378 =  [[0.0, 0.0, 0.0, 0.0, 1.0, 0.9, 0.9, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.9, 0.9, 0.0, 1.0, 0.0, 0.0, 0.0]]&lt;br /&gt;
Trust Weighted Histogram output4910 =  [[0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.9, 0.9, 0.0], [0.0, 0.0, 0.9, 0.9, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]]&lt;br /&gt;
Trust Weighted Histogram Overall output1234 =  [[0.0, 0.0, 0.5555555555555556, 0.5, 1.0, 0.45, 0.9, 0.45, 0.45, 0.45], [0.0, 0.5, 0.5, 1.0, 1.0, 0.5555555555555556, 0.5555555555555556, 0.5555555555555556, 0.6172839506172839, 0.0]]&amp;lt;/pre&amp;gt;&lt;br /&gt;
This is, of course, still a predicate question but any number of probabilities can now be handled. Another test (TEST2) of this algorithm is provided in the [https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/148 snippet], which adds a 3rd probability:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;trust_weighted_histogram_sets test -- TEST2:&lt;br /&gt;
Add a 3rd probability&lt;br /&gt;
Trust Weighted Histogram output256 =  [[0.0, 0.0, 1.0, 0.0, 0.0, 0.9, 0.0, 0.0, 0.9, 0.0], [0.0, 0.9, 0.0, 0.0, 0.9, 0.0, 0.0, 1.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]&lt;br /&gt;
Trust Weighted Histogram output378 =  [[0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.4736842105263158, 0.0, 0.0], [0.0, 0.0, 0.0, 0.9, 0.0, 0.9, 1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]&lt;br /&gt;
Trust Weighted Histogram output4910 =  [[0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.9, 0.9, 0.0, 0.0], [0.0, 0.0, 0.9, 0.9, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]&lt;br /&gt;
Trust Weighted Histogram Overall output1234 =  [[0.0, 0.5555555555555556, 0.5, 1.0, 0.0, 0.45, 0.45, 0.6868421052631579, 0.45, 0.0], [0.0, 0.4736842105263158, 0.4736842105263158, 0.9473684210526316, 0.4736842105263158, 1.0, 0.5263157894736842, 0.5263157894736842, 0.5847953216374269, 0.0], [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]&amp;lt;/pre&amp;gt;&lt;br /&gt;
The functions for this algorithm are denoted by the suffix &amp;lt;code&amp;gt;_sets&amp;lt;/code&amp;gt;. They are largely the same as the functions for the [[Internal:FromGitlab/Dan&#039;s_proposal_for_trust_weighted_histograms|TWH]] algorithm discussed previously, modified to handle one extra dimension. This is mainly seen in the list/array operations, each of which had to become a “list of lists” rather than a simple list.&lt;br /&gt;
&lt;br /&gt;
For example the functions GetHForLeafNode and GetHForLeafNode_sets differ in exactly this manner:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;def GetHForLeafNode(opinion, bins):&lt;br /&gt;
    P = opinion.pdf_points&lt;br /&gt;
    H = []&lt;br /&gt;
    for binx in bins:&lt;br /&gt;
        if(P[0] &amp;gt;= binx[0] and P[0] &amp;lt; binx[1]):&lt;br /&gt;
            H.append(1.0)&lt;br /&gt;
        else:&lt;br /&gt;
            H.append(0.0)&lt;br /&gt;
    return H&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;def GetHForLeafNode_sets(opinion, bins):&lt;br /&gt;
    P = opinion.pdf_points&lt;br /&gt;
    H = []&lt;br /&gt;
    Hlist = []&lt;br /&gt;
    for p in P:&lt;br /&gt;
        for binx in bins:&lt;br /&gt;
            if(p &amp;gt;= binx[0] and p &amp;lt; binx[1]):&lt;br /&gt;
                H.append(1.0)&lt;br /&gt;
            else:&lt;br /&gt;
                H.append(0.0)&lt;br /&gt;
        Hlist.append(H)&lt;br /&gt;
        H = []&lt;br /&gt;
    return Hlist&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This function takes the [[Opinion|opinion]] of a leaf node and determines which probability bin it corresponds to. It places a 1 in the proper bin and 0 in the other bins, creating a histogram. The first function ignores the fact that P is multi-valued and simply uses the first value, P[0]. The second iterates over all the P values in an outer loop. An &amp;lt;code&amp;gt;Hlist&amp;lt;/code&amp;gt; is used to store each &amp;lt;code&amp;gt;H&amp;lt;/code&amp;gt; so created. Many of the other functions in the algorithm were modified to create an outer loop along these lines.&lt;br /&gt;
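To make the difference concrete, here is a self-contained sketch: a stand-in Opinion class and a re-implementation of GetHForLeafNode_sets (renamed to follow Python conventions; the real versions live in the trust-model-playground repo), run over ten equal-width bins:

```python
class Opinion:
    # Stand-in for the project's opinion object; only pdf_points is needed here.
    def __init__(self, pdf_points):
        self.pdf_points = pdf_points

def get_h_for_leaf_node_sets(opinion, bins):
    # One histogram per probability: a 1.0 in the bin the value falls into,
    # 0.0 everywhere else (same logic as GetHForLeafNode_sets above).
    hlist = []
    for p in opinion.pdf_points:
        hlist.append([1.0 if binx[0] <= p < binx[1] else 0.0 for binx in bins])
    return hlist

bins = [(i / 10, (i + 1) / 10) for i in range(10)]
op = Opinion([0.6, 0.4])  # predicate distribution [P, 1-P]
print(get_h_for_leaf_node_sets(op, bins))
```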
&lt;br /&gt;
No modifications to the &amp;lt;code&amp;gt;algorithms.py&amp;lt;/code&amp;gt; interface were required, although it should be noted that the output [[opinion]] is now a list of lists (as you can see above) rather than a simple list. The pdf_points field in OpinionData is specified as a &amp;lt;code&amp;gt;list[float]&amp;lt;/code&amp;gt;, but the annotation is apparently not enforced, so a list of lists of floats passes through unchanged.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;trustprobabilitypopulation-graphs-algorithm&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Argument_evaluation_and_scoring&amp;diff=2175</id>
		<title>Argument evaluation and scoring</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Argument_evaluation_and_scoring&amp;diff=2175"/>
		<updated>2024-09-25T13:53:49Z</updated>

		<summary type="html">&lt;p&gt;Lem: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;Some thoughts on evaluating [[Argument|argument]]s&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In our last meeting Dan asked how we might evaluate an argument made by a respondent instead of simply relying on the given [[Probability|probability]] (as we&#039;ve been doing so far). An argument, assuming it is made public, could then be evaluated by the questioner and others independently to find a more accurate probability. This opens up a new idea in our work, that of assessing the truth by evaluating the reasoning put forth in an [[Opinion|opinion]].&lt;br /&gt;
&lt;br /&gt;
One idea for doing this starts with a simple model for argument construction. The argument consists of [[Supporting statement|supporting statement]]s which are tied together with [[Logic|logic]] to form a conclusion. The conclusion is the answer to the overall question being asked of the network. Each supporting statement and the [[logic]] can be evaluated independently to determine the extent to which the conclusion is true. The following diagram illustrates this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;Answer/Conclusion, Pc&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;Logic, Pl&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;Support. Stmt. 1, Ps1&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;Support. Stmt. 2, Ps2&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;Support. Stmt. 3, Ps3&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;back&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 2 [dir=&amp;quot;back&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 3 [dir=&amp;quot;back&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 4 [dir=&amp;quot;back&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The probability of the supporting statements can be combined in a [[Bayes&#039; theorem|Bayes]]ian manner. This is in keeping with the Bayesian idea of modifying prior probabilities given new evidence (ie supporting statements). These probabilities can be [[Trust|trust]]-modified as Sapienza proposed (https://ceur-ws.org/Vol-1664/w9.pdf) but since they are likely being assigned by the questioner, we will assume that [[trust]] is already built into them. Of more importance is the [[Relevance|relevance]] of the supporting statements. They can range from completely irrelevant to completely relevant. A completely relevant statement will take the full value of the probability it was originally assigned. A completely irrelevant statement would reduce the probability to 50%, where it will have no influence on the outcome. In that sense relevance functions in the same way trust does to modify the probability:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
P_{mod} = P_{nom} + R(P - P_{nom})&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;R&amp;lt;/math&amp;gt; is relevance (0.0 - 1.0) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{mod}&amp;lt;/math&amp;gt; is relevance-modified probability&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{nom}&amp;lt;/math&amp;gt; is the nominal probability (=0.5 for a [[Predicate|predicate]] question)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; is the unmodified probability&lt;br /&gt;
&lt;br /&gt;
After the relevance-modification, each supporting statement is combined in the usual manner via Bayes. For the first two statements,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
P_{comb1,2} = {P_{s1}P_{s2}\over {P_{s1}P_{s2} + (1-P_{s1})(1-P_{s2})}}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and so on for each additional statement. Here it is to be understood that &amp;lt;math&amp;gt;P_{s1}&amp;lt;/math&amp;gt;, etc. is the value &amp;lt;i&amp;gt;after&amp;lt;/i&amp;gt; the relevance modification.&lt;br /&gt;
&lt;br /&gt;
Logic will also have a probability assigned to it to represent its quality. A fully illogical argument would receive a 0, which when combined via Bayes with the supporting statements would render the probability of the entire argument 0. This makes sense because a completely illogical argument, regardless of the strength of its supporting statements, destroys itself. A fully logical argument, however, will not receive a 1 but rather a 0.5: combined via Bayes, a 1 would force the final probability to 1, which is not reasonable, while a 0.5 changes nothing, leaving the final probability equal to the combined probability of the supporting statements. Thus we assume that perfect logic is neutral and less-than-perfect logic reduces the combined probability of the statements. Again, this seems reasonable. We expect, by default, logical arguments which then rest on the strength of their supporting statements. If we notice flaws in the logic we discount the strength of the argument accordingly.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s try an example with these ideas:&lt;br /&gt;
&lt;br /&gt;
Question: Are humans causing frog populations to decline?&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Answer / Conclusion: Yes, mankind is causing a fall in frog populations.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Logic: Mankind is causing the fall of frog populations if we can show that frog populations are decreasing over time and can show a human behavior that causes the decline.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Supporting Statements:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# [https://en.wikipedia.org/wiki/Frog#:~:text=Frog%20populations%20have%20declined%20significantly%20since%20the%201950s. Frog populations have declined since the 1950&#039;s.]&lt;br /&gt;
# My wife complained that she doesn&#039;t see frogs anymore.&lt;br /&gt;
# [https://wwf.panda.org/discover/our_focus/freshwater_practice/freshwater_biodiversity_222/ Scientists say] that the loss of freshwater habitats has affected frog populations.&lt;br /&gt;
&lt;br /&gt;
We start by judging the quality of the supporting statements. Statement 1 seems well substantiated (a high P) but not completely relevant, because it only hints at human involvement. Statement 2 is completely true but mostly irrelevant. Statement 3 is a contributor but seems less substantiated than 1 and names no human cause. We proceed by assigning probability and relevance values:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s1}=0.9&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;R_{s1}=0.7&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s1mod} = 0.5 + 0.7(0.9 - 0.5) = 0.78&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s2}=1.0&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;R_{s2}=0.0&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s2mod} = 0.5 + 0.0(1.0 - 0.5) = 0.5&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s3}=0.75&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;R_{s3}=0.5&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s3mod} = 0.5 + 0.5(0.75 - 0.5) = 0.625&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since s2 won&#039;t count in the Bayesian calculation we can ignore it and:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{comb,s} = {(0.78)(0.625) \over {0.78(0.625)+0.22(0.375)}} = 0.855&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logic/conclusion in this case is reasonably strong so we will assign it a high value, say &amp;lt;math&amp;gt;P_l = 0.45&amp;lt;/math&amp;gt; (remember, out of 0.5). It could be improved by observing that the word &amp;quot;behavior&amp;quot; is too general and should be replaced by, say, &amp;quot;policy choice&amp;quot; (ie urban growth into ecologically important wetlands). We note here that logic is more than just the mathematical construction of an argument. Since we are speaking a human language, logic might also be flawed because it uses imprecise wording.&lt;br /&gt;
&lt;br /&gt;
Putting &amp;lt;math&amp;gt;P_{comb,s}&amp;lt;/math&amp;gt; together with &amp;lt;math&amp;gt;P_l&amp;lt;/math&amp;gt; using Bayes we obtain a concluding probability:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_c = {0.855(0.45) \over {0.855(0.45) + 0.145(0.55)}} = 0.83&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One potential pitfall of this model is that repetitive supporting statements of high probability will quickly drive the combined probability near 1.0. As we&#039;ve seen in the past, this is simply a property of the Bayes equation. The user would need to watch for attempts to distort the answer in this way, removing repetitive statements or marking them irrelevant. &lt;br /&gt;
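The worked example above can be reproduced in a few lines. Here relevance_modified and bayes_combine are illustrative helper names written for this sketch, not functions from the codebase:

```python
def relevance_modified(p, r, p_nom=0.5):
    # P_mod = P_nom + R * (P - P_nom)
    return p_nom + r * (p - p_nom)

def bayes_combine(probs):
    # Bayes combination of independent probabilities for a predicate question
    num, den = 1.0, 1.0
    for p in probs:
        num *= p
        den *= 1.0 - p
    return num / (num + den)

s1 = relevance_modified(0.9, 0.7)    # 0.78
s3 = relevance_modified(0.75, 0.5)   # 0.625; s2 drops out at exactly 0.5
p_comb = bayes_combine([s1, s3])     # ~0.855
p_c = bayes_combine([p_comb, 0.45])  # ~0.83 after factoring in the logic
print(round(p_comb, 3), round(p_c, 2))
```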
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Scoring of individual arguments&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Arguments can be scored on [[Veracity|veracity]], impact, relevance, clarity, and informal quality (lack of fallacies):&lt;br /&gt;
&lt;br /&gt;
- Veracity is how true the argument is based on source information. Source information itself will be scored: &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- Impact &amp;amp; Relevance is how deeply the argument affects the main contention of the [[debate]] (or the argument immediately above): &amp;lt;math&amp;gt;R&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- Clarity is how understandable the argument is: &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- Informal quality (lack of fallacies) is whether the argument commits any logical fallacies of its own. A list of informal fallacies (and formal ones) will be provided to help users select appropriately: &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since Impact and Relevance are closely related concepts we will merge them into one, Relevance. The simplest method for combining the categories is to average them, or to take a weighted average:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S = w_vV + w_rR + w_cC + w_fF&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_x&amp;lt;/math&amp;gt; is a weighting for category X (eg Veracity, Relevance, etc)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_v + w_r + w_c + w_f = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This seems reasonable and if we believe that certain criteria should weigh more (such as Veracity) we can easily make the weighting factors reflect this. However, intuitively it seems that a category such as Veracity should not only weigh more but have the power to take down the whole argument. After all, if the argument is a straightforward lie, it should receive a score of zero, regardless of its other attributes (such as relevance, clarity, etc):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;Who is the best choice for President? X is the best choice because he will land a person on Mars in his first year.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This argument is a lie and although it is clear, has no evident fallacies, and is relevant to the question at hand, it should be thrown out.&lt;br /&gt;
&lt;br /&gt;
The same can be said of Relevance. A completely irrelevant argument should also have the power to render the whole argument moot:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;Who is the best choice for President? X is the best choice because he likes pizza and so do I.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this in mind, we can propose the following equation, which we will dub the &amp;quot;VRFC equation&amp;quot;: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S = VR(w_fF + w_cC)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Where &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; = Score for the argument which varies from 0-1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f&amp;lt;/math&amp;gt; = weighting factor for Fallacies.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_c&amp;lt;/math&amp;gt; = weighting factor for Clarity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f + w_c = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and each of the constituent variables (&amp;lt;math&amp;gt;V, R, F, C&amp;lt;/math&amp;gt;) has a range 0-1.&lt;br /&gt;
&lt;br /&gt;
In this equation either Veracity or Relevance has the power to nullify the entire argument. Similarly a combination of Fallacies and lack of Clarity can do the same. However, a fallacious argument alone seems like it could still have merit, as would an argument whose only flaw was lack of clarity:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;We should support Czechoslovakia because if the Nazis prevail they will conquer the world.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This argument commits the slippery slope [[Fallacy|fallacy]] but is not entirely invalid. Similarly an unclear argument can still manage to make a point:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;We should support Europe because first the Sudetenland, then the Czechs, and soon enough it&#039;s over when all the Brits had to do was get rid of that weakling sooner.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It would seem that a fallacious argument should weigh more than an unclear one. Proposed weights might be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f = 0.7, w_c = 0.3&amp;lt;/math&amp;gt;&lt;br /&gt;
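The VRFC equation with these weights can be sketched directly (vrfc_score is an illustrative name, not code from the repo):

```python
def vrfc_score(v, r, f, c, w_f=0.7, w_c=0.3):
    # S = V * R * (w_f*F + w_c*C); all inputs in [0, 1], w_f + w_c = 1
    return v * r * (w_f * f + w_c * c)

# An outright lie scores 0 no matter how clear, relevant, and fallacy-free:
print(vrfc_score(v=0.0, r=1.0, f=1.0, c=1.0))  # 0.0
# A truthful, relevant argument with a slippery-slope fallacy keeps some merit:
print(vrfc_score(v=0.9, r=0.9, f=0.4, c=1.0))  # ~0.47
```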
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Rolling up the score of argument trees&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation above applies to a single argument but, as we&#039;ve seen, most arguments have sub-arguments below, sub-sub-arguments, and so forth. They are really trees in which each individual argument can be scored separately. &lt;br /&gt;
&lt;br /&gt;
Here we develop a proposed equation for rolling up the score for an argument based on its own score and that of its sub-arguments. In doing so we emphasize that any argument can stand on its own and be scored in the absence of sub-arguments. This creates an interesting dynamic. The sub-argument may bolster or detract from the parent argument but the extent to which it does should be limited.&lt;br /&gt;
&lt;br /&gt;
Furthermore, once the sub-argument becomes weaker than a certain threshold, it should stop influencing the parent argument altogether. Here, we will set this threshold at 0.5. Thus only Pro sub-arguments that score 0.5 or better will have any influence on the parent argument. For Con sub-arguments we will use the same threshold but first modify the sub-argument score by &amp;lt;math&amp;gt;1-S&amp;lt;/math&amp;gt;. Thus a strong Con sub-argument, scoring say 0.9, would enter the calculation with a score of 0.1. The result is a range of scores 0-1 of which 0-0.5 is Con and 0.5-1 is Pro. Scores of exactly 0.5 are neutral.   &lt;br /&gt;
&lt;br /&gt;
Let&#039;s consider the case of one argument with one Pro sub-argument and one Con sub-argument.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;Argument, s = 0.9&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;Pro sub-argument, xp = 0.7&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;Con sub-argument, xc = 0.7&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case the argument&#039;s score is 0.9, and each sub-argument, Pro and Con, scores 0.7. These numbers would normally be arrived at using the VRFC eqn above, but we will simply assume them for now. The first sub-argument bolsters the argument because it is a Pro argument with a score (0.7) greater than 0.5. The second sub-argument, with the same score, detracts from the argument because it is on the Con side. We emphasize that if these scores were at or below 0.5 they would have no effect on the argument.&lt;br /&gt;
&lt;br /&gt;
The general equation governing this situation is as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;s &amp;gt; 0.5&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x &amp;gt; 0.5&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
s_{mod} = 2(1-s)fx + s - (1-s)f&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;s &amp;gt; 0.5&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x &amp;lt;= 0.5&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
s_{mod} = s&lt;br /&gt;
&amp;lt;/math&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;s &amp;lt; 0.5&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x &amp;gt; 0.5&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
s_{mod} = 2sfx + s - sf&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;s &amp;lt; 0.5&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x &amp;lt;= 0.5&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
s_{mod} = s&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; = score for parent argument&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = x_p&amp;lt;/math&amp;gt; = score for Pro arguments&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = 1-x_c&amp;lt;/math&amp;gt; = score for Con arguments&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; = maximum fraction of the available movement, 0-1&lt;br /&gt;
&lt;br /&gt;
The variable &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is a user-selected number between 0-1 and represents the extent to which a sub-argument can affect the parent score. For example, an argument with &amp;lt;math&amp;gt;s = 0.9&amp;lt;/math&amp;gt;, as in the diagram above, can be improved by at most 0.1, to a maximum of 1. Then &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; represents the fraction of that 0.1 that we will allow for the improvement. If &amp;lt;math&amp;gt;f = 0.25&amp;lt;/math&amp;gt;, for instance, then the maximum range around 0.9 that the sub-argument can affect is &amp;lt;math&amp;gt;(0.25)(0.1) = 0.025&amp;lt;/math&amp;gt;. Thus the maximum score the argument can have is 0.925 and the minimum is 0.875.&lt;br /&gt;
&lt;br /&gt;
For the argument above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s = 0.9&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;f = 0.25&amp;lt;/math&amp;gt; (user input)&lt;br /&gt;
&lt;br /&gt;
For the Pro sub-argument:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = x_p = 0.7&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.9)(0.25)(0.7) + 0.9 - (1-0.9)(0.25) = 0.91&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Con sub-argument:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = (1-x_c) = (1-0.7) = 0.3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.9)(0.25)(0.3) + 0.9 - (1-0.9)(0.25) = 0.89&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We can see here that the Pro and Con sub-arguments exactly balance each other since they both have the same score. &lt;br /&gt;
&lt;br /&gt;
The equation above is piecewise linear and can be visualized as follows:&lt;br /&gt;
&lt;br /&gt;
![image](uploads/7f62b22b9eb19771503d654db29cab92/image.png)&lt;br /&gt;
&lt;br /&gt;
One important property of this equation is that as an argument becomes stronger (or weaker), it becomes harder for a sub-argument to change it. This is because the maximum allowed movement is &amp;lt;math&amp;gt;1-s&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;s &amp;gt; 0.5&amp;lt;/math&amp;gt; or simply &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;s &amp;lt;= 0.5&amp;lt;/math&amp;gt;. The idea behind this property is that very strong arguments should be harder to dislodge precisely because they have covered themselves well. A weaker argument, for instance one that fails to mention an obvious supporting fact, is in a position to be bolstered more by a sub-argument which mentions the fact. Similarly, a very weak argument should be difficult to bolster. If the argument is a lie or irrelevant, for instance, there isn&#039;t much that can be done to rescue it.&lt;br /&gt;
&lt;br /&gt;
This property has the further consequence that &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; cannot be changed by the sub-arguments if it is 1 or 0. A truly perfect argument, &amp;lt;math&amp;gt;s = 1&amp;lt;/math&amp;gt;, cannot be weakened no matter how strong its Con sub-argument. Similarly a perfectly flawed argument, &amp;lt;math&amp;gt;s = 0&amp;lt;/math&amp;gt; cannot be bolstered with any Pro sub-argument. We will discuss below a method to deal with the fact that, regardless of the quality of the argument, users may still vote to score arguments 1 or 0.&lt;br /&gt;
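The piecewise equation above can be sketched in Python (our own sketch, not the project snippet; we read the &amp;quot;no effect&amp;quot; conditions as applying to the sub-argument&#039;s own score, which matches the worked examples above):&lt;br /&gt;

```python
def s_mod(s, f, score, same_side=True):
    """Score of a parent argument, s, adjusted by one sub-argument vote.

    score:     the sub-argument's own score (x_p or x_c)
    same_side: True if the sub-argument is on the same side as its parent
    f:         user-selected maximum fraction of change, 0-1
    """
    # A sub-argument on the opposite side counts as 1 - score.
    x = score if same_side else 1.0 - score
    if score <= 0.5:
        return s  # weak sub-arguments have no effect on their parent
    if s > 0.5:
        return 2 * (1 - s) * f * x + s - (1 - s) * f
    return 2 * s * f * x + s - s * f

# Pro and Con sub-arguments from the example above (both scored 0.7):
print(round(s_mod(0.9, 0.25, 0.7, same_side=True), 3))   # 0.91
print(round(s_mod(0.9, 0.25, 0.7, same_side=False), 3))  # 0.89
```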
&lt;br /&gt;
&amp;lt;h4&amp;gt;Population adjustments&amp;lt;/h4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm described above assumes a single vote for the argument and sub-arguments. In fact, this will rarely be the case, because multiple users will be voting on each. The effect of a sub-argument on its parent should be weighted by the population of users who voted for the sub-argument and the parent argument.&lt;br /&gt;
&lt;br /&gt;
Here we propose a simple modification factor, based on the ratio of users voting for each argument/sub-argument:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop} = (s_{mod} - s){p_s\over p} + s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop}&amp;lt;/math&amp;gt; is the population modified score&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod}&amp;lt;/math&amp;gt; is the modified score without population modifications (see above)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p_s&amp;lt;/math&amp;gt; is the population voting for the sub-argument&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is the population voting for the parent argument&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; is the original score of the parent argument&lt;br /&gt;
&lt;br /&gt;
Usually we expect that sub-arguments will receive fewer votes than parent arguments, so &amp;lt;math&amp;gt;{{p_s\over p} &amp;lt;= 1}&amp;lt;/math&amp;gt; in general. For the case when &amp;lt;math&amp;gt;p_s &amp;gt; p&amp;lt;/math&amp;gt; we will force &amp;lt;math&amp;gt;{p_s\over p} = 1&amp;lt;/math&amp;gt;. Therefore there is no danger that a sub-argument can overwhelm a parent argument by voting power alone. This is in keeping with our philosophy that sub-arguments can have at best a limited effect on parent arguments.&lt;br /&gt;
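As a sketch, this population adjustment (with the ratio clamped at 1) can be written:&lt;br /&gt;

```python
def s_mod_pop(s, s_mod, p_sub, p_parent):
    """Pull the modified score back toward the original score s in
    proportion to how many voters the sub-argument had."""
    # A sub-argument can never outweigh its parent by voting power alone.
    ratio = min(p_sub / p_parent, 1.0)
    return (s_mod - s) * ratio + s

# 26 of 55 voters: the adjustment from s_mod = 0.745 is roughly halved.
print(round(s_mod_pop(0.7, 0.745, 26, 55), 3))  # 0.721
```

The numbers here anticipate the example calculation in the next section.&lt;br /&gt;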
&lt;br /&gt;
&amp;lt;h4&amp;gt;Example calculation&amp;lt;/h4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s do a problem with the following argument tree and &amp;lt;math&amp;gt;f=0.25&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis Statement&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Pro argument, s = 0.9, p = 96&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, Pro sub-argument, xp = 0.7, p = 55&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, Pro sub-argument, xp = 0.8, p = 26&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4, Con sub-argument, xc = 0.6, p = 30&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, Con sub-argument, xc = 0.7, p = 43&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, Pro sub-argument, xp = 0.85, p = 19&amp;quot;]&lt;br /&gt;
    7 [label=&amp;quot;7, Con sub-argument, xc = 0.95, p = 28&amp;quot;] &lt;br /&gt;
    8 [label=&amp;quot;8, Con argument, ....&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 2 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 5 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 3 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 4 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    5 -&amp;gt; 6 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    5 -&amp;gt; 7 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 8 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Our objective here is to roll up the score for the Pro side of this tree. The Con side would be calculated similarly and we will skip this for the sake of brevity. Note that &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; stands for the score and &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is the population voting to produce that score. We start at the bottom, with the 2-3-4 portion of the tree, and for the sake of consistency with the above calculation we will recast &amp;lt;math&amp;gt;x_p&amp;lt;/math&amp;gt; for 2 as &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; and label the population of the sub-arguments as &amp;lt;math&amp;gt;p_s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    2 [label=&amp;quot;2, Pro argument, s = 0.7, p = 55&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, Pro sub-argument, xp = 0.8, ps = 26&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4, Con sub-argument, xc = 0.6, ps = 30&amp;quot;]&lt;br /&gt;
    2 -&amp;gt; 3 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 4 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Pro sub-argument, we write:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.7)(0.25)(0.8) + 0.7 - (1-0.7)(0.25) = 0.745&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We modify this by the respective populations:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop23} = (s_{mod} - s){p_s\over p} + s = (0.745 - 0.7){26\over 55} + 0.7 = 0.721&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Con sub-argument we first modify its score,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = 1 - x_c = 0.4&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and write&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.7)(0.25)(0.4) + 0.7 - (1-0.7)(0.25) = 0.685&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and modify by the respective population,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop24} = (s_{mod} - s){p_s\over p} + s = (0.685 - 0.7){30\over 55} + 0.7 = 0.692&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These two values of &amp;lt;math&amp;gt;s_{mod,pop}&amp;lt;/math&amp;gt; can now be combined to create a new &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; for the Pro argument:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,tot} = (s_{mod,pop23} - s) + (s_{mod,pop24} - s) + s = (0.721 - 0.7) + (0.692 - 0.7) + 0.7 = 0.713&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We note here that the Pro argument got a little stronger as a result of its sub-arguments. The Pro sub-argument was substantially stronger than the Con sub-argument and, although fewer people voted for it, the population difference was not large.&lt;br /&gt;
&lt;br /&gt;
For the Con sub-argument 5-6-7 we have the following situation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    5 [label=&amp;quot;5, Con argument, s = 0.7, p = 43&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, Pro sub-argument, xp = 0.85, ps = 19&amp;quot;]&lt;br /&gt;
    7 [label=&amp;quot;7, Con sub-argument, xc = 0.95, ps = 28&amp;quot;] &lt;br /&gt;
    5 -&amp;gt; 6 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    5 -&amp;gt; 7 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here, for the Pro sub-argument, we first modify its score since it is the opposite of its parent. It is as if the parent were a Pro argument and the child were a Con argument.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = 1 - x_p = 1 - 0.85 = 0.15&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then proceed as usual with the calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.7)(0.25)(0.15) + 0.7 - (1-0.7)(0.25) = 0.6475&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop56} = (s_{mod} - s){p_s\over p} + s = (0.6475 - 0.7){19\over 43} + 0.7 = 0.677&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Con sub-argument &amp;lt;math&amp;gt;x = x_c = 0.95&amp;lt;/math&amp;gt; since the parent argument is also Con:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.7)(0.25)(0.95) + 0.7 - (1-0.7)(0.25) = 0.7675&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop57} = (s_{mod} - s){p_s\over p} + s = (0.7675 - 0.7){28\over 43} + 0.7 = 0.744&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We combine these two results in the same manner as above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,tot} = (s_{mod,pop56} - s) + (s_{mod,pop57} - s) + s = (0.677 - 0.7) + (0.744 - 0.7) + 0.7 = 0.721&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the bottom layer of the tree calculated, we have the following situation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis Statement&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Pro argument, s = 0.9, p = 96&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, Pro sub-argument, xp = 0.713, ps = 55&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, Con sub-argument, xc = 0.721, ps = 43&amp;quot;]&lt;br /&gt;
    8 [label=&amp;quot;8, Con argument, ....&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 2 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 5 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 8 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All that remains is to calculate the 1-2-5 portion, which is very similar to the 2-3-4 calculation performed above. We will therefore skip the details and simply report the results:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop12} = 0.906&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop15} = 0.895&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod, tot} = 0.901&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We see here that the final result is not much different from the original &amp;lt;math&amp;gt;s = 0.9&amp;lt;/math&amp;gt;. This is a result of the Pro sub-arguments essentially being cancelled by the Con sub-arguments. Such a result is to be expected in many cases.&lt;br /&gt;
&lt;br /&gt;
In this example, we are skipping the Con side of the overall argument (node 8 in the tree above) because the procedure would be exactly the same as what we have shown. Had it been calculated, we would combine the results for nodes 1 and 8 to produce an overall score for the argument.&lt;br /&gt;
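For reference, the skipped 1-2-5 calculation can be reproduced with a few lines of Python (a self-contained sketch of our own, not the project snippet):&lt;br /&gt;

```python
def s_mod(s, f, x):
    """Single-vote modification for s > 0.5; x is already converted
    (a Con child of a Pro parent uses x = 1 - x_c)."""
    return 2 * (1 - s) * f * x + s - (1 - s) * f

def roll_up(s, f, p, children):
    """Combine population-weighted deltas from each child into a new score.

    children: list of (x_converted, p_sub) pairs."""
    total = s
    for x, ps in children:
        delta = s_mod(s, f, x) - s
        total += delta * min(ps / p, 1.0)
    return total

# Node 1: s = 0.9, p = 96, Pro child 2 (0.713, 55), Con child 5 (0.721, 43)
children = [(0.713, 55), (1 - 0.721, 43)]
print(round(roll_up(0.9, 0.25, 96, children), 3))  # 0.901
```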
&lt;br /&gt;
The calculations above can be performed with the [attached snippet](https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/164). The user input portion of the snippet is set up for the calculation we did immediately above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
#User input&lt;br /&gt;
side_parent = &#039;pro&#039; #side, pro or con, that the parent argument is on    &lt;br /&gt;
s = 0.9 #score for the parent argument&lt;br /&gt;
mf = 0.25 #max fraction that parent argument can be changed in terms of (1-s) or (s-0)&lt;br /&gt;
p = 96.0 #population voting for the parent argument&lt;br /&gt;
x_pro_arr = [0.713] #score for the pro children&lt;br /&gt;
x_con_arr = [0.721] #score for the con children&lt;br /&gt;
ps_pro_arr = [55.0] #population voting for each pro child sub-argument&lt;br /&gt;
ps_con_arr = [43.0] #pop voting for each con child sub-argument&lt;br /&gt;
mods_if1or0 = True #True if we want scores of 1 or 0 to be modified to near 1 or 0 (otherwise they can&#039;t be adjusted by this calculation)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the snippet uses arrays so it can handle any number of child arguments. These are combined in the same way we combined the single Pro and Con sub-arguments above.&lt;br /&gt;
 &lt;br /&gt;
Another variable, `mods_if1or0`, controls whether we allow &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; to be modified when it is set to 1 or 0. As discussed above, arguments where &amp;lt;math&amp;gt;s = 1&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;s = 0&amp;lt;/math&amp;gt; are perfect, or perfectly flawed, and thus cannot be changed by sub-arguments. This idea may be theoretically sound, but it wouldn&#039;t stop users from voting 1 or 0 on arguments. In such cases the `mods_if1or0` switch, when True, changes &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; to 0.99 or 0.01 respectively.&lt;br /&gt;
&lt;br /&gt;
As a side note, this property is similar to Bayesian probabilities of 1 or 0, which also cannot be changed. We have discussed this problem in earlier posts, arguing that probabilities of 1 or 0 do not really exist because they would require an infinite sample size. In the same way, a perfect (or perfectly imperfect) argument cannot exist because it would, at some point, run into the same issues that Bayesian probabilities do.&lt;br /&gt;
&lt;br /&gt;
For example, suppose we&#039;ve invented a pill that cures cancer. It is one dose, costs 10 cents to make, has no side effects, has no environmental impact due to manufacture, and is certain to cure someone&#039;s cancer. The argument for a cancer patient taking the pill is, for all practical purposes, perfect. There is simply no plausible argument against it. We could score such an argument a 1 until we remember our probabilities. We only know the pill works and has no side effects on a limited population, say 100,000 patients. We don&#039;t know what effect it will have on the 100,001st patient. So the best we can say is that the drug is 0.99999 effective. Given that the argument is really predicated on the effectiveness of the drug we could say its score is also 0.99999.    &lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;[https://slashdot.org/ Slashdot]&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Slashdot offers a system for content moderation, summarized by the following from [[wikipedia:Slashdot|Wikipedia]]:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;i&amp;gt;Slashdot&#039;s editors are primarily responsible for selecting and editing the primary stories that are posted daily by submitters. The editors provide a one-paragraph summary for each story and a link to an external website where the story originated. Each story becomes the topic for a threaded discussion among the site&#039;s users. A user-based moderation system is employed to filter out abusive or offensive comments.[63] Every comment is initially given a score of −1 to +2, with a default score of +1 for registered users, 0 for anonymous users (Anonymous Coward), +2 for users with high &amp;quot;karma&amp;quot;, or −1 for users with low &amp;quot;karma&amp;quot;. As [[Moderator|moderator]]s read comments attached to articles, they click to moderate the comment, either up (+1) or down (−1). Moderators may choose to attach a particular descriptor to the comments as well, such as &amp;quot;normal&amp;quot;, &amp;quot;offtopic&amp;quot;, &amp;quot;flamebait&amp;quot;, &amp;quot;troll&amp;quot;, &amp;quot;redundant&amp;quot;, &amp;quot;insightful&amp;quot;, &amp;quot;interesting&amp;quot;, &amp;quot;informative&amp;quot;, &amp;quot;funny&amp;quot;, &amp;quot;overrated&amp;quot;, or &amp;quot;underrated&amp;quot;, with each corresponding to a −1 or +1 rating. So a comment may be seen to have a rating of &amp;quot;+1 insightful&amp;quot; or &amp;quot;−1 troll&amp;quot;.[57] Comments are very rarely deleted, even if they contain hateful remarks.&lt;br /&gt;
&lt;br /&gt;
::Starting in August 2019 anonymous comments and postings have been disabled.&lt;br /&gt;
&lt;br /&gt;
::Moderation points add to a user&#039;s rating, which is known as &amp;quot;karma&amp;quot; on Slashdot. Users with high &amp;quot;karma&amp;quot; are eligible to become moderators themselves. The system does not promote regular users as &amp;quot;moderators&amp;quot; and instead assigns five moderation points at a time to users based on the number of comments they have entered in the system – once a user&#039;s moderation points are used up, they can no longer moderate articles (though they can be assigned more moderation points at a later date). Paid staff editors have an unlimited number of moderation points. A given comment can have any integer score from −1 to +5, and registered users of Slashdot can set a personal threshold so that no comments with a lesser score are displayed. For instance, a user reading Slashdot at level +5 will only see the highest rated comments, while a user reading at level −1 will see a more &amp;quot;unfiltered, anarchic version&amp;quot;. A meta-moderation system was implemented on September 7, 1999, to moderate the moderators and help contain abuses in the moderation system. Meta-moderators are presented with a set of moderations that they may rate as either fair or unfair. For each moderation, the meta-moderator sees the original comment and the reason assigned by the moderator (e.g. troll, funny), and the meta-moderator can click to see the context of comments surrounding the one that was moderated.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Slashdot&#039;s purpose is to promote high quality discussion which is somewhat similar to our purpose of promoting high quality arguments. In particular, the [[Reputation|reputation]] (karma) of the moderators is an interesting concept. We could use a similar system to weight voters with a good reputation higher in their argument scoring. Another interesting idea is the use of word descriptors to match scores. In our system descriptors such as &amp;quot;Completely irrelevant&amp;quot;, &amp;quot;somewhat irrelevant&amp;quot;, etc. could be a useful way to break up corresponding numerical ranges in our 0-1 scoring system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Refining our Argument Score with Reputation/Trust&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[#Scoring of individual arguments|Above]] we discussed an equation to score arguments on the basis of Veracity, Relevance, Freedom from Fallacies, and Clarity:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S = VR(w_fF + w_cC)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is overall score&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; is Veracity &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;R&amp;lt;/math&amp;gt; is Relevance&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; is Fallacies (ie freedom from fallacies)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is Clarity&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f&amp;lt;/math&amp;gt; is weighting for Fallacies, eg 0.7&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_c&amp;lt;/math&amp;gt; is weighting for Clarity, eg 0.3&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f + w_c = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Each user would vote on each category and the resulting &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; would be calculated. However, each user may have a different reputation/trust for their ability to judge these four criteria. We can take this into account by simply adding a weighting factor for Trust:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S = (T_vV)(T_rR)(w_fT_fF + w_cT_cC)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;T_x&amp;lt;/math&amp;gt; = Trust in user&#039;s ability to evaluate each category &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; (Veracity, Relevance, Fallacies, and Clarity)&lt;br /&gt;
&lt;br /&gt;
Since the user evaluates trust in multiple categories, it would be useful to generate a composite trust for all the categories:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
T_{comp} = {(T_vV)(T_rR)(w_fT_fF + w_cT_cC) \over{VR(w_fF + w_cC)}}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;T_{comp}&amp;lt;/math&amp;gt; is the composite trust for all categories.&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;math&amp;gt;T_{comp}&amp;lt;/math&amp;gt; can also be seen as an &amp;quot;average&amp;quot; trust, ie the single factor that produces the same argument score as that resulting from the multiple trust factors.&lt;br /&gt;
&lt;br /&gt;
Once we have &amp;lt;math&amp;gt;T_{comp}&amp;lt;/math&amp;gt; we can use it to generate an average &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; for all users:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S_{ave} = {\sum S \over{\sum T_{comp}}}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
That is, instead of dividing by the number of people voting, we divide by the total of how much they &amp;quot;count&amp;quot;. This is similar to the [[A trust weighted averaging technique to supplement straight averaging and Bayes|trust-weighted average scheme]] we have proposed before. It is this average &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; that we will take to be the score for the argument.&lt;br /&gt;
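These formulas can be sketched in Python (the vote and trust values below are invented for illustration, with the example weights &amp;lt;math&amp;gt;w_f = 0.7&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w_c = 0.3&amp;lt;/math&amp;gt; from above):&lt;br /&gt;

```python
def score(V, R, F, C, wf=0.7, wc=0.3, Tv=1.0, Tr=1.0, Tf=1.0, Tc=1.0):
    """Trust-weighted VRFC score. With all trust factors at 1 this
    reduces to the plain S = V*R*(wf*F + wc*C)."""
    return (Tv * V) * (Tr * R) * (wf * Tf * F + wc * Tc * C)

def average_score(votes):
    """Average S over voters, dividing by total composite trust
    rather than by the head count."""
    total_s = total_t = 0.0
    for v in votes:
        s = score(v['V'], v['R'], v['F'], v['C'],
                  Tv=v['Tv'], Tr=v['Tr'], Tf=v['Tf'], Tc=v['Tc'])
        t_comp = s / score(v['V'], v['R'], v['F'], v['C'])  # composite trust
        total_s += s
        total_t += t_comp
    return total_s / total_t

votes = [
    {'V': 0.8, 'R': 0.9, 'F': 0.7, 'C': 0.6,
     'Tv': 0.9, 'Tr': 0.8, 'Tf': 1.0, 'Tc': 0.7},
    {'V': 0.6, 'R': 0.7, 'F': 0.8, 'C': 0.8,
     'Tv': 1.0, 'Tr': 1.0, 'Tf': 0.9, 'Tc': 0.9},
]
print(round(average_score(votes), 3))
```

Note that a voter with full trust in every category contributes a composite trust of exactly 1, so with all trust factors at 1 this reduces to a straight average.&lt;br /&gt;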
&lt;br /&gt;
&amp;lt;h3&amp;gt;Rhetorical vs. [[Practical argument|Practical Argument]]s&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So far our scoring methodology has been focused mainly on rhetorical aspects of arguments. Is the argument true, relevant to its parent contention, clear, and logical? These criteria certainly touch on the practical impact an argument may have and so far we have been merging Impact with Relevance since it is hard to distinguish the two. Consider the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Argument: Biden is a good President because he got an infrastructure \n bill passed that will do good things for the whole country.&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Sub-argument: The bill provides $2.9 million \n to Roanoke airport for improvements.&amp;quot;]&lt;br /&gt;
    2 [label = &amp;quot;2, Sub-argument: The bill provides $150 billion \n to combat climate change.&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here it would seem appropriate to roll Impact into Relevance. Presumably most voters will recognize Sub-argument 2 as being the more relevant one simply because it is, on a practical level, the more impactful one.&lt;br /&gt;
&lt;br /&gt;
However, Relevance is usually a rhetorical quality, not a practical one. In this case, as a matter of rhetoric, both supporting arguments are relevant to the topic at hand. They are both, without question, part of Biden&#039;s infrastructure bill. They are both true, clear, and free of fallacies. Even so, it would be better to separate the issue of whether Biden is a good president from the issue of which infrastructure allocations add the most value.&lt;br /&gt;
&lt;br /&gt;
One reason for this separation, in terms of the [[Argument scoring|math laid out previously]], is that a weak sub-argument stops having any influence once its score goes below 0.5. However, if we are scoring Impact, either implicitly or explicitly, this rule would seem inappropriate. The rule makes many small sub-arguments, like the one about Roanoke airport, stop having any value. Perhaps the Roanoke airport argument alone is negligible but it still has a positive impact and, when added to all the other similar projects around the country, would amount to a sizeable contribution. Therefore it wouldn&#039;t be correct to nullify it altogether.&lt;br /&gt;
&lt;br /&gt;
We also wouldn&#039;t want arguments from Impact to prematurely influence necessary [[Rhetorical argument|rhetorical argument]]s. Let&#039;s take a favorite from moral philosophy: a healthy young person, John, comes in for a routine checkup at a clinic that has 5 critical patients in need of organ transplants. John has the organs they need and is a match for all of them. The doctors, using a purely utilitarian argument, decide to kill John and harvest his organs. It makes sense: 1 person dies and 5 live, so we&#039;re ahead. Let&#039;s for the moment disregard legal and other social artifacts which might persuade the doctors otherwise. This approach contrasts with a deontological perspective, which argues that the ends do not justify the means and that, indeed, the means in this case are all-important. But we can only ferret out the deontological argument by actually having it, within the context of a rhetorical debate. We would hope such a debate would successfully preclude any utilitarian considerations whatsoever.&lt;br /&gt;
&lt;br /&gt;
This leads us to the more general reason why separating the scoring techniques is appropriate. A score for Impact is essentially a [[Cost-benefit-risk analysis|cost-benefit-risk analysis]] for which established techniques exist and which would be confusing if scored together with the rhetorical argument. Indeed, by the time we reach an argument where impact is of interest we have usually dispensed with the rhetorical nature of the argument:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis: We have $150 billion to spend and \n should spend it on climate change&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Electric cars and charging stations.\n I = 0.5&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, Home and building insulation, solar, heat pumps, etc.\n I = 0.3&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, Carbon capture.\n I = 0.2&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 3 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We can see that this argument is likely the outcome of previous arguments where basic points have been agreed to (or at least settled) such as whether climate change is real, Biden is a good president, etc. Here we are at the final stages of the argument, which consists of a resolution to move forward with some practical course of action. &lt;br /&gt;
&lt;br /&gt;
In this case each sub-argument is, in effect, a lobbying effort for the money. All the sub-arguments are equal in terms of their Veracity, rhetorical Relevance, freedom from Fallacies, and Clarity. The only point of dispute is whether the money is better spent on one option or another. In a situation like this, &amp;lt;math&amp;gt;\sum I = 1&amp;lt;/math&amp;gt; would be defined as the effect of all plausible infrastructural investments we could make to impact climate change. In this respect let&#039;s assume we are limited to the three options above. The voters, presumably armed with engineering studies, would then weigh in to assess the impact of each proposal.&lt;br /&gt;
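A minimal sketch of this step (the mechanics here are our assumption; the text only requires that the impact scores sum to 1):&lt;br /&gt;

```python
def normalize_impacts(raw_votes):
    """Scale raw impact assessments so that the total impact is 1."""
    total = sum(raw_votes)
    return [v / total for v in raw_votes]

# Aggregate assessments for electric cars, buildings, carbon capture:
print(normalize_impacts([50.0, 30.0, 20.0]))  # [0.5, 0.3, 0.2]
```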
&lt;br /&gt;
It is important to emphasize that the argument at this point is a technical one. It is numerical in nature and hinges on scientific rigor. Our system for assessing trust in the people who can reasonably provide input at this level will be important. One can easily imagine how interested parties (and the merely ignorant) could skew the results with their vote. At the same time we want to encourage participants to move toward this type of debate since it leads to practical benefits and, by its nature, tends to reduce partisan rancor.&lt;br /&gt;
&lt;br /&gt;
The idea outlined here stands apart, by design, from the method that scores the rhetorical quality (eg the VRFC equation) of the argument. Rhetoric is designed to convince you of the argument as a whole, while arguments using Impact are designed to advance a specific recommendation. Clearly, bundling the Impact with the VRFC equation is inappropriate.&lt;br /&gt;
&lt;br /&gt;
In many cases arguments will have a hard time getting to this level of practicality. They tend to remain mired in the basic rhetoric that governs them:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis: Does God exist?&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Yes, because I can speak to Him and \n He responds by doing good things for me.&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, No, because there is no objective \n evidence that there is anyone listening.&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This argument is clearly truncated but we can see where it is going. It is, in essence, one about the Veracity of personal experience vs. demonstrable evidence, which is a philosophical debate. It is hard to ascribe Impact to it because it doesn&#039;t ever get to the point of enumerating proposals.&lt;br /&gt;
&lt;br /&gt;
But it could eventually transform itself into one that did. Let&#039;s assume the participants agree to settle, or at least table, the philosophical debate and concentrate instead on a test of how to improve your life. Both the religious and secular sides propose certain practices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis: Does God exist?&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Yes, because I can speak to Him and \n He responds by doing good things for me.&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, No, because there is no objective \n evidence that there is anyone listening&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3 ...more debate...&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4 ...more debate...&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, Modified Thesis: Is your life best improved by religious or secular practices?&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, Religious&amp;quot;]&lt;br /&gt;
    7 [label=&amp;quot;7, Secular&amp;quot;] &lt;br /&gt;
    8 [label=&amp;quot;8, Pray for 20 minutes \n every night and ask \n for what you need.&amp;quot;]&lt;br /&gt;
    9 [label=&amp;quot;9, Go to your religious \n service every week and \n perform the rituals.&amp;quot;]&lt;br /&gt;
    10 [label=&amp;quot;10, Study for 20 minutes \n every night in an area \n where your problems are.&amp;quot;]&lt;br /&gt;
    11 [label=&amp;quot;11, Find a support group \n and meet with them \n every week.&amp;quot;]   &lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 3 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 4 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    3 -&amp;gt; 5 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    4 -&amp;gt; 5 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    5 -&amp;gt; 6 [dir=&amp;quot;forward&amp;quot;]; &lt;br /&gt;
    5 -&amp;gt; 7 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    6 -&amp;gt; 8 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    6 -&amp;gt; 9 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    7 -&amp;gt; 10 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    7 -&amp;gt; 11 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, the reader is being invited to try the techniques on offer and an evaluation of Impact might thus be made. Note that in this case we have suspended our evaluation of the rhetorical qualities of each argument and begun a new one which seeks a utilitarian appraisal of which side might offer a better personal outcome. This is in keeping with the notion of Impact as separate from the rhetorical qualities of the argument.&lt;br /&gt;
&lt;br /&gt;
We could, in theory, envision someone who is not religious adopting religious practices because he concludes that they do him good. Perhaps he tried both sides and decided the religious side was the one yielding the greatest benefit. This possibility is no doubt one motivation for religious debaters to set aside their metaphysical convictions and adopt a practical way to approach their disagreement with the secular side. In any event, this progression seems like a healthy outcome since metaphysical debates usually have little hope of resolution.&lt;br /&gt;
&lt;br /&gt;
Impact is necessarily focused on some particular goal. If the goal is material benefit and you pray and get rich, you might think prayer works. But if the goal is broader than that, you might be disturbed by the fact that you are doing something you don&#039;t really believe is true. Your goal might be to get rich without compromising philosophical integrity. In this case the object of our Impact changes to become more than simply material wealth. A clear statement of the argument thesis in terms of goals is obviously important here.&lt;br /&gt;
&lt;br /&gt;
In spite of our attempts at separating impact from rhetoric we will often find ourselves with a mix of the two:&lt;br /&gt;
[[File:Procon.png|center|frame]]&lt;br /&gt;
Here we&#039;ve scored the Pro argument weaker than the Con argument using our standard rhetorical measures (VRFC). Arguably the Pro argument speculates more (a type of fallacy) about what would happen if we stopped supporting Ukraine. The Con argument isn&#039;t perfect either since it seems to assume that the money saved would actually be used in some constructive way. Still, money not spent is certainly money saved so we&#039;ll mark it down only slightly. That said, it would be ridiculous to stop the argument after concluding the Con side &amp;quot;won&amp;quot;. The argument is not really a rhetorical argument at all but rather a statement of Impact. One side argues for the impact of saving the money. The other argues for the impact of failing to spend the money. We may not know how events would play out in this situation but we acknowledge the risk of catastrophic consequences for failure to act. The impact score for the Pro side is thus much higher. &lt;br /&gt;
&lt;br /&gt;
This is a case where the argument should be separated out into one that is explicitly about impact, but it is not clear how best to achieve that. One way would be to allow participants to intervene by asking questions or proposing to move the debate in a more fruitful direction, perhaps by suggesting a new main contention (ie thesis). &lt;br /&gt;
[[File:Procon2.png|center|frame]]&lt;br /&gt;
A basically new debate ensues. Incentives to move the debate might include reputational points for agreeing on a more productive direction.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s look at an example of what this &amp;quot;productive&amp;quot; argument might look like in terms of cost-benefit-risk analysis:&lt;br /&gt;
&lt;br /&gt;
[[File:Productiveargument.png|center|frame]]&lt;br /&gt;
The red boxes are Con arguments and the green box is the Pro argument. We can see right away why the Pro argument might have stiff opposition since it leads to an infinite cost-benefit ratio and is virtually certain to occur (it is our current policy). It is possible that other scenarios could play out within the context of supporting Ukraine but let&#039;s leave these aside for the moment.&lt;br /&gt;
&lt;br /&gt;
So the Pro side looks bad until we start looking at the Con scenarios. In Scenario 1 we envision taking the $75 billion spent on Ukraine aid and providing free [[community]] college instead. Doing so provides an economic benefit in the long run, so we provide an estimate for that. However, military and policy experts have said that ignoring Ukraine would mean containing a newly resurgent Russia, which could double our defense costs in the near term ($800 billion). The resulting cost-benefit ratio is 2.9, costs well in excess of benefits, which is undesirable. It is also the highest-probability scenario, at 70%. Other scenarios involve some type of war with Russia and an even greater outlay of funds, not to mention the sheer human toll of war. Only Scenario 4 envisions a minor outlay to contain a victorious Russia, which would be offset by the benefit of free community college. This scenario is desirable but unlikely. &lt;br /&gt;
&lt;br /&gt;
These scenarios are much like sub-arguments but stripped of any need to assess their rhetorical quality. By looking at CB ratios and probabilities we can determine which policy direction to take.   &lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Interaction effects between arguments and sub-arguments&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Although arguments have been presented as standalone entities, users may often score them after reading, and accounting for, their sub-arguments. In that case, the sub-argument&#039;s influence on the parent argument would be counted twice: once due to the mathematical effect discussed [[#Scoring of individual arguments|above]] and again due to the influence the sub-argument has on the user&#039;s scoring of the parent argument. &lt;br /&gt;
&lt;br /&gt;
This effect is clearly undesirable and efforts should be made to control it. The software could, for instance, be equipped with the following checks:&lt;br /&gt;
&lt;br /&gt;
* If it detects that a user voted for a sub-argument and subsequently voted for an argument, it can flag the sub-argument score so it does not participate in the mathematical effect it would otherwise have on the argument. We are assuming here, of course, that a user who has voted for a sub-argument will be unable to avoid having it influence his vote for the parent argument.&lt;br /&gt;
* If it detects a vote for an argument but not a vote for its sub-argument, it doesn&#039;t know whether the user read the sub-argument in a way that influenced their vote for the parent argument. In such a case, the user can simply be asked whether they read the sub-argument; if so, any subsequent vote by the user for the sub-argument is flagged as a non-participant in its mathematical influence on the parent argument.&lt;br /&gt;
* If the sub-argument does not yet exist when the vote for the parent argument is cast, the software will flag a subsequent vote for any newly developed sub-argument as a legitimate participant in the mathematical influence it has on the parent argument.&lt;br /&gt;
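&lt;br /&gt;
The three checks above can be sketched in code. This is a minimal illustration only; the function and parameter names are invented for this sketch and are not part of any existing implementation:&lt;br /&gt;
&lt;br /&gt;
```python
def participates_in_rollup(sub_vote_time, parent_vote_time, user_read_sub):
    """Decide whether a sub-argument vote counts in the mathematical
    roll-up onto its parent argument (times are any comparable
    timestamps; None means no vote was cast)."""
    if parent_vote_time is None:
        # No parent vote yet: the sub-argument vote counts normally.
        return True
    if sub_vote_time is not None and parent_vote_time > sub_vote_time:
        # First check: the user voted on the sub-argument before the
        # parent, so the sub-argument presumably influenced that vote.
        return False
    if user_read_sub:
        # Second check: the user read (but had not voted on) the
        # sub-argument before voting on the parent.
        return False
    # Third check: the sub-argument appeared only after the parent vote,
    # so a later vote on it legitimately participates.
    return True
```
&lt;br /&gt;
Each branch corresponds to one of the three checks listed above.&lt;br /&gt;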
&lt;br /&gt;
It is probably difficult to make a system like this foolproof. A user might report not having read a sub-argument that they have, in fact, read. Tracking features could, in theory, be developed to check whether this is the case and react accordingly. However, it would still be difficult to know for sure how deeply the user understands the sub-argument just based on a record that they clicked on it or had it &amp;quot;open&amp;quot;. It also seems like it would be easy to overdo tracking of this kind to the point where it simply turns off an otherwise enthusiastic user. Another interesting idea is the use of word descriptors to match scores. In our system, descriptors such as &amp;quot;completely irrelevant&amp;quot;, &amp;quot;somewhat irrelevant&amp;quot;, etc. could be a useful way to break up corresponding numerical ranges in our 0-1 scoring system.&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Argument_evaluation_and_scoring&amp;diff=2174</id>
		<title>Argument evaluation and scoring</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Argument_evaluation_and_scoring&amp;diff=2174"/>
		<updated>2024-09-25T13:52:17Z</updated>

		<summary type="html">&lt;p&gt;Lem: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;Some thoughts on evaluating arguments&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In our last meeting Dan asked how we might evaluate an argument made by a respondent instead of simply relying on the given probability (as we&#039;ve been doing so far). An argument, assuming it is made public, could then be evaluated by the questioner and others independently to find a more accurate probability. This opens up a new idea in our work, that of assessing the truth by evaluating the reasoning put forth in an [[opinion]].&lt;br /&gt;
&lt;br /&gt;
One idea for doing this starts with a simple model for argument construction. The argument consists of supporting statements which are tied together with [[logic]] to form a conclusion. The conclusion is the answer to the overall question being asked of the network. Each supporting statement and the logic can be evaluated independently to determine the extent to which the conclusion is true. The following diagram illustrates this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;Answer/Conclusion, Pc&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;Logic, Pl&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;Support. Stmt. 1, Ps1&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;Support. Stmt. 2, Ps2&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;Support. Stmt. 3, Ps3&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;back&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 2 [dir=&amp;quot;back&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 3 [dir=&amp;quot;back&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 4 [dir=&amp;quot;back&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The probability of the supporting statements can be combined in a Bayesian manner. This is in keeping with the Bayesian idea of modifying prior probabilities given new evidence (ie supporting statements). These probabilities can be [[trust]]-modified as Sapienza proposed (https://ceur-ws.org/Vol-1664/w9.pdf) but since they are likely being assigned by the questioner, we will assume that trust is already built into them. Of more importance is the relevance of the supporting statements. They can range from completely irrelevant to completely relevant. A completely relevant statement will take the full value of the probability it was originally assigned. A completely irrelevant statement would reduce the probability to 50%, where it will have no influence on the outcome. In that sense relevance functions in the same way trust does to modify the probability:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
P_{mod} = P_{nom} + R(P - P_{nom})&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;R&amp;lt;/math&amp;gt; is relevance (0.0 - 1.0) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{mod}&amp;lt;/math&amp;gt; is relevance-modified probability&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{nom}&amp;lt;/math&amp;gt; is the nominal probability (=0.5 for a [[predicate]] question)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; is the unmodified probability&lt;br /&gt;
&lt;br /&gt;
After the relevance-modification, each supporting statement is combined in the usual manner via Bayes. For the first two statements,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
P_{comb1,2} = {P_{s1}P_{s2}\over {P_{s1}P_{s2} + (1-P_{s1})(1-P_{s2})}}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and so on for each additional statement. Here it is to be understood that &amp;lt;math&amp;gt;P_{s1}&amp;lt;/math&amp;gt;, etc. is the value &amp;lt;i&amp;gt;after&amp;lt;/i&amp;gt; the relevance modification.&lt;br /&gt;
&lt;br /&gt;
Logic will also have a probability assigned to it to represent its quality. A fully illogical argument would receive a 0, which when combined via Bayes with the supporting statements would render the probability of the entire argument 0. This makes sense because a completely illogical argument, regardless of the strength of its supporting statements, destroys itself. A fully logical argument, however, will not receive a 1 but rather a 0.5. When combined with the supporting statements a 1 would render the final probability a 1, which is not reasonable. A 0.5, however, would do nothing and the final probability would be the combined probability of the supporting statements. Thus we assume that perfect logic is neutral and less than perfect logic reduces the combined probability of the statements. Again, this seems reasonable. We expect, by default, logical arguments which then rest on the strength of their supporting statements. If we notice flaws in the logic we discount the strength of the argument accordingly.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s try an example with these ideas:&lt;br /&gt;
&lt;br /&gt;
Question: Are humans causing frog populations to decline?&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Answer / Conclusion: Yes, mankind is causing a fall in frog populations.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Logic: Mankind is causing the fall of frog populations if we can show that frog populations are decreasing over time and can show a human behavior that causes the decline.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Supporting Statements:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# [https://en.wikipedia.org/wiki/Frog#:~:text=Frog%20populations%20have%20declined%20significantly%20since%20the%201950s. Frog populations have declined since the 1950&#039;s.]&lt;br /&gt;
# My wife complained that she doesn&#039;t see frogs anymore.&lt;br /&gt;
# [https://wwf.panda.org/discover/our_focus/freshwater_practice/freshwater_biodiversity_222/ Scientists say] that the loss of freshwater habitats has affected frog populations.&lt;br /&gt;
&lt;br /&gt;
We start by judging the quality of the supporting statements. Statement 1 seems well substantiated (a high P) but is not completely relevant because it only hints at human involvement. Statement 2 is completely true but irrelevant. Statement 3 is a contributor but seems less substantiated than 1 and names no human cause. We proceed by assigning probability and relevance values:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s1}=0.9&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;R_{s1}=0.7&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s1mod} = 0.5 + 0.7(0.9 - 0.5) = 0.78&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s2}=1.0&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;R_{s2}=0.0&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s2mod} = 0.5 + 0.0(1.0 - 0.5) = 0.5&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s3}=0.75&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;R_{s3}=0.5&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s3mod} = 0.5 + 0.5(0.75 - 0.5) = 0.625&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since s2 won&#039;t count in the Bayesian calculation we can ignore it and:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{comb,s} = {(0.78)(0.625) \over {0.78(0.625)+0.22(0.375)}} = 0.855&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logic/conclusion in this case is reasonably strong so we will assign it a high value, say &amp;lt;math&amp;gt;P_l = 0.45&amp;lt;/math&amp;gt; (remember, out of 0.5). It could be improved by observing that the word &amp;quot;behavior&amp;quot; is too general and should be replaced by, say, &amp;quot;policy choice&amp;quot; (ie urban growth into ecologically important wetlands). We note here that logic is more than just the mathematical construction of an argument. Since we are speaking a human language, logic might also be flawed because it uses imprecise wording.&lt;br /&gt;
&lt;br /&gt;
Putting &amp;lt;math&amp;gt;P_{comb,s}&amp;lt;/math&amp;gt; together with &amp;lt;math&amp;gt;P_l&amp;lt;/math&amp;gt; using Bayes we obtain a concluding probability:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_c = {0.855(0.45) \over {0.855(0.45) + 0.145(0.55)}} = 0.83&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One potential pitfall of this model is that repetitive supporting statements of high probability will quickly drive the combined probability toward 1.0. As we&#039;ve seen in the past, this is simply a consequence of the Bayes equation. The user would need to watch for such attempts to distort the answer and counter them by removing repetitive statements or marking them irrelevant. &lt;br /&gt;
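The relevance modification, Bayesian combination, and logic adjustment can be reproduced as a short calculation. This is a minimal sketch of the frog example above; the function names are illustrative, not part of the system:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def relevance_modified(p, r, p_nom=0.5):
    # P_mod = P_nom + R * (P - P_nom)
    return p_nom + r * (p - p_nom)

def bayes_combine(probs):
    # Naive Bayesian combination of independent probabilities.
    num = math.prod(probs)
    den = num + math.prod(1 - p for p in probs)
    return num / den

# Supporting statements from the frog example.
p1 = relevance_modified(0.9, 0.7)    # 0.78
p2 = relevance_modified(1.0, 0.0)    # 0.5, drops out of the combination
p3 = relevance_modified(0.75, 0.5)   # 0.625

p_comb = bayes_combine([p1, p3])     # about 0.855
p_c = bayes_combine([p_comb, 0.45])  # logic quality P_l = 0.45; about 0.83
```
&lt;br /&gt;
Note that a statement modified to exactly 0.5 (here s2) contributes nothing to the Bayesian product and can simply be omitted.&lt;br /&gt;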
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Scoring of individual arguments&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Arguments can be scored on veracity, impact, relevance, clarity, and informal quality (lack of fallacies):&lt;br /&gt;
&lt;br /&gt;
- Veracity is how true the argument is based on source information. Source information itself will be scored: &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- Impact &amp;amp; Relevance measure how deeply the argument affects the main contention of the [[debate]] (or the argument immediately above): &amp;lt;math&amp;gt;R&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- Clarity is how understandable the argument is: &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- Informal quality (lack of fallacies) is whether the argument commits any logical fallacies of its own. A list of informal fallacies (and formal ones) will be provided to help users select appropriately: &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since Impact and Relevance are closely related concepts we will merge them into one, Relevance. The simplest method for combining the categories is to average them, or to take a weighted average:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S = w_vV + w_rR + w_cC + w_fF&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_x&amp;lt;/math&amp;gt; is a weighting for category X (eg Veracity, Relevance, etc)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_v + w_r + w_c + w_f = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This seems reasonable and if we believe that certain criteria should weigh more (such as Veracity) we can easily make the weighting factors reflect this. However, intuitively it seems that a category such as Veracity should not only weigh more but have the power to take down the whole argument. After all, if the argument is a straightforward lie, it should receive a score of zero, regardless of its other attributes (such as relevance, clarity, etc):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;Who is the best choice for President? X is the best choice because he will land a person on Mars in his first year.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This argument is a lie and although it is clear, has no evident fallacies, and is relevant to the question at hand, it should be thrown out.&lt;br /&gt;
&lt;br /&gt;
The same can be said of Relevance. Complete irrelevance should also have the power to render the whole argument moot:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;Who is the best choice for President? X is the best choice because he likes pizza and so do I.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this in mind, we can propose the following equation, which we will dub the &amp;quot;VRFC equation&amp;quot;: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S = VR(w_fF + w_cC)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Where &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; = Score for the argument which varies from 0-1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f&amp;lt;/math&amp;gt; = weighting factor for Fallacies.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_c&amp;lt;/math&amp;gt; = weighting factor for Clarity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f + w_c = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and each of the constituent variables (&amp;lt;math&amp;gt;V, R, F, C&amp;lt;/math&amp;gt;) has a range 0-1.&lt;br /&gt;
&lt;br /&gt;
In this equation either Veracity or Relevance has the power to nullify the entire argument. Similarly, a combination of Fallacies and lack of Clarity can do the same. However, a fallacious argument alone could still have merit, as could an argument whose only flaw is a lack of clarity:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;We should support Czechoslovakia because if the Nazis prevail they will conquer the world.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This argument commits the slippery slope fallacy but is not entirely invalid. Similarly an unclear argument can still manage to make a point:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;We should support Europe because first the Sudetenland, then the Czechs, and soon enough it&#039;s over when all the Brits had to do was get rid of that weakling sooner.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It would seem that fallacies should be weighted more heavily than a lack of clarity. Proposed weights might be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f = 0.7, w_c = 0.3&amp;lt;/math&amp;gt;&lt;br /&gt;
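&lt;br /&gt;
The nullifying behavior of the VRFC equation can be illustrated with a small sketch. The weights are the proposed values above; the function name and sample inputs are invented for illustration:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def vrfc_score(v, r, f, c, w_f=0.7, w_c=0.3):
    """S = V * R * (w_f * F + w_c * C); every input ranges 0-1."""
    return v * r * (w_f * f + w_c * c)

# A straightforward lie (V = 0) zeroes the score no matter how clear,
# relevant, and fallacy-free the argument is:
assert vrfc_score(v=0.0, r=1.0, f=1.0, c=1.0) == 0.0

# Complete irrelevance (R = 0) nullifies the argument the same way:
assert vrfc_score(v=1.0, r=0.0, f=1.0, c=1.0) == 0.0

# A truthful, relevant argument with some fallacies keeps partial merit:
score = vrfc_score(v=0.9, r=0.8, f=0.5, c=1.0)
assert math.isclose(score, 0.468)
```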
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Rolling up the score of argument trees&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation above applies to a single argument but, as we&#039;ve seen, most arguments have sub-arguments below, sub-sub-arguments, and so forth. They are really trees in which each individual argument can be scored separately. &lt;br /&gt;
&lt;br /&gt;
Here we develop a proposed equation for rolling up the score for an argument based on its own score and that of its sub-arguments. In doing so we emphasize that any argument can stand on its own and be scored in the absence of sub-arguments. This creates an interesting dynamic. The sub-argument may bolster or detract from the parent argument but the extent to which it does should be limited.&lt;br /&gt;
&lt;br /&gt;
Furthermore, once the sub-argument becomes weaker than a certain threshold, it should stop influencing the parent argument altogether. Here, we will set this threshold at 0.5. Thus only Pro sub-arguments that score 0.5 or better will have any influence on the parent argument. For Con sub-arguments we will use the same threshold but first modify the sub-argument score &amp;lt;math&amp;gt;x_c&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;1-x_c&amp;lt;/math&amp;gt;. Thus a strong Con sub-argument, scoring say 0.9, would enter the calculation with a score of 0.1. The result is a range of scores 0-1 of which 0-0.5 is Con and 0.5-1 is Pro. Scores of exactly 0.5 are neutral.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s consider the case with one argument and one pro sub-argument and one Con sub-argument.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;Argument, s = 0.9&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;Pro sub-argument, xp = 0.7&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;Con sub-argument, xc = 0.7&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case the argument&#039;s score is 0.9, and both the Pro and Con sub-arguments score 0.7. These numbers would normally be arrived at using the VRFC equation above, but we will just assume them for now. The first sub-argument bolsters the argument because it is a Pro argument and has a score (0.7) greater than 0.5. The second sub-argument, with the same score, detracts from the argument because it is on the Con side. We emphasize that if these scores were at or below 0.5 they would have no effect on the argument.&lt;br /&gt;
&lt;br /&gt;
The general equation governing this situation is as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;s &amp;gt; 0.5&amp;lt;/math&amp;gt;, when the sub-argument&#039;s own score exceeds the 0.5 threshold (&amp;lt;math&amp;gt;x_p &amp;gt; 0.5&amp;lt;/math&amp;gt; for Pro, &amp;lt;math&amp;gt;x_c &amp;gt; 0.5&amp;lt;/math&amp;gt; for Con)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
s_{mod} = 2(1-s)fx + s - (1-s)f&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;s &amp;lt; 0.5&amp;lt;/math&amp;gt; under the same threshold&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
s_{mod} = 2sfx + s - sf&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the sub-argument&#039;s own score is at or below 0.5, it has no effect in either case:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
s_{mod} = s&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; = score for parent argument&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = x_p&amp;lt;/math&amp;gt; = score for Pro arguments&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = 1-x_c&amp;lt;/math&amp;gt; = score for Con arguments&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; = maximum allowed fraction of the possible change, 0-1&lt;br /&gt;
&lt;br /&gt;
The variable &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is a user-selected number between 0 and 1 and represents the extent to which a sub-argument can move the parent&#039;s score. For example, an argument with &amp;lt;math&amp;gt;s = 0.9&amp;lt;/math&amp;gt;, as in the diagram above, can be improved by at most 0.1 to reach the maximum of 1. Then &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; represents the fraction of that 0.1 that we will allow. If &amp;lt;math&amp;gt;f = 0.25&amp;lt;/math&amp;gt;, for instance, then the maximum range around 0.9 that a sub-argument can affect is &amp;lt;math&amp;gt;(0.25)(0.1) = 0.025&amp;lt;/math&amp;gt;. Thus the maximum score the argument can have is 0.925 and the minimum is 0.875.&lt;br /&gt;
&lt;br /&gt;
For the argument above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s = 0.9&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;f = 0.25&amp;lt;/math&amp;gt; User input&lt;br /&gt;
&lt;br /&gt;
For the Pro sub-argument:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = x_p = 0.7&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.9)(0.25)(0.7) + 0.9 - (1-0.9)(0.25) = 0.91&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Con sub-argument:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = (1-x_c) = (1-0.7) = 0.3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.9)(0.25)(0.3) + 0.9 - (1-0.9)(0.25) = 0.89&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We can see here that the Pro and Con sub-arguments exactly balance each other since they both have the same score. &lt;br /&gt;
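The calculation can be reproduced with a short sketch. Consistent with the worked example, the 0.5 threshold is applied to the sub-argument&#039;s own score before a Con score is flipped; the function name is illustrative only:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def roll_up(s, sub_score, f, con=False):
    """Modify parent score s using a sub-argument's own score.
    Con sub-arguments enter the formula flipped as x = 1 - sub_score.
    The user-selected f limits how far the parent score can move."""
    if sub_score > 0.5:  # scores at or below the threshold have no effect
        x = 1 - sub_score if con else sub_score
        if s > 0.5:
            return 2 * (1 - s) * f * x + s - (1 - s) * f
        return 2 * s * f * x + s - s * f
    return s

pro = roll_up(0.9, 0.7, 0.25)            # Pro sub-argument  -> 0.91
con = roll_up(0.9, 0.7, 0.25, con=True)  # Con sub-argument  -> 0.89
assert math.isclose(pro, 0.91, abs_tol=1e-9)
assert math.isclose(con, 0.89, abs_tol=1e-9)
assert roll_up(0.9, 0.4, 0.25) == 0.9    # below threshold: unchanged
```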
&lt;br /&gt;
The equation above is piecewise linear and can be visualized as follows:&lt;br /&gt;
&lt;br /&gt;
![image](uploads/7f62b22b9eb19771503d654db29cab92/image.png)&lt;br /&gt;
&lt;br /&gt;
One important property of this equation is that the stronger (or weaker) an argument becomes, the harder it is for a sub-argument to change it. This is because the maximum allowed movement is &amp;lt;math&amp;gt;1-s&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;s &amp;gt; 0.5&amp;lt;/math&amp;gt; or simply &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;s &amp;lt;= 0.5&amp;lt;/math&amp;gt;. The idea behind this property is that very strong arguments should be harder to dislodge precisely because they have covered themselves well. A weaker argument, for instance one that fails to mention an obvious supporting fact, can be bolstered more by a sub-argument that supplies the fact. Similarly, a very weak argument should be difficult to bolster: if the argument is a lie or irrelevant, for instance, there isn&#039;t much that can be done to rescue it.&lt;br /&gt;
&lt;br /&gt;
This property has the further consequence that &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; cannot be changed by the sub-arguments if it is 1 or 0. A truly perfect argument, &amp;lt;math&amp;gt;s = 1&amp;lt;/math&amp;gt;, cannot be weakened no matter how strong its Con sub-argument. Similarly, a perfectly flawed argument, &amp;lt;math&amp;gt;s = 0&amp;lt;/math&amp;gt;, cannot be bolstered by any Pro sub-argument. We will discuss below a method to deal with the fact that, regardless of the quality of the argument, users may still vote to score arguments 1 or 0.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h4&amp;gt;Population adjustments&amp;lt;/h4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm described above assumes a single vote for the argument and sub-arguments. In fact, this will rarely be the case, because multiple users will be voting on each. The effect of a sub-argument on its parent should be weighted by the populations of users who voted on the sub-argument and the parent argument.&lt;br /&gt;
&lt;br /&gt;
Here we propose a simple modification factor, based on the ratio of users voting for each argument/sub-argument:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop} = (s_{mod} - s){p_s\over p} + s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop}&amp;lt;/math&amp;gt; is the population modified score&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod}&amp;lt;/math&amp;gt; is the modified score without population modifications (see above)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p_s&amp;lt;/math&amp;gt; is the population voting for the sub-argument&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is the population voting for the parent argument&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; is the original score of the parent argument&lt;br /&gt;
&lt;br /&gt;
Usually we expect that sub-arguments will receive fewer votes than parent arguments, so &amp;lt;math&amp;gt;{{p_s\over p} &amp;lt;= 1}&amp;lt;/math&amp;gt; in general. For the case when &amp;lt;math&amp;gt;p_s &amp;gt; p&amp;lt;/math&amp;gt; we will force &amp;lt;math&amp;gt;{p_s\over p} = 1&amp;lt;/math&amp;gt;. Therefore there is no danger that a sub-argument can overwhelm a parent argument by voting power alone. This is in keeping with our philosophy that sub-arguments can have at best a limited effect on parent arguments.&lt;br /&gt;
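&lt;br /&gt;
As a sketch (function name ours), the population adjustment with the capped ratio is:&lt;br /&gt;

```python
def pop_adjust(s, s_mod, p_sub, p):
    """Population-modified score.

    Scales the shift (s_mod - s) by the ratio of sub-argument voters
    to parent voters, capped at 1 so that a sub-argument can never
    overwhelm its parent by voting power alone.
    """
    ratio = min(p_sub / p, 1.0)
    return (s_mod - s) * ratio + s

# a sub-argument that moved 0.7 to 0.745, voted on by 26 of 55 people
print(round(pop_adjust(0.7, 0.745, 26, 55), 3))  # 0.721
```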
&lt;br /&gt;
&amp;lt;h4&amp;gt;Example calculation&amp;lt;/h4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s do a problem with the following argument tree and &amp;lt;math&amp;gt;f=0.25&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis Statement&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Pro argument, s = 0.9, p = 96&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, Pro sub-argument, xp = 0.7, p = 55&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, Pro sub-argument, xp = 0.8, p = 26&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4, Con sub-argument, xc = 0.6, p = 30&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, Con sub-argument, xc = 0.7, p = 43&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, Pro sub-argument, xp = 0.85, p = 19&amp;quot;]&lt;br /&gt;
    7 [label=&amp;quot;7, Con sub-argument, xc = 0.95, p = 28&amp;quot;] &lt;br /&gt;
    8 [label=&amp;quot;8, Con argument, ....&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 2 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 5 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 3 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 4 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    5 -&amp;gt; 6 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    5 -&amp;gt; 7 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 8 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Our objective here is to roll up the score for the Pro side of this tree. The Con side would be calculated similarly and we will skip this for the sake of brevity. Note that &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; stands for the score and &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is the population voting to produce that score. We start at the bottom, with the 2-3-4 portion of the tree, and for the sake of consistency with the above calculation we will recast &amp;lt;math&amp;gt;x_p&amp;lt;/math&amp;gt; for 2 as &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; and label the population of the sub-arguments as &amp;lt;math&amp;gt;p_s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    2 [label=&amp;quot;2, Pro argument, s = 0.7, p = 55&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, Pro sub-argument, xp = 0.8, ps = 26&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4, Con sub-argument, xc = 0.6, ps = 30&amp;quot;]&lt;br /&gt;
    2 -&amp;gt; 3 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 4 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Pro sub-argument, we write:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.7)(0.25)(0.8) + 0.7 - (1-0.7)(0.25) = 0.745&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We modify this by the respective populations:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop23} = (s_{mod} - s){p_s\over p} + s = (0.745 - 0.7){26\over 55} + 0.7 = 0.721&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Con sub-argument we first modify its score,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = 1 - x_c = 0.4&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and write&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.7)(0.25)(0.4) + 0.7 - (1-0.7)(0.25) = 0.685&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and modify by the respective population,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop24} = (s_{mod} - s){p_s\over p} + s = (0.685 - 0.7){30\over 55} + 0.7 = 0.692&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These two values of &amp;lt;math&amp;gt;s_{mod,pop}&amp;lt;/math&amp;gt; can now be combined to create a new &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; for the Pro argument:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,tot} = (s_{mod,pop23} - s) + (s_{mod,pop24} - s) + s = (0.721 - 0.7) + (0.692 - 0.7) + 0.7 = 0.713&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We note here that the Pro argument got a little stronger as a result of its sub-arguments. The Pro sub-argument was substantially stronger than the Con sub-argument and, although fewer people voted for it, the population difference was not large.&lt;br /&gt;
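&lt;br /&gt;
The 2-3-4 roll-up just performed can be checked with a short script (a sketch using our own helper names):&lt;br /&gt;

```python
def s_mod(s, x, f):
    # parent score modified by one sub-argument (branch for s > 0.5)
    reach = (1 - s) * f
    return 2 * reach * x + s - reach

def pop_adjust(s, sm, p_sub, p):
    # scale the shift by the capped voter-population ratio
    return (sm - s) * min(p_sub / p, 1.0) + s

s, f, p = 0.7, 0.25, 55.0                           # Pro argument 2
pro = pop_adjust(s, s_mod(s, 0.8, f), 26.0, p)      # Pro sub-argument 3
con = pop_adjust(s, s_mod(s, 1 - 0.6, f), 30.0, p)  # Con sub-argument 4
s_tot = (pro - s) + (con - s) + s                   # combine the two shifts
print(round(s_tot, 3))  # 0.713
```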
&lt;br /&gt;
For the Con sub-argument 5-6-7 we have the following situation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    5 [label=&amp;quot;5, Con argument, s = 0.7, p = 43&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, Pro sub-argument, xp = 0.85, ps = 19&amp;quot;]&lt;br /&gt;
    7 [label=&amp;quot;7, Con sub-argument, xc = 0.95, ps = 28&amp;quot;] &lt;br /&gt;
    5 -&amp;gt; 6 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    5 -&amp;gt; 7 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here, for the Pro sub-argument, we first modify its score since it is the opposite of its parent. It is as if the parent were a Pro argument and the child were a Con argument.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = 1 - x_p = 1 - 0.85 = 0.15&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then proceed as usual with the calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.7)(0.25)(0.15) + 0.7 - (1-0.7)(0.25) = 0.6475&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop56} = (s_{mod} - s){p_s\over p} + s = (0.6475 - 0.7){19\over 43} + 0.7 = 0.677&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Con sub-argument &amp;lt;math&amp;gt;x = x_c = 0.95&amp;lt;/math&amp;gt; since the parent argument is also Con:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.7)(0.25)(0.95) + 0.7 - (1-0.7)(0.25) = 0.7675&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop57} = (s_{mod} - s){p_s\over p} + s = (0.7675 - 0.7){28\over 43} + 0.7 = 0.744&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We combine these two results in the same manner as above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,tot} = (s_{mod,pop56} - s) + (s_{mod,pop57} - s) + s = (0.677 - 0.7) + (0.744 - 0.7) + 0.7 = 0.721&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the bottom layer of the tree calculated, we have the following situation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis Statement&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Pro argument, s = 0.9, p = 96&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, Pro sub-argument, xp = 0.713, ps = 55&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, Con sub-argument, xc = 0.721, ps = 43&amp;quot;]&lt;br /&gt;
    8 [label=&amp;quot;8, Con argument, ....&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 2 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 5 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 8 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All that remains is to calculate the 1-2-5 portion, which is very similar to the 2-3-4 calculation performed above. Therefore we will skip the details and simply report the results:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop12} = 0.906&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop15} = 0.895&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod, tot} = 0.901&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We see here that the final result is not much different from the original &amp;lt;math&amp;gt;s = 0.9&amp;lt;/math&amp;gt;. This is because the Pro sub-arguments are essentially cancelled by the Con sub-arguments. Such a result is to be expected in many cases.&lt;br /&gt;
&lt;br /&gt;
In this example, we are skipping the Con side of the overall argument (node 8 in the tree above) because it would be exactly the same as what we have shown. If it had been calculated we would then combine the result for 1 and 8 to produce an overall score for the argument.&lt;br /&gt;
&lt;br /&gt;
The calculations above can be performed with the [attached snippet](https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/164). The user input portion of the snippet is set up for the calculation we did immediately above:     &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
#User input&lt;br /&gt;
side_parent = &#039;pro&#039; #side, pro or con, that the parent argument is on    &lt;br /&gt;
s = 0.9 #score for the parent argument&lt;br /&gt;
mf = 0.25 #max fraction that parent argument can be changed in terms of (1-s) or (s-0)&lt;br /&gt;
p = 96.0 #population voting for the parent argument&lt;br /&gt;
x_pro_arr = [0.713] #score for the pro children&lt;br /&gt;
x_con_arr = [0.721] #score for the con children&lt;br /&gt;
ps_pro_arr = [55.0] #population voting for each pro child sub-argument&lt;br /&gt;
ps_con_arr = [43.0] #pop voting for each con child sub-argument&lt;br /&gt;
mods_if1or0 = True #True if we want scores of 1 or 0 to be modified to near 1 or 0 (otherwise they can&#039;t be adjusted by this calculation)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the snippet uses arrays so it can handle any number of child arguments. These are combined in the same way we combined the single Pro and Con sub-arguments above.&lt;br /&gt;
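&lt;br /&gt;
A generalized roll-up along those lines might look as follows. This is a sketch rather than the snippet itself; the names are ours, and the clamp at the top mirrors the `mods_if1or0` switch.&lt;br /&gt;

```python
def roll_up(s, f, p, x_pro, ps_pro, x_con, ps_con, mods_if1or0=True):
    """Combine any number of Pro and Con children into a parent score.

    s, p: parent score and voting population; f: max-fraction input;
    x_pro/x_con: child scores; ps_pro/ps_con: child populations.
    """
    if mods_if1or0:
        # scores of exactly 1 or 0 could never move, so nudge them
        s = min(max(s, 0.01), 0.99)
    reach = (1 - s) * f if s > 0.5 else s * f
    shift = 0.0
    for x, ps in zip(x_pro, ps_pro):
        sm = 2 * reach * x + s - reach
        shift += (sm - s) * min(ps / p, 1.0)
    for x, ps in zip(x_con, ps_con):
        sm = 2 * reach * (1 - x) + s - reach  # Con children count as 1 - x_c
        shift += (sm - s) * min(ps / p, 1.0)
    return s + shift

# top level of the example tree: argument 1 with children 2 and 5
print(round(roll_up(0.9, 0.25, 96.0, [0.713], [55.0], [0.721], [43.0]), 3))  # 0.901
```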
 &lt;br /&gt;
Another variable, `mods_if1or0`, controls whether we allow &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; to be modified when it is set to 1 or 0. As discussed above, arguments where &amp;lt;math&amp;gt;s = 1&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;s = 0&amp;lt;/math&amp;gt; are perfect, or perfectly flawed, and thus cannot be changed by sub-arguments. This idea may be theoretically sound, but it won&#039;t stop users from voting 1 or 0 on arguments. In such cases the `mods_if1or0` switch, when True, changes &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; to 0.99 or 0.01 respectively.&lt;br /&gt;
&lt;br /&gt;
As a side note, this property parallels Bayesian probabilities of 1 or 0, which also cannot be changed. We have discussed this problem in earlier posts, arguing that probabilities of 1 or 0 don&#039;t really exist because they would require an infinite sample size. In the same way, a perfect (or perfectly flawed) argument cannot exist because it would, at some point, run into the same issues that Bayesian probabilities do.&lt;br /&gt;
&lt;br /&gt;
For example, suppose we&#039;ve invented a pill that cures cancer. It is one dose, costs 10 cents to make, has no side effects, has no environmental impact due to manufacture, and is certain to cure someone&#039;s cancer. The argument for a cancer patient taking the pill is, for all practical purposes, perfect. There is simply no plausible argument against it. We could score such an argument a 1 until we remember our probabilities. We only know that the pill works and has no side effects in a limited population, say 100,000 patients. We don&#039;t know what effect it will have on the 100,001st patient. So the best we can say is that the drug is 0.99999 effective. Given that the argument is really predicated on the effectiveness of the drug, we could say its score is also 0.99999.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;[https://slashdot.org/ Slashdot]&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Slashdot offers a system for content moderation summarized by the following from [[wikipedia:Slashdot|Wikipedia]]:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;i&amp;gt;Slashdot&#039;s editors are primarily responsible for selecting and editing the primary stories that are posted daily by submitters. The editors provide a one-paragraph summary for each story and a link to an external website where the story originated. Each story becomes the topic for a threaded discussion among the site&#039;s users. A user-based moderation system is employed to filter out abusive or offensive comments.[63] Every comment is initially given a score of −1 to +2, with a default score of +1 for registered users, 0 for anonymous users (Anonymous Coward), +2 for users with high &amp;quot;karma&amp;quot;, or −1 for users with low &amp;quot;karma&amp;quot;. As moderators read comments attached to articles, they click to moderate the comment, either up (+1) or down (−1). Moderators may choose to attach a particular descriptor to the comments as well, such as &amp;quot;normal&amp;quot;, &amp;quot;offtopic&amp;quot;, &amp;quot;flamebait&amp;quot;, &amp;quot;troll&amp;quot;, &amp;quot;redundant&amp;quot;, &amp;quot;insightful&amp;quot;, &amp;quot;interesting&amp;quot;, &amp;quot;informative&amp;quot;, &amp;quot;funny&amp;quot;, &amp;quot;overrated&amp;quot;, or &amp;quot;underrated&amp;quot;, with each corresponding to a −1 or +1 rating. So a comment may be seen to have a rating of &amp;quot;+1 insightful&amp;quot; or &amp;quot;−1 troll&amp;quot;.[57] Comments are very rarely deleted, even if they contain hateful remarks.&lt;br /&gt;
&lt;br /&gt;
::Starting in August 2019 anonymous comments and postings have been disabled.&lt;br /&gt;
&lt;br /&gt;
::Moderation points add to a user&#039;s rating, which is known as &amp;quot;karma&amp;quot; on Slashdot. Users with high &amp;quot;karma&amp;quot; are eligible to become moderators themselves. The system does not promote regular users as &amp;quot;moderators&amp;quot; and instead assigns five moderation points at a time to users based on the number of comments they have entered in the system – once a user&#039;s moderation points are used up, they can no longer moderate articles (though they can be assigned more moderation points at a later date). Paid staff editors have an unlimited number of moderation points. A given comment can have any integer score from −1 to +5, and registered users of Slashdot can set a personal threshold so that no comments with a lesser score are displayed. For instance, a user reading Slashdot at level +5 will only see the highest rated comments, while a user reading at level −1 will see a more &amp;quot;unfiltered, anarchic version&amp;quot;. A meta-moderation system was implemented on September 7, 1999, to moderate the moderators and help contain abuses in the moderation system. Meta-moderators are presented with a set of moderations that they may rate as either fair or unfair. For each moderation, the meta-moderator sees the original comment and the reason assigned by the moderator (e.g. troll, funny), and the meta-moderator can click to see the context of comments surrounding the one that was moderated.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Slashdot&#039;s purpose is to promote high-quality discussion, which is somewhat similar to our purpose of promoting high-quality arguments. In particular, the reputation (karma) of the moderators is an interesting concept. We could use a similar system to weight voters with a good reputation more heavily in argument scoring. Another interesting idea is the use of word descriptors to match scores. In our system, descriptors such as &amp;quot;completely irrelevant&amp;quot;, &amp;quot;somewhat irrelevant&amp;quot;, etc. could be a useful way to break up corresponding numerical ranges in our 0-1 scoring system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Refining our Argument Score with Reputation/Trust&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[#Scoring of individual arguments|Above]] we discussed an equation to score arguments on the basis of Veracity, Relevance, Freedom from Fallacies, and Clarity:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S = VR(w_fF + w_cC)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is overall score&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; is Veracity &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;R&amp;lt;/math&amp;gt; is Relevance&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; is Fallacies (ie freedom from fallacies)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is Clarity&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f&amp;lt;/math&amp;gt; is weighting for Fallacies, eg 0.7&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_c&amp;lt;/math&amp;gt; is weighting for Clarity, eg 0.3&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f + w_c = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Each user would vote on each category and the resulting &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; would be calculated. However, each user may have a different reputation/trust for their ability to judge these four criteria. We can take this into account by simply adding a weighting factor for Trust:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S = (T_vV)(T_rR)(w_fT_fF + w_cT_cC)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;T_x&amp;lt;/math&amp;gt; = Trust in user&#039;s ability to evaluate each category &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; (Veracity, Relevance, Fallacies, and Clarity)&lt;br /&gt;
&lt;br /&gt;
Since the user evaluates trust in multiple categories, it would be useful to generate a composite trust for all the categories:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
T_{comp} = {(T_vV)(T_rR)(w_fT_fF + w_cT_cC) \over{VR(w_fF + w_cC)}}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;T_{comp}&amp;lt;/math&amp;gt; is the composite trust for all categories.&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;math&amp;gt;T_{comp}&amp;lt;/math&amp;gt; can also be seen as an &amp;quot;average&amp;quot; trust, ie the single factor that produces the same argument score as that resulting from the multiple trust factors.&lt;br /&gt;
&lt;br /&gt;
Once we have &amp;lt;math&amp;gt;T_{comp}&amp;lt;/math&amp;gt; we can use it to generate an average &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; for all users:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S_{ave} = {\sum S \over{\sum T_{comp}}}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
That is, instead of dividing by the number of people voting, we divide by the total of how much they &amp;quot;count&amp;quot;. This is similar to the [[A trust weighted averaging technique to supplement straight averaging and Bayes|trust-weighted average scheme]] we have proposed before. It is this average &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; that we will take to be the score for the argument.&lt;br /&gt;
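&lt;br /&gt;
The trust-weighted averaging can be sketched for a handful of hypothetical voters (all votes and trust values below are made up for illustration):&lt;br /&gt;

```python
def vrfc(V, R, F, C, wf=0.7, wc=0.3):
    # raw argument score from the four rhetorical criteria
    return V * R * (wf * F + wc * C)

def vrfc_trusted(V, R, F, C, Tv, Tr, Tf, Tc, wf=0.7, wc=0.3):
    # the same score with per-category trust factors applied
    return (Tv * V) * (Tr * R) * (wf * Tf * F + wc * Tc * C)

# hypothetical votes: (V, R, F, C) and trusts (Tv, Tr, Tf, Tc) per voter
votes = [((0.9, 0.8, 0.9, 0.7), (1.0, 0.9, 0.8, 1.0)),
         ((0.6, 0.7, 0.8, 0.9), (0.5, 0.5, 0.5, 0.5))]

S_sum = sum(vrfc_trusted(*v, *t) for v, t in votes)
# composite trust per voter: trusted score divided by raw score
T_sum = sum(vrfc_trusted(*v, *t) / vrfc(*v) for v, t in votes)
S_ave = S_sum / T_sum  # divide by how much the voters "count"
```

If every trust factor is 1, each voter&#039;s composite trust is 1 and S_ave reduces to a plain average.&lt;br /&gt;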
&lt;br /&gt;
&amp;lt;h3&amp;gt;Rhetorical vs. Practical Arguments&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So far our scoring methodology has been focused mainly on rhetorical aspects of arguments. Is the argument true, relevant to its parent contention, clear, and logical? These criteria certainly touch on the practical impact an argument may have and so far we have been merging Impact with Relevance since it is hard to distinguish the two. Consider the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Argument: Biden is a good President because he got an infrastructure \n bill passed that will do good things for the whole country.&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Sub-argument: The bill provides $2.9 million \n to Roanoke airport for improvements.&amp;quot;]&lt;br /&gt;
    2 [label = &amp;quot;2, Sub-argument: The bill provides $150 billion \n to combat climate change.&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here it would seem appropriate to roll Impact into Relevance. Presumably most voters will recognize Sub-argument 2 as being the more relevant one simply because it is, on a practical level, the more impactful one.&lt;br /&gt;
&lt;br /&gt;
However, usually Relevance is a rhetorical quality, not a practical one. In this case, as a matter of rhetoric, both supporting arguments are relevant to the topic at hand. They are both, without question, part of Biden&#039;s infrastructure bill. They are both true, clear, and free of fallacies. But even so, although it is clear what is going on, it would be better to separate the issue of whether Biden is a good president from the issue of which infrastructure allocations add the most value. &lt;br /&gt;
&lt;br /&gt;
One reason for this separation, in terms of the [[Argument scoring|math laid out previously]], is that a weak sub-argument stops having any influence once its score goes below 0.5. However, if we are scoring Impact, either implicitly or explicitly, this rule would seem inappropriate. The rule makes many small sub-arguments, like the one about Roanoke airport, stop having any value. Perhaps the Roanoke airport argument alone is negligible but it still has a positive impact and, when added to all the other similar projects around the country, would amount to a sizeable contribution. Therefore it wouldn&#039;t be correct to nullify it altogether.&lt;br /&gt;
&lt;br /&gt;
We also wouldn&#039;t want arguments from Impact to prematurely influence necessary rhetorical arguments. Take a favorite from moral philosophy: a healthy young person, John, comes in for a routine checkup at a clinic that has 5 critical patients in need of organ transplants. John has the organs they need and is a match for all of them. The doctors, using a purely utilitarian argument, decide to kill John and harvest his organs. It makes sense: 1 person dies and 5 live, so we&#039;re ahead. Let&#039;s for the moment disregard legal and other social artifacts that might persuade the doctors otherwise. This approach contrasts with a deontological perspective, which argues that the ends do not justify the means and that, indeed, the means in this case are all-important. But we can only ferret out the deontological argument by actually having it, within the context of a rhetorical debate. We would hope such a debate would successfully preclude any utilitarian considerations whatsoever.&lt;br /&gt;
&lt;br /&gt;
This leads us to the more general reason why separating the scoring techniques is appropriate. A score for Impact is essentially a cost-benefit-risk analysis for which established techniques exist and which would be confusing if scored together with the rhetorical argument. Indeed, by the time we reach an argument where impact is of interest we have usually dispensed with the rhetorical nature of the argument:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis: We have $150 billion to spend and \n should spend it on climate change&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Electric cars and charging stations.\n I = 0.5&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, Home and building insulation, solar, heat pumps, etc.\n I = 0.3&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, Carbon capture.\n I = 0.2&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 3 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We can see that this argument is likely the outcome of previous arguments where basic points have been agreed to (or at least settled) such as whether climate change is real, Biden is a good president, etc. Here we are at the final stages of the argument, which consists of a resolution to move forward with some practical course of action. &lt;br /&gt;
&lt;br /&gt;
In this case each sub-argument is, in effect, a lobbying effort for the money. All the sub-arguments are equal in terms of their Veracity, rhetorical Relevance, freedom from Fallacies, and Clarity. The only point of dispute is whether the money is better spent on one option or another. In a situation like this, &amp;lt;math&amp;gt;\sum I = 1&amp;lt;/math&amp;gt; would be defined as the effect of all plausible infrastructural investments we could make to impact climate change. In this respect let&#039;s assume we are limited to the three options above. The voters, presumably armed with engineering studies, would then weigh in to assess the impact of each proposal.&lt;br /&gt;
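&lt;br /&gt;
Under that definition the Impact scores behave like shares of the fixed budget. A toy sketch (the proposal names and the proportional-allocation rule are our illustration, not a settled part of the system):&lt;br /&gt;

```python
budget = 150e9  # the $150 billion from the thesis

# voter-assessed Impact per proposal; normalize so the sum of I is 1
raw_impact = {"electric vehicles": 0.5,
              "building efficiency": 0.3,
              "carbon capture": 0.2}

total = sum(raw_impact.values())
allocation = {name: budget * i / total for name, i in raw_impact.items()}
for name, dollars in allocation.items():
    print(f"{name}: ${dollars / 1e9:.0f}B")
```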
&lt;br /&gt;
It is important to emphasize that the argument at this point is a technical one. It is numerical in nature and hinges on scientific rigor. Our system for assessing trust in the people who can reasonably provide input at this level will be important. One can easily imagine how interested parties (and the merely ignorant) could skew the results with their vote. At the same time we want to encourage participants to move toward this type of debate since it leads to practical benefits and, by its nature, tends to reduce partisan rancor.&lt;br /&gt;
&lt;br /&gt;
The idea outlined here stands apart, by design, from the method that scores the rhetorical quality (eg the VRFC equation) of the argument. Rhetoric is designed to convince you of the argument as a whole but arguments using Impact are designed to forward a specific recommendation. Clearly, bundling the Impact with the VRFC equation is inappropriate.&lt;br /&gt;
&lt;br /&gt;
In many cases arguments will have a hard time getting to this level of practicality. They tend to remain mired in the basic rhetoric that governs them:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis: Does God exist?&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Yes, because I can speak to Him and \n He responds by doing good things for me.&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, No, because there is no objective \n evidence that there is anyone listening.&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This argument is clearly truncated but we can see where it is going. It is, in essence, one about the Veracity of personal experience vs. demonstrable evidence, which is a philosophical debate. It is hard to ascribe Impact to it because it doesn&#039;t ever get to the point of enumerating proposals.&lt;br /&gt;
&lt;br /&gt;
But it could eventually transform itself into one that did. Let&#039;s assume the participants agree to settle, or at least table, the philosophical debate and concentrate instead on a test of how to improve your life. Both the religious and secular sides propose certain practices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis: Does God exist?&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Yes, because I can speak to Him and \n He responds by doing good things for me.&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, No, because there is no objective \n evidence that there is anyone listening&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3 ...more debate...&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4 ...more debate...&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, Modified Thesis: Is your life best improved by religious or secular practices?&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, Religious&amp;quot;]&lt;br /&gt;
    7 [label=&amp;quot;7, Secular&amp;quot;] &lt;br /&gt;
    8 [label=&amp;quot;8, Pray for 20 minutes \n every night and ask \n for what you need.&amp;quot;]&lt;br /&gt;
    9 [label=&amp;quot;9, Go to your religious \n service every week and \n perform the rituals.&amp;quot;]&lt;br /&gt;
    10 [label=&amp;quot;10, Study for 20 minutes \n every night in an area \n where your problems are.&amp;quot;]&lt;br /&gt;
    11 [label=&amp;quot;11, Find a support group \n and meet with them \n every week.&amp;quot;]   &lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 3 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 4 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    3 -&amp;gt; 5 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    4 -&amp;gt; 5 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    5 -&amp;gt; 6 [dir=&amp;quot;forward&amp;quot;]; &lt;br /&gt;
    5 -&amp;gt; 7 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    6 -&amp;gt; 8 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    6 -&amp;gt; 9 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    7 -&amp;gt; 10 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    7 -&amp;gt; 11 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, the reader is being invited to try the techniques on offer and an evaluation of Impact might thus be made. Note that in this case we have suspended our evaluation of the rhetorical qualities of each argument and begun a new one which seeks a utilitarian appraisal of which side might offer a better personal outcome. This is in keeping with the notion of Impact as separate from the rhetorical qualities of the argument.&lt;br /&gt;
&lt;br /&gt;
We could, in theory, envision someone who is not religious adopting religious practices because he concludes that it does him good. Perhaps he tried both sides and decided the religious side was the one yielding the greatest benefit. This possibility is no doubt one motivation for religious debaters to drop their metaphysical convictions and adopt a practical way to approach their disagreement with the secular side. In any event, this progression seems like a healthy outcome since metaphysical debates usually have little hope of resolution.&lt;br /&gt;
&lt;br /&gt;
Impact is necessarily focused on some particular goal. If the goal is material benefit and you pray and get rich you might think prayer works. But if the goal is broader than that, you might be disturbed by the fact that you are doing something you don&#039;t really believe is true. Your goal might be to get rich without compromising philosophical integrity. In this case the object of our Impact changes to become more than simply material wealth. A clear statement of the argument thesis in terms of goals is obviously important here.&lt;br /&gt;
&lt;br /&gt;
In spite of our attempts at separating impact from rhetoric we will often find ourselves with a mix of the two:&lt;br /&gt;
[[File:Procon.png|center|frame]]&lt;br /&gt;
Here we&#039;ve scored the Pro argument weaker than the Con argument using our standard rhetorical measures (VRFC). Arguably the Pro argument speculates more (a type of fallacy) about what would happen if we stopped supporting Ukraine. The Con argument isn&#039;t perfect either since it seems to assume that the money saved would actually be used in some constructive way. Still, money not spent is certainly money saved so we&#039;ll mark it down only slightly. That said, it would be ridiculous to stop the argument after concluding the Con side &amp;quot;won&amp;quot;. The argument is not really a rhetorical argument at all but rather a statement of Impact. One side argues for the impact of saving the money. The other argues for the impact of failing to spend the money. We may not know how events would play out in this situation but we acknowledge the risk of catastrophic consequences for failure to act. The impact score for the Pro side is thus much higher. &lt;br /&gt;
&lt;br /&gt;
This is a particular case where the argument should be separated out into one that is explicitly about impact, but it is not clear how best to achieve that. One way would be to allow participants to intervene by asking questions or proposing to move the debate in a more fruitful direction, perhaps by suggesting a new main contention (ie thesis). &lt;br /&gt;
[[File:Procon2.png|center|frame]]&lt;br /&gt;
A basically new debate ensues. Incentives to move the debate might include reputational points for agreeing on a more productive direction.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s look at an example of what this &amp;quot;productive&amp;quot; argument might look like in terms of cost-benefit-risk analysis:&lt;br /&gt;
&lt;br /&gt;
[[File:Productiveargument.png|center|frame]]&lt;br /&gt;
The red boxes are Con arguments and the green box is the Pro argument. We can see right away why the Pro argument might have stiff opposition since it leads to an infinite cost-benefit ratio and is virtually certain to occur (it is our current policy). It is possible that other scenarios could play out within the context of supporting Ukraine but let&#039;s leave these aside for the moment.&lt;br /&gt;
&lt;br /&gt;
So the Pro side looks bad until we start looking at the Con scenarios. In Scenario 1 we envision taking the $75 billion spent on Ukraine aid and providing free [[community]] college instead. Doing so provides an economic benefit in the long run so we provide an estimate for that. However, military and policy experts have said that ignoring Ukraine would result in having to contain a newly resurgent Russia and this could double our defense costs in the near term ($800 billion). The resulting cost-benefit ratio is 2.9; with costs exceeding benefits, this is undesirable. It is also the highest-probability scenario, at 70%. Other scenarios involve some type of war with Russia and would involve an even greater outlay of funds, not to mention the sheer human toll of war. Only Scenario 4 envisions a minor outlay to contain a victorious Russia which would be offset by the benefit of free community college. This scenario is desirable but unlikely. &lt;br /&gt;
&lt;br /&gt;
These scenarios are much like sub-arguments but stripped of any need to assess their rhetorical quality. By looking at CB ratios and probabilities we can determine which policy direction to take.   &lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Interaction effects between arguments and sub-arguments&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Although arguments have been presented as standalone entities, users may often score them after reading, and accounting for, their sub-arguments. In that case, the sub-argument&#039;s influence on the parent argument would be counted twice -- once due to the mathematical effect discussed [[#Scoring of individual arguments|above]] and again due to the influence the sub-argument has on the user&#039;s scoring of the parent argument. &lt;br /&gt;
&lt;br /&gt;
This effect is clearly undesirable and efforts should be made to control it. The software could, for instance, be equipped with the following checks:&lt;br /&gt;
&lt;br /&gt;
* If it detects that a user voted for a sub-argument and subsequently voted for an argument, it can flag the sub-argument score so it does not participate in the mathematical effect it would otherwise have on the argument. We are assuming here, of course, that a user who has voted for a sub-argument will be unable to avoid having it influence his vote for the parent argument.&lt;br /&gt;
* If it detects a vote for an argument but not a vote for its sub-argument, it doesn&#039;t know if the user has read the sub-argument in a way that would influence their vote for the parent argument. In such a case, the user can simply be asked whether the sub-argument was read and, if so, any subsequent vote by the user for the sub-argument can be flagged as a non-participant in its mathematical influence on the parent argument.&lt;br /&gt;
* If the sub-argument does not yet exist when the vote for the parent argument is cast, the software will flag a subsequent vote for any newly developed sub-argument as a legitimate participant in the mathematical influence it has on the parent argument.&lt;br /&gt;
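These checks could be sketched as follows. This is a minimal illustration only; the data model (per-user vote timestamps, a set of "reported read" sub-arguments, and a creation time for each sub-argument) is an assumption for the sketch, not part of the proposal.

```python
# Hypothetical sketch of the three checks above, for a single user:
#   user_votes maps argument id to the time the user voted on it,
#   reported_read is the set of sub-argument ids the user says they read,
#   sub_created is the creation time of the sub-argument.
def sub_vote_participates(user_votes, reported_read, parent_id, sub_id, sub_created):
    """True if the user's sub-argument vote should participate in the
    mathematical roll-up onto the parent argument."""
    parent_t = user_votes.get(parent_id)
    sub_t = user_votes.get(sub_id)
    if parent_t is None or sub_t is None:
        return False  # nothing to roll up yet
    if sub_created > parent_t:
        return True   # check 3: sub-argument appeared after the parent vote
    if parent_t > sub_t:
        return False  # check 1: the sub-vote preceded, and so influenced, the parent vote
    if sub_id in reported_read:
        return False  # check 2: user reports reading it before voting on the parent
    return True

# The user voted on sub-argument B before parent A, so B's vote is excluded:
print(sub_vote_participates({"A": 10, "B": 5}, set(), "A", "B", sub_created=1))  # False
```

A production version would also need the anti-gaming measures discussed below, since the "reported read" signal is self-declared.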
&lt;br /&gt;
It is probably difficult to make a system like this foolproof. A user might report not having read a sub-argument that they have, in fact, read. Tracking features could, in theory, be developed to check whether this is the case and react accordingly. However, it would still be difficult to know for sure how deeply the user understands the sub-argument just based on a record that they clicked on it or had it &amp;quot;open&amp;quot;. It also seems like it would be easy to overdo tracking of this kind to the point where it simply turns off an otherwise enthusiastic user. Another interesting idea is the use of word descriptors to match scores. In our system descriptors such as &amp;quot;Completely irrelevant&amp;quot;, &amp;quot;somewhat irrelevant&amp;quot;, etc. could be a useful way to break up corresponding numerical ranges in our 0-1 scoring system.&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Argument_evaluation_and_scoring&amp;diff=2173</id>
		<title>Argument evaluation and scoring</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Argument_evaluation_and_scoring&amp;diff=2173"/>
		<updated>2024-09-25T13:49:16Z</updated>

		<summary type="html">&lt;p&gt;Lem: Undo revision 2137 by Pete (talk)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h2&amp;gt;Some thoughts on evaluating [[Argument|argument]]s&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In our last meeting Dan asked how we might evaluate an argument made by a respondent instead of simply relying on the given [[Probability|probability]] (as we&#039;ve been doing so far). An argument, assuming it is made public, could then be evaluated by the questioner and others independently to find a more accurate probability. This opens up a new idea in our work, that of assessing the truth by evaluating the reasoning put forth in an [[Opinion|opinion]].&lt;br /&gt;
&lt;br /&gt;
One idea for doing this starts with a simple model for argument construction. The argument consists of [[Supporting statement|supporting statement]]s which are tied together with [[Logic|logic]] to form a conclusion. The conclusion is the answer to the overall question being asked of the network. Each supporting statement and the [[logic]] can be evaluated independently to determine the extent to which the conclusion is true. The following diagram illustrates this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;Answer/Conclusion, Pc&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;Logic, Pl&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;Support. Stmt. 1, Ps1&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;Support. Stmt. 2, Ps2&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;Support. Stmt. 3, Ps3&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;back&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 2 [dir=&amp;quot;back&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 3 [dir=&amp;quot;back&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 4 [dir=&amp;quot;back&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The probability of the supporting statements can be combined in a [[Bayes&#039; theorem|Bayes]]ian manner. This is in keeping with the Bayesian idea of modifying prior probabilities given new evidence (ie supporting statements). These probabilities can be [[Trust|trust]]-modified as Sapienza proposed (https://ceur-ws.org/Vol-1664/w9.pdf) but since they are likely being assigned by the questioner, we will assume that [[trust]] is already built into them. Of more importance is the [[Relevance|relevance]] of the supporting statements. They can range from completely irrelevant to completely relevant. A completely relevant statement will take the full value of the probability it was originally assigned. A completely irrelevant statement would reduce the probability to 50%, where it will have no influence on the outcome. In that sense relevance functions in the same way trust does to modify the probability:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
P_{mod} = P_{nom} + R(P - P_{nom})&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;R&amp;lt;/math&amp;gt; is relevance (0.0 - 1.0) &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{mod}&amp;lt;/math&amp;gt; is relevance-modified probability&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{nom}&amp;lt;/math&amp;gt; is the nominal probability (=0.5 for a [[Predicate|predicate]] question)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; is the unmodified probability&lt;br /&gt;
&lt;br /&gt;
After the relevance-modification, each supporting statement is combined in the usual manner via Bayes. For the first two statements,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
P_{comb1,2} = {P_{s1}P_{s2}\over {P_{s1}P_{s2} + (1-P_{s1})(1-P_{s2})}}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and so on for each additional statement. Here it is to be understood that &amp;lt;math&amp;gt;P_{s1}&amp;lt;/math&amp;gt;, etc. is the value &amp;lt;i&amp;gt;after&amp;lt;/i&amp;gt; the relevance modification.&lt;br /&gt;
&lt;br /&gt;
Logic will also have a probability assigned to it to represent its quality. A fully illogical argument would receive a 0, which, when combined via Bayes with the supporting statements, would render the probability of the entire argument 0. This makes sense because a completely illogical argument, regardless of the strength of its supporting statements, destroys itself. A fully logical argument, however, will not receive a 1 but rather a 0.5, since combining a 1 with the supporting statements would render the final probability a 1, which is not reasonable. A 0.5, however, does nothing, so the final probability would be the combined probability of the supporting statements. Thus we assume that perfect logic is neutral and less than perfect logic reduces the combined probability of the statements. Again, this seems reasonable. We expect, by default, logical arguments which then rest on the strength of their supporting statements. If we notice flaws in the logic we discount the strength of the argument accordingly.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s try an example with these ideas:&lt;br /&gt;
&lt;br /&gt;
Question: Are humans causing frog populations to decline?&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Answer / Conclusion: Yes, mankind is causing a fall in frog populations.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Logic: Mankind is causing the fall of frog populations if we can show that frog populations are decreasing over time and can show a human behavior that causes the decline.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Supporting Statements:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# [https://en.wikipedia.org/wiki/Frog#:~:text=Frog%20populations%20have%20declined%20significantly%20since%20the%201950s. Frog populations have declined since the 1950s.]&lt;br /&gt;
# My wife complained that she doesn&#039;t see frogs anymore.&lt;br /&gt;
# [https://wwf.panda.org/discover/our_focus/freshwater_practice/freshwater_biodiversity_222/ Scientists say] that the loss of freshwater habitats has affected frog populations.&lt;br /&gt;
&lt;br /&gt;
We start by judging the quality of the supporting statements. 1 seems like a well substantiated statement (a high P) but is not completely relevant because it only hints at human involvement. 2 is completely true but mostly irrelevant. 3 is a contributor but seems less substantiated than 1 and contains no human cause. We proceed by assigning probability and relevance values:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s1}=0.9&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;R_{s1}=0.7&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s1mod} = 0.5 + 0.7(0.9 - 0.5) = 0.78&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s2}=1.0&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;R_{s2}=0.0&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s2mod} = 0.5 + 0.0(1.0 - 0.5) = 0.5&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s3}=0.75&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;R_{s3}=0.5&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;P_{s3mod} = 0.5 + 0.5(0.75 - 0.5) = 0.625&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since s2 won&#039;t count in the Bayesian calculation we can ignore it and:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_{comb,s} = {(0.78)(0.625) \over {0.78(0.625)+0.22(0.375)}} = 0.855&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The logic/conclusion in this case is reasonably strong so we will assign it a high value, say &amp;lt;math&amp;gt;P_l = 0.45&amp;lt;/math&amp;gt; (remember, out of 0.5). It could be improved by observing that the word &amp;quot;behavior&amp;quot; is too general and should be replaced by, say, &amp;quot;policy choice&amp;quot; (ie urban growth into ecologically important wetlands). We note here that logic is more than just the mathematical construction of an argument. Since we are speaking a human language, logic might also be flawed because it uses imprecise wording.&lt;br /&gt;
&lt;br /&gt;
Putting &amp;lt;math&amp;gt;P_{comb,s}&amp;lt;/math&amp;gt; together with &amp;lt;math&amp;gt;P_l&amp;lt;/math&amp;gt; using Bayes we obtain a concluding probability:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P_c = {0.855(0.45) \over {0.855(0.45) + 0.145(0.55)}} = 0.83&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One potential pitfall of this model is that repetitive supporting statements of high probability will quickly render a combined probability near 1.0. As we&#039;ve seen in the past, this is simply the result of the Bayes equation. The user would need to watch for attempts like these to distort the answer by removing repetitive statements or making them irrelevant. &lt;br /&gt;
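The worked example above can be reproduced numerically. Below is a minimal Python sketch (an illustration, not part of the proposal) of the relevance modification and the pairwise Bayes combination, using the probabilities assumed in the frog example:

```python
# Relevance-modified Bayesian combination, following the frog example.
P_NOM = 0.5  # nominal probability for a predicate question

def relevance_modify(p, r):
    # P_mod = P_nom + R(P - P_nom)
    return P_NOM + r * (p - P_NOM)

def bayes_combine(p1, p2):
    # P_comb = p1 p2 / (p1 p2 + (1 - p1)(1 - p2))
    return p1 * p2 / (p1 * p2 + (1 - p1) * (1 - p2))

# (probability, relevance) for the three supporting statements:
statements = [(0.9, 0.7), (1.0, 0.0), (0.75, 0.5)]
mods = [relevance_modify(p, r) for p, r in statements]  # 0.78, 0.5, 0.625

# Folding in a statement at exactly 0.5 (like s2) changes nothing, so we
# can combine all three without special-casing the irrelevant one.
p_comb = 0.5
for m in mods:
    p_comb = bayes_combine(p_comb, m)

p_logic = 0.45  # logic quality, out of a neutral maximum of 0.5
p_conclusion = bayes_combine(p_comb, p_logic)
print(round(p_comb, 3), round(p_conclusion, 3))  # 0.855 0.829
```

Note that statement 2, being completely irrelevant, enters the combination at exactly 0.5 and therefore has no effect, as intended.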
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Scoring of individual arguments&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Arguments can be scored on [[Veracity|veracity]], impact, relevance, clarity, and informal quality (lack of fallacies):&lt;br /&gt;
&lt;br /&gt;
- Veracity is how true the argument is based on source information. Source information itself will be scored: &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- Impact &amp;amp; Relevance is how deeply the argument affects the main contention of the [[debate]] (or the argument immediately above): &amp;lt;math&amp;gt;R&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- Clarity is how understandable the argument is: &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- Informal quality (lack of fallacies) is whether the argument commits any logical fallacies of its own. A list of informal fallacies (and formal ones) will be provided to help users select appropriately: &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since Impact and Relevance are closely related concepts we will merge these into one, Relevance. The simplest method for combining the four categories is a weighted average:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S = w_vV + w_rR + w_cC + w_fF&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_x&amp;lt;/math&amp;gt; is a weighting for category X (eg Veracity, Relevance, etc)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_v + w_r + w_c + w_f = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This seems reasonable and if we believe that certain criteria should weigh more (such as Veracity) we can easily make the weighting factors reflect this. However, intuitively it seems that a category such as Veracity should not only weigh more but have the power to take down the whole argument. After all, if the argument is a straightforward lie, it should receive a score of zero, regardless of its other attributes (such as relevance, clarity, etc):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;Who is the best choice for President? X is the best choice because he will land a person on Mars in his first year.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This argument is a lie and although it is clear, has no evident fallacies, and is relevant to the question at hand, it should be thrown out.&lt;br /&gt;
&lt;br /&gt;
The same can be said of Relevance. A completely irrelevant argument should also have the power to render the whole argument moot:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;Who is the best choice for President? X is the best choice because he likes pizza and so do I.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this in mind, we can propose the following equation, which we will dub the &amp;quot;VRFC equation&amp;quot;: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S = VR(w_fF + w_cC)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Where &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; = Score for the argument which varies from 0-1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f&amp;lt;/math&amp;gt; = weighting factor for Fallacies.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_c&amp;lt;/math&amp;gt; = weighting factor for Clarity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f + w_c = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and each of the constituent variables (&amp;lt;math&amp;gt;V, R, F, C&amp;lt;/math&amp;gt;) has a range 0-1.&lt;br /&gt;
&lt;br /&gt;
In this equation either Veracity or Relevance has the power to nullify the entire argument. Similarly a combination of Fallacies and lack of Clarity can do the same. However, a fallacious argument alone seems like it could still have merit, as would an argument whose only flaw was lack of clarity:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;We should support Czechoslovakia because if the Nazis prevail they will conquer the world.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This argument commits the slippery slope [[Fallacy|fallacy]] but is not entirely invalid. Similarly an unclear argument can still manage to make a point:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;We should support Europe because first the Sudetenland, then the Czechs, and soon enough it&#039;s over when all the Brits had to do was get rid of that weakling sooner.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It would seem that a fallacious argument should weigh more than an unclear one. Proposed weights might be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f = 0.7, w_c = 0.3&amp;lt;/math&amp;gt;&lt;br /&gt;
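To make the behavior of the VRFC equation concrete, here is a small Python sketch using the proposed weights. The example scores are assumptions chosen only for illustration:

```python
# VRFC equation: S = V R (w_f F + w_c C), each variable in the range 0-1.
def vrfc_score(v, r, f, c, w_f=0.7, w_c=0.3):
    return v * r * (w_f * f + w_c * c)

# An outright lie (V = 0) nullifies the argument regardless of the rest:
print(vrfc_score(0.0, 1.0, 1.0, 1.0))  # 0.0
# So does complete irrelevance (R = 0):
print(vrfc_score(1.0, 0.0, 1.0, 1.0))  # 0.0
# A true, relevant, clear argument with a fallacy (low F) keeps some merit:
print(vrfc_score(0.9, 0.8, 0.3, 0.9))  # about 0.35
```

This exhibits the multiplicative behavior argued for above: Veracity and Relevance act as gates, while Fallacies and Clarity trade off through the weights.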
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Rolling up the score of argument trees&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The equation above applies to a single argument but, as we&#039;ve seen, most arguments have sub-arguments below, sub-sub-arguments, and so forth. They are really trees in which each individual argument can be scored separately. &lt;br /&gt;
&lt;br /&gt;
Here we develop a proposed equation for rolling up the score for an argument based on its own score and that of its sub-arguments. In doing so we emphasize that any argument can stand on its own and be scored in the absence of sub-arguments. This creates an interesting dynamic. The sub-argument may bolster or detract from the parent argument but the extent to which it does should be limited.&lt;br /&gt;
&lt;br /&gt;
Furthermore, once the sub-argument becomes weaker than a certain threshold, it should stop influencing the parent argument altogether. Here, we will set this threshold at 0.5. Thus only Pro sub-arguments that score above 0.5 will have any influence on the parent argument. For Con sub-arguments we will apply the same threshold to the sub-argument&#039;s own score, but then transform that score to &amp;lt;math&amp;gt;1-S&amp;lt;/math&amp;gt; before it enters the calculation. Thus a strong Con sub-argument, scoring say 0.9, would enter the calculation with a score of 0.1. The result is a range of scores 0-1 of which 0-0.5 is Con and 0.5-1 is Pro. Scores of exactly 0.5 are neutral.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s consider the case with one argument and one pro sub-argument and one Con sub-argument.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;Argument, s = 0.9&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;Pro sub-argument, xp = 0.7&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;Con sub-argument, xc = 0.7&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case the argument&#039;s score is 0.9, and both the Pro and Con sub-arguments score 0.7. These numbers would normally be arrived at by using the VRFC eqn above, but we will just assume them for now. The first sub-argument, in this case, bolsters the argument because it is a Pro argument and has a score (0.7) greater than 0.5. The second sub-argument, with the same score, detracts from the argument because it is on the Con side. We emphasize that if these scores were at or below 0.5 they would have no effect on the argument.&lt;br /&gt;
&lt;br /&gt;
The general equation governing this situation is as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;s &amp;gt; 0.5&amp;lt;/math&amp;gt; and a sub-argument whose own score (&amp;lt;math&amp;gt;x_p&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;x_c&amp;lt;/math&amp;gt;) is above 0.5,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
s_{mod} = 2(1-s)fx + s - (1-s)f&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;s &amp;gt; 0.5&amp;lt;/math&amp;gt; and a sub-argument whose own score is at or below 0.5,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
s_{mod} = s&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;s &amp;lt; 0.5&amp;lt;/math&amp;gt; and a sub-argument whose own score is above 0.5,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
s_{mod} = 2sfx + s - sf&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;s &amp;lt; 0.5&amp;lt;/math&amp;gt; and a sub-argument whose own score is at or below 0.5,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
s_{mod} = s&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; = score for parent argument&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = x_p&amp;lt;/math&amp;gt; = score for Pro arguments&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = 1-x_c&amp;lt;/math&amp;gt; = score for Con arguments&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; = maximum fraction of the possible movement, 0-1&lt;br /&gt;
&lt;br /&gt;
The variable &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is a user selected number between 0-1 and represents the extent to which the parent argument&#039;s score can be affected by a sub-argument. For example, an argument with s = 0.9, as in the diagram above, can be improved by at most 0.1 to a maximum of 1. Then &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; represents the fraction of that 0.1 that we will allow for the improvement. If &amp;lt;math&amp;gt;f = 0.25&amp;lt;/math&amp;gt;, for instance, then the maximum range around 0.9 that a sub-argument can affect is &amp;lt;math&amp;gt;(0.25)(0.1) = 0.025&amp;lt;/math&amp;gt;. Thus the maximum score the argument can have is 0.925 and the minimum is 0.875.&lt;br /&gt;
&lt;br /&gt;
For the argument above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s = 0.9&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;f = 0.25&amp;lt;/math&amp;gt; User input&lt;br /&gt;
&lt;br /&gt;
For the Pro sub-argument:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = x_p = 0.7&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.9)(0.25)(0.7) + 0.9 - (1-0.9)(0.25) = 0.91&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Con sub-argument:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = (1-x_c) = (1-0.7) = 0.3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.9)(0.25)(0.3) + 0.9 - (1-0.9)(0.25) = 0.89&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We can see here that the Pro and Con sub-arguments exactly balance each other since they both have the same score. &lt;br /&gt;
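This roll-up can be sketched in a few lines of Python, reproducing the two hand calculations above. Note that the 0.5 threshold is applied to the sub-argument's own score before the Con transformation:

```python
# Roll-up of one sub-argument onto its parent argument's score.
#   s: parent score, sub_score: the sub-argument's own score (0-1),
#   is_con: True for a Con sub-argument, f: user-selected fraction (0-1).
def roll_up(s, sub_score, is_con, f):
    if sub_score > 0.5:
        x = (1 - sub_score) if is_con else sub_score  # Con enters as 1 - x_c
        reach = (1 - s) if s > 0.5 else s             # maximum allowed movement
        return 2 * reach * f * x + s - reach * f
    return s  # sub-arguments at or below the 0.5 threshold have no effect

s, f = 0.9, 0.25
print(round(roll_up(s, 0.7, False, f), 2))  # 0.91 (Pro bolsters)
print(round(roll_up(s, 0.7, True, f), 2))   # 0.89 (Con detracts)
```

A Pro and a Con sub-argument of equal strength move the parent score by equal and opposite amounts, as in the example.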
&lt;br /&gt;
The equation above is piecewise linear and can be visualized as follows:&lt;br /&gt;
&lt;br /&gt;
![image](uploads/7f62b22b9eb19771503d654db29cab92/image.png)&lt;br /&gt;
&lt;br /&gt;
One important property of this equation is that the stronger (or weaker) an argument becomes, the harder it is for a sub-argument to change it. This is because the maximum allowed movement is &amp;lt;math&amp;gt;1-s&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;s &amp;gt; 0.5&amp;lt;/math&amp;gt; or simply &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;s \le 0.5&amp;lt;/math&amp;gt;. The idea behind this property is that very strong arguments should be harder to dislodge precisely because they have covered themselves well. A weaker argument, for instance one that fails to mention an obvious supporting fact, is in a position to be bolstered more by a sub-argument which mentions the fact. Similarly a very weak argument should be difficult to bolster. If the argument is a lie or irrelevant, for instance, there isn&#039;t much that can be done to rescue it.&lt;br /&gt;
&lt;br /&gt;
This property has the further consequence that &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; cannot be changed by the sub-arguments if it is 1 or 0. A truly perfect argument, &amp;lt;math&amp;gt;s = 1&amp;lt;/math&amp;gt;, cannot be weakened no matter how strong its Con sub-argument. Similarly, a perfectly flawed argument, &amp;lt;math&amp;gt;s = 0&amp;lt;/math&amp;gt;, cannot be bolstered by any Pro sub-argument. We will discuss below a method to deal with the fact that, regardless of the quality of the argument, users may still vote to score arguments 1 or 0.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h4&amp;gt;Population adjustments&amp;lt;/h4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm described above assumes a single vote for the argument and its sub-arguments. In practice this will rarely be the case, because multiple users will be voting on each. The effect of a sub-argument on its parent should therefore be weighted by the populations of users who voted on the sub-argument and the parent argument.&lt;br /&gt;
&lt;br /&gt;
Here we propose a simple modification factor, based on the ratio of users voting for each argument/sub-argument:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop} = (s_{mod} - s){p_s\over p} + s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop}&amp;lt;/math&amp;gt; is the population modified score&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod}&amp;lt;/math&amp;gt; is the modified score without population modifications (see above)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p_s&amp;lt;/math&amp;gt; is the population voting for the sub-argument&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is the population voting for the parent argument&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; is the original score of the parent argument&lt;br /&gt;
&lt;br /&gt;
Usually we expect that sub-arguments will receive fewer votes than parent arguments, so &amp;lt;math&amp;gt;{{p_s\over p} &amp;lt;= 1}&amp;lt;/math&amp;gt; in general. For the case when &amp;lt;math&amp;gt;p_s &amp;gt; p&amp;lt;/math&amp;gt; we will force &amp;lt;math&amp;gt;{p_s\over p} = 1&amp;lt;/math&amp;gt;. Therefore there is no danger that a sub-argument can overwhelm a parent argument by voting power alone. This is in keeping with our philosophy that sub-arguments can have at best a limited effect on parent arguments.&lt;br /&gt;
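As a sketch, the population adjustment (with the voter ratio capped at 1) might look like the following; the function name is ours:

```python
def s_mod_pop(s_mod_val, s, p_s, p):
    """Population-adjusted score.

    Scales the sub-argument's effect (s_mod_val - s) by the voter
    ratio p_s / p, capped at 1 so a sub-argument cannot overwhelm
    its parent by voting power alone.
    """
    ratio = min(p_s / p, 1.0)
    return (s_mod_val - s) * ratio + s

# Hypothetical numbers: a sub-argument moved the score 0.7 -> 0.745,
# with 26 of the parent's 55 voters voting on the sub-argument.
print(round(s_mod_pop(0.745, 0.7, 26, 55), 3))  # 0.721
```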
&lt;br /&gt;
&amp;lt;h4&amp;gt;Example calculation&amp;lt;/h4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s do a problem with the following argument tree and &amp;lt;math&amp;gt;f=0.25&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis Statement&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Pro argument, s = 0.9, p = 96&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, Pro sub-argument, xp = 0.7, p = 55&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, Pro sub-argument, xp = 0.8, p = 26&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4, Con sub-argument, xc = 0.6, p = 30&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, Con sub-argument, xc = 0.7, p = 43&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, Pro sub-argument, xp = 0.85, p = 19&amp;quot;]&lt;br /&gt;
    7 [label=&amp;quot;7, Con sub-argument, xc = 0.95, p = 28&amp;quot;] &lt;br /&gt;
    8 [label=&amp;quot;8, Con argument, ....&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 2 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 5 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 3 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 4 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    5 -&amp;gt; 6 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    5 -&amp;gt; 7 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 8 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Our objective here is to roll up the score for the Pro side of this tree. The Con side would be calculated similarly and we will skip this for the sake of brevity. Note that &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; stands for the score and &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is the population voting to produce that score. We start at the bottom, with the 2-3-4 portion of the tree, and for the sake of consistency with the above calculation we will recast &amp;lt;math&amp;gt;x_p&amp;lt;/math&amp;gt; for 2 as &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; and label the population of the sub-arguments as &amp;lt;math&amp;gt;p_s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    2 [label=&amp;quot;2, Pro argument, s = 0.7, p = 55&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, Pro sub-argument, xp = 0.8, ps = 26&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4, Con sub-argument, xc = 0.6, ps = 30&amp;quot;]&lt;br /&gt;
    2 -&amp;gt; 3 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 4 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Pro sub-argument, we write:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.7)(0.25)(0.8) + 0.7 - (1-0.7)(0.25) = 0.745&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We modify this by the respective populations:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop23} = (s_{mod} - s){p_s\over p} + s = (0.745 - 0.7){26\over 55} + 0.7 = 0.721&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Con sub-argument we first modify its score,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = 1 - x_c = 0.4&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and write&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.7)(0.25)(0.4) + 0.7 - (1-0.7)(0.25) = 0.685&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and modify by the respective population,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop24} = (s_{mod} - s){p_s\over p} + s = (0.685 - 0.7){30\over 55} + 0.7 = 0.692&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These two values of &amp;lt;math&amp;gt;s_{mod,pop}&amp;lt;/math&amp;gt; can now be combined to create a new &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; for the Pro argument:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,tot} = (s_{mod,pop23} - s) + (s_{mod,pop24} - s) + s = (0.721 - 0.7) + (0.692 - 0.7) + 0.7 = 0.713&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We note here that the Pro argument got a little stronger as a result of its sub-arguments. The Pro sub-argument was substantially stronger than the Con sub-argument and, although fewer people voted for it, the population difference was not large.&lt;br /&gt;
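The 2-3-4 roll-up above can be checked with a short script (a sketch; the helper names are ours, not the project snippet's):

```python
def s_mod(s, f, x):
    # One sub-argument's modification of the parent score.
    return 2 * (1 - s) * f * x + s - (1 - s) * f

def s_mod_pop(sm, s, p_s, p):
    # Population adjustment, with the voter ratio capped at 1.
    return (sm - s) * min(p_s / p, 1.0) + s

s, f, p = 0.7, 0.25, 55.0
pop23 = s_mod_pop(s_mod(s, f, 0.8), s, 26.0, p)      # Pro child, x = x_p
pop24 = s_mod_pop(s_mod(s, f, 1 - 0.6), s, 30.0, p)  # Con child, x = 1 - x_c
s_tot = (pop23 - s) + (pop24 - s) + s
print(round(pop23, 3), round(pop24, 3), round(s_tot, 3))  # 0.721 0.692 0.713
```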
&lt;br /&gt;
For the Con sub-argument 5-6-7 we have the following situation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    5 [label=&amp;quot;5, Con argument, s = 0.7, p = 43&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, Pro sub-argument, xp = 0.85, ps = 19&amp;quot;]&lt;br /&gt;
    7 [label=&amp;quot;7, Con sub-argument, xc = 0.95, ps = 28&amp;quot;] &lt;br /&gt;
    5 -&amp;gt; 6 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    5 -&amp;gt; 7 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here, for the Pro sub-argument, we first modify its score since it is the opposite of its parent. It is as if the parent were a Pro argument and the child were a Con argument.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;x = 1 - x_p = 1 - 0.85 = 0.15&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then proceed as usual with the calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.7)(0.25)(0.15) + 0.7 - (1-0.7)(0.25) = 0.6475&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop56} = (s_{mod} - s){p_s\over p} + s = (0.6475 - 0.7){19\over 43} + 0.7 = 0.677&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the Con sub-argument &amp;lt;math&amp;gt;x = x_c = 0.95&amp;lt;/math&amp;gt; since the parent argument is also Con:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod} = 2(1-s)fx + s - (1-s)f = 2(1-0.7)(0.25)(0.95) + 0.7 - (1-0.7)(0.25) = 0.7675&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop57} = (s_{mod} - s){p_s\over p} + s = (0.7675 - 0.7){28\over 43} + 0.7 = 0.744&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We combine these two results in the same manner as above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,tot} = (s_{mod,pop56} - s) + (s_{mod,pop57} - s) + s = (0.677 - 0.7) + (0.744 - 0.7) + 0.7 = 0.721&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the bottom layer of the tree calculated, we have the following situation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis Statement&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Pro argument, s = 0.9, p = 96&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, Pro sub-argument, xp = 0.713, ps = 55&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, Con sub-argument, xc = 0.721, ps = 43&amp;quot;]&lt;br /&gt;
    8 [label=&amp;quot;8, Con argument, ....&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 2 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 5 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 8 [dir=&amp;quot;both&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All that remains is to calculate the 1-2-5 portion, which is very similar to the 2-3-4 calculation performed above. We will therefore skip the details and simply report the results:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop12} = 0.906&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod,pop15} = 0.895&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;s_{mod, tot} = 0.901&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We see here that the final result is not much different from the original &amp;lt;math&amp;gt;s = 0.9&amp;lt;/math&amp;gt;. This is because the Pro sub-arguments were essentially cancelled by the Con sub-arguments, a result to be expected in many cases.&lt;br /&gt;
&lt;br /&gt;
In this example, we are skipping the Con side of the overall argument (node 8 in the tree above) because the calculation would proceed in exactly the same way as what we have shown. If it were calculated, we would then combine the results for 1 and 8 to produce an overall score for the argument.&lt;br /&gt;
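The reported 1-2-5 results can be verified the same way (again a sketch in our own notation, not the project snippet):

```python
def s_mod(s, f, x):
    # One sub-argument's modification of the parent score.
    return 2 * (1 - s) * f * x + s - (1 - s) * f

def s_mod_pop(sm, s, p_s, p):
    # Population adjustment, with the voter ratio capped at 1.
    return (sm - s) * min(p_s / p, 1.0) + s

s, f, p = 0.9, 0.25, 96.0
pop12 = s_mod_pop(s_mod(s, f, 0.713), s, 55.0, p)      # Pro child 2
pop15 = s_mod_pop(s_mod(s, f, 1 - 0.721), s, 43.0, p)  # Con child 5
s_tot = (pop12 - s) + (pop15 - s) + s
print(round(pop12, 3), round(pop15, 3), round(s_tot, 3))  # 0.906 0.895 0.901
```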
&lt;br /&gt;
The calculations above can be performed with the [attached snippet](https://gitlab.syncad.com/peerverity/trust-model-playground/-/snippets/164). The user input portion of the snippet is set up for the calculation we did immediately above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
#User input&lt;br /&gt;
side_parent = &#039;pro&#039; #side, pro or con, that the parent argument is on    &lt;br /&gt;
s = 0.9 #score for the parent argument&lt;br /&gt;
mf = 0.25 #max fraction that parent argument can be changed in terms of (1-s) or (s-0)&lt;br /&gt;
p = 96.0 #population voting for the parent argument&lt;br /&gt;
x_pro_arr = [0.713] #score for the pro children&lt;br /&gt;
x_con_arr = [0.721] #score for the con children&lt;br /&gt;
ps_pro_arr = [55.0] #population voting for each pro child sub-argument&lt;br /&gt;
ps_con_arr = [43.0] #pop voting for each con child sub-argument&lt;br /&gt;
mods_if1or0 = True #True if we want scores of 1 or 0 to be modified to near 1 or 0 (otherwise they can&#039;t be adjusted by this calculation)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the snippet contains arrays to handle any number of child arguments. These are combined in the same way we combined the single Pro and Con sub-arguments above.&lt;br /&gt;
 &lt;br /&gt;
Another variable, `mods_if1or0`, controls whether we allow &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; to be modified when it is set to 1 or 0. As discussed above, arguments where &amp;lt;math&amp;gt;s = 1&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;s = 0&amp;lt;/math&amp;gt; are perfect, or perfectly flawed, and thus cannot be changed by sub-arguments. This idea may be theoretically plausible, but it wouldn&#039;t stop users from voting 1 or 0 for arguments. In such cases the `mods_if1or0` switch, when True, changes &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; to 0.99 or 0.01 respectively. &lt;br /&gt;
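A sketch of how such a switch might be applied before the roll-up (the function name is ours):

```python
def nudge_endpoints(s, mods_if1or0=True):
    # Move scores off the hard 1/0 endpoints so that sub-arguments
    # can still adjust them; otherwise return the score unchanged.
    if mods_if1or0 and s == 1.0:
        return 0.99
    if mods_if1or0 and s == 0.0:
        return 0.01
    return s

print(nudge_endpoints(1.0), nudge_endpoints(0.0), nudge_endpoints(0.9))
# 0.99 0.01 0.9
```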
&lt;br /&gt;
As a side note, this property is similar to Bayesian probabilities of 1 or 0, which also cannot be changed. We have discussed this problem in earlier posts under the guise that probabilities of 1 or 0 don&#039;t really exist because they would require an infinite sample size. In the same way, a perfect (or perfectly flawed) argument cannot exist because it would, at some point, run into the same issues that Bayesian probabilities do. &lt;br /&gt;
&lt;br /&gt;
For example, suppose we&#039;ve invented a pill that cures cancer. It is one dose, costs 10 cents to make, has no side effects, has no environmental impact due to manufacture, and is certain to cure someone&#039;s cancer. The argument for a cancer patient taking the pill is, for all practical purposes, perfect. There is simply no plausible argument against it. We could score such an argument a 1 until we remember our probabilities. We only know the pill works and has no side effects on a limited population, say 100,000 patients. We don&#039;t know what effect it will have on the 100,001st patient. So the best we can say is that the drug is 0.99999 effective. Given that the argument is really predicated on the effectiveness of the drug we could say its score is also 0.99999.    &lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;[https://slashdot.org/ Slashdot]&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Slashdot offers a system for content moderation summarized by the following from [[wikipedia:Slashdot|Wikipedia]]:&lt;br /&gt;
&lt;br /&gt;
::&amp;lt;i&amp;gt;Slashdot&#039;s editors are primarily responsible for selecting and editing the primary stories that are posted daily by submitters. The editors provide a one-paragraph summary for each story and a link to an external website where the story originated. Each story becomes the topic for a threaded discussion among the site&#039;s users. A user-based moderation system is employed to filter out abusive or offensive comments.[63] Every comment is initially given a score of −1 to +2, with a default score of +1 for registered users, 0 for anonymous users (Anonymous Coward), +2 for users with high &amp;quot;karma&amp;quot;, or −1 for users with low &amp;quot;karma&amp;quot;. As [[Moderator|moderator]]s read comments attached to articles, they click to moderate the comment, either up (+1) or down (−1). Moderators may choose to attach a particular descriptor to the comments as well, such as &amp;quot;normal&amp;quot;, &amp;quot;offtopic&amp;quot;, &amp;quot;flamebait&amp;quot;, &amp;quot;troll&amp;quot;, &amp;quot;redundant&amp;quot;, &amp;quot;insightful&amp;quot;, &amp;quot;interesting&amp;quot;, &amp;quot;informative&amp;quot;, &amp;quot;funny&amp;quot;, &amp;quot;overrated&amp;quot;, or &amp;quot;underrated&amp;quot;, with each corresponding to a −1 or +1 rating. So a comment may be seen to have a rating of &amp;quot;+1 insightful&amp;quot; or &amp;quot;−1 troll&amp;quot;.[57] Comments are very rarely deleted, even if they contain hateful remarks.&lt;br /&gt;
&lt;br /&gt;
::Starting in August 2019 anonymous comments and postings have been disabled.&lt;br /&gt;
&lt;br /&gt;
::Moderation points add to a user&#039;s rating, which is known as &amp;quot;karma&amp;quot; on Slashdot. Users with high &amp;quot;karma&amp;quot; are eligible to become moderators themselves. The system does not promote regular users as &amp;quot;moderators&amp;quot; and instead assigns five moderation points at a time to users based on the number of comments they have entered in the system – once a user&#039;s moderation points are used up, they can no longer moderate articles (though they can be assigned more moderation points at a later date). Paid staff editors have an unlimited number of moderation points. A given comment can have any integer score from −1 to +5, and registered users of Slashdot can set a personal threshold so that no comments with a lesser score are displayed. For instance, a user reading Slashdot at level +5 will only see the highest rated comments, while a user reading at level −1 will see a more &amp;quot;unfiltered, anarchic version&amp;quot;. A meta-moderation system was implemented on September 7, 1999, to moderate the moderators and help contain abuses in the moderation system. Meta-moderators are presented with a set of moderations that they may rate as either fair or unfair. For each moderation, the meta-moderator sees the original comment and the reason assigned by the moderator (e.g. troll, funny), and the meta-moderator can click to see the context of comments surrounding the one that was moderated.&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Slashdot&#039;s purpose is to promote high quality discussion which is somewhat similar to our purpose of promoting high quality arguments. In particular, the [[Reputation|reputation]] (karma) of the moderators is an interesting concept. We could use a similar system to weight voters with a good reputation higher in their argument scoring. Another interesting idea is the use of word descriptors to match scores. In our system descriptors such as &amp;quot;Completely irrelevant&amp;quot;, &amp;quot;somewhat irrelevant&amp;quot;, etc. could be a useful way to break up corresponding numerical ranges in our 0-1 scoring system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Refining our Argument Score with Reputation/Trust&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[#Scoring of individual arguments|Above]] we discussed an equation to score arguments on the basis of Veracity, Relevance, Freedom from Fallacies, and Clarity:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S = VR(w_fF + w_cC)&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is overall score&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; is Veracity &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;R&amp;lt;/math&amp;gt; is Relevance&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; is Fallacies (ie freedom from fallacies)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is Clarity&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f&amp;lt;/math&amp;gt; is weighting for Fallacies, eg 0.7&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_c&amp;lt;/math&amp;gt; is weighting for Clarity, eg 0.3&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;w_f + w_c = 1&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Each user would vote on each category and the resulting &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; would be calculated. However, each user may have a different reputation/trust for their ability to judge these four criteria. We can take this into account by simply adding a weighting factor for Trust:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S = (T_vV)(T_rR)(w_fT_fF + w_cT_cC)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;T_x&amp;lt;/math&amp;gt; = Trust in user&#039;s ability to evaluate each category &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; (Veracity, Relevance, Fallacies, and Clarity)&lt;br /&gt;
&lt;br /&gt;
Since the user evaluates trust in multiple categories, it would be useful to generate a composite trust for all the categories:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
T_{comp} = {(T_vV)(T_rR)(w_fT_fF + w_cT_cC) \over{VR(w_fF + w_cC)}}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;T_{comp}&amp;lt;/math&amp;gt; is the composite trust for all categories.&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;math&amp;gt;T_{comp}&amp;lt;/math&amp;gt; can also be seen as an &amp;quot;average&amp;quot; trust, ie the single factor that produces the same argument score as that resulting from the multiple trust factors.&lt;br /&gt;
&lt;br /&gt;
Once we have &amp;lt;math&amp;gt;T_{comp}&amp;lt;/math&amp;gt; we can use it to generate an average &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; for all users:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
S_{ave} = {\sum S \over{\sum T_{comp}}}&lt;br /&gt;
&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
That is, instead of dividing by the number of people voting, we divide by the total of how much they &amp;quot;count&amp;quot;. This is similar to the [[A trust weighted averaging technique to supplement straight averaging and Bayes|trust-weighted average scheme]] we have proposed before. It is this average &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; that we will take to be the score for the argument. &lt;br /&gt;
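Putting the trust-weighted pieces together, a minimal sketch (the function names and all voter numbers are invented for illustration):

```python
def vrfc(V, R, F, C, wf=0.7, wc=0.3):
    # Base argument score S = V * R * (wf*F + wc*C).
    return V * R * (wf * F + wc * C)

def vrfc_trusted(V, R, F, C, Tv, Tr, Tf, Tc, wf=0.7, wc=0.3):
    # Same score with per-category trust factors applied.
    return (Tv * V) * (Tr * R) * (wf * Tf * F + wc * Tc * C)

# Two hypothetical voters: (V, R, F, C, Tv, Tr, Tf, Tc)
votes = [
    (0.9, 0.8, 0.7, 0.9, 1.0, 0.9, 0.8, 1.0),
    (0.6, 0.7, 0.8, 0.8, 0.5, 0.6, 0.7, 0.6),
]

scores = [vrfc_trusted(*v) for v in votes]
# Composite trust: the trusted score divided by the untrusted score.
t_comps = [vrfc_trusted(*v) / vrfc(*v[:4]) for v in votes]
# Trust-weighted average: divide by total trust, not by head count.
S_ave = sum(scores) / sum(t_comps)
```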
&lt;br /&gt;
&amp;lt;h3&amp;gt;Rhetorical vs. [[Practical argument|Practical Argument]]s&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So far our scoring methodology has been focused mainly on rhetorical aspects of arguments. Is the argument true, relevant to its parent contention, clear, and logical? These criteria certainly touch on the practical impact an argument may have and so far we have been merging Impact with Relevance since it is hard to distinguish the two. Consider the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Argument: Biden is a good President because he got an infrastructure \n bill passed that will do good things for the whole country.&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Sub-argument: The bill provides $2.9 million \n to Roanoke airport for improvements.&amp;quot;]&lt;br /&gt;
    2 [label = &amp;quot;2, Sub-argument: The bill provides $150 billion \n to combat climate change.&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here it would seem appropriate to roll Impact into Relevance. Presumably most voters will recognize Sub-argument 2 as being the more relevant one simply because it is, on a practical level, the more impactful one.&lt;br /&gt;
&lt;br /&gt;
However, usually Relevance is a rhetorical quality, not a practical one. In this case, as a matter of rhetoric, both supporting arguments are relevant to the topic at hand. They are both, without question, part of Biden&#039;s infrastructure bill. They are both true, clear, and free of fallacies. But even so, although it is clear what is going on, it would be better to separate the issue of whether Biden is a good president from the issue of which infrastructure allocations add the most value. &lt;br /&gt;
&lt;br /&gt;
One reason for this separation, in terms of the [[Argument scoring|math laid out previously]], is that a weak sub-argument stops having any influence once its score goes below 0.5. However, if we are scoring Impact, either implicitly or explicitly, this rule would seem inappropriate. The rule makes many small sub-arguments, like the one about Roanoke airport, stop having any value. Perhaps the Roanoke airport argument alone is negligible, but it still has a positive impact and, when added to all the other similar projects around the country, would amount to a sizeable contribution. Therefore it wouldn&#039;t be correct to nullify it altogether.&lt;br /&gt;
&lt;br /&gt;
We also wouldn&#039;t want arguments from Impact to prematurely influence necessary [[Rhetorical argument|rhetorical argument]]s. Let&#039;s take a favorite from moral philosophy: a healthy young person, John, comes in for a routine checkup at a clinic which has 5 critical patients in need of organ transplants. John has the organs they need and is a match for all of them. The doctors, using a purely utilitarian argument, decide to kill John and harvest his organs. It makes sense: 1 person dies and 5 live, so we&#039;re ahead. (Let&#039;s for the moment disregard legal and other social artifacts which might persuade the doctors otherwise.) This approach contrasts with a deontological perspective, which argues that the ends do not justify the means and that, indeed, the means in this case are all important. But we can only ferret out the deontological argument by actually having it, within the context of a rhetorical debate. We would hope such a debate would successfully preclude any utilitarian considerations whatsoever.    &lt;br /&gt;
&lt;br /&gt;
This leads us to the more general reason why separating the scoring techniques is appropriate. A score for Impact is essentially a [[Cost-benefit-risk analysis|cost-benefit-risk analysis]] for which established techniques exist and which would be confusing if scored together with the rhetorical argument. Indeed, by the time we reach an argument where impact is of interest we have usually dispensed with the rhetorical nature of the argument:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis: We have $150 billion to spend and \n should spend it on climate change&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Electric cars and charging stations.\n I = 0.5&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, Home and building insulation, solar, heat pumps, etc.\n I = 0.3&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3, Carbon capture.\n I = 0.2&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 3 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We can see that this argument is likely the outcome of previous arguments where basic points have been agreed to (or at least settled) such as whether climate change is real, Biden is a good president, etc. Here we are at the final stages of the argument, which consists of a resolution to move forward with some practical course of action. &lt;br /&gt;
&lt;br /&gt;
In this case each sub-argument is, in effect, a lobbying effort for the money. All the sub-arguments are equal in terms of their Veracity, rhetorical Relevance, freedom from Fallacies, and Clarity. The only point of dispute is whether the money is better spent on one option or another. In a situation like this, &amp;lt;math&amp;gt;\sum I = 1&amp;lt;/math&amp;gt; would be defined as the effect of all plausible infrastructural investments we could make to impact climate change. In this respect let&#039;s assume we are limited to the three options above. The voters, presumably armed with engineering studies, would then weigh in to assess the impact of each proposal.&lt;br /&gt;
&lt;br /&gt;
It is important to emphasize that the argument at this point is a technical one. It is numerical in nature and hinges on scientific rigor. Our system for assessing trust in the people who can reasonably provide input at this level will be important. One can easily imagine how interested parties (and the merely ignorant) could skew the results with their vote. At the same time we want to encourage participants to move toward this type of debate since it leads to practical benefits and, by its nature, tends to reduce partisan rancor.&lt;br /&gt;
&lt;br /&gt;
The idea outlined here stands apart, by design, from the method that scores the rhetorical quality (eg the VRFC equation) of the argument. Rhetoric is designed to convince you of the argument as a whole, while arguments using Impact are designed to advance a specific recommendation. Clearly, bundling Impact into the VRFC equation is inappropriate.&lt;br /&gt;
&lt;br /&gt;
In many cases arguments will have a hard time getting to this level of practicality. They tend to remain mired in the basic rhetoric that governs them:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis: Does God exist?&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Yes, because I can speak to Him and \n He responds by doing good things for me.&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, No, because there is no objective \n evidence that there is anyone listening.&amp;quot;]&lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This argument is clearly truncated but we can see where it is going. It is, in essence, one about the Veracity of personal experience vs. demonstrable evidence, which is a philosophical debate. It is hard to ascribe Impact to it because it doesn&#039;t ever get to the point of enumerating proposals.&lt;br /&gt;
&lt;br /&gt;
But it could eventually transform itself into one that did. Let&#039;s assume the participants agree to settle, or at least table, the philosophical debate and concentrate instead on a test of how to improve your life. Both the religious and secular sides propose certain practices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;kroki lang=&amp;quot;graphviz&amp;quot;&amp;gt;&lt;br /&gt;
digraph G {&lt;br /&gt;
    fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;&lt;br /&gt;
    node [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    edge [fontname=&amp;quot;Helvetica,Arial,sans-serif&amp;quot;]&lt;br /&gt;
    layout=dot&lt;br /&gt;
    0 [label=&amp;quot;0, Thesis: Does God exist?&amp;quot;]&lt;br /&gt;
    1 [label=&amp;quot;1, Yes, because I can speak to Him and \n He responds by doing good things for me.&amp;quot;]&lt;br /&gt;
    2 [label=&amp;quot;2, No, because there is no objective \n evidence that there is anyone listening.&amp;quot;]&lt;br /&gt;
    3 [label=&amp;quot;3 ...more debate...&amp;quot;]&lt;br /&gt;
    4 [label=&amp;quot;4 ...more debate...&amp;quot;]&lt;br /&gt;
    5 [label=&amp;quot;5, Modified Thesis: Is your life best improved by religious or secular practices?&amp;quot;]&lt;br /&gt;
    6 [label=&amp;quot;6, Religious&amp;quot;]&lt;br /&gt;
    7 [label=&amp;quot;7, Secular&amp;quot;] &lt;br /&gt;
    8 [label=&amp;quot;8, Pray for 20 minutes \n every night and ask \n for what you need.&amp;quot;]&lt;br /&gt;
    9 [label=&amp;quot;9, Go to your religious \n service every week and \n perform the rituals.&amp;quot;]&lt;br /&gt;
    10 [label=&amp;quot;10, Study for 20 minutes \n every night in an area \n where your problems are.&amp;quot;]&lt;br /&gt;
    11 [label=&amp;quot;11, Find a support group \n and meet with them \n every week.&amp;quot;]   &lt;br /&gt;
    0 -&amp;gt; 1 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    0 -&amp;gt; 2 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    1 -&amp;gt; 3 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    2 -&amp;gt; 4 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    3 -&amp;gt; 5 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    4 -&amp;gt; 5 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    5 -&amp;gt; 6 [dir=&amp;quot;forward&amp;quot;]; &lt;br /&gt;
    5 -&amp;gt; 7 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    6 -&amp;gt; 8 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    6 -&amp;gt; 9 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    7 -&amp;gt; 10 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    7 -&amp;gt; 11 [dir=&amp;quot;forward&amp;quot;];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/kroki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, the reader is being invited to try the techniques on offer and an evaluation of Impact might thus be made. Note that in this case we have suspended our evaluation of the rhetorical qualities of each argument and begun a new one which seeks a utilitarian appraisal of which side might offer a better personal outcome. This is in keeping with the notion of Impact as separate from the rhetorical qualities of the argument.&lt;br /&gt;
&lt;br /&gt;
We could, in theory, envision someone who is not religious adopting religious practices because he concludes that it does him good. Perhaps he tried both sides and decided the religious side was the one yielding the greatest benefit. This possibility is no doubt one motivation for religious debaters to drop their metaphysical convictions and adopt a practical way to approach their disagreement with the secular side. In any event, this progression seems like a healthy outcome since metaphysical debates usually have little hope of resolution.&lt;br /&gt;
&lt;br /&gt;
Impact is necessarily focused on some particular goal. If the goal is material benefit and you pray and get rich, you might think prayer works. But if the goal is broader than that, you might be disturbed by the fact that you are doing something you don&#039;t really believe is true. Your goal might be to get rich without compromising philosophical integrity. In this case the object of our Impact assessment broadens beyond simple material wealth. A clear statement of the argument thesis in terms of goals is obviously important here.&lt;br /&gt;
&lt;br /&gt;
In spite of our attempts at separating impact from rhetoric we will often find ourselves with a mix of the two:&lt;br /&gt;
[[File:Procon.png|center|frame]]&lt;br /&gt;
Here we&#039;ve scored the Pro argument weaker than the Con argument using our standard rhetorical measures (VRFC). Arguably the Pro argument speculates more (a type of fallacy) about what would happen if we stopped supporting Ukraine. The Con argument isn&#039;t perfect either since it seems to assume that the money saved would actually be used in some constructive way. Still, money not spent is certainly money saved so we&#039;ll mark it down only slightly. That said, it would be ridiculous to stop the argument after concluding the Con side &amp;quot;won&amp;quot;. The argument is not really a rhetorical argument at all but rather a statement of Impact. One side argues for the impact of saving the money. The other argues for the impact of failing to spend the money. We may not know how events would play out in this situation but we acknowledge the risk of catastrophic consequences for failure to act. The impact score for the Pro side is thus much higher. &lt;br /&gt;
&lt;br /&gt;
This is a particular case where the argument should be separated out into one that is explicitly about Impact, but it is not clear how best to achieve that. One way would be to allow participants to intervene by asking questions or proposing to move the debate in a more fruitful direction, perhaps by suggesting a new main contention (ie thesis). &lt;br /&gt;
[[File:Procon2.png|center|frame]]&lt;br /&gt;
A basically new debate ensues. Incentives to move the debate might include reputational points for agreeing on a more productive direction.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s look at an example of what this &amp;quot;productive&amp;quot; argument might look like in terms of cost-benefit-risk analysis:&lt;br /&gt;
&lt;br /&gt;
[[File:Productiveargument.png|center|frame]]&lt;br /&gt;
The red boxes are Con arguments and the green box is the Pro argument. We can see right away why the Pro argument might have stiff opposition since it leads to an infinite cost-benefit ratio and is virtually certain to occur (it is our current policy). It is possible that other scenarios could play out within the context of supporting Ukraine but let&#039;s leave these aside for the moment.&lt;br /&gt;
&lt;br /&gt;
So the Pro side looks bad until we start looking at the Con scenarios. In Scenario 1 we envision taking the $75 billion spent on Ukraine aid and providing free [[community]] college instead. Doing so provides an economic benefit in the long run, so we provide an estimate for that. However, military and policy experts have said that ignoring Ukraine would result in having to contain a newly resurgent Russia, and this could double our defense costs in the near term ($800 billion). The resulting cost-benefit ratio is 2.9; a ratio above 1 means costs exceed benefits, which is undesirable. It is also the highest-probability scenario, at 70%. Other scenarios involve some type of war with Russia and would involve an even greater outlay of funds, not to mention the sheer human toll of war. Only Scenario 4 envisions a minor outlay to contain a victorious Russia, which would be offset by the benefit of free community college. This scenario is desirable but unlikely. &lt;br /&gt;
&lt;br /&gt;
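The bookkeeping behind these scenarios can be sketched as follows. Only the $800 billion cost, the 2.9 ratio, and the 70% probability come from the text above; the scenario names, the $275 billion benefit figure, and the function names are hypothetical illustrations chosen so the first ratio rounds to 2.9.&lt;br /&gt;
&lt;br /&gt;
```python
def cb_ratio(cost, benefit):
    """Cost-benefit ratio: values above 1 mean costs exceed benefits."""
    if benefit == 0:
        return float("inf")  # a scenario with no offsetting benefit
    return cost / benefit

def expected_net_cost(scenarios):
    """Probability-weighted net cost across mutually exclusive scenarios."""
    return sum(s["p"] * (s["cost"] - s["benefit"]) for s in scenarios)

# Hypothetical Con-side scenarios, figures in $ billions (probabilities sum to 1):
scenarios = [
    {"name": "Contain resurgent Russia",  "p": 0.70, "cost": 800,  "benefit": 275},
    {"name": "Limited war with Russia",   "p": 0.20, "cost": 2000, "benefit": 275},
    {"name": "Contain victorious Russia", "p": 0.10, "cost": 100,  "benefit": 275},
]

for s in scenarios:
    print(s["name"], round(cb_ratio(s["cost"], s["benefit"]), 1))
print("expected net cost ($B):", expected_net_cost(scenarios))
```
&lt;br /&gt;
Comparing each scenario&#039;s ratio alongside its probability, as in the loop above, is the whole of the appraisal: no rhetorical scoring is involved.&lt;br /&gt;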
These scenarios are much like sub-arguments but stripped of any need to assess their rhetorical quality. By looking at CB ratios and probabilities we can determine which policy direction to take.   &lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Interaction effects between arguments and sub-arguments&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Although arguments have been presented as standalone entities, users may often score them after reading, and accounting for, their sub-arguments. In that case, the sub-argument&#039;s influence on the parent argument would be counted twice -- once due to the mathematical effect discussed [[#Scoring of individual arguments|above]] and once again due to the influence the sub-argument has on the user&#039;s scoring of the parent argument. &lt;br /&gt;
&lt;br /&gt;
This effect is clearly undesirable and efforts should be made to control it. The software could, for instance, be equipped with the following checks:&lt;br /&gt;
&lt;br /&gt;
* If it detects that a user voted for a sub-argument and subsequently voted for an argument, it can flag the sub-argument score so it does not participate in the mathematical effect it would otherwise have on the argument. We are assuming here, of course, that a user who has voted for a sub-argument will be unable to avoid having it influence his vote for the parent argument.&lt;br /&gt;
* If it detects a vote for an argument but not for its sub-argument, it doesn&#039;t know whether the user read the sub-argument in a way that influenced their vote for the parent argument. In such a case, the user can simply be asked whether they read the sub-argument; if so, any subsequent vote the user casts for the sub-argument can be flagged as a non-participant in its mathematical influence on the parent argument.&lt;br /&gt;
* If the sub-argument does not yet exist when the vote for the parent argument is cast, the software will flag a subsequent vote for any newly developed sub-argument as a legitimate participant in the mathematical influence it has on the parent argument.&lt;br /&gt;
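The three checks above amount to a single decision rule. A minimal sketch follows; the function and parameter names are hypothetical, not part of any existing implementation, and timestamps stand in for whatever vote records the software actually keeps.&lt;br /&gt;
&lt;br /&gt;
```python
def sub_vote_participates(sub_vote_time, parent_vote_time, user_says_read_sub):
    """Decide whether a sub-argument vote counts mathematically toward
    the parent argument's score, per the three checks above.

    sub_vote_time / parent_vote_time: timestamps, or None if no vote yet.
    user_says_read_sub: the user's answer when asked whether they read the
    sub-argument before voting on the parent (None if never asked).
    """
    if parent_vote_time is None:
        # No parent vote yet: nothing to double-count.
        return True
    if sub_vote_time is not None and parent_vote_time > sub_vote_time:
        # Check 1: the sub-argument was voted on first, so we assume it
        # influenced the parent vote; exclude it from the aggregation.
        return False
    # Checks 2 and 3: the sub-argument vote comes after the parent vote
    # (or hasn't happened yet). Exclude it only if the user reports having
    # read the sub-argument before voting on the parent; a sub-argument
    # created after the parent vote could not have been read, so it counts.
    return not bool(user_says_read_sub)
```
&lt;br /&gt;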
&lt;br /&gt;
It is probably difficult to make a system like this foolproof. A user might report not having read a sub-argument that they have, in fact, read. Tracking features could, in theory, be developed to check whether this is the case and react accordingly. However, it would still be difficult to know how deeply the user understands a sub-argument based only on a record that they clicked on it or had it &amp;quot;open&amp;quot;. It also seems easy to overdo tracking of this kind to the point where it simply turns off an otherwise enthusiastic user. Another interesting idea is the use of word descriptors to match scores. In our system, descriptors such as &amp;quot;Completely irrelevant&amp;quot;, &amp;quot;Somewhat irrelevant&amp;quot;, etc. could be a useful way to break up the corresponding numerical ranges in our 0-1 scoring system.&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Template:Transparent&amp;diff=1178</id>
		<title>Template:Transparent</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Template:Transparent&amp;diff=1178"/>
		<updated>2024-08-20T16:17:13Z</updated>

		<summary type="html">&lt;p&gt;Lem: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span class=&amp;quot;mw-kroki&amp;quot;&amp;gt;{{{1}}}&amp;lt;/span&amp;gt;&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=Template:Transparent&amp;diff=1177</id>
		<title>Template:Transparent</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=Template:Transparent&amp;diff=1177"/>
		<updated>2024-08-20T16:16:25Z</updated>

		<summary type="html">&lt;p&gt;Lem: Created page with &amp;quot;&amp;lt;span class=&amp;quot;mw-kroki&amp;quot;&amp;gt;{{{0}}}&amp;lt;/span&amp;gt;&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span class=&amp;quot;mw-kroki&amp;quot;&amp;gt;{{{0}}}&amp;lt;/span&amp;gt;&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=MediaWiki:Common.css&amp;diff=1091</id>
		<title>MediaWiki:Common.css</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=MediaWiki:Common.css&amp;diff=1091"/>
		<updated>2024-08-19T17:44:34Z</updated>

		<summary type="html">&lt;p&gt;Lem: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;/* CSS placed here will be applied to all skins */&lt;br /&gt;
&lt;br /&gt;
/* BEGIN: Dark Mode Maths */&lt;br /&gt;
/* Here we fix &amp;lt;math&amp;gt; and &amp;lt;kroki&amp;gt; tags for dark modes in Poncho and Citizen skins. */&lt;br /&gt;
&lt;br /&gt;
:is(.poncho-dark-mode, .skin-citizen-dark) :is(.mwe-math-fallback-image-inline, .mwe-math-fallback-image-display),&lt;br /&gt;
:is(.poncho-dark-mode, .skin-citizen-dark) .mw-kroki img {&lt;br /&gt;
    filter: hue-rotate(180deg) invert(1);&lt;br /&gt;
}&lt;br /&gt;
  &lt;br /&gt;
@media (prefers-color-scheme: dark) { &lt;br /&gt;
.skin-citizen-auto :is(.mwe-math-fallback-image-inline, .mwe-math-fallback-image-display),&lt;br /&gt;
.skin-citizen-auto .mw-kroki img {&lt;br /&gt;
    filter: hue-rotate(180deg) invert(1);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
/* END: Dark Mode Maths */&lt;br /&gt;
&lt;br /&gt;
/* BEGIN: Disable site subtitles */&lt;br /&gt;
#siteSub {&lt;br /&gt;
    display: none;&lt;br /&gt;
}&lt;br /&gt;
/* END: Disable site subtitles */&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=MediaWiki:Common.css&amp;diff=1090</id>
		<title>MediaWiki:Common.css</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=MediaWiki:Common.css&amp;diff=1090"/>
		<updated>2024-08-19T16:32:17Z</updated>

		<summary type="html">&lt;p&gt;Lem: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;/* CSS placed here will be applied to all skins */&lt;br /&gt;
&lt;br /&gt;
/* BEGIN: Dark Mode Maths */&lt;br /&gt;
/* Here we fix &amp;lt;math&amp;gt; and &amp;lt;kroki&amp;gt; tags for dark modes in Poncho and Citizen skins. */&lt;br /&gt;
&lt;br /&gt;
:is(.poncho-dark-mode, .skin-citizen-dark) :is(.mwe-math-fallback-image-inline, .mwe-math-fallback-image-display) {&lt;br /&gt;
/*:is(.poncho-dark-mode, .skin-citizen-dark) .mw-kroki * {*/&lt;br /&gt;
    filter: hue-rotate(180deg) invert(1);&lt;br /&gt;
}&lt;br /&gt;
  &lt;br /&gt;
@media (prefers-color-scheme: dark) { &lt;br /&gt;
.skin-citizen-auto :is(.mwe-math-fallback-image-inline, .mwe-math-fallback-image-display) {&lt;br /&gt;
/* .skin-citizen-auto .mw-kroki * {*/&lt;br /&gt;
    filter: hue-rotate(180deg) invert(1);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
/* END: Dark Mode Maths */&lt;br /&gt;
&lt;br /&gt;
/* BEGIN: Disable site subtitles */&lt;br /&gt;
#siteSub {&lt;br /&gt;
    display: none;&lt;br /&gt;
}&lt;br /&gt;
/* END: Disable site subtitles */&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=MediaWiki:Common.css&amp;diff=1089</id>
		<title>MediaWiki:Common.css</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=MediaWiki:Common.css&amp;diff=1089"/>
		<updated>2024-08-19T16:09:03Z</updated>

		<summary type="html">&lt;p&gt;Lem: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;/* CSS placed here will be applied to all skins */&lt;br /&gt;
&lt;br /&gt;
/* BEGIN: Dark Mode Maths */&lt;br /&gt;
/* Here we fix &amp;lt;math&amp;gt; and &amp;lt;kroki&amp;gt; tags for dark modes in Poncho and Citizen skins. */&lt;br /&gt;
&lt;br /&gt;
:is(.poncho-dark-mode, .skin-citizen-dark) :is(.mwe-math-fallback-image-inline, .mwe-math-fallback-image-display),&lt;br /&gt;
:is(.poncho-dark-mode, .skin-citizen-dark) .mw-kroki * {&lt;br /&gt;
    filter: hue-rotate(180deg) invert(1);&lt;br /&gt;
}&lt;br /&gt;
  &lt;br /&gt;
@media (prefers-color-scheme: dark) { &lt;br /&gt;
.skin-citizen-auto :is(.mwe-math-fallback-image-inline, .mwe-math-fallback-image-display),&lt;br /&gt;
.skin-citizen-auto .mw-kroki * {&lt;br /&gt;
    filter: hue-rotate(180deg) invert(1);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
/* END: Dark Mode Maths */&lt;br /&gt;
&lt;br /&gt;
/* BEGIN: Disable site subtitles */&lt;br /&gt;
#siteSub {&lt;br /&gt;
    display: none;&lt;br /&gt;
}&lt;br /&gt;
/* END: Disable site subtitles */&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=MediaWiki:Common.css&amp;diff=1088</id>
		<title>MediaWiki:Common.css</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=MediaWiki:Common.css&amp;diff=1088"/>
		<updated>2024-08-19T16:02:01Z</updated>

		<summary type="html">&lt;p&gt;Lem: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;/* CSS placed here will be applied to all skins */&lt;br /&gt;
&lt;br /&gt;
/* BEGIN: Dark Mode Maths */&lt;br /&gt;
/* Here we fix &amp;lt;math&amp;gt; tags for dark modes in Poncho and Citizen skins. */&lt;br /&gt;
&lt;br /&gt;
:is(.poncho-dark-mode, .skin-citizen-dark) :is(.mwe-math-fallback-image-inline, .mwe-math-fallback-image-display, .mw-kroki),&lt;br /&gt;
:is(.poncho-dark-mode, .skin-citizen-dark) :is(.mwe-math-fallback-image-inline, .mwe-math-fallback-image-display, .mw-kroki) *&lt;br /&gt;
  {&lt;br /&gt;
    filter: hue-rotate(180deg) invert(1);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
@media (prefers-color-scheme: dark) { &lt;br /&gt;
.skin-citizen-auto :is(.mwe-math-fallback-image-inline, .mwe-math-fallback-image-display),&lt;br /&gt;
.skin-citizen-auto :is(.mwe-math-fallback-image-inline, .mwe-math-fallback-image-display) * {&lt;br /&gt;
    filter: hue-rotate(180deg) invert(1);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
/* END: Dark Mode Maths */&lt;br /&gt;
&lt;br /&gt;
/* BEGIN: Disable site subtitles */&lt;br /&gt;
#siteSub {&lt;br /&gt;
    display: none;&lt;br /&gt;
}&lt;br /&gt;
/* END: Disable site subtitles */&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=MediaWiki:Common.css&amp;diff=1087</id>
		<title>MediaWiki:Common.css</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=MediaWiki:Common.css&amp;diff=1087"/>
		<updated>2024-08-19T15:59:27Z</updated>

		<summary type="html">&lt;p&gt;Lem: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;/* CSS placed here will be applied to all skins */&lt;br /&gt;
&lt;br /&gt;
/* BEGIN: Dark Mode Maths */&lt;br /&gt;
/* Here we fix &amp;lt;math&amp;gt; tags for dark modes in Poncho and Citizen skins. */&lt;br /&gt;
&lt;br /&gt;
:is(.poncho-dark-mode, .skin-citizen-dark) :is(.mwe-math-fallback-image-inline, .mwe-math-fallback-image-display, .mw-kroki) *&lt;br /&gt;
  {&lt;br /&gt;
    filter: hue-rotate(180deg) invert(1);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
@media (prefers-color-scheme: dark) { &lt;br /&gt;
.skin-citizen-auto :is(.mwe-math-fallback-image-inline, .mwe-math-fallback-image-display) * {&lt;br /&gt;
    filter: hue-rotate(180deg) invert(1);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
/* END: Dark Mode Maths */&lt;br /&gt;
&lt;br /&gt;
/* BEGIN: Disable site subtitles */&lt;br /&gt;
#siteSub {&lt;br /&gt;
    display: none;&lt;br /&gt;
}&lt;br /&gt;
/* END: Disable site subtitles */&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=MediaWiki:Common.css&amp;diff=1086</id>
		<title>MediaWiki:Common.css</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=MediaWiki:Common.css&amp;diff=1086"/>
		<updated>2024-08-19T15:49:29Z</updated>

		<summary type="html">&lt;p&gt;Lem: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;/* CSS placed here will be applied to all skins */&lt;br /&gt;
&lt;br /&gt;
/* BEGIN: Dark Mode Maths */&lt;br /&gt;
/* Here we fix &amp;lt;math&amp;gt; tags for dark modes in Poncho and Citizen skins. */&lt;br /&gt;
&lt;br /&gt;
:is(.poncho-dark-mode, .skin-citizen-dark) :is(.mwe-math-fallback-image-inline, .mwe-math-fallback-image-display, .mw-kroki)&lt;br /&gt;
  {&lt;br /&gt;
    filter: hue-rotate(180deg) invert(1);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
@media (prefers-color-scheme: dark) { &lt;br /&gt;
.skin-citizen-auto :is(.mwe-math-fallback-image-inline, .mwe-math-fallback-image-display) {&lt;br /&gt;
    filter: hue-rotate(180deg) invert(1);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
/* END: Dark Mode Maths */&lt;br /&gt;
&lt;br /&gt;
/* BEGIN: Disable site subtitles */&lt;br /&gt;
#siteSub {&lt;br /&gt;
    display: none;&lt;br /&gt;
}&lt;br /&gt;
/* END: Disable site subtitles */&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:A52b326fa655929fd493d2be0a7306c5_image.png&amp;diff=470</id>
		<title>File:A52b326fa655929fd493d2be0a7306c5 image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:A52b326fa655929fd493d2be0a7306c5_image.png&amp;diff=470"/>
		<updated>2024-08-01T18:22:31Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:080a2c8343a03800f3444e57c9594531_image.png&amp;diff=469</id>
		<title>File:080a2c8343a03800f3444e57c9594531 image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:080a2c8343a03800f3444e57c9594531_image.png&amp;diff=469"/>
		<updated>2024-08-01T18:22:31Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:45e92b3f14760819c72890d38b146a0a_image.png&amp;diff=468</id>
		<title>File:45e92b3f14760819c72890d38b146a0a image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:45e92b3f14760819c72890d38b146a0a_image.png&amp;diff=468"/>
		<updated>2024-08-01T18:22:31Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:Fa69aeace1af49be15342203befde824_image.png&amp;diff=467</id>
		<title>File:Fa69aeace1af49be15342203befde824 image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:Fa69aeace1af49be15342203befde824_image.png&amp;diff=467"/>
		<updated>2024-08-01T18:22:31Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:F9d0e2fd462652a2af8f477771ba7572_image.png&amp;diff=466</id>
		<title>File:F9d0e2fd462652a2af8f477771ba7572 image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:F9d0e2fd462652a2af8f477771ba7572_image.png&amp;diff=466"/>
		<updated>2024-08-01T18:22:31Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:4609cc2eae97fa085d85606fa1005d4e_image.png&amp;diff=465</id>
		<title>File:4609cc2eae97fa085d85606fa1005d4e image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:4609cc2eae97fa085d85606fa1005d4e_image.png&amp;diff=465"/>
		<updated>2024-08-01T18:22:31Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:2301a13db3d43f186a5a1f0ae4d7507d_image.png&amp;diff=464</id>
		<title>File:2301a13db3d43f186a5a1f0ae4d7507d image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:2301a13db3d43f186a5a1f0ae4d7507d_image.png&amp;diff=464"/>
		<updated>2024-08-01T18:22:31Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:C2e083ee755438b20d676ad21e6f1d54_image.png&amp;diff=463</id>
		<title>File:C2e083ee755438b20d676ad21e6f1d54 image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:C2e083ee755438b20d676ad21e6f1d54_image.png&amp;diff=463"/>
		<updated>2024-08-01T18:22:31Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:3a12bb995a861f9fc48de19ca6654e20_image.png&amp;diff=462</id>
		<title>File:3a12bb995a861f9fc48de19ca6654e20 image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:3a12bb995a861f9fc48de19ca6654e20_image.png&amp;diff=462"/>
		<updated>2024-08-01T18:22:31Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:1eda7c1df0d63a3c9c9a31def2cae641_image.png&amp;diff=461</id>
		<title>File:1eda7c1df0d63a3c9c9a31def2cae641 image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:1eda7c1df0d63a3c9c9a31def2cae641_image.png&amp;diff=461"/>
		<updated>2024-08-01T18:22:30Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:Acdcaa8e47a7eeb86038b8fff97c271b_image.png&amp;diff=460</id>
		<title>File:Acdcaa8e47a7eeb86038b8fff97c271b image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:Acdcaa8e47a7eeb86038b8fff97c271b_image.png&amp;diff=460"/>
		<updated>2024-08-01T18:22:30Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:7f62b22b9eb19771503d654db29cab92_image.png&amp;diff=459</id>
		<title>File:7f62b22b9eb19771503d654db29cab92 image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:7f62b22b9eb19771503d654db29cab92_image.png&amp;diff=459"/>
		<updated>2024-08-01T18:22:30Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:E4d2785d9c566d9086dba77b01ea9b90_image.png&amp;diff=458</id>
		<title>File:E4d2785d9c566d9086dba77b01ea9b90 image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:E4d2785d9c566d9086dba77b01ea9b90_image.png&amp;diff=458"/>
		<updated>2024-08-01T18:22:30Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:485534ab8975877931f5e107f9f27f89_image.png&amp;diff=457</id>
		<title>File:485534ab8975877931f5e107f9f27f89 image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:485534ab8975877931f5e107f9f27f89_image.png&amp;diff=457"/>
		<updated>2024-08-01T18:22:30Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:8e91c66d896e08f8e27549fb5ff01e25_image.png&amp;diff=456</id>
		<title>File:8e91c66d896e08f8e27549fb5ff01e25 image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:8e91c66d896e08f8e27549fb5ff01e25_image.png&amp;diff=456"/>
		<updated>2024-08-01T18:22:30Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:68a419870bb11999c652ec700a22bb94_image.png&amp;diff=455</id>
		<title>File:68a419870bb11999c652ec700a22bb94 image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:68a419870bb11999c652ec700a22bb94_image.png&amp;diff=455"/>
		<updated>2024-08-01T18:22:30Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:31a5b9b3879010d6ea6638921f73c4ee_image.png&amp;diff=454</id>
		<title>File:31a5b9b3879010d6ea6638921f73c4ee image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:31a5b9b3879010d6ea6638921f73c4ee_image.png&amp;diff=454"/>
		<updated>2024-08-01T18:22:30Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:20f9692255c23f181952aa5d9f085ee5_image.png&amp;diff=453</id>
		<title>File:20f9692255c23f181952aa5d9f085ee5 image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:20f9692255c23f181952aa5d9f085ee5_image.png&amp;diff=453"/>
		<updated>2024-08-01T18:22:30Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:2ddf69d33af71b3ed6b567a7e1ea7ad3_image.png&amp;diff=452</id>
		<title>File:2ddf69d33af71b3ed6b567a7e1ea7ad3 image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:2ddf69d33af71b3ed6b567a7e1ea7ad3_image.png&amp;diff=452"/>
		<updated>2024-08-01T18:22:30Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:07ae1324ff849a893ad4e7bab588626c_image.png&amp;diff=451</id>
		<title>File:07ae1324ff849a893ad4e7bab588626c image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:07ae1324ff849a893ad4e7bab588626c_image.png&amp;diff=451"/>
		<updated>2024-08-01T18:22:30Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:Fa7f4b5858531a007ce30a5c107814dc_image.png&amp;diff=450</id>
		<title>File:Fa7f4b5858531a007ce30a5c107814dc image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:Fa7f4b5858531a007ce30a5c107814dc_image.png&amp;diff=450"/>
		<updated>2024-08-01T18:22:30Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
	<entry>
		<id>https://wiki.peerverity.info/w/index.php?title=File:Ec2b50ff02af57869ee4c928bba4ffb1_image.png&amp;diff=449</id>
		<title>File:Ec2b50ff02af57869ee4c928bba4ffb1 image.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.peerverity.info/w/index.php?title=File:Ec2b50ff02af57869ee4c928bba4ffb1_image.png&amp;diff=449"/>
		<updated>2024-08-01T18:22:30Z</updated>

		<summary type="html">&lt;p&gt;Lem: == Summary ==
Importing file&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Importing file&lt;/div&gt;</summary>
		<author><name>Lem</name></author>
	</entry>
</feed>