Information Rating System Wiki

User:Dan

34 edits · Joined 22 July 2024
Revision as of 21:41, 24 October 2024 by Dan (talk | contribs)

Ideas for Debate software prototype

Meta-data for all types of predicates

Adding tags to predicates

  • One or more domain topic tags can be assigned to a predicate. Whenever a tag is assigned to a predicate, this generates a first-class predicate that is vote-able to determine the accuracy of the tag. E.g. if predicate X is tagged with the “biology” tag, then a predicate is auto-generated that states “predicate X is related to biology”. Tags themselves are not actual predicates; they are more like a grouping mechanism for a set of predicates. Tag names should probably be at most two words long.
  • Tags themselves will need rating, so there’s probably another predicate associated with each tag, like “I find X a useful tag for filtering”.
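The tag-to-predicate generation described above could be sketched roughly as follows. All type and function names here are illustrative, not settled design:

```typescript
// Hypothetical minimal predicate shape; the real model will have more fields.
interface Predicate {
  id: string;
  text: string;
}

// Assigning a tag generates a first-class, vote-able predicate stating
// that the tagged predicate relates to the tag's topic.
function tagPredicate(
  target: Predicate,
  tag: string,
  nextId: () => string // id allocator, e.g. backed by the database
): Predicate {
  return {
    id: nextId(),
    text: `predicate ${target.id} is related to ${tag}`,
  };
}
```

The generated predicate is then rated like any other, which is what makes the tag assignment itself vote-able.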

Linking predicates

  • How should we categorize links? Here are some of the “types” of links I think I’ve described in this writeup: 1) links between rewordings of a predicate, 2) links between an argument and its pro/con sub-arguments, 3) links between a policy and debates related to the policy.
  • I think basically all the links we establish between predicates in this system should be bidirectional; assuming we’re using a database for the links, we get that already. But even though they should be traversable in both directions, links often do have a “direction” of a sort: there is usually an initial predicate that “inspires” the other. For example, a reworded predicate inspired by an original, or a debate sparked by a policy proposal.
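Under those assumptions, a stored link might look like the following sketch, with the three link types above as link kinds and the direction recording which predicate inspired the other (all names hypothetical):

```typescript
// The three link types identified so far; "from" is the inspiring
// predicate, "to" is the inspired one.
type LinkKind = "rewording" | "sub-argument" | "policy-debate";

interface PredicateLink {
  kind: LinkKind;
  from: string; // id of the inspiring predicate
  to: string;   // id of the inspired predicate
}

// A single stored row can still be traversed in both directions.
function neighbors(links: PredicateLink[], id: string): string[] {
  return links
    .filter((l) => l.from === id || l.to === id)
    .map((l) => (l.from === id ? l.to : l.from));
}
```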

Data editing

For the most part, we probably want to maintain an immutable history of edits to the database, so rather than editing data we may want to use a copy-on-write style of editing.
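A minimal copy-on-write sketch, assuming each edit appends a new immutable version that points back at its predecessor rather than mutating anything in place:

```typescript
// One immutable version in an edit chain.
interface Version<T> {
  value: T;
  prev: Version<T> | null; // predecessor version, null for the original
  editedAt: number;        // timestamp, ms since epoch
}

// "Editing" never touches the current version; it returns a new one.
function edit<T>(current: Version<T>, value: T, now = Date.now()): Version<T> {
  return { value, prev: current, editedAt: now };
}

// Walk the chain to recover the full edit history, oldest first.
function history<T>(v: Version<T>): T[] {
  const out: T[] = [];
  for (let cur: Version<T> | null = v; cur; cur = cur.prev) out.push(cur.value);
  return out.reverse();
}
```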

Data model for policy votes

  • In a community rating system, members vote on a “community-binding” predicate to undertake or not undertake a specific policy. A policy describes a set of actions to be performed. These actions can range from very high-level and vague to very concrete.
  • One or more debates will likely be created to discuss issues of competing policies being voted on by the community. We need a mechanism for linking policy predicates to debate predicates.

Data model for debates

  • Each debate argument is a predicate. And similarly, any predicate can be debated.
  • Sub-arguments can be linked to a parent argument in some manner that allows both an understanding of and a calculable impact of the sub-argument on the parent argument. WHAT FORM?
  • No predicate is ever edited, but a new predicate can be created as a rewording of an old one, connected by a rewording link. A rewording link isn’t strictly necessary, but these links may allow for a better understanding of how a debate has evolved over time.
  • If a predicate is reworded, what mechanism(s) could be used to attach old sub-arguments to the new predicate? (Some may no longer apply, and their impacts could certainly change as a result, but most will likely still apply.)

Example debate

Debate predicate: It is better to have 3 people make decisions for a group than have just one of the three make the decisions.

Arg1: It can take longer for 3 people to agree on a decision than one person.

Arg2: The sum total knowledge available between 3 people is larger.

Arg3: Two people may “collude” to vote for each other’s personal needs, and it is less obvious than if one person is voting just for their own personal needs.

Arg4: More time and effort is consumed if three people have to spend time thinking about and voting on a decision. This is especially wasteful if the decision making is simple and obvious.

Arg1 and Arg4 lead to reformulations to clarify that the issue is about non-trivial decisions and that voting will be used rather than requiring 100% consensus.

Reworded debate topic: It is better to have 3 people vote on making non-trivial decisions for a group rather than just have one of the three people making the decision.

This also leads to a reworded Arg4 (the “simple and obvious” part is dropped, since the debate is now about non-trivial decisions): More time and effort is consumed if three people have to spend time thinking about and voting on a decision.

Arg1 could arguably be considered a pro or a con argument, since it could be argued that taking longer to reach a decision leads to better outcomes (yet another debate, now on the “impact” of the argument). So how should we model the “impact” mathematically?

Maybe it is some positive/negative scale (where positive impact values add to the computed rating for the parent predicate and negative impact values subtract from it)? Note that this could result in a case where a sub-predicate is considered to have little impact, even though two “sides” think it has a lot of impact, just in opposite directions. But this seems like a reasonable outcome.
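One way the positive/negative scale could work is sketched below: each sub-argument contributes its signed impact scaled by its rated truth, so opposing contributions cancel, as noted above. The ranges (-1..1 for impact, 0..1 for truth) and the clamping are assumptions, not settled design:

```typescript
// A sub-argument's contribution to its parent's rating.
interface SubArg {
  impact: number; // -1 (strong con) .. +1 (strong pro), assumed range
  truth: number;  // 0 .. 1 rated truth of the sub-argument
}

// Parent rating = base rating shifted by the signed, truth-weighted sum
// of sub-argument impacts, clamped to [0, 1].
function parentRating(base: number, subs: SubArg[]): number {
  const shift = subs.reduce((sum, s) => sum + s.impact * s.truth, 0);
  return Math.min(1, Math.max(0, base + shift));
}
```

Note that equal-and-opposite impacts cancel exactly, producing the “little net impact despite two passionate sides” outcome described above.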

Filtering mechanisms for predicates

Note: these filtering ideas apply both to filtering policy predicates and debate predicates.

Filtering should be done by some rating formula that results in a ranked ordering of the predicates to view, with the highest ranked at the most visible position. If predicates are paginated in this ranking, then there probably is no need for an absolute filtering out of any predicate, but we could also include some kind of hard limit (e.g. user may only want to see 10 pages of predicates or user may not want to see any predicates below some threshold rating).
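The ranking-plus-hard-limits idea might be sketched like this, with a rating formula as a plain scoring function and optional threshold and page caps (all parameter names illustrative):

```typescript
// A rating formula maps a predicate id to a score.
type RatingFormula = (id: string) => number;

// Rank ids by score descending, optionally dropping anything below a
// threshold rating and capping the number of pages retained.
function rankPredicates(
  ids: string[],
  rate: RatingFormula,
  opts: { minRating?: number; pageSize?: number; maxPages?: number } = {}
): string[] {
  const { minRating = -Infinity, pageSize = 20, maxPages = Infinity } = opts;
  return ids
    .filter((id) => rate(id) >= minRating)
    .sort((a, b) => rate(b) - rate(a))
    .slice(0, pageSize * maxPages);
}
```

With no options set this is pure ranking; the hard cutoffs only kick in when the user asks for them, matching the “probably no need for absolute filtering, but allow a hard limit” point above.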

A user could have multiple rating formulas that they may want to choose from, depending on what they are interested in seeing at a given time. For example, if a user wants to simply rate predicates in one of their domains of interest, they might select a rating formula that ranks high that type of predicate. Or if they want to debate a topic in near real-time, they could select a rating formula that ranks actively debated predicates.

  • Users can filter based on domain tags they either want to see or don’t want to see based on voting on the accuracy of the tag.
  • Users can vote directly on the importance of people seeing a predicate (not a good-bad value judgment, just voting it is an important topic that people should see).
  • In cases where other people make their filters available, people can filter based on filters used by one or more other users to avoid having to create detailed filters of their own.
  • Users can filter based on “linked” predicates (for example, to find or exclude all re-worded forms of a debate predicate). In that case, we might want some way to figure out which of the re-wordings is the best (which might not be the most active, since it may be a new re-wording; this probably needs more thought).
  • Individuals can filter based on one or more rating systems (e.g. a specific CRS or the user’s personal SRS). There are also a number of options for how to select and combine values from multiple rating systems over the set of all predicates to determine a ranking.
  • Since it is probably desirable that filters are extremely personalized, perhaps it is better to use a filter rating algorithm where the user’s own vote, if specified, dominates the ratings of others. For example, if the user has directly rated a topic, that rating would be used for the topic instead of the aggregated rating (unrated topics would still use the calculated rating, so “suggested topics” would still be findable from the user’s network).
  • A filter rating could be specific to a particular tag, or even to a particular debate or policy. For example, debates on politics could be filtered to ignore arguments about dead politicians. This could perhaps be done via a combination of a tag plus a keyword search.
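The personal-vote-dominates idea from the list above could be as simple as a fallback lookup (a sketch; the 0 default for completely unrated predicates is an assumption):

```typescript
// Use the user's own rating when present; otherwise fall back to the
// aggregated rating, so unrated predicates remain discoverable.
function effectiveRating(
  id: string,
  personal: Map<string, number>,   // the user's direct votes
  aggregated: Map<string, number>  // ratings computed from the network
): number {
  return personal.get(id) ?? aggregated.get(id) ?? 0;
}
```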

Implementation issues

Ratings can be very dynamic. If predicates are stored in an SQL database, for example, does a user’s software just periodically update the predicates with ratings from their rating system, then generate a filtering query? Or are the raw predicates fetched from the database, then ratings fetched to use for ranking?

If the latter, it seems like there would need to be filtering on which predicates get stored in the database in the first place. So at any given time, the database would only contain predicates that met some minimal rating formula threshold at the time they were last reported to the user’s node.

But this brings up the question of how predicates arrive at the user’s node, a question which also depends on the topology of the network.

How do predicates get added to a user’s database?

For fully connected nodes (e.g. initial community systems will fall into this category), each node will need to make a decision about every predicate that gets created (in an extreme case this could just be blocking predicates from peers that are particularly bad at generating predicates). In a subjective rating system, a node would have the option to not pass on predicates to other peers if they don’t consider them interesting/desirable, resulting in an automatic form of filtering.

There’s also the issue of whether to push or pull predicates, or some combination of both.

Both of these options allow for filtering at the network level: a pull could explicitly ask the peer it is querying for predicates above a given tag threshold, or a node could register with a peer that it only wants to receive pushes for predicates above that threshold.
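A registration for filtered pushes might look like the following sketch, assuming each predicate carries per-tag accuracy ratings (names hypothetical):

```typescript
// What a node registers with a peer: only push predicates whose tag
// accuracy rating meets the threshold for this tag.
interface Subscription {
  tag: string;
  minTagRating: number; // accuracy of the tag assignment, 0..1 assumed
}

interface TaggedPredicate {
  id: string;
  tagRatings: Record<string, number>; // tag -> rated accuracy
}

// Decision the pushing peer makes for each new or updated predicate.
function shouldPush(p: TaggedPredicate, sub: Subscription): boolean {
  const rating = p.tagRatings[sub.tag];
  return rating !== undefined && rating >= sub.minTagRating;
}
```

The same predicate check works for pulls: the queried peer simply applies it to its stored predicates before responding.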

Solution for now

For now, I suppose we should assume a relatively small but fully connected model, where rating changes are constantly pushed by peers (similar to Lem’s prototype) and we update the ratings in the database.

If we use a copy-on-write methodology to allow for analysis of historical changes, then I suppose we would timestamp every change to a rating as reported by each peer.

In a community rating system, each peer would just directly report its own ratings “vote” change to one or more central databases that would then generate aggregated ratings.
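A sketch of that aggregation, assuming each peer reports timestamped vote changes and the community rating is the mean of each peer’s latest vote (the mean is just a placeholder aggregation rule):

```typescript
// One timestamped vote change reported by a peer.
interface VoteChange {
  peer: string;
  value: number;     // assumed 0..1
  timestamp: number; // ms since epoch
}

// Keep only each peer's most recent vote, then average.
function aggregate(changes: VoteChange[]): number {
  const latest = new Map<string, VoteChange>();
  for (const c of changes) {
    const prev = latest.get(c.peer);
    if (!prev || c.timestamp > prev.timestamp) latest.set(c.peer, c);
  }
  let sum = 0;
  for (const c of latest.values()) sum += c.value;
  return latest.size ? sum / latest.size : 0;
}
```

Because the full change list is retained, the historical aggregated rating at any past time can be recomputed by filtering the list before aggregating.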

Prototype UI

For now, I propose we keep the prototype UI as simple as possible so we can quickly create and deploy something we can test in the real world, primarily using paginated lists displayed on each page.

I’m assuming a web browser-based UI (e.g. written in TypeScript).

Here’s a list of UI areas I can think of (not all would need to be created in the prototype):

  • A page to define different types of predicate rating filters. I think we should skip this page in the prototype and just have some “default” rating filters: for example, one for filtering debate topics based on a direct rating of the topic’s importance, one for filtering sub-arguments within a debate based on their impact, etc. There are a lot of different options for how we can filter, and it shouldn’t be a big issue when dealing with a small number of reasonable people participating in the prototype testing. Even when we open it to some public testing, a simple muting mechanism for spammers is probably the primary need at the beginning, along with basic filtering based on tags. Filtering will become more important the more active the userbase becomes.
  • A page (FILTERED DEBATE LIST) where we can see debates/policies ranked and paginated, based on a dropdown at the top of the page where the user selects from the available rating filters. From this page, the user can navigate to any debate/policy.
  • A page (DEBATE PAGE) for viewing a single debate predicate, containing:
  • the wording of the predicate
  • if reworded forms of the debate exist, links to reworded forms of the debate predicate (paginated and ranked by a rewording filter).
  • immediate sub-arguments (paginated and ranked by an argument filter such as rated “impact”). Sub-arguments are also links to a debate on the sub-argument itself, allowing for navigation down the argument tree.
  • links to other debates that this is a sub-argument of, allowing navigating to any debate depending on this argument.
  • A set of navigation path links at the top of the page so that the user can navigate back to any point in the path that led them to the current predicate. This would need some kind of squeezing if it gets too long, and we would need some way to assign “short names” for links too. This is probably too much work for now; maybe we can just rely on browser navigation instead.

Every displayed predicate should have the following info:

  • the wording of the predicate
  • the user’s own rating (starts as “Vote” or maybe just “?”, clickable to assign a rating)
  • a direct rating of the predicate from a rating system chosen by the user. This could be clickable to change which rating system is used, based on the available rating systems/algorithms.
  • a computed rating based on its sub arguments (using whatever algorithm the user selected for computing this rating). This could be clickable to change the rating algorithm.
  • a button to reword the predicate. This would open a new predicate page with the wording of the old predicate as a starting point.
  • a button to add a sub-argument
  • Optional: we could show the “filter rating” computed by whatever filter is allowing the predicate to be displayed (this wouldn’t apply when someone directly clicks on a predicate, just when it is in a list of predicates and was determined by such a filter).

Implementation Issues:

  • How should we limit the size of predicates for reasonable viewing? Maybe just a simple character limit (e.g. 160 characters, allowing a maximum of 80 characters per line and 2 lines)? Another option would be some kind of “squeezing” of predicate text when it is over a specific length (this would discourage creating long predicates without being an absolute prohibition). Both methods could be employed in tandem, with some “maximum allowed length” and a lower “squeeze length”; the max length would be a database setting and the squeeze length just a UI setting.
  • Need to avoid cycles when computing a rating based on sub-arguments
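One standard way to avoid cycles is to track the predicates on the current recursion path and treat a revisited predicate as contributing nothing. The combination rule below (averaging in each sub-argument) is purely a toy placeholder; the real formula is still an open question above:

```typescript
// Cycle-safe rating over the sub-argument graph. "onPath" holds the
// predicates on the current recursion path; revisiting one yields 0,
// so a cycle cannot cause infinite recursion.
function computeRating(
  id: string,
  base: Map<string, number>,   // direct ratings per predicate
  subs: Map<string, string[]>, // predicate id -> sub-argument ids
  onPath: Set<string> = new Set()
): number {
  if (onPath.has(id)) return 0; // break the cycle
  onPath.add(id);
  let rating = base.get(id) ?? 0;
  for (const child of subs.get(id) ?? []) {
    // Toy combination rule: average in each sub-argument's rating.
    rating = (rating + computeRating(child, base, subs, onPath)) / 2;
  }
  onPath.delete(id);
  return rating;
}
```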

Other potential features

  • search for predicates matching keywords
  • search for sub-argument predicates matching keywords

Page Mockups

For the initial prototype, I propose we start with just two pages: a debate list page and a debate page.

UI Element Notes:

[] indicates a button. Buttons will typically open a dialog for input or a dropdown list to select from.

[Vote Truth: NULL] This button would initially be displayed as [Vote Truth: NULL]. Once a user clicks on it and votes (just open a dialog that accepts percentage values with two decimal places for now, between 0.01 and 99.99), it would also display the user’s current voted value instead of NULL (or some similar text to indicate no vote has been cast). For example, [Vote Truth: 99.99%]. We also need a way for a user to remove their vote; for now, maybe remove the vote if they delete everything in the edit box.
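The dialog’s input handling could be sketched as follows (a hypothetical helper: an empty box removes the vote, and out-of-range or non-numeric input is rejected):

```typescript
// Parse the vote dialog's text input.
// Returns: null = remove the vote, "invalid" = reject the input,
// number = the accepted vote, rounded to two decimal places.
function parseVote(input: string): number | null | "invalid" {
  const trimmed = input.trim();
  if (trimmed === "") return null; // deleting everything removes the vote
  const value = Number(trimmed);
  if (!Number.isFinite(value) || value < 0.01 || value > 99.99) return "invalid";
  return Math.round(value * 100) / 100;
}
```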

[Vote Impact: NULL] This button would basically work the same as the Vote Truth button.

[FilterDropDown] is a dropdown list showing the default predicate rating filters (probably one for “all tags” and then one for each tag, ordered alphabetically for now). There’s an implication here that predicate rating filters have “names”. Right now I use this control for essentially every “list” in the UI.

[PageSize: 20] This sets the page size of the associated list. Different lists will have different “default” sizes. The user clicks this button to change the page size of the list; maybe allow values between 1 and 200?

PAGINATION BAR: a standard bar at the bottom of a list to allow navigation through pages in the list. Almost every list will have a pagination bar.

Debate List page (very simple page)

Debate Filter: [FilterDropDown]

Debates (matching filter) [PAGE SIZE: defaults to 20 per page]:

  1. Dogs are better than cats. CRS: 100% SRS: 75% [Reword] [Vote Truth] [Vote Impact] [Add Arg]
  2. Ice cream is healthy. CRS: 100% SRS: 75% [Reword] [Vote Truth] [Vote Impact] [Add Arg]
  3. Table tennis is the best sport. CRS: 100% SRS: 75% [Reword] [Vote Truth] [Vote Impact] [Add Arg]

PAGINATION BAR

Debate page (4 sections: debate predicate, rewordings, sub arguments, super arguments)

Debate: It is better to have 3 people make decisions for a group than have just one of the three make the decisions. CRS: 100% SRS: 75% [Reword] [Vote Truth] [Vote Impact] [Add Arg]

Rewordings [FilterDropDown] [PAGE SIZE: defaults to max of 3 per page]:

  1. It is better to have 3 people vote on making non-trivial decisions for a group rather than just have one of the three people making the decision. CRS: 100% SRS: 75% [Reword] [Vote Truth] [Vote Impact] [Add Arg]

PAGINATION BAR

Arguments ranked by total impact (impact*rated truth) [FilterDropDown] [PAGE SIZE: defaults to 10 per page]:

  1. The sum total knowledge available between 3 people is larger. CRS: 100% SRS: 75% [Reword] [Vote Truth] [Vote Impact] [Add Arg]
  2. More time and effort is consumed if three people have to spend time thinking about and voting on a decision. This is especially wasteful if the decision making is simple and obvious. CRS: 100% SRS: 75% [Reword] [Vote Truth] [Vote Impact] [Add Arg]
  3. It can take longer for 3 people to agree on a decision than one person. CRS: 100% SRS: 75% [Reword] [Vote Truth] [Vote Impact] [Add Arg]
  4. Two people may “collude” to vote for each other’s personal needs, and it is less obvious than if one person is voting just for their own personal needs. CRS: 100% SRS: 75% [Reword] [Vote Truth] [Vote Impact] [Add Arg]

PAGINATION BAR

Debates depending on this predicate [FilterDropDown] [PAGE SIZE: defaults to 5 per page]:

  1. The President of the US has too much power. CRS: 100% SRS: 75% [Reword] [Vote Truth] [Vote Impact] [Add Arg]
  2. Dan should be replaced by Donna, Eric, and Pete as manager of SynaptiCAD. CRS: 100% SRS: 75% [Reword] [Vote Truth] [Vote Impact] [Add Arg]

PAGINATION BAR