Just chilling and sharing a stream of thought…
So how would a credibility system work, and how would it be implemented? What I envision is something similar to upvotes…
You have a credibility score; it starts at 0 (neutral). You post something. People don’t vote on whether they like it — the votes are for “good faith.”
Good faith means:
- You posted according to the rules and started a discussion
- You argued in good faith and can part ways amicably with opposing opinions
- You clarified a topic for someone
- You supported someone with a polar-opposite opinion who is being downvoted because people don’t understand the system
- Etc.
It is tied to the user not the post
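As a rough illustration of the proposal above, here is a minimal sketch in Python. All names here (`User`, `good_faith_vote`, the set of reasons) are hypothetical; the point is only that the score lives on the user, starts at 0, and that a vote must cite a good-faith reason rather than mere agreement.

```python
from dataclasses import dataclass

@dataclass
class User:
    """Credibility is tied to the user, not to any single post."""
    name: str
    credibility: int = 0  # starts at 0, i.e. neutral

# Hypothetical vocabulary of good-faith reasons, per the list above.
GOOD_FAITH_REASONS = {
    "followed_rules_started_discussion",
    "argued_in_good_faith",
    "clarified_topic",
    "defended_misunderstood_opposing_view",
}

def good_faith_vote(voter: User, author: User, reason: str) -> None:
    """Record a vote for good faith, not for agreement.

    Votes without a recognized good-faith reason are rejected.
    """
    if reason not in GOOD_FAITH_REASONS:
        raise ValueError(f"not a good-faith reason: {reason}")
    author.credibility += 1
```

A real system would of course need anti-abuse checks (rate limits, self-vote bans), but the data model stays this simple: one score per user.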
Good, bad, indifferent…?
Perfect the system
Are you thinking of something like Stack Overflow’s reputation system? See https://stackoverflow.com/help/whats-reputation for a basic overview. See https://stackoverflow.com/help/privileges for some examples of privileges unlocked by hitting a particular reputation level.
That system is better optimized for reputation than the threaded discussions we participate in here are, but it has its own problems. Still, we could at minimum learn from the things that it does right:
- You need site (or community) staff, who are not constrained by reputation limits, to police the system
- Upvoting is disabled until you have at least a little reputation
- Downvoting is disabled until you have a decent amount of reputation and costs you reputation
- Upvotes grant more reputation than downvotes take away
- Voting fraud is a bannable offense and there are methods in place to detect it
- The system is designed to discourage reuse of content
- Not all activities can be upvoted or downvoted. For example, commenting on SO requires a minimum amount of reputation, but comments don’t impact your reputation even when upvoted, unless they’re reported as spam, offensive, fraudulent, etc. (reporting also requires a minimum reputation).
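Several of the rules in that list are easy to express as a small vote-application function. The sketch below uses illustrative thresholds and weights, not necessarily Stack Overflow’s actual values (those are documented at the /help/privileges link above): upvoting and downvoting are gated behind minimum reputation, downvoting costs the voter a little, and upvotes grant more than downvotes take away.

```python
# Illustrative numbers only — not authoritative SO values.
MIN_REP_TO_UPVOTE = 15
MIN_REP_TO_DOWNVOTE = 125
UPVOTE_GAIN = 10    # upvotes grant more reputation...
DOWNVOTE_LOSS = 2   # ...than downvotes take away
DOWNVOTE_COST = 1   # and downvoting costs the voter a bit

def apply_vote(voter_rep: int, author_rep: int, up: bool) -> tuple[int, int]:
    """Apply one vote; return (new_voter_rep, new_author_rep)."""
    if up:
        if voter_rep < MIN_REP_TO_UPVOTE:
            raise PermissionError("not enough reputation to upvote")
        return voter_rep, author_rep + UPVOTE_GAIN
    if voter_rep < MIN_REP_TO_DOWNVOTE:
        raise PermissionError("not enough reputation to downvote")
    # Downvotes cost the voter and never push the author below zero.
    return voter_rep - DOWNVOTE_COST, max(author_rep - DOWNVOTE_LOSS, 0)
```

The asymmetry (gain of 10 vs. loss of 2) is the interesting design choice: it keeps a pile-on of downvotes from wiping out reputation earned in good faith.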
If you wanted to have upvoted and downvoted discourse, you could also allow people to comment on a given piece of discourse without their comment itself being part of the discourse. For example, someone might just want to say “I’m lost, can someone explain this to me?” “Nice hat,” “Where did you get that?” or something entirely off topic that they thought about in response to a topic.
You could also limit the total amount of reputation a person can bestow upon another person, and maybe increase that limit as their reputation increases. Alternatively or additionally, you could enable high rep users to grant more reputation with their upvotes (either every time or occasionally) or to transfer a portion of their rep to a user who made a comment they really liked. It makes sense that Joe Schmo endorsing me doesn’t mean much, but King Joe’s endorsement is a much bigger deal.
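The “King Joe vs. Joe Schmo” idea above — endorsements weighted by the endorser’s own reputation, with a cap on how much any one person can grant another — could be sketched like this. The weighting formula (log of the voter’s rep) and the 10% cap are arbitrary assumptions chosen for illustration, not a recommendation.

```python
import math
from collections import defaultdict

def vote_weight(voter_rep: int) -> int:
    """Hypothetical policy: an upvote is worth log10 of the voter's rep,
    with a floor of 1, so high-rep endorsements count for more."""
    return max(1, int(math.log10(max(voter_rep, 1))))

class EndorsementLedger:
    """Tracks how much reputation each voter has granted each author,
    enforcing a per-pair cap of 10% of the voter's own reputation."""

    def __init__(self) -> None:
        self.granted = defaultdict(int)  # (voter, author) -> total granted

    def endorse(self, voter: str, voter_rep: int, author: str) -> int:
        cap = voter_rep // 10  # assumed cap: 10% of the voter's rep
        weight = vote_weight(voter_rep)
        already = self.granted[(voter, author)]
        grant = min(weight, max(cap - already, 0))
        self.granted[(voter, author)] += grant
        return grant  # reputation actually bestowed by this endorsement
```

Under these assumptions a 10-rep Joe Schmo can give an author at most 1 point total, while a 1000-rep King Joe grants 3 per vote up to a 100-point lifetime cap for that author.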
Reputation also makes sense to be topic-specific. I could be an expert on software development but completely misinformed about hedgehogs, while still thinking I’m an expert. If I have a high reputation from software-development discussions, it would be misleading when I start telling someone about hedgehogs’ diets.
Yet another thing to consider, especially if you’re federating, is server-specific reputations with overlapping topics. Assuming you allow users to say “Don’t show this / any of my content to <other server> at all,” (e.g., if you know something is against the rules over there or is likely to be downvoted, but in your community it’s generally upvoted) there isn’t much reason to not allow a discussion to appear in two or more servers. Then users could accrue reputation on that topic from users of both servers. The staff, and later, high reputation users of one server could handle moderation of topics differently than the moderators of another, by design. This could solve disagreements about moderation style, voting etiquette, etc., by giving users alternatives to choose from.
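Combining the last two ideas — topic-specific reputation and server-specific reputation under federation — suggests keying scores by a (server, topic) pair, with a per-user blocklist for servers that should never see the content. This is a speculative sketch of that bookkeeping, not any existing fediverse API:

```python
from collections import defaultdict

class FederatedReputation:
    """Reputation keyed by (server, topic). A user may block specific
    servers from seeing their content entirely, in which case no
    reputation can accrue there."""

    def __init__(self) -> None:
        self.rep = defaultdict(int)          # (server, topic) -> score
        self.blocked_servers: set[str] = set()

    def record_upvote(self, server: str, topic: str, amount: int = 1) -> None:
        if server in self.blocked_servers:
            return  # content is never shown there, so no rep accrues
        self.rep[(server, topic)] += amount

    def topic_total(self, topic: str) -> int:
        """Aggregate one topic's reputation across all federated servers."""
        return sum(v for (s, t), v in self.rep.items() if t == topic)

    def server_total(self, server: str) -> int:
        """What one server's community thinks of this user, all topics."""
        return sum(v for (s, t), v in self.rep.items() if s == server)
```

Exposing `topic_total` and `server_total` separately is what lets my hedgehog reputation on one server stand apart from my software-development reputation on another, while a shared discussion still accrues both.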
Thank you all for the discussion! I have read all the comments, enjoyed each response, and will continue to do so. I came out with pretty much the same feelings as the rest of you… In an ideal world…
Once again, thank you and good luck to everyone out there…we got this!
Just disregard ‘votes’ entirely. What exactly are you hoping to achieve? Do you want “low-credibility” users highlighted in red so you don’t have to bother reading their comments? Have them hidden entirely? Seems like existing tools like blocking and banning already accomplish these goals.
While I would never support it, the main way to improve online discussion is by removing anonymity. Allow me to go back a couple decades and point to John Gabriel’s Greater Internet Fuckwad Theory: people with a reasonable expectation of anonymity turn into complete assholes. The common solution is to link accounts to a real identity in some way, such that online actions have negative consequences for the person taking them. Google famously tried this by forcing people to use their real name on accounts, and it was a privacy nightmare. Ultimately, though, it’s the only functional solution. If anti-social actions do not have negative social consequences, then there is no disincentive against them, and people can just keep spinning up new accounts and taking those same anti-social actions. This can also be automated, resulting in the bot farms which troll and brigade online forums. On the privacy-nightmare side of the coin, linking identities means it’s much easier to target people for legitimate, though unpopular, opinions. There are some “in the middle” options, which can make creating accounts somewhat costlier and slower but which don’t expose people’s real identities in quite the same way. But every system has its pros and cons, and linking identities to accounts is no exception.
Voting systems and the like will always be a kludge that is easy to work around. Any attempt to predicate the voting on trusting users to “do the right thing” is doomed to fail. People suck; they will do what they want and ignore the rules when they feel they are justified in doing so. Or some people will do it just to be dicks. At the same time, it also promotes herding and bubbles. If everyone in a community chooses to downvote puppies and upvote cats, eventually the puppy people will be drowned out and forced to go off and found their own community which does the opposite. And those communities, both now stuck in a bias-reinforcing echo chamber, will continue to drift further apart and possibly radicalize against each other. This isn’t even limited to online discussions. People often choose their meat-space friends based on similar beliefs, which leads to people living in bubbles which may not be representative of the wider world.
Despite the limitations of the kludge, I do think voting systems are the best we’re going to get. I’d agree with @grue that the Slashdot system had a lot of merit. Allowing the community to vote on articles/comments and then later having those votes reviewed by a random selection of users seems like a reasonable way to try to enforce some of the “good faith” voting you’re looking for. Though even that will likely get gamed and lead to herding. It’s also a lot more cumbersome and relies on the user community taking on a greater role in maintaining the community. But, as I have implied, I don’t think there is a “good” solution, only a lot of “less bad” ones.
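For concreteness, a toy version of that second-pass review (in the spirit of Slashdot’s meta-moderation, though not its actual algorithm) might look like this: a random panel of users judges each past vote as fair or unfair, and voters whose votes a strict majority deems unfair get flagged for reduced influence. The `judge` callback and penalty policy are placeholders.

```python
import random

def metamoderate(votes, jurors, judge, sample_size=3, rng=random):
    """Second-pass review of past votes, Slashdot-metamoderation style.

    votes: list of (voter, vote) pairs to review
    jurors: pool of users eligible to meta-moderate
    judge: callable (juror, vote) -> bool, True if the vote was fair
    Returns the set of voters whose vote a strict panel majority
    judged unfair (candidates for losing voting influence).
    """
    penalized = set()
    for voter, vote in votes:
        panel = rng.sample(jurors, min(sample_size, len(jurors)))
        fair_count = sum(bool(judge(juror, vote)) for juror in panel)
        if fair_count * 2 < len(panel):  # strict majority said "unfair"
            penalized.add(voter)
    return penalized
```

The random panel is what makes this somewhat resistant to brigading: a clique can coordinate its votes, but it can’t easily control who reviews them.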
You know that the current voting system isn’t like/dislike, right? Or it’s not supposed to be. Your proposed system would have the same problem: users would use it as like/dislike buttons.