There are around 156 active fact-checking organizations around the world, and these organizations have adopted many different methods or scales for categorizing checked claims.
Those involved in the WikiTribune fact-checking project have not yet agreed on which categorization to adopt. It would be wise to list the methods currently used by these organizations; we can then either adopt one of them or create our own.
1. PolitiFact uses a “truth meter” visualization and categorizes claims as: True, Mostly true, Half true, Mostly false, False, and Pants on fire.
2. Google Fact Checks suggests an example scale of True, Mostly true, Half true, Mostly false, and False.
3. The Washington Post fact check uses “Pinocchios.” The more Pinocchios, the more egregious the falsehood (named after the fictional puppet whose nose grew each time he lied).
4. Full Fact does not use any rating. They argue: “We feel that such ratings, while appealing at a glance, can sometimes be reductive, and not provide you with the information you need to understand the claim as a whole. It is often the case that a claim is not just ‘True’ or ‘False,’ it just hasn’t given you the whole picture.”
5. Snopes uses many categories to rate claims, such as True, Mostly true, False, Mostly false, Mixture, Outdated, Unproven, Scam, Legend, etc.
6. Facebook uses this rating system: False, Mixture, True, Not eligible, Satire, Opinion, Prank generator, Not rated.
We should try to keep fact checks short and concise. This may be difficult to do when passions run high, but remember that the fact check should be accessible to anyone, regardless of whether they support or oppose the statement being fact checked. Remember, the goal is to check facts, not to swing a mallet.
A common structure for fact checks may help. Here are questions that could be asked in every fact check.
1: Is the claim fact-checkable to begin with? That is, is the claim a matter of fact, or is it something else, like a subjective judgment or opinion? Example: If a politician says that an agency or initiative has made “amazing progress,” we may be able to determine whether it made progress on some metric, but we can’t fact-check the opinion that it’s “amazing.”
2: What evidence exists to support the claim being checked?
3: What evidence exists to oppose the claim being checked?
4: Who would be most able to verify or dispute the claim and what have they said?
5: Why might it be in the interests of the claimant to make the claim?
This last question could be left out unless one is able to note some benefit to the claimant without bringing in ad hominem arguments.
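For anyone building tooling around this, the structure above could be sketched as a simple record. This is only an illustration, not an agreed WikiTribune schema; all field names here are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of the common fact-check structure discussed above.
# Field names are illustrative, not an agreed schema.
@dataclass
class FactCheck:
    claim: str                    # the statement being checked
    is_checkable: bool            # Q1: a matter of fact, or opinion/judgment?
    supporting_evidence: list[str] = field(default_factory=list)  # Q2
    opposing_evidence: list[str] = field(default_factory=list)    # Q3
    expert_comment: Optional[str] = None    # Q4: who can verify or dispute it
    claimant_interest: Optional[str] = None  # Q5: optional; avoid ad hominem
```

Keeping question 5 as an optional field mirrors the point above: it can simply be left empty when it can’t be answered without ad hominem arguments.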
Thoughts on categorizations:
- We should never call people liars unless there’s clear evidence of it, which will be very rare. Some outfits routinely accuse people of lying – it’s built into their categorization schemes. For example, PolitiFact has “Pants on fire” (taken from the expression “Liar, liar, pants on fire!”). The Washington Post is even worse in that every level of full or partial falsity (I think they have four) uses Pinocchios, which clearly implies lying. The New York Times “fact check” is the worst so far in that they headline it “The Lies of Donald Trump,” by which they mean everything he’s ever said that they dispute. Only in rare cases will we know if someone is lying, and it’s wildly unprofessional to accuse people of lying without evidence. Most of the time, all we’ll know is whether a claim is true or false, or somewhere in between, and honest error on the part of the claimant is always a possibility. It’s also worth contemplating the fact that politicians talk a lot – more than most people, in speeches, press conferences, interviews, etc. If I had to talk that much, I’d make some number of mistakes on arcane budget numbers and everything else.
- We know we need the categories True and False. Beyond that, I’m struggling with the in-betweens. If we had Partly True and Partly False as separate categories, I think bias (and disagreements) could creep in on whether one or the other was appropriate in a given case. It’s a glass half full/half empty situation, and I think people’s political leanings would influence which category they wanted to use.
- Mostly True seems like it would be useful in a lot of the fact checks I’ve seen, so perhaps a Mostly True and Mostly False setup would be good. What about the ones we think are halfsies? Partly True, Partly False, Half True, etc. I would go with Half True, and hope to avoid the conflict of a Partly True/Partly False schema.
- This, surprisingly, leaves us with Google’s suggested schema: True, Mostly true, Half true, Mostly false, and False. What do you think?
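If we did adopt that five-level scale, it would have the nice property of being a single ordered dimension, which any tooling could represent directly. A minimal sketch, assuming the Google-suggested labels (the class and method names here are mine, not anything agreed):

```python
from enum import IntEnum

# Sketch of the suggested five-level scale as an ordered enum.
# Higher value = more true; labels come from the schema discussed above.
class Rating(IntEnum):
    FALSE = 0
    MOSTLY_FALSE = 1
    HALF_TRUE = 2
    MOSTLY_TRUE = 3
    TRUE = 4

    def label(self) -> str:
        # Render the human-readable label, e.g. "Mostly false"
        return self.name.replace("_", " ").capitalize()
```

Because the scale is ordered, comparisons like “is this rating at least Half true?” fall out for free, which a Partly True/Partly False schema (two categories at the same level) would not give us.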
- Finally, a quick note: some claims are very complicated. Some claims lend themselves to proportionality, and some don’t. Really, there is an almost infinite number of types of claims. I’m building a taxonomy for them. All kinds of issues come up; for example, I saw some contributors being really strict about rounding numbers, and others not allowing rounding at all. That’s just one easy example of where we need a common standard, but some of the claims we deal with will strenuously test any schema we come up with.