Talk for Wiki Project "What categories should WikiTribune use for claims?"

Talk about this Project

  1. I also don’t think we can get rid of arbitrariness and interpretation completely. We can try to avoid it, but there will always be some degree of both.

  2. Just one quick thought: we shouldn’t let this discussion delay moving forward with anything. We can move forward now with what we’ve got, and modify it over time. This isn’t a decision that can’t change over time as we learn more!

  3. The challenge will be defining each rating so that they aren’t arbitrary. The “Mostly False” and “Mostly True” categories will be especially difficult.

    Here’s an example. Feel free to add onto it. I’ll use EDIT to include suggestions.

    False: An inaccurate claim.
    This rating includes instances when a source is wrongly cited (e.g. 56 percent of dogs have fleas, according to the National Canine Association… but the stat actually came from the American Association of Pet Lovers).
    This rating includes instances when a statistic is cited with a margin of error exceeding three percentage points.

    Mostly False: An inaccurate claim that includes a portion that is true.
    This rating includes instances when a cited statistic is within a margin of error of one to three percentage points.
    This rating includes instances when two statistics are cited, but each is picked from a different data set in a way that boosts the veracity of the claims.

    Mostly True: An accurate claim that lacks critical context.
    This rating includes instances when a general statement is true, but without necessary specifics, leaving the claim open to misinterpretation.

    True: An accurate claim.
    This rating is reserved for accurate claims that include necessary context.

    Undetermined: A claim that has no way of being empirically verified and is thus not appropriate to fact check.

    Opinion: A claim that reflects a personal preference or belief and is thus not appropriate to fact check.
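
    Purely as an illustration, here is a minimal sketch of how the margin-of-error thresholds in these draft definitions could be applied. The function name is hypothetical, and the behaviour below one percentage point is my assumption, not part of the draft.

    ```python
    def rate_by_margin_of_error(margin_of_error_points: float) -> str:
        """Sketch of the draft thresholds above: more than three percentage
        points -> False; one to three percentage points -> Mostly False.
        Anything tighter is left to the other criteria (an assumption;
        the draft does not say)."""
        if margin_of_error_points > 3:
            return "False"
        if margin_of_error_points >= 1:
            return "Mostly False"
        return "Not decided by margin of error alone"

    # Example: a statistic cited with a 2.5-point margin of error
    print(rate_by_margin_of_error(2.5))  # Mostly False
    ```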

    1. Personal opinions can also be right or wrong. I think personal *taste* should not be checked, because taste is a matter of preference and not objective fact. “Vanilla ice cream is better than chocolate” is an example, or “Jaws 2 is better than the original.”

      However, I’ve seen many people present claims as personal opinions when they are in fact either true or false. “Opinion” can mean a lot of things.

  4. Hi folks, there are many interesting suggestions presented here, but I think we can look at this issue from a different perspective:

    Sooner or later we are going to connect our fact-checking system with an external partner such as Facebook, Google, etc., and it seems we should adhere to their categorization. So the solution is straightforward: we should decide based on their recommendations. What do you think?

    1. I checked the edit provided by Simon. Yes, Google just suggests an example, so we are very flexible here.

    2. I had a phone call with the person at Facebook who is responsible for their efforts in this area. I can explain how it works with them and what they have recommended to us.

      Right now, once we are approved by the IFCN, we will gain access to a tool on Facebook where we can submit our fact-checking stories, which they will then distribute in various ways across their platform. An example of this, I think, would be: if we find a “fake news article” circulating and we debunk it, we would use their tool to submit our piece, and they would then make sure that people who receive the fake news article also receive our debunking article. How exactly that works is something I don’t know right now.

      In their tool they have a set of categories that they made up – pretty much the usual “false”, “true”, “mostly true”, etc. However, the way it works is that when we submit our story there, we use a dropdown box to pick which it is, and if we use a different scheme, we can just have a standard way to match it up to what they use.

      She also indicated that they are planning to support https://schema.org/ClaimReview , which was invented by Google but which is an “open source” standard that is being used more and more by others. The concept here is that we should, on the tech side, use this kind of markup, and follow Google’s guidelines on how to use it.

      I think this is basically good news in the sense that we don’t have to follow multiple standards.
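
      To make the markup side concrete, here is a minimal sketch of what a ClaimReview record could look like for one of our fact checks, using the publicly documented schema.org/ClaimReview fields. The URL, claim text, and rating values below are placeholders I made up for illustration, not an agreed WikiTribune format.

      ```python
      import json

      # Minimal sketch of schema.org/ClaimReview markup for a hypothetical fact check.
      # Field names follow the public schema.org/ClaimReview vocabulary; the URL,
      # claim, and rating values are illustrative placeholders only.
      claim_review = {
          "@context": "https://schema.org",
          "@type": "ClaimReview",
          "url": "https://www.wikitribune.com/article/EXAMPLE/",  # the fact-check article
          "claimReviewed": "43% of Americans can't afford the basics of life",
          "author": {"@type": "Organization", "name": "WikiTribune"},
          "datePublished": "2018-09-21",
          "reviewRating": {
              "@type": "Rating",
              "ratingValue": 3,          # position on a numeric scale
              "bestRating": 5,
              "worstRating": 1,
              "alternateName": "True but needs context",  # human-readable rating label
          },
      }

      # The JSON-LD would be embedded in the fact-check page inside a
      # <script type="application/ld+json"> tag.
      print(json.dumps(claim_review, indent=2))
      ```

      The "alternateName" field is where our own rating label would go, so whatever categories we settle on in this discussion could still be mapped onto this markup and onto Facebook’s dropdown.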

  5. Unconfirmed or Opinion
    Either one, I think, would be alright, depending on what it was. It just needs to be clear that it should not be taken as fact on its own.

  6. What about “Opinion”? Some “stated facts” are in fact no more than opinions, not verifiable. It does not mean they are false, just that they can’t be verified, whether they are about the future or about phenomena that are too complex to easily explain with facts.

    Some “Opinions” can obviously be rejected because there are studies after studies that disprove them, or at least have repeatedly failed to prove them. That’s the case, for example, with vaccinations and autism, where there are enough studies to claim the link is false.

    But other opinions are more difficult to disprove, whether because we don’t have enough information/hindsight yet, or because they are just too complex and have too many factors (economic crashes…), making it very difficult to weigh the relevance of the different elements.

    So we could outline that if a statement cannot be proven true or false, it is more in the realm of opinions. Some are just impossible to know (aliens, god…) and in the area of faith; I doubt they would have their place here. But other statements about economic events, disease spread, political situations, or causes of wars may be just opinions, at least until facts are unveiled.

    1. An example of this, where I think we nearly got it wrong, was the attempt to “fact check” Elon Musk’s claim that we are “most likely” living in a simulation. That’s a topic of very interesting and deep (or silly, depending on your perspective!) philosophical debate and not something that we can easily say is “right” or “wrong”.

      Similarly, claims about the future are tricky. If someone says “Trump’s economic plan will generate thousands of new jobs next year”, then opinions on that will be all over the map. We can and should probably do a good neutral job of explaining the arguments that people are putting forward, but we basically can’t give it a simple “true” or “false” rating that is anything more than yet another opinion.

      That’s different from a claim about the past. “Since Trump became President, US companies have moved over 1,000,000 jobs overseas” is a statement about the past, and it’s quite specific, and so we could look into whether it is true or not.

      1. Jimmy: I do agree strongly with your points. Actually, in our WT fact-checking guidelines we added a section explaining what to fact check, under the title “Choosing claims”. I think we need to add the examples you provided above to that section.

        However, I think that as we move on, we will all, as a community, develop a sense of what to check or not. Things take time.

        https://www.wikitribune.com/article/76053/

  7. I made these suggestions for fact-checks and some standardized language before (let me know if you think this warrants a stand-alone individual article, Mohamed).

    1. True
    2. Mostly true (or likely true for when it’s not 100% certain)
    3. Mostly false
    4. False (or likely false for when it’s not 100% certain to be false)
    5. Disclaimer: “True but needs context”
    6. Disclaimer: “True but misleading”
    7. Undetermined

    1-4 don’t need much explanation (I hope). However, I think we need 5, 6, and 7, which add nuance to our ratings and help standardize them. Nuance is good so long as it’s well justified in the explanation in the fact check.

    Claims with one publicly verifiable, uncontroversial source should be rated “mostly true” (unless there are counterclaims from equally legitimate sources). Claims with clear video evidence or multiple independent verifications from scientific or otherwise uncontroversial sources should be rated “true.”

    “Needs context”: Sometimes a claim can be strictly speaking false but the person misspoke or made a claim that needs more context for evaluation. One recent fact-check here illustrates this.

    There was the fact-check on Bernie Sanders’s claim that “43% can’t afford basics” of life. He made two other claims alongside it in his tweet, and those two claims explicitly said “Americans” and “adults.” I found the study he used for those two claims, and it surveyed American adults about their financial situations.

    So one might assume the 43% are American adults. Nope. He got the 43% from another study, which surveyed the financial situation of households, not individual people. So while the claim is true of households, because the context made it seem like it was about “American adults,” I rated it “not completely accurate.” I didn’t want to say it was false because I figured that language was a little too strong. I also didn’t want to say it was “mostly false” because it is true (at least according to one source) once put in the right context.

    I usually give people charitable interpretations (especially if they are speaking off-the-cuff or if English isn’t their primary language, which isn’t the case for Sanders, but still). So that claim needed more context, and looking back, I would rate it “needs more context”.

    One can also make true but misleading claims. For example, say someone is asked whether they support some public policy X. They say yes because states with policy X have lower cancer rates, which is true as shown by high-quality studies.

    However, let’s say that all the high-quality studies show that this is merely a spurious correlation and that it’s not policy X that caused the lower cancer rate but some other factor that causes both (say the state’s economic development level). In other words, states with more financial resources are able to curb cancer and to implement policies like X.

    In this case, what the person said was true, but it’s misleading because policy X will not cause any lowering of cancer in a state. It’s merely a statistical correlation.

    Some claims are hard or impossible to verify at the moment, or have differing accounts from equally legitimate sources. These should be rated “undetermined” or something similar in meaning.
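
    Purely as an illustration of how these ratings and rules of thumb could be written down, here is a minimal sketch. The names and the decision logic are my own reading of the rules above, not an agreed standard.

    ```python
    # Sketch of the seven proposed ratings plus the rough evidence-based rules
    # of thumb from this comment. Names and logic are illustrative assumptions.
    RATINGS = [
        "True",
        "Mostly true",            # or "Likely true" when it's not 100% certain
        "Mostly false",
        "False",                  # or "Likely false" when it's not 100% certain
        "True but needs context",
        "True but misleading",
        "Undetermined",
    ]

    def suggest_rating(independent_uncontroversial_sources: int,
                       has_clear_video_evidence: bool,
                       has_equally_legit_counterclaims: bool) -> str:
        """Rule of thumb from above: one public, uncontroversial source ->
        'Mostly true'; clear video evidence or multiple independent
        verifications -> 'True'; conflicting or missing evidence ->
        'Undetermined'."""
        if has_equally_legit_counterclaims or independent_uncontroversial_sources == 0:
            return "Undetermined"
        if has_clear_video_evidence or independent_uncontroversial_sources >= 2:
            return "True"
        return "Mostly true"

    # Example: one uncontroversial source, no video, no serious counterclaims
    print(suggest_rating(1, False, False))  # Mostly true
    ```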

    Any suggestions on improvement to these?

    1. Unconfirmed might be better than undetermined, in some cases. But this looks like a decent set of options to start with.

      1. I second this. I was going to suggest “undetermined”, but “unconfirmed” is better.

        However, I’m afraid some ambiguity could arise with this classification. That is, there will be both statements that cannot be confirmed because they simply are not verifiable, and statements that WikiTribune has simply not yet verified.

    2. I like the idea of “True but misleading”; it just needs to be more concise in order to work as a rating.

      Ultimately, I think the trick will be defining each rating, which will involve agreeing upon the details.

      I’ll include my definitions above.

  8. I took the initiative and started this WT draft. Please help us by looking at the fact-checking organizations: just add a link to an organization and then write one line to explain its method. Then we can all work bottom-up and either choose which method to adopt or create our own.
