Hi all!

As many of you have noticed, a number of Lemmy.World communities have introduced a bot: @MediaBiasFactChecker@lemmy.world. It was introduced because modding can be pretty tough work at times and we are all just volunteers with regular lives. The bot has been helpful, and we would like to keep it around in one form or another.

The !news@lemmy.world mods want to give the community a chance to voice their thoughts on some potential changes to the MBFC bot. The concerns we have heard tend to fall into a few buckets. The most common is that the bot’s comment is too long. To address this, we’ve implemented a spoiler tag so that users need to click to see more information. We’ve also cut the wording about donations that people argued made the bot feel like an ad.
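
For anyone curious how the collapsed comment works mechanically, here is a minimal sketch, assuming the bot wraps its details in Lemmy’s `::: spoiler` markdown block; the field names and wording below are illustrative, not the bot’s actual output:

```python
def collapsed_comment(source, bias, credibility):
    """Build a bot comment whose details sit behind a click-to-expand spoiler."""
    # Lemmy renders '::: spoiler <title>' ... ':::' as a collapsible section.
    return (
        f"::: spoiler Media Bias Fact Check: {source}\n"
        f"**Bias:** {bias}\n\n"
        f"**Credibility:** {credibility}\n"
        ":::"
    )

print(collapsed_comment("Example News", "Left-Center", "High"))
```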

Another common concern is with MBFC’s definitions of “left” and “right,” which tend to be influenced by the American Overton window. Similarly, some have said that MBFC’s process for rating reliability and credibility feels opaque and/or subjective. To address this, we have discussed creating our own open-source system for scoring news sources. We would essentially start with third-party ratings, including MBFC’s, and create an aggregate rating. We could also open a path for users to vote, so that any rating would reflect our instance’s opinion of a source. We would love to hear your thoughts on this, as well as suggestions for sources that rate news outlets’ bias, reliability, and/or credibility. Feel free to use this thread to share other constructive criticism about the bot too.
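
To make the aggregation idea a bit more concrete, here is a minimal sketch of how third-party ratings and community votes could be blended into a single score. The rater names, scales, weights, and votes are all placeholder assumptions, not anything we have actually built:

```python
from statistics import mean

# Hypothetical reliability scores normalised to a 0-1 scale, one per rater.
third_party_scores = {
    "MBFC": 0.80,
    "Ad Fontes Media": 0.70,
    "Wikipedia RSP": 0.75,
}

# Hypothetical community votes: +1 means "reliable", -1 means "unreliable".
community_votes = [1, 1, -1, 1]

def aggregate_score(third_party, votes, community_weight=0.3):
    """Blend the mean third-party score with the community's vote share."""
    expert = mean(third_party.values())
    if not votes:
        return expert
    # Map the average vote from [-1, 1] onto the same 0-1 scale as the raters.
    community = (mean(votes) + 1) / 2
    return (1 - community_weight) * expert + community_weight * community

print(f"Aggregate reliability: {aggregate_score(third_party_scores, community_votes):.2f}")
```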

79 points

My personal view is to remove the bot. I don’t think we should be promoting one organisation’s particular views as an authority. My suggestion would be to replace it with a pinned post linking to useful resources for critical thinking and analysing news. Teaching to fish vs giving a fish kind of thing.

If we are determined to have a bot like this as a community, then I would strongly suggest at the very least removing the bias rating. The factuality rating is at least based on an objective measure of failed fact checks, which you can click through to see. Even this still has problems, though: sometimes corrections or retractions by the publisher are taken into account and sometimes not, potentially leaving the reader with a false impression of the source’s reliability.

The bias rating, however, is completely subjective, and sometimes the claimed reasons for a rating actually contradict themselves or other third-party analysis. I made a thread on this in the support community, but TL;DR: see if you can work out the specific reason for the BBC’s left-centre bias rating. I personally can’t. Is it because they posted a negative-sounding headline about Trump once, or is it biased story selection? What does biased story selection even mean, and how is it measured? This is troubling because, in my view, it casts doubt on the reliability of the whole system.

I can’t see how this helps advance the goal (and it is a good goal) of making people aware of source bias when, in effect, we are simply adding another bias to contend with. I suspect it’s actually an intractable problem, which is why I suggest linking to educational resources instead. In my home country critical analysis of news is a required course, but that’s probably not the case everywhere, and honestly I could use a refresher myself if good resources exist for that.

Thanks to those involved in the bot, though, for their work and for being open to feedback. The goal is a good one; I just don’t think this solution really helps, but I’m sure others have different views.

5 points

Removing the bias rating might be enough indeed.

17 points

Nah, even credibility is subjective with MBFC.

28 points

The bot calls Al Jazeera “mixed” on factual reporting (a rating normally reserved for explicit propaganda outlets), and if you look at the details, they don’t even pretend it has anything to do with Al Jazeera’s factual record. It’s basically: okay, they’re not lying, but they’re so against Israel that we have to say something bad about them.

-1 points

One issue with poor media literacy is that I don’t think people will go out of their way to improve it on their own just because of a pinned post. We could include a link to a resource like that in the bot’s comment, though.

Do you think that the bias rating would be improved by aggregating multiple fact-checkers’ ratings into one score?

13 points

Yeah, it’s definitely a good point, although I would argue that people who aren’t interested in improving their media literacy are precisely the ones who shouldn’t be exposed to a questionable bias rating, as they are the most likely to take it at face value and be misled.

The idea of multiple bias sources is an interesting one. It’s less about quantity than quality, though, I think. If there are two organisations that use thorough and consistent rating systems, it could be useful to have both. I’m still not convinced it’s even a solvable problem, but maybe I’m just being too pessimistic and someone out there has come up with a good solution.

Either way, I appreciate that it’s a really tough job to come up with a solution here, so best of luck to you, and thanks for reading the feedback.

75 points

One problem I’ve noticed is that the bot doesn’t differentiate between news articles and opinion pieces. One of the most egregious examples is the NYT. Opinion pieces aren’t held to the same journalistic standards as news articles and shouldn’t be judged for bias and accuracy in the same way as news content.

I believe most major news organizations include the word “Opinion” in titles and URLs, so perhaps the bot could key off that to label these pieces appropriately. I don’t expect you to judge the bias and accuracy of each opinion writer, but simply labeling them with something like “Opinion pieces are not required to meet accepted journalistic standards and bias is expected” would go a long way.
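
Purely as an illustration of that keying-off, here is a rough sketch; the hint words, URL pattern, and the `looks_like_opinion` helper are assumptions, since outlets format these things differently:

```python
import re
from urllib.parse import urlparse

# Words that often show up in opinion-section URLs (illustrative, not exhaustive).
OPINION_HINTS = ("opinion", "editorial", "op-ed", "commentary")

def looks_like_opinion(url, title):
    """Heuristic check for opinion pieces based on the URL path and the title."""
    path = urlparse(url).path.lower()
    if any(f"/{hint}" in path for hint in OPINION_HINTS):
        return True
    return bool(re.match(r"^\s*opinion\s*[|:-]", title.lower()))

# The bot could then post a label instead of a bias/accuracy rating.
if looks_like_opinion("https://www.nytimes.com/2024/01/01/opinion/example.html",
                      "Opinion | An example headline"):
    print("Opinion pieces are not required to meet accepted journalistic "
          "standards and bias is expected.")
```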

18 points

Thanks for this. As a mod of /c/news, I hadn’t really thought about that. We don’t allow opinion pieces, but this is very relevant if we roll out a new bot for all the communities that currently use the MBFC bot.

13 points

No problem. It specifically came to my attention about a week ago on this post, where the bot reported on an opinion piece as if it were straight news.

BTW, I actually do appreciate the bot and think it’s doing about as well as it can given the technical limitations of the platform.

6 points

Hi. I have a suggestion:

Try to make it more clear that this is not a flawless rating (as that is impossible).

Ways to implement:

  • Make sure the bot says something along the lines of “MBFC rates X news as Y” and not “X news is Y”.
  • Add a collapsible caveat at the bottom that says something along the lines of “MBFC is not flawless. It has an American-centric bias and is not particularly clear on its methodology, to the point where Wikipedia deems it unreliable; however, we think it is better to have this bot in place as a rough estimate, to discourage posting from bad sources.”
  • If possible, add other sources, like: “MBFC rates the Daily Beast as mostly reliable, Ad Fontes Media rates it as unreliable, and Wikipedia says it is of mixed reliability” (a rough sketch of this combined wording follows the list).
  • Remove the left-right ratings. We already have reliability and quality ratings, which are much more useful. The left-right rating is frankly poorly done and all over the place, and honestly doesn’t serve much purpose.
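
As a rough illustration of the first three points, here is a sketch of a comment that phrases each rating as “X rates Y as Z,” pulls from more than one rater, and ends with a caveat. The rater names and verdicts are placeholders, not real ratings:

```python
# Hypothetical verdicts from several raters; none of these values are real ratings.
ratings = {
    "MBFC": "mostly reliable",
    "Ad Fontes Media": "unreliable",
    "Wikipedia": "of mixed reliability",
}

CAVEAT = ("No rater is flawless; treat these as rough estimates, "
          "not verdicts on any individual article.")

def build_comment(source, verdicts):
    """Phrase each line as 'rater rates source as X', never 'source is X'."""
    lines = [f"- {rater} rates {source} as {verdict}" for rater, verdict in verdicts.items()]
    return "\n".join([f"How raters see {source}:"] + lines + ["", f"*{CAVEAT}*"])

print(build_comment("the Daily Beast", ratings))
```
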
7 points

This contributes significantly to the noise issue most people complain about

6 points

Interesting that people say opinion pieces should not be held to the same standard. I personally see such pieces contributing to fake news going around. Shouldn’t a platform with reach be held accountable for wrong information, even when it’s hidden behind an opinion piece?

8 points

It’s not a question of “should”: an opinion piece is rhetoric, not reporting. You can fact-check parts of it sometimes, but you functionally can’t hold it to the same standards as a regular news article. I agree that this can sometimes lead to “alternative facts” and disingenuous arguments, but the only other option is to forbid publishing them, which is obviously an infringement of First Amendment rights. It’s messy, and it can lead to people being misinformed, but it’s what we’re stuck with.

4 points

Can you explain how a piece with a title like “Helldivers is awesome and fun” can be judged at all for factual accuracy?

4 points

The NYT recently ran an opinion piece in which the author was pretty clearly using the NYT, along with other outlets, as part of a voter demobilization tactic, and lied about not voting. The NYT was skewered on Twitter and had to alter the piece after the fact. It seems like some basic fact checking would have been useful in that situation. Or really, just any amount of critical thought on the part of the NYT in general.

2 points

This. Otherwise op-eds get a free pass to launder opinions the paper wants to publish, but can’t.

65 points

You don’t need every post to have a comment basically saying “this source is OK”. Just comment that the source is unreliable on posts with unreliable sources. The definition of what is left or right is so subjective these days that it’s pretty useless. Just don’t bother.
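
If the mods went that route, the logic change would be small. A minimal sketch, assuming the bot already has some reliability label in hand (the labels and the cutoff are made up):

```python
# Illustrative reliability labels, ordered from worst to best; not MBFC's actual scale.
RELIABILITY_ORDER = ["very low", "low", "mixed", "mostly factual", "high", "very high"]

def should_comment(reliability, worst_allowed="mixed"):
    """Post a warning only when the source's rating is at or below the cutoff."""
    order = RELIABILITY_ORDER.index
    return order(reliability.lower()) <= order(worst_allowed)

for label in ("high", "mixed", "very low"):
    print(label, "->", "comment" if should_comment(label) else "stay silent")
```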

17 points

I agree with that. Having a warning message when the source is known to be extremely biased and/or unreliable is probably a good thing, but it doesn’t need to be in every single thread.

2 points

If a source is that bad, it should be banned. I think bot comments on only some posts would be inconsistent.

60 points

I think it should be removed

48 points

My personal view is that the bot provides a net negative, and should be removed.

Firstly, I would argue that there are few, if any, users whom the bot has helped avoid misinformation or a skewed perspective. If you know what bias is and how it influences an article then you don’t need the bot to tell you. If you don’t know or care what bias is then it won’t help you.

Secondly, the existence of the bot implies that sources can be reduced to true or false or left or right. Lemmy users tend to deal in absolutes of right or wrong. The world exists in the nuance, in the conflict between differing perspectives. The only way to mitigate misinformation is for people to develop their own skeptical curiosity, and I think the bot is more of a hindrance than a help in this regard.

Thirdly, even if it’s only misleading 1% of the time, it’s doing harm. I don’t know how sources can be given a single rating when they often vary between articles. It’s so reductive that it’s misleading.

As regards an open database of bias, it doesn’t solve any of the issues listed above.

In summary, we should be trying to promote a healthy sceptical curiosity among users, not trying to tell them how to think.

-8 points

Thanks for the feedback. I have had the same thought about it feeling like mods trying to tell people how to think, although I think crowdsourcing an open-source solution might make that slightly better.

One thing that’s frustrating with the MBFC API is that it reduces “far left” and “lean left” to just “left.” I think that gets to your point about binaries, but it is an MBFC issue, not an issue with how we have implemented it. Personally, I think it does better on the credibility/reliability side, since it does have a range there.

8 points

That’s perhaps a small part of what I meant about binaries. My point is that the perspective of any given article is nuanced, and categorising bias implies that perspectives can be reduced to one of a few labels.

For example, take a contentious issue like abortion. Collect 100 statements from 100 people on the various related issues: health concerns, ethics, when an embryo becomes a fetus, fathers’ rights. Finally, label each statement as either pro-choice or pro-life.

For somebody trying to understand the complex issues around abortion, the labels are not helpful, and they imply that the entire argument can be reduced to a binary choice. In a word, it’s reductive. It breeds a culture of adversity rather than one of understanding.

In addition, I can’t help but wonder how much “look at this cool thing I made” is present here. I love playing around with web technologies and code, and love showing off cool things I make to a receptive audience. Seeking feedback from users is obviously a healthy process, and I praise your actions in this regard. However, if I were you I would find it hard not to view that feedback through the prism of wanting users to find my bot useful.

As I started off by saying, I think the bot provides a net negative, as it undermines a culture of curious scepticism.

-7 points

Just a point of correction: it does distinguish between grades. There are “Center-Left,” “Left,” and “Extreme Left.”


News

!news@lemmy.world


Welcome to the News community!

Rules:

1. Be civil

Attack the argument, not the person. No racism/sexism/bigotry. Good-faith argumentation only; accusing another user of being a bot or paid actor is not arguing in good faith. Trolling is uncivil and is grounds for removal and/or a community ban.


2. All posts should contain a source (URL) that is as reliable and unbiased as possible, and must only contain one link.

Obvious right- or left-wing sources will be removed at the mods’ discretion. We have an actively updated blocklist, which you can see here: https://lemmy.world/post/2246130. If you feel any website is missing, contact the mods. Supporting links can be added in comments or posted separately, but not in the post body.


3. No bots, spam or self-promotion.

Only approved bots, which follow the guidelines for bots set by the instance, are allowed.


4. Post titles should be the same as the article used as source.

Posts whose titles don’t match the source won’t be removed outright, but the autoMod will notify you, and if your title misrepresents the original article, the post will be deleted. If the site changed its headline, the bot might still contact you; just ignore it, and we won’t delete your post.


5. Only recent news is allowed.

Posts must be news from the most recent 30 days.


6. All posts must be news articles.

No opinion pieces, listicles, editorials, or celebrity gossip are allowed. All posts will be judged on a case-by-case basis.


7. No duplicate posts.

If a source you used was already posted by someone else, the autoMod will leave a message. Please remove your post if the autoMod is correct. If the post that matches your post is very old, we refer you to rule 5.


8. Misinformation is prohibited.

Misinformation / propaganda is strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel that your post has been removed in error, credible sources must be provided.


9. No link shorteners.

The autoMod will contact you if a link shortener is detected; please delete your post if it is right.


10. Don't copy the entire article into your post body

For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.

Community stats

  • 15K monthly active users
  • 6.4K posts
  • 108K comments