Hello World! As many of you have probably noticed, there is a growing problem on the internet when it comes to undisclosed bias in both amateur and professional reporting. While not every outlet can be like C-SPAN or Reuters, we also believe that it’s impossible to remove the human element from the news, especially when it concerns, well, humans.
To this end, we’ve created a media bias bot, which we hope will keep everyone informed about the WHO, not just the WHAT, of posted articles. The bot uses Media Bias/Fact Check to add a simple reply showing a source’s bias, which we feel is especially important with the US election coming up. It will also link to Ground.News, which we feel is a great way to see the WHOLE coverage of a given article and/or topic.
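Conceptually, the bot’s reply logic is quite simple. Here’s a minimal sketch (a simplified illustration, not the production code; the lookup table, function name, and reply wording are placeholders):

```python
# Simplified sketch, not the production bot: the lookup data, function
# name, and reply wording here are illustrative placeholders.
from urllib.parse import urlparse

# Ratings keyed by an outlet's domain, as found on Media Bias/Fact Check.
MBFC_RATINGS = {
    "example-news.com": {"bias": "Left-Center", "factual": "High"},
}

def build_reply(article_url: str) -> str | None:
    """Build the bot's reply for a posted article, or None if the outlet is unknown."""
    domain = urlparse(article_url).netloc.removeprefix("www.")
    rating = MBFC_RATINGS.get(domain)
    if rating is None:
        return None  # unknown outlet: better to say nothing than to guess
    return (
        f"Media Bias/Fact Check rating for {domain}: "
        f"bias {rating['bias']}, factual reporting {rating['factual']}. "
        f"See full coverage of this story at https://ground.news/"
    )

print(build_reply("https://www.example-news.com/some-story"))
```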
As always, feedback is welcome; this is an active project which we really hope will benefit the community.
Thanks!
FHF / LemmyWorld Admin team 💖
I think having this post isn’t a great idea, because you’re just assuming the website’s biases are legitimate. At the very least, there need to be a lot of warnings in the bot’s posts about the website’s biases and the methodology it uses, so readers can come to their own conclusions.
Just looking over the methodology, it’s clear that it has its own biases:
American Bias
The website itself says its distinctions of left and right are US-based, which is very skewed relative to the rest of the world. There should be a disclaimer, or it shouldn’t be used in any world news communities.
Centrist Bias
The website follows the idea of “enlightened centrism”: if it determines a website has a left/right lean (again, arbitrary), that lean affects the factual ratings of the source.
Examples of this: FAIR only gets the 2nd-highest rating despite never having failed a fact check.
The Intercept gets only a “Mostly Factual” rating (3rd highest) despite MBFC’s own admission that it has never failed a fact check.
Despite my personal opinion that using a US-based left/right bias criterion is pointless, I’d feel better if it were at least kept in its own section. But when you allow it to affect the factual rating of the source, it’s just outright wrong. The factual accuracy of the website should be the sole thing that affects that rating.
Questionable Fact Checking
Even just spot-checking some of their ratings raises doubts about the website’s credibility.
The ADL is rated High (2nd highest) and was not found to have failed any fact checks.
Yet the ADL was found to be so unreliable in its reporting on the Israel-Palestine conflict that it is considered an unreliable source by Wikipedia.
“Wikipedia’s editors declared that the Anti-Defamation League cannot be trusted to give reliable information on the Israel-Palestine conflict, and they overwhelmingly said the ADL is an unreliable source on antisemitism.”
Maybe Wikipedia editors are a good arbiter of truth and maybe they aren’t, but as people can see, there isn’t a consensus, so by choosing Media Bias/Fact Check you’re explicitly choosing to align your “truth” with this website’s biases.
This is a really well-reasoned response… Which probably means the mods will ignore it
I’ll add UN Watch to the list.
MBFC rates it as “highly credible” despite it publishing laughably bad hit-pieces on UN officials who openly criticize Israel.
I did a debunk on one of their articles that was removed from this very community due to disinformation, but I’ve posted a screenshot of my critique here for anyone who is interested.
This is literally in bold at the top of the page:
Fabricated work.
Is there anything that’s more of a capital crime in journalism than fabricating quotes? Surely we can all agree that publishing fiction as news is the opposite of factual reporting? They may not have failed a fact check in the last five years, but it just isn’t possible for them to have published fabricated news without ever failing at least one. By their own admission, they failed five in that incident alone.
I’m not going to die on the Intercept hill here. I’m fine with the fact that, even though they fired the person, it’s a stain on their record, so sure, let’s say that rating is fine.
It was one of the first three I checked, so I’m sure I’ll find more that are problematic when I have a chance to look, because it’s their methodology that’s biased. Also, the other two I pointed out are clearly not correct.
Got rebuttals for any of my criticisms about the methodology?
I do!
I think the importance of American bias is overstated. What matters is that they’re transparent about it. That bias also impacts the least important thing they track. People often fixate on that metric when it has little impact on other metrics or on the most important question for this community: “how likely is it that this source is telling the truth?” Left and right are relative terms that change drastically over time and space; they even mean different things at local and national levels within the same country. It’s not really an MBFC problem; it’s a the-world-is-complicated problem that isn’t easily solved. And it’s not like they’re listing far-right publications as far-left. Complaints are almost always like, “this source is center, not center-left!” These are small problems in the murky middle that shouldn’t be surprising or unexpected.
It’s also capturing something that happens more at the extremes where publications have additional goals beyond news reporting. Ignoring Fox’s problem with facts/misinfo, it doesn’t really bother me that they’re penalized for wanting to both report the news and promote a right-wing agenda. Promoting an agenda and telling the truth are often in conflict (note Fox’s problem with facts/misinfo). CBC News, for example, probably should have a slightly higher score for having no agenda beyond news reporting.
It might matter more if it impacted the other metrics, but it doesn’t really. Based on MBFC’s methodology, it’s actually impossible for editorial bias alone to impact the credibility rating without additional problems: you can lose a maximum of 2 points for bias, but a source must lose 5 to be rated “medium credibility”. I don’t know why FAIR is rated highly factual (and I’d love for them to be a bit more transparent about it), but arguing that bias hurt a source that is still rated both highly factual and highly credible feels like less than a death blow. If it’s a problem, it seems like a relatively small one.
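To make that concrete, here’s a toy model of the arithmetic (the two constants reflect MBFC’s published methodology as I read it; everything else is my own illustration):

```python
# Illustration only: the two constants come from MBFC's published
# methodology as I read it; the function itself is a toy model.
MAX_BIAS_PENALTY = 2         # the most points bias alone can cost a source
MEDIUM_CREDIBILITY_LOSS = 5  # points a source must lose to drop a tier

def credibility_tier(points_lost: int) -> str:
    return "medium credibility" if points_lost >= MEDIUM_CREDIBILITY_LOSS else "high credibility"

# A source penalized only for bias, even at the maximum, keeps its rating:
print(credibility_tier(MAX_BIAS_PENALTY))      # high credibility (2 < 5)
# Only additional problems, like failed fact checks, can drop the tier:
print(credibility_tier(MAX_BIAS_PENALTY + 3))  # medium credibility (5 >= 5)
```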
MBFC also isn’t an outlier compared to other organizations. This study looked at 6 bias-monitoring organizations and found them basically in consensus across thousands of news sites. If they had a huge problem with bias, it’d show in that research.
On top of that, none of this impacts this community at all. It could be a problem if the standard here were ‘highest’ ratings exclusively, but it isn’t, and no one’s proposing that it should be. I post stories from the Guardian regularly without a problem, and they’re rated mixed factual and medium credibility for failing a bunch of fact checks, mostly in op-eds (and I think the Guardian is a great, paywall-free paper that should fact-check a bit better).
So I think the things you point out are well buffered by their methodology and by not using the site in a terrible, draconian way.
It affects the overall credibility rating of the source, so how is that the least important thing? They also seem to let it affect the factual reporting rating, despite not clearly stating that in the methodology.
That’s only true if you frame it as “a great source can’t have its credibility rating lowered”. A source that isn’t great on facts can still get a high credibility rating if it’s deemed centrist enough, which again is arbitrary, based on (effectively) one guy’s personal opinion.
High Credibility Score Requirement: 6

Example 1: Factual Reporting Mixed (1) + No left/right bias (3) + Traffic High (2) = 6

Example 2: Factual Reporting Mostly Factual (2) + No left/right bias (3) + Traffic Medium (1) = 6
See how weighing credibility on a (skewed) left/right bias metric waters this down? Both of these examples would get high credibility, as the sketch below shows.
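Here’s a quick sketch of that arithmetic, using the point values from the examples above (my reading of their methodology, not their actual code):

```python
# Point values taken from the two examples above; this is my reading of
# their methodology, not MBFC's actual implementation.
FACTUAL_POINTS = {"Mixed": 1, "Mostly Factual": 2}
BIAS_POINTS = {"No left/right bias": 3}
TRAFFIC_POINTS = {"Medium": 1, "High": 2}

HIGH_CREDIBILITY_THRESHOLD = 6

def is_high_credibility(factual: str, bias: str, traffic: str) -> bool:
    score = FACTUAL_POINTS[factual] + BIAS_POINTS[bias] + TRAFFIC_POINTS[traffic]
    return score >= HIGH_CREDIBILITY_THRESHOLD

# Example 1: a source with *mixed* factual reporting still clears the bar,
# because the "no lean" bias points paper over the factual deficit.
print(is_high_credibility("Mixed", "No left/right bias", "High"))             # True (1+3+2)
# Example 2: mostly factual with medium traffic lands in the same tier.
print(is_high_credibility("Mostly Factual", "No left/right bias", "Medium"))  # True (2+3+1)
```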
That’s a fair point, and I did state in my original post that, despite my own feelings, I’d be open to something like this if the community had been more involved in the process of choosing one (or deciding one is necessary), and if the bot’s post clearly called out its biases, maybe with an explanation of its methodology and the inherent risks in it.
The way it’s been pushed by the mods without first polling the community, and the reaction to criticism (some of which was constructive), is really my main issue here.
The impact either way is slight. I’m sure you could find a few edge cases you could make an argument about, because no methodology is perfect, but each outlier represents a vanishingly small (~0.01%) amount of their content. When you look at rigorous research on the MBFC dataset, though, the effect just isn’t really there. Here’s another study that concludes that the agreement between bias-monitoring organizations is so high that it doesn’t matter which one you use. I’ve looked, and I can’t find research that finds serious bias or methodological problems. Looking back at the paper I posted in my last comment, consensus across thousands of news organizations is just way too high to be explainable by chance. If it were truly arbitrary, as people often argue, MBFC would be an outlier. If all the methodologies were bad, the results would be all over the map, because there are many more ways to make a bad methodology than a good one. What the research says is that if one methodology is better than the others, it isn’t much better.
Again, I think you make a really good argument for why MBFC and sites like it shouldn’t be used in an extreme, heavy-handed way. But what matters is whether it has enough precision for our purposes. Like, if I’m making bread, I don’t need a scale that measures in thousandths of a gram. A gram scale is fine. I could still churn out a top-shelf loaf with a scale that measures in 10-gram units. This bot is purely informational. People are reacting like it’s a moderation change, but it isn’t; MBFC continues to be one resource among many that mods use to make decisions. Many react as though MBFC declares a source either THE BEST or THE WORST (I think a lot of those folks aren’t super great with nuance), but what it mostly does is say ‘this source is fine, but there’s additional info or context worth considering.’ Critics often get bent out of shape about the ranking but almost universally neglect the fact that, if you click the link, there’s a huge report on each source with detailed info about its ownership history, funding model, publishing history, biases, and the press freedom of the country it’s in. Almost every time, there are reasonable explanations for the rankings in the report. I have not once seen someone say, like, ‘MBFC says this is owned by John Q. Newspaperman but it’s actually owned by the Syrian government,’ or ‘they claim there was a scandal with fabricated news but that never happened.’ Is there a compelling reason why we’re worse off knowing that information? If you look at the actual reports for Breitbart and the Kyiv Independent, is there anything in there that we’re better off not knowing?
Like I kinda said in my last paragraphs, you’ve got fair points that it may be good enough for what it’s being used for here (despite its clear biases), since it’s not being used to disallow posts. Although other commenters have said it has a pro-Zionist bias as well, which is honestly more concerning than the things I’ve pointed out. I haven’t had time to check beyond the ADL one.
Overall, my main issue is that the community wasn’t really asked whether a bot was desired, which one should be used, how it should be used, etc. Because of that, and the lack of a good response from the poster, I’ve already decided to follow other world news communities instead of this one.
A standard of factuality needs to include a provision against emotionally loaded, manipulative language. Otherwise you can pump out unlimited amounts of propaganda with full factuality simply by “asking questions”.
I won’t disagree that there should be a ranking for the use of loaded language, but combining it with the factuality ranking twists what that ranking means, since the average person is going to read it as a measure of how accurate the facts are.
It should be its own rating, separate from factuality. Again, if we’re going to have a bot like this, it needs clear disclaimers, and ideally we should find a better source than this one.
I disagree. I think emotional language is fundamentally the opposite of real objectivity, and cannot be honestly acknowledged as factual in any confirmable way.
It has no place in objective discussions, and employing it in any way, shape or form makes one deserve objectivity demerits.
edit: And objectivity and factuality are synonyms.