This is one of those findings that makes a lot of sense when you take a broader view and consider the overall impacts.
Facebook's fact-checking labels on disputed stories are a good way to reduce the spread of misinformation, with prominent tags on fact-checked posts giving users pause before they share those same stories within their own networks.

Studies have shown that these labels do work - they do reduce people's propensity to share flagged content. That's obviously a good outcome, but a new analysis conducted by MIT has found a potentially significant flaw in Facebook's fact-check labeling system.
While disputed stories that are labeled as such do see fewer shares, not every story on Facebook is fact-checked - not every link and report uploaded to The Social Network is subject to such scrutiny. That's a problem, because what this new study has found is that, by comparison, any story without a fact-check label can take on a higher level of perceived authority, with users assuming that it's accurate simply because they're not being told otherwise.
As per the report:
"In Study, we find that while warnings do lead to a modest reduction in perceived accuracy of false headlines relative to a control condition (particularly for politically concordant headlines), we also observed the hypothesized implied truth effect: the presence of warnings caused untagged headlines to be seen as more accurate than in the control. In Study, we find the same effects in the context of decisions about which headlines to consider sharing on social media."
To reach this conclusion, the MIT researchers showed 6,000 participants variations of Facebook posts, some with fact-check labels and some without. Some of the false stories included were tagged as disputed, while others were left unmarked.
As explained by Fast Company:
"Their findings were disquieting. Yes, warning labels did work to flag fictitious content. For instance, when no true or false labels were used, people considered sharing 29.8% of all false stories - yet when false stories were labeled as false, people only shared 16.1% of them. This reduced figure sounds promising, but the twist is that the unmarked false stories were shared 36.2% of the time. That means we are more gullible to share these fake news stories when some are marked and others aren’t."
In summary, Facebook's disputed content labels reduce people's propensity to share misinformation by around 13 percentage points, but increase sharing of untagged fake news by around 6 points. So it's kind of six of one, half a dozen of the other - and with so many fake reports circulating throughout The Social Network, those labels could actually be worse for addressing the problem than not having them at all, depending on what individual users are exposed to.
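To spell out the arithmetic behind those figures, treating the Fast Company numbers as share rates and reading the differences as percentage points (a rough interpretation, not the study's own framing):

29.8% - 16.1% = 13.7 points fewer shares for false stories carrying a warning label
36.2% - 29.8% = 6.4 points more shares for false stories left unlabeled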
As noted, it makes sense when you think about it in those terms. But it's not great for The Social Network.
Of course, how much this matters also depends on how significant you consider fake news to be. In a recent memo to Facebook staff, Facebook's head of VR Andrew Bosworth played down the impacts of fake news, particularly with respect to political manipulation. Then again, Facebook has been ramping up its efforts to catch coronavirus misinformation of late, so the company clearly sees significant potential for harm in the malicious spread of false reports.
Within this, fact-checks and labels do seem like they can still play a role, but when only some stories are checked and not others, you can see how people would come to treat the presence of a warning as the definitive marker of falsehood - and the absence of one as an implicit endorsement.
How you address that is more complex. Do you just abandon the labels altogether? Subject more content to fact-checking? At Facebook's scale, it's impossible to run every single link through the fact-check sieve. Maybe, then, Facebook needs to focus its resources on the most potentially damaging untruths to reduce their spread - but again, that could still lead to complications.
The MIT researchers suggest that Facebook could reduce the distribution of content from publishers that are repeatedly flagged for false reports, or look to highlight more credible sources to drown out the rubbish. That could be part of Facebook's thinking with its coming News tab, which is currently being tested among some users.
It's an interesting consideration within the broader debate, though one that may not be so easy to resolve.