I have said it many times: I agree with the countless studies and articles showing that Wikipedia is not an accurate source of information. Many articles contain accurate information, but everything is written by volunteer editors who must interpret a source and put it into their own words when adding it to a Wikipedia article. Yet even though Wikipedia has inaccuracies, the majority of the population still treats it as an authority. The same people who say Wikipedia is inaccurate click on it first when searching for something in Google, and it sits in the top 10 most visited websites in the world. So Wikipedia is kind of like the United States legal system: badly flawed, but the best we have.
As a law graduate, I love to point out fallacies, sometimes just for fun, but other times to debunk statements that people will take as fact even though they clearly are not. So, after reading a recent article in Mashable about a new algorithm said to assess the quality of Wikipedia articles and "reassure visitors and help focus editors on entries that need improving," I decided to compare the claims to the facts. While Mashable is only reporting what the developer of the algorithm claims, it is clear that the developer has little or no experience editing Wikipedia.
Editor authority:
One of the claims is that the algorithm can "assess the quality of Wikipedia pages based on the authoritativeness of the editors involved and the longevity of the edits they have made." In other words, if an edit is made by an editor who has been a member of Wikipedia for a long time and has made substantial edits to the same article, the article will be of higher quality.
Truth is, that is not the case. While the article links to a recent piece in MIT Technology Review reporting on the decline of Wikipedia, it fails to quote it correctly: "Wikipedia is known for having a relatively small number of dedicated editors who play a fundamental role in the community." The first part of the sentence is correct; the latter is not. There is a small number of dedicated editors, but as the MIT article points out, the site operates as a "crushing bureaucracy with an often abrasive atmosphere that deters newcomers who might increase participation in Wikipedia and broaden its coverage."
So, the new algorithm is going to rate Wikipedia articles on the authority of editors: a pool of editors that is shrinking, while those who remain exert such control over the content of each article that readers absorb the bias of the bureaucracy. Not a good factor to use when assessing a Wikipedia article.
Assessing editor authority:
Any assessment that must first assess something else before giving you results compounds the errors of both steps. The algorithm in question first scores the authority of editors, then applies those scores to the assessment of an article. One such factor is how many other editors a given editor is linked to: the algorithm looks at the links and assigns each editor a rank, much like Google PageRank ranks pages. While this may seem like a good idea, it is flawed because the algorithm does not consider why editors are linked to and from each other. Wikipedia is an open community with many discussion boards and talk pages, so this is basically ranking a website based on how many forum comments someone leaves.
What is flawed about this ranking is that an editor with numerous comments on their talk page would presumably rank higher than someone with fewer. But experienced editors who interact regularly through their talk pages often archive those discussions. So while an experienced administrator may show only a few messages (having just archived a hundred left in a single week), a new editor may show more, including a welcome-message template that the algorithm really ought to ignore.
Another flaw is that the messages left may actually be warnings to an editor about the content they are introducing. A newbie who uploaded 20 images that are now flagged for deletion will have 20 messages on their talk page because of it. Measuring the number of links to and from user pages is simply not the way to assess the quality of an editor.
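To make the problem concrete, here is a minimal sketch of the PageRank-style ranking described above, run over a hypothetical talk-page link graph. The graph, editor names, damping factor, and iteration count are all my own illustrative assumptions; the real algorithm's details are not public. Note how 20 deletion warnings left for a newbie inflate the newbie's rank above a veteran administrator's.

```python
def pagerank(links, damping=0.85, iters=50):
    """Rank editors by incoming talk-page links, PageRank-style.

    links: dict mapping editor -> list of editors they link to.
    Returns a dict of ranks summing to (approximately) 1.0.
    """
    nodes = set(links) | {t for targets in links.values() for t in targets}
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling editor with no outgoing links:
                # spread their rank evenly over everyone.
                for t in nodes:
                    new[t] += damping * rank[src] / n
        rank = new
    return rank

# Hypothetical link graph (names are invented for illustration):
# a bot's 20 deletion warnings all point at the newbie's talk page.
links = {
    "veteran_admin": ["newbie"],
    "newbie": ["veteran_admin"],
    "warner_bot": ["newbie"],
}
ranks = pagerank(links)
```

Under this sketch the newbie, whose in-links are mostly warnings, ends up ranked above the veteran administrator, which is exactly the failure mode described above: link counts measure attention, not quality.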
Finally, editors are human. Even experienced editors make mistakes, such as introducing content they misinterpreted from a source. Yet the new algorithm will apparently "reassure visitors" that the content is correct because the editor who introduced it has high editor authority. That is like saying everything President Obama says should be taken as gold because, after all, he is the president.
Test sampling:
The algorithm developer tested the theories against 9,000 Wikipedia articles that had already been assessed by Wikipedia editors. This sample is extremely small given that Wikipedia has more than 4 million articles: less than a quarter of one percent. What is even more disturbing is that these articles were already assessed. So the new algorithm is assessing articles already assessed by the very editors who will receive rankings based on their editor authority? I'm confused. This would be like me appraising my own house. The test relies on editors (who have not yet received editor authority rankings) and their assessments of articles. What if their assessments are wrong? And the world turns.
And my favorite:
The article does point out that there are limitations, but states that the algorithm could be used as a tool to help editors assess an article: "With the well-documented decline in Wikipedia's workforce, automated editorial tools are clearly of value in reducing workload for those that remain." Well, that is counter-productive. Wikipedia is the "encyclopedia that anyone can edit," not the site that bots and flawed tools can polish up in an attempt to "reassure visitors" that they are reading accurate content. Tools used by Wikipedians have actually contributed to the decline of editors. Many of these tools flag new edits by new users as vandalism, and experienced editors who leave warning messages based on what the "tool" told them, without properly checking whether the edit was true vandalism, often chase away new editors. The tools are overused, and often used simply to inflate an editor's "edit count." It's like giving the car keys to a ten-year-old.
However you see Wikipedia (accurate or inaccurate), the truth remains that it will always have flawed content as long as anyone is allowed to edit it by adding, removing, or changing information in an article. What also remains is that people still trust Wikipedia as an authoritative source of information, despite the proven inaccuracies. What is not needed is a flawed algorithm that tries to convince people otherwise.