Recently there has been a lot of press about the rise of so-called “fake news”. The coverage has focused on how mistaken or intentionally erroneous reports, often originating with ordinary citizens, have been picked up by major news networks or presented on sites masquerading as outlets of greater repute.
Many have looked to lay the blame at the door of social media, although this overlooks social media’s nature as a platform for many voices rather than a single entity. The crux is that no one person or collective is to blame, and the larger question is how, in an age of personalised content, publishers can avoid spreading fake news.
What Is Fake News?
Fake news is a term used to describe an article, usually appearing online, that is either exaggerated or misreported. There is a wide range of ways in which fake news can be written and shared, and these articles can be created and distributed both innocently and maliciously. The topic gained a great deal of coverage during the 2016 US election, as each party accused the other of misdeeds and foul play.
Recently many major online platforms, including Google and Facebook, have been accused of being party to the spread of fake news. The fact that such major organisations have been caught out by increasingly insidious attempts to confuse the masses has raised the alarm and left software developers scrambling to find a solution to this dizzying problem.
How Can Developers Stop The Spread Of Fake News?
It has been pointed out that there are relatively simple ways for an individual to verify the validity of a news item – the larger problem is for organisations, such as Facebook, who disseminate trending articles that are procured using algorithms. Part of the issue for Facebook has been from an editorial and political perspective. There have been many claims in the past as to Facebook’s party affiliations, including those that paint the social media giant as a self-serving expurgator.
The challenge for developers is to create software that can tell the difference between stories that are true and those that are false. There are several ways of checking the basis of a news story. Some involve crawling the site the original piece came from and examining aspects such as the sources of its pictures and text. Browser extensions can even be added to search for malware and dead links.
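The crawling step described above can be sketched in miniature. The snippet below, using only Python’s standard library, pulls the outbound links and image sources out of an article’s HTML — the raw material a verifier would then check for dead links or mismatched provenance. The class and function names here are illustrative, not taken from any real fact-checking tool.

```python
from html.parser import HTMLParser


class SourceExtractor(HTMLParser):
    """Collect outbound links and image sources from an article page."""

    def __init__(self):
        super().__init__()
        self.links = []
        self.images = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("href"):
            self.links.append(attrs["href"])
        elif tag == "img" and attrs.get("src"):
            self.images.append(attrs["src"])


def extract_sources(html):
    """Return (links, images) found in a fragment of article HTML."""
    parser = SourceExtractor()
    parser.feed(html)
    return parser.links, parser.images
```

A real system would then fetch each collected URL and flag those that return errors or redirect to unrelated domains — the “dead link” check mentioned above.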
However, one very tricky aspect of this method can be if the news story is based on a real report that happened a long time ago. In this instance, the software often has trouble distinguishing the real news from the fake as it is able to verify a source but unable to determine that the source, whilst genuine, is old and therefore the story cannot be considered “news”. The difficulty arises as it is very hard to determine a cut-off point in terms of old links or sources, as many reputable sites will link to old stories that are either part of an overall narrative or relevant in some way. Bloomberg has suggested that employing human fact checkers may be the only way to get around this issue, as even after altering its software Facebook still saw fraudulent news stories appearing on its Trending Topics sidebar.
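The stale-source problem lends itself to a simple illustration. The sketch below flags a verified source as stale once it passes an arbitrary cut-off age — the 365-day threshold is purely an assumption for the example, and as the paragraph above notes, choosing that cut-off sensibly is exactly the hard part.

```python
from datetime import datetime, timedelta

# Assumed cut-off: a source older than this is treated as "not news".
# Picking this value well is the genuinely difficult part of the problem,
# since reputable sites legitimately link to much older material.
STALE_AFTER = timedelta(days=365)


def is_stale_source(source_date, checked_at=None):
    """Return True if a verified source is too old to support a "news" story."""
    checked_at = checked_at or datetime.utcnow()
    return (checked_at - source_date) > STALE_AFTER
```

A fixed threshold like this is exactly what trips up automated systems: a genuine but years-old report passes the verification step yet should not be presented as current news.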
For Facebook, this is a solution easier said than done. In August a high-profile mass firing of its Trending Topics curators was followed by disastrous, unvetted postings from the software that replaced them. Facebook was eager to distance itself from accusations of politically motivated editing of its Trending Topics sidebar but was equally at pains to safeguard the legitimacy of the feature. Facebook’s doublethink attitude towards automation and close editorial control has so far landed it in a problematic scenario, although it will be interesting to see how publishers with more halcyon pasts in this matter choose to act.
Google has looked to kill the problem at its root by banning fake news sites from using its AdSense network. It is a forthright strategy, but it is worth remembering that Google is not the only online advertiser available to publishers. Google has also started funding developers who claim to offer a solution: UK-based FullFact announced it received £50,000 from Google’s Digital News Initiative to create a “fully automated end-to-end fact checking system”. Dedicated fact-checking websites such as Snopes have looked to call out fake news stories and have done so with moderate success, although such sites will need to improve their global reach before they can have a real impact on fake news.
For individual internet users, determining the accuracy and validity of a news post is a fairly simple process, as long as the user knows what to look out for and how to look for it. The larger problem is for publishers, who need a means of filtering out fake news stories at scale.
The argument between automation and human intervention will go on, as each side claims the other falls short in certain areas. It can only be hoped that an amicable solution is found so that software can be built to stop the propaganda perpetuated by fake news.
What are your thoughts on fake news? Are there any specific steps that software developers can take to improve their efforts? Get in touch with us on social media and let us know your thoughts on this matter.