
The Weaponizing of Fake News

Fake news is increasingly being deployed on a very large scale to influence people’s behaviour. We have all heard about efforts to reshape political landscapes: over the past few years, various agencies have published carefully targeted ads aimed at specific groups of local voters in many different countries, with the explicit goal of swaying how they vote.

Election interference is not the only issue. Misinformation can also be used by whole industries not only to promote their products and services but to steer opinion away from factual conclusions that do not fit their narrative. A clear example of such tactics can be seen in the banking industry – spreading fear, uncertainty, and doubt about cryptocurrencies.

We have already written extensively about some of the problems that fake news presents for the blockchain and crypto community, and questioned the role that mainstream media plays in distributing misinformation. This time, therefore, we are looking at fake news from a different perspective. Specifically, we will focus on some of the more technologically advanced methods used to create entirely fabricated content – content that could be categorized as fake news and weaponized with far greater consequences than merely moving the price of a single coin.

The “information warfare” we describe would not have been possible until recently: it depends on technological advances such as the Internet, social media, and artificial intelligence, which is why those technologies are so often at the centre of these controversies. Who knows what the future might bring?

The Role of Artificial Intelligence

The algorithms used by search engines, websites, and social media platforms have often been associated with the spread of fake news. They tend to push the most-viewed content regardless of whether it is true and verified, which is exactly why the impact of misinformation can be so significant. Studies show that fake news on social media can spread much faster than the truth.

Consequences can be seen on YouTube, where conspiracy theories can generate much more traffic than properly sourced and accurate videos. Facebook’s algorithms have also been scrutinized – especially in the context of serving ads full of falsehoods to those particularly susceptible to the ads’ claims. It’s not hard to imagine actions such as these used to spread fake news about a particular cryptocurrency or blockchain project.
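To make that mechanism concrete, here is a toy Python sketch of purely engagement-driven ranking. The post data and field names are invented for illustration; the point is simply that the sort key is views alone, so accuracy never enters the ordering.

```python
# Toy model of engagement-only ranking: content is ordered purely by
# views, so a viral false story outranks an accurate but less-clicked one.
posts = [
    {"title": "Sober, well-sourced analysis", "accurate": True,  "views": 1_200},
    {"title": "Shocking conspiracy theory!",  "accurate": False, "views": 95_000},
]

# The ranking function never consults the `accurate` flag.
for post in sorted(posts, key=lambda p: p["views"], reverse=True):
    print(f'{post["views"]:>6}  {post["title"]}')
```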

“Deepfake” Videos

Serving false content is not the only danger AI can bring. Technology has now progressed to the point where it can generate content that is completely made up. One such example is the so-called “deepfake” video: an artificial-intelligence-based human image synthesis technique that uses machine learning to superimpose existing images and videos onto source images or videos.

The resulting combination is a fake video that shows a person or group of people performing an action that, in reality, never occurred. This gives deepfakes the power to create fake news by altering politicians’ (or other public figures’) words and gestures, making them appear to say or do things they never did. The techniques are not yet perfect, but they are improving as the programs become more sophisticated. The consequences could be grave.
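For the technically curious, here is a minimal, illustrative sketch (in Python, assuming PyTorch is installed) of the shared-encoder/two-decoder architecture used by classic deepfake tools. Everything here is a skeleton: real systems train for days on thousands of aligned face crops, while this only demonstrates the structural trick of decoding one person’s face with another person’s decoder.

```python
# Sketch of the shared-encoder / two-decoder idea behind classic deepfake
# face swapping. Untrained and purely illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 face crop into a shared latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder per identity."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training pairs each decoder with the *shared* encoder on its own person's
# faces. At swap time, a face of person A is encoded, then decoded with
# person B's decoder, yielding B's face with A's pose and expression.
face_of_a = torch.rand(1, 3, 64, 64)   # stand-in for a real face crop
swapped = decoder_b(encoder(face_of_a))
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])
```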

Language Modelling

A similar effect was achieved not long ago by the company OpenAI. They trained a language model on a vast amount of text sourced from the Internet, creating a system capable of producing very realistic text – be it news or works of fiction. The model is trained on a task called language modelling, which involves predicting the next word of a piece of text based on all the words that came before it.

The system is fed text, anything from a couple of words to a whole page, and uses it as input to write the next couple of sentences based on its predictions of what should come next. The quality of the output is truly astonishing, but so are the possible dangers. Completely believable fake news could be produced in a matter of seconds.  
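OpenAI’s system is a very large neural network, but the task itself can be illustrated with something far simpler. The toy Python sketch below uses a bigram (Markov-chain) model over a tiny invented corpus: it learns which words follow which, then extends a prompt by repeatedly sampling a plausible next word – the same loop, in miniature, that the full-scale model performs.

```python
# Toy language model: predict the next word from the current one.
import random
from collections import defaultdict

corpus = (
    "the market rallied today as investors bought the coin "
    "the coin fell today as investors sold the coin"
).split()

# Count which words follow each word in the training text.
successors = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current].append(nxt)

def generate(seed, length=8):
    """Extend `seed` by repeatedly sampling a likely next word."""
    words = seed.split()
    for _ in range(length):
        candidates = successors.get(words[-1])
        if not candidates:  # no known continuation
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the coin"))
```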

The machine-learning system proved so capable that the researchers who created it grew concerned about its potential for abuse. For now, they have decided not to release the full model to the general public, but it is only a matter of time before others reproduce similar results.

Stay Informed

Fake news is already a problem. If its production were fully automated, the problem could grow exponentially. Given recent technological advances, it may not be long before AI can reliably produce ever more convincing fake stories, false tweets, and other misinformation – and disseminate it effectively.

Such issues could be addressed by decentralising trust. Instead of relying on a few institutions to vouch for whether information is genuine, we would rely on many sources with credible reputations. One way to do this could be through blockchain technology, with its properties of immutability and transparency. With a public ledger stored on many computers, consensus algorithms allow those computers to agree on the validity of any change to the ledger, making it much harder to record false information unnoticed.
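A minimal Python sketch of the immutability property: each entry in a hash-chained ledger commits to the hash of the previous entry, so quietly rewriting an old record breaks every link after it. The record fields are invented for illustration, and a real blockchain adds consensus and replication on top of this basic structure.

```python
# Hash-chained ledger: altering a past record invalidates all later links.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over the block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def is_valid(chain):
    """Each block must reference the actual hash of its predecessor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append(ledger, {"article": "abc123", "rating": "credible"})
append(ledger, {"article": "def456", "rating": "manipulated"})
print(is_valid(ledger))                      # True

ledger[0]["data"]["rating"] = "manipulated"  # attempt to rewrite history
print(is_valid(ledger))                      # False: the chain is broken
```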

This is exactly why the BLOCKBIRD platform, with the help of our users’ votes and ratings, aims to recognize manipulated news. We monitor community response, source reputation, content characteristics, and the repetition patterns of spreading news. Blockchain technology lets us enhance the product by strengthening the credibility of our users’ votes and ratings: by storing the rating results on a distributed ledger, we ensure they are transparent and immutable. If you are interested in knowing more, read our blueprint and become a part of the BLOCKBIRD movement!

February 25, 2019, Nejc Horvat