
Let’s step back in time. The Russian invasion of Ukraine has begun, and panic is sweeping the nation. Videos of bogus bomb blasts are appearing on social media, and deepfake videos of Zelensky and Putin are circulating online, escalating tensions. Fortunately, the incident had little adverse effect on Ukrainian citizens, as Ukrainian President Volodymyr Zelensky quickly informed them that the video of him was fake.

Disinformation as a service is a growing problem that can have severe consequences for individuals, organisations, and society as a whole. It refers to coordinated efforts to spread false or misleading information for a specific purpose, such as manipulating public opinion, undermining the credibility of individuals or organisations, or sowing confusion and chaos. The consequences can be serious: damaged reputations, incited violence, and skewed political or social outcomes. Since 2020, the Covid-19 pandemic has been a major driver of disinformation; ‘infodemic’ is the term the World Health Organization (WHO) uses to describe the flood of disinformation that circulated during the pandemic.
How did disinformation grow into a service model?
On December 4, 2022, Discern Report published a blog post stating, “SECRET CDC REPORT: Since the Covid Jabs were Rolled Out, at Least 1.1 Million Americans Have ‘Died Suddenly.’”

A screenshot of the post soon went live on Instagram. On a platform with over 2 billion active users, disinformation about the Covid vaccine, or news of any kind, can ignite the platform in seconds, leading individuals to believe the post is legitimate. However, Instagram intervened promptly and prevented the post from wreaking havoc.
There have been numerous instances of disinformation as a service in recent years. Here are a few examples:
1. In 2016, the Russian government was accused of using disinformation as a service to interfere in the U.S. presidential election. This included creating fake social media accounts and spreading false or misleading information about candidates to influence the election outcome.
2. In 2018, a group called the Internet Research Agency was indicted by the U.S. Department of Justice for conducting a “sustained campaign” of disinformation as a service on social media platforms. The group was accused of using fake accounts and bots to spread false information and sow discord in the U.S. and other countries.
3. In 2020, the U.S. Federal Bureau of Investigation (FBI) issued a warning about disinformation as a service related to the COVID-19 pandemic. The FBI warned that bad actors were using social media and other online platforms to spread false information about the virus, including fake cures and conspiracy theories, to sow confusion and disrupt efforts to combat the pandemic.
4. In April 2021, the European Union’s East StratCom Task Force (ESCTF) released a report stating that Russia had used disinformation-as-a-service to interfere in the 2020 Belarusian presidential election. The report stated that Russia used social media and other online platforms to spread false or misleading information to support the incumbent president and undermine the credibility of the opposition.
One factor contributing to the growth of disinformation as a service is the use of automation and artificial intelligence to create and spread false information at scale. Another factor is the increasing polarisation of political and social discourse, which can make people more receptive to disinformation that confirms their existing beliefs or biases. This can create a feedback loop, where false information is amplified and spreads more quickly, further polarising the public and making it harder to distinguish fact from fiction.
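This feedback loop can be made concrete with a toy simulation. The sketch below is purely illustrative, and every number in it is an invented assumption rather than a measurement: bot amplification inflates a claim’s perceived popularity, perceived popularity raises the probability that newly exposed users share it, and the growing audience inflates perceived popularity further.

```python
def simulate_spread(days: int, base_share_prob: float, bot_boost: float) -> int:
    """Toy model of amplified spread; returns the believer count after `days`.

    `bot_boost` inflates the claim's *perceived* popularity, which raises
    the chance that newly exposed users share it - the feedback loop
    described above. All parameters here are invented assumptions.
    """
    population = 100_000
    believers = 10                                # people who initially accept the claim
    for _ in range(days):
        exposed = min(population, believers * 5)  # each believer reaches ~5 people
        perceived = believers * (1 + bot_boost)   # organic reach + bot amplification
        share_prob = min(0.9, base_share_prob + perceived / population)
        believers = min(population, believers + int(exposed * share_prob))
    return believers

print("No bots:  ", simulate_spread(days=30, base_share_prob=0.02, bot_boost=0))
print("With bots:", simulate_spread(days=30, base_share_prob=0.02, bot_boost=50))
```

With identical starting conditions, the bot-boosted run saturates the whole toy population within a month, while the organic run stays in the low hundreds: the amplification, not the content, does the work.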
Unlike traditional media, most articles that circulate through citizen journalism sites do not undergo fact-checking. Social media platforms, including Twitter, Reddit, Facebook, Instagram, and YouTube, are then used to disseminate this information. With the increasing use of social media and other online platforms, it has become easier for bad actors to spread disinformation at scale, making it a serious threat to the integrity of information and the credibility of sources.
Today, disinformation is a weapon used to destabilise nations and organisations. Cybercriminals have seized the opportunity to offer it as a service, which now goes by the name disinformation-as-a-service.
Disinformation-as-a-service: what do cybercriminals do?
Disinformation-as-a-service (DaaS) involves purchasing the service from threat actors through the dark web. In its various forms (e.g., bot networks), the service spreads misleading information on social media, using a variety of tactics to attack the target’s beliefs. The targeted crowd may be overwhelmed with information and begin to rethink their ideology, at which point they may turn against their government or organisation, tearing it apart.
On the dark web, cybercriminals offer a disinformation-as-a-service toolkit for a low price to anyone who wants it. Because it is so accessible, a wide range of people can use the toolkit at their discretion.
Threat actors usually give the client a plethora of options:
1. Which social media platforms does the customer wish to use to spread the false news?
2. What type of content will best serve the propaganda the customer is promoting?
3. How should the information reach the target audience?
Final rates are determined by the customer’s needs.
Disinformation circulates via the following means of communication:
- Social media posts
- Emails
- Text messages
- Articles
- Deepfake images
- Deepfake videos
Overall, it is essential to be aware of the various mediums through which disinformation-as-a-service can spread and to be cautious about the information we consume online. Fact-check what you read, and be sceptical of sensational headlines or claims that seem too good to be true.
How is disinformation-as-a-service a cybersecurity threat?
Disinformation and social engineering attacks are intertwined: disinformation campaigns often piggyback on phishing, luring victims with false claims and malicious links.
For example, cryptocurrency fundraising was used to support Ukrainian soldiers during the Russia–Ukraine war. Cybercriminals took the opportunity to send emails pretending to come from genuine crypto exchange groups involved in the Ukraine fundraising effort. The messages spread disinformation and carried phishing links designed to steal cryptocurrency. As a result, the intended beneficiaries never received the financial assistance.
Here, a real cause was manipulated and exploited to raise funds through phishing tactics. A simple first line of defence, sketched below, is to screen the links in such messages before trusting them.
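As a minimal illustration of that first line of defence, here is a hedged Python sketch. The trusted-domain list and the example URLs are invented for illustration; a real deployment would use a curated allow-list and proper eTLD+1 parsing.

```python
import difflib
from urllib.parse import urlparse

# Hypothetical allow-list of fundraising/exchange domains the reader trusts.
TRUSTED = ["binance.com", "kuna.io", "u24.gov.ua"]

def check_link(url: str) -> str:
    """Flag links whose host imitates, but does not match, a trusted domain."""
    host = (urlparse(url).hostname or "").lower()
    if any(host == t or host.endswith("." + t) for t in TRUSTED):
        return f"{url}: host is on the trusted list"
    # A near-miss (e.g. 'blnance.com') is more suspicious than a random domain.
    close = difflib.get_close_matches(host, TRUSTED, n=1, cutoff=0.8)
    if close:
        return f"{url}: SUSPICIOUS - '{host}' imitates trusted '{close[0]}'"
    return f"{url}: unknown host, verify manually before clicking"

# Invented examples of links that might appear in a fundraising email.
for link in ["https://blnance.com/donate", "https://u24.gov.ua/", "https://example.org/help"]:
    print(check_link(link))
```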
Imagine two restaurant owners, Sarah and John, competing for customers in a popular tourist destination. Sarah’s restaurant has been attracting many tourists, and business is booming, while John’s restaurant has struggled to keep up.
One day, John has had enough and decides to sabotage Sarah’s restaurant. He hears about disinformation as a service and buys an AI bot tool. With the bot service, he creates fake news articles and social media posts claiming that Sarah’s restaurant serves contaminated food.
The disinformation campaign takes off, and soon people are talking about the health risks of dining at Sarah’s restaurant. Tourists start to avoid Sarah’s restaurant, and her business begins to suffer. Sarah is devastated and has no idea why her business is suddenly in trouble.
It takes some time, but eventually Sarah learns about the disinformation campaign John bought against her. She is furious and takes legal action against him, but the damage has already been done: her business has suffered significantly and will take a long time to recover.
This scenario illustrates the potential consequences of disinformation as a service and how it can be used to manipulate public opinion and damage reputations. Its predecessor was disinformation-for-hire, where companies or governments hired PR agencies to spread disinformation; even when caught in the act, the customers could deny responsibility.
Deniability – a feature of disinformation-as-a-service that causes major cybersecurity concerns
Deniability refers to the ability to deny involvement or responsibility for an action. In the context of disinformation as a service, deniability may be used to protect the identity or motivations of the parties responsible for spreading false or misleading information. Here, the threat actors impersonate legitimate PR or marketing firms.
For example, suppose an employee uses such a service against their own company (an insider threat), and the company traces the fake news back to that employee. The employee can deflect the blame by claiming they believed they had hired a legitimate PR firm and had no idea any malicious activity was taking place.
Deniability allows the culprit to remain anonymous, so identifying them becomes complicated. Being unable to trace the culprit and initiate legal action is a severe cybersecurity threat!
Deniability can be challenging to maintain, as it requires careful planning and execution to avoid leaving behind evidence that could trace the disinformation back to its source. However, automation and artificial intelligence can create and spread disinformation in ways that are difficult to trace, which helps such groups maintain deniability.
Low service costs are another concern: the low price makes disinformation-as-a-service accessible to almost anyone on the internet, fuelling its growth and, in effect, the threat it poses to cyberspace.
The motive behind spreading disinformation: Case studies
There are many different motives behind the spread of disinformation as a service. Some groups or individuals may use disinformation as a service for financial gain, while others may do so for political or ideological reasons.
During election periods, the spread of disinformation spikes. Parties throw false claims and fake news at the opposition to win people to their side. In India, particularly during the 2019 election, Facebook took down several party pages after they were reported as spam. Caste, religion, ideology, and related factors were used to customise these spam messages.
As disinformation-as-a-service came into the limelight, we began to observe an endless stream of biased and fake news flooding the internet, especially during the Israeli–Palestinian conflict. For example, the internet was buzzing with a screenshot of a tweet claiming to be from the Israel Defense Forces. A fact-check revealed that the tweet came from a fake Twitter account impersonating the real Israel Defense Forces; it stated that the Israeli defence forces did not require any support from Indians.

[Image: screenshot of the fake tweet. Credit: Fact Check]
Screenshots like these can be created with several free tools available online. Software based on natural language processing, such as OpenAI’s GPT-3, is one example; such tools have even been used to create phishing websites. They have made it far easier for cybercriminals to generate fake news and offer customers disinformation-as-a-service. While OpenAI audits usage of its models regularly, most open-source software has no such oversight.
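One cheap verification step for a screenshot of a tweet is to check whether the tweet exists at all. The sketch below relies on Twitter’s public oEmbed endpoint (publish.twitter.com/oembed), which has historically returned embed metadata only for real tweets; the URL in the example is invented, and the endpoint’s behaviour can change, so treat the result as a hint rather than proof.

```python
import requests

def tweet_exists(tweet_url: str) -> bool:
    """Ask Twitter's public oEmbed endpoint whether a tweet URL resolves.

    HTTP 200 with embed HTML suggests the tweet is real; HTTP 404
    suggests the 'screenshot' may be fabricated.
    """
    resp = requests.get(
        "https://publish.twitter.com/oembed",
        params={"url": tweet_url, "omit_script": "true"},
        timeout=10,
    )
    return resp.status_code == 200

# Hypothetical URL reconstructed from the handle and status id in a screenshot.
url = "https://twitter.com/SomeAccount/status/1234567890123456789"
print("Tweet found" if tweet_exists(url) else "No such tweet - possibly a fake screenshot")
```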
How are corporates affected by the spread of disinformation?
With readily available online tools, disinformation actors can damage a rival company’s reputation through defamation or extortion.
In May 2021, malicious actors launched a Covid disinformation campaign to discredit the Pfizer–BioNTech Covid vaccine. Health bloggers received messages from disinformation actors offering money to spread false claims about the vaccine, framed as facts the media had intentionally withheld.
A campaign like this could quickly tarnish a large company’s reputation if the truth is not told loudly and people fall for the lie. During the Covid vaccination drive in particular, there was already widespread public speculation about the vaccines, so disinformation like this could both discourage people from getting vaccinated and bring down the organisation as a whole.
What impact could disinformation have on a company like BioNTech?
The company would lose its credibility, people would stop buying its products, and it would likely face a financial crisis as a result.
The spread of disinformation as a service can have significant consequences for corporations. Companies must be aware of how false or misleading information can be spread and take steps to protect themselves from it. This may include implementing measures to monitor and detect disinformation campaigns, engaging in crisis communication efforts, and working with stakeholders to address any issues that may arise.
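As one small example of such monitoring, the sketch below flags days on which mentions of a brand spike far above their recent average, an early warning that a coordinated campaign may be starting. The daily counts and the query shown in the comment are invented assumptions; a real pipeline would pull counts from a social listening API.

```python
from statistics import mean, stdev

def spike_days(daily_counts: list[int], window: int = 7, threshold: float = 3.0) -> list[int]:
    """Return indices of days whose mention count is a z-score outlier
    relative to the preceding `window` days - a crude early-warning signal."""
    alerts = []
    for i in range(window, len(daily_counts)):
        history = daily_counts[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (daily_counts[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# Invented daily counts of posts matching, say, '"AcmeCorp" AND (scam OR contaminated)'.
counts = [4, 6, 5, 7, 5, 6, 4, 5, 6, 48, 73, 60, 9, 7]
print("Alert on days:", spike_days(counts))  # the jump on days 9-10 is flagged
```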
Conclusion
AI-based tools help detect disinformation by scrutinising textual content and distinguishing auto-generated text from human-produced text. AI programs can also apply forensic techniques to uncover deepfake videos and images. Always remember that AI can be used to help detect and counter disinformation-as-a-service, but it can equally be used to facilitate its spread. Hence, corporates should maintain a security team that tracks fake news continuously.
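As a minimal illustration of the text-screening idea, the sketch below scores a passage by how predictable a language model finds it; machine-generated text tends to have lower perplexity than human writing. It assumes the Hugging Face `transformers` package and the small GPT-2 model, the cut-off is an invented illustration, and the heuristic is easy to fool, so it is no substitute for a dedicated detector.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2: lower means more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return math.exp(loss.item())

sample = "The vaccine is dangerous and the government is hiding the truth."
ppl = perplexity(sample)
# The threshold below is illustrative only; real detectors calibrate on labelled data.
verdict = "possibly machine-generated" if ppl < 40 else "likely human-written"
print(f"perplexity={ppl:.1f} -> {verdict}")
```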
Also, keep in mind that even though disinformation does not qualify as a cyber attack, it can enhance phishing and social engineering tactics.
References
https://www.pwc.com/us/en/tech-effect/cybersecurity/corporate-sector-disinformation.html
https://www.dw.com/en/covid-disinformation-campaign-targeted-biontech-pfizer/a-57702440
https://hackernoon.com/disinformation-as-a-service-content-marketings-evil-twin
https://blog.agthentic.com/disinformation-as-a-service-is-here-what-can-it-mean-for-ag-eae9c9b5b329
https://www.zdnet.com/article/the-dark-webs-latest-offering-disinformation-as-a-service/
https://www.weforum.org/agenda/2022/07/disinformation-ai-technology
https://www.peakmetrics.com/insights/6-reasons-why-disinformation-is-a-cybersecurity-risk
https://www.weforum.org/agenda/2022/08/four-ways-disinformation-campaigns-are-propagated-online
https://www.weforum.org/agenda/2022/09/tackling-disinformation-agenda-dialogues