Stop COVID-19 bots

How you can prevent the spread of misinformation online

An image of a robotic face appearing amongst computer data.

Image: Getty Images / Jackie Niam


Like many, I am getting updates about the novel coronavirus COVID-19 from social media sites like Twitter.

Scrolling through the comments on a post from Queensland Health (the official government health agency account for Queensland, Australia) sent me on an emotional roller coaster ride.

Some comments made me laugh, but two particular comments by Sharon and Sara (see below) made me feel angry, especially for Sara’s mum.

An official post from Queensland Health (left) and comments from ‘Sharon’ and ‘Sara’ (right).


After a couple of seconds, my cyber security senses started tingling. Why did their accounts have alphanumeric Twitter handles? Had Sara and Sharon always been so emotional across their accounts?

Introducing 'Sharon' and 'Sara' 


Curious, I had a deeper look at their profiles and realised that they were not humans at all.  

Rather, they were ‘bots’: fake social media accounts. These bots, and thousands of others, were generated by an automated service called a ‘bot campaign’. The bots plant comments into social media feeds, usually with a deeper intention to misinform or alter opinions of the general public. 

This problem came to the public’s attention during the United States elections, and it has only escalated since.

The dissemination of information about the recent Australian bushfires was also affected by similar misinformation from bots. It is now apparent that the conversation around COVID-19 has been infected by bots too.

You might think the toilet paper fiasco is a low for humanity, but I would argue a bot campaign at a time like this is worse. It is potentially dangerous because of the fear and misinformation it can cause.

An image representing the spread of news on social media via mobile devices.

Image: Getty Images / alexsl


Telltale signs of a bot


Here are the accounts of 'Sara' and 'Sharon'.

The Twitter accounts of bots ‘Sara’ and ‘Sharon'.


Both 'Sara' and 'Sharon' displayed a few telltale signs that they were possibly bots:

  • they have no followers 
  • they only recently joined Twitter 
  • they have no last names, and have alphanumeric handles (e.g. Sara89629382)
  • they have only tweeted a few times 
  • their posts have only one theme: spreading alarmist comments (look at the picture post above by 'Sara’ on 17 March). 
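The first four signs above can be checked mechanically against an account's public metadata. Below is a minimal Python sketch of that idea; the `Account` fields and the exact thresholds (30 days, 10 tweets, a run of 5+ digits in the handle) are illustrative assumptions, not Twitter policy, and the final sign – an alarmist single-theme posting history – still needs a human (or a far more sophisticated classifier) to judge.

```python
import re
from dataclasses import dataclass

@dataclass
class Account:
    """Minimal stand-in for a Twitter account's public metadata."""
    handle: str
    followers: int
    days_since_joined: int
    tweet_count: int

def bot_signals(acct: Account) -> list[str]:
    """Return which of the telltale signs an account exhibits.

    Thresholds here are illustrative assumptions, not a definitive detector.
    """
    signs = []
    if acct.followers == 0:
        signs.append("no followers")
    if acct.days_since_joined < 30:
        signs.append("recently joined")
    # An alphanumeric handle like 'Sara89629382': a name plus a long digit run
    if re.fullmatch(r"[A-Za-z]+\d{5,}", acct.handle):
        signs.append("alphanumeric handle")
    if acct.tweet_count < 10:
        signs.append("few tweets")
    return signs

# 'Sara' from the screenshots above, with assumed metadata values
sara = Account("Sara89629382", followers=0, days_since_joined=7, tweet_count=3)
print(bot_signals(sara))
```

No single signal is conclusive on its own; it is the combination of several signs that makes an account suspicious.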

A further investigation into 'Sharon' revealed that it had also attempted to misinform and exacerbate anger on a news article reporting updates about the government (see image below).

Bot ‘Sharon’ spreading fear and misinformation on a news site’s tweet.


The phrasing “Health can’t wait. Economic (sic) can” was a giveaway, suggesting a potentially non-native English speaker. It can be seen that 'Sharon' was trying to stoke public anger by calling out "bad decisions".

An image representing the spread of news on social media via mobile devices.

Image: Getty Images / metamorworks


The tip of the iceberg


When looking through the tweets of ‘Sharon’, I discovered 'Sharon’s' friend, ‘Mel’ (see below) – a bot with another level of evil – spreading false information about COVID-19 test result delays, and retweeting hate memes and posts.

Bot ‘Mel’ spreading false information (left) and retweeting hate memes/posts (right).


But what was more concerning was that humans were actually replying to ‘Mel’ (see below):

A potentially human user responding to ‘Mel’ like it was a human.


It makes me wonder:

  1. Who is controlling these bots?
  2. What is the objective of such a misinformation campaign?
  3. How can we hunt these bots down and stop the spread of fear and misinformation?

Currently, no one can easily attribute the source. The motive may be pure mischief or geopolitics. However, one thing is for sure: we need to understand and develop legislation and mechanisms to detect and stop these bots. The major social media platforms have recently banded together to work on taking these down.

If you are part of an organisation running a legitimate social media campaign, it may be useful to dedicate part of your media or IT team to using bot-detection tools and reporting bot posts to the social media platform.

Even as a reader of posts, you can play a part. Join me. Hunt them down. When you see bot accounts, report them to the social media platforms. Let’s use the hashtag #StopCOVIDBots to round them up and expose this evil.

In these uncertain times, the last thing we want is to exacerbate fear and anxiety. It takes a vigilant social media community to preserve the fabric of our society.

#StopCOVIDBots now.

An image of a robotic face appearing amongst computer data.

Image: Getty Images / Jackie Niam


#StopCOVIDBots – reporting bots on Twitter


Using ‘Sara’ as an example:

1. Go to the bot’s account page.
2. Click the ‘…’ button next to the ‘Follow’ button.
3. Click ‘Report @(name of the bot here)’.
4. Click ‘It’s a fake account’.
5. After confirming the report, you will have a final option to block or mute the bot.

For more information on what UQ is doing to improve cyber security, visit the School of Information Technology and Electrical Engineering.


Support COVID-19 research

Together, we can help our researchers fast-track a potential vaccine for COVID-19. Join us by making a donation. We appreciate it is an incredibly challenging time for many in our community. If you cannot make a gift, you can still play a critical role by sharing the work of our research team on social media and giving your 'virtual' applause. 👏 #clapCOVID19researchers

An image of Professor Ryan Ko.

About the author

Professor Ryan Ko is Chair and Director of UQ Cyber Security at the University of Queensland. His applied research in cyber security focuses on returning control of data to cloud computing users. His research reduces users' reliance on trusting third parties and focuses on provenance logging and reconstruction, and privacy-preserving data processing (homomorphic encryption).

Prior to UQ, he was the highest funded computer scientist in New Zealand, as Principal Investigator and Science Leader of the largest MBIE-awarded cloud security research funding for STRATUS (NZ$12.2 million) from 2014 to 2018.

Prof Ko has a strong record in establishing university-wide, multi-disciplinary academic research and education programs.