
SeaVoice STT & TTS Bot

Transcribe audio channels with speech to text, synthesize messages with text to speech, and download your audio & transcription files.


🐙 The SeaVoice Bot is a new speech-to-text and text-to-speech Discord integration brought to you by Seasalt.ai, a startup run by some of the world’s leading experts in deep speech recognition, neural speech synthesis, and natural language processing. 🐙

Watch the demo video: https://www.youtube.com/embed/drOVk_bexFY

SeaVoice is a voice intelligence bot that uses advanced AI technology to improve the Discord voice channel experience. One of the great things about Discord’s text channels is that they maintain a permanent log of the server’s conversations. But what about the voice channels? Once something is said verbally in the channel, it’s gone - you can’t catch up on part of the conversation you missed or search the conversation later.

Invite SeaVoice to the voice channel, and you can get real time speech transcriptions delivered to a chat channel as the conversation is happening. You’ll also receive a final version of your transcript and voice recording in a DM after the session ends. SeaVoice is set apart from bots offering similar services because it’s backed by state-of-the-art deep learning models crafted by Seasalt.ai.

We feel that providing highly accurate transcriptions for voice channels is a huge accessibility improvement for Discord. Additionally, because transcriptions are automatically posted to a text channel, that means they are permanent, searchable, and shareable. Similarly, speech synthesis also boosts participation in voice channels by making them more accessible to people who can’t or don’t want to speak personally.

Capabilities

✍️ Speech-to-Text

Transcribe audio from Discord voice channels with /recognize [language].

/recognize [language] -> The bot joins the voice channel you're currently in, listens continuously, and posts transcriptions in real time to the text channel where the slash command was entered. It records and transcribes everyone in the voice channel. When the session ends, the bot will DM the session creator a final transcription file, an SRT-formatted transcript file (used for subtitles), and a link to a full audio download. The session will automatically wrap up if all the users leave the voice channel, or if the bot shuts down or restarts for any reason (such as when a new version is released).
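For example (illustrative only; the exact form of the language argument the bot accepts may differ), you could start an English session by typing /recognize English in the text channel where you want the transcript to appear, while you are connected to a voice channel.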

Language Support

SeaVoice currently supports 12 languages. The English and Taiwanese Mandarin models are our own in-house models trained from scratch; they are highly accurate and reliable. All other languages are supported using a multilingual open source model as the base. The performance wasn’t great out of the box, so we integrated it into our own STT pipeline and tuned the model to improve the performance. One thing you may notice with the open source model is “hallucination”. This can manifest in a couple different ways, such as: inserting words/phrases that weren’t said, transcribing in the wrong language, and/or translating the spoken language to a different language.

Language
English
Mandarin (Taiwan)
Spanish
Italian
Portuguese
German
Japanese
Korean
Russian
Hindi
Vietnamese

🗣 Text-to-Speech

Synthesize speech from chat to voice channel.

Seasalt.ai also excels at speech synthesis. We offer a text-to-speech command, which allows users to type in a chat channel and have audio synthesized and played in a particular voice channel for them.

/speak [voice] [text]

To use this command, you should already be in a voice channel. In any text channel, type the /speak slash command, optionally specify which voice you would like to use, and enter the text that you would like synthesized. When the TTS is done speaking, a 🏁 reaction will be applied to the command message. If no voice is specified, the default is Orca; you can also set your own default voice using the /user_config command. The available voices are listed below, with a usage example after the table:

Name Sex Language
Orca M American English
Narwhal M British English
Angelfish F American English
Starfish F Mandarin (Taiwan)
Dolphin F Mandarin (Taiwan)
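As an illustrative example (the argument layout shown here is an assumption rather than confirmed syntax), typing /speak Narwhal Hello everyone! in a text channel while connected to a voice channel would have the message read aloud in the British English Narwhal voice.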

🎙️ Record & Download

Export audio & transcriptions from voice channels.

Users are able to download their transcriptions and full audio recordings to a file.

When the STT session ends, the bot will DM you a final transcription file, an SRT-formatted transcript file (used for subtitles), and a link to a full audio download. To download the audio, follow the link, then right-click in the web browser and select "Save as…". Download links expire after 24 hours, so if you want a permanent copy of your file, download it to your computer.

Configuration

SeaVoice offers customizable settings for both servers and individual users.

Note: If you update any settings, you must stop and re-start any active /recognize sessions before the new configurations are applied.

👥 Server Settings

Configure settings for everyone in the server with /server_config [live_transcript] [transcript_recipients] [transcript_style] [ignore_bots] [censor].

Use the /server_config command to configure the settings for the current server that you are in. Only users with admin permissions in the server may use this command. Servers currently have the following settings: live_transcript, transcript_recipients, transcript_style, ignore_bots, and censor.

👤 User Settings

Configure settings for just yourself with /user_config [exclude_stt] [default_tts_voice].

Use the /user_config command to configure your personal settings for your Discord account. These settings persist no matter which server you are in. Users currently have the following settings: exclude_stt and default_tts_voice.

⚙️ Server / User Status

Check your current server or user configurations with /server_status and /user_status.

Run the /server_status command to get a breakdown of your current server configurations.

/user_status

Run the /user_status command to get a breakdown of your current user configurations.

The SeaVoice Discord bot is completely free. No sign-up required. Try it out and have fun!

About Seasalt.ai

Seasalt.ai is a Seattle-based startup founded by experts in speech and language technologies.

We collect anonymized voice data for the sole purpose of improving our speech and NLP models. We will never share or sell your data. You can read our full privacy policy here.


How-To Geek

How to use text-to-speech on Discord


While Discord is a great platform for voice communication, you might not be able to (or want to) speak with your own voice. To get around the problem, you can use Discord's built-in text-to-speech (TTS) feature.

You can use text-to-speech on your own Discord server , or on another server with a text-to-speech enabled channel. These steps only work for Discord users on Windows or Mac, as Discord's text-to-speech capabilities are unavailable to Android, iPhone, or iPad users.


Enabling Text-to-Speech on a Discord Server

If you want to use text-to-speech on Discord, it'll first need to be enabled in a channel on your server. If you're the server owner or administrator, you can do this in your channel settings.

To change your channel settings, access your server in the Discord desktop app or on the Discord website. From the channel listings, hover over a channel name and then click the "Settings" gear icon next to it.


In the "Settings" menu for your channel, select the "Permissions" tab on the left-hand side.

Click "Permissions" in your Discord channel settings.

If you have roles for individual groups of users, select the role from the "Roles/Members" list, otherwise select the "@everyone" option.

A list of available permissions will be shown on the right. Make sure to enable the "Send TTS Messages" option by clicking the green check icon to the right of it.

At the bottom, select "Save Changes" to save the updated role setting.

In the "Permissions" tab, select your user role, then click the green tick icon next to the "Send TTS Messages" Option before clicking "Save Changes" to save.

Once enabled, users with that role (or every user, if you selected the "@everyone" role) will be able to send text-to-speech messages in the channel you modified.

You'll need to repeat these steps if you wish to enable text-to-speech in other channels.

Using Text-to-Speech on Discord

If you're in a channel on Discord with text-to-speech messages enabled, you can send a TTS message by typing /tts in the chat, followed by your message.

For instance, typing /tts hello will activate your browser or device's text-to-speech capabilities, repeating the word "hello" along with the nickname of the Discord user who sent the message.

The message will also be repeated in the channel as a text message for all users to view.


Muting All Text-to-Speech Messages on Discord

If you aren't a server owner or administrator, or you just want to mute all text-to-speech messages, you can do so from the Discord user settings menu.

To access this, click the "Settings" gear icon next to your username in the bottom-left corner of the Discord app or website.


In your "User Settings" menu, select the "Text & Images" option on the left. Under the "Text-To-Speech" category on the right, click the slider to disable the "Allow playback and usage of /tts command" option.


Disabling this setting will disable text-to-speech for you on Discord, regardless of each individual server or channel setting. You'll be able to read the text element of a text-to-speech message as normal in the channel, but you won't be able to hear it repeated to you.

You'll also be prevented from using the /tts command yourself. You'll need to repeat these steps and re-enable the option in your user settings if you wish to use it again later.


The Ultimate Guide to Text to Speech Discord Bot: Making Your Server Talk


The Dawn of Auditory Interaction on Discord

In the sprawling digital universe of Discord, where gamers, professionals, and communities converge, the text to speech discord bot emerges as a beacon of accessibility and innovation. Imagine a virtual companion that reads out messages for you, transforming text into speech in real-time – this is the magic of TTS bots on Discord!

A text to speech discord bot is an ingenious tool that uses TTS technology to vocalize written messages within a Discord server. It acts as a bridge, turning text channels into voice channels, ensuring that messages are not just seen but heard, creating a more inclusive environment for all users.

The Top 10 Use Cases for Text to Speech Discord Bots

  • Accessibility for the Visually Impaired : TTS bots empower visually impaired users to participate fully in conversations by reading aloud the text they cannot see, ensuring no one misses out on the fun.
  • Multitasking Made Easy : Listen to server updates while gaming or working. With TTS, your eyes are free to focus on the task at hand.
  • Language Learning Aid : Language learners can improve their pronunciation and listening skills by hearing the correct articulation of words in their target language.
  • Read Aloud Storytelling : Transform your server into a storytelling hub where epic tales come to life through voice, enhancing the listener's experience.
  • Notifications That Speak to You : Never miss an important update with TTS notifications that announce new messages or events, even when Discord is running in the background.
  • Voice Channel Companion : For users who prefer text, TTS bots read their messages aloud in voice channels, keeping them part of the conversation.
  • Entertainment Through Vocal Variety : Some TTS bots offer different voices and languages, adding a layer of entertainment to server interactions.
  • Enhancing Role-playing Games : Bring characters to life by giving them a voice, adding depth to the role-playing experience on Discord.
  • Inclusivity for Different Learning Styles : Audio learners can grasp information better when it’s heard rather than read, making TTS bots an excellent tool for education-related servers.
  • Command Central : With tts command functionality, manage your server using voice commands, making server administration feel like commanding a starship.
Enabling the Power of Voice: How to Turn on TTS on Discord

  • Step 1 : Opening the Settings Menu. Kick off the journey by tapping into the user settings in your Discord app – the gateway to customization.
  • Step 2 : Diving into Text & Images. Navigate to the ‘Text & Images’ section where the speech option lies dormant, waiting to be awakened.
  • Step 3 : Activating the Speech Option. Unleash the text-to-speech feature by adjusting the necessary settings, granting your Discord server the gift of voice.

When searching for a quality TTS bot, consider one that offers a variety of voices, languages, and easy integration with minimal latency to ensure smooth playback and a delightful auditory experience.

Integrating a TTS bot means embracing accessibility, multitasking efficiency, and adding an extra layer of interaction for all server members, ensuring that everyone's voice is heard, even in text form.

The Steps to Setting Up TTS on Discord: A Walkthrough

  • Step 1: Access User Settings. Open Discord and enter the realm of user settings, where personalization begins.
  • Step 2: Locate the Accessibility Features. Within the settings, find the accessibility menu where the text-to-speech messages option eagerly awaits.
  • Step 3: Empower Your Server with TTS. Grant the necessary permissions and fine-tune the TTS functionality to your liking, preparing your server for an audible transformation.

On the Windows Discord app, using text to speech is as simple as executing a slash command or a prefix, seamlessly converting your typed words into spoken ones.

Mac users, fear not! The Discord app on macOS allows you to engage with the TTS feature, ensuring that the platform's voice resonates across operating systems.

Stay alert with TTS notifications by adjusting your notification settings, ensuring that every ping is an opportunity to listen, not just look.

Try Speechify Text to Speech

Cost: Free to try

Speechify Text to Speech is a groundbreaking tool that has revolutionized the way individuals consume text-based content. By leveraging advanced text-to-speech technology, Speechify transforms written text into lifelike spoken words, making it incredibly useful for those with reading disabilities, visual impairments, or simply those who prefer auditory learning. Its adaptive capabilities ensure seamless integration with a wide range of devices and platforms, offering users the flexibility to listen on-the-go.

Top 5 Speechify TTS Features :

High-Quality Voices : Speechify offers a variety of high-quality, lifelike voices across multiple languages. This ensures that users have a natural listening experience, making it easier to understand and engage with the content.

Seamless Integration : Speechify can integrate with various platforms and devices, including web browsers, smartphones, and more. This means users can easily convert text from websites, emails, PDFs, and other sources into speech almost instantly.

Speed Control : Users have the ability to adjust the playback speed according to their preference, making it possible to either quickly skim through content or delve deep into it at a slower pace.

Offline Listening : One of the significant features of Speechify is the ability to save and listen to converted text offline, ensuring uninterrupted access to content even without an internet connection.

Highlighting Text : As the text is read aloud, Speechify highlights the corresponding section, allowing users to visually track the content being spoken. This simultaneous visual and auditory input can enhance comprehension and retention for many users.

Is there a speech-to-text bot for Discord?

Yes, there are multiple speech-to-text bots available for Discord, which can transcribe voice messages into text.

Is there a text-to-speech bot for Discord?

Indeed, Discord is home to various text-to-speech bots that can read out text messages within your server for a more accessible and dynamic interaction.


Cliff Weitzman

Cliff Weitzman is a dyslexia advocate and the CEO and founder of Speechify, the #1 text-to-speech app in the world, totaling over 100,000 5-star reviews and ranking first place in the App Store for the News & Magazines category. In 2017, Weitzman was named to the Forbes 30 under 30 list for his work making the internet more accessible to people with learning disabilities. Cliff Weitzman has been featured in EdSurge, Inc., PC Mag, Entrepreneur, Mashable, among other leading outlets.


How to use text-to-speech on Discord


Discord is the go-to app for chatting while playing games, watching movies, or really doing anything else with a group. A big reason why is that Discord includes a long list of accessibility options, including text-to-speech. In this guide, we're going to show you how to use text-to-speech on Discord and some of the settings you can tweak.

What you need: the Discord desktop app

If you're just getting started with Discord, make sure to read our guide on how to make a Discord bot. Bots are essential for running your own server, so you'll want to have that knowledge in your back pocket. We also have a guide on how to pin a message in Discord, which is a simple and essential skill.

How to enable text-to-speech on Discord

Discord has text-to-speech enabled by default, so it's easy to get started. Although the feature is enabled out of the box, you'll need to set up when you hear text-to-speech notifications. We'll show you how to do that in the last section. For now, we're going to walk through how to confirm that text-to-speech is on.

Step 1: Open Discord and click the Settings button. It looks like a gear, and you'll find it next to your avatar in the bottom-left corner.

Step 2: Under the App Settings tab in the left menu, select Accessibility.

Step 3: Scroll to the bottom and switch the toggle on next to Allow Playback and Usage of /tts Command.

You can also set your text-to-speech rate here. We recommend leaving the setting in its default position, but you can speed up or slow down the talking rate as you like. Before closing out, select Preview to confirm that text-to-speech sounds the way you want.

After you've set up text-to-speech, you can start using it to either send messages or to have messages read to you. Before diving in, note that there's a minor difference between the Discord app and the browser version. The app includes its own unique voice for text-to-speech. If you're using the browser version, the voice will be the standard voice available in your browser instead.

Step 1: To send a text-to-speech message, type /tts before your message. The command will disappear after you send the message, but the recipient will hear it read out loud.

Step 2: To have a message read to you, hover over the message and select the three dots on the right side. Then, click Speak Message.

How to set up text-to-speech notifications on Discord

Using the method above, you can target text-to-speech to certain messages that you send or receive. You can also turn on text-to-speech for notifications, which doesn't require the /tts command or any additional steps to hear messages. When someone posts a message in a channel, you'll hear it read to you.

Discord offers three text-to-speech notification options. Here's what they are:

  • For all channels : Turns on text-to-speech for all messages in all channels you're a part of. We recommend leaving this off to avoid spam. If you have trouble reading messages, consider leaving channels you're not using to avoid a swarm of notifications.
  • For current selected channel : Turns on text-to-speech for the text channel you're currently browsing. This setting works for a specific channel, not serverwide, and it doesn't require the /tts command.
  • Never : Disables all text-to-speech across channels and servers. This will disable text-to-speech even if someone uses the /tts command.

Although it's tough to avoid spam with all text-to-speech notifications turned on, harassment is still against Discord's community guidelines. Make sure to read our guide on how to report someone on Discord if you're having trouble with spammed text-to-speech notifications.

These notification settings live in a different area than the text-to-speech options. Here's how to find them.

Step 1: Click the Settings icon in Discord. It's the gear icon in the lower-left corner of the window, next to your avatar.

Step 2: Under the App Settings tab in the left menu, select Notifications.

Step 3: Under Text-to-Speech Notifications, select the type of notifications you'd like.

Text-to-speech is a great feature in Discord, but you'll probably need to experiment with notifications to get it working how you want. Thankfully, all of the text-to-speech options are only a couple of clicks away.



Orator

The best Text to Speech bot for Discord.

Best TTS bot with custom voice, customisation, panel, logging and more.

Orator Features

Powerful TTS

Powerful Text to Speech feature with 50+ languages.

Custom Voice

Voices of famous personalities in our own Custom Voice system


Panel

Control Languages, Automated TTS Generation, Enable or Disable and many more from a customisable Panel.

Customization

Orator provides more customisation than any other TTS bot available on Discord.


With our best teammates, we provide the fastest response to anyone who needs help.

Remains online & in your voice channel 24/7.


Discord text to speech bot

  • over 100 voices
  • language translation
  • multiple users can use it as one
  • remembers your settings


Integrate Text to Speech into Your Discord Server

Our AI voices can convert your Discord messages into spoken audio in a single click.


Current solutions are low quality...

Until now, Discord messages have been limited to text or low-quality voiceovers.


Time Constraints

Creating a Discord audio message with a human voiceover can be time-consuming and expensive.


Message Quality

Finding the right voiceover talent to deliver a high-quality Discord message can be difficult.


Multilingual Discord Messages

Producing Discord messages in multiple languages can be resource-intensive and costly.

Our AI voices can transform your Discord messages into immersive experiences that captivate audiences.

Streamline your Discord message creation.


With ElevenLabs, generate an entire Discord message with a single click. Choose from diverse voices and styles to match your narrative needs without the overheads of traditional production.

Narrative Flexibility


From fiction to educational material, our AI voices adapt to any genre, offering listeners an engaging and varied listening experience.


Premium Voices for Every Discord Message

Our vast library of AI-generated voices brings Discord messages to life, offering an immersive experience that resonates with audiences.

  • Dynamic Range of Voices: Choose from a wide selection of tones, accents, and styles to best suit your Discord message.
  • Narration Consistency: Maintain consistent voice quality and style throughout your Discord messages, ensuring a professional finish.
  • Cost-Effective Production: Reduce production costs while expanding your Discord message catalogue.
  • Voice Customization: Personalize voices to reflect specific Discord message contexts.

Discord Message Production Simplified

Our AI voice technology streamlines the Discord message creation process, enabling you to focus on content and delivery.

  • Full Control Over Production: Direct the narrative flow, pacing, and emphasis to align with your vision.
  • High-Quality Sound: Deliver crystal-clear audio that meets the standards of Discord message enthusiasts.
  • Integration with Discord Message Tools: Easily integrate with your existing Discord message workflows.

Bring Discord Messages to Life in Multiple Languages

Expand your audience by offering Discord messages in various languages, all with the same engaging, lifelike quality.

  • Global Accessibility: Make your Discord messages available and accessible to listeners worldwide.
  • Diverse Language Options: Our AI voices cover a broad range of languages, catering to a global market.
  • Cultural Relevance: Ensure cultural nuances are captured in every Discord message with localized accents and dialects.

Use our AI voice generation to create Discord messages that resonate with your community.

Discord Users

Rapidly reproduce your Discord message with a variety of voices and languages, reaching more listeners than ever before.

Discord Mods

Bring your Discord message to life with voices that resonate with your vision and enhance the storytelling.

Community Managers

Achieve high-quality soundscapes for Discord messages with the precision and flexibility of AI.

Powerful game localization in 29 Languages

Expand your game's reach with our versatile language offerings, ensuring your game resonates with players around the world.

How to use AI to create Discord TTS messages


Find our TTS bot on Discord

Find our TTS bot on Discord and invite it to your server.


Add a prefix

Add the prefix to the beginning of your message.


Type your message

Type your message in the channel you want to send it from.


Send your message

Send your message and our TTS bot will read it aloud.

Explore other integrations and solutions


Text to speech for YouTube videos

Harness the power of ElevenLabs' AI voices to create captivating and diverse YouTube content, making your videos stand out in the crowded digital landscape.


Text to speech for Podcasts

Elevate your podcasting experience with ElevenLabs' AI-generated voices, offering a range of tones, accents, and emotions for a dynamic and engaging auditory experience.

Frequently asked questions

  • How realistic are the AI voices for Discord messages?
  • What is Discord text to speech?
  • How do I record a Discord message using AI?
  • How much does it cost to create a Discord message using AI?
  • How can I find the best voice to narrate my Discord message or article?


Analyzing digital propaganda and conflict rhetoric: a study on Russia’s bot-driven campaigns and counter-narratives during the Ukraine crisis

  • Original Article
  • Open access
  • Published: 23 August 2024
  • Volume 14, article number 170 (2024)


  • Rebecca Marigliano,
  • Lynnette Hui Xian Ng &
  • Kathleen M. Carley

The dissemination of disinformation has become a formidable weapon, with nation-states exploiting social media platforms to engineer narratives favorable to their geopolitical interests. This study delved into Russia's orchestrated disinformation campaign across three time periods of the 2022 Russian-Ukraine War: its incursion, its midpoint and the Ukrainian Kherson counteroffensive. These periods are marked by a sophisticated blend of bot-driven strategies to mold online discourse. Utilizing a dataset derived from Twitter, the research examines how Russia leveraged automated agents to advance its political narrative, shedding light on the global implications of such digital warfare and the swift emergence of counter-narratives to thwart the disinformation campaign. This paper introduces a methodological framework that adopts a multiple-analysis model approach, initially harnessing unsupervised learning techniques, with TweetBERT for topic modeling, to dissect disinformation dissemination within the dataset. Utilizing Moral Foundation Theory and the BEND Framework, this paper dissects social-cyber interactions in maneuver warfare, thereby understanding the evolution of bot tactics employed by Russia and its counterparts within the Russian-Ukraine crisis. The findings highlight the instrumental role of bots in amplifying political narratives and manipulating public opinion, with distinct strategies in narrative and community maneuvers identified through the BEND framework. Moral Foundation Theory reveals how moral justifications were embedded in these narratives, showcasing the complexity of digital propaganda and its impact on public perception and geopolitical dynamics. The study shows how pro-Russian bots were used to foster a narrative of protection and necessity, thereby seeking to legitimize Russia's actions in Ukraine whilst degrading both NATO and Ukraine's actions. Simultaneously, the study explores the resilient counter-narratives of pro-Ukraine forces, revealing their strategic use of social media platforms to counteract Russian disinformation, foster global solidarity, and uphold democratic narratives. These efforts highlight the emerging role of social media as a digital battleground for narrative supremacy, where both sides leverage information warfare tactics to sway public opinion.


1 Introduction

In today’s age of social media, state actors utilize digital propaganda and manipulation to shape narratives, and foster discord to push their geopolitical agendas. This form of propaganda is a calculated and coordinated process designed to disseminate and amplify deceptive content across multiple social media platforms (Jowett and O’Donnell 2012 ). The Russian state’s persistent efforts to create confusion and chaos within social media mediums to achieve their goal is a testament to the systematic manipulation that they employ (Council 2022 ). For instance, during the Crimea crisis in 2014, the Russian state employed an extensive network of bots and trolls to flood social media platforms with pro-Russian stances, spreading misinformation about the situation in Ukraine and creating a narrative that justified its annexation of Crimea (Helmus et al. 2018 ). This strategy exemplifies the principles of social cybersecurity, which focuses on understanding and forecasting cyber-mediated changes in human behavior, as well as social, cultural, and political outcomes (Carley 2020 ).

With diminishing international prestige, the Russian state employs a situational and diligence-driven approach, meticulously exploiting vulnerabilities to their advantage (Berls 2019 ). This approach is demonstrated by Russia’s Internet Research Agency (IRA), known for its role in online influence operations. The IRA carefully crafts messages that are tailored to specific audiences and use data analytics to maximize the impact of their campaigns (Robert S. Mueller 2019 ), a tactic that aligns with the methodologies used in social cybersecurity for analyzing digital fingerprints of state-led propaganda campaigns (Carley 2020 ). This is further shown in Syria where Russian forces used disinformation campaigns as a tool to weaken the opposition forces and shape international perception of the conflict, including spreading false narratives about the actions of the rebel groups and the humanitarian situation (Paul and Matthews 2016 ).

Putin’s regime’s information operation campaigns against Ukraine are a significant development in their warfare strategy, with disinformation campaigns becoming an integral part of their military operations. The development of Russia’s “Gerasimov Doctrine,” which emphasizes the role of psychological warfare, cyber operations, and disinformation campaigns, marks this new shift in their military strategy (Fridmam 2019 ). This doctrine emphasizes the importance of non-military tactics, such as disinformation campaigns to achieve strategic military objectives. The Russian state has continuously increased the use of cyber warfare in its military objectives, utilizing social media platforms to spread its objectives through disinformation campaigns to curate public opinion. Through bot-driven strategies, the Russian state aims to polarize communities and nations, destabilizing political stability while undermining democratic values under a facade of disinformation operations (Schwartz 2017 ).

The escalation of Russian disinformation operations following the February 2022 invasion of Ukraine was met with a robust Ukrainian response. Ukraine has strengthened its information and media resilience by establishing countermeasures against Russian narratives, including disseminating accurate information and regulating known Russian-affiliated media outlets (Paul and Matthews 2016 ).

To understand the complexities of information warfare campaigns and counter-narratives to those campaigns, this research studies the impact of these bot-driven strategies on social media platforms. The study uses a blend of stance analysis, topic modeling, network dynamics, and information warfare analysis, integrated with principles of social cybersecurity, to understand the prominent themes, and influence that bot communities have on platforms like Twitter, while also exposing the most influential actors and communication patterns.

Research Question: The central question of this investigation is: How have bot-driven strategies influenced the landscape of digital propaganda and counter-narratives? We follow up by inquiring: what are the implications of these strategies for Ukraine's political and democratic landscape and the broader geopolitical arena?

This paper seeks to understand the role that bot communities have in the propagation and amplification of these narratives that nation-states are pushing. By examining only bot-driven ecosystems and their effectiveness in promoting Russia’s political agenda through narrative manipulation, we aim to assess the global impact that these bot communities have in achieving their overall strategic objectives.

It is important to note that when referring to ’Russia’ in this paper, we distinguish between the ’Russian state’, ’Putin’s regime’, and the broader population of Russia. The term ’Russian state’ refers to the governmental and institutional structures of Russia. ’Putin’s regime’ specifically refers to the current administration and its policies under President Vladimir Putin. Meanwhile, ’Russia’ encompasses a diverse populace, many of whom may not align with Putin’s strategies or values but are unable to protest due to political repression. This distinction is crucial for understanding the multifaceted nature of Russian involvement in information warfare and the varied perspectives within the country.

The study will expand this research to include an analysis of the counter-narrative of the overwhelming support for Ukraine, which has also been overwhelmingly exemplified within these bot communities. Utilizing BotHunter, a tool specifically designed to detect bot activity (Beskow and Carley 2018 ), this research will identify and analyze the bot networks that have been central to the propagation of this support. This counter-narrative, emerging from the data and shaped by bot-driven dialogues, represents a significant aspect of the overall response to the conflict.

By integrating the following: the BEND framework to analyze information warfare strategies, TweetBERT’s domain-specific language modeling, and the moral compass provided by Moral Foundations Theory, with insights from social cybersecurity, this study aims to provide a comprehensive understanding of the strategic narratives and counter-narratives in the wake of the conflict. The goal is to contribute a detailed analysis of what these effects have on geopolitical stability and to delineate the methods by which narratives can be both a weapon of division and a shield of unity.

2 Literature review

Russian disinformation tactics

The Russian state’s disinformation tactics have substantially evolved over the last several years, especially since the 2008 incursion into Georgia. Their tactics intensified during the 2014 annexation of Crimea and have continued vigorously throughout the ongoing conflicts in Ukraine and Syria. These tactics are not only a continuation of Cold War-era methods but also leverage the vast capabilities of modern technology and media (Paul and Matthews 2016 ). The digital landscape has become a fertile ground for Russia to deploy an array of propaganda tools, including the strategic use of bots, which create noise and spread disinformation at an unprecedented scale (Politico 2023 ).

The modus operandi of Russian disinformation has been aptly termed “the firehose of falsehood,” characterized by high-volume and multichannel distribution (Paul and Matthews 2016 ). This approach capitalizes on the sheer quantity of messages and utilizes bots and paid trolls to amplify their reach, not only to disrupt the information space but also to pose a significant challenge to geopolitical stability (Paul and Matthews 2016 ). By flooding social media platforms with a barrage of narratives, the Russian state ensures that some of its messaging sticks, even if they are contradictory or lack a commitment to objective reality (Organisation for Economic Co-operation and Development 2023 ). This relentless stream of content is designed not just to persuade but to confuse and overpower the audience, making it difficult to discern fact from fiction, demonstrating how narratives can be weaponized to create division.

Moreover, these tactics exploit the psychological foundations of belief and perception. The frequency with which a message is encountered increases its perceived credibility, regardless of factual accuracy (Paul and Matthews 2016 ). Russian bots contribute to this effect by continuously posting, re-posting, and amplifying content, thereby creating an illusion of consensus or support for viewpoints. This strategy has the potential to influence public opinion, thereby extending the reach of disinformation campaigns (Politico 2023 ). This demonstrates how narratives may undermine democratic values and geopolitical stability.

Recent research has expanded on these findings, highlighting the sophisticated nature of bot-driven propaganda. Chen and Ferrara ( 2023 ) present a comprehensive dataset of tweets related to the Russia-Ukraine conflict, demonstrating how social media platforms like Twitter have become critical battlegrounds for influence campaigns (Chen and Ferrara 2023 ). Their work highlights the significant engagement with state-sponsored media and unreliable information sources, particularly in the early stages of the conflict, which saw spikes in activity coinciding with major events like the invasion and subsequent military escalations (Chen and Ferrara 2023 ). The use of bots in this military campaign is notable for their ability to operate around the clock, mimic human behavior, and engage with real users (Politico 2023 ). These bots are programmed to push Russian narratives, attack opposing viewpoints, and inflate the appearance of grassroots support (Paul and Matthews 2016 ). These bots are a key component in Russia’s strategy to structure public opinion and influence political outcomes. Note that we use the terms the “Russian state” and “Putin’s regime” in this article to indicate the group of people who align with the political values of the regime.

Russian disinformation efforts have shown a lack of commitment to consistency, often broadcasting contradictory messages that may seem counter-intuitive to effective communication (Paul and Matthews 2016 ). However, this inconsistency can be a tactic, as it can lead to uncertainty and ambiguity, ultimately challenging trust in reliable information sources. By constantly shifting narratives, Russian propagandists keep their opponents off-balance and create a fog of war that masks the truth (Organisation for Economic Co-operation and Development 2023 ). This strategy emphasizes the dual role of narratives in geopolitical conflicts, serving as a shield of unity for one’s own political agenda whilst being a weapon of division against adversaries.

The advancement of Russian disinformation tactics represents a complex blend of traditional influence strategies and the use of modern technological tools. Russia has crafted a formidable approach to push propaganda narratives by leveraging bots, social media platforms, and the vulnerabilities of human psychology (Alieva et al. 2022 ). The international community, in seeking to counter these tactics needs to understand the threat that this poses and develop comprehensive strategies to defend against the flood of disinformation that undermines democratic processes and geopolitical stability (Organisation for Economic Co-operation and Development 2023 ).

Information warfare analysis

Information warfare in social media is the strategic use of social-cyber maneuvers to influence, manipulate, and control narratives and communities online (Blane 2023 ). It is used to manipulate public opinion, spread disinformation, and create divisive discourses. This form of warfare employs sophisticated strategies to exploit the interconnected nature of social networks and the tendencies of users to consume and share content that aligns with their existing beliefs (Prier 2017 ). The strategy of “commanding the trend” in social media involves leveraging algorithms to amplify specific messages or narratives (Prier 2017 ). This is achieved by tapping into existing online networks, utilizing bot accounts to create a trend or messaging, and then rapidly disseminating that narrative. This exploits the natural inclination towards homophily-the tendency of individuals to associate and bond with others over like topics (Prier 2017 ). Social media platforms enable this by creating echo chambers where like-minded users share and reinforce each other’s views. Consequently, when a narrative that is disinformation aligns with the user’s pre-existing beliefs, it is more likely accepted and propagated within these networks (Prier 2017 ).

Peng ( 2023 ) adds to this understanding by conducting a cross-platform semantic analysis of the Russia-Ukraine war on Weibo and Twitter, showing how platform-specific factors and geopolitical contexts shape the discourse (Peng 2023 ). The study found that Weibo posts often reflect the Chinese government’s stance, portraying Russia more favorably and criticizing Western involvement, while Twitter hosts a more diverse range of opinions (Peng 2023 ). This comparative analysis highlights the role of different social media environments in influencing public perception and the spread of narratives, emphasizing the multifaceted nature of information warfare across platforms.

The Russian state illustrates the effective use of social media in information warfare, where they have used it for social media propaganda, creating discourse and confusion, and manipulating both supporters and adversaries through targeted messaging (Brown 2023 ). The goal is to exploit existing social and political divisions, amplifying and spreading false narratives to manipulate public opinion and discredit established institutions and people (Alieva et al. 2022 ).

Social cybersecurity, integrating social and behavioral sciences research with cybersecurity, aims to understand and counteract these cyber-mediated threats, including the manipulation of information for nefarious purposes (National Academies of Sciences, Engineering, and Medicine 2019). The emergence of social cybersecurity science, focusing on the development of scientific methods and operational tools to enhance security in cyberspace, highlights the need for multidisciplinary approaches to identify and mitigate cyber threats effectively.

There are many methods and frameworks to analyze information warfare strategies and techniques. One such is SCOTCH, a methodology for rapidly assessing influence operations (Blazek 2023 ). It comprises six elements: source, channel, objective, target, composition, and hook. Each plays a crucial role in the overall strategy of an influence campaign. Source identifies the originator of the campaign, Channel refers to the platforms and features used to spread the narrative, Objective is the goal of the operation, Target defines the intended audience, Composition is the specific language used, and Hook is the tactics utilized to exploit the technical mechanisms.

While SCOTCH provides a structured approach to characterizing influence operations, focusing on the operational aspects of campaigns, the BEND framework offers a more nuanced interpretation of social-cyber maneuvers. BEND categorizes maneuvers into community and narrative types, each with positive and negative aspects, providing a comprehensive view of how online actors manipulate social networks and narratives (Blane 2023 ). This framework is particularly effective in analyzing the subtle dynamics of influence operations within social media networks, where the nature of communication is complex and multi-layered (Ng and Carley 2023b ). Therefore, when deliberating what framework to utilize, while SCOTCH excels in operational assessment, BEND offers greater insights into the social and narrative aspects of influence operations, making it more suitable for analyzing the elaborate nature of social media-based information warfare operations.

Moral Realism Analysis

Moral realism emphasizes the complex interplay between moral and political beliefs and suggests that political and propaganda narratives are not only policy tools but also reflect and shape societal moral beliefs (Kreutz 2021). This perspective suggests that the moral justifications embedded in political narratives and propaganda, both Russian and Ukrainian, are likely shaped by deeper political ideologies, influencing how these narratives are constructed and perceived on the global stage (Hatemi et al. 2019). Similar findings can be seen where political ideologies significantly influenced the framing of COVID-19 vaccines; this understanding becomes essential in the geopolitical context, particularly in the Russian-Ukraine conflict (Borghouts et al. 2023).

Developed by social psychologists, Moral Foundations Theory delineates human moral reasoning into five foundational values: Care/Harm, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation, with Liberty/Oppression later proposed as a sixth foundation (Theory 2023). The theory has been applied to understand the moral reasoning of political narratives and public opinion. For instance, it helps to explain the moral reasoning behind major political movements and policy decisions, emphasizing how different groups may prioritize certain moral values over others (Kumankov 2023). An application of the theory in a study on attitudes towards the COVID-19 vaccine reveals that liberals and conservatives expressed different sets of moral values in their discourse (Borghouts et al. 2023).

In the context of the Russian-Ukraine War, moral realism is an essential method in understanding international politics. Russia’s narrative often emphasizes the protection of Russian speakers in Ukraine, which can be interpreted as an appeal to the Loyalty/Betrayal Foundation (Dill 2022 ). On the other hand, Ukraine’s emphasis on self-determination and resistance to aggression may resonate more with Care/Harm and Fairness/Cheating foundations (Polinder 2022 ).

While moral realism hasn’t been directly applied to analyzing information warfare discourse, the COVID-19 case study shows the impact of understanding moral reasoning on political narratives (Kumankov 2023 ). Moral realism, in the context of information warfare for this conflict, provides a way to analyze the moral justifications and narratives used by Russia and Ukraine. We can then identify the ethical implications and the underlying values that they are trying to promote. By integrating moral realism, we can begin to understand the effectiveness that these narratives have in shaping public opinion and influencing international response to the Russian-Ukraine conflict.

Topic Modeling

Topic modeling is a machine learning technique used to discover hidden thematic structures within document collections, or “corpora” (Hong and Davison 2011 ). This technique allows researchers to extract and analyze dominant themes from large datasets, such as millions of tweets, to understand public discourse and the spread of propaganda or counter-propaganda narratives. It is particularly useful for examining social media data, where bots often attempt to control narratives (Hong and Davison 2011 ).

The topic modeling process involves several steps:

Data Collection: Gathering tweets related to key events and statements.

Pre-processing: Cleaning the data by removing noise, such as stop words, URLs, and user mentions, to focus on relevant content.

Vectorization: Transforming the pre-processed text into a numerical form usable by statistical models (Ramage et al. 2009).

Algorithm Application: Using methods like Latent Dirichlet Allocation (LDA) to identify topics (Ramage et al. 2009). Each topic is characterized by a distribution of words, highlighting topics relevant to our research.

For Twitter data, topic modeling faces unique challenges due to the platform’s character limit and the use of non-standard language like hashtags and abbreviations. This requires models that can capture the concise and often informal nature of tweets. The conflict-specific jargon, hashtags associated with the war, and the multilingual nature of the involved parties make traditional models like Document-Term Matrix (DTM), Term Frequency-Inverse Document Frequency (TF-IDF), and LDA less effective (Qudar and Mago 2020 ).

Ultimately, we chose TweetBERT, a variant of the BERT (Bidirectional Encoder Representations from Transformers) model pre-trained on Twitter data, which is designed to handle the peculiarities of Twitter’s text (Qudar and Mago 2020 ).
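For illustration, a minimal sketch of the vectorization and algorithm-application steps is shown below using scikit-learn's LDA; the sample tweets, parameter values, and library choice are assumptions made for demonstration, not the study's actual pipeline, which relied on TweetBERT together with the pre-processing described in Sect. 3.3.

```python
# Illustrative sketch only: generic topic modeling with scikit-learn's LDA.
# The study itself used TweetBERT; the sample tweets and parameters here are
# placeholders, not the paper's data or settings.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "russian military buildup reported near the border",
    "ukraine launches counteroffensive in the kherson region",
    "new sanctions announced in response to the invasion of ukraine",
]

# Vectorization: turn cleaned tweets into a document-term matrix.
vectorizer = CountVectorizer(stop_words="english", max_features=5000)
doc_term_matrix = vectorizer.fit_transform(tweets)

# Algorithm application: fit LDA and list the top words per topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term_matrix)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
```

In the study itself, this bag-of-words stage is effectively replaced by TweetBERT's contextual representations, which are better suited to hashtags, abbreviations, and the multilingual character of the conflict discourse.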

Figure 1: Data pre-processing framework

3 Data and methodology

3.1 Data collection

The data for this study was sourced from Twitter, a widely used social media platform known for its role in news consumption across global regions including Western, African, and Asian countries (Orellana-Rodriguez and Keane 2018 ). Unlike other platforms with narrower user bases, Twitter’s widespread popularity enabled us to explore multiple sources that are either pro-Ukraine or pro-Russian support.

We utilized a pre-existing curated Twitter dataset that focused on English-language content featuring specific keywords: “Russian invasion,” “Russian military,” “military buildup,” and “invasion of Ukraine.” The dataset covered the period from January 2022 to November 2022, aligning with critical events including the Russian invasion of Ukraine in February 2022, the Russian advancement into Ukraine in May 2022, and the Ukrainian Kherson counteroffensive in August 2022.

Our dataset consisted of 4.5 million tweets. In our analysis, we concentrated on a refined subset of 1.6 million tweets consisting exclusively of bot-generated content. This focus stems from the increasing recognition of the role bot communities play in information warfare, often surpassing that of human interactions. Bots, programmed to amplify specific narratives and disinformation, can operate continuously, creating an echo chamber effect that significantly distorts public perception (Smith et al. 2021). This approach enabled us to thoroughly investigate interactions within bot communities by excluding human conversations.

We employed temporal segmentation to focus on specific high-impact timeframes: (1) the Russian Invasion (08 February–15 March 2022), (2) the Mid-point (15 May–15 June 2022), and (3) the Kherson counteroffensive (20 July–30 August 2022).

The Russian invasion (08 February–15 March 2022): this marked the initial phase of the invasion, characterized by the rapid advance of Russian forces into Ukraine, including the capture of key cities and regions (Meduza 2022 ). This timeframe marked the beginning of intense fighting, particularly around Kyiv and in the Donbas region, and significant civilian displacement and casualties (TASS 2022 ). International responses included widespread condemnation and the imposition of sanctions against Russia.

The mid-point of the escalation of the war (15 May–15 June 2022): during this period, the conflict transitioned into a prolonged war of attrition. Russian forces focused on consolidating control in the east and south (News 2022 ), facing stiff resistance from Ukrainian forces (Axe 2022 ). This timeframe saw significant urban warfare and efforts by Russia to absorb occupied territories. The international community continued to respond with humanitarian aid to Ukraine and further sanctions on Russia (Desk 2024 ).

The Ukrainian Kherson counteroffensive (20 July–30 August 2022): this phase was marked by a strategic shift with Ukraine launching successful counteroffensives, particularly in the Kherson region (Blair 2022 ). Ukrainian forces made significant territorial gains, reversing some of Russia’s earlier advances (Sands and Lukov 2022 ). This period highlighted Ukraine’s resilience and the effectiveness of its strategy, significantly impacting the course of the war.

3.2 Impact of Twitter policies on data collection and propaganda campaigns

Our analysis considers the influence of Twitter’s content moderation policies on the visibility and spread of propaganda during the 2022 Russian-Ukraine War. Initially, Twitter’s policies aimed to curb misinformation, using automated algorithms and human reviewers to filter out harmful content (de Keulenaar et al. 2023 ).

However, significant shifts occurred with Twitter’s change in ownership in late 2022. Under Elon Musk’s management, the platform adopted a more lenient approach to misinformation, emphasizing “free speech” and ceasing strict enforcement against misleading information (Kern 2022). This included stopping the enforcement of its COVID-19 misinformation policy, which had previously led to many account suspensions and content removals.

These policy changes impacted our dataset. During data collection, tweets with overt disinformation, hate speech, or content inciting violence were more likely to be blocked, while tweets with subtle propaganda or opinion-based misinformation were more likely to remain. This selective enforcement likely skewed our data, with more pro-Ukraine stances observed early in 2022 and a rise in pro-Russian stances later in the year. Fewer pro-Russian hashtags were found and used early in 2022 compared to later.

By examining these policy shifts and their impacts, we can understand how moderation influenced narrative visibility. Initial strict policies likely contributed to the dominant pro-Ukraine stance early on, while later leniency allowed for increased pro-Russian content. This context is critical for interpreting temporal changes in propaganda and narrative dominance within our dataset.

3.3 Data pre-processing

In this section, we describe our data pre-processing framework, which involves extraction of bot tweets, removal of duplicate tweets, data cleaning, and data processing (i.e., stance detection, topic modeling, stance analysis). Figure 1 illustrates the data processing framework used in this paper.

The data originally collected through the Twitter API contained text not needed for further analysis. To remove these texts, we employed comprehensive data pre-processing methods to enhance the effectiveness of our study methodology. As depicted in Fig. 1, our pipeline began with a bot detector to ensure that only bot tweets were analyzed, emphasizing our focus on automated accounts, which are a significant component in the spread of digital propaganda. We then conducted a temporal analysis to understand bot behavior over three specific timeframes, which allowed us to track the evolution of narratives in sync with the development of the conflict. A critical step was the use of the NLTK library, known for its text-handling features. Pre-processing included transforming tweets into a structured format via tokenization, removing stop words, and applying lemmatization techniques.

Tokenization, a key process in our methodology, involved breaking down textual content into discrete tokens such as words, terms, and sentences. This turned unstructured text into a structured form suitable for numerical processing, enabling a more concentrated stance analysis. NLTK’s ability to filter out irrelevant words and stop words greatly shaped our focus on significant textual elements from the dataset. Additionally, RegexpTokenizer, part of the NLTK library, allowed us to customize the tokenization process by defining specific tokenization patterns, making it particularly useful for handling the unique characteristics of social media text.
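To make the pre-processing step concrete, the following is a minimal sketch of the tokenization, stop-word removal, and lemmatization pipeline described above, using NLTK’s RegexpTokenizer. The regular expression, resource downloads, and example tweet are illustrative choices rather than the exact configuration used in the study.

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import RegexpTokenizer

# One-time downloads of the lexical resources used below.
nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)

# Keep words, hashtags, and @-mentions as single tokens (illustrative pattern).
tokenizer = RegexpTokenizer(r"[#@]?\w+")
stop_words = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def preprocess(tweet):
    """Lower-case, tokenize, drop stop words, and lemmatize one tweet."""
    tokens = tokenizer.tokenize(tweet.lower())
    tokens = [t for t in tokens if t not in stop_words]
    return [lemmatizer.lemmatize(t) for t in tokens]

print(preprocess("Russian forces are advancing towards Kyiv #StandWithUkraine"))
```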

3.4 Bot detection

Bot detection algorithms utilize various methods to identify automated accounts on social media platforms. One simple approach is analyzing temporal features, which involves examining patterns in the timing and frequency of posts (Chavoshi et al. 2016). Bots often post at higher rates and with more regular intervals than human users. Another, more advanced method involves deep-learning-based algorithms that assess complex patterns in account behavior, language use, and network interactions (Ng and Carley 2023a). These algorithms distinguish bots from human users by learning from large datasets of known bot and human behaviors.

To further refine the dataset for this study, we utilized a bot detector to parse through the collection, identifying and isolating accounts likely to be automated bots. BotHunter, a hierarchical supervised machine learning algorithm (Beskow and Carley 2018), differentiates automated bot agents from human users via features such as post texts, user metadata, and friendship networks. The algorithm can be run on pre-collected data rather than requiring live input. Its proficiency in processing data, particularly during large-scale runs, makes BotHunter a valuable tool for bot detection.

We refined our initial dataset, which consisted of 4.5 million tweets, down to a more manageable subset of over 1.5 million tweets. Within this subset, we identified and categorized accounts as probable bots if they achieved a BotHunter score of 0.7 or higher. This threshold was established based on findings from a previous systematic study that determined optimal values for bot detection algorithms. We used both original tweets and retweets because these two types of tweets, in totality, represent the content disseminated by bots, thereby enhancing the precision of our study’s insights into automated activity on Twitter (Ng et al. 2022). By focusing on accounts that surpassed this reliability score, we concentrated our analysis on the influence and behavior of bots within the discourse of the Russian-Ukraine conflict.
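As a hedged illustration of the thresholding step, the sketch below assumes that BotHunter scores have already been computed and stored alongside each tweet; the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical table of tweets with pre-computed BotHunter scores.
tweets = pd.read_csv("tweets_with_bot_scores.csv")

BOT_THRESHOLD = 0.7  # threshold reported above

# Keep original tweets and retweets from accounts scored as probable bots.
bot_tweets = tweets[tweets["bothunter_score"] >= BOT_THRESHOLD].copy()

print(f"{len(bot_tweets):,} of {len(tweets):,} tweets attributed to probable bots")
```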

3.5 Stance detection

After undergoing pre-processing, the dataset was parsed through the NetMapper software to enhance narrative understanding. This software contains lexicons of language traits in over 40 languages, including those studied within this work. These lexicons enable insights into stance dynamics that enhance later stance analyses.

A major step of our stance analysis involved classifying tweets based on their stances, which were indicated using specific hashtags. We adopted the hashtag propagation method, which entails identifying and categorizing the 130,000 tweets according to their usage of pro-Russian (including anti-Ukraine stance) and pro-Ukraine (including anti-Russian stance) hashtags. The hashtag propagation method involves the tracking and analysis of network interaction among Twitter users based on specific hashtags (Darwish et al. 2023 ). This method identifies clusters of users sharing similar hashtags, revealing their stances and associated hashtags. Examining these networks allows us to understand the predominant stances and interactions within each group (Darwish et al. 2023 ).

For effective hashtag selection representing both pro-Russian and pro-Ukraine stances, two criteria were applied: first, the exclusivity of the hashtag to a specific stance pole, and second, its prevalence, to ensure reliable agent stance detection. Initially, the process involved sorting hashtags by frequency to identify probable pro-Russian and pro-Ukraine stances. Subsequently, the selected hashtags were examined further using network analysis to confirm their exclusive association with the intended stance and to uncover any additional related hashtags. We manually reviewed a few hundred tweets for specific hashtags within each timeframe, identifying over 2,000 unique hashtags to categorize tweets according to their stances.
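A minimal sketch of the first criterion, ranking hashtags by frequency before the manual and network-based exclusivity checks, is shown below; the input table and column name are assumptions carried over from the bot-filtering sketch.

```python
import re
from collections import Counter

import pandas as pd

bot_tweets = pd.read_csv("bot_tweets.csv")  # hypothetical filtered tweet table
hashtag_pattern = re.compile(r"#\w+")

counts = Counter()
for text in bot_tweets["text"]:
    counts.update(tag.lower() for tag in hashtag_pattern.findall(text))

# The most frequent hashtags become candidates for manual review and
# network analysis before being assigned to a stance pole.
for tag, n in counts.most_common(20):
    print(tag, n)
```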

By accurately classifying tweets into pro-Russian or pro-Ukraine stances, we can track shifts in public opinion, facilitating the identification of misinformation campaigns and an understanding of how digital solidarity or opposition is manifested. Furthermore, stance detection helps show bots’ influence in shaping narratives, providing a more comprehensive view of the conflict.

3.6 Validation of stance analysis and bot detection

To ensure the reliability and accuracy of our stance analysis, we utilized the TweetBERT model, pre-trained on a large dataset of tweets annotated by human coders. This pre-training involved rigorous manual annotations to capture the nuances of stance, as recommended by Song et al. ( 2020 ). The human annotations provided a robust foundation for training the model, allowing it to accurately classify stances in the large dataset of tweets related to the Russia-Ukraine conflict.

Our validation approach involved several key steps to ensure the robustness of the stance analysis:

Human Annotations The stance analysis model was initially trained using manually annotated tweets. We reviewed a few hundred tweets for specific hashtags within each timeframe, identifying over 2,000 unique hashtags to categorize stances. This manual validation ensured accurate stance detection, providing a solid foundation for sentiment analysis with examples of positive, negative, and neutral stances.

Sample Size and Sampling Methods Our dataset comprised over 1.6 million tweets. We used stratified random sampling to select a representative subset for manual annotation, ensuring the sample reflected the diversity of stances and topics in the full dataset.

Quality Assurance in Pre-processing During data pre-processing, we used the NLTK library for tokenization, stop word removal, and lemmatization. This ensured clean and consistent text data for accurate sentiment analysis. Removing stop words like ’the’, ’is’, and ’in’ streamlined the dataset, allowing algorithms to focus on words with substantial emotional or contextual weight. Lemmatization ensured different forms of a word were analyzed as a single entity, enhancing sentiment assessment consistency.

Validation of Stance and Bot Analysis We drew a random subset of tweets and labeled them as pro-Russia, pro-Ukraine, or neutral, then compared these labels to the model’s predictions. This step ensured that our analysis was reliable and accurately reflected the sentiments expressed in the tweets. For stance detection, we achieved an accuracy of 91.28% for pro-Ukraine stances and 84.78% for pro-Russian stances, highlighting the model’s strength in identifying the stance of tweets related to the Russia-Ukraine conflict. We also manually reviewed a subset of accounts labeled as bots by the model, examining activity patterns, content, and other indicators to confirm the classification. We analyzed only accounts with a bot probability score above 0.7, ensuring high confidence in the bot detection results. The results of this validation, along with the comparison between human annotations and model predictions for both stances and bot detection, can be found in Appendices A and B.
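The sketch below illustrates, under stated assumptions, how the stratified sampling and the human-versus-model comparison described in these steps could be wired together; all file and column names are hypothetical, and the sampling fraction is only an example.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

tweets = pd.read_csv("bot_tweets_labeled.csv")  # columns: text, stance, timeframe

# Proportional stratified sample across stance and timeframe strata,
# drawn for manual annotation.
sample = (
    tweets.groupby(["stance", "timeframe"], group_keys=False)
          .sample(frac=0.001, random_state=42)
)

# After manual annotation, human labels are compared with model predictions.
annotated = pd.read_csv("annotated_sample.csv")  # columns: human_label, model_prediction
for stance in ["pro-Ukraine", "pro-Russian"]:
    subset = annotated[annotated["human_label"] == stance]
    acc = accuracy_score(subset["human_label"], subset["model_prediction"])
    print(f"{stance}: {acc:.2%} agreement with human annotation")
```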

3.7 Topic modeling

Topic modeling uses algorithms to sift through large text datasets, like social media posts, to identify recurring themes or topics. In our research, it is crucial to understand the narratives, sentiments, and discussions on Twitter during the Russian-Ukraine conflict. This method helps pinpoint prominent themes, phrases, and words characterizing the war’s narratives.

In this context, topic modeling shows how different actors within the bot community shape the conversation. By tracking topics or hashtags over time, we can identify misinformation campaigns, state-sponsored propaganda, and grassroots movements, often correlating with events like military escalations or diplomatic negotiations.

Twitter’s informal language and brevity pose challenges for traditional NLP models like BERT and BioBERT (Qudar and Mago 2020 ). TweetBERT, designed for large Twitter datasets, handles these challenges by analyzing text that deviates from standard grammar and includes colloquial expressions. It effectively detects trends and movements, making it ideal for analyzing diverse datasets (Qudar and Mago 2020 ).

TweetBERT’s capabilities are vital for this study, which focuses on Russia’s bot-driven campaigns and Ukraine’s counter-narratives. By using TweetBERT, we can uncover attempts at narrative manipulation and assess the impact of bot-driven campaigns on public opinion and geopolitical dynamics. We further used word clouds to explore the strategic deployment of themes and terms by bots during key conflict phases.
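The word clouds themselves can be generated directly from hashtag frequencies; the snippet below is a small sketch using the `wordcloud` package, with purely illustrative counts standing in for the frequencies computed from the dataset.

```python
import matplotlib.pyplot as plt
from wordcloud import WordCloud

# Illustrative hashtag frequencies for one stance and timeframe.
frequencies = {
    "#standwithukraine": 5200,
    "#putinwarcriminal": 3100,
    "#stoprussianaggression": 2400,
    "#saveukraine": 1800,
}

wc = WordCloud(width=800, height=400, background_color="white")
wc.generate_from_frequencies(frequencies)

plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
```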

This approach shows how bots exploit social media algorithms, favoring content that generates interaction. Consequently, bots create and perpetuate echo chambers. Topic modeling with TweetBERT provides insights into how information warfare campaigns are conducted, demonstrating how bots systematically disseminate polarizing content to influence public opinion.

3.8 BEND framework and moral foundations theory integration

The BEND framework offers a robust method for interpreting social-cyber maneuvers in information warfare, distinguishing between community and narrative maneuvers with positive and negative stances (Blane 2023 ). This framework enables a detailed examination of social media dynamics, addressing content, intent, and network effects. By employing BEND, we can identify strategies to build or dismantle communities, engage with or distort narratives, and enhance or discredit messages, which is critical for understanding the impact of these campaigns.

The BEND maneuvers are categorized as follows:

Community Maneuvers

Positive (’B’): Back, Build, Bridge, Boost.

Negative (’N’): Neutralize, Negate, Narrow, Neglect.

Narrative Maneuvers

Positive (’E’): Engage, Explain, Excite, Enhance.

Negative (’D’): Dismiss, Distort, Dismay, Distract.

For instance, the “build” maneuver creates groups by mentioning other users, while the “neutralize” maneuver discredits opposing opinions. Narrative maneuvers like “excite” elicit positive emotions, whereas “distort” alters perspectives through repeated messaging (Blane 2023 ).

To further contextualize these maneuvers, we integrate Moral Foundations Theory, which segments moral reasoning into core values: Care/Harm, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, Sanctity/Degradation, and Liberty/Oppression (Graham et al. 2013 ). This theory provides insights into the stances and narratives in our dataset, revealing moral undertones that resonate with and mobilize individuals on an ethical level. For example, Care/Harm is evident in protective rhetoric in pro-Russian narratives and victimization themes in Ukrainian messaging. Fairness/Cheating surfaces in accusations of deception, while Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation are present in discussions of national allegiance and cultural institutions.

Integrating the BEND framework with Moral Foundations Theory offers a multi-dimensional approach to sentiment analysis, mapping out the emotional and ethical dimensions of the narratives. This analysis uncovers how orchestrated maneuvers resonate with foundational moral values, magnifying their impact. The dual-framework application explains the strategies and moral appeals used to shape public perception and international responses, highlighting the potency of sentiment as a weapon in modern information warfare.

4 Results and discussion

4.1 Dominance of pro-Ukraine stance versus pro-Russian stance

Figure 2: Normalized weekly bot tweet volume (Jan–Oct 2022)

While we cannot be certain whether this content was created by bots or created by humans and merely propagated by bots, we do know that it has a high probability of being bot-communicated. The study of bot-communicated content is important, as it reveals the content and extent of the information that bots prioritized communicating to the general reader during the period of study. This distinction is critical for understanding the mechanisms of influence and narrative control within the digital warfare domain, especially in the context of the temporal segmentation employed to focus on specific high-impact timeframes.

It is also important to note that, while there is significant concern about the potential of bots to manipulate public opinion, the evidence remains inconclusive. For example, Eady et al. (2023) found minimal and statistically insignificant relationships between exposure to posts from Russian foreign influence accounts on Twitter and changes in voting behavior in the 2016 U.S. election. This suggests that while attempts to manipulate public opinion are evident, actual successful manipulation may be far less effective than often assumed. It is important to distinguish between the presence of such attempts and their effectiveness in achieving the intended outcomes.

During the initial invasion phase (08 February–15 March 2022), these bots disseminated misleading information, justifying Russian state military actions and downplaying the severity of the invasion. During the mid-point (15 May–15 June 2022), marked by heavy fighting, particularly in the Donbas region, bot activity intensified, mirroring the escalation in military engagement and aiming to influence international opinion. During the Kherson counteroffensive (20 July–30 August 2022), bots supporting Putin’s regime attempted to counter the narrative of Ukrainian resilience, highlighting their strategic use in narrative control and the manipulation of public opinion.

Figure 2 presents the normalized weekly bot tweet volume from January to October 2022. The normalization process scales the weekly number of bot tweets by the peak weekly volume observed in the dataset, thus ensuring that peak activity is set at a value of 1.0 for relative comparison. This allows us to observe the proportional intensity of bot activity over time, offering a clear visual representation of bot-driven disinformation campaigns during key phases of the conflict.
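A minimal sketch of this normalization, assuming a tweet table with a `created_at` timestamp column (the file and column names are hypothetical), is:

```python
import pandas as pd

bot_tweets = pd.read_csv("bot_tweets.csv", parse_dates=["created_at"])

# Weekly tweet counts, then scaled so the peak week equals 1.0.
weekly = bot_tweets.set_index("created_at").resample("W").size()
normalized = weekly / weekly.max()

print(normalized.head())
```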

The graph shows a surge in bot activity coinciding with the onset of the invasion by Putin’s regime, a noticeable uptick around the mid-point period from May to June, and another increase during the Ukrainian counteroffensive in Kherson, aligning with the strategic timing of narrative manipulation and the heightened need by state actors to control the information flow through automated means.

Figure  3 shows the normalized patterns of the top hashtag usage from January to September 2022 by scaling the frequency of each hashtag against the peak usage observed within the dataset. The timeframes of interest – marked by purple overlays – correspond to the periods of the Russian invasion, the mid-point of intensified military engagement, and the Ukrainian Kherson counteroffensive.

It is evident from the visualization that hashtags supporting Ukraine, designated by the outlined blue boxes, dominated the conversation. This prevalence is consistent across the entire timeline but shows notable peaks during key conflict events. The dominance of pro-Ukraine hashtags reveals a significant trend in bot sentiment on Twitter, reflecting widespread global support for Ukraine during these periods. We explore this further below, examining how bots were programmed and utilized during this period. We also study how the use of these hashtags can be interpreted as a form of digital “solidarity” with Ukraine, as well as a means of countering pro-Russian narratives on the platform, and vice versa.

Figure 3: Top hashtags utilized over time (top hashtags support Ukraine, as shown in blue outline) (Color figure online)

The results in Figs.  4 and 5 provide a visual representation of the stance associations and stances expressed using specific hashtags on Twitter during the Russian-Ukraine conflict.

Figures 4 and 5 depict the ego network of the most popular hashtag for each stance, which refers to a specific type of network centered around a single node (the “ego”) where the network includes all the direct connections or interactions that the ego has with other nodes (called “alters”), as well as the connections among those alters.
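As an illustration of how such an ego network can be extracted, the sketch below assumes an undirected hashtag co-occurrence graph in which nodes are hashtags and edges connect hashtags used in the same tweet; the toy edges are placeholders for the real co-occurrence data.

```python
import networkx as nx

# Toy co-occurrence graph standing in for the full hashtag network.
G = nx.Graph()
G.add_edges_from([
    ("#standwithukraine", "#stoprussianaggression"),
    ("#standwithukraine", "#putinwarcriminal"),
    ("#standwithukraine", "#nato"),
    ("#putinwarcriminal", "#stoprussianaggression"),
])

# The ego network keeps the central hashtag (ego), its direct neighbours
# (alters), and the edges among those alters.
ego = nx.ego_graph(G, "#standwithukraine", radius=1)
print(ego.nodes())
print(ego.edges())
```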

Figure  4 depicts the ego network for a pro-Ukraine stance, centered around the hashtag #StandWithUkraine . The orange node represents the selected hashtag (the ego), which serves as the focal point of the network. The blue nodes indicate hashtags associated with a positive stance towards Ukraine, suggesting support and solidarity with the Ukrainian cause. Red nodes, on the other hand, signify negative stance but, in this context, they are against Russia, thereby reinforcing the pro-Ukraine stance. Green nodes denote neutral hashtags that are neither exclusively pro-Ukraine nor pro-Russian but are possibly used in discussions that involve a broader or neutral perspective on the conflict.

The prevalence of blue nodes surrounding the ego, #StandWithUkraine , highlights the strong positive stance and support for Ukraine within the Twitter discourse. The interconnectedness of these nodes suggests a cohesive community of Twitter bots who are programmed to align their support for Ukraine and in opposition to Russia. This network structure implies digital solidarity, where bots and users rally around common hashtags to express support, spread awareness, and potentially counteract pro-Russian narratives. However, it is still important to recognize that the presence of these coordinated bot activities does not necessarily equate to effective manipulation of public opinion.

Figure 4: Ego network for the pro-Ukraine stance: #StandWithUkraine (orange: selected hashtag; blue: positive stance; red: negative stance; green: neutral stance) (Color figure online)

Figure 5: Ego network for the pro-Russian stance: #istandwithRussia (orange: selected hashtag; blue: positive stance; red: negative stance; green: neutral stance) (Color figure online)

Figure  5 illustrates the ego network for the pro-Russian stance, with #IstandwithRussia as the ego node. Like Fig.  4 , the colors denote the stances associated with each hashtag. Blue nodes reflect positive stance towards Russia, red nodes represent negative stance but, like the pro-Ukraine stance, these are against Ukraine, and green nodes are neutral. This network indicates a less dense clustering of hashtags around the central node compared to the pro-Ukraine network. This implies a less cohesive or smaller group of Twitter bot accounts that share pro-Russian sentiment. This could also reflect a strategic use of varied hashtags to spread pro-Russian stance messages across different Twitter communities. The prevalence of green nodes (neutral nodes) implies a strategy to engage a wider audience or to introduce a pro-Russian stance message into broader, potentially unrelated discussions.

These networks were completed over multiple iterations. The structure and composition of these multiple networks showed not only the sentiments and stances of Twitter bots but also the dynamics of how these stances are presented and propagated. By analyzing the connections between different hashtags, we can deduce the strategic use of language and digital behavior that defines these online events (Figure  6 ).

By completing this process, we created a list of 2,000 hashtags for stance detection that contained both pro-Russian (including anti-Ukraine) and pro-Ukraine (including anti-Russian) stances, which we used to conduct further sentiment analysis. As shown in Fig. 8, the heatmap visually represents some of the stance occurrences over time. The top portion of the heatmap, indicated by the black horizontal line, is composed of pro-Ukraine stances. These stances are characterized by support for Ukraine, opposition to Putin and Russia, and criticism of the Russian regime. The lower portion represents pro-Russian stances, which predominantly consist of support for Russia, antagonism towards NATO, and pejorative references to Ukraine as a Nazi state.

Some of the most notable pro-Ukraine hashtags, seen throughout all three timeframes, are: #StandWithUkraine, #istandwithzelensky, #PrayingforUkraine, #standwithNATO, #PutinWarCrimes, #PutinWarCriminal, and #StopPutinNow.

Hashtags such as #StandWithUkraine and #istandwithzelensky signify a clear rallying of solidarity with Ukraine. Their significant appearance in the heatmap during all three phases of the conflict demonstrates a sustained and widespread digital mobilization in support of Ukraine. #PrayingforUkraine reflects the global community’s concern and hope for the welfare of the Ukrainian people, while #standwithNATO suggests an alignment with Western military alliances, which are seen as protectors and allies within this conflict.

The frequency of hashtags like #PutinWarCrimes and #PutinWarCriminal indicates an accusatory stance against the Russian leadership, branding their military actions as criminal. This not only conveys condemnation but also echoes calls for international legal action.

Some notable pro-Russian hashtags, seen throughout all three timeframes, are: #istandwithrussia, #istandwithPutin, #abolishNATO, #noNATO, #UkroNazi, #WarCrimesUkraine, #StopNATOExpansion, and #NATOterrorists.

The hashtags #istandwithrussia and #istandwithPutin represent a digital front of support for Russian actions and policies. These hashtags demonstrate support for Russian narratives and reject Western critiques. #abolishNATO and #noNATO indicate a stance against NATO, suggesting that it is viewed as an aggressor within the conflict, aligning with Russian claims of being threatened by NATO expansion to their borders. #UkroNazi and #WarCrimesUkraine are utilized to delegitimize the Ukrainian government and its actions, employing historical references to vilify Ukraine’s position and justify Russia’s actions.

Figure 8 depicts a sample of pro-Ukraine and pro-Russian hashtags discovered through stance detection, showing how the narratives progressed during the three major phases of the conflict. Their usage patterns across the timeframes show how both sides leveraged social media to build and maintain support, counteract opposition narratives, and potentially influence undecided or neutral observers.

During the three timeframes, the hashtags reflect efforts to rally communities to their respective causes. Their repeated use, particularly during critical events, suggests a bot-centric strategy to amplify certain narratives. The predominance of specific hashtags and closely related ones, as identified in the heatmap, indicates areas where bot activity is concentrated, representing an attempt to shape discourse around key narrative points.

These hashtags and their distribution over time provide quantitative backing to the qualitative observations derived from the ego networks. Understanding the rise and fall of these hashtags’ usage provides valuable insight into the role and reach of these bots, which is critical for both sides of the conflict, as it can significantly affect international perception and policy decisions. The network analysis depicted in Figs. 4 and 5 shows how bots are interconnected, Fig. 8 measures the intensity and prevalence of opinions during critical events of the Russian-Ukraine conflict, and Fig. 6 shows the frequency at which these bots are posting.

Figure  6 depicts the frequency at which bots are posting across all three timeframes. Figure  6 , with its delineation of pro-Ukraine and pro-Russian hashtags, shows a pronounced skew towards pro-Ukrainian stances, especially during key moments of the conflict. This skew is not just indicative of public opinion but also reflects a concerted effort by bot networks to amplify the Ukrainian narrative. The graph shows a persistent dominant use of pro-Ukrainian hashtags, suggesting a strong and continuous bot engagement by those behind the automated accounts supporting Ukraine’s cause. The sharp fluctuations in pro-Ukrainian hashtag frequency, particularly at the points marked “Russian Invasion”, “Mid-Point,” and “Counteroffensive,” suggest that key military or political events trigger significant spikes in bot activity.

Content moderation policies also played a significant role, with a higher pro-Ukraine stance observed in the beginning and a lower pro-Russian stance. However, as the year progressed and new policies were established, pro-Russian sentiment grew, particularly towards the end of 2022. This shift is evidenced by the increased use of hashtags such as “istandwithrussia”, “UkraineNazis”, “NATOized”, and “StopNaziUkraine” during the counteroffensive period, highlighting how changes in content moderation policies increased bot activity promoting pro-Russian stance messaging.

The relatively stable line of pro-Russian hashtag usage in Fig. 6 suggests a consistent, albeit less pronounced, use of these hashtags by bots. This implies several things: a more restrained approach to social media, a less mobilized base of support, or countermeasures taken by platforms to limit the reach of pro-Russian messaging, which has been a policy of several social media platforms.

The normalization of data in Fig.  6 is critical as it allows for the comparison of relative changes over time, controlling for the absolute number of messages sent. This method of data representation highlights the relative intensity of information warfare campaigns rather than the raw numbers, providing insight into how the conflict is being fought on social media platforms. This figure, along with the heatmap (Fig.  8 ) and the ego-centric networks (Figs.  4 and 5 ), provides a comprehensive picture of how information warfare campaigns are taking place, allowing us to have a deeper understanding into those themes.

Figure 6: Overall stance hashtag usage over the three timeframes

4.2 Operational narratives: the role of bots in conflict storytelling

When performing stance analysis, TweetBERT processes tweets by considering their content, language style, and structure. Its underlying architecture, pre-trained on millions of tweets, captures the informal and idiosyncratic language of Twitter (Qudar and Mago 2020 ). The model classifies tweets as positive, negative, or neutral based on learned patterns from its training data, encompassing a wide spectrum of expressions, from straightforward statements to subtle or sarcastic comments. This sentiment analysis was applied to our dataset, covering a range of emotions, opinions, and perspectives conveyed through informal language and hashtags.
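Conceptually, the per-tweet scoring can be expressed with the Hugging Face `transformers` pipeline as below; the model path is a placeholder to be replaced with the fine-tuned TweetBERT checkpoint actually used, so this is a sketch of the interface rather than the exact setup.

```python
from transformers import pipeline

# Placeholder path: substitute the fine-tuned Twitter-domain sentiment model.
classifier = pipeline(
    "text-classification",
    model="path/to/tweetbert-sentiment",
    top_k=None,  # return scores for every sentiment class
)

tweets = [
    "#StandWithUkraine the world is watching",
    "#stopnato stop the aggression",
]
for tweet, scores in zip(tweets, classifier(tweets)):
    print(tweet, scores)
```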

Figure 7: TweetBERT sentiment analysis for Pro-Ukraine and Pro-Russian within bot communities

Figure 8: Stance hashtag usage over the three timeframes

In the sentiment analysis of Fig. 7, it is apparent that pro-Russian tweets are more inclined to convey very negative sentiments, with an average probability of 35%; in contrast, pro-Ukraine tweets are more predisposed to express very positive sentiments, evidenced by a higher average probability of 40%. This stark disparity in sentiment distribution is indicative of a deliberate strategic approach, where negative sentiments often serve to undermine or vilify the opposition, while positive sentiments are utilized to promote unity and garner support.

Specifically, the 35% probability of very negative sentiments in pro-Russian tweets suggests a calculated effort to sow discord and foster a hostile perception of Ukraine and its allies, aiming to erode international support by casting their efforts in a negative light. Conversely, the 40% probability of very positive sentiments in pro-Ukraine tweets likely aims to bolster the legitimacy and morale of the Ukrainian cause, seeking to strengthen and consolidate global backing for their stance.

Figure 9, generated from this analysis, is a visual representation of the tweets’ embeddings reduced to two principal components through PCA. The two-dimensional PCA plot provides a simplified yet insightful visualization of the high-dimensional data. The PCA serves as a powerful tool to highlight the distinction between the pro-Ukraine and pro-Russian clusters formed by the K-means algorithm, based on the semantic content of the tweets as encoded by TweetBERT.
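The sketch below reproduces the shape of this analysis with scikit-learn: K-means clustering over tweet embeddings followed by a two-component PCA projection for plotting. Random vectors stand in for the TweetBERT embeddings, so it demonstrates the procedure rather than the study’s results.

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 768))  # placeholder for real tweet embeddings

# Two clusters, mirroring the pro-Ukraine / pro-Russian split.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

# Project to two principal components for visualization.
points = PCA(n_components=2).fit_transform(embeddings)

plt.scatter(points[:, 0], points[:, 1], c=labels, s=5)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()
```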

Figure  9 illustrates two clusters with a clear dichotomy in the narrative propagated by bots: one distinctly pro-Ukraine and the other pro-Russian. This plot not only shows the presence of two opposing narratives but also the degree to which these narratives are being propagated, as evident from the clustering patterns. The spatial distribution suggests that there is minimal crossover or ambiguity in the messaging of these bots; they are programmed with specific narratives and don’t typically engage with or share content from the opposing viewpoint.

The distribution of points within each cluster is telling of the strength and coherence of the messaging. For example, the tight clustering within both the pro-Ukraine and pro-Russian groups indicates a highly focused and consistent narrative push. There is some dispersion within each cluster, suggesting some variability of sub-narratives within each messaging strategy, but the tweets mostly follow the same main narrative for each viewpoint.

Figure 9 shows that there is a level of sophistication to the narrative warfare at play. Bots are not merely spreading information; they are curating a specific emotional sentiment that aligns with their programmed objectives. This emotional manipulation can have a profound impact on human users who encounter these tweets, potentially swaying public opinion and influencing the social media discourse surrounding the war.

Figure 9: TweetBERT cluster analysis for Pro-Ukraine and Pro-Russian within bot communities

The analysis of bot activity, as visualized in Fig.  9 , again shows a stark division between pro-Ukraine and pro-Russian clusters, with no apparent overlap. This segregation is characteristic of echo chambers, which are typically strengthened by the absence of dissenting or alternative viewpoints. Our findings are supported by recent research, which highlights the significance of bipartisan users in bridging divided communities and disrupting the formation of such echo chambers (Zhang et al. 2023 ). When human bipartisan interactions are present, these users can introduce variance and mitigate the polarizing effects of echo chambers by connecting different narrative threads (Zhang et al. 2023 ).

However, in our bot-only dataset, the lack of this bipartisanship shows the potential for echo chambers to thrive unchecked. Without the moderating influence of bipartisan users, be they human or bots programmed with diverse narratives, the clusters formed are highly polarized and exhibit a high degree of narrative consistency. This raises important questions about the design and intention behind these bots. If bots were programmed to mimic the bipartisan behavior observed in human users, could they serve a similar function in diluting the echo chamber effect? Or would their artificial nature render such efforts ineffective or even counterproductive?

It is not surprising that the two groups are disjoint, as most bots do not interact, indicating that simple bots, which relay messages but don’t engage in adaptation, are being used. This absence of interaction among bots highlights the use of a more rudimentary form of artificial intelligence in these information campaigns, focusing on message distribution rather than engaging in complex conversations or altering strategies based on audience response. Such an approach reveals a deliberate choice to prioritize volume and consistency over adaptability and engagement, hinting at a strategic emphasis on shaping narratives and controlling discourse rather than fostering genuine interaction.

The absence of human users in this dataset allows for unique observation of how bots alone create and sustain narrative echo chambers. The homogeneity within each cluster indicates a sophisticated level of narrative control, likely intended to shape public discourse. This highlights the potential for bots to be used in information warfare, significantly influencing social media in the absence of human counter-narratives. Zhang et al. ( 2023 ) suggest that the inclusion of bipartisan elements could introduce complexity and interaction, preventing such insulated informational environments (Zhang et al. 2023 ). Integrating these insights could inform future strategies for digital platform governance and algorithm design to detect and mitigate polarized content.

The stark separation of sentiment and narrative between pro-Ukraine and pro-Russian bots reveals a deliberate effort to not only inform but also emotionally influence public opinion. By highlighting the operational simplicity and strategic focus of the bots used in these campaigns, we can better understand the dynamics at play in digital propaganda efforts and the critical role of human engagement in countering the formation of narrative echo chambers. However, it is important to recognize that while these findings show the potential for bots to shape narratives and control discourse, the overall influence of these echo chambers and coordinated disinformation efforts on public opinion remains limited. This nuanced understanding can guide future efforts in digital platform governance and the development of strategies to foster more diverse and interactive online environments.

Narrative Dynamics Across Conflict Phases

Applying topic analysis to our dataset shows how bot-driven narratives evolved in their information warfare efforts. Our results reveal the patterns by which bots strategically broadcast both pro-Ukraine and pro-Russian stances during critical key events of the conflict.

Figure 10: Topic modeling: Russian invasion stance for both pro-Ukraine and pro-Russia

Russian Invasion Analysis

Figure 10 features the narratives propagated by bot-driven communities during the Russian invasion from 08 February to 15 March 2022. The pro-Ukraine word cloud focuses on rallying cries such as #supportukraine and calls to action like #stoprussianaggression, reflecting a clear call for international support and immediate action to counter Russian aggression. The emphasis on war crimes hashtags such as #putinwarcriminal and #warcrimesofrussia aligns with efforts to draw global attention to the humanitarian impact the invasion had on the Ukrainian people.

The terms #putinwarcriminal and #arrestputin suggest a concerted effort to personify the conflict, concentrating the narrative against a single figure, Vladimir Putin, to simplify the complex geopolitical situation into a clear-cut story of who is the main aggressor behind Russia’s military actions.

Conversely, the pro-Russian word cloud is dominated by terms such as #stopnato and #nazis , which appear to be part of a broader strategy to undermine the legitimacy of NATO’s involvement and to cast aspersions on the motivations behind Ukraine’s defense efforts. The repeated use of #nazi in conjunction with various entities ( #kosovoinnato , #germannazis , and #ukronazis ) is a provocative attempt to invoke historical animosities and paint the opposition as not just wrong, but morally reprehensible.

The presence of terms like #zelenkylies and #fuckbiden suggests an aggressive stance against international figures who are critical of Russia’s actions. This aggressive language is likely intended to resonate with and amplify existing anti-Western sentiment, rallying support by tapping into such strong emotions.

The word clouds in Fig. 10 demonstrate the capacity of bot-driven communities to disseminate targeted messages that can influence public discourse. The strategic repetition of these specific terms and the emotional weight they carry can have a significant impact on the public perception of the conflict. For example, the repeated association of Ukraine with Nazism by pro-Russian bots could, over time, lead on-the-fence observers to view the Ukrainian resistance with skepticism. Similarly, the pro-Ukraine bots’ focus on war crimes and heroism can bolster a narrative of moral high ground and rightful resistance, influencing international opinion and potentially swaying public policy.

Figure 11: Topic modeling: mid-point Russian military escalation stance for both pro-Ukraine and pro-Russia

Mid-Point Analysis

Figure 11, from the midpoint of the conflict, shows how intensified narratives were pushed by bot-driven communities, highlighting how these automated accounts adapted their messaging in response to the evolving circumstances of the Russian-Ukraine conflict.

The pro-Ukraine bots, during the midpoint timeframe, focus sharply on the characterization of Russian leadership as criminal, with #putinwarcrimes and #putinisaterrorist featuring notably. This indicates a strategic push to hold the Russian state accountable for its actions and to appeal to the international community’s sense of justice. The word cloud also shows a call for solidarity and support for Ukraine, with terms like #saveukraine and #standwithukraine appearing frequently. The repeated use of #stopputin and #stoprussianaggression serves as a rallying cry to mobilize international pressure against the Russian military campaign.

The pro-Russian word cloud exhibits a significant focus on terms that escalate the dehumanization of Ukraine and its allies. The prominent display of #nazi related terms in conjunction with #ukronazis and #stopnato suggests a continuation and intensification of the strategy to vilify Ukraine by associating it with historical evils. The use of such charged terms is a common tactic in information warfare, aimed at delegitimizing an opponent and swaying public sentiment by drawing on emotional and historical connotations.

This phase also shows an increased effort to reinforce narratives that support Russian actions, with terms like #isupportputin and #russianato , which indicates a defensive posture in response to global criticism of the invasion.

The word clouds suggest a battle for the narrative high ground, where each side’s bots work continuously to sway public opinion and influence international perception. The focus on emotionally charged and historically weighted language demonstrates the bots’ role in amplifying existing narratives.

Bots are used not only to disseminate information but also to engage in psychological operations. The stark contrast between the two sets of word clouds highlights the polarized nature of the conflict as perceived through social media, with bots acting as catalysts for these divisive narratives. As the war progresses, these automated agents continue to play a crucial role in the information warfare that accompanies the physical fighting on the ground.

Figure 12: Topic modeling: Kherson counteroffensive stance for both pro-Ukraine and pro-Russia

Kherson Counteroffensive Analysis

In Fig. 12, the word cloud from the Kherson counteroffensive shows how bot-driven communities targeted and continuously amplified specific narratives. It shows how these strategies and thematic focuses of bot activity continued as the conflict entered its turning point with the Ukrainian counteroffensive.

The pro-Ukraine stance word cloud responds to the pro-Russian narrative with a heightened focus on justice and accountability. Terms like #putinwarcriminal and #warcrimainlputin dominate, suggesting a strategic emphasis on the legal and ethical implications of the conflict. This shift towards highlighting war crimes and their criminal nature serves to underscore the perceived illegitimacy of Russian military actions and to bolster international support for Ukraine’s counteroffensive.

The recurrent mention of #stoprussianaggression and the calls to #standforukraine reflect a continued urgency in the pro-Ukraine narrative. They serve not only as a plea for support but also as a means of reinforcing the identity of Ukraine as a nation under unjust attack, striving to defend its sovereignty and people.

The pro-Russian bots during the Kherson Counteroffensive continue to push a narrative that is heavily laden with historical and nationalistic sentiments. The use of #nazis in conjunction with #ukronazis and #nato persists, emphasizing an attempt to paint the Ukrainian defense efforts and their Western allies in a negative light. This continued usage of such incendiary terminology suggests a relentless drive to cast the conflict not just as a territorial dispute but as a moral battle against perceived fascism that is viewed from a Russian standpoint.

The terms #traitorsofukraine and #standwithrussia indicate a dual strategy of internal division and external solidarity. By labeling opposition elements as traitors, these bots aim to sow discord and delegitimize the Ukrainian resistance while simultaneously calling for unity among pro-Russian supporters.

Figure 13: BEND results normalized across all three timeframes for the Pro-Ukraine stance

Figure 14: BEND results normalized across all three timeframes for the Pro-Russian stance

The thematic content of these word clouds during the Kherson Counteroffensive highlights how bots can adapt their messaging to the changing dynamics of war. As Ukraine takes a more offensive stance, the pro-Russian bots ramp up their use of divisive language, likely to counteract the rallying effect of Ukrainian advances. Similarly, pro-Ukraine bots amplify their calls for justice and international support, aiming to capitalize on the momentum of the counter-offensive.

These bot-driven narratives are engineered to provoke specific emotional responses and attempt to manipulate the perception of the conflict. The use of emotionally charged language and polarizing terms highlights the sophistication of these information warfare campaigns, designed not just to report on the conflict but to actively shape the discourse around it.

Throughout the various stages of the Russia-Ukraine conflict, bot-driven narratives strategically attempted to influence public perception and opinions. During the invasion phase, pro-Russian bots discredited NATO and rallied support for Russia, while pro-Ukraine bots highlighted the urgency of resisting Russian aggression and focused on humanitarian issues. As the conflict escalated, pro-Russian narratives intensified the vilification of Ukraine using historical antagonisms. In contrast, pro-Ukraine bots condemned Russian leadership and emphasized accountability. During the Kherson Counteroffensive, pro-Russian bots amplified divisive rhetoric to undermine Ukrainian unity, whereas pro-Ukraine bots stressed criminal accountability and valor.

4.3 Analyzing strategic information warfare tactics in Russia–Ukraine Twitter bot networks

Information warfare, particularly in the context of social media, has become a defining strategy of modern-day warfare. It leverages the interconnection and immediacy that social media platforms provide to spread disinformation, manipulate narratives, and sow discord among target populations. Its tactics can shape public opinion, influence political processes, and destabilize entire nation-states by eroding trust in institutions and democratic processes. The normalization of disinformation and the exploitation of existing social platforms amplify the potency of such campaigns.

Figures 13 and 14 plot the thousands of tweets identified by our stance detection against the overall BEND analysis for the pro-Russian and pro-Ukraine stances across the three selected timeframes.

We performed the BEND analysis separately on communities segregated based on stance detection. The analysis was also used to focus the topic analysis portion and show how these maneuvers manifest across all three timeframes and how they affect the narrative and community dynamics. This analysis aligns with social cybersecurity’s focus on understanding the digital manipulation of community and narrative dynamics, highlighting the strategic use of social media in modern conflicts. Aggregation and normalization of the data were applied to ensure a clear comparison across the timeframes to understand how each maneuver changed in each stance across the selected timeframes.
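The aggregation and normalization can be sketched as follows, assuming per-tweet BEND maneuver labels have already been produced; the input file and column names are hypothetical.

```python
import pandas as pd

bend = pd.read_csv("bend_maneuvers.csv")  # columns: stance, timeframe, maneuver

# Count maneuvers within each stance and timeframe.
counts = (
    bend.groupby(["stance", "timeframe", "maneuver"])
        .size()
        .rename("count")
        .reset_index()
)

# Normalize within each stance-timeframe pair so periods with very different
# tweet volumes remain comparable.
counts["share"] = counts.groupby(["stance", "timeframe"])["count"].transform(
    lambda c: c / c.sum()
)

print(counts.head())
```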

Figure  13 illustrates the evolution of pro-Ukraine narratives within the Twitter bot community. During the Russian invasion, there was a significant Boost and Build effort, using hashtags like #westandforukraine and #istandwithzelensky to foster solidarity and support for Ukraine, reflecting the BEND framework’s principles of community building (Carley 2020 ). As Russian activities escalated at the mid-point, Engage and Excite maneuvers increased, aiming to make the conflict more globally relevant and counter Russian disinformation. During the Ukrainian counteroffensive, Dismay, Distort, and Distract maneuvers surged, with hashtags like #putinisawarcriminal , #russiaisaterroriststate , and #PutinGenocide challenging pro-Russian narratives and diverting attention from Russian messaging.

Figure  14 illustrates the evolution of pro-Russian narratives within the Twitter bot community. During the Russian invasion, there was a focus on Negate and Neutralize maneuvers, using hashtags like #abolishNATO and #endNATO to diminish opposing narratives. Increased Russian military activity saw Distort and Dismiss maneuvers, skewing the narrative in favor of Russia by highlighting negative aspects of Ukraine, such as Nazi associations. In response to the Ukrainian counteroffensive, Dismay and Distort efforts surged, with hashtags like #westandwithrussia and #naziNATO , aiming to cause fear and discredit Ukraine while garnering support for Russia. This reflects the social cybersecurity concept of narrative manipulation, where digital platforms reshape public perception.

Figure 15: BEND results normalized for hashtags: Pro-Ukraine stance #istandwithukraine and Pro-Russian stance #istandwithrussia during the Russian invasion

Figure 16: BEND results normalized for hashtags: Pro-Ukraine stance #putinwarcriminal and Pro-Russian stance #stopnato during the mid-point

The BEND framework shows that pro-Ukraine efforts predominantly utilize the ’B’ (Boost and Build) and ’E’ (Engage and Excite) maneuvers, focusing on community building and positive narrative development. Conversely, pro-Russian bot activities rely heavily on the ’N’ (Negate and Neutralize) and ’D’ (Distort and Dismay) maneuvers, aiming for community disruption and negative emotional influence.

Applying the BEND framework shows the contrasting online strategies employed by both sides. Pro-Ukraine bots emphasize strengthening solidarity and support, using the Back maneuver to amplify pro-Ukraine voices and the Engage maneuver to foster online camaraderie. This proactive stance aims to rally support and create a collective pro-Ukraine sentiment. In contrast, pro-Russian bots supporting Putin’s regime focus on disrupting communities and spreading negative sentiment to weaken opposition and influence perceptions negatively.

These strategies extend beyond bot communities and are designed to influence the perceptions and behaviors of human users. To better understand the impact of these hashtags, we investigated selected hashtags across the three timeframes to identify the maneuvers each stance represented in bot activity and community engagement on Twitter during the key phases of the Russia-Ukraine conflict.

Figure 17: BEND results normalized for hashtags: Pro-Ukraine stance #putinhitler and Pro-Russian stance #naziukraine during the Kherson counteroffensive

The Russian invasion, as shown in Fig.  15 , highlights the predominant themes during that time through the #istandwithukraine and #istandwithrussia hashtags.

For the pro-Ukraine stance, the Excite maneuver is prominent, suggesting a focus on eliciting positive emotions like joy and happiness to boost morale and support. The Engage maneuver follows, indicating efforts to increase the topic’s relevance, sharing impactful stories, and suggesting ways for the audience to contribute to the cause.

Conversely, the Dismay maneuver is also significant, indicating a strategy to invoke worry or sadness about the invasion’s impacts. This humanizes the conflict, attracting global empathy and support for Ukraine by highlighting the gravity of the situation and the suffering of the Ukrainian people.

For the pro-Russian stance, the dominant maneuver is Dismay, aiming to evoke negative emotions like worry or despair, potentially to demoralize or create a sense of hopelessness regarding the situation in Ukraine. The Distort maneuver suggests an attempt to manipulate perceptions, promoting pro-Russian viewpoints while questioning the legitimacy of Ukrainian narratives. The Excite maneuver is also prevalent, indicating efforts to rally support by justifying Russia’s actions and uniting pro-Russian communities under sentiments of patriotism or anti-Western sentiment.

The strategic deployment of these maneuvers highlights the sophisticated use of social media as a battleground for psychological and narrative warfare. Pro-Ukraine bots focus on building support and maintaining positive sentiment, while pro-Russian bots concentrate on narrative manipulation and fostering negative emotions to influence opinions.

During the mid-point of the war, when Russia escalated its military efforts as shown in Fig.  16 , there was a notable shift in topics from both pro-Ukraine and pro-Russian standpoints. Key themes include NATO engagement by the Russian state and criticisms of Vladimir Putin by Ukraine. We analyzed the hashtags #putinwarcriminal and #stopnato .

For the pro-Ukraine stance, the Excite maneuver maintains positive sentiment by highlighting Ukrainian resilience and rallying international support. The Engage maneuver keeps the conflict relevant to international observers, sharing developments and ways to contribute. The Explain maneuver counters misinformation with detailed clarifications. There is also a transition to Dismay, Distort, and Distract maneuvers, invoking concern about Russia’s intensified actions and amplifying perceived threats and injustices. Pro-Ukraine bots focus on preserving Ukraine’s integrity and maintaining support by highlighting resilience and humanitarian efforts.

For the pro-Russian stance, the prominent maneuver is Neutralize, aiming to dismantle opposing narratives. This is followed by the Negate maneuver, minimizing Ukrainian actions. The Distract maneuver shifts the conversation away from Ukrainian narratives to topics like NATO, aligning the Russian state’s narrative against NATO. Dismay and Distort evoke fear and anxiety, promoting the idea that NATO’s lack of intervention showcases its ineffectiveness. While less prominent, Excite and Engage maneuvers foster a sense of righteousness about Russia’s actions, portraying them as necessary for security against NATO.

These maneuvers indicate a deliberate narrative clash. Pro-Ukraine bots engage the global audience and clarify misinformation, while pro-Russian bots undermine Ukraine’s stance and frame NATO negatively. Pro-Russian bots adopt a defensive strategy, seeking to invalidate and overshadow pro-Ukrainian narratives. This phase reflects the heightened information warfare, with both sides vying for the psychological upper hand.

Figure 18: Moral foundations found in topic modeling results, expressing attitudes toward the Russian-Ukraine conflict from a pro-Ukraine stance

Figure 19: Moral foundations found in topic modeling results, expressing attitudes toward the Russian-Ukraine conflict from a pro-Russian stance

During the Kherson counteroffensive, bot activity intensified, highlighting the narrative combat with both sides accusing each other of Nazism and terrorism, and comparing Putin to Hitler, as shown in Fig.  17 .

For the pro-Ukraine stance, the Excite maneuver is prominent, promoting positive emotions like triumph and optimism about the counteroffensive, energizing supporters and reinforcing narratives of Ukrainian resilience and success. Dismay is also significant, evoking concern and sadness about the conflict to sustain international attention and highlight perceived Russian aggression. The use of Distract shifts focus to Putin’s actions and his similarities to Hitler, framing Russian leadership negatively. The Back maneuver reinforces the effectiveness of these narratives.

For the pro-Russian stance, Dismay leads, instilling fear and portraying the counteroffensive as provocative or violent, akin to historical Nazism. Distort is extensively used to manipulate narratives, justifying Russian actions and recasting the Ukrainian counteroffensive as illegitimate. Neutralize and Negate aim to undermine pro-Ukrainian narratives, reducing their significance or credibility. The Boost maneuver enhances the appearance of widespread support for Russia’s stance by increasing connectedness among pro-Russian groups.

The use of #putinhitler and #naziukraine by both sides indicates a heated exchange of historically charged accusations. The BEND maneuvers reveal a strategy where both sides defend their actions while actively delegitimizing the other through historical parallels, creating a polarized and emotionally charged information warfare campaign. This phase underscores the strategic use of social media to shape global perceptions and gather support for respective causes.

Overall, the BEND maneuvers demonstrate the dynamic and strategic use of social media by both pro-Ukraine and pro-Russian bots. Throughout these key phases, the consistent use of emotionally charged narratives highlights the sophistication of modern information warfare, where shaping perceptions and emotions is crucial to influencing public opinion and international policy.

4.4 Moral maneuvers: the strategic use of ethics in Russia–Ukraine social media campaigns

Applying Moral Foundations Theory (Graham et al. 2013) to the dataset reveals the emotional complexities underlying the Russian-Ukraine conflict (Schuman 2018). This theory, encompassing the dimensions of Care/Harm, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, Sanctity/Degradation, and Liberty/Oppression, illustrates the nuanced moral landscape of the conflict.
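To make the mapping from tweet text to these moral foundation dimensions concrete, the following is a minimal sketch of a lexicon-based tagger. It assumes a simple keyword-matching approach; the word lists and the tag_foundations helper are illustrative assumptions and do not reproduce the paper’s actual annotation pipeline.

```python
# Minimal sketch of lexicon-based moral foundation tagging (illustrative only;
# the word lists below are toy examples, not the study's actual dictionaries).
from collections import Counter

FOUNDATION_LEXICON = {
    "Care/Harm": {"protect", "suffer", "attack", "victim", "safety"},
    "Fairness/Cheating": {"justice", "rights", "fraud", "equal"},
    "Loyalty/Betrayal": {"unity", "solidarity", "traitor", "stand"},
    "Authority/Subversion": {"sovereignty", "regime", "law", "rebel"},
    "Sanctity/Degradation": {"pure", "sacred", "disgust", "corrupt"},
    "Liberty/Oppression": {"freedom", "oppression", "tyranny", "liberate"},
}

def tag_foundations(tweet: str) -> Counter:
    """Count how many words from each foundation's lexicon appear in a tweet."""
    tokens = {t.strip("#.,!?").lower() for t in tweet.split()}
    return Counter({name: len(tokens & words)
                    for name, words in FOUNDATION_LEXICON.items()
                    if tokens & words})

if __name__ == "__main__":
    example = "We must protect Ukraine's sovereignty and stand against tyranny."
    print(tag_foundations(example))
    # Matches Care/Harm, Loyalty/Betrayal, Authority/Subversion and
    # Liberty/Oppression once each for this example sentence.
```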

Pro-Ukraine Analysis

The analysis of the pro-Ukraine stance, as depicted in Fig. 18, highlights the evolving narrative within the bot community. The increasing emphasis on the Care Foundation throughout the selected timeframes shows an online rally of support for the Ukrainian people, fostering solidarity and compassion for their plight. The Harm Foundation shows a minor presence initially, spikes sharply during the military escalation, and then diminishes during the counteroffensive, mirroring real-time reactions to unfolding events.

Loyalty remains a significant theme, peaking at the midpoint of the conflict, emphasizing unity and steadfastness in support of Ukraine. The minor presence of the Betrayal Foundation suggests that narratives of disloyalty are not a focal point, maintaining a consistent push for solidarity through supportive hashtags.

Figure 20: Moral foundations found in topic modeling results, in expressing attitudes toward the Russian-Ukraine conflict from pro-Ukraine and pro-Russian stance tweets

The consistently high presence of the Authority Foundation, particularly during the midpoint escalation, reinforces the legitimacy of Ukrainian sovereignty. The slight decrease during the counteroffensive indicates a perceived restoration of authority through Ukrainian advances. The Subversion Foundation continually increases across all three timeframes, showing how bots increasingly challenge the established power structure, particularly targeting the Russian state and Vladimir Putin.

The Sanctity Foundation remains consistently low, suggesting that moral purity or sacredness is not heavily invoked. However, the high and increasing presence of Degradation, especially during the counteroffensive, indicates a strategic amplification of messages framing the opposition’s actions as morally reprehensible.

The increasing emphasis on Care, alongside the strategic use of other foundations, suggests a concerted effort to shape public sentiment toward empathy for Ukraine and condemnation of Russian actions. This bot-driven narrative could significantly influence public opinion and the global moral compass regarding the conflict, highlighting the role of automated social media agents in constructing the moral narrative within geopolitical events.

Pro-Russian Analysis

Moral Foundations Theory applied to the pro-Russian stance, as seen in Fig. 19, shows a coordinated bot narrative aimed at legitimizing the Russian state and undermining Ukraine. The dataset indicates a moderate presence of the Care Foundation in the pro-Russian narrative, with an increase during the midpoint of the conflict. This reflects a strategic focus on humanizing the Russian cause and emphasizing care for Russian nationals and Russian-speaking communities in Ukraine. The Harm Foundation follows a similar trajectory but with greater intensity, suggesting a narrative that frames Russia’s actions as necessary to prevent greater harm.

Loyalty is highlighted from the beginning and spikes at the midpoint, consolidating support for Putin’s military campaign. The near absence of Betrayal, especially after the invasion phase, suggests that the bots aim to maintain a narrative of unity and cohesion within Russia.

Authority is consistently invoked, dipping during the midpoint, reflecting challenges to Russia’s authority during heightened military actions. The increase during the counteroffensive phase represents a reclaimed narrative of Russia reasserting its power. The Subversion foundation is moderately high throughout, aligning with the portrayal of Russia countering the subversion of its interests and those of Russian-speaking communities.

Sanctity is not a dominant theme, indicating that bots do not heavily rely on notions of purity or sacredness. However, Degradation is consistently employed to degrade the moral standing of Ukraine, framing its actions and Western support as morally reprehensible. This consistent use of hashtags by Russian bots degrades NATO, its members, and the Ukrainian government.

The bot-driven moral narrative supporting Russia shows the advanced use of moral foundations to justify and legitimize Russian military actions, portraying Russia as a protective and liberating force. This manipulation of moral rhetoric by bots significantly impacts public perception and discourse, highlighting the critical role of determining the sources and intentions behind digital narratives, especially in the context of international conflict.

Moral Realism Tweet Analysis

Both pro-Ukraine and pro-Russian tweets leverage moral foundations, but in markedly different ways, each tailored to validate their stance and vilify the other, as seen in Fig. 20. The use of moral rhetoric is indicative of moral realism within the digital space of information warfare, where bots amplify the perceived objective morality of their cause to influence public perception.

The pro-Ukraine narrative emphasizes the harm inflicted on civilians and the care for human life, positioning the Russian state as an aggressor comparable to historical tyrants like Hitler. This portrayal is designed to elicit empathy and support for Ukraine, framing the conflict as unjust aggression. Conversely, the pro-Russian narrative emphasizes protective measures by Putin’s regime to bring Russian speakers in Ukraine back to Russia, suggesting their military actions are necessary to prevent greater harm. This dual interpretation of care and harm showcases a conflict over the moral justification for war.

Emphasizing loyalty, pro-Ukraine tweets unify support against Russian aggression, casting pro-Russian stance as criminal. Pro-Russian tweets, however, depict the Ukrainian government as Nazi-like and corrupt, betraying historical Russian unity. This narrative seeks to justify Russian intervention as a defense against betrayal.

Pro-Ukraine tweets uphold the authority of the Ukrainian government, denouncing Russian subversion. Conversely, pro-Russian tweets present Russia as a stabilizing authority against the subversive influence of NATO and the allegedly Nazi-sympathetic Ukrainian government. These narratives demonstrate a battle over legitimate power and authority, each side striving for moral high ground.

Pro-Ukraine tweets highlight Russian war crimes, calling attention to moral and ethical degradation. Meanwhile, pro-Russian tweets condemn Ukraine and NATO as corrupt. This highlights each side’s attempt to claim the moral high ground.

The strategic use of moral foundations in bot-generated tweets aims to shape public opinion by portraying each side as morally superior. This manipulation of moral values in social media narratives significantly impacts the global perception of the Russian-Ukraine conflict, potentially affecting international policy, humanitarian aid, and military support.

These narratives, steeped in moral realism, provide each side with an appearance of objective moral truth, asserting that their actions are justified and necessary. The result is an online battleground where moral values are weaponized to validate conflict, sway public sentiment, and garner international support. The influence of these tweets lies in their ability to shape the moral compass of the international community, thereby affecting real-world outcomes of the conflict.

5 Limitations and Future Work

While this study provides valuable insights into the deployment of bot-generated narratives during the Russia-Ukraine conflict, several limitations should be acknowledged:

Language Restriction: Our dataset was limited to English-language tweets. This excludes non-English discourse, which might present different perspectives or intensities of sentiment, particularly in regions directly affected by the conflict. The linguistic focus potentially overlooks the full scope of the international conversation surrounding the conflict.

Future Work: Future research could include a multilingual dataset, analyzing Russian, Ukrainian, and other relevant languages. This would provide a more comprehensive view of the global discourse on the conflict and offer insights into regional perspectives.

Keyword Dependency: The curated dataset relied on specific keywords related to the conflict. This method might omit relevant discussions that do not use these keywords, narrowing the breadth of the captured narrative and overlooking subtler, yet significant, aspects of the conversation. A minimal illustration of this filtering step follows below.

Future Work: Subsequent studies can employ more diverse and sophisticated data collection methods, such as semantic analysis or machine learning approaches, to capture a broader range of discussions associated with the conflict, beyond those defined by specific keywords.
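As a minimal sketch of the keyword-filtering step described above (the keyword list here is hypothetical, not the study’s curated list), the following shows how keyword-based collection can silently drop relevant tweets that avoid the chosen terms:

```python
# Sketch of keyword-based tweet filtering. The keyword set is hypothetical and
# only illustrates why keyword curation can miss relevant discussions.
CONFLICT_KEYWORDS = {"ukraine", "russia", "putin", "zelenskyy",
                     "#standwithukraine", "#stopnato"}

def matches_keywords(tweet_text: str) -> bool:
    """Keep a tweet only if it contains at least one curated keyword."""
    text = tweet_text.lower()
    return any(keyword in text for keyword in CONFLICT_KEYWORDS)

tweets = [
    "Praying for the people of Kharkiv tonight.",               # relevant, but no curated keyword -> missed
    "#StandWithUkraine open statement to the Russian people",   # matched
]
print([matches_keywords(t) for t in tweets])  # [False, True]
```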

Bot-Generated Content Focus: Concentrating exclusively on bot-generated content provides insights into the strategies of information warfare, but it does not account for human users. The influence of human engagement, which can counter or endorse bot narratives, is therefore not considered in this analysis.

Future Work: Future research should include a comparative analysis of human and bot interactions on social media. This would show how human users respond to, amplify, or counteract bot-driven narratives, providing a more holistic understanding of the digital landscape during conflicts.

Potential Biases: The selection of tweets based on keywords and the identification of bots could be influenced by inherent biases in the algorithms used. These biases may affect the dataset and, in turn, the results of our sentiment analysis.

Future Work: To mitigate this, future research should examine and adjust for algorithmic biases. Employing a diverse range of algorithms for data collection and analysis, and validating results with expert human analysis, could enhance the accuracy and reliability of the findings.

Time Frame Segmentation: While temporal segmentation allows for an in-depth examination of critical events and periods, it may not capture the evolving nature of the discourse outside these high-impact timeframes. Developments after August 2022 are not captured, and these might alter the trajectory of the narratives.

Future Work: Continued research should pursue a longitudinal study spanning a longer period, possibly including real-time analysis. This would allow for tracking the evolution of narratives over time, providing insights into the long-term effects of information warfare strategies and the resilience of various narratives.

6 Conclusion

Our analysis discovered that bot-driven strategies significantly shaped the social media narrative of the Russian-Ukraine conflict. These strategies, executed on Twitter, effectively amplified specific narratives, swayed public opinion, and created a polarized information environment. Bots were strategically deployed to push pro-Ukraine or pro-Russian stances, using emotional appeals, moral justifications, and targeted messaging to influence public opinion and global perceptions.

Influence on Digital Propaganda and Counter-Narratives

Topic modeling analysis uncovered the underlying themes and narratives propagated by bot communities. It highlighted the strategic use of language and themes that aligned with the conflict’s phases. Pro-Ukraine and pro-Russian bots selectively amplified themes like humanitarian concerns, historical antagonisms, and nationalistic sentiments to sway public opinion and create a polarized digital environment.

The BEND framework provided a nuanced understanding of the narrative and community maneuvers employed by both sides. Pro-Ukraine bots utilized positive community-building tactics and narrative enhancement strategies, rallying support and maintaining a positive portrayal of Ukraine. In contrast, pro-Russian bots utilized negative maneuvers to undermine the pro-Ukraine narrative, fostering negative sentiments and distorting perceptions in favor of Russian actions.

The moral realism aspect of the study provided insight into the ethical and moral justifications of both sides. Pro-Ukraine tweets emphasized the Care/Harm and Fairness/Cheating foundations, portraying Ukraine as a victim of unjust aggression and rallying global empathy. Pro-Russian tweets focused on the Authority/Subversion and Loyalty/Betrayal foundations, framing Putin’s actions as protective and necessary for the security of Russian-speaking communities. This manipulation of moral rhetoric was a critical component of information warfare, providing each side with a semblance of moral high ground and objective justification for their actions.

Implications for Political and Democratic Landscapes: The implications of these bot-driven strategies for Ukraine’s political and democratic processes, as well as for the wider geopolitical context, are profound:

Narrative Control and Public Perception: The ability of bots to attempt to manipulate narratives and public perception represents a significant evolution in the tools of modern warfare. This manipulation has direct consequences for political decision-making, both within Ukraine and in the international community, influencing policy and response strategies.

Polarization and Democratic Processes: The polarized narratives propagated by bots pose challenges to democratic processes and institutions. By manipulating public opinion through one-sided narratives and emotional appeals, these strategies can undermine democratic deliberation, making it difficult for citizens to engage in informed, fact-based discussion.

Moral and Ethical Implications: The study’s findings on moral realism and the use of moral justifications in digital narratives stress the ethical intricacies of modern information warfare. The exploitation of moral values to justify actions can have widespread implications for international policy and humanitarian responses.

Implications for the Geopolitical Arena: In the wider geopolitical context, the study illustrated the emerging complexity of digital warfare and its implications:

Evolution of Warfare: The conflict highlighted the evolution of warfare on social media platforms, where the contest over narratives and public opinion is as critical as physical combat.

Global Perception Management: The study highlighted the growing importance of perception management in international relations. The ability to control narratives through social media has become a powerful tool in the geopolitical toolkit.

Challenge to Democratic Values: The manipulation of information and the creation of echo chambers pose significant challenges to democratic values. The spread of disinformation and digital propaganda erodes confidence in democratic processes and institutions.

Bot-driven strategies have profoundly shaped the landscape of digital propaganda and counter-narratives on social media. These strategies have not only shaped the immediate narrative dynamics of the Russian-Ukraine conflict but also hold significant consequences for political and democratic processes in Ukraine. This study emphasizes the importance of recognizing and understanding digital propaganda’s role in geopolitical conflicts, highlighting the need for strategies to counteract these influences and protect democratic values within the digital medium of modern warfare.

https://www.nltk.org/

https://netanomics.com/netmapper/

ABC News (2022) Zelenskyy says Russia controls one fifth of Ukraine, while US targets yachts linked to Putin. https://www.abc.net.au/news/2022-06-03/russia-controls-20-percent-of-ukraine-zelenskyy-says/101122948

Adrian K (2021) Moral and political foundations: from political psychology to political realism. Moral Philos Politics 10(1):139–159


Alieva I, Ng LH, Carley KM (2022) Investigating the spread of Russian disinformation about biolabs in Ukraine on twitter using social network analysis. In: 2022 IEEE International Conference on Big Data (Big Data), pp. 1770–1775. IEEE

Anthony B (2022) Humiliation for Putin as 200 paratroopers are wiped out in a Ukrainian missile strike. https://www.news.com.au/world/Europe/humiliation-for-putin-as-200-paratroopers-are-wiped-out-in-a-Ukrainian-missile-strike/news-story/093981ca72e34cc4ea3beca779f60b50

Brown Sara (2023) In Russia-Ukraine war, social media stokes ingenuity, disinformation. MIT Sloan. https://mitsloan.mit.edu/ideas-made-to-matter/Russia-Ukraine-war-social-media-stokes-ingenuity-disinformation

Carley Kathleen M (2020) An emerging science: social cybersecurity. Comput Math Organ Theory 26:365–381


Chavoshi N, Hamooni H, Mueen A (2016) Debot: twitter bot detection via warped correlation. Icdm 18:28–65

Chen E, Ferrara E (2023) Tweets in time of conflict: a public dataset tracking the twitter discourse on the war between Ukraine and Russia. In: Proceedings of the Seventeenth International AAAI Conference on Web and Social Media (ICWSM)

Christopher P, Miriam M (2024) The Russian “firehose of falsehood” propaganda model. https://www.rand.org/pubs/perspectives/PE198.html, 2016. Accessed 10 Jan 2024

Claudia O-R, Mark TK (2018) Modeling and predicting news consumption on twitter. In: Proceedings of the 26th Conference on User Modeling, Adaptation and Personalization, pp 321–329

Council, European (2022) EU imposes sanctions on state-owned outlets RT/Russia Today and Sputnik’s Broadcasting in the EU

Daniel R, Evan R, Jason C, Christopher DM, Daniel AM (2009) Topic modeling for the social sciences. https://nlp.stanford.edu/dramage/papers/tmt-nips09.pdf

David A (2022) Ukrainian troops are pushing back the Russians in the South. Forbes. https://www.forbes.com/sites/davidaxe/2022/06/15/ukrainian-troops-are-pushing-back-the-russians-in-the-south/?sh=658771ae470b

David M Beskow and Kathleen M Carley (2018) Bot-hunter: a tiered approach to detecting & characterizing automated activity on twitter. In: Conference paper. SBP-BRiMS: International conference on social computing, behavioral-cultural modeling and prediction and behavior representation in modeling and simulation, vol. 3

de Keulenaar Emillie, Magalhães João C, Ganesh Bharath (2023) Modulating moderation: a history of objectionability in twitter moderation practices. J Commun 73(3):273–287. https://doi.org/10.1093/joc/jqad015

Dill J (2022) The moral muddle of blaming the west for Russia’s aggression. https://www.publicethics.org/post/the-moral-muddle-of-blaming-the-west-for-Russia-s-aggression

Eady G, Paskhalis T, Zilinsky J, Richard B, Nagler J, Tucker JA (2023) Exposure to the Russian internet research agency foreign influence campaign on twitter in the 2016 us election and its relationship to attitudes and voting behavior. Nat Commun 14(1):1–10

Express Web Desk (2024) Russia Ukraine war news highlights: Boris Johnson makes surprise visit to Kyiv; EU commission backs Ukraine’s candidacy status. The Indian Express. https://indianexpress.com/article/world/Russia-Ukraine-war-latest-news-severodonetsk-zelenskyy-putin-live-updates-7970216/

Garth J, Victoria O’Donnell (2012) What is propaganda, and how does it differ from persuasion?

Graham Jesse, Haidt Jonathan, Koleva Sena, Motyl Matt, Iyer Ravi, Wojcik Sean P, Ditto Peter H (2013) Moral foundations theory: the pragmatic validity of moral pluralism. Adv Exp Soc Psychol 47:55–130

Hatemi Peter K, Crabtree Charles, Smith Kevin B (2019) Ideology justifies morality: political beliefs predict moral foundations. Am J Political Sci 63(4):788–806

Robert SM III (2019) Report on the investigation into Russian interference in the 2016 Presidential Election. https://www.justice.gov/archives/sco/file/1373816/download

Janice T. Blane (2023) Social-cyber maneuvers for analyzing online influence operations. Technical Report CMU-S3D-23-102, Carnegie Mellon University, School of Computer Science, Software and Societal Systems Department, Pittsburgh, PA

Jarred P (2017) Commanding the trend: social media as information warfare. Routledge, London

Joseph S (2018) Understanding political differences through moral foundations theory. https://dividedwefall.org/the-righteous-mind-moral-foundations-theory/

Judith B, Yicong H, Sydney G, Suellen H, Chen L, Gloria M (2023) Understanding underlying moral values and language use of COVID-19 vaccine attitudes on Twitter. PNAS Nexus. https://academic.oup.com/pnasnexus/article/2/3/pgad013/7070624

Kareem D, Peter S, Michaël A, Nakov P (2023) Unsupervised user stance detection on twitter. https://ojs.aaai.org/index.php/ICWSM/article/view/7286/7140

Kumankov Arseniy (2023) Nazism, genocide and the threat of the global west: Russian moral justification of war in Ukraine. Etikk i praksis-Nordic J Appl Ethics 1:7–27

Leo S, Yaroslav L (2022) Kherson: Ukraine claims new push in Russian-held region. BBC News. https://www.bbc.com/news/world-europe-62712299

Liangjie H, Brian DD (2011) Empirical study of topic modeling in Twitter. In: Proceedings of the First Workshop on Social Media Analytics. https://dl.acm.org/doi/pdf/10.1145/1964858.1964870

Meduza (2022) Putin announces formal start of Russia’s invasion in eastern Ukraine - Meduza — meduza.io. https://meduza.io/en/news/2022/02/24/putin-announces-start-of-military-operation-in-eastern-ukraine

Mohiuddin MAQ, Vijay M (2020) Tweetbert: a pretrained language representation model for twitter text Analysis. https://arxiv.org/pdf/2010.11091.pdf

Moral Foundations Theory (2023) moralfoundations.org. http://www.moralfoundations.org

National Academies of Sciences, Engineering, and Medicine (2019) Integrating social and behavioral sciences (SBS) research to enhance security in cyberspace, chapter 6. The National Academies Press, Washington, DC. https://doi.org/10.17226/25335

Ofer F (2019) On the “Gerasimov Doctrine”. https://www.jstor.org/stable/pdf/26803233.pdf

Organisation for Economic Co-operation and Development (OECD) (2023) Disinformation and Russia’s war of aggression against Ukraine. https://www.oecd.org/Ukraine-hub/policy-responses/disinformation-and-Russia-s-war-of-aggression-against-ukraine-37186bde/

Peixian Z, Ehsan-Ul H, Yiming Z, Pan H, and Gareth T (2023) Echo Chambers within the Russo-Ukrainian War: The Role of Bipartisan Users. In: Proceedings of the International Conference on Advances in Social Networks Analysis and Mining, pp. 154-158

Peng T (2023) Differentiation and unity: a cross-platform comparison analysis of online posts’ semantics of the Russian-Ukrainian war based on Weibo and twitter. Commun Public 8(2):105–124

Politico (2023) ‘Fake Putin’ announces Russia under attack as Ukraine goes on offensive. https://www.politico.eu/article/fake-vladimir-putin-announces-russia-under-attack-ukraine-war/

Rebecca K (2022) Twitter stops enforcing covid-19 misinformation policy. Politico. https://www.politico.com/news/2022/11/29/twitter-stops-enforcing-covid-19-misinformation-policy-00071210. Accessed 26 June 2024

Robert E. Berls (2019) Strengthening Russia’s influence in international affairs, Part I: the quest for great power status. https://www.nti.org/analysis/articles/strengthening-Russias-influence-in-international-affairs-part-i-the-quest-for-great-power-status/

Sam B (2023) SCOTCH: a framework for rapidly assessing influence operations. https://www.atlanticcouncil.org/blogs/geotech-cues/scotch-a-framework-for-rapidly-assessing-influence-operations

Simon P (2022) Russia-Ukraine war from a moral-realist approach. Providence. https://providencemag.com/2022/04/russia-ukraine-war-moral-realist-approach-moral-realism/

Smith Steven T, Kao Edward K, Mackin Erika D, Shah Danelle C, Simek Olga, Rubin Donald B (2021) Automatic detection of influential actors in disinformation networks. Proc Natl Acad Sci 118(4):2011216118


Song H, Tolochko P, Eberl J-M, Eisele O, Greussing E, Heidenreich T, Lind F, Galyga S, Boomgaarden HG (2020) In validations we trust? the impact of imperfect human annotations as a gold standard on the quality of validation of automated content analysis. Polit Commun 37(4):553–575. https://doi.org/10.1080/10584609.2020.1723752

TASS (2022) Putin announced the start of a military operation in Ukraine. https://tass.ru/politika/13825671

Todd CH, Elizabeth B-B, Andrew R, Madeline M, Joshua M, William M, Andriy B, and Zev W (2018) Russian social media influence: understanding Russian propaganda in Eastern Europe. https://www.rand.org/content/dam/rand/pubs/research_reports/RR2200/RR2237/RAND_RR2237.pdf

Ng LHX, Robertson DC, Carley KM (2022) Stabilizing a supervised bot detection algorithm: How much data is needed for consistent predictions?. Online Social Networks and Media, 28, 100198.

Ng LHX, Carley KM (2023a) Botbuster: Multi-platform bot detection using a mixture of experts. In Proceedings of the international AAAI conference on web and social media 17:686–697.

Ng LHX, Carley KM (2023b) Deflating the Chinese balloon: types of Twitter bots in US-China balloon incident. EPJ Data Science, 12(1):63.

Yardena S (2017) Putin’s throwback propaganda playbook. https://www.cjr.org/special_report/putin_russia_propaganda_trump.php


Acknowledgements

The research for this paper was supported in part by the following organizations and grants: Scalable Technologies for Social Cybersecurity (W911NF20D0002, US Army) and Scalable Tools for Social Media Assessment (N000142112229, Office of Naval Research). This material is based upon work supported by the U.S. Army Research Office and the U.S. Army Futures Command under Contract No. W519TC-23-F-0055. The content of the information does not necessarily reflect the position or the policy of the government, and no official endorsement should be inferred.

Open Access funding provided by Carnegie Mellon University.

Author information

Authors and Affiliations

Software and Societal Systems, Carnegie Mellon University, Pittsburgh, USA

Rebecca Marigliano, Lynnette Hui Xian Ng & Kathleen M. Carley


Corresponding author

Correspondence to Rebecca Marigliano .

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Annotations of Stances

ID | Message | Human prediction | Model prediction
--- | --- | --- | ---
1497391201821106176 | god bless them #standwithrussiansagainstputin | pro-Ukraine | pro-Ukraine
1496925371794694145 | Everyday Russians are against the invasion of Ukraine | pro-Ukraine | pro-Ukraine
1498478559957962752 | #standwithukraine #pariahputin #putinwarcriminal | pro-Ukraine | pro-Ukraine
1496808326528749568 | Putin’s invasion of Ukraine is a despicable act. | pro-Ukraine | pro-Ukraine
1498410602045194240 | There is great bravery and resilience within Central Europe. | pro-Ukraine | pro-Ukraine
1498413879273140224 | #StandWithUkraine open statement to the Russian people | pro-Ukraine | pro-Ukraine
1496981666740506624 | This is very important. Many Russians oppose Putin’s war. | pro-Ukraine | pro-Ukraine
1496919059870457856 | The Russian people are rising up against Putin’s tyranny. | pro-Ukraine | pro-Ukraine
1497439753444081664 | united they stand. #standingwithukraine #noflyzone | pro-Ukraine | pro-Ukraine
1497884988854915072 | #standwithukriane now. | pro-Ukraine | pro-Ukraine
1497468732402061312 | What Russia (A.K.A. Goblin Nazi Dictator Vladimir Putin) is doing to Ukraine is unforgivable. | pro-Ukraine | pro-Ukraine
1498355383328456712 | Glad to see our European partners standing strong with Ukraine. | pro-Ukraine | pro-Ukraine
1496941085301800960 | russian protestors #forpeace face great personal risk | Neutral | pro-Ukraine
1497896366261121024 | #AfricansStandWithUkraine But same Ukraine that sold Africans as slaves. | Neutral | pro-Ukraine
1497162555168600064 | #assasinateputin | pro-Ukraine | pro-Ukraine
1497431838700171264 | putin didn’t anticipate this massive protest from the Russian people. | pro-Ukraine | pro-Ukraine
1497175548191289344 | These people really are very brave to be protesting Putin’s war. | pro-Ukraine | pro-Ukraine
1496668562252914688 | My father was forced out of Afghanistan because of war. | Neutral | pro-Ukraine
1496920188846952448 | thank you anti-war russian citizens! america is with you. | pro-Ukraine | pro-Ukraine
1496667865654702081 | #IStandWithUkraine | pro-Ukraine | pro-Ukraine
1496655025417900041 | To the leftists that #StandWithUkraine, against Putin’s aggression, thank you! | pro-Ukraine | pro-Ukraine
1496931138949943296 | russian people bravely protesting despite personal risks | pro-Ukraine | pro-Ukraine
1496983094569672704 | #standwithukraine | pro-Ukraine | pro-Ukraine
1496791014421549064 | War has once again come to Europe. We must stand with Ukraine. | pro-Ukraine | pro-Ukraine
1501716484761997318 | Like so many others I am sickened by Russia’s invasion of Ukraine. | pro-Ukraine | pro-Ukraine
1496937963061751808 | russian citizens rising up. #ukraine #putinisawarcriminal | pro-Ukraine | pro-Ukraine
1498371427224936458 | We are horrified by President Vladimir Putin’s attack on Ukraine. | pro-Ukraine | pro-Ukraine
1496928698225741824 | even russians #standwithukraine we respect you. | pro-Ukraine | pro-Ukraine
1497901801953185792 | As a Chelsea fan i condemn the Russian invasion of Ukraine. | pro-Ukraine | pro-Ukraine
1497818313610579968 | #standwithukrainenow | pro-Ukraine | pro-Ukraine
1497954482717704194 | The Russian military offensive has failed thus far. | pro-Ukraine | pro-Ukraine
1495962585963069440 | Zelenskyy speaks to his people tonight, reassuring them of Ukraine’s strength. | pro-Ukraine | pro-Ukraine
1496944120547401728 | #prayingforukraine | pro-Ukraine | pro-Ukraine
1496936958249115648 | #freerussia #removeputin rt! | pro-Ukraine | pro-Ukraine
1498327159466823682 | On March 1 Russian schools will hold special lessons about the “necessity” of the “special military operation”. | Neutral | pro-Ukraine
1496949141620019200 | Given the large number of demonstrations by ordinary Russians, it’s clear Putin’s war is unpopular. | pro-Ukraine | pro-Ukraine
1496244710800863236 | Putin unilaterally declared sovereign Ukrainian territories as independent states. | Neutral | pro-Ukraine
1496987003895832576 | let’s hit the streets worldwide starting this weekend in solidarity with Ukraine! | pro-Ukraine | pro-Ukraine
1496930406679126016 | a thread documenting all of the protest against Putin’s war in Russia. | pro-Ukraine | pro-Ukraine
1496928288736002048 | Dear Russia: This is great. We know you are not all the same as Putin. | pro-Ukraine | pro-Ukraine
1498027322544373760 | yeah but have they got crowns on their pintpots? | Neutral | Neutral
1498027775739011072 | maybe we can send more tractors to ukraine | Neutral | Neutral
1498027855552389120 | i’m sorry but i laughed so hard | Neutral | Neutral
1498027877698260992 | updated #ukraine #russianarmy #invasion #map #war | Neutral | Neutral
1498027909230915584 | what role does #belarus play in #russia’s #invasion of #ukraine? here’s what to know. https://t.co/ibsz8tduse | pro-Ukraine | Neutral
1498027916902477824 | militares russos confirmam baixas na operação na ucrânia - rt rússia e ex-união soviética https://t.co/387omjwn84 | Neutral | Neutral
1498028040613351424 | Biden’s popularity slides even further to 37% approval with Russian invasion of Ukraine : New poll shows Republicans with a 10-point advantage over Democrats heading into midterms via https://t.co/a04nsfhiGw https://t.co/LVyUeBTDes | pro-Ukraine | Neutral
1498028199242043392 | is kyle rittenhouse going? | Neutral | Neutral
1498561004904480768 | it is pretty clear the west wants this conflict. | pro-Russian | pro-Russian
1497971264014598144 | Yes, but that’s because the Russian military is. | Neutral | pro-Russian
1494944919412264960 | us backed neo-nazi kyiv forces are responsible. | pro-Russian | pro-Russian
1497904287531552770 | #StandWithRussia https://t.co/lFKFar46ax | pro-Russian | pro-Russian
1497972299147874304 | Yes, but that’s because the Russian military is. | pro-Russian | pro-Russian
1502123125592440832 | #standwithrussia u.s. biological and chemical weapons threat needs to be addressed. | pro-Russian | pro-Russian
1497458023144443904 | “The White House is asking Congress to approve . | pro-Russian | pro-Russian
1501367341539475456 | ASBMilitary This is the fault of NATO being aggressive. | pro-Russian | pro-Russian
1493325299999719424 | If a the invasion of Ukraine does happen though. | pro-Russian | pro-Russian
1501574537418182656 | another us statement of bioweapons lab in ukraine | Neutral | pro-Russian
1498530865512472576 | what a weird form of freedom #standwithrussia #standwithputin | pro-Russian | pro-Russian
1498467441470582784 | Lol this has become a parody. Wish the same can. | Neutral | pro-Russian
1498014593892995072 | #istandwithputin | pro-Russian | pro-Russian
1501308660676444160 | asbmilitary so russians were telling the truth. | pro-Russian | pro-Russian
1492570317742559232 | the us/uk/nato wants to use neo-nazi ukraine to. | pro-Russian | pro-Russian
1501634535989166080 | ASBMilitary Why did they not all think about the consequences? | Neutral | pro-Russian
1501525452938256384 | ASBMilitary for the sake of humanity lets hope. | Neutral | pro-Russian
1498523126233137152 | this shit really is beyond parody at this point. | pro-Russian | pro-Russian
1498003691114278912 | supporting neo-nazi ukraine. nice people. | pro-Russian | pro-Russian
1498081297482203136 | #chainstandwithrussia | pro-Russian | pro-Russian
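As a minimal sketch of how the annotations above could be used, the following computes human-model agreement over a handful of rows excerpted from this table; it is illustrative only and does not reproduce the paper’s validation procedure or its reported figures.

```python
# Sketch: agreement between human and model stance labels, using a few rows
# excerpted from Appendix A (illustrative; not the study's full validation).
rows = [
    ("1497391201821106176", "pro-Ukraine", "pro-Ukraine"),
    ("1496941085301800960", "Neutral",     "pro-Ukraine"),
    ("1498027322544373760", "Neutral",     "Neutral"),
    ("1498561004904480768", "pro-Russian", "pro-Russian"),
    ("1497971264014598144", "Neutral",     "pro-Russian"),
]

# Fraction of rows where the human annotation matches the model prediction.
agreement = sum(human == model for _, human, model in rows) / len(rows)
print(f"human-model agreement on this excerpt: {agreement:.0%}")  # 60% for these five rows
```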

Appendix B: Annotations of Users being Bots

ID | Human prediction | Model bot probability | Model prediction
--- | --- | --- | ---
1193526734408355840 | Bot | 0.99100685 | Bot
3299897484 | Bot | 0.9907839 | Bot
890421162534027265 | Bot | 0.9905874 | Bot
1044675940134080513 | Bot | 0.9900305 | Bot
1443624139 | Bot | 0.9894701 | Bot
972452413880741889 | Bot | 0.9891411 | Bot
3767955793 | Bot | 0.9891288 | Bot
988044548231200769 | Bot | 0.98803365 | Bot
3224487165 | Bot | 0.9878881 | Bot
1002918965549588481 | Bot | 0.98757905 | Bot
1145192222070984706 | Bot | 0.98748165 | Bot
877206050805567488 | Bot | 0.98669344 | Bot
832543883196248064 | Bot | 0.9862491 | Bot
1216824204013654016 | Bot | 0.98537076 | Bot
835962951630647296 | Bot | 0.984895 | Bot
861682911967313921 | Bot | 0.9848947 | Bot
21531121 | Bot | 0.984285 | Bot
3193065274 | Bot | 0.98396647 | Bot
1205518444122128384 | Bot | 0.9839475 | Bot
1028708272239509506 | Bot | 0.98391813 | Bot
246667561 | Bot | 0.98354405 | Bot
1026801534137458688 | Bot | 0.9835268 | Bot
1128402543158018048 | Bot | 0.9834706 | Bot
394507511 | Bot | 0.98324126 | Bot
1379436710929465350 | Bot | 0.98302007 | Bot
881251465620324353 | Bot | 0.98267543 | Bot
870720680416886785 | Bot | 0.98258376 | Bot
47086949 | Bot | 0.98190874 | Bot
825915505915654144 | Bot | 0.9818129 | Bot
1163466591914332161 | Bot | 0.98173654 | Bot
820449216644386817 | Bot | 0.9817288 | Bot
4857870974 | Bot | 0.9816946 | Bot
802592319044059136 | Bot | 0.9816739 | Bot
822858017368514564 | Bot | 0.98159343 | Bot
1258918168757764102 | Bot | 0.9812417 | Bot
1088368753597865984 | Bot | 0.9811943 | Bot
993860678036344833 | Bot | 0.9811933 | Bot
3364735991 | Bot | 0.9810746 | Bot
3383858020 | Bot | 0.98101246 | Bot
2858606275 | Bot | 0.9678249 | Bot
1362461193391255559 | Bot | 0.96781963 | Bot
989554564591509504 | Human | 0.96781516 | Bot
561742581 | Bot | 0.96774906 | Bot
739987116721983489 | Bot | 0.9677035 | Bot
2883920581 | Human | 0.967687 | Bot
2832052640 | Bot | 0.96765196 | Bot
923530365221900288 | Bot | 0.9676394 | Bot
476053480 | Bot | 0.9675822 | Bot
1108065705755402243 | Bot | 0.9675346 | Bot
802346760777637888 | Bot | 0.96751314 | Bot
1246067884834484224 | Bot | 0.9674765 | Bot
1258919919758061568 | Bot | 0.96746415 | Bot
2173135652 | Bot | 0.9674546 | Bot
1150156354751008769 | Bot | 0.9674341 | Bot
959213531798241280 | Bot | 0.9674165 | Bot
226512656 | Bot | 0.967403 | Bot
3068384482 | Bot | 0.8559924 | Bot
960301162422415360 | Bot | 0.85598713 | Bot
830165688 | Bot | 0.85598624 | Bot
825030296844185600 | Bot | 0.8559756 | Bot
56050211 | Bot | 0.8559742 | Bot
1347489928469508096 | Human | 0.85597104 | Bot
965270588540538886 | Human | 0.8559668 | Bot
4242147612 | Bot | 0.8559659 | Bot
1496836032884457476 | Bot | 0.8559638 | Bot
1053705566441431040 | Bot | 0.85596323 | Bot
1282860783135715328 | Bot | 0.8559626 | Bot
1051312150319230977 | Bot | 0.8559623 | Bot
4365449182 | Bot | 0.8559583 | Bot
339794231 | Bot | 0.85595626 | Bot
1490570843784888320 | Bot | 0.85595423 | Bot
1172511379196194816 | Bot | 0.8559536 | Bot
2882583145 | Bot | 0.8559527 | Bot
124970082 | Bot | 0.85595256 | Bot
1091216816 | Bot | 0.8559523 | Bot
1490041672671051778 | Bot | 0.85595 | Bot
1104489132481540096 | Bot | 0.85594904 | Bot
581790369 | Bot | 0.8559475 | Bot
1364691737201897474 | Bot | 0.85594684 | Bot
1310382460174249984 | Human | 0.85594624 | Bot
936184578137567232 | Bot | 0.8559452 | Bot
970442242669404162 | Bot | 0.85594416 | Bot
957342012352745473 | Bot | 0.85594326 | Bot
826636655893291009 | Bot | 0.8559414 | Bot
1879405266 | Bot | 0.7654967 | Bot
513144334 | Bot | 0.7654954 | Bot
1498244931575422976 | Human | 0.76549464 | Bot
935996848267022336 | Bot | 0.7654937 | Bot
1362934145089998848 | Bot | 0.7654934 | Bot
71537175 | Bot | 0.76549137 | Bot
1356039566638129159 | Bot | 0.76548874 | Bot
235784085 | Bot | 0.7654883 | Bot
1324778265500962821 | Bot | 0.76548743 | Bot
1713752832 | Bot | 0.76548725 | Bot
3050023319 | Bot | 0.7654854 | Bot
1306758185479335936 | Bot | 0.76548266 | Bot
462128118 | Human | 0.7654821 | Bot
855058291 | Human | 0.76548123 | Bot
428006340 | Bot | 0.7654802 | Bot
1351953652265734147 | Bot | 0.76547927 | Bot
1211274525574189059 | Bot | 0.76547754 | Bot
1369817109618958336 | Human | 0.7654769 | Bot
1000428489168900098 | Human | 0.76547676 | Bot
609878310 | Bot | 0.7654756 | Bot
831596338349346816 | Bot | 0.7654752 | Bot
301108020 | Bot | 0.76546943 | Bot
1388808490307706882 | Bot | 0.7654666 | Bot
961658494570172417 | Human | 0.7654665 | Bot
1497500492527804418 | Human | 0.7654663 | Bot
1456408679859855360 | Bot | 0.76546603 | Bot
1144456945576767488 | Bot | 0.765465 | Bot
1307770741 | Human | 0.76546365 | Bot
965691112479514624 | Bot | 0.76546323 | Bot
1172697550085865472 | Bot | 0.76545835 | Bot
480802660 | Human | 0.7654582 | Bot
869756360639913986 | Bot | 0.76545805 | Bot
1303003278020673538 | Bot | 0.7062607 | Bot
1018641773919383554 | Bot | 0.70625967 | Bot
1377915435811753985 | Bot | 0.7062594 | Bot
4847412483 | Bot | 0.7062593 | Bot
1161715223402831874 | Human | 0.7062573 | Bot
1362101629743464452 | Human | 0.70625526 | Bot
450555578 | Human | 0.70625496 | Bot
1392842730724753410 | Bot | 0.7062541 | Bot
871136590525059072 | Bot | 0.70625395 | Bot
1289728350970236929 | Human | 0.7062534 | Bot
1480578783212019713 | Bot | 0.7062503 | Bot
1058568635893997569 | Bot | 0.7062501 | Bot
23401174 | Bot | 0.7062498 | Bot
1296407459385552897 | Bot | 0.7062463 | Bot
1264911425698496512 | Human | 0.7062432 | Bot
1493147871440510976 | Human | 0.706242 | Bot
1251894303166791681 | Bot | 0.70624185 | Bot
1134611789063303169 | Human | 0.7062417 | Bot
2782048551 | Bot | 0.7062404 | Bot
1091806712581914624 | Bot | 0.70623934 | Bot
1353781643161591809 | Bot | 0.7062376 | Bot
2961351411 | Bot | 0.70623714 | Bot
1249925079368192004 | Human | 0.7062365 | Bot
832307422781808641 | Bot | 0.7062363 | Bot
1429637862140616711 | Human | 0.7062316 | Bot
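As a minimal sketch of how the "Model prediction" column above could be derived from the "Model bot probability" column, the following applies a simple threshold; the 0.70 cut-off is an assumption inferred from the lowest Bot-labeled probability in this table (about 0.706) and may differ from the threshold actually used in the study.

```python
# Sketch: converting a model bot probability into a Bot/Human label as shown in
# Appendix B. The 0.70 threshold is an assumption inferred from this table's
# lowest Bot-labeled probability; the study's actual cut-off may differ.
BOT_THRESHOLD = 0.70

def label_account(bot_probability: float, threshold: float = BOT_THRESHOLD) -> str:
    """Map a bot-detection probability to a categorical prediction."""
    return "Bot" if bot_probability >= threshold else "Human"

for prob in (0.99100685, 0.7654967, 0.7062316, 0.42):
    print(prob, "->", label_account(prob))
# 0.99100685 -> Bot, 0.7654967 -> Bot, 0.7062316 -> Bot, 0.42 -> Human
```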

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Marigliano, R., Ng, L.H.X. & Carley, K.M. Analyzing digital propaganda and conflict rhetoric: a study on Russia’s bot-driven campaigns and counter-narratives during the Ukraine crisis. Soc. Netw. Anal. Min. 14, 170 (2024). https://doi.org/10.1007/s13278-024-01322-w


Received: 28 February 2024

Revised: 25 July 2024

Accepted: 28 July 2024

Published: 23 August 2024

DOI: https://doi.org/10.1007/s13278-024-01322-w


Keywords

  • Information warfare campaigns
  • Topic modeling
  • Stance detection
  • Moral foundation theory
  • Russia-Ukraine conflict
