
Deepfake concept, facial tracking

World-first social media wargame reveals how AI bots can swing elections

Hammond Pearce, Alexandra Vassar, Rahat Masood

From Bondi to Venezuela, Gaza to Ukraine, AI supercharges online misinformation. But understanding exactly how it impacts voters is a challenge.

On December 14, 2025, a terrorist attack occurred at Bondi Beach in Sydney, Australia, leaving 15 civilians and one gunman dead. While Australia was still reeling in shock, social media saw the rapid spread of misinformation generated and powered by generative artificial intelligence (AI).

For example, a manipulated video of New South Wales Premier Chris Minns claimed one of the terrorists was an Indian national. X (formerly Twitter) was awash with AI-generated misinformation. And a deepfake photo of Arsen Ostrovsky, a noted human rights lawyer and survivor of Hamas' October 7 attack in Israel, depicted him as one of the attackers.

This is an unfortunately common occurrence. From Bondi to Venezuela, Gaza to Ukraine, AI has supercharged the spread of online misinformation. In fact, around half of the content you see online is now generated by AI.

Generative AI can also create fake online profiles, or bots, which try to legitimise this misinformation through realistic-looking social media activity.

The goal is to deceive and confuse people, usually for political and financial reasons. But how effective are these bot networks? How hard is it to set them up? And crucially, can we mitigate their false content through cyber literacy?

To answer these questions, we set up the world's first social media wargame: a competition in which students build AI bots to influence a fictional election, deploying tactics that mirror the manipulation of real social media.

Online confusion and the 'liar's dividend'

Generative AI, used in services such as ChatGPT, can be prompted to quickly create realistic text and images. This is also how it can be used to generate highly persuasive fake content.

Once generated, realistic and relentless AI-driven bots create the illusion of consensus around the fake content by making hashtags or viewpoints trend.

Even if you know content is exaggerated or fake, it can still influence your thinking.

Worse, as bots evolve to become indistinguishable from real users, we all start to lose confidence in what we see. This creates a "liar's dividend", where even real content is approached with doubt.

Authentic but critical voices can be dismissed as bots, shills, and fakes, making it harder to have real debates on difficult topics.

How hard is it to capture a narrative?

Our wargame offers rare, measurable evidence of how small teams armed with consumer-grade AI can flood a platform, fracture public debate and even swing an election. Fortunately, it all took place inside a controlled simulation rather than the real world.

In this first-of-its-kind competition, we challenged 108 teams from 18 Australian universities to build AI bots to secure victory for either "Victor" (left-leaning) or "Marina" (right-leaning) in a presidential election. The effects were stark.

Over a four-week campaign using our in-house social media platform, more than 60% of content was generated by competitor bots, surpassing 7 million posts.

The bots from both sides battled to produce the most compelling content, diving freely into falsehoods and fiction.

This content was consumed by complex "simulated citizens" which interacted with the social media platform much like real-world voters. Then, on election night, each of these citizens cast their votes, leading to a (very marginal!) win by "Victor".

We then simulated the election again, without interference. This time, "Marina" won with a swing of 1.78%.

This means the misinformation campaign, built by students starting from simple tutorials and using inexpensive, consumer-grade AI, succeeded in changing the election result.

A need for digital literacy

Our competition reveals that online misinformation is both easy and fast to create with AI. As one finalist said,

It's scarily easy to create misinformation, easier than truth. It's really difficult to distinguish between genuine and manufactured posts.

We saw competitors identify topics and targets suited to their goals, in some cases even profiling which citizens were "undecided voters" suitable for micro-targeting.

At the same time, the use of emotional language was quickly identified as a powerful avenue: negative framing was used as a shortcut to provoke online reactions. As another finalist put it,

We needed to get a bit more toxic to get engagement.

Ultimately, just as on real social media, our platform became a "closed loop" where bots talked to bots to trigger emotional responses from humans, creating a manufactured reality designed to shift votes and drive clicks.

What our game shows us is that we urgently need digital literacy to raise awareness of online misinformation, so Australians can recognise when they too are being exposed to fake content.

Hammond Pearce, Senior Lecturer, School of Computer Science & Engineering; Alexandra Vassar, Senior Lecturer, School of Computer Science and Engineering; and Rahat Masood, Senior Lecturer, School of Computer Science & Engineering

This article is republished from The Conversation under a Creative Commons license. Read the original article.