Frank Schweitzer, how can we prevent malicious AI swarms from destroying our democracies?
Artificial intelligence is increasingly being used for mass disinformation and manipulation. Twenty-two researchers from different disciplines warned against this in an article recently published in the journal Science. The authors include Frank Schweitzer, Professor Emeritus of Systems Design at ETH Zurich.
Today, what we see popping up in social media networks are no longer just individual automated bots that influence discussions, but whole swarms of them. They simulate debates and, at the same time, adapt to the behaviour and tastes of users. Generative AI makes propaganda more convincing and more comprehensive. This raises fears that even election results could be deliberately manipulated with malicious AI swarms. For example, swarms feign polarised discussions or promote minority opinions by reinforcing them with likes, retweets, comments and ratings.
The expert
Frank Schweitzer is Professor Emeritus of Systems Design at ETH Zurich and a founding member of the ETH Risk Center. His research interests include user interaction in online social networks, collective decisions in animal groups, cascades of failure and systemic risks in economic networks.
A few years ago, it was much harder to program these types of manipulative swarms and adapt them to user behaviour. Today, it is getting easier all the time because artificial intelligence is developing very quickly and computing power is expanding around the world. This means that AI swarms are turning into just another business model, whether legal or illegal.
The insidious thing is that it is becoming increasingly difficult for cybersecurity experts to distinguish the behaviour of such swarms from the real social behaviour of social media users. The agents in these swarms coordinate with one another and, by interacting with users, continuously learn and adapt to user behaviour. In doing so, they simulate a social reality that does not actually exist.
Recognising such manipulation is a scientific and technical challenge. Today, it is no longer enough to identify a single source of false information. Instead, specialists use statistical methods to identify ‘fingerprints’: characteristic patterns of behaviour or linguistic repetitions that point to automated bots. As large language models (LLMs) become more powerful, AI swarms also become better at evading such detection methods. It is a constant race between those who use malicious AI swarms for their own ends and those who want to protect the Internet from such attacks.
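To make the idea of a linguistic ‘fingerprint’ concrete, here is a minimal sketch in Python. It flags pairs of accounts whose posts share unusually many identical word n-grams, a crude stand-in for the behavioural and linguistic patterns described above. The sample data, the fixed threshold and all names are illustrative assumptions, not the methods the researchers actually use.

```python
# Illustrative sketch of 'fingerprint' detection: flag pairs of accounts
# whose posts share many identical word n-grams. All data, names and
# thresholds here are hypothetical.
from collections import defaultdict
from itertools import combinations

def ngram_fingerprints(text, n=5):
    """Word n-grams of a post: a simple stand-in for a linguistic fingerprint."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def suspicious_pairs(posts, min_shared=3):
    """posts: (account, text) pairs. Yields account pairs sharing many n-grams."""
    prints = defaultdict(set)                 # account -> union of its fingerprints
    for account, text in posts:
        prints[account] |= ngram_fingerprints(text)
    for a, b in combinations(sorted(prints), 2):
        shared = len(prints[a] & prints[b])
        if shared >= min_shared:              # naive fixed threshold; real detectors
            yield a, b, shared                # compare against statistical baselines

# Hypothetical example: two accounts pushing near-identical talking points.
posts = [
    ("acct_1", "the election was clearly stolen and everyone knows it by now"),
    ("acct_2", "wake up people the election was clearly stolen and everyone knows it"),
    ("acct_3", "lovely weather in Zurich today going for a hike by the lake"),
]
for a, b, shared in suspicious_pairs(posts):
    print(f"{a} and {b} share {shared} five-word fingerprints")
```

Real detectors work on far richer signals, such as posting-time regularities and coordinated retweet cascades, and, as noted above, increasingly capable LLMs make purely linguistic fingerprints ever easier to evade.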
Stronger regulations only have a limited impact, because criminal actors usually quickly find ways to circumvent them. In the best-case scenario, regulations might increase the effort – and thus the costs – required for successful attacks. It seems more important to me, however, that citizens develop an awareness of the dangers of AI swarms. We need to educate schoolchildren and students in digital media literacy and teach them about the risks of AI swarms and automated influence on social media.
In Switzerland, we start from a good position: Swiss voters are accustomed to dealing with complex voting issues on a regular basis and to gathering a wide range of information. To that end, it is important to draw not only on social networks but also on traditional media such as regional newspapers and public radio and television, as well as on Federal Council dispatches and official communications from the political parties.
The more diverse and independent the information to which citizens have access, the less likely it is that their opinions will be manipulated by AI swarms. Strong state media and a diverse media landscape play an important role in our democracy – provided, of course, that citizens actually use the various information channels and participate in the political process.