
AI bots easily bypass some social media safeguards, study reveals

While artificial intelligence (AI) bots can serve legitimate purposes on social media, such as marketing or customer service, some are designed to manipulate public discussion, incite hate speech, spread misinformation or perpetrate fraud and scams. To combat potentially harmful bot activity, some platforms have published policies on bot use and created technical mechanisms to enforce those policies.

But are those policies and mechanisms enough to keep social media users safe?

Researchers from the University of Notre Dame analyzed the AI bot policies and enforcement mechanisms of eight social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X (formerly known as Twitter) and the Meta platforms Facebook, Instagram and Threads. The researchers then attempted to launch bots on each platform to test how its bot policies were enforced. Their research is published on the arXiv preprint server.

The researchers successfully published a benign "test" post from a bot on every platform.

"As computer scientists, we know how these bots are created, how they get plugged in and how malicious they can be, but we hoped the social media platforms would block or shut the bots down and it wouldn't really be a problem," said Paul Brenner, a faculty member and director in the Center for Research Computing at Notre Dame and senior author of the study.
