
X, the Elon Musk-owned social media platform formerly known as Twitter, has a significant fake account problem. The proliferation of bots on the social network has been acknowledged by Musk himself, who cited it as the main reason he originally tried to back out of acquiring the company.
And new research from the Observatory on Social Media at Indiana University, Bloomington paints a detailed picture of one such bot network deployed on X. Professor Filippo Menczer, along with student Kai-Cheng Yang, recently published a study on a botnet dubbed Fox8, according to Wired, which first reported on the research.
This past May, the researchers discovered a network of at least 1,140 fake Twitter accounts that constantly posted tweets linking to a string of spammy, no-name online "news" websites that simply reposted content scraped from legitimate outlets.
The vast majority of posts published by this network of bot accounts were related to cryptocurrency and often included hashtags such as #bitcoin, #crypto, and #web3. The accounts would also frequently retweet or reply to popular crypto users on Twitter, such as @WatcherGuru, @crypto, and @ForbesCrypto.
How did a bot network of more than one thousand accounts post so much? It relied on AI, specifically ChatGPT, to automate what was posted. The purpose of these AI-generated posts appeared to be to flood Twitter with as many crypto-hyping links as possible, getting them in front of as many legitimate users as possible in the hope that they'd click on the URLs.
According to Wired, the accounts were eventually suspended by X after the research was published in July. Menczer says his research group previously informed Twitter of such botnets but stopped doing so after Musk's acquisition, as they found the company was "not really responsive" anymore.
While AI tools like ChatGPT helped the botnet's owner pump out content for more than a thousand accounts, the technology also proved to be the network's eventual downfall.
According to the published study, the researchers noticed a pattern across these accounts: they would post tweets beginning with the phrase "as an AI language model." ChatGPT users will be familiar with this phrase, as the chatbot often attaches this disclaimer to responses it decides could be problematic because it is, well, simply an AI language model.
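That telltale phrase makes for a simple detection heuristic. Below is a minimal sketch of the idea, not the researchers' actual pipeline: it scans a batch of tweets for the self-revealing disclaimer and groups hits by account. The tweet data shape, phrase list, and function names are assumptions for illustration only.

```python
# Minimal sketch (not the study's actual method): flag accounts whose tweets
# contain the self-revealing ChatGPT disclaimer. Input format is assumed.
from collections import defaultdict

# Phrase the researchers observed at the start of the botnet's tweets.
SELF_REVEALING_PHRASES = ["as an ai language model"]

def flag_suspect_accounts(tweets):
    """tweets: iterable of dicts like {"user": "...", "text": "..."} (assumed shape)."""
    hits = defaultdict(list)
    for tweet in tweets:
        text = tweet["text"].lower()
        if any(phrase in text for phrase in SELF_REVEALING_PHRASES):
            hits[tweet["user"]].append(tweet["text"])
    return dict(hits)

# Toy usage example:
sample = [
    {"user": "bot_123", "text": "As an AI language model, I cannot predict #bitcoin prices."},
    {"user": "human_456", "text": "Just bought some #crypto, wish me luck."},
]
print(flag_suspect_accounts(sample))  # {'bot_123': ['As an AI language model, ...']}
```

A real study would of course need far more than a single string match, but the sketch shows why such a "sloppy" giveaway is so easy to catch once someone looks for it.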
The researchers pointed out that, were it not for this "sloppy" mistake, the botnet could potentially have continued undiscovered.
Topics: Artificial Intelligence, Social Media, Twitter, Cryptocurrency, ChatGPT