Using Artificial Intelligence (AI) to Spread Misinformation and Fake Content within Social Media Posts

The purpose of this study was to explore how Artificial Intelligence (AI) can be used to generate and automate social media content for spreading misinformation and fake content. While prior research has examined the ethical and societal impact of misinformation, few studies have tested the real-world effectiveness of fully automated AI-generated posts compared to those reviewed with human oversight. To address this gap, two experimental models were developed: Model 1 used a fully automated AI workflow, while Model 2 combined AI-generated content with human interaction. Data was collected from Instagram and Meta between February 12 and April 20, 2025. Both models posted inspirational content with images, but Model 2, which featured human refinement and engagement, saw significantly higher user interaction, including friend requests, messages, and promotional offers. In contrast, Model 1 experienced low engagement and technical issues. The results highlight the importance of human oversight in boosting the credibility of, and interaction with, AI-generated content. This study contributes to the understanding of misinformation by demonstrating how different levels of automation influence content reach, user behavior, and potential ethical risks, emphasizing the need for platform regulation and responsible AI usage.

Vasilka Chergarova
Florida International University
United States
vchergar@fiu.edu


Mel Tomeo
Miami Dade College
United States
mtomeo@mdc.edu


Enas Albataineh
Florida Memorial University
United States
enas.albataineh@fmuniv.edu


Wilfred Mutale
Duquesne University
United States
mutalew@duq.edu


John J. Scarpino
Washington State University
United States
john.scarpino@gmail.com


Heidi Morgan
University of Southern California
United States
hlmorgan@isi.edu