The Rise of Generative AI and the Coming Era of Social Media Manipulation 3.0: Next-Generation Chinese Astroturfing and Coping with Ubiquitous AI, RAND Corporation, September 2023.

by William Marcellino, Nathan Beauchamp-Mustafaga, Amanda Kerrigan, Lev Navarre Chao, Jackson Smith

The world may remember 2022 as the year of generative artificial intelligence (AI): the year that large language models (LLMs), such as OpenAI's GPT-3, and text-to-image models, such as Stable Diffusion, marked a sea change in the potential for social media manipulation. LLMs that have been optimized for conversation (such as ChatGPT) can generate naturalistic, human-sounding text at scale, and open-source text-to-image models can do the same for photorealistic images of anything, real or imagined. Using existing technology, U.S. adversaries could build digital infrastructure to manufacture realistic but inauthentic (fake) content and use it to animate similarly realistic but inauthentic online human personae: accounts on Twitter, Reddit, or Facebook that seem real but are synthetic constructs, fueled by generative AI and advancing narratives that serve the interests of adversary governments.

In this Perspective, the authors argue that the emergence of ubiquitous, powerful generative AI poses a potential national security threat: the risk of misuse by U.S. adversaries, particularly for social media manipulation, which the U.S. government and the broader technology and policy community should proactively address now. Although the authors focus on China and its People's Liberation Army as an illustrative example of the potential threat, a variety of actors could use generative AI for social media manipulation, including technically sophisticated nonstate actors, both domestic and foreign. The capabilities and threats discussed in this Perspective are likely also relevant to other actors, such as Russia and Iran, that have already engaged in social media manipulation.



https://www.rand.org/pubs/perspectives/PEA2679-1.html