Social media platforms have transformed how individuals consume information by customizing content to align with user preferences. This customization filters out information irrelevant to the user, enhances engagement, and improves the user experience. However, customization can have unintended consequences, such as creating “filter bubbles,” which isolate users from content they have not expressed interest in, and “echo chambers,” in which users are repeatedly exposed to information that reinforces their existing beliefs.1–3 These effects contribute to social polarization,4,5 the spread of misinformation,6,7 and the promotion of harmful behaviors, such as risky TikTok “challenges.”8–11
Generative artificial intelligence (GenAI) is the type of AI behind the models, or core components, used in applications like ChatGPT. It represents a major technological advance, enabling the creation of entirely new content (including text, images, video, and programming code) rather than merely retrieving or curating existing information. Trained on vast datasets, often including all publicly available data on the internet (such as Wikipedia and online forums) and even proprietary data (such as books and newspaper articles), these models learn complex patterns, capture contextual nuances, and generate highly adaptive responses.
GenAI can deliver an unprecedented level of customization, a capability we call hypercustomization.
In the development process, GenAI applications are often fine-tuned—in other words, refined, adapted, and specialized based on human feedback—to align with the values or norms of the society for which they are intended or in which they are deployed.12 That means that two GenAI applications, such as ChatGPT (United States) and DeepSeek (China), could systematically differ in their responses to controversial topics, such as China’s 1989 crackdown on protesters in Tiananmen Square.13 GenAI can also be tailored for concrete tasks and roles, from assisting in clinical decision-making and synthesizing medical research14 to performing psychotherapy15 and debunking false claims.16
GenAI can adapt its responses to every question or prompt from users17 while also being attuned to implicit cues in language (such as wording that conveys emotion) and behavior (such as a strong and sudden interest in certain topics) to inform its responses. These capabilities can enable a deeply tailored and interactive experience for the user. For example, a GenAI model that offers companionship might infer that certain topics make a user angry or that the user does not tend to talk about work in the evening. The process of customization at both the general and user-specific level is largely opaque to users, whose understanding of the application is limited to a provider’s descriptions of its capabilities.
The consequences of such hypercustomization are similar to those of social media customization but have the potential to be more severe, for reasons we explore in detail later in this article. The risks stem largely from the reinforcement of a person’s existing beliefs, habits, and tendencies.
As users engage with GenAI applications that can align with their biases, ideological extremism may grow, and collective consensus may erode,18 which at scale may contribute to the decline of democratic institutions and processes—a phenomenon known as democratic backsliding.19 Beyond polarization, GenAI might enable new ways of shaping narratives that could fragment shared reality, producing multiple and customized versions of events13 and thereby deepening social divisions. Misinformation could exacerbate these effects, as GenAI can fabricate and reinforce false but persuasive narratives, potentially fueling social unrest and institutional distrust.
It has been claimed that, on an individual level, certain GenAI applications can goad people into making poor choices,20 including engaging in destructive behavior such as self-harm and suicide.21 Other GenAI applications, such as those providing companionship, could foster an emotional attachment that might be beneficial in some cases but could also have negative psychological or social effects.20–22
In this article, drawing from governance and behavioral science perspectives—particularly insights from social media’s impact on society—we examine aspects of GenAI that make hypercustomization particularly problematic. We then explore challenges in mitigating these risks and develop recommendations to address them. When discussing GenAI, we focus on four key applications that have significant potential for harm. These include GenAI information provision assistants, such as ChatGPT, which provide information through conversational interactions. In a different use, applications like ChatGPT (for text), DALL-E (for images, https://openai.com/index/dall-e-3/), and Sora (for video, https://sora.com/), as well as similar ones from other providers, can function as content generation assistants that generate new content, such as edited videos, images, and customized texts. The applications also include GenAI-based autonomous agents, which may provide customer service or populate social media platforms to influence the opinions of individuals on a large scale. And they include the GenAI-based social companions referred to earlier, which mimic human interactions to offer company and support. (See the sidebar “Four GenAI Applications That Pose Risks From Hypercustomization” for more details about the four application types.)
