As artificial intelligence (AI) systems become more capable, interactive, and autonomous, they are no longer confined to the functions of mechanistic tools. Increasingly, they act as agentic entities that exhibit autonomy, initiate actions, and interact socially with humans. With the advent of generative AI and large language models, “agentic AI” capable of sophisticated reasoning and iterative learning for problem solving and task completion has become the next frontier of AI (Pounds 2024). This development is evident in the arms race among leading tech companies to build ecosystems and infrastructure for AI agents, such as OpenAI Operator, Google Agentspace, and NVIDIA Agentic AI Blueprints.
These emerging agentic capabilities of AI mark a shift not only in what AI can do but also in how people perceive, receive, and interact with it. To fully realize the benefits of AI and ensure its effective adoption in human society, it is essential to understand how people accept AI. However, traditional models of technology acceptance and innovation adoption, as well as prior literature summarizing AI acceptance, have paid limited attention to the growing salience of agentic aspects of AI (e.g., Kelly, Kaye, and Oviedo-Trespalacios 2023; Mehta et al. 2022; Venkatesh and Davis 2000; Zehnle, Hildebrand, and Valenzuela 2025). As AI becomes more autonomous and socially present, it necessitates an integrative understanding of AI acceptance to address this theoretical and empirical gap.
This research incorporates two meta-perspectives of human–AI interaction, AI as a tool versus AI as an agent, to foster a comprehensive understanding of AI acceptance. Naturally, acceptance depends on various factors, including the features of AI, the context of its use, and the characteristics of individual users. Understanding these nuances is critical for designing and communicating AI in a way that people welcome. What is especially relevant and actionable are “engineerable” AI characteristics—AI system features that can be tailored to enhance user acceptance. Therefore, this research aims to identify these engineerable AI characteristics and understand their effects on human acceptance from the dual perspectives.
Existing literature explores various AI characteristics that drive positive attitudes and behaviors toward AI. Drivers such as output performance and interpretability align with established theories of innovation and technology adoption, notably the Technology Acceptance Model (TAM), which have been instrumental in understanding how individuals perceive and adopt AI as a new technological tool (Kelly, Kaye, and Oviedo-Trespalacios 2023). However, AI differs from other technological tools, exhibiting agency and acting autonomously without direct human intervention. These agentic qualities alter how people perceive these technologies (Vanneste and Puranam 2025). As a result, traditional adoption models may be insufficient, calling for alternative theories such as the social response theory in the “Computers Are Social Actors” (CASA) paradigm (Nass and Moon 2000). These theories introduce different factors that significantly shape AI acceptance, such as anthropomorphism and human control.
In this research, we conduct a systematic literature review and a quantitative synthesis of effect sizes from existing empirical studies. Drawing on theories from multiple disciplines, we propose a framework that integrates key drivers of AI acceptance from both perspectives. Specifically, we examine a set of engineerable AI characteristics, including capability, transparency, reliability, anthropomorphism, expertise scope, human involvement, role, and cost. Based on the empirical findings of 287 effect sizes from 136 studies in 61 publications, we investigate what AI characteristics drive acceptance, when each driver has a greater impact, and which meta-perspective dominates in explaining human acceptance of AI. We adopt an overarching user-centered design (UCD) framework to structure and interpret the relationships among focal engineerable AI features and other relevant factors in an actionable way. To support further exploration, we have developed an interactive meta-analysis web tool (accessible at https://ai-meta-analysis.shinyapps.io/web-tool/) that enables readers to examine subgroup effects and interaction patterns that suit their research interests or AI design needs.
This dual-perspective approach and the design-relevant focus differentiate our study from existing meta-analyses that examine only one perspective of AI (as tools or agents) or remain equivocal about the dichotomy. As summarized in Table 1, our work systematically examines AI acceptance through the combined (and to some degree contradictory) lenses of tool and agent perception, with an emphasis on engineerable AI features that can be modified to influence human acceptance of AI.
Table 1. Comparison of Relevant Recent Meta-Analytical Literature.
Our research makes several key contributions to the AI acceptance literature. First, we provide a systematic, theory-driven synthesis of engineerable AI characteristics that influence human acceptance. By integrating theories from both technological innovation and social agent paradigms, we develop a dual-perspective framework that explains acceptance of AI as a tool and as an agent. Also, grounded in the system usage literature and UCD, we enrich the framework by including relevant user and task factors. Additionally, through the systematic synthesis of effect sizes across extensive studies, we conclude that there is an overall negative response toward the use of AI, contributing to the ongoing debate over AI aversion versus appreciation and underscoring the need for further research to address barriers to AI acceptance. Most importantly, by highlighting the systematic differences and the necessity of examining people's acceptance of AI as a (semi)autonomous agent beyond a mechanistic tool, we provide a foundation for future research on AI acceptance and, more broadly, human–AI interaction in the coming era of agentic AI.
Our findings also offer actionable insights for managers and policy makers. When practitioners better understand which engineerable AI characteristics best enhance people's acceptance, they are able to (1) design AI systems that users are more likely to accept, (2) communicate about AI systems in ways that reduce psychological and behavioral barriers, and (3) develop interventions to promote (or restrain) AI adoption in different contexts. These insights are crucial for implementing effective acceptance-enhancing strategies that encourage consumers, professionals, companies, and governments to adopt and benefit from AI technologies.
Theoretical Background
Definitions of AI: A Tool or an Agent
AI is one of the most advanced and influential technologies ever created. Its complexity and versatility have led to diverse definitions of AI. The father of AI, John McCarthy (McCarthy et al. 1955), broadly defines it as “intelligent machines”; Russell and Norvig, in their canonical book (2009), describe AI as “agents that receive percepts from the environment and perform actions” (p. viii). These definitions reflect a key divide: Some emphasize AI's function as problem-solving machines, while others focus on AI's capability as intelligent agents. This divergence has shaped two schools of thought: AI as a tool and AI as an agent. The former argues that AI should and will remain a tool. From an instrumentalist standpoint, the current trajectory of AI development focuses on creating tools that assist humans rather than autonomous systems with consciousness (Brynjolfsson and McAfee 2014). Yet, AI's evolution brings the concept of agency to the forefront (Wooldridge and Jennings 1995). AI systems increasingly resemble rational agents that interact with, learn from, and adapt to their environment, and they even demonstrate the potential of developing humanlike mental and moral faculties (Anderson and Anderson 2007; Strachan et al. 2024). The agentic perspective has started to dominate the contemporary discourses on human–AI interaction, AI ethics, and the philosophy of intelligence and consciousness (Bickmore and Picard 2005; Floridi and Cowls 2019).
In this research, we define AI systems, based on the description in the EU AI Act (European Parliament 2023), as human-designed software systems (and possibly also hardware) that perceive their environment through data acquisition, interpret the collected data, reason on the knowledge derived from data, and decide the best action(s) to take to achieve a given goal in the physical or digital world. This broad definition encompasses algorithms that make forecasts based on extant data, chatbots that respond to users’ queries, and tools that augment human decision-making, among others.
Historical Overview of General-Purpose Technologies and Their Acceptance
A transformative technology like AI that is capable of reshaping industries and driving exponential productivity growth is often referred to as a “general-purpose technology” (Bresnahan and Trajtenberg 1995). Throughout industrial revolutions, technological advancements have unleashed the power of fundamental physical and mathematical laws—thermodynamics, electromagnetism, binary logic, and quantum mechanics—to revolutionize how we harness, transport, and utilize energy and information. Table 2 outlines widely acknowledged general-purpose technologies and their impacts on human society.
Table 2. Historical Overview of General-Purpose Technologies.
For any general-purpose technology to achieve widespread societal impact, it must be broadly accepted. Scholars have examined the adoption and diffusion of these technologies, noting both commonalities and differences (Agrawal, Gans, and Goldfarb 2023). Prior to AI, general-purpose technologies gained traction based on efficiency, reliability, and cost-effectiveness to enhance human productivity (Moser and Nicholas 2004). Although people had to learn to operate them, their outputs remained predictable and fully governed by human control. AI started similarly, but its advancement has outgrown mechanistic tools, setting it apart from other general-purpose technologies like steam engines and computers (Moravec 1998). It marks a shift from passive systems with transparent mechanisms to active decision-making entities whose high-level autonomy and black-box nature challenge conventional oversight. This transformation introduces new acceptance and diffusion dynamics beyond straightforward productivity gains.
Hence, the tool-versus-agent dichotomy of AI matters beyond academia; it carries important implications for real-world applications. Particularly for marketing, the perception of AI as a tool or an agent not only affects how individuals interact with an AI system itself but also shapes how they respond to various AI-powered marketing deliverables. Understanding this helps companies design and promote AI-driven products and services that align with user expectations. For example, Sedlakova and Trachsel (2023) examine how this perception changes interactions between patients and AI therapists, ethical concerns, and design priorities. In creative fields, AI-assisted art is perceived differently depending on whether AI is seen as a tool or as an agent (Dunn 2020). Recent research shows that when people see AI as an agent, they experience greater betrayal aversion and hold AI to higher standards of trust (Vanneste and Puranam 2025).
Put simply, humans inherently treat tools and agents differently; it is natural that AI acceptance factors vary based on this distinction. Drawing from various theories explaining AI acceptance and human–AI interactions, whose basic premises are that users perceive and interact with AI as a tool or as an agent, we examine the key factors, particularly engineerable AI characteristics, that shape AI acceptance.
AI Acceptance Drivers in a Unified Framework
AI Acceptance
The focal outcome measure, AI acceptance, is a composite construct consisting of a spectrum of positive responses toward AI as a substitute for human counterparts in certain tasks. It reflects the receptiveness, willingness, or decision to use AI systems. This construct includes both attitudinal and behavioral dimensions, following conventions in meta-analyses (Ceylan, Diehl, and Wood 2024; Schamp et al. 2023). Building on prior research (Fishbein and Ajzen 1975), we define attitudinal AI acceptance as an individual's favorable affective and cognitive responses toward an AI system or the perceived outcomes of using AI. This includes positive beliefs and perceptions regarding the capabilities or reliability of AI, as well as the extent to which users feel comfortable about relying on AI instead of humans to achieve a certain goal. According to the Theory of Planned Behavior (Ajzen 1991), a positive attitude toward AI serves as a precursor to the behavioral intentions and actual behaviors of using AI, which we refer to as behavioral AI acceptance, involving the decision to use AI and the action of initial usage.
The construct of AI acceptance is closely related to two concepts: AI adoption and AI appreciation, yet with nuances. First, behavioral AI acceptance emphasizes the choice to use AI and the initial adoption behavior before full integration into daily activities and task routines. Second, while some literature (e.g., Logg, Minson, and Moore 2019; Qin et al. 2025) defines the preference toward AI over humans as “AI appreciation,” we consider “AI acceptance” a more precise term: Semantically, appreciation highlights valuing or admiration, whereas acceptance focuses on the de facto decision and willingness to use AI; theoretically, acceptance situates our research in the broader scheme of technology acceptance literature.
Acceptance of AI as a Tool
When considering AI as a tool, people base their acceptance on its practical utility, similar to how they evaluate other technological tools designed to assist in performing tasks and achieving objectives. This perspective emphasizes cost–benefit analysis (Beach and Mitchell 1978), weighing the benefits of using AI (e.g., accuracy and efficiency) against associated costs (e.g., infrastructure investment, risks). Several established theories align with this perspective, most notably TAM and Diffusion of Innovation (DOI) theory. TAM posits that perceived usefulness and perceived ease of use drive technology acceptance (Venkatesh and Bala 2008; Venkatesh and Davis 2000). Researchers have applied TAM to explain AI acceptance and explored external antecedents that enhance these two perceptions, thereby increasing AI acceptance (Gursoy et al. 2019), across different contexts, such as health care (Panagoulias, Virvou, and Tsihrintzis 2024), arts (Gao et al. 2024), and legal services (Xu, Wang, and Lin 2022). Across the literature, key drivers of AI acceptance include output quality, task compatibility, and demonstrability. DOI theory outlines five key innovation characteristics that influence individuals’ decisions to accept or reject an innovation: relative advantage, compatibility, complexity, trialability, and observability (Rogers 1962). In the context of AI, acceptance increases when AI demonstrates a clear relative advantage over human alternatives, aligns with users’ needs and prior experiences, is straightforward to understand and use, requires minimal effort or cost to try out, and has observable benefits. DOI theory has been applied to AI acceptance in various settings, including corporate environments (Xu et al. 2024), education (Uzumcu and Acilmis 2024), and customer service (Syvänen and Valentini 2024).
Acceptance of AI as an Agent
When perceiving AI as an agent, people assess its acceptability based on its ability to think, plan, and act, much like how they would evaluate human agents (Gray, Gray, and Wegner 2007). Unlike the tool perspective, this agent perspective highlights people's perception of and attention to AI's agentic aspects, such as intentionality and autonomy (Waytz, Heafner, and Epley 2014). AI agents are evaluated through a more complex assessment of trust, control, and ethical implications (Vanneste and Puranam 2025), necessitating understanding human–AI interactions through the lens of AI as an agentic entity capable of certain degrees of social interaction and autonomous decision-making.
The CASA paradigm posits that humans display social responses to computers (Nass and Moon 2000). The underpinning theory is that human–computer interactions are shaped by mindless behaviors triggered by social cues (Langer 1992), whereby people apply social rules, norms, and expectations as they do in human–human interactions. This social response theory and the CASA paradigm have been applied to emerging AI technologies such as chatbots, robots, and virtual agents (Heyselaar 2023; Xu, Chen, and Huang 2022). According to the CASA paradigm, AI features such as natural language communication, interactivity, and anthropomorphized interfaces serve as social cues, inducing humans to perceive AI as agents and respond socially.
Conceptual Framework
While human acceptance of AI is inherently shaped by engineerable AI characteristics, it is also largely influenced by the interplay between the AI system itself, the nature of the task it performs, and the characteristics of the human user. To develop a more holistic understanding, we draw on the system usage framework developed by Burton-Jones and colleagues (Burton-Jones and Gallivan 2007; Burton-Jones and Straub 2006). Tailored to the AI domain, the framework conceptualizes AI system usage, including the initial decision to use it and subsequent adoption, as an activity involving three elements: (1) a user, the individual employing AI for a task; (2) a task, the function or goal-directed activity AI performs; and (3) an AI system, the technological artifact with features designed to support task execution. These elements align with the main dimensions of the UCD framework, which guides the design and development of interactive systems to meet user needs (International Organization for Standardization 2019; Salinas, Cueva, and Paz 2020). This framework likewise emphasizes the importance of specifying the user and organizational requirements (user characteristics) and understanding and specifying the context of use (task characteristics) when producing design solutions (AI characteristics). Thus, we develop a unified framework (Figure 1), outlining the key drivers of AI acceptance examined in this meta-analytic study, together with methodological controls.

Figure 1. Conceptual Framework.
AI Characteristics: Tool Perspective
From the tool perspective, we identify the following AI characteristic variables that align with the key constructs and antecedent factors in innovation and technology adoption.
Input transparency
Input transparency is the extent to which users understand the data an AI system processes to make decisions. A key advantage of AI over humans is its ability to analyze vast amounts of input data (Davenport and Ronanki 2018). The transparency of what data an AI system utilizes to generate its outputs clarifies its relative advantages over human equivalents. Also, input transparency helps users ensure that an AI system's decisions are based on inputs consistent with their objectives, values, and experiences, enhancing perceived compatibility as per DOI theory. Additionally, input transparency mitigates concerns about data privacy associated with using AI. The knowledge of the data an AI system collects and uses makes users feel more secure about the AI system's operations, fostering greater trust and acceptance (Open Data Institute 2024).
Process transparency
Process transparency refers to the clarity and interpretability of how AI systems function and generate outputs (e.g., recommendations, forecasts, and operations). Yet, AI often operates as a “black box” because the complexity of the underlying algorithms typically results in low interpretability (Lipton 2018). The lack of transparency undermines trust, driving the demand for explainable AI as a solution (Rai 2020; Von Eschenbach 2021). A clearer understanding of how an AI tool works can reduce perceived complexity and uncertainty, which in turn enhances perceived ease of use and trust, consequently increasing acceptance (Liu 2021; Vössing et al. 2022). Therefore, increasing process transparency is likely to drive higher acceptance of AI.
Reliability
Reliability refers to the extent to which an AI system's performance, validity, and other measurable outcomes are consistent and can be anticipated (Zhou et al. 2023); in statistical terms, reliability implies low error margins in outcomes. Intuitively, reliability enhances acceptance for two reasons. First, the high reliability of an AI system signals consistent performance, rendering it less uncertain and more controllable from users’ standpoint (Rahwan et al. 2019). Second, higher reliability makes it easier to understand an AI system's strengths and weaknesses, analyze costs and benefits, and decide whether to use the tool. However, narrow error margins can imply systematic errors. Research on algorithm aversion suggests that people are averse to AI because algorithms err systematically while humans err randomly, leading to the false belief that human judgment allows for near-perfect outcomes (Dietvorst, Simmons, and Massey 2015). Thus, we consider the impact of reliability on AI acceptance uncertain.
AI Characteristics: Agent Perspective
From the perspective of AI being accepted as a social entity and an agent, we identify additional AI characteristics that influence acceptance, distinct from the tool perspective.
Anthropomorphism
Anthropomorphizing AI with humanlike traits such as a humanoid interface, gendered voice, or personality is a widely used strategy to trigger social responses toward AI. It is one of the most studied characteristics in the CASA paradigm (e.g., Belanche et al. 2021; Wang 2017; Xu, Chen, and Huang 2022). When humans interact with AI socially, they are influenced by interpersonal psychology principles, such as similarity-attraction theory (Berscheid and Hatfield 1969). The more AI resembles humans, the more likely it is to be perceived positively (Glikson and Woolley 2020). Existing literature suggests that anthropomorphism can foster a sense of social presence and competency, thereby increasing users’ trust and positive evaluations of AI (Waytz, Heafner, and Epley 2014; Zhang, Pentina, and Fan 2021), as well as its acceptance (Kim, Chen, and Zhang 2016; Stroessner and Benitez 2019). However, anthropomorphism may backfire. While perfect implementations of human-mimicking AI can elicit favorable social responses, real-world ersatz versions might not, because a lack of verisimilitude increases the salience of “nonhumanness” (Nass and Moon 2000; Reeves and Nass 1996). This nonhumanness can evoke psychological discomfort, as explained by the “uncanny valley” effect (Ho and MacDorman 2010; Mori 1970). Therefore, it is not straightforward to hypothesize whether anthropomorphism increases or decreases AI acceptance.
Role: advisory versus performative
When AI partakes in human activities as a social entity, we need to consider its role, a feature rarely ascribed to a tool. AI agents typically fulfill two types of roles: advisory and performative (Jussupow, Benbasat, and Heinzl 2020; Nissen and Sengupta 2006). The role determines whether humans or AI dominate the execution and outcomes of a system. A performative AI system independently carries out tasks by collecting data and making and executing decisions, whereas an advisory AI system merely provides recommendations or helps users. With the salience of AI agency, people pay attention to decision-making authority and control. A performative AI system may largely reduce a user's sense of autonomy (André et al. 2018), evoking unease about losing control to the AI system (Legaspi, He, and Toyoizumi 2019). Moreover, people view performative AI as supplanting them, thus threatening their feelings of competence and self-worth (Granulo, Fuchs, and Puntoni 2019). Conversely, an advisory AI is more likely to be viewed as complementary, enhancing rather than replacing human decision-making (Palmeira and Spassova 2015). Taken together, these factors suggest that a performative (vs. advisory) role negatively influences AI acceptance.
AI Characteristics: Dual Perspectives
Certain AI characteristics are relevant both when AI is seen as an agentic entity and when it is seen as an inanimate tool. They may have convergent or divergent effects on AI acceptance; investigating these factors from a different angle provides a fuller picture of AI acceptance.
Capability
Capability is a key trait influencing AI acceptance both as a tool and as an agent, though with some nuances. People use AI to achieve task performance superior or comparable to that of human counterparts but with less effort. With this motivation, AI acceptance is contingent on whether it can help users reach their goals with greater accuracy and efficiency (in short, AI capability). From the perspective of AI as a tool, capability is an essential determinant of perceived usefulness and relative advantage, both of which drive acceptance, as suggested by TAM and DOI theory. When AI is seen as an agent, its high capability signals greater expertise, effectiveness, and reliability in assisting decision-making or performing tasks autonomously; high capability builds trust and confidence (McAllister 1995), which in turn makes people more likely to accept AI as an agent (Glikson and Woolley 2020). Previous literature has shown that when AI is presented as having higher accuracy or when AI is equipped with better ability than humans in certain tasks (e.g., financial market analysis, guesstimation questions), people are more likely to use AI (Castelo, Bos, and Lehmann 2019; Longoni and Cian 2022; Qin et al. 2025). Thus, both perspectives unequivocally point to the positive effect of capability on AI acceptance.
Expertise scope: specialist versus generalist
Some AI systems are designed for specific domains (e.g., financial consultation or disease diagnosis), while others, like ChatGPT, function as generalists capable of handling diverse tasks. We refer to this distinction as AI's expertise scope: general-purpose AI (generalist) versus domain-specific AI (specialist). As a tool, general-purpose AI has higher versatility, adaptability, and pervasiveness across various tasks. According to TAM, users value usefulness and ease of use (Venkatesh and Bala 2008); AI capable of addressing a wide range of needs without requiring learning and handling multiple systems is naturally seen as more useful and easier to use. Also, AI that serves as general-purpose technology tends to be pervasive due to its versatile functions (Stackpole 2024). The ubiquitous presence normalizes its utilization, which increases the subjective norms of accepting this tool (Venkatesh and Davis 2000). Therefore, we expect that people are more likely to accept a general-purpose (vs. domain-specific) AI tool. From the agent perspective as opposed to the tool perspective, we expect a preference reversal. When turning to an agent, people expect deep expertise in the domains where they seek advice or delegate tasks. This is analogous to people seeking specialist professionals in a particular area (Zambrana and Zapatero 2021). For instance, an endocrinologist or a divorce lawyer is typically perceived as more competent in the subject matter than a general practitioner or generalist lawyer. Likewise, a generalist AI may be seen as lacking the depth of knowledge required for highly complex or critical tasks, compared with a specialist AI that is fine-tuned for a specific purpose. As task complexity increases, depth prevails over breadth and people favor specialization (Anderson 2012). Therefore, we expect higher acceptance of specialist AI over generalist AI agents.
Cost
We consider the influence of utilization cost—both direct financial expenses to employ AI tools and resource trade-offs (e.g., electricity costs and staffing needs). According to the DOI model, trialability—the ability to experiment with a technology at minimal cost—enhances acceptance. In neoclassical economics, people favor free tools as cost imposes negative utility, consequently reducing perceived benefits and lowering acceptance. Thus, from the tool perspective, we expect a straightforward negative relationship between cost and AI acceptance. We anticipate a reversal in the effect of cost when AI is viewed as a social agent rather than an inanimate tool, as additional psychological and social factors beyond utility shape acceptance. First, according to equity theory (Adams 1965), people prefer fairness and reciprocity in social interactions, meaning that free services from an agent may disrupt the perceived balance of exchange. In this context, cost helps establish an explicit contract between users and AI. Second, people tend to devalue free labor, associating unpaid work with low commitment, expertise, and professionalism (Rezvani and Hedges 2012; Rix 2020). Money priming often enhances the perceptions of competence (Gasiorowska et al. 2016; Zaleskiewicz, Gasiorowska, and Vohs 2017), and people are willing to pay more for tasks involving expertise (Godek and Murray 2008; Hertzum 2014; Nasr Bechwati 2011), as they equate higher costs with higher skill and performance. When AI acts as an agent offering advice or performing tasks, people may similarly associate higher costs with greater reliability and competence. Thus, we expect a positive effect of cost on AI acceptance as an agent.
Human involvement
Human involvement refers to the extent to which users participate in or oversee the processes of an AI system, from providing input data to manually adjusting its operation. AI systems vary in their degree of required human interaction: Some require only a single command to initiate the process, while others involve back-and-forth input and feedback loops throughout the process of solving focal tasks. We expect the level of human involvement to influence AI acceptance. For AI tools, users typically seek efficiency, automation, and reduced effort (Davenport and Kirby 2015; Onnasch et al. 2014; Paschen, Pitt, and Kietzmann 2020). From this perspective, greater human involvement, such as manual oversight or interaction, adds complexity and diminishes the advantages of AI. Imagine a robot vacuum requiring manual configuration of its cleaning schedule and map versus one that operates automatically via camera detection; which one would you prefer? The need for human intervention makes AI less effective as a tool that provides automation and reduces human effort. Therefore, higher human involvement decreases the perceived ease of use and effectiveness, key determinants of AI acceptance (Gursoy et al. 2019). As a result, higher human involvement is likely to lower AI acceptance. In contrast, we expect human involvement to have a divergent effect on the acceptance of AI as an agent. This divergence arises because involvement with an AI agent denotes interactivity. When users engage in back-and-forth inputs and feedback loops, the interaction mirrors human turn-taking in conversations, which has been shown to positively influence the perception and acceptance of AI (Banks 2019; Murray, Rhymer, and Sirmon 2021; Zhao et al. 2025). Also, when people attend to the agentic aspects of AI, autonomy and control become salient concerns. High interactivity and engagement help users maintain an internal locus of control, fostering positive feelings about the agent (Shneiderman and Plaisant 2010). High human involvement may also lead users to attribute to AI human mental faculties, traits often considered lacking in AI (Bigman and Gray 2018; Gray, Gray, and Wegner 2007). Previous research shows that involvement in an AI's learning phase enhances users’ feeling of control, perceived understanding, and personalization (Sieger et al. 2022). These perceptions often increase AI acceptance (André et al. 2018; Laitinen and Sahlgren 2021; Liu and Tao 2022). Thus, high human involvement, while leading to reduced automation from the tool perspective, implies interactivity from the agent perspective, leading to opposite effects on AI acceptance.
Task Characteristics
Context: professional versus consumer
AI is ubiquitously employed in both consumer and professional domains. Consumers use AI to select products, navigate routes, control smart devices, and interact with virtual assistants, among other tasks. Professionals use AI to automate administrative tasks, assist in medical diagnoses, make judicial decisions, enhance financial forecasting, and so on. These differing applications lead to context-dependent variations in AI acceptance. First, the bearer of decision consequences differs between the two contexts. Consumers generally make self-impacting decisions, whereas professionals make decisions affecting others. Research indicates that people employ different decision-making strategies when choosing for themselves than when choosing for others (Ritov and Baron 1990). Second, while consumer decisions often do not require demonstrable, rigorous explanations, professional decisions demand clear justifications, as professionals are held accountable for these outcomes. In social and organizational contexts, the justifiability and interpretability of AI-assisted decisions become crucial (Brkan 2019). Finally, professional decisions often have more consequential ramifications than consumer decisions. In high-stakes scenarios, people demand greater accuracy, transparency, and certainty (Tversky and Kahneman 1974; Yeomans et al. 2019). Given these distinctions, we expect that the task context in which AI is used significantly impacts its acceptance and the effects of its various drivers.
Moral relevance
Moral reasoning is often seen as a core human mental faculty, raising skepticism about AI’s capacity to understand human ethics and make moral judgments (Searle 1980; Wallach and Allen 2009). First, decisions requiring moral judgments are inherently complex and often pose dilemmas where determining right or wrong is not necessarily a matter of utility calculation. Second, morality is rooted in collective intentionality, cultural learning, and shared norms, which distinguish humans from other species (Dawkins 1978). Research shows that people perceive machines as lacking emotional and empathetic capabilities (Haslam et al. 2008), the qualifications essential for moral judgments (Cameron, Payne, and Doris 2013; De Waal 2006). Consequently, moral reasoning is perceived as an exclusively human domain (Lee 2018). Dietvorst and Bartels (2022) further show that people object to AI making morally relevant trade-offs because AI is believed to follow consequentialist decision-making, which is often criticized by those in favor of deontological morality based on universal ethical principles. Given these concerns, we expect AI acceptance in moral domains to be significantly lower.
Privacy
As with other information technology, privacy concerns play a critical role in consumers’ assessment of AI applications. We consider the extent to which a focal task involves the handling of sensitive personal information (Malhotra, Kim, and Agarwal 2004). Research suggests that privacy concerns can lead to negative reactions toward algorithms (Araujo et al. 2020) and voice-based digital assistants (Vimalkumar et al. 2021). Using an AI system often requires personal data input, yet when and how this information is stored, processed, or shared remain opaque to end users. Such uncertainty may trigger reluctance to disclose information to the system (Acquisti, Brandimarte, and Loewenstein 2015), ultimately reducing the likelihood of accepting an AI system. Thus, we expect lower AI acceptance in tasks involving sensitive personal information, as people experience more concerns about privacy and data security.
Societal externality
Beyond personal privacy and moral considerations, societal externality is another critical factor that influences AI acceptance. We define societal externality as the potential for widespread impact, and especially unfavorable consequences for others beyond the decision-makers or AI users themselves. When a task carries high societal stakes, people tend to be more conservative and exhibit status quo bias (Samuelson and Zeckhauser 1988). In tasks that potentially impose societal externality, we expect people to have lower acceptance of AI (i.e., novel solutions) and to favor human counterparts (i.e., conventional practices). Previous research suggests that, in high-stakes tasks, people show lower trust in automation and a stronger preference for human oversight (Burton, Stein, and Jensen 2020; Cummings 2004). This is particularly pronounced when AI is applied to tasks where poor implementation or misuse could lead to social injustices, economic instability, or threats to democratic integrity (Eubanks 2018; Ferguson 2017; Pasquale 2016).
User Characteristics
Individual differences significantly shape AI acceptance, as users bring varying experiences, attitudes, and cognitive biases to their interactions with AI. We consider three demographic factors—gender, age, and region—as they have been shown to systematically influence technology acceptance and adoption. Gender differences in AI acceptance often stem from variations in risk perception, trust, and technology-related self-efficacy. Studies demonstrate that men and women show systematic differences in trust and willingness to adopt AI-based technologies (Chalutz Ben-Gal 2023). Age also plays a role, with younger individuals being more receptive to emerging technologies (Charness and Boot 2009; Czaja et al. 2006). We also expect regional differences as they reflect broader cultural attitudes, social norms, regulatory policies, and AI development levels. Additionally, we consider whether users are university students because we expect that, as digital natives, they tend to be more adaptable to AI, whereas nonstudent populations may be more resistant due to established work practices and professional norms (Selwyn 2007; Vanneste and Puranam 2025). These four user characteristics (gender, age, region, and student status) are also in line with common practices in meta-analysis literature for analyzing demographic variables of study samples (e.g., Cadario and Chandon 2020; Ceylan, Diehl, and Wood 2024; Khamitov, Wang, and Thomson 2019; Schamp et al. 2023).
Methodological Controls
We include several study characteristics as methodological controls. First, we examine whether the task to be delegated to, or accomplished with the assistance of, an AI system (or a human equivalent) is incentivized. Incentive-compatible experiments encourage respondents to put in more effort for optimal performance by offering extra rewards contingent on task outcomes. Existing literature suggests that both economic incentives (e.g., financial rewards for accuracy) and social incentives (e.g., reputation, social norms) can reduce reluctance to use algorithmic aids in various tasks (e.g., Alexander, Blinder, and Zak 2018; Önkal et al. 2009). Therefore, we anticipate that AI acceptance will be higher in incentivized tasks. In terms of experimental designs and settings, we differentiate within-subjects and between-subjects designs. We expect a systematic difference between experiments where both AI and human options are presented to participants (within-subjects design) and those where either an AI or a human option is presented (between-subjects design). Typically, effect sizes from within-subjects studies are expected to be larger (Borenstein et al. 2009). We also distinguish between hypothetical and real-world scenarios of AI usage. Finally, we consider the recency of publication: as AI becomes increasingly indispensable in daily life, people grow more accustomed to it, and AI acceptance tends to increase over time.
Methodology
Data Collection
Literature search
We adopted three strategies to collect the primary articles for our meta-analysis. Detailed information regarding our search strategy is included in Web Appendix A. First, we conducted a comprehensive literature search in the database EBSCO Business Source Complete, which includes major business journals and those in related disciplines such as human–computer interaction, psychology, and sociology. To capture articles studying human acceptance of AI, we used four groups of search terms (with an “OR” logic within each group and an “AND” logic between groups): (1) “algorithm,” “artificial intelligence,” “AI,” “machine learning”; (2) “response,” “acceptance,” “aversion,” “appreciation,” “preference,” “adoption,” “usage”; (3) “consumer,” “customer,” “user,” “human,” “people”; (4) “experiment,” “survey,” “empirical.” Second, we checked the forward and backward references of three systematic reviews on algorithm aversion (Burton, Stein, and Jensen 2020; Jussupow, Benbasat, and Heinzl 2020; Mahmud et al. 2022). Last, we conducted ad hoc searches to identify recent articles not covered by the previous two strategies. Our search yielded an initial set of 2,488 articles.
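Combined, these groups correspond to a boolean query of the following form (illustrative; the exact syntax and field codes used in EBSCO are documented in Web Appendix A):

```
("algorithm" OR "artificial intelligence" OR "AI" OR "machine learning")
AND ("response" OR "acceptance" OR "aversion" OR "appreciation" OR
     "preference" OR "adoption" OR "usage")
AND ("consumer" OR "customer" OR "user" OR "human" OR "people")
AND ("experiment" OR "survey" OR "empirical")
```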
Inclusion criteria
We screened the articles in the initial set and included those studies that met the following criteria: (1) experimental or quasi-experimental studies that compare human acceptance of AI versus human agents; (2) dependent variables measuring either the respondents’ attitudinal or behavioral responses of acceptance; (3) publication in a high-quality peer-reviewed outlet that is within the first two quartiles of its corresponding discipline based on the impact factor; and (4) enough information included to enable us to calculate common effect sizes. Our final sample consisted of 61 articles with 287 effect sizes extracted from 136 studies. The final set of articles, studies, and effect sizes included in our meta-analysis is reported in Web Appendix B.
Coding Scheme
Two coders first coded a small subsample (54 effect sizes) of the collected articles independently. We then compared the coding results between the two coders. All disagreements were resolved after discussions. After that, one coder continued coding the remaining articles based on a mutually agreed-upon coding scheme refined through the preliminary coding. We coded each study on the following variables: AI characteristics, task characteristics, user characteristics, and methodological control variables. We also extracted effect sizes or statistics that could allow us to calculate effect sizes. Table 3 summarizes the coding scheme. Full details and examples for each coding variable are included in Web Appendix C.
Table 3. Summary of Coding Scheme for the Variables Used in the Meta-Analysis.
Note: Mean values are reported for continuous variables and numbers of observations for binary variables.
Meta-Analytical Strategy
Measure of effect size
We use the standardized mean difference, Cohen's d, as the measure of effect size (Cohen 2013). It is calculated as the mean difference between the outcome measures of the treatment and control groups divided by the pooled standard deviation. In our study, the human equivalents in the experiments from which effect sizes are extracted serve as the baseline control. The effect size is calculated as

$$d = \frac{M_{\mathrm{AI}} - M_{\mathrm{Human}}}{SD_{\mathrm{pooled}}}, \qquad SD_{\mathrm{pooled}} = \sqrt{\frac{(n_{\mathrm{AI}}-1)\,s_{\mathrm{AI}}^{2} + (n_{\mathrm{Human}}-1)\,s_{\mathrm{Human}}^{2}}{n_{\mathrm{AI}} + n_{\mathrm{Human}} - 2}},$$

where M, s, and n denote each condition's mean, standard deviation, and sample size, respectively.
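To illustrate the computation, a minimal R sketch with hypothetical summary statistics follows; the variance formula is the conventional large-sample approximation used to weight effect sizes in meta-analysis:

```r
# Hypothetical two-group summary statistics (AI vs. human-equivalent condition)
m_ai <- 4.1; s_ai <- 1.2; n_ai <- 150
m_hu <- 4.5; s_hu <- 1.1; n_hu <- 150

# Pooled standard deviation and Cohen's d (negative d = lower AI acceptance)
sd_pooled <- sqrt(((n_ai - 1) * s_ai^2 + (n_hu - 1) * s_hu^2) /
                    (n_ai + n_hu - 2))
d <- (m_ai - m_hu) / sd_pooled

# Large-sample sampling variance of d, used as the known Level 1 variance
var_d <- (n_ai + n_hu) / (n_ai * n_hu) + d^2 / (2 * (n_ai + n_hu))
round(c(d = d, var_d = var_d), 3)
```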
Hierarchical linear model specification
We start with the examination of the overall response of humans to AI with the average meta-analytical effect, or the intercept-only model. In meta-analyses where effect sizes are nested in experiments that are nested within a given article, data generally possess a multilevel structure. This hierarchical composition of the data renders conventional regression approaches such as ordinary least squares error-prone (Krasnikov and Jayachandran 2008). To account for the nested structure of our data, we use a three-level hierarchical linear model (HLM) to regress the dependent effect sizes (Assink and Wibbelink 2016; Konstantopoulos 2011). As an extension of conventional two-level HLM, a three-level model adds a cluster effect on the original two levels (i.e., participants at Level 1 and studies at Level 2), capturing both within-study (Level 2) and between-study (Level 3) heterogeneity (Cheung 2014). The model specification is as follows:

$$ES_{ij} = \beta_0 + u_{(2)ij} + u_{(3)j} + e_{ij}.$$

For an effect size $ES_{ij}$, $\beta_0$ is the meta-analytic effect size estimated across all studies; $u_{(2)ij}$ and $u_{(3)j}$ denote the Level 2 and Level 3 heterogeneity, respectively; and the variance of $e_{ij}$ is the known sampling variance of the $i$th effect size in the $j$th study.
Next, we consider the effect of each engineerable AI factor. To do so, we first estimate one univariate meta-regression for each predictor x. These univariate analyses provide benchmark values against which to compare the estimates obtained in the full multivariate model. As all our AI-related predictors are binary (e.g., anthropomorphism: 0 for absent, 1 for present), each univariate model estimates the β coefficient corresponding to the effect of the present level of the binary predictor, relative to the absent level, without any covariates.
With univariate analysis, we are able to compare the influence of each AI characteristic on human acceptance; however, multivariate models generally improve estimation with better statistical properties as well as reduce the risk of bias, such that a significant result in univariate analyses may not hold using the multivariate model (Jackson, Riley, and White 2011). Therefore, we estimate a full model with all AI-characteristic factors and contextual, population, and methodological control variables. The parameter estimates for these factors are denoted as βk.
We estimate all mixed-effects, three-level meta-analytic models using maximum likelihood with the rma and rma.mv functions in the metafor R package provided by Viechtbauer (2010).
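A minimal sketch of the intercept-only specification in metafor follows; the data frame dat and its column names (yi for Cohen's d, vi for sampling variances, article_id, es_id) are hypothetical placeholders:

```r
library(metafor)

# Three-level, intercept-only model: effect sizes (es_id) nested within
# articles (article_id), with known sampling variances vi at Level 1
res0 <- rma.mv(yi, vi,
               random = ~ 1 | article_id/es_id,
               data   = dat,
               method = "ML")
summary(res0)  # the intercept corresponds to the pooled effect beta_0
```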
Results
AI Acceptance Versus Rejection
The meta-analysis includes 136 studies from 61 published manuscripts, which provide 287 total effect sizes based on 119,358 individual participants. The aggregated empirical evidence shows a notable distribution of the documented effects, differing with respect to their directions and magnitudes. The effect sizes range from −2.040 to 1.530, with the majority of observations between −.425 (the first quartile) and .260 (the third quartile). The sizable I² statistic (93.3%) indicates a high level of heterogeneity, suggesting that the variability of the effect sizes is caused by true heterogeneity rather than sampling errors. Consistently, Cochran's Q-test for heterogeneity is significant (Q = 4,260.25, p < .001).
The intercept-only three-level model yields a significantly negative main effect of −.150 (95% CI = [−.220, −.080]), indicating that people generally respond more negatively to AI than to its human equivalent. When we divide the outcome measure into attitudinal acceptance (number of effect sizes = 127) and behavioral acceptance (number of effect sizes = 160), we observe comparable negative effects: d = −.122, 95% CI = [−.212, −.032] for attitudinal measures and d = −.174, 95% CI = [−.275, −.074] for behavioral measures. There is no statistically significant difference between attitudinal and behavioral acceptance (t = .334, p > .05).
Influence of AI Characteristics
We use three-level hierarchical models to regress people's acceptance of AI on (1) each individual AI characteristic, (2) all AI characteristics combined, and (3) the full set of AI–task–user variables with methodological controls. This follows the same logic as moderator analyses in other meta-analyses. Summary statistics and bivariate correlations for the included moderators are detailed in Web Appendix D. Table 4 presents the estimation results. The univariate and multivariate analyses align in terms of the directions of AI characteristics’ effects, with some differences in magnitude and statistical significance. Because univariate analyses are prone to biased estimates and confounds due to the omission of other relevant variables (Jackson, Riley, and White 2011), we interpret the full multivariate model results as our primary empirical evidence.
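For concreteness, the univariate and multivariate meta-regressions can be sketched as follows; the moderator column names are hypothetical, and the full model would additionally include the task, user, and methodological controls in the same formula:

```r
# Univariate meta-regression: one binary AI characteristic at a time
uni <- rma.mv(yi, vi, mods = ~ capability,
              random = ~ 1 | article_id/es_id, data = dat, method = "ML")

# Multivariate model with all AI characteristics entered jointly
# (task, user, and methodological controls enter the formula the same way)
multi <- rma.mv(yi, vi,
                mods = ~ capability + input_transparency + process_transparency +
                  reliability + anthropomorphism + performative_role +
                  generalist_scope + human_involvement + cost,
                random = ~ 1 | article_id/es_id, data = dat, method = "ML")
summary(multi)
```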
Table 4. Results of Univariate and Multivariate Analysis for AI Characteristics.
The findings show that people are significantly more likely to accept AI when it is perceived as highly capable (β = .475).
Regarding the task, user, and methodological control variables: people are less likely to accept AI in tasks involving moral judgment and reasoning (β = −.204).
Robustness Checks
We performed several robustness checks to ensure the stability and reliability of our meta-analysis results. First, we checked for multicollinearity. All variance inflation factors are below 3, with a mean of 1.647, ruling out concerns about multicollinearity among the variables (see details in Web Appendix E). Next, we reran the analysis using a different effect size measure, weighted Hedges's g, which corrects for small-sample biases (Hedges and Olkin 1985). The estimation results are consistent in both direction and significance with the analyses using Cohen's d. We also conducted a series of sensitivity tests on the moderators’ effects by testing different model specifications: three-level HLM adding task factors, user factors, and methodological controls consecutively. In particular, we examined the influence of variables with low variation (i.e., process transparency and cost) by removing them one by one and altogether from the model. Then, to assess the sensitivity of our estimates to individual effect sizes, particularly for moderators with low representation, we conducted a leave-one-out influence analysis (Viechtbauer 2010). Finally, we conducted a robustness check using the least absolute deviation estimator to assess the robustness of the findings against outliers, following the approach reported in Edeling and Fischer (2016). The models and estimation details for robustness checks can be found in Web Appendix F. All alternative models yield similar directional patterns for the moderators’ effects, except for the variable of cost. The instability of the estimated cost effect is also indicated in the leave-one-out analysis, where a few observations drive the coefficient of this variable.
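Two of these checks are easy to sketch in R under the same hypothetical column names as above (with n_ai and n_human denoting condition sample sizes):

```r
# Hedges's g: small-sample bias correction applied to Cohen's d
J <- 1 - 3 / (4 * (dat$n_ai + dat$n_human - 2) - 1)  # correction factor
dat$gi  <- J * dat$yi     # bias-corrected effect sizes
dat$vgi <- J^2 * dat$vi   # corresponding sampling variances

# Manual leave-one-out influence analysis for the three-level model
loo <- sapply(seq_len(nrow(dat)), function(k) {
  fit <- rma.mv(yi, vi, random = ~ 1 | article_id/es_id,
                data = dat[-k, ], method = "ML")
  coef(fit)[1]  # pooled estimate with the kth effect size removed
})
range(loo)  # how far the pooled effect moves across single deletions
```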
In addition, we assessed the concerns for publication bias. We started by calculating the fail-safe N, which estimates the number of unpublished null effect sizes needed to render the observed effects insignificant at the level of α = .05 (Rosenthal 1979). Then, we investigated the asymmetry in the funnel plot with Egger's test (Egger, Smith, and Phillips 1997). For any asymmetry, we used the trim-and-fill method to test how the pooled effect size would change after accounting for unpublished results (Duval and Tweedie 2000).
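These diagnostics correspond to standard metafor helpers; a sketch follows (the functions shown are defined for two-level rma objects, so the model is refit with rma() for this purpose):

```r
# Two-level random-effects refit for the publication-bias diagnostics
res_re <- rma(yi, vi, data = dat, method = "REML")

fsn(yi, vi, data = dat, type = "Rosenthal")  # fail-safe N
regtest(res_re)                              # Egger's test for funnel asymmetry
trimfill(res_re)                             # trim-and-fill adjusted pooled effect
funnel(res_re)                               # visual funnel-plot inspection
```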
In brief, the series of robustness checks confirmed the overall stability of our model and results.
Developing and Marketing AI: Roadmap of User-Centric Design
Recent decades have witnessed the transformation of AI, exemplified by the shift from awkward machine translation to large language models foreshadowing “sparks of artificial general intelligence” (Bubeck et al. 2023). AI as a general-purpose technology can benefit humans ubiquitously only if we are willing to accept it and subsequently adopt it in various situations. However, given the overall aversion toward AI, businesses cannot assume that people will readily embrace their AI-powered products. Thus, a user-oriented design approach is needed.
Our meta-analysis integrates relevant AI, task, and user characteristics, with an emphasis on engineerable AI features—those modifiable by practitioners to enhance acceptance. To provide actionable recommendations for designing, developing, and promoting AI, we grouped the examined factors and developed a roadmap within the UCD framework (Figure 2).

Figure 2. AI, Task, and User Characteristics in a User-Centered Design Roadmap.
The UCD framework starts with understanding the context in which AI is used, which is largely shaped by the task characteristics examined in our meta-analysis. First, as with other products and services, companies need to consider the difference between business-to-consumer and business-to-business contexts of use. While the absolute level of AI acceptance does not significantly differ between professional and consumer tasks, a split-sample analysis (i.e., testing three-way interactions between this task variable and each AI variable) reveals systematic differences in how users in these contexts evaluate AI features. We report the detailed estimation results in Web Appendix H; here, we highlight the main patterns. In consumer contexts, acceptance of AI is more malleable in response to the engineerable AI features, including capability, input transparency, anthropomorphism, role, and cost of AI; one exception is generalist AI, which exhibits a more pronounced positive effect among professionals. These findings underscore the importance of context-aware AI design and marketing strategies. When balancing trade-offs in design priorities and budget allocation, practitioners should tailor AI features and invest resources in those with the greatest influence in each context. In critical domains that concern policy makers, users are more likely to reject AI for tasks that intrude on morality and personal privacy than for tasks imposing broader societal risks. Rather than attempting to persuade individuals to adopt AI, practitioners may find greater success by securing endorsements from decision-makers responsible for societal-level decisions. Once AI usage becomes a social norm across various contexts, individual acceptance may follow, even in morally sensitive and privacy-related applications.
Next, UCD requires the consideration of users’ heterogeneous needs, which are influenced by user characteristics. In the meta-analysis, we examine the effect of basic demographic and geographic factors; due to data limitations, we are unable to explore the impact of behavioristic and psychographic factors in greater depth. A key insight for practitioners is that female users are not inherently more averse to AI technology, contrary to conventional wisdom and some prior literature (e.g., Stein et al. 2024; Tang et al. 2025). Consequently, such marketing practices as algorithmic ad bidding that underprioritize or overlook potential female users for AI-driven products and services are not recommended. Instead, inclusive approaches that recognize diverse user segments should be emphasized to maximize AI acceptance.
The core of our meta-analysis is the engineerable AI features, which directly guide design solutions (i.e., attributes of AI artifacts). To enhance the interpretability of our findings, we translate them into the common language effect size introduced by McGraw and Wong (1992). This metric denotes the probability that a score randomly sampled from one distribution (i.e., the AI condition) will be larger than a randomly sampled score from another distribution (i.e., the baseline comparison). Accordingly, we provide the following recommendations. First, while anthropomorphizing AI has gained popularity, our findings suggest that companies should prioritize enhancing AI's inherent capabilities over humanlike interfaces. Improving AI capability and clearly communicating its advantages over human counterparts increases acceptance by 13.15%, making it the most effective strategy for fostering acceptance. Also, transparency regarding data collection and usage in AI-enabled products and services enhances acceptance by 4.03%, whereas understanding how AI processes data to generate outcomes does not significantly impact people's attitudes and behaviors toward AI. Next, policy makers seeking to facilitate the safe, ethical, and widespread use of AI technologies should prioritize guidelines that require AI developers to disclose the sources and types of data their systems use. While process transparency is important, it matters to a lesser extent. Stringent regulations on transparency regarding how AI's underlying algorithms work may thwart the development of capable yet incomprehensible AI systems. Policy makers need to balance the trade-off between transparency and capability when regulating and promoting the effective use of AI. Then, we recommend that firms introduce to the market AI products with advisory functionalities rather than performative capabilities, as advisory AI is 9.36% more likely to be accepted. The widespread adoption of ChatGPT exemplifies this principle, not only due to its high capability but also because it primarily serves as an information and advice provider rather than an autonomous task performer. This ensures that humans retain decision-making authority over AI-generated outputs. Additionally, the success of ChatGPT and other similar platforms echoes another recommendation revealed in our findings: developing general-purpose AI instead of domain-specific AI. Users are 7.79% more likely to accept AI with broad, generalist expertise than one specialized in a narrow domain. Last, businesses need to evaluate designs against requirements and iteratively improve AI products or services before market deployment.
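The conversion itself is simple: a d of .475 (the capability estimate reported above) maps to Φ(d/√2) ≈ .6315, that is, 13.15 percentage points above the 50% chance level. A one-line R sketch:

```r
# Common language effect size (McGraw and Wong 1992): probability that a random
# draw from the AI condition exceeds a random draw from the human baseline
cles <- function(d) pnorm(d / sqrt(2))

cles(0.475) - 0.5  # ~.1315: the 13.15% acceptance gain reported for capability
```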
Theoretical Implications and Future Research
Our meta-analysis reveals a small yet robust negative response toward AI compared with its human alternatives. This finding contributes to the scholarly debate surrounding AI aversion versus appreciation (e.g., Dietvorst, Simmons, and Massey 2015; Granulo, Fuchs, and Puntoni 2019; Logg, Minson, and Moore 2019; Longoni, Bonezzi, and Morewedge 2019). Our findings support the aversion view overall but also show that acceptance of AI has increased over time. More importantly, the heterogeneity revealed in our analysis indicates that people's acceptance depends on various factors: AI, user, and task characteristics.
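Pooled estimates and heterogeneity statistics of this kind are conventionally obtained via random-effects meta-analysis. The following is a minimal sketch of DerSimonian-Laird pooling with toy numbers; it is illustrative only and does not reproduce our data or estimation pipeline.

```python
# DerSimonian-Laird random-effects pooling: estimate the pooled effect,
# its standard error, and the I^2 heterogeneity statistic.
import numpy as np

def dersimonian_laird(effects, variances):
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                          # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)      # fixed-effect estimate
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_star = 1.0 / (variances + tau2)            # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, se, i2

# Toy data (hypothetical): a small negative pooled effect with
# substantial between-study heterogeneity.
pooled, se, i2 = dersimonian_laird([-0.30, -0.10, 0.20, -0.40],
                                   [0.02, 0.03, 0.05, 0.04])
print(round(pooled, 3), round(se, 3), round(i2, 2))
```

A nonzero between-study variance and a high I^2 are precisely what motivate moderator analyses of AI, user, and task characteristics.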
A core theoretical contribution of this meta-analysis is the dual-perspective framework distinguishing acceptance of AI as a tool from acceptance of AI as an agent. The tool perspective builds on foundational theories such as TAM and diffusion of innovations (DOI) theory, through which the literature examines how various AI characteristics influence perceived utility, ease of use, or barriers to adoption. The agent perspective investigates AI acceptance shaped by features such as anthropomorphism, autonomy, and role (the traits people evaluate in social entities). Our study highlights the importance of viewing AI as an agent, showing how agentic qualities alter user perceptions in ways not captured by traditional models. This is particularly relevant as AI development advances toward agentic AI, with systems increasingly designed to reason, decide, and interact with users autonomously. The two perspectives propose distinct mechanisms driving AI acceptance and help explain inconsistencies in prior research. Future research could directly examine how perceiving AI as a tool versus as an agent shapes acceptance, providing further theoretical insight and empirical support. Moreover, as AI technology continues advancing, discourse on the social and ethical dimensions of AI grows more prevalent; we expect the weight of accepting AI as an agent to increase accordingly. Future research might investigate additional factors and mechanisms under the umbrella of the AI-as-an-agent perspective.
Another major contribution of our study is the focus on engineerable AI features. These features are the external antecedents to the prevailing constructs in the AI acceptance literature, such as perceived usefulness, ease of use, and trustworthiness. Beyond identifying a broad set of features, we differentiate closely related constructs: input transparency (awareness of data sources) versus process transparency (understanding of AI's decision-making logic), and capability (AI's performance level or accuracy) versus reliability (the consistency of its outcomes). These AI characteristics, along with user and task factors, are integrated into a theoretically grounded AI–task–user framework that captures key drivers of AI acceptance.
Our findings show that AI acceptance varies by task context. We observe strong rejection of AI in moral and privacy-related contexts, consistent with prior literature and received wisdom (e.g., Bigman and Gray 2018; Dietvorst and Bartels 2022). We also find systematic differences in how AI characteristics drive acceptance in professional versus consumer settings: the split-sample analysis shows that most engineerable AI characteristics believed to influence acceptance are effective only for consumers. We speculate that this difference stems from the type of data involved in consumer versus professional uses of AI, as the former is more personally sensitive and self-relevant. While it is not feasible to include every subgroup analysis differentiating the effects of AI characteristics across diverse tasks and users, or to examine every between-factor interaction, our interactive meta-analysis web tool enables readers to explore further.
The AI characteristics examined in the meta-analysis are limited to variables that are extractable from the literature. This constraint leaves several speculated engineerable AI characteristics, such as personalization, unexamined, and two variables, process transparency and cost, suffer from limited variation in the available studies. Future research may continue delving into these factors. The user dimension also remains relatively underexamined due to data constraints; future studies should further consider individual heterogeneity, including prior AI experience, social influence, education level, and cognitive biases.
As research on AI acceptance accelerates across diverse disciplines, maintaining an up-to-date empirical knowledge base is increasingly challenging. Meta-analyses on this topic are valuable but may quickly become outdated. To address this, our web tool enables researchers to input new effect sizes and update analyses, supporting a living, dynamic meta-analysis that facilitates ongoing knowledge consolidation (Cadario and Chandon 2020; Martin et al. 2023).
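As a rough sketch of how such a living update can work in practice (the CSV schema below is hypothetical, not the web tool's actual format), a researcher appends newly coded effect sizes to the evidence base and re-pools on demand:

```python
# Living meta-analysis sketch: append a new effect size to the evidence
# base, then recompute a simple inverse-variance pooled estimate.
import csv
import numpy as np

def add_effect_size(path: str, study_id: str, effect: float, variance: float) -> None:
    """Append one newly published effect size to the evidence base."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([study_id, effect, variance])

def repool(path: str) -> float:
    """Recompute the inverse-variance pooled estimate over all rows."""
    effects, weights = [], []
    with open(path, newline="") as f:
        for _, effect, variance in csv.reader(f):
            effects.append(float(effect))
            weights.append(1.0 / float(variance))
    effects, weights = np.asarray(effects), np.asarray(weights)
    return float(np.sum(weights * effects) / np.sum(weights))

add_effect_size("effect_sizes.csv", "new_study_2025", -0.12, 0.03)  # toy values
print(repool("effect_sizes.csv"))
```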
Above all, we encourage researchers and practitioners to consider the key drivers of AI acceptance through both lenses: AI as a tool and AI as an agent. We hope that future research builds on these insights, explores new avenues, and continually updates our knowledge on this important topic as AI advances. For practitioners, it is critical to stay up to date on the evolving landscape of relevant AI features and to strategically design and communicate those features to enhance acceptance and foster positive responses.
Supplemental Material
Supplemental material for this article (sj-pdf-1-jmx-10.1177_00222429251355266), including the Web Appendices, is available online.