Abstract
Artificial intelligence (AI)–enabled technology is changing the nature of work, altering what and how employees learn and remember, and how organizations apply that knowledge to create competitive advantages. Research that informs the role of AI in organizations is rapidly increasing and has delivered important insights about the complexity of human–machine interaction (e.g. Allen and Choudhury, 2022; Bailey et al., 2022; Davenport, 2018; Raisch and Fomina, 2024), yet few studies examine how AI affects work that is interdependent and completed by multiple people in collaboration. This has led to calls for a “systems perspective” to studying AI in organizations, with human and non-human entities operating in a dynamic system of interdependent relations (Agrawal et al., 2022; Anthony et al., 2023; Bailey et al., 2022; Faraj and Leonardi, 2022). Therefore, integrating AI technologies in organizations demands a strategic effort to design organizational systems that (continue to) allow for effective collaborative relations between humans and with increasingly agentic technologies.
In this study we analyze the effects of AI that is embedded in a system of humans, groups, and the organization. We draw from the organizational learning and strategic management literature (Argote, 2013; Argote et al., 2021; Barney, 1991; Edmondson, 2002; Eisenhardt and Martin, 2000; Harvey et al., 2022; Kemp, 2024) to develop a system dynamics model of individual, collective, and AI learning processes over time, focusing on accumulated knowledge, learning, forgetting, and coordination among human and non-human entities. Our approach is consistent with calls to link organizational learning and the organizational capabilities literature (e.g. Argote, 2013; Edmondson, 2002; Harvey et al., 2022). These calls argue that connecting multiple levels of analysis (individuals, groups, organizations) 1 not only contributes to a better understanding of how AI might help organizations achieve long-term performance, but also reveals “underexplored dynamic managerial capabilities stemming from team and organizational design” (Harvey et al., 2022: 497; Rahmandad, 2012). The organizational learning lens relates to research examining the microfoundations of organizational capabilities, providing a link between decisions of senior managers and organizational capabilities (Argote and Levine, 2020).
Our “learning system” model also reveals the conditions under which intertwined learning processes at different levels of analysis can produce organizational performance. The model incorporates findings from extant empirical research, conceptual papers, and computational modeling. It allows for new insights about how interrelated learning systems of individuals, groups, and AI produce or interfere with knowledge accumulation, organizational learning, and performance. Specifically, we examine how AI-enabled technologies affect the complex processes and outcomes of work teams and larger collectives. Our research question is: How does the use of AI in a collective setting alter individual and collective learning, the accumulation of expertise, and ultimately performance? By focusing on the microprocesses that are well understood from studies of group and organization learning, we aim to shed light on how AI can best be leveraged for overall organizational performance.
We present a formal mathematical model that explains how individual and collective learning unfold when AI is embedded in a group of humans whose work is interdependent, collaborative, and depends on creating and leveraging knowledge. Knowledge-based work is common in all organizations and especially relevant for organizations whose core activities involve innovation, software or product development, data analysis, or the use and interpretation of information in general. The use of AI is becoming normative in such organizations, which are leveraging the power of AI to try to increase human performance (Agrawal et al., 2022; Anthony, 2021; Dell’Acqua et al., 2023). Despite the increasing use of AI, we still know little about the effects of using AI in collective work processes that up until this point have been quite well understood (Argote, 2013; Argote et al., 2021; Argote and Miron-Spektor, 2011). Moreover, much of the current research on AI examines AI as isolated and independent from the rest of the organization (Agrawal et al., 2021, 2022; Raisch and Fomina, 2024). For these reasons, several authors have argued for a systems approach to studying AI in organizations (Agrawal et al., 2022; Anthony et al., 2023; Bailey et al., 2022; Faraj and Leonardi, 2022).
Our mathematical model reflects the dynamic, interdependent, non-linear relationships of systems in which AI is embedded. Complex interdependencies and non-linear relationships that evolve over time can be modeled with system dynamics (Sterman, 2001, 2010), yielding insights that are difficult to observe through traditional empirical methods. The model’s assumptions and parameters are based on established theory in the organization and strategy science literatures and extant empirical research on AI. Based on this model, we conduct computational experiments to compare different ways AI influences individual and collective learning.
Background and theory
AI-enabled technology
Emerging technologies are equipped with AI features, participating in organizational processes and performing cognitive tasks that previously required human knowledge and capabilities. Increasingly “AI or cognitive technologies employ such capabilities—previously possessed only by humans—as knowledge, insight, and perception to solve . . . tasks.” (Davenport, 2018: 9). These technologies are (to varying degrees) autonomous and can interact with collaborating humans in teams (Hu et al., 2021; O’Neill et al., 2022, 2023; Raisch and Fomina, 2024) and have the ability to learn from data (e.g. Wang et al., 2023), either independently or with human intervention. Fiore and Wiltshire (2016) even stipulate that technologies should be designed to include “scaffolding” functionality that “directly supports team-level processes by helping to mediate and support the interaction between individual and team-level cognitive activity” (p. 11). We refer to AI-enabled technology as “AI” throughout this article.
While AI is evolving from a tool to becoming a counterpart in teamwork (Anthony et al., 2023; Malone, 2018), prior research predominantly analyzes AI and its “users” in isolation from organizational processes and at the level of individual tasks. Recent attempts to build a theoretical foundation for inquiries into collaboration with AI have repeatedly used classic frameworks (such as input-process/mediator-output) from team research (O’Neill et al., 2022; Sebo et al., 2020; You and Robert, 2017). Research has mostly focused on the effects of AI use by individuals, but this is changing. Some studies examine how AI-enabled technologies (especially robots) affect teams and complex social settings (Jung and Hinds, 2018; Sebo et al., 2020). Findings from these studies suggest that not only are AI usage patterns markedly different in teams compared with usage by individuals (Sebo et al., 2020), but also that AI affects the way that team members interact with one another.
Despite these advances, we still have a very limited understanding of how team learning, knowledge, and social dynamics are affected when teams work together with AI (Argote et al., 2021; Jung and Hinds, 2018). Answers to the question of how humans and AI can work together often focus on how AI can be designed, rather than on how collaborating human workers need to adapt, learn, organize, and interact with AI for high performance (Wang et al., 2023). Team learning is emergent: it “originates in the cognition, affect, behaviors, or other characteristics of individuals, is amplified by their interactions, and manifests as a higher-level, collective phenomenon” (Kozlowski and Klein, 2000: 55). These complex interdependencies are difficult to study with traditional empirical techniques.
Some ethnographic research does, however, offer insights into the complex adaptive dynamics involved in adopting AI-enabled technology. For example, Beane (2019) found that the use of surgical robots altered the practices of operating teams in a way that rendered prior learning strategies of surgical trainees less effective. Barrett et al. (2012) found that teams needed to re-negotiate role structures when the relative skills or status between occupational groups changed as a result of integrating a programmable dispensing robot into a hospital’s systems. Beane and Orlikowski (2015) and Sergeeva et al. (2020) showed that the knowledge that team members need to access from each other and the accompanying coordinative processes were different when using AI-enabled technologies. These technologies change how humans interact (via robotic telepresence, see Beane and Orlikowski, 2015) or what they can do (enabled by a surgical robot, see Sergeeva et al., 2020). Informed by these deep descriptions of the intricate changes in teams that work alongside AI, we aim to conceptualize how individual and collective learning change when teams collaborate with AI.
When AI is embedded in organizational learning systems, AI is capable of influencing the learning and knowledge of human actors and is itself a learning entity in the system (Anthony et al., 2023; Kemp, 2024). AI-enabled technologies are able to “continuously acquire knowledge and skills, possibly operating autonomously or in concert with humans” (Bailey et al., 2022: 2). Understanding these complex systems of human and non-human entities requires researchers to “conceive of technologies as constituted by relations and interleaved with other relations that are always evolving” (Bailey et al., 2022; Faraj and Leonardi, 2022: 777). Too often, however, AI and algorithms are treated as a black box, and different types of organizations are treated similarly (Zeng and Glaister, 2018).
Understanding the dynamic and long-term effects of using AI-enabled technologies in organizations would go a long way toward explaining why some organizations perform better when using the same technology (Kemp, 2024; Zeng and Glaister, 2018). While much evidence points toward AI augmenting individual capabilities for many tasks (e.g. Dell’Acqua et al., 2023; Noy and Zhang, 2023), we do not yet understand how AI changes the complex relationships that comprise teams and organizations. Past research has been conducted mostly within disciplinary silos, and, in particular, outside of strategy, organization science, and organizational learning research. Beyond questions on how to design technology to make it more suited for humans to work with, we need a more comprehensive strategic understanding of how AI technologies can be leveraged by organizations. In this article, we offer a new, strategic organization perspective informed by organizational learning theory to analyze how organizations can redesign their work systems to support learning, collaboration, and performance. 2
Organizational learning and forgetting
Organizational learning theory is especially suitable for explaining organizational processes that span different levels of analysis, such as individuals, groups, and organizations (Argote, 2013; Argote et al., 2021; Crossan et al., 2021; Huber, 1991; Shapira, 2020). As we explain in this article, organizational learning theory can also explain the relationships between AI, human learning, and organizational knowledge (Argote et al., 2021; Sturm et al., 2021).
Learning from experience, by individuals, groups, or organizations, is often operationalized as learning curves (also known as experience curves, progress curves, and learning by doing) (Argote, 2013). A learning curve represents the relationship between cumulative experience and performance improvements. Since Wright (1936) documented the industry learning curve, hundreds of studies have examined learning curves for a variety of organizational settings, outcomes, and across different levels of analysis (e.g. Anderson and Parker, 2002; Argote, 1993; Argote and Epple, 1990; Nembhard and Uzumeri, 2000; Reagans et al., 2005; Sterman et al., 1997).
Research on organizational learning has revealed various mechanisms through which organizations can learn. One prominent mechanism is learning by doing, where organizations acquire knowledge through direct experience and experimentation. This can involve trial and error, experimentation with new processes or technologies, and reflecting on the outcomes. Another mechanism is learning from mistakes. By analyzing past failures and understanding their underlying causes, organizations can identify areas for improvement and develop strategies to prevent similar mistakes in the future (see Argote et al., 2021; Argote and Miron-Spektor, 2011 for literature reviews on organizational learning).
Retaining knowledge accumulated via organizational learning is inherently imperfect. Retained knowledge may become obsolete because of external changes (e.g. industry changes or technological advancements) or decisions by the organization (e.g. a new strategy that requires different knowledge). Knowledge may also be forgotten, for instance, when it remains unused for too long or when the individuals who remember it leave the organization (and the knowledge was not codified). Organizational forgetting refers to this phenomenon, where an organization’s knowledge depreciates in value or is lost over time.
An important insight of the organizational learning literature is that knowledge gained from experience becomes embedded not only in the minds of humans, but also in routines, processes, structures, and tools of the organization (Argote, 2013; Argote et al., 2021). “In order for learning to be organizational, the knowledge an individual acquires would have to be embedded in a supra-individual repository . . . so that the knowledge would persist in the organization even if the individual were to depart” (Argote et al., 2021: 5403). One such supra-individual repository is a transactive memory system (TMS): a group’s shared knowledge of “who knows what,” which members rely on to encode, store, and retrieve one another’s expertise (Wegner et al., 1985).
Anderson and Lewis (2014) modeled the learning processes involved in developing a TMS. Two key findings from their study are worth mentioning here. First, when individuals could rely on their shared (collective) knowledge of “who knows what” in the group, they were more apt to develop specialized expertise, refine their collective understandings of “who knows what” and further accelerate individual learning. These cycles of learning at different levels of analysis (Lewis et al., 2005) are positively reinforcing over time, resulting in increased performance. Second, members’ knowledge contributes to productivity only up to a point—if members’ knowledge becomes “overspecialized,” the group’s performance declines. Without enough shared (collective) knowledge, highly specialized members may have difficulty finding common ground (e.g. Clark, 1985, 1998; Krauss and Fussell, 1996), as specialized vocabulary, references, and knowledge may not be understood by members with disparate expert knowledge (Fraidin, 2004). Without sufficient collective knowledge, members will be unable to build on and integrate other members’ disparate knowledge, which in turn further slows collective learning.
Thus, individual and collective learning are mutually reinforcing only up to a point: collective knowledge accelerates individual specialization, but overspecialization erodes the common ground on which collective learning, and ultimately performance, depends.
We construct a computational model of the AI-Human Learning System consistent with the learning curves literature, incorporating learning rates, forgetting rates, and intertwined learning processes that connect individual learning with collective learning (group, unit, or organization). We consider AI itself to be a learning entity that accumulates knowledge as a function of task completion. AI learning can be construed as refinements or expansions in AI algorithms that tend to increase overall performance in a given organizational context. AI also learns by extracting patterns from a large number of cases or task iterations. 3
In the next section, we describe a computational model, conceived as a learning system that combines AI learning, individual learning, and collective learning. Our model recognizes that AI is embedded in a system of interactions that unfold over time, and which include both human and non-human actors (Anthony et al., 2023; Bailey et al., 2022; Faraj and Leonardi, 2022). We run computational experiments with the model, systematically varying the potential influences of AI on individual and collective learning. Thus, our model does not emphasize specific features of AI but instead emphasizes the impact of AI on accumulated knowledge and learning at multiple levels of analysis. By systematically varying model parameters, we create different cases (conditions) wherein AI augments individual or collective learning or causes obsolescence of individual or collective knowledge. Comparing these different cases provides insights that are applicable for a variety of knowledge-intensive organizations and settings.
AI-human learning system
A system is a set of relationships between actors and related interdependent components that work together to perform whatever functions are required to achieve the system’s objective (Meadows, 2008). The learning system model presented in this article assumes that organizational outcomes are a function of learning from multiple human and non-human entities that work together to complete tasks. The combined effects of learning from AI and interacting humans are depicted in the causal loop diagram in Figure 1.

Figure 1. Causal loop diagram of the AI-human learning system.
We model AI itself as a learning entity, such that AI accumulates knowledge with each task that is completed by the group, in the given context. We assume that AI learning is intertwined with the individual and collective learning systems, such that the AI learning system affects the other learning systems, and vice versa. As stated in Sturm et al. (2021), “humans are no longer the only ones capable of learning and contributing to an organization’s stock of knowledge” (p. 1581).
A few assumptions undergird our model. First, we assume that the task is one that AI has the capacity to address. Second, we assume that the task is one that AI cannot accomplish by itself—that is, both human and non-human entities need to coordinate for high task performance (cf. Raisch and Fomina, 2024: 4–5). This is captured by the way that performance is calculated: as a function of the contributions from the individual, collective and AI knowledge stocks.
Mathematical specification
In this section, we describe the mathematical specification of the AI-Human Learning System. 4
The simulation model described by the equations below uses the Stella v. 2.1.2 simulation application. The model is available upon request from the authors. The simulation executes every time period, where each period represents 1 week.
Performance
Performance in each period is modeled as a multiplicative function of the contributions from the individual, collective, and AI knowledge stocks; the specific contribution of each stock is given by the learning-curve formulations below.
Learning and forgetting
Performance depends not only on knowledge accumulated through learning, but also on knowledge lost to “forgetting” (Argote, 1993, 2013; Argote et al., 2021; Wright, 1936). Forgetting occurs when knowledge becomes obsolete because of new or changing task conditions, when knowledge leaves the group via turnover, or when AI takes over aspects of the task previously completed by humans. Learning results from the tasks completed in prior time periods, and forgetting results from the depreciation of knowledge accumulated in prior periods (Darr et al., 1995). 5 We begin with the specification for individual knowledge that captures these effects, following Anderson and Lewis (2014). 6
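This stock-and-flow logic can be sketched as follows. The function name and all parameter values are illustrative assumptions for this sketch, not the model's calibrated specification:

```python
# Illustrative stock-and-flow update for a single knowledge stock:
# learning adds knowledge in proportion to tasks completed, while
# forgetting depreciates the accumulated stock each period.
# Parameter values are placeholder assumptions, not calibrated values.

def update_knowledge(stock, tasks_completed, learning_rate=0.1,
                     forgetting_rate=0.02):
    """Return the knowledge stock after one period (week)."""
    learned = learning_rate * tasks_completed   # learning from experience
    forgotten = forgetting_rate * stock         # depreciation of knowledge
    return stock + learned - forgotten

# Ten weeks of steady task completion grow the stock toward the
# equilibrium where learning balances forgetting (0.1 * 5 / 0.02 = 25):
k = 1.0
for week in range(10):
    k = update_knowledge(k, tasks_completed=5.0)
```

The same accumulation-minus-depreciation structure applies, with different rates, to each of the three knowledge stocks in the model.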
As with individual knowledge, the collective and AI knowledge stocks are specified analogously: each accumulates through learning from completed tasks and depreciates through forgetting at its own rate.
For convenience and without loss of generality, all three knowledge stocks initially have a value of 1. Overspecialization can change the rate at which collective knowledge accumulates, as described below under “Ability to coordinate.”
Finally, it should also be noted that all knowledge stocks remain greater than or equal to zero throughout the simulations.
Learning curve for individuals
The contribution of each knowledge stock to performance is the standard power-law formulation: the stock raised to a constant exponent, so that each doubling of accumulated knowledge yields a constant percentage improvement. For individual knowledge, the exponent governs the returns to individual expertise.
Learning curve for collective and AI knowledge
The contributions of the collective and AI knowledge stocks are calculated analogously with their respective constants.
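To make the power-law formulation concrete, a minimal sketch in Python; the exponent value is an assumed placeholder, not one of the model's estimated constants:

```python
# Illustrative power-law contribution of a knowledge stock to performance,
# and the multiplicative combination of the three stocks. The exponent is
# a placeholder assumption, not the model's estimated constant.

def contribution(knowledge, exponent=0.3):
    """Power law: every doubling of knowledge multiplies the
    contribution by the same constant factor (2 ** exponent)."""
    return knowledge ** exponent

def performance(k_individual, k_collective, k_ai):
    """Multiplicative combination: if any stock falls toward zero,
    performance falls toward zero regardless of the other stocks."""
    return (contribution(k_individual) * contribution(k_collective)
            * contribution(k_ai))
```

The multiplicative form captures the assumption stated earlier: high individual and AI knowledge cannot fully compensate for depleted collective knowledge.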
Opportunity to specialize
When members learn together, new collective knowledge (about who knows what, shared vocabulary, shared routines) can develop, and an adaptive cycle of learning can unfold (Anderson and Lewis, 2014). As collective knowledge increases, it affects what, and how quickly, individuals learn. TMS research suggests that when individual members can rely on collective knowledge, it provides cognitive slack for deeper individual learning—typically in knowledge areas that are not already covered by other members (Hollingshead, 2001; Lewis et al., 2007; Wegner et al., 1985; Wittenbaum et al., 1996). This “progressive specialization” can benefit performance because it allows a greater amount of task-relevant knowledge to be applied to the group’s tasks. As collective knowledge increases, individuals can increase their expertise more effectively, albeit with diminishing returns. We specify this effect based on the standard literature of diminishing returns to functions of knowledge or other attributes (Ellis, 1965).
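A sketch of this diminishing-returns effect; the saturating functional form, function name, and parameters are assumptions for illustration, not the model's specification:

```python
# Illustrative diminishing-returns effect of collective knowledge on
# individual learning (the "opportunity to specialize"). The saturating
# exponential form and parameter values are assumptions for this sketch.
import math

def specialization_multiplier(collective_knowledge, max_boost=2.0, scale=5.0):
    """Multiplier on the individual learning rate: it rises with
    collective knowledge but saturates below max_boost."""
    gain = 1.0 - math.exp(-collective_knowledge / scale)
    return 1.0 + (max_boost - 1.0) * gain
```

Each additional unit of collective knowledge buys a smaller increase in the multiplier, so individual learning accelerates with collective knowledge but at a diminishing rate.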
Ability to coordinate
If members’ knowledge becomes too specialized, it can impede communication and reduce members’ ability to build on and integrate knowledge from other members of the group. The result is a slowing of collective learning and decreased collective knowledge. Because learning entities are not independent but are intertwined, we can expect that lower collective knowledge will eventually slow individual learning. The interplay between individual knowledge, collective knowledge, and subsequent learning can be represented mathematically with a reverse “S-curve.” S-curves are often used to represent the effects of parameters in learning models (Repenning, 2002; Sterman et al., 1997). Specifically, we use the formulation of the S-Curve by Repenning (2002) as adapted by Anderson and Lewis (2014), specifying the ability to coordinate as a decreasing S-shaped function of members’ specialization relative to the group’s collective knowledge: coordination remains high at moderate levels of specialization and falls steeply once members become overspecialized.
Model testing and simulations
We conducted standard simulation robustness tests appropriate to system dynamics-based models of theory (Sterman, 2010). Table 1 shows the parameters we use for a “base” case, which we later compare with simulations from computational experiments. We use the parameter values that were validated by Anderson and Lewis (2014); these same parameter values are informed by empirical research on learning and forgetting (e.g. Argote, 2013; Boone et al., 2008; Darr et al., 1995). Unless stated otherwise, all variables and parameters are greater than or equal to zero. We also conduct sensitivity tests (Monte Carlo analyses, see Appendix 2) that vary parameter values to ensure that our model, which adds AI as a learning entity, is robust and that the results of our computational experiments are valid for a broad range of parameters.
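The logic of such a Monte Carlo sensitivity sweep can be sketched as follows. The toy single-stock model, parameter ranges, and seed are all assumptions for illustration; the article's analyses vary the full model's parameters (Appendix 2):

```python
# Illustrative Monte Carlo sensitivity sweep: draw learning and forgetting
# rates from broad ranges, rerun a toy single-stock learning system, and
# check that the outcome stays well behaved across draws.
# The toy model and parameter ranges are assumptions for this sketch.
import random

def final_knowledge(learning_rate, forgetting_rate, weeks=200):
    """Run the toy system for a fixed horizon; return final knowledge."""
    k = 1.0
    for _ in range(weeks):
        tasks = k ** 0.5                      # performance drives learning
        k = k + learning_rate * tasks - forgetting_rate * k
    return k

random.seed(7)  # reproducible draws
finals = [final_knowledge(random.uniform(0.05, 0.20),
                          random.uniform(0.01, 0.05))
          for _ in range(200)]
```

The point of the sweep is not the specific values but that the qualitative behavior (bounded, non-degenerate knowledge accumulation) holds across the sampled parameter space.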
Base model parameters.
Base Case simulation
Simulations using the mathematical specifications above are illustrated in Figure 2. Note that the x-axis is time, and the y-axis represents performance. Each “trace” (or curved line) represents the behavior of the model over time.

Figure 2. Base Case.
Results from the Base Case simulations show that performance of the AI-Human Learning System increases up to week 25. The peak performance of the AI-Human Learning System is 12.5 tasks per week, which is higher than the peak performance of a learning system model without an AI learning entity (cf. Anderson and Lewis, 2014).
Interpreting the Base Case simulation results
To get a deeper understanding of these Base Case results, we disentangle effects emanating from the different learning entities (collective, individual, and AI). The results are shown in Figure 3. We “zoom in” on weeks 0–120 for convenience. Figure 3 shows each of the knowledge stocks (collective, individual, and AI) over time, and their relative contribution to performance.

Figure 3. Base Case details: knowledge stocks and their contribution to performance.
Initially, collective, individual, and AI knowledge all increase, driving up each of their contributions to performance (bottom right). Individual knowledge and its contribution to performance are accelerated from the beginning by increased collective knowledge, which allows individuals to develop new specialized knowledge. As members’ knowledge becomes overspecialized, however, communication and coordination eventually become less effective, progressively reducing the accumulation of collective knowledge. When the rate of collective learning falls below collective forgetting at week 5, collective knowledge and its contribution to performance begin to decline. In addition, individual and AI contributions to performance begin to flatten from diminishing returns to knowledge. The result is that collective knowledge’s contribution begins dragging down increases to performance, until at week 25 the loss overtakes the flattening contributions from individual and AI knowledge, causing overall performance to drop. At this point a vicious cycle begins. The performance decline reduces learning, which in turn slows the accumulation of individual and AI knowledge. Eventually, the individual learning rate drops below the individual forgetting rate, which results in a decline of individual knowledge and its contribution to performance at week 45. The same happens to AI learning, knowledge, and its contribution to performance. Since the creation of new AI knowledge slows down with the dwindling performance, AI’s learning rate drops below its forgetting rate as well.
This “overshoot” behavior is an established pattern in system dynamics research, reflecting the dynamic, non-linear, and interdependent relationships in the model and, specifically, the interrelated learning curves that capture organizational learning. Prior research in system dynamics has uncovered such “overshoot” patterns time and again when modeling learning or process improvement activities using learning curves (e.g. Anderson and Chandrasekaran, 2024; Anderson and Parker, 2002; Repenning and Sterman, 2001; Sterman et al., 1997). Anderson and Lewis (2014) introduced overspecialization to the literature on organizational learning systems as a reason why this pattern occurs.
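The overshoot pattern can be reproduced with a minimal two-stock sketch: individual knowledge outgrows collective knowledge, coordination collapses, and performance falls. All functional forms and parameter values below are illustrative assumptions, not the paper's calibrated model:

```python
# Minimal two-stock sketch of the "overshoot" dynamic: performance rises
# while both stocks grow, then declines once overspecialization (a high
# individual-to-collective knowledge ratio) chokes off collective learning.
# All forms and parameters are illustrative assumptions.
import math

def coordination(ratio, threshold=3.0, steepness=2.0):
    """Reverse S-curve: near 1 at low specialization, near 0 beyond it."""
    x = steepness * (ratio - threshold)
    if x > 50.0:              # logistic is effectively 0; avoid overflow
        return 0.0
    return 1.0 / (1.0 + math.exp(x))

k_ind, k_col = 1.0, 1.0
perf_series = []
for week in range(300):
    perf = (k_ind ** 0.4) * (k_col ** 0.4)
    perf_series.append(perf)
    coord = coordination(k_ind / k_col)
    k_ind += 0.15 * perf - 0.01 * k_ind            # individuals learn fast
    k_col += 0.08 * perf * coord - 0.04 * k_col    # collective learning needs coordination

peak_week = max(range(300), key=lambda w: perf_series[w])
```

Performance peaks at an interior week and then declines, mirroring the rise-then-fall trajectory of the Base Case, though not its specific timing or magnitudes.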
Interpreting the impact of AI on human learning
In the Base Case, performance declines over the long run as the team loses its capability to use the AI and integrate it appropriately in their collaborative work. This, combined with overspecialization of members’ knowledge, creates a performance decline that is difficult to avoid or repair.
While the effects of AI use on the human learning system may seem counterintuitive, recent research explains why using AI in collaborative environments can be detrimental in the long term. Specifically, negative long-term effects of AI use have been documented in case studies of AI coding assistants (Anderson et al., in press). In controlled environments on isolated tasks, there is usually a clear performance benefit for individual human coders using AI coding assistants. However, AI-generated code in real organizational (legacy) environments may create “technical debt,” that is, the need for future rework due to, for instance, an entangled web of dependencies that make it difficult for human coders to contribute meaningfully in the future (Anderson et al., in press). In addition, when AI is used by individual coders who are working as a team, integrating that code takes time away from production activities. Informants in informal interviews conducted for this study noted that it was hard to keep track of “who knows what” after members individually used AI coding tools. For tasks that require the integration of human and AI knowledge, this can be detrimental to task performance in the long run.
Computational experiments
The Base Case simulation showed that AI can amplify early performance by increasing a team’s capabilities to carry out tasks. To further explore how AI affects the human learning system, we conduct computational experiments that systematically vary model parameters to produce four cases that can be compared with the Base Case and with each other. The cases are shown in Table 2 and explained below.
Computational experiments of AI’s influence on the AI-human learning system.
Recall that performance, in our model, is a multiplicative function of collective, individual, and AI knowledge stocks. Focusing on knowledge stocks, we capture an AI’s influence in the AI-Human Learning System by varying the rates at which the knowledge stocks are filled by learning and depleted by forgetting. We consider four such rates: the individual and collective learning rates, and the individual and collective forgetting rates.
Table 2 shows four different cases representing different scenarios for how AI could impact the AI-Human Learning System. For example, AI can accelerate the learning rate for individuals (first column of the table) or the collective (first row of the table), augmenting the knowledge stocks that drive performance. Alternatively, AI may cause individual or collective knowledge to become obsolete. This is modeled as increases in the forgetting rates for individuals (second column) and the collective (second row), leading to a reduction in the knowledge stocks that drive performance.
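The experimental logic of varying these rates can be sketched with a simple harness. The toy two-stock system, case labels, and all rate values are hypothetical illustrations, not Table 2's specifications:

```python
# Illustrative computational-experiment harness: rerun the same toy
# two-stock learning system while varying learning and forgetting rates,
# mirroring how AI's influence is modeled as rate changes. All cases and
# parameter values are hypothetical, not Table 2's specifications.

def run_case(weeks, ind_learn, ind_forget, col_learn, col_forget):
    """Return performance after a fixed horizon under the given rates."""
    k_ind = k_col = 1.0
    perf = 1.0
    for _ in range(weeks):
        perf = (k_ind ** 0.4) * (k_col ** 0.4)
        k_ind += ind_learn * perf - ind_forget * k_ind
        k_col += col_learn * perf - col_forget * k_col
    return perf

base = dict(ind_learn=0.10, ind_forget=0.02, col_learn=0.10, col_forget=0.02)
cases = {
    "Base Case": base,
    "AI boosts individual learning": {**base, "ind_learn": 0.15},
    "AI obsoletes collective knowledge": {**base, "col_forget": 0.04},
}
results = {name: run_case(200, **params) for name, params in cases.items()}
```

Raising a learning rate lifts the trajectory relative to the base parameters, while raising a forgetting rate depresses it; the article's four cases combine such changes across the individual and collective levels.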
Table 2 also explains in practical terms how varying learning and forgetting rates can manifest in the AI-Human Learning System. For example, when AI increases the individual learning rate, it can be said that AI augments individual expertise, helping members accumulate specialized knowledge more quickly.
Case #1
First, we consider a case where AI increases both the collective learning rate and the individual learning rate.
Experiments show that it is indeed possible for a technology to shape human behavior via certain “nudges” and influence a group’s collective intelligence (Gupta et al., 2024). Using AI to monitor, track, and analyze the role structure could mitigate problems of discoordination that occur when members become overspecialized. Recent technical developments and theoretical conceptualizations point toward a path where future technologies actively influence individual and collective learning in an AI-Human Learning System.
Figure 4 shows the dynamic effects of Case #1, compared with the Base Case described earlier. When AI increases both the collective and individual learning rates, performance is higher than in the Base Case throughout most of the simulated timeframe, but it does not prevent a decline beginning in week 28. As in the Base Case, the decline is ultimately a result of increasing specialization of the individuals’ expertise, which hampers communication and coordination and slows the growth of collective (shared) knowledge. In the short term, the faster collective learning rate in Case #1 enables the team to better cope with overspecialization. Eventually, however, the accelerated rate of tasks completed per week also speeds the accumulation of specialized individual knowledge, so overspecialization, and the decline it causes, is delayed rather than prevented.

Figure 4. Case #1 compared to Base Case.
Case #2
Next, we consider a case in which AI increases the collective learning rate while also increasing the individual forgetting rate.
Figure 5 shows the dynamic effects of Case #2, compared with the Base Case. Initially, this case behaves similarly to the previous case (Case #1, see Figure 4), with a similar peak magnitude followed by a decline. The computational experiment that created this case shares an increased collective learning rate with Case #1. However, this computational experiment also simulates an increase in the individual forgetting rate, representing AI rendering some individual knowledge obsolete.

Figure 5. Case #2 compared to Base Case.
In the long run, Case #2 is superior to Case #1. This is somewhat counterintuitive at first sight. The key to explaining why an increase in individual forgetting, rather than an increase in individual learning, is beneficial is that there are diminishing returns to performance for both collective and individual knowledge. In the Case #1 simulation, there is—in the long run—a relatively high level of individual knowledge and a relatively low level of collective knowledge. The learning system is “stuck” during the rest of the simulation period; it does not recover from this situation and remains at a relatively low performance level. In the Case #2 simulation, however, individual learning is kept to a more moderate level because the individual forgetting rate is increased as part of the computational experiment. This constrains the impact of overspecialization, permitting collective learning to increase to a higher level. The increase of the collective contribution to performance more than makes up for a lower individual contribution to performance resulting from less specialized personnel. The result is that performance stabilizes at a significantly earlier point in time (around week 150) and at a higher average level than the performance level at the end of Case #1.
Case #3
This case represents a situation in which AI augments human skill or specialized expertise, but at the same time disrupts the group’s role structure. The effects of AI on collective processes have been examined mostly in ethnographic studies, which show how the introduction of technology into a team may lead to changes in the team’s role structures, on which employees depend to effectively work together. Adapting to the technology often means redefining the role structure and creating the need for different human expertise, that is, to learn new things both individually and collectively. For example, several studies showed that integrating intelligent robots into medical teams prompted a significant reconfiguration of existing team roles, responsibilities, and coordination processes (Barrett et al., 2012; Sergeeva et al., 2020). Such changes in the role structure also had longer-term effects on individual expertise. For example, deploying robots in surgical teams enhanced aspects of the surgeons’ access to information, perception, or operational dexterity (Beane, 2019; Sergeeva et al., 2020). Individual expertise was developed in one group (surgeons using the robot), and at the same time, nurses and anesthesiologists adapted to new roles that required them to learn new specialized knowledge (Sergeeva et al., 2020). An organizational learning lens helps us understand the short- and long-term effects of such changes in the role structure (the shared understanding of “who knows what” and “who does what”) and of the development of new individual expertise relevant to a new role structure.
An example of this might be the machine learning-based drug discovery case described by Sturm et al. (2021: 1585). Scientists are provided with novel insights into molecule substructures and the cause-effect relationships through which these structures contribute to pharmacological effects. Based on these insights, they may accumulate individual knowledge faster. However, the human research teams still retain agency over the process: deciding which learning algorithms to use, setting up data for the learning algorithms, and deciding on the direction their research will take (e.g. which disease will be targeted, which specific process leading to a pathogen’s effect will be targeted). The advantage of these machine learning techniques for drug discovery explicitly lies in detecting “previously disregarded substructures in molecules,” and machine learning systems may “not necessarily follow the same reasoning that is provided by existing textbook knowledge of chemistry” (Sturm et al., 2021: 1585). Those new insights may help scientists advance more quickly but also introduce new roles to the process (e.g. data scientists) and may require new—even surprising—changes to the collaborative team efforts and to co-specialization requirements.
Figure 6 compares the simulation of Case #3 with the Base Case. Because AI increases the collective forgetting rate, the accumulation of collective knowledge is slowed down. It is more difficult to accumulate enough collective knowledge before overspecialization sets in. Hence, the organization cannot allocate roles as effectively and gets less out of the individual capabilities of its team members, which reduces the contribution of collective knowledge to performance. Thus, Case #3 falls short of the Base Case’s performance. Although performance recovers slightly after about half of the simulation period (week 112), the recovery is weak and performance remains below the Base Case throughout the simulation.

Figure 6. Case #3 compared to Base Case.
Case #4
Last, we consider a case in which AI increases both the collective forgetting rate and the individual forgetting rate.
For instance, the use of surgical robots to assist in surgery can alter the practices of operating teams in a way that renders prior learning strategies of surgical trainees ineffective, since trainees no longer assist the surgeons to the same extent they did before in robot-assisted surgery (Beane, 2019). A study in the banking industry showed that the skills of junior investment bankers are rendered obsolete in areas where the analyses that junior bankers previously completed are now automated by AI tools (Anthony, 2021). When AI autonomously completes part of the overall analysis, not only is individual knowledge depreciated, but AI also changes the interaction patterns between technology, junior bankers, and senior bankers (Anthony, 2021). Teams faced with robotic innovations (e.g. a pharmacy dispensing robot) also need to re-negotiate disrupted role structures when the relative skills or status between occupational groups changes (Barrett et al., 2012). Teams need to rebuild collective and individual knowledge continuously when they are faced with a technology that renders both individual and collective knowledge obsolete.
When AI disrupts an existing role structure, knowledge of “who knows what” in the team may unwittingly be disrupted (as seen in Cases #3 and #4). We interviewed a senior software engineer at a top 3 AI firm (according to market capitalization) who works on multiple applications and is “tech lead” for one of them. Software engineers at this firm each work individually on component “services” that are modules of code. Those component services need to be integrated at a higher level to create an application. It is crucial to align individuals to create component services that will work together coherently and robustly at the application level. If an individual programmer does their “own thing,” their services will not be aligned with the overall application’s goals. To succeed, management and ideally other programmers need to “track what the others are doing.” AI has not only enabled individuals’ activities “to change faster, but it also [the engineer threw up her hands at this point] mutates unpredictably.” The result is that while AI has accelerated individuals’ productivity, the hoped-for overall performance increases in creating applications have not yet happened.
Asked how AI might help overcome problems of “mutation” in the role structure, the senior software engineer commented that “it would help us deal with the team’s treadmill of rework keeping us from getting the apps to work.” In particular, she commented that if an AI could help the team members document each individual’s work on services and then summarize that documentation so that managers and other individuals could more easily understand others’ roles, that “would be golden.” This suggests that AI with some agency over managing, tracking, and analyzing the role structure could help avoid these integration problems (see Case #1).
In another interview with a software developer at a major software company using AI coding assistants, the developer was less optimistic regarding the possibility that AI could take on tasks previously performed by human coders. He said that “AI isn’t achieving the productivity savings we want, and we need them because we don’t have enough ‘hands’ [qualified programmers] so that we can scale [keep up with the exponentially growing need to provide more AI capabilities] or pivot [adapt to changing market conditions].” The previous senior software engineer echoed this theme in her interview. While she did believe that AI was reducing the rate at which they needed to add headcount, her team was still being overwhelmed by scaling, which is why she was so interested in a team-level AI that could manage roles. However, she added an interesting nuance: “If a [team-level] AI actually worked out and could reduce headcount, it’ll start separating out programmers . . . If anything, we need the really, really capable programmers to do more high-level work like trying to figure out what we need to do to scale over the next five years.”
In short, AI might require more- rather than less-trained workers. This observation bears on the comparison of Cases #3 and #4. Even though Case #4 resulted in higher long-term performance, certain organizational imperatives like rapid scaling may make replacing members (or human knowledge and skills in general) untenable.
Figure 7 compares Case #4 with the Base Case. In the short run, Case #4 is similar to the previously discussed Case #3. Because AI increases the collective forgetting rate and slows down the accumulation of collective knowledge, Case #4 initially also falls short of the Base Case’s performance. In the longer run, however, Case #4 performs markedly better than Case #3. The increased individual learning rate in Case #3 leads to lower performance than the increased individual forgetting rate in Case #4. All other things equal, faster forgetting leads to a lower level of individual knowledge that indirectly benefits performance: overspecialization is alleviated, allowing a higher level of collective knowledge, whose contribution to performance outweighs the decreased contribution of individual knowledge. Therefore, performance in Case #4 reverses course and begins to climb at week 70. Interestingly, the indirect benefit of individual forgetting stabilizes performance above even the Base Case after about week 120.

Figure 7. Case #4 compared to Base Case.
Case comparison
Figure 8 shows all cases, presented separately in Figures 4 to 7, in one graph. At the beginning of the simulation, up until about week 75, the visual traces of the four cases split into two groups, according to whether AI increases the collective learning rate (Cases #1 and #2) or the collective forgetting rate (Cases #3 and #4), that is, “by row” in Table 2. The two cases in which collective learning is accelerated (Cases #1 and #2) lie above the Base Case; the two cases in which collective forgetting is accelerated (Cases #3 and #4) lie below the Base Case. This points to the importance of collective learning: When the accumulation of collective knowledge is disturbed, the learning system’s performance suffers.

Figure 8. Comparison of all four cases to each other and to Base Case.
All cases show the characteristic “overshoot” behavior, that is, a decline in performance after the initial peak. However, while Cases #2 and #4 stabilize at a higher level (above the Base Case), Cases #1 and #3 oscillate at a lower level (below the Base Case). This observation highlights the importance of the increased individual forgetting rate that Cases #2 and #4 share.
Overview of sensitivity testing
While our results for the Base Case and the computational experiments were simulated with the parameters shown in Table 1, the simulation results qualitatively hold under a wide range of parameter values. This is discussed in more detail in the Monte Carlo analyses presented in Appendix 2. For example, we ran 1000 simulations for each of the analyses presented in Figures 4 to 7. Each simulation run draws a random value for the parameters that were varied in the computational experiments (individual and collective learning and forgetting rates) from a range spanning 50% below to 50% above the increase used in the main analyses. For instance, when AI increases individual learning (in Case #1 and Case #3), we use an increase from 1.0 to 2.0 (units of knowledge accumulated per task completed) in the main analyses; in the Monte Carlo simulations, this parameter is randomly selected for each sensitivity run from a range between 1.5 and 2.5. We visualize the results in Figure 10, which shows that the behavioral shapes and the changes in performance versus the Base Case remain qualitatively the same in all cases.
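The ±50% draw described above can be sketched as follows. This is a minimal, self-contained illustration; the function name and the seed are our own choices.

```python
import random

def sample_increased_rate(base_rate=1.0, main_increase=1.0,
                          n_runs=1000, seed=7):
    """Draw the experimental increase uniformly from 50% below to 50%
    above the increase used in the main analysis. With a base rate of
    1.0 and a main-analysis increase of +1.0 (i.e. 1.0 -> 2.0), each
    run's rate falls in [1.5, 2.5]."""
    rng = random.Random(seed)
    return [base_rate + rng.uniform(0.5 * main_increase,
                                    1.5 * main_increase)
            for _ in range(n_runs)]

rates = sample_increased_rate()  # one rate per Monte Carlo run
```

Each sampled rate would then parameterize one sensitivity run of the simulation model.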
Discussion
Computer scientist and psychologist Licklider (1960: 4), a pioneer of the Internet architecture, interactive computing, and AI, expressed the hope that “human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”
However, despite recent interest in AI “augmentation” (cf. Baer et al., 2024) and “teaming” with AI (cf. O’Neill et al., 2023), scant research is devoted to how AI use in organizations affects the human systems in which AI is embedded. In considering AI as a learning entity that is embedded in a system of work performed by humans, we advance the field’s understanding of AI-Human Learning Systems.
Key findings
“[O]rganizational system[s] for collective encoding, storing, and retrieving knowledge” (Argote and Ren, 2012: 1375) have been proposed to constitute a microfoundation of dynamic capabilities and, ultimately, an important source of competitive advantage (Argote and Ren, 2012). These systems develop as individuals learn through experience working together (Argote and Ren, 2012: 1378) and increasingly through experience working in systems of humans, groups, and organizations in which AI is deeply embedded.
First, our results highlight the importance of collective knowledge for coordinating members with disparate expertise in teams and larger organizational systems in which AI is integrated. To date, the role of the collective (group, unit, or organization) in producing knowledge and performance is largely absent in the literature on AI. Because collective and individual learning are intertwined, each depends on the other. Without a stock of collective knowledge, individuals’ learning slows, which reduces the overall performance of the system. Second, we consider the role of “forgetting” in the AI-Human Learning System. Forgetting can stem from knowledge obsolescence or from AI taking over (parts of) a task previously performed by humans. The promise of AI to take over tasks previously performed by humans is often touted as a key productivity benefit of AI. Our study adds nuance to this notion by showing why this benefit might not be realized. We find different effects of forgetting, depending on whether the forgetting occurs at the individual level or at the collective level.
The Base Case simulation integrates AI learning with individual and collective learning. Simulations of this learning system show an appreciable improvement in performance over human-only learning systems (cf. Anderson and Lewis, 2014). While not surprising, the Base Case does align with the expectation that AI helps organizations by expanding capabilities. It also highlights the dangers of expanding individual knowledge unchecked until individuals become overspecialized, leading to problems with communication and collaboration among members. Insufficient collective knowledge has dramatic and long-lasting effects on the learning system. Hence, using an AI can improve performance overall relative to not using it, but it does not necessarily prevent performance collapses when team coordination breaks down.
Computational experiments were conducted to compare different cases, representing the potential effects of AI under four different conditions. The cases reflect examples in the literature and technology already used by organizations. Case #1 shows that teams generally benefit from AI that increases individual and collective learning but also highlights the importance of considering the indirect effects on learning. While performance is higher than in the Base Case for most of the simulation, the direct increase of individual and collective learning is countered by AI’s indirect effects on the learning system and, ultimately, performance declines in the long run. With Case #2, the computational experiments revealed a somewhat counterintuitive and surprising development in the long run. An AI that increases individual forgetting instead of individual learning prevents some of the decline in performance and, in the long run, might lead to higher (above Base Case) performance. 8 Cases #3 and #4 both analyze an AI that increases collective forgetting. Simulations show that when AI increases collective forgetting, early performance is significantly lower than in the Base Case. In the long run, however, higher performance is observed when AI increases individual forgetting (Case #4) compared with when AI increases individual learning (Case #3).
Theoretical implications
These findings substantiate recent conceptual work that describes technologies as more constitutive and questions whether humans performing tasks—who used to be alone at the center of theories of organizing—can be considered in isolation from the team structure in which they are embedded alongside AI (Faraj and Leonardi, 2022). Our results point toward intricate ways that AI as a learning entity is intertwined in relationships with human team members. While the role of the collective in producing knowledge and performance is largely absent in the literature on AI, the results from our simulations highlight its importance in coordinating members with disparate expertise. Collective and individual learning are entangled in a complex dynamic dance with AI. If AI inhibits the accumulation of collective knowledge, any benefits from augmenting individual learning are not realized. Indeed, an AI that causes the obsolescence of individual knowledge is often superior in such cases. In contrast, AI with some agency over important pillars of team functioning, such as the division of cognitive labor and role assignment (Cases #1 and #2), can improve performance. These results highlight how AI is central to organizing and organization theory.
Our findings contribute to organizational learning theory and how it relates to the microfoundations of organizational capabilities and competitive advantage (Argote and Ren, 2012; Volberda et al., 2021), providing a link between the decisions of senior managers and organizational performance (Argote and Levine, 2020; Harvey et al., 2022). In support of this view, Argote (2013) noted that organizations adapt through learning activities, which occur within and between teams in a larger organizational context. Strategy scholars generally agree that managers must leverage knowledge generated by teams to gain and maintain competitive advantage (e.g. Eisenhardt and Martin, 2000), but often without a much-needed focus on learning at the team or unit level (Harvey et al., 2022). Our study shows how organizational capabilities can develop and change by linking AI to individual- and team-level learning processes.
These insights may help to better understand how organizations can establish a sustainable competitive advantage with AI when many companies have access to similar technologies (Kemp, 2024; Zeng and Glaister, 2018). The field of strategic organization has defined its focus as the “design, administration, arrangement, or structuring of [a . . .] group, or system so that it will be most useful or have the greatest effect” (Baum et al., 2022: 683). Our study emphasizes the importance of developing systems that maintain a shared understanding and promote cross-functional communication. When AI is integrated into teams with varying expertise and into the teams’ patterns of interaction, AI’s learning and actions become tailored to the specific contexts and teams. Such a socially complex learning system—once established—is difficult for competitors to imitate (Argote and Ren, 2012; Barney, 1991). The flip side is that this also makes it challenging for strategic leaders to establish and manage such a system.
Our results also inform another core area of (behavioral) strategy: the consideration of dual (or multiple) opposing goals (e.g. Levinthal and Rerup, 2021). In designing learning systems, the delivery of high performance should be considered alongside the need for humans to accumulate more expertise than may be optimal for (short- or medium-term) performance. For instance, do our results mean that human expertise is losing some of its value as it is—at least in part—replaced by AI? Not necessarily. Studies on automation technologies (e.g. Bainbridge, 1983; Baxter et al., 2012) have shown time and again that it is problematic when humans are “out of the loop”: their knowledge deteriorates, yet they are needed to intervene when technology reaches its limits. Similarly, human expertise might play an important role in monitoring AI performance, especially if transparency of AI’s work products is low or new contexts make AI unreliable.
Practical implications
Strategy practitioners must consider and integrate AI when (re-)designing organizational systems. We advocate that collective knowledge should be understood as an organizational design principle. When considering the “intelligence” of emerging technologies, the focus is usually on their ability to solve tasks that, previously, only humans were able to solve. An (at least) equally important dimension in organizational design is the social embeddedness of technology. When integrating AI into organizations—and while considering AI’s implications for individual and collective learning—tasks and organizational structures should be designed accordingly. For example, modular interfaces could create sub-tasks where AI can be used in isolation from team collaboration. Other parts of the task, where teams need more collective knowledge, are “protected,” and AI use is restricted in ways that allow for building individual and collective knowledge. Although this comes at the expense of short-term performance, it would benefit longer-term performance by setting the learning system on a more beneficial long-term path.
Another practical implication of our study is that using AI to improve individual performance in isolation is, in a broad range of contexts, likely insufficient to improve overall performance. Without consideration of the collective level, AI might improve only the individual contribution to performance while interfering with coordination between individuals, often in unpredictable ways. This makes it more difficult to track and understand how role structures change over time, inadvertently creating higher-level misalignments. To leverage new configurations of people and AI, investments must also be directed toward systemic integration rather than merely providing “tools” for individual workers.
Given that in the cases we investigated AI generally improves peak performance in the short run, an important lever to optimize performance is finding ways to prevent the declines that follow in the longer run. Organizations need to actively monitor and strategically manage their learning system’s status and dynamically react to overspecialization or other warning signals of discoordination. Might a sudden adaptation like disbanding and reassembling teams—and the corresponding changes in collective and individual knowledge—help to reset the system back to a state with higher performance? How often and following what developments are these adjustments necessary?
Limitations and future research
While our results are robust to a wide range of parameters (see the Monte Carlo analysis in Appendix 2), our model cannot consider all variables that have been examined in AI research. For instance, we do not focus on
Another promising area for future research could consider the social attributions connected to AI use, as these could also affect learning and expertise. Studies on algorithmic management indicate that teams supervised by algorithms are viewed as less capable and that these perceptions are negatively related to the budget that is allocated to the teams (Schweitzer and De Cremer, 2024).
Other intriguing avenues for future research are the strategic considerations in the design and management of these learning systems. In our model, AI learning is affected directly by task completion and indirectly by the particular state of learning at the individual and collective levels. We do not consider changes or tweaks to the AI that might be made from within or outside the group (e.g. by a central unit in charge of AI development). Future research could consider human interventions that may alter the AI-Human Learning System. Such interventions might include task changes that reset parts of the learning system in a way that avoids the longer-term performance decline. As certain task changes require the team to build new knowledge and co-specialize in different ways, the pitfalls of overspecialization and disrupted role structures can be managed. An exciting extension of our work could also be the consideration of tasks that require other learning types and innovation activities, such as exploitative and explorative search (March, 1991).
Finally, any model is necessarily a simplification of reality and is, therefore, incomplete. For example, to maintain focus and readability, we have not considered the effects of knowledge accumulation on other aspects of task completion, such as improved quality (although this is often modeled similarly; see the literature review in Anderson and Chandrasekaran, 2024). Such simplifications nevertheless produce complex dynamic results that are nearly impossible to predict in advance. That said, the computational model and simulations have revealed several insights that contribute to theory and practice and may further guide future research on AI-Human Learning Systems.
Conclusions
AI-enabled technologies are rapidly evolving. It seems likely that AI will increasingly be integrated organization-wide and become more interdependent with the work and knowledge of human knowledge workers and teams. Recent trends such as “AI agents” and “multi-agent AI systems” also point to these technologies becoming more autonomous, with more leeway to “act” alongside human workers. If true, a systems view that considers AI as embedded in organizational work is even more important. With AI becoming more autonomous, carefully designing AI-Human Learning Systems is increasingly important for leveraging human knowledge and effectively integrating AI into organizations.
