Introduction
Big data has frequently been conceptualized as a resource mobilized in order to claim people's attention, which is then redirected towards advertisements (Birch et al., 2021; Goldhaber, 1997; Hwang, 2020). This claiming feeds a bargain where attention is given in exchange for rewards that are embedded in a platform's design, and personalized through data analysis (Zuboff, 2019). This is achieved using techniques pioneered in the behaviourist sciences (Alter, 2017), mass-marketed by the gambling industry (Schüll, 2012), and perfected by big technology companies (Wu et al., 2021). Seeing new generative artificial intelligence (AI) products and services in this light (e.g. O’Reilly et al., 2024), however, runs the risk of overlooking a more fundamental change afoot in the monetization of machine-human relationships and the competitive dynamics that shape the industry. This, I argue, requires a new and deeper view of the claims being made of users in a changing techno-commercial landscape, which goes beyond attention and sets its sights on cognition.
In this commercial paradigm, consumers can harness functional capacities of big data and machine learning only if they accept that these capacities are computed outside the self; it is a kind of cognition that cannot be owned by its users, but can be unlocked and recalled through ongoing consumption. Its value does not arise from its ability to mobilize attention, but from its necessity in replicating cognitive processes. As such, these technologies make a permanent claim on our cognition, who we are, and what we can do. Through this claim, Big Tech is aiming to surpass the behaviourist roots of the attention economy through processes that I propose can be unpacked by using the concept of ‘cognitive lock-ins’.
From attention to cognition
Attention can be conceptualized as the focused perception of some particularity (Crary, 2001). It follows that we are conscious of the things that we attend to, and unconscious of the things that we are inattentive to (Wu, 2018). Attention thus implies a general orientation of a person's senses towards something, without specifying the nature of the relationship between the person and the thing their attention is focused on. We immediately see why attention is such a useful concept for animating a digital economy, because it can be operationalized through the capture of behavioural metrics such as clicks and likes. Grounded in a psychological approach known as ‘behaviourism’, this operationalization reduces human nature to behaviour caused by environmental factors (Hock, 1998). Attending to a website can in this view be conditioned with the correct schedule of rewards (Spicer et al., 2022), and has inspired technologists to design captivating algorithms (Seaver, 2022) that ‘bypass cognition’ altogether (McStay, 2016: 4). The particularities of how and what we think, which belong to the realm of cognition (i.e. the mental structures and processes we activate when making sense of the world; DiMaggio, 1997), are much harder to elicit and capture.
Digital economy scholars have been grappling with how to study this commercial environment in ways that recognize the importance of these behaviourist techniques, without accepting the reductionist view of human nature that underpins them: we know, for example, that users can mobilize their metacognition to deduce a platform owner's intent to capture their attention, and to resist it by adjusting their behaviour accordingly (Odell, 2020). One approach, which I argue needs to be explored in greater depth, is to recognize AI as a ‘means of cognition’ used to capture and distribute ‘cognitive and perceptive tasks to machines, which would perform them in different, machinic ways, with potentially revolutionary effects on the mode of production’ (Dyer-Witheford et al., 2019: 31). Just as electricity has become indispensable in industrial production, Dyer-Witheford and colleagues show how commercial actors try to position AI as indispensable for producing ideas. Thus, when thinking about AI as a means of cognition, the test is not whether a student uses ChatGPT or Claude to write an essay, but whether they become unable to produce it without the support of a large language model (LLM).
As such, I argue that we should take a relational approach to machine-human configurations (e.g. Suchman, 2006), whilst taking into account the anti-competitive practices that shape these technologies (Hindman, 2018; Varoufakis, 2024). This would extend a rich tradition of sociologists, anthropologists, and science and technology studies (STS) scholars who have shown how our cognition goes beyond internal mental computations, to processes that are distributed across the goods and technologies we interact with (Douglas and Isherwood, 1979; Fiske and Taylor, 2013; Latour, 1994; Salomon, 1993). The implication of integrating proprietary tools into our modes of thinking is that distributed cognitive processes can be reconfigured in such a way that users over time become more dependent on the technology they use.
I call this development a ‘cognitive lock-in’. It is a perspective that shifts the focus away from the cognition that AI can compute without us, and towards the cognition that we become unable to do without it. This is because techno-power arises from the way technology changes us (Frischmann and Selinger, 2018), and we must attend to the computational dependencies to which we are potentially vulnerable.
Locking cognition to technology
To date, lock-ins have been understood as anti-competitive arrangements that companies construct to make it harder for consumers to switch to competing firms (Khan, 2017). In an attention economy regime, this means thwarting competition between platforms by ensuring that users attend to your ads over those of the competitors. Lock-ins include the development of closed ecosystems of complementary products, the personalization of services, and the implementation of addictive design features. Research on cognitive lock-ins specifically has shown how repeated interaction with a platform reduces users’ cognitive loads through familiarization, thereby boosting convenience and efficiencies relative to less-known competitors (e.g. Murray and Häubl, 2007; Sénécal et al., 2015): we keep using a platform not because it is better than the alternative, but because we know it.
I suggest that we deepen this work through a subtle shift in focus, away from relative convenience in the market (e.g. ChatGPT vs. Claude) and towards substantial dependence in relation to the self (e.g. LLMs vs. the capacity to self-compute). This is because, in these emerging techno-commercial configurations, the goal is less about locking in users’ attention (i.e. what they attend to and how), and more about locking in their very cognition (i.e. what they can think and how). I propose that cognitive lock-ins can be defined as arrangements reconfiguring cognition across users and technology in ways that make replication contingent on that specific technology. This is achieved through three interrelated practices: black-boxing, probabilistic-distanced computation, and access-based consumption.
Black-boxing
Black-boxing is the concealment of information or processes, and it animates the political economy of big data (Brevini and Pasquale, 2020). Like a one-sided mirror, platforms can see user behaviour in great detail, while the user cannot see the inner workings of the platform itself, nor of the AI being trained on their data (Zuboff, 2019). Black-boxing should be seen as a condition for intensifying the unequal automate/informate relationship between humans and machines (Zuboff, 1988). This means that the more an algorithmic arrangement knows about a user and the other texts it reads, the better it can automate computational schemas and processes as the user requests.
The concealment of the computational processes and decisions underpinning automation results in unequal learning opportunities between the algorithm and its users. In short, platforms’ abilities to engage in learning that results in lasting changes in capabilities they can control independently of continued access to any specific user are much greater than vice versa. For example: a hobby baker queries an LLM for a recipe each time they bake bread to circumvent the more time-intensive task of committing a recipe to memory. While the use of an LLM might improve the baker's efficacy in the short run, it has also fundamentally changed their practice. Recipes often black-box the underlying chemistry (thus potentially short-circuiting the learning that comes with experimentation), but still afford opportunities for intentional tweaks. LLMs, however, can change this recipe at any time, unprompted by the baker, thereby hampering the possibility of an ongoing reflective practice. In other words, developing a skill is complicated by the nature of LLMs’ probabilistic-distanced computation.
Probabilistic-distanced computation
Natural language processing, a component part of any LLM, is underpinned by probabilistic computing. This has been called stochastic parroting, referring to the probabilistic stitching together of words ‘without any reference to meaning’ (Bender et al., 2021: 617). Using LLMs to solve problems, such as writing an email to a friend, reframes the solution to the problem as probabilistic. The resulting cognitive processes are distinct from the ones users could engage in independently of LLM technology, because human cognition, by contrast, relies on meaning-making (Mohr et al., 2020).
Probabilistic-distanced computation means that a query is not computed by the hardware that the LLM is accessed with, but by self-augmenting models living in networks of large data centres. By changing ‘where’ computation takes place, one equally changes ‘how’ it is taking place, and ‘what’ potential outcomes are possible as a result (Amoore, 2020). For LLM usage this means developing dependencies on distributed cognitive processes that are characterized by an extreme need for computing power and data input (Hwang, 2018), making it improbable that the user could recreate that capability by themselves.
The consequences of this are profound. While engagement with any technology is likely to occur with a degree of ignorance over its inner workings, distanced computing fundamentally changes the relationship between humans and machines. For example, a violinist can master her profession through continued practice without knowing the details of how the vibration of the violin's strings results in sound waves. This gives humans a stable, or what Arendt (1998) calls ‘objective’, artefact to engage with. While popular commercial AI products like ChatGPT are presented as stable artefacts, they fundamentally are not (Suchman, 2023). The violin's internal properties are bounded by the object itself, but the internal properties of the LLM are dynamic because they are produced by evolving models that can be altered at a keystroke thousands of miles away. This matters because humans rely on the ‘stability and solidity’ (Arendt, 1998: 136) of the artefacts that they interact with – like violins and recipe books – in order to develop into agents who are independent of those very things. As Csikszentmihalyi and Halton highlight: ‘men and women make order in their selves by first creating and then interacting with the material world. The nature of that transaction will determine, to a great extent, the kind of person that emerges. […] The material objects we use […] constitute the framework of experience that gives order to our otherwise shapeless selves’ (1981: 16).
Access-based consumption
Like all economic goods, technologies are designed to be consumed (Csikszentmihalyi and Halton, 1981). Some things we purchase before consuming, like clothing or food; in these cases, the buyer purchases control over the thing being consumed. Other things we consume without controlling or owning, like our social media account, the music we stream, and LLMs (Mardon et al., 2023): we simply access them in the moment of consumption (Bardhi and Eckhardt, 2012). For firms, this requires maintaining legal and technical control over the data and software produced for and through the platform to ensure that users cannot feasibly consume their services or substitute services without going through the platform itself (Pistor, 2019).
A key feature of LLMs is that they increase the control that firms hold over the access-based consumption they make possible. Besides the black-boxing and distanced computing aspects discussed above, this is revealed particularly by the affordances of their interfaces (Bucher and Helmond, 2018), which explicitly encourage the outsourcing of cognitive tasks such as judgment, reasoning, memory recall, problem solving, and choice (Reisberg, 2013). Examples include the incorporation of past user behaviour into future algorithmic responses in order to reduce the necessity for users to engage in synthesis and memory recall (OpenAI, 2024), and the production of linguistic outputs that imply judgment and reasoning by emulating subjectivity (Magee et al., 2023). This kind of access-based consumption might thus be able to move away from ad-based business models because continued engagement is not merely about mediating relationships between different groups in order to open up multisided markets (Nieborg and Poell, 2018), but about establishing computational dependencies.
Conclusion
History is awash with warnings of technology's impact on cognition, from Socrates’ assertion in Plato's Phaedrus that writing would erode memory onwards.
The trouble deepens when commercial AI is developed in ways that predispose users to intensify cognitive dependencies. This should give us pause for thought because it points to an emerging economic system where technology companies’ key positional good is not their chokehold on advertisers’ continued access to users, but users’ continued access to their distributed cognition. The quiet integration of Copilot into Microsoft 365 portends an AI creep where users may struggle to control the extent to which they come to rely on these tools. Such dependencies may also exacerbate existing socio-digital inequalities, that is, ‘the systematic differences in the ability and opportunity for people to beneficially use (or decide not to use)’ technology (Helsper, 2021: 28). If left unfettered, cognitive lock-ins could affect people's personal and economic relationships to the internet, as well as to themselves.
