Abstract
It is a time of unprecedented profit and power for information technology companies. The so-called GAMAM (Google, Amazon, Meta, 1 Apple, Microsoft) is made up of some of the most consistently profitable companies in the world. The end of 2024 brought them record profits, including net incomes of $94.2 billion for Alphabet (Google's parent company), $93.74 billion for Apple, $88.14 billion for Microsoft, $55.5 billion for Meta, and $49.8 billion for Amazon. 2 Amazon saw staggering year-over-year growth of 148 percent, while AI-driven ad technologies helped turn 2024 into Meta's most successful year yet. 3 Apple, Microsoft, Alphabet, and Amazon are consistently among the five largest companies in the world by market capitalization. 4 Technology companies also wield enormous power through the massive amounts of intellectual property they hold. Apple's intangible assets, including intellectual property such as patents, software copyrights, know-how, and trademarks, have been estimated at roughly $1.87 trillion, making up most of the company's value. 5 What's more, studies of market power in this sector have revealed that the marketplace for digital services is much more stable than previously thought. Not only is it rare for a site to enter or leave the top ten in terms of users, but the overall distribution of web traffic is also relatively stable as the concentration in audiences holds steady. 6 Technology companies' grip on power, in short, is firm and stable.
Such market power would not be possible without the critical mass of users that engage with their platforms. The number of users across all social media platforms as of April 2024 was 5.07 billion, over 62 percent of the world's population. 7 The cumulative number of daily Meta product users alone reached 3.24 billion in the first quarter of 2024, 8 almost half of the world's population. In order to maintain their dominant market position, digital platforms and services must not only amass users but also sustain their attention. Their power and their profitability are proportional to how long they are able to keep us, the users, engaged and paying attention to their platforms. So far, they have done so successfully. The average person spends upward of six hours a day online. Over a third of that time (roughly 143 minutes per day) is spent on social media. According to DataReportal, worldwide users spend a collective 720 billion minutes per day on social platforms, or 500 million years of collective human time per year. 9 Platforms want our attention, and we are giving it to them.
Existing scholarship refers to this current landscape of digitality as the “attention economy,” 10 as attention is the source of value that defines the era of digitality and platform power. Information technology has long profited from our attention, 11 but what has resulted from the digital revolution and the rise of the platform economy is an “unproductive arms race” for our attention. 12 Importantly, the problems that arise from the attention economy are not a matter of individual lack of self-control or addiction. 13 They are matters of technological and market design. For this reason, a new approach to regulation that has revitalized antitrust, referred to as the “neo-Brandeisian” approach, has caused regulatory bodies such as the Federal Trade Commission to turn their attention to powerful companies in the digital economy. The approach is named after Justice Louis Brandeis, who famously asserted the mutual exclusivity of democracy and concentrated wealth in the hands of a few. 14 The approach emphasizes the conjunction of market and political power, and challenges the consumer welfare standard, with its focus on the price mechanism, along with the assumed equivalence of firm size and efficiency, in favor of an approach that centers the competitive process and market structure.
This article explores avenues to address the attention economy in an era of revitalized antitrust from a normative perspective, making two contributions. First, it defends user autonomy as the standard for the regulations of the Bureau of Consumer Protection of the Federal Trade Commission, and examines what avenues are consistent with such a standard; and second, it argues that facilitating competition must not be the orienting goal for regulating markets for attention that threaten user autonomy. Legal scholars have praised the neo-Brandeisian approach to antitrust as having legitimate intellectual foundations in the Law and Political Economy movement, and have highlighted its openness to a wider range of scholarship beyond economic and quantitative analysis that allows it to better advance the goal of competition. 15
To be sure, the movement has also been accused of “turning antitrust law into a Swiss army knife of social welfare optimization.” 16
This article is not a direct criticism of the neo-Brandeisian approach, but a call for a particular orientation for antitrust regulation in the digital era. While the power of technology and platform companies in this “new Gilded Age” must be addressed if we are to reclaim technology as a tool that serves our goals rather than a corrupting force on them, competition as such must not be the ultimate goal of regulating digital markets. In other words, while there is a role for antitrust in addressing the power of dominant platforms, that role must be oriented toward protecting user autonomy rather than toward competition for its own sake.
The article first argues that user autonomy must be the normative standard for regulating digital services and their power vis-à-vis users. It considers the relationship between autonomy and attention in the digital era and then examines the harm to user autonomy that results from the structure of digital markets, and the data-mining and design practices of digital services that operationalize the commodification of attention as a product of value. Next, we consider the possibilities available to antitrust law in regulating the attention economy, namely, the abandonment of the consumer welfare standard measured in price, the conceptualization of markets for attention, and the establishment of ex ante limits on corporate strategies that seek to manipulate the consumer, referred to as “dark patterns” (including data mining and “programmatic advertising,” also known as “behavioral advertising”). The section concludes by comparing ex ante limits on autonomy-reducing markets for attention in the United States to those in the European Union, in particular the statutory language of the Digital Services Act, Data Act, and AI Act, highlighting the value of forward-looking legislation that can augment the ability of administrative law to foster user autonomy in its regulating efforts. Lastly, the article argues that while there is a potentially robust role for antitrust law in mitigating the power of technology companies, enhancing competition as such cannot address the harms to user autonomy that stem from such power. It first demonstrates the insufficiency of regulating attention markets through heightened competition between firms, and second, it argues that fostering more competitive markets for attention may in fact exacerbate harms to user autonomy.
Autonomy and Attention Markets
Let us first consider the relationship between autonomy and the economics of attention in the digital era. This article adopts autonomy as the central normative concern for regulating digital services, as fostering individual and collective self-determination is a necessary responsibility of law in a liberal democratic order. There exists a robust scholarship on the relationship between autonomy and private law, and contracts in particular. Legal scholars have argued that autonomy is the appropriate normative standard for contract law, and private law more broadly. 17 To this end, some scholars have argued that the terms-of-service agreements between users and owners of digital platforms do not even fit the current legal paradigm of a “contract,” given that meaningful consent to terms of interaction with platforms is absent. 18 In contrast to this literature, this article considers user autonomy as it relates to the regulation of markets and competition, specifically markets for consumer attention. The argument situates itself, then, among the burgeoning scholarship on the role of autonomy in the regulation of digital technologies. 19
Attention, Autonomy, and Deception
The structure of the digital economy requires companies to compete for engagement, figuring out new and ever more invasive ways to control our behavior. To understand why competition as such could exacerbate the harms of the attention economy, we must first understand what we mean by autonomy. Autonomy is understood here as a doctrine of self-determination. The ability to author our lives as individuals is integral to our self-realization. An individual lives under the conditions of autonomy if she is able to determine her own conduct, to act on the basis of her values and reasons, and her actions are attributable to herself as an agent. 20 Joseph Raz famously articulated that “a person who forces another to act in a certain way, and therefore one who coerces another, makes him act against his will. He subjects the will of another to his own and thereby invades that person's autonomy.” 21 Autonomy-reducing interferences therefore include those that entail the manipulation of decision-making. 22 The attention economy interferes with the ability of users to be fully autonomous over our life activity; 23 it is a purposeful manipulation of our will for the achievement of market-oriented goals. It amounts to a theft of our attention, 24 a theft of our ability to choose what we do with our time.
Attention is both individual and interpersonal. It is as much about our ability to meaningfully navigate the world through conscious activities based on our will, as it is about meaningful engagement with others in our close personal networks and our communities. Scholars of technology and philosophy have connected autonomy to attention in the era of digitality. While others have used the notion of consumer sovereignty as the orienting norm, especially as it relates to antitrust and consumer protection law, 25 autonomy is broader in its normative implications, for as a standard it requires more than the mere capacity for consumer choice. As articulated by Kaisa Kärki, the standards of “autonomy of attention” are met when the agent “through her second-order desires” effectively interferes “with her non-automatic decision-making on what she currently pays attention to.” 26 Said differently, an agent is autonomous over her activity when she is able to make conscious decisions based on self-reflection about her long-term plans, values, or second-order desires. It follows that autonomy can be undermined by external control and relations of domination that sever the connection between second-order desires and where one is currently giving their attention. 27
Legal scholars have criticized the manipulative capabilities of digital technologies in relation to the decision-making capacities of users. Susser et al. argue that “subverting” one's “decision-making power undermines his or her autonomy,” a process that information technologies make easier, as well as making its effects “potentially more deeply debilitating.” 28 When digital technologies manipulate the user, they are subverting their “capacity for self-government”; they are depriving them of “authorship over their actions.” 29 Rather than coerce the user in a manner that is “blunt and forthright,” digital services manipulate the user into a desired behavior in a manner that is “subtle and sneaky,” 30 or in other words, deceptive. 31 Along these lines, scholars of regulation have argued that digital technologies are capable of a form of exploitation that violates our autonomy, an exploitation that regulators ought to protect against. Martin Brenncke argues that there is a harm to users in commercial practices such as online choice architectures that take advantage of consumer biases. 32 This harm amounts to the exploitation of the user, an exploitation that necessarily threatens their autonomy. Furthermore, as discussed below, he argues that the autonomy theory of exploitation must serve as the normative framework for determining the legal wrongness of digital services’ exploitation of users, and that addressing such harm to autonomy is consistent with existing EU consumer law.
The attention economy threatens this autonomy over our activity and its connection to our long-term plans that is fundamental to our humanity. It is a “concrete apparatus aimed at the reproduction of capital's capacity to impose its command over human activity.” 33 In the words of Byung-Chul Han, we are “controlled and programmed” by the “internal algorithmic life” of smartphones and digital technologies. 34 It is not we who use these technologies, he argues. These technologies use us. In this “hypnotic abdication of reason and will,” 35 our activity serves not to facilitate our own autonomy but the goals of those that design the platforms that mediate our relationship to the world. The sophistication and pervasiveness of technology today requires a careful examination of the legal structures that ought to protect our autonomy in a liberal democratic social and political order. Law's ability to facilitate individual and collective autonomy is imperative in light of the ubiquity of technologies that stand to impede our self-determination.
It is worth noting that the Federal Trade Commission uses the term “dark patterns” to describe design practices that trick or manipulate consumers into making choices they would not otherwise have made.
The Commodification of Attention in the Digital Era
The “attention economy” is a social economic organization in which the primary object of value sought and maximized is the engagement of individual users, en masse. It entails the monetization of human effort, activity, relationships, and emotions, on top of the production of goods and services that drive economic exchange. While trading in attention as such is not a new phenomenon, 39 the digital revolution has not only given those who trade in attention more tools to influence user behavior but also increased its ubiquity to practical inescapability. The attention economy today is thus a reorganization of the economy around digital platforms that require human beings to contribute our attention in an effort to digitize and monetize “value-creating human activities.” 40 It relocates the production of value to the engagement of the user. Scholars have worked to show the history of what Jonathan Beller calls the “attentional theory of value.” 41 While Beller centers the production of attentional value on moving pictures, especially online videos and cinema, this article takes a broader focus of the attentional production of value to our relationship to digitality and its services writ large.
The production of value in the attention economy is first a story of control. As Gilles Deleuze argued in 1992, “societies of control” are in the process of replacing “disciplinary societies.” 42 While the latter sought to enclose the individual, control uses technological innovations to mutate our relationship to production into a continuous, free-moving network in which “one is never finished with anything.” Control as a new sociotechnological form is “short-term and rapidly shifting, but at the same time continuous and unbounded.” 43 Users as producers of attentional value are no longer enclosed in a particular temporal or spatial relationship to production as in the days of factory discipline, but “in orbit, in a continuous network.” 44 The experience of life post–digital revolution is a dialectic of untethered individuality, on the one hand, and hyperconnectivity to devices and corporately mediated relations, on the other. In the words of Rogers Brubaker, hyperconnectivity is not a “thing” or a “force,” but an “environment, a terrain, an ecology of communication,” a physical infrastructure that is “culturally understood, socially organized,” and importantly for our purposes, politically and legally regulated. 45 While the “neoliberal self” of the twentieth century was “responsibilized” as an individual with ultimate control over their life, governed through their freedom to choose, the “post-neoliberal self” of the digital era is “produced through techno-social engineering, and through the direct governing of behavior. If the neoliberal self is self-steering, the post-neoliberal self is steered by algorithmic systems. If the neoliberal self is self-activated, self-reflexive, and entrepreneurial, the post-neoliberal self is conditioned to respond to increasingly pervasive and finely calibrated stimuli.” 46 In other words, we are no longer autonomous, but controlled.
Technology companies seek this control over our attention to maximize profit and maintain their dominant position in the market ecosystem of digitality. The more of our attention they control, the more likely they are able to reproduce their firms and maintain a stable market for their incumbency. The “market as field” theory put forth by Neil Fligstein entails a stable market as one in which the main players are able to “reproduce their firms.” 47 These incumbent firms dominate a particular market by creating stable relations with customers, other producers, and the government, thus ensuring their survival and their profit-maximization. Platforms in the attention economy are no different. Firms in technology markets seek a position of dominance and establish reliable patterns of user behavior for stability of their firm's position in the market. The “stars of the internet economy,” 48 the GAMAMs, have all achieved the goal of rising to dominance in their relevant market. In order to reach this dominant position, and to maintain it, they must work to ensure users keep engaging with their platforms for as long as possible or risk losing the privileges of power and profit that come from market dominance.
The attention economy is also a story of property. In order to profit from the monopolization and trade of consumer attention, platforms must first gather our data, and have a tool to process it. Property rights “define who is in control of the capitalist enterprise and who has rights to claim the surplus.” 49 In the attention economy, proprietary user data and algorithms for processing it into behavioral analytics define who has power in the market. Law itself has evolved to create more property or “property-like” protection for data and information processing tools. 50 In the digital economy, companies rely on patents and trade secrecy laws to protect the data they gather on users, and the algorithms constructed to process them. As Katharina Pistor shows, the digital revolution saw the rise in “guild-style practices, which they often defended successfully in court” to protect “their most valuable asset: data about us.” 51 Google was able to obtain a patent for its filing system PageRank, a filing system only unique in the fact that it was digital rather than analogue; on this patent, Google was able to “build an enormous database of ordinary Internet users that is matched only by close rivals such as Facebook or Amazon.” 52 Just as we do not own the products of our labor under wage capitalism, neither do we own the product of our activity online; it is owned by the company that is usurping our attention for market ends.
Markets of Attention
There are two ways by which platforms profit from the commodification of attention. The first is to gain capital in real time through programmatic advertising. Programmatic, or “behavioral,” advertising is the use of automated technology to buy and sell ad space on digital platforms to target specific users at specific times. The digital advertisement ecosystem is made up of “publishers” who sell attention “inventory” to buyers who pay to display a message or visual in an automated process referred to as “real-time bidding.” Buyers vary between companies looking to promote a product, marketing agencies working on behalf of such companies, and “agency trading desks,” specialized companies that navigate the programmatic advertising ecosystem. 53 For attention to be a tradable commodity, engagement online must be measurable such that it can be sold to advertisers. This means that “the broad range of expression that the Internet might otherwise enable has been limited to ways of connecting that are consistent with the financial needs of advertising.” 54 The attention economy has incentivized developers to structure platforms in such a way that attention can be easily measured, but it goes beyond mere product design. In the words of Tim Hwang, “We have been taught to interact with other people online by platforms built to buy and sell our attention. One wonders if that will constrain the social possibilities of the future.” 55 The attention economy has structured digitality and our interactions online not to improve user experience, but to maximize the extractable surplus value available to platform owners.
Second, for companies that do not rely on external programmatic advertising, attention is captured to increase their share of the relevant market, attract investors, and increase existing shareholder value. The economics of platform markets, as outlined by Lina Khan, incentivize companies to “pursue growth over profits, a strategy that investors have rewarded.” 56 Prior to implementing a tiered advertising subscription model, Netflix, for example, told shareholders in its first quarter report for 2024 that it would stop reporting its subscriber count to investors, instead shifting to metrics based on “engagement,” meaning time spent on the platform. Co-CEO Greg Peters stated that this change in reporting and measuring the company's value reflects “the key metrics” that “matter most to the business.” 57 These “engagement goals” 58 reflect the company's desire to monopolize user attention and maximize time spent on their platform to attract investors and increase the market value of their company's shares, even if they do not profit directly from programmatic advertising. Platform end users can therefore plausibly be characterized as “suppliers,” as Jonathan Baker argues. 59 They supply data through their online activity that is comprehensively monitored by the platforms, and those data are used to manipulate the user to supply further attention.
Both operational schemes require an immense amount of personal data of users, data about us that are proprietary to the companies. Such behavioral data are gathered for two main purposes. First, to apply the data back into the product or service to improve it. The early days of Google saw such reinvestment of behavioral data back into the development of its search engine to improve the quality of its service. 60 But it has a second, more insidious purpose. Consumer data are fed into computing processes to facilitate machine learning, algorithms that create “prediction products” that anticipate our actions. These predictions are traded in a marketplace that Shoshana Zuboff calls “behavioral futures markets.” 61 These markets consist of endless competition over our data to acquire “ever-more predictive sources of behavioral surplus,” with the end goal to manipulate that behavior into desired actions.
The data extracted from every online interaction are processed through advanced consumer analytics and sold, the purpose being to “modify real-time actions in the real world.” 62
After the data are gathered and our behavior algorithmically predicted, companies set out to modify our behavior. Using their proprietary information about us, they first call us to action with external triggers (notifications, pop-ups, advertisements), and work to create habits through integrated systems of variable rewards and investments. The three principal types of rewards used are rewards of the hunt (satisfying the need to acquire goods and supplies; think Amazon), rewards of the tribe (satisfying our need for interconnection and community; think social media), and rewards of the self (satisfying our desire to feel accomplished; think language learning or fitness applications). 63
Platforms are carefully designed such that we feel we are acquiring necessary goods and/or solidifying our social standing, and feel self-gratification. Platforms also use the psychological phenomenon of
It is worth noting that the attention economy stretches beyond social media to all digital platforms. While social networking sites like Instagram, Facebook, and X (formerly Twitter) are often held up as examples in critiques of the attention economy, the structure of our economy since the digital revolution requires essentially all organizations to compete for our attention. Social media platforms embody both approaches to the attention economy as they utilize programmatic advertising as well as user engagement levels to increase shareholder value. That being said, every application, website, or digital service engages in markets of attention. Importantly, companies must also work to conceal their true economic incentives from users, which in turn exacerbates the threat to our autonomy. For example, it may be obvious that the goal of a video streaming service such as Netflix is to have users watching for as long as possible; this has been the economic incentive for broadcast television stations since the mid-twentieth century. 65 However, there are other platforms and digital services whose economic incentives in the attention economy may in reality be incompatible with their purpose as stated to users.
In the recent class action lawsuit filed in California against Match Group Inc., the parent company of dating apps like Tinder and Hinge, the plaintiffs argue the company has engaged in deceptive trade practices, false advertising, and negligence in design. While the company claims to be a service for finding a romantic partner (as stated in its ads that its platforms are “designed to be deleted”), the platforms, according to the plaintiffs, have a “uniform defect” that they have “designed, developed, managed, operated, tested, produced,” a defect that they “knew, or by the exercise of reasonable care, should have known.” 66 The defect entails the causing of harms like “addiction, compulsive use, diminished self-esteem, perpetuation of insecurity, anxiety and/or depression” as well as other “physical or psychological injuries” to users. 67 It is clear from this case that the goals of platforms that commodify attention do not align with the goals of the user; they may in fact be in diametric opposition. They “deceive reasonable consumers” 68 into thinking that the platform is to serve one purpose, in this case to facilitate the finding of off-app relationships, rather than the actual purpose of the platform, to “sustain paying subscribers and keep them on app.” 69 In other words, to control and profit from the attention of the consumer.
Antitrust Regulation in the Digital Era
Antitrust law, established with the Sherman Act of 1890, sets out to promote economic fairness and competitiveness in interstate commerce, and includes a prohibition on trusts, monopolies, and cartels. It regulates relations between firms, as well as relations between firms and the users of their products and services. The relevant regulatory body in the United States is the Federal Trade Commission (FTC), which consists of three bureaus: the Bureau of Competition, the Bureau of Economics, and the Bureau of Consumer Protection. Founded in 1914, its stated mission is to protect the public from unfair or deceptive acts in the marketplace and promote fair competition. There are three elements to any antitrust offense: anticompetitive conduct; a “factual or likely increase in market power,” including monopoly power or a likelihood of obtaining monopoly power; and a causal connection between them. 70 The postwar era of antitrust under Justice Robert Jackson saw an impressive 1,375 complaints in 213 cases across forty industries, garnering political support as a “bulwark” against the antidemocratic ideologies of centralized economic power and fascism in the former Axis powers. 71 There remained broad political support for antitrust efforts through the mid-1960s as a point of agreement for both those skeptical of the political power of concentrated wealth, on the one hand, and neoclassical economists skeptical of monopolies for the health of the competitive market, on the other.
The Chicago school approach emerged within this context, part of a “broader revolution in antitrust law” that embraced consumer welfare as the “lodestar of antitrust” and price theory as the “proper methodology for analyzing competition.” 72 In the past few years, however, the neo-Brandeisian approach to antitrust has facilitated a more robust role for regulation, centering the critical relationship between market power and political power. 73 This article sets aside the political power of corporations and focuses on the power of technology corporations vis-à-vis the users. The rest of this section examines the possibilities of antitrust regulation in the digital era. It argues that revisiting the consumer welfare standard, a conceptualization of markets for attention, and the definition of new categories of corporate behavior are necessary steps in addressing the fundamental issue of the attention economy, namely, the harm to user autonomy.
Consumer Welfare with Nominally Free Services
The consumer welfare standard is attributed to Robert Bork and his influential work The Antitrust Paradox (1978), which argued that the maximization of consumer welfare is the sole legitimate goal of antitrust law.
The neo-Brandeisian approach has argued against the use of the consumer welfare standard in antitrust analysis. Here we situate their critique of the price mechanism within the attention economy, which is largely made up of “zero-price markets” (markets where products are nominally free to consumers). Some scholars have defended the consumer welfare standard on the grounds that harms in zero-price markets fall under the category of monopsony power over labor and inputs that consumer welfare can, in fact, capture. 80 However, as Khan has argued, the consumer welfare standard leaves open the ability for judges to disregard corporate behavior, including anticompetitive behavior and consumer harms to privacy, where monetary costs to consumers remain low or nonexistent. 81 Along these lines, antitrust scholars have begun to consider the true costs of nominally free services in which consumers pay with their attention. These “zero-price products,” John Newman argues, are not free to consumers. Such services are paid for, either with personal information, attentional activity, or both. For-profit firms, as Newman shows, offer products and services at nominally free prices precisely because “they have determined that doing so is profitable,” and customers engage in the trade-off between giving time and information for those products and services, thereby meeting the statutory essence of “trade” and “commerce” as outlined in the Sherman and Clayton Acts. 82 Thus, taking zero-price markets seriously creates opportunities for addressing the “creation, enhancement, or abuse of market power,” the precise “evils that antitrust laws are intended to remedy.” 83 Rather than the standard of consumer welfare, regulatory agencies must address the violations of user autonomy that extend beyond the price mechanism. Work has been done to this end in EU consumer law, as will be discussed in further depth below. 
Online choice architectures have a significant ability to exploit consumers in digital environments. 84 Such exploitation that threatens user autonomy ought to be the central focus of regulators, especially in markets for attention.
Defining Relevant Markets: SSNIP to A-SSNIP?
In traditional antitrust analysis, a market is a collection of products and geographic locations that are used to make inferences about market power and the anticompetitive effects of business conduct. 85
Once defined, market participants can be identified and market shares computed. Courts and regulatory agencies such as the FTC apply the “hypothetical monopolist” test, which helps determine whether it would be profitable to exercise market power in a given market, keeping in mind the incentive of buyers to respond by shifting to other products and locations. The “small but significant and non-transitory increase in price” (SSNIP) test asks how many customers would have to switch to other goods to render such a price increase unprofitable. Gal and Rubinfeld demonstrate the limits of the SSNIP test where firms offer free goods. 86
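The arithmetic behind the hypothetical monopolist test can be made concrete with standard “critical loss” analysis. The following is a textbook sketch added for illustration; the figures are hypothetical and not drawn from the cases discussed here.

```latex
% Let the hypothetical monopolist impose a SSNIP of size $t$ (e.g., $t = 0.05$
% for a 5 percent increase) on a product with pre-increase margin $m = (P-C)/P$.
% Profit before the increase is $\pi_0 = mPQ$; after it, with a fraction $L$ of
% sales lost to substitutes, profit is $\pi_1 = (m + t)\,P\,Q\,(1 - L)$.
% The SSNIP is unprofitable, and the candidate market too narrow, when
% $\pi_1 < \pi_0$, i.e., when the actual loss exceeds the critical loss:
\[
  L \;>\; \frac{t}{t + m}.
\]
% Example: with $t = 0.05$ and $m = 0.40$, the critical loss is
% $0.05 / 0.45 \approx 11\%$; if more than 11 percent of customers would
% switch away, the market definition must be broadened.
```

Where the nominal price is zero, there is no price on which to base $t$ at all, which is precisely the limitation Gal and Rubinfeld identify.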
As shown in the findings of a federal court in
Legal scholars have argued that the standard approach to market definition is often appropriate in evaluating issues that arise with multisided platforms. Social media and Internet search could count as “product markets for evaluating alleged harm to users even though they do not charge participants directly” as they “collect data about participant behavior and arguably lessen the quality of the user experience by inserting ads or promoted search results.” 87 It is worth noting that most consumers using the Internet engage in multi-homing, meaning they divide their attention across various platforms. That being said, consumers have limited attention. This means that despite multi-homing, there is an “intense competition for viewers” that involves all digital services. 88 Therefore, all digital services competing for attention are effectively in the same market.
To enhance the test's specificity to the economics of attention, Tim Wu has suggested the creation of a new test to address market dynamics when firms offer free products that are actually competing for attention. 89
He offers a conceptualization of
Besides the A-SSNIP test, Wu suggests that users can be surveyed to find out whether a novel or competing digital service has made them use a similar, previous service less (e.g., Instagram taking time away from Facebook). Additionally, a regulatory agency may examine companies' internal documents to see whom they consider competitors for attention, or which firms raise “attentional prices” without competitor constraint. 90 An example of this is the career networking site LinkedIn.com, which sees itself as competing both with social networks like Facebook and with job websites like Monster.com, CareerBuilder.com, and Indeed.com. 91 These suggestions reinforce the idea that regulatory agencies will likely have to take a different approach to understanding what counts as competition or substitution in the attention economy. Platforms that appear to provide different services are in fact competing in the same attention market. While identifying the relevant market is an important step in understanding the dynamics of competition in the economics of attention, identification alone is insufficient to protect user autonomy. There must be, in addition, efforts to limit the strategic design practices of digital platforms and services that threaten user autonomy.
Limiting Strategic Design Practices: Dark Patterns
The Federal Trade Commission Act imbues the FTC with the power to ban corporate strategies and practices that interfere with consumer autonomy. As mentioned, Section 5(a) of the act gives the FTC the power to prevent “persons, partnerships, or corporations” from using “unfair or deceptive acts or practices.” 92 Referred to as “dark patterns,” these highly effective practices are designed to “trick or manipulate users into making choices they would not otherwise have made and that may cause harm,” often in order to “get consumers to part with their money or data.” 93 The FTC acknowledges that dark patterns have only grown in scale and sophistication in the digital economy, presenting a greater threat to consumers. Part of their rise in sophistication is due to pervasive data collection techniques that allow companies in the digital economy to gather massive amounts of information about consumers’ identities and online behavior, allowing them to target individual users more effectively, as well as to the ability to experiment with digital dark patterns more easily and at a larger scale than traditional in-person retailers can. Artificial intelligence algorithms in particular are increasingly used to manipulate consumer decisions. 94
The FTC has identified four design elements that serve as examples of dark patterns. 95 First are design elements that induce false beliefs, such as ads deceptively formatted to look like news articles to entice users to buy the advertised products. Second are those that hide or delay disclosure of material information, such as burying key limitations of the product or service in dense terms-of-service documents that consumers do not see before purchasing. Third are those that lead to unauthorized charges, effectively tricking someone into paying for goods or services they did not want or intend to buy, whether as a single charge or a recurring one. This issue has been particularly pervasive in games for children involving in-app purchases. Fourth are those that obscure or subvert privacy choices. Most digital services do not allow consumers to definitively reject data collection or use. Rather, they repeatedly prompt customers to select settings they wish to avoid, present confusing toggle settings that lead consumers to make unintended privacy choices, purposefully obscure consumers’ privacy choices, make the option for limiting data collection more difficult to see on the screen, or simply set the defaults to maximize data collection and sharing allowances.
The concept of dark patterns may be a useful existing regulatory tool to help legally classify practices in the commodification of attention that seek to maximize user engagement against the autonomy of the user. However, while “deceptive” practices are defined as a “representation, omission, or practice that is likely to mislead the consumer,” to meet the standard of “unfair,” a practice must cause or “be likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.”
Regulatory agencies should consider programmatic advertising as qualifying for such categorization. As discussed above, programmatic, or “behavioral,” advertising involves an automated decision in which user data are used to purchase ad inventory within fractions of a second. These advertisements come in the form of search advertising, display advertising, and online video advertising, and they appear on both desktop and mobile devices. 98 Consumer data are particularly important for display and video advertising, as those systems cannot rely on direct user input the way that search advertising does. Programmatic advertising makes up a vital part of how attention is commodified in the digital era. Importantly, it is distinct from market research. Rather than sampling audiences to determine average income or voting preferences, it amounts to surveillance of individual users with the purpose of manipulating their behavior. For this reason, critics of such invasive tactics to manipulate individual consumers on a large scale have called for the “wholesale solution” of banning programmatic advertising to fulfill the government's duty of “protecting the public.” 99
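The fraction-of-a-second decision described above can be sketched as a toy second-price auction. This is an illustrative simplification only; real programmatic exchanges involve far more parties, signals, and pricing rules, and every name and number below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float  # bid in dollars per thousand impressions (CPM)

def run_auction(user_profile: dict, bidders: dict) -> tuple:
    """Pick a winner for a single ad impression via a second-price auction.

    Each bidder is a function mapping the user's behavioral profile to a
    bid; the highest bidder wins but pays the second-highest bid (tie
    handling and floor prices are omitted for simplicity).
    """
    bids = sorted(
        (Bid(name, strategy(user_profile)) for name, strategy in bidders.items()),
        key=lambda bid: bid.amount,
        reverse=True,
    )
    price = bids[1].amount if len(bids) > 1 else bids[0].amount
    return bids[0].advertiser, price

# Behavioral targeting: the bid rises with inferred purchase intent,
# which is exactly where collected user data enter the auction.
profile = {"recent_searches": ["running shoes"], "age_bracket": "25-34"}
bidders = {
    "shoe_brand": lambda p: 4.0 if "running shoes" in p["recent_searches"] else 0.5,
    "generic_brand": lambda p: 1.0,
}
print(run_auction(profile, bidders))  # ('shoe_brand', 1.0)
```

The structural point survives the simplification: the more behavioral data available to bidders, the more precisely individual attention can be priced and targeted, which is what distinguishes this surveillance-driven mechanism from audience sampling.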
These ex ante limits on corporate behavior arguably meet the standard threshold for antitrust action. As outlined by Ronald Cass, antitrust rules should condemn conduct that is “recognized as wrong,” should be readily understandable to those who must obey them and to those who must apply them, and should be interpreted in ways that minimize the costs of their application, including error costs. 100 As to the first condition, the growing number of class action lawsuits in the United States, as well as statutory limits in place in the European Union relating directly to the precise design elements meant to usurp user attention under the guise of another service, supports the consensus of wrongdoing on the part of technology companies. 101 Second, these practices are manifestly intelligible to the companies that would be responsible for applying the rule, as they are intentionally worked into legal agreements and the algorithmic design of the platforms. Such ex ante limits on corporate behavior that conceals an intention to control user attention behind the stated purpose of a digital service are important avenues for antitrust regulation to protect user autonomy in the era of digitality.
Ex Ante Statutory Limits
As outlined above, regulators must place ex ante limits on the kind of data technology companies may collect, and on how platforms and digital services are structured as they relate to user attention and autonomy. The FTC has dealt with some of these design elements in recent years. Two rules regarding consumer protection in relation to deceptive online practices are relevant for our purposes. First is the Restore Online Shoppers’ Confidence Act (ROSCA), which prohibits “certain unfair and deceptive Internet sales practices,” specifically barring any “post-transaction” third-party seller from charging or attempting to charge a consumer's financial account after the initial transaction without the consumer's express consent. 102 The declaration of policy cites the importance of the Internet in commerce today, and notes that “consumers’ expectations” are defied when charges appear without express consent. Second is the Controlling the Assault of Non-Solicited Pornography and Marketing (CAN-SPAM) Act. 103 Similar to the Telemarketing Sales Rule that regulates phone advertising, 104 the CAN-SPAM Act aims to protect consumers from unwanted commercial electronic mail. It prohibits the use of materially false or “deceptive” header information in the subject line of emails, requires that the message be identified as an advertisement, and requires the sender to clearly explain how the recipient can “opt out” of future marketing emails from the sender. The FTC ought to consider the nature of express consent and autonomy not only in relation to material financial transactions and unwanted email but also with regard to markets of user attention across the Internet.
Regarding data mining, the United States currently has no federal policy on what data technology firms may collect from users and how they may use those data. The Children's Online Privacy Protection Act (COPPA) provides certain data protections for children under the age of thirteen, but does not apply to all users across the Internet. 105 Specifically, it prohibits “unfair or deceptive acts or practices in connection with the collection, use, and/or disclosure of personal information from and about children on the Internet.” Websites that are “directed to children” must not collect or transfer personal information about the user without “verifiable parental consent.” In a patchwork policy approach, certain states have taken up the mantle of consumer privacy protection. 106 California, for example, has enacted legislation to regulate certain forms of dark patterns in relation to data mining. The California Consumer Privacy Act (CCPA) requires transparency about what personal information a business collects and how it is used and shared, establishes consumers’ right to delete personal information collected from them, and grants the right to opt out of the sale or sharing of personal information. 107 Relevant for our purposes, the act prohibits the use of “dark patterns” in presenting the choice to opt out, defining a dark pattern as a “user interface designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision-making, or choice, as further defined by regulation.” While such states recognize the harms to autonomy that dark patterns pose to users of digital technologies, federal regulation of the deception involved in attention markets is required to meet the standards of user autonomy across all digital platforms, and for users of all ages.
Looking to the European Union may provide a framework for broader statutory restraints on technology corporations to empower agencies in the United States in their regulatory enforcement. The Digital Services Act (DSA), Data Act, and Artificial Intelligence Act (AI Act) are particularly salient to concerns around the power of technology companies, their role in the attention economy, and the relationship between digitality and human autonomy. The DSA provides EU-wide regulation for intermediaries and platforms including marketplaces, social networks, content-sharing platforms, app stores, and online travel and accommodation platforms. It seeks to address harmful activities and disinformation, as well as protect fundamental rights of users and foster innovation and competitiveness by bolstering smaller platforms. Its stipulations include the ability to contest the removal of content by platforms to protect the freedom of speech, and stronger obligations for “very large online platforms” or VLOPs to mitigate risks for users’ rights, with special protections for minors. 108 It requires platforms to guarantee transparency and control of the content presented to users. It also requires the ability to opt out of personalized recommendations and personalized content in regard to both feeds and advertisements.
Moreover, Article 34 requires that VLOPs “diligently identify, analyze and assess” on an annual basis any “systemic risks” that stem from the “design or functioning of their services and its related systems, including algorithmic systems, or from the use made of their services.” It includes “any actual or foreseeable negative effects,” including “serious negative consequences to the person's physical and mental well-being.” 109 Clause 81 states that a digital service must consider its effect on the “exercise of fundamental rights,” including but not limited to “human dignity, freedom of expression and of information, including media freedom and pluralism, the right to private life, data protection,” and the “right to non-discrimination.” The clause also explicitly requires VLOPs to consider the design and functioning of their services in reference to their ability to “impair minors’ health,” and/or their “physical, mental and moral development,” including how the design of the interface may “intentionally or unintentionally exploit the weakness and inexperience of minors or which may cause addictive behavior.” While the act stipulates that online advertising does not prima facie constitute a “dark pattern” that interferes with user autonomy, it prohibits “manipulative techniques” such as profiling used in targeted advertising based on consumer data. 110 The act is therefore a monumental step in assigning responsibility to platforms for the harmful effects not only of online content, but of the design of the platform and its underlying algorithms on the physical and mental well-being of users of all ages.
Furthermore, Article 6 of the Data Act prohibits third parties from making the exercise of choices or rights “unduly difficult, including by offering choices to the user in a non-neutral manner, or by coercing, deceiving or manipulating the user, or by subverting or impairing the autonomy, decision-making or choices of the user,” including through the particular design of a “user digital interface.” 111 The AI Act is even more explicit when it comes to the harms to autonomy that digital technologies pose. The preamble acknowledges the “many beneficial uses of AI,” but quickly points to its capacity to “provide novel and powerful tools for manipulative, exploitative and social control practices” that violate the “Union values of respect for human dignity, freedom, equality, democracy, and the rule of law and fundamental rights.” 112 Paragraph 29 highlights the “AI-enabled manipulative techniques” that can be used to “persuade persons to engage in unwanted behaviors, or to deceive them by nudging them into decisions in a way that subverts and impairs their autonomy, decision-making and free choices” in ways that “people are not consciously aware of,” or when they are aware of them, can “still be deceived or are not able to control or resist them.” 113
The combined language of the DSA, Data Act, and AI Act has led scholars to argue that autonomy has been transformed in EU law from a metaprinciple of the Charter of Fundamental Rights into a regulatory posture specific to digital technologies. 114 Maximilian Gartner, among them, argues that although these legislative instruments offer no legal definition of autonomy, their focus on privacy reveals a policy orientation around protecting user autonomy. In his analysis, he distinguishes between informational and mental privacy. Informational privacy is the “freedom from informational interferences and intrusions,” while mental privacy is the “freedom from psychological interferences and intrusions,” including measures that exploit psychological features. 115 The connection between privacy and autonomy is manifest in that mental privacy protects autonomy by “ensuring the integrity of an individual's decision-making and deliberation process.” 116 Gartner shows that mental privacy considerations have become increasingly explicit in EU law, revealing an acknowledgment of digital services’ “potential to interfere with an individual's non-physical autonomy” 117 and a grant of authority to regulatory agencies to protect consumers from such harm.
To this end, regulatory authorities across Europe have taken measures to enforce policies that set out to protect user autonomy, including their privacy. Such measures have resulted in substantial fines against Google and Facebook for their use of dark patterns. France's national data protection authority (“CNIL”) has fined the companies €150 million and €60 million, respectively, for providing unclear and confusing instructions to users on how to avoid data tracking, practices that “affect the freedom of the website users’ consent,” as it “influences users’ behavior,” according to the
The argument here is that there must be forward-looking law, both administrative and statutory, to limit the data-mining abilities of technology companies, restrict if not prohibit programmatic advertising, and stipulate the respect for user autonomy that can be used in the future to rein in novel practices by attention-seeking digital services. Turning back to the United States, it is worth noting a relevant yet failed effort by Congress to do so, namely, the bipartisan Deceptive Experiences to Online Users Reduction (DETOUR) Act. The bill's primary function was to prohibit the “use of exploitative and deceptive practices by large online operators” and to promote “consumer welfare in the use of behavioral research by such providers.” 120 Specifically, the prohibited conduct for any large online operator in Section 3(a) included “to design, modify, or manipulate a user interface with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data,” as well as online services “directed to a child” designed with the “purpose or substantial effect of causing, increasing, or encouraging compulsive usage, inclusive of video auto-play functions initiated without the consent of a user.” It also gave explicit authority to the FTC to enforce such measures. While the bill was introduced in 2021 and never made it to a vote, it demonstrates an acknowledgment on the part of Congress that there is a substantial gap in the statutory limitations set on digital services and on their capacity to exploit and deceive users for their own ends.
Competition in the Attention Economy
The debate over whether and at what levels competition is desirable is anything but novel. As Gerald Berk shows in his seminal historical work, Brandeis actively condemned the House Democrats of his time for their “blind faith in competitive markets.” 121 Brandeis, instead of “free and unrestricted competition,” called for “regulated competition,” 122 regulation that would “redirect business administration from deceit and opportunism to engineering, invention, and incremental improvement.” 123 The argument here is that while using antitrust regulation to foster competitive markets may address the market power of technology firms in relation to each other, enhancing competition without ex ante limits on the practices of those firms in relation to the consumer may in fact exacerbate the harms to user autonomy in the attention economy. Competition for our attention in the digital economy, in which growth and network effects have been sought before immediate profit, has led companies to gather more and more personal user data and use them to manipulate and control our behavior. It is user autonomy, not competition, that must be the orienting goal of antitrust regulation in the digital age.
Competition, as we have seen, is generally a goal of antitrust regulation, as it is what defines a healthy, nonmonopolistic market. Even those critical of market power uphold competition as the ultimate end for a healthy digital economy. Baker, among them, outlines ways in which market power is on the rise in the United States, including anticompetitive mergers, the role of diversified financial investors, and the dominance of information technology platforms, arguing that it is a problem for the national economy. In acknowledging that large IT and Internet platforms have delivered significant consumer benefits like lowering search, communication, and purchasing costs, he argues that both consumers and the US economy would benefit even more if such platforms “faced greater competition.” Greater levels of competition, he argues, would “increase the rate of innovation, increase the rate at which firms lower quality-adjusted prices, and reduce the potential for harm from anticompetitive exclusionary conduct in markets dominated by large IT and Internet platforms.” 124 Along similar lines, Justin Hurwitz has argued that competition is the key to solving the issue of dark patterns, as consumers are “generally keenly aware of design issues,” and “where firms are able to compete,” regulation over “design elements or design decisions is likely undesirable except in the rarest cases of overtly intentional or exceptionally harmful design patterns.” 125 Others have argued that antitrust law should seek to channel platforms into a decentralized ecosystem by making partnership more attractive than mergers and enlist open-source foundations to foster interoperability of cloud computing technologies. 126
The argument here is that when it comes to addressing the harm of the attention economy to user autonomy, competition is a misguided goal. Competition implies the ability to exit, the idea that the consumer has a meaningful ability to abandon one provider of a product or service for another. In the context of antitrust, exit is typically understood from the perspective of the firm, meaning the ability of a firm to switch jurisdictions, 127 or leave the market altogether. 128 This article adopts the perspective of the user. In the digital era, the network effects of platforms extend beyond the typical definition, under which user utility and willingness to pay rise with the number of other users consuming the service. 129 More than simply delivering greater utility with more users, today the power of some platforms and services is such that users who do not engage with them are significantly disadvantaged socially, economically, and politically. Existing scholarship recognizes that technology is not disembedded from social relations and practices. Yochai Benkler, one such scholar, has worked to show how technology both shapes and is shaped by social forces and law, and that the “winner-takes-all” force of network effects in the digital economy entails the significant rise of market power for particular firms, and ultimately exacerbates income inequality and the rise of oligarchic capitalism. 130 In other words, the structure of the digital economy and its powerful network effects make it impossible for users to meaningfully exit a particular digital platform or service, as those platforms become embedded in vital social institutions and norms.
Exit for users of digitality takes two forms: exiting a particular digital platform or service, and exiting digital spaces altogether. First, the market power of particular firms and their social embeddedness make it practically impossible to exit certain platforms in favor of competing ones. Students today, for example, are often required to submit papers using Microsoft Word. It is effectively impossible for them to seek educational opportunities without that particular software. Forcing Microsoft to compete with other software companies will not guarantee autonomy over which platforms users are socially or institutionally required to engage with. More importantly, within an economic system in which attentional value production is the central mechanism, exiting realistically entails entering another domain of the same threats to autonomy. As Fred Block notes, even where exit is a “feasible move” for the user, most firms use “similar algorithms and similar strategies to maximize profits.” 131 Without limits on what companies can do in relation to the user, the user is merely deciding between different forces of control, between the same diminished self-determination under a different trademark or IP address (Instagram or X, Tinder or Hinge, Microsoft or Google). The ability in and of itself to exit one platform does not guarantee user autonomy.
What's more, the ubiquity of digitality means that exiting digitality altogether would entail significant harms to individual and collective autonomy by restricting the ability to exist meaningfully in today's digitally embedded world. User choice boils down to a “devil's bargain” between exiting services that undermine autonomy through their mechanisms of attention control, on the one hand, and the significant social, economic, and political sacrifices that would arise from exiting digitality altogether, on the other. The ability to engage in political deliberation, obtain topical news, publish an academic paper, connect with one's community, and so on all realistically entail the maintenance of an online presence, or at least a minimal use of digital technologies. The same holds true for businesses. Large platforms provide businesses of any size a “crucial distribution architecture for which there are no adequate substitutes.” 132 Digital connectivity is no longer a choice. Neither is there a meaningful choice among platforms.
Setting competition as the orienting goal of regulatory action cannot address the fundamental issue of autonomy in the attention economy. Competition may increase the autonomy of the consumer in terms of exiting one platform or service for another. However, without meaningful ex ante rules protecting the autonomy of the user in relation to digital services, exit would entail either a choice between companies in a competitive market all seeking to usurp user attention, leaving the threat to autonomy intact, or leaving digitality altogether, thus accepting significant barriers to substantive equality and to the social, financial, and political opportunities of today's digitized society. Said differently, if attention markets are made more competitive without ex ante limits on the data-mining and attention-maximizing design of digital platforms, the user will be in the no-win position of choosing between platforms that threaten their autonomy, or exiting digitality altogether and relegating themselves to a significantly unequal position in today's social, economic, and political landscape.
Furthermore, making attention markets more competitive may in fact exacerbate the harm to user autonomy. As antitrust scholars have argued, competition benefits society when “firms compete to help consumers obtain or find solutions for their bounded rationality and willpower.” 133
That being said, firms must also compete “to better exploit consumers’ bounded rationality or willpower” in order to maintain or strengthen their position in the market. This holds especially true for digital markets for attention. In the words of Ben Tarnoff, “competitive pressure compels companies to seek every possible advantage. Indeed, the industry practices with the most destructive effects, such as the obsession with user engagement, were first developed by social media firms when they were comparatively leaner and hungrier and needed to grab market share as quickly as possible—they came out of competition, in other words.” 134
In other words, addressing the power of companies as they relate to each other in attention markets will not necessarily address the issue of the relation of power between the technology company and the user.
At its best, antitrust regulation that promotes competition in markets for digital products and services can ensure that “customers receive the best possible zero-price product while minimizing the attention and information costs those consumers must exchange for the products.” 135 However, even in a perfectly competitive market for attention, a market in which all actors act rationally, with perfect knowledge, zero transaction costs, low entry and exit barriers, and with many producers, there is still the fundamental problem of corporate control over user behavior. In the case of attention commodified for sale to advertisers, as Peter Evans shows, those advertisers consider different sources of attention as substitutes. 136 From the perspective of attention purchasers, the number of platforms selling user attention merely means more available purchase options. Forcing more competition may in fact lower the cost for advertisers, making markets of attention more profitable for those benefiting from consumer manipulation. Digital technology is an “infrastructure for behavior modification at scale,” an infrastructure for “capturing, diverting, and retaining our attention.” 137 And yet, competition for our attention is as fierce as ever. As Brubaker reminds us, digital platforms and services extract value from both user-created content and the secondary circulation of and engagement with others’ content. There is a constant battle over every millisecond of our activity, to draw us in or keep us engaged for half a second longer. The idea of competition for attention goes back to Herbert Simon and the coining of the very term “attention economy.” Information must compete for our attention.
Yet Simon himself was wrong to claim that “information overload” is merely “in the mind of the reader.” 138 Today we are confronted with more powerful, more sophisticated, and more ubiquitous tools of control that threaten our autonomy and that must be addressed at the level of law and policy.
In the digital economy broadly construed, addressing corporate power is an appropriate legal and normative goal for regulators. In relation to attention and autonomy, however, the goal must not be more competition as such. As articulated by Nick Srnicek, “It's competition—not size—that demands more data, more attention, more engagement and more profits at all costs.” 139 Attention and engagement as the product of value in the digital economy is not going away. Neither is competition for it. The orientation of antitrust in the digital era must be toward the facilitation of user autonomy against deceptive practices of control, rather than increased competition. This will require meaningful ex ante limits on the kind of user data technology companies are allowed to collect, and on the ways in which they translate those data into proprietary algorithms that are weaponized against the user for the purpose of controlling their activity.
Conclusion
In the words of Justice Brandeis, the “right to life” guaranteed by the Constitution means “the right to live, and not merely to exist. In order to live men must have the opportunity of developing their faculties; and they must live under conditions in which their faculties may develop naturally and healthily.” 140 It is the role of the nation to “protect men and women from any forces, public or private, that might stifle the opportunities for thriving and life.” 141 These forces may take the form of oppression by the state, but Brandeis held that such oppression for most people takes the form of private force and economic structure. We “are not free,” he wrote, “if dependent industrially on the arbitrary will of another”; 142 in other words, we are not free if our autonomy is under the control of another, including in private relations that are exploitative, and as argued here, in exploitative digital markets for our attention.
Technology has the capacity to serve as a tool that facilitates our ability to achieve our individual and collective goals; it also has the ability to hinder them if used for the sole purpose of capitalizing on user attention. Without proper regulation, technology stands to corrupt our relationship to the world, including our interpersonal relationships and our ability to achieve self-determination. As outlined in
Footnotes
Acknowledgments
I would like to thank the editors of
Funding
This work was supported by the Berkeley Economy and Society Initiative (BESI).
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
