Abstract
This article is a part of special theme on The State of Google Critique and Intervention. To see a full list of all articles in this special theme, please click here: https://journals.sagepub.com/page/bds/collections/stateofgooglecritiqueandIntervention
Introduction
Big tech companies - often referred to as the “big four/five”, GAFA(M), FA(A)NG, MAMAA, etc. - have captured not only large parts of (online) activity and stock market valuation, but also a large share of critical attention in fields like media studies, political science, and economics. Terms like “platformization” and “monopolization” have been used to spotlight their increasing penetration into everyday life and their staggering market power. Economists, in particular, have attributed their success to business models organized around multi-sided markets that give rise to powerful (cross-side) network effects, economies of scale, user lock-in, and other mechanisms that lead markets to “tip” when an early advantage becomes a full-fledged monopoly. Alphabet/Google's outsized role in web search and Meta/Facebook's hold on social networking are two emblematic cases. Building on their dominance in core markets and flush with cash from successful IPOs, tech companies have expanded rapidly into many new areas, in part through internal product development, but more often through a series of acquisitions.
Critical commentators point to the collection of user data as a central element in these expansion strategies, as these data feed into product improvements, ad targeting, and market research (Stucke & Grunes, 2016). Although engineering capabilities are seen as crucial in these processes, the sheer mass and complexity of the technological components involved are daunting. Certainly, scholars know that “data centers” are necessary for collecting and storing large amounts of information, for making online applications react quickly to user input, and for hosting the algorithmic magic that infuses “surveillance capitalism” (Zuboff, 2018). It is also understood that “AI” enables features like voice recognition, computational photography, and content recommendation. But the particularities of these technological feats and how they relate to the political economy of “big tech” remain underexplored.
In what follows, I will sketch a research perspective that attends to technological factors in the functioning of large tech companies in more detail, particularly with respect to synergies across markets and products. The conceptual core of this project is the notion of “technical system”, which I will briefly outline next. The remaining sections are then dedicated to an exemplary discussion of Google through this analytical lens.
The systems concept
The term “system” has been used in many different academic disciplines, sometimes in a rather general sense - “a set of things working together as parts of a mechanism or an interconnecting network” (OED) - sometimes with much heavier epistemological investment, for example when theorists seek to identify general properties (e.g., feedback, homeostasis, etc.) of complex systems. The goal, here, is not to make any such far-reaching claims about systems in general, but to use the term in a more modest, descriptive sense: to draw attention to the compatibilities, synergies, and dependencies between the many technologies that large tech companies assemble and operate.
This project resonates, to a degree, with the work of Hughes and others on “large technical systems” (LTS), for example transport or power networks. This work started from the claim that “all major perspectives in present day social science theorizing about technical systems share the common feature of ignoring the material-operational cores of such systems” (Mayntz and Hughes 1988: 18). Furthering our understanding of the material-operational dimension of companies like Google is precisely what I have in mind. But LTS research focuses on singular technical structures that work in unison to produce specific outcomes. What makes large tech companies so remarkable, however, is that they are engaged in a wide variety of markets and products at once, drawing on shared technological foundations rather than a single structure built around one purpose.
A second conceptual reference point is the work of historian of technology Bertrand Gille (1978). Gille's understanding of “technical system” is much broader, aiming at the overall state of technical - and, by extension, economic and social - development in a society. The core idea is that historical periods are not characterized by arbitrary assemblies of unrelated technologies but by compatibilities, synergies, and dependencies, for example between energy sources, materials engineering, and industrial machinery. These “coherences on different levels of all structures, ensembles, and branches”, Gille writes, “compose what one could call a technical system” (1978: 19). Taking cues from the structuralist philosophies of his time, Gille argues that transformations and innovations within a given system unfold according to the rules of the system itself, until insurmountable difficulties limit further development, to the point where new inventions appear, the system topples, and a new set of coherences emerges. For my purposes, however, it is not the succession of historical stages that makes Gille's work appealing, but the idea that technologies at a given time are heavily interlinked. They depend on each other, are mutually stabilizing, and economically integrated.
While Gille does not use the term himself, his technical systems are organized around what economists have called “general purpose technologies” (GPTs), such as steam power or electricity, which are useful and consequential across sectors. Transversality is thus one way in which synergies and coherences develop within a given system. Commentators have indeed argued that ICT (Brynjolfsson and McAfee, 2016) and AI (Lee, 2018) are GPTs, conferring on those who master them the capacity to succeed in a wide array of domains. While the overall argument is persuasive, terms like “ICT” and “AI” risk glossing over many particularities that condition actual outcomes. A more nuanced understanding of “general-purpose” must thus take technical specificities and materialities seriously. As Dourish writes, “the social world manifests itself in the configuration and use of physical objects and […] the properties of those physical objects and the materials from which they are made - properties like durability, density, bulk, and scarcity - condition the forms of social action that arise around them” (Dourish, 2017: 3).
A fully worked out methodology for analyzing the “properties” of contemporary technology in relation to political-economic consequences is clearly beyond the scope of a short research commentary. While the eventual goal is to analyze “big tech” - including its Chinese variant - more broadly, the following section sketches an exemplary analysis of the quintessential tech company, Google, to show where the analytical focus on material-operational compatibilities, synergies, and dependencies between technologies can lead.
Google as technical system
Attempts to explain Google's success have on occasion singled out specifically technological factors, for example the alleged advantage of the PageRank algorithm over early competitors or the use of consumer hardware to scale up data centers quickly and cheaply. The following three examples, which cover different levels of generality, show in more detail how to integrate technicity into an analysis of the company's power.
Software and data centers as GPTs
To begin, one can break down established arguments about ICT or AI into more specific lines of analysis, following, for example, the classic separation between hardware and software.
These are cross-product synergies on the broadest level, and much more could be said about these and other examples. The key, here, is to identify technology clusters - software infrastructure and data centers among them - that function as general purpose technologies within the company, supporting a wide range of products and markets at once.
Hardware and software integration
To probe deeper, one can highlight how recent developments in AI have amplified the role of hardware, prompting increased vertical integration. To understand this shift, some background is needed.
All Universal Turing Machines are equivalent from a logical perspective, but concretely existing processing units differ in a variety of ways, including performance characteristics, energy use, bugs, and so forth. For decades, the chips used in devices such as personal computers, servers, or mobile phones were produced by chipmakers like Intel, AMD, IBM, and Qualcomm. They were suited for all kinds of tasks and were used by equipment manufacturers to make devices for end users. Moore's law guaranteed that performance improved at a steady pace. While some level of task specialization has existed since the early days of microchips, computing mostly revolved around these widely available general-purpose CPUs.
One exception was graphics processors specialized in producing ever more detailed 3D images for gamers and designers. Their massively parallel architectures were applied to certain non-graphics calculations early on, but a major turning point came with the discovery that deep learning algorithms could be run quickly and efficiently on consumer-grade graphics cards, fueling the AI renaissance of the 2010s and triggering renewed interest in specialized hardware (Hooker, 2020). While graphics card derivatives, mainly designed by Nvidia, still play a central role in ML, all of the major “AI companies” - Amazon, Facebook, Google, Microsoft, etc. - have started to design specialized hardware for their data centers. Google publicly announced its TPUs (Tensor Processing Units) in 2016 and made them available to cloud customers in 2018. These chips are optimized for the company's widely used open-source ML framework TensorFlow, tying the transversality mentioned above to a specific hardware platform. Compared to general-purpose hardware, TPUs require less energy for the specific tasks they are designed for, reducing electricity consumption, cooling needs, and space requirements. Since training a large ML model can now cost tens of millions of dollars, AI companies compete not only for researchers and engineers but also for the fastest and most energy-efficient hardware. Benefits are again reaped across application domains and the colossal capital requirements constitute new barriers to entry for competitors.
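To give a concrete, if simplified, sense of how this coupling between software framework and hardware platform plays out in practice, the following sketch shows how TensorFlow lets the same model definition target either a TPU or whatever general-purpose hardware happens to be available. It uses only the public TensorFlow API, not any Google-internal code, and the tiny model is purely a placeholder.

```python
# Minimal sketch (public TensorFlow API, placeholder model): the same model
# code can run on general-purpose hardware or be tied to a TPU through a
# distribution strategy.
import tensorflow as tf

try:
    # Locate a TPU made available to the runtime (e.g., on a Cloud TPU VM).
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
except (ValueError, tf.errors.NotFoundError):
    # Fall back to the devices that are locally available (CPU/GPU).
    strategy = tf.distribute.get_strategy()

# The model definition is identical in both cases; only the strategy scope
# binds it to the underlying hardware.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```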
A second example further illustrates “hardware-software co-design” (Ranganathan et al., 2021), that is, the close coordination between hardware and software components around specific application requirements. To deal with the vast number of uploaded videos and the increasing diversity of viewing devices, YouTube has designed Video Coding Units (VCUs) that can transcode files to different formats in parallel, using a “multiple-output transcoding” approach where certain processing steps are shared to increase throughput and efficiency (Ranganathan et al., 2021). Google also uses these VCUs for its cloud gaming service Stadia, again showing how product variety enables synergies that make costly investments in cutting-edge technologies viable.
A third example indicates that Google hopes to transfer these principles to consumer devices. Running in the company's Pixel 6 mobile phone, the Google Tensor system on a chip (SoC) again focuses on accelerating ML training and particularly inference, improving performance in areas such as image processing, speech recognition, and real-time translation. Processing workloads “on-device” instead of sending them to data centers and back reduces latency significantly and has the potential to enable new use cases. These chips will likely spread to devices like tablets and smart speakers. Google emphasizes that Tensor was developed in collaboration with Google Research, giving “insight into where ML models are heading, not where they are today”.1 Investment in fundamental R&D is clearly central to establishing a highly synergetic technical system.
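The mechanics of on-device inference can be illustrated with a short, hypothetical sketch using TensorFlow Lite, the framework Google distributes for running models on phones and other edge devices. The converted toy model and fabricated input stand in for the much larger speech, photo, or translation networks mentioned above.

```python
# Minimal sketch of on-device inference with TensorFlow Lite (toy model and
# fabricated input; real deployments would use speech, photo, or translation
# networks and may delegate work to on-chip ML accelerators).
import numpy as np
import tensorflow as tf

# Build a tiny stand-in model and convert it to the TensorFlow Lite format
# used for on-device deployment.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# The interpreter runs locally, avoiding the round trip to a data center.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

sample = np.random.rand(1, 8).astype(np.float32)  # stand-in for sensor input
interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```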
Taken together, these examples demonstrate how Google leverages expertise and capital to develop infrastructural resources that can only be rivaled by its largest competitors. Consequently, domains where speed and efficiency are key differentiators - AI most notably - will likely be dominated by a small number of actors with outsized influence over the direction in which the field is headed.
Data amalgams
While critical voices often point out that large tech companies have collected enormous amounts of data, the term “data” covers an exceedingly broad space that can include very different things, both in terms of what is represented and in terms of how it is collected, cleaned, stored, organized, processed, and operationalized in concrete products. Semantic data can serve as a short example that opens onto another dimension of cross-product synergies resulting from considerable long-term investments.
While all data can be thought of as having meaning, the term “semantic”, here, signals the explicit encoding of propositional knowledge - Paris is the capital of France, has 2.16 M inhabitants, and is located at 48°51′24″N/2°21′08″E - into machine readable form. This normally implies specific modeling methods, such as the Resource Description Framework (RDF), and stored information takes the form of large databases of “facts”. The knowledge panels appearing next to linked results on Google Search are well-known surface manifestations, but semantic data are valuable in less obvious ways, for example in speech recognition, automatic translation, and other language tasks where propositional knowledge can complement statistical approaches.
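What such explicit encoding looks like can be sketched, in a purely illustrative way, with the open-source rdflib library in Python; the ad hoc example vocabulary below stands in for shared schemas and curated ontologies and has nothing to do with Google's internal knowledge-graph tooling.

```python
# Illustrative sketch: the propositional knowledge mentioned above encoded as
# RDF triples with the open-source rdflib library. The example.org vocabulary
# is ad hoc; real deployments rely on shared schemas and curated ontologies.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/")
g = Graph()

paris = URIRef("http://example.org/Paris")
france = URIRef("http://example.org/France")

g.add((paris, RDF.type, EX.City))
g.add((paris, EX.capitalOf, france))  # Paris is the capital of France
g.add((paris, EX.population, Literal(2160000, datatype=XSD.integer)))
g.add((paris, EX.latitude, Literal(48.8567, datatype=XSD.decimal)))
g.add((paris, EX.longitude, Literal(2.3522, datatype=XSD.decimal)))

print(g.serialize(format="turtle"))  # machine-readable "facts" about Paris
```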
While semantic data are often not more than nuggets extracted from Wikipedia, Google increasingly commands proprietary pools of knowledge that are much harder to reproduce. Maps is built on broadly available administrative data, but also relies on accurate and therefore costly maps, on information provided freely by businesses and public institutions, on reviews that add cultural salience to mere facts, and on data captured in real time from mobile phones. Google's navigation services register traffic patterns from people using the app and features such as the “not too busy” markers or “popular times” panels rely on the same principle. Statistical counts are added to propositional knowledge, creating integrated data amalgams that are more valuable than their individual components. Some of the information coming from Maps then also feeds back into Search, the Assistant, and smaller services like Travel. Investing in Android is thus not only a way to earn direct revenue from ads and sales commissions but also a means to collect, contextualize, and valorize data that connect intimately to everyday practices. The term “vertical integration” hardly captures the level of amalgamation between the different components involved here.
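How statistical counts and propositional knowledge come together in such amalgams can be suggested with a deliberately simplified sketch: hypothetical facts about a place are merged with equally hypothetical visit counts to produce something akin to a “popular times” profile.

```python
# Deliberately simplified sketch of a "data amalgam": propositional facts
# about a place are merged with statistical counts derived from hypothetical
# location pings. All data are fabricated for illustration.
from collections import Counter

place_facts = {
    "name": "Café Exemple",
    "category": "cafe",
    "address": "1 Rue Imaginaire, Paris",
    "opening_hours": "08:00-20:00",
}

# Hour of day extracted from (made-up) anonymized device pings.
ping_hours = [8, 9, 9, 10, 12, 12, 12, 13, 17, 18, 18, 18, 18, 19]
visits_per_hour = Counter(ping_hours)
busiest_hour = visits_per_hour.most_common(1)[0][0]

# The amalgam is richer than either source alone: facts plus observed use.
amalgam = {
    **place_facts,
    "popular_times": dict(sorted(visits_per_hour.items())),
    "busiest_hour": f"{busiest_hour}:00",
}
print(amalgam)
```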
This basic example shows how the work coming out of fields like critical data studies (e.g., Iliadis & Russo, 2016) can both inform and profit from a “technical systems” perspective attentive to strategic synergies between technologies, data, and their operationalization in business processes. Google may “know” a lot about its users and the world, but it is the connection between the material-operational dimension and specific business logics that gives meaning to that knowledge.
Conclusion
While these three probes into the compatibilities, synergies, and dependencies that arise from the material properties of specific technologies remain illustrative rather than properly analytical, they indicate how attention to the “tech” in “big tech” can open directions for critical analysis that connect these materialities to questions of political economy. The notion of “technical system” draws attention to forms of transversality and interrelatedness that have repercussions for how these companies function and compete, affecting the products and services that infiltrate users’ lives. Google and other firms are not simply “making tech” or “good at tech”; they organize much of their operation around the mastery and operationalization of key technologies that facilitate and drive their continuous expansion. The materialities that are mobilized in this process are part of the power relations that emerge, and understanding them is crucial for our capacity to critique and, if necessary, intervene.
