Abstract
This article is part of the special theme on Algorithms in Culture. For a full list of articles in this special theme, see: http://journals.sagepub.com/page/bds/collections/algorithms-in-culture.
Introduction
Fetish discourse always posits this double consciousness of absorbed credulity and degraded or distanced incredulity. (Pietz, 1985: 14)

If fetishism is, at root, our tendency to see our own actions and creations as having power over us, how can we treat it as an intellectual mistake? Our actions and creations do have power over us. This is simply true … The danger comes when fetishism gives way to theology, the absolute assurance that the gods are real. (Graeber, 2005: 431)
Algorithms in recent years have become a catchword: a focus of public fascination, a rarified artifact that commands extraordinarily high salaries for those who make them, a lightning rod for business secrecy and “a magic black box” for those who use them. At the same time, we take them for granted. They order our news feeds, personalize our wish lists, turn on our heaters and optimize our fitness routines. They are the very stuff of everyday life.
This slippage between algorithms as shiny objects of value, their taken-for-granted ubiquity and their signature technical complexity challenges social scientists who research them. Paul Dourish (2016) calls for more precise use of the term “algorithm” within software studies and notes that in technical circles it has a specific meaning even if its implementation makes it nearly indistinguishable from code. An algorithm, he clarifies, is an “abstract, formalized description of a computational procedure” whose on-the-ground effects depend on how it is written into code, with what infrastructures and with what data (Dourish, 2016: 3). Samir Passi and Steven Jackson (2017) add that algorithms are rule-based, not rule-bound, procedures and that their enactments are therefore inextricably situated and contingent. Jenna Burrell (2016) highlights the particular opacity of deep learning algorithms given unpredictable input data and a gap between how people value these data and how the code handles them. All recommend social scientists be empirically precise and careful when talking about what algorithms are and what they do in the world.
We agree. Taking “the algorithm” or “algorithms” as givens blurs the details of how algorithmic capabilities register as outcomes, how these outcomes generate promise and how these promises invite new possibilities. With each step, we lose sight of so much – the data preparation, the coding, the learning and application of all those rules, the repeated experimentation and testing and the debates over what the algorithms actually do and then what they do in someone else’s hands and with someone else’s intentions.
We social scientists are not the only ones at risk. In fieldwork with computer vision professionals and the Quantified Self (QS) community, our research participants also slipped between referring to algorithms as reified things with promise and then recounting the trickier nuts and bolts of how to work with them. They spoke of false promises and lost opportunities when algorithms did not deliver, and of the magic and faith necessary when they did. This blurring of what algorithms did and what was promised caught our attention.
In the following paragraphs, we revisit our two field studies to ask who and what are involved in rendering algorithms (and ultimately data) as tangible and trade-able objects of promise. Our discussion centers on Thomas’s study of computer vision developers, rooted in 43 in-situ, semi-structured interviews with computer vision professionals across North America and East Asia. We also draw on Nafus and Sherman’s ethnographic research in the QS community, a three-year project including participant observation, semi-structured and open-ended interviews and the ongoing co-design and development of biosensor data sense-making tools (see Nafus and Sherman, 2014).
In neither study did we set out to investigate algorithms. Thomas focused on the changing work practices of computer vision development. Nafus and Sherman documented the cultures and practices of self-tracking. Yet in both studies, our research participants wrestled with claims to algorithms’ powers – what algorithms could do both in fact and in potential. Questions about who and what constituted algorithms’ efficacy surfaced when QS participants debated what sort of “nudge” might make them floss their teeth more, or when a senior technologist admitted surprise that the medical robot his team managed to build after six years of work actually surpassed their expectations.
Their mix of belief and disbelief in what algorithms could do led us to the fetish. In 20th-century anthropological thinking, fetishes are not indices of false thinking, as they are in vernacular usage. They are, rather, material objects that stabilize complex and ongoing social relations because people invest them with this effect. What made the fetish so apt in anthropological thinking is how this investment is also marked by what William Pietz described as “this double consciousness of absorbed credulity and degraded or distanced incredulity” (Pietz, 1985: 14) and what David Graeber (2005) then summed up as a socially generative leap of faith. The value of analyzing algorithms as fetishes, then, goes beyond understanding how people invest algorithms with efficacy to understanding that they also do so productively.
In the next paragraphs, we start with a brief primer on fetishism and explain how we use the concept as a heuristic for analyzing contemporary algorithms. Then, we turn to our fieldwork. In the case of computer vision professionals, we show how specialized algorithms stand front and center as traded emblems of their inventors’ disciplinary expertise alone. Yet fierce debates over what makes a computer vision algorithm good enough point to shifting social and professional contracts between those who make and those who use them. In our second case, that of the QS community, no one technology or technique defines self-tracking as a practice. Instead, QS participants ask what outcomes algorithms can really bring about. How might desired outcomes best be effected – as an algorithm to count steps, a sensor that increases energy levels, a “nudge”? Their experiments with what “works” in practice are critical unpaid intellectual labor, as significant to making self-tracking algorithms work in the real world as the paid design and test cycles of formal device production. We then conclude by calling attention to what we gain and lose in the slippages between what algorithms do, what they promise and the faiths and possibilities that can result.
Why fetishes?
We did not start our research looking for the fetishization of algorithms. It emerged as we sought to explain how our research participants granted algorithms powers – the capacity to act in the world, to “know” things and to make things happen. In our interviews, calling algorithms “magic black boxes” had become an accepted fact more than an accusation. Computer vision algorithm developers, in effect, admitted this when they dubbed their work a “black art,” as did one interactive game developer, or relished their rarified expertise, as did many PhD research scientists. It is true, the latter would equivocate, that their algorithms had yet to mature, or their deep learning neural nets remained opaque, but these were simply calls for more work, not dismissal. Early QS participants, who had little access to the analytical workings of self-tracking devices, debated which devices and what output resulted in a useful insight or in a company’s wresting away of control. While QS participants and computer vision professionals might quibble about the pros and cons of a particular analytic process, few questioned that, in general, algorithms did and should work.
More difficult to explain was how research participants slipped between talking about algorithms’ technological efficacy and social accountability. In one example, we asked Gerald, an interactive virtual reality product manager at a multinational corporation, what more he wanted from computer vision algorithms. He retorted, “Intellectual honesty.” He went on to explain:

When you’re talking about stuff that takes place in a black box, the agendas of the people that are controlling the activities of the black box are all the more important. You [algorithm developers] know what’s possible, what’s not possible, what can be done, what cannot be done. Really, be honest with yourself, ‘cause there’s the right approach and the wrong approach.
A term with baggage
In contemporary writing about algorithms and data, the term fetish is rarely used with more than a glancing look at its historiography (Chun, 2008 is an important exception). Yet, it is the history of the term’s usage that we find so useful. Pietz (1985) traces the term’s origins to the 16th- to 18th-century Portuguese and Dutch traders who went to West African trading towns looking for gold. There they also found what they called “fetish.” For them, fetishes were objects of nominal material value imbued by their African trading partners with magical powers. As they saw it, fetishes’ powers were capricious, arbitrary and constructed: in short, products of false beliefs and mistaken attributions. Yet, they also acknowledged how well the promise of these powers facilitated trade both locally and across continents, and traders used them to their own ends.
By the 18th century, African fetishes were landing in private collections and museums in the West. The histories of their creation and use were largely erased by equivocal claims to magical powers in the writings of prominent 19th- and 20th-century critical thinkers. Karl Marx saw fetishism in “the whole mystery of commodities, all the magic and necromancy that surrounds the products of labour on the basis of commodity production” (Marx, 1977[1887]: 169). Sigmund Freud (1961[1927]) used it to explain sexual deviance. Fetishism came to describe a socio-cultural mechanism through which objects accrued value, meaning and efficacy through a process of substitution and misrecognition. As the term gained widespread adoption, naming a thing a fetish came to carry with it a provocative ambivalence, a simultaneous affirmation and doubt about its effects in the world.
More recently, African art collectors and historians have sought to recover lost histories of fetish objects’ creation and use. In a description of a Congolese nail fetish statue, Thompson (1987) explains:

To decode the meaning of the blades and nails is to expand our understanding of the world of the famed lawcourts of Kongo. Each blade or nail is a mambu. A mambu is a legal matter or problem, nailed-in, literally and metaphorically, in the search for restitution of what is right and just, between two or more parties.
Fetishes as good to think with
Pietz (1985), Marx (1977[1887]), Freud (1961[1927]) and Graeber (2005) might disagree on exactly which qualities of fetishism are heuristically most important. Pietz (1985), for example, insisted on highlighting the replicability of a fetish’s effect and the singular accountability of its creator. Fetishes were crafted objects, not idols. Marx (1977[1887]: 163–177), in contrast, wanted to explain how commodity exchange became a social fact and how the unique values of labor were erased by the exchange value of their product. Freud (1961[1927]: 147–157), in turn, focused on the mechanisms of such misrecognitions, this time the misrecognition of a body part (a nose or foot) as something erotic and phallus-like. Graeber (2005), likewise, explored the agency of misrecognition, but in generating social creativity, not social deviance.
From their work, we distill four attributes of the fetish and how they distribute power as capability, promise, faith and possibility:
1. The fetish is a material object imbued with capabilities that are not inherently properties or functions of the object itself.
2. These excess capabilities are generated at the point of contact between differently positioned people and thus widen the scope of their outcomes to the social, cultural and economic.
3. These social, cultural and economic outcomes are misrecognized or substituted as belonging to the fetishized object as its promise.
4. This substitution or misrecognition is itself efficacious: it enables something to take place that might otherwise not happen.
These four attributes organize our discussion of computer vision professionals, QS participants and algorithms. As a set of attributes, they also dispel other claims to power that haunt the algorithm. If we were to stop our analysis with the first two attributes of the fetish, algorithms could be imbued with a technological determinism, where the technology itself is credited with capabilities that directly lead to change. If we were to focus only on the second two attributes, we might conclude that algorithms act as a kind of technological sublime, granted powers by those who revel in their god-like promise and possibility. By analyzing algorithms using all four attributes, we see a fuller picture of how algorithms as material objects are invested with and thereby gain powers as they change hands. We can see algorithms as traded talismans that invite slippages between their effect, promise and possibility, and not as artifacts of a technological determinism or sublime.
There is some disagreement about what constitutes the materiality of algorithms, particularly if we compare them to Pietz’s (1985) and Freud’s (1961[1927]) fetishes (noses, amulets and so forth). Algorithms sometimes manifest as math, sometimes as lines of code and sometimes as the parsed data or visualizations they produce. They are, as Josh Berson (2015) puts it, representationally promiscuous. They shapeshift in the hands of those who design and engineer them and especially those who use them. In our research, algorithms more often were defined by what they did than by what they were. Their workings were undeniably concrete. Lights on a screen. A robot arm moving. A sensor-triggered video of one’s child. A haptic vibration to prod someone to start moving.
By starting with how and when our research participants materialize algorithms, we echo the call for more emic understandings of algorithms as practice (Dourish, 2016; Kitchin and Lauriault, 2014; Passi and Jackson, 2017). We agree with Wendy Chun that algorithms (in her case source code) must be considered “in media res” (Chun, 2008: 323) rather than as reifications of what they “really are,” an analytical mistake she calls fetishization. We extend Chun’s argument to focus on the labor relations and contracts that make code “work” both in terms of machine execution and in terms of what it does for humans. We also build on the prior work of critical social scientists who identify how algorithmic work structures power relations by enacting discrimination and social sorting (Barocas and Selbst, 2016; Pasquale, 2015), promulgating labor inequalities (Gray et al., 2016; Irani, 2015) or shaping cultural production through algorithm design choices (Gillespie, 2014; Hamilton et al., 2014; McKelvey, 2014).
By analyzing algorithms as materialized and misrecognized social contracts, we also can tackle less straightforward enactments of power, ones where accusations of magic and black boxing come into play. With fetishism as a heuristic, we can empirically disentangle how algorithms materialize as things in themselves, how people invest them with powers to do things and how the promise of these powers ultimately valorizes some people’s work and opportunity at the expense of others.
Finally, as we analyze algorithms as fetishes, it is not to say that other people are naive to believe in algorithms’ efficacy while we remain wiser. Rather, it is to say that people position algorithms in ways that make algorithms promise more than they can deliver in strictly material terms. For Graeber (2005) and us, this is the moment of social creativity when faith in a promise delivers possibility.
Computer vision expertise and algorithms that can “see”
Computer vision remains a nascent, but not new, disciplinary field. It marries sensors and image signal processing with a growing and diverse span of human digital work to make sense of light for humans and machines. Although it emerged in academia over forty years ago, the technical challenges remain daunting. There is much room to invent new mathematical and computational ways to envision light and even more room to make these algorithms relevant and useful in commercial and industrial products. For those we interviewed, computer vision is simultaneously an ambitious technical project inspired by notions of artificial intelligence, an academic discipline, an increasingly in-demand technological capability, a qualifier for a well-paying job and the belief that computers can, one day, “see” as well as, or better than, humans.
The materiality of computer vision algorithms
By definition, vision algorithms digitize and analyze light captured by cameras to do a particular task, such as identifying a landmark, towards a particular end, say, locating a robot. In this way, they act like other algorithms (Barocas et al., 2014; Dourish, 2016). Yet computer vision professionals use the term more loosely. It interchangeably indicates a single mathematical or logical step as well as a series of such steps, as in an imaging or vision processing pipeline. One computer vision PhD explained that she cobbles together prior algorithms, as units and as pipelines, to tackle whatever vision problem she has at hand. For her, all algorithms are amalgamates of prior algorithms.
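To make the pipeline image concrete, consider a minimal sketch in Python using the open-source OpenCV library (discussed below). It is our illustration, not any informant’s code: each stage chains a prior, named algorithm, and the resulting “landmark detector” is exactly the kind of amalgamate she describes.

```python
# A minimal, illustrative vision pipeline (our sketch, not an informant's code).
# Each stage is itself a prior, named algorithm; the "algorithm" as a whole
# is an amalgamate of such units.
import cv2

def find_landmark_candidates(image_path: str):
    image = cv2.imread(image_path)                   # digitized light from a camera
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # prior algorithm: color conversion
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # prior algorithm: Gaussian smoothing
    edges = cv2.Canny(blurred, 50, 150)              # prior algorithm: Canny edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # prior algorithm: contour tracing
    # Keep only large shapes as candidate "landmarks" (an arbitrary threshold).
    return [c for c in contours if cv2.contourArea(c) > 1000.0]
```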
In this way, vision algorithms are both the media for and products of computer vision work. Charles Goodwin (1994) distinguishes between what professionals work with and how they materially represent that work to others. In computer vision, algorithms are both. In this tight, competitive world of using algorithms to create new ones, careers hinge on staking claims to the novelty of algorithmic invention, particularly for academics and corporate research-and-development scientists. These so-called “pure” algorithm developers race to benchmark the comparative performance of their algorithms against prior art of the same class. Presentations of these state-of-the-art vision algorithms are white-knuckled affairs, with audience members often challenging the testimony to competitive performance or unique design. At stake is not just what the algorithm is or does but what it promises to do.
The materiality of the algorithm gate-keeps who can build on its promise. A holographic display engineer complained that some computer vision academics only published their algorithms as mathematical proofs. He needed working C/C++ code, the tools of his trade. Gale, a PhD computer vision research scientist, explained that by writing algorithms as math in Matlab, she broadens their industrial applicability even if she must then optimize them for each application. Graduate students typically eschew such expensive algorithm development toolchains and instead prototype computer vision algorithms in the more commonly available Python. For-profit and nonprofit organizations increasingly curate algorithms into libraries, be they open-source collections such as OpenCV, custom-built pipelines, or components of product software development kits (SDKs) and development toolchains such as Matlab. In all cases, each algorithmic unit and pipeline comes with user, licensing and intellectual property agreements.
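The translation work at stake here can be sketched hypothetically: the same Gaussian smoothing step exists as a textbook formula that a developer must hand-code, and as a single call into a curated library such as OpenCV. The code below is ours, for illustration only (the file name is an invented stand-in); it is not Gale’s or the engineer’s.

```python
# Illustrative only: one algorithmic "unit" materialized two ways.
import numpy as np
import cv2
from scipy.signal import convolve2d

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Hand-translate the formula G(x, y) ~ exp(-(x^2 + y^2) / (2*sigma^2)),
    normalized to sum to one, as a developer might from a published proof."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # a hypothetical image file

# Route 1: the published math, hand-coded and convolved step by step.
by_hand = convolve2d(gray, gaussian_kernel(5, 1.0), mode="same", boundary="symm")

# Route 2: the same algorithm as a licensed, curated library unit.
by_library = cv2.GaussianBlur(gray, (5, 5), 1.0)
```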
Vision algorithms materialize as artifacts for and outcomes of computer vision professionals’ labor. They are products whose uses are contractually governed and whose promises build careers and reputations. In the past, some algorithms were named in honor of those who created them, but now they more often are named for what they achieve, such as the simultaneous localization and mapping (SLAM) or structure from motion (SfM) algorithms. Here, we begin to see how algorithms gain their powers. Like magic incantations, these names spell out the promise of what the lines of code or series of mathematical functions should do once they pass from the hands of their creators to those charged with their use.
Algorithm makers versus algorithm users
Theorists concur that the fetish gains its powers in the encounter and exchange between diverse peoples, such as the early modern African and European traders (Pietz, 1985), the laborer and the capitalist (Marx, 1977[1887]) or the child and his mother (Freud, 1961[1927]). In the case of computer vision algorithms, their promises materialize in the change of hands from makers to users. Careers, reputations and commerce divide those who create and those who use computer vision algorithms. Yet, significantly, the distinction between the two is more often social and economic than pragmatic.
When describing their day jobs, computer vision professionals parse their work into a series of familiar tasks: make image data accessible and available, curate images into data sets, design algorithms, code algorithms, optimize algorithms (which means ensuring they run on particular hardware and software systems), make them do something for someone and use the completed solution with its vision and other capabilities. As activities, it is difficult to know where exactly the labor of making ends and using begins (as per activity theorists Engestrom and Miettinen, 1999; Goodwin, 1994; Suchman, 1987). Add to this that the tasks are, in practice, portable. In some cases, PhD algorithm developers spend months hand-tagging video. Under deadline, software developers will scrabble together sextant algorithms to automate horizon detection. Even system engineers find themselves curating and logging image data sets.
Yet those we interviewed assign the tasks to specific job roles in an idealized structure with socio-economic implications. Interns and crowdsourced workers, the “data janitors,” do the onerous and generally unrecognized creative work of gathering, cleaning, labelling and curating data sets for algorithmic modeling or neural network training (Gray et al., 2016; Irani, 2015; Lohr, 2014). Algorithm developers generate state-of-the-art or reliably “good enough” vision algorithms and are typically the stars of the show, commanding salaries that Peter Lee, head of Microsoft Research, once compared with top NFL quarterbacks (Vance, 2014). Some of these turn to software developers to translate the step-by-step series of their mathematical functions into programming languages. Finally, system engineers make sure these algorithmic steps work optimally on the hardware and software technology at hand, such as a medical robot or car.
This social and economic division of labor further crystallizes as the algorithm comes into focus. On one side are the algorithm makers who have a vested interest in claiming the invention and therefore intellectual ownership of a state-of-the-art algorithm. Their work includes, but rarely credits, the work of those who collect and curate data sets as well as those who code the more mathematical algorithms. Instead, they nod to their academic training and fiercely defended disciplinary specialization. For them, vision algorithms remain crafted objects, not Marx’s fetishized commodities (Marx, 1977[1887]: 163–177). Their novelty, and therefore the incommensurability of their labor, stems from the still rarified craft of their making. On the other side are the algorithm users: system engineers, application developers and software developers. Positioned as the inexpert wielders of “magic black boxes,” they hold the creators accountable for the promise of an algorithm’s performance, as Gerald articulated earlier.
The injustice of who does and who does not get credit or compensation for what an algorithm does is clear. Lilly Irani (2015) and Mary Gray et al. (2016) eloquently make this argument, although we would add contract software developers into the mix of disenfranchised professionals. But the degree of vitriol that accompanies the distinction between algorithm makers and users suggests more is at play. In a mid-project interview, we suggested to one well-known computer vision academic that a team of PhD-trained vision experts designing localization and mapping algorithms for consumer robots did work very like his. He snapped, “That’s not computer vision, that’s system engineering.” When we recounted this story to the founder of an interactive game development studio, he laughed and then quipped, “We don’t hire prima donnas.” He proceeded to explain that he only hires computer vision algorithm developers willing to “get their hands dirty” and both design and code algorithms.
We are at a time when the unquestioned prestige and expertise granted to algorithm developers are beginning to unravel. Vision product teams at large multinational companies and startups alike liken the trade in state-of-the-art algorithms to older “waterfall” work practices vilified by software development norms like the agile movement. Managers talk passionately about how they have reorganized their teams to pair algorithm developers with software developers and system engineers. Their goals are to accelerate design, development and deployment by making all algorithmic and engineering work collective and collaborative. As a result, the algorithm dissolves into a common code base. Accountability for the algorithm’s performance and optimization becomes shared. The distinction between algorithm maker and user further blurs. Software developers gain near equal status on the team (although not necessarily an equivalent rise in pay). Significantly, these small but nimble teams also bring back in-house the “dirty” work of collecting, annotating and curating data. Not all firms thrive after introducing these agile- or lean-startup-inspired work practices, and some professionals, notably the algorithm developers, protest and in private admit to us researchers that they miss the familiar jostling for prestige and resources.
Not surprisingly, we also see the valuation of algorithms shifting away from general purpose or “pure” algorithms to the proven performance of an algorithm over time and at scale. Many we interviewed still welcome “pure” algorithm invention but only if those algorithms also can reliably and robustly repeat the same effect across their specific product lines. One medical imaging startup founder joked that his relationship with academic computer vision was parasitic. He had no desire to pay the salaries of algorithm developers, he just wanted someone, an academic or a large corporate R&D arm, to deliver him predictable, affordable and usable vision outcomes.
As the professional distinctions between making and using vision algorithms blur, the vision algorithm loses its luster. With the skyrocketing popularity of deep learning methods, some professionals hazard that the magic of computer vision might better reside in a well-curated dataset. What makes the lens of fetishism so revealing is its ability to track in whose hands and to whose advantages the materialization of professional privilege occurs.
Faith in algorithms
For Graeber (2005), the social creativity of fetishism arrives at the last misrecognition, the vesting of the algorithm (or the dataset it produces) with possibility. No one in our conversations ever doubted the efficacy of algorithms, be they mathematical functions, code or parsed data. But when we look at how people believe in the promise of algorithms, we hear echoes of Pietz’s (1985) simultaneous belief and disbelief. Jules, the COO of a medical robot manufacturer, explained:

When I look back to six years ago, and look at the system we have today, I could not have imagined then what it could do today. It’s so much better than what I thought. Because we didn’t cast ourselves into an idea of ‘that’s what we’re going to have.’ We instead cast ourselves into ‘we’re going to do the best we can, every time, in incremental steps.’
To believe that vision algorithms do things vests them with social power beyond their capabilities as math or code. It obfuscates some labor to the credit of others by vesting the algorithm, not its trade, with function and effect. It also secures a broader social and economic commitment to vision as a promise endemic to the algorithms. With this commitment, vision algorithms gain materiality and agency to operate outside of and independently from the professionals who design, develop and deploy them. This is how Jules can be surprised by what he and his team built.
The promise that computers can “see” fuels professionals to continue their work, to be a part of making this magic happen. It allows them to forget, for a moment, the hours of labor onerously drawing bounding boxes on video footage or rewriting camera APIs to ensure different camera feeds can be similarly analyzed. It allows them to mistake their and others’ labor for the workings of a truly powerful, awe-inspiring algorithm (even when it does not work, as Gerald reminded us). But as they do so, they (and we) risk forgetting the specificities of their work in the name and promise of algorithms that can “see.”
Efficacy and awareness in the Quantified Self community
Where computer vision professionals lose sight of the particularities of some people’s labor in the broader social and economic commitment to vision, the QS community, in contrast, celebrates these particularities. Participants publicly register the specific effects of various self-tracking technologies through their experiments on and through their bodies. Their attention to exactly where, how and what “works” for whom widens the space for many faiths in many different kinds of technologies, sometimes algorithms and sometimes not. In these venues, the efficacy of algorithms alongside that of spreadsheets, sensors and data is questioned and debated.
QS participants’ commitment to self-experimentation and learning slows down the contractual clarity that underwrites full-scale belief in a particular self-tracking technology, like a mindfulness pill, a behavioral nudge or an algorithm that counts steps. In the QS community, few technological promises or market values rest assured. QS experimentation instead exposes the fragile consensus that underwrites the emerging self-tracking consumer commodity market. It muddies “the whole mystery of commodities, all the magic and necromancy” (Marx, 1977[1887]: 169) that companies use to sell commodity products. It is ironic, then, that some critics accuse QS participants of fetishizing self-tracking technology (see Sharon and Zandbergen, 2016 for an excellent overview). We counter that the QS community’s attentiveness to and faith in the particular and diverse possibilities of self-tracking as technology and practice actually temper the pace of what Marx (1977[1887]) would call commodity fetishism.
The QS community consists of people who gather online and in major cities around the world to discuss what they can learn by collecting and analyzing data about themselves. Some participants join out of curiosity, and some to tackle a medical problem. Some join because they (also) work at a technology manufacturer, academic research institute or medical organization interested in productizing self-tracking technologies. Despite their differences, they meet with the explicit purpose of discussing the use and efficacy of self-tracking techniques and technologies, such as activity tracking, stress detection, microbiome tests and more. To keep the focus on the practices of self-tracking, meeting protocol requires participants to speak about what they learned as individuals and not deliver product pitches or make broad scientific claims (Berson, 2015; Sharon and Zandbergen, 2016). In this way, QS meetings are more than communities of practice interested in furthering a rich body of shared knowledge (Lave and Wenger, 1991). They are communities of encounter, much like Pietz’s (1985) 17th-century Gold Coast trading towns, that trade in ideas, methods and claims about the potential worth of this or that technology.
Recalibrating efficacy
In an early paper (Nafus and Sherman, 2014), we argued that QS participants creatively reworked the capabilities proposed by existing self-tracking devices, like algorithmically parsed steps or sleep, as well as their promises, such as “improving health.” They interrogated manufacturers’ marriage of product and promise. We called this reworking a “soft resistance,” in short a process of disentangling what algorithm-containing products can do materially from what they promise to do bodily, mentally or socially.
QS participants chipped away at products’ promises by asking two questions: does a particular technology work, and does it work for me? To explore the former, participants compared, head to head, products that claimed to sense the same thing and measured how well they actually did so. To get at the latter question, they pitted the output of the technology, whether it worked via an algorithm or not, against whatever purposes the self-tracker defined. These purposes could have little to do with what the product and algorithm engineers intended as outcomes. With these dueling questions, QS participants effectively broadened the scope of what self-tracking technologies could and should do beyond what product designers and manufacturers proposed.
Participants turned to terms such as “mindfulness” and “awareness” to locate the embodied effectiveness of self-tracking (see Sharon and Zandbergen, 2016). Consider, for example, a talk given by Nancy Dougherty (2011), who created what she calls “mindfulness pills.” Dougherty made her own blister pack of sugar pills, each containing an ingestible sensor. Not coincidentally, Dougherty worked for an ingestible sensor company at the time. She labelled each pill with a mental state she desired, such as “energy.” As she took a pill, the pill’s ingested sensor sent biostatistics about her body to her phone, revealing that yes, indeed, she did bicycle much harder shortly after taking the “energy” pill, which she herself knew to be a placebo from the outset. “In fact all of my biggest heart rate spikes were after ‘energy’ pills,” she commented.
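To give a sense of the data work behind such a claim, here is a hypothetical reconstruction in Python: a self-tracker might align pill timestamps against heart-rate samples and compare the spikes that follow each pill type. The file names, columns and thirty-minute window are our invented stand-ins, not Dougherty’s actual data or method.

```python
# Hypothetical reconstruction of a QS-style analysis; file names, columns
# and the 30-minute window are invented for illustration.
import pandas as pd

heart_rate = pd.read_csv("heart_rate.csv", parse_dates=["time"])  # columns: time, bpm
pills = pd.read_csv("pills.csv", parse_dates=["time"])            # columns: time, label

def peak_after(pill_time, window_minutes=30):
    """Highest heart rate recorded in the window after taking a pill."""
    mask = (heart_rate["time"] >= pill_time) & \
           (heart_rate["time"] <= pill_time + pd.Timedelta(minutes=window_minutes))
    return heart_rate.loc[mask, "bpm"].max()

pills["peak_bpm"] = pills["time"].apply(peak_after)
# Were the biggest spikes after "energy" pills?
print(pills.groupby("label")["peak_bpm"].describe())
```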
By including “mindfulness” as a measure of her placebo’s efficacy, Dougherty unabashedly queried the effects of belief and disbelief in her experimentation. Upon taking her pill, she tracked that her attention was drawn to energy levels. That recognition was perceived as in and of itself effective in changing her energy levels. She was aware that her pills were placebos, and therefore ineffective by definition. Yet, she accepted that the effects registered in the mind were real. Nor were the pills ineffective, in that the sensors in them corroborated or refuted the existence of a mental effect. The pills (of her own design) invited her to believe that what the sensors and algorithms measured, such as an elevated heart rate, could be translated into higher “energy.”
In some domains, this relationship between heart rate and energy might be plausible, but in the QS community, a different experimenter might not accept this correlation and instead register energy levels in terms of the capacity to write many words on a page. Another experimenter might argue that the absence of a high heart rate should be considered evidence of a conditioned heart, as is the case with “energetic” athletes who have low heart rates. In Dougherty’s case, she created a situation in which she had to believe in the efficacy of the placebo and believe in the correspondence between her specific mental constructs and the sensor data in order to arrive at a conclusion about her self-tracking experiment’s efficacy. She also had to put faith in the sensors themselves. It would have been a failed experiment if the sensors had produced no data at all, or if their algorithms had parsed it into implausible data, as often happens with bodily sensors.
These were fine and calculated parsings of belief and disbelief, more scrutinized than those that Pietz (1985) associated with the trades in Gold Coast fetishes. Dougherty worked through this play of credulity and incredulity by temporarily granting the pill an excess of capability she knew it did not have. By design, her experiment helped her pinpoint the effectiveness of self-tracking as the method for “gaining energy” by using sensing technologies, here ingested sensors.
Dougherty’s work is an extreme example of the more common discussions of whether keeping a daily step count really “made” someone take more steps or whether seeing streaks of consistent behavior displayed on a screen produced the desire to continue that behavior. It demonstrates how carefully QS participants reflected on exactly what works, how, why and for whom. Such reflections were necessary in part because the techniques and technologies often came from other, less familiar domains – medical, alternative health, sensors, algorithms and more. As a result, the rules governing their use were not self-evident. Making a placebo do work in the context of one’s personal life was not a self-evident maneuver. Migrating technologies and techniques from established to new domains of use did open up new sites for realizing the efficacy of the algorithms that occasionally played a role. They also required, as we saw in Dougherty’s case, carefully tested leaps of faith.
Commodifying promise
This faith takes a different valence in the neo-behaviorism that informs the production of self-tracking technologies. Silicon Valley companies position themselves not as mere producers of data but as producers of their users’ behavior change (Schüll, 2016). While QS participants asserted dominion by creatively repurposing technologies, self-tracking companies seek to effect more controllable and measurable outcomes in their users, in particular algorithmically timed “nudges” toward “healthy” choices, such as a fork that vibrates when it infers that a person is eating too quickly. While this trope of nudge-based “awareness” has made its way into the QS community, in commercial circles the active learning and more expansive questioning that QS participants value all but disappear. Products like activity trackers are sold with the promise of keeping the user “on track.” Schüll describes these nudges as a “curious mechanism, for it both presupposes and pushes against freedom; it assumes a choosing subject, but one who is constitutionally ill equipped to make rational, healthy choices” (Schüll, 2016: 12).
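A minimal sketch suggests how computationally thin such a nudge can be. The bite detector, threshold and one-minute window below are our invented assumptions, not any product’s actual design.

```python
# Hypothetical sketch of an algorithmically timed "nudge": if bites arrive
# faster than a designer-chosen threshold, vibrate. All parameters invented.
from collections import deque
import time

recent_bites = deque()       # timestamps of detected bites
BITES_PER_MINUTE_LIMIT = 8   # the designer's notion of eating "too quickly"

def on_bite_detected(vibrate):
    """Called by the (assumed) bite-detection sensor; vibrate() drives the haptic motor."""
    now = time.time()
    recent_bites.append(now)
    while recent_bites and now - recent_bites[0] > 60.0:
        recent_bites.popleft()           # slide a one-minute window
    if len(recent_bites) > BITES_PER_MINUTE_LIMIT:
        vibrate()                        # nudge the eater to slow down
```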
To achieve the kind of social and economic consensus necessary to trade these nudges on a commodity market, firms seek to solidify exactly what nudging technologies do to whom and how. Gone is the experimentation. To nudge people “for life”, as Schüll (2016) puts it, eclipses the cognitive work and social negotiations that consumers must do to make a call to action, like “time to get more exercise,” plausible and effective. Instead, the trope of the nudge offers manufacturers a technologically buildable and institutionally scalable, in short trade-able, response to the QS debates over who or what has the power to change me and my body.
As we saw in Dougherty’s case, QS self-tracking and experimentation rely on leaps of faith similar to the leaps of faith embedded in the commercial commodities that Schüll critiques. In both cases, people build on the possibility that a technology could “make” someone do something, even if they have radically different views of why or how. This suggests an uncomfortable prospect. The self-tracking world trades in non-commodified and commodified technologies as well as in technologies that will commodify or de-commodify over time. Commodity fetishization in the full Marxian sense – the systematic erasure of human labor so that commodities can be traded and capital accrued – is never inevitable, but also never very far away. When self-trackers tinker with commodities in hopes that the objects actually fulfill their promises, or at least make them available for repurposing, the broader cultural injunction to suspend disbelief about the conditions of their production becomes a prospect, too.
In the QS case, the proposed social contract between nudge-ers and nudge-ees is still nascent enough to make room for QS participants’ soft resistance. Here, we remember Graeber’s (2005) suggestion that the occasional leap of faith, taken with talisman in hand, is less socially concerning than full-blown mythologies, such as the constant references to “healthy living” that we see in advertisements for various self-tracking devices. There is a budding theology forming in the consumer health technology marketplace that proposes the gods of behavior change are real and that someone could prove it if only they knew what talisman to bring and effect to register. The QS injunction to experiment in a personal setting works against the formation of a totalizing theology of “health.” But it, too, begins with a leap of faith that some technology somewhere could work. This faith is not an intellectual mistake. Self-trackers are giving technologies and their producers the benefit of the doubt, selectively and for a time.
Conclusion
In our two ethnographic cases, people divide and are divided into algorithm makers and algorithm users. Sometimes the divide is blurred, and sometimes sharply defined. The division is social in that it generates claims to status, expertise and community. It is cultural in that each side crafts its own practices, rituals and knowledges. As Marx (1977[1887]), Pietz (1985) and Graeber (2005) remind us, the divide is economic in that it makes possible the exchange of algorithms (and data) as materialized artifacts. These algorithms, like fetishes, enable parties to productively misrecognize what the technology is and does and, as a result, invite them to engage in a trade of promise and possibility.
Using the fetish as a heuristic, we can lay out the steps by which people vest algorithms with promises and possibilities that extend beyond what the math, lines of code, steps or ingested sensors can do. It explains how slippages occur between human practice and human possibility, algorithmic capability and algorithmic promise and how they tender the algorithms’ exchange.
The twist of fetishism that Graeber (2005) so artfully reveals is that this economic segregation of us and them happens at those historical moments when in practice the division barely holds. Accusations of “fetish!” are caustic in that they insist on the discriminatory act, a critical stance that purports to unmask and yet reveals its own anxieties in the process. Yet in technology as elsewhere, there is also a creativity sparked by the leap of faith that fetishism allows. Call it magical realism in practice. Novice and expert vision product teams are building computational systems that can “see,” although perhaps only to distinguish a baby from a dog. QS participants do effect new modes of knowing about bodies in the name of awareness. These steps are possible, in part, thanks to the agency we grant algorithms.
Lest we believe the idols of our own making, we urge caution. Too easily the work of purported “users” of algorithms becomes lost in the noise of capitalizing on invention and innovation. We do not think that algorithms, and those who can claim to have invented them, deserve all the credit for what algorithms can do. Our fieldwork testifies to the generative possibility of believing for a moment in the powers of algorithms, but only if we stop short of demonizing or deifying them.
