Introduction
On digital platforms, users and algorithms are intertwined in a convoluted and reactive relationship. One instance in which users’ encounters with algorithms are expressed most explicitly is targeted advertisements (targeted ads). Rooted in algorithmic predictions and prescribed data profiles, targeted ads are a result of dataveillance. Dataveillance is the modus operandi of targeted ads and recommendation systems alike; it is an automated and continuous collecting, datafying, and processing of unspecified behaviors and personal attributes, with the aim of storing and correlating data points to influence or regulate (users’) behavior. In this process, data profiles (“data-selves”) are continuously recalibrated based on users’ interactions with algorithmic outputs (Cheney-Lippold, 2017; Lupton and Michael, 2017; van Dijck, 2014). When users encounter targeted ads, they are faced with a conspicuous manifestation of their data profile(s), an otherwise obscured actuality of dataveillance.
One way to scrutinize the processes of dataveillance in relation to targeted ads is to turn to users to investigate how such ads are perceived and responded to (Kappeler et al., 2023; Nicholas et al., 2021; Strycharz and Segijn, 2022). Users’ imaginaries, that is, the ways in which users imagine and perceive these processes to work, play a crucial role in shaping user behavior (Kappeler et al., 2023). Understanding these perceptions is also essential for developing regulatory approaches and bolstering user awareness (Nicholas et al., 2021; Strycharz and Segijn, 2022), and as such, studying these imaginaries can help develop more effective strategies of media literacy. One particularly productive approach to such an investigation is to study how users make sense of dataveillance in everyday life in situ (Büchi, Festic, and Latzer, 2022: 10), remaining close to the site of encounter and its impression. This article seeks to understand how users imagine their relationship to datafied environments, specifically through the lens of targeted ads.
This article is an invitation to consider TikTok, a short-video social media site, as an empirical ground on which such imaginaries and reflections are performed and circulated. With over 1 billion active users, TikTok is considered one of the most influential social media platforms today (Cervi and Divon, 2023; Lee et al., 2022; Sherman, 2020; Silberling, 2021). Since its introduction to an international userbase in 2017, TikTok has grown into a platform with its own vernacular, meaning specific “genres of communication” that “emerge from the affordances of particular social media platforms and the ways they are appropriated and performed in practice” (Gibbs et al., 2015: 257). Moreover, TikTok has gained the label of an especially algorithm-driven platform, with its recommendation algorithm considered more “aggressive” and “addictive” than those of other platforms (Schellewald, 2021; Siles and Meléndez-Moran, 2021). A platform known for its stark algorithmic layer is thus a particularly suitable site for investigating the narratives about algorithmic recommendations that users share.
In recent scholarship, the notion of algorithmic imaginaries in the context of platforms, personalization, and dataveillance has grown into an analytical lens through which an interplay of social, technical, and cultural entanglements can be identified (Lupton, 2020; Nicholas et al., 2021; Sörum and Fuentes, 2023; Zhang et al., 2024). Several studies have sought to delineate what types of imaginaries of algorithmic systems (also referred to as “folk theories of algorithms,” “lay understandings,” “awareness,” “gossips,” “stories,” and indeed “algorithmic imaginaries”) are enacted by users (e.g. Bishop, 2019; Bucher, 2017; DeVito et al., 2018; Eslami et al., 2016; Lee et al., 2022; Lupton, 2020; Peterson-Salahuddin, 2022; Schellewald, 2022; Siles et al., 2022). However, only a few studies approach users indirectly, by turning to content shared on the platforms, such as memes. Crucially, none of the above-mentioned studies focused explicitly and in-depth on memes as a site of collective imaginaries of dataveillance. This article does so by analyzing the TikTok meme #targetedAds.
Memes, as cultural artifacts, are inherently collective in nature and thus can reflect (and challenge) collective imaginaries. It is no surprise that memes, online artifacts that circulate and continuously change, became a part of public discourse, communicating ideas ranging from civil discontent and corporate campaigns to political stances (Rogers and Giorgi, 2023: 1; Zeng and Abidin, 2021: 2462). Indeed, this article follows the argument that memes can be considered a “vehicle to extrapolate users’ engagements with algorithms” and their imaginaries (Stanusch, 2024: 2). Building upon the “entanglements of algorithms, users, and memes” (Stanusch, 2024: 3), an analysis of memes related to targeted ads can make the relevant imaginaries “more tactile and inhabitable” (Stanusch, 2024: 18). Turning to memes’ “multiplicity” (Milner, 2016: 39) by considering memes as expressions, visualizations, and commentary on collective anxieties, hopes, and shared beliefs allows us to assemble a particular imaginary of users’ understanding of targeted ads and dataveillance.
This article explores how users imagine the relation between themselves as datafied subjects and the outputs of algorithmic predictions in the form of targeted ads. It does so by, first, venturing into the space of TikTok and exploring the user-generated content under the hashtag #targetedAds; second, analyzing the dominant narrative framings users share on the topic; and, third, applying the notion of “collective imaginaries” (Gandini et al., 2023) to the analyzed TikTok videos (“tiktoks”). This article intends to contribute to the holistic understanding of dataveillance in the targeted ads context by looking at indirect expressions of users’ perceptions and feelings about it, adding to the body of work on shared imaginaries of dataveillance across everyday social media use. Unveiling or assembling imaginaries is not only an explanatory endeavor, but also an act of engaging with power embedded in the circulating “notions” of algorithms (Beer, 2017: 2). As such, this article expands the research within the area of algorithmic and dataveillance imaginaries by simultaneously turning to memes as sense-making devices and to TikTok as a platform particularly suited to discussing users’ socio-political activities and opinions.
Theoretical framework
Dataveillance and algorithmic imaginaries
Whether used to target by showing—for example, to influence political orientation (Susser et al., 2019)—or by not showing—for example, to reinforce racialized structures of oppression by withholding certain ads or content from certain groups of users (Peterson-Salahuddin, 2022)—targeted ads, an integral part of dataveillance, are the backbone of the most popular platforms and an inherent part of the daily practices of most online users (Kant, 2020: 10; Srnicek, 2017: 6; Strohmaier et al., 2021: 197; Strycharz and Segijn, 2022: 575). Targeted ads are a manifestation of data-driven profiling and algorithmic decision-making, whose pitfalls have been a growing object of scrutiny,1 raising concerns around users’ agency, privacy, justice, exploitation, and the threats of generating predictions based on obscure and often inaccessible criteria. However, user behavior is shaped not just by manifestations of how dataveillance operates, but also by perceptions and assumptions about how such operations work.
To study the knowledge users share of dataveillance and its operations is also to study how users feel and experience these algorithm-driven systems and these systems’ expressions, for example, in the form of targeted ads: algorithms are not only known or understood but also felt and experienced.
Following the research direction of a (cultural) sociology of algorithms proposed by Airoldi (2022), this article seeks to shed light onto “socio-material entanglements from their [users’] surveilled and classified perspective” (Airoldi, 2022: 152). This article also builds on the premise that algorithms produce data subjects who, in turn, ascribe subjectivity to these algorithms via the process of “othering” (Gandini et al., 2023: 420; following Goriunova, 2019). In the process of “othering” the algorithm, a by-product of personalization, certain “collective imaginaries” are assembled (Gandini et al., 2023: 427). To quote directly from Gandini et al. (2023: 427), “collective imaginaries” are the conviction that the algorithmic systems of a certain platform actually work in a standardised way for all users, applying not solely to them. These are part of a counter-subjectivation process whereby imaginaries emerge as: a) somewhat congruent, in that most users share a similar picture of how algorithms work; b) particularistic but generalised, i.e., mostly due to the user's own (largely unaware) personalisation; and c) surprisingly congruent, regardless of the use they make of the platform.
Memes as analytical windows
Memes are digital assemblages in both content and form that serve as communicative acts and channels. As medium-native objects that are embedded and co-created by the sociotechnical ecology of the digital, memes are both empirically and ontologically interesting to explore, particularly through their relationship with larger algorithmic systems that they are a part of. Indeed, “memes can only be understood in relation to algorithms because algorithms constitute the flow that controls a meme” (Stanusch, 2024: 2). Algorithms can be understood as computational procedures that utilize large quantities of data to automate decision-making processes (Katzenbach and Ulbricht, 2019; Wijermars and Makhortykh, 2022: 944). But when asked to define an algorithm, even practitioners tend to share a “vague, ‘non-technical’ meaning, indicating various properties of a broader ‘algorithmic system’” (Seaver, 2017: 3). In the daily engagement of users with social media, algorithms deliver personalized content—such as memes and ads—to users’ feeds (Bucher, 2017). As such, memes are a part of the infrastructure that contributes to dataveillance.
The content of memes is a particularly fruitful site of negotiation and expression of both narratives and broader imaginaries of dataveillance.2 On the one hand, memes are vehicles of collective expression, with their “multimodality, reappropriation, resonance” greater than any single text or instance (Milner, 2016: 40); “internet memes have become digital narrative tools with which people understand and interpret reality” (Galip, 2024: 2). On the other hand, memes foster niche communities with distinct cultural identities, often positioning themselves in contrast to mainstream discourses (Milner, 2016: 49). Indeed, as Zeng and Abidin (2021: 2460) argue, following Highfield and Leaver (2016: 48), “these seemingly trivial visual formats should not be categorically overlooked as isolated social media artefacts, because ‘visual social media content can highlight affect, political views, reactions, key information and scenes of importance.’” When made and circulated across TikTok, memes such as #targetedAds can express collective imaginaries of both content creators on the site and their audiences. Given the multiplicity of meanings that memes invite, #targetedAds can activate several narratives, while only some of them are reflective of the algorithmic processes in question (Figure 1).

Examples of comments from TikTok users in response to #targetedAds videos. User profile pictures and names have been omitted for anonymization purposes.
TikTok as a sociotechnical and empirical field
This article approaches TikTok not merely as a distribution platform, but as a discursive space where sociotechnical commentary unfolds in playful and vernacular forms. TikTok's vernacular is expressed via an interface design and algorithmic ecology which afford, but also nudge, users to actively participate in social activities on the site by operationalizing playful content creation (Cervi and Divon, 2023: 3). Similarly to other social media platforms, TikTok also functions as an issue space (Rogers, 2018), where actors (users and organizations) can collectively engage and organize around socio-political issues. Indeed, by bringing together the “performativity of YouTube, the scrolling interface of Instagram, and the deeply weird humor usually reserved for platforms like Vine and Tumblr” (Abidin, 2021: 84), TikTok's vernacular centers around activist, collective, but also ambiguous forms of trend-driven participation.
Aside from inherent platform affordances and a vernacular that offer particularly fruitful ground to research socio-political issues, TikTok is perceived as a particularly algorithm-driven platform. Once the user opens TikTok, she is welcomed by the For You Page (FYP), a landing site in the form of a continuous scroll feed. The FYP is the main space on TikTok in which users encounter new content (videos and advertisements). It is curated according to TikTok's proprietary “recommender system,” which combines algorithmic personalization-based and prediction-based anticipation and delivery of content that the user might like (Peterson-Salahuddin, 2022: 5). It is through the FYP that users “actively seek out, learn, participate in, and engage in what is ‘going viral’ at the moment” (Abidin, 2021: 79), consuming content but also being prompted to become active creators (Divon and Eriksson Krutrök, 2023: 124).
TikTok's affordances that aid users in navigating the flow of TikToks on the FYP are challenges and hashtags. Challenges are, inherently, “complex multimodal memes” (Divon and Eriksson Krutrök, 2023: 125) which consist of moving images, text, and sound, and which call for an active mode of performability and replicability, but also adaptation. Social media challenges (and TikTok challenges alike) are primarily an expression of play and playfulness, delivering viral content and inviting participation (Songer and Miyata, 2014). However, within TikTok's vernacular, challenges are often expressions of “playful activism,” political opinions, and public discourses, “inviting publics to gather around specific issues” (Cervi and Divon, 2023: 3). Users can locate content that responds to a given challenge (and, often by extension, a particular issue) via hashtags, which can be searched for through TikTok's interface (Cervi and Divon, 2023: 3), and via “sounds.” These qualities turn TikTok users into what Zulli and Zulli (2020: 7) define as “imitation publics (…) collection of people whose digital connectivity is constituted through the shared ritual of content imitation.” Thus, the built-in functionalities and affordances of TikTok, and especially TikTok challenges, encourage and reflect the qualities of memetic content (Zeng and Abidin, 2021: 2462). These memetic qualities invite one to search for and “read” TikTok challenges as memes.
To analyze the TikTok challenge of #targetedAds and its memetic qualities is to engage with the ways in which #targetedAds activates and materializes the collective imaginaries of dataveillance within a particular platform context. By doing so, this article answers the following research questions (RQs):
RQ1: What kinds of narratives about algorithmic recommendation systems and dataveillance emerge from #targetedAds on TikTok?
RQ2: What imaginaries of dataveillance are normalized through the analyzed #targetedAds TikToks?
RQ3: How, if at all, does the platform specificity of TikTok play a role in the way imaginaries performed in #targetedAds might have been framed to appeal to TikTok publics?
Methodology
Data collection
In the following analysis, I employ digital methods, as developed by Rogers (2019), and follow an exploratory research approach, as described by Rogers and Giorgi (2023), to scrape memes and set up a meme collection. To study memes, one has to engage in a process of searching for and “collecting” memes into a sample group, which becomes a set or a curated “collection” (Rogers and Giorgi, 2023: 3). Thus, collection-making replicates the ontological quality of memes, as it treats them not as single instances but as collections of online artifacts that exist in relation to others (Rogers and Giorgi, 2023: 3). To locate the memes, I use the hashtag #targetedAds. Hashtags allow for querying content which has been labeled by meme-creators to describe a meme's format and/or the content of TikToks. Following Zulli and Zulli's (2020) memetic analysis of TikTok, a hashtag functions not only as an organizational principle in a query-sense, but also as a digital

A monthly histogram of #targetedAds generated using 4CAT, illustrating the dissemination of content across time in the collected data sample.
The search was conducted on a newly created account and in a research browser, to minimize the level of personalization that could influence the results. Memes were accessed through the TikTok hashtag page of #targetedAds and scraped. Since the TikTok API was inaccessible for research purposes, Zeeschuimer, a browser-extension tool, was utilized to scrape TikToks via computer browser access. Zeeschuimer is compatible with the 4CAT Capture and Analysis Toolkit, an open-source web-based research toolkit that captures, manipulates, translates, and visualizes thread-like data from various online sources (Peeters and Hagen, 2022: 572). This facilitated the transfer of the scraped data and ultimately the creation of a dataset. The hashtag page, on the day of scraping, contained 677 videos with a total of 7,255,353 views.
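The descriptive statistics reported for the hashtag page (677 videos, 7,255,353 total views) and the monthly distribution shown in the histogram above can be reproduced from a scraped export with a short script. The following is a minimal sketch only, not the study's actual code; the CSV columns `timestamp` (Unix seconds) and `views` are hypothetical, and the column names in a real 4CAT export may differ.

```python
import pandas as pd


def summarize_collection(csv_source) -> dict:
    """Summarize a scraped TikTok hashtag collection.

    Assumes a (hypothetical) CSV export with columns:
    'timestamp' (Unix seconds) and 'views' (view count per video).
    """
    df = pd.read_csv(csv_source)
    # Convert Unix timestamps to datetimes for time-based grouping.
    df["posted"] = pd.to_datetime(df["timestamp"], unit="s")
    # Count videos per calendar month ('MS' = month start), as in the histogram.
    monthly = df.set_index("posted").resample("MS").size()
    return {
        "n_videos": len(df),
        "total_views": int(df["views"].sum()),
        "monthly_counts": monthly,
    }
```

A bar plot of `monthly_counts` would then approximate the kind of per-month dissemination histogram that 4CAT generates.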
To perform a qualitative content analysis of the collected TikToks, the top 10 most viewed, liked, and shared TikToks were selected. Given that TikTok's ordering of content is opaque and ever-changing, content was collected from three different “rankings” to expand the variety of the data. Choosing the most viewed, liked, and shared TikToks also reflects the most successful content on the platform (Hautea et al., 2021: 5). To ensure data protection for the users, the following discussion of the findings focuses on dominant patterns rather than singular descriptions of each of the TikToks. In the following analysis and included tables, TikToks’ titles were substituted with numbers.
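The three-way selection described above can be sketched in code. This is an illustrative sketch under stated assumptions, not the study's actual procedure: the column names `id`, `views`, `likes`, and `shares` are hypothetical placeholders for whatever the scraped export contains.

```python
import pandas as pd


def top_subcollections(df: pd.DataFrame, n: int = 10) -> dict:
    """Build the three top-n subcollections (by views, likes, shares).

    Assumes (hypothetical) numeric columns 'views', 'likes', 'shares'
    and an 'id' column identifying each video.
    """
    return {
        metric: df.nlargest(n, metric)["id"].tolist()
        for metric in ("views", "likes", "shares")
    }
```

Because the same video can rank highly on several metrics, the union of the three lists may be smaller than 3n, which is consistent with the overlap between sub-collections reported in the findings.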
It is important to note the dependency on scraping data that is inevitably organized according to algorithmic means designed by TikTok, and thus provides but one view of the content available on the platform. One has to acknowledge the challenges of relying on the platform's affordances and its continuously changing algorithmic environment while studying it. Indeed, “it is these characteristics of dynamicism, heterogeneity, interconnectedness and opacity of algorithmically infused societies that makes their study more challenging” (Strohmaier et al., 2021: 197). While performing an initial exploration of this hashtag space using the scroll-through method, some videos—some with thousands of views and likes—did not appear on the hashtag page on the day of the scraping. Similarly, over the span of another few months, some videos that had been scraped from the hashtag page no longer appeared on it, despite remaining accessible on the platform via a direct link. One can also point out possible discrepancies between the content feed delivered on the desktop version of TikTok versus the mobile app.
Data analysis
To analyze memes from TikTok, I work inductively to perform a thematic analysis and create a codebook that ensures consistency. The qualitative analysis is based on the tradition of close reading and visual analysis. It is crucial to note that memes are “grounded in contextualism,” and as such their reading depends on the cultural contexts of the users who encounter them (Cervi and Divon, 2023: 5; Divon and Eriksson Krutrök, 2023: 125). Drawing on other scholars who have analyzed TikTok content (Cervi and Divon, 2023: 5; Zeng and Abidin, 2021: 2463), and in recognition of the “multimodal grammar” (Milner, 2016: 50) that characterizes memes, the codebook considered three modalities of the TikTok videos of the #targetedAds memetic challenge: content, stance, and form.
The three levels of the codebook were chosen according to a scheme developed by Shifman, which distinguishes between content, stance, and form (Shifman, 2013: 367; Shifman, 2014: 40). Content refers to the semiotic communicative core of the meme: the ideas, but also the possible imaginaries, that are being conveyed. Form relates to the formal qualities of the meme in relation to its meme format, such as the visual elements and audio components, ranging from memetic moving-image patterns to the logics of text-image juxtaposition; Shifman calls it “the physical incarnation of the message, perceived through our senses” (2014: 40). Stance, in Shifman's understanding, refers to the way in which the meme is set to convey the communication, such as the discursive position towards its audiences; stance is thus a way to analyze and “depict the ways in which addressers position themselves in relation to the text” (Shifman, 2014: 40).
“Oh Yeah, It’s All Coming Together”: becoming the algorithm
The first sub-collection (the 10 most liked videos) is discussed as an expression of an “Activated Collaborative Imaginary,” where users actively and affirmatively engage with algorithms to influence outcomes (e.g. by “training” the algorithm) (Table 1). As one of the top 10 videos did not relate to the topic of targeted ads, the 11th most liked video was included in the sample to maintain consistency across data sub-categories.
Top 10 TikToks from the #targetedAds hashtag page ordered according to the highest number of likes.
The #targetedAds meme shares a consistent set of formal qualities. It is useful here to turn to repeated patterns—or, as Pilipets (2023) calls them, “gestures”—that make up the form of this TikTok meme. The focus on the embodied and sensory character of the memetic gestures (Goriunova, 2014; Pilipets, 2023: 121) allows one to extract the core mimicry elements of the meme. In #targetedAds, the video is recorded from the “perspective” of the phone: the camera is on, recording empty space “above” the phone or in front of it. A person enters the frame, looks around, and quickly leans over the phone, beginning to whisper various phrases and words, sometimes covering their mouth with a hand in a secretive gesture. The audio is either recorded anew or reused as a TikTok “sound” template—one of the driving principles of memetic content creation and organization on TikTok (Zeng and Abidin, 2021: 2462, 2469)—from other #targetedAds memes. Another repetitive formal element is a short text within the TikTok frame staging the context, for example, “when my girlfriend leaves her phone out unattended.”
Content-wise, #targetedAds depicts the act of speaking aloud next to the phone of another person to influence that person's targeted ads and thus influence their behavior in favor of the meme's protagonist. For example, a spouse tries to influence their partner's targeted ads so that the ads would reveal what gifts to buy. The parties involved vary: spouses, friends, employees, and family members. The overarching idea is that the phone “listens in” on our conversations, and that by repetitively saying given keywords, the phone will “catch” the desires of the user and will begin to target them with preferred ads.
In #targetedAds, the stance of the meme protagonists is that of a savvy and opportunistic main character; they influence other people's targeted ads either for self-gain (gifts they wish to receive) or for the other party's seeming gain (prompting them to go to therapy). The meme protagonists often play on various stereotypes, including gender stereotypes (men not being able to express their feelings, women wanting to receive material gifts). It is worth noting a particularly strong presence of gendered (heterosexual) dynamics embedded in those stereotypes. The meme protagonist appears to side with the viewer in mocking (and exploiting) the “phone owner.”
Interestingly, we, the “viewers,” are situated as the phone itself; our perspective is that of the phone's camera, and the meme protagonists seem to be speaking to “us” (see Figure 3). Is the phone, similar to the viewer, a passive listener that watches through the camera? If so, the material phone is given the embodiment of an autonomous, human-like actor, where no notion of “algorithm” appears. Even if a phone is a hard-wired device, in #targetedAds it is given the agency to “decide” what ads to deliver.

A frame from a #targetedAds TikTok posted on the official account of NordVPN (video 5C).
To influence the calculated, mechanical behavior, that is, the algorithmic recommendation of targeted ads, one has to address the phone itself, speaking to it as if it were a listening, human-like interlocutor.
The collective imaginary in #targetedAds is tied to the materiality of “the phone,” a misconception that provides an algorithm with a material and embodied form.
In their discussion on harmful outcomes of algorithmic use—what they call “algorithmic violence”—Bellanova and others argue that the obfuscation of algorithmic processes and their potential to cause statistically motivated or invisible harm comes from two localities: data collection and processing, and “the technical operations of translation that render the world computable” (Bellanova et al., 2021: 142 following Goodwin, 1994). In other words, one locality is how a variety of our direct and indirect actions as users is being datafied, while the other locality is how these abstracted, statistical, and datafied points are being “repacked” into concrete algorithmic predictions (Bellanova et al., 2021: 142). #TargetedAds provides a glimpse into users’ collective imaginary of both of these localities.
On the one hand, #targetedAds reveals a widespread awareness of surveillance and that, indeed, “in many situations the subjects of surveillance are aware of what is happening” (Lyon, 2018: 75). On the other hand, these memes suggest that individuals have power over both localities: the practices of data collection and their subsequent computational outcomes. In this view, users seem to assume that their physical, embodied selves are directly translated into their “algorithmic identities” (Cheney-Lippold, 2017: 6). However, the major consequence of being trapped within algorithmic identities is that users have little to no direct agency over negotiating what these algorithmic identities are; rather, this task falls only on the “private parlance of capital or state power” (Cheney-Lippold, 2017: 6). While #targetedAds spreads and conveys an affirmative and active imaginary of users collaborating, influencing, and altering “algorithmic identities,” the shortcuts it relies on are inaccurate. “Prompt feeding” the phone that is “listening in” to influence recommendation algorithms can further obfuscate the actual processes and outputs of dataveillance.
It is not the users who are being watched, but rather their digital, datafied avatar; “it is our data that is being watched, not our selves” (Cheney-Lippold, 2017: 21). This uncanny ontological dissonance between the statistical (or the algorithmic) and the individual is also what Celis (2020) discusses. While focusing on the question of power relations and algorithms, Celis notes that most of the social infrastructures that we are embedded in are “characterized by a tension between an emphasis on the individual (through dataveillance and personalization) and an emphasis on the statistical analysis of populations in which the individual loses its key role (through big data analysis and pattern recognition technologies)” (2020: 296). Such tension situates users in a tricky position: even if they are aware that their data is being collected, the data subject they represent for the algorithm is unavailable for them to question, analyze, or meaningfully influence. As part of a larger dataset, users belong to categories larger and wider than what is contained within the notion of a “self,” which they perform both offline and online. Users are trapped between the binary of ultimately personalized subjectification and dissolved multiplications.
#TargetedAds shows an overwhelmingly affirmative stance towards algorithmic recommendation systems and targeted ads. TikTok memes of #targetedAds promote a collaborative relationship towards dataveillance. Such a collaborative relation assumes that users can benefit from embracing data collection practices by influencing the algorithm for their own gain. As such, the collective imaginary expressed in #targetedAds corresponds to the sentiments expressed by participants studied by Sörum and Fuentes (2023), the imaginary of “the good data,” where dataveillance is a way “of offering convenience, relevance and better services (…) [and] as accepted, expected and even appreciated” (2023: 32–33). Sörum and Fuentes note that such a positive yet deeply passive imaginary relies on a belief that data extraction benefits consumers by serving company interests in delivering the best experiences and goods (2023: 32–33). Targeted ads, alongside other forms of dataveillance, are considered a convenience (Strycharz et al., 2019; Zhang et al., 2024: 2714). In #targetedAds, a similar belief is expressed, yet it appears more active in its participation in dataveillance.
The collective imaginary that #targetedAds evokes is an active and collaborative one, even though users are savvy in influencing the algorithmic identity of others (their partners, friends, and supervisors) rather than their own. As such, #targetedAds provides an example of socializing algorithms (Airoldi, 2022). While all machine learning systems are socialized during their training because data intrinsically carries a specific cultural imprint (Airoldi, 2022: 21), some forms of such algorithmic socialization come from “local data contexts”: user-generated feedback to the systems’ workings, for example, fixing an incorrectly marked spam email (Airoldi, 2022: 56). This also links to the finding of Siles et al. (2022: 7), who defined a stage called “training” as a step in “knowing” the TikTok algorithm: users take active steps in shaping their FYP while actively “awaiting” the moment when personalization becomes visible. “Training” is precisely what #targetedAds alludes to. The collective imaginary of #targetedAds is rooted in the premise of (re)creating local data contexts for recommendation algorithms. Furthermore, Airoldi speaks of a rare occasion in which users explicitly train the system: “the case of an interactional configuration (…) [that] implies a horizontal circulation of information between an algorithmically aware user and a successfully socialized machine, which is responsive to the user's datafied inclinations. (…) the machine learning system does not need to spy on us in order to accumulate data traces” (Airoldi, 2022: 97).
“Hold Up!” when the algorithm knows too much
The second sub-collection of the top 10 most shared TikToks is discussed as an expression of an “Antagonistic Passive Imaginary,” where users express discomfort or unease about being surveilled, portraying the algorithm as “creepy” or too perceptive. In this collection, eight out of the total 11 videos were repeated from the first sub-collection. Two of the three TikToks not previously included did not follow the format of #targetedAds; both, however, contained the hashtag and focused on the topic of targeted ads. Given the centrality of the hashtag in gathering and connecting users around a certain issue or a TikTok challenge (Cervi and Divon, 2023: 3), the two TikToks were included for a further close reading (Table 2).
Top 10 TikToks from the #targetedAds hashtag page ordered according to the highest number of shares.
The two TikToks follow one of the standard formats of TikTok videos and were therefore most likely made organically for the platform. They replicate some of the memetic forms of TikTok and thus embody the platform's memetic affordances: the short commentary videos share a story with a “pun” at the end. The videos were recorded on a phone, either from a “selfie” perspective or as a mirror reflection of the protagonist holding the phone.
Content-wise, both TikToks focus on significantly different issues than the #targetedAds memes. First, the form suggests a more direct mode of content delivery; rather than recording oneself performing an action as if the camera were not on, these two TikToks speak directly to the viewer.
For both protagonists, the targeted ads they encountered seem “creepy.” The protagonists acknowledge that they knew they were being surveilled and targeted according to their personal preferences; however, they share that the targeted ads were “too personalized,” revealing intimate knowledge that they themselves had not knowingly shared online. These TikToks stand in stark contrast to the #targetedAds meme. Rather than taking on the role of savvy players in a game of collaboration with the algorithm, the protagonists express their helplessness and fright at the “knowledge” targeted ads seem to possess. These TikToks echo a previous finding of Bucher (2017), who speaks of “whoa moments,” that is, encounters with an algorithmic recommendation that make the user realize that profiling takes place; “whoa moments arise when people become aware of being found” (2017: 35). Bucher argues that users shared a common dissatisfaction with such uncanny, algorithm-generated moments, which “felt wrong” morally (2017: 35).
As a collective imaginary, the “perceived source” of dataveillance in these TikToks is either the commercial companies behind platforms such as Facebook or material technologies such as the phone, a belief similarly found in the research by Zhang et al. (2024: 2716). The materiality of the phone is prominent, similarly to its role in the Activated Collaborative Imaginary discussed in the previous section. This phenomenon, signifying the “worry that their [users’] smart devices listen in on them and relevant ads are displayed in social media feeds or websites based on recent conversation topics,” is referred to as the “surveillance effect” (Nicholas et al., 2021: 3). The rise of smart devices has perpetuated the concern about technology, particularly phones, listening in on users (Pridmore and Mols, 2020: 3). Despite the ambiguous truthfulness of these claims and the lack of clear empirical evidence (Nicholas et al., 2021: 3), many users believe that their devices are constantly listening to them for targeted advertising purposes (Zhang et al., 2024, following Fowler, 2019). The Antagonistic Passive Imaginary expressed in these two TikToks reflects this belief.
When the algorithm remembers it all too well
The third sub-collection, consisting of the top 10 most viewed TikToks on the hashtag page, is discussed as an expression of an “Antagonistic Proactive Imaginary,” in which users resist dataveillance, often by promoting solutions like VPNs or behavioral changes that are sometimes appropriated by corporate interests. Five out of the 10 analyzed TikToks were created and posted by official, company-associated accounts, ultimately serving as ads. Four of these five TikToks were posted by the company NordVPN (Table 3).
Top 10 TikToks from the #targetedAds hashtag page ordered according to the highest number of views.
TikToks posted by NordVPN follow three different meme formats. The first format is a popular TikTok meme template where the same protagonist is filmed from two different angles, appearing to engage in a dialogue with themselves. The two other TikToks follow recent meme templates; the first one is “Pedro Pascal Eating a Sandwich,” which gained popularity in March 2023. The second one, “Nicolas Cage Looking at Pedro Pascal/Make Your Own Kind of Music,” was popularized in February 2023. Both of these meme templates originated on TikTok and spread to different platforms in still and moving image formats.
Content-wise, all three memes produced by NordVPN situate the meme protagonist as the user (with whom the viewer is supposed to identify) who is faced with unwanted targeted ads. For example, in the first Pedro Pascal meme, the protagonist sees an ad promoting drain hair removal, prompting the realization that, despite no longer being in a relationship, the targeted ad reminded him of his ex-girlfriend's hair clogging the drain (Figure 3). Targeted ads are a record of the past, intimately tracking daily life (including one's “past” life). By being somewhat “off-sync,” targeted ads remind the user of a past “self.” Stance-wise, this TikTok could easily be mistaken for an actual, non-commercial meme rather than a promotion of NordVPN. While targeted ads are portrayed as keeping a detailed record of one's past, a VPN is presented as the solution to dataveillance. Using a VPN is framed as savvy, like gaming the system of targeted ads. Aside from appropriating the memetic template, NordVPN also somewhat “hijacked” the #targetedAds hashtag by attempting to commodify it.
The TikTok memes analyzed here open the possibility of algorithmic interpellation of data subjects. The notion of interpellation is exemplified by the moment an individual recognizes themselves as a subject upon being hailed by a policeman's call of “hey, you!” In an algorithmic context, an individual is hailed not as a singular subject but rather as several algorithmically constructed profiles (Celis, 2020: 308; following Rouvroy and Berns, 2013: 12). A user can encounter an ad that is based on a data profile that is no longer a part of her lived experience. The very moment such algorithmic interpellation happens, a performative process dependent on both stable and unstable, past and present data collection and statistical calculation follows (Matzner, 2016: 206–207; as quoted by Celis, 2020: 309). As the somewhat melancholic and even mortifying aesthetics of these TikTok memes exemplify, for one moment, the user becomes only a singular instance of a selection of many statistically probable data subjects that can have little to no connection to the way the user would choose to identify themselves. Such an encounter and interpellation of a data subject by a targeted ad can produce a “chilling effect”; “people's sense of being subject to digital dataveillance can cause them to restrict their digital communication behavior” (Büchi, Festic, and Latzer, 2022: 10). Ironically, one of the results of such “chilling effects” is users turning to VPNs as a protective measure (Kappeler et al., 2023: 9; Strycharz and Segijn, 2022: 581), a solution which may “lead to a false sense of control” (Strycharz and Segijn, 2022: 581) over dataveillance processes.
While memes have been argued to serve as a counter voice to the dominance of ad culture (C_YS, 2019: 321), NordVPN surfaces the reality of repurposing memes and meme aesthetics for marketing campaigns (Figures 4 and 5). Meme formats become co-opted as paid promotion. This phenomenon is not new, however: Goriunova argues that the creative and affective force of memes is something that corporate PR “envies” and therefore attempts to objectify as “memetic virality” (2014: 54). Online cultural activities of remix and reassembly, memes being part of that landscape, are being hijacked by companies; “the logic of tactics has now become the logic of strategies” (Manovich, 2009: 324). The commodification and appropriation of #targetedAds by NordVPN reveals more than just a corporate strategy of capitalizing on subcultures. NordVPN's TikToks demonstrate how memes function as enactments of a collective imaginary, in this case, of dataveillance and recommendation algorithms. NordVPN's #targetedAds memes embody the corporate interest in detecting and shaping these imaginaries through strategic advertisements and control, which Jasanoff noted to be an increasingly common strategy in the corporate context (Jasanoff, 2015: 27). In a similar spirit, Mager and Katzenbach (2021), in their work on imaginaries, note that “[b]y guiding the making of things and services to come, imaginations of the future are coproducing the very future they envision. Hence, future visions are performative” (2021: 224). NordVPN uses memes both to track what the collective imaginaries of targeted ads are and to influence these imaginaries by structuring a vision (the VPN as a solution to dataveillance) that is the most profitable for the company.

One of the two frames of a #targetedAds TikTok posted by the official account of NordVPN (video 6C).

The second of the two frames of a #targetedAds TikTok posted by the official account of NordVPN (video 6C).
Discussion and conclusions
This article contributes to the growing and needed body of research examining users’ beliefs and their effects on how users make sense of dataveillance processes (Kappeler et al., 2023; Lyon, 2018; Strycharz and Segijn, 2022). By focusing on memes, intrinsically affective digital objects, this article also sheds light on the emotional grammar of dataveillance that resurfaces in users’ encounters with targeted ads. It also adds to a more in-situ body of work on users’ engagements with targeted ads as a particular form of platform and vernacular critique. It thus responds to the call for “more empirically grounded” (Kappeler et al., 2023: 2) studies of users’ beliefs about and responses to dataveillance by investigating these phenomena through memes. As demonstrated, the focus on memes ties conceptually to the notion of collective imaginaries in that memes are created to be collective and relatable. #TargetedAds is indeed an expression of “the conviction that the algorithmic systems of a certain platform actually work in a standardised way for all users, applying not solely to them” (Gandini et al., 2023: 427).
In TikTok memes collected under #targetedAds, several imaginaries come into play, further nuancing and expanding existing understandings of dataveillance. One can divide them into the Activated Collaborative Imaginary, the Antagonistic Passive Imaginary, and the Antagonistic Proactive Imaginary. The Activated Collaborative Imaginary advocates influencing data profiles by embracing a collaborative relation to the recommendation algorithm, or “socializing” it. The Antagonistic Passive Imaginary centers on the overwhelming scale of dataveillance as well as its uncanny accuracy. The Antagonistic Proactive Imaginary positions users as opposing dataveillance through tech-savvy solutions and behavioral changes, yet in the analyzed sample this imaginary is co-opted by corporate interests.
This article aimed to conduct this research in situ, remaining close to the site of users’ encounters with targeted ads and the impressions these encounters leave.
The analyzed sample of memes can be considered to act in the digital space as an affective force in influencing and materializing users’ collective imaginaries. However, there are some obstacles inherent in the memetic context, including the semiotic and affective inconsistencies of memes. The general issue with approaching memes with the aim of theorizing their subject matter or operational capabilities is that, as Bown and Russell point out, memes tend to “resist these terms altogether” (2019: 408). The ambiguous and affective core of memes, which often comes down to “indignity, stupidity, and crassness, and a joyful frivolousness” (Bown and Russell, 2019: 408), puts any analysis at risk of either missing the point of the meme or over-attributing meanings and operations to it as a result of taking it too “seriously.”
Memes in the format of TikTok challenges can function particularly effectively as an affective force in digital spaces, shaping and materializing users’ imaginaries. TikTok's #challenge affordance exemplifies this dynamic, as it enables playful activism, allowing users to shift between modes of engagement and play (Cervi and Divon, 2023: 3). Given their ability to transcend individual social boundaries and enter public discourse, challenges on TikTok can serve as a means to raise awareness, spread ideologies, and “externalize personal political opinion via an audiovisual act” (Cervi and Divon, 2023: 3, following Medina-Serrano et al., 2020: 264). As such, this article emphasizes the benefits and importance of studying users’ entanglements and agency negotiations in dataveillance processes as mediated by and embedded in platform vernaculars and ad-driven business models.
Some unavoidable limitations of this study must be noted; these can serve as directions for further research. Only a sample of memes was analyzed, one overwhelmingly created in the Global North and mostly in the English language (though singular examples in German and French were also present). As Milan and Treré call for, there is a critical need to acknowledge and research data flows from the South, an approach that relies on embracing a critical reexamination of “data universalism.”
More research is needed on the relationship between the perception of dataveillance and the responses evoked by an encounter with content such as a meme about dataveillance. Ethnographic engagement with meme creators and communities can offer valuable insights into the vernacular practices and lived realities of those who create and share memetic content (Galip, 2024: 3), such as TikTok challenges related to dataveillance. Additionally, further studies should explore how specific sub-groups engage with dataveillance. Such engagements could also pay attention to the materialities of users’ affective engagements in more apparent relation to a platform's algorithm or modes of engagement, such as comments. It is also worth pursuing further analysis of the absence of certain issues from the above-mentioned collective imaginaries, such as the government as a source of dataveillance.
