Algorithms profoundly shape user experiences on digital platforms, raising concerns about their negative impacts and highlighting the importance of algorithm literacy. Research on individuals’ understanding of algorithms and their effects is expanding rapidly but lacks a cohesive framework. We conducted a systematic integrative literature review across social sciences and humanities (n = 169), addressing algorithm literacy in terms of its key conceptualizations and the endogenous, exogenous, and personal factors that influence it. We argue that existing research can be framed in terms of experiential learning cycles and outline how this approach can be beneficial for acquiring algorithm literacy. Finally, we propose a future research agenda that includes defining core competencies relevant to algorithm literacy, standardizing measures, integrating subjective and factual aspects of algorithm literacy, and developing task- and domain-specific approaches.
Over the past few decades, algorithms have emerged as key elements shaping user experiences across digital platforms. While no single algorithm exclusively defines any platform, algorithms’ primary purpose is to optimize user engagement, enhance content relevance, and improve overall user experience, encouraging prolonged platform interaction. Consequently, the term “algorithmic media” frames algorithms, understood as computational routines, as these platforms’ overarching feature, without excessively emphasizing the significance of any individual algorithm within a system or the role of written code (McKelvey, 2014). This shift signifies a substantial transformation in how content is distributed, consumed, and interacted with on various platforms, including traditional social media sites (e.g., Facebook, Instagram), e-commerce websites (e.g., Amazon, Etsy, eBay), dating apps (e.g., Grindr, Tinder), and video streaming platforms (e.g., YouTube, TikTok). Simultaneously, algorithmic platforms have been identified as sources of challenges to citizens’ rights (Leslie et al., 2021), content diversity (Møller, 2022; Scalvini, 2023), information search (Bogers et al., 2020; Noble, 2018), and mobilization and polarization (e.g., Gagrčin et al., 2023; Törnberg, 2022), holding the potential to exacerbate existing inequalities and threaten democracy (Leslie et al., 2021; O’Neil, 2016).
Given the pervasive nature of algorithmic media across various domains, it is imperative that users can assert agency over their experiences in everyday algorithmic media use (Pronzato and Markham, 2023; Savolainen and Ruckenstein, 2024). Unsurprisingly then, and in addition to regulatory frameworks for ethical algorithmic systems (Elkin-Koren, 2020), we have seen a growing interest in users’ algorithm literacy (AL). AL research complements the extensive scholarship on media literacy. Following a skills-based approach, media literacy encompasses “the ability to access, analyse, evaluate, and create [media] messages in a variety of forms” (Livingstone, 2004: 5; Aufderheide, 1993). With the diffusion of new technologies, a multitude of subconcepts has been put forward addressing the material particularities of these technologies (e.g., computer literacy, Johnston and Webber, 2005; social media literacy, Schreurs and Vandenbosch, 2021). Additionally, scholars have considered the overlapping skills referenced in the various literacy approaches (e.g., Koltay, 2011). For literate usage of algorithmic media, the intersection of media and digital literacy is highly relevant. Literate users command cognitive and affective structures and behavioral skills to mitigate risk and maximize opportunities in their media usage (Schreurs and Vandenbosch, 2021). AL can support an informed citizenry able to partake in the public discourse about the ethical implications of algorithms (Chung, 2023), advocate for policies and regulations that safeguard their rights (Leslie et al., 2021), and contest unfair practices in algorithmically driven platform work (Cotter, 2023; Qadri and D’Ignazio, 2022).
However, studying AL faces challenges due to algorithms’ opaque nature (Just and Latzer, 2017). Algorithmic media collect extensive data in the input stage and process them algorithmically in the throughput stage, which is often deemed a “black box” since the exact functioning of algorithms is a proprietary secret (Reviglio and Agosti, 2020). Finally, users are presented with algorithmically generated output that continuously adapts based on user engagement. This dynamic feedback loop complicates defining parameters for AL. Despite these challenges, the field has seen a surge in theoretical and methodological approaches. While conceptual ambiguity is common in emerging research areas, a lack of integration can impede scientific progress.
In the present study, we answer Oeldorf-Hirsch and Neubaum’s (2023: 12) call to “focus on further developing frameworks that incorporate sub-dimensions [of AL]” and set out to devise an overarching theoretical framework that integrates existing AL research and considers antecedents and outcomes proposed in the literature. To this end, we systematically reviewed 169 scientific contributions in the social sciences and humanities published between 2000 and 2023. Specifically, we were interested in the following research questions:
RQ1: How is algorithm literacy conceptualized and measured?
RQ2: How do users acquire algorithm literacy?
RQ3: What outcomes can users hope to achieve through algorithm literacy?
We proceeded to integrate the findings into an organizing framework, pinpoint knowledge gaps, and offer a future research agenda.
Review method
We conducted a systematic integrative review of the literature. This type of review is particularly valuable in emerging research fields as it helps assess, map, and bridge existing literature from different disciplines and epistemological frameworks (Torraco, 2005). Our approach combines elements of systematic reviews that aim to map the literature and identify relationships between constructs and gaps with integrative reviews that aggregate existing literature and connect disparate scholarly conversations (Cronin and George, 2023).
Data collection
The data collection for our literature review involved two distinct steps (Figure 1). In the first step, we used the Web of Science database to identify peer-reviewed academic papers in English published between January 1, 2000, and September 6, 2023, in the fields of social science and humanities (including the categories Communication, Political Science, Anthropology, Human Geography, Information Systems, Educational Science, Sociology). Drawing upon terms identified by Oeldorf-Hirsch and Neubaum (2023), we searched for articles in which the term “algorithm” appeared within two words (NEAR/2) of the terms “literacy,” “knowledge,” “competence,” “awareness,” “skill,” “education,” “belief,” “attitude,” “experience,” “folk theory,” or “imagination” in the abstract, title, and keywords. Our inclusion criteria in this step covered peer-reviewed empirical studies, reviews, theoretical works, and doctoral dissertations, while initially excluding preprints, unpublished work, reports, and conference proceedings due to variations in the quality of the peer-review processes associated with these products (Scherer and Saldanha, 2019; Paez, 2017).
Figure 1. PRISMA flow diagram.
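The proximity logic of the search string described above can be approximated locally, for example when screening texts outside Web of Science. The sketch below is a hypothetical illustration, not the tool used in this review: the `near_match` helper and its interpretation of NEAR/2 as “at most two intervening words” are our assumptions, and Web of Science may compute word distance slightly differently.

```python
import re

# Search terms from the review's query (stems, matched case-insensitively).
TERMS = ["literacy", "knowledge", "competence", "awareness", "skill",
         "education", "belief", "attitude", "experience", "folk theory",
         "imagination"]

def near_match(text, max_gap=2):
    """Approximate 'algorithm* NEAR/2 term': the stem 'algorithm' occurs
    with at most max_gap words between it and one of the target terms."""
    tokens = re.findall(r"[a-z]+", text.lower())
    # Treat the two-word term "folk theory" as a single token.
    tokens = " ".join(tokens).replace("folk theory", "folk_theory").split()
    anchors = [i for i, t in enumerate(tokens) if t.startswith("algorithm")]
    targets = [i for i, t in enumerate(tokens)
               if any(t.startswith(term.replace(" ", "_")) for term in TERMS)]
    return any(abs(a - b) - 1 <= max_gap for a in anchors for b in targets)

print(near_match("a scale measuring algorithm skills and literacy"))  # True
```

Stem matching via `startswith` also catches plurals and derivations (“algorithms,” “skills”), mimicking the wildcard behavior of database search engines.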
This search yielded a total of 729 publications. After applying formal criteria, including publication date, language, peer-review status, and alignment with the humanities and social sciences, we excluded 286 items. The remaining 443 publications were further assessed for eligibility using a sequence of exclusion criteria: 1) not related to communication on and through algorithmic platforms (e.g., the role of algorithmic mathematics in Wall Street’s financial system), 2) substantial content not related to algorithms (e.g., general media literacy pieces that do not address algorithms as an object of inquiry), 3) not related to literacy relevant for using, evaluating, or navigating algorithmic media (e.g., policy analysis of EU legislation on algorithms). All publications were double-coded for eligibility, which resulted in an acceptable Krippendorff’s alpha value (α = .71). After discussing coding differences, 101 publications were selected for retrieval. After full-text analysis, 91 publications (86 journal articles and five doctoral dissertations) met the substantive criteria for inclusion.
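For readers unfamiliar with the reported reliability statistic, the following is a minimal sketch of how Krippendorff’s alpha can be computed for two coders making a nominal (eligible/not eligible) decision with no missing values. The function and the toy coding data are our illustration, not the authors’ actual procedure or data.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(coder_a, coder_b):
    """Krippendorff's alpha for nominal data, two coders, no missing values."""
    o = Counter()  # coincidence matrix of ordered value pairs
    for a, b in zip(coder_a, coder_b):
        for c, k in permutations((a, b)):  # both orders; divisor m - 1 = 1
            o[(c, k)] += 1
    n_c = Counter()  # marginal totals per category
    for (c, _), v in o.items():
        n_c[c] += v
    n = sum(n_c.values())  # = 2 * number of units
    # Observed vs. expected disagreement (nominal metric: mismatch = 1)
    d_o = sum(v for (c, k), v in o.items() if c != k) / n
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    return 1.0 if d_e == 0 else 1 - d_o / d_e

# Two hypothetical coders rating 10 units (1 = include, 0 = exclude),
# agreeing on 8 of them:
a = [1, 1, 1, 1, 0, 0, 1, 0, 1, 1]
b = [1, 1, 1, 1, 0, 0, 0, 1, 1, 1]
print(round(krippendorff_alpha_nominal(a, b), 3))  # 0.548 for this toy data
```

Note that alpha corrects raw agreement (80% here) for chance agreement given the marginal distribution of categories, which is why the resulting value is considerably lower.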
In the second step of data collection, we conducted forward searches (via Google Scholar) and backward searches (via reference lists and Connected Papers) using publications from our initial sample. In this step, we broadened our scope to include conference proceedings identified through these searches. This decision represented a compromise, as it involved a) acknowledging recommendations to incorporate conference proceedings in systematic literature reviews (Scherer and Saldanha, 2019); b) recognizing the interdisciplinary nature of AL research, spanning fields such as human-computer interaction at the intersection of social science and computer science/informatics, primarily published through conference proceedings; and c) acknowledging the impractical volume of proceedings beyond our resources. This search yielded 119 articles, which, after being subjected to our exclusion criteria sequence, resulted in the inclusion of 78 additional articles. Our final dataset, therefore, comprises 169 articles. (See the full sample list in Supplementary Materials or the dedicated Open Science Framework directory: https://osf.io/scxmu/).
Data analysis
We analyzed the sample using deductive and inductive coding. While we outline these stages sequentially, our process was iterative, often involving simultaneous engagement with different phases and revisiting earlier stages as we incorporated new sources into our sample (Cronin and George, 2023). First, we coded the articles deductively based on key descriptors: research paradigm (social constructivism, positivism), conceptual or empirical approach and methods (qualitative, quantitative, mixed methods), publication year, and national affiliation of the authors. Next, we performed an in-depth full-text examination. We randomly selected a subsample of articles, and each team member conducted open coding on this subsample guided by our RQs and theoretically defined focal categories (dimensions of AL, influencing factors, antecedents, and outcomes). This stage involved extensive note-taking. Through regular discussions, we inductively refined our focal categories. We also created subcategories based on our notes to identify patterns within these categories. For example, while we deductively identified the need to code for exogenous factors (based on DeVito et al., 2018), we further developed these categories inductively by reviewing the literature. Having developed a cohesive coding scheme, we systematically coded the remaining sample, distributing it among team members. To ensure rigor, two team members independently coded about 40% of the sample, resolving discrepancies through discussion. In the final stage, we revisited the focal categories of interest (e.g., dimensions of AL identified by Oeldorf-Hirsch and Neubaum, 2023) and used them to construct our organizing framework (Cronin and George, 2023). For a detailed overview of the category structure, including references to exemplary publications, see the Supplementary Material (SM-Tables 1, 2, and 3) or the OSF link provided above.
Descriptive findings
Scholarly work on users’ perspectives toward algorithms and their practices has increased strongly since 2018 (Figure 2), presumably in parallel with increased public awareness of algorithms and datafication due, for example, to the Facebook–Cambridge Analytica scandal in 2018 (Hinds et al., 2020). Another presumable contributor to the heightened interest was the global launch of TikTok in 2016, which soon established itself as one of the most used apps worldwide (Bhandari and Bimo, 2022).
Figure 2. Number of publications per year by empirical method (2010–2023).
Regarding methodological approaches, conceptual work (n = 18, 10.7%) is far less prevalent than empirical work (n = 151, 89.3%). Empirical articles favor qualitative methods, such as interviews or qualitative content analyses (57.4%), followed by quantitative methods, such as surveys and experiments (22.5%; Table 1).
Table 1. Applied methods.

Method                                              n      %
Qualitative
   interviews                                      42     24.9%
   content analysis                                14      8.3%
   ethnography                                      8      4.7%
   other (participatory methods, focus groups,
   literature reviews, media diaries)               8      4.7%
Quantitative
   survey                                          25     14.8%
   experimental survey                             10      5.9%
   content analysis                                 1      0.6%
Mixed Methods
   qualitative                                     25     14.8%
   qualitative & quantitative                      16      9.5%
   quantitative                                     2      1.2%
Conceptual                                         18     10.7%
Total                                             169    100%
Organizing the algorithm literacy landscape
In the following, we address our research questions (RQs) one by one. We then integrate the various dimensions of AL, endogenous and exogenous factors, and different AL outcomes into an organizing framework (Figure 3).
Figure 3. Integrative framework.
Defining and measuring algorithm literacy (RQ1)
Guided by the dimensions of AL conceptualized by Oeldorf-Hirsch and Neubaum (2023), we classified the sampled articles according to the cognitive, affective, and behavioral dimensions of AL. (For a summary, see Supplementary Materials, SM-Table 1). In addition, we assessed whether authors examined the users’ perceived, subjective qualifications of algorithmic functions or measured AL against a factual benchmark.
Cognitive dimension
Most analyzed publications extensively addressed the cognitive dimension of AL (n = 117, multiple coding possible), ranging from users’ basic awareness of algorithms to a deeper understanding of their mechanisms. As a starting point, scholars commonly consider general awareness of algorithms in the context of algorithmic media use and divergent awareness of algorithmic processes across platforms. Although longitudinal studies are lacking, authors suggest that public awareness has increased in recent years (not least due to increased news media reporting, Nguyen and Hekman, 2024), albeit unevenly across society (Hargittai et al., 2020). Furthermore, studies delve into users’ detailed understanding of algorithmic media, exploring how familiar mental frameworks influence the adoption and use of new technology, often indicated by users’ folk theories and algorithmic imaginaries (e.g., Bucher, 2017; for an overview of empirically found folk theories about algorithmic media, see Dogruel, 2021).
While many articles try to assess AL empirically, they face a particular challenge: the absence of a definitive “ground truth” as a baseline for assessing cognitive AL (e.g., Ytre-Arne and Moe, 2021). Articles addressing this issue mostly point to the opacity of proprietary algorithms, the dynamic nature of code, and the diverse algorithmic processes across platforms, which make it impossible even for researchers to make factually correct statements about particular algorithmic processes at work. Thus, most studies fall under “subjective AL” (n = 102), investigating how users reflect on algorithms. However, reflection does not imply that users possess specific cognitive competencies or factual knowledge that would enable them to achieve desirable outcomes and mitigate undesirable ones in algorithmic media use. For instance, while integral to user engagement in media environments, folk theories about algorithms can be based on limited or misleading knowledge (e.g., Ytre-Arne and Moe, 2021).
A few articles pursue “objective AL” (n = 15), seeking factual benchmarks to assess and compare users’ cognitive AL, mainly concentrating on undisputed facts about algorithms (e.g., Dogruel et al., 2022). These authors accept that algorithm-literate users cannot fully know which exact input data are processed and which exact algorithmic throughput is at work; instead, they measure general awareness of algorithmic processes and of the challenges associated with them (e.g., in survey and interview approaches, see Cotter and Reisdorf, 2020; Festic, 2022; Klawitter and Hargittai, 2018; in the analysis of content-creator videos addressing algorithms, Issar, 2023) or whether users hold general misconceptions about algorithms (e.g., that algorithms are unbiased, Zarouali et al., 2021b, or that Facebook news feeds are not personalized, Brodsky et al., 2020).
The growing number of standardized studies on AL underscores the need for validated instruments. Zarouali and colleagues (2021a) present a measure to grasp users’ awareness of content filtering, automated decision-making, human-algorithm interplay, and ethical considerations for media content recommendation. Dogruel and colleagues (2022) have validated a scale specifically assessing cognitive AL. This scale gauges awareness of algorithms in various areas and applications, encompassing knowledge about the input data algorithms generally process, their intended objectives, and the subsequent impact on media output.
Affective dimension
In coding for the affective dimension of AL (n = 79), we included articles on how users “feel” and “sense” algorithms (e.g., Bishop, 2019). This dimension captures the emotional responses and sentiments evoked by interactions with algorithmic media. Based on qualitative interviews and surveys with algorithmic media users, the literature repeatedly finds the following four affective responses to algorithms. Appreciation: Users perceive algorithms as helpful, trustworthy, and reliable (e.g., Avella, 2023). This response is closely associated with satisfaction and certainty, reflecting contentment as individuals rely on algorithms for specific tasks and recommendations (e.g., Yeomans et al., 2019). Apprehension: Users experience unease and anxiety, often rooted in uncertainties about how algorithms operate, including their ranking/sorting criteria, and concerns about their impact on various aspects of users’ lives (e.g., Bucher et al., 2021). Aversion: A heightened sense of discomfort and discontent represents a gradation of apprehension (e.g., Bishop, 2019). Resignation: Reflecting a perceived inability to influence or fully comprehend algorithmic systems, users experience feelings of powerlessness, frustration, or disillusionment (e.g., Das, 2023).
In line with Oeldorf-Hirsch and Neubaum (2023), we also coded for attitudes toward and personal assessments of algorithms as part of the affective AL dimension. These include perceptions about the quality of algorithms (e.g., their transparency, accountability, fairness, explainability, and credibility; e.g., Shin and Park, 2019), assessments of the usefulness of algorithmic outputs (e.g., Sundar and Marathe, 2010; Taylor and Choi, 2022), and their societal effects (e.g., Calice et al., 2021). While such perceptions may be based on cognitive assessments rather than purely emotional responses, studies in this area mainly focus on the subjective feeling toward the quality of algorithms and how users’ “emotional experiences of algorithms play into their norms and attitudes about how algorithms ought to function” (Swart, 2021: 6). In contrast, “affective encounters with algorithms entail evaluations” informing the meaning-making process (Lomborg and Kapsch, 2020: 752).
While assessing awareness and factual knowledge of algorithmic processes against objective benchmarks is possible (despite the aforementioned challenges), attitudes and perceptions toward technology inherently remain subjective. In defining “social media literacy,” Schreurs and Vandenbosch (2021) argue that being literate is not about showing the “right” affective responses. However, affective reactions can lead to behavioral consequences (e.g., Bucher, 2017; see below on behavioral AL), which might be more or less socially desirable or individually beneficial in the long run.
Behavioral dimension
The behavioral dimension of AL (n = 40) pertains to how users practically engage with algorithms, thereby shaping their experiences of using algorithmic media more or less intentionally. Across the sampled empirical literature, we found these four behavioral responses to algorithmic systems. Alignment: Users actively shape their experience to align with personal values, goals, or preferences, maintaining consistency with their beliefs and interests without harboring a naive appreciation of algorithmic systems (e.g., DeVito et al., 2018). Compliance: Users adhere to algorithm-generated recommendations, content, or features, even when dissatisfied or frustrated. Compliance is often rooted in feelings of resignation. For example, Bucher et al. (2021) demonstrate anticipatory compliance among gig workers at Upwork when fear and unawareness of the algorithm’s material properties strengthen algorithmic influence (similarly, e.g., Cotter, 2019; Duffy and Meisner, 2023). Subversion: Users actively manipulate, undermine, or “game” algorithms to achieve their goals or express dissatisfaction (e.g., DeVito et al., 2017). Resistance: Users limit their interaction with or the influence of algorithms on their online experience. This may involve turning off algorithm-driven features or choosing not to use specific platforms. Resistance is related to aversion and is avoidance-oriented (e.g., Xie et al., 2022).
Based on the literature, we find that it is normatively desirable that users have the agency to act “in their best interest” when engaging with algorithmic media (e.g., Das, 2023; Pronzato and Markham, 2023). However, most studies describe how users cope day-to-day, without discussing whether the approaches users employ are actually effective and contribute to their “best interest.” When authors do discuss user agency, they tend to frame it in terms of tactics and strategies of resistance, often with an underlying (productive or agonistic) tension between users and the systems they are navigating (Adams-Grigorieff, 2023: 15-16; Velkova and Kaun, 2021). Thus, subversive practices are most likely to be seen as expressions of agency since they manifest in challenging the rationalities and mentalities imposed by algorithms (DeVito et al., 2017; Velkova and Kaun, 2021). For example, in algorithmic content moderation, users employ “algo speak,” intentionally modifying or replacing words in hashtags or post/video descriptions to circumvent algorithmic moderation and evade restrictions (e.g., Klug et al., 2023). In the labor domain, delivery couriers exert agency through both collective practices (aligning with fellow workers and sharing orders) and individual practices (disregarding algorithmic calculations and relying on their own experience) (e.g., Sun, 2019). However, it is acknowledged that agency requires self-efficacy (Helsper, 2021), which can be challenging to attain under the constant pressure of algorithmic management (e.g., Bucher et al., 2021). The conditions under which users acquire the agency to act in their best interest remain an open question.
Developing algorithm literacy (RQ2)
In the following section, we outline the endogenous and exogenous factors of algorithmic media use and the user characteristics that shape AL. (For a summary and exemplary studies for each subcategory, see Supplementary Materials, SM-Table 2.)
Endogenous factors
Cotter and Reisdorf (2020: 754) and Swart (2021: 8) characterize algorithmic media as “experience technologies,” positing that individuals develop a basic understanding of algorithms through their engagement with algorithmic media. Thus, the characteristics of individual media use comprise relevant endogenous factors shaping experiential learning. Studies propose that frequent and intensive use fosters reflection about algorithms (e.g., Cotter and Reisdorf, 2020; DeVito et al., 2018). Intensive use can lead to algorithmic incidents, where outputs are perceived as surprising or users feel misunderstood (e.g., Swart, 2021; Velkova and Kaun, 2021). Moreover, frequent users are attuned to platform changes, thus constantly reevaluating their folk theories (DeVito, 2022). While such expectancy violations and personalization cues trigger behavioral adaptation, many young people cannot verbalize these intuitive insights (e.g., Swart, 2021). At the same time, Festic (2022) shows that heavy users may value platforms yet diverge from their intended functions, so frequent interaction does not guarantee AL. Further, since algorithms are platform-specific, context shapes users’ algorithm understanding (e.g., Adams-Grigorieff, 2023; Swart, 2021). Accordingly, studies suggest that cross-platform use aids AL (e.g., Espinoza-Rojas et al., 2023; Gruber et al., 2021).
Usage episodes vary in elaboration level and topic involvement, from everyday media use with limited selection effort to highly strategic media use. Most reviewed literature focuses on everyday media use, involving activities like staying in touch, relaxation, entertainment, and surfing. Such use requires limited effort, with users interacting intuitively (e.g., exploring TikTok’s For You Page) and in a shallow state of attention, reflecting on algorithms mainly when issues arise in recommendations, prompting irritation (e.g., Siles et al., 2022). Other articles explore algorithmic media usage with higher elaboration, where users apply media for specific purposes such as information seeking as opposed to aimless scrolling (e.g., Bakke, 2020; Bogers et al., 2020), engaging in activism like the body positivity movement (DeVito et al., 2017), seeking a romantic partner (e.g., Hu and Wang, 2023), or enhancing environmental navigation (Ramizo, 2022). Finally, other studies focus on users strategically employing algorithmic media to achieve specific, long-term goals, such as pursuing financial or political advantages within the platform economy, including content creators (e.g., influencers, Duffy and Meisner, 2023), gig workers (e.g., Uber drivers, Curchod et al., 2020), and activist groups (e.g., right- or left-wing groups, Maly, 2019). Motivated by livelihood concerns, they actively seek information about algorithms to manage visibility (e.g., Bishop, 2018; Cotter, 2019) or “game” the algorithm to enhance agency (e.g., Curchod et al., 2020). Their strategies include careful planning, deliberate tactics, proactive experimentation with algorithms (such as reverse-engineering or conducting A/B testing), and monitoring metrics to gauge prospects of increased content visibility (e.g., Cotter, 2024; Duffy and Meisner, 2023).
Exogenous factors
Exogenous factors influencing AL refer to elements beyond individuals’ direct experiences and actions that contribute to developing and enhancing their understanding and competence in navigating algorithmic media (DeVito et al., 2018). Our review suggests that interaction with others helps deepen critical reflection (Morris, 2020), including interpersonal exchanges about news feeds revealing absences or differences (e.g., Adams-Grigorieff, 2023; Rader and Gray, 2015; Velkova and Kaun, 2021) and participation in communities of practice, where individuals sharing an interest in algorithmic media enhance AL through interaction and information exchange (e.g., Cotter, 2019, 2024). Given the central role of interaction on the interpersonal and group level in people’s knowledge, opinions, and skill acquisition (Helsper, 2021; Schreurs and Vandenbosch, 2021), more research is needed into the social contexts in which media users talk about algorithms, and how these exchanges foster awareness and knowledge.
Unlike peer exchanges, targeted literacy interventions offer quality checks and secure factual information (Schreurs and Vandenbosch, 2021). However, research on their effects is scarce (e.g., Brodsky et al., 2020) compared to traditional and social media literacy interventions (Jeong et al., 2012). Evaluations of educational interventions within the formal education system are rare, yet existing evaluations show such interventions to be demonstrably effective in increasing algorithm awareness and resilience (e.g., Adams-Grigorieff, 2023; Bakke, 2020; Pronzato and Markham, 2023). Intervention studies suggest that users learn about algorithmic operations by systematically experimenting with input and output and maintaining media diaries. For example, Adams-Grigorieff (2023) conducted an 11-session after-school program to enhance students’ critical algorithm literacy by investigating and experimenting with their preferred platforms. As a result, students’ understanding of and agency in dealing with algorithmic platforms progressively increased throughout the course. Such interventions are worthy of further theoretical elaboration and systematic empirical testing.
Finally, exogenous factors include information from media sources such as companies’ blogs (e.g., Cotter, 2019; Dowell, 2023), algorithmic lore—a type of content created by individuals claiming expertise in algorithms (e.g., Bishop, 2020; MacDonald, 2023)—and traditional media reporting (Dowell, 2023; Zarouali et al., 2021b). The latter has mostly been explored in relation to AI’s broader implications and associated risks. For instance, Nguyen and Hekman (2024) observe a shift in AI news coverage over the past decade, from viewing AI as speculative to focusing on its tangible social, economic, and political impacts, with growing concern about data bias and discrimination. News media still serve as crucial observers of technological trends and can have an “awareness effect” on audiences by simply informing them about what aspects of an issue are on the public agenda, especially with respect to benefits and risks (Nguyen, 2023: 8). The current focus on data scandals and technology misuse can shape individual perceptions of algorithmic media, guiding both public understanding and research focus on specific harms, domains, and literacy needs (Nguyen and Hekman, 2024: 449). Thus, there is ample space for examining the roles of various traditional and online media regarding agenda-setting and framing, and their implications for cognitive, affective, and behavioral AL.
Overall, exogenous influences complement learning through algorithmic media by introducing external information and perspectives, potentially broadening individuals’ AL through exposure to diverse insights and knowledge beyond their immediate interactions with algorithmic media. More research is needed in this area since exogenous factors can balance the risks of self-referentiality and a lack of objective quality checks that come with learning from personal use of algorithmic media. In this regard, Buolamwini (2022) goes a step further and advocates for evocative audits of algorithmic systems that “provide[s] personal/visceral evidence of algorithmic harms by using counter-demos to show real-world algorithmic systems failing in some way that expose systemic issues” (Buolamwini, 2022: 160) and create public awareness.
Personal characteristics
Various articles consider the influence of users’ personal characteristics on AL. Standardized empirical studies examine how sociodemographics affect reflection about algorithms and engagement with algorithms. Findings suggest that algorithm awareness is higher in men, younger users, and users with higher education (e.g., Cotter and Reisdorf, 2020; Gran et al., 2021; Makady, 2023), that these groups are less likely to hold misconceptions about algorithms (Zarouali et al., 2021b), and that they have more positive attitudes toward algorithmic processes (Gran et al., 2021). In addition, our review indicates that general media and digital literacy is related to AL in that it affects engagement with algorithms, that is, the behavioral dimension of AL (e.g., DeVito et al., 2018; Festic, 2022). Better internet skills, such as knowledge of and motivation to assess media content or manage privacy, also enable users to adapt their usage to the conditions of algorithmic media (e.g., DeVito et al., 2018; Just and Latzer, 2017).
A substantial share of articles addresses the AL of marginalized groups, how they perceive and interpret algorithmic outcomes, and how they adapt their online behavior to achieve visibility within structurally biased systems that suppress their voices. Existing studies do not explicitly compare AL across social groups but rather point to the specific experiences of particular groups. For example, TikTok users express folk theories claiming that the “For You Page” algorithm actively suppresses content related to marginalized social identities based on race and ethnicity, body size and physical appearance, ability status, class status, LGBTQ+ identity, and political and social justice group affiliation (e.g., DeVito, 2022; DeVito et al., 2018; Karizat et al., 2021; Simpson and Semaan, 2021). Similarly, social media creators from historically marginalized identities and stigmatized content genres understand that platforms enact governance unevenly—through formal (human and automated content moderation) or informal (shadow-bans, biased algorithmic boosts) means. This leads to adaptive behaviors ranging from self-censorship to concerted efforts to circumvent algorithmic intervention (e.g., Duffy and Meisner, 2023). However, findings from Zhang and Chen (2023) suggest that algorithm knowledge might not necessarily help discriminated groups overcome biases against them.
Research on digital inequalities indicates that homogeneous groups of lower socioeconomic status may not significantly improve their literacy through in-group exchanges (Helsper, 2021). Therefore, future research should explore how user characteristics moderate interpersonal and group interactions, identifying factors that foster awareness and factual knowledge. Here, it is crucial to consider both material resources, such as wealth, occupation, and formal education, as well as embodied resources, such as socialization based on education, ethnicity, and gender (Helsper, 2021: 182), since their interplay is crucial for understanding the dynamics of and inequalities in AL acquisition.
Outcomes of algorithm literacy (RQ3)
Outcomes of AL highlight the practical achievements enabled through algorithmic media in everyday contexts. Although often implied and serving as the normative basis for calls for increased AL, these outcomes are rarely explicitly examined and thus remain relatively uncharted territory. In our review, we align with Helsper’s (2021) argument on the importance of differentiating various outcome domains, as success in one literacy-related outcome may not necessarily translate into success in others. (For a summary and exemplary publications for each subcategory, see Supplementary Materials, SM-Table 3.)
To begin with, AL is linked with notable consequences on the individual level. Our analysis identifies heightened elaboration as a key personal outcome of AL, with studies showing that individuals with elevated AL levels engage with algorithmic media more mindfully and intentionally (e.g., Adams-Grigorieff, 2023; Pronzato and Markham, 2023). Another presumed personal outcome is increased agency over algorithmic experiences, involving intentional interaction with algorithms to improve user experience (e.g., Adams-Grigorieff, 2023; DeVito, 2022). AL also enables users to achieve selective (in)visibility, protecting themselves and their communities from online harassment on algorithmic platforms (DeVito, 2022). Further, AL contributes to self-actualization, allowing individuals to pursue interests, shape identities, and gain visibility in their communities of interest (e.g., DeVito, 2022; Simpson et al., 2022).
Finally, a segment of the literature examines personal well-being as an outcome of AL, albeit rarely as a focal variable. One exception is research on the challenges faced by women who use algorithmic media to cope with pregnancy loss stigma: algorithms targeting these women assume all pregnancies proceed as expected, leading to a decrease in their well-being (Andalibi and Garcia, 2021; Bogers et al., 2020). Additionally, research indicates that engagement in the platform economy negatively impacts well-being, inducing stress irrespective of individuals’ AL levels (e.g., Bishop, 2018; Curchod et al., 2020). We derive two observations from the reviewed literature: 1) the surprising underemphasis on well-being as a focal AL outcome, despite established links between media literacy and well-being (Schreurs and Vandenbosch, 2021), and 2) the predominant focus on adverse well-being outcomes of insufficient AL. In contrast, research on general digital literacy prioritizes resilience, akin to well-being, defined as “learning from past positive and negative experiences online to avoid negative outcomes and exploit ICT benefits in the future” (Helsper, 2021: 80).
Community building stands out as a central social outcome of AL, fostering online communities based on shared interests and values (e.g., Avella, 2023). In this case, AL implies that individuals and communities can control their presence within the algorithmic flows and shape the narratives surrounding them. Similarly, AL allows users to accumulate social capital (e.g., Bhandari and Bimo, 2022) through affiliations with like-minded communities or by establishing themselves as knowledge brokers, as exemplified by those who disseminate algorithmic knowledge on platforms like YouTube (e.g., Bishop, 2020).
Regarding economic outcomes, the literature suggests a link between AL and the ability to achieve visibility on algorithmic platforms, which is central to platform workers in creative industries, as it opens opportunities for monetization, such as brand collaborations (e.g., Avella, 2023). Conversely, studies on gig workers, such as Uber drivers and delivery couriers, reveal that lacking AL can result in precarity and exploitation (e.g., Curchod et al., 2020; Sun, 2019). These workers face the constant dilemma of striving for visibility, reputation, and consistent clients by engaging in emotional labor while risking financial loss by undervaluing their work to avoid penalties (e.g., Bucher et al., 2021). Nevertheless, it remains an open question whether greater AL directly leads to greater visibility, or if those with higher AL are simply better positioned to achieve visibility due to other factors. For example, socioeconomic background and societal status may play a significant role in influencing both AL and visibility.
Political outcomes associated with AL encompass the augmented capacity to mobilize online communities for social and political causes. Although examples are limited, instances of AL contributing to political mobilization are evident across the political spectrum (e.g., Cotter, 2024; Maly, 2019). Moreover, a lack of AL increases the likelihood of polarization through personalized algorithmic news based on political affiliations (Calice et al., 2021). Conversely, AL is positively related to informed citizenship (e.g., Makady, 2023), enabling individuals to exercise information care and due diligence in everyday news practices, ensuring a more balanced approach compared to overreliance on algorithmic curation (Du, 2023; cf. Schaetz et al., 2023). Studies emphasize the importance of critical consciousness in contributing to ethical discussions on algorithmic systems and advocating for improved regulation, such as addressing content manipulation and the addictive nature of personalized content (Chung, 2023; Scalvini, 2023). Thus, individuals with higher AL are more likely to support policies related to platform regulation (e.g., Chung, 2023). Conversely, a lack of AL may result in “algorithmic impotence,” where individuals lack the capacity to critique or influence perceived unfair algorithmic processes (e.g., Qadri and D’Ignazio, 2022; Sun, 2019).
The extent of AL within a population can significantly influence broader societal outcomes. Epistemic privilege refers to unequal access to knowledge about algorithms that bestows greater privilege upon certain actors, typically platforms, compared to others, such as content creators subject to those algorithms (Cotter, 2023). The overarching implication of epistemic privilege is that specific knowledge becomes more potent, shaping decision-making processes across various life domains and determining who controls these decision-making processes (Lloyd, 2019). Likewise, a lack of AL within segments of the population is likely to exacerbate existing digital divides (Cotter and Reisdorf, 2020; Gran et al., 2021; Zarouali et al., 2021b).
In summary, research on AL outcomes predominantly emphasizes the negative consequences of a lack of literacy, particularly for vulnerable groups. Indeed, the unequal adoption of AL is likely to exacerbate existing inequalities across diverse population strata since outcomes of AL are linked to individuals’ other resources, with disadvantaged individuals facing compound barriers in terms of access, competences, and norms in algorithmic media use (Helsper, 2021: 118). This points to the need to situate individuals within broader social and societal contexts to understand the dynamics of acquisition and outcome quality of AL.
Future research agenda
Toward an experiential learning framework for algorithm literacy
As illustrated previously, scholarship tends to treat algorithms as experience technologies (Cotter and Reisdorf, 2020; Swart, 2021), and while this idea is echoed within our sample, the literature lacks a theoretical framework that integrates these findings. Based on our reading of the literature, we suggest that users acquire AL through algorithmic media use in a process that can be classified as experiential learning. We contend that the Experiential Learning Theory (ELT) proposed by Kolb (1984, 2015) offers a suitable framework for further thinking about AL in algorithmic media use. The core tenet of ELT is that “learning is the process whereby knowledge is created through the transformation of experience” (Kolb, 1984: 38). Accordingly, ELT treats learning as an ongoing, cyclical adaptation to the world through interactions between the individual and the environment (Vince, 1998). While this theory has been applied in other disciplines to explain learning experiences and outcomes (e.g., Morris, 2020), it has not received much attention in media and communication science outside of media pedagogy (for an exception, see Greenberg, 2007).
ELT outlines four stages representing an idealized learning cycle: concrete experience, reflection on thoughts and feelings, abstraction by drawing conclusions, and experimentation involving behavioral adaptation based on prior conclusions (Kolb, 1984; Vince, 1998). Concrete experiences are “highly contextualized, primary, experience[s] that involve hands-on learner experience in uncontrived real-world situations” (Morris, 2020: 1070). In the context of AL (see Figure 4), concrete experiences involve encounters with algorithms in daily life through algorithmic media use, such as algorithmically curated news feeds, search engine results, or personalized content recommendations on streaming platforms. Based on these experiences, individuals become aware of algorithms and (ideally) reflect on these encounters. Reflection plays a central role in the learning process and is vital for making sense of the experience. With heightened awareness, individuals engage in abstract conceptualization (abstraction). ELT further suggests that learning is incomplete without applying knowledge in new situations. This involves testing the fit of these abstract conceptualizations against new concrete experiences (Morris, 2020). Concrete experiences with algorithms necessarily entail affective responses that accompany all learning cycle stages. In the preceding sections, we highlighted that the literature suggests that people who use algorithmic media strategically are likely to have higher levels of AL. This aligns with ELT, underscoring the importance of purposeful learning within specific contexts and concrete problems (Morris, 2020: 1069). The reviewed studies reveal that individuals engaging with algorithms in practical tasks or with specific goals are more motivated to learn and encounter more tangible learning opportunities. In the context of AL, active experimentation is manifested as (more or less) conscious adaptation in users’ engagement with algorithms.
This involves experimenting with different online behaviors and observing algorithmic responses, refining one’s understanding and strategies in algorithmic interactions. In the preceding section, we categorized these strategies as alignment, compliance, subversion, and resistance.
Figure 4. Experiential learning cycle.
The stages of the learning cycle are not rigidly delineated but rather fluid in nature, meaning that the learner “touches all the bases” in a “recursive process that is sensitive to the learning situation and what is being learned” (Kolb, 2015: 51). Since conditions of the context may change across time and place, “all knowledge is provisional and needs testing in context” (Morris, 2020: 1072). This aligns with literature describing learning through experience in algorithmic media use (specifically, see DeVito, 2021, on adaptive folk theorization). At the same time, because learning is always context-specific, ELT requires people to be comfortable with ambiguity and uncertainty related to new learning experiences (Morris, 2020: 1071).
By employing an established framework and aligning review findings with it, we hope to provide a structured foundation for future research endeavors to understand the acquisition and cultivation of AL. For example, the reviewed literature suggests that individual characteristics might moderate the learning cycle, with factors like the level of elaboration and topic involvement fostering abstraction and adaptation. At the same time, a lack of general digital literacy or a history of marginalization may hinder development (e.g., Helsper, 2021). Thus, future research should investigate individual factors and the circumstances under which user experiences, reflection, abstraction, and adaptation lead to an increased AL. In addition, according to Morris (2020), a fundamental aspect of the experiential learning process involves recognizing that knowledge is situated within specific contexts and evolves over time and space. Given that learning is context-dependent and involves managing ambiguity and dissonance (Morris, 2020), it is crucial to foster critical reflection on algorithmic experiences amidst the uncertainty often associated with them (Vince, 1998).
Beyond experiential learning: defining benchmarks and standardizing algorithm literacy
While we argue for the usefulness of ELT (Kolb, 1984, 2015) as a conceptual umbrella, experiential learning alone may not provide a comprehensive understanding of algorithms and one’s behavioral options. In fact, experientially developed AL might be self-referential and confined since it might drive adaptation based on impressions rather than a thorough grasp of the underlying algorithmic processes (cf. Morris, 2020). If being algorithmically literate ultimately means acting in one’s best interest through algorithmic media use (Das, 2023), AL must extend beyond establishing folk theorization, and we must determine what this best interest might mean in different domains. Thus, we see several critical avenues for 1) standardizing AL measures (e.g., developing scales and measuring AL across populations, cf. Dogruel et al., 2022) and 2) further elaborating on the relationship between subjective vs. objective (or “factual”) aspects of AL.
Related to standardization, we hope to have shown in this review that descriptive studies of AL are plentiful. Drawing from prior studies (Dogruel et al., 2022; Zarouali et al., 2021a), additional efforts are necessary to establish valid and reliable AL measures. Further, in the next phase of the field’s development and in line with the aim that AL should enable people to act in their best interest through algorithmic media (Das, 2023), researchers should not shy away from defining desirable outcomes of AL across domains of study against which we can examine the level and content of competencies needed to reach these outcomes. Relatedly, future research must delineate domain-specific competencies indicative of AL encompassing cognitive, affective, and behavioral dimensions. Defining these is crucial for measuring AL, thus enabling a more rigorous examination of the influence and value of AL in various domains and across populations. Emphasizing the importance of cross-domain skills, we encourage research to develop transferable AL indicators, recognizing their enduring relevance in the context of evolving platforms (Helsper, 2021).
The relationship between the subjective and objective aspects of AL requires better integration. While establishing absolute truths about algorithms is challenging (or perhaps impossible) due to their proprietary and dynamic nature, it is worthwhile trying to define a minimal set of competences as a baseline on top of which users can build. Consider the analogy of driving a car. Central to driving literacy is the objectively ascertainable knowledge of traffic regulations and the ability to operate a vehicle, typically acquired through formal instruction (exogenous factor). However, the effectiveness of this learning typically depends on the learner’s involvement and self-efficacy (endogenous factors). After acquiring basic knowledge, becoming a proficient driver requires further practice in various settings—driving different vehicles, traversing roads in different conditions, and customizing the driving experience to personal needs. The starting competencies remain central, but true proficiency develops through habituation and experimentation, resulting in individualized driving styles and preferences. Regarding AL, establishing fundamental competencies should go hand-in-hand with the development of contextual and experiential knowledge, where users refine their understanding through personal interactions with algorithms. This is contrary to the current situation, where users often lack a foundational baseline. With a solid base of objective knowledge, users could better develop their specific preferences, ideas, and strategies. This is to say that while we do not favor any specific dimension, we do wish to underscore that AL involves a mix of objective and subjective elements, shaped by endogenous and exogenous learning processes. 
In this context, formal education (exogenous) can provide a foundational understanding and create awareness for ethical considerations of algorithms (comparable, e.g., to knowledge on the environmental impact of cars), while personal experience and adaptation (endogenous) refine and deepen that knowledge to meet individual needs.
Researching algorithm literacy comparatively
The literature consistently suggests that cross-platform use enhances algorithmic learning. However, a systematic examination is warranted. To this end, we encourage scholars to carefully consider designations such as “the TikTok algorithm” or “the Facebook algorithm,” which, while acknowledging the context-dependent nature of algorithms shaped by specific platform affordances and vernaculars, can also reinforce the mystification of algorithms. Instead, we should consider algorithms as a collection of mechanisms that can be disentangled and studied separately, much like how we have learned to approach “the internet” not as a monolithic entity but as a bundle of mechanisms (Farrell, 2012). This shift in perspective would allow for a more systematic examination of how algorithms function across different platforms, and how algorithm literacy develops in various contexts. Accordingly, we propose two approaches to cross-platform comparisons of AL. First, a user-centric approach would involve specifying focal-use goals (e.g., parenting, political information) and comparing individual user engagement, issue involvement, and affective assessments across platforms. This approach could directly examine the sought-after AL outcomes, enabling a finer understanding of how users interact with and learn about different algorithms. Second, an algorithm-centric approach would compare algorithms that execute specific tasks across platforms, focusing on how users understand these functions. While proprietary mechanisms often obscure the inner workings of algorithms, it is known that they are designed to fulfill specific roles for platform users. By systematically disentangling these mechanisms, we can begin to demystify algorithms.
Conclusion
Researching algorithm literacy in the ever more complex landscape of algorithmic media platforms is a formidable challenge, not least because these platforms often lack the bedrock of good governance principles—clarity, stable norms, and consistent enforcement (Cotter, 2023). Thus, the essence of AL contends with the underlying logic of platform capitalism, which thrives on intentional obscurity and frequent algorithmic changes designed to keep users uninformed and powerless (Curchod et al., 2020; Petre et al., 2019). It is in this intentional uncertainty that phenomena like algorithmic precarity find their roots, revealing that adverse outcomes do not solely stem from a lack of AL but from the deliberate ambiguity cultivated in algorithmic moderation. Furthermore, platform capitalism encourages different actors to play each other through and around algorithms (Ramizo, 2022), highlighting the ambivalence in how AL is applied and underscoring that it is not just about individual empowerment but also about reshaping power dynamics in algorithmically mediated interactions.
For us as researchers, this means that we must reflect on the extent to which our work and the way we frame it unintentionally normalize platform capitalism. This is particularly significant for groups with constrained resources, limited capacities, and heightened vulnerability to exploitation or misinformation. We also need an honest normative debate about attainable levels of AL relevant to different outcomes, all while considering inequalities in personal and structural capacities (Helsper, 2021). Also, while we have argued that AL is relevant for navigating various domains effectively, it is but one facet of competences needed to improve working conditions, mobilize for social causes, fight marginalization, or establish meaningful interpersonal connections. Thus, we encourage future research to pursue interdisciplinary inquiries to fully grasp the requirements for and implications of AL across diverse domains. Confronting these challenges urges us to delineate the boundaries between individual responsibility and collective demands directed at platforms and regulators. This implies a shift from merely prescribing what individuals should learn to advocating for systemic changes that foster an equitable, transparent, and informed digital society.
Supplemental Material
Supplemental material for “Algorithmic media use and algorithm literacy: An integrative literature review” by Emilija Gagrčin, Teresa K. Naab and Maria F. Grub in New Media & Society: sj-pdf-1-nms-10.1177_14614448241291137, sj-pdf-2-nms-10.1177_14614448241291137, and sj-pdf-3-nms-10.1177_14614448241291137.
Footnotes
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Supplemental material
Supplemental material for this article is available online.
ORCID iDs
Emilija Gagrčin
Teresa K. Naab
Author biographies
Emilija Gagrčin (PhD, Freie Universität Berlin) is a postdoctoral researcher and lecturer at the Institute for Media and Communication Studies at the University of Mannheim. She is affiliated with the Media Use Research Group at the University of Bergen, and the Weizenbaum Institute for the Networked Society. She researches social and normative aspects of platformized civic life.
Teresa K. Naab (PhD, Hochschule für Musik, Theater und Medien Hannover) is a Professor of Digital Communication at the University of Mannheim, Germany. Her research interests include digital communication, audience research, and social science methods.
Maria F. Grub (MA, Universität Mannheim) is currently pursuing her PhD at Friedrich Schiller University in Jena, Germany, exploring the role of AI in detecting, disseminating, and countering online disinformation.
References
1. Adams-Grigorieff J (2023) Grounded critical digital literacy: Youth countering algorithmic and platform power in school and everyday life. PhD dissertation, University of California, Berkeley, USA.
2. Andalibi N, Garcia P (2021) Sensemaking and coping after pregnancy loss: The seeking and disruption of emotional validation online. Proceedings of the ACM on Human-Computer Interaction 5(CSCW1): 1–32.
3. Aufderheide P (ed.) (1993) Media literacy: A report of the national leadership conference on media literacy. Aspen, CO: Aspen Institute.
4. Avella H (2023) “TikTok ≠ therapy”: Mediating mental health and algorithmic mood disorders. New Media & Society 26(10): 6040–6058.
5. Bakke A (2020) Everyday googling: Results of an observational study and applications for teaching algorithmic literacy. Computers and Composition 57.
6. Bhandari A, Bimo S (2022) Why’s everyone on TikTok now? The algorithmized self and the future of self-making on social media. Social Media + Society 8(1).
7. Bishop S (2018) Anxiety, panic and self-optimization: Inequalities and the YouTube algorithm. Convergence 24(1): 69–84.
8. Bishop S (2019) Managing visibility on YouTube through algorithmic gossip. New Media & Society 21(11–12): 2589–2606.
9. Bishop S (2020) Algorithmic experts: Selling algorithmic lore on YouTube. Social Media + Society 6(1): 205630511989732.
10. Bogers L, Niederer S, Bardelli F, et al. (2020) Confronting bias in the online representation of pregnancy. Convergence 26(5–6): 1037–1059.
11. Buolamwini J (2022) Facing the coded gaze with evocative audits and algorithmic audits. PhD Thesis, Massachusetts Institute of Technology, USA.
12. Brodsky JE, Zomberg D, Powers KL, et al. (2020) Assessing and fostering college students’ algorithm awareness across online contexts. Journal of Media Literacy Education 12(3): 43–57.
13. Bucher EL, Schou PK, Waldkirch M (2021) Pacifying the algorithm: Anticipatory compliance in the face of algorithmic management in the gig economy. Organization 28(1): 44–67.
14. Bucher T (2017) The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society 20(1): 30–44.
15. Calice MN, Bao L, Freiling I, et al. (2021) Polarized platforms? How partisanship shapes perceptions of “algorithmic news bias”. New Media & Society 25(11): 2833–2854.
16. Chung M (2023) What’s in the black box? How algorithmic knowledge promotes corrective and restrictive actions to counter misinformation in the USA, the UK, South Korea and Mexico. Internet Research 33(5): 1971–1989.
17. Cotter K (2019) Playing the visibility game: How digital influencers and algorithms negotiate influence on Instagram. New Media & Society 21(4): 895–913.
18. Cotter K (2023) “Shadowbanning is not a thing”: Black box gaslighting and the power to independently know and credibly critique algorithms. Information, Communication & Society 26(6): 1226–1243.
19. Cotter K (2024) Practical knowledge of algorithms: The case of BreadTube. New Media & Society 26(4): 2131–2150.
20. Cotter K, Reisdorf BC (2020) Algorithmic knowledge gaps: A new dimension of (digital) inequality. International Journal of Communication 14: 745–765.
21. Cronin MA, George E (2023) The why and how of the integrative review. Organizational Research Methods 26(1): 168–192.
22. Curchod C, Patriotta G, Cohen L, et al. (2020) Working for an algorithm: Power asymmetries and agency in online work settings. Administrative Science Quarterly 65(3): 644–676.
23. Das R (2023) Parents’ understandings of social media algorithms in children’s lives in England: Misunderstandings, parked understandings, transactional understandings and proactive understandings amidst datafication. Journal of Children and Media 17(4): 506–522.
24. DeVito MA (2022) How transfeminine TikTok creators navigate the algorithmic trap of visibility via folk theorization. Proceedings of the ACM on Human-Computer Interaction 6(CSCW2): 1–31.
25. DeVito MA, Gergle D, Birnholtz J (2017) “Algorithms ruin everything”: #RIPTwitter, folk theories, and resistance to algorithmic change in social media. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 2 May 2017, pp. 3163–3174. ACM.
26. DeVito MA, Birnholtz J, Hancock JT, et al. (2018) How people form folk theories of social media feeds and what it means for how we study self-presentation. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 19 April 2018, pp. 1–12. ACM.
27. Dogruel L (2021) Folk theories of algorithmic operations during Internet use: A mixed methods study. Information Society 37(5): 287–298.
28. Dogruel L, Masur P, Joeckel S (2022) Development and validation of an Algorithm Literacy Scale for Internet users. Communication Methods and Measures 16(2): 115–133.
29. Dowell ML (2023) The same information is given to everyone: Algorithmic awareness of online platforms. PhD dissertation, University of Wisconsin-Milwaukee, USA.
30. Du YR (2023) Personalization, echo chambers, news literacy, and algorithmic literacy: A qualitative study of AI-powered news app users. Journal of Broadcasting & Electronic Media 67(3): 246–273.
31. Duffy BE, Meisner C (2023) Platform governance at the margins: Social media creators’ experiences with algorithmic (in)visibility. Media, Culture & Society 45(2): 285–304.
32. Elkin-Koren N (2020) Contesting algorithms: Restoring the public interest in content filtering by artificial intelligence. Big Data & Society 7(2): 205395172093229.
33. Espinoza-Rojas J, Siles I, Castelain T (2023) How using various platforms shapes awareness of algorithms. Behaviour & Information Technology 42(9): 1422–1433.
34. Farrell H (2012) The consequences of the Internet for politics. Annual Review of Political Science 15(1): 35–52.
35. Festic N (2022) Same, same, but different! Qualitative evidence on how algorithmic selection applications govern different life domains. Regulation & Governance 16(1): 85–101.
36. Gagrčin E, Ohme J, Buttgereit L, et al. (2023) Datafication markers: Curation and user network effects on mobilization and polarization during elections. Media and Communication 11(3).
37. Gran AB, Booth P, Bucher T (2021) To be or not to be algorithm aware: A question of a new digital divide? Information, Communication & Society 24(12): 1779–1796.
38. Greenberg S (2007) Theory and practice in journalism education. Journal of Media Practice 8(3): 289–303.
39. Gruber J, Hargittai E, Karaoglu G, et al. (2021) Algorithm awareness as an important Internet skill: The case of voice assistants. International Journal of Communication 15: 1770–1788.
40. Hargittai E, Gruber J, Djukaric T, et al. (2020) Black box measures? How to study people’s algorithm skills. Information, Communication & Society 23(5): 764–775.
41. Helsper E (2021) The digital disconnect: The social causes and consequences of digital inequalities. London: SAGE.
42. Hinds J, Williams EJ, Joinson AN (2020) “It wouldn’t happen to me”: Privacy concerns and perspectives following the Cambridge Analytica scandal. International Journal of Human–Computer Studies 143: 102498.
43. Hu J, Wang R (2023) Familiarity breeds trust? The relationship between dating app use and trust in dating algorithms via algorithm awareness and critical algorithm perceptions. International Journal of Human–Computer Interaction 40(17): 1–12.
44. Issar S (2023) The social construction of algorithms in everyday life: Examining TikTok users’ understanding of the platform’s algorithm. International Journal of Human–Computer Interaction 40(18): 1–15.
45. Jeong SH, Cho H, Hwang Y (2012) Media literacy interventions: A meta-analytic review. Journal of Communication 62(3): 454–472.
46. Johnston B, Webber S (2005) As we may think: Information literacy as a discipline for the information age. Research Strategies 20(3): 108–121.
47. Just N, Latzer M (2017) Governance by algorithms: Reality construction by algorithmic selection on the Internet. Media, Culture & Society 39(2): 238–258.
48. Karizat N, Delmonaco D, Eslami M, et al. (2021) Algorithmic folk theories and identity: How TikTok users co-produce knowledge of identity and engage in algorithmic resistance. Proceedings of the ACM on Human-Computer Interaction 5(CSCW2): 1–44.
49. Klawitter E, Hargittai E (2018) “It’s like learning a whole other language”: The role of algorithmic skills in the curation of creative goods. International Journal of Communication 12: 3490–3510.
50. Klug D, Steen E, Yurechko K (2023) How algorithm awareness impacts algospeak use on TikTok. In: Companion Proceedings of the ACM Web Conference 2023, Austin, TX, USA, 30 April 2023, pp. 234–237.
51. Kolb DA (1984) Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice-Hall.
52. Kolb DA (2015) Experiential learning: Experience as the source of learning and development. Upper Saddle River, NJ: Pearson.
53.
KoltayT (2011) The media and the literacies: Media literacy, information literacy, digital literacy. Media, Culture & Society33(2): 211–21.
54.
LeslieDBurrCAitkenM, et al (2021) Artificial intelligence, human rights, democracy, and the rule of law: A primer. Report for the Ad Hoc Committee on Artificial Intelligence. Strasbourg: Council of Europe and The Alan Turing Institute.
55.
LivingstoneS (2004) Media literacy and the challenge of new information and communication technologies. Communication Review7(1): 3–14.
56.
LloydA (2019) Chasing Frankenstein’s monster: Information literacy in the black box society. Journal of Documentation75(6): 1475–1485.
MacDonaldTW (2023) “How it actually works”: Algorithmic lore videos as market devices. New Media & Society25(6): 1412–1431.
59.
MakadyH (2023) To interact or not to interact with news posts: The role of algorithmic awareness & self-monitoring in Facebook news consumption. Electronic News17(4): 223–246.
60.
MalyI (2019) New right metapolitics and the algorithmic activism of Schild & Vrienden. Social Media + Society5(2).
61.
McKelveyF (2014) Algorithmic media need democratic methods: Why publics matter. Canadian Journal of Communication39(4): 597–614.
62.
MøllerLA (2022) Recommended for you: How newspapers normalise algorithmic news recommendation to fit their gatekeeping role. Journalism Studies23(7): 800–817.
63.
MorrisTH (2020) Experiential learning: A systematic review and revision of Kolb’s model. Interactive Learning Environments28(8): 1064–1077.
64.
NguyenD (2023) How news media frame data risks in their coverage of big data and AI. Internet Policy Review12(2).
65.
NguyenDHekmanE (2024) The news framing of artificial intelligence: a critical exploration of how media discourses make sense of automation. AI & SOCIETY39(2): 437–451.
66.
NobleSU (2018) Algorithms of oppression: How search engines reinforce racism. New York City: New York University Press.
67.
Oeldorf-HirschANeubaumG (2023) What do we know about algorithmic literacy? The status quo and a research agenda for a growing field. New Media & Society0(0).
68.
O’NeilC (2016) Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Crown.
69.
PaezA (2017) Gray literature: An important resource in systematic reviews. Journal of Evidence-Based Medicine10(3): 233–240.
70.
PetreCDuffyBEHundE (2019) “Gaming the system”: Platform paternalism and the politics of algorithmic visibility. Social Media + Society5(4).
71.
PronzatoRMarkhamAN (2023) Returning to critical pedagogy in a world of datafication. Convergence29(1): 97–115.
72.
QadriRD’IgnazioC (2022) Seeing like a driver: How workers repair, resist, and reinforce the platform’s algorithmic visions. Big Data & Society9(2).
73.
RaderEGrayR (2015) Understanding user beliefs about algorithmic curation in the Facebook news feed. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul Republic of Korea, 18 April 2015, pp. 173–182. ACM.
74.
RamizoG (2022) Platform playbook: a typology of consumer strategies against algorithmic control in digital platforms. Information, Communication & Society25(13): 1849–1864.
75.
ReviglioUAgostiC (2020) Thinking outside the Black-Box: The case for “algorithmic sovereignty” in social media. Social Media + Society6(2).
76.
SavolainenLRuckensteinM (2024) Dimensions of autonomy in human–algorithm relations. New Media & Society26(6): 3472–3490.
77.
ScalviniM (2023) Making sense of responsibility: A semio-ethic perspective on TikTok’s algorithmic pluralism. Social Media + Society9(2).
78.
SchaetzNGagrčinETothR, et al (2023) Algorithm dependency in platformized news use. New Media & Society0(0)
79.
SchererRWSaldanhaIJ (2019) How should systematic reviewers handle conference abstracts? A view from the trenches. Systematic Reviews8(1): 264.
80.
SchreursLVandenboschL (2021) Introducing the Social Media Literacy (SMILE) model with the case of the positivity bias on social media. Journal of Children and Media15(3): 320–337.
81.
ShinDParkYJ (2019) Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior98: 277–284.
82.
SilesIValerio-AlfaroLMeléndez-MoranA (2022) Learning to like TikTok . . . and not: Algorithm awareness as process. New Media & Society26: 5702–5718.
83.
SimpsonEHamannASemaanB (2022) How to tame ‘your’ algorithm: LGBTQ+ users’ domestication of TikTok. Proceedings of the ACM on Human-Computer Interaction6(GROUP): 1–27.
84.
SimpsonESemaanB (2021) For you, or for”you”?: Everyday LGBTQ+ encounters with tiktok. Proceedings of the ACM on Human-Computer Interaction4(CSCW3): 1–34.
85.
SunP (2019) Your order, their labor: An exploration of algorithms and laboring on food delivery platforms in China. Chinese Journal of Communication12(3): 308–323.
86.
SundarSSMaratheSS (2010) Personalization versus customization: The importance of agency, privacy, and power usage. Human Communication Research36(3): 298–322.
87.
SwartJ (2021) Experiencing algorithms: How young people understand, feel about, and engage with algorithmic news selection on social media. Social Media + Society7(2).
88.
TörnbergP (2022) How digital media drive affective polarization through partisan sorting. Proceedings of the National Academy of Sciences119(42).
89.
TaylorSHChoiM (2022) An Initial Conceptualization of Algorithm Responsiveness: Comparing Perceptions of Algorithms Across Social Media Platforms. Social Media + Society8(4): 20563051221144322.
90.
TorracoRJ (2005) Writing integrative literature reviews: Guidelines and examples. Human Resource Development Review4(3): 356–367.
91.
VelkovaJKaunA (2021) Algorithmic resistance: Media practices and the politics of repair. Information, Communication & Society24(4): 523–540.
92.
VinceR (1998) Behind and beyond Kolb’s Learning Cycle. Journal of Management Education22(3): 304–319.
93.
XieXDuYBaiQ (2022) Why do people resist algorithms? From the perspective of short video usage motivations. Frontiers in Psychology13.
94.
YeomansMShahAMullainathanS, et al (2019) Making sense of recommendations. Journal of Behavioral Decision Making32(4): 403–414.
95.
Ytre-ArneBMoeH (2021) Folk theories of algorithms: Understanding digital irritation. Media, Culture & Society43(5): 807–824.
96.
ZaroualiBBoermanSCde VreeseCH (2021a) Is this recommended by an algorithm? The development and validation of the algorithmic media content awareness scale (AMCA-scale). Telematics and Informatics62.
97.
ZaroualiBHelbergerNde VreeseCH (2021b) Investigating algorithmic misconceptions in a media context: Source of a new digital divide?Media and Communication9(4): 134–144.
98.
ZhangYChenH (2023) Can algorithm knowledge stop women from being targeted by algorithm bias? The new digital divide on Weibo. Journal of Broadcasting & Electronic Media67(3): 397–422.