Abstract
Filter bubbles created by algorithmic recommendation systems are typically framed as harmful, yet a growing body of scholarship suggests they can also serve protective functions. This commentary reviews the literature on digital safe spaces and protective filter bubbles, which may offer marginalized users refuge from harassment and, in repressive information environments, room for independent and dissident thought. It then proposes a forward-looking research agenda while cautioning that such protection can be partial, precarious, and costly for the users who maintain it.
Keywords
filter bubble; protective filter bubble; digital safe space; algorithmic curation; recommender systems; marginalized communities
Introduction
Algorithmic recommendation systems (i.e., systems that automatically personalize the content users see based on their inferred preferences and behavior) have been widely criticized for enclosing users in “filter bubbles” that limit exposure to diverse perspectives and insulate them from challenging viewpoints.
Despite these concerns, there is reason to believe that not all users view filter bubbles as a negative. Users may, for instance, intentionally want to curate recommendations that promote psychological safety and help them connect to underrepresented users or content (Kanai and McGrane, 2021; Zhao, 2023). Others have challenged the assumptions underlying critical analysis of the filter bubble. For instance, Guess (2021) posited that “requiring of citizens that they continuously engage with challenges to their worldviews fundamentally neglects their autonomy.” While filter bubbles have received attention for their adverse influence, comparatively few studies have considered their potential benefits. This commentary reviews the literature on digital safe spaces and protective filter bubbles and proposes a forward-looking research agenda for developing a holistic view of their impact.
Digital safe spaces
Digital safe spaces are rooted in the concept of the safe space, an environment in which members of marginalized groups can express themselves, seek support, and explore shared experiences without fear of harassment or harm; digital safe spaces extend this function to online platforms and communities.
Protective filter bubbles
The features of algorithmic curation make protective filter bubbles distinct from traditional online spaces (Abidin, 2021; Zhao, 2023). While filter bubbles have been viewed critically as insulating users from diverse perspectives, there is a growing awareness that they may serve a protective purpose (Kanai and McGrane, 2021; Randazzo and Ammari, 2023; Zhao, 2023). The existing literature points to several examples of protective filter bubbles, including those supporting feminist groups (Kanai and McGrane, 2021), gay men in China (Zhao, 2023), and political dissidents (Randazzo and Ammari, 2023). While it is an emerging concept, this commentary defines the protective filter bubble as an algorithmically curated space that shields users, particularly members of marginalized groups, from harassment, hatred, or surveillance while connecting them with affirming content and communities.
Studies that engage marginalized users directly have found nuanced perspectives, with many users viewing protective filter bubbles as a necessity while also approaching them with caution (Kanai and McGrane, 2021; Randazzo and Ammari, 2023; Zhao, 2023). The protective filter bubble can serve an important function by giving marginalized users a space for discussion and exploration. For instance, Randazzo and Ammari (2023) found that algorithmic recommendations helped give voice to trauma survivors by introducing them to the concepts needed to recognize their experiences and the language necessary to express them. The protective filter bubble may be especially important in information environments restricted by political or social repression (Makhortykh and Wijermars, 2023; Zhao, 2023). For example, Makhortykh and Wijermars (2023) suggest that in countries with low press freedom, such as Russia, algorithmic personalization and protective filter bubbles may facilitate independent and dissident thought.
Possible research directions
The prior scholarly work on protective filter bubbles provides a path toward a forward-looking research agenda. This section covers several overlapping research areas, starting with those that are presently receiving attention but still require further scholarship, and then progressing toward areas of research that are, at present, less developed.
In the algorithmic fairness literature, researchers have quantified “fairness” metrics (sometimes conceptualized as “unfairness” metrics) (Wang et al., 2023) and weighed them alongside a standard objective function. The same logic may extend to the protective filter bubble: a “protective” metric could be maximized alongside other metrics to account for the risk profile that an algorithm presents to users, as sketched below.
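To make this concrete, the following sketch shows one way such a weighted objective might be expressed. It is a minimal illustration under stated assumptions: the `Candidate` fields, the protective score, and the weight `lambda_protect` are hypothetical constructs for exposition, not implementations drawn from the cited literature.

```python
# Minimal sketch: ranking candidates by a weighted combination of a
# standard relevance prediction and a hypothetical "protective" metric.
# All names, fields, and weights are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Candidate:
    item_id: str
    relevance: float   # standard engagement/relevance prediction, in [0, 1]
    protective: float  # hypothetical estimate of how well the item supports
                       # the user's curated safe space, in [0, 1]


def combined_score(c: Candidate, lambda_protect: float = 0.3) -> float:
    """Weighted objective: lambda_protect = 0 recovers a relevance-only ranker."""
    return (1 - lambda_protect) * c.relevance + lambda_protect * c.protective


def rank(candidates: list[Candidate], lambda_protect: float = 0.3) -> list[Candidate]:
    return sorted(candidates, key=lambda c: combined_score(c, lambda_protect),
                  reverse=True)


if __name__ == "__main__":
    pool = [
        Candidate("a", relevance=0.9, protective=0.2),
        Candidate("b", relevance=0.7, protective=0.9),
    ]
    # With equal weighting, the more protective item outranks the more
    # "engaging" one: prints ['b', 'a'].
    print([c.item_id for c in rank(pool, lambda_protect=0.5)])
```

How such a protective score would be estimated and validated is, of course, itself an open research problem.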
The aforementioned research areas together form part of a broader research agenda. A fundamental challenge is understanding how users interact with, understand, and experience algorithms (Hargittai et al., 2020). There is a need for foundational research exploring how users experience and find themselves, whether through intentional action or inadvertently, in a protective filter bubble. There is also a need for algorithms and recommender systems that weigh the different trade-offs users may wish to make, and often already do make, on social platforms; a hypothetical sketch of one such user-facing control follows below. A more nuanced understanding of user perceptions, paired with systems intentionally built to support them, would allow digital environments to better serve the needs of a diverse mixture of users.
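Building on the sketch above, one purely hypothetical design for letting users weigh these trade-offs themselves is to expose the protective weight as an explicit, opt-in setting rather than an opaque default; the setting name and defaults below are illustrative assumptions.

```python
# Hypothetical per-user control over the protective weighting used in the
# earlier sketch; the "protective_weight" setting is an illustrative name.


def resolve_protective_weight(user_settings: dict) -> float:
    """Read a user-chosen protective weight and clamp it to [0, 1].

    Defaulting to 0.0 means no protective bubble unless the user opts in.
    """
    raw = user_settings.get("protective_weight", 0.0)
    return max(0.0, min(1.0, raw))


print(resolve_protective_weight({"protective_weight": 0.7}))  # 0.7
print(resolve_protective_weight({"protective_weight": 1.4}))  # 1.0 (clamped)
print(resolve_protective_weight({}))                          # 0.0 (opt-in default)
```

An explicit control of this kind would at least make the bubble legible to the user, though, as the next section cautions, visibility alone does not guarantee actual protection.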
Caution
As Kanai and McGrane (2021) observed, maintaining a protective filter bubble may be a substantial undertaking that is unsustainable in the long run for members of marginalized groups, especially if the onus is placed on individuals rather than on algorithms or those who develop them. Furthermore, because algorithms are often a “black box” to users, a filter bubble's actual protection may be less than what is perceived (Zhao, 2023), and misconceptions about its protective qualities could expose marginalized individuals to severe threats. Algorithms are also liable to change: what is protective in the current moment may not remain so, and there is a degree of precarity in relying on these systems. Moreover, Marwick (2023) noted that recommender systems sometimes undermine their protective qualities, for example, by “leaking information” that was not intended to be shared with others through features such as “people you may know” on Facebook. Finally, while the evidence of their negative influence is mixed, filter bubbles present legitimate risks, such as contributing to a fracturing public sphere (Pfetsch, 2018) or promoting hatred and extremism (Bryant, 2020). When and where the risks outweigh the benefits is an open question worthy of further research. The proposition discussed herein is not that filter bubbles present no risks, but that they may not be intrinsically damaging, and that users may perceive them as providing valuable barriers.
Conclusion
While acknowledging the risks and pitfalls of filter bubbles, this commentary posits that their potential benefits, whether real or perceived, are understudied. Current research suggests that filter bubbles may benefit marginalized communities by providing a refuge where individuals can gather to seek support, avoid hatred, and express themselves freely. The protective filter bubble may also benefit those living under repressive conditions who might otherwise have difficulty engaging with certain kinds of material or who face surveillance threats. By reviewing the literature on digital safe spaces and protective filter bubbles and suggesting areas in need of additional scholarly work, this commentary aims to illuminate the path for future research. Algorithmic recommendations are a significant part of the lived experience of digital natives; it is crucial to understand all sides of their impact.
