Abstract
Keywords
Introduction
Theoretical traditions in critical algorithm studies (Gillespie and Seaver, 2016) are indispensable for conceptualizing algorithms as belonging to algorithmic systems, in which algorithm logic operates alongside data, interfaces, and configurations of institutional authority. Without addressing these factors, methods for correcting algorithmic bias and opacity amount to what I call algorithmic reformism: proposals to correct the bias or opacity of particular algorithms that leave unexamined the design problems and constraints that motivate these algorithms in the first place. To address this, I propose the method of forward engineering.

Forward Engineering. Algorithm design constraints provide an algorithmic frame that specifies what an algorithmic system should do. Through forward engineering, we design an algorithmic system that satisfies the specifications of the algorithmic frame, while identifying design implications and design flexibilities that emerge through the design process.
This demands an analysis of not only design constraints, but also the interdependencies and indeterminacies that they reveal during the design process (Figure 1). On the one hand, we identify design implications: consequences that the constraints impose on any system designed to satisfy them. On the other hand, we identify design flexibilities: aspects of the design that the constraints leave underdetermined.
In order to identify design implications and flexibilities, I propose three dimensions of algorithm design that shift our theoretical focus away from whether the design of algorithms is satisfactory, and toward interrogating how design constraints inform this design in the first place (Figure 2). First, I substitute an analysis of algorithmic bias with an analysis of algorithmic domains; second, I substitute algorithmic transparency with algorithmic semiotics; and third, I substitute algorithmic accountability with algorithmic authority.

Analytical dimensions that motivate forward engineering and reverse engineering. In forward engineering, we produce an algorithmic system according to design constraints to investigate how the constraints inform the design process and product. During this process we attend to three dimensions of algorithm design: algorithmic domain, semiotics, and authority. In reverse engineering, we identify a black boxed algorithm in an algorithmic system and use statistical techniques to analyze its underlying logic. This enables us to identify algorithmic bias, and to achieve algorithmic transparency and accountability.
The primary contribution of forward engineering is to confront appeals to algorithmic reformism, which I examine in the context of law enforcement data analysis. I investigate law enforcement data analysis because it has become increasingly subject to algorithmic reformism, whereby algorithm auditors and designers propose to correct the bias or opacity of law enforcement algorithms, without addressing the design goals, underlying theoretical motivations, and regimes of authority that these systems presuppose by design. Accordingly this article examines how these theoretical motivations include criminological theories like “broken windows policing” which have been historically critiqued for licensing the over-policing of poor and marginalized communities (see for example Fagan and Davies, 2000), and which nonetheless continue to motivate the designs of contemporary algorithmic systems in law enforcement. Through forward engineering, I address these design presuppositions, theoretical assumptions, and the problems they motivate, which together inform the implementation of algorithmic solutions, and which remain unchanged by algorithmic reformism. Moreover, that the public is generally restricted from access to law enforcement data analysis systems makes them appropriate for a study that concerns issues of algorithmic transparency and accountability. To this end, forward engineering provides a method for reading publicly accessible documentation to infer the consequences of algorithmic systems, without needing to know specifically what algorithm logic is being used.
To begin, this article explains how the concepts of algorithmic bias, transparency, and accountability motivate the method of reverse engineering, which I illustrate through an existing analysis of the law enforcement system PredPol. I then develop the method of forward engineering, demonstrate it on the same system, and draw out the methodological distinctions between the two approaches.
Reverse engineering
In academic research, literature, and journalism, much of the discourse that attends to the social consequences of algorithms concerns the notion of “algorithmic bias,” which addresses how the decision-making logic of algorithms and their consequences are statistically partial to certain individuals, populations, or entities (Hajian et al., 2016). The prevailing response to algorithmic bias has been an appeal to reverse engineering: reproducing and analyzing an algorithm's decision-making logic in order to identify and correct the biases that it exhibits.
Reverse engineering can concern multiple kinds of algorithmic bias. An algorithm may treat a certain data variable as more significant than another, which exhibits a “technical bias” toward this variable; otherwise, the deployment of an algorithm in a particular social context may result in an “emergent bias” that was not explicitly accounted for in the algorithm's design (Friedman and Nissenbaum, 1996). Algorithmic bias may also concern how an algorithm inherits the biases of its input data (Feinberg, 2017), which has been popularized by the phrase 'garbage in, garbage out.' For example, if a certain population is underrepresented or overrepresented in input data, then the consequences that result from an algorithm's use of this input data will be partial toward or against this population.
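The inheritance of input-data bias can be made concrete with a small sketch. The areas, rates, and numbers below are invented for illustration and correspond to no real system: a predictor that ranks areas by recorded incident counts reproduces unevenness in recording even when the underlying rates are identical.

```python
# Toy illustration (not any real system): a naive predictor that ranks
# areas by recorded incident counts inherits the bias of uneven recording,
# even when the true underlying rates are identical.

true_rate = {"north": 10, "south": 10}          # identical true incidents
recording_rate = {"north": 0.9, "south": 0.3}   # north is observed more heavily

recorded = {area: round(true_rate[area] * recording_rate[area])
            for area in true_rate}

# Rank areas by recorded counts, as a naive algorithm would.
ranking = sorted(recorded, key=recorded.get, reverse=True)
print(recorded)   # {'north': 9, 'south': 3}
print(ranking)    # ['north', 'south'] despite identical true rates
```

The disparity in the ranking originates entirely in the recording rates, not in the phenomenon being predicted.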
In turn, concerns of algorithmic bias have motivated the conceptualization of “algorithmic transparency” (Diakopoulos and Koliska, 2017), which argues that if the logic of algorithms were made transparent, then their biases and logical discrepancies could be more readily identified and corrected for (Pasquale, 2015; Burrell, 2016). Faced with the challenges proposed by notions of algorithmic bias and opacity, “algorithmic accountability” (Diakopoulos, 2015, 2016) proposes to hold algorithm developers accountable for the consequences of algorithms. This mainly entails the development of tools for detecting algorithmic bias or ensuring algorithmic transparency, so as to enable “algorithmic auditing” (Mittelstadt, 2016; Mehrotra et al., 2017), or to supplement the decision-making criteria of algorithms with human oversight (Neyland, 2016; Shneiderman, 2016), as in a system of checks and balances.
Reverse engineering can be applied to reveal an algorithm's biases, to make its logic more transparent, or to motivate the installation of accountability mechanisms. In any case, reverse engineering analyzes an algorithm's logic, such as which data variables it ingests, or which machine learning methods it utilizes, in order to identify its consequences. Such an inquiry motivates us to implement a new algorithm that does not exhibit the same consequences, or to critique those who designed the algorithm and demand that they be held accountable for the algorithm's design and effects.
Reverse engineering in practice
Here I describe how reverse engineering is applied in practice to PredPol (Predictive Policing), a law enforcement data analysis system developed for use by the Los Angeles Police Department (LAPD). PredPol uses an algorithmic model called ETAS (Epidemic Type Aftershock Sequence) to convert historical crime data into a geographic probability distribution that indicates where law enforcement interventions should be staged to apprehend and prevent crimes (Mohler et al., 2011). Kristian Lum and William Isaac reverse engineer the ETAS model and demonstrate the algorithm's disproportionate impact on historically over-policed communities (Lum and Isaac, 2016). What makes their approach an instance of reverse engineering is that the authors investigate how a particular algorithm operates (ETAS), and attend to the social consequences that can be expected to arise from using this algorithm's logic in practice.
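For reference, the ETAS model belongs to the family of self-exciting point processes described in Mohler et al. (2011). A hedged sketch of its general form, using common point-process notation rather than PredPol's internal documentation: the conditional intensity (expected rate of events) at time $t$ and location $(x, y)$ combines a stationary background rate with triggering contributions from past events.

```latex
% Self-exciting (ETAS-style) conditional intensity: a background rate mu
% plus a triggering kernel g summed over all past events i with t_i < t.
\lambda(t, x, y) = \mu(x, y) + \sum_{i \,:\, t_i < t} g\!\left(t - t_i,\; x - x_i,\; y - y_i\right)
```

Every past event raises the expected intensity nearby in space and time, which is what licenses the interpretation of any crime as an indicator that another will recur close by.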
Reverse engineering requires that Lum and Isaac examine the documentation that specifies how ETAS should be configured to predict crimes for PredPol, and then reproduce and implement the algorithm themselves. For want of the crime data that PredPol uses in practice, Lum and Isaac generate synthetic drug crime data for Oakland, California, and apply their reproduction of the ETAS model to it, finding that the predicted hotspots concentrate in neighborhoods that are already disproportionately represented in police records.
The advantage of reverse engineering in this case is that it concretely demonstrates how an algorithm is biased in a realistic context. Lum and Isaac use geographic maps of hotspots, like PredPol does, to demonstrate how this bias might affect Oakland geography and corresponding demographics. Reverse engineering also enables the authors to posit that their findings are generalizable to other algorithms that work similarly to the ETAS model, which they identify as those that predict future crimes without adjusting for the disproportionate abundance of crime data in certain areas. This obviates the need to repeat reverse engineering if the proprietors of PredPol design a new algorithm in the place of ETAS, provided that the new algorithm design does not take this critique into account.
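The generalized critique, that predictions feed back into the data used to make them, can be sketched as a toy simulation. Every number and rate here is invented; the point is only the dynamic: the area predicted to have more crime is patrolled more, records more crime, and is predicted again.

```python
# Hypothetical feedback-loop sketch: two areas with identical true crime.
# Each round, the area with more recorded crime receives the patrol,
# and patrolled areas record crime at a higher rate.

TRUE_CRIMES = 10          # per area per round, identical everywhere
OBSERVE_PATROLLED = 0.5   # fraction recorded where police are present
OBSERVE_OTHER = 0.1

records = {"A": 6, "B": 5}  # a small initial disparity in historical data
for _ in range(10):
    target = max(records, key=records.get)      # "prediction": patrol the top area
    for area in records:
        rate = OBSERVE_PATROLLED if area == target else OBSERVE_OTHER
        records[area] += int(TRUE_CRIMES * rate)

print(records)  # {'A': 56, 'B': 15} -- the initial disparity compounds
```

Despite identical true crime, the small initial disparity in the historical record grows with every round of prediction and patrol.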
Lum and Isaac's use of reverse engineering establishes a critique of the ETAS model that demonstrates a significant consequence of its logic. The proprietors of PredPol can now choose to implement an algorithm in the place of the ETAS model that addresses this critique. In fact, this is a change that is now underway, as the LAPD's “Precision Policing” initiative proposes to use new heuristics to adjust for the disproportionate presence of police in certain areas (Bratton, 2018), while there are no plans in place to retire the remainder of the PredPol infrastructure. Because Lum and Isaac's critique is no longer applicable to this new regime, we will need to apply reverse engineering again to the new scenario. This creates the cyclical pattern characteristic of what I call algorithmic reformism: the algorithm is reformed, the existing critique is rendered obsolete, and reverse engineering must begin anew, while the design constraints that motivate the system persist unexamined.
Here I argue that the biases of the ETAS model, although they appear to be specific to the algorithm, serve specific design constraints, and that any other algorithm implemented to satisfy the design constraints specified for PredPol would share similar consequences. Whereas reverse engineering aims to reproduce and analyze a particular algorithm like ETAS, it does not venture to interrogate the design constraints and problems that inform the choice of this algorithm, as well as how the algorithm is configured to satisfy these design constraints in the first place. Therefore, we cannot depend on reverse engineering alone to address consequences of algorithmic systems, insofar as these consequences are implicated in the constraints that motivate algorithm design. Instead, we must interrogate these design constraints themselves, and how they reveal certain consequences through the design process.
Forward engineering
How we design an algorithmic system depends on what we set out to achieve, as well as the problems and questions we presuppose can be solved by an algorithmic system. Algorithmic reformism elides this principle by proposing new algorithmic solutions to satisfy the same design problems, neglecting to acknowledge how these problems themselves inform the consequences of whatever solutions are implemented. I refer to the set of design constraints that specifies what an algorithmic system should do as an algorithmic frame.
The notion of an algorithmic frame finds precedents in existing research that examines how design presuppositions inform the logic of algorithms. For example, Yanni Loukissas investigates how a popular library of natural language processing algorithms depends on a history of unique design assumptions to classify the syntax of human speech, which he demonstrates by using the algorithms in practice (Loukissas, 2019). Os Keyes conducts a content analysis of automatic gender recognition (AGR) and human–computer interaction (HCI) research that aims to classify gender, demonstrating how the design assumptions of AGR algorithms prefigure the way that HCI research accounts for gender (Keyes, 2018). Both of these projects inspect algorithmic frames as their object of research, explaining the design of algorithms according to ontologies, partial objectives, and social biases that they presuppose.
Provided an algorithmic frame, a developer devises an algorithmic system by identifying a set of algorithms, data sources, and implementation details that can be articulated together to solve the problem specified. This process reveals design consequences and compromises that are not explicit in the algorithmic frame but rather emerge through the design process. Moreover, the design process involves more than just designing algorithm logic and data; as I will show, it also requires designing visual, interactive, and statistical conventions as well as configurations of authority that enable algorithms to be used as intended by design. In order to address these considerations, I propose the method of forward engineering: designing an algorithmic system that satisfies the specifications of an algorithmic frame in order to investigate how the frame's constraints inform the design process and its product.
In particular, through the design process forward engineering identifies how relationships between design constraints lead to design implications, consequences that follow necessarily from satisfying the constraints together, and design flexibilities, aspects of the design that the constraints leave underdetermined.
Research through design
Research Through Design involves both theoretical and practical components, both of which can inform one another: we can evaluate an existing theory by designing an artifact that reflects its principles, or we can design an artifact to theorize about its design process (Stappers and Giaccardi, 2017). In order to investigate the algorithm design process, I enlist Research Through Design in the sense that research can be produced by designing under the constraints of design requirements and stated objectives as they are already specified. Notably, this inverts the usual direction of design research: rather than formulating our own design brief, we inherit the requirements of an existing algorithmic frame and design in order to study what they entail.
Like both imaginary media studies and adversarial design, forward engineering treats the act of designing as a mode of inquiry: we build computational systems not primarily in order to deploy them, but in order to surface the assumptions, problems, and compromises that their design requirements entail.
These methods derive research through design because they interrogate how the process of designing computational systems generates certain problems, compromises, and flexibilities, each of which informs the computational product that results. Likewise, we forward engineer algorithmic frames to investigate the consequences that design constraints exhibit during the design process. This fundamentally changes our conceptualization of algorithmic bias, transparency, and accountability. No longer concerned with opening the black boxes of algorithmic systems to reform their logic, these concepts must be made adequate to an investigation of the design process itself (Figure 2).
Algorithmic domain
Whereas algorithmic bias identifies how algorithms are partial to certain objects and variables that they operate upon, an algorithmic domain identifies how the language, concepts, and problem space of algorithmic frames present partial ways of designing, developing, and applying algorithmic systems in the first place. This is to say that partiality precedes algorithm logic: it is already inscribed in the terms by which an algorithmic frame defines its problems and objects.
Such partiality is evidenced, for example, in the classification of individuals into discrete gender categories that exclude non-binary and transgender parameters of representation (Keyes, 2018; Hoffmann, 2019). These schemes of classification are provided by algorithmic domains that conceptualize gender according to presupposed categories. Moreover, an algorithmic frame expresses not just a partiality toward certain ontologies of objects and subjects, but also a partial mode of organizing these objects and subjects. This is especially relevant to the context of law enforcement, where criminological theories motivate the design of information systems that sort and rank individuals according to data collected about them, namely in order to calculate their disposition to criminality. Through forward engineering, we interpret such theories as design constraints that inform the design of particular algorithmic solutions. We then devise algorithms that respond to such design constraints, examining how their logic satisfies these constraints, and to what ends. To investigate an algorithmic domain is to identify how design problems and presuppositions confer to particular design solutions.
An algorithmic domain specifies the needs or agendas that motivate the design of an algorithmic system, and it often provides use cases to illustrate how the system can be used to satisfy these needs. An algorithmic domain may also specify statistical, sociological, or criminological theories that motivate an algorithmic system's design, along with key entities, subjects, and phenomena that it should account for. In doing so an algorithmic domain delimits the discourses and epistemic criteria that are adequate to reflecting on algorithmic systems. For example, an algorithmic frame may identify the correction of disorder in particular neighborhoods as its motivating problem, thereby determining in advance which kinds of data, interventions, and critiques count as relevant to the system's design.
Algorithmic semiotics
Algorithmic transparency conceptualizes algorithm logic as enclosed and obscured by the walls of a 'black box' (Figure 3), which must be reverse engineered so that the consequences of algorithm logic can be revealed. This discourse elides the fact that the consequences of algorithms cannot be discovered by attending to their logic alone; to the contrary, the consequences of algorithms depend on how they disclose themselves to the world, through interfaces, visualizations, diagrams, and automated events. Through the dimension of algorithmic semiotics, we instead investigate the designed conventions that mediate algorithm logic for human apprehension and interaction, and how an algorithmic frame constrains the design of these conventions.

Algorithmic Transparency and Algorithmic Semiotics. Algorithmic transparency posits a black box that occludes algorithm logic. Algorithmic semiotics posits designed conventions that mediate this logic for human apprehension and interaction.
Algorithmic semiotics include graphical and interaction conventions designed by User Interface (UI) Designers and User Experience (UX) Designers, but also those designed by algorithm developers who assign meaningful labels to dimensions of data or algorithm outputs, like 'heart rate' or 'user ID,' in order to indicate their use. Moreover, algorithmic semiotics account for the metaphors, explanations, and rhetoric that demonstrate to people how algorithms should be understood and used, often in order to encourage their positive reception and to justify their deployment. Christian Sandvig demonstrates, for example, how algorithms are explained with diagrams or cartoons in order to indicate how they work, or to advertise their functionalities (Sandvig, 2013). In this way, algorithmic semiotics do not simply make algorithms apprehensible, but are designed to make algorithms apprehensible in particular ways, to certain ends.
The design of algorithmic semiotics can be likened to a practice of rhetoric and persuasion that concerns how algorithmic systems appear to people and are experienced by them, which are appearances and experiences designed in accordance with the constraints specified by an algorithmic frame. For example, Louise Amoore and Agnieszka Leszczynski examine how the probabilistic calculations of algorithms are mediated by geographic maps that influence how people perceive the calculations and act upon them (Amoore, 2011; Leszczynski, 2016). Here the assignment of forward engineering is to identify the design constraints and goals that motivate the presentation of these probabilities on geographic maps in the first place. Such maps do not simply represent these probabilities; they are algorithmic semiotics designed to mediate them in a particular way, consistent with goals and use cases set forth in an algorithmic frame.
Insofar as an algorithm is designed by people or to affect people, I argue that algorithms cannot appear or operate socially without algorithmic semiotics: there is no access to algorithm logic that designed conventions do not already mediate. To render an algorithm 'transparent' is therefore to design yet another semiotic mediation, not to remove mediation altogether.
Algorithmic authority
Appeals to algorithmic accountability (Diakopoulos, 2015, 2016) tend to follow from algorithmic mistakes, mishaps, or blunders that draw the reliability or epistemic legitimacy of algorithms into question once they are already deployed in practice. Algorithmic accountability thus intends to account for algorithmic discrepancies when they emerge in particular circumstances, but it does not venture to attribute these discrepancies to concrete design principles and assumptions that might reproduce them in the future. Therefore algorithmic accountability wants to hold algorithm developers accountable for the systems that they have constructed, but less so for the systems that they are actively developing. Moreover, while algorithmic accountability attributes an accountable party (typically the developers) and an affected public, it presupposes a single, dichotomous relation of responsibility rather than examining how responsibility is actually distributed across an algorithmic system.
As opposed to the dichotomous model of algorithmic accountability, we should be concerned with algorithmic authority: the configurations of responsibility, access, and epistemic legitimacy that an algorithmic frame specifies for the people, institutions, and algorithms implicated in an algorithmic system.
Algorithmic authority accounts for the existence of different accountability models. It pluralizes algorithmic accountability by acknowledging the particular design conventions and technological mechanisms that algorithmic systems deploy to configure their subjects in relation to one another. Instead of demanding that algorithm developers be held accountable, we ask: In this particular case, why are algorithm developers authorized to reform algorithmic systems, and how do algorithmic frames justify this authority? Furthermore, we investigate how algorithmic systems are deployed in the service of particular configurations of authority. These configurations of authority may preexist the design of an algorithmic system, but it falls on an algorithmic frame to specify a particular configuration of responsibility and access that an algorithmic system should abide by.
Forward engineering in practice
To demonstrate forward engineering in practice, I apply the method to analyzing PredPol and compare it to the reverse engineering approach employed by Lum and Isaac (Lum and Isaac, 2016). To accomplish this, I analyze PredPol's public documentation and promotional materials, white papers documenting the system's algorithmic implementation details, a patent specifying the system's information architecture (Mohler, 2015), as well as documents disclosed in response to Public Records Act (PRA) and Freedom of Information Act (FOIA) requests.
To ascertain PredPol's algorithmic domain, I examine the needs, use cases, and theories that its documentation invokes to motivate the system's design. Foremost among these is 'broken windows' criminology, which the documentation enlists to justify treating local, low-level crime as an indicator of further crime to come.
Historically, appeals to broken windows theory have licensed law enforcement interventions in locations that are comparatively poor and underprivileged, following from the logic that misdemeanor crimes like vandalism, evidenced by broken infrastructure, are indicators of a regional disposition to criminality (Fagan and Davies, 2000). In this way adherents of broken windows policing claim to discriminate on the basis of such misdemeanor crimes alone, while denying that this regime of discrimination necessarily contributes to the over-policing of poor and minority communities (Bratton, 2018). For PredPol, an appeal to this theory justifies the implementation of a predictive model that does not aim to identify substantive causal links between crimes, but rather interprets every crime at a given location as a generic indicator that another crime will recur nearby. However, PredPol's algorithmic frame imposes a constraint on this generalizing logic by requiring users to choose a crime classification (e.g., auto theft, burglary) for analysis each time that they run the predictive model (Mohler, 2015). This indicates a first design implication: the generalizing logic of the theory is applied selectively, one crime classification at a time, at the discretion of the user.
Like Lum and Isaac (Lum and Isaac, 2016), I generate synthetic crime data according to the design specifications: fundamentally, each data point must have geographic coordinates and a timestamp. The difference is that whereas Lum and Isaac simulate how PredPol would analyze drug crime data in practice, and therefore design all of their data to belong to the same classification of drug crime, I follow the algorithmic frame to design the data points with different crime classifications in order to investigate the implications of this design constraint. This consequently raises a substantial design flexibility: the algorithmic frame leaves to users the choice of which crime classifications to analyze, such that the same system can license markedly different patterns of intervention depending on how this discretion is exercised.
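As a hedged sketch, synthetic records satisfying these specifications (coordinates, a timestamp, and a crime classification) might be generated as follows. All values, the reference coordinates, and the classification list are invented for illustration.

```python
# Sketch of synthetic crime data satisfying the frame's specification:
# each point carries coordinates, a timestamp, and a crime classification.
# Reference location, spread, and classifications are invented.
import random

random.seed(0)  # reproducible illustration

CLASSIFICATIONS = ["auto theft", "burglary", "vandalism"]

def synthetic_crimes(n, lat0=37.80, lon0=-122.27, spread=0.05):
    """Generate n synthetic crime records near a reference point."""
    return [{
        "lat": lat0 + random.gauss(0, spread),
        "lon": lon0 + random.gauss(0, spread),
        "t": random.uniform(0, 365.0),            # day of year
        "class": random.choice(CLASSIFICATIONS),  # per the frame's constraint
    } for _ in range(n)]

data = synthetic_crimes(1000)
print(len(data), sorted({d["class"] for d in data}))
```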
The documents further specify that locations for allocating police resources according to historical crime data are called “hotspots,” which designate 500 × 500 ft squares. In order to satisfy this constraint, the output of my algorithm needs to take the form of this grid, even though the specified input is point-based location data. This consequently reveals a second design implication: aggregating point-based events into 500 × 500 ft cells converts discrete incidents into properties of areas, such that predictions mark entire blocks, and everyone within them, rather than particular events.
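The aggregation step can be sketched minimally. The events below are invented, and coordinates are assumed to be planar (in feet) purely for illustration; a real implementation would first project geographic coordinates.

```python
# Sketch of the frame's grid constraint: point-based events must be
# aggregated into 500 x 500 ft cells before any prediction is made.
# Coordinates are assumed planar (feet); the events are invented.
from collections import Counter

CELL_FT = 500

def to_cell(x_ft, y_ft):
    """Map a point location to the index of its 500 ft grid cell."""
    return (int(x_ft // CELL_FT), int(y_ft // CELL_FT))

events = [(120.0, 80.0), (480.0, 499.0), (510.0, 20.0), (1490.0, 730.0)]
counts = Counter(to_cell(x, y) for x, y in events)
print(counts)  # Counter({(0, 0): 2, (1, 0): 1, (2, 1): 1})
```

Note how two distinct incidents become indistinguishable once they fall into the same cell: the cell, not the incident, becomes the unit of prediction.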
I next need to implement logic for determining which cells of the grid are designated as hotspots. Notably, the algorithmic frame does not mandate the ETAS model in particular: any logic that converts historical crime data into a relative likelihood of future crime for each cell can satisfy its constraints. This reveals a design flexibility with significant consequences, since critiques directed at the ETAS model alone leave the frame, and any replacement algorithm designed to satisfy it, intact.

Algorithmic system forward engineered from PredPol's algorithmic frame.
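The hotspot-designation step can be illustrated with a minimal sketch. Nothing here reproduces PredPol's actual logic: the time-decayed scoring, the half-life, and the number of hotspots k are invented stand-ins for any logic that converts historical crime data into per-cell likelihoods.

```python
# Hedged sketch of one hotspot logic among many that satisfy the frame:
# score each grid cell by a time-decayed count of past events, then flag
# the k highest-scoring cells as hotspots. Decay constant and k are invented.
import math
from collections import defaultdict

def hotspot_cells(events, now, k=2, halflife_days=30.0):
    """events: list of (cell, day). Return the k highest-scoring cells."""
    decay = math.log(2) / halflife_days
    scores = defaultdict(float)
    for cell, day in events:
        scores[cell] += math.exp(-decay * (now - day))  # recent events weigh more
    return sorted(scores, key=scores.get, reverse=True)[:k]

events = [((0, 0), 100), ((0, 0), 95), ((1, 0), 10), ((2, 1), 99), ((2, 1), 98)]
print(hotspot_cells(events, now=100))  # [(2, 1), (0, 0)]
```

Swapping this scoring rule for ETAS, or for any other model, changes the internals but not the frame: the output is still a ranked set of cells designated for intervention.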
Now that I have implemented an algorithm to designate hotspots, the design constraints require me to design algorithmic semiotics to present these hotspots to human apprehension and interaction, to indicate precisely where law enforcement should intervene in geographic space. Given that I am constrained by having converted point-based crime data into a grid-based probability distribution, I encode geographic space as a gridded heatmap, with darker squares indicating a higher local likelihood of future crimes, with hotspots outlined. Then, following the requirements of the algorithmic frame, I overlay this heatmap on a geographic map (Mohler, 2015). This effectively reveals a design implication for the system's semiotics: the gridded heatmap presents criminality as a continuous property of geographic space, so that what the algorithm calculates as probabilities appears to users as places that are themselves more or less dangerous.
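The semiotic step itself can be sketched: the same per-cell scores are re-encoded for human apprehension, here as a shaded text grid with a cutoff marking hotspots. The shading alphabet and threshold are design choices of the sketch, not of any real system, which is precisely the point: how the scores appear is designed separately from how they are computed.

```python
# Sketch of the semiotic step: per-cell scores in [0, 1] are rendered as
# shaded characters, with cells above a cutoff marked '#' as hotspots.
# The shading alphabet and cutoff are invented design choices.
SHADES = " .:*"  # light to dark

def render(grid, hotspot_cutoff):
    """grid: 2D list of scores in [0, 1]; cutoff marks hotspot cells."""
    rows = []
    for row in grid:
        chars = []
        for s in row:
            if s >= hotspot_cutoff:
                chars.append("#")  # hotspot: the outlined square
            else:
                # darker characters signal higher likelihood
                chars.append(SHADES[int(s * (len(SHADES) - 1))])
        rows.append("".join(chars))
    return "\n".join(rows)

grid = [[0.1, 0.4, 0.9],
        [0.0, 0.7, 0.3]]
print(render(grid, hotspot_cutoff=0.8))
```

Changing `SHADES` or the cutoff changes what users perceive as dangerous without changing a single score.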
The superposition of hotspots and geographies is an increasingly common approach to representing crime data that warrants specific attention. Fundamentally the semiotics of crime maps evince what media scholar Aurora Wallace (2009) describes as an “aesthetic of danger” that portrays particular geographies and communities as threatening, reinforcing popular convictions about the spatiotemporal dynamics of criminality and its regional distribution. Wallace draws particular attention to the degree of control afforded to crime mapping systems to depict certain areas as dangerous, and as permanently marked by past events that are less contemporaneous than the crime maps suggest. In the case of PredPol, what makes the presentation of geography and hotspots significant in this way is another feature provided by the algorithmic frame: users can not only select the range and kind of data that the algorithm considers to calculate hotspots, but they can also select hotspots manually from any point on the map (PredPol, 2014). This engenders a key design flexibility: manually selected hotspots appear on the map alongside, and indistinguishable from, those calculated by the algorithm, so that user discretion borrows the apparent objectivity of algorithmic output.
From the purview of algorithmic accountability, we might appeal to holding users accountable for selecting fair or appropriate hotspots, perhaps by limiting the flexibility that they have when choosing hotspots, or rendering their choices more transparently. However, by forward engineering the algorithmic frame, we find that PredPol is designed precisely in order to balance the use of a scientifically verified hotspot selection algorithm (that is to say, one that is informed by peer-reviewed theories and methods) with the unbounded decisions of human users. To request algorithmic accountability or transparency here is to miss the point: PredPol's algorithmic frame seeks an algorithmic solution that licenses the unbounded discretion of law enforcement analysts with a bounded algorithmic protocol. These two processes of hotspot selection are not mutually exclusive but paralleled, and this parallelism informs how the algorithmic system operates and appears. It provides for a distribution of authority which effectively ensures that no single algorithm or person is definitively responsible for hotspot selection: neither the developers who implement PredPol, the users who adjust its parameters in practice, the PredPol algorithms, nor the law enforcement officers dispatched to the hotspot locations.
Reverse and forward engineering: Methodological distinctions
Having forward engineered PredPol's algorithmic frame, I now discuss methodological distinctions between forward engineering and reverse engineering. First, reverse engineering is concerned with evaluating a specific algorithm's logic, in this case the ETAS model (Lum and Isaac, 2016). Reverse engineering identifies whether the algorithm exhibits any statistical biases in its treatment of data, and attributes the system's consequences to this particular logic. Forward engineering, by contrast, evaluates the design constraints that any algorithm implemented for PredPol must satisfy, attributing the system's consequences to its algorithmic frame rather than to the ETAS model alone.
This is significant in the context of law enforcement because criminological theories and policing initiatives have been historically designed to police poor and minority communities (see for example Hawkins and Thomas, 2013). If we locate the cause of disproportionate over-policing in particular algorithmic solutions and datasets, we may neglect to interrogate the historical legacy of theoretical assumptions that motivate these solutions in the first place. For its part, forward engineering identifies PredPol as an algorithmic system that operationalizes the logic of broken windows theory and enlists the semiotics of cartographic hotspots to represent criminality geographically. In turn, this necessitates that we acknowledge existing research and literature that addresses the motivations and practical implications of broken windows theory (for example Fagan and Davies, 2000) as well as the function of cartographic conventions in establishing certain conceptualizations of criminality (for example Kindynis, 2014). Algorithmic systems designed according to these conventions are subject to their same critiques; for this reason, forward engineering provides a method for identifying the function of these conventions in the process of designing algorithmic systems.
Second, while reverse engineering conceptualizes an algorithm as a logical process that must be analyzed or revealed, forward engineering conceptualizes algorithm logic as a component of a broader algorithmic system. For forward engineering, the consequences of algorithms are not limited to their logic, but extend to the configurations of semiotics and authority that this logic depends on in order to work as intended by design. In the case of PredPol, forward engineering can clearly acknowledge a series of consequences that reverse engineering does not. First, hotspots do not represent definitive crime predictions so much as they are instances of algorithmic semiotics: cartographic conventions designed to present probabilistic calculations as discrete, actionable locations for intervention.
Further, PredPol has an authority structure by design: responsibility for hotspot selection is distributed among developers, users, algorithms, and dispatched officers such that no single party can be held definitively accountable, a configuration that appeals to algorithmic accountability leave unaddressed.
Third, we should consider the cases to which reverse and forward engineering can be applied. Both methods concern analyzing algorithmic systems provided some form of access: reverse engineering requires access to an algorithm's logic, or at least to its inputs and outputs, whereas forward engineering requires only the documentation that specifies a system's design constraints. Forward engineering can therefore address systems, like those of law enforcement, whose algorithms remain inaccessible to the public but whose algorithmic frames are publicly documented.
Lastly, forward engineering is designed specifically to account for algorithmic reformism. The outcome of reverse engineering is a critique of a particular algorithm's epistemic fallacies, which motivates the design or implementation of an improved algorithm. This tends toward algorithmic reformism, where in the case of the LAPD, algorithms are now being reformed to use heuristics that respond to critiques like Lum and Isaac's, while continuing to depend on the same criminological theories like broken windows policing (Bratton, 2018). These reformed heuristics are said to be “discriminating” but not “discriminatory,” because they use data about community orderliness to anticipate crimes, but they are not intentionally designed to have a disproportionate impact on certain races. From the vantage of forward engineering, however, we see that this is but a new rhetorical and semiotic solution to a design problem and design constraints that have remained the same. Moreover, whereas algorithmic bias concerns how algorithms are “discriminatory,” an algorithmic domain addresses the deleterious consequences of “discriminating” design problems to begin with. By investigating how algorithmic frames inform the design process, we acknowledge that this new solution will not fix the problems that the LAPD presupposes by design – we must confront the design problems and constraints themselves.
Meanwhile, new law enforcement data analysis systems like HunchLab (hunchlab.com) and CivicScape (civicscape.com) propose to account for algorithmic bias, transparency, and accountability with algorithm logic that is open source and user-controlled. Again, we need only return to the forward engineering of PredPol to find that many of the design constraints, implications, flexibilities, and consequences remain. Whereas reverse engineering can target the algorithms that each of these systems use and correct their epistemic fallacies, forward engineering can critique the regime of design problems and conventions that these systems share. As new law enforcement campaigns arise to reform the biases of algorithmic systems, it is imperative that we continue to confront the historically entrenched presuppositions that motivate these systems in the first place. Forward engineering provides a method for identifying these presuppositions in the design constraints specified by algorithmic frames, enabling us to relate work on the history, motivations, and contexts of these presuppositions to an analysis of the algorithm design process.
Conclusion
This article presented a method for analyzing the process of designing algorithmic systems according to design constraints. In doing so, it proposed the design dimensions of algorithmic domains, semiotics, and authority to investigate the consequences of algorithms beyond the scope of algorithmic bias, transparency, and accountability. Whereas these latter discourses continue to raise important concerns about the biases and opacity of algorithms, they contribute to algorithmic reformism insofar as they overlook how these consequences are informed by design constraints in the first place. Through forward engineering, we confront consequences of algorithmic systems that are presupposed by design problems, irrespective of the algorithmic solutions designed or reformed to solve them.
By shifting the scope of our analysis, we also stand to shift the scope of our interventions. As opposed to correcting for algorithmic bias, we can consider how design problems, objectives, and needs motivate algorithmic systems to be partial to certain people, populations, processes, and politics by design, as specified by algorithmic domains. The call for transparent algorithm logic can be met with an analysis of how design problems influence the configuration of semiotics that mediate this logic to people in order to influence how algorithms and their calculations are perceived. And programs of algorithmic accountability can shift to acknowledge that configurations of authority are implicated in the design of algorithmic systems, and that we must draw these designs themselves into question, not merely regulate their consequences. In each way, we depart from appeals to algorithmic reformism that propose to correct the logic of algorithms, and we confront the underlying design principles, motivations, and problems that algorithmic solutions are designed to satisfy.
This underscores the importance of a Research Through Design practice in both critically interrogating and intervening into the consequences of algorithmic systems. By forward engineering algorithmic frames, we demonstrate how the design requirements of algorithmic systems inform processes of designing them to certain ends, with particular consequences. This presents an opportunity to develop critical algorithm pedagogies that begin from algorithm design problems to teach about how algorithm design solutions respond to these problems and what kinds of flexibility developers have (and don't have) in negotiating them. Such pedagogies can accordingly draw from existing research and literature to address how certain discourses, values, practices, and available technologies inform the design of algorithmic frames in the first place. In the case of law enforcement data analysis, we must go beyond algorithm logic to ask why law enforcement presupposes certain problems for algorithms to solve, as well as how algorithms are designed not only to solve but also to justify these problems and their appeals to criminological theories like broken windows theory.
By laying this critical foundation, future research can better account for how algorithmic frames come to be specified in the first place: how institutional agendas, entrenched theoretical traditions, and available technologies inform the design constraints that algorithmic systems are subsequently designed to satisfy.
