Abstract
Police officers often make decisions while under stress and pressed for time, which can lead to troubling consequences such as wrongful arrests, use of excessive force, and disproportionate arrests of Black community members.1 Existing informational resources can enable officers to prepare for interactions in ways that are better calibrated to the needs of community members.2,3 For instance, inputting a name into CompStat or another incident database might reveal factors like a person’s past arrests or encounters with police.4 But most police agencies do not provide tools that enable officers to review, organize, and synthesize a community member’s history of encounters with law enforcement in real time before engaging with that person.
The U.S. Department of Justice’s Office of Community Oriented Policing Services has argued that police officers—from rank-and-file patrol officers to department heads—could benefit from real-time access to relevant data in their day-to-day work.5 Artificial intelligence (AI) technology—which can be thought of as software that simulates human decision-making and learning—holds promise for meeting that need because it can easily synthesize large amounts of quantitative and qualitative data in real time. Moreover, this ability to synthesize data can potentially help ameliorate a common problem for officers: a lack of bandwidth for processing information in the heat of the moment.1
Some forms of AI software are already used in aspects of policing as standalone technologies or to enhance existing technologies, such as image recognition or predictive analytics (for instance, predicting where certain crimes are particularly likely to occur and at what times of day). Thousands of U.S. police agencies use standard or AI-based image recognition software.6 When officers obtain images of license plates or suspects, the software can match those images against images in databases and retrieve the names of, and other information about, the people who own the cars or who have histories of police encounters. Automated AI text summarization and classification tools, which are not yet widely used, could offer similar benefits, especially in high-pressure situations when an officer is deployed to an emergency call.
One example of an AI tool used by police departments, ForceMetrics,7 synthesizes information from large numbers of incident reports and extracts insights and themes from community members’ histories with law enforcement.8 AI tools like ForceMetrics can, for instance, instantly recognize multiple references to concepts related to hunger in a history of police reports and tag community members’ files with the label “food insecurity”—a task that would require intensive human capital to code and extract without AI tools. With this knowledge in hand, officers might approach someone who stole food from a grocery store with greater empathy than they would have displayed without access to this information. By showing empathy, the officers may help to avoid a combative response from the individual. Other labels assigned by AI after report analysis might relate to factors such as homelessness or mental health.
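To make this tagging process concrete, the following minimal Python sketch shows how concept labels could be extracted from a report history. ForceMetrics’ actual methods are proprietary; the keyword lists, labels, and sample reports below are our own hypothetical illustrations, and a deployed tool would likely rely on trained language models rather than simple keyword rules.

```python
# Minimal sketch of AI-style concept tagging of incident reports.
# Keyword lists, labels, and report text are hypothetical; a deployed
# tool would likely use trained language models, not keyword rules.

CONCEPT_KEYWORDS = {
    "food insecurity": {"hungry", "hunger", "stole food", "food bank", "hadn't eaten"},
    "housing instability": {"evicted", "sleeping in car", "shelter", "unhoused"},
}

def tag_report(report_text: str) -> set[str]:
    """Return the concept labels whose keywords appear in one report."""
    text = report_text.lower()
    return {
        label
        for label, keywords in CONCEPT_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    }

def tag_history(reports: list[str]) -> set[str]:
    """Aggregate labels across a community member's full report history."""
    labels: set[str] = set()
    for report in reports:
        labels |= tag_report(report)
    return labels

print(tag_history([
    "Subject stated he hadn't eaten in two days before taking bread.",
    "Subject reported visiting a food bank earlier in the week.",
]))  # -> {'food insecurity'}
```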
Although AI tools hold the potential to improve officers’ ability to respond to crime effectively and, in so doing, ultimately bolster public safety, research also documents persistent biases in AI tools and in the behavior of employees who use them. AI software programs, or algorithms, that make predictions on the basis of large amounts of past data are only as accurate as the data they are trained on; any biases in the training data will result in biased algorithms and predictions.9 In this context, “training” means the process by which AI algorithms learn to recognize patterns in past data and use those patterns to make predictions. Take the case of facial recognition. An AI system would first be exposed to a large collection of facial images, along with information about whose face is in each image. The system would analyze the unique features of each face—such as the shape of the eyes or the distance between facial features—and use this information to create a set of rules for distinguishing between different people. Once trained, the system can then apply these rules to identify a match between a new image and a known face in the database.
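The matching step can be illustrated with a brief sketch. In the hypothetical Python code below, we assume that face images have already been converted into numeric feature vectors (embeddings) by a model learned from labeled training data; the similarity threshold and gallery structure are illustrative assumptions, not a description of any particular vendor’s system.

```python
# Sketch of the matching step in a trained face recognition system.
# Assumes face images have already been converted to numeric feature
# vectors ("embeddings") by a model learned from labeled training data;
# the names, gallery structure, and threshold are hypothetical.

import numpy as np

def match_face(new_embedding: np.ndarray,
               gallery: dict[str, np.ndarray],
               threshold: float = 0.8) -> str | None:
    """Return the name of the closest enrolled face, or None if no
    similarity exceeds the threshold."""
    best_name, best_score = None, threshold
    for name, known in gallery.items():
        # Cosine similarity between the new face and each enrolled face.
        score = float(np.dot(new_embedding, known)
                      / (np.linalg.norm(new_embedding) * np.linalg.norm(known)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```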
But facial recognition software might lead to racial bias in the identification of possible crime suspects if flaws in the training data set caused the system to be more accurate at matching a new image with a known face in the database when the person in the new image is White rather than Black. In that case, pursuing a matched Black person as a suspect could lead to miscarriages of justice more often than would be the case with a matched White person. There are high-profile examples of officers wrongfully arresting community members on the basis of faulty image matches by facial recognition software.10 AI tools are only meant to supplement existing crime-solving methods. However, the high-profile mistaken arrest incidents reveal that officers can unintentionally allow AI tools to replace critical thinking.6
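Disparities of this kind can be surfaced before deployment through routine accuracy audits that compare error rates across demographic groups. The following is a minimal, hypothetical version of such an audit; a real audit would require a large, demographically labeled evaluation set.

```python
# Sketch of a demographic accuracy audit for a matching system.
# Each record pairs the system's predicted identity with ground truth;
# the field names and data format are hypothetical.

from collections import defaultdict

def false_match_rate_by_group(results: list[dict]) -> dict[str, float]:
    """Compute the share of incorrect matches within each group."""
    errors: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for r in results:
        totals[r["group"]] += 1
        if r["predicted_id"] != r["true_id"]:
            errors[r["group"]] += 1
    return {group: errors[group] / totals[group] for group in totals}
```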
In this article, we describe a framework for understanding how AI can foster and amplify bias in policing—creating what we call a cycle of bias. We then describe behavioral science interventions that could disrupt this cycle at each step of AI-assisted police work.

Figure 1. The cycle of bias in AI-assisted police work & how to disrupt it.
Framework: The Cycle of Bias in AI-Assisted Police Work
To understand how bias in AI tools could exacerbate inequities in criminal justice outcomes, it is first necessary to understand how officers might use AI tools in their daily work. As is shown in Figure 1, AI-assisted police work generally involves three basic steps.
In Step 1—before any interaction occurs—officers gather data from the AI system about potentially involved individuals (the “targets”) and form initial impressions of the individuals. This could happen at the police station before officers are deployed to a call or in the car while en route to the location.
In Step 2, the officers interact with the targets in ways shaped by the data provided by the AI tool, the impressions formed from that data, and the impressions they form on the scene.
In Step 3, after the interaction, the officers update the AI database with details about the interaction. The AI software then retrains using the updated data and refines its algorithm, which then serves as the starting point for subsequent cases.
If the AI tools are developed using algorithms based on biased training data, bias can be introduced as early as Step 1. Indeed, it is likely that most existing police databases from which AI tools would draw are biased.13 For example, Black people are disproportionately stopped by police compared with their White counterparts.14 As a result, existing police databases are likely to include more information about and a greater number of police-stop reports for Black people than White people. This imbalance can predispose officers to be more suspicious of Black people and to stop them more often than they do White people.
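A simple simulation illustrates how this imbalance arises even when underlying behavior is identical across groups. The code below is a hypothetical thought experiment, not an analysis of real police data; all rates and population sizes are invented for the example.

```python
# Hypothetical simulation of how unequal stop rates skew a database.
# Both groups have the same underlying offense rate; only the stop
# rate differs. All numbers are illustrative.

import random

random.seed(0)
TRUE_OFFENSE_RATE = 0.05               # identical across groups
STOP_RATE = {"A": 0.30, "B": 0.10}     # group A is stopped three times as often

records = []
for group, stop_rate in STOP_RATE.items():
    for _ in range(10_000):            # 10,000 residents per group
        if random.random() < stop_rate:
            offended = random.random() < TRUE_OFFENSE_RATE
            records.append((group, offended))

stop_counts = {g: sum(1 for grp, _ in records if grp == g) for g in STOP_RATE}
print(stop_counts)  # group A accumulates roughly three times as many records
```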
Furthermore, officers are also likely to interact with biased data in a biased way, owing to the demands on their cognitive bandwidth (also known as cognitive load). When officers are stressed and pressed for time, they are prone to rely on cognitive shortcuts, such as stereotypes, when interpreting the information that AI tools provide.
At Step 2, when police engage with community members, officers would continue to be prone to using cognitive shortcuts and, therefore, could end up failing to effectively use constructive information provided by AI tools. This misstep can further exacerbate heuristic thinking.6 For example, officers might unnecessarily escalate an encounter with a person accused of a crime on the basis of their assumption that the individual is a criminal when, in reality, the individual acted out because of mental health issues. If the officers had fully appreciated the output of an AI tool that flagged this contextual factor, they might instead have had a better understanding of what was happening and why and treated the community member more compassionately.
In the interaction stage, officers might also be afflicted by AI technology’s potential to significantly undermine people’s empathy toward others.18 Research has shown that technology-mediated communication may decrease people’s perspective-taking ability,19,20 which is a key component of empathy.21
It is during interactions with community members that self-fulfilling prophecies arise. A police officer may form a hasty judgment about a community member, which in turn shapes how the officer treats that person. The officer’s treatment then affects how the community member responds, ultimately reinforcing the officer’s initial swift judgment. In an example of a self-fulfilling prophecy, an officer may perceive Black men as being more violent than White men. If this officer responds to a call about a Black man jogging down an alley and an AI tool indicates that the alley is in a high-crime stretch of town, the officer may assume that the man has committed a crime rather than considering alternative explanations.1 The officer may therefore start the interaction by issuing commands instead of asking questions.22 This approach may alarm the jogger, causing him to try to flee, which might then lead the officer to use unnecessarily harsh force to restrain the jogger—thereby perpetuating the history of racial biases contributing to the use of force by police.23
In the context of AI-assisted police work, such an outcome means that at Step 3, report writing, the information fed back into the system will be based on officers’ self-fulfilling prophecies and will further confirm the biases in the existing data set. Bias at this stage can affect which reports get written and how reports are written. The report for the jogger scenario described in the previous paragraph will indicate that the target was aggressive, just as the officer’s assumptions and biases predicted. Had self-fulfilling prophecies and biases not occurred and had the officer considered an alternative explanation that proved to be correct—that the man was simply out jogging for exercise—the officer may have ended up not writing a report about the situation at all.
When the algorithm updates to include new biased reports, it will become still more biased. This increasingly biased algorithm will serve as the starting point for subsequent calls and cases, creating a negative cycle that will amplify bias and likely increase injustice over time. To combat this recursive negative process, we next outline potential strategies to mitigate bias in AI-assisted police work.
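A stylized simulation illustrates how this recursive process can amplify an initial imbalance. The model below deliberately reduces patrol allocation to a single assumption—that new stops concentrate superlinearly on the group with more existing records—and all numbers are hypothetical; it is a thought experiment, not a calibrated model of any police department.

```python
# Stylized model of the recursive cycle: each round, new stops are
# concentrated on the group that already dominates the database (here,
# superlinearly, a deliberately simple stand-in for data-driven patrol
# allocation), and the resulting reports are fed back in.

records = {"A": 600, "B": 400}  # initially imbalanced database
for round_number in range(5):
    weights = {group: count ** 2 for group, count in records.items()}
    total_weight = sum(weights.values())
    for group in records:
        records[group] += int(100 * weights[group] / total_weight)
    share_a = records["A"] / sum(records.values())
    print(round_number, records, f"group A share = {share_a:.3f}")
# Group A's share of the database grows every round even though nothing
# about underlying behavior differs between the groups.
```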
Interventions to Mitigate Bias
A comprehensive approach that intervenes at each step of police work and targets multiple aspects of policing (tools, individual biases, and interactions with community members) is needed to mitigate the cycle of bias. Research in behavioral science informs our recommendations. Beyond providing insight into causes of and antidotes to bias in general, researchers have begun to study people’s psychological responses to advanced technologies like AI.24–28 For example, research indicates that people often perceive that algorithms tend to overlook qualitative and contextual information (in other words, they perceive algorithms to be overly reductionistic), which leads them to believe that algorithmic decisions are more unfair than decisions made by humans.26
Specifically, we propose that the most effective approaches for mitigating bias in AI-assisted police work will focus on improving officers’ deliberative processing, inducing empathic mindsets, training officers on an expanded set of communication tactics, and prompting officers to be more factual and detailed in their report writing. To support these claims, we illustrate how applying our proposed interventions could shift the process of AI-assisted police work from one that perpetuates bias to one that disrupts it.
At Step 1, Recalibrate How Officers Perceive AI Tools
As part of addressing bias related to the initial use of AI tools, police leaders could ensure that bias-fighting advice is added to the initial training on how to tactically deploy new AI tools. This added training would encourage officers to take time to absorb the tools’ output and consider whether the information might be biased. Slowing down also should reduce reliance on stereotypes and other bias-promoting cognitive shortcuts. (Research has shown that, in the field, interventions that encourage officers to stop and think can reduce wrongful arrests and the use of force.1)
In addition, developers of AI tools could include nudges in the AI interface that remind officers of the bias-fighting concepts taught in the training. Technically, such nudges could take the form of brief on-screen messages displayed alongside the tool’s output.
A more elaborate intervention could explicitly highlight in the AI interface what the AI tool cannot do in addition to what it can do. For example, notifications in AI interfaces could remind officers that AI can identify concrete and objective features of a case34 but cannot provide the complete picture and can never know all of community members’ experiences and histories. This approach emphasizes that AI is not all-powerful: Humans and AI tools are collaborators, each contributing what the other cannot.
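As a sketch of what such an interface might look like, the hypothetical code below attaches reminder messages to an AI tool’s output. The message wording, trigger logic, and context flags are our own illustrations rather than features of any existing product.

```python
# Hypothetical sketch of interface nudges displayed alongside AI output.
# Message text, trigger logic, and context flags are illustrative and
# are not drawn from any deployed tool.

LIMITS_NOTICE = ("This tool surfaces past records. It cannot capture this "
                 "person's full history or current circumstances.")
SLOW_DOWN_NOTICE = "Take a moment: does this summary fit what you observe on scene?"

def render_output(summary: str, context_flags: list[str]) -> str:
    """Attach bias-fighting reminders to the tool's summary."""
    lines = [summary, LIMITS_NOTICE, SLOW_DOWN_NOTICE]
    for flag in context_flags:  # e.g., labels such as 'food insecurity'
        lines.append(f"Context flag: {flag}. Consider asking about this directly.")
    return "\n".join(lines)

print(render_output("3 prior contacts; no violent incidents.", ["food insecurity"]))
```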
Taken together, these types of interventions will prompt police officers to engage in more deliberative processing. They will also highlight the human skills officers should bring to interactions with community members.
At Step 2, Foster Diagnostic (vs. Confirmatory) Information Gathering
To address bias in the interaction step of policing, we suggest that police departments (a) teach officers to think and act in more empathic ways and (b) include nudges in AI tools reminding them to adopt empathic mindsets.
In interventions designed to induce an empathic mindset,36 trainers attempt to shift authority figures, such as police officers, away from making judgments about people’s character (for example, that they have inherent character flaws) and toward considering contextual reasons why individuals may engage in misbehavior (for example, that they lack the money to buy food) and are deserving of empathy. This empathic mindset can be induced by providing authority figures with examples of contextual reasons for people’s behaviors (such as readings that clearly illustrate how situations, contexts, and social structures beyond people’s control can help explain their actions).
In contexts outside of policing, like education, empathic mindset interventions have been shown to reduce stereotyping and the severity of discipline used by authority figures.37 In the domain of criminal justice, researchers tested a related intervention and found that when people read a first-person narrative of a prisoner, they displayed more empathy than they did when they read third-person information about incarceration.38
Interventions could also teach officers concrete communication tactics that would help them translate the motivational benefits of having an empathic mindset into behaviors that lead to less biased treatment. Interventions that teach these communication tactics are particularly appealing because practicing them can make the tactics habitual and automatic,17 requiring minimal cognitive attention from officers who already have limited bandwidth.1 What is more, once officers are comfortable with empathic communication tactics, AI interface nudges can prompt officers to use them.
One particularly promising communication approach for promoting an empathic mindset and reducing biased behavior is perspective getting: asking people questions about their experiences and points of view rather than attempting to imagine them.
Asking questions to get community members’ perspectives should reduce officers’ tendency to use data from AI tools to confirm biased assumptions and stereotypes. Instead, officers will be more likely to focus on getting an accurate picture of the community members with whom they interact. Training officers to develop an empathic mindset and incorporating perspective-getting nudges into informational tools should improve not only AI-assisted policing but also police work in general. Moreover, these interventions could help assuage community members’ concerns that AI-assisted police work will harm service quality and lead officers to overlook their unique needs.43,44 Research in other fields has found that consumers worry that AI-assisted work will be of lower quality than unassisted work43 and that individuals using AI tools will be less attentive than usual to other people’s unique circumstances and contexts.44
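A perspective-getting nudge could be as simple as the interface suggesting an open-ended question matched to the AI tool’s context flags, as in the hypothetical sketch below; the question wording is illustrative only.

```python
# Hypothetical perspective-getting prompts keyed to AI-generated context
# flags. The point is that the interface suggests open-ended questions
# rather than conclusions; all wording here is illustrative.

DEFAULT_QUESTION = "Can you walk me through what happened from your point of view?"
QUESTION_BANK = {
    "food insecurity": "Can you tell me what has been going on with food lately?",
    "housing instability": "Where have you been staying recently?",
}

def suggest_question(context_flag: str | None) -> str:
    """Return an open-ended question suited to the flag, if any."""
    if context_flag is None:
        return DEFAULT_QUESTION
    return QUESTION_BANK.get(context_flag, DEFAULT_QUESTION)
```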
At Step 3, Ensure Accurate Data Are Input Into AI Tools to Mitigate Future Algorithmic Bias
If the interventions we have described are implemented effectively, they could by themselves reduce bias in the report-writing step of policing. Not only will officers prepare less biased incident reports than would otherwise have been the case, but they will also likely collect more data on and draft fuller accounts of interactions. Detailed accounts tend to be less subject to bias than shorter ones are.45
Nevertheless, interventions specific to this phase of police work could be helpful, such as ensuring that officers have sufficient motivation and time to write detailed reports. From a motivational standpoint, AI tools could include interface prompts asking officers to describe their encounters in as much detail as possible, even when they find it challenging to do so. The interface might say, for instance, “Write a report so that someone who wasn’t there could easily follow what happened. It is normal if this feels difficult to do: It means you are building your skills.”46
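Such a prompt could be triggered automatically when a draft report looks too sparse, as in the hypothetical sketch below; the word-count threshold and triggering logic are illustrative assumptions, with only the message text taken from the example above.

```python
# Hypothetical report-detail nudge: the interface encourages a fuller
# account when a draft looks too sparse to reconstruct the encounter.
# The word-count threshold is illustrative.

ENCOURAGEMENT = ("Write a report so that someone who wasn't there could "
                 "easily follow what happened. It is normal if this feels "
                 "difficult to do: It means you are building your skills.")

def review_draft(draft: str, min_words: int = 150) -> str | None:
    """Return the encouragement prompt when the draft is short on detail."""
    if len(draft.split()) < min_words:
        return ENCOURAGEMENT
    return None
```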
From a structural standpoint, police department policies could explicitly build in time for officers to complete sufficiently detailed reports before the end of their shift, which would reduce cognitive load47 and allow officers to devote their brainpower to producing descriptive step-by-step accounts of their encounters. Research has shown that removing distractions and reducing cognitive load when people are trying to accomplish effortful tasks promotes accuracy and attention to detail.47 Officers’ updating of databases with specific and accurate data and reports will help to debias AI algorithms, which will then provide outputs that serve as more accurate starting points for subsequent calls and cases.
Conclusion
We have outlined a framework that describes how officers might use AI tools in their day-to-day work. We then illustrated how bias could creep into each step of this work, creating a cycle that amplifies bias in the AI algorithms and in policing over time, ultimately exacerbating inequities in criminal justice outcomes. We have also described how interventions grounded in behavioral science research could be tailored to AI-assisted police work to mitigate bias at each phase of the cycle. AI tools hold promise for improving policing. However, they must be paired with psychologically informed training and nudges designed to mitigate bias at each step of police work if the tools are to fulfill that promise of improving public safety for all.
