By Ali Al Bataineh, Rachel Sickler, W. Travis Morris, and Kristen Pedersen
Introduction
The advent of artificial intelligence (AI) has brought about significant changes across all sectors, including law enforcement (LE). In LE, AI is already used for facial recognition, crowd monitoring, and informing crime prevention activities. As the technology’s accuracy and capabilities grow, AI will spread to other aspects of LE that raise ethical concerns, such as predictive policing and biometric profiling. This article argues that AI-generated reports should be added to the list of technological developments that require responsible integration and interpretability.
Report-Writing in LE
Law enforcement officers devote 50% of their duty time to creating and filing reports. The most common are motor vehicle accident reports, incident reports, crime reports, and arrest reports. Reports are generally recorded within an electronic Records Management System (RMS) and contain both individual, pre-defined fields and text areas for narrative observations. Reports must be concise, detailed, timely, clear, and accurate, and they must withstand legal scrutiny in the court system as unbiased, truthful representations of events. Incident details such as who, what, where, when, and why are required for investigations and analysis.
AI could automatically populate report fields. Common LE reporting fields include information about witnesses, victims, and suspects (name, age, date of birth, race, sex, and physical characteristics) and information about the incident (type of incident, location, assisting officers, and description of the scene). Witness and victim statements could be transcribed from audio or video footage using speech-to-text technology; times and locations could be populated by aligning GPS coordinates with timestamps in recordings; and descriptions of the scene could be generated by combining computer vision on bodycam footage with audio recordings. Once text is created from the audio footage, large language models (LLMs) could be trained on the types of questions each person of interest is asked in specific events to discern which report type and category each person belongs to (e.g., witness, victim, suspect). LLMs could also generate summaries of events and actions taken by LE, using training data from prior reporting to mimic the language of these narratives and to recognize how the other report fields feed into those summaries.
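As a minimal, hypothetical sketch of two of the steps described above, the Python snippet below aligns GPS fixes with a timestamp from a recording to populate a location field, and routes a transcribed statement to a report category. The field names, keyword rules, and data are illustrative placeholders; in practice the classification step would be a trained LLM or text classifier rather than keyword matching.

```python
# Sketch only: hypothetical field names and synthetic data.
from bisect import bisect_left
from dataclasses import dataclass
from datetime import datetime

@dataclass
class GpsFix:
    timestamp: datetime
    lat: float
    lon: float

def nearest_fix(fixes: list[GpsFix], event_time: datetime) -> GpsFix:
    """Return the GPS fix closest in time to an event in the recording."""
    fixes = sorted(fixes, key=lambda f: f.timestamp)
    times = [f.timestamp for f in fixes]
    i = bisect_left(times, event_time)
    candidates = fixes[max(i - 1, 0): i + 1]
    return min(candidates, key=lambda f: abs((f.timestamp - event_time).total_seconds()))

def classify_statement(transcript: str) -> str:
    """Placeholder: a trained model would assign the role here.
    Simple keyword rules stand in so the sketch runs end to end."""
    text = transcript.lower()
    if "i saw" in text or "i witnessed" in text:
        return "witness"
    if "they took my" in text or "i was hit" in text:
        return "victim"
    return "unclassified"

if __name__ == "__main__":
    fixes = [GpsFix(datetime(2024, 5, 1, 14, 2), 44.26, -72.58),
             GpsFix(datetime(2024, 5, 1, 14, 7), 44.27, -72.57)]
    fix = nearest_fix(fixes, datetime(2024, 5, 1, 14, 6))
    print("Incident location field:", fix.lat, fix.lon)
    print("Statement routed to:", classify_statement("I saw the car run the light."))
```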
Benefits of AI-Generated Reports in Law Enforcement
As Ferguson notes in The Rise of Big Data Policing (2017), AI-generated reports have clear potential to improve data analysis in LE. The result is decision-making based on objective information, increased officer effectiveness through time savings, stronger community engagement, and more holistic crime prevention programs built on an improved understanding of local communities. Given the vast amount of data collected by LE agencies, AI-generated reports can help process and analyze this information more efficiently and accurately, enabling officers to identify patterns, trends, and anomalies for investigations, understand community sentiment, and clear cases. In terms of saving time, AI-generated reports can improve officer effectiveness by automating the processing of data collected on patrol and online, allowing officers to spend more of their day on critical aspects of their work, such as community engagement and addressing complex issues, and less on data entry and report-writing.
AI-generated reports can also help LE agencies better understand the communities they serve. AI can provide the accurate and timely information that is necessary to communicate more effectively with the public. This more holistic data analysis approach may allow law enforcement executives to develop policies that aid in impactful and appropriate crime prevention.
Drawbacks and Ethical Concerns
Despite the potential benefits, there are significant concerns regarding AI-generated reports in LE. Effective and unbiased AI depends on the quality of existing data. AI algorithms, including LLMs, can perpetuate existing inequalities in LE because of biases present in the data used to train the models. For instance, an AI-generated report that relies on historical arrest data that disproportionately targets specific communities may reinforce these biases in future predictions and recommendations. Misleading or false information in the source data may also result in inaccurate analyses and reports, leading to flawed investigations, false arrests, decreased community trust, and misunderstanding of community populations.
The legal implications of using AI-generated reports in court are another potential drawback. AI-generated data can be used in the discovery process, but reports produced by opaque systems may not meet the standards of validity and reliability required for admissibility as evidence if the output process is not legitimized. As Grimm et al. note in Artificial Intelligence as Evidence (2021), the party presenting AI evidence must be prepared to disclose the (sometimes proprietary) data and development details of the system. Investing in AI-generated reporting could be detrimental to LE if these reports are deemed inadmissible for prosecution.
The Importance of Interpretability
Interpretability refers to the explainability, transparency, reliability, and auditability of an AI system, its data sources and algorithms, and the consistency of its outputs. It is essential for responsible deployment of AI-generated reports in LE. Policymakers, LE professionals, and the public must be able to understand and scrutinize the AI system’s decision-making process prior to implementation to ensure the information can be trusted and that reports are legally viable in court.
Models created for use in LE should be able to explain how they arrived at their inferences, allowing users to understand the rationale behind the output. This can be achieved through techniques such as local interpretable model-agnostic explanations (LIME) or counterfactual explanations. Additionally, AI developers should be transparent about data sources, algorithms, and assumptions, since reports are only as good as the data used for their development. This transparency helps users assess the reliability and potential biases of the AI system.
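The snippet below is a minimal sketch of the LIME idea, assuming the open-source lime and scikit-learn packages. The feature names, labels, and data are synthetic placeholders, not real report fields; the point is only to show how a local explanation lists which inputs pushed a single prediction and by how much.

```python
# Sketch only: synthetic data, hypothetical feature names.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["hour_of_day", "prior_calls_at_location", "report_length_words"]
X = rng.random((200, 3)) * [24, 10, 500]     # synthetic training records
y = (X[:, 1] > 5).astype(int)                # synthetic "follow-up needed" label

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["no follow-up", "follow-up"],
                                 mode="classification")
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")       # which fields pushed this prediction, and how much
```

Counterfactual explanations would answer a complementary question: what would have to change in this record for the model to produce a different output.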
Reliability commonly refers to how well a model generalizes when in use. This requires that the training data be a representative sample of the population the model will make inferences about. AI developers must ensure that AI-generated reports consistently produce reliable outputs. Regular audits can help confirm that the AI system is functioning as intended and identify any issues or biases that may have emerged (Hagendorff 2020). One issue is model drift, in which a model’s key metrics begin to deviate from what is expected because of changes in the larger population or, in this case, changes in the legal system. Drift can be monitored via a dashboard of visualizations with an alerting function that indicates when model re-training is necessary. Consistent and accessible AI reporting is imperative for effective integration into analysis and for creating effective LE policies.
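A minimal sketch of the drift-monitoring idea follows: compare the distribution of a model input or score in recent reports against the training-time baseline and raise an alert when the shift crosses a threshold. The Population Stability Index (PSI) cutoffs below are common rules of thumb rather than agency policy, and the data is synthetic.

```python
# Sketch only: synthetic scores, illustrative thresholds.
import numpy as np

def population_stability_index(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a recent sample."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    base_pct = np.clip(base_pct, 1e-6, None)      # avoid log(0) / division by zero
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

rng = np.random.default_rng(1)
baseline_scores = rng.normal(0.4, 0.1, 5000)      # model scores at training time
recent_scores = rng.normal(0.55, 0.1, 500)        # scores from the latest review period

psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.25:
    print(f"ALERT: PSI={psi:.2f}; significant drift, schedule model re-training")
elif psi > 0.10:
    print(f"WARN: PSI={psi:.2f}; moderate drift, monitor closely")
else:
    print(f"OK: PSI={psi:.2f}")
```

In a deployed system the same check would feed the dashboard and alerting function described above rather than printing to a console.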
Achieving interpretability in AI-generated reports is not without challenges. Some AI algorithms, particularly deep learning models, are inherently complex and challenging to interpret, even for experts in the field (Arrieta et al. 2020). There may be trade-offs between interpretability and performance, with more interpretable models potentially sacrificing accuracy or efficiency.
Despite these challenges, several strategies can help enhance interpretability, such as hybrid and ensemble models, model simplification, human-in-the-loop systems, and various visualization techniques. Hybrid models combine several models to achieve better outcomes; ensemble models are similar but add a weighting system for each sub-model. Combining the strengths of different AI approaches, such as rule-based systems and deep learning models, can improve both interpretability and performance (Keneni et al. 2019). For example, incorporating expert knowledge into AI-generated reports can help provide context and explanations for complex decisions. Alternatively, developing simplified AI models that prioritize interpretability while maintaining acceptable levels of performance can make AI-generated reports more accessible and understandable to users (Rudin 2019). Involving experts in the development, training, and evaluation of AI-generated reports through a human-in-the-loop process can ensure that the AI system’s decisions align with human values and expectations; this kind of collaboration can also help identify potential biases or errors in the system’s outputs. Finally, visual representations of the AI system’s decision-making process can help users grasp complex concepts and relationships more intuitively (Wattenberg et al. 2016). Producing user-focused visualizations is imperative for LE systems to maximize the advantages of AI.
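As a minimal sketch of the weighted hybrid/ensemble idea, the snippet below combines a transparent, hand-written rule score with a learned model’s score using explicit weights, so a reviewer can see how much each component contributed. All rules, weights, feature names, and data here are illustrative assumptions, not a real agency model.

```python
# Sketch only: hypothetical rules, weights, and synthetic training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rule_based_score(record: dict) -> float:
    """Hand-written, auditable rules; each adds a contribution capped at 1.0."""
    score = 0.0
    if record["repeat_location"]:
        score += 0.5
    if record["incident_hour"] >= 22 or record["incident_hour"] <= 4:
        score += 0.5
    return min(score, 1.0)

# Hypothetical learned component trained on two simple synthetic features.
rng = np.random.default_rng(2)
X = rng.random((300, 2))
y = (X.sum(axis=1) > 1.0).astype(int)
learned = LogisticRegression().fit(X, y)

def ensemble_score(record: dict, w_rules: float = 0.4, w_model: float = 0.6) -> dict:
    """Weighted combination; returning the parts keeps the output explainable."""
    rules = rule_based_score(record)
    model = float(learned.predict_proba([[record["feature_a"], record["feature_b"]]])[0, 1])
    return {"rules": rules, "model": model, "combined": w_rules * rules + w_model * model}

example = {"repeat_location": True, "incident_hour": 23, "feature_a": 0.7, "feature_b": 0.6}
print(ensemble_score(example))
```

Reporting the rule score, the model score, and the weights alongside the combined value is one simple way to make the ensemble’s behavior legible to non-specialist reviewers.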
Case Studies
Gaining a practical understanding of the benefits and challenges of integrating AI reports in LE requires examining specific case studies. Below are two recent examples of AI integration in law enforcement:
Los Angeles Police Department’s (LAPD) PredPol. In 2016, the LAPD implemented PredPol (now called Geolitica), a predictive policing tool that uses AI-generated reports to forecast crime hotspots so LE can allocate resources more effectively. The system analyzes three points of historical crime data—crime type, crime location, and crime date/time—to identify patterns and generate daily heat maps for patrol officers. Despite some initial success in reducing crime rates, community concerns emerged over potential racial biases and increased surveillance in marginalized neighborhoods. PredPol essentially validated existing LAPD crime prevention strategies and perpetuated already-present discrimination. The LAPD terminated its use of PredPol in 2020 due to budgetary concerns (Bhuiyan 2021).
New York City Police Department’s (NYPD) Patternizr. In 2019, the NYPD implemented an AI-driven tool called Patternizr, which helps officers identify and track crime patterns (The Verge 2019). Developed in-house, Patternizr uses machine learning algorithms to analyze crime data, such as the location, date, and type of crime, and recognizes patterns that may be overlooked by human analysts. Patternizr was trained using manually identified patterns of burglaries, robberies, and grand larcenies in the city to find relationships among them. The final models were incorporated into NYPD’s Domain Awareness System (DAS)—a citywide network of sensors, databases, devices, software, and infrastructure. All historical pairs of complaints were then processed in the cloud against 10 years of burglary and robbery records and 3 years of grand larceny data. To keep the software up to date, similarity scores were calculated and updated for new and revised complaints three times a day, and each was scored against the existing crime data before being incorporated into DAS (DSIAC 2019). The use of Patternizr has demonstrated the potential for AI-generated reports to improve situational awareness and streamline the identification of crime patterns, enabling the NYPD to respond more effectively to emerging trends and allocate resources more efficiently.
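Patternizr’s actual features and models are not fully public, so the snippet below is only a minimal sketch of the general idea of scoring how similar a new complaint is to historical complaints so that high-scoring pairs can be surfaced to analysts. The field names, weights, distance cutoff, and records are hypothetical.

```python
# Sketch only: hypothetical complaint fields, weights, and records.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def complaint_similarity(a: dict, b: dict) -> float:
    """Blend categorical matches and spatial proximity into a single [0, 1] score."""
    same_type = 1.0 if a["crime_type"] == b["crime_type"] else 0.0
    same_mo = 1.0 if a["method_of_entry"] == b["method_of_entry"] else 0.0
    distance = haversine_km(a["lat"], a["lon"], b["lat"], b["lon"])
    proximity = max(0.0, 1.0 - distance / 5.0)   # decays to 0 beyond roughly 5 km
    return 0.4 * same_type + 0.3 * same_mo + 0.3 * proximity

new = {"crime_type": "burglary", "method_of_entry": "rear window", "lat": 40.71, "lon": -73.99}
hist = [{"crime_type": "burglary", "method_of_entry": "rear window", "lat": 40.72, "lon": -73.98},
        {"crime_type": "robbery", "method_of_entry": "n/a", "lat": 40.80, "lon": -73.95}]
ranked = sorted(hist, key=lambda h: complaint_similarity(new, h), reverse=True)
print([(h["crime_type"], round(complaint_similarity(new, h), 2)) for h in ranked])
```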
A Call for Ethical Guidelines and Regulatory Frameworks
To ensure responsible integration and interpretability of AI-generated reports in LE, it is crucial to establish clear ethical guidelines and regulatory frameworks. AI developers, policymakers, and LE professionals should collaborate to ensure transparency in the development, deployment, and evaluation of AI-generated reports. This includes disclosing data sources, algorithms, and assumptions, and examining them for potential biases. It is also important that clear lines of accountability be established for AI-generated reports, including who is responsible for errors or unintended consequences that may arise from their use in LE (Jobin et al. 2019). AI-generated reports must also adhere to strict privacy and data security standards to protect sensitive information and prevent unauthorized access or misuse. Additionally, teams developing AI for LE should engage the public in discussions; a clear understanding builds trust and ensures that the technology is used in a way that aligns with societal values and expectations.
Incorporating ethical considerations into AI development best practices is crucial. Developers should use transparent algorithms that prioritize fairness, interpretability, and robustness, and should engage stakeholders in the development process. This includes regularly monitoring and evaluating AI systems and addressing biases, inaccuracies, and other issues to ensure they remain effective and ethical. AI systems should not perpetuate historical injustices or exacerbate existing inequalities in law enforcement; AI-generated reports must treat individuals and groups equitably. To do this, AI developers need to ensure diverse, representative, and high-quality data sources to minimize biases and improve algorithm accuracy (Buolamwini and Gebru 2018). Finally, AI-generated reports should respect human autonomy, ensuring that individuals maintain control over their personal data and are not subject to undue surveillance or intrusion (Crawford and Calo 2016). Law enforcement personnel should be trained on the use, limitations, and ethical implications of AI-generated reports, fostering critical thinking and informed decision-making.
AI and the Future of Law Enforcement
To ensure ethical, accurate, and transparent AI implementation, we recommend five best practices for integrating AI into LE in general, and into the creation and use of AI-generated reports more specifically:
Cross-Disciplinary Collaboration: Partnerships between AI developers, law enforcement professionals, policymakers, and social scientists to advance the understanding of AI opportunities and challenges, while serving the public good.
Public-Private Synergy: Building collaboratives between law enforcement agencies, community organizations, and private AI developers to develop and implement innovative AI tools tailored to LE needs and constraints. These partnerships would promote transparency, accountability, and trust in the validity and neutrality of AI-generated reports.
Continuous Training and Education: As AI becomes more integrated into law enforcement practices, it is essential to train law enforcement personnel to use AI-generated reports effectively and appropriately. This training should include instruction on the ethical implications, potential biases and misinformation, and limitations of AI-generated reports.
Adaptive Policymaking: Policymakers must remain vigilant and adaptive, updating AI regulations and guidelines as the technology evolves. This requires ongoing engagement with various stakeholders, including AI developers, law enforcement professionals, the court system, and community members, to identify emerging concerns and develop effective responses (Crawford and Calo 2016).
Public Awareness and Engagement: Public discussions are necessary for popular understanding of how LE uses AI. These discussions would ensure that the technology aligns with societal values and expectations, and that the process is explainable. LE must foster public trust and strive for transparency through awareness campaigns and other community dialogue.
Conclusion
AI-generated reports hold great potential for improving data analysis, officer effectiveness, crime prevention, and community engagement. However, responsible integration of AI must address inherent biases within AI algorithms, the implications of AI-generated misinformation, and the importance of interpretability. This article has outlined the challenges of using AI-generated reports in law enforcement and the need for careful implementation and evaluation. We have also discussed ethical considerations, best practices for AI implementation, and future research directions. Engaging diverse stakeholders, including law enforcement agencies, AI developers, policymakers, and community members, can address the concerns and interests related to AI-generated reports. Establishing ethical guidelines, regulatory frameworks, and strategies to enhance interpretability, while adhering to ethical principles such as fairness, justice, and human autonomy, will promote transparency, accountability, and collaboration between AI developers, policymakers, and criminal justice professionals. Ultimately, responsible and effective AI-generated reports in law enforcement, grounded in continued research, dialogue, and pilot programs, have strong potential to contribute to a safer and more equitable society.
Author Biographies
Dr. Ali Al Bataineh is the Director of the Artificial Intelligence (AI) Center at Norwich University. His research addresses the development and application of AI technology, with a strong focus on designing machine learning algorithms and models that address complex real-world challenges. Ali is passionate about harnessing the power of AI to drive innovation across a broad spectrum of industries and disciplines.
Rachel Sickler is a Senior Developer and Machine Learning Engineer at Norwich University Applied Research Institutes. Rachel’s experience includes software engineering, business analytics, natural language processing, predictive analytics, model observability and explainability, end-to-end data engineering, and machine learning engineering. Her current focus areas are social cybersecurity, criminal justice, and AI ethics.
Dr. Travis Morris currently serves as the Director of Norwich University’s (NU) Peace and War Center and the Director of NU’s School of Criminology and Criminal Justice. Travis’s research addresses information warfare, terrorism, counterterrorism, and environmental security. He has published numerous articles and chapters on information warfare, propaganda analysis, and security in academic journals.
Dr. Kristen Pedersen is the Vice President and Chief Research Officer at the Norwich University Applied Research Institutes. She is responsible for NUARI’s cyber operations and portfolio of contracts and research projects for DHS, DOD, NSA, and national critical infrastructure organizations. Kristen’s research areas include influence/information operations, frame analysis, cyberpsychology, and strategic communication.
References
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities, and challenges toward responsible AI. Information Fusion, 58, 82-115.
Bhuiyan, J. (2021). LAPD ended predictive policing programs amid public outcry. The Guardian. Retrieved from: https://www.theguardian.com/us-news/2021/nov/07/lapd-predictive-policing-surveillance-reform
Buolamwini, J., & Gebru, T. (2018, January). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability, and transparency (pp. 77-91). PMLR.
Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538(7625), 311-313.
DSIAC. (2019). NYPD’s machine learning software spots crime patterns. Retrieved from: https://dsiac.org/articles/nypds-machine-learning-software-spots-crime-patterns/
Ferguson, A. G. (2017). The Rise of Big Data Policing. New York University Press.
Grimm, P. W., Grossman, M. R., & Cormack, G. V. (2021). Artificial Intelligence as Evidence. 19 Nw. J. Tech. & Intell. Prop. 9, 84-90. https://scholarlycommons.law.northwestern.edu/njtip/vol19/iss1/2
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99-120.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.
Keneni, B. M., Kaur, D., Al Bataineh, A., Devabhaktuni, V. K., Javaid, A. Y., Zaientz, J. D., & Marinier, R. P. (2019). Evolving rule-based explainable artificial intelligence for unmanned aerial vehicles. IEEE Access, 7, 17001-17016.
Richardson, R., Schultz, J. M., & Crawford, K. (2019). Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. NYU Law Review Online, 94, 15.
Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206-215.
The Verge. (2019). New York City is using machine learning to find out where the next crimes will take place. Retrieved from https://www.theverge.com/2019/3/10/18259060/new-york-city-police-department-patternizer-data-analysis-crime