Policy makers and the legal system must often make tough decisions that affect the future. In the case of violent criminals, the decision to release an offender or keep that person incarcerated could mean life or death for potential future victims. Decision makers therefore need the right tools to make statistically accurate predictions of future outcomes.
The use of predictive modeling in policing is relatively new, yet it has been lauded as a revolutionary technique for reducing crime. The National Institute of Justice (NIJ) hosted two symposiums dedicated to the topic of predictive policing in 2009 and 2010. TIME Magazine designated predictive policing one of the best inventions of 2011. A 2012 survey conducted by the Office of Community Oriented Policing Services and the Police Executive Research Forum found that 70 percent of police agencies surveyed planned to incorporate predictive policing methods in their departments within the next five years. However, hard evidence about the effectiveness of predictive policing techniques is currently scant, though a number of NIJ studies aim to fill this gap in the next several years.
This article introduces “machine learning” approaches to prediction. “Machine learning” refers to the ability of a computer program to learn from its computations and improve its own performance. In contrast to other prediction models, in which the predictor variables are predetermined and used to predict outcomes (like parole failures) in a static way, machine-learning models build their rules from raw data and have the ability to improve their accuracy as new data are fed into the model. Machine learning is a key concept underpinning some predictive policing techniques. Just as a crime analyst would learn over time from receiving additional reports and data, so too do the computerized machine-learning models.
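As a concrete illustration of that difference, the short sketch below uses Python and the widely available scikit-learn library; the data and variable names are hypothetical and are not drawn from any of the studies discussed in this article. The point is only the workflow: the model builds its rules directly from the raw records and is simply refit as new records arrive, rather than relying on a fixed, predetermined set of predictors.

```python
# Minimal sketch (hypothetical data): a model whose rules are built from the data
# themselves and rebuilt as new records accumulate.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical historical records: each row is a case, each column a raw attribute
# (age at release, number of prior arrests, etc.); y holds the observed outcomes.
X_hist = rng.random((500, 6))
y_hist = rng.integers(0, 2, 500)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_hist, y_hist)  # learn prediction rules directly from the raw data

# As new cases and their observed outcomes come in, the model is refit on the
# enlarged data set -- the "learning over time" step a static model lacks.
X_new = rng.random((50, 6))
y_new = rng.integers(0, 2, 50)
model.fit(np.vstack([X_hist, X_new]), np.concatenate([y_hist, y_new]))
```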
Machine-Learning Models & Criminal Recidivism

Richard Berk, professor of criminology and statistics at the University of Pennsylvania, recently applied predictive modeling to the prediction of criminal recidivism. His work steps away from the more traditional predictive policing models used to target crime, typically on a geographic basis. In a recent report, Berk and his colleague Justin Bleich examine how accurately predictive models forecast re-offending behavior among parolees, comparing two machine-learning models with a regression model that has no machine-learning characteristics.
Berk and Bleich use real-world parolee data to show that machine-learning methods (specifically, a random forest method) produce more accurate predictions than traditional regression-based prediction models. There are two main reasons. First, machine-learning models can readily forecast several categories of outcome at once – for example, not re-offending, re-offending with a minor crime, and re-offending with a serious crime. Although some regression models can forecast more than two outcomes, standard logistic regression models cannot. Regression models also require a priori specification of the predictor variables, whereas machine-learning models do not; they build their rules inductively by searching large data sets exhaustively for associations between variables.
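A brief sketch of this first point, again in Python with scikit-learn and with made-up data and labels (an illustration of the general technique, not Berk and Bleich's actual models): a random forest can be fit directly to a three-category parole outcome, something a standard binary logistic regression cannot do.

```python
# Sketch (hypothetical data): a random forest fit to a three-category parole outcome.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Hypothetical parolee records; the outcome takes one of three values:
# 0 = no re-offense, 1 = re-offense with a minor crime, 2 = re-offense with a serious crime.
X = rng.random((1000, 8))
y = rng.integers(0, 3, 1000)

# The forest searches the predictors for associations with all three outcome
# categories at once; no outcome-specific predictor set is specified in advance.
forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X, y)

new_cases = rng.random((5, 8))
print(forest.predict(new_cases))        # one of the three categories per case
print(forest.predict_proba(new_cases))  # estimated probability of each category
```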
Second, machine-learning techniques allow the analyst to apply different weights to the model's errors, and thus can be more responsive to the needs of policy makers. For example, predicting a parole failure that does not occur in reality (a false positive) is different from predicting success when a parole failure does occur (a false negative). Most people would agree that a false negative (releasing a person on parole who then commits a serious crime) is more costly and more serious than a false positive (keeping a person incarcerated to the end of their sentence even though they would not have committed another serious crime). Berk’s point is that traditional modeling methods treat the two errors as equal, when in reality they differ in cost and public harm. With the information produced by machine-learning models, policy makers are in a better position to make evidence-based decisions about incarceration and parole policies.
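That asymmetry can be expressed directly when the model is fit. In the scikit-learn sketch below – again with hypothetical data and an arbitrarily chosen cost ratio, not the weighting Berk and Bleich actually used – the class of parolees who later commit a serious crime is weighted several times more heavily than the other classes, so the fitted forest errs on the side of predicting failure rather than missing one.

```python
# Sketch (hypothetical data and weights): making a false negative -- failing to flag
# a parolee who goes on to commit a serious crime -- cost more than a false positive.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.random((1000, 8))
y = rng.integers(0, 3, 1000)  # 0 = no re-offense, 1 = minor, 2 = serious re-offense

# The 5-to-1 weight on serious re-offenses is an illustrative policy choice:
# misclassifying a future serious offender is treated as five times as costly.
weighted_forest = RandomForestClassifier(
    n_estimators=500,
    class_weight={0: 1, 1: 1, 2: 5},
    random_state=0,
)
weighted_forest.fit(X, y)
```

Changing those weights changes how often the model forecasts failure, which is how policy makers' relative tolerance for the two kinds of error enters the prediction.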
Predicting the Future of Prediction

As interest in predictive policing grows, researchers and analysts should look carefully at machine-learning modeling. Based on the work of Berk and others, these predictive models perform well and, in some cases, outperform more traditional approaches to predictive modeling. Although the programming required to build machine-learning models can be sophisticated and perhaps intimidating, such models will become more accessible and understandable as the field moves forward.
Note: The authors express their thanks to Richard Berk for his consultation on this article.
James R. (Chip) Coldren Jr.
James R. (Chip) Coldren Jr. is the managing director of Justice Programs in CNA’s Institute for Public Research. He directs several law enforcement-related reform and technical assistance projects, and serves as principal investigator for several national research projects concerning the use of technology in law enforcement and corrections.
Zoë Thorkildsen
Zoë Thorkildsen is a research analyst in the Safety and Security division at CNA. She leads and supports a variety of research and training and technical assistance projects for the Department of Justice and other clients. She is co-investigator on an Office of Community Oriented Policing Services-funded research project on ambush attacks of police, serves as website coordinator and analyst for the Bureau of Justice Assistance Smart Policing Initiative, and is lead analyst for a National Institute of Justice research project on safety equipment efficacy in correctional facilities.