Abstract
Using Chicago police data, we train a machine learning model to predict the risk of being shot in the next 18 months. Out-of-sample accuracy is strikingly high. A central concern with using police data is “baking in” bias, or overestimating risk for groups likelier to interact with police conditional on behavior. Our predictions, however, accurately recover risk across demographic groups. Legal, ethical, and practical barriers should prevent using victimization predictions to target law enforcement. But using them to target social services could increase both the potential for interventions to reduce shootings and the available statistical power to detect those reductions.
© 2024 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology