The Lessons and Limits of Predicting Shooting Victimization
Virtual - https://umd.zoom.us/j/93207947099?pwd=c096Z3JrZ1FGSXVEVjFWL29PQUV1dz09
Wednesday, February 24, 2021, 11:00 am-12:00 pm

Almost 90,000 people become shooting victims each year in the U.S., costing society on the order of $100 billion. Most of this gun violence is concentrated in marginalized communities and among their most disadvantaged residents, disproportionately young Black men. Accurately predicting ex ante gun violence risk, and directing prevention efforts accordingly, has the potential to save many lives and reduce the Black-white mortality gap among young men. Algorithms excel at risk prediction, but if their output “bakes in” biases present in the input data, then using them to guide police or other criminal justice efforts could exacerbate institutional biases and incur high costs from intervening with false positives. We consider an alternative use of algorithms to prevent gun violence: re-purposing police data to predict the risk of shooting victimization rather than arrest, and using the results to direct social services. 


Zubin Jelveh is a new assistant professor at the University of Maryland with a joint appointment in the College of Information Studies (iSchool) and the Department of Criminology and Criminal Justice. Prior to joining UMD, Zubin was a research director at Crime Lab New York, a University of Chicago research institute that partners with civic and community leaders to design, test, and scale promising programs and policies to reduce violence and the harms associated with the criminal justice system. Zubin's research interests lie at the intersection of machine learning and human decision-making, in particular exploring how past human decisions shape the data that serve as input to machine learning models intended to aid future human decisions.

This talk is organized by Wei Ai