Tackling domestic violence using large-scale empirical analysis

New paper in Journal of Empirical Legal Studies co-authored by Ria Ivandić ’13 (Economics)

A woman holds a sign in front of her face that reads, "Love shouldn't hurt."
Photo by Anete Lusina from Pexels

In England, domestic violence accounts for one-third of all assaults involving injury. A crucial part of tackling this abuse is risk assessment – determining what level of danger someone may be in so that they can receive the appropriate help as quickly as possible. Risk assessment also helps the police prioritise responses to domestic abuse calls when resources are severely constrained. In this research, we asked how existing risk assessment could be improved, a question that arose from discussions with policy makers who pointed to the lack of systematic evidence on the subject.

Currently, risk assessment is done through a standardised list of questions – the so-called DASH form (Domestic Abuse, Stalking and Harassment and Honour-Based Violence) – which consists of 27 questions used to categorise a case as standard, medium or high risk. The resulting DASH risk scores have limited power to predict which cases will result in violence in the future. Following this research, we suggest that a two-part procedure would do better, both in prioritising calls for service and in providing protective resources to the victims with greatest need.

In our predictive models, we use individual-level records on domestic abuse calls, crimes, victims and perpetrators from Greater Manchester Police to construct criminal and domestic abuse history variables for each victim and perpetrator. We combine these with DASH questionnaire data in order to forecast reported violent recidivism for victim-perpetrator pairs. Our predictive models are random forests, a machine-learning method in which a large number of classification trees each classify an observation as a predicted failure or non-failure. Importantly, we take the different costs of misclassification into account: predicting no recidivism when it actually happens (a false negative) is far worse in terms of social costs than predicting recidivism when it does not happen (a false positive). While we set the cost of a false negative versus a false positive at 10:1, this is a parameter that stakeholders can adjust.
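To make the setup concrete, here is a minimal sketch of a cost-sensitive random forest in scikit-learn. This is not the authors' code: the file name, column names, and the use of class weights to encode the 10:1 false-negative cost are illustrative assumptions.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical data: one row per victim-perpetrator pair, with already-encoded
# DASH responses and criminal-history features plus a binary recidivism label.
df = pd.read_csv("pairs.csv")
X = df.drop(columns=["recidivism"])
y = df["recidivism"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Approximate the 10:1 cost of false negatives vs. false positives by
# weighting the positive (recidivism) class ten times more heavily.
rf = RandomForestClassifier(
    n_estimators=500,
    class_weight={0: 1, 1: 10},  # stakeholder-adjustable cost ratio
    random_state=0,
)
rf.fit(X_train, y_train)
```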

We show that machine-learning methods are far more effective than conventional risk assessments at identifying which victims of domestic violence are most at risk of further abuse. The random forest model based on the criminal history variables together with the DASH responses significantly outperforms models based on DASH alone. The negative prediction error – the share of cases predicted to have no future violence in which violence nevertheless occurs – is low at 6.3%, compared with 11.5% for an officer’s DASH risk score alone. We also examine how much each feature contributes to model performance. No single feature clearly outranks all others in importance; it is the combination of a wide variety of predictors, each contributing its own ‘insight’, that makes the model so powerful.
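Continuing the hypothetical sketch above, the negative prediction error and the spread of feature importances could be inspected along these lines (again illustrative, not the paper's code):

```python
import pandas as pd
from sklearn.metrics import confusion_matrix

y_pred = rf.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

# Negative prediction error: among cases predicted "no violence",
# the share in which violence actually occurs.
neg_pred_error = fn / (fn + tn)
print(f"Negative prediction error: {neg_pred_error:.1%}")

# No single predictor dominates; look at the spread of importances.
importances = pd.Series(rf.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```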

Following this research, we have been in discussion with police forces across the United Kingdom and with policy makers working on the Domestic Abuse Bill about how our findings could be incorporated into the response to domestic abuse. We hope this research acts as a building block toward greater use of administrative datasets and empirical analysis in domestic violence prevention.

This post is based on the following article:

Grogger, J., Gupta, S., Ivandic, R. and Kirchmaier, T. (2021), Comparing Conventional and Machine-Learning Approaches to Risk Assessment in Domestic Abuse Cases. Journal of Empirical Legal Studies, 18: 90-130. https://doi.org/10.1111/jels.12276 

Connect with the author

Ria Ivandić ’13 is a Researcher at LSE’s Centre for Economic Performance (CEP). She is an alum of the Barcelona GSE Master’s in Economics.

Media and behavioral response: the case of #BlackLivesMatter

Economics master project by Julie Balitrand, Joseph Buss, Ana Monteiro, Jens Oehlen, and Paul Richter ’19

Editor’s note: This post is part of a series showcasing BSE master projects. The project is a required component of all Master’s programs at the Barcelona School of Economics.

Abstract

We study the effects of the #BlackLivesMatter movement on the law-abiding behavior of African-Americans. First, we derive a conceptual framework to illustrate changes in risk perceptions across different races. Second, we use data from the Illinois Traffic Study Dataset to investigate race ratios in police stops. For identification, we apply a linear probability OLS regression on media coverage as well as an event study framework with specific cases. We find that the number of Black people committing traffic law violations falls significantly after spikes in media coverage and notable police shootings. In the latter case, we further find that the effect holds for a period of approximately ten days. We argue that these observed changes in driving behavior result from updated risk beliefs.

Game Tree. Balitrand et al.

Conclusions

Beginning with our model, we show that media-related changes in risk perceptions change the proportion of people committing crimes. Using this model, we further predict that this change differs across racial groups. More specifically, the model predicts that Blacks become more cautious in order to decrease the chance of a negative interaction with the police, while whites are predicted not to change their behavior, since the violence covered in the media is not relevant to their driving decisions.

In order to test our model, we develop a hypothesis testing strategy that allows us to disentangle police actions from civilian decisions. By considering the proportion of stopped drivers who are Black at nighttime, we remove any effect caused by changes in policing intensity or bias. Instead, we obtain a testable hypothesis that focuses only on differences in behavior between racial groups.
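As an illustration, the quantity the test focuses on could be constructed along these lines – a sketch with assumed file and column names and an assumed definition of nighttime:

```python
import pandas as pd

# Hypothetical stop-level data with a date, an hour of day, and driver race.
stops = pd.read_csv("illinois_stops.csv", parse_dates=["date"])

# Assume "nighttime" means 9pm-5am; the real cutoff is a design choice.
night = stops[(stops["hour"] >= 21) | (stops["hour"] < 5)]

# Daily share of nighttime stops in which the stopped driver is Black;
# per the identification argument, this proportion is unaffected by
# overall policing intensity.
share_black = (
    night.assign(is_black=night["driver_race"].eq("Black").astype(int))
         .groupby("date")["is_black"]
         .mean()
)
```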

To test this hypothesis, we use a linear probability model along with traffic data from Illinois, under both an event study approach and a design based on media intensity data from the GDELT Project. Both approaches confirm our model’s predictions at high significance levels. We therefore show that Blacks became more cautious in response to these events, compared with other racial groups. In addition, our robustness check on the total number of stops supports the claim that non-Blacks do not respond significantly to media coverage of police brutality toward Blacks. This leads to the conclusion that the expected proportion of Blacks breaking traffic laws falls in response to coverage of these events.
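A minimal sketch of such a linear probability model, with hypothetical file and column names standing in for the project's actual data:

```python
import pandas as pd
import statsmodels.formula.api as smf

night = pd.read_csv("night_stops.csv", parse_dates=["date"])      # hypothetical
media = pd.read_csv("gdelt_intensity.csv", parse_dates=["date"])  # hypothetical

df = night.merge(media, on="date")
df["is_black"] = (df["driver_race"] == "Black").astype(int)
df["dow"] = df["date"].dt.dayofweek
df["month"] = df["date"].dt.month

# Linear probability model: P(stopped driver is Black) as a function of
# media intensity, with day-of-week and month controls and robust SEs.
lpm = smf.ols("is_black ~ media_intensity + C(dow) + C(month)", data=df).fit(
    cov_type="HC1"
)
print(lpm.summary())
```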

An implicit assumption in our model was that as media coverage goes to zero, Blacks revert to their original level of caution. To test this, we looked at three-day intervals following each media event. We showed that after approximately ten days the coefficients are no longer significant, indicating that the media caused only a short-term change in behavior. Since this was a robustness check rather than a main focus of our model, we did not investigate it further; it is nonetheless an interesting finding that warrants future analysis.
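The decay check could look something like this sketch, with illustrative event dates and assumed column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("night_stops.csv", parse_dates=["date"])  # hypothetical
events = pd.to_datetime(["2014-08-09", "2014-11-24"])      # illustrative dates

# Days since the most recent event (999 if no event has occurred yet).
df["days_since"] = df["date"].apply(
    lambda d: min(((d - e).days for e in events if d >= e), default=999)
)

# Three-day windows after an event; dates more than 11 days out are baseline.
df["window"] = pd.cut(
    df["days_since"],
    bins=[-1, 2, 5, 8, 11, 999],
    labels=["d0_2", "d3_5", "d6_8", "d9_11", "base"],
)
df["is_black"] = (df["driver_race"] == "Black").astype(int)

# If the effect is short-lived, the d9_11 coefficient should be insignificant.
es = smf.ols(
    "is_black ~ C(window, Treatment(reference='base'))", data=df
).fit(cov_type="HC1")
print(es.summary())
```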

On a final note, we want to address the type of media we use for our analysis. Our model section considers media in a general sense. This can include, but is not limited to, social media platforms such as Twitter and Facebook, as well as more traditional media such as television and print newspapers. All of these sources cover police brutality cases at similar intensities. We use TV data for media intensity, since it reaches the broadest demographic and therefore best represents the average driver’s exposure to the topic. Media with different audience age profiles may affect some demographics more than others; for example, social media may have a greater effect on younger drivers than on older drivers. We believe this topic warrants further analysis, in addition to the topic of the previous paragraph.

Authors: Julie Balitrand, Joseph Buss, Ana Monteiro, Jens Oehlen, and Paul Richter

Statistical Racism

Nandan Rao ’17 (Data Science) has posted a simulation over on the BGSE Data Science blog to see if racial profiling really helps catch more criminals.

Source: Nandan Rao ’17

“In the real-life algorithms being implemented by police departments, as in our toy simulation, the data used to find criminals is not the data on crimes, but the data on crimes caught.”
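To make the quoted point concrete, here is a toy re-sketch of that feedback loop (our own illustration, not the code from the post): two groups commit crimes at the same true rate, but stops are re-allocated each round using data on crimes caught, so an initial profiling bias sustains itself.

```python
import numpy as np

rng = np.random.default_rng(0)
crime_rate = 0.1                    # identical true crime rate in both groups
stop_share = np.array([0.6, 0.4])   # initial profiling bias in who gets stopped
n_stops = 10_000

for round_ in range(5):
    stops = (stop_share * n_stops).astype(int)
    caught = rng.binomial(stops, crime_rate)  # crimes caught scale with stops
    stop_share = caught / caught.sum()        # re-allocate using caught-crime data
    print(round_, stop_share.round(3))
# The biased allocation persists: caught-crime data reflects where police
# looked, not where crime is, so the data keeps "confirming" the bias.
```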

Read the post and see the code he uses to produce the simulation and graphics over on the BGSE Data Science blog.
