
Skontradiction t1_jefcug6 wrote

While I think this is good news, I would still view this report skeptically for a few reasons:

  1. I’ll just start off by saying the 22% reduction in homicides is not statistically significant according to the report.
  2. There is also no statistically significant impact at the new sites. The authors lead with “all sites” because the estimated impacts at the older sites are smaller than at the newer ones, so pooling the data gets them a bigger impact value while still staying statistically significant (with the significance carried by the old sites). That seems misleading to me.
  3. Research on the impact of Safe Streets in Baltimore is mixed. Other Hopkins research has found some positive effects, but two other studies of the program have found no impact. The nationwide literature on similar programs is likewise mixed. To the authors’ credit, they note this in the report.
  4. Estimating the impact of Safe Streets is hard because sites are not chosen randomly: the program targets the areas with the most violence. Sites selected at a peak in violence will tend to improve afterward even with no intervention at all, so a measured reduction may reflect regression to the mean rather than the program (see the first sketch after this list).
  5. The authors try to get around this by creating a synthetic control group. In other words, they take a set of other areas around the city and weight their arrests, homicide stats, etc. until the weighted combination tracks each Safe Streets site as closely as possible before the intervention (see the second sketch after this list). This is a decent way to address the selection problem in point four, but the approach still has drawbacks the authors don’t report on. For example, the appendices give error bars for each synthetic control site and treatment site post-intervention, but no measure of how well each synthetic control matched its site pre-treatment, which is what would tell us how good a job they did constructing the controls. We also don’t know how the pre-treatment match is achieved: do the donor areas vary wildly and merely average out to a close approximation, or do they individually mirror the treatment site’s patterns?
  6. The nonfatal shooting results are barely statistically significant. Two of the overall effects have confidence intervals stopping exactly at -0.00, which makes me wonder whether there’s p-hacking going on: a 95% confidence interval whose bound sits exactly at zero is equivalent to a p-value of exactly 0.05 (see the third sketch after this list). I don’t live and die by p = 0.05, but it’s a flag that the analysis may have been tweaked until it cleared the threshold.
  7. Putting the data together suggests a statistically significant effect on homicides in the first four years of Safe Streets, but no impact (or a reversion) in later years of the program, inferred because the effect becomes insignificant when measured over the entire program duration. Again, these findings only apply to the old sites; the authors find no statistically significant impact at the new sites.
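
A quick way to see the regression-to-the-mean problem in point 4 is to simulate it. This is a toy sketch with made-up numbers, not the report’s data: neighborhoods with stable underlying rates, a “program” that picks the worst performers in one year, and counts that fall the next year with no intervention at all.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical numbers): 200 neighborhoods, each with a stable
# long-run shooting rate; yearly counts are just Poisson noise around it.
true_rates = rng.uniform(5.0, 30.0, size=200)
year_1 = rng.poisson(true_rates)   # counts used to select program sites
year_2 = rng.poisson(true_rates)   # counts the next year, no intervention

# Select the 20 most violent sites in year 1 -- the same way Safe Streets
# sites are chosen -- and compare their averages across the two years.
selected = np.argsort(year_1)[-20:]
before = year_1[selected].mean()
after = year_2[selected].mean()

print(f"year 1 mean at selected sites: {before:.1f}")
print(f"year 2 mean at selected sites: {after:.1f}")
print(f"apparent 'reduction': {100 * (before - after) / before:.0f}%")
# The drop appears with no program at all: sites picked at a noisy peak
# fall back toward their true rates (regression to the mean).
```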
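
For point 5, here is a minimal sketch of the synthetic-control idea, again with hypothetical data and made-up names, not the report’s method in detail: fit non-negative donor weights summing to one that match the treated site’s pre-treatment trajectory, then report the pre-treatment fit quality (e.g. RMSPE), which is exactly the diagnostic the report omits.

```python
import numpy as np
from scipy.optimize import minimize

def fit_synthetic_control(treated_pre, donors_pre):
    """Find non-negative donor weights summing to 1 whose weighted
    combination best matches the treated site's pre-treatment series."""
    n_donors = donors_pre.shape[1]
    loss = lambda w: np.sum((treated_pre - donors_pre @ w) ** 2)
    res = minimize(
        loss,
        x0=np.full(n_donors, 1 / n_donors),
        bounds=[(0, 1)] * n_donors,
        constraints={"type": "eq", "fun": lambda w: w.sum() - 1},
        method="SLSQP",
    )
    return res.x

# Hypothetical data: 8 pre-treatment years for the treated site and
# 5 candidate donor neighborhoods (rows = years, cols = donors).
rng = np.random.default_rng(1)
donors_pre = rng.poisson(20, size=(8, 5)).astype(float)
treated_pre = donors_pre @ np.array([0.5, 0.3, 0.2, 0.0, 0.0]) \
    + rng.normal(0, 1, 8)

w = fit_synthetic_control(treated_pre, donors_pre)

# Pre-treatment fit quality -- the missing diagnostic. A common summary
# is the pre-treatment RMSPE (root mean squared prediction error); a
# large value means the "control" never tracked the site to begin with.
rmspe = np.sqrt(np.mean((treated_pre - donors_pre @ w) ** 2))
print("weights:", np.round(w, 2))
print(f"pre-treatment RMSPE: {rmspe:.2f}")
```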
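
And for point 6, the reason a confidence interval ending at -0.00 is a red flag: under the usual normal approximation, a 95% CI whose bound sits exactly at zero is the same statement as a two-sided p-value of exactly 0.05, i.e. the estimate just barely clears the conventional threshold. Toy numbers:

```python
from scipy import stats

# A 95% CI is roughly estimate +/- 1.96 * SE (normal approximation).
# If the upper bound lands exactly at 0, then estimate = -1.96 * SE,
# so z = -1.96 and the two-sided p-value is exactly 0.05.
se = 0.10                      # hypothetical standard error
estimate = -1.96 * se          # effect whose 95% CI upper bound is 0.00
upper = estimate + 1.96 * se
p = 2 * stats.norm.sf(abs(estimate / se))
print(f"95% CI upper bound: {upper:.2f}")   # 0.00
print(f"two-sided p-value: {p:.3f}")        # 0.050
```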

I want this program to succeed, and I don’t think the above means it’s a failure. I’m just very skeptical of the headline findings being reported here.
