Jill Leovy’s Ghettoside is a brilliant exposition of the problem of Black-on-Black murder in South Central LA. The central concern of the book is understanding why Black men, only 6% of the US population, make up 40% of the nation’s murder victims.
The book skilfully weaves big picture thinking on why murder epidemics happen, and the social dynamics of Black-on-Black murder, together with the story of the murder of an LAPD officer’s son. I think it might be even better than David Simon’s classic Homicide: A Year on the Killing Streets, about Baltimore homicide detectives (the book that inspired the TV shows Homicide: Life on the Street and The Wire).
Mid-way through Ghettoside the action briefly turns to the work of the firearms analysis team at the LAPD, which provides a great case study in organisational dysfunction and is worth examining in detail.
Prior to the investigation that forms the main focus of the book, the LAPD had adopted the National Integrated Ballistic Information Network (NIBIN):
‘The NIBIN catalogued digitised images of bullets and cartridge casings from crime scenes and seized guns. The database could be searched by an algorithm. This allowed fast, cheap searches, matching ammunition used in crimes to individual weapons…[but]…the computer system was not as discerning as trained humans. It relied on simplified digital renderings of microscopic images produced through standardised procedures – a process that eliminated many telling nuances and contours’.
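Leovy doesn’t go into NIBIN’s internals, but the trade-off she describes – fast, cheap search over simplified renderings, at the cost of discarding detail – is easy to picture. Here is a minimal, purely hypothetical sketch in Python of that kind of search; the feature vectors and function names are my illustration, not NIBIN’s actual design:

```python
import numpy as np

def match_exhibit(query: np.ndarray, database: np.ndarray, top_k: int = 10) -> np.ndarray:
    """Return indices of the top_k database entries most similar to the query.

    Each row of `database` is a fixed-length feature vector derived from a
    digitised image of a bullet or cartridge casing. Reducing a microscopic
    image to a small vector is what makes the search fast and cheap - and
    also what throws away the 'telling nuances and contours' a trained
    examiner would rely on.
    """
    # Normalise so that the dot product becomes cosine similarity.
    db_norm = database / np.linalg.norm(database, axis=1, keepdims=True)
    q_norm = query / np.linalg.norm(query)
    scores = db_norm @ q_norm
    # The system surfaces candidates for human confirmation, not final matches.
    return np.argsort(scores)[::-1][:top_k]
```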
Leovy compares the NIBIN to the lower-tech system that had prevailed before:
‘Before, skilled technicians had taped Polaroid photos of bullets and cartridge casings to the wall and examined every microscopic dent and groove with the naked eye to match them to ammunition test-fired from the individual firearms. This low-tech method was not efficient but it yielded excellent results’.
The real problem with the NIBIN was that it had a blind spot – revolvers:
‘Revolver matches are more difficult than other types of firearm analysis…bullets are cylindrical and the grooves and scratches they bear after being fired wrap around a curved surface. In contrast, breech face markings on the flat part of a cartridge case are relatively easy for a computer to read. So while the NIBIN system was adept at matching casings to semiautomatic pistols, it had proven useless at matching bullets to revolvers’.
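One way to picture why the curvature matters (again my illustration, not a description of how NIBIN or the examiners actually worked): markings on a flat cartridge face can be compared at a single, known alignment, whereas striations wrapped around a cylindrical bullet have no obvious starting point, so a matcher has to try every rotational offset – and any distortion introduced when the curved surface is flattened into an image degrades all of those comparisons:

```python
import numpy as np

def flat_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Two marking profiles from a flat surface: one aligned comparison."""
    return float(np.corrcoef(a, b)[0, 1])

def cylindrical_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Two striation profiles from a curved surface: the rotational
    alignment is unknown, so try every circular shift and keep the best
    score. Many more comparisons, and each one is noisier if 'unwrapping'
    the curve has distorted the profile."""
    return max(float(np.corrcoef(np.roll(a, k), b)[0, 1]) for k in range(len(a)))
```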
The result of this blind spot:
‘Although the LAPD and other agencies had dutifully entered test-fired bullets from hundreds of revolvers into its database for years, by the summer of 2007, the system had never successfully matched a bullet used in an LA crime to a revolver’.
One figure puts this into context: ‘about one third of the LAPD’s seized firearms were revolvers’.
Leovy describes how the ballistics team, recognising the inadequacies of the software, created a shadow system to get this part of their job done.
‘So Hudson [who ran the firearm analysis lab] made a decision. They would bypass NIBIN. Her workers would continue submitting images to the database as required. But they also would quietly assemble their own duplicate database of test-fire exemplars from seized revolvers. This secret trove would be analysed the old fashioned way with the human eye’.
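The pattern here is a dual-track workflow: observe the formal requirement, then keep a richer parallel record that actually supports the work. A minimal sketch of that pattern (the names and structure are mine for illustration; the book gives no implementation details):

```python
from dataclasses import dataclass, field

@dataclass
class TestFireRecord:
    """One test-fired exemplar from a seized revolver."""
    weapon_id: str
    image_path: str  # full-resolution microscope imagery, kept for the human eye
    notes: str = ""

def submit_to_nibin(record: TestFireRecord) -> None:
    # Placeholder for the mandated submission, which produced only the
    # simplified digital rendering the book describes.
    ...

@dataclass
class ShadowArchive:
    """The lab's duplicate database of exemplars, analysed 'the old
    fashioned way with the human eye'."""
    records: list[TestFireRecord] = field(default_factory=list)

    def ingest(self, record: TestFireRecord) -> None:
        submit_to_nibin(record)      # the formal requirement, still observed
        self.records.append(record)  # the parallel record that does the real work
```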
The result of this painstaking work is ultimately a big break in the case at the heart of the book, leading to the identification of the murder weapon.
But this whole sequence raises the question: why are seasoned investigators having to secretly recreate an old system to bypass the failings of a newer one? (And, as this article makes clear, this is far from just an LAPD issue.)
Your knee-jerk response to this probably depends on your politics. Either this is an example of public sector mismanagement or an example of woeful underinvestment in essential services. Probably, it’s a little of both.
But this passage is more interesting as an example of what happens when software solves most, but not all, of a given problem. In particular, it shows how ‘edge cases’ – instances where subtly or wholly different rules hold – can create big flaws in an otherwise operational system. In the case of the NIBIN, a system that worked for two-thirds of cases was disastrous for the other third. It also shows how design ‘blind spots’ – when a piece of software is designed without a full understanding of the technical challenges involved – can have far-reaching consequences.
Clearly, though, this is not just a software problem: it’s also an organisational one. Big software purchases are often major decisions for organisations, creating a reluctance to admit when something has gone wrong. The decision may ultimately have been predicated on a cost saving that could only be realised if all analyses were run through the NIBIN system. This created a dynamic where the two-thirds of the system that worked couldn’t be used in isolation from the third that didn’t. In an organisational context where such decisions can’t be challenged by the people using the system, the result is dual processes, informal ways of working, and time and effort wasted observing the formal requirements.
This suggests a few important things about introducing new systems, all of which are obvious yet clearly very easy to overlook or get wrong:
(1) Talk to the people who use the system / do the job at the moment. Understand what the problems are and what they need.
(2) Make a real attempt to understand how your brilliant big picture idea could break when it encounters the messiness of reality. Understand how edge cases differ in crucial ways from the day-to-day of regular use.
(3) Introduce the system as a work in progress, not a done deal. Encourage feedback. Take critical feedback well, not defensively. Use it to make the changes that lead to a system that works.
The alternative is unhappy users, under-utilisation of a new system / continuing to use old and inefficient processes, and – most importantly – a failure to improve your system to be what it needs to be.