It’s often argued that the high false positive rate proves the system is poorly run or even useless. This is not necessarily the case. In running a system like this, we necessarily trade off false positives against false negatives. We can lower either kind of error, but doing so will increase the other kind. The optimal policy will balance the harm from false positives against the harm from false negatives, to minimize total harm. If the consequences of a false positive are relatively minor…, but the consequences of a false negative are much worse…, then the optimal choice is to accept many false positives in order to drive the false negative rate way down. In other words, a high false positive rate is not by itself a sign of bad policy or bad management. You can argue that the consequences of error are not really so unbalanced, or that the tradeoff is being made poorly, but your argument can’t rely only on the false positive rate.
— Ed Felten, "Why So Many False Positives on the No-Fly List?"
(Bolding my own.)
This quote is really about the No-Fly List, whose purpose is to help airlines identify who is not allowed to fly. False positives have come up at work lately in the context of catching "bad people". In our case, there are great differences of opinion about whether the false positives are relatively minor, but we all agree that false negatives are very bad.
A concern about the false positives is that a lot of time and resources are spent looking into the possible matches only to determine that they are not really a "bad person". The more false positives we get, the more we doubt the usefulness of the tools we have for identifying "bad people".
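To make Felten's point concrete, here is a rough sketch of the arithmetic with entirely made-up numbers. The score distribution, the prevalence of "bad people", and the costs below are all assumptions for illustration, not a model of any real screening system. It just computes total expected harm at a few flagging thresholds, assuming that missing a real "bad person" is vastly more costly than investigating an innocent one.

```python
# Rough sketch of the false positive / false negative tradeoff.
# Every number here is invented purely for illustration.

import random

random.seed(0)

N_PEOPLE = 100_000
BAD_RATE = 0.001              # assumed: 0.1% of people are actually "bad"
COST_FALSE_POSITIVE = 1       # assumed: cost of clearing an innocent match
COST_FALSE_NEGATIVE = 10_000  # assumed: cost of missing a real "bad person"

# Hypothetical screening scores: innocents cluster low, "bad people" higher,
# with plenty of overlap (the overlap is what forces the tradeoff).
people = []
for _ in range(N_PEOPLE):
    is_bad = random.random() < BAD_RATE
    score = random.gauss(3.0 if is_bad else 1.0, 1.0)
    people.append((score, is_bad))

def total_harm(threshold):
    """Total harm if we flag everyone whose score exceeds the threshold."""
    fp = sum(1 for s, bad in people if s > threshold and not bad)
    fn = sum(1 for s, bad in people if s <= threshold and bad)
    return fp * COST_FALSE_POSITIVE + fn * COST_FALSE_NEGATIVE, fp, fn

for t in [0.0, 1.0, 2.0, 3.0, 4.0]:
    harm, fp, fn = total_harm(t)
    print(f"threshold={t:.1f}  false positives={fp:6d}  "
          f"false negatives={fn:3d}  total harm={harm}")
```

With these made-up costs, the harm-minimizing threshold flags tens of thousands of innocent people in order to catch roughly a hundred real ones. That is exactly Felten's point: the false positive rate by itself tells you nothing about whether the tradeoff is being made well.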