Earlier this month, when Facebook’s sarin nerve agent scare hit the news, it highlighted a reality often overlooked in emergency planning: the most common end state of an emergency is a false alarm. By the end of the day, the apparent nerve agent threat was deemed a false positive. In most cases, such situations can be de-escalated by stopping, taking a breath, and checking that the indications of a threat are consistent with all of the evidence at hand. Once an organization’s emergency response plan is activated, the most important goal for everyone involved is a return to normal operations.
False alarms and false positives are the most common reason that emergency action or response plans are initiated. But what happens when a plan is activated and there is no clear, definite endpoint to the emergency? The answer is ambiguous unless there is a holistic understanding of the threat and the decision-makers are fully informed of the best possible way to return to organizational normalcy. Facebook, from the outside looking in, missed the mark on this point. Sarin is not the most likely conclusion even when an identification system produces it as an answer. If it looks like a duck, walks like a duck, and quacks like a duck, but one system says it’s a dog, what is the most likely conclusion?
Simply stated, chemical identification systems are only as good as the library they reference. Nearly all chemical detectors compare a sample’s measurement against a database, or library, looking for similarities between the measurement and the precursors, or even whole compounds, that make up a threat. In the case of sarin, which has been produced all over the world, no two batches test exactly the same. Facebook’s alarm was most likely a false positive because the detector simply identified a compound that fell within the parameters for sarin in its library. Events like this happen every day, in every country, and in every location where chemical identification takes place. Until a better method of identification is invented, the best option is to ensure human involvement: confirming plausibility, verifying legitimate threats, and de-escalating and weeding out false positives at an appropriate point in the decision cycle.
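The library-matching logic described above can be sketched in a few lines. This is a deliberately simplified illustration, not any real detector’s algorithm: the toy signatures, the cosine-similarity scoring, and the threshold value are all assumptions chosen to show how a benign compound absent from the library can still score as the closest match to a threat entry.

```python
# Illustrative sketch of library-based chemical identification.
# A measured "spectrum" is compared against reference signatures,
# and the closest match above a threshold is reported -- even when
# the true compound simply isn't in the library at all.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length signal vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy reference library: compound name -> simplified spectral signature.
# (Values are invented for illustration.)
LIBRARY = {
    "sarin":       [0.9, 0.1, 0.4, 0.0],
    "pesticide_x": [0.8, 0.2, 0.5, 0.1],
}

def identify(sample, threshold=0.95):
    """Return (best_match, score) if the score clears the threshold,
    otherwise (None, score)."""
    best = max(LIBRARY, key=lambda name: cosine_similarity(sample, LIBRARY[name]))
    score = cosine_similarity(sample, LIBRARY[best])
    return (best, score) if score >= threshold else (None, score)

# A benign compound that isn't in the library can still land within
# the sarin entry's parameters -- a false positive.
sample = [0.88, 0.12, 0.42, 0.02]
name, score = identify(sample)
print(name)  # reports "sarin" despite the sample being something else
```

The point of the sketch is that the detector never answers “unknown compound, not in library”; it answers with the nearest entry it has, which is exactly why a human plausibility check downstream matters.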
Adding technical expertise to an organizational response plan builds a check and balance into the threat management plan. It increases the speed at which companies recover and decreases lost revenue. In effect, it places a facilitator inside larger units to ensure the “correct” answer reaches the right people as quickly as possible. Unfortunately, most identification systems are not effectively tied to someone with a complete understanding of the possible threats their users face. To be truly successful in the emergency management arena, organizations need expert human involvement that considers all of the relevant factors and arrives at the correct decision; so that while one system might say it’s a dog, if it has wings, feathers, and webbed feet, quacks, and is swimming in a pond, it must be a duck!
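The “duck test” above can be framed as a simple plausibility check layered on top of a detector alarm: before escalating, count how many corroborating signs of the identified threat are actually present. The sign names and the majority-style decision rule below are illustrative assumptions, standing in for the judgment a human expert would apply.

```python
# Illustrative plausibility check: weigh contextual evidence against a
# single system's identification before treating the alarm as real.
# Sign names and the 50% decision rule are invented for illustration.

def plausibility_check(observations, expected_signs):
    """Return 'corroborated' if at least half of the contextual signs
    expected for the detected threat are present, else flag a likely
    false positive for human de-escalation."""
    matches = sum(1 for sign in expected_signs if sign in observations)
    ratio = matches / len(expected_signs)
    return "corroborated" if ratio >= 0.5 else "likely false positive"

# Contextual signs one would expect to accompany a real sarin release
# (hypothetical list for this sketch).
SARIN_SIGNS = {"casualties_with_symptoms", "liquid_residue",
               "multiple_detectors_alarm"}

# One detector alarms, but nothing else fits the picture.
observed = {"single_detector_alarm"}
print(plausibility_check(observed, SARIN_SIGNS))  # likely false positive
```

A rule this simple is not a substitute for expertise; it is a placeholder for the kind of structured question a fully informed responder asks before an organization commits to a full emergency response.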