“Why Plan B Often Works Out Badly.” An interesting explanation of risk management from an MSNBC commentator, March 18, 2011. Two quotes follow:
Engineers used to talk about guarding against the “single point of failure” when designing critical systems like aircraft control systems or nuclear power plants. But rarely does one mistake or event cause a catastrophe. As we’ve seen in Japan, disaster is usually a function of multiple mistakes and a string of bad luck, often called an “event cascade” or “propagating failures.”
Defending against and preparing for such event cascades is a problem that vexes all kinds of systems designers, from airplane engineers to anti-terrorism planners. There’s a simple reason, according to Peter Neumann, principal scientist at the Computer Science Lab at SRI International, a not-for-profit research institute. Emergency drills and stress tests aside, Neumann said, there is no good way to simulate a real emergency and its unpredictable consequences. Making matters worse is the ever-increasing interconnectedness of systems, which leads to cascading failures, and the fact that preventative maintenance is a dying art.
Thanks to Anne Strauss for calling it to my attention.
Of general interest: today's posting (March 20) by blogger Phil Palin on the Japan disasters.