FEMA Is Not Learning From History — what a surprise!


I have to admit it is fun saying "I told you so." When I say it, no one pays much attention; but now that FEMA's Inspector General has addressed the problem, maybe people will think about it. The new report, issued today, is titled Lack of After-Action Reviews Hurts FEMA Knowledge Base; the specifics are in OIG-11-32, dated January 2011.

As noted in the publication HSToday, FEMA has not been conducting after-action reviews for all disasters, contrary to the agency's own policy.

In addition, FEMA has been unable to distribute previously gathered information on lessons learned and best practices due to technology failures and limitations, according to the IG report, "FEMA's Progress in Implementing the Remedial Action Management Program."

"FEMA stood up its Remedial Action Management Program in July 2003 to identify lessons learned and best practices for dealing with disaster response and recovery operations. The program aims to identify problems and limitations encountered by FEMA in response to a natural disaster or terrorist incident and record information on overcoming those challenges in a database for FEMA personnel to explore. However, FEMA personnel have not consistently produced this information after every disaster, as required by the policies of the RAMP. As such, FEMA personnel have lost opportunities to share knowledge with their colleagues…. Failing to conduct after-action reviews limits the lessons learned and best practices generated by the agency, preventing FEMA personnel from learning from the experiences of their colleagues. As a result, the vast majority of FEMA personnel cannot access historical data on lessons learned and best practices." [Their emphasis.]

In my opinion, the investigation did not go far enough. Although the RAMP program is important, there are broader concerns about FEMA's failure to read and learn from historic disasters. The lack of information about the major and catastrophic disasters of past decades is a glaring omission. From 30+ years of experience, I would note several tasks FEMA has failed to perform for many years: preparing candid and actionable after-action reports, conducting independent field investigations after major disasters, and documenting case studies. Efforts to store, analyze, synthesize, and then share findings have also been inadequate. Regarding the recovery phase, for example, neither a body of theory nor a knowledge base currently exists. FEMA is more than 30 years old; it is time to deal with these matters.

For many years, I have been working on various charts, reports, and books that deal with the need to know the history of emergency management and to learn from it. I expect to have more to say on this topic later.  Your comments are invited.

2 thoughts on "FEMA Is Not Learning From History — what a surprise!"

  1. I’d like to see more of a multi-agency, multi-discipline approach to after action reviews for each disaster. Maybe each jurisdiction or discipline could still conduct its own hotwash to address internal operations issues, but the results of those efforts should be rolled up into a comprehensive AAR for the disaster, including significant input from the federal, state, and local perspectives. This will hopefully foster improved collaborative problem-solving and inject more accountability into a process well-known for lack of consistent follow-through at all levels of government. It’s not a new concept (and has happened quite a lot in various settings I’ve attended, including post-disaster and post-exercise reviews as well as in the old IHMT/HMST process for identifying mitigation opportunities), but it could serve as the national standard for process improvement in emergency management.

    Another challenge is finding a way of measuring performance (best practices and not-so-good practices) in a consistent way between regions and disasters. There has been plenty of guidance on the elements of an effective response and recovery effort, and it’s tempting to identify one scorecard to be applied consistently to everyone, and every disaster, across the board. Quantifying success is not easy, but without benchmarking of some sort it is difficult to track progress from year to year. This is probably more appropriate with baseline capabilities like communications, timeliness of service delivery, etc. Still, a scorecard will not adequately address those “special moments” of exceptional success or failure that accompany every major disaster and which often define the character of a given event in our collective memory. Not everything is reducible to a number, and to not offer qualitative criticism on top of quantitative feedback would do a disservice to the profession.

    Anyway, sorry for the ramble. Fewer AARs per event, more collaboration (including follow-through), and the right mix of quantitative and qualitative performance measurement. That's what I'm advocating.
