Although this report focuses on FEMA’s homeland security grants, some of it may be of general interest. The new report (138 pp.) is titled Improving the National Preparedness System: Developing More Meaningful Grant Performance Measures. It was prepared by the National Academy for Public Administration, June 2012.
The U.S. Congress asked an expert panel of the NAPA to assist the FEMA Administrator in studying, developing, and implementing quantifiable performance measures to assess the effectiveness of homeland security preparedness grants. The Academy Panel focused the scope of this study on the State Homeland Security Grant Program (SHSGP) and Urban Areas Security Initiative (UASI), as these are the two largest of FEMA’s homeland security grant programs.
The Panel found that measuring the outcomes of these grants poses two challenges:
(1) the preparedness system’s greatest strength—conducting efforts in an integrated fashion that blends resources from multiple sources—is also its greatest weakness from a performance measurement standpoint; and (2) the federal government has not developed measurable standards for preparedness capabilities to guide the performance of the states and urban areas receiving these grants.
The Panel recommended a set of measures that collectively begin to address the effectiveness of the two grant programs. These measures have three parts:
Part 1: Effective and Targeted Grant Investments – These measures examine the elements that are needed to make sure that grant investments are targeted to priorities and effectively carried out.
Part 2: Context Measures – While not performance measures per se, these provide meaningful context to help understand and improve the execution of the grant programs.
Part 3: Collaboration Measures – This part discusses measures, recommended by the Panel, by which FEMA should assess preparedness collaborations in order to capture an important facet of grant performance.
In addition to the recommendations for performance measures, the Panel offers several recommendations to FEMA that would strengthen the performance of these grants. These include pairing quantitative with qualitative measures, starting the grant cycle earlier, communicating performance results more broadly, institutionalizing the nationwide plan review, and assessing how states and urban areas adapt to the decrease in the number of federally funded UASIs and the decline in funding.
To me, the three types of measures seem predictable. The last paragraph mentions the all-important larger context considerations.
A major part of the problem is the lack of rigor in judging who gets the grants in the first place. Applying a single yardstick after the fact seems simply to perpetuate the status quo of often less-than-effective spending (e.g., UASIs simply buying more and more equipment).
An accreditation program somewhat like that used by colleges and universities makes better sense to me. The institution develops a plan that considers its strengths, weaknesses, and threats. The plan (self-)identifies improvements the institution needs to make, the steps it will take to make them, and the performance measures it will use to gauge progress. This plan is then reviewed by a board of visitors, which accredits the institution based on the accuracy of its self-assessment, the appropriateness of the actions proposed, and the usefulness of the measures. The board of visitors almost always provides feedback on the spot so that the plan can be improved. It also specifies a time for re-review – no institution is accredited forever (usually for one to five years, if accredited at all).
This concept could be applied to the grants process as follows. Grants would be available only to accredited entities (states, UASIs, …). This gets the gov’t out of the sticky business of developing “one size fits all” performance measures. Instead, FEMA (or another grant-making agency) establishes boards of visitors using recognized experts from both the public and private sectors. The criteria for accreditation are actually much more easily developed than after-the-fact performance measures; they are implied by what I’ve noted above.
Grant applications would be judged by their fit to the accredited entity’s own plan and by the expected impact of funding (which would have been independently determined by the board of visitors). As an example, an entity would be funded to buy satellite phones only if that purchase were part of its own master plan. If the impact of buying those phones were less than the impact of some other accredited entity’s application (e.g., buying equipment to quickly put in a temporary seal if a critical dam failed), then the satellite phones would have lower priority.
It’s great that the problem has been recognized; let’s try to be more innovative in our solutions!
Thanks, John. Those are good suggestions.