No long-term business endeavor can be sustained without regular, structured assessment. Emergency response is no different; in fact, accurate assessment may matter even more there. Emergency response does not generate income, nor is generating income its goal. It therefore differs from private business in that no simple bottom-line fiscal evaluation exists.
It is quite possible for a private business to generate profit while not running at its most efficient level. The goal is often to make money, not to provide the absolute best product. An emergency response organization, by contrast, has the goal of doing its best. It is evaluated not only on what it does, but also on its capability to provide expertise and care that may never be needed. How do you assess an organization's ability to do something it does not do regularly, if at all? How do you assess its ability to do something it has never done?
Capability Assessment for Readiness and Emergency Response
The answer to those questions lies in what the Federal Emergency Management Agency (FEMA) calls the Capability Assessment for Readiness (CAR). While it may go by different names, the emergency response community uses capability assessments as the standard for determining the readiness of responders.
Most people will say that our responders are doing their best. This is true, in that they are doing their best within the systems they are given to operate in. Would they do better if those systems were assessed more accurately, modernized regularly, and built on the best that current technology has to offer to make responders safer and more efficient? If they were striving to be better and reach higher goals on a consistent basis?
Why would responders not be aiming to reach higher goals? Perhaps they are being told, “You’re doing fine. You passed your assessment.”
The issue is most certainly not with the effort of our responders, but rather the manner in which they are assessed. Accurate assessments would change the objectives and give our responders the tools they need to continue to improve and be as safe as they deserve to be.
Our responders are put in dangerous situations on a consistent basis. Unlike in business, they are required to make life or death decisions in an instant. They are well certified and trained, and use the equipment they have to the best of their abilities. But they are still injured and killed far too often. Police, Fire, EMS, Hazardous Materials Technicians and many others accept the risks associated with their jobs as a part of daily life.
Whether or not they are being given the best chance to do their jobs as safely and efficiently as possible rests on our ability to provide adequate and accurate assessments.
The Problems with Internal Emergency Response Assessments
Most public response agencies and jurisdictions do internal assessments. They use the tools and standards set out by agencies such as FEMA, the International Association of Emergency Managers (IAEM) and the National Fire Protection Association (NFPA). While there are positive aspects to these assessments, this kind of assessment also has many faults. First and foremost is that most internal assessments are often nothing more than a regulatory compliance check.
Meeting regulations is a good thing: it sets a minimum. But there is so much more that responders can do. Best practices very commonly exceed minimum regulatory standards by a wide margin. Standards are developed over time, so they lag behind the most current technologies, and they are usually median-level objectives. At best, meeting standards might put you in the middle of the pack.
Internal assessments also have another common aspect that can limit the effectiveness of the team in the long run. They very often do nothing but measure the ability to meet internal standards. If you continue to measure yourself against internal standards only, you will fall behind the curve. You will be meeting outdated goals and objectives.
The most troubling aspect of organizational assessments, however, is how they tend to become a chance for middle management to prove its capability to senior management. The objective should be to evaluate capabilities accurately by identifying shortcomings and creating corrective actions. Instead, it is not at all uncommon for managers to prepare responders to excel in the assessment itself: scenarios are practiced beforehand, and personnel are scheduled so that more responders are on hand than normal. The goal is positive results. The results are false positives.
What ensues is an inaccurate assessment that overstates the response capabilities. Gaps that exist will not be filled. Personnel will not get training they need. They will not get tools or equipment they need. Senior management is happy that their responders are so capable. Middle management will be happy they are such capable leaders.
The Problems with Third Party Emergency Response Assessments
Outside or third-party assessments can alleviate some of those issues; however, they come with issues of their own:
- Outside assessors lack the knowledge of the organization that internal assessors can have.
- Outside assessments can also stress meeting minimum statutory or regulatory standards rather than best practices.
- Outside assessors often speak only with management-level personnel.
- Outside assessors often do not get information from ground-level responders who have first-hand knowledge of the organization's capabilities.
Many consulting firms that perform assessments lack the focus needed to serve specific agencies well. They assess a wide range of disciplines using templates that may not measure the true needs and capabilities of each. Fire response differs from medical response, which differs from public safety response and other types of emergency response.
So How Do You Assess Emergency Response Preparedness?
In his paper, The Problem of Measuring Emergency Preparedness: The Need for Assessing "Response Reliability" as Part of Homeland Security Planning, Brian A. Jackson makes the point that to truly assess response capabilities, we should consider not only what a response agency is capable of, but also how reliably it can respond in a given manner. An agency might be able to meet specific response criteria, but there is a difference between meeting them 30 percent of the time and 80 percent of the time. Maybe it can meet them only during daytime shifts, Monday through Friday. There can be many reasons an organization meets specific criteria only at certain times; staffing differences, traffic, and weather are a few of the many variables.
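Jackson's reliability idea lends itself to a simple calculation. The sketch below, with entirely hypothetical incident data and an assumed eight-minute response-time criterion (neither comes from his paper or any agency standard), shows how breaking reliability down by shift can expose the "we can do it, but only on day shift" problem:

```python
# Hypothetical sketch: estimating "response reliability" from incident logs.
# The shifts, times, and 8-minute target below are invented for illustration.
from collections import defaultdict

TARGET_MINUTES = 8.0  # assumed response-time criterion

incidents = [
    # (shift, response_time_minutes)
    ("weekday_day", 6.5), ("weekday_day", 7.9), ("weekday_day", 8.2),
    ("weekday_night", 9.1), ("weekday_night", 7.5),
    ("weekend", 10.4), ("weekend", 8.0), ("weekend", 12.2),
]

def reliability(times, target=TARGET_MINUTES):
    """Fraction of incidents in which the target criterion was met."""
    if not times:
        return 0.0
    met = sum(1 for t in times if t <= target)
    return met / len(times)

# Group response times by shift; the per-shift breakdown is what reveals
# criteria that are met only at certain times.
by_shift = defaultdict(list)
for shift, minutes in incidents:
    by_shift[shift].append(minutes)

overall = reliability([m for _, m in incidents])
print(f"overall: {overall:.0%}")          # 4 of 8 incidents met the target
for shift, times in sorted(by_shift.items()):
    print(f"{shift}: {reliability(times):.0%}")
```

An agency that reports only the overall figure would hide the weekend gap entirely; the per-shift view is what turns a capability claim into a reliability claim.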
So what is the answer to the dilemma of emergency response capability assessment?
One thing most agree on is that a comprehensive cycle of drills, training, and exercises is required. Since we do not know when and where emergencies will happen, it is difficult to observe actual responses during an assessment. Large organizations that respond to a high number of incidents can use actual incidents, and an analysis of recent incidents can be included for most agencies if accurate incident reports are available. Smaller organizations need to rely on drills and exercises, but the results of these should still be critically evaluated.
Accurate Emergency Response Capabilities Assessments
There are some key elements to obtaining an accurate capability assessment. The main consideration in deciding how to perform an assessment is to start by asking, "Why are you performing an assessment?"
The reason is to accurately determine to what extent your organization can respond to the given situations they might face.
There are many levels of situations that must be considered, each with separate approaches – how do they operate in routine emergencies, what are their capabilities during major emergencies and how will they perform in a worst case scenario?
These measurements must be compared against minimum standards, organizational goals, regulatory requirements, best practices, and the newest technologies. Given all of this information to compare themselves against, organizations can decide on a proper course of action to reach where they want to fall on the scale from minimum standards to the best that modern technology can afford.
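That scale can be made concrete as a tiered benchmark for each measured parameter. The tier names and numbers below are invented for demonstration; real benchmarks would come from regulations, consensus standards, and current best practice for the discipline being assessed:

```python
# Illustrative sketch only: tiers and values are hypothetical, not drawn
# from any actual regulation or standard.

# Ascending benchmarks for one metric, e.g. responders assembled
# within 10 minutes of an alarm.
benchmarks = {
    "regulatory_minimum": 4,
    "organizational_goal": 6,
    "best_practice": 8,
    "best_modern_technology": 10,
}

def placement(measured, tiers):
    """Return the highest benchmark tier the measured value meets.

    Assumes the tiers dict is ordered from lowest to highest level.
    """
    met = [name for name, level in tiers.items() if measured >= level]
    return met[-1] if met else "below minimum"

print(placement(7, benchmarks))  # meets the goal, short of best practice
```

Reporting where each capability falls on such a scale, rather than a pass/fail against the minimum alone, is what lets an organization choose how far up the scale it wants to move.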
Assessments should be highly focused. There are so many different parameters specific to each discipline and each jurisdiction. Individual attention to detail is important. All levels of personnel should be included.
So many times I have seen mid-level managers and above involved in an assessment of what responders can do, without the representation of a single responder! Several responders should have the opportunity to give honest and anonymous information about what they can and cannot do. Assessment reports tend to be very dry and very predictable; it is sometimes hard to believe that someone paid for that opinion. Thank you, Mr. Obvious.
Taking the SMART Approach to Emergency Response Assessments
Assessments should be SMART (Simple, Measurable, Attainable, Realistic, and Timely). This is a slight modification of the traditional SMART objectives: I have changed the first component.
Rather than have the "S" represent "Specific," I prefer to fold specificity into some of the other components. More important than being specific, assessment results should be Simple. Simple is best described as focused, and focus is what makes results meaningful.
The M component is Measurable. Specific parameters should be reported in measurable formats. (I told you that specific was incorporated.)
The A is Attainable. In the assessment world, attainable has a dual meaning. First, measure capabilities against specific metrics that can actually be attained; the Mayberry FD will not be able to attain all the same capabilities that the FDNY will. Second, make the objectives gradual, so that they can be attained on a consistent basis, building success on success.
The R is for Realistic. The assessment should reflect what the organization can do on a consistent basis. Set goals that the organization has the budget for.
Finally, T is for Timely. Set deadlines for improvement on specific objectives: not a single deadline, but many. If you are going to drive from New York to San Francisco, you don't set out with just one final destination. You route your trip through cities between New York and San Francisco and set a time to reach each one. If you leave on Monday and wish to arrive by Saturday, you might want to be in St. Louis by Wednesday and Denver by Friday. Short-term goals build morale as each goal is reached and celebrated. Success is built on success.
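The graduated-deadline idea can be sketched as a simple data structure: one finding from the assessment, a series of dated milestones, and a lookup for the current short-term goal. The finding, dates, and milestone wording below are all hypothetical:

```python
# Minimal sketch of graduated deadlines for a single assessment finding.
# The finding, dates, and milestones are invented for illustration.
from datetime import date

objective = {
    "finding": "Hose deployment exceeds target time on night shift",
    "milestones": [  # kept in chronological order
        (date(2025, 3, 1), "Complete refresher training for night crews"),
        (date(2025, 5, 1), "Re-drill; meet target in 60% of attempts"),
        (date(2025, 8, 1), "Re-assess; meet target in 90% of attempts"),
    ],
}

def next_milestone(obj, today):
    """Return the first milestone not yet due: the current short-term goal."""
    for due, task in obj["milestones"]:
        if due >= today:
            return due, task
    return None  # all milestones passed; time to set a new objective

due, task = next_milestone(objective, date(2025, 4, 10))
print(due, "->", task)  # in April, the May re-drill is the current goal
```

Each milestone reached is a success to celebrate before the next deadline comes into view, which is the morale-building effect the road-trip analogy describes.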
The result of any emergency response assessment should be the identification of tools that can make meaningful change, quantifiable results that meet the SMART test, and a roadmap to graduated improvement.