

Our fear of getting it wrong stops us from getting it right

When I started working in the fire service, things weren’t exactly progressive. Most departments weren’t headed by true leaders, and even fewer people were working to change the culture and overall approach to service delivery. Where once we operated under the banner of ‘150 years of tradition unimpeded by progress’, the emergency services are now doing incredible things and the future looks bright.

Despite all of this recent advancement, there’s still room for some polish, particularly when it comes to the integration of technology into our organizations. Over the years we’ve had the good fortune to speak with responders from across the globe, and we’ve seen a common theme emerge: we’re too focussed on the possibility of making the wrong choices around technology. This preoccupation with avoiding mistakes isn’t only unhealthy, it’s also out of step with the quick adoption of new and innovative solutions by those in other fields.

As an industry, we need to realize that this mentality almost always leads to friction and ultimately slows our rate of progress. New initiatives never get off the ground and momentum stalls. Not only does this thinking slow down our own organization’s rate of advancement but it contributes to less innovation throughout our industry. If we’re not pushing the boundaries, we’re not pushing others and that doesn’t serve anyone well.

To be at the forefront of technology means we’re going to make mistakes. The key is anticipating this and being flexible enough to quickly move past whatever issue might arise. As first responders, we need to better embrace technology, we need to incorporate the latest and greatest hardware and software into mission critical roles and we need to be OK with the possibility that it won’t always unfold perfectly.

Regardless of the problem you’re working to solve, you’ll almost always be better off with a solution that ‘mostly’ works vs. the one that you’re still ‘analyzing’. So stop focussing on what might go wrong and look instead at the huge potential upside of being on the bleeding edge. Your organization will be better for it and so will the citizens you’re tasked to protect.

Connect early and often for more effective response.

In the early stages of a large-scale emergency incident, things move quickly. The volume of information (and misinformation) builds with each passing second. Responders grow impatient waiting for direction, and external factors, like third-party agency requests, begin to mount. The more time that passes between the start of an incident and the response, the greater the volume of information that will need to be processed. Add in variables such as equipment availability, staffing, mutual aid considerations, weather and ongoing changes to the incident landscape, and things can get out of hand in a hurry. At some point, the sheer volume of incident information will exceed the processing power available to senior officials. Decision fatigue becomes a factor, response is delayed and the ‘knowns’ about an incident begin to lag behind the actual on-the-ground situation. The larger or more urgent the response, the faster this information will build up.

By focussing more energy on connecting our key decision makers in the early stages of an incident, and on a regular basis thereafter, we can greatly enhance the effectiveness of our response to any type of incident. So how do we do this?

At the earliest opportunity, put key decision makers on the phone for a fast call. Think less than 3 minutes. Every time senior officials connect to review the evolution of an incident, inputs are gathered, analyzed and fed into an early stage response plan. This series of actions essentially resets the meter on the amount of data that can be managed before overload sets in – a critical step in the early stages of an incident when management resources are limited. It’s akin to dividing responsibility for resources under ICS to maintain span of control.

Repeat the calls on a regular basis (with the interval dictated by the nature of the event) to ensure the incident never escapes our ability to manage it effectively. As the incident matures and more resources become available to manage the response, the frequency and/or necessity of these calls will be reduced as more formal management structures replace the need for quick, informed decision making.

Exercise Operational Plans: Fail often & different.

We have the great privilege of meeting with small and large Emergency Management Agencies from across North America, and we spend a good amount of time getting to know their capabilities. Despite the best of intentions, some of these agencies will fail to meet their mandate during a mid to large scale incident.

In most cases, a significant amount of responsibility for the failure can be attributed to operational testing that is either non-existent or too lenient.

Failure needs to be an acceptable option

Testing the operation of an EMA’s capabilities is a big deal. It costs money, involves multiple agencies and significant staff hours. These high profile tests often attract political figures and there’s real pressure on senior staff to put on a good ’show’. Given these circumstances, who in their right mind would want to engineer a test to implode their entire operation? Turns out not too many people.

So our ’mock’ scenarios run smoothly. Sure, a few little hiccups are injected for good measure, but senior staff know these hurdles can be cleared with little effort. Another successful exercise is wrapped up. Staff debrief, talk about little things that could be done better and then compliment each other on a job well done. While it might feel positive at the time, ineffective testing only serves to set us up for trouble come game day. As organizations, we need to start valuing failure for the positive impact it can have on our performance. More importantly, we need to educate outsiders about how failure can be leveraged to increase our capacity to handle big incidents.

Testing with Teeth.

Testing an EMA’s capabilities should (from time to time) be a brutal and relentless assault. Your best staff should be investing time working in opposition to the organization, trying to find new and innovative ways to derail operations. The more problem areas that can be identified in advance of an incident, the better your team will perform. On occasion, failure should be the goal.

While everyone likes to talk about the benefits of failure, few groups embrace it. Everyone in the organization needs to understand that a reasonable portion of operational tests should result in failure. This shouldn’t be an embarrassment but rather a point of pride. It is through this failure that we uncover weakness and gain insight into the steps needed to enhance our operational capabilities. Some will argue that pushing staff to their limits isn’t realistic or that every eventuality can’t be prepared for. Perhaps, but after years of working in emergency services, we know that the impossible happens with shocking regularity and that lobbing ’softball’ operational tests at your team will eventually spell trouble. In short, there is no amount of money, infrastructure or technology that makes an EMA immune from unforeseen events.

Here are some ideas from one particularly effective EMA that often pushes its team to extremes. The elements below were all compounded into one recent exercise that helped this group identify a number of areas to work on. How would your agency have coped?

  • Random timing

    Don’t test your EOC’s capabilities at a scheduled time. If everyone is anticipating a test, they’re mentally prepared for the exercise, tend to be waiting around for it and show up at the EOC in minutes with everything they need. In reality, this doesn’t happen. Randomly stand up the team. What happens if you run an unannounced test at 2:00 in the morning? Prepare for some surprises.

  • Missing personnel

    100% of your staff aren’t going to show up during a major incident. Make sure a material number of key personnel don’t show up until late in the exercise. You’ll quickly know if you’ve cross-trained deeply enough.

  • Multiple infrastructure failures

    Don’t just cut the power and wait for the back-up generator to kick in. Cut the back-up generator as well. Now what? Don’t talk theory, actually solve the problem. If you can get your hands on a number of portable generators, does anyone know how to use them to drive critical infrastructure? Will these smaller units actually carry the required load? Now would be the time to figure that out.

  • Multiple Communications Failures

    Kill your radio communications. Take landlines down too. Now take away 50% of the available cell phones. Could it get worse? Restrict all cell communications to text only. Can you fail over to text only? What about satellite?

  • Venue Change

    It’s at about this point that things are probably breaking down. If things haven’t already ground to a halt, your team will certainly be pulling hard on all of their faculties. Perfect time to require an orderly evacuation of the EOC. Now what? Where to? How do you get there? Can you figure it out given the reduced infrastructure and limited comms?

There’s no doubt that this type of training is extreme. It will certainly induce failure for many groups, but it will also bring out the best in your people and expose gaps in your pre-planning and theory. The next time you initiate a test of similar intensity, you’ll probably fail again, but you’ll fail different and that means you’re improving.

Getting social media right during disaster management – lessons from the trenches

Over the weekend, there was a significant earthquake off the coast of British Columbia, and almost immediately the provincial agency tasked with providing support and broad coordination during emergency events was on its heels. Or at least that was the public perception.

Whether Emergency Management BC (EMBC) responded appropriately or not will be revealed in the weeks and months ahead. Like most people, we’re in no position to critique or praise EMBC’s overall response. The vast majority of activity occurred out of public view - between the province and smaller cities, districts, municipalities, towns and villages. The government and the agency itself will need to look at all of the data and decide how they performed. Where EMBC did stumble was in the court of public opinion. In particular, EMBC failed to realize the importance of being in charge of their social media channels at time of crisis.

To put this in context, EMBC made a point of establishing a Twitter account and instructed the public to ’follow’ the account as a means of notification at time of disaster. When Saturday’s earthquake occurred, many people turned to Twitter for updates. What they found was a silent Twitter stream…no news or instruction. EMBC took almost a full hour to post their first tweet. When they did chime in, their initial tweets weren’t to provide important details about the quake but were in response to negative comments from other Twitter users who had grown frustrated.

The value of social media at time of emergency can be debated at length, but any agency that encourages the public to engage with them needs to show up. EMBC, for their part, said they were silent for almost an hour because they were “confirming intel.” Unfortunately, there was no way for the public to have known this. It simply appeared as if EMBC was absent at a time when other media and social channels were exploding with information (and misinformation) about the quake and possible tsunami.

After about 30 minutes, you can be sure many wrote off EMBC’s Twitter stream as a source of information. As would be expected, the public began looking for details elsewhere - a problem and a dangerous one at that. With each passing minute, EMBC’s audience grew smaller and smaller. This exodus left EMBC with diminished capacity to communicate critical information at a time when it mattered most. If Saturday evening had evolved differently, EMBC’s management of their social stream could have had very real consequences.

Lessons:

This event, like any, provides a number of lessons that everyone in emergency management should take into consideration as they evaluate their own commitment to social media usage in disaster management.

  1. In or out?

    If you are going to encourage the public to engage with you online at time of emergency, you need to be online and you need to be firmly in control. If you’re not sure about the value of social media or lack the resources (personnel or training) to utilize the tool properly during a crisis, do not tell the public to look for guidance from your social channels.

  2. Broadcast:

    Twitter is a conversational tool but in the early stages of an incident, the objective should be the rapid dissemination of lifesaving information to as many members of the public as possible. Do not engage in one-to-one conversations when the greater public good should be the focus. Make broadcast messages your priority. The time for individual exchanges will come later.

  3. Communicate early:

    It is critically important to let the community know that the lights are on. Send a quick message early so that the public knows you intend to share important details via your chosen social channels: “We’re here. We’re working hard to separate fact from fiction. We will provide an update ASAP.”

  4. Message often:

    If information is sparse, let the public know - don’t go quiet. Maintaining the attention of your audience is supremely important. You need the public to be listening when you DO have critical information to share. Be prepared to send out generic messages if your team is gathering intel – this will maintain interest and further public safety. Take the time to compose these messages in advance for any number of anticipated incidents.

  5. Work your plan:

    After any disaster, things will be hectic. Don’t expect you’ll come up with a clever game plan on the fly - even for something like Twitter. As with most things, structure works best. Ensure communications are in simple terms and free of lingo. Know that you will post your first message ASAP and that subsequent updates will come every 10 minutes, even if you have no new info. Know who is in charge of outbound communications, and have identified objectives that you are working toward with your social channels: 1) save lives, 2) save property, etc.

Conclusion:

Saturday night was undoubtedly very tense at EMBC. A 7.7 magnitude quake is potentially catastrophic (the 2010 Haiti earthquake measured 7.0 and killed an estimated 316,000 people) and there was the very real possibility of a tsunami that could have impacted life and property.

If things had played out differently and the epicentre of the quake had been closer to a population centre, EMBC’s usage of social media might have done more harm than good, at least in the early stages. That being said, BC got lucky and EMBC benefited from some very valuable and rare practice. This was as real a dry run as any emergency management agency will ever be given, and it will yield very valuable lessons that should be shared.

In the aftermath, we should all revisit EMBC’s usage of social media and look at what they might have done differently. This event provided an invaluable (and public) learning experience for all of us. Smart organizations are looking at how the evening unfolded - analyzing the good and the bad and then reviewing their own social strategies in light of their findings. We learn this way. We improve this way. The best organizations are the ones that know we’ll never get it perfect, that in everything we do there is room to improve.