Here’s a scenario I’ve seen play out more times than I can count.
A facility conducts its annual emergency drill. Staff get notified in advance. Everyone knows it’s coming. The drill runs, nobody panics, the form gets signed, the documentation gets filed. Leadership checks the box. Compliance satisfied.
Six months later, something real happens — a patient collapses in the waiting room, a fire alarm triggers during a procedure, a severe weather event requires immediate shelter-in-place. And the response looks nothing like what the drill produced. People freeze. Nobody is sure who’s in charge. Staff from different departments don’t know what the others are doing. The plan that worked on paper doesn’t translate to the floor.
This isn’t a rare failure. A review of how healthcare provider systems respond to real emergencies found a recurring pattern across multiple studies: most facilities had emergency plans that were activated during actual events, yet staff were frequently unaware of those plans or confused when they were put into use. The same review noted a consistent call across the literature for more drills — with clearer direction and better design.
The problem isn’t that facilities aren’t drilling. The problem is how they’re drilling, and what they’re actually testing when they do.
The difference between a drill and a test
There’s a distinction worth making early: a drill and a genuine test of your emergency plan are not the same thing.
A drill, as most facilities run it, is a rehearsal. Staff know it’s happening. The scenario is familiar. The conditions are controlled. Nobody is actually stressed, nobody is making real decisions under pressure, and nobody is discovering for the first time that the person they’re supposed to call isn’t answering.
A genuine test puts your plan under conditions that are as close to real as you can safely create. That means introducing unexpected variables. It means involving staff who didn’t help write the plan. It means running scenarios at times other than Tuesday morning when everyone’s already at their desk.
The distinction matters because the gap between your rehearsal performance and your real-world performance is exactly the gap your training should be closing. If you only ever drill under ideal conditions, you’ll only ever know how your plan performs under ideal conditions.
Four reasons facilities fail their own drills
After 30 years in public safety operations — first as a dispatcher, then in EMS operations supervision — I’ve seen the same failure patterns repeat across organizations of every size. Here are the four I encounter most often.
1. The plan was written by people who won’t execute it
Emergency plans are frequently developed by administrators, compliance officers, or outside consultants. The people who will actually carry out the plan — the front desk staff, the medical assistants, the technicians working in the back — often had no involvement in writing it.
This creates a plan that looks complete on paper and fails at the human level. The person assigned to conduct the headcount didn’t know they had that role. The staff member responsible for securing the medication room has never practiced the procedure. The designated contact for incoming emergency calls doesn’t know what information first responders need when they arrive.
A plan written without input from the people who will execute it is a plan optimized for documentation, not response.
2. Training stops when the document is signed
OSHA’s 29 CFR 1910.38 requires that emergency action plans be reviewed with employees when the plan is developed, when responsibilities change, and when the plan is updated. The regulation sets a floor. What it can’t mandate is that staff actually retain what they were told in a 20-minute orientation when they were onboarded.
The high rate of staff turnover in healthcare settings compounds this problem. New employees arrive. Responsibilities shift. The person who was trained six months ago may have moved to a different role, and the person who replaced them hasn’t been trained at all.
Signing the acknowledgment form isn’t training. It’s documentation, and the two are not interchangeable.
3. Drills test the scenario, not the system
Most drills are scenario-based: there’s a fire, or a patient codes, or a severe weather alert comes in. The scenario is defined, the response plays out, and if the scenario resolves without visible chaos, the drill is considered a success.
What doesn’t get tested is the system underneath the scenario. Can staff reach the right people when communication systems are stressed? Does the employee alarm tell your staff to evacuate or to shelter in place — and do they know which tone means which? If the drill involves an evacuation, was anyone tracking where patients actually ended up?
Drills that test only the scenario produce organizations that can handle that specific scenario — once, under controlled conditions, when everyone knows it’s coming. They don’t produce organizations that can handle the next scenario, the one nobody planned for.
4. After-action review is skipped or superficial
CMS’s Emergency Preparedness Rule requires that covered facilities analyze their response to drills, exercises, and real events — and revise their plans accordingly. The regulation is explicit: documentation of analysis is required.
In practice, after-action review is frequently the first thing cut when the drill runs long or when nobody wants to spend another hour in a room after a tiring exercise. The debrief becomes a quick conversation, the notes are thin, and the plan doesn’t change.
The after-action review is the entire point of running the drill. It’s where you find the gaps. A drill without a genuine debrief is an expensive way to confirm that nothing catastrophically wrong happened — it’s not a tool for improvement.
What a well-designed exercise actually produces
A tabletop exercise, when it’s facilitated well, does something a standard drill can’t: it forces decision-makers to work through a problem in real time, with incomplete information, under conditions that surface the gaps in their plan before a real event does.
A tabletop doesn’t require everyone to physically evacuate the building. It doesn’t interrupt patient care. It puts the right people in a room, introduces a realistic scenario with time-pressured injects, and asks them to make the decisions they would make if this were real. The facilitator’s job is to push on the weak points — to ask “and then what?” and “who’s responsible for that?” until the gaps show up.
Research on emergency preparedness exercises consistently finds that facilities benefit most from exercises that use realistic scenarios, involve people at multiple levels of the organization, and include structured debrief processes that feed directly into plan updates.
The goal isn’t a perfect score. The goal is to find out what doesn’t work before you need it to work.
The binder on the shelf
There’s a pattern I’ve described to a lot of healthcare operations professionals over the years, and it tends to land: every facility has a binder. The emergency plan lives in it. It sits on a shelf somewhere, usually in the administrator’s office or a back storage room. It was current when it was written. It may or may not still be current now.
That binder isn’t a problem until the day it matters. On that day, it’s not the binder anyone reaches for — it’s the nearest staff member who looks like they might know what to do. Whether that person actually knows what to do depends entirely on what happened before the emergency, not during it.
The facilities that respond well aren’t the ones with the thickest binders. They’re the ones where the people on the floor have been through the scenario enough times that the response feels familiar, even when the specifics are different from anything they practiced.
That’s what a well-run drill produces. That’s what a tabletop exercise is designed to build toward.
Where to go from here
If your facility conducts annual drills primarily as a compliance exercise, the question worth asking is what you actually learned the last time you ran one. Not whether the form was filed — what you found out about your plan, your staff, and the gap between the two.
If the honest answer is “not much,” it may be time to approach the next exercise differently. Adams Operations Group designs and facilitates tabletop exercises for healthcare-adjacent organizations in the Dallas-Fort Worth area — built around realistic scenarios, structured to surface the gaps that matter, and followed by an after-action process that actually improves the plan.
If you want to talk through what that looks like for your facility, schedule a consultation call at adamsopsgroup.com.
Sources: “How are healthcare provider systems preparing for health emergency situations?” PMC/NCBI (pmc.ncbi.nlm.nih.gov/articles/PMC8242524); “Emergency Preparedness and the Development of Health Care Coalitions,” PMC/NCBI (pmc.ncbi.nlm.nih.gov/articles/PMC7094223); CMS Emergency Preparedness Rule (cms.gov); ASPR TRACIE, U.S. Department of Health and Human Services (asprtracie.hhs.gov).