The 72-Hour Post-Disaster Problem
By Alex Nwoko
On a Sunday afternoon in October 2023, a magnitude 6.3 earthquake struck Herat province in Afghanistan. By Monday morning, our Humanitarian Spatial Data Center team in Kabul was being asked the questions we always get in the first 24 hours of a sudden-onset disaster — and which we never have full answers to.
How many villages are affected? Which roads are passable? Where are the field hospitals? How many people have been displaced? Which communities had vulnerable populations to begin with?
The honest answer to most of those questions, on the morning after a disaster, is: we don't fully know yet. Field teams are still moving. Phone lines are still down in places. Damage assessments are days away from being completed. Population baselines are months out of date.
This is the 72-hour problem. The window when decisions matter most is also the window when information is most incomplete. And the temptation, for everyone in the room, is to wait for better data before acting.
After ten years of doing this work — Bangladesh after a cyclone, Ethiopia during a drought escalation, Afghanistan after multiple earthquakes — I've come to a hard conclusion: good information management in the first 72 hours is not about delivering perfect data. It's about being useful under conditions where perfect data is structurally impossible.
Why the First 72 Hours Are Different
Standard humanitarian information management is built for steady-state coordination. A monthly 5W reporting cycle. Quarterly multi-sector needs assessments. Annual Humanitarian Needs Overview cycles. Each of those products assumes time to clean data, validate sources, and reconcile contradictions.
A sudden-onset event collapses that timeline. The Inter-Cluster Coordination Team meets within hours of the event. Resource mobilisation appeals go out within days. Donor commitments are negotiated based on whatever evidence exists at the moment.
Decisions made in this window have outsized consequences. Pre-positioning supplies in the wrong district means relief takes 36 extra hours to arrive. Activating a flash appeal with the wrong affected-population estimate locks the response into a budget envelope that may not fit reality. Failing to flag a vulnerable group early means they get coded out of the response architecture for months.
And yet the data systems we build are mostly designed for the steady state, not for the surge.
What Doesn't Work
Waiting for clean data. I've watched senior IM officers refuse to publish a hazard map until every administrative boundary code was verified. By the time the map went out, the response decisions it was meant to inform had already been made — based on someone's WhatsApp screenshot of a sketch on a notebook page.
Insisting on the standard reporting template. Partner organisations in the first 48 hours can't fill out a 60-field 5W. They're mobilising staff, opening field offices, sourcing fuel. Asking them to populate every disaggregation cell guarantees you get a blank or a fabrication.
Producing the perfect product. A 40-page situation analysis published on day five is operationally less valuable than a 1-page snapshot published on day one. The decision-maker has already made the day-one decision.
Ignoring open-source signals. GDACS alerts, USGS shake maps, GloFAS discharge forecasts, satellite imagery from Sentinel and MODIS, even social-media geolocation — these are imperfect, but they exist within hours of an event. Treating them as too crude for "official" products means you publish nothing while the world burns.
What Works: Pre-Positioned Information Architecture
The shift in my thinking, over many sudden-onset events, was this: the first 72 hours don't reward better real-time data collection. They reward pre-positioned information architecture that can be flexed to a specific event.
Baseline layers, ready to go. Population estimates by admin-3 (with WorldPop and Microsoft building footprints as the foundation). Health facility locations. School locations. Roads with passability classification. Pre-event vulnerability indices. None of these need to be collected after the disaster — they can sit in a sovereign database and be intersected with the event footprint within an hour of the alert.
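As a minimal sketch of what "intersected with the event footprint within an hour" means in practice: take a pre-positioned point layer of villages with population estimates and clip it against a hazard polygon. The village names, coordinates, and footprint below are illustrative; a real pipeline would run this in PostGIS or GeoPandas against WorldPop rasters and building footprints rather than hand-rolled geometry.

```python
# Sketch: intersect a pre-positioned baseline layer (village points with
# population estimates) with an event footprint polygon to produce a first
# exposure estimate. All data below is hypothetical.

def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: is (lon, lat) inside the polygon (list of (lon, lat))?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Edge crosses the horizontal line through the point?
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical baseline layer: (village, lon, lat, estimated population)
baseline = [
    ("Village A", 62.10, 34.40, 1200),
    ("Village B", 62.30, 34.55, 3400),
    ("Village C", 61.80, 34.10, 800),
]

# Hypothetical event footprint (e.g., a shake-intensity contour) as a polygon
footprint = [(62.0, 34.3), (62.5, 34.3), (62.5, 34.7), (62.0, 34.7)]

exposed = [v for v in baseline if point_in_polygon(v[1], v[2], footprint)]
print("Exposed villages:", [v[0] for v in exposed])
print("Exposed population estimate:", sum(v[3] for v in exposed))
```

The point is the architecture, not the geometry: because the baseline already exists, the only event-specific input is the footprint polygon.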
Standardised event-impact templates. A one-pager with a fixed structure: hazard summary, affected administrative units, exposure estimates from baseline layers, immediate humanitarian implications, known partner presence. Designed to be filled in at 80% confidence, not 100%.
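A fixed-structure one-pager can be expressed as a literal data structure so the IM team fills in values rather than inventing a layout at 2 a.m. The field names and confidence vocabulary below are illustrative, not an agency standard.

```python
# Sketch of a pre-templated event-impact one-pager as a fixed structure.
# Field names are illustrative; the point is that the structure is frozen
# before the event, and only the values change.
EVENT_IMPACT_TEMPLATE = {
    "hazard_summary": None,           # e.g., "M6.3 earthquake, Herat province"
    "affected_admin_units": [],       # admin codes pulled from baseline layers
    "exposure_estimates": {},         # {admin_code: estimated population}
    "humanitarian_implications": [],  # short bullets, written at 80% confidence
    "partner_presence": [],           # from the pre-positioned contact list
    "confidence": "low",              # low / medium / high
    "revise_within_hours": 24,
}

def fill_template(**fields):
    """Return a copy of the template with event-specific fields filled in."""
    product = dict(EVENT_IMPACT_TEMPLATE)
    for key, value in fields.items():
        if key not in product:
            raise KeyError(f"Unknown field: {key}")  # keep the structure fixed
        product[key] = value
    return product

snapshot = fill_template(
    hazard_summary="M6.3 earthquake, Herat province",
    affected_admin_units=["AF-HER-01", "AF-HER-02"],
    confidence="medium",
)
```

Rejecting unknown fields is deliberate: the template's value is that every event produces the same shape of product, comparable across responses.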
A go-to data triangulation protocol. GDACS for the initial alert. USGS or WMO for technical hazard parameters. Open-source remote sensing for damage extent. Pre-positioned partner contact lists for ground-truthing. The protocol exists before the event so the first hour isn't spent inventing it.
Decision-rights agreements signed in advance. Who can approve publication of a flash analytical product without full data validation? In Afghanistan, we had a written protocol that the Country Technical Advisor (me) could approve a 72-hour rapid analysis with a "preliminary, subject to revision" disclaimer. That single protocol unlocked products that would otherwise have sat for days awaiting sign-off.
The 80% Principle
Here is the principle I now apply: 80% confidence in 4 hours beats 100% confidence in 4 days.
This isn't a license to be sloppy. It's a recognition that humanitarian decisions are made under uncertainty whether or not you publish data, and that a transparent estimate with a confidence band is more useful than silence.
Every rapid product I publish carries the same disclaimer: preliminary estimate based on [sources X, Y, Z], confidence level [low/medium/high], to be revised within [N] hours as field reports arrive. That disclaimer protects the IM unit's credibility AND empowers decision-makers to act on the best evidence available.
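The disclaimer itself can be standardised so it is never improvised under pressure. A small helper is one way to do that; the function name and exact wording here are illustrative.

```python
# Sketch: generate the standard "preliminary, subject to revision" disclaimer
# for a rapid analytical product. Wording is illustrative.
def rapid_product_disclaimer(sources, confidence, revise_hours):
    """Build a one-line disclaimer naming sources, confidence, and revision window."""
    return (
        f"Preliminary estimate based on {', '.join(sources)}; "
        f"confidence level: {confidence}; "
        f"to be revised within {revise_hours} hours as field reports arrive."
    )

line = rapid_product_disclaimer(["GDACS", "USGS ShakeMap", "WorldPop"], "medium", 24)
print(line)
```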
In Afghanistan, after the Herat earthquake, this approach let us publish an initial affected-population estimate within 18 hours that was within 12% of the final verified figure five days later. That early estimate informed the first round of cluster activation, partner deployment, and donor briefings. Was it perfect? No. Was it useful? Materially.
What I Now Do Before Every Posting
When I arrive in a new country office, the first 72-hour audit I run is structural, not operational. I ask:
- What baseline layers exist and how current are they?
- Where are they hosted? Can the IM team access them under emergency conditions?
- What is the standard structure of a rapid analytical product? Is it pre-templated?
- Who can approve publication without standard validation?
- What are the data triangulation defaults?
- How do field reports flow into the analytical pipeline?
If any of those questions don't have a clear answer, I work on them before the next event — not after. Because the next event is always coming, and the 72 hours after it arrive whether the architecture is ready or not.
The goal of humanitarian information management isn't perfect data. It's decision-support that arrives in time to matter. Build for that, and the rest follows.