Building Disaster Data Systems That Governments Can Own: Lessons from 10 Years in Humanitarian Information Management
By Alex Nwoko
In early 2019, I received a message from a former colleague in a mission I had left about two years earlier. The flood vulnerability and exposure analysis I had designed for displaced populations — a system that mapped how IDP settlement patterns intersected with flood risk across the response area to support contingency planning — was no longer being updated. The team member I had trained to maintain the analytical process had moved on. The live dashboard was gone. Only an old static version had been archived. And it was flood season again. They wanted to know whether the pattern of vulnerability and exposure among displaced populations had evolved — and they had no way to answer that question because the system that could tell them had died with the departure of the one person who knew how to run it.
That is how data innovations die operationally: not with a dramatic failure, but with a quiet erosion — a trained staff member leaves, a handover doesn't happen, a dashboard stops refreshing, and suddenly the analytical capability that informed life-saving decisions no longer exists. I wish I could say this surprised me. It didn't. I had seen it before, and I witnessed it again in the three countries where I worked afterward. Different systems, different organisations, the same pattern: an international organisation arrives, builds a sophisticated data platform, trains staff, produces impressive outputs for a year or two, and then leaves — taking the institutional knowledge, the server credentials, and the analytical momentum with them.
The hardest lesson from over a decade of building these platforms is not technical. It is this: the measure of a data system is not how sophisticated it is on launch day. It is whether the system is still running two years after you leave.
The Graveyard of Humanitarian Data Platforms
The humanitarian sector has a sustainability problem with data infrastructure. We celebrate launches, showcase dashboards at donor briefings, and write case studies about platforms "transforming decision-making." But we almost never return two years later to check whether they survived.
I have contributed to this graveyard. The systems that failed shared common traits: they were designed around international staff's analytical preferences rather than government workflows; hosted on servers controlled by the implementing organisation; built with tools the national team hadn't been trained to maintain; and their governance — who decides what data gets collected, who validates it, who publishes it — was never formally transferred. These failures reflect the fundamental misalignment between humanitarian project cycles (short, deliverable-driven, with rotating international staff) and what data systems need to survive: institutional permanence, local ownership, and sustained investment in human capacity.
The DELTA Resilience Connection
These principles are now embedded in the global architecture for disaster data. DELTA Resilience — the next-generation disaster tracking system — was designed around sovereign data ownership from the ground up. Its interoperability architecture (API-driven data exchange with meteorological services and sectoral ministries) integrates into existing government ecosystems rather than sitting alongside them.
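The specifics of that architecture belong to the system's own documentation, but the pattern is worth making concrete. The sketch below shows the shape of an API-driven exchange: pull hazard observations from a meteorological service, key them to official administrative codes, and write them into the government-hosted platform rather than keeping a parallel copy. The endpoint URLs, field names, and tokens are hypothetical, not DELTA Resilience's actual interface.

```python
import os
import requests

# Hypothetical endpoints: a national meteorological service API and the
# government-hosted disaster data platform. Names are illustrative only.
MET_SERVICE_URL = "https://met.example.gov/api/v1/flood-alerts"
DISASTER_PLATFORM_URL = "https://ndma.example.gov/api/v1/hazard-events"

def sync_flood_alerts() -> int:
    """Pull current flood alerts from the met service and push them into
    the government platform. Returns the number of records exchanged."""
    alerts = requests.get(
        MET_SERVICE_URL,
        headers={"Authorization": f"Bearer {os.environ['MET_API_TOKEN']}"},
        timeout=30,
    )
    alerts.raise_for_status()

    records = [
        {
            "hazard_type": "flood",
            "admin_pcode": a["admin_pcode"],   # official admin boundary code
            "issued_at": a["issued_at"],
            "severity": a["severity"],
            "source": "national_met_service",
        }
        for a in alerts.json()["alerts"]
    ]

    resp = requests.post(
        DISASTER_PLATFORM_URL,
        json={"events": records},
        headers={"Authorization": f"Bearer {os.environ['NDMA_API_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return len(records)
```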
The Data Ecosystem Maturity Assessment (DEMA) framework assesses governance, infrastructure, data quality, and human capacity before any technology is deployed. The Global Disaster-Related Statistics Framework (G-DRSF) institutionalises the partnership with National Statistical Offices (NSOs) by mandating statistical harmonisation between disaster management and official statistics.
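The published DEMA instrument has its own indicators and scoring rules; the sketch below only illustrates the shape of a pre-deployment assessment. The four dimensions mirror those named above, while the 1-to-5 scale and readiness threshold are my own illustrative assumptions.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class DimensionScore:
    """One DEMA-style dimension, scored 1 (nascent) to 5 (institutionalised).
    The scale and threshold are illustrative, not the official scoring rules."""
    name: str
    score: int
    evidence: str  # e.g. "data-sharing MoU signed with NSO, 2023"

def readiness(dimensions: list[DimensionScore], threshold: float = 3.0) -> dict:
    """Summarise an assessment: overall mean, weakest dimension, and whether
    technology deployment should wait for governance work first."""
    weakest = min(dimensions, key=lambda d: d.score)
    return {
        "overall": round(mean(d.score for d in dimensions), 1),
        "weakest_dimension": weakest.name,
        "deploy_technology": all(d.score >= threshold for d in dimensions),
    }

assessment = [
    DimensionScore("governance", 2, "no formal data ownership decree"),
    DimensionScore("infrastructure", 4, "government cloud tenancy in place"),
    DimensionScore("data_quality", 3, "p-codes used, but validation is ad hoc"),
    DimensionScore("human_capacity", 2, "one trained analyst, no ToT structure"),
]
print(readiness(assessment))
```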
These are governance improvements, not technical ones. And governance improvements determine whether systems survive.
What Makes a Data System Survive Its Creator
After building or contributing to data platforms in six countries, I have distilled what works into four principles. None of them are technical. All of them are institutional.
Institutional anchoring from Day 1. The system must belong to government from the beginning, not be handed over at project close. This means the National Disaster Management Authority or the relevant ministry is the data owner from the first design meeting. It means the platform sits on government infrastructure (or government-controlled cloud), not on the implementing organisation's servers. It means the URL, the branding, and the access controls reflect government ownership.
NSO partnerships. National Statistical Offices outlive project cycles. They are the permanence anchor that project-funded NGOs cannot provide. The G-DRSF, endorsed by the UN Statistical Commission, formalises this insight at the global level — mandating that disaster data systems bridge the divide between disaster management authorities and official statistics. In practice, this means involving the NSO from the data model design stage, not the validation stage. It means using statistical standards (p-codes, official administrative boundaries, internationally harmonised hazard classifications) that the NSO recognises. It means building a data pipeline where the disaster management authority collects operational data and the NSO certifies it as official statistics. When I conducted a data ecosystem audit during a headquarters-level posting at a UN agency, the same principle applied: the system that survived was the one that aligned with existing institutional reporting flows, not the one that tried to replace them.
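What "standards the NSO recognises" means in day-to-day work is often a join like the one sketched below: operational incident reports keyed to the NSO's official p-coded boundaries, with records that fail the join flagged for the NSO focal point rather than silently dropped. File and column names here are hypothetical.

```python
import pandas as pd

# Hypothetical inputs: operational incident reports from the disaster
# management authority, and the NSO's official admin-2 boundary list.
incidents = pd.read_csv("ndma_incident_reports.csv")        # has "admin2_pcode"
official_admin2 = pd.read_csv("nso_admin2_boundaries.csv")  # has "admin2_pcode", "admin2_name"

# Left-join on the official p-code so every operational record is checked
# against the statistical standard rather than free-text place names.
merged = incidents.merge(
    official_admin2[["admin2_pcode", "admin2_name"]],
    on="admin2_pcode",
    how="left",
    indicator=True,
)

# Records that do not match an official boundary go to the NSO focal point
# for correction; they are flagged, not discarded.
unmatched = merged[merged["_merge"] == "left_only"]
unmatched.to_csv("pcode_mismatches_for_nso_review.csv", index=False)

aligned = merged[merged["_merge"] == "both"].drop(columns="_merge")
print(f"{len(aligned)} records aligned to official boundaries, "
      f"{len(unmatched)} flagged for NSO review")
```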
Training-of-Trainers, not training-of-users. Generic user training is expensive and ineffective. I have watched hundreds of staff trained on Power BI or QGIS who never used the tool again after training ended — because they lacked ongoing support, peer community, and institutional incentive. Training-of-Trainers (ToT) produces lasting capacity. Identify 3-5 national focal points per institution, invest heavily in their technical skills over months, and certify them only after they conduct a national workshop. Build a peer support structure so they troubleshoot without international assistance. The Sendai Framework Academy uses this model for DELTA Resilience. It creates self-sustaining knowledge ecosystems, not dependency relationships. When I built a coordination mechanism's analytical framework — a meta-analysis unifying data from five agencies across 1,559 households — it survived because the coordination mechanism owned it, not any single agency. The coordination leads maintained the analytical pipeline and onboarded new partner data. Governance was embedded in the structure, not in any individual.
The politics of data ownership — and the politics of data suspension. Data ownership is contested everywhere. Governments want control over publication, especially when data reveals politically sensitive patterns. Humanitarian organisations want open data for coordination. Donors want outputs demonstrating impact. These interests conflict, and if the governance structure doesn't resolve them at the design stage, the system becomes paralysed. But the politics can be even more brutal than paralysis. In one country where I served as programme coordinator, I witnessed a nationwide humanitarian reporting platform — the primary monitoring tool for over 115 partner organisations including UN clusters, NGOs, and working groups — suspended overnight when the sole donor froze funding. There was no phased transition plan. No bridge funding. No advance notification to the partners who depended on the system daily. The implementing organisation had no choice but to pause all operations immediately, and I was the one who had to communicate that decision to every partner across the response.
The consequences were immediate. The UN coordination body cancelled planned meetings with the implementing organisation and excluded it from critical information management discussions — a signal of institutional trust collapsing in real time. Partners who had built their coordination workflows around the platform were left without essential humanitarian data mid-response. Ethical questions surfaced about the reliability of an organisation that could suspend services without warning. And the episode exposed a structural vulnerability that no amount of technical sophistication could have prevented: a data system that serves an entire country's humanitarian coordination but depends on a single donor is a system with a single point of failure. The experience reinforced what I had been learning across every deployment: the politics of who funds, who hosts, and who controls a data system are not secondary concerns. They are the system's immune system. When the politics fail, the technology — no matter how well-designed — fails with it. The solution is tiered access and diversified ownership: government has sovereign control over raw data and publication; humanitarian partners access aggregated, anonymised data for coordination; donors receive pre-agreed outputs. And critically, no single donor or implementing partner should be the sole point of failure for a system that an entire response depends on. This requires formal data-sharing agreements, contingency plans for funding disruptions, and institutional anchoring deep enough that the system survives the departure — or suspension — of any single actor.
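In data terms, tiered access is simpler than it sounds: one authoritative dataset, three derived views. The sketch below assumes a household-level table with illustrative column names; the actual tiers and indicators belong in the data-sharing agreement, and the code only shows how each view derives from a single government-owned source.

```python
import pandas as pd

def build_tiered_views(households: pd.DataFrame) -> dict[str, pd.DataFrame]:
    """Derive the three access tiers from one government-owned dataset.
    Column names ("admin2_pcode", "in_flood_zone", "household_size", ...)
    are illustrative assumptions."""
    # Tier 1 - government: the raw, record-level data. Sovereign control
    # over publication stays with the data owner.
    government_view = households

    # Tier 2 - humanitarian partners: aggregated and anonymised by admin
    # area, enough for coordination without exposing individual households.
    partner_view = (
        households.groupby("admin2_pcode")
        .agg(
            households_assessed=("household_id", "count"),
            share_in_flood_zone=("in_flood_zone", "mean"),
            avg_household_size=("household_size", "mean"),
        )
        .reset_index()
    )

    # Tier 3 - donors: only the pre-agreed headline indicators.
    donor_view = pd.DataFrame(
        {
            "households_assessed": [len(households)],
            "share_in_flood_zone": [households["in_flood_zone"].mean()],
        }
    )

    return {"government": government_view, "partners": partner_view, "donors": donor_view}
```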
What I Would Do Differently
In my earlier roles, I underestimated the time required for institutional anchoring. I moved too quickly to the technology — building dashboards, designing data models, training users — without investing enough in governance architecture. The dashboards looked impressive. The data models were sound. But the institutional foundations were shallow.
I also underestimated governance documentation: who owns what, who has admin access, what happens when staff leave, how disputes are resolved, what the escalation pathway looks like when the international organisation is no longer present. This documentation is tedious but essential.
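One way to keep that documentation from rotting in a shared drive is to keep its skeleton machine-readable, versioned alongside the system it governs. The record below is only an illustration of the questions the documentation has to answer with named institutions; the field names and roles are mine, not a standard.

```python
# An illustrative governance record kept in the repository next to the
# system it describes. The point is that every question has a named answer.
GOVERNANCE = {
    "data_owner": "National Disaster Management Authority",
    "official_statistics_certifier": "National Statistical Office",
    "admin_access": [
        {"role": "platform administrator", "institution": "NDMA IT unit"},
        {"role": "backup administrator", "institution": "NSO data systems team"},
    ],
    "staff_departure_procedure": "revoke credentials within 5 working days; "
                                 "handover checklist signed by both focal points",
    "dispute_resolution": "joint NDMA-NSO data governance committee, quarterly",
    "escalation_without_international_presence": "NDMA director of information "
                                                 "management is the final arbiter",
}

def unanswered(governance: dict) -> list[str]:
    """List governance questions that still have no named answer."""
    return [key for key, value in governance.items() if not value]

assert unanswered(GOVERNANCE) == []
```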
The hardest conversation in humanitarian data work is not technical. It is telling a government official that current data quality is inadequate for international reporting, and that improving it requires resources, political commitment, and transparency about gaps. That conversation, handled badly, kills partnerships. Handled well, it begins genuine ownership.
Design for Departure
The principle I now apply to every data platform: design for departure.
Before writing a single line of code, I ask: what happens when I leave? Who maintains the server? Who updates the data model when requirements change? Who trains the next cohort of data officers? Who troubleshoots failures at 2am before a donor briefing?
If I cannot answer with names — specific people in specific institutions with specific skills — I am not ready to build. The technology can wait. The institutional foundation cannot.
Continue Reading
The Politics of Humanitarian Data Infrastructure: Who Owns the System When Everyone Walks Away?
I wrote the email at 11am. It went to over 115 organisations — UN clusters, NGOs, working groups — telling them the nationwide humanitarian reporting platform was suspended immediately. Afghanistan in 2025 was a stress test that revealed a system-wide architectural flaw: nobody owns continuity.
Read more →
The Data Ecosystem Maturity Assessment: A Practitioner's Guide to Diagnosing National Disaster Data Readiness
On my first week at a UN agency headquarters, I asked: "How many data systems does this Division use?" The answer took three weeks to assemble. That experience of mapping before building became the foundation for every data system project since. A maturity assessment is not a delay — it is the investment that ensures the system you build is the system that survives.
Read more →
Lessons from Building Humanitarian Data Platforms Across Multiple Crisis Contexts
Multiple countries. Seven data platforms. A decade of work. Each one taught me something I could not have learned from a textbook. Six principles emerged across all of them — and none are about technology.
Read more →