Lessons from Building Humanitarian Data Platforms Across Multiple Crisis Contexts
By Alex Nwoko
Multiple countries. Seven data platforms. A decade of work. Each one built under different constraints — funding pressure, active conflict, pandemic restrictions, institutional fragmentation, political upheaval. Each one produced outputs that mattered to people making decisions under pressure: cluster coordinators deciding where to deploy assessment teams, government officials deciding which provinces to prioritise for drought response, cash working groups deciding whether their transfers were reaching the right households.
And each one taught me something I could not have learned from a textbook, a conference presentation, or a best-practice guide.
This post distills what those platforms taught me — the cross-cutting principles that apply regardless of the crisis, the technology, or the institutional context. Six principles emerged. None of them are about technology.
Principle 1: Build for the Worst Network, Not the Best
My first field posting placed me in a conflict-affected region with a 2G connection that dropped every afternoon when the generator ran out of fuel. I built 5W (Who does What, Where, When, for Whom) dashboards in Excel — not because I wanted to, but because Excel was the only software every partner already had installed, it worked offline, and its files were small enough to email over a 2G connection. The dashboards were ugly. They were functional. They were used.
Nearly every humanitarian data platform is designed in a capital city with reliable internet and tested in a field office where the connection drops when it rains. If your system requires 4G to function, it will not function where it is needed most. The constraint is not bandwidth — it is the assumption that bandwidth will be available. Design for offline-first with synchronisation, and you will never be caught by a generator failure.
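To make offline-first concrete, here is a minimal sketch of the pattern in Python: every submission is written to a local queue first, and a sync pass drains the queue opportunistically whenever the connection happens to be up. The endpoint URL, table layout, and field names are illustrative assumptions, not any real platform's API.

```python
# Minimal offline-first sketch: local writes always succeed; syncing is
# best-effort and resumes whenever connectivity returns.
# SYNC_URL and the table layout are hypothetical, for illustration only.
import json
import sqlite3
import urllib.request

DB_PATH = "submissions.db"
SYNC_URL = "https://example.org/api/submissions"  # hypothetical endpoint


def init_db(path: str = DB_PATH) -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS queue ("
        "id INTEGER PRIMARY KEY, payload TEXT NOT NULL, synced INTEGER DEFAULT 0)"
    )
    return conn


def record_submission(conn: sqlite3.Connection, record: dict) -> None:
    # The local write never depends on the network.
    conn.execute("INSERT INTO queue (payload) VALUES (?)", (json.dumps(record),))
    conn.commit()


def sync_pending(conn: sqlite3.Connection) -> int:
    # Drain the queue until the first network failure, then stop and
    # leave the remainder queued for the next attempt.
    sent = 0
    rows = conn.execute("SELECT id, payload FROM queue WHERE synced = 0").fetchall()
    for row_id, payload in rows:
        req = urllib.request.Request(
            SYNC_URL,
            data=payload.encode(),
            headers={"Content-Type": "application/json"},
        )
        try:
            urllib.request.urlopen(req, timeout=10)
        except OSError:
            break  # offline again; retry on the next pass
        conn.execute("UPDATE queue SET synced = 1 WHERE id = ?", (row_id,))
        conn.commit()
        sent += 1
    return sent
```

The design choice is the whole point: the queue is the system of record, and the server is a replica that catches up when it can, not the other way around.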
Principle 2: The Coordination Mechanism Is the Product, Not the Dashboard
In one of the largest refugee responses on the planet — nearly a million displaced people in a concentrated geographic area — the information management challenge was not data scarcity. It was data flood. I led inter-sector analytical reports combining health, nutrition, WASH, education, and protection data into a unified framework. The reports became reference documents not because we had the best data, but because the coordination mechanism that produced them was trusted by the organisations that consumed them.
A dashboard that nobody trusts is a decoration. A coordination mechanism that produces trusted analysis — even if it is a simple table in a PDF — is an information management system. Invest in the process (shared questions, shared data standards, shared review) and the technology will follow. Start with the technology and the process will not materialise.
Principle 3: Invest in Data Governance Before Data Collection
Five humanitarian organisations were each running post-distribution monitoring for their cash transfer programmes using different tools, different questions, different sampling strategies, and different definitions of "success." The cash working group could not answer a basic question: "Is our collective cash programming working?"
I built a unified analytical framework — nine analytical pillars covering adequacy, timeliness, utilisation, market access, protection, targeting accuracy, satisfaction, coping, and impact — and harmonised data from over 1,500 households into a single analytical ecosystem. The framework worked because we invested months in governance before collecting a single data point. We agreed on shared definitions, shared indicators, shared disaggregation, and what "success" meant. When I left, it survived — because it was owned by the coordination mechanism, not by any single agency.
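To illustrate what that upfront governance buys you, here is a toy Python sketch of harmonisation against a shared codebook: each partner's export is renamed and recoded to the agreed definitions before anything is pooled. The agency names, column names, and recode values are invented for the example.

```python
# Toy sketch: a shared codebook agreed in governance, applied to two
# incompatible partner exports. All names and values are illustrative.
import pandas as pd

# Agreed codebook: shared indicator -> (pillar, per-partner source column)
CODEBOOK = {
    "transfer_received_on_time": {
        "pillar": "timeliness",
        "sources": {"agency_a": "rcv_ontime", "agency_b": "Q12_timely"},
    },
    "spent_on_food_share": {
        "pillar": "utilisation",
        "sources": {"agency_a": "food_share", "agency_b": "Q20_food_pct"},
    },
}

YES_NO = {"yes": 1, "no": 0, "1": 1, "0": 0}


def harmonise(partner: str, df: pd.DataFrame) -> pd.DataFrame:
    """Rename a partner's columns to the shared codebook and tag the source."""
    renames = {
        spec["sources"][partner]: indicator
        for indicator, spec in CODEBOOK.items()
        if partner in spec["sources"] and spec["sources"][partner] in df.columns
    }
    out = df.rename(columns=renames)[list(renames.values())].copy()
    if "transfer_received_on_time" in out:
        # Recode every partner's yes/no variants to the agreed 1/0 coding.
        out["transfer_received_on_time"] = (
            out["transfer_received_on_time"].astype(str).str.lower().map(YES_NO)
        )
    out["partner"] = partner
    return out


# Two incompatible exports become one analysable table.
a = pd.DataFrame({"rcv_ontime": ["yes", "no"], "food_share": [0.6, 0.4]})
b = pd.DataFrame({"Q12_timely": ["1", "0"], "Q20_food_pct": [0.7, 0.5]})
pooled = pd.concat([harmonise("agency_a", a), harmonise("agency_b", b)])
print(pooled)
```

The mapping dictionary is short precisely because the hard work happened before collection: if the definitions had not been agreed upfront, no amount of code would make these columns mean the same thing.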
Multi-partner analytics only works when you govern before you collect. Skip this step and you will spend more time harmonising incompatible data than you would have spent negotiating shared standards upfront.
Principle 4: Start with a Maturity Assessment, Not a Technology Choice
A headquarters posting taught me this principle most clearly. The division had multiple incident-monitoring and knowledge-management tools running in parallel. Each had been built to solve a specific problem. None had been mapped as an ecosystem.
The audit took six weeks. The platform design took four. The audit was the more valuable deliverable — because it prevented us from building a solution to a problem we did not yet fully understand.
Don't build until you've mapped what already exists. The audit always reveals surprises — systems nobody remembers building, data flows that depend on one person's email habits, governance gaps that no technology can solve.
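For illustration, the audit's core deliverable can be as simple as a structured inventory you can query for exactly those fragile patterns. This sketch is hypothetical; the system names, fields, and flagging rules are invented.

```python
# Hypothetical audit inventory: data flows recorded as structured records,
# then queried for the fragile patterns the audit usually surfaces.
from dataclasses import dataclass
from typing import Optional


@dataclass
class DataFlow:
    source: str
    destination: str
    mechanism: str             # e.g. "api", "shared_drive", "personal_email"
    maintainer: Optional[str]  # named role, or None if nobody owns the flow


flows = [
    DataFlow("incident_tracker", "weekly_brief", "personal_email", "one analyst"),
    DataFlow("km_portal", "incident_tracker", "api", "it_team"),
    DataFlow("legacy_logframe_db", "km_portal", "shared_drive", None),
]

# Flag two classic findings: flows held together by one person's inbox,
# and flows that nobody maintains at all.
for f in flows:
    if f.mechanism == "personal_email":
        print(f"FRAGILE: {f.source} -> {f.destination} depends on {f.maintainer}")
    if f.maintainer is None:
        print(f"ORPHANED: {f.source} -> {f.destination} has no owner")
```

The value is not the code; it is that the inventory forces every flow to name a mechanism and a maintainer, which is where the surprises live.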
Principle 5: Build for Departure
The largest platform work of my career was a multi-million-dollar disaster risk reduction (DRR), climate preparedness, and information management programme in a conflict-affected country — a multi-hazard analysis platform and a humanitarian reporting system that onboarded 200+ partner organisations. Both were significant technical achievements. Both were vulnerable to political change, funding cycles, and staff turnover. The components that were most resilient were the ones most deeply anchored in government workflows — built around National Disaster Management Authority (NDMA) requirements, their geographic taxonomies, their briefing templates.
But "build for departure" assumes there is a legitimate government to depart to — and this assumption does not hold everywhere. In contexts where a de facto authority controls the territory but lacks international recognition — where donor conditions prohibit sharing programme data with the governing authority — the principle hits a wall. This is the data ownership dilemma in contested legitimacy, and it remains one of the most consequential unresolved challenges in humanitarian data governance.
The system must work after you leave. Before writing a single line of code, answer: who will maintain the server, who will update the data model, who will train the next cohort? If the answers are "the international consultant," the system has an expiration date. And if the answer is "nobody — because no recognised institution can legally receive it" — then the system has a deeper problem that no amount of technical design can solve.
Principle 6: Train the Trainers, Not the Users
This principle emerged across every posting, but crystallised in the environments where I saw the sharpest contrast between trained individuals and trained institutions. Generic user training evaporates within months. Invest in 3-5 national focal points per institution, certify them as trainers through a structured Training-of-Trainers programme, and build a peer support network. It is the only model I have seen produce lasting capacity.
What you train people in matters as much as how you train them. My academic foundation — a Commonwealth Scholarship and subsequent analytics certifications — shaped my ability to think about disaster risk as a system of interacting variables (hazard, exposure, vulnerability, capacity) rather than as a sequence of emergency responses. The best analytical frameworks in humanitarian IM are the ones simple enough to implement under operational pressure but rigorous enough to withstand methodological scrutiny. My best work has happened at this intersection: academically grounded frameworks implemented with field pragmatism.
What I Still Get Wrong
Honesty requires this section.
I still underestimate the time data governance work takes. Data infrastructure is like an iceberg: the visible tip — dashboards, platforms, analytical outputs — is what gets funded, celebrated, and counted toward programme KPIs. But the mass below the waterline — data-sharing agreements, institutional roles, governance frameworks — is what determines whether the whole thing stays upright. I still feel the pull to start building the visible part before the foundations beneath it are secure, because building is satisfying and governance negotiation is slow.
I still overestimate the transferability of skills. A data officer trained in one context does not automatically become effective in a different context with different data, different stakeholders, and different institutional incentives. Skills transfer requires contextualisation that I don't always budget time for.
And I still struggle with the hardest question in humanitarian data work: when is "good enough" actually good enough? The tension between statistical rigour and operational urgency is real, and I have not resolved it. I have only learned to name it honestly and let the operational context — not my analytical preferences — determine the answer.
The Platforms Change
The platforms change. Excel gave way to Power BI. KoboToolbox replaced paper forms. Legacy disaster databases are being replaced by sovereign, API-ready national systems. The next generation will use AI agents and automated analytical pipelines.
The principles don't change. Build for the worst conditions. Invest in coordination before technology. Govern before you collect. Assess before you deploy. Build for departure. Train the trainers.
And the most important principle is the one the sector keeps forgetting: build for departure. Because the measure of a data platform is not what it produces while you're there. It's what it produces after you've gone.
Continue Reading
Building Disaster Data Systems That Governments Can Own: Lessons from 10 Years in Humanitarian Information Management
A flood vulnerability analysis I designed died quietly two years after I left — the trained staff member moved on, the dashboard stopped refreshing, and the analytical capability that informed life-saving decisions disappeared. The hardest lesson from a decade of building these platforms isn't technical. It's institutional.
Read more →
The Politics of Humanitarian Data Infrastructure: Who Owns the System When Everyone Walks Away?
I wrote the email at 11am. It went to over 115 organisations — UN clusters, NGOs, working groups — telling them the nationwide humanitarian reporting platform was suspended immediately. Afghanistan in 2025 was a stress test that revealed a system-wide architectural flaw: nobody owns continuity.
Read more →
The Data Ecosystem Maturity Assessment: A Practitioner's Guide to Diagnosing National Disaster Data Readiness
In my first week at a UN agency headquarters, I asked: "How many data systems does this Division use?" The answer took three weeks to assemble. That experience of mapping before building became the foundation for every data system project since. A maturity assessment is not a delay — it is the investment that ensures the system you build is the system that survives.
Read more →