
The Data Ecosystem Maturity Assessment: A Practitioner's Guide to Diagnosing National Disaster Data Readiness

By Alex Nwoko

I was hired by a UN agency's headquarters division to audit and redesign their crisis information management architecture. In my first week, I asked a straightforward question: "How many data systems does this Division use?" The answer took three weeks to assemble. Not because people were uncooperative, but because nobody had a complete picture. Incident monitoring lived in one platform. Knowledge management lived in another. Situation reports came from a third. Country alerts from a fourth. Each system solved a specific problem well, but they had never been mapped as an ecosystem — the result was duplication, gaps, and interoperability failures that no single system owner could see.

That experience of mapping before building became the foundation for every data system project I have taken on since. Across data ecosystem assessments in multiple contexts, one lesson stands above the rest: a maturity assessment is not a delay. It is the investment that ensures the system you build is the system that survives.

This post is the practitioner's guide I wish I had when I started — grounded in the DEMA framework developed by UNDRR and UNDP, and informed by what I have seen go wrong when the assessment step is skipped.

Why Assess Before You Build

The humanitarian and disaster risk reduction (DRR) sectors have a pattern: identify a data gap, deploy a technology solution, train users, and move on. The maturity assessment step — understanding the institutional, technical, and human landscape before choosing a technology — is frequently skipped because it feels like overhead. It is not overhead. It is the most consequential phase of any data system deployment.

Without a maturity assessment, you risk deploying technology that the institution cannot sustain, producing poor data faster with more attractive formatting, and missing governance gaps that will kill the system after the project cycle ends. I have seen all three — sometimes in the same deployment.

The UNDP-UNDRR Data and Digital Maturity for Disaster Risk Reduction working paper provides the theoretical foundation. The DEMA framework operationalises it into a structured, facilitated self-assessment that countries can own. What follows is how it works in practice.

The Five Dimensions

The DEMA framework evaluates a national disaster data ecosystem across five interconnected dimensions. Each has subdimensions with specific indicators scored against a five-phase maturity scale — from Phase 1 (incomplete, ad hoc) through Phase 3 (managed and defined) to Phase 5 (state of the art, transformative). The framework is diagnostic, not punitive — it is designed to support reflection and identify concrete actions, not to rank countries.
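
To make that structure concrete, here is a minimal sketch of how scores might be captured in code. The subdimension names and the labels for the intermediate phases are illustrative placeholders, not the official DEMA template.

```python
# A minimal sketch of the DEMA scoring structure: each subdimension of a
# dimension is scored on the five-phase maturity scale. Names and the
# intermediate phase labels are illustrative, not the official template.
from dataclasses import dataclass

PHASES = {
    1: "incomplete, ad hoc",
    2: "(intermediate)",  # labels for Phases 2 and 4 omitted; see the framework
    3: "managed and defined",
    4: "(intermediate)",
    5: "state of the art, transformative",
}

@dataclass
class SubdimensionScore:
    dimension: str     # e.g. "Actors and Roles"
    subdimension: str  # e.g. "Role definition"
    phase: int         # 1..5 on the maturity scale

scores = [
    SubdimensionScore("Actors and Roles", "Role definition", 2),
    SubdimensionScore("Data Infrastructure", "Technical interoperability", 1),
]
for s in scores:
    print(f"{s.dimension} / {s.subdimension}: Phase {s.phase} ({PHASES[s.phase]})")
```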

Dimension 1: Actors and Roles. This dimension maps who participates in the data ecosystem and whether their roles are understood. The key actors are data producers (national disaster management authorities (NDMAs), meteorological services, sectoral ministries), data users (planners, policy-makers, humanitarian coordinators), and intermediaries (statistical offices, UN agencies, research institutions). In every ecosystem assessment I have conducted, the same pattern emerges: actors are identifiable, but their roles in the data production chain — who collects, who validates, who publishes, who certifies — are either undefined or informally negotiated. This is the most common Phase 2 finding: roles are recognised but reactive, dependent on personal relationships rather than institutional mandates.

The G-DRSF provides the reference architecture for these roles, particularly the relationship between the NDMA (operational data collection) and the National Statistical Office (NSO, statistical certification). Where this relationship is formalised, the ecosystem is resilient. Where it depends on individuals, it is fragile.

Dimension 2: Data Supply. Data supply assesses the quality of available disaster data — its accessibility, relevance, accuracy, timeliness, and clarity. This is where the gap between what countries report and what is actually usable becomes visible. I have reviewed national disaster databases where completeness rates for mandatory fields — hazard type, date, administrative geography code, mortality, affected population — fell below 60%. Records where mortality exceeded affected population. Events recorded without valid p-codes aligned with OCHA Common Operational Datasets. Hazard classifications that shifted terminology between reporting years, blocking trend analysis.

The quality problems are not random. They concentrate in specific time periods (election years, funding transitions), specific geographies (remote provinces with weaker NDMA capacity), and specific hazard types (slow-onset events like drought and coastal erosion are consistently under-recorded compared to rapid-onset events like floods and earthquakes).
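
To ground this, here is a hedged sketch of what these record-level checks can look like in practice. The file names and column names are assumptions, not a real schema; adapt them to your own database export.

```python
# Hedged sketch: record-level quality checks on a hypothetical CSV export.
# File names and column names are assumptions; adapt them to your database.
import pandas as pd

MANDATORY = ["hazard_type", "event_date", "admin_pcode", "mortality",
             "affected_population"]

records = pd.read_csv("disaster_events.csv")

# Completeness: share of records with every mandatory field populated.
complete = records[MANDATORY].notna().all(axis=1)
print(f"Completeness on mandatory fields: {complete.mean():.0%}")

# Internal consistency: mortality should never exceed affected population.
bad = records["mortality"] > records["affected_population"]
print(f"{bad.sum()} records where mortality exceeds affected population")

# Geographic validity: p-codes should resolve against the OCHA Common
# Operational Datasets (loaded here from a hypothetical local extract).
cod = set(pd.read_csv("cod_admin_boundaries.csv")["admin_pcode"])
print(f"{(~records['admin_pcode'].isin(cod)).sum()} records with unrecognised p-codes")

# Where do the gaps concentrate? Completeness by year and by hazard type.
year = pd.to_datetime(records["event_date"], errors="coerce").dt.year
print(complete.groupby(year).mean())
print(complete.groupby(records["hazard_type"]).mean())
```

None of this requires heavy tooling. The point is that the checks are explicit, repeatable, and runnable before every reporting cycle, rather than discovered during one.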

Dimension 3: Data Demand. This is the dimension most assessments neglect entirely — and the one that determines whether a data system is actually used. Data demand captures the applications and use cases the data is meant to serve: Sendai Framework reporting, SDG indicator computation, Loss and Damage Fund evidence requirements, early warning triggers, anticipatory action thresholds, national DRR strategy development, and climate adaptation planning.

The critical diagnostic question is whether supply meets demand. In my experience, the answer is almost always no — but not for the reasons people assume. The data gap is rarely about volume. It is about format, disaggregation, and interoperability. Countries often have substantial disaster data, but it is locked in formats (paper records, isolated spreadsheets, legacy databases) that cannot serve the analytical and reporting demands now placed on it by the Sendai Framework Monitor, the Belém Adaptation Indicators, and the Loss and Damage Fund.
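
As a hedged illustration of what that unlocking work involves, assuming a hypothetical legacy workbook and column names:

```python
# Hedged sketch: lifting a legacy spreadsheet into a minimal shared schema.
# The file, sheet, and column names are hypothetical; the point is the
# explicit mapping step, which is where disaggregation decisions get made.
import pandas as pd

legacy = pd.read_excel("ndma_losses_archive.xlsx", sheet_name="Losses")
standard = legacy.rename(columns={
    "Type of Event": "hazard_type",
    "Date": "event_date",
    "District": "admin_name",   # still needs a name-to-p-code lookup step
    "Deaths": "mortality",
    "People Affected": "affected_population",
})
standard.to_csv("losses_standardised.csv", index=False)
```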

Dimension 4: Data Infrastructure. Data infrastructure covers the institutional, physical, and digital means for storing, sharing, and consuming data — from individual laptops to organisation-specific archives to online information management systems and geospatial data-sharing platforms.

The key subdimensions are technical interoperability (can systems exchange data programmatically?) and operationalised common standards (are shared codes, schemas, and formats in use?). A deployment like DELTA Resilience, for example, depends on API-driven data exchange with meteorological services and statistical offices. For countries where the NDMA's primary data tool is a standalone spreadsheet on a single staff member's laptop — and I have seen this in more countries than I expected — the infrastructure gap is not about purchasing servers. It is about institutional architecture: where data lives, who controls access, and what happens when that staff member leaves.
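
What API-driven exchange means in practice is roughly this. The endpoint, parameters, and field names below are hypothetical placeholders, not a real service:

```python
# Hedged sketch of API-driven exchange: pulling observations from a
# meteorological service over a documented endpoint instead of receiving
# spreadsheets by email. The URL, parameters, and fields are hypothetical.
import requests

resp = requests.get(
    "https://api.met-service.example.gov/v1/observations",
    params={"from": "2026-01-01", "to": "2026-01-31", "format": "json"},
    timeout=30,
)
resp.raise_for_status()
for obs in resp.json()["observations"]:
    print(obs["station_id"], obs["timestamp"], obs["rainfall_mm"])
```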

A common failure mode is assuming cloud hosting solves everything. Cloud hosting solves the hardware problem but raises data sovereignty concerns. Hybrid models — cloud compute with local storage — are often the pragmatic answer.

Dimension 5: Data Ecosystem Governance. Governance determines whether the ecosystem holds together when external support ends. It covers policies and standards (does a national data strategy exist? are common data standards mandated?), dedicated budget (is disaster data funded from national budget or entirely donor-dependent?), collaboration and inclusion (are data-sharing agreements formalised between NDMA-NSO, NDMA-meteorological service, NDMA-sectoral ministries?), capacity (are human skills being built and retained?), and governance ethics and trust (are there protocols for privacy, responsible data use, and accountability?).

In my experience, the governance dimension is the strongest predictor of system survival. I have seen technically sophisticated platforms fail because there was no legal mandate for data collection, no MoU between the NDMA and NSO, and no data-sharing agreement with the meteorological service. Conversely, I have seen basic systems survive for years because the governance architecture was sound — roles were assigned, budgets were allocated, and the data pipeline did not depend on any single person or organisation.

The distinction between de jure governance (what the law says) and de facto governance (what actually happens) is critical. Assess both.

The Data Quality Assessment Tool

Alongside the DEMA, UNDRR has developed a complementary Data Quality Assessment Tool that evaluates the quality of specific data streams — hazardous event data, disaster event data, and losses and damages data — against four quality criteria, each scored on the same five-phase maturity scale.

Accuracy: Are events verified through triangulation of multiple authoritative sources, or recorded with frequent errors and no verification process?

Completeness: Are all critical fields populated — temporal, spatial, technical characteristics, triggers, cascades, source — or are records patchy with key information missing?

Consistency: Are events classified using controlled vocabularies and standardised formats, or do terminology and coding shift between time periods and data sources?

Interoperability: Are hazardous event data and loss/impact databases linked through shared codes, APIs, or schemas — or do they exist in incompatible silos?

The Data Quality Assessment Tool complements the DEMA by drilling into the data itself rather than the ecosystem that produces it. The DEMA tells you whether the institutions, infrastructure, and governance are in place. The quality tool tells you whether the data those institutions produce is actually fit for purpose. Both are needed. A mature ecosystem can still produce poor data if quality assurance processes are weak. Good data can still be unusable if the ecosystem cannot share, validate, or publish it.
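
As one small example of the consistency criterion, here is a sketch of a check against a controlled hazard vocabulary. The vocabulary shown is illustrative; in practice you would load the agreed national or international classification.

```python
# Hedged sketch of a consistency check against a controlled hazard vocabulary.
# The vocabulary shown is illustrative; in practice you would load the agreed
# national or international classification.
HAZARD_VOCAB = {"flood", "drought", "earthquake", "tropical cyclone",
                "coastal erosion"}

def off_vocabulary(labels):
    """Return recorded labels that fall outside the controlled vocabulary."""
    return sorted({label.strip().lower() for label in labels} - HAZARD_VOCAB)

print(off_vocabulary(["Flood", "flash flood", "Drought", "riverine flood"]))
# ['flash flood', 'riverine flood'] -- candidates for re-mapping, not errors per se
```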

Running the Assessment: The DEMA Process

The DEMA is designed as a facilitated self-assessment — owned by national actors, not conducted on them. The process follows four steps (separate from the five maturity phases the scores describe):

Step 1: Desk research. Review existing risk data availability, stakeholder mapping, previous assessments, data governance and policy instruments, and current platforms and tools. This gives the facilitator an initial picture of the ecosystem before engaging stakeholders directly.

Step 2: Surveys and interviews. Structured engagement with all actors in the ecosystem — data producers, users, and intermediaries. This ensures all actors are identified, gives an initial indication of maturity levels, and surfaces themes for deeper discussion.

Step 3: Multi-stakeholder workshop. A facilitated workshop bringing all stakeholders together to discuss the current state, agree on maturity scores, and identify short-, medium-, and long-term actions to advance to the next maturity phase. This is where ownership is built — the scores and action plan are co-created, not imposed.

Step 4: Reporting and action plan. A final report with maturity scores, findings, and country-specific, action-oriented recommendations. The action plan assigns stakeholders to specific activities with agreed timelines, reinforcing national ownership and institutional memory.

For complex ecosystems, the full process takes six to ten weeks, including preparation and reporting.

The Assessment That Saves the System

A maturity assessment is the single most consequential deliverable in a DELTA Resilience deployment. It prevents mismatched system designs, identifies governance gaps before they become fatal, quantifies training and migration needs, and — critically — builds the national ownership that determines whether the system survives its creator.

The DEMA is not a delay. It is the foundation that ensures the system you build is the system that lasts.
