Technical Deep Dive · 11 min read · April 2026

Measuring Joint Response for Cash Transfer Programmes — A New Way of Using Humanitarian Meta-Data

By Alex Nwoko

In Ethiopia, the Cash Working Group coordinated an impressive multi-purpose cash response that, in 2022 and 2023, reached tens of thousands of beneficiaries across multiple regions through a dozen-plus implementing partners. But here is the challenge: each Cash Working Group partner ran its own post-distribution monitoring (PDM). Each one used a slightly different questionnaire, a slightly different sampling strategy, a slightly different definition of "satisfaction" or "adequacy" or "market access."

When the Cash Working Group leadership asked the most basic possible question — "is our collective cash programming working?" — the answer was structurally impossible to give. Not because the data didn't exist, but because it existed in twelve incompatible silos that couldn't be combined without weeks of harmonisation work that nobody was funded to do.

This is the central problem in inter-agency cash coordination. Every individual partner produces good evidence about its own programme. The system as a whole produces no evidence about itself. And the donors, the government, and the affected populations all need answers about the system, not just the parts.

I've been doing inter-agency PDM meta-analysis work across cash coordination contexts for years now — most extensively in Afghanistan and Ethiopia. Across every one of those engagements, the same lesson keeps surfacing: meta-analysis isn't a statistics exercise. It's a governance intervention disguised as one.

Why Individual PDMs Don't Add Up

Take five partner PDMs from a typical inter-agency cash response.

Partner A surveys 400 beneficiaries with a 35-question instrument focused on transfer adequacy. Partner B surveys 1,200 with an 80-question instrument that includes detailed expenditure tracking. Partner C runs phone surveys only. Partner D uses face-to-face. Partner E weights its sample by household size; the others don't.

Each survey is internally valid. Each one tells you something true about its partner's programme. None of them, individually or summed, tells you whether the inter-agency response is working.

The reasons are technical:

Different denominators. "Beneficiary" means different things across partners — sometimes the head of household, sometimes everyone in the household, sometimes the registered recipient.

Different question wording. "Did the transfer meet your basic needs?" yields different answers than "Was the transfer amount sufficient?" Both questions appear, in different forms, across partner instruments.

Different scale anchors. A 5-point Likert satisfaction scale isn't arithmetically comparable to a 4-point scale, and dichotomous yes/no answers can't be averaged with either.

Different sampling frames. Partner A samples randomly within distribution lists. Partner B samples by geography. Partner C samples by enumerator convenience.

Different recall windows. "In the last 30 days" vs "since the most recent distribution" produce structurally different responses about the same underlying behaviour.

Aggregate across these incompatibilities and you don't get a richer picture. You get noise.

A Seven-Pillar Meta-Analysis Framework

The framework I've developed for PDM meta-analysis in inter-agency multi-purpose cash coordination organises the analysis into seven pillars. Each pillar is defined narrowly enough that partner PDMs can be mapped to it cleanly, and broadly enough to capture the operationally meaningful dimensions of cash performance.

Pillar 1 — Programme Delivery and Beneficiary Profile. Targeting mechanism, registration process, delivery modality, transfer mechanism, timeliness, perceived fairness. This is the operational hygiene layer. If partners are targeting different populations or distributing through different rails, the rest of the analysis has to control for it.

Pillar 2 — Satisfaction and Adequacy. Satisfaction with transfer value, modality, and overall assistance, with explanatory feedback. This is where harmonisation work pays off most — Likert scales can be normalised to a common 0-100 index when you have the original variance structure.

Pillar 3 — Cash Utilisation and Markets. Expenditure patterns, ability to meet basic needs, market access, price dynamics, constraints to cash use. The pillar that connects PDM data to the Minimum Expenditure Basket review process.

Pillar 4 — Outcomes and Perceived Impact. Beneficiary-reported outcomes on food security, dietary diversity, coping strategies, debt, health expenditures, education expenditures, shelter access, WASH access, livelihood recovery, household well-being. This is the layer where the question "is the cash actually changing lives?" gets answered.

Pillar 5 — Equity, Protection, and Safeguarding. Disaggregated analysis by sex, age, disability, displacement status, and vulnerability characteristics. Protection risks, including sexual exploitation and abuse (SEA) and sexual harassment (SH) considerations. This pillar is structurally hard because most partner PDMs disaggregate inconsistently or not at all.

Pillar 6 — Accountability and Participation. Information access, complaints and response mechanisms, trust in the assistance, community engagement. The pillar most often skipped in standard PDM, and most consequential for programme legitimacy.

Pillar 7 — Cross-Analysis and Learning. Comparative analysis across partners and regions. Identification of patterns, divergences, good practices, and systemic constraints. This is where the meta-analysis adds value the individual PDMs can't.

The Real Work Is Harmonisation

When I scope an inter-agency meta-analysis, I budget roughly 40% of the total effort for harmonisation. People who haven't done this work assume the time goes to analysis. It doesn't. The analysis is the easy part.

Harmonisation means:

Variable mapping. For every question in every partner PDM, identify which framework pillar it belongs to and which canonical indicator it operationalises. Build a master codebook that maps partner-specific variables to harmonised analytical variables.
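
To make that concrete, here is a minimal sketch of what a master codebook can look like; every partner name, variable name, and mapping below is invented for illustration:

    # master_codebook.py -- illustrative only; partner variable names are invented.
    # Each row maps one partner-specific question to a harmonised indicator
    # and to the framework pillar it operationalises.
    import pandas as pd

    codebook = pd.DataFrame([
        # (partner, partner_var, canonical_indicator, pillar, response_type)
        ("partner_a", "q12_amount_enough",  "transfer_adequacy", 2, "yesno"),
        ("partner_b", "sec4_adequacy_lik5", "transfer_adequacy", 2, "likert5"),
        ("partner_c", "adq_4pt",            "transfer_adequacy", 2, "likert4"),
        ("partner_a", "q31_market_reach",   "market_access",     3, "yesno"),
    ], columns=["partner", "partner_var", "canonical_indicator",
                "pillar", "response_type"])

    def lookup(partner: str, var: str) -> dict:
        """Return the harmonised mapping for one partner variable."""
        row = codebook[(codebook.partner == partner) & (codebook.partner_var == var)]
        if row.empty:
            raise KeyError(f"{partner}:{var} is not in the codebook -- map it first")
        return row.iloc[0].to_dict()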

Standardisation of coding. Recode partner-specific response options into a common scheme. Yes/No becomes 1/0. Likert becomes 0-100 normalised. Categorical becomes a defined ontology with stable labels.
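
A sketch of what those recoding rules look like in code; the percent-of-maximum rescaling shown for Likert items is one common choice, not the only defensible one:

    # recode.py -- illustrative recoding rules; response labels are invented.

    YESNO = {"yes": 1, "no": 0}          # dichotomous items become 1/0

    def likert_to_index(value: int, scale_max: int) -> float:
        """Rescale a 1..scale_max Likert response to a 0-100 index
        (percent of the maximum possible score)."""
        if not 1 <= value <= scale_max:
            raise ValueError(f"{value} is outside 1..{scale_max}")
        return 100.0 * (value - 1) / (scale_max - 1)

    assert YESNO["yes"] == 1
    assert likert_to_index(5, 5) == 100.0                  # top of a 5-point scale
    assert likert_to_index(3, 4) != likert_to_index(3, 5)  # same digit, different meaning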

Geographic reconciliation. Partner PDMs use partner-specific geographic codes. Map everything to OCHA Common Operational Datasets and admin-2 (woreda) p-codes. This single step takes a week of focused work and is the highest-impact data-quality intervention in the whole pipeline.
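
A sketch of the join, assuming a gazetteer extracted from the Common Operational Datasets; the file names, column names, and alias spellings below are hypothetical:

    # geo_reconcile.py -- join partner records to admin-2 (woreda) p-codes.
    # 'cod_admin2.csv' stands in for a lookup built from the OCHA CODs;
    # the spellings in 'aliases' are invented examples.
    import pandas as pd

    gazetteer = pd.read_csv("cod_admin2.csv")    # columns: admin2_name, admin2_pcode
    records = pd.read_csv("partner_b_pdm.csv")   # column: woreda (free text)

    # Normalise free-text names before joining: case, whitespace, known aliases.
    aliases = {"nazret": "adama", "adama town": "adama"}
    records["woreda_clean"] = (records["woreda"].str.strip().str.lower()
                               .replace(aliases))
    gazetteer["admin2_clean"] = gazetteer["admin2_name"].str.strip().str.lower()

    merged = records.merge(gazetteer, left_on="woreda_clean",
                           right_on="admin2_clean", how="left")
    print(f"{merged['admin2_pcode'].isna().sum()} records need manual p-code review")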

Temporal alignment. PDMs from different distribution rounds, different fiscal periods, different recall windows have to be aligned to a comparable analytical timeframe.
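
For instance, a sketch that tags each record with the recall window it actually covers and the analytical quarter it belongs to (column names are invented):

    # temporal_align.py -- map heterogeneous PDM dates to a common timeframe.
    import pandas as pd

    df = pd.read_csv("harmonised_pdm.csv",
                     parse_dates=["interview_date", "distribution_date"])

    # The recall window each interview actually covers, in days.
    df["recall_days"] = (df["interview_date"] - df["distribution_date"]).dt.days

    # Flag records whose window is too long to compare with "last 30 days" items.
    df["comparable_30d"] = df["recall_days"].between(0, 45)

    # Assign every record to an analytical quarter for cross-partner comparison.
    df["analysis_quarter"] = df["interview_date"].dt.to_period("Q").astype(str)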

Quality screening. Records where reported mortality exceeds the affected population. Records with impossible values (households of 47 people, negative transfer amounts). Duplicates across partner submissions. The cleaning is less interesting than the analysis, but it determines whether the analysis can stand.
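
A sketch of the screening pass, with invented column names and thresholds:

    # quality_screen.py -- drop impossible values and duplicates before analysis.
    import pandas as pd

    df = pd.read_csv("harmonised_pdm.csv")
    before = len(df)

    df = df[df["hh_size"].between(1, 30)]    # the 47-person household fails here
    df = df[df["transfer_amount"] > 0]       # negative transfer amounts fail here

    # Duplicates across partner submissions: same respondent, same round.
    df = df.drop_duplicates(subset=["respondent_id", "distribution_round"])

    print(f"screening dropped {before - len(df)} of {before} records")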

Reproducibility scripts. Every harmonisation step gets coded as a reproducible Python or R script with version control. The next analyst inheriting the dataset has to be able to re-run the pipeline end-to-end.
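
A minimal sketch of such a runner; the step file names are illustrative, and in practice the whole directory lives under version control:

    # pipeline.py -- run the harmonisation end-to-end, failing fast and loudly,
    # so the next analyst can reproduce the dataset from raw partner submissions.
    from pathlib import Path
    import subprocess

    STEPS = [
        "01_variable_mapping.py",
        "02_recode.py",
        "03_geo_reconcile.py",
        "04_temporal_align.py",
        "05_quality_screen.py",
    ]

    def run_pipeline(script_dir: str = "harmonisation") -> None:
        for step in STEPS:
            path = Path(script_dir) / step
            print(f"running {path}")
            subprocess.run(["python", str(path)], check=True)

    if __name__ == "__main__":
        run_pipeline()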

When harmonisation is done well, the analytical layer becomes almost mechanical. When it's done poorly, no amount of analytical sophistication recovers the data.

Meta-Analysis as Governance

The technical work above describes the visible product. The hidden product is governance.

Inter-agency PDM meta-analysis only works when partners agree to:

  • Share their raw PDM datasets (not just summary findings)
  • Adopt a harmonised reporting calendar so the data arrives in a usable window
  • Use a shared codebook for at least the framework's core indicators
  • Acknowledge comparative findings even when their own programme doesn't score well
  • Fund the harmonisation work as a recurring coordination cost, not a one-off

Each of those agreements is a governance commitment. In every Cash Working Group I've supported, securing them required formal coordination-body endorsement, partner-level technical review meetings, donor-level briefings on the limits of what comparative analysis can show, and clear written boundaries on what is and isn't in scope.

The meta-analysis report itself is a side effect of that governance work. The bigger product is the agreement to do it again next quarter, with a slightly tighter framework, slightly cleaner partner data, and slightly more decision-support value to the Cash Working Group as a body.

What This Approach Unlocks

When inter-agency cash meta-analysis is operationalised properly, four things become possible that single-partner PDM can't deliver.

System-level performance benchmarking. Cross-partner comparison on harmonised indicators. Which partners are achieving higher satisfaction rates with smaller transfer values, and what can the rest learn? Which geographies show systematically lower outcomes, and is that a partner effect or a context effect?
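
As an illustration, once the data is harmonised this becomes a few lines of analysis; the column names here are invented:

    # benchmark.py -- cross-partner comparison on harmonised indicators.
    import pandas as pd

    df = pd.read_csv("harmonised_pdm.csv")

    # Mean satisfaction index (0-100) and median transfer value per partner:
    # partners with high satisfaction at low transfer values are worth studying.
    league = (df.groupby("partner")
                .agg(satisfaction=("satisfaction_idx", "mean"),
                     transfer_value=("transfer_amount", "median"),
                     n=("respondent_id", "size"))
                .sort_values("satisfaction", ascending=False)
                .round(1))
    print(league)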

Equity audits at the response level. Disaggregated outcomes by sex, age, disability, and displacement status across the entire response. Where individual partners may be reaching equity targets, the system as a whole may be missing them — or vice versa.
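
That divergence is worth a toy example. Below, each partner shows no satisfaction gap between displaced and host-community respondents, yet the pooled response shows a gap of roughly 21 points, because displacement status is correlated with which partner reached the household. All numbers are invented:

    # equity_gap.py -- partner-level and response-level equity can disagree.
    import pandas as pd

    rows = [
        # (partner, status, n, mean satisfaction index 0-100)
        ("A", "displaced", 360, 80), ("A", "host",   40, 80),
        ("B", "displaced", 120, 50), ("B", "host", 1080, 50),
    ]
    df = pd.DataFrame(rows, columns=["partner", "status", "n", "sat"])

    def wmean(g):
        return (g["sat"] * g["n"]).sum() / g["n"].sum()

    print(df.groupby(["partner", "status"]).apply(wmean))  # zero gap per partner
    print(df.groupby("status").apply(wmean))               # ~21-point gap pooled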

Evidence for the Minimum Expenditure Basket review. Real expenditure patterns from harmonised data, capable of feeding the MEB taskforce with empirical evidence rather than partner-by-partner anecdote.
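
A sketch of the expenditure-share computation that feeds that review; the expenditure columns are invented:

    # meb_inputs.py -- average budget shares from harmonised expenditure data.
    import pandas as pd

    df = pd.read_csv("harmonised_pdm.csv")
    categories = ["exp_food", "exp_rent", "exp_health", "exp_education", "exp_other"]

    # Each household's spending converted to budget shares, then averaged:
    # the result is directly comparable against current MEB component weights.
    shares = df[categories].div(df[categories].sum(axis=1), axis=0)
    print(shares.mean().round(3))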

Donor-quality evidence on collective contribution. When a donor asks the Cash Working Group "what did your USD 50 million achieve?", the answer is no longer "here are 12 partner reports". It's a single integrated finding with confidence bands, methodology disclosure, and reproducible underlying data.
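
A minimal sketch of what a pooled finding with confidence bands can mean in practice, using a normal-approximation interval and invented partner counts; a real pooling would also account for design effects and between-partner heterogeneity:

    # pooled_estimate.py -- pooled "transfer met basic needs" rate with a 95% CI.
    import math

    partners = [  # (respondents answering yes, sample size) -- invented
        (312, 400), (840, 1200), (205, 310), (390, 600), (156, 240),
    ]

    yes = sum(y for y, n in partners)
    n = sum(n for _, n in partners)
    p = yes / n
    se = math.sqrt(p * (1 - p) / n)   # normal approximation to the binomial
    lo, hi = p - 1.96 * se, p + 1.96 * se
    print(f"pooled rate: {p:.1%} (95% CI {lo:.1%} to {hi:.1%}, n={n})")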

The Pattern Generalises

This isn't a cash-specific problem. Any sector running multi-partner programming with partner-specific monitoring has the same fragmentation. Health, education, protection, WASH, food security — every cluster generates more individual partner data than aggregate response data.

Inter-agency meta-analysis is the bridge. The seven-pillar approach can be adapted to any sector by swapping the pillar definitions for sector-specific outcome dimensions. The harmonisation discipline stays the same. The governance work stays the same. The reproducible analytical pipeline stays the same.

What changes is the substantive question. For cash, it's "is the cash transfer system working?" For nutrition, it's "is the multi-partner nutrition response moving the IPC needle?" For protection, it's "are the inter-agency referral pathways functioning?" The method is general; the question is sector-specific.

The point of meta-analysis is not just better evidence. It is to optimise the joint approach to responding to humanitarian and development needs, and to measuring results and gaps, with whole-of-system thinking. It's the institutional habit of asking system-level questions instead of partner-level ones — and building the data architecture that makes those questions answerable.
