Opinion·8 min read·April 2026

The IM Coordination Trap

By Alex Nwoko

In Afghanistan, I once spent three months building what I believed was the perfect 5W reporting platform. It had standardised templates, dropdown-controlled p-codes, automated deduplication, real-time validation, role-based access controls, and a dashboard layer that turned partner submissions into instant cluster-level coverage maps. The technical work was sound. The architecture was rigorous. The product was beautiful.

It almost failed.

It didn't fail for technical reasons. It failed — almost — because three of the largest implementing partners in the country didn't want to submit their data through a centralised platform. They had political concerns about data ownership. They had legal concerns about beneficiary protection. They had institutional concerns about a single agency (mine) becoming the de facto information broker for the response. None of those concerns showed up in any technical specification. All of them threatened to make the platform irrelevant.

We rescued the system not by improving the software but by negotiating trust: a data governance protocol, agreed across eight clusters, that addressed every one of those concerns. Tiered access. Pseudonymisation rules. A formal escalation pathway for disputes. A clear commitment from funding partners that the platform was the cluster's, not the implementing organisation's. After that, the holdout partners came on board. The platform, ReportHub, went on to process over 259 partner reports per month, covering services to 2.28 million beneficiaries across 1,853 locations.
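To make "tiered access" and "pseudonymisation rules" concrete, here is a minimal sketch of how such a protocol can be enforced in code. The tier names, field names, and salt are illustrative assumptions, not ReportHub's actual schema; the point is that each access tier sees only its agreed fields, and partner identifiers are replaced with stable, non-reversible tokens below the highest tier.

```python
import hashlib

# Hypothetical tier rules: which fields each access tier may see.
# Tiers and field names are illustrative, not the real platform schema.
TIER_FIELDS = {
    "public":  {"province", "cluster", "beneficiaries"},
    "partner": {"province", "district", "cluster", "beneficiaries", "partner_id"},
    "admin":   {"province", "district", "cluster", "beneficiaries", "partner_id", "site_name"},
}

def pseudonymise(value, salt="per-response-secret"):
    """Replace an identifying value with a stable, non-reversible token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:10]

def view_for_tier(record, tier):
    """Return only the fields the tier may see, pseudonymising
    partner identifiers below the admin tier."""
    allowed = TIER_FIELDS[tier]
    out = {k: v for k, v in record.items() if k in allowed}
    if tier != "admin" and "partner_id" in out:
        out["partner_id"] = pseudonymise(out["partner_id"])
    return out

record = {"province": "Kabul", "district": "Paghman", "cluster": "WASH",
          "beneficiaries": 1200, "partner_id": "NGO-017", "site_name": "Clinic 3"}

print(view_for_tier(record, "public"))
print(view_for_tier(record, "partner"))
```

The design choice worth noticing is that the governance decision (who sees what) lives in one declarative table that partners can read and sign off on, separate from the code that applies it.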

The lesson, after a decade of building IM systems across six countries, is one I have to keep relearning: the technology is the easy part.

The Trap

The IM coordination trap is the assumption that better technology solves coordination problems. It rarely does. Coordination problems are political, not technical, and they require political solutions.

Three flavours of the trap show up consistently across operations.

Trap 1: The Data-Sharing Agreement That Never Gets Signed. A consortium identifies the need for shared monitoring. The technical team builds the platform. The partners agree in principle. Then the data-sharing agreement goes through legal review. Six months later, the agreement is still in draft. The platform is online but empty. Eventually it's shelved as "not adopted by partners" — when the actual problem was that nobody owned the agreement's political negotiation.

Trap 2: The Cluster Lead vs Partner Trust Deficit. A cluster lead commissions a dashboard to track partner performance. Partners interpret this — sometimes correctly — as a surveillance instrument. They report selectively or not at all. The dashboard becomes a monument to coverage gaps that exist in the data because partners are protecting themselves, not because the gaps exist in reality.

Trap 3: The Donor-Imposed Reporting Cycle That Doesn't Match Field Reality. A major donor specifies a quarterly reporting cycle with 20 indicators. Partner field teams spend three weeks of every quarter filling out reports rather than delivering programmes. The data is collected, aggregated, and reported up. Nobody downstream uses it. The reporting exists because the funding requires it, not because anyone needed the information.

In each case, the technology can be perfect and the coordination still fails. The failure is upstream of the technology.

Why the Trap Is So Persistent

Information managers are hired for technical skills. The job ad asks for Power BI, GIS, SQL, Python. The interview tests dashboard design and data architecture. Promotions reward visible technical product.

But the job actually requires political negotiation. Securing partner buy-in for data submission. Brokering data-sharing agreements. Defending the IM unit's neutrality when the cluster lead asks for partner-comparison products that risk making partners look bad. Pushing back on donor reporting requirements that don't serve operational needs.

None of those skills are in the job ad. None of them get tested in the interview. None of them produce visible technical artefacts. So they are systematically underweighted in how IM officers spend their time.

The result is an IM cadre that's technically over-skilled and politically under-prepared. We build excellent platforms in environments where the political ground is unstable, and we're surprised when the platforms don't take hold.

When I ran a country-wide IM capacity audit in Afghanistan, spanning more than 60 humanitarian organisations, the pattern was clear: most agencies had IM focal persons, but the weakest capacity was at the coordination level — not at the individual analyst level. The gap wasn't technical skill. It was the institutional muscle to coordinate analytical work across organisations — the political work that no amount of individual training fixes.

The Afghanistan Suspension as Case Study

The clearest example I have of the IM coordination trap was the Afghanistan platform suspension in 2025. The platform — a nationwide humanitarian reporting system serving over 115 partner organisations — was technically excellent. Its architecture was modern, its uptime was high, its data quality was rigorous, its training programme was comprehensive.

It went dark overnight when its sole donor froze funding.

The technology had no defence. The institutional architecture had no defence. The partners who depended on the platform had no advance notice and no alternative. The lead UN coordination agency distanced itself from the implementing organisation rather than fighting for the shared infrastructure. Every actor retreated into self-preservation. Nobody owned continuity, so nobody fought for it.

This is the IM coordination trap at its most consequential. The technical work was good. The political architecture — diversified funding, mandatory contingency protocols, formal continuity agreements, sovereign data governance — was missing. When the political ground shifted, the technology went with it.

The lesson generalises beyond Afghanistan. Any humanitarian data system that depends on a single donor, a single implementing partner, or a single political configuration is a system with a single point of failure. And the failure mode isn't technical — it's institutional.

What Gets You Out of the Trap

The escape from the coordination trap isn't better software. It's the boring institutional work that IM officers are not trained to do but that determines whether the software ever gets used.

Negotiate the data governance before you build the platform. Who owns the data? Who validates it? Who publishes it? What happens when partners disagree about a finding? Who can suspend a partner from the system? Get the answers in writing before a single line of code is written. The Afghanistan IM Capacity Assessment exercise I led demonstrated, painfully, that platforms built without this groundwork hit walls within months.

Map the political stakeholders before you map the data sources. For every dataset you want to consume, identify the political actor who controls access. Get explicit, written, time-bounded permissions before assuming the data will flow. The data ecosystem maturity assessment framework bakes this in as Dimension 1 (Actors and Roles) for exactly this reason.
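The "explicit, written, time-bounded permissions" above can be made operational as a simple register that the platform checks before any dataset flows. A minimal sketch, with hypothetical dataset and owner names:

```python
from datetime import date

# Illustrative permission register: every dataset is paired with the political
# actor who controls access and an explicit, time-bounded, written permission.
# All entries here are hypothetical examples.
PERMISSIONS = [
    {"dataset": "health_5w", "owner": "Health Cluster",
     "written": True, "expires": date(2026, 12, 31)},
    {"dataset": "idp_registry", "owner": "National Statistics Office",
     "written": False, "expires": None},
]

def ingest_blockers(register, today):
    """Return datasets that must NOT flow yet: permission is not in
    writing, has no expiry date, or has already expired."""
    blocked = []
    for p in register:
        if not p["written"] or p["expires"] is None or p["expires"] < today:
            blocked.append(p["dataset"])
    return blocked

print(ingest_blockers(PERMISSIONS, date(2026, 4, 1)))  # prints ['idp_registry']
```

The register does the political mapping; the code merely refuses to assume the data will flow until the mapping says so.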

Design the platform around the coordination mechanism, not the cluster lead. Cluster leads change. Coordination mechanisms persist. Build the platform as a shared asset of the coordination architecture, with governance that survives leadership turnover. The cash working group I supported in Ethiopia continues to work because the analytical framework belongs to the coordination mechanism, not to any single agency.

Build escalation pathways. When two partners disagree on a finding, what happens? When a donor asks for a product the partners don't support, what happens? When a government counterpart objects to a publication, what happens? Pre-agreed escalation pathways prevent every dispute from becoming an existential crisis.
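A pre-agreed escalation pathway can be written down as literally as a lookup table: each dispute type maps to an ordered ladder of forums, tried in sequence. The dispute types and forum names below are hypothetical examples, not a prescribed humanitarian architecture.

```python
# Hypothetical pre-agreed escalation ladders. Each dispute type has an
# ordered list of forums, exhausted in sequence before anything becomes
# an existential crisis. Names are illustrative.
ESCALATION = {
    "partner_vs_partner":   ["IM working group", "cluster coordination meeting",
                             "inter-cluster forum"],
    "donor_request":        ["cluster lead review", "partner consultation",
                             "HCT decision"],
    "government_objection": ["bilateral liaison", "access working group",
                             "HCT decision"],
}

def next_step(dispute_type, steps_taken):
    """Return the next forum for a dispute, or None once the ladder
    is exhausted."""
    ladder = ESCALATION.get(dispute_type, [])
    return ladder[steps_taken] if steps_taken < len(ladder) else None

print(next_step("donor_request", 0))  # prints cluster lead review
```

The value is not the code; it is that the ladder exists in writing before the first dispute, so no step has to be improvised under pressure.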

Ration the information products. Not every question needs a dashboard. Not every report needs to be quarterly. Cut the reporting burden to what's actually used. Less is more, almost always.

Diversify funding from the start. No coordination platform should depend on a single donor. The reserve mechanism, multi-donor pooled fund, or cost-sharing agreement has to exist on day one. Bolt-on diversification after a funding crisis is too late.

The Promotion Path That Doesn't Exist

Here's the structural fix the IM cadre needs but doesn't have: a promotion path that rewards political and institutional work as much as it rewards technical work.

Right now, an IM officer who builds a beautiful platform gets visibility, recognition, and the next assignment. An IM officer who spends three months negotiating a data governance MoU gets… nothing visible. The MoU is invisible until it's tested, at which point its value is enormous, but the IM officer who built it has long since moved on.

The fix is structural. Performance frameworks for IM officers should explicitly evaluate political and institutional outcomes — data-sharing agreements signed, partner trust scores, coordination platform survival past project close, donor diversification metrics. Until those metrics exist, the IM cadre will keep falling into the coordination trap, and the platforms will keep dying when the political ground shifts.

The technology really is the easy part. We just keep being surprised by it.
