Opinion / Technical · 10 min read · March 2026

The Future of Humanitarian IM is Agentic

By Alex Nwoko

I'm watching an analyst spend three weeks on what should be a two-day job.

She's sitting in a Nairobi office, working on a situation analysis for a new conflict-affected area. The task is straightforward: read reports from dozens of sources, classify the information by sector, identify key trends, cross-reference with historical data, synthesize into a single analytical product, and produce a situation report that senior management can use to make decisions.

Three weeks. In that time, she's manually searching databases, reading PDFs, categorizing information in spreadsheets, cross-checking dates and numbers, consolidating into a single narrative. The work is artisanal. It requires human judgment—she needs to assess source credibility, handle contradictions, extract meaning from messy data. But so much of it is structural work that a machine could do.

By the time the situation report is published, the situation has shifted. The analysis is technically complete but operationally obsolete.

I'm thinking: an AI agent could handle the structural work in hours. Not replace the analyst. Augment her. Do the heavy lifting of information processing so she can focus on judgment and synthesis.

The Manual Bottleneck

Situation analysis is the backbone of humanitarian decision-making. It's how leadership understands crises. What's happening on the ground? What are the humanitarian needs? What's causing them? How are they changing? What does the operation need to do?

Producing a good situation analysis requires reading widely, thinking across sectors, holding multiple perspectives at once, updating as new information arrives. It's a core competency.

But the process is artisanal. An analyst sits at a desk and does search-classify-synthesize work manually. She searches the OCHA situation reports database for similar contexts. She reads analysis from partner organizations. She pulls sector-specific data. She matches dates and locations. She reconciles contradictions. She builds a narrative.

This is cognitively intensive, which is why it requires a skilled analyst. But it's also repetitive and structural, which is why so much of it is ripe for automation.

In Afghanistan, producing a single multi-sectoral situation analysis took 2-3 weeks. In Somalia, it took 3-4 weeks. By the time it was published, the situation had shifted, new data had arrived, and the next analysis cycle was already behind schedule.

The humanitarian sector has tried to solve this with dashboards (which just aggregate data without adding synthesis), with templates (which structure the output but don't speed up the process), and with larger teams (which helps but hits diminishing returns fast).

None of these address the core bottleneck: information processing is slow because it's manual.

From AI Tools to AI Agents

The humanitarian sector has dabbled with AI. The DEEP platform for document classification. NLP for sentiment analysis. Predictive models for needs estimation. These are useful.

But these are AI tools, not AI agents. There's a critical difference.

A tool waits to be used. You point it at a document and it classifies it. You feed it text and it extracts sentiment. It's reactive. You summon it, it performs its function, it waits for the next summons.

An agent is autonomous. It has a goal and it acts toward that goal. It searches for information without being asked. It identifies gaps in its understanding and goes looking. It detects anomalies and flags them. It updates its outputs as new data arrives. It thinks about what information it needs and goes after it.

The difference is agency.

Current AI systems in humanitarian response are tools. A document classification model is useful for screening reports. NLP for sentiment analysis helps you understand population perception. But none of these do anything without a human initiating the action. You have to bring them a document, ask them a question, point them at data.

An agentic system would work differently. It would continuously monitor sources (humanitarian databases, news feeds, partner reports, social media signals). It would detect emerging patterns automatically. It would classify information as it arrives. It would identify contradictions and gaps. It would draft analytical products. It would flag anomalies that humans need to review.

The human would still be in the loop for judgment, contextual understanding, and quality assurance. But the information processing would be automated.
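To make the distinction concrete, here is a minimal sketch in Python. Every name in it (classify_document, Source, run_analysis_agent) is hypothetical, invented for illustration rather than drawn from any existing humanitarian system.

```python
import time
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

class Source:
    """Stand-in for a humanitarian database, news feed, or partner portal."""
    def fetch_new(self) -> list[Document]:
        return []  # a real implementation would poll an API or feed here

# The tool: reactive. It does nothing until someone hands it a document.
def classify_document(text: str) -> str:
    return "food_security" if "harvest" in text.lower() else "unclassified"

# The agent: a goal-directed loop. It goes looking, classifies what it finds,
# flags what it can't make sense of, and repeats without being asked.
def run_analysis_agent(sources: list[Source], poll_seconds: int = 3600) -> None:
    seen: set[str] = set()
    while True:
        for source in sources:
            for doc in source.fetch_new():
                if doc.doc_id in seen:
                    continue
                seen.add(doc.doc_id)
                label = classify_document(doc.text)  # same model a "tool" exposes
                if label == "unclassified":
                    print(f"flag for human review: {doc.doc_id}")
        time.sleep(poll_seconds)
```

Note that the model inside the loop can be exactly the same model the tool exposes. What changes is that the loop owns the initiative.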

Introducing AISA

AISA: Agentic Intersectoral Situational Analysis.

This is the concept I've been thinking about for the past year. What would it look like if we built an agentic system for situation analysis in humanitarian response?

Architecture: multiple specialized agents, each with a specific role.

Source monitor agents continuously scan humanitarian databases, news feeds, partner reports, and social media for information about the crisis. They're looking for new reports, updated statistics, emerging trends, signals of change.

Classification agents take incoming information and categorize it by sector (WASH, food security, health, protection, etc.), by operational relevance (critical for decision-making, contextual background, noise), by source credibility (which organizations are reliable? which sources are prone to bias?).
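As a sketch of what a classification agent's output might look like, here is one possible record structure. The field names, category values, and the example item are assumptions for illustration, not a schema that DEEP or any other existing platform uses.

```python
from dataclasses import dataclass
from enum import Enum

class Relevance(Enum):
    CRITICAL = "critical for decision-making"
    CONTEXT = "contextual background"
    NOISE = "noise"

@dataclass
class ClassifiedItem:
    source_org: str       # who published the report
    sector: str           # e.g. "WASH", "food_security", "health", "protection"
    relevance: Relevance
    credibility: float    # 0-1 score for the source, maintained by the team
    summary: str

# A purely illustrative record, not real reporting data.
item = ClassifiedItem(
    source_org="Partner NGO (example)",
    sector="food_security",
    relevance=Relevance.CRITICAL,
    credibility=0.8,
    summary="Sharp drop in market supply reported in one district (illustrative).",
)
```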

Cross-reference agents look for contradictions, temporal inconsistencies, and outliers. If two organizations report different numbers for the same metric, they flag it. If a trend contradicts historical patterns, they note it.
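A toy version of that cross-referencing logic might look like the function below. The 25 percent tolerance and the field names are arbitrary assumptions; a real agent would also weigh source credibility and reporting dates before raising a flag.

```python
from collections import defaultdict

def find_contradictions(items, tolerance=0.25):
    """items: dicts with 'metric', 'location', 'period', 'value', 'source_org'.
    Flags any metric where reported values for the same place and period
    diverge by more than the tolerance."""
    grouped = defaultdict(list)
    for item in items:
        grouped[(item["metric"], item["location"], item["period"])].append(item)
    flags = []
    for key, reports in grouped.items():
        values = [r["value"] for r in reports]
        if len(values) > 1 and max(values) > (1 + tolerance) * min(values):
            flags.append({"key": key, "reports": reports})
    return flags
```

The output is not a verdict on who is right. It is a short list of things a human should look at.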

Synthesis agents look across classifications and create intersectoral understanding. They identify how health impacts connect to food security, how displacement creates protection risks, how market disruption affects livelihood capacity.

Product generation agents draft analytical outputs—situation reports, trend analysis, early warning signals, critical updates—and flag them for human review.

All of these agents are in communication. They're not sequential. They work in parallel, updating each other, refining understanding as new information arrives.
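In code, that parallel, always-on arrangement could be sketched roughly as below. The class and coroutine names are placeholders for illustration; a real build would sit on an existing multi-agent orchestration framework rather than raw asyncio.

```python
import asyncio

class SharedSituationState:
    """State that every agent reads from and writes to."""
    def __init__(self):
        self.raw_items = []        # from source monitor agents
        self.classified = []       # from classification agents
        self.flags = []            # from cross-reference agents
        self.draft_products = []   # from synthesis / product generation agents

async def source_monitor(state: SharedSituationState):
    while True:
        # poll databases, feeds, partner reports; append anything new to raw_items
        await asyncio.sleep(600)

async def classifier(state: SharedSituationState):
    while True:
        # move items from raw_items to classified as they arrive
        await asyncio.sleep(60)

async def cross_referencer(state: SharedSituationState):
    while True:
        # compare classified items, append contradictions and outliers to flags
        await asyncio.sleep(60)

async def synthesizer(state: SharedSituationState):
    while True:
        # redraft the intersectoral picture from classified items and flags,
        # leaving drafts for the human analyst to review and approve
        await asyncio.sleep(3600)

async def main():
    state = SharedSituationState()
    await asyncio.gather(
        source_monitor(state),
        classifier(state),
        cross_referencer(state),
        synthesizer(state),
    )

# asyncio.run(main())  # runs indefinitely; the agents update shared state in parallel
```

The design choice that matters is the shared state: no agent waits for a pipeline stage to finish, and the picture is refined continuously as new information lands.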

The human analyst is positioned as a quality control point and a judgment arbiter. She reviews agent outputs, challenges them, adds context, makes sense of contradictions, and decides what gets published. The agents handle the structural work of information processing. The human handles the cognitive work of judgment.

The result: situation analysis that's updated daily, not monthly. Analysis that catches emerging trends in real-time. Analysis that's consistent and well-sourced. Analysis that's ready to support decisions.

Practically, this means 10x faster analytical throughput with consistent quality.

Why This Matters Now

The volume of humanitarian information is growing exponentially. More organizations, more reporting systems, more real-time data feeds, more beneficiary communication platforms. The amount of raw information available is increasing faster than our ability to process it.

At the same time, crises are becoming more complex. Climate-driven disasters interact with conflict, conflict creates displacement, displacement creates protection risks, all of this unfolds across multiple sectors simultaneously. Understanding a complex crisis requires holding many variables at once.

The analyst workforce can't scale to meet demand. You can't hire your way out of this problem. And even if you could, the human experience of sitting at a desk doing manual information processing is grinding.

Agentic systems aren't replacing analysts. They're multiplying their capacity. One analyst with AISA support can do the work of five analysts without it. That's not because the agents are smarter—it's because they handle the repetitive structural work and let the humans focus on judgment.

This matters now because the technology is ready. Large language models can read, understand, classify, and synthesize information. Multi-agent orchestration frameworks exist. We know how to build agentic systems. The missing piece is someone building it for humanitarian use.

Closing Provocation

The question isn't whether AI will transform humanitarian information management. It will. The question is whether we'll build the right kind of AI.

Will we build systems that augment human judgment or bypass it? Will we build for transparency and explainability or opacity? Will we build for the constraints of humanitarian work—sparse data, ethical complexity, political sensitivity—or will we import standard models that don't fit?

The next generation of humanitarian IM won't be dashboards and databases. It will be agentic systems that continuously work to make invisible information visible, that support human judgment with machine-powered information processing, that free analysts from the grinding work of manual synthesis so they can focus on meaning-making and decision support.

That's the infrastructure the humanitarian sector needs. That's what builders should be focusing on.
