The Reality Gap: Why Human Rights Technology Often Fails on the Ground

In the aftermath of the Arab Spring, I consulted on a project developing a mobile application for citizen journalists. The app was technically sophisticated: it let people record and edit video on their smartphones, and it shipped with built-in lessons on framing, narrative structure, and journalistic fundamentals such as citing and protecting sources. The Silicon Valley-based developers were excited about "empowering everyday people" to document historical events with journalistic rigor. Keep in mind this was before the rise of TikTok and YouTube Shorts, back when traditional media was still widely consumed and trusted.

In principle, it worked perfectly. In practice? The app failed.

It generated high-quality video files ranging from 500 MB to 3 GB per story package, far too large to transmit over the congested 3G networks and intermittent connections available in regions experiencing political upheaval. And that's before considering the prohibitive mobile bandwidth costs. Citizen journalists reverted to posting short, unedited clips directly to social media platforms that compressed the video for them. The app's sophisticated storytelling features went unused, while vital documentation happened through the simplest channels that could actually cope with the infrastructure bottlenecks.
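To see why those file sizes were fatal, it helps to do the arithmetic. A back-of-the-envelope sketch, assuming a hypothetical but plausible 1 Mbps sustained 3G uplink (real-world congested networks were often slower):

```python
# Back-of-the-envelope upload times for the app's story packages.
# The 1 Mbps uplink figure is an illustrative assumption, not a measurement.

def transfer_hours(file_gb: float, uplink_mbps: float) -> float:
    """Hours to upload `file_gb` gigabytes at `uplink_mbps` megabits per second."""
    bits = file_gb * 8 * 1000**3          # decimal GB -> bits
    seconds = bits / (uplink_mbps * 1000**2)
    return seconds / 3600

for size_gb in (0.5, 3.0):
    print(f"{size_gb} GB at 1 Mbps: {transfer_hours(size_gb, 1.0):.1f} hours")
# 0.5 GB at 1 Mbps: 1.1 hours
# 3.0 GB at 1 Mbps: 6.7 hours
```

Nearly seven hours of uninterrupted uplink for a single story package, on a connection that drops every few minutes, explains the retreat to short compressed clips better than any user survey could.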

This wasn't a one-off failure. In human rights tech, this sort of failure is common.

The Great Disconnect

A vast reality gap separates the people who build human rights technology from those who need to use it. We've created a situation where tools are designed in Silicon Valley, London, or Berlin for use in entirely different contexts, places where infrastructure, digital literacy, and everyday realities differ vastly.

While human rights defenders prioritize simplicity, familiarity, and reliable offline functionality, developers often focus on advanced features, perfect security models, and sophisticated workflows that assume stable infrastructure.

What we're left with is a graveyard of well-intentioned technologies that look impressive in demos but fail in the field.

In my decade of consulting with human rights organizations, I've seen this story play out repeatedly. Applications require constant internet connectivity in regions where connections are spotty at best. Tools generate large files without considering how users will transmit or store them. Software gets designed for high-end devices in contexts where users have older phones with limited processing power. Security updates require bandwidth many users simply can't access.

Some years ago, I worked with a cocoa importer who wanted to improve their sustainability monitoring across their West African supply chain. Their team understood the reality on the ground. Mobile signals in rural areas were either non-existent or prohibitively expensive. Instead of forcing a digital-first approach, their field inspectors gathered information using paper forms. The software solution processed these paper forms once inspectors returned to their offices, automatically extracting and aggregating sustainability metrics from the scanned documents. The system generated the quantitative data they needed for annual reports and year-over-year tracking.
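The "collect on paper, digitize at the office" pipeline above can be sketched in a few lines. This is a minimal illustration, not the importer's actual system: the form layout, field names, and helpers (`FormRecord`, `parse_form`, `yearly_totals`) are all hypothetical, and in a real deployment the `ocr_text` input would come from an OCR engine such as Tesseract run over the scanned forms.

```python
# Sketch of the scan-then-extract pipeline: parse OCR text from paper
# forms into structured records, then aggregate yearly totals for reports.
# Form layout and all names are hypothetical illustrations.

import re
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class FormRecord:
    farm_id: str
    year: int
    shade_trees: int  # example sustainability metric recorded on the form

FORM_PATTERN = re.compile(
    r"Farm:\s*(?P<farm>\S+)\s+Year:\s*(?P<year>\d{4})\s+ShadeTrees:\s*(?P<trees>\d+)"
)

def parse_form(ocr_text: str) -> FormRecord:
    """Extract structured fields from one scanned form's OCR output."""
    m = FORM_PATTERN.search(ocr_text)
    if m is None:
        raise ValueError("form did not match the expected layout")
    return FormRecord(m["farm"], int(m["year"]), int(m["trees"]))

def yearly_totals(records):
    """Aggregate per-year totals for annual reporting and trend tracking."""
    totals = defaultdict(int)
    for r in records:
        totals[r.year] += r.shade_trees
    return dict(totals)

scans = [
    "Farm: GH-014 Year: 2019 ShadeTrees: 120",
    "Farm: GH-022 Year: 2019 ShadeTrees: 85",
    "Farm: GH-014 Year: 2020 ShadeTrees: 140",
]
print(yearly_totals(parse_form(s) for s in scans))  # {2019: 205, 2020: 140}
```

The design point is where the complexity lives: all of it sits in the back office, after the scans arrive, so the field inspectors never touch software at all.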

The inspectors could focus on their work in the field without worrying about connectivity, while the importer got the digital sustainability data they needed. Not flashy, but it worked because it respected local constraints from the start.

What happens when these tools fail? Human rights defenders don't just abandon their work; they find creative workarounds. Activist collectives use shared email accounts as makeshift archives. WhatsApp groups become documentation databases (yes, sigh).

They're smart, they're adaptable! When your solution fails them (because you didn't understand their local constraints), they come up with a better one.

Better, because it actually works.