The Security Paradox: When Good Advice Makes Human Rights Work Less Secure

Picture this (real!) example from the "Technology Tools in Human Rights" study, conducted by The Engine Room and funded by the Oak Foundation in 2016.

An NGO documenting human rights abuses was encouraged by security experts to switch from Windows to a free software operating system. The reasoning was sound - better security, less surveillance, more control. But there was a problem.

Their printer wouldn't work with the new system.

Staff started saving sensitive documents to USB sticks and printing them at local internet cafes. The USB sticks went missing, exposing far more sensitive data than the theoretical risks they'd been trying to avoid.

I call this the "security paradox" - when well-intentioned security advice actually creates new, often more dangerous vulnerabilities.

Why Does This Keep Happening?

The root cause isn't complicated: there's a massive disconnect between security experts (often in Europe or the US) and the daily realities of human rights defenders working in challenging contexts.

Valeria Umaña, who works with groups in Nicaragua, puts it bluntly in the Technology Tools in Human Rights report:

"For people in the countryside, the more apps they have, the more problems they can have because they often don't know how to use them."

When the pursuit of perfect security ignores imperfect realities, it actively endangers the very people it's meant to protect.

When security recommendations fail, vulnerable populations bear the risk. It's not the security consultant who faces danger when documentation leaks - it's the victims and witnesses who provided testimony.

Trust evaporates after a security recommendation backfires, making organizations resistant to all security advice, even the good stuff.

For chronically underfunded human rights organizations, investing precious time and money into systems that ultimately fail is devastating.

I've witnessed organizations abandon critical documentation work altogether after particularly traumatic security failures. That's not just a technical problem - it's a human rights catastrophe.

A Better Way Forward

The good news? There are approaches that actually work. The conclusion of the Engine Room study has some stellar advice.

I would add three techniques I have found useful:

  1. Start with risk assessment, not tool selection. Understand the specific threats an organization faces before recommending solutions. A women's collective documenting domestic violence faces different risks than journalists exposing government corruption.

  2. Value simplicity and familiarity. Sometimes a slightly less secure tool that people will actually use correctly is better than a theoretically secure one they'll work around. As Rory Byrne says in the report, "People like to stick with what they know."

  3. Implement changes incrementally. Rather than radical overhauls, focus on gradual improvements with continuous feedback. When I work with human rights groups, we start with one small change, perfect it, then move to the next.
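To make the risk-first mindset in step 1 concrete, here is a minimal sketch of how an assessment might be structured before any tool is recommended. The threat names, scales, and scoring formula are my own illustrative assumptions, not anything prescribed by the Engine Room report:

```python
from dataclasses import dataclass


@dataclass
class Threat:
    """One threat identified during an organization's risk assessment."""
    name: str
    likelihood: int  # 1 (rare) to 5 (frequent) -- assumed scale
    impact: int      # 1 (minor) to 5 (life-threatening) -- assumed scale

    @property
    def score(self) -> int:
        # Simple risk score: likelihood times impact.
        return self.likelihood * self.impact


def prioritize(threats: list[Threat]) -> list[Threat]:
    """Order threats so the riskiest is addressed first."""
    return sorted(threats, key=lambda t: t.score, reverse=True)


# Hypothetical assessment for a documentation group, done BEFORE
# choosing any tools. The point: address the top item first, not
# whatever the latest security tool happens to solve.
threats = [
    Threat("device confiscation at checkpoints", likelihood=4, impact=5),
    Threat("phishing of staff email", likelihood=3, impact=3),
    Threat("nation-state network interception", likelihood=1, impact=4),
]

for t in prioritize(threats):
    print(f"{t.score:2d}  {t.name}")
```

Even a rough exercise like this makes the trade-offs visible: in this invented example, device confiscation outranks sophisticated interception, so advice about encrypted transport would miss the organization's actual top risk.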

The Reality Check We Need

I believe deeply in the right to privacy and the importance of security for human rights work. But I've learned to be humble about technology solutions. The most secure system isn't the one that looks best on paper; it's the one that actually protects people in real-world conditions.

Technology doesn't make change. People make change. They need security approaches that recognize their humanity, constraints, and contexts.

So the next time you receive security advice, remember the NGO that had printer trouble and ended up with USB sticks in internet cafes. They weren't careless - they were trying to be more secure.

Don't let the latest security solution become the next problem.