Could deliberately adding security bugs make software more secure?

The best way to defend against software flaws is to find them before the attackers do.

This is the unshakeable security orthodoxy challenged by a radical new study from researchers at New York University. The study argues that a better approach might be to fill software with so many false flaws that black hats get bogged down working out which ones are real and which aren’t.

Granted, it’s an idea likely to get you a few incredulous stares if suggested around the water cooler, but let’s do it the justice of trying to explain the concept.

The authors’ summary is disarmingly simple:

Rather than eliminating bugs, we instead add large numbers of bugs that are provably (but not obviously) non-exploitable.

By carefully constraining the conditions under which these bugs manifest and the effects they have on the program, we can ensure that chaff bugs are non-exploitable and will only, at worst, crash the program.

Each of these bugs is called a ‘chaff’, presumably in honour of the British WW2 tactic of the same name, which confused German radar by filling the sky with clouds of aluminium strips.

Arguably, it’s a distant relative of the security by obscurity principle, which holds that something can be made more secure by embedding a secret design element that only the defenders know about.

In the case of software flaws and aluminium chaff clouds, the defenders know where and what they are but the attackers don’t. As long as that holds true, the theory goes, the enemy is at a disadvantage.

The concept has its origins in LAVA, a tool co-developed by one of the study’s authors that injects flaws into C/C++ programs to test the effectiveness of the automated flaw-finding tools widely used by developers.

Of course, attackers also hunt for flaws, which is why deliberately putting flaws into software to consume attackers’ resources must have seemed like a logical jump.

To date, the researchers have managed to inject thousands of non-exploitable flaws into real software using a prototype setup, which shows that the tricky engineering of adding flaws without breaking programs is at least possible.