First, there's a good argument that the culprit is not complexity but an overly trusting nature. The people behind the routing exploit say as much, and a more paranoid version of DNS (DNSSEC) has been around for some time now, though it's not widely deployed (see here for some of the background, for example). In other words, people have known about the basic problems for a while, which argues against the "no obvious deficiencies" position.
Second, as I said previously, the standards themselves are not horribly complex. The specifications are deliberately loose in places, but they have to be. An overly tight specification is unlikely to survive unanticipated situations, and there are always unanticipated situations. The complexity comes from actually deploying them in the field.
But that's just life with computers. Another advocate of simplicity, Edsger Dijkstra, once provided code examples in a language without subroutines on the grounds that even a few dozen lines of straight code can be hard enough to analyze. [I'm working from memory here. It might have been Niklaus Wirth. Or possibly Colonel Mustard with a lead pipe.] If you don't buy that, have a look at the "rule 110 automaton".
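For a concrete sense of how little code it takes to produce behavior that's hard to analyze, here's a minimal sketch of Rule 110 in Python (the presentation is mine; the rule itself is standard). Each cell's next state depends only on itself and its two neighbors, yet the resulting automaton is known to be Turing complete.

```python
# Rule 110: a one-dimensional cellular automaton with a trivially small
# update rule but provably complex (Turing-complete) global behavior.

RULE = 110  # the update table, encoded as an 8-bit number


def step(cells):
    """Apply one Rule 110 update to a row of 0/1 cells (edges wrap around)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]


if __name__ == "__main__":
    row = [0] * 63 + [1]  # start with a single live cell on the right
    for _ in range(30):   # print a few generations
        print("".join("#" if c else "." for c in row))
        row = step(row)
```

That's a dozen lines of straight code, and nobody can tell you in general what it will do without just running it.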
So what, if anything, can we conclude here? I like to come back to a few recurrent themes:
- The algorithms are not as important as the monumental efforts of the security experts and administrators who track down what's actually going on and try to fix it.
- What's deployed trumps what's implemented trumps what's specified.
- The technical picture is not the whole picture. If your identity is stolen, it doesn't matter so much how it happened. There's a crime to be investigated and a reputation to be repaired in any case.
- There are no 100% solutions. Even when the latest exploits are countered, there will still be vulnerabilities. The only secure computer is one that's turned off -- and you don't want to be too sure about that.