Monday, October 1, 2007

How do you know I wrote this?

[This one came out a bit disjointed.  Re-reading, I'm not really sure why I mentioned TLS for SMTP servers as having much to do with digital signatures as a SPAM-fighting device.  I also seem to imply that DNS spoofing could point you at, say, a fake banking site without your browser knowing.  The certificate checking in TLS would catch that.  DNS spoofing could perhaps be used in more subtle ways to get your browser to accept a bogus update to its trusted certificates list, but that's a much smaller attack surface.  Anyway, the general drift, such as it is, is still good, as is the conclusion --DH 7 Sep 2010]

As I go along, one of the things I'm trying to figure out is why everyone doesn't have a cryptographically strong keyring. I mean, here's this really well-established, well-studied and, as far as our best minds can tell, highly secure way of doing several useful things. In particular, it can establish to a practical certainty that whoever signed a particular piece of data knew a particular secret key.
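To make that guarantee concrete, here's a minimal sketch of sign-and-verify. It uses the third-party Python `cryptography` package and Ed25519 as one example scheme (both are my choices for illustration, not anything from the post); the point is just that verification succeeds only for data signed with the matching secret key.

```python
# A sketch of the core guarantee: a signature checks out only if the
# signer knew the secret key and the data hasn't been touched.
# Assumes the third-party "cryptography" package; Ed25519 is one of
# several schemes that would do.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # the secret only I hold
public_key = private_key.public_key()        # the part anyone can see

message = b"I wrote this post."
signature = private_key.sign(message)

# Anyone with the public key can check the signature...
public_key.verify(signature, message)        # no exception: it checks out

# ...and any tampering with the message makes verification fail.
try:
    public_key.verify(signature, b"Someone else wrote this post.")
    print("tampered message verified (should not happen)")
except InvalidSignature:
    print("tampered message rejected")
```

Note what this does and doesn't establish: it ties the data to a key, not to a person. Binding the key to a person is what certificates, webs of trust, and the rest of the machinery below are for.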

But no one uses it.

OK, people do use it. The major example, not surprisingly, is that e-commerce sites generally use a certificate in their HTTPS handshake. Your browser will even tell you this if you know where to look (in Firefox it's the little lock icon next to the address at the top which, if you're like me, you'd forgotten all about).

So as long as you remember to check for that, and no one's fooling around with DNS behind your back (see below) and your browser hasn't been compromised (see below), you're good. Note that this is meant to establish your trust in the server. The server will trust you on the basis of a small password (and your incentive not to give it away).
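The certificate check the browser does during that handshake isn't browser magic; the same machinery is available to any program. As a sketch, Python's standard `ssl` module bakes both halves of the check into its default settings (the connection code at the end is illustrative and not run here; `example.com` is a placeholder host):

```python
import ssl

# A default TLS context refuses the handshake unless two things hold.
ctx = ssl.create_default_context()

# 1. The server's certificate must chain to a trusted root...
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True
# 2. ...and the name in the certificate must match the host we asked for.
print(ctx.check_hostname)                     # True

# Actually connecting would look roughly like this (placeholder host):
#   import socket
#   with socket.create_connection(("example.com", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#           print(tls.getpeercert()["subject"])
```

The hostname check is what ties the certificate to the DNS name you typed, which is why DNS games alone shouldn't get a fake site past it.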

Outside that crucial and curiously asymmetric case, examples get a bit sparse.

At least one major online brokerage (and so probably all of them) will set you up with a digitally secure ID. Whatever overhead or unfamiliarity digital keys may have is worth it if there is enough money at stake.

I've mentioned that widespread use of signatures would probably take a pretty big bite out of spam (um, maybe that's not the best metaphor). I don't see any signs of traction on this, outside of some SMTP servers using certificate-based TLS (quick -- which ones and how do you tell?).

For a while I used to sign all my posts to a mailing list I was on. Then I switched to a different setup and never got around to turning signing back on. No one cared.

Even in an environment like a standards committee, where the members are speaking very publicly and representing companies with money at stake, the posts are generally not signed (at least not in the committees I've had direct experience with). You're trusting the committee members to keep their small passwords safe and the world at large not to care much about standards committees. Which, admittedly, is a pretty safe bet.

OS vendors use strong crypto to make sure that updates are from where they say they are, though I do recall once or twice seeing instructions to the effect that "You may see a message complaining about a bad or missing certificate. Ignore it." OK .... I'm getting a bit of a mixed message here, but OK ....

I couldn't remember whether my Firefox plug-ins were signed, and I couldn't find any indication of it for the ones I had installed, so I went to the Firefox extensions site and checked a couple more-or-less at random. These were in the web development section, so they did things like muck with the HTTP headers you send or munge the content coming back. Stuff you might want to be particularly sure was kosher before installing (or not -- the whole point of Trojan-horse-like attacks is that they can be disguised as anything).

Unsigned, of course. Even if they are signed, do I understand the signing protocol in question well enough to trust it? Not really. I take it on faith that the folks at Mozilla have thought it all through. I particularly hope they've thought the HTTPS stuff through. I'm sure I'd hear about it if they hadn't.

Not that most people will care, but Eclipse plug-in security is even more of a formality. I'm not sure I've ever seen a properly signed third-party plug-in. I probably have, but I don't remember where or when. Security of development tools is not of entirely academic interest. Ken Thompson had a bit to say about that.

What the heck is going on here? If you've got a reasonably mature technology and a problem it seems to fit well, yet no one seems to have adopted it, then either
  • The problem isn't the problem it appeared to be
  • There's more adoption than one might think
  • Something else is solving the problem well enough to prevent major outrage

In this case I think it's a combination of the last two. In e-commerce, there actually is strong crypto involved, just woven in seamlessly enough you only see it if you're looking for it.

That's good, but SSL/TLS authenticates a server, not a person. To take a classic example, if I'm dealing with someone on eBay, their certificate means I can be pretty confident I'm talking to eBay. I can also be pretty confident that eBay at least made this person type in a password in order to sign on. I mean, I had to when I logged in. Beyond that I need eBay's feedback mechanism to help decide whether I want to do business with them.

Returning to the title, how do you know I wrote this? It probably goes like this:
  • You believe that fieldnotesontheweb.blogspot.com is controlled by Blogger. We put remarkable trust in DNS, and despite attempts to poison it (or spoof by playing nasty games with Unicode), that trust seems to be repaid. So far.
  • You believe that no one has stolen my blogger ID (or if they did, I would be able to get the account blocked until everything got straightened out).
  • If you've read more than one post, this one (I hope) seems like it was written by the same person as the others.
Other situations behave similarly. I trust DNS to make sure my bank is my bank and that the servers involved in authenticating my HTTPS connection with them are who they say they are. We trust our OSs and browsers to do the right thing with updates. We exercise caution in dealing with phishy-looking emails.

We trust that email that says it's from someone we know really is, and we're generally right (the exception being mail from oneself -- either I sleepwalk or the occasional spammer is spoofing the From: line to be the same as the To: line). That makes a simple whitelist a reasonably good spam filter.
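A whitelist filter of that sort is a few lines in any language. Here's a toy sketch in Python using the standard library's `email.utils` (the addresses are made up); note that it trusts the From: header completely, which is exactly the weakness above: a spammer spoofing a whitelisted address sails right through.

```python
from email.utils import parseaddr

# Hypothetical addresses for illustration only.
WHITELIST = {"alice@example.com", "bob@example.org"}

def is_whitelisted(from_header: str) -> bool:
    # parseaddr pulls the bare address out of forms like
    # 'Alice Example <alice@example.com>'.
    _, addr = parseaddr(from_header)
    return addr.lower() in WHITELIST

print(is_whitelisted("Alice Example <alice@example.com>"))  # True
print(is_whitelisted("V1agra Deals <spam@example.net>"))    # False
```

A signature check in place of the string comparison would close the spoofing hole, which is the whole argument for signing in the first place.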

In short, we have a typical case of engineering in the real world. The status quo is a patchwork of partial solutions yielding results that are significantly non-optimal, but not quite bad enough to leave room for sweeping reforms. That's probably not going to change until the bar for going secure is very, very low, and/or people decide that having strong crypto is worthwhile even if no one else uses it.

1 comment:

David Hull said...

Note to self: this aged pretty well