
Sunday, January 11, 2009

MD5, SSL, CAs etc.: Summary opinion

In the past few posts (this one, this one and this one) I've tried mostly to lay out the facts as I understood them. But this story is prominent enough that everyone seems to have an opinion on it. So here's mine, keeping in mind that when I'm not busy not being a security expert, I spend my spare time not being a lawyer.

First, the CAs [that is, the ones that persisted in using MD5] clearly deserve a good dose of public humiliation on this one, if only to remind everyone that bad press and possible loss of sales are an additional cost of not acting, even if no actual exploit occurs. They deserve it not because a weakness turned up -- that's pretty much inevitable -- or because they don't always respond to every conceivable threat -- that's a natural consequence of doing business and weighing costs against benefits. They deserve it because in this particular case the threat was clearly large, the fix was clearly not that hard and they had years of lead time to fix things quietly.

Verisign in particular argues that RapidSSL was an acquisition and they were just now able to get to know RapidSSL's code base. Well yeah, but ... Verisign chose to acquire RapidSSL and it would have taken very little due diligence to determine that they were using weak certificates. Depending on their negotiating position, Verisign might even have been able to make RapidSSL's fixing its certs a condition of the acquisition. But in any case, it's a problem Verisign took on voluntarily, and if they didn't know about it they should have.

However ...

It would be a grave mistake to focus completely on the CAs. When you (say) visit your bank's web site, you are trusting
  • The bank to keep your money safe to the best of its ability.
  • The bank to keep your private info safe to the best of its ability.
  • The banking system and government to clean up if the bank fails. [When I wrote this, I was thinking more of a security breach. Heh.]
And on the technical side:
  • The CAs signing your bank's certificate
  • The design of the certificate system (PKI)
  • The researchers who claim all this works the way they say it does
  • The implementation of SSL you're using
  • DNS (the system that figures out which actual server to contact when you ask for "foobank.com").
  • Your browser
  • Your operating system (a whole separate kettle o' fish)
  • Whatever else I didn't think of, and I'm sure I've left out several major factors.
Any of these can and does have problems from time to time. However, you're not counting on all of them to be perfect, always. You're counting on the system, in aggregate, to be safe enough. The wrong lesson to learn from all of this would be "The CAs are too lazy to protect our data". A better lesson would be "Every part of the system is imperfect. And so is the system. That's probably OK, but we really don't know."

Not exactly a reassuring story with clear good guys and bad guys, but it's the best I can come up with.

A bit more on MD5 cracking in practice

While we're on the topic, I should point out that Sotirov et al. were not the first to put the theoretical weakness of MD5 into practice. The timeline from my previous post is:
  • A while ago, someone published a paper showing that MD5 was vulnerable.
  • Late in 2008, Sotirov et al. disclosed that they had forged a rogue CA certificate.
  • Soon thereafter (right about now), the CAs got serious about updating their root certificates.
All well and good, but there are a few missing pieces (and even then, this is just scratching the surface):
  • Rivest introduced MD5 in 1991
  • The first theoretical indication that MD5 was weak came in 1996, when Dobbertin published a paper on the subject.
  • In 2005 Wang and Yu published a paper with the straightforward title "How to Break MD5 and Other Hash Functions", including fully-worked examples.
  • Also in 2005, Lenstra, Wang and de Weger published a paper announcing that, using Wang's method, they had produced colliding X.509 certificates (the kind everyone uses). At this point, one could make a very strong argument that the cat was out of the bag as far as applications like SSL were concerned. A couple of key quotes:
    • "With this construction we show that MD5 collisions can be crafted easily in such a way that the principles underlying the trust in Public Key Infrastructure [the basis for SSL] are violated."
    • "Below is an example pair of colliding certificates in full detail" (followed by a byte dump).
  • In 2007, Lenstra and de Weger, along with Stevens, got some press by claiming to have predicted the outcome of the 2008 presidential election, and then, once they had your attention, explaining that they'd really just made multiple predictions with identical MD5 hashes.
  • Finally, in late 2008, we join our story currently in progress.
In other words, MD5 has been known to be weak in theory for more than a decade, and in practice for several years. As usual, Wikipedia has more background and pointers to original sources (several of which I used here).

Thursday, January 8, 2009

The Register's take on the MD5/SSL crack

Under the very appropriate rubric of "As usual, the truth is a little more complicated," The Register picks up on two points I glossed over in my previous post on the MD5/SSL crack. Before I get to them, let me quote myself from a different previous post:
The basic trust issues are clear enough, but the kind of mental ju-jitsu needed to think through all the various counter-measures and counter-counter-measures is hairy in the extreme. True black belts are relatively rare, and I'm not one of them.
Caveat lector.

Point 1.
Recall that crackable root certificates are in the process of being replaced. In particular, Verisign subsidiary RapidSSL has replaced its tainted root certificate. The Register counters:
But there's nothing stopping anyone who might have used the attack before that date to masquerade as RapidSSL and issue counterfeit certificates for any website of their choosing (think Bank of America, HMRC, or any other sensitive online destination).
My understanding here is that there is just such a thing stopping them, namely that modern browsers don't rely on a fixed set of trusted root certificates; the trusted set can be updated and compromised roots dropped. So, our enterprising cracker puts up a site spoofing my bank, using a bogus certificate, signed by an imposter of RapidSSL's root certificate:
  • My browser makes an HTTPS connection to my bank. As part of the SSL handshake, it asks the purported bank "Who are you?".
  • The site responds "I'm FooBank. Says right here on this certificate."
  • The browser takes the certificate and examines it. The certificate says it's signed by RapidSSL's old root certificate (or it says it's signed by some other certificate that's been signed by RapidSSL's etc., etc.).
  • Without the knowledge that the old RapidSSL root cert has been spoofed, my browser would say "RapidSSL root cert XYZ? Looks OK to me. Go ahead and serve me."
  • With the new information, the browser says "RapidSSL root cert XYZ? Don't know about that one. Sorry." My browser does of course know about RapidSSL root cert NewImprovedXYZ, but that's not the root certificate the cracker is claiming signed the site's certificate saying it's FooBank. Same CA (RapidSSL) but a different certificate.
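As a concrete (and heavily simplified) illustration of that last step, here's a toy version of the trust-store check, assuming a browser that identifies root certificates by fingerprint. All the certificate names and bytes below are made up for illustration; real browsers compare actual DER-encoded certificates, not strings.

```python
# Toy sketch (not a real browser or TLS stack): a browser-style trust
# check keyed on root-certificate fingerprints.
import hashlib

def fingerprint(cert_bytes: bytes) -> str:
    # Real browsers fingerprint roots with SHA-1/SHA-256; same idea here.
    return hashlib.sha256(cert_bytes).hexdigest()

# Hypothetical stand-ins for the old and new RapidSSL roots.
OLD_RAPIDSSL_ROOT = b"RapidSSL root cert XYZ (MD5-signed, now distrusted)"
NEW_RAPIDSSL_ROOT = b"RapidSSL root cert NewImprovedXYZ (SHA-1-signed)"

# The browser's trust store after the update: only the new root remains.
trusted_roots = {fingerprint(NEW_RAPIDSSL_ROOT)}

def accept_chain(claimed_root: bytes) -> bool:
    # Accept the site's certificate only if its chain terminates in a
    # root the browser still trusts -- same CA, but a specific cert.
    return fingerprint(claimed_root) in trusted_roots

# The cracker's chain claims to be signed by the old root: rejected.
print(accept_chain(OLD_RAPIDSSL_ROOT))   # False
print(accept_chain(NEW_RAPIDSSL_ROOT))   # True
```

The key design point is that trust attaches to a particular certificate (its fingerprint), not to the CA's name, which is why revoking the old root works even though the CA is the same.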
Microsoft's advisory on the subject states that "When visited, Web sites that use Extended Validation (EV) certificates show a green address bar in most modern browsers. These certificates are always signed using SHA-1 and as such are not affected by this newly reported research."

Firefox doesn't use a green bar, and Mozilla's own advisory on the subject (dated December 30, 2008, and linking to Microsoft's) is a bit vague, but (without checking the code and getting further bogged down in details) it looks like Firefox has a couple of safeguards in place as well. It would be nice if they had a more definitive security statement, but the upshot is this: It is possible for browsers to reject certificates signed by fishy root certificates and only accept ones signed by root certs that use stronger hashing (e.g., SHA-1) than MD5.

Further, the lead researcher in question, Alexander Sotirov, states in the blog post I previously linked to:
Only 5 hours after our presentation, Verisign stopped using MD5 for all new RapidSSL certificates, successfully eliminating this vulnerability [emphasis mine].
So it would appear that it's enough to revoke the offending root certificates, which are a known quantity, and the rest of the system will behave appropriately.

Point 2: The article also brings up a broader and more troubling concern:
More generally, what [Verisign product marketing VP Tim] Callan seems to gloss over is the truism VeriSign and the rest of the security community have repeated so many times that it's become a cliche: Hacking is no longer the province of script kiddies[*], but rather sophisticated and well-funded criminal enterprises. It's hard to imagine these groups wouldn't spend huge amounts of money to buy the credentials that would allow them to spoof any website in the world.
The particular concern driving this is that maybe someone else has already quietly duplicated the Sotirov team's substantial effort and has bogus certificates ready to go, or is about to. That may be, but thanks to the recent efforts, the window for using them is rapidly closing. There's no particular evidence that someone beat the White Hats to the punch on this one. You would expect a massive spike in phishing attacks, and that hasn't happened. So far, at least.

As far as I can tell, no one is currently assuming that no Bad Guys have the resources to crack MD5 and forge root certificates. Sotirov's team tipped off everyone they could as soon as they had the goods (albeit carefully and indirectly in the case of the CAs). The CAs in turn appear to be acting with all deliberate speed to plug the holes. They are not currently assuming that they have a lot of time to act.

The more worrisome problem is that MD5 has been known to be vulnerable, and better alternatives have been available, for some time, but only now are the CAs actually ditching MD5. This indeed was based on the notion that it was highly unlikely that Bad Guys would be able to put the theoretical weakness of MD5 to practical use. More precisely, it was based on the calculation that the likelihood times the cost of a crack was more than [I meant "less than"] the cost of fixing the root certs. Now that Sotirov et. al. have published, the likelihood has increased and the cost of not acting along with it.
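The cost calculation above can be made concrete with a back-of-the-envelope sketch. The numbers below are invented purely for illustration; the point is only that publication moves the likelihood of a crack, and with it the expected loss.

```python
# Back-of-the-envelope version of the CA's implicit bet, with invented
# numbers: act when the expected loss from a crack exceeds the cost of
# replacing the root certificates.
def should_fix(p_crack: float, crack_cost: float, fix_cost: float) -> bool:
    # Expected loss = likelihood of a crack times its cost.
    return p_crack * crack_cost > fix_cost

# Before the Sotirov result: a practical crack judged very unlikely.
print(should_fix(p_crack=0.001, crack_cost=10_000_000, fix_cost=100_000))  # False
# After publication: the likelihood (and hence expected loss) jumps.
print(should_fix(p_crack=0.5,   crack_cost=10_000_000, fix_cost=100_000))  # True
```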

What if the CAs had been wrong? What if the Black Hats had won the race? In that case, there would likely have been a huge mess. Major banks would have been spoofed and potentially large numbers of customer accounts compromised before the banks took down their sites and breathed heavy fire at the CAs, who in turn scrambled to revoke the bad certs. The bank sites would then be down for however long it took for the CAs to fix their certs, plus however long it took for the CAs to convince the banks that, no, really, it's all safe now. Banking by phone and at, um, actual banks would continue. I'm reasonably sure that credit card transactions at stores would either not be affected or would be easier to fix since they don't use the public internet.

Parallel to that, the banks would have been scrambling to refund customers their fraudulent charges, reset the magic numbers on every account in sight and convince the customers that no, really, it's all safe now. The browser vendors, who appear already to have done what they could have, would face a similar PR nightmare anyway.

A huge mess, but I wouldn't quite call it "betting the internet". Nonetheless, I can't quite shake the nagging feeling that, one of these days, someone is going to bet wrong, or make a rational bet but still lose. That doesn't seem to have been the case this time, but who knows what comes next?

[*] I can't help including my reflexive grumble here: Hacking was never the province of script kiddies. The two are about as far apart as you can get and still have a computer involved. But I'm using "hacking" in its older sense here.

"Hackers crack SSL"

[You may also want to check out this followup, and in particular the disclaimer near the top]

Well, kind of. SSL -- the protocol you use to make sure that, say, you're actually talking to your bank and that no one's listening in -- still appears safe when used correctly. What's actually happened is that Alexander Sotirov et al. have described a way of using a long-known weakness in the MD5 cryptographic hash function to create a rogue Certificate Authority (CA) certificate. CA certificates are the "root certificates" used to vouch (directly or indirectly) for the certificates that servers use to convince your browser (and whoever else) that they are who they say they are. Certificate Authorities (CAs) are companies that -- very carefully -- issue server certificates cryptographically signed with their root certificates.
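To make the "vouch directly or indirectly" idea concrete, here's a minimal sketch of walking a certificate chain up to a trusted root. The names and the dict-based "certificates" are invented for illustration; real chain validation also checks cryptographic signatures, validity dates, and much more.

```python
# Toy sketch of chain-of-trust lookup: certificates are plain dicts and
# "signatures" are just issuer names. Everything here is invented; it is
# not a real PKI implementation.
TRUSTED_ROOTS = {"SomeCA Root"}

# A server certificate vouched for indirectly: the root signs an
# intermediate, and the intermediate signs the server's certificate.
CERTS = {
    "foobank.example":     {"issuer": "SomeCA Intermediate"},
    "SomeCA Intermediate": {"issuer": "SomeCA Root"},
    "SomeCA Root":         {"issuer": "SomeCA Root"},  # self-signed
}

def chain_to_trusted_root(name: str, max_depth: int = 5) -> bool:
    # Follow issuer links upward until we hit a trusted root (or give up).
    for _ in range(max_depth):
        if name in TRUSTED_ROOTS:
            return True
        issuer = CERTS.get(name, {}).get("issuer")
        if issuer is None or issuer == name:
            return False  # untrusted self-signed cert, or unknown issuer
        name = issuer
    return False

print(chain_to_trusted_root("foobank.example"))   # True
print(chain_to_trusted_root("evil.example"))      # False
```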

Although better alternatives (SHA-1 and SHA-2, for example) are known and widely available, some signing authorities were still using MD5 when Sotirov's team created their certificate. Anyone who trusted such a certificate, directly or indirectly, would be liable to be fooled by a forged certificate that looked the same as the real one as far as MD5 is concerned. Since at least one of the CAs that the major browsers trust by default still used MD5 at the time, a phisher could have used this certificate to spoof any site in the world.
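Why does a hash collision matter here? Because a CA's signature covers only a digest of the certificate, two certificates with colliding digests carry one and the same valid signature. A real MD5 collision is far beyond a brute-force demo, so the sketch below substitutes a deliberately crippled one-byte hash; the names and the "signature" scheme are all toy stand-ins, not real cryptography.

```python
# Toy model of why a hash collision breaks certificate signing: the CA
# signs only the digest, so colliding certificates share a signature.
import hashlib

def weak_digest(data: bytes) -> bytes:
    # Deliberately crippled 1-byte "hash" so a collision is easy to find.
    return hashlib.md5(data).digest()[:1]

SECRET = b"ca-private-key"  # stand-in for the CA's RSA private key

def ca_sign(cert: bytes) -> bytes:
    # Stand-in for RSA signing: the signature depends only on the digest.
    return hashlib.sha256(SECRET + weak_digest(cert)).digest()

def ca_verify(cert: bytes, sig: bytes) -> bool:
    return ca_sign(cert) == sig

benign = b"CN=harmless-site.example"
# Brute-force a different certificate with the same (weak) digest.
malicious = None
for i in range(10_000):
    candidate = b"CN=foobank.example #%d" % i
    if candidate != benign and weak_digest(candidate) == weak_digest(benign):
        malicious = candidate
        break

sig = ca_sign(benign)              # the CA signs the harmless request...
assert malicious is not None
assert ca_verify(malicious, sig)   # ...and the forgery verifies too
```

The Sotirov team's attack did essentially this against real MD5, except that finding the collision took serious cryptanalysis and computing power rather than a ten-thousand-iteration loop.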

Being White Hats, Sotirov and company took several steps to ensure that their particular certificate wouldn't be used that way and to give the Good Guys a chance to take preventative steps. This is standard practice. The general pattern is:
  • Someone publishes a theoretical paper saying that some security measure (in this case MD5) is vulnerable to attack.
  • Everyone nods thoughtfully and goes back to what they were doing.
  • Time passes. In some cases years.
  • Someone actually writes code to exploit the theoretical weakness. Ideally, it's a White Hat who's trying to get the major players off the dime. Less ideally, it's a Black Hat actually trying to use the exploit for ill. Quite often it's the White Hats because (we hope, and experience bears this out) the Black Hats are too busy scamming with the tools they already have to develop sophisticated new exploits.
  • If it's a White Hat, the next step is to tell the relevant major players "Remember that theoretical paper on (some vulnerability)? I'm going to present an exploit based on it at (some conference). Here's everything I know about the problem and what to do about it." This gives the major players lead time before the cat is out of the bag and the Black Hats have access.
  • In many cases, including the present one, the White Hats don't present the actual code they used, just the general technique and the results (in this case the bogus certificate) -- at least not until everyone's satisfied that the threat has been addressed. This means that anyone trying to do ill will actually have to have some programming skills. In the present case, it took a team of seven highly-skilled researchers six months to produce their result.
So ... it's highly unlikely that anyone will be able to use Sotirov & co.'s certificate to steal your bank details. It's only a matter of time before someone else forges a bogus certificate that could have been so used, had the CAs not taken proper steps, but by that time the MD5-based CA certificates will have been taken down. In particular, Verisign has already taken down the one under its direct control and has said that those controlled by resellers should be gone by the end of January.

There's an interesting wrinkle in Sotirov's blog post that I linked to. Thanks to the recent legal wrangling over a paper detailing an attack on the Mifare Classic subway card system, people have become more skittish about giving a heads-up to just anyone.

Most hackers (in the older sense) believe strongly that suppressing useful information is counterproductive. Mifare is a case in point, particularly since Mifare Classic had already been successfully attacked, and the paper that Mifare's maker was trying to suppress is publicly available right here. But I digress.

Back at the storyline, Sotirov's team was concerned enough about dealing directly with CAs that "did not have a significant track record of responding to public security vulnerabilities in their systems" and so might "overreact and attempt to stop or delay our presentation through legal or other means" that they took the extra precautions of getting non-disclosure agreements from the browser vendors and using Microsoft as an intermediary in talking to the CAs.

The net effect was the same: The team was able to alert the geeks at Verisign and elsewhere without giving any trigger-happy suits anything to go on, and Verisign in turn acted quickly to deal with the problem. Sotirov closes his post on an encouraging note:
Cryptographic algorithms can become broken overnight, so it is important for CAs to demonstrate the ability to react quickly to such issues. I'm happy with the response from Verisign and the other affected CAs. Based on our experience with them, I would not hesitate to work with them directly on any vulnerabilities I might discover in the future.