Back when nothing was connected to anything, if you needed to reset the clock on your computer -- say, to work around a "Y2K" bug, but surely never to thwart an overzealous licensing scheme -- you could just do it. At most you'd have to re-date some files when you were done. That's no longer the case.
If you're on the net, you pretty much have to have your clock set correctly. For one thing, most communication on the net is timestamped in one form or another. "Did you get my email?" "No. When did you send it?" If your clock is off by five minutes, people won't care, but five days or five months is a different matter, and servers are liable to drop suspiciously dated mail on the floor.
Timing is also important in security. If I send you my encrypted password, someone in the middle can grab a copy and try to send it later. The main way of dealing with this sort of replay attack is to require the message containing the password also to contain a nonce -- a bit of data that is different every time.
One way to do this is to send a random number when requesting a response: "Please send your message and the number ######### in encrypted form". Another way is to have the sender include its idea of the current time. If it's not reasonably close to the receiver's idea of the current time, the receiver rejects the message. This approach is particularly useful when protecting a series of messages, since it doesn't require continual requests and responses, but it will only work if the clocks are synchronized.
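The receiver's check in the timestamp variant can be sketched in a few lines. This is just an illustration (the function name and the five-minute tolerance are mine, not from any particular protocol):

```python
import time

# Tolerated clock skew between sender and receiver, in seconds.
# Five minutes is illustrative; real protocols pick their own window.
MAX_SKEW = 300

def accept(message_timestamp, now=None):
    """Accept a message only if its embedded timestamp is 'reasonably
    close' to the receiver's idea of the current time."""
    if now is None:
        now = time.time()
    return abs(now - message_timestamp) <= MAX_SKEW
```

A replayed copy of the message carries the original timestamp, so once the window has passed it gets rejected -- and the receiver doesn't have to remember every message it has ever seen.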
A variant of this is a keycard that generates a new security code every few seconds. When you log in with such a card, the server will reject any codes that are too old.
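Such cards typically derive each code by hashing a shared secret together with the current time window. Here's a rough sketch of the idea, loosely following the TOTP scheme -- the 30-second step, six digits, and function names are my illustration:

```python
import hashlib
import hmac
import struct

def code_for(secret: bytes, t: float, step: int = 30) -> str:
    """Derive a six-digit code from the 30-second window containing t."""
    counter = int(t // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

def verify(secret: bytes, submitted: str, now: float,
           step: int = 30, window: int = 1) -> bool:
    """Accept the current window plus one on either side, so slightly
    stale (or slightly early) codes still work; anything older is 'too old'."""
    return any(code_for(secret, now + k * step) == submitted
               for k in range(-window, window + 1))
```

Note that this only works because the card and the server agree on the time; a card whose clock has drifted badly produces codes the server will never accept.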
If phrases like "reasonably close" and "too old" give you the idea that time on the net is somewhat fuzzy, that's because it is. If you and I can only communicate through messages that take a small but non-zero time to reach their destinations, then there's no meaningful way to say "I did X at the same time you did Y." (Einstein had some things to say on similar topics, but let's not go there now.)

How would we prove such an assertion? I could send you a message, timestamped by my clock, and you could do the same. We could also note the times at which we each received our messages. But what if, say, the relevant timestamps are identical, but my clock actually runs a bit fast relative to yours, or a bit slow? What if one message got slightly delayed by a transient network traffic jam? There's no way to know.
This can actually be a pain if, say, you are picking up file A from a remote server and creating a local file B from it. File A might change, so you want to make sure that you re-create file B whenever file A changes. A popular development tool, which shall remain nameless, assumes that file B needs to be rebuilt if file A's timestamp is more recent than file B's. Really, you want to rebuild B if A has changed since the last time you used it to build B. These are basically the same thing if everything is on the same host, or if the hosts' clocks are tightly synchronized, but not if one clock is allowed to drift away from the other.
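The difference between the two policies is easy to state in code. A sketch (the function names are mine; the first mirrors the nameless tool's timestamp comparison, the second records A's timestamp at build time and rebuilds on any change):

```python
import os

def needs_rebuild_by_mtime(a: str, b: str) -> bool:
    """The tool's rule: rebuild B when A's timestamp is newer than B's.
    Misfires when A lives on a host whose clock lags ours."""
    return not os.path.exists(b) or os.path.getmtime(a) > os.path.getmtime(b)

def needs_rebuild_by_record(a: str, recorded_mtime) -> bool:
    """The safer rule: rebuild B when A's timestamp differs at all from
    the one we recorded last time we built B -- no clock comparison needed."""
    return recorded_mtime is None or os.path.getmtime(a) != recorded_mtime
```

The second rule never consults the local clock at all, which is exactly why drift can't fool it.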
Fortunately, there are ways of ensuring that clocks in different computers are very likely to be in sync within a given tolerance (which depends on the latency of the system, and other factors). They involve measuring the transit time of messages among servers, or between a given server and "upstream" servers whose clocks we trust, as with NTP.
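The core arithmetic behind that measurement is simple. In one NTP exchange the client records when it sent the request (t0) and received the reply (t3) on its own clock, while the server stamps when it received the request (t1) and sent the reply (t2) on its clock. Assuming the delay is roughly the same in both directions (the function name here is mine):

```python
def offset_and_delay(t0: float, t1: float, t2: float, t3: float):
    """Estimate how far the client's clock is behind the server's, and
    the round-trip network delay, from one request/reply exchange."""
    offset = ((t1 - t0) + (t2 - t3)) / 2  # client clock error vs. server
    delay = (t3 - t0) - (t2 - t1)         # time actually spent in transit
    return offset, delay
```

If the two legs of the trip take different times, the asymmetry shows up as error in the offset, which is one reason NTP samples repeatedly and prefers low-delay exchanges.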
Time may be fuzzy on the net in one sense, but from a practical point of view it's not fuzzy at all. Without really trying that hard, I now have several accurate clocks at my disposal. The first one I got used the radio time signals broadcast from WWV in Fort Collins, Colorado. My cell phone gets periodic pings from the nearest tower, the towers being synchronized I-know-not-how. My cable box shows the current time according to the feed upstream. And every computer in the house keeps good time thanks to NTP.
I haven't checked rigorously, but none of them ever seems to be more than a few seconds off from the others. In theory, the radio clock and the computers should be within less than a second of each other. Under good conditions, NTP can maintain sync to within tens of milliseconds, or less time than most packets take to reach their destination over the internet (under ideal conditions, it can do better than a millisecond).
Except for the radio case, all this is by virtue of the clocks in question belonging to one or another network. Particular measurements and results on a network are fuzzy, but the aggregate can be quite robust.