Showing posts with label latency. Show all posts

Friday, August 21, 2015

Latency, bandwidth and Pluto

As you may well know, the New Horizons spacecraft flew by Pluto last month, dramatically increasing our knowledge of Pluto and its moons (let's not even get into whether Pluto and Charon jointly constitute a "binary dwarf planet" or whatever).  There are even a few pictures on the web.

But wait ... that's not very many pictures for a ten-year mission.  Even worse, if you were watching at the time you'll know that New Horizons went completely dark for most of a day, right when it was flying by Pluto.  Isn't this the modern web, where everything is available everywhere instantly?  What gives, NASA?

Part of the problem is the way New Horizons is designed.  It's expensive to accelerate mass to the speed New Horizons is going, and since you can't exactly send a repair crew out to Pluto, it's good to have as few moving parts as possible.  As a result, the ship has a small power supply and both the antenna and the cameras are mounted firmly in place.  If you want to turn the antenna toward Earth, you have to move the whole ship, using some of the small store of remaining fuel dedicated to course corrections and attitude control.  If you want to point the cameras toward Pluto, you have to turn the ship that way.  You can't do both.

That explains why the ship went dark for the duration of the flyby, but actually it effectively went dark for considerably longer than that.  It takes about 4.5 hours for a signal to travel the distance between Earth and Pluto.  That means the sequence of events is, more or less:
  • t + 0: Flight control sends commands to New Horizons to point the cameras at Pluto, take pictures, orient the antenna toward Earth, and report back.
  • t + 4.5 hours: New Horizons gets the command and starts re-orienting and taking pictures
  • t + 9 hours: Last time at which any signal from before the pointing operation will reach Earth
  • t + 25.5 hours (more or less): New Horizons, now with the antenna pointed back toward Earth, sends "Phone home" message reporting status.
  • t + 30 hours (more or less): "Phone home" message arrives
The important thing to note here is that, while the ship is actually out of contact for 21 hours, it's 25.5 hours from the time the command is sent to the time the ship is reachable again, and 30 hours before the ground crew knows it's reachable again.   If the phone home signal hadn't arrived, it would be 9 more hours, at a minimum, before they knew if any corrective action they'd taken had worked.  By internet standards this is ridiculously high latency, but anyone who's played a laggy video game or been on a conference call with people on the opposite side of the world has experienced the same problem on a smaller scale.
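The timeline above is just light-time arithmetic. Here's a quick sketch in Python using the post's figures (the 25.5-hour phone-home time comes from the mission schedule, not a derivation):

```python
ONE_WAY = 4.5  # hours of light time, Earth to Pluto

command_sent = 0.0                                    # t + 0
command_received = command_sent + ONE_WAY             # t + 4.5 h
last_old_signal_arrives = command_received + ONE_WAY  # t + 9 h
phone_home_sent = 25.5                                # t + 25.5 h, antenna back toward Earth
phone_home_arrives = phone_home_sent + ONE_WAY        # t + 30 h

out_of_contact = phone_home_sent - command_received   # 21 h of radio silence
```

Note that every event on the ground trails the corresponding event at Pluto by a fixed 4.5 hours; the latency can't be engineered away, only planned around.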

So part of the reason pictures have been slow in coming is latency.  It's going to take a minimum of 4.5 hours to beam anything back, and longer if the ground crew has to send instructions.

The other problem is bandwidth.  Pluto is about 5 billion kilometers (about 3 billion miles) away.  Signal strength drops off as the square of the distance, so, for example, a signal from Pluto is about 160 times weaker than a signal from Mars, and, since the power source has to be small (around 15 watts), the signal is not going to be extremely powerful to start with.  Lower power means a lower signal-to-noise ratio and less bandwidth (or at least that's my dim software-engineer understanding of it -- in real life "bandwidth" doesn't exactly mean "how many bits you can transmit", and I'm sure there's lots more I'm glossing over).
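The inverse-square comparison is easy to check; the distances below are rough, and the Earth-Mars distance in particular varies a great deal, so this is a typical figure, not a fixed one:

```python
# Inverse-square falloff: how much weaker is a signal from Pluto
# than one from Mars, at a typical Earth-Mars distance?
d_pluto_km = 5.0e9   # ~5 billion km
d_mars_km = 0.4e9    # ~400 million km (varies widely)

falloff = (d_pluto_km / d_mars_km) ** 2
print(round(falloff))
```

That works out to roughly 156, which is where the "about 160 times weaker" comes from.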

Put all that together and on a good day we have about 2 kbps coming back from Pluto.  That's about what you could get out of a modem in the mid-1980s.   Internet technology has progressed just a bit since then, but internet technology doesn't have to cope with vast distances and stringent mass limitations.  At 2 kbps, one raw image from LORRI (the hi-res black-and-white camera) takes close to two hours to transmit.  This is why, if all goes well, we'll be getting Pluto pictures (and other data) well into 2016.
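The "close to two hours" figure checks out, assuming a 1024x1024 sensor at 12 bits per pixel sent uncompressed (my assumption about the raw format, not an official figure):

```python
# How long does one raw LORRI image take at ~2 kbps?
pixels = 1024 * 1024    # assumed 1024x1024 sensor
bits_per_pixel = 12     # assumed raw, uncompressed depth
link_bps = 2000         # ~2 kbps on a good day

seconds = pixels * bits_per_pixel / link_bps  # ~6291 s
hours = seconds / 3600
print(round(hours, 2))
```

About 1.75 hours per image, before any retransmissions, which is why the full data set trickles back over more than a year.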

I'd still say that New Horizons is "on the web" in some meaningful sense, but the high latency and low bandwidth make it a great example of Deutsch's fallacies in action.

[Update: Not only is New Horizons sending back data slowly, it's not sending back particularly pretty data at the moment.  From their main page:
Why hasn’t this website included any new images from New Horizons since July? As planned, New Horizons itself is on a bit of a post-flyby break, currently sending back lower data-rate information collected by the energetic particle, solar wind and space dust instruments. It will resume sending flyby images and other data in early September.
-- D.H. 24 Aug 2015]

Monday, September 7, 2009

Angling for a spot on the exchange

In the old days, when people traded stocks and commodities face to face and recorded the resulting transactions on parchment with a quill pen, it mattered very much whether you were on the exchange floor or just working through some intermediary. If you were on the floor, you could get your order placed faster. A faster order meant a better price and up-to-date information about market movements. As they say in the biz, old news is no news.

Open outcry exchanges are on the way out now, replaced by shiny new technology. Anyone anywhere can get in the game, set up a trading algorithm, put it on a server and sit back as the cash flows in.

So long, of course, as your server is in the same building as the exchange's servers. Otherwise, your order is going to get there (milliseconds) too late and you'll be edged out by someone with lower latency, who will then be glad to turn around and sell you what you wanted at a slim but persistent profit.

Score one more for Peter Deutsch.

Tuesday, April 8, 2008

Timing is something, at least

Back when nothing was connected to anything, if you needed to reset the clock on your computer -- say, to work around a "Y2K" bug, but surely never to thwart an overzealous licensing scheme -- you could just do it. At most you'd have to re-date some files when you were done. That's no longer the case.

If you're on the net, you pretty much have to have your clock set correctly. For one thing, most communication on the net is timestamped in one form or another. "Did you get my email?" "No. When did you send it?" If your clock is off by five minutes, people won't care, but five days or five months is a different matter, and servers are liable to drop suspiciously dated mail on the floor.

Timing is also important in security. If I send you my encrypted password, someone in the middle can grab a copy and try to send it later. The main way of dealing with this sort of replay attack is to require the message containing the password also to contain a nonce -- a bit of data that is different every time.

One way to do this is to send a random number when requesting a response: "Please send your message and the number ######### in encrypted form". Another way is to have the sender include its idea of the current time. If it's not reasonably close to the receiver's idea of the current time, the receiver rejects the message. This approach is particularly useful when protecting a series of messages, since it doesn't require continual requests and responses, but it will only work if the clocks are synchronized.
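The timestamp check can be sketched in a few lines; the 300-second window here is an illustrative choice, not a standard value, and the function name is mine:

```python
import time

MAX_SKEW = 300  # seconds; an illustrative window, not a standard value

def accept(message_timestamp, now=None):
    """Accept a message only if its timestamp is close to our own clock.
    This bounds how long a captured message stays useful for replay, but
    it only works if sender and receiver clocks are roughly in sync."""
    if now is None:
        now = time.time()
    return abs(now - message_timestamp) <= MAX_SKEW
```

A message stamped ten seconds ago passes; one stamped ten minutes ago, perhaps a replay, is rejected.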

A variant of this is a keycard that generates a new security code every few seconds. When you log in with such a card, the server will reject any codes that are too old.

If phrases like "reasonably close" and "too old" give you the idea that time on the net is somewhat fuzzy, that's because it is. If you and I can only communicate through messages that take a small but non-zero time to reach their destinations, then there's no meaningful way to say "I did X at the same time you did Y." (Einstein had some things to say on similar topics, but let's not go there now)

How would we prove such an assertion? I could send you a message, timestamped by my clock, and you could do the same. We could also note the times at which we each received our messages. But what if, say, the relevant timestamps are identical, but my clock is really a bit fast of yours, or a bit slow? What if one message got slightly delayed by a transient network traffic jam? There's no way to know.

This can actually be a pain if, say, you are picking up file A from a remote server and creating a local file B from it. File A might change, so you want to make sure that you re-create file B whenever file A changes. A popular development tool, which shall remain nameless, assumes that file B needs to be rebuilt if file A's modification time is later than B's. Really, you want to rebuild B if A has changed since the last time you used it to build B. These are basically the same thing if everything is on the same host or if the hosts' clocks are tightly synchronized, but not if one clock is allowed to drift away from the other.
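A minimal sketch of the two policies, with hypothetical helper names: the first mirrors the nameless tool's newer-than rule, which misfires when the two timestamps come from different clocks; the second compares the source's timestamp against a recorded value taken from the same clock.

```python
import os
import tempfile

def needs_rebuild_make_style(src, dst):
    """Rebuild if the source file is newer than the target.
    Misfires when src and dst timestamps come from hosts whose clocks disagree."""
    return (not os.path.exists(dst)
            or os.path.getmtime(src) > os.path.getmtime(dst))

def needs_rebuild_recorded(src, recorded_mtime):
    """Rebuild if the source changed since we last built from it,
    comparing only timestamps taken from the same clock."""
    return os.path.getmtime(src) != recorded_mtime

# Tiny demonstration with explicit timestamps.
d = tempfile.mkdtemp()
src = os.path.join(d, "a.txt")
dst = os.path.join(d, "b.txt")
open(src, "w").close()
open(dst, "w").close()
os.utime(src, (1000, 1000))   # src last modified at t=1000
os.utime(dst, (2000, 2000))   # dst built later, at t=2000

stale_by_mtime = needs_rebuild_make_style(src, dst)    # False: dst is newer
stale_by_record = needs_rebuild_recorded(src, 1000.0)  # False: unchanged since build

os.utime(src, (3000, 3000))                            # src edited after the build
stale_after_edit = needs_rebuild_make_style(src, dst)  # True: rebuild needed
```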

Fortunately, there are ways of ensuring that clocks in different computers are very likely to be in sync within a given tolerance (which depends on the latency of the system, and other factors). They involve measuring the transit time of messages among servers, or between a given server and "upstream" servers whose clocks we trust, as with NTP.
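The core of the NTP-style measurement is a small calculation over four timestamps, assuming the outbound and return trips take about the same time:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP-style estimate from four timestamps:
    t1 = client send, t2 = server receive (server clock),
    t3 = server send (server clock), t4 = client receive.
    Assumes the network path is roughly symmetric."""
    offset = ((t2 - t1) + (t3 - t4)) / 2  # how far the client clock trails the server's
    delay = (t4 - t1) - (t3 - t2)         # round-trip time, minus server processing
    return offset, delay
```

For example, if the server's clock runs 0.1 s ahead and each leg takes 50 ms, the timestamps (0, 0.15, 0.16, 0.11) recover an offset of 0.1 s and a delay of 0.1 s. If the path is asymmetric, the asymmetry shows up as error in the offset, which is one reason sync over the open internet bottoms out around milliseconds rather than microseconds.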

Time may be fuzzy on the net in one sense, but from a practical point of view it's not fuzzy at all. Without really trying that hard, I now have several accurate clocks at my disposal. The first one I got used the radio time signals broadcast from WWV in Fort Collins, Colorado. My cell phone gets periodic pings from the nearest tower, the towers being synchronized I-know-not-how. My cable box shows the current time according to the feed upstream. And every computer in the house keeps good time thanks to NTP.

I haven't checked rigorously, but none of them ever seems to be more than a few seconds off from the others. In theory, the radio clock and the computers should be within less than a second of each other. Under good conditions, NTP can maintain sync to within tens of milliseconds, or less time than most packets take to reach their destination over the internet (under ideal conditions, it can do better than a millisecond).

Except for the radio case, all this is by virtue of the clocks in question belonging to one or another network. Particular measurements and results on a network are fuzzy, but the aggregate can be quite robust.

Friday, November 16, 2007

Laptop orchestras. You read that right.

The University of York has been getting publicity lately for its Worldscape Laptop Orchestra, currently billed as the world's largest, though not the first. Others include the Moscow Laptop Cyber Orchestra and Princeton's PLOrk. Create Digital Music has a good summary. [There doesn't seem to be a good permalink for the Worldscape site yet -- I'll have to remember to fix the link when there is one][I've updated the link from York Music's home page to the press release for Worldscape. They still don't seem to have their own page, which leads me to wonder if they're still around].

So just what is a laptop orchestra? A bunch of people clicking "play" on some mp3 files and listening to the results? Not at all. Worldscape and its cousins are bona fide orchestras, making live music, often collaborating with more traditional instrumentalists and at least in the case of Worldscape, requiring a conductor. There is also at least one club sponsoring open jam sessions where anyone can show up with their gear, plug in and play.

The key here is the interactive element. An instrument in a laptop orchestra isn't just spewing out pre-programmed bits. It's responding to the musician's input, whether through specialized controllers, gestures grabbed by a video camera, or whatever else. As with any other orchestra, the musicians respond to each other, to the conductor (if any) and to the audience. The result is a genuinely live musical performance.

One telling detail: How do you record a laptop orchestra? You might think you'd just capture the digitized sounds the laptops are producing and mix them down. That's certainly possible, but if you want to capture the experience, it's better just to put mics in the house and record what the audience is hearing.

That's not to say you couldn't do the same thing online. I've heard of small-scale live musical collaborations over the net (though I can't remember where). I suspect, however, that keeping an orchestra of fifty in sync online is going to be a problem. I doubt you could just put everyone on one big Skype conference call, but if it's been done on that scale I'd be glad to be proved wrong.

Monday, October 29, 2007

Latency in a virtual stadium

Just to put a little perspective on my previous post on latency: Sound travels about 350 m/s. If the network round-trip time between the US and Australia is 200ms, then the 100ms one-way delay is equivalent to a physical separation of about 35m, at least as far as sound is concerned. Being in a global virtual crowd is like being in a largish theater or small arena.
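The arithmetic, for the curious, using the round numbers above:

```python
SPEED_OF_SOUND = 350.0   # m/s, roughly, in air

round_trip_s = 0.200                 # US <-> Australia network round trip
one_way_s = round_trip_s / 2
separation_m = SPEED_OF_SOUND * one_way_s
print(round(separation_m))
```

That's 35 meters: acoustically, the far side of a largish theater.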

That's really not going to be a problem most of the time. The interesting thing to me is that there's a hard limit that is (just) humanly perceptible. For example, if you're assembling a virtual crowd, you'll hear the reactions from across the globe perceptibly later than those from your neighbors. The only way to even this out is to slow everybody down, which would have its own effects on the feedback loop.

Or consider a virtual marching band without any visual cues, which is effectively what you have if it takes just as long to see the baton as to hear the drum. Not impossible, but a bit more challenging.

Sunday, October 28, 2007

We'll be with you after a brief delay ...

One of Deutsch's distributed computing fallacies is that latency is zero -- messages arrive the moment you send them. In practice, it takes a bit of time for your message to get through your firewall to your ISP's router, onto the backbone, to your recipient's ISP, through their firewall and to their host.

Much of the time, this won't matter. If your mail client is polling every five minutes, who cares if the packets containing your email took some fraction of a second to get to your mail server? On the other hand, if you're trying to do something on the net that requires precise synchronization, things can get hairy.

The Network Time Protocol keeps good time (to within about 1/100 of a second over the internet), but NTP relies on propagating accurate timing out from a small set of central servers. It doesn't try to keep everyone in sync with each other directly.

Depending on the circumstances, people can notice delays as low as 20-40 milliseconds. For example, a lag of 40 milliseconds between hitting a key and hearing a sound is enough for a musician to notice. Echoes become perceptible around 100ms and extremely distracting not long after.

Latency on a local network can be quite low, often just a few milliseconds. The round-trip time for pinging a major service can be in the teens and twenties. This is partly because the major providers replicate their servers so that you're generally talking to a server close to you.

For example, I pinged www.google.com.au (Google Australia) and got a round-trip time of about 15ms. That's pretty impressive, given that I'm about 15,000km from the nearest point in Australia and light travels at 300,000km/s. That would give an absolute minimum time of 100ms for the 30,000km round-trip. However, the name www.google.com.au resolves (where I'm sitting) to a server in Mountain View, CA. That's fair game, as long as the Mountain View server looks enough like the "real" one Down Under.
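The sanity check on that 15ms ping, in the post's numbers:

```python
C_KM_PER_S = 300_000.0   # speed of light, km/s
one_way_km = 15_000.0    # rough distance to the nearest point of Australia

min_rtt_ms = 2 * one_way_km / C_KM_PER_S * 1000
print(round(min_rtt_ms))
```

The floor is 100 ms round trip, so any ping that comes back faster than that never reached Australia at all.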

However, if I try to ping the Australian Broadcasting Corporation (which probably has little reason to put a duplicate server in my neighborhood), I get a more believable time of 200ms or so. Depending on circumstances, that much delay can cause problems. For example, a badly placed speaker phone in a conference call between Oz and the US can render conversation nearly impossible.

As it turns out, most of the populated world has water directly opposite on the globe, but there are a few extreme cases, such as Hamilton, New Zealand and Córdoba, Spain. There are also plenty of less extreme cases, whether Europe/Australia, California/India or what-have-you, where even the best case may introduce noticeable delays.

The high-order bit here is that some level of humanly perceptible latency is likely to be with us, in particular situations, no matter how fast the hardware gets. Moore's law can make the pipes bigger and the processors faster, but it will do nothing to increase the speed of light.