Tuesday, November 27, 2007

On the history of the telephone

Here's a brief but hopefully not-too-distorted history of the telephone:

In the late 1800's various people hit on the idea of using electricity to transmit sound over wires. It's not entirely clear who did what first (there was quite a bit of litigation at the time), but in 1875 Bell is granted a patent for "Transmitters and Receivers for Electric Telegraphs". By that point, the basic premise is in place: extend the existing communications technology (wires) to carry a new medium (sound).

At first, telephones are confined to early adopters in places like national capitals and Deadwood, South Dakota (there's a reason they called the dot-com madness a "gold rush"). Connections are originally point to point but exchanges are introduced very soon, providing a means of scaling the network up. To make a call via an exchange, you call the operator there, who then connects you to the person you're trying to call -- or to another exchange, ideally closer to your desired party, if that person doesn't belong to yours.

Adoption proceeds rapidly and vast fortunes are made, but full saturation takes decades even in industrialized countries. Beyond the basic technological leap of transmitting analog sound instead of bits, technological progress is incremental: better phones, switches to automate the exchanges, standards for phone numbers, area codes and so forth.

As the technology matures, reliability becomes a concern. Other features, such as conference calls, call waiting, touch-tone dialing and such are nice to have, but can be dispensed with as long as you can just pick up a phone and expect it to work. The possible exceptions I'd cite would be the answering machine and voice mail, which are more in the "how did we ever do without that?" category. Caller ID is another possible candidate. It definitely changes the interaction, but if I had to pick I'd probably go with voice mail. Your mileage may vary.

Obviously there are several similarities and contrasts to be drawn between the telephone and the web. One that I'd like to draw out here is the pattern of a world-changing new invention followed by incremental refinements.

The idea of serving hypertext over an internet aimed more at file transfer and remote logins shook things up. Compared to that, Web 2.0 concepts like tagging, microformats and social networking seem more like refinements. Useful refinements, to be sure, and ones whose combined effect will help make the web in 2010 noticeably different from the 2000 edition, but I don't see them as revolutionary. One could make a case that improvements in bandwidth (I won't say "broadband" because current broadband will look like a joke in ten years) will have more effect.

Granted, if you put enough incremental improvements together you end up with a qualitative change. Long distance calling today is an entirely different thing from the "have my operator call your operator" scenario I described above, and as a result the world is in a certain sense a smaller place. Nonetheless, I would expect the future history of the web to have relatively few "on this date ..." major moments and more "by the 2010's ..." summaries of progress.

Saturday, November 24, 2007

On the economics of anonymity

I'm still trying to get a good handle on the economics of anonymizers [and I'm not alone -- see here for pointers to a discussion in greater depth]. The first clear point is that clients use the service to offload risk, namely the risk of being associated with some particular activity on the web (three guesses what the most popular activity appears to be). When risk is transferred, there will generally need to be some kind of compensation. This is a basic economic proposition, one that's been back in the headlines lately.

But just where is this risk going? The first guess is the exit nodes. After all, it's the exit nodes that actually contact the services being used and would seem to have the most explaining to do if The Man starts asking questions. They also appear relatively easy to find. For example, if I continually send anonymous messages to myself, I should expect to hear from every exit node sooner or later (if the routing prefers a particular path for a particular client or server, compared to random chance, that could be used to narrow down the identities of one or both).
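To put a rough number on "sooner or later": if exits are chosen uniformly at random, this is the classic coupon-collector problem. Here's a quick Python sketch (the uniform-choice assumption is mine; real anonymizers weight routing by bandwidth and other factors):

```python
import math
import random

def trials_to_see_all_exits(n_exits: int) -> int:
    """Simulate sending messages to yourself via a random exit each time,
    counting messages until every exit has been observed."""
    seen, trials = set(), 0
    while len(seen) < n_exits:
        seen.add(random.randrange(n_exits))
        trials += 1
    return trials

n = 1000
# Coupon-collector expectation: n * H(n), roughly n * ln(n) messages.
expected = n * sum(1 / k for k in range(1, n + 1))
print(f"expect ~{expected:.0f} messages to see all {n} exits")
print(f"one simulated run: {trials_to_see_all_exits(n)}")
```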

However, if The Man is really trying to find out who's on the other end of the connection, busting the exit node operator is not going to help, except perhaps to weaken the network as a whole. There may be jurisdictional problems as well. This pushes the search back to the clients.

Where we go from here probably depends on exactly how you analyze the anonymizer in question. Let's assume that The Man can make a better-than-random guess as to whether any given person is using the anonymizer. This seems very likely if relatively few people are using it. The set of users will include pure clients, who only use the anonymizer but don't relay traffic or act as exit nodes, as well as the exits and relays themselves, who as far as I can tell have no way of proving they're not also clients.

Under this assumption, and all other things being equal, the risk is spread evenly among all the nodes, whatever their type. In that case, risk is certainly being transferred, namely from those with more to lose from exposure to those with less, but in a perfect anonymizer it's impossible to tell who is which. The basic arbitrage opportunity is there, but there appears to be no way to exploit it.

Or at least, no way for an outside observer to exploit it. If I'm, say, running a relay node but also using the anonymizer to do something truly hairy, I can be reasonably sure I have more to gain than someone just sitting at work perusing material that violates company policies. In effect, most of the clients are acting as a smokescreen for my activities. That in turn makes it worth my while to contribute greater-than-average resources to the network. At least, if I can do so without anyone noticing.

That seems a plausible story, but I'm not at all confident that I've understood the full implications here.

Wednesday, November 21, 2007

Clouds and onions

Hal Finney comments, correctly, that the story I told in Anonymous Three-Card Monte misses a couple of significant points. So here's an attempt to rectify that.

Generally, when you use an anonymizer, you talk to the anonymizer -- that is, to some set of hosts participating in the anonymizing. After several carefully managed intermediate steps the anonymizer -- that is, some participating host -- talks to the service you're really interested in. To that service, it looks like the request came from the IP address of that host, not yours, because that's indeed who's talking to it.

One way to look at this is to consider the anonymizer as a "cloud". You don't really know what goes on inside. An outside observer would see traffic between you and the cloud and between the cloud and the real service. It would also see a lot of random encrypted traffic among the hosts in the cloud, but as long as there are enough users (or at least computers spitting out random encrypted bits and pretending to be users) for the "I'm Spartacus" effect to kick in, that outside observer can't connect you to your real service.

Good anonymizers use multiple hops inside the cloud, each of which is unaware of the rest of the chain, to provide multiple layers of protection, like layers of an onion.
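To make the onion metaphor concrete, here's a toy sketch in Python using the cryptography package's Fernet recipe (my choice for brevity; real onion routers negotiate per-circuit keys and speak their own protocols). Each hop can peel exactly one layer and learns nothing about the layers beneath:

```python
from cryptography.fernet import Fernet

# One symmetric key per hop; a real anonymizer negotiates these per circuit.
hop_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message: bytes, keys) -> bytes:
    """Encrypt for the last hop first, so the first hop peels the outermost layer."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def peel(onion: bytes, key: bytes) -> bytes:
    """All a single hop can do: remove exactly one layer."""
    return Fernet(key).decrypt(onion)

onion = wrap(b"GET /page HTTP/1.1", hop_keys)
for key in hop_keys:          # each hop in turn strips its own layer
    onion = peel(onion, key)
print(onion)                  # only the exit node sees the plaintext request
```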

The hosts, called "exit nodes", that talk to real services have to use their own IP addresses. Because of this, an outside observer could say "at such and such time, someone at this IP address connected to this Very Bad Site." If you're just using the anonymizer, but not participating in the cloud, there's zero chance that your IP address will be connected directly to the Very Bad Site. In effect, the exit nodes have collectively taken on that particular risk for you.

On the other hand, if you're using an anonymizer, you should probably pessimistically assume that an outside observer could tell which hosts are in the cloud. That is, you should assume that people can tell you're using the service. You should also take care that you use an encrypted connection to your real service. The exit node can only do what you (indirectly) ask it to, and if you don't ask it to use encryption, someone watching could say "I don't know who's at the other end of this connection, but whoever it was logged into this Very Bad Site under the name of ..." Caveat browsor.

So where does that leave the original story?

Well, IP addresses are being pooled, but among exit nodes, not among exit nodes and users. If you're an exit node, your IP address will be directly associated with the activity of random people whom, if the system is working, you have no way of identifying. This means you may have some explaining to do, more or less as described in the punchline. And you may have less explaining to do if your node has an IP address from a country that doesn't keep close track of who's using what address. Assuming there are such.

If you're running an anonymizer and not charging money for it, you might consider requiring anyone who uses the service to be prepared to host an exit node as well [It's not clear how you'd convince someone they wanted to do that. See this later post, for example.]. This, arguably, distributes the risk fairly. As a corollary, it also produces the "big pot of IP addresses" scenario that I originally described.

However, if you're just using such a service and not acting as an exit node, you shouldn't have to explain much more than why you're using an anonymizer. Beyond that, you can shoot yourself in the foot in a variety of ways, whether by failing to encrypt your connection to your real service, by giving away more information than you think you are, by confiding in someone who turns out not to be who you thought they were, or by making some similar mistake. But the anonymizer can't help you there.

The larger point here is that you should be good and sure you understand what an anonymizer can and cannot do before you decide to use one.

Tuesday, November 20, 2007

Barristers and bloggers

Picking up where I just left off ...

Some professions seem fairly immune to technological change. The law is one. As the man said, lawyers find out still litigious men. If automobiles supplant buggies and consign the buggy whip makers to a small niche, chances are everyone involved will want to consult a lawyer sooner or later.

Which brings up a question: In the spectrum from buggy whip makers through blacksmiths, brewers and bakers to lawyers, where do writers fit in? My fond hope is that it's closer to the lawyer end (at least in terms of viability), and I think there's some evidence for that.

The odds seem good that there will continue to be viable business models in which writers get paid, whether it's through advertising, or as part of the production of interactive games and experiences or perhaps some other way. Certainly people still seem interested in text and in scripted entertainment.

And yet the writing game must surely be changing. Consider blogging. That's something radically new and different, right? Well, it depends. Certainly the medium is new, but just how has it changed the game?

For example, this blog, along with many others, is basically a column. The genre has been around for quite a while. The present example owes as much to E. B. White (at least as a model to strive toward) as it does to the pioneers of the web (to whom it also owes much).

What about political bloggers, with their game-changing, king-making deal-breaking influence? Is this a new phenomenon, or is it just political activists -- players in another very old game -- making use of the latest technology? (Let me add that when I say the game is old, I'm not claiming that all political bloggers are working for a particular party. Grass-roots activism has its own long pedigree.)

What about the celebrity and gossip blogs? Again, I'd argue that's an old genre in a new medium, and similarly for music blogs, personal journals and much if not all of the other material I've run across in the blogosphere.

What about the web of reactions among blogs? Surely this is new, could only have happened on the web. Well, no and yes. No, because deliberative exchanges in writing are most likely as old as writing itself. But yes, because the ability to quickly build up such a discussion, and to easily navigate through it later, is new and has a very web-ish flavor.

So what am I trying to say here?
  • Writing as a profession seems to benefit from the web, rather than being marginalized by it.
  • The web offers new media for writing, but the genres are probably largely the same.
  • Web media offer new possibilities but, IMHO, the similarities to old media are at least as significant as the differences.
On that last point, I might liken the situation to 3-D movies vs. traditional ones. Yes, there is a difference, but the basic experiences are more similar than different.

Buggy whips and blacksmiths

Pity the poor buggy whip, the icon of technology's scrap heap. Do you miss the sturdy heft of the old WE302 telephones? Alas, they've gone the way of the buggy whip. Can't stand the latest annoying gadget? Don't worry, the paradigm will soon shift and it, too, will go the way of the buggy whip.

Just what is a buggy whip? As the name implies, it's a small whip used to drive the horses pulling a buggy or carriage. Buggy whip manufacture used to be a prominent industry, but that changed when the automobile came along.

The implication is that when a new technology comes along, older ones are left in the dust. Consider blacksmithing. Look up at the older buildings in many cities and you're likely to see a lot of wrought iron (wrought iron is worked by hammers and such, while cast iron takes its shape from the mold it's poured into). That iron was likely worked by small armies of blacksmiths under the supervision of a master smith.

Blacksmithing was an important profession anywhere there was iron, which was a large portion of the world. In smaller towns, the smith would also act as a farrier, shoeing horses. But all that's gone the way of the buggy whip. With newer machining and manufacturing processes available, why would anyone take the time to work iron by hand, at least in the industrialized world?

Except ... blacksmiths are still very much around, and doing reasonably well for themselves. What do they do? Apart from producing pure sculpture, they build fences, handrails, window bars, fireplace tools, weather vanes and anything else that can usefully be made of wrought iron. Generally a hand-wrought item will cost more than something from the local big-box store, but it will also look better, custom-fit the site and provide a one-of-a-kind design. Enough people like that enough to keep modern blacksmiths in business.

The same has happened with many of the traditional crafts. Witness the resurgence of local breweries and bakeries, which are now called microbreweries and artisan bakeries, much as guitars are now called acoustic guitars. There are any number of other examples. Free associating from "acoustic guitars", drum machines were supposed to put drummers out of work, but they didn't.

That's not to say that new technology is necessarily good for old technology. There are, after all, many fewer blacksmiths, brewers and bakers than there used to be. But neither is it a death sentence. It's also worth noting that many modern blacksmiths use gas forges and power hammers, and state-of-the-art brewing and baking equipment is, well, state-of-the-art.

Not even the buggy whip has gone the way of the buggy whip, if that way is supposed to be extinction. They're still made, just not as many or by as many people.

What does this have to do with the web? I'm getting to that ...

Monday, November 19, 2007

Scented junk mail. Oh dear.

Apparently, the advent of email has reduced the volume of snail mail. I say "apparently" because my own mailbox never seems empty. In an effort to counteract this trend, the British Royal Mail has, on advice from an Oxford consulting firm, opted to try "reinventing" mail, taking it from a two-dimensional medium to a "three-, four- or five-dimensional medium."

I'm not making this up. You can read it here.

How to do this? Traditional mail is aimed at the visual system because, well, the visual system seems particularly well tuned to the kind of information we want to convey with mail. That's why we have text. But in this modern, digitized age, that's not enough. Modern mail must be enhanced by adding elements of sound, smell and taste.

The Royal Mail is on the case with a sales force of 300 dedicated to helping businesses develop, and decide they need to send, "noisy, smelly junk mail" (that's probably not the designation the consultants at Brand Sense had in mind, but it seems apt).

As far as I can tell, the underlying rationale is that the mail needs to compete with email, and its unique advantage lies in being able to engage all the senses. Since email can send sound and video just fine -- more conveniently, one could argue -- that really leaves smell, taste and touch.

On the radio piece I heard, the Brand Sense spokesman described using scent not just in a literal way, as with perfume or dish soap, but more abstractly. Use citrus scents if you want to convey freshness and excitement, for example. Intriguing, to be sure, but just what need are we trying to fill here?

The whole idea of the state-run mails competing with email seems strange. If there's less need to send paper around thanks to email, that's a good thing, not a problem that needs to be solved by inventing new kinds of paper to send around, much less expending resources actively trying to convince people to do so.

Mind, I expect the consultants would have a different take.

Sunday, November 18, 2007

An editorial note

On re-reading my first post on Richard Stallman and trusted computing, I found myself unsatisfied with the way I had represented FSF's position on software copyrights. My first impulse was to fix the text, and indeed I did just that.

The result was even more unsatisfying, not because it was wrong -- as far as I can tell it was better -- but because, even though I clearly noted that I'd made the change, it just didn't fit with my view of what a blog should be.

This is a blog, not a wiki. On a wiki, the edit history is readily available. On a blog, it's not (even to me, as far as I can tell). In this case, I could
  • Quietly change the text. I do this routinely with typos I catch on re-reading, or missing or inconsistent tags, or prose I just don't like. For example, I've tightened up the punchline of Anonymous three-card monte at least twice. But in this case, the change was substantive. Quiet, substantive changes seem out of bounds.
  • Make the change and mark it as such. That's what I originally tried, but that left only my description of the original text, and that didn't seem right, either.
  • Use strikethroughs, italics and such to show the changes explicitly. Frankly, by the time I considered that, I was tired enough not to want to bother with it. It would give the fullest disclosure, but it would also be well down the road of trying to make a blog into a wiki.
So instead I put the text back as best I could and put in a note linking to a later post that (in my opinion) handled the topic better. This seems like a good balance to me, and I think I'll stick with it. Purely editorial changes will continue to go in quietly, in a lame attempt to present myself as a more careful writer than I actually am.

Substantive mis-steps will stay in place [but I may comment on them later --DH 9 Sep 2010]. If a later post or comment adds something significant to an existing post -- whether the existing post is wrong or for some other reason -- I'll try to put in a note the next time I review. Naturally, the backlink feature is useful here as well, but a nice, visible [italicized note] should make things clearer.

That is all. We now return to our regularly scheduled programming.

Friday, November 16, 2007

Sixty-year-old computer slower than modern emulator. Film at 11.

I suppose that's not really a fair summary of this BBC article on a commemoration of the cracking of Nazi codes by Colossus, one of the first modern computers (depending on one's definition of "computer"). Still, it hardly seems surprising that an emulator running on a laptop would be faster than the 1940's original.

Re-assembling Colossus and getting it running, though -- that's a neat hack. Especially since the original machines were cut into pieces after the war.

Laptop orchestras. You read that right.

The University of York has been getting publicity lately for its Worldscape Laptop Orchestra, currently billed as the world's largest, though not the first. Others include the Moscow Laptop Cyber Orchestra and Princeton's PLOrk. Create Digital Music has a good summary. [There doesn't seem to be a good permalink for the Worldscape site yet -- I'll have to remember to fix the link when there is one][I've updated the link from York Music's home page to the press release for Worldscape. They still don't seem to have their own page, which leads me to wonder if they're still around].

So just what is a laptop orchestra? A bunch of people clicking "play" on some mp3 files and listening to the results? Not at all. Worldscape and its cousins are bona fide orchestras, making live music, often collaborating with more traditional instrumentalists and at least in the case of Worldscape, requiring a conductor. There is also at least one club sponsoring open jam sessions where anyone can show up with their gear, plug in and play.

The key here is the interactive element. An instrument in a laptop orchestra isn't just spewing out pre-programmed bits. It's responding to the musician's input, whether through specialized controllers, gestures grabbed by a video camera, or whatever else. As with any other orchestra, the musicians respond to each other, to the conductor (if any) and to the audience. The result is a genuinely live musical performance.

One telling detail: How do you record a laptop orchestra? You might think you'd just capture the digitized sounds the laptops are producing and mix them down. That's certainly possible, but if you want to capture the experience, it's better just to put mics in the house and record what the audience is hearing.

That's not to say you couldn't do the same thing online. I've heard of small-scale live musical collaborations over the net (though I can't remember where). I suspect, however, that keeping an orchestra of fifty in sync online is going to be a problem. I doubt you could just put everyone on one big Skype conference call, but if it's been done on that scale I'd be glad to be proved wrong.
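Some rough arithmetic suggests why (the tolerance figure below is a commonly cited rule of thumb for ensemble playing, not a measurement of mine):

```python
# Rough latency budget for playing music together over a network.
# Musicians reportedly start to struggle when mutual delay exceeds
# roughly 25 ms -- a rule of thumb, not a measured constant.
ensemble_tolerance_ms = 25

# Light in fiber travels at about 2/3 c, i.e. ~200 km per millisecond.
km_per_ms = 200
max_one_way_km = ensemble_tolerance_ms * km_per_ms
print(f"hard physical limit: ~{max_one_way_km} km one-way")  # ~5000 km

# Real paths add routing, queuing and audio buffering on top; typical
# cross-continent round trips of ~100 ms blow the budget several times
# over before the first note is heard.
```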

Wednesday, November 14, 2007

A bit of clarification on anonymity

In previous posts (like this one, this one and maybe this one) I've taken a fairly skeptical tone concerning anonymizers and such. I wanted to take the opportunity here to clarify that a bit.

It might seem that I think that tools like anonymizers are a waste of time or that only miscreants are likely to use them. That's not what I think.

There are certain situations where anonymity is extremely valuable. Real journalism requires anonymous sources. Some crimes and abuses will only be exposed if those in the know -- including both victims and perpetrators -- can come forward without risk of identification. Political action sometimes requires anonymity. The Federalist Papers come to mind.

So when I take aim at certain quirks and pitfalls of anonymity, I'm not trying to write off anonymity entirely. I'm just trying to point out aspects of anonymity on the web that are trickier than they might seem (and therefore, frankly, fun to write about).

Wikipedia's angle on anonymous IP addresses

I'm not sure when this kicked in, but the message you now get when you edit a page anonymously is intriguing ...
You are not currently logged in. While you are free to edit without logging in, be aware that doing so will allow your IP address (which can be used to determine the associated network/corporation name) to be recorded publicly, along with the dates and times at which you made your edits, in this page's edit history. It is sometimes possible for others to identify you with this information. If you create an account, you can conceal your IP address and be provided with many other benefits. Messages sent to your IP can be viewed on your talk page.
So in other words, if you have a user name, you're more anonymous than if you don't. It's an interesting angle.

From its beginnings, Wikipedia has been beset by anonymous vandals who find out about Wikipedia's "anyone can edit" ethos and think "Whoa, dude, I can totally write 'My math teacher sucks' here and no one will know who did it", or something similar, but generally less sophisticated and coherent.

Fortunately, a number of Wikipedians have taken it upon themselves to make life better for the rest of us by constantly scanning the change logs for such drivel and reverting it. One does occasionally run across vandalized pages, but in general vandalism gets reverted within seconds. And may I join the rest of the community in repeating my sincere thanks for that.

With that for background, it's easy to see why the community would want to discourage anonymous editing in the first place. On the other hand, it wouldn't do to ban it entirely. Anonymous editing (and editing by registered users who, erm, forget to log in from time to time, not that anyone would do that ...) is a valuable part of the process. Trying to prevent it while still promoting anything like an open culture would be an exercise in frustration as vandals worked out ways of gaming the system anyway.

And thus the current formulation, part carrot -- register and you can create your own persona and reap other benefits -- and part stick -- misbehave and people may well be able to track you down. Oh look: all those nasty edits to the page on FooCorp are coming from BarCorp's IP addresses.

It also warns legitimate anonymous editors that they may not be as anonymous as they think. If you're blowing the whistle on FooCorp, do it from a cybercafe or public library, not from your office at FooCorp (well, you knew that anyway, didn't you).

I'm Spartacus!

It's one of the great scenes in film, one that, if you're like me, you've seen even though you haven't actually seen the film itself. After wreaking havoc against the Roman armies, Spartacus and his followers are finally defeated and captured. It's clear that Spartacus is destined to die a horrible death -- if only the Romans can figure out who he is.

To get his followers to rat Spartacus out, the Romans promise leniency to the person who will identify him. Instead, those assembled stand up one by one and shout "I'm Spartacus!"

That's anonymity in a nutshell. Spartacus might be anyone in the crowd. If the idea was to single Spartacus out, the Romans are no closer than they were to begin with. When you use an anonymizer, you're in much the same situation. It's not hard to establish that you might be the person who engaged in some particular interaction, but if the anonymizer is doing its work, there's no way to tell that it was you in particular and not some other user. Everyone's in the same boat.

This "everyone's in the same boat" factor lends anonymity a peculiar flavor. Looking at it from that angle, why would I use such a service if I didn't feel I had more to lose than the average user? This in turn will tend to throw the average user in with a fairly interesting crowd. I'm guessing here, of course. There aren't a lot of reliable usage statistics available.

I'm also guessing that most people using anonymizers aren't up to anything particularly nefarious and either value privacy on principle or just like the concept. How does that square with the previous point? Probably most people figure there's safety in numbers. Whatever those involved stand to lose, there is presumably a smaller chance they will lose it than if each operated alone.

"Sure, there may be some bad apples in the crowd, but they can't arrest all of us just to find them. And if they come for me, I can prove I'm not up to anything bad."

At which point it might be worth pointing out that in the film, Spartacus and his followers end up crucified.

(A side note: Not only do the Romans know no more than when they started, everyone now knows this. It's a neat case of common knowledge in action. By contrast, in the classic "question them separately" scenario, the person being interrogated has no idea who has said what to whom.)

(Another side note: The real Spartacus most likely died in battle. The whole scene is just a nice bit of dramatic license.)

[And finally ... this later post in the anonymity thread references Latanya Sweeney's work in anonymity, specifically the notion of an "anonymity set", which formalizes the intuition that the more people you could be mistaken for, the more anonymous you are. Another later post references Alessandro Acquisti, Roger Dingledine and Paul Syverson's work on the economics of anonymity, drawing on the economic notion of a public good.]

Tuesday, November 13, 2007

Middle ground on GPS and privacy

Let's assume that GPS evidence becomes generally admissible in court. It's already worked a few times. Besides the case I mentioned, there has been a similar case in Australia (my thanks to two anonymous commenters for the pointer).

So how is this going to work? I get busted for speeding. I bring a printout to court from my GPS saying I was doing the speed limit. The judge says "and how do I know you didn't just fabricate this?" That's not going to work.

On the other end, we have the case I first mentioned, where the GPS coordinates are getting beamed back to some third party for perusal. The GPS itself is presumably tamper-resistant. I'm presuming this because the evidence stood up in court, and because there are existing GPS applications, such as monitoring commerce and monitoring people under house arrest, where tamper-resistance is at a premium.

That ought to work just fine, but who wants to run around with a GPS reporting their every move, just to get out of a possible speeding ticket? The stepson in the case certainly didn't. He just didn't have much choice.

Fortunately, there's a middle ground. A tamper-resistant (and probably tamper-evident) unit that can provide its logs if asked (ideally with proper authentication), but doesn't just broadcast them. As far as I can tell (and again I haven't done the legwork here), that's what happened in the Australian case.

This seems like a decent paradigm for Trusted-Computing-like devices that use techniques like strong encryption and special hardware to try to ensure that everything is what it appears to be. As with the music/video case, the trusted device performs a specialized function and doesn't need to be highly upgradeable.

Unlike the classic TC scenario, the trusted device is not in frequent communication with the mothership. Its job is to hold sensitive data and divulge it only when I ask. Or more accurately, when someone who can prove they know a particular secret asks. Much like a personal datastore.
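Here's a minimal sketch of that hold-and-divulge pattern in Python: hash-chaining makes the log tamper-evident, and an HMAC challenge-response stands in for "prove you know a particular secret". The structure and field names are my own invention, not anything from the actual devices:

```python
import hashlib, hmac, json, os, time

class TripLog:
    """Hash-chained position log: altering any entry breaks every later hash."""
    def __init__(self, owner_secret: bytes):
        self._secret = owner_secret
        self._entries, self._last_hash = [], b"\x00" * 32

    def record(self, lat: float, lon: float, speed: float) -> None:
        entry = json.dumps({"t": time.time(), "lat": lat,
                            "lon": lon, "speed": speed}).encode()
        self._last_hash = hashlib.sha256(self._last_hash + entry).digest()
        self._entries.append((entry, self._last_hash))

    def divulge(self, challenge: bytes, response: bytes):
        """Hand over the log only to someone who proves knowledge of the secret."""
        expected = hmac.new(self._secret, challenge, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, response):
            raise PermissionError("not the owner")
        return self._entries

# The owner (or their lawyer) answers a fresh challenge to pull the log:
secret = os.urandom(32)
log = TripLog(secret)
log.record(40.0, -75.0, 44.0)
challenge = os.urandom(16)
print(log.divulge(challenge,
                  hmac.new(secret, challenge, hashlib.sha256).digest()))
```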

Monday, November 12, 2007

Anonymous three-card monte

Practically everything that happens on the net has an IP address attached to it. That's even a decent working definition of the net: anything that happens with an IP address attached.

You can find out a lot from an IP address (you can find out about yours here). IP addresses are typically tied at least to an ISP and to a location near your actual address, often in the same town or one nearby. In some cases they can be nailed down more exactly.
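You can see some of this yourself with nothing fancier than a reverse DNS lookup (a sketch using Python's standard library; whois and geolocation databases will tell you considerably more):

```python
import socket

def describe(ip: str) -> str:
    """Reverse-DNS an address; the hostname often names the ISP and region."""
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
        return hostname
    except socket.herror:
        return "(no reverse DNS record)"

# A residential address often comes back looking something like
# 'c-xx-xx-xx-xx.hsd1.nj.comcast.net' -- ISP and state in one string.
print(describe("8.8.8.8"))   # dns.google
```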

If The Man decides to subpoena your ISP, your ISP can generally provide your exact house address from their records. Even without cooperation from the ISP a dedicated snooper, working for The Man or otherwise, could compile a record of what sites your IP address connected to and, depending on the exact sites, find out all sorts of things about the person or persons using that address, possibly including their identities.

Naturally, not everyone is comfortable with that. Even someone with little to hide may still want to keep it hid, if only out of principle. As a result, there are several services available promising anonymity.

This approach is not without its pitfalls. The site I mentioned above has a pretty good rundown on this. Basically, if you are using an anonymizing service, you are investing a pretty high level of trust in it. Good anonymizers recognize this and take steps to ensure that not even they know what's going on ... an interesting business to be in, to say the least. But hey, Swiss banks seem to do OK.

Now when you visit a site through an anonymizer, that site still has to see some IP address. Otherwise the protocols just don't work. Since you're anonymous, it can't be your IP address, so whose is it? They can't just make one up. Someone else might already be using it, resulting in all sorts of havoc. One approach is to grab a block from some lightly-regulated area. Hmm ... this site sure is getting a lot of traffic from Elbonia these days ...

Another is to take the IP addresses of all the people using the service (and there had better be a bunch -- an anonymizer with only one user is not fooling anybody) and throw them in a big pot. When you go to visit a site, you get an address out of the pot [As Hal Finney points out below, this is somewhat oversimplified, but let's go with it.  See this followup for a more accurate picture --D.H. May 2016].
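Here's a toy simulation of the story as told (and, per the note above, oversimplified); the names and addresses are made up, with the IPs drawn from the reserved documentation range:

```python
import random

users = ["alice", "bob", "carol", "you"]
ip_of = {name: f"203.0.113.{i}" for i, name in enumerate(users)}

def visit(user: str, site: str) -> None:
    """The anonymizer hands out an address from the pot,
    not necessarily the visitor's own."""
    borrowed = ip_of[random.choice(users)]
    print(f"{site} logs a visit from {borrowed} (actually {user})")

for _ in range(5):
    visit("bob", "verybadsite.example")
# Run it a few times: sooner or later the logs show *your*
# address attached to bob's browsing.
```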

So you decide to use such a service to, well, it's not any of my business, is it? Someone else decides to use this service to visit a Very Bad Site because, well, they don't want anyone to find out, now do they? When they do this, the service happens to pick your IP address out of the pot.

Then The Man comes a-knocking. Your story is: No sir, I was not using that site. Someone else was using my IP address to visit that site. No, I don't know who. You see, I use an anonymizer that switches my IP with other people's. Why? With all due respect, that's none of your business, sir.

Best of luck with that. Bear in mind that The Man is not always known to appreciate the subtleties of such arguments.

Sunday, November 11, 2007

One teenager's dilemma (and ours)

I heard this on the radio the other day ...

The stepfather of a teenage boy, concerned about the stepson's driving, has a GPS installed in his car. The GPS reports back to its mothership and Dad can log in to check up. It will also email Dad if the car exceeds a given speed. This happens once, resulting in a 10-day loss of car keys.

Not surprisingly, the stepson is not entirely thrilled with the arrangement.

Then one day the stepson gets a speeding ticket. Radar has him going 60+ in a 45. GPS says he was doing the speed limit. As the radio story airs, Dad is in the process of challenging the ticket in court, on the grounds that the GPS is much more reliable than radar. The stepson still hates the GPS, but admits that, just this once, maybe it's not such a bad thing.

And there's the whole privacy dilemma in a nutshell: We'd love to have the cameras running when it benefits us, but the rest of the time, whether we're misbehaving or just being our normal boring human selves, we'd just as soon be left alone.

This is not an entirely new problem, of course. Privacy concerns have been around as long as people have lived around each other, which is pretty much as long as there have been people.

Modern privacy concerns are not so much about whether your neighbor knows what you're up to, but about who gets to be your neighbor and the balance of power between eavesdropper and eavesdroppee. From time to time, technology disrupts that balance (anyone remember party lines?) and society has to work out new rules to reclaim it.

One could make a reasonable theoretical argument that in a rational society, everybody benefits if everybody knows everything about everyone. But society is made up of people and people aren't rational. If the only choices are complete surveillance and complete privacy, I would tend to side with the stepson on this one and go for privacy. Those aren't the only choices, though.

Saturday, November 10, 2007

Trusted computing: What could be better?

The fundamental tension behind trusted computing is over programmability. Someone sending out protected content wants to be sure that it can only be accessed on a restricted set of particular devices. This is a lot easier if the devices in question are not highly programmable. In the case of a portable music player or set-top box, the keys involved can be kept in special tamper-resistant hardware and otherwise protected from exposure or modification.

If your playback device is a general-purpose computer, the game becomes a lot harder. I could send you a player application with a key branded into it, but there are any number of ways to get such a player to yield up its secrets, or yield up the unprotected content without having to uncover the secrets themselves.

The trusted computing model tries to combat this by restricting access to the information on a given computer and tightly controlling all modification to such an otherwise-programmable device. In other words, the vendor asserts control over programmability. It is this idea, not the idea that the creator of content should have control over the content, that fundamentally conflicts with the ideas (and ideals) of personal computing in general and free software in particular.

The TC model, depending on tight control of all possible modifications, is inherently fragile. Compare it to the models used in modern cryptography (on which it heavily relies). In modern cryptography, one makes extremely pessimistic assumptions about what will happen in practice.

For example, in designing a cipher, one typically assumes an adaptive chosen-plaintext attack. This means that the attacker can repeatedly choose a message to be encrypted, look at the resulting ciphertext, choose another message to be encrypted and so on. This did not come about by accident. There are various ways a real-world attacker can perform such an attack on a real-world cipher.

Cipher design, and robust engineering in general, assumes that anything that can go wrong will. This generally means minimizing the number of dependencies and moving parts. The RSA cipher, for example, consists of raising a number representing the message to a known power and taking the remainder against a large number, called the modulus. The modulus (along with a second exponent used to decrypt the message) is derived from two large, randomly-chosen prime numbers by a simple recipe.

That's it. That's one of the most secure ciphers known. But even with that simple recipe there are known subtleties in choosing a good key and in preparing messages for encryption in order to avoid various attacks.
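To show just how little machinery there is, here's the recipe as a toy Python sketch, using the standard textbook numbers (far too small to be secure, and skipping the message-preparation subtleties just mentioned):

```python
# Toy RSA with absurdly small primes -- for illustration only.
p, q = 61, 53
n = p * q                    # the modulus: 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent, 2753 (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)     # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)   # decrypt: c^d mod n
assert recovered == message
print(ciphertext, recovered)        # 2790 65
```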

Trusted computing relies on five key technologies, which interact in various ways to provide the full model. You need hardware support in several places to even have a chance at making it all work. There are legitimate questions about how all this will affect basic system functions like backup. It's quite clear that any TC system will be actively attacked by hackers in both senses (I shouldn't get started on this, but I still like to think of "hacker" as meaning someone who does clever things with technology for the sake of learning and having fun; the more popular meaning is someone who tries to break into systems).

It doesn't seem like a good bet.

Trying to prevent or control modifications to a general-purpose computer is swimming upstream. The main driver here is to protect content like music and video. That requires a tamper-resistant decoder (and faith that this is a worthwhile exercise, despite analog reconversion). From this point of view, TC tries to enable general-purpose computers to become decoders by first making them tamper-resistant.

The alternative is not to try to make general-purpose computers into decoders. If my computer has an encrypted-bits-to-sound-and-video decoder attached to it, then I can reprogram my computer all I want, and I can make as many copies of protected content as I want. When I want to play a song or video, I send it to my decoder, which has all the attributes TC wants: it's tamper-resistant, non-programmable and has a private key embedded in it as tightly as modern technology will allow.

I can use my favorite software to index the content that I've bought the rights to, to sequence it, to dispatch it to the various decoders I own and so forth. I can use my favorite non-media software without having to worry about what measures my OS vendor is taking to control my use of the content I bought the rights to.

This is not too far from how current content-delivery systems like cable and satellite boxes work, as I understand it. Given that, it's not clear to me how much farther we need to go down the TC road.

The GPL and copyright protection vs. copy protection

One interesting thing about the material on the GNU website is that much of it is copyrighted. This is a bit counter-intuitive, but it makes perfect sense. Copyrighting something doesn't mean prohibiting copying. It means laying claim to the right to set the rules for copying content and creating derivative works from that content. [I'm speaking from a US perspective here. International copyrights are a matter of international law and treaties, to which the US is a signatory]

The GPL (or "copyleft") and its descendants are a perfect example. Putting something under the GPL is not the same as putting it in the public domain. Something under the GPL can be copied freely just like something in the public domain. Unlike something in the public domain, it can only be modified and copied so long as the modified version is also under the GPL. In particular, this prohibits removing the notice that something is under the GPL (in general, something can be under copyright without carrying a notice at all). It also ensures that improvements to GPL software will be publicly available.

The basic concept of the GPL has stood up to (intense) legal scrutiny over the last twenty years and enabled the production of large quantities of useful software, including much of what I'm using to put this blog on the web. It has done so because it fits with the basic idea of copyrights, both in the literal sense of regulating the right to copy, and in the larger sense of promoting the production of useful content.

I still fondly remember the feeling of "wait ... that can't possibly work ..." followed by the gradual realization of "hey ... that might just work ..." and "wow ... this is really working ...". Definitely a neat hack.

Thursday, November 8, 2007

How do writers get paid without copy protection?

Hal comments (thanks!) that music and video can't be protected, and as a result only interactive content, particularly games, will be commercially viable. Music and video will be produced mainly as an adjunct to games and given away free. I was going to get into this anyway, so that seems like a great jumping-off point.

First, I completely agree that music and video can't be protected by purely technical means, and that interactive media have a much better shot. Copy protection fundamentally requires a tight connection to an object or event in the physical world, and interactivity provides such a connection. Frankly, though, I'm reluctant to claim categorically that anything can be effectively copy-protected, even when it looks like there's an airtight case. Where there's a will there's very often a way.

Nonetheless, the model of paying a fee to be able to participate in an interactive experience with others looks viable. It certainly seems to be working so far. I can also see single-user cases working, particularly if the game play has a random element to it. This also seems to be working so far.

On the other hand, I'm not convinced that it's necessary to protect content strongly in order for content creators to make a living. Text, for example, is impossible to protect. No one really tries, that I'm aware of. The exception would be environments where secrecy is at a premium, in which case protection is generally a combination of encryption to prevent casual access and stiff penalties for giving away the keys or unencrypted content.

And yet writers can still make (a little) money. How?

First, the traditional print model is still alive. I've heard newspaper publishers express concerns, given that classified ads have serious alternatives online (often including the newspaper's own online ads), but daily papers are still around, as are independent weeklies, supermarket tabloids and mass-market magazines, coffee-table books, specialized trade rags, pulp fiction and pretty much everything else.

Not only has print not disappeared, I'm not sure I can even think of a particular commercial genre or format that's disappeared. You'd think by now something would have. Small-circulation newsletters tend to be emailed or on web pages these days, and office memos have (thankfully) more or less bitten the dust, but those are non-commercial.

Print has (at least) three ways of making money. At least one of them carries over quite well to the online world:
  • The book as a physical artifact. Coffee-table books look great. You just can't get that on your screen. If nothing else, the resolution and contrast are way higher than for anything you can get on a screen. Other books use special paper and fancy bindings to look gorgeous. Children's books are particularly inventive in using pop-ups, textured materials and so forth. These are niche markets, though. What's interesting is that nothing electronic yet seems to have killed off even the humble paperback.
  • Subscriptions. You pay me a regular supply of money, I give you a regular supply of content. This seems to work well for cable TV and satellite radio. In the case of music radio, you can get all the content elsewhere, but what you don't get is the particular selections and, of course, the charming DJ's. Talk radio is more of a live performance and in many cases interactive. As such, it's one prototype for (probably) copy-protectable interactive content. Not all radio or TV is paid for by subscription, though, and pure subscription models are fairly rare in print (investment newsletters come to mind). The alternative, of course, is
  • Advertising. Not even Mad magazine survives purely on subscriptions any more. Plenty of "free" publications survive on advertising alone. Online, without the cost of printing presses, paper and delivery, no one needs to charge a subscription fee. Why bother, given that it's trivial to copy the content? So instead, you get a variety of ads in exchange for the convenience of being able to get a document hot off your search engine. There are various technical ways of stripping out ads, but the dirty secret is this: ads are actually useful, at least sometimes. The symbiosis between "real" content and ads has been around for a long time and is very much alive and kicking now. Just ask Larry and Sergey.
This by no means exhausts the possibilities for making money without strong copy protection. I've singled out text here because it's been unprotected for longer -- at least as long as we've been calling the web the web -- while other media are still a bit more difficult to copy. For now.

Of the three approaches above, physical print is all about literal copy protection, while subscription depends at least to some degree on protection through interactivity (even a print magazine has to stay fresh by responding to its readers). Advertising stands out in that it not only doesn't require copy protection, but actively encourages free copying (as long as the ads stay attached). The more copying, the more people see the ads.

Tuesday, November 6, 2007

Analog reconversion and copy quality

All copy-protection schemes have at least one prominent hole. At some point, they have to deliver you something. A music player has to play music, a document viewer has to show you a document, and so forth.

There's nothing at all stopping you from recording the sound that comes out of the headphones, or taking a picture of a document on a screen (and in the Trusted Computing nightmare scenario of the disappearing order, that might be a very good idea).

Of course, the usual objection is that you've lost information in the process. You've lost sound quality in the case of music. In the case of a document, you're just seeing text on the screen, not the underlying markup or structure.

Well yes, but ... if all you can do with a piece of music is play it on your headphones, and you record the analog signal going to the headphones (or even coming out of the headphones), then you have copied all the information needed to listen to the music with the quality those headphones give. Which is all you had in the first place.

Similarly, if you capture the images on the screen as you view a document, then you can go back and re-read the document any time you want. With optical character recognition, you could even reconstruct the text with fair accuracy. The process in general is called analog reconversion.

If all you are granted is the ability to view images or listen to sound, then you have already lost information. Recording the analog signal results in further loss, but as playback and recording equipment gets better and better, the loss becomes less and less perceptible. The ultimate limit here is human bandwidth, not computer bandwidth.
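A quick back-of-the-envelope on that last point:

```python
# What does it take to capture everything the headphones deliver?
sample_rate_hz = 44_100   # CD quality already exceeds human hearing (~20 kHz)
bits_per_sample = 16
channels = 2
audio_bps = sample_rate_hz * bits_per_sample * channels
print(f"{audio_bps / 1e6:.2f} Mbit/s")   # ~1.41 Mbit/s

# A commodity disk sustains far more than that, so recording the
# analog out in real time is trivial; whatever extra bits were in the
# protected stream never reached your ears in the first place.
```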

At the moment, recapturing the full glory of an HD DVD is beyond most people's capability. There are just too many bits going up on the screen, and compressing them would take too long. But grabbing enough to re-sell as a cheap pirated copy is clearly within many people's capability, and Moore's law will come into play sooner or later.

Attempts to plug this "analog hole" tend to be draconian, e.g., restricting the use of digital recording devices, and don't tend to get very far. Not that that will keep people from trying.

Hypertext is an interesting counterexample. I could record my viewing of a web site, and probably even analyze the results and reconstruct the links I've followed. What I can't do, however, is know what's behind the links I didn't follow, or even where they point.

Monday, November 5, 2007

Trusted Computing: The darker side

So far, I've painted a fairly benign picture of Trusted Computing, even suggesting that a couple of prominent criticisms of it may go too far. For example, I argued that using hardware to enforce copyrights on music -- assuming this will actually work -- is not unreasonable in and of itself. But in doing this, I haven't mentioned (except indirectly) some of potentially darker aspects of TC. So here goes.

Before I get into details, let me give my gut reaction: The concerns are legitimate. Several aspects of TC are potentially obnoxious, open to abuse or both, and those who promote it don't necessarily have their interests aligned with mine or yours. On the other hand, the problems in question are really legal and ethical problems. Nor are they particularly new or unique to TC.

As such, there are already significant counterbalances that will tend to prevent the nightmare scenarios -- particularly if people are generally informed of the dangers. For this reason, I'm very happy to see people like Vixie and Stallman voicing their concerns. If it turns out they're wrong, it will be in part because they -- and many others in the FSF, the EFF and elsewhere -- raised the alarm. And if they're right, we need every bit of firepower working on the solution we can get.

With that in mind, here are some of the objections I've seen:

Constant contact with the mothership: In order to make sure that the hardware is running the right firmware/software and not some hacked version, to push out updates and enhancements and to update access rules, a given hardware device will communicate frequently with the vendors of the various software it runs. This communication can potentially include all sorts of information stored on the device, whether it's any of the vendors' business or not.

But this is pretty much what's happening now. The major OSs have been converging for some time now, and one thing they've converged on is frequent automatic updates. Updates that call back to the mothership, send you digitally signed modifications to your software -- including your kernel -- and ask you to give an administrator password or otherwise grant permission. It's not just OSs either. Browsers update themselves and ask you to install plugins. So do at least some of those neat desktop widgets.

Does this communication include sensitive information about what's on my disk and what I do with my computer? To the best of my knowledge, no. How do I know this? Basically the vendors say so and no one seems to say they're lying. If someone does start leaning over the line, we hear about it. Just as we're hearing about the potential for abuse of TC.

Unique ids on everything: In order to make sure that a given piece of content can only be accessed on a particular device, you need to know what exact device is trying to read it. In the TC world, every device includes a unique "endorsement key" -- a private cryptographic key that allows anyone who knows the corresponding public key (which will generally be anyone) to send it messages only it can read. It also allows the device to prove that it has that key, and since no one else can do so, that proof uniquely identifies the device.
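Here's a sketch of that proof step using the Python cryptography package (real TPM attestation involves considerably more machinery, but the core challenge-response idea looks like this):

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Burned in at manufacture; never leaves the device.
endorsement_key = rsa.generate_private_key(public_exponent=65537,
                                           key_size=2048)
public_key = endorsement_key.public_key()   # published by the vendor

# Verifier sends a fresh random challenge; the device signs it.
challenge = os.urandom(32)
signature = endorsement_key.sign(
    challenge,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Anyone with the public key can check; only that one device could
# have produced the signature (verify raises if it doesn't match).
public_key.verify(signature, challenge,
                  padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                              salt_length=padding.PSS.MAX_LENGTH),
                  hashes.SHA256())
print("device identity verified")
```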

Unique ids have been around for some time now. In the physical world they're called serial numbers. Network interfaces have MAC addresses. Every bluetooth device has a unique address, and so on.

Neither are digital keys and signatures new. The problem is more that, with an unforgeable ID, a given device, and its usage history if it chooses to record that, can be strongly tied to a given system and from that generally to a given person. If the system is in frequent contact with the mothership, then that information can easily be uploaded and used for any purpose the vendor sees fit.

Again, the problem is not the unique ID per se, but what may and may not be done with the information associated with it. This is a legal and ethical problem, and analogous cases already exist, for example cell phone records (and land line records before them).

Disappearing ink: If I send you a file encrypted to you (using your public key, so that only your private key can decrypt it), once you decrypt it it's yours to do with as you please. If I send you a file encrypted with some secret key and give you the key, it's the same deal. But suppose I give you a file encrypted to some key that I don't know, and you don't know, but some black box installed on your system does know. That box is set up never to reveal the actual bits I sent you directly, but it will allow you to view the contents on your screen (or play them on your headphones, or whatever).

Then when and whether you can read that file is up to the black box. If the black box is under my control, then so is the file, effectively. This leads to several worrisome possibilities, in order of (IMHO) decreasing likelihood (a toy sketch of such a black box follows the list):
  • Lock-in: If that black box is the only way you can view the content I sent you, then it's going to be hard to switch to a different brand of black box. You would most likely need me to either re-send the file, or give your black box permission to hand it over to the new black box (it seems possible that you could require me to include some sort of transfer permission with the file when I send it, but that has its own problems).
  • Plausible deniability run amok. I'm your boss. I send you an order to do something bad. You do it. I tell your black box to get rid of the order. Effectively, I never told you anything. The authorities come a-knocking. You hang and twist in the wind. You sent a copy of the order to the press? Too bad they can't read it. I retire to a sunny tropical isle.
  • Outright censorship: If all systems are "trusted", then anything published is under the control of whoever controls the black boxes. You write something I don't like. I tell every black box I control to refuse to show it. If I'm an OS vendor or the government, that could be quite a few boxes. In the worst case, it could be every legal box. You never said anything, and Oceania has always been at war with Eurasia.
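And here's the toy black box promised above, in Python. The recipient only ever gets rendered output, never the sealed bits, so whoever controls the box can make the file unreadable after the fact. Everything here is my own invented stand-in for the real machinery:

```python
from cryptography.fernet import Fernet

class BlackBox:
    """Toy model of vendor-controlled 'disappearing ink'. The box holds
    the key; the recipient gets rendered output, never the raw bits."""
    def __init__(self):
        self._key = Fernet(Fernet.generate_key())
        self._revoked = set()

    def seal(self, doc_id: str, plaintext: bytes) -> bytes:
        return self._key.encrypt(plaintext)        # sender gets a sealed blob

    def view(self, doc_id: str, sealed: bytes) -> str:
        if doc_id in self._revoked:
            raise PermissionError("this document no longer exists")
        return self._key.decrypt(sealed).decode()  # rendered, not handed over

    def vendor_revoke(self, doc_id: str) -> None:
        self._revoked.add(doc_id)                  # the sender changes its mind

box = BlackBox()
order = box.seal("memo-1", b"Shred everything. -- The Boss")
print(box.view("memo-1", order))       # you can read it today...
box.vendor_revoke("memo-1")
try:
    box.view("memo-1", order)
except PermissionError as err:
    print(err)                         # ...and twist in the wind tomorrow
```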
All of this is predicated on people sending data in protected form. Currently, you would have to want to do this (as record labels and music studios do). There are many applications where this won't really make sense (for example, blogging).

In cases where protecting data is important, there will be market pressure against closed solutions. If I'm, say, a bank or insurance company, I don't want to depend on a particular vendor's device still working twenty years from now -- or that vendor not having folded and taken its keys with it -- when I want to retrieve data from a storage medium that doesn't even exist today.

The plausible deniability problem is ancient. There have always been and almost certainly always will be ways of telling someone to do something without actually coming out and saying it. There will always be a legitimate need for secure, untraceable communication, whether it's sending a protected email or calling someone into an empty room out of sight and earshot. This ability will always be open to abuse.

In order for the worst nightmare scenarios to happen, TC has to be pervasive and mandatory. Right now we're several steps from that. In the US, at least one bill trying to require TC for internet access has died in committee, due in part to the efforts of the FSF, EFF and others, but also to the ability of at least some senators to understand the free speech implications. Again, the problems are more legal and ethical than technical.

Except ...

No unauthorized modifications: If you want to make sure that a system can be trusted (for whatever we're trusting it to do), you also have to make sure it stays that way, particularly when you have a mechanism for automatically delivering updates. Asking the user for authorization is not enough. Very, very few people have the expertise and time to examine every single update that comes across.

A trusted system will only accept updates with the proper credentials, using strong digital signatures. If I try to install my own word smasher or OS, the system will refuse.
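A sketch of what that refusal looks like, under the assumption that updates are checked against a vendor public key baked into the device (all names here are illustrative):

```python
# Sketch of update checking: the vendor signs each update, and the device
# ships with the vendor's public key baked in. Names are illustrative.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

vendor_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
VENDOR_PUBLIC = vendor_private.public_key()   # burned into the device

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def install(update: bytes, signature: bytes) -> None:
    try:
        VENDOR_PUBLIC.verify(signature, update, pss, hashes.SHA256())
    except InvalidSignature:
        raise PermissionError("unauthorized modification; refusing to install")
    print("installing", len(update), "bytes")

official = b"word smasher v2.0"
install(official, vendor_private.sign(official, pss, hashes.SHA256()))  # accepted

try:
    install(b"my own word smasher", b"\x00" * 256)  # my homebrew replacement
except PermissionError as e:
    print(e)                                        # the system refuses
```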

This is, of course, a topic near and dear to Stallman and the free software community in general. The whole reason that free software exists outside a small group of dedicated individuals is that it fulfills an important need. It's a counterbalance to commercial software's tendency to lock people in.

Free software says "Here's exactly how the system you're running works. If you don't like it, you're free to rewrite it or (more likely) find a better version or hire someone who can write you one." TC says, "Hey, buddy, you can't change that unless the people who made the system know they can trust you" and that, of course, directly conflicts with the whole "if you don't like it fix it" principle.

There's a case to be made for TC or something like it in special-purpose content-delivery systems like music players and set-top boxes. The question is whether this is the first step toward TC everywhere whether we want it or not.

One of the main sources of concern is whether we will reach some sort of tipping point, where so many systems are TC-enabled that you effectively have to protect your data in order to exchange it at all.

This is not a problem on the sending end. If I'm sending from an open system to a TC system, I can always make my own copy of whatever I send before I armor-plate it. Or can I? The trusted system I'm sending to might be so restrictive as to accept only messages produced by a particular trusted word smasher, and not just random files that happen to be properly encrypted. That would be pretty heinous, but who knows?

It's definitely a problem on the receiving end. Just as people sometimes insist on sending attachments in proprietary format, they could choose to send messages in protected form, so that I need a particular trusted black box in order to view them. If practically everyone they correspond with has such a box, then it's most convenient for them to send everything that way and conversely there will be great pressure for anyone without such a box to get one. In other words, I have the choice of receiving a message written in (potentially) disappearing ink, or no message at all.

The $64,000 question is, will people stand for this? I can see such a system being put in place inside a large company in order to, say, enforce document retention policies. It's harder to see it evolving among companies or groups of individuals. Postel's principle that one should be liberal in what one accepts and conservative in what one produces seems particularly applicable here.

In short, the situation definitely bears watching. My guess is that better alternatives will arise to solve the problems TC is trying to address, such as copyright infringement and malware. The climate doesn't seem particularly threatening at the moment, but if that changes, we should all be ready.

Degrees of access control

Following up on the previous post, I wanted to explore just what kinds of control one currently has over one's data and could potentially have. Here are some possibilities:
  • None. If I post an anonymous comment somewhere, anyone can read it, copy it, quote it or whatever they want. If I've been careful, it can't be traced back to me, so conversely I (generally) have no claim to it.
  • Open access, but traceable origin. This blog is strongly tied to my online identity, which in turn is more-or-less strongly tied to my real identity. If I were to write something libelous, I would have to answer for it, but on the other hand if someone were to try to pass something here off as their own, I would have a believable claim to authorship. Note that an anonymous message can still be traceable, if it's digitally signed but the identity behind the signature is kept secret.
  • Access by software key. Now things get interesting. If I send you an encrypted message, only you have access to it. But once you decrypt it for your own access, there is nothing technically preventing you from doing whatever you want with it. For example, you could charge people to look over your shoulder and read it off your screen, without giving them permanent access to it. As always, copy protection works by tying information to a physical object. This leads to ...
  • Access via dedicated hardware. This is access by key, but the key is strongly tied to a physical device, which presents the data only in analog form. One successful example of this is the set-top box. If "trusted" hardware devices are widespread, this actually gives quite a bit of flexibility -- to whoever holds the keys. One could grant permission to a group of devices (say, all of my family's music players), or transfer permission between devices.
The point of that last item is that DRM is not binary. There is quite a bit of potential for flexibility in control. The contentious issue is not control per se, but whose control.
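To make the contrast between the last two items in the list concrete: with a software key, decryption hands me the raw bits; with device-bound access, I only ever get the rendered output. A compact sketch, with illustrative names:

```python
# With a software key, decryption hands me the bits; with device-bound
# access, I only ever get the rendered output. Names are illustrative.
from cryptography.fernet import Fernet

key = Fernet(Fernet.generate_key())
blob = key.encrypt(b"the actual document")

# Access by software key: after decrypting, the bits are mine to copy.
mine_now = key.decrypt(blob)

# Access via dedicated hardware: the box renders but never returns them.
def set_top_box_render(sealed: bytes) -> None:
    print(key.decrypt(sealed).decode())   # output only; no bits handed back

set_top_box_render(blob)
```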

Sunday, November 4, 2007

Stallman on Trusted Computing and DRM

I previously mentioned Paul Vixie's speech to the Commonwealth Club about (among other things) one's continued free access to one's own data. Of course, he's not the only one so concerned. Along with other prominent people, Richard Stallman has been beating this drum for some time now, for example in Can you trust your computer?

This essay takes aim at trusted computing, which Stallman calls "treacherous computing" and which I'll refer to here as "TC". It's the same initiative that Vixie mentions, though not by name. Given the history involved it's no surprise Stallman should take the position he does, but when not one but several of the major pioneers of the net as we know it are sounding a caution, it's a good idea to think particularly carefully about what they're saying.

Reading Stallman's piece, I feel a bit of dissonance that I'm not completely sure how to resolve. My particular viewpoint coming in includes these basic tenets:
  • Information is hard to constrain. I don't believe there's some undefinable essence in information itself that causes it to thwart restrictions, but as a practical matter, anything that appears on the net unencrypted has the potential to spread very far very fast (whether anyone will pay attention is a different matter, one of human bandwidth).
  • Technology is largely neutral. As the lady said, any tool is a weapon if you hold it right. The converse also holds. Apparent controversies about technology often turn out to be controversies about law, society and ethics.
  • It's hard, and often counterproductive, to fight market forces; market forces, like technology, are largely neutral.
So whence the dissonance? On the one hand, I'm very sympathetic to the idea that trying to use technology to stop people from using technology inappropriately is risky at best and open to abuse at worst. Technology doesn't know when it's being used inappropriately. I also share the more general skepticism toward promises to do something for me for my own good.

On the other hand, I don't find the arguments Stallman advances against initiatives like TC particularly satisfying either. Thence the dissonance.

Let's back up a bit and look at what TC is. There are two basic components:
  • Content -- whether music, email, code or whatever -- is strongly encrypted.
  • The keys needed for decryption are tightly tied to particular physical pieces of hardware (using a combination of hardware support, further encryption and digital signatures).
For example, suppose I buy the right to listen to a song someone has recorded. In the TC world, this means that my song player will get a key enabling it to play the song, possibly along with further instructions such as "only play this song N times" or "don't play this song after such-and-such date". Since the ability to play that encrypted song rests solely with my player, I can copy the song file and share it all I want, but the experience of hearing it is still tied to that physical player (or others that have been authorized).
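Here's one way to imagine the license my player receives, sketched as a plain Python data structure. The field names are invented for illustration; a real scheme would seal the content key to the player's hardware and sign the whole license.

```python
# A license as my player might receive it, sketched as a data structure.
# Field names are invented; real schemes are far more elaborate.
from dataclasses import dataclass
from datetime import date

@dataclass
class License:
    content_key: bytes   # decrypts the song; usable only inside this player
    max_plays: int       # "only play this song N times"
    expires: date        # "don't play this song after such-and-such date"
    plays_used: int = 0

    def authorize_play(self, today: date) -> bytes:
        if today > self.expires:
            raise PermissionError("license expired")
        if self.plays_used >= self.max_plays:
            raise PermissionError("play count exhausted")
        self.plays_used += 1
        return self.content_key   # goes to the decoder, never to the user

lic = License(content_key=b"\x01" * 32, max_plays=3, expires=date(2008, 1, 1))
lic.authorize_play(date(2007, 11, 4))   # fine; two plays remain
```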

Personally, I don't have a problem with this scenario per se. I'm fine with pay-per-view movies on TV, for example. I don't feel I have some basic right to store and copy any collection of bits that happens to enter my house. The movie I watch on PPV isn't mine. I didn't write it, direct it, produce it or act in it. My cable provider is selling me the opportunity to watch that movie at home during a given time period. That seems fair.

Whether you approve of the particular way my $4 gets distributed among the cable operator, the studio, the creative people involved, their agents, the catering truck on the set and so forth is a separate, non-technical issue.

I believe this is one source of dissonance. To check this, I went back and re-read the GNU Manifesto. If you haven't read it in a while (or ever), please do so and take a moment to appreciate how much it got right over twenty years ago and to consider how much of today's landscape was shaped by it. I'm writing this using Firefox running on Ubuntu, for example. If you work for, say, Red Hat, in a real sense you owe your job in part to this document.

[FSF's position on copyrights is more sophisticated than what I describe next. I had originally tried to fix the text up, but later realized that it's more bloggish just to let the text stand as written (or as close as I could un-fix it) and continue the conversation, as for example here. -- DH 18 Nov 2007]
One of the bases of GNU is that software copyrights are not only not useful, but inherently harmful. One reason given is that a piece of software is fundamentally different from a book, play, musical composition or what-have-you. Another is that network technology has changed the economics of copying.

It's an easy extrapolation from this second point to the notion that copyrights are inapplicable not only to software but to anything else as well. For centuries, copyrights could be enforced (logically enough) by limiting the means of copying. That avenue is open to serious challenge now, and TC as applied to DRM is clearly an attempt to put the genie back into the bottle.

I'm skeptical of genie-bottling exercises in general, but I'm not ready to give up on the idea of copyrights as a legal and economic construct. Much of the backlash against TC, however, seems to be based on the idea that copyrights in general are harmful -- at least when used in particular ways. Stallman says as much in a footnote, and others have mentioned it more prominently:

"A previous statement by the palladium developers stated the basic premise that whoever developed or collected information should have total control of how you use it. This would represent a revolutionary overturn of past ideas of ethics and of the legal system, and create an unprecedented system of control."

But the whole point of existing copyrights is that the creator of a work retains control over what happens to it. A common magazine contract is for "first serial rights", meaning that the buyer (a magazine) has the right to print the article once, but that the author retains the right to publish it again, for example in an anthology. When I buy that magazine, I in turn have the right to read it, but not to copy it or quote it outside the fairly well-delimited area of "fair use". Otherwise I am breaking the law and subject to stiff penalties.

This is fairly complete control over the physical realization of an author's ideas, and this is the current law, not a revolutionary break from it. So what's different in the digital world? My guess is that in the physical world, copyright is quite tightly controlled when it comes to large publishers -- a magazine publishing plagiarized work will be subject to major lawsuits and a seriously damaged reputation -- but pretty lax once you enter the home.

Who hasn't made a mix tape/CD? No one cares, even though an anthology is a "derivative work", at least as far as I understand US law. TC has the potential to seriously disrupt this. It is not the idea of authors retaining control that is new and unsettling. It is the high degree of automated, fine-grained control. TC can impose restrictions like "you can only listen to this song at these precise times on this particular player, and the player can report any attempt to get around this" as opposed to "if you try to make and sell a lot of copies of this song, you'll be in big trouble if we catch you." At the very least, this will require a lot of further hashing out in the courts and the markets.

There is a lot more to be said here. I haven't even touched on the very legitimate concerns about civil liberties and Orwellian nightmares. Again, I don't believe the problems are quite as new as they might seem nor the world quite as unprepared, but these too bear careful scrutiny. More to follow, I hope.