I'm still trying to get a good handle on the economics of anonymizers [and I'm not alone -- see here for pointers to a discussion in greater depth]. The first clear point is that clients use the service to offload risk, namely the risk of being associated with some particular activity on the web (three guesses what the most popular activity appears to be). When risk is transferred, there will generally need to be some kind of compensation. This is a basic economic proposition, one that's been back in the headlines lately.
But just where is this risk going? The first guess is the exit nodes. After all, it's the exit nodes that actually contact the services being used and would seem to have the most explaining to do if The Man starts asking questions. They also appear relatively easy to find. For example, if I continually send anonymous messages to myself, I should expect to hear from every exit node sooner or later (and if the routing favors particular paths for a particular client or server more often than chance would predict, that bias could itself be used to narrow down the identity of one or both).
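For concreteness, here's a rough sketch of that probe, assuming a Tor-style anonymizer exposing a SOCKS proxy on the usual local port. The echo URL is made up, and you'd need the requests and PySocks packages; treat it as an illustration, not a recipe.

```python
# Sketch: enumerate exit nodes by repeatedly fetching a server I control
# through the anonymizer and recording which IP address shows up at my end.
# Assumes a SOCKS5 proxy on localhost:9050 (Tor's default) and the
# requests + PySocks packages; "https://example.com/whoami" stands in for
# an endpoint that simply echoes back the caller's IP address.
import time
import requests

PROXIES = {
    "http": "socks5://127.0.0.1:9050",
    "https": "socks5://127.0.0.1:9050",
}

seen_exits = set()
for attempt in range(200):
    try:
        resp = requests.get("https://example.com/whoami",
                            proxies=PROXIES, timeout=30)
        exit_ip = resp.text.strip()
        if exit_ip not in seen_exits:
            seen_exits.add(exit_ip)
            print(f"new exit node: {exit_ip} (total {len(seen_exits)})")
    except requests.RequestException:
        pass  # circuit failures are common; just try again
    time.sleep(10)  # give the network a chance to pick a different circuit
```

Tallying how often each exit turns up, rather than just collecting the set, is what would expose the routing bias mentioned above.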
However, if The Man is really trying to find out who's on the other end of the connection, busting the exit node operator is not going to help, except perhaps to weaken the network as a whole. There may be jurisdictional problems as well. This pushes the search back to the clients.
Where we go from here probably depends on exactly how you analyze the anonymizer in question. Let's assume that The Man can make a better-than-random guess as to who is or isn't using the anonymizer. This seems very likely if relatively few people are using it. The pool of suspects will include pure clients, who only use the anonymizer but don't relay traffic or act as exit nodes, as well as the exits and relays themselves, who as far as I can tell have no way of proving they're not also clients.
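One way to make "better-than-random" concrete is a quick Bayesian back-of-the-envelope, with every number invented for illustration: suppose 0.1% of the population uses the anonymizer, users are very likely to touch a known relay IP on any given day, and non-users almost never do.

```python
# Toy Bayes calculation: how much does seeing traffic to a known relay IP
# sharpen the guess that someone is an anonymizer user?  All numbers are
# invented for illustration.
p_user = 0.001                 # prior: 0.1% of the population uses the anonymizer
p_seen_given_user = 0.9        # users are very likely to touch a known relay IP
p_seen_given_nonuser = 0.001   # non-users almost never do

p_seen = p_user * p_seen_given_user + (1 - p_user) * p_seen_given_nonuser
p_user_given_seen = p_user * p_seen_given_user / p_seen

print(f"prior:     {p_user:.4f}")
print(f"posterior: {p_user_given_seen:.4f}")  # roughly 0.47 -- far better than random
```

The guess goes from one in a thousand to roughly even odds, just from watching who talks to known relay addresses.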
Under this assumption, and all other things being equal, the risk is spread evenly among all the nodes, whatever their type. In that case, risk is certainly being transferred, namely from those with more to lose from exposure to those with less, but in a perfect anonymizer it's impossible to tell who is which. The basic arbitrage opportunity is there, but there appears to be no way to exploit it.
Or at least, no way for an outside observer to exploit it. If I'm, say, running a relay node but also using the anonymizer to do something truly hairy, I can be reasonably sure I have more to gain from the network's cover than someone just sitting at work perusing material that violates company policies. In effect, most of the clients are acting as a smokescreen for my activities. That in turn makes it worth my while to contribute greater-than-average resources to the network. At least, if I can do so without anyone noticing.
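A toy expected-value calculation makes that incentive concrete, again with every number invented: if The Man can only investigate a handful of the N participants he can't tell apart, my chance of being the one picked shrinks as N grows, and if my personal cost of exposure is large, the drop in expected loss can easily exceed the cost of donating relay capacity.

```python
# Toy model of the smokescreen effect.  All numbers are invented.
# The Man investigates k participants chosen at random from the N who are
# indistinguishable from one another, so my chance of being picked is k/N.
def expected_loss(n_participants, k_investigations, my_cost_of_exposure):
    return (k_investigations / n_participants) * my_cost_of_exposure

my_cost = 1_000_000   # what exposure would cost me (much higher than average)
k = 10                # investigations The Man can afford per year

loss_small = expected_loss(1_000, k, my_cost)    # sparse network
loss_large = expected_loss(10_000, k, my_cost)   # network grown by better relays

relay_cost = 2_000    # annual cost of donating serious relay capacity
print(f"expected loss, small network: {loss_small:,.0f}")
print(f"expected loss, large network: {loss_large:,.0f}")
print(f"worth running the relay? {loss_small - loss_large > relay_cost}")
```

None of these numbers are real, of course; the point is only that the arithmetic can easily come out in favor of quietly donating resources.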
That seems a plausible story, but I'm not at all confident that I've understood the full implications here.