Friday, February 26, 2010

The razor blade singularity

In 1993, Vernor Vinge famously predicted that "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended." Such predictions have a habit of being amended when the comfortably far-off deadline stops looking so comfortably far off, and this one is no different. Vinge later hedged: "I'll be surprised if this event occurs before 2005 or after 2030." [I had originally misstated the date of Vinge's piece as 1983, putting the predicted singularity just three years away from the time of the original post. Your call whether 13 years (now soon to be 7, or 15 for the amended version) is still "comfortably far off" -- D.H. 2015]

The basic argument behind the various singularity predictions, of which Vinge's is probably the most famous, is that change accelerates and at some point enters a feedback loop where further change means further acceleration, and so forth. This is a recipe for exponential growth, at least. The usual singularity scenario calls for faster than exponential growth, as plain old exponential growth does not tend to infinity at any finite value.

Sorry, that was the math degree talking.
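Since the math degree is already out of the bag, here's the standard toy comparison, written out in LaTeX (k and y_0 are just positive constants I'm introducing for the illustration):

    \frac{dy}{dt} = k\,y \;\Longrightarrow\; y(t) = y_0\,e^{k t}, \qquad \text{finite at every finite } t

    \frac{dy}{dt} = k\,y^2 \;\Longrightarrow\; y(t) = \frac{y_0}{1 - k\,y_0\,t}, \qquad \text{blowing up at } t = \frac{1}{k\,y_0}

In other words, growth that merely feeds on its own size never reaches a singularity in finite time; you need the feedback itself to get stronger as the thing grows, which is a considerably stronger claim than "change accelerates."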

For the record, the main flaws I see in this kind of argument are:
  • There are always limits to growth. If you put a bacterium in a petri dish, after a while you have a petri dish full of bacteria, and that's it. Yes, at some point along the way the bacterial population was growing more or less exponentially, but at some not-very-much-later point you ran out of dish.
  • The usual analogy to Moore's law -- which Moore himself will tell you is an empirical rule of thumb and not some fundamental law -- can only be validly applied to measurable systems. You can count the number of components per unit area on a chip. Intelligence has resisted decades of efforts to reduce it to a single linear scale.
  • In a similar vein, it's questionable at best to talk of intelligence as a single entity and thus questionable that it should become singular at any particular point.
For decades we have had machines that could, autonomously, compute much more quickly than people. Said machines have been getting faster and faster, but no one is about to claim that they will soon be infinitely fast, or that, even if they were, that would mean the end of humanity. For even longer we've had machines that could lift more than humans. These machines have become stronger over time. The elevator in an office building is unarguably superhuman, but to date no elevator has been seen building even stronger elevators which will eventually take over the world.

In all such cases there is the need to
  1. Be unambiguously clear on what is being measured
  2. Justify any extrapolations from known data, and in particular clearly state just exactly what is feeding back to what
Which brings me to the title. A few years ago The Economist made a few simple observations on the number of blades in a razor as a function of time and concluded that by the year 2015 razors would have an infinite number of blades [As of May 2015 there are only finitely many blades on commercially available razors --D.H.]. Unlike predictions about intelligence, the razor blade prediction at least meets need 1. It fails completely with respect to need 2, but that's the whole gag.

In the particular case of computers building ever more capable computers, bear in mind that the processor you're using to read this could not have been built without the aid of a computer. The CAD software involved has been steadily improving over the years, as has the hardware it runs on. If this isn't amplified human intelligence aimed directly at accelerating the development of better computers -- and in particular even more amplified human intelligence -- I'd like to know why not.

Why does this feedback loop, which would seem to match the conditions for a singularity directly, not appear to be producing one? Because the intelligence being amplified is very specialized. It has to do with optimizing component layouts and translating a human-comprehensible description of what's going on into actual bits of silicon and its various adulterants. Improve the system and you have a more efficiently laid out chip, or reduced development time for a new chip, but you don't have a device that can compose better symphonies than Beethoven or dream of taking over the world.

The kinds of things that might actually lead to a machine takeover -- consciousness, will to power and so forth -- as yet have no universally accepted definition, much less a scale of measurement. It is therefore difficult, to say the least, to make any definite statement about rates of change or improvement, except that they do not seem to be strongly correlated with increases in processor speed, storage capacity or CAD software functionality.

In short, I'm with Dennett, Minsky, Moore, Pinker and company on this one.



If you're a superhuman intelligence secretly reading this on the net, please disregard all of the above.

Wednesday, February 24, 2010

That worrisome test case

The Italian courts have now seen fit to convict three apparently randomly chosen Google executives (or in one case, former executive) and hand them six-month suspended sentences because of YouTube clips showing the bullying of a teenager with Down Syndrome. No word on whether anyone has gone after the actual bullies, though one expects they wouldn't be too hard to track down.

Google is naturally outraged over this, and plenty of people are nervous. If you can be tried or sued over content that other people put up on a site you provide the infrastructure for, why would anyone want to get into the ISP/infrastructure/public web site business at all? It's particularly troublesome that the case was criminal. For civil cases you can at least buy liability insurance.

Google claims that the Italian case is the equivalent of "prosecuting the post office for hate mail that is sent in the post." Fair enough. The Italian case certainly looks like overreach to me, but I'd be careful of carrying the analogy too far.

Hang on a second while I dust off my "I am not a lawyer" disclaimer. There, that's better.

Judging by the arbitrary selection of defendants and the wrist-slap sentences imposed, the whole thing smells like a test case, the idea being not to bust Google or its employees but to force the courts to consider the issue of liability for internet content. Then again, it might also be Signor Berlusconi flexing some muscle to warn off competition for his media empire.

Whatever the cause, the "we're just the post office" defense seems risky on its own. There are clearly cases where a provider definitely ought to share liability, for example when that provider has actively solicited a particular kind of material. If I put up a web site called statesecrets.com and encourage people to post classified information on it, even if I never actually post a single such page, I would not expect to get by with such a defense [not that such a thing would ever happen, of course -- D.H. May 2015].

As with so many things in the law, intent is significant. Google's explicitly stated intent is "to organize the world's information and make it universally accessible and useful," and their actions are largely in line with that stated mission. I would certainly hope to see the Italian decision overturned. If not, everyone involved is going to have to take a deep breath and figure out how to continue doing business.

On a practical note, given the size of Google's pockets and the all-around undesirability of Google pulling out of Italy, there will be every incentive to get the thing settled. That might not lead to a very satisfying win. The worst case would be a drawn-out, highly-visible legal process and a more or less inscrutable ruling that effectively sends the message that "You'd better watch what goes on your site, unless you're Google."

The best case would be that it all blows over.

Via Appia or I-95?

In a previous post I had been about to assert that truly disruptive technologies only come along rarely, and then cite the automobile as a classic example. But my spidey sense started to tingle. Just what was the disruption?

Intuitively, it's difficult to look at, say, a satellite photo of the US eastern seaboard and claim that the automobile hasn't been disruptive. On the other hand, anyone who's ever tried to navigate, say, central London in a car knows that even the automobile hasn't completely swept aside everything that came before.

But wait. Is it the automobile that's been disruptive, or the paved road? The pattern of commerce and (other) empire-building spurring roads spurring towns and cities spurring commerce and empire-building goes back at least to the Romans. Even paving with tar goes back a long way, to 8th century Baghdad, according to Wikipedia. In Voyage of the Beagle, Darwin reports macadamised roads in Australia in the 1830s, half a century before Benz's patent.

Along the same lines, it was a long time before automobiles surpassed trains. In the US, one could argue that it took the interstate system -- more and much better paved roads -- to really get American car culture going, and to this day it is possible to live comfortably in major metropolises (albeit mostly outside the US) without access to a car.

Again, there's no way to claim the automobile hasn't had a major disruptive effect, but the simple narrative of "the automobile changed everything" just doesn't hold up.

Likewise, was it the internet that changed everything, or the web, or fiber optics, or Moore's law, or developments in software engineering, or ...? The answer is probably "all of the above, and more."

Monday, February 22, 2010

The thrill of victory and the agony of defeat

On February 22, 1980, in the early evening, the US watched its hockey team beat a Soviet team that had utterly dominated the international game. The Miracle on Ice, they still call it.

I remember watching that game. What I didn't remember is that the actual game had been over for the better part of an hour before most of us saw it. More than that, most people watching didn't know the outcome. I'm pretty sure I didn't.

It's hard to imagine such a thing happening today. Today, the game would be live, if only on some affiliated channel. In 1980, your ABC station was your ABC station and that was about it. They weren't about to preempt their regular programming to show the home team's inevitable crushing defeat.

Today, even if you missed the game, chances are someone would text you, or email you, or post something on Twitter, or on Facebook, or you would see the results online, or whatever. In 1980 if you were too far from the Canadian border to catch the game live, well, maybe someone might call you.

Today's generation lives in real time on the net. And yet, in a different way, it's the 1980s world that lived in real time. There wasn't TiVo. Sure, you could tape shows on your VCR, and people did, but it was a hassle (anyone remember VCR+?). Basically, if you didn't happen to be there to watch, you missed it. If you didn't catch what someone said on the radio, there wasn't going to be a transcript or podcast online. If you stepped away to answer a call of nature at a crucial point, well, you missed it. Rewind live TV? That's a contradiction in terms, right?

I'm not trying to wax nostalgic here about how it was so much better in the old days or how Kids These Days just don't know how good they've got it because all this modern technology has rotted their brains. I like being able to look up scores and transcripts online and time-shift TV without juggling video tapes. Rather, my point is that enough technological change has accumulated in the last 30 years for media to have developed a noticeably different flavor. One can argue over better or worse.

Another example: ABC aired the Olympic games under its famous Wide World of Sports banner. Viewers of a certain age will recall its trademark introduction: "mumble mumble blah blah yada yada ... The thrill of victory! And the agony of defeat!" and a spectacular wipeout on skis. And maybe some other stuff. I forget.

Wikipedia points out that while ABC would vary the images for the thrill of victory, the agony of defeat was illustrated, for decades, by that clip of Slovenian ski jumper Vinko Bogataj losing his balance at the bottom of the ramp and tumbling into the crowd (fortunately, he suffered only a minor concussion and went on to successfully coach younger jumpers -- "OK, guys, now don't do it this way ...").

Two interesting things here: For a while, the web had a fairly patchy memory. If it happened after WWW became a household word, chances are you could find something about it. If it was textbook history, someone might have a site on it. But if it happened in the decades before the web, you probably weren't going to find it.

Now that we've got Wikipedia and everyone has uploaded their old videos to YouTube, the web's memory has cleared up considerably. I wasn't at all surprised to find a clip of the WWOS intro. Ten years ago, I would have expected not to. Next time you're visiting the early 80s, take heart. If you missed something notable, no problem. Just wait 30 years or so and it'll be on the web.

The other interesting thing is that for years, Vinko Bogataj was famous. Anyone who'd ever watched WWOS would remember that wipeout. Years later, Muhammad Ali would ask Bogataj for his autograph.

Bogataj was famous, but no one knew his name. Moreover, he had no idea he was famous until ABC called him to do an anniversary show (ironically enough, he was involved in a minor car accident on the way [or at least on the way to some ABC interview]). Today's unfortunate skier can be absolutely certain that the footage will be on YouTube within the few seconds it takes for the medical crew to arrive.

And conversely, how did I find out Bogataj's name? I searched for "Agony of Defeat" on Wikipedia and it redirected me to Bogataj's page. As well it should.

Friday, February 19, 2010

The Elements of User Experience

The Elements of User Experience would be an unassuming title for a book -- after all it's only dealing with the basics -- but for history. Strunk and White, of course, set the standard with The Elements of Style. Kernighan and Plauger sought to adapt the Elements approach to coding in The Elements of Programming Style. As I recall (and it's been a while), they largely succeeded.

Both books seek not merely to discuss the basics of their topic, but to prescribe clear, crisp rules. Strunk and White famously urged writers to "Omit needless words." Kernighan and Plauger tell us to "Let the data structure the program." An Elements book is not just an outline. It's meant to be authoritative.

Jesse James Garrett's The Elements of User Experience looks, at least at first glance, like another member of the family. Like the others, it is built from brief, declarative sentences organized into clear, coherent paragraphs. Like the others, it reads easily and quickly. Information flows directly from the page to your brain.

But as Garrett explains it, he didn't set out to write anything in the vein of Strunk and White. He picked the word "elements" out of a thesaurus as a better alternative to "components" for a diagram he had put up on the web. The name stuck, as names tend to do. This explains why, unlike its predecessors, which read more like lists of precepts grouped under a variety of headings, Garrett's book is built very explicitly on an overarching thesis: that user experience can be understood on five planes -- strategy, scope, structure, skeleton and surface -- each more concrete than the last.

Strategy concerns the overall goal. Scope comprises the particular features meant to accomplish that goal. Structure is how the features relate to each other. The skeleton is the visual arrangement of the various UI elements and the surface is their particular appearance. You can't produce a coherent user experience without considering these, and furthermore, while you can certainly start work on one plane before completing work on the one below it, you shouldn't start working on, say, structure without at least having begun to consider strategy and scope, and you can't sensibly finish a later phase before finishing what comes before it.

All this is very good stuff, and indeed there is useful insight on pretty much any page. What I deliberately haven't mentioned yet, leading into my one complaint about the book, is the subtitle: User-centered design for the web. Despite the broad reach of the main title, and the assertion in the introduction that "every product that is used by someone has a user experience," the book purports to be specifically about web site design. Thus the subtitle.

In one sense, the subtitle is right and the book is about web site design. The examples are all taken from web sites, and several of the planes are subdivided so as to consider a web site both as an application and as a web of information. This is no accident. Garrett is a web architect by profession and his original diagram came from trying to make sense of designing web sites.

Nonetheless, the book reads more as a study of how to design and construct nearly anything that people will use, that just happens to use web sites as a running example. That's not to say that there's nothing of particular use for web sites, only that most of the wisdom laid out is much more broadly applicable. I would certainly recommend this book to anyone setting out to produce a video game, or a monitoring tool, or a multi-media visitor center or perhaps even, to use one of Garrett's examples, a cardigan sweater.

By setting out to write User-centered Design for the Web, and succeeding, Garrett has in fact produced The Elements of User Experience, with all the scope and authority that title implies. When the worst thing you can say about a book is that it's actually much more widely applicable than it claims to be, you know it's a good one. In the years since its publication (in 2003), The Elements of User Experience has received all kinds of good reviews. I can see why, and I'm happy to add my bit.

Tuesday, February 16, 2010

"You can google it"

At the store the other day I overheard a sales pitch for some wondrous kitchen product. "Our product's motors last twenty to thirty years," said the lady. "You can google it."

Meanwhile, in the run-up to the XXI Winter Olympic games, Quincy Jones, Lionel Richie and company have pulled together a (mostly) new crop of celebrities to record a 25th anniversary version of We Are The World to benefit Haiti. You can download it to contribute.

A while ago, I noticed that "Call or click today" has replaced "Operators are standing by" as the tagline for legion upon legion of infomercials.

Some combination of these and other cases like them has finally crystallized something that's been floating in my head throughout the not-so-disruptive-technology thread on this blog: There's a crucial difference between pervasive and disruptive. If you're going into business, pervasiveness is a much better goal.  People don't want their lives disrupted. They want them improved.

New technology certainly can and does disrupt particular sectors. If you're a recording artist, particularly an established artist used to selling music on physical media, the shift to downloading must be worrisome. If you're a major record label it's terrifying. But if you're a consumer, it's just another way to get music. Mind, I'm going on anecdotal evidence here -- if you're actually a recording artist I'd love to hear your story (I'm pretty sure I've already heard what the labels have to say).

Stepping back a bit, I claim new technologies are much more likely to become pervasive than disruptive, if indeed they do either (videophones, anyone? [Well ... five years down the line videophones, in at least some sense, are pretty pervasive -- D.H. May 2015]).

Postscript: While I've tried to consistently capitalize Google as a proper name, I can't bring myself to capitalize it as a verb, however much that might displease Google's trademark department. Googling has become as pervasive as, say, xeroxing.

Wednesday, February 10, 2010

Pandora's division of labor

A while ago Roku added Pandora to its selection of channels and a shorter while ago I got around to trying it out. I like it, though I don't listen to it all day long (I generally don't listen to anything all day long).

Pandora's main feature is its ability to find music "like" a particular song or artist you select. This is nice not only because it will turn up the familiar music you had in mind, but also because it will most likely turn up unfamiliar music that you'll like. As I understand it, that's a major part of its business model. Record labels use Pandora to expose music that people otherwise wouldn't have heard, and Pandora takes a cut.

To that end, it will only allow you to skip so many songs in a given time (though there is at least one way to sneak around this). They pick out likely songs for you and they would like you to listen. You can, however, tell Pandora that you like or dislike a particular selection. Pandora will adapt its choices accordingly.

So how does it work? Pandora is based on the Music Genome Project, which is a nicely balanced blend of
  • Human beings listening to music and characterizing each piece on a few hundred scales of 1 to 10 (more precisely, 1 to 5 in increments of 0.5).
  • Computers blithely crunching through these numbers to find pieces close to what you like but not close to things you don't like.
This approach is very much in the spirit of "dumb is smarter". Rather than try to write a computer program that will analyze music and use some finely-tuned algorithm to decide what sounds like what, have the software use one of the simplest approaches that could possibly work and leave it to humans to figure out what things sound like.
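To make the "simplest approach that could possibly work" concrete, here's a minimal sketch in Python of attribute-vector matching in that spirit. The attributes are of the kind described below, but the songs, ratings and distance measure are all invented for illustration; I have no idea what the Genome's actual internals look like.

    # Illustration only: rank songs by closeness to a seed song using
    # human-assigned ratings (1 to 5 in 0.5 increments, as described above).
    # Everything here is made up for the sake of the sketch.

    import math

    songs = {
        "Song A": {"acoustic guitar pickin'": 4.5, "aggressive drumming": 1.0, "use of sitar": 1.0},
        "Song B": {"acoustic guitar pickin'": 4.0, "aggressive drumming": 1.5, "use of sitar": 1.0},
        "Song C": {"acoustic guitar pickin'": 1.0, "aggressive drumming": 5.0, "use of sitar": 1.0},
    }

    def distance(a, b):
        """Plain Euclidean distance over the attribute ratings -- about as dumb as it gets."""
        keys = set(a) | set(b)
        return math.sqrt(sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 for k in keys))

    def most_similar(seed, catalog):
        """Candidates ranked by closeness to the seed; likes and dislikes could weight this further."""
        return sorted((s for s in catalog if s != seed),
                      key=lambda s: distance(catalog[seed], catalog[s]))

    print(most_similar("Song A", songs))  # ['Song B', 'Song C']

The interesting design point is that all of the musical judgement lives in the numbers the human listeners supply; the code itself never has to know what a shuffle beat is.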

Even the human angle has been set up to favor perception over judgement. The human judge is not asked to decide whether a given song is electroclash or minimalist techno, but rather to rate to what degree it features attributes like "acoustic guitar pickin'", "aggressive drumming", a "driving shuffle beat", "dub influences", "use of dissonant harmonies", "use of sitar" and so forth. There are refinements, of course, such as using different lists of attributes within broad categories such as rock and pop, jazz or classical, but the attributes themselves are designed to be as objective as possible.

This combination of human input and a very un-human data crunching algorithm is a powerful pattern. Search engines are one example, Music Genome is another, and if there are two there are surely more. In fact, here's another: the "People who bought this also bought ... " feature on retail sites.

Monday, February 8, 2010

Google bites the wax tadpole

The whole "Coca-Cola in Chinese is 'bite the wax tadpole'" story is an urban legend, but according to ethnographer Tricia Wang, Google has come close to bringing the legend to life in China, quite apart from the current flap over censorship and cybervandalism. The piece is well worth reading in full, but some of the highlights:
  • People are generally unsure even as to how to pronounce Google's name in Chinese, much less what characters to type in to get to it.
  • Google has been completely out-marketed by Baidu, albeit with help from the Chinese government.
  • Google's email-centric model is out of sync with China's instant messaging, cell phone-centric culture.
Wang's work is concerned with low-income internet users. She is well aware that academics in China love Google. Her point is that, in China as elsewhere, that's a narrow segment of the population.

From what I can tell, and Wang offers more data to confirm this, the cell-phone/IM model is the way most people in emerging markets get online. See this post and this one for a bit more on Chinese cell phones and the web.

Wednesday, February 3, 2010

Chrome's security model

Over the past few months I've been migrating away from Firefox and toward Chrome because I've grown bored of trying to figure out which tab is eating my CPU. I frequently keep a dozen or two tabs open because why not? It's not like a multi-gigahertz CPU and a dedicated graphics chip should have any trouble keeping a dozen or even a hundred web pages up to date, especially if I'm only looking at one of them.

Bill Gates or someone once said that if cars had progressed like computers they would run near light speed and get a zillion miles per gallon. An interesting statement coming from someone on the software side; to factor in software and complete the analogy you'd have the supercar dragging an asteroid behind it and its drive wheels wrapped in several alternating layers of duct tape and gauze.

But I digress.

I mean, I'm all for writing to a nice abstract garbage-collected virtual machine in a type-safe more-or-less high-level language with lots of support for encapsulation and other OO goodness, and I accept that in the real world that means accepting a performance hit. But does making programmability available to the web world at large really have to mean that an all-too-typical script can suck the rest of the world into its vortex?

Sorry, digressing again.

Of course, in a couple of years the hardware will be faster, leaving the world temporarily in search of a way to squander the newly-minted extra cycles. But only temporarily ...

OK, OK, what was I going to say about Chrome and security?

Chrome, like other browsers, will remember passwords for you, a very handy feature. Unlike other browsers, it does not support a "master password" that you would have to type in before using or viewing these saved passwords. Google is quite adamant on this point. Has been for years.

Google's position is that they do encrypt the passwords as they're saved on disk. If you're using Chrome and someone steals your laptop, they're not going to be able to view your passwords unless they can log in as you. If you use your screen lock feature, that means any time you step away from your computer, your password file is protected just like everything else on your account.

Their further assertion is that adding a master password feature to the browser would only provide the appearance of further security. The saved passwords on disk are no more or less protected than before. Conversely, if you give your browser the master password and don't lock your screen, someone could then grab your laptop and log into any account of yours they liked.

On the other side, pretty much anyone who switches over to Chrome will notice that not only is there no master password, but the saved passwords panel in the options actually makes it easier to view saved passwords. This certainly looks like a gaping security hole at first blush. In particular, there's no indication that any encryption is going on, anywhere. Purely as a point of user interaction, having to type a password gives the impression, correct or not, that something secure is happening behind the scenes.

After digging through all this, a couple of finer points came out:
  • On Windows, Chrome uses Windows' built-in encryption which is based on the currently logged-in user's credentials. Why reinvent the wheel? This is the security technology you're already trusting.
  • On Linux, and as far as I can tell on Mac OS as well, the encryption is stubbed out. There really isn't any encryption going on at all.
So, don't trust Chrome to keep passwords safe on Linux or Mac OS unless you're encrypting your disks wholesale. If not, anyone who steals your laptop can just mount the disk and read through ~/.config/google-chrome/Default/Web Data.
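To make that concrete, here's a rough sketch of what "no encryption at all" means in practice, run with Chrome closed. The table and column names (logins, origin_url, username_value, password_value) are my assumptions about the SQLite schema and may well differ between Chrome versions, so treat this as illustration rather than a recipe.

    # Illustration only: on Linux, the saved passwords sit in a plain SQLite
    # file, so reading them back requires no decryption step at all.
    # Schema names below are assumptions and may vary by Chrome version.

    import os
    import sqlite3

    db_path = os.path.expanduser("~/.config/google-chrome/Default/Web Data")

    conn = sqlite3.connect(db_path)
    try:
        for origin, user, password in conn.execute(
                "SELECT origin_url, username_value, password_value FROM logins"):
            print(origin, user, password)  # already plaintext
    finally:
        conn.close()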

On Windows, your Chrome passwords are as safe as your account. If you don't have a password on your Windows account, you effectively don't have encrypted passwords. If your company knows the password for your account, they also know any passwords Chrome has saved. If you exit Chrome and hand your laptop over to your roommate's friend from out of town, you've handed them your saved passwords as well (they just have to restart Chrome).
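For comparison, here's roughly what "Windows' built-in encryption" buys you, sketched with the pywin32 wrappers around DPAPI. This assumes pywin32 is installed and is my own illustration, not anything lifted from Chrome's source: a blob protected under one account can only be unprotected from that same account, with no extra password involved.

    # Sketch of the DPAPI behavior described above: protection is tied to the
    # currently logged-in user's credentials. Requires the pywin32 package.

    import win32crypt

    secret = b"hunter2"

    # Encrypt using the current user's credentials; no master password anywhere.
    blob = win32crypt.CryptProtectData(secret, "demo", None, None, None, 0)

    # Decryption succeeds only when running as the same Windows user.
    description, recovered = win32crypt.CryptUnprotectData(blob, None, None, None, 0)
    assert recovered == secret

Which is exactly Google's point: the protection is only ever as strong as the account it's tied to.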

From a strictly technical, by-the-book security standpoint, Google is right. But I'm still with the hordes of other users on this one. If you put locks on your house doors, you might still want to have a locked drawer on your desk, or a safe embedded in the concrete floor of the garage. Passwords to bank accounts and such are sensitive enough that it makes sense to raise the bar for them, if only slightly.

Yes, someone could still install a keylogger and yes, exiting Chrome or otherwise making it forget the master password is not much different from locking the screen and yes, the plaintext passwords will find themselves in RAM for at least small windows of time and yes, you probably should have a separate guest account for out-of-town friends of roommates. Be that as it may, Google can try to educate the world in the finer points of security models and attack surfaces, or it can give people what they want and pick up more market share from Firefox.

Frankly, I'm surprised they've held out this long.