Sunday, January 31, 2010

Apple: innovation vs. breakthrough

From time to time, Apple announces its latest creation. Even if you're not a particular follower of Apple, you can tell it's coming by the steady stream of breathless press coverage. "Rumor has it Steve Jobs is about to announce ..." Apple didn't get into this happy position by chance. The company has been extraordinarily successful in building its brand through a highly effective combination of engineering and marketing.

Before I go on, let me be clear: That's a compliment.

Rather than dig into the details of the latest product, I want to put forth a small thesis: Apple has succeeded not by producing stunning technological breakthroughs, but by expertly pulling existing pieces together to fill a gap in the mass market.

That's also a compliment.

Looking at a timeline of Apple products, it's clear that this goes all the way back to the original Apple I, a pre-assembled motherboard at a time when kits were popular. Likewise, the Apple II (or Apple ][, if you prefer) came fully assembled and had color when its main competitors didn't, but it certainly wasn't the first computer with a color display. The Lisa and Macintosh brought Xerox's UI breakthroughs out of suspended animation at PARC to the world at large, but they weren't the first systems with mice and windows.

And let me pause here for another point: Just as much brain sweat can go into synthesizing existing pieces as creating new ones. The hacks behind the Apple II's handling of color and sound were ferociously good. I had the privilege of seeing an early (pre-release, I think) Lisa in college. I was suitably impressed. The hardware geeks in the room managed to persuade the sales rep to pull the cover off and, as I recall, made similar appreciative noises.

The Lisa wasn't really ready for prime time, but the Mac certainly was. The ROM carrying its graphics and other system support was known for squeezing more functionality into 64KB(!) than should have fit, and of course the coherence of the whole concept and execution laid crucial foundations for the brand we know today.

If you prefer to call those achievements engineering breakthroughs, I won't argue. My point is more that Apple didn't win by inventing color, or the GUI, or the portable music player, but by bringing them to market early and very, very well.

I think this is why I tend to find Apple's announcements exciting and underwhelming at the same time. The iPod, mini and nano followed in logical succession. With them, and their cousin the iPhone, the news was not just that Apple had hit the existing digital music player and cell phone with its pretty stick, but that it had managed to partner with major players to give them a decent shot at working. Someone was bound to do that; it might as well be Apple.

The MacBook Air is a cool piece of engineering, but however much you hype it, it's still a thinner, lighter MacBook. The latest offering is, as Jobs himself says, halfway between an iPhone and a MacBook. What's worth noting here is that Apple is re-entering a market where it and others have stumbled (remember the Newton?) not by jumping into the void but by anchoring firmly to two existing successes.

This is the usual way we progress, on the web and off. Breakthroughs are important, but they don't make it out of the lab until someone puts them into usable form and brings them to the world at large. This generally means connecting the breakthrough to enough of the familiar that people will know what to do with it. Apple does this as well as anyone and (with inevitable missteps) has from the outset.

Saturday, January 23, 2010

What you see and what you get

When I edit blog posts, or even preview them, links show up underlined in blue, making them easy to see. But when I visit the blog itself, which I generally only do to double-check how it looks to the world at large, links are rendered in a fairly dark gray, hard to pick out from the general text.

So I've fiddled a bit with the colors in an attempt to find a combination that:
  • Makes links easy to pick out from text
  • Makes visited links easy to distinguish from fresh ones
  • Fits the existing color scheme of the blog
  • Gives good contrast for color-blind readers
I'm not ecstatic about the results, but I think the new scheme is at least better than the old one. Judging by this colorblind web filter, the scheme should work better unless your vision is completely achromatic, in which case it's probably only marginally better.
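
For what it's worth, beyond eyeballing things through that filter, you can also sanity-check a color scheme numerically. Here's a minimal Python sketch using the WCAG 2.0 luminance-contrast formula (luminance contrast is roughly what survives most forms of color blindness); the hex colors are placeholders for illustration, not the blog's actual palette:

    # Rough sanity check on link colors via the WCAG 2.0 contrast formula.
    # The hex values are placeholders, not the blog's actual palette.

    def relative_luminance(hex_color):
        """WCAG 2.0 relative luminance of an sRGB color like '#336699'."""
        digits = hex_color.lstrip('#')
        r, g, b = (int(digits[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
        def linearize(c):
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

    def contrast_ratio(color1, color2):
        """Contrast ratio between two colors; WCAG suggests at least 4.5:1 for text."""
        lighter, darker = sorted((relative_luminance(color1),
                                  relative_luminance(color2)), reverse=True)
        return (lighter + 0.05) / (darker + 0.05)

    # A hypothetical link blue against a white background and against dark gray text:
    print(round(contrast_ratio('#3366cc', '#ffffff'), 2))
    print(round(contrast_ratio('#3366cc', '#444444'), 2))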


This is not the only case where Blogger's preview feature has shown a different picture from the blog itself. For example, when I put a bingo card in the middle of a post, it looked fine in the preview but was cut off when I visited the blog directly. The problem is that the preview feature doesn't take stylesheets into account. Granted, it can't account for a browser substituting whatever stylesheet it likes (including, perhaps, one that makes links more visible to the particular person browsing?), but you'd think it would be able to use the one that's set up for the blog.

Ah well.

TSA Reveals: Alex Johnson is Spartacus

OK, that title's probably a bit obscure. I've been using "Spartacus" as a byword for issues of anonymity, after the scene in Spartacus where everybody claims to be Spartacus, so that Spartacus could be anybody. For more details, see this post, or this one, or this one. More formally, the people who could have a particular identity form the anonymity set for that identity. The bigger the anonymity set, the more anonymous the identity.

The US Transportation Security Administration gives a good example of this principle in action, though they're pitching it toward encouraging people to be less anonymous to them. In case the link is broken when you read this, the ad in the upper-left corner shows a series of images with captions:
  • Several people, with the caption "What do all these passengers have in common?"
  • The same group, captioned "They are all named Alex Johnson"
  • The three women in the group, captioned "Alex Johnson Female"
  • Two of the women, captioned "Alexandria Johnson Female"
  • One of the women, captioned "Alexandria Johnson Female 10/15/1967"
For more on just how few pieces of identifying information you need to narrow down to a small number of people, often to just one person, see this post.
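
To make the narrowing-down concrete, here's a toy Python sketch of the same process; the passenger records are invented for the example:

    # Toy illustration of anonymity sets: each added attribute shrinks the
    # set of people who could be "the" Alex Johnson. All records are made up.

    passengers = [
        {'name': 'Alex Johnson', 'sex': 'F', 'dob': '10/15/1967'},
        {'name': 'Alex Johnson', 'sex': 'F', 'dob': '03/02/1981'},
        {'name': 'Alex Johnson', 'sex': 'F', 'dob': '07/21/1975'},
        {'name': 'Alex Johnson', 'sex': 'M', 'dob': '11/30/1959'},
        {'name': 'Alex Johnson', 'sex': 'M', 'dob': '05/05/1988'},
    ]

    def anonymity_set(people, **attrs):
        """Everyone consistent with the attributes known so far."""
        return [p for p in people if all(p[k] == v for k, v in attrs.items())]

    print(len(anonymity_set(passengers, name='Alex Johnson')))           # 5
    print(len(anonymity_set(passengers, name='Alex Johnson', sex='F')))  # 3
    print(len(anonymity_set(passengers, name='Alex Johnson', sex='F',
                            dob='10/15/1967')))                          # 1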

Tuesday, January 19, 2010

Will a solid rocket booster fit in my shopping cart?

Nearly forty years after its inception in 1972, NASA's space shuttle program is finally ending this year. That leaves the question of what to do with the million or so items of inventory that have accumulated over those decades. In today's web-enabled world, what else to do but put it all online, eBay style?

Well, actually they're not putting everything online, or even close to it, at least not yet. And it's not open to the public, and you don't exactly bid on items. But if you have a login, you can put requests for items in your shopping cart, and if NASA selects you and you can defray the transportation costs, the item is yours.

If the item in question is an actual space shuttle, transportation costs will come to about $30 million. So far there are 20 or so applicants. Get while the gettin's good.

Saturday, January 16, 2010

On codices and conventions

As I was reading Robert Darnton's The Case for Books, one word in particular jumped out at me, one I hardly remember hearing since high school: codex. Random House defines it by way of another delightfully arcane term:
a quire of manuscript pages held together by stitching
A quire, in turn, is a set of folded leaves of paper or parchment (particularly, a set of 24 or 25). In other words, the distinctive feature of the codex is that it has pages, as opposed to a scroll or tablet. Here's a broad view from Darnton of what this meant:
The history of books led to a second technological shift when the codex replaced the scroll sometime soon after the beginning of the Christian era. By the third century AD, the codex — that is, books with pages that you turn as opposed to scrolls that you roll — became crucial to the spread of Christianity. It transformed the experience of reading: the page emerged as a unit of perception, and readers were able to leaf through a clearly articulated text, one that eventually included differentiated words (that is, words separated by spaces), paragraphs and chapters, along with tables of contents, indexes, and other reader's aids.
I would parse this as one major technical shift — from a single, serial scroll to random-access pages — and several refinements in convention — inter-word spacing, paragraphs, chapters, tables of contents, indexes, etc. This seems very much analogous to the case of conventions on the web. In both cases the technical shifts (script-enabled browsers in the web case) are important, but they are relatively rare. Small shifts in convention (tabs, rollover highlighting, etc.) are more common, and just as important in the aggregate.

Tuesday, January 12, 2010

Facebook privacy: Probably not dead either

There seem to be a lot of articles and posts lately about Facebook founder Mark Zuckerberg having announced the "end of privacy", so let's slow down a bit.

Here's a transcript of what Zuckerberg said on the matter, taken from Marshall Kirkpatrick's critique (the post also includes video of the original interview):
When I got started in my dorm room at Harvard, the question a lot of people asked was "Why would I want to put any information on the Internet at all? Why would I want to have a website?"
And then in the last 5 or 6 years, blogging has taken off in a huge way and all these different services that have people sharing all this information. People have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people. That social norm is just something that has evolved over time.
We view it as our role in the system to constantly be innovating and be updating what our system is to reflect what the current social norms are.
A lot of companies would be trapped by the conventions and their legacies of what they've built, doing a privacy change - doing a privacy change for 350 million users is not the kind of thing that a lot of companies would do. But we viewed that as a really important thing, to always keep a beginner's mind and what would we do if we were starting the company now and we decided that these would be the social norms now and we just went for it.
On the face of it, this sounds like a CEO making a fairly narrow statement about his company's service. There's a bit of ambiguity as to which social norms he's talking about, but clearly said norms are those of people who are on or might like to be on Facebook. So why has it been repeatedly glossed as "Facebook CEO says privacy is obsolete" or similar?

From what I can make out (and I'm not on Facebook), Facebook is changing its default privacy settings for content users publish from opt-in (you have to explicitly say you're sharing information) to opt-out (you have to explicitly say you aren't). This is part of a larger shift to more fine-grained privacy control, and to manage the transition, Facebook users have been given a tool to "empower people to personalize control over their information".

Before I go on, here's a little Bingo card you can use the next time Facebook puts out a press release like the one in the link:

transparent | empower | roll out | easy-to-use | personalize
transform | control | message | evolution | innovate
serve users’ changing needs | tool | FREE | dynamic | simplify
community | model | intuitive | accessible | set a new standard
unprecedented | process | educate | network | iterative

But I digress.

Reading past the marketspeak, this looks like a pretty reasonable cut at something any successful software product needs sooner or later: a migration tool. In particular, they appear to go to some lengths to preserve settings you already have. Where that can't be done, it looks like they tell you what the default is, why, and how to change it. The devil is in the details, but it's clear that they've at least examined the kinds of issues that inevitably crop up in such an exercise. Their claim to have done extensive user testing looks credible. With 350 million users, they'd better have.
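
I have no inside knowledge of how Facebook's tool actually works, but the general shape of any such settings migration is familiar. A minimal sketch, with made-up setting names and defaults:

    # A minimal sketch of a privacy-settings migration. The setting names and
    # defaults are made up; this is the general shape, not Facebook's actual tool.

    NEW_DEFAULTS = {'status_updates': 'everyone', 'photos': 'friends-of-friends'}

    def migrate(old_settings):
        """Preserve explicit choices; apply, and flag, new defaults elsewhere."""
        migrated, notices = {}, []
        for setting, default in NEW_DEFAULTS.items():
            if setting in old_settings:
                migrated[setting] = old_settings[setting]  # keep the user's choice
            else:
                migrated[setting] = default
                notices.append(setting + ' now defaults to ' + default +
                               '; here is how to change it ...')
        return migrated, notices

    settings, notices = migrate({'photos': 'only-me'})
    print(settings)  # {'status_updates': 'everyone', 'photos': 'only-me'}
    print(notices)   # explains the one setting that was defaulted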

One item does stand out, though, and it's probably the basis of Zuckerberg's comments above:
Common set of publicly available information: Facebook’s latest privacy policy, announced in October, indicated that certain basic information—a user’s name, profile picture, gender, current city, Friend List and Pages—would be categorized as “publicly available.” The overwhelming majority of users already make all of this information available to everyone and this label was chosen to ensure that users understand that it is possible for this information to be viewed by others. However, users can still avoid being found in searches or prevent contact from non-friends.
So, if you want to be on Facebook, you have to give out those basic items, but now that's an explicit policy rather than just something everyone was doing anyway. You can still choose who sees everything else, with finer-grained control than before. People who didn't originally make the basic items above public (may?) now have them made public, but nothing else need change. Maybe I've missed something, but this doesn't seem earth-shaking, and Zuckerberg's comments don't seem to say much more than "It looks like people like to share stuff on Facebook more widely than we originally expected."  [Five or six years on, it seems even less like the Earth has shaken --D.H. Dec 2015]

And it's all just Zuckerberg's opinion. As Derek Thompson argues, Zuckerberg may not even be right about people's attitudes in his own backyard of Facebook.

Sunday, January 10, 2010

Another year, another Web 2.0 widget

A couple of years back I proudly announced that this blog was "Now available in living Technorati". Well, actually it was more like I'd heard of this "Technorati" thing and thought why not give it a spin. Soon afterward, my "authority" (the number of other Technorati blogs linking to mine) skyrocketed to a mighty 2, placing me firmly in the second million of Technorati's several million blogs. Since then, bubkes. Probably I wasn't doing something right, but it's never seemed pressing to find out what.

So I've just removed the Technorati widget from the sidebar. As far as I can tell, it won't be missed. If it is, please drop me a comment.

Balancing that out, I've been seeing little hieroglyphs like "Digg this!" on blog posts for years now and been thinking "Maybe I should get me one of them." Clearing out Technorati seemed like as good an occasion as any. So now, thanks to a little bit of XHTML hacking, you should be seeing a little "Digg" button at the bottom of every post. It's not the most graceful little thing, but at least it seems to work.

Enjoy!

[That was then, this is now (May 2015) ... the Digg widget has gone the way of the Technorati widget, replaced by a "whatever Blogger thinks is appropriate" widget]

The commonplace mashup

I've learned to take it on faith that the latest revolutionarily new forms and genres don't arise fully formed out of nothing, but have direct roots in older forms. Often these roots can be traced back quite a ways. Case in point: The net and the web are supposed to have created entirely new modes of expression based on taking bits and pieces from here and there and putting them together into a never-before-seen whole. For example, as I quoted Bruce Tognazzini in a previous post:
[W]e are also seeing the emergence of a new and powerful form of expression, as works grow, change, and divide, with each new artist adding to these living collages of color, form, and action.
This was written in 1994. At that point, sampling in music was well established. It's not clear whether Tog had sampling in mind as an example of such new forms, but if not, it would certainly be a predecessor to whatever starting points he did have. For example, the web mashup, clearly the sort of thing Tog was describing, is so named by direct analogy with the musical mashup, which in turn is essentially sampling on steroids.

Sampling, in the sense of directly lifting parts of one recording into another, goes back at least to the 1960s, with the Beatles and Frank Zappa among others. Lifting the music from one piece into another is much, much older. Working backward, start with jazz and folk music, and note along the way that several classical composers were happy to incorporate folk tunes into their works, whether the critics approved or not.

Robert Darnton's The Case for Books gives another example: the commonplace book. People have been keeping journals and log books forever and these, along with the newspaper column, are clearly antecedents of modern web logs such as this one. I was surprised, though, to learn what a webby flavor the particular genre of the commonplace had. As Darnton explains it:
Time was when readers kept commonplace books. Whenever they came across a pithy passage, they copied it into a notebook under an appropriate heading, adding observations made in the course of daily life. Erasmus [1466 or 1469 - 1536] instructed them how to do it; and if they did not have access to his popular De Copia, they consulted printed models or the local schoolmaster. The practice spread everywhere in early modern England, among ordinary readers as well as famous writers like Francis Bacon, Ben Jonson, John Milton and John Locke.
Darnton goes on to assert that people read in a much webbier way then (which leads me to think, rather, that both modes he describes have been around forever):
It involved a special way of taking in the printed word. Unlike modern readers, who follow the flow of a narrative from beginning to end [...], early modern Englishmen read in fits and starts and jumped from book to book. They broke texts into fragments and assembled them into new patterns by transcribing them in different sections of their notebooks. Then they reread the copies and rearranged the patterns while adding more excerpts.
This is not necessarily a private exercise. Some commonplace books even saw publication and were doubtless then further dissected by new readers.

There is a commonly accepted narrative that before the web, information was produced and consumed in a strictly linear fashion and distributed strictly top-down from authoritative publishers to captive readers. The web, so the narrative goes, broke that all wide open.

The actual history of books paints a significantly different picture. Active reading and reconstruction has a long history. Commonplace books probably date to the 1100s and remained in vogue into the Victorian 1800s. The Talmud, with its commentaries and its commentaries on commentaries, is another notable example. Nor has publishing itself ever been exclusively confined to official sources. Unofficial publication has been illegal at various times, but the very act of suppression implies a market. This market has seldom gone unserved.

This is not to say that the web has had no effect. It has clearly tilted the field towards self-publication and sampling, mashups or what have you. However, the web-oriented mindset behind these activities is not new. Neither are complaints about it. From Darnton again, himself quoting Bernard Rosenthal's translation of a letter by Niccolò Perotti written in 1471:
My dear Francesco, I have lately kept praising the age in which we live, because of the great, indeed divine gift of the new kind of writing which was recently brought to us from Germany. [...] I was led to hope that within a short time we should have such a large quantity of books that there wouldn't be a single work which could not be procured because of lack of means or scarcity . . . Yet — oh false and all too human thoughts — I see that things turned out quite differently from what I had hoped. Because now that anyone is free to print whatever they wish, they often disregard that which is best and instead write, merely for the sake of entertainment, what would best be forgotten, or better still erased from all books. And even when they write something worthwhile they twist it and corrupt it to the point where it would be much better to do without such books, rather than having a thousand copies spreading falsehoods over the whole world.
Or as Tog put it, over 500 years later:
Writers will no longer need to curry the favor of a publisher to be heard, and readers will be faced with a bewildering array of unrefereed, often inaccurate (to put it mildly), works.
Disruptive technology, indeed.

Romanes eunt domus!

Ah, the things you find while looking for something else.

It seems perfectly reasonable to expect that someone looking for texts in Latin can read Latin, but there's still something singularly mind-bending about this.

Lulu's long tail

While writing the previous post on book statistics, I almost mentioned Lulu, but after a little poking around decided not to muddy the picture. Why?

Going by the numbers in that post, somewhere around a million new titles appeared in print in 2009. Lulu's press kit claims that Lulu carries 520,000 "recently published" titles with more than 15,000 creators from 80 countries joining each week. I'm pretty sure this doesn't include a recently-announced deal with traditional publishers to distribute another 200,000 titles in e-book form.

It seems a pretty good bet that within the next few years, if not in 2010, Lulu will be adding more titles per year than all traditional publishers worldwide, combined (see the arithmetic below). But of course, Lulu can add titles so quickly precisely because it doesn't have to actually print books. If I wanted to upload a few dozen pages of hex dumps from random mp3 files and call it a book, that would be fine with Lulu, whether or not anyone ever actually prints it out.
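
For what it's worth, the back-of-the-envelope version, assuming (and this is purely my guess, since the press kit counts creators rather than titles) roughly one new title per new creator:

    # Back-of-the-envelope only: Lulu's press kit counts creators, not titles,
    # so "one new title per new creator" is my guess, not Lulu's claim.

    creators_per_week = 15000
    lulu_titles_per_year = creators_per_week * 52   # 780,000
    traditional_titles_per_year = 1000000           # roughly, per the book-statistics post

    print(lulu_titles_per_year)                                # 780000
    print(lulu_titles_per_year / traditional_titles_per_year)  # 0.78, and climbing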

Most likely, Lulu's usage statistics are much like YouTube's: Some titles sell quite a few copies, but almost all sell almost none. Given that everything is pay-as-you-go and no one's on the hook for a warehouse full of unsold books, this seems absolutely fine. My point is just that it's such a radically different model from the traditional publishing house that comparing numbers of titles between Lulu and the rest of the world is essentially meaningless.

Similarly, I'd be careful about predicting that Lulu will steal market share from mainstream commercial publishers. To a large extent, they appear to be in different markets.

[Lulu is still around, as are traditional book publishers.  Neither seems to have taken over the world or fallen into oblivion, at least not yet.  This is to say nothing of possible future trends -- I've been meaning to take a closer look at that for several years now ...

Historical note: Not too long after this was written (pretty sure it was after), I would interview with Lulu, but not end up working there.  We both seem to have come out of that OK so far --D.H. Dec 2015]

Books: Not dead yet either

(The "either" refers to the ever-popular Information Age: Not dead yet, which has now been failing to answer searchers' questions about when the Information Age started for over two years)

I was generally aware, from browsing through bookstores and sites like Amazon, that books are still being published. What I wasn't prepared for was this list of statistics, taken from Robert Darnton's The Case For Books, itself referencing Bowker's Global Books in Print:
  • 700,000 new titles appeared worldwide in 1998
  • 895,000 appeared in 2003
  • 976,000 appeared in 2007
These are new titles, not individual books. Whether people are buying more or fewer books is a separate issue, but the cover prices of books strongly suggest that demand hasn't collapsed.

[That may have been the peak.   If not, it couldn't have been long afterward.  Bowker's now estimates closer to 300,000 print titles for 2013 and 2014.  Still substantially more than zero, but perhaps the writing is on the wall. -- D.H. May 2015]

Monday, January 4, 2010

Google Books vs. the Library of Alexandria

I received The Case for Books, by Robert Darnton, as a present this year from someone saying they'd sent me a book case. As I'm not above the occasional bad pun myself, I went ahead and read it. I'm glad I did, though as usual I didn't completely agree with all of the positions put forth.

Robert Darnton is (among other things) the director of the Harvard University Library, and assumed that role just as negotiations over Google Books were heating up. He was also a proponent of Harvard's open access policy, whereby most research publications become publicly available on the web unless the authors specifically opt out, and spearheaded Gutenberg-e, which sought to get top-quality dissertations in history reworked and expanded into electronic book form. In short, he knows a little something about books, both electronic and traditional.

On top of this, he writes clearly and engagingly. If you want to stop right now and get the book in order to see what such a person is thinking about the web and books, I'd completely understand. In any case, I would like to devote this and a few upcoming posts to examining Darnton's thoughts in more detail.


The book consists of previously-published essays on various topics, grouped into Future, Present and Past in that order. The first essay forthrightly answers the question of what effect the Google books settlement will have on the world of books: "No one knows," he says, "because the settlement is so complex that it is difficult to perceive the legal and economic contours in the lay of the land." Now that's an answer I can trust, particularly coming from someone intimately involved in the process.

Darnton's thesis here is that, while it's perfectly fine for businesses to make money and for authors to make money through copyrights, it's the responsibility of the public at large to push back in the direction of making the web more democratic. Darnton argues this is necessary for a number of reasons, some less widely publicized than others. In the particular context of Google books, there are several that seem particularly noteworthy:
  • Google is obtaining an effective monopoly on access to a large number of digitized books. Google means not to use its awesome power for evil, but it's awesome power nonetheless.
  • Google's terms of access, while generally reasonable and in particular "a boon to the small-town, Carnegie-library readers," are "hedged with restrictions." For example, access is limited to one terminal per library, no matter the size of the library.
  • Market forces cannot act correctively should Google or some successor in future decades choose to favor profit over access because "Students, faculty and patrons of public libraries will not pay for the subscriptions. The payment will come from the libraries[.]"
  • (This I had not realized at all) The libraries Google is digitizing "will not come close to exhausting the stock of books in the United States," and "contrary to what one might expect, there is little redundancy in the holdings of the five libraries [Google has partnered with.]"
    • There are about 543 million volumes in the research libraries of the United States, of which Google intends initially to digitize 15 million, or under 3 percent.
    • 60% of the books being digitized by Google exist in only one of the five libraries.
    • Google has not even begun to digitize the libraries' special collections.
The monopoly argument is the elephant in the room. I've met people who work at Google. They're serious, dedicated scary-smart geeks who want to build awesome software that makes the world a better place. I have no doubt that this culture extends from Larry and Sergey to the chefs in the kitchen. But a publicly-traded company is a publicly-traded company and a monopoly is a monopoly.

As to restrictions, the particular example of one terminal per library can probably be finessed or possibly renegotiated around. The larger problem is that there are many restrictions, larger and smaller, worked into the 100+ pages of the agreement by various parties with various axes to grind. At worst, such a web of legalese can have a chilling effect as no one wants to run afoul of something they signed up to but don't completely understand. At best, it's an annoying mess.

The market disconnect is a particularly sore subject for Darnton, who has seen a nasty and at least superficially similar scenario play out with academic journals. Students and faculty rely on journals both for research and for getting published (the preferred alternative to perishing), but don't and generally couldn't pay for them directly. As a result, the publishers have been able to steadily jack up subscription prices, into the tens of thousands of dollars per year.

Bear in mind that the publishers do not pay the authors, who are academics struggling to publish, so profit margins are rather on the high side. University libraries have to carry the journals, so instead they cut back on other core activities, such as buying books. The claim here is not that Google intends to do the same, but that there is no effective market mechanism to push back against it, and no guarantee that Google's current corporate culture will outlive Google the corporate entity.

The problem with Google digitizing only a small slice of the pie is that people will tend to think that, since Google indexes the public web, and in fact the public web can for most intents and purposes be defined as that which Google indexes, Google Books has similar scope. Since Google's digitizing is in essentially random order, it's quite possible that someone researching a given topic will turn up a small and unrepresentative sample of the published information on that topic without realizing it.

This should get better with time, but obviously it's going to take quite a bit of time before even a majority of published books have been digitized. In the meantime, the only antidote is old-fashioned library research, not that that's such a bad thing.

Darnton also puts forth several arguments I find less convincing:
  • "Companies decline rapidly in the fast-changing environment of electronic technology"
  • "As in the case of microfilm, there is no guarantee Google's copies will last."
  • "Google will make mistakes [in digitizing, tagging, etc.]"
  • Google is not a library. It doesn't have the expertise to tell people, say, which editions of Shakespeare's plays are more likely to represent what Shakespeare actually wrote.
  • Digitizing cannot capture everything in a paper book.
I'm not convinced electronic technology has much to do with Google's decline or lack thereof. Companies have imploded ever since there were companies. Google, however, is clearly in the category of Microsoft and IBM -- just don't say that in Mountain View -- and not Flooz.com or WebVan.com. Darnton's objection here is not to ownership of the database per se, which seems the more relevant issue to me, but rather that "Google may disappear or be eclipsed by an even greater technology, which could make its database as outdated and inaccessible as many of our old floppy disks and CD-ROMs."

This misses two qualitative differences between an old floppy disk with a document file produced by some long-unsupported word processor and Google's data. The first is that Google's data is not tied to any particular physical medium. I'm not intimately familiar with Google's infrastructure, but I would be shocked if digitized books were not already hosted at multiple physical locations. As hardware gets replaced and upgraded, more copies will get made. Google has to do this sort of thing to ensure high availability.

Second, Google's formats, even if they include Google secret sauce, are not going to be orphaned. If some future disaster should take out all known copies of Google's source code (making a local copy to hack on is a standard part of development) and prevent anyone familiar with the formats from remembering them, we would surely have much bigger fish to fry. Likewise, if Google's digitized books are no longer useful because of some new and better technology ... great.

But the main reason I'm not concerned about Google's fallibility, or its inability even to pretend to be a library in anything but the most superficial sense, or the fiasco of microfilm (and yeesh, was it a fiasco), is that unlike the case of microfilm and early US newspapers, no one is getting rid of the originals. If you want to look at the Bad Quarto of Shakespeare, you can still go to Special Collections and see it, or at least you're no less able to than you were before. And if you can't, you can at least look at a picture, which you couldn't before.

So, on the whole, I share Darnton's position of speaking "as a Google enthusiast, although I worry about its monopolistic tendencies," and I hope that despite the potential pitfalls, things will work out well. At the very least, it's beneficial to publicly air and discuss the issues. Granted, it might have been better to have discussed them more widely and sooner.

Indeed, this is one of Darnton's major regrets: "By spreading the cost in various ways [...] we could have provided authors and publishers with a legitimate income, while maintaining an open access repository or one in which access was based on reasonable fees. We could have created a National Digital Library -- the twenty-first century equivalent of the Library of Alexandria. It is too late now. [... W]orse, we are allowing a question of public policy -- the control of access to information -- to be determined by private lawsuit."

"While the authorities slept, Google took the initiative. It did not seek to settle its affairs in court. It went about its business, scanning books in libraries[.]"

Saturday, January 2, 2010

Off topic: Welcome to the new decade

[For a while, at least, this "off-topic" post had been one of the more popular ones on the blog.  If you landed here trying to find out when a decade "really" starts, well, here's my answer, but please feel free to look around the rest of the blog --D.H. Dec 2015]

Just as ten years ago you would hear that 2000 wasn't really the start of a new millennium, you'll now hear that 2010 isn't really the start of a new decade. As far as I'm concerned, it is. In particular, it's the first year of the 2010s (or teens, except maybe 2010, 2011 and 2012 should be "tweens").

Yes, I'm well aware there was no year zero in the traditional calendar (though there is in ISO 8601). Fair enough, and if you want to count 1-10 as the first decade, 11-20 as the second, and 2001-2010 as the first decade of the 21st century, have at it. Mind the changeover from Julian to Gregorian, and make sure you use the right calculation for the year 1, but have at it.

Be that as it may, you can start a decade any place you like. You could call 2010 the second year of the decade from 2009-2018. Lumping the years 1990-1999 together as "the 1990s" or "the 90s" and the years 2020-2029 together as "the 2020s" or "the 20s" is perfectly legitimate and natural, so calling 2000-2009 a decade and 2010-2019 another one is just as good for my money, even if coming up with a nice nickname is problematic.
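
In code, the two conventions come out to a line each (a toy sketch, of course):

    # The two decade conventions, side by side.

    def calendar_decade(year):
        """The "1990s" convention: 1990-1999, 2010-2019, and so on."""
        return year - year % 10

    def ordinal_decade(year):
        """The no-year-zero convention: years 1-10 are decade 1, 2001-2010 decade 201."""
        return (year - 1) // 10 + 1

    print(calendar_decade(2010))  # 2010 -- the first year of "the 2010s"
    print(ordinal_decade(2010))   # 201  -- the last year of the 201st decade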

Popular usage heavily favors this notation, and for similar reasons "the 1900s" may well supplant "the 20th century", or "the 20th century plus 1900 and without 2000" if you want to be a stickler about it. I personally prefer that usage, and in any case, if everybody is making the same "mistake" in usage, it's hard to argue that it's a mistake.

We will now return to our regularly scheduled blogging.

Friday, January 1, 2010

Fox v. TWC: update

It appears that Time-Warner Cable and Fox have managed to reach at least enough of an agreement to keep Fox channels going for a while yet. How long? Not clear. What next? Not clear. Interesting case study in game theory? You bet.

[Update: As of January 2, 2010, the two seem to have reached an agreement.]