Thursday, December 31, 2009

TWC and Fox in the new year

As of this writing it appears that the game of chicken that Time Warner Cable and News Corp have been playing is likely to end up with the two splatting into each other at high speed. News Corp wants more cash for carrying its popular TV programming, including US college football bowls and sitcoms such as It's Always Sunny in Philadelphia. Time Warner Cable (now independent of parent Time Warner) claims it would have to pass along $1 per subscriber and is leery of setting a precedent that the other major TV networks could then use to their advantage.

So far there has been no agreement and there's every possibility that Fox channels such as FX, Speed, Fuel and Fox Soccer Channel -- but not Fox News or Fox Business, which have separate contracts -- could simply go dark to TWC subscribers at midnight tonight.

So what does this have to do with the web? For one thing, TWC is also a major ISP (full disclosure: they happen to be mine). For another thing, Hulu (part-owned by Fox) will be happy to show you Sunny and other FX shows (not sure about the sports), whether over TWC's pipe or someone else's. That's particularly interesting since Hulu is supported by ads, and declining ad revenue is one of the reasons Fox wants to charge cable companies like TWC more. Another might be that Fox wouldn't mind cable subscribers dumping cable in favor of an online service Fox has a piece of.

Nah.

Monday, December 28, 2009

How disrupted is technology?

The idea of disruptive technology is that changes in technology can bring about significant changes in society as a whole. But how much does technology itself change?

Let's start with what's on my screen right now:
  • A browser
  • An email client
  • A few explorer/navigators or whatever you call things that let you browse through a file system.
  • A couple of flavors of text editor
  • A command-line terminal (which I often don't have open) into which I mostly type commands I learned over twenty years ago.
  • If I'm working with code, I'll also have an IDE running
Everything on that list could have been there ten or fifteen years ago [The browser has since eaten the email client and a couple of flavors of text editor.  In general the trend seems to have been spending more and more time in the browser, to the point that, at least on my laptop at home, I spend virtually all the time in the browser.  I don't know what the split is between apps and browser pages on my phone, but certainly almost all screen time on a modern phone is web-driven one way or another --D.H. Dec 2015].

Now that first item, "browser", is a bit misleading because what you can access with the browser has changed significantly over the last decade or so, but even then the last major changes in browser technology, namely the key pieces of AJAX, happened over a decade ago. Again, I'm talking about disrupted technology here, not disruptive technology. Whatever changes the web and the computer desktop have wrought over the last decade, the underlying technology hasn't changed fundamentally.

What about the broadband connection behind the browser? There's broadband and then there's broadband, but if you mean "much faster than dial-up, fast enough to stream some sort of audio and video", that's been widely available for years as well. What about the server farms full of virtual machines at the other end of that connection? The whole point of such server farms is that they're using off-the-shelf parts, not the bleeding edge. Virtualization has become a buzzword lately, but the basic concept has been in practice for decades.

In short, I don't see any fundamental shifts in the underlying technology of the web. In fact, it seems just as likely that it's the stability of web technology that's enabled applications like e-commerce and social networking to build out over the last decade. Whether those are disruptive is a separate question, one which I've been chewing on for a while, mostly under the rubric of not-so-disruptive technology (in case you wonder where I stand on the matter).

Now, it's a legitimate question whether a decade or so is a long time or a short time. If you're a historian, it's a short time, but wasn't even one year supposed to be a long time in "internet time"?

Sunday, December 20, 2009

This one has a little bit of everything

For quite a while, the Did you feel it? link on the USGS web site has given the general public a chance to report earthquakes. This allows the seismologists to get a quick fix on the location and intensity of a quake before their instruments can produce more precise results -- seismic waves take time to travel through the earth.

This is a nice bit of crowdsourcing, somewhat akin to Galaxy Zoo, but it depends on people getting to the USGS site soon after they feel an earthquake. Some people are happy to do just that, but it's not necessarily everyone's top priority. So now the USGS has started searching through Twitter for keywords like "earthquake" or "shaking", and they're finding enough to be useful. The tweets range from a simple "Earthquake! OMG!" to something more like "The ceiling fan is swaying and my aunt's vase just fell off the top shelf," which gives some idea of magnitude.
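Mechanically, that first pass is about the simplest text mining there is: look for the keywords. Here's a minimal sketch of the idea in Java -- a toy of my own, not the USGS's actual code; the Tweet type, the keyword list and the sample text are all made up for illustration.

```java
import java.util.List;

// Toy keyword filter for quake-related tweets. Not the USGS's actual code;
// the Tweet record, keywords and sample text are invented for illustration.
public class QuakeTweetFilter {
    record Tweet(String user, String text) {}

    private static final List<String> KEYWORDS = List.of("earthquake", "shaking");

    static boolean looksSeismic(Tweet t) {
        String lower = t.text().toLowerCase();
        return KEYWORDS.stream().anyMatch(lower::contains);
    }

    public static void main(String[] args) {
        List<Tweet> sample = List.of(
                new Tweet("a", "Earthquake! OMG!"),
                new Tweet("b", "The whole house is shaking and the vase just fell"),
                new Tweet("c", "Great coffee this morning"));
        // Prints the first two tweets; the third doesn't match.
        sample.stream()
              .filter(QuakeTweetFilter::looksSeismic)
              .forEach(t -> System.out.println(t.user() + ": " + t.text()));
    }
}
```

Matching a keyword is the easy part, of course; deciding which matches are genuine reports, and what they say about intensity, is where the real work is.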

As with Twitter in Iran, tweets are a great primary source of information, but you need to sift through them to get useful data. As with Google Flu, mining tweets doesn't require active cooperation from the people supplying the data. Rather, it mines data that people have already chosen to make public. In the case of Google Flu, Google is trying to use its awesome power for good by mining information that people give up in exchange for being able to use Google. (You have read Google's privacy policy, haven't you?) With Twitter, the picture is much simpler: The whole point is that you're broadcasting your message to the world.

It should come as no surprise that tweets about seismic activity are much more useful if you know where they came from (though even the raw timestamp should be of some use). Recently (November 2009), Twitter announced that its geotagging API had gone live. This allows twitterers to choose to supply location information with their tweets. The opt-in approach is definitely called for here, but even so there are serious questions of privacy. Martin Bryant has a good summary, to which I'll add that information about your location over time is a very good indicator of who you are.
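To see why location helps so much, here's a toy back-of-the-envelope sketch of my own -- emphatically not how the USGS does it: given a handful of geotagged "felt it" reports, even a plain average of the coordinates gives a first rough guess at where the shaking was centered, well before the instrument-based solution is in.

```java
import java.util.List;

// Toy estimate of a quake's rough location from geotagged reports: just
// average the latitudes and longitudes. Purely illustrative -- the numbers
// are invented, and real seismology does far better than this.
public class FeltReportCentroid {
    record Report(double lat, double lon) {}

    public static void main(String[] args) {
        List<Report> reports = List.of(
                new Report(37.78, -122.42),
                new Report(37.69, -122.47),
                new Report(37.81, -122.27));
        double lat = reports.stream().mapToDouble(Report::lat).average().orElse(0);
        double lon = reports.stream().mapToDouble(Report::lon).average().orElse(0);
        System.out.printf("Rough center of reports: %.2f, %.2f%n", lat, lon);
    }
}
```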

Wednesday, December 16, 2009

And I would want to do this, why?

As I've mentioned, Google has decided that not only are blog readers a potential market for ads, so are the bloggers themselves. One ad, in particular, offers to print one's blog in book form. I can see the appeal of that in general, and I'm not alone, but the devil is in the details. The details in this case are not particularly attractive.
  • They're offering to print up your blog in softcover for $15 for the first 20 pages and $0.35 for each additional page. This is way, way more than Lulu charges to print on demand, so they're essentially charging you a hefty premium for scraping your blog and formatting it lightly before printing it.
  • The formatting options are extremely limited. You can show the entries in forward or reverse order, with or without comments. I didn't find out whether they carry along the style of the online version.
  • I didn't see any mention of indexing or table of contents.
  • As far as I can tell you can't even customize the title. You can, however, add an introduction/dedication of ... wait for it ... up to 350 characters!
I wouldn't call this a scam by any means, as they say up front what they do and what it costs, but it's definitely vanity publishing in the broader sense (see here for some background on what I mean by that). It's certainly not a commercially viable proposition [for the blogger, that is].

If I were to produce a print edition of Field Notes, I'd take a somewhat different approach:
  • I'd regroup the posts by theme so it becomes painfully obvious how often I've flogged each dead horse. The tags would be of some help here, but only some.
  • I'd provide a short lead-in for each section and a longer introduction for the book.
  • I'd probably do some light editing to improve the flow from one post to the next.
  • I'd give some indication of links between posts and probably selected external links. Sidenotes, maybe.
  • I'd clean up some of the formatting for consistency's sake, particularly the pseudo footnotes that appear here and there and maybe the editorial notes I sometimes add after the fact.
  • I'd take out any superfluous commas and parentheses I missed the first time around.
  • I'd provide a table of contents and index. Again the tags would be of some help, but only some, in constructing the index.
  • Along the way I'd probably end up doing some gardening in the blog itself, cleaning up tags and tweaking posts.
  • I'd title it Field Notes on the Web: Old-media Remix
All of this would entail quite a bit of hand-editing, some custom scripting/XML-bashing, considerable puzzling over what belongs in which section and, not least, re-reading the whole blog and the finished result from start to finish multiple times. To make this worth my while I'd need to see some indication that people would buy it, and I'm at least a couple of orders of magnitude away from that level of readership.

So if you're interested in my version, tell a hundred or so of your closest friends to stop by, and tell them to tell their friends, etc. Go ahead. I'll wait. In the meantime, if you really, really want to get your hands on a printed, bound copy of a bunch of Field Notes, feel free to track down the service yourself. As far as I can tell, they don't really care whose blog you print, so long as you print it. Myself, I don't see the point.

Tuesday, December 15, 2009

Surfing up and down the east coast

In a previous life, I made a couple of long-haul bus trips across the American southwest. It was an option worth considering for someone with more time than money -- an undergraduate, say -- provided the traveler was someone -- an undergraduate, say -- who didn't mind sitting and occasionally half-sleeping in cramped quarters in the company of all manner of interesting people. On the one hand it's slow, something over 40 hours of nearly continuous driving to get about halfway across the continent. On the other hand, you see all sorts of things you'll completely miss in flyover mode. That cuts both ways.

Here in the 21st century it's a whole different game, at least in the Boston-DC corridor. Two carriers, BoltBus and megabus.com, are offering service between major cities aimed not only at scruffy undergrads but also at business travelers. What makes it work?
  • Speed is not such a problem in the target market. Flying will still be considerably faster for the longer routes, even counting time to and from airports, but DC - New York is nowhere near as long a haul as LA - Albuquerque.
  • The bus is still way cheaper.
  • These new buses are spiffy, less cramped than Ye Olde Greyhound and ... you knew it had to be in here somewhere ... webby.
Not only do you book online and bring an email confirmation as your ticket, you can surf while you're on the bus at no extra charge. Check your social networking site, fire off a few emails and tweets, who knows, maybe even get some work done while you're en route.

How does it work? I wasn't able to track down the exact mechanism, but I would have to guess a series of WiMax towers along the route feeding the on-board WiFi.

How well does it work? Well, BoltBus's FAQ cautions that "This technology is new, and there are spots on the trip where the service may be unavailable. We also do not advise downloading large files, as the speed will be relatively slow [...] Plug in and Wi-Fi disclaimer: BoltBus makes every effort to provide these services free of charge to every passenger. However, if, for whatever reason, the service is unavailable we are unable to supply a refund."

Well, whaddya want for a nickel?

I'm not sure which company's transparent attempt to sound like Bus 2.0 works better. You'd think that either CamelCaseNames or domainname.com would fall out of fashion any day now, so I'll give the edge to BoltBus for not using mega.

[Looks like both of these outfits are still in business --D.H. Dec 2015]

Monday, December 14, 2009

More required reading I haven't read

Readers of this blog, or even passersby who run across the ever-popular "Information age: not dead yet" post, may find it hard to believe this, but I have not yet read Tom Standage's The Victorian Internet. In fact, I only just heard of it.

Sunday, December 13, 2009

Additive change considered useful

This post is going to be a bit more hard-core geekly than most, but as with previous such posts, I'm hoping the main point will be clear even if you replace all the geek terms with "peanut butter" or similar.

Re-reading my post on Tog's predictions from 1994, I was struck by something that I'd originally glossed over. The prediction in question was:
The three major operating systems in use today, DOS/Windows, Macintosh, and Unix, were all launched in the seventies. They are old, tired, and creaking under the weight of today's tasks and opportunities. A new generation of object-oriented systems is waiting in the wings.
My specific response was that object-oriented programming has indeed become prominent, but that for the most part object-oriented applications run on top of the same three operating systems. I also speculated, generally, that such predictions tend to fail because they focus too strongly on present trends and assume that they will continue to the logical conclusion of sweeping everything else aside. But in fact, trends come and go.

Fair enough, and I still believe it goes a long way towards explaining how people can consistently misread the implications of trends, but why doesn't the new actually sweep aside the old, even in a field like software where everything is just bits in infinitely modifiable memory?

The particular case of object-oriented operating systems gives a good clue as to why, and the clue is in the phrase I originally glossed over: object-oriented operating systems. I instinctively referred to object-oriented programming instead, precisely because object-oriented operating systems didn't supplant the usual suspects, old and creaky though they might be.

The reason seems pretty simple to me: Sweeping aside the old is more trouble than it's worth.

The operating system is the lowest level of a software platform. It's responsible for making a collection of hard drives and such into a file system, for sending the right bits to the video card to put images on the screen, for telling the memory management unit what goes where, for dealing with processor interrupts and scheduling, and for other such finicky stuff. It embodies not just the data structures and algorithms taught in introductory OS classes, but, crucially, huge amounts of specific knowledge about CPUs, memory management units, I/O buses, hundreds of models of video cards, keyboards, mice, etc., etc., etc.

For example, a single person was able to put together the basic elements of the Linux kernel, but its modern incarnation runs to millions of lines and is maintained by a dedicated core team and who knows how many contributors in all. This for the kernel. The outer layers are even bigger.

It's all written in an unholy combination of assembler and C with a heavy dose of magical functions and macros you won't find anywhere else, and it takes experience and a particular kind of mind to hack in any significant way. I don't have the particulars on Mac OS and DOS/Windows, but the basics are the same: Huge amounts of specialized knowledge distributed through millions of lines of code.

So, while it might be nice to have that codebase written in your favorite OO language -- leaving aside that an OO platform enables, but certainly does not automatically bring about, improvements in code quality -- why would anyone in their right mind want to rewrite millions of lines of tested, shipping code? As far as function is concerned, it ain't broke, and where it is broke, it can be fixed for much, much less than the cost of rewriting. Sure, the structure might not be what you'd want, and sure, that has incremental costs, but so what? The change just isn't worth it*.

So instead, we have a variety of ways to write desktop applications, some of them OO but all running on one of the old standbys.


Except ...

An application developer would rather not see an operating system. You don't want to know what exact system calls you need to open a TCP connection. You just want a TCP connection. To this end, the various OS vendors also supply standard APIs that handle the details for you. Naturally, each vendor's API is tuned toward the underlying OS, leading to all manner of differences, some essential, many not so essential. If only there were a single API to deal with no matter which platform you're on.

There have been many attempts at such a lingua franca over the years. One of the more prominent ones is Java's JVM, of course. While it's not quite the "write once, run anywhere" magic bullet it's sometimes been hyped to be, it works pretty well in practice. And it's OO.
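To make the TCP example concrete: with the JVM's class library, the same few lines open a connection no matter which operating system's socket calls are underneath. A minimal sketch (the host name is just a placeholder):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// The same code runs unchanged on Windows, Mac OS or Linux; the JVM's class
// library deals with whatever socket calls the underlying OS provides.
// The host name here is just a placeholder.
public class PortableTcp {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("www.example.com", 80)) {
            OutputStream out = socket.getOutputStream();
            out.write("HEAD / HTTP/1.0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
            out.flush();
            System.out.println("First byte of the reply: " + socket.getInputStream().read());
        }
    }
}
```

Somewhere below that constructor call, very different system calls are being made on each platform; the whole point is that the application never sees them.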

And it has been implemented on bare metal, notably on the ARM architecture. If you're running on that -- and if you're writing an app for a cell phone you may well be** -- you're effectively running on an OO operating system [Or nearly so. The JVM, which talks to the actual hardware, isn't written in Java, but everything above that is][Re-reading, I don't think I made it clear that in the usual Java setup, the JVM is relying on the underlying OS to talk to the hardware, making it a skin over a presumably non-OO OS.  In the ARM case, you have an almost entirely OO platform from the bare metal up, the exception being the guts of the JVM].

Why did this work? Because ARM was a new architecture. There was no installed base of millions of users and there weren't hundreds of flavors of peripherals to deal with. Better yet, a cell phone is not an open device. You're not going to go out and buy some new video card for it. The first part gives room to take a new approach to the basic OS task of talking to the hardware. The second makes it much more tractable to do so.

What do the two cases, of Java sitting on existing operating systems in the desktop world but on the bare metal in the cell phone world, have in common? In both cases the change has been additive. The existing operating systems were not swept away because in the first case it would have been madness and in the second case there was nothing to sweep away.



* If you're curious, the Linux kernel developers give detailed reasons why they don't use C++ (Linus's take is particularly caustic). Whether or not we choose to count C++ as an OO language, the discussion of the costs and benefits is entirely valid. Interestingly, one of the points is that significant portions of the kernel are object-oriented, even though they're written in plain C.

** I wasn't able to run down clear numbers on how many cell phones in actual use run with this particular combination. I believe it's a lot.

Saturday, December 12, 2009

Usability, convention and standards

A while back now, I wrote about the importance of convention in life in general and by extension on the web in particular. Earl commented, wondering whether there was such a thing as an encyclopedia of conventions. A bit later, I asked a colleague who designs user interfaces about that. I wasn't expecting an actual encyclopedia, but perhaps a few widely-used reference works or such.

My colleague chuckled and pointed at Jakob's Law, named after Jakob Nielsen, which states that "Users spend most of their time on other sites." This ensures a certain homogeneity, since sites that don't conform to what everyone else is doing will tend to be hard to navigate and so lose traffic. As a corollary, conventions arise not by fiat from some authority, but de facto from what the most prominent sites do.

Fair enough, but that can't be the whole story. Some conventions are dictated by standards. Take URLs, for example. Granted, the casual web user can get by without seeing them much, but they can't be ignored completely, and every bit of a URL is dictated by standards. For example (there's a short sketch after this list that pulls the pieces apart):
  • The http: or https: prefix is the standard designation of the Hypertext Transfer Protocol (RFC 2616). The URL syntax itself is specified in RFC 1738 and others.
  • HTTP URLs, and several other flavors, are hierarchical, meaning that they can be broken down into a list of sub-parts separated by slashes. Why slashes and not backslashes? That's what the standard calls for.
  • The authority part of an HTTP URL is the name of a host to which you should be able to make an HTTP connection*, typically something like www.example.com. The parts-separated-by-dots aspect is specified as part of DNS (RFC 1035).
  • Why do so many domain names end in .com? Thank the IANA.
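Those pieces are regular enough that the standard libraries will happily pull them apart for you. A quick sketch using the JVM's URI class (the URL itself is made up):

```java
import java.net.URI;

// Pulling apart an HTTP URL into the standardized pieces described above.
// The URL itself is made up for illustration.
public class UrlParts {
    public static void main(String[] args) {
        URI uri = URI.create("http://www.example.com/sports/scores?team=foo");
        System.out.println("scheme:    " + uri.getScheme());    // http
        System.out.println("authority: " + uri.getAuthority()); // www.example.com
        System.out.println("path:      " + uri.getPath());      // /sports/scores
        System.out.println("query:     " + uri.getQuery());     // team=foo
    }
}
```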
There are also empirically-derived results that put limits on what will or will not work. Fitts's law, for example, states that how long it takes to point at something depends on how big it is and how close it is**. This has a strong effect on how well a user interface works. This in turn has at least some effect on how widespread a particular approach becomes.
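For reference, the formula usually quoted (the Shannon formulation) gives the average time T to hit a target of width W at distance D, with constants a and b fitted empirically to the device and the user:

```latex
% Fitts's law, Shannon formulation: time grows with the index of
% difficulty log2(D/W + 1); a and b are fitted constants.
T = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```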

But hang on. What did I just say, "has some effect"? Fitts's law has something of the character of a law when it comes to measuring how long it takes people to, say, find and select a menu option. There's laboratory evidence for it. It has less of the character of a law when it comes to determining what real interfaces look like. That's determined in large part by what the prominent vendors happen to put out and by similar factors having little to do with the merits of the interfaces themselves.

And those standards? It's slashes and not backslashes because the first web servers used the UNIX convention of forward slashes and not (say) the DOS convention of backslashes. Moreover, we use URLs and HTTP at all, and not some other set of standards with similar intent, because they caught on.

Anyone can write a standard. Writing a widely-accepted standard is a different matter, and it helps greatly to start with something people are already using. Why standardize something people are already using? Because people want some assurance that when they say "Foo protocol", they mean the same thing as everyone else, and in particular, their Foo implementation will work with everyone else's. Typically it goes something like this:
  • Someone puts out a nifty application that, say, uses something new called "Foo protocol" to coordinate its actions.
  • The app is so nifty that other people want to write pieces that work with it. So they figure out how to make their stuff speak Foo, whether by reading the source (if it's open), or reverse engineering, or by whatever other means.
  • Unfortunately, everyone's implementation works just a bit differently. At first, it doesn't matter much because I'm using my Foo to do something I know about and you're using yours to do something you know about.
  • But sooner or later, people start using Foo to do something new. This happens with both the original application and with the third parties.
  • "Hey wait a minute! I was expecting your server to do X when I sent it a Blah message, but it did Y!" "Well, that always worked before, and it was consistent with the other stuff we were doing ..."
And thus is born the working group on Foo. With luck and a following wind, a written standard pops out some time later, and if adoption is good, the working group on Foo interoperability comes along to work out what the Foo standard really means.

But I digress. The main point, if there is one, is that conventions seem to arise organically, influenced by considerations such as existing standards and the technical details of the problems being addressed, but ultimately decided by accidents of history such as what happened to catch on first.

(*) Strictly speaking the part right after http:// doesn't have to be a real web server. But let's just pretend it does.

(**) Tog has an interesting quiz on the topic.

Wednesday, December 9, 2009

More bad news for Murdoch

Rupert Murdoch's approach to online content rests on a few basic tenets:
  • News aggregators* such as Google and Yahoo are stealing News Corp's content
  • People's online reading habits will reflect their newspaper buying habits
  • If the major outlets can all be persuaded to charge for content, people will have to pay
To which The Economist, citing a study by media consultancy Oliver & Ohlbaum, rebuts
  • People don't generally find their news through aggregators, so it's rather moot whether news aggregators are stealing or not
  • People's online reading habits have very little to do with what print publications they buy -- they'll read pretty much anything online
  • As more outlets decide to charge for content, people become less likely to pay
These findings line up against Murdoch's stated positions so neatly one would almost think they were aimed deliberately at rebutting them. True, the last item is based on people's answers to a hypothetical question, but still ... the study does seem to put some meat on the bones of what a whole lot of web-savvy people have already been thinking and saying.


* Not to be confused with the search engines themselves, but Murdoch is also having a go at reining them in.

Tuesday, December 8, 2009

Real-time Google

[Not to be confused with Google Instant, which shows search results in real time.]

I tend to operate somewhat slower than real time myself, so I may not get around to investigating Google's latest magical trick, real-time search, right away, but for me what jumped out of CNET's article on it wasn't the inevitable Google-Twitter partnership, but that
Real-time search at Google involves more than just social-networking and microblogging services. While Google will get information pushed to it through deals with those companies, it also has improved its crawlers to index and display virtually any Web page as it is generated.
That's been coming along, by degrees, for a while, but it still seems kind of eerie.

[Sure enough, by the time I could decide that 'real-time search, right away, but for me what jumped out of' would be unique enough to find this post, and put it into Google, it was already in the index. Granted, Blogger is Google territory. Still pretty slick, though]

Monday, December 7, 2009

The future isn't what it used to be

Further into my digression into usability land -- a fine and useful place to digress, I might add -- I ran across the introduction to Bruce Tognazzini's Tog on Software Design, written in 1994, predicting the tumult of the next decade. Demonstrating that being a brilliant UI designer does not necessarily make one a brilliant futurologist, it nicely summarizes the "internet will change everything" vibe that was particularly strong then and still alive and kicking to this day. As such, it provides a fine chance to jump on my "not ... so ... fast" hobby horse and respond. Maybe even get it out of my system for a while.

Nah.

Following is a series of quotes, probably on the hairy edge of fair use. I had originally done the old point-by-point reply, but the result was tedious even for me to read, so instead let's pause to contemplate some of the more forceful statements in the area of technology ...
[W]ithin only a few more years, electronic readers thinner than this book, featuring high-definition, paper-white displays, will begin the slow death-knell for the tree mausoleums we call bookstores.
...
The three major operating systems in use today, DOS/Windows, Macintosh, and Unix, were all launched in the seventies. They are old, tired, and creaking under the weight of today's tasks and opportunities. A new generation of object-oriented systems is waiting in the wings.
...
[Cyberspace] will be an alternate universe that will be just as sensory, just as real, just as compelling as the physical universe to which we have until now been bound.
... economics ...
Every retail business from small stores to shopping centers to even the large discount superstores will feel an increasing pinch from mail-order, as people shop comfortably and safely in the privacy of their own homes from electronic, interactive catalogs.
...
a new electronic economy will likely soon rise, based on a system of barter and anonymous electronic currency that not even the finest nets of government intrusion will be able to sieve. [Bitcoin, anyone? --D.H. May 2015]
... society ...
Security is as much an illusion, as naïve, idealistic hackers automate their activities and release them, copyright-free, to an awaiting world of less talented thieves and charlatans. Orwell's prediction of intrusion is indeed coming true, but government is taking a back seat to the activities of both our largest corporations and our next-door neighbors. The trend will be reversed as the network is finally made safe, both for business and for individuals, but it will be accomplished by new technology, new social custom, and new approaches to law. The old will not work.
...
More and more corporations are embracing telecommuting, freeing their workers from the drudgery of the morning commute and society from the wear, tear, upkeep, and pollution of their physical vehicles. They will flit around Cyberspace instead, leaving in their wake only a trail of ones and zeros.
...
Dry-as-dust, committee-created and politically-safe textbooks will be swept away by the tide of rough, raw, real knowledge pouring forth from the Cyberspace spigot.
... and the creative world ...
As the revolution continues, our society will enjoy a blossoming of creative expression the likes of which the world has never seen.
...
[W]e are also seeing the emergence of a new and powerful form of expression, as works grow, change, and divide, with each new artist adding to these living collages of color, form, and action. If history repeats itself--and it will--we can expect a period of increasing repression as corporate intellectual property attorneys try desperately to hold onto the past.
...
Writers will no longer need to curry the favor of a publisher to be heard, and readers will be faced with a bewildering array of unrefereed, often inaccurate (to put it mildly), works.
Start with what more-or-less panned out: Object-oriented development has taken root. People do shop online. People do telecommute. Corporate intellectual property attorneys have indeed tried to put various genies back in their bottles. Blogs supply a bewildering array of unrefereed works (not to be confused with "rough, raw, real knowledge pouring forth"). Whether anyone reads them is a different matter.

Much more prominent here is what didn't happen, and there's a clear pattern: The new did not sweep aside the old. The most telling phrase along those lines is "mail-order". If you look at online shopping as a completely new way of doing business, then it's obvious that WebVan, eToys and Pets.com are going to slay the dinosaurs. But if you look at the web as the latest heir to the Sears Catalog, it's no surprise what actually happened. Far from feeling the pinch, the big box stores have simply added online shopping to their marketing arsenal.

And so on down the line: Object-oriented platforms are definitely here, but they generally run on DOS/Windows, Unix/Linux or the Mac. Various net-borne security threats have come along, but a scam is still a scam and a bank is still a bank. Some people telecommute now, but most can't and many would prefer not to. Wikipedia came along but textbooks are still here. Blogs and twitter came along, but major media outlets are still here. Record labels still produce music, studios still produce movies and publishers still publish. Often on paper, even.

I've left out a few more of the original predictions in the interest of brevity and because, though they would be interesting to discuss, they would take longer to go into in sufficient depth. I'm thinking particularly of the items about licensing fees and micropayments, and the have/have not divide. However I don't believe these omissions materially affect my main thesis that this piece, and many like it, are based mainly on taking what's hot at the moment and predicting that it will push everything else aside.

Why, then, the willingness to believe that today's particular preoccupations will devour the future? Paradoxically, I think it may come of an inability to see change. If, to take a contemporary example, Twitter and social networking are all that everyone's talking (or tweeting) about, then simple inertia can lead one to assume that they're all that everyone will be talking about tomorrow, or in a month, or in a decade.

This is evident in one of the more jarring ironies in the piece: Directly after declaring that "Saying Information Superhighway is no longer cool," Tog goes on to extol Cyberspace.

Remember Cyberspace?

[For a bit more on this thread, see this later post --D.H. Dec 2015]