Tuesday, January 7, 2025

The future still isn't what it used to be: This whole "web" thing

It's worth keeping in mind that the building blocks of today's web, particularly HTTP and HTML, were developed by academic researchers. One thing that academic researchers have to do, a lot, is follow references, because most academic work builds on, critiques and otherwise refers to existing work.

Let's take a moment to appreciate what that meant before the web came along. Suppose you're reading a reference work and you run across a reference like this:

4. Ibid, 28-3

That is, you've just run across a passage like this totally made-up example:

It is also known that the shells of tropical snails vary widely in their patterning4.

That little raised 4 tells you to skip to the bottom of the page and find footnote 4, which says "Ibid, 28-3", which means "look in the same place as the previous footnote, on pages 28 through 33". So you scan up through the footnotes and find

3. Ibid, 17

OK, fine ... keep going

2. McBiologistface, Notes on Tropical Snails, 12-4

OK, this is something previously referenced, in particular something written by McBiologistface (likely the eminent Biologist McBiologistface, but perhaps the lesser-known relative General Scientist McBiologistface). Keep going ...

1. McBiologistface, Something Else About Tropical Snails, 254

OK, looks like this person wrote at least two books on tropical snails. The one we're looking for must be referenced in a previous chapter. Ah, here it is:

7. McBiologistface, Biologist, Notes on Tropical Snails (Hoople: University of Southern North Dakota at Hoople Press, 1945), 32-5

Great. Now we know which McBiologistface it was, and which edition of which book published by which publisher. Now all we have to do is track down a copy of that book, and open it to ... let's see, what was that original reference? ... oh yes, page 28.

To be fair, "McBiologistface, Notes on Tropical Snails" from reference 2 is probably enough to find the book in the card catalog at the library, and if a reference is "Ibid", you may already have the book and have it open from following a previous reference to it. It's also quite possible that your department or office has copies of many of the books and journals that are likely to be referenced.

Nonetheless, thinking of the tasks I mentioned when describing the Olden Days -- navigating an unfamiliar place, communicating by phone, streaming entertainment and searching up information -- simply following a reference from one book or article to another could be more work than any of them.

Even answering a question like "where was the touch-tone™ phone invented" would have been easier, assuming you didn't already have a copy of Notes on Tropical Snails on hand: go to the library, walk right to the easily-located reference section that you've already been to, pull out the 'T' volume of one of the encyclopedias, flip to Telephone and chances are your answer is right there (or you could just ask someone who would know).

To find the reference on snails, you'll have to look up the book in the card catalog, note down its location in the stacks, go there and scan through the books on those shelves until you find the book itself (and then open it and flip to the right page, but you already know that from the reference). This is all assuming there's a copy of the book on the shelves that no one's checked out (who knows, maybe there's been a sudden interest in tropical snails in your town). Otherwise, you could call around to the local bookstores, or your colleagues and friends, to see if anyone has a copy. If not, your favorite bookstore could special-order a copy from the publisher, and with luck it would be there in a few days.

Chasing a link in an HTML document is more or less instant. You can probably see the appeal.

My point here is that the interlinked nature of the web, that ability to click on a link, immediately see what's on the other end and easily get back to where you were, was an absolute game-changer for the sort of people who created the early web. Your own mileage may vary.


To make this work, you need a few key pieces:

  • A way of referencing data that's available on the network (URLs)
  • A way of embedding URLs in a body of text, similar to the way footnotes are embedded in ordinary text (HTML)
  • Ideally, a standard way of accessing something referenced by a URL (HTTP)

I say "ideally", because it was already possible to access data on the internet using protocols like FTP and Gopher, and you could reference those with a URL. Nonetheless, having an integrated suite of {URL, HTML, HTTP} working together fairly seamlessly meant that http:// URLs (or later, https://) quickly came to dominate.
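The three pieces are easy to see in miniature using nothing but the Python standard library. The page snippet and URLs below are invented for illustration:

```python
# A rough sketch of how the three pieces fit together. The page
# content and the URL in it are made up for illustration.
from html.parser import HTMLParser
from urllib.parse import urlparse

# HTML embeds a reference in running text, much like a footnote.
page = '''<p>Snail shells vary widely in their
<a href="http://example.edu/snails/notes.html#patterns">patterning</a>.</p>'''

class LinkFinder(HTMLParser):
    """Collect the href of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

finder = LinkFinder()
finder.feed(page)

# A URL names data on the network; its scheme says how to fetch it.
url = urlparse(finder.links[0])
print(url.scheme)    # http
print(url.netloc)    # example.edu
print(url.path)      # /snails/notes.html
print(url.fragment)  # patterns
```

The scheme is what makes the suite open-ended: swap `http` for `ftp` or `gopher` and the same reference machinery still works, which is exactly why those older protocols could coexist with the early web.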

You also need one more thing, namely that there should actually be something on the other end of the link (it's OK if links are sometimes dangling or become broken, but that should be the fairly rare exception). By the time the web standards were developed, there was already enough interesting data and text on the internet to make links useful. To some extent, the early web was just an easier way to get at this kind of information. If you had the pieces, you could easily pull together an HTML page with a collection of links to useful stuff on your server, stuff like interesting files you could fetch via FTP, with a little bit of text to explain what was going on, and anyone else could use that.


The truly webby part of the web, the network of links between documents, is still around, of course, but as far as I can tell it's not a particularly important part of most people's web experience. Links are now mostly a way of getting to content -- follow a link from a search result, or follow a reference from an AI-generated summary to see whether the AI knows what it's talking about -- rather than a way of moving between pieces of content. Some articles include carefully selected links to other material, but a lot don't. Personally, I've mostly stopped doing it, because it's time-consuming, though these recent Field Notes posts have a lot more linkage than usual.

One sort of link that I do follow quite a bit is the "related article" link in a magazine or news source -- articles by the same author or on the same topic, or just to stuff that the server thinks you might find interesting, or that the publisher is trying to promote. But again, this seems more like navigating to something. The articles themselves largely stand alone, and I generally finish one article before moving on to the next. A truly webby link, like a footnote before it, links from some specific piece of text to something that's directly related to it.

And, of course, I do click on ad links, though usually by mistake, since you just can't get away from them.


Realizing this, I think, is a big reason that this blog went mostly quiet for a couple of years. If the webby part of the web is really only of interest to a few people, except in a few special cases like sharing social media content and browsing Wikipedia, why write field notes about it, particularly if the blog writer doesn't find social media particularly appealing?

Conversely, this latest spate of posts is largely the result of relaxing a bit about what the "web" is and talking about ... dunno, maybe "the online experience" in general? Or just "internet-related stuff that doesn't really seem to fit on the other blog?"

Whatever you call it, I seem to be enjoying writing about it again. 

Monday, January 6, 2025

The future still isn't what it used to be: Vannevar Bush

(According to Blogger, this is the 700th post on this blog, which seems like a completely arbitrary milestone to note, but I noticed it nonetheless, so now you get to. You're welcome.)

Vannevar Bush casts something of a long shadow. He held several high-level technology-related posts in the FDR and Truman administrations, had a long and distinguished academic career at MIT and elsewhere, and won several prestigious awards, including the National Medal of Science. His students included Claude Shannon, whose work in information theory is still directly relevant, and Frederick Terman, who was influential in the development of what we now call Silicon Valley (I used to work fairly near Terman Drive in Palo Alto).

Bush is also often credited with anticipating the World-Wide Web in his Atlantic Monthly article As We May Think. Since I've been comparing early visions of the Web with what actually happened, I thought I'd take a look. I've linked to the ACM version rather than the Atlantic's version, which may or may not even be online, since the ACM version highlights the relevant passages. Though there's a Wikipedia page on the piece, I've deliberately skipped it in favor of Bush's original text (with the ACM's highlights).

Two things jump out immediately, neither directly relevant to the web:

  • The language is relentlessly gendered. Men do science. Girls [sic] sit in front of keyboards typing in data for men of science to use in their work. A mathematician is a particular kind of man, technology has improved man's life, and so forth. Yes, this is 1945, and we expect a certain amount of this, but from what I can tell Bush's style stands out even for the time. I mention this mainly as a heads-up for anyone who wants to go back and read the original piece -- which I do nonetheless recommend.
  • There is an awful lot of technical detail about technologies that would be obsolete within a couple of decades, and in several cases nearly fossilized by the dawn of the Internet in the 1970s. Bush speculates in detail about microphotography, facsimile machines, punch cards, analog computers, vacuum tubes, photocells and on and on for pages. Yes, all of these still existed in the 1970s (I spent many an hour browsing old newspapers and magazines on microfilm as a kid), but digital technology would make most if not all of them irrelevant before much longer. As far as predicting the technology underpinning the web goes, Bush's record is nearly perfect: If he speculated about it, it almost certainly isn't relevant to today's web.

Two thoughts on this. First, it's almost impossible to speculate about the future without mentioning at least something that will be hopelessly out of date by the time that future arrives. At any given time, all we have are the tools and mental models of that time. I don't fault Bush for thinking about the future in terms of photographic storage, and I don't think this takes anything away from his thoughts on the "Memex", which is what people are referring to when they talk about Bush anticipating the web.

I just wish he hadn't done nearly so much of it. Alan Turing's Computing Machinery and Intelligence spends two sentences on the idea of using a teleprinter so that it's not obvious whether there's a human or machine on the other end of the conversation, and one of those sentences just says that this is only one possible approach. That seems about right for that paper. In Bush's case, I could see a few paragraphs about how to store large amounts of information (for those days, at least) on film or magnetic media, and so forth. The article would have been much shorter, but no less interesting.

Second, it's worth noting how many things were possible with mid-1900s technology. You could convert, both ways, between sound, image and video (in the sense of moving images) on the one hand and electrical signals on the other. You could store electrical signals magnetically. You could communicate them over a distance. You could store digital information in a variety of forms, including the famous punched cards, but also magnetically.

There were ways to produce synthesized speech and read printed text. Selecting machines could do boolean queries on data (Bush gives the example of "all employees who live in Trenton and know Spanish"). Telephone switching networks could connect any of millions of phones to any other in about the time it took to dial (and less time than it sometimes takes my phone to set up a call using my WiFi). Logic gates existed. For that matter, the first general-purpose digital computer, the ENIAC, existed in 1945 and Bush would certainly have known about its development.

In other words, even in 1945, Bush isn't drawing on a blank canvas. He's trying to pull existing pieces of technology together in a new way in order to deal with what was, even at the time, an overwhelming surplus of information. The gist of the argument is "If we make these existing technologies smaller, faster and cheaper, and put them together in this particular way, we can make it easier to deal with all this information."


The particular problem Bush is really interested in isn't so much storing information as retrieving it ("selecting" as Bush says). This is totally understandable for a national science adviser who had until recently been working on one of the largest technological efforts to date (the Manhattan Project). Bush cites Gregor Mendel's work having been essentially unknown until decades after the fact as just one example of a significant advance nearly being lost because no one knew about it, even though it was there to be found. Bush's desire to prevent this sort of thing in the future is palpable.

Bush mentions traditional indexing systems that can find items by successively narrowing down the search space (everything starting with 'F', everything within that with second letter 'i' ... ah, here it is, Field Notes on the Web), but he's much more interested in following a trail of connections from one document to another. That is, he's envisioning a vast collection of documents traversable by following links between them. That's the world-wide web. Ok, we're done.


Except ...

Bush sees the Memex as literally a piece of furniture, looking pretty much like a desk but with a keyboard attached along with various projection screens and a few other attachments. Inside it is a store of microfilmed documents together with some writable film, which takes up a small portion of the space under the desk, and a whole bunch of machinery to be named later, taking up most of the space.

Associated with each document is a writable area containing some number of code spaces, each of which can hold the index code of a document. There's also a top-level code book to get you started, and when you add a new document, you add it to the code book. To be honest, this seems a bit tedious.

To link two documents together, you pull them both up, one on one projection screen and the other on the other, and press a button. This writes the index code for each document in the other's next open code space. The next time you pull up either of the documents, you can select a code space and pull up the document with that code.

Codes are meant to have two parts: a human-readable text code and a "positional" numeric code (probably binary or maybe decimal). Linking this post to Bush's article might add "Bush-as-we-may-think" to a code space for this post, along with (somewhere offscreen) the numeric index for Bush's article, and "Field-notes-future-ramblings-Bush" to a code space on Bush's article (along with the numeric code for this post). At that point you've got one link in a presumably much larger web. Actually, you have two links, or one bidirectional link if you prefer. Not quite Xanadu's transclusion, but arguably closer than what we actually have.
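The code-space mechanism is simple enough to model in a few lines. A minimal sketch (the document codes here are invented, and I've left out the numeric positional codes entirely):

```python
# A toy model of Bush's code spaces: every document carries a list of
# codes naming other documents, and "pressing the button" writes into
# both documents' code spaces at once.

class Document:
    def __init__(self, code):
        self.code = code        # human-readable text code
        self.code_spaces = []   # codes of linked documents

def link(a, b):
    """Memex linking: record each document's code in the other."""
    a.code_spaces.append(b.code)
    b.code_spaces.append(a.code)

post = Document("Field-notes-future-ramblings-Bush")
article = Document("Bush-as-we-may-think")
link(post, article)

print(post.code_spaces)     # ['Bush-as-we-may-think']
print(article.code_spaces)  # ['Field-notes-future-ramblings-Bush']
```

An HTML-style link, by contrast, would only append to `post.code_spaces`; the linked-to article would be left entirely untouched.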

Pretty webby, except ... coupla things ...

For one thing, this is all happening on my Memex. My copy of this post is linked with my copy of Bush's article. Yours remains untouched. If there's a way of copying either content or links from one Memex to another, I didn't catch it. Bush's description of how document linking works is hand-wavy enough that it wouldn't be particularly more hand-wavy to talk about a syncing mechanism (and/or an update mechanism), but I doubt Bush was thinking in that direction.

Bush seems to be thinking more about a memory aid for an individual person (or possibly a household or small office/laboratory). Functionally, it's a personal library with much larger capacity and the ability to leave trails among documents. It's certainly an interesting idea, but it misses the "world-wide" part. When I link to the ACM's version of Bush's paper, the link is from my blog to the ACM's site. If you write something and link it to Bush's paper, we're pointing at the same thing, not separate copies of it, and we're pointing to a thing that might be stored anywhere in the world (and someplace else next time we access it).

In the same post I mentioned above, I talk about a couple of features that make the web the web, particularly that a link can be dangling -- pointing to nothing -- and it can become broken -- you pointed at a page, but that page is no longer there (early posts on this blog are full of these, though at the time it wasn't clear whether rotting links would be an issue as storage got cheaper; it is). There's also some ambiguity as to what exactly a link is pointing to. If I point to the front page of a news site, for example, the contents on the other end of that link will probably be different tomorrow. In other cases, it's worth going to some effort to ensure the contents don't change significantly.

These may seem like bugs at first glance, but for the most part they're features: the flexibility they provide keeps the web decoupled. I can do what I like with my site without caring or even knowing what links to it. Since a Memex is a closed system, none of this really applies. On the one hand, that means these aren't problems; on the other hand, they aren't problems only because a Memex is not a distributed system, which the web as we know it very much is.

Finally, the mechanism of linking is noticeably different from what HTML does. You have a pair of links between documents (or maybe pages of documents?). An HTML link goes from a particular piece of the source document to, in general, a particular anchor in the destination document. To be fair, this doesn't seem like an essential difference. You could imagine a Memex with a linking mechanism that goes from a piece of one document to a piece of another, which would be much more like an HTML link (and, arguably, more like a Xanadu transclusion).


So did Vannevar Bush anticipate the web by nearly half a century?

I think the fair answer is "not really", because the distributed, dynamic nature of the web is critical.

Did he anticipate the idea of an interconnected web of documents? I think the fair answer is "sorta". Again, actual web links are one-directional and non-intrusive. You can link from document A to document B without doing anything at all to document B or its associated metadata. You don't need a backlink and you generally won't have one.

This one-way form of link was not a new idea. Documents have been referencing each other forever. Bush's notion of linking is different from an HTML link, and since an HTML link is structurally the same as a reference in a footnote in a book, Bush's linking is different from that as well.

In other words, the original idea in Bush's work is more an evolutionary dead end than an innovation. A pretty interesting dead end, but a dead end just the same.


Postscript:

There's one more thing that I'd been meaning to mention but, embarrassingly enough, forgot to: search. Bush is quite right in saying that people access information by content, but in the Memex world everything eventually boils down to an index number. You access document 12345, not "any documents mentioning Memex" or whatever.

Search is probably the aspect of the web with the least precedent in mid-1900s technology. There were ways to attach index numbers to things, or even content tags, and retrieve them, with a minimum of human intervention. Bush goes into those at length. But if you wanted to get to something by what was in it, you needed a person for that, if only to add indexing information. Indeed, Memex is aimed directly at making it easier for a human to do that task, by making it easy to leave a trail of breadcrumbs a human could easily follow.
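The difference is easy to see in miniature. An inverted index, the heart of full-text search, maps words to documents, where the Memex maps index numbers to documents. A toy sketch (the documents and numbers are made up):

```python
# Toy contrast: Memex-style retrieval by index number vs. search-style
# retrieval by content. Documents and index numbers are invented.
docs = {
    12345: "notes on the memex and trails of association",
    12346: "tropical snails and their shell patterning",
}

# Memex-style: you must already know the number.
assert "memex" in docs[12345]

# Search-style: build an inverted index from words to document numbers,
# then retrieve by what the documents contain.
index = {}
for num, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(num)

print(index["memex"])   # {12345}
print(index["snails"])  # {12346}
```

Building the index is the part that needed no human judgment, and that's precisely the step mid-1900s technology couldn't automate at scale: someone had to read each document and assign its codes.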

It would be almost a half-century before documents could be easily accessed by way of what was in them.


Oh, and also ... in Bush's vision, linking documents together would be a frequent activity for anyone using a Memex. In today's web, not so much, except, I think, in the particular case of re-whatevering a piece of social media content. I think the reason for that is also search (see this early post for a take on that).

Sunday, January 5, 2025

The future still isn't what it used to be: Cyberspace

In the previous post, I said

Telecommuting and remote work exist, but they don't dominate, they only really make sense for some professions and they don't mean jacking into a Snow Crash or Neuromancer virtual world, even though one of the largest corporations in the world has rebranded itself around exactly that vision.

This very morning, I decided to add David Foster Wallace's Infinite Jest to my reading list. In the preface to the 20th anniversary edition (in 2015), Tom Bissell writes

Yes, William Gibson and Neal Stephenson may have gotten there first with Neuromancer and Snow Crash, whose Matrix and Metaverse, respectively, more accurately surmised what the internet would look and feel like.

Um, did they? Bissell goes on to say

(Wallace, among other things, failed to anticipate the break from cartridge- and disc-based entertainment)

Fair, but ...

Yes, there is a major difference between on-demand streaming and broadcast streaming, where a broadcaster puts out content according to its schedule. There is also a difference, though it seems like a smaller one, between obtaining a physical object that allows you to view something when you want to and being able to view something more or less instantly via an always-on connection (using "view" in a fairly general sense here that would include listening to audio).

Having the combination of "what you want" and "when you want it" without the friction of obtaining a physical artifact like a book, record, tape or disk does seem like something new and significant (more musings on that here), so in that sense, to the extent Wallace's world is limited to physical media, it's farther from our reality than one with data flowing freely over networks.

With one exception, though (which I'll get to) the modern web/internet that I'm familiar with has little to do with Neuromancer's matrix or Snow Crash's metaverse.


Let's start with how you get there (one small disclaimer: While I finally got around to reading Snow Crash a couple of years ago, the last time I read Neuromancer was, um, closer to when it came out, so I'm relying on fairly old memories plus secondary sources for that one; for reference, Neuromancer was published in 1984, Snow Crash nearly a decade later in 1992, Infinite Jest in 1995). 

You get to Gibson's cyberspace by jacking in, that is, connecting your central nervous system to a computer interface that delivers a completely immersive experience. To access Stephenson's metaverse, you need a terminal and goggles, either a high-quality private terminal or a free public one which provides only a grainy, black-and-white experience. In either case, the experience in Snow Crash is immersive in that you are generally not aware of the outside world, but it's not the full-sensory experience of Neuromancer.

Back in our world, of course, people generally access the web through their own computing devices, whether a phone, a tablet, a TV set, a laptop or even a desktop computer. There is no scarcity of devices. If you have access to any at all, you probably have easy access to several. You can even visit a public library and use a computer there. You do need an internet connection, but those are nearly everywhere, too. You can get on the internet in a cafe, for example, by connecting to their WiFi (as far as I can tell, actual internet cafes are nearly extinct).

In most cases, you're aware of the world around you, or at least, the internet experience doesn't take over your entire sensorium. The semi-exception is gaming, which in some cases makes an effort to be truly immersive, more or less along the lines of Snow Crash. VR headsets have been around in some form since the 80s (if not before), and they're a natural fit for applications like FPS games, so this is not exactly a surprise.

Long story short, in much of the world the internet is easy to access with readily available equipment. Going online often means using your phone or watching TV, that is, using something that's recognizably derived from a technology that existed before the internet. Immersive experiences are only a bit harder to get to, but in any case they're not the norm.

In Neuromancer, jacking in requires special equipment on both the human and computer end (though Gibson does speak elsewhere of billions of people having access). The bar is lower in Snow Crash, but it's not something that most people spend much time on. It's interesting that the 1992 version is a bit more mundane than the 1984 version, almost as though computing in the real world had become more commonplace. It's also telling, I think, that access to the virtual worlds of the novels is difficult enough to hang a plot point on, particularly in Gibson's earlier version, almost as though stories were written by writers.

OK, once you're in the virtual world, how do you get around? I'll focus more on Snow Crash here, mainly because memories are fresher. The key point about Cyberspace is that it's a space. In particular, it's a three-dimensional construct centered around a 100-meter-wide road 2^16 (65,536) kilometers long following a great circle on a virtual sphere.

If you want to meet with someone else online, you arrange to go to the same space by moving your avatars. You can move your avatar around by walking or running, or use a vehicle, or take the transit system, which has 256 express ports, with 256 local ports in between each, at one kilometer intervals. There are special spaces within the metaverse, many with restricted access.

From an immersive gaming perspective, this makes perfect sense. From the perspective of the web, it makes no sense at all. If you chase a link from here to the Wikipedia article on Snow Crash, you just go. This page goes away and you see the Wikipedia page. Or it opens in a separate tab and you can flip back and forth, or whatever. You don't do anything even metaphorically like moving from this page to that. There's no concept of distance. At worst, one or the other of the pages might load slowly, but you don't have a sense of motion while that's happening (well, I don't, at least).

In other words, the key feature of Cyberspace, that it's a space, is at best completely irrelevant to the modern web, and at worst it's actually in the way. As I recall, Gibson's matrix is similar. For example, if you encounter ICE (Intrusion Countermeasures Electronics) you see an actual wall of ice or some other material that you have to get through.

Gibson's matrix, at least, is also spatial in another way: its contents are tied to physical computers in the real world. In particular, the two AIs Wintermute and Neuromancer are physically located in Bern and Rio de Janeiro, respectively. That is, they are presumably running on hardware located in those cities. Wintermute would like to be able to join with Neuromancer, its other half (Neuromancer is less concerned about this).

Data in today's internet is much more distributed. Not everything is in the cloud in the sense that there's no single well-defined physical location for data or the processors that process it, but a lot is, and even when a service or database is single-homed in a particular place, it usually doesn't matter exactly where that is. Even if two servers are located on different continents, they can still communicate easily because of the internet.


In the end, the technology of Neuromancer and Snow Crash isn't particularly prescient. The parts that are still around, such as a data-carrying network that's accessible across the world, or immersive VR, were already under development in the 1980s. Gibson and Stephenson were drawing on cool and experimental, but real, technology as a jumping-off point for fiction. Moreover, they also copied some of the limitations of the technology of the time, particularly the need for specialized access terminals and the hosting of services on particular equipment located in particular places.

But Neuromancer and Snow Crash are not really about the technology. Snow Crash is more an exploration of anarcho-capitalism in a world where the official government has collapsed and ceded power to a collection of private entities. Neuromancer is in large part a conventional thriller, even including a physical ROM module as a MacGuffin (notwithstanding what Bissell says about breaking away from physical media).

But for my money, the computing technology and its relation -- or lack thereof -- to today's web isn't the interesting part of either book. Neuromancer is a ripping yarn set in a magical world whose magic happens to be presented narratively as a computerized virtual world. Snow Crash is a philosophical novel that uses an array of inventions, including but very much not limited to the metaverse, to frame its investigations.

In both cases, the strange but also familiar technology is telling us that the novel's world is a different world from ours. The authors, particularly Stephenson, use those differences to explore our own world. As such, there's no particular need for them to have predicted the actual world of a generation later.

The future still isn't what it used to be: Tog

In 1994, so about 30 years ago, UX designer Bruce "Tog" Tognazzini's Tog on Software Design was published with this introduction. I wrote a post about it a mere 15 years later with a take on which predictions had and hadn't panned out. Another 15 years having passed, this seems as good a time as ever to take another look.

My first post included several direct quotes, which had the advantage of showing Tognazzini's actual words, but the disadvantage of leaving out some of them. This time around, I'm going to try summarizing the main point of each paragraph, with a few direct quotes for statements that seem particularly notable. Please have a look at Tog's original page as well. Unlike many old links on this blog, it still works, and kudos for that.

Tog's main points, as I see them, in the order originally written, were:

  • Phones, fiber and computers are [in 1994] about to converge. The whole world will be wired and national boundaries will no longer matter. Governments are trying to control this, but it's not going to work.
  • In particular, the Clipper Chip is a fool's errand because people can do their own encryption on top of it. Individuals will have access to strong encryption while banks and other institutions will be forced to use weak, government-approved encryption.
  • For example, the government of Singapore banned Wired magazine for an unfavorable article, but an online version was available immediately. "Traffic on the Internet cannot be selectively stopped without stopping the Internet itself"
  • Intellectual property laws can't keep up with new forms that build on putting together bits of existing content. There will be increasing repression as corporate lawyers try to stop this.
  • But this will end as corporations find ways to monetize content by having lots of people pay a little instead of a few people paying a lot [licensing fees at the time could run into the thousands of dollars] "As the revolution continues, our society will enjoy a blossoming of creative expression the likes of which the world has never seen."
  • While everyone's attention is focused on script kiddies, corporations will sneak around "America's boardrooms and bedrooms", destroying any illusion of privacy.
  • Security is also an illusion, but "The trend will be reversed as the network is finally made safe, both for business and for individuals, but it will be accomplished by new technology, new social custom, and new approaches to law."
  • The previous computer revolution, in the 1980s, produced a completely unexpected result: self-published paper zines. However [in 1994] it's hard to get distribution. Cyberspace [sic] will fix that, and creators will no longer need publishers in order to be heard. "[R]eaders will be faced with a bewildering array of unrefereed, often inaccurate (to put it mildly), works"
  • Tablets with high-resolution, paper-white displays will put an end to physical bookstores.
  • Retail will see increasing pressure from "mail-order, as people shop comfortably and safely in the privacy of their own homes from electronic, interactive catalogs"
  • "More and more corporations are embracing telecommuting, freeing their workers from the drudgery of the morning commute"
  • Schools will come to accept "that their job is to help students learn how to research, how to organize, how to cooperate, create, and think" and textbooks "will be swept away by the tide of rough, raw, real knowledge pouring forth from the Cyberspace spigot"
  • The term "information superhighway" is obsolete, because it doesn't do justice to Cyberspace, which will be "just as sensory, just as real, just as compelling as the physical universe"
  • A new economy will arise, based on barter and anonymous currencies that no government will be able to touch [this was written over a decade before the Bitcoin paper came out].
  • Initially, there will be digital haves and have-nots, but this will improve quickly as hardware becomes cheaper. The real problem is that the internet of the 1990s was built by mostly male hackers for their own use. There needs to be an "an easier, softer way" to access it, and only then will it see widespread adoption.
  • It's crucial to supplant the obsolete operating systems of the 1990s -- UNIX, Windows and Mac -- with object-oriented technology. Even 15 years after bitmapped displays were widely available [i.e., the first Macintosh came out in 1984], computers are barely shedding their old teletype-based look. We can't afford to wait another 15 years for OO to become widespread.
  • If all this is going to work, we need coordinated long-term strategies instead of each major player doing their own thing and hoping it all works out.

Honestly, I don't think my take on this has changed greatly in the past 15 years, because I think Tog's take is just as true as it was 15 years ago, or when it was written, even. That is, some parts are true and some parts are way off base, and which parts those are hasn't changed much. And, of course, it's likely that my opinions haven't changed greatly in the past 15 years.

Instead of comparing this post to the previous one, I'd like to look at the same themes from (I hope) a somewhat different angle. Last time around, I opined that the predictions that missed were mainly the result of assuming that a new development that's on the upswing will continue that way until it replaces everything that came before. I still think that's true, but what stands out to me more this time around is the apparent motivation behind the predictions.

Tog seems mostly to be grappling with the idea that computing technology of the 90s was poised to fundamentally overhaul our social structures. It should be clear to even the occasional reader of this blog (I'm pretty sure there are at least some) that I'm on the skeptical side of this one, but what really comes through in Tog's writing is a strong desire for this to be true, and in particular ways:

National boundaries will be obsolete. Government attempts to rein in technology will fail. Publishers will be irrelevant as entirely new forms of creativity emerge. Schools will change their entire mission. We will escape our physical bonds by working and living in a Cyberspace that's only distinguishable from the real world by its being more vibrant and vivid. OO will fundamentally change the way software is developed and open up whole new possibilities. Corporations and other major players will have to learn to work together in whole new ways.

No boundaries. No gatekeepers. No government interference. No physical bounds at all. New possibilities. New forms of expression. New ways of working. If you zoom out to that level, I don't think it would be much trouble to find a similar set of predictions from the 1960s, or the 1860s, or as far back as you want to go.

Or the 2020s, for that matter.

But national boundaries are still here. Reserve currencies are still around. Banking regulations still matter, even in the crypto world. Publishers, studios and record labels are still gatekeepers. To the extent schooling has changed, technology hasn't been a primary force (and remote schooling certainly did not replace students physically going to class). Telecommuting and remote work exist, but they don't dominate; they only really make sense for some professions, and they don't mean jacking into a Snow Crash or Neuromancer virtual world, even though one of the largest corporations in the world has rebranded itself around exactly that vision.

Within this, a few points seem worth particular notice.

Tog wasn't the only one musing about new forms based on quoting existing material. Ted Nelson's Xanadu project was all about that, and by the time Tog was writing, audio sampling had found its way from 1970s hip hop into the mainstream, eventually giving rise to whole new genres.

But this was neither a new idea nor anything revolutionary (see these old posts for more detail). Quotations and allusions have been around forever. It's more a matter of how they're used. Sample-based sound fonts are widely used, for example, but the whole point of most of them is to imitate live instruments as closely and unobtrusively as possible. In practice, sampling is quite often done in support of existing forms.

On the other hand, answer songs, which have been around forever, are all about the reference to a known song. It's common for an answer song to use the original tune or quote the original lyric, but it doesn't have to. The point is the reference to an existing work, regardless of how that reference is made.

A sample of the Amen break might be a deliberate reference that the audience is meant to recognize -- even if they most likely recognize it from other samples of the break -- or it might be reshaped or reprocessed beyond all recognition, or maybe some of both.

In short, the mere act of sampling or quoting is neither necessary nor sufficient for the creation of a new form. To the extent that there's even such a thing as a truly new form, people create them because that's what creative people do. Some new forms may make use of new technology.

I think "new form" is somewhat of a red herring anyway. I can think of several examples of encountering something wildly new, only to later understand its deep and direct connections to what came before. An album that sounded like it was from another planet suddenly made a new kind of sense after I'd heard a different album from decades before. And then it turns out that the songwriter behind that one had studied poetry in college and cut their teeth in Tin Pan Alley (I'm deliberately being a bit coy about which particular albums these might be, because this is just one example and my claim here is that the particulars don't really matter).

The newness was real -- nothing quite like either album had been produced before -- but so were the connections. And a lot of the newness was newness to me. As exciting as that may be, it can't go on forever, but fortunately it doesn't have to. The connections are just as interesting.

It's easy to get excited about something new and to want the world to look like the new thing. I think this is particularly easy for technologists, since our whole gig is to try to make new and (ideally) better things.

Tog in particular played a key role in developing Apple's early UIs (the term user experience (UX) was just coming into use when Tog published Tog on Software Design). Apple products were, by and large, much easier to use than MS-DOS PCs. It's not hard to understand someone who'd helped make that happen wanting to sweep away obsolete rules and systems. Given that Windows was announced in 1985, the year after the famous 1984 Macintosh ad, it's not hard to understand the feeling that this was actually happening in real time. The ad itself does a great job of conveying the desire to change the world.

The world, for its part, has its own opinions.


Before I go, I wanted to touch on the predictions that did pan out.

The Clipper Chip did, in fact, fall into oblivion, not long after Tog was writing about it. Tog was hardly a Cassandra here, though. If anything, the Clipper Chip was a great example of how a group of people really, really wanting something to happen doesn't necessarily make it happen. The idea that you can use end-to-end encryption to get around an insecure transport layer, whether that insecurity is accidental or a deliberate back door, is old. Arguably, it's ancient, but in any case PGP, for all its flaws, had been around for a few years by 1994. Even government agencies seem to have thrown in the towel on this one in recent years.
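The layering argument here is simple enough to sketch in a few lines. Below is a toy illustration (not PGP's actual design, and the function names are made up for this example): the sender encrypts before handing the message to a transport it doesn't trust, so even a transport with a built-in back door only ever sees ciphertext. A one-time pad stands in for real encryption purely to keep the example self-contained.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte (toy one-time pad)."""
    return bytes(d ^ k for d, k in zip(data, key))

def untrusted_transport(payload: bytes) -> bytes:
    # Stand-in for a channel with a mandated back door: it can read
    # everything it carries, but all it carries here is ciphertext.
    return payload

# Sender and receiver share a random key out of band.
message = b"meet me at the usual place"
key = os.urandom(len(message))

ciphertext = xor_bytes(message, key)      # encrypt *before* the transport sees it
received = untrusted_transport(ciphertext)
plaintext = xor_bytes(received, key)      # decrypt on the far side

assert plaintext == message
assert ciphertext != message              # the transport never saw the plaintext
```

The weak link, then as now, is key distribution, not the transport -- which is exactly why mandating a compromised transport layer accomplishes so little.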

Overall, there is a pattern of yes ... but.
  • Corporations did, of course, figure out how to make money by charging a bit at a time, mostly by running ads or by charging for subscriptions ... but neither of these is a new business model (in-app purchases are an interesting case, though).
  • New case law and social conventions have developed around digital property ... but these look a lot like adaptations of existing law and conventions rather than something wholly new
  • Corporations have collected huge amounts of personal data about people, some of it, like genetic data, very personal indeed ... but it's hard to argue that "the internet has finally been made safe" from this as predicted. In fact ...
  • Security on the internet did indeed become a nightmare ... and it's still a nightmare
  • Zines morphed into blogs ... but even during the heyday of blogs, most of them went unread, and the same is true for podcasts, social media channels and so on today ("zines morphed into blogs" seems like one of those test sentences linguists use to show that we can understand a certain portion of language even if the words are totally made up)
  • Tablets did happen ... but they'd been a staple of science fiction for decades, and Apple itself had been working on the idea for a while by 1994 (the Newton came out in 1993), so this was more a matter of Tog asserting that eventually some kind of tablet would take off. Again, an assertion like that doesn't necessarily mean it will happen on a large scale, but it wasn't exactly a shot in the dark ... and, of course, bookstores are still around.
  • Online retail has had a huge impact ... but as I said the first time around, the term "mail order" is a big hint that this was more a shift in the mix of how goods are delivered (the original post snarkily mentioned WebVan, eToys and Pets.com, all of which were long gone by that time)
  • Telecommuting is a thing ... but it's also not a thing
  • "Information superhighway" stopped being a cool thing to say, if it ever was ... but (as I snarked the first time around) "cyberspace" also stopped being a cool thing to say, if it ever was
  • Cryptocurrencies happened, which seems striking since the Bitcoin paper was over a decade in the future ... but as to a "new economy [...] based on barter and anonymous currencies that no government will be able to touch" ... I've beaten this one pretty much into the ground here, so you be the judge
  • Object-oriented platforms have become mainstream ... but ... I'm not going to wade into the discussion of why software is the way it is, at least not here, but it's safe to say there are ills that the advent of OO platforms has not cured.
And then there are a few points where Tog's original post contains contradictory ideas because, I think, the underlying reality contains them as well:
  • The operating systems that Tog complained about (UNIX, Windows and Mac) are still around, but in a Ship of Theseus sort of way (see this followup post from the time -- just to muddy the waters, today's MacOS is a mashup of the original and UNIX by way of BSD and NeXTSTEP). So take your pick: Tog was wrong since they're still around, Tog was right since they've all been completely restructured over time, or some of each
  • In some sense, the internet knows no boundaries, but the Great Firewall shows no sign of going away and other regimes have found ways to severely restrict access. One way to look at it is that by default the internet knows no boundaries, but it can in practice if the local regime works to make that happen. This doesn't seem that much different from the earlier mass media, particularly TV, radio and print
  • The contrast between "often inaccurate (to put it mildly)" web publishers and "raw, real knowledge" was jarring the first time around, and it's still jarring. The actual web/internet has been a mixture of both more or less from the outset.
  • Similarly, the tension between an internet built for geeks by geeks and an internet built for the whole world has been around from early days, and it's still around. Likewise for the underlying social issues around who gets access to technology and who pays the costs. Underneath this, particularly now that so many people are online, is the question of how much technology reflects society and how much it shapes society.

As I said above, I don't think my take on all this has changed much. I think I've mellowed on how I feel about the missed predictions, from "this is just horribly wrongheaded" to more like "this is a particularly clear example of something we all do", but what I think hasn't changed is the feeling that, however much I may disagree with many of the points, Tog is worth engaging with, by virtue of putting forth a strong and clear vision of the world, backed up by examples.