Tuesday, January 7, 2025

The future still isn't what it used to be: This whole "web" thing

It's worth keeping in mind that the building blocks of today's web, particularly HTTP and HTML, were developed by academic researchers. One thing that academic researchers have to do, a lot, is follow references, because most academic work builds on, critiques and otherwise refers to existing work.

Let's take a moment to appreciate what that meant before the web came along. Suppose you're reading a reference work and you run across a reference like this:

4. Ibid, 28-3

That is, you've just run across a passage like this totally made-up example:

It is also known that the shells of tropical snails vary widely in their patterning⁴.

That little raised 4 tells you to skip to the bottom of the page and find footnote 4, which says "Ibid, 28-3", which means "look in the same place as the previous footnote, on pages 28 through 33". So you scan up through the footnotes and find

3. Ibid, 17

OK, fine ... keep going

2. McBiologistface, Notes on Tropical Snails, 12-4

OK, this is something previously referenced, in particular something written by McBiologistface (likely the eminent Biologist McBiologistface, but perhaps the lesser-known relative General Scientist McBiologistface). Keep going ...

1. McBiologistface, Something Else About Tropical Snails, 254

OK, looks like this person wrote at least two books on tropical snails. The one we're looking for must be referenced in a previous chapter. Ah, here it is:

7. McBiologistface, Biologist, Notes on Tropical Snails (Hoople: University of Southern North Dakota at Hoople Press, 1945), 32-5

Great. Now we know which McBiologistface it was, and which edition of which book published by which publisher. Now all we have to do is track down a copy of that book, and open it to ... let's see, what was that original reference? ... oh yes, page 28.

To be fair, "McBiologistface, Notes on Tropical Snails" from reference 2 is probably enough to find the book in the card catalog at the library, and if a reference is "Ibid", you may already have the book and have it open from following a previous reference to it. It's also quite possible that your department or office has copies of many of the books and journals that are likely to be referenced.

Nonetheless, thinking of the tasks I mentioned when describing the Olden Days -- navigating an unfamiliar place, communicating by phone, streaming entertainment and searching up information -- simply following a reference from one book or article to another could be more work than any of them.

Even answering a question like "where was the touch-tone™ phone invented" would have been easier, assuming you didn't already have a copy of Notes on Tropical Snails on hand: go to the library, walk right to the easily-located reference section that you've already been to, pull out the 'T' volume of one of the encyclopedias, flip to Telephone and chances are your answer is right there (or you could just ask someone who would know).

To find the reference on snails, you'll have to look up the book in the card catalog, note down its location in the stacks, go there and scan through the books on those shelves until you find the book itself (and then open it and flip to the right page, but you already know that from the reference). This is all assuming there's a copy of the book on the shelves that no one's checked out (who knows, maybe there's been a sudden interest in tropical snails in your town). Otherwise, you could call around to the local bookstores, or your colleagues and friends, to see if anyone has a copy. If not, your favorite bookstore could special-order a copy from the publisher, and with luck it would be there in a few days.

Chasing a link in an HTML document is more or less instant. You can probably see the appeal.

My point here is that the interlinked nature of the web, that ability to click on a link, immediately see what's on the other end and easily get back to where you were, was an absolute game-changer for the sort of people who created the early web. Your own mileage may vary.


To make this work, you need a few key pieces:

  • A way of referencing data that's available on the network (URLs)
  • A way of embedding URLs in a body of text, similar to the way footnotes are embedded in ordinary text (HTML)
  • Ideally, a standard way of accessing something referenced by a URL (HTTP)
I say "ideally", because it was already possible to access data on the web using protocols like FTP and Gopher, and you could reference those with a URL. Nonetheless, having a integrated suite of {URL, HTML, HTTP} working together fairly seamlessly meant that http:// URLs (or later, https://) quickly came to dominate.

You also need one more thing, namely that there should actually be something on the other end of the link (it's OK if links are sometimes dangling or become broken, but that should be the fairly rare exception). By the time the web standards were developed, there was already enough interesting data and text on the internet to make links useful. To some extent, the early web was just an easier way to get at this kind of information. If you had the pieces, you could easily pull together an HTML page with a collection of links to useful stuff on your server, stuff like interesting files you could fetch via FTP, with a little bit of text to explain what was going on, and anyone else could use that.


The truly webby part of the web, the network of links between documents, is still around, of course, but as far as I can tell it's not a particularly important part of most people's web experience. Links are more a way of getting to content -- follow a link from a search result, or follow a reference from an AI-generated summary to see whether the AI knows what it's talking about -- than a way of moving between pieces of content. Some articles include carefully selected links to other material, but a lot don't. Personally, I've mostly stopped including them, because it's time-consuming, though these recent Field Notes posts have a lot more linkage than usual.

One sort of link that I do follow quite a bit is the "related article" link in a magazine or news source -- articles by the same author or on the same topic, or just to stuff that the server thinks you might find interesting, or that the publisher is trying to promote. But again, this seems more like navigating to something. The articles themselves largely stand alone, and I generally finish one article before moving on to the next. A truly webby link, like a footnote before it, links from some specific piece of text to something that's directly related to it.

And, of course, I do click on ad links, though usually by mistake, since you just can't get away from them.


Realizing this, I think, is a big reason that this blog went mostly quiet for a couple of years. If the webby part of the web is really only of interest to a few people, except in a few special cases like sharing social media content and browsing Wikipedia, why write field notes about it, especially if the blog writer doesn't find social media particularly appealing?

Conversely, this latest spate of posts is largely the result of relaxing a bit about what the "web" is and talking about ... dunno, maybe "the online experience" in general? Or just "internet-related stuff that doesn't really seem to fit on the other blog?"

Whatever you call it, I seem to be enjoying writing about it again. 

Monday, January 6, 2025

The future still isn't what it used to be: Vannevar Bush

(According to Blogger, this is the 700th post on this blog, which seems like a completely arbitrary milestone to note, but I noticed it nonetheless, so now you get to. You're welcome.)

Vannevar Bush casts something of a long shadow. He held several high-level technology-related posts in the FDR and Truman administrations, had a long and distinguished academic career at MIT and elsewhere, and won several prestigious awards, including the National Medal of Science. His students included Claude Shannon, whose work in information theory is still directly relevant, and Frederick Terman, who was influential in the development of what we now call Silicon Valley (I used to work fairly near Terman Drive in Palo Alto).

Bush is also often credited with anticipating the World-Wide Web in his Atlantic Monthly article As We May Think. Since I've been comparing early visions of the Web with what actually happened, I thought I'd take a look. I've linked to the ACM version rather than the Atlantic's version, which may or may not even be online, since the ACM version highlights the relevant passages. Though there's a Wikipedia page on the piece, I've deliberately skipped it in favor of Bush's original text (with the ACM's highlights).

Two things jump out immediately, neither directly relevant to the web:

  • The language is relentlessly gendered. Men do science. Girls [sic] sit in front of keyboards typing in data for men of science to use in their work. A mathematician is a particular kind of man, technology has improved man's life, and so forth. Yes, this is 1945, and we expect a certain amount of this, but from what I can tell Bush's style stands out even for the time. I mention this mainly as a heads-up for anyone who wants to go back and read the original piece -- which I do nonetheless recommend.
  • There is an awful lot of technical detail about technologies that would be obsolete within a couple of decades, and in several cases nearly fossilized by the dawn of the Internet in the 1970s. Bush speculates in detail about microphotography, facsimile machines, punch cards, analog computers, vacuum tubes, photocells and on and on for pages. Yes, all of these still existed in the 1970s (I spent many an hour browsing old newspapers and magazines on microfilm as a kid), but digital technology would make most if not all of them irrelevant before much longer. As far as predicting the technology underpinning the web goes, Bush's record is nearly perfect: If he speculated about it, it almost certainly isn't relevant to today's web.
Two thoughts on this. First, it's almost impossible to speculate about the future without mentioning at least something that will be hopelessly out of date by the time that future arrives. All we have to work with are the tools and mental models of our own time. I don't fault Bush for thinking about the future in terms of photographic storage, and I don't think this takes anything away from his thoughts on the "Memex", which is what people are referring to when they talk about Bush anticipating the web.

I just wish he hadn't done nearly so much of it. Alan Turing's Computing Machinery and Intelligence spends two sentences on the idea of using a teleprinter so that it's not obvious whether there's a human or machine on the other end of the conversation, and one of those sentences just says that this is only one possible approach. That seems about right for that paper. In Bush's case, I could see a few paragraphs about how to store large amounts of information (for those days, at least) on film or magnetic media, and so forth. The article would have been much shorter, but no less interesting.

Second, it's worth noting how many things were possible with mid-1900s technology. You could convert, both ways, between sound, image and video (in the sense of moving images) on the one hand and electrical signals on the other. You could store electrical signals magnetically. You could communicate them over a distance. You could store digital information in a variety of forms, including the famous punched cards, but also magnetically.

There were ways to produce synthesized speech and read printed text. Selecting machines could do boolean queries on data (Bush gives the example of "all employees who live in Trenton and know Spanish"). Telephone switching networks could connect any of millions of phones to any other in about the time it took to dial (and less time than it sometimes takes my phone to set up a call using my WiFi). Logic gates existed. For that matter, the first general-purpose electronic digital computer, the ENIAC, existed in 1945, and Bush would certainly have known about its development.

In other words, even in 1945, Bush isn't drawing on a blank canvas. He's trying to pull existing pieces of technology together in a new way in order to deal with what was, even at the time, an overwhelming surplus of information. The gist of the argument is "If we make these existing technologies smaller, faster and cheaper, and put them together in this particular way, we can make it easier to deal with all this information."


The particular problem Bush is really interested in isn't so much storing information as retrieving it ("selecting" as Bush says). This is totally understandable for a national science adviser who had until recently been working on one of the largest technological efforts to date (the Manhattan Project). Bush cites Gregor Mendel's work having been essentially unknown until decades after the fact as just one example of a significant advance nearly being lost because no one knew about it, even though it was there to be found. Bush's desire to prevent this sort of thing in the future is palpable.

Bush mentions traditional indexing systems that can find items by successively narrowing down the search space (everything starting with 'F', everything within that with second letter 'i' ... ah, here it is, Field Notes on the Web), but he's much more interested in following a trail of connections from one document to another. That is, he's envisioning a vast collection of documents traversable by following links between them. That's the world-wide web. Ok, we're done.


Except ...

Bush sees the Memex as literally a piece of furniture, looking pretty much like a desk but with a keyboard attached along with various projection screens and a few other attachments. Inside it is a store of microfilmed documents together with some writable film, which takes up a small portion of the space under the desk, and a whole bunch of machinery to be named later, taking up most of the space.

Associated with each document is a writable area containing some number of code spaces, each of which can hold the index code of a document. There's also a top-level code book to get you started, and when you add a new document, you add it to the code book. To be honest, this seems a bit tedious.

To link two documents together, you pull them both up, one on one projection screen and the other on the other, and press a button. This writes the index code for each document in the other's next open code space. The next time you pull up either of the documents, you can select a code space and pull up the document with that code.

Codes are meant to have two parts: a human-readable text code and a "positional" numeric code (probably binary or maybe decimal). Linking this post to Bush's article might add "Bush-as-we-may-think" to a code space for this post, along with (somewhere offscreen) the numeric index for Bush's article, and "Field-notes-future-ramblings-Bush" to a code space on Bush's article (along with the numeric code for this post). At that point you've got one link in a presumably much larger web. Actually, you have two links, or one bidirectional link if you prefer. Not quite Xanadu's transclusion, but arguably closer than what we actually have.
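
To make the contrast with an HTML link concrete, here's a toy sketch of that button press (entirely my own construction; Bush describes hardware, not code, and the index codes here are made up):

    # Toy model of Memex-style linking: each document has "code spaces", and
    # linking writes each document's index code into the other's next open
    # space, so every link is really a pair, one in each direction.
    class Document:
        def __init__(self, index_code, title):
            self.index_code = index_code   # the "positional" numeric code
            self.title = title             # the human-readable text code
            self.code_spaces = []          # codes of documents linked to this one

    def link(a, b):
        # The "press a button" step.
        a.code_spaces.append(b.index_code)
        b.code_spaces.append(a.index_code)

    post = Document(12345, "Field-notes-future-ramblings-Bush")
    article = Document(67890, "Bush-as-we-may-think")
    link(post, article)

    # Both documents now point at each other -- unlike an HTML href, which
    # lives only in the source document and leaves the destination untouched.
    print(post.code_spaces, article.code_spaces)   # [67890] [12345]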

Pretty webby, except ... coupla things ...

For one thing, this is all happening on my Memex. My copy of this post is linked with my copy of Bush's article. Yours remains untouched. If there's a way of copying either content or links from one Memex to another, I didn't catch it. Bush's description of how document linking works is hand-wavy enough that it wouldn't be particularly more hand-wavy to talk about a syncing mechanism (and/or an update mechanism), but I doubt Bush was thinking in that direction.

Bush seems to be thinking more about a memory aid for an individual person (or possibly a household or small office/laboratory). Functionally, it's a personal library with much larger capacity and the ability to leave trails among documents. It's certainly an interesting idea, but it misses the "world-wide" part. When I link to the ACM's version of Bush's paper, the link is from my blog to the ACM's site. If you write something and link it to Bush's paper, we're pointing at the same thing, not separate copies of it, and we're pointing to a thing that might be stored anywhere in the world (and someplace else next time we access it).

In the same post I mentioned above, I talk about a couple of features that make the web the web, particularly that a link can be dangling -- pointing to nothing -- and it can become broken -- you pointed at a page, but that page is no longer there (early posts on this blog are full of these, though at the time it wasn't clear whether rotting links would be an issue as storage got cheaper; it is). There's also some ambiguity as to what exactly a link is pointing to. If I point to the front page of a news site, for example, the contents on the other end of that link will probably be different tomorrow. In other cases, it's worth going to some effort to ensure the contents don't change significantly.

These may seem like bugs at first glance, but for the most part, they're features, because the flexibility they provide allows the web to be decoupled. I can do what I like with my site without caring or even knowing what links to it. Since a Memex is a closed system, none of this really applies. On the one hand, that's not a problem; on the other hand, it's only not a problem because a Memex isn't a distributed system, which the web as we know it very much is.

Finally, the mechanism of linking is noticeably different from what HTML does. You have a pair of links between documents (or maybe pages of documents?). An HTML link goes from a particular piece of the source document to, in general, a particular anchor in the destination document. To be fair, this doesn't seem like an essential difference. You could imagine a Memex with a linking mechanism that goes from a piece of one document to a piece of another, which would be much more like an HTML link (and, arguably, more like a Xanadu transclusion).


So did Vannevar Bush anticipate the web by nearly half a century?

I think the fair answer is "not really", because the distributed, dynamic nature of the web is critical.

Did he anticipate the idea of an interconnected web of documents? I think the fair answer is "sorta". Again, actual web links are one-directional and non-intrusive. You can link from document A to document B without doing anything at all to document B or its associated metadata. You don't need a backlink and you generally won't have one.

This one-way form of link was not a new idea. Documents have been referencing each other forever. Bush's notion of linking is different from an HTML link, and since an HTML link is structurally the same as a reference in a footnote in a book, it's different from that as well.

In other words, the original idea in Bush's work is more an evolutionary dead end than an innovation. A pretty interesting dead end, but a dead end just the same.


Postscript:

There's one more thing that I'd been meaning to mention but, embarrassingly enough, forgot to: search. Bush is quite right in saying that people access information by content, but in the Memex world everything eventually boils down to an index number. You access document 12345, not "any documents mentioning Memex" or whatever.

Search is probably the aspect of the web with the least precedent in mid-1900s technology. There were ways to attach index numbers to things, or even content tags, and retrieve them, with a minimum of human intervention. Bush goes into those at length. But if you wanted to get to something by what was in it, you needed a person for that, if only to add indexing information. Indeed, Memex is aimed directly at making it easier for a human to do that task, by making it easy to leave a trail of breadcrumbs a human could easily follow.

It would be almost a half-century before documents could be easily accessed by way of what was in them.


Oh, and also ... in Bush's vision, linking documents together would be a frequent activity for anyone using a Memex. In today's web, not so much, except, I think, in the particular case of re-whatevering a piece of social media content. I think the reason for that is also search (see this early post for a take on that).

Sunday, January 5, 2025

The future still isn't what it used to be: Cyberspace

 In the previous post, I said 

Telecommuting and remote work exist, but they don't dominate, they only really make sense for some professions and they don't mean jacking into a Snow Crash or Neuromancer virtual world, even though one of the largest corporations in the world has rebranded itself around exactly that vision.

This very morning, I decided to add David Foster Wallace's Infinite Jest to my reading list. In the preface to the 20th anniversary edition (in 2016), Tom Bissell writes

Yes, William Gibson and Neal Stephenson may have gotten there first with Neuromancer and Snow Crash, whose Matrix and Metaverse, respectively, more accurately surmised what the internet would look and feel like.

Um, did they? Bissell goes on to say

(Wallace, among other things, failed to anticipate the break from cartridge- and disc-based entertainment)

Fair, but ...

Yes, there is a major difference between on-demand streaming and broadcast streaming, where a broadcaster puts out content according to its schedule. There is also a difference, though it seems like a smaller one, between obtaining a physical object that allows you to view something when you want to and being able to view something more or less instantly via an always-on connection (using "view" in a fairly general sense here that would include listening to audio).

Having the combination of "what you want" and "when you want it" without the friction of obtaining a physical artifact like a book, record, tape or disk does seem like something new and significant (more musings on that here), so in that sense, to the extent Wallace's world is limited to physical media, it's farther from our reality than one with data flowing freely over networks.

With one exception, though (which I'll get to), the modern web/internet that I'm familiar with has little to do with Neuromancer's matrix or Snow Crash's metaverse.


Let's start with how you get there (one small disclaimer: While I finally got around to reading Snow Crash a couple of years ago, the last time I read Neuromancer was, um, closer to when it came out, so I'm relying on fairly old memories plus secondary sources for that one; for reference, Neuromancer was published in 1984, Snow Crash nearly a decade later in 1992, Infinite Jest in 1996). 

You get to Gibson's cyberspace by jacking in, that is, connecting your central nervous system to a computer interface that delivers a completely immersive experience. To access Stephenson's metaverse, you need a terminal and goggles, either a high-quality private terminal or a free public one which provides only a grainy, black-and-white experience. In either case, the experience in Snow Crash is immersive in that you are generally not aware of the outside world, but it's not the full-sensory experience of Neuromancer.

Back in our world, of course, people generally access the web through their own computing devices, whether a phone, a tablet, a TV set, a laptop or even a desktop computer. There is no scarcity of devices. If you have access to any at all, you probably have easy access to several. You can even visit a public library and use a computer there. You do need an internet connection, but those are nearly everywhere, too. You can get on the internet in a cafe, for example, by connecting to their WiFi (as far as I can tell, actual internet cafes are nearly extinct).

In most cases, you're aware of the world around you, or at least, the internet experience doesn't take over your entire sensorium. The semi-exception is gaming, which in some cases makes an effort to be truly immersive, more or less along the lines of Snow Crash. VR headsets have been around in some form since the 80s (if not before), and they're a natural fit for applications like FPS games, so this is not exactly a surprise.

Long story short, in much of the world the internet is easy to access with readily available equipment. Going online often means using your phone or watching TV, that is, using something that's recognizably derived from a technology that existed before the internet. Immersive experiences are only a bit harder to get to, but in any case they're not the norm.

In Neuromancer, jacking in requires special equipment on both the human and computer end (though Gibson does speak elsewhere of billions of people having access). The bar is lower in Snow Crash, but it's not something that most people spend much time on. It's interesting that the 1992 version is a bit more mundane than the 1984 version, almost as though computing in the real world had become more commonplace. It's also telling, I think, that access to the virtual worlds of the novels is difficult enough to hang a plot point on, particularly in Gibson's earlier version, almost as though stories were written by writers.

OK, once you're in the virtual world, how do you get around? I'll focus more on Snow Crash here, mainly because memories are fresher. The key point about Cyberspace is that it's a space. In particular, it's a three-dimensional construct centered around a 100-meter-wide road 2¹⁶ (65,536) kilometers long following a great circle on a virtual sphere.

If you want to meet with someone else online, you arrange to go to the same space by moving your avatars. You can move your avatar around by walking or running, or use a vehicle, or take the transit system, which has 256 express ports, with 256 local ports in between each, at one kilometer intervals. There are special spaces within the metaverse, many with restricted access.
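
For what it's worth, the numbers hang together; a quick back-of-the-envelope check (nothing here but the arithmetic from the description above):

    street_km = 2 ** 16                       # 65,536 km around the virtual sphere
    express_ports = 256
    spacing_km = street_km // express_ports   # 256 km between express ports
    # local ports fill each gap at one-kilometer intervals
    print(street_km, spacing_km)              # 65536 256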

From an immersive gaming perspective, this makes perfect sense. From the perspective of the web, it makes no sense at all. If you chase a link from here to the Wikipedia article on Snow Crash, you just go. This page goes away and you see the Wikipedia page. Or it opens in a separate tab and you can flip back and forth, or whatever. You don't do anything even metaphorically like moving from this page to that. There's no concept of distance. At worst, one or the other of the pages might load slowly, but you don't have a sense of motion while that's happening (well, I don't, at least).

In other words, the key feature of Cyberspace, that it's a space, is at best completely irrelevant to the modern web, and at worst it's actually in the way. As I recall, Gibson's matrix is similar. For example, if you encounter ICE (Intrusion Countermeasures Electronics) you see an actual wall of ice or some other material that you have to get through.

Gibson's matrix, at least, is also spatial in another way: its contents are tied to physical computers in the real world. In particular, the two AIs Wintermute and Neuromancer are physically located in Bern and Rio de Janeiro, respectively. That is, they are presumably running on hardware located in those cities. Wintermute would like to be able to join with Neuromancer, its other half (Neuromancer is less concerned about this).

Data in today's internet is much more distributed. Not everything is in the cloud in the sense that there's no single well-defined physical location for data or the processors that process it, but a lot is, and even when a service or database is single-homed in a particular place, it usually doesn't matter exactly where that is. Even if two servers are located on different continents, they can still communicate easily because of the internet.


In the end, the technology of Neuromancer and Snow Crash isn't particularly prescient. The parts that are still around, such as a data-carrying network that's accessible across the world, or immersive VR, were already under development in the 1980s. Gibson and Stephenson were drawing on cool and experimental, but real, technology as a jumping-off point for fiction. Moreover, they also copied some of the limitations of the technology of the time, particularly the need for specialized access terminals and for services to be hosted on particular equipment located in particular places.

But then, Neuromancer and Snow Crash are not really about the technology. Snow Crash is more an exploration of Anarcho-Capitalism in a world where the official government has collapsed and ceded power to a collection of private entities. Neuromancer is in large part a conventional thriller, even including a physical ROM module as a MacGuffin (notwithstanding what Bissell says about breaking away from physical media).

But for my money, the computing technology and its relation -- or lack thereof -- to today's web isn't the interesting part of either book. Neuromancer is a ripping yarn set in a magical world whose magic happens to be presented narratively as a computerized virtual world. Snow Crash is a philosophical novel that uses an array of inventions, including but very much not limited to the metaverse, to frame its investigations. 

In both cases, the strange but also familiar technology is telling us that the novel's world is a different world from ours. The authors, particularly Stephenson, use those differences to explore our own world. As such, there's no particular need for them to have predicted the actual world of a generation later.

The future still isn't what it used to be: Tog

In 1994, so about 30 years ago, UX designer Bruce "Tog" Tognazzini published Tog on Software Design, with this introduction. I wrote a post about it a mere 15 years later with a take on which predictions had and hadn't panned out. Another 15 years having passed, this seems as good a time as any to take another look.

My first post included several direct quotes, which had the advantage of showing Tognazzini's actual words, but the disadvantage of leaving out some of them. This time around, I'm going to try summarizing the main point of each paragraph, with a few direct quotes for statements that seem particularly notable. Please have a look at Tog's original page, as well. Unlike many old links on this blog, it still works, and kudos for that.

Tog's main points, as I see them, in the order originally written, were:

  • Phones, fiber and computers are [in 1994] about to converge. The whole world will be wired and national boundaries will no longer matter. Governments are trying to control this, but it's not going to work.
  • In particular, the Clipper Chip is a fool's errand because people can do their own encryption on top of it. Individuals will have access to strong encryption while banks and other institutions will be forced to use weak, government-approved encryption.
  • For example, the government of Singapore banned Wired magazine for an unfavorable article, but an online version was available immediately. "Traffic on the Internet cannot be selectively stopped without stopping the Internet itself"
  • Intellectual property laws can't keep up with new forms that build on putting together bits of existing content. There will be increasing repression as corporate lawyers try to stop this.
  • But this will end as corporations find ways to monetize content by having lots of people pay a little instead of a few people paying a lot [licensing fees at the time could run into the thousands of dollars] "As the revolution continues, our society will enjoy a blossoming of creative expression the likes of which the world has never seen."
  • While everyone's attention is focused on script kiddies, corporations will sneak around "America's boardrooms and bedrooms", destroying any illusion of privacy.
  • Security is also an illusion, but "The trend will be reversed as the network is finally made safe, both for business and for individuals, but it will be accomplished by new technology, new social custom, and new approaches to law."
  • The previous computer revolution, in the 1980s, had a completely unexpected result: self-published paper zines. However [in 1994] it's hard to get distribution. Cyberspace [sic] will fix that, and creators will no longer need publishers in order to be heard. "[R]eaders will be faced with a bewildering array of unrefereed, often inaccurate (to put it mildly), works"
  • Tablets with high-resolution, paper-white displays will put an end to physical bookstores.
  • Retail will see increasing pressure from "mail-order, as people shop comfortably and safely in the privacy of their own homes from electronic, interactive catalogs"
  • "More and more corporations are embracing telecommuting, freeing their workers from the drudgery of the morning commute"
  • Schools will come to accept "that their job is to help students learn how to research, how to organize, how to cooperate, create, and think" and textbooks "will be swept away by the tide of rough, raw, real knowledge pouring forth from the Cyberspace spigot"
  • The term "information superhighway" is obsolete, because it doesn't do justice to Cyberspace, which will be "just as sensory, just as real, just as compelling as the physical universe"
  • A new economy will arise, based on barter and anonymous currencies that no government will be able to touch [this was written over a decade before the Bitcoin paper came out].
  • Initially, there will be digital haves and have-nots, but this will improve quickly as hardware becomes cheaper. The real problem is that the internet of the 1990s was built by mostly male hackers for their own use. There needs to be an "an easier, softer way" to access it, and only then will it see widespread adoption.
  • It's crucial to supplant the obsolete operating systems of the 1990s -- UNIX, Windows and Mac -- with object-oriented technology. Even 15 years after bitmapped displays were widely available [i.e., the first Macintosh came out in 1984], computers are barely shedding their old teletype-based look. We can't afford to wait another 15 years for OO to become widespread.
  • If all this is going to work, we need coordinated long-term strategies instead of each major player doing their own thing and hoping it all works out.

Honestly, I don't think my take on this has changed greatly in the past 15 years, because I think Tog's take is just as true as it was 15 years ago, or when it was written, even. That is, some parts are true and some parts are way off base, and which parts those are hasn't changed much. And, of course, it's likely that my opinions haven't changed greatly in the past 15 years.

Instead of comparing this post to the previous one, I'd like to look at the same themes from (I hope) a somewhat different angle. Last time around, I opined that the predictions that missed were mainly the result of assuming that a new development that's on the upswing will continue that way until it replaces everything that came before. I still think that's true, but what stands out to me more this time around is the apparent motivation behind the predictions.

Tog seems mostly to be grappling with the idea that computing technology of the 90s was poised to fundamentally overhaul our social structures. It should be clear to even the occasional reader of this blog (I'm pretty sure there are at least some) that I'm on the skeptical side of this one, but what really comes through in Tog's writing is a strong desire for this to be true, and in particular ways:

National boundaries will be obsolete. Government attempts to rein in technology will fail. Publishers will be irrelevant as entirely new forms of creativity emerge. Schools will change their entire mission. We will escape our physical bonds by working and living in a Cyberspace that's only distinguishable from the real world by its being more vibrant and vivid. OO will fundamentally change the way software is developed and open up whole new possibilities. Corporations and other major players will have to learn to work together in whole new ways.

No boundaries. No gatekeepers. No government interference. No physical bounds at all. New possibilities. New forms of expression. New ways of working. If you zoom out to that level, I don't think it would be much trouble to find a similar set of predictions from the 1960s, or the 1860s, or as far back as you want to go.

Or the 2020s, for that matter.

But national boundaries are still here. Reserve currencies are still around. Banking regulations still matter, even in the crypto world. Publishers, studios and record labels are still gatekeepers. To the extent schooling has changed, technology hasn't been a primary force (and remote schooling certainly did not replace students physically going to class). Telecommuting and remote work exist, but they don't dominate, they only really make sense for some professions and they don't mean jacking into a Snow Crash or Neuromancer virtual world, even though one of the largest corporations in the world has rebranded itself around exactly that vision.

Within this, a few particulars seem worth special notice.

Tog wasn't the only one musing about new forms based on quoting existing material. Ted Nelson's Xanadu project was all about that, and by the time Tog was writing, audio sampling had found its way from 1970s hip hop into the mainstream, eventually giving rise to whole new genres.

But this was neither a new idea nor anything revolutionary (see these old posts for more detail). Quotations and allusions have been around forever. It's more a matter of how they're used. Sample-based sound fonts are widely used, for example, but the whole point of most of them is to imitate live instruments as closely and unobtrusively as possible. In practice, sampling is quite often done in support of existing forms.

On the other hand, answer songs, which have been around forever, are all about the reference to a known song. It's common for an answer song to use the original tune or quote the original lyric, but it doesn't have to. The point is the reference to an existing work, regardless of how that reference is made.

A sample of the Amen break might be a deliberate reference that the audience is meant to recognize -- even if they most likely recognize it from other samples of the break -- or it might be reshaped or reprocessed beyond all recognition, or maybe some of both.

In short, the mere act of sampling or quoting is neither necessary nor sufficient for the creation of a new form. To the extent that there are even such things as truly new forms, people create them because that's what creative people do. Some new forms may make use of new technology.

I think "new form" is somewhat of a red herring anyway. I can think of several examples of encountering something wildly new, only to later understand its deep and direct connections to what came before. An album that sounded like it was from another planet suddenly made a new kind of sense after I'd heard a different album from decades before. And then it turns out that the songwriter behind that one had studied poetry in college and cut their teeth in Tin Pan Alley (I'm deliberately being a bit coy about which particular albums these might be, because this is just one example and my claim here is that the particulars don't really matter).

The newness was real -- nothing quite like either album had been produced before -- but so were the connections. And a lot of the newness was newness to me. As exciting as that may be, it can't go on forever, but fortunately it doesn't have to. The connections are just as interesting.

It's easy to get excited about something new and to want the world to look like the new thing. I think this is particularly easy for technologists, since our whole gig is to try to make new and (ideally) better things.

Tog in particular played a key role in developing Apple's early UIs (the term user experience (UX) was just coming into use when Tog published Tog on Software Design). Apple products were, by and large, much easier to use than MS-DOS PCs. It's not hard to understand someone who'd helped make that happen wanting to sweep away obsolete rules and systems. Given that Windows was released in 1985, the year after the famous 1984 Macintosh ad, it's not hard to understand the feeling that this was actually happening in real time. The ad itself does a great job of conveying the desire to change the world.

The world, for its part, has its own opinions.


Before I go, I wanted to touch on the predictions that did pan out.

The Clipper Chip did, in fact, fall into oblivion, not long after Tog was writing about it. Tog was hardly a Cassandra here, though. If anything, the Clipper Chip was a great example of how a group of people really, really wanting something to happen doesn't necessarily make it happen. The idea that you can use end-to-end encryption to get around an insecure transport layer, whether that insecurity is accidental or a deliberate back door, is old. Arguably, it's ancient, but in any case PGP, for all its flaws, had been around for a few years by 1994. Even government agencies seem to have thrown in the towel on this one in recent years.

Overall, there is a pattern of yes ... but.
  • Corporations did, of course, figure out how to make money by charging a bit at a time, mostly by running ads or by charging for subscriptions ... but neither of these is a new business model (in-app purchases are an interesting case, though).
  • New case law and social conventions have developed around digital property ... but these look a lot like adaptations of existing law and conventions rather than something wholly new
  • Corporations have collected huge amounts of personal data about people, some of it, like genetic data, very personal indeed ... but it's hard to argue that "the internet has finally been made safe" from this as predicted. In fact ...
  • Security on the internet did indeed become a nightmare ... and it's still a nightmare
  • Zines morphed into blogs ... but even during the heyday of blogs, most of them went unread, and the same is true for podcasts, social media channels and so on today ("zines morphed into blogs" seems like one of those test sentences linguists use to show that we can understand a certain portion of language even if the words are totally made up)
  • Tablets did happen ... but they'd been a staple of science fiction for decades, and Apple itself had been working on the idea for a while by 1994 (the Newton came out in 1993), so this was more a matter of Tog asserting that eventually some kind of tablet would take off. Again, an assertion like that doesn't necessarily mean it will happen on a large scale, but it wasn't exactly a shot in the dark ... and, of course, bookstores are still around.
  • Online retail has had a huge impact ... but as I said the first time around, the term "mail order" is a big hint that this was more a shift in the mix of how goods are delivered (the original post snarkily mentioned WebVan, eToys and Pets.com, all of which were long gone by that time)
  • Telecommuting is a thing ... but it's also not a thing
  • "Information superhighway" stopped being a cool thing to say, if it ever was ... but (as I snarked the first time around) "cyberspace" also stopped being a cool thing to say, if it ever was
  • Cryptocurrencies happened, which seems striking since the Bitcoin paper was over a decade in the future ... but as to a "new economy [...] based on barter and anonymous currencies that no government will be able to touch" ... I've beaten this one pretty much into the ground here, so you be the judge
  • Object-oriented platforms have become mainstream ... but ... I'm not going to wade into the discussion of why software is the way it is, at least not here, but it's safe to say there are ills that the advent of OO platforms has not cured.
And then there are a few points where Tog's original post contains contradictory ideas because, I think, the underlying reality contains them as well:
  • The operating systems that Tog complained about (UNIX, Windows and Mac) are still around, but in a Ship of Theseus sort of way (see this followup post from the time -- just to muddy the waters, today's MacOS is a mashup of the original and UNIX by way of BSD and NeXTSTEP). So take your pick: Tog was wrong since they're still around, Tog was right since they've all been completely restructured over time, or some of each
  • In some sense, the internet knows no boundaries, but the Great Firewall shows no sign of going away and other regimes have found ways to severely restrict access. One way to look at it is that by default the internet knows no boundaries, but it can in practice if the local regime works to make that happen. This doesn't seem that much different from the earlier mass media, particularly TV, radio and print
  • The contrast between "often inaccurate (to put it mildly)" web publishers and "raw real knowledge" was jarring the first time around, and it's still jarring. The actual web/internet has been a mixture of both more or less from the outset.
  • Similarly, the tension between an internet built for geeks by geeks and an internet built for the whole world has been around from early days, and it's still around. Likewise for the underlying social issues around who gets access to technology and who pays the costs. Underneath this, particularly now that so many people are online, is the question of how much technology reflects society and how much it shapes society.

As I said above, I don't think my take on all this has changed much. I think I've mellowed on how I feel about the missed predictions, from "this is just horribly wrongheaded" to more like "this is a particularly clear example of something we all do", but what I think hasn't changed is the feeling that, however much I may disagree with many of the points, Tog is worth engaging with, by virtue of putting forth a strong and clear vision of the world, backed up by examples.

Sunday, December 29, 2024

Tell us about the olden days

The previous post talked in generalities about how the web and internet may or may not have changed how we communicate and live. To go along with that, I thought it might be interesting to consider some specific examples. Since these are drawn from personal experience, this post will show my age more than most do, but so be it. If you find it amusing to append "old man" to the questions here, well, I don't suppose I can stop you.

I'm going to answer these on the assumption that you have no memory of anything before, say, the 1990s, so please bear with me if some of this is already obvious. I'm honestly not sure how much of this will be "wow, I didn't know that" to a typical reader and how much will be "well, yeah, no kidding." Also, though I'll generally write in past tense, many of the things I'll mention are still true. I'll call that out here and there, but not necessarily everywhere, so if you find yourself thinking "but ... they still have those", you're probably right.

If nothing else, this post will probably serve as a reminder that as much as I grumble about kids' technology these days, a lot of this stuff is nice to have.


What did you do before GPS and mapping apps?

I grew up in a midwestern town that was built on a grid. The north/south blocks were long (eight to a mile) and the east-west blocks were short (twelve to a mile), so most addresses were on the north/south streets. First street was at the south end, running east-west, then second street and so on. The north-south streets had names in alphabetical order from east to west.

The first digit or digits of an address on a north-south street were the number of the nearest numbered street to the south. The last two digits of the address were 00 for the northeast corner lot and 01 for the northwest corner and generally increased by 4 per house from there. If I lived at 1234 Elm street and I knew your address was 2133 Maple, I knew to go nine blocks north (just over a mile) and, um, several blocks west, and your house would be on the west side of the street, toward the north end.
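
Put another way, the addresses were doing the navigation math for you. Here's a tiny sketch of that arithmetic (my reconstruction, using the made-up addresses above):

    # Leading digits give the nearest numbered street to the south;
    # the long north/south blocks run eight to the mile.
    def blocks_north(from_address, to_address):
        return (to_address // 100) - (from_address // 100)

    blocks = blocks_north(1234, 2133)   # 1234 Elm is in the 12th block, 2133 Maple in the 21st
    print(blocks, blocks / 8)           # 9 blocks, 1.125 miles -- "just over a mile"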

The main thoroughfares were a mile apart, since they'd started out as section roads, so you knew you could take 3rd, 11th, 19th or (later, as things got built out) 27th to get across town from east to west, and Cedar, Oak or (again later) Agate to get from north to south. A lot of towns were laid out using some version of this kind of scheme, and for that matter so were a lot of cities. San Francisco is a notable example -- a lot of folks would have built streets to follow the contour of the hills (to be fair, some do).

I say "were" and "could", but of course they didn't rename or renumber anything just because GPS came along, though it does certainly seem to matter less now. I currently live in an area with a large-scale grid of section roads, and many of the towns are on small-scale grids, but I've never bothered to learn the exact numbering schemes, even in my own neighborhood, because GPS is just easier. I do know the section roads reasonably well.

My first answer, in other words, was "you just got to know your way around town" and "the addresses were set up to make that easier".

That worked fine until I moved to an area on the East Coast where nothing was on a grid. At that point MapQuest was around, but I didn't have a smartphone. I ended up doing a fair bit of printing out directions off the web, trying to mostly memorize the way before starting out, peeking at the directions while stopped at stoplights and keeping a weather eye out for street signs and house numbers.

And getting lost fairly often.

Gradually, I learned the main roads and how they connected together, and how the smaller connectors connected to those, and where the main places I wanted to get to related to all that, and things got easier. People would also give general directions like "It's near Chestnut and Amethyst where the main library is. Turn left on Locust Street after the light and Smith Court will be a few blocks down". If you already knew where the main library was, or even where Chestnut and Amethyst were, you had a pretty good shot.

There were also some clues like the common pattern of naming a main road after a city it was headed toward. For example, Richmond Road in Twickenham goes toward Richmond and Mortlake Road in Richmond goes toward Mortlake, and it's probably not a coincidence that in both cases you're heading toward London proper (there is no Twickenham Road in Richmond or Richmond Road in Mortlake, but on the other hand Chapel Hill Road in Durham goes toward Chapel Hill, where it becomes Durham Road in Chapel Hill ...).

Later, I realized that learning the main road/smaller road pattern was something I'd dealt with before, traveling in Europe, except that instead of main streets it was usually the public transit system -- get on the subway at your stop, follow the subway maps to your destination stop, find your actual destination from there.

My main problem then was that I don't have a great sense of overall direction. If a road takes a bend here and a curve there, I might think I'm headed pretty much the same direction I was before, when in fact I've turned almost 90 degrees.

So I really like having GPS available as a backup, even if I wish there were an easy way to say "yeah, I know this part, start giving me directions when we get to the part I don't, and just let me listen to my music until then."

The other part, of course, particularly before MapQuest, was knowing how to read a street or road map, which seems to be something of a lost art, a clear sign that GPS is just plain easier (particularly if your brain doesn't deal well with maps).

For long trips you had the Rand McNally Road Atlas, which showed all the interstates, federal highways and main state roads, along with cities and towns, with mileage shown on each segment. The distance between one town or exit and the next was shown as a number halfway between them, in one color. Some of those waypoints had a special dot in a different color, and the distance between those with special dots was shown in that color, so you didn't have to add up all the segments in between. There was also a schematic depiction of the interstate system with mileage numbers.

In other words, the road atlas encoded exactly the same kind of edge-weighted graph that mapping software uses, and you could use that to figure out the shortest route from point A to point B along main roads. If you had time, you could look for cutoffs on secondary roads. If you were adventurous, you could try to find local shortcuts and hope that at least you could find your way back to something that was on the map.
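
In today's terms, that's a shortest-path search over a weighted graph. Here's a minimal sketch, with made-up towns and mileages, of the computation a patient traveler with a road atlas was doing by hand:

    import heapq

    roads = {   # mileage printed along each segment of the map
        "Springfield": {"Shelbyville": 22, "Ogdenville": 35},
        "Shelbyville": {"Springfield": 22, "Capital City": 40},
        "Ogdenville": {"Springfield": 35, "Capital City": 18},
        "Capital City": {"Shelbyville": 40, "Ogdenville": 18},
    }

    def shortest_miles(start, goal):
        # Plain Dijkstra: always extend the shortest route found so far.
        heap, seen = [(0, start)], set()
        while heap:
            miles, town = heapq.heappop(heap)
            if town == goal:
                return miles
            if town in seen:
                continue
            seen.add(town)
            for nxt, leg in roads[town].items():
                if nxt not in seen:
                    heapq.heappush(heap, (miles + leg, nxt))
        return None

    print(shortest_miles("Springfield", "Capital City"))   # 53 miles, via Ogdenville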

You might also carry smaller-scale state or regional maps, which you could get at any gas station (maybe still can). If you were staying in a city for a while, you'd pick up a city map, too. The road atlas also included maps of the main roads in most cities, and you could usually get by with that if you were just passing through [re-reading, I realize I forgot to mention that you could also ... stop and ask directions].

Any of these maps would be overlaid with a square grid with numbers in one direction and letters in the other, and there would be an index, so you could find out that Springfield was in square 5A and quickly find exactly where it was and figure out how to get there.

When I lived in the LA area, the Thomas Brothers map worked basically the same way (as does London's A to Z, along with, I'm sure, many, many others), so you could figure out that to get to the Sherman Oaks Galleria you take Wilshire to the 405, get off at the Ventura exit, hang a left on Sepulveda and a right onto Ventura and there you are.

But what about traffic? To this day, many local radio stations will provide frequent updates on road conditions and traffic, and make money off this information by selling ads. Just sayin'

Summing this all up

  • Many places were designed to be easy to get around
  • Pretty much any city has a system of main roads, secondary roads and side streets that you can just learn if you need to
  • There are maps available at several scales. Larger scale maps include distance information and pretty much all have grids and indexes.
  • Getting around is easier now, but it wasn't really that hard before smartphones and GPS, because there was already quite a lot of infrastructure to make it easier, particularly if maps are friendly to your brain.

What did you do before cell phones and texting?

Cell phones may have had the most noticeable effect on day-to-day life of all the web/internet/telecommunications advances of the past few decades.

Besides the clothes and hairstyles, one sure-fire sign that a movie is old (or the screenwriter is a bit behind the times) is a plot device that depends on a phone call. Our hero needs to get in touch with someone urgently. Can they make it to a payphone? Will the person they're calling be at home or at their desk? Will the line be busy? Will the wrong person answer the phone? Or maybe the right person is at home but they're afraid to pick up the phone because it might be the villain calling? If the hero had to leave a message, will the other person get home to check their answering machine in time? Will the wrong person overhear them leaving the message on the machine?

None of these really works today because
  • Phones are now associated with people, while they used to be associated with places
  • Today's phones can do more. For most of the landline era there was no caller ID and most phones could only handle one call at a time
  • Messages can now be stored in the cloud rather than locally on analog tape
Since a landline is associated with a place, the vast majority of households had a single phone line, though there might be multiple phones in the house connected to it. If you called the number for that phone, you were calling the house. Someone would answer ("Hello?"), you'd say who you were ("Hi, it's Dave") and, if you wanted to talk to someone else at the house, who you wanted to talk to ("Could I speak to Earl?"). If that person was somewhere else, you could ask the person you were talking to to leave a message.

You could also just hang out and chat with them -- if you know Earl, you probably know Chris, the housemate, or Chris's good friend Sam, who doesn't live there but hangs out enough that everyone's comfortable with them answering the phone. The chance of talking to someone other than the person you were calling for wasn't necessarily a bad thing, though of course it could be.

The other half of a phone being associated with a place was that if you wanted to make a call, you had to get to a phone. That's why there were payphones (still are, here and there, I'm pretty sure). Or you could stop by a friend's house and ask to borrow their phone. In a pinch, you might be able to drop into a nearby business and ask to use their phone, but it had better be an emergency.

You could also call a payphone, since they each had their own number, but that was pretty rare, to the point that a lot of people weren't aware that you could even do that. You'd mostly see it done in a movie, where the villain tells the hero to wait for a call at the payphone at 12th and Main, and some innocent bystander steps in to make a call at just the wrong moment.

But that also meant that if you were away from a phone, no one could call you and no one expected to be able to. Earl's not home? Cool, I'll try later, or maybe I'll run into him. Likewise, no one expected you to be able to call them. The most likely answer to "Why haven't they called me back??" was "They're not home yet."

Honestly, this was kinda nice. I still miss it from time to time. Sure, you can unplug today, but it's not the default.

Voicemail today works mostly the same way the answering machine did fifty years ago. You could record whatever outgoing message you liked. When someone called your phone and the answering machine was turned on, it would play your outgoing message, beep and start recording whatever was on the line until the connection ended or (I'm pretty sure) until you picked up the phone on your end.

Depending on how a switch was set, it would also play what it was recording on its speaker, so you could hear the message that was being left. People who were at home could and did screen calls that way, so leaving a message might look like:

... Please leave your message at the beep

Hey, it's me ...

[picking up phone] Oh hey! I was hoping you'd call

but it might also look like:

... Please leave your message at the beep

Hey, it's me ...

[muttering to self and not picking up phone] Yeah well you can just take that phone and ...

... and I just wanted to say ... again ... I'm sorry I'm sorry I'm sorry

Sure, you can still screen calls and now even block people (which you couldn't do), but there's something special about listening in in real time.

Again, the main difference is that the answering machine is tied to a landline, which is tied to a place, so typically you'd check your answering machine for messages when you got home and either turn it off, or leave it on and screen calls. If you turned it off, you had to remember to turn it back on the next time you left ("Oh no, I'm sorry you couldn't leave a message. I forgot to turn my machine on."). It was also possible to access your voicemail by calling in and using a Touch-Tone™ keypad to put in a PIN, but to do that ... you'd have to get to a phone (and even late in the game, a lot of phones still had rotary dials, so not just any phone).

But of course, and this is the part that surprised me enough I remember discussing it in at least one post here, nobody leaves voicemail any more. I mean, you can still do it, but I'm not sure when I last left a voicemail for a person, as opposed to a business or doctor's office.

It took me a while to understand why. If you call someone and it goes to voicemail, surely it's easier to just say a message than to hang up and type out a text. Fair enough, but it's even easier to just type out the text without calling and waiting for an answer or voicemail. The setup for a voice connection is heavier weight than one might expect.

It's also a lot easier for the receiver to glance at a text than to access voicemail and then listen through. After hearing "Why didn't you just text me?" over and over, voicemail starts to look less and less attractive. With smart keyboards and speech-to-text, texting isn't that hard anyway, at least in my experience. And so came the return of telegram style and enough abbreviations, slang and conventions to (arguably) constitute a new dialect.

So the main differences here are that it's harder to unplug and ... text.

I was going to emphasize how utterly disruptive it is to always be connected but ... maybe not. Yes, I'm reachable by phone most of my waking hours, but I don't actually get that many phone calls. In particular, I don't get a lot of cold calls. Most of the time if someone calls me it's an actual person that's either a friend/family member or someone I'd asked to call me. I don't get a lot of spam calls to begin with, and if I do, I can either decline and let it go to voicemail, to check (and delete) at my leisure, or use a screening feature to ask them to leave a voicemail (which they never do).

I think some of this is regulatory, but spammers/scammers don't generally let regulation get in their way too much, so this must mostly be a matter of there being cheaper and more effective ways to spam and scam.

Most of the interruptions I get, by far, are notifications from apps which I chose to get. Or at least, I didn't diligently ask not to get. Many of these are email notifications. I don't get a ton of email, but I do get a steady stream through the day, just almost enough to want to Do Something About It.  I also get notifications for texts, which are generally from people I know, so I tend to look at them right away, and from a couple of news sources, which are pretty selective about only sending out alerts for major news. The main thing that's bugging me right now is the stream of "hey this movie just came out" notifications that I don't recall asking for, but that seems to have tapered off (or I turned them off?).

In other words, being "always on" doesn't seem to require being very "on", and there are a few things I could do to make it less disruptive yet. On the flip side, I can call or text pretty much any time I want, as long as I'm not driving, and even then it's usually not that hard to pull over. If I'm at the grocery store and I want to double-check what a household member wanted, that's easy. If I'm in an accident, I can call 911 (or if I can't, I have bigger problems). And so forth.

It's also easier to meet up, which is nice. I remember arranging to meet friends in Berlin not long after the Wall came down. After a few phone calls (and maybe even letters and postcards?), we arrived at a plan: meet on such-and-such date at the Kaiser-Wilhelm-Gedächtniskirche at high noon. If not everyone was there, come back at 1:00 and so forth, so that whoever was already there wouldn't have to sit around waiting. I don't think we set a time to give up waiting, but it was understood that if it got to be too late, whoever was there would just go on and see the city and whoever couldn't make it couldn't make it.

As it turned out, most of us were there at noon and the others showed up at 1:00 and off we went. During the visit, we'd occasionally arrange a rendezvous point to meet up at if we got separated, which may or may not have actually happened. This was pretty normal and it tended to work pretty well, but again, I'm not sure when I last made a plan like that, because why bother when you can just text or call? It's still a good trick to keep in mind, though, I think.

So staying connected is disruptive, but not really all that disruptive. Being more connected also has some conveniences, but is it really all that much more convenient? 


What did you do before search?

For the purposes of this post, let's assume that search Just Works: you can easily find any particular bit of information you need, assuming it's on the web somewhere (and "the web" basically means "whatever your search engine can find").

Sometimes, you'd end up just not finding something out. But there were options.

If you wanted a phone number or address, you could look someone up in the phone book. Since phone books were physical things, and fairly hefty ones in many cases, you could really only access them locally, though many libraries had phone books for major cities. In other words, there was an element of privacy protection built in, which tended to be enough for most purposes, though people did get unlisted numbers for various reasons.

Adding on to my claim that changes in how phones work have been among the most disruptive, that whole paragraph is from another age. If I want to call a business, their number is on their web page. If I want to call a person, we'll have exchanged phone numbers (likely by text, of course). My phone will remember my contacts, but that's actually not such a big deal. It doesn't take a lot of space to write down names and addresses of people, and you could get a miniature notebook for just such a purpose (I still have one somewhere).

The main convenience is being able to tap on an icon and have the phone place the call without even having to know the number -- there are only a few numbers I have memorized now, but mostly because I use them for supermarket loyalty programs and such, not because I dial them.

But what would one actually search for?

There are two main categories, I think. One is day-to-day information: Where is there a good restaurant that serves X kind of food? When is the DMV open? Does the local hardware store carry left-handed socket wrenches? (No, that's not a thing).

This is largely a matter of advertising (which, for the purposes of this post, at least, is distinct from search).  Businesses have an incentive to let as many people as possible know that they're around, so there's probably a local restaurant guide that will tell you who serves what, and there are probably multiple copies of it in various drawers in the house, or under the couch, because they just seem to keep turning up and, yikes, maybe they can reproduce?

You probably got something in the mail telling you when local government offices like the DMV are open, and they're probably listed in the yellow pages as well (back to phones ... there were actually two kinds of phone books: the white pages had residential listings for anyone with a phone who didn't opt out, and the yellow pages had paid listings for various businesses and similar entities. It's been so long since I've used one that I almost forgot that DMV would be in there).

Unless you lived in a major city, there probably weren't that many restaurants in town anyway, and it didn't take long to get to know them. As to hours, it was a good bet that anything that was open for business would at least be open between 10:00 and 4:00 on a weekday, and anything retail was at least open on Saturday (though maybe not Sunday, depending on where you were).  Again I say "was" and "would", but as far as I can tell, that's still mostly true.

In other words, if you search for "X restaurant near me", you're not asking something that could only be answered, or only be conveniently answered, once search engines came along. You're asking something that used to be reasonably easy to answer and is now somewhat easier, in principle.

As to what's available for sale where, some outlets would put out catalogs (the Sears Catalog is a famous example -- I hope that when you chase that link it says more about the cultural significance of that catalog) and many stores would put out flyers in the local newspaper saying what they had on sale that week.

Or (once more back to phones) you could call the hardware store and ask whether they had left-handed socket wrenches, and most likely someone would actually pick up the phone on the other end and tell you (and try not to giggle too loudly).

Long story short, most of the "Where can I find this in the physical world right now?" questions could be answered pretty easily, because people had an interest in making them easy to answer, just as they do now. The main difference is that there were more people involved. For example, there were more people working at a typical retail store to help customers and also to answer the phone if someone called. And that was kinda nice. I still miss it from time to time.

The other main thing I use search for is research, for example looking up material to put in a blog post. Having so much material online and searchable changes things considerably, but despite what the cartoon might suggest, it wasn't impossible to find things out.

To be sure, this wasn't something most people could do at home, unless the answer happened to be in a dictionary or encyclopedia (which were much more commonplace then) or in some book or magazine you happened to have on hand, and it helped, a lot, to have a university library or similar institution in your area. If you could get to one of those, though, there were definitely resources:
  • There were probably microfilm copies of major newspapers and magazines plus local and regional publications
  • Reference books like The Reader's Guide to Periodical Literature would tell you what was in those publications
  • There would be copies of major scientific journals
  • There would be a large reference section full of reference books on a wide variety of subjects
  • And, of course, there would be a large collection of fiction, along with non-fiction on history, science, art, music and many other subjects
  • Along with a card catalog to tell you where to find all of the above
  • Your local school library would be a miniature of this, so you could practice finding books in the catalog and maybe even reading a microfilm copy of a news article you found in the Reader's Guide.
The main problem, besides not having access if you didn't live near such a library, is that it's harder to keep a collection of physical texts up to date. Even so, the library would have subscriptions to many major publications, and the Reader's Guide published updates biweekly, so you still could get a pretty good idea of the latest developments.

All this depended on your library carrying the type of information you were interested in and the major reference publishers indexing it. These being human endeavors and resources being limited, there was plenty of room for conscious bias, unconscious bias and plain old budgetary constraints to skew the picture.

Today, of course, you can search anything the major search engines index, major news sources update their pages continuously, preprints are up on ArXiv as soon as the authors want them to be, and information is generally available more widely and more quickly than it used to be. I'm going to steer well clear of how the current web-centric view of the world of information may be biased, and why, but I'll certainly acknowledge that it's a worthy topic of discussion.

But then, how many people are in the business of serious research? For us amateurs, how much does it really matter whether I find out about a new development right away, or in a month or a year when it finds its way into the library system, or a friend mails me a photocopy of someone's lecture notes, or whatever else? For the pros, even a good search engine will only get you so far.

Beyond that, as far as I can tell, you still need a good network of sources, whether primary sources or people who can point you to them or pull together, assess and summarize the information from primary sources. It remains to be seen what role LLMs will end up playing in that, particularly in professional-level research.

Search engines make day-to-day questions more convenient to answer, and they make the amateur researcher's job quite a bit easier, but were those all that hard to begin with?


What did you do before streaming?

Bought CDs, bought/rented DVDs or videotapes, watched TV, went out to movies, not to mention quite a few live shows.

Sometimes even talked to people.


What did you do before LLMs?

Dunno ... what did you do? It hasn't been that long!



What changed?

This is one of those posts that started as one thing, trying to make some sort of Larger Point, but ended up as ... something. It started out on the long-running theme of not-so-disruptive technology, then devolved into a technical exploration as I tried to back that point up, and then went a somewhat different direction because of what I actually found when I went researching, before sorta circling back to the general vicinity of the original theme and pulling together some threads from some of the first posts on this blog from, oh, a minute or two ago. Rather than try to polish all this up into some sort of coherent essay, I've decided to leave it pretty much as written. Perhaps as some sort of compensation, I've included a lot more links than I usually do.


Looking back I see that in 2024, I've already doubled my output from 2023 (by a score of two posts to one), so maybe I should quit while I'm ahead. But I had an idea for a post, and after re-reading back to July of 2020 (that is, seven posts), I'm pretty sure I haven't explored this particular point before, at least not recently. Or rather, I have, given that the not-so-disruptive technology tag is in second place behind annoyances, but if I've stepped back and surveyed it from a broader point of view, it hasn't been in the last four years.

(I also notice that the link to Intermittent Conjecture is for a four-year-old post, probably because that particular feature is no longer particularly supported, because of course it's not. Grandpa, what's a "blogroll"?)

I considered editing that last bit of snark out, especially since annoyances is already well represented, but I think that it's probably in line with the rest of this post, though maybe in a roundabout way.


It's almost an axiom that newly-developed technology will Change the World. I say "almost" because technically an axiom is a statement that you assume to be true because it's essential to the rest of your logical framework, but you don't have any other way to prove it to be true, so you have to just assume it. I'm thinking of mathematical axioms like "a thing is equal to itself" or, more esoterically, "if you have a collection of sets, you can form a new set by choosing one element from each" (it took quite a bit of work to figure out that you can't prove that from other axioms like "two sets are equal if you can match up their elements one-to-one in both directions").

"New technology changes everything" is a statement that people often assume to be true, and it's essential to at least some people's logical frameworks, but I wouldn't call it an axiom because you can actually look at any given new technology and, I claim, come to a reasonable conclusion as to whether it changed everything. And then, maybe, as a followup question, by how much?


To take a couple of easy, well-known examples, it's not hard to argue that, say, agriculture changed everything, or antibiotics changed everything. Except ... depending on what you call "agriculture", you could argue that agriculture was around for thousands of years before cities like Shuruppak or Dholavira arose. On a smaller timescale, the first modern antibiotic was extracted from mold growing on a bacterial culture in 1928, but it wasn't available in useful quantities until the early 1940s.

It's not the discovery of a technology that makes the difference. There wasn't even any one event that you could call "the discovery of agriculture." There was an event that could be called "the discovery of (modern) antibiotics (that were known to work by killing microbes)", but that in itself didn't change anybody's life greatly.

The point here is that simple statements like "agriculture/antibiotics changed everything" turn a bit mushy after even a little prodding. More accurate versions might be "over the millennia, developments in agriculture have had a significant impact on human population and living patterns" or "the development, mass manufacture and widespread deployment of several types of antibiotics in the latter half of the 1900s had a significant impact on human health outcomes."

Clearly there have been significant changes in how people live, and clearly developments in agriculture and medicine, including the development of antibiotics, have played a significant role in that, but it's not a simple matter of "agriculture happened" or "antibiotics happened" followed by "everything changed". The actual stories are full of false starts, backtracks, accidental discoveries, social upheavals, twists of fate and all sorts of other seemingly extraneous factors. Which is the interesting part.


What got me started on all this was thinking about how the web has changed communication, and in particular telecommunication. Except, as soon as I wrote that, I realized that it's more a matter of the internet changing communication, since I've already argued that it's the web of links that makes the web webby, and I'll just claim here that this webbiness hasn't had a large impact on how we communicate with each other.

We could just as well have Skype and Zoom without the web. For that matter, to a large extent each social media platform is its own web, and not "the" web. But that way lies yet another round of fretting over what exactly am I blogging about here ... For now, let's file communication technology under "the web at large" or something and get on with it.


For most of human existence, the only way to communicate detailed information over a long distance was by people moving around. Travelers would bring stories and knowledge and trade items with them and information would diffuse across large areas, but if that traveler wanted to send a specific message to someone they'd met years ago while traveling someplace far from their current location, well, good luck with that. It may not have been impossible, but it couldn't have been commonplace.

Several thousand years ago, digital communication came along and changed this. With writing came the option of moving a written message with the sender's exact words (there wasn't any single "invention of writing", either, but let's just roll with it). Messages could be sealed so that their contents couldn't be easily changed, signed so that you could tell who they came from, and even encrypted so that only the intended reader could read them, or at least that was the idea.

Digital telegraph systems, also dating back thousands of years, could transmit text from point A to point B without needing a person to carry it. The Greek phryctoria, a system of towers on mountaintops with torches, is a good example but not the only one.

Two key measures of telecommunication are bandwidth, which is how many bits can be transmitted in a given amount of time, and latency, which is how long it takes to transmit any particular bit from sender to receiver. As usual, the actual definitions are more subtle, particularly for bandwidth, but these will do here. If you're feeling technical, feel free to read bandwidth as bitrate.

For example, if it takes three seconds to switch the torches in a telegraph tower around to show a new letter, and there are 24 possible letters, then the bandwidth is about 4.6/3 bits per second, or about 1.5bps. The latency from one tower to the next, around 30km away, is negligible (about 0.1 milliseconds).

If the message is supposed to be relayed to the next tower in a series of towers, it will take some amount of time for someone to read the arrangement of torches in the sending tower and put the same torches up so the next tower can see them.  Let's say there are two people in the tower, one reading and one putting up torches, and it takes an extra second for the reader to read and announce the next letter, on top of three seconds to arrange the torches. Latency is then four seconds per tower.  That is, if the first tower is sending a message and the second is relaying it to the third, the third tower is getting the message four seconds after it is sent. A fourth tower would be eight seconds behind, and so forth.

Suppose I want to send a message to someone ten towers away. Latency is still pretty good, relatively speaking. The last tower will be 36 seconds behind the sender (nine relays for ten towers). If that receiver sends a reply, I can get it just over a minute after sending my message (in more technical terms, round-trip latency is on the order of a minute). While this is glacial by today's standards, it's outstanding in comparison to a multi-day journey to get from where I am to where the receiver is, and I don't have to worry about someone waylaying my messenger along the way (or my messenger deciding they have better things to do with their time).

Bandwidth, though, is not so great. If I'm sending a short message like "Prepare for attack from the north," that's not a problem. Transmitting that message will take a couple of minutes and my receiver will have the whole thing half a minute after I finish sending it. But suppose I'm sending a trade agreement proposal that amounts to 12,000 bits -- still tiny by today's standards. That will take a couple of hours, which is still doable, though not a lot of fun for anyone involved.
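
If it helps to see the arithmetic in one place, here's a back-of-the-envelope sketch in Python. The three-second letter changes, one-second relay overhead, ten towers and 12,000-bit proposal are the same guesses used above, not historical measurements.

  import math

  # Guesses from the text, not historical measurements
  ALPHABET_SIZE = 24           # letters a tower can display
  SECONDS_PER_LETTER = 3       # time to rearrange the torches
  RELAY_OVERHEAD = 1           # extra second to read and announce a letter
  TOWERS = 10                  # sender's tower through the receiver's

  bits_per_letter = math.log2(ALPHABET_SIZE)             # ~4.6 bits
  bandwidth_bps = bits_per_letter / SECONDS_PER_LETTER   # ~1.5 bits per second

  seconds_per_relay = SECONDS_PER_LETTER + RELAY_OVERHEAD   # 4 seconds
  one_way_latency = (TOWERS - 1) * seconds_per_relay        # 9 relays -> 36 seconds
  round_trip_latency = 2 * one_way_latency                  # ~72 seconds, about a minute

  proposal_bits = 12_000
  transmission_hours = proposal_bits / bandwidth_bps / 3600  # ~2.2 hours

  print(f"bandwidth: {bandwidth_bps:.1f} bps")
  print(f"one-way latency over {TOWERS} towers: {one_way_latency} s")
  print(f"round trip: {round_trip_latency} s")
  print(f"12,000-bit proposal: {transmission_hours:.1f} hours")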

But the people on the other end will want to respond with their own counterproposals, and so on. Pretty soon we're into days, and spare a thought for the twenty people up in the towers shuffling torches around and looking out for torches at other towers through the night  (I'm going to go out on a limb and say this system works better at night).

Probably better to send a trusted emissary with the text of my proposal and maybe some other written instructions. And while they're at it, they could carry messages from other people in my area to people in the receiver's area, or anywhere along the way, and we have ourselves the beginnings of a postal system.  The latency of a postal system is measured in days, but the bandwidth is essentially limited only by how fast people can actually write and read and how many people are sending and receiving messages -- you can fit a lot of sheets of paper onto a horsecart. Not to mention that you can also send drawings and diagrams easily on a sheet of paper.

This may seem like a lot of speculative detail about ancient systems of communication, and it probably is, but it covers the bulk of human history (the written-down part, as opposed to prehistory, which is most of human existence). From ancient times until the late 1800s, long-distance communication was mainly a matter of moving physical texts around, with limited use of alternatives that were much faster (in latency) but also much, much slower (in bandwidth), and quite a bit more expensive. This includes the era of the modern optical telegraph (late 1700s) and electrical telegraph (mid 1800s).

What happens next is interesting. I originally wrote "then came along the telephone," with the idea that it was a major leap to have the bandwidth to carry voice instead of the dots and dashes of Morse code. Fortunately, I did a little double-checking and discovered that

  • The bandwidth of a telegraph was not that low. A punched-tape system around the time of the telephone's invention could transmit upwards of 400 words per minute. At roughly 12 bits per word, that comes out to about 80 bits per second. That's nothing by modern standards, but it's about 50 times my guess for the phryctoria. Some of that is because Morse code encodes text more efficiently than torches, but most of it is due to the switch to electromagnetic transmission (um, light from torches is also electromagnetic ...).
  • The bandwidth of human speech is not that high. In this old post I cited a world record of 10 words per second, or about 120 bits per second, but normal speech is much slower.
In other words, a telephone and a high-speed telegraph are transmitting words at about the same rate, though the telephone has the advantage of carrying tone of voice and not requiring someone to transcribe words onto a paper tape. I suppose this shouldn't be too surprising since both the telephone and telegraph are using the same underlying transmission medium of electromagnetic waves traveling along copper wires or, a little later, over the air.

The same technology could also transmit images. The first facsimile machine (perhaps you've heard of "faxes"?) was developed around the same time as the telephone. Later, in the 1920s, a number of inventors on a number of continents (including Leon Theremin, better known for the musical instrument) developed various systems for transmitting moving images. Early television station WRGB ("RGB" can't be a coincidence, can it?) transmitted 40-line images at 20 frames per second. Let's guess that a 40-line image equates to 1600 8-bit pixels. That comes out to about 260 thousand bits per second (260kbps).
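
For what it's worth, here are the same word-rate and bitrate estimates worked out in a few lines of Python; the 12 bits per word, 400 words per minute, record-pace speech and 1600-pixel frame are the guesses from above, not measured figures.

  BITS_PER_WORD = 12

  # Punched-tape telegraph: roughly 400 words per minute
  telegraph_bps = 400 * BITS_PER_WORD / 60   # ~80 bps

  # Record-pace human speech: roughly 10 words per second
  speech_bps = 10 * BITS_PER_WORD            # ~120 bps

  # Early TV, WRGB-style guess: 40-line frame ~ 1600 eight-bit pixels, 20 frames/s
  tv_bps = 1600 * 8 * 20                     # 256,000 bps, call it 260kbps

  print(f"telegraph: ~{telegraph_bps:.0f} bps")
  print(f"speech:    ~{speech_bps:.0f} bps")
  print(f"early TV:  ~{tv_bps / 1000:.0f} kbps")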

This is already a remarkable increase in bandwidth*, from a hundred or so bits per second in the mid 1800s to hundreds of thousands in the early 1900s. By the dawn of the internet, let's say 1974 -- fifty years ago -- when the proposal for TCP was published, a leased telephone line could carry around 50kbps (56kbps as I recall and Wikipedia seems to confirm). That was the basic unit -- it was entirely possible, and typical, to lease more than one. By the mid 1980s, NSFNET was using 1.5Mbps T1 lines. Later came T3 lines at 45Mbps (so a T3 is worth 30 T1, go figure), and today we're talking gigabits or more.

This is all a matter of how bandwidth is sold. The actual transmission cables are much heftier. Fiber optic cables can carry petabits per second (Pbps). A peta is a million gigas, that is, a petabit per second is a quadrillion bits per second, or about 125 thousand bits per second for every person on the planet. Commercially available cables are somewhat smaller, but not much, measured in hundreds of terabits, that is, hundreds of trillions of bits per second.
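
To see just how steep the growth is, here's a quick sketch of the capacities mentioned above, plus the per-person share of a petabit per second; the capacities are the round numbers from the text, and the eight-billion head count is my own round-number assumption.

  # Link capacities mentioned above, in bits per second (round numbers)
  links_bps = {
      "leased line, 1974": 56e3,
      "T1, mid 1980s": 1.5e6,
      "T3": 45e6,
      "gigabit, today": 1e9,
      "research fiber": 1e15,   # about a petabit per second
  }

  for name, bps in links_bps.items():
      print(f"{name:>20}: {bps:>20,.0f} bps")

  # A petabit per second shared across roughly eight billion people
  per_person_bps = 1e15 / 8e9
  print(f"petabit/s per person: ~{per_person_bps:,.0f} bps")   # ~125,000 bps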


There are still some specialized applications that can give that much bandwidth a workout, but in human terms the amount of bandwidth available is absolutely ridiculous ("available to whom?" is a fair question). Which brings me back to one of the earliest themes on this blog: limits on human bandwidth. That is, how much information can any individual person deal with? I discussed several aspects of this in this post about, oh, seventeen years ago.

In terms of bits per second, our highest use of bandwidth is probably the visual system, which processes somewhere around a gigabit per second considered as raw pixels, but there's a lot of redundancy in there. A good MP4-compressed video stream, which includes audio, is more like 10Mbps. Since a format like MP4 is tuned to provide only the information we actually process, it's probably a better measure of how much data the visual system is actually processing.
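
As a rough sanity check on the raw-pixels-versus-compressed comparison, here's the arithmetic with assumed round numbers (1080p at 30 frames per second, 24 bits per pixel, a 10Mbps stream); none of these are measurements of any particular system.

  # Raw pixel rate for a typical HD stream (assumed round numbers)
  width, height = 1920, 1080
  bits_per_pixel = 24            # 8 bits each for red, green and blue
  frames_per_second = 30

  raw_bps = width * height * bits_per_pixel * frames_per_second
  compressed_bps = 10e6          # ~10Mbps for a decent compressed stream

  print(f"raw pixels: ~{raw_bps / 1e9:.1f} Gbps")        # ~1.5 Gbps, same ballpark as above
  print(f"compressed: ~{compressed_bps / 1e6:.0f} Mbps")
  print(f"ratio:      ~{raw_bps / compressed_bps:.0f}x")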

There's a lot we don't know about our other sensory input -- touch, smell, proprioception and whatever else -- but it's clearly operating at a much lower bandwidth (for example, a walking robot does not need a fiber optic cable to tell the CPU how far its knee is bent or how much pressure its foot is exerting).

In other words, there are many, many ordinary houses with much more than enough bandwidth to saturate the sensory input of all the humans in them, if said sensory inputs could all be magically connected to a stream of bits. In practice, it means that there's enough bandwidth for everyone in the place to spend all their time watching video.

But -- and maybe this really is leading to some sort of point about technology changing everything -- that's been true for quite a while, at least since the advent of 24-hour cable TV, which is to say, also about 50 years ago, which I've just called the dawn of the internet. I don't think this is at all a coincidence. Let's try to boil all the stuff about bandwidth down to a few bullet points:
  • For most of human existence, long-distance, low-latency bandwidth was zero -- there was no way to get a specific message across a long distance quickly. You could interact with someone directly at short distance with high bandwidth and low latency, but that was about it.
  • For most of human history, long-distance, low-latency bandwidth has been very low. In some times and places it was possible to quickly transmit a short message over a long distance, but even then, latency was measured in minutes and bandwidth in single-digit bits per second.
  • Starting in the 1800s, electromagnetic transmission led to huge increases in low-latency, long-distance bandwidth, from single-digit bits per second to current rates, which are enough to enable video calls between any two internet-connected points.
  • In the mid to late 1900s, bandwidth was high enough and cheap enough to enable two innovations:
    • Cable TV carrying over a hundred channels 24/7
    • Wide-area digital networking
Of the two, digital networking was by far the slower. Early networks mainly transmitted text, whether in human or computer languages. If you had a terminal at home, you could typically connect to your local network at speeds of 110 to 2400 baud (in general a different unit from bits per second, but in this case the same), and hope that you'd remembered to turn off call waiting on your landline. Then, after a long day of hacking, you could flip on the TV and watch at something like a megabit (resolution was lower in those days).

Even backbone connections were very slow by today's standards. This doesn't seem like a technical limitation, since ordinary coax cable could handle megabits, but more a matter of there not being that much digital information to send. If I wanted to talk to a colleague on the other side of the country, I wouldn't have tried to set up a call over the internet at the time. I would just pick up the phone.

The digital convergence that happened gradually over the next couple of decades consisted largely of building up the internet backbone, which was based on telephone and cable technology (mostly telephone, I believe), to the point where it could carry digital information at a rate comparable to the analog technologies that had been around since the beginning of the whole exercise.

Technically, this was revolutionary. For most intents and purposes, anything that was analog in the mid 1900s, particularly television, telephone and radio, is now carried digitally on the same network infrastructure that you can use to send purely digital information like ... text and emails? Source code?

This is a kind of interesting way to look at it. Hiding inside the massive digital network that delivers sound and video to us is a tiny replica of the original internet, albeit expanded from a few thousand researchers to a significant slice of the world's population. Billions are bigger than thousands, of course, a million times bigger, in fact, but overall digital bandwidth has increased by much more than a factor of a million.

(The early internet wasn't just used for email and source or object code. It was also used to transmit scientific data. Some datasets can be quite large, particularly in astronomy and particle physics, large enough to saturate even the modern backbone. But in such cases data is generally transmitted by putting it on physical media, which is then shipped. The postal service still wins on bandwidth. And yes, I am proudly using both data and media as mass nouns here.)


I think what I'm trying to sort out here is that the digital convergence can be looked at two ways. The original vision was to bring the intelligence of the internet to existing audio and video media. A TV cable brings a fixed set of channels into your house and very little back out. An analog phone circuit delivers voice traffic from point A to point B. A digital network can carry information from any number of senders to any number of receivers and do any kind of processing along the way.

On the other hand, technically, the digital convergence was a shift from sending analog data over analog lines (or over the air) to sending the same data over the same lines, or at least the same types of lines plus the cell network (also fundamentally analog), but encoded digitally, then re-encoded into analog signals and likewise decoded and re-decoded on the other end.

Why do that?

The wilder speculations of the 1990s haven't really panned out. A phone call is still a phone call. True, most of the time it's easier just to text, but texting needs much less bandwidth than calling. It certainly does not require a huge buildout of digital bandwidth. All the texts you send in a year would probably amount to a few seconds of audio.

TV shows are still TV shows and movies are still movies. Exciting new possibilities like interactive choose-your-own-adventure TV are an occasional novelty. Live streams allow viewers to interact with the presenter/performer, but so did call-in TV shows.

The difference is control. Outside the occasional news program or sporting event, I'm not sure I can remember the last time I watched something at the same time it was broadcast, if it was ever broadcast at all. I haven't bought an album in years, even in digital form. I stream what I want to watch or listen to, and I'm hardly a bleeding-edge early adopter. If I want to participate in a livestream, I can choose that. More importantly, if a creator wants to put on a live stream, they can easily do that. If I want to set up a video call with some people at work (or not at work), that's easy, too.

Some of these might be possible with the old technology. I could imagine a high-bandwidth phone service that would allow you to call a special number to connect to a video server and pick out what to watch on your video-enabled phone terminal, but putting everything on a digital network that handles data as bits regardless of its content or where it's going has made all of this much easier.

This is all sliced finely enough that individual people can decide which individual people to communicate with, from friend group to celebrity influencers to major organizations and whatever else. I'm personally not sure how much the behavior that this has enabled is new and how much is stuff that people were doing anyway. I explored that theme fairly early on, here, here and here for example, but I don't really do much with social media, even if you count blogging and the occasional visit to LinkedIn.


I think "Digital communication has changed everything" is true in about the same way as "Agriculture has changed everything". On the one hand, it has to be true. Being able to communicate instantly with any of billions of people has to be different from only being able to communicate instantly with the people around you. Being able to transmit high-resolution video across the world with negligible delay has to be different from being able to send a letter across a continent in days or weeks.

Being able to stream from a wide collection of audio and video is certainly different from having to buy or borrow books, records/CDs and videotapes/DVDs, and since that shift has happened well within living memory, it can certainly seem like things are changing rapidly.

But on the other hand, digital technology, including digital telecommunication, has been around for thousands of years. Analog telecommunication has been around for about a century and a half. What we might call the digital revolution is a change in how we transmit and access information, primarily audio and video, that had previously been analog, sitting on top of a huge increase in overall telecommunication bandwidth that began happening over a hundred years ago.

Just as there is no particular beginning of agriculture, there is no particular beginning of digital communication. Even if you could pinpoint the first time a person deliberately planted a seed with the intention of harvesting food later, or the first time a person deliberately made marks to represent words with the intention of someone else reading them later, it wouldn't tell you much. What matters isn't the particular starting point, but the long history of development and use over the millennia.


So far, advances in communication have been about people communicating with people. Machines do communicate with other machines without direct human involvement, but this is mainly in service of people communicating with people. This may change, but that's for another blog.

As far as people communicating with people, the limiting factor is mainly the people themselves. There are only so many conversations one can have and so many people to have them with. The whole point of a video conversation is to make the call as much like talking face to face as possible, that is, to accommodate our limitations in how we communicate. There are now ways of broadcasting a message from one person to millions of people, or even a billion, but even if one person can broadcast a message to a billion people instantly, those billion people will make sense of it in terms of their own lives, their own views and their own desires. 

The how of communicating with other people has changed greatly over the millennia, and particularly greatly in recent decades. This in turn has significantly affected whom we can communicate with. But what we talk about, even if we're talking about how quickly things appear to be changing, doesn't really seem to have changed much at all.


One of the earliest themes of this blog was trying to understand what effect the web and the internet would have on how we talk to each other. My instinct has generally been to push back against "It's all different now" narratives, and I think my instinct has largely been borne out (but then, I would think that, wouldn't I?).

And yet, I can't believe that nothing has changed. A lot has changed. Some part of me wishes that, after nearly two decades, I could arrive at some sort of grand summing-up of What The Web Is About and what effect it's had, but after all this time, I'm not sure I have much beyond my original take: "It's not nothing, but I'm not sure what it is, except whatever it is doesn't line up that well with the hype."