Monday, August 18, 2025

Hyperlinks vs. the web

In the previous post on the 1968 Mother of all Demos by Doug Engelbart and company, I mentioned stumbling on the fact that Andries van Dam and Ted Nelson had put together HES, which is generally regarded as the first hypertext system, the year before. All the parts of that were familiar. I've written extensively, though not always favorably, about Nelson's later project, Xanadu. Foley, van Dam et al.'s Computer Graphics: Principles and Practice was on the shelf at the first place I coded for money, and I'm pretty sure I've run across the idea of hypertext at some point. What I hadn't realized was that Nelson and van Dam had worked together, and that hypertext went back quite that far.

To be fair, I wasn't exactly shocked that hypertext dated to 1967, particularly in the context of Engelbart & co.'s demo, which included what were recognizably hyperlinks. What did catch my attention was that all the building blocks of Web 1.0 were in place in 1968, twenty-three years before the first actual web servers appeared in 1991:

  • the concept of hyperlinks and hypertext
  • the realization of that concept in running code
  • connectivity between computers in different physical locations (ARPANET itself would come along a bit later, but computers were already talking to each other)
  • interactive graphic displays
  • the mouse
The bullet point I didn't quite add is file servers, but FTP dates to 1971. I haven't dug up solid evidence that there was such a thing as a file server in 1968, but I certainly wouldn't bet against it. If you prefer to put the "Web 1.0 could just as well have happened" point at 1971 and say it was only twenty years later that it actually happened, fine.

What took so long? According to van Dam's account, the people who saw the Mother of all Demos were generally impressed, but the overall reaction was to say "Wow, that was some demo" and then get back to work. Was this a failure of the imagination, that the computer researchers of the time couldn't wrap their heads around something that wasn't 80-column punched cards, even if they'd seen it with their own eyes?

Possibly, but there were also more mundane concerns. That list of pieces in place comes with a few disclaimers:
  • Connectivity was generally at 1200 baud, or approximately one millionth of a gigabit per second. That will deliver text faster than you can read it, but it works out to roughly a megabyte every two hours (quick arithmetic sketch after this list). You can actually fit quite a bit of information into a megabyte, and 1.5Mb/s T1 lines were available (for a hefty charge, generally to institutions or large corporations), but you're not going to run YouTube or Netflix on the bandwidth available at the time.
  • Interactive graphic displays were a thing, but they were normally vector-based (think Asteroids, if you've heard of that) and in any case they were expensive specialized equipment. Graphic displays didn't become commonplace until the 80s. Even then they weren't cheap, and they looked absolutely primitive by today's standards.
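For the curious, here's the back-of-envelope arithmetic behind that first bullet, assuming 1200 baud works out to roughly 1200 bits per second (which it did on the modems of the era):

    # Rough bandwidth arithmetic for a 1200-baud line (assumed to carry ~1200 bits/second).
    bits_per_second = 1200
    bytes_per_second = bits_per_second / 8                 # 150 bytes/second

    # Compare to a gigabit per second...
    print(bits_per_second / 1_000_000_000)                 # ~1.2e-06, about a millionth of a gigabit/s

    # ...and see how long a megabyte takes at that rate.
    seconds_per_megabyte = 1_000_000 / bytes_per_second
    print(seconds_per_megabyte / 60)                        # ~111 minutes, i.e. roughly two hours
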
I think it's fair to say that in 1968 you could have put something together that looked quite a bit like Web 1.0, but it would have been a curiosity: slow, expensive and without anywhere near enough content to make for a compelling experience.

The first web servers were intended as an easier way to get at the content that had accumulated on various FTP/Gopher/UUCP servers over the years, which was reaching the point where it was hard even to know where the thing you were interested in lived. Not too long after that (AltaVista came along in 1995), there were enough index pages that it was hard to keep track of where to find a good index for what you were looking for, much less the data itself, and web search was born.

So it wasn't just a matter of the world turning its back on the wonderful potential of Engelbart's NLS and Nelson and van Dam's HES and then suddenly waking up in 1991 to realize what it should have known all along. Things were happening in the meantime:
  • Computing power, storage and bandwidth were increasing exponentially (as in, actually exponentially, at a more-or-less constant proportion per unit time, and not just "by a lot"; a quick sketch of what that means in practice follows this list)
  • As both a driving cause and an effect, the number of people with access to computing power, and the amount of data they wanted stored, also increased exponentially
  • People continued to experiment with ways of organizing information and navigating complex webs of connections (1987's HyperCard comes to mind, but it's not the only example)
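To put a number on "actually exponentially", here's a minimal sketch of what constant-proportion growth does over the 1968-to-1991 gap, assuming a two-year doubling time (a Moore's-law-ish number I'm pulling out of the air for illustration, not a measured figure):

    # What "actually exponentially" buys you: constant-proportion growth compounds hard.
    # The two-year doubling time is an illustrative assumption, not a measurement.
    doubling_time_years = 2
    years = 1991 - 1968                        # 23 years from the demo to the first web servers
    growth = 2 ** (years / doubling_time_years)
    print(round(growth))                       # ~2896, between three and four orders of magnitude
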
I think this is a common thread in technology in general: Things take a while to develop. It's not enough just to have a good concept, or even a working realization of it. You generally need a certain amount of infrastructure and a growing need that your new concept will meet, and some partial successes along the way.

Some other examples that come to mind:

  • Sketchpad, which anticipated several major developments, particularly the graphical user interface, was written in 1963, but GUIs weren't really widespread until the 1980s
  • The object-oriented language Simula 67 came out in 1967 (building on the original Simula, begun in 1962), and Smalltalk followed in 1972. Objective-C was introduced in 1984, but OO languages didn't really get major traction until the mid 1990s
  • The concept of a neural network dates back to 1943, at least. The perceptron was introduced in 1958, and Minsky and Papert's Perceptrons appeared in 1969. Hopfield networks were a topic of research in the 1980s. Transformers were proposed in 2017, now eight years ago. ChatGPT came out five years later.
It shouldn't be surprising that concepts run ahead of implementation. Concepts are a lot easier to come up with. More interesting is how we get from (some) concepts to widespread implementation, and which concepts get there.

Sunday, August 17, 2025

The future still isn't what it used to be: Doug Engelbart

If you're exploring the origins of the web and pondering how early ideas ended up incorporated -- or not -- into what we see now, sooner or later you're going to run into "The Mother of all Demos", a 1968 presentation by Doug Engelbart and company demonstrating the oN-Line System, or NLS for short, produced by the Augmentation Research Center at the Stanford Research Institute.

I watched it a few months ago. Having somehow missed it for all these years, I went in cold and -- deliberately -- without researching what I was supposed to be looking at. I took copious notes, in some cases replaying to try to make sure I got things right ... and then got distracted by other things. What follows is not fresh and detailed, but I want to at least include something about the demo, so here goes ...

NLS was directly inspired by Vannevar Bush's Memex concept, so if nothing else it's an interesting case study in what happens when you try to turn someone's architectural vision into a real system.

Everything's in glorious black and white, the text on the screens is UPPERCASE and clunky, with actual capital letters indicated by an OVERBAR, and some of the participants look more than a bit like the 1980s stereotype of nerds, because that's where the 1980s stereotype comes from. Actual 1980s nerds were hopelessly uncool in completely different ways (so I hear).

On the other hand, Engelbart is wearing a headset mic, moving a cursor (the team called it a "bug") around on the screen using a mouse, pointing, clicking, narrating the whole way through, linking up with colleagues in a different county in real time, cross-fading and compositing live video and generally giving the appearance of an ordinary livestream, just in black-and-white and what now qualifies as period costume, and without a discernible boss level.

About 20 minutes in, Engelbart says "I'm going to do something called 'jump on a link'" and explains that a link points to a particular location in a particular file, with a particular "view type" saying what kind of content you're looking at -- yeah, that's hypertext.
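Purely as an illustration (the field names below are mine, not NLS's), the ingredients he describes amount to something like the following, which isn't far from a modern URL plus a fragment plus a preferred presentation:

    # A hypothetical sketch of what an NLS-style link carries, going by Engelbart's description.
    # Field names are invented for illustration; NLS's actual representation differed.
    from dataclasses import dataclass

    @dataclass
    class Link:
        target_file: str   # which file the link jumps into
        location: str      # the particular statement/position within that file
        view_type: str     # how the content should be displayed once you get there

    example = Link(target_file="shopping-list", location="produce.3", view_type="one-level outline")
    print(example)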


Physically, NLS is nothing like Bush's original description of a desk-shaped piece of furniture holding a library of microfilm and some sort of magnetic recording medium. More than twenty years had passed since Bush's original article and whole new technologies had developed in the interim, particularly in computing hardware.

NLS was running on a 24-bit SDS-940, clocked at somewhere on the order of 0.1MHz, with approximately 200KB of main memory, 5MB of swap space and 100MB of disk(-ish) storage. It had a text- and graphics-capable display and a 1200-baud modem for connectivity. It also ran QED (Quick EDitor), the very first text editor I learned to use (a bit later, though), and a direct ancestor of vi, which I still occasionally use.

Tiny as this might seem today, I'd argue that there's a bigger gulf between it and 1945's ENIAC than between it and equivalent commercially-available equipment from 1991 (as many years from NLS as NLS was from ENIAC): say a farm of a couple dozen 40MHz SPARCstation 2s, each with 128MB of RAM and a few GB of disk. Yes, the SPARCstations could do a lot more, but the SDS has essentially the same pieces: enough RAM to be meaningfully programmable (as opposed to requiring switches to be set manually), offline storage, connectivity*, a graphical display, the ability to connect a variety of peripherals and so forth, which the ENIAC has essentially none of. The hardware available when NLS was developed was essentially a smaller, slower version of the hardware that Web 1.0 was built on. Nothing of the sort was available when Bush was thinking up Memex.
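For a rough sense of that gulf in numbers, here's a quick sketch using the approximate figures above, comparing a single SPARCstation and taking "a few GB" to mean 2GB (so read the ratios as order-of-magnitude at best):

    # Order-of-magnitude ratios between the SDS-940 figures and one hypothetical 1991 SPARCstation 2.
    # All numbers are the rough ones quoted above, so the ratios are only indicative.
    sds_940 = {"clock_hz": 0.1e6, "ram_bytes": 200e3, "disk_bytes": 100e6}
    sparc_2 = {"clock_hz": 40e6,  "ram_bytes": 128e6, "disk_bytes": 2e9}
    for key in sds_940:
        print(key, round(sparc_2[key] / sds_940[key]))   # clock ~400x, RAM ~640x, disk ~20x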

NLS is clearly recognizable, almost 60 years later, for what it is: a computer system. Memex, just as clearly, is a thought experiment, though one with enough detail to make it the next best thing to a demo.

To be sure, even if the difference between NLS hardware and today's is mainly a matter of quantity, the quantities have changed by a lot. Today's hardware is several orders of magnitude beyond the SDS, and there are several orders of magnitude more computers in the world now, and they are connected by networks several orders of magnitude faster than 1200 baud. That all has to make a difference, and it has. I hope to dig into this a little more deeply at some point.

There is another key difference between NLS and Memex. Memex is intended for a single person working alone. NLS is inherently multiuser and connected. While the actual demo takes place in San Francisco, the system itself is running in Menlo Park, about 50km/30mi away and probably about as much of a pain to drive to and from then as now. After a fair bit of introductory material, Engelbart is joined by Bill (English, or possibly Paxton?) in Menlo Park, and the two proceed to edit shared documents together.

In the post on Memex, I argued that this difference is one of the main reasons that as far as I'm concerned Bush did not invent the Web. I wouldn't say that Engelbart's team invented the Web, either, but just as the SDS-940 has much more in common with modern computers than with ENIAC, NLS has much more in common with the modern Web than with Memex.


Along with demonstrating the NLS hardware and software, Engelbart made a point of discussing the project that produced it and the approach behind it. That approach was

  • Empirical: Try things out and see what works
  • Evolutionary ("steepest ascent"): You can't see everything in advance, so take a heuristic approach and look for highest-impact changes at every point
  • Whole system: While different people had different skills and responsibilities, the project itself encompassed hardware, software and anything else needed to produce a working system
  • Bootstrapping (later called "eating your own dogfood"): The people developing the system used the system. Past a certain early point, further development of the system was done in the system itself.
"It's a struggle doing it that way," Engelbart said, "but it's beginning to pay off."

I get the feeling both parts of that were knowing understatements.

What we now call dogfooding is a powerful approach that provides a constant reality check, but with the pitfall that it can skew toward the researcher's use cases. A system that's useful for developing software might or might not be useful for other things. Engelbart appears to have been aware of this hazard. One of his prominent examples is his own shopping/errand list. With its repetitions and loose categorization it looks natural. Maybe it wasn't his real list, but it feels like it could have been.

Besides making the system more relatable by giving an example that the audience would be familiar with, constructing a loosely-organized shopping/errand list on the fly demonstrates the system's ability to deal flexibly with free-form content.

This wasn't a given. I could easily imagine someone trying to demo a more rigidly structured system: "OK, now first I create a shopping list template ... Oh, I forgot about vegetables. Let me go back and add a vegetables section to the template. Now where was I?" as the audience's eyes glaze over. Engelbart just puts a list together. It doesn't hurt that he's a fairly engaging speaker and is intimately familiar with the system, but even taking that into account, it doesn't seem like the system is putting arbitrary barriers in his way.

That's not to say that using NLS would be simple for someone encountering it for the first time. After a while, you come to understand that Engelbart and company have memorized quite a few one-letter commands and established quite a few conventions to help find their way around. This is a classic ease-of-use vs. ease-of-learning tradeoff (it's not always a tradeoff, but I'll leave it at that before I start mumbling things about Pareto optimality).

There are some high-level goals that read a lot like what we would now call mission statements:

  • Improve the effectiveness with which individuals work at intellectual tasks
  • as a subgoal: better use of human capabilities
  • Develop a system-oriented discipline for designing the means by which greater effectiveness is achieved
Again, Engelbart makes a point of calling that last point out. Just as important as producing a usable product is producing a way of developing such products.

NLS is meant to be "an instrument/vehicle helping humans to operate within the domain of complex information structures". While it's not explicit in this statement, the demo makes clear that, unlike Memex, NLS is intended for collaboration. It's also subtly larger in that it aims at "humans" rather than "men of science".

Engelbart puts forth several principles that sound surprisingly modern:
  • "content represents concepts" Note the use of "content". It didn't start life as a marketing term.
  • "structure represents relationships, or human-thought product"
  • said structures are "too complex for direct human study", that is, the computer is taking on some of the weight of dealing with a complex structure
  • there is a "control metalanguage" note use of "meta". Again, not a new term (well, it goes back to Aristotle's Metaphysics at least).

So it's 1968, Doug Engelbart has just demonstrated hypertext, computers are starting to get networked together, a bunch of standards ending in "TP" for "Transfer Protocol" are about to be developed (FTP is announced in 1971) ... so we can expect the Web as We Know It to appear sometime around ... 1991? Wait, what? That's, like, 23 years later.

Audience member Andries (Andy) van Dam's assessment of the impact of the demo was that computing mostly went on unchanged afterwards. Besides writing one of the most important early textbooks in computer graphics, van Dam, along with Xanadu founder Ted Nelson -- and I only just now learned this while researching this post -- developed HES, the Hypertext Editing System, at Brown University in 1967. As far as I can tell this was independent of the NLS work, though clearly the two teams would have known about each other.

Van Dam's assessment notwithstanding, several NLS alumni ended up at Xerox PARC (Palo Alto Research Center), which produced, among other things, the influential object-oriented language Smalltalk and a bitmapped mouse-driven GUI ... and these went nowhere until, so the story goes, Steve Jobs stopped by and basically grabbed the whole concept for the Apple Lisa.

The Lisa also went nowhere for a while, but was later reworked into the Macintosh, which eventually took off. I had the good fortune of stumbling across a demo of a Lisa at school. This being a tech school, the hardware-oriented folks talked the Apple rep into taking the cover off so they could poke around. The Apple rep, though visibly nervous, was generally a good sport. But I digress.

The famous Macintosh ad was in 1984, 16 years after Engelbart's demo, itself 18 years after the project was initiated. We'd like to think we move much faster now, but do we, really, once you measure from comparable points of development (start of project, first major demo, first real product), with error bars representing variability in how long things take? I hope to dig a bit deeper into that as well.



*Though NLS used its modems to allow people to connect to it, computers were already connecting to each other. The ARPANET project was already underway and would be live in a year or two.