Sunday, December 13, 2009

Additive change considered useful

This post is going to be a bit more hard-core geekly than most, but as with previous such posts, I'm hoping the main point will be clear even if you replace all the geek terms with "peanut butter" or similar.

Re-reading my post on Tog's predictions from 1994, I was struck by something that I'd originally glossed over. The prediction in question was:
The three major operating systems in use today, DOS/Windows, Macintosh, and Unix, were all launched in the seventies. They are old, tired, and creaking under the weight of today's tasks and opportunities. A new generation of object-oriented systems is waiting in the wings.
My specific response was that object-oriented programming has indeed become prominent, but that for the most part object-oriented applications run on top of the same three operating systems. I also speculated, generally, that such predictions tend to fail because they focus too strongly on present trends and assume that they will continue to the logical conclusion of sweeping everything else aside. But in fact, trends come and go.

Fair enough, and I still believe that goes a long way towards explaining how people can consistently misread the implications of trends. But why doesn't the new actually sweep aside the old, even in a field like software, where everything is just bits in infinitely modifiable memory?

The particular case of object-oriented operating systems gives a good clue as to why, and the clue is in the phrase I originally glossed over: object-oriented operating systems. I instinctively referred to object-oriented programming instead, precisely because object-oriented operating systems didn't supplant the usual suspects, old and creaky though they might be.

The reason seems pretty simple to me: Sweeping aside the old is more trouble than it's worth.

The operating system is the lowest level of a software platform. It's responsible for making a collection of hard drives and such into a file system, sending the right bits to the video card to put images on the screen, telling the memory management unit what goes where, dealing with processor interrupts and scheduling, and other such finicky stuff. It embodies not just the data structures and algorithms taught in introductory OS classes, but, crucially, huge amounts of specific knowledge about CPUs, memory management units, I/O buses, hundreds of models of video cards, keyboards, mice, etc., etc., etc.

For example, a single person was able to put together the basic elements of the Linux kernel, but its modern incarnation runs to millions of lines and is maintained by a dedicated core team and who knows how many contributors in all. And that's just the kernel; the outer layers are even bigger.

It's all written in an unholy combination of assembler and C with a heavy dose of magical functions and macros you won't find anywhere else, and it takes experience and a particular kind of mind to hack on it in any significant way. I don't have the particulars on Mac OS and DOS/Windows, but the basics are the same: Huge amounts of specialized knowledge distributed through millions of lines of code.
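To give a flavor of those kernel-only idioms, here is a simplified, self-contained sketch in the spirit of the kernel's container_of macro, which recovers a pointer to an enclosing structure from a pointer to one of its embedded members. The real definition lives in the kernel headers and adds extra type-checking magic; the device and list structures below are made up purely for illustration.

    /* Simplified sketch in the spirit of the kernel's container_of macro.
     * Not actual kernel code; the real macro adds type checks. */
    #include <stddef.h>
    #include <stdio.h>

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    /* A generic node embedded in a larger, driver-specific structure. */
    struct list_node { struct list_node *next; };

    struct my_device {
        int id;
        struct list_node node;   /* embedded, not pointed to */
    };

    int main(void) {
        struct my_device dev = { .id = 42 };
        struct list_node *n = &dev.node;   /* all the generic code is handed... */
        struct my_device *d = container_of(n, struct my_device, node);
        printf("recovered device id: %d\n", d->id);  /* ...yet we get the device back */
        return 0;
    }

Multiply that kind of trick by a few thousand and you start to see why kernel hacking is a specialty of its own.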

So, while it might be nice to have that codebase written in your favorite OO language (leaving aside that an OO platform enables, but certainly does not automatically bring about, improvements in code quality), why would anyone in their right mind want to rewrite millions of lines of tested, shipping code? As far as function is concerned, it ain't broke, and where it is broke, it can be fixed for much, much less than the cost of rewriting. Sure, the structure might not be what you'd want, and sure, that has incremental costs, but so what? The change just isn't worth it*.

So instead, we have a variety of ways to write desktop applications, some of them OO but all running on one of the old standbys.


Except ...

An application developer would rather not see an operating system. You don't want to know what exact system calls you need to open a TCP connection. You just want a TCP connection. To this end, the various OS vendors also supply standard APIs that handle the details for you. Naturally, each vendor's API is tuned toward the underlying OS, leading to all manner of differences, some essential, many not so essential. If only there were a single API to deal with no matter which platform you're on.
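To make that concrete, here is roughly what "just give me a TCP connection" expands to when you talk to a POSIX-flavored OS more or less directly. This is a hedged sketch with error handling pared to the bone, and the Windows version is a different incantation again.

    /* Rough POSIX sketch of opening a TCP connection "by hand".
     * Error handling is abbreviated; a real program needs more. */
    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int open_tcp(const char *host, const char *port) {
        struct addrinfo hints, *res, *p;
        int fd = -1;

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6, whatever resolves */
        hints.ai_socktype = SOCK_STREAM;  /* TCP */

        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;

        for (p = res; p != NULL; p = p->ai_next) {
            fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
            if (fd < 0)
                continue;
            if (connect(fd, p->ai_addr, p->ai_addrlen) == 0)
                break;                    /* connected */
            close(fd);
            fd = -1;
        }

        freeaddrinfo(res);
        return fd;  /* -1 on failure */
    }

A higher-level API boils all of that down to a single "connect me to this host and port" call and keeps the platform-specific details where you don't have to look at them.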

There have been many attempts at such a lingua franca over the years. One of the more prominent ones is Java's JVM, of course. While it's not quite the "write once, run anywhere" magic bullet it's sometimes been hyped to be, it works pretty well in practice. And it's OO.

And it has been implemented on bare metal, notably on the ARM architecture. If you're running on that -- and if you're writing an app for a cell phone you may well be** -- you're effectively running on an OO operating system [Or nearly so. The JVM, which talks to the actual hardware, isn't written in Java, but everything above that is] [Re-reading, I don't think I made it clear that in the usual Java setup the JVM relies on the underlying OS to talk to the hardware, making it a skin over a presumably non-OO OS. In the ARM case, you have an almost entirely OO platform from the bare metal up, the exception being the guts of the JVM].

Why did this work? Because ARM was a new architecture. There was no installed base of millions of users and there weren't hundreds of flavors of peripherals to deal with. Better yet, a cell phone is not an open device. You're not going to go out and buy some new video card for it. The first part gives room to take a new approach to the basic OS task of talking to the hardware. The second makes it much more tractable to do so.

What do the two cases, of Java sitting on existing operating systems in the desktop world but on the bare metal in the cell phone world, have in common? In both cases the change has been additive. The existing operating systems were not swept away because in the first case it would have been madness and in the second case there was nothing to sweep away.



* If you're curious, the Linux kernel developers give detailed reasons why they don't use C++ (Linus's take is particularly caustic). Whether or not we choose to count C++ as an OO language, the discussion of the costs and benefits is entirely valid. Interestingly, one of the points is that significant portions of the kernel are object-oriented, even though they're written in plain C.
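For the curious, the object-oriented flavor in plain C comes largely from structs full of function pointers, the kernel's moral equivalent of vtables. The toy below is loosely inspired by the kernel's file_operations table; it is not actual kernel code, and the names are invented for illustration.

    /* Toy sketch of "object-oriented C": a table of function pointers
     * (the "vtable") plus per-object state. Loosely in the spirit of the
     * kernel's file_operations; not real kernel code. */
    #include <stdio.h>

    struct device;

    struct device_ops {                 /* the "virtual methods" */
        int (*read)(struct device *dev, char *buf, int len);
        int (*write)(struct device *dev, const char *buf, int len);
    };

    struct device {                     /* the "object" */
        const char *name;
        const struct device_ops *ops;   /* which implementation to dispatch to */
    };

    static int null_read(struct device *dev, char *buf, int len) {
        (void)dev; (void)buf; (void)len;
        return 0;                       /* nothing to read, /dev/null style */
    }

    static int null_write(struct device *dev, const char *buf, int len) {
        (void)dev; (void)buf;
        return len;                     /* pretend we wrote everything */
    }

    static const struct device_ops null_ops = { null_read, null_write };

    int main(void) {
        struct device dev = { "null0", &null_ops };
        char buf[16];
        printf("%s: read %d, wrote %d\n", dev.name,
               dev.ops->read(&dev, buf, (int)sizeof buf),
               dev.ops->write(&dev, "hello", 5));
        return 0;
    }

Swap in a different ops table and the same calling code drives a different device, which is about as object-oriented as it gets without ever leaving C.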

** I wasn't able to run down clear numbers on how many cell phones in actual use run with this particular combination. I believe it's a lot.

2 comments:

David Hull said...

Note to self: this is relevant to the recent posts on agile/waterfall

David Hull said...

Also the Linux link looks broken