Thursday, October 28, 2021

Why so quiet?

I hadn't meant for things to go so quiet here, and it's not just a matter of being busy.  I've also been finding it harder to write about "the web", not because I don't want to, but because I'm just not running across as many webby things to write about.

That got me thinking, just what is the web these days?  And that in turn got me thinking that the web is, in a way, receding from view, even as it becomes more and more a part of daily life, or, in fact, because it's more and more a part of daily life.

There is still plenty of ongoing work on the technical side.  HTML5 is now a thing, and Adobe Flash is officially "end of life" (though there's a bit of a mixed message in that Adobe's site for it still says "Adobe Flash Player is the standard for delivering high-impact, rich Web content." right below the banner that says "Flash Player’s end of life is December 31st, 2020").  Microsoft has replaced Internet Explorer with Edge, built on the Chromium engine.  Google is working to replace cookies.  I realize those are all fairly Google-centric examples, and I don't want to imply that no one else is doing important work.  Those were just the first examples that came to mind, for some strange reason.

On the one hand, those are all big developments.  Adobe Flash was everywhere.  It's hard to say how many web pages used it, but at the peak, there would be on the order of billions of downloads when Adobe pushed a release, because it was in every browser.  Internet Explorer was the most-used browser for over a decade, and the standard browser on Windows, which would put its user base in the billions as well (even if some of us only use it to download Chrome).  Somewhere around 20% of web sites, however many that is, use cookies.

On the other hand, they are all nearly invisible.  I can remember a few times, early in the process a couple of years ago, when Chrome wouldn't load some particular website because Flash was disabled, but not enough to cause any real disruption.  I'm sure that the shift from Explorer to Edge was disruptive to some, but when I set up a laptop for a relative a little while ago, they were much more concerned with being able to check email, write docs or play particular games than which browser was making that happen.  As for cookies, I haven't looked into exactly how they're being replaced, because I don't have to and I haven't made time to look it up.

Because the web is everywhere, with a huge number of websites and a huge number of people browsing them, the most important thing is simply to keep everything running smoothly.  Unless you're introducing some really amazing new feature, it's usually bad news if anyone notices that you made some change behind the scenes (whatever you think of Facebook as a company, please spare a thought for the people who had to deal with that outage -- even with a highly-skilled, dedicated team keeping the wheels turning, these things can happen, and it can be devastating to those involved when it does).

The upshot here is that I don't really have much interesting to say about much of the technical infrastructure behind everyday web experience.  Besides not having been close to the standards process for several years,  I figured out very early that I didn't want to write about the standards and protocols themselves -- there are plenty of people who can do that better than I can -- but how they appear in the wild.  Thus the field notes conceit.

It was interesting to write about, say, Paul Vixie's concerns about DNS security or what copyrights mean in the digital age, but topics like that seem less interesting today.   Regardless of the particular threats, the real benchmark of computer security is whether people are willing to put their money on the web -- buy, sell, send money to friends, check their bank statements or retirement accounts, and so forth.  That's been the case for a while now, through a combination of security technology and legal protections.  Importantly, the technology doesn't have to be perfect, and a good thing, that.

The question of how creators get paid on the web is still shaking out, but on the one hand, I think this is one of those problems that is always shaking out without ever getting definitively resolved, and on the other hand, I'm not sure I have anything significant to add to the discussion.


As much as I don't want to write a purely technical blog, I also don't want to lose sight of the technical end entirely.  I'm a geek by training and by nature.  The technical side is interesting to me, and it's also where I'm most likely to know something that isn't known to a general audience.

Obviously, a lot of the important discussion about the web currently is about social media, but I don't want to jump too deeply into that pool.  Not only is it inhabited by a variety of strange and not-always-friendly creatures, but if I were commenting on it extensively, I'd be commenting on sociology, psychology and similar fields.  I muse about those on the other blog, but intermittently conjecturing about what consciousness is or how language works is an entirely different thing from analyzing social media.

Even so, Twitter is one of the top tags here, ironic since I don't have a Twitter account (or at least not one that I use).

My main point on social media was that some of the more utopian ideas about the wisdom of crowds and the self-correcting nature of the web don't tend to hold up in practice.  I made that point in the context of Twitter a while ago, in this post in particular.  I wasn't the first and I won't be the last.  I think it's pretty widely understood today that the web is not the idyllic place some said it would be a few decades ago (not that that kept me from commenting on that very topic in the most recent post before this one).

On the other hand, it might be interesting to look into why the web can be self-correcting, if still not idyllic, under the right circumstances.  Wikipedia comes to mind ...


Finally, I've really been trying to keep the annoyances tag down to a dull roar.  That might seem a bit implausible, since it's generally the top tag on the list (48 posts and counting), but in my defense it's fairly easy to tell if something's annoying or not, as opposed to whether it's related to, say, copyrights, publishing, both or neither, so it doesn't take a lot of deliberation to decide to apply that label.  Also, with the web a part of everyday life, there's always something to be annoyed about.


So if you take out "technical stuff that no one notices unless it breaks", "social media critiques", "annoying stuff, unless maybe it's particularly annoying, funny or interesting", along with recusing myself from "hmm ... what's Google up to these days?", what's left?

Certainly something.  I haven't stopped posting entirely and I don't plan to.  On the other hand, there doesn't seem to be as much low-hanging fruit as there used to be, at least not in the particular orchard I'm wandering through.  Some of this, I think, is because the web has changed, as I said up top.  Some of it is because my focus has changed.  I've been finding the topics on the other blog more interesting, not that I've been exactly prolific there either.  Some of it is probably the old adage that if you write every day, there's always something to say, while if you write infrequently, it's hard to get started.

A little while ago, I went through the whole blog from the beginning and made several notes to myself to follow up, so I may come back to that.  In any case new topics will certainly come up (one just did, after all, about why Wikipedia seems to do much better at self-correcting).  I think it's a safe bet, though, that it will continue to be a while between posts.  Writing this has helped me to understand why, at least.

Saturday, May 1, 2021

Please leave us a 5-star review

It's been long enough that I can't really say I remember for sure, and I can't be bothered to look it up, but as I recall, reviews were supposed to be one of the main ways for the web to correct itself.  I might advertise my business as the best ever, even if it's actually not so good, but not to worry.  The reviewers will keep me honest.  If you're searching for a business, you'll know to trust your friends, or you'll learn which reviewers are worth paying attention to, good information will drive out bad and everyone will be able to make well-informed decisions.

This is actually true, to an extent, but I think it's about the same extent as always.  Major publications try to develop a reputation for objective, reliable reviews, as do some personalities, but then, some also develop a reputation for less-than-objective reviews.  Some, even, may be so reliably un-objective that there's a bit of useful information in what they say after all.  And you can always just ask people you know.

But this is all outside the system of customer reviews that you find on web sites all over the place, whether provided by the business itself, or companies that specialize in reviews.  These I personally don't find particularly useful -- or, if I were feeling geekly, I'd say the signal/noise ratio is pretty low.  It turns out there are a couple of built-in problems with online reviews that were not only predictable, but were predicted at the time.

First, there's the whole question of identity on the internet.  In some contexts, identity is an easy problem: an identity is an email address or a credit or debit account with a bank, or ownership of a particular phone, or something similar that's important to a person in the real world.  Email providers and banks take quite a bit of care to prevent those kinds of identities from being stolen, though of course it does still happen.

However, for the same reason, we tend to be a bit stingy with this kind of identity.  I try hard not to give out my credit card details unless I'm making an actual purchase from a reputable merchant, and if my credit card details do get stolen, that card will get closed and a new one opened for the same account.  Likewise, I try not to hand out my personal email or phone number to just anyone, for whatever good that does.

When it comes to reviews, though, there's no good way to know who's writing.  They might be an actual customer, or an employee of the business in question, or they might be several time zones away writing reviews for money, or they might even be a bot.   Platforms are aware of this, and many seem to do a good job of filtering out bogus reviews, but there's always that lingering doubt.  As with identities in general, the stakes matter.  If you're looking at a local business, the chances are probably good that everyone who's left a review has actually been there, though even then they might still have an axe to grind.  In other contexts, though, there's a lot more reason to try to game the system.

But even if everyone is on the up-and-up and leaving the most honest feedback they can, there are still a few pitfalls.  One is selection bias.  If I've had a reasonably good experience with a business, I'll try to thank the people involved and keep them in mind for future work, or mention them if someone asks, but I generally don't take time to write a glowing review -- and companies that do that kind of work often seem to get plenty of business anyway.

If someone does a really horrible job, or deals dishonestly, though, I might well be in much more of a mood to share my story.  Full disclosure: personally I actually don't tend to leave reviews at all, but it's human nature to be more likely to complain in the heat of the moment than to leave a thoughtful note about a decent experience, or even an excellent experience.  In other words, you're only seeing the opinions of a small portion of people.  That wouldn't be so bad if the portion was chosen randomly, but it's anything but.  You're mostly seeing the opinions of people with strong opinions, and particularly, strong negative opinions.

The result is that reviews tend to cluster toward one end or the other.  There are one-star "THIS PLACE IS TERRIBLE!!!" reviews, there are five-star "THIS PLACE IS THE MOST AWESOME EVER!!!" reviews, and not a lot in between.  A five-point scale with most of the action at the endpoints is really more of a two-point scale.  In effect, the overall rating is the weighted average of the two: the number of one-star reviews plus five times the number of five-star reviews, divided by the total number of reviews.  If the overall rating is close to five, then most of the reviews were 5-star.  If it's 3, it's much more likely that the good and the bad are half-and-half than that most of the reviews are 3-star.
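
To make that arithmetic concrete, here's a quick back-of-the-envelope sketch in Python.  The counts are made up for illustration; the point is just that once everything is a 1 or a 5, the average mostly encodes the mix of raves and pans.

    # Back-of-the-envelope: when reviews cluster at 1 and 5 stars, the
    # average rating mostly just encodes the mix of the two.
    def average_rating(ones, fives):
        """Average of a pool containing only 1-star and 5-star reviews."""
        return (1 * ones + 5 * fives) / (ones + fives)

    print(average_rating(ones=5, fives=95))    # 4.8 -- mostly raves
    print(average_rating(ones=50, fives=50))   # 3.0 -- half raves, half pans
    print(average_rating(ones=95, fives=5))    # 1.2 -- mostly pans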

The reader is left to try to decide why the reviewers have such strong opinions.  Did the car wash do a bad job, or was the reviewer somehow expecting them to change the oil and rotate the tires as well and then get angry when they didn't?  Is the person praising a consultant's integrity actually just their cousin?  Does the person saying that a carpenter did a great job with their shelves actually know much about carpentry or did they just happen to like the carpenter's personality?  If the shelves collapse after a year and a half, are they really going to go back and update their review?  Should they, or should they maybe not store their collection of lead ingots from around the world on a set of wooden shelves?

Specifics can help, but people often don't provide much specific detail, particularly for positive reviews, and when they do, it's not always useful.  If all I see is three five-star reviews saying "So and so was courteous, professional and did great work", I'm not much better off than when I started.  If I see something that starts out with "Their representative was very rude.  They parked their truck in a place everyone in the neighborhood knows not to park.  The paint on the truck was chipped.  Very unprofessional!" I might take what follows with a grain of salt.


There's a difference, I think, between an opinion and a true review.  A true review is aimed at laying out the information that someone else might need to make a decision.  An opinion is just someone's general feeling about something.  If you just ask people to "leave a review", you're going to get a lot more personal impressions than carefully constructed analyses.  Carefully constructing an analysis is work, and no one's getting paid here.

Under the "wisdom of crowds" theory, enough general impressions will aggregate into a complete and accurate assessment.  A cynic would say that this is like hoping that if you put together enough raw eggs, you'll end up with a soufflĂ©, but there are situations where it can actually work (for a crowd, that is, not for eggs).  The problem is that in many cases you don't even have a crowd.  You have a handful of people with their various experiences and opinions.


This all reaches its logical conclusion in the gig economy.  When ride share services first started, I used to think for a bit about what number to give a driver.  "They were pretty good, but I wish they had driven a bit less (or in some cases maybe more) aggressively".  "The car was pretty clean, but there was a bit of a funny smell" or whatever.

Then I started noticing that almost all drivers had 5-star ratings, or close.  The number before the decimal point doesn't really mean anything.  You're either looking at 5.0 or 4.something.  A 4.9 is still a pretty good rating, but a 4.0 rating is actually conspicuously low.  I don't know the exact mechanics behind this, but the numbers speak for themselves.

It's a separate question to what extent we should all be in the business of rating each other to begin with, but I'll let Black Mirror speak to that.

Following all this through, if I give someone a 4-star review for being perfectly fine but not outstanding, I may actually be putting a noticeable dent in their livelihood, and if I give someone 3 stars for being pretty much in the middle, that's probably equivalent to their getting a D on a test.  So anyone who's reasonably good gets five stars, and if they're not that good, well, maybe they were just having a bad day and I'll just skip the rating.  If someone actively put my life in danger, sure, they would get an actual bad rating and I'd see if I could talk to the company, but beyond that ... everyone is awesome.

Whatever the reasons, I think this is a fairly widespread phenomenon.  Reviews are either raves or pans, and anyone or anything with reviews much short of pure raves is operating at a real disadvantage.  Which leads me back to the title.

Podcasts that I listen to, if they mention reviews at all, don't ask "Please leave a review so we can tell what's working and what we might want to improve".  They ask "Please leave a 5-star review".  The implication is that anything less is going to be harmful to their chances of staying in business.  Or at least that's my guess, because I've heard this from science-oriented podcasts and general-interest shows that clearly take care to present their stories as objectively as they can, the kind of folks who might genuinely appreciate a four-star review with a short list of things to work on.

This is a shame.  A five-point scale is pretty crude to begin with, but when it devolves to a two-point scale of horrible/awesome, it's not providing much information at all, pretty much the opposite of the model that I'm still pretty sure people were talking about when the whole ratings thing first started.

Saturday, September 5, 2020

One thing at a time

 As much as I gripe about UX annoyances (and all manner of other annoyances), I really do try to look out for specific ways to improve.  I don't come up with many, most likely because UX is hard and lots of people who are better at it than I am have spent a lot of time on the problem and come up with a lot of good ideas.  Much of the low-hanging fruit has been picked, and so has a lot of the not-so-low-hanging fruit.

However, while grumbling at a particular web page today, I think I hit upon a good rule.  I doubt it's new, because a lot of sites follow it (and see above regarding fruit), but a lot don't, so I'll put it out here anyway, for my vast audience, just in case.

Changing one setting on a page should only change the corresponding thing(s) on that page

For example, say I'm looking at statistics on farm production in US states.  I can rank output by, say, yield per acre, dollar value, amount per capita and dollar value per capita.  I can pick a specific list of states or crops.  I pick corn and soybeans for crops and North and South Dakota, Nebraska, Kansas and Oklahoma for states.  Up comes a nice table, initially sorted alphabetically by state.  I change the sorting order to dollars per capita, from high to low.  So far so good.

Now I decide to add wheat to the set of crops.  On a well-designed page, I will now see the same table, now covering the new set of crops, sorted the same way as before.  On all too many sites, I see the data for corn, beans and wheat, but sorted alphabetically by state, because that's how all tables start life.  I changed one thing -- which crops I'm interested in -- but two things changed: the data being shown and the sort order.  I only wanted the first of those to change.
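
For what it's worth, here's a minimal sketch in Python of what I mean by keeping the page's settings in one place so that changing one doesn't quietly reset another.  The field names are hypothetical, not taken from any real site.

    # A minimal sketch of the rule: keep the view settings together and
    # change only the field the user touched.  Field names are hypothetical.
    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class ViewState:
        crops: tuple = ("corn", "soybeans")
        states: tuple = ("ND", "SD", "NE", "KS", "OK")
        sort_by: str = "state"          # how every table starts life
        descending: bool = False

    view = ViewState()
    # I change the sort order ...
    view = replace(view, sort_by="dollars_per_capita", descending=True)
    # ... then add wheat.  Only the crops change; the sort order survives.
    view = replace(view, crops=view.crops + ("wheat",))
    assert view.sort_by == "dollars_per_capita"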

This is a small example, but I'd be surprised if you haven't run across something similar.  As described, it's a minor annoyance, but as the options get more sophisticated, annoyance turns into unusability.  If I've spent five minutes setting up a graph or chart of, say, crop distribution as a function of latitude, I don't want that all to go away if I decide to include Colorado or Iowa in my set of states.

This is not to say you can't have settings with wider-ranging effects.  If there's a tab on the page for, say, trends in agricultural veterinary medicine, I wouldn't expect my graph of crop production to stick around (though I would very much like it to still be there if I go back to its tab).  That's fine.  I changed one setting, but it's a big setting and the "corresponding things" that need to change are correspondingly big.

Again, this is nothing new.  For example, it fits nicely into considerate software remembers.  Still, it's often useful to find specific examples of more general principles.

Saturday, July 25, 2020

Still here, still annoyed with the internet

Looks like it's been several months since the last post, which has happened before but probably not for quite this long.  I've been meaning to put something up, first about working from home (or WFH as we like to call it), then more about machine learning (or ML as we like to call it), which seems to be going interesting places but probably not as far and fast as some might say.  I probably will get back to at least one of those topics, but so far, having settled into a new routine, I just haven't worked in much time for the blogs.

I have been reading quite a bit, on various topics, a lot of it on my phone.  I've managed to train my news feed to deliver a reasonable mix of nerdy stuff, light entertainment and what's-going-on-in-the-world.  I'm often happy to read the light entertainment in particular, since I get to use my analytical brain plenty between work and writing the occasional analytical blog post.  The only problem with the light reading is the actual reading.

I've always said that writers, and "content creators" in general, need to get paid, and I don't mind having to look at the occasional ad or buy the occasional subscription to support that.  It's just that the actual mechanics of this are getting a bit out of hand.

Generally one of three things happens.  For major outlets, or most of the nerdy stuff, or publications for which I do have a subscription, I click through and read.  Great.

If there's a paywall, I usually see the first paragraph or so, enough to confirm what the article is about, and then a button asking me to join in order to see more.  I pretty much never do, even though I'm fine with the concept and subscriptions are generally pretty cheap, because
  • Dude, I just wanted to read the article and it sure would have been nice to have seen a paywall notice before I clicked through (sometimes they're there, but usually not).
  • I'm leery of introductory rates that quietly start charging you more because you forgot to go back and cancel.
  • And combining the previous two items, I don't really want to dig through the subscription terms and find out how much I'm really paying and what I'm actually paying for.
I'm a bit more amenable to the "You have N free articles left this month" approach, because I get to read the particular article I was interested in and figure out the subscription stuff at my leisure.  I seldom get around to that last part, but nonetheless I think all the subscriptions I've actually bought have been on that basis.  I'm sure there have been theses written about the psychology behind that.

Having re-read the whole blog a while ago, I recall that Xanadu advocated for a similar pay-as-you-go approach.  As far as I could tell from the demo I saw, it would have led to a sort of taxicab-like "meter is running" experience.  This seems even slightly less pleasant than paywalls and subscriptions, but Xanadu could probably have supported either model, in theory.

The more common experience, of course, is ads, particularly in the light entertainment department.  What happens is interesting: You see the ads so much you don't see them, and depending on your level of patience, you might not bother to see the light entertainment either.

Suppose you run across a suitably light-looking title.  Some popular ones are "Learn something new about <your favorite movie, album, artist etc.>" and "N best/worst/most surprising/... Xs".  In either case, there are always two or three paragraphs of things you already know.  "My Cousin the Vampire Chauffeur [not a real movie that I know of] was one of the great hits of the 1980s, starring Moviestar McMoviestarface as the vampire and That One Actor as their best friend.  At first, the friend only thinks it's a little odd that the Chauffeur only drives at night and has removed the rearview mirror from the car, but events take an unexpected turn when ..."  Yep, knew that.  I clicked through on this because I liked that movie so yes, I've seen it.

About that time the whole screen starts to rearrange itself as various ad-things jostle for position.  Often, it all settles back down with the text you were reading still in roughly the same place, but sometimes you have to scroll.  About the same time, a video starts playing across the bottom of the screen.  There's generally a tiny "x" box at the corner to make it go away, but that's a fool's errand.  Another hydra head will regrow to take its place, and there's always the chance you'll accidentally click through instead of dismissing.  Instead, stare steadfastly at the text on the screen and nothing else, secure in the knowledge that the whole "subliminal advertising" thing was most likely overblown.

Finish the paragraph you're on and scroll past the display ad between it and the next paragraph.  With a fair wind and a favorable moon phase, you'll get to the next paragraph.  If not, the game of musical chairs will resume until the new batch of ads have all found places, at which point I generally head for the exit.  But you persevere.  You quickly realize that this paragraph as well is more filler, so you try to scroll to the bottom for the nugget of information you were really after.  You scroll too far, too fast, and land in a column of photos and links for similar articles, some of which you've already read because, well, we're all human, right?

Scroll back up and you find the object of your quest, that last paragraph, derive whatever edification you can from it and hit the back button.  Rather than going back to the news feed, you quite likely go back to a previous version of the page you were reading, and maybe another after that, before ending up back in civilization.  I could write a whole other rant about "Why doesn't the back button just take me back?" but I'm not sure that would improve either my life or yours.

I mean, in the grand scheme of things this is all pretty trivial, but then, in the grand scheme of things so is this blog, so I guess we're even.

Except for ads in the middle of lists-of-N-things that disguise their click-through buttons as "next item" buttons.  Those are pure evil.

So, still here, still annoyed with the internet.

Tuesday, October 29, 2019

Did the internet kill the radio tower?

"Half your advertising budget is wasted.  You just don't know which half."

The other day I turned on my radio on the drive home from work.  There was a breaking news story I was interested in ("breaking" as in, it was actually happening at the time, not "someone told the Chyron writer to make something look important").  I hadn't done that in months.  Years ago, listening to the radio was an integral part of making a cross-country trip, just as reading the Sunday funnies and (lightly) browsing the rest of the newspaper used to be a regular habit.  Even not so long ago, listening to the news on the way home was the default option.

Then came podcasts.  I was a bit late to the game, mostly because I'm somewhat deliberately lazy about adopting new technology, but once I got a suitable app set up to my liking, podcasts rapidly took over.  I could pick out whatever information or entertainment I wanted streamed into my brain, rewind or fast forward as needed and never have to worry about reception.  The only downside is needing to get everything set up nicely before actually starting the car moving.  I'm sure there are apps for that, but as I said, I'm a bit lazy about apps and such.

I know that people still listen to the radio.  Somehow my switching over didn't magically cause everyone else to stop tuning in to their favorite on-air personalities and call-in shows.  But for a certain kind of listener, there's little reason to fiddle with a radio.  Chances are you can livestream your favorite sports events if you like, though then you do have to worry about reception.

Podcasts, livestreams and other online content don't just change things for listeners.  There's a crucial difference for the people creating and distributing the content.  Even if "podcast" deliberately sounds like "broadcast", it's actually a classic example of narrowcasting -- delivering content directly to the "content consumers" based on their particular preferences.

Broadcasting is anonymous.  I send a signal into the ether and whoever picks it up picks it up.  I have no direct way of knowing how many people are listening, much less who.  Obviously this is much more anonymous than the internet.  It also has economic implications.

There are two main ways of paying for content: subscription and advertising.  In either case, it's valuable to know exactly who's on the other end.  Narrowcasting gives very fine-grained information about that, while broadcasting provides only indirect, aggregated information based on surveys or, failing that, the raw data of who's buying advertising and how much they're paying.  Between that and satellite radio's subscription model, is there any room left for broadcast radio?

Probably.  I mean, it hasn't gone away yet, any more than printed books have.

Sure, broadcasters and the people who advertise on broadcast radio don't have detailed information about who's listening to the ads, but that may not matter.  The advertiser just needs to know that spending $X on old-fashioned radio advertising brings in more than $X in business.  The tools for figuring that out have been around since the early days of radio.

If people still find radio advertising effective, the broadcaster just has to know that enough people are still buying it to keep the lights on and the staff paid.  In a lot of cases that staff is shared across multiple physical radio stations anyway (and the shows, I would expect, are sent to those stations over the internet).  In other words, it may be valuable to know in detail who's listening to what, but it's not essential.

On the other hand, if broadcast radio does go away, I probably won't find out about it until I happen to switch my car audio over to it and nothing's there.

Tuesday, July 16, 2019

Space Reliability Engineering

In a previous post on the Apollo 11 mission, I emphasized the role of software architecture, and the architect Margaret Hamilton in particular, in ensuring the success of the Apollo 11 lunar landing.  I stand by that, including the assessment of the whole thing as "awesome" in the literal sense, but as usual there's more to the story.

Since that not-particularly-webby post was on Field Notes, so is this one.  What follows is mostly taken from the BBC's excellent if majestically paced podcast 13 Minutes to the Moon [I hope to go back and recheck the details directly at some point, but searching through a dozen or so hours of podcast is time-consuming and I don't know if there's a transcript available -- D.H.], which in turn draws heavily on NASA's Johnson Space Center Oral History Project.

I've also had a look at Ars Technica's No, a "checklist error" did not almost derail the Apollo 11 mission, which takes issue with Hamilton's characterization of the incident and also credits Hal Laning as a co-author of the Executive portion of the guidance software which ultimately saved the day (to me, the main point Hamilton was making was that the executive saved the day, regardless of the exact cause of the 1202 code).

Before getting too far into this, it's worth reiterating just how new computing was at the time.  The term "software engineer" didn't exist (Hamilton coined it during the project -- Paul Niquette claims to have coined the term "software" itself and I see no reason to doubt him).  There wasn't any established job title for what we now call software engineers.  The purchase order for the navigation computer, which was the very first order in the whole Apollo project, didn't mention software, programming or anything of the sort.  The computer was another piece of equipment to be made to work just like an engine, window, gyroscope or whatever.  Like them it would have to be installed and have whatever other things done to it to make it functional.  Like "programming" (whatever that was).

In a way, this was a feature rather than a bug.  The Apollo spacecraft have been referred to, with some justification, as the first fly-by-wire vehicles.  The navigational computer was an unknown quantity.  At least one astronaut promised to turn the thing off at the first opportunity.  Flying was for pilots, not computers.

This didn't happen, of course.  Instead, as the podcast describes so well, control shifted back and forth between human and computer depending on the needs of the mission at the time, but it was far from obvious at the beginning that this would be the case.

Because the computer wasn't trusted implicitly, but treated as just another unknown to be dealt with -- in other words, another risk to be mitigated -- ensuring its successful operation was seen as a matter of engineering, just like making sure that the engines were efficient and reliable, and not a matter of computer science.  This goes a long way toward explaining the self-monitoring design of the software.

Mitigating the risk of using the computer included figuring out how to make it as foolproof as possible for the astronauts to operate.  The astronauts would be wearing spacesuits with bulky gloves, so they wouldn't exactly be swiping left or right, even if the hardware of the time could have supported it.  Basically you had a numeric display and a bunch of buttons.  The solution was to break the commands down to a verb and a noun (or perhaps more accurately a predicate and argument), each expressed numerically.  It would be a ridiculous interface today.  At the time it was a highly effective use of limited resources [I don't recall the name of the designer who came up with this. It's in the podcast --D.H.].
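
To give a flavor of the idea (and only the flavor -- the code assignments below are illustrative, not checked against the real AGC tables), a verb-noun command scheme is basically a two-number lookup:

    # A toy verb-noun dispatcher: one number for what to do, one for what
    # to do it to.  The assignments here are illustrative only.
    VERBS = {16: "monitor", 37: "change program to"}
    NOUNS = {36: "mission clock", 68: "landing data"}

    def keyed_command(verb, noun):
        return f"{VERBS.get(verb, 'unknown verb')} {NOUNS.get(noun, 'unknown noun')}"

    print(keyed_command(16, 36))    # monitor mission clock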

But the only way to really know if an interface will work is to try it out with real users.  Both the astronauts and the mission control staff needed to practice the whole operation as realistically as possible, including the operation of the computer.  This was for a number of reasons, particularly to learn how the controls and indicators worked, to be prepared for as many contingencies as possible and to try to flush out unforeseen potential problems.  The crew and mission control conducted many of these simulations and they were generally regarded as just as demanding and draining as the real thing, perhaps more so.

It was during one of the simulations that the computer displayed a status code that no one had ever seen before and therefore didn't know how to react to.  After the session was over, flight director Gene Kranz instructed guidance software expert Jack Garman to look up and memorize every possible code and determine what course of action to take when it came up.  This would take a lot of time searching through the source code, with the launch date imminent, but it had to be done and it was.  Garman produced a handwritten list of every code and what to do about it.

As a result, when the code 1202 came up with the final opportunity to turn back fast approaching, capsule communicator (CAPCOM) Charlie Duke was able to turn to guidance controller Steve Bales, who could turn to Garman and determine that the code was OK if it didn't happen continuously.  There's a bit of wiggle room in what constitutes "continuously", but knowing that the code wasn't critical was enough to keep the mission on track.  Eventually, Buzz Aldrin noticed that the code only seemed to happen when a particular radar unit was being monitored.  Mission Control took over the monitoring and the code stopped happening.


I now work for a company that has to keep large fleets of computers running to support services that billions of people use daily.  If a major Google service is down for five minutes, it's headline news, often on multiple continents.  It's not the same as making sure a plane or a spaceship lands safely or a hospital doesn't lose power during a hurricane, but it's still high-stakes engineering.

There is a whole profession, Site Reliability Engineer, or SRE for short, dedicated to keeping the wheels turning.  These are highly-skilled people who would have little problem doing my job instead of theirs if they preferred to.  Many of their tools -- monitoring, redundancy, contingency planning, risk analysis, and so on -- can trace their lineage through the Apollo program.  I say "through" because the concepts themselves are considerably older than space travel, but it's remarkable how many of them were not just employed, but significantly advanced, as a consequence of the effort to send people to the moon and bring them back.

One tool in particular, Garman's list of codes, played a key role at that critical juncture.  Today we would call it a playbook.  Anyone who's been on call for a service has used one (I know I have).
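
In its simplest form a playbook is just a lookup from symptom to action.  Here's a sketch that paraphrases the 1202 story above; a real playbook would have many more entries and a lot more nuance.

    # A playbook reduced to its essence: alarm code in, course of action out.
    PLAYBOOK = {
        1202: "OK to continue, unless it recurs continuously.",
    }

    def consult(code):
        return PLAYBOOK.get(code, "Unknown code -- escalate to the software expert.")

    print(consult(1202))
    print(consult(9999))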



In the end, due to a bit of extra velocity imparted during the maneuver to extract the lunar module and dock it to the command module, the lunar module ended up overshooting its intended landing place.  In order to avoid large boulders and steep slopes in the area they were now approaching, Neil Armstrong ended up flying the module by hand in order to find a good landing spot, aided by a switch to increase or decrease the rate of descent.

The controls were similar to those of a helicopter, except the helicopter was flying sideways through (essentially) a vacuum over the surface of the moon, steered by precisely aimed rocket thrusts while continuing to descend, and was made of material approximately the thickness of a soda can which could have been punctured by a good jab with a ball-point pen.  So not really like a helicopter at all.

The Eagle landed with eighteen seconds of fuel to spare.  It helps to have a really, really good pilot.

Tuesday, April 16, 2019

Distributed astronomy

Recently, news sources all over the place have been reporting on the imaging of a black hole, or  more precisely, the immediate vicinity of a black hole.  The black hole itself, more or less by definition, can't be imaged (as far as we know so far).  Confusing things a bit more, any image of a black hole will look like a black disc surrounded by a distorted image of what's actually in the vicinity, but this is because the black hole distorts space-time due to its gravitational field, not because you're looking at something black.  It's the most natural thing in the world to look at the image and think "Oh, that round black area in the middle is the black hole", but it's not.

Full disclosure: I don't completely understand what's going on here.  Katie Bouman has done a really good lecture on how the images were captured, and Matt Strassler has an also really good, though somewhat long overview of how to interpret all this.  I'm relying heavily on both.

Imaging a black hole in a nearby galaxy has been likened to "spotting a bagel on the moon".  A supermassive black hole at the middle of a galaxy is big, but even a "nearby" galaxy is far, far away.

To do such a thing you don't just need a telescope with a high degree of magnification.  The laws of optics place a limit on how detailed an image you can get from a telescope or similar instrument, regardless of the magnification.  The larger the telescope, the higher the resolution, that is, the sharper the image.  This applies equally well to ordinary optical telescopes, X-ray telescopes, radio telescopes and so forth.  For purposes of astronomy these are all considered "light", since they're all forms of electromagnetic radiation and so all follow the same laws.

Actual telescopes can only be built so big, so in order to get sharper images astronomers use interferometry to combine images from multiple telescopes.  If you have a telescope at the South Pole and one in the Atacama desert in Chile, you can combine their images to get the same resolution you would with a giant telescope that spanned from Atacama to the pole.  The drawback is that since you're only sampling a tiny fraction of the light falling on that area, you have to reconstruct the rest of the image using highly sophisticated image processing techniques.  It helps to have more than two telescopes.  The Event Horizon Telescope project that produced the image used eight, across six sites.
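
To put some rough numbers on the bagel claim (these are my own back-of-the-envelope figures, not anything from the project): resolution goes roughly like wavelength divided by telescope size, and the EHT observes at millimeter wavelengths, so an Earth-sized baseline gets you into bagel-on-the-moon territory while a single dish doesn't come close.

    # Rough diffraction-limit arithmetic (my ballpark numbers, not the
    # project's): angular resolution ~ wavelength / telescope size.
    moon_distance = 3.84e8       # meters
    bagel = 0.10                 # meters across, say
    print(f"bagel on the moon:  {bagel / moon_distance:.1e} radians")       # ~2.6e-10

    wavelength = 1.3e-3          # meters -- roughly the EHT's observing band
    earth_baseline = 1.27e7      # meters, about Earth's diameter
    print(f"Earth-sized 'dish': {wavelength / earth_baseline:.1e} radians")  # ~1.0e-10

    single_dish = 30.0           # meters, a large single radio dish
    print(f"30 m dish:          {wavelength / single_dish:.1e} radians")     # ~4.3e-5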

Even putting together images from several telescopes, you don't have enough information to precisely know what the full image really would be and you have to be really careful to make sure that the image you reconstruct shows things that are actually there and not artifacts of the processing itself (again, Bouman's lecture goes into detail).  In this case, four teams worked with the raw data independently for seven weeks, using two fundamentally different techniques, to produce the images that were combined into the image sent to the press.  In preparation for that, the image processing techniques themselves were thoroughly tested for their ability to recover images accurately from test data.  All in all, a whole lot of good, careful work by a large number of people went into that (deliberately) somewhat blurry picture.

All of this requires very precise synchronization among the individual telescopes, because interferometry only works for images taken at the same time, or at least to within very small tolerances (once again, the details are ... more detailed).  The limiting factor is the frequency of the light used in the image, which for radio telescopes is on the order of gigahertz.  This means the signal at each telescope has to be sampled on the order of a billion times a second.  The total image data ran into the petabytes (quadrillions of bytes), with the eight telescopes producing hundreds of terabytes (that is, hundreds of trillions of bytes) each.

That's a lot of data, which brings us back to the web (as in "Field notes on the ...").  I haven't dug up the exact numbers, but accounts in the popular press say that the telescopes used to produce the black hole images produced "as much data as the LHC produces in a year", which in approximate terms is a staggering amount of data.  A radio interferometer comprising multiple radio telescopes at distant points on the globe is essentially an extremely data-heavy distributed computing system.

Bear in mind that one of the telescopes in question is at the south pole.  Laying cable there isn't a practical option, nor is setting up and maintaining a set of radio relays.  Even satellite communication is spotty.  According to the Wikipedia article, the total bandwidth available is under 10MB/s (consisting mostly of a 50 megabit/second link), which is nowhere near enough for the telescope images, even if stretched out over days or weeks.  Instead, the data was recorded on physical media and flown back to the site where it was actually processed.

I'd initially thought that this only applied to the south pole station, but in fact all six sites flew their data back rather than try to send it over the internet (just to throw numbers out, receiving a petabyte of data over a 10GB/s link would take about a day).   The south pole data just took longer because they had to wait for the antarctic summer.
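
Running the numbers (using the rough figures above, not anything official) makes it pretty clear why the disks flew:

    # Why the data flew home: rough transfer times at the bandwidths
    # mentioned above, with one petabyte standing in for the total.
    PETABYTE = 1e15              # bytes

    def transfer_days(num_bytes, bytes_per_second):
        return num_bytes / bytes_per_second / 86_400

    print(transfer_days(PETABYTE, 10e9))      # ~1.2 days on a 10 GB/s link
    print(transfer_days(PETABYTE, 6.25e6))    # ~1,850 days on a 50 Mbit/s link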

Not sure if any carrier pigeons were involved.

Thursday, April 4, 2019

Martian talk

This morning I was on the phone with a customer service representative about emails I was getting from an insurance company and which were clearly meant for someone else with a similar name (fortunately nothing earth-shaking, but still something this person would probably like to know about).  As is usually the case, the reply address was a bit bucket, but there were a couple of options in the body of the email: a phone number and a link.  I'd gone with the phone number.

The customer service rep politely suggested that I use the link instead.  I chased the link, which took me to a landing page for the insurance company.  Crucially, it was just a plain link, with nothing to identify where it had come from*.  I wasn't sure how best to try to get that across to the rep, but I tried to explain that usually there are a bunch of magic numbers or "hexadecimal gibberish" on a link like that to tie it back to where it came from.

"Oh yeah ... I call that 'Martian talk'," the rep said.

"Exactly.  There's no Martian talk on the link.  By the way, I think I'm going to start using that."

We had a good laugh and from that point on we were on the same page.  The rep took all the relevant information I could come up with and promised to follow up with IT.

What I love about the term 'Martian talk' is that it implies that there's communication going on, but not in a way that will be meaningful to the average human, and that's exactly what's happening.

And it's fun.

I'd like to follow up at some point and pull together some of the earlier posts on Martian talk -- magic numbers, hexadecimal gibberish and such -- but that will take more attention than I have at the moment.


* From a strict privacy point of view there would be plenty of clues, but there was nothing to tie the link to a particular account for that insurance company, which was what we needed.

Thursday, January 3, 2019

Hats off to New Horizons

A few years ago, around the time of the New Horizons encounter with Pluto (or if you're really serious about the demotion thing, minor planet 134340 Pluto), I gave the team a bit of grief over the probe having to go into "safe mode" with only days left before the flyby, though I also tried to make clear that this was still engineering of a very high order.

Early on New Year's Day (US Eastern time), New Horizons flew by a Kuiper Belt object nicknamed Ultima Thule (two syllables in Thule: THOO-lay).  I'm posting to recognize the accomplishment, and this post will be grief-free.

The Ultima Thule encounter was much like the Pluto encounter with a few minor differences:
  • Ultima Thule is much smaller.  Its long axis is about 1-2% of Pluto's diameter
  • Ultima Thule is darker, reflecting about 10% of the light that reaches it, compared to around 50% for Pluto. Ultima Thule is about as dark as potting soil.  Pluto is more like old snow.
  • Ultima Thule is considerably further away (about 43 AU from the sun as opposed to about 33 AU for Pluto at the time of encounter -- an AU is the average distance from the Sun to the Earth)
  • New Horizons passed much closer to Ultima Thule than it did to Pluto (3,500 km vs. 12,500 km).  This required more accurate navigation and to some extent increased the chances of a disastrous collision with either Ultima Thule or, more likely, something near it that there was no way to know about.  At 50,000 km/h, even a gravel-sized chunk would cause major if not fatal damage.
  • Because Ultima Thule is further away, radio signals take proportionally longer to travel between Earth and the probe, about six hours vs. about four hours.
  • Because Ultima Thule is much smaller, much darker and significantly further away, it's much harder to spot from Earth.  Before New Horizons, Pluto itself was basically a tiny dot, with a little bit of surface light/dark variation inferred by taking measurements as it rotated.  Ultima Thule was nothing more than a tinier dot, and a hard-to-spot dot at that.
  • We've had decades to work out exactly where Pluto's orbit goes and where its moons are.  Ultima Thule wasn't even discovered until after New Horizons was launched.  Until a couple of days ago we didn't even know whether it had moons, rings or an atmosphere (it appears to have none).  [Neither Pluto nor Ultima Thule is a stationary object, just to add that little additional degree of difficulty.  The Pluto flyby might be considered a bit more difficult in that respect, though.  Pluto's orbital speed at the time of the flyby was around 20,000 km/h, while Ultima Thule's is closer to 16,500 km/h.  I'd think this would mainly affect the calculations for rotating to keep the cameras pointed, so it probably doesn't make much practical difference.]
In both cases, New Horizons had to shift from pointing its radio antenna at Earth to pointing its cameras at the target.  As it passes by the target at around 50,000 km/h, it has to rotate to keep the cameras pointed correctly, while still out of contact with Earth (which is light-hours away in any case).  It then needs to rotate its antenna back toward Earth, "phone home" and start downloading data at around 1,000 bits per second.  Using a 15-watt transmitter slightly more powerful than a CB radio.  Since this is in space, rotating means firing small rockets attached to the probe in a precise sequence (there are also gyroscopes on New Horizons, but they're not useful for attitude changes).

So, a piece of cake, really.

Seriously, though, this is amazing engineering and it just gets more amazing the more you look at it.  The Pluto encounter was a major achievement, and this was significantly more difficult in nearly every possible way.

So far there don't seem to be any close-range images of Ultima Thule on the mission's web site (see, this post is actually about the web after all), but the team seems satisfied that the flyby went as planned and more detailed images will be forthcoming over the next 20 months or so.  As I write this, New Horizons is out of communication, behind the Sun from Earth's point of view for a few days, but downloads are set to resume after that.  [The images started coming in not long after this was posted, of course --D.H. Jul 2019]
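
The 20 months or so is, at least in part, simple arithmetic at 1,000 bits per second.  The total below is a made-up round number, not the mission's actual figure, but it gives a feel for the scale:

    # Downlink arithmetic at ~1,000 bits per second.
    BITS_PER_SECOND = 1_000
    bytes_per_day = BITS_PER_SECOND / 8 * 86_400        # ~10.8 MB per day

    assumed_flyby_data = 6e9                            # 6 GB, assumed
    print(f"{assumed_flyby_data / bytes_per_day / 30:.0f} months to send it all")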

Thursday, December 13, 2018

Common passwords are bad ... by definition

It's that time of the year again, time for the annual lists of worst passwords.  Top of at least one list: 123456, followed by password.  It just goes to show how people never change.  Silly people!

Except ...

A good password has a very high chance of being unique, because a good password is selected randomly from a very large space of possible passwords.  If you pick your password at random from a trillion possibilities*, then the odds that a particular person who did the same also picked your password are one in a trillion, the odds that one of a million other such people picked your password are about one in a million, as are the odds that any particular two people picked the same password.  If a million people used the same scheme as you did, there's a good chance that some pair of them accidentally share a password, but almost certainly almost all of those passwords are unique.
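
That "good chance" is just the birthday problem in disguise.  A quick sanity check in Python, with the same round numbers:

    # Birthday-problem check: a million people, each picking uniformly at
    # random from a trillion possibilities.
    import math

    people = 1_000_000
    space = 1_000_000_000_000

    expected_pairs = people * (people - 1) / (2 * space)          # ~0.5
    p_some_collision = 1 - math.exp(-expected_pairs)              # ~39%

    print(f"expected shared pairs: {expected_pairs:.2f}")
    print(f"chance of at least one shared password: {p_some_collision:.0%}")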

If you count up the most popular passwords in this idealized scenario of everyone picking a random password out of a trillion possibilities, you'll get a fairly tedious list:
  • 1: some string of random gibberish, shared by two people
  • 2 - 999,999: Other strings of random gibberish, 999,998 in all
Now suppose that seven people didn't get the memo.  Four of them choose 123456 and three of them choose password.  The list now looks like
  • 1: 123456,  shared by four people
  • 2: password,  shared by three people
  • 3: some string of random gibberish, shared by two people
  • 4-999,994:  Other strings of random gibberish, 999,991 in all
Those seven people are pretty likely to have their passwords hacked, but overall password hygiene is still quite good -- 99.9993% of people picked a good password.  It's certainly better than if 499,999 people picked 123456 and 499,998 picked password, two happened to pick the same strong password and the other person picked a different strong password, even though the resulting rankings are the same as above.

Likewise, if you see a list of 20 worst passwords taken from 5 million leaked passwords, that could mean anything from a few hundred people having picked bad passwords to everyone having done so.  It would be more interesting to report how many people picked popular passwords as opposed to unique ones, but that doesn't seem to make its way into the "wow, everyone's still picking bad passwords" stories.

From what I was able to dig up, that portion is probably around 10%.  Not great, but not horrible, and probably less than it was ten years ago.  But as long as some people are picking bad passwords, the lists will stay around and the headlines will be the same, regardless of whether most people are doing a better job.

(I would have provided a link for that 10%, but the site I found it on had a bunch of broken links and didn't seem to have a nice tabular summary of bad passwords vs other passwords from year to year, so I didn't bother)

*A password space of a trillion possibilities is actually pretty small.  Cracking passwords is roughly the same problem as the hash-based proof-of-work that cryptocurrencies use.  Bitcoin is currently doing around 100 million trillion hashes per second, or a trillion trillion hashes every two or three hours.  The Bitcoin network isn't trying to break your password, but it'll do for estimating purposes.  If you have around 100 bits of entropy, for example if you choose a random sequence of fifteen characters drawn from the capital and lowercase letters, the digits and 30 special characters, it would take a password-cracking network comparable to the Bitcoin network around 400 years to guess your password.  That's probably good enough.  By that time, password cracking will probably have advanced far beyond where we are and, who knows, maybe we'll have stopped using passwords by then.
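
Checking that footnote's arithmetic (the alphabet size and hash rate are the same ballpark figures as above):

    # Entropy of 15 characters drawn from upper/lower case letters, digits
    # and 30 specials, and time to exhaust 2**100 guesses at the Bitcoin
    # network's rough hash rate.
    import math

    alphabet = 26 + 26 + 10 + 30
    length = 15
    print(f"{length * math.log2(alphabet):.0f} bits of entropy")    # ~98, call it 100

    guesses_per_second = 1e20       # "100 million trillion hashes per second"
    seconds = 2**100 / guesses_per_second
    print(f"{seconds / (365.25 * 86_400):.0f} years to try 2**100 guesses")  # ~402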

Saturday, December 8, 2018

Software cities

In the previous post I stumbled on the idea that software projects are like cities.  The more I thought about it, I said, the more I liked the idea.  Now that I've had some more time to think about it, I like the idea even more, so I'd like to try to draw the analogy out a little bit further, ideally not past the breaking point.

What first drew me to the concept was realizing that software projects, like cities, are neither completely planned nor completely unplanned.  Leaving aside the question of what level of planning is best -- which surely varies -- neither of the extremes is likely to actually happen in real life.

If you try to plan every last detail, inevitably you run across something you didn't anticipate and you'll have to adjust.  Maybe it turns out that the place you wanted to put the city park is prone to flooding, or maybe you discover that the new release of some platform you're depending on doesn't actually support what you thought it did, or at least not as well as you need it to.

Even if you could plan out every last detail of a city, once people start living in it, they're going to make changes and deviate from your assumptions.  No one actually uses that beautiful new footbridge, or if they do, they cut across a field to get to it and create a "social trail" thereby bypassing the carefully designed walkways.  People start using an obscure feature of one of the protocols to support a use case the designers never thought of.  Cities develop and evolve over time, with or without oversight, and in software there's always a version 2.0 ... and 2.1, and 2.2, and 2.2b (see this post for the whole story).

On the other hand, even if you try to avoid planning and let everything "just grow", planning happens anyway.  If nothing else, we codify patterns that seem to work -- even if they arose organically with no explicit planning -- as customs and traditions.

In a distant time in the Valley, I used to hear the phrase "paving the cow paths" quite a bit.  It puzzled me at first.  Why pave a perfectly good cow path?  Cattle are probably going to have a better time on dirt, and that pavement probably isn't going to hold up too well if you're marching cattle on it all the time ...  Eventually I came to understand that it wasn't about the cows.  It was about taking something that people had been doing already and upgrading the infrastructure for it.  Plenty of modern-day highways (or at least significant sections of them) started out as smaller roads which in turn used to be dirt roads for animals, foot traffic and various animal-drawn vehicles.

Upgrading a road is a conscious act requiring coordination across communities all along the roadway.  Once it's done, it has a significant impact on communities on the road, which expect to benefit from increased trade and decreased effort of travel, but also communities off the road, which may lose out, or may alter their habits now that the best way to get to some important place is by way of the main road and not the old route.  This sort of thing happens both inside and outside cities, but for the sake of the analogy think of ordinary streets turning into arterials or bypasses and ring roads diverting traffic around areas people used to have to cross through.

One analogue of this in software is standards.  Successful standards tend to arise when people get together to codify existing practice, with the aim of improving support for things people were doing before the standard, just in a variety of similar but still needlessly different ways.  Basically pick a route and make it as smooth and accessible as possible.  This is a conscious act requiring coordination across communities, and once it's done it has a significant impact on the communities involved, and on communities not directly involved.

This kind of thing isn't always easy.  A business district thrives and grows, and more and more people want to get to it.  Traffic becomes intolerable and the city decides to develop a new thoroughfare to carry traffic more efficiently (thereby, if it all works, accelerating growth in the business district and increasing traffic congestion ...).  Unfortunately, there's no clear space for building this new thoroughfare.  An ugly political fight ensues over whose houses should get condemned to make way and eventually the new road is built, cutting through existing communities and forever changing the lives of those nearby.

One analog of this in software is the rewrite.  A rewrite almost never supports exactly the same features as the system being rewritten.  The reasons for this are probably material for a separate post,  but the upshot is that some people's favorite features are probably going to break with the rewrite, and/or be replaced by something different that the developers believe will solve the same problem in a way compatible with the new system.  Even if the developers are right about this, which they often are, there's still going to be significant disruption (albeit nowhere near the magnitude of having one's house condemned).


Behind all this, and tying the two worlds of city development and software development together, is culture.  Cities have culture, and so do major software projects.  Each has its own unique culture, but, whether because the same challenges recur over and over again, leading to similar solutions, or because some people are drawn to large communities while others prefer smaller, the cultures of different cities tend to have a fair bit in common, perhaps more in common with each other than with life outside them.  Likewise with major software projects.

Cities require a certain level of infrastructure -- power plants, coordinated traffic lights, parking garages, public transport, etc. -- that smaller communities can mostly do without.  Likewise, a major software project requires some sort of code repository with version control, some form of code review to control what gets into that repository, a bug tracking system and so forth.  This infrastructure comes at a price, but also with significant benefits.  In a large project as in a large city, you don't have to do everything yourself, and at a certain point you can't do everything yourself.  That means people can specialize, and to some extent have to specialize.  This both requires a certain kind of culture and tends to foster that same sort of culture.


It's worth noting that even large software projects are pretty small by the standards of actual cities.  Somewhere around 15,000 people have contributed to the git repository for the Linux kernel.  There appear to be a comparable (but probably smaller) number of Apache committers.  As with anything else, some of these are more active in the community than others.  On the corporate side, large software companies have tens of thousands of engineers, all sharing more or less the same culture.

Nonetheless, major software projects somehow seem to have more of the character of large cities than one might think based on population.  I'm not sure why that might be, or even if it's really true once you start to look more closely, but it's interesting that the question makes sense at all.