I hadn't meant for things to go so quiet here, and it's not just a matter of being busy. I've also been finding it harder to write about "the web", not because I don't want to, but because I'm just not running across as many webby things to write about.
That got me thinking: just what is the web these days? And that in turn got me thinking that the web is, in a way, receding from view, even as it becomes more and more a part of daily life, or, in fact, because it's more and more a part of daily life.
There is still plenty of ongoing work on the technical side. HTML5 is now a thing, and Adobe Flash is officially "end of life" (though there's a bit of a mixed message in that Adobe's site for it still says "Adobe Flash Player is the standard for delivering high-impact, rich Web content." right below the banner that says "Flash Player's end of life is December 31st, 2020"). Microsoft has replaced Internet Explorer with Edge, built on the Chromium engine. Google is working to replace third-party cookies. I realize those are all fairly Google-centric examples, and I don't want to imply that no one else is doing important work. Those were just the first examples that came to mind, for some strange reason.
On the one hand, those are all big developments. Adobe Flash was everywhere. It's hard to say how many web pages used it, but at the peak, there would be on the order of billions of downloads when Adobe pushed a release, because it was in every browser. Internet Explorer was the most-used browser for over a decade, and the standard browser on Windows, which would put its user base in the billions as well (even if some of us only used it to download Chrome). Somewhere around 20% of websites, however many that is, use cookies.
On the other hand, they are all nearly invisible. I can remember a few times, early in the process a couple of years ago, when Chrome wouldn't load some particular website because Flash was disabled, but not enough to cause any real disruption. I'm sure the shift from Explorer to Edge was disruptive to some, but when I set up a laptop for a relative a little while ago, they were much more concerned with being able to check email, write docs or play particular games than with which browser was making that happen. As for cookies, I haven't looked into exactly how they're being replaced, because I don't have to, and I haven't made time to look it up.
Because the web is everywhere, with a huge number of websites and people browsing them, the most important thing is keeping everything running smoothly. Unless you're introducing some really amazing new feature, it's usually bad news if anyone notices that you made some change behind the scenes (whatever you think of Facebook as a company, please spare a thought for the people who had to deal with that outage -- even with a highly-skilled, dedicated team keeping the wheels turning, these things can happen, and when they do, it can be devastating to those involved).
The upshot here is that I don't really have much of interest to say about the technical infrastructure behind the everyday web experience. Besides not having been close to the standards process for several years, I figured out very early that I didn't want to write about the standards and protocols themselves -- there are plenty of people who can do that better than I can -- but about how they appear in the wild. Thus the field notes conceit.
It was interesting to write about, say, Paul Vixie's concerns about DNS security or what copyrights mean in the digital age, but topics like that seem less interesting today. Regardless of the particular threats, the real benchmark of computer security is whether people are willing to put their money on the web -- buy, sell, send money to friends, check their bank statements or retirement accounts, and so forth. That's been the case for a while now, through a combination of security technology and legal protections. Importantly, the technology doesn't have to be perfect, and a good thing, that.
The question of how creators get paid on the web is still shaking out, but on the one hand, I think this is one of those problems that is always shaking out without ever getting definitively resolved, and on the other hand, I'm not sure I have anything significant to add to the discussion.
As much as I don't want to write a purely technical blog, I also don't want to lose sight of the technical end entirely. I'm a geek by training and by nature. The technical side is interesting to me, and it's also where I'm most likely to know something that isn't known to a general audience.
Obviously, a lot of the important discussion about the web these days is about social media, but I don't want to jump too deeply into that pool. Not only is it inhabited by a variety of strange and not-always-friendly creatures, but if I were commenting on it extensively, I'd be commenting on sociology, psychology and similar fields. I muse about those on the other blog, but intermittently conjecturing about what consciousness is or how language works is an entirely different thing from analyzing social media.
Even so, Twitter is one of the top tags here, which is ironic, since I don't have a Twitter account (or at least not one that I use).
My main point on social media was that some of the more utopian ideas about the wisdom of crowds and the self-correcting nature of the web don't tend to hold up in practice. I made that point in the context of Twitter a while ago, in this post in particular. I wasn't the first and I won't be the last. I think it's pretty widely understood today that the web is not the idyllic place some said it would be a few decades ago (not that that kept me from commenting on that very topic in the most recent post before this one).
On the other hand, it might be interesting to look into why the web can be self-correcting, if still not idyllic, under the right circumstances. Wikipedia comes to mind ...
Finally, I've really been trying to keep the annoyances tag down to a dull roar. That might seem a bit implausible, since it's generally the top tag on the list (48 posts and counting), but in my defense it's fairly easy to tell whether something's annoying, as opposed to whether it's related to, say, copyrights, publishing, both or neither, so it doesn't take a lot of deliberation to decide to apply that label. Also, with the web a part of everyday life, there's always something to be annoyed about.
So if you take out "technical stuff that no one notices unless it breaks", "social media critiques", "annoying stuff, unless maybe it's particularly annoying, funny or interesting", along with recusing myself from "hmm ... what's Google up to these days?", what's left?
Certainly something. I haven't stopped posting entirely and I don't plan to. On the other hand, there doesn't seem to be as much low-hanging fruit as there used to be, at least not in the particular orchard I'm wandering through. Some of this, I think, is because the web has changed, as I said up top. Some of it is because my focus has changed. I've been finding the topics on the other blog more interesting, not that I've been exactly prolific there either. Some of it is probably the old adage that if you write every day, there's always something to say, while if you write infrequently, it's hard to get started.
A little while ago, I went through the whole blog from the beginning and made several notes to myself to follow up on, so I may come back to that. In any case, new topics will certainly come up (one just did, after all, about why Wikipedia seems to do much better at self-correcting). I think it's a safe bet, though, that it will continue to be a while between posts. Writing this has helped me understand why, at least.