The Semantic Web: The basic idea, from Tim Berners-Lee's Weaving the Web, is that "[m]achines become capable of analyzing all the data on the Web - the content, links, and transactions between people and computers." There are any number of refinements, restatements and variations, and there is probably more than the usual danger of the term being applied to anything and everything, but that's the concept in a nutshell, straight from the horse's mouth (now there's an image).
This is really material for several posts (or books), but my quick take is that the web will indeed gradually become more machine-understandable. Presumably we'll know more precisely what that means when we see it.
I'm not sure whether that will happen more because data becomes more structured or because computers get better at extracting latent structure from not-deliberately-structured data. Either way, I don't believe we need anywhere near all data on the web to be machine-understood in order to benefit, and conversely, I'm not sure to what extent all of it ever will be machine-understandable. Is everything on the web human-understandable?
Artificial Intelligence: Well. What would that be? AI is whatever we don't understand how to do yet. Not so long ago a black box that you type a few words into and get back relevant documents would have been AI. Now it's a search engine. In the context of the web, AI will be things like non-trivial image processing (find me pictures of mountains regardless of whether someone tagged them "mountain") or automatic translation.
(Translation seems to be slowly getting better. The sentence above, round-tripped by way of Spanish with a popular translation engine, came back as "In the context of the fabric, the AI will be things like the process of image non-trivial (encuéntreme the mountain pictures without mattering if somebody marked with label "mountain to them") and the automatic translation". Believe it or not, this looks to be an improvement over, say, a year ago.)

The article mentions cellular automata and neural networks, two incarnations of massively parallel computing. I tend to think the technology matters much less than understanding the problem.
It took quite a while to figure out that playing chess is (relatively) easy and walking is fiendishly difficult (particularly if you're supposed to see where you're walking). It also took a while to figure out that matching up raw words and looking at the human-imposed structure of document links works better than trying to "understand" documents in any deep sense. I call this theme "dumb is smarter" and one of these days I'll round up a good list of examples.
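To make the "dumb is smarter" point concrete, here's a toy sketch in Python. Everything in it is invented for illustration: it ranks documents by nothing deeper than raw word overlap with the query, boosted by a crude inbound-link count, with no "understanding" anywhere in sight.

```python
# Toy illustration of "dumb is smarter": rank documents by raw word
# overlap with the query, boosted by inbound link counts. All data
# here is made up for the example.
docs = {
    "a": {"text": "mountain hiking trails and mountain weather", "links_in": 2},
    "b": {"text": "deep semantic analysis of alpine literature", "links_in": 1},
    "c": {"text": "mountain photos from last summer", "links_in": 7},
}

def score(query, doc):
    words = doc["text"].split()
    overlap = sum(words.count(term) for term in query.split())
    return overlap * (1 + doc["links_in"])  # crude link-based boost

ranked = sorted(docs, key=lambda name: score("mountain", docs[name]), reverse=True)
print(ranked)  # ['c', 'a', 'b']
```

Real search engines are vastly more sophisticated, of course, but the flavor is the same: shallow signals, cleverly combined.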
As the article points out, AI and the semantic web are related. One way to look at it: A machine that could "understand" the web as well as a human would be a de facto AI.
Virtual worlds: In the hardcore version, we all end up completely virtual beings, our every sensory input supplied electronically. Or perhaps we no longer physically exist at all. I'm not willing to rule out this sort of thing, or at least the first version, for the still-fairly-distant future, but in the near term there are some obstacles.
I've argued that our senses are (probably) dominated by sight and sound and that available bandwidth is more or less enough to saturate those by now. But it's pretty easy to fake out the eyes and ears. Faking out the vestibular sense or the kinesthetic senses may well require surgery. Even smell has proved difficult. So the really fully immersive virtual world is a ways away and bandwidth is not the problem.
In the meantime, as the article points out, lots of interesting stuff is going on, both in creating artificial online worlds and in making the physical world more accessible online. Speaking for myself, other than dipping my toes into a MUD several years back, I'm not virtualized to any significant degree, but Google Earth is one of my personal favorite timesinks.
Interestingly, William Gibson himself has done a reading in Second Life. Due to bandwidth limitations, it was a fairly private affair. Gibson's take:
"I think what struck me most about it was how normal it felt. I was expecting it to be memorably weird, and it wasn't," he says. "It was just another way of doing a reading."I think this is an example of the limitations imposed by the human element of the web. We can imagine a lot of weird stuff, but we can only deal with so much weirdness day to day.
Gibson also argues that good old-fashioned black-marks-on-a-white-background is a pretty good form of virtual reality, using the reader's imagination as a rendering engine. I tend to agree.
Mobile: I've already raved a bit about a more-mobile web experience. To me mobile computing is more about seamlessness than the iPhone or any particular device. Indeed, it's a lot about not caring which particular device(s) you happen to be using at a given time or where you're using them.
Attention Economy: "Paying attention" is not necessarily just a metaphor. The article references a good overview you may want to check out if the term is not familiar.
OK, we have to pay for all this somehow, and it's pretty clear the "you own information by owning a physical medium" model that worked so well for centuries is breaking down. But if no one pays people to create content, a lot less will be created (hmm ... I'm not getting paid to write this).
Because we humans can only process so much information, and there's so much information out there, our attention is relatively scarce and therefore likely to be worth something. Ultimately it's worth something at least in part because what we pay attention to will influence how we spend money on tangible goods or less-tangible services. So we should develop tools to make that explicit and to reduce the friction in the already-existing market for attention.
My take is that this will happen, and is happening, more due to market forces than to specific efforts. That doesn't mean that such efforts are useless, just that markets largely do what they're going to do. They make the waves, we ride them and build breakwaters here and there to mitigate their worst effects.
Web Sites as Web Services: The idea here is that information on web sites will become more easily accessible programmatically and generally more structured. This is one path to the Semantic Web. It's already happening and I have no doubt it will happen more. A good thing, too.
On the other hand, I wonder how far this will go, and how fast. Clearly there is a lot of information out there that would be quite a bit more useful with just a bit more structure. It would also be nice if everyone purveying the same kind of information used the same structure. Microformats are a good step in this direction.
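To sketch what that structure buys you, here's a minimal consumer for a hypothetical "book" microformat. The class names and the ISBN below are invented for illustration; real microformats define their own vocabularies.

```python
from html.parser import HTMLParser

# A hypothetical "book" microformat snippet. The class names and the
# ISBN are invented for illustration only.
SNIPPET = """
<div class="hbook">
  <span class="title">Weaving the Web</span>
  <span class="isbn">978-0-00-000000-0</span>
</div>
"""

class BookParser(HTMLParser):
    """Collect the text inside elements whose class names we recognize."""
    FIELDS = {"title", "isbn"}

    def __init__(self):
        super().__init__()
        self.current = None  # field we are currently inside, if any
        self.book = {}

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if cls in self.FIELDS:
            self.current = cls

    def handle_data(self, data):
        if self.current and data.strip():
            self.book[self.current] = data.strip()
            self.current = None

parser = BookParser()
parser.feed(SNIPPET)
print(parser.book)  # {'title': 'Weaving the Web', 'isbn': '978-0-00-000000-0'}
```

The parser is trivial, and that's the point: once everyone agrees on the convention, pulling the data out programmatically is easy.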
My guess is that tooling will gradually have more and more useful stuff baked in, so that when you put up, say, a list of favorite books, it will be likely to have whatever "book" microformatting is appropriate without too much effort on your part. For example, if you copy a book title from Amazon or wherever, it should automagically carry along stuff like the ISBN and the appropriate tagging.
In other words, it will, and will have to, become easier and easier for non-specialists to add in metadata without realizing they're doing it. I see this happening by fits and starts, a piece at a time, and incompletely, but even this will add considerable value and drive the tooling to get better and better.
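Sticking with the same invented "book" markup, the producing side might look like this: a paste handler that consults some catalog behind the scenes and emits the marked-up snippet, so the author adds metadata without ever seeing it. The catalog lookup here is a stand-in for whatever Amazon-or-wherever query a real editor would make.

```python
# Sketch of the tooling side, reusing the invented "book" markup from
# the sketch above. CATALOG stands in for a real book-catalog lookup.
CATALOG = {
    "Weaving the Web": "978-0-00-000000-0",  # placeholder ISBN, not real
}

def paste_book_title(title):
    """What an editor might do when you paste a book title."""
    isbn = CATALOG.get(title)
    if isbn is None:
        return title  # no metadata known; fall back to plain text
    return (f'<div class="hbook">'
            f'<span class="title">{title}</span>'
            f'<span class="isbn">{isbn}</span>'
            f'</div>')

print(paste_book_title("Weaving the Web"))
```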
Online Video/Internet TV: I don't really have much to add to what the article says. It'll be interesting and fun to (literally) watch this play out. It'll be particularly interesting to see if subscription models can be made to work. If so, I doubt it will be because of some unbreakable protection scheme.
Rich Internet Apps: I occasionally wonder how much longer browsers will be recognizable as such. The features a browser provides -- tabs, searching, bookmarks and such -- are clearly applicable to any resource, and sure enough, editors, filesystem explorers and the like are looking more and more like browsers. OS's are getting into the act, too, allowing you to mount web resources as though they were local objects, perhaps doing some conversion or normalization along the way.
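As a small Python sketch of that "web resources as local objects" idea: urllib already hands back a file-like object, so code written against local files can often read a URL unchanged. (The URL is just a stand-in; the second call needs network access.)

```python
import tempfile
from urllib.request import urlopen

def first_line(readable):
    # Works on anything file-like, local or remote.
    return readable.readline()

# A local object...
with tempfile.TemporaryFile() as f:
    f.write(b"hello from a local file\n")
    f.seek(0)
    print(first_line(f))

# ...and a web resource, read through the same interface.
with urlopen("https://example.com") as f:
    print(first_line(f))
```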
Browsers are also growing more and more toolbars, making them look more like desktops, and desktops are growing widgets that show information you used to get through a browser. Behind the scenes, toolkits will continue to go through the usual refactoring, making it easier to present resource X in context Y.
The upshot is that the range of UI options gets bigger and the UI presented for various resources gets better tuned to both the resource and your preferences. Good stuff, and it will continue to happen because it's cool and useful and people can get paid to make it happen.
International Web: Well, yeah!
Personalization: This is a thread through a couple of the trends above, including Attention Economy and Rich Internet Apps. It will also aid internationalization. The big question, of course, is privacy. But that's a thread in itself.
2 comments:
"The words "wise" and "wizened" come from the same root..."
Now, who told you that? "Wise," "wit," "wizard" all come from the same root (OE "wittan," cognate with the Romance root of "vision"). "Wizened" is related to "wither," and thence to "weather."
You gotta watch these things. Someone might be checking.
Note to self: good candidate for follow-up