One site I visit from time to time allows you to restrict the date range of the query it's built from. The default range is about a month, but you can restrict it to as little as a day if you like, and I often do. The site uses a script to present the date range on a calendar instead of as plain text boxes. It's a nice convenience, particularly in places like travel sites, where you're likely to want to consult a calendar when filling in a date.
Generally, if you want to see whether the content of a web page has changed, you just reload it. But in this case, if I restrict the query range to a single day and then reload, I don't see the new results for that day. I see the new results for the month.
The script that allows me to set the date from a calendar has its own state. When I reset the date range, I change that state, but it doesn't get saved anywhere (the site in question could use cookies to track this, but it doesn't). When I reload the page, the calendar goes back to its default state.
This breaks the fundamental "reload if you want to see the latest data" behavior of web pages. There's nothing wrong with changing this behavior, per se. It's often great when pages are able to update themselves, instead of waiting for me to poll again. But that's not what's happening here.
A query page represents a view of the data. If you reload a vanilla query page, you see the same view of the new data. Likewise for an asynchronous page that updates itself without being asked. In the case I'm describing, reloading doesn't just show the new data. It resets the view, disrupting the "view" metaphor.
This isn't a major breakage, but it's annoying, and it points out that moving intelligence from the server into the browser comes at a price: You have to consider issues that are already considered for you when you just color within the lines with plain old HTML.
I also notice that the mouse wheel doesn't work when the cursor is over certain parts of the page in question, presumably because those parts capture the mouse wheel events and don't pass them back when they're done with them. So if I'm scrolling the page with the wheel and that part of the page slides under the cursor, the scrolling stops until I move the cursor out of the way. Again, changing the behavior of the mouse wheel can be useful, as when you want to scroll a text box within a page rather than the whole page it's on. But in the page I'm describing, the change is annoying. It's a minor annoyance, but an annoyance nonetheless.
Thursday, February 28, 2008
Wednesday, February 27, 2008
TCP: A gold standard
If you're looking for a paradigm for network standards, you could do worse than TCP. It's been in continuous use for decades now with very little revision. It's been implemented on a staggeringly wide variety of platforms. It works on hardware spanning orders of magnitude in performance. I don't mean this just historically, though hardware has certainly become faster over time. You could be using a dial-up connection to talk to someone with a T1 connection by way of gigabit or faster networks in the middle and TCP will happily manage the whole connection end to end.
TCP underlies most (though not all) of the other famous Ps: HTTP, SMTP, NNTP, FTP, POP, IMAP, etc., etc. If you're reading this, your browser almost certainly used HTTP over TCP to fetch the contents. TCP works on anything from a single computer (using loopback) to the whole internet. And it does it so quietly you can easily forget it's even there.
TCP is successful because of two well-chosen abstractions:
It uses the abstraction of a best-effort datagram protocol. That is, it assumes that there is a way of sending a packet of data, up to a given size, to a given address on the network. It doesn't assume that such a datagram will always get there, or that any two datagrams will arrive in the order sent, or will arrive intact, or will only arrive once. It works best when these things happen, but it still works even if they don't, as long as they don't happen too much.
TCP provides a useful abstraction. Once you establish a TCP connection to another host, one of three things will happen:
- The data you send will arrive, in the order you sent it, intact and reasonably timely (and likewise for data the other end sends back).
- Failing that, the connection will be closed and you will get an error.
- Failing that, you will have bigger problems to deal with than a broken TCP connection.
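To see what that guarantee buys you in practice, here's a minimal Python sketch (mine, not from any particular implementation) that speaks just enough HTTP over a raw TCP socket to fetch a page; the host name is only an example. Notice that nothing in it worries about lost, duplicated or reordered packets.

```python
import socket

# A sketch of the stream abstraction described above: the application writes
# bytes and reads bytes, and either they arrive intact and in order or the
# call fails with an error. Retransmission and reordering are TCP's problem.
def fetch(host: str, path: str = "/") -> bytes:
    with socket.create_connection((host, 80), timeout=10) as conn:
        request = f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"
        conn.sendall(request.encode("ascii"))  # delivered in order, or an exception is raised
        chunks = []
        while True:
            data = conn.recv(4096)  # bytes come back in the order the server sent them
            if not data:            # an empty read means the other end closed cleanly
                break
            chunks.append(data)
        return b"".join(chunks)

print(fetch("example.com")[:200])
```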
Not bad at all for something invented when the Net as we now know it was still firmly in the realm of speculation.
Wednesday, February 20, 2008
Renaming social networking
I've previously argued that what we call "social networking" comprises two very separate things:
- A set of features for navigating from your immediate connections to their connections and so on, generally with a filtering capability ("Who in my network is over 30/says they liked Casablanca/joined since I last looked/etc?")
- Names, logos, slogans and other branding to identify people as having something humanly meaningful in common ("all the cool kids/movie buffs/seasoned professionals/etc join this site")
I'm sure that the combination will continue to be called "social networking," whether or not the label really fits, but it would be good to have nice crisp labels for the parts when we want to talk about them without confusion.
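For what it's worth, the "networking" half is easy to make concrete in code. Here's a small sketch of navigating out from your immediate connections with a filter applied; the people, ages and two-hop limit are all made up for illustration.

```python
from collections import deque

# A hypothetical friend graph: each person maps to the people they know directly.
graph = {
    "me":    ["alice", "bob"],
    "alice": ["carol", "dave"],
    "bob":   ["dave", "erin"],
    "carol": [], "dave": ["frank"], "erin": [], "frank": [],
}
ages = {"alice": 34, "bob": 28, "carol": 41, "dave": 25, "erin": 52, "frank": 31}

def in_my_network(start, max_hops, keep):
    """Walk out from `start` up to `max_hops` links, returning people who pass `keep`."""
    seen, results = {start}, []
    frontier = deque([(start, 0)])
    while frontier:
        person, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for friend in graph[person]:
            if friend not in seen:
                seen.add(friend)
                if keep(friend):
                    results.append(friend)
                frontier.append((friend, hops + 1))
    return results

# "Who in my network (two hops out) is over 30?"
print(in_my_network("me", 2, lambda person: ages[person] > 30))
```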
How many lawyer jokes does it take to build the new web?
From my previous post (and perhaps this one), it might seem that I take a dim view of lawyers. I don't. True, I've found that using the phrase "my attorney" is generally a sign that things are not going as well as they might, but that's not the attorney's fault. My point is not that lawyers are bad, but that there is a definite legal angle to the growth of the web. It's inevitable that lawyers will be involved.
Personally, I tend to think this will be a good thing. In the long run the legal picture will be clearer than if (somehow) no lawyers were involved. I realize that not everyone will agree with this. A good lawyer, of course, could make a decent case for either side ...
Tuesday, February 19, 2008
And yet, I remain strangely optimistic ...
So here's how I think we might eventually get to a world of personal datastores, in two easy steps:
- (In progress) More and more people notice that, say, health provider A and health provider B have their own data fiefdoms. That presents an opportunity to actualize paradigm-busting disruptive technology by combining them into a single, personalized, customer-focused data vault. Maybe the patient even gets to see what's in it, if that doesn't violate any privacy rules ...
- People start to notice that they now have a health-records data fiefdom, a travel-records fiefdom, an entertainment-preferences fiefdom, a "who's connected to whom" fiefdom, and so forth. These are provided by several different entities, each with its own local UI customs and its own data model for nuts and bolts like names, dates, addresses, ratings and so forth. That presents an opportunity to unleash previously roadblocked potentialities, synergizing to achieve optimality by combining them into a single, personalized, individual-focused datastore.
Doubtless XML will be involved at several points.
And lawyers.
Sunday, February 17, 2008
Health care and datastores
One possible killer app for personal datastores is medical record keeping. Right now (in the US, at least), every health care provider I use has its own copy of my medical history. It's generally not hard to get your old provider to send your new provider your records, but the point is that they need to be sent at all. The inevitable result is a multitude of small mistakes and discrepancies as essentially the same form gets filled in or transcribed over and over again, leaving you to wonder what sort of large mistakes and discrepancies might creep in while no one's looking.
Joe Andrieu relates a story of Doc Searls dealing with just such problems -- in a state with universal health care -- in his own case [The story in question starts with the section marked "User centrism as system architecture," but as usual the whole post is worth reading]. As he says, a personal datastore would make the whole situation much simpler. Your medical data is part of your datastore. You give your providers permission to read and update it. There's only one copy of it, so all the data replication problems go away. You control access to it.
If you want a new provider to have access, just say so. You could even give blanket permission to any accredited hospital, in case of emergency. This permission would, of course, live in the world-readable part of your datastore.
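Here's a rough sketch of what that permission model might look like; the record names, provider names and "accredited hospital" flag are all invented for illustration, not drawn from any real system.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalDatastore:
    """One copy of my records, with access controlled by me (a sketch, not a spec)."""
    records: dict = field(default_factory=dict)   # e.g. {"allergies": [...]}
    grants: dict = field(default_factory=dict)    # provider -> set of rights
    emergency_rule: bool = False                  # blanket read access for accredited hospitals

    def grant(self, provider, rights):
        self.grants.setdefault(provider, set()).update(rights)

    def allow_accredited_hospitals(self):
        # The world-readable policy mentioned above: any accredited hospital may read.
        self.emergency_rule = True

    def can(self, provider, right, accredited_hospital=False):
        if right == "read" and self.emergency_rule and accredited_hospital:
            return True
        return right in self.grants.get(provider, set())

store = PersonalDatastore()
store.grant("Dr. Example", {"read", "update"})   # my regular provider
store.allow_accredited_hospitals()               # in case of emergency
print(store.can("Dr. Example", "update"))                          # True
print(store.can("County ER", "read", accredited_hospital=True))    # True
print(store.can("County ER", "update"))                            # False
```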
I have to say it sounds beautiful, and I'm confident that it, or something functionally equivalent, will eventually happen. But how do we get there?
Given that this is very personal medical information, privacy is a major concern. Health care providers (again in the US, at least) are bound by strict privacy rules. Without digging into the details of HIPAA, one of whose aims is actually to promote electronic data interchange, suffice it to say that achieving HIPAA compliance has been a long, expensive and sometimes painful process for the US medical industry.
One result of this process is that providers (and any other "covered entities") are limited in what they may disclose to other parties. While the intent of the privacy rules seems very much in harmony with the idea of a personal datastore, the realization laid out in the law is very much built on the idea of each provider having its own data fiefdom, with strictly limited interchange among the various fiefdoms.
By contrast, in a personal datastore world, providers would never have to worry about disclosing data to other providers. In fact, it would be best for a provider never even to take a local copy, except perhaps for emergency backup purposes, since the patient's datastore itself is the definitive, up-to-date version. This could be particularly important if, say, a patient in a hospital is also being treated by an unaffiliated specialist. Anything one of them does is automatically visible to the other, unless there is a particular reason to restrict access.
The geek in me is fascinated to once again see the concepts of cache coherency and abstraction turning up in the larger world (whence we geeks are only borrowing them). But the health care consumer in me is concerned that the less-than-abstract form of the law, together with the need to implement it, has almost certainly produced a system with far too much inertia to switch to a datastore-centered approach anytime soon.
Obstacle one is that hospitals' data systems just aren't set up for it, and after going through the wringer to get the present setup in place, they are not going to be in any hurry to implement a new scheme. Even with that out of the way, it is the providers who are on the hook to ensure privacy. They will want some assurance that relying on personal datastores does not expose them to any new liability.
That in turn will depend on personal datastores having been shown to be secure and reliable. Which is why, although I have no doubt that medical record keeping would be a great application for personal datastores, it seems unlikely to be the first, "killer" app that breaks them into the mainstream.
On the other hand, in chasing down the links for the lead paragraph, I ran across this post on Joe Andrieu's blog about Microsoft HealthVault. It looks like a step in the right direction, but curiously enough, only health care providers can access your vault directly. You can't.
Labels:
Doc Searls,
health care,
Joe Andrieu,
personal datastore,
privacy
Friday, February 15, 2008
Who owns a social connection?
Re-reading, I see I didn't draw out a point I meant to draw out in my recent post on Facebook:
If I keep my own database of social connections, in my own store and on my own dime, and then essentially copy that information up to a new social networking site when I join it, in what sense can that site claim it owns that connection?
Just to keep the bot question out of it, suppose that my friends and I make an agreement that whenever one of us joins a site another is already on, we'll link up with each other. This is pretty much what is going to happen anyway. The agreement just formalizes the process (and the bot, if any, automates it).
As far as I can tell, a site that provides social networking features is doing two things:
- Making it easy to navigate from your immediate circle, which you instinctively hold in your head, to the next few layers out, which quickly become larger than human-sized (the networking part).
- Providing a badge of identity for something resembling a community (the social part).
The graph of everyone's connections to everyone doesn't belong to anyone. You can navigate through it by getting permission from the various owners. This means that the graph is split up and that there's probably no one who can navigate anywhere at will, but I tend to think that's a good thing. It exactly reflects privacy in the real world because that, too, rests with the individual (modulo some classic philosophical arguments about the rights of individuals vs. society as a whole -- but the point is that those arguments apply equally in the real and virtual worlds).
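A rough sketch of what navigation-by-permission could look like, with each person holding their own connection list and their own (entirely made-up) consent rule:

```python
# Each person owns their own list of connections; there is no central copy.
connections = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dave"},
    "carol": {"alice"},
    "dave":  {"bob"},
}

# Hypothetical policy: you may read someone's connections only if they already
# list you as a connection (a stand-in for whatever consent rule they choose).
def may_read(owner, requester):
    return requester in connections[owner]

def visible_network(me):
    """The part of the graph I can see, expanding only where the owners permit it."""
    seen, frontier = {me}, [me]
    while frontier:
        person = frontier.pop()
        if person != me and not may_read(person, me):
            continue  # this owner hasn't granted me access, so the walk stops here
        for other in connections[person]:
            if other not in seen:
                seen.add(other)
                frontier.append(other)
    return seen - {me}

print(visible_network("alice"))  # alice sees only as far as the owners along the way allow
```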
Technically, it also seems dodgy that each entity that wants to feature social networking produces its own slightly different implementation.
On the other hand it seems natural and probably useful for sites to provide banners that people can organize themselves under.
Facebook and copy protection
A slightly different take on the previous post:
Facebook's terms of service bring out an interesting gray area in copy protection. The gist of the contract appears to be that you own whatever you put up, but you don't own anything else. In particular, you don't own your friends list. Facebook does. I haven't checked, but I assume this is the norm on such sites, not just a Facebook thing.
But what does that ownership mean? Facebook obviously doesn't mind you telling people "I know so-and-so on Facebook." That's good for business. They shouldn't mind if you happen to have an email address book that has the exact same contents as your friends list. That's none of their business, and it would clearly fit into the "personal, non-commercial use" exception.
On the other hand, they definitely mind if you, say, write a script to crawl your friends list and whatever can be reached from there and make a copy of it. There are very specific "no bots" clauses aimed at just that.
The presumption is that if you're using a bot, as opposed to personally browsing, cutting and pasting, you must have some commercial reason for it. It will be interesting to see how well that holds up.
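In practice, "using a bot" tends to be detected as a rate. Here's a minimal sketch of the kind of threshold a site might apply; the window and limit are completely made-up numbers, not anything Facebook has published.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # hypothetical: look at the last minute of activity
MAX_REQUESTS = 60     # hypothetical: more than one request a second looks like a bot

recent = defaultdict(deque)  # user -> timestamps of their recent requests

def looks_like_a_bot(user, now=None):
    now = time.time() if now is None else now
    window = recent[user]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()          # forget requests older than the window
    return len(window) > MAX_REQUESTS

# A person clicking around stays under the limit; a scraper fetching a page
# every tenth of a second trips it within a few seconds.
for i in range(100):
    tripped = looks_like_a_bot("scraper", now=i * 0.1)
print(tripped)  # True
```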
Another thought: For most of history, copy protection has relied on it just being too slow to copy things yourself. Technology has disrupted that, starting with the photocopier and the tape recorder, spurring the development of cryptographic copy protection.
"No bot" clauses are a sort of throwback. The content is unprotected, beyond requiring a password to get into the system, but if you access too much of it too fast, the hammer falls.
Labels:
copy protection,
facebook,
Intellectual Property
Thursday, February 14, 2008
Personal datastores, agents and Facebook
If you become my friend on Facebook, who owns that connection? Facebook says they do, and why not? Connections are the basis of their business, people join Facebook for the connections, and Facebook is paying for the servers. If I understand their terms of service, you own the material that you put up on your site (photos, videos, etc.) and they own everything else.
I ran across the back-story for this while browsing Joe Andrieu's blog. It turns out that Robert Scoble of Scobleizer fame got himself kicked off Facebook by running a script (he doesn't say what script) to scrape data off the site. The interesting thing is that that's verboten. The ominous thing is that it may even run you afoul of the law (presumably the DMCA).
A while ago I mused about a converse situation, wherein you and I keep our connection information in our personal datastores. When I join a new service, an agent (which I own and control) looks at my connection information and finds people that I already know that I should link up with if they're on the new service. You're on that list. My agent calls your agent, they do lunch and then go push the appropriate buttons to make the link on the service.
The net result is that when I join a new service, in short order all my friends outside the service are friends inside the service (or maybe not all -- I might tell my agent only to invite "professional" contacts on one service, "casual" contacts on another, etc. They might also tell their agents to decline the invitation.). This all costs an extra agent, but only one per customer, and again, control of the agent rests with the individual, not the social networking site.
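The matching step the agent would do is simple enough to sketch; the contacts, tags and service name below are all hypothetical.

```python
# My personal datastore holds my contacts, each tagged however I like.
my_contacts = {
    "alice@example.org": {"professional"},
    "bob@example.org":   {"casual"},
    "carol@example.org": {"professional", "casual"},
}

def members_of(service):
    """Stand-in for asking each contact's agent whether they're on `service`.
    In a real system this would be a query to their agents, not a local table."""
    return {"alice@example.org", "carol@example.org"}

def invitations(service, allowed_tags):
    """People I already know, who are on the service, and whom my policy says to invite there."""
    on_service = members_of(service)
    return [who for who, tags in my_contacts.items()
            if who in on_service and tags & allowed_tags]

# Joining a hypothetical professional site: only invite "professional" contacts.
print(invitations("example-professional-site", {"professional"}))
```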
Unlike Scoble's scraping predicament, this scheme pushes data onto the service instead of pulling it off. However, it's still probably against Facebook's terms of service (and possibly illegal), since it's being done by a bot.
The question is whether the social-networking-enabled sites would go for such a thing. On the one hand, it helps create connections, and connections are their bread and butter. On the other hand, it leaves the actual connections in the users' control.
Assuming my friends' agents cooperate, I can easily figure out who's in whose network on what service without actually logging into that service. In fact, if their agents grant me permission, I can find out if they're connected on a particular site without even joining that site. And why not? They could tell me that in person without violating any agreement, so why not over the web?
I can't get at the site-specific data for people I find without actually logging into the site, unless they also make that data available from their datastore via their agent.
But why wouldn't they? If they've put their job history up on some professional site and they're willing to let me link up with them on that site, obviously they don't mind my seeing that data. They may even want me to. Which leads to a couple of points:
- Social networking is a feature, not a service. In particular, it's a feature that can be added to personal datastores without any centralized service at all, as long as there's a defined protocol for agents to talk to each other.
- In which case, exactly why would we need social networking sites?
Labels:
DMCA,
facebook,
personal datastore,
Robert Scoble,
social networks
Wednesday, February 13, 2008
Samsara in your browser
Back in the olden days, a web browser was a simple thing. There was one window. There was one thread of execution. It went
- Get a URL, either typed in or from clicking on a link
- Fetch whatever's there
- Display it
The browser ran on top of the native operating system and its window was managed by the native window manager.
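In today's terms, that whole early browser fits in a few lines. A sketch, treating "display it" as dumping the raw text rather than rendering it:

```python
import urllib.request

def simple_browser():
    """One window, one thread: get a URL, fetch whatever's there, display it."""
    while True:
        url = input("URL (blank to quit): ").strip()   # typed in, or pasted from a link
        if not url:
            break
        with urllib.request.urlopen(url) as response:  # fetch whatever's there
            body = response.read()
        print(body.decode("utf-8", errors="replace"))  # "display" it, more or less

simple_browser()
```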
I forget exactly in what order, but over time the features crept in, as features are wont to do:
- It's nice to have more than one window up at one time without having to fire up a second instance of the browser.
- Those windows take up a lot of space and have a way of getting lost if there are lots of other, unrelated windows on the screen. Tabs are more compact and are easy to find and switch between.
- Pop-up windows can be a pain, but they can also be very helpful when used responsibly.
- HTTP has no memory of past requests, so servers need some other way of remembering what has happened so far. Cookies allow the server to store small pieces of information on the client machine and have the browser send them back in later requests. Generally, cookies are used as reference keys to whatever information you really want to persist.
- Old-school HTML forms require every action to go through the server. This is particularly annoying when you've made one silly typo in the middle of a large form, but you don't find out about it until the server processes the whole thing. If the browser could keep track of what you've already entered and perform simple checks itself, everything would run more smoothly.
- HTML is nice, but there are all sorts of things it doesn't support. Enter ECMAScript, Flash and other programming platforms. These also allow the browser to handle forms more smoothly (see previous item).
- There are more different kinds of document than just HTML. Not everyone wants everything, so it would be great if any third party that felt the need could add support for a new format. Modern browsers have a plug-in facility allowing new features to be added (via a URL, of course).
- With multiple tabs or windows open at one time, the browser should be able to do more than one thing at once. Otherwise a slow response on one page grinds everything to a halt.
In short, it looks a lot like an operating system. This is good, because it means browsers are powerful, but also not so good, because it means that the browser is doing many of the same things the operating system is doing, but slightly differently and with a completely separate code base.
Generally when this happens, pressure builds for consolidation. Whether this will happen in the present case is anyone's guess, but I wouldn't be shocked if someone started touting (or is already touting) a system in which the window management, plug-in handling and other such advanced browser features migrate back to the operating system.
Leaving the browser as a simple, lightweight mini-application which gets a URL, loads it, and displays it.
Monday, February 11, 2008
The National New York Geographic Times
A little bit ago I mentioned a late-80s Media Lab brainstorming session in which someone suggested that, given "effectively infinite" bandwidth of 500Mb/s, one could have "a daily National Geographic-quality New York Times with today's news rather than yesterday's, manufactured at the breakfast table."
I wanted to take another look at how that stacks up with where we really are, 20 years later, because I can see two opposite conclusions one could draw, equally not-quite-convincing:
- It happened. Printed newspapers and magazines are not gone, but it's quite possible to get one's news entirely from online sources, up to the minute, with pictures and even video (The video's a bit on the small and grainy side, but at least you don't have to have someone standing by the TV set with a coat hanger, a roll of aluminum foil and one hand out the window. Progress.)
- It's nowhere near happening. No one even makes a mass-market printer that will produce broadsheet-sized, glossy magazine-quality copy quickly and (just as vital) cheaply. A few dozen pages at magazine-quality resolution is probably running into gigabytes even with compression (photos take way, way more space than text, so the question is what portion of the page is photos). That's a daunting number for most of us.
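For the curious, here's the back-of-the-envelope arithmetic behind that second point, with the page size, resolution, compression and page count as explicit (and debatable) assumptions; the answer swings from a few hundred megabytes to a gigabyte or more per issue, depending mostly on how hard you're willing to compress the photos.

```python
# Rough sizing of a glossy, broadsheet-style daily under stated assumptions.
PAGE_INCHES = (12, 22)   # roughly broadsheet
DPI = 300                # roughly magazine print resolution
PAGES = 48               # "a few dozen pages"

pixels_per_page = (PAGE_INCHES[0] * DPI) * (PAGE_INCHES[1] * DPI)   # ~24 megapixels
for label, bits_per_pixel in [("aggressive JPEG", 2), ("light compression", 8)]:
    total_gb = pixels_per_page * bits_per_pixel / 8 * PAGES / 1e9
    print(f"{label}: ~{total_gb:.1f} GB per issue")
# prints roughly 0.3 GB and 1.1 GB per issue under these assumptions
```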
Thursday, February 7, 2008
Steam-powered computing
I generally try to stay a bit behind the curve, but here's a case where I'm just now hearing about something cool that's been around for twenty to forty years, depending on how you count. I don't feel too bad, though. The whole point of steampunk is that it draws on technology that fell out of favor around the same time as the bustle and the ascot tie.
As one might expect of a genre that's been around for a few decades, steampunk comprises a fairly extensive and varied body of work, including literature, film, comics and art objects. Steampunkopedia has a fairly impressive pile o' links, if you want to browse further. I've personally only scratched the surface, and will probably not have time to do much more, though I would like to read The Difference Engine at some point.
What drew me into all this bounty in the first place was a radio mention of the Steampunk Workshop which, along with other enterprises of its kind, actually produces modern gadgets in Victorian dress, such as this retrofitted keyboard (step lightly around the AdSense solicitations). I particularly like the Roman-numeral function keys. And why stop there? Datamancer, an affiliated site, will do you a whole steampunk laptop.
Also of interest, in the same spirit if not quite the same aesthetic, is a water-cooled computer constructed with parts from the local Home Depot.
Neat hacks, all.
Wednesday, February 6, 2008
Tourism today
Here's Richard Stallman talking to an audience in Stockholm, at the Kungliga Tekniska Hogskolan (Royal Institute of Technology), in October 1986. I've edited a bit to draw out the point I want to make. Please see the original transcript for further detail.
Now "tourism" is a very old tradition at the AI lab, that went along with our other forms of anarchy, and that was that we'd let outsiders come and use the machine. Now in the days where anybody could walk up to the machine and log in as anything he pleased this was automatic: if you came and visited, you could log in and you could work. Later on we formalized this a little bit, as an accepted tradition specially when the Arpanet began and people started connecting to our machines from all over the country.In sum:
Now what we'd hope for was that these people would actually learn to program and they would start changing the operating system. If you say this to the system manager anywhere else he'd be horrified. If you'd suggest that any outsider might use the machine, he'll say "But what if he starts changing our system programs?" But for us, when an outsider started to change the system programs, that meant he was showing a real interest in becoming a contributing member of the community.
We would always encourage them to do this. [...] So we would always hope for tourists to become system maintainers, and perhaps then they would get hired, after they had already begun working on system programs and shown us that they were capable of doing good work.
But the ITS machines had certain other features that helped prevent this from getting out of hand, one of these was the "spy" feature, where anybody could watch what anyone else was doing. And of course tourists loved to spy, they think it's such a neat thing, it's a little bit naughty you see, but the result is that if any tourist starts doing anything that causes trouble there's always somebody else watching him.
So pretty soon his friends would get very mad because they would know that the continued existence of tourism depended on tourists being responsible. So usually there would be somebody who would know who the guy was, and we'd be able to let him leave us alone. And if we couldn't, then what we would [do] was we would turn off access from certain places completely, for a while, and when we turned it back on, he would have gone away and forgotten about us. And so it went on for years and years and years.
In sum:
- Everyone can change the system. In fact, everyone is openly encouraged to change the system.
- People who make good changes rise in the ranks and eventually help run the place.
- Everyone can see what people are up to, and in particular what changes they're making.
- Vandals are locked out and generally go away after a bit (likely to be replaced by new, nearly identical vandals). Sites can be blocked if blocking individuals doesn't work.
The obvious counter-argument is that open source software (a term, I should mention, that rms disfavors) works quite well on the same basic principles, albeit not always strictly according to the FSF model. It's a good point, but the average open source project is a fairly small system. I doubt there are many with more than a few dozen active participants at any given time. Such projects also tend to have a limited audience, a limited pool of potential contributors, or both.
However, there is at least one very large and prominent system, with hundreds of thousands of participants, that the bullet points above describe almost as though I'd written them with it in mind (which I did): Wikipedia.
Labels:
computing history,
open source,
Richard Stallman,
wikipedia
Tuesday, February 5, 2008
How much bandwidth does a man need, Mr. Tolstoy?
Earl comments that we probably already have enough, or nearly enough, bandwidth to saturate human capacity, but not enough to satisfy our human desire for new shiny pretty things. Or at least I hope that's a reasonable paraphrase.
I would say it depends on what kind of information you're trading in. If you're dealing in metadata like who knows whom, or dealing in random facts like the score of every major sports event in the world, or in text like newspaper copy, even a dial-up connection can feed you information faster than you can process it.
If you're dealing in sound, a dial-up connection can certainly carry voice (that's what it was designed for, after all). Decent stereo sound requires more like 100kb/s (mp3 takes about a megabyte a minute). That's a bit beyond dial-up, but cable and DSL can handle it.
Analogously, if you want YouTube-quality video, you can certainly get that over cable or DSL, but plain DVD quality, at 4Mb/s, needs what's still an uncommonly big pipe. And that still leaves plenty of room before we hit the limits of perception. Those finely-tuned eyes of ours are bandwidth-hungry.
On the other hand, as Earl points out, you hit diminishing returns well before you hit the limits of perception. That last Mb/s of bandwidth doesn't improve the experience nearly as much as the first one. The difference between regular TV and HD is not nearly the difference between TV and no TV (and some would argue whether that last one represents an improvement, either).
Storage hardware is already entering a new regime of plenty. You can now get a terabyte disk off the shelf. My spell checker flags "terabyte", but soon enough we'll be throwing around (I'm guessing) "T" like we now throw around "gig" and "meg". Borrowing from those displays you see in the camera/music player department, a terabyte can hold:
- A million minutes, or roughly two solid years, of music.
- Hundreds of thousands of high-quality photos.
- Hundreds of hours of DVD-quality video.
If you have dial-up at 56Kb/s, it would take 4-5 years to fill a terabyte, assuming the connection is running full-tilt, constantly, with every bit recorded on the disk. When you're done with that, you'll probably be able to pop in a 10TB disk for the same price and keep going ... The question is, will you still have a 56Kb/s dial-up connection?
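The arithmetic behind those figures, using the rough rates quoted above:

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
TERABYTE = 1e12                                   # bytes

# "mp3 takes about a megabyte a minute" expressed as a bit rate:
print(1e6 * 8 / 60 / 1e3)                         # ~133 kb/s

# A terabyte of music at a megabyte a minute:
print(TERABYTE / 1e6 / (60 * 24 * 365))           # ~1.9 solid years

# Filling a terabyte over a 56 kb/s dial-up line running flat out:
dialup_bytes_per_second = 56e3 / 8
print(TERABYTE / dialup_bytes_per_second / SECONDS_PER_YEAR)   # ~4.5 years
```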
In the big happy family of Moore's law, bandwidth into the house is the poor cousin (processing power is the crazy uncle in the attic, but maybe I'll get to that later). At some point, bandwidth will reach the point that disk is reaching now, where we'll actually have to think a bit about how to use it all. Anomalously, that point still seems fairly far off.
But then, how much do we really need? In Tolstoy's famous story How much land does a man need? (which I haven't read), it turns out you need about three feet by six feet. By six feet deep.