Leaving aside the questionable implication that two optic nerves allow us to process much more information than one -- unlikely both because the two eyes generally see almost the same image, and because when they don't the result is usually less informative than the normal case -- I was happy to hear some hard numbers, apparently based on a careful, peer-reviewed study, regarding human visual bandwidth.
So all I needed to do was track down Tufte's assertion on the web, follow that to the original study and write it up: Our optic nerves can handle approximately X, so a display purporting to handle more than X may not be that useful (and maybe that's why Blu-Ray doesn't seem to be taking the world by storm). Granted, this is just a crude measure of bandwidth and leaves aside many, many details of human visual perception, but it's still a useful number for sanity checking and ballpark estimates.
Alas, I'm stuck at step 1. I'm only mostly sure the number was 20 and the units were megapixels per second, and I'm assuming that a pixel is more or less three bytes, based on fairly well-known results in color perception. So instead, here are some facts and factoids that turned up:
- The human eye has about 100 million receptors. This is sometimes quoted as "the eye has 100 megapixels," but trying to compare rods and cones to camera pixels is really apples and oranges.
- Unlike the uniform grid of digital cameras and video displays, the eye has about 100 million light/dark-sensitive rods and 5 million color-sensitive cones. The cones are clustered around the focus point of the lens (the fovea). Peripheral vision is much less color-sensitive.
- Most people can't really tell the difference between a 6"x4" photograph printed at 150 dpi and one printed at 300 dpi when both are viewed at normal distance. 6" x 4" x (150 dpi)² is about half a megapixel. Half a megapixel times 20 frames per second is about 10 megapixels per second; that's on the low side of Tufte's figure, but then a 6"x4" photo at normal distance doesn't completely fill the field of vision -- just the most acute portion.
- The optic nerve contains about 1.2 million fibers. That's a bit more than one for every hundred receptors, so either some aggregation is done on the retina, or the neurons are able to multiplex information from multiple receptors, or both.
- 1.2 million fibers times 20 frames per second is 24 million, close to Tufte's 20 million per second (a quick sketch of this arithmetic follows the list).
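Since these are all one-line multiplications, here's a minimal sketch in Python that runs through them. The 20 frames per second and 3 bytes per pixel are the rough assumptions from above, not measured values.

```python
# Back-of-envelope arithmetic for the figures above. The 20 fps "frame rate"
# and 3 bytes per pixel are rough assumptions from the post, not measurements.

MEGA = 1_000_000

receptors = 100 * MEGA            # rods + cones, roughly
optic_nerve_fibers = 1.2 * MEGA   # fibers per optic nerve
frames_per_second = 20            # assumed rough "refresh rate"
bytes_per_pixel = 3               # assuming ~24-bit color perception

# One fiber serves roughly how many receptors?
print(f"Receptors per fiber: ~{receptors / optic_nerve_fibers:.0f}")       # ~83

# A 6"x4" print at 150 dpi, treated as one "frame"
photo_pixels = 6 * 4 * 150**2
print(f"6x4 print at 150 dpi: {photo_pixels / MEGA:.2f} Mpixels")          # ~0.54
print(f"Photo rate: ~{photo_pixels * frames_per_second / MEGA:.0f} Mpixels/s")   # ~11

# Optic nerve fibers as a crude channel count
print(f"Fiber rate: ~{optic_nerve_fibers * frames_per_second / MEGA:.0f} M samples/s")  # ~24

# Tufte's ~20 Mpixels/s, converted with the 3-bytes-per-pixel assumption
print(f"Tufte's figure: ~{20 * MEGA * bytes_per_pixel / MEGA:.0f} MB/s")   # ~60
```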
If we can only process a megapixel or so, why have a bigger display than that? Good typographic resolution is more like 1200 dpi. On an 8 1/2" x 11" page that's over 100 megapixels. Isn't that overkill?
Not really. You don't look at the entire page at once. You scan it, focusing on one piece, then the next. Each of those pieces needs to be sharp. A large, finely printed page will give you about a hundred high-resolution patches to focus on. Similarly, you can't take in all of an IMAX image at once. Rather, you have a huge image that looks sharp no matter where you look.
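To put a rough number on that patch count, assuming the acute foveal region covers roughly a megapixel (the same assumption as above):

```python
# Rough check of the "about a hundred high-resolution patches" claim.
# The ~1-megapixel foveal patch is an assumption carried over from above.

page_pixels = 8.5 * 11 * 1200**2   # 8.5" x 11" page at 1200 dpi
foveal_patch_pixels = 1_000_000

print(f"Page: ~{page_pixels / 1e6:.0f} Mpixels")                      # ~135
print(f"Patches per page: ~{page_pixels / foveal_patch_pixels:.0f}")  # ~135
```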
A sharp display with only a megapixel of resolution would have to cover the entire field of view, and it would have to track eye movements so that which megapixel you got depended on where you were looking. Maybe some sort of really high-tech contact lens?
2 comments:
Just look at an Ansel Adams print and anything done with a cell phone camera, and you will know that the eye can see more than it can see.
Note to self: there's a lot of redundant information in those 20 fps. Video compression is tuned to our perception, so compressed data rates are probably a good approximation to what's really going on.
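For a rough sense of how much redundancy a codec squeezes out: raw 1080p video runs to roughly 1.5 Gbit/s, while compressed rates are typically tens of Mbit/s. The frame rate, bit depth, and 20 Mbit/s compressed rate below are assumed ballpark figures, not properties of any particular codec.

```python
# Crude estimate of how much redundancy video compression removes.
# Frame rate, bit depth, and the compressed rate are assumed ballpark figures.

width, height = 1920, 1080
bits_per_pixel = 24
fps = 30

raw_rate = width * height * bits_per_pixel * fps   # bits/s, ~1.5 Gbit/s
compressed_rate = 20e6                             # assume ~20 Mbit/s

print(f"Raw: ~{raw_rate / 1e9:.1f} Gbit/s")
print(f"Compression ratio: ~{raw_rate / compressed_rate:.0f}:1")       # ~75:1
```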