
Wednesday, October 5, 2011

Steve Jobs, 1955-2011

Well, we all knew it was coming, but you could still feel the earth shift.  None of us in the tech business has remained untouched by Jobs's work, and by extension, Jobs himself.  There was never, nor will there ever be, anyone quite like him.

RIP

Wednesday, June 9, 2010

i.e.

Consider two prefixes: i and e, both in lowercase, e. e. cummings-style. Once they were emblematic of all things new and shiny and dot-com-y. Where are they now?

e- still has its webby connotations, quite possibly because e-mail is still prevalent. We still have eBay, eHarmony, esurance, Epinions, eFileCabinet and others, though perhaps not as many as one might expect.

i-, on the other hand, was blatantly hijacked by Apple. It used to mean "internet-" or something, but through some masterstroke of Steve Jobs's patented legerdemain, it now means "cool, shiny and Apple-y". In fact, according to Wikipedia, the name "iPod" was already trademarked, for internet kiosks, when freelance copywriter Vinnie Chieco decided the prototype reminded him of 2001: A Space Odyssey, particularly the phrase "Open the pod bay door, Hal!" and proposed the name. How the initial i got attached is not clear, at least not to me.

While Jobs didn't come up with the name himself, he must have made the final call on going with it. The sleight-of-hand was being able to market something with no direct internet connectivity under such a name (the much webbier iTunes didn't come along for another couple of years).

Two other affixes from the era still seem to have life in them. The notion of calling the customized view of FooCorp "myFooCorp" lives on here and there, not to mention mySpace.

And, of course, .com has more or less become punctuation.

Finally, there's camelCase. When I was starting out, there were still widely-used programming languages with ridiculously short limits on names. Classic FORTRAN was limited to six characters and BASIC dialects varied but could be even worse. Single-case, conventionally ALL CAPS, was still prevalent as well.

[You got around these restrictions by dropping any letter you could -- "parameters" became PARMS, "first name index" might be FNMIDX.  Well-organized FORTRAN code typically built variable names up from abbreviated parts and had block comments in key places explaining what all the abbreviations meant.

Early versions of FORTRAN also had the convention that the first letter of the name indicated whether a variable was integer or floating point, so you'd get names like IRANK, since plain RANK would be floating point.  While that led to a lot of names starting with I, I doubt that's where the dot-com-era i- prefix comes from.  --D.H. October 2015]

Two popular languages were less restrictive: C and Pascal. C coding style called for all-lowercase names except for constants, with underscores serving as spaces: my_variable_name. Pascal, on the other hand, didn't allow underscores in names (or maybe they were just considered uncool?). Instead, Pascal code used capitals to break up long names: MyVariableName.

I really don't know how mixed case came to be the dominant style, but it has. I still remember a TA (who would later spend some years working for Apple) complaining that my C-style names_with_underscores hurt his eyes and why didn't I do things TheRightWay. Fast forward a few years and if you want to look web.hip you have to go camelCase. Spaces are so old economy.

The astute reader may notice the subtle distinction between camelCase (starting with lowercase) and PascalCase (starting with uppercase). Both are used in actual code. For example, Java conventions call for names of classes to start with a capital and most other names to start with lowercase. I suspect that dot-commers chose lowercase (for the most part) because it just looked less conventional.
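
To make that concrete, here's a minimal sketch of the Java conventions just mentioned (the class and names are invented for illustration):

    // PascalCase for the class name, camelCase for fields and methods,
    // and constants keep the old ALL_CAPS_WITH_UNDERSCORES style.
    public class ShoppingCart {
        private static final int MAX_ITEMS = 100;
        private int itemCount;

        public void addItem() {
            if (itemCount < MAX_ITEMS) {
                itemCount++;
            }
        }
    }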

Whatever the reasons, it seems to have caught on, more so, in fact, than any of the particular prefixes.



How much dot-com-y goodness will fit in one name? What's the equivalent of a tall double half-caf soy vanilla latte? My guess is it would be somewhere around "myENet.com", but I may have missed a step.

[A quick search reveals that "tall double half-caf soy vanilla latte" is small beans. The real bidding starts at "Venti, sugar-free, non-fat, vanilla soy, double shot, decaffinated, no foam, extra hot, Peppermint White Chocolate Peppermint Mocha with light whip, upside-down, 1 pump of peppermint, 1 and 3/8 pumps vanilla,180 degrees, heavy whip-cream, 3 ice cubes, 1/4 teaspoon Nutmeg sprinkled on top, with green sprinkles, lightly cinnamon dusted on, stirred, with no lid, double cupped, and a straw"]

Friday, April 30, 2010

Mr. Jobs's eras

Apple and Adobe have a long history together, as Steve Jobs points out in an explanation of why iPhones and their cousins won't run Flash. Good times, good times, he says, reminiscing about their shared past, and then goes on to give, in a cool and evenhanded tone, six fairly blunt reasons for the choice.

Now clearly, supporting Flash or not, and choosing HTML5 and other standards over it, is all about the web, but what jumped out at me was Jobs's contrast between the "PC era" and the "Mobile era", which would seem to be more about generations of hardware. Guess which era he puts Flash in. Give up? OK, I'll tell you (or rather, I'll let Jobs tell you):
Flash was created during the PC era – for PCs and mice. Flash is a successful business for Adobe, and we can understand why they want to push it beyond PCs. But the mobile era is about low power devices, touch interfaces and open web standards – all areas where Flash falls short.
There are several interesting implications in that one little paragraph. In particular, it would seem that mobile devices are in some way webbier than PCs. Even before the web, people were using PCs to write documents, play games and whatever else. Sure, the web can enhance all that, but PCs were a success before the web ever came along.

The distinguishing feature of a mobile device is not just that you can move it around, but that it (generally) stays connected when you do. A good portion of the pizzazz of an iPhone etc. comes from its webbiness. Not only is there an app for that, you can get it right now, and chances are that app interacts with the web in some essential way.

Now, you can have mobile devices without the web. The first generations of cell phones were exactly that. Nonetheless, as Jobs asserts and I tend to agree, the real potential of mobile devices comes from their fit with the Web As We Know It. The tighter the fit, the better.

Sunday, January 31, 2010

Apple: innovation vs. breakthrough

From time to time, Apple announces its latest creation. Even if you're not a particular follower of Apple, you can tell it's coming by the steady stream of breathless press releases. "Rumor has it Steve Jobs is about to announce ..." Apple didn't get into this happy position by chance. The company has been extraordinarily successful in building its brand through a highly effective combination of engineering and marketing.

Before I go on, let me be clear: That's a compliment.

Rather than dig into the details of the latest product, I want to put forth a small thesis: Apple has succeeded not by producing stunning technological breakthroughs, but by expertly pulling existing pieces together to fill a gap in the mass market.

That's also a compliment.

Looking at a timeline of Apple products, it's clear that this goes all the way back to the original Apple I, a pre-assembled motherboard at a time when kits were popular. Likewise, the Apple II (or Apple ][, if you prefer) came fully-assembled and had color when its main competitors didn't, but it certainly wasn't the first computer with a color display. The Lisa and Macintosh brought Xerox's UI breakthroughs out of suspended animation at PARC to the world at large, but they weren't the first systems with mice and windows.

And let me pause here for another point: Just as much brain sweat can go into synthesizing existing pieces as creating new ones. The hacks behind the Apple II's handling of color and sound were ferociously good. I had the privilege of seeing an early (pre-release, I think) Lisa in college. I was suitably impressed. The hardware geeks in the room managed to persuade the sales rep to pull the cover off and, as I recall, made similar appreciative noises.

The Lisa wasn't really ready for prime time, but the Mac certainly was. The ROM carrying its graphics and other system support was known for having squeezed more functionality into 64KB(!) than should have fit, and of course the coherence of the whole concept and execution laid crucial foundations for the brand we know today.

If you prefer to call those achievements engineering breakthroughs, I won't argue. My point is more that Apple didn't win by inventing color, or the GUI, or the portable music player, but by bringing them to market early and very, very well.

I think this is why I tend to find Apple's announcements exciting and underwhelming at the same time. The iPod, mini and nano followed in logical succession. With them, and their cousin the iPhone, the news was not just that Apple had hit the existing digital music player and cell phone with its pretty stick, but that it had managed to partner with major players to give them a decent shot at working. Someone was bound to do that, might as well be Apple.

The MacBook Air is a cool piece of engineering, but however much you hype it, it's a thinner, lighter MacBook. The latest offering is, as Jobs himself says, halfway between an iPhone and a MacBook. What's worth noting here is that Apple is re-entering a market where it and others have stumbled (remember the Newton?) not by jumping into the void but by anchoring firmly to two existing successes.

This is the usual way we progress, on the web and off. Breakthroughs are important, but they don't make it out of the lab until someone puts them into usable form and brings them to the world at large. This generally means connecting the breakthrough to enough of the familiar that people will know what to do with it. Apple does this as well as anyone and (with inevitable missteps) has from the outset.

Wednesday, November 26, 2008

CD Player. Comes with music.

This is take two of the post I was trying to write when I ended up writing about BodyNet instead.

Technically, there's not a lot of difference between a cell phone and a streaming audio player. Throw in some flash memory and downloaded tunes are no problem either. Add a screen and you can say the same thing for video. But how do you get the content to the phone? Two models spring to mind:
  1. A big happy open web-driven marketplace. Surf wherever you want. Find something you like? Download it to your phone just like you'd download it to your PC. Pay whoever you need to when you download (or pay for a subscription). This is pretty similar to the CD/DVD market. Sounds nice, but as far as I know you can't do it. It's a lot easier to do DRM on a captive device like a cell phone, and cell phone makers are pretty aggressive about making sure you don't tamper with their devices.
  2. A collaboration between the content owners (i.e., studios and record labels, not to be confused with singers, songwriters, screenwriters, actors etc.) and the service providers. Subscribe to a service and you can also download or stream content from whatever content owners the provider has partnered with. This is pretty similar to the cable TV model. It ensures that everybody gets a cut (as always, we can argue over who gets what cut) and a number of partnerships have formed.
There's another model that doesn't come to mind because when you try to map it back to "old media" terms, it doesn't really fit. Yet there are at least two examples going, one of them recent:
  3. The cell phone makers sell the content. As the title suggests, this seems like selling a CD player and then selling the CDs to go with it. You see this in niches (e.g., Disney makes an MP3 player and sells plug-in cards with songs from their artists), and I wouldn't be surprised if some early phonograph maker tried it, but it doesn't seem like a great idea. Selling electronic widgets and selling bits are just two different things. Nonetheless, it certainly worked for Apple and the iPod/iPhone, and now Nokia is trying the same approach with Comes With Music (TM). It's not quite the same model as iPhone -- for a subscription fee, you can download all you want and keep it forever -- but it does share the feature of putting the phone maker in the content business.
So maybe they know something I don't. Wouldn't be the first time.

Friday, September 28, 2007

This iPhone will self-destruct in five seconds

Two questions come to mind about Apple's recent iPhone update, which, as Apple had warned, makes hacked iPhones inoperable:

Who's better off for this? Owners of hacked phones now have $500 paperweights. Granted, they were warned and I would think were in violation of some license or service agreement. There are reports that some owners of non-hacked phones have lost contact data and possibly the use of their phones. Apple comes off looking like The Man instead of The Rest of Us, thereby calling down the wrath of hackers everywhere, but what were they going to do? The one group clearly to gain is makers of whizzy phones that aren't locked to a single carrier and/or don't self-destruct if you try to unlock them.

Just how does the self-destruct feature work? Apple asserts that the hacked phones are now "permanently inoperable". Did the update fry some hard-to-replace chip? If not, just what claim is Apple making? Clearly the self-destruct update will have left affected phones unable to receive further updates the usual way. But is it impossible even in principle to re-load the OS, for example by copying the image from a working phone? I would expect it to be difficult -- dongle-based copy protection is a lot easier to pull off for something highly integrated like a phone -- but could not even Apple do it back at the factory? [My understanding is that they just re-flashed the firmware and that Apple could fix such a phone at the factory (but has no reason to). In some cases, such a phone might also be fixable without help from Apple.]

Thursday, August 23, 2007

E-Tickets and copy protection

(Back in the day, back before my day, "e-ticket" meant the best rides at Disneyland -- or maybe the Pasadena Freeway. I forget.)

Two questions come to mind about buying tickets to events online:
  • Do they have to call the because-we-can and the because-the-venue-can fees "convenience charges"? Just exactly whose convenience are we talking about here?
  • Copy protection always fails. Why doesn't that matter here?
I can't answer the first one, but the second one is easy. You can make as many copies of an electronic ticket as you want. Knock yourself out. All you're really doing is copying a number. But you can only use that number once.

It says so right on the ticket, something like "This ticket may only be scanned once." If someone gets hold of one of the copies you've made and they get to the gate first, you're out of luck. Sorry, that ticket, meaning the magic number and not the piece of paper it's on, has already been used.

That's interesting, actually. You can make as many copies as you want, but you don't want to make any more than you have to. Enforcement is by incentive, not prohibition. This is a bit different from airline tickets, which are tied to an individual so that the would-be ticket thief will also need to forge your ID. I remember being a bit surprised that I didn't have to show ID to use an e-ticket to a show.
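
The venue's side of this is almost trivially simple in code. Here's a hypothetical sketch (the class and method names are mine, and a real system would also verify that the number is genuine in the first place):

    import java.util.HashSet;
    import java.util.Set;

    // The gate scanner remembers which ticket numbers have been redeemed.
    // Copies of a number are harmless; only the first scan gets in.
    public class GateScanner {
        private final Set<String> redeemed = new HashSet<>();

        public synchronized boolean admit(String ticketNumber) {
            // add() returns false if the number was already in the set,
            // i.e. someone already got in on this ticket.
            return redeemed.add(ticketNumber);
        }
    }

The first call to admit("ABC-123") returns true and opens the gate; every later call with the same number returns false, no matter how many copies of the ticket exist.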

The underlying principle is that copy protection exists in the physical world. Things can only be in one place at a time and a particular event only happens once. To get copy protection in the virtual world, you have to tie your virtual object to something in the physical world, either an object or an event.

Copy protection based on objects has a long history. It worked fine when copying was physically difficult. Printing a book ties the virtual object (the contents of the book) to physical ink on a physical page. For centuries this was something most people couldn't easily do. Even with a photocopier, it's not something most people could do very cheaply or easily.

Since this approach worked so well, it's no surprise that a lot of early copy protection schemes tried to emulate it. I remember writing code that every so often talked to a dongle hanging off the printer port of a PC to make sure the thing was still there, and shut down the application if it wasn't.

The dongle itself was (according to its manufacturer) based on strong encryption, so you weren't going to be able to make a working copy of the dongle for your friend without factoring some infeasible-to-factor numbers. But I could never figure out why you'd have to.

The tie between the virtual object (the app) and the physical one (the dongle) was inherently weak. It couldn't be too hard for someone to figure out what part of the code talked to the dongle and replace it with something that only pretended to.
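
Schematically, the weak link looks something like this (a made-up sketch, not any actual product's code):

    // However strong the dongle's crypto, the application ultimately
    // branches on a single boolean -- and that branch is what gets patched.
    public class ProtectedApp {
        public static void main(String[] args) {
            if (!dongleIsPresent()) {  // a cracker rewrites this check
                System.exit(1);        // (or the exit) to always succeed
            }
            runApplication();
        }

        // Hypothetical stand-in for the code that actually challenged
        // the dongle over the printer port.
        static boolean dongleIsPresent() {
            return false;  // placeholder for the real hardware probe
        }

        static void runApplication() {
            System.out.println("running...");
        }
    }

All the infeasible-to-factor strength lives inside dongleIsPresent(); none of it protects the one jump instruction that acts on the answer.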

I remember spending quite a bit of time trying to design dodges like putting the dongle-handling code in some encrypted dynamic module, but it never took long to figure out a way around that, too. I'm pretty sure I later ran across papers in the literature saying the same thing more rigorously, and evidently the market came to the same conclusion. You don't see dongles anymore.

The same basic story has played out repeatedly. CDs worked fine until everyone had a CD burner (so copying the CD was easy) or an MP3 player (cutting the virtual/physical tie so you didn't need a CD player to hear a song). DVDs are more or less in the same boat now (and I'm not even talking about CSS).

The iPod effectively tried to tie iTunes songs to the player, but Apple's heart was never really in it. If nothing else you could burn songs to CD and re-rip them for your favorite player (so much for CDs as a copy-protection mechanism!). Certain recent operating systems appear to try to tie playback to physical artifacts like MAC addresses, but at this point I'm thinking I've seen this movie before and I know how it ends.

On the other hand, tying virtual objects to events seems to fare rather better. E-tickets work fine and rake in tons of money in because-we-can fees. Smartcard-based authentication systems are a variation of the same theme. A particular magic number will only flash on the display once and the server knows when that will happen.
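
That idea was later standardized as time-based one-time passwords (RFC 6238): token and server share a secret and a clock, so the server can compute the same short-lived number the token displays. A minimal Java sketch of the algorithm:

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.ByteBuffer;

    public class Totp {
        // Derive the six-digit code for the 30-second window containing
        // the given time (HMAC-SHA1 plus the RFC 4226 truncation step).
        static int code(byte[] sharedSecret, long unixSeconds) throws Exception {
            long counter = unixSeconds / 30;  // which time window we're in
            byte[] msg = ByteBuffer.allocate(8).putLong(counter).array();
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(sharedSecret, "HmacSHA1"));
            byte[] h = mac.doFinal(msg);
            int off = h[h.length - 1] & 0x0f;  // dynamic truncation offset
            int bin = ((h[off] & 0x7f) << 24) | ((h[off + 1] & 0xff) << 16)
                    | ((h[off + 2] & 0xff) << 8) | (h[off + 3] & 0xff);
            return bin % 1_000_000;
        }
    }

Because both sides run the same computation, a number that has already been seen, or whose window has passed, is worthless -- exactly like a scanned ticket.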

I'm not sure how broadly this applies, though. In both the cases above the virtual object is the key to something physical of interest. It's not the object of interest itself. If it's the (virtual) content that's of interest, it's not clear that the tie to the physical world can ever be ironclad.

I was about to cite live broadcasting as an example, but this really depends on control of the broadcast mechanism. There's no technical reason I couldn't take the picture on my TV screen and stream it to all my friends and have a big virtual pay-per-view party. I personally don't have the bandwidth for such things, not to mention it being illegal, but the bandwidth will be there sooner or later and illegality won't deter everyone.

Other models are on even shakier technical ground. Producing an advertising-free public copy of a particular news source or private database is not a problem technically. It's an interesting question why it doesn't happen more.

Is it because The Man can come after someone who put up such a site (and why put it up unless lots of people will find out about it)? Is it because people don't like breaking the law? Is it because most people intuitively understand that if writers can't make money there won't be any content to steal? Or is it maybe because people just don't mind ads that much and it's not worth trying to pirate the content?

My guess is that it's a combination of all of those factors. It has to be something. Technology is not going to protect content. For most mass-market applications "strong" copy protection, where the virtual/physical tie is inherently strong and does not depend on control of some particular mechanism, seems doomed from the beginning.

That leaves a web (if you will) of legal and social constructs. Same as makes the rest of the world go round.