Thursday, June 2, 2022

Check out this new kitchen hack!

In case that title somehow clickbaited you to this quiet backwater, no, this isn't really about cooking, but for your trouble: The easiest and least tearful way I know to slice onions is to cut them in half lengthwise, so each half has a little piece of the roots holding it together.  If you think of the roots as the South Pole and the stem end as the North Pole, the first slice is from pole to pole.

Chop off the stem end and peel off the outer layers, under cold running water if that seems to help (I think doing this after halving is a little easier than slicing the stem off first, but your mileage may vary).  Put the halves down on the flat side and slice vertically with the slices parallel, also running north-south.  Julia Child recommends another pass, horizontally, still slicing north-south, and who am I to argue?  At this point, the root and the shape of the onion layers are still holding everything together.  Finally, slice vertically, but with the slices running east-west.  Each cut slices off a little pile of nicely diced pieces.

This isn't new -- I first heard about it on a Chef Tell segment many years ago, Mastering the Art of French Cooking came out in 1961, and I'm sure it's been around much longer -- but it works a charm.  Bon appétit, and remember that a dull kitchen knife is more dangerous than a sharp one.


So it's not new, but is it a hack?  And what's with all these "life hack" articles that have nothing to do with writing clever code?

For my money, the onion-dicing method is absolutely a nice hack.  A hack, really, is an unexpected way of using something to solve a problem.  The usual way to dice something is to slice it, then cut the slices crosswise into strips, then cut the strips crosswise into little dice.  If you try that with an onion, the root is in the way of the north-south slices described above, and the easy way to start is to slice it east-west, into rings.  You then have to dice up the rings, which are hard to stack since they come apart as you cut, like to slide around and separate into individual rings, and have a lot of exposed surface area giving off tear-producing onion fumes.  In short, you have a mess.

The chef's method takes advantage of the two things that otherwise cause problems:  It uses the root end to hold things in place and keep the exposed area to a minimum, and it uses the layering of the onion to save on cutting (if you omit the horizontal slices, as I usually do, you still get decently-diced pieces, good for most purposes, just a bit coarser).  This is the essence of a hack: using something in a non-obvious way to get the result you want.  It's particularly hackish to take advantage of something that seems to be an obstacle.

Not every hack is nice, of course.  The other popular meaning of hacking, the one many geeks, myself included, find annoying -- the computing analog of breaking and entering or vandalizing someone's property -- stems from a particular type of hacking: finding unexpected vulnerabilities in a system and taking advantage of them to break the system's security.  As I've discussed at length elsewhere, this isn't necessarily bad.  White hat hackers do just this in order to find and patch vulnerabilities and make systems more secure.  The annoying part isn't so much that hack is associated with breaking and entering, but that it's associated with any kind of breaking and entering, regardless of whether there's any skill or actual hacking -- in the sense of making unexpected use of something -- involved.

I should note somewhere that hack often has negative connotations in software engineering for a completely different reason: If you take advantage of some undocumented feature of a system just to get something working, you have a fragile solution that is liable to break if the system you're hacking around changes in a future update.  In widely-used systems this leads to Hyrum's law, which basically says that people will write to what your system does, regardless of what you say it does, and with enough people using it, any externally visible change in behavior will break someone's code, even if it's not supposed to.
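
To make that concrete, here's a minimal sketch in Python (the names and the status-string format are made up for illustration; this isn't any real library's API).  The documented contract only promises a human-readable status string, but a caller scrapes a number out of the exact wording, and that hack keeps working only as long as the wording never changes:

    # A made-up illustration of Hyrum's law; none of these names come from
    # a real library.

    def job_status(job_id: int) -> str:
        """Documented contract: returns a human-readable status string.
        The exact wording is not part of the contract."""
        return f"job {job_id}: done, 3 files written"   # today's wording

    def files_written(job_id: int) -> int:
        # The hack: parse the undocumented wording, because there's no
        # structured API for this.  Fragile by construction.
        status = job_status(job_id)
        return int(status.split(",")[1].split()[0])

    print(files_written(42))   # prints 3 -- until an update rephrases the status

With enough callers doing this sort of thing, the exact wording has quietly become part of the interface, whether the library's author meant it to be or not.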

Hacking lives in gray areas, where behavior isn't clearly specified.  "Dice this onion with this knife" doesn't say exactly how to dice the onion.  Someone taking advantage of a quirk in an API can usually say "nothing said I couldn't do this".  There's nothing wrong with unspecified behavior in and of itself.  It's actively helpful if it gives people latitude to implement something in a new and better way.  The trick is to be very specific about what can happen, but put as few restrictions as possible on how.

There's an art to this.  If you're writing a sorting library, you could say "It's an error to try to sort an empty collection of things".  Then you have to make sure to check that, and raise an error if the input is empty, and whoever's using your library has to be careful never to give it an empty collection.  But why should it be an error?  A collection with only one thing in it is always sorted, since there's nothing else for it to get out of order with.  By that reasoning, so is an empty collection.  If you define sorted as "everything in order", that raises the question "but what if there isn't anything?".

If you define sorted as "nothing out of order -- no places where a bigger thing comes before a smaller thing", then the question goes away.  If there isn't anything in the collection, nothing's out of order and it's already sorted.  In math, something is vacuously true if there's no way to make it false.  "Nothing out of order" is vacuously true for an empty collection.  Often, allowing things to be vacuously true makes life easier by sidestepping special cases.
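
Here's a small sketch of that in Python (my own illustration, not any particular library): define sorted as "no adjacent pair out of order" and the empty and one-element cases take care of themselves, with no special-case checks in either the library or its callers:

    # "Sorted" as "nothing out of order": no place where a bigger item
    # comes immediately before a smaller one.  Purely illustrative code.

    def is_sorted(items: list) -> bool:
        return all(a <= b for a, b in zip(items, items[1:]))

    print(is_sorted([]))         # True, vacuously: no adjacent pairs at all
    print(is_sorted([42]))       # True, vacuously
    print(is_sorted([3, 1, 2]))  # False: 3 comes before 1

    def sort(items: list) -> list:
        # The contract is about what comes out (a sorted rearrangement of
        # the input), not how it's produced.  An empty list needs no
        # special case; it's already sorted.
        return sorted(items)

    assert is_sorted(sort([]))
    assert is_sorted(sort([2, 1, 3]))

The same checker works for every input, which is exactly the payoff of defining sorted so that the empty case comes out vacuously true.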

As a general rule, the fewer special cases you need to specify what happens, the easier a system is to write and maintain, the more secure it is against unwanted forms of hacking like security exploits and Hyrum's law, and the friendlier it is to good kinds of hacking, like people finding clever new ways to improve the implementation or to use the system.


So what about all this "life hacking"?  Should people use computing jargon for things that have nothing to do with computing?  I have two answers.

First, the term hack isn't really about computing.  It's about problem solving.  The first definition in the Jargon File (aka the Hacker's Dictionary) is "Originally, a quick job that produces what is needed, but not well", with no mention of computing, and elsewhere it attributes early use of the term to ham radio hobbyists.  As it happens, the actual definitions of hack in the Jargon File don't really include "using something in a non-obvious way to get the result you want", but I'd argue that the definition I gave is consistent with its "The Meaning of 'Hack'" section.

Second, though, even if hack was originally only applied to coding hacks, so what?  Language evolves and adapts.  Extending hack to other clever tricks reveals something new about what people are trying to get at by using the word, and in my view it's a lot better than restricting it to security exploits, clever or not.  Sure, not every "kitchen hack" or "life hack" is really that hackish, and headline writers are notoriously pressed for time (or lazy, if you're feeling less generous, or more apt to make money with clickbait, if you're feeling cynical), but there are plenty of non-computing hacks floating around now that are just as hackish as anything I've ever done with code.