Malcolm Gladwell: Blink

[editorial note: this review/essay/whatever was originally published as three separate entities over the course of a month.]

surprise benefits of pseudo-vegetarianism

I’ve been reading Malcolm Gladwell’s Blink in fits and starts over the past two months — it’s on the library’s short-term loan list, so I request it, read as much as I can before it’s due, return it, and repeat. I don’t think it’s a bad way to read such an information-dense book; it provides opportunities to digest and reflect on Gladwell’s theses.

I don’t think he delivers on the implicit promise that has made his books bestsellers among business readers. The Tipping Point provides tools for understanding why some messages — like teen anti-smoking campaigns — don’t “stick.” But it doesn’t provide tools for making messages stick. I think that’s because a society’s response to stimuli is fundamentally chaotic. Ensuring that any particular meme spreads is impossible. Even Steven Spielberg directed an unequivocal flop once.*

Blink suffers from a similar problem: it identifies situations in which rapid intuitive assessments — “thin-slicing,” in Gladwell’s parlance — are invaluable, and other situations in which they’re extremely harmful. It doesn’t provide foolproof guidelines for distinguishing “good” thin-slicing from “bad.” Again, I don’t think it’s a soluble problem.

I’m not an expert on cognition; I’m a lay person with probably just enough information to be dangerous. But I think a major component of what makes for human intelligence is that our brains are abstract pattern-recognition machines. The engine that recognizes individual human faces is the same engine that sees animal shapes in clouds and inkblots. I think it’s always going to be subject to errors, particularly in high-stakes situations that require snap judgments: “He’s drawing a gun!” versus “He’s pulling out his wallet.”

Even if I don’t think Gladwell’s books quite live up to their hype, they’re informative, provocative, fascinating, and lucidly written.

For instance, his account of Sheena Iyengar’s research on consumer choice provided insight into something that’s intrigued me for the past decade. Iyengar found that customers given an opportunity to taste 6 jams in a store were far more likely to make a purchase than customers who had a chance to taste 24 different jams.

I’m a pseudo-vegetarian. This generally makes dining out straightforward: most of the menu is automatically excluded from consideration. I usually pick from the small set of available options rapidly and without much conscious deliberation. When I dine at a vegetarian or seafood specialty restaurant, I have a larger field to winnow. My selection process is radically different (and much slower). I typically try to find the entrée that maximizes features I like: the one with the ginger, tofu, and straw mushrooms. Sometimes I experience a kind of stress that’s unusual for me: no dish has the poblano pepper sauce, guacamole, and melted jack cheese; I can only get different combinations of two of those ingredients. Then I feel vaguely dissatisfied with a meal that I would unhesitatingly and happily choose if I had fewer options.

Iyengar’s research suggests that this behavior isn’t just me-being-weird. Gladwell’s synthesis provides a framework for understanding it: I “thin-slice” among a few choices, but not among a dozen.

*Of course, Gladwell has certainly “tipped” his own books, so maybe, just maybe, he knows something about hidden marketing levers that he’s not sharing.

the warren harding error error

In Blink, Gladwell devotes a chapter to exploring what he calls the “Warren Harding error.” He contends that the primary reason for Harding’s political success was that the man looked presidential.

Gladwell doesn’t apply this line of reasoning to politicians of the current era (although later he does quote Paul Ekman — who, with Wallace Friesen, assembled the “Facial Action Coding System” — claiming that in 1992 he saw Clinton’s tendency toward marital indiscretions literally written on his face).

Whatever I thought of his policies or the abilities he brought to the job, I think I have to concede that Ronald Reagan looked presidential (at least some of the time). He was certainly always too much the gunslinger for my taste. But he could be dignified without entirely losing the humanizing mischievous twinkle in his eyes. If he’d been an actor cast in the role of the president, I think I could have bought it.

The real mystery is the election — twice, yet — of George Walker Bush. The presidential debates of 2004 crystallized this for me. John Kerry, with his imposing height and resonant voice, looked and sounded presidential. His opponent looked like a used-car salesman by comparison: shifty-eyed, almost sneering, his voice often distinctly petulant if not actually whining.

And yet he won. Where are you now, oh Warren Harding error? Come back. We need you.

In other news, I took a few of the Implicit Association Tests Gladwell describes in the same chapter (it’s essentially the “be careful about judging books by their covers” segment of the book). Gladwell (and Greenwald, Banaji and Nosek, who developed the tool) claim that the test design is effective even when you know you’re being tested (unlike many sociological tests).

I’m not convinced. I took a test designed to identify an “implicit association” (e.g., an ingrained unconscious bias, more or less) for males/sciences and females/liberal arts. I was prompted by the survey I took beforehand to think fleetingly of famous scientists like Ada Lovelace and Marie Curie, and famous creative types like Julio Cortázar and Pablo Picasso. My biggest problem was that every time I was shown the words “history” and “philosophy” I had to consciously think “soft science? or liberal art?” But taking the test to the best of my ability still produced outlying data.

Then I took a test to identify implicit associations between ethnic groups and positive and negative concepts. When I was told I was supposed to associate images of Caucasian men with negative concepts and images of black men with positive concepts, I muttered “black, good; white, evil” under my breath. No sweat.

deli slices of security

I was initially critical of Malcolm Gladwell’s Blink for not delivering on its implied promises, but I’ve revised my opinion of it substantially. It’s had a real impact on the way I think about certain types of situations. I still don’t think it provides a foolproof method for applying its principles, but it does offer tools for identifying problematic patterns in processes. As one example, it provides a framework for examining my misgivings about approaches to security in the post-September 2001 United States.

The administration argues that the lack of major terrorist incidents within the US demonstrates the effectiveness of the Homeland Security and Transportation Safety initiatives. This argument is obviously specious. The lack of a major incident in the first half of 2001 scarcely proved that the US was well-protected from a terrorist attack in the second half of the year. And the penetration of the new system by the “shoe bomber” and razor-blade-toting blog readers (for example) makes a strong case that the new system is not necessarily more effective at threat identification than the old system.

Back when the major concerns of airport security were preventing the influx of drugs and illegal (but peaceable) aliens, I was involved with a competitive bid to develop training for the Immigration and Naturalization Service. As part of the effort, members of our team accompanied INS personnel on airport security details and took some of the courses given to the agents. (For the record: all of the material I was exposed to was unclassified.) It was obvious that the most effective agents relied heavily on the sort of intuitive assessments Gladwell describes in Blink. In particular, they were very good at identifying people who had something to hide. Other people have written about the hazards of inexperienced personnel and over-reliance on trickable technology. But I wonder: does a process that makes all passengers nervous and uncomfortable make it fundamentally easier for people with malicious intent to slip through?

As part of my ongoing research on improving MBTA usability, I’ve been listening to the chatter between MBTA dispatchers, bus drivers, train operators, station managers, and other staff. Shortly before Christmas, toward the end of evening rush hour, I heard an exchange that went like this:

We have an incident of an unattended package that has been sighted on the east platform of [station name].

About half a minute later, I heard the following reply:

A passenger forgot her package. She’s on her way back to the platform to retrieve it now. Please just let her get her bag.

In Gladwell’s parlance, I felt that I had ample opportunity to “thin slice” the conversation. The first speaker was officious, with a pseudo-military quality that verged on pompous. He used the passive voice and awkward, redundant, and jargon-y terminology.

The second speaker was clearly fed up with the first speaker. I had the distinct impression it wasn’t the first such conversation. The tone of voice — and the word “please” — suggested that the speaker thought it was unlikely that the woman would be allowed to get her bag back without additional hassle.

The second speaker had a good opportunity to make a realistic assessment of how likely the passenger was to pose a terrorist threat. The second speaker implied face-to-face contact with the passenger — who was probably cramming in last-minute shopping on the way home from work, and carrying one package too many. The first speaker was making decisions on the basis of a blurry picture on a monitor and (I suspect) a procedural manual revised in the wake of September 2001.

I’ve spent much of my career working on training products for state and federal agencies, and I think it’s likely that the new rule book specifies that any unattended package must go through the full threat evaluation procedure, no matter what the station manager recommends. After all, there’s always a chance that the station manager has somehow been coerced into making a false statement.

The problem is, this approach just doesn’t work. Being on high-alert forever is the same as not being on alert at all — people aren’t wired to maintain peak vigilance indefinitely. Procedures that are excessively cumbersome will eventually be disregarded. And while I understand that discounting the judgment of those closest to a potential threat situation may protect the MBTA from liability, I’m far from convinced that it’s the best way to actually increase the overall safety of the system.

Needs More Demons? No. I’m not even going to make a corny joke about devils in details.

Steve Squyres: Roving Mars

You could be excused for thinking that Roving Mars: Spirit, Opportunity, and the Exploration of the Red Planet is a science book. It’s got a Martian landscape on the front cover, and the author was the “Principal Investigator” of the projects it chronicles. If you’re not careful, you might even learn a little bit about geology.

Mostly, though, Roving Mars is a book about project management. Squyres often speaks, somewhat disconcertingly, about “doing science” as if science is merely a product of having assets correctly positioned, in the same way that a movie’s revenue is the product of having copies of the film in theatres. He admits that, from his perspective, one of the critical goals of the Spirit and Opportunity missions was to justify more Mars missions, in the same way a successful product generates more demand in the marketplace.

Much of the ground Squyres covers will be familiar to anyone who’s managed a difficult project (perhaps especially a software development effort). He covers initial brainstorming; marketing and proposal development; forming strategic alliances with competitors; the struggle for budgetary, schedule, and manpower resources; risk mitigation strategies; motivational techniques; benefits and drawbacks of delegation and outsourcing; troubleshooting and quality assurance; and approaches to consensus-building and fostering effective decision-making. It’s a fast and engaging read. Several chapters are written in the form of Squyres’ journal entries, which gives it a “you are there” sort of immediacy. For a book about project management, it’s often surprisingly suspenseful and moving, and Squyres’ “boldly go where no one has gone before”-style enthusiasm is palpable.

Throughout he makes a solid case for his own talents as a manager (despite his penchant for tantrums). And throughout he reinforces my growing sense that there is something fundamentally and systemically wrong with the current best-practice management of complex engineering development efforts.

The Mars rover project is repeatedly stymied by mistakes that simply shouldn’t be made: instruments designed to work sideways but not upright, confusion between English and metric units, pieces that are fabricated to the wrong size. It’s perhaps especially disheartening to compare these errors to the highly-publicized mistakes NASA has made in recent history, from grinding the Hubble’s mirror to the wrong spec to the material science failures that cost the lives of space shuttle astronauts.

Also disturbing — but eerily familiar to me — was the degree to which the developers of the Mars rover software were unable to predict its behavior. I was shocked by how frequently the rover team was faced with problems I’ve faced with notoriously buggy commercial software. Computer that crashes as soon as it boots up? Been there, fixed that. Corrupted flash memory? Ate my second cellphone alive.

I’m convinced that the issue isn’t stupidity or incompetence on the part of the team, not just because these folks have high-falutin’ degrees in their fields, but also because every smart team I’ve had a chance to observe or directly work with — including some folks who made me feel positively dim — has made similarly obvious mistakes on sufficiently complex projects. On the biggest projects I’ve been associated with, it was sometimes painfully obvious that no single person understood the whole requirements document. I once saw a data entity diagram that covered a large conference room wall from floor to ceiling. I saw team members literally start sobbing when it became evident that fundamental assumptions underlying that diagram — which represented over a year of work and several million dollars — had never been valid.

I’ve begun to think of it as a big picture/little picture problem. When teams are stovepiped, each group can do its “little-picture” work and check and resolve its internal errors. On small, well-characterized projects, group leaders can grasp the “big picture” at a level of detail that permits identification and resolution of problems that cross group lines. But on projects that are bigger and more uncertain, it becomes impossible for anyone to grasp the gestalt of the project at a sufficient level of detail. Things start to slip through the cracks.

Since Malcolm Gladwell’s books — particularly The Tipping Point — have had more influence on my thinking than any others in a decade or so, I’m inclined to wonder if large engineering projects are being constrained by the fundamental limits of human cognition. I’m even tempted to wonder if Gladwell’s “magic number” 150 might crop up somewhere in a calculation of maximum manageable size.

I don’t think the problem is insoluble, but I think it calls for new techniques for asserting correctness. There are mathematical methods for “proving” the correctness of software. They’re seldom applied in the real world, partly because they’re cumbersome and expensive, but also, I think, because they rely on not changing requirements during development. I argue that since no one ever understands the requirements for complex projects, it’s almost inevitable that the requirements will change when one or more deficiencies are identified midstream. My anecdotal experience suggests strongly that many serious engineering errors arise from failure to understand the consequences of a requirements change during the development cycle.

The engineering development process of the future should attack this problem from three angles:

  • The requirements definition phase must systemically address the inability of humans to fully characterize the behavior of extremely complex systems.
  • Throughout the development cycle it must embody consistency checks that prevent errors of the English/metric variety.
  • Throughout the development cycle it must explicitly maintain the constraints on its own behavior, so that flaws resulting from requirements changes are immediately evident.
    (Software often has implicit constraints, e.g., it only works if only one document is open. Currently, information about these constraints may only exist in the mind of a single developer.)
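The second bullet — consistency checks of the English/metric variety — can be made concrete with a toy example. The sketch below is purely illustrative: the `Quantity` class, unit names, and conversion factor are my own invention, not anything from the book or any agency’s actual tooling (a real project would reach for a dedicated units library). It shows how carrying units alongside values turns a unit mix-up into an immediate, loud failure instead of a silent corruption:

```python
# Minimal sketch of a units-consistency check: every quantity carries its
# unit, and arithmetic refuses to silently mix incompatible ones.

METERS_PER_FOOT = 0.3048

class Quantity:
    def __init__(self, value, unit):
        self.value = value
        self.unit = unit

    def __add__(self, other):
        # Refuse to add quantities whose units don't match.
        if self.unit != other.unit:
            raise ValueError(f"unit mismatch: {self.unit} + {other.unit}")
        return Quantity(self.value + other.value, self.unit)

    def to_meters(self):
        # Conversion is always explicit, never implicit.
        if self.unit == "m":
            return self
        if self.unit == "ft":
            return Quantity(self.value * METERS_PER_FOOT, "m")
        raise ValueError(f"no conversion from {self.unit} to m")

altitude = Quantity(100.0, "m")
correction = Quantity(30.0, "ft")

# altitude + correction would raise ValueError immediately --
# exactly the kind of loud, early failure the process should guarantee.
total = altitude + correction.to_meters()  # explicit conversion: OK
print(total.value, total.unit)
```

The point isn’t the ten-line class; it’s that the check lives in the system itself rather than in any one engineer’s head, which is what the third bullet asks for as well.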

Two other takeaways from Roving Mars:

  • Good golly, rocket scientists drink more than I would have guessed.
  • Wow, a lot of Mars probes have just flat out disappeared. Some enterprising sci-fi writer ought to be able to get at least a short story out of the conceit that the Martians shoot down any probe that gets too close to their cities, and play games keeping just out of camera range of the ones they allow to land.

Needs More Demons? No, Squyres’ project is plenty bedevilled.

Jen Banbury: Like a Hole in the Head

I’m not a big fan of movies that rely on “twist” endings. I think the value of surprise as an artistic technique is easily overrated. If it’s not a good movie when you know the ending, it’s just not a good movie, period.

But on the other hand, it can be really rewarding to see a film with no preconceptions at all. Surprises can be fun (even if they’re not sufficient to redeem a bad flick). I treasure some of the film experiences I’ve had where I knew nothing about the movie I was about to see. (I used to belong to a preview club that screened films that hadn’t yet secured distribution deals; I miss it.) I’m glad I went through the effort necessary to see The Blair Witch Project and Memento without much foreknowledge.

Some years ago I read Like a Hole in the Head, Jen Banbury’s first (and, sadly, still only) novel. Like a Hole in the Head starts in a very conventional light comic mystery mode, and abruptly turns into a completely different sort of book. Knowing its genre could reduce some of the joy of a first reading, even if it wouldn’t exactly constitute a spoiler. And — this is the tricky part, and the reason I never tried to review the novel — even knowing in advance that there will be a shift in narrative tone and focus could lessen its impact.

The film Incident at Loch Ness left me with a similar feeling. It reminded me powerfully of a wonderful TV miniseries that I don’t think I should name. If you think you agree with me about what makes for a good movie, I urge you to just see it without reading another word about it. It’s directed by Zak Penn (perhaps best known as the screenwriter of X-Men 2) and features a scene in which Werner Herzog shops for razor blades. Agh, I’ve said too much already.

I will succumb to the temptation to mention that the DVD commentary track is entirely worthwhile, and then I will shut up.

Needs More Demons? Nope.

Jack Vance: The Killing Machine

It’s apparently de rigueur to mention that the stories of (currently popular and prolific) SF writer Matthew Hughes owe a debt to the Old Earth stories of Jack Vance. Vance is one of those old-school SF writers from whom I always meant to get around to reading something, but never quite did. In fact, although I didn’t have any of his Old Earth stories in particular, I long ago squirrelled away a few of his “Demon Princes” novels. I just read the second, The Killing Machine.

Killing Machine cover art by Gino D'Achille

I found it rather unintentionally hilarious. It’s certainly not fair to fault a work of speculative fiction from another generation (this one was written in 1964) for failing to anticipate developments like personal computing and the Internet. Nonetheless, it’s hard to read with a straight face a scene in which a guy has to compute square roots with his slide rule, or in which the closest analogue to a database search requires flying to a planet where you can look things up.

It might likewise seem unfair to criticize Vance for the reflexive, unexamined sexism of his work, but not all of his contemporaries exhibit that deficiency. James H. Schmitz, for instance, in the 1950s and ’60s portrayed a similar interstellar cosmopolitan society which happened to include several tough, smart female characters. He didn’t even make a big to-do over his female characters’ toughness or smartness; his male characters accepted female equality as a natural state of affairs. (Many of Schmitz’s stories have recently been reprinted in several hefty anthologies from Baen books. I loved these tales when I was a teenager, and I was delighted at how unembarrassing they were to return to as an adult.)

One clear similarity Vance shares with Hughes is both writers’ frequent — even excessive — use of the passive voice to evoke a general air of sophistication. Vance winds up evincing the stiltedness of 19th-century prose without much of its grace or music; Hughes (who I think deserves roughly half of the hype he seems to have) fares a little better with the device, mostly because he can write dialogue that’s not patently ludicrous.

Once you subtract the spaceships and rayguns, The Killing Machine is basically a cops and robbers story. Ubervillain Kokor Hekkus (one of the titular “Demon Princes,” and one of the two titular “Killing Machines” — the other is the pictured giant mechanical 36-legged arthropod, which for some obscure reason is referred to as a “mobile fort”) is engineering a rash of kidnappings to raise a vast sum of money (for frankly absurd purposes). Keith Gersen is the grudge-bearing, rule-ignoring bounty hunter who’s sworn to bring Hekkus down.
The financial focus of the plot leads to gripping scenes like this:

“We had best consider the matter of recompense,” said Gersen. “Here I speak for Mr. Patch, of course. He wants the full sum of the original contract, plus the cost of modifications and the normal percentage of profit.”
Otwal considered a moment. “Minus, of course, those developmental funds already advanced. SVU 427,685, I believe to be the sum.”
Patch began to sputter. Otwal could not restrain a faint smile.
“There have been additional expenses,” said Gersen. “To a total of SVU 437,685. This must be included in the total reckoning.”

and this:

Half an hour later, Patch called the area Branch of the Bank of Rigel, inserted his account tab into the credit card slot. Yes, he was told, the sum of SVU 1,181,490 had been deposited to his account.
“In that case,” said Patch, “please open an account in the name of Keith Gersen — ” he spelled the name “ — and deposit to this account the sum of SVU 500,000.”
The transaction was performed, both Patch and Gersen affixing signatures and thumbprints to tabs. Patch then turned to Gersen. “You will now write me a receipt, and destroy the partnership agreement.”

and (as the novel’s sole female character pays her own ransom to an institution that brokers payment between kidnappers and their extortees), this:

“There is another matter,” said the clerk. He addressed Alusz Iphigenia. “Since you are acting in the peculiar capacity of your own sponsor, the money, minus our 12 1/2 percent fee, is yours.”
Alusz Iphigenia stared at him apparently without comprehension.
“I suggest,” said Gersen, “that you prepare a bank draft, so that she need not carry around so much negotiable currency.”
There was a flurry of consultation, a shrugging of the shoulders, a flutter of hands; finally the bank draft was drawn upon the Planetary Bank of Sasani at Sagbad, in the sum of SVU 8,749,993,581: ten billion minus 12 1/2 percent, minus charges of SVU 6,419 for special AA accommodation.
Gersen scrutinized the document with suspicion. “Presumably this is a valid draft? You have funds to cover?”

This book is available from me through Bookmooch, if you’re interested.

Needs More Demons? Has a “Demon Prince,” but needs fewer details of financial transactions.

(I suggest, dear reader, that you pause here to allow your heart rate to settle before activating the clicker on your computator to access the remainder of this (or any other) informational repository. I must inform you that I can not be held responsible for any consequences that could arise from your failure to heed this warning.)