Should we build lots more housing in San Francisco? Three reasons people disagree

Some people, such as YIMBYs, advocate building lots more housing in San Francisco. Their basic argument is:

Housing in SF is the priciest in the country, with the average one-bedroom apartment renting for over $3,000 per month (compared to the nationwide average of $1,200).

The main reason rents are so high is that the supply of housing has been artificially restricted — new developments are constantly getting blocked by land use regulations and neighborhood associations. Meanwhile, demand to live in SF continues to rise. And since supply is not keeping pace, rents go up, as a growing number of would-be tenants outbid each other for the limited housing available.

Therefore, it’s important that we find a way to increase the rate at which we’re building new housing in SF, or it will be a city in which only the rich can afford to live.

I’ve been trying to understand why others are critical of this argument. I think there are three main areas of disagreement between what I’ll call the advocates and the critics, and I’ll briefly explain each in turn. (Note that I’m trying to present the strongest version of each argument, which may be different from the most common version.)

Disagreement #1: Would adding new housing have a noticeable effect on prices?

Critics are pessimistic about our ability to rein in housing prices in SF by increasing supply. Some simply reject or ignore the economic argument, and deny that there’s any reason to think supply would affect prices.

But the more thoughtful critics concede the advocates’ basic economic argument — that, all else equal, increasing housing supply should slow the growth of prices. Nevertheless, they’re pessimistic because they hold some combination of the following views:

Effect of supply on price is small. Critics argue that supply only has a small effect on prices, and that effect is swamped in the long run by the much larger effect of demand on prices. So even if SF adds a lot of additional housing, prices will still rise almost as quickly as they would have anyway, as long as demand to live here continues to soar. This view is mainly based on examples of other desirable cities, like New York or Singapore, which have built new housing at a faster rate than SF but have nevertheless seen steep increases in price.[i]

Don’t trust studies. Advocates often cite quantitative analyses that try to estimate the effect of supply on prices. (For example, this one estimates that adding 5,000 new units each year would be enough to stabilize real prices.) But critics are more skeptical of such analyses, pointing out that they inevitably make major simplifying assumptions, and that it’s all too easy to set up an analysis to get the conclusion you want.

Effects are regional. Critics point out that the relationship between supply and prices is much weaker at the local level than the regional level. So it’s unclear whether we can reduce prices within SF itself by building more housing in SF. (I think many advocates don’t dispute this, actually — they just reply that reducing regional prices would be a great outcome.)

Induced demand. Critics worry that building new housing could actually backfire by creating new demand. If we build nice new buildings in SF, that will make the city overall a more attractive place to live, causing more outsiders to want to live here, and putting upward pressure on prices.[ii] (Interestingly, advocates seem to be split on whether the “induced demand” scenario is incoherent or merely unlikely.[iii])

Disagreement #2: Does new housing help poor tenants?

Many critics argue that any new housing built in SF would be high-end, and therefore only benefit the upper middle class (e.g., programmers at Google and Facebook).

Advocates reply, “No, actually, high-end housing would also benefit poorer tenants.” They make two arguments:

Shifting demand. If wealthy tenants have an easier time finding high-end housing, they’ll be less likely to compete for mid-range housing. That will reduce demand for mid-range housing, thereby reducing its price… And so on, with the effects of increased supply at the top rippling down the spectrum of housing quality.

Filtering. Affordable homes today were once newly-built luxury homes, which then depreciated as they aged. So building expensive new housing now is how we end up with affordable housing in the future. And evidence suggests that the faster we build new housing, the faster existing housing depreciates.

Critics are not very enthused by the filtering argument, because it won’t happen in time to help today’s generation of poor renters. I’m not sure what they think of the “shifting demand” argument, though — it seems like that should affect lower-end housing prices much more quickly than depreciation would.

But critics also worry about new housing actively hurting poor tenants, not merely failing to help them. Their concern here is displacement: poor tenants being forced to leave their current apartments. This could happen directly, via eviction, if an old building is sold to a developer and the current tenants have to leave. Or it could happen indirectly, via gentrification — an influx of wealthy people moving into new housing increases the cost of living in the neighborhood as a whole, making it unaffordable for poor tenants.

Advocates respond by pointing to studies suggesting that adding new housing in a given neighborhood is good for poor tenants in that neighborhood, and actually reduces the number of people who get displaced.[iv] (Presumably because new housing reduces low-end housing prices enough to compensate for any cost-of-living increases and evictions.)

Critics don’t find those studies compelling. Partly that’s because the causal effects are so hard to disentangle, and the quality of the evidence is far from overwhelming.

But they also feel that even if new housing reduces low-end housing prices, that doesn’t easily make up for the harms done to the unlucky tenants who get evicted, and to the communities broken up by those evictions. (Some critics also argue that there’s no justification for building new housing in existing neighborhoods, where poor tenants will be displaced, when we could instead be building on greenfields outside of the city.)

So, to sum up: this branch of the disagreement is partly empirical, over how new housing affects low-end housing prices and the risk of displacement for poor tenants. And it’s partly about values — if we can make a neighborhood more affordable for poor tenants in the long run, but at the cost of evicting some of the pre-existing poor tenants, is that fair?

Disagreement #3: Are NIMBY objections legitimate?

The archetypal opponent of new housing is the NIMBY: a current homeowner, who benefits from development restrictions because they keep his property values high and the character of his neighborhood unchanged.

NIMBYs are being selfish, advocates argue. How can society ask poorer, younger renters to pay more for their apartments, just to protect the property values and aesthetic preferences of (statistically much richer) homeowners?

Critics disagree with the advocates here for several reasons:

Many homeowners are highly leveraged. Critics argue that the way NIMBYs are portrayed, as millionaires complaining about obstructions to their view, understates the potential harm to homeowners. For lots of homeowners, their houses are their main investment, and a highly leveraged one. If we could significantly reduce housing prices, that might be better for everyone overall, but it would deal a big blow to the main investment of a bunch of middle class people.

Neighborhood character is a public good. Critics acknowledge that NIMBYs benefit disproportionately from preserving “neighborhood character.” But they think that, nevertheless, neighborhood character might also be a public good worth preserving for others. Compare San Jose and San Francisco — don’t the ethnic enclaves of the latter make it a more beautiful city to visit?

Incumbents deserve extra consideration. Advocates tend to view the desires of current and would-be residents symmetrically, and say incumbents have no more “right” to live in SF than outsiders. But critics see an asymmetry — they assign value to community, and people’s attachment to place, in a way that advocates don’t. So they think it’s worse to displace an incumbent than to prevent a migrant from moving in (hence their greater concern with displacement, above), and they’re more willing to grant incumbents some right to steward their own communities.

(Thanks to Steve Randy Waldman, Noah Smith, Brian Hanlon, Kim-Mai Cutler, Jan Sramek, and others for helping me get a handle on this issue! Any mistakes are mine alone.)

[i] Advocates and critics often interpret the same case studies differently. For example, in Tokyo it’s much easier to build new housing than it is in San Francisco, and Tokyo’s housing prices have risen much more slowly than San Francisco’s. Seems like a point in favor of the advocates’ case, right? However, the critics counter that Tokyo’s population growth has been declining, so they chalk up the city’s (relative) affordability to low demand rather than high supply.

[ii] There’s another version of this same argument in which all the wealthy people from outside SF moving into new housing here generate more demand for services, causing more lower-income workers to move into the city to provide those services, putting upward pressure on prices of lower-income housing.

[iii] Here’s an argument for why it’s unlikely, paraphrased from Noah Smith and Jeff Kaufman: “Imagine destroying a bunch of expensive apartment buildings in SF. Do you think demand for the remaining housing would fall, because the neighborhood would now be less attractive and current residents would move away? If that seems unlikely to you, then you should also think it unlikely that creating new luxury housing would cause demand to rise.” However, critics don’t buy this thought experiment. They object that there’s an asymmetry it doesn’t account for — that current residents of a city are willing to pay more to stay than outsiders are willing to pay to move there.

[iv] For example, CA’s Legislative Analyst’s Office found “displacement was more than twice as likely in low–income census tracts with little market–rate housing construction (bottom fifth of all tracts) than in low–income census tracts with high construction levels (top fifth of all tracts).” Also see this article by Richard Florida which summarizes some of the studies on new housing and displacement.

Does irrationality fuel innovation?

I’ve been having a fascinating debate on Twitter with my friend Michael Nielsen, a quantum physicist who’s currently doing research at Y Combinator. It all started when I argued that, on the margin, people having more accurate beliefs would be useful, and Michael objected:

I think you’re over-rating accuracy / truth as a primary goal… I think there’s a tension between behaviours which maximize accuracy & which maximize creativity. Can’t always have both. A lot of important truths come from v. irrational ppl.

We continued the debate in this thread, where Michael basically argues that there isn’t enough experimentation with “crazy” ideas in science and tech, and says,

Insofar as long-held suspension of skepticism helps overcome this, I’m in favour of it. Indeed, that long-held suspension of skepticism seems to be one of the main mechanisms we currently have for this.

Here’s my reaction, written as a direct response to Michael.

I totally agree that we need more experimentation with “crazy ideas”! I’m just skeptical that rationality is, on the margin, in tension with that goal. For two main reasons:

1. In general, I think overconfidence stifles experimentation.

Most people look at a “crazy idea” — like seasteading — and say: “That’s obviously dumb and not worth trying, lol, you morons.”

In my experience, rationalists* are far more likely to look at that crazy idea and say: “Well, my inside view says that’s dumb. But my outside view says that brilliant ideas often look dumb at first, so the fact that it seems dumb isn’t great evidence about whether it will pan out. And when I think about the EV here [expected value] it seems clearly worth the cost of someone trying it, even if the probability of success is low.”

I think the first group — the vast majority of society — is being very overconfident. Remember, “overconfidence” doesn’t just mean being too confident that something’s going to succeed, it also means being too confident that something’s going to fail!

And I think it’s their overconfidence (plus a lack of thinking in terms of EV or marginal value) that punishes experimentation with low-probability but high-EV ideas. People mock the ideas, won’t fund them, etc.
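The expected-value logic in that reply can be made concrete with a toy calculation. (All of the numbers below are invented purely for illustration; nothing in the thread specifies them.)

```python
# Toy comparison of a "safe" idea vs. a "crazy" idea in expected-value terms.
# All numbers here are made up for illustration only.

def expected_value(p_success: float, payoff: float, cost: float) -> float:
    """Probability-weighted payoff, minus the cost of the attempt."""
    return p_success * payoff - cost

# Safe idea: very likely to succeed, modest payoff.
safe = expected_value(p_success=0.9, payoff=10, cost=1)

# "Crazy" idea: 99% chance of failure, but a huge payoff if it works.
crazy = expected_value(p_success=0.01, payoff=2_000, cost=1)

# The crazy idea fails almost every time, yet has the higher EV --
# which is why a calibrated funder should still want someone to try it.
print(f"safe EV = {safe}, crazy EV = {crazy}")
```

On these made-up numbers, the “crazy” idea’s EV (19) beats the safe idea’s (8), even though it fails 99% of the time — which is the sense in which mocking or refusing to fund such ideas reflects overconfidence rather than prudence.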

2. I’m not sure (long-term) overconfidence is the best way to motivate innovators.

You might object that, okay, yes, maybe we want funders to think like rationalists, but we still need the innovators themselves to be overconfident so that they’re motivated to pursue their low-probability but high-EV ideas.

Possibly! I wouldn’t be shocked if this turned out to be true. But long-term overconfidence does come with costs, and my (weak) suspicion is that there are less costly ways to motivate oneself to pursue crazy ideas.

For example, you can get better at tying your motivation to EV, not to probability. I know a bunch of rationalists who are worried about some global catastrophic risk (e.g., a pandemic) and who are working on strategies for guarding against that risk — even though they think that their chance of success is low! They just think it’s high EV and therefore totally worth trying.

I also like the strategy of temporarily suspending your disbelief and throwing yourself headlong into something for a while, allowing your emotional state to be as if you were 100% confident. My friend Spencer Greenberg, a mathematician running a startup incubator, is very pro-calibration and rationality in general, but finds this temporary “overconfidence” very useful.

…I use quotes around overconfidence there because I don’t know if that tactic really counts as epistemic overconfidence — it reminds me more of the state of suspended disbelief we slip into during movies, where we allow our emotions to respond as if the movie were real. But that doesn’t really mean we “believe” the movie is real in the same way that we believe we’re sitting in a chair watching a movie.

Anyway, whether you want to refer to that as temporary irrationality, or not, doesn’t matter too much to me. The crucial point is that Spencer pops out of it occasionally to determine if it’s worth continuing down his current path.

But, sure, I’ll acknowledge that I don’t actually know whether these two strategies are feasible for most innovators. Maybe they only work well for a small subset of people, and for most innovators, the actual choice they face is between overconfidence and inaction. If that’s the world we live in, then I’d agree with the claim you seemed to be making on Twitter, that irrationality (at least on the part of the innovators themselves) is essential to innovation.

One last point: Even if it turned out to be true that irrationality is necessary for innovators, that’s only a weak defense of your original claim, which was that I’m significantly overrating the value of rationality in general. Remember, “coming up with brilliant new ideas” is just one domain in which we could evaluate the potential value-add of increased rationality. There are lots of other domains to consider, such as designing policy, allocating philanthropic funds, military strategy, etc. We could certainly talk about those separately; for now, I’m just noting that you made this original claim about the dubious value of rationality in general, but then your argument focused on this one particular domain, innovation.

*Yeah, yeah, the label “rationalist” isn’t a great one, but no one’s yet found a better alternative! “People who substantially agree about certain principles of epistemology including…” is more accurate, but not exactly sticky.

Contra Tyler, on “Is rationality a religion?”

Tyler Cowen called the rationality community a “religion” on Ezra Klein’s podcast the other day. Relevant excerpt, on Tyler’s blog:

Ezra Klein

Yeah, I mean Less Wrong, Slate Star Codex. Julia Galef, Robin Hanson. Sometimes Bryan Caplan is grouped in here. The community of people who are frontloading ideas like signaling, cognitive biases, etc.

Tyler Cowen

Well, I enjoy all those sources, and I read them. That’s obviously a kind of endorsement. But I would approve of them much more if they called themselves the irrationality community. Because it is just another kind of religion. A different set of ethoses. And there’s nothing wrong with that, but the notion that this is, like, the true, objective vantage point I find highly objectionable. And that pops up in some of those people more than others. But I think it needs to be realized it’s an extremely culturally specific way of viewing the world, and that’s one of the main things travel can teach you.

My quick reaction:

Basically all humans are overconfident and have blind spots. And that includes self-described rationalists.

But I see rationalists actively trying to compensate for those biases at least sometimes, and I see people in general do so almost never. For example, it’s pretty common for rationalists to solicit criticism of their own ideas, or to acknowledge uncertainty in their claims.

Similarly, it’s weird for Tyler to accuse rationalists of assuming their ethos is correct. Everyone assumes their own ethos is correct! And I think rationalists are far more likely than most people to be transparent about the premises of their ethos, instead of just treating those premises as objectively true, as most people do.

For example, you could accuse rationalists of being overconfident that utilitarianism is the best moral system. Fine. But you think most people aren’t confident in their own moral views?

At least rationalists acknowledge that their moral judgments are dependent on certain premises, and that if someone doesn’t agree with those premises then it’s reasonable to reach different conclusions. There’s an ability to step outside of their own ethos and discuss its pros and cons relative to alternatives, rather than treating it as self-evidently true.

(It’s also common for rationalists to wrestle with flaws in their favorite normative systems, like utilitarianism, which I don’t see most people doing with their moral views.)

So: while I certainly agree rationalists have room for improvement, I think it’s unfair to accuse them of overconfidence, given that that’s a universal human bias and rationalists are putting in a rare amount of effort trying to compensate for it.

A palindromic poem for Douglas Hofstadter

(Originally published here)

Since it first came out in 1979, Douglas Hofstadter’s Pulitzer Prize-winning book “Gödel, Escher, Bach: An Eternal Golden Braid” has widened the eyes of multiple generations of nerdy kids, and I was certainly no exception. The book draws all sorts of parallels between music, art, math, and computer science, ultimately shaping them into a bold thesis about how consciousness arises from self-reference and recursion. It’s also a very playful book, full of puzzles, puns, and imagined dialogues between Achilles and a tortoise which weave in and out of the main chapters, illustrating the concepts therein.

One of those dialogues, titled “Crab Canon,” seems puzzling when you begin reading it – sprinkled with seeming non sequiturs, the word choice a bit awkward and off-kilter. Then shortly after the halfway point, when you start to see recent lines repeated, in reverse order, you realize: the whole dialogue is a line-level palindrome. The first line is the same as the last, the second line is the same as the second-to-last, and so on. But because Hofstadter chooses his sentences carefully, they often have different meanings when they reoccur in the reverse order. So, for example, the following bit of dialogue in the first half…

Tortoise: Tell me, what’s it like to be your age? Is it true that one has no worries at all?
Achilles: To be precise, one has no frets.
Tortoise: Oh, well, it’s all the same to me.
Achilles: Fiddle. It makes a big difference, you know.
Tortoise: Say, don’t you play the guitar?

… becomes this bit of dialogue in the second half:

Achilles: Say, don’t you play the guitar?
Tortoise: Fiddle. It makes a big difference, you know.
Achilles: Oh, well, it’s all the same to me.
Tortoise: To be precise, one has no frets.
Achilles: Tell me, what’s it like to be your age? Is it true that one has no worries at all?

Hofstadter does “cheat” a bit, by allowing himself to vary punctuation (for example, “He often plays, the fool” reoccurs later in a new context as “He often plays the fool”). Nevertheless, it’s an impressive execution of a clever conceit.
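The line-level palindrome structure described above is easy to verify mechanically. Here’s a minimal sketch (my own illustration, not from the book) that normalizes away the punctuation and capitalization Hofstadter allows himself to vary, then checks that the sequence of lines reads the same in both directions:

```python
import re

def normalize(line: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace, so that
    'He often plays, the fool' matches 'He often plays the fool'."""
    stripped = re.sub(r"[^a-z0-9\s]", "", line.lower())
    return re.sub(r"\s+", " ", stripped).strip()

def is_line_palindrome(lines: list[str]) -> bool:
    """True if line i matches line (n - 1 - i) after normalization."""
    norm = [normalize(line) for line in lines]
    return norm == norm[::-1]

# A made-up five-line example in the Crab Canon spirit, with the kind of
# punctuation variation the check is meant to tolerate:
canon = [
    "Tell me, what's it like to be your age?",
    "To be precise, one has no frets.",
    "Oh, well, it's all the same to me.",
    "To be precise -- one has no frets!",
    "Tell me: what's it like to be your age?",
]
print(is_line_palindrome(canon))  # True
```

The normalization step is doing the real work: it encodes exactly the “cheat” described above, treating two lines as the same if they differ only in punctuation, case, or spacing.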

Thumbing through Gödel, Escher, Bach again recently, I came across the Crab Canon and was struck with the desire to attempt a similar feat myself: a line-level palindrome that tells a linear story, that is, a story in which a series of non-repeating events occur.

It’s maddeningly difficult to come up with lines that make sense, and in fact make a different sense, in both directions. And because all of the lines interlock — each relying simultaneously on the line preceding it, the line following it, and its mirror-image line — changing any one line in the poem tends to set off a ripple effect of necessary changes to all the other lines as well.

I eventually figured out a few crucial tricks, like relying on ambiguous pronouns (“they,” “their”) and using images that carry a different meaning depending on what’s already happened. Below is the final result – my own canon, in homage to the book that dazzled my teenaged self years ago:

Seaside Canon (for Douglas Hofstadter)

The ocean was still.
In an empty sky, two gulls turned lazy arcs, and
their keening cries echoed
off the cliff and disappeared into the sea.
When the child, scrambling up the rocks, slipped
out of her parents’ reach,
they called to her. She was already
so high, but those distant peaks beyond —
they called to her. She was already
out of her parents’ reach
when the child, scrambling up the rocks, slipped
off the cliff and disappeared into the sea.
Their keening cries echoed
in an empty sky. Two gulls turned lazy arcs, and
the ocean was still.

-Julia Galef

The answer to life, the universe and everything

(Originally published here)

The Austrian philosopher Ludwig Wittgenstein gets credit for pointing out that many classic philosophical conundrums are unsolvable not because they are so profound, but because they are incoherent. Instead of trying to solve such questions, he argued, we should try to dissolve them, by demonstrating how they misuse words and investigating the confusion that motivated the question in the first place.

But with all due respect to Wittgenstein, my favorite example of the “dissolving questions” strategy comes from Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, which contains a cheeky and unforgettable dissolution of which I’m sure Wittgenstein himself would have been proud: A race of hyper-intelligent, pan-dimensional beings builds a supercomputer named Deep Thought, so that they can ask it the question that has preoccupied philosophers for millions of years: “What is the answer to life, the universe, and everything?”

After seven and a half million years of computation, Deep Thought finally announces the answer: Forty-two. In response to the programmers’ howls of disappointment and confusion, Deep Thought rather patiently points out that the reason his answer doesn’t make any sense is that their original question didn’t make any sense either. As I’ve written before, questions like this one, or the very similar “What is the meaning of life?” question, seem to be committing a basic category error: life isn’t the kind of thing to which the word “meaning” or “answer” applies.

But in this article I want to take my analysis a little further than that.

After all, in fairness to those poor, disappointed, pan-dimensional beings, it’s not always easy to figure out what question you need to ask to resolve the confusion that you’re feeling. And throughout the course of human history, many people have felt an overwhelming confusion when they contemplate life and the universe, and concluded that there must be some key which, if they had it, would make them say: “Aha, now life makes sense.”

Of course, there isn’t necessarily going to be any such key.  But here, at least, is my diagnosis of the four most common reasons why people feel like there should be such a thing, and what The Hitchhiker’s Guide has to say about each of them:

#1: What’s the point of anything if we’re all going to be dead someday?

In The Hitchhiker’s Guide, Earth is demolished suddenly and unceremoniously by aliens called Vogons, who are clearing a path for a hyperspace bypass. Our real-life future may not hold a Vogon constructor fleet, but it definitely holds our demise. Eventually the human race will be wiped out, whether through war, or a virus, or some environmental disaster like an asteroid of the sort that did in the dinosaurs. Even if we manage to escape all those risks on Earth, eventually our Sun is going to expand, boiling away all of our oceans and atmosphere, and probably swallowing us up. And even if we manage to escape our solar system, eventually the universe will expand until it rips apart all of our individual molecules.

For many people, the fact that the world is going to end someday invalidates everything that happens up to that point. When Ford Prefect accidentally teleports two million years back in time to prehistoric Earth, he says as much to the people living there. “I’ve seen your future,” he tells them. “It doesn’t matter a pair of fetid dingo’s kidneys what you all choose to do from now on… Two million years you’ve got, and that’s it.”

It’s a common attitude. I’ve met many people who have argued that life is pointless since we’re all going to be dead someday. And you might remember the iconic scene at the beginning of Annie Hall in which Woody Allen’s character, as a young boy, has just found out that the universe is expanding. (“He won’t do his homework!” his mother shrieks, beside herself. “What’s the point?” little Woody glumly replies.)

In several ways, however, it’s an odd position to take. The assumption seems to be that if all paths lead to the same endpoint then it doesn’t matter what happens along the way.  But why should we place all the importance on the endpoint of our story and none on the rest of it? Does it really not matter to people whether humanity has a long and glorious existence, or a short and miserable existence, as long as both end in our destruction?

What’s even odder is that even if it weren’t the case that “we’re all going to be dead someday,” many people would still feel like life lacked meaning. There’s another Hitchhiker’s Guide character who represents a common fear about immortality: that it would be soul-crushingly dull. Wowbagger the Infinitely Prolonged, an alien who has had eternal life thrust upon him after a freak accident with an irrational particle accelerator, a pair of rubber bands and a liquid lunch, enjoys his immortality immensely at first. But then the ennui sets in:

“In the end, it was the Sunday afternoons he couldn’t cope with, and that terrible listlessness which starts to set in at about 2:55, when you know that you’ve had all the baths you can usefully have that day, that however hard you stare at any given paragraph in the papers you will never actually read it, or use the revolutionary new pruning technique it describes, and that as you stare at the clock the hands will move relentlessly on to four o’clock, and you will enter the long dark teatime of the soul.”

I don’t mean to suggest that we can know what the actual psychological effects of immortality would be. The reason I’m juxtaposing Ford’s attitude with Wowbagger’s is just because of what it reveals about our conception of meaning: if you feel that life lacks meaning if it’s all going to end someday, and you feel that life would lack meaning if it’s going to last forever, then that says more about the incoherence of your own conception of meaning than it does about the nature of the universe.

#2: What’s the purpose of our existence?

At one point in The Hitchhiker’s Guide to the Galaxy, the characters are under attack by automated missiles. In desperation, Arthur activates their spaceship’s “Infinite Improbability Drive,” a new invention with the power to trigger vastly improbable events. The missiles promptly transform into a bowl of petunias and a sperm whale. As the confused whale plummets down through the atmosphere, his freshly-minted mind is racing with questions: “What’s happening? Er, excuse me, who am I? Hello? Why am I here? What’s my purpose in life?”

Of course, we readers know that his life has no purpose. He’s just a random creation of the Improbability Drive, generated with no intent and for no end. And after a few paragraphs of excited rumination about the world and his place in it, the whale splatters onto the rocky surface of the planet below. It doesn’t take too much poetic license to see this as an encapsulation of human existence, boiled down to a minute and a half.

It’s true that scientists have a pretty solid explanation of the series of events that led to the existence of humanity. We’re still uncertain about how the very first life actually formed, but we have some good hypotheses, and we’re clear on the subsequent process of evolution which produced Homo sapiens. But to many people, that isn’t an answer to the question of “why” we are here. When they ask “why” we exist, they don’t mean the question in a causal sense (i.e., “What was the series of events that caused our existence?”). They mean the question teleologically — they’re seeking some externally-bestowed purpose for our existence.

But what if our lives did have a predetermined purpose? And what if we could learn what it was? There’s no guarantee that it would yield the sense of meaning we crave. Just ask Arthur Dent, who finds out in The Hitchhiker’s Guide that the Earth was actually a giant computer program commissioned, paid for, and run by a race of hyperintelligent pandimensional beings… or, as we know them in our dimension, mice. For a concise rebuttal to the idea that discovering the purpose of our existence would give our lives meaning, you need look no farther than Arthur’s facial expression after he discovers his. I’d describe it less as “serene enlightenment,” and more as “dismayed bewilderment.”

#3: How can any of our lives matter in the grand scheme of things?

Here’s how Douglas Adams describes the introduction to the Hitchhiker’s Guide to the Galaxy:

“’Space,’ it says, ‘is big. Really big. You just won’t believe how vastly hugely mind-bogglingly big it is. I mean you may think it’s a long way down the road to the chemist, but that’s just peanuts to space. Listen …’ and so on… The simple truth is that interstellar distances will not fit into the human imagination.”

My suspicion is that the enormity of the universe plays a key role in the sensation that life lacks meaning. Psychological research suggests that when we evaluate an act of charity, we don’t judge it based on the amount of good it would accomplish — we judge it based on the amount of good it would accomplish relative to the size of the problem. For example, one study found that people cared more about saving 4,500 refugees if they were told that the refugees lived in a camp of 11,000 than if they lived in a camp of 250,000. In both cases the same number of lives were at stake (4,500) but in the latter case saving those lives was judged less worthwhile because they seemed like just a drop in the proverbial ocean.
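The arithmetic behind that framing effect is worth spelling out. Here is a sketch of the comparison using the study’s numbers (my own illustration, not the study’s analysis):

```python
# Proportion dominance: the same absolute good feels smaller
# when the reference class it's compared against is larger.
saved = 4_500

small_camp = 11_000
large_camp = 250_000

share_small = saved / small_camp   # ~0.41 of the camp: feels like a big dent
share_large = saved / large_camp   # 0.018 of the camp: a "drop in the ocean"

# The absolute number of lives saved is identical in both cases.
print(f"{share_small:.3f} vs {share_large:.3f}")
```

The absolute benefit never changes; only the denominator does — which is exactly the move our minds make when we compare a human life against the size of the universe.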

So you can see why contemplating the vast size of the universe might seem to rob life of meaning: however consequential our lives and our actions typically seem to us, they’re going to look pretty insignificant when we start comparing them to the universe as a whole. Who cares about humanity’s joys, agonies, and achievements? Divide them by an infinite universe and they dwindle to nothing.

That’s the logic behind one subplot in The Hitchhiker’s Guide in which a character invents something called the “Total Perspective Vortex.” It’s a fiendish contraption which, when you step inside it and turn it on, displays to you the entirety of existence. The shock of seeing how infinitely tiny you are in comparison to the universe immediately annihilates your brain. The takeaway: “If life is going to exist in a Universe of this size, then the one thing it cannot afford to have is a sense of proportion.”

#4: Things seem to happen without rhyme or reason.

So much of what happens in the world is random or unpredictable or unfair, at least compared to the orderly way we feel like things should work. We expect good deeds to be rewarded and bad deeds punished; we expect hard work to pay off and major events to yield lessons or morals.

Absurdist fiction and theater try to capture what it feels like to have those expectations upended. In the worlds of the absurdist master Franz Kafka, people wake up to find themselves transformed into giant bugs, or put on trial and convicted of unspecified crimes. The examples are fantastical, but the feelings are real: the world makes no sense, and you’re a small, helpless, and insignificant entity being tossed around by vast forces you will never comprehend.

Although it’s more lighthearted in tone, The Hitchhiker’s Guide to the Galaxy has a strong absurdist streak, especially in the incongruity between causes and their effects. In The Hitchhiker’s Guide, the Earth is destroyed mere minutes before the completion of its ten-million-year program, just so some aliens can build a highway. Devastating wars are triggered when a trivial comment falls through a freak wormhole into another time and place. And the entire universe is ruled by a simple-minded hermit who isn’t sure whether anything exists outside his little shack.

So where did we get our futile expectations of an orderly world? One plausible hypothesis is that the ability to pick up on patterns and causal relationships evolved to help our ancestors notice predators hiding in the bushes, learn to avoid poisonous fruits, figure out where the best fishing spots were, and so on. And if pattern-seeking is a survival skill, it’s also not surprising that it goes into overdrive when we feel threatened or helpless. Recent experimental evidence shows that when people are feeling vulnerable, they’re more likely to see correlations in financial data; to perceive objects in randomly generated visual noise; to believe conspiracy theories; and to see cause-and-effect relationships between events with no logical connection, like a man stamping his foot three times before a meeting and then succeeding in his business pitch.

The Hitchhiker’s Guide offers a cheeky explanation for the absurdity of the world: the confusing things that happened to us were all part of an experiment run by mice. In fact, the narrator says, it is very odd that Earth humans were not aware of that fact. “Because without that fairly simple and obvious piece of knowledge,” he says, “nothing that ever happened on Earth could possibly make the slightest bit of sense.”

Incongruity in humor, art, and science

(Originally published here)

“One morning I shot an elephant in my pajamas. How he got in my pajamas, I don’t know.”

-Groucho Marx 

Like most jokes, Groucho’s works almost instantaneously. We hear the joke, we laugh, and we don’t need to think consciously about the process connecting those two points. But what if we played that process in slow motion?

Here’s a plausible account of what that might look like: the joke’s first sentence triggers an image, or at least a concept, of Groucho wearing pajamas and shooting an elephant. But the next phrase, “How he got in my pajamas,” doesn’t make sense in the framework of our current model of the situation, and we’re thrown into confusion. So we think, Hang on, I must’ve missed something, and we go back and re-evaluate the first sentence to see if there was some other, alternative way of interpreting it. Sure enough, now that we’re looking for it, there it is: an alternate meaning of “I shot an elephant in my pajamas” pops out, and you can almost hear the gears grinding as we shift from “I, in my pajamas, shot an elephant” to “I shot an elephant who was wearing my pajamas.”

Groucho’s joke is an example of paraprosdokian, a figure of speech whose latter half surprises us, forcing us to go back and reconsider the assumptions we’d made about what was going on in the first half. Other examples include Mitch Hedberg’s “I haven’t slept for two weeks — because that would be too long,” and Stephen Colbert’s “Now, if I am reading this graph correctly… I’d be very surprised.” The jolt of gratification we feel at converting confusion into clarity is exactly what Incongruity Resolution Theory, a popular theory in the psychological study of humor, predicts: humor is the satisfying “click” of an incongruity within the joke being resolved after you find the appropriate interpretive framework.

One particularly interesting thing about paraprosdokian jokes is how they represent, in miniature, the process of theory revision in scientific inquiry. You start out with a straightforward working theory, based on your initial observations and on whatever prior assumptions and expectations you bring with you from past experiences. As you collect more observations that don’t seem to fit your theory, you either dismiss them as anomalies, find a way to shoehorn them into the framework of your theory, or go back and try to re-interpret your original data in the framework of an alternate theory that will fit all your data, both old and new.

The third option becomes increasingly compelling as the incongruities mount, which is the basis for Thomas Kuhn’s famous theory of paradigm shifts in science. Scientific theories aren’t immediately discarded when contradictory evidence comes to light, Kuhn observed. Instead, scientists will dismiss contradictory observations as errors or anomalies, or tweak the theory to accommodate them, until the contradictions become numerous enough that they can no longer be plausibly explained within the original theory, at which point the field seeks out a new explanatory paradigm to replace their old, increasingly flawed one.

Although Kuhn’s theory doesn’t describe all scientific practice, it’s nevertheless a strong model for many cases, including – quintessentially – the shift from geocentrism to heliocentrism. Mounting evidence from astronomers like Copernicus, Kepler, and Galileo became increasingly difficult to reconcile with the idea of a stationary earth at the center of the other planets’ orbits. Many of their contemporaries certainly tried, however, devising ever-more complex modifications to their geocentric model to accommodate new astronomical observations, until it became clear that heliocentrism simply did a better job of explaining the facts.

The satisfying “Aha!” produced by getting a joke or explaining a phenomenon is not all that different from the satisfying “Aha!” produced by certain kinds of art. Think of the modulation, in music, from one key to another. Each key draws on a different subset of the twelve available pitches, as specified by its key signature. Use a note that’s not in your key, and the result is dissonance, which we interpret as tension, or ugliness, or as something being “off.” But since every note belongs to multiple different keys, you can use shared notes as pivot points to transition from one key to another. The effect is similar to a paraprosdokian: having initially interpreted the pivot point as belonging to the original key, you’re momentarily disoriented to hear it followed by notes which don’t belong to that key, until you shift your mental framework to a new key in which those notes “make sense.”

Rudolf Arnheim, one of the most celebrated writers on the psychology of aesthetic perception, has a nice example of musical modulation in his 1978 book, The Dynamics of Architectural Form: a passage from a violin sonata by Jean-Marie Leclair (score excerpt reproduced in Arnheim’s book).

Arnheim explains:

“Two notes refer to the same tone, but they are written differently because the b-flat is experienced dynamically as the outcome of a steep ascent in the key of D, which has, as it were, overshot the mark by a half tone and is straining downward toward the dominant, a. The same tone written as an a-sharp presses upward as the leading tone in the new key of B, thereby assuming a new function in a different structural context.”
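Arnheim’s “double allegiance” can be made concrete with simple pitch-class arithmetic. The sketch below is my own illustration, not Arnheim’s (the helper names `pitch_class` and `major_scale` are hypothetical): it verifies that B-flat and A-sharp name the same pitch, that this pitch functions as the leading tone of B major a half step below the tonic, and that the keys of D and B share several notes that could serve as pivot points.

```python
# Pitch classes number the twelve tones 0-11; enharmonic spellings
# like Bb and A# land on the same number.

NATURALS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def pitch_class(name):
    """Map a note name like 'Bb' or 'A#' to its pitch class 0-11."""
    pc = NATURALS[name[0]]
    for accidental in name[1:]:
        pc += 1 if accidental == "#" else -1  # '#' raises, 'b' lowers
    return pc % 12

def major_scale(tonic):
    """Pitch classes of a major scale: whole/half-step pattern from the tonic."""
    steps = [2, 2, 1, 2, 2, 2, 1]
    pcs, pc = [], pitch_class(tonic)
    for step in steps:
        pcs.append(pc)
        pc = (pc + step) % 12
    return pcs

# The same sound, spelled two ways:
assert pitch_class("Bb") == pitch_class("A#") == 10

# Heard as B-flat, it sits a half step above A, the dominant of D major...
assert pitch_class("A") in major_scale("D")

# ...heard as A-sharp, it is the leading tone of B major, a half step
# below the tonic, and a member of the new key's scale.
assert (pitch_class("B") - pitch_class("A#")) % 12 == 1
assert pitch_class("A#") in major_scale("B")

# D major and B major also share four notes (B, C#, E, F#) — candidate
# pivot tones for modulating between the two keys.
shared = set(major_scale("D")) & set(major_scale("B"))
assert len(shared) == 4
```

The asserts all pass: the single pitch class 10 belongs to B major but not to D major, which is exactly why the same tone can “press upward” in one key after “straining downward” in the other.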

Space can be modulated just like sound and meaning to produce a mental paradigm shift. An elegant example is the Hotel de Matignon, a mansion built in Paris in 1725 that now serves as the residence of the French Prime Minister. Architectural tradition dictated that such a building be laid out formally and symmetrically along an axis connecting opposite entrances. But because the site itself was uneven, the front and back entrances were misaligned. So the architect shifted the building’s axis midstream, treating one axis as an off-center wing of the other.

Hotel de Matignon, Paris (1725)

Arnheim dubs it an “ingenious” solution to the site’s constraints, and highlights the parallel to musical modulation:

“The device used by the architect cannot but remind us of what musicians call an enharmonic modulation, that is, the almost imperceptible shift from one key to another, in the course of which certain tones act as bridges by fulfilling different functions in the two keys and thereby display a double allegiance. The transitional moment generates a slight sensation of seasickness, unwelcome or exhilarating depending on the listener’s disposition, because the frame of reference is temporarily lost.”

In honor of it being Valentine’s Day, I’ll close with a nod to Alison Gopnik, a developmental psychologist at the University of California, Berkeley, who wrote an article for Minds and Machines called “Explanation as Orgasm.” In it, Gopnik argued that the pleasure we get from the feeling of understanding plays a role analogous to that of the orgasm. Perhaps, she suggests, our enjoyment of those “Aha!” moments evolved to entice us to figure out the world around us, just as orgasms evolved to entice us to reproduce. In a way, I hope that’s the case, because it puts a delightful new gloss on aesthetic enjoyment. How much more delicious are those moments of musical and visual and comedic resolution when you view them as being — like non-procreative sex — our cunning species’ way of bypassing the evolutionary carrot-on-a-string, and instead, harvesting tasty carrots of our own?

The fallacy of difference, in science and art

(Originally published here)

It’s not often that you find something that’s a fallacy both logically and creatively — that is, a fallacy to which both researchers and artists are susceptible. Perhaps you’re tempted to tell me I’m committing a category mistake, that artistic fields like fiction and architecture aren’t the sort of thing to which the word “fallacy” could even meaningfully be applied. An understandable objection! But let me explain myself.

I first encountered the term “fallacy of difference” in David Hackett Fischer’s excellent book, Historians’ Fallacies, in which he defines it as “a tendency to conceptualize a group in terms of its special characteristics to the exclusion of its generic characteristics.” So for instance, India’s caste system is a special characteristic of its society, and therefore scholars have been tempted to explain aspects of Indian civilization in terms of its caste system rather than in terms of its other, more generic features. The Puritans provide another case in point: “Only a small part of Puritan theology was Puritan in a special sense,” Fischer comments. “Much of it was Anglican, and more was Protestant, and most was Christian. And yet Puritanism is often identified and understood in terms of what was specially or uniquely Puritan.”

Here’s a less scholarly example from my own experience. I’ve heard several non-monogamous people complain that when they confide to a friend that they’re having relationship troubles, or that they broke up with their partner, the friend instantly blames their non-monogamy. But while non-monogamy certainly does make a relationship unusual, it’s hardly the only characteristic relevant to understanding how a relationship works, or why it doesn’t. Non-monogamous relationships are subject to the same misunderstandings, personality clashes, insecurities, careless injuries, and other common tensions that tend to plague intimate relationships. But the non-monogamy stands out, so people tend to focus on that one special characteristic, and ignore the many generic characteristics that can cause any kind of relationship to founder.

So the fallacy of difference is a fallacy of science (broadly understood as the process of investigating the world empirically) but how is it also a fallacy of art? Because artists, like scientists, are concerned with understanding the world, though their respective goals are different. While scientists want to model the world accurately in order to answer empirical questions, artists want to make us believe in the story they’re telling us or the scene they’re showing us, or to highlight some feature of the world that they find particularly beautiful or interesting, or to successfully provoke a desired reaction. That tends to require a pretty sophisticated understanding of how the world looks and acts and feels.

When they fail, it’s often the fallacy of difference at work. In novels, TV shows and movies, a flat, “one-dimensional” character is a telltale sign of a clumsy writer who focused on his character’s one or two special traits at the expense of all the generic traits common to most human beings. It’s an easy trap to fall into, because it’s such a straightforward template for creating a character: you start with one or two unique traits — “She’s the rebel!” or “He’s the funny one!” — and then whenever your character has to react to some situation you can just ask yourself, “Okay, how would a rebel react here?” or “What would a funny guy say to that?” But no one is a rebel or a clown full-time. Most of the time, they’re just a person.

Same goes for building a fictional setting, which in many comic books or movies functions a lot like a character in its own right. It’s certainly true that real cities have distinct flavors to them, just like people have distinctive personalities, so walking in Brooklyn feels unmistakably different from walking in Manhattan, or Baltimore, or San Francisco. There are characteristic building styles and features that define a city’s aesthetic, like Baltimore’s row houses, or Brooklyn’s brownstones. So it’s tempting to design your fictional city around some special aesthetic theme, like “futurism” or “noir.” But even the most futuristic of cities wouldn’t really only consist of sleek skyscrapers and helipads, and even the most sordid and noirish city wouldn’t really be all dark alleyways and disreputable bars. Like all cities, their special features should be offset against all the generic ones: the nondescript office buildings, bus stops, grocery stores, laundromats, and so on. (Or whatever their equivalents are, in your fictional universe.)

If you’re building a real city instead of imagining one, the fallacy of difference comes into play in a different way. Architecture is really more design than art, in that each building is supposed to provide a solution to a particular problem — e.g., “We need a place to educate our children,” or “We want an office building that encourages interdepartmental interactions.” So the temptation for architects is to focus on the special characteristics their building should have to solve that problem, at the expense of the generic characteristics that all buildings need in order to be comfortable and pleasant. Bryan Lawson’s The Language of Space contains a thoughtful discussion of this trap, though he doesn’t explicitly call it the fallacy of difference: “When architects come to design specialized buildings, such as a psychiatric unit, they tend to focus on the special factors rather than the ordinary ones,” he says. “We design lecture theaters with no windows as perfectly ergonomic machines for teaching, and then forget how unpleasant such a place might be for the student who is there for many hours, day after day.”

The fact that this fallacy pops up in creative pursuits as well as empirical ones is interesting in its own right, I think, but it’s particularly worth noting as a reminder to fight our tendency to compartmentalize what we learn. You’ve seen this before, no doubt, on a smaller scale. For example, most people who ace the logic problems on the LSAT will, after leaving the classroom, blithely make the same kinds of arguments they easily identified as fallacious when they were in “spot the fallacy” mode. But parallels spanning endeavors as seemingly dissimilar as art and science suggest the existence of even broader compartments — and the benefit of noticing them, and breaking them down.