Does irrationality fuel innovation?

I’ve been having a fascinating debate on Twitter with my friend Michael Nielsen, a quantum physicist who’s currently doing research at Y Combinator. It all started when I argued that, on the margin, people having more accurate beliefs would be useful, and Michael objected:

I think you’re over-rating accuracy / truth as a primary goal… I think there’s a tension between behaviours which maximize accuracy & which maximize creativity. Can’t always have both. A lot of important truths come from v. irrational ppl.

We continued the debate in this thread, where Michael basically argues that there isn’t enough experimentation with “crazy” ideas in science and tech, and says,

Insofar as long-held suspension of skepticism helps overcome this, I’m in favour of it. Indeed, that long-held suspension of skepticism seems to be one of the main mechanisms we currently have for this.

Here’s my reaction, written as a direct response to Michael.

I totally agree that we need more experimentation with “crazy ideas”! I’m just skeptical that rationality is, on the margin, in tension with that goal. For two main reasons:

1. In general, I think overconfidence stifles experimentation.

Most people look at a “crazy idea” — like seasteading — and say: “That’s obviously dumb and not worth trying, lol, you morons.”

In my experience, rationalists* are far more likely to look at that crazy idea and say: “Well, my inside view says that’s dumb. But my outside view says that brilliant ideas often look dumb at first, so the fact that it seems dumb isn’t great evidence about whether it will pan out. And when I think about the EV here [expected value] it seems clearly worth the cost of someone trying it, even if the probability of success is low.”

I think the first group — the vast majority of society — is being very overconfident. Remember, “overconfidence” doesn’t just mean being too confident that something’s going to succeed, it also means being too confident that something’s going to fail!

And I think it’s their overconfidence (plus a lack of thinking in terms of EV or marginal value) that punishes experimentation with low-probability but high-EV ideas. People mock the ideas, won’t fund them, etc.
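To make the EV arithmetic concrete, here is a minimal sketch with invented numbers (nothing here comes from any real funding decision):

```python
# Hypothetical numbers: a "crazy idea" with a low probability of success
# but a large payoff if it works, in normalized cost units.
cost = 1.0        # cost of funding one attempt
p_success = 0.02  # the inside view says it will almost certainly fail
payoff = 200.0    # value created in the unlikely event it works

expected_value = p_success * payoff - cost
print(expected_value)  # 3.0 > 0: worth funding despite the 98% failure rate
```

The point of the sketch is just that "probably won't work" and "not worth trying" are different claims; mocking or refusing to fund every low-probability idea conflates them.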

2. I’m not sure (long-term) overconfidence is the best way to motivate innovators.

You might object that, okay, yes, maybe we want funders to think like rationalists, but we still need the innovators themselves to be overconfident so that they’re motivated to pursue their low-probability but high-EV ideas.

Possibly! I wouldn’t be shocked if this turned out to be true. But long-term overconfidence does come with costs, and my (weak) suspicion is that there are less costly ways to motivate oneself to pursue crazy ideas.

For example, you can get better at tying your motivation to EV, not to probability. I know a bunch of rationalists who are worried about some global catastrophic risk (e.g., a pandemic) and who are working on strategies for guarding against that risk — even though they think that their chance of success is low! They just think it’s high EV and therefore totally worth trying.

I also like the strategy of temporarily suspending your disbelief and throwing yourself headlong into something for a while, allowing your emotional state to be as if you were 100% confident. My friend Spencer Greenberg, a mathematician running a startup incubator, is very pro-calibration and rationality in general, but finds this temporary “overconfidence” very useful.

…I use quotes around overconfidence there because I don’t know if that tactic really counts as epistemic overconfidence — it reminds me more of the state of suspended disbelief we slip into during movies, where we allow our emotions to respond as if the movie were real. But that doesn’t really mean we “believe” the movie is real in the same way that we believe we’re sitting in a chair watching a movie.

Anyway, whether or not you want to call that temporary irrationality doesn’t matter too much to me. The crucial point is that Spencer pops out of it occasionally to determine if it’s worth continuing down his current path.

But, sure, I’ll acknowledge that I don’t actually know whether these two strategies are feasible for most innovators. Maybe they only work well for a small subset of people, and for most innovators, the actual choice they face is between overconfidence or inaction. If that’s the world we live in, then I’d agree with the claim you seemed to be making on Twitter, that irrationality (at least on the part of the innovators themselves) is essential to innovation.

One last point: Even if it turned out to be true that irrationality is necessary for innovators, that’s only a weak defense of your original claim, which was that I’m significantly overrating the value of rationality in general. Remember, “coming up with brilliant new ideas” is just one domain in which we could evaluate the potential value-add of increased rationality. There are lots of other domains to consider, such as designing policy, allocating philanthropic funds, military strategy, etc. We could certainly talk about those separately; for now, I’m just noting that you made this original claim about the dubious value of rationality in general, but then your argument focused on this one particular domain, innovation.

*Yeah, yeah, the label “rationalist” isn’t a great one, but no one’s yet found a better alternative! “People who substantially agree about certain principles of epistemology including…” is more accurate, but not exactly sticky.

Contra Tyler, on “Is rationality a religion?”

Tyler Cowen called the rationality community a “religion” on Ezra Klein’s podcast the other day. Relevant excerpt, on Tyler’s blog:

Ezra Klein

Yeah, I mean Less Wrong, Slate Star Codex. Julia Galef, Robin Hanson. Sometimes Bryan Caplan is grouped in here. The community of people who are frontloading ideas like signaling, cognitive biases, etc.

Tyler Cowen

Well, I enjoy all those sources, and I read them. That’s obviously a kind of endorsement. But I would approve of them much more if they called themselves the irrationality community. Because it is just another kind of religion. A different set of ethoses. And there’s nothing wrong with that, but the notion that this is, like, the true, objective vantage point I find highly objectionable. And that pops up in some of those people more than others. But I think it needs to be realized it’s an extremely culturally specific way of viewing the world, and that’s one of the main things travel can teach you.

My quick reaction:

Basically all humans are overconfident and have blind spots. And that includes self-described rationalists.

But I see rationalists actively trying to compensate for those biases at least sometimes, and I see people in general do so almost never. For example, it’s pretty common for rationalists to solicit criticism of their own ideas, or to acknowledge uncertainty in their claims.

Similarly, it’s weird for Tyler to accuse rationalists of assuming their ethos is correct. Everyone assumes their own ethos is correct! And I think rationalists are far more likely than most people to be transparent about the premises of their ethos, instead of just treating those premises as objectively true, as most people do.

For example, you could accuse rationalists of being overconfident that utilitarianism is the best moral system. Fine. But you think most people aren’t confident in their own moral views?

At least rationalists acknowledge that their moral judgments are dependent on certain premises, and that if someone doesn’t agree with those premises then it’s reasonable to reach different conclusions. There’s an ability to step outside of their own ethos and discuss its pros and cons relative to alternatives, rather than treating it as self-evidently true.

(It’s also common for rationalists to wrestle with flaws in their favorite normative systems, like utilitarianism, which I don’t see most people doing with their moral views.)

So: while I certainly agree rationalists have room for improvement, I think it’s unfair to accuse them of overconfidence, given that that’s a universal human bias and rationalists are putting in a rare amount of effort trying to compensate for it.

A palindromic poem for Douglas Hofstadter

(Originally published here)

Since it first came out in 1979, Douglas Hofstadter’s Pulitzer Prize-winning book “Gödel, Escher, Bach: An Eternal Golden Braid” has widened the eyes of multiple generations of nerdy kids, and I was certainly no exception. The book draws all sorts of parallels between music, art, math, and computer science, ultimately shaping them into a bold thesis about how consciousness arises from self-reference and recursion. It’s also a very playful book, full of puzzles, puns, and imagined dialogues between Achilles and a tortoise which weave in and out of the main chapters, illustrating the concepts therein.

One of those dialogues, titled “Crab Canon,” is puzzling when you begin reading it – sprinkled with apparent non sequiturs, the word choice a bit awkward and off-kilter. Then shortly after the halfway point, when you start to see recent lines repeated in reverse order, you realize: the whole dialogue is a line-level palindrome. The first line is the same as the last, the second line is the same as the second-to-last, and so on. But because Hofstadter chooses his sentences carefully, they often have different meanings when they reoccur in the reverse order. So, for example, the following bit of dialogue in the first half…

Tortoise: Tell me, what’s it like to be your age? Is it true that one has no worries at all?
Achilles: To be precise, one has no frets.
Tortoise: Oh, well, it’s all the same to me.
Achilles: Fiddle. It makes a big difference, you know.
Tortoise: Say, don’t you play the guitar?

… becomes this bit of dialogue in the second half:

Achilles: Say, don’t you play the guitar?
Tortoise: Fiddle. It makes a big difference, you know.
Achilles: Oh, well, it’s all the same to me.
Tortoise: To be precise, one has no frets.
Achilles: Tell me, what’s it like to be your age? Is it true that one has no worries at all?

Hofstadter does “cheat” a bit, by allowing himself to vary punctuation (for example, “He often plays, the fool” reoccurs later in a new context as “He often plays the fool”). Nevertheless, it’s an impressive execution of a clever conceit.
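If you wanted to verify the line-level palindrome property mechanically, a minimal sketch might look like this (the normalization rule, which ignores case, punctuation, and extra whitespace to allow Hofstadter-style “cheats,” is my own assumption):

```python
import string

def normalize(line: str) -> str:
    # Ignore case and punctuation, so "He often plays, the fool"
    # matches "He often plays the fool" (the kind of "cheat" above).
    stripped = "".join(ch for ch in line.lower() if ch not in string.punctuation)
    return " ".join(stripped.split())  # also collapse extra whitespace

def is_line_level_palindrome(lines: list[str]) -> bool:
    # True if the i-th line matches the i-th line from the end.
    normalized = [normalize(line) for line in lines]
    return normalized == normalized[::-1]
```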

Thumbing through Gödel, Escher, Bach again recently, I came across the Crab Canon and was struck with the desire to attempt a similar feat myself: a line-level palindrome that tells a linear story, that is, a story in which a series of non-repeating events occur.

It’s maddeningly difficult trying to come up with lines that make sense, that in fact make a different sense, in both directions. And because all of the lines interlock — each relying simultaneously on the line preceding it, the line following it, and its mirror-image line — changing any one line in the poem tends to set off a ripple effect of necessary changes to all the other lines as well.

I eventually figured out a few crucial tricks, like relying on ambiguous pronouns (“they,” “their”) and using images that carry a different meaning depending on what’s already happened. Below is the final result – my own canon, in homage to the book that dazzled my teenaged self years ago:

Seaside Canon (for Douglas Hofstadter)

The ocean was still.
In an empty sky, two gulls turned lazy arcs, and
their keening cries echoed
off the cliff and disappeared into the sea.
When the child, scrambling up the rocks, slipped
out of her parents’ reach,
they called to her. She was already
so high, but those distant peaks beyond —
they called to her.  She was already
out of her parents’ reach
when the child, scrambling up the rocks, slipped
off the cliff and disappeared into the sea.
Their keening cries echoed
in an empty sky. Two gulls turned lazy arcs, and
the ocean was still.

-Julia Galef

The answer to life, the universe and everything

(Originally published here)

The Austrian philosopher Ludwig Wittgenstein gets credit for pointing out that many classic philosophical conundrums are unsolvable not because they are so profound, but because they are incoherent. Instead of trying to solve such questions, he argued, we should try to dissolve them, by demonstrating how they misuse words and investigating the confusion that motivated the question in the first place.

But with all due respect to Wittgenstein, my favorite example of the “dissolving questions” strategy comes from Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, which contains a cheeky and unforgettable dissolution of which I’m sure Wittgenstein himself would have been proud:  A race of hyper-intelligent, pan-dimensional beings builds a supercomputer named Deep Thought, so that they can ask it the question that has preoccupied philosophers for millions of years: “What is the answer to life, the universe, and everything?”

After seven and a half million years of computation, Deep Thought finally announces the answer: Forty-two. In response to the programmers’ howls of disappointment and confusion, Deep Thought rather patiently points out that the reason his answer doesn’t make any sense is that their original question didn’t make any sense either. As I’ve written before, questions like this one, or the very similar “What is the meaning of life?” question, seem to be committing a basic category error: life isn’t the kind of thing to which the word “meaning” or “answer” applies.

But in this article I want to take my analysis a little further than that.

After all, in fairness to those poor, disappointed, pan-dimensional beings, it’s not always easy to figure out what question you need to ask to resolve the confusion that you’re feeling. And throughout the course of human history, many people have felt an overwhelming confusion when they contemplate life and the universe, and concluded that there must be some key which, if they had it, would make them say: “Aha, now life makes sense.”

Of course, there isn’t necessarily going to be any such key.  But here, at least, is my diagnosis of the four most common reasons why people feel like there should be such a thing, and what The Hitchhiker’s Guide has to say about each of them:

#1: What’s the point of anything if we’re all going to be dead someday?

In The Hitchhiker’s Guide, Earth is demolished suddenly and unceremoniously by aliens called Vogons, who are clearing a path for a hyperspace bypass. Our real-life future may not hold a Vogon constructor fleet, but it definitely holds our demise. Eventually the human race will be wiped out, whether through war, or a virus, or some environmental disaster like an asteroid of the sort that did in the dinosaurs. Even if we manage to escape all those risks on Earth, eventually our Sun is going to expand, boiling away all of our oceans and atmosphere, and probably swallowing us up. And even if we manage to escape our solar system, eventually the universe will expand until it rips apart all of our individual molecules.

For many people, the fact that the world is going to end someday invalidates everything that happens up to that point. When Ford Prefect accidentally teleports two million years back in time to prehistoric Earth, he says as much to the people living there. “I’ve seen your future,” he tells them. “It doesn’t matter a pair of fetid dingo’s kidneys what you all choose to do from now on… Two million years you’ve got, and that’s it.”

It’s a common attitude. I’ve met many people who have argued that life is pointless since we’re all going to be dead someday. And you might remember the iconic scene at the beginning of Annie Hall in which Woody Allen’s character, as a young boy, has just found out that the universe is expanding. (“He won’t do his homework!” his mother shrieks, beside herself. “What’s the point?” little Woody glumly replies.)

In several ways, however, it’s an odd position to take. The assumption seems to be that if all paths lead to the same endpoint then it doesn’t matter what happens along the way.  But why should we place all the importance on the endpoint of our story and none on the rest of it? Does it really not matter to people whether humanity has a long and glorious existence, or a short and miserable existence, as long as both end in our destruction?

What’s even odder is that even if it weren’t the case that “we’re all going to be dead someday,” many people would still feel like life lacked meaning. There’s another Hitchhiker’s Guide character who represents a common fear about immortality: that it would be soul-crushingly dull. Wowbagger the Infinitely Prolonged, an alien who has had eternal life thrust upon him after a freak accident with an irrational particle accelerator, a pair of rubber bands and a liquid lunch, enjoys his immortality immensely at first. But then the ennui sets in:

“In the end, it was the Sunday afternoons he couldn’t cope with, and that terrible listlessness which starts to set in at about 2:55, when you know that you’ve had all the baths you can usefully have that day, that however hard you stare at any given paragraph in the papers you will never actually read it, or use the revolutionary new pruning technique it describes, and that as you stare at the clock the hands will move relentlessly on to four o’clock, and you will enter the long dark teatime of the soul.”

I don’t mean to suggest that we can know what the actual psychological effects of immortality would be. I’m juxtaposing Ford’s attitude with Wowbagger’s simply because of what the pair reveals about our conception of meaning: if you feel that life lacks meaning because it’s all going to end someday, and you also feel that life would lack meaning if it lasted forever, then that says more about the incoherence of your own conception of meaning than it does about the nature of the universe.

#2: What’s the purpose of our existence?

At one point in The Hitchhiker’s Guide to the Galaxy, the characters are under attack by automated missiles. In desperation, Arthur activates their spaceship’s “Infinite Improbability Drive,” a new invention with the power to trigger vastly improbable events. The missiles promptly transform into a bowl of petunias and a sperm whale. As the confused whale plummets down through the atmosphere, his freshly-minted mind is racing with questions: “What’s happening? Er, excuse me, who am I? Hello? Why am I here? What’s my purpose in life?”

Of course, we readers know that his life has no purpose. He’s just a random creation of the Improbability Drive, generated with no intent and for no end. And after a few paragraphs of excited rumination about the world and his place in it, the whale splatters onto the rocky surface of the planet below. It doesn’t take too much poetic license to see this as an encapsulation of human existence, boiled down to a minute and a half.

It’s true that scientists have a pretty solid explanation of the series of events that led to the existence of humanity. We’re still uncertain about how the very first life actually formed, but we have some good hypotheses, and we’re clear on the subsequent process of evolution which produced Homo sapiens. But to many people, that isn’t an answer to the question of “why” we are here. When they ask “why” we exist, they don’t mean the question in a causal sense (i.e., “What was the series of events that caused our existence?”). They mean the question teleologically — they’re seeking some externally-bestowed purpose for our existence.

But what if our lives did have a predetermined purpose? And what if we could learn what it was? There’s no guarantee that it would yield the sense of meaning we crave. Just ask Arthur Dent, who finds out in The Hitchhiker’s Guide that the Earth was actually a giant computer program commissioned, paid for, and run by a race of hyperintelligent pandimensional beings… or, as we know them in our dimension, mice. For a concise rebuttal to the idea that discovering the purpose of our existence would give our lives meaning, you need look no farther than Arthur’s facial expression after he discovers his. I’d describe it less as “serene enlightenment,” and more as “dismayed bewilderment.”

#3: How can any of our lives matter in the grand scheme of things?

Here’s how Douglas Adams describes the introduction to the Hitchhiker’s Guide to the Galaxy:

“’Space,’ it says, ‘is big. Really big. You just won’t believe how vastly hugely mind-bogglingly big it is. I mean you may think it’s a long way down the road to the chemist, but that’s just peanuts to space. Listen …’ and so on… The simple truth is that interstellar distances will not fit into the human imagination.”

My suspicion is that the enormity of the universe plays a key role in the sensation that life lacks meaning. Psychological research suggests that when we evaluate an act of charity, we don’t judge it based on the amount of good it would accomplish — we judge it based on the amount of good it would accomplish relative to the size of the problem. For example, one study found that people cared more about saving 4,500 refugees if they were told that the refugees lived in a camp of 11,000 than if they lived in a camp of 250,000. In both cases the same number of lives were at stake (4,500) but in the latter case saving those lives was judged less worthwhile because they seemed like just a drop in the proverbial ocean.
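The proportions explain the effect; here is a quick check of the numbers from that study:

```python
lives_saved = 4_500
small_camp, large_camp = 11_000, 250_000

# The absolute good is identical in both conditions...
print(lives_saved / small_camp)  # ~0.41: "we can save most of the camp"
print(lives_saved / large_camp)  # ~0.018: "a drop in the ocean"
# ...but people judge the act by the ratio, not the numerator.
```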

So you can see why contemplating the vast size of the universe might seem to rob life of meaning: however consequential our lives and our actions typically seem to us, they’re going to look pretty insignificant when we start comparing them to the universe as a whole. Who cares about humanity’s joys, agonies, and achievements? Divide them by an infinite universe and they dwindle to nothing.

That’s the logic behind one subplot in The Hitchhiker’s Guide in which a character invents something called the “Total Perspective Vortex.” It’s a fiendish contraption which, when you step inside it and turn it on, displays to you the entirety of existence. The shock of seeing how infinitely tiny you are in comparison to the universe immediately annihilates your brain. The takeaway: “If life is going to exist in a Universe of this size, then the one thing it cannot afford to have is a sense of proportion.”

#4: Things seem to happen without rhyme or reason.

So much of what happens in the world is random or unpredictable or unfair, at least compared to the orderly way we feel like things should work. We expect good deeds to be rewarded and bad deeds punished; we expect hard work to pay off and major events to yield lessons or morals.

Absurdist fiction and theater tries to capture what it feels like to have those expectations upended. In the worlds of the absurdist master Franz Kafka, people wake up to find themselves transformed into giant bugs, or put on trial and convicted of unspecified crimes. The examples are fantastical, but the feelings are real: the world makes no sense, and you’re a small, helpless and insignificant entity being tossed around by vast forces you will never comprehend.

Although it’s more lighthearted in tone, The Hitchhiker’s Guide to the Galaxy has a strong absurdist streak, especially in the incongruity between causes and their effects. In The Hitchhiker’s Guide, the Earth is destroyed mere minutes before the completion of its multi-billion-year program, just so some aliens can build a highway. Devastating wars are triggered when a trivial comment falls through a freak wormhole into another time and place. And the entire universe is ruled by a simple-minded hermit who isn’t sure whether anything exists outside his little shack.

So where did we get our futile expectations of an orderly world? One plausible hypothesis is that the ability to pick up on patterns and causal relationships evolved to help our ancestors notice predators hiding in the bushes, learn to avoid poisonous fruits, figure out where the best fishing spots were, and so on. And if pattern-seeking is a survival skill, it’s also not surprising that it goes into overdrive when we feel threatened or helpless. Recent experimental evidence shows that when people are feeling vulnerable, they’re more likely to see correlations in financial data; to perceive objects in randomly-generated visual noise; to believe conspiracy theories; and to see cause-and-effect relationships between events with no logical connection, like a man stamping his foot three times before a business meeting and then succeeding in his business pitch.

The Hitchhiker’s Guide offers a cheeky explanation for the absurdity of the world: the confusing things that happened to us were all part of an experiment run by mice. In fact, the narrator says, it is very odd that Earth humans were not aware of that fact. “Because without that fairly simple and obvious piece of knowledge,” he says, “nothing that ever happened on Earth could possibly make the slightest bit of sense.”

Incongruity in humor, art, and science

(Originally published here)

“One morning I shot an elephant in my pajamas. How he got in my pajamas, I don’t know.”

-Groucho Marx 

Like most jokes, Groucho’s works almost instantaneously. We hear the joke, we laugh, and we don’t need to think consciously about the process connecting those two points. But what if we played that process in slow motion?

Here’s a plausible account of what that might look like: the joke’s first sentence triggers an image, or at least a concept, of Groucho wearing pajamas and shooting an elephant. But the next phrase, “How he got in my pajamas,” doesn’t make sense in the framework of our current model of the situation, and we’re thrown into confusion. So we think, Hang on, I must’ve missed something, and we go back and re-evaluate the first sentence to see if there was some other, alternative way of interpreting it. Sure enough, now that we’re looking for it, there it is: an alternate meaning of “I shot an elephant in my pajamas” pops out, and you can almost hear the gears grinding as we shift from “I, in my pajamas, shot an elephant” to “I shot an elephant who was wearing my pajamas.”

Groucho’s joke is an example of paraprosdokia, a figure of speech whose latter half surprises us, forcing us to go back and reconsider the assumptions we’d made about what was going on in the first half. Other examples include Mitch Hedberg’s “I haven’t slept for two weeks — because that would be too long,” and Stephen Colbert’s “Now, if I am reading this graph correctly… I’d be very surprised.” The jolt of gratification we feel at converting confusion into clarity is exactly what Incongruity Resolution Theory, a popular theory in the psychological study of humor, predicts: humor is the satisfying “click” of an incongruity within the joke being resolved after you find the appropriate interpretive framework.

One particularly interesting thing about paraprosdokian jokes is how they represent, in miniature, the process of theory-revision in scientific inquiry. You start out with a straightforward working theory, based on your initial observations and on whatever prior assumptions and expectations you bring with you from past experiences. As you collect more observations that don’t seem to fit your theory, you either dismiss them as anomalies, or find a way to shoehorn them into the framework of your theory, or go back and try to re-interpret your original data in the framework of an alternate theory that will fit all your data, both old and new.

The third option becomes increasingly compelling as the incongruities mount, which is the basis for Thomas Kuhn’s famous theory of paradigm shifts in science. Scientific theories aren’t immediately discarded when contradictory evidence comes to light, Kuhn observed. Instead, scientists will dismiss contradictory observations as errors or anomalies, or tweak the theory to accommodate them, until the contradictions become numerous enough that they can no longer be plausibly explained within the original theory, at which point the field seeks out a new explanatory paradigm to replace their old, increasingly flawed one.

Although Kuhn’s theory doesn’t describe all scientific practice, it’s nevertheless a strong model for many cases, including – quintessentially – the shift from geocentrism to heliocentrism. Mounting evidence from astronomers like Copernicus, Kepler, and Galileo became increasingly difficult to reconcile with the idea of a stationary earth at the center of the other planets’ orbits. Many of their contemporaries certainly tried, however, devising ever-more complex modifications to their geocentric model to accommodate new astronomical observations, until it became clear that heliocentrism simply did a better job of explaining the facts.

The satisfying “Aha!” produced by getting a joke or explaining a phenomenon is not all that different from the satisfying “Aha!” produced by certain kinds of art. Think of the modulation, in music, from one key to another. Each musical key signature contains a different subset of all the sharps, flats, and naturals on the scale. Use a note that’s not in your key, and the result is dissonance, which we interpret as tension, or ugliness, or as something being “off.” But since every note is shared by multiple different key signatures, you can use shared notes as pivot points to transition from one key to another. The resultant effect is similar to paraprosdokia: having initially interpreted the pivot-point as belonging to the original key, you’re momentarily disoriented to hear it followed by notes which don’t belong to that key, until you shift your mental framework to a new key in which those notes “make sense.”

Rudolf Arnheim, one of the most celebrated writers on the psychology of aesthetic perception, has a nice example of musical modulation in his 1978 book, The Dynamics of Architectural Form. This is a passage from a violin sonata by Jean-Marie Leclair:

[Score excerpt, from Rudolf Arnheim’s The Dynamics of Architectural Form]

Arnheim explains:

“Two notes refer to the same tone, but they are written differently because the b-flat is experienced dynamically as the outcome of a steep ascent in the key of D, which has, as it were, overshot the mark by a half tone and is straining downward toward the dominant, a. The same tone written as an a-sharp presses upward as the leading tone in the new key of B, thereby assuming a new function in a different structural context.”
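In pitch-class terms, the identity Arnheim describes is easy to verify: in twelve-tone equal temperament, b-flat and a-sharp name the same pitch class, and only the surrounding key assigns the tone its function. A small sketch (the C = 0 numbering is the standard convention; the toy note-name parser is my own):

```python
# Twelve-tone equal temperament: letter names map to pitch classes, C = 0.
NATURALS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def pitch_class(name: str) -> int:
    # Each sharp raises the pitch class by one; each flat lowers it by one.
    pc = NATURALS[name[0]] + name.count("#") - name[1:].count("b")
    return pc % 12

# "Bb" and "A#" both resolve to pitch class 10: one tone, two functions.
print(pitch_class("Bb"), pitch_class("A#"))  # 10 10
```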

Space can be modulated just like sound and meaning to produce a mental paradigm shift. An elegant example is the Hotel de Matignon, a mansion built in Paris in 1725 that now serves as the residence of the French Prime Minister. Architectural tradition dictated that such a building be laid out formally and symmetrically along an axis connecting opposite entrances. But because the site itself was uneven, the front and back entrances were misaligned. So the architect shifted the building’s axis midstream, using one axis as an off-center wing of the other.

Hotel de Matignon, Paris (1725)

Arnheim dubs it an “ingenious” solution to the site’s constraints, and highlights the parallel to musical modulation:

“The device used by the architect cannot but remind us of what musicians call an enharmonic modulation, that is, the almost imperceptible shift from one key to another, in the course of which certain tones act as bridges by fulfilling different functions in the two keys and thereby display a double allegiance. The transitional moment generates a slight sensation of seasickness, unwelcome or exhilarating depending on the listener’s disposition, because the frame of reference is temporarily lost.”

In honor of it being Valentine’s Day, I’ll close with a nod to Alison Gopnik, a developmental psychologist at the University of California, Berkeley, who wrote an article for Minds and Machines called “Explanation as Orgasm.” In it, Gopnik argued that the pleasure we get from the feeling of understanding plays a role analogous to that of the orgasm. Perhaps, she suggests, our enjoyment of those “Aha!” moments evolved to entice us to figure out the world around us, just as orgasms evolved to entice us to reproduce. In a way, I hope that’s the case, because it puts a delightful new gloss on aesthetic enjoyment. How much more delicious are those moments of musical and visual and comedic resolution when you view them as being — like non-procreative sex — our cunning species’ way of bypassing the evolutionary carrot-on-a-string, and instead, harvesting tasty carrots of our own?

The fallacy of difference, in science and art

(Originally published here)

It’s not often that you find something that’s a fallacy both logically and creatively — that is, a fallacy to which both researchers and artists are susceptible. Perhaps you’re tempted to tell me I’m committing a category mistake, that artistic fields like fiction and architecture aren’t the sort of thing to which the word “fallacy” could even meaningfully be applied. An understandable objection! But let me explain myself.

I first encountered the term “fallacy of difference” in David Hackett Fischer’s excellent book, Historians’ Fallacies, in which he defines it as “a tendency to conceptualize a group in terms of its special characteristics to the exclusion of its generic characteristics.” So for instance, India’s caste system is a special characteristic of its society, and therefore scholars have been tempted to explain aspects of Indian civilization in terms of its caste system rather than in terms of its other, more generic features. The Puritans provide another case in point: “Only a small part of Puritan theology was Puritan in a special sense,” Fischer comments. “Much of it was Anglican, and more was Protestant, and most was Christian. And yet Puritanism is often identified and understood in terms of what was specially or uniquely Puritan.”

Here’s a less scholarly example from my own experience. I’ve heard several non-monogamous people complain that when they confide to a friend that they’re having relationship troubles, or that they broke up with their partner, their friends instantly blame their non-monogamy. But while non-monogamy certainly does make a relationship unusual, it’s hardly the only characteristic relevant to understanding how a relationship works, or why it doesn’t. Non-monogamous relationships are subject to the same misunderstandings, personality clashes, insecurities, careless injuries, and other common tensions that tend to plague intimate relationships. But the non-monogamy stands out, so people tend to focus on that one special characteristic, and ignore the many generic characteristics that can cause any kind of relationship to founder.

So the fallacy of difference is a fallacy of science (broadly understood as the process of investigating the world empirically), but how is it also a fallacy of art? Because artists, like scientists, are concerned with understanding the world, though their respective goals are different. While scientists want to model the world accurately in order to answer empirical questions, artists want to make us believe in the story they’re telling us or the scene they’re showing us, or to highlight some feature of the world that they find particularly beautiful or interesting, or to successfully provoke a desired reaction. That tends to require a pretty sophisticated understanding of how the world looks and acts and feels.

When they fail, it’s often the fallacy of difference at work. In novels, TV shows and movies, a flat, “one-dimensional” character is a telltale sign of a clumsy writer who focused on his character’s one or two special traits at the expense of all the generic traits common to most human beings. It’s an easy trap to fall into, because it’s such a straightforward template for creating a character: you start with one or two unique traits — “She’s the rebel!” or “He’s the funny one!” — and then whenever your character has to react to some situation you can just ask yourself, “Okay, how would a rebel react here?” or “What would a funny guy say to that?” But no one is a rebel or a clown full-time. Most of the time, they’re just a person.

Same goes for building a fictional setting, which in many comic books or movies functions a lot like a character in its own right. It’s certainly true that real cities have distinct flavors to them, just like people have distinctive personalities, so if you’re walking in Brooklyn, it feels unmistakably different from walking in Manhattan, or Baltimore, or San Francisco. There are characteristic building styles and features that define a city’s aesthetic, like Baltimore’s row houses, or Brooklyn’s brownstones. So it’s tempting to design your fictional city around some special aesthetic theme, like “futurism” or “noir.” But even the most futuristic of cities wouldn’t really only consist of sleek skyscrapers and helipads, and even the most sordid and noirish city wouldn’t really be all dark alleyways and disreputable bars. Like all cities, their special features should be offset against all the generic ones: the nondescript office buildings, bus stops, grocery stores, laundromats, and so on. (Or whatever their equivalents are, in your fictional universe.)

If you’re building a real city instead of imagining one, the fallacy of difference comes into play in a different way. Architecture is really more design than art, in that each building is supposed to provide a solution to a particular problem — e.g., “We need a place to educate our children,” or “We want an office building that encourages interdepartmental interactions.” So the temptation for architects is to focus on the special characteristics their building should have to solve that problem, at the expense of the generic characteristics that all buildings need in order to be comfortable and pleasant. Bryan Lawson’s The Language of Space contains a thoughtful discussion of this trap, though he doesn’t explicitly call it the fallacy of difference: “When architects come to design specialized buildings, such as a psychiatric unit, they tend to focus on the special factors rather than the ordinary ones,” he says. “We design lecture theaters with no windows as perfectly ergonomic machines for teaching, and then forget how unpleasant such a place might be for the student who is there for many hours, day after day.”

The fact that this fallacy pops up in creative pursuits as well as empirical ones is interesting in its own right, I think, but it’s particularly worth noting as a reminder to fight our tendency to compartmentalize what we learn. You’ve seen this before, no doubt, on a smaller scale. For example, most people who ace the logic problems on the LSAT will, after leaving the classroom, blithely make the same kinds of arguments that they easily identified as fallacious when they were in “spot the fallacy” mode. But concordances spanning endeavors as seemingly dissimilar as art and science suggest the existence of even broader compartments — and the benefit of noticing them, and breaking them down.

Map of Bay Area Memespace

(Originally published here, in 2013)

The Bay Area is unusually dense with idea-driven subcultures that mix and cross-pollinate in fascinating ways, many of which are already enriching rationalist culture.

This map is my attempt at illustrating that landscape of subcultures, and at situating the rationalist community within it. I’ve limited myself to the last 50 years or so, and to subcultures defined by ideology (as opposed to, say, ethnicity). I’ve also depicted some of the major memes that have influenced, and been influenced by, those subcultures:

[Map of Bay Area memespace]

Note that although many of these memes are widely influential, I only drew an arrow connecting a meme to a group if the meme was one of the defining features of the group. (For example, yoga may be popular among many entrepreneurs, but that meme -> subculture relationship isn’t strong enough to make my map.)

Below, I expand on the map with a quick tour through the landscape of Bay Area memes and subcultures. Instead of trying to cover everything in detail, I’ve focused on nine aspects of that memespace that help put the rationalist community in context:

1. Computer scientists

Some of the basic building blocks of rationality come from computer science, and the Bay Area is rich with the world’s top computer scientists, employed by companies like Intel, IBM, Google, and Microsoft, and universities like Stanford and UC Berkeley. The idea of thinking in terms of optimization problems — optimizing for these outcomes, under those constraints — has roots in computer science and math, and it’s so fundamental to the rationalist approach to problem-solving that it’s easy to forget how different it is from people’s normal way of thinking.
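As a toy illustration of that framing (the projects and numbers below are invented, not drawn from anything real): pick the subset of projects that maximizes total value, subject to a budget constraint.

```python
from itertools import combinations

# Invented data: project -> (value, cost).
projects = {"A": (10, 4), "B": (7, 3), "C": (5, 2), "D": (2, 1)}
budget = 6

# Brute-force search over all subsets that fit the budget,
# keeping the one with the highest total value.
feasible = (
    subset
    for r in range(len(projects) + 1)
    for subset in combinations(projects, r)
    if sum(projects[p][1] for p in subset) <= budget
)
best = max(feasible, key=lambda subset: sum(projects[p][0] for p in subset))
print(best)  # ('A', 'C'): value 15 at cost 6
```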

Another rationalist building block, Bayesian inference, is several centuries old, but had fallen out of favor until the computing methods and power of the 1970s made it actually usable. Widespread use of Bayesianism in the field of artificial intelligence (e.g. Bayes nets) also contributed to its resurgent popularity.
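The rule itself is one line; here is a minimal worked example with made-up numbers:

```python
# Bayes' rule with invented numbers: update belief in hypothesis H
# after observing evidence E.
p_h = 0.01           # prior: P(H)
p_e_given_h = 0.9    # likelihood: P(E | H)
p_e_given_not_h = 0.1

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability
p_h_given_e = p_e_given_h * p_h / p_e                  # posterior: P(H | E)
print(round(p_h_given_e, 3))  # 0.083: strong evidence, but the low prior still dominates
```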

2. Startup culture 

There’s a distinctive culture behind the successes of the Bay Area’s startups, and it’s one that I see benefiting rationalists as well. Business in general is good real-world rationality training: you test your theories, you update your models, or you fail. And startup culture in particular promotes a “try things fast” attitude that can be a perfect antidote to the “sit around planning and theorizing forever” failure mode we’re sometimes prone to. Startup culture’s “think big, be ambitious” meme is also something I could see impacting rationalist culture in the coming years. (For a look at this meme turned up to 11, you can check out the only-partially-tongue-in-cheek Yudkowsky ambition scale.)

3. Hacker culture

A lot of the credit for the culture of Silicon Valley’s startup scene goes to the first generation of computer programmers, whose “hacker” culture originated at MIT in the late 1950s, but shortly thereafter sprang up in a few other early-adopter schools like Stanford and UC Berkeley. In addition to being passionate about coding, hackers were unimpressed by “bogus” status signals, like age and higher education, and judged people only by the cleverness and usefulness of the things they could create. (One of the most admired among the original hackers was twelve-year-old Peter Deutsch.) It was, in other words, the perfect cultural soil for the seeds of a paradigm-busting startup culture.

The hacker ethic also included an itch to fix broken or inefficient systems, and an impatience with the bureaucracy that prevented them from doing so. “In a perfect hacker world, anyone pissed off enough to open up a control box near a traffic light and take it apart to make it work better should be perfectly welcome to make the attempt,” Steven Levy wrote in Hackers: Heroes of the Computer Revolution. That’s one reason I credit hacker culture for another Bay Area meme that I see in many entrepreneurs, rationalists, and others: building creative alternatives to establishment institutions like government, education, and health.

For examples, look at all the approaches to alternative education that have sprung up in the Bay – UnCollege, Coursera, Udacity, and General Assembly. Or look at Quantified Self, the community of people figuring out how to improve their health by tracking and analyzing their own biometrics. Or the Seasteaders, who believe the free market can produce better societies than the ones historical forces left us with. Or MetaMed, the company staked on the idea that we can improve significantly on mainstream medicine if we apply rationalist research tools to the medical literature.

4. Eastern spiritualities

Although the heyday of the counterculture was over by the 1980s, its memes still influence the Bay Area, and through it, rationalist culture. Yoga and meditation were introduced to the US via the hippies’ exploration of Eastern religions, but those practices have been mostly stripped of their original spiritual meanings by now, and are popular for their benefits to mental and physical well-being.

Meditation in particular has become common among rationalists, and has some interesting overlaps with rationality I hadn’t noticed before I moved out here. Meditation seems to train you to stop automatically identifying with all of your thoughts, so that, for example, when the thought “John’s a jerk” pops into your head, you don’t assume that John necessarily is a jerk. You take the thought as something your brain produced, which may or may not be true, and may or may not be useful — and this ability to take a step back from your thoughts and reflect on them is arguably one of the building blocks of rationality.

5. Human Potential movement

Another pillar of counterculture was the Human Potential movement, named after Aldous Huxley’s argument that the human brain is capable of much more insight, fulfillment, and varied experiences than we’ve been aware of thus far. In 1962 the Esalen Institute, a retreat built on hot springs south of San Francisco, started running classes designed to help people realize more of their “potential,” through activities like roleplaying, primal screams, and group therapy. The Landmark Forum, originally known as est, was another leader of the movement, and emphasized taking responsibility and questioning the narratives you construct around events in your life. And I’d count other popular practices like nonviolent communication, radical honesty, and internal family systems in this same tradition.

Rationality also focuses on personal development, of course, but there’s not much connection between the Human Potential movement’s approach and the rationalists’. As far as I can tell they developed independently of each other and have very different epistemologies. From my perspective, Human Potential practices span a spectrum from common sense techniques backed up by anecdotal evidence, to unsupported psychotherapy, to outright mysticism that makes claims that are clearly wrong or not even wrong.

Nevertheless, the basic goals of seeking fulfillment and becoming a better version of yourself are fine ones, and the fact that so many people today are interested in pursuing those goals is thanks in large part to the impact the Human Potential movement had on American society. And despite my qualms about their epistemology, I’d be willing to bet that there are at least a few practices the movement and its outgrowths discovered that really are useful, even if their practitioners don’t have correct models of why they’re useful. So I consider the Human Potential and related movements to be a source of hypotheses, if not conclusions.

6. Alternative lifestyles

Finally, the counterculture was also famous for its destigmatization and exploration of alternative lifestyles, which helped create San Francisco’s vibrant kink culture and LGBTQ community. Beyond their live-and-let-live attitude about alternative sexualities, people in the Bay generally put less stock in standard scripts for how a life “should” go. So being nomadic, or living on a boat, or setting up a co-parenting group, or not wanting children, or having an unusual job, or changing your gender, doesn’t raise eyebrows here the way it would in most parts of the country.

And having an expanded space of hypotheses about how to live is complementary with trying to improve your rationality, because a lot of rationality involves pushing past cached thoughts about what you believe, or what kind of person you are, and giving fair consideration to hypotheses that hadn’t even been in your choice set before. In other words: I don’t think it’s necessarily the case that most rationalists live alternative lifestyles. But I do think the conventional lifestyles led by rationalists are consciously chosen to a greater extent than are most people’s lifestyles.

7. Burning Man

One aspect of memespace I wasn’t able to depict on the map is how the cross-pollination between groups actually occurs. To some extent it’s caused by normal social interactions and the blogosphere, as you’d expect, but the Burning Man festival is another important driver of cross-pollination that might be less obvious to outsiders. In fact, I wanted to put “Burner culture” on my memespace map, but was flummoxed by the fact that it would have to be connected to essentially all other groups and memes.

Burning Man consists of 50,000 people, including people from all of the subcultures, coming together for one week in the Nevada desert to create a temporary city. And though the old-school hippies might distrust Silicon Valley’s wealthy elites, and the rationalists might look askance at the New Age aura-readers, the communal spirit of Burning Man does a pretty effective job of breaking down those barriers. It’s only a yearly event, but the sense of community lingers afterwards, and Burning Man social connections are reinforced throughout the year at events like Ephemerisle (Burning Man + libertarianism) and the BIL conference (Burning Man + social entrepreneurship).

8. Social pressure to give back

Bay Area society puts a high value on helping the world. Perhaps it’s an echo of the social justice movements of the 1960s, or perhaps it comes from the hackers’ conviction that technology should be used for the public good. Whatever the origins of this social more, you can see it in the idealistic language entrepreneurs use to describe their startups, and in the recent growth of a segment of startup culture known as “social entrepreneurship” that focuses on the kinds of global problems traditionally addressed by charities.

And sure, it’s often the case that the “world-changing” rhetoric entrepreneurs use to describe their startups is just rhetoric. But the fact that they feel obliged to frame their business in altruistic terms is at least a symptom of the fact that the Bay Area expects you to try to make a positive contribution to the world. Which means that self-made millionaires in this region are more likely than elites elsewhere to put their wealth towards good causes.

That, in turn, forges social connections between the Bay Area’s entrepreneurs, investors and engineers, and the effective altruists, rationalists, transhumanists, and other groups they support. So this phenomenon is doubly fortunate for the rationalists: not just because Bay Area philanthropy helps us directly, but because those connections expose us to memes from different subcultures that can help round out our worldview.

9. Effective Altruists

The social pressure to give back is also fortunate because it makes the Bay a hospitable environment for the Effective Altruists, one of the newest communities to hit the Bay, and a close cousin to the rationalists. Arguably, the movement is still centered at Oxford, where the Centre for Effective Altruism is based. But the EA organizations GiveWell and Leverage Research recently relocated to the Bay Area from New York, and Leverage hosted the first-ever global Effective Altruism Summit here this summer, which I think is enough to qualify this as a fledgling Bay Area subculture. I would also consider MIRI to be an older member of this group, specifically in the far-future-focused subset of EA culture. [EDIT: This section is outdated, but the main point stands, in the sense that Effective Altruism has grown dramatically since 2013, and is now arguably centered in the Bay Area.]