Map of Bay Area Memespace

(originally published here, in 2013)

The Bay Area is unusually dense with idea-driven subcultures that mix and cross-pollinate in fascinating ways, many of which are already enriching rationalist culture.

This map is my attempt at illustrating that landscape of subcultures, and at situating the rationalist community within it. I’ve limited myself to the last 50 years or so, and to subcultures defined by ideology (as opposed to, say, ethnicity). I’ve also depicted some of the major memes that have influenced, and been influenced by, those subcultures:

[Image: map of Bay Area memespace, showing the subcultures and the memes that define them]

Note that although many of these memes are widely influential, I only drew an arrow connecting a meme to a group if the meme was one of the defining features of the group. (For example, yoga may be popular among many entrepreneurs, but that meme -> subculture relationship isn’t strong enough to make my map.)

Below, I expand on the map with a quick tour through the landscape of Bay Area memes and subcultures. Instead of trying to cover everything in detail, I’ve focused on nine aspects of that memespace that help put the rationalist community in context:

1. Computer scientists

Some of the basic building blocks of rationality come from computer science, and the Bay Area is rich with the world’s top computer scientists, employed by companies like Intel, IBM, Google, and Microsoft, and universities like Stanford and UC Berkeley. The idea of thinking in terms of optimization problems – optimizing for these outcomes, under those constraints — has roots in computer science and math, and it’s so fundamental to the rationalist approach to problem-solving that it’s easy to forget how different it is from people’s normal way of thinking.
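
To make that framing concrete, here’s a toy sketch (my own, not from the original post) of a decision posed as a constrained optimization problem; the payoff weights and the choice of scipy’s linear-programming solver are illustrative assumptions:

```python
# Hypothetical example: split 10 hours between two activities to maximize
# total payoff, i.e. "optimize for this outcome, under that constraint."
# The per-hour payoffs (3 and 2) are invented for illustration.
from scipy.optimize import linprog

objective = [-3, -2]           # maximize 3*a + 2*b by minimizing its negation
hours_constraint = [[1, 1]]    # a + b ...
hours_available = [10]         # ... must not exceed 10 hours
result = linprog(objective, A_ub=hours_constraint, b_ub=hours_available,
                 bounds=[(0, None), (0, None)])
print(result.x, -result.fun)   # optimal allocation and its total payoff
```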

Another rationalist building block, Bayesian inference, is several centuries old, but had fallen out of favor until the computing methods and power of the 1970s made it actually usable. Widespread use of Bayesianism in the field of artificial intelligence (e.g. Bayes nets) also contributed to its resurgent popularity.
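
For readers who haven’t seen one, here’s what a minimal Bayesian update looks like in practice: a sketch with invented numbers, using the classic diagnostic-test example.

```python
# Bayes' rule with made-up numbers: P(disease | positive test).
prior = 0.01            # P(disease) before testing
sensitivity = 0.95      # P(positive | disease)
false_positive = 0.05   # P(positive | no disease)

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive
print(f"{posterior:.1%}")  # about 16%, far lower than most people's intuition
```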

2. Startup culture 

There’s a distinctive culture behind the successes of the Bay Area’s startups, and it’s one that I see benefiting rationalists as well. Business in general is good real-world rationality training: you test your theories, you update your models, or you fail. And startup culture in particular promotes a “try things fast” attitude that can be a perfect antidote to the “sit around planning and theorizing forever” failure mode we’re sometimes prone to. Startup culture’s “think big, be ambitious” meme is also something I could see impacting rationalist culture in the coming years. (For a look at this meme turned up to 11, you can check out the only-partially-tongue-in-cheek Yudkowsky ambition scale.)

3. Hacker culture

A lot of the credit for the culture of Silicon Valley’s startup scene goes to the first generation of computer programmers, whose “hacker” culture originated at MIT in the late 1950s but shortly thereafter sprang up at a few other early-adopter schools like Stanford and UC Berkeley. In addition to being passionate about coding, hackers were unimpressed by “bogus” status signals, like age and higher education, and judged people only by the cleverness and usefulness of the things they could create. (One of the most admired among the original hackers was twelve-year-old Peter Deutsch.) It was, in other words, the perfect cultural soil for the seeds of a paradigm-busting startup culture.

The hacker ethic also included an itch to fix broken or inefficient systems, and an impatience with the bureaucracy that prevented them from doing so. “In a perfect hacker world, anyone pissed off enough to open up a control box near a traffic light and take it apart to make it work better should be perfectly welcome to make the attempt,” Steven Levy wrote in Hackers: Heroes of the Computer Revolution. That’s one reason I credit hacker culture for another Bay Area meme that I see in many entrepreneurs, rationalists, and others: building creative alternatives to establishment institutions like government, education, and health.

For examples, look at all the approaches to alternative education that have sprung up in the Bay – UnCollege, Coursera, Udacity, and General Assembly. Or look at Quantified Self, the community of people figuring out how to improve their health by tracking and analyzing their own biometrics. Or the Seasteaders, who believe the free market can produce better societies than the ones historical forces left us with. Or MetaMed, the company staked on the idea that we can improve significantly on mainstream medicine if we apply rationalist research tools to the medical literature.

4. Eastern spiritualities

Although the heyday of the counterculture was over by the 1980s, its memes still influence the Bay Area, and through it, rationalist culture. Yoga and meditation were introduced to the US via the hippies’ exploration of Eastern religions, but those practices have been mostly stripped of their original spiritual meanings by now, and are popular for their benefits to mental and physical well-being.

Meditation in particular has become common among rationalists, and has some interesting overlaps with rationality I hadn’t noticed before I moved out here. Meditation seems to train you to stop automatically identifying with all of your thoughts, so that, for example, when the thought “John’s a jerk” pops into your head, you don’t assume that John necessarily is a jerk. You take the thought as something your brain produced, which may or may not be true, and may or may not be useful — and this ability to take a step back from your thoughts and reflect on them is arguably one of the building blocks of rationality.

5. Human Potential movement

Another pillar of counterculture was the Human Potential movement, named after Aldous Huxley’s argument that the human brain is capable of much more insight, fulfillment, and varied experiences than we’ve been aware of thus far. In 1962 the Esalen Institute, a retreat built on hot springs south of San Francisco, started running classes designed to help people realize more of their “potential,” through activities like roleplaying, primal screams, and group therapy. The Landmark Forum, originally known as est, was another leader of the movement, and emphasized taking responsibility and questioning the narratives you construct around events in your life. And I’d count other popular practices like nonviolent communication, radical honesty, and internal family systems in this same tradition.

Rationality also focuses on personal development, of course, but there’s not much connection between the Human Potential movement’s approach and the rationalists’. As far as I can tell they developed independently of each other and have very different epistemologies. From my perspective, Human Potential practices span a spectrum from common sense techniques backed up by anecdotal evidence, to unsupported psychotherapy, to outright mysticism that makes claims that are clearly wrong or not even wrong.

Nevertheless, the basic goals of seeking fulfillment and becoming a better version of yourself are fine ones, and the fact that so many people today are interested in pursuing those goals is thanks in large part to the impact the Human Potential movement had on American society. And despite my qualms about their epistemology, I’d be willing to bet that there are at least a few practices the movement and its outgrowths discovered that really are useful, even if their practitioners don’t have correct models of why they’re useful. So I consider the Human Potential and related movements to be a source of hypotheses, if not conclusions.

6. Alternative lifestyles

Finally, the counterculture was also famous for its destigmatization and exploration of alternative lifestyles, which helped create San Francisco’s vibrant kink culture and LGBTQ community. Beyond their live-and-let-live attitude about alternative sexualities, people in the Bay generally put less stock in standard scripts for how a life “should” go. So being nomadic, or living on a boat, or setting up a co-parenting group, or not wanting children, or having an unusual job, or changing your gender, doesn’t raise eyebrows here the way it would in most parts of the country.

And having an expanded space of hypotheses about how to live is complementary with trying to improve your rationality, because a lot of rationality involves pushing past cached thoughts about what you believe, or what kind of person you are, and giving fair consideration to hypotheses that hadn’t even been in your choice set before. In other words: I don’t think it’s necessarily the case that most rationalists live alternative lifestyles. But I do think the conventional lifestyles led by rationalists are consciously chosen to a greater extent than are most people’s lifestyles.

7. Burning Man

One aspect of memespace I wasn’t able to depict on the map is how the cross-pollination between groups actually occurs. To some extent it’s caused by normal social interactions and the blogosphere, as you’d expect, but the Burning Man festival is another important driver of cross-pollination that might be less obvious to outsiders. In fact, I wanted to put “Burner culture” on my memespace map, but was flummoxed by the fact that it would have to be connected to essentially all other groups and memes.

Burning Man gathers 50,000 people, including members of all of these subcultures, for one week in the Nevada desert to create a temporary city. And though the old-school hippies might distrust Silicon Valley’s wealthy elites, and the rationalists might look askance at the New Age aura-readers, the communal spirit of Burning Man does a pretty effective job of breaking down those barriers. It’s only a yearly event, but the sense of community lingers afterwards, and Burning Man social connections are reinforced throughout the year at events like Ephemerisle (Burning Man + libertarianism) and the BIL conference (Burning Man + social entrepreneurship).

8. Social pressure to give back

Bay Area society puts a high value on helping the world. Perhaps it’s an echo of the social justice movements of the 1960s, or perhaps it comes from the hackers’ conviction that technology should be used for the public good. Whatever the origins of this norm, you can see it in the idealistic language entrepreneurs use to describe their startups, and in the recent growth of a segment of startup culture known as “social entrepreneurship” that focuses on the kinds of global problems traditionally addressed by charities.

And sure, it’s often the case that the “world-changing” rhetoric entrepreneurs use to describe their startups is just rhetoric. But the fact that they feel obliged to frame their business in altruistic terms is at least a symptom of the fact that the Bay Area expects you to try to make a positive contribution to the world. Which means that self-made millionaires in this region are more likely than elites elsewhere to put their wealth towards good causes.

That, in turn, forges social connections between the Bay Area’s entrepreneurs, investors and engineers, and the effective altruists, rationalists, transhumanists, and other groups they support. So this phenomenon is doubly fortunate for the rationalists: not just because Bay Area philanthropy helps us directly, but because those connections expose us to memes from different subcultures that can help round out our worldview.

9. Effective Altruists

The social pressure to give back is also fortunate because it makes the Bay a hospitable environment for the Effective Altruists, one of the newest communities to hit the Bay, and a close cousin to the rationalists. Arguably, the movement is still centered at Oxford, where the Centre for Effective Altruism is based. But the EA organizations GiveWell and Leverage Research recently relocated to the Bay Area from New York, and Leverage hosted the first-ever global Effective Altruism Summit here this summer, which I think is enough to qualify this as a fledgling Bay Area subculture. I would also consider MIRI to be an older member of this group, specifically in the far-future-focused subset of EA culture. [EDIT: This section is outdated, but the main point stands, in the sense that Effective Altruism has grown dramatically since 2013, and is now arguably centered in the Bay Area.]

Can we intentionally improve the world? Planners vs. Hayekians

(This post is part of a series of Open Questions, in which I describe important questions about which thoughtful people disagree)

The Planner looks at the world and asks, “What are the most important problems that need fixing?” and then comes up with a plan for how to make a dent in some of those problems.

Effective Altruists are Planners. So are lots of other groups that don’t necessarily identify with Effective Altruism’s utilitarian principles, such as the environmentalist or animal rights movements.

You could argue that the value of Planning is just common sense — surely, we should expect better outcomes if thoughtful people actually try to help the world than if they don’t try? But Planners can also bolster their case by pointing to examples of governments, companies, or charitable foundations intentionally creating a lot of social good.

For example, governments and global charities have funded vaccination campaigns that have wiped out major diseases from entire countries (e.g., polio). And the Rockefeller Foundation funded a program to improve wheat production in Mexico; while working on that project, biologist Norman Borlaug developed a new strain of wheat that saved over a billion people from starvation worldwide.

But there’s another type of person who also cares about helping the world, and is wary of the Planner approach. They argue:

  1. It’s very hard to predict in advance how a plan will affect the world.
  2. Most plans to improve the world have failed, some catastrophically (see: Marxism).
  3. Most improvements to the world were not the result of planning, but rather the result of some mixture of serendipity and people pursuing their own ends.
  4. If we all limited ourselves to projects whose social value could be justified in advance, we’d be stuck exploring a narrow slice of project-space, and would miss out on some of the highest-value projects simply because they didn’t seem valuable from within our current, limited worldview.
  5. Therefore, the optimal approach to improving the world is for each of us to pursue projects we find interesting or exciting. In the process, we should keep an eye out for ways those projects might yield opportunities to produce a lot of social value — but we shouldn’t aim directly at value-creation.

This camp is sometimes called “Hayekian,” because their view of value-creation is similar to Friedrich Hayek’s: any individual firm has limited foresight, so the way we increase innovation is to increase the number of firms. The more different things we try, the greater the chance of hitting upon a few big wins. In other words, given our limited ability to “exploit” known strategies for creating social value, we should focus on boosting exploration instead of exploitation.
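
The explore/exploit language comes from the multi-armed bandit literature. As a loose illustration (my own analogy, with invented payoff numbers, not anything from the original post), here’s a minimal epsilon-greedy sketch in which the epsilon parameter plays the Hayekian role of forcing more exploration:

```python
import random

# Epsilon-greedy bandit: with probability epsilon, try a random strategy
# (explore); otherwise use the best-known one (exploit). Payoffs are invented.
def epsilon_greedy(payoffs, epsilon=0.2, rounds=10_000):
    estimates = [0.0] * len(payoffs)  # running estimate of each strategy's value
    counts = [0] * len(payoffs)
    total = 0.0
    for _ in range(rounds):
        if random.random() < epsilon:
            arm = random.randrange(len(payoffs))                       # explore
        else:
            arm = max(range(len(payoffs)), key=lambda i: estimates[i])  # exploit
        reward = random.gauss(payoffs[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total

# One strategy here is far better than the others, but nothing exploits it
# until exploration stumbles onto it.
print(epsilon_greedy(payoffs=[1.0, 1.2, 5.0]))
```

Raising epsilon wastes effort when the best strategy is already known, but it finds the big win faster when it isn’t; that is roughly the Hayekian bet about social value.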

The lines between Planners and Hayekians start to blur in some interesting grey-area cases, such as Bell Laboratories, which produced some of the 20th century’s most valuable innovations (lasers, transistors, information theory, etc.). On the one hand, Bell Labs was funded by AT&T with the explicit goal of producing valuable innovations, which makes it seem like a point for the Planners. But on the other hand, one thing that made Bell Labs distinctive was how much free rein its researchers were given to explore projects that were interesting to them, without having to produce short-term results or justify their work in terms of a bottom line. Point for the Hayekians?

Relevant open questions:

How efficient is the market for creating social value? (coming soon)

Relevant reading:

  • Beware Systemic Change, by Scott Alexander (see especially the hypothetical dialogue between Bob and Alice, though Scott has apparently changed his mind since he wrote this post)
  • The Republic of Science, by Michael Polanyi, is a Hayekian argument applied to scientific innovation
  • The Idea Factory, by Jon Gertner, explores the unique culture of Bell Labs
  • Theory of Change, a post by Aaron Swartz on the value of planning

Are you motivated by obligation, or opportunity?

(This post is part of a series of Open Questions, in which I describe important questions about which thoughtful people disagree)

People who care about making the world better fall into roughly two camps: those motivated by obligation, and those motivated by opportunity.

Here’s the basic case for obligation: There’s a child drowning in a pool. Do you jump in to save him, even at the cost of ruining your expensive suit?

Most of us would feel obliged to save the hypothetical child. Therefore, the argument goes, we should also feel obliged to save the lives of real children in need — children who are dying of treatable diseases like malaria — even if they’re not right in front of us, and even if it costs us the equivalent of an expensive suit. The “obligation” argument is that we’re morally obliged to help others as long as it isn’t a huge burden on us.

The “opportunity” camp rejects this purported obligation, and instead says: Look, I don’t think I’m morally obliged to help strangers, but I like helping strangers, especially when I find an opportunity to do so that seems especially promising. So I’ll go about my life looking for exciting opportunities to do good, but without feeling obliged.

It’s an empirical question which approach produces the most good, and different people have different intuitions about that question. On the one hand, the opportunity camp points out that the feeling of obligation can be punishing, causing burnout or anxiety that can actually reduce your ability to help others. On the other hand, there are plenty of examples of valuable pro-social behaviors that happen mostly because people feel obliged — do you wait your turn in line, or pick up your litter, or do your fair share of the chores, because you find it exciting? Or because you’d feel guilty if you didn’t?

Meanwhile, the obligation camp sometimes notes that the opportunity camp hasn’t actually explained how they can reject the logic of obligation. There are various answers to this challenge. One of them is the Demandingness objection: that if we accept the logic of obligation then we’re required to give until “it hurts,” that is, until we ourselves are suffering as much as our recipients, or at least until we start impairing our ability to earn money to give away. Many people consider that a reductio ad absurdum of the obligation argument. (Some, however, bite that bullet.)

Another way to counter the obligation argument is to say that the drowning child thought experiment is flawed. There are a lot of ways to attack it, but I’ll pick one: It’s not fair to posit a single drowning child. The more appropriate thought experiment would be to imagine millions of drowning children in pools, since that’s the magnitude of the actual problem facing us. And in that thought experiment, it’s less clear that we’d feel obliged to keep jumping in to rescue the children, until all of our suits (and other possessions) were spent.

Finally, one rejoinder I hear from the obligation camp is that the opportunity folks clearly do accept the existence of moral obligation in some areas. Surely they feel obliged not to murder, cheat, steal, and so on. So why don’t they feel obliged to sacrifice some luxuries to save the lives of suffering people? How do they draw that line?

This doesn’t address the philosophical arguments, but for what it’s worth, I see a trend pointing in favor of the opportunity camp. It seems to me that groups of thoughtful people who are actively trying to help the world tend to move from an “obligation” mindset to an “opportunity” mindset over time.

Relevant reading:

  • The names “obligation” vs. “opportunity” come from this post by Luke Muehlhauser, which lays out the basic dichotomy. (In the comments, several leaders of the Effective Altruism movement talk about how common the “opportunity” view is in their circles.)
  • Philosopher Peter Singer’s essay in which he poses the “drowning child” thought experiment
  • A collection of responses to Singer’s essay and the Demandingness objection to it
  • This post from Holden Karnofsky makes the case for “Excited Altruism” (an opportunity perspective)

When is overconfidence useful (if ever)?

(This post is part of a series of Open Questions, in which I describe important questions about which thoughtful people disagree)

This might be the most persistent divide I see between the tech/entrepreneurship communities on the one hand (who tend to be very pro-overconfidence), and the finance/rationalist/academic communities on the other hand (who tend to think overconfidence is harmful).

Key arguments against overconfidence:

  1. You end up choosing bad projects if you’re overconfident — you could instead have been doing something else with a higher expected payoff
  2. You lose the ability to fix flaws (in your own skills, or your strategy) if you’re convinced they don’t exist

Key arguments for overconfidence:

  1. Without overconfidence you’d never have the motivation to achieve anything ambitious
  2. Overconfidence is self-fulfilling; believing in yourself makes you more effective
  3. Overconfidence convinces others (investors, employees) to trust and follow you

Looking at these arguments suggests a few candidates for “cruxes” on which this disagreement might rest. For example, if you believe overconfidence is net good, that might be because you believe that a person’s confidence and motivation are more pivotal in determining his success than his objective skills or strategy. (Whereas someone who thinks overconfidence is net bad might reverse the relative weights on those factors.)

Another possible crux involves the type of overconfidence. Perhaps overconfidence about one’s abilities is net good, because the social benefits of that type of overconfidence are especially high, but overconfidence about one’s knowledge is net bad.

One suggested resolution to the tradeoff between overconfidence’s costs and benefits is to separate your emotional/motivational attitude from your epistemic beliefs. That is, you want to be able to adopt a cheerfully confident attitude while saying “Yup, this project has a low chance of success, but it’s totally worth trying anyway.” I know several people who do this successfully, and their attitude seems to inspire confidence in others as well (taking care of the third point on the “pro-overconfidence” list).

The crux here seems to be whether that ability is easily replicable or not. If it’s too difficult for most people to be (emotionally) confident while being (epistemically) well-calibrated, then overconfidence could be the best patch for humans in some cases.

Also, it’s important to note that all of the above arguments are about why overconfidence is bad or good for the individual. But there’s another set of arguments at the group level: even if overconfidence is bad for you, some people argue, it’s good for society. Overconfidence means that 10,000 individual entrepreneurs end up worse off because they all wrongly think they’re going to create the next Google — but as a result, society gets one Google.
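
To spell out that group-level arithmetic with a toy calculation (every number below is invented for illustration):

```python
# Toy expected-value arithmetic for the group-level argument (numbers invented).
n_founders = 10_000
p_google = 1 / n_founders      # chance any one founder builds "the next Google"
founder_payoff = 1e9           # payoff to the winning founder
founder_cost = 200_000         # opportunity cost each founder bears
social_value = 1e12            # value a Google-scale company creates for society

ev_founder = p_google * founder_payoff - founder_cost
ev_society = p_google * n_founders * social_value  # one expected Google

print(f"EV per founder: {ev_founder:,.0f}")   # negative: a bad deal individually
print(f"EV for society: {ev_society:,.0f}")   # large: a good deal collectively
```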

Related open questions: Rationality vs. Postrationality (coming soon)

A taxonomy of ways books change your worldview

Books that offer data

  1. Books that provide a window onto an interesting piece of the world
    • Examples: Hillbilly Elegy, Courtroom 302, The Power Broker
  2. Books that present surprising case studies, events that force the question, “What does it imply about the world, that X could happen?”
    • Examples: The Idea Factory, Extraordinary Popular Delusions and the Madness of Crowds, The Man Who Mistook His Wife for a Hat
  3. Books that highlight patterns in the world
    • Examples: Anti-intellectualism in American Life, Connections, Bowling Alone, On Bullshit, Metaphors We Live By, Better Angels of our Nature

Books that offer theory

  1. Books with models of how a phenomenon works
    • Examples: Thinking Fast and Slow, How Animals Work, On the Origin of Species, Consciousness Explained
  2. Books with models of what makes something succeed or fail
    • Examples: Zero to One, Film as Film, Democracy in America, Death and Life of Great American Cities, Seeing Like a State
  3. Books that point out a problem
    • Examples: Bad Pharma, Breaking the News
  4. Books that make predictions
    • Examples: Superintelligence, Age of Em, The End of History
  5. Books that give you a general concept or lens you can use to analyze many different things
    • Examples: The Strategy of Conflict, Black Swan, A Pattern Language, Small Worlds, Clock of the Long Now

Books that change your values

  1. Books that make an explicit argument about values
    • Examples: Against Democracy, Robot’s Rebellion, Genealogy of Morals, Doing Good Better, A Theory of Justice
  2. Books that function as thought experiments for you to reflect on how you feel about something
    • Examples: Brave New World, Age of Em, An Inspector Calls
  3. Books written from a holistic value structure, letting you experience that value structure from the inside
    • Examples: Atlas Shrugged, Walden, The Trial, Hitchhiker’s Guide to the Galaxy

Books that change your thinking style

  1. Books that teach principles of thinking directly
    • Examples: How to Solve It, Language Truth and Logic, Philosophical Investigations, Intuition Pumps
  2. Books from which you can learn a style of thinking by studying the author’s approach to the world, or to his material
    • Examples: Surely You’re Joking Mr. Feynman, Freakonomics, Godel Escher Bach
  3. Books that tickle your aesthetic sense in a way that obliquely makes you a more interesting, generative thinker
    • Examples: Labyrinths, Invisible Cities, Arcadia, Aha! Insight

Who benefits from unsolicited criticism?

(Originally a thread on FB here)

Discussions about whether it’s good to get unsolicited criticism tend to feel like people talking past each other. The “yay unsolicited criticism!” side keeps pointing out how criticism helps you improve. But I don’t think that’s the real crux of the disagreement, for the “boo unsolicited criticism!” people.

Instead, I think that the value of unsolicited criticism to a particular person depends on a few key variables:

  1. How much effort do you already spend looking for your own flaws?
  2. How good are you at picking up on implicit feedback from other people’s reactions to you? (To be clear, 1 & 2 determine how likely you are to already have noticed the problem someone else is pointing out to you.)
  3. How much difficulty do you have self-modifying — i.e., acting on feedback?
  4. How much stress or anxiety do you feel when you’re reminded of things you know you’re doing wrong but can’t change?

I hypothesize that people whose answers are “more than average” to these questions are the ones who don’t usually appreciate receiving unsolicited criticism, even when it’s well-intentioned.

My point here: yes, probably there’s some element of irrationality on the part of the “boo unsolicited criticism!” people, in the sense that they’re unwilling to endure short-term discomfort in exchange for long-term gains. But to a large extent, I think they’re rationally evaluating the costs and benefits of unsolicited criticism, for themselves, and correctly perceiving that it’s not a good deal for them.