The goal of the Update Project is to help decision-makers improve the accuracy of their models of the world. What this means in practice is that I assemble small groups of thinkers from tech, academia, the nonprofit world, government, etc., who have some interesting differences in their models of a given topic. (See my open questions page for some of the disagreements that have emerged so far.)
In the process, I’m trying to learn and summarize best practices for generating useful updates: How much initial disagreement between participants’ views is ideal? (I don’t know the answer, but I’m pretty sure it’s not “as much as possible.”) And what kinds of heuristics are helpful? For example, I think it’s valuable, when possible, to quantify one’s uncertainty as a subjective probability (between 0 and 1). I also think it’s important to have an affordance for prefacing one’s claims with “I’m not sure why I think this, but…” so that we avoid confabulation.
Why the name? In my circles, an “update” refers to a revision to one’s model. (Technically the reference is to a Bayesian update, but the way we use the term colloquially doesn’t usually mean something quite so precise.) For example, you might say, “I’ve updated on the fact that…” or “Okay, I’m updating in favor of the idea that…”
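For readers who haven’t encountered the formal version: a Bayesian update is just Bayes’ rule applied to your beliefs, revising the probability of a hypothesis H after seeing evidence E. (This is standard textbook notation, included only as an illustration, not as part of the project’s methodology.)

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

Here P(H) is your prior confidence in the hypothesis, P(E | H) is how likely the evidence would be if the hypothesis were true, and P(H | E) is your revised, post-evidence confidence. The colloquial usage is looser than this, but the spirit is the same: evidence in, revised confidence out.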
I like the word because of how matter-of-fact it is. “Changing one’s mind” feels so weighty and dramatic, but updating is just this workaday thing we do — or should do — all the time.
My time and expenses for this project are covered by a contract with the Open Philanthropy Project.