JustinShovelain

I am the co-founder of and researcher at the quantitative long-term strategy organization Convergence (see here for our growing list of publications). Over the last eleven years I have worked with MIRI, CFAR, EA Global, and Founders Fund, and done work in EA strategy, fundraising, networking, teaching, cognitive enhancement, and AI safety research. I have an MS degree in computer science and BS degrees in computer science, mathematics, and physics.

Comments

Goodhart's Law Causal Diagrams

I think causal diagrams naturally emerge when thinking about Goodhart's law and its implications. 

I came up with the concept of the Goodhart's law causal graphs above after a presentation on Scott's Goodhart Taxonomy that someone gave at the EA Hotel in late 2019. I thought causal diagrams were a clearer way to describe some parts of the taxonomy, but their relationship to the taxonomy is complex. I only encountered the paper you and Scott wrote a couple of weeks ago, while getting ready to write this Good Heart Week prompted post, and I was planning to reference it in the next post, where we address "causal stomping" and "function generalization error" and can more comprehensively describe the relationship with the paper.

In terms of the relationship to the paper, I think the Goodhart's law causal graphs I describe above are more fundamental: they atomically describe the types of relationship between the target and proxies in a unified way. I read the causal diagrams in your paper as describing the various ways causal relationships may be broken by taking action, rather than simply describing the relationships between proxies and targets and the ways they may be confused with each other (which is the function of the Goodhart's law causal graphs above).

Mostly, the purpose of this post and the next is to present an alternative, and I think cleaner, ontological structure for thinking about Goodhart's law, though there will still be some messiness in carving up reality.

As to your suggested mitigations, both randomization and the secret metric are good to add, though I'm not as sure about the post hoc one. Thanks for the suggestions and the surrounding paper.

Subspace optima

I like the distinction that you're making and that you gave it a clear name.

Relatedly, the method of Lagrange multipliers can be used to solve optimization problems restricted to the subspace.
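As a quick illustration (a hypothetical toy problem of my own, not from the post): finding the optimum of a function restricted to a subspace via a Lagrange multiplier, here with sympy.

```python
# A minimal sketch: optimize f(x, y) = x + y restricted to the subspace
# (the unit circle) g(x, y) = x^2 + y^2 - 1 = 0, using a Lagrange multiplier.
import sympy as sp

x, y, lam = sp.symbols("x y lambda_", real=True)
f = x + y                 # objective
g = x**2 + y**2 - 1       # constraint defining the subspace

# Stationary points of the Lagrangian f - lambda * g
lagrangian = f - lam * g
solutions = sp.solve(
    [sp.diff(lagrangian, v) for v in (x, y, lam)],
    [x, y, lam],
    dict=True,
)
for sol in solutions:
    print(sol, "f =", f.subs(sol))

# The subspace optimum at x = y = 1/sqrt(2) is not a local optimum of f on the
# full plane (f is unbounded there), which illustrates the distinction.
```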

On a side note, there is a way to partially unify the notions of subspace optimum and local optimum: a subspace optimum is a local optimum with respect to the set of parameters you're using to define the subspace. You're at a local optimum both with respect to how you define the underlying space to optimize over (the subspace) and within that space itself. (Relatedly: moduli spaces.)

COVID-19: An opportunity to help by modelling testing and tracing to inform the UK government

I've decided to try modelling testing and contact tracing over the weekend. If you wish to join and want to ping me, my contact details are in the doc.

Why don't we have active human trials with inactivated SARS-COV-2?

I think virus inactivation is a normal vaccination approach and is probably being pursued here? The hardest part is probably growing the virus in vitro at scale, and perhaps ensuring that all of the virus particles are actually inactivated.

Conflict vs. mistake in non-zero-sum games

Nice deduction about the relationship between this and conflict vs. mistake theory! Similar and complementary to this post is the one I wrote on Moloch and the Pareto optimal frontier.

Metaphilosophical Mysteries

By new "term" I meant to make the clear that this statement points to an operation that cannot be done with the original machine. Instead it calls this new module (say a halting oracle) that didn't exist originally.

Metaphilosophical Mysteries

Are you trying to express the idea of adding new fundamental "terms" to your language, describing things like halting oracles, and then discounting their weight by the shortest statement of the term's properties expressed in the language that existed prior to including this additional "term"? If so, I agree that this is the natural way to extend priors to handle arbitrary describable objects such as halting oracles.

Stated another way: you start with a language L. Let the definition of an esoteric mathematical object E (say, a halting oracle) be D in the original language L. Then the prior probability of a program using that object is discounted by the description length of D. This gives us a prior over all "programs" containing arbitrary (describable) esoteric mathematical objects in their description.
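A minimal way to write that down (my own notation, not from the original exchange): let $\ell_L(\cdot)$ denote description length in language $L$, and let $L{+}E$ be $L$ extended with the new term $E$ defined by $D$. The prior weight of a program $p$ that uses $E$ would then be

$$P(p) \;\propto\; 2^{-\left(\ell_L(D) \,+\, \ell_{L+E}(p)\right)}$$

so the only extra cost of invoking $E$ is the one-time cost of spelling out its definition $D$ in the original language.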

I'm not yet sure how universal this approach is at allowing arbitrary esoteric mathematical objects (appealing to the Church-Turing thesis here would be assuming the conclusion) and am uncertain whether we can ignore the ones it cannot incorporate.

Think Before You Speak (And Signal It)

Interesting idea.

I agree that trusting newly formed ideas is risky, but there are several reasons to convey them anyway (non-comprehensive listing):

  • To recruit assistance in developing and verifying them

  • To convey an idea that is obvious in retrospect, one you can be confident in immediately

  • To signal cleverness and ability to think on one's feet

  • To socially play with the ideas

What we are really after, though, is to assess how much weight to assign to an idea off the bat, so we can calculate the opportunity costs of thinking about the idea in greater detail and of asking for the idea to be fleshed out and conveyed fully. This overlaps somewhat with the confidence with which the speaker is conveying the idea (and the context-sensitive rules for determining it). Also, how do you gauge how old an idea really is? Especially if it condenses gradually or is a simple combination of very old parts? Still... some metric is better than no metric.
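As a back-of-the-envelope sketch of that opportunity-cost calculation (my own hypothetical framing and numbers, not from the post):

```python
# Decide whether to ask for a newly formed idea to be fleshed out, by comparing
# the expected value of hearing it in full against the time cost of doing so.

def worth_elaborating(p_useful, value_if_useful, minutes_to_flesh_out,
                      value_per_minute_of_alternative):
    """Return True if the idea's expected value beats the opportunity cost."""
    expected_value = p_useful * value_if_useful
    opportunity_cost = minutes_to_flesh_out * value_per_minute_of_alternative
    return expected_value > opportunity_cost

# Example: a half-formed idea with a 20% chance of being worth 100 "units",
# needing 15 minutes to flesh out, versus alternatives worth 1 unit/minute.
print(worth_elaborating(0.2, 100, 15, 1))  # True: worth hearing out
```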
