Epistemic Status: Not canon, not claiming anything, citations not provided, I can’t prove anything and you should definitely not be convinced unless you get there on your own.

Style Note: Phrases with Capitalized Words (other than that one) that are not links indicate potential future posts slash ideas for future rationalist concepts, or at least a concept that is (in a self-referential example) Definitely A Thing.

I got a lot of great discussion, at this blog and elsewhere, about my post What Is Rationalist Berkeley’s Community Culture? There are a lot of threads worth full responses. I hope to get to at least some of them in detail.

This post came out of a discussion with Ben Hoffman. I mentioned that I had more places to go next and more threads to respond to than I could possibly have time to write, and he suggested I provide a sketch of all the places. As an experiment, I’m going to try that, at least for the subset that comes to mind. Even as I finish up writing this, I think of new possible paths.

None of this is me making claims. This is me sketching out claims I would like to make, if I were able to carefully build up to and make those claims, and if after doing that I still agreed with them; trying to robustly defend your own positions is sometimes a great way to end up changing your mind.

In particular, I am definitely not claiming the right to make these claims. An important problem is that often people know something, but do not have the social right to claim that it is true, leaving them unable to share, defend or act upon that knowledge. As we as a culture become unable to state more and more true things explicitly, this problem gets worse. I have left out some claims I am not socially safe even saying I believe, let alone claiming that they are true or claiming the right to claim that they are true.

One thread is to expand on the question of folk values versus mythic values, as I think the original post was pointing to a hugely important thing but got important things about it wrong – I think that people with only folk values are helping, if you choose your folk values wisely, and should be honored. So we might call this (1A) Yay Folk Values.

A second thread is to address the question Sarah asked: What Is The Mission? This deserves not only one but at least two careful and full posts, one (2A) about the mission to preserve our mission and values, The Mission Must Preserve Itself. Preserving your utility function (for both groups and individuals) is hard! If your utility function and culture do not explicitly and expensively protect themselves, they will become something else and eventually end up orthogonal to your original goals. If people decide they don’t want to go on (2B) The Mission To Save The World or support that mission, I respect that decision, because that’s not the part that needs to preserve itself – it follows from thinking carefully about things. What can’t be compromised are the epistemic standards and values. That’s what we must fight for.

Thread (2C) would then talk about How The Mission Was Lost, as the group allowed its culture and utility function to shift over time, partly through the entry of non-core people, but also by choosing goals that were too much about attracting those people, which is doubly toxic to preserving your utility function and culture – you shift your own priorities in ways that are hard to compartmentalize or contain, and you bring in people with different culture.

The follow-up from some people was: why preserve the mission at all?

One I do intend to get to, because it seems super important, is that you sometimes need to (3A) Play On Hard Mode. A lot of paths make your life easy now, but make it hard later, because you develop habits and tools, skills and relationships and assets, that won’t get you what you want, and that incentivize you further towards developing the wrong things. The wrong hills are climbed. You need to avoid optimizing for things, or taking actions, that would subject you to Goodhart’s Law – you need to (3B) Nuke Goodhart’s Law From Orbit. Repeatedly. It won’t stay down.

Getting what you want is also about avoiding what you don’t want. This partially answers, if done right, the question of (3C) How To Keep The Wrong People Out. If you keep your thing unattractive to the wrong people, the wrong people you wouldn’t feel comfortable kicking out will self-select out. The ones who still refuse to leave will make it easier on you; you still have to grow something of a spine, but not that much. New York has inadvertently done a strong job of this. The cost, of course, is the false negatives, and I’d explore how to minimize those without losing what is valuable. This is also related to (3D) Become Worthy, Avoid Power, which is still on my eventual maybe-to-do list.

(3E) Goodhart’s Law Is The Enemy. It is behind far more of our problems than we realize. Almost all of today’s biggest and most dangerous problems are related. That is in fact the place I am going, slowly, in the background. There’s also another track about letting people see how my brain works and (3F) Teaching The Way By Example, but let’s cut the thread off there before I get even more carried away.

Then there’s (4A), which would be Against Polyamory. You can be busy solving the nerd mating problem, but if you’re busy solving that you’re going to have a hard time being busy solving other things, especially if those other things aren’t helping your nerd mating success. I think that, setting aside all issues of whether it makes people happy, whether it is moral, and whether it has good long-term effects or is good for families, it has even more important problems: It is a huge time sink, it hijacks people’s attention constantly, vastly increasing complexity (Complexity Is Bad), and it makes anything and everything about sexual bargaining power and status, because everyone always has tons of choices and Choices Are Really Bad. It makes utility function preservation even harder, and it is an example of (4B) Bad Money Drives Out Good. I have some stories to tell about other people in other communities, as well, to the extent I can find a way to tell them here.

At some point I do need to respond to the intended central point Sarah made, and that Ozy reminded us of elsewhere, that the rationalists by themselves don’t have an especially strong record of actually doing outward-facing things, particularly things that are not charitable organizations or causes. There are essentially three ways I’ve thought of to respond to that.

Response one (5A) is that this is partly because of fixable problems. We should Fix It! Response (5B) is to explain why keeping the projects Within The System is important, and worth accepting a reduced chance of commercial success. If you can’t retain control of the company, and the company’s culture, it will inevitably drift away and be captured, and not only will others get most of the money, they’ll also steal the credit and the ability to leverage the creation to change the world. That doesn’t mean never go out on your own, but there are reasons to aim high even when everyone tells you not to; each project has the same problem that the overall culture does. Of course, you’ll need some people from outside, as we aren’t big enough to cover all the skill sets, even with the non-core people. Response (5C) is that it’s fine to go out and do things on your own and with outsiders, of course it is, but you need A Culture Of Extraordinary Effort or some such, to make those things actually happen more often, and even in that case (5D) Hire And Be Hired By Your Friends And Network, because that is how most people get hired.

Then finally there’s a question that both points to a huge problem and deserves an in-depth answer: Effective Altruism (EA) seems reasonably on-brand and on-similar-mission, with an explicit ‘save the world’ message, so why shouldn’t the people who want that migrate over there? Then I have to finally face writing out my thoughts on EA, and doing it very carefully (but not as carefully as Compass Rose – see basically the entire archives, and I do not have that kind of time), without ever having been to more than local-meetup-level events for EA – I will at some point, but I have this job thing and this family and all that.

Given Scott’s recent post, Fear and Loathing at Effective Altruism Global 2017, combined with this perhaps being the challenge most in need of a response, the timing seems right to do that.

I have a lot of thoughts about EA. Most of them involve philosophy, and many in EA are pretty deep into that, so it’s tough to write things that are new and/or couldn’t be torn to shreds by people who have been over all the arguments for years and years, but at some point we all know I’m going to do it anyway. So to those people: when you do tear it all to shreds, or point out none of it is new, or whatnot, I won’t be offended or anything. It’s going to take me a bunch of iterating and exploring and mind changing to get it right, if I ever get it right. Just be nice about it!

I’ve said before that I am a big fan of Effective but skeptical of Altruism. I mean, sure, I’m a fan, it does a lot of good, but (6A) The Greatest Charity In The World is not Open Phil or the Gates Foundation, it’s Amazon.com, and if Jeff Bezos wants to help the world more, he needs to stop worrying about charity (although if he’s still looking, I do have (6B) Some Potential New Effective Altruist Ideas) and get back to work. Even when you discount the profit motive, there are plenty of other good reasons to do positive-sum things that help others. At a minimum, (6C) Altruism Is Incomplete; it also gets more credit for the good it does than its competitors do, and when it hogs credit for too many good things, this distorts people’s thinking. This ended up expanding beyond the scope indicated here.

It’s all well and good to score high in Compassion and Sacrifice, but if you do that by neglecting Honor and Honesty, or even Justice, Humility, Valor or plain old Curiosity, you’re gonna have a bad time.

To be filed under posts I do not feel qualified to write properly and that would be torn to shreds if I tried, there’s (6D) Can I Interest You In Some Virtue Ethics? or even (6E) Against Utilitarianism. At some point I want to try anyway.

There’s some pretty bizarre thinking going on about what is good or worth thinking about. Some of it even involves people in EA.

There’s the level of weird where what’s weird is the mechanism one worries about, like unfriendly AI, nuclear war, asteroid strikes or pandemic plagues. This is weird in the sense that most people don’t think the probabilities are high enough to worry about, but it’s totally normal in the sense that if a random person believed one of those was imminent they would quite rightfully freak the hell out and (one hopes) try to stop it or control the damage. These are weird maps of reality, but the morality isn’t weird at all. As a group they get massive under-investment, and if we funded all of them ten times more, even if we also did that for a bunch of deeply stupid similar things, the world would be a safer place at a discount price. In this sense, we can all strongly agree we want to ‘keep EA weird.’ (6F) Encourage Worrying About Weird Stuff.

Then there’s the morally weird. Scott pointed to a few examples, and there was some very good analysis in various comments and posts about what the heck is going on there. I think the right model is that these people started by taking the ideas that (6G) Suffering Is Bad and (6H) Happiness Is Good. I don’t disagree, but they then get more than a little carried away. That gets turned into the only thing that matters, and these principles are treated as axioms on the same level as I think therefore I am, or the existence of rice pudding and income tax.

The people involved then saw where that led, drawing increasingly weird conclusions that made less and less sense. After lots of thinking, they decided to endorse those conclusions anyway. I’m sorry, but when you are worried that protons are suffering when they repel each other, you screwed up. (7A) Wrong Conclusions Are Wrong. (7B) Life Is Good. I still am a big believer in (7C) Yes, You Can Draw And Use Long Chains Of Logical Inference, but they’re pretty easy to mess up. If your chain ends up somewhere false, it’s time to find at least one mistake. To my mind such people have made several mistakes, some more obvious than others. I’d start with how you are defining suffering and why exactly you have the intuition that it is so importantly bad.

In general, (8A) The Past Is Not A Dystopian Nightmare, and (8B) Nature Is Not A Dystopian Nightmare. Again, Life Is Good. No one is saying (I hope) that awful things didn’t happen all the time, or that things could have been better, but I consider saying the past had negative utility to be one of those signposts that you messed up.

A big issue in EA, which many have written about and so I have some research I should likely do before writing about it, is (9A) How To Cooperate With Human Paperclip Minimizers. I think that not only are some of the absurd, hopelessly weird EA causes orthogonal to value, but so are some of the less weird ones. In general, I follow the principle of (9B) Help People Do Things, especially helping them figure out how to do things more efficiently, even when I think the things are pointless, so this doesn’t get in the way too much. I could also just write (9C) Stop Paperclip Minimizing and try to argue that those people are doing something useless, but I do not need that kind of trouble, and I doubt my arguments would be convincing at this stage anyway. Still, I am tempted, I think largely but far from entirely for the right reasons. At some point, one must try to figure out (9D) What Is Good In Life?

A last important thread is the problem of motivation. (10A) Yay Motivation! It’s hard to get motivated, and it’s only getting harder. Just about everything we use to motivate ourselves and others is now considered unhealthy in one form or another, and/or a cause of unhappiness. It’s not that this is wrong, exactly, but we need to get our motivation from somewhere. If we’re going to withdraw things from the pool of officially acceptable motivations, we need to be adding to it as fast or faster, or we’ll get a lot of people who don’t have motivation to do things. Which we have.

A particular focus of this problem is ambition. This problem is hard. Empty ambition is clearly harmful to happiness and motivation, but it can also lead to real ambition, and the things that harm empty ambition seem to also harm real ambition, or prevent it in the future. (10B) On Ambition would address this hard problem, some of which I ended up folding into 6B, but it deserves a deeper dive. Another way to look at the issue is that we want people to feel a little sad that they don’t do more or aim higher, or somehow make them prefer doing more to doing less, because we want them to strive to do more, but we also want them to not feel too sad about it, especially if they’re doing plenty already. Scott Alexander feeling bad about not doing enough is kind of insane.

We particularly want to make sure that people can work on themselves first, put their own house in order, and keep themselves rewarded for all their hard work. We need a robust way to defeat the argument that everything you do is killing someone by way of failing to fund saving their life. Thinking in this way is not cool, it’s not healthy, and it does not lead to lives being net saved. We must deal with (10C) The Altruist Basilisk. It is (10D) Out To Get You for everything you’ve got. We must take a day of rest, and (10E) Bring Back The Sabbath.
