Many people within the Boston EA community seem to have come to it post-college and through in-person discussions.

Hmm. I haven't spent much time in the area, but I went to the Cambridge, MA LessWrong/Rationality "MegaMeetup" and it was almost exclusively students. Is there a Boston EA community substantially disjoint from this LW/Rationality group that you're talking about?

More generally, are there many historical examples of movements that experienced rapid growth on college campuses but were then able to grow strongly elsewhere? Civil rights and animal welfare are candidates, but I think they mostly fail this test for different reasons.

If you can convince one new person to be an EA for $100k, you're more efficient than successfully raising your kid to be one, and that's ignoring time-discounting.

I honestly do not think this is possible, and again I look to religious organizations as examples where (my impression is that) finding effective missionaries is much harder than getting the minimal funding they need to operate at near-maximum efficiency. This is something we need more data on, but I expect many of the rosy pictures people have of translating money or other fungibles into EA converts will not stand up to scrutiny, in much the same way that GiveWell has raised its estimates of the cost of saving a life in the developing world by an order of magnitude. I especially think that the initial enthusiasm of new EAs converted through repeatable methods (like 80,000 Hours) will fade more quickly than that of "organic" converts and children raised in EA households (to an even greater extent than for religions).
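For concreteness, here is a minimal sketch of the time-discounting point in the quoted $100k comparison. Every number in it (the lifetime-giving figure, the discount rate, the 25-year delay before a child starts donating) is a made-up assumption for illustration, not data:

```python
# Toy present-value comparison; all figures are hypothetical assumptions.

def present_value(amount, years_from_now, discount_rate=0.03):
    """Discount a future amount back to today's value."""
    return amount / (1 + discount_rate) ** years_from_now

lifetime_giving = 500_000  # assumed nominal lifetime donations, either way

# An outreach convert starts giving now; a child raised as an EA starts
# giving (say) 25 years from now.
convert_value = present_value(lifetime_giving, years_from_now=0)
child_value = present_value(lifetime_giving, years_from_now=25)

print(f"convert (gives now):     ${convert_value:,.0f}")
print(f"child (gives in 25 yrs): ${child_value:,.0f}")
```

At a 3% discount rate the child's giving is worth roughly half as much in present-value terms, which is why ignoring time-discounting understates the case for conversion; my skepticism above is about the $100k cost estimate itself, not this arithmetic.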

I think religions mostly expand at first through conversion, and then once they start getting diminishing returns they switch to expanding through reproduction. EA hasn't reached this changeover point yet, and isn't likely to for a while.

Maybe. I have the impression that religions mostly used missionaries to expand geographically, and hit diminishing returns very quickly once they had a foothold. Basically, I'd guess that as soon as a potential convert knows the organization exists, you've essentially already hit the wall of diminishing returns. I agree that as long as EA stuff has non-structural geographic lumpiness (i.e. geographic concentrations that are a result of accidents of history rather than of intrinsic reasons related to where EA memes are most effective), EA missionary work may be the major driver of growth. But I think the EA memes are most effective on a wealthy, technologically connected sub-population which we may saturate in just a few years.
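To picture that saturation claim, here is a toy logistic-growth sketch; the population size, starting membership, and recruitment rate are all invented for illustration:

```python
# Toy logistic saturation model; every parameter is a made-up assumption.
receptive_population = 1_000_000  # assumed size of the receptive sub-population
members = 1_000                   # assumed current membership
growth_rate = 1.0                 # assumed annual recruitment per member

for year in range(1, 11):
    unreached = 1 - members / receptive_population  # fraction still unconverted
    members += growth_rate * members * unreached
    print(f"year {year:2d}: {members:,.0f} members")
```

Membership roughly doubles each year while the pool is mostly unreached, then growth stalls as the receptive sub-population fills up, i.e. saturation within roughly a decade under these made-up numbers.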

I hear many more people describe their own conversion experience as something akin to "I heard the argument, and it just immediately clicked" (even if personal inertia prevented them from making immediate drastic changes). I do not hear many people describe it as "I had heard about these ideas a few times, but it was only when Bob [who was supported by EA funding] took the time to sit and talk with me for a few hours that I was convinced." (Again, that's just anecdotal.)

Can we look at the history of the Catholic church during times when new populations of potential converts became accessible through exploration/colonization? What fraction of the church's resources went to missionary work, and did the church reduce its emphasis on having children so that parents would have more free money to give to the church?

Incidentally, these kinds of questions are what make me wish we had more EA historians. We could use a lot more data and systematic analysis.

I mostly disagree with both parts of the sentence "Except that it's much cheaper to convince other people's kids to be generous, and our influence on the adult behavior of our children is not that big." I would argue that

(1) Almost all new EA recruits are converted in college by friends and/or by reading a very small number of writers (e.g. Singer). This is something that cannot be replicated by most adults, who are bad writers and who are not friends with college students. We still need good data on the ability of typical humans to convert others to EA ideas, but my anecdotal observations (e.g. Matt Wage) suggest that this is MUCH harder than you might think.

(2) However much genetic fatalism one accepts, it's known that the biggest influences a parent can have are on the religious and political affiliations of their children. Insofar as donating is determined more by affiliating with the EA movement than by biologically determined factors like IQ, we can expect parents to strongly induce giving by their children.

We can look to evangelical religions to get an idea of what movement building techniques are most effective for the bulk of the population. Yes, many religions have missionaries, but this is usually a small group of unusually motivated and charismatic people. But having lots of children is a strategy that many religions have effectively employed for the bulk of their members.

(One potential counterexample I'd be interested to hear about is the effectiveness of the essentially compulsory missionary work for Mormon men.)

The invention of nuclear weapons seems like the overwhelmingly best case study.

  1. New threat/power comes from fundamental new scientific insight.
  2. Existential risks (nuclear winter, runaway nitrogen fusion in the atmosphere).
  3. Massive potential effects, both positive and negative (nuclear power for everything, medical treatments, dam building and other manipulation of Earth's crust, space exploration, elimination of war, nuclear war, increased asymmetric warfare, reactor meltdowns, increased stability of dictatorships). Some were realized.
  4. Very large first-mover advantage, with time scales of less than a year.
  5. Feasible development in secret.

Nuclear weapons differed in that the world was already at war when they were developed, so policy makers would have been in a different mindset and had different incentives. But otherwise, I think the parallels are as good as you could possibly hope for. The only other competitor is the (overly broad) case of molecular nanotech, but this hasn't actually happened yet, so you don't have much to go on. In contrast, the Manhattan Project is extensively documented.

With respect, I've always found the dynamic inconsistency explanation silly. Such an analysis feels like forcing oneself, in the face of contradictory evidence, to model human beings as rational agents. In other words, you look at a person's behavior, realize that it doesn't follow a time-invariant utility function, and say "Aha! Their utility function just varies with time, in a manner leading to a temporal conflict of interests!" But given sufficient flexibility in the utility function, you can model any behavior as that of a utility-maximizing agent. ("Under environmental condition #1, he assigns 1 million utility to taking action A1 at time T_A1, action B1 at time T_B1, etc., and zero utility to other strategies. Under environmental condition #2...")
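To make that unfalsifiability point concrete, here is a toy sketch (all names and actions hypothetical) of a post-hoc utility function that "rationalizes" whatever behavior was observed:

```python
# Whatever the agent actually did, construct a utility function after the
# fact that assigns maximal utility to exactly those (time, action) pairs.
observed = [("monday", "clean_room"), ("tuesday", "skip_cleaning")]

def post_hoc_utility(time, action):
    """1 for whatever was observed, 0 for everything else."""
    return 1.0 if (time, action) in observed else 0.0

# A "rational agent" maximizing this utility reproduces the observed
# behavior perfectly -- which is why, unconstrained, the model predicts nothing.
for time, _ in observed:
    best = max(["clean_room", "skip_cleaning"],
               key=lambda action: post_hoc_utility(time, action))
    print(time, "->", best)
```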

On the other hand, my personal experience is that my decision of whether to complete some beneficial task is largely determined by the mental pain associated with it. This mental pain, which is not directly measurable, is strongly dependent on the time of day, my caffeine intake, my level of fear, etc. Since you can't measure it, if you were to just look at my actions, this is what you'd say: "Look, some days he cleans his room and some days he doesn't, even though the benefit--a room clean for about a day--is the same. When he doesn't clean his room, and you ask him why, he says he just really didn't feel like it, even though he now wishes he had. Therefore, the utility he is assigning to a clean room is varying with time. Dynamic inconsistency, QED!" But the real reason is not that my utility function is varying. It's that I find cleaning my room soothing on some days, whereas on other days it's torture.

I agree with you in general, and would especially like to hear from some LW psychologists. I think this field is pretty new, though, and not heavily dependent on any canon.

I've never heard of willpower depletion....Surely willpower is a long-term stat like CON, not a diminishable resource like HP.

In fact, previous research has shown that it is a lot like HP in many situations. See the citations near the beginning of the article.
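As a toy illustration of the two models in that exchange (the numbers are invented, not taken from the cited studies): CON-style willpower is a fixed trait, while HP-style willpower is depleted by each act of self-control and restored by rest:

```python
class FixedStat:
    """Willpower as CON: a stable trait, unchanged by use."""
    def __init__(self, con=14):
        self.con = con
    def exert(self, cost):
        return self.con  # exertion leaves the stat untouched

class DepletableResource:
    """Willpower as HP: each exertion drains it; rest restores it."""
    def __init__(self, max_hp=20):
        self.max_hp = max_hp
        self.hp = max_hp
    def exert(self, cost):
        self.hp = max(0, self.hp - cost)
        return self.hp
    def rest(self, recovery):
        self.hp = min(self.max_hp, self.hp + recovery)

# Under the resource model, back-to-back self-control tasks leave less
# willpower for the next one -- the pattern the depletion studies report.
w = DepletableResource()
for cost in [5, 5, 5]:
    print("willpower remaining:", w.exert(cost))
```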

Sure, on average it's negative-sum. But I have to guess that society as a whole suffers greatly from having many (most?) of its technically skilled citizens at the low end of the social-ability spectrum. The question would be whether you could design a set of institutions in this area that would have a net positive benefit on society. (Probably not something I'll solve on a Saturday afternoon...)

I'm pretty sure this varies state-to-state.

Well, there are three kinds of meetups I can imagine.

(1) You go for the intellectual content of the meeting. This is what I was hoping for in Santa Barbara. For the reasons I mentioned above, I now think it's unlikely that the intellectual content will ever be worthwhile unless somebody does some serious planning/preparation.

(2) You go for the social enjoyment of the meeting. I confirmed my suspicion in SB that I personally wouldn't socially mesh with the LW crowd, although maybe this was a small sample size thing.

(3) You go to meet interesting people. In my life I've had a lot of short-term and a few long-term friends with whom I've had fun. But I've probably only known 3-4 truly interesting people, in the sense that they challenged my thinking and were pleasing enough to spend a lot of time getting to know well.

Any of the above would get me to go to a meetup, although I'd be most excited about (3).

I suffer from exactly the same thing, but I don't think this is what Roko is worrying about, is it? He seems to worry about "ugh fields" around important life decisions (or "serious personal problems"), whereas you and I experience them around normal tasks (e.g. responding to emails, tackling stuck work, etc.). The latter may be important tasks -- making this an important motivation/akrasia/efficiency issue -- but it's not a catastrophic/black-swan type risk.

For example, if one had an ugh field around their own death and this prevented them from considering cryonics, this would be a catastrophic/black-swan type risk. Personally, I rather enjoy thinking about these types of major life decisions, but I could see how others might not.
