All of Carl_Shulman's Comments + Replies

I.e. I agree with your analysis that they (and artemisinin treatment) are great and worth doing if the local governments don't tax or steal them (in various ways) too intensively.


It's $1000 per life not per net, because in most cases nets or treatment won't avert a death.
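The per-life versus per-net distinction is just division by the probability that any one distributed item averts a death. A minimal sketch with purely illustrative numbers (the $5 net cost and 1-in-200 figure are assumptions for the example, not figures from the comment):

```python
# Hypothetical illustration: cost per life saved = cost per net divided by the
# probability that one distributed net averts a death. Numbers below are
# illustrative assumptions, not sourced figures.
cost_per_net = 5.00                # assumed cost to buy and distribute one net
deaths_averted_per_net = 1 / 200   # assumed: most nets never avert a death
cost_per_life = cost_per_net / deaths_averted_per_net
print(cost_per_life)  # 1000.0
```

So a cheap intervention per item can still cost on the order of $1000 per life saved once the low per-item probability of averting a death is factored in.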


There's plenty of room to work on vaccines and drugs for tropical diseases, improved strains of African crops like cassava, drip irrigation devices, charcoal technology, etc.


The best interventions today seem to cost $1000 per life saved. Much of the trillion dollars was Cold War payoffs, bribing African leaders not to go Communist, so the fact that it was stolen/wasted wasn't that much of a concern.

I tend to prefer spending money on developing cheaper treatments and Africa-suitable technologies, then putting them in the public domain. That produces value but nothing to steal.

Regarding g's point, I note that there's a well-established market niche for this sort of thing: it's like the popularity of Ward Connerly among conservatives as an opponent of affirmative action, or Ayaan Hirsi Ali (not to downplay the murderous persecution she has suffered, or necessarily to attack her views) among advocates of war against Muslim countries. She'll probably sell a fair number of books, get support from conservative foundations, and some nice speaking engagements.


This is based on the diavlog with Tyler Cowen, who did explicitly say that decision theory and other standard methodologies don't apply well to Pascalian cases.


Vagueness might leave you unable to subjectively distinguish probabilities, but you would still expect that an idealized reasoner using Solomonoff induction with unbounded computing power and your sensory info would not view the probabilities as exactly balancing, which would give infinite information value to further study of the question.

The idea that further study wouldn't unbalance estimates in humans is both empirically false in the cases of a number of smart people who have undertaken it, and looks like another rationalization.

The fallacious arguments against Pascal's Wager are usually followed by motivated stopping.

"that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God)." Utilitarian would rightly attack this, since the probabilities almost certainly won't wind up exactly balancing. A better argument is that wasting time thinking about Christianity will distract you from more probable weird-physics and Simulation Hypothesis Wagers.

A more important criticism is that humans just physiologically don't have any emotions that scale linearly. To the extent that we approxim... (read more)

This seems like a non-standard way of thinking that needs some explanation. It's not clear to me that it matters whether my emotions scale linearly, if I'll reflectively endorse the statement "if there are X good things, and you add an additional good thing, the goodness of that doesn't depend on what X is". It's also not clear to me that utilitarians can be seen as having an intrinsic preference for utilitarian behavior as opposed to a belief that their "true" preferences are utilitarian.

utilitarians have a bounded concern with acting or aspiring to act or believing that they aspire to act as though they have concern with good consequences that is close to linear with the consequences

I know this is not what you were suggesting, but this made me think of goal systems of the form "take the action that I think idealized agent X is most likely to take," e.g. WWAIXID.

A huge problem with these goal systems is that the idealized agent will probably have very low-entropy probability distributions, while your own beliefs have very high... (read more)


What standard do you use to identify "good tastes and values" to be open to?

This looks like a relatively clear case of an excessive narrative-to-signal ratio.

And again, babyeating norms need to invade in a similar fashion, and without norms other than baby-eating, the communal feeding pen selects for zero provisioning effort.

"If most of the total cost of growing a child lies in feeding it past the rapid growth stage, rather than birthing 50 infants and feeding them up to that point,"

From their visibility in the transmitted images it seems the disproportion isn't absurdly great. Also, if the scaling issues with their brains were so extreme, why didn't they become dwarfs? One big tool-using crystal being versus 500 tool-using dwarfs of equal intelligence seems like bad news for the giant.

"You're also postulating that a whole group gets this mutation in one shot - ... (read more)

"I fear that you have not managed to convince me of this. If the general idiom of children in pens is stable, then the adults contributing lots and lots of children (as many as possible) is also evolutionarily stable."

I have a tribe of Babyeaters that each put 90% of their effort into reproducing, and 10% into contributing to the common food supply of the pen. This winds up producing 5000 offspring, 30 of which are not eaten, and are just adequately fed by the 10% of total resources allocated to the food supply. Now consider an allele, X, that di... (read more)


I guess it depends on whether the fantastic element can adequately stand in for whatever it is supposed to represent. Magic starship physics can be used to create a Prisoner's Dilemma without trouble, since PDs are well understood, and it's fairly obvious that we will face them in the future. No-Singularity and FTL, so that we can have human characters, are also understandable as translation tools. If Babyeaters are a stand-in for 'abhorrent alien evolved morality' to an audience that already grasps the topic, then the details of their evolution do... (read more)

Eliezer, you're right that the coordination mechanisms would be imperfect, so it's an overstatement to say NO babyeating would occur, I meant that you wouldn't have the 'winnowing' sort of babyeating with consistent orders-of-magnitude disproportions between pre- and post-babyeating offspring populations.

Nits. I'd say there are probably lots of at-least-Babyeater-level-abhorrent evolutionary paths (not that Babyeaters are that bad, I'd rather have a Babyeater world than paperclips) making up a big share of evolved civilizations (it looks like the great maj... (read more)

I wonder about the psychological mechanisms and intuitions at work in the Babyeaters. After all, human babies don't look like Babyeater babies, they're less intelligent, etc. Their intellectual extension of strong intuitions to exotic cases might well be much more flexible than their applications to situations from the EEA, e.g. satisfying them by drinking cocktails containing millions of blastocysts. Similarly, human intuitions start to go haywire in exotic sci-fi thought experiments and strange modern situations.

"I don't understand why you think that provisioning your own offspring is a group advantage." If parents could selectively provision their own offspring in the common pen, then the group would not be wracked by intense commons-problem selective pressures driving provisioning towards zero and reproduction towards the maximum (thus resulting in extermination by more numerous tribes).

Actually, babyeating in the common pen isn't even internally stable. Let's take the assumptions of the situation as given:

  1. There is intertribal extermination warfare. Larger tribes tend to win and grow. Even division of food among excessive numbers of offspring results in fewer surviving adults, and thus slower tribal population growth and more likely extermination.
  2. All offspring are placed in a common pen.
  3. Food placed in the common pen is automatically equally divided among those in the pen and adults cannot selectively provision.
  4. Group selection has res
... (read more)
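The commons-problem dynamic in the assumptions above can be sketched as a toy replicator model (all parameters hypothetical): when food is pooled and split evenly in the pen, per-capita survival is the same for everyone, so a parent's share of surviving adults tracks her offspring count rather than her provisioning effort, and lower-provisioning strategies invade.

```python
# Toy replicator-dynamics sketch (hypothetical parameters) of the common-pen
# commons problem. Each strategy is a provisioning fraction p: a parent puts
# p of her effort into the pooled food supply and 1 - p into offspring.
# Pooled food is divided evenly, so per-capita survival is identical across
# strategies; fitness is therefore proportional to offspring count alone.

def next_generation(pop):
    """pop maps provisioning fraction p -> frequency; returns next generation."""
    offspring = {p: f * (1.0 - p) for p, f in pop.items()}
    total = sum(offspring.values())
    return {p: o / total for p, o in offspring.items()}

pop = {0.10: 0.99, 0.05: 0.01}   # a rare lower-provisioning variant appears
for _ in range(200):
    pop = next_generation(pop)
# The p = 0.05 variant approaches fixation, illustrating the selective
# pressure driving communal provisioning toward zero.
```

This is why, absent strong policing, the pen selects for zero provisioning effort even though tribes of defectors are outcompeted in intertribal warfare.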

I.e. sister ants with their parents alive don't need complex social recognition and punishment mechanisms to deal with conflicting individual and group interests, since their best outcomes coincide. That coincidence of interests can be almost as complete as for a group of clones.

Given ant chromosomal structure (haplodiploidy), an ant is more related to her sisters than to her offspring, and a single female can convert food/resources to offspring roughly as well as two females each with half the resources.
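The relatedness arithmetic behind this is short; a sketch assuming a singly-mated queen (an assumption the comment doesn't state):

```python
# Minimal arithmetic sketch of haplodiploid relatedness, assuming a
# singly-mated queen. The father is haploid, so full sisters share his
# entire contribution; each maternal allele is shared with probability 1/2.
paternal_share = 0.5 * 1.0   # paternal half of the genome, identical between full sisters
maternal_share = 0.5 * 0.5   # maternal half, shared half the time
r_sister = paternal_share + maternal_share   # relatedness to a full sister: 0.75
r_offspring = 0.5                            # standard diploid parent-offspring relatedness
assert r_sister > r_offspring
```

With relatedness to sisters (0.75) exceeding relatedness to own offspring (0.5), a worker's best outcome and the colony's coincide, so elaborate recognition and punishment machinery isn't needed.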

Even relatively strong social recognition and coordination systems, as in primates, leave plenty of opportunities to shirk and betray. Behaviors of selective provisioning and parental investment (the cheating that already sometimes occurs and is punished among Babyeaters) serves both group and individual fitness, reducing the strength of group selection needed to maintain the altruistic punishment of shirkers. It would thus be easier for it to evolve, and groups of selective-provisioners would on average have a competitive advantage (since the group-benefi... (read more)

Re: "MST3K Mantra"

Very improbable evolved beings don't make for good warnings about the precious moral miracle of human values. It would be better to use an example of a plausible 'near-miss,' e.g. by extrapolating from something common in Earth species.

"Why doesn't modern society securitize hard assets into money of zero maturity, instead of using a purely abstract debt-based currency to denominate debts? Because it would be slightly more complicated, that's why." Eliezer,

I think you're mistaken about the relative complexity of parents selectively provisioning their own offspring, versus the baroque and complex adaptations for social intelligence and coordination required for this system to be stable.

"And anyone who tried to cheat, to hide away a child, or even go easier on their own child... (read more)

"makes the large numb" Is obviously a result of an incomplete edit.

Why didn't the Babyeaters develop the practice of separate pens for each family, with tribes redistributing common resources (e.g. erratic, potentially rotting, meat from hunts) among parents, and parents feeding children out of their share? Maybe their brains lacked the capacity to recognize so many distinct off-spring, but why not spray them with a pheromone? Producing vast numbers of offspring with big expensive full-size brains (which is itself implausible) makes the large numb to be destroyed immediately would impose huge metabolic costs relative to privatizing the commons and distinguishing between offspring, then adjusting clutch-size based on parental resources.


You missed (5): preserve your goals/utility function to ensure that the resources acquired serve your goals. Avoiding transformation into Goal System Zero is a nearly universal instrumental value (none of the rest are universal either).


Those are instrumental reasons, and could be addressed in other ways. I was trying to point out that giving up big chunks of our personality for instrumental benefits can be a real trade-off.

"Ingroup-outgroup dynamics, the way we're most motivated only when we have someone to fear and hate: this too is an evolved value, and most of the people here would prefer to do away with it if we can."

So you would want to eliminate your special care for family, friends, and lovers? Or are you really just saying that your degree of ingroup-outgroup concern is less than average and you wish everyone was as cosmopolitan as you? Or, because ingroup-concern is indexical, it results in different values for different ingroups, so you wish every shared ... (read more)

Roko, the Minimum Message Length of that wish would be MUCH greater if you weren't using information already built into English and our concepts.


"I have set guards in the air that prohibit lethal violence, and any damage less than lethal, your body shall repair." I'm not sure whether this would prohibit the attainment or creation of superintelligence (capable of overwhelming the guards), but if not then this doesn't do that much to resolve existential risks. Still, unaging beings would look to the future, and thus there would be plenty of people who remembered the personal effects of an FAI screw-up when it became possible to try again (although it might also lead to overconfidence).

"How about "Every time nerds on OB discuss human relationships, one decibel of evidence is added to the hypothesis that the singularity will look like a sci-fi fanfic novel""

That gets to near-certainty too fast.

Interest in previously boring (due to repetition) things regenerates over time. Eating strawberries every six months may not be as good as the first time (although nostalgia may make it better), but it's not obvious that it declines in utility.

We may also actively value non-boredom in some mid-level contexts, e.g. in sexual fidelity, or for desires that we consider central to our identity/narratives.

"Eating strawberries every six months may not be as good as the first time (although nostalgia may make it better), but it's not obvious that it declines in utility." Isn't "not being as good" just what "declines in utility" means?


The Brave New World was exceedingly stable and not improving. Our current society has some chance of becoming much better.

Well, they won't be doing numerically identical pieces of work. Are you thinking of things like patronage and nepotism positions that exist solely to hand money to their holders? An auto company employee who comes to 'work' and sits at a desk doing nothing from 9 to 5 in order to collect a paycheck, which is offered because of the UAW, isn't contributing anything to the company or the economy, but his enrichment makes a difference to the union leaders, since he will provide union dues and a vote. Many people are in this category, but the most blatant ones ... (read more)

I'm just confused by your distinction between mutation and other reasons to fall into different self-consistent attractors. I could wind up in one reflective equilibrium rather than another because I happened to consider one rational argument before another, because of early exposure to values, genetic mutations, infectious diseases, nutrition, etc. It seems peculiar to single out the distinction between genetic mutation and everything else. I thought 'mutation' might be a shorthand for things that change your starting values or reflective processes before extensive moral philosophy and reflection, and so would include early formation of terminal values by experience/imitation, but apparently not.

"(b) my being a mutant,"

It looks like (especially young) humans have quite a lot of ability to pick up a wide variety of basic moral concerns, in a structured fashion, e.g. assigning ingroups, objects of purity-concerns, etc. Being raised in an environment of science-fiction and Modern Orthodox Judaism may have given you quite unusual terminal values without mutation (although personality genetics probably play a role here too). I don't think you would characterize this as an instance of c), would you?


Every decision rule we could use will result in some amount of suffering and death in some Everett branches, possible worlds, etc, so we have to use numbers and proportions. There are more and simpler interpretations of a human brain as a mind than there are such interpretations of a rock. If we're not mostly Boltzmann-brain interpretations of rocks that seems like an avenue worth pursuing.


In that case can you respond to Eliezer more generally: what are some of the deviations from the competitive scenario that you would expect to prefer (upon reflection) that a singleton implement?

On the valuation of slaves, this comment seemed explicit to me.


A solar powered holodeck would be in trouble in deep space, particularly when the nearby stars are surrounded with Matrioshka shells/Dyson spheres. Not to mention being followed and preceded by smarter and more powerful entities.


Do you think singleton scenarios in aggregate are very unlikely? If you are considering whether to push for a competitive outcome, then a rough distribution over projected singleton outcomes, and utilities for projected outcomes, will be important.

More specifically, you wrote that creating entities with strong altruistic preferences directed towards rich legacy humans would be bad, that the lives of the entities (despite satisfying their preferences) would be less valuable than those of hardscrapple frontier folk. It's not clear why you think that th... (read more)


Some brute preferences and values may be inculcated by connected social processes. Social psychology seems to point to flexible moral learning among young people (e.g. developing strong moral feelings about ritual purity as one's culture defines it through early exposure to adults reacting in the prescribed ways). Sexual psychology seems to show similar effects: there is a dizzying variety of learned sexual fetishes, and they tend to be culturally laden and connected to the experiences of today, but that doesn't make them wrong. Moral education dedic... (read more)

"It seems to me that in a Big World, the people who already exist in your region have a much stronger claim on your charity than babies who have not yet been born into your region in particular."

This doesn't make sense to me. A superintelligence could:

  1. Create a semi-random plausible human brain emulation de novo; whatever this emulation was, it would be the continuation of some set of human lives.

  2. Conduct simulations to explore the likely distribution of minds across the multiverse, as wel

... (read more)

"If this is so, isn't it almost probability 1 that CEV will be abandoned at some point?"

Phil, if a CEV makes choices for reasons why would you expect it to have a significant chance of reversing that decision without any new evidence or reasons, and for this chance to be independent across periods? I can be free to cut off my hand with an axe, even if the chance that I'll do it is very low, since I have reasons not to.

" I can see arguing with the feasibility of hard takeoff (I don't buy it myself), but if you accept that step, Eliezer's intentions seem correct."


Robin has already said just that. I think Eliezer is right that this is a large discussion, and when many of the commenters haven't carefully followed it, comments bringing up points that have already been explicitly addressed will take up a larger and larger share of the comment pool.

"Carl and Roko, I really wasn't trying to lay out a moral position," I was noting apparent value differences between you and Eliezer that might be relevant to his pondering of 'Lines of Retreat.'

"though I was expressing mild horror at encouraging total war, a horror I expected (incorrectly it seems) would be widely shared." It is shared, but there are offsetting benefits of accurate discussion.

"Eliezer, sometimes in a conversation one needs a rapid back and forth, often to clarify what exactly people mean by things they say. In such a situation a format like the one we are using, long daily blog posts, can work particularly badly." Why not have an online chat, and post the transcript?


A broadly shared moral code with provisions for its propagation and defense is one of Bostrom's examples of a singleton. If altruistic punishment of the type you describe is costly, then evolved hardscrapple replicators won't reduce their expected reproductive fitness by punishing those who abuse the helpless. We can empathize with and help the helpless for the same reason that we can take contraceptives: evolution hasn't yet been able to stop us without outweighing disadvantages.
