All of gregconen's Comments + Replies

Not that it isn't interesting, but it seems confused, and somewhat trivial.

Trivial, because it basically says: keep in mind that "the map is not the territory" applies even if the map is a scientific model. A good thing to keep in mind, nevertheless.

But in the details, you seem to misunderstand some of the problems. "Mathematics appears to have perfect conformity with reality" is, as Vladimir Nesov points out, exactly backwards. Mathematics qua mathematics has no relation to reality, and (properly) makes no claim to reflect reality. Your... (read more)

I'd say lack of honesty, because claims are hard to verify, and therefore it's all about signaling competence to gain status. On the other hand, the basics of Austrian economics are almost trivial, and since you don't get points for stating the obvious, Austrian economics is marginalized even though overall it leads to better results.

In most cases, signing up for cryonics and signing up as an organ donor are not mutually exclusive. The manner of death most suited to organ donation (rapid brain death with (parts of) the body still in good condition, generally caused by head trauma) is not well suited to cryonic preservation. You'd probably need a directive in case the two do conflict, but such a conflict is unlikely.

Alternatively, neuropreservation can, at least in theory, occur alongside organ donation.

The point of the reproductive analysis is that it explains the status seeking and attention seeking - whilst also explaining the fees paid for IVF treatments and why ladies like to keep cute puppies. It is a deeper, better theory - with firm foundations in biology.

Evolutionary analysis can if used properly. But evolutionary analysis is properly identifying adaptations, not:

people's desires should be nailed down as hard as possible to those things that lead to raising good quality babies.

I never said that was the whole of evolutionary theory. It seems like a reasonable 1-line summary of the point I was trying to make - if quoted in context. Your 1-line summary seems to have some flaws too - there is a lot more to evolutionary theory than identifying adaptations.

As ever, both stories are studies in irrelevancy and emotional appeal.

Which probably reflects a good bit of what's wrong with the criminal justice system.

Though unscientific scientific testimony is also a serious problem, apparently also seen in this case.

It may have helped if you'd explained yourself to onlookers in English, or simply asked in English (given Thomas's apparent reasonable fluency).

I disagree with the downvotes, though.

All I said was, "Do you prefer to speak German? That's a language that I can also do [sic]. Do you want me to sometimes translate for you?"

Calling it an "infection" or a "malfunction" implicitly judges the behavior. That's your own bias talking.

The fact that someone desires something because of a meme instead of a gene (to oversimplify things; both are always in play) does not make the desire any less real or any less worthy.

A solely status-based analysis misses things, just as a solely reproductive analysis misses things. The point is that you can't nail desires down to simply "making good babies" or "being high status" or "having lots of sex"; any or all of these may be true desires in a given person.

It is standard practice to regard some meme-gene conflicts as cases of pathogenic infections. See, for example, the books "Virus of the Mind" and "Thought Contagion". Similarly with malfunctions: a suicidal animal has gone wrong - from the standard functional perspective of biologists - just as much as a laptop goes wrong if you try to use it underwater. Biologists from Mars would have the same concepts in these areas. The point of the reproductive analysis is that it explains the status seeking and attention seeking - whilst also explaining the fees paid for IVF treatments and why ladies like to keep cute puppies. It is a deeper, better theory - with firm foundations in biology.

Almost 7 billion humans show how well this theory works.

And yet subreplacement fertility in a number of rich countries (the very place where people have copious resources) points to a serious flaw. It's apparent that many people aren't having babies.

People are adaptation executors, not fitness maximizers.

For a highly simplified example, people like sex. In the ancestral environment sex would lead to babies. But the development of condoms, hormonal birth control, etc, has short-circuited this connection. The tasks of caring for a baby (which are e... (read more)

Some people prefer other things. Mostly, that is ultimately due to memetic infections of their brains - which divert resources to reproducing memes - rather than genes. Yes: some people act to serve parasitic genes rather than their own genes. Yes: some people malfunction, and go wrong. Yet the basic underlying theory has much truth in it - truth that an analysis on the level of status-seeking misses. Of course the theory works much better if you include memes - as well as DNA-genes. An analysis of whether the modern low birth rate strategy in some developed countries is very much worse than the high birth rate strategies elsewhere may have to wait for a while yet. High birth rate strategies tend to be in countries stricken by war, famine and debt. Maybe their genes will prevail overall - but also maybe they won't.

Strongly unFriendly AI (the kind that tortures you eternally, rather than kills you and uses your matter to make paperclips) would be about as difficult to create as Friendly AI. And since few people would try to create one, I don't think it's a likely future.

Keep in mind selection bias. The pool of people who would unschool their children is systematically different from the general population. Aspects of child-rearing unrelated to schooling (at least conventional schooling) and/or genetics probably played a role in determining the adult personality of their children.

Indeed. The eldest of them hypothesizes that it wasn't so much unschooling that caused the good effects, but more likely other factors, most relevantly "the parents having bothered to make any decision regarding their children's schooling", which has been shown to matter in other contexts.

This solves nothing. If we knew the failure mode exactly, we could forbid it explicitly, rather than resort to some automatic self-destruct system. We, as humans, do not know exactly what the AI will do to become Unfriendly; that's a key point to understand. Since we don't know the failure mode, we can't design a superstition to stop it, any more than we can outright prohibit it.

This is, in fact, worse than explicit rules. It requires the AI to actively want to do something undesirable, instead of it occurring as a side effect.

Indeed. I'm not saying the karma system is a bad thing.

Also, the karma system adds an additional barrier, at least in my mind. Knowing that your comment is going to be explicitly judged and your score added to a "permanent record" can be intimidating.

Whether we like it or not, that "intimidation" may be the single most important factor in keeping the level of discourse in the comments unusually high. Status games can be beneficial.

If you haven't already, do check out Eby's Instant Irresistible Motivation video for learning how to create positive motivation.

Interesting. In fact, it seems to mesh with the process I've successfully used to do things like cleaning my desk.

Unfortunately, many of the tasks I have to do don't lend themselves to the visualization in step 1. How does one visualize having studied for an exam, or completed an exercise routine?

In the middle of writing this comment, I realized that I have no experience with IIM, so I'm not qualified to speak from experience. Therefore, please believe what I'm saying only because logic requires that it be true. For the exam, visualize yourself knowing the material, or getting a good grade, or finishing school, or getting a good job. The technique requires you to feel the desire to achieve in your body, so keep moving forward until you hit something that gives a physical reaction. Boom, you've completed the first two steps, so now you can do the third step: compare your current situation to that, while still feeling good about "that". According to what Eby says, your brain should then start planning how to achieve "that", working down from whatever goal you discovered until it hits what you need to do next--which may or may not be actually studying for the exam. For the exercise routine, same thing. Visualize yourself at the gym, or exercising, or having finished your routine, or being in good (or better) shape, or looking attractive, or living longer.
You don't; you visualize whatever it is you're going to get by having done those things. As you'll notice in the video, one of the major functions of the visualization is to engage the feeling of desire - that's why the questions are "what's good about that? what do you like about it?" Since you probably don't actually desire (in the emotional, feeling sense) having studied for an exam or completing an exercise routine, it wouldn't work for that even if you could visualize those things. So, visualize something you DO desire about those things. (Yes, this is still tricky, precisely because you're probably trying to set up these actions in order to avoid bad consequences like failing the exam. As long as this is the outer frame in which your thinking is taking place, the technique won't work very well -- the pain brain usually wins over the gain brain. The way that I fix this with clients is to teach them to identify the specific emotional SASS threat (i.e. Status, Affiliation, Safety, or Stimulation), and disconnect it. Once the threat is gone, positive motivation operates naturally.)
If you've exercised before, you can probably remember the feeling in your body when you're finished--the 'afterglow' of muscle fatigue, endorphins, and heightened metabolism--and you can visualize that. If you haven't, or can't remember, you can imagine feelings in your mind like confidence and self-satisfaction that you'll have at the end of the exercise. As for studying, the goal isn't to study, per se; it's to do well on the test. Visualizing the emotional rewards of success on the test itself can motivate you to study, as well as get enough sleep the night before, eat appropriately the day of, take performance-enhancing drugs, etc. Imagination is a funny thing. You can imagine things that could physically never happen--but if you try to imagine something that's emotionally implausible to you, you'll likely fail. Just now I imagined moving objects with my mind, with no trouble at all; then I tried to imagine smacking my mother in the face and failed utterly. If you actually try to imagine having something--not just think about trying--and fail, it's probably because deep down you don't believe you could ever have it.
That is indeed challenging; I've had difficulty with it myself. You could try to visualize getting your results and seeing that you've gotten a good grade, or imagine the feeling after an exhausting exercise routine.

technology changes the game by making it easier to commit systematic mass murder.

Not to mention the simple expedient of having more people around.

As a percentage of the population, the Thirty Years' War, fought at least nominally between Catholics and Protestants in 17th-century Germany, was among the bloodiest in history, with estimates of 20% to 25% of the population dying.

That's not my point. My point is that Gall's law is unfalsifiable by anything short of Omega converting its entire light cone into computronium/utilium in a single Planck-time step.

Edit: Not to say that Gall's Law can't be useful to keep in mind during engineering design.

A repeat, but a good one.

Do not imagine that mathematics is hard and crabbed, and repulsive to common sense. It is merely the etherealization of common sense.

William Thomson, Lord Kelvin

One I got while reading Jaynes's Probability Theory recently: -- Laplace

Suppose a hyperintelligent alien race did build a space shuttle equivalent as their first space-capable craft, and then went on to build interplanetary and interstellar craft.

Alien 1: The [interstellar craft, driven by multiple methods of propulsion and myriad components] disproves Gall's Law.

Alien 2: Not at all. [Craft] is a simple extension of well-developed principles like the space shuttle and the light sail.

You can simply define a "working simple system" as whatever you can make work, making that a pure tautology.

I would say that Gall's Law is about the design capacities of human beings (like Dunbar's Number), or is something like "there's a threshold to how much new complexity you can design and expect to work", with the amount of complexity being different for humans, superintelligent aliens, chimps, or Mother Nature. (The limit is particularly low for Mother Nature - she makes smaller steps, but gets to make many more of them.)
I agree. All of these concepts are imprecisely connected to the real world. Does anyone have an idea for how we could more precisely define Gall's Law to more ably discuss real expected experience? I'm considering a definition which might include the phrase: "Reducible to previously understood components"

You should probably make an explicit karma balance post for this.

You weren't just presenting evidence. You were making an argument. Some people believed that you were engaged in motivated reasoning and/or privileging the hypothesis.

Please discuss the merits of the argument in the original thread, if desired. I'd prefer to keep the discussions of the merits of the argument and the reactions to it separate.

So is social deference the missing ingredient in my post?

It would help, but the difference I was referring to was that Jayson was embarrassed by his failure of rationality, while you either failed to recognize yours or were proud of it.

Could you be more specific in what exactly was/is my failure and why/how I was arrogant about it, and what are the ad hominems?

Ad hominem arguments are attacks against the arguers, rather than the arguments. For example:

what can we say about the epistemological waterline here?

Comments like that will not impress peopl... (read more)


I think the idea is to have both accurate and inaccurate positive self-beliefs, and no negative self-beliefs, accurate or otherwise.

Whether this is desirable or even possible I take no stance.

They were also both written in English. The question is, can you see the difference?

Jayson apologetically expressed misunderstanding of rationality combined with an apparent willingness to be corrected. You arrogantly expressed your failure, and responded to criticism with ad hominems and whining.

Edit: In that post. Some of your responses were productive, and one is, at the time of this writing, at positive karma.


But since our preferences are given to us, broadly, by evolution, shouldn't we expect that our principles operate locally (context-dependent) and are likely to be mutually inconsistent?


I have a strong preference for a simple set of moral preferences, with minimal inconsistency.

I admit that the idea of holding "killing babies is wrong" as a separate principle from "killing humans is wrong", or holding that "babies are human" as a moral (rather than empirical) principle simply did not occur to me. The dangers of generalizing from one example, I guess.

Pretty much. Both you and bogus apparently forgot to put an initial value into var (unless your language of choice automatically initializes it to 0).

Using while(1) with a conditional return is a little bizarre, when you can just go while(var<100).

Of course, my own draft used if(var % 3 == 0 && var % 5 == 0) instead of the more reasonable x%15.

Mine does, but I'm aware that it's good coding practice to specify anyway; I was maintaining his choice. Yep, but I don't remember how else to signify an intrinsically infinite loop, and bogus's code seems to use an explicit return (which I wanted to keep for accuracy's sake) rather than checking the variable as part of the loop. My method of choice would be for(var = 0; var < 100; ++var){} (using LSL format), which skips both explicitly returning and explicitly incrementing the variable within the loop body.
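For concreteness, the loop shapes under discussion can be sketched as follows. This is my own reconstruction in Python (the thread's snippets are C-style fragments and the original code isn't shown in full; the name fizzbuzz is mine):

```python
def fizzbuzz(n):
    """Return the FizzBuzz word for n, or str(n) otherwise."""
    if n % 15 == 0:          # equivalent to n % 3 == 0 and n % 5 == 0
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# A bounded for-loop initializes, tests, and advances the counter in one
# place, avoiding the while(1)-with-explicit-return pattern and the risk
# of an uninitialized counter variable.
lines = [fizzbuzz(n) for n in range(1, 101)]
```

The n % 15 test relies on 15 being the least common multiple of 3 and 5, which is why it can replace the two-clause conjunction.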

I'm not sure what "arbitrary" means here. You don't seem to be using it in the sense that all preferences are arbitrary.

That seemed to be exactly how he was using it. It would be how I'd respond, had I not worked it through already. But there is a difference between "arbitrary" in "the difference between an 8.5 month fetus and a 15 day infant is arbitrary" and in "the decision that killing people is wrong is arbitrary".

Yes, at some point you need at least one arbitrary principle. Once you have an arbitrary moral principle, you c... (read more)

I am generally confused by the metaethics sequence, which is why I didn't correct Pengvado. Agreed, as long as you have found a consistent set of arbitrary principles to cover the whole moral landscape. But since our preferences are given to us, broadly, by evolution, shouldn't we expect that our principles operate locally (context-dependent) and are likely to be mutually inconsistent? So when we adjust to a new location in the moral landscape and the logician asks us to justify our movement, it seems that, generally, the correct answer would be to shrug and say, 'My preferences aren't logical. They evolved.' If there's a difference in two positions in the moral landscape, we needn't justify our preference for one position. We just pick the one we prefer. Unless we have a preference for consistency of our principles, in which case we build that into the landscape as well. So the logician could pull you to an (otherwise) immoral place in the landscape unless you decide you don't consider logical consistency to be the most important moral principle.

I see my joke fell flat.

In the world at large, sanity is valued much less than it is here at lesswrong. Absurd as it sounds, many people would value righteous indignation above rational debate, or even above positive results.

See the recent discussion on jokes with Rain. The joke's implication was missed. I almost wish that did sound absurd.

You mean not appearing to have been mind-killed is a bad thing?

Welcome to the world. Sanity is not always valued so highly here as you might be used to.

Don't confuse preference with prediction. Where else have I been where sanity is valued more highly and how do I get back to it?

EY's posts undermine (1) significantly

What works for EY may not work for everyone else. For better or worse, he enjoys a special status in this community.

For better or worse, [EY] enjoys a special status in this community.

A status earned precisely by writing posts that people enjoy reading!

If you're suggesting that the ordinary academic/intellectual norm of only allowing high-status people to write informally, with everyone else being forced to write in soporific formal-sounding prose, is operative here, then I suggest we make every effort to nip that in the bud ASAP.

This is a blog; let's keep it that way.

The kind that comes from more than a single person, for a start. An unequivocal sign of a conspiracy (like an actual explosive attached to a support).

Failing that, a report free of clear signs of confusion (like the aforementioned confusion at 4:39). Reports of explosions from people actually familiar with explosions, and/or experience and a track record of cool under threat ("a boiler guy" and a bureaucrat don't qualify, without more evidence). A witness who hasn't changed his story back and forth. Etcetera.

Well, along with medical research, organ donation and cryonics also probably exceed the expected utility of cannibalism or necrophilia.

That said, I'm not sure they would be mutually exclusive. My head for my future self, my innards for the sick, my penis and anus for lovers, and my arms and legs for the hungry.

Cryonics and organ donation is really a winning combination. It solves the organ donor's worry that doctors might not take long shots at saving your life if they can harvest your organs instead.

My head for my future self, my innards for the sick, my penis and anus for lovers, and my arms and legs for the hungry.


That adds some weight. But it's still not particularly convincing. Even assuming he's not being intentionally deceptive or deceptively cut (which I'm not sure is true), it's not anything close to extraordinary evidence, as a claim like that requires.

Remember that witnesses' perceptions and memories will be distorted. Clearly, events were confused (look at his statement at 4:39, where he's confused about whether he's standing on a landing or hanging). He "knows" he heard explosions, apparently based on his experience as "a boiler guy"; e... (read more)

What kind of eyewitness testimony would be more convincing to you?

I agree with the sentiment here.

However, in a community like this one, Aumann's agreement theorem would suggest that most of the commonly held views, at least the views commonly held to be very likely, rather than just somewhat likely, should be correct.

A single eyewitness account, presumably handpicked and stage-managed by people with an agenda, does not make particularly strong evidence.


First, the scenario you describe explicitly includes death, and as such falls under the 'embellishments' exception.

You're going to die (or at least cease) eventually, unless our understanding of physics changes significantly. Eventually, you'll run out of negentropy to run your thoughts. My scenario only changes what happens between then and now.

Failing that, you can just be tortured eternally, with no chance of escape (no chance of escape is unphysical, but so is no chance of death). Even if the torture becomes boring (and there may be ways around that), an eternity of boredom, with no chance to succeed at any goal, seems worse than death to me.

Most versions of torture, continued for your entire existence. You finally cease when you otherwise would (at the heat death of the universe, if nothing else), but your entire existence is spent being tortured. The type isn't really important, at that point.

First, the scenario you describe explicitly includes death, and as such falls under the 'embellishments' exception. Second, thanks to the hedonic treadmill, any randomly-selected form of torture repeated indefinitely would eventually become tolerable, then boring, as you said. Third, if I ever run out of other active goals to pursue, I could always fall back on "defeat/destroy the eternal tormentor of all mankind." Even with negligible chance of success, some genuinely heroic quest like that makes for a far better waste of my time and resources than, say, lottery tickets.

That people find human infants cuter than rabbit, dog, or cat infants isn't a direct contradiction of the hypothesis, as humans would be particularly likely to find human infants cute (just as dogs are particularly likely to be protective of and nurturing to puppies).

The point is that animals with large litters are particularly likely to have cute infants, other things (like degree of genetic closeness) being equal, and that large-litter animals would be sufficiently cute to overcome the fact that we're not related. Of course, domestic puppies and kittens have an... (read more)

The baby elephants I saw on safari recently were pretty cute.

Let me build on this. You say (and I agree) that fixing the damage caused by vitrification is much harder than fixing most causes of death. Thus, by the time that devitrification is possible, very few new people will be vitrified (only people who want a one-way trip to the future).

This leads me to 2 conclusions: 1) Most revivals will be of people who were frozen prior to the invention of the revivification technology. Therefore, if anyone is revived, it is because people want to revive people from the past. 2) The supply of people frozen with a given te... (read more)

That's a reasonable scenario. As time goes on, though, you run into a lot more what-ifs. At some point, the technology will be advanced enough that they can extract whatever information they want from your brain without reviving you. I think it would be really interesting to talk to Hitler. But I wouldn't do this by reviving Hitler and setting him loose. I'd keep him contained, and turn him off afterwards. Is the difference between yourself and Hitler large compared to the difference between yourself and a future post-Singularity AI possessing advanced nanotechnology?

Even setting aside a post-FAI economy, why should this be the case? Your PS3 metaphor is not applicable. Owners of old PlayStations are not an unserved market in the same way that older frozen bodies are. If PS(N) games are significantly more expensive than PS(N+1) games, people will simply buy a PS(N+1). Not an option for frozen people; older bodies will be an underserved market in a way PS3 owners cannot be.

If there's a "mass market" for revivals, clearly people are getting paid for the revivals, somehow. I see no reason why new bodies w... (read more)

Regardless of whether the ultimate effects of global warming are a net positive or negative, there are likely to be costly disruptions, as areas currently good for agriculture and/or habitation cease to be good for them, even if they're replaced by other areas.


A good point.

Your solution does have Omega maximize right answers. My solution works if Omega wants the "correct" result summed over all Everett branches: for every you that 2-boxes, there exists an empty box A, even if it doesn't usually go to the 2-boxer.

Both answers are correct, but for different problems. The "classical" Newcomb's problem is unphysical, just as byrnema initially described. A "Quantum Newcomb's problem" requires specifying how Omega deals with quantum uncertainty.

Interesting. Since the spirit of Newcomb's problem depends on 1-boxing having a higher payoff, I think it makes sense to additionally postulate your solution to quantum uncertainty, as it maintains the same maximizer. That's so even if the Everett interpretation of QM is wrong.

The math is actually quite straightforward, if anyone cares to see it. Consider a generalized Newcomb's problem. Box A either contains $A or nothing, while box B contains $B (obviously A > B, or there is no actual problem). Let Pb be the probability that you 1-box. Let Po be the probability that Omega fills box A (note that only quantum randomness counts here; if you decide by a "random" but deterministic process, Omega knows how it turns out, even if you don't, so Pb = 0 or 1). Let F be your expected return.

Regardless of what Omega does, you... (read more)
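The truncated derivation can be sketched numerically. This is my own sketch (not from the thread), under the assumption discussed below that Omega matches the fill probability to your 1-boxing probability, i.e. Po = Pb; variable names follow the comment's notation:

```python
# Expected return F in the generalized Newcomb's problem: box A holds $A or
# nothing (filled with probability Po), box B always holds $B, and you 1-box
# with probability Pb. Assumed here: Omega sets Po = Pb.
def expected_return(pb, a=1_000_000, b=1_000):
    po = pb                    # assumption: Omega's fill probability tracks Pb
    one_box = po * a           # 1-boxing takes only box A
    two_box = po * a + b       # 2-boxing takes box A plus box B
    return pb * one_box + (1 - pb) * two_box

# Algebraically F = Po*A + (1 - Pb)*B = Pb*A + (1 - Pb)*B, linear in Pb,
# so with A > B the maximum over [0, 1] is at Pb = 1: commit to 1-boxing.
best_pb = max((k / 100 for k in range(101)), key=expected_return)
```

The linearity is the whole argument: once Po is tied to Pb, every unit of 1-boxing probability trades $B of consolation money for $A of expected box-A money, and A > B settles it.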

I'm not sure why you take Po = Pb. If Omega is trying to maximize his chance of predicting correctly, then he'll take Po = 1 if Pb > 1/2 and Po = 0 if Pb < 1/2. Then, assuming A > B/2, the optimal choice is Pb just above 1/2. Actually, if Omega behaves this way there is a jump discontinuity in expected value at Pb = 1/2. We can move the optimum away from the discontinuity by postulating some imprecision in our ability to choose a quantum coin with the desired characteristic. Maybe when we try to pick a coin with bias Pb, we end up with a coin whose bias is drawn from a uniform distribution over [Pb - E, Pb + E]. The optimal choice of Pb is now 1/2 + 2E, assuming A > 2EB, which is the case for sufficiently small E (E < 1/4 suffices). The expected payoff is now robust (continuous) to small perturbations in our choice of Pb.

The slight quantum chance that EY will 2-box causes the sum of EYs to lose, relative to a perfect 1-boxer, assuming Omega correctly predicts that chance and randomly fills boxes accordingly. The precise Everett branches where EY 2-boxes and where EY loses are generally different, but the higher the probability that he 1-boxes, the higher his expected value is.

And, also, we define winning as winning on average. A person can get lucky and win the lottery -- doesn't mean that person was rational to play the lottery.

Interestingly, I worked through the math once to see if you could improve on committed 1-boxing by using a strategy of quantum randomness. Assuming Omega fills the boxes such that P(box A has $)=P(1-box), P(1-box)=1 is the optimal solution.

Interesting. I was idly wondering about that. Along somewhat different lines: I've decided that I am a one-boxer, and I will one-box. With the following caveat: at the moment of decision, I will look for an anomaly with virtually zero probability. A star streaks across the sky and fuses with another one. Someone spills a glass of milk and halfway towards the ground, the milk rises up and fills itself back into the glass. If this happens, I will 2-box. Winning the extra amount in this way in a handful of worlds won't do anything to my average winnings -- it won't even increase it by epsilon. However, it could make a difference if something really important is at stake, where I would want to secure the chance that it happens one time in the whole universe.

I am inclined to doubt that nature's values are orthogonal to your own. Nature built you, and you are part of a successful culture produced by a successful species. Nature made you and your values - you can reasonably be expected to agree on a number of things.

From the perspective of the universe at large, humans are at best an interesting anomaly. Humans, plus all domesticated animals, crops, etc, compose less than 2% of the earth's biomass. The entire biomass is a few parts per billion of the earth (maybe it's important as a surface feature, but lif... (read more)

It's not like no status seeking occurs in those fields.

I agree that making a lump-sum donation is a bad idea. But 200 million dollars (going by the OP's estimate) per year is still a lot of money for a charity to absorb. GiveWell puts the "room for more funding" at $2.5 million (for 2010). This may (probably will) go up in later years, but it's a long way from $200 million. The Stop TB Partnership is a bit larger, but still not $200M/year large.

Probably. But would the general public find IEC (or SIAI) compelling? I'm thinking not.

For a something like this, we need something that will appeal to the average person (at least the average Facebook/Craigslist user), and I think human development projects are more likely to do that than research projects or existential risk projects.

Eliezer Yudkowsky:
Then all versions of this project that I'm interested in won't work. Still seems worth a try. I guess I'm astounded by the degree to which people seem to value "succeeding at what we set out to do" over "trying to do something important".
Convincing the general public that these causes are worthwhile sounds like a worthwhile "make a desperate effort"-level project. Or we can just ignore expected utility, and attempt to satisfy our desire for warm fuzzies.

I'm still waiting for hard evidence that average charity spending has significant net positive impact.

Be that as it may, there exist above-average charities which have a net positive impact. If we select among those charities (or choose a grantmaker likely to select above-average charities), we can have a net positive impact.

The problem is not finding an effective, productive, and reputable charity. There are plenty out there (even if a majority are not). It's finding a charity that can effectively and productively use an extra billion dollars. Many charities don't have the oversight and planning infrastructure to use a windfall of that size.

Mimicking the Gates Foundation grants to GAVI could absorb a lot, but would risk missing a lot of the potential to use this to promote more efficient giving.
Philanthropy by Americans alone is about $300 billion per year. The guesstimated annual cashflow here is less than one-thousandth of that.
There is an obvious solution to this: fund multiple charities.