Martin Randall

Comments

Let us suppose that there is pirate treasure on an island. I have a map to the treasure, which I inherited from my mother. You have a different map to the same treasure that you inherited from your mother. Our mothers are typical flawed humans. Our maps are typical flawed pirate maps.

Because I have the map I have, I believe that the treasure is over here. The non-epistemic generator of that belief is who my mother was. If I had a different mother I would have a different map and a different belief. Your map says the treasure is over there.

To find the treasure, I follow my map. An outsider notices that I am following a map that I know I am following only for non-epistemic reasons, and concludes that I have Moorean confusion. Perhaps so. But I cannot follow your map, because I don't have it. So it's best to follow my map.

If we shared our maps perhaps we could find the treasure more quickly and split it between us. But maybe it is hard to share the map. Maybe I don't trust you not to defect. Maybe it is a zero-sum treasure. In pirate stories it is rarely so simple.

Similarly, Alice is a committed Christian and knows this is because she was raised to be Christian. If she had been raised Muslim she would be a committed Muslim, and she knows this too. But her Christian "map" is really good, while her Muslim "map" is a sketch from an hour-long world religions class taught by a Confucian. It's rational for her to continue using her Christian map even if the evidence indicates that Islam has a higher probability of being true.

I anticipate the reply that Alice can by all means follow her Christian map as long as it is the most useful map she has, but that she should not mistake the map for the territory. This is an excellent thought. It is also thousands of years old and already part of Alice's map.

Many of my beliefs have the non-epistemic generator "I was bored one afternoon (and started reading LessWrong)". It is very easy to recognize the non-epistemic generator of my belief and still hold the belief. My confusion is how anyone could fail to recognize the same thing.

Maybe specify that they resolve YES if they achieve something important for alignment, as opposed to something of general importance in the field of science?

Thanks for the feedback! If I understand you correctly, these markets would be more helpful, is that right?


For example, randomly sample N voters from the electorate, and run ML (maximal lotteries) taking into account only those voters. This fails Condorcet, like RD (random dictator), but it gives the behavior I specified. Politically infeasible, of course. And I'm not really sure I buy the argument-from-war for RD.
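
For concreteness, here is a minimal sketch of that proposal, assuming ranked ballots and reading ML as the maximal lottery computed from the sampled voters' pairwise margin matrix (solved as a zero-sum game with scipy's LP solver). The function names and the formulation are illustrative choices, not anything specified above.

```python
# A minimal sketch of "sample N voters, then run ML on the sample", assuming
# each ballot is a complete ranking of candidate indices, best first.
import random
import numpy as np
from scipy.optimize import linprog

def pairwise_margins(ballots, num_candidates):
    """M[i][j] = (# sampled voters ranking i above j) - (# ranking j above i)."""
    M = np.zeros((num_candidates, num_candidates))
    for ballot in ballots:
        rank = {c: r for r, c in enumerate(ballot)}
        for i in range(num_candidates):
            for j in range(num_candidates):
                if i != j and rank[i] < rank[j]:
                    M[i][j] += 1
                    M[j][i] -= 1
    return M

def maximal_lottery(M):
    """Optimal mixed strategy of the zero-sum game with payoff matrix M."""
    n = M.shape[0]
    # Variables are (p_0, ..., p_{n-1}, v); maximize v, i.e. minimize -v.
    c = np.concatenate([np.zeros(n), [-1.0]])
    # For every opposing candidate j: v - sum_i p_i * M[i][j] <= 0.
    A_ub = np.hstack([-M.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    p = np.clip(res.x[:n], 0, None)  # guard against tiny negative round-off
    return p / p.sum()

def sampled_maximal_lottery(ballots, num_candidates, sample_size, rng=random):
    """Sample voters, compute the maximal lottery on the sample, draw a winner."""
    sample = rng.sample(ballots, k=min(sample_size, len(ballots)))
    p = maximal_lottery(pairwise_margins(sample, num_candidates))
    return rng.choices(range(num_candidates), weights=p, k=1)[0]
```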

The last section is of course a joke. It would be much better if we eliminated the bad party. In a one-candidate election, all voting methods return the same answer.

When choosing between random dictator and maximal lotteries, are there good options for compromising between these extremes? E.g., suppose that I want a 75% majority to win 95% of the time.

The first thing that comes to mind is a % chance of each, but that doesn't seem especially elegant.
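
As a quick sanity check of the "% chance of each" idea for that example, assume two candidates: random dictator elects the 75% majority's choice with probability 0.75, and the maximal lottery elects it with probability 1 (it is the Condorcet winner). The mixing weight then falls out of one equation; a sketch under those assumptions:

```python
# Mixture of maximal lottery (ML) and random dictator (RD) with two candidates.
# Assumed: RD elects the majority choice with probability 0.75, ML with probability 1.
majority_share = 0.75
target_win_prob = 0.95
# Solve w * 1.0 + (1 - w) * majority_share = target_win_prob for the ML weight w.
ml_weight = (target_win_prob - majority_share) / (1.0 - majority_share)
print(ml_weight)  # 0.8 -> run ML 80% of the time and RD 20% of the time
```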

The 1% number was intended to be illustrative, not definitive. I'm not a nuclear risk expert. The QALY figure may also vary. A $40/yr cost could be a $400 investment that depreciates over ten years. In that case I would value it based on the projected risk over ten years. I'm not seeing additional value in "nuclear dignity points" above these admittedly hard-to-calculate figures.
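
To illustrate the ten-year version of that valuation, here is a sketch that reuses the ~$4,000 value of two extra weeks from the comment below; the flat 1% risk path is a made-up assumption, not an estimate.

```python
# Valuing a $400 purchase that depreciates over ten years against the projected
# risk over those years. All figures here are illustrative assumptions.
value_of_two_extra_weeks = 4_000   # dollars, ~4% of a QALY (see the comment below)
yearly_risk = [0.01] * 10          # hypothetical: 1% risk of nuclear apocalypse each year
expected_value = sum(p * value_of_two_extra_weeks for p in yearly_risk)
print(expected_value)              # 400.0 -> roughly breaks even with the $400 purchase
```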

To preserve x-risk research during civilizational collapse, I think attempting to preserve information and insights would work better than attempting to preserve individual researchers, especially since it could be done in parallel with preserving other information that aids recovery.

If I survive for an extra two weeks, that's about 4% of a QALY, which is about $4,000. So if there's a 1% yearly risk of nuclear apocalypse, it's worth spending $40/year for that chance of extra time. I don't think there's any additional return from my death being dignified. If I try and die I'm still dead.
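
Spelling out that arithmetic, assuming a QALY is valued at roughly $100,000 (the figure implied by $4,000 for 4% of a QALY):

```python
# Back-of-the-envelope value of preparations that buy two extra weeks of survival.
# The $100,000-per-QALY figure is an assumption consistent with the numbers above.
qaly_value = 100_000                      # dollars per quality-adjusted life year
fraction_of_year = 2 / 52                 # two extra weeks is about 4% of a year
value_of_extra_time = fraction_of_year * qaly_value   # ~$3,850, call it ~$4,000
yearly_risk = 0.01                        # illustrative 1% yearly risk of nuclear apocalypse
print(yearly_risk * value_of_extra_time)  # ~$38/year, i.e. roughly $40/year
```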

In the event of a nuclear apocalypse, humanity appears to be best off if most people die quickly, because this improves the ratio of food to mouths and reduces the risk of long-term problems where, for example, we eat all the fish and go extinct.

If you have a model where there are collective benefits from one more family surviving two more weeks, I'm interested in hearing it.

I think A and B are both wrong in the quoted text because they're forgetting that IGF (inclusive genetic fitness) doesn't apply at the species level. A species can evolve to extinction while optimizing IGF.

The general lack of concern about existential risk is very compatible with the IGF-optimization hypothesis. If the godshatter hypothesis or the shard theory hypothesis is true, then we also have to conclude that people are short-sighted idiots. Which isn't a big leap.

We can introspect without sharing the results of our introspection, but then the title for this post should not be "Humans aren't fitness maximizers". That's a general claim about humans that implies that we are sharing the results of our introspections to come to a consensus. The IGF-optimization hypothesis predicts that we will all share that we are not fitness maximizers and that will be our consensus.

In any case, people are not perfect liars, so the IGF-optimization hypothesis also predicts that most people will answer NO in the privacy of their own minds. This isn't as strong a prediction; it depends on your model of the effectiveness of lying versus the costs of self-deception. It also predicts that anyone who models themselves as having a high likelihood of getting a socially unacceptable result from introspection will choose not to do the introspection.

This isn't specific to IGF-optimization. Saying and thinking socially acceptable things is instrumentally convergent, so any reality-adjacent theory of human values predicts that people will mostly say and think socially acceptable things, and indeed that is mostly what we observe.
