bliumchik · 12y

I agree. I'd be more worried about civilisation collapsing in the interim between being frozen and the point at which people work out how to revive me.

Shortly after posting this I realised that the value to me of $1000 is only relevant if you assume Omega's odds of predicting your actions correctly are around 50/50. Need to think about this some more.

So, I'm sure this isn't an original thought, but there are a lot of comments, and my utility function is rolling its eyes at the thought of going through them all to check whether this comment is redundant, compared with just writing it, given that I want to sort my thoughts out verbally anyway.

I think the standard form of the question should be changed to the one with the asteroid. Total destruction is total destruction, but money is only worth a) what you can buy with it and b) the effort it takes to earn it.

I can earn $1000 in a month. Some people could earn it in a week. What is the difference between $1m and $1m + $1000? Yes, it's technically a higher number, but in terms of my life it is not a meaningful difference. Of course I'd rather definitely have $1m than risk having nothing for the possibility of having $1m + $1000.

The causal decision theory versions of this problem don't look ridiculous because they take the safe option; they look ridiculous because the utility of two-boxing is insignificant in comparison with the potential utility of one-boxing. That is, a one-boxer doesn't lose much if they're wrong: IF box 2 already contained nothing when they chose it, they only missed their chance at $1000, whereas a two-boxer, whose choice the predictor will almost certainly have foreseen, misses their chance at the $1m that would otherwise have been in box 2.
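To make the 50/50 point above concrete, here is a quick back-of-the-envelope sketch, assuming the standard payoffs ($1m in box 2 if one-boxing was predicted, $1000 always in box 1) and a predictor that is right with probability p; the break-even accuracy it prints is just this toy calculation:

```python
# Expected-value comparison for Newcomb's problem, assuming the standard
# payoffs and a predictor that is correct with probability p.
BIG = 1_000_000   # contents of box 2 if one-boxing was predicted
SMALL = 1_000     # contents of box 1, always available to a two-boxer

def ev_one_box(p):
    # A one-boxer gets BIG only if the predictor correctly foresaw one-boxing.
    return p * BIG

def ev_two_box(p):
    # A two-boxer always gets SMALL, plus BIG only if the predictor was wrong.
    return (1 - p) * BIG + SMALL

for p in (0.5, 0.5005, 0.51, 0.9, 0.99):
    print(f"p = {p:6.4f}: one-box = {ev_one_box(p):>12,.0f}  "
          f"two-box = {ev_two_box(p):>12,.0f}")

# Break-even: p*BIG = (1-p)*BIG + SMALL  =>  p = (BIG + SMALL) / (2*BIG)
print("break-even accuracy:", (BIG + SMALL) / (2 * BIG))  # 0.5005
```

The $1000 only changes the answer when the predictor's accuracy is within a whisker of 50%, which is what I meant above about the value of $1000 only being relevant near 50/50.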

Obviously an advanced decision theory needs a way to rank the potential risks - if you postulate it as the asteroid, the risk is much more concrete.

[This comment is no longer endorsed by its author]

Here’s a fairly simple one for thinking concretely about the abstraction lattice. Mine an encyclopedia article for topical words (i.e. omit “the”, “and”, and their ilk – also omit duplicates; both should be fairly easy to program for). Place each word on an index card and have the students arrange them on a large flat surface in order of abstraction – I’d have some large amusing goalposts at either end, but this is not strictly necessary. It should probably be acceptable for some words to be judged equally concrete.
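For the mining step, here is a minimal sketch in Python, assuming a plain-text copy of the article saved as article.txt (a hypothetical filename) and a tiny illustrative stopword list – a real run would want a fuller one:

```python
import re

# Pull unique "topical" words out of a plain-text encyclopedia article,
# dropping common function words and duplicates. The stopword list here is a
# small illustrative stand-in, not a complete one.
STOPWORDS = {
    "the", "and", "a", "an", "of", "to", "in", "is", "are", "was", "were",
    "it", "its", "as", "for", "on", "with", "by", "that", "this", "or",
}

def topical_words(article_text):
    words = re.findall(r"[a-z]+", article_text.lower())
    seen = set()
    result = []
    for word in words:
        if word in STOPWORDS or word in seen:
            continue
        seen.add(word)
        result.append(word)
    return result  # one entry per index card, in order of first appearance

# Example usage:
# print(topical_words(open("article.txt").read()))
```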

Because this is a collaborative exercise, students will have to talk about what makes a word more or less abstract in order to justify the hypothesis that one word is more abstract than another. If the concepts in this article have just been presented to them, particularly the section about superconcepts, I hope such discussions will help students become more adept at making those judgements. Using an encyclopedia article is only one of many ways to keep the words vaguely related to one another and hence more comparable – my instinct that this will help may not be accurate, in which case a random selection of X words from the dictionary would suffice. I expect that’d come out in testing.

(An alternate version splits the students into Team Abstract and Team Concrete, with the former responsible for spotting words that are more abstract than their current position suggests and remedying this, and the latter with the inverse task. Then swap for a different lexicon. I’m not sure what effect that would have beyond (or indeed specific to) priming bias, and I’d be interested to see whether the alternate version consistently produces different results to the original version, and if so in what way, but that’s an entirely different rationality lesson, I think.)