## LESSWRONG

Tetraspace Grouping


# Wiki Contributions

Is there a beeminder without the punishment?

Beemium (the subscription tier that allows pledgeless goals) is $40/mo currently, increased in January 2021 from $32/mo and in 2014 from the original $25/mo.

Some phrases in The Map that... Confuse me- help please, to make my review of it better!

The essay What Motivated Rescuers During the Holocaust is on Lesswrong under the title Research: Rescuers during the Holocaust - it was renamed because all of the essay titles in Curiosity are questions, which I just noticed now and is cute. I found it via the URL lesswrong.com/2018/rescue, which is listed in the back of the book.

The bystander effect is an explanation of the whole story:

• Because of the bystander effect, most people weren't rescuers during the Holocaust, even though that was obviously the morally correct thing to do; they were in a large group of people who could have intervened but didn't.
• The standard way to break the bystander effect is by pointing out a single individual in the crowd to intervene, which is effectively what happened to the people who became rescuers by circumstances that forced them into action.
Is there a "coherent decisions imply consistent utilities"-style argument for non-lexicographic preferences?

Why would you wait until ? It seems like at any time  the expected payoff will be , which is strictly decreasing with .

2 innovative life extension approaches using cryonics technology

One big advantage of getting a hemispherectomy for life extension is that, if you don't tell the Metaculus community before you do it, you can predict much higher than the community median of 16% - I would have 71 Metaculus points to gain from this, for example, much greater than the 21 in expectation I would get if the community median was otherwise accurate.
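The point-scoring intuition above can be sketched with a simple baseline-relative log scoring rule. This is not Metaculus's actual (more involved) scoring formula, and the numbers `0.95`, `0.16`, and `0.99` are illustrative assumptions, but it shows why privately knowing the outcome makes a bold prediction worth more than deferring to the community median:

```python
import math

def log_score(p, outcome):
    """Log score relative to a 50% baseline: positive if you beat 0.5."""
    return math.log2(p if outcome else 1 - p) - math.log2(0.5)

def expected_score(p, q):
    """Expected score of predicting p when your true belief is q."""
    return q * log_score(p, True) + (1 - q) * log_score(p, False)

# If you privately know the event is near-certain (q = 0.99), predicting
# 0.95 has much higher expected score than the 16% community median:
bold = expected_score(0.95, 0.99)   # well above baseline
defer = expected_score(0.16, 0.99)  # well below baseline
```

Because the log score is a proper scoring rule, the expected score is maximized by reporting your true belief, which is exactly why withholding your plans from the community is profitable here.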

Rafael Harth's Shortform

This looks like the hyperreal numbers, with your  equal to their .

0 And 1 Are Not Probabilities

The real number 0.20 isn't a probability, it's just the same odds but written in a different way to make it possible to multiply (specifically you want some odds product * such that A:B * C:D = AC:BD). You are right about how you would convert the odds into a probability at the end.
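The odds product described above can be sketched directly, with odds as pairs and the final conversion to a probability done only at the end (a minimal illustration; the helper names are mine):

```python
from fractions import Fraction

def odds_product(ab, cd):
    """Multiply two odds ratios: A:B * C:D = AC:BD."""
    (a, b), (c, d) = ab, cd
    return (a * c, b * d)

def odds_to_probability(ab):
    """Convert odds A:B to the probability A / (A + B)."""
    a, b = ab
    return Fraction(a, a + b)

combined = odds_product((1, 4), (2, 3))  # 1:4 * 2:3 = 2:12
print(combined)                       # (2, 12)
print(odds_to_probability(combined))  # 1/7
```

Note that the intermediate `(2, 12)` is the "0.20-style" object: a ratio you can keep multiplying, which only becomes a probability once you normalize it.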

Just before she is able to open the envelope, a freak magical-electrical accident sends a shower of sparks down, setting it alight. Or some other thing necessitated by Time to ensure that the loop is consistent. Similar kinds of problems to what would happen if Harry were more committed to not copying "DO NOT MESS WITH TIME".

Coherent decisions imply consistent utilities

I have used this post quite a few times as a citation when I want to motivate the use of expected utility theory as an ideal for making decisions, because it explains how it's not just an elegant decision-making procedure that comes from nowhere but a mathematical inevitability of the requirements to not leave money on the table or to accept guaranteed losses. I find the concept of coherence theorems a better foundation than the normal way this is explained, by pointing at the von Neumann-Morgenstern axioms and saying "they look true".

The number of observers in a universe is solely a function of the physics of that universe, so the claim that a theory that implies 2Y observers is a third as likely as a theory that implies Y observers (even before the anthropic update) is just a claim that the two theories don't have an equal posterior probability of being true.
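One way to make that arithmetic concrete, assuming SIA-style anthropic weighting (the `sia_posterior` helper is a hypothetical illustration, not anything from the post): with equal priors, a theory implying 2Y observers ends up at 2/3 posterior and the Y-observer theory at 1/3, so calling the latter "a third as likely" is equivalent to a claim about the priors, not a separate anthropic fact.

```python
from fractions import Fraction

def sia_posterior(priors, observer_counts):
    """SIA-style update: weight each theory's prior by its observer count,
    then renormalize."""
    weights = [p * n for p, n in zip(priors, observer_counts)]
    total = sum(weights)
    return [w / total for w in weights]

# Two theories with equal priors; theory B implies twice as many observers
# (Y normalized to 1, so the counts are 1 and 2).
priors = [Fraction(1, 2), Fraction(1, 2)]
counts = [1, 2]
posterior = sia_posterior(priors, counts)  # 1/3 for A, 2/3 for B
```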

Humans Who Are Not Concentrating Are Not General Intelligences

This post uses the example of GPT-2 to highlight something that's very important generally - that if you're not concentrating, you can't distinguish GPT-2-generated text that is known to be gibberish from non-gibberish.

And hence it teaches the important lesson, one that's hard to learn on your own while not concentrating, that you can't really get away with not concentrating.