Tetraspace Grouping

Rafael Harth's Shortform

This looks like the hyperreal numbers, with your equal to their .

0 And 1 Are Not Probabilities

The real number 0.20 isn't a probability; it's just the same odds written in a different way to make them possible to multiply (specifically, you want some odds product `*` such that `A:B * C:D = AC:BD`). You are right about how you would convert the odds into a probability at the end.
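As a minimal sketch of that workflow (the odds and likelihood ratio below are made-up numbers for illustration):

```python
from fractions import Fraction

# The odds product described above: A:B * C:D = AC:BD.
def odds_product(ab, cd):
    (a, b), (c, d) = ab, cd
    return (a * c, b * d)

# Convert odds A:B into a probability only at the very end.
def odds_to_prob(ab):
    a, b = ab
    return Fraction(a, a + b)

prior = (1, 4)                           # 1:4 odds, i.e. probability 0.20
posterior = odds_product(prior, (3, 1))  # multiply by a 3:1 likelihood ratio
print(posterior)                # (3, 4)
print(odds_to_prob(posterior))  # 3/7
```

Note that 1:4 odds correspond to the probability 0.20 from the comment, but the intermediate multiplication never needs that conversion.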

Hermione Granger and Newcomb's Paradox

Just before she is able to open the envelope, a freak magical-electrical accident sends a shower of sparks down, setting it alight. Or some other thing necessitated by Time to ensure that the loop is consistent. Similar kinds of problems to what would happen if Harry were more committed to not copying "DO NOT MESS WITH TIME".

Coherent decisions imply consistent utilities

I have used this post quite a few times as a citation when I want to motivate the use of expected utility theory as an ideal for making decisions, because it explains how it's not just an elegant decision-making procedure that came from nowhere, but a mathematical inevitability of the requirements to not leave money on the table or to accept guaranteed losses. I find the concept of coherence theorems a better foundation than the usual way this is explained, which is pointing at the von Neumann-Morgenstern axioms and saying "they look true".

The number of observers in a universe is solely a function of the physics of that universe, so the claim that a theory that implies 2Y observers is a third as likely as a theory that implies Y observers (even before the anthropic update) is just a claim that the two theories don't have an equal prior probability of being true.

Humans Who Are Not Concentrating Are Not General Intelligences

This post uses the example of GPT-2 to highlight something that's very important in general: that if you're not concentrating, you can't distinguish GPT-2-generated text that is known to be gibberish from non-gibberish.

And hence it teaches the important lesson, one that might be hard to learn on your own if you're not concentrating, that you can't really get away with not concentrating.

This is self-sampling assumption-like reasoning: you are reasoning as if your experience is chosen from a random point in your life, and since most of an immortal's life is spent being old, while most of a mortal's life is spent being young, you should hence update away from being immortal.

You could apply self-indication assumption-like reasoning to this: reasoning as if your experience is chosen from a random point in *any* life. Then, since you are also conditioning on being young, and both immortals and mortals have one youth each, just being young doesn't give you any evidence for or against being immortal that you don't already have. (This is somewhat in line with your intuitions about civilisations: immortal people live longer, so they have more measure/prior probability, and this cancels out with the unlikelihood of being young given that you're immortal.)
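To make the difference concrete, here is a toy Bayesian calculation; all the numbers (youth length, lifespans, the 50/50 prior) are made up purely for illustration:

```python
# Toy comparison of SSA-like vs SIA-like updates on the observation "I am young".
# Assumed numbers: youth lasts 20 years, mortals live 80 years,
# immortals live 1,000,000 years, and the prior over hypotheses is 50/50.
YOUTH = 20
LIFESPAN = {"mortal": 80, "immortal": 1_000_000}
PRIOR = {"mortal": 0.5, "immortal": 0.5}

def normalize(d):
    total = sum(d.values())
    return {h: p / total for h, p in d.items()}

# SSA: your experience is a random moment within your own life, so
# P(young | hypothesis) = YOUTH / lifespan -- a strong update toward "mortal".
ssa = normalize({h: PRIOR[h] * YOUTH / LIFESPAN[h] for h in PRIOR})

# SIA: your experience is a random moment among all lives, so each hypothesis
# is first weighted by its total observer-moments (its lifespan), which
# exactly cancels the 1/lifespan factor from being young -- no update at all.
sia = normalize({h: PRIOR[h] * LIFESPAN[h] * (YOUTH / LIFESPAN[h]) for h in PRIOR})

print(ssa)  # heavily favours "mortal"
print(sia)  # still 50/50
```

The cancellation in the SIA line is the "this cancels out" from the comment made explicit.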

Yes Requires the Possibility of No

Yes requiring the possibility of no is something I've intuitively been aware of in social situations (anywhere where one could claim "you would have said that anyway").

This post does a good job of giving more examples and consequences of this (the examples cover a wide range of decisions), and of tying it to the mathematical law of conservation of expected evidence.
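That law can be checked numerically; the probabilities below are made up for illustration:

```python
# Conservation of expected evidence: if hearing "yes" would raise your
# credence in a hypothesis H, then hearing "no" must lower it, and the
# probability-weighted average of the two posteriors equals the prior.
p_h = 0.3          # prior on hypothesis H (made-up)
p_yes_h = 0.9      # P("yes" | H) (made-up)
p_yes_not_h = 0.4  # P("yes" | not H) (made-up)

p_yes = p_yes_h * p_h + p_yes_not_h * (1 - p_h)
post_yes = p_yes_h * p_h / p_yes             # credence after hearing "yes"
post_no = (1 - p_yes_h) * p_h / (1 - p_yes)  # credence after hearing "no"

print(post_yes, post_no)
# The expected posterior is exactly the prior:
print(p_yes * post_yes + (1 - p_yes) * post_no)  # ≈ 0.3, the prior
```

If "no" were impossible, then `p_yes = 1` and the posterior after "yes" would be forced to equal the prior: a guaranteed "yes" carries no evidence.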

The next AI winter will be due to energy costs

In The Age of Em, I was somewhat confused by the talk of reversible computing, since I assumed that the Landauer limit was some distant sci-fi thing, probably derived by doing all your computation on the event horizon of a black hole. That we're only three orders of magnitude away from it was surprising, and definitely gives me something to consider more. The future is reversible!

I did a back-of-the-envelope calculation about what a Landauer limit computer would look like to rejiggle my intuitions with respect to this, because "amazing sci-fi future" to "15 years at current rates of progress" is quite an update.

Then, the lower limit is k_B T ln 2 ≈ 3×10⁻²¹ J per bit erasure with T = 300 K, or [...] A current estimate for the number of transistor switches per FLOP is ~10⁶.

The peak of human computational ingenuity is of course the games console. When doing something very intensive, the PS5 consumes 200 watts and does 10 teraFLOPs (10¹³ FLOPs). At the Landauer limit, that power would do ~7×10²² bit erasures per second. The difference is about 10 orders of magnitude: 6 orders of magnitude from the FLOPs-to-bit-erasures conversion, 1 order of magnitude from inefficiency, and 3 orders of magnitude from physical limits, perhaps.
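Redoing that back-of-the-envelope calculation in code, assuming room temperature (T = 300 K) for the Landauer bound:

```python
import math

# Back-of-the-envelope Landauer-limit comparison for the PS5 figures above.
K_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 300.0                           # assumed room temperature, K
landauer_j = K_B * T * math.log(2)  # minimum energy per bit erasure

watts = 200.0  # PS5 power draw while doing something very intensive
flops = 1e13   # 10 teraFLOPs

erasures_per_s = watts / landauer_j
gap = math.log10(erasures_per_s / flops)
print(f"{erasures_per_s:.1e} bit erasures per second at the limit")
print(f"~{gap:.0f} orders of magnitude above the PS5's FLOPs")
```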

One big advantage of getting a hemispherectomy for life extension is that, if you don't tell the Metaculus community before you do it, you can predict much higher than the community median of 16% - I would have 71 Metaculus points to gain from this, for example, much greater than the 21 points in expectation I would get if the community median were otherwise accurate.