What if AGI is near?

Consider that "if AGI is very near" probably means that it's already happened (or, equivalently, that we are past the point of no return) on Copernican grounds, since the odds of living in a very special moment where the timelines are short but it's not too late yet are very low. Not seeing an obvious AGI around likely means that either it's not very near, or that the take-off is slow, not fast. 

Ironically, it's not Roko's basilisk that is an infohazard, it's the "AGI go foom!" idea that is.

A New Center? [Politics] [Wishful Thinking]

It's tempting to try to reinvent the wheel, but this dynamic is by no means new. Viable political alternatives have popped up in the middle in various places around the world, though not as many as those emerging from the right or from the left. One can argue that the US is unique in many ways, and it surely is, but the degree of uniqueness only becomes clear once you identify the common trends.

From what I understand, a centrist party usually emerges when one of the mainstream parties is not radical enough for a large chunk of its base, splitting that party in two: one part more extreme, one more centrist. It happened in Canada, Germany, Israel, Italy and many other places. The odds of creating a centrist political force from scratch are not good, and doing so requires much shallower equilibria than those in most de facto two-party systems. For example, the Israel Resilience Party was created in 2018 against a multi-party background and many years of political gridlock.

Identifiability Problem for Superrational Decision Theories

"Despite this, superrational reasoning gives us different results."

What is the "superrational" reasoning that gives different results?

Learning Russian Roulette

If you appear to be an outlier, it's worth investigating precisely why, instead of stopping at one observation and trying to make sense of it using what is essentially an outside view. There are generally higher-probability models available in the inside view, such as "I have hallucinated other people dying/playing" or "I always end up with an empty barrel".

Why 1-boxing doesn't imply backwards causation

Hmm, it sort of makes sense, but possible_world_augmented() returns not just a set of worlds, but a set of pairs, (world, probability). For example, for the transparent Newcomb's problem, possible_world_augmented() returns {(<1-box, million>, 1), (<2-box, thousand>, 0)}. That's enough to calculate the EV and to conclude which "decision" (i.e. possible_world_augmented() given decision X) results in max EV. Come to think of it, if you tabulate this, you end up with what I talked about in that post.
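A minimal sketch of that tabulation, assuming the payoffs above (the dictionary layout and names are my own illustration, not the post's actual interface): each "decision" maps to its (payoff, probability) pair, and you just pick the one maximizing EV.

```python
# Illustrative tabulation for transparent Newcomb's (assumed payoffs):
# each decision's augmented world is a (payoff, probability) pair.
augmented = {
    "1-box": (1_000_000, 1.0),  # <1-box, million> with probability 1
    "2-box": (1_000, 0.0),      # <2-box, thousand> with probability 0
}

def expected_value(decision):
    payoff, prob = augmented[decision]
    return payoff * prob

# The max-EV "decision" falls out of the table without any talk of causation.
best = max(augmented, key=expected_value)
print(best, expected_value(best))  # 1-box 1000000.0
```

The probability-0 entry is doing the work here: two-boxing's world simply never obtains for an agent the predictor models as a one-boxer, so its EV contribution is nil.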

Why 1-boxing doesn't imply backwards causation

I'm confused... What you call the "Pure Reality" view seems to work just fine, no? (I think you had a different name for it, pure counterfactuals or something.) What do you need counterfactuals/Augmented Reality for? Presumably for making decisions thanks to "having a choice" in this framework, right? In the pure-reality framework, for the "student and the test" example, one would dispassionately calculate what kind of student algorithm passes the test, without talking about making a decision to study or not to study. Same with Newcomb's, of course: one just looks at what kind of agents end up with a given payoff. So why pick the AR view over the PR view; what's the benefit?

Preferences and biases, the information argument

"look through this collection of psychology research and take it as roughly true"

Well, you are an intelligence that "is well-grounded and understands what human concepts mean"; do you think that the above approach would lead you to distill the right assumptions?

AstraZeneca vaccine shows no protection against Covid-19 variant from Africa

A total of 39 cases of the SA variant seems too underpowered to draw any conclusions.
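A rough back-of-the-envelope illustration of why 39 cases are underpowered. The even 19/20 case split and 1:1 randomization are assumptions for the sake of the example, not figures from the trial; the point is the width of the interval, not the point estimate.

```python
import math

# Hypothetical split: 19 of 39 cases in the vaccine arm, 1:1 randomization.
cases = 39
vaccine_cases = 19
p = vaccine_cases / cases                      # share of cases among vaccinated
se = math.sqrt(p * (1 - p) / cases)            # normal-approximation std. error
lo, hi = p - 1.96 * se, p + 1.96 * se          # 95% CI on that share

def efficacy(q):
    # Efficacy = 1 - relative risk; with equal arms, RR = q / (1 - q).
    return 1 - q / (1 - q)

print(f"point ≈ {efficacy(p):.0%}, 95% CI ≈ ({efficacy(hi):.0%}, {efficacy(lo):.0%})")
# point ≈ 5%, 95% CI ≈ (-81%, 51%)
```

With an interval that spans everything from "vaccine nearly doubles your risk" to "vaccine halves it", no conclusion about efficacy against the variant survives.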

The best things are often free or cheap

Note that everything on your list has zero or near zero replication cost. A lot of essentials, tangible and intangible, are not like that. Food, companionship, living accommodations, etc. I don't know how far one can get on easy-clone stuff.
