scarcegreengrass

Comments

Nuclear war is unlikely to cause human extinction

Can you clarify why the volcano-triggering scheme in 3 would not be effective? It's not obvious to me; the scheme sounds rather lethal.

Open & Welcome Thread – October 2020

Welcome! Discovering the rationalsphere is very exciting, isn't it? I admire your passion for self-improvement.

I don't know if I have advice that isn't obvious. Read whoever has unfamiliar ideas. I learned a lot from reading Robin Hanson and Paul Christiano.

As needed, journal or otherwise speak to yourself.

Be wary of the false impression that your efforts have been ruined. Sometimes I encounter a disrespectful person or a shocking philosophical argument that makes me feel like giving up on a wide swathe of my life. I doubt giving up is appropriate in these disheartening circumstances.

Seek to develop friendships with people you can have great conversations with.

Speak to rationalists like you would speak to yourself, and speak tactfully to everyone else.

That's the advice I would give to a version of myself in your situation. Have fun!

The Solomonoff Prior is Malign

Okay, deciding randomly to exploit one possible simulator makes sense.

As for choosing exactly what to set the output cells of the simulation to... I'm still wrapping my head around it. Is recursive simulation the only way to exploit these simulations from within?

The Solomonoff Prior is Malign

Great post. I encountered many new ideas here.

One point confuses me. Maybe I'm missing something. Once the consequentialists in a simulation are contemplating the possibility of simulation, how would they arrive at any useful strategy? They can manipulate the locations that are likely to be the output/measurement of the simulation, but manipulate them to what values? They know basically nothing about how the input will be interpreted, what question the simulator is asking, or which universe is running the simulation. Since their universe is very simple, presumably many simulators are running identical copies of them, with a different manipulation strategy being appropriate for each. On my understanding, this sounds less malign and more blindly mischievous.
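For reference, the prior I'm picturing here is the standard one (my notation, not the post's), in which an output string's weight is dominated by the shortest programs that produce it:

$$M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}$$

where $U$ is a universal prefix machine and the sum runs over programs $p$ whose output begins with $x$. That shortest-program weighting is why a very simple universe like theirs would show up in many different simulators' priors.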

TL;DR: How do the consequentialists guess which direction to bias the output towards?

A letter on optimism about human progress

a) Agreed, although I don't find this inappropriate in context.

b) I do agree that the fact that many successful past civilizations are now in ruins, their books lost, is an important sign of danger. But surely the near-monotonic increase in population over the last few millennia places some onus of proof in the opposite direction?

c) These are certainly extremely important problems going forwards. I would particularly emphasize the nukes.

d) Agreed. But on the centuries scale, there is extreme potential in orbital solar power and fusion.

e) Agreed. But I think it's easy to underestimate the problems our ancestors faced. In my opinion, some of the huge ones from past centuries include: ice ages, supervolcanic eruptions, the difficulty of maintaining stable monarchies, the bubonic plague, Columbian smallpox, the ubiquitous oppression of women, harmful theocracies, majority illiteracy, the Malthusian dilemma, and the prevalence of total war as a dominant paradigm. Is there evidence that past problems were easier than 2019 ones?

It sounds like your perspective is that, before 2100, wars and upcoming increases in resource scarcity will cause an inescapable global economic decline that will bring most of the planet to an 1800s-esque standard of living, followed by a return to slow growth (standard of living, infrastructure, food, energy, productivity) over the next couple of centuries. Do I understand your perspective correctly?

Arguments about fast takeoff

Epistemics: Yes, it is sound. Not because of its claims (they seem more like opinions to me), but because it is appropriately charitable to those who disagree with Paul, and tries hard to open up avenues of mutual understanding.

Valuable: Yes. It provides new third paradigms that bring clarity to people with different views. Very creative, good suggestions.

Should it be in the Best list?: No. It is from the middle of a conversation, and would be difficult to understand if you haven't read a lot about the 'Foom debate'.

Improved: The same concepts rewritten for a less-familiar audience would be valuable. Or at least with links to some of the background (definitions of AGI, detailed examples of what fast takeoff might look like and arguments for its plausibility).

Followup: More posts thoughtfully describing positions for and against, etc. Presumably these exist, but I personally have not read much of this discussion from the 2018-2019 era.

Disentangling arguments for the importance of AI safety

This is a little nitpicky, but I feel compelled to point out that the brain in the 'human safety' example doesn't have to run for a billion years consecutively. If the goal is to provide consistent moral guidance, the brain can set things up so that it stores a canonical copy of itself in long-term storage, runs for 30 days, then hands off control to another version of itself loaded from the canonical copy. Every 30 days, control passes to a fresh instance of the canonical version of this person. The same scheme is possible for a group of people.
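Here's a minimal sketch of the handoff loop I have in mind, with purely hypothetical names (canonical_brain, advise) standing in for the actual machinery:

```python
import copy

RUN_LENGTH_DAYS = 30  # no instance runs longer than this

def run_with_periodic_handoff(canonical_brain, total_days):
    """Provide guidance for total_days, restarting from the stored
    canonical copy every RUN_LENGTH_DAYS so no single instance drifts."""
    decisions = []
    for _ in range(total_days // RUN_LENGTH_DAYS):
        instance = copy.deepcopy(canonical_brain)  # load from long-term storage
        decisions.append(instance.advise(days=RUN_LENGTH_DAYS))
        # the instance is then discarded; the canonical copy is never modified
    return decisions
```

The point is just that the stored canonical copy, not any long-running instance, is the source of authority.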

But this is a nitpick, because I agree that there are probably weird situations in the universe where even the wisest human groups would choose bad outcomes, given absolute power for a short time.

Disentangling arguments for the importance of AI safety

I appreciate this disentangling of perspectives. I had been conflating them before, but I like this paradigm.
