scarcegreengrass

Comments

We should probably buy ADA?

I'm not sure either. Might only be needed for the operating fees.

We should probably buy ADA?

Agreed. We might refer to them as 'leaderless orgs' or 'staffless networks'.

Why do stocks go up?

Does this reduction come from seniority? Is the idea that older organizations are generally more reliable?

Covid 12/24: We’re F***ed, It’s Over

Are you saying there would be a causal link from the poor person's vaccine:other ratio to the rich person's purchasing decision? How does that work?

Nuclear war is unlikely to cause human extinction

Can you clarify why the volcano triggering scheme in 3 would not be effective? It's not obvious. The scheme sounds rather lethal.

Open & Welcome Thread – October 2020

Welcome! Discovering the rationalsphere is very exciting, isn't it? I admire your passion for self-improvement.

I don't know if I have advice that isn't obvious. Read whoever has unfamiliar ideas. I learned a lot from reading Robin Hanson and Paul Christiano.

As needed, journal or otherwise speak to yourself.

Be wary of the false impression that your efforts have been ruined. Sometimes I encounter a disrespectful person or a shocking philosophical argument that makes me feel like giving up on a wide swathe of my life. I doubt giving up is appropriate in these disheartening circumstances.

Seek to develop friendships with people you can have great conversations with.

Speak to rationalists as you would speak to yourself, and speak tactfully to everyone else.

That's the advice I would give to a version of myself in your situation. Have fun!

The Solomonoff Prior is Malign

Okay, randomly deciding which possible simulator to exploit makes sense.

As for choosing exactly what to set the output cells of the simulation to... I'm still wrapping my head around it. Is recursive simulation the only way to exploit these simulations from within?

The Solomonoff Prior is Malign

Great post. I encountered many new ideas here.

One point confuses me. Maybe I'm missing something. Once the consequentialists in a simulation are contemplating the possibility of simulation, how would they arrive at any useful strategy? They can manipulate the locations that are likely to be the output/measurement of the simulation, but manipulate them to what values? They know basically nothing about how the output will be interpreted, what question the simulator is asking, or what universe is doing the simulation. Since their universe is very simple, presumably many simulators are running identical copies of them, with a different manipulation strategy being appropriate for each. As I understand it, this sounds less malign and more blindly mischievous.

TL;DR: How do the consequentialists guess which direction to bias the output towards?
