Comments

Is there math for interplanetary travel vs existential risk?

While Earth would be easier to terraform, given its available resources and global conditions already closer to something inhabitable, it would not be safer: mistakes in the terraforming process are far less catastrophic when made on a backup, uninhabited planet.

Toying with complex, poorly understood processes, at a time when we wouldn't even have our current resources and manpower, on a ravaged Earth whose environment might be one wrong step away from becoming much worse, could destroy most of what remains of humanity, the economy, and valuable resources, making it impossible for us to ever recover.

(I am, however, assuming we are talking about global terraforming of the whole planet, not minute changes to local spots.)

Boltzmann Brains and Anthropic Reference Classes (Updated)

But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain't just in the head. Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ. The contents of these states depend not just on the brain state but also on the brain's environment and causal history.

You're assuming that there exists something like our universe, with at least one full human being like you whose beliefs are causally entwined with Obama's existence. What if there is no such universe, and there are only Boltzmann brains or something equivalent?

In a Boltzmann brain scenario, how can you even assume that the universe in which they appear is ruled by the same laws of physics as those we seemingly observe? After all, the observations and beliefs of a Boltzmann brain aren't necessarily causally linked to the universe that generated it.

You could well be a single "brain" lost in a universe whose laws make it impossible for something like our own Hubble volume to exist, where all your beliefs about physics, including your beliefs about Boltzmann brains, are just part of the unique, particular beliefs of that one brain.

Boltzmann Brains and Anthropic Reference Classes (Updated)

Wait, would an equivalent way to put it be evidential as in "as viewed by an outside observer" as opposed to "from the inside" (the perspective of a Boltzmann brain)?

Raising safety-consciousness among AGI researchers

Most of this seems unrelated to what the OP says. Are you sure you posted this in the right place?

One possible issue with radically increased lifespan

I would second that. On the other hand, how would you decide what weight to give to someone's vote? Newcomers vs. older members? Low vs. high karma? I'm not sure a function of both these variables would be sufficient to determine meaningful voting weights (that is, I'm not sure such a simple mechanism could intelligently steer more karma towards good-quality posts, even when they are hidden, obscure, or too subtle).
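To make the worry concrete, here is a minimal sketch of what "a function of both these variables" might look like. The function name, the logarithmic shape, and the 30-day tenure unit are all my own illustrative assumptions, not an actual mechanism of the site:

```python
import math

def vote_weight(account_age_days, karma):
    """Hypothetical vote weight from tenure and karma (illustration only).

    Uses log1p so both variables give diminishing returns; a brand-new
    account with zero karma gets the baseline weight of 1.
    """
    tenure = math.log1p(account_age_days / 30)    # months, roughly
    reputation = math.log1p(max(karma, 0))        # ignore negative karma
    return 1 + tenure * reputation
```

Any such closed-form rule rewards longevity and accumulated karma, but it has no way of recognizing that a particular hidden or subtle post deserves extra attention, which is exactly the doubt raised above.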

One possible issue with radically increased lifespan

Would it be difficult (and useful) to change the voting system inherited from reddit and implement one where casting a vote would rate something on a scale from minus ten to ten, and then average all votes together?
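The proposed scheme itself is simple to state; a minimal sketch (function and variable names are assumed for illustration):

```python
def score(ratings):
    """Mean of per-voter ratings on a -10..10 scale; None if nobody voted."""
    if not ratings:
        return None
    clamped = [max(-10, min(10, r)) for r in ratings]  # enforce the scale
    return sum(clamped) / len(clamped)
```

Unlike reddit-style net upvotes, the average does not grow with the number of voters, so a post seen by ten people can outscore one seen by a thousand.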

Short Primers on Crucial Topics

How well do they, though? I've seen a few academics around me with enough command of English to get by, but they might still miss some of the subtle points. They just can't reason as well in English as they do in their mother tongue.

When None Dare Urge Restraint, pt. 2

labeling a death as "heroic" can be a similar sort of rationalization.

Homer, about 2,800 years ago:

It is entirely seemly for a young man killed in battle to lie mangled by the bronze spear. In his death all things appear fair.

Is a Purely Rational World a Technologically Advanced World?

Strategies would be different for an individual as opposed to a society. As a first approximation, both would only be as cautious as they need to be in order to preserve themselves. That's where the difference between local and global disasters comes into play.

A disaster that can kill an individual won't usually kill a society. The road to progress for society has been paved by countless individual failures, some of which took a heavy toll, but in the end they never destroyed everything. It may be a gamble for an individual to take a risk that could destroy them, and risk-averse people will avoid it. But within society as a whole, non-risk-averse individuals will sometimes strike the motherlode, especially as the risk to society (the loss of one or a few individuals out of the adventurous group at a time) is negligible. Such individuals could therefore conceivably be an asset: they'd explore avenues past certain local optima, for instance. This would also benefit those few individuals who'd be incredibly successful from time to time, even if most people like them are destined to remain in the shadows.

Of course, nowadays even one person could fail hard enough to take everything down with them. That may be why you get the impression that rational people are perhaps too cautious and could hamper progress. The rules of the game have changed; you can't just be careless anymore.
