AprilSR's Comments

What long term good futures are possible. (Other than FAI)?

I have no comment on how plausible either of these scenarios is. I'm only observing that long-term good futures not featuring friendly AI require some other mechanism preventing UFAI from happening. Either SAI in general would have to be implausible to create at all, or some powerful actor, such as a government or a limited AI, would have to prevent it.

Hazard's Shortform Feed

I think people usually just use “the number is the root of this polynomial” in and of itself to describe them, which indeed covers more numbers than radicals can express. There are probably more roundabout ways to do it, though.
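A quick sketch of the point (my own example, not from the comment): the real root of x⁵ − x − 1 = 0 is a standard example of an algebraic number that provably cannot be written in radicals, yet “the root of this polynomial” pins it down exactly, and we can approximate it numerically to any precision, e.g. by bisection.

```python
def f(x):
    # The polynomial x^5 - x - 1, whose real root is not expressible in radicals.
    return x**5 - x - 1

def bisect_root(lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi], assuming f(lo) and f(hi) differ in sign."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# f(1) = -1 and f(2) = 29, so the real root lies in [1, 2].
root = bisect_root(1.0, 2.0)
print(root)  # ≈ 1.16730397826
```

So the description “the root of x⁵ − x − 1” is exact even though no radical expression for the number exists.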

What long term good futures are possible. (Other than FAI)?

Given that SAI is possible, regulation of AI is necessary to prevent people from making a UFAI. Alternatively, an SAI which is not fully aligned but has no goals directly conflicting with ours might be used to prevent the creation of UFAI.

ozziegooen's Shortform

If you have epistemic terminal values, then it would not be a positive-expected-value trade, would it? Unless "expected value" refers to the expected value of something other than your utility function, in which case that should have been specified.

ozziegooen's Shortform

Doesn't being willing to accept a trade *directly follow* from the expected value of the trade being positive? Isn't that like, the *definition* of when you should be willing to accept a trade? The only disagreement would be how likely it is that losses of knowledge / epistemics are involved in positive value trades. (My guess is it does happen rarely.)

Solution to the free will homework problem

Eh. The next question to ask is going to depend entirely on context. I feel like most of the time people use the question in practice, they’re talking about the extent of their capabilities, where whether you were able to want something is irrelevant. There are other cases, though.

Solution to the free will homework problem

I think when people say “Could I have done X?” we can usually interpret it as if they had said “Could I have done X, had I wanted to?”


Are you sure they weren't using "kill" metaphorically?

An optimal stopping paradox

Reminds me of the thought experiment where you’re in hell and there’s a button that will either condemn you there permanently or, with probability increasing over time, allow you to escape. Since permanent hell is infinitely bad, any decrease in its probability is infinitely good, so you either wait forever or make an arbitrary, unjustifiable decision.
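The structure of that paradox can be sketched with a toy model (my own construction, with assumed escape probability p(t) and waiting cost c, not from the comment): pressing the button at time t escapes with probability p(t), which is increasing in t, and otherwise condemns you permanently, valued at −M as a stand-in for "infinitely bad"; each step of waiting costs c. For M large enough relative to the horizon considered, one more step of waiting always improves the expected value, so no finite stopping time looks optimal.

```python
def expected_value(t, M, c=1.0):
    """Expected value of pressing the button at time t.

    p(t) = 1 - 1/(t+2): escape probability, increasing toward 1.
    Escape is worth 0; permanent condemnation is worth -M; waiting
    t steps first costs c per step.
    """
    p = 1 - 1 / (t + 2)
    return p * 0 + (1 - p) * (-M) - c * t

M = 1e9  # huge but finite proxy for "infinitely bad"
evs = [expected_value(t, M) for t in range(100)]

# Each extra step of waiting strictly raises the expected value,
# so within this horizon there is never a best moment to press.
assert all(later > earlier for earlier, later in zip(evs, evs[1:]))
```

With a genuinely infinite M the improvement from waiting dominates any finite waiting cost at *every* t, which is exactly the "wait forever or pick an arbitrary cutoff" dilemma.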

The Simulation Epiphany Problem

Do we need it to predict people with high accuracy? Humans do well enough with our own level of predictive ability.
