AprilSR

AprilSR's Comments

Solution to the free will homework problem

Eh. The next question to ask is going to depend entirely upon context. I feel like most of the time people use it in practice, they’re talking about the extent of their capabilities, where whether you were able to want something is irrelevant. There are other cases, though.

Solution to the free will homework problem

I think when people say “Could I have done X?”, we can usually interpret it as if they had said “Could I have done X, had I wanted to?”

Wrinkles

Are you sure they weren't using "kill" metaphorically?

An optimal stopping paradox

Reminds me of the thought experiment where you’re in hell and there’s a button that will either condemn you there permanently or, with probability increasing over time, allow you to escape. Since permanent hell is infinitely bad, any decreased chance of it is infinitely good, so you either wait forever or make an arbitrary, unjustifiable decision.
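A rough way to write out the trap (my formalization, not the original post’s): let $p(t)$ be the escape probability if you press at time $t$, strictly increasing in $t$, and let $c > 0$ be the per-unit disutility of waiting in hell. Comparing pressing at $t$ versus waiting until $t' > t$:

$$\mathrm{EU}(t') - \mathrm{EU}(t) = \big(p(t') - p(t)\big)\big(U_{\text{escape}} - U_{\text{hell}}\big) - c\,(t' - t)$$

If permanent hell has $U_{\text{hell}} = -\infty$, the first term is $+\infty$ for any delay, so “wait a bit longer” dominates at every $t$; yet the limiting policy of waiting forever never escapes at all.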

The Simulation Epiphany Problem

Do we need it to predict people with high accuracy? Humans get by well enough at our level of predictive accuracy.

Torture and Dust Specks and Joy--Oh my! or: Non-Archimedean Utility Functions as Pseudograded Vector Spaces

I believe there is some number of broken arms over the course of my life that would be worse than losing a toe, even though the broken arms are non-permanent and the lost toe is permanent.
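In the post’s terms, this is asserting the Archimedean property for this particular pair of harms (my gloss, not the post’s notation): for harms $x$ and $y$ with $x, y \succ 0$,

$$\exists n \in \mathbb{N}:\ n \cdot x \succ y,$$

with $x$ = one broken arm and $y$ = a lost toe. A non-Archimedean (lexicographic) utility function would instead place the permanent harm in a strictly higher grade of the pseudograded space, so that no finite number of broken arms could ever outweigh it.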

Troll Bridge

"(It makes sense that) A proof-based agent can't cross a bridge whose safety is dependent on the agent's own logic being consistent, since proof-based agents can't know whether their logic is consistent."

If the agent crosses the bridge, then the agent knows itself to be consistent.

The agent cannot know whether it is consistent.

Therefore, crossing the bridge implies an inconsistency (the agent would know itself to be consistent, even though that's impossible).

The counterfactual reasoning seems quite reasonable to me.
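In provability-logic shorthand (my formalization, not the post’s), write $\square\phi$ for “the agent’s logic proves $\phi$” and $\mathrm{Con}$ for its consistency:

$$\text{Cross} \rightarrow \square\,\mathrm{Con}, \qquad \mathrm{Con} \rightarrow \neg\square\,\mathrm{Con} \ \text{(Gödel II)}, \qquad \therefore\ \text{Cross} \rightarrow \neg\mathrm{Con}$$

So under this reading, a proof that the agent crosses would double as a proof of its own inconsistency, which is why refusing to cross looks sensible.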

Odds are not easier

If they didn’t need exactly the same amount of information, I would be very interested in what kind of math wizardry was involved.
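For the record, they do need exactly the same amount of information: odds $o$ and probability $p$ are related by an invertible map, so each determines the other,

$$o = \frac{p}{1-p}, \qquad p = \frac{o}{1+o}.$$

Since this is a bijection between $[0, 1)$ and $[0, \infty)$, neither representation can carry strictly more information than the other.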

Predicted AI alignment event/meeting calendar

If both of those things happened, I would be very interested in hearing about the person who decided to make a paperclip maximizer despite having an explicit model of the human utility function they could implement.

Actually, I wouldn’t be interested in anything. I would be paperclips.
