Comments

Do they think it's a hardware/cost issue? Or do they think that "true" intelligence is beyond our abilities?

This is also a plausible route for spreading awareness of AI safety issues to the left. The downside is that it might make AI safety a "leftist" issue if a conservative analogy is not introduced at the same time.

I think of it as deferring to future me vs. deferring to someone else.

Another consideration is how much money someone has on hand. If someone only makes $1,000 a month, they may choose $25 shoes that will last a year over $100 shoes that will last 5 years. Essentially, it is the complementary idea to economies of scale.
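The tradeoff above can be sketched in a few lines of arithmetic (the prices, lifespans, and income are the hypothetical numbers from the comment):

```python
def cost_per_year(price, lifespan_years):
    """Amortized yearly cost of a purchase."""
    return price / lifespan_years

cheap = cost_per_year(25, 1)     # cheap shoes: $25 per year of use
durable = cost_per_year(100, 5)  # durable shoes: $20 per year of use

# The durable shoes are cheaper per year of use...
assert durable < cheap
# ...but their upfront price is 10% of a $1,000 monthly income,
# versus 2.5% for the cheap pair, so the cheaper-per-year option
# may simply not be affordable in any given month.
```

The point is that the better long-run deal requires a lump sum the buyer may not have, so the per-year comparison alone doesn't settle the choice.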

Personhood is a legal category and an assumed moral category that policies can point to. Usually, the rules being argued about concern the acceptability of killing something. The category is used differently depending on the moral framework, but it is usually assumed to point at the same objects. Therefore, disagreements are interpreted as mistakes.

Personally, I have my doubts about there being an exact point in development you can point to where a human becomes a person. If there is one, it might be weeks after birth.

If I remember right, it was in the context of there not being any universally compelling arguments. A paperclip maximizer would just ignore the tablet; it doesn't care what the "right" thing is. Humans probably don't care about the cosmic tablet either. That sort of thing isn't what "morality" references. The argument is more of a trick to get people to recognize that than a formal argument.

I think the point is that people try to point to things like God's will in order to appear to have a source of authority. Eliezer is trying to lead them to conclude that any such tablet being authoritative just by its nature is absurd, and only seems right because they expect the tablet to agree with them. Another method is asking why the tablet says what it does. Ask whether God's decrees are arbitrary or whether there is a good reason behind them; if there is a good reason, why not just follow those reasons directly?

While I see a lot of concern about the big one, I think the whole AI environment being unaligned is the more likely, but not any better, outcome: a society that is doing really well by some metrics that just happen to be the wrong ones. I'm thinking of the idea of freedom of contract that was popular at the beginning of the 20th century, and how hard it was to dig ourselves out of that hole.

I don't think the fundamental ought works as a default position, partly because there will always be a possibility of being wrong about what that fundamental ought is, no matter how long it looks. So the real choice is about how sure it should be before it starts acting on its best-known option.

The right side can't be NULL, because that would make the expected value of both actions NULL. To do meaningful math with these possibilities, there has to be a way of comparing utilities across the scenarios.
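A minimal sketch of why a NULL utility breaks the comparison; the `expected_value` helper and its use of `None` for NULL are illustrative, not from the original:

```python
def expected_value(outcomes):
    """Expected utility over (probability, utility) pairs.

    If any utility is None (NULL), the expectation is undefined:
    there is no number to multiply and sum, so None propagates.
    """
    if any(u is None for _, u in outcomes):
        return None
    return sum(p * u for p, u in outcomes)

# With a NULL utility on one branch, both actions' expected
# values are undefined, so neither can be ranked above the other.
undefined = expected_value([(0.5, 10), (0.5, None)])  # None

# Only when every scenario's utility is on a common scale
# does the comparison produce a usable number.
defined = expected_value([(0.5, 10), (0.5, -2)])      # 4.0
```

The design point matches the comment: a single incomparable (NULL) branch poisons the whole expectation, so utilities must be comparable across all the scenarios being weighed.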
