Comments

What kind of professional could I discuss this with?

I'm not. What makes it unlikely? Would it prevent an AGI from reviving me, too?

I'm sorry, but that's not actually what I meant. I didn't mean that the two are incompatible, and I agree with you that they're not. I meant what the other user wrote: my friend was wondering if "most here 'just' want to be immortal no matter the cost and don't really care about morality otherwise."

I'll try to be clearer with my wording here in the future. I try to keep it short so as not to waste readers' time, since the time of users here is a lot more impactful than that of most others.

Yeah, that was their hypothesis, and thanks for the answer.

It would imply a moral system based on maximizing one's personal desires, instead of maximizing well-being across all life capable of suffering (which is what I meant by utilitarianism), or other moral systems.

You can disregard it if you want, I was just curious what moral beliefs motivate the users here. 

They don't necessarily have any relation, which is the point: it's a different motive.

I think the most likely outcome of actually trying this with an AI in real life is that you end up with a strategy that is convincing to humans but turns out to be ineffective or unhelpful in reality.

I agree this would be much easier. However, I'm wondering why you think an AI would prefer it, if it has the capability to do either. I can see some possible reasons (e.g., an AI may not want problems of alignment to be solved). Do you think that would be an inevitable characteristic of an unaligned AI with enough capability to do this?

Thanks for the response. I did think of this objection, but wouldn't it be obvious if the AI were trying to engineer a different situation than the one requested? E.g., wouldn't such a strategy seem unrelated to the request and unconventional?

It also seems like a hypothetical AI with just enough ability to generate a strategy for the desired situation would not be able to engineer a strategy for a different situation that would both work and deceive the human actors. As in, the latter seems harder and would require an AI with greater ability.

Edit: reposted this comment as a 'question' here: https://www.lesswrong.com/posts/eQqk4X8HpcYyjYhP6/could-ai-be-used-to-engineer-a-sociopolitical-situation

I'm new to alignment (I've been casually reading for a couple of months). I'm drawn to the topic by long-termist arguments. I'm a moral utilitarian, so it seems highly important to me. However, I have a feeling I misunderstood your post. Is this the kind of motive/draw you meant?
