LESSWRONG

hollowing · 94180 karma

meow

Wikitag Contributions

No wikitag contributions to display.

Comments

(Cryonics) can I be frozen before being near-death?
hollowing · 3y

What kind of professional could I discuss this with?

(Cryonics) can I be frozen before being near-death?
hollowing · 3y

I'm not. What makes it unlikely? Would it prevent an AGI from reviving me, too?

What moral systems (e.g utilitarianism) are common among LessWrong users?
hollowing · 3y

I'm sorry, but that's not actually what I meant. I didn't mean that the two are incompatible; I agree with you that they're not. I meant what the other user wrote: my friend was wondering if "most here 'just' want to be immortal no matter the cost and don't really care about morality otherwise."

I'll try to be clearer with my wording here in the future. I try to keep things short so as not to waste readers' time, since the time of users here is a lot more impactful than that of most others.

What moral systems (e.g utilitarianism) are common among LessWrong users?
hollowing · 3y

Yeah, that was their hypothesis, and thanks for the answer.

What moral systems (e.g utilitarianism) are common among LessWrong users?
hollowing · 3y

It would imply a moral system based on maximizing one's personal desires, rather than one that maximizes well-being across all life capable of suffering (which is what I meant by utilitarianism), or some other moral system.

You can disregard it if you want; I was just curious what moral beliefs motivate the users here.

What moral systems (e.g utilitarianism) are common among LessWrong users?
hollowing · 3y

They don't necessarily have any relation, which is the point: it's a different motive.

Could AI be used to engineer a sociopolitical situation where humans can solve the problems surrounding AGI?
hollowing · 3y

> I think the most likely scenario of actually trying this with an AI in real life is that you end up with a strategy that is convincing to humans and ends up being ineffective or unhelpful in reality.

I agree this would be much easier. However, I'm wondering why you think an AI would prefer it, if it has the capability to do either. I can see some possible reasons (e.g., an AI may not want problems of alignment to be solved). Do you think that would be an inevitable characteristic of an unaligned AI with enough capability to do this?

Could AI be used to engineer a sociopolitical situation where humans can solve the problems surrounding AGI?
hollowing · 3y

Thanks for the response. I did think of this objection, but wouldn't it be obvious if the AI were trying to engineer a different situation than the one requested? E.g., wouldn't such a strategy seem unrelated and unconventional?

It also seems like a hypothetical AI with just enough ability to generate a strategy for the desired situation would not be able to engineer a strategy for a different situation that would both work and deceive the human actors. That is, the latter seems harder and would require an AI with greater ability.

hollowing's Shortform
hollowing · 3y

Edit: reposted this comment as a 'question' here: https://www.lesswrong.com/posts/eQqk4X8HpcYyjYhP6/could-ai-be-used-to-engineer-a-sociopolitical-situation

hollowing's Shortform
hollowing · 3y

I'm new to alignment (I've been casually reading for a couple of months). I'm drawn to the topic by long-termist arguments; I'm a moral utilitarian, so it seems highly important to me. However, I have a feeling I misunderstood your post. Is this the kind of motive/draw you meant?

Posts

1 · hollowing's Shortform · 3y · 8
6 · (Cryonics) can I be frozen before being near-death? · Q · 3y · 16
1 · What moral systems (e.g utilitarianism) are common among LessWrong users? · Q · 3y · 9
1 · Could AI be used to engineer a sociopolitical situation where humans can solve the problems surrounding AGI? · Q · 3y · 6