lacker
Karma: 9070

Comments

Simulation argument meets decision theory
[+] lacker · 11y · -8
In order to greatly reduce X-risk, design self-replicating spacecraft without AGI
lacker · 11y · 1

Hmm, it is interesting that that exists, but it seems like it cannot have been very serious, because it dates from over 30 years ago and there was no follow-up activity.

In order to greatly reduce X-risk, design self-replicating spacecraft without AGI
lacker · 11y · 4

How would one even start "serious design work on a self-replicating spacecraft"? It seems like the technologies required to even begin serious design do not exist yet.

LessWrong's attitude towards AI research
lacker · 11y · 7

That doesn't seem like the consensus view to me. It might be the consensus view among LessWrong contributors. But in the AI-related tech industry and in academia it seems like very few people think AI friendliness is an important problem, or that there is any effective way to research it.

A kind of reverse "tragedy of the commons" - any solution ideas?
lacker · 11y · 0

Another problem you need to avoid is misjudging how much money you would actually save. That seems more common when the pain of misjudgment is shared.

Simulate and Defer To More Rational Selves
lacker · 11y · 3

Ah, it's just like "What Would Jesus Do" bracelets.

What are you learning?
lacker · 11y · 2

It seems like picking this nonhuman metafictional stuff is a tough way to start writing. Maybe pick something easier, just so that you succeed without getting so much writer's block.
