"Oh, Klurl, don't be ridiculous!" cried Trapaucius. "Our own labor is a rare exception to the rule that most people's tasks are easy! That is why not just anyone can become a Constructor!"
"I wonder if perhaps most other people would say the same about their own jobs, somehow," said Klurl thoughtfully.
I for one would say that the work I do is actually pretty easy, and the only reason I'm paid as well as I am for it is most other people's inexplicable inability to do objectively[1] easy work, combined with their inexplicable capacity for doing objectively[1] much harder things instead. No idea how many other people feel the same way.
[1] Objectivity not guaranteed.
I give myself a small amount of credit for sensibly-but-incorrectly predicting
". . . and then the weirdly-un-optimized AIs got eaten by the not-weirdly-un-optimized AI humanity constructed."
Thanks for running this. It didn't work out the way you hoped, but you get kudos for trying (there are way too few practical tests/challenges on LW imo) and for having your game break the 'right' way (a cheese-able challenge still helps people develop their cheesing skills, and doesn't take up too much of anyone's time; my least favorite D&D.Scis are the ones where my screwups led to players wasting meaningful amounts of effort on scenarios where the central concept didn't work).
If you make something like this again, and want someone to playtest it before release, please let me know.
5 is obviously the 'best' answer, but it's also a pretty big imposition on you, especially for something this speculative. 6 is a valid and blameless (if not actively praiseworthy) default. 2 is good if you have a friend like that, are reasonably confident they'd memoryhole it if it's dangerous, and expect them to be able to help (though fwiw I'd wager you'd get less helpful input this way than you'd expect: no one person knows everything about the field, so you can't guarantee they'd know if/how it's been done, and inferential gaps are always larger than you expect, so explaining it right might be surprisingly difficult/impossible).
I think the best algorithm would be along the lines of:
5 iff you feel like being nice and find yourself with enough spare time and energy
. . . and if you don't . . .
7, where the 'something else' is posting the exact thing you just posted and seeing if any trustworthy AI scientists DM you about it
. . . and if they don't . . .
6
I'm curious to see what other people say.
A beautiful and haunting story. Not entirely sure what it's doing on LessWrong, but I'm glad it's here, because I'm here and I'm glad I read it.
I want to strong-downvote this on principle for being AI writing, but I also want to strong-upvote this on principle for admitting to being AI writing, so I'm writing this comment instead of doing either of those things.
This seems like the sort of thing best addressed by me adding a warning / attention-conservation-notice at the start of the article, though I'm not sure what would be appropriate. "Content Note: Trolling"?
ETA: This comment has been up for 24 hours, it has positive agreement karma, and no one's suggested a better warning to use, so I'm doing the thing. Hopefully this helps?
Done.