The subtext is only clear in retrospect.
Is the intent in the review phase to display the number of nominations received (which will influence which posts get reviewed), or not to display it (which withholds information I am likely to find useful in working with the list of posts that have been nominated by enough people to form a reading list)?
Is the thought behind "Wait as long as possible before hiring people" that you will be better able to spread values to people when you are busier, or that you can hire a bunch of people at once and gain economy of scale when indoctrinating them?
Because the naive view would be to hire slowly and well in advance, and either sync up with the new hires or terminate them if they can't get into the organizational paradigm you're trying to construct, and that requires more slack.
One error of the stag/rabbit hunt framing is that, by assigning explicit utility numbers, it makes the problem purely one of coordination rather than of values. To frame it differently, the stag and rabbit hunts would have to yield not different utility numbers but different resources, or different certainties of resources. If a rabbit hunt yields 3d2 rabbits per hunter, but the stag hunt yields 1d2-1 stags if all hunters work together and 0 if they don't, then even with a higher expected yield of meat and hide from the stag hunt, the rabbit hunt might yield higher expected utility for some people, since the certainty of not starving is worth far more utility than a marginal increase in hides.
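The arithmetic here can be made concrete with a toy model. The party size, the meat value of a stag, and the starvation threshold below are my assumptions for illustration; only the 3d2 and 1d2-1 dice come from the comment. The point is that the stag hunt can have higher expected *meat* while the rabbit hunt has higher expected *utility* for a hunter who mostly cares about not starving.

```python
import itertools
import statistics

# Toy numbers (assumptions, not from the comment):
HUNTERS = 4          # hypothetical party size
STAG_MEAT = 50       # hypothetical meat units per stag, split evenly
RABBIT_MEAT = 1      # meat units per rabbit
STARVE = 2           # hypothetical meat units needed to avoid starving

def utility(meat):
    # Starvation dominates: a large penalty below the threshold,
    # diminishing returns (square root) above it.
    return -100 if meat < STARVE else meat ** 0.5

# Rabbit hunt: 3d2 rabbits per hunter, so each hunter gets 3..6 meat units.
rabbit_outcomes = [sum(dice) * RABBIT_MEAT
                   for dice in itertools.product((1, 2), repeat=3)]
rabbit_ev = statistics.mean(rabbit_outcomes)
rabbit_eu = statistics.mean(utility(m) for m in rabbit_outcomes)

# Stag hunt (everyone cooperates): 1d2-1 stags, i.e. 0 or 1, split evenly.
stag_outcomes = [0.0, STAG_MEAT / HUNTERS]
stag_ev = statistics.mean(stag_outcomes)
stag_eu = statistics.mean(utility(m) for m in stag_outcomes)

print(f"rabbit hunt: E[meat]={rabbit_ev:.2f}  E[utility]={rabbit_eu:.2f}")
print(f"stag hunt:   E[meat]={stag_ev:.2f}  E[utility]={stag_eu:.2f}")
```

With these numbers the stag hunt wins on expected meat (6.25 vs 4.5 units) but loses badly on expected utility, because half the time it leaves every hunter below the starvation threshold.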
In order to confidently assert that a Schelling point exists, one should have viewed the situation from everyone's point of view and applied their actual goals, NOT viewed everyone's point of view and applied your own goals, or the average of everyone's goals, or the goals you think they have.
An answer of "There is probably one but I can't figure out what it is" is equivalent to an answer of "I can't find one."
I'm not making a mathematical conjecture that is probably true but might not have a proof; I'm asking what is wrong with engineering fully sentient catgirls who want to serve people in a volcano fortress that isn't also wrong with allowing existing people to follow their dreams of changing themselves into sentient catgirls and serving people in a volcano fortress.
Is there any significant difference between finding sentient beings who self-modify into becoming sentient catgirls for the purpose of serving you in your volcano fortress and engineering de novo sentient catgirls who desire to serve you in your volcano fortress?
I don't think it's inherently difficult to tell the difference between someone who is speaking N levels above you and someone who is speaking N+1 levels above you. The one speaking at a higher level is going to expand on all of the things they describe as errors, giving *more complex* explanations.
The difficulty is that it's impossible to tell if someone who is higher level than you is wrong, or telling a sophisticated lie, or correct, or some other option. The only way to understand how they reached their conclusion is to level up to their level and understand it the hard way.
There's a related problem, where it's nigh impossible to tell if someone who is actually at level N but speaking at level N+X is making shit up completely unless you are above the level they are (and can spot errors in their reasoning).
Take a very simple case: A smart kid explaining kitchen appliances to a less smart kid. First he talks about the blender, and how there's an electric motor inside the base that makes the gear thingy go spinny, and that goes through the pitcher and makes the blades go spinny and chop stuff up. Then he talks about the toaster, and talks about the hot wires making the toast go, and the dial controls the timer that pops the toast out.
Then he goes +x over his actual knowledge level, and says that the microwave beams heat radiation into the food, created by the electronics, and that the refrigerator uses an 'electric cooler' (the opposite of an electric heater) to make cold that it pumps into the inside, and the insulated sides keep it from making the entire house cold.
Half of those are true explanations and half are bluffs, but someone who barely has the understanding needed to verify the first two won't have the understanding needed to refute the last two. If someone else corrects the wrong descriptions, said unsophisticated observer would have to use things other than the explanation to determine credibility (in the toy cases given, a good explanation could level up the observer enough to see the bluff, but in the case of +5 macroeconomics that is impractical). If the bluffing actor tries to refute the higher-level true explanation, they merely need to bluff more; people high enough level to see the bluff /already weren't fooled/, and people of lower level see the argument settle into an equilibrium or cycle isomorphic to all parties saying "That's not how this works, that's not how anything works; this is how that works", and can only distinguish between the parties by things other than the content of what they say (bias, charisma, credentials, tribal affiliation, and verified track records are all within the Overton Window for how to select who to believe).
How useful would it be to have someone who produced luminators that were pegboards with lights mounted via zip ties or something equally aesthetically bad? If the labor of collecting and assembling the components can efficiently be outsourced into buying a nonstandard light fixture, it might be more accessible.
Are you suggesting blacklightboxes?
Has anyone who has gotten relief from using luminators done rigorous A/B testing with different temperatures/colors, intensities, durations, or other possibly important variables?
Not just gold standard clinical trials; even something like “I tried color A for a week and logged 3 episodes, but color B for a week resulted in 8” could be informative for people deciding which type of bulb to get.
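For what it's worth, a self-logger could get a rough read on whether such a difference is more than noise with a few lines of stdlib Python. The counts (3 vs. 8) are the comment's hypothetical; modeling weekly episode counts as Poisson, and comparing them via a conditional binomial test, is my assumption, not something the comment proposed.

```python
from math import comb

# Hypothetical self-logged episode counts from the comment's example.
color_a_episodes = 3
color_b_episodes = 8

# If both weeks had the same underlying episode rate (Poisson assumption),
# then conditional on the total, the color-A count is Binomial(total, 0.5).
total = color_a_episodes + color_b_episodes
k = min(color_a_episodes, color_b_episodes)
p_one_sided = sum(comb(total, i) for i in range(k + 1)) / 2 ** total
p_two_sided = min(1.0, 2 * p_one_sided)

print(f"two-sided p ≈ {p_two_sided:.3f}")
```

Here the p-value comes out around 0.23, which fits the comment's framing: one week per color isn't decisive evidence, but it's still informative for someone choosing a bulb.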