Are we thinking from the transmitter end, the receiver end, or doesn't it matter? The obvious answer seems to me to be filters, specifically a band-pass filter.
I do not understand Logical Induction, and I especially don't understand its relationship to updating on evidence. I keep viewing Bayes as a procedure separate from the agent, and then trying to slide LI into that same slot; it fails because at least LI, and probably Bayes as well, is wrongly viewed that way. But this post is what I leaned on to shift from an utter-darkness understanding of LI to a heavy-fog one, and re-reading it has been very useful in that regard. Since I am otherwise not a person who would be expected to understand it, I think this speaks very well of the post in general and of its importance to the conversation surrounding LI.
This also is a good example of the norm of multiple levels of explanation: in my lay opinion a good intellectual pipeline needs explanations stretching from intuition through formalism, and this is such a post on one of the most important developments here.
Congratulations on finishing your doctorate! I'm very much looking forward to the next post in the sequence on multi-winner methods, and I'm especially interested in the metric you mention.
I think this post should be included in the best posts of 2018 collection. It does an excellent job of balancing several desirable qualities: it is very well written, being both clear and entertaining; it is informative and thorough; and it is in the style of argument preferred on LessWrong, by which I mean it makes use of both theory and intuition in its explanations.
This post adds to the greater conversation by displaying rationality of the kind we are pursuing directed at a big societal problem. A specific example of what I mean, which distinguishes this post from an overview that any motivated poster might write, is the inclusion of Warren Smith's results; Smith is a mathematician from an unrelated field with no published work on the subject. But he did the work anyway, and it was good work, which the author himself expanded on, and now we get to benefit from it through this post. This puts me very much in mind of the fact that this community was primarily founded by an autodidact who was deeply influenced by a physicist writing about probability theory.
A word on one of our sacred taboos: in the beginning it was written that Politics is the Mindkiller, and so it was for years and years. I expect this is our most consistently and universally enforced taboo. Yet here we have a high-quality and very well received post about politics, and of the ~70 comments only one appears to have been mindkilled. This post has great value on the strength of being an example of how to address troubling territory successfully. I expect most readers didn't even consider that this was political territory.
Even though it is a theory primer, it manages to be practical and actionable. Observe how the very method of scoring posts for the review, quadratic voting, is one that is discussed in the post. Practical implications for the management of the community weigh heavily in my consideration of what should be considered important conversation within the community.
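Since the scoring method itself comes up here, a minimal sketch of how quadratic voting works may be useful: each voter spends from a fixed credit budget, and casting n votes on an item costs n² credits, which makes strong preferences progressively more expensive to express. The function names, ballot structure, and budget value below are all illustrative assumptions, not details from the post.

```python
# A minimal sketch of quadratic voting, assuming each voter has a fixed
# credit budget and casting n votes (for or against) costs n^2 credits.
# All names and the budget of 16 are illustrative assumptions.

def vote_cost(num_votes: int) -> int:
    """Casting n votes on a single item costs n squared credits."""
    return num_votes ** 2

def tally(ballots: dict[str, dict[str, int]], budget: int = 16) -> dict[str, int]:
    """Sum votes per item, rejecting any ballot that exceeds its budget."""
    totals: dict[str, int] = {}
    for voter, votes in ballots.items():
        spent = sum(vote_cost(abs(n)) for n in votes.values())
        if spent > budget:
            continue  # in this sketch, over-budget ballots are simply dropped
        for item, n in votes.items():
            totals[item] = totals.get(item, 0) + n
    return totals

ballots = {
    "alice": {"post_a": 3, "post_b": -1},  # costs 9 + 1 = 10 credits
    "bob":   {"post_a": 1, "post_b": 2},   # costs 1 + 4 = 5 credits
}
print(tally(ballots))  # {'post_a': 4, 'post_b': 1}
```

The quadratic cost is the key design choice: a second vote on the same item costs three extra credits rather than one, so spreading votes across many items is cheap while piling them onto a single item is expensive.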
Carrying on from that point into its inverse, I note that this post introduced the topic to the community (though there are scattered older references to some of the things it contains in comments). Further, as far as I can tell the author wasn't a longtime community member before this post and the sequence that followed it. The reason this matters is that LessWrong can now attract and give traction to experts in fields outside of its original core areas of interest. This is not a signal of the quality of the post so much as the post being a signal about LessWrong, so there is a definite sense in which this weighs against its inclusion: the post showed up fully formed rather than being the output of our intellectual pipeline.
I would have liked to see (probably against the preferences of most of the community, and certainly against the signals the author would have received as a lurker) a specific section on the areas where advocacy is happening. I found them anyway, because they were contained in the disclosures, threaded through the discussion, and reachable by clicking the links, but I suspect that many readers would have missed them. This is especially true for readers less politically interested than I, which most of them are. The obvious reason is for interested people to be able to find it more easily, which matters a lot for problems like this one. The meta-reason is that posts which tread dangerous ground might benefit from directing people somewhere else for advocacy specifically, as a kind of communication-pressure release valve. It speaks to the quality of the post that this wasn't even an issue here, but for future posts on similar topics in a growing LessWrong I expect it to be.
Lastly, I want to observe that the follow-up posts in the sequence are also good, suggesting that this post was fertile ground for more discussion. In terms of additional follow-up: I would like to see this theory deployed at the level of intuition building, in a way similar to how we use markets, Prisoner's Dilemmas, and more recently Stag Hunts. I feel like it would be a good, human-achievable counterweight to things like utility functions and value handshakes in our conversation, and would make our discussions more actionable thereby.
Reflecting on making morally good choices vs. morally bad ones, I noticed the thing I lean on the most is not evaluating the bad ones. This effectively means good choices pay up front in computational savings.
I'm not sure whether this counts as dark arts-ing myself; on the one hand it is clearly a case of motivated stopping. On the other hand I have a solid prior that there are many more wrong choices than right ones, which implies evaluating them fairly would be stupidly expensive; that in turn implies the don't-compute-evil rule is pretty efficient even if it were arbitrarily chosen.
I feel that questions like this have a hard time escaping confusion because the notion of linear time is so deeply associated with causality already.
Could you point me to the arguments about a high-entropy universe being expected to decrease in entropy?
I think I agree with your intuition, though I submit that size is really only a proxy here for levels of hierarchy. We expect more levels in a bigger organization, is all. I think this gets at the mechanisms for why the kinds of behaviors in Moral Mazes might appear. I have seen several of the Moral Mazes behaviors play out in the Army, which is one of the largest and most hierarchical organizations in existence.
I don't see why being consumed by your job would predict any of the rest of it; programmers, lawyers, and salesmen are notorious for spending all of their time on work, and those aren't management positions. Rather, I expect that all these behaviors exist on continua, and we should see more or less of them depending on how strongly people are responding to the incentives.
My intuition is that the results problem largely drives the description to which you are responding. Front line people and front line managers usually have something tangible by which to be measured, but once people enter the middle zone of not being directly connected to the top line or the bottom line results, there's nothing left but signalling. So even a 9-5 guy who goes fishing is still likely to play politics, avoid rocking the boat, pass the blame downhill, and think that outcomes are determined by outside forces.
I would be shocked to my core if Moral Mazes behaviors rarely appeared under such conditions.
One of the largest in the country. The core organization is less than a thousand, but they have state affiliate organizations and as of recently international ones as well.
It is exceedingly top-heavy; I want to say it was approaching 5% executives, not counting their immediate staff.
The organization is functionally in free-fall now; they are hemorrhaging people and money. I expect if it were for-profit this is the part where they would go bankrupt. The transition from well-functioning to free-fall took ~5 years.
When considering a barrier to exit, do they usually include the cost to go somewhere else? Quitting is free and easy, but getting another job elsewhere isn't, especially when considering opportunity costs.
By contrast, this does match my wife's experiences as a senior manager in a large non-profit. There were repeated and consistent messages about being expected to respond to emails and calls at all hours as you moved up the hierarchy; the performance metrics were fixed so everyone fit within a narrower band; and actual outcomes of programs did not matter, while suggesting that they did was punished (culminating in one fascinating episode in which a VP seems to have made up an entire program, which delivered 0.001 of its projected revenue, resulting in a revenue shortfall of some 25% for the whole organization, and who was not fired).