LessWrong developer, rationalist since the Overcoming Bias days. Connoisseur of jargon.
In general, I think people aren't using finance-style models in daily life enough. When you think of mental models as capital investments, dirty dishes as debts (not-being-able-to-use-the-sink as debt servicing), skills as assets, and so on, a lot of things click into place.
(Though of course, you have to get the finance-modeling stuff right. For some people, "capital investment" is a floating node or a node with weird political junk stuck to it, so this connection wouldn't help them. Similarly, someone who thinks of debt as sin rather than as a tool that you judge based on discount and interest rates, would be made worse off by applying a debt framing to their chores.)
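To make the "debt as a tool judged by rates" framing concrete, here's a toy sketch. The function name and all numbers are mine, purely for illustration; the point is just that the decision reduces to comparing two rates rather than a moral judgment:

```python
# Toy comparison: is taking on a debt worth it, judged by rates rather than
# by treating debt as inherently bad? All numbers are made up for illustration.

def worth_borrowing(interest_rate: float, discount_rate: float) -> bool:
    """Borrow when money now is worth more to you (your discount rate)
    than the lender charges for it (the interest rate)."""
    return discount_rate > interest_rate

# Someone who values present resources at 10%/yr should take a 5%/yr loan...
print(worth_borrowing(interest_rate=0.05, discount_rate=0.10))  # True
# ...and decline to carry an 18%/yr credit-card balance.
print(worth_borrowing(interest_rate=0.18, discount_rate=0.10))  # False
```

The same comparison applies to chore-debts: "servicing cost" (a blocked sink) versus the value of deferring the work.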
Maybe merge into the Consciousness tag? In any case, the description ought to be rewritten to actually resolve (or link to a resolution of) the problem. Academic philosophy likes to treat THPoC as an open problem with a permanent taxonomy of candidate answers, none of which will ever be accepted. We should not humor this. The Hard Problem of Consciousness is simply confusion about how an algorithm feels from inside.
(In general, when there is a clear way and a confused way to think about something, I think it's bad to name and reify the confused way without being very clear that's what's happening. This is most of how philosophy got stuck.)
This crystallizes a class of strategies which I was aware of, and used sometimes -- eg I have in the past had dice-tables of topics to think about in the shower. But I didn't make the connection to older practices, and I don't think most people would've recognized this as a useful strategy (as opposed to a gimmick). So now the situation is that this post exists to explain the randomized-library-of-strategies approach, but there isn't much in the way of well-curated strategy libraries to sample from.
In an ideal world, the randomly-sampled strategies would be explicit about what they are (rather than disguising themselves as predictions), and would have a feedback mechanism attached. You'd go to a web page, click "tell me what to think about", and it says "pay attention to relationships you might be neglecting" or something, and 24 hours later you rate whether that caused you to notice anything important. Hopefully bringing attention to this concept will cause people to build more tools like that.
He also still seems to me to precipitate psychotic episodes in his interlocutors surprisingly often.
This is true, but I'm confused about how to relate to it. Part of Michael's explicit strategy seems to be identifying people stuck in bad equilibria, and destabilizing them out of it. If I were to take an evolutionary-psychology steelman of what a psychotic episode is, a (highly uncertain) interpretation I might make is that a psychotic episode is an adaptation for escaping such equilibria, combined with a negative retrospective judgment of how that went. Alternatively, those people might be using psychedelics (which I believe are in fact effective for breaking people out of equilibria), and getting unlucky with the side effects. This is bad if it's not paired with good judgment about which equilibria are good vs. bad ones (I don't have much opinion on how good his judgment in this area is). But this seems like an important function, which not enough people are performing.
At the town hall, you remarked that you had made a list of many explanations, each of which looked likely to be true when viewed in isolation, but none of which seemed obviously more compelling than the others. I attempted the same exercise and got a similar result. Looking at my list and thinking about it, I'm not convinced there is necessarily a single central cause; some of the hypotheses plausibly could be one, but the alternative is that there are many mechanisms pushing corruption up and down, which mostly cancel out, with a few of them acting as positive-feedback loops.
Here's my list of hypotheses, clustered into a few groupings. It sits somewhere halfway between "developed theories that I fully believe" and this babble challenge.
Institutions Which Corrode Others in Proportion to their Own Corrosion
News media. The judiciary. Higher education. Unions. Grantmakers. Each of these institutions (or categories of institutions) has a surface area through which it influences other institutions, and is influenced by them in return; that influence can be positive or negative, depending on how well-functioning it is. News media and the judiciary, for example, shape the behavior of other institutions through scandal and liability, respectively; that pressure can be corrective or distorting.
Loss of Slack
The parts of an institutional culture that trade off short- and long-term incentives rely on the institution and the people within it having slack. For a number of reasons, both people and organizations seem to have less slack than they used to. There are a few big well-known ones affecting individuals: student loans, housing prices, and health care prices, in particular. But the ones affecting institutions and person-institution relationships are more interesting:
Financialization: It is fairly common for an investor to take an organization with a secure income stream and a lot of slack, and use financial instruments to transform it into a precarious organization plus money elsewhere. The prototypical instance of this is the leveraged buyout, which seems to have first become common in the 1980s.
No More Company Men (/Women): Decades ago, and still in some countries, the relationship between an employee and a corporation was a long-term, loyal one; people would spend most of their career inside a single organization. This implied a low risk of being fired, a long time horizon in their relationship with the company, and a lot of time in which to absorb the company's culture.
Loss of Ability to Filter
In order to function, an institution needs to recruit people filtered for both competence and alignment, which can be done either by recruiters assessing people directly, or by delegating and judging based on work history or formal credentials. Most large institutions have shifted heavily towards the latter, while the formal-credential-giving institutions have shifted away from selecting on intelligence and towards selecting on time-expenditure and conscientiousness. As a result, many institutions seem to have filled up with people who have papers that say they're qualified, who nevertheless aren't.
Some norms are enforced in a distributed way, where the enforcement only works if the violation is known widely enough, eg boycotts in response to corporate misbehavior. If the population doubles, the number of corporations doubles, and the per-capita number of boycott targets stays constant, then the total influence of boycotts halves. Similar effects exist internally when scaling up institutions, when increasing the number of participants in a market, and when increasing the complexity of regulation.
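The dilution claim above can be sketched as a toy model (the function name, the fixed "shared attention" pool, and all numbers are my own illustrative assumptions): distributed enforcement requires common knowledge, and the public can only sustain a roughly fixed number of widely-known scandals at once, so doubling the number of firms halves the deterrence any one firm faces.

```python
# Toy model of distributed-enforcement dilution: a fixed pool of shared
# public attention, spread over a growing number of potential boycott
# targets. Numbers are illustrative only.

def fraction_of_firms_deterred(firms: int, shared_attention_slots: int) -> float:
    """Enforcement works only if misbehavior is widely known; the public
    can sustain only a fixed number of widely-known scandals at a time."""
    return min(1.0, shared_attention_slots / firms)

base = fraction_of_firms_deterred(firms=1_000, shared_attention_slots=20)
doubled = fraction_of_firms_deterred(firms=2_000, shared_attention_slots=20)
print(base)            # 0.02
print(doubled)         # 0.01 -- per-firm deterrence halves
```

The same structure applies when scaling up an institution's internal norm enforcement or a market's reputation mechanisms.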
A noteworthy part of the experience of hanging out in any sort of niche forum is that the US's major institutions have a recurring cast of scandals which never seem to be resolved, and which the relevant people don't show much awareness of. In 2015, the Congressional reaction to police brutality and discrimination didn't look like disagreement; it looked like not having enough bandwidth to investigate or think clearly about it. This is what we should expect to happen more and more to fixed-size institutions as the world gets more complex.
A related problem is that as the influence of an institution grows, the rewards for capturing or corrupting it grow too, and offense and defense do not scale symmetrically. An example is the relationship between the PR and newspaper industries: the ratio of PR resources to investigative-reporting resources has grown drastically, and this seems like a natural consequence of economic growth.
Foreign intelligence agencies are actively working to reduce unity within the US.
A history of corruption in powerful institutions created a cultural backlash against institutional power in general, which isn't selective enough.
Institutions are easier to sabotage than they are to create, so scaling up the world disfavors them.
Decreased Attention Spans
When I was growing up, it was a standard aphorism that "television rots your brain" -- ie, that consuming too much media messes people up in some way. I recall a more specific claim, which was that television decreases attention span -- ie, people who watch a lot of television have more scattered attention. People now say similar things about social media, and I think the effect is the same: people tend to spend fewer consecutive seconds on each thought, and have a harder time dealing with long inferential distances.
Many of the major problems look like leaders are being too miserly with cognition; there's a correct model of the problem which has some inferential distance, and a competing model of the problem which is simpler but wrong, and we find leaders acting as though they believe the latter. Eg the "landlords are greedy" model (simple, wrong) vs the "prices are high when housing construction is inhibited by regulation" model (correct, but more complex).
Assorted Other Hypotheses
Exposure to advertising induces resistance to a class of messaging which includes both advertising and organizational ideology.
Business schools are teaching MBAs a strategy that doesn't rely on understanding the culture of an institution, and so they go on to destroy the local culture wherever they go.
The same thing that's causing an obesity crisis, also changes people's psychology in a way that makes them hard to build institutions out of.
Cynicism is a self-fulfilling prophecy; believing that an institution is bad makes the people within it stop trying, and the good people stop going there.
Many organizations relied on an implicit hierarchy in which older people were higher ranked and less numerous; the decrease in birth rate broke this.
Con artists are more skilled than they used to be, because TV/social media/something else made it easier for them to practice, and most organizations are being captured by them.
The state can use its centralized negotiating power to get lower prices on standardized packages. A welfare floor could therefore be provided at lower cost than by simply redistributing income, which, though it gives people more agency, more easily inflates prices across the board and leaves people more exposed to impulse spending.
I think this part of the argument fails, at least for things that work well as market commodities. The main advantage of individuals selecting suppliers over governments selecting suppliers isn't that it gives people more agency, it's that it has better aligned incentives and is more resistant to corruption. Impulse buying is a minor problem; packages containing the wrong items, and items that are useless for hard-to-recognize reasons, is a major problem.
In general, when there are conspicuous mismatches between what the government tells people to buy and what they actually buy, there are usually good reasons for it. Nutrition, in particular, is an area where increasing government control would be disastrous; the US government already exerts some control over some poorer people's food purchases via eligibility restrictions, and while the intent is to encourage people to eat healthier, the actual effect is mostly the opposite, because they operate on a model of nutrition that's less accurate than most people's instincts.
A simpler solution to the impulse-purchase problem would be time restricted funds: regular money, except it can only be spent on items with a long shipping delay and a cancellation window.
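As a minimal sketch of the rule such funds would enforce (the function, the seven-day threshold, and the cancellation condition are all hypothetical details of my own, not an existing system):

```python
# Hypothetical eligibility rule for "time-restricted funds": spending is
# allowed only on purchases with a built-in cooling-off period, ie a long
# shipping delay plus a cancellation window that covers it.

def purchase_allowed(shipping_delay_days: int, cancellable_within_days: int,
                     min_delay_days: int = 7) -> bool:
    """Allow spending only when the buyer has a real window in which to
    reconsider and cancel before the item ships."""
    return (shipping_delay_days >= min_delay_days
            and cancellable_within_days >= shipping_delay_days)

print(purchase_allowed(shipping_delay_days=10, cancellable_within_days=10))  # True
print(purchase_allowed(shipping_delay_days=0, cancellable_within_days=0))    # False
```

The design intent is that the delay itself, not any judgment about the item, filters out impulse purchases.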
The first Reddit link bottoms out at this study. The key detail not mentioned in the abstract is that this is in rats whose pupils have been dilated with atropine, and the rats are in conditions where a human with pharmaceutically-dilated eyes would not be able to function without sunglasses. This makes the paper's comparison between different types of light sources and wavelengths pretty uninformative.
The rest are all about what level of brightness is acceptable. But we have a pretty good point of comparison: we know sunlight is safe (as long as you don't look at the sun directly); and the indoor lighting solutions under consideration are all significantly dimmer than sunlight.
When new users submit posts/comments, they go into a moderator review queue. In addition to weeding out spam, we also sometimes use this to reject posts that don't meet our quality standards (enforcing a somewhat higher standard than we enforce on established accounts). Because this mechanism was mainly built for dealing with spam, it is unfortunately over-inclusive (deleting the whole account rather than just the post) and doesn't provide good feedback to affected users. This is an aspect of the site software that we plan to improve.
You first posted this text under the title "A scientific, and physical explanation for the resurrection, and assent into heaven, of Jesus Christ", and the moderator who reviewed it clicked the spam/purge button. I was not the moderator who first reviewed it and clicked the delete-account button, but I agree with their decision; it's approved on this resubmit so that there's somewhere to put this reply, but I don't think it meets the standards we want for new accounts' first posts.
Ordinarily I might suggest resubmitting this as a Shortform post, as that section is meant for less-polished writing and this post might fit there. However, since you made a threat in the title of this resubmission, and we do not respond to threats, I instead ask that you leave and not engage with LessWrong in the future.
I think this is a complaint about use of unfamiliar jargon. This does seem like something that deserves a link/hover-over.
This comment was crossposted with Facebook, and Facebook auto-edited the link while I was editing it there. Edited now to make it a direct link.