I have signed no contracts or agreements whose existence I cannot mention.
They thought they found in numbers, more than in fire, earth, or water, many resemblances to things which are and become; thus such and such an attribute of numbers is justice, another is soul and mind, another is opportunity, and so on; and again they saw in numbers the attributes and ratios of the musical scales. Since, then, all other things seemed in their whole nature to be assimilated to numbers, while numbers seemed to be the first things in the whole of nature, they supposed the elements of numbers to be the elements of all things, and the whole heaven to be a musical scale and a number.
This seems potentially useful if you find yourself regularly getting told you are manipulative or rude, but if not, I don’t see the value proposition.
I’m also skeptical that general, universal rules like this exist for everyone, or that “dissonance” is bad in every circumstance for everyone.
As a concrete example, sometimes my friends will tell me they are stupid or inadequate, and sometimes I know that’s just not true, so I tell them. This no doubt causes dissonance, but a good kind, because I know those people well enough to know they just aren’t thinking clearly (and if they were, my confident assertions would not much change their trajectory).
Less concretely, many people thrive in direct confrontation, and NVC seems opposed to that.
The ancient Greeks had many tragic stories too, then; for example, Prometheus Bound.
It’s a good effort to fight against new medically inaccurate terms which give the wrong impression of real medical phenomena, but sadly I think that ship has long since sailed, and the term is already far more watered down in popular usage than what is described in this post (though even then, the term does refer to real & harmful psychological consequences of interacting with AIs).
I will note that historically, despite the positive externalities, fire insurance companies often put out not only their subscribers’ fires, but everyone’s.
@Mo Putera has asked for concrete examples of
> over-use of over-complicated modeling tools which are claimed to be based on hard data and statistics, but the amount and quality of that data is far too small & weak to "buy" such a complicated tool
I think AI 2027 is a good example of this sort of thing, as are the notorious Rethink Priorities welfare range estimates on animal welfare. And though I haven't thought about them deeply enough to be confident, GiveWell's famous giant spreadsheets (see the links in the last section) are the sort of thing I am very nervous about. I'll also point to Ajeya Cotra's bioanchors report.
I think we both agree insofar as we would give similar diagnoses, but we maybe disagree insofar as we would give different recommendations about what to change.
I would recommend LessWrongers read more history, do more formal math and physics, and make more mathematical arguments[1].
I would expect you to recommend LessWrongers spend more time looking at statistics (in particular, Our World in Data), spend more time forecasting, and make more mathematical models.
Is this accurate?
This is not an exhaustive list. I also think LessWrongers should read more textbooks about pretty much everything. ↩︎
I think the EA Forum is epistemically better than LessWrong in some key ways, especially outside of highly politicized topics. Notably, there is a higher appreciation of facts and factual corrections.
My problem with the EA Forum (or really with EA-style reasoning as I've seen it) is the over-use of over-complicated modeling tools which are claimed to be based on hard data and statistics, but the amount and quality of that data is far too small & weak to "buy" such a complicated tool. So in some sense, perhaps, they move too far in the opposite direction. But I think the way EAs think about these things is wrong, and would be wrong even directionally for LessWrong to move toward (though not for people in general).
I think this gives EAs a pretty big streetlight bias in their thinking, and the same goes for forecasters. In particular, EAs seem like they should focus more on bottleneck-style reasoning (eg focusing on understanding & influencing a small number of key important factors).
See here for how, concretely, I think this cashes out into the different recommendations I think we'd give to LessWrong.
It's not obvious to me that this is dumb. If two people are super angry at each other, that conversation seems likely to create more heat than light.
Why a single necessary and sufficient policy? What if the most realistic way of helping everyone is several policies that are individually insufficient, but jointly sufficient? Doesn't this unhelpfully focus us on dramatic actions, in the same way that a "pivotal act" framing arguably does?
I agree the phrasing here is maybe bad, but I think it's generally accepted that "X and Y" is a policy when "X" and "Y" are independently policies, so I would expect a set of policies which are jointly sufficient to be an appropriate answer.
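To spell out the logic (this formalization is mine, not from the original exchange; read $S(P)$ as "policy $P$ is sufficient"):

$$
\neg S(P_1),\quad \neg S(P_2),\quad S(P_1 \wedge P_2)\ \Longrightarrow\ P_1 \wedge P_2 \text{ is itself a single sufficient policy.}
$$

Since the conjunction of two policies is itself a policy, a set of individually insufficient but jointly sufficient policies still supplies a single sufficient policy as an answer.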
Economics, magician, sigh, ???, ditto, disgusting, easy to digest, moloch, found a bug, yup that’s what I said, passes the sniff test, missing link, not missing link in argument chain, and marginally changed my mind.