Comments

I think binary examples are deceptive in the "reversed stupidity is not intelligence" sense. Thinking through things from first principles is most important in areas that are new or rapidly changing, where there are fewer reference classes and experts to talk to. It's also helpful in areas where the consensus view is optimized for someone very unlike you.

If some pre-modern hominids ate diets high in animal foods, and some populations of humans did, and that continued through history, I wouldn't call that relatively recent. I'm not the one making the claim that there is overwhelming evidence that saturated fats can't possibly be bad for you; I'm making a much more restricted claim.

I would have guessed high T is associated with lower neuroticism, but studies found weak or no effects afaict.

AFAIK, analyses of paleolithic diets suggest there was a range of patterns depending on availability, and some groups were indeed pretty high on animal protein. We don't have differential analysis of the resulting health outcomes, but I just wanted to point out that the trope of 'trad diets were low protein' is not super well supported. 'Trad diets were mostly lower fat' does have some support, as raising very fatty, sedentary animals is more recent, and accelerated a bunch in the last hundred years. Although the connection between higher-fat diets and negative health outcomes is then another inferential step that hasn't been strongly supported and is, AFAIK, somewhat genetically mediated (some people/groups do much better on high-fat diets than others in terms of blood lipid profiles).

These can often be operationalized as: 'How much of the variance in the output do you predict is controlled by your proposed input?'
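
To make that concrete, here's a minimal sketch (my own toy example, not from any particular study): fit a simple model of the output on the proposed input and report the fraction of output variance it accounts for. The linear fit and the variable names are assumptions for illustration.

```python
import numpy as np

def variance_explained(proposed_input: np.ndarray, output: np.ndarray) -> float:
    """Fraction of variance in `output` attributable to `proposed_input`
    under a simple linear fit (a stand-in for whatever model you trust)."""
    # Fit output ~ a * input + b by least squares.
    a, b = np.polyfit(proposed_input, output, deg=1)
    predicted = a * proposed_input + b
    residual_var = np.var(output - predicted)
    total_var = np.var(output)
    return 1.0 - residual_var / total_var  # R^2: share of variance controlled

# Toy data: the input explains some, but not all, of the output.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.6 * x + rng.normal(scale=1.0, size=500)
print(f"Estimated variance explained: {variance_explained(x, y):.2f}")
```

The point isn't the exact number; it's forcing the prediction onto a scale where different proposed inputs can be compared.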

Our sensible Chesterton's fences

His biased priors

Their inflexible ideological commitments

In addition to epistemic priors, there are also ontological priors and teleological priors to cross-compare, each with their own problems. On top of which, people are even worse at comparing non-epistemic priors than they are at comparing epistemic priors. As such, attempts to point out that these are an issue will be seen as a battle tactic (an effort to move the argument from a domain in which, from their perspective, they have the upper hand to unfamiliar territory in which you'll have an advantage) and will be resisted.

You may share the experience I've had that most attempts at discussion don't go anywhere. We mostly repeat our cached knowledge at each other. If two people who are earnestly trying to grok each other's positions drill down for long enough, they'll get to a bit of ontology comparison, where it turns out they have different intuitions because they are using different conceptual metaphors for different moving parts of their models. But this takes so long that, by the time it happens, only a few bits of information get exchanged before one or both parties are too tired to continue. The workaround seems to be that if two people have a working relationship then, over time, they can accrue enough bits to get to real cruxes, and this can occasionally suggest novel research directions.

My main theory of change is therefore to find potentially productive pairings of people faster, and to create the conditions under which they can speedrun getting to useful cruxes. Unfortunately, Eli Tyre tried this theory of change and reported that it mostly didn't work, after a bunch of good-faith efforts from a bunch of people. I'm not sure what's next. I personally believe more progress could be made if people were trained in consciousness of abstracting (per Korzybski), but this is a sufficiently difficult ask as to fall afoul of people's priors on how much effort to spend on novel skills with unclear payoffs. And a theory of change that has a curiosity stopper that halts on "other people should do this thing that they clearly aren't going to do" is also not very useful.

Unclear. High-fat and high-carb diets have been directly compared, and neither was found to be a smoking gun.

Correlation with increased consumption of hidden trans fats looks like a promising angle for making sense of some of the conflicting data.

I don't have a cite handy, but the proportion of free fatty acids was found to increase strongly with repeated heating of vegetable oils in cooking. There's a story here where PUFA is more fragile, and incorporation of damaged fats into bodily tissue is not good. In particular, fat cells made up of damaged fats might mess with normal lipid-balance processes. This is one possible story for why processed meats are so bad: we'd be doubling up on this process, feeding animals such that they have lots of damaged fats in their tissues (e.g., we feed pigs expired candy because it is cheap and high BMI is desirable), killing and processing them such that the fat is even more damaged, and then eating it.

Overall, I'm bullish on the story that processing is bad, potentially through multiple mechanisms.

I'm bearish on PUFA being bad in general; if it were, I don't think we'd see some of the strongest effects in nutrition science on reduced mortality from nuts and fish. I personally consume both raw, on the 'processing is bad' story.

CRPGs with a lot of open-world dynamics might work, where the goal is for the person to identify the most important experiments to run in a limited time window in order to min-max certain stats.
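
As a purely hypothetical sketch of that mechanic (the experiment names, costs, and values are all made up): give the player a time budget, let each candidate experiment cost time and reveal some amount about which build pays off, and score them on picking the most informative experiments first.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    time_cost: int       # in-game hours the experiment consumes
    info_value: float    # how much it narrows uncertainty about the best build

def plan_experiments(candidates: list[Experiment], time_budget: int) -> list[Experiment]:
    """Greedy plan: run experiments with the best info-per-hour ratio
    until the time window runs out. A stand-in for the player's job."""
    chosen, remaining = [], time_budget
    for exp in sorted(candidates, key=lambda e: e.info_value / e.time_cost, reverse=True):
        if exp.time_cost <= remaining:
            chosen.append(exp)
            remaining -= exp.time_cost
    return chosen

# Hypothetical experiments such a game might surface.
candidates = [
    Experiment("test fire resistance scaling", time_cost=4, info_value=8.0),
    Experiment("compare two weapon trees", time_cost=6, info_value=7.0),
    Experiment("grind a low-value side dungeon", time_cost=10, info_value=2.0),
]
print([e.name for e in plan_experiments(candidates, time_budget=10)])
```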

Tracing out the chain of uncertainty. Let's say that I'm thinking about my business and come up with an idea. I'm uncertain how much to prioritize the idea vs. the other swirling thoughts. If I thought it might cause my business to 2x revenue, I'd obviously drop a lot of other things and pursue it. OK, how likely is that based on prior ideas? What reference class is the idea in? Under what world model is the business's revenue particularly sensitive to the outputs of this idea? What's the most uncertain part of that model? How would I quickly test it? Who would already know the answer? Etc.
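
A toy version of that arithmetic, with every number invented purely for illustration: a reference-class base rate for "ideas like this 2x revenue," a rough value if it does, and the cost of the quickest test of the most uncertain part of the model.

```python
# All numbers below are invented for illustration, not claims from the comment above.

base_rate = 0.05            # reference class: ~1 in 20 ideas like this actually 2x revenue
value_if_true = 500_000     # rough value of doubling revenue, in dollars
quick_test_hours = 8        # cheapest experiment that probes the most uncertain part
update_strength = 0.5       # fraction of the remaining uncertainty the test resolves

expected_value = base_rate * value_if_true
value_of_test = expected_value * update_strength   # crude value-of-information proxy
value_per_hour = value_of_test / quick_test_hours

print(f"Expected value of the idea:    ${expected_value:,.0f}")
print(f"Rough value of the quick test: ${value_of_test:,.0f}")
print(f"Per hour of testing:           ${value_per_hour:,.0f}/hr")
```

The output only matters relative to the same calculation for the other swirling thoughts; the chain of questions is what generates the inputs.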
