The Rationalist Move Club
Imagine that the Bay Area rationalist community really did all want to move, but no individual was confident enough that the others wanted to move to invest energy in making plans. Nobody acts as if they want to move, and the move never happens.
Individuals are often willing to take some level of risk and make some sacrifice up-front for a collective goal with big payoffs. But not too much, and not forever. It's hard to gauge true levels of interest based on attendance at a few planning meetings.
Maybe one way to solve this is to ask for escalating credible commitments.
A trusted individual sets up a Rationalist Move Fund. Everybody who's open to the idea of moving puts $500 in a short-term escrow. This makes them part of the Rationalist Move Club.
If the Move Club grows to a certain number of members within a defined period of time (say 20 members by March 2020), then they're invited to planning meetings for a defined period of time, perhaps one year. This is the first checkpoint. If the Move Club has not grown to that size by then, the money is returned and the project is cancelled.
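The first checkpoint described above amounts to a simple decision rule. Here is a minimal sketch of it in Python, using the example numbers from the text (20 members by March 2020); the function name and parameters are illustrative, not a spec:

```python
from datetime import date

def checkpoint(members: int, today: date,
               threshold: int = 20,
               deadline: date = date(2020, 3, 1)) -> str:
    """First commitment checkpoint: proceed, refund, or keep recruiting.

    Thresholds mirror the example in the text; they are parameters,
    not recommendations.
    """
    if members >= threshold:
        return "proceed"      # invite members to one year of planning meetings
    if today >= deadline:
        return "refund"       # return escrowed deposits, cancel the project
    return "recruiting"       # deadline not yet reached; keep recruiting

# Example: 12 members with time still on the clock
print(checkpoint(12, date(2019, 12, 1)))  # → recruiting
```

Later checkpoints could be added as further calls with larger thresholds and deeper commitments.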
By the end of the pre-defined planning period, there could be one of three majority consensus states, determined by vote (approval voting, obviously!).
Obviously the timetables and monetary commitments could be modified. Other "commitment checkpoints" could be added in as well. I don't live in the Bay Area, but if those of you who do feel this framework could be helpful, please feel free to steal it.
The mind's a hack. So maybe it's just harnessing the same mechanism you use to suppress socially undesirable thoughts?
Rather than saying "forget about it," it's saying "ABSOLUTELY DO NOT SAY THIS THING."
Saying it shows the mind that there are no bad consequences and allows it to stop focusing on avoidance.
I'm less interested in comparing groups of forecasters with each other based on Brier scores than in getting a referendum on forecasting generally.
The forecasting industry has a collective interest in maintaining its reputation for predictive accuracy on general questions. I want to know whether forecasters are in fact accurate on general questions, or whether some of their apparent success rests on cunningly choosing which questions to address.
UBI could enhance production for some people, if it enables them to invest more in job skills or other forms of capital. The argument for every social program (the military and police, vaccination, education, infrastructure, scientific R&D, and so on) is that it produces more value than it costs.
This also applies to forms of welfare. For example, the ER visits averted by housing the homeless may save taxpayers more money than the housing itself costs.
The essential argument about UBI is not whether greater leisure time is worth the cost.
It's whether we can get more leisure time and more production at a net savings to the taxpayer with UBI.
For example, I am currently in school preparing for a degree in bioinformatics, but I am also working part-time in my old job as a piano teacher. Society could allow me to pump more STEM knowledge into my head if I didn't have to work 20 hours a week providing an after-school activity for bored rich children. It could also reduce the risk that I'll burn out before I make good on my investment.
Whether this sort of dynamic outweighs the productive loss from people choosing to live off UBI and not work at all is an empirical question.
I have an idea along these lines: adversarial question-asking.
I have a big concern about various forms of forecasting calibration.
Each forecasting team establishes its reputation by showing that its predictions, in aggregate, are well-calibrated and accurate on average.
However, questions are typically posed by a questioner who's part of the forecasting team. This creates an opportunity for them to ask a lot of softball questions that are easy for an informed forecaster to answer correctly, or at least to calibrate their confidence on.
By advertising their overall level of calibration and average accuracy, they can "dilute away" inaccuracies on hard problems that other people really care about. They gain a reputation for accuracy, yet somehow don't seem so accurate when we pose a truly high-stakes question to them.
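The dilution effect is easy to see with a toy Brier-score calculation. The numbers below are invented purely for illustration: a slate padded with softballs can look excellent in aggregate even when performance on the hard questions is worse than chance.

```python
def brier(forecasts):
    """Mean Brier score: average of (probability - outcome)^2.
    0 is perfect; guessing 50% on everything scores 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Invented example: 18 easy "softball" questions, forecast at 95% and correct...
easy = [(0.95, 1)] * 18
# ...plus 2 hard, high-stakes questions forecast at 80% but wrong.
hard = [(0.80, 0)] * 2

print(round(brier(easy + hard), 3))  # → 0.066 (aggregate looks excellent)
print(round(brier(hard), 3))         # → 0.64  (hard questions alone: worse than chance)
```

The aggregate score advertises near-perfect calibration while the questions people actually care about were badly missed.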
This problem could be at least partly solved by having an external, adversarial question-asker. Even better would be some sort of mechanical system for generating the questions that forecasters must answer.
For example, imagine that you had a way to extract every objectively answerable question posed by the New York Times in 2021.
Currently, their headline article is "Duty or Party? For Republicans, a Test of Whether to Enable Trump"
Though it does not state this in so many words, one of the primary questions it raises is whether the Michigan board that certifies vote results will certify Biden's victory ahead of the Electoral College vote on Dec. 14.
Imagine that one team's job was to extract such questions from a newspaper. Then they randomly selected a certain number of them each day, and posed them to a team of forecasters.
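The daily random draw could be sketched as follows. The question pool is hypothetical, and the parameters are illustrative; the point is that because forecasters and questioners can't predict which questions will be drawn, the slate can't be quietly stocked with softballs.

```python
import random

def daily_draw(question_pool, k=5, seed=None):
    """Randomly select k questions from the day's extracted pool.

    A fixed seed (e.g. derived from the date) would make the draw
    auditable after the fact.
    """
    rng = random.Random(seed)
    return rng.sample(question_pool, min(k, len(question_pool)))

# Hypothetical pool extracted from one day's paper
pool = [
    "Will the Michigan board certify Biden's victory before Dec. 14?",
    "Will the Senate pass a stimulus bill before Jan. 1?",
    "Will the FDA authorize a COVID vaccine before Dec. 31?",
]
print(daily_draw(pool, k=2, seed=0))
```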
In this way, the work of superforecasters would be chained to the concerns of the public, rather than spent on questions that may or may not be "hackable."
To me, this is a critically important and, to my knowledge, totally unexplored question that I would very much like to see treated.
My impression of where this would lead is something like this:
While enormous amounts of work have been done globally to develop and employ epistemic aids, relatively little study has been done to explore which epistemic interventions are most useful for specific problems.
We can envision an analog to the medical system. Instead of diagnosing physical sickness, it diagnoses epistemic illness and prescribes solutions on the basis of evidence.
We can also envision two wings of this hypothetical system. One is the "public epistemic health" wing, which studies mass interventions. Another is patient-centered epistemic medicine, which focuses on the problems of individual people or teams.
"Effective epistemics" is the attempt to move toward mechanistic theories of epistemology that are equivalent in explanatory power to the germ theory of disease. Whether such mechanistic theories can be found remains to be seen. But there was also a time during which medical research was forced to proceed without a germ theory of disease. We'd never have gotten medicine to the point where it is today if early scientists had said "we don't know what causes disease, so what's the point in studying it?"
So if we have a reasonable expectation that formal study would uncover mechanisms with equivalent explanatory power, pursuing it would be a good use of resources, considering the extreme importance of correct decision-making for every problem humanity confronts.
Is this a good way to look at what you're trying to do?
You raise two issues here. One is about vitamin D, and the other is about trust.
Regarding vitamin D, there is an optimal dose for general population health that lies somewhere in between "toxically deficient" and "toxically high." The range from the high hundreds to around 10,000 IU appears to be well within that safe zone. The open question is not whether 10,000 IU is potentially toxic (it clearly is not) but whether, among doses in the safe range, a lower dose can be taken to achieve the same health benefits.
One thing to understand is that in the outdoor lifestyle we evolved for, we'd be getting 80% of our vitamin D from sunlight and 20% through food. In our modern indoor lifestyles, we are starving ourselves for vitamin D.
"Supplementing a bit lightly is safer than over-supplementing" is only a meaningful statement if you can define the dose that constitutes "a bit lightly" and the dose that counts as "over-supplementing." Beyond those points, we'd be into "dangerously low" and "dangerously high" territory.
To assume that 600 IU is "a bit lightly" rather than "dangerously low" is a perfect example of begging the question.
On the issue of trust, you could just as easily ask: if you don't trust these papers, why do you trust your doctor or the government?
The key issue at hand is that in the absence of expert consensus, non-experts have to come up with their own way of deciding who to trust.
In my opinion, there are three key reasons to prefer a study of the evidence over the RDA in this particular case.
However, I have started an email conversation with the author of "The Big Vitamin D Mistake," and have emailed the authors of the original paper identifying the statistical error it cites, to try to understand the research climate further.
I want to know why it is difficult to achieve a scientific consensus on these questions. Everybody has access to the same evidence, and reasonable people ought to be able to reach a consensus view on what it means. Instead, the author of the paper described to me a polarized climate in that field. I am checking with other researchers he cites to see whether his characterization is accurate.
An end run around slow government
The US recommended dietary allowance (RDA) of vitamin D is about 600 IU per day. It was established in 2011 and hasn't been updated since. US RDAs are set by the Food and Nutrition Board of the Institute of Medicine at the National Academy of Sciences.
According to a 2017 paper, "The Big Vitamin D Mistake," the right level is actually around 8,000 IUs/day, and the erroneously low level is due to a statistical mistake. I haven't been able to find out yet whether there is any transparency about when the RDA will be reconsidered.
But three years since that paper is a long time to wait. Especially when vitamin D deficiency is linked to COVID mortality. And if we want to be good progressives, we can also note that vitamin D deficiency is linked to race, and may be driving the higher COVID death rates in Black communities.
We could call the slowness to update the RDA an example of systemic racism!
What do we do when a regulatory board isn't doing its job? Well, we can disseminate the truth over the internet.
But then you wind up with an asymmetric information problem. Reading the health claims of many people promising "the truth," how do you decide whom to believe?
Probably you have the most sway in tight-knit communities, such as your family, your immediate circle of friends, and online forums like this one.
What if you wanted to pressure the FNB to reconsider the RDA sooner rather than later?
Giving them some bad press would probably be one way to do it. That's a symmetric weapon, but this is a situation where nobody actually thinks an incorrect vitamin D RDA is a good thing. Except maybe literal racists who are also extremely well informed about health supplements?
In a situation where we're not dealing with a partisan divide, but only an issue of bureaucratic inefficiency, applying pressure tactics seems like a good strategy to me.
How do you start such a pressure campaign? Probably you reach out to leaders of the black community, as well as doctors and dietary researchers, and try to get them interested in this issue. Ask them what's being done, and see if there's some kind of work going on behind the scenes. Are most of them aware of this issue?
Prior to that, it's probably important to establish both your credibility and your communication skills. Bring together the studies showing that the issue is a) real and b) relevant in a format that's polished and easy to digest.
And prior to that, you probably want to gauge the difficulty from somebody with some knowhow, and get their blessing. Blessings are important. In my case, my dad spent his career in public health, and I'm going to start there.
Thanks for that information. I'll pass it along.
This post motivated me to order vitamin D supplements and write a thoughtful email to my family advocating that they do the same. Note: the Mayo Clinic advocates around 600 IU/day for young adults and 800 IU/day for adults over 80. Too high a dose can apparently counteract the benefits. Most vitamin D supplements on Amazon are in the 5,000-10,000 IU range; the ones I ordered are 400 IU/day.