It appears Operation Warp Speed had to be funded by raiding other sources because Congress couldn’t be bothered to fund it. As MR points out, this is a scandal because it was necessary, rather than because it was done. It’s scary, because it implies that under a different administration Operation Warp Speed could easily have not happened at all.
There are gaps in the reporting on Operation Warp Speed funding, because apparently a bunch of the money that Congress did allocate for vaccines hasn't been spent yet. I don't understand why the White House spent other money but not that money.
Voting is like donating thousands of dollars to charity
If you care about social impact, why is voting important?
There are advantages to this style of writing even when the general term isn't contentious.
These kinds of concrete descriptions encourage readers to look at the world and see what's there, rather than engaging primarily with you and your concepts.
This can be good for people who know less about the topic, since looking at the world has fewer prerequisites. And it can be good for people who know more about the topic, since they can gain texture and depth by looking at new examples.
Though with non-contentious topics it's easier to add a general term at the end as a label to remember, or to tie the post into a larger conversation, without overshadowing the rest of the post.
Related: Insights from 'The Strategy of Conflict'
The full-blown process of in-depth contract negotiations, etc., is presumably beyond the scope of the current competitive forecasting arena.
One of the main things that I get out of the sports comparison is that it points to a different way of using (and thinking of) metrics. The obvious default, with forecasting, is to think of metrics as possible scoring rules, where the person with the highest score wins the prize (or appears first on the leaderboard). In that case, it's very important to pick a good metric, one which provides good incentives.

An alternative is to treat human judgment as primary, whether that means a committee using its judgment to pick which forecasters win prizes, or forecasters voting on an all-star team, or an employer trying to decide who to hire to do some forecasting for them, or just who has street cred in the forecasting community. On that view, metrics are a way to help those people be better informed about forecasters' abilities and performance, so that they'll make better judgments. In that case, the standards for what counts as a good metric are very different.

(There's also a third use case for metrics, where forecasters use metrics about their own performance to try to get better at forecasting.)
Sports also provide an example of what this looks like in action, what sorts of stats exist, how they're presented, who came up with them, what sort of work went into creating them, how they evaluate different stats and decide which ones to emphasize, etc. And it seems plausible that similar work could be done with forecasting, since much of that work was done by sports fans who are nerds rather than by the teams; forecasting has fewer fans but a higher nerd density. I did some brainstorming in another comment on some potential forecasting stats which draws a lot of inspiration from that; not sure how much of it is retreading familiar ground.
Here's a brainstorm of some possible forecasting metrics which might go in those tables (probably I'm reinventing some wheels here; I know more about existing metrics for sports than for forecasting):
The thing that I was more surprised by, looking at the scoring system, is that Metaculus is set up as a platform for maintaining a forecast rather than as a place where you make a forecast at a particular time. (If I'm understanding the scoring correctly.)
Metaculus scores your current forecast at each moment, from the moment you first enter a forecast on the question until the moment the question closes. Where "your current forecast" at each moment is the most recent number that you entered, and the only thing that happens when you enter an updated prediction is that for the rest of the moments (until you update it again) "your current forecast" will be a different number. Every moment gets equal weight regardless of whether you last entered a number just now or three weeks ago (except that the very last moment when the question closes gets extra weight).
So it's not like a literal betting market, where you're buying at the current market price at the moment you place your bet. If you don't keep updating your forecast, then the forecast you made back then keeps getting scored against the future consensus forecast.
So the scoring system rewards the activity of entering more questions, and also the activity of updating your forecasts on each of those questions again and again to keep them up-to-date.
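To make the mechanism concrete, here is a minimal sketch of a "score the standing forecast at every moment" rule, as described above. This is an illustration of the structure, not Metaculus's actual formula: the log scoring rule, the function name, and the `close_weight` parameter are all my own assumptions.

```python
from datetime import datetime
import math

def time_averaged_log_score(updates, close_time, outcome, close_weight=1.0):
    """Hypothetical sketch of scoring a maintained forecast.

    Each entered probability is "in force" from when it was entered until
    the next update (or until the question closes), and every moment gets
    equal weight, with some extra weight on the forecast standing at close.

    updates: list of (timestamp, probability) pairs, sorted by timestamp.
    outcome: True if the event happened, False otherwise.
    close_weight: extra weight (in days) given to the closing forecast --
        an assumed knob, not a real Metaculus parameter.
    """
    def log_score(p):
        # Log of the probability assigned to the realized outcome.
        return math.log(p if outcome else 1 - p)

    total, weight = 0.0, 0.0
    for i, (t, p) in enumerate(updates):
        # This forecast stays in force until the next update, or close.
        end = updates[i + 1][0] if i + 1 < len(updates) else close_time
        duration_days = (end - t).total_seconds() / 86400
        total += duration_days * log_score(p)
        weight += duration_days
    # Extra weight on whatever forecast is standing when the question closes.
    total += close_weight * log_score(updates[-1][1])
    weight += close_weight
    return total / weight
```

Under this sketch, a forecaster who enters 0.9 early and leaves it (when the event happens) beats one who parks at 0.5, and also beats one who only updates to 0.9 halfway through, which is the "rewards keeping forecasts up to date" property described above.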
There was a LessWrong post about this a while back.
I was also imagining the distinctions of
adaptation-executers vs. fitness-maximizers
selection + unconscious reinforcement vs. conscious strategizing
which are similar.
And neither of you voted for it!