This critique seemed very persuasive to me. Thank you for putting it together.
The timeline forecast is a blended distribution of the superexponential (40–45%), exponential (45–50%), and subexponential (10%) models. I would expect a pretty consistent rank-ordering, where almost all of the mass of the superexponential is earlier than almost all of the mass of the exponential. Similarly, almost all of the mass of the subexponential is going to be later than either the exponential or the superexponential.
This is a simplification, but running with it for a moment: because the superexponential block contains less than 50% of the total probability mass, the overall median will come from the exponential block, likely in the earliest 10–20% of exponential outcomes (the percentile needed to lift cumulative probability to 50% once the superexponential weight is counted).
One weird quirk of this is that the more uncertainty they build into the parameters for their exponential (i.e. the wider the lognormals for the prior distributions), the earlier their median prediction will be: the median will always end up being one of the fastest exponentials, and building in more uncertainty just makes the fast tail faster.
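To sanity-check this, here is a quick Monte Carlo sketch. The lognormal parameters for each block (medians of roughly 3, 10, and 30 years, and the sigmas) are my own made-up stand-ins, not the model's actual priors; only the weights and the rank-ordering are taken from the argument above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in distributions (assumed parameters, not the model's):
# years-to-milestone for each block, with the rank-ordering described above.
WEIGHTS = {"super": 0.425, "exp": 0.475, "sub": 0.10}
N = 1_000_000

def mixture(exp_sigma):
    """Sample the blended forecast; return (all samples, exponential block)."""
    sup = rng.lognormal(np.log(3), 0.4, int(WEIGHTS["super"] * N))
    exp_ = rng.lognormal(np.log(10), exp_sigma, int(WEIGHTS["exp"] * N))
    sub = rng.lognormal(np.log(30), 0.4, int(WEIGHTS["sub"] * N))
    return np.concatenate([sup, exp_, sub]), exp_

samples, exp_block = mixture(exp_sigma=0.5)
med = np.median(samples)
# Where does the overall median sit inside the exponential block?
pct = (exp_block < med).mean()
print(f"mixture median: {med:.1f} yr ({pct:.0%} percentile of the exponential block)")

# Widening the exponential's uncertainty pulls the overall median EARLIER:
# the median lands in the exponential's fast tail, and a wider lognormal
# has a faster fast tail (its median stays put, but its low quantiles drop).
for sigma in (0.3, 0.6, 0.9):
    print(f"sigma={sigma}: mixture median = {np.median(mixture(sigma)[0]):.1f} yr")
```

With these toy numbers the overall median does land in roughly the 15–20% percentile of the exponential block, and it moves earlier as the exponential's lognormal widens, matching the quirk described above.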
Titotal - is that a good way to think about it?