As part of my job, I recently created an econometric model. My boss wants someone to look over the math before it's submitted internally throughout the company. We have a modest amount of money set aside for someone to audit the process.

The model is an ARMA(2,1) with seasonality, a trend, and a dummy variable. There's no heteroscedasticity or serial correlation, but the Ramsey RESET test suggests a different specification might fit better.
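For context, here's a rough sketch of the kind of specification I mean, written in Python/statsmodels rather than EViews. The file name and column names ("series.csv", "y", "event_dummy") and the quarterly seasonality are made up for illustration; they are not the actual data.

```python
# Illustrative only: an ARMA(2,1) with a linear trend, quarterly seasonal
# dummies, and a single intervention dummy. File and column names are
# hypothetical stand-ins for the real EViews workfile.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.stats.diagnostic import acorr_ljungbox

df = pd.read_csv("series.csv", parse_dates=["date"], index_col="date")

# Deterministic regressors: seasonal dummies plus the intervention dummy.
exog = pd.get_dummies(df.index.quarter, prefix="q", drop_first=True).astype(float)
exog.index = df.index
exog["event_dummy"] = df["event_dummy"].astype(float)

# ARMA(2,1) on the level of y, with constant and linear trend ("ct").
res = SARIMAX(df["y"], exog=exog, order=(2, 0, 1), trend="ct").fit(disp=False)

print(res.summary())                        # coefficients, AIC, BIC (SIC)
print(acorr_ljungbox(res.resid, lags=[8]))  # residual serial correlation check
```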

I currently have the data in an EViews file, so you'd need to do zero data entry.

There's a small chance this will be used in court, but none of the liability would be transferred to you. The emphasis should be on parsimony. You'd have to sign a confidentiality agreement.

If you're qualified to review this/suggest a marginally better model, then this would be an easy way for you to make bank in a couple hours' time. If it goes well, there might be more work like this in the future.

Let me know if you're interested.


If you're qualified to review this/suggest a marginally better model

How will you know if you have a better model?

How much data do you have?

There are a lot of metrics for how good a model is (e.g. adjusted R-squared, AIC, SIC).

Edit to add: Backcasting would be a good measure.
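As a minimal, self-contained sketch of what that comparison could look like in Python/statsmodels, with synthetic data standing in for the real series (the candidate orders and the 8-point holdout are arbitrary choices for illustration):

```python
# Compare two candidate specifications on AIC/BIC (EViews labels BIC the
# Schwarz/SIC criterion) plus a simple holdout forecast check.
# The data here are synthetic -- purely illustrative, not the real series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=76)) + 0.1 * np.arange(76)  # 76 synthetic points

train, test = y[:-8], y[-8:]  # hold out the last 8 observations

for order in [(2, 0, 1), (1, 0, 0)]:          # candidate ARMA orders
    res = ARIMA(train, order=order, trend="t").fit()
    fc = res.forecast(steps=len(test))
    rmse = np.sqrt(np.mean((fc - test) ** 2))
    print(order, "AIC:", round(res.aic, 1), "BIC:", round(res.bic, 1),
          "holdout RMSE:", round(rmse, 2))
```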

76 data points, after adjustments.

Hmmm. Personally, I think the AIC/SIC stuff is a hack.

I don't think statistics works in the N=76 regime. In Bayesian terms, the data is sufficient to justify only a minor update, so whatever conclusions you draw will be dominated by your choice of prior. It's interesting that it might be used in court, because it means that the lawyers will effectively be arguing about the justifications of statistical inference - or, if they are both Bayesians, how to choose a prior.

I can't go into specifics, but the alternatives are far worse.

I don't think statistics works in the N=76 regime. In Bayesian terms, the data is sufficient to justify only a minor update, so whatever conclusions you draw will be dominated by your choice of prior.

I don't understand why you call that statistics "not working." Do you mean frequentist statistics?

Also, that's just flat out wrong. Often in a Bayesian analysis with a sample size of n=75 or so (or smaller!), you'll draw the same conclusions for any reasonable choice of prior, including diffuse priors which primarily reflect the data. Choosing a different reasonable model still might result in different conclusions, so if you're using the word 'prior' to include the data model, then I don't disagree.
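As a toy illustration of the kind of sensitivity check I mean, here is a conjugate normal-mean model with n = 76 and a known observation variance of 1, evaluated under three different priors. The numbers are made up; the point is only to show how little the posterior moves across priors at this sample size.

```python
# Toy prior-sensitivity check: posterior for a normal mean with known
# observation variance 1, n = 76 synthetic observations, three priors.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=76)   # hypothetical data

n, xbar = len(x), x.mean()
for mu0, tau0 in [(0.0, 10.0), (0.0, 1.0), (5.0, 2.0)]:  # prior mean, prior sd
    # Conjugate normal-normal update with known observation variance 1.
    post_var = 1.0 / (1.0 / tau0**2 + n)
    post_mean = post_var * (mu0 / tau0**2 + n * xbar)
    print(f"prior N({mu0}, {tau0}^2): posterior mean {post_mean:.3f}, "
          f"sd {post_var**0.5:.3f}")
```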

Choosing a different reasonable model still might result in different conclusions, so if you're using the word 'prior' to include the data model, then I don't disagree.

Right, so the word "reasonable" is prominent here, and implies some kind of subjective evaluation. Different people may very well have different notions of what constitutes a reasonable model. If we were arguing different sides of the case in court, I could just claim your model was unreasonable and determined according to your subjective preferences.

I'm not sure what your point is.

Upvoted, because the economy in general benefits from this kind of wheel-greasing. And rationalism should indeed pay the rent.