# All of Matt_Simpson's Comments + Replies

Since utility functions are only unique up to affine transformation, I don't know what to make of this comment. Do you have some sort of canonical representation in mind or something?

-2[anonymous]9y
In the context of this thread, you can consider U(status quo) = 0 and U(status quo, but with one more dollar in my wallet) = 1. (OK, that makes +10000 an unreasonable estimate of the upper bound; pretend I said +1e9 instead.)

See also Kreps, Notes on the Theory of Choice. Note that one of these two restrictions is required specifically to prevent infinite expected utility. So if a lottery spits out infinite expected utility, you broke something in the VNM axioms.

For anyone who's interested, a quick and dirty explanation is that the preference relation is primitive, and we're trying to come up with an index (a utility function) that reproduces the preference relation. In the case of certainty, we want a function U:O->R where O is the outcome space and R is the real...
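
To make the "index" idea concrete, here's a toy Python sketch (the outcomes and the ranking are invented for illustration): under certainty, any order-preserving numerical assignment works as a utility function, since all we need is for the index to reproduce the preference relation.

```python
# Toy illustration: a utility function is just an index that reproduces
# a (complete, transitive) preference relation over outcomes.
outcomes = ["vacation", "book", "nothing"]
ranking = {"vacation": 0, "book": 1, "nothing": 2}  # lower = more preferred

def utility(o):
    # Any order-preserving assignment works; utility is ordinal here.
    return -ranking[o]

def prefers(a, b):
    # The primitive (strict) preference relation.
    return ranking[a] < ranking[b]

# The index reproduces the preference relation exactly:
for a in outcomes:
    for b in outcomes:
        assert prefers(a, b) == (utility(a) > utility(b))
```

This is also why utility is only unique up to monotone (and, once lotteries enter, affine) transformation: any other order-preserving assignment would pass the same check.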

I do this too, though in smaller bites. fifths? fourths? I'm not sure, actually, but it seems to work.

Funny, I read your post and my initial reaction was that this evidence cuts against PUA. (Now I'm not sure whether it supports PUA or not, but I lean towards support).

PUA would predict that this phrase

...while I devote myself to worshiping the ground she walks on.

is unattractive.

[anonymous]9y13

I dunno, in the context it sounds clearly tongue-in-cheek -- though you usually can't countersignal to people who don't know you (see also).

Well based on your track record there, it seems like a prudent move to avoid making bets with you ;)

(Though I agree with you and should be shaming them rather than defending them.)

If the basilisk is correct* it seems any indirect approach is doomed, but I don't see how it prevents a direct approach. But that has its own set of probably-insurmountable problems, I'd wager.

* I remain highly uncertain about that, but it's not something I can claim to have a good grasp on or to have thought a lot about.

0[anonymous]9y
My position on the basilisk: if someone comes to me worrying about it, I can probably convince them not to worry (I've done that several times), but if someone comes up with an AI idea that seems to suffer from basilisks, I hope that AI doesn't get built. Unfortunately we don't know very much. IMO open discussion would help.

I think I understand X, and it seems like a legitimate problem, but the comment I think you're referring to here seems to contain (nearly) all of X and not just half of it. So I'm confused and think I don't completely understand X.

Edit: I think I found the missing part of X. Ouch.

0cousin_it9y
Yeah. The idea came when I was lying in a hammock half asleep after dinner, it really woke me up :-) Now I wonder what approach could overcome such problems, even in principle.
0[anonymous]9y
You probably understand it correctly. I say "half" because Paul didn't consider the old version serious, because we hadn't made the connection with basilisks.

I also have this problem and would like to know how to fix it / if dual n-back might help.

Politics is the mind killer for a variety of reasons besides ridiculously strong priors that are never swayed by evidence. Strong priors aren't even the whole of the phenomenon to be explained (though they're a big part), let alone a fundamental explanation.

Also, I really like Noah's post (and was about to post it in the current open thread before I found your post). Not only did Noah attach a word to a pretty commonly occurring phenomenon, the word seems to have a great set of connotations attached to it, given some goals about improving discourse.

What do you mean by 'content' here? The basic narrative each model tells about the economy?

I think I agree with you. The big difference between the models I learned in undergrad and the models I learned in grad school was that in undergrad, everything was static. In grad school, the models were dynamic - i.e. a sequence of equilibria over time instead of just one.

0framsey9y
Right. Plus most undergrad models have an analog in grad macro, e.g. the AD-AS model and the New Keynesian model, or the quantity theory of money and a basic cash-in-advance model. True in general. Some intermediate macro courses use a two-period framework to explore basic dynamics. Williamson's textbook does this.

FWIW I'm a grad student in econ, and in my experience the undergrad and graduate macro are completely different. I recall Greg Mankiw sharing a similar sentiment on his blog at some point, but can't be bothered to look it up.

0framsey9y
I would say that undergrad and grad econ are very different methodologically (at least at most schools), but a lot of the content is the same. Stephen Williamson's intermediate macro textbook tries to bring in a lot of grad-level models/concepts, albeit in a "toy" form.

That was like, half the point of my post. I obviously suck at explaining myself.

I think the combination of me skimming and thinking in terms of the underlying preference relation instead of intertheoretic weights caused me to miss it, but yeah, it's clear you already said that.

Thanks for throwing your brain into the pile.

No problem :) Here are some more thoughts:

It seems correct to allow the probability distribution over ethical theories to depend on the outcome - there are facts about the world which would change my probability distribution over e...

This strikes me as the wrong approach. I think that you probably need to go down to the level of meta-preferences and apply VNM-type reasoning to this structure rather than working with the higher-level construct of utility functions. What do I mean by that? Well, let M denote the model space and O denote the outcome space. What I'm talking about is a preference relation > on the space MxO. If we simply assume such a > is given (satisfying the constraint that (m1, o1) > (m1, o2) iff o1 >_m1 o2, where >_m1 is model m1's preference relation), ...
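
A minimal sketch of that consistency constraint, with made-up models, outcomes, and numbers (none of this comes from the original discussion): a joint utility on MxO induces a joint preference relation, and we can check that holding the model fixed recovers each model's own preferences.

```python
# Hypothetical sketch: the joint preference relation on model-outcome
# pairs must agree with each model's own preference relation when the
# model is held fixed.
models = ["m1", "m2"]

# Each model's ordinal ranking of outcomes (invented; lower = better).
model_rank = {
    "m1": {"o1": 0, "o2": 1, "o3": 2},
    "m2": {"o3": 0, "o1": 1, "o2": 2},
}

# An invented joint utility on M x O, inducing the joint relation >.
joint_u = {
    ("m1", "o1"): 5.0, ("m1", "o2"): 3.0, ("m1", "o3"): 1.0,
    ("m2", "o3"): 4.0, ("m2", "o1"): 2.0, ("m2", "o2"): 0.0,
}

def consistent(joint_u, model_rank):
    # Check: (m, oa) > (m, ob) iff oa >_m ob, for every model m.
    for m, rank in model_rank.items():
        for oa in rank:
            for ob in rank:
                joint = joint_u[(m, oa)] > joint_u[(m, ob)]
                own = rank[oa] < rank[ob]
                if joint != own:
                    return False
    return True
```

The interesting work, of course, is in the cross-model comparisons like (m1, o1) vs (m2, o2), which the constraint deliberately leaves free.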

3[anonymous]9y
That was like, half the point of my post. I obviously suck at explaining myself. And yes, I agree now that starting with utility functions is the wrong way. We should actually just build something from the ground up aimed squarely at indirect normativity. Even though my post is an argument, the point really is to get us all thinking about this and see where we can go with it. Thanks for throwing your brain into the pile.

One interesting fact from Chapter 4 (on weather predictions) that seems worth mentioning: Weather forecasters are also very good at manually and intuitively (i.e. without some rigorous mathematical method) fixing the predictions of their models. E.g. they might know that model A always predicts rain a hundred miles or so too far west from the Rocky Mountains. So to fix this, they take the computer output and manually redraw the lines (demarking level sets of precipitation) about a hundred miles east, and this significantly improves their forecasts.

Also: th...

I just started a research project with my adviser developing new posterior sampling algorithms for dynamic linear models (linear gaussian discrete time state space models). Right now I'm in the process of writing up the results of some simulations testing a couple known algorithms, and am about to start some simulations testing some AFAIK unknown algorithms. There's a couple interesting divergent threads coming off this project, but I haven't really gotten into those yet.

Off the cuff: it's probably a random walk.

Edit: It's now pretty clear to me that's false, but plotting the ergodic means of several "chains" seems like a good way to figure it out.

Edit 2: In retrospect, I should have predicted that. If anyone is interested, I can post some R code so you can see what happens.
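
A quick sketch of the check I mean, in Python rather than R (the chain here is just an invented toy): compare the ergodic (running) means of an iid sequence against those of its partial sums, i.e. a random walk.

```python
import random

random.seed(1)

def ergodic_means(chain):
    # Running averages: mean of the first k draws, for each k.
    means, total = [], 0.0
    for k, x in enumerate(chain, 1):
        total += x
        means.append(total / k)
    return means

# A random walk is the partial sums of iid steps. Its ergodic means
# wander rather than settling down, unlike those of a stationary chain.
steps = [random.gauss(0, 1) for _ in range(10_000)]
walk, s = [], 0.0
for step in steps:
    s += step
    walk.append(s)

iid_means = ergodic_means(steps)  # settles toward 0
walk_means = ergodic_means(walk)  # typically drifts, no settling
```

Plotting `iid_means` and `walk_means` for several independent "chains" makes the contrast obvious: the former all converge to the same value, the latter fan out.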

The book Wicked is based on The Wizard of Oz and has some related themes, IIRC. (I really didn't like the musical based on the book, though. But I might just dislike musicals in general; FWIW I also didn't like the only other musical I've seen in person - Rent.)

Experimental economists use Mechanical Turk sometimes. At least, we were encouraged to use it in the experimental economics class I just took.

In response to:

I believe the hard-line Bayesian response to that would be that model checking should itself be a Bayesian process.

and

"But," the soft Bayesians might say, "how do you expand that 'something else' into new models by Bayesian means? You would need a universal prior, a prior whose support includes every possible hypothesis. Where do you get one of those? Solomonoff? Ha! And if what you actually do when your model doesn't fit looks the same as what we do, why pretend it's Bayesian inference?"

I think a hard line needs...

Even then, the reason this happens might be plausibly explained by the changing information of the bookstore rather than actual intransitivity.

It's a Schelling point, er, joke isn't the right word, but it's funny because the day was supposed to be a Schelling point. And you forgot about it.

This is simultaneously hilarious and weak evidence that the holiday isn't working as intended (though I think repeating the holiday every year will do the trick).

2Luke_A_Somers10y
It has a lot more to do with how crazy my schedule has been lately. And what the heck people, upvoting that to +4? HEAD SCRATCH

I notice that I'm confused: the maximum score on the Quantitative section is 800 (at that time), and Ph.D. econ programs won't even consider you if you're under a 780. The quantitative exam is actually really easy for math types. When you sign up for the GRE, you get a free CD with 2 practice exams. When I took it, I took the first practice exam without studying at all and got a 760 or so on the quantitative section (within 10 pts). After studying I got an 800 on the second practice exam, and on the actual exam I got a 790. The questions were basic algebra...

0gwern10y
I was actually too lazy to study for my GRE, so I think I got like in the 600s on the math section (it had been a long time since I had studied any of that stuff); I realized while taking it that this was a stupid mistake and I was perfectly capable of answering everything, but the GRE cost too much for me to want to take it a second time. Oh well. Statistics does not seem to be broken out in the latest GRE scores I found: https://www.ets.org/s/gre/pdf/gre_guide.pdf I think statistics is almost always part of the math department. My guess is that there are a lot of grad schools (consider law schools, the standard advice is to not bother unless you can make the top 10, yet there are scores if not hundreds of active law schools), and few actually intend to do a PhD.

Not surprising, given my experience. Most religion majors I've met were relatively smart and often made fun of the more fundamentalist/evangelical types who typically were turned off by their religion classes. Religion majors seemed like philosophy-lite majors (which is consistent with the rankings).

Edit: Also, relative to Religion, econ has a bunch of poor English speakers who pull the other two categories down. (Note: the "analytical" section is/was actually a couple of very short essays.)

That seems to explain why Econ majors get a premium, but that doesn't seem to explain why econ majors don't rank higher, or am I missing something?

I didn't look at the data. I was commenting on your assessment of what they did, which showed that you didn't know how the F test works. Your post made it seem as if all they did was run an F test that compared the average response of the control and treatment groups and found no difference.

Ok, yeah, translating what the researchers did into a Bayesian framework isn't quite right either. Phil should have translated what they did into a frequentist framework - i.e. he still straw manned them. See my comment here.

Both the t-test and the F-test work by assuming that every subject has the same response function to the intervention:

response = effect + normally distributed error

where the effect is the same for every subject.

The F test / t test doesn't quite say that. It makes statements about population averages. More specifically, if you're comparing the mean of two groups, the t or F test says whether the average response of one group is the same as the other group. Heterogeneity just gets captured by the error term. In fact, econometricians define the error ...
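
A made-up simulation to illustrate the point (the t statistic is hand-rolled from the standard pooled formula; all numbers are invented): every subject has a large individual effect, but because those effects average to roughly zero, the t statistic is small - the test only sees the difference in group means, and the heterogeneity is absorbed into the error term.

```python
import math
import random

random.seed(0)

def two_sample_t(x, y):
    # Pooled two-sample t statistic: a statement about the difference
    # in population means, nothing more.
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

# Invented data: treatment effects vary wildly across subjects
# (+2 or -2 for everyone), but the average effect is about zero.
control = [random.gauss(0, 1) for _ in range(500)]
effects = [random.choice([-2.0, 2.0]) for _ in range(500)]
treated = [random.gauss(0, 1) + e for e in effects]

t = two_sample_t(treated, control)
# |t| is small despite every subject responding strongly: the
# subject-level heterogeneity just inflates the error variance.
```

Note that the heterogeneity does leave a fingerprint - the treated group's variance is much larger than the control group's - but a test on means is simply not looking for it.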

-4PhilGoetz10y
Why do you say that? Did you look at the data? They found F values of 0.77, 2.161, and 1.103. That means they found different behavior in the two groups. But those F-values were lower than the thresholds they had computed assuming homogeneity. They therefore said "We have rejected the hypothesis", and claimed that the evidence, which interpreted in a Bayesian framework might support that hypothesis, refuted it.

It's just not an argument against Phil that someone might take some of the data in the paper and do a Bayesian analysis that the authors did not do.

That's not what I'm saying. I'm saying that what the authors did do IS evidence against the hypothesis in question. Evidence against a homogeneous response is evidence against any response (it makes some response less likely).

-2PhilGoetz10y
I know that. That's not the point. They claimed to have proven something they did not prove. They did not present this claim in a Bayesian framework.
"What they did do"? Are you saying the measurements they took make their final claim more likely, or that their analysis of the data is correct and justifies their claim? Yes, if you arrange things moderately rationally, evidence against a homogeneous response is evidence against any response, but much less so. I think Phil agrees with that too, and is objecting to a conclusion based on that much weaker evidence pretending to have much more justification than it does.

They do, but did the paper he dealt with write within a Bayesian framework? I didn't read it, but it sounded like standard "let's test a null hypothesis" fare.

You don't just ignore evidence because someone used a hypothesis test instead of your favorite Bayesian method. P(null | p value) != P(null)

I ignore evidence when the evidence doesn't relate to the point of contention. Phil criticized a bit of paper, noting that the statistical analysis involved did not justify the conclusion made. The conclusion did not follow the analysis. Phil was correct in that criticism. It's just not an argument against Phil that someone might take some of the data in the paper and do a Bayesian analysis that the authors did not do.
2stcredzero10y
Yes, but instead of the mechanism making the beliefs more radical in the context of the whole society, it acts to make beliefs more mainstream. Though, one could argue that a more jingoistic China would be more radical in the analogous larger context.

My advisor, Jarad Niemi, has posted a bunch of lectures on Bayesian statistics to YouTube, most of them short and all pretty good IMHO. The lectures are made for Stat 544, a course at Iowa State University. They assume a decent familiarity with probability theory - most students in the course have seen most of chapters 1-5 of Casella and Berger in detail - and some knowledge of R.

If it is indeed a megameetup, I'd like to attend (from Ames, IA so in the 7 hour range).

EDIT: FWIW I'm also willing to carpool with anyone (nearly) passing through or (nearly) on the way.

I agree, but I'm not sure it was intended as an insult. The effect in (some) readers is similar though, so maybe I'm splitting hairs.

The best way not to do something is to do the best thing you could be doing instead in the best way.

Sure, I'm just saying that personal usefulness shouldn't be the only reason you upvote.

This honestly made me smile in a "man do I love LW" sort of way.

Upvote comments that you think are useful on LW in general, not just comments you found personally useful. (A note to myself as I read this thread).

7latanius10y
As for this thread: wouldn't upvoting comments that you think are useful for someone else but not for you actually be an indirect case of other-optimizing?
1Qiaochu_Yuan10y
I think that depends on the current number of upvotes the comment has. I'll upvote comments with no upvotes that I personally found useful by way of thanks.

You're right - and I think this is a common failure mode of the population at large, but my most common failure mode is not finding something in a quick google search then failing to just ask someone else who probably knows while either wasting too much time searching or giving up. At the risk of the typical mind fallacy, perhaps this is the most common failure mode of the average LW member as well. If the grandparent could somehow be changed to target people like me better, I think that would improve it the most.

Also related (specifically, getting offended by people who are acting, gasp, irrationally): The problem with too many rational memes

See especially the comments. There are some good strategies in there for dealing with offense in this specific context, some of which may generalize.

Wei Dai suggests that offense is experienced when people feel they are being treated as being low status.

I would generalize this and say that offense is experienced when people feel they are being treated as being lower status than they feel they are/deserve.

The reason for the generalization: some people get offended by just about everything, it seems, and one way to explain it is a blatant grab for status. It's not that they think they're being treated as low status in an absolute sense necessarily, they just think they should be treated as higher status relative to however they're being treated.

I think that's much closer (and upvoted), but you don't need to invoke such an extreme example to demonstrate it; you just need to notice that offense thresholds are different in different contexts. Treating your boss as if she's your drinking buddy is likely to provoke offense. So's treating your drinking buddy as if he's a child. Yet you're generally safe treating boss as boss, buddy as buddy, and child as child -- in other words, giving people the status they contextually expect.

Does anyone know if there any negative effects of drinking red bull or similar energy drinks regularly?

I typically use tea (caffeine) as my stimulant of choice on a day-to-day basis, but the effects aren't that large. During large Magic: the Gathering tournaments, I typically drink a Red Bull or two (depending on how deep into the tournament I go) in order to stay energetic and focused - usually pretty important/helpful, since working on around 4 hours of sleep is the norm for these things.

Red bull works so well that I'm considering promoting it to semi-daily ...

(what, you were expecting large RCTs? dream on)

Ahem.

You may say I'm a dreamer,

but I'm not the only one

I hope someday we'll randomize

and control, then we'll have fun

(Mediocre, but it took me two minutes. I'm satisfied.)

What Luke said. Also, signalling "don't mess with me" though perhaps that use isn't relevant here.

Great post!

One potential problem is having too many maximal probability moments at once, depending on the nature of the hacks you're trying to implement. It's an embarrassment of riches, honestly.

For example, I had a maximal probability moment for about 7 or 8 life-hacks after I came back from minicamp and there was no way I could implement all of them at once because each one would require some amount of concerted effort, so I was better off focusing on a couple at first. When this comes up, often there is a best hack or two to focus on, but the trivial ...

But then I have to interact with peo- NO BRAIN!!! SHUT UP!!! INTERACTING WITH PEOPLE ISN'T BAD!! AND WE'LL HAVE TO INTERACT WITH THEM ANYWAY WHEN THEY DON'T SHOW UP AND IT WILL BE MUCH WORSE!!! WE ARE MAKING THAT PHONE CALL!!!

The conversation I have with myself every time I implement this strategy. Yes, I yell at my brain. Otherwise the insolent bastard won't listen.

0satt10y
Maybe your brain's true objection is that it doesn't like making phone calls. It's come up before (http://lesswrong.com/lw/e5e/how_to_get_cryocrastinators_to_actually_sign_up/789j?context=1)!