All of BruceyB's Comments + Replies

In the case of countries, the main problem seems to be that as a country grows, its population becomes more culturally heterogeneous. People on average disagree more with whatever federal policies are chosen, giving them a reason to split off into smaller countries. Coordination costs also increase with size.

I'm far from an expert on LOESS (in fact, I hadn't heard the term before now), but it doesn't seem to perform a function comparable to MIC's. LOESS is an algorithm for producing a non-linear regression, while MIC is an algorithm for measuring the strength of a relationship between two variables.

In the paper (Figure 2A), they compare it to the Pearson correlation coefficient, Spearman rank correlation, mutual information, CorGC, and maximal correlation on data in a variety of shapes. Basically, MIC is effective on a wider range of shapes than any of them.

Check out Figures S5.D and S6 from the SOM. If the relationship is functional (the linear, parabolic, and sinusoidal cases in Figure S6), then the R² calculated from a LOESS regression is quite close to the MIC score, and that's not a coincidence. Of course, LOESS R² just dies when it encounters a non-functional relationship.
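To see the contrast concretely, here is a minimal sketch, not the paper's method: a crude tricube-weighted local-linear smoother stands in for LOESS, and we compare its R² on a functional relationship (a parabola) against a non-functional one (a circle). All function names and parameter choices here are my own illustration.

```python
import math

def local_linear_fit(xs, ys, x0, bandwidth):
    # Tricube-weighted least-squares line around x0 (a crude LOESS stand-in).
    sw = swx = swy = swxx = swxy = 0.0
    for x, y in zip(xs, ys):
        d = abs(x - x0) / bandwidth
        if d >= 1.0:
            continue
        w = (1 - d ** 3) ** 3
        sw += w; swx += w * x; swy += w * y
        swxx += w * x * x; swxy += w * x * y
    denom = sw * swxx - swx * swx
    if abs(denom) < 1e-12:
        return swy / sw  # degenerate case: fall back to the weighted mean
    slope = (sw * swxy - swx * swy) / denom
    intercept = (swy - slope * swx) / sw
    return intercept + slope * x0

def loess_r2(xs, ys, bandwidth=0.3):
    # R^2 of the smoothed fit: 1 - (residual sum of squares / total sum of squares).
    fits = [local_linear_fit(xs, ys, x, bandwidth) for x in xs]
    ybar = sum(ys) / len(ys)
    ss_res = sum((y - f) ** 2 for y, f in zip(ys, fits))
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

n = 200
# Functional relationship (a parabola): the smoother tracks it, so R^2 is high.
xs = [i / (n - 1) for i in range(n)]
parab = [(x - 0.5) ** 2 for x in xs]
# Non-functional relationship (a circle): at each x the fit collapses to the
# midpoint of the two branches, so R^2 is near zero.
cx = [0.5 + 0.5 * math.cos(2 * math.pi * i / n) for i in range(n)]
cy = [0.5 + 0.5 * math.sin(2 * math.pi * i / n) for i in range(n)]
print(loess_r2(xs, parab))  # close to 1
print(loess_r2(cx, cy))     # close to 0
```

This is exactly the failure mode mentioned above: the circle has a strong, obvious structure, but since it isn't a function of x, the regression-based R² can't see it, while MIC scores it highly.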

Recently I've been using Evernote to organize my notes. It has a nice phone app I can use to take quick notes while away from my computer, a desktop client, and a browser plugin that lets me clip articles. My rule of thumb for notes: every time I record an idea I would otherwise have forgotten, it's roughly equivalent to thinking of one new idea.

I tend to write out outlines after finishing books or particularly interesting articles, partly to see the arguments more clearly and partly to have something to refer back to later.

It's always interesting (and fun) to go back and browse old ideas I've forgotten about.

Took it!

For the probability questions, I think it might have been useful for people to be able to specify confidence in their estimates. An estimate of X% from someone familiar with almost all of the relevant arguments and evidence is different from an estimate of X% from someone with only a cursory understanding of the issue. We could then target the subjects people are most uncertain about to produce the most informative discussions.

A good Bayesian way to make that question quantitative would be: "If we ask you again in 10 years, how much do you expect your number to change? Express your answer as a multiplicative factor on the percentage or the inverse percentage, whichever is smaller. So 1 would mean you expect no change, and 3 would mean you expect, with about 50% confidence, that your estimate and its inverse will both be more than a third and less than triple of what they are today." I know it should really be a matter of p(1-p), but that's close enough. Oh, and taken, so one of the karma points here is for that.
It would also be very interesting to compare the variance among those reporting low certainty with the variance among those reporting high certainty.
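The proposed question can be made mechanical. This sketch follows one reading of "the percentage or the inverse percentage, whichever is smaller"; the function `change_factor` and its exact semantics are my own illustrative interpretation, not anything from the survey:

```python
def change_factor(p_now, p_later):
    # Multiplicative change in the smaller of (p, 1 - p), under one reading
    # of "the percentage or the inverse percentage, whichever is smaller".
    # Symmetric: moving from q to q*f and from q to q/f both report factor f.
    q_now = min(p_now, 1 - p_now)
    q_later = min(p_later, 1 - p_later)
    ratio = q_later / q_now
    return max(ratio, 1 / ratio)

print(change_factor(0.10, 0.25))  # estimate moved from 10% to 25%: factor 2.5
print(change_factor(0.50, 0.50))  # no expected change: factor 1.0
```

Working on min(p, 1-p) rather than p itself is what keeps the question meaningful near 100%: an estimate moving from 98% to 99% is a large update (its complement halved), even though the percentage barely moved.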

Here is a (contrived) situation where a satisficer would need to rewrite.

Sally the Satisficer is invited onto a game show. The game starts with a coin toss. If she loses the toss, she gets 8 paperclips. If she wins, she is invited to the Showcase Showdown, where she is first offered a prize of 9 paperclips. If she turns down this first showcase, she is offered a second showcase of 10 paperclips (fans of The Price is Right know the second showcase is always better).

When she first steps on stage she considers whether she shoul... (read more)
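The arithmetic behind the example can be sketched as follows; the satisficing threshold of 9 expected paperclips is my own illustrative assumption:

```python
P_WIN = 0.5          # fair coin toss
LOSE_PRIZE = 8       # paperclips if Sally loses the toss
FIRST_SHOWCASE = 9
SECOND_SHOWCASE = 10
TARGET = 9           # hypothetical satisficing threshold (expected paperclips)

# Expected paperclips for Sally's two pure strategies, fixed before the toss.
ev_take_first = (1 - P_WIN) * LOSE_PRIZE + P_WIN * FIRST_SHOWCASE   # 8.5
ev_hold_out = (1 - P_WIN) * LOSE_PRIZE + P_WIN * SECOND_SHOWCASE    # 9.0

# Only holding out for the second showcase meets the threshold in expectation,
# yet a satisficer who has already won the toss is satisfied by 9 and would
# take the first showcase; hence the pressure to rewrite herself beforehand.
print(ev_take_first, ev_hold_out)
```

Before the toss, only the hold-out strategy reaches the threshold; after winning the toss, the first showcase alone already satisfies her. That tension between her ex-ante plan and her ex-post preferences is what gives her a reason to self-modify.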

Cool example! But your argument relies on a certain vagueness in the definitions of "satisficer" and "maximiser", namely between:

* A: an agent "content when it reaches a certain level of expected utility"; and
* B: "simply a maximiser with a bounded utility function"

(These definitions are from the OP.) Looking at the situation you presented: "A" would recognise the situation as having an expected utility of 9 and be content with it (until she loses the coin toss...). "B" would not distinguish between the utility of 9 and the utility of 10. Neither agent would see a need to self-modify. Your argument treats Sally as (seeing herself) morphing from "A" before the coin toss to "B" after; this, IMO, invalidates your example.
I like this, I really do. I've added a mention of it in the post. Note that your point shows not only that a non-timeless satisficer would want to become a maximiser, but that a timeless satisficer would behave as a maximiser already.
Ah, good point. So "picking the best strategy, not just the best individual moves" amounts to self-modifying into a maximizer in this case. On the other hand, if our satisficer runs on updateless decision theory, picking the best strategy is already what it does all the time. So I guess it depends on how the satisficer is programmed.

Hi. I'm a Caltech student in math/econ.