Open Thread, April 1-15, 2013

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


For kicks, and reminded by all my recent searching to dig up long-forgotten launch and shutdown dates for Google properties, I've compiled a partial list of times I've posted searches & results on LW:

Can't help but get the impression that even people here aren't very good at Googling. Maybe they should be taking Google's little search classes; knowing how to search seems like the sort of skill that would pay off constantly over a lifetime.


It appears to me that in half of these examples people hadn't tried to google at all. It doesn't seem particularly likely to me that the class would develop such a habit. Not that I have a better idea.

My belief is that the more familiar and skilled you are with a tool, the more willing you are to reach for it. Someone who has been programming for decades will be far more willing to write a short one-off program to solve a problem than someone who is unfamiliar and unsure about programs (even if they suspect that they could get a canned script copied from StackExchange running in a few minutes). So the unwillingness to try googling at all is at least partially a lack of googling skill and familiarity.

Received word a few days ago that (unofficially, pending several unresolved questions) my GJP performance is on track to make me eligible for "super forecaster" status (last year these were picked from the top 2%).

ETA, May 9th: received the official invitation.
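For context on what's being ranked: GJP scores forecasters by Brier score, i.e. the mean squared error between your stated probabilities and what actually happened (lower is better). A minimal sketch of just the scoring rule, with made-up forecasts, not GJP's exact aggregation:

    # Brier score: mean squared error between forecast probabilities and
    # binary outcomes. Lower is better; 0.0 is a perfect forecaster.
    # The forecasts and outcomes below are made up for illustration.
    def brier_score(forecasts, outcomes):
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    print(brier_score([0.9, 0.2, 0.7], [1, 0, 0]))  # 0.18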

I'm glad to report that I am one of those who make this achievement possible by occupying the other 98%. Indeed I believe I am supporting the high ranking of a good 50% of the forecasters.

More seriously, congratulations. :)

Congratulations! I also received it (thanks not least to your posts). I wonder how many other LWers participate and who else (if anybody) got their invitations.

I participate: after the first season I was invited to be a super-forecaster in the second. It is kind of a lot of work and I have been very busy, so I quit doing much of anything about it pretty early on, but have mysteriously been invited to participate again in the third season.

We may find out a little about that; super-forecasters will form teams, so it's somewhat likely some of us will end up on the same team.

Congrats to the others too, anyway!

I participate (http://www.gwern.net/Prediction%20markets#iarpa-the-good-judgment-project); and haven't been invited. (While I stopped trying in season 2, my season 1 scores were merely great & not stellar enough to make it plausible that I could have made it.)

Iain (sometimes M.) Banks is dying of terminal gall bladder cancer.

Of more interest is the discussion thread on Hacker News regarding cryonics. There are a lot of cached responses and misinformation going around on both sides.

It's really, really saddening that he of all people has been an outspoken deathist and now it's depriving him of any chance whatsoever. (Well, except for hypothetical ultra-remote reconstruction by FAI or something.)

In the Culture novels, he has all humans just sorta choosing to die after a millennium of life, despite there being absolutely no reason for humans to die: available resources are unlimited, almost all other problems are solved, aging is irrelevant, and clear upgrade paths are available (like growing into a Mind).

It's not entirely clear-cut. He has had characters from outside the Culture describe it as a 'fashion' and a sign of the Culture's decadence. And the characters we do see ending their lives are generally doing it for reasons of psychological trauma.

Either way, thinking a thousand years in the Culture is enough doesn't mean he thinks 70 years on Earth is enough. Has he ever made a direct comment about cryonics? I can't find any. So it's still possible he would be open to it given up-to-date information.

And the characters we do see ending their lives are generally doing it for reasons of psychological trauma.

Stories would tend to focus on characters who are interested or involved in traumatically interesting events, so not sure how much one could infer from that.

Either way, thinking a thousand years in the Culture is enough doesn't mean he thinks 70 years on Earth is enough.

A thousand years instead of 70 is just deathism with a slightly different n.

A thousand years instead of 70 is just deathism with a slightly different n.

Eh, I kinda agree with you in a sense, but I'd say there's still a qualitative difference if one has successfully moved away from the deathist assumption that the current status quo for life-span durations is also roughly the optimal life-span duration.

A thousand years instead of 70 is just deathism with a slightly different n.

Then some form of deathism may be the truth anyway.

On the other hand, I can't remember Banks ever suggesting that organics in the Culture would want to die after a thousand years, only that if they wanted to die they would be able to. I don't think the latter is incompatible with anti-deathism -- is Lazarus Long a deathist, after all?

EDIT: On the gripping hand, there's also a substantial bit of business in the Culture about subliming.

Instead of arguing on in this vein, I know that he's made comments in the past about how he believes death is a natural part of life. I just can't find the right interview now that "Iain Banks death" and variants are nearly-meaningless search terms.

now that "Iain Banks death" and variants are nearly-meaningless search terms

If you want to search the past, go to google, search, click "Search tools," "Any time," "Custom range..." and fill in the "To" field with a date, such as "2008."
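(If you prefer URLs to menus: at least as of this writing, the same custom date range can be passed directly as a query parameter, e.g.

    https://www.google.com/search?q=Iain+Banks+death&tbs=cdr:1,cd_min:1/1/2000,cd_max:12/31/2008

where tbs=cdr:1 enables the custom range and cd_min/cd_max set its endpoints.)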

On the other hand, I can't remember Banks ever suggesting that organics in the Culture would want to die after a thousand years, only that if they wanted to die they would be able to. I don't think the latter is incompatible with anti-deathism -- is Lazarus Long a deathist, after all?

I don't recall seeing any people who are supposed to be older than a thousand years without mechanics like cryostorage/scanning; if you present a world in which pretty much everyone does want to die after a trivial time period, you're presenting a deathist world and you may well hold deathist views.

EDIT: On the gripping hand, there's also a substantial bit of business in the Culture about subliming.

About not subliming, specifically.

I don't recall seeing any people who are supposed to be older than a thousand years without mechanics like cryostorage/scanning

Such a character appears in the latest Culture novel, "The Hydrogen Sonata". But he is stated to be extremely unusual.

IIRC, most inchoate Minds sublime during construction, but I could be wrong about that.

"A Few Notes on the Culture":

Philosophy, again; death is regarded as part of life, and nothing, including the universe, lasts forever. It is seen as bad manners to try and pretend that death is somehow not natural; instead death is seen as giving shape to life.

While burial, cremation and other - to us - conventional forms of body disposal are not unknown in the Culture, the most common form of funeral involves the deceased - usually surrounded by friends - being visited by a Displacement Drone, which - using the technique of near-instantaneous transmission of a remotely induced singularity via hyperspace - removes the corpse from its last resting place and deposits it in the core of the relevant system's sun, from where the component particles of the cadaver start a million-year migration to the star's surface, to shine - possibly - long after the Culture itself is history.

None of this, of course, is compulsory (nothing in the Culture is compulsory). Some people choose biological immortality; others have their personality transcribed into AIs and die happy feeling they continue to exist elsewhere; others again go into Storage, to be woken in more (or less) interesting times, or only every decade, or century, or aeon, or over exponentially increasing intervals, or only when it looks like something really different is happening....

I'm on the fence as to whether or not this really constitutes full-blown deathism or just a belief that sentient beings should be permitted to cause their own death.

I suspect that any cultural norm inconsistent with treating the death of important life forms as an event to be eradicated from the world is at least an enabler of "deathism" as defined locally.

There seems to be some appeal to nature floating around in it, at the very least.

Sure, death is natural. So is Ophiocordyceps, but that doesn't mean I want parasitic mind-altering fungi in my life.

Great point I saw in the discussion:

Look at it like a cryptographer: Is putting a brain in liquid nitrogen a secure erasure method against all future attacks from a determined opponent with lots of resources? Would you trust your financial data to such a method of data erasure?

Hopefully he'll get around to doing some sort of "where I wanted to take the Culture series; ideas which I may have spun into novels" brain dump, before he dies.

I've been writing blog articles on the potential of educational games, which may be of interest to some people here:

I'd be curious to hear any comments.

I realise it's a constructed example, but a videogame that would be even remotely accurate in modelling the causes of the fall of the Roman Empire strikes me as unrealistically ambitious. I would at any rate start out with Easter Island, which at least is a relatively small and closed system.

Another point is that, if you gave the player the same levers that the actual emperors had, it's not completely clear that the fall could be prevented; but I suppose you could give points on doing better than historically.

Do we need a realistic simulation at all? I was thinking about how educational games could devolve into, instead of "guessing the teacher's password", "guessing the model of the game"... but is this a bad thing?

Sure, games about physics should be able to present a reasonably accurate model so that if you understand their model, you end up knowing something about physics... but with history:

actually, what's the goal of studying history?

  • if the goal is to do well on tests, we already have a nice model for that, under the name of Anki. Of course, this doesn't make things really fun, but still.
  • if we want to make students remember what happened and approximately why (that is, "should be able to write an essay about it"), we can make up an arbitrary, dumb and scripted thing, not even close to a real model, but exhibiting some mechanics that cover the actual reasons. (e.g. if one of the causes was "not enough well-trained soldiers", then make "Level 8 Advanced Phalanx" the thing to build if you want to survive the next wave of attacks; see the sketch after this list.)
  • if we'd like to see students discover general ideas throughout history, maybe build a game with the same mechanics across multiple levels? (and they also don't need to be really accurate or realistic.)
  • and finally, if we want to train historians who could come up with new theories, or replacement emperors to be sent back in time to fix Rome... well, for that we would need a much better model indeed. Which we are unlikely to end up with. But do we need this level in most of the cases?

TL;DR: by creating games with wildly unrealistic but textbook-accurate mechanics we are unlikely to train good emperors, but at least students would understand textbook material much better than the current "study, exam, forget" cycle allows.
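To make the "dumb but scripted" mechanic above concrete, here is a minimal sketch; the unit names and the pass/fail rule are invented for illustration, not taken from any real game:

    # A deliberately dumb, scripted mechanic: the level is survivable if
    # and only if the player built the thing that encodes the textbook
    # cause. All names here are invented for illustration.
    TEXTBOOK_CAUSES = {
        "not enough well-trained soldiers": "Level 8 Advanced Phalanx",
        "debased currency": "Imperial Mint Reform",
    }

    def survives_next_wave(buildings, cause="not enough well-trained soldiers"):
        # No simulation at all: just check that the player addressed
        # the scripted textbook cause.
        return TEXTBOOK_CAUSES[cause] in buildings

    print(survives_next_wave({"Granary", "Level 8 Advanced Phalanx"}))  # True
    print(survives_next_wave({"Granary"}))                              # False

The point is that the mechanic doesn't have to be realistic; it only has to make the student rehearse the textbook cause-and-effect pair.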

Do we need a realistic simulation at all? I was thinking about how educational games could devolve into, instead of "guessing the teacher's password", "guessing the model of the game"... but is this a bad thing?

If what they learned about "evolution" comes from Pokemon, then yes.

When did Pokemon become an educational game about evolution?

Pokemon is an example of what an educational game which doesn't care about realism could look like. People should be expected to learn the game, not the reality, and that will especially be the case when the game diverges from reality to make it more fun/interesting/memorable. If you decide that the most interesting way to get people to play an interactive version of Charles Darwin collecting specimens is to make him be a trainer that battles those specimens, then it's likely they will remember best the battles, because those are the most interesting part.

One of the research projects I got to see up close was an educational game about the Chesapeake; if I remember correctly, children got to play as a fish that swam around and ate other fish (and all were species that actually lived in the Chesapeake). If you ate enough other fish, you changed species upwards; if you got eaten, you changed species downwards. In the testing they did afterwards, they discovered that many of the children had incorporated that into their model of how the Chesapeake worked: if a trout eats enough, it becomes a shark.

I'd like to hear more about that Chesapeake result.

I'm seeing if I can find a copy of their thesis. I'll share it if I manage to.

The GAMER thesis is here. (Also looking for an official copy.)

The ILL thesis is here.

It's true that you don't need a model that lets you form new theories of the downfall of the Empire; but my point is that even the accepted textbook causes would be very hard to model in a way that combines fun, challenge, and even the faintest hint of realism.

Take the theory that Rome was brought down partly by climate change; what's the Emperor supposed to do about it? Impose a carbon tax on goats? Or the theory that it was plagues what did it. Again, what's the lever that the player can pull here?

Or civil wars; what exactly is the player going to do to maintain the loyalty of generals in far-off provinces? At least in this case we begin to approach something you can model in a game. For example, you can have a dynastic system and make family members more loyal; then you have a tradeoff between the more limited recruiting pool of your family, which presumably has fewer military geniuses, versus the larger but less loyal pool of the general population. (I observe in passing that Crusader Kings II does have a loyalty-modelling subsystem of this sort, and it works quite well for its purposes. Actually I would propose that as a history-teaching game you could do a lot worse than CKII. Kaj, you may want to look into it.)

Again, suppose the issue was the decline of the smallholder class as a result of the vast slaveholding plantations; to even engage with this you need a whole system for modelling politics, so that you can model the resistance to reform among the upper classes who both benefit by slavery and run most of your empire. Actually this sounds like it could make a good game, but easy to code it ain't.
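A toy sketch of that dynastic tradeoff (family members get a loyalty bonus but are drawn from a small pool; the general population is a much larger pool, hence a better best-case general, with no loyalty bonus). All distributions and numbers are invented for illustration:

    import random

    # Toy model of the recruiting tradeoff: family gets a loyalty bonus,
    # but the family pool is small, so its best general is usually worse
    # than the best of a much larger common pool. Numbers are invented.
    def best_general(pool_size, loyalty_bonus):
        candidates = [
            {"skill": random.gauss(50, 15),
             "loyalty": random.gauss(50, 15) + loyalty_bonus}
            for _ in range(pool_size)
        ]
        return max(candidates, key=lambda c: c["skill"])

    random.seed(0)
    print(best_general(pool_size=5, loyalty_bonus=30))   # loyal, middling skill
    print(best_general(pool_size=500, loyalty_bonus=0))  # skilled, loyalty a gamble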

It gets even more complicated when these causes interact. A large part of the reason for the decline of smallholders and the rise of vast manors using serfs (slavery was in decline during that period) was the fact that farmers had to turn to the lords for protection from barbarians and roving bandits. The reason there were a lot of marauding bandits is that the armies were too busy fighting over who the next emperor was going to be to do their job of protecting the populace.

Dynasty Warriors and Romance of the Three Kingdoms, while heavily stylized and quite frequently diverging from actual history, nevertheless do a pretty good job of conveying the basics of the time period and region.

A big part of education today is memorization. Perhaps that is wrong, but it is going to stay with us for a while anyway. And at least partially it is necessary; how else would one learn, say, the vocabulary of a foreign language?

So while it is great to invent games that teach principles instead of memorization, let's not forget that there is a ton of low-hanging fruit in making the memorization more pleasant. If we could just take all the memorization of elementary and high school and turn it into one big cool game, it would probably make the world a much better place. How many resources (especially human resources) do we spend today on forcing kids to learn things they try to avoid learning? Instead we could just give them a computer game, and leave teachers only with the task of explaining things. Everyone could get today's high-school-level education without most of the frustration.

Recently I started using Anki for memorization, and it seems to work great. But I still need some minimum willpower to start it every day. For me that is easy, because with my small amount of data I usually get 10-20 questions a day. But if I tried to use it in real time for high-school knowledge, that would be much more. Also, today I know exactly why I am learning, but for a small child it is an externally imposed duty, with uncertain rewards in a very far future. So some additional rewards would be nice.

It could be interesting to make a school where in the morning the students would play some gamified Anki system, and in the afternoon they would work in groups or discuss topics with teachers.
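For anyone curious what the scheduling inside such a gamified system might look like: Anki descends from the SM-2 family of spaced-repetition algorithms, which behave roughly like this simplified sketch (the real implementations add many tweaks):

    # Rough SM-2-style scheduling (simplified). quality is self-graded
    # recall, from 0 (total blackout) to 5 (perfect).
    def next_review(interval_days, ease, quality):
        if quality < 3:                  # failed: start the card over
            return 1, ease
        if interval_days == 0:
            new_interval = 1
        elif interval_days == 1:
            new_interval = 6
        else:
            new_interval = round(interval_days * ease)
        # Ease drifts down after hard recalls, up after easy ones.
        ease += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
        return new_interval, max(ease, 1.3)

    interval, ease = 0, 2.5
    for quality in [5, 4, 5, 3]:
        interval, ease = next_review(interval, ease, quality)
        print(interval, round(ease, 2))  # intervals stretch out: 1, 6, 16, 43

The nice property for gamification is that the algorithm already produces a natural difficulty curve; a game layer only needs to attach rewards to clearing each day's queue.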

Sure, that's big too. I just didn't talk about it as much because everyone else seems to be talking about it already.

Have you played the Portal games? They include lots of the things you mention... they introduce how to use the portal gun, for example, not by explaining stuff but by giving you a simplified version first... then the full feature set... and then there are all the other things with different physical properties. I can definitely imagine some Portal Advanced game where you'd actually have to use equations to calculate trajectories.

Nevertheless... I'd really like to be persuaded otherwise, but the ability to read Very Confusing Stuff, without any working model, and make sense of it can't really be avoided after a while. We can't really build a game out of every scientific paper, due to the amount of time required to write a game vs. a page of text... (even though I'd love to play games instead of reading papers. And it sounds definitely doable with CS papers. What about a conference accepting games as submissions?)

I've played the first Portal game for a bit, and I liked it, but haven't finished it because puzzle games aren't that strongly my thing. I wonder whether not liking them much is a benefit or a disadvantage for an edugame designer. :-)

the ability to read Very Confusing Stuff, without any working model, and make sense of it can't really be avoided after a while

True enough. But I don't think that very much of education consists of trying to teach this skill in the first place (though one could certainly argue that it should be taught more), and having a solid background in other stuff should make it easier when you do get to that point.

What I found fascinating about Portal is the effort they made in testing the game on players. There is a play-mode with developer commentary (though perhaps it's only available after the first play-through) in which they comment on all the details they changed to make sure that the players learned the relevant concepts, that they didn't forget them, and that they have enough hints to solve the puzzle (for example, it's difficult to make a player look up). It'd be awesome if educational material (not necessarily just edugames) or even whole courses were designed and tested that well.

Thanks, I saw the developer commentary option but didn't try it out. Now that you've told me what it consists of, I'll have to check it out.

These games already exist for many things, good enough that watching Let's Plays of them is probably more efficient than most deliberately educational videos. It's just finding them, and realizing that that's what they are, that's tricky.

Games relevant to this discussion include Rome: Total War and Kerbal Space Program. Look them up.

I think I would really enjoy watching Civ 5 Let's Plays that simultaneously discuss world history.

One point is that while memorizing specific causes of the fall of the Roman Empire may not be especially useful, acquiring the self-discipline necessary to do this without a game to motivate you might be very useful.

Perhaps, but if the task doesn't also feel interesting and worthwhile by itself, then we're effectively teaching kids that much of learning is dull, pointless and tedious, detached from anything of real-world significance, and something that you only do because the people in power force you to. That's one of the most harmful attitudes that anyone can pick up. Let's associate learning with something fun and interesting first, and then channel that interest into the ability to motivate yourself even without a game later on.

I mentioned that I was attending a Landmark seminar. Here is my review of their free introductory class that hopefully adds to the conversation for those who want to know:

Coaches - They are the people who lead the class, and I found them to be genuine in their belief in the benefits of taking the courses. These coaches were unpaid volunteers. I found their motives for coaching to be self-improvement and, to some degree, altruism. In short, it helped them, and they really want to share it.

Material - The intro course consists of informative ideas rather than exercises. These ideas are also trademarked phrases, which makes them gimmicky and gives them more importance than they really warrant. We were not told these ideas were evidence-based. Lots of information on how to improve one's life was thrown around, but no research or empirical evidence was given. Not once were the words "cognitive science" or "rationality" used. I speculate that the value the course gives its students comes not from the informative ideas, but from the exercises and the motivation one gets from being actively pushed by the coaches to pursue goals.

Final thoughts - If you are rationality-minded, then this is not for you. I am no worse for going, and I do not believe that anyone who is rationality-minded and attends will be worse off either; however, I do believe that attending is most likely damaging to the rationality of a person who is naive about rationality to begin with. I have never attended CFAR, but just from browsing their website I can tell that Landmark is very far from what CFAR does. I think people in general would benefit more from attending CFAR than Landmark.

I am generally still very bad at steelmanning, but I think I am now capable of a very specific form of it. Namely, when people say something that sounds like a universal statement ("foos are bar") I have learned to at least occasionally give them the benefit of the doubt instead of assuming that they literally mean "all foos are bar" and subsequently feeling smug when I point out a single counterexample to their statement. I have seen several people do this on LW lately and I am happy to report that I am now more annoyed at the strawmanners than the strawmanned in this scenario.

It sounds to me like it has a lot in common with the noncentral fallacy. There's a general tendency to think of groups in terms of their central members and not their noncentral ones. This both makes sneaking in connotations by noncentral labels possible, and makes "all central foos are bar" feel like the same thing as "all foos are bar".

Even more so with "no foo is a bar". Such statements are most probably either common definitional truths, like "no mammal is a bird", and therefore not very informative, or else improbable, like "no man can live more than X minutes without oxygen, ever".

In the latter case, even if the X is huge, we can assume that maybe it can be done under some (as yet unseen) circumstances.

In other words, don't be too hasty with universal negations!

Why can't people say "some foos are bar" or "foos tend to be bar"? My default interpretation of "foos are bar" is "all foos are bar". I tend to classify confident assertions that "foos are bar" with clear counterexamples as blustering. We already know from Philip Tetlock's work that hedgehogs who make predictions based on simple models tend to be more confident, more widely quoted in the media, and more wrong than foxes who make equivocal, better-calibrated predictions based on more complicated models.

I think there may actually be a bit of a group coordination problem here--hedgehogs gain status from appearing confident and getting quoted in the media, but they're spreading low-quality info. So it's a case of personal gain at the expense of group loss. I'm inclined to call people out for hedgehog-style behavior as a way of dealing with this coordination problem. (In case it's not obvious, I frequently see hedgehog-style predictions from LW-affiliated people and find them annoying and unconvincing.)

Why can't people say "some foos are bar" or "foos tend to be bar"?

I mean, of course they can, but sometimes they won't. People aren't careful with their language and it's uncharitable to assume that people mean what you think their words should have meant instead of what they most likely actually meant.

I also think you have a different prototypical case in mind than I do. I'm thinking of the kind of nitpicking where someone says something like "fire is hot" and someone responds "nuh-uh, there's a special type of fire you can make that is actually cool to the touch" or something like that.

I also think you have a different prototypical case in mind than I do.

Fair.

People can't/don't say that "some foos are bar" or "foos tend to be bar" because it is often less accurate than "all foos are bar" or better yet "foos are bar". This is because truth is fuzzy, not binary or digital. For example, "some humans have two arms" gives you very little information. Do 10 out of seven billion humans have two arms? 6.99999 billion out of 7 billion? Maybe half of humans?

By contrast the statement, "humans each have two arms" or even "all humans have two arms" is mostly true, probably better than 99% true, despite the existence of rare counter examples. You can make useful plans based on the knowledge that "all humans have two arms".

If we see truth as binary, and allow a mostly true statement to be invalidated by a single rare counterexample, we have lost a lot of real information. If I know that 100% of humans have two arms, I have a more complete and accurate, though imperfect, view of the world than if I know only that "some humans have two arms".

Best of all of course is if I know that 99.9834% +/- 0.0026% of humans have two arms. However absent such precise information, the statement "humans have two arms" is a pretty accurate and useful representation of reality.
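(As an aside, the "precise information" version is cheap to compute if you have sample data; a quick sketch using the usual normal approximation for a proportion, with invented sample numbers:

    import math

    # Estimate a proportion and its 95% confidence interval from a sample,
    # using the normal approximation. The sample numbers are invented.
    def proportion_ci(successes, n, z=1.96):
        p = successes / n
        se = math.sqrt(p * (1 - p) / n)
        return p, z * se

    p, margin = proportion_ci(99912, 100000)
    print(f"{p:.2%} +/- {margin:.3%}")  # 99.91% +/- 0.018%

Of course the hard part is getting the sample, not the arithmetic.)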

It was recently brought to my attention that Eliezer Yudkowsky regards the monetary theories of Scott Sumner (short overview) as (what we might call) a "correct contrarian cluster", or an island of sanity where most experts (though apparently a decreasing number) believe the opposite.

I would be interested in knowing why. To me, Sumner's views are a combination of:

a) Goodhart's folly ("Historically, an economic metric [that nobody cared about until he started talking about] has been correlated with economic goodness; if we only targeted this metric with policy, we would get that goodness. Here are some plausible mechanisms why ..." -- my paraphrase, of course)

b) Belief that "hoarded" money is pure waste with no upside. (For how long? A day? A month?)

If you are likewise surprised by Eliezer's high regard for these theories, please join me in encouraging him to explain his reasoning.

To address your (a) comment, some countries have implemented close approximations to NGDP level targeting after the 2008 crisis, and have done well. They include most obviously Iceland (despite a severe financial crisis), and some less obvious instances like Australia, Poland and Israel. One could point to the UK as the clearest counterexample, but just about everyone agrees that they have severe structural problems, which NGDPLT is not intended to address. And even then, monetary easing has allowed the conservative government to implement fiscal austerity without crashing the economy - a crash was widely expected, and there was a lot of public concern (compare the situation in the US wrt the "fiscal cliff" and "sequestration" scares. Here too, the Fed offset the negative fiscal effect by printing money).

As for (b), nobody argues that money hoarding is a bad thing per se. But it needs to be offset, because practically all prices in the economy are expressed in terms of money, and the price system cannot take the impact without severe side effects and misallocations. Inflation targeting is a very rough way of doing this, but it's just not good enough (see George Selgin's book Less than Zero for an argument to this effect). ISTM that this is not well understood in the mainstream ("NK") macro literature, where supply shocks are confusingly modeled as "markup shocks". I have seen cutting-edge papers pointing out that these make inflation targeting unsound (sorry for not having a ref here).
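(For readers to whom "NGDP level targeting" is opaque: the "level" part is the key difference from ordinary inflation or growth-rate targeting. The central bank commits to a growth path for nominal GDP and makes up for past misses instead of forgiving them. A minimal sketch, with invented numbers:

    # Level targeting vs growth-rate targeting. Under a level target, a
    # shortfall in year 1 must be made up in year 2; under a growth-rate
    # target, bygones are bygones. All numbers are invented.
    def target_path(base, growth, years):
        return [base * (1 + growth) ** t for t in range(years)]

    path = target_path(base=100.0, growth=0.05, years=3)  # 100.0, 105.0, 110.25
    actual_year1 = 102.0                        # undershot the 105.0 target

    rate_target_year2 = actual_year1 * 1.05     # 107.1: just 5% from wherever we are
    level_target_year2 = path[2]                # 110.25: catch-up growth required
    print(rate_target_year2, level_target_year2)

The claimed advantage is that the make-up commitment stabilizes expectations about future nominal income.)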

To address your (a) comment, some countries have implemented close approximations to NGDP level targeting after the 2008 crisis, and have done well.

None of the examples have targeted NGDP, which is what Sumner needs to be true to have supporting evidence. Rather, they had policies which, despite not specifically aiming at it, were followed by rising NGDP. The purported similarity to NGDPLT is typically justified on the grounds that the policy caused something related to happen, but there is a very big difference between that and directly targeting NGDP. Hence it can't demonstrate that targeting a metric (that, again, no one even cared about until Sumner started blogging about it) will have the causal power that is claimed for it.

As for (b), nobody argues that money hoarding is a bad thing per se.

I disagree; I have yet to see any anti-hoarders mention anything positive whatsoever about hoarding; they take it as a given that hoarding is bad and that eliminating it would be good. Landsburg says it better than I can: the very people promoting anti-hoarding policies lack any framework in which you can compare the benefits of hoarding to the hoarders against its costs, and thus know whether it's bad on net. The best answer he gets is essentially, "well, it's obvious that there's a shortfall that needs to be rectified" -- in other words, it's just assumed.

To find an example of anyone saying anything positive about hoarding, you have to go to fringe Austrian economists, like in this article.

But it needs to be offset, because practically all prices in the economy are expressed in terms of money, and the price system cannot take the impact without severe side effects and misallocations.

But until you've quantified (or at least acknowledged the existence of) the benefits of hoarding, you can't know if these supposed misallocations are worse than the benefits given by the hoarding. You can't even know if they are misallocations, properly understood.

For once you accept that there's a benefit to hoarding, then the changes in prices induced by it are actually vital market signals, just like any price. Which would mean that you can't eliminate the price change without also destroying information that the market uses to improve resource use. I mean, oil shocks cause widespread price changes, but any attempt to stop these price changes is going to worsen the misallocation problem.

Toy example to illustrate the benefits, and important signal sent by, hoarding: let's say we have a class of typical investors, with no special non-public knowledge about specific companies. So when they invest, they invest in the economy as a whole. (Let's say they won't even consider using this part of their money for consumption.) But! 70% of the economy's investment venues are unsustainable and are actually destroying value in a way not currently obvious. In that case, it would be much better for these potential investors to hoard, rather than further advance this malinvestment. Sure, they'll starve the good 30% of projects of funds, but they'll also pull back on the bad 70%.
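To put (invented) numbers on that toy example: suppose the bad 70% of venues destroy 5% of invested value and the good 30% return 10%. Then the uninformed investor's expected return is negative, and hoarding, with its return of zero, is the better choice:

    # Expected return for an investor who can't tell good venues from bad.
    # The shares and returns are invented for illustration.
    p_bad, r_bad = 0.70, -0.05    # 70% of venues destroy 5% of value
    p_good, r_good = 0.30, 0.10   # 30% of venues return 10%

    expected_return = p_bad * r_bad + p_good * r_good
    print(expected_return)  # -0.005: worse than hoarding's 0.0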

So I have yet to see any actual recognition of the benefits of hoarding among this group, which puts them in a ridiculous position. If holding money is bad, then the optimal situation is for any money received to be instantly spent on something else (whether consumption or investment). But this requires that you know what you're going to spend the money on before you earn it -- which just takes us back to barter! Thus we see the benefit of hoarding/holding money: retaining the option value when you lack certainty about what you will spend it on. It thus signals consumers' uncertainty that they will be able to enter sustainable patterns of trade, and cannot be costlessly squashed (just as yet another school of economics thought of interest -- that it could be zeroed without negative consequence).

None of the examples have targeted NGDP, which is what Sumner needs to be true to have supporting evidence.

I think my examples do constitute supporting evidence of some kind. Yes, it would be good to have examples of countries specifically targeting NGDP, to prevent spurious correlations or Lucas critique problems. But even so, Iceland and, to a lesser extent, Poland - and, to be fair, the UK - specifically accepted a rise in inflation in order to sustain demand - it wasn't a simple case of exogenously strong RGDP growth. (I think this might also apply to Australia, actually. Their institutional framework would certainly allow for that.) This makes the evidence quite credible, although it's not perfect by any means.

Also, Sumner was not at all the first economist to care about NGDP as a possible target. He is a prominent popularizer, but James Meade and Bennett McCallum had proposed it first.

Your example of the "benefits of hoarding" doesn't address the very specific problems with hoarding the unit of account for all prices in the economy, when prices are hard to adjust. Yes, money has a real option value, so money hoarding might signal some kind of uncertainty. However, you have not made the case that this "signaling" has any positive effects, especially when the operation of the price system is clearly impaired. By analogy, if peanuts were the unit of account and medium of exchange, then widespread hoarding of peanuts might signal uncertainty about the next harvest. But it would still cause a recession, and it wouldn't actually cause the relative price of peanuts to rise (or rise much at any rate), which is what might incent additional supply.

Moreover, in practice, an uncertain agent can attain most (if not all) of the benefit of hoarding money by holding some other kind of asset, such as low-risk bonds, gold or whatever the case may be. It's not at all clear that hoarding money specifically provides any additional benefit, or that such incremental benefits could be sustained without inflicting greater costs on other agents.

Yes!

The comment is from a Hacker News thread about Bitcoin hitting $100. It would be cool to have him expand more on Bitcoin itself, which he seems to regard as destructive but not necessarily doomed to fail. Here he entertains the idea of combining NGDP level targeting (which I don't understand) with the best parts of Bitcoin. This all sounds very interesting.

Bitcoin hitting $100

I downloaded a Bitcoin client a couple weeks ago and was going to buy a few bitcoins, but the inconvenience of having to get a Mt Gox account or something made me keep putting it off. Whoops. Hopefully this'll teach me to be less of a procrastinator.

You think that's bad? I considered buying a ~hundred bitcoins after the last crash, when they were going for less than $1, but could never be bothered. :-)

Bitcoin is a form of currency that's supposed to be used. Now that so many people are jumping on the speculation train, and the rest mostly hold onto their bitcoins in the hope the price keeps rising, the practical viability of bitcoins can be called into question. I saw a stick of RAM for sale for the equivalent of over a thousand dollars. For a currency to rise so fast is terribly disruptive, and the rise (lack of supply in relation to demand) itself creates a vicious circle, since the faster it rises the more bitcoin holders are tempted to keep their coins out of circulation.

And if the price is then determined by only the very small portion that's up for sale (the rest being held, often for speculative purposes), while a lot of prospective buyers like you want to join the gold rush - what do you think could easily happen once some people start cashing in? The price drops; seeing the price dropping, more people want to cash in; and suddenly a very large portion is up for sale, in conjunction with a loss of interested buyers (most don't buy into a falling market, tautologically).

My advice, which I may even follow myself: buy at 100, sell at 150, never look back.
