
See also the heuristics & biases work on framing effects, e.g. Tversky and Kahneman's "Rational Choice and the Framing of Decisions":

Alternative descriptions of a decision problem often give rise to different preferences, contrary to the principle of invariance that underlies the rational theory of choice. Violations of this theory are traced to the rules that govern the framing of decisions and to the psychological principles of evaluation embodied in prospect theory. Invariance and dominance are obeyed when their application is transparent and often violated in other situations. Because these rules are normatively essential but descriptively invalid, no theory of choice can be both normatively adequate and descriptively accurate.

A hypothesis for the negative correlation (between intelligence and coherence):

More intelligent agents have a larger set of possible courses of action that they're potentially capable of evaluating and carrying out. But picking an option from a larger set is harder than picking an option from a smaller set. So max performance grows faster than typical performance as intelligence increases, and errors look more like 'disarray' than like 'just not being capable of that'. Compare, e.g., a human who left the window open while running the heater on a cold day with a thermostat that left the window open while running the heater.
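
Here's a toy simulation of that hypothesis (the Gaussian option values, the noise model, and all the specific numbers are illustrative assumptions of mine, not anything from the post): the best available option improves as the option set grows, but an agent choosing under evaluation noise captures less and less of that improvement, so the gap between max and typical performance widens.

```python
import random

random.seed(0)

def trial(n_options, eval_noise=1.0):
    """One choice episode: n_options candidate actions with true values
    ~ N(0, 1), each evaluated with independent Gaussian noise."""
    values = [random.gauss(0, 1) for _ in range(n_options)]
    # The agent picks whichever option *looks* best under noisy evaluation.
    chosen = max(values, key=lambda v: v + random.gauss(0, eval_noise))
    return max(values), chosen

for n in (2, 10, 100, 1000):
    episodes = [trial(n) for _ in range(2000)]
    best = sum(b for b, _ in episodes) / len(episodes)
    realized = sum(c for _, c in episodes) / len(episodes)
    # Best-available value grows faster than the value actually captured,
    # so the "disarray" gap widens as the option set gets bigger.
    print(f"n={n:4d}  best={best:5.2f}  realized={realized:5.2f}  "
          f"gap={best - realized:5.2f}")
```

The exact numbers depend on the noise model; the point is just that the gap between the best available option and the option actually chosen grows with the size of the option set.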

A second hypothesis: higher intelligence often involves increased generality - having a larger set of goals, operating across a wider range of environments. But that increased generality makes the agent less predictable to an observer who is modeling the agent as using means-ends reasoning, because the agent is not just relying on the small number of means-ends paths that a narrower agent would. This makes the agent seem less coherent in a sense, but not because the agent is less goal-directed (indeed, it might be more goal-directed and less of a stimulus-response machine).

These hypotheses seem most relevant for comparing very different agents: comparisons across classes of agents, across species, or perhaps across different AI models. It's less clear that they would apply when comparing different humans, or different organizations.

It seems like the concept of "coherence" used here is inclined to treat simple stimulus-response behavior as highly coherent. For example, the author puts a thermostat in the supercoherent, unintelligent corner of one of his graphs.

But stimulus-response behavior, like that of a blue-minimizing robot, only looks like coherent goal pursuit in a narrow set of contexts. The relationship between its behavioral patterns and its progress towards goals is context-dependent, and will go off the rails if you take it out of the narrow set of contexts where it fits. That's not "a hot mess of self-undermining behavior", so it's not the lack of coherence that this question was designed to get at.

A bunch of things in this post seem wrong, or like non sequiturs, or like they're smushing different concepts together in weird ways.

It keeps flipping back and forth between criticizing people for thinking that no one was fooled, and criticizing people for thinking that some people were fooled. It highlights that savviness is distinct from corruptness or support for the regime, but apparently its main point was that the savvy are collaborating with the regime.

As I understand it, the main point of Scott's Bounded Distrust post is that if you care about object-level things like whether taxes are increasing, whether wearing a mask reduces your risk of getting covid, whether the harvest will be good, or whether Russia will invade Ukraine, then you can extract some information from what's said by authorities/institutions like Fox News, the New York Times, the CDC, Joe Biden, Vladimir Putin, etc., even though they often present distorted pictures of the world, as long as you're savvy enough about understanding how they communicate and interpreting what they say.

This post categorizes everyone into dissidents and supporters of "the regime", somehow stuffs savviness into that scheme, and says things that don't map onto the concept of savviness, or onto the examples of savviness that come to mind.

If the important thing about the higher levels is that they don't track the underlying reality, why not define the category in terms of that, rather than in terms of a specific motive (fitting in with friends) which sometimes leads to not tracking reality?

People say & do lots of things to fit in, some of which involve saying true things (while tracking that they match reality) and some of which don't have propositional content (e.g. "Yay X" or "Boo X"). And there are various reasons for people to say nonsense, besides trying to fit in.

I was assuming that the lack of inflation meant that they didn't fully carry out what he had in mind. Maybe something that Eliezer, or Scott Sumner, has written would help clarify things.

It looks like Japan did loosen its monetary policy some, which could give evidence on whether or not the theory was right. But I think that would require a more in-depth analysis than what's in this post. I don't read the graphs as showing 'clearly nothing changed after Abe & Kuroda', just that there wasn't the kind of huge improvement that hits you in the face when you look at a graph, which is what I would've expected from fixing a trillions-of-dollars mistake. If we're looking for smaller effects, I'd want a more careful analysis rather than squinting at graphs. (And when I do squint at these graphs, I see some possible positive signs: 2013-19 real GDP growth seems better than I would've predicted if I had only seen the pre-Kuroda graph, and Kuroda's first ~year is one of the better years.)

Parts of your description sound misleading to me, which probably just means that we have a disagreement? 

My read is that, if this post's analysis of Japan's economy is right, then Eliezer's time1 view that the Bank of Japan was getting it wrong by trillions of dollars was never tested. The Bank of Japan never carried out the policies that Eliezer favored, so the question about whether those policies would help as much as Eliezer thought they would is still just about a hypothetical world which we can only guess at. That makes the main argument in Inadequate Equilibria weaker because one of its central examples of being able to see that experts were getting it wrong is untested rather than confirmed by evidence. And the book has the further flaw that the author mistakenly marked his conditional prediction as "True" rather than "N/A, antecedent not met".

Phrases like "the better policy did not have the advertised effect" and "the claim that trillions of dollars were left on the table is not well-supported" would be appropriate in a world where the Bank of Japan did raise inflation and keep it high for a few years, and real growth didn't materialize. But we're instead in a world where inflation didn't increase much.

(This is all just accepting this post's analysis of Japan's economy, for simplicity.)

It didn't become loose enough to generate meaningful inflation, right? And I thought Sumner & Eliezer's views were that monetary policy needed to be loose enough to generate inflation in order to do much good for the economy.

That's what I had in mind by not "all that loose"; I could swap in alternative phrasing if the content seems accurate.

Attempted paraphrase of this post:

At time1, Eliezer thought that Sumner's macroeconomic analysis was correct, and that it showed that the Bank of Japan's monetary policy was too tight, at a cost of trillions of dollars.

At time2, Eliezer wrote Inadequate Equilibria, in which he used this view of time1 Eliezer as one of his central examples and claimed that events since then had provided strong evidence that it was true: Japan had since loosened its monetary policy, and its economy had improved.

Now, at time3, you are looking back at Japan's economy and saying that it didn't actually do especially well at that time, and also that its monetary policy never actually became loose enough to generate inflation. So Eliezer at time2, who thought that the views of time1 Eliezer had been tested and shown correct, was wrong about that.

Eliezer's views at time1 might have been correct, but we never got a clear empirical test of that, and Eliezer at time2 was wrong to claim that we had.

Presumably they agreed with Scott's criticisms of it, and thought the problems were severe enough to make it not Review-worthy?

I didn't get around to (re-)reading & voting on it, but I might've wound up downvoting if I had. It does hit a pet peeve of mine, where people act as if 'bad discourse is okay if it's from a critic'.
