johnswentworth's Comments

What happens in a recession anyway?

Here's a very compressed summary and some links on standard economic theory around recessions. Of course economists argue about this stuff to no end, so take it all with a grain of salt.

First, there's a high-level division around what causes recessions. Two main models:

  • Real shocks: a hurricane, war, virus, etc directly decreases economic output.
  • Sticky prices + volatile currency: contracts are denominated in dollars, so if the value of a dollar goes up relative to everything else, lots of debtors/employers/etc are unable to pay.

The former is the domain of real business cycle theory (RBC), the latter includes most of both Keynesian and monetarist models - MRU has a good set of intro-level videos on all that. If you want a perspective with more analysis and less politics, I recommend jumping straight into recursive macro models - though this does require a strong math background.

Real-world recessions can involve either or both of these causes. The textbook example of a real shock would be the 1970s oil crises - though textbooks over the next few decades will likely use coronavirus as an example instead. The 2008 meltdown and the Great Depression, on the other hand, are generally seen as primarily monetarily-driven recessions.

Coronavirus-Specific

One pattern of major concern today is that a real shock induces a minor recession, the value of a dollar goes up relative to most other things due to the real shock (i.e. people have trouble paying their debts), but then the Fed reacts too slowly/too little to bring the value of a dollar back down (i.e. by "printing money") and a monetary recession results. The Fed's announcements over the past few weeks have been very good on that front - it remains to be seen whether they're enough, but qualitatively they're clearly doing the right sort of things, and they're signalling willingness to do more if needed. We've come a long way since 2008.

As long as the monetary situation continues to look good, we'll mainly want to use an RBC-style model. Things to place more/less emphasis on:

  • In an RBC model, we don't worry so much about "financial contagion", bank runs, etc. That's not to say the Fed shouldn't worry about any of that; rather, those are the problems which can be avoided if policymakers are on-the-ball.
  • Heterogeneous goods: a real shock will reduce production of some goods but not others. An oil shock looks different from a potato crop failure or a pandemic. Monetary recessions all center on a "shortage" of the same "good" - i.e. money - so we'd expect them to look more similar to each other, whereas real shocks (absent a monetary problem) will look more different from each other.
  • Loss of capital goods: a lot of study goes into the extent to which real shocks have long-lasting effects, vs a rapid economic bounce-back. The main mechanism for lasting effects, in most models, is that production of capital goods slows due to the shock, but existing capital goods continue to break down. To see how relevant that will be, I'd look at how the virus is impacting production of the major capital sinks: construction, oil wells & pipelines, data infrastructure, power plants & the electric grid, roads & railroads, etc.

A Kernel of Truth: Insights from 'A Friendly Approach to Functional Analysis'

Probably too late at this point for you, but in case other people come along... I'd recommend learning functional analysis first in the context of a theoretical mechanics course/textbook, rather than a math course/textbook. The physicists tend to do a better job explaining the intuitions (and give far more exposure to applications), which I find is the most important thing for a first exposure. Full rigorous detail is something you can pick up later, if and when you need it.

Alignment as Translation

That's a marginal cost curve at a fixed time. Its shape is not directly relevant to the long-run behavior; what's relevant is how the curve moves over time. If any fixed quantity becomes cheaper and cheaper over time, approaching (but never reaching) zero as time goes on, then the price goes to zero in the limit.

Consider Moore's law, for example: the marginal cost curve for compute looks U-shaped at any particular time, but over time the cost of compute falls like exp(-kt), with k around ln(2)/(18 months).
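
To spell out the limit claim (writing c(t) for the price of that fixed quantity of compute at time t, and c_0 for whatever it costs today):

$$c(t) = c_0 e^{-kt}, \qquad k \approx \frac{\ln 2}{18\text{ months}},$$

so c(t) is positive at every finite time, yet it goes to zero in the limit of large t.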

Alignment as Translation

Of course the limit can't be reached, that's the entire reason why people use the phrase "in the limit".

johnswentworth's Shortform

For short-term, individual cost/benefit calculations around C19, it seems like uncertainty in the number of people currently infected should drop out of the calculation.

For instance: suppose I'm thinking about the risk associated with talking to a random stranger, e.g. a cashier. My estimated chance of catching C19 from this encounter will be roughly proportional to the number of people currently infected. But, assuming we already have reasonably good data on the number hospitalized/dead, my chances of hospitalization/death given infection will be roughly inversely proportional to the number currently infected. So, multiplying those two together, I'll get a number roughly independent of the number currently infected.
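
Here's a minimal numerical sketch of that cancellation. All the numbers (deaths observed, transmission probability per contact, population) are made up for illustration, not estimates:

```python
# Sketch with made-up numbers: the assumed count of current infections cancels
# out of the short-term individual risk estimate.
deaths_observed = 5_000             # assumed to be reasonably well-measured
population = 330_000_000
transmission_prob = 0.05            # hypothetical chance of catching it from one infected contact

for assumed_infections in (100_000, 1_000_000, 10_000_000):
    p_contact_is_infected = assumed_infections / population           # proportional to the number infected
    p_death_given_infection = deaths_observed / assumed_infections    # inversely proportional to it
    risk = transmission_prob * p_contact_is_infected * p_death_given_infection
    print(f"assumed infections {assumed_infections:>10,}: risk of death ~ {risk:.2e}")

# Every line prints the same risk: the assumed infection count cancels.
```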

How general is this? Does some version of it apply to long-term scenarios too (possibly accounting for herd immunity)? What short-term decisions do depend on the number currently infected?

Alignment as Translation
A finite sized computer cannot contain a fine-grained representation of the entire universe.

1/x cannot ever be zero for finite x, yet it approaches zero in the limit of large x. The OP makes exactly the same sort of claim: our software approaches omniscience in the limit.

Alignment as Translation

The rules it's given are, presumably, at a low level themselves. (Even if that's not the case, the rules it's given are definitely not human-intelligible unless we've already solved the translation problem in full.)

The question is not whether the low-level AI will follow those rules, the question is what actually happens when something follows those rules. A python interpreter will not ever deviate from the simple rules of python, yet it still does surprising-to-a-human things all the time. The problem is accurately translating between human-intelligible structure and the rules given to the AI.
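
A concrete, standard example of that: Python's mutable-default-argument behavior (the function below is a toy of my own, not anything from the post). The interpreter follows its rules to the letter, and the result still surprises many humans:

```python
def append_item(item, bucket=[]):   # the default list is created once, at definition time
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] -- the "same" default list persists across calls
```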

The problem is not that the AI might deviate from the given rules. The problem is that the rules don't always mean what we want them to mean.

Alignment as Translation

I'm pretty sure none of this actually affects what I said: the low-level behavior still needs to produce results which are predictable to humans in order for predictability to be useful, and that's still hard.

The problem is that making an AI predictable to a human is hard. This is true regardless of whether or not it's doing any outside-the-box thinking. Having a human double-check the instructions given to a fast low-level AI does not make the problem any easier; the low-level AI's behavior still has to be understood by a human in order for that to be useful.

As you say toward the end, you'd need something like a human-readable communications protocol. That brings us right back to the original problem: it's hard to translate between humans' high-level abstractions and low-level structure. That's why AI is unpredictable to humans in the first place.

Alignment as Translation
I think you get "ground truth data" by trying stuff and seeing whether or not the AI system did what you wanted it to do.

That's the sort of strategy where illusion of transparency is a big problem, from a translation point of view. The difficult cases are exactly the cases where the translation usually produces the results you expect, but then produces something completely different in some rare cases.

Another way to put it: if we're gathering data by seeing whether the system did what we wanted, then the long tail problem works against us pretty badly. Those rare tail-cases are exactly the cases we would need to observe in order to notice problems and improve the system. We're not going to have very many of them to work with. Ability to generalize from small data sets becomes a key capability, but then we need to translate how-to-generalize in order for the AI to generalize in the ways we want (this gets at the can't-ask-the-AI-to-do-anything-novel problem).

Alignment as Translation

(The other comment is my main response, but there's a possibly-tangential issue here.)

In a long-tail world, if we manage to eliminate 95% of problems, then we generate maybe 10% of the value. So now we use our 10%-of-value product to refine our solution. But it seems rather optimistic to hope that a product which achieves only 10% of the value gets us all the way to a 99% solution. It seems far more likely that it gets to, say, a 96% solution. That, in turn, generates maybe 15% of the value, which in turn gets us to a 96.5% solution, and...

Point being: in the long-tail world, it's at least plausible (and I would say more likely than not) that this iterative strategy doesn't ever converge to a high-value solution. We get fancier and fancier refinements with decreasing marginal returns, which never come close to handling the long tail.
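
Here's a toy numerical version of that worry, extrapolating the one-point-then-half-a-point pattern above. The halving rate and the coverage^45 value curve are my own illustrative choices, picked so that 95% coverage yields roughly 10% of the value and 96% yields roughly 15%:

```python
# Toy model: each refinement closes half as much of the problem space as the
# last, while value is concentrated in the rare tail cases.
coverage = 0.95                 # fraction of problems handled
step = 0.01                     # first refinement gains one percentage point
for i in range(30):
    coverage += step
    step /= 2                   # assumed: diminishing returns halve each gain
    value = coverage ** 45      # assumed long-tail value curve: ~10% of value at 95% coverage
    print(f"iteration {i+1:2d}: coverage={coverage:.4f}, value~{value:.2f}")

# Coverage plateaus near 97% and value near 25%: the long tail never gets handled.
```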

Now, under this argument, it's still a fine idea to try the iterative strategy. But you wouldn't want to bet too heavily on its success, especially without a reliable way to check whether it's working.
