Decoherence is Falsifiable and Testable

There's certainly a tradeoff involved in using a disputed example as your first illustration of a general concept (here, Bayesian reasoning vs the Traditional Scientific Method).

A Technical Explanation of Technical Explanation

I can't help but think of Scott Alexander's long posts, where the Roman-numeraled sections usually mark a division of topics, but sometimes it seems like it's just "oh, it's been too long since the last one, got to break it up somehow". Either way, I do think it really helps with readability; it reminds the reader to take a breath, in some sense.

Or like, taking something that works together as a self-contained thought but is too long to serve the function of a paragraph, and just splitting it by adding a superficially segue-like sentence at the start of the second part.

A Technical Explanation of Technical Explanation

It may not be possible to cleanly divide the Technical Explanation into multiple posts that each stand on their own, but even separating it awkwardly into several chapters would make it less intimidating and invite more comments.

(I think this may be the longest post in the Sequences.)

My Childhood Role Model

I forget if I've said this elsewhere, but we should expect human intelligence to be just a bit above the bare minimum required to result in technological advancement. Otherwise, our ancestors would have been where we are now.

(Just a bit above, because there was the nice little overhang of cultural transmission: once the hardware got good enough, the software could be transmitted way more effectively between people and across generations. So we're quite a bit more intelligent than our basically anatomically equivalent ancestors of 500,000 years ago. But not as big a gap as the gap from that ancestor to our last common ancestor with chimps, 6-7 million years ago.)

Why haven't we celebrated any major achievements lately?

Additional hypothesis: everything is becoming more politicized than at any time since the Civil War, to the extent that any celebration of a new piece of construction/infrastructure/technology would also be protested. (I would even agree with the protesters in many cases! Adding more automobile infrastructure to cities is really bad!)

The only things today [where there's common knowledge that the demonstration will swamp any counter-demonstration] are major local sports achievements.

(I notice that my model is confused by the ticker-tape parade for John Glenn's final spaceflight. NASA achievements would normally be nonpartisan, but Glenn was a sitting Democratic Senator at the time of the mission! I guess they figured that in heavily Democratic NYC, not enough Republicans would dare to make a stink.)

Decoherence is Falsifiable and Testable

Eliezer's mistake here was that he didn't, before the QM sequence, write a general post to the effect that you don't have an additional Bayesian burden of proof if your theory was proposed chronologically later. Given such a reference, it would have been a lot simpler to refer to that concept without it seeming like special pleading here.
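
(A sketch of the general point, with $H$ the theory and $E$ the evidence: Bayes' theorem scores a hypothesis only by its prior and by how well it predicts the evidence,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},$$

and nothing in that formula references the date on which $H$ was first written down. The legitimate worry about late-proposed theories is overfitting to already-known evidence, which shows up in the prior and the likelihood, not as an extra penalty term.)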

2020's Prediction Thread

It's not explicit. Like I said, the terms are highly dependent in reality, but for intuition you can think of a series of variables $X_i$ for $i$ from $1$ to $N$, where $X_i$ equals $2^i$ with probability $2^{-i}$. And think of $N$ as pretty large.

So most of the time, the sum of these is dominated by a lot of terms with small contributions. But every now and then, a big one hits and there's a huge spike.

(I haven't thought very much about what functions of $i$ and $N$ I'd actually use if I were making a principled model; $2^i$ and $2^{-i}$ are just there for illustrative purposes, such that the sum is expected to have many small terms most of the time and some very large terms occasionally.)
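
For a concrete feel of that shape, here's a minimal simulation sketch (my own illustration, treating the terms as independent even though, as said above, they really aren't):

```python
import random

def heavy_tailed_sum(N=30):
    """One draw of the illustrative model: X_i = 2**i with probability 2**-i, else 0.
    Most draws are a pile of small terms; occasionally a large-i term fires."""
    return sum(2 ** i for i in range(1, N + 1) if random.random() < 2 ** -i)

draws = sorted(heavy_tailed_sum() for _ in range(100_000))
print("median:           ", draws[len(draws) // 2])
print("95th percentile:  ", draws[int(0.95 * len(draws))])
print("99.9th percentile:", draws[int(0.999 * len(draws))])
```

The median stays small while the upper percentiles blow up, which is the background-plus-occasional-spike behavior described above.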

2020's Prediction Thread

No. My model is the sum of a bunch of random variables for possible conflicts (these variables are not independent of each other), where there are a few potential global wars that would cause millions or billions of deaths, and lots and lots of tiny wars each of which would add a few thousand deaths.

This model predicts a background rate (the sum of the many smaller conflicts), with large spikes in the rate whenever a larger conflict happens. Accordingly, over the last three decades (with the tragic exception of the Rwandan genocide) total war deaths per year (combatants + civilians) have been between 18k and 132k (wow, the Syrian Civil War has been way worse than the Iraq War, I didn't realize that).

So my median is something like 1M people dying over the decade, because I view a major conflict as under 50% likely, and we could easily have a decade as peaceful (no, really) as the 2000s.
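
As a sanity check on that median, here's a toy Monte Carlo with illustrative numbers (mine, not a calibrated model): a background-only decade at the recent 18k-132k/year range sums to roughly 0.2M-1.3M, and if a major conflict is under 50% likely, the decade's median stays in that background range while the mean gets dragged up by the tail.

```python
import random

def decade_war_deaths(p_major=0.4, n_draws=100_000):
    """Toy decade model (illustrative numbers, not the exact model above): each year
    contributes a background of small conflicts in the recent 18k-132k range, and with
    probability p_major the decade also contains one major war costing millions."""
    outcomes = []
    for _ in range(n_draws):
        background = sum(random.uniform(18_000, 132_000) for _ in range(10))
        major = random.choice([2e6, 5e6, 2e7]) if random.random() < p_major else 0.0
        outcomes.append(background + major)
    return sorted(outcomes)

outcomes = decade_war_deaths()
print(f"median: {outcomes[len(outcomes) // 2]:,.0f}")   # close to the background total, a bit under 1M
print(f"mean:   {sum(outcomes) / len(outcomes):,.0f}")  # pulled far above the median by the major-war tail
```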

Frequently Asked Questions for Central Banks Undershooting Their Inflation Target

An improvement in this direction: the Fed has just acknowledged, at least, that inflation can be too low as well as too high, that inflation targeting has to reckon with the fact that the US has been consistently undershooting its goal, and that this feeds back into markets expecting the undershooting to continue. And then it explains and commits to average inflation targeting:

We have also made important changes with regard to the price-stability side of our mandate. Our longer-run goal continues to be an inflation rate of 2 percent. Our statement emphasizes that our actions to achieve both sides of our dual mandate will be most effective if longer-term inflation expectations remain well anchored at 2 percent. However, if inflation runs below 2 percent following economic downturns but never moves above 2 percent even when the economy is strong, then, over time, inflation will average less than 2 percent. Households and businesses will come to expect this result, meaning that inflation expectations would tend to move below our inflation goal and pull realized inflation down. To prevent this outcome and the adverse dynamics that could ensue, our new statement indicates that we will seek to achieve inflation that averages 2 percent over time. Therefore, following periods when inflation has been running below 2 percent, appropriate monetary policy will likely aim to achieve inflation moderately above 2 percent for some time.

Of course, this says nothing about how they intend to achieve it (seigniorage has its downsides), but I expect Eliezer would see it as good news.
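
To make "averages 2 percent over time" concrete (my arithmetic, assuming a hypothetical ten-year averaging window, which the statement doesn't actually specify): if inflation runs at 1.5% for five years, the makeup rate $x$ over the next five years solves

$$\frac{5 \times 1.5\% + 5x}{10} = 2\% \quad\Longrightarrow\quad x = 2.5\%,$$

which is the sort of thing "moderately above 2 percent for some time" cashes out to.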

Matt Botvinick on the spontaneous emergence of learning algorithms

The claim that came to my mind is that the conscious mind is the mesa-optimizer here, the original outer optimizer being a riderless elephant.
