orthonormal

Sequences

Staying Sane While Taking Ideas Seriously

Comments

Adding Up To Normality

I have to disagree with you there.  Thanks to my friends' knowledge, I stopped my parents from taking a cross-country flight in early March, before much of the media reported that there was any real danger in doing so. You can't wave off the value of truly thinking through things.

But don't confuse "my model is changing" with "the world is changing", even when both are happening simultaneously. That's my point.

Your Cheerful Price

One problem: a high price can put more stress on a person, and raising the price further won't fix that!

For instance, say that you leave a fic half-finished, and someone offers a million dollars to MIRI iff you finish it. Would you actually feel cheerful and motivated, or might you feel stressed and avoidant and guilty about being slow, and have a painful experience in actually writing it?

(If you've personally mastered your relevant feelings, I think you'd still agree that many people haven't.)

I don't know what to do in that case.

Understanding “Deep Double Descent”

If this post is selected, I'd like to see the followup made into an addendum—I think it adds a very important piece, and it should have been nominated itself.

What failure looks like

I think this post and, similarly, Evan's summary of Chris Olah's views are essential both in their own right and as mutual foils to MIRI's research agenda. We see related concepts (mesa-optimization originally came out of Paul's talk of daemons in Solomonoff induction, if I remember right) but very different strategies for achieving both inner and outer alignment. (The crux of the disagreement seems to be the probability of success from adapting current methods.)

Strongly recommended for inclusion.

Soft takeoff can still lead to decisive strategic advantage

It's hard to know how to judge a post that deems itself superseded by a post from a later year, but I lean toward taking Daniel at his word and hoping we survive until the 2021 Review comes around.

Rationality, Levels of Intervention, and Empiricism

I can't think of a question on which this post narrows my probability distribution.

Not recommended.

Chris Olah’s views on AGI safety

The content here is very valuable, even if the genre of "I talked a lot with X and here's my articulation of X's model" comes across to me as a weird sort of intellectual ghostwriting. I can't think of a way around that, though.

AlphaStar: Impressive for RL progress, not for AGI progress

That being said, I'm not very confident this piece (or any piece on the current state of AI) will still be timely a year from now, so maybe I shouldn't recommend it for inclusion after all.

Relevance Norms; Or, Gricean Implicature Queers the Decoupling/Contextualizing Binary

Ironically enough for Zack's preferred modality, you're asserting that even though this post is reasonable when decoupled from the rest of the sequence, it's worrisome when contextualized.

The AI Timelines Scam

I agree about the effects of deep learning hype on deep learning funding, though I think very little of it has been AGI hype; people at the top level had been heavily conditioned to believe we were/are still in an AI winter of specialized ML algorithms built to solve individual tasks. (The MIRI-sphere had to work very hard, before OpenAI and DeepMind started doing externally impressive things, to get serious discussion on within-lifetime timelines from anyone besides the Kurzweil camp.)

Maybe Demis was strategically overselling DeepMind, but I expect most people were genuinely over-optimistic (and funding-seeking) in the way everyone in ML always is.
