Nominated Posts for the 2019 Review

Posts need at least 2 nominations to continue into the Review Phase.
Nominate posts that you have personally found useful and important.
- Calibrating With Cards (32 points), by [anonymous]
- Make more land (167 points)
- Blackmail (133 points)
- Dual Wielding (60 points)
- Everybody Knows (137 points)

2019 Review Discussion

[epistemic status: that's just my opinion, man. I have highly suggestive evidence, not deductive proof, for a belief I sincerely hold]

"If you see fraud and do not say fraud, you are a fraud." --- Nassim Taleb

I was talking with a colleague the other day about an AI organization that claims:

  1. AGI is probably coming in the next 20 years.
  2. Many of the reasons we have for believing this are secret.
  3. They're secret because if we told people about those reasons, they'd learn things that would let them make an AGI even sooner than they would otherwise.

His response was (paraphrasing): "Wow, that's a really good lie! A lie that can't be disproven."

I found this response refreshing, because he immediately jumped to the most likely conclusion.

Near predictions generate more funding

Generally, entrepreneurs who

...
sarahconstantin
in retrospect, 6 years later: wow, I was way too bearish about the "mundane" economic/practical impact of AI. "AI boosters", whatever their incentives, were straightforwardly directionally correct in 2019 that AI was drastically "underrated" and had tons of room to grow.

Maybe "AGI" was the wrong way of describing it. Certainly, some people seem to be in an awful hurry to round down human capacities for thought to things machines can already do, and they make bad arguments along the way. But at the crudest level, yeah, "AI is more important than you think, let me use whatever hyperbolic words will get that into your thick noggin" was correct in 2019.

Also, the public figures I named can no longer be characterized as only "saying true things." Polarization is a hell of a drug.

I would totally agree they were directionally correct; I underestimated AI progress. I think Paul Christiano got it about right.

I'm not sure I agree about the use of hyperbolic words being "correct" here; surely, "hyperbolic" contradicts the straightforward meaning of "correct".

Part of the state I was in around 2017 was: there were lots of people around me saying "AGI in 20 years", by which they meant a thing that shortly after FOOMs and eats the sun or something, and I thought this was wrong and a strange set of belief updates (which was not adequately j...

This is Part VIII of the Specificity Sequence

When you teach someone a concept, you’re building a structure in their mind by connecting up some of their mental concepts in a certain way. But you have to go in through their ears. It’s kind of like building this ship-in-a-bottle LEGO set.

In this post, we’ll visualize what’s happening in a learner’s brain and see how a teacher can wield their specificity powers to teach concepts better.

Mind-Hanging A Concept

Reading a startup’s pitch begins as a learning exercise: learning what the startup does. In How to Apply to Y Combinator, Paul Graham writes:

We have to read about 100 [applications] a day. That means a YC partner who reads your application will on average have already read 50 that day and have 50
...
niplav

A superintelligent mind with a reasonable amount of working memory could process generic statements all day long and never whine about dangling concepts. (I feel like the really smart people on LessWrong and Math Overflow also exhibit this behavior to some degree.) But as humans with tragically limited short-term memories, we need all the help we can get. We need our authors and teachers to give us mind-hangers.

I think we can do substantially better than four items in working memory, but we won't have a working memory with thousands of slots. That is because...
