Comments

niplav · 3d · 30

Thanks, that makes sense.

niplav · 3d · 50

Someone strong-downvoted a post/question of mine with a downvote strength of 10, if I remember correctly.

I had initially planned to just stay silent about this, since it's entirely their right to downvote if they think the post is bad or harmful.

But since the downvote, I haven't been able to shake my curiosity about why that person disliked my post so strongly. I'm willing to pay $20 for two or three paragraphs from them explaining why they downvoted it.

niplav · 5d · 20

The standard way of dealing with this:

Quantify how much worse it would be for the PRC to get AGI than for OpenAI or the US government to get it, how much existential risk there is from pausing versus not pausing, and how much from the PRC, OpenAI, or the US government building AGI first; then calculate whether pausing to do {alignment research, diplomacy, sabotage, espionage} has higher expected value than moving ahead.

(Is China getting AGI first worth half the value of the US getting it first, or 10%, or 90%?)
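For instance, a toy version of the calculation (every number below is an illustrative assumption, not an estimate I endorse) might look like this:

```python
# Toy expected-value comparison of "pause" vs. "race ahead";
# all probabilities and values are made up for illustration.
p_doom_race = 0.20           # P(existential catastrophe | race ahead)
p_doom_pause = 0.10          # P(existential catastrophe | pause)
p_prc_first_if_pause = 0.50  # P(PRC builds AGI first | pause)
v_us_first = 1.0             # value of US-led AGI, normalized to 1
v_prc_first = 0.5            # value of PRC-led AGI relative to US-led

ev_race = (1 - p_doom_race) * v_us_first
ev_pause = (1 - p_doom_pause) * (
    p_prc_first_if_pause * v_prc_first
    + (1 - p_prc_first_if_pause) * v_us_first
)
print(f"EV(race)  = {ev_race:.3f}")   # 0.800
print(f"EV(pause) = {ev_pause:.3f}")  # 0.675
```

With these particular numbers racing comes out ahead, but the conclusion flips easily, e.g. if PRC-led AGI is worth 90% of US-led AGI rather than 50%.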

The discussion around pausing versus racing to AGI has so far lacked this kind of analysis. Maybe I should write one.

Gentlemen, calculemus!

niplav · 6d · 20

> The obsessive autists who have spent 10,000 hours researching the topic and writing boring articles in support of the mainstream position are left ignored.

It seems like you're setting up three different categories of thinkers: academics, public intellectuals, and "obsessive autists".

Notice that the examples you give overlap across those categories: Hanson and Caplan are academics (professors!), while Natália Mendonça is not an academic but is by now approaching public-intellectual status(?). Similarly, Scott Alexander strikes me as belonging in the "public intellectual" bucket much more than in any other.

So your conclusion, as far as I read the article, should be "read obsessive autists" instead of "read obsessive autists who support the mainstream view". This is my current best guess: "obsessive autists" are usually not under much pressure to say politically palatable things, quite unlike professors.

niplav · 6d · 50

My best guess is that people in these categories were ones who were high in some other trait, e.g. patience, which allowed them to collect datasets or run careful experiments over long periods, thus enabling others to make great discoveries.

I'm thinking, for example, of Tycho Brahe, who is best known for 15 years of careful astronomical observation and data collection, or of Gregor Mendel's seven-year-long experiments on peas; the same goes for Dmitri Belyaev and fox domestication. Of course I don't know their cognitive scores, but those don't seem to have been a bottleneck in their work.

So the recipe to me looks like "find an unexplored data source that requires long-term observation to bear fruit, but would yield a lot of insight if studied closely, then investigate".

niplav · 9d · 80

I think the Diesel engine would've taken 10 or 20 years longer to be invented: from the Wikipedia article, it sounds like it was fairly unintuitive to people at the time.

niplav · 10d · 145

> A core value of LessWrong is to be timeless and not news-driven.

> I do really like the simplicity and predictability of the Hacker News algorithm. More karma means more visibility, older means less visibility.

> Our current goal is to produce a recommendations feed that both makes people feel like they're keeping up to date with what's new (something many people care about) and also suggest great reads from across LessWrong's entire archive.

> I hope that we can avoid getting swallowed by Shoggoth for now by putting a lot of thought into our optimization targets.

(Emphasis mine.)
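For reference, the Hacker News ranking mentioned above is commonly approximated by the following formula; the exact constants used in production are internal to HN, so treat this as a sketch:

```python
def hn_rank(points: int, age_hours: float, gravity: float = 1.8) -> float:
    """Commonly cited approximation of Hacker News's ranking score:
    more points push a story up, age steadily sinks it."""
    return (points - 1) / (age_hours + 2) ** gravity

# A day-old 100-point story ranks below a fresh 30-point one:
print(hn_rank(100, 24.0))  # ≈ 0.28
print(hn_rank(30, 1.0))    # ≈ 4.02
```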

Here's an idea[1] for a straightforward(?) recommendation algorithm: quantilize over all past LessWrong posts, using inflation-adjusted karma as the metric of quality.

The advantage is that this is dogfooding on some pretty robust theory. I also don't think it's very compute-intensive: one only has to compute the cumulative distribution function once a day (associating each post with its CDF value), and then sample posts from the CDF by inverse transform sampling.

Recommending this way has the disadvantage of not favoring recency (which I personally like) and of not being personalized (which I also like).

By default, it also excludes posts below a certain karma threshold. That could be solved by exponentially tilting the distribution instead of cutting it off (with the tilting parameter to be determined (experimentally?)). Such a recommendation algorithm wouldn't be as robust against very strong optimizers, but since we have some idea of what high-karma LessWrong posts look like (and we're not dealing with a superintelligent adversary… yet), that shouldn't be a problem.
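A minimal sketch of both variants (the plain cutoff quantilizer and the exponentially tilted one), assuming each post carries a precomputed inflation-adjusted karma score; the field names, q, and temperature below are made up:

```python
import bisect
import math
import random

def make_quantilizer(posts, q=0.1):
    """Cutoff variant: sample uniformly from the top q-fraction of
    posts by inflation-adjusted karma."""
    ranked = sorted(posts, key=lambda p: p["adjusted_karma"], reverse=True)
    top = ranked[: max(1, int(q * len(ranked)))]
    return lambda: random.choice(top)

def make_tilted_sampler(posts, temperature=30.0):
    """Tilted variant: no hard cutoff; weight each post by
    exp(karma / temperature), precompute the CDF once (e.g. daily),
    then draw posts by inverse transform sampling."""
    karma = [p["adjusted_karma"] for p in posts]
    m = max(karma)  # subtract the max for numerical stability
    weights = [math.exp((k - m) / temperature) for k in karma]
    cdf, total = [], 0.0
    for w in weights:
        total += w
        cdf.append(total)
    def sample():
        u = random.uniform(0.0, total)
        return posts[bisect.bisect_left(cdf, u)]
    return sample

# Toy usage with made-up posts:
posts = [{"title": f"post-{i}", "adjusted_karma": k}
         for i, k in enumerate([3, 10, 55, 120, 220])]
recommend = make_tilted_sampler(posts, temperature=30.0)
print([recommend()["title"] for _ in range(5)])
```

Lower temperatures concentrate recommendations on the highest-karma posts; higher temperatures approach uniform sampling over the whole archive.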


1. If I were more virtuous, I'd write a pull request instead of a comment. ↩︎
