User Profile


Recent Posts


In defence of epistemic modesty

40 points · 35 min read

Contra double crux

50 points · 7 min read

Beware surprising and suspicious convergence

14 points · 15 min read

Log-normal Lamentations

12 points · 8 min read

Against the internal locus of control

6 points · 4 min read

Funding cannibalism motivates concern for overheads

26 points · 3 min read

Why the tails come apart

119 points · 6 min read

UFAI cannot be the Great Filter

36 points · 3 min read

Recent Comments

This seems right to me, and at least the 'motte' version of growth mindset accepts that innate ability may set pretty hard envelopes on what you can accomplish regardless of how energetically/agentically you pursue self-improvement (and this can apply across a range of ability - although it seems cruel and …

> A healthy topology of the field should have an approximately power-law distribution of hub sizes. This should be true also for related research fields we are trying to advance, like AI alignment or x-risk. If the structure is very far from that (e.g. one or two very big hubs, then nothing, then a lot…


It also risks a backfire effect. If one is in essence a troll happy to sneer at what rationalists do regardless of merit (e.g. "LOL, look at those losers trying to LARP Ender's Game!"), seeing things like Duncan's snarky parenthetical remarks would just _spur me on,_ as it implies I'm successfull…

> I also think I got things about right, but I think anyone else taking an outside view would've expected roughly the same thing.

I think you might be doing yourself a disservice. I took it that the majority of contemporary criticism was directed more towards (in caricature) 'this is going to turn into a na…

Bravo - I didn't look at the initial discussion, or I would have linked your pretty accurate-looking analysis (on re-skimming, Deluks also had points along similar lines). My _ex ante_ scepticism was more a general sense than a precise pre-mortem I had in mind.

Although I was sufficiently sceptical of this idea to doubt it was 'worth a shot' _ex ante_,(1) I was looking forward to being pleasantly surprised _ex post_. I'm sorry to hear it didn't turn out as well as hoped. This careful and candid write-up should definitely be included on the 'plus' side of t…

This new paper may be of relevance (H/T Steve Hsu). The abstract:

> The largely dominant meritocratic paradigm of highly competitive Western cultures is rooted on the belief that success is due mainly, if not exclusively, to personal qualities such as talent, intelligence, skills, efforts or risk taki…

I endorse Said's view, and I've written a couple of frontpage posts.

I'd also add that I think Said is a particularly able and shrewd critic, and I think LW2 would be much poorer if there were a chilling effect on his contributions.

I'm also mystified as to why traceless deletion/banning are desirable properties to have on a forum like this. But (with apologies to the moderators) I think consulting the realpolitik will spare us the futile task of litigating these issues on the merits. Consider it instead a fait accompli with the…