Wiki Contributions


The Role of Deliberate Practice in the Acquisition of Expert Performance (PDF)

This link seems broken (though a google search finds many copies of the PDF).

To anyone landing on this page, the CFAR handbook is now available on LessWrong as a native sequence.

It might be useful to add a quick summary of how arXiv works. I vaguely had the impression that anyone could upload PDFs to it, but some of the comments seem to pretty solidly disagree with that.

I would especially love it if it popped out a .tex file that I could edit, since I'm very likely to be using different language on LW than I would in a fancy academic paper.

FYI the screenshots here say "Request feedback" but the actual button currently says "Get feedback". Might trip someone up if they're trying to search for the text.

I feel generally favorable towards this concept, and also towards the idea of being careful to use phrases as they are defined.

But I feel something else after starting to read the Arbital page. Since you quadruple insisted on it, I went ahead and actually opened the page and started reading it. And several things felt off in quick succession. I'm going to think out loud through those things here.

The first part is the concept of "guarded term". Here's part of the definition of that.

stretching it ... is an unusually strong discourtesy.

...You can't just declare that something is a discourtesy. I have never heard of "guarded term," and I'm pretty sure it's a thing the people writing these pages made up, not something well-known basically anywhere. So it's pretty weird to say "if you do thing X, you're being discourteous." The way rudeness works is complicated, but it doesn't work this way. You need broader social agreement before something is actually rude.

Synonyms include 'pivotal achievement' and 'astronomical achievement'.

It feels pretty weird and unnecessarily confusing to tell the reader about two synonyms right away, especially when I'm pretty sure that all of these terms are obscure. It seems like it would have been a lot better to just declare the title of the page to be the one term for this, and to let any "synonyms" fade into non-use.

The next paragraph defines two other terms, a contrasting term, and a superset term, each with their own abbreviations.

Then the next paragraph tells me about two deprecated terms!

Why on earth are you dumping all these random, extremely similar but different, not-at-all widely used terms on me? You're both making it weirdly difficult for me to come away using terms you want me to use, and also making it seem like there's a whole big history of using these terms when there really isn't.

Next bit:

but AI alignment researchers kept running into the problem


Usage has therefore shifted such that (as of late 2021) researchers use...

Okay yeah, this is getting super annoying. Who is speaking for all "AI alignment researchers"? I'm like 95% sure this is all just referring to like half a dozen people having a series of conversations in the MIRI office. But it seems to be making it sound like a whole extant field, as if me using these terms wrong will cause miscommunication with "AI alignment researchers" --


...oooh. This is the feeling of detecting Frame Control. Yeah, that feels clarifying. I am getting increasingly weirded out by this page in part because it seems to be trying to control the frame.

To be clear, I don't think this is intentional, or that any bad intent was necessarily involved. And for all I know, maybe the About page of Arbital says something like "here I will write articles as if terms were in established use in my preferred way." Maybe the whole thing was semi-aspirational/semi-fictional. But I'm not going to go looking for more explanation. My heuristic for dealing with frame control is to leave. You get a certain number of chances to say your thing and make me understand what you're trying to say, and after a certain number of frame-control-detection strikes, I just leave.

So, I'm not going to finish reading the Arbital page on Pivotal Act even though Raemon quadruple recommended it. And I guess I'll just go ahead using "pivotal act" the same way I hear other people using it, maybe while vaguely remembering the one-sentence definition I did get, and continuing to independently evaluate the validity of the concept.

I have found throughout my life that there is virtually no correlation between what media other people like (friends, critics, etc) and what I like. Not even a negative correlation; just none. I have given up trying to understand this particular phenomenon.

I share some of your frustrations with what Yudkowsky says, but I really wish you wouldn't reinforce the implicit equating of [Yudkowsky's views] with [what LW as a whole believes]. There's tons of content on here arguing opposing views.

I'm trying out independent AI alignment research.
