OpenPhil's 5th-highest-compensated employee earned about $184k in 2023[1], which gives you a ceiling. Anthropic currently extends offers of ~$550k to mid-level[2] engineers and researchers. Joe's role might not be on the same ladder as other technical roles, but companies like Anthropic tend to pay pretty well across the board.
Edit: retracted the first half of the claim; see this reply.
According to their public Form 990 filing.
I realize the job title says "Senior Software Engineer", but given the way their ladder is structured, I think mid-level is probably closer (though it's fuzzy).
Curated. I disagree with some stronger/broader forms of the various claims re: missing "mental elements", but I'm not sure you intend the stronger forms of those claims, and they don't seem load-bearing for the rest of the piece in any case. However, this is an excellent explanation[1] of why LLM-generated text is low-value to engage with when presented as human output, especially in contexts like LessWrong. Notably, most of these reasons are robust to LLM output improving in quality/truthfulness (though I do expect some trade-offs to become much more difficult if LLM outputs start to dominate top human outputs on certain dimensions).
To the point where I'm tempted to update our policy about LLM writing on LessWrong to refer to it.
No. After a bit of digging, it turns out this might be technically possible even though we're a ~7-person team, but it'd still be additional overhead, and I'm not sure I buy that the concerns it'd be alleviating are that reasonable[1].
Not a confident claim. I personally wouldn't be that reassured by the mere existence of such a log in this case, compared to my baseline level of trust in the other admins, but obviously my epistemic state is different from that of someone who doesn't work on the site. Still, I claim that it would not substantially reduce the (annualized) likelihood of an admin illicitly looking at someone's drafts/DMs/votes; take that as you will. I'd be much more reassured (in terms of relative risk reduction, not absolute) by the actual inability of admins to run such queries without a second admin's thumbs-up, but that would impose an enormous burden on our ability to do our jobs day-to-day without a pretty impractical level of investment in new tooling (after which I expect the burden would merely be "very large").
I don't think we currently have one. As far as I know, LessWrong hasn't received any law enforcement requests that would trip a warrant canary while I've been working here (since July 5th, 2022); I have no information about before then. I'm not sure this is at the top of our priority list; we'd need to stand up some new infrastructure for a canary to be more helpful than harmful (e.g. so it doesn't trip a false alarm because we forgot to update it).
Expanding on Ruby's comment with some more detail, after talking to some other Lightcone team members:
Those of us with access to database credentials (which is, in theory, all of the core team members) would be physically able to run those queries without getting sign-off from another Lightcone team member. We don't look at the contents of users' DMs without their permission unless we get complaints about spam or harassment; in those cases we take care to look only at the minimum information necessary to determine whether the complaint is valid, and this has happened extremely rarely[1]. Similarly, we don't read the contents or titles of users' never-published[2] drafts. We also don't look at users' votes except when investigating suspected voting misbehavior like targeted downvoting or brigading; when we do, we're careful to look only at the minimum amount of information necessary to render a judgment, and we try to minimize the number of moderators involved in any given investigation.
I don't recall ever having done it; Habryka remembers having done it once.
In certain moderation views, we do see drafts that were previously published and then redrafted. Some users will post something that gets downvoted and then redraft it; we consider seeing these reasonable because other users will already have seen the post, and it could easily have been archived by e.g. archive.org in the meantime.
Mod note: this post violates our LLM Writing Policy for LessWrong and was incorrectly approved, so I have delisted the post to make it only accessible via link. I've not returned it to your drafts, because that would make the comments hard to access.
@CMDiamond, please don't post more direct LLM output, or we'll remove your posting permissions.
Most of the time they can't be replicated, but I figure most people using this site know what I'm talking about.
This is not my experience of using the site. We're happy to have bugs reported to us, even if they're difficult to reproduce; we don't have the bandwidth to fix all of them on any given timeline, but we do try to triage those that are high-impact (which will often be true for bugs introduced during such a migration, since those are more likely to affect a large number of users).
Mod note: this post violates our LLM Writing Policy for LessWrong, so I have delisted the post to make it only accessible via link. I've not returned it to your drafts, because that would make the comments hard to access.
@Yeonwoo Kim, please don't post more direct LLM output, or we'll remove your posting permissions.
We don't need most donors to make decisions based on the considerations in this post; we need a single high-profile media outlet to notice the interesting fact that the same few hundred names keep showing up on the lists of donors to candidates with particular positions on AI. The coalition doesn't need to be large in an absolute sense; it just needs to be recognizable enough that you can point to it when talking to a "DC person" and they'll go, "oh, yeah, those people". (This is already the case! Just, uh, arguably in a bad way, instead of a good way.)
Oh, alas. Thank you for the correction!
(I still expect OpenPhil the LLC to have been paying comparable amounts to its highest-compensated employees, but not so confidently that I would assert it outright.)