I don't think we currently have one. As far as I know, LessWrong hasn't had any requests made of it by law enforcement that would trip a warrant canary while I've been working here (since July 5th, 2022). I have no information about before then. I'm not sure this is at the top of our priority list; we'd need to stand up some new infrastructure for it to be more helpful than harmful (e.g. it could end up harmful if we forgot to update it, or something).
Expanding on Ruby's comment with some more detail, after talking to some other Lightcone team members:
Those of us with access to database credentials (which is, in theory, all the core team members) would be physically able to run those queries without getting sign-off from another Lightcone team member. We don't look at the contents of users' DMs without their permission unless we get complaints about spam or harassment; in those cases we take care to look at only the minimum information necessary to determine whether the complaint is valid, and this has happened extremely rarely[1]. Similarly, we don't read the contents or titles of users' never-published[2] drafts. We also don't look at users' votes except when conducting investigations into suspected voting misbehavior like targeted downvoting or brigading; when we do, we're careful to look at only the minimum amount of information necessary to render a judgment, and we try to minimize the number of moderators who conduct any given investigation.
I don't recall ever having done it; Habryka remembers having done it once.
In certain moderation views, we do see drafts that were previously published and then redrafted. Some users will post something that gets downvoted and then redraft it; we consider it reasonable for those posts to remain visible to us, because other users will already have seen them and they could easily have been archived by e.g. archive.org in the meantime.
Mod note: this post violates our LLM Writing Policy for LessWrong and was incorrectly approved, so I have delisted the post to make it only accessible via link. I've not returned it to your drafts, because that would make the comments hard to access.
@CMDiamond, please don't post more direct LLM output, or we'll remove your posting permissions.
Most of the time they can't be replicated, but I figure most people using this site know what I'm talking about.
This is not my experience of using the site. We're happy to have bugs reported to us, even if they're difficult to reproduce; we don't have the bandwidth to fix all of them on any given timeline, but we do try to prioritize those that are high-impact (which will often be true of bugs introduced during such a migration, since those are more likely to affect a large number of users).
Mod note: this post violates our LLM Writing Policy for LessWrong, so I have delisted the post to make it only accessible via link. I've not returned it to your drafts, because that would make the comments hard to access.
@Yeonwoo Kim, please don't post more direct LLM output, or we'll remove your posting permissions.
We don't need most donors to make decisions based on the considerations in this post; we need a single high-profile media outlet to notice the interesting fact that the same few hundred names keep showing up on the lists of donors to candidates with particular positions on AI. The coalition doesn't need to be large in an absolute sense; it just needs to be recognizably something you can point to when talking to a "DC person" such that they'll go, "oh, yeah, those people". (This is already the case! Just, uh, arguably in a bad way rather than a good way.)
People don't explore enough.
I think this is true for almost everyone, on the current margin. Different frames and techniques help different people. And so, curated! Let us have our ~annual reminder to actually try to solve our problems, instead of simply suffering with them.
1 minute and 10 turns of an allen key later, it was fixed.
It's also important to remember that some problems can literally be solved in one minute. (You, the person reading this: is there something you keep forgetting to buy on Amazon that would solve a problem you're dealing with?)
Mod note: this seems like an edge case in our policy on LLM writing, but I'd ask that future such uses (explicitly demarcated summaries) be put into collapsible sections.
Nit: if it were common enough for people within a specific coalition to donate to candidates of both parties due to their single-issue concern, one might imagine that the pattern would lose a lot of its strength as a negative signal (except maybe with the current admin, which as you note is very loyalty-focused).
No. It turns out, after a bit of digging, that this might be technically possible even though we're a ~7-person team, but it'd still add overhead, and I'm not sure I buy that the concerns it'd be alleviating are that reasonable[1].
Not a confident claim. I personally wouldn't be that reassured by the mere existence of such a log in this case, compared to my baseline level of trust in the other admins, but obviously my epistemic state is different from that of someone who doesn't work on the site. Still, I claim that it would not substantially reduce the (annualized) likelihood of an admin illicitly looking at someone's drafts/DMs/votes; take that as you will. I'd be much more reassured (in terms of relative risk reduction, not absolute) by the actual inability of admins to run such queries without a second admin's thumbs-up, but that would impose an enormous burden on our ability to do our jobs day-to-day without a pretty impractical level of investment in new tooling (after which I expect the burden would merely be "very large").
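For concreteness, here's a rough sketch of what that kind of tooling could look like: a thin wrapper that refuses to touch sensitive collections without a second admin's sign-off and records an audit entry for every such query. This is purely illustrative (the names, collections, and structure are made up, not how our actual stack works); the expensive part in practice is routing all admin access through something like it.

```typescript
// Illustrative only: gate queries against sensitive collections on a second
// admin's approval, and record who ran what and why.
type AuditEntry = {
  adminId: string;      // who ran the query
  collection: string;   // what they queried
  reason: string;       // free-text justification
  approvedBy?: string;  // second admin's sign-off, when required
  timestamp: Date;
};

const SENSITIVE_COLLECTIONS = ["messages", "drafts", "votes"];
const auditLog: AuditEntry[] = [];

async function auditedQuery<T>(
  adminId: string,
  collection: string,
  reason: string,
  run: () => Promise<T>,
  approvedBy?: string,
): Promise<T> {
  // Refuse to run against sensitive collections without a co-signer.
  if (SENSITIVE_COLLECTIONS.includes(collection) && !approvedBy) {
    throw new Error(`Querying ${collection} requires a second admin's sign-off`);
  }
  auditLog.push({ adminId, collection, reason, approvedBy, timestamp: new Date() });
  return run();
}
```

The overhead I'm gesturing at is that every investigation (and a fair amount of routine debugging) would have to go through a gate like this, and someone would also have to actually review the resulting log for it to buy anyone anything.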