Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Right now there are quite a few private safety docs floating around. There's evidently demand for a privacy setting lower than "only people I personally approve", but higher than "anyone on the internet gets to see it". But this means that safety researchers might not see relevant arguments and information. And as the field grows, passing on access to such documents on a personal basis will become even less efficient.

My guess is that in most cases, the authors of these documents don't have a problem with other safety researchers seeing them, as long as everyone agrees not to distribute them more widely. One solution could be to have a checkbox for new posts which makes them only visible to verified Alignment Forum users. Would people use this?
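For concreteness, here is a minimal sketch of how such a visibility flag might be checked on the forum's side. This is purely illustrative; the `Post`, `User`, and `canViewPost` names are hypothetical and not taken from the actual LessWrong/Alignment Forum codebase.

```typescript
// Hypothetical sketch of an "Alignment Forum members only" visibility flag.
// None of these types or functions come from the real LessWrong codebase.

interface User {
  id: string;
  isAlignmentForumMember: boolean; // the "verified Alignment Forum user" in the proposal
}

interface Post {
  id: string;
  authorId: string;
  afMembersOnly: boolean; // the proposed checkbox on new posts
  body: string;
}

function canViewPost(post: Post, viewer: User | null): boolean {
  // Public posts are visible to everyone, logged in or not.
  if (!post.afMembersOnly) return true;
  // Restricted posts require a logged-in viewer.
  if (viewer === null) return false;
  // Authors always see their own posts; otherwise require AF membership.
  return viewer.id === post.authorId || viewer.isAlignmentForumMember;
}
```

The substantive question is less the check itself than who counts as "verified", which is what the answer below pushes on.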

Answers

That wouldn't stay secret; I'm pretty confident someone would leak all this information at some point. Beyond that, it creates difficulties around who gets access to the Alignment Forum, since membership would then no longer be just about having sufficient knowledge to comment on these issues, but also about trust.

Comments

What would be more useful is a release panel system. Suppose I've had an idea that might be best to make public, might be best to keep secret, or might be unimportant. I don't know much about strategy, and I would like somewhere to send it for importance and info-hazard checks.