An anonymous user named Omega posted a critique of Redwood Research on the EA Forum. The post highlights four main areas: (1) Lack of senior ML staff, (2) Lack of communication & engagement with the ML community, (3) Underwhelming research output, and (4) Work culture issues.
I'm linkposting it here, since I imagine some LW users will have thoughts/comments. See also this comment from Nate Thomas, and note that Redwood has an anonymous feedback form.
We believe that Redwood has some serious flaws as an org, yet has received a significant amount of funding from a central EA grantmaker (Open Philanthropy). Conflicts of interest (COIs) that were inadequately kept in check may be partly responsible for funders giving a relatively immature org a large amount of money, with some negative effects on the field and the EA community. We will share our critiques of Constellation (and Open Philanthropy) in a follow-up post. We also have some suggestions for Redwood that we believe might help them achieve their goals.
Redwood is a young organization with room to improve. While there may be flaws in their current approach, they can learn and adapt to produce more accurate and reliable results in the future. Many successful organizations made significant pivots while at a similar scale to Redwood, and we remain cautiously optimistic about Redwood's future potential.
Standard caveat: I don't agree with everything in the post, or even endorse its main conclusions; also see my comment.
Copying over my comment from the EA Forum version.
I think it's great that you're releasing some posts that criticize/red-team some major AIS orgs. It's sad (though understandable) that you felt like you had to do this anonymously.
I'm going to comment a bit on the Work Culture Issues section. I've spoken to some people who work at Redwood, have worked there, or have considered working there.
I think my main comment is something like: you've done a good job pointing at some problems, but it's pretty hard to figure out what should be done about them. To be clear, I think the post may be useful to Redwood (or the broader community) even if you only "point at problems", and I don't think people should withhold these write-ups until they've solved all the problems.
But in an effort to figure out how to make these critiques more valuable moving forward, here are some thoughts:
I'm also guessing that there are some low-hanging fruit interventions that external red-teamers could identify. For example, here are three things that I think Redwood should do:
These are three examples of interventions that seem valuable and (relatively) low-cost to me. I'd be excited to see any intervention ideas your team comes up with, and I'd be excited to see a "proposed interventions" section in future reports. (Though again, I don't think you should feel obligated to do this; I think it's good to get things out there even if they're just raising awareness of problems.)
We've crossposted the full text on LessWrong here: https://www.lesswrong.com/posts/SuZ6Guuos7CjfwRQb/critiques-of-prominent-ai-safety-labs-redwood-research