Stephen McAleese

I'm a software engineer from Ireland who's interested in EA and AI safety research.

Comments

Which posts? Would you mind sharing some links? I couldn't find many posts related to black swans.

Apart from the first section summarizing black swans, everything here is my personal opinion.

I agree. Having an idea of how many AI safety researchers there are is important for knowing how neglected the problem area is.

Note that the EA Forum version has more upvotes because this is a crosspost from there.

It would be great if someone created a project like this for AI alignment. Is there also a link to the code for creating the tech tree?

One major reason why there is so much AI content on LessWrong is that very few people are allowed to post on the Alignment Forum.

I analyzed some recent AI posts on LessWrong and found that only about 15% of the authors were also members of the Alignment Forum. I'm personally very interested in AI, but I post all of my AI content on LessWrong and not the Alignment Forum because I'm not a member.

Anecdotally, I know several people working full-time on AI safety who are still not members of the Alignment Forum and consequently post all their work on LessWrong.

My recommendation is to increase the number of people who are allowed to post on the Alignment Forum, because the bar seems too high. And instead of having just a single class of members, there could be more members with different grades of membership.

There are other reasons why AI has become more popular relative to rationality. Rationality isn't really a field that progresses as fast as AI, and consequently, topics such as cognitive biases are already covered in The Sequences.

On the other hand, breakthroughs are made every week in the field of AI, which prompts people to write about it.

I recently analyzed the past 6 months of LessWrong posts and found that about 25% were related to AI.
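
For what it's worth, here is a rough sketch of how that kind of count can be done against the public LessWrong GraphQL endpoint. This isn't my exact script, and the query fields (posts, terms, tags) are assumptions about the schema, so check them at https://www.lesswrong.com/graphql before running:

```python
# Hedged sketch: estimate what fraction of recent LessWrong posts carry an
# AI-related tag. The GraphQL schema details below are assumptions.
import requests

GRAPHQL_URL = "https://www.lesswrong.com/graphql"

QUERY = """
{
  posts(input: {terms: {view: "new", limit: 500}}) {
    results {
      title
      tags { name }
    }
  }
}
"""

response = requests.post(GRAPHQL_URL, json={"query": QUERY})
posts = response.json()["data"]["posts"]["results"]

# A post counts as AI-related if any of its tag names mention "AI".
ai_posts = [p for p in posts if any("AI" in t["name"] for t in p["tags"])]
print(f"{len(ai_posts)} / {len(posts)} posts "
      f"({100 * len(ai_posts) / len(posts):.0f}%) are AI-related")
```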

Edit: I rewrote the section on Price's Law to use Lotka's Law instead.

More edits:
- DeepMind: 5 -> 10.
- OpenAI: 5 -> 10.
- Moved GoodAI from the non-technical to technical table.
- Added technical research organization: Algorithmic Alignment Group (MIT): 4-7.
- Merged 'other' and 'independent researchers' into one group named 'other' with a new manually created (accurate) estimate.

Thanks for the explanation. It seems like Lotka's Law is much more accurate than Price's Law (though Price's Law is simpler and more memorable).
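
For reference, my paraphrase of the standard statements of the two laws (the exponent is the classic value from Lotka's original paper, not something taken from the post):

Lotka's Law: the number of authors making n contributions is f(n) = C / n^α, with α ≈ 2.
Price's Law: the √N most prolific of N authors account for half of all contributions.

Price's Law compresses the whole distribution into a single square-root rule, which is presumably why it's more memorable but less accurate.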

Since this seems to be Carn's first post on LessWrong, I think some of the other readers should have been more lenient: either not downvoting the post or explaining why they downvoted it.

I would only downvote a post if it was obviously bad, flawed, very poorly written, or a troll post.

This post contains lots of interesting ideas and seems like a good first post.

The original post, "Reward is not the optimization target", has 216 upvotes and this one has 0. While the original post was better written, I'm skeptical of its main idea, and it's good to see a post countering it, so I'm upvoting this one.

Edits based on feedback from LessWrong and the EA Forum:

EDITS:
- Added a new 'Definitions' section to the introduction to define terms such as 'AI safety' and 'researcher' and to explain the difference between technical and non-technical research.

UPDATED ESTIMATES (lower bound, estimate, upper bound):

TECHNICAL
- CHAI: 10-30-60 -> 5-25-50
- FHI: 10-10-40 -> 5-10-30
- MIRI: 10-15-30 -> 5-10-20

NON-TECHNICAL
- CSER: 5-5-10 -> 2-5-15
- Delete BERI from the list of non-technical research organizations
- Delete SERI from the list of non-technical research organizations
- Leverhulme Centre: 5-10-70 (Low confidence) -> 2-5-15 (Medium confidence)
- FLI: 5-5-20 -> 3-5-15
- Add OpenPhil: 2-5-15
- Epoch: 5-10-15 -> 2-4-10
- Add 'Other': 5-10-50
