A noob goes to the SERI MATS presentations
I have been cooped up in suburbia trying to take a break, but without a routine or peers around, the days felt monotonous. I was procrastinating on writing my Prospect reflection and operations guide, and ended up reading alignment papers and getting confused about how to figure out...
ThomasW recommended [1] Unsolved Problems in ML Safety, [2] X-Risk Analysis for AI Research, and [3] Is Power-Seeking AI an Existential Risk? He said [3] is good for people with high openness to weird ideas and motivation to work on x-risk; for those who are less open, [1] covers concrete research areas and has ML credibility. He also said he wouldn't share Yudkowsky's writing with ML people, and that people don't respond well to openings built around x-risk and alarmism. Personally, I like "Is Power-Seeking AI an Existential Risk?" for its writing style and because it's a pretty comprehensive introduction. There's also a bounty for AI Safety Public Materials.