Disclaimer: I am writing this post after some reading into alignment, but I acknowledge that my priors here are not well established. I have conducted limited ML research experiments (largely focused on the utility and interpretability of language and vision transformers, including occlusion sensitivity), and have since broadened my reading into more social and humanistic writing on AI development.
I am somewhat surprised that this particular document has never surfaced on the Forums, as far as I can tell from the search function, since it echoes many concerns I have seen in this space (that AI development is unconstrained, that it substitutes poorly chosen metrics for human goals, i.e. Goodhart's law, and that it is being pursued without regard to ethics or safety concerns, etc.). In particular, I believe this writing addresses three areas of concern I have found when reviewing some views within the AI alignment community:
Overall, even if you do not agree with these points, I highly encourage you to at least skim the report I have linked.