Here's a list of arguments that AI safety is less important than often claimed, although some of them are not object-level.
I've created a tag for object-level AI risk skepticism arguments. So far I've included my own post about deceptive alignment and Katja Grace's post on counterarguments to AI x-risk. What other arguments should be tagged?