When it comes to "accelerating AI capabilities isn't bad," I would suggest Kaj Sotala and Eric Drexler, with his QNR and CAIS work. Interestingly, Drexler has recently left AI safety research and returned to atomically precise manufacturing, because he now worries less about AI risk more generally. Chris Olah also believes that interpretability-driven capabilities advances are not bad, in that the positives outweigh the negatives for AGI safety.
For more general AI & alignment optimism, I would also suggest Rohin Shah. See also here.
I believe it will be made available to ChatGPT Plus subscribers, but I don't think it's available yet.
EDIT: As commenters below mentioned, it is available now (and it already was for some users at the time of this comment).