All of nz's Comments + Replies

I believe it will be made available to ChatGPT Plus subscribers, but I don't think it's available yet


EDIT: as commenters below mention, it is now available (and had already been for some users at the time of this comment)

7 · Qumeric · 2mo
I just bought a new subscription (I didn't have one before), and it is available to me.
9 · gwern · 2mo
No, it already is; the rollout is just apparently staggered. EDIT: it should be available to everyone now, and I've also received API access.
3 · Optimization Process · 5mo
I paid a bounty for the Shard Theory link, but this particular comment... doesn't do it for me. It's not that I think it's ill-reasoned, but it doesn't trigger my "well-reasoned argument" sensor -- it's too... speculative? Something about it just misses me, in a way that I'm having trouble identifying. Sorry!
Answer by nz · Dec 06, 2022 · 30

When it comes to "accelerating AI capabilities isn't bad", I would suggest Kaj Sotala, and Eric Drexler with his QNR and CAIS work. Interestingly, Drexler has recently left AI safety research and returned to atomically precise manufacturing because he now worries less about AI risk in general. Chris Olah also believes that interpretability-driven capabilities advances are not bad, in that the positives outweigh the negatives for AGI safety.


For more general AI & alignment optimism I would also suggest Rohin Shah. See also here.

1 · Optimization Process · 5mo
* Kaj Sotala: solid. Bounty!
* Drexler: Bounty!
* Olah: hrrm, no bounty, I think: it argues that a particular sort of AI research is good, but seems to concede the point that pure capabilities research is bad. ("Doesn't [interpretability improvement] speed up capabilities? Yes, it probably does—and Chris agrees that there's a negative component to that—but he's willing to bet that the positives outweigh the negatives.")