LESSWRONG
nz
Comments

GPT-4
nz · 2y* · 40

I believe it will be made available to ChatGPT Plus subscribers, but I don't think it's available yet.

EDIT: as commenters below mentioned, it is available now (and it had already been for some users at the time of this message).

Who are some prominent reasonable people who are confident that AI won't kill everyone?
nz · 3y · 21

+1 for Quintin. I would also suggest this comment here.

Who are some prominent reasonable people who are confident that AI won't kill everyone?
Answer by nz · Dec 06, 2022 · 30

When it comes to "accelerating AI capabilities isn't bad", I would suggest Kaj Sotala and Eric Drexler with his QNR and CAIS. Interestingly, Drexler has recently left AI safety research and returned to atomically precise manufacturing, as he now worries less about AI risk more generally. Chris Olah also believes that interpretability-driven capabilities advances are not bad, in that the positives outweigh the negatives for AGI safety.

For more general AI & alignment optimism I would also suggest Rohin Shah. See also here.

Posts

22 · NeuroAI for AI safety: A Differential Path · 7mo · 0
23 · Language models can explain neurons in language models · Ω · 2y · 0
17 · The Quantization Model of Neural Scaling · 2y · 0
151 · GPT-4 · 2y · 150