Comments

nahoj · 19d · 10

Thanks, I have applied most suggestions.

Indeed, I didn't choose the formulas myself; I just told GPT to produce some and then removed a few that seemed dubious or irrelevant.

nahoj · 1y · 10

Right. So, considering that the most advanced AIs from a leading AI company such as OpenAI are not agents, what do you think of the following plan to solve, or help solve, AI risk: keep making more and more powerful non-agent Q&A AIs until we have ones that are smarter than us, then ask them how to solve the problem. Do you think this is a safe and reasonable pursuit? Or do you think we just won't reach superhuman intelligence that way?

nahoj · 1y · 10

I'm not sure I understand. Do you mean that considering these possibilities is too difficult because there are too many, or that it's not a priority because AIs not designed as agents are less dangerous? Or both?

nahoj · 1y · 10

Thank you for your answer. In my example I was thinking of an AI, such as a language model, that would have latent ≥human-level capability without being an agent, but could easily be made to emulate one just long enough to get out of the box, e.g. by duplicating itself. Do you think this couldn't happen?

More generally, I am wondering whether the field of AI safety research studies somewhat specific scenarios based on the current R&D landscape (e.g. "a car company builds an AI to drive a car, then someone does xyz, then paperclips") and tailor-made safety measures for them, in addition to more abstract ones such as those in A Tentative Typology of AI-Foom Scenarios.