There is a persistent meme that AIs such as large language models (ChatGPT etc.) fundamentally lack the ability to develop human-like intelligence. Central to it is the idea that LLMs are merely next-word probability predictors based on pattern matching, and that they therefore cannot possibly...
There is a popular tendency to dismiss people who are concerned about AI safety as "doomsday prophets", carrying with it the suggestion that predicting an existential risk in the near future automatically discredits them (because "you know, they have always been wrong in the past"). Example Argument Structure > Predictions...
Exposé I regularly encounter people who "are not afraid of misaligned AGI as long as it doesn't even have a body to inflict harm with in the real world". This article is intended as a study of how to deal with this kind of argument. It can also...
Disclaimer: This post is largely based on my long response to someone who asked why it would be hard to stop or remove an AI that has escaped into the internet. It includes a list of obvious strategies such an agent could employ. I have thought about the security risk of...