Critique of 'Many People Fear A.I. They Shouldn't' by David Brooks.
This is my critique of David Brooks' opinion piece in the New York Times. TL;DR: Brooks believes that AI will never replace human intelligence, but he does not describe any testable capability that he predicts AI will never possess.

Brooks argues that artificial intelligence will never replace human intelligence. I believe it will. The fundamental distinction is that human intelligence emerged through evolution, while AI is being designed by humans. For AI to never match human intelligence, there would need to be a point where progress in AI becomes impossible. This would require the existence of a capability that evolution managed to develop but that science could never replicate. Given enough computing power, why would we not be able to replicate this capability by simulating a human brain? Alternatively, we could simulate evolution inside a sufficiently complex environment.

Does Brooks believe that certain functionalities can only be realized through biology? This seems unlikely, but even if it were the case, we could create biological AI. Why does Brooks believe that AI has limits that carbon-based brains produced by evolution do not have?

It is possible that he is referring to a narrower definition of AI, such as silicon-based intelligence built on the currently popular machine learning paradigm, but the article doesn't specify what AIs Brooks is talking about. In fact, one of my main concerns with the article is that Brooks' arguments rely on several ambiguous terms without explaining what he means by them. For example:

> The A.I. 'mind' lacks consciousness, understanding, biology, self-awareness, emotions, moral sentiments, agency, a unique worldview based on a lifetime of distinct and never to be repeated experiences.

Most of these terms are associated with the subjective, non-material phenomenon of consciousness (i.e., 'what it is like to be something'). However, AIs that possess all the testable capabilities of humans but lack consciousness would still be able to replace human intelligence in practice.
My thinking is that the downside of production evals is that they are reactive rather than proactive: they can only catch misaligned behavior that has already happened. As models increase in capability, we expect them to be deployed in ever more agentic and high-stakes situations. It seems bad not to be able to evaluate what they will do before they are deployed, especially if one is worried about catastrophic or irreversible misalignment scenarios.