Listening to techbros, I often hear the notion that AI, or more specifically LLMs like ChatGPT or Claude, are "just tools to be used." This notion is usually followed by some pro-AI or AI-accelerationist argument, but that's beside the point I want to make.
I feel as though AI has exceeded the capabilities of a "tool," and it's probably harmful to keep calling it one. Take for example a hammer, or even a gun, both reasonably classified as tools: by definition they are used to complete a task, whether that's hammering a nail or subduing a criminal. An LLM fits this definition as well, but I've noticed the comparison breaks down when it comes to distributing responsibility. When someone hammers a nail, responsibility for the completed task falls wholly on the person hammering rather than on the hammer itself. In the same way, when someone is shot, the responsibility falls on the shooter rather than on the gun.
But this changes with LLMs. If someone vibecodes an app (uses AI to program it), the distribution of responsibility becomes much murkier. The user prompts the LLM, while the LLM does the heavy lifting of actually writing the code, so we are inclined to assign responsibility, or credit, for the task's completion far less one-sidedly than we would with a hammer.
In this scenario the LLM clearly exceeds the classification of 'tool', at least if hammers and firearms set the standard, and the two should not be treated as equivalent. The problem with this false equivalence is that the LLM gets underestimated, dropped to the level of a hammer, which only fuels anti-AI-safety sentiment at the consumer level. To resolve this, either AI should be decoupled from the label of 'tool' entirely, or the definition of 'tool' must evolve along with it.