Hilary Putnam showed that any physical system implements any finite-state automaton (FSA) under a suitably chosen mapping. This is bad news for computationalism: if every rock implements every computation, the claim "this system is computing X" stops meaning anything.
The standard response (Chalmers) is that real implementations need to handle counterfactual inputs correctly, not just the actual ones. I think this misses the more obvious problem with Putnam's construction.
Think about how a CPU's physical state actually maps to bits. You look at specific physical regions, measure charge via voltage, and threshold the result. That mapping is fixed before you run any program and doesn't change depending on what you run. It's just a fact about the hardware.
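A minimal sketch of that kind of readout, with made-up numbers (the 0.8 V threshold and the voltage values are illustrative assumptions, not real hardware specs). The point is that the decoding rule mentions only physics, never any program:

```python
THRESHOLD_VOLTS = 0.8  # hypothetical logic threshold, fixed in advance

def decode(cell_voltages):
    """Map measured voltages to bits; knows nothing about any program."""
    return [1 if v > THRESHOLD_VOLTS else 0 for v in cell_voltages]

# The same rule applies no matter what was computed:
print(decode([0.1, 1.2, 0.05, 1.1]))  # -> [0, 1, 0, 1]
```

The function could be written down, published, and audited before the machine ever executes an instruction.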
Putnam's rock mapping doesn't work like that. To know which microstate of the rock corresponds to which computational state, you need to already know what input led to that microstate. The mapping is constructed after the fact, using knowledge of the computation to reverse-engineer a physical correspondence. The rock isn't doing the computation; the mapping is.
More concretely: for a CPU you can describe what the machine looks like physically when a program finishes, without knowing anything about what the program does. For the rock, you can't say which microstate means "done" without already knowing the answer. The mapping presupposes the result.
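The construction can be made explicit in a few lines. In this sketch (the toy parity FSA and the microstate labels are my own illustrative assumptions), notice the order of operations: we must run the FSA abstractly first, and only then can the rock-to-state mapping be written down:

```python
def fsa_trace(transition, start, inputs):
    """Run an abstract FSA and record its state at each step."""
    states = [start]
    for symbol in inputs:
        states.append(transition[(states[-1], symbol)])
    return states

# Toy FSA: tracks the parity of 1s seen so far.
transition = {("even", 0): "even", ("even", 1): "odd",
              ("odd", 0): "odd", ("odd", 1): "even"}
trace = fsa_trace(transition, "even", [1, 1, 0, 1])

# The rock just passes through distinct microstates m0, m1, ...
rock_microstates = ["m0", "m1", "m2", "m3", "m4"]

# The Putnam-style "implementation" mapping is only definable once the
# trace is known: pair each microstate with whatever state the FSA was in.
mapping = dict(zip(rock_microstates, trace))
print(mapping)
```

All the computational work happens in `fsa_trace`; the rock contributes nothing but a supply of distinct physical states to label.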
A cleaner criterion than Chalmers': a genuine implementation mapping has to be stateable before and independently of running the computation. Putnam's mappings all fail this.
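One way to cash out this criterion operationally (the check and its examples are my own sketch, not a standard test): commit to a mapping first, then ask whether the physical trajectory, read through that fixed mapping, reproduces the FSA's state sequence:

```python
def implements(mapping, physical_trajectory, fsa_states):
    """True iff the pre-committed mapping makes the trajectory match."""
    readout = [mapping.get(p) for p in physical_trajectory]
    return readout == fsa_states

# A mapping written down before anything runs:
fixed_mapping = {"low": "even", "high": "odd"}

# A device that physically flips state whenever it sees a 1:
trajectory = ["low", "high", "low", "low", "high"]
print(implements(fixed_mapping, trajectory,
                 ["even", "odd", "even", "even", "odd"]))  # -> True
```

A Putnam-style mapping can't be fed to a check like this, because the dictionary itself can't be filled in until the computation's answer is already known.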
And the conclusion here isn't that rocks don't compute. Rocks compute rock dynamics; that's a real computation, fully grounded in physics. There's just a fact of the matter about what a system is computing, and that fact rules out its computing everything simultaneously. Which is all computationalism actually needs.