A predicted difficulty of advanced safety is asserted to be "patch-resistant" or a "resistant problem" if it is asserted that proposed simple solutions ("patches") will fail to solve the difficulty and just regenerate the difficulty somewhere else.
The notion of a problem being "patch-resistant" is relative to the current state of the art. If a speaker asserts a problem is "patch-resistant" and is using that term as defined here, it means the speaker thinks both of the following:
Thus the Nearest unblocked strategy problem may be called "patch-resistant" by someone who thinks that (1) it is possible to design an advanced agent with a full coverage goal system, (2) a full coverage goal system would negate most Nearest unblocked strategy problems, but (3) we don't yet know how to build a full coverage goal system.
To call a problem "patch-resistant" is not to assert that it is unsolvable, but it does mean the speaker is cautioning against naive or simple solutions and that the speaker thinks some larger research project is required to deal with it.
See e.g. the history of nonmonotonic logic intended to represent 'explaining away' evidence, before the invention of Bayesian networks, as recounted in e.g. Probabilistic Reasoning in Intelligent Systems
.See also http://lesswrong.com/lw/l9/artificial_addition/.