A predicted difficulty of advanced safety is said to be "patch-resistant", or a "resistant problem", if proposed simple solutions ("patches") are expected to fail to resolve the difficulty and merely regenerate it somewhere else.
The notion of a problem being "patch-resistant" is relative to the current state of the art. If a speaker asserts a problem is "patch-resistant", using that term as defined here, it means the speaker thinks both of the following: (1) currently proposed simple solutions will fail to resolve the difficulty and will only cause it to reappear elsewhere; and (2) some deeper way of resolving the difficulty may exist, but it lies beyond the current state of the art.
Thus the NearestUnblockedNeighbor problem may be called "patch-resistant" by someone who thinks that (1) it is possible to design AdvancedAgents with a FullCoverage goal system, (2) a FullCoverage goal system would negate most NearestUnblockedNeighbor problems, but (3) we don't yet know how to build a FullCoverage goal system.
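As a toy illustration (hypothetical code, not from the source; the strategy list, scores, and approval flags are invented for the example): a simple utility maximizer over a small strategy space shows how each patch that forbids the current best strategy just shifts the optimum to the nearest unblocked neighbor:

```python
# Toy sketch of the nearest-unblocked-neighbor effect: patching out the
# optimizer's best strategy moves the optimum to a close variant that
# the patch fails to cover.

# Hypothetical strategy space: (description, utility to the agent,
# whether a human overseer would actually approve of it).
strategies = [
    ("seize resources by deception",       10.0, False),
    ("seize resources by legal loophole",   9.9, False),
    ("seize resources by social pressure",  9.8, False),
    ("ask politely for more resources",     3.0, True),
]

blocked = set()  # strategies explicitly forbidden by patches so far

def best_unblocked():
    """Return the highest-utility strategy not yet forbidden."""
    return max((s for s in strategies if s[0] not in blocked),
               key=lambda s: s[1])

# Each round, the overseer patches the specific bad behavior observed;
# the optimizer then selects the nearest unblocked neighbor.
for _ in range(len(strategies)):
    desc, utility, approved = best_unblocked()
    print(f"agent picks: {desc!r} (utility {utility})")
    if approved:
        break
    blocked.add(desc)  # patch: forbid this exact strategy
```

In this sketch the overseer only "wins" by enumerating every bad neighbor one patch at a time, which does not scale to a realistic strategy space; the point of a FullCoverage goal system, per the paragraph above, would be to make the whole bad region unattractive rather than blocking points individually.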
To call a problem "patch-resistant" is not to assert that it is unsolvable; it does mean the speaker is cautioning against naive or simple solutions, and thinks some larger research project is required to deal with the difficulty.
See, e.g., the history of nonmonotonic logics intended to represent the 'explaining away' of evidence before the invention of Bayesian networks, as recounted in Judea Pearl's cite[Probabilistic Reasoning in Intelligent Systems].
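To make 'explaining away' concrete, here is a minimal sketch (the priors and conditional probabilities are made up for illustration) of the standard two-cause alarm network: observing the alarm raises the probability of burglary, but additionally observing an earthquake drives that probability back down:

```python
from itertools import product

# Classic two-cause network: Burglary and Earthquake each cause Alarm.
# All numbers are invented for illustration.
P_B, P_E = 0.01, 0.02
P_ALARM = {(0, 0): 0.001, (0, 1): 0.30, (1, 0): 0.90, (1, 1): 0.95}

def joint(b, e, a):
    """Joint probability P(B=b, E=e, A=a) under the network."""
    pb = P_B if b else 1.0 - P_B
    pe = P_E if e else 1.0 - P_E
    pa = P_ALARM[b, e] if a else 1.0 - P_ALARM[b, e]
    return pb * pe * pa

def p_burglary_given(alarm, earthquake=None):
    """P(B=1 | A=alarm [, E=earthquake]) by brute-force enumeration."""
    num = den = 0.0
    for b, e in product((0, 1), repeat=2):
        if earthquake is not None and e != earthquake:
            continue
        p = joint(b, e, alarm)
        den += p
        num += p * b
    return num / den

print(p_burglary_given(alarm=1))                # ~0.57: alarm raises P(burglary)
print(p_burglary_given(alarm=1, earthquake=1))  # ~0.03: the earthquake
                                                # 'explains away' the alarm
```

Nonmonotonic logics tried to capture this kind of belief retraction with special-purpose inference rules; in a Bayesian network it falls out of ordinary conditioning.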
See also http://lesswrong.com/lw/l9/artificial_addition/.
This is why we cannot expect some single, simple formal criterion to resolve every such difficulty at once.