A Turing machine is a universal computer: it can compute anything that any other computer can compute. A human being can write down the specification of a Turing machine and the data it is acting on, and then carry out by hand the steps that the machine would execute. Human beings have also constructed computers with the same repertoire as a Turing machine, such as the computer on which I am writing this question. There are articles on Less Wrong about mind design space, such as this one, in which the author writes:
The main reason you could find yourself thinking that you know what a fully generic mind will (won't) do, is if you put yourself in that mind's shoes - imagine what you would do in that mind's place - and get back a generally wrong, anthropomorphic answer.
But a person thinking about what an AI would do needn't put himself in that other mind's shoes and imagine what he would do in its place. He can, at least in principle, simulate that mind on a universal computer and carry out its steps mechanically.
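To make concrete what "carrying out the steps mechanically" means, here is a minimal sketch in Python of simulating a Turing machine step by step. The particular machine (a hypothetical binary-increment machine) and all of its names are illustrative assumptions, not anything from the articles above; the point is only that the simulation loop itself involves no imagining at all, just rule lookup.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Run a single-tape Turing machine until it halts or exceeds max_steps.

    transitions maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left) or +1 (right). A missing key means "halt".
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break  # no rule applies: the machine halts
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    # Read the tape back out in order, for inspection.
    cells = [tape[i] for i in sorted(tape)]
    return state, "".join(cells).strip(blank)


# Hypothetical example machine: increment a binary number, head starting at
# the leftmost bit. "start" scans right to the end; "carry" adds one from the right.
increment = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("done", "1", -1),
    ("carry", "_"): ("done", "1", -1),
}

print(run_turing_machine(increment, "1011"))  # ('done', '1100')

Every step of this loop could be done with pencil and paper; universality says the same kind of loop could, in principle, trace any other computation, including an AI's.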
So what is the Less Wrong position on whether we could understand AIs, and how is that claim compatible with the universality of computation?