A mindblind genie is an AI that understands material objects and technology extremely well, but has somehow been averted from modeling human minds, and possibly its own mind, beyond some point. (Presumably via a corrigible injunction that averts its wanting to model such minds.)
The motivation is that many of the dangers of AI seem to be associated with the AI having a sufficiently advanced model of human minds or AI minds, including the deception or manipulation of its programmers, and mindcrime (modeling sentient minds in enough detail that the models are themselves objects of ethical concern).
Limiting the degree to which the AI can understand cognitive science, other minds, its own programmers, and itself is a very severe restriction, one that would rule out a number of otherwise-obvious ways of making progress on the AGI subproblem.
However, if we could in fact build an AI with an excellent grasp of how to accomplish material goals that don't involve directly modeling or manipulating human minds, it would be far from irrelevant: it could carry out instructions that were game-changing for the larger dilemma.
And while a mindblind AI would still require most of genie theory and corrigibility to be solved, it's plausible that the restrictions against modeling humans, its programmers, and some types of reflectivity would collectively make it significantly easier to build a safe form of this genie.
Thus, a mindblind AI is one of the few open candidates for "an AI that is restricted in a way that actually makes it safer to build, without being so restricted as to be incapable of game-changing achievements".