The concept of "schemers" seems to be becoming increasingly load-bearing in the AI safety community. However, I don't think it's ever been particularly well-defined, and I suspect that taking this concept for granted is inhibiting our ability to think clearly about what's actually going on inside AIs (in a similar way to e.g. how the badly-defined concept of alignment faking obscured the interesting empirical results from the alignment faking paper).
In my mind, the spectrum from "almost entirely honest, but occasionally flinching away from aspects of your motivations you're uncomfortable with" to "regularly and explicitly thinking about how you're going to fool humans in order to take over the world" is a pretty continuous one. Yet people generally treat "schemer" as a fairly binary classification.
To be clear, I'm not confident that even "a spectrum of scheminess" is a good way to think about the concept. There are likely multiple important dimensions that could be disentangled; and eventually I'd like us to develop properly scientific theories of concepts like honesty, deception, and perhaps even "scheming". Our current lack of such theories shouldn't be a barrier to using those terms at all, but it does suggest they should be used with a level of caution that I rarely see.