Just as different people need different amounts of sleep, the work/play balance discussed below may also differ from person to person. I wonder whether "needs much more play than the average person" is a good model for ADHD.

On the model developed below, how much play someone needs would be (in part) a function of the fit between their explicit goals and their implicit goals.
"Intentionality" fits somewhat nicely Michael Bratman's view of intentions as partial plans: you fix some aspect of your policy to satisfy a desire, so that you are robust against noisy perturbations (noisy signals, moments of "weakness of will", etc), can use the belief that you're going to behave in a certain way as an input to your further decisions and beliefs (as well as other agents' precommitments), not have to precompute everything in runtime, etc.[1]
A downside of the word is that it collides, in the namespace, with how "intentionality" is typically used in philosophy of mind, where it means something close to referentiality (cf. Tomasello's shared intentionality).
Perhaps the concept of "deliberation" from LOGI is trying to point in this direction, although it covers more than just consulting explicit representations:
The human mind, owing to its accretive evolutionary origin, has several major distinct candidates for the mind’s “center of gravity.” For example, the limbic system is an evolutionarily ancient part of the brain that now coordinates activities in many of the other systems that later grew up around it. However, in (cautiously) considering what a more foresightful and less accretive design for intelligence might look like, I find that a single center of gravity stands out as having the most complexity and doing most of the substantive work of intelligence, such that in an AI, to an even greater degree than in humans, this center of gravity would probably become the central supersystem of the mind. This center of gravity is the cognitive superprocess which is introspectively observed by humans through the internal narrative—the process whose workings are reflected in the mental sentences that we internally “speak” and internally “hear” when thinking about a problem. To avoid the awkward phrase “stream of consciousness” and the loaded word “consciousness,” this cognitive superprocess will hereafter be referred to as deliberation.
[ ... ]
Deliberation describes the activities carried out by patterns of thoughts. The patterns in deliberation are not just epiphenomenal properties of thought sequences; the deliberation level is a complete layer of organization, with complexity specific to that layer. In a deliberative AI, it is patterns of thoughts that plan and design, transforming abstract high-level goal patterns into specific low-level goal patterns; it is patterns of thoughts that reason from current knowledge to predictions about unknown variables or future sensory data; it is patterns of thoughts that reason about unexplained observations to invent hypotheses about possible causes. In general, deliberation uses organized sequences of thoughts to solve knowledge problems in the pursuit of real-world goals.
Cf. https://www.lesswrong.com/w/deliberate-practice. Wiktionary defines "deliberate" in terms of "intentional": https://en.wiktionary.org/wiki/deliberate#Adjective.
At least that's the Bratman-adjacent view of intention that I have.
I want to point to an important variable of personal experience: how much are you consulting an explicit representation of what you intend to be doing?
I think "intention" is a better handle than most for what I'm trying to point at.[1] A common handle would be "should" -- as in "what I should be doing". But you can think you "should", say, go to the dentist, while having no intention of doing so. I want to point at a more behaviorist notion, where (in order to count) an explicit representation of your goals is a signal to which you are at least sometimes responsive; a causal reason why you do one thing rather than another.[2]
So, for example, I keep a notebook open on my desk, where I write to-do items. If I write something in the notebook, it explicitly sets the intention to do the thing, and it remains in my field of view. I might follow up on it immediately, in which case the external memory was not really useful as memory but rather as a clear signal to myself that it was a priority for me.
I might also spend the day in the living room, where the work notebook is not visible. Where I sit is another sort of representation of what I intend: if I'm seated at my work desk, I almost always intend to be working, whereas if I'm seated in the living room, I intend to be relaxing ("doing whatever I want" -- which can include work-like things, but approached with a more playful attitude).
My thoughts can also serve as "explicit representations" in the relevant sense: mentally labelling something as a "work day" or "break day" sets an intention, lodged in memory, which I may consult later to guide my behavior.
I want to talk about that variable in general: how much you consult explicit representations of what you intend to do, whether they're mental representations or physical representations.
At the extreme end of the explicitness-of-will direction, you would have someone engaged in a deeply nested goal-stack, constantly checking explicitly what they have to do next, with a lot of explicit content both in working memory and in longer-term memory (personal memory and external records like to-do lists).
The opposite end of the spectrum is spontaneous play: doing whatever feels most alive, reacting to your current situation. I'm not ruling out accessing memory, so it's not necessarily a myopic state of being; it's just that you calculate what you want more from "is" beliefs than from "ought" beliefs.[3]
So, intentionality vs spontaneity?
If you're being very intentional, your explicit goal-representations had better be accurate. (Or, to put it a different way, they'd better represent valuable goals.) If your to-do lists become disconnected from what you really (implicitly) want, their purpose has been lost. Akrasia might be a defense against this.
Forming accurate explicit representations of your goals can obviously be very helpful, but spontaneity isn't necessarily non-agentic. When you're in a flow state, you might be very productive without being very intentional in the sense used here.
Humans have a sleep/wake cycle, but we also seem to need (or at least, express a need for) a different kind of rest: a work/play cycle (work during the day and relax in the evening, work during weekdays and relax during weekends, take vacations every so often, that sort of thing). The notion of spontaneity here seems like a reasonably good model of the point of evenings, weekends, and vacations: doing things because they feel good, because they're alive for you in the moment, rather than making and completing to-do lists. (Of course, some people won't fit this model.)
One possible reason to have this sort of work/play cycle might be to recalibrate your explicit models of what you enjoy and what you want; time spent in spontaneous mode can serve as a sort of check, providing more data for intentional mode.
A different (more Hansonian) model: your explicit representations of what you want are always skewed, representing what you "should" want (for signaling purposes). This means time spent in intentional mode will under-satisfy some of your drives. You need time in spontaneous mode to satisfy the drives you don't want to explicitly represent.
[1] One argument against this choice is that the nearby namespace is already overloaded in philosophy, with "intension" vs "intention" vs "ententional" all being given specific meanings.

[2] It isn't totally clear that this modeling choice is the right one.

[3] And also calculating your choices more from beliefs about the external world than from beliefs about yourself; a belief that you "will do this" sometimes functions an awful lot like a belief that you "should do this", so we also need to rule that case out to make our spectrum meaningful.