I want to point to an important variable of personal experience: how much are you consulting an explicit representation of what you intend to be doing?
I think "intention" is a better handle than most, for what I'm trying to point at.[1] I think a common handle would be "should" -- as in "what I should be doing". But you can think you "should", say, go to the dentist, while having no intention of doing so. I want to point at a more behaviorist notion, where (in order to count) an explicit representation of your goals is a signal which you are at least sometimes responsive to; causal reasons why you do one thing rather than another.[2]
So, for example, I keep a notebook open on my desk, where I write to-do items. Writing something in the notebook explicitly sets the intention to do it, and the note remains in my field of view. I might follow up on it immediately, in which case the external memory was not really useful as memory, but rather as a clear signal to myself that the item was a priority for me.
I might also spend the day in the living room, where the work notebook is not visible. Where I sit is another sort of representation of what I intend: if I'm seated at my work desk, I almost always intend to be working, whereas if I'm seated in the living room, I intend to be relaxing ("doing whatever I want" -- which can include work-like things, but approached with a more playful attitude).
My thoughts can also serve as "explicit representations" in the relevant sense: mentally labelling something as a "work day" or "break day" sets an intention, lodged in memory, which I may consult later to guide my behavior.
I want to talk about that variable in general: how much you consult explicit representations of what you intend to do, whether they're mental representations or physical representations.
At the extreme of the explicitness-of-will direction, you would have someone engaged in a deeply-nested goal-stack, constantly and explicitly checking what they have to do next, relying both on a lot of explicit content in working memory and on longer-term storage in the form of personal memory and external records like to-do lists.
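To make that extreme concrete, here's a toy sketch (the goal names and the decomposition table are invented for illustration, not taken from anywhere): an agent in full goal-stack mode never acts without first consulting the top of an explicit stack, pushing subgoals as it plans and popping them as it finishes.

```python
# Toy model of maximally "intentional" behavior: every action is chosen
# by consulting an explicit representation (the top of a goal stack).
# All goals and the decomposition table below are invented for illustration.

def subgoals(goal):
    """Stand-in for planning: decompose a goal into explicit subgoals."""
    known = {
        "write blog post": ["draft outline", "write sections"],
        "write sections": ["write intro", "write body"],
    }
    return known.get(goal, [])  # primitive goals decompose no further

goal_stack = ["write blog post"]  # the explicit record, like a to-do list
expanded = set()                  # goals whose subgoals are already pushed

while goal_stack:
    current = goal_stack[-1]      # consult the representation before acting
    subs = subgoals(current)
    if subs and current not in expanded:
        expanded.add(current)
        goal_stack.extend(reversed(subs))  # deepen the nesting
    elif subs:
        goal_stack.pop()          # all subgoals done; parent goal complete
    else:
        print("doing:", current)  # a primitive action: just do it
        goal_stack.pop()          # mark done, return attention to parent
```

The other end of the spectrum, by contrast, acts straight from the current situation, with no such stack to consult.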
The opposite end of the spectrum is spontaneous play: doing whatever feels most alive, reacting to your current situation. I'm not ruling out accessing memory, so it's not necessarily a myopic state of being; it's just calculating what you want more from "is" beliefs than from "ought" beliefs.[3]
So, intentionality vs spontaneity?
If you're being very intentional, your explicit goal-representations had better be accurate. (Or, to put it a different way, they'd better represent valuable goals.) If your to-do list becomes disconnected from what you really (implicitly) want, its purpose has been lost. Akrasia might be a defense against this.
Forming accurate explicit representations of your goals can obviously be very helpful, but spontaneity isn't necessarily non-agentic. When you're in the flow, you might be very productive without being very intentional in the current sense.
Humans have a sleep/wake cycle, but we also seem to need (or at least, express a need for) a different kind of rest: a work/play cycle (work during the day and relax in the evening, work during weekdays and relax during weekends, take vacations every so often, that sort of thing). The notion of spontaneity here seems like a reasonably good model of the point of evenings, weekends, and vacations: doing things because they feel good, because they're alive for you in the moment, rather than making and completing to-do lists. (Of course, some people won't fit this model.)
One possible reason to have this sort of work/play cycle might be to recalibrate your explicit models of what you enjoy and what you want; time spent in spontaneous mode can serve as a sort of check, providing more data for intentional mode.
A different (more Hansonian) model: your explicit representations of what you want are always skewed, representing what you "should" want (for signaling purposes). This means time spent in intentional mode will under-satisfy some of your drives. You need time in spontaneous mode to satisfy the drives you don't want to explicitly represent.
[1] One argument against this choice is that the nearby namespace is already overloaded in philosophy, with "intension" vs "intention" vs "ententional" all being given specific meanings.

[2] It isn't totally clear this modeling choice is the right one.

[3] And also calculating your choices more from beliefs about the external world rather than beliefs about yourself; a belief that you "will do this" functions an awful lot like a belief that you "should do this" sometimes, so we also need to rule that case out to make our spectrum meaningful.