Through my arduous and repetitive drilling of various practice tasks - mental rotation, typing, Beat Saber, osu - I've noticed, as my skills improve, that I process less and less detail regarding their execution. There is a clear process to it: first, an insight into some difficulty or complexity; then, the incremental implementation of a solution to the problem; and finally, the climb into automaticity, such that I can perform the task by rote, be it by muscle memory or cognitive shortcut.

I can feel it, from the inside; I'm gradually forming cognitive black boxes. When I type, I have a mental representation of the words I wish to transcribe - typically imagined acoustically - and a mystery function that converts my internal monologue into movements of the fingers, working the keyboard and getting the monologue into the computer.

I sometimes type to and too and two in an identical way, unintentionally. This is, I think, because I think my words in sounds, and sometimes my typing black box misbehaves. These are homophonic heterographs: a matched representation in acoustic form, but a mismatched one when transcribed. To other minds that think in transcribed text, this mistake is probably much less likely to happen. But then people, for the most part, learn to talk before they type, and a "natural textual thinker" may be relatively rare.

So I have my typing black box, my conversion method from thought to finger waggling, a sort of sign language dependent on the presence of a keyboard. I have my mental rotation black box - my opaque series of checksums that, with increasing automaticity, tell me two objects match. That one's a little autoencoderish of me, isn't it? I take a complex shape, compress it down into some low-dimensional embedding, and can then decompress that representation into an alternative configuration. So am I still learning mental rotation, to an extent, or am I learning how to get away without it? Is there an actual distinction?
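(A tangent for the technically inclined: the "autoencoderish" framing can be made concrete with a toy sketch. The NumPy snippet below is purely illustrative and claims nothing about actual cognition - the template "shape", the bottleneck size, and the training loop are all invented for the example - but it shows the pattern I'm gesturing at: compress a shape into a small embedding, then decompress it into an alternative, rotated configuration.)

```python
# Toy sketch: a tiny linear autoencoder that squeezes flattened 2D "shapes"
# through a small bottleneck and is trained to emit the same shape rotated
# by a further 90 degrees. Everything here is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_POINTS = 8          # points per "shape"
DIM = 2 * N_POINTS    # flattened input dimension
CODE = 3              # bottleneck ("embedding") size
ANGLE = np.pi / 2     # the alternative configuration: rotated 90 degrees

TEMPLATE = rng.normal(size=(N_POINTS, 2))   # one fixed underlying shape

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def make_batch(n):
    """Each sample: the template at a random orientation and scale (input),
    paired with that same shape rotated a further ANGLE (target)."""
    xs, ys = [], []
    for _ in range(n):
        theta = rng.uniform(0, 2 * np.pi)
        scale = rng.uniform(0.5, 1.5)
        shape = scale * TEMPLATE @ rot(theta).T
        xs.append(shape.reshape(-1))
        ys.append((shape @ rot(ANGLE).T).reshape(-1))
    return np.array(xs), np.array(ys)

# Encoder / decoder weights (no biases; kept linear and minimal).
W_enc = rng.normal(scale=0.1, size=(DIM, CODE))
W_dec = rng.normal(scale=0.1, size=(CODE, DIM))

lr = 0.05
for step in range(2000):
    x, target = make_batch(64)
    code = x @ W_enc            # compress: shape -> low-dimensional code
    recon = code @ W_dec        # decompress: code -> rotated configuration
    err = recon - target
    loss = (err ** 2).mean()

    # Manual gradients for the two linear maps under mean-squared error.
    d_recon = 2 * err / err.size
    dW_dec = code.T @ d_recon
    d_code = d_recon @ W_dec.T
    dW_enc = x.T @ d_code
    W_enc -= lr * dW_enc
    W_dec -= lr * dW_dec

    if step % 500 == 0:
        print(f"step {step:4d}  loss {loss:.5f}")

# After training, the bottleneck is enough: the shapes secretly live on a
# low-dimensional manifold, so a 3-number code can carry the whole thing.
x, target = make_batch(4)
print("final mean abs error:", np.abs((x @ W_enc) @ W_dec - target).mean())
```

The trick, of course, is that the compression only works because the inputs were low-dimensional all along - which may be roughly the bet the black box is making.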

As more becomes automatic, we can parallelise; I can pack the dishwasher while listening to a podcast. How much can become automatic? How many of my social responses are preprogrammed? I once ran into a celebrity on the street - a face I recognised but in no way personally knew - and raised my hand in greeting before my brain metabolised that I did not know this person. How much of what I say is on cruise control? Certainly syntax; it's second nature, and I don't need to pay attention to it. Maybe greetings as well, sure, and the follow-up questions of "how are you doing?", and maybe a rote recollection of my recent past.

As time goes on - as I have more conversations, as I practise communicating information - how much more am I becoming an automaton, without noticing? How much of my day is spent truly conscious? We want some automaticity - I don't want to manually pilot my legs into walking, with intentionality behind every step - but I don't want to drift into becoming a conversational zombie.

...how much of my own internal dialogue is "truly necessary" to replicate my actions in the outside world? What if one day my internal self gets factored away to save calories, and I become hollow on the inside? I'd even insist I wasn't - perhaps even "believe" it, as far as belief can exist with nobody home. It might seem like an absurd proposition, but that factoring-away is exactly what has already happened for typing and drawing and conversation and art and all the rest. What if one day my mind decides I'm not worth keeping around? And what could I do to avoid such a fate?
