Epistemic Status: There's been a fair amount of discussion about counting over the last couple of weeks. I'm not aiming to be correct here, just less wrong than when I started thinking about this.
Tell Me In a Low Count of Words:
Counting is useful insofar as it allows an entity to order a succession of recognitions. The most basic recognition is perception. To recognize the succession of perceptions is an internal abstraction of time; I'll argue that this comes before everything else. Whenever the succession of recognitions does not depend on the contents of the recognitions, just that they occur, counting is a useful abstraction. There is also the possibility of relating external abstractions for counting to the internal ones, which creates an interesting coordination problem: numbers.
Tell Me in a Long Count of Words:
I think it's important to start by noting that counting is hard. AllAmericanBreakfast recounts a disagreement over an inventory count on a project to install signage for a college campus's botanical collection: no one could agree on how many posts had been installed.
They spent a significant amount of time pinning down an 'exact' number in order to create consensus around where the project stood. Reflecting on this, they write,
> ...[W]e should be relieved that the project of "getting hard data" (i.e. science) is able to create some consensus some of the time. ...Strategically, the "hardness" of a number is its ability to convince the people you want to convince, and drive the work in the direction you think it should go.
Numbers allow for synchronization and coordination. Eigil Rischel also thinks counting is hard. They note that even fully grown human brains don't come hardwired with an arbitrarily powerful "compare the size of two collections" module. Counting is a technology - it had to be invented.
Given the hardness of counting, can we at least recognize when it would be useful? Johnswentworth proposes something abstract here. Take the inventory disagreement. There is a job to be done: AllAmericanBreakfast looks at the state of the project and wants to make a determination about what should be done next. Say the state loosely consists of an abstraction of the botanical garden as it is. The goal is explicitly to install signs, so it seems clear we should pose a query that takes in a (location) attention distribution and the map, and returns whether a sign stands at that location. Signs occupy space, so there can only be so many non-overlapping locations where signs could be. The number of locations is therefore finite, and we can form a finite collection of such queries. AllAmericanBreakfast specifically writes that,
> In our meeting with the VP yesterday, this inventory helped cut through the BS and create a consensus around where the project stands. Instead of arguing over whose estimate is correct, I can redirect conversation to "we need to get this inventory done so that we'll really know."
In other words, we should be able to reduce the original question to this collection of location queries. Now, an inventory of the signs would be something like a collection or set of locations for the current signs. Say we swapped the locations of two of the signs: does that change the final estimate we're interested in? No. This is where Johnswentworth's idea comes into play,
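As a minimal sketch (all names hypothetical), we can represent the inventory as a set of installed-sign locations. Swapping two signs permutes how we enumerate the collection, but the set itself is unchanged, so any downstream decision based on it gives the same answer.

```python
# Hypothetical sketch: an inventory as a set of installed-sign locations.
# Swapping two signs reorders the collection but leaves the set (and its
# size) unchanged, so any decision based on it is unaffected.

PLANNED_LOCATIONS = {"oak", "rose", "fern", "maple", "cactus"}

def signs_remaining(installed: set) -> int:
    """What's left to do, given the current inventory."""
    return len(PLANNED_LOCATIONS - installed)

inventory_a = {"oak", "rose", "fern"}
inventory_b = {"fern", "oak", "rose"}  # same signs, enumerated in another order

assert inventory_a == inventory_b
assert signs_remaining(inventory_a) == signs_remaining(inventory_b) == 2
```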
Formally, we can define a function COUNTS such that,
- COUNTS is invariant under reordering its inputs
- Any function which is invariant under reordering its inputs can be written as g(COUNTS(x)) for some function g
According to this definition, if the reduction above holds - what remains to be done doesn't change when we swap the locations of two signs - then we have a further reduction: the determination factors through COUNTS. In words, if we count the number of signs in the botanical garden, then we can make a determination of what needs to be done to finish. To whatever degree what's left to be done depends only on the result of COUNTS is the degree to which the last reduction is valid.
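The factorization property can be illustrated concretely, with `collections.Counter` standing in for COUNTS (the example functions `f` and `g` are my own, purely illustrative choices):

```python
from collections import Counter

def COUNTS(xs):
    """Map a sequence to its multiset of element counts.

    Invariant under reordering: COUNTS(xs) == COUNTS(reversed(xs)).
    """
    return Counter(xs)

# Any reorder-invariant function factors through COUNTS. Here f answers
# "how many signs are installed?" and g reads that off the counts alone,
# so f = g . COUNTS.
def f(observations):
    return sum(1 for obs in observations if obs == "sign")

def g(counts):
    return counts["sign"]

obs = ["sign", "empty", "sign", "sign", "empty"]
assert f(obs) == g(COUNTS(obs)) == 3
assert COUNTS(obs) == COUNTS(list(reversed(obs)))
```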
One problem with the previous argument is that modeling the inventory as a set of locations is non-trivial. We've replaced the problem of recognizing numbers with the problem of recognizing sets, another abstraction.
One alternative is to use recursion. However, John argues that most applications of numbers do not involve induction in any obvious way - not even implicitly. On the other hand, AllAmericanBreakfast and Eigil Rischel seem to hold the two halves of the argument I'm about to present.
Eigil notes that there is a simple technology available to count,
> ...pair off the elements one after the other. If the collections are exhausted at the same time, they're the same size.
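This pairing technology can be sketched directly - the function below (a hypothetical name) compares two collections without ever producing a number, just by walking them in lockstep and seeing which, if either, runs out first:

```python
from itertools import zip_longest

_EXHAUSTED = object()  # sentinel marking a spent collection

def compare_sizes(xs, ys):
    """Return 'same', 'first', or 'second' (whichever is larger) by pairing
    off elements one after the other, with no counting involved."""
    for x, y in zip_longest(xs, ys, fillvalue=_EXHAUSTED):
        if x is _EXHAUSTED:
            return "second"
        if y is _EXHAUSTED:
            return "first"
    return "same"

assert compare_sizes("ab", "xy") == "same"
assert compare_sizes("abc", "xy") == "first"
assert compare_sizes(iter("a"), iter("xy")) == "second"
```

Note that this works even on iterators whose length is unknown in advance, which is part of why pairing is more primitive than counting.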
AllAmericanBreakfast suggests that "getting hard data" is a social coordination mechanism. We have to agree on our abstractions.
It seems relatively uncontroversial to note that we can determine the succession of events. We can ask: does this event succeed that one? Yet this requires something. Through whatever means we perceive the world, there must be an additional abstraction that attaches itself to each perception, and the succeeding perception must be attached with an abstraction that can be reliably determined as succeeding.
So to each event there is a successor. This sounds like counting, but considerations of our intuition for time tend to get into hairy philosophical issues so I'll take a different route to finish this argument.
The point can be made symbolically. Say we combine observations together with internal abstractions into new abstractions with an abstraction operator A. If A were something like an RNN, this would be Turing complete. Note that to even discuss this we need to index the succession of internal abstractions; not everything happens at once. What we want is for the succession query - does this abstraction succeed that one? - to be learnable. As we noted above, the bare minimum is a mapping into an ordered set. We also know that each event has an immediate successor, which rules out a lot of bizarre-looking options: we might as well just take the natural numbers, or continue along with our requirements. Abstractly, our requirement is that the abstraction operator be conjugate to the successor relation. This is where the fun is. Vaguely, we want a map that abstracts the abstraction operator itself. If we're not careful, this will become trivial, if it isn't already. Instead, look at the following diagram,
We want to learn a single map that makes everything commute. The abstraction operator is the only real free 'variable' in this diagram. If we want succession to be determinable, the internal abstractions must be constrained to fit into the above picture.
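A toy version of the commuting requirement (all names here are hypothetical choices of mine): let the internal abstraction operator append a tick to a tally string, let succ be the ordinary integer successor, and let alpha be the learned map between them. The square commutes when abstracting-then-succeeding equals operating-then-abstracting, i.e. alpha(A(s)) == succ(alpha(s)) for every state.

```python
# Toy model of the diagram: internal states are tally strings, the external
# abstraction is a natural number, and alpha translates one into the other.

def A(state: str) -> str:
    """Internal abstraction operator: register one more recognition."""
    return state + "|"

def succ(n: int) -> int:
    """External successor relation on the ordered set (the naturals)."""
    return n + 1

def alpha(state: str) -> int:
    """The learned map from internal states to the external abstraction."""
    return len(state)

# The square commutes at every state: alpha . A == succ . alpha.
state = ""
for _ in range(5):
    assert alpha(A(state)) == succ(alpha(state))
    state = A(state)

assert alpha(state) == 5
```

Here alpha is forced to be the usual "count the tallies" map; any other choice breaks commutativity somewhere, which is the sense in which the diagram constrains the abstraction.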
The above argument isn't about showing that every human already has 'numbers' floating around in their head; the point is that numbers can be abstracted out of every human's head. Subtle, but different. The significance is that it's easier to modify a pre-existing counting structure than to create one from scratch.
In fact, there are ways to create new external abstractions based on our internal abstractions. Maybe we make tallies, use a sundial, or place pebbles in such a way that we can learn a mapping between the pebbles and our internal abstraction for succession. This ability to externalize an internal abstraction is a serious piece of technology.
The issue arises when there are multiple people because then there are multiple internal abstractions. A lot of coordination needs to happen in order for a single external abstraction to map well onto the group's internal abstractions. A lot of the innovation lies in creating a good external abstraction. A lot of the work lies in teaching individuals how to relate them to their internal abstractions.
Is 'tracking' time more fundamental than 'corresponding' objects when it comes to counting?
Are external abstractions 'real' or are they just an internal abstraction that can be shared?
Does coordinating the external representation of time actually lead to counting?