Summary
This post introduces the Jackson Model of Three-Dimensional Time (J3DT), a conceptual framework designed to model cognition and computation using three orthogonal temporal dimensions: T1 (causal linear time), T2 (event amplitude), and T3 (processor breadth). The model is inspired by both biological neuroscience and artificial intelligence architecture. It proposes a way to describe information processing systems in terms of temporal topology, rather than just scalar clock cycles or probabilistic flows. J3DT may provide insights into brain dysfunction, AI hallucinations, system throughput, and deployment planning.
Motivation
We often treat time in computation and cognition as a single arrow—forward and fixed. But this overlooks how systems (biological or artificial) process multiple concurrent streams of information, reconcile distributed modules, and operate under bounded causality. While models like Global Workspace Theory or transformer attention mechanisms hint at this complexity, they don’t explicitly model the shape of time inside the system.
J3DT proposes that information-processing systems operate within a three-dimensional temporal space:
- T1: Linear Time — Immutable causal progression. Think of this as a write-once log or blockchain ledger for events.
- T2: Event Amplitude — How many operations/events can be executed concurrently at a given T1 slice.
- T3: Processor Breadth — How many independent or semi-independent processors/modules are operating at once.
Each dimension provides constraints and capacities. A system with high T2 can multitask deeply within one moment. A system with high T3 can coordinate multiple modules, tasks, or subnetworks. T1 enforces irreversibility and anchors everything in a shared causal timeline.
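To make the three dimensions concrete, here is a minimal Python sketch of how a system's temporal state could be represented under J3DT. The class and field names (`TemporalState`, `SystemTimeline`, `advance`) are my own illustration, not part of any existing library:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TemporalState:
    """Snapshot of a system at one T1 tick under the J3DT model."""
    t1: int  # causal tick: position in the immutable event log
    t2: int  # event amplitude: concurrent operations at this tick
    t3: int  # processor breadth: active modules at this tick

@dataclass
class SystemTimeline:
    """Append-only log of states; T1's irreversibility is modeled by never rewriting history."""
    states: List[TemporalState] = field(default_factory=list)

    def advance(self, t2: int, t3: int) -> TemporalState:
        """Append the next tick; T1 only ever moves forward."""
        state = TemporalState(t1=len(self.states), t2=t2, t3=t3)
        self.states.append(state)
        return state

# Example: a system that ramps up concurrency over five ticks.
timeline = SystemTimeline()
for tick in range(5):
    timeline.advance(t2=2 + tick, t3=4)
print(timeline.states[-1])  # TemporalState(t1=4, t2=6, t3=4)
```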
Model Overview
For a given system, its effective cognitive or computational throughput is:
$$\text{Throughput}(t) = \frac{T2(t) \cdot T3(t)}{1 + \tau_{\text{sync}}(t)}$$
Where:
- T2(t): concurrent operations at moment t
- T3(t): concurrently active processors or modules
- $\tau_{\text{sync}}(t)$: integration overhead for syncing parallel branches
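As a sanity check on the notation, here is the instantaneous throughput formula translated directly into Python; the values in the example are arbitrary placeholders, not measurements:

```python
def throughput(t2: float, t3: float, tau_sync: float) -> float:
    """Effective throughput at one T1 slice: (T2 * T3) / (1 + tau_sync)."""
    if tau_sync < 0:
        raise ValueError("sync overhead cannot be negative")
    return (t2 * t3) / (1.0 + tau_sync)

# Example: 8 concurrent ops across 4 modules with 0.5 units of sync overhead.
print(throughput(t2=8, t3=4, tau_sync=0.5))  # 21.333...
```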
Total capacity over a time window [0, T]:
$$\text{Cognitive Span} = \int_0^T \frac{T2(t) \cdot T3(t)}{1 + \tau_{\text{sync}}(t)} \, dt$$
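And a numerical sketch of the cognitive-span integral. The trajectories for T2(t), T3(t), and the sync overhead are invented purely for illustration, not empirical data:

```python
import numpy as np

# Illustrative trajectories over the window [0, T] with T = 10.
t = np.linspace(0.0, 10.0, 1001)   # sample grid over [0, T]
t2 = 8.0 + 2.0 * np.sin(t)         # fluctuating event amplitude
t3 = np.full_like(t, 4.0)          # constant processor breadth
tau_sync = 0.5 + 0.1 * t           # sync overhead that grows over time

integrand = (t2 * t3) / (1.0 + tau_sync)

# Trapezoidal rule, written out explicitly to avoid depending on a
# version-specific NumPy integration function.
dt = t[1] - t[0]
cognitive_span = float(np.sum((integrand[:-1] + integrand[1:]) * 0.5 * dt))
print(f"Cognitive span over [0, 10]: {cognitive_span:.1f}")
```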
Applications
- Cognitive Modeling: Human cognition may be modeled as high T3 (many specialized modules) and moderate T2 (working memory concurrency). Disorders like ADHD, epilepsy, or autism may reflect failures or imbalances in one or more time dimensions.
- AI Hallucinations: When T2 is too high and not constrained by T1 (causality), or T3 modules fail to integrate properly, language models may “hallucinate”—inventing plausible but unanchored content.
- System Design & Sizing: J3DT can serve as a framework for estimating AI compute needs or diagnosing performance bottlenecks. A task requiring high T2/T3 concurrency can't run effectively on a single-threaded processor, no matter how fast its clock (T1); see the sizing sketch after this list.
- Comparative Biology: Different species can be compared on T1, T2, and T3 scales. Plants may operate on hour-scale T1 and low T2/T3. Octopuses or humans score much higher.
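To illustrate the sizing idea from the System Design & Sizing bullet above, here is a back-of-the-envelope feasibility check for whether a workload's T2/T3 demands fit a given machine. The `Workload` and `Machine` fields and the fit rule are illustrative assumptions, not a validated sizing method:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    t2_required: int     # peak concurrent operations per tick
    t3_required: int     # independent modules that must run simultaneously

@dataclass
class Machine:
    threads: int         # hardware threads (caps T3)
    ops_per_thread: int  # concurrent ops each thread sustains (caps T2)
    clock_ghz: float     # raw T1 speed; note it does NOT relax T2/T3 limits

def fits(work: Workload, box: Machine) -> bool:
    """A workload fits only if both breadth and amplitude demands are met;
    a faster clock alone cannot compensate for missing concurrency."""
    enough_breadth = box.threads >= work.t3_required
    enough_amplitude = box.threads * box.ops_per_thread >= work.t2_required
    return enough_breadth and enough_amplitude

# A single-threaded 5 GHz machine still fails a wide, parallel workload.
wide_task = Workload(t2_required=64, t3_required=8)
fast_single = Machine(threads=1, ops_per_thread=16, clock_ghz=5.0)
print(fits(wide_task, fast_single))  # False
```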
Open Questions
- What are the biological correlates of T2 and T3? I'm not a biologist, though I love a good Oliver Sacks book. Anecdotally, some people seem able to take in more information at once (sometimes too much), while others can bring more sustained focus to bear. For computing systems, I know this kind of variation to be real.
- Can we instrument AI systems (e.g., LLMs or agents) to expose their active T2/T3 loads in real time? Could we use this to more properly size the requirements of a system, possibly even comparing to a biological system in a meaningful way?
- Could we design a training curriculum or architecture that adapts T2 and T3 to the domain being learned, rather than scaling parameters arbitrarily?
- Are hallucinations a breakdown in temporal topology rather than just a lack of data or context? Could the human brain be experiencing a similar breakdown in dementia or Alzheimer's?
Why This Might Matter
J3DT offers a compact but expressive way to think about system intelligence—biological or synthetic—not just in terms of accuracy or speed, but how time is structured inside the system. It’s a framework that allows us to compare cognition across radically different platforms, assess architectural viability, and diagnose both machine and brain failure modes with a unified model.
If you’ve encountered similar ideas—or critiques—I’d love your perspective.
About Me
I’m an automation engineer and author who works at the intersection of control systems, cybersecurity, and systems design. I’m a hobbyist in AI theory, with a strong interest in cognitive models that can bridge engineering and biology.
Happy to clarify or collaborate further if this sparks interest.