Human cortex can't represent arbitrarily complex abstractions in a single forward pass. It's depth-limited by the number of sequential inferential steps it can execute per corticothalamic cycle. That ceiling determines what kinds of reasoning are biologically possible at all, not merely how fast they happen. A single cortical area can...
First post in a sequence about cognition enhancement for AIS research acceleration.[1] No one is training a BCI deep learning model to speak neuralese[2] back to the brain. We should build something that reads and writes native neural representations. Current models, at best, encode visual stimuli for retinal implants. The...
I am distributing bounties for contributions to this project. Thanks to Parv Mahajan and the Ga Tech AISI board for reviewing core ideas.

Abstract

I claim that LLM steganography requires representational divergences from legible models, divergences which I further claim are practicably locatable (though not necessarily differentiable from legitimate scratchpads)...
Recent advances in optogenetics and fluorescent protein markers have helped neuroscientists locate brain cells corresponding to individual memories (engrams). This post explains how such representations might physically and semantically shift.

Background

“Encoding” is the short-term enpatterning of neurons to store a memory. This seems to happen in the hippocampus, a...
This was more of a research strategy than a specific project, and my foci have shifted substantially since this post. Thanks to John Wentworth for pointers on an early draft. TL;DR: I'm starting work on the Natural Abstraction Hypothesis from an overly general formalization, narrowing it until it's true. This will...