It seems like we actually do not have a good name for the things that AI companies are building, weirdly enough...
This actually slows down my reasoning, or at least my writing, about the topic, because I have to choose from this inadequate list of options repeatedly, often using different nouns in different places. I do not have a good suggestion. Any ideas?
They liked my suggestion of neuro-scaffold and suggested I write a short justification.
A neuro-scaffold means a composite software architecture with two key components: a neural core (a general-purpose generative AI model) and a scaffold (additional software wrapped around it). The OpenAI API, for example, lets you send prompts and get responses from a neural core, such as one of their GPT-* LLMs.

Crucially, the design of a neuro-scaffold includes a component of the following form:
[... -> (neural core) -> (scaffold) -> (neural core) -> (scaffold) -> ...]
A neuro-scaffold is any program that combines gen AI (including but not limited to LLMs) with additional software that autonomously transforms gen AI outputs into new gen AI prompts. The term "neuro-scaffold" refers to software design - not capabilities or essence.
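To make the design concrete, here is a minimal sketch of a neuro-scaffold loop in Python. It assumes the official openai client library; the model name, prompt wording, loop limit, and stopping rule are placeholder choices for illustration, not part of the definition.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_core(prompt: str) -> str:
    """Send one prompt to the neural core and return its text output."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def run_scaffold(task: str, max_steps: int = 5) -> str:
    """The scaffold: ordinary code that autonomously turns each core output
    into the next prompt, i.e. [... -> (neural core) -> (scaffold) -> ...]."""
    prompt = f"Work on this task one step at a time. Say DONE when finished.\nTask: {task}"
    output = ""
    for _ in range(max_steps):
        output = ask_core(prompt)    # (neural core)
        if "DONE" in output:         # (scaffold) decides whether to stop
            break
        # (scaffold) maps the previous output to a new prompt, no human in the loop
        prompt = f"Here is your previous step:\n{output}\n\nContinue, or say DONE."
    return output


print(run_scaffold("Summarize the case for the term 'neuro-scaffold'."))
```

By contrast, a plain chatbot interface (discussed below) has no such loop: every new prompt comes from the human user.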
The term is meant as a pragmatic way to refer to the 2025 paradigm of what are being referred to as "AI models," especially reasoning and agent-type models. As technology changes, if "neuro-scaffold" no longer seems obviously apt, I would recommend dropping it and replacing it with something more suitable.
"Neuro-scaffold" is also a term for 3D nerve cell culture. But I think it's unlikely to cause confusion except for my poor fellow biomedical engineers trying to use neuro-scaffold AI to design neuro-scaffolds for 3D nerve cell culture. Sorry, colleagues!
I chose the "-scaffold" suffix because it refers to:
These three terms seem to reflect the kinds of software programs people are building to automate interactions with a general-purpose generative AI model (neural core) to produce certain desired behaviors now often termed "reasoning" or "agentic."
Neuro-scaffold AI still has "I" for "Intelligence" in the name, but the "AI" part is not a key part of the term. It's just a convenience to make it clearer what sort of product I'm talking about. You could just say "neuro-scaffold," or "neuro-scaffold LLM" to further emphasize the exclusively design-oriented intended meaning.
"Neuro-scaffold" riffs on the term neuro-symbolic AI, which is established jargon. Although "neuro-symbolic AI" also seems potentially apt, I wanted a new term because, at least according to Wikipedia, neuro-symbolic AI seems to refer to a specific combination of capabilities, design, and essence:
Neuro-symbolic AI is a type of artificial intelligence that integrates neural and symbolic AI architectures to address the weaknesses of each, providing a robust AI capable of reasoning, learning, and cognitive modeling.
"Integrates neural and symbolic AI architectures" is a design. "Reasoning, learning and cognitive modeling" are capabilities. They can also be seen as essences, potentially leading to debates about the true nature of "reasoning."
The only reason to debate somebody's use of the term "neuro-scaffold" to refer to a product should be if there is a dispute about the design of that product's software architecture. This is a question that should be resolvable more or less by inspecting the code.
"Self-prompting AI" is my strongest alternative to "neuro-scaffold." One disadvantage of "self-prompting AI" is that it needs the term "AI" to emphasize the mechanical nature of it, and the term "AI" can be seen as contentious or as marketing hype.
Dropping AI leaves us with "self-prompter," a term that has been used to refer to devices meant for a speaker to cue themselves during a speech. But I don't think it's at risk of becoming confusingly overloaded.
I have a few objections to this term for this use case:
Self-prompter might be a useful term as well. I just don't think it's the best choice for the specific meaning I'm getting at.
Probably not a neuro-scaffold:
A program that calls the OpenAI API, gets the direct output of an LLM, and displays the response to a human user, such as a temporary chat on any of the mainstream chatbot interfaces in 2025. Except in exotic cases (e.g., a mind-controlling prompt that reliably influences the user to input further specific prompts), there's no mechanism to map responses to new prompts, so it's not a neuro-scaffold. These could be called "LLM interfaces" or "chatbot LLMs." Chatbots include a neural core, but confine themselves to [(user) -> (program) -> (neural core) -> (program) -> (user)]. I would not call the program a "scaffold" and would not call this overall design a "neuro-scaffold" because it has no semblance of autonomous self-prompting.

Ambiguously a neuro-scaffold:
Probably a neuro-scaffold: