I like the practice of pointing out things that need names and attempting to name them. Good stuff!
To rephrase the definition pointed to by "neuro-scaffold" to see if I understood, it is "an integration of ML models and non-ML computer programs that creates nontrivial capabilities beyond those of the ML model or computer program"?
Naively I would refer to this as an "ML deployment", but the "...nontrivial capabilities beyond..." aspect is important and not implied by "ML deployment". "ML integration" might be better, but both are clunky, and "ML" can refer to many data science and AI techniques other than neural nets, so I think we're stuck with the "neuro" terminology. Although, I think I would prefer if people called them "multi-layer perceptrons" to disambiguate them from the biological neurons they were inspired by. "Artificial neural networks" would also be an improvement. "MLP" or "ANN".
I think I dislike "scaffold" because it implies a temporary structure used for building or repairing another structure, and I don't think that represents well the programs the ANNs are integrated with. The program might be temporary, but it might not be. So it could perhaps be called an "integrated ANN system" or "integrated MLP system", or, acronymized, an "IANN" or "IMLP". But these suggestions seem clunky. They don't seem as easy to say or understand as "neuro-scaffold", so "neuro-scaffold" is probably a better term despite the issues I have with the words "neuro" and "scaffold".
"Scaffold" sounds very natural to me, because it's been common parlance on LessWrong for at least a year. A while ago, I Googled "LLM scaffold" and was surprised to find that all of the top results are LessWrong-adjacent. Before that, I just assumed everyone in AI called it a "scaffold," but "AI agent" is actually more common. Maybe it didn't catch on here because it would cause too much confusion when we talk about "agency" and "agent foundations."
IMO, "neuro-scaffold" is clearer than the existing options and pretty easy to say. I strong-upvoted the post because I think having a Schelling point for what to call these things would be good. (Even if it may not be the very first thing I'd pick - for instance, "neural scaffold" sounds slightly less neologism-y to me.)
The term I've seen on the software-industry user side for this thing is "harness", but "harnessed AI" sounds like something else (horseGPT?).
This is good context to have. If it is a Schelling point on LW, that's probs a good enough reason to choose it as the term to adopt, although some consideration might be warranted for its adoption in wider communities. But I can't think of any other term that would work better for that.
Agreed that having a common term would be really nice, and this is more specific than the very broad LLM agent or AI agent.
But neuro-scaffold feels really wrong. It is not a scaffold made of a neural substance. It is a neural substance with a scaffolding around it, or a neural substance that is scaffolded. The tense of neuro-scaffold is wrong.
I don't know how much of a blocker that would be, but for me it feels much better to continue saying scaffolded LLM. I've also wondered about LHLLM for long horizon (agentic) LLM or ALLMA for agentic LLM (based) architecture.
Those don't feel quite right either. But an acronym expanding on LLM does. They do look like they could be quite a mouthful, but for me LLM is now one thought, not an expansion to large language model, so those feel fairly compact.
I think it is. But it's very broad, so there might be room for one or more specific terms inside that category.
Nathan Labenz uses "agentic LLM" or "agentic AI" to distinguish a more general agent from the very watered-down use of "LLM agent", which currently usually refers to extremely limited and hand-crafted systems for very specific narrow workflows.
'Agentic' or 'agent' is getting a fair bit of currency ('agentic AI workflow', 'LM agent', 'AI agent', etc.)
I think that's fine, and basically accurate. Sometimes it means you need to qualify how autonomous or bounded/unbounded the looping is.
'Model' really gets my goat, is a terrible, already hopelessly conflated term, and should be banned for talking about NNs in almost all contexts. (I have been dragged kicking and screaming into using this sometimes and I'm still sad about it.) 'Reasoning model' is no better. 'Foundation model' and 'language model' are OK, but only if actually talking about foundation and/or language models per se, absent the various finetuning and scaffoldings that are involved in actual AI systems. ('Reward model' and 'world model' and such are very reasonable uses.)
I'm sorry to say that 'neuro-scaffold' isn't going to take off, and I think that's fine. 'Scaffold' is very useful on its own, but 'neuro-scaffold' is a mouthful and also doesn't really connote the specific thing you're meaning to invoke, which is the loopiness and the connection to actuators.
I've generally seen "harness" when referencing additional software, including feedback loops, that's been created to try to get an LLM to complete a complicated task that it otherwise couldn't. "AI Agent" is the marketing term, though I like that less because it's much more indistinct.
Neuro-scaffolds
Cole Wyeth writes:
They liked my suggestion of neuro-scaffold and suggested I write a short justification.
Definition
A neuro-scaffold is a composite software architecture with two key components:

1. A neural core: a general-purpose generative AI model. For example, the openai API lets you send prompts and get responses from a neural core, such as one of their GPT-* LLMs.
2. A scaffold: additional software around the neural core that autonomously transforms its outputs into new prompts.

Crucially, the design of a neuro-scaffold includes a component of the following form:

[... -> (neural core) -> (scaffold) -> (neural core) -> (scaffold) -> ...]

A neuro-scaffold is any program that combines gen AI (including but not limited to LLMs) with additional software that autonomously transforms gen AI outputs into new gen AI prompts. The term "neuro-scaffold" refers to software design - not capabilities or essence.
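A minimal sketch of that loop, assuming the standard openai Python client (the helper names, the critique-style scaffold step, and the fixed iteration budget are illustrative choices, not part of the definition):

```python
# Minimal neuro-scaffold sketch. The neural core is an LLM reached through
# the openai API; the scaffold is the ordinary Python code that turns each
# response into the next prompt, with no human in the loop.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def neural_core(prompt: str) -> str:
    """One call to the neural core: prompt in, raw model text out."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any GPT-* model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def scaffold_step(output: str) -> str:
    """Scaffold logic: transform the previous output into a new prompt.
    Real scaffolds might run tools, parse plans, or check results here."""
    return f"Critique the following draft and produce an improved version:\n{output}"

# The loop that makes this a neuro-scaffold:
# [... -> (neural core) -> (scaffold) -> (neural core) -> (scaffold) -> ...]
text = neural_core("Draft a one-paragraph summary of scaffolded LLMs.")
for _ in range(3):  # fixed budget here; real scaffolds use richer stop rules
    text = neural_core(scaffold_step(text))
print(text)
```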
The term is meant as a pragmatic way to refer to the 2025 paradigm of what are being referred to as "AI models," especially reasoning and agent-type models. As technology changes, if "neuro-scaffold" no longer seems obviously apt, I would recommend dropping it and replacing it with something more suitable.
"Neuro-scaffold" is also a term for 3D nerve cell culture. But I think it's unlikely to cause confusion except for my poor fellow biomedical engineers trying to use neuro-scaffold AI to design neuro-scaffolds for 3D nerve cell culture. Sorry, colleagues!
Rationale
I chose the "-scaffold" suffix because it refers to:
These three terms seem to reflect the kinds of software programs people are building to automate interactions with a general-purpose generative AI model (neural core) to produce certain desired behaviors now often termed "reasoning" or "agentic."
Neuro-scaffold AI still has "I" for "Intelligence" in the name, but the "AI" part is not a key part of the term. It's just a convenience to make it clearer what sort of product I'm talking about. You could just say "neuro-scaffold", or "neuro-scaffold LLM" to further emphasize the exclusively design-oriented intended meaning.
"Neuro-scaffold" riffs on the term neuro-symbolic AI, which is established jargon. Although "neuro-symbolic AI" also seems potentially apt, I wanted a new term because, at least according to Wikipedia, neuro-symbolic AI seems to refer to a specific combination of capabilities, design, and essence:
"Integrates neural and symbolic AI architectures" is a design. "Reasoning, learning and cognitive modeling" are capabilities. They can also be seen as essences, potentially leading to debates about the true nature of "reasoning."
The only reason to debate somebody's use of the term "neuro-scaffold" to refer to a product should be if there is a dispute about the design of that product's software architecture. This is a question that should be resolvable more or less by inspecting the code.
What about "self-prompter?"
"Self-prompting AI" is my strongest alternative to "neuro-scaffold." One disadvantage of "self-prompting AI" is that it needs the term "AI" to emphasize the mechanical nature of it, and the term "AI" can be seen as contentious or as marketing hype.
Dropping "AI" leaves us with "self-prompter," a term that has been used to refer to devices meant to cue a speaker during a speech. But I don't think it's at risk of becoming confusingly overloaded.
I have a few objections to this term for this use case:
Self-prompter might be a useful term as well. I just don't think it's the best choice for the specific meaning I'm getting at.
Examples and counterexamples
Probably not a neuro-scaffold:

A program that sends a user's prompt through the openai API, gets the direct output of an LLM, and displays the response to a human user, such as a temporary chat on any of the mainstream chatbot interfaces in 2025. Except in exotic cases (e.g. a mind-controlling prompt that reliably influences the user to input further specific prompts), there's no mechanism to map responses to new prompts, so it's not a neuro-scaffold. These could be called "LLM interfaces" or "chatbot LLMs." Chatbots include a neural core, but confine themselves to

[(user) -> (program) -> (neural core) -> (program) -> (user)]

I would not call the program a "scaffold" and would not call this overall design a "neuro-scaffold" because it has no semblance of autonomous self-prompting.
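For contrast with the loop above, a sketch of this single-pass shape (again assuming the standard openai Python client; the details are illustrative):

```python
# Single-pass chatbot sketch: a neural core with no self-prompting loop,
# so not a neuro-scaffold under this definition.
# Shape: [(user) -> (program) -> (neural core) -> (program) -> (user)]
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def chat_once(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": user_message}],
    )
    # The program only formats and displays the output; it never feeds
    # the response back in as a new prompt on its own.
    return response.choices[0].message.content

print(chat_once(input("You: ")))
```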
Ambiguously a neuro-scaffold:

Probably a neuro-scaffold:
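As a hypothetical illustration (my own sketch, assuming the standard openai Python client and a pytest suite at tests/), a test-and-retry coding agent has the self-prompting shape, with the scaffold consulting an actuator (a test runner) to build each new prompt:

```python
# Test-and-retry coding agent sketch (hypothetical). The scaffold runs the
# model's code against a test suite and autonomously feeds failures back as
# new prompts - the self-prompting loop that makes this a neuro-scaffold.
import subprocess
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def neural_core(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

task = "Write candidate.py containing a function slugify(title). Reply with code only."
code = neural_core(task)
for _ in range(5):  # bounded retries; real scaffolds vary widely here
    with open("candidate.py", "w") as f:
        f.write(code)
    result = subprocess.run(
        ["python", "-m", "pytest", "tests/"],  # assumes a test suite exists
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        break  # tests pass; stop self-prompting
    # The scaffold turns the failure output into the next prompt, unattended.
    code = neural_core(f"{task}\nYour last attempt failed:\n{result.stdout}\nFix it.")
```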