This post describes XCTBL Space, an ongoing experiment in using narrative environments as an interface for understanding and navigating complex software systems.
The core idea is simple:
Narrative context may reduce cognitive friction when people encounter unfamiliar or multi-layered systems, compared to traditional dashboards, documentation, or onboarding flows.
XCTBL Space is a network of interconnected sites, unified by a single sign-on (Spacewalking), where each location presents real tools or information through a shared fictional setting. The goal is not storytelling for its own sake, but to test whether world-building can function as a cognitive scaffold, helping users form more accurate mental models of what a system does and how its parts relate.
I’m posting this here because LessWrong often focuses on how humans build models, reason under uncertainty, and make sense of complex abstractions. This project treats interface design as an epistemic problem rather than a purely aesthetic one.
What the system looks like in practice
XCTBL Space consists of:
• Multiple themed sites (“locations”) connected by a shared identity
• Persistent user state across those locations
• Real, functional tools embedded within the environment
• Optional narrative framing that provides context, metaphors, and continuity
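The first two properties above, a single identity plus persistent state across locations, can be sketched in a few lines. This is a hypothetical illustration, not the actual XCTBL Space implementation; all names (`SpacewalkerSession`, `Location`) are invented for the example:

```python
# Hypothetical sketch: one sign-on identity carries persistent state
# across themed locations, so every location can see what happened
# elsewhere. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class SpacewalkerSession:
    user_id: str                                # single sign-on identity
    state: dict = field(default_factory=dict)   # persists across locations

class Location:
    def __init__(self, name: str):
        self.name = name

    def visit(self, session: SpacewalkerSession) -> dict:
        # Record the visit so continuity ("what happened before?")
        # is available to every other location.
        history = session.state.setdefault("visited", [])
        history.append(self.name)
        return {"location": self.name, "history": list(history)}

session = SpacewalkerSession(user_id="u-42")
observatory = Location("observatory")
dock = Location("dock")
observatory.visit(session)
result = dock.visit(session)
print(result["history"])  # the dock sees the earlier observatory visit
```

The point of the sketch is that continuity lives in the shared session, not in any one location, which is what lets narrative context accumulate as the user moves around.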
A key constraint is that story and utility are decoupled:
• Users can access tools directly without engaging with the narrative
• The narrative never blocks functionality
• The tools never interrupt the narrative
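The decoupling constraint can be made concrete with a small sketch: a tool is a plain callable, and narrative framing is an optional wrapper that adds context but never gates or alters the tool's output. Again, this is an assumed illustration, not the real system's code:

```python
# Hypothetical sketch of the decoupling constraint: the tool always
# runs identically; narrative framing is strictly additive.
from typing import Callable, Optional

def render(tool: Callable[[], str], narrative: Optional[str] = None) -> str:
    output = tool()  # the tool runs whether or not narrative is present
    if narrative is None:
        return output                 # direct, instrumental access
    return f"{narrative}\n{output}"   # framing wraps, never blocks

def star_chart() -> str:
    return "star chart: 3 systems mapped"

direct = render(star_chart)
framed = render(star_chart, narrative="You unroll the navigator's chart.")
assert direct in framed  # identical functionality either way
```

Because the narrative path and the instrumental path invoke the exact same tool, any behavioral difference observed between the two groups of users can be attributed to the framing rather than to a difference in capability.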
This makes it possible to compare how users behave when they approach the system through story versus when they approach it purely instrumentally.
Why try this at all?
Many complex systems fail not because they lack capability, but because users struggle to build an accurate internal model of how the system works.
Traditional solutions—tutorials, documentation, onboarding checklists—often assume that users are willing to pause, read, and abstract. In practice, many users instead rely on intuition, metaphor, and memory hooks.
Narrative environments naturally provide:
• Spatial metaphors (“where am I?”)
• Continuity (“what happened before?”)
• Motivation (“why does this exist?”)
The hypothesis is that these properties can be repurposed to support understanding, not just engagement.
Relation to existing ideas
This experiment overlaps with prior work on:
• Metaphor in interface design
• Conceptual integrity in systems design
• Cognitive load and user comprehension
What may be unusual is treating narrative not as a layer on top of software, but as a primary organizational structure, while still keeping the underlying tools fully accessible and explicit.
Limitations and open questions
Some things I’m uncertain about:
• It’s possible narrative helps only a narrow subset of users.
• The novelty effect may inflate perceived usefulness.
• Narrative framing could bias users toward incorrect interpretations if metaphors are poorly chosen.
I don’t yet have clean experimental controls; most observations so far are qualitative. At this stage, I’m more interested in identifying failure modes than proving effectiveness.
What I’m looking for
I’m sharing this here to get feedback on:
• Whether this framing makes sense as an epistemic experiment
• Relevant prior work I should be engaging with
• Reasons this approach might be fundamentally misguided
If narrative-based interfaces turn out to obscure more than they clarify, that’s a valuable result too.
Reference
For anyone who wants to inspect the system directly:
https://XCTBL.com
It’s a live environment and very much a work in progress.