Autostructures is back! Autostructures 2025 was a smashing success, with a dozen participants, ten interface prototypes and even a follow-on paid 3-month fellowship + residency for graduates of the safety camp!
If you couldn't join last time, we're doing another round now for Q1 2026, hosted again by AI Safety Camp and Groundless Alignment. Join us to work on making the crazy AI future saner, more soothing, and truer to what matters. We are accepting applications until December 6th! (Even though the AISC site says that applications are closed.)
Autostructures is a sober yet radical take on the upcoming risks and opportunities around AI. Details are in the main section, but some things to know as a summary:
This is a specific school of design for framing our relationship to this whole “AI” thing. Our narrative about and relationship to AI determine our construal of, e.g., what AI is, what we should put into it, how to project meaning onto what comes out, what it is supposed to do, and what we should build around it.
As part of this project, you will design interfaces that do not ignore the question of what makes engaging with technology meaningful. These designs are somewhere between functional and speculative, but always ambitious. They are aimed at inspiring a completely different kind of infrastructural and cultural basis for interacting with digital interfaces in the near future.
“Autostructures” is primarily about conceiving the barely conceivable. We want to think about the near future where we depend less on systematizing and structuring everything (notice how systems and structures are everywhere?), and instead are able to autostructure things as needed, in milliseconds. This means less of a reliance on universality or fixity, instead moving towards more context-sensitivity and fluidity. Instead of designing a systematic protocol that everyone must conform to, you might use an intelligent layer that translates the information in a non-formulaic way.
Generally, we tend to want systematic solutions to AI risks. But this is not the only option. For many issues of meaning, context-independent pre-systematized insights are in fact not sufficient. AI changes how we collectively perform cognitive labor, so we wish to take that into account for technical approaches to mitigating risks. This might enable us to do the “impossible” by making technical progress on questions that evade formalization/systematization (such as age-old philosophical problems that are highly dependent on context). Plus, such dynamic infrastructure might allow us to sensemake at pace with rapid AI development in the farther-off future.
For examples and more details, you can check out:
Or continue reading.
The premises of Autostructures are simple:
There is a "sweet spot" where we integrate AI capabilities in our infrastructure but are not enfeebled or dead.
(Similar to how AI at this moment seems to be able to help with many things, even alignment-relevant things, without exhibiting dangerous levels of autonomy thus far.)
This period of integrating AI infrastructure will consist not just of enhancements or tools, but of a radical shift in our ways of conceptualizing and orienting to the AI alignment issue as a whole, in ways that are barely conceivable to us now.
(Similar to how remote jobs were barely conceivable in the era of the telegraph, even though the internet is technically "just" very fast, cheap, integrated telegraph.)
Even if these shifts are barely conceivable now and mostly emergent, there is worthwhile progress to be had from careful consideration of AI infrastructure design, rather than ad-hoc slapping “AI” and chatbots on existing interfaces in myopic ways. There is a lot to be discovered, experimented with, and practiced, around elements of design, engineering, and culture that can make for wholesome and impactful communities.
(Similar to how Wikipedia is held together by certain values and technology, and might never have existed, or even been considered realistic, without them.)
For us, the "sweet spot" mentioned earlier is where AI is efficient; extremely cheap, fast, less error-prone, yet still only moderately intelligent. This efficiency axis is a different axis from both intelligence and autonomy:
Efficiency scales more naturally than intelligence, which in turn scales more easily than autonomous machine intelligence. (Note: the autonomy axis would be scored low even if we have "AI agents" that are not particularly self-motivated.)
There is yet another axis relevant for opportunity modeling: what can happen when many people are enabled to participate in surprising new methodology. This doesn’t mean just "more democratic" or seeing a map of arguments. It’s something more subtle: infrastructure that can enable new kinds of commerce. For example, the ability to coordinate a startup around an iPhone app depends on the existence of app store infrastructure. It would have been barely conceivable in the early '80s to gather many people’s work-hours, for billions of dollars' worth of returns, around a phone app. Once the cellphone, the internet, the app store, etc. have all become integrated, it becomes pretty normal for hundreds of thousands of people to spend half their waking time on such a thing.
For any given idea, we can look at this "integration axis":
...and try to imagine what it would be like if this was cranked up high. We want to explore what is possible with useful integration, rather than the kind that is parasitic on our life-energies.
To this end (of conceiving the barely conceivable), everything Autostructures does is oriented to and anticipative of the near future. While we build things that are functional and beautiful today, we know that they are, in a very real sense, prompts for tomorrow's AI-software-agent-equipped teams. We are in collaboration with our future selves, and have the duty and opportunity to make what matters salient.
The quantitative changes on these axes from today are qualitative, as in the telegraph-internet analogy from earlier. For a preview of how we’re thinking about this, keep reading.
Now here's where our bets about the future get specific. This is a mixture of philosophical narrativization and mathematical insight, so don’t worry if the pieces feel like peeking through a keyhole at this paradigm. We anticipate that absorbing the viewpoint requires dedicated time and effort, which we will spend together.
[TECHNOLOGY] I. AI lets you, importantly, do things that don't scale, at scale.

Let’s explain what that means. What doesn't scale? Things that need personal attention, things that can't be structured, systematized, formalized; things that are too context-laden. What does AI do? Personalize, formalize things from informal vibes, adapt to local context. In other words, AI scales what doesn't scale. This we call autostructuring.

This is highly counter-intuitive, since we try to solve most problems today by *pre*-structuring: creating systems and protocols that work in cookie-cutter ways, rather than letting them be auto-structured just-in-time. For example, the way to “scale” email, or send a mass email, is to make a template with variables and instantiate it with some database. (“Dear {Applicant_Name}, you have been selected…”) But it is possible, only perhaps this year, to instead write an email-prompt that can be a personal message to each recipient, taking advantage of both informal and formal details about them, without one single template shared among them. (Please note that this example is only an intuition pump for what it means to scale without making things formulaic, not something that would actually be good to have.)
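To make the mail-merge contrast concrete, here is a minimal sketch in Python (our illustration, not something from the project); `call_llm` is a hypothetical stand-in for whatever model API one might use:

```python
# Minimal sketch (illustrative only): contrasting pre-structured mail-merge
# with an "autostructured" per-recipient message. `call_llm` is a hypothetical
# stand-in for a model API; nothing here names a real service.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's client here."""
    raise NotImplementedError

recipients = [
    {"name": "Ada", "notes": "asked about the design-philosophy reading group"},
    {"name": "Grace", "notes": "prototyped a live-interface demo last camp"},
]

# Pre-structuring: one fixed template, instantiated from a database.
def templated_email(r: dict) -> str:
    return f"Dear {r['name']}, you have been selected..."

# Autostructuring: a single email-prompt, rendered freshly per recipient,
# drawing on informal context instead of one shared template.
def autostructured_email(r: dict) -> str:
    prompt = (
        "Write a short, personal acceptance note for this applicant. "
        f"Name: {r['name']}. Context: {r['notes']}. "
        "No boilerplate; respond to their specific situation."
    )
    return call_llm(prompt)

for r in recipients:
    print(templated_email(r))      # cookie-cutter output
    # autostructured_email(r)      # per-recipient output (needs a model)
```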
[DESIGN/CULTURE] II. AI doesn't just do everything for you.

This is both a description of the near future, and a prescriptive reminder that craftsmanship and deep work is a satisfying expression of meaningful activity, not something to be automated away. So although it points at something important, "auto" in autostructuring is a bit misleading. It shouldn't cause us to go dead. Hence we talk about "live interfaces" and "live theories". The details of how digital institutions shepherd our relationship to AI technology matter, even if affordances are very general. Naturally there is a lot here. A few ways we hold this:

- We're not discovering Human-AI interfaces, but Human-Human interaction that is nourished by AI infrastructure. More generally, life anchors the connection to meaning. For more on this point, see the paradigmatic shape of Live Theory.
- Designs that are "just add a chatbot" or otherwise trying to be fully general are lazy, not live. We care about design specifics that facilitate human expression in precise ranges.
- We question ham-fisted dependence on structure, but are tasteful about its use as well. We keep things calm and tightly coupled to what matters. Yet abstractions and static cores are the solution only when appropriate, and are especially unnecessary when there are ways to scale specificity directly.
- We are careful about how we project meaning onto AI output. This includes, of course, any anthropomorphization, both for our own well-being and for respecting disanalogies that can be leveraged in unusual ways. For example, strict turn-taking in "chat" format between human input and AI output keeps AI operating at human speed and free from constant interruption.
[RISK] III. Our attitudes towards generality are both a source of risks and a block on novel mitigations to terrifically hard problems.

There is a sense among many that few alignment proposals are at all helpful. Most attempts seem to be carried in large part by intoxicated hope, and increasingly, by equal and opposing swigs of despair. Instead of swinging back and forth, despairing completely can be helpful. In the process, we start to question that which we never thought to question: the water we swim in, the research methodology that got us here in the first place.

The point about ‘scaling things that don’t scale’ extends to the issue of scaling technical insights, or as it is usually known in mathematics and the sciences, generalization. That is to say: the function of general results and general methods is that they scale across a whole class of specific instances. In other words, mathematical structure and formalization is to be noticed as a purposeful methodology, with alternative methodologies that can fulfil the same purposes, but better. This would be mathematical autostructuring, replacing abstraction as the only way to make connections between specifics.

An important negative consequence of abstraction is that it isolates regularities and, often inadvertently, de-emphasizes the rest. This can lead to a too-formulaic, homogenized approach to what matters. This is no small flaw. The claim is that this attitude towards generalization is subtly behind what makes the G in AGI so dangerous.

There is another way. In a slogan: don’t abstract, just scale specificity directly. We won’t elaborate here, but this post explains more. The new attitude is very different from the usual impulse in technical disciplines of generalizing via finding commonalities across patterns. The email analogy from earlier is helpful to return to: there is no common template to be found among the emails, because there is a subtler connection. A lot of our most interesting philosophical concepts around AI alignment have connections more subtle and contextual than formulas can capture. This is how we hope to serve AI risk mitigation. A self-contained paper with details is here.

So don’t confuse this for “mathematicians and technical folk get AI help with their proofs.” Proofs function within a particular systematization. This is about meta-formalisms that seed entire systems, ideally in interaction with the local context that the systems are being applied to.
[CIVILIZATION] IV. Scaling attentivity can replace foundationalism and the planetary numbness that comes from it.

“Folders” are everywhere in our designs, yet highly skeuomorphic; there is no reason, for example, to have one file belong to exactly one folder, when these entities have become digital. In fact, we do have tags, which allow you to put the same item “in” multiple places. Folders take a rigid, spatial design stance, whereas tags lean into the deeper ideal: discoverability. Folders try to “capture” files in a foundational, hierarchical tree-like “space” that you have to manually “navigate” to, whereas tags have a softer approach of guiding search queries without zero-sum possession of files. Tags are a strict upgrade (you can do everything with tags that you can do with folders), but in many systems we’re stuck in inertial digital habits.

A great, if subtle, example of implicit folder-structure is platforms. Almost any app or service that involves multiple people tends to go for a platform. Platforms create a folder of sorts for people, “capturing” the network or audience or market share, in zero-sum competition with other platforms. It need not be this way. As many have pointed out, we might have less of a moat with AI. Lowered barriers to entry and lower switching costs are examples people point out. Fewer, if any, point out how bridging agents might make switching costs irrelevant on the user end. You could use Signal and I could use WhatsApp (or more likely, our own customized interfaces), and bridging agents could build just-in-time protocols even if the projects go in different directions (a minimal hypothetical sketch of such a bridging agent appears at the end of this section). This is a movement towards a more tag-like philosophy, where audiences aren’t captured, but held in connection and interoperation. More P2P than hierarchical.

…But this is an instance of a bigger phenomenon, bigger even than “democratization/decentralization of digital everything”. As noted before, in “scaling what doesn’t scale” we scale attentivity-to-specifics, without needing our insights and ideas to compactly fit into some cookie-cutter formula. This is what we dub teleattention: infrastructure living atop telecommunication, doing what telecom technology couldn’t do alone: non-formulaic scaling and generalization. We’re betting on the near-term future being adaptive and sensitive to local contexts.

Attention is what Moloch lacks. It is insensitive, somehow, to what matters; real meaning is lost in the cracks of bureaucracy. If the tech and culture and governance infra are done right, a less mindless Moloch might come into existence, one that is able to attend to local matters rather than contort them to some prefigured foundation. And importantly, this rebirth may nurture nuanced local needs with quality attention and work them into global functioning. You’re probably getting the tune by now: local meets global, without forced conformation to a common ground, is the ‘subtle connectivity without abstraction’ from earlier. Instead of growth-hacking and colonizing networks for ideas and insights and problems to gain steam, we can have a more live, mature and cooperative Steaming. This is “mindfulness” at the level of civilization, rather than dead, numb, fixed, dictated grounds of meaning.
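Returning to the bridging-agent idea above, here is a minimal hypothetical sketch of how such an intermediary might work; `call_llm` and `bridge` are illustrative names only, not a real Signal or WhatsApp integration:

```python
# Hypothetical sketch of a "bridging agent": two people keep their own
# messaging apps or custom interfaces, and an intermediary re-renders messages
# just-in-time instead of both sides conforming to one shared platform.
# `call_llm` stands in for any model API; nothing here is an existing tool.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's client."""
    raise NotImplementedError

def bridge(message: str, sender_conventions: str, receiver_conventions: str) -> str:
    """Translate a message from the sender's interface conventions into the
    receiver's, with no fixed shared protocol agreed in advance."""
    prompt = (
        f"Re-render this message from an interface with conventions "
        f"'{sender_conventions}' into one with conventions "
        f"'{receiver_conventions}', preserving intent and tone:\n{message}"
    )
    return call_llm(prompt)

# e.g. bridge("voice memo transcript...", "WhatsApp voice-first", "Signal text-first")
```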
[METAPHYSICS] V. Lastly: AI as a reminder of how our machine mythology is untenable.

For those who like to dance with nonduality, I hope this also offers a stark posture-correction for our personal views on life and machines: how we a) conceive of and separate machines; b) project and scale our own alienation; and c) reify mechanization further as a “solution”, in maddening futility. Something else is possible. Hence: Groundless Alignment.
We’ll work our way towards infrastructure for technical AI safety research and policy communities.
In the first phase, we’ll explore “live interfaces” while working hands-on with graduates from the previous camp and fellowship. We will design, build, and discuss to make the paradigm familiar. Every week, we will orient ourselves to both the philosophy and engineering aspects. By the end we should have some demos and revitalized perspectives.
In the second phase, we’ll continue refining a selection of tools, and move on to various Live Theory experiments. We should have some new PoCs and/or notable contributions to our current projects. Though some of the camp structure will be established, there may be dedicated space for you to express and pursue your own ideas if they seem closely related!
Our current efforts investigating opportunities afforded by Live Machinery:
Project A: Live Conversational Threads
Making a tool where users mark insights in conversation, and port them to context-sensitive formalisms.
This aims to ease the surfacing and interconnecting of good ideas, since each thread is a contextual chain. From this, we can derive various domain-dependent formalisms (rigorized notation of argument structure).
Project B: Visualizing a Platform for Live World Models (see trailer)
Making a platform to exchange meaningful ways of viewing/interpreting documents.
This enables the fine-tuning of different views as dynamic frontends for documents, with scalable precision.
With these, we allow the sharing and remixing of postformal artefacts, curated toward particular insights.
Project C: Vibe DeCoding/Live Discernment
Making a tool to extend human discernment capabilities for tracking relevant evidence and inferences.
This aims to visualize the process of determining relevance in a series of inferences or argumentation.
With this, we can navigate the multitude of formalized arguments made possible through AI math-generation.
Sought Skills
If you’re good at or interested in engineering, writing, or design, or are generally open-minded and quick to learn, you’re a fit. If you made it through this doc (even if you have lots of questions and confusions), or like to think in meta-systematic ways, then you’ll love it here.
For more info on the previous iteration’s outputs, see Autostructures 2025. Otherwise, you can find more information below on our specific bets for the future, or browse a longer exposition of the vision, with concrete examples.
Team size: Variable based on applicant interest. Up to ~20.
We will likely form teams and focus on individual projects (perhaps with one focused on policy), and also do a plurality of all-hands meetings that everyone can attend.
Research Leads
Kuil: kuil.schoneveld@gmail.com
I enjoyed AISC 2025 and the Autostructures Fellowship. I plan to spend roughly 15 hrs/week on meetings and logistics around this camp project.
Sahil: fufstsahil@gmail.com
Ran Autostructures 2025
Autostructuring is about moving into fresh paradigms of post-formal design, where we scale productive interactions not via replicating fixed structure but via ubiquitous attentive infrastructure. That’s a dense sentence, so let’s unpack.
The only way to scale so far has been cookie-cutter replication of structure.
For example, this Google Doc, even though quite adjustable, comes in a fixed format. Bold, italics, and underline have somehow become the universal three options for text. They’re so ingrained that we don’t even imagine that we might need anything else. But if we extended this and added a new style called “wave” or “overline”, then you couldn’t copy it and expect it to work. Who knows whether it would even be wanted. Almost no fonts would be compatible, because they weren’t given any notice about “wave” or “overline”. People wouldn’t know how or when to use it.
But maybe you would still like to use an “overline” style on your favorite fonts. Unfortunately, you don’t matter to the intermediating machines. It’s too expensive to take care of your whims. What happens when you share the doc with us? We’ll all have to add crap to our shared structure, if you get to have your way. It just doesn’t scale.
…unless we don’t have to share structure. Unless you can have a custom interface designed for you, attentive to your needs. AI might take minutes to do that now, with plenty of correction and $$$, but the cost, latency, and error rate of that are falling rapidly. And it could fulfill not just your individual structuring wishes around the doc, but also attend to interoperating wishes. Intelligently displaying or substituting overline with underline for my view of your doc could be something it comes up with. Instead of having a fixed universal format, we could have a plethora of self-maintaining, near-instantaneous translations between our individual structures.
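As a toy illustration of that overline-to-underline substitution, here is a minimal sketch (ours, not the project's); the fixed substitution table is a stand-in for what an adaptive AI layer would decide per reader and per document:

```python
# Minimal sketch: render an author's custom text styles into whatever a given
# reader's interface supports, substituting on the fly instead of requiring a
# shared universal format. The fixed `substitutions` dict is a placeholder for
# what an AI layer would work out per reader and per document.

def render_for_reader(spans, reader_supports):
    """spans: list of (text, style) pairs; reader_supports: set of style names."""
    substitutions = {"overline": "underline", "wave": "italic"}  # illustrative only
    rendered = []
    for text, style in spans:
        if style in reader_supports:
            rendered.append((text, style))
        else:
            rendered.append((text, substitutions.get(style, "plain")))
    return rendered

doc = [("Autostructures", "overline"), ("2026", "bold")]
print(render_for_reader(doc, {"bold", "italic", "underline"}))
# [('Autostructures', 'underline'), ('2026', 'bold')]
```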
That’s an example interface of what this is about. Anti-hype around the powers of AI in the next few years, but hype around the powers of designing AI ubiquitously into our infrastructure. It’s still a mega transformation, via AI interfaces that are mildly creative, but widely adopted.
Instead of forcing human subtlety to conform to fixed shared structure just to be able to interoperate with each other, AI can autostructure: auto-adapt to, co-create with, and harmonize with the rich subtleties of living individuals.
This project will apply this interface design philosophy to postformal sensemaking. We aim to see whether we can do better than the rigid machinery of formal equations, which means redesigning theories themselves. This is called live theory below.
This is an ambitious experiment, so first we will cut our teeth on (useful) interfaces in general so as to gain familiarity with the design philosophy.
Not just tools; you build culture. You build the new design philosophies of interfaces and interaction, when you can do things that don’t scale, at scale. This is not at all what we're used to, and so it has barely shown up in the tools available today, despite the AI hype (maybe even because of the hype).
Of primary interest here is this new ability to scale attentivity/sensitivity enabled by the wish-fulfilling intelligences that will quickly become ubiquitous in our infrastructure. The wide availability of even mild but fast and cheap intelligence supports scalable detail-orientation and personal attention that can delegate, and even obviate, structuring demands from humans.
Being able to autostructure things as easily as one might do a lookup today undermines a lot of the fundamental assumptions about how data, protocols, interfaces, information, and abstraction operate. Each of these basic ideas was shaped in a time when we had to rely on fixed formalisms and logistical structure to transport our digital goods.
Without any hype, it is possible to say that we’re entering a different “postformal” time where you don’t have to formalize to make things workable. Comments are as good as code, maybe even better and more flexible. This holds the possibility of moving our interfaces away from abstracted commands and controls and into subtle wishes and prayers.
Yes, this is very much not about building an AGI god.
Here’s a definition of “prayer” apt for interfaces that are mildly adaptive but widely adopted.
"Prayer", not as in "pray to an AI god”, but "prayer" as in "send out a message to a fabric of intelligence without trying super hard to control or delineate its workings in detail, in a way that is honest to you rather than a message controlled/built for someone or something else."
When machines can actuate potential ideas from 1 to 100 nearly instantaneously, most of the work of living beings will be to supply heartfelt relevance, meaning, vision; the 0 to 1.
However, we living beings will assist the wish-fulfilling machine apparatus that will surround us, especially while we’re still in the mild-to-moderate intelligence levels. This will not look like detailed modular commands or recipes (nor genies that require no interaction at all) but context-sensitive hints and anchors that will feed into the AI-assisted actuation.
Examples of platform/product distinguishing features:
| | Traditional tool platform | Our Live (Solo-ware) “platform” |
|---|---|---|
| Bundle | Hosts standardized application packages. | Hosts heterogeneous recipes containing anything fruitfully consumable by the user’s AI software agent. |
| Installation | Download the package and install the app. | “Steal” the recipe and use an AI software agent to reproduce and integrate the app. |
| Customizability | App has fixed interface with limited modifiability. | App is reproduced from scratch to be tailored to user preferences and can be modified on the fly. |
| Credit | Purchase/Subscribe to the digital product or service you want to use. | Subscribe to designers whose design recipes/compositions you find aesthetic. |
| Background Infrastructure | Requires standardized protocols and focused development. | Requires AI software agents that are highly reliable in execution and somewhat faithful to intent. |
| Reviews and Updates | Send a text review to the developer, hoping they’ll take it into account in future updates. | Reviews can take any form (text, podcast, conversation) and are fed to the AI agent, which incorporates them on the fly for the user. |
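To make the “recipe” rows a bit more concrete, here is a minimal hypothetical sketch of what such a heterogeneous bundle might contain; `agent_build` is an assumed stand-in for a user’s AI software agent, not an existing API:

```python
# Hypothetical sketch of a "recipe" on a Live platform: not a packaged app,
# but heterogeneous material an AI software agent consumes to reproduce a
# tailored app for one user. `agent_build` is a stand-in, not a real API.

recipe = {
    "intent": "a calm reading interface for long research threads",
    "design_notes": ["muted palette", "no notifications", "inline margin comments"],
    "reference_material": ["sketch.png", "voice_memo_review.mp3"],
    "prior_reviews": ["the margins felt cramped on mobile"],
}

user_preferences = {"platform": "android", "font": "serif", "language": "en"}

def agent_build(recipe: dict, prefs: dict):
    """Hypothetical: an AI software agent rebuilds and integrates the app from
    the recipe, tailored to this user, rather than installing a fixed package."""
    raise NotImplementedError
```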
There is a lot of dystopic potential from such tools because of hyper-attentivity and hyper-customisation; these can easily lead to various kinds of isolation, dependence, fragmentation. We want to avoid that outcome.
Often, to do this, the hyper-attentivity will have to be met with calm in the design: not restricting structure, but enabling constraints that soothe. We don’t want to toss out structure, but to use it in a wholesome way rather than out of expedience.
Since conceptual research can be amplified in general, including toward advancing the frontiers of AI creativity, we want to do so differentially, favoring sensitivity to beings.
Notes:
Same as here: Live Theory Part 0: Taking Intelligence Seriously — AI Alignment Forum
Additionally: David from CEEALAR, Aditya, all the participants and contributors to the two hackathons we ran.