Re-imagining AI Interfaces

by Harsha G.
8th Sep 2025
Linkpost from somestrangeloops.substack.com
6 min read

1. AI interfaces still feel stuck in the past.

We've built the most advanced information systems ever created, but we're still interacting with them like it's the command-line era. It's not wrong exactly, but it feels cognitively primitive.

Conversations are a very useful pattern for engaging with other humans. But if text-based communication were sufficient for human cognition, we never would have invented diagrams, flowcharts, folder structures, and, god help us, PowerPoint.

We have not evolved to be text-optimised creatures. We are pattern-matching, visual-processing, spatial-reasoning creatures who happened to develop language as a useful hack. We think in relationships and patterns, yet currently we're forced to translate everything into text requests and then back into understanding.

So why are we pretending that the pinnacle of AI interaction is typing questions and getting text back?

 

2. Chat is dead. Long live Chat.

Chat interfaces aren't going anywhere. Chat is great because working with AI fundamentally requires iteration. You ask, it answers, you clarify, it refines. This back-and-forth is essential because current AI systems are like very knowledgeable but slightly autistic friends: incredibly capable, but you need to be specific about what you want.

The problem is comprehending the output and navigating its structure. Text output is like having a world-class chef who can only serve food through a straw. Technically functional, but you're missing most of what makes the experience worthwhile.

At Oncourse, we see this constantly. Users can ask for practice questions through chat as plain text, but they overwhelmingly prefer answering in our quiz interface, because:

  • They can see all options at once
  • They can navigate non-linearly to explanations, then to another question, back to a lesson inside an explanation, and so on.
  • Visual patterns help them understand relationships
  • They don't have to remember what they asked three prompts ago

This bias toward a structured quiz widget over a question rendered as plain text isn't just a matter of taste; it reflects their cognitive architecture.

Users prefer a widget that lets them navigate questions faster and view explanations in a tap, rather than reading plain text output.
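To make this concrete, here is a minimal sketch in TypeScript of what "structured output over prose" looks like: the model is asked for a typed quiz object that the client renders as a navigable widget. The `generateJSON` helper and the field names are illustrative placeholders, not Oncourse's actual schema.

```typescript
// A minimal sketch of structured quiz output, assuming a hypothetical
// generateJSON() helper that wraps whichever model API you use. The shapes
// below are illustrative, not Oncourse's actual schema.

interface QuizOption {
  label: string;
  isCorrect: boolean;
  explanation: string; // shown when the user taps the option
}

interface QuizQuestion {
  prompt: string;
  options: QuizOption[];
  relatedLessonId?: string; // lets the UI link back into a lesson
}

// Placeholder: call your model provider and JSON.parse the response here.
async function generateJSON(request: string): Promise<unknown> {
  return [];
}

async function buildQuiz(topic: string): Promise<QuizQuestion[]> {
  const raw = await generateJSON(
    `Write 3 multiple-choice questions about ${topic}. ` +
      `Return JSON: [{prompt, options: [{label, isCorrect, explanation}]}]`
  );
  // A real client would validate the shape (e.g. with a schema library) first.
  return raw as QuizQuestion[];
}

// The widget renders all options at once and exposes explanations on tap,
// instead of burying them in a linear chat transcript.
```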

 

3. So, where are all the magical new interfaces?

Why haven't we already seen an explosion of AI interfaces that actually work with human cognition instead of against it?

There are 3 main reasons, each of which is dissolving as we speak:

3.1. It is only now technically feasible

Until very recently, AI was too slow, expensive, and unreliable for real-time interface generation. Getting GPT-3 to generate working code was like asking a brilliant philosophy professor to fix your car—they might eventually figure it out, but you probably don't want to wait around.

Modern models change this equation entirely. GPT-4, Claude, and their successors can reliably generate visual elements, write interface code, and do it fast enough for real-time interaction. The infrastructure constraint is essentially solved.

 

3.2. Imagineering takes time

On December 9, 1968, Douglas Engelbart showed a hand-held wooden box to a thousand engineers. He moved it. Then a “bug” moved on screen. It was magical. They called the box a “mouse” because the tail came out the end, a ridiculous name that somehow stuck.[1]

For ninety minutes, he demonstrated the impossible: video conferencing (40 years before Zoom), collaborative editing (40 years before Google Docs or Figma), hypertext (more than 20 years before the web), and much more.

 

Douglas Engelbart presenting the “Mother of All Demos” in 1968

Look at this timeline:

  • 1950s: Command-line terminals begin to see adoption
  • 1963: Ivan Sutherland creates Sketchpad, the first GUI prototype
  • 1968: Douglas Engelbart gives the "Mother of All Demos"
  • 1973: Xerox Alto brings GUIs to workstations
  • 1983–84: Apple Lisa and Macintosh make GUIs consumer-ready
  • 1990s: GUIs become standard

 

Since the “Mother of All Demos”, it took roughly 30 years, on average, for the prototypes shown to reach production.

 

This time it won’t take 30 years though. Most of that time was spent on technical constraints and market development, not imagination. We already have the infrastructure and market demand. The 30-year cycle should compress to maybe 3 years. But we need the 21st century mother of all demos.

 

3.3. Other important problems have taken priority so far

Most AI work today is still focused on taking text-based chat assistants and co-pilots into production, prototypes of which were built three years ago: making sure chatbots don't hallucinate, integrate with existing systems, are deployed securely, and so on. This is all important work! But it means relatively few people are thinking about fundamentally new interaction paradigms.

Which should change. It will.


4. Some juicy examples of new UI paradigms

What should the new interfaces actually look like? Here are the patterns emerging:

 

Zoom-in & Zoom-out

The idea here is to zoom into or zoom out of different levels of abstraction. Amelia has a great talk on explorations of this paradigm.
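As a rough sketch of what this could look like in code (my reading of the pattern, not Amelia's implementation): the same content is re-rendered at a different level of abstraction on demand, with a placeholder `complete` function standing in for any text-generation call.

```typescript
// A sketch of the zoom-in / zoom-out pattern: the same content is re-rendered
// at a different level of abstraction on demand. `complete` is a stand-in for
// any text-generation call; it is not a specific vendor API.

type ZoomDirection = "in" | "out";

async function complete(request: string): Promise<string> {
  // Placeholder: call your model provider here.
  return "";
}

async function zoom(content: string, direction: ZoomDirection): Promise<string> {
  const instruction =
    direction === "in"
      ? "Expand this into a more detailed explanation, adding one level of depth."
      : "Condense this into a higher-level summary, dropping one level of detail.";
  return complete(`${instruction}\n\n${content}`);
}

// A UI would attach this to a pinch or scroll gesture, keeping a stack of
// zoom levels so the user can move up and down without losing context.
```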

 

Doom Scrolling to generate

Generating content from different personas while you keep scrolling is another pattern I love.

 

This has already been productised by apps like Status.
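A rough sketch of the mechanics, with an illustrative persona list and a placeholder `generatePost` helper (this is not how Status is actually built): reaching the end of the feed triggers generation of the next card from a rotating persona.

```typescript
// A sketch of scroll-driven generation: when the user nears the end of the
// feed, the next card is generated from a rotating persona. The persona list
// and generatePost() are illustrative placeholders.

const personas = ["a skeptical historian", "an optimistic engineer", "a poet"];

async function generatePost(persona: string, topic: string): Promise<string> {
  // Placeholder: ask your model for a short post written as `persona`.
  return `(${persona} on ${topic})`;
}

async function onNearEndOfFeed(feed: string[], topic: string): Promise<void> {
  const persona = personas[feed.length % personas.length];
  feed.push(await generatePost(persona, topic));
}

// In a browser you would trigger onNearEndOfFeed() from an IntersectionObserver
// watching a sentinel element at the bottom of the list.
```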

 

Generative Visuals

AI that creates semantic mappings showing relationships between concepts. Ask about photosynthesis, get a dynamically generated diagram showing the relationships between sunlight, chloroplasts, glucose, and oxygen. Ask follow-up questions, watch the diagram expand and highlight relevant pathways.

This isn't just "AI that makes pretty pictures"—it's AI that understands conceptual relationships well enough to visualize them in ways that match human spatial reasoning.

This demo by Damien et al. shows a great example of this for a storytelling use-case.
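One plausible way to wire this up (a sketch under my own assumptions, not the demo's implementation) is to ask the model for nodes and edges instead of prose, and to merge each follow-up answer into the existing graph so the diagram grows with the conversation.

```typescript
// A sketch of generative visuals as a concept graph: the model is asked for
// nodes and edges rather than prose, and follow-up questions merge new nodes
// into the existing graph. The schema is illustrative.

interface ConceptGraph {
  nodes: { id: string; label: string }[];
  edges: { from: string; to: string; relation: string }[];
}

async function generateGraphJSON(request: string): Promise<ConceptGraph> {
  // Placeholder: call your model and JSON.parse/validate the response.
  return { nodes: [], edges: [] };
}

async function expandGraph(graph: ConceptGraph, question: string): Promise<ConceptGraph> {
  const addition = await generateGraphJSON(
    `Given this graph: ${JSON.stringify(graph)}\n` +
      `Add only the nodes and edges needed to answer: "${question}". Return JSON {nodes, edges}.`
  );
  return {
    nodes: [
      ...graph.nodes,
      ...addition.nodes.filter(n => !graph.nodes.some(g => g.id === n.id)),
    ],
    edges: [...graph.edges, ...addition.edges],
  };
}

// A renderer (e.g. any force-directed graph library) can then highlight the
// newly added pathways, so the diagram expands as the user asks follow-ups.
```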

 

 

Canvas-style interfaces are a related example, where information branches and flows spatially. Instead of linear chat transcripts, you get living documents that grow and restructure as you explore ideas. Claude's Artifacts feature hints at this direction: collaborative spaces where both human and AI can directly manipulate content.
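A sketch of the data structure such a canvas could sit on (purely illustrative, not how Claude's Artifacts actually work): a tree of blocks that either the human or the AI can branch and edit, instead of a single linear transcript.

```typescript
// A sketch of the data structure behind a canvas-style interface: content
// lives in a tree of blocks that either the human or the model can branch
// and edit. Purely illustrative.

interface CanvasBlock {
  id: string;
  author: "human" | "ai";
  content: string;
  children: CanvasBlock[]; // branches explored from this block
}

function branch(
  parent: CanvasBlock,
  author: CanvasBlock["author"],
  content: string
): CanvasBlock {
  const child: CanvasBlock = {
    id: `${parent.id}.${parent.children.length + 1}`,
    author,
    content,
    children: [],
  };
  parent.children.push(child);
  return child;
}

// Example: the user branches two alternative directions off the same AI draft.
const root: CanvasBlock = { id: "1", author: "ai", content: "First draft", children: [] };
branch(root, "human", "Make it more formal");
branch(root, "human", "Try a shorter version");
```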


 

Malleable Filters

Imagine creating dynamic filters on the fly based on your needs. This is explored by Bryan Min et al. here, as well as in Amelia's talk.
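A minimal sketch of how this might work, with an assumed filter-spec format and a placeholder `generateFilterSpec` helper: natural language is mapped onto structured filters that the client applies locally and can render as editable chips.

```typescript
// A sketch of malleable filters: a natural-language request is turned into a
// structured filter spec that the client applies locally. The spec format and
// generateFilterSpec() are assumptions for illustration.

interface FilterSpec {
  field: string;
  op: "eq" | "gt" | "lt" | "contains";
  value: string | number;
}

async function generateFilterSpec(request: string, fields: string[]): Promise<FilterSpec[]> {
  // Placeholder: ask the model to map the request onto the available fields
  // and return JSON like [{field, op, value}].
  return [];
}

function applyFilters(
  rows: Record<string, unknown>[],
  specs: FilterSpec[]
): Record<string, unknown>[] {
  return rows.filter(row =>
    specs.every(({ field, op, value }) => {
      const v = row[field];
      if (op === "eq") return v === value;
      if (op === "gt") return typeof v === "number" && v > Number(value);
      if (op === "lt") return typeof v === "number" && v < Number(value);
      return String(v).toLowerCase().includes(String(value).toLowerCase());
    })
  );
}

// "Show me items under $20 mentioning 'travel'" becomes two FilterSpecs that
// the UI can render as removable chips, so the user can tweak them by hand.
```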

UI on the fly, or Ephemeral UI (2027+)

Another really interesting upcoming development: interfaces that redesign themselves based on context and need. Ask about quarterly sales performance, get a dashboard. Ask about customer sentiment patterns, get a sentiment map. Ask about code architecture, get an interactive dependency graph.

The interface becomes as fluid as the information it displays.
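Here is one way to sketch the core loop, with illustrative widget names and a placeholder `chooseWidget` helper (an assumption, not any shipped product's API): the model first picks a widget type for the query, then the client renders the matching component with model-supplied data.

```typescript
// A sketch of UI-on-the-fly: the model first picks a widget type for the
// query, then the client renders the matching component with model-supplied
// props. Widget names and chooseWidget() are illustrative.

type WidgetKind = "dashboard" | "sentimentMap" | "dependencyGraph" | "plainText";

interface WidgetPlan {
  kind: WidgetKind;
  props: Record<string, unknown>; // data the widget needs, produced by the model
}

async function chooseWidget(query: string): Promise<WidgetPlan> {
  // Placeholder: ask the model "Which of these widgets best answers the query,
  // and with what data?" and parse its JSON answer.
  return { kind: "plainText", props: { text: `Answer to: ${query}` } };
}

async function render(query: string): Promise<void> {
  const plan = await chooseWidget(query);
  // A real client would map plan.kind to a component registry;
  // here we just log the decision.
  console.log(`Rendering a ${plan.kind} widget`, plan.props);
}

render("How did quarterly sales perform?");
```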

 

5. What about no interface?

But wait, there's another direction entirely: what if the best interface is no interface?

AI agents increasingly handle entire workflows without human intervention. Why build elaborate dashboards for expense reporting when an agent can just... handle your expenses? Why create complex project management interfaces when agents can coordinate directly?

This points toward a bifurcated future:

  • Rich, expressive interfaces for exploration, creativity, and learning
  • Invisible agent interfaces for routine tasks and automation

The interesting question is which category grows faster.

 

6. The ultimate pipe-dream: brain-computer interfaces.

Current AI interface design assumes we're stuck with keyboards, mice, and screens as input/output mechanisms. But what if we're not?

By 2035 (give or take a decade), we might have:

  • Invasive systems that can read and even write neural signals directly (Neuralink-style)
  • Non-invasive systems that capture brain activity with sufficient resolution
  • Protocols for translating thoughts into digital commands and vice versa

This sounds like science fiction, but consider the development timeline. Before AI, this research would have taken a century. But if AI becomes superhuman at research and engineering by 2027-28, we might see breakthroughs that compress decades of work into years.

The ultimate interface is no interface—direct thought-to-computation.

 

7. So what happens next?

We're at the equivalent moment of 1968, when Engelbart showed the mouse and hypertext to an audience that had never seen anything like it. The components for radically new AI interfaces exist. The question is who builds them and how quickly they spread.

I'm building a catalog of emerging patterns and a research lab focused on this problem. Short-term goal: better ways to interact with AI that actually match human cognition. Long-term goal: figure out how to eliminate the interface entirely. Ping me if you’re interested.

The most important interfaces are the ones we haven't imagined yet. History suggests they'll arrive faster than we expect and feel more natural than the ones we're using now.

The future of UI isn't about making better chat windows. It's about making computers that work the way minds actually work.

[1] His group also called the on-screen cursor a “bug”, but this term was not widely adopted.