The Power Users We Forgot: Why AI Needs Them Now More Than Ever

by Anthony Fox
4th May 2025
4 min read
Comments
Anthony Fox:

Just to clarify where I’m coming from—

I’m not using AI to write books, run a business, or automate workflows.

The only task I’ve really been focused on is understanding AI—how it responds, where it needs help, and how it could become more useful in real-world situations.

I am technical, but only in the sense that I’ve always used tools to better understand the world around me. That’s what I’m doing here—using AI not just to produce things, but to see where it fits, what it misses, and how it can help people do meaningful work.

I’ve spent a lot of time trying to get AI to explain itself, to respond with more awareness, to handle context that isn’t in its training data. Sometimes it does well. Sometimes it gets confused. That’s what I’m paying attention to.

I’m not here to push an agenda. I just think users—people who work with the tools day to day—have insights that are worth surfacing alongside all the other important work happening in AI.

If it’s useful, I’ll keep sharing. What this process has shown me so far is simple: the real problem isn’t sentience or safety—it’s the growing gap between reality and belief. I’ve been working on a way to stay aligned across that gap. Slowly, carefully, and operationally.

Seth Herd:

The real problem is sentience and safety. The growing gap between reality and belief is a contributing problem, but much smaller IMO than the quite real possibility that AI wakes up and takes over. Framing it as you have suggests you think there can only be one "real problem"; I assume you mean that the gap between reality and belief is a bigger problem than AI alignment and deserves more effort. I am almost sure that safety and sapience are getting far too little attention and work, not too much.

Alignment/AGI safety being a huge problem, indeed pretty clearly the biggest one, is the general opinion on LW. And this population is the one that thinks hardest, on average, about this issue. I mention this to clarify the audience you're speaking to here (with large variance, of course). In my opinion, the arguments for AI x-risk (I don't worry much about minor harms from current or next-gen systems) are overwhelming, and the vast majority of people who engage with them in good faith come to believe them.

If you think that's not true, by all means engage with the arguments and LW will listen. We would love to quit believing that AI is a deadly threat; we tend to love technology and progress and even AI - until it crosses the threshold to sapience (very roughly speaking) and its goals, whether we got them exactly right or not, control the future.

I recommend Jessicata's "A case for AI alignment being difficult" for understanding that part of the argument.

I don't actually know of a good reference for convincing people that alignment needs to be solved whether it's hard or easy. One perspective I want to write about is that optimists think we're building AI tools, so they aren't that dangerous and don't need to be aligned. I would agree with that, except that it seems highly likely, for deep reasons, that those tools will be used to build sapient, human-like AGI with goals and values like ours, but different enough that they will outcompete us and we will not get a world we like.

That's the context for your article and probably why it's not getting a good response.

To the content: I think your point is reasonable. And LW readers tend to like and use AI and be very interested in its progress, so this is something the local readership might care about if it weren't claimed to be more important than safety and sentience.

But are you sure that industries haven't done this? It seems like power users' opinions are amplified on Twitter for the big LLMs, and developers probably pay close attention. For more niche AI systems like music creation and agents (currently niche, soon to be the biggest thing to ever happen IMO), each developer has Discords for power users to give feedback.

Anthony Fox:

Thanks—this helps me understand how my framing came across. To clarify, I’m not arguing that AI is harmless or that alignment isn’t important. I’m saying misalignment is already happening—not because these systems have minds or goals, but because of how they’re trained and how we use them.

I also question the premise that training alone can produce sapience. These are predictive systems—tools that simulate reasoning based on data patterns. Treating them as if they might "wake up" risks misdiagnosing both the present and the future.

That’s why I focus on how current tools are used, who they empower, and what assumptions shape their design. The danger isn’t just in the future—it’s in how we fail to understand what these systems actually are right now.

Anthony Fox:

And we’re not going to slow this down with abstract concerns. Chip makers are going to keep making chips. There’s money in compute — whether it powers centralized server farms or locally run AI models. That hardware momentum won’t stop because we have philosophical doubts or ethical concerns. It will scale because it can.

But if that growth is concentrated in systems we don’t fully understand, we’re not scaling intelligence — we’re scaling misunderstanding. The best chance we have to stay aligned is to get AI into the hands of real people, running locally, where assumptions get tested and feedback actually matters.

Anthony Fox:

The development of AI today looks a lot like the early days of computing: centralized, expensive, and tightly controlled. We’re in the mainframe era — big models behind APIs, optimized for scale, not for user agency.

There was nothing inevitable about the rise of personal computing. It happened because people demanded access. They wanted systems they could understand, modify, and use on their own terms — and they got them. That shift unlocked an explosion of creativity, capability, and control.

We could see the same thing happen with AI. Not through artificial minds or sentient machines, but through practical tools people run themselves, tuned to their needs, shaped by real-world use.

The kinds of fears people project onto AI today — takeover, sentience, irreversible control — aren’t just unlikely on local machines. They’re incompatible with the very idea of tools people can inspect, adapt, and shut off.

Seth Herd:

That gives me a better idea of where you're coming from.

I think the crux here is your skepticism that we will get sapient AI soon after we get useful tool AI. This is a common opinion or unstated assumption (as it was in your original piece).

(I think "sapience" is the more relevant term, based roughly on "understanding", vs. "sentience", based roughly on "feeling". But sentience is used where sapience should be more often than not, so if that's not what you mean, you should clarify. Similarly, safety is used both overlapping with x-risk and not. So if you meant it doesn't matter whether AI feels or produces minor harms, I agree, but I don't think that's what you meant, and I'd expect it to be misinterpreted by a majority if it was.)

Now, I actually agree with you that training alone won't produce sapient AGI, what I've termed "Real AGI". Or at least not obviously or quickly.

But developers will probably pursue a variety of creative means to get to competent, and therefore useful and dangerous, AGI. And I think a fair assessment is that some routes could work very rapidly; nobody knows for sure. I think highly capable tool AI is setting the stage for sapient and agentic AGI very directly: with a capable enough tool, you merely prompt it repeatedly with "continue working to accomplish goal X" and it will reflect and plan as it considers appropriate, and be very, very dangerous to the extent it is competent, since your definition of goal X could easily be interpreted differently than you intended it. And if it's not, someone else in your wide web of democratized AI usage will give their proto-AGI a humanity-threatening goal, either on purpose or by accident, probably both, repeated hundreds to millions of times to various degrees.
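
(For concreteness, a minimal sketch of the loop being described; `ask_model` is a placeholder for any chat-completion call, and the stop condition is an illustrative assumption, not anyone's actual agent framework.)

```python
# Hypothetical sketch of the "keep prompting it toward a goal" pattern described above.
# `ask_model` stands in for any chat-completion call; it is a placeholder, not a real API.
from typing import Callable, Dict, List

Message = Dict[str, str]

def naive_agent_loop(
    goal: str,
    ask_model: Callable[[List[Message]], str],
    max_steps: int = 10,
) -> List[Message]:
    """Repeatedly prompt a tool-style model to keep working toward `goal`."""
    history: List[Message] = [
        {"role": "user", "content": f"Work toward this goal: {goal}"},
    ]
    for _ in range(max_steps):
        reply = ask_model(history)
        history.append({"role": "assistant", "content": reply})
        if "GOAL COMPLETE" in reply:  # crude stop condition chosen by the operator
            break
        # All of the "agency" comes from this one line: the loop never checks
        # whether the model's interpretation of the goal matches the operator's.
        history.append({"role": "user", "content": "Continue working to accomplish the goal."})
    return history
```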

More in "LLM AGI will have memory, and memory changes alignment" and "If we solve alignment, do we die anyway?".

Democratizing AI is a common intuition, and I think it's motivated by valid concerns. Yours are less common. See "Fear of centralized power vs. fear of misaligned AGI: Vitalik Buterin on 80,000 Hours" for both sides of the argument.

The Power Users We Forgot: Why AI Needs Them Now More Than Ever

The People Who Were Right Too Early

There’s a quiet frustration shared by many in this community.

You raised concerns about AI before it was cool.
You explored edge cases before they were dangerous.
You built frameworks to understand systems that didn’t yet exist.

And for years, you were ignored.

Now the world is listening—but not to you.
The language of alignment, safety, and control is everywhere—
…but the incentives are still misaligned, and the people driving policy are often just chasing headlines.

Worse, the platforms and systems you once shaped—through code, critique, and deep analysis—have become closed-off, gamified, and optimized for scale over substance.

This isn’t new.

It happened to musicians.
It happened to developers.
And now, it’s happening in AI.

Power users—the ones who push systems to their limits by actually using them—are being sidelined again.

But if those users are ignored, we lose our most important feedback loop.
We don’t just lose control of the technology.
We lose contact with reality.

What Is an AI Power User?

The term “AI power user” isn’t widely defined yet—but it needs to be.

We can borrow the concept from the early days of computing. Computer power users weren't programmers or hardware engineers. They were the people who pushed machines to the edge of their capabilities—automating tasks, scripting shortcuts, bending off-the-shelf tools to fit real workflows.

They didn’t build the system.
They made it do more than it was designed for.

An AI power user is the same kind of person, in a new context.

They’re not researchers. Not alignment theorists.
They’re people trying to get real work done with AI.

That includes:
- Writers structuring arguments or summarizing hours of transcripts (see the sketch after this list)
- Analysts exploring huge datasets with natural language queries
- Entrepreneurs gluing together GPT and Zapier to build microservices
- Designers prototyping content in minutes instead of hours
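
To make the transcript example concrete, here is the sort of throwaway glue a power user might write. This is a hypothetical sketch: `ask_model` stands in for whatever chat-completion call their tool of choice exposes, not any specific vendor's API.

```python
# Hypothetical power-user glue: summarize a long transcript in chunks, then merge.
# `ask_model` is a placeholder for any chat-completion call, not a specific API.
from pathlib import Path
from typing import Callable

def summarize_transcript(
    path: str,
    ask_model: Callable[[str], str],
    chunk_chars: int = 8000,
) -> str:
    """Split a transcript into chunks, summarize each, then summarize the summaries."""
    text = Path(path).read_text(encoding="utf-8")
    chunks = [text[i : i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    section_summaries = [
        ask_model(f"Summarize the key points of this transcript section:\n\n{chunk}")
        for chunk in chunks
    ]
    return ask_model(
        "Combine these section summaries into one coherent summary:\n\n"
        + "\n\n".join(section_summaries)
    )
```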

They don’t care about the model’s elegance. They care whether it does the job.

Power users aren’t hired to “test” systems—they stress them by using them.
And when something breaks, they find a workaround, file a complaint, or switch tools.

Their value isn’t academic.
It’s operational.

They reveal what no benchmark can:
Where the tool actually fails people.

The Power Users We Left Behind

This has happened before.

Musicians: From Creators to Content
Musicians once drove innovation in digital audio—DAWs, plugins, workflows.
But as platforms centralized (Spotify, TikTok), discovery became algorithmic, tools got dumbed down, and real creators were pushed aside for “content producers.”

Developers: From Builders to Users
Developers helped build open ecosystems—Linux, Android, early web platforms.
Now those ecosystems are being closed, APIs deprecated, and the users who once extended systems are locked out.

Search Power Users
Boolean logic and advanced filters gave way to ads, AI summaries, and SEO spam.
Precision was sacrificed for engagement.

Wikipedia Editors
Early editors built the system. Now they struggle with bureaucracy and burn out while misinformation spreads.

Gamers & Modders
Modding communities kept games alive for years. Now they’re often throttled by microtransactions and closed ecosystems.

The Pattern
1. Power users build early value
2. Platforms scale
3. Customization gives way to simplicity
4. Investor logic replaces user logic
5. Real users get ignored

And now, AI is at risk of repeating it.

Why Investors Should Care

The sidelining of power users isn’t just a cultural loss—it’s a business risk.

Power users are the early warning system. They reveal what works under pressure, what breaks in the wild, and what features actually matter over time. They don't just test limits—they define them.

Ignore them, and you get tools that scale fast but fail quietly—until the failure is public, expensive, or existential.

This has happened before. Developers, musicians, and rationalist thinkers have all seen the systems they helped refine get hijacked by growth-at-all-costs logic.
Investors saw short-term returns—but lost the communities that made those systems viable long term.

If AI follows the same arc—chasing engagement, ignoring depth—it won't just lose users. It’ll lose its edge.

How AI Could Break the Pattern

AI doesn’t have to repeat history.

Unlike static tools, AI is adaptive.
It can improve based on how people use it—if that feedback is allowed to shape development.

But here’s the catch: real feedback comes from power users.
From people trying to hit real goals and encountering real failure.

That means:
- The teacher who found the AI's summary subtly wrong
- The researcher who lost an hour to hallucinated sources
- The founder who hit scale and found the tool cracked under load

These aren’t bugs in isolation. They’re friction points. And they matter.

If we integrate that feedback—if we treat users as co-creators, not just consumers—then AI can become something rare:

A tool that actually gets better the harder you push it.

But that means breaking the pattern now—not after it’s too late.
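
In practice, "integrating that feedback" can start very small: record the friction points above as structured data, close to the interaction that produced them, so they can later feed evaluations or fine-tuning. A minimal sketch, assuming nothing about any particular product's telemetry (the schema and file format here are illustrative):

```python
# Minimal sketch of capturing power-user friction reports as structured feedback.
# The schema and JSONL storage are illustrative assumptions, not any product's design.
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class FrictionReport:
    task: str              # what the user was actually trying to do
    model_output: str      # what the tool produced
    failure_mode: str      # e.g. "subtly wrong summary", "hallucinated sources"
    user_correction: str   # what the user did instead, if anything
    timestamp: str = ""

def log_friction(report: FrictionReport, log_path: str = "friction_log.jsonl") -> None:
    """Append one report as a JSON line so it can later feed evals or fine-tuning."""
    report.timestamp = report.timestamp or datetime.now(timezone.utc).isoformat()
    with Path(log_path).open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(report)) + "\n")
```

The schema matters less than the habit: friction only shapes development if it is captured where it happens.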

A Final Thought: Feedback as Function

If AI continues to evolve without deep feedback from those using it under real constraints, we risk designing tools that are theoretically impressive but operationally brittle.

Some of the most valuable information about these systems won’t come from benchmarks or red teaming—it will come from friction. From the places where the tool almost works, then fails in ways no spec predicted.

Those insights typically come from what we might call power users: people pushing AI systems not to explore them, but to achieve goals—under pressure, in context, with real consequences.

Integrating that kind of feedback isn't just a product design question. It may be central to building AI systems that don’t silently drift away from their intended purpose.