LESSWRONG

Anthony Fox

Comments
The Power Users We Forgot: Why AI Needs Them Now More Than Ever
Anthony Fox · 4mo

The development of AI today looks a lot like the early days of computing: centralized, expensive, and tightly controlled. We’re in the mainframe era — big models behind APIs, optimized for scale, not for user agency.

There was nothing inevitable about the rise of personal computing. It happened because people demanded access. They wanted systems they could understand, modify, and use on their own terms — and they got them. That shift unlocked an explosion of creativity, capability, and control.

We could see the same thing happen with AI. Not through artificial minds or sentient machines, but through practical tools people run themselves, tuned to their needs, shaped by real-world use.

The kinds of fears people project onto AI today — takeover, sentience, irreversible control — aren’t just unlikely on local machines. They’re incompatible with the very idea of tools people can inspect, adapt, and shut off.

The Power Users We Forgot: Why AI Needs Them Now More Than Ever
Anthony Fox · 4mo

And we’re not going to slow this down with abstract concerns. Chip makers are going to keep making chips. There’s money in compute — whether it powers centralized server farms or locally run AI models. That hardware momentum won’t stop because we have philosophical doubts or ethical concerns. It will scale because it can.

But if that growth is concentrated in systems we don’t fully understand, we’re not scaling intelligence — we’re scaling misunderstanding. The best chance we have to stay aligned is to get AI into the hands of real people, running locally, where assumptions get tested and feedback actually matters.

The Power Users We Forgot: Why AI Needs Them Now More Than Ever
Anthony Fox · 4mo

Thanks—this helps me understand how my framing came across. To clarify, I’m not arguing that AI is harmless or that alignment isn’t important. I’m saying misalignment is already happening—not because these systems have minds or goals, but because of how they’re trained and how we use them.

I also question the premise that training alone can produce sapience. These are predictive systems—tools that simulate reasoning based on data patterns. Treating them as if they might "wake up" risks misdiagnosing both the present and the future.

That’s why I focus on how current tools are used, who they empower, and what assumptions shape their design. The danger isn’t just in the future—it’s in how we fail to understand what these systems actually are right now.

The Power Users We Forgot: Why AI Needs Them Now More Than Ever
Anthony Fox · 4mo

Just to clarify where I’m coming from—

I’m not using AI to write books, run a business, or automate workflows.

The only task I’ve really been focused on is understanding AI—how it responds, where it needs help, and how it could become more useful in real-world situations.

I am technical, but only in the sense that I’ve always used tools to better understand the world around me. That’s what I’m doing here—using AI not just to produce things, but to see where it fits, what it misses, and how it can help people do meaningful work.

I’ve spent a lot of time trying to get AI to explain itself, to respond with more awareness, to handle context that isn’t in its training data. Sometimes it does well. Sometimes it gets confused. That’s what I’m paying attention to.

I’m not here to push an agenda. I just think users—people who work with the tools day to day—have insights that are worth surfacing alongside all the other important work happening in AI.

If it’s useful, I’ll keep sharing. What this process has shown me so far is simple: the real problem isn’t sentience or safety—it’s the growing gap between reality and belief. I’ve been working on a way to stay aligned across that gap. Slowly, carefully, and operationally.

AI Governance to Avoid Extinction: The Strategic Landscape and Actionable Research Questions
Anthony Fox · 4mo

Chipmakers are a good place to start.

They don’t need to wait for regulation. They could pause voluntarily—not out of fear, but to build real understanding. Even they don’t fully grasp what their chips are enabling.

Before we chase more scale, we need to understand what we’ve already built. The class of serious AI power users—people who can push the systems to their edge and report back meaningfully—hardly exists. Chipmakers could help create it by getting advanced hardware into the right hands.

If we’re going to slow down, let it be to think better, not just to move slower.

AI Governance to Avoid Extinction: The Strategic Landscape and Actionable Research Questions
Anthony Fox · 4mo

Thought-provoking piece. But one thing feels off: governance is being treated as something done to AI, not with it.

We talk about coordination, restrictions, treaties—but how many of the people shaping these decisions actually understand what today’s AI can and can’t do?

Policymakers should be power users. The best governance won’t come from abstract principles—it’ll come from hands-on experience. From people who’ve tested the limits, seen the failures, and know where the edge cases live. Otherwise we’re making rules for a system we barely understand.

Unpredictable outcomes are inevitable. The question is whether we’ll have skilled people—users and designers both—who can respond with clarity, not fear.

Before we restrict the tools, we need to understand them. Then governance won’t just be policy—it’ll be practice.

[This comment is no longer endorsed by its author]
Posts

The Real AI Safety Risk Is a Conceptual Exploit: Anthropomorphism (3mo)
The Power Users We Forgot: Why AI Needs Them Now More Than Ever (4mo)
When AI Optimizes for the Wrong Thing (4mo)