It seems more likely to me that the internet will become relatively deanonymized than that it will be destroyed by bots. E.g. I weakly expect that by 2024 LessWrong will make it much harder for new users to post comments or articles, specifically to prevent spam / propaganda from bots.
Maybe the internet will break down into a collection of walled gardens. Maybe services will start charging token fees to join as an easy way to restrict how many accounts spammers can have (at the cost of systematically excluding their poorest users...).

Epistemic status: probably wrong; intuitively, I feel like I'm onto something, but I'm too uncertain about this framing to be confident in it.

I refer to optimizers that can be identified by a measuring stick of utility as *agenty optimizers*.

The measuring stick is optimization power. In particular, in the spirit of this sequence, it is the correlation between local optimization and optimization far away. If I have 4 basic actions available to me and each performs two bits of optimization on the universe, I am maximally powerful for a structure with 4 basic actions (choosing among 4 actions can convey at most log₂(4) = 2 bits), and I am most definitely either an agent or constructed by one. I speak and the universe trembles.
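Here is a minimal sketch of that arithmetic, as I'm imagining it (my own toy formalization, not anything from the sequence; `bits_of_optimization` and the outcome counts are invented for illustration):

```python
import math

def bits_of_optimization(total_outcomes: int, outcomes_selected: int) -> float:
    """Bits by which an action narrows the set of distinguishable far-away outcomes."""
    return math.log2(total_outcomes / outcomes_selected)

# Choosing one of 4 basic actions can convey at most log2(4) = 2 bits locally.
local_bits = math.log2(4)

# If each action reliably picks out 1 of 4 distinguishable distant outcomes,
# it achieves 2 bits of optimization far away -- matching the local maximum,
# which is what "maximally powerful for a structure with 4 basic actions" means here.
far_bits = bits_of_optimization(total_outcomes=4, outcomes_selected=1)

print(local_bits, far_bits)  # 2.0 2.0
```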

One might look at life on Earth, see that it is unusually structured and unusually purposeful, and conclude that it is the work of an agenty optimizer. And they would be wrong.

But if they looked closer, at the pipelines and wires and radio waves on Earth, they might conclude that those were the work of an agenty optimizer, because such structures turn small actions (flipping a switch, pressing a key) into large, distant effects (water does or doesn't arrive at a village; a purchase is confirmed and a bushel of apples is shipped across the planet). And they would be correct.

In this framing, resources under my control are structures which propagate and amplify my outputs into large, distant effects (they needn't be friendly, per se; they just have to be manipulable). Thus, a dollar (+ Amazon + a computer + ...) is an invaluable resource because, with it, I can cause any of literally millions of distinct objects to move from one part of the world to another just by moving my fingers in the right way. And I can do that because the world has been reshaped to bend to my will in a way that clearly indicates agency to anyone who knows how to look.
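To put rough numbers on the dollar example (again a toy sketch; `optimization_power` and the outcome counts are made-up assumptions, not real estimates):

```python
import math

def optimization_power(reachable_outcomes: int) -> float:
    """Bits of selection my basic actions exert over distinguishable distant outcomes."""
    return math.log2(reachable_outcomes)

# With only my 4 basic actions, I can select among 4 distant outcomes: 2 bits.
bare_hands = optimization_power(4)

# A dollar (+ Amazon + a computer + ...) lets the same finger movements select
# among millions of distinct "which object ends up where" outcomes.
with_resource = optimization_power(4 * 1_000_000)

print(f"extra bits from the resource: {with_resource - bare_hands:.1f}")  # ~19.9
```

In this toy framing, the value of a resource is just the extra bits of far-away selection it buys per unit of local action.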

However, I haven't the slightest idea how to turn this framework into a method for actually identifying agents (or resources) in a universe with weird physics.

Also, I have a sense that there is an important difference between accumulating asymmetric power (allies, a secret AI) and creating infrastructure that empowers people approximately symmetrically (Elicit), a difference this framework doesn't capture. Maybe the former is evidence of instrumental resource accumulation, whereas the latter provides specific information about the creator's goals? But both *are* clear signs of agenty optimization, so maybe the distinction isn't relevant in this context?

Also possibly of note: more optimization power is not strictly desirable, because having too many choices might overwhelm your computational limitations.