Capybasilisk

Comments

Creating Complex Goals: A Model to Create Autonomous Agents
Capybasilisk · 6mo · 3 karma · 0 agreement

> competition between minds may lead to the development of society

It also leads to civil strife and war. I think humans would be very swiftly crowded out in such a society of advanced agents.

We also see, even in humans, that as a mind becomes more free of social constraints, new warped goals tend to emerge.

So how well is Claude playing Pokémon?
Capybasilisk · 6mo · 2 karma · 0 agreement

> mumble into an answer

Typo, I presume.

Metacompilation
Capybasilisk · 6mo · 1 karma · 0 agreement

Typo in the first subheading. Just FYI.

A computational no-coincidence principle
Capybasilisk · 7mo · 5 karma · -5 agreement

Isn't this just the problem of induction in philosophy?

E.g., we have no actual reason to believe that the laws of physics won't completely change on the 3rd of October 2143; we just assume they won't.

Half-baked idea: a straightforward method for learning environmental goals?
Capybasilisk · 7mo · 1 karma · 0 agreement

Thanks. That makes sense.

Half-baked idea: a straightforward method for learning environmental goals?
Capybasilisk · 7mo · 1 karma · 0 agreement

> Also note that fundamental variables are not meant to be some kind of “moral speed limits”, prohibiting humans or AIs from acting at certain speeds. Fundamental variables are only needed to figure out what physical things humans can most easily interact with (because those are the objects humans are most likely to care about).

Ok, that clears things up a lot. However, I still worry that if it's at the AI's discretion when and where to sidestep the fundamental variables, we're back at the regular alignment problem. You have to be reasonably certain of what the AI is going to do in extremely out-of-distribution scenarios.

Half-baked idea: a straightforward method for learning environmental goals?
Capybasilisk · 7mo · 6 karma · 0 agreement

You may be interested in this article:

Model-Based Utility Functions

> Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.
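
To make the two-step construction concrete, here's a rough sketch of the idea as I read it. This is my own illustration, not code from the paper; the names EnvironmentModel, infer_model, and utility_from_model are made up. The point is just that utility gets evaluated on the agent's inferred model of the environment, not on its raw observation or reward channel, which is what blocks the self-delusion behavior the abstract mentions.

```python
# Hypothetical sketch of a model-based utility function (not the paper's code).
from dataclasses import dataclass, field


@dataclass
class EnvironmentModel:
    """The agent's current best estimate of the external world's state."""
    state: dict = field(default_factory=dict)


def infer_model(history: list[tuple[dict, dict]]) -> EnvironmentModel:
    """Step 1: infer an environment model from the interaction history,
    given here as (action, observation) pairs. This stub just folds each
    observation into a running state estimate; the actual inference
    method is left open."""
    model = EnvironmentModel()
    for _action, observation in history:
        model.state.update(observation)
    return model


def utility_from_model(model: EnvironmentModel) -> float:
    """Step 2: compute utility from the learned model, i.e. from what the
    agent believes is true of the environment. The 'widget_count'
    specification is an arbitrary stand-in for whatever the designer
    actually cares about, matched against structures in the model."""
    return float(model.state.get("widget_count", 0))


# Because utility depends on the modeled environment rather than on the
# observation stream itself, tampering with the sensors only helps the
# agent if it also changes what the model says about the world.
history = [({"move": "explore"}, {"widget_count": 3})]
print(utility_from_model(infer_model(history)))  # -> 3.0
```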

Also, regarding this part of your post:

> For example: moving yourself in space (in a certain speed range)

This range is quite huge. In certain contexts, you'd want to be moving through space at high fractions of the speed of light rather than at walking speed. Same goes for moving other objects through space. Btw, would you count a data packet as an object you move through space?

> staying in a single spot (for a certain time range)

Hopefully the AI knows you mean moving in sync with Earth's movement through space.

Towards shutdownable agents via stochastic choice
Capybasilisk · 9mo · 1 karma · 0 agreement

Is an AI aligned if it lets you shut it off even though it can foresee extremely negative outcomes for its human handlers if it suddenly ceases running?

I don't think it is.

So funnily enough, every agent that lets you do this is misaligned by default.

Towards shutdownable agents via stochastic choice
Capybasilisk · 11mo · 1 karma · -3 agreement

I'm pointing out the central flaw of corrigibility. If the AGI can see the possible side effects of shutdown far better than humans can (and it will), it should avoid shutdown.

You should turn on an AGI with the assumption you don't get to decide when to turn it off.

Just because an LLM said it doesn't mean it's true: an illustrative example
Capybasilisk · 1y · 1 karma · 0 agreement

According to Claude: green_leaf et al., 2024

Posts

[LINKPOST] Agents Need Not Know Their Purpose · 9 karma · 1y · 0 comments
[Link Post] Bytes Are All You Need: Transformers Operating Directly On File Bytes · 18 karma · 2y · 2 comments
The Involuntary Pacifists · 11 karma · 3y · 3 comments
Reward Is Not Necessary: How To Create A Compositional Self-Preserving Agent For Life-Long Learning · 3 karma · 3y · 0 comments
The Opposite Of Autism · 8 karma · 3y · 16 comments
Deriving Our World From Small Datasets · 5 karma · 3y · 4 comments
Shadows Of The Coming Race (1879) · 82 karma · 4y · 6 comments
Leukemia Has Won · 1 karma · 7y · 2 comments
Has The Function To Sort Posts By Votes Stopped Working? · 1 karma · 7y · 3 comments