[Aspiration-based designs] Outlook: dealing with complexity

by Jobst Heitzig, jossoliver, thomasfinn, Simon Dima · 28th Apr 2024 · AI Alignment Forum · 2 min read
Previous: [Aspiration-based designs] 3. Performance and safety criteria, and aspiration intervals
Next: [Aspiration-based designs] A. Damages from misaligned optimization – two more models

Summary. This teaser post sketches our current ideas for dealing with more complex environments. It will ultimately be replaced by one or more longer posts describing these ideas in more detail. Reach out if you would like to collaborate on these issues.

Multi-dimensional aspirations

For real-world tasks that are specified in terms of more than one evaluation metric, e.g., how many apples to buy and how much money to spend at most, we can generalize Algorithm 2 from aspiration intervals to convex aspiration sets as follows:

  • Assume there are d>1 many evaluation metrics ui, combined into a vector-valued evaluation metric u=(u1,…,ud). 
  • Preparation: Pick d+1 many linear combinations fj in the space spanned by these metrics so that their convex hull is full-dimensional and contains the origin, and consider the d+1 many policies πj each of which maximizes the expected value of the corresponding function fj. Let Vj(s) and Qj(s,a) be the expected values of u when using πj in state s or after using action a in state s, respectively (see Fig. 1). Let the admissibility simplices V(s) and Q(s,a) be the simplices spanned by the vertices Vj(s) and Qj(s,a), respectively (red and violet triangles in Fig. 1). They replace the feasibility intervals used in Algorithm 2. 
  • Policy: Given a convex state-aspiration set E(s)⊆V(s) (central green polyhedron in Fig. 1), compute its midpoint (centre of mass) m and consider the d+1 segments ℓj from m to the corners Vj(s) of V(s) (dashed black lines in Fig. 1). For each of these segments ℓj, let Aj be the (nonempty!) set of actions for which ℓj intersects Q(s,a). For each a∈Aj, compute the action-aspiration E(s,a)⊆Q(s,a) by shifting a copy Cj of E(s) along ℓj towards Vj(s) until the intersection of Cj and ℓj is contained in the intersection of Q(s,a) and ℓj (half-transparent green polyhedra in Fig. 1), and then intersecting Cj with Q(s,a) to give E(s,a) (yellow polyhedra in Fig. 1). Then pick one candidate action from each Aj and randomize between these d+1 actions in proportions chosen so that the corresponding convex combination of the sets E(s,a) is included in E(s). This is always possible because m is in the convex hull of the sets Cj and the shapes of the sets E(s,a) "fit" into E(s) by construction. (A 2-D code sketch of this shift-and-intersect construction follows the list.)
  • Aspiration propagation: After observing the successor state s′, the action-aspiration E(s,a) is rescaled linearly from Q(s,a) to V(s′) to give the next state-aspiration E(s′) (see Fig. 2 and the numpy sketch after its caption).

(We also consider other variants of this general idea.)
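To make the policy step concrete, here is a minimal 2-D sketch (d = 2, so the admissibility simplices are triangles) of the shift-and-intersect construction, representing convex sets as shapely polygons. The function name, the crude step-wise search for the shift distance, and the use of covers() are illustrative assumptions, not part of the algorithm above:

```python
# Minimal 2-D sketch (d = 2): convex sets are represented as shapely Polygons.
import numpy as np
from shapely.geometry import Polygon, LineString
from shapely.affinity import translate

def action_aspiration(E_s: Polygon, Q_sa: Polygon, corner, steps: int = 200):
    """Shift a copy C_j of E(s) from the midpoint m towards the corner V_j(s)
    until C_j ∩ l_j lies inside Q(s,a) ∩ l_j, then return E(s,a) = C_j ∩ Q(s,a).
    Returns None if the segment l_j misses Q(s,a), i.e. a is not in A_j."""
    m = np.array(E_s.centroid.coords[0])          # centre of mass of E(s)
    corner = np.asarray(corner, dtype=float)      # V_j(s)
    l_j = LineString([tuple(m), tuple(corner)])   # segment from m to V_j(s)
    if not l_j.intersects(Q_sa):
        return None
    slice_Q = Q_sa.intersection(l_j)              # Q(s,a) ∩ l_j
    for t in np.linspace(0.0, 1.0, steps + 1):    # crude search for the shift
        C_j = translate(E_s, *(t * (corner - m))) # shifted copy of E(s)
        slice_C = C_j.intersection(l_j)           # C_j ∩ l_j
        if (not slice_C.is_empty) and slice_Q.covers(slice_C):
            return C_j.intersection(Q_sa)         # E(s,a)
    return None
```

For the final randomization, one natural choice (though not the only one) is to express m as a convex combination of the midpoints of the d+1 shifted copies Cj and to use those weights as the mixing proportions between the chosen candidate actions.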

Fig. 1: Admissibility simplices, and construction of action-aspirations by shifting towards corners and intersecting with action admissibility simplices (see text for details).
Fig. 2: An action admissibility simplex Q(s,a) is the convex combination of the successor states' admissibility simplices V(s′), mixed in proportion to the respective transition probabilities PM(s′|s,a). An action aspiration E(s,a) can be rescaled to a successor state aspiration E(s′) by first mapping the corners of the action admissibility sets onto each other (dashed lines) and extending this map linearly.
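Since a simplex has exactly d+1 corners, the corner correspondence in Fig. 2 determines a unique affine map, so the rescaling in the aspiration-propagation step reduces to a single linear solve. A minimal numpy sketch with illustrative names and toy numbers:

```python
import numpy as np

def propagate_aspiration(E_sa_vertices, Q_corners, V_next_corners):
    """Map the vertices of E(s,a) through the unique affine map that sends
    the d+1 corners of Q(s,a) onto the d+1 corners of V(s')."""
    # Appending a constant-1 coordinate turns the affine map x -> Ax + b
    # into one linear system: [Q_corners | 1] @ M = V_next_corners.
    Q_h = np.hstack([Q_corners, np.ones((len(Q_corners), 1))])
    M = np.linalg.solve(Q_h, V_next_corners)
    E_h = np.hstack([E_sa_vertices, np.ones((len(E_sa_vertices), 1))])
    return E_h @ M                                    # vertices of E(s')

# d = 2 example: triangles given as 3x2 arrays of corners.
Q = np.array([[0., 0.], [4., 0.], [0., 4.]])          # corners of Q(s,a)
V_next = np.array([[1., 1.], [3., 1.], [1., 3.]])     # corners of V(s')
E_sa = np.array([[1., 1.], [2., 1.], [1., 2.]])       # vertices of E(s,a)
print(propagate_aspiration(E_sa, Q, V_next))          # -> vertices of E(s')
```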

Hierarchical decision making

A common way of planning complex tasks is to decompose them into a hierarchy of two or more levels of subtasks. Similar to existing approaches from hierarchical reinforcement learning, we imagine that an AI system can make such hierarchical decisions as depicted in the following diagram (shown for only two hierarchical levels, but readily generalizable to more levels; a rough code sketch follows Fig. 3):

Fig. 3: Hierarchical world model in the case of two hierarchical levels of decision making.
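To hint at what the two-level scheme of Fig. 3 might look like in code, here is a rough, hypothetical sketch. The interface names (Subtask, choose_subtask, env.step, env.is_terminal) are illustrative assumptions, and how aspirations should be split between the levels is precisely one of the details the planned longer posts will work out:

```python
# Hypothetical two-level decision loop in the spirit of Fig. 3.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Subtask:
    aspiration: Any                      # e.g. a convex aspiration set E(s)
    policy: Callable[[Any, Any], tuple]  # low-level aspiration-based policy
    is_done: Callable[[Any], bool]       # termination test for the subtask

def run_two_level(env, choose_subtask: Callable[[Any], Subtask], state):
    """High level picks subtasks; the low level acts until each terminates.
    env.is_terminal and env.step are assumed interfaces, not a real API."""
    while not env.is_terminal(state):
        sub = choose_subtask(state)               # high-level decision
        asp = sub.aspiration
        while not (sub.is_done(state) or env.is_terminal(state)):
            action, asp = sub.policy(state, asp)  # low-level step; also
            state = env.step(state, action)       # propagates the aspiration
    return state
```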
Comments

Roman Malov: Regarding "Pick d+1 many linearly independent linear combinations fj": aren't there at most d linearly independent linear combinations of the ui?

Roman Malov: Maybe you meant pairwise linearly independent (judging by the figure)?

Jobst Heitzig: You are of course perfectly right. What I meant was: so that their convex hull is full-dimensional and contains the origin. I fixed it. Thanks for spotting this!