[summary: A 'preference framework' is a way of deciding which outcomes an agent [1bh terminally] prefers. 'Preference framework' is a broader term than 'utility function', since 'preference framework' would also include structurally complicated meta-utility functions, such as those which appear in some proposals for Utility indifference or Moral_uncertainty.]
A 'preference framework' refers to whatever structure, whether a fixed algorithm or one that updates or potentially changes in other ways, determines which terminal outcomes the agent prefers. 'Preference framework' is a term more general than 'utility function' that includes structurally complicated generalizations of utility functions.
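As a rough illustrative sketch (the names `PreferenceFramework`, `FixedUtilityFunction`, and `Outcome` are invented here for illustration, not drawn from any existing proposal), the relationship can be pictured as an interface that a plain utility function satisfies as one special case:

```python
from typing import Callable, Protocol

Outcome = str  # stand-in outcome type, purely for illustration


class PreferenceFramework(Protocol):
    """Anything that scores terminal outcomes, possibly statefully."""

    def utility(self, outcome: Outcome) -> float:
        """Return how much the framework currently prefers `outcome`."""
        ...


class FixedUtilityFunction:
    """The simplest special case: a single, unchanging utility function."""

    def __init__(self, u: Callable[[Outcome], float]) -> None:
        self._u = u

    def utility(self, outcome: Outcome) -> float:
        return self._u(outcome)
```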
For example, the utility indifference proposal has the agent switching between utility functions $U_1$ and $U_2$ depending on whether a switch is pressed. We can call this meta-system a 'preference framework' to avoid presuming in advance that it embodies a VNM-coherent utility function.
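Continuing the same illustrative sketch, a minimal version of the switching structure might look as follows. This shows only the switching itself and omits the correction term that the actual utility indifference proposal uses to make the agent indifferent to the switch press; all names are hypothetical:

```python
from typing import Callable

Outcome = str  # stand-in outcome type, as in the sketch above


class SwitchingFramework:
    """Meta-level preference framework that defers to u1 while the
    switch is unpressed and to u2 afterward. The composite need not
    behave like any single VNM-coherent utility function over outcomes."""

    def __init__(self, u1: Callable[[Outcome], float],
                 u2: Callable[[Outcome], float]) -> None:
        self._u1 = u1
        self._u2 = u2
        self.switch_pressed = False

    def utility(self, outcome: Outcome) -> float:
        active = self._u2 if self.switch_pressed else self._u1
        return active(outcome)


# Example: which utility function is consulted changes with the switch.
fw = SwitchingFramework(lambda o: 1.0 if o == "task_done" else 0.0,
                        lambda o: 1.0 if o == "shutdown" else 0.0)
assert fw.utility("task_done") == 1.0   # u1 active before the press
fw.switch_pressed = True
assert fw.utility("shutdown") == 1.0    # u2 active after the press
```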
An even more general term would be Decision_algorithm, which doesn't presume that the agent operates by preferring outcomes.