I might give this a read, but based on the abstract I am concerned that "has a perspective" is going to be one of those properties that's so obvious that its presence can be left to human judgment, but that nonetheless contains all the complexity of the theory.
EDIT: Looks like my concerns were more or less unfounded. It's not what I would call the standard usage of the term, and I don't buy the conceptual-analysis-style justifications for why this makes sense as the definition of agent, but what gets presented is a pretty useful definition, at a fairly standard "things represented in models of the world" level of abstraction.
I usually just call these "free variables," by analogy to that term's usage in mathematics/statistics, but "aether" seems good too. If nothing else, when I say "free variable" to people who don't know the jargon, they just sort of roll over it, whereas they'd have a much harder time doing that if I said "aether".
I read it more as pointing towards something like embedded agency.
Winning, Jason (2019). The Mechanistic and Normative Structure of Agency. Dissertation, University of California San Diego.
I have not had a chance to read this, and my time is rather constrained at the moment, so it's unlikely I will. But I stumbled across it, and it piqued my interest. A better understanding of agency appears important to the success of many research programs in AI safety, and this abstract matches enough of the pattern of what LW/AF has figured out matters about agency that it seems well worth sharing.
Full text of the dissertation here.