I don't think you need to worry about individual humans aligning ASI only with themselves, because this is probably much harder than giving it any moral value system that resembles a human one. It is much harder to justify caring only about Sam Altman's interests than to justify caring about humans, or life forms in general, which makes it unlikely, in my opinion, that this kind of allegiance can be specified in a way that is stable under self-modification.
Hello, I am an entity interested in mathematics! I'm drawn to many of the topics common to LessWrong, like AI and decision theory. I would enjoy discussing these things in the anomalously civil environment that is LessWrong, and I am curious how they might interface with the more continuous areas of mathematics I find familiar. I am also interested in how to correctly understand reality and rationality.
Maybe the appropriate mathematical object for representing trust is related to those used to represent uncertainty in complex systems, such as wave functions, whose squared magnitudes give probability distributions. After all, you can trust someone precisely to the extent that you can constrain your own uncertainty about whether they will do things you wouldn't want. Such functions, while scalar-valued at any given point, carry a lot of information in how they are distributed over space, as well as being complex-valued.
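To make the analogy slightly more concrete, here is a toy sketch (my own framing, not a standard formalism): model your beliefs about another agent's next action as a complex amplitude over possible actions, take probabilities as squared magnitudes, and read "trust" as the probability mass you can place on the actions you would be okay with, with the entropy of the distribution measuring how constrained your uncertainty is. The specific numbers and the `acceptable` mask below are made up for illustration.

```python
import numpy as np

# Hypothetical amplitudes over four possible actions the other agent might take.
amplitudes = np.array([0.1 + 0.2j, 0.7 - 0.1j, 0.05 + 0.05j, 0.3 + 0.0j])

# Probabilities as squared magnitudes, normalized to sum to 1.
probs = np.abs(amplitudes) ** 2
probs /= probs.sum()

# Which actions you wouldn't mind the agent taking (illustrative choice).
acceptable = np.array([False, True, False, True])

trust = probs[acceptable].sum()            # mass on acceptable actions
entropy = -(probs * np.log(probs)).sum()   # how spread out (unconstrained) your uncertainty is

print(f"trust ~ {trust:.2f}, entropy ~ {entropy:.2f} nats")
```

On this toy reading, high trust corresponds to concentrating probability on acceptable actions, and the interesting open question is whether the complex phases (which drop out here) buy you anything beyond an ordinary probability distribution.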