I have started to see the instrumental convergence problem as part of the human-to-human alignment problem.
E. Glen Weyl in "Why I Am Not A Technocrat":
Similarly, if we want to have AIs that can play a productive role in society, our goal should not be exclusively or even primarily to align them with the goals of their creators or the narrow rationalist community interested in the AIAP. Instead it should be to create a set of social institutions that ensures that the ability of any narrow oligarchy or small number of intelligences like a friendly AI cannot hold extremely disproportionate power. The institutions likely to achieve this are precisely the same sorts of institutions necessary to constrain extreme capitalist or state power.
A primary goal of AI design should be not just alignment, but legibility, to ensure that the humans interacting with the AI know its goals and failure modes, allowing critique, reuse, constraint etc.
Weyl's technocrat critique is valid at the personal level. It hit me hard. I have a tendency to drift from important, messy problems toward interesting but difficult problems that might have formal solutions (is there a name for this cognitive bias?). The LessWrong community reinforces this drift.
I argue that the instrumental convergence and AI alignment problems are framed incorrectly, in ways that make them more interesting to think about and seemingly easier to solve.
New framing: intelligent agents (human and nonhuman) are constantly aligning with each other. Solving instrumental convergence is equivalent to solving society. We can't solve it once and for all, but we can create processes and institutions that adjust to and manage problems as they arise. Typical scenarios are superpower + superintelligence, ruling party + superintelligence, Zuck + superintelligence, Chairman Xi + superintelligence, and Alphabet board of directors + superintelligence.
If Scrooge McDuck’s downtown Duckburg apartment rises in price, and Scrooge’s net worth rises equally, but nothing else changes, the distribution of purchasing power is now more unequal — fewer people can afford that apartment. But nobody is richer in terms of actual material wealth, not even Scrooge. Scrooge is only “richer” on paper. The total material wealth of Duckburg hasn’t gone up at all.
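A minimal sketch of this point, with made-up names and numbers (none of it from any real data): the price rise shifts the distribution of purchasing power, and Scrooge's net worth rises only on paper, while total material wealth is untouched.

```python
# Toy model of the Duckburg example: a price rise changes the
# distribution of purchasing power without creating material wealth.
# All figures are illustrative.

residents_net_worth = [50_000, 850_000, 1_300_000]  # everyone but Scrooge
scrooge_net_worth = 1_000_000
apartment_price = 800_000  # Scrooge owns the apartment

def who_can_afford(price, net_worths):
    """Return the net worths that could buy the apartment outright."""
    return [w for w in net_worths if w >= price]

before = who_can_afford(apartment_price, residents_net_worth)

# The apartment's price rises 50%; Scrooge's net worth rises equally,
# but this is a paper gain only -- no new goods or housing exist.
apartment_price *= 1.5                  # 800_000 -> 1_200_000
scrooge_net_worth += 400_000            # 1_000_000 -> 1_400_000 on paper

after = who_can_afford(apartment_price, residents_net_worth)

print(len(before), len(after))  # 2 1 -- fewer residents can afford it
```

The physical stock of Duckburg (one apartment, the same goods) never changes in this model; only the numbers attached to it do, which is the sense in which nobody is materially richer.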
Cities generate wealth from economies of agglomeration. Bigger cities have higher productivity than smaller ones, and some of that wealth naturally flows into land prices. Here is a good introduction: Scale Economies and Agglomeration
People do become richer in terms of actual material wealth in cities. The source of that wealth is not the property and land itself, but the utility value the land has in producing positive agglomeration effects. Land is one of the primary factors of production (land, labor, capital). Just like farmland, urban land derives its value from the output produced on it; the soil itself has no value without cultivation. Urban land happens to be more productive than farmland, so it becomes more valuable.
The solution to this problem is better urban planning. Inefficient land use is a bottleneck for growth in cities; land should be treated as a scarce resource and used very efficiently. Building more, building densely, and providing good infrastructure all help reduce the cost of living in high-productivity areas.
As a layman who reads a little academic micro- and macroeconomics, I can't help but think that I am already familiar with "Commoditize Your Complement." I lack the expertise to frame this properly, but I suspect it relates to concepts like "platform envelopment", "envelopment of complements", and "platform stacks" in discussions of competition in platform economies.
From Buddhist Phenomenology by Henk Barendregt
1.7 Explaining apparent contradictions
Now we will explain how contradictions, which happen to occur in some buddhist texts, are possible. Suppose some part of reality U is described using some language L. Some of the regularities observed in L are in fact physical laws, but may be confused with logical laws. If we extend the reality U to U+, but keep as the describing language L, then statements may result that contradict statements made about U. Although the contradictions are only apparent, because the statements are about different `worlds', it may seem that logical laws are violated.
An example will be helpful. Consider a tribe living on an isolated island. Vision of the tribesman is such that they can only see the colors black and white. In their description of the world they say: ``Something is either black or white." Although we know that this is for them in fact an empirical law, the people of the island are tempted to consider this as a logical law. Sometimes they use the words `white' and `non-black' interchangeably. On some day someone has a mystical experience. In our language we can say that that person has seen the color green. In the language of the tribe she says: ``I have seen something very impressive. It was neither black nor white." For most of the people of her tribe she was saying: ``It was neither black, nor non-black." Therefore on the island one may think she is speaking nonsense. However, we know that she is not.
There are, however, stronger contradictions. In his book Exploring Mysticism already mentioned, F. Staal discusses the following so called `tetra lemma' occurring in buddhist texts.
It is not A; it is not non-A; it is both A and non-A; and it is neither A nor non-A.
Even this contradiction may be explained. Simply consider again the tribe seeing only black and white. But now our mystic sees the color gray. Indeed gray is not white, not black. And it can be said that gray is both white and black. But also that it is neither white nor black.
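One toy way to read the gray example (my own construction, not Barendregt's formalism): model each color as a degree of blackness in [0, 1], while the tribe's language only has the crisp predicates "black" and "white". Read at two levels (strictly vs. partly), all four lemmas can hold of gray at once without any logical law being violated.

```python
# Toy reading of the tetralemma in the black/white tribe's world.
# A color is a degree of blackness: white = 0.0, black = 1.0, gray = 0.5.

def tetralemma(blackness):
    """Evaluate the four lemmas for a color with the given blackness."""
    strictly_black = (blackness == 1.0)
    strictly_white = (blackness == 0.0)
    partly_black = (blackness > 0.0)    # has some black in it
    partly_white = (blackness < 1.0)    # has some white in it
    return {
        "not black": not strictly_black,
        "not white": not strictly_white,
        "both black and white": partly_black and partly_white,
        "neither black nor white": (not strictly_black) and (not strictly_white),
    }

# For gray, all four lemmas hold simultaneously -- an apparent
# contradiction that dissolves once 'black' and 'white' are read
# at two levels (strict vs. partial).
print(tetralemma(0.5))

# For a truly black object, only the bivalent-logic answers hold.
print(tetralemma(1.0))
```

The apparent contradiction comes entirely from the language L collapsing "strictly black" and "partly black" into the single word "black", just as the islanders collapse "white" and "non-black".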
I hope that the examples show that contradictions occurring in texts of mystics are not a sign that something essential is wrong. Nevertheless it is preferable that descriptions of altered states of consciousness are free from contradictions in the sense of logic. I will try to fulfill this requirement in sections 2 and 3.