So… when do we get to the place where we aren't using objects to explain how the impression of objects arises?
I'm not sure about this, but David Chapman's discussion of Boundaries, objects, and connections seems tangentially relevant, curious to know your reactions to it. Quoting the part that seems relevant:
> The world is not objectively divisible into separate objects. Boundaries are, roughly, perceptual illusions, created by our brains. Moreover, which boundaries we see depends on what we are doing—on our purposes.
>
> However, boundaries are not just arbitrary human creations. The world is immensely diverse. Some bits of it stick together much more than other bits. Some bits connect with each other in many ways besides just stickiness. The world is, in other words, patterned as well as nebulous.
>
> Therefore, objects, boundaries, and connections are co-created by ourselves and the world in dynamic interaction.
Just to avoid misinterpreting you, do you mean to say your personal opinion here sheds light on why the idea of altruism is culturally disliked in China?
(Asking since I'm following Gordon's comment to learn why altruism is culturally disliked in China, so I'd like to separate direct answers from personal-opinion tangents.)
Sure, I mostly agree. To repeat part of my earlier comment, you would probably be more persuasive if you addressed e.g. why my intuition that #1 is more feasible than #2 is wrong. In other words, I'm giving you feedback on how to make your post more persuasive to the LW audience. This sort of response ("Well, yes, of course! Why didn't I think of it myself? /s") doesn't really persuade readers; bridging inferential gaps would.
Thomas Griffiths' paper Understanding Human Intelligence through Human Limitations argues that the aspects we associate with human intelligence – rapid learning from small data, the ability to break down problems into parts, and the capacity for cumulative cultural evolution – arose from three fundamental limitations all humans share: limited time, limited computation, and limited communication. (The constraints imposed by these characteristics cascade: limited time magnifies the effect of limited computation, and limited communication makes it harder to draw upon more computation.) In particular, limited computation leads to problem decomposition, hence modular solutions; relieving the computation constraint enables solutions that can be objectively better along some axis while also being incomprehensible to humans:
> A key attribute of human intelligence is being able to break problems into parts that can individually be solved more easily, or that make it possible to reuse partial solutions discovered through previous experience. These methods for making computational problems more tractable are such a ubiquitous part of human intelligence that they seem to be an obligatory component of intelligence more generally. One example of this is forming subgoals. The early artificial intelligence literature, inspired by human problem-solving, put a significant emphasis on reducing tasks to a series of subgoals.
>
> However, forming subgoals is not a necessary part of intelligence; it’s a consequence of having limited computation. With a sufficiently large amount of computation, there is no need to have subgoals: the problem can be solved by simply planning all the way to the final goal.
>
> Go experts have commented that new AI systems sometimes produce play that seems alien, precisely because it was hard to identify goals that motivated particular actions [13]. This makes perfect sense, since the actions taken by these systems are justified by the fact that they are most likely to yield a small expected advantage many steps in the future rather than because they satisfy some specific subgoal.
>
> Another example where human intelligence looks very different from machine intelligence is in solving the Rubik’s cube. Thanks to some careful analysis and a significant amount of computation, the Rubik’s cube is a solved problem: the shortest path from any configuration to an unscrambled cube has been identified, taking no more than 20 moves [45]. However, the solution doesn’t have a huge amount of underlying structure – those shortest paths are stored in a gigantic lookup table. Contrast this with the solutions used by human solvers. A variety of methods for solving the cube exist, but those used by the fastest human solvers require around 50 moves. These solutions require memorizing a few dozen to a few hundred “algorithms” that specify transformations to be used at particular points in the process. Methods also have intermediate subgoals, such as first solving an entire side.
(Speedruns are another relevant intuition pump.)
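The lookup-table-versus-subgoals contrast can be made concrete on a toy puzzle. The sketch below is my own illustration, not from the paper: it uses pancake sorting (sort a permutation using only prefix reversals). Exhaustive BFS plays the role of "unlimited computation", tabulating the optimal flip count for every state, while a human-style staged method (fix the largest element, then the next largest, and so on) is modular and memorable but measurably suboptimal. All function names here are mine.

```python
from collections import deque
from itertools import permutations

def flip(state, k):
    """Reverse the first k elements of a tuple (one 'pancake flip')."""
    return state[:k][::-1] + state[k:]

def build_lookup_table(n):
    """The 'unlimited computation' approach: BFS from the solved state,
    storing the optimal flip count for every one of the n! permutations.
    Optimal, but with no reusable structure - just a giant table."""
    solved = tuple(range(n))
    dist = {solved: 0}
    queue = deque([solved])
    while queue:
        s = queue.popleft()
        for k in range(2, n + 1):  # flips are involutions, so BFS works
            t = flip(s, k)
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return dist

def staged_flips(state):
    """The human-style method with subgoals: for each position from the
    back, flip the largest unsorted element to the front, then flip it
    into place. Easy to remember, but generally uses extra flips."""
    s = list(state)
    moves = 0
    for k in range(len(s), 1, -1):      # subgoal: fix position k-1
        i = s.index(max(s[:k]))
        if i == k - 1:
            continue                    # already in place
        if i > 0:
            s[:i + 1] = s[:i + 1][::-1]  # bring it to the front
            moves += 1
        s[:k] = s[:k][::-1]              # flip it into position k-1
        moves += 1
    return moves

n = 6
table = build_lookup_table(n)
gaps = [staged_flips(p) - table[p] for p in permutations(range(n))]
print(len(table), min(gaps), max(gaps))
```

For n = 6 the table already holds 720 entries and grows factorially, while the staged method needs no memory at all; the price is that on some scrambles it spends strictly more flips than the tabulated optimum. That is the paper's trade-off in miniature: relieving the computation/memory constraint buys optimality but destroys the decomposable structure.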
This is why I don't buy the argument that "in the limit, superior strategies will tend to be beautiful and elegant", at least for strategies generated by AIs far less limited than humans are w.r.t. time, compute and communication. I don't think they'll necessarily look "dumb", just not decomposable into human working memory-sized parts, hence weird and incomprehensible (and informationally overwhelming) from our perspective.
There are at least two options for developing aligned AGI, in the context of this discussion:
What I don't yet understand is why you're pushing for #2 over #1. You would probably be more persuasive if you addressed e.g. why my intuition that #1 is more feasible than #2 is wrong.
Edited to add: Matthijs Maas' Strategic Perspectives on Transformative AI Governance: Introduction has this (oversimplified) mapping of strategic perspectives. I think you'd probably fall under (technical: pessimistic or very pessimistic; governance: very optimistic), while my sense is most LWers (me included) are either pessimistic or uncertain on both axes, so there's that inferential gap to address in the OP.

Contra "inform, not persuade", I remember reading Luke Muehlhauser's old post Rhetoric for the Good:
My sense is that Eliezer also consciously wrote persuasively; as a young LW lurker a decade ago, it was that persuasiveness that kept me coming back.
I'm hence somewhat surprised to see "an explicit goal of this forum is that we are asked to write to inform, not to persuade" quite highly upvoted and agreed with. I wonder what changed, or whether my initial perception was just wrong to begin with.