Social preferences have worked before. But aren't you assuming that people are rational agents who choose optimal actions? My take is that cultures of the past taught people not to be rational to this degree, and that it was partly this lack of rationality which prevented Moloch. This may be why Darwinism didn't select for logical and unbiased beings to begin with (a personal theory of mine).
Aren't these all partial solutions to Moloch?
Bottom-up control rather than top-down, irrational agents or individuality (anything which increases the number of choices treated as optimal), illegibility (you can't get trapped in a meta which isn't discoverable/well-known), decentralization (Molochian problems appear at asymptotic limits, which means that they emerge at scale; by avoiding large systems, you avoid Moloch).
I've been thinking about this topic for a while myself. I have not thought much about The Goddess of Everything Else, so I cannot tell if my current insights account for that or not.
Good and evil are naive concepts which break down once you start thinking about them and questioning them. Moral relativism is not one of many valid views, it's a logical conclusion.
The post criticizes how every age believes that they've figured out what's good, even though they're clearly flawed from the perspective of other ages. But the same thing is true when moralizers decide that "X is obviously bad and we all agree" because X feels bad, despite a complete lack of effort to challenge this belief. Morality is like religion in that it inhibits thought, and I think they're both cultural immune systems against various issues. We shouldn't do away with morality,... (read more)
As you said, Claude tore you to shreds. I think "if you’re a dickhead online, you can still be prevented from being hired" is a dangerous meme. It assumes that anyone who is affected by this modern social credit system is a bad human being, when the truth is that every actual human being has done something which can make them look bad. The only reason this problem isn't 100 times worse is that the world isn't yet legible enough for us to gather and interpret this data automatically. In fact, you've hit upon one of the solutions - the destruction of data (or even better: not recording it in the first... (read more)
If nobody perceives this evil, then it does not exist. If anything, bringing attention to something which is happening, and then deeming it evil, would increase the total suffering of the world for making people suffer from something that they would otherwise have ignored.
Society needs every cog of the system, it needs everything from simple jobs to highly respected jobs. Every part is important. Suffering starts when you start telling half the population "Your position is low, this makes you a loser, this makes you a failure, and it makes you worth less". Suffering is not a function of objective states, but of their interpretation. Halving the net suffering of the world... (read more)
Exactly, losing and winning are equivalent, they both mark the end of the game. This sort of Buddhist conclusion that the destruction of everything is preferable to a bit of negative emotion is mere pathology. We could make new challenges in a "solved" world, but people would cheat using AI just like they're cheating by using AI now. With the recent increases in cognitive off-loading, I predict that a vast majority will ruin the game for themselves because they can't stop themselves from cheating, for the same reason that they can't stop themselves from scrolling Tiktok all day (hedonism stemming from a lack of meaning).
The hope for this kind of victory is... (read more)
I really, really dislike other people telling me what to do. In fact, I've sometimes done things because other people told me that I couldn't do them (motivation through pride) or shouldn't do them (motivation through spite). I think this goes for a lot of intelligent people, unless they are working for something which aligns with their values, or for people who they like. I'm often more motivated to help my friends than I am to help myself.
So, this solution works for most people, but it doesn't generalize to people like myself who have a high need for agency and feel unfairly compensated (being twice as good a worker rarely results... (read more)
It doesn't require conscious or direct coordination, but it does require a chain of cause and effect which affects many people. If society agrees that chasing after material goods rather than meaningful pursuits is bad taste, then the world will become less molochian. It doesn't matter why people think this, how the effect is achieved, or if people are aware of this change. Human values exist in us because of evolution, but we may accidentally destroy them with technology, through excessive social competition, or through eugenics/dysgenics.
I don't think rules make people better. One doesn't become virtuous because we make it impossible for them to break the law; true virtue is when you... (read more)
The nature of exploitation and the ratio of bad states to good states makes it impossible for a good future to exist in a highly rational society. This is because rationality leads to Moloch. The reason not all of human history has been terrible is that good taste prunes Molochian elements by assigning them a lower value, or directly prevents the ways of thinking which lead to the discovery of such strategies in the first place. Laws and ethics are insufficient because the attack/defense asymmetry cannot be overcome. There's no difference between felling the rainforest, scamming old people, or using research to improve your dating profile. That some people will disagree... (read more)
I think the problem with Moloch is one of one-pointedness, similar to metas in competitive videogames. If everyone has their own goals and styles, then many different things are optimized for, and everyone can get what they personally find to be valuable. A sort of bio-diversity of values.
When, however, everyone starts aiming for the same thing, and collectively agreeing that only said thing has value (even at the cost of personal preferences) - then all choices collapse into a single path which must be taken. This is Moloch. A classic optimization target which culture warns against optimizing for at the cost of everything else is money. An even greater danger is that... (read 567 more words →)
By a duality principle, you can learn a lot from losers.
If somebody has made it to a high position of power despite having glaring flaws (you can think of Tate I guess), I recommend you pay attention. They must have something which balances their flaws out, something which made them succeed despite being at a disadvantage. You can figure out what it is and take it for yourself.
If an ugly person, stupid person, or socially awkward person becomes more successful than what makes sense to you, then your map is incomplete, and the advantage of the person might even be a low-hanging fruit (if a stupid person has an advantage, chances are... (read more)
Psychological observations:
You can measure the mental health of a person by their distance from the natural (taoist) viewpoint. From "child" to "intellectual", you climb the stairs of perception all the way to a crushing, recursive self-referential meta-awareness.
1: Describe the world as it appears to you -> Describe the world as it is -> Describe the world as social reality dictates it is -> Describe the world in a manner which signals that which social reality deems to be valuable.
2: Animalistic -> Aware of others -> Aware of self -> Judging oneself from others' perspective.
Climbing up any such stairs is psychologically unhealthy. The sheer distance between the map and the territory is dangerous,... (read 484 more words →)
Status: No less important than the problem with AGI. I promise.
Here's the solution to the problem of Moloch:
Less information. Seriously. Hear me out.
Before we discovered clickbait, it could not dominate.
Now that it's well-known, it's a Nash equilibrium; we have no choice but to tend towards clickbait.
Why is make-up popular? Because it exists. It's now a Nash equilibrium; we can't get rid of it again, for girls want to look pretty. They feel pressured towards this strategy.
Why are cities all about efficiency and competition, while some rural areas still value health and well-being over productivity? It's because some people haven't fallen into the trap/dilemma yet. They're not sufficiently connected/exposed yet.
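The clickbait trap above can be sketched as a toy two-publisher game. The payoff numbers are invented purely for illustration; the point is only the structure: once the strategy is in everyone's strategy set, it dominates, even though the resulting equilibrium pays less than mutual honesty.

```python
# Toy model of the clickbait trap as a one-shot symmetric game.
# Payoff numbers are made up for illustration only.
# Each of two publishers picks "honest" or "clickbait". Clickbait steals
# attention from an honest rival; when both use it, readers trust neither.

PAYOFFS = {
    # (my strategy, rival's strategy): my payoff
    ("honest", "honest"): 3,
    ("honest", "clickbait"): 0,   # my attention is stolen
    ("clickbait", "honest"): 4,   # I steal attention
    ("clickbait", "clickbait"): 1,
}

def best_response(rival: str) -> str:
    """My payoff-maximizing reply to the rival's strategy."""
    return max(("honest", "clickbait"), key=lambda s: PAYOFFS[(s, rival)])

# Once clickbait is a known option, it is a dominant strategy...
assert best_response("honest") == "clickbait"
assert best_response("clickbait") == "clickbait"

# ...yet the equilibrium it produces pays less than mutual honesty.
assert PAYOFFS[("clickbait", "clickbait")] < PAYOFFS[("honest", "honest")]
```

Before the strategy is discovered, it simply isn't in the table, so the honest outcome is stable by default; discovery is what springs the trap.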
Now, let's consider two... (read 525 more words →)
All maps/definitions/models/languages/etc. are arbitrary, finite, self-contained axiomatic systems which are valid on the inside and nonsense when viewed from the outside. They also cannot interact with anything outside of themselves. They're isolated from reality but still very useful (as they're created and modified to fit our needs through a process much like biological evolution).
If I ask a question about reality, then my answer will answer the question; it won't answer reality. The pair (question, answer) exists in itself and only as itself. Questions are loaded questions, in that they contain definitions, but definitions don't rely on anything; they're assumptions/assertions. Everything is a free-floating object, there's no root, you can't "get to the bottom"... (read 374 more words →)
The "A" of "AI" is sufficient for human extinction.
What a dangerous AI might do to us, we're already doing ourselves, mainly with the help of technology, which takes charge of human decisions. It's not the addition of artificial intelligence, but the lack of human involvement, which nets us dystopia. To explain why this is the case, I'm going to borrow a really useful view from a post on qualiacomputing titled "Wireheading Done Right", namely that of "Consciousness vs. Pure Replicators". What makes humans special is that we care about valence. Pure replicators only care about winning and optimizing, whereas conscious beings care about things like joy. We're not purely about optimization, we... (read 874 more words →)
Society treats complementary aspects as opposites.
Then it tries to minimize the "bad" aspects, not realizing that all good aspects are minimized too.
This is true for happiness and suffering, it's also true for good and evil.
You can't separate creation and destruction, nor life and death.
An easy psychological example is that people with a poverty-mindset (e.g. insecurity) try to take from others, but they're stingy with giving.
But if you want to get, you should first give. If you want to rest well, you should first work hard. If you want to earn money, you should first invest. If you want to do something perfectly, you should first allow yourself to do it badly.
Life is... (read more)
That makes sense, but something feels off about it.
It seems to assume that all agents have the same payoff matrix, that the metrics are objective rather than subjective, that objective outcomes are the same as subjective outcomes, that the agent is not deceived about the payoff, that agents optimize for the same thing, that all agents have access to the same strategies.
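The first of those assumptions is easy to make concrete. Here's a minimal sketch (payoff numbers and the "distaste" penalty are invented for illustration) showing that once agents evaluate the same objective outcomes through different subjective payoffs, the "rational" dominant strategy is no longer the same for everyone:

```python
# Sketch of one hidden assumption: that all agents share a payoff matrix.
# Numbers are invented for illustration. Agent A values only the objective
# reward; agent B subjectively penalizes defecting (guilt, taste), so the
# two agents have different best responses to the very same situation.

OBJECTIVE = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def subjective(agent: str, mine: str, theirs: str) -> float:
    """Objective payoff filtered through the agent's own values."""
    payoff = OBJECTIVE[(mine, theirs)]
    if agent == "B" and mine == "defect":
        payoff -= 4  # B's subjective cost of defecting
    return payoff

def best_response(agent: str, theirs: str) -> str:
    return max(("cooperate", "defect"),
               key=lambda s: subjective(agent, s, theirs))

# Same game, different agents, different "optimal" actions:
assert best_response("A", "cooperate") == "defect"     # 5 > 3
assert best_response("B", "cooperate") == "cooperate"  # 3 > 5 - 4
```

If the argument quietly assumes everyone plays A's matrix, it proves too much.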
It's also a mistake to compare biological behaviour to rational optimization. If I'm hungry, I gather food, and once I have enough food, I stop gathering. The value of food depends on its scarcity, and any needs I have can be sated, limiting the destruction of my environment. Animals don't... (read more)
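The satiation point above can be sketched in a few lines (quantities are arbitrary illustration units, not a model of any real ecosystem): a sateable agent takes only what it needs, while an unbounded optimizer values every marginal unit and strips the environment bare.

```python
# Sateable need vs. unbounded optimization, in illustration units.

def gather_until_sated(environment: int, need: int) -> tuple:
    """Take only what is needed; return (taken, remaining)."""
    taken = min(environment, need)
    return taken, environment - taken

def gather_maximizing(environment: int) -> tuple:
    """Value every marginal unit; return (taken, remaining)."""
    return environment, 0

# The sateable agent leaves the environment mostly intact:
assert gather_until_sated(environment=100, need=10) == (10, 90)

# The maximizer has no stopping condition short of depletion:
assert gather_maximizing(environment=100) == (100, 0)
```

The difference isn't intelligence; it's whether the utility function has a ceiling.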