Crocker's rules.
I'm nobody special, and I wouldn't like the responsibility which comes with being 'someone' anyway.
Reading incorrect information can be frustrating, and correcting it can be fun.
My writing is likely provocative because I want my ideas to be challenged.
I may write like a psychopath, but that's what it takes to write without bias; consider that an argument against rationality.
Finally, beliefs don't seem to be a measure of knowledge and intelligence alone, but a result of experiences and personality. Whoever claims to be fully truth-seeking is not entirely honest.
As you said, Claude tore you to shreds. I think "if you're a dickhead online, you can still be prevented from being hired" is a dangerous meme. It assumes that anyone who is affected by this modern social credit system is a bad human being, when the truth is that every actual human being has done something which can make them look bad. The only reason this problem isn't 100 times worse is that the world isn't yet legible enough for us to gather and interpret this data automatically. In fact, you've hit upon one of the solutions - the destruction of data (or even better: not recording it in the first place). These laws only make sense for our public life; private life is (warning: politics) very different. There is no legible solution to the conflict between laws and human freedom, only obscurity, illegibility, unenforceability, separation of information, and the bottleneck in moderation (which is now being dissolved by AI moderation). By the way, I'm aware there's a minority of people (mostly oversocialized moralizers and workaholics) who can tolerate living their entire life in the public sphere, and that they will not be able to empathize with my worries. "People will be forced to be moral, what's wrong with that?"
As your private life is increasingly nested inside infrastructure owned by private companies (not deemed a utility under Section 230), the private sphere of your life will shrink as the territory is eaten (the ratio of territory which is neutral or owned by yourself will shrink towards zero). This is all going to get much worse in 5-10 years. I know because I've been pro-freedom/privacy for more than 15 years, and I've always underestimated how bad things would get.
Maybe once I become financially independent
Perhaps, as long as you don't have any opinions which can get you debanked
If nobody perceives this evil, then it does not exist. If anything, bringing attention to something which is happening, and then deeming it evil, would increase the total suffering of the world by making people suffer over something that they would otherwise have ignored.
Society needs every cog of the system; it needs everything from simple jobs to highly respected jobs. Every part is important. Suffering starts when you start telling half the population "Your position is low, this makes you a loser, this makes you a failure, and it makes you worth less". Suffering is not a function of objective states, but of their interpretation. Halving the net suffering of the world by changing our perceptions of it is trivial compared to making everyone's lives better in an objective sense.
There's also no absolute good or evil. Things which are "somewhat evil" might be protecting against bigger evils. Two examples: Parents force their children to do homework, and this is painful, but it's also for their own benefit and acts to prevent future evils. "Helicopter parenting" is virtue gone wrong, as one does harm in the attempt to minimize harm.
I believe that many ongoing "evils" are Chesterton's fences. Perhaps societies which did not engage in evils do not exist anymore because they destroyed themselves with their virtues. Everything good is costly, after all. This is not a problem to be solved, it's life. If one finds life to be a problem, then their philosophy is wrong - their map says that the territory needs to be different for life to be enjoyable, and this interpretation harms their enjoyment.
Lastly, our moral reflections are worth nothing. Plenty of large companies are making the world a worse place because it's profitable for them; we don't need any more reflection to conclude this. But the conclusion has no power, as the rich have succeeded in subverting the values of society towards consumerism and the like. And only the humans working in companies can be moral, but individuals only have freedom of choice in smaller companies. In larger companies, everyone is just following orders, or some law which forces them to maximize shareholder profit, which we know conflicts with moral principles (this is the problem with Moloch).
What personally bothers me is the lack of humanity in the modern world, not the evil. To be precise, the modern world is not that evil (malicious); it's completely indifferent. Slavery, genocides and conquests would be an improvement (a step towards humanity). The suffering is not the problem: those who grow up in harsher times will build more solid mental defenses and more stoic interpretations of life, so the average level of suffering (or rather, net suffering) is unlikely to change very much. But like farm and zoo animals, we won't get to experience the authentic life that our forefathers were capable of experiencing.
Exactly, losing and winning are equivalent; they both mark the end of the game. This sort of Buddhist conclusion that the destruction of everything is preferable to a bit of negative emotion is mere pathology. We could make new challenges in a "solved" world, but people would cheat using AI just like they're cheating by using AI now. With the recent increases in cognitive off-loading, I predict that a vast majority will ruin the game for themselves because they can't stop themselves from cheating, for the same reason that they can't stop themselves from scrolling TikTok all day (hedonism stemming from a lack of meaning).
The hope for this kind of victory is also extremely naive (and sort of cute). I think it's a cognitive bias - the one that makes people think their communist utopia is possible, makes Musk think he will be living on Mars in the future, and makes Christians think that Jesus could return any day now. Most modern problems are amplified by technological advancements, but we expect even more tech to solve all our problems, even when the dangers of technology are 10 times more obvious than when Kaczynski warned us? That's like losing half your money at a casino and then betting the other half. I don't want to be forced into such a nihilistic self-destruction, which I recognize as a symptom of technology in the first place (life is less meaningful because we have less agency).
The more you meditate on the optimal design for the world, the more your utopian world will look like the world which already exists. Only a child goes "Why don't we just give everyone a lot of money?", and the more we mature, the more "bad" things we will recognize as good. "Puzzles are fun!", "Games are more fun if you discover the solutions yourself", "Exercise and other unpleasant things are healthy", "If things come too easily, we take them for granted", and so on. Keep going, and you will reconcile with suffering too. Nerds are trying to solve psychological problems using math and logic, and they're just as awkward and unsuccessful as philosophers, and for the same reasons. But I'm afraid that nerds have become smart enough that they could bring an end to the game, and it's a strange choice, as learning how to enjoy the game would actually be easier.
I really, really dislike other people telling me what to do. In fact, I've sometimes done things because other people told me that I couldn't do them (motivation through pride) or shouldn't do them (motivation through spite). I think this goes for a lot of intelligent people, unless they are working for something which aligns with their values, or for people who they like. I'm often more motivated to help my friends than I am to help myself.
So, this solution works for most people, but it doesn't generalize to people like myself who have a high need for agency and feel unfairly compensated (being twice as good a worker rarely results in twice the salary). And I think this problem is at its worst when I interpret the actions I must take in my life as originating from the outside (society saying I need to work) rather than as being my own choice (me choosing to work because I think it's best).
Alternative sources of motivation I've seen work in other people are morality (e.g. wanting the world to be better), hopes/dreams for the future (this is vulnerable to doubt, however), a sense of duty, sheer love for the work at hand, and putting oneself in a situation with no way out except doing the work. Alternative causes of akrasia I've seen are disillusionment/nihilism, perfectionism, and fear (fear of pain, risk, the unknown, and of the feeling of cognitive load). Apathy and nihilism are both harmful to motivation, as motivation is rooted in meaning and emotion.
I personally recommend decreasing one's scope of consideration to the local, where one has the most agency, and surrounding oneself with people who care a lot, as this repairs disillusionment over time.
It doesn't require conscious or direct coordination, but it does require a chain of cause and effect which affects many people. If society agrees that chasing after material goods rather than meaningful pursuits is bad taste, then the world will become less Molochian. It doesn't matter why people think this, how the effect is achieved, or whether people are aware of this change. Human values exist in us because of evolution, but we may accidentally destroy them with technology, through excessive social competition, or through eugenics/dysgenics.
I don't think rules make people better. One doesn't become virtuous because we make it impossible for them to break the law; true virtue is when you have the freedom to do evil but choose not to. This sounds like mere philosophy, but values are in an entirely different category than rules. Value judgements and facts are necessarily unrelated; values cannot be derived, deduced or otherwise calculated - they're arbitrary and axiomatic. In fact, AGIs cannot have values, they can only act as if they do.
the punishment could be stronger
I probably did not explain myself well enough. People get away with bad things because there are loopholes in the law - actions which don't get punished because they're technically okay. You cannot cover all attack vectors, because you cannot calculate all of them.
If you did manage to find the, say, 20 million different attack vectors of a system, then you'd need to defend against 20 million actions. But perhaps 19 million of these actions are already done by perfectly innocent people, for perfectly innocent reasons, and if you start going after those who exploit these vectors, then you will also start harming innocent people who don't even know of these attack vectors. (Example: in some states, collecting rainwater is illegal.)
Innocent behaviour and malicious behaviour overlap, with no easy way to discern the two. You will then either have to punish innocent people or leave the attack vector open. If you leave open too many attack vectors, exploitation will become the norm and degrade society entirely. If you keep closing the attack vectors, then human freedom will tend towards zero over time, which also means that mutually beneficial actions between individuals will tend towards zero.
The nature of exploitation and the ratio of bad states to good states make it impossible for a good future to exist in a highly rational society. This is because rationality leads to Moloch. The reason not all of human history has been terrible is that good taste prunes Molochian elements by assigning them a lower value, or directly prevents ways of thinking which lead to the discovery of such strategies in the first place. Laws and ethics are insufficient because the attack/defense asymmetry cannot be overcome. There's no difference between felling the rainforest, scamming old people, or using research to improve your dating profile. That some people will disagree with the latter is my argument that this community values "optimality" in a Molochian manner.
Destruction is so much easier than creation that it cannot be defended against, which means that the set of possible worlds is constrained by its agents. Many destructive actions allow the agent to extract some value for itself in the process. If the agents lack good taste, then there exists no set of laws or principles which can save them.
Irrational agents can build worlds that rational agents cannot, and some of these irrational worlds are superior from a human perspective (for the average human - not necessarily the reader who has spent years dismantling their natural tendencies).
I think the problem with Moloch is one of one-pointedness, similar to metas in competitive videogames. If everyone has their own goals and styles, then many different things are optimized for, and everyone can get what they personally find to be valuable. A sort of bio-diversity of values.
When, however, everyone starts aiming for the same thing, and collectively agreeing that only said thing has value (even at the cost of personal preferences) - then all choices collapse into a single path which must be taken. This is Moloch. A classic optimization target which culture warns against optimizing for at the cost of everything else is money. An even greater danger is that a super-structure is created, and that instead of serving the individuals in it, it grows at the expense of the individuals. This is true for "the system", but I think it's a very general Molochian pattern.
Strong optimization towards a metric quickly results in people gaming said metric, and Goodhart's law kicks in. Furthermore, "selling out" principles and good taste, and otherwise paying a high price in order to achieve one's goals, stops being frowned upon and instead becomes the expected behaviour (example: lying in job interviews is now the norm, as is studying things which might not interest you).
But I take it you're referring to the link I shared rather than LW's common conception of Moloch. Consciousness and qualia emerged in a materialistic universe, and by the Darwinian tautology, there must have been an advantage to these qualities. The illusion of coherence is the primary goal of the brain, which seeks to tame its environment. I don't know how or why this happened, and I think that humans will dull their own humanity in the future to avoid the suffering of lacking agency (SSRIs and stimulants are the first step), such that the human state is a sort of island of stability. I don't have any good answers on this topic, just some guesses and insights:
1: The micro dynamics of humanity (the behaviour of individual people) are different from the macro dynamics of society, and Moloch emerges as the number of people n tends upwards. Many ideal things are possible at low n almost for free (even communism works at low n!), and at high n we need laws, rules, regulations, customs, hierarchical structures of stabilizing agents, etc. - and even then our systems are strained. There seems to be a law similar to the square-cube law which naturally limits the size things can have (the solution I propose to this is decentralization). A toy sketch of this scaling intuition follows after this list.
2: Metrics can "eat" their own purpose, and creations can eat their own creators. If we created money in order to get better lives, this purpose can be corrupted so that we degrade our lives in order to get more money. Morality is another example of something which was meant to benefit us but now hangs as a sword above our heads. AGI is trivially dangerous because it has agency, but it seems that our own creations can harm us even if they have no agency whatsoever (or maybe agency can emerge? Similar to how ideas gain life memetically).
3: Perhaps there can exist no good optimization metrics (which is why we can't think of an optimization metric which won't destroy humanity when taken far enough). Optimization might just be collapsing many-dimensional structures into low-dimensional structures (meaning that all gains are made at an expense, a law of conservation). Humans mostly care about meeting needs, so we minimize thirst and hunger, rather than maximizing water and food intake. This seems like a healthier way to prioritize behaviour. "Wanting more and more" seems more like a pathology than natural behaviour - one seeks the wrong thing because they don't understand their own needs (e.g. attempting to replace the need for human connection with porn), and the dangers of pathology used to be limited because reality gatekept most rewards behind healthy behaviour. I don't think it's certain that optimality/optimization/self-replication/cancer-like-growth/utility are good-in-themselves like we assume. They're merely processes which destroy everything else before destroying themselves, at least when they're taken to extremes. Perhaps the lesson is that life ceases when anything is taken to the extreme (a sort of dimensional collapse), which is why Reversed Stupidity Is Not Intelligence even here.
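To make point 1's scaling intuition concrete, here's a toy sketch (the per-person capacity of 150 and the group sizes are made-up numbers for illustration, not a claim about real groups):

```python
# Toy model (illustrative numbers only): informal coordination scales badly with n.
# Assume each person can meaningfully maintain ~150 relationships, while the number
# of pairwise relationships in the group grows roughly quadratically with n.
def informal_coordination_holds(n, capacity_per_person=150):
    pairwise_links = n * (n - 1) // 2         # relationships that would need tending
    total_capacity = n * capacity_per_person  # what unstructured, informal effort can cover
    return pairwise_links <= total_capacity

for n in (10, 100, 300, 1000, 10000):
    print(n, informal_coordination_holds(n))
# 10 True, 100 True, 300 True, 1000 False, 10000 False:
# past a few hundred people the links outgrow the capacity, and you need laws,
# hierarchies and other structure - a social analogue of the square-cube law.
```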
This problem can also be modeled as the battle against pure replicators. What Nick Land calls the shredding of all values is the tendency towards pure replicators (ones which do not value consciousness, valence, and experience). This seems similar to the religious battle against materialism.
Bluntness outcompetes social grace, immorality outcompetes morality, rationality outcompetes emotion, quantity outcompetes quality (the problem of 'slop'), Capitalism outcompetes Buddhism and Taoism, intellectualism outcompetes spiritualism and religion, etc.
The retrocausality reminds me of Roko's basilisk. I think that self-fulfilling prophecies exist. I originally didn't want to share this idea on LW, but if Nick Land beat me to it, then I might as well. I think John Wheeler was correct to assume that reality has a reflexive component (I don't think he used this word, but I'm going to). We're part of reality, and we navigate it using our model of reality, so our model is part of the reality that we're modeling. This means that the future is affected by our model of the future. This might be why placebo is so strong, and why belief is strong in general.
While I think Christopher Langan is a fraud, I believe that his model of reality has this reflexive axiom. If he's at least good at math, then he probably added this axiom because of its interesting implications (it sort of unifies competing models of reality - for instance, the idea of manifestation which is gaining popularity online).
Thank you.
Some people do seek beauty. Beauty has a similar effect to cuteness; people who look good are generally treated better. People probably prefer traits which "feel like them", and traits which they have a natural advantage at. The goal is to bring out as many real aspects of yourself as you can, and to make them as appealing as possible. Being forced to roleplay as something you're not is painful, and losing yourself in the process of fitting into a group will make you feel empty. Society is generally correct about this problem, but I think that artistic skill is sufficient to solve it.
I think self-worth is a factor, as you say, but I expect most people to have a hard time accepting themselves unless they can find a community which accepts them.
Finally, yes, suffering can push one towards either extreme. Fetishism also has this dual component - somebody who was abused might become a masochist, but another possibility is that they will search for a partner who is extremely gentle. It depends on which side wins the battle, so to speak.
Successful reinforcement learning requires being around people with better taste than yourself, or consuming material made by people with better taste. Sometimes I worry that individuals with good taste might instead be harmed by their environment (I'm friends with a vtuber; I know that her chat will have inappropriate comments, and I know that sexual topics will be rewarded with more engagement). In an abstract sense, I think people want to increase their value, and that graceful behaviour is behaviour which protects value (and treats things as if they have value in order to reinforce the illusion that they have value - the polar opposite of vulgarity/blasphemy/profanity).
Good and evil are naive concepts which break down once you start thinking about them and questioning them. Moral relativism is not one of many valid views, it's a logical conclusion.
The post criticizes how every age believes that they've figured out what's good, even though they're clearly flawed from the perspective of other ages. But the same thing is true when moralizers decide that "X is obviously bad and we all agree" because X feels bad, despite a complete lack of effort to challenge this belief. Morality is like religion in that it inhibits thought, and I think they're both cultural immune systems against various issues. We shouldn't do away with morality, but morality is too naive, and the road to hell is paved with good intentions.
Morality is mostly poor assumptions like "X is bad", and the amount of effort which goes into the evaluation usually amounts to "yep, X makes me feel bad, case closed". If discrimination is bad, we'll have to do away with exams and driver's licenses. I think we need to look at the second- or third-order effects of anything in order to even begin judging whether it's good or bad. You cannot simply stop at the first step and not feel responsible because your life choices only lead to death further down the chain of cause and effect (e.g. veganism also requires the death of animals, just less directly).
To be brief, there are no good or bad things that one ought to maximize or minimize; there are only trade-offs to make and balances to find. Nothing is purely good/virtuous or bad/evil; these terms cannot be decoupled from context.
But it's true that systems cannot properly evaluate themselves from the inside. It's only when you have an external reference point that you can do so. In 100 years, we can look back at 2025, and we may discover that we deem our current society to have had moral catastrophes. But there's no one true reference frame.